1
Subaar C, Addai FT, Addison ECK, Christos O, Adom J, Owusu-Mensah M, Appiah-Agyei N, Abbey S. Investigating the detection of breast cancer with deep transfer learning using ResNet18 and ResNet34. Biomed Phys Eng Express 2024; 10:035029. [PMID: 38599202 DOI: 10.1088/2057-1976/ad3cdf]
Abstract
Many underdeveloped nations, particularly in Africa, struggle with deadly cancer-related diseases. Particularly in women, the incidence of breast cancer is rising daily because of ignorance and delayed diagnosis. Only by correctly identifying and diagnosing cancer in its very early stages of development can it be treated effectively. The classification of cancer can be accelerated and automated with the aid of computer-aided diagnosis and medical image analysis techniques. This research applies transfer learning from the Residual Network 18 (ResNet18) and Residual Network 34 (ResNet34) architectures to detect breast cancer. The study examined how breast cancer can be identified in breast mammography images using transfer learning from ResNet18 and ResNet34, and developed a demo app for radiologists using the trained model with the best validation accuracy. A dataset of 1,200 breast x-ray mammography images from the National Radiological Society's (NRS) archives was employed in the study. The dataset was categorised as implant cancer negative, implant cancer positive, cancer negative and cancer positive in order to increase the consistency of x-ray mammography image classification and produce better features. For the multi-class classification of the images, the study achieved an average validation accuracy on binary classification of benign or malignant cancer cases of 86.7% for ResNet34 and 92% for ResNet18. A prototype web application showcasing ResNet18's performance has been created. The acquired results show how transfer learning can improve the accuracy of breast cancer detection, providing invaluable assistance to medical professionals, particularly in an African scenario.
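The transfer-learning recipe this abstract describes — reuse a pretrained backbone and train only a new classification head for the four mammography categories — can be sketched with a toy stand-in. Everything below is synthetic: a fixed random projection plays the role of ResNet18's frozen convolutional stack, and the four classes are simulated clusters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen stand-in for a pretrained backbone (ResNet18's role in the paper):
# a fixed random projection followed by ReLU. Its weights are never updated.
W_backbone = rng.normal(size=(64, 32))

def backbone(x):
    return np.maximum(x @ W_backbone, 0.0)  # frozen feature extractor

# Synthetic 4-class data, mirroring the study's four mammography labels.
n_classes = 4
X = rng.normal(size=(200, 64)) + np.repeat(np.eye(n_classes, 64) * 3.0, 50, axis=0)
y = np.repeat(np.arange(n_classes), 50)

# Transfer learning: only the new classification head is trained.
F = backbone(X)
W_head = np.zeros((32, n_classes))
onehot = np.eye(n_classes)[y]
for _ in range(300):
    logits = F @ W_head
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)          # softmax over classes
    W_head -= 0.01 * F.T @ (p - onehot) / len(y)  # gradient step, head only

train_acc = ((F @ W_head).argmax(axis=1) == y).mean()
```

Because the backbone stays frozen, only a 32x4 weight matrix is fitted — the same reason fine-tuning a head on ResNet features needs far less data than training from scratch.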
Affiliation(s)
- Christiana Subaar
- Department of Physics, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
- Olivia Christos
- Department of Physics, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
- Joseph Adom
- Department of Physics, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
- Martin Owusu-Mensah
- Department of Physics, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
- Nelson Appiah-Agyei
- Department of Health Physics and Diagnostic Sciences, University of Nevada, Las Vegas, United States of America
- Shadrack Abbey
- Department of Physics, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
2
Zarif S, Abdulkader H, Elaraby I, Alharbi A, Elkilani WS, Pławiak P. Using hybrid pre-trained models for breast cancer detection. PLoS One 2024; 19:e0296912. [PMID: 38252633 PMCID: PMC10802945 DOI: 10.1371/journal.pone.0296912]
Abstract
Breast cancer is a prevalent and life-threatening disease that affects women globally. Early detection and access to top-notch treatment are crucial in preventing fatalities from this condition. However, manual breast histopathology image analysis is time-consuming and prone to errors. This study proposed a hybrid deep learning model (CNN+EfficientNetV2B3). The proposed approach utilizes convolutional neural networks (CNNs) with pre-trained models to identify positive invasive ductal carcinoma (IDC) and negative (non-IDC) tissue in whole slide images (WSIs), supporting pathologists in making more accurate diagnoses. The proposed model demonstrates outstanding performance, with an accuracy of 96.3%, precision of 93.4%, recall of 86.4%, F1-score of 89.7%, Matthews correlation coefficient (MCC) of 87.6%, area under the receiver operating characteristic curve (AUC-ROC) of 97.5%, and area under the precision-recall curve (AUPRC) of 96.8%, outperforming the accuracy achieved by other models. The proposed model was also tested against MobileNet+DenseNet121, MobileNetV2+EfficientNetV2B0, and other deep learning models, proving more powerful than contemporary machine learning and deep learning approaches.
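All of the metrics this abstract reports (accuracy, precision, recall, F1, MCC) derive from the binary confusion matrix; a quick reference implementation follows. The counts below are made-up illustration values, not the paper's actual IDC results.

```python
import math

# Hypothetical binary confusion-matrix counts (illustration only).
tp, fp, fn, tn = 86, 6, 14, 94

accuracy  = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)               # a.k.a. sensitivity
f1        = 2 * precision * recall / (precision + recall)
# Matthews correlation coefficient: balanced even when classes are skewed.
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
```

MCC is often preferred over accuracy for histopathology, where the IDC-positive class can be a small minority of patches.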
Affiliation(s)
- Sameh Zarif
- Department of Information Technology, Faculty of Computers and Information, Menoufia University, Shebin El-kom, Menoufia, Egypt
- Artificial Intelligence Department, Faculty of Artificial Intelligence, Egyptian Russian University, Cairo, Egypt
- Hatem Abdulkader
- Department of Information Systems, Faculty of Computers and Information, Menoufia University, Shebin El-kom, Menoufia, Egypt
- Ibrahim Elaraby
- Department of Information Systems Management, Higher Institute of Qualitative Studies, Cairo, Egypt
- Abdullah Alharbi
- Department of Computer Science, Community College, King Saud University, Riyadh, Saudi Arabia
- Wail S. Elkilani
- College of Applied Computer Science, King Saud University, Riyadh, Saudi Arabia
- Paweł Pławiak
- Department of Computer Science, Faculty of Computer Science and Telecommunications, Cracow University of Technology, Krakow, Poland
3
Abdallah N, Marion JM, Tauber C, Carlier T, Hatt M, Chauvet P. Enhancing histopathological image classification of invasive ductal carcinoma using hybrid harmonization techniques. Sci Rep 2023; 13:20014. [PMID: 37973797 PMCID: PMC10654662 DOI: 10.1038/s41598-023-46239-0]
Abstract
This study aims to develop a robust pipeline for classifying invasive ductal carcinomas and benign tumors in histopathological images, addressing variability within and between centers. We specifically tackle the challenge of detecting atypical data and variability between common clusters within the same database. Our feature engineering-based pipeline comprises a feature extraction step followed by multiple harmonization techniques to rectify intra- and inter-center batch effects resulting from image acquisition variability and diverse patient clinical characteristics. These harmonization steps facilitate the construction of more robust and efficient models. We assess the proposed pipeline's performance on two public breast cancer databases, BreakHis and IDCDB, using recall, precision, and accuracy metrics. Our pipeline outperforms recent models, achieving 90-95% accuracy in classifying benign and malignant tumors. We demonstrate the advantage of harmonization for classifying patches from different databases. Our top model scored 94.7% on IDCDB and 95.2% on BreakHis, surpassing existing feature engineering-based models (92.1% for IDCDB and 87.7% for BreakHis) and attaining performance comparable to deep learning models. The proposed feature engineering-based pipeline effectively classifies malignant and benign tumors while addressing variability within and between centers through the incorporation of various harmonization techniques. Our findings reveal that harmonizing variabilities between patches from different batches directly impacts the learning and testing performance of classification models. This pipeline has the potential to enhance breast cancer diagnosis and treatment and may be applicable to other diseases.
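The batch-effect problem this pipeline targets can be illustrated with the simplest possible harmonization step: standardizing features within each center so that location/scale acquisition differences vanish. This is only a sketch of the idea — the paper's harmonization techniques are more sophisticated — and all data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Features from two "centers" whose scanners introduce different
# offsets and scales (a toy stand-in for inter-center batch effects).
center_a = rng.normal(loc=0.0, scale=1.0, size=(100, 5))
center_b = rng.normal(loc=3.0, scale=2.0, size=(100, 5))

def harmonize(batch):
    """Location/scale harmonization: z-score each feature within a batch."""
    return (batch - batch.mean(axis=0)) / batch.std(axis=0)

ha, hb = harmonize(center_a), harmonize(center_b)
gap_before = abs(center_a.mean() - center_b.mean())  # large acquisition gap
gap_after = abs(ha.mean() - hb.mean())               # gap removed
```

After harmonization, a classifier trained on one center's patches no longer sees a systematic shift when tested on the other's.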
Affiliation(s)
- Nassib Abdallah
- LaTIM, INSERM, Université de Bretagne-Occidentale, Brest, France
- LARIS, Université d'Angers, Angers, France
- Mathieu Hatt
- LaTIM, INSERM, Université de Bretagne-Occidentale, Brest, France
4
Wang J, Quan H, Wang C, Yang G. Pyramid-based self-supervised learning for histopathological image classification. Comput Biol Med 2023; 165:107336. [PMID: 37708715 DOI: 10.1016/j.compbiomed.2023.107336]
Abstract
Large-scale labeled datasets are crucial for the success of supervised learning in medical imaging. However, annotating histopathological images is a time-consuming and labor-intensive task that requires highly trained professionals. To address this challenge, self-supervised learning (SSL) can be utilized to pre-train models on large amounts of unsupervised data and transfer the learned representations to various downstream tasks. In this study, we propose a self-supervised Pyramid-based Local Wavelet Transformer (PLWT) model for effectively extracting rich image representations. The PLWT model extracts both local and global features to pre-train on a large number of unlabeled histopathology images in a self-supervised manner. A wavelet transform is used to replace average pooling in the downsampling of the multi-head attention, achieving a significant reduction in information loss during the transmission of image features. Additionally, we introduce a Local Squeeze-and-Excitation (Local SE) module in the feedforward network, in combination with an inverted residual, to capture local image information. We evaluate PLWT's performance on three histopathological image datasets and demonstrate the impact of pre-training. Our experimental results indicate that PLWT with self-supervised learning performs highly competitively when compared with other SSL methods, and the transferability of visual representations generated by SSL on domain-relevant histopathological images exceeds that of the supervised baseline trained on ImageNet.
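The claim that wavelet downsampling loses less information than average pooling is easy to verify on a 1-D toy signal with a single-level Haar transform (a minimal sketch of the principle, not the PLWT architecture): pooling keeps only the pairwise means, while the Haar transform keeps an approximation band plus a detail band and is exactly invertible.

```python
import numpy as np

x = np.array([4.0, 2.0, 5.0, 9.0])

# Average pooling halves the length but discards within-pair detail.
pooled = x.reshape(-1, 2).mean(axis=1)        # [3.0, 7.0]; 2 and 4 are gone

# A single-level Haar transform also downsamples by 2 per band, but keeps
# both low-pass (approximation) and high-pass (detail) coefficients.
pairs = x.reshape(-1, 2)
low  = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
high = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)

# Inverse Haar: the original signal is recovered exactly from (low, high).
recon = np.empty_like(x)
recon[0::2] = (low + high) / np.sqrt(2.0)
recon[1::2] = (low - high) / np.sqrt(2.0)
```

Average pooling is equivalent to keeping only the (rescaled) low band; the high band is precisely the information pooling throws away.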
Affiliation(s)
- Junjie Wang
- Ningbo Artificial Intelligence Institute of Shanghai Jiao Tong University, Zhejiang 315000, PR China; Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, PR China
- Hao Quan
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110016, PR China
- Chengguang Wang
- Ningbo Industrial Internet Institute, Zhejiang 315000, PR China
- Genke Yang
- Ningbo Artificial Intelligence Institute of Shanghai Jiao Tong University, Zhejiang 315000, PR China; Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, PR China
5
Suresh T, Brijet Z, Subha TD. Imbalanced medical disease dataset classification using enhanced generative adversarial network. Comput Methods Biomech Biomed Engin 2023; 26:1702-1718. [PMID: 36322625 DOI: 10.1080/10255842.2022.2134729]
Abstract
In general, imbalanced datasets are a major issue in health applications. Medical data classification faces an imbalanced count of data samples, in which at least one class forms only a very small minority of the data, a situation that most machine learning algorithms handle poorly. Medical datasets are mostly imbalanced in their class labels, and when a dataset is imbalanced, existing classification algorithms typically perform badly on minority class cases. To deal with the class imbalance issue, an enhanced generative adversarial network (E-GAN) is proposed in this article. The proposed approach is the consolidation of a deep convolutional generative adversarial network and a modified convolutional neural network (DCG-MCNN). Initially, the imbalanced data is converted into balanced data in a pre-processing step. Data preprocessing comprises data cleaning, data normalization, data transformation, and data reduction using the Radius Synthetic Minority Oversampling Technique (RSMOTE) method. The DCG balances the dataset by generating extra samples for the training dataset. Based on this training dataset, medical disease classification is enhanced by the modified CNN diagnosis model. The proposed system is implemented in MATLAB. The performance analysis is carried out on the Breast Cancer Wisconsin Dataset, yielding a higher maximum geometric mean (MGM) by 8.686%, 2.931%, and 5.413%, and a higher Matthews correlation coefficient (MCC) by 9.776%, 1.841%, and 5.413%, compared to the existing methods.
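The oversampling idea behind RSMOTE can be sketched with plain SMOTE-style interpolation: synthesize new minority samples on line segments between existing minority samples. This omits RSMOTE's radius-based refinement and uses synthetic data throughout.

```python
import numpy as np

rng = np.random.default_rng(2)

# Imbalanced toy data: 50 majority vs 5 minority samples.
majority = rng.normal(0.0, 1.0, size=(50, 3))
minority = rng.normal(4.0, 1.0, size=(5, 3))

def smote_like(x_min, n_new, rng):
    """SMOTE-style oversampling: each synthetic point is a random convex
    combination of two existing minority samples (plain SMOTE, not the
    paper's radius-based RSMOTE variant)."""
    idx = rng.integers(0, len(x_min), size=(n_new, 2))
    lam = rng.random((n_new, 1))
    return x_min[idx[:, 0]] + lam * (x_min[idx[:, 1]] - x_min[idx[:, 0]])

synthetic = smote_like(minority, 45, rng)
balanced_minority = np.vstack([minority, synthetic])  # now 50 vs 50
```

Because each synthetic point interpolates real minority samples, the new points stay inside the minority region rather than duplicating existing rows, which is what lets the downstream classifier see a less skewed decision boundary.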
Affiliation(s)
- T Suresh
- Department of Electronics and Communication Engineering, R.M.K. Engineering college, Kavaraipettai, Tamil Nadu, India
- Z Brijet
- Department of Electronics and Instrumentation Engineering, Velammal Engineering College, Surapet, Chennai, Tamil Nadu, India
- T D Subha
- Department of Electronics and Communication Engineering, R.M.K. Engineering college, Kavaraipettai, Tamil Nadu, India
6
Li Y, Xu J, Wang P, Li P, Yang G, Chen R. Manifold reconstructed semi-supervised domain adaptation for histopathology images classification. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104495]
7
Ogundokun RO, Misra S, Akinrotimi AO, Ogul H. MobileNet-SVM: A Lightweight Deep Transfer Learning Model to Diagnose BCH Scans for IoMT-Based Imaging Sensors. Sensors (Basel) 2023; 23:656. [PMID: 36679455 PMCID: PMC9863875 DOI: 10.3390/s23020656]
Abstract
Many individuals worldwide pass away as a result of inadequate procedures for prompt illness identification and subsequent treatment. A valuable life can be saved, or at least extended, by the early identification of serious illnesses such as various cancers and other life-threatening conditions. The development of the Internet of Medical Things (IoMT) has made it possible for healthcare technology to offer the general public efficient medical services and make a significant contribution to patients' recoveries. By using IoMT to diagnose and examine BreakHis v1 400× breast cancer histology (BCH) scans, disorders may be quickly identified and appropriate treatment given to a patient. This can be achieved with imaging equipment capable of auto-analyzing acquired images. However, the majority of deep learning (DL)-based image classification approaches have a large number of parameters and are unsuitable for deployment in IoMT-centered imaging sensors. The goal of this study is to create a lightweight deep transfer learning (DTL) model suited for BCH scan examination that retains a good level of accuracy. This study presents a lightweight DTL-based model, "MobileNet-SVM", a hybridization of MobileNet and a Support Vector Machine (SVM), for auto-classifying BreakHis v1 400× BCH images. When tested against a real dataset of BreakHis v1 400× BCH images, the suggested technique achieved a training accuracy of 100% on the training dataset. It also obtained an accuracy of 91% and an F1-score of 91.35 on the test dataset. Considering how complicated BCH scans are, the findings are encouraging. In addition to its high degree of precision, the MobileNet-SVM model is ideal for IoMT imaging equipment. According to the simulation findings, the suggested model requires little computation time and few resources.
Affiliation(s)
- Roseline Oluwaseun Ogundokun
- Department of Multimedia Engineering, Kaunas University of Technology, 44249 Kaunas, Lithuania
- Department of Computer Science, Landmark University, Omu Aran 251103, Kwara, Nigeria
- Sanjay Misra
- Department of Computer Science and Communication, Østfold University College, 1757 Halden, Norway
- Hasan Ogul
- Department of Computer Science and Communication, Østfold University College, 1757 Halden, Norway
8
Choudhary T, Gujar S, Goswami A, Mishra V, Badal T. Deep learning-based important weights-only transfer learning approach for COVID-19 CT-scan classification. Appl Intell 2023; 53:7201-15. [PMID: 35875199 DOI: 10.1007/s10489-022-03893-7]
Abstract
COVID-19 has become a pandemic for the entire world, and it has significantly affected the world economy. The importance of early detection and treatment of the infection cannot be overstated. Traditional diagnosis techniques take more time to detect the infection. Although numerous deep learning-based automated solutions have recently been developed in this regard, the limitation of computational and battery power in resource-constrained devices makes it difficult to deploy trained models for real-time inference. In this paper, to detect the presence of COVID-19 in CT-scan images, an important weights-only transfer learning method is proposed for devices with limited run-time resources. In the proposed method, pre-trained models are made point-of-care-device friendly by pruning less important weight parameters of the model. Experiments were performed on the two popular VGG16 and ResNet34 models, and the empirical results showed that the pruned ResNet34 model achieved 95.47% accuracy, 0.9216 sensitivity, 0.9567 F-score, and 0.9942 specificity with 41.96% fewer FLOPs and 20.64% fewer weight parameters on the SARS-CoV-2 CT-scan dataset. The results of our experiments showed that the proposed method significantly reduces the run-time resource requirements of computationally intensive models and makes them ready to be utilized on point-of-care devices.
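The "important weights-only" idea — keep high-magnitude weights, zero out the rest — can be sketched as global magnitude pruning on a toy layer. This is a minimal sketch of the pruning criterion only; the paper's per-model pruning schedule and retraining are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# A toy dense layer standing in for one layer of a pretrained model.
weights = rng.normal(size=(256, 128))

def magnitude_prune(w, fraction):
    """Zero out the `fraction` of weights with the smallest magnitudes,
    keeping only the 'important' (large-magnitude) parameters."""
    threshold = np.quantile(np.abs(w), fraction)
    return np.where(np.abs(w) >= threshold, w, 0.0)

pruned = magnitude_prune(weights, 0.4)   # drop ~40% of the weights
sparsity = (pruned == 0).mean()          # fraction of zeroed parameters
```

Zeroed weights can be skipped by sparse kernels at inference time, which is where the FLOP and parameter reductions reported in the abstract come from.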
9
Baghdadi NA, Malki A, Magdy Balaha H, AbdulAzeem Y, Badawy M, Elhosseini M. Classification of breast cancer using a manta-ray foraging optimized transfer learning framework. PeerJ Comput Sci 2022; 8:e1054. [PMID: 36092017 PMCID: PMC9454783 DOI: 10.7717/peerj-cs.1054]
Abstract
Due to its high prevalence and wide dissemination, breast cancer is a particularly dangerous disease. Breast cancer survival chances can be improved by early detection and diagnosis. For medical image analyzers, diagnosing is tough, time-consuming, routine, and repetitive, and medical image analysis could be a useful method for detecting such a disease. Recently, artificial intelligence technology has been utilized to help radiologists identify breast cancer more rapidly and reliably. Convolutional neural networks, among other technologies, are promising medical image recognition and classification tools. This study proposes a framework for automatic and reliable breast cancer classification based on histological and ultrasound data. The system is built on CNNs and employs transfer learning and metaheuristic optimization. The Manta Ray Foraging Optimization (MRFO) approach is deployed to improve the framework's adaptability. Using the Breast Cancer Dataset (two classes) and the Breast Ultrasound Dataset (three classes), eight modern pre-trained CNN architectures are examined to apply the transfer learning technique. The framework uses MRFO to improve the performance of the CNN architectures by optimizing their hyperparameters. Extensive experiments have recorded performance parameters including accuracy, AUC, precision, F1-score, sensitivity, dice, recall, IoU, and cosine similarity. The proposed framework scored 97.73% accuracy on histopathological data and 99.01% on ultrasound data. The experimental results show that the proposed framework is superior to other state-of-the-art approaches in the literature.
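The hyperparameter-optimization step can be illustrated with the simplest possible metaheuristic stand-in: random search over a synthetic validation-error surface. The actual MRFO update rules (chain, cyclone, and somersault foraging) are deliberately not reproduced here; this sketch only shows the black-box "sample hyperparameters, evaluate, keep the best" loop that any such optimizer wraps.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in objective: validation error as a function of two hyperparameters
# (log10 learning rate, dropout). The optimum is (-3, 0.3) by construction;
# a real setup would train a CNN and return 1 - validation_accuracy.
def val_error(log_lr, dropout):
    return (log_lr + 3.0) ** 2 + (dropout - 0.3) ** 2

# Minimal random search over the hyperparameter space -- a deliberately
# simple stand-in for the paper's Manta Ray Foraging Optimization.
best, best_err = None, float("inf")
for _ in range(500):
    cand = (rng.uniform(-5, -1), rng.uniform(0.0, 0.6))
    err = val_error(*cand)
    if err < best_err:
        best, best_err = cand, err
```

Population-based metaheuristics like MRFO replace the blind sampling above with guided moves toward promising regions, typically reaching a good configuration in far fewer objective evaluations.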
Affiliation(s)
- Nadiah A. Baghdadi
- College of Nursing, Nursing Management and Education Department, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Amer Malki
- College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia
- Hossam Magdy Balaha
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Yousry AbdulAzeem
- Computer Engineering Department, Misr Higher Institute for Engineering and Technology, Mansoura, Egypt
- Mahmoud Badawy
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
- Mostafa Elhosseini
- College of Computer Science and Engineering, Taibah University, Yanbu, Saudi Arabia
- Computers and Control Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura, Egypt
10
Deo AJ, Sahoo A, Behera SK, Das DP. Vision-based size classification of iron ore pellets using ensembled convolutional neural network. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07473-1]
11
Saravagi D, Agrawal S, Saravagi M, Rahman MH. Diagnosis of Lumbar Spondylolisthesis Using a Pruned CNN Model. Comput Math Methods Med 2022; 2022:2722315. [PMID: 35592683 PMCID: PMC9113885 DOI: 10.1155/2022/2722315]
Abstract
Convolutional neural network (CNN) models have made tremendous progress in the medical domain in recent years. However, the application of CNN models is restricted by their huge number of redundant and unnecessary parameters. In this paper, weight and unit pruning strategies are used to reduce the complexity of a CNN model so that it can be used on small devices for the diagnosis of lumbar spondylolisthesis. Experimental results reveal that when removing 90% of the network load, the unit pruning strategy outperforms weight pruning, achieving 94.12% accuracy. Thus, only 30% (around 850,532 out of 3,955,102) and 10% (around 251,512 out of 3,955,102) of the parameters from each layer contribute to the outcome during weight and neuron pruning, respectively. The proposed pruned model achieved higher accuracy than the prior model suggested for lumbar spondylolisthesis diagnosis.
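The distinction this abstract draws — weight pruning zeroes individual parameters, unit pruning removes whole neurons — can be shown on a toy layer. Below is the unit-pruning side (columns are treated as units and ranked by L2 norm); everything is synthetic and independent of the paper's CNN.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy dense layer: 64 input features -> 32 units (one column per unit).
w = rng.normal(size=(64, 32))

def unit_prune(w, fraction):
    """Unit pruning: drop the whole units (columns) with the smallest
    L2 norms, rather than zeroing individual weights."""
    norms = np.linalg.norm(w, axis=0)
    n_drop = int(fraction * w.shape[1])
    keep = np.sort(np.argsort(norms)[n_drop:])  # surviving unit indices
    return w[:, keep]

pruned = unit_prune(w, 0.5)   # remove half of the units
```

Unlike weight pruning, unit pruning shrinks the layer's actual dimensions, so it reduces dense matrix sizes directly and needs no sparse-kernel support on the target device.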
Affiliation(s)
- Deepika Saravagi
- Department of Computer Application, SAGE University, Indore 452012, India
- Manisha Saravagi
- Physiotherapy Department, Railway Hospital, Kota, Rajasthan 324002, India
- Md Habibur Rahman
- Department of Computer Science and Engineering, Islamic University, Kushtia-7003, Bangladesh
12
Macias E, Lopez Vicario J, Serrano J, Ibeas J, Morell A. Transfer Learning Improving Predictive Mortality Models for Patients in End-Stage Renal Disease. Electronics 2022; 11:1447. [DOI: 10.3390/electronics11091447]
Abstract
Deep learning is becoming a fundamental piece in the paradigm shift from evidence-based to data-based medicine. However, its learning capacity is rarely exploited when working with small data sets. Through transfer learning (TL), information from a source domain is transferred to a target domain to enhance a learning task in that domain. The proposed TL mechanisms are based on sample and feature space augmentation. Deep autoencoders extract complex representations for the data in the TL approach, and their latent representations, the so-called codes, are handled to transfer information among domains. The transfer of samples is carried out by computing a latent space mapping matrix that links codes from both domains for later reconstruction. The feature space augmentation is based on computing the average of the most similar codes from one domain; such an average augments the features in the target domain. The proposed framework is evaluated on the prediction of mortality in patients with end-stage renal disease, transferring information related to the mortality of patients with acute kidney injury from the massive MIMIC-III database. Compared to other TL mechanisms, the proposed approach improves previous mortality predictive models by 6–11%. The integration of TL approaches into learning tasks in pathologies with data volume issues could encourage the use of data-based medicine in a clinical setting.
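The latent space mapping matrix described here — a linear link between source-domain and target-domain autoencoder codes — can be sketched with least squares on synthetic codes. Real codes would come from trained autoencoders; here they are random vectors related by a known linear map plus noise.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy autoencoder "codes" from a source and a target domain: the target
# codes are a noisy linear transform of the source codes by construction.
codes_src = rng.normal(size=(100, 8))
true_map = rng.normal(size=(8, 8))
codes_tgt = codes_src @ true_map + 0.01 * rng.normal(size=(100, 8))

# Latent space mapping matrix linking the two domains, fitted by least
# squares. Source codes mapped through M land in the target latent space,
# where the target decoder can reconstruct them as transferred samples.
M, *_ = np.linalg.lstsq(codes_src, codes_tgt, rcond=None)
mapped = codes_src @ M
err = np.abs(mapped - codes_tgt).mean()   # small residual = good mapping
```

Because the mapping is fitted in the low-dimensional code space rather than on raw records, only a modest number of paired codes is needed to link the domains.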
13
Chowdhury D, Das A, Dey A, Sarkar S, Dwivedi AD, Rao Mukkamala R, Murmu L. ABCanDroid: A Cloud Integrated Android App for Noninvasive Early Breast Cancer Detection Using Transfer Learning. Sensors (Basel) 2022; 22:832. [PMID: 35161576 DOI: 10.3390/s22030832]
Abstract
Many patients affected by breast cancer die every year because of improper diagnosis and treatment. In recent years, applications of deep learning algorithms in the field of breast cancer detection have proved to be quite efficient. However, such techniques still have considerable scope for improvement, and transfer learning can make them more efficient and yield impressive results. In the proposed approach, a Convolutional Neural Network (CNN) is complemented with transfer learning to increase the efficiency and accuracy of early breast cancer detection for better diagnosis. The approach uses a pre-trained model with already-learned weights rather than building the complete model from scratch. This paper mainly focuses on a ResNet101-based transfer learning model pre-trained on the ImageNet dataset. The proposed framework provided us with an accuracy of 99.58%. Extensive experiments and tuning of hyperparameters have been performed to acquire the best possible results in terms of classification. The proposed framework aims to be an efficient tool for doctors and society as a whole, helping users in the early detection of breast cancer.
14
Assari Z, Mahloojifar A, Ahmadinejad N. A bimodal BI-RADS-guided GoogLeNet-based CAD system for solid breast masses discrimination using transfer learning. Comput Biol Med 2021; 142:105160. [PMID: 34995955 DOI: 10.1016/j.compbiomed.2021.105160]
Abstract
Numerous solid breast masses require sophisticated analysis to establish a differential diagnosis. Consequently, complementary modalities such as ultrasound imaging are frequently required to further evaluate mammographically detected masses. Radiologists mentally integrate complementary information from images acquired of the same patient to make a more conclusive and effective diagnosis; however, this has always been a challenging task. This paper details a novel bimodal GoogLeNet-based CAD system that addresses the challenges associated with combining information from mammographic and sonographic images for solid breast mass classification. In the proposed framework, each modality is initially trained using a distinct monomodal model. Then, using the high-level feature maps extracted from both modalities, a bimodal model is trained. To fully exploit the BI-RADS descriptors, different image content representations of each mass are obtained and used as input images. In addition, a two-step transfer learning strategy has been proposed using an ImageNet pre-trained GoogLeNet model, two publicly available databases, and our collected dataset. Our bimodal model achieves the best recognition results, with sensitivity, specificity, F1-score, Matthews correlation coefficient, area under the receiver operating characteristic curve, and accuracy of 90.91%, 89.87%, 90.32%, 80.78%, 95.82%, and 90.38%, respectively. The promising results indicate that the proposed CAD system can facilitate bimodal suspicious mass analysis and thus contribute significantly to improving breast cancer diagnostic performance.
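The bimodal step — combining high-level feature maps from the two monomodal branches before the joint classifier — reduces, in its simplest form, to pooling each branch's feature maps and concatenating the resulting vectors. Shapes below are illustrative placeholders, not GoogLeNet's actual dimensions, and the feature maps are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(7)

# High-level feature maps from two monomodal branches (toy stand-ins for
# the mammography and ultrasound models): batch x H x W x channels,
# reduced by global average pooling to one vector per patient.
feat_mammo = rng.normal(size=(16, 7, 7, 128)).mean(axis=(1, 2))  # (16, 128)
feat_ultra = rng.normal(size=(16, 7, 7, 128)).mean(axis=(1, 2))  # (16, 128)

# Bimodal fusion: concatenate the per-patient vectors; the fused
# representation then feeds the jointly trained classification head.
fused = np.concatenate([feat_mammo, feat_ultra], axis=1)
```

Concatenation lets the joint head weight evidence from either modality independently, mimicking how a radiologist can let one modality override an ambiguous finding in the other.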
Affiliation(s)
- Zahra Assari
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
- Ali Mahloojifar
- Department of Biomedical Engineering, Faculty of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
- Nasrin Ahmadinejad
- Medical Imaging Center, Cancer Research Institute, Imam Khomeini Hospital Advanced Diagnostic and Interventional Radiology Research Center (ADIR), Tehran University of Medical Sciences (TUMS), Tehran, Iran
15
Khairi SSM, Bakar MAA, Alias MA, Bakar SA, Liong CY, Rosli N, Farid M. Deep Learning on Histopathology Images for Breast Cancer Classification: A Bibliometric Analysis. Healthcare (Basel) 2021; 10:10. [PMID: 35052174 DOI: 10.3390/healthcare10010010]
Abstract
Medical imaging is gaining significant attention in healthcare, including for breast cancer, the most common cause of cancer-related death among women worldwide. Currently, histopathology image analysis is the clinical gold standard in cancer diagnosis. However, the manual process of microscopic examination involves laborious work and can be misleading due to human error. Therefore, this study explored the research status and development trends of deep learning for breast cancer image classification using bibliometric analysis. Relevant literature was obtained from the Scopus database between 2014 and 2021, and the VOSviewer and Bibliometrix tools were used for analysis through various visualization forms. This study is concerned with annual publication trends and co-authorship networks among countries, authors, and scientific journals. The co-occurrence network of the authors' keywords was analyzed for potential future directions of the field. Authors started to contribute to publications in 2016, and the research domain has maintained its growth rate since. The United States and China have strong research collaboration strengths. Only a few studies use bibliometric analysis in this research area. This study provides a recent review of this fast-growing field to highlight its status and trends using scientific visualization. It is hoped that the findings will assist researchers in identifying and exploring potential emerging areas in the related field.