1
Chelloug SA, Ba Mahel AS, Alnashwan R, Rafiq A, Ali Muthanna MS, Aziz A. Enhanced breast cancer diagnosis using modified InceptionNet-V3: a deep learning approach for ultrasound image classification. Front Physiol 2025; 16:1558001. PMID: 40330252; PMCID: PMC12052540; DOI: 10.3389/fphys.2025.1558001.
Abstract
Introduction: Breast cancer (BC) is a malignant neoplasm that originates in the cellular structures of the mammary gland and remains one of the most prevalent cancers among women, ranking second in cancer-related mortality after lung cancer. Early and accurate diagnosis is crucial because of the heterogeneous nature of breast cancer and its rapid progression, yet manual detection and classification are time-consuming and error-prone, necessitating automated and reliable diagnostic approaches. Methods: Recent advances in deep learning have significantly improved medical image analysis, demonstrating strong predictive performance in breast cancer detection from ultrasound images. However, training deep learning models from scratch is computationally expensive and data-intensive; transfer learning, which leverages models pre-trained on large-scale datasets, offers an effective way to mitigate these challenges. This study investigates and compares multiple deep learning models for breast cancer classification using transfer learning. The evaluated architectures include a modified InceptionV3, GoogLeNet, ShuffleNet, AlexNet, VGG-16, and SqueezeNet. In addition, a deep neural network that integrates features from the modified InceptionV3 is proposed to further enhance classification performance. Results: The modified InceptionV3 model achieves the highest classification accuracy of 99.10%, with a recall of 98.90%, precision of 99.00%, and an F1-score of 98.80%, outperforming all other evaluated models on the given datasets. Discussion: These findings underscore the potential of the proposed approach to enhance diagnostic precision and confirm the superiority of the modified InceptionV3 model for breast cancer classification.
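The abstract gives no implementation details, but the transfer-learning pattern it describes can be sketched briefly. The PyTorch/torchvision snippet below is a hypothetical illustration only: the three-class head, frozen backbone, auxiliary-loss weight, and optimizer settings are assumptions, not the authors' modified-InceptionV3 configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_modified_inceptionv3(num_classes: int = 3, freeze_backbone: bool = True) -> nn.Module:
    """Load an ImageNet-pretrained InceptionV3 and swap both classifier heads."""
    weights = models.Inception_V3_Weights.IMAGENET1K_V1
    model = models.inception_v3(weights=weights)           # aux_logits=True by default
    if freeze_backbone:
        for p in model.parameters():
            p.requires_grad = False
    # Replace the main and auxiliary classifiers for the new task
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)
    return model

model = build_modified_inceptionv3()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)

x = torch.randn(4, 3, 299, 299)                             # InceptionV3 expects 299x299 inputs
y = torch.randint(0, 3, (4,))                               # placeholder labels
model.train()
logits, aux_logits = model(x)                               # training mode returns both heads
loss = criterion(logits, y) + 0.4 * criterion(aux_logits, y)
loss.backward()
optimizer.step()
```

In practice the backbone is often partially unfrozen once the new head converges; the paper's specific architectural modifications are not reproduced here.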
Affiliation(s)
- Samia Allaoua Chelloug: Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Abduljabbar S. Ba Mahel: School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Rana Alnashwan: Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Ahsan Rafiq: Institute of Information Technology and Information Security, Southern Federal University, Taganrog, Russia
- Mohammed Saleh Ali Muthanna: Department of International Business Management, Tashkent State University of Economics, Tashkent, Uzbekistan
- Ahmed Aziz: Department of Computer Science, Faculty of Computer and Artificial Intelligence, Benha University, Benha, Egypt; Engineering School, Central Asian University, Tashkent, Uzbekistan
2
Pan L, Tang M, Chen X, Du Z, Huang D, Yang M, Chen Y. M2UNet: Multi-Scale Feature Acquisition and Multi-Input Edge Supplement Based on UNet for Efficient Segmentation of Breast Tumor in Ultrasound Images. Diagnostics (Basel) 2025; 15:944. PMID: 40310342; PMCID: PMC12025914; DOI: 10.3390/diagnostics15080944.
Abstract
Background/Objectives: The morphological characteristics of breast tumors play a crucial role in the preliminary diagnosis of breast cancer. However, malignant tumors often exhibit rough, irregular edges and unclear boundaries in ultrasound images, and variations in tumor size, location, and shape further complicate accurate segmentation of breast tumors from ultrasound images. Methods: To address these difficulties, this paper introduces a breast ultrasound tumor segmentation network comprising a multi-scale feature acquisition (MFA) module and a multi-input edge supplement (MES) module. The MFA module combines dilated convolutions of various sizes in a serial-parallel fashion to capture tumor features at diverse scales. The MES module then enhances the output of each decoder layer by supplementing edge information, improving the integrity of tumor boundaries and yielding more refined segmentation results. Results: The mean Dice (mDice), pixel accuracy (PA), intersection over union (IoU), recall, and Hausdorff distance (HD) of this method were 79.43%, 96.84%, 83.00%, 87.17%, and 19.71 mm, respectively, on the publicly available breast ultrasound image (BUSI) dataset, and 90.45%, 97.55%, 90.08%, 93.72%, and 11.02 mm, respectively, on the Fujian Cancer Hospital dataset. On the BUSI dataset, compared with the original UNet, the Dice score for malignant tumors increased by 14.59% and the HD decreased by 17.13 mm. Conclusions: The proposed method accurately segments breast tumors in ultrasound images and provides valuable edge information for subsequent diagnosis of breast cancer; the experimental results show substantial gains in accuracy.
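As a rough illustration of multi-scale feature acquisition with dilated convolutions, the PyTorch block below uses parallel dilated branches fused by a 1x1 convolution. The module name, channel counts, and dilation rates are assumptions, and the sketch omits the serial part of the paper's serial-parallel arrangement.

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, fused channel-wise."""
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [b(x) for b in self.branches]        # same spatial size, growing receptive fields
        return self.fuse(torch.cat(feats, dim=1))    # fuse the scales into one feature map

block = MultiScaleDilatedBlock(64, 64)
print(block(torch.randn(1, 64, 128, 128)).shape)     # torch.Size([1, 64, 128, 128])
```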
Affiliation(s)
- Lin Pan: College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Mengshi Tang: College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Xin Chen: College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Zhongshi Du: Department of Ultrasound, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou 350014, China
- Danfeng Huang: Department of Ultrasound, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou 350014, China
- Mingjing Yang: College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
- Yijie Chen: Department of Ultrasound, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou 350014, China
3
Bala PM, Palani U. Innovative breast cancer detection using a segmentation-guided ensemble classification framework. Biomed Eng Lett 2025; 15:179-191. PMID: 39781047; PMCID: PMC11704121; DOI: 10.1007/s13534-024-00435-7.
Abstract
Breast cancer (BC) remains a significant global health issue, necessitating innovative methodologies to improve early detection and diagnosis. Despite the existence of intelligent deep learning models, their efficacy is often limited because small masses are overlooked, leading to false positive and false negative outcomes. This research introduces a novel segmentation-guided classification model developed to increase BC detection accuracy. The model unfolds in two phases that together form a comprehensive BC diagnostic pipeline. In Phase I, an Attention U-Net model is used for BC segmentation: the encoder extracts hierarchical features, while the decoder, supported by attention mechanisms, refines the segmentation and focuses on suspicious regions. In Phase II, a novel ensemble approach is introduced for BC classification, involving various feature extraction methods, base classifiers, and a meta-classifier. An ensemble of base classifiers, including a support vector machine, decision trees, k-nearest neighbors, and an artificial neural network, captures diverse patterns within these features, and a Random Forest meta-classifier amalgamates their outputs, leveraging their collective strengths. The integrated model accurately identifies malignant, benign, and normal breast tissue classes, and the precise region-of-interest analysis from the segmentation phase significantly boosts the classification performance of the ensemble meta-classifier. The model achieved an overall classification accuracy of 99.57% and a segmentation F1-score of 95%, illustrating its discriminative power in detecting malignant, benign, and normal cases within the ultrasound image dataset. This research contributes to reducing breast tumor morbidity and mortality by facilitating early detection and timely intervention, ultimately supporting better patient outcomes. Supplementary Information: The online version contains supplementary material available at 10.1007/s13534-024-00435-7.
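The classification stage described above follows a standard stacking pattern. The scikit-learn sketch below is a generic illustration with synthetic placeholder features, not the authors' implementation; the base learners and the Random Forest meta-classifier mirror the abstract, while all hyperparameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic features standing in for descriptors extracted from segmented ROIs
X, y = make_classification(n_samples=600, n_features=30, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

base_learners = [
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ("tree", DecisionTreeClassifier(max_depth=8)),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=7))),
    ("ann", make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))),
]
# Random Forest meta-classifier combines the base learners' predicted probabilities
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=RandomForestClassifier(n_estimators=200),
                           stack_method="predict_proba", cv=5)
stack.fit(X_tr, y_tr)
print("held-out accuracy:", stack.score(X_te, y_te))
```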
Affiliation(s)
- P. Manju Bala: Computer Science and Engineering, IFET College of Engineering, Villupuram, Tamil Nadu, India
- U. Palani: Electronics and Communication Engineering, IFET College of Engineering, Villupuram, Tamil Nadu, India
4
Joshi RC, Srivastava P, Mishra R, Burget R, Dutta MK. Biomarker profiling and integrating heterogeneous models for enhanced multi-grade breast cancer prognostication. Comput Methods Programs Biomed 2024; 255:108349. PMID: 39096573; DOI: 10.1016/j.cmpb.2024.108349.
Abstract
Background: Breast cancer remains a leading cause of female mortality worldwide, exacerbated by limited awareness, inadequate screening resources, and limited treatment options. Accurate and early diagnosis is crucial for improving survival rates and effective treatment. Objectives: This study aims to develop an innovative artificial intelligence (AI) based model for predicting breast cancer and its histopathological grades by integrating multiple biomarkers and subject age, thereby enhancing diagnostic accuracy and prognostication. Methods: A novel ensemble-based machine learning (ML) framework is introduced that integrates three distinct biomarkers, beta-human chorionic gonadotropin (β-hCG), Programmed Cell Death Ligand 1 (PD-L1), and alpha-fetoprotein (AFP), alongside subject age. Hyperparameter optimization was performed using the Particle Swarm Optimization (PSO) algorithm, and minority oversampling techniques were employed to mitigate overfitting. The model's performance was validated through rigorous five-fold cross-validation. Results: The proposed model demonstrated superior performance, achieving 97.93% accuracy and a 98.06% F1-score on meticulously labeled test data across diverse age groups. Comparative analysis showed that the model outperforms state-of-the-art approaches, highlighting its robustness and generalizability. Conclusion: By providing a comprehensive analysis of multiple biomarkers and effectively predicting tumor grades, this study offers a significant advancement in breast cancer screening, particularly in regions with limited medical resources. The proposed framework has the potential to reduce breast cancer mortality rates and improve early intervention and personalized treatment strategies.
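A hedged sketch of how an imbalance-aware, cross-validated tabular pipeline of this kind can be assembled follows. It uses SMOTE from imbalanced-learn inside a pipeline so oversampling touches only the training folds; the gradient-boosting classifier stands in for the paper's PSO-tuned ensemble, and the data are synthetic placeholders for the β-hCG, PD-L1, AFP, and age features.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline   # applies SMOTE only when fitting on training folds
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
# Hypothetical feature matrix: columns for beta-hCG, PD-L1, AFP, age; imbalanced grade labels
X = rng.normal(size=(500, 4))
y = rng.choice([0, 1, 2], size=500, p=[0.7, 0.2, 0.1])

pipe = Pipeline([
    ("oversample", SMOTE(random_state=0)),
    ("clf", GradientBoostingClassifier(random_state=0)),   # stand-in for the PSO-tuned ensemble
])
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="f1_macro")
print("5-fold macro-F1:", scores.mean())
```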
Affiliation(s)
- Rakesh Chandra Joshi: Amity Centre for Artificial Intelligence, Amity University, Noida, Uttar Pradesh, India; Centre for Advanced Studies, Dr. A.P.J. Abdul Kalam Technical University, Lucknow, Uttar Pradesh, India
- Pallavi Srivastava: Department of Biotechnology, Noida Institute of Engineering & Technology, Greater Noida, Uttar Pradesh, India
- Rashmi Mishra: Department of Biotechnology, Noida Institute of Engineering & Technology, Greater Noida, Uttar Pradesh, India
- Radim Burget: Department of Telecommunications, Faculty of Electrical Engineering and Communication, Brno University of Technology, Brno, Czech Republic
- Malay Kishore Dutta: Amity Centre for Artificial Intelligence, Amity University, Noida, Uttar Pradesh, India
5
Kaur J, Kaur P. A systematic literature analysis of multi-organ cancer diagnosis using deep learning techniques. Comput Biol Med 2024; 179:108910. PMID: 39032244; DOI: 10.1016/j.compbiomed.2024.108910.
Abstract
Cancer is among the most lethal diseases identified worldwide, and its mortality rate continues to rise each year, driving rapid progress in diagnostic technologies. Manual segmentation and classification across large sets of data modalities is a challenging task, so there is a pressing need to develop computer-assisted diagnostic systems for early cancer identification. This article offers a systematic review of deep learning approaches that use various image modalities to detect multi-organ cancers, covering work from 2012 to 2023 and emphasizing five of the most prevalent tumor sites: breast, brain, lung, skin, and liver. An extensive review was carried out by collecting research and conference articles and book chapters from reputed international databases (Springer Link, IEEE Xplore, Science Direct, PubMed, and Wiley) that met the criteria for quality evaluation. The review summarizes the convolutional neural network architectures and datasets used for identifying and classifying diverse categories of cancer, and it provides a comprehensive picture of ensemble deep learning models that have achieved strong results in classifying images as cancerous or healthy. The paper gives researchers in medical imaging a broad understanding of which deep learning techniques perform best on which types of dataset, of feature extraction, and of the main challenges and their anticipated solutions for complex problems. Finally, challenges and open issues relevant to this health emergency are discussed.
Affiliation(s)
- Jaspreet Kaur: Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
- Prabhpreet Kaur: Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
6
Jiménez-Gaona Y, Álvarez MJR, Castillo-Malla D, García-Jaen S, Carrión-Figueroa D, Corral-Domínguez P, Lakshminarayanan V. BraNet: a mobile application for breast image classification based on deep learning algorithms. Med Biol Eng Comput 2024; 62:2737-2756. PMID: 38693328; PMCID: PMC11330402; DOI: 10.1007/s11517-024-03084-1.
Abstract
Mobile health apps are widely used for breast cancer detection with artificial intelligence algorithms, providing radiologists with second opinions and reducing false diagnoses. This study aims to develop an open-source mobile app named "BraNet" for 2D breast imaging segmentation and classification using deep learning algorithms. In the offline phase, an SNGAN model was first trained for synthetic image generation, and these images were then used to pre-train SAM and ResNet18 segmentation and classification models. In the online phase, the BraNet app was developed with the React Native framework, offering a modular deep-learning pipeline for classifying digital mammography (DM) and ultrasound (US) breast images. The application operates on a client-server architecture and was implemented in Python for iOS and Android devices. Two diagnostic radiologists then completed a reading test of 290 original ROI images, assigning the perceived breast tissue type; reader agreement was assessed using the kappa coefficient. The BraNet app exhibited higher accuracy for benign and malignant US images (94.7%/93.6%) than for DM during training I (80.9%/76.9%) and training II (73.7%/72.3%). This contrasts with the radiologists' accuracy of 29% for DM and 70% for US for both readers, who likewise classified US ROIs more accurately than DM images. The kappa values indicate fair agreement (0.3) for DM images and moderate agreement (0.4) for US images for both readers. These results show that not only the amount of training data matters for deep learning algorithms, but also the variety of abnormalities, especially in the mammography data, where several BI-RADS categories are present (microcalcifications, nodules, masses, asymmetry, and dense breasts) and can affect model accuracy.
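Reader agreement of the kind reported above is conventionally quantified with Cohen's kappa. A minimal scikit-learn sketch follows; the reader labels are hypothetical and are not the study's reading data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical reader labels for a handful of ROIs (0 = benign, 1 = malignant)
reader_1 = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
reader_2 = [0, 1, 0, 0, 1, 1, 0, 1, 1, 0]

kappa = cohen_kappa_score(reader_1, reader_2)
print(f"Cohen's kappa: {kappa:.2f}")   # values near 0.2-0.4 read as fair to moderate agreement
```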
Affiliation(s)
- Yuliana Jiménez-Gaona: Departamento de Química y Ciencias Exactas, Universidad Técnica Particular de Loja, San Cayetano Alto s/n CP1101608, Loja, Ecuador; Instituto de Instrumentación para la Imagen Molecular I3M, Universitat Politécnica de Valencia, 46022, Valencia, Spain; Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- María José Rodríguez Álvarez: Instituto de Instrumentación para la Imagen Molecular I3M, Universitat Politécnica de Valencia, 46022, Valencia, Spain
- Darwin Castillo-Malla: Departamento de Química y Ciencias Exactas, Universidad Técnica Particular de Loja, San Cayetano Alto s/n CP1101608, Loja, Ecuador; Instituto de Instrumentación para la Imagen Molecular I3M, Universitat Politécnica de Valencia, 46022, Valencia, Spain; Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Santiago García-Jaen: Departamento de Química y Ciencias Exactas, Universidad Técnica Particular de Loja, San Cayetano Alto s/n CP1101608, Loja, Ecuador
- Patricio Corral-Domínguez: Corporación Médica Monte Sinaí-CIPAM (Centro Integral de Patología Mamaria) Cuenca-Ecuador, Facultad de Ciencias Médicas, Universidad de Cuenca, Cuenca, 010203, Ecuador
- Vasudevan Lakshminarayanan: Department of Systems Design Engineering, Physics, and Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
7
Guo Y, Zhang H, Yuan L, Chen W, Zhao H, Yu QQ, Shi W. Machine learning and new insights for breast cancer diagnosis. J Int Med Res 2024; 52:3000605241237867. PMID: 38663911; PMCID: PMC11047257; DOI: 10.1177/03000605241237867.
Abstract
Breast cancer (BC) is the most prominent form of cancer among females worldwide. Current methods of BC detection include X-ray mammography, ultrasound, computed tomography, magnetic resonance imaging, positron emission tomography and breast thermographic techniques. More recently, machine learning (ML) tools have been increasingly employed in diagnostic medicine owing to their high efficiency in detection and intervention. Imaging features extracted from these modalities, together with mathematical analyses, can be used to build ML models that stratify, differentiate and detect benign and malignant breast lesions; given these marked advantages, radiomics is a frequently used tool in recent research and clinics. Artificial neural networks and deep learning (DL) are newer forms of ML that evaluate data using computer simulation of the human brain. DL directly processes unstructured information, such as images, sounds and language, and performs precise clinical image stratification, medical record analysis and tumour diagnosis. This review summarizes prior investigations on the application of medical images for the detection and intervention of BC using radiomics, ML and DL, with the aim of guiding scientists in the use of artificial intelligence and ML in research and the clinic.
Affiliation(s)
- Ya Guo: Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Heng Zhang: Department of Laboratory Medicine, Shandong Daizhuang Hospital, Jining, Shandong Province, China
- Leilei Yuan: Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Weidong Chen: Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Haibo Zhao: Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Qing-Qing Yu: Phase I Clinical Research Centre, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Wenjie Shi: Molecular and Experimental Surgery, University Clinic for General-, Visceral-, Vascular- and Transplantation Surgery, Medical Faculty, University Hospital Magdeburg, Otto von Guericke University, Magdeburg, Germany
8
Ragab M, Khadidos AO, Alshareef AM, Khadidos AO, Altwijri M, Alhebaishi N. Optimal deep transfer learning driven computer-aided breast cancer classification using ultrasound images. Expert Syst 2024; 41. DOI: 10.1111/exsy.13515.
Abstract
Breast cancer (BC) is regarded as the second leading type of cancer among women globally. Ultrasound images are typically used to identify and classify abnormalities in the breast, and computer-aided diagnosis (CAD) models can enhance diagnostic performance for identifying and classifying BC. A CAD pipeline generally comprises distinct procedures such as preprocessing, segmentation, feature extraction, and classification. Recent developments in deep learning (DL) algorithms within CAD systems help to minimize cost and enhance the ability of radiologists to interpret medical images. This study therefore develops an optimal deep transfer learning driven computer-aided BC classification (ODTLD-CABCC) technique for ultrasound images. The presented ODTLD-CABCC algorithm performs pre-processing at two levels, median-filtering-based noise removal and graph cut segmentation. A residual network (ResNet101) model is then used as the feature extractor. Finally, the sailfish optimizer (SFO) with a labelled weighted extreme learning machine (LWELM) algorithm is used for classification, with the SFO employed to choose the optimal parameters of the LWELM. A comprehensive set of simulations was conducted on benchmark data, and the experimental outcomes, examined under numerous aspects, demonstrate the supremacy of the ODTLD-CABCC technique over other approaches.
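Using a pretrained ResNet101 purely as a feature extractor, as described above, typically means truncating the network before its classification layer. The PyTorch sketch below illustrates only that step; the batch contents are placeholders, and the SFO-tuned LWELM classifier that would consume these features is not shown.

```python
import torch
import torch.nn as nn
from torchvision import models

# Truncate an ImageNet-pretrained ResNet101 just before its final fully connected layer
backbone = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V1)
feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
feature_extractor.eval()

with torch.no_grad():
    batch = torch.randn(8, 3, 224, 224)            # placeholder pre-processed ultrasound ROIs
    feats = feature_extractor(batch).flatten(1)     # 2048-dimensional descriptors per image
print(feats.shape)                                  # torch.Size([8, 2048])
```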
Affiliation(s)
- Mahmoud Ragab: Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia; King Abdulaziz University - University of Oxford Centre for Artificial Intelligence in Precision Medicines, King Abdulaziz University, Jeddah, Saudi Arabia
- Alaa O. Khadidos: Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia; Center of Research Excellence in Artificial Intelligence and Data Science, King Abdulaziz University, Jeddah, Saudi Arabia
- Abdulrhman M. Alshareef: Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Adil O. Khadidos: Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Mohammed Altwijri: Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Nawaf Alhebaishi: Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
9
Cheng K, Wang J, Liu J, Zhang X, Shen Y, Su H. Public health implications of computer-aided diagnosis and treatment technologies in breast cancer care. AIMS Public Health 2023; 10:867-895. PMID: 38187901; PMCID: PMC10764974; DOI: 10.3934/publichealth.2023057.
Abstract
Breast cancer remains a significant public health issue, being a leading cause of cancer-related mortality among women globally. Timely diagnosis and efficient treatment are crucial for enhancing patient outcomes, reducing healthcare burdens and advancing community health. This systematic review, following the PRISMA guidelines, aims to comprehensively synthesize the recent advancements in computer-aided diagnosis and treatment for breast cancer. The study covers the latest developments in image analysis and processing, machine learning and deep learning algorithms, multimodal fusion techniques and radiation therapy planning and simulation. The results of the review suggest that machine learning, augmented and virtual reality and data mining are the three major research hotspots in breast cancer management. Moreover, this paper discusses the challenges and opportunities for future research in this field. The conclusion highlights the importance of computer-aided techniques in the management of breast cancer and summarizes the key findings of the review.
Affiliation(s)
- Kai Cheng: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Jiangtao Wang: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Jian Liu: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Xiangsheng Zhang: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Yuanyuan Shen: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Hang Su: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
10
Pati A, Parhi M, Pattanayak BK, Singh D, Singh V, Kadry S, Nam Y, Kang BG. Breast Cancer Diagnosis Based on IoT and Deep Transfer Learning Enabled by Fog Computing. Diagnostics (Basel) 2023; 13:2191. PMID: 37443585; DOI: 10.3390/diagnostics13132191.
Abstract
Across all countries, both developing and developed, women face the greatest risk of breast cancer, and patients whose cancer is diagnosed and staged early have a better chance of receiving treatment before the disease spreads. Today's technology enables automatic analysis and classification of medical images, allowing quicker and more accurate data processing, and the Internet of Things (IoT) has become crucial for the early and remote diagnosis of chronic diseases. In this study, mammography images from the publicly available repository The Cancer Imaging Archive (TCIA) were used to train a deep transfer learning (DTL) model for an autonomous breast cancer diagnostic system. The data were pre-processed before being fed into the model. Convolutional neural networks (CNNs), a popular deep learning (DL) technique, were combined with transfer learning (TL) backbones such as ResNet50, InceptionV3, AlexNet, VGG16, and VGG19, together with a support vector machine (SVM) classifier, to boost prediction accuracy. Extensive simulations employing a variety of performance and network metrics demonstrate the viability of the proposed paradigm. Outperforming several existing works based on mammogram images, the experimental accuracy, precision, sensitivity, specificity, and F1-score reached 97.99%, 99.51%, 98.43%, 80.08%, and 98.97%, respectively, on a large dataset of mammography images categorized as benign or malignant. By incorporating fog computing technologies, the model safeguards the privacy and security of patient data, reduces the load on centralized servers, and improves throughput.
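A common realization of the "CNN features plus SVM classifier" pattern mentioned above is to embed images with a frozen pretrained backbone and fit a scikit-learn SVM on the embeddings. The sketch below uses VGG16 as one illustrative backbone with random placeholder data; it is not the paper's TCIA pipeline, and all settings are assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC
from torchvision import models

# Pretrained VGG16 convolutional trunk used as a fixed feature extractor
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
extractor = nn.Sequential(vgg.features, vgg.avgpool, nn.Flatten())
extractor.eval()

def embed(images: torch.Tensor) -> np.ndarray:
    """Map an image batch to flat CNN feature vectors."""
    with torch.no_grad():
        return extractor(images).numpy()

# Placeholder pre-processed mammogram batches and binary labels
X_train = embed(torch.randn(32, 3, 224, 224))
y_train = np.random.randint(0, 2, size=32)

svm = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)       # SVM on top of CNN features
print(svm.predict(embed(torch.randn(4, 3, 224, 224))))
```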
Affiliation(s)
- Abhilash Pati: Department of Computer Science and Engineering, Faculty of Engineering and Technology (ITER), Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar 751030, India
- Manoranjan Parhi: Centre for Data Sciences, Faculty of Engineering and Technology (ITER), Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar 751030, India
- Binod Kumar Pattanayak: Department of Computer Science and Engineering, Faculty of Engineering and Technology (ITER), Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar 751030, India
- Debabrata Singh: Department of Computer Applications, Faculty of Engineering and Technology (ITER), Siksha 'O' Anusandhan (Deemed to be University), Bhubaneswar 751030, India
- Vijendra Singh: School of Computer Science, University of Petroleum and Energy Studies, Dehradun 248007, India
- Seifedine Kadry: Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway; Artificial Intelligence Research Center (AIRC), Ajman University, Ajman 346, United Arab Emirates; Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon; MEU Research Unit, Middle East University, Amman 11831, Jordan
- Yunyoung Nam: Department of ICT Convergence, Soonchunhyang University, Asan 31538, Republic of Korea
- Byeong-Gwon Kang: Department of ICT Convergence, Soonchunhyang University, Asan 31538, Republic of Korea
11
Overcoming limitation of dissociation between MD and MI classifications of breast cancer histopathological images through a novel decomposed feature-based knowledge distillation method. Comput Biol Med 2022; 145:105413. PMID: 35325731; DOI: 10.1016/j.compbiomed.2022.105413.
Abstract
Magnification-independent (MI) classification is considered a promising approach for detecting breast cancer in histopathological images, but it requires too many parameters for practical implementation because it depends on input images at multiple magnification factors. Magnification-dependent (MD) classification, in contrast, uses smaller inputs and fewer parameters but usually performs poorly on unseen samples. This paper proposes a novel method based on knowledge distillation (KD) to overcome this dissociation between MI and MD classification of breast cancer histopathological images. The proposed KD method comprises a pre-trained MI teacher model that trains an MD student model developed with only one magnification factor; the decomposed feature maps of the teacher's intermediate layers are transferred to the student as dark knowledge. According to the experimental results, the student model trained on 40X images yielded accuracy rates of 99.41%, 99.26%, 99.14%, and 99.09% on unseen samples of 40X, 100X, 200X, and 400X images, respectively. Moreover, comparisons indicated the competitive performance of the proposed student model against state-of-the-art deep learning methods on BreakHis.
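The paper's decomposed-feature distillation cannot be reproduced from the abstract alone, but the general shape of feature-based knowledge distillation can be sketched as a task loss plus a term pulling a student feature map toward a detached teacher feature map. The MSE term, the spatial pooling, and all shapes below are simplifying assumptions, not the authors' decomposition.

```python
import torch
import torch.nn.functional as F

def feature_distillation_loss(student_feat, teacher_feat, student_logits, labels, alpha=0.5):
    """Toy feature-based KD objective: cross-entropy task loss plus an MSE term that
    aligns the student's intermediate feature map with the (detached) teacher map."""
    if student_feat.shape != teacher_feat.shape:
        # Match spatial resolution when teacher and student layers disagree
        teacher_feat = F.adaptive_avg_pool2d(teacher_feat, student_feat.shape[-2:])
    task_loss = F.cross_entropy(student_logits, labels)
    distill_loss = F.mse_loss(student_feat, teacher_feat.detach())
    return task_loss + alpha * distill_loss

# Hypothetical shapes: one intermediate layer from each network, plus student logits
loss = feature_distillation_loss(torch.randn(2, 64, 28, 28, requires_grad=True),
                                 torch.randn(2, 64, 56, 56),
                                 torch.randn(2, 2, requires_grad=True),
                                 torch.tensor([0, 1]))
loss.backward()
print(loss.item())
```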
12
Ragab M, Albukhari A, Alyami J, Mansour RF. Ensemble Deep-Learning-Enabled Clinical Decision Support System for Breast Cancer Diagnosis and Classification on Ultrasound Images. Biology (Basel) 2022; 11:439. PMID: 35336813; PMCID: PMC8945718; DOI: 10.3390/biology11030439.
Abstract
Clinical decision support systems (CDSS) provide an efficient way to diagnose diseases such as breast cancer using ultrasound images (USIs). Globally, breast cancer is one of the major causes of increased mortality among women. Computer-aided diagnosis (CAD) models are widely employed to detect and classify tumors in USIs; such systems are designed to help radiologists diagnose breast tumors and assess disease prognosis, and classification accuracy depends on image quality and the radiologist's experience. Deep learning (DL) models have proved effective for breast cancer classification. In the current study, an Ensemble Deep-Learning-Enabled Clinical Decision Support System for Breast Cancer Diagnosis and Classification (EDLCDS-BCDC) technique was developed to identify the presence of breast cancer in USIs. In this technique, USIs first undergo two pre-processing stages, Wiener filtering and contrast enhancement. The Chaotic Krill Herd Algorithm (CKHA) is then applied with Kapur's entropy (KE) for image segmentation. In addition, an ensemble of three deep learning models, VGG-16, VGG-19, and SqueezeNet, is used for feature extraction. Finally, Cat Swarm Optimization (CSO) with a Multilayer Perceptron (MLP) model classifies the images according to whether breast cancer is present. A wide range of simulations carried out on benchmark databases highlights the better outcomes of the proposed EDLCDS-BCDC technique over recent methods.
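Kapur's entropy, which the segmentation stage above optimizes via CKHA, selects thresholds that maximize the summed entropies of the resulting histogram partitions. The NumPy sketch below performs a single-threshold exhaustive search as a simplified stand-in for the paper's metaheuristic, multilevel version; the random test image is a placeholder.

```python
import numpy as np

def kapur_threshold(image: np.ndarray, bins: int = 256) -> int:
    """Pick the grey level that maximizes the sum of the entropies of the
    background and foreground histogram partitions (single-threshold Kapur)."""
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_score = 0, -np.inf
    for t in range(1, bins - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))   # background entropy
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))   # foreground entropy
        if h0 + h1 > best_score:
            best_t, best_score = t, h0 + h1
    return best_t

img = (np.random.rand(128, 128) * 255).astype(np.uint8)    # placeholder grey-scale image
print("Kapur threshold:", kapur_threshold(img))
```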
Affiliation(s)
- Mahmoud Ragab: Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; Centre for Artificial Intelligence in Precision Medicines, King Abdulaziz University, Jeddah 21589, Saudi Arabia; Mathematics Department, Faculty of Science, Al-Azhar University, Cairo 11884, Egypt
- Ashwag Albukhari: Centre for Artificial Intelligence in Precision Medicines, King Abdulaziz University, Jeddah 21589, Saudi Arabia; Biochemistry Department, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Jaber Alyami: Diagnostic Radiology Department, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah 21589, Saudi Arabia; Imaging Unit, King Fahd Medical Research Center, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Romany F. Mansour: Department of Mathematics, Faculty of Science, New Valley University, El-Kharga 72511, Egypt