1
Raju ASN, Venkatesh K, Rajababu M, Gatla RK, Eid MM, Ali E, Titova N, Sharaf ABA. A hybrid framework for colorectal cancer detection and U-Net segmentation using polynetDWTCADx. Sci Rep 2025; 15:847. [PMID: 39757273] [PMCID: PMC11701104] [DOI: 10.1038/s41598-025-85156-2]
Abstract
"PolynetDWTCADx" is a sophisticated hybrid model that was developed to identify and distinguish colorectal cancer. In this study, the CKHK-22 dataset, comprising 24 classes, served as the introduction. The proposed method, which combines CNNs, DWTs, and SVMs, enhances the accuracy of feature extraction and classification. The study employs DWT to optimize and enhance two integrated CNN models before classifying them with SVM following a systematic procedure. PolynetDWTCADx was the most effective model that we evaluated. It was capable of attaining a moderate level of recall, as well as an area under the curve (AUC) and accuracy during testing. The testing accuracy was 92.3%, and the training accuracy was 95.0%. This demonstrates that the model is capable of distinguishing between noncancerous and cancerous lesions in the colon. We can also employ the semantic segmentation algorithms of the U-Net architecture to accurately identify and segment cancerous colorectal regions. We assessed the model's exceptional success in segmenting and providing precise delineation of malignant tissues using its maximal IoU value of 0.93, based on intersection over union (IoU) scores. When these techniques are added to PolynetDWTCADx, they give doctors detailed visual information that is needed for diagnosis and planning treatment. These techniques are also very good at finding and separating colorectal cancer. PolynetDWTCADx has the potential to enhance the recognition and management of colorectal cancer, as this study underscores.
Affiliation(s)
- Akella S Narasimha Raju
- Department of Computer Science and Engineering (Data Science), Institute of Aeronautical Engineering, Dundigal, Hyderabad, 500043, Telangana, India
- K Venkatesh
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai, 603203, Tamilnadu, India
- Makineedi Rajababu
- Department of Information Technology, Aditya University, Surampalem, 533437, Andhra Pradesh, India
- Ranjith Kumar Gatla
- Department of Computer Science and Engineering (Data Science), Institute of Aeronautical Engineering, Dundigal, Hyderabad, 500043, Telangana, India
- Marwa M Eid
- Department of Physical Therapy, College of Applied Medical Science, Taif University, Taif, 21944, Saudi Arabia
- Enas Ali
- University Centre for Research and Development, Chandigarh University, Mohali, 140413, Punjab, India
- Nataliia Titova
- Biomedical Engineering Department, National University Odesa Polytechnic, Odesa, 65044, Ukraine
- Ahmed B Abou Sharaf
- Ministry of Higher Education & Scientific Research, Industrial Technical Institute in Mataria, Cairo, 11718, Egypt
- Chitkara Centre for Research and Development, Chitkara University, Himachal Pradesh, 174103, India
2
Raju ASN, Venkatesh K, Padmaja B, Kumar CHNS, Patnala PRM, Lasisi A, Islam S, Razak A, Khan WA. Exploring vision transformers and XGBoost as deep learning ensembles for transforming carcinoma recognition. Sci Rep 2024; 14:30052. [PMID: 39627293] [PMCID: PMC11614869] [DOI: 10.1038/s41598-024-81456-1]
Abstract
Early detection of colorectal carcinoma (CRC), one of the most prevalent cancers worldwide, significantly improves patient prognosis. This research presents a new method for improving CRC detection using a deep learning ensemble within a computer-aided diagnosis (CADx) framework. The method combines pre-trained convolutional neural network (CNN) models, such as ADaRDEV2I-22, DaRD-22, and ADaDR-22, with Vision Transformers (ViT) and XGBoost. The study addresses the challenges of imbalanced datasets and the need for sophisticated feature extraction in medical image analysis. The CKHK-22 dataset initially comprised 24 classes; refining it to 14 classes improved data balance and quality, which enabled more precise feature extraction and better classification results. Two ensemble models were created: the first used Vision Transformers to capture long-range spatial relationships in the images, while the second combined CNNs with XGBoost for structured feature classification. DCGAN-based augmentation was implemented to increase the dataset's diversity. Testing showed substantial performance gains, with the ADaDR-22 + Vision Transformer ensemble achieving the best results: a testing accuracy of 93.4% and an AUC of 98.8%. The ADaDR-22 + XGBoost model reached an AUC of 97.8% and an accuracy of 92.2%. These findings demonstrate the efficacy of the proposed ensemble models in detecting CRC and underline the importance of well-balanced, high-quality datasets. The proposed method significantly enhances clinical diagnostic accuracy and the capabilities of medical image analysis for early CRC detection.
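The XGBoost stage of the second ensemble can be pictured with a short sketch like the following; the synthetic feature matrix stands in for pooled CNN embeddings, and all hyperparameters are assumptions rather than the authors' settings.

```python
# Hedged sketch: deep CNN features classified with XGBoost (synthetic data).
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 512))      # stand-in for pooled CNN embeddings
y = rng.integers(0, 14, size=1000)    # 14 classes, as in the refined CKHK-22

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```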
Affiliation(s)
- Akella Subrahmanya Narasimha Raju
- Department of Computer Science and Engineering (Data Science), Institute of Aeronautical Engineering, Dundigal, Hyderabad, Telangana, 500043, India
- K Venkatesh
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai, Tamilnadu, 603203, India
- B Padmaja
- Department of Computer Science and Engineering-AI&ML, Institute of Aeronautical Engineering, Dundigal, Hyderabad, 500043, India
- C H N Santhosh Kumar
- Department of Computer Science and Engineering, Anurag Engineering College, Kodada, Telangana, 508206, India
- Ayodele Lasisi
- Department of Computer Science, College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Saiful Islam
- Civil Engineering Department, College of Engineering, King Khalid University, 61421, Abha, Saudi Arabia
- Abdul Razak
- Department of Mechanical Engineering, P. A. College of Engineering (Affiliated to Visvesvaraya Technological University, Belagavi), Mangaluru, India
- Wahaj Ahmad Khan
- School of Civil Engineering & Architecture, Institute of Technology, Dire-Dawa University, 1362, Dire Dawa, Ethiopia
3
Sharkas M, Attallah O. Color-CADx: a deep learning approach for colorectal cancer classification through triple convolutional neural networks and discrete cosine transform. Sci Rep 2024; 14:6914. [PMID: 38519513] [PMCID: PMC10959971] [DOI: 10.1038/s41598-024-56820-w]
Abstract
Colorectal cancer (CRC) has a significant death rate that consistently impacts lives worldwide. Histopathological examination is the standard method for CRC diagnosis, but it is complicated, time-consuming, and subjective. Computer-aided diagnostic (CAD) systems using digital pathology can help pathologists diagnose CRC faster and more accurately than manual histopathological examination. Deep learning algorithms, especially convolutional neural networks (CNNs), are advocated for CRC diagnosis. Nevertheless, most previous CAD systems obtained features from a single CNN; these features are of huge dimension, and the systems relied on spatial information alone to achieve classification. In this paper, a CAD system called "Color-CADx" is proposed for CRC recognition. Several CNNs, namely ResNet50, DenseNet201, and AlexNet, are used for end-to-end classification at different training-testing ratios. Moreover, features are extracted from these CNNs and reduced using the discrete cosine transform (DCT). DCT is also utilized to acquire a spectral representation and to further select a reduced set of deep features. The DCT coefficients obtained in the previous step are then concatenated, and the analysis of variance (ANOVA) feature selection approach is applied to choose significant features. Finally, machine learning classifiers are employed for CRC classification. Two publicly available datasets were investigated: the NCT-CRC-HE-100K dataset and the Kather_texture_2016_image_tiles dataset. The highest achieved accuracy reached 99.3% for NCT-CRC-HE-100K and 96.8% for Kather_texture_2016_image_tiles. DCT and ANOVA successfully lowered feature dimensionality, thus reducing complexity. Color-CADx has demonstrated efficacy in terms of accuracy, surpassing the most recent advancements.
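A hedged sketch of the DCT-plus-ANOVA reduction pipeline described above; the feature dimensions, the number of retained DCT coefficients, and k are illustrative assumptions, not the paper's settings.

```python
# Sketch: DCT spectral representation followed by ANOVA feature selection.
import numpy as np
from scipy.fft import dct
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(1)
deep_feats = rng.normal(size=(500, 2048))   # stand-in for one CNN's deep features
labels = rng.integers(0, 2, size=500)

# Spectral representation via DCT; keep low-order coefficients, which
# concentrate most of the signal energy.
spectral = dct(deep_feats, axis=1, norm="ortho")[:, :256]

# ANOVA F-test retains the most class-discriminative coefficients.
selector = SelectKBest(f_classif, k=64)
reduced = selector.fit_transform(spectral, labels)
print(reduced.shape)  # (500, 64)
```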
Affiliation(s)
- Maha Sharkas
- Electronics and Communications Engineering Department, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt
- Omneya Attallah
- Electronics and Communications Engineering Department, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt
- Wearables, Biosensing, and Biosignal Processing Laboratory, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt
4
Raju ASN, Venkatesh K. EnsemDeepCADx: Empowering Colorectal Cancer Diagnosis with Mixed-Dataset Features and Ensemble Fusion CNNs on Evidence-Based CKHK-22 Dataset. Bioengineering (Basel) 2023; 10:738. [PMID: 37370669] [PMCID: PMC10295325] [DOI: 10.3390/bioengineering10060738]
Abstract
Colorectal cancer is associated with a high mortality rate and significant patient risk. Diagnosis relies on images obtained during colonoscopy, which highlights the importance of timely diagnosis and treatment; deep learning techniques could enhance the diagnostic accuracy of existing systems. Using advanced deep learning techniques, a new EnsemDeepCADx system for accurate colorectal cancer diagnosis has been developed. Optimal accuracy is achieved by combining convolutional neural networks (CNNs) with transfer learning via bidirectional long short-term memory (BiLSTM) and support vector machines (SVM). Four pre-trained CNN models (AlexNet, DarkNet-19, DenseNet-201, and ResNet-50) form the ADaDR-22, ADaR-22, and DaRD-22 ensemble CNNs. The CADx system is thoroughly evaluated at each stage. From the CKHK-22 mixed dataset, colour, greyscale, and local binary pattern (LBP) image datasets and features are utilised. In the second stage, the extracted features are compared to a new feature-fusion dataset using three distinct CNN ensembles. The ensemble CNNs are then paired with SVM-based transfer learning, comparing raw features to the feature-fusion datasets. In the final stage of transfer learning, BiLSTM and SVM are combined with a CNN ensemble. The testing accuracy of the ensemble fusion CNN DaRD-22 with BiLSTM and SVM was optimal on the original, grey, LBP, and feature-fusion datasets (95.96%, 88.79%, 73.54%, and 97.89%, respectively). Comparing the outputs of all four feature datasets with those of the three ensemble CNNs at each stage enables the EnsemDeepCADx system to attain its highest level of accuracy.
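One plausible reading of the BiLSTM stage is a recurrent head over fused CNN feature vectors. The PyTorch sketch below treats the fused features as a short sequence; that interpretation, and every dimension shown, is an assumption rather than the paper's exact design, and the final SVM stage is omitted.

```python
# Sketch of a BiLSTM head over fused CNN features (assumed dimensions).
import torch
import torch.nn as nn

class BiLSTMHead(nn.Module):
    def __init__(self, feat_dim=1024, hidden=128, n_classes=24):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                 # x: (batch, seq_len, feat_dim)
        out, _ = self.lstm(x)             # out: (batch, seq_len, 2 * hidden)
        return self.fc(out[:, -1, :])     # classify from the last time step

feats = torch.randn(8, 4, 1024)           # 4 feature "frames" per image
print(BiLSTMHead()(feats).shape)           # torch.Size([8, 24])
```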
Affiliation(s)
- Akella Subrahmanya Narasimha Raju
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, SRM Nagar, Chennai 603203, India
5
Attallah O. RADIC: A tool for diagnosing COVID-19 from chest CT and X-ray scans using deep learning and quad-radiomics. Chemometr Intell Lab Syst 2023; 233:104750. [PMID: 36619376] [PMCID: PMC9807270] [DOI: 10.1016/j.chemolab.2022.104750]
Abstract
Deep learning (DL) algorithms have demonstrated a high ability to perform speedy and accurate COVID-19 diagnosis utilizing computed tomography (CT) and X-ray scans. The spatial information in these images was used to train DL models in the majority of relevant studies. However, training these models with images generated by radiomics approaches could enhance diagnostic accuracy, and combining information from several radiomics approaches with time-frequency representations of the COVID-19 patterns can increase performance further. This study introduces "RADIC", an automated tool that uses three DL models trained on radiomics-generated images to detect COVID-19. First, four radiomics approaches are used to analyze the original CT and X-ray images. Next, each of the three DL models is trained on a different set of radiomics, X-ray, and CT images. Then, for each DL model, deep features are obtained and their dimensions are decreased using the Fast Walsh Hadamard Transform, yielding a time-frequency representation of the COVID-19 patterns. The tool then uses the discrete cosine transform to combine these deep features. Finally, four classification models perform the classification. To validate the performance of RADIC, two benchmark datasets (CT and X-ray) for COVID-19 are employed; the final accuracy attained is 99.4% and 99% for the first and second datasets, respectively. To prove the competing ability of RADIC, its performance is compared with related studies in the literature, and the results reflect that RADIC achieves superior performance. The results of the proposed tool prove that a DL model can be trained more effectively with images generated by radiomics techniques than with the original X-ray and CT images, and that incorporating deep features extracted from DL models trained with multiple radiomics approaches improves diagnostic accuracy.
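Since the Fast Walsh Hadamard Transform is the dimensionality-reduction step named above, a small self-contained implementation may help; truncating to the low-order coefficients is an assumed reduction strategy, not a detail from the paper.

```python
# Self-contained fast Walsh-Hadamard transform along the last axis.
import numpy as np

def fwht(a: np.ndarray) -> np.ndarray:
    """FWHT via butterfly passes; last-axis length must be a power of two."""
    a = a.copy().astype(float)
    n = a.shape[-1]
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[..., j].copy(), a[..., j + h].copy()
                a[..., j], a[..., j + h] = x + y, x - y
        h *= 2
    return a

feats = np.random.default_rng(2).normal(size=(4, 256))  # stand-in deep features
compressed = fwht(feats)[:, :64]   # keep low-order coefficients (assumed strategy)
print(compressed.shape)            # (4, 64)
```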
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering & Technology, Arab Academy for Science, Technology & Maritime Transport, Alexandria, Egypt
6
Houwen BBSL, Nass KJ, Vleugels JLA, Fockens P, Hazewinkel Y, Dekker E. Comprehensive review of publicly available colonoscopic imaging databases for artificial intelligence research: availability, accessibility, and usability. Gastrointest Endosc 2023; 97:184-199.e16. [PMID: 36084720] [DOI: 10.1016/j.gie.2022.08.043]
Abstract
BACKGROUND AND AIMS: Publicly available databases containing colonoscopic imaging data are valuable resources for artificial intelligence (AI) research. Currently, little is known regarding the number and content of these databases. This review aimed to describe the availability, accessibility, and usability of publicly available colonoscopic imaging databases, focusing on polyp detection, polyp characterization, and quality of colonoscopy. METHODS: A systematic literature search was performed in MEDLINE and Embase to identify AI studies describing publicly available colonoscopic imaging databases published after 2010. Second, a targeted search using Google's Dataset Search, Google Search, GitHub, and Figshare was done to identify databases directly. Databases were included if they contained data about polyp detection, polyp characterization, or quality of colonoscopy. To assess the accessibility of databases, the following categories were defined: open access, open access with barriers, and regulated access. To assess the potential usability of the included databases, essential details of each database were extracted using a checklist derived from the Checklist for Artificial Intelligence in Medical Imaging. RESULTS: We identified 22 databases with open access, 3 with open access with barriers, and 15 with regulated access. The 22 open access databases contained 19,463 images and 952 videos. Nineteen of these databases focused on polyp detection, localization, and/or segmentation; 6 on polyp characterization; and 3 on quality of colonoscopy. Only half of these databases have been used by other researchers to develop, train, or benchmark their AI systems. Although technical details were in general well reported, important details such as polyp and patient demographics and the annotation process were underreported in almost all databases. CONCLUSIONS: This review provides greater insight into the public availability of colonoscopic imaging databases for AI research. Incomplete reporting of important details limits the ability of researchers to assess the usability of current databases.
Affiliation(s)
- Britt B S L Houwen
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Karlijn J Nass
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Jasper L A Vleugels
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Paul Fockens
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
- Yark Hazewinkel
- Department of Gastroenterology and Hepatology, Radboud University Nijmegen Medical Center, Radboud University of Nijmegen, Nijmegen, the Netherlands
- Evelien Dekker
- Department of Gastroenterology and Hepatology, Amsterdam Gastroenterology Endocrinology Metabolism, Amsterdam University Medical Centres, location Academic Medical Center, University of Amsterdam, Amsterdam, the Netherlands
7
GabROP: Gabor Wavelets-Based CAD for Retinopathy of Prematurity Diagnosis via Convolutional Neural Networks. Diagnostics (Basel) 2023; 13:171. [PMID: 36672981] [PMCID: PMC9857608] [DOI: 10.3390/diagnostics13020171]
Abstract
One of the most serious and dangerous ocular problems in premature infants is retinopathy of prematurity (ROP), a proliferative vascular disease. Ophthalmologists can use automatic computer-assisted diagnostic (CAD) tools to help them make a safe, accurate, and low-cost diagnosis of ROP. All previous CAD tools for ROP diagnosis use the original fundus images. Unfortunately, learning a discriminative representation from ROP-related fundus images is difficult. Textural analysis techniques, such as Gabor wavelets (GW), can reveal significant texture information that helps artificial intelligence (AI) based models improve diagnostic accuracy. In this paper, an effective and automated CAD tool named GabROP, based on GW and multiple deep learning (DL) models, is proposed. Initially, GabROP analyzes fundus images using GW and generates several sets of GW images. Next, these image sets are used to train three convolutional neural network (CNN) models independently; the networks are additionally trained on the original fundus images. Using the discrete wavelet transform (DWT), texture features retrieved from each CNN trained with the various GW image sets are combined to create a textural-spectral-temporal representation. Afterward, for each CNN, these features are concatenated with spatial deep features obtained from the original fundus images. Finally, the concatenated features of all three CNNs are merged using the discrete cosine transform (DCT) to lessen the feature size caused by the fusion process. The outcomes of GabROP show that it is accurate and efficient for ophthalmologists. Additionally, the effectiveness of GabROP is compared with recently developed ROP diagnostic techniques. Owing to GabROP's superior performance over competing tools, ophthalmologists may be able to identify ROP more reliably and precisely, potentially reducing diagnostic effort and examination time.
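Generating a Gabor wavelet image set of the kind GabROP trains on can be sketched with OpenCV; the kernel size, wavelength, and the four orientations below are illustrative assumptions rather than the paper's parameters.

```python
# Sketch: build a small Gabor filter bank and apply it to one fundus image.
import cv2
import numpy as np

img = np.random.default_rng(3).integers(0, 255, (224, 224)).astype(np.uint8)

responses = []
for theta in np.arange(0, np.pi, np.pi / 4):    # 4 orientations (assumed)
    kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                lambd=10.0, gamma=0.5, psi=0)
    responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))

gw_stack = np.stack(responses)     # one "GW image set" for CNN training
print(gw_stack.shape)              # (4, 224, 224)
```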
8
Attallah O. MonDiaL-CAD: Monkeypox diagnosis via selected hybrid CNNs unified with feature selection and ensemble learning. Digit Health 2023; 9:20552076231180054. [PMID: 37312961] [PMCID: PMC10259124] [DOI: 10.1177/20552076231180054]
Abstract
Objective: The monkeypox virus is slowly evolving, and there are fears it could spread as COVID-19 did. Computer-aided diagnosis (CAD) based on deep learning approaches, especially convolutional neural networks (CNNs), can assist in the rapid assessment of reported incidents. Current CADs are mostly based on an individual CNN; the few that employed multiple CNNs did not investigate which combination of CNNs has the greatest impact on performance, and they relied only on the spatial information of deep features to train their models. This study aims to construct a CAD tool named "Monkey-CAD" that addresses these limitations and automatically diagnoses monkeypox rapidly and accurately. Methods: Monkey-CAD extracts features from eight CNNs and then examines the best possible combination of deep features that influences classification. It employs the discrete wavelet transform (DWT) to merge features, which diminishes the fused features' size and provides a time-frequency representation. The size of these deep features is then further reduced via an entropy-based feature selection approach. The reduced fused features finally deliver a better representation of the input and feed three ensemble classifiers. Results: Two freely accessible datasets, Monkeypox skin image (MSID) and Monkeypox skin lesion (MSLD), are employed in this study. Monkey-CAD could discriminate between cases with and without monkeypox, achieving an accuracy of 97.1% on the MSID dataset and 98.7% on the MSLD dataset. Conclusions: Such promising results demonstrate that Monkey-CAD can be employed to assist health practitioners, and they verify that fusing deep features from selected CNNs can boost performance.
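The DWT-based feature merging described in the Methods can be pictured as a single-level 1-D transform over concatenated deep features, which halves their dimension; the feature sizes below are assumptions for illustration.

```python
# Sketch: merge two CNNs' deep features, then halve the dimension with a DWT.
import numpy as np
import pywt

rng = np.random.default_rng(4)
f1 = rng.normal(size=(100, 1280))   # stand-ins for two CNNs' deep features
f2 = rng.normal(size=(100, 2048))

fused = np.concatenate([f1, f2], axis=1)          # (100, 3328)
approx, _detail = pywt.dwt(fused, "haar", axis=1) # keep approximation band
print(approx.shape)                                # (100, 1664), i.e. halved
```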
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
9
Narasimha Raju AS, Jayavel K, Rajalakshmi T. ColoRectalCADx: Expeditious Recognition of Colorectal Cancer with Integrated Convolutional Neural Networks and Visual Explanations Using Mixed Dataset Evidence. Comput Math Methods Med 2022; 2022:8723957. [PMID: 36404909] [PMCID: PMC9671728] [DOI: 10.1155/2022/8723957]
Abstract
Colorectal cancer typically affects the gastrointestinal tract within the human body, and colonoscopy is one of the most accurate methods of detecting it. Current computer-assisted diagnosis (CADx) systems identify cancer with only a limited number of deep learning methods and do not exploit mixed datasets. The proposed system, ColoRectalCADx, is supported by deep learning (DL) models suitable for cancer research and comprises five stages: convolutional neural networks (CNN), support vector machine (SVM), long short-term memory (LSTM), visual explanation via gradient-weighted class activation mapping (Grad-CAM), and semantic segmentation. The key components of the CADx system are 9 individual and 12 integrated CNNs, so the investigational experiments cover 21 CNNs in total. In the subsequent phase, the CNNs' concatenated transfer-learning features are associated with SVM classification, and additional classification ensures the effective transfer of results from CNN to LSTM. The system takes a combination of CVC Clinic DB, Kvasir2, and Hyper Kvasir as a mixed input dataset. After the CNN and LSTM stages, malignancies are detected using improved polyp recognition with Grad-CAM and semantic segmentation using U-Net; CADx results are stored on Google Cloud for record retention. In these experiments, the individual CNN DenseNet-201 (87.1% training and 84.7% testing accuracies) and the integrated CNN ADaDR-22 (84.61% training and 82.17% testing accuracies) were the most efficient for cancer detection with the CNN+LSTM model. ColoRectalCADx thus accurately identifies cancer through the individual CNN DenseNet-201 and the integrated CNN ADaDR-22; in Grad-CAM's visual explanations, DenseNet-201 displays precise visualization of polyps, and U-Net provides precise segmentation of malignant polyps.
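A minimal Grad-CAM sketch in PyTorch, of the kind used for the visual-explanation stage; the ResNet-50 backbone here is a stand-in (the paper highlights DenseNet-201), and the random, untrained input is purely illustrative.

```python
# Minimal Grad-CAM sketch (assumed backbone and input; no trained weights).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=None).eval()   # stand-in backbone

acts = {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(v=o))  # last conv maps

x = torch.randn(1, 3, 224, 224)
score = model(x)[0].max()                          # score of the top class
grads = torch.autograd.grad(score, acts["v"])[0]   # d(score)/d(activations)

w = grads.mean(dim=(2, 3), keepdim=True)               # channel importance weights
cam = F.relu((w * acts["v"]).sum(dim=1, keepdim=True)) # weighted activation map
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
print(cam.shape)  # torch.Size([1, 1, 224, 224]) -- heatmap over the input
```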
Affiliation(s)
- Akella S. Narasimha Raju
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- Kayalvizhi Jayavel
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- T. Rajalakshmi
- Department of Electronics and Communication Engineering, School of Electrical and Electronics Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
10
Narasimha Raju AS, Jayavel K, Rajalakshmi T. Dexterous Identification of Carcinoma through ColoRectalCADx with Dichotomous Fusion CNN and UNet Semantic Segmentation. Comput Intell Neurosci 2022; 2022:4325412. [PMID: 36262620] [PMCID: PMC9576362] [DOI: 10.1155/2022/4325412]
Abstract
Human colorectal disorders of the digestive tract are recognized by colonoscopy. The current system recognizes cancer through a three-stage pipeline that utilizes two sets of colonoscopy data, but identifying polyps by visualization has not been addressed. The proposed system, ColoRectalCADx, is a five-stage system that takes three publicly accessible datasets as input for cancer detection: CVC Clinic DB, Kvasir2, and Hyper Kvasir. After the image preprocessing stages, experiments were performed with seven prominent end-to-end convolutional neural networks (CNNs) and nine fusion CNN models to extract spatial features. The end-to-end CNN and fusion features were then passed through discrete wavelet transform (DWT) and support vector machine (SVM) classification to retrieve time- and spatial-frequency features. Results were obtained for all five stages. Across the three datasets, from stage 1 to stage 3, the end-to-end CNN DenseNet-201 obtained the best testing accuracies ((98%, 87%, 84%), ((98%, 97%), (87%, 87%), (84%, 84%)), ((99.03%, 99%), (88.45%, 88%), (83.61%, 84%))). In stage 2, the fusion CNN DaRD-22 obtained the optimal test accuracy ((93%, 97%), (82%, 84%), (69%, 57%)), and in stage 4 the ADaRDEV2-22 fusion achieved the best test accuracy ((95.73%, 94%), (81.20%, 81%), (72.56%, 58%)). For the input segmentation datasets CVC Clinic-Seg, Kvasir-Seg, and Hyper Kvasir, malignant polyps were identified with the UNet CNN model, yielding loss scores of 0.7842 for CVC Clinic DB, 0.6977 for Kvasir2, and 0.6910 for Hyper Kvasir.
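The abstract reports per-dataset loss scores for its UNet stage without naming the loss function. The sketch below shows a soft Dice loss, a common choice for binary polyp masks, offered purely as an assumed example of such a segmentation objective.

```python
# Sketch: soft Dice loss for binary polyp masks (assumed loss; raw logits in).
import torch

def dice_loss(pred_logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    probs = torch.sigmoid(pred_logits)
    inter = (probs * target).sum(dim=(1, 2, 3))
    denom = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (denom + eps)).mean()

logits = torch.randn(2, 1, 128, 128)                    # stand-in UNet output
masks = (torch.rand(2, 1, 128, 128) > 0.5).float()      # stand-in ground truth
print(dice_loss(logits, masks))
```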
Affiliation(s)
- Akella S. Narasimha Raju
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- Kayalvizhi Jayavel
- Department of Networking and Communications, School of Computing, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
- Thulasi Rajalakshmi
- Department of Electronics and Communication Engineering, School of Electrical and Electronics Engineering, SRM Institute of Science and Technology, Kattankulathur, Chennai 603203, India
11
Time-based self-supervised learning for Wireless Capsule Endoscopy. Comput Biol Med 2022; 146:105631. [DOI: 10.1016/j.compbiomed.2022.105631]
12
Attallah O, Samir A. A wavelet-based deep learning pipeline for efficient COVID-19 diagnosis via CT slices. Appl Soft Comput 2022; 128:109401. [PMID: 35919069] [PMCID: PMC9335861] [DOI: 10.1016/j.asoc.2022.109401]
Abstract
The quick diagnosis of the novel coronavirus (COVID-19) disease is vital to prevent its propagation and improve therapeutic outcomes. Computed tomography (CT) is believed to be an effective tool for diagnosing COVID-19; however, a CT scan contains hundreds of slices that are complex to analyze, which could delay diagnosis. Artificial intelligence (AI), especially deep learning (DL), could facilitate and speed up COVID-19 diagnosis from such scans. Several studies employed DL approaches based on 2D CT images from a single view; nevertheless, 3D multiview CT slices have demonstrated an excellent ability to enhance the efficiency of COVID-19 diagnosis. The majority of DL-based studies utilized the spatial information of the original CT images to train their models, though using spectral-temporal information could improve detection. This article proposes a DL-based pipeline called CoviWavNet for the automatic diagnosis of COVID-19, using a 3D multiview dataset called OMNIAHCOV. Initially, it analyzes the CT slices using multilevel discrete wavelet decomposition (DWT) and then uses the heatmaps of the approximation levels to train three ResNet CNN models, which use the spectral-temporal information of these images to perform classification. Subsequently, it investigates whether combining spatial information with spectral-temporal information could improve diagnostic accuracy: it extracts deep spectral-temporal features from the ResNets using transfer learning and integrates them with deep spatial features extracted from the same ResNets trained on the original CT slices. A feature selection step then reduces the dimension of the integrated features, which serve as inputs to three support vector machine (SVM) classifiers. To further validate the performance of CoviWavNet, a publicly available benchmark dataset called SARS-COV-2-CT-Scan is employed. The results demonstrate that training the ResNets with the spectral-temporal information of the DWT heatmap images is superior to utilizing the spatial information of the original CT images. Furthermore, integrating deep spectral-temporal features with deep spatial features enhanced the classification accuracy of the three SVM classifiers, reaching final accuracies of 99.33% and 99.7% for the OMNIAHCOV and SARS-COV-2-CT-Scan datasets, respectively. These accuracies verify the outstanding performance of CoviWavNet compared with other related studies. Thus, CoviWavNet can help radiologists reach a rapid and accurate diagnosis of COVID-19.
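The multilevel DWT decomposition of CT slices can be sketched with PyWavelets; the db4 wavelet, the three decomposition levels, and the min-max normalization into a heatmap are assumptions, not the authors' documented settings.

```python
# Sketch: multilevel 2-D DWT of a CT slice; keep the approximation sub-band.
import numpy as np
import pywt

slice_img = np.random.default_rng(5).normal(size=(512, 512))  # stand-in CT slice

coeffs = pywt.wavedec2(slice_img, "db4", level=3)
approx = coeffs[0]                       # level-3 approximation sub-band

# Normalise to [0, 1] so it can be rendered as a heatmap image for the CNNs.
heatmap = (approx - approx.min()) / (approx.max() - approx.min())
print(heatmap.shape)                     # roughly (70, 70) for db4 at level 3
```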
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
- Ahmed Samir
- Department of Radiodiagnosis, Faculty of Medicine, University of Alexandria, Egypt
13
Attallah O. An Intelligent ECG-Based Tool for Diagnosing COVID-19 via Ensemble Deep Learning Techniques. Biosensors (Basel) 2022; 12:299. [PMID: 35624600] [PMCID: PMC9138764] [DOI: 10.3390/bios12050299]
Abstract
Diagnosing COVID-19 accurately and rapidly is vital to control its quick spread, lessen lockdown restrictions, and decrease the workload on healthcare structures. The present tools to detect COVID-19 have numerous shortcomings; therefore, novel diagnostic tools should be examined to enhance diagnostic accuracy and avoid these limitations. Earlier studies indicated multiple patterns of cardiovascular alterations in COVID-19 cases, which motivated the use of ECG data as a tool for diagnosing the novel coronavirus. This study introduces a novel automated diagnostic tool based on ECG data to diagnose COVID-19. The tool utilizes ten deep learning (DL) models of various architectures: it obtains significant features from the last fully connected layer of each DL model and then combines them. Afterward, the tool applies a hybrid feature selection based on the chi-square test and sequential search to select significant features. Finally, it employs several machine learning classifiers to perform two classification levels: a binary level to differentiate between normal and COVID-19 cases, and a multiclass level to discriminate COVID-19 cases from normal cases and other cardiac complications. The proposed tool reached an accuracy of 98.2% and 91.6% for the binary and multiclass levels, respectively. This performance indicates that the ECG could be used as an alternative means of diagnosing COVID-19.
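A hedged sketch of a chi-square filter followed by sequential search, mirroring the hybrid feature selection described above; the estimator, k, and the number of final features are illustrative assumptions.

```python
# Sketch: chi-square filter stage, then forward sequential feature selection.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
X = rng.random((300, 200))            # chi-square requires non-negative features
y = rng.integers(0, 2, size=300)

X_chi = SelectKBest(chi2, k=40).fit_transform(X, y)   # coarse filter stage
sfs = SequentialFeatureSelector(LogisticRegression(max_iter=500),
                                n_features_to_select=10, direction="forward")
X_final = sfs.fit_transform(X_chi, y)                  # wrapper search stage
print(X_final.shape)                   # (300, 10)
```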
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
14
Attallah O. ECG-BiCoNet: An ECG-based pipeline for COVID-19 diagnosis using Bi-Layers of deep features integration. Comput Biol Med 2022; 142:105210. [PMID: 35026574] [PMCID: PMC8730786] [DOI: 10.1016/j.compbiomed.2022.105210]
Abstract
The accurate and speedy detection of COVID-19 is essential to avert the fast propagation of the virus, alleviate lockdown constraints, and diminish the burden on health organizations. The methods currently used to diagnose COVID-19 have several limitations, so new techniques need to be investigated to improve diagnosis and overcome these limitations. Taking into consideration the great benefits of electrocardiogram (ECG) applications, this paper proposes a new pipeline called ECG-BiCoNet to investigate the potential of using ECG data for diagnosing COVID-19. ECG-BiCoNet employs five deep learning models of distinct structural design and extracts two levels of features from two different layers of each model. Features mined from higher layers are fused using the discrete wavelet transform and then integrated with lower-layer features. Afterward, a feature selection approach is utilized. Finally, an ensemble classification system is built to merge the predictions of three machine learning classifiers. ECG-BiCoNet performs two classification categories, binary and multiclass, achieving a promising COVID-19 performance with accuracies of 98.8% and 91.73%, respectively. These results verify that ECG data may be used to diagnose COVID-19, which can help clinicians automate the diagnosis and overcome the limitations of manual diagnosis.
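The closing ensemble that merges the predictions of three machine learning classifiers might look like a soft-voting combination; the three estimators chosen below are assumptions, since the abstract does not name them.

```python
# Sketch: soft-voting ensemble over three classifiers (assumed estimators).
import numpy as np
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X = rng.normal(size=(400, 64))         # stand-in for the fused bi-layer features
y = rng.integers(0, 2, size=400)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=500)),
                ("rf", RandomForestClassifier(n_estimators=100)),
                ("svm", SVC(probability=True))],
    voting="soft")                     # average the three predicted probabilities
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```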
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 1029, Egypt
15
Attallah O. A computer-aided diagnostic framework for coronavirus diagnosis using texture-based radiomics images. Digit Health 2022; 8:20552076221092543. [PMID: 35433024] [PMCID: PMC9005822] [DOI: 10.1177/20552076221092543]
Abstract
The accurate and rapid detection of the novel coronavirus infection is very important to prevent the fast spread of the disease, thus reducing the negative effects that have influenced many industrial sectors, especially healthcare. Artificial intelligence techniques, in particular deep learning, could help in the fast and precise diagnosis of coronavirus from computed tomography (CT) images. Most artificial intelligence-based studies used the original CT images to build their models; however, integrating texture-based radiomics images with deep learning techniques could improve diagnostic accuracy. This study proposes a computer-assisted diagnostic framework based on multiple deep learning and texture-based radiomics approaches. It first trains three Residual Network (ResNet) deep learning models with two types of texture-based radiomics images, the discrete wavelet transform (DWT) and the gray-level co-occurrence matrix (GLCM), instead of the original CT images. Then, it fuses the texture-based radiomics deep feature sets extracted from each model using the discrete cosine transform, and thereafter further combines the fused features obtained from the three convolutional neural networks. Finally, three support vector machine classifiers are utilized for the classification procedure. The proposed method is validated experimentally on the benchmark severe respiratory syndrome coronavirus 2 CT image dataset. The accuracies attained indicate that using texture-based radiomics (GLCM, DWT) images for training ResNet-18 (83.22%, 74.9%), ResNet-50 (80.94%, 78.39%), and ResNet-101 (80.54%, 77.99%) is better than using the original CT images (70.34%, 76.51%, and 73.42%, respectively). Furthermore, the sensitivity, specificity, accuracy, precision, and F1-score achieved after the two fusion steps are 99.47%, 99.72%, 99.60%, 99.72%, and 99.60%, which proves that combining texture-based radiomics deep features obtained from the three ResNets boosts performance. Thus, fusing multiple texture-based radiomics deep features mined from several convolutional neural networks is better than using only one type of radiomics approach and a single convolutional neural network. This performance allows the proposed framework to be used by radiologists to attain a fast and accurate diagnosis.
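The GLCM texture representation used as a radiomics input can be sketched with scikit-image, whose current API names the functions graycomatrix and graycoprops; the distances, angles, and chosen properties are illustrative assumptions.

```python
# Sketch: GLCM texture features from one grayscale image (assumed parameters).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = np.random.default_rng(8).integers(0, 256, (128, 128)).astype(np.uint8)

glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
contrast = graycoprops(glcm, "contrast")
homogeneity = graycoprops(glcm, "homogeneity")
print(contrast.shape, homogeneity.shape)   # (1, 2) each: 1 distance x 2 angles
```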
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
16
Attallah O. A deep learning-based diagnostic tool for identifying various diseases via facial images. Digit Health 2022; 8:20552076221124432. [PMID: 36105626] [PMCID: PMC9465585] [DOI: 10.1177/20552076221124432]
Abstract
With the current health crisis caused by the COVID-19 pandemic, patients have become more anxious about infection, so they prefer not to have direct contact with doctors or clinicians. Lately, medical scientists have confirmed that several diseases exhibit corresponding specific features on the face. Recent studies have indicated that computer-aided facial diagnosis can be a promising tool for the automatic diagnosis and screening of diseases from facial images. However, few of these studies used deep learning (DL) techniques; most focused on detecting a single disease, using handcrafted feature extraction methods and conventional machine learning techniques based on individual classifiers trained on small, private datasets of images taken in a controlled environment. This study proposes a novel computer-aided facial diagnosis system called FaceDisNet that uses a new public dataset based on images taken in an unconstrained environment, which could be employed for forthcoming comparisons, and that detects both single and multiple diseases. FaceDisNet is constructed by integrating several spatial deep features from convolutional neural networks of various architectures. It does not depend only on spatial features but also extracts spatial-spectral features. FaceDisNet searches for the fused spatial-spectral feature set that has the greatest impact on classification, employing two feature selection techniques to reduce the large dimension of features resulting from feature fusion. Finally, it builds an ensemble classifier based on stacking to perform classification. FaceDisNet achieved maximum accuracies of 98.57% and 98% after the ensemble classification and feature selection steps for the binary and multiclass classification categories, respectively. These results prove that FaceDisNet is a reliable tool that could be employed to avoid the difficulties and complications of manual diagnosis and help physicians achieve accurate diagnoses without physical contact with patients.
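The stacking ensemble FaceDisNet ends with can be pictured via scikit-learn's StackingClassifier; the base estimators and the meta-learner below are assumptions, as the abstract does not specify them.

```python
# Sketch: stacking ensemble over selected features (assumed estimators).
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(9)
X = rng.normal(size=(300, 50))     # stand-in for selected spatial-spectral features
y = rng.integers(0, 2, size=300)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                ("svm", SVC(probability=True))],
    final_estimator=LogisticRegression(max_iter=500))  # meta-learner
stack.fit(X, y)
print(stack.score(X, y))
```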
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
17
Attallah O. DIAROP: Automated Deep Learning-Based Diagnostic Tool for Retinopathy of Prematurity. Diagnostics (Basel) 2021; 11:2034. [PMID: 34829380] [PMCID: PMC8620568] [DOI: 10.3390/diagnostics11112034]
Abstract
Retinopathy of Prematurity (ROP) affects preterm neonates and can cause blindness. Deep learning (DL) can assist ophthalmologists in the diagnosis of ROP. This paper proposes an automated and reliable diagnostic tool based on DL techniques, called DIAROP, to support the ophthalmologic diagnosis of ROP. It extracts significant features by first obtaining spatial features from four convolutional neural networks (CNNs) using transfer learning and then applying the Fast Walsh Hadamard Transform (FWHT) to integrate these features. Moreover, DIAROP explores the best integrated features extracted from the CNNs that influence its diagnostic capability. DIAROP achieved an accuracy of 93.2% and an area under the receiver operating characteristic curve (AUC) of 0.98. Furthermore, its performance is compared with recent ROP diagnostic tools; its promising performance shows that DIAROP may assist the ophthalmologic diagnosis of ROP.
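Extracting pooled deep features from a pretrained CNN via transfer learning, as DIAROP does before applying the FWHT, can be sketched as follows; the ResNet-50 backbone is a stand-in, since the abstract does not name the four CNNs.

```python
# Sketch: transfer-learning feature extraction from a CNN backbone (assumed).
import torch
import torch.nn as nn
from torchvision import models

# Strip the classification head so the backbone emits pooled deep features.
backbone = models.resnet50(weights=None)   # pretrained weights assumed in practice
backbone.fc = nn.Identity()
backbone.eval()

with torch.no_grad():
    feats = backbone(torch.randn(4, 3, 224, 224))  # stand-in fundus batch
print(feats.shape)   # torch.Size([4, 2048]) -- features to integrate via FWHT
```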
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
18
Intelligent Dermatologist Tool for Classifying Multiple Skin Cancer Subtypes by Incorporating Manifold Radiomics Features Categories. Contrast Media Mol Imaging 2021; 2021:7192016. [PMID: 34621146] [PMCID: PMC8457955] [DOI: 10.1155/2021/7192016]
Abstract
The rates of skin cancer (SC) are rising every year, and SC is becoming a critical health issue worldwide. Early and accurate diagnosis of SC is the key to reducing these rates and improving survivability. However, manual diagnosis is exhausting, complicated, expensive, prone to diagnostic error, and highly dependent on the dermatologist's experience and abilities. Thus, there is a vital need for automated dermatologist tools capable of accurately classifying SC subclasses. Recently, artificial intelligence (AI) techniques, including machine learning (ML) and deep learning (DL), have verified the success of computer-assisted dermatologist tools in the automatic diagnosis and detection of SC diseases. Previous AI-based dermatologist tools are based on features that are either high-level features from DL methods or low-level features from handcrafted operations, and most were constructed for binary classification of SC. This study proposes an intelligent dermatologist tool to accurately diagnose multiple skin lesions automatically. The tool incorporates manifold radiomics feature categories involving high-level features, such as ResNet-50, DenseNet-201, and DarkNet-53, and low-level features, including the discrete wavelet transform (DWT) and local binary pattern (LBP). The results of the proposed tool prove that merging manifold features of different categories has a high influence on classification accuracy. Moreover, these results are superior to those obtained by other related AI-based dermatologist tools. Therefore, the proposed intelligent tool can be used by dermatologists to help them accurately diagnose the SC subcategory. It can also overcome the limitations of manual diagnosis, reduce the rates of infection, and enhance survival rates.
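The low-level LBP features mentioned above can be computed with scikit-image; the neighbour count, radius, and the uniform method are illustrative assumptions rather than the paper's settings.

```python
# Sketch: uniform LBP histogram as a low-level texture descriptor (assumed params).
import numpy as np
from skimage.feature import local_binary_pattern

img = np.random.default_rng(10).integers(0, 256, (128, 128)).astype(np.uint8)

P, R = 8, 1                                   # 8 neighbours at radius 1
lbp = local_binary_pattern(img, P, R, method="uniform")
hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
print(hist)   # 10-bin texture descriptor for one lesion image
```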
19
Attallah O, Anwar F, Ghanem NM, Ismail MA. Histo-CADx: duo cascaded fusion stages for breast cancer diagnosis from histopathological images. PeerJ Comput Sci 2021; 7:e493. [PMID: 33987459] [PMCID: PMC8093954] [DOI: 10.7717/peerj-cs.493]
Abstract
Breast cancer (BC) is one of the most common types of cancer affecting females worldwide. It may lead to irreversible complications and even death due to late diagnosis and treatment. Pathological analysis is considered the gold standard for BC detection, but it is a challenging task. Automatic diagnosis of BC could reduce death rates by creating a computer-aided diagnosis (CADx) system capable of accurately identifying BC at an early stage and decreasing the time pathologists spend on examinations. This paper proposes a novel CADx system named Histo-CADx for the automatic diagnosis of BC. Most related studies were based on individual deep learning methods and did not examine the influence of fusing features from multiple CNNs with handcrafted features, nor did they investigate the best combination of fused features that influences the performance of the CADx. Histo-CADx is therefore based on two stages of fusion. The first fusion stage investigates the impact of fusing several deep learning (DL) techniques with handcrafted feature extraction methods using the auto-encoder DL method; this stage also searches for a suitable set of fused features that could improve performance. The second fusion stage constructs a multiple classifier system (MCS) that fuses the outputs of three classifiers to further improve accuracy. The performance of Histo-CADx is evaluated using two public datasets, BreakHis and ICIAR 2018. The results from both datasets verified that the two fusion stages successfully improved accuracy compared with a CADx constructed with individual features. Furthermore, using the auto-encoder for the fusion process reduced the computation cost of the system. The results after the two fusion stages confirmed that Histo-CADx is reliable and can classify BC more accurately than other recent studies. Consequently, it can be used by pathologists to help them reach an accurate diagnosis, and it can decrease the time and effort needed by medical experts during the examination.
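The auto-encoder fusion in Histo-CADx's first stage can be pictured as compressing concatenated deep and handcrafted features into a low-dimensional code; every dimension in this PyTorch sketch is an assumption for illustration.

```python
# Sketch: auto-encoder that fuses concatenated features into a compact code.
import torch
import torch.nn as nn

class FusionAutoEncoder(nn.Module):
    """Compress concatenated deep + handcrafted features to a fused code."""
    def __init__(self, in_dim=3000, code_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 1024), nn.ReLU(),
                                     nn.Linear(1024, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 1024), nn.ReLU(),
                                     nn.Linear(1024, in_dim))

    def forward(self, x):
        code = self.encoder(x)           # fused low-dimensional representation
        return self.decoder(code), code

x = torch.randn(16, 3000)                 # stand-in concatenated feature vectors
recon, fused = FusionAutoEncoder()(x)
loss = nn.functional.mse_loss(recon, x)   # reconstruction training objective
print(fused.shape, loss.item() > 0)       # torch.Size([16, 256]) True
```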
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt
- Fatma Anwar
- Computer and Systems Engineering Department, Alexandria University, Alexandria, Egypt
- Nagia M. Ghanem
- Computer and Systems Engineering Department, Alexandria University, Alexandria, Egypt
- Mohamed A. Ismail
- Computer and Systems Engineering Department, Alexandria University, Alexandria, Egypt