1. Sarah P, Krishnapriya S, Saladi S, Karuna Y, Bavirisetti DP. A novel approach to brain tumor detection using K-Means++, SGLDM, ResNet50, and synthetic data augmentation. Front Physiol 2024; 15:1342572. [PMID: 39077759] [PMCID: PMC11284281] [DOI: 10.3389/fphys.2024.1342572] [Received: 11/22/2023] [Accepted: 06/24/2024]
Abstract
Introduction: Brain tumors are abnormal cell growths in the brain, posing significant treatment challenges. Accurate early detection using non-invasive methods is crucial for effective treatment. This research focuses on improving the early detection of brain tumors in MRI images through advanced deep-learning techniques. The primary goal is to identify the most effective deep-learning model for classifying brain tumors from MRI data, enhancing diagnostic accuracy and reliability. Methods: The proposed method for brain tumor classification integrates segmentation using K-means++, feature extraction from the Spatial Gray Level Dependence Matrix (SGLDM), and classification with ResNet50, along with synthetic data augmentation to enhance model robustness. Segmentation isolates tumor regions, while SGLDM captures critical texture information. The ResNet50 model then classifies the tumors accurately. To further improve the interpretability of the classification results, Grad-CAM is employed, providing visual explanations by highlighting influential regions in the MRI images. Result: In terms of accuracy, sensitivity, and specificity, the evaluation on the Br35H::BrainTumorDetection2020 dataset showed superior performance of the suggested method compared to existing state-of-the-art approaches. This indicates its effectiveness in achieving higher precision in identifying and classifying brain tumors from MRI data, showcasing advancements in diagnostic reliability and efficacy. Discussion: The superior performance of the suggested method indicates its robustness in accurately classifying brain tumors from MRI images, achieving higher accuracy, sensitivity, and specificity compared to existing methods. The method's enhanced sensitivity ensures a greater detection rate of true positive cases, while its improved specificity reduces false positives, thereby optimizing clinical decision-making and patient care in neuro-oncology.
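The segmentation stage above relies on K-means++, which seeds cluster centers far apart (with probability proportional to squared distance from existing centers) before ordinary k-means iterations. A minimal numpy sketch of intensity-based segmentation with K-means++ seeding; the function names and the choice of clustering raw pixel intensities are illustrative assumptions, not details from the paper:

```python
import numpy as np

def kmeans_pp_init(pixels, k, rng):
    """K-means++ seeding: pick each new center with probability
    proportional to its squared distance from the nearest existing center."""
    centers = [pixels[rng.integers(len(pixels))]]
    for _ in range(k - 1):
        d2 = np.min((pixels[:, None] - np.array(centers)[None, :]) ** 2, axis=1)
        centers.append(pixels[rng.choice(len(pixels), p=d2 / d2.sum())])
    return np.array(centers)

def kmeans_segment(image, k=3, iters=20, seed=0):
    """Cluster pixel intensities; return a label map the same shape as the image."""
    rng = np.random.default_rng(seed)
    pixels = image.astype(float).ravel()
    centers = kmeans_pp_init(pixels, k, rng)
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        centers = np.array([pixels[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels.reshape(image.shape)
```

On an MRI slice, the brightest cluster would then be taken as the candidate tumor region before texture features are extracted from it.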
Affiliation(s)
- Ponuku Sarah: School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
- Srigiri Krishnapriya: School of Electronics Engineering, Vellore Institute of Technology, Vellore, India
- Saritha Saladi: School of Electronics Engineering, VIT-AP University, Amaravati, India
- Yepuganti Karuna: School of Electronics Engineering, VIT-AP University, Amaravati, India
- Durga Prasad Bavirisetti: Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway
2. Zhou J, Song W, Liu Y, Yuan X. An efficient computational framework for gastrointestinal disorder prediction using attention-based transfer learning. PeerJ Comput Sci 2024; 10:e2059. [PMID: 38855223] [PMCID: PMC11157572] [DOI: 10.7717/peerj-cs.2059] [Received: 03/06/2024] [Accepted: 04/23/2024]
Abstract
Diagnosing gastrointestinal (GI) disorders, which affect parts of the digestive system such as the stomach and intestines, can be difficult even for experienced gastroenterologists due to the variety of ways these conditions present. Early diagnosis is critical for successful treatment, but the review process is time-consuming and labor-intensive. Computer-aided diagnostic (CAD) methods provide a solution by automating diagnosis, saving time, reducing workload, and lowering the likelihood of missing critical signs. In recent years, machine learning and deep learning approaches have been used to develop many CAD systems to address this issue. However, existing systems need to be improved for better safety and reliability on larger datasets before they can be used in medical diagnostics. In our study, we developed an effective CAD system for classifying eight types of GI images by combining transfer learning with an attention mechanism. Our experimental results show that ConvNeXt is an effective pre-trained network for feature extraction, and ConvNeXt+Attention (our proposed method) is a robust CAD system that outperforms other cutting-edge approaches. Our proposed method had an area under the receiver operating characteristic curve of 0.9997 and an area under the precision-recall curve of 0.9973, indicating excellent performance. The conclusion regarding the effectiveness of the system was also supported by the values of other evaluation metrics.
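The abstract pairs a pre-trained ConvNeXt feature extractor with an attention mechanism but does not detail the module. One common pattern is attention-weighted pooling over patch descriptors from a frozen backbone; the scoring-vector formulation below is an illustrative assumption, not the paper's architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(features, w_score):
    """features: (N, C) patch descriptors from a frozen backbone;
    w_score: (C,) learned scoring vector. Returns one (C,) descriptor in
    which informative patches get more weight than under plain averaging."""
    scores = features @ w_score   # (N,) relevance score per patch
    weights = softmax(scores)     # normalize scores to a distribution
    return weights @ features     # attention-weighted sum over patches
```

With a zero scoring vector this reduces to average pooling; a trained scoring vector shifts the pooled descriptor toward diagnostically relevant patches before the classification head.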
Affiliation(s)
- Jiajie Zhou: Huai’an First People’s Hospital, Nanjing Medical University, Jiangsu, China
- Wei Song: Huai’an First People’s Hospital, Nanjing Medical University, Jiangsu, China
- Yeliu Liu: Huai’an First People’s Hospital, Nanjing Medical University, Jiangsu, China
- Xiaoming Yuan: Huai’an First People’s Hospital, Nanjing Medical University, Jiangsu, China
3. Siddiqui S, Akram T, Ashraf I, Raza M, Khan MA, Damaševičius R. CG-Net: A novel CNN framework for gastrointestinal tract diseases classification. Int J Imaging Syst Technol 2024; 34. [DOI: 10.1002/ima.23081] [Received: 11/30/2023] [Accepted: 03/31/2024]
Abstract
The classification of medical images has had a significant influence on diagnostic techniques and therapeutic interventions. Conventional disease diagnosis procedures require substantial time and effort to reach an accurate diagnosis. Global statistics recognize gastrointestinal cancer as a major contributor to cancer-related deaths. Resolving gastrointestinal tract (GIT) ailments is complicated by the need for elaborate methods to precisely identify the exact location of the problem, so doctors frequently use wireless capsule endoscopy to diagnose and treat GIT problems. This research aims to develop a robust framework using deep learning techniques to effectively classify GIT diseases for therapeutic purposes. A CNN-based framework, in conjunction with a feature selection method, is proposed to improve the classification rate. The proposed framework is evaluated using various performance measures, including accuracy, recall, precision, F1 measure, mean absolute error, and mean squared error.
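The abstract names a CNN framework combined with a feature selection method but does not specify the selector. A common univariate choice for this step is the Fisher score with top-k selection, sketched below in numpy; the function names and binary-label setup are illustrative assumptions:

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature ratio of between-class to within-class variance
    for binary labels; higher means the feature separates classes better."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12  # avoid division by zero
    return num / den

def select_top_k(X, y, k):
    """Indices of the k most discriminative features."""
    idx = np.argsort(fisher_score(X, y))[::-1][:k]
    return np.sort(idx)
```

The selected column indices would then be used to slice the CNN feature matrix before the final classifier, reducing dimensionality and redundancy.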
Affiliation(s)
- Samra Siddiqui: Department of Computer Science, HITEC University, Taxila, Pakistan; Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Tallha Akram: Department of Information Systems, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia; Department of Machine Learning, Convex Solutions Pvt (Ltd), Islamabad, Pakistan
- Imran Ashraf: Department of Computer Science, NUCES (FAST), Islamabad, Pakistan
- Muddassar Raza: Department of Computer Science, HITEC University, Taxila, Pakistan
4. Akram T, Khan MA, Sharif M, Yasmin M. Skin lesion segmentation and recognition using multichannel saliency estimation and M-SVM on selected serially fused features. J Ambient Intell Humaniz Comput 2024; 15:1083-1102. [DOI: 10.1007/s12652-018-1051-5] [Received: 03/10/2018] [Accepted: 09/15/2018]
5. Sharif MI, Li JP, Khan MA, Kadry S, Tariq U. M3BTCNet: multi model brain tumor classification using metaheuristic deep neural network features optimization. Neural Comput Appl 2024; 36:95-110. [DOI: 10.1007/s00521-022-07204-6] [Received: 06/23/2021] [Accepted: 03/29/2022]
6. Wu S, Zhang R, Yan J, Li C, Liu Q, Wang L, Wang H. High-Speed and Accurate Diagnosis of Gastrointestinal Disease: Learning on Endoscopy Images Using Lightweight Transformer with Local Feature Attention. Bioengineering (Basel) 2023; 10:1416. [PMID: 38136007] [PMCID: PMC10741161] [DOI: 10.3390/bioengineering10121416] [Received: 11/18/2023] [Revised: 12/04/2023] [Accepted: 12/10/2023]
Abstract
In response to the pressing need for robust disease diagnosis from gastrointestinal tract (GIT) endoscopic images, we proposed FLATer, a fast, lightweight, and highly accurate transformer-based model. FLATer consists of a residual block, a vision transformer module, and a spatial attention block, which concurrently focuses on local features and global attention. It can leverage the capabilities of both convolutional neural networks (CNNs) and vision transformers (ViT). We decomposed the classification of endoscopic images into two subtasks: a binary classification to discern between normal and pathological images and a further multi-class classification to categorize images into specific diseases, namely ulcerative colitis, polyps, and esophagitis. FLATer has exhibited exceptional prowess in these tasks, achieving 96.4% accuracy in binary classification and 99.7% accuracy in ternary classification, surpassing most existing models. Notably, FLATer could maintain impressive performance when trained from scratch, underscoring its robustness. In addition to the high precision, FLATer boasted remarkable efficiency, reaching a notable throughput of 16.4k images per second, which positions FLATer as a compelling candidate for rapid disease identification in clinical practice.
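The two-subtask design (first screen normal vs. pathological, then assign a specific disease) is a classifier cascade. A minimal sketch of the control flow, with placeholder callables standing in for the trained FLATer stages:

```python
from typing import Callable

def cascade_classify(image,
                     is_pathological: Callable[[object], bool],
                     disease_classifier: Callable[[object], str]) -> str:
    """Stage 1 screens normal vs. pathological; stage 2 names the disease
    (e.g. ulcerative colitis, polyp, esophagitis) only for pathological images."""
    if not is_pathological(image):
        return "normal"
    return disease_classifier(image)
```

The cascade lets the cheaper binary screen discard normal images before the multi-class head runs, which also matches how the two accuracies (96.4% binary, 99.7% ternary) are reported separately.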
Affiliation(s)
- Shibin Wu: Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Ruxin Zhang: Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Jiayi Yan: Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Chengquan Li: School of Clinical Medicine, Tsinghua University, Beijing 100084, China
- Qicai Liu: Vanke School of Public Health, Tsinghua University, Beijing 100084, China
- Liyang Wang: School of Clinical Medicine, Tsinghua University, Beijing 100084, China
- Haoqian Wang: Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
7. Sharma A, Kumar R, Garg P. Deep learning-based prediction model for diagnosing gastrointestinal diseases using endoscopy images. Int J Med Inform 2023; 177:105142. [PMID: 37422969] [DOI: 10.1016/j.ijmedinf.2023.105142] [Received: 05/26/2023] [Revised: 07/01/2023] [Accepted: 07/04/2023]
Abstract
BACKGROUND: Gastrointestinal (GI) infections are quite common today around the world. Colonoscopy and wireless capsule endoscopy (WCE) are noninvasive methods for examining the whole GI tract for abnormalities. Nevertheless, it requires a great deal of time and effort for doctors to visualize a large number of images, and diagnosis is prone to human error. As a result, developing automated artificial intelligence (AI) based GI disease diagnosis methods is a crucial and emerging research area. AI-based prediction models may lead to improvements in the early diagnosis of gastrointestinal disorders, severity assessment, and healthcare systems for the benefit of patients as well as clinicians. The focus of this research is on the early diagnosis of gastrointestinal diseases using a convolutional neural network (CNN) to enhance diagnostic accuracy. METHODS: Various CNN models (a baseline model and transfer learning with VGG16, InceptionV3, and ResNet50) were trained on a benchmark image dataset, KVASIR, containing images from inside the GI tract, using n-fold cross-validation. The dataset comprises images of three disease states (polyps, ulcerative colitis, and esophagitis) as well as images of the healthy colon. Data augmentation strategies together with statistical measures were used to improve and evaluate the model's performance. Additionally, a test set comprising 1200 images was used to evaluate the model's accuracy and robustness. RESULTS: The CNN model using the weights of the ResNet50 pre-trained model achieved the highest average accuracy of approximately 99.80% on the training set (100% precision and approximately 99% recall) and accuracies of 99.50% and 99.16% on the validation and additional test sets, respectively, while diagnosing GI diseases. When compared to other existing systems, the proposed ResNet50 model outperforms them all. CONCLUSION: The findings of this study indicate that AI-based prediction models using CNNs, specifically ResNet50, can improve diagnostic accuracy for detecting gastrointestinal polyps, ulcerative colitis, and esophagitis. The prediction model is available at https://github.com/anjus02/GI-disease-classification.git.
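The n-fold cross-validation protocol mentioned in the methods reduces to index bookkeeping: shuffle once, split into n folds, and rotate which fold is held out. A minimal numpy sketch, where the fold count and shuffling seed are illustrative choices:

```python
import numpy as np

def kfold_indices(n_samples, n_folds, seed=0):
    """Yield (train_idx, val_idx) pairs so that every sample is used
    exactly once for validation across the n_folds splits."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_samples)          # shuffle once up front
    folds = np.array_split(order, n_folds)      # near-equal fold sizes
    for i in range(n_folds):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != i])
        yield train, val
```

Each (train, val) pair would drive one training run of the CNN, with the reported metrics averaged over the n runs.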
Affiliation(s)
- Anju Sharma: Department of Pharmacoinformatics, National Institute of Pharmaceutical Education and Research, S.A.S. Nagar, Punjab 160062, India
- Rajnish Kumar: Department of Veterinary Medicine and Surgery, College of Veterinary Medicine, University of Missouri, Columbia, MO, USA
- Prabha Garg: Department of Pharmacoinformatics, National Institute of Pharmaceutical Education and Research, S.A.S. Nagar, Punjab 160062, India
8. Musha A, Hasnat R, Mamun AA, Ping EP, Ghosh T. Computer-Aided Bleeding Detection Algorithms for Capsule Endoscopy: A Systematic Review. Sensors (Basel) 2023; 23:7170. [PMID: 37631707] [PMCID: PMC10459126] [DOI: 10.3390/s23167170] [Received: 05/27/2023] [Revised: 08/08/2023] [Accepted: 08/10/2023]
Abstract
Capsule endoscopy (CE) is a widely used medical imaging tool for the diagnosis of gastrointestinal tract abnormalities like bleeding. However, CE captures a huge number of image frames, constituting a time-consuming and tedious task for medical experts to manually inspect. To address this issue, researchers have focused on computer-aided bleeding detection systems to automatically identify bleeding in real time. This paper presents a systematic review of the available state-of-the-art computer-aided bleeding detection algorithms for capsule endoscopy. The review was carried out by searching five different repositories (Scopus, PubMed, IEEE Xplore, ACM Digital Library, and ScienceDirect) for all original publications on computer-aided bleeding detection published between 2001 and 2023. The Preferred Reporting Items for Systematic Review and Meta-Analyses (PRISMA) methodology was used to perform the review, and 147 full texts of scientific papers were reviewed. The contributions of this paper are: (I) a taxonomy for computer-aided bleeding detection algorithms for capsule endoscopy is identified; (II) the available state-of-the-art computer-aided bleeding detection algorithms, including various color spaces (RGB, HSV, etc.), feature extraction techniques, and classifiers, are discussed; and (III) the most effective algorithms for practical use are identified. Finally, the paper is concluded by providing future direction for computer-aided bleeding detection research.
Affiliation(s)
- Ahmmad Musha: Department of Electrical and Electronic Engineering, Pabna University of Science and Technology, Pabna 6600, Bangladesh
- Rehnuma Hasnat: Department of Electrical and Electronic Engineering, Pabna University of Science and Technology, Pabna 6600, Bangladesh
- Abdullah Al Mamun: Faculty of Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
- Em Poh Ping: Faculty of Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
- Tonmoy Ghosh: Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
9. Kodipalli A, Fernandes SL, Gururaj V, Varada Rameshbabu S, Dasar S. Performance Analysis of Segmentation and Classification of CT-Scanned Ovarian Tumours Using U-Net and Deep Convolutional Neural Networks. Diagnostics (Basel) 2023; 13:2282. [PMID: 37443676] [DOI: 10.3390/diagnostics13132282] [Received: 06/08/2023] [Revised: 06/29/2023] [Accepted: 07/03/2023]
Abstract
Difficulty in detecting tumours in early stages is the major cause of mortalities in patients, despite the advancements in treatment and research regarding ovarian cancer. Deep learning algorithms were applied to serve the purpose as a diagnostic tool and applied to CT scan images of the ovarian region. The images went through a series of pre-processing techniques and, further, the tumour was segmented using the UNet model. The instances were then classified into two categories-benign and malignant tumours. Classification was performed using deep learning models like CNN, ResNet, DenseNet, Inception-ResNet, VGG16 and Xception, along with machine learning models such as Random Forest, Gradient Boosting, AdaBoosting and XGBoosting. DenseNet 121 emerges as the best model on this dataset after applying optimization on the machine learning models by obtaining an accuracy of 95.7%. The current work demonstrates the comparison of multiple CNN architectures with common machine learning algorithms, with and without optimization techniques applied.
Affiliation(s)
- Ashwini Kodipalli: Department of Artificial Intelligence & Data Science, Global Academy of Technology, Bangalore 560098, India
- Steven L Fernandes: Department of Computer Science, Design, Journalism, Creighton University, Omaha, NE 68178, USA
- Vaishnavi Gururaj: Department of Computer Science, George Mason University, Fairfax, VA 22030, USA
- Shriya Varada Rameshbabu: Department of Computer Science & Engineering, Global Academy of Technology, Bangalore 560098, India
- Santosh Dasar: Department of Radiology, SDM College of Medical Sciences and Hospital, Dharwad 580009, India
10. Naz J, Sharif MI, Sharif MI, Kadry S, Rauf HT, Ragab AE. A Comparative Analysis of Optimization Algorithms for Gastrointestinal Abnormalities Recognition and Classification Based on Ensemble XcepNet23 and ResNet18 Features. Biomedicines 2023; 11:1723. [PMID: 37371819] [DOI: 10.3390/biomedicines11061723] [Received: 03/03/2023] [Revised: 05/23/2023] [Accepted: 06/09/2023]
Abstract
Esophagitis, cancerous growths, bleeding, and ulcers are typical symptoms of gastrointestinal disorders, which account for a significant portion of human mortality. For both patients and doctors, traditional diagnostic methods can be exhausting. The major aim of this research is to propose a hybrid method that can accurately diagnose gastrointestinal tract abnormalities and promote early treatment, helping to reduce deaths. The major phases of the proposed method are: dataset augmentation, preprocessing, feature engineering (feature extraction, fusion, and optimization), and classification. Image enhancement is performed using hybrid contrast stretching algorithms. Deep learning features are extracted through transfer learning from the ResNet18 model and the proposed XcepNet23 model. The obtained deep features are ensembled with texture features. The ensemble feature vector is optimized using the Binary Dragonfly Algorithm (BDA), the Moth-Flame Optimization (MFO) algorithm, and the Particle Swarm Optimization (PSO) algorithm. In this research, two datasets (a hybrid dataset and the Kvasir-V1 dataset), consisting of five and eight classes, respectively, are utilized. Compared to the most recent methods, the accuracy achieved by the proposed method on both datasets was superior: the Q_SVM achieved promising accuracies of 100% on the hybrid dataset and 99.24% on Kvasir-V1.
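Of the three metaheuristics compared, PSO is the most widely known; in binary form each particle is a 0/1 mask over features and velocities are squashed through a sigmoid into per-bit probabilities. A minimal numpy sketch with a toy fitness function standing in for classifier accuracy; all constants, names, and the fitness are illustrative, not the paper's settings:

```python
import numpy as np

def binary_pso(fitness, dim, n_particles=20, iters=50, seed=0):
    """Minimal binary PSO: maximizes fitness over 0/1 feature masks.
    Velocities are real-valued; a sigmoid turns them into bit probabilities."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, (n_particles, dim))       # random initial masks
    V = np.zeros((n_particles, dim))
    pbest, pbest_fit = X.copy(), np.array([fitness(x) for x in X])
    g = pbest[pbest_fit.argmax()].copy()             # global best mask
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (g - X)
        prob = 1.0 / (1.0 + np.exp(-V))              # sigmoid transfer
        X = (rng.random((n_particles, dim)) < prob).astype(int)
        fit = np.array([fitness(x) for x in X])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = X[improved], fit[improved]
        g = pbest[pbest_fit.argmax()].copy()
    return g
```

In the paper's setting the fitness would be the cross-validated accuracy of a classifier trained on the features the mask keeps, possibly penalized by mask size.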
Affiliation(s)
- Javeria Naz: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah 47040, Pakistan
- Muhammad Imran Sharif: Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah 47040, Pakistan
- Muhammad Irfan Sharif: Department of Computer Science, University of Education Lahore, Jauharabad Campus, Lahore 54770, Pakistan
- Seifedine Kadry: Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway; Department of Electrical and Computer Engineering, Lebanese American University, Byblos P.O. Box 13-5053, Lebanon; Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman 346, United Arab Emirates; MEU Research Unit, Middle East University, Amman 11831, Jordan
- Hafiz Tayyab Rauf: Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
- Adham E Ragab: Industrial Engineering Department, College of Engineering, King Saud University, P.O. Box 800, Riyadh 11421, Saudi Arabia
11. Krenzer A, Heil S, Fitting D, Matti S, Zoller WG, Hann A, Puppe F. Automated classification of polyps using deep learning architectures and few-shot learning. BMC Med Imaging 2023; 23:59. [PMID: 37081495] [PMCID: PMC10120204] [DOI: 10.1186/s12880-023-01007-4] [Received: 09/26/2022] [Accepted: 03/24/2023]
Abstract
BACKGROUND: Colorectal cancer (CRC) is a leading cause of cancer-related deaths worldwide. The best method to prevent CRC is a colonoscopy. However, not all colon polyps have the risk of becoming cancerous. Therefore, polyps are classified using different classification systems, and further treatment and procedures are based on the classification of the polyp. Nevertheless, classification is not easy. Therefore, we suggest two novel automated classification systems assisting gastroenterologists in classifying polyps based on the NICE and Paris classifications. METHODS: We built two classification systems. One classifies polyps based on their shape (Paris); the other classifies polyps based on their texture and surface patterns (NICE). A two-step process for the Paris classification is introduced: first, detecting and cropping the polyp on the image, and second, classifying the polyp based on the cropped area with a transformer network. For the NICE classification, we designed a few-shot learning algorithm based on the deep metric learning approach. The algorithm creates an embedding space for polyps, which allows classification from a few examples to account for the data scarcity of NICE-annotated images in our database. RESULTS: For the Paris classification, we achieved an accuracy of 89.35%, surpassing all papers in the literature and establishing a new state-of-the-art and baseline accuracy for other publications on a public data set. For the NICE classification, we achieved a competitive accuracy of 81.13%, thereby demonstrating the viability of the few-shot learning paradigm in polyp classification in data-scarce environments. Additionally, we show different ablations of the algorithms. Finally, we further elaborate on the explainability of the system by showing heat maps of the neural network explaining neural activations. CONCLUSION: Overall, we introduce two polyp classification systems to assist gastroenterologists. We achieve state-of-the-art performance in the Paris classification and demonstrate the viability of the few-shot learning paradigm in the NICE classification, addressing the prevalent data scarcity issues faced in medical machine learning.
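The few-shot NICE classifier builds an embedding space and classifies from a few labelled examples; the nearest-prototype rule common in deep metric learning can be sketched in numpy. The embeddings below are toy vectors standing in for the network's outputs:

```python
import numpy as np

def prototypes(support_embs, support_labels):
    """Mean embedding per class computed from a few labelled support examples."""
    classes = np.unique(support_labels)
    protos = np.stack([support_embs[support_labels == c].mean(axis=0)
                       for c in classes])
    return classes, protos

def classify(query_emb, classes, protos):
    """Assign the query to the class with the nearest prototype (Euclidean)."""
    d = np.linalg.norm(protos - query_emb, axis=1)
    return classes[d.argmin()]
```

Because only class means are stored, a new NICE category can be added from a handful of annotated images without retraining the embedding network, which is the point of the few-shot design.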
Affiliation(s)
- Adrian Krenzer: Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany; Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080, Würzburg, Germany
- Stefan Heil: Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany
- Daniel Fitting: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080, Würzburg, Germany
- Safa Matti: Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany
- Wolfram G Zoller: Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstrasse 60, 70174, Stuttgart, Germany
- Alexander Hann: Interventional and Experimental Endoscopy (InExEn), Department of Internal Medicine II, University Hospital Würzburg, Oberdürrbacher Straße 6, 97080, Würzburg, Germany
- Frank Puppe: Department of Artificial Intelligence and Knowledge Systems, Julius-Maximilians University of Würzburg, Sanderring 2, 97070, Würzburg, Germany
12. Khan MA, Khan A, Alhaisoni M, Alqahtani A, Alsubai S, Alharbi M, Malik NA, Damaševičius R. Multimodal brain tumor detection and classification using deep saliency map and improved dragonfly optimization algorithm. Int J Imaging Syst Technol 2023; 33:572-587. [DOI: 10.1002/ima.22831] [Received: 09/08/2022] [Accepted: 11/18/2022]
Abstract
In the last decade, there has been a significant increase in medical cases involving brain tumors. Brain tumor is the tenth most common type of tumor, affecting millions of people. However, if it is detected early, the cure rate can increase. Computer vision researchers are working to develop sophisticated techniques for detecting and classifying brain tumors, with MRI scans primarily used for tumor analysis. In this paper, we propose an automated system for brain tumor detection and classification using a saliency map and deep learning feature optimization. The proposed framework is implemented in stages. In the initial phase, a fusion-based contrast enhancement technique is proposed. In the following phase, a tumor segmentation technique based on saliency maps is proposed, which is then mapped onto the original images using an active contour. Following that, a pre-trained CNN model named EfficientNetB0 is fine-tuned and trained in two ways: on enhanced images and on tumor localization images. Deep transfer learning is used to train both models, and features are extracted from the average pooling layer. The deep learning features are then fused using an improved fusion approach known as Entropy Serial Fusion. The best features are chosen in the final step using an improved dragonfly optimization algorithm. Finally, the best features are classified using an extreme learning machine (ELM). The experimental process is conducted on three publicly available datasets, achieving improved accuracies of 95.14, 94.89, and 95.94%, respectively. A comparison with several neural networks shows the improvement of the proposed framework.
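The extreme learning machine (ELM) used in the final stage has a simple closed form: hidden-layer weights are random and fixed, and only the output weights are solved by least squares. A minimal numpy sketch, where the hidden size and tanh activation are illustrative choices rather than the paper's settings:

```python
import numpy as np

class ELM:
    """Extreme learning machine: random fixed hidden layer, output weights
    obtained in one shot via the pseudo-inverse (no gradient descent)."""
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)            # random feature map
        self.beta = np.linalg.pinv(H) @ y_onehot    # least-squares output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)
```

The one-shot least-squares solve is what makes ELMs attractive as the final classifier after expensive feature extraction: training cost is a single pseudo-inverse.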
Affiliation(s)
- Awais Khan: Department of Computer Science, HITEC University, Taxila, Pakistan
- Majed Alhaisoni: Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Abdullah Alqahtani: College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Shtwai Alsubai: College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Meshal Alharbi: Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Nazir Ahmed Malik: Cyber Reconnaissance and Combat Lab, Bahria University Islamabad, Islamabad, Pakistan
13. Robust Ulcer Classification: Contrast and Illumination Invariant Approach. Diagnostics (Basel) 2022; 12:2898. [PMID: 36552905] [PMCID: PMC9777244] [DOI: 10.3390/diagnostics12122898] [Received: 10/12/2022] [Revised: 11/09/2022] [Accepted: 11/17/2022]
Abstract
Gastrointestinal (GI) disease cases are on the rise throughout the world. Ulcers, being the most common type of GI disease, if left untreated, can cause internal bleeding resulting in anemia and bloody vomiting. Early detection and classification of different types of ulcers can reduce the death rate and severity of the disease. Manual detection and classification of ulcers are tedious and error-prone. This calls for automated systems based on computer vision techniques to detect and classify ulcers in images and video data. A major challenge in accurate detection and classification is dealing with the similarity among classes and the poor quality of input images. Improper contrast and illumination reduce the anticipated classification accuracy. In this paper, contrast and illumination invariance was achieved by utilizing log transformation and power law transformation. Optimal values of the parameters for both these techniques were achieved and combined to obtain the fused image dataset. Augmentation was used to handle overfitting and classification was performed using the lightweight and efficient deep learning model MobilNetv2. Experiments were conducted on the KVASIR dataset to assess the efficacy of the proposed approach. An accuracy of 96.71% was achieved, which is a considerable improvement over the state-of-the-art techniques.
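The two transforms named above have standard forms: log transformation, s = c * log(1 + r), brightens dark regions, while power-law (gamma) transformation, s = c * (r/255)^gamma, lightens or darkens depending on gamma. A numpy sketch of both plus a simple weighted fusion; the blend weight and gamma value are illustrative assumptions, whereas the paper searches for optimal parameters:

```python
import numpy as np

def log_transform(img, c=None):
    """s = c * log(1 + r); brightens dark regions, compresses highlights."""
    img = img.astype(float)
    c = 255.0 / np.log(1.0 + img.max()) if c is None else c
    return c * np.log1p(img)

def gamma_transform(img, gamma, c=255.0):
    """s = c * (r / 255)^gamma; gamma < 1 lightens, gamma > 1 darkens."""
    return c * (img.astype(float) / 255.0) ** gamma

def fuse(img, gamma=0.6, w=0.5):
    """Blend the two contrast-corrected views into one fused image."""
    return w * log_transform(img) + (1 - w) * gamma_transform(img, gamma)
```

Fusing the two views keeps both corrections' benefits, making downstream classification less sensitive to the original image's contrast and illumination.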
14. Khan MA, Sahar N, Khan WZ, Alhaisoni M, Tariq U, Zayyan MH, Kim YJ, Chang B. GestroNet: A Framework of Saliency Estimation and Optimal Deep Learning Features Based Gastrointestinal Diseases Detection and Classification. Diagnostics (Basel) 2022; 12:2718. [PMID: 36359566] [PMCID: PMC9689856] [DOI: 10.3390/diagnostics12112718] [Received: 10/02/2022] [Revised: 10/23/2022] [Accepted: 11/04/2022]
Abstract
In the last few years, artificial intelligence (AI) has shown a lot of promise in the medical domain for the diagnosis and classification of human infections. Several computerized AI-based techniques have been introduced in the literature for gastrointestinal tract (GIT) diseases such as ulcers, bleeding, and polyps. Manual diagnosis of these infections is time-consuming, expensive, and always requires an expert; as a result, computerized methods that can assist doctors as a second opinion in clinics are widely required. The key challenge for a computerized technique is accurate segmentation of the infected region, because each infected region varies in shape and location; moreover, inaccurate segmentation degrades feature extraction, which later impacts classification accuracy. In this paper, we propose an automated framework for GIT disease segmentation and classification based on deep saliency maps and Bayesian-optimal deep learning feature selection. The framework is made up of a few key steps, from preprocessing to classification. Original images are improved in the preprocessing step by a proposed contrast enhancement technique. In the following step, a deep saliency map is proposed for segmenting infected regions. The segmented regions are then used to fine-tune a pre-trained MobileNet-V2 model using transfer learning, with hyperparameters initialized via Bayesian optimization (BO). Features are then extracted from the average pooling layer. However, several redundant features are discovered during the analysis phase and must be removed, so a hybrid whale optimization algorithm is proposed for selecting the best features. Finally, the selected features are classified using an extreme learning machine classifier. Experiments were carried out on three datasets: Kvasir 1, Kvasir 2, and CUI Wah. The proposed framework achieved accuracies of 98.20%, 98.02%, and 99.61% on these three datasets, respectively, showing an improvement in accuracy over other methods.
Affiliation(s)
- Naveera Sahar
- Department of Computer Science, University of Wah, Wah Cantt, Rawalpindi 47040, Pakistan
- Wazir Zada Khan
- Department of Computer Science, University of Wah, Wah Cantt, Rawalpindi 47040, Pakistan
- Majed Alhaisoni
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Usman Tariq
- Department of Management Information Systems, CoBA, Prince Sattam Bin Abdulaziz University, Al-Khraj 16278, Saudi Arabia
- Muhammad H. Zayyan
- Computer Science Department, Faculty of Computers and Information Sciences, Mansoura University, Mansoura 35516, Egypt
- Ye Jin Kim
- Department of Computer Science, Hanyang University, Seoul 04763, Korea
- Byoungchol Chang
- Center for Computational Social Science, Hanyang University, Seoul 04763, Korea
|
15
|
Umer MJ, Amin J, Sharif M, Anjum MA, Azam F, Shah JH. An integrated framework for COVID-19 classification based on classical and quantum transfer learning from a chest radiograph. CONCURRENCY AND COMPUTATION : PRACTICE & EXPERIENCE 2022; 34:e6434. [PMID: 34512201 PMCID: PMC8420477 DOI: 10.1002/cpe.6434] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/22/2021] [Revised: 04/21/2021] [Accepted: 05/13/2021] [Indexed: 05/07/2023]
Abstract
COVID-19 has spread quickly, infecting over 10 million persons globally; the overall number of infected patients worldwide is estimated at around 133,381,413 people, and the infection rate is increasing daily. The disease has also had a devastating effect on the world economy and public health. Early-stage detection of this disease is mandatory to reduce the mortality rate. Artificial intelligence performs a vital role in COVID-19 detection at an initial stage using chest radiographs. The proposed method comprises two phases. In Phase I, deep features (DFs) are derived from the last fully connected layers of pre-trained models such as AlexNet and MobileNet; these feature vectors are then fused serially. The best features are selected through PCA-based feature selection and passed to SVM and KNN classifiers. In Phase II, a quantum transfer learning model is utilized, in which a pre-trained ResNet-18 model collects DFs that are then supplied as input to a 4-qubit quantum circuit for model training with tuned hyperparameters. The proposed technique is evaluated on two publicly available X-ray imaging datasets and achieved an accuracy of 99.0% across three classes: coronavirus-positive images, normal images, and pneumonia radiographs. The experimental findings show that the proposed approach outperforms other recently published work.
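Phase I's serial fusion followed by PCA-based reduction can be sketched with plain numpy. The feature shapes and component count below are toy values, and a real pipeline would typically use sklearn's PCA and SVM rather than this minimal SVD-based projection.

```python
import numpy as np

def serial_fuse(f1, f2):
    """Serial (concatenation-based) fusion of two per-sample feature matrices."""
    return np.concatenate([f1, f2], axis=1)

def pca_reduce(X, k):
    """Minimal PCA via SVD: project centered features onto the top-k components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt = principal axes
    return Xc @ Vt[:k].T
```

The reduced matrix would then be handed to the SVM/KNN classifiers the abstract mentions.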
Affiliation(s)
- Muhammad Junaid Umer
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Rawalpindi, Pakistan
- Javeria Amin
- Department of Computer Science, University of Wah, Rawalpindi, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Rawalpindi, Pakistan
- Faisal Azam
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Rawalpindi, Pakistan
- Jamal Hussain Shah
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Rawalpindi, Pakistan
|
16
|
Sharif MI, Khan MA, Alhussein M, Aurangzeb K, Raza M. A decision support system for multimodal brain tumor classification using deep learning. COMPLEX INTELL SYST 2022; 8:3007-3020. [DOI: 10.1007/s40747-021-00321-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Accepted: 03/01/2021] [Indexed: 02/08/2023]
Abstract
Multiclass classification of brain tumors is an important area of research in the field of medical imaging. Since accuracy is crucial in this classification, a number of techniques have been introduced by computer vision researchers; however, they still face the issue of low accuracy. In this article, a new automated deep learning method is proposed for the classification of multiclass brain tumors. To realize the proposed method, a pre-trained DenseNet201 deep learning model is fine-tuned and then trained using deep transfer learning on imbalanced data. The features of the trained model are extracted from the average pool layer, which represents very deep information about each type of tumor. However, the features of this layer alone are not sufficient for precise classification; therefore, two feature selection techniques are proposed. The first is Entropy–Kurtosis-based High Feature Values (EKbHFV) and the second is a metaheuristic modified genetic algorithm (MGA). The features selected by the MGA are further refined by a proposed new threshold function. Finally, the EKbHFV and MGA-based features are fused using a non-redundant serial-based approach and classified using a multiclass cubic SVM classifier. For the experimental process, two datasets, BRATS2018 and BRATS2019, are used without augmentation, achieving an accuracy of more than 95%. A precise comparison of the proposed method with other neural networks shows the significance of this work.
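The abstract does not give the exact EKbHFV formula, so the sketch below simply ranks features by the product of histogram entropy and absolute kurtosis and keeps the top scorers; the combination rule, bin count, and score are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def feature_entropy(x, bins=16):
    """Shannon entropy (bits) of a feature's value histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def feature_kurtosis(x):
    """Fisher kurtosis (a normal distribution scores 0)."""
    x = np.asarray(x, dtype=np.float64)
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 4) - 3.0

def select_high_value_features(X, top_k):
    """Keep the top_k columns of X by an entropy x |kurtosis| score.
    Illustrative stand-in for the paper's EKbHFV selection."""
    scores = np.array([feature_entropy(X[:, j]) * abs(feature_kurtosis(X[:, j]))
                       for j in range(X.shape[1])])
    idx = np.argsort(scores)[::-1][:top_k]
    return X[:, idx], idx
```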
|
17
|
Khan MA, Sharif MI, Raza M, Anjum A, Saba T, Shad SA. Skin lesion segmentation and classification: A unified framework of deep neural network features fusion and selection. EXPERT SYSTEMS 2022; 39. [DOI: 10.1111/exsy.12497] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/15/2019] [Accepted: 10/25/2019] [Indexed: 08/25/2024]
Abstract
Automated skin lesion diagnosis from dermoscopic images is a difficult process due to several notable problems such as artefacts (hairs), irregularity, lesion shape, and irrelevant feature extraction. These problems make the segmentation and classification process difficult. In this research, we propose optimized colour feature (OCF)-based lesion segmentation and deep convolutional neural network (DCNN)-based skin lesion classification. A hybrid technique is proposed to remove the artefacts and improve the lesion contrast. Then, a colour segmentation technique, known as OCFs, is presented. The OCF approach is further improved by an existing saliency approach, fused via a novel pixel-based method. A DCNN-9 model is implemented to extract deep features, which are fused with OCFs by a novel parallel fusion approach. After this, a normal-distribution-based high-ranking feature selection technique is utilized to select the most robust features for classification. The suggested method is evaluated on the ISBI series (2016, 2017, and 2018) datasets. The experiments are performed in two steps, achieving an average segmentation accuracy of more than 90% on the selected datasets. Moreover, the achieved classification accuracies of 92.1%, 96.5%, and 85.1% on the three datasets, respectively, show that the presented method has remarkable performance.
Affiliation(s)
- Muhammad Imran Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad, Pakistan
- Almas Anjum
- Department of Computer Science, University of Wah, Pakistan
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
|
18
|
Arshad H, Khan MA, Sharif MI, Yasmin M, Tavares JMRS, Zhang Y, Satapathy SC. A multilevel paradigm for deep convolutional neural network features selection with an application to human gait recognition. EXPERT SYSTEMS 2022; 39. [DOI: 10.1111/exsy.12541] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/11/2019] [Accepted: 02/07/2020] [Indexed: 08/25/2024]
Abstract
Human gait recognition (HGR) is highly important in the area of video surveillance due to remote access and security threats. HGR is a technique commonly used for identifying a person's walking style in daily life. However, many typical situations, such as a change of clothes or variation in view angle, degrade system performance. Lately, different machine learning (ML) techniques have been introduced for video surveillance with promising results, among which deep learning (DL) shows the best performance in complex scenarios. In this article, an integrated framework is proposed for HGR using a deep neural network and a fuzzy entropy-controlled skewness (FEcS) approach. The proposed technique works in two phases. In the first phase, deep convolutional neural network (DCNN) features are extracted by pre-trained CNN models (VGG19 and AlexNet) and their information is mixed by a parallel fusion approach. In the second phase, entropy and skewness vectors are calculated from the fused feature vector (FV) to select the best subsets of features with the suggested FEcS approach. The best subsets of picked features are finally fed to multiple classifiers, and the finest one is chosen on the basis of accuracy. The experiments were carried out on four well-known datasets: AVAMVG gait, CASIA A, B, and C. The achieved accuracies were 99.8%, 99.7%, 93.3%, and 92.2%, respectively. The overall recognition results therefore lead to the conclusion that the proposed system is very promising.
Affiliation(s)
- Habiba Arshad
- Department of Computer Science, COMSATS University Islamabad, Islamabad, Pakistan
- Muhammad Irfan Sharif
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Islamabad, Pakistan
- João Manuel R. S. Tavares
- Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Departamento de Engenharia Mecânica, Faculdade de Engenharia, Universidade do Porto, Porto, Portugal
- Yu-Dong Zhang
- Department of Informatics, University of Leicester, Leicester, UK
|
19
|
Mohammad F, Al-Razgan M. Deep Feature Fusion and Optimization-Based Approach for Stomach Disease Classification. SENSORS (BASEL, SWITZERLAND) 2022; 22:2801. [PMID: 35408415 PMCID: PMC9003289 DOI: 10.3390/s22072801] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 03/26/2022] [Accepted: 04/02/2022] [Indexed: 01/10/2023]
Abstract
Cancer is the deadliest of all diseases and a main cause of human mortality. Several types of cancer sicken the human body and affect its organs. Among all types, stomach cancer is the most dangerous: it spreads rapidly and needs to be diagnosed at an early stage, which is essential to reduce the mortality rate. The manual diagnosis process is time-consuming and requires many tests and the availability of an expert doctor. Therefore, automated techniques are required to diagnose stomach infections from endoscopic images. Many computerized techniques have been introduced in the literature, but due to a few challenges (i.e., high similarity between healthy and infected regions, irrelevant feature extraction, and so on), there is much room to improve accuracy and reduce computational time. In this paper, a deep-learning-based stomach disease classification method employing deep feature extraction, fusion, and optimization using WCE images is proposed. The proposed method comprises several phases: data augmentation to increase the number of dataset images, deep transfer learning for deep feature extraction, fusion of the deep extracted features, optimization of the fused feature matrix with a modified dragonfly optimization method, and final classification of the stomach disease. The feature extraction phase employed two pre-trained deep CNN models (Inception v3 and DenseNet-201), performing activation on feature derivation layers. Parallel concatenation was then performed on the deep-derived features, which were optimized using the meta-heuristic dragonfly algorithm. The optimized feature matrix was classified by machine-learning algorithms, achieving an accuracy of 99.8% on the combined stomach disease dataset. A comparison conducted with state-of-the-art techniques shows improved accuracy.
Affiliation(s)
- Farah Mohammad
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia
- Muna Al-Razgan
- Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia
|
20
|
21
|
Ayyaz MS, Lali MIU, Hussain M, Rauf HT, Alouffi B, Alyami H, Wasti S. Hybrid Deep Learning Model for Endoscopic Lesion Detection and Classification Using Endoscopy Videos. Diagnostics (Basel) 2021; 12:diagnostics12010043. [PMID: 35054210 PMCID: PMC8775223 DOI: 10.3390/diagnostics12010043] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Revised: 12/22/2021] [Accepted: 12/23/2021] [Indexed: 02/06/2023] Open
Abstract
In medical imaging, the detection and classification of stomach diseases are challenging due to the resemblance among different symptoms, image contrast, and complex backgrounds. Computer-aided diagnosis (CAD) plays a vital role in the medical imaging field, allowing accurate results to be obtained in minimal time. This article proposes a new hybrid method to detect and classify stomach diseases using endoscopy videos. The proposed methodology comprises seven significant steps: data acquisition, data preprocessing, transfer learning of deep models, feature extraction, feature selection, hybridization, and classification. We selected two different CNN models (VGG19 and AlexNet) as feature extractors, applying transfer learning before using them. We used a genetic algorithm (GA) for feature selection due to its adaptive nature, and fused the selected features of both models using a serial-based approach. Finally, the best features were provided to multiple machine learning classifiers for detection and classification. The proposed approach was evaluated on a personally collected dataset of five classes: gastritis, ulcer, esophagitis, bleeding, and healthy. The proposed technique performed best with a cubic SVM, at 99.8% accuracy. To validate the technique, we considered these statistical measures: classification accuracy, recall, precision, False Negative Rate (FNR), Area Under the Curve (AUC), and time. In addition, we provide a fair comparison of the proposed technique with existing state-of-the-art techniques that demonstrates its effectiveness.
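GA-based feature selection of the kind this abstract describes can be sketched as a tiny genetic algorithm over binary feature masks. Everything here is illustrative: the nearest-centroid fitness stands in for the classifier-in-the-loop scoring a real pipeline would use, and the population size, generations, and mutation rate are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Illustrative fitness: nearest-centroid training accuracy on the
    features selected by the binary mask (0 if no feature is selected)."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    classes = np.unique(y)
    centroids = np.stack([Xs[y == c].mean(axis=0) for c in classes])
    d = ((Xs[:, None, :] - centroids[None]) ** 2).sum(axis=2)
    pred = classes[d.argmin(axis=1)]
    return float((pred == y).mean())

def ga_select(X, y, pop=8, gens=5, p_mut=0.1):
    """Tiny GA over binary feature masks: truncation selection,
    single-point crossover, bit-flip mutation."""
    n = X.shape[1]
    P = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in P])
        parents = P[np.argsort(scores)[::-1][:pop // 2]]  # keep the best half
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                      # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n) < p_mut                  # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        P = np.vstack([parents, children])
    scores = np.array([fitness(m, X, y) for m in P])
    return P[scores.argmax()]
```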
Affiliation(s)
- M Shahbaz Ayyaz
- Department of Computer Science, University of Gujrat, Gujrat 50700, Pakistan
- Muhammad Ikram Ullah Lali
- Department of Information Sciences, University of Education Lahore, Lahore 41000, Pakistan
- Mubbashar Hussain
- Department of Computer Science, University of Gujrat, Gujrat 50700, Pakistan
- Hafiz Tayyab Rauf (corresponding author)
- Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
- Bader Alouffi
- Department of Computer Science, College of Computers and Information Technology, Taif University, P. O. Box 11099, Taif 21944, Saudi Arabia
- Hashem Alyami
- Department of Computer Science, College of Computers and Information Technology, Taif University, P. O. Box 11099, Taif 21944, Saudi Arabia
- Shahbaz Wasti
- Department of Information Sciences, University of Education Lahore, Lahore 41000, Pakistan
|
22
|
Rangaraj S, Islam M, Vs V, Wijethilake N, Uppal U, See AAQ, Chan J, James ML, King NKK, Ren H. Identifying risk factors of intracerebral hemorrhage stability using explainable attention model. Med Biol Eng Comput 2021; 60:337-348. [PMID: 34859369 DOI: 10.1007/s11517-021-02459-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2020] [Accepted: 09/29/2021] [Indexed: 10/19/2022]
Abstract
Segmentation of intracerebral hemorrhage (ICH) helps improve the quality of diagnosis, draft the desired treatment methods, and clinically observe the variations with healthy patients. The clinical utilization of various ICH progression scoring systems has limitations due to the systems' modest predictive value. This paper proposes a single pipeline of a multi-task model for end-to-end hemorrhage segmentation and risk estimation. We introduce a 3D spatial attention unit and integrate it into the state-of-the-art segmentation architecture, UNet, to enhance accuracy by bootstrapping the global spatial representation. We further extract geometric features from the segmented hemorrhage volume and fuse them with clinical features such as CT angiography (CTA) spot, Glasgow Coma Scale (GCS), and age to predict ICH stability. Several state-of-the-art machine learning techniques, such as multilayer perceptron (MLP), support vector machine (SVM), gradient boosting, and random forests, are applied to train stability estimation and to compare performance. To align clinical intuition with model learning, we determine the Shapley values (SHAP) and explain the most significant features for the ICH risk scoring system. A total of 79 patients are included, of which 20 are found in critical condition. Our proposed single-pipeline model achieves a segmentation accuracy of 86.3%, stability prediction accuracy of 78.3%, and precision of 82.9%; the mean square error of the exact expansion rate regression is observed to be 0.46. The SHAP analysis reveals that CTA spot sign, age, solidity, location, and the length of the first axis of the ICH volume are the most critical characteristics that help define the stability of the stroke lesion. We also show that integrating significant geometric features with clinical features can improve ICH progression scoring by predicting long-term outcomes.
Graphical abstract: Overview of our proposed method, comprising spatial attention and feature extraction mechanisms. The architecture is trained on the input CT images, and the first-step output is the predicted segmentation of the hemorrhagic region. This output is fed into a geometric feature extractor and fused with clinical features to estimate ICH stability using a multilayer perceptron (MLP).
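The geometric features this abstract fuses with clinical variables can be illustrated with a small numpy sketch. Only an assumed subset is shown (volume, bounding-box extent, first principal-axis length), with illustrative definitions; solidity, for instance, would additionally require a convex-hull computation.

```python
import numpy as np

def geometric_features(mask):
    """Simple shape features from a binary lesion mask:
    volume (voxel count), extent (volume / bounding-box volume), and the
    length of the first principal axis of the voxel cloud."""
    coords = np.argwhere(mask)
    volume = len(coords)
    bbox = (coords.max(axis=0) - coords.min(axis=0) + 1).prod()
    extent = volume / bbox
    centered = coords - coords.mean(axis=0)
    # Spread along the top eigenvector of the coordinate covariance,
    # reported as a +/- 2-sigma span.
    cov = np.cov(centered.T)
    axis_len = 4.0 * np.sqrt(np.linalg.eigvalsh(cov)[-1])
    return {"volume": volume, "extent": extent, "first_axis_length": axis_len}
```

A feature vector like this would then be concatenated with CTA spot, GCS, and age before the MLP stability classifier.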
Affiliation(s)
- Seshasayi Rangaraj
- Department of Biomedical Engineering, National University of Singapore, Singapore, Singapore; Department of ECE, National Institute of Technology, Tiruchirappalli, India
- Mobarakol Islam
- Department of Biomedical Engineering, National University of Singapore, Singapore, Singapore; NUS Graduate School for Integrative Sciences and Engineering (NGS), National University of Singapore, Singapore, Singapore
- Vibashan Vs
- Department of Biomedical Engineering, National University of Singapore, Singapore, Singapore; Department of ECE, National Institute of Technology, Tiruchirappalli, India
- Navodini Wijethilake
- Department of Biomedical Engineering, National University of Singapore, Singapore, Singapore; Department of ENTC, University of Moratuwa, Moratuwa, Sri Lanka
- Utkarsh Uppal
- Department of Biomedical Engineering, National University of Singapore, Singapore, Singapore; Department of Electrical Engineering, Punjab Engineering College, Chandigarh, India
- Angela An Qi See
- Department of Neurosurgery, National Neuroscience Institute, Singapore, Singapore
- Jasmine Chan
- Department of Neurosurgery, National Neuroscience Institute, Singapore, Singapore
- Nicolas Kon Kam King
- Department of Neurosurgery, National Neuroscience Institute, Singapore, Singapore; Neuro Asia Care, Mount Elizabeth Hospital, Singapore, Singapore
- Hongliang Ren
- Department of Biomedical Engineering, National University of Singapore, Singapore, Singapore; Department of Electronic Engineering and Shun Hing Institute of Advanced Engineering, The Chinese University of Hong Kong (CUHK), Hong Kong, Hong Kong
|
23
|
Khan MA, Akram T, Sharif M, Alhaisoni M, Saba T, Nawaz N. A probabilistic segmentation and entropy-rank correlation-based feature selection approach for the recognition of fruit diseases. EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING 2021; 2021:14. [DOI: 10.1186/s13640-021-00558-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/17/2018] [Accepted: 04/22/2021] [Indexed: 08/25/2024]
Abstract
Agriculture plays a critical role in the economy of several countries by providing the main sources of income, employment, and food to their rural populations. However, in recent years, plants and fruits have been widely damaged by different diseases, causing huge losses to farmers, although these losses can be minimized by detecting plant diseases at their earlier stages using pattern recognition (PR) and machine learning (ML) techniques. In this article, an automated system is proposed for the identification and recognition of fruit diseases. Our approach is distinctive in that it overcomes challenges such as convex edges, inconsistency between colors, irregularity, visibility, scale, and origin. The proposed approach incorporates five primary steps: preprocessing, disease identification through segmentation, feature extraction and fusion, feature selection, and classification. The infection regions are extracted using the proposed adaptive and quartile-deviation-based segmentation approach, and the resultant binary images are fused by employing the weighted coefficient of correlation (CoC). Then the most appropriate features are selected using a novel framework of entropy and rank-based correlation (EaRbC). Finally, the selected features are classified using a multi-class support vector machine (MC-SVM). The PlantVillage dataset is utilized for the evaluation of the proposed system, achieving average segmentation and classification accuracies of 93.74% and 97.7%, respectively. Based on this set of statistical measures, the proposed method outperforms existing methods with greater accuracy.
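One plausible reading of the quartile-deviation-based segmentation step can be sketched as follows; the rule below (flagging pixels whose intensity departs from the median by more than the quartile deviation) is an illustrative interpretation, not the paper's exact adaptive formulation.

```python
import numpy as np

def quartile_deviation_threshold(img):
    """Flag candidate infection pixels whose intensity departs from the
    median by more than the quartile deviation (half the interquartile range).
    Illustrative sketch of a quartile-deviation-based segmentation rule."""
    q1, med, q3 = np.percentile(img, [25, 50, 75])
    qd = (q3 - q1) / 2.0
    return np.abs(img.astype(np.float64) - med) > qd
```

The resulting binary mask would then be fused with other candidate masks, as the abstract describes, via the weighted coefficient of correlation.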
|
24
|
Abstract
Brain tumors occur owing to uncontrolled and rapid growth of cells and, if not treated at an initial phase, may lead to death. Despite many significant efforts and promising outcomes in this domain, accurate segmentation and classification remain a challenging task. A major challenge for brain tumor detection arises from the variations in tumor location, shape, and size. The objective of this survey is to deliver comprehensive literature on brain tumor detection through magnetic resonance imaging to help researchers. This survey covers the anatomy of brain tumors, publicly available datasets, enhancement techniques, segmentation, feature extraction, classification, and deep learning, transfer learning, and quantum machine learning for brain tumor analysis. Finally, this survey provides all the important literature for the detection of brain tumors, with their advantages, limitations, developments, and future trends.
|
25
|
Gastrointestinal Tract Disease Classification from Wireless Endoscopy Images Using Pretrained Deep Learning Model. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:5940433. [PMID: 34545292 PMCID: PMC8449743 DOI: 10.1155/2021/5940433] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/10/2021] [Revised: 07/03/2021] [Accepted: 08/16/2021] [Indexed: 12/28/2022]
Abstract
Wireless capsule endoscopy is a noninvasive wireless imaging technology that has become increasingly popular in recent years. One of its major drawbacks is that it generates a large number of photos that must be analyzed by medical personnel, which takes time. Various research groups have proposed different image processing and machine learning techniques to classify gastrointestinal tract diseases. In this research, traditional image processing algorithms and a data augmentation technique are combined with adjusted pretrained deep convolutional neural networks to classify diseases in the gastrointestinal tract from wireless endoscopy images. We take advantage of the pretrained models VGG16, ResNet-18, and GoogLeNet, with adjusted fully connected and output layers. The proposed models are validated with a dataset consisting of 6702 images in 8 classes. The VGG16 model achieved the highest results, with 96.33% accuracy, 96.37% recall, 96.5% precision, and 96.5% F1-measure. Compared to other state-of-the-art models, the VGG16 model also has the highest Matthews Correlation Coefficient (0.95) and Cohen's kappa score (0.96).
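The two agreement metrics this abstract reports can be computed directly from their standard definitions; the sketch below does so in plain numpy for the binary case (in practice, `sklearn.metrics.matthews_corrcoef` and `cohen_kappa_score` cover the multiclass setting the paper evaluates).

```python
import numpy as np

def matthews_corrcoef_binary(y_true, y_pred):
    """MCC for binary labels, from the confusion-matrix counts."""
    y_true = np.asarray(y_true); y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    y_true = np.asarray(y_true); y_pred = np.asarray(y_pred)
    labels = np.unique(np.concatenate([y_true, y_pred]))
    po = np.mean(y_true == y_pred)                                  # observed
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in labels)  # chance
    return (po - pe) / (1 - pe)
```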
|
26
|
Vieira PM, Freitas NR, Lima VB, Costa D, Rolanda C, Lima CS. Multi-pathology detection and lesion localization in WCE videos by using the instance segmentation approach. Artif Intell Med 2021; 119:102141. [PMID: 34531016 DOI: 10.1016/j.artmed.2021.102141] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Revised: 06/10/2021] [Accepted: 08/03/2021] [Indexed: 12/13/2022]
Abstract
The majority of current systems for automatic diagnosis consider the detection of a unique and previously known pathology. Considering specifically the diagnosis of lesions in the small bowel using endoscopic capsule images, very few consider the possible existence of more than one pathology, and when they do, they are mainly detection-based systems, therefore unable to localize the suspected lesions. Such systems do not fully satisfy the medical community, which in fact needs a system that detects any pathology, and possibly more than one when they coexist. In addition, besides the diagnostic capability of these systems, localizing the lesions in the image has been of great interest to the medical community, mainly for training medical personnel; so, nowadays, the inclusion of lesion location in automatic diagnostic systems is practically mandatory. Multi-pathology detection can be seen as a multi-object detection task, and as each frame can contain different instances of the same lesion, instance segmentation seems appropriate for the purpose. Consequently, we argue that a multi-pathology system benefits from the instance segmentation approach, since classification and segmentation modules are both required, complementing each other in lesion detection and localization. To the best of our knowledge, such a system does not yet exist for the detection of WCE pathologies. This paper proposes a multi-pathology system that can be applied to WCE images, which uses the Mask Improved RCNN (MI-RCNN), a new mask subnet scheme shown to significantly improve the mask predictions of the high-performing state-of-the-art Mask-RCNN and PANet systems. A novel training strategy based on the second momentum is also proposed for the first time for training Mask-RCNN and PANet based systems. These approaches were tested using the public database KID, and the included pathologies were bleeding, angioectasias, polyps, and inflammatory lesions. Experimental results show significant improvements for the proposed versions, reaching increases of almost 7% over the PANet model when the new training approach was employed.
Affiliation(s)
- Pedro M Vieira
- CMEMS-UMinho Research Unit, Universidade do Minho, Guimarães, Portugal
- Nuno R Freitas
- CMEMS-UMinho Research Unit, Universidade do Minho, Guimarães, Portugal
- Veríssimo B Lima
- CMEMS-UMinho Research Unit, Universidade do Minho, Guimarães, Portugal; School of Engineering (ISEP), Polytechnic Institute of Porto (P.PORTO), Porto, Portugal
- Dalila Costa
- Life and Health Sciences Research Institute, University of Minho, Campus Gualtar, 4710-057 Braga, Portugal; ICVS/3Bs - PT Government Associate Laboratory, Braga/Guimarães, Portugal; Department of Gastroenterology, Hospital de Braga, Braga, Portugal
- Carla Rolanda
- Life and Health Sciences Research Institute, University of Minho, Campus Gualtar, 4710-057 Braga, Portugal; ICVS/3Bs - PT Government Associate Laboratory, Braga/Guimarães, Portugal; Department of Gastroenterology, Hospital de Braga, Braga, Portugal
- Carlos S Lima
- CMEMS-UMinho Research Unit, Universidade do Minho, Guimarães, Portugal
Collapse
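The entry above evaluates mask prediction quality in instance segmentation. As a minimal illustrative sketch (not the authors' MI-RCNN implementation), the standard metric for comparing a predicted lesion mask against a ground-truth mask is Intersection-over-Union; the toy 6x6 masks below are hypothetical:

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union between two boolean lesion masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / union if union else 1.0

# Toy 6x6 frame: a 3x3 ground-truth lesion vs. a prediction shifted by one pixel.
gt = np.zeros((6, 6)); gt[1:4, 1:4] = 1
pred = np.zeros((6, 6)); pred[2:5, 2:5] = 1
print(round(mask_iou(pred, gt), 3))  # 4 overlapping pixels / 14 in the union
```

Per-instance IoU of this kind is what mask-quality improvements such as those reported for MI-RCNN are measured against.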
|
27
|
Amin J, Anjum MA, Sharif M, Rehman A, Saba T, Zahra R. Microscopic segmentation and classification of COVID-19 infection with ensemble convolutional neural network. Microsc Res Tech 2021; 85:385-397. [PMID: 34435702] [PMCID: PMC8646237] [DOI: 10.1002/jemt.23913]
Abstract
As per the World Health Organization, the detection of viral RNA from sputum has a comparatively poor positive rate in the initial/early stages of COVID-19. On computed tomography (CT), the infection shows a different morphological structure compared to healthy images. COVID-19 diagnosis at an early stage can aid the timely treatment of patients, lowering the mortality rate. In this research, a three-phase model is proposed for COVID-19 detection. In Phase I, noise is removed from CT images using a denoising convolutional neural network (DnCNN). In Phase II, the actual lesion region is segmented from the enhanced CT images using DeepLabv3 and ResNet-18. In Phase III, the segmented images are passed to a stacked sparse autoencoder (SSAE) deep learning model comprising two sparse autoencoders (SAE) with selected hidden layers. The designed SSAE model is based on both SAE and softmax layers for COVID-19 classification. The proposed method is evaluated on actual patient data from Pakistan Ordnance Factories and on other public benchmark data sets acquired with different scanners/mediums. The proposed method achieved a global segmentation accuracy of 0.96 and a classification accuracy of 0.97.
Affiliation(s)
- Javeria Amin
- Department of Computer Science, University of Wah, Wah Cantt, Pakistan
- Muhammad Almas Anjum
- Dean of University, National University of Technology (NUTECH), Islamabad, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad Wah Campus, Wah Cantt, Pakistan
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Rida Zahra
- Department of Computer Science, University of Wah, Wah Cantt, Pakistan
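The abstract above classifies with a stacked sparse autoencoder. As a loose sketch of the autoencoder idea only (a single tanh hidden layer trained by plain gradient descent on reconstruction error, omitting the sparsity penalty and the stacking; data and dimensions are hypothetical, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical feature vectors: 30 samples, 8 inputs, compressed to 4 hidden units.
X = rng.normal(size=(30, 8))
W1, W2 = rng.normal(0, 0.1, (8, 4)), rng.normal(0, 0.1, (4, 8))

def recon_error(X, W1, W2):
    """Mean squared reconstruction error of the autoencoder."""
    return float(((np.tanh(X @ W1) @ W2 - X) ** 2).mean())

err0 = recon_error(X, W1, W2)
for _ in range(200):                  # gradient descent on the reconstruction MSE
    H = np.tanh(X @ W1)               # encoder
    R = H @ W2 - X                    # reconstruction residual
    gW2 = H.T @ R / len(X)
    gH = R @ W2.T * (1 - H ** 2)      # backprop through tanh
    gW1 = X.T @ gH / len(X)
    W1 -= 0.1 * gW1
    W2 -= 0.1 * gW2
print(err0, recon_error(X, W1, W2))   # error drops as the code layer learns
```

In an SSAE as described above, the learned hidden codes of one such autoencoder feed the next, with a softmax layer on top for classification.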
|
28
|
Sharif M, Attique Khan M, Rashid M, Yasmin M, Afza F, Tanik UJ. Deep CNN and geometric features-based gastrointestinal tract diseases detection and classification from wireless capsule endoscopy images. J Exp Theor Artif In 2021; 33:577-599. [DOI: 10.1080/0952813x.2019.1572657]
Affiliation(s)
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Muhammad Rashid
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Farhat Afza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Urcun John Tanik
- Department of Computer Science and Information Systems, Texas A&M University-Commerce, USA
|
29
|
Recognizing Gastrointestinal Malignancies on WCE and CCE Images by an Ensemble of Deep and Handcrafted Features with Entropy and PCA Based Features Optimization. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10481-2]
|
30
|
3D-semantic segmentation and classification of stomach infections using uncertainty aware deep neural networks. Complex Intell Syst 2021. [DOI: 10.1007/s40747-021-00328-7]
Abstract
Wireless capsule endoscopy (WCE) moves through the human body and captures video of the small bowel; because every frame of the video must be analyzed, the diagnosis of gastrointestinal infections is a tedious task for the physician. This tiresome assignment has fuelled researchers' efforts to present automated techniques for gastrointestinal infection detection. The segmentation of stomach infections is challenging because the lesion region has low contrast and irregular shape and size. To handle this challenging task, this work proposes a new deep semantic segmentation model for 3D segmentation of the different types of stomach infections. In the segmentation model, DeepLabv3 is employed with the ResNet-50 model as backbone. The model is trained with ground-truth masks and accurately performs pixel-wise classification in the testing phase. Because of the similarity among the different types of stomach lesions, their accurate classification is a difficult task; this is addressed in the reported research by extracting deep features from the global input images using a pre-trained ResNet-50 model. Furthermore, the latest advances in uncertainty estimation and model interpretability are applied to the classification of the different types of stomach infections. The classification results estimate the uncertainty related to the vital features in the input and show how uncertainty and interpretability might be modeled in ResNet-50 for the classification of the different types of stomach infections. The proposed model achieved prediction scores of up to 90%, authenticating the method's performance.
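A common way to quantify the predictive uncertainty mentioned above is the entropy of the class distribution averaged over several stochastic forward passes (e.g. Monte Carlo dropout). This is an illustrative sketch of that general recipe, not the paper's model; the toy softmax outputs are hypothetical:

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Entropy (nats) of the mean class distribution over T stochastic passes.

    probs: (T, C) array of per-pass softmax outputs.
    """
    p = probs.mean(axis=0)
    p = np.clip(p, 1e-12, 1.0)        # numerical safety for log
    return float(-(p * np.log(p)).sum())

# Confident prediction: every pass agrees on class 0.
confident = np.array([[0.97, 0.02, 0.01]] * 10)
# Uncertain prediction: the passes disagree across the three lesion types.
uncertain = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
print(predictive_entropy(confident) < predictive_entropy(uncertain))  # True
```

High entropy flags inputs whose classification should be deferred to a physician, which is the clinical motivation for uncertainty-aware models.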
|
31
|
Khan MA, Qasim M, Lodhi HMJ, Nazir M, Javed K, Rubab S, Din A, Habib U. Automated design for recognition of blood cells diseases from hematopathology using classical features selection and ELM. Microsc Res Tech 2021; 84:202-216. [PMID: 32893918] [DOI: 10.1002/jemt.23578]
Abstract
In the human immune system, white blood cells (WBC) are produced in the bone marrow and lymphoid tissues. These cells defend the human body against several infections, such as fungal and bacterial ones. The popular WBC types are eosinophils, lymphocytes, neutrophils, and monocytes, which are diagnosed manually by experts. The manual diagnosis process is complicated and time-consuming; therefore, an automated system is required to classify these WBC. In this article, a new method is presented for WBC classification using feature selection and an extreme learning machine (ELM). In the first step, data augmentation is performed to increase the number of images, and a new contrast stretching technique named pixel stretch (PS) is implemented. In the next step, color and gray level size zone matrix (GLSZM) features are calculated from the PS images and fused into one vector based on their high similarity. However, a few redundant features are also included, which affect the classification performance. To handle this problem, a maximum relevance probability (MRP) based feature selection technique is implemented, in which the ELM serves as the fitness function for computing the best selected features. All maximum relevance features are fed to the ELM, and this process continues until the error rate is minimized. In the end, the final selected features are classified with a cubic SVM. For validation of the proposed method, the LISC and Dhruv datasets are used, and the method achieved a highest accuracy of 96.60%. The results clearly show an improvement over the other implemented techniques.
Affiliation(s)
- Muhammad Qasim
- Department of Computer Science, HITEC University, Museum Road, Taxila, Pakistan
- Muhammad Nazir
- Department of Computer Science, HITEC University, Museum Road, Taxila, Pakistan
- Kashif Javed
- Department of Robotics, SMME NUST, Islamabad, Pakistan
- Saddaf Rubab
- Military College of Signals, NUST, Islamabad, Pakistan
- Ahmad Din
- Department of CS, COMSATS University Islamabad, Abbottabad, Pakistan
- Usman Habib
- Department of Computer Science, FAST- National University of Computer & Emerging Sciences (NUCES), Chiniot-Faisalabad Campus, Faisalabad-Chiniot Road, Faisalabad, Punjab, Pakistan
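The extreme learning machine used above trains only the output layer: the hidden layer is random and fixed, and the output weights are obtained in closed form by least squares. A minimal sketch of that idea on hypothetical two-class feature vectors (not the paper's data or dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class "feature vectors": two well-separated Gaussian blobs, 4-D.
X = np.vstack([rng.normal(-2, 0.5, (40, 4)), rng.normal(2, 0.5, (40, 4))])
y = np.array([0] * 40 + [1] * 40)
Y = np.eye(2)[y]                      # one-hot targets

# ELM: random, untrained hidden layer; only the output weights are solved for.
W = rng.normal(size=(4, 20))
b = rng.normal(size=20)
H = np.tanh(X @ W + b)                # random nonlinear feature map
beta = np.linalg.pinv(H) @ Y          # closed-form least-squares output layer

pred = (H @ beta).argmax(axis=1)
acc = (pred == y).mean()
print(acc)
```

Because no backpropagation is needed, an ELM of this form is cheap enough to be evaluated repeatedly inside a feature-selection loop, which is how the abstract describes its use as a fitness function.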
|
32
|
Caroppo A, Leone A, Siciliano P. Deep transfer learning approaches for bleeding detection in endoscopy images. Comput Med Imaging Graph 2021; 88:101852. [PMID: 33493998] [DOI: 10.1016/j.compmedimag.2020.101852]
Abstract
Wireless capsule endoscopy is a non-invasive, wireless imaging tool that has developed rapidly over the last several years. One of the main limiting factors of this technology is that it produces a huge number of images, whose analysis by a doctor is an extremely time-consuming process. In this research area, the problem has been addressed through the development of computer-aided diagnosis systems, thanks to which the automatic inspection and analysis of images acquired by the capsule has clearly improved. Recently, a big advance in the classification of endoscopic images has been achieved with the emergence of deep learning methods. The proposed expert system employs three pre-trained deep convolutional neural networks for feature extraction. In order to construct efficient feature sets, the features from the VGG19, InceptionV3, and ResNet50 models are selected and fused using the minimum Redundancy Maximum Relevance method and different fusion rules. Finally, supervised machine learning algorithms are employed to classify the images, using the extracted features, into two categories: bleeding and non-bleeding images. For performance evaluation, a series of experiments was performed on two standard benchmark datasets. The proposed architecture outperforms the single deep learning architectures, with average accuracies in detecting bleeding regions of 97.65% and 95.70% on well-known state-of-the-art datasets considering three different fusion rules; the best combination in terms of accuracy and training time was obtained using mean value pooling as the fusion rule and a Support Vector Machine as the classifier.
Affiliation(s)
- Andrea Caroppo
- Institute for Microelectronics and Microsystems, National Research Council of Italy, Lecce 73100, Italy
- Alessandro Leone
- Institute for Microelectronics and Microsystems, National Research Council of Italy, Lecce 73100, Italy
- Pietro Siciliano
- Institute for Microelectronics and Microsystems, National Research Council of Italy, Lecce 73100, Italy
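The mean value pooling fusion rule reported as best above reduces, in essence, to an element-wise mean of equal-length feature vectors from the different backbones. A minimal sketch with hypothetical 5-dimensional descriptors (the real selected feature vectors are far longer):

```python
import numpy as np

def mean_pool_fusion(feature_sets):
    """Fuse equal-length feature vectors from several backbones by element-wise mean."""
    F = np.stack(feature_sets)        # shape: (n_models, n_features)
    return F.mean(axis=0)

# Hypothetical post-selection descriptors from three backbones for one frame.
vgg = np.array([0.2, 0.4, 0.1, 0.9, 0.3])
inception = np.array([0.4, 0.2, 0.3, 0.7, 0.1])
resnet = np.array([0.3, 0.3, 0.2, 0.8, 0.2])
fused = mean_pool_fusion([vgg, inception, resnet])
print(fused)                          # element-wise mean of the three vectors
```

The fused vector then goes to the downstream classifier (an SVM in the entry above) in place of any single backbone's features.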
|
33
|
Attique Khan M, Mashood Nasir I, Sharif M, Alhaisoni M, Kadry S, Ahmad Chan Bukhari S, Nam Y. A Blockchain based Framework for Stomach Abnormalities Recognition. Computers, Materials & Continua 2021; 67:141-158. [DOI: 10.32604/cmc.2021.013217]
|
34
|
Attique Khan M, Majid A, Hussain N, Alhaisoni M, Zhang YD, Kadry S, Nam Y. Multiclass Stomach Diseases Classification Using Deep Learning Features Optimization. Computers, Materials & Continua 2021; 67:3381-3399. [DOI: 10.32604/cmc.2021.014983]
|
35
|
Naz J, Attique Khan M, Alhaisoni M, Song OY, Tariq U, Kadry S. Segmentation and Classification of Stomach Abnormalities Using Deep Learning. Computers, Materials & Continua 2021; 69:607-625. [DOI: 10.32604/cmc.2021.017101]
|
36
|
Naz J, Sharif M, Yasmin M, Raza M, Khan MA. Detection and Classification of Gastrointestinal Diseases using Machine Learning. Curr Med Imaging 2021; 17:479-490. [PMID: 32988355] [DOI: 10.2174/1573405616666200928144626]
Abstract
BACKGROUND Traditional endoscopy is an invasive and painful method of examining the gastrointestinal tract (GIT), not favored by physicians or patients. To handle this issue, video endoscopy (VE) or wireless capsule endoscopy (WCE) is recommended and utilized for GIT examination. Furthermore, manual assessment of the captured images is not feasible even for an expert physician, because thoroughly analyzing thousands of images is a time-consuming task. Hence arises the need for a Computer-Aided Diagnosis (CAD) method to help doctors analyze images. Many researchers have proposed techniques for the automated recognition and classification of abnormalities in captured images. METHODS In this article, existing methods for automated classification, segmentation, and detection of several GI diseases are discussed. The paper gives comprehensive detail about these state-of-the-art methods. Furthermore, the literature is divided into several subsections based on preprocessing techniques, segmentation techniques, handcrafted-features-based techniques, and deep-learning-based techniques. Finally, issues, challenges, and limitations are also discussed. RESULTS A comparative analysis of different approaches for the detection and classification of GI infections is presented. CONCLUSION This comprehensive review article combines information related to a number of GI disease diagnosis methods in one place. It will facilitate researchers in developing new algorithms and approaches for the early detection of GI diseases, with more promising results compared to the existing literature.
Affiliation(s)
- Javeria Naz
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
|
37
|
Ramzan M, Raza M, Sharif M, Attique Khan M, Nam Y. Gastrointestinal Tract Infections Classification Using Deep Learning. Computers, Materials & Continua 2021; 69:3239-3257. [DOI: 10.32604/cmc.2021.015920]
|
38
|
Attique Khan M, Hussain N, Majid A, Alhaisoni M, Ahmad Chan Bukhari S, Kadry S, Nam Y, Zhang YD. Classification of Positive COVID-19 CT Scans using Deep Learning. Computers, Materials & Continua 2021; 66:2923-2938. [DOI: 10.32604/cmc.2021.013191]
|
39
|
Khan MA, Akram T, Sharif M, Javed K, Rashid M, Bukhari SAC. An integrated framework of skin lesion detection and recognition through saliency method and optimal deep neural network features selection. Neural Comput Appl 2020; 32:15929-15948. [DOI: 10.1007/s00521-019-04514-0]
|
40
|
Rashid M, Khan MA, Alhaisoni M, Wang SH, Naqvi SR, Rehman A, Saba T. A Sustainable Deep Learning Framework for Object Recognition Using Multi-Layers Deep Features Fusion and Selection. Sustainability 2020; 12:5037. [DOI: 10.3390/su12125037]
Abstract
With an overwhelming increase in the demand for autonomous systems, especially in applications related to intelligent robotics and visual surveillance, come stringent accuracy requirements for complex object recognition. A system that maintains its performance against a change in the object's nature is said to be sustainable, and this has become a major area of research for the computer vision community in the past few years. In this work, we present a sustainable deep learning architecture, which utilizes multi-layer deep feature fusion and selection, for accurate object classification. The proposed approach comprises three steps: (1) features are extracted based on transfer learning from two deep learning architectures, Very Deep Convolutional Networks for Large-Scale Image Recognition and Inception V3; (2) all the extracted feature vectors are fused by means of a parallel maximum covariance approach; and (3) the best features are selected using the Multi Logistic Regression controlled Entropy-Variances method. For verification of the robustly selected features, the ensemble learning method named Subspace Discriminant Analysis is utilized as a fitness function. The experimental process is conducted on four publicly available datasets, Caltech-101, Birds, Butterflies, and CIFAR-100, with a ten-fold validation process, yielding best accuracies of 95.5%, 100%, 98%, and 68.80%, respectively. Based on detailed statistical analysis and comparison with existing methods, the proposed selection method achieves significantly higher accuracy. Moreover, its computational time is well suited to real-time implementation.
Affiliation(s)
- Muhammad Rashid
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha’il, Ha’il 55211, Saudi Arabia
- Shui-Hua Wang
- School of Architecture Building and Civil Engineering, Loughborough University, Loughborough LE11 3TU, UK
- Syed Rameez Naqvi
- Department of EE, COMSATS University Islamabad, Wah Campus, Wah 47040, Pakistan
- Amjad Rehman
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
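A core ingredient of the entropy-variance style feature selection mentioned above is ranking fused feature columns by how much they actually vary across samples. This is only a loose illustrative sketch of variance-based ranking (not the authors' Multi Logistic Regression controlled method); the 4x3 feature matrix is hypothetical:

```python
import numpy as np

def select_by_variance(F: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k highest-variance columns of a fused feature matrix."""
    return np.argsort(F.var(axis=0))[::-1][:k]

# Fused feature matrix: column 1 is nearly constant (uninformative),
# column 2 is highly spread, column 0 is in between.
F = np.array([[1.0, 5.0, 0.1],
              [2.0, 5.1, 9.0],
              [3.0, 4.9, 0.2],
              [4.0, 5.0, 8.8]])
print(select_by_variance(F, 2))       # keeps the two most variable columns
```

Near-constant columns carry little discriminative information, so dropping them shrinks the fused vector before the classifier is trained.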
|
41
|
Majid A, Khan MA, Yasmin M, Rehman A, Yousafzai A, Tariq U. Classification of stomach infections: A paradigm of convolutional neural network along with classical features fusion and selection. Microsc Res Tech 2020; 83:562-576. [PMID: 31984630] [DOI: 10.1002/jemt.23447]
Abstract
Automated detection and classification of gastric infections (i.e., ulcer, polyp, esophagitis, and bleeding) through wireless capsule endoscopy (WCE) is still a key challenge. Doctors can identify these endoscopic diseases with the help of computer-aided diagnostic (CAD) systems. In this article, a new fully automated system is proposed for the recognition of gastric infections through multi-type feature extraction, fusion, and robust feature selection. Five key steps are performed: database creation; handcrafted and convolutional neural network (CNN) deep feature extraction; fusion of the extracted features; selection of the best features using a genetic algorithm (GA); and recognition. In the feature extraction step, discrete cosine transform, discrete wavelet transform, strong color features, and VGG16-based CNN features are extracted. These features are then fused by simple array concatenation, and a GA is performed through which the best features are selected based on a K-Nearest Neighbor fitness function. In the end, the best selected features are provided to an ensemble classifier for the recognition of gastric diseases. A database is prepared from four datasets, Kvasir, CVC-ClinicDB, Private, and ETIS-LaribPolypDB, with four types of gastric infections: ulcer, polyp, esophagitis, and bleeding. Using this database, the proposed technique outperforms existing methods and achieves an accuracy of 96.5%.
Affiliation(s)
- Abdul Majid
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Muhammad Attique Khan
- Department of Computer Science, HITEC University Museum Road, Taxila, Rawalpindi, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Amjad Rehman
- AIDA Lab CCIS, Prince Sultan University Riyadh, Riyadh, Saudi Arabia
- Abdullah Yousafzai
- Department of Computer Science, HITEC University Museum Road, Taxila, Rawalpindi, Pakistan
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
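The genetic-algorithm feature selection with a K-Nearest Neighbor fitness function described above can be sketched compactly: binary masks over features evolve by elitism, crossover, and mutation, scored by leave-one-out 1-NN accuracy. This is a toy illustration under stated assumptions (one deliberately informative feature, five noise features, a deliberately tiny GA), not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: feature 0 carries the class signal, features 1-5 are pure noise.
n = 60
y = np.repeat([0, 1], n // 2)
X = rng.normal(size=(n, 6))
X[:, 0] = np.where(y == 0, -3.0, 3.0) + rng.normal(0, 0.2, n)

def loo_1nn_accuracy(mask):
    """Leave-one-out 1-NN accuracy on the masked feature subset (the GA fitness)."""
    if not mask.any():
        return 0.0
    Z = X[:, mask.astype(bool)]
    d = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                    # exclude each point itself
    return float((y[d.argmin(axis=1)] == y).mean())

# Minimal GA: elitism + uniform crossover + bit-flip mutation over binary masks.
pop = rng.integers(0, 2, size=(20, 6))
for _ in range(15):
    fit = np.array([loo_1nn_accuracy(m) for m in pop])
    parents = pop[np.argsort(fit)[::-1][:10]]      # keep the fitter half
    kids = []
    for _ in range(10):
        a, b = parents[rng.integers(0, 10, 2)]
        child = np.where(rng.random(6) < 0.5, a, b)  # uniform crossover
        flip = rng.random(6) < 0.1                   # mutation
        kids.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, kids])

fit = np.array([loo_1nn_accuracy(m) for m in pop])
best = pop[fit.argmax()]
print(best, fit.max())
```

Because only masks containing the informative feature score well, the surviving elite reliably includes it; in the real system the same loop runs over thousands of fused handcrafted and CNN features.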
|
42
|
Sharif M, Attique M, Tahir MZ, Yasmin M, Saba T, Tanik UJ. A Machine Learning Method with Threshold Based Parallel Feature Fusion and Feature Selection for Automated Gait Recognition. J Organ End User Com 2020. [DOI: 10.4018/joeuc.2020040104]
Abstract
Gait is a vital biometric process for human identification in the domain of machine learning. In this article, a new method is implemented for human gait recognition based on accurate segmentation and multi-level feature extraction. Four major steps are performed: (a) enhancement of the motion region in each frame by the implementation of a linear transformation with the HSI color space; (b) Region of Interest (ROI) detection based on the parallel implementation of optical flow and background subtraction; (c) shape and geometric feature extraction and parallel fusion; and (d) multi-class support vector machine (MSVM) utilization for recognition. The presented approach reduces the error rate and increases the CCR. Extensive experiments are conducted on three datasets, namely CASIA-A, CASIA-B, and CASIA-C, which present different variations in clothing and carrying conditions. The proposed method achieved maximum recognition results of 98.6% on CASIA-A, 93.5% on CASIA-B, and 97.3% on CASIA-C, respectively.
Affiliation(s)
- Muhammad Sharif
- Department of CS, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Muhammad Attique
- Department of Computer Science, HITEC University, Museum Road Taxila, Pakistan
- Mussarat Yasmin
- Department of CS, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Tanzila Saba
- Artificial Intelligence & Data Analytics (AIDA) Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
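Step (b) above detects the moving region with background subtraction. In its simplest form this is absolute frame differencing followed by a bounding box over the changed pixels; the sketch below is illustrative only (the paper combines this with optical flow), and the 8x8 frames are hypothetical:

```python
import numpy as np

def motion_roi(prev: np.ndarray, curr: np.ndarray, thresh: float = 25.0):
    """Bounding box (rmin, rmax, cmin, cmax) of moving pixels via frame differencing."""
    diff = np.abs(curr.astype(float) - prev.astype(float)) > thresh
    if not diff.any():
        return None                   # no motion detected
    rows, cols = np.nonzero(diff)
    return rows.min(), rows.max(), cols.min(), cols.max()

# Toy 8x8 frames: a bright 2x2 "walker" moves one pixel to the right.
prev = np.zeros((8, 8)); prev[3:5, 2:4] = 200
curr = np.zeros((8, 8)); curr[3:5, 3:5] = 200
print(motion_roi(prev, curr))         # box spanning the vacated and newly occupied pixels
```

The resulting ROI is what downstream shape and geometric features would be computed from.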
|
43
|
Amin J, Sharif A, Gul N, Anjum MA, Nisar MW, Azam F, Bukhari SAC. Integrated design of deep features fusion for localization and classification of skin cancer. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2019.11.042]
|
44
|
Choi J, Shin K, Jung J, Bae HJ, Kim DH, Byeon JS, Kim N. Convolutional Neural Network Technology in Endoscopic Imaging: Artificial Intelligence for Endoscopy. Clin Endosc 2020; 53:117-126. [PMID: 32252504] [PMCID: PMC7137563] [DOI: 10.5946/ce.2020.054]
Abstract
Recently, significant improvements have been made in artificial intelligence. The artificial neural network was introduced in the 1950s; however, because of the low computing power and insufficient datasets available at the time, artificial neural networks suffered from overfitting and vanishing gradient problems when training deep networks. The concept has become more promising owing to enhanced big data processing capability, improved computing power with parallel processing units, and new algorithms for deep neural networks, which are becoming increasingly successful and attracting interest in many domains, including computer vision, speech recognition, and natural language processing. Recent studies in this technology augur well for medical and healthcare applications, especially in endoscopic imaging. This paper provides perspectives on the history, development, applications, and challenges of deep-learning technology.
Affiliation(s)
- Joonmyeong Choi
- Department of Convergence Medicine, University of Ulsan College of Medicine, Seoul, Korea
- Keewon Shin
- Department of Convergence Medicine, University of Ulsan College of Medicine, Seoul, Korea
- Do Hoon Kim
- Department of Gastroenterology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Jeong-Sik Byeon
- Department of Gastroenterology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Namku Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Seoul, Korea; Department of Radiology, Asan Medical Center, Seoul, Korea
|
45
|
Khan MA, Khan MA, Ahmed F, Mittal M, Goyal LM, Jude Hemanth D, Satapathy SC. Gastrointestinal diseases segmentation and classification based on duo-deep architectures. Pattern Recognit Lett 2020; 131:193-204. [DOI: 10.1016/j.patrec.2019.12.024]
|
46
|
Sharif MI, Li JP, Naz J, Rashid I. A comprehensive review on multi-organs tumor detection based on machine learning. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2019.12.006]
|
47
|
|
48
|
|
49
|
Sharif M, Amin J, Raza M, Yasmin M, Satapathy SC. An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2019.11.017]
|
50
|
Khan MA, Sharif M, Akram T, Bukhari SAC, Nayak RS. Developed Newton-Raphson based deep features selection framework for skin lesion recognition. Pattern Recognit Lett 2020; 129:293-303. [DOI: 10.1016/j.patrec.2019.11.034]
|