1
Al-Shamasneh AR, Ibrahim RW. Classification of tomato leaf images for detection of plant disease using conformable polynomials image features. MethodsX 2024; 13:102844. PMID: 39092277; PMCID: PMC11292356; DOI: 10.1016/j.mex.2024.102844.
Abstract
Plant diseases can spread rapidly, leading to significant crop losses if not detected early. By accurately identifying diseased plants, farmers can target treatment only to the affected areas, reducing the amount of pesticide or fungicide needed and minimizing environmental impact. Tomatoes are among the most significant and extensively consumed crops worldwide, and leaf disease is the main factor affecting crop yield and quality. Automated classification of leaf images allows early identification of diseased plants, enabling prompt intervention and control measures. Many approaches to diagnosing and categorizing specific diseases have been employed, but manual methods are costly and labor-intensive. Image processing combined with machine learning algorithms can facilitate disease detection without the assistance of an agricultural specialist. In this study, diseases in tomato leaves are detected using a new feature extraction method based on conformable polynomials image features, for accurate and fast detection of plant diseases through a machine learning model. The methodology of this study comprises:
•Preprocessing, feature extraction, dimension reduction, and classification modules.
•A conformable polynomials method used to extract texture features, which are passed to the classifier.
•A proposed texture feature constructed from two parts: an enhancement-based term and a texture-detail term for textural analysis.
•Tomato leaf samples from the PlantVillage image dataset, used to gather the data for this model.
Disease detection reached 98.80% accuracy on tomato leaf images using an SVM classifier. In addition to lowering financial loss, the suggested feature extraction method can help manage plant diseases effectively, improving crop yield and food security.
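The pipeline above (texture-feature extraction followed by a classical classifier) can be sketched generically. The conformable-polynomial construction itself is not reproduced here; `texture_features` below is a hypothetical stand-in pairing an enhancement-style base term with a texture-detail term, and a nearest-centroid rule stands in for the SVM:

```python
import numpy as np

def texture_features(img):
    """Toy stand-in for the paper's two-part texture feature: an
    'enhanced base' term (mean brightness) plus a 'texture detail'
    term (gradient energy), with the global contrast as a third cue."""
    img = img.astype(float)
    base = img.mean()
    gy, gx = np.gradient(img)
    detail = np.mean(gx ** 2 + gy ** 2)
    return np.array([base, detail, img.std()])

def nearest_centroid_fit(X, y):
    """One centroid per class in feature space."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(X, classes, centroids):
    """Assign each sample to the class of its nearest centroid."""
    d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]
```

Healthy versus diseased leaves mostly differ in local texture, so even these three generic statistics separate a smooth class from a rough one; the paper's conformable-polynomial features play the same role with far more discriminative power.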
Affiliation(s)
- Ala'a R. Al-Shamasneh
- Department of Computer Science, College of Computer & Information Sciences, Prince Sultan University, Rafha Street, Riyadh 11586, Saudi Arabia
- Rabha W. Ibrahim
- Faculty of Engineering and Natural Sciences, Advanced Computing Lab, Istanbul Okan University, 34959, Türkiye
- Information and Communication Technology Research Group, Scientific Research Center, Alayen University, Nile Street, 64001, Dhi Qar, Iraq
2
Zhang S, Webers CAB, Berendschot TTJM. Computational single fundus image restoration techniques: a review. Frontiers in Ophthalmology 2024; 4:1332197. PMID: 38984141; PMCID: PMC11199880; DOI: 10.3389/fopht.2024.1332197.
Abstract
Fundus cameras are widely used by ophthalmologists for monitoring and diagnosing retinal pathologies. Unfortunately, no optical system is perfect, and the visibility of retinal images can be greatly degraded due to the presence of problematic illumination, intraocular scattering, or blurriness caused by sudden movements. To improve image quality, different retinal image restoration/enhancement techniques have been developed, which play an important role in improving the performance of various clinical and computer-assisted applications. This paper gives a comprehensive review of these restoration/enhancement techniques, discusses their underlying mathematical models, and shows how they may be effectively applied in real-life practice to increase the visual quality of retinal images for potential clinical applications including diagnosis and retinal structure recognition. All three main topics of retinal image restoration/enhancement techniques, i.e., illumination correction, dehazing, and deblurring, are addressed. Finally, some considerations about challenges and the future scope of retinal image restoration/enhancement techniques will be discussed.
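As a concrete illustration of the illumination-correction family discussed above, here is a minimal sketch (not any specific method from the reviewed literature): the uneven illumination field is estimated with a large box blur and divided out, then the result is rescaled to the original mean brightness:

```python
import numpy as np

def box_blur(img, k):
    """Separable k-by-k box filter with edge padding (k odd)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    kern = np.ones(k) / k
    p = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, p)
    p = np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, p)
    return p

def correct_illumination(img, k=31):
    """Divide out a smooth background estimate (the illumination field),
    then rescale to the original mean intensity."""
    background = box_blur(img, k)
    flat = img / np.maximum(background, 1e-6)
    return flat * img.mean()
```

Real fundus pipelines refine this idea (e.g. per-channel correction, robust background models), but the estimate-then-divide structure is the common core of the illumination-correction methods surveyed.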
Affiliation(s)
- Shuhe Zhang
- University Eye Clinic Maastricht, Maastricht University Medical Center, Maastricht, Netherlands
- Carroll A B Webers
- University Eye Clinic Maastricht, Maastricht University Medical Center, Maastricht, Netherlands
- Tos T J M Berendschot
- University Eye Clinic Maastricht, Maastricht University Medical Center, Maastricht, Netherlands
3
Alyami J. Computer-aided analysis of radiological images for cancer diagnosis: performance analysis on benchmark datasets, challenges, and directions. EJNMMI Reports 2024; 8:7. PMID: 38748374; PMCID: PMC10982256; DOI: 10.1186/s41824-024-00195-8.
Abstract
Radiological image analysis using machine learning has been extensively applied to enhance biopsy diagnosis accuracy and to assist radiologists toward precise treatment. With improvements in the medical industry and its technology, computer-aided diagnosis (CAD) systems have become essential for detecting early cancer signs in patients that could not be observed physically, without introducing errors. CAD is a detection system that combines artificially intelligent techniques with image processing applications through computer vision. Several manual procedures, such as reading CT scans, radiographs, and MRI scans, are reported in the state of the art for cancer diagnosis, but they are costly, time-consuming, and often diagnose cancer only at late stages. In this research, numerous state-of-the-art approaches to multi-organ detection in clinical practice are evaluated, covering cancer, neurological, psychiatric, cardiovascular, and abdominal imaging. Additionally, numerous sound approaches are clustered together and their results are assessed and compared on benchmark datasets. Standard metrics such as accuracy, sensitivity, specificity, and false-positive rate are employed to check the validity of the models reported in the literature. Finally, existing issues are highlighted and possible directions for future work are suggested.
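The standard metrics mentioned (accuracy, sensitivity, specificity, false-positive rate) all derive from the binary confusion matrix; a small, self-contained sketch:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts:
    tp/fp/tn/fn = true/false positives and negatives."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # recall / true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "fpr": fp / (fp + tn),           # false-positive rate = 1 - specificity
    }
```

Reporting sensitivity and specificity together matters in diagnosis because accuracy alone can look high on imbalanced datasets while missing most positive (diseased) cases.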
Affiliation(s)
- Jaber Alyami
- Department of Radiological Sciences, Faculty of Applied Medical Sciences, King Abdulaziz University, 21589, Jeddah, Saudi Arabia.
- King Fahd Medical Research Center, King Abdulaziz University, 21589, Jeddah, Saudi Arabia.
- Smart Medical Imaging Research Group, King Abdulaziz University, 21589, Jeddah, Saudi Arabia.
- Medical Imaging and Artificial Intelligence Research Unit, Center of Modern Mathematical Sciences and its Applications, King Abdulaziz University, 21589, Jeddah, Saudi Arabia.
4
Zedan MJM, Zulkifley MA, Ibrahim AA, Moubark AM, Kamari NAM, Abdani SR. Automated Glaucoma Screening and Diagnosis Based on Retinal Fundus Images Using Deep Learning Approaches: A Comprehensive Review. Diagnostics (Basel) 2023; 13:2180. PMID: 37443574; DOI: 10.3390/diagnostics13132180.
Abstract
Glaucoma is a chronic eye disease that may lead to permanent vision loss if it is not diagnosed and treated at an early stage. The disease originates from irregular behavior in the drainage flow of the eye that eventually leads to an increase in intraocular pressure, which in the severe stage of the disease deteriorates the optic nerve head and leads to vision loss. Periodic medical follow-ups by ophthalmologists are needed to observe the retinal area, and interpreting the results appropriately requires an extensive degree of skill and experience. To improve on this issue, algorithms based on deep learning techniques have been designed to screen and diagnose glaucoma from retinal fundus image input and to analyze images of the optic nerve and retinal structures. The objective of this paper is therefore to provide a systematic analysis of 52 state-of-the-art studies on the screening and diagnosis of glaucoma, including the datasets used in the development of the algorithms, performance metrics, and modalities employed in each article. Furthermore, this review analyzes and evaluates the methods used and compares their strengths and weaknesses in an organized manner. It also explores a wide range of diagnostic procedures, such as image pre-processing, localization, classification, and segmentation. In conclusion, automated glaucoma diagnosis has shown considerable promise when deep learning algorithms are applied; such algorithms could make glaucoma diagnosis more accurate, efficient, and faster.
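Many of the fundus-based pipelines of this kind quantify optic nerve head damage via the vertical cup-to-disc ratio computed from segmented cup and disc masks. A minimal sketch of that measurement (the mask layout in the example is illustrative, not from any specific reviewed paper):

```python
import numpy as np

def vertical_extent(mask):
    """Height in pixels of the vertical span covered by a binary mask."""
    rows = np.where(mask.any(axis=1))[0]
    return 0 if rows.size == 0 else int(rows[-1] - rows[0] + 1)

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks;
    larger ratios are associated with glaucomatous damage."""
    disc = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc if disc else float("nan")
```

This is why segmentation quality matters so much in these systems: a few mispredicted boundary rows in either mask shift the ratio directly.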
Affiliation(s)
- Mohammad J M Zedan
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Computer and Information Engineering Department, College of Electronics Engineering, Ninevah University, Mosul 41002, Iraq
- Mohd Asyraf Zulkifley
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Ahmad Asrul Ibrahim
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Asraf Mohamed Moubark
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Nor Azwan Mohamed Kamari
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Selangor, Malaysia
- Siti Raihanah Abdani
- School of Computing Sciences, College of Computing, Informatics and Media, Universiti Teknologi MARA, Shah Alam 40450, Selangor, Malaysia
5
Muchuchuti S, Viriri S. Retinal Disease Detection Using Deep Learning Techniques: A Comprehensive Review. J Imaging 2023; 9:84. PMID: 37103235; PMCID: PMC10145952; DOI: 10.3390/jimaging9040084.
Abstract
Millions of people are affected by retinal abnormalities worldwide. Early detection and treatment of these abnormalities could arrest further progression, saving multitudes from avoidable blindness. Manual disease detection is time-consuming, tedious, and lacks repeatability. There have been efforts to automate ocular disease detection, riding on the successes of the application of Deep Convolutional Neural Networks (DCNNs) and vision transformers (ViTs) for Computer-Aided Diagnosis (CAD). These models have performed well; however, challenges remain owing to the complex nature of retinal lesions. This work reviews the most common retinal pathologies, provides an overview of prevalent imaging modalities, and presents a critical evaluation of current deep-learning research on the detection and grading of glaucoma, diabetic retinopathy, age-related macular degeneration, and multiple retinal diseases. The work concludes that CAD, through deep learning, will be increasingly vital as an assistive technology. As future work, there is a need to explore the potential impact of using ensemble CNN architectures in multiclass, multilabel tasks. Efforts should also be expended on improving model explainability to win the trust of clinicians and patients.
Affiliation(s)
- Serestina Viriri
- School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4001, South Africa
6
Bahaj SA. A hybrid intelligent model for early validation of infectious diseases: An explorative study of machine learning approaches. Microsc Res Tech 2023; 86:507-515. PMID: 36704844; DOI: 10.1002/jemt.24290.
Abstract
The literature reports several approaches for validating infectious-disease news, but none is economically effective for collecting and classifying information on different infectious diseases. This work presents a hybrid machine-learning model that can predict the validity of infectious-disease news spread in the media. The proposed hybrid machine learning (ML) model uses the Dynamic Classifier Selection (DCS) process to validate news. Several machine learning models, such as K-Nearest Neighbors (KNN), AdaBoost (AB), Decision Tree (DT), Random Forest (RF), SVC, Gaussian Naïve Bayes (GNB), and Logistic Regression (LR), are tested in the simulation process on a benchmark dataset. The simulation employs three DCS methods: Overall Local Accuracy (OLA), Meta Dynamic Ensemble Selection (META-DES), and Bagging. Among the seven ML classifiers, AdaBoost with the Bagging DCS method achieved the highest accuracy: 97.45% on training samples and 97.56% on testing samples. The second-highest accuracy, 96.12% on training and 96.45% on testing samples, was obtained by AdaBoost with the META-DES method. Overall, the AdaBoost with Bagging model obtained the highest accuracy, AUC, sensitivity, and specificity with minimum FPR and FNR for validation.
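To make the DCS idea concrete, here is a minimal sketch of the Overall Local Accuracy (OLA) rule: for each query, the pool member that is most accurate on the query's k nearest validation neighbours makes the prediction. The toy classifiers below are hypothetical, not the paper's trained pool:

```python
import numpy as np

def ola_select(query, X_val, y_val, pool, k=5):
    """Pick the pool member with the best accuracy on the k validation
    samples nearest to the query (Overall Local Accuracy)."""
    dist = np.linalg.norm(X_val - query, axis=1)
    nn = np.argsort(dist)[:k]
    accs = [np.mean(clf(X_val[nn]) == y_val[nn]) for clf in pool]
    return pool[int(np.argmax(accs))]

def dcs_predict(X, X_val, y_val, pool, k=5):
    """Dynamic classifier selection: a possibly different expert per query."""
    return np.array([ola_select(x, X_val, y_val, pool, k)(x[None])[0] for x in X])
```

Each base classifier only has to be locally competent; OLA routes every query to whichever member looks best in its neighbourhood, which is how a pool of individually weak models can validate news more reliably than any single one.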
Affiliation(s)
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
7
Kurdi SZ, Ali MH, Jaber MM, Saba T, Rehman A, Damaševičius R. Brain Tumor Classification Using Meta-Heuristic Optimized Convolutional Neural Networks. J Pers Med 2023; 13:181. PMID: 36836415; PMCID: PMC9965936; DOI: 10.3390/jpm13020181.
Abstract
The field of medical image processing plays a significant role in brain tumor classification. The survival rate of patients can be increased by diagnosing the tumor at an early stage. Several automatic systems have been developed to perform tumor recognition. However, existing systems struggle to identify the exact tumor region and hidden edge details with minimum computational complexity. The Harris Hawks optimized convolutional network (HHOCNN) is used in this work to resolve these issues. The brain magnetic resonance (MR) images are pre-processed, and noisy pixels are eliminated to minimize the false tumor recognition rate. Then, a candidate-region process is applied to identify the tumor region. The candidate-region method investigates the boundary regions with the help of the line-segments concept, which reduces the loss of hidden edge details. Various features are extracted from the segmented region and classified by applying a convolutional neural network (CNN). The CNN computes the exact region of the tumor with fault tolerance. The proposed HHOCNN system was implemented in MATLAB, and performance was evaluated using pixel accuracy, error rate, accuracy, specificity, and sensitivity metrics. The nature-inspired Harris Hawks optimization algorithm minimizes the misclassification error rate and improves the overall tumor recognition accuracy to 98%, achieved on the Kaggle dataset.
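For orientation, the core of Harris Hawks optimization can be sketched in a few lines. This is a deliberately simplified version (the Levy-flight "rapid dive" phases are omitted), shown minimizing a toy sphere function rather than the paper's CNN objective:

```python
import numpy as np

def hho_minimize(f, dim, n_hawks=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Simplified Harris Hawks optimizer: exploration plus the soft/hard
    besiege moves, driven by a decaying 'escape energy' of the prey."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_hawks, dim))
    rabbit = min(X, key=f).copy()             # best solution so far ("rabbit")
    for t in range(iters):
        E1 = 2.0 * (1.0 - t / iters)          # energy envelope decays to 0
        for i in range(n_hawks):
            E = E1 * (2.0 * rng.random() - 1.0)
            if abs(E) >= 1.0:                 # exploration: perch randomly
                j = rng.integers(n_hawks)
                X[i] = X[j] - rng.random() * np.abs(X[j] - 2.0 * rng.random() * X[i])
            else:                             # exploitation: besiege the rabbit
                J = 2.0 * (1.0 - rng.random())
                if abs(E) >= 0.5:             # soft besiege
                    X[i] = (rabbit - X[i]) - E * np.abs(J * rabbit - X[i])
                else:                         # hard besiege
                    X[i] = rabbit - E * np.abs(rabbit - X[i])
            X[i] = np.clip(X[i], lb, ub)
            if f(X[i]) < f(rabbit):
                rabbit = X[i].copy()
    return rabbit, f(rabbit)
```

In a hybrid like HHOCNN, `f` would score a candidate configuration (e.g. by validation error) instead of a sphere function; the optimizer itself needs no gradients.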
Affiliation(s)
- Sarah Zuhair Kurdi
- Medical College, Kufa University, Al.Najaf Teaching Hospital M.B.ch.B/F.I.C.M Neurosurgery, Baghdad 54001, Iraq
- Mohammed Hasan Ali
- Computer Techniques Engineering Department, Faculty of Information Technology, Imam Ja’afar Al-Sadiq University, Baghdad 10021, Iraq
- College of Computer Science and Mathematics, University of Kufa, Najaf 540011, Iraq
- Mustafa Musa Jaber
- Department of Medical Instruments Engineering Techniques, Dijlah University College, Baghdad 00964, Iraq
- Department of Medical Instruments Engineering Techniques, Al-Turath University College, Baghdad 10021, Iraq
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh 11586, Saudi Arabia
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh 11586, Saudi Arabia
- Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
8
Identification of Anomalies in Mammograms through Internet of Medical Things (IoMT) Diagnosis System. Computational Intelligence and Neuroscience 2022; 2022:1100775. PMID: 36188701; PMCID: PMC9522488; DOI: 10.1155/2022/1100775.
Abstract
Breast cancer is the primary health issue that women may face at some point in their lifetime, and it may lead to death in severe cases. A mammography procedure is used for finding suspicious masses in the breast. Teleradiology is employed for online treatment and diagnostics due to the unavailability and shortage of trained radiologists in backward and remote areas, yet the availability of online radiologists is uncertain due to inadequate network coverage in rural areas. In such circumstances, a Computer-Aided Diagnosis (CAD) framework is useful for identifying breast abnormalities without expert radiologists. This research presents a decision-making system based on the IoMT (Internet of Medical Things) to identify breast anomalies. The proposed technique uses a region-growing algorithm to segment the tumor and extract the suspicious part. Then, texture- and shape-based features are employed to characterize breast lesions. The extracted features include first- and second-order statistics, the center-symmetric local binary pattern (CS-LBP), a histogram of oriented gradients (HOG), and shape-based descriptors obtained from the mammograms. Finally, a fusion of machine learning algorithms, including K-Nearest Neighbor (KNN), Support Vector Machine (SVM), and Linear Discriminant Analysis (LDA), is employed to classify breast cancer using composite feature vectors. The experimental results exhibit the proposed framework's efficacy in separating cancerous lesions from benign ones using 10-fold cross-validation. The accuracy, sensitivity, and specificity attained are 96.3%, 94.1%, and 98.2%, respectively, using shape-based features from the MIAS database. Finally, this research contributes a model capable of earlier and more accurate breast tumor detection.
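Of the handcrafted descriptors listed, the center-symmetric LBP is the easiest to show compactly: each pixel is coded by comparing the four centre-symmetric pairs of its 3x3 neighbourhood, giving a 4-bit code and hence a 16-bin histogram. A minimal sketch (parameter choices are illustrative):

```python
import numpy as np

def cs_lbp(img, threshold=0.0):
    """Center-symmetric LBP over 3x3 neighbourhoods: one bit per
    centre-symmetric pixel pair, so codes range over 0..15."""
    img = img.astype(float)
    # the four centre-symmetric offset pairs of a 3x3 neighbourhood
    pairs = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
             ((-1, 1), (1, -1)), ((0, 1), (0, -1))]
    h, w = img.shape
    code = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, ((r1, c1), (r2, c2)) in enumerate(pairs):
        a = img[1 + r1:h - 1 + r1, 1 + c1:w - 1 + c1]
        b = img[1 + r2:h - 1 + r2, 1 + c2:w - 1 + c2]
        code |= ((a - b > threshold).astype(np.uint8) << bit)
    return code

def cs_lbp_histogram(img):
    """Normalized 16-bin texture descriptor for a whole image patch."""
    code = cs_lbp(img)
    hist = np.bincount(code.ravel(), minlength=16).astype(float)
    return hist / hist.sum()
```

With only 16 bins (versus 256 for plain LBP), CS-LBP yields a compact texture vector, which is one reason it suits composite feature sets like the one described above.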
9
Ramzan M, Raza M, Sharif MI, Kadry S. Gastrointestinal Tract Polyp Anomaly Segmentation on Colonoscopy Images Using Graft-U-Net. J Pers Med 2022; 12:1459. PMID: 36143244; PMCID: PMC9503374; DOI: 10.3390/jpm12091459.
Abstract
Computer-aided polyp segmentation is a crucial task that supports gastroenterologists in examining and resecting anomalous tissue in the gastrointestinal tract. Polyps grow mainly in the colorectal area of the gastrointestinal tract, in the mucous membrane, as protrusions of micro-abnormal tissue that increase the risk of incurable diseases such as cancer. Early examination of polyps can therefore decrease the chance of polyps, such as adenomas, developing into cancer. Deep learning-based diagnostic systems play a vital role in diagnosing diseases in the early stages. A deep learning method, Graft-U-Net, is proposed to segment polyps in colonoscopy frames. Graft-U-Net is a modified version of UNet comprising three stages: preprocessing, encoder, and decoder. The preprocessing technique is used to improve the contrast of the colonoscopy frames; the encoder analyzes features, while the decoder performs feature synthesis. The Graft-U-Net model offers better segmentation results than existing deep learning models. The experiments were conducted using two open-access datasets, Kvasir-SEG and CVC-ClinicDB, prepared from colonoscopies of the large bowel of the gastrointestinal tract. The proposed model achieved a mean Dice of 96.61% and a mean Intersection over Union (mIoU) of 82.45% on the Kvasir-SEG dataset. Similarly, on the CVC-ClinicDB dataset, the method achieved a mean Dice of 89.95% and an mIoU of 81.38%.
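The two reported overlap metrics are easy to pin down exactly; a minimal numpy sketch of Dice and IoU on binary masks:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|); eps guards the empty-mask case."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """Intersection over Union = |A∩B| / |A∪B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

For the same pair of masks, Dice = 2·IoU/(1 + IoU), so Dice is always at least as large as IoU, which is why the mean Dice figures above exceed the corresponding mIoU figures.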
Affiliation(s)
- Muhammad Ramzan
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Muhammad Imran Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos 999095, Lebanon
10
Alyami J, Rehman A, Sadad T, Alruwaythi M, Saba T, Bahaj SA. Automatic skin lesions detection from images through microscopic hybrid features set and machine learning classifiers. Microsc Res Tech 2022; 85:3600-3607. PMID: 35876390; DOI: 10.1002/jemt.24211.
Abstract
Skin cancer occurrences are increasing worldwide, owing in part to a lack of awareness among significant parts of the population and a shortage of skin specialists. Medical imaging can help with early detection and more accurate diagnosis of skin cancer. Physicians usually follow a manual diagnosis method in their clinics, but nonprofessional assessment can affect the accuracy of the results. Thus, an automated system is required to assist physicians in diagnosing skin cancer precisely at an early stage and thereby decrease the mortality rate. This article presents automatic skin lesion detection through a microscopic hybrid feature set and machine learning-based classification. The employment of deep features from the AlexNet architecture together with the local optimal-oriented pattern can accurately predict skin lesions. The proposed model is tested on two open-access datasets, PAD-UFES-20 and MED-NODE, comprising melanoma and nevus images. Experimental results on both datasets exhibit the efficacy of hybrid features with the help of machine learning. Finally, the proposed model achieved 94.7% accuracy using an ensemble classifier.
Affiliation(s)
- Jaber Alyami
- Department of Diagnostic Radiology, King Abdulaziz University, Jeddah, Saudi Arabia
- Animal House Unit, King Fahd Medical Research Center, King Abdulaziz University, Jeddah, Saudi Arabia
- Smart Medical Imaging Research Group, King Abdulaziz University, Jeddah, Saudi Arabia
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Tariq Sadad
- Department of Computer Science & Software Engineering, International Islamic University, Islamabad, Pakistan
- Maryam Alruwaythi
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
11
Facial Emotion Recognition Using Conventional Machine Learning and Deep Learning Methods: Current Achievements, Analysis and Remaining Challenges. Information 2022; 13:268. DOI: 10.3390/info13060268.
Abstract
Facial emotion recognition (FER) is an emerging and significant research area in the pattern recognition domain. In daily life, the role of non-verbal communication is significant: its share of overall communication is estimated at around 55% to 93%. Facial emotion analysis is used in surveillance videos, expression analysis, gesture recognition, smart homes, computer games, depression treatment, patient monitoring, anxiety assessment, lie detection, psychoanalysis, paralinguistic communication, operator-fatigue detection, and robotics. In this paper, we present a detailed review of FER. The literature is collected from reputable research published during the current decade. The review covers conventional machine learning (ML) and various deep learning (DL) approaches. Further, publicly available FER datasets and evaluation metrics are discussed and compared with benchmark results. This paper provides a holistic review of FER using traditional ML and DL methods to highlight the remaining gaps in this domain for new researchers. Finally, this review serves as a guidebook for young researchers in the FER area, providing a general understanding and basic knowledge of the current state-of-the-art methods, and points experienced researchers toward productive directions for future work.
12
Bambo MM, Gebremariam MG. Statistical Analysis on Time to Blindness of Glaucoma Patients at Jimma University Specialized Hospital: Application of Accelerated Failure Time Model. J Ophthalmol 2022; 2022:9145921. PMID: 35607611; PMCID: PMC9124144; DOI: 10.1155/2022/9145921.
Abstract
Background: Glaucoma is one of the most frequent vision-threatening eye diseases. It is frequently associated with excessive intraocular pressure (IOP), which can cause vision loss and damage the optic nerve. The main objective of this study was to model time to blindness of glaucoma patients using appropriate statistical models.
Study Design: A retrospective community-based longitudinal study design was applied.
Materials and Procedures: The data were obtained from the Ophthalmology Department of JUSH for the period January 2016 to August 2020. The glaucoma patients' information was extracted from patient cards, and 321 samples were included in the study. To discover the factors that affect time to blindness of glaucoma patients, the researchers used the Accelerated Failure Time (AFT) model.
Results: Of the 321 glaucoma patients, 81.3% were blind. Unilaterally and bilaterally blinded female and male glaucoma patients accounted for 24.92% and 56.38%, respectively. After glaucoma was confirmed, the median time to blindness of both eyes or one eye was 12 months. The multivariable log-logistic accelerated failure time model fit the glaucoma patients' time-to-blindness dataset well. The results showed that the chance of blindness was high for glaucoma patients with absolute-stage glaucoma (ϕ = 2.425, p = 0.049, 95% CI [2.249, 2.601]), medium duration of diagnosis (ϕ = 1.505, p = 0.001, 95% CI [0.228, 0.589]), long duration of diagnosis (ϕ = 3.037, p = 0.001, 95% CI [2.850, 3.22]), and IOP greater than 21 mmHg (ϕ = 0.851, p = 0.034, 95% CI [0.702, 0.999]).
Conclusion: The multivariable log-logistic accelerated failure time model evaluates the prognostic factors of time to blindness of glaucoma patients. Under this finding, duration of diagnosis, IOP, and stage of glaucoma were key determinant factors of time to blindness. Finally, the log-logistic accelerated failure time model was the best-fitting parametric model based on AIC and BIC values.
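Two quantities from this abstract are easy to make concrete: the log-logistic survival curve (parameterised here by its median, so S(median) = 0.5 by construction) and the way an AFT time ratio ϕ rescales the median event time. A small sketch under those textbook definitions; the 12-month median and ϕ = 2.425 are reused purely for illustration:

```python
def loglogistic_survival(t, median, shape):
    """Log-logistic survival function with scale set to the median
    (alpha) and shape beta: S(t) = 1 / (1 + (t / alpha) ** beta)."""
    return 1.0 / (1.0 + (t / median) ** shape)

def accelerated_median(base_median, time_ratio):
    """In an accelerated failure time model, a covariate with time ratio
    phi multiplies event times, so the median scales by the same factor."""
    return base_median * time_ratio
```

A time ratio greater than 1 stretches event times (blindness occurs later), while a ratio below 1, as reported for IOP above 21 mmHg, shortens them.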
Affiliation(s)
- Meseret Mesfin Bambo
- Department of Statistics, College of Natural and Computational Science, Mizan-Tepi University, Tepi, Ethiopia
13
Painuli D, Bhardwaj S, Köse U. Recent advancement in cancer diagnosis using machine learning and deep learning techniques: A comprehensive review. Comput Biol Med 2022; 146:105580. PMID: 35551012; DOI: 10.1016/j.compbiomed.2022.105580.
Abstract
As the second leading cause of mortality worldwide, cancer has been identified as a perilous disease for human beings, for which advanced-stage diagnosis may not help much in safeguarding patients from mortality. Thus, efforts to provide a sustainable architecture with proven cancer-prevention estimates and provision for the early diagnosis of cancer are the need of the hour. The advent of machine learning methods has enriched the cancer diagnosis area with efficiency and error rates that can surpass those of humans. A significant revolution has been witnessed in the development of machine learning and deep learning assisted systems for the segmentation and classification of various cancers during the past decade. This paper reviews the detection of various types of cancer from different data modalities using machine learning and deep learning-based methods, along with the feature extraction techniques and benchmark datasets utilized in studies from the recent six years. The focus of this study is to review, analyse, classify, and address recent developments in the detection and diagnosis of six types of cancer, i.e., breast, lung, liver, skin, brain, and pancreatic cancer, using machine learning and deep learning techniques. Various state-of-the-art techniques are clustered into groups and their results are examined through key performance indicators, such as accuracy, area under the curve, precision, sensitivity, and Dice score, on benchmark datasets, concluding with future research challenges.
Affiliation(s)
- Deepak Painuli
- Department of Computer Science and Engineering, Gurukula Kangri Vishwavidyalaya, Haridwar, India
- Suyash Bhardwaj
- Department of Computer Science and Engineering, Gurukula Kangri Vishwavidyalaya, Haridwar, India
- Utku Köse
- Department of Computer Engineering, Suleyman Demirel University, Isparta, Turkey
14
Kako NA, Abdulazeez AM. Peripapillary Atrophy Segmentation and Classification Methodologies for Glaucoma Image Detection: A Review. Curr Med Imaging 2022; 18:1140-1159. [PMID: 35260060 DOI: 10.2174/1573405618666220308112732] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 12/04/2021] [Accepted: 12/22/2021] [Indexed: 11/22/2022]
Abstract
Information-based image processing and computer vision methods are used in many healthcare organizations to diagnose disease. Irregularities in the visual system are identified from fundus images captured with a fundus camera. Among ophthalmic diseases, glaucoma is regarded as the most common condition that can lead to neurodegenerative illness, and unsuitable fluid pressure inside the eye is described as its major cause. Glaucoma has no symptoms in the early stages and, if untreated, may result in total blindness; diagnosing it at an early stage may prevent permanent vision loss. Manual inspection of the human eye is a possible solution, but it depends on the skills of the individuals involved. Automatic diagnosis of glaucoma through a combination of computer vision, artificial intelligence, and image processing can aid in the prevention and detection of the disease. In this review article, we survey the numerous approaches to peripapillary atrophy segmentation and classification that can detect glaucoma, along with details of the publicly available image benchmarks, datasets, and performance measures. The article covers published models that objectively diagnose glaucoma via peripapillary atrophy, from the lowest level of feature extraction to the current deep learning based direction. The advantages and disadvantages of each method are addressed in detail, tabular descriptions highlight the results of each category, and the frameworks of each approach and the fundus image datasets are provided. The review concludes with possible directions for future work on glaucoma diagnosis.
Affiliation(s)
- Najdavan A Kako
- Duhok Polytechnic University, Technical Institute of Administration, MIS, Duhok, Iraq
15
Li X, Hu X, Qi X, Yu L, Zhao W, Heng PA, Xing L. Rotation-Oriented Collaborative Self-Supervised Learning for Retinal Disease Diagnosis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2284-2294. [PMID: 33891550 DOI: 10.1109/tmi.2021.3075244] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
The automatic diagnosis of various conventional ophthalmic diseases from fundus images is important in clinical practice. However, developing such automatic solutions is challenging due to the large amount of training data required and the expensive annotation of medical images. This paper presents a novel self-supervised learning framework for retinal disease diagnosis that reduces annotation effort by learning visual features from unlabeled images. To achieve this, we present a rotation-oriented collaborative method that explores rotation-related and rotation-invariant features, which capture discriminative structures from fundus images and exploit the invariance property for retinal disease classification. We evaluate the proposed method on two public benchmark datasets for retinal disease classification. The experimental results demonstrate that our method outperforms other self-supervised feature learning methods by around 4.2% in area under the curve (AUC). With a large amount of unlabeled data available, our method can surpass the supervised baseline for pathologic myopia (PM) and is very close to the supervised baseline for age-related macular degeneration (AMD), showing its potential benefit in clinical practice.
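The rotation pretext task described above can be sketched in a few lines. This is a minimal illustration of the general idea (generating rotated copies of an image and the rotation labels a network would be trained to predict), not the authors' implementation:

```python
def rotate90(img):
    # Rotate a 2D image (list of rows) 90 degrees clockwise.
    return [list(row) for row in zip(*img[::-1])]

def make_rotation_batch(img):
    # Self-supervised pretext data: each pair is (rotated image, label k),
    # where label k means the image was rotated k * 90 degrees. A network
    # trained to predict k learns rotation-related structure without labels.
    batch, rotated = [], img
    for label in range(4):
        batch.append((rotated, label))
        rotated = rotate90(rotated)
    return batch
```

Rotation-invariant features would additionally be encouraged to agree across the four copies of the same image.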
16
Sajjad M, Ramzan F, Khan MUG, Rehman A, Kolivand M, Fati SM, Bahaj SA. Deep convolutional generative adversarial network for Alzheimer's disease classification using positron emission tomography (PET) and synthetic data augmentation. Microsc Res Tech 2021; 84:3023-3034. [PMID: 34245203 DOI: 10.1002/jemt.23861] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2020] [Revised: 05/13/2021] [Accepted: 06/15/2021] [Indexed: 11/09/2022]
Abstract
With the evolution of deep learning technologies, computer vision tasks have achieved tremendous success in the biomedical domain. Supervised deep learning training requires a large number of labeled examples, and obtaining such labeled datasets is challenging; limited data availability makes it difficult to build and improve an automated disease diagnosis model. To synthesize data and improve diagnostic accuracy, we propose a novel approach that generates images for three different stages of Alzheimer's disease, normal control (CN), mild cognitive impairment (MCI), and Alzheimer's disease (AD), using deep convolutional generative adversarial networks. The proposed model performs well in synthesizing brain positron emission tomography images for all three stages. Model performance is measured with a classification model, which achieved an accuracy of 72% on the synthetic images. We also report quantitative measures, i.e., peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), achieving average PSNR scores of 82 for AD, 72 for CN, and 73 for MCI, and average SSIM scores of 25.6 for AD, 22.6 for CN, and 22.8 for MCI.
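The PSNR metric used to score the synthetic images has a simple closed form: 10 log10(MAX² / MSE) between the reference and generated image. A minimal sketch on flat lists of pixel intensities (illustrative, not the paper's evaluation code):

```python
import math

def psnr(original, generated, max_val=255.0):
    # Peak signal-to-noise ratio between two equal-sized grayscale images,
    # given here as flat lists of pixel intensities in [0, max_val].
    mse = sum((a - b) ** 2 for a, b in zip(original, generated)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher PSNR means the generated image is closer to the reference; identical images give infinite PSNR.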
Affiliation(s)
- Muhammad Sajjad
- National Center of Artificial Intelligence (NCAI), Al-Khawarizmi Institute of Computer Science (KICS), University of Engineering and Technology (UET), Lahore, Pakistan
- Farheen Ramzan
- Department of Computer Science, University of Engineering and Technology (UET), Lahore, Pakistan
- Muhammad Usman Ghani Khan
- National Center of Artificial Intelligence (NCAI), Al-Khawarizmi Institute of Computer Science (KICS), University of Engineering and Technology (UET), Lahore, Pakistan; Department of Computer Science, University of Engineering and Technology (UET), Lahore, Pakistan
- Amjad Rehman
- Artificial Intelligence & Data Analytics (AIDA) Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Mahyar Kolivand
- Department of Medicine, University of Liverpool, Liverpool, UK
- Suliman Mohamed Fati
- Artificial Intelligence & Data Analytics (AIDA) Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
17
Amin J, Anjum MA, Sharif M, Saba T, Tariq U. An intelligence design for detection and classification of COVID19 using fusion of classical and convolutional neural network and improved microscopic features selection approach. Microsc Res Tech 2021; 84:2254-2267. [PMID: 33964096 PMCID: PMC8237066 DOI: 10.1002/jemt.23779] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Revised: 02/15/2021] [Accepted: 04/03/2021] [Indexed: 12/31/2022]
Abstract
COVID-19 is caused by infection of the respiratory system by an RNA virus that can infect both animal and human species; in severe cases it causes pneumonia. In this research, hand-crafted and deep microscopic features are used to classify lung infection. The proposed work consists of two phases. In phase I, the infected lung region is segmented using a proposed U-Net deep learning model; hand-crafted features, namely histogram of oriented gradients (HOG), noise-to-harmonic ratio (NHr), and segmentation-based fractal texture analysis (SFTA), are extracted from the segmented image, and optimum features are selected from each feature vector using entropy. In phase II, local binary patterns (LBP), speeded-up robust features (SURF), and deep learning features are extracted from the input CT images using pretrained networks (Inceptionv3, ResNet101), and optimum features are again selected based on entropy. Finally, the entropy-selected features are fused in two ways: (i) the hand-crafted features (HOG, NHr, SFTA, LBP, SURF) are horizontally concatenated, and (ii) the hand-crafted features are combined with the deep features. The fused feature vectors are passed to ensemble models (boosted tree, bagged tree, and RUSBoosted tree) for COVID-19 classification, once using the fused hand-crafted features alone and once using the fusion of hand-crafted and deep features. The methodology is evaluated on three benchmark datasets; experiments on two of them show that the fusion of hand-crafted and deep microscopic features provides better results than fused hand-crafted features alone.
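The abstract repeatedly mentions selecting "optimum features based on entropy" without giving the exact criterion. One common reading, keeping the feature columns with the highest Shannon entropy, can be sketched as follows (an assumption for illustration, not the paper's stated rule):

```python
import math
from collections import Counter

def shannon_entropy(values):
    # Shannon entropy (in bits) of a discretized feature column.
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def select_by_entropy(feature_matrix, k):
    # Keep the indices of the k columns with highest entropy (most varied,
    # hence potentially most informative). feature_matrix is a list of rows.
    columns = list(zip(*feature_matrix))
    scored = sorted(range(len(columns)),
                    key=lambda j: shannon_entropy(columns[j]),
                    reverse=True)
    return sorted(scored[:k])
```

A constant column has entropy 0 and is dropped first under this criterion.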
Affiliation(s)
- Javaria Amin
- Department of Computer Science, University of Wah, Wah, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad - Wah Campus, Wah Cantt 4740, Pakistan
- Tanzila Saba
- Artificial Intelligence and Data Analytics (AIDA) Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Usman Tariq
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia
18
Shabbir A, Rasheed A, Shehraz H, Saleem A, Zafar B, Sajid M, Ali N, Dar SH, Shehryar T. Detection of glaucoma using retinal fundus images: A comprehensive review. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2021; 18:2033-2076. [PMID: 33892536 DOI: 10.3934/mbe.2021106] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Content-based image analysis and computer vision techniques are used in various healthcare systems to detect diseases. Abnormalities in the human eye are detected through fundus images captured with a fundus camera. Among eye diseases, glaucoma is considered the second leading cause of neurodegenerative illness, and inappropriate intraocular pressure is reported as its main cause. There are no symptoms of glaucoma at the earlier stages, and if the disease remains untreated it can lead to complete blindness; early diagnosis can prevent permanent loss of vision. Manual examination of the human eye is a possible solution, but it is dependent on human effort. Automatic detection of glaucoma using a combination of image processing, artificial intelligence, and computer vision can help to prevent and detect the disease. In this review article, we present a comprehensive overview of the various types of glaucoma, their causes, possible treatments, the publicly available image benchmarks, performance metrics, and approaches based on digital image processing, computer vision, and deep learning. The article presents a detailed study of published research models that detect glaucoma, from low-level feature extraction to recent trends based on deep learning. The pros and cons of each approach are discussed in detail, tabular representations summarize the results of each category, and we conclude by reporting our findings and possible future research directions.
Affiliation(s)
- Amsa Shabbir
- Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Aqsa Rasheed
- Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Huma Shehraz
- Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Aliya Saleem
- Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Bushra Zafar
- Department of Computer Science, Government College University, Faisalabad 38000, Pakistan
- Muhammad Sajid
- Department of Electrical Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Nouman Ali
- Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Saadat Hanif Dar
- Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
- Tehmina Shehryar
- Department of Software Engineering, Mirpur University of Science and Technology (MUST), Mirpur-AJK 10250, Pakistan
19
Khan AR, Khan S, Harouni M, Abbasi R, Iqbal S, Mehmood Z. Brain tumor segmentation using K-means clustering and deep learning with synthetic data augmentation for classification. Microsc Res Tech 2021; 84:1389-1399. [PMID: 33524220 DOI: 10.1002/jemt.23694] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/29/2020] [Revised: 11/11/2020] [Accepted: 11/27/2020] [Indexed: 12/19/2022]
Abstract
Image processing plays a major role in neurologists' clinical diagnosis. Several imaging modalities are used for diagnostics, tumor segmentation, and classification; magnetic resonance imaging (MRI) is favored among them due to its noninvasive nature and better representation of internal tumor information. Early diagnosis may increase the chances of survival, but manual detection and classification of brain tumors from MRI is error-prone, time-consuming, and formidable. Consequently, this article presents a deep learning approach to classify brain tumors from MRI data to assist practitioners. The recommended method comprises three main phases: preprocessing, brain tumor segmentation using k-means clustering, and classification of tumors into their respective categories (benign/malignant) through a fine-tuned VGG19 (19-layer Visual Geometry Group) model. Moreover, for better classification accuracy, synthetic data augmentation is introduced to increase the data available for classifier training. The proposed approach was evaluated on the BraTS 2015 benchmark dataset through rigorous experiments. The results endorse the effectiveness of the proposed strategy, which achieved better accuracy than previously reported state-of-the-art techniques.
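The k-means segmentation phase can be illustrated on scalar pixel intensities. A minimal sketch of Lloyd's algorithm followed by a "brightest cluster is the lesion" masking rule (an illustrative simplification, not the paper's pipeline):

```python
def kmeans_1d(values, k, iters=20):
    # Lloyd's algorithm on scalar intensities: assign each value to the
    # nearest centre, then move each centre to the mean of its cluster.
    centers = sorted(values)[::max(1, len(values) // k)][:k]  # spread seeds
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[j].append(v)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

def segment(values, k=2):
    # Binary mask: 1 for pixels assigned to the brightest cluster.
    centers = kmeans_1d(values, k)
    tumor = max(range(k), key=lambda c: centers[c])
    mask = [1 if min(range(k), key=lambda c: abs(v - centers[c])) == tumor
            else 0 for v in values]
    return mask, centers
```

Real tumor segmentation clusters feature vectors per pixel, but the update loop is the same.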
Affiliation(s)
- Amjad Rehman Khan
- Artificial Intelligence and Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Siraj Khan
- Department of Computer Science, Islamia College University, Peshawar, Pakistan
- Majid Harouni
- Department of Computer Engineering, Dolatabad Branch, Islamic Azad University, Isfahan, Iran
- Rashid Abbasi
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Sichuan, China
- Sajid Iqbal
- Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan
- Zahid Mehmood
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
20
Saba T, Abunadi I, Shahzad MN, Khan AR. Machine learning techniques to detect and forecast the daily total COVID-19 infected and deaths cases under different lockdown types. Microsc Res Tech 2021; 84:1462-1474. [PMID: 33522669 PMCID: PMC8014446 DOI: 10.1002/jemt.23702] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Revised: 11/27/2020] [Accepted: 12/27/2020] [Indexed: 12/13/2022]
Abstract
COVID-19 has impacted the world in many ways, including loss of life, economic downturn, and social isolation. The disease emerged from SARS-CoV-2, a highly infectious virus, and every country tried to control its spread by imposing different types of lockdown. There is therefore an urgent need to forecast daily confirmed infections and deaths under different lockdown types in order to select the most appropriate lockdown strategies, control the intensity of the pandemic, and reduce the burden on hospitals. Three types of lockdown (partial, herd, complete) are currently imposed in different countries. In this study, three countries from each lockdown type were studied by applying time-series and machine learning models, namely random forests, k-nearest neighbors, SVM, decision trees (DTs), polynomial regression, Holt-Winters, ARIMA, and SARIMA, to forecast daily confirmed infections and deaths due to COVID-19. Model accuracy and effectiveness were evaluated using three error-based performance criteria. No single forecasting model could capture the trends of all datasets, owing to the varying nature of the data and the lockdown types. The three top-ranked models were used to predict confirmed infections and deaths, and the best-performing models were also adopted for out-of-sample prediction, obtaining results very close to the actual cumulative values. This study proposes promising models for forecasting and identifies the best lockdown strategy to mitigate the casualties of COVID-19.
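Among the listed models, the Holt-Winters family is the simplest to sketch. Its non-seasonal core, Holt's linear-trend smoothing, maintains a level and a trend and extrapolates them for the h-step forecast (a generic textbook sketch, not the study's fitted models):

```python
def holt_forecast(series, alpha, beta, h):
    # Holt's linear-trend exponential smoothing: the level tracks the series
    # with weight alpha on the newest observation, the trend tracks level
    # changes with weight beta; the h-step forecast extrapolates linearly.
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + h * trend
```

On a perfectly linear series the recursions reproduce the line exactly, so the forecast continues it.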
Affiliation(s)
- Tanzila Saba
- Artificial Intelligence and Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Ibrahim Abunadi
- Artificial Intelligence and Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Amjad Rehman Khan
- Artificial Intelligence and Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
21
Rehman A. Light microscopic iris classification using ensemble multi-class support vector machine. Microsc Res Tech 2021; 84:982-991. [PMID: 33438285 DOI: 10.1002/jemt.23659] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2020] [Revised: 10/24/2020] [Accepted: 11/06/2020] [Indexed: 02/04/2023]
Abstract
Like other biometric modalities such as fingerprint, face, and DNA, iris classification can assist law enforcement agencies in identifying humans by matching a subject's iris against iris datasets. However, iris classification is challenging in real environments due to the complex texture variations of the human iris. Accordingly, this article presents an improved Oriented FAST and Rotated BRIEF (ORB) with Bag-of-Words model to extract distinct and robust features from the iris image, followed by an ensemble multi-class SVM for classification. The proposed methodology consists of four main steps: first, iris image normalization and enhancement; second, localizing the iris region; third, iris feature extraction; and finally, iris classification using the ensemble multi-class support vector machine. For preprocessing of the input images, histogram equalization, a Gaussian mask, and median filters are applied. The proposed technique is tested on two benchmark databases, CASIA-v1 and an iris image database, and achieved higher accuracy than existing techniques reported in the state of the art.
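The Bag-of-Words step turns a variable number of local descriptors into one fixed-length histogram the SVM can consume. A minimal sketch using Euclidean nearest-codeword assignment for simplicity (real ORB descriptors are binary and would typically use Hamming distance; this is an illustration, not the paper's code):

```python
def bow_histogram(descriptors, codebook):
    # Quantize each local descriptor to its nearest codeword and count how
    # often each codeword fires; the normalized histogram is the global
    # image signature fed to the classifier.
    hist = [0] * len(codebook)
    for d in descriptors:
        j = min(range(len(codebook)),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(d, codebook[c])))
        hist[j] += 1
    total = sum(hist)
    return [h / total for h in hist]
```

The codebook itself is usually learned by clustering descriptors from a training set.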
Affiliation(s)
- Amjad Rehman
- Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
22
Sadad T, Rehman A, Munir A, Saba T, Tariq U, Ayesha N, Abbasi R. Brain tumor detection and multi-classification using advanced deep learning techniques. Microsc Res Tech 2021; 84:1296-1308. [PMID: 33400339 DOI: 10.1002/jemt.23688] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2020] [Revised: 10/14/2020] [Accepted: 12/06/2020] [Indexed: 11/11/2022]
Abstract
A brain tumor is an uncontrolled growth of brain cells that, if not detected at an early stage, can develop into brain cancer. Early diagnosis plays a crucial role in treatment planning and patient survival. Brain tumors have distinct forms, properties, and therapies, so manual detection is complicated, time-consuming, and vulnerable to error; automated computer-assisted diagnosis at high precision is therefore in demand. This article performs segmentation with a U-Net architecture using ResNet50 as a backbone on the Figshare dataset, achieving an intersection-over-union (IoU) of 0.9504. Preprocessing and data augmentation are introduced to enhance the classification rate. Multi-classification of brain tumors is performed using evolutionary algorithms and reinforcement learning through transfer learning; other deep learning models, ResNet50, DenseNet201, MobileNet V2, and InceptionV3, are also applied. The results show that the proposed framework performs better than the reported state of the art. The CNN models applied for tumor classification, MobileNet V2, Inception V3, ResNet50, DenseNet201, and NASNet, attained accuracies of 91.8, 92.8, 92.9, 93.1, and 99.6%, respectively, with NASNet exhibiting the highest accuracy.
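The IoU figure quoted above has a simple definition: the overlap of the predicted and ground-truth masks divided by their union. A minimal sketch on flat binary masks (illustrative, not the paper's evaluation code):

```python
def iou(pred, truth):
    # Intersection over union of two equal-length binary masks (0/1 lists).
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0  # two empty masks match perfectly
```

An IoU of 0.9504 therefore means the predicted tumor region shares just over 95% of its combined area with the ground truth.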
Affiliation(s)
- Tariq Sadad
- Department of Computer Science, University of Central Punjab, Lahore, Pakistan
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Asim Munir
- Department of Computer Science and Software Engineering, International Islamic University, Islamabad, Pakistan
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
- Noor Ayesha
- School of Clinical Medicine, Zhengzhou University, Zhengzhou, Henan, China
- Rashid Abbasi
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
23
Saba T. Computer vision for microscopic skin cancer diagnosis using handcrafted and non-handcrafted features. Microsc Res Tech 2021; 84:1272-1283. [PMID: 33399251 DOI: 10.1002/jemt.23686] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2020] [Revised: 11/15/2020] [Accepted: 11/30/2020] [Indexed: 12/31/2022]
Abstract
Skin, the largest organ, covers the entire body. Skin cancer is one of the most dreadful cancers and is primarily triggered by sensitivity to ultraviolet rays from the sun; melanoma, though it starts in a few different ways, is the riskiest form, and patients are often unaware of malignant growth at the initial stage. The literature shows that various handcrafted and automatically learned deep features have been employed to diagnose skin cancer using traditional machine learning and deep learning techniques. The current research presents a comparison of skin cancer diagnosis techniques using handcrafted and non-handcrafted features. Additionally, clinical features such as the Menzies method; seven-point detection; asymmetry, border, color, and diameter; visual textures (GRC); local binary patterns; Gabor filters; Markov random fields; fractal dimension; and orientation histograms are explored in the process of skin cancer detection. Several parameters, such as the Jaccard index, accuracy, Dice coefficient, precision, sensitivity, and specificity, are compared on benchmark datasets to assess the reported techniques. Finally, publicly available skin cancer datasets are described and the remaining open issues are highlighted.
Affiliation(s)
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
24
Rehman A, Khan MA, Saba T, Mehmood Z, Tariq U, Ayesha N. Microscopic brain tumor detection and classification using 3D CNN and feature selection architecture. Microsc Res Tech 2021; 84:133-149. [PMID: 32959422 DOI: 10.1002/jemt.23597] [Citation(s) in RCA: 83] [Impact Index Per Article: 20.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2019] [Revised: 08/10/2020] [Accepted: 08/31/2020] [Indexed: 12/20/2022]
Abstract
Brain tumors are among the most dreadful cancers and have caused a huge number of deaths among children and adults over the past few years. According to WHO figures, about 700,000 people are living with a brain tumor and around 86,000 have been diagnosed since 2019, while the total number of deaths due to brain tumors since 2019 is 16,830 and the average survival rate is 35%. Automated techniques are therefore needed to grade brain tumors precisely from MRI scans. In this work, a new deep learning based method is proposed for microscopic brain tumor detection and tumor type classification. First, a 3D convolutional neural network (CNN) architecture is designed to extract the brain tumor; the extracted tumor is then passed to a pretrained CNN model for feature extraction. The extracted features are passed to a correlation-based selection method, which outputs the best features. These selected features are validated through a feed-forward neural network for final classification. Three BraTS datasets (2015, 2017, and 2018) are utilized for experiments and validation, achieving accuracies of 98.32, 96.97, and 92.67%, respectively. A comparison with existing techniques shows that the proposed design yields comparable accuracy.
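The "correlation-based selection" step is not specified in detail in the abstract. One standard reading, ranking feature columns by the absolute Pearson correlation with the class label and keeping the top k, can be sketched as follows (an assumption for illustration, not necessarily the paper's exact criterion):

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def select_correlated(features, labels, k):
    # Rank feature columns by |correlation with the label| and keep top k.
    cols = list(zip(*features))
    order = sorted(range(len(cols)),
                   key=lambda j: abs(pearson(cols[j], labels)),
                   reverse=True)
    return sorted(order[:k])
```

Features uncorrelated with the label score near zero and are discarded first.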
Affiliation(s)
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Zahid Mehmood
- Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam bin Abdulaziz University, Saudi Arabia
- Noor Ayesha
- School of Clinical Medicine, Zhengzhou University, Zhengzhou, China
25
Saba T. Recent advancement in cancer detection using machine learning: Systematic survey of decades, comparisons and challenges. J Infect Public Health 2020; 13:1274-1289. [DOI: 10.1016/j.jiph.2020.06.033] [Citation(s) in RCA: 73] [Impact Index Per Article: 14.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2020] [Revised: 06/21/2020] [Accepted: 06/28/2020] [Indexed: 12/24/2022] Open
26
Clinical-Evolutionary Staging System of Primary Open-Angle Glaucoma Using Optical Coherence Tomography. J Clin Med 2020; 9:jcm9051530. [PMID: 32438726 PMCID: PMC7290744 DOI: 10.3390/jcm9051530] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2020] [Revised: 04/30/2020] [Accepted: 05/13/2020] [Indexed: 11/16/2022] Open
Abstract
BACKGROUND Primary open-angle glaucoma (POAG) is considered one of the main causes of blindness. Detection of POAG at early stages and classification into evolutionary stages is crucial to blindness prevention. METHODS 1001 patients were enrolled, of whom 766 were healthy subjects and 235 were ocular hypertensive or glaucomatous patients in different stages of the disease. Spectral domain optical coherence tomography (SD-OCT) was used to determine Bruch's membrane opening-minimum rim width (BMO-MRW) and the thicknesses of peripapillary retinal nerve fibre layer (RNFL) rings with diameters of 3.0, 4.1 and 4.7 mm centred on the optic nerve. The BMO-MRW rim and RNFL rings were divided into seven sectors (G-T-TS-TI-N-NS-NI). The k-means algorithm and linear discriminant analysis were used to classify patients into disease stages. RESULTS We defined four glaucoma stages and provided a new model for classifying eyes into these stages, with an overall accuracy greater than 92% (88% when including healthy eyes). An online application was also implemented to predict the probability of glaucoma stage for any given eye. CONCLUSIONS We propose a new objective algorithm for classifying POAG into clinical-evolutionary stages using SD-OCT.
27
Rehman A, Khan MA, Mehmood Z, Saba T, Sardaraz M, Rashid M. Microscopic melanoma detection and classification: A framework of pixel-based fusion and multilevel features reduction. Microsc Res Tech 2020; 83:410-423. [PMID: 31898863 DOI: 10.1002/jemt.23429] [Citation(s) in RCA: 52] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2018] [Revised: 11/26/2019] [Accepted: 12/15/2019] [Indexed: 11/06/2022]
Abstract
The number of patients diagnosed with melanoma is rising drastically, and the disease contributes many deaths annually among young people; approximately 192,310 new cases of skin cancer were diagnosed in 2019, which shows the importance of automated diagnostic systems. Accordingly, this article presents an automated method for skin lesion detection and recognition using pixel-based fusion of seed-segmented images and multilevel feature reduction. The proposed method involves four key steps: (a) a mean-based function feeds top-hat and bottom-hat filters, whose outputs are fused for contrast stretching; (b) lesions are segmented with seed region growing and a graph-cut method, and both segmented lesions are combined through pixel-based fusion; (c) multilevel features, histogram of oriented gradients (HOG), speeded-up robust features (SURF), and color, are extracted and concatenated; and (d) finally, variance precise entropy-based feature reduction is applied and classification is performed through an SVM with a cubic kernel. Two experiments evaluate the method: segmentation performance on PH2, ISBI2016, and ISIC2017 reaches accuracies of 95.86, 94.79, and 94.92%, respectively, and classification performance on PH2 and ISBI2016 reaches 98.20 and 95.42%, respectively. These results are outstanding compared with current techniques reported in the state of the art, demonstrating the validity of the proposed method.
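The core of the HOG features named in step (c) is a histogram of gradient orientations weighted by gradient magnitude. A minimal sketch over the interior pixels of a grayscale image (illustrative of the idea only; real HOG adds cells, blocks, and normalization):

```python
import math

def orientation_histogram(image, bins=8):
    # Central-difference gradients at interior pixels, then a histogram of
    # gradient orientations weighted by gradient magnitude -- the core idea
    # behind HOG descriptors.
    h = [0.0] * bins
    rows, cols = len(image), len(image[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = image[y][x + 1] - image[y][x - 1]
            gy = image[y + 1][x] - image[y - 1][x]
            mag = math.hypot(gx, gy)
            if mag == 0:
                continue
            angle = math.atan2(gy, gx) % (2 * math.pi)
            h[int(angle / (2 * math.pi) * bins) % bins] += mag
    return h
```

An image whose intensity rises only left-to-right puts all its mass in the bin around orientation zero.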
Affiliation(s)
- Amjad Rehman, Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Zahid Mehmood, Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- Tanzila Saba, Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS Prince Sultan University, Riyadh, Saudi Arabia
- Muhammad Sardaraz, Department of Computer Science, COMSATS University Islamabad, Attock, Pakistan
- Muhammad Rashid, Department of Computer Engineering, Umm Al-Qura University, Makkah, Saudi Arabia
28. Sharif MI, Li JP, Naz J, Rashid I. A comprehensive review on multi-organs tumor detection based on machine learning. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2019.12.006]
29. Saba T, Khan MA, Rehman A, Marie-Sainte SL. Region Extraction and Classification of Skin Cancer: A Heterogeneous framework of Deep CNN Features Fusion and Reduction. J Med Syst 2019; 43:289. [PMID: 31327058] [DOI: 10.1007/s10916-019-1413-3]
Abstract
Cancer has been one of the leading causes of death over the last two decades. A lesion is diagnosed as malignant or benign depending on the severity of the disease and its current stage. Conventional methods require a detailed physical inspection by an expert dermatologist, which is time-consuming and imprecise; several more cost-effective and reasonably accurate computer vision methods have therefore been introduced recently. In this work, we propose a new automated approach for skin lesion detection and recognition using a deep convolutional neural network (DCNN). The proposed cascaded design incorporates three fundamental steps: (a) contrast enhancement through fast local Laplacian filtering (FlLpF) along with HSV color transformation; (b) lesion boundary extraction using a color CNN approach followed by an XOR operation; and (c) in-depth feature extraction by transfer learning with the Inception V3 model, prior to feature fusion using a hamming distance (HD) approach. An entropy-controlled feature selection method is also introduced to select the most discriminant features. The proposed method is tested on the PH2 and ISIC 2017 datasets, and the recognition phase is validated on the PH2, ISBI 2016, and ISBI 2017 datasets. The results show that the proposed method outperforms several existing methods, attaining accuracies of 98.4% on PH2, 95.1% on ISBI 2016, and 94.8% on ISBI 2017.
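The entropy-controlled feature-selection step can be illustrated with a generic sketch: score each feature column by the Shannon entropy of its value histogram and keep the highest-scoring fraction. The histogram binning and keep fraction here are assumptions, not the paper's exact criterion.

```python
import numpy as np

def entropy_select(X, keep=0.5, bins=16):
    """Rank feature columns by Shannon entropy, keep the top `keep` fraction.

    X: (n_samples, n_features) array. Returns (reduced X, kept column indices).
    """
    scores = np.empty(X.shape[1])
    for j, col in enumerate(X.T):
        hist, _ = np.histogram(col, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]                          # ignore empty bins
        scores[j] = -(p * np.log2(p)).sum()   # Shannon entropy of the column
    k = max(1, int(keep * X.shape[1]))
    kept = np.sort(np.argsort(scores)[::-1][:k])  # most informative columns
    return X[:, kept], kept
```

A constant (uninformative) column has zero entropy and is discarded first, which is the intuition behind entropy-based selection.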
Affiliation(s)
- Tanzila Saba, College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Muhammad Attique Khan, Department of Computer Science and Engineering, HITEC University, Museum Road, Taxila, Pakistan
- Amjad Rehman, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia
30. Saba T. Automated lung nodule detection and classification based on multiple classifiers voting. Microsc Res Tech 2019; 82:1601-1609. [DOI: 10.1002/jemt.23326]
Affiliation(s)
- Tanzila Saba, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
31. Khan MA, Akram T, Sharif M, Saba T, Javed K, Lali IU, Tanik UJ, Rehman A. Construction of saliency map and hybrid set of features for efficient segmentation and classification of skin lesion. Microsc Res Tech 2019; 82:741-763. [PMID: 30768826] [DOI: 10.1002/jemt.23220]
Abstract
Skin cancer is among the deadliest types of cancer, and its incidence has grown extensively worldwide over the last decade. Accurate detection and classification of melanoma require several measures, including contrast stretching, irregularity measurement, and selection of the most discriminative features. Poor lesion contrast reduces segmentation accuracy and increases classification error. To overcome this problem, an efficient model for accurate border detection and classification is presented. The proposed model improves segmentation accuracy in its preprocessing phase by enhancing the contrast of the lesion area relative to the background. The enhanced 2D blue channel is selected for constructing a saliency map, to which a threshold function is applied to produce a binary image. In addition, particle swarm optimization (PSO) based segmentation is utilized for accurate border detection and refinement. Selected features, including shape, texture, local, and global features, are extracted and then chosen by a genetic algorithm, which has the advantage of identifying the fittest chromosome. Finally, the optimized features are fed into a support vector machine (SVM) for classification. Comprehensive experiments were carried out on three dataset groups: PH2, ISBI2016, and ISIC (ISIC MSK-1, ISIC MSK-2, and ISIC UDA), with improved accuracies of 97.9, 99.1, 98.4, and 93.8%, respectively. The SVM outperforms alternatives on the selected datasets in terms of sensitivity, precision, accuracy, and FNR, and the selection method successfully removes redundant features.
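The pipeline's core step, building a saliency map from the blue channel and thresholding it into a binary lesion mask, can be sketched roughly as below. The distance-to-mean saliency and the mean threshold are simplifying assumptions; the paper's actual saliency construction and PSO-based refinement are more involved.

```python
import numpy as np

def blue_channel_saliency(img_rgb, thresh=None):
    """Build a crude saliency map from the blue channel and threshold it.

    img_rgb: (H, W, 3) uint8 array. Returns a binary (0/1) uint8 mask.
    """
    blue = img_rgb[..., 2].astype(np.float64)
    sal = np.abs(blue - blue.mean())     # distance from the mean intensity
    sal /= sal.max() + 1e-12             # normalise to [0, 1]
    t = sal.mean() if thresh is None else thresh
    return (sal > t).astype(np.uint8)    # threshold -> binary lesion mask
```

Pixels far from the image's mean blue intensity (a lesion on a relatively uniform skin background) end up above the threshold.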
Affiliation(s)
- Muhammad Attique Khan, Department of Computer Science and Engineering, HITEC University, Museum Road, Taxila, Pakistan
- Tallha Akram, Department of Electrical Engineering, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif, Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Tanzila Saba, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Kashif Javed, Department of Robotics, SMME NUST, Islamabad, Pakistan
- Ikram Ullah Lali, Department of Computer Science, University of Gujrat, Gujrat, Pakistan
- Urcun John Tanik, Computer Science and Information Systems, Texas A&M University-Commerce, USA
- Amjad Rehman, Department of Information Systems, Al Yamamah University, Riyadh, Saudi Arabia
32. Khan MA, Lali IU, Rehman A, Ishaq M, Sharif M, Saba T, Zahoor S, Akram T. Brain tumor detection and classification: A framework of marker-based watershed algorithm and multilevel priority features selection. Microsc Res Tech 2019; 82:909-922. [PMID: 30801840] [DOI: 10.1002/jemt.23238]
Abstract
Brain tumor identification from magnetic resonance images (MRI) is an important research domain in medical imaging, and computerized techniques help doctors diagnose and treat brain cancer. In this article, an automated system is developed for tumor extraction and classification from MRI, based on marker-based watershed segmentation and feature selection. The proposed system involves five primary steps: tumor contrast enhancement, tumor extraction, multimodel feature extraction, feature selection, and classification. A gamma contrast stretching approach is implemented to improve tumor contrast, and segmentation is then performed with a marker-based watershed algorithm. Shape, texture, and point features are extracted next, and only the top-ranked 70% of features are selected through a chi-square max conditional priority features approach. The selected features are fused using a serial-based concatenation method before classification with a support vector machine. All experiments were performed on three datasets: Harvard, BRATS 2013, and a privately collected MR image dataset. Simulation results clearly show that the proposed system outperforms existing methods in precision and accuracy.
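Two of the steps above, gamma contrast stretching and serial (concatenation-based) feature fusion, are simple enough to sketch directly; the gamma value below is an assumption, not the paper's setting.

```python
import numpy as np

def gamma_stretch(img, gamma=0.6):
    """Gamma contrast stretching: normalise to [0, 1], apply the power law,
    and rescale to 8-bit. gamma < 1 brightens mid-range intensities."""
    x = img.astype(np.float64) / 255.0
    return np.clip(255.0 * x ** gamma, 0, 255).astype(np.uint8)

def serial_fuse(*feature_vectors):
    """Serial fusion: flatten each feature vector and concatenate end to end."""
    return np.concatenate([np.ravel(np.asarray(f)) for f in feature_vectors])
```

Serial fusion simply stacks the shape, texture, and point descriptors into one long vector, which is then passed to the classifier.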
Affiliation(s)
- Muhammad A Khan, Department of Computer Science and Engineering, HITEC University, Museum Road, Taxila, Pakistan
- Ikram U Lali, Department of Computer Science, University of Gujrat, Gujrat, Pakistan
- Amjad Rehman, College of Business Administration, Al Yamamah University, Riyadh 11512, Saudi Arabia
- Mubashar Ishaq, Department of Computer Science, University of Gujrat, Gujrat, Pakistan
- Muhammad Sharif, Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
- Tanzila Saba, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Saliha Zahoor, Department of Computer Science, University of Gujrat, Gujrat, Pakistan
- Tallha Akram, Department of EE, COMSATS University Islamabad, Wah Cantt, Pakistan
33. Iqbal S, Ghani Khan MU, Saba T, Mehmood Z, Javaid N, Rehman A, Abbasi R. Deep learning model integrating features and novel classifiers fusion for brain tumor segmentation. Microsc Res Tech 2019; 82:1302-1315. [DOI: 10.1002/jemt.23281]
Affiliation(s)
- Sajid Iqbal, Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan; Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Muhammad U. Ghani Khan, Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Tanzila Saba, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Zahid Mehmood, Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- Nadeem Javaid, Department of Computer Science, COMSATS University Islamabad, Pakistan
- Amjad Rehman, College of Computer and Information Systems, Al Yamamah University, Riyadh, Saudi Arabia
- Rashid Abbasi, School of Computer and Technology, Anhui University, Hefei, China
34. Abbas N, Saba T, Rehman A, Mehmood Z, Javaid N, Tahir M, Khan NU, Ahmed KT, Shah R. Plasmodium species aware based quantification of malaria parasitemia in light microscopy thin blood smear. Microsc Res Tech 2019; 82:1198-1214. [DOI: 10.1002/jemt.23269]
Affiliation(s)
- Naveed Abbas, Department of Computer Science, Islamia College Peshawar, KPK, Pakistan
- Tanzila Saba, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Amjad Rehman, College of Business Administration, Al Yamamah University, Riyadh, Saudi Arabia
- Zahid Mehmood, Department of Computer Engineering, University of Engineering and Technology, Taxila, Pakistan
- Nadeem Javaid, Department of Computer Science, COMSATS University Islamabad, Pakistan
- Muhammad Tahir, Department of Computer Science, COMSATS University Islamabad, Attock Campus, Pakistan
- Roaider Shah, Department of Computer Science, Islamia College Peshawar, KPK, Pakistan
35. Tahir B, Iqbal S, Usman Ghani Khan M, Saba T, Mehmood Z, Anjum A, Mahmood T. Feature enhancement framework for brain tumor segmentation and classification. Microsc Res Tech 2019; 82:803-811. [DOI: 10.1002/jemt.23224]
Affiliation(s)
- Bilal Tahir, Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Sajid Iqbal, Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan; Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan
- M. Usman Ghani Khan, Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Tanzila Saba, Department of Information Systems, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Zahid Mehmood, Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
- Adeel Anjum, Department of Computer Science, COMSATS University Islamabad, Pakistan
- Toqeer Mahmood, Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
36. Saba T, Khan SU, Islam N, Abbas N, Rehman A, Javaid N, Anjum A. Cloud-based decision support system for the detection and classification of malignant cells in breast cancer using breast cytology images. Microsc Res Tech 2019; 82:775-785. [DOI: 10.1002/jemt.23222]
Affiliation(s)
- Tanzila Saba, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Sana Ullah Khan, Department of Computer Science, Islamia College University, Peshawar, KPK, Pakistan
- Naveed Islam, Department of Computer Science, Islamia College University, Peshawar, KPK, Pakistan
- Naveed Abbas, Department of Computer Science, Islamia College University, Peshawar, KPK, Pakistan
- Amjad Rehman, MIS Department, COBA, Al Yamamah University, Riyadh, Saudi Arabia
- Nadeem Javaid, Department of Computer Science, COMSATS University Islamabad, Pakistan
- Adeel Anjum, Department of Computer Science, COMSATS University Islamabad, Pakistan
37. Ullah H, Saba T, Islam N, Abbas N, Rehman A, Mehmood Z, Anjum A. An ensemble classification of exudates in color fundus images using an evolutionary algorithm based optimal features selection. Microsc Res Tech 2019; 82:361-372. [DOI: 10.1002/jemt.23178]
Affiliation(s)
- Hidayat Ullah, Department of Computer Science, Islamia College Peshawar, Khyber Pakhtunkhwa, Pakistan
- Tanzila Saba, Information System, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Naveed Islam, Department of Computer Science, Islamia College Peshawar, Khyber Pakhtunkhwa, Pakistan
- Naveed Abbas, Department of Computer Science, Islamia College Peshawar, Khyber Pakhtunkhwa, Pakistan
- Amjad Rehman, Information System, College of Computer and Information Systems, Al Yamamah University, Riyadh, Saudi Arabia
- Zahid Mehmood, Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
- Adeel Anjum, Department of Computer Science, COMSATS University Islamabad, Pakistan
38. Abbas N, Saba T, Rehman A, Mehmood Z, Kolivand H, Uddin M, Anjum A. Plasmodium life cycle stage classification based quantification of malaria parasitaemia in thin blood smears. Microsc Res Tech 2018; 82:283-295. [DOI: 10.1002/jemt.23170]
Affiliation(s)
- Naveed Abbas, Department of Computer Science, Islamia College Peshawar, Pakistan
- Tanzila Saba, College of Computer and Information Sciences, Prince Sultan University, Riyadh, Saudi Arabia
- Amjad Rehman, College of Computer and Information Systems, Al Yamamah University, Riyadh, Saudi Arabia
- Zahid Mehmood, Department of Software Engineering, University of Engineering and Technology, Taxila, Pakistan
- Hoshang Kolivand, Department of Computer Science, Liverpool John Moores University, Liverpool, UK
- Mueen Uddin, Information System Department, College of Engineering, Effat University of Jeddah, Jeddah, Saudi Arabia
- Adeel Anjum, Department of Computer Science, COMSATS University Islamabad, Islamabad, Pakistan