1
Shaukat N, Amin J, Sharif M, Azam F, Kadry S, Krishnamoorthy S. Three-Dimensional Semantic Segmentation of Diabetic Retinopathy Lesions and Grading Using Transfer Learning. J Pers Med 2022; 12(9):1454. [PMID: 36143239] [PMCID: PMC9501488] [DOI: 10.3390/jpm12091454] [Received: 06/25/2022] [Revised: 08/18/2022] [Accepted: 08/20/2022]
Abstract
Diabetic retinopathy (DR) is a severe disease that leads to vision impairment when left undetected. In this article, learning-based techniques are presented for the segmentation and classification of DR lesions. In the segmentation phase, the pre-trained Xception model is utilized for deep feature extraction, and the extracted features are fed to Deeplabv3 for semantic segmentation. For the training of the segmentation model, an experiment is performed to select the optimal hyperparameters that provide effective segmentation results in the testing phase. The multi-classification model extracts features using the fully connected (FC) MatMul layer of EfficientNet-b0 and the pool-10 layer of SqueezeNet. The features from both models are fused serially into a vector of dimension N × 2020, from which the best N × 1032 features are chosen by applying the Marine Predators Algorithm (MPA). The multi-classification of the DR lesions into grades 0, 1, 2, and 3 is performed using neural network and KNN classifiers. The performance of the proposed method is validated on the open-access datasets DIARETDB1, e-ophtha-EX, IDRiD, and Messidor, and the obtained results compare favorably with the latest published works.
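The serial-fusion and feature-selection step described in this abstract can be sketched in a few lines. The matrices below are hypothetical, the per-model widths are illustrative choices that sum to the paper's N × 2020, and a simple variance score stands in for the Marine Predators Algorithm, which is a metaheuristic search over feature subsets, not a closed-form ranking:

```python
import numpy as np

# Hypothetical deep-feature matrices for N images; widths are illustrative
# stand-ins for the EfficientNet-b0 FC and SqueezeNet pool-10 outputs.
rng = np.random.default_rng(0)
N = 8
feats_a = rng.normal(size=(N, 1000))   # stand-in EfficientNet-b0 features
feats_b = rng.normal(size=(N, 1020))   # stand-in SqueezeNet features

# Serial fusion: horizontal concatenation along the feature axis -> N x 2020.
fused = np.concatenate([feats_a, feats_b], axis=1)

# The paper selects N x 1032 columns with MPA; here a per-feature variance
# score is a stand-in to show the column-selection mechanics only.
scores = fused.var(axis=0)
keep = np.sort(np.argsort(scores)[::-1][:1032])
selected = fused[:, keep]              # N x 1032 fused-and-selected vector
```

The selected matrix is what would then be passed to the neural network and KNN classifiers.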
Affiliation(s)
- Natasha Shaukat
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47010, Pakistan
- Javeria Amin
- Department of Computer Science, University of Wah, Wah Campus, Wah Cantt 47010, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47010, Pakistan
- Correspondence: (M.S.); (S.K.)
- Faisal Azam
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47010, Pakistan
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Sujatha Krishnamoorthy
- Zhejiang Bioinformatics International Science and Technology Cooperation Center, Wenzhou-Kean University, Wenzhou 325060, China
- Wenzhou Municipal Key Lab of Applied Biomedical and Biopharmaceutical Informatics, Wenzhou-Kean University, Wenzhou 325060, China
- Correspondence: (M.S.); (S.K.)
2
Yunus U, Amin J, Sharif M, Yasmin M, Kadry S, Krishnamoorthy S. Recognition of Knee Osteoarthritis (KOA) Using YOLOv2 and Classification Based on Convolutional Neural Network. Life (Basel) 2022; 12:1126. [PMID: 36013305] [PMCID: PMC9410095] [DOI: 10.3390/life12081126] [Received: 05/26/2022] [Revised: 07/02/2022] [Accepted: 07/05/2022]
Abstract
Knee osteoarthritis (KOA) is one of the most debilitating forms of arthritis. If not treated at an early stage, it may lead to knee replacement, so early diagnosis of KOA is necessary for better treatment. Manual KOA detection is a time-consuming and error-prone task, and computerized methods play a vital role in accurate and speedy detection. Therefore, a method for the classification and localization of KOA from radiographic images is proposed in this work. The two-dimensional radiographs are converted into three dimensions, and LBP features of dimension N × 59 are extracted, from which the best N × 55 features are selected using PCA. Deep features are also extracted using AlexNet and DarkNet-53 with dimensions of N × 1024 and N × 4096, respectively, where N represents the number of images; N × 1000 features are then selected from each model using PCA. Finally, the extracted features are fused serially into a vector of dimension N × 2055 and passed to the classifiers under 10-fold cross-validation, providing an accuracy of 90.6% for the classification of KOA grades. The localization model combines the Open Neural Network Exchange (ONNX) format with YOLOv2 and is trained on the selected hyper-parameters, providing 0.98 mAP for the localization of classified images. The experimental analysis shows that the presented framework provides better results than existing works.
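The PCA reduction step above (e.g. DarkNet-53's N × 4096 features down to N × 1000) can be sketched via the SVD of the centered feature matrix. PCA can keep at most min(N, D) components, so this toy example uses N = 50 stand-in images and keeps 40 components; the mechanics are identical at full scale, and all sizes here are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
deep_feats = rng.normal(size=(50, 4096))   # stand-in DarkNet-53 features

# PCA via SVD: center the data, then project onto the top-k right
# singular vectors (the principal directions).
centered = deep_feats - deep_feats.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 40
reduced = centered @ Vt[:k].T              # N x k reduced representation

# The final fused vector is again a horizontal concatenation, as in the
# paper's N x 2055 (reduced deep blocks plus PCA-selected LBP features).
other = rng.normal(size=(50, 15))          # stand-in second feature block
fused = np.concatenate([reduced, other], axis=1)
```

At the paper's scale, `k = 1000` per deep model plus 55 LBP columns gives the stated N × 2055 fused vector.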
Affiliation(s)
- Usman Yunus
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47010, Pakistan; (U.Y.); (M.S.); (M.Y.)
- Javeria Amin
- Department of Computer Science, University of Wah, Wah Cantt 47010, Pakistan;
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47010, Pakistan; (U.Y.); (M.S.); (M.Y.)
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47010, Pakistan; (U.Y.); (M.S.); (M.Y.)
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway;
- Sujatha Krishnamoorthy
- Zhejiang Bioinformatics International Science and Technology Cooperation Center, Wenzhou-Kean University, Wenzhou 325060, China
- Wenzhou Municipal Key Lab of Applied Biomedical and Biopharmaceutical Informatics, Wenzhou-Kean University, Wenzhou 325060, China
3
Amin J, Anjum MA, Sharif M, Kadry S, Nadeem A, Ahmad SF. Liver Tumor Localization Based on YOLOv3 and 3D-Semantic Segmentation Using Deep Neural Networks. Diagnostics (Basel) 2022; 12(4):823. [PMID: 35453870] [PMCID: PMC9025116] [DOI: 10.3390/diagnostics12040823] [Received: 02/02/2022] [Revised: 03/18/2022] [Accepted: 03/22/2022]
Abstract
Worldwide, more than 1.5 million deaths occur due to liver cancer every year. The use of computed tomography (CT) for early detection of liver cancer could save millions of lives, and there is an urgent need for a computerized method to interpret, detect, and analyze CT scans reliably, easily, and correctly. However, precise segmentation of minute tumors is a difficult task because of variations in tumor shape, intensity, and size, and the low contrast between the tumor and adjacent liver tissue. To address these concerns, a model comprising three parts is proposed: synthetic image generation, localization, and segmentation. An optimized generative adversarial network (GAN) is utilized for the generation of synthetic images. The generated images are localized using an improved localization model in which deep features are extracted from a pre-trained ResNet-50 model and fed into a YOLOv3 detector. The proposed modified model localizes and classifies minute liver tumors with 0.99 mean average precision (mAP). In the third part, segmentation, a pre-trained InceptionResNetv2 is employed as the base network of Deeplabv3 and subsequently trained on fine-tuned parameters with annotated ground-truth masks. The experiments show that the proposed approach achieves greater than 95% accuracy in the testing phase and that, in comparison with recently published work in this domain, it localizes and segments the liver and minute liver tumors more accurately.
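The 0.99 mAP figure reported for localization is built on intersection-over-union (IoU) matching between predicted and ground-truth boxes. This minimal helper shows the IoU core with hypothetical `[x1, y1, x2, y2]` boxes; it is an illustration of the metric, not the authors' evaluation code:

```python
def iou(box_a, box_b):
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    # Union = sum of areas minus the double-counted intersection.
    return inter / (area_a + area_b - inter)

overlap = iou([0, 0, 10, 10], [5, 5, 15, 15])  # 25 / 175, about 0.143
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5); mAP then averages the precision over the resulting per-class precision-recall curves.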
Affiliation(s)
- Javaria Amin
- Department of Computer Science, University of Wah, Wah Cantt 47040, Pakistan;
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47040, Pakistan;
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4609 Kristiansand, Norway
- Correspondence:
- Ahmed Nadeem
- Department of Pharmacology & Toxicology, College of Pharmacy, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia; (A.N.); (S.F.A.)
- Sheikh F. Ahmad
- Department of Pharmacology & Toxicology, College of Pharmacy, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia; (A.N.); (S.F.A.)
4
Nazir T, Nawaz M, Javed A, Malik KM, Saudagar AKJ, Khan MB, Abul Hasanat MH, AlTameem A, AlKathami M. COVID-DAI: A novel framework for COVID-19 detection and infection growth estimation using computed tomography images. Microsc Res Tech 2022; 85:2313-2330. [PMID: 35194866] [PMCID: PMC9088346] [DOI: 10.1002/jemt.24088] [Received: 10/24/2021] [Revised: 02/01/2022] [Accepted: 02/14/2022]
Abstract
The COVID-19 pandemic is spreading at a fast pace around the world and has a high mortality rate. There is no proper treatment for COVID-19, and its multiple variants (for example, Alpha, Beta, Gamma, and Delta), being more infectious in nature and affecting millions of people, further complicate the detection process, leaving victims at risk of death. Timely and accurate diagnosis of this deadly virus can not only save patients' lives but also spare them complex treatment procedures. Accurate segmentation and classification of COVID-19 are tedious because of the extensive variations in its shape and its similarity to other diseases such as pneumonia. Furthermore, existing techniques have hardly focused on infection growth estimation over time, which can assist doctors in better analyzing the condition of COVID-19-affected patients. In this work, we address the shortcomings of existing studies by proposing a model capable of segmenting and classifying COVID-19 from computed tomography images and predicting its behavior over a certain period. The framework comprises four main steps: (i) data preparation, (ii) segmentation, (iii) infection growth estimation, and (iv) classification. After the pre-processing step, we introduce a DenseNet-77-based UNET approach: DenseNet-77 is used in the encoder module of the UNET model to compute the deep keypoints, which are then segmented to delineate the coronavirus region. Next, the infection growth of COVID-19 per patient is estimated using blob analysis. Finally, we employ the DenseNet-77 framework as an end-to-end network to classify the input images into three classes, namely healthy, COVID-19-affected, and pneumonia. We evaluated the proposed model on the COVID-19-20 and COVIDx CT-2A datasets for the segmentation and classification tasks, respectively. Furthermore, unlike existing techniques, we performed a cross-dataset evaluation to show the generalization ability of our method. The quantitative and qualitative evaluations confirm that our method is robust for both COVID-19 segmentation and classification and can accurately predict infection growth in a certain time frame.
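The infection-growth step can be sketched as blob analysis on two segmentation masks of the same patient at different times: label the connected infected regions, total their areas, and compare. The masks below are hypothetical 0/1 arrays standing in for the UNET output; the paper's exact growth measure may differ:

```python
import numpy as np
from scipy import ndimage

# Toy segmentation masks at two timepoints for one patient.
mask_t0 = np.zeros((8, 8), dtype=int)
mask_t0[1:3, 1:3] = 1                      # one small lesion, area 4
mask_t1 = np.zeros((8, 8), dtype=int)
mask_t1[1:4, 1:4] = 1                      # the lesion grew, area 9
mask_t1[6:8, 6:8] = 1                      # a new lesion appeared, area 4

def blob_stats(mask):
    # Connected-component labelling gives the blob count; the mask sum
    # gives the total infected area in pixels.
    _, n_blobs = ndimage.label(mask)
    return n_blobs, int(mask.sum())

blobs_t0, area_t0 = blob_stats(mask_t0)
blobs_t1, area_t1 = blob_stats(mask_t1)
growth = (area_t1 - area_t0) / area_t0     # relative growth between scans
```

Here the infected area grows from 4 to 13 pixels across two blobs, a relative growth of 2.25; on real CT, pixel areas would be scaled by the scan's spatial resolution.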
Affiliation(s)
- Tahira Nazir
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Ali Javed
- Department of Computer Science, University of Engineering and Technology, Taxila, Pakistan
- Khalid Mahmood Malik
- Department of Computer Science and Engineering, Oakland University, Rochester, Michigan, USA
- Abdul Khader Jilani Saudagar
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Muhammad Badruddin Khan
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Mozaherul Hoque Abul Hasanat
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Abdullah AlTameem
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Mohammad AlKathami
- Information Systems Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
5
Saleem S, Amin J, Sharif M, Anjum MA, Iqbal M, Wang SH. A deep network designed for segmentation and classification of leukemia using fusion of the transfer learning models. Complex Intell Syst 2021. [DOI: 10.1007/s40747-021-00473-z]
Abstract
White blood cells (WBCs) are the portion of the immune system that fights against germs. Leukemia is the most common blood cancer and may lead to death. It occurs due to the production of a large number of immature WBCs in the bone marrow that destroy healthy cells. To reduce the severity of this disease, it is necessary to diagnose the shapes of immature cells at an early stage, which ultimately reduces the mortality rate of patients. Recently, different segmentation and classification methods based on deep-learning (DL) models have been presented, but they still have some limitations. This research proposes a modified DL approach for the accurate segmentation of leukocytes and their classification. The proposed technique includes two core steps: preprocessing-based classification and segmentation. In preprocessing, synthetic images are generated using a generative adversarial network (GAN) and normalized by color transformation. The optimal deep features are extracted from each blood-smear image using the pre-trained deep models DarkNet-53 and ShuffleNet. More informative features are selected by principal component analysis (PCA) and fused serially for classification. Morphological operations based on color thresholding, combined with a deep semantic method, are utilized for leukemia segmentation of the classified cells. The classification accuracy achieved on the ALL-IDB and LISC datasets is 100% and 99.70%, respectively, for the classification of leukocytes, i.e., blast, no blast, basophils, neutrophils, eosinophils, lymphocytes, and monocytes, whereas semantic segmentation achieves 99.10% average and 98.60% global accuracy. The proposed method achieves outstanding outcomes compared with the latest existing research works.
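The thresholding-plus-morphology segmentation step can be sketched on a single-channel intensity map (leukocyte nuclei stain dark in blood smears): threshold, then apply a binary opening to remove speckle. The image, threshold, and structuring element below are illustrative assumptions, not the paper's tuned pipeline:

```python
import numpy as np
from scipy import ndimage

# Hypothetical stain-intensity map with one bright nucleus-like region.
rng = np.random.default_rng(2)
img = rng.uniform(0.0, 0.4, size=(16, 16))  # dim background texture
img[4:10, 4:10] = 0.9                       # 6x6 nucleus-like region

mask = img > 0.5                            # intensity/color threshold
mask[0, 0] = True                           # inject one speckle-noise pixel

# Binary opening (erosion then dilation) removes the isolated pixel but
# restores the solid 6x6 region to its original extent.
clean = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
```

In the paper this kind of cleaned mask would then be refined by the deep semantic model rather than used directly.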
6
Amin J, Anjum MA, Sharif M, Saba T, Tariq U. An intelligence design for detection and classification of COVID19 using fusion of classical and convolutional neural network and improved microscopic features selection approach. Microsc Res Tech 2021; 84:2254-2267. [PMID: 33964096] [PMCID: PMC8237066] [DOI: 10.1002/jemt.23779] [Received: 10/15/2020] [Revised: 02/15/2021] [Accepted: 04/03/2021]
Abstract
COVID-19 is caused by an infection of the respiratory system. It is caused by an RNA virus that can infect animal and human species and, in the severe stage, causes pneumonia in human beings. In this research, hand-crafted and deep microscopic features are used to classify lung infection. The proposed work consists of two phases. In phase I, the infected lung region is segmented using the proposed U-Net deep learning model. Hand-crafted features such as histogram of oriented gradients (HOG), noise-to-harmonic ratio (NHr), and segmentation-based fractal texture analysis (SFTA) are extracted from the segmented image, and the optimum features are selected from each feature vector using entropy. In phase II, local binary patterns (LBP), speeded-up robust features (SURF), and deep features are extracted from the input CT images using pre-trained networks such as Inceptionv3 and ResNet101, and the optimum features are again selected based on entropy. Finally, the entropy-selected features are fused in two ways: (i) the hand-crafted features (HOG, NHr, SFTA, LBP, SURF) are concatenated horizontally, and (ii) the hand-crafted features are combined with the deep features. The fused optimum feature vectors are passed to ensemble models (boosted tree, bagged tree, and RUSBoosted tree) for COVID-19 classification in both settings. The proposed methodology is evaluated on three benchmark datasets, two of which are employed for the experiments, and the results show that the fusion of hand-crafted and deep microscopic features provides better results than the fusion of hand-crafted features alone.
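The entropy-based selection used in both phases can be sketched as follows: score each feature column by the Shannon entropy of its histogram and keep the highest-entropy (most informative) columns. The exact criterion in the paper may differ; the matrix sizes, bin count, and constant-column setup below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
N, D, K = 100, 20, 8                   # images, features, features to keep
feats = rng.normal(size=(N, D))
feats[:, :5] = 0.0                     # constant columns carry no information

def column_entropy(col, bins=16):
    # Shannon entropy (in bits) of the column's value histogram.
    counts, _ = np.histogram(col, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

scores = np.array([column_entropy(feats[:, j]) for j in range(D)])
keep = np.argsort(scores)[::-1][:K]    # top-K most informative columns
selected = feats[:, np.sort(keep)]
```

Constant columns score exactly zero entropy, so the selection naturally discards them; the selected columns are what would be fused and passed to the ensemble classifiers.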
Affiliation(s)
- Javaria Amin
- Department of Computer Science, University of Wah, Wah, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt 47040, Pakistan
- Tanzila Saba
- Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Usman Tariq
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia