1. Devaraj AR, Marianthiran VJ. Advancements in Viral Genomics: Gated Recurrent Unit Modeling of SARS-CoV-2, SARS, MERS, and Ebola viruses. Rev Soc Bras Med Trop 2025; 58:e004012024. [PMID: 39936709] [PMCID: PMC11805527] [DOI: 10.1590/0037-8682-0178-2024]
Abstract
BACKGROUND Emerging infections have posed persistent threats to humanity throughout history. The rapid and unprecedented anthropogenic, behavioral, and social transformations of the past century have expedited the emergence of novel pathogens, intensifying their impact on the global human population. METHODS This study aimed to comprehensively analyze and compare the genomic sequences of four distinct viruses: SARS-CoV-2, SARS, MERS, and Ebola. Advanced genomic sequencing techniques and a Gated Recurrent Unit-based deep learning model were used to examine the intricate genetic makeup of these viruses. The study sheds light on their evolutionary dynamics, transmission patterns, and pathogenicity, and contributes to the development of effective diagnostic and therapeutic interventions. RESULTS The model exhibited exceptional performance, as evidenced by accuracy values of 99.01%, 98.91%, 98.35%, and 98.04% for SARS-CoV-2, SARS, MERS, and Ebola, respectively. Precision values ranged from 98.1% to 98.72%, recall values consistently surpassed 92%, and F1 scores ranged from 95.47% to 96.37%. CONCLUSIONS These results underscore the robustness of the model and its potential utility in genomic analysis, paving the way for enhanced understanding of, preparedness for, and response to emerging viral threats. Future work will focus on creating better diagnostic instruments for the early identification of viral illnesses, developing vaccines, and tailoring treatments to the genetic composition and evolutionary patterns of different viruses. The model can also be adapted to examine a wider variety of diseases and newly discovered viruses in order to predict future outbreaks and their effects on global health.
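As a rough illustration of the kind of GRU-based sequence classifier this abstract describes, the sketch below one-hot encodes nucleotide windows and stacks two GRU layers ahead of a softmax over the four viruses. The window length, layer sizes, and training settings are assumptions for illustration, not the authors' configuration.

```python
# Minimal GRU sequence classifier of the kind described above (illustrative only).
# Assumes fixed-length windows of one-hot-encoded nucleotides (A, C, G, T);
# layer sizes and the four-class output are placeholders, not the authors' settings.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 1000          # nucleotides per input window (assumed)
NUM_CLASSES = 4         # SARS-CoV-2, SARS, MERS, Ebola

def one_hot_encode(seq: str, length: int = SEQ_LEN) -> np.ndarray:
    """One-hot encode a nucleotide string into a (length, 4) array."""
    table = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((length, 4), dtype=np.float32)
    for i, base in enumerate(seq[:length]):
        if base in table:
            out[i, table[base]] = 1.0
    return out

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, 4)),
    layers.GRU(128, return_sequences=True),   # first recurrent layer
    layers.GRU(64),                           # second recurrent layer
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```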
Affiliation(s)
- Abhishak Raj Devaraj
- Noorul Islam Centre for Higher Education, Department of Computer Applications, Tamilnadu, India
- Victor Jose Marianthiran
- Vel Tech Multi Tech Dr. Rangarajan Dr. Sakunthala Engineering College, Department of Artificial Intelligence and Data Science, Tamilnadu, India
2. Gugulothu P, Bhukya R. Coot-Lion optimized deep learning algorithm for COVID-19 point mutation rate prediction using genome sequences. Comput Methods Biomech Biomed Engin 2024; 27:1410-1429. [PMID: 37668061] [DOI: 10.1080/10255842.2023.2244109]
Abstract
In this study, a deep quantum neural network (Deep QNN) based on a Lion-based Coot algorithm (LBCA-based Deep QNN) is employed for COVID-19 prediction. Genome sequences are first subjected to feature extraction, and the extracted features are fused using the Bray-Curtis distance and a deep belief network (DBN). A Deep QNN then performs the prediction. The LBCA is obtained by integrating the Coot algorithm with the Lion Optimization Algorithm (LOA), and COVID-19 predictions are made at the identified mutation points. The LBCA-based Deep QNN outperformed comparable methods, with a testing accuracy of 0.941, a true positive rate of 0.931, and a false positive rate of 0.869.
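The abstract names the Bray-Curtis distance as part of the feature-fusion step but does not give the fusion rule. The snippet below only illustrates the distance itself (via SciPy) and one hypothetical similarity-weighted blend; it is not the paper's method.

```python
# Bray-Curtis distance between two extracted feature vectors, as referenced above.
# How the paper combines features using this distance is not stated in the abstract;
# the weighted blend below is a hypothetical example only.
import numpy as np
from scipy.spatial.distance import braycurtis

feat_a = np.array([0.12, 0.40, 0.05, 0.33])   # hypothetical feature vector A
feat_b = np.array([0.10, 0.38, 0.07, 0.45])   # hypothetical feature vector B

d = braycurtis(feat_a, feat_b)   # sum(|a - b|) / sum(|a + b|); in [0, 1] for non-negative inputs
print(f"Bray-Curtis distance: {d:.4f}")

# One plausible (assumed) fusion rule: weight the vectors by their similarity.
similarity = 1.0 - d
fused = similarity * feat_a + (1.0 - similarity) * feat_b
print("Fused feature vector:", fused)
```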
Affiliation(s)
- Praveen Gugulothu
- Department of Computer Science and Engineering, National Institute of Technology Warangal, Hanamkonda, Telangana 506004, India
- Raju Bhukya
- Department of Computer Science and Engineering, National Institute of Technology Warangal, Hanamkonda, Telangana 506004, India
3. Chen Z, Yu Y, Liu S, Du W, Hu L, Wang C, Li J, Liu J, Zhang W, Peng X. A deep learning and radiomics fusion model based on contrast-enhanced computed tomography improves preoperative identification of cervical lymph node metastasis of oral squamous cell carcinoma. Clin Oral Investig 2023; 28:39. [PMID: 38151672] [DOI: 10.1007/s00784-023-05423-2]
Abstract
OBJECTIVES In this study, we constructed and validated models based on deep learning and radiomics to facilitate the preoperative diagnosis of cervical lymph node metastasis (LNM) using contrast-enhanced computed tomography (CECT). MATERIALS AND METHODS CECT scans of 100 patients with OSCC (217 metastatic and 1973 non-metastatic cervical lymph nodes; development set, 76 patients; internally independent test set, 24 patients) who received treatment at the Peking University School and Hospital of Stomatology between 2012 and 2016 were retrospectively collected. Clinical diagnoses and pathological findings were used to establish the gold standard for metastatic cervical LNs. A reader study with two clinicians was also performed to evaluate lymph node status in the test set. The performance of the proposed models and the clinicians was evaluated and compared using the area under the receiver operating characteristic curve (AUC), accuracy (ACC), sensitivity (SEN), and specificity (SPE). RESULTS A fusion model combining deep learning with radiomics showed the best performance (ACC, 89.2%; SEN, 92.0%; SPE, 88.9%; and AUC, 0.950 [95% confidence interval: 0.908-0.993, P < 0.001]) in the test set. In comparison with the clinicians, the fusion model showed higher sensitivity (92.0% vs. 72.0% and 60.0%) but lower specificity (88.9% vs. 97.5% and 98.8%). CONCLUSION A fusion model combining radiomics and deep learning approaches outperformed the single-technique models and showed great potential to accurately predict cervical LNM in patients with OSCC. CLINICAL RELEVANCE The fusion model can complement clinicians' preoperative identification of LNM in patients with OSCC.
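For readers unfamiliar with this kind of radiomics/deep-learning fusion, the sketch below shows one generic late-fusion layout: a small CNN branch over a lymph-node CT patch concatenated with a vector of handcrafted radiomics features. The branch sizes, feature count, and classifier head are illustrative assumptions, not the published architecture.

```python
# Illustrative late-fusion classifier combining deep image features with handcrafted
# radiomics features, in the spirit of the fusion model above. Feature sizes, the
# backbone, and the head are assumptions, not the authors' architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_RADIOMICS = 107      # assumed number of radiomics features per lymph node

# Deep branch: a small CNN over a cropped lymph-node CT patch.
image_in = layers.Input(shape=(64, 64, 1), name="ct_patch")
x = layers.Conv2D(32, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# Radiomics branch: precomputed handcrafted features.
radiomics_in = layers.Input(shape=(NUM_RADIOMICS,), name="radiomics")
r = layers.Dense(64, activation="relu")(radiomics_in)

# Fusion: concatenate the two representations and classify metastatic vs. non-metastatic.
fused = layers.concatenate([x, r])
fused = layers.Dense(64, activation="relu")(fused)
out = layers.Dense(1, activation="sigmoid", name="metastasis_prob")(fused)

model = models.Model(inputs=[image_in, radiomics_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```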
Affiliation(s)
- Zhen Chen
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Yao Yu
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Shuo Liu
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Wen Du
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Leihao Hu
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Congwei Wang
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Jiaqi Li
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Jianbo Liu
- Huafang Hanying Medical Technology Co., Ltd, No.19, West Bridge Road, Miyun District, Beijing, 101520, People's Republic of China
- Wenbo Zhang
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China
- Xin Peng
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology & National Center of Stomatology & National Clinical Research Center for Oral Diseases & National Engineering Research Center of Oral Biomaterials and Digital Medical Devices & Beijing Key Laboratory of Digital Stomatology & Research Center of Engineering and Technology for Computerized Dentistry Ministry of Health & NMPA Key Laboratory for Dental Materials, No. 22, Zhongguancun South Avenue, Haidian District, Beijing, 100081, People's Republic of China.
4. Liu S, Masurkar AV, Rusinek H, Chen J, Zhang B, Zhu W, Fernandez-Granda C, Razavian N. Generalizable deep learning model for early Alzheimer's disease detection from structural MRIs. Sci Rep 2022; 12:17106. [PMID: 36253382] [PMCID: PMC9576679] [DOI: 10.1038/s41598-022-20674-x]
Abstract
Early diagnosis of Alzheimer's disease plays a pivotal role in patient care and clinical trials. In this study, we developed a new approach based on 3D deep convolutional neural networks to accurately differentiate mild Alzheimer's disease dementia from mild cognitive impairment (MCI) and cognitively normal individuals using structural MRIs. For comparison, we built a reference model based on the volumes and thickness of previously reported brain regions that are known to be implicated in disease progression. We validated both models on an internal held-out cohort from the Alzheimer's Disease Neuroimaging Initiative (ADNI) and on an external independent cohort from the National Alzheimer's Coordinating Center (NACC). The deep learning model is accurate, achieving an area under the curve (AUC) of 85.12 when distinguishing between cognitively normal subjects and subjects with either MCI or mild Alzheimer's dementia. In the more challenging task of detecting MCI, it achieves an AUC of 62.45. It is also significantly faster than the volume/thickness model, in which the volumes and thickness need to be extracted beforehand. The model can also be used to forecast progression: subjects with MCI who were misclassified as having mild Alzheimer's disease dementia by the model were faster to progress to dementia over time. An analysis of the features learned by the proposed model shows that it relies on a wide range of regions associated with Alzheimer's disease. These findings suggest that deep neural networks can automatically learn to identify imaging biomarkers that are predictive of Alzheimer's disease and leverage them to achieve accurate early detection of the disease.
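A minimal sketch of a 3D convolutional classifier of the general kind described here is shown below, assuming preprocessed MRI volumes resampled to a fixed grid; the volume size, channel counts, and three-way output are placeholders rather than the authors' design.

```python
# Minimal 3D convolutional classifier of the kind described above, assuming
# preprocessed structural MRI volumes resampled to a fixed grid. The volume size,
# filter counts, and three-way output (CN / MCI / AD) are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

VOLUME_SHAPE = (96, 96, 96, 1)   # assumed resampled T1 volume, single channel
NUM_CLASSES = 3                  # cognitively normal, MCI, mild AD dementia

model = models.Sequential([
    layers.Input(shape=VOLUME_SHAPE),
    layers.Conv3D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling3D(pool_size=2),
    layers.Conv3D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling3D(pool_size=2),
    layers.Conv3D(64, 3, activation="relu", padding="same"),
    layers.GlobalAveragePooling3D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```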
Affiliation(s)
- Sheng Liu
- Center for Data Science, NYU, 60 Fifth Avenue, 5th Floor, New York, NY, 10011, USA
- Arjun V Masurkar
- Center for Cognitive Neurology, Department of Neurology, NYU Grossman School of Medicine, 60 Fifth Avenue, 5th Floor, New York, NY, 10011, USA
- Neuroscience Institute, NYU Grossman School of Medicine, 145 E 32nd St #2, New York, NY, 10016, USA
- Henry Rusinek
- Department of Radiology, NYU Grossman School of Medicine, 660 First Avenue, New York, NY, 10016, USA
- Department of Psychiatry, NYU Grossman School of Medicine, 227 East 30th St, 6th Floor, New York, NY, 10016, USA
- Jingyun Chen
- Center for Cognitive Neurology, Department of Neurology, NYU Grossman School of Medicine, 60 Fifth Avenue, 5th Floor, New York, NY, 10011, USA
- Department of Radiology, NYU Grossman School of Medicine, 660 First Avenue, New York, NY, 10016, USA
- Ben Zhang
- Department of Radiology, NYU Grossman School of Medicine, 660 First Avenue, New York, NY, 10016, USA
- Weicheng Zhu
- Center for Data Science, NYU, 60 Fifth Avenue, 5th Floor, New York, NY, 10011, USA
- Carlos Fernandez-Granda
- Center for Data Science, NYU, 60 Fifth Avenue, 5th Floor, New York, NY, 10011, USA.
- Courant Institute of Mathematical Sciences, NYU, 251 Mercer St # 801, New York, NY, 10012, USA.
- Narges Razavian
- Center for Data Science, NYU, 60 Fifth Avenue, 5th Floor, New York, NY, 10011, USA.
- Center for Cognitive Neurology, Department of Neurology, NYU Grossman School of Medicine, 60 Fifth Avenue, 5th Floor, New York, NY, 10011, USA.
- Department of Radiology, NYU Grossman School of Medicine, 660 First Avenue, New York, NY, 10016, USA.
- Department of Population Health, NYU Grossman School of Medicine, 227 East 30th street 639, New York, NY, 10016, USA.
5.
Abstract
Medical images of brain tumors are critical for characterizing tumor pathology and enabling early diagnosis. Multiple imaging modalities exist for brain tumors, and fusing the unique features of each magnetic resonance imaging (MRI) modality can accurately determine the nature of a tumor. The current genetic analysis approach is time-consuming and requires surgical extraction of brain tissue samples, so accurate classification of multi-modal brain tumor images can speed up the detection process and alleviate patient suffering. Medical image fusion refers to effectively merging the significant information of multiple source images of the same tissue into one image that carries abundant information for diagnosis. This paper proposes a novel attentive deep-learning-based classification model that integrates multi-modal feature aggregation, a lite attention mechanism, separable embedding, and modal-wise shortcuts for performance improvement. We evaluate the model on the RSNA-MICCAI dataset, a scenario-specific medical image dataset, and demonstrate that the proposed method outperforms the state of the art (SOTA) by around 3%.
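The abstract names multi-modal feature aggregation, a lite attention mechanism, and modal-wise shortcuts without giving architectural detail. The sketch below is one hedged reading of those ideas (per-modality encoders, softmax-weighted pooling over modality embeddings, and an additive shortcut) and should not be taken as the paper's model; the modality list follows the RSNA-MICCAI MRI sequences, and all sizes are assumptions.

```python
# Hedged sketch of modality-wise feature aggregation with a lightweight attention
# weighting, in the spirit of the model described above. The per-modality encoder,
# embedding size, and attention form are assumptions, not the paper's architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

MODALITIES = ["FLAIR", "T1w", "T1wCE", "T2w"]   # RSNA-MICCAI MRI sequences
EMBED_DIM = 128

def modality_encoder(name: str) -> models.Model:
    """Small CNN encoder, one instance per modality."""
    inp = layers.Input(shape=(128, 128, 1), name=f"{name}_in")
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(EMBED_DIM, activation="relu")(x)
    return models.Model(inp, x, name=f"enc_{name}")

inputs, embeddings = [], []
for m in MODALITIES:
    enc = modality_encoder(m)
    inputs.append(enc.input)
    embeddings.append(enc.output)

stacked = layers.Lambda(lambda t: tf.stack(t, axis=1))(embeddings)   # (batch, 4, EMBED_DIM)
scores = layers.Dense(1)(stacked)                                    # one score per modality
weights = layers.Softmax(axis=1)(scores)
attended = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([stacked, weights])
shortcut = layers.Average()(embeddings)          # modal-wise shortcut: plain average of embeddings
fused = layers.Add()([attended, shortcut])
out = layers.Dense(1, activation="sigmoid")(fused)   # binary tumor-characteristic output (assumed)

model = models.Model(inputs=inputs, outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
```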
6. Guidance Image-Based Enhanced Matched Filter with Modified Thresholding for Blood Vessel Extraction. Symmetry (Basel) 2022. [DOI: 10.3390/sym14020194]
Abstract
Fundus images have been established as an important factor in analyzing and recognizing many cardiovascular and ophthalmological diseases. Consequently, precise segmentation of blood vessels using computer vision is vital to the recognition of such ailments. Although clinicians have adopted computer-aided diagnostics (CAD) in day-to-day practice, it is still quite difficult to conduct fully automated analysis based exclusively on the information contained in fundus images. In fundus image applications, one approach to automatic analysis is to ascertain symmetry/asymmetry details from corresponding areas of the retina and investigate their association with positive clinical findings. In the field of diabetic retinopathy, matched filters are an established technique for vessel extraction; however, their efficiency is reduced on noisy images. In this work, a joint model of a fast guided filter and a matched filter is suggested for enhancing abnormal retinal images with low vessel contrast. Correctly extracting all the information from an image is one of the important factors in image enhancement. A guided filter has excellent edge-preserving properties but still tends to suffer from halo artifacts near edges. Fast guided filtering subsamples the filtering input image and the guidance image, calculates the local linear coefficients, and then upsamples them. In short, the proposed technique applies a fast guided filter and a matched filter to attain improved performance measures for vessel extraction. The recommended technique was assessed on the DRIVE and CHASE_DB1 datasets and achieved accuracies of 0.9613 and 0.960, respectively, both higher than the accuracy of the original matched filter and other suggested vessel segmentation algorithms.
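The sketch below illustrates the two stages named in this abstract in generic form: edge-preserving guided filtering of the (inverted) green channel, using the guided filter from opencv-contrib as a stand-in for the paper's fast guided filter, followed by a classic oriented matched filter whose maximum response over orientations is thresholded. The kernel parameters, file paths, and threshold are common literature defaults and placeholders, not the authors' values.

```python
# Generic guided-filter + matched-filter vessel enhancement (illustrative only).
import cv2
import numpy as np

def matched_filter_kernel(sigma=2.0, length=9, angle_deg=0.0):
    """Zero-mean kernel with a Gaussian cross-section, elongated along angle_deg."""
    half = int(3 * sigma) + length // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float32)
    theta = np.deg2rad(angle_deg)
    u = xs * np.cos(theta) + ys * np.sin(theta)      # along the vessel direction
    v = -xs * np.sin(theta) + ys * np.cos(theta)     # across the vessel
    kernel = np.exp(-(v ** 2) / (2 * sigma ** 2))
    kernel[np.abs(u) > length / 2] = 0.0
    support = kernel != 0
    kernel[support] -= kernel[support].mean()        # zero mean inside the support
    return kernel

img = cv2.imread("fundus.png")                            # placeholder path
green = 1.0 - img[:, :, 1].astype(np.float32) / 255.0     # invert green channel: vessels become bright

# Edge-preserving smoothing; cv2.ximgproc requires the opencv-contrib-python package.
smoothed = cv2.ximgproc.guidedFilter(guide=green, src=green, radius=4, eps=1e-3)

# Maximum matched-filter response over a sweep of orientations.
responses = [cv2.filter2D(smoothed, -1, matched_filter_kernel(angle_deg=a))
             for a in range(0, 180, 15)]
response = np.max(np.stack(responses), axis=0)

# Simple global threshold as a placeholder for the paper's modified thresholding.
vessels = (response > 0.02).astype(np.uint8) * 255
cv2.imwrite("vessels.png", vessels)
```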
7. A Hybrid Method to Enhance Thick and Thin Vessels for Blood Vessel Segmentation. Diagnostics (Basel) 2021; 11:2017. [PMID: 34829365] [PMCID: PMC8621384] [DOI: 10.3390/diagnostics11112017]
Abstract
Retinal blood vessels have been shown to provide evidence of ophthalmic disease through changes in tortuosity, branching angles, or vessel diameter. Although many enhancement filters are extensively utilized, the Jerman filter responds quite effectively at vessels, edges, and bifurcations and improves the visualization of these structures. The curvelet transform, in turn, is specifically designed to associate scale with orientation and can recover structure from noisy data by curvelet shrinkage. This paper describes a method to further improve the performance of the curvelet transform: a distinctive fusion of the curvelet transform and the Jerman filter is presented for retinal blood vessel segmentation, with Mean-C thresholding employed for the segmentation step. The suggested method achieves average accuracies of 0.9600 and 0.9559 on DRIVE and CHASE_DB1, respectively. Simulation results establish better performance and faster implementation of the suggested scheme in comparison with similar approaches in the literature.
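Mean-C thresholding, the segmentation step named above, compares each pixel with the mean of its local neighbourhood minus a constant C; OpenCV exposes this directly as adaptive thresholding. The block size, C, and file paths below are illustrative, not the paper's settings.

```python
# Mean-C thresholding of an enhanced vessel image (illustrative parameters).
import cv2

enhanced = cv2.imread("enhanced_vessels.png", cv2.IMREAD_GRAYSCALE)  # placeholder path

binary = cv2.adaptiveThreshold(
    enhanced,
    maxValue=255,
    adaptiveMethod=cv2.ADAPTIVE_THRESH_MEAN_C,   # local mean of the neighbourhood
    thresholdType=cv2.THRESH_BINARY,
    blockSize=25,   # odd neighbourhood size (assumed)
    C=-5,           # subtracted constant; negative keeps only pixels brighter than the local mean
)
cv2.imwrite("vessel_mask.png", binary)
```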
8. Kou Z, Huang YF, Shen A, Kosari S, Liu XR, Qiang XL. Prediction of pandemic risk for animal-origin coronavirus using a deep learning method. Infect Dis Poverty 2021; 10:128. [PMID: 34689829] [PMCID: PMC8542360] [DOI: 10.1186/s40249-021-00912-6]
Abstract
Background Coronaviruses can be isolated from bats, civets, pangolins, birds, and other wild animals. As animal-origin pathogens, coronaviruses can cross the species barrier and cause pandemics in humans. In this study, a deep learning model for early prediction of pandemic risk was proposed based on viral genome sequences. Methods A total of 3257 genomes were downloaded from the Coronavirus Genome Resource Library. We present a deep learning model of cross-species coronavirus infection that combines a bidirectional gated recurrent unit network with a one-dimensional convolution. The genome sequences of animal-origin coronaviruses were input directly to extract features and predict pandemic risk. The best performance was obtained using pre-trained DNA vectors and an attention mechanism. The area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPR) were used to evaluate the predictive models. Results The six specific models achieved good performance for their corresponding virus groups (1 for AUROC and 1 for AUPR). The general model with the pre-trained vectors and attention mechanism provided excellent predictions for all virus groups (1 for AUROC and 1 for AUPR), whereas versions without the pre-trained vectors or the attention mechanism showed a clear reduction in performance (about 5–25%). Re-training experiments showed that the general model has good transfer-learning capability (average over six groups: 0.968 for AUROC and 0.942 for AUPR) and should give reasonable predictions for the potential pathogen of the next pandemic. Artificial negative data, in which the coding region of the spike protein was replaced, were also predicted correctly (100% accuracy). An easy-to-use tool implementing the predictor was created in Python. Conclusions A robust deep learning model with pre-trained DNA vectors and an attention mechanism captured features from the whole genomes of animal-origin coronaviruses and can predict the risk of cross-species infection, providing early warning of the next pandemic.
Supplementary Information The online version contains supplementary material available at 10.1186/s40249-021-00912-6.
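A hedged sketch of the architecture summarised in this abstract is given below: tokenised genome k-mers are embedded (standing in for the pre-trained DNA vectors), passed through a one-dimensional convolution and a bidirectional GRU, pooled with a simple attention layer, and scored for pandemic risk. All sizes are placeholders, not the published configuration.

```python
# Illustrative BiGRU + 1-D convolution classifier with attention pooling, in the
# spirit of the model described above. Vocabulary, sequence length, and layer sizes
# are assumptions; pre-trained DNA vectors could be loaded into the Embedding layer.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 4 ** 3      # assumed 3-mer vocabulary
MAX_TOKENS = 10000       # assumed tokenised genome length

inp = layers.Input(shape=(MAX_TOKENS,), dtype="int32")
x = layers.Embedding(VOCAB_SIZE, 100)(inp)
x = layers.Conv1D(64, kernel_size=7, activation="relu")(x)
x = layers.MaxPooling1D(pool_size=4)(x)
x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)

# Simple additive attention pooling over time steps.
scores = layers.Dense(1, activation="tanh")(x)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])

out = layers.Dense(1, activation="sigmoid")(context)   # probability of cross-species/pandemic risk
model = models.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(curve="ROC"),
                       tf.keras.metrics.AUC(curve="PR", name="aupr")])
model.summary()
```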
Affiliation(s)
- Zheng Kou
- Institute of Computing Science and Technology, Guangzhou University, Guangzhou, 510006, China.
- Yi-Fan Huang
- Institute of Computing Science and Technology, Guangzhou University, Guangzhou, 510006, China
- Ao Shen
- Institute of Computing Science and Technology, Guangzhou University, Guangzhou, 510006, China
- Saeed Kosari
- Institute of Computing Science and Technology, Guangzhou University, Guangzhou, 510006, China
- Xiang-Rong Liu
- Department of Computer Science, Xiamen University, Xiamen, 361005, China.
- Xiao-Li Qiang
- Institute of Computing Science and Technology, Guangzhou University, Guangzhou, 510006, China
9. Kundu N, Rani G, Dhaka VS, Gupta K, Nayak SC, Verma S, Ijaz MF, Woźniak M. IoT and Interpretable Machine Learning Based Framework for Disease Prediction in Pearl Millet. Sensors (Basel) 2021; 21:5386. [PMID: 34450827] [PMCID: PMC8397940] [DOI: 10.3390/s21165386]
Abstract
Decreases in crop yield and degradation of product quality due to plant diseases such as rust and blast in pearl millet are a cause of concern for farmers and the agriculture industry. Providing expert advice for disease identification is also a challenge for farmers. The traditional techniques adopted for plant disease detection require more human intervention, are inconvenient for farmers, and have a high cost of deployment, operation, and maintenance. Therefore, there is a need to automate plant disease detection and classification. Deep learning and IoT-based solutions have been proposed in the literature for plant disease detection and classification, but there is considerable scope to develop low-cost systems by integrating these techniques for data collection, feature visualization, and disease detection. This research develops the 'Automatic and Intelligent Data Collector and Classifier' framework by integrating IoT and deep learning. The framework automatically collects imagery and parametric data from the pearl millet farmland at ICAR, Mysore, India, and sends the collected data to the cloud server and the Raspberry Pi. The 'Custom-Net' model designed as part of this research is deployed on the cloud server and collaborates with the Raspberry Pi to precisely predict blast and rust diseases in pearl millet. Moreover, Grad-CAM is employed to visualize the features extracted by the 'Custom-Net'. Furthermore, the impact of transfer learning on the 'Custom-Net' and state-of-the-art models, viz. Inception ResNet-V2, Inception-V3, ResNet-50, VGG-16, and VGG-19, is shown in this manuscript. Based on the experimental results and the feature visualization by Grad-CAM, it is observed that the 'Custom-Net' extracts the relevant features and that transfer learning improves the extraction of relevant features. Additionally, the 'Custom-Net' model reports a classification accuracy of 98.78%, equivalent to that of the state-of-the-art models viz. Inception ResNet-V2, Inception-V3, ResNet-50, VGG-16, and VGG-19. Although its classification accuracy is comparable to these models, 'Custom-Net' reduces training time by 86.67%, making it more suitable for automating disease detection. This shows that the proposed model provides a low-cost and handy tool for farmers to improve crop yield and product quality.
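Since 'Custom-Net' itself is not specified in the abstract, the sketch below only illustrates the transfer-learning comparison it describes: a frozen ImageNet-pretrained ResNet-50 backbone with a new softmax head for blast, rust, and healthy classes. The class set, input size, and head are assumptions, not the authors' training setup.

```python
# Minimal transfer-learning sketch of the kind compared in the paper: an
# ImageNet-pretrained ResNet-50 backbone with a new classification head.
# 'Custom-Net' itself is not reproduced here; class names and sizes are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

NUM_CLASSES = 3   # blast, rust, healthy (assumed labels)

base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False   # freeze ImageNet features; fine-tune later if needed

inputs = layers.Input(shape=(224, 224, 3))
x = preprocess_input(inputs)          # ResNet-50's expected input scaling
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```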
Affiliation(s)
- Nidhi Kundu
- Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur 303007, India; (N.K.); (V.S.D.); (K.G.)
- Geeta Rani
- Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur 303007, India; (N.K.); (V.S.D.); (K.G.)
- Vijaypal Singh Dhaka
- Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur 303007, India; (N.K.); (V.S.D.); (K.G.)
- Kalpit Gupta
- Department of Computer and Communication Engineering, Manipal University Jaipur, Jaipur 303007, India; (N.K.); (V.S.D.); (K.G.)
- Sahil Verma
- Department of Computer Science and Engineering, Chandigarh University, Mohali 140413, India;
- Muhammad Fazal Ijaz
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Korea
- Marcin Woźniak
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland;