1
Mohanta A, Sandhya Kiran G, Malhi RKM, Prajapati PC, Oza KK, Rajput S, Shitole S, Srivastava PK. Harnessing Spectral Libraries From AVIRIS-NG Data for Precise PFT Classification: A Deep Learning Approach. Plant Cell Environ 2025. [PMID: 39866067] [DOI: 10.1111/pce.15393]
Abstract
The generation of spectral libraries using hyperspectral data allows for the capture of detailed spectral signatures, uncovering subtle variations in plant physiology, biochemistry, and growth stages, marking a significant advancement over traditional land cover classification methods. These spectral libraries enable improved forest classification accuracy and more precise differentiation of plant species and plant functional types (PFTs), thereby establishing hyperspectral sensing as a critical tool for PFT classification. This study aims to advance the classification and monitoring of PFTs in the Shoolpaneshwar Wildlife Sanctuary, Gujarat, India, using Airborne Visible/Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) data and machine learning techniques. A comprehensive spectral library was developed, encompassing data from 130 plant species, with a focus on their spectral features to support precise PFT classification. The spectral data were collected using AVIRIS-NG hyperspectral imaging and an ASD Handheld Spectroradiometer, capturing a wide range of wavelengths (400-1600 nm) to encompass the key physiological and biochemical traits of the plants. Plant species were grouped into five distinct PFTs using Fuzzy C-means clustering. Key spectral features, including band reflectance, vegetation indices, and derivative/continuum properties, were identified through a combination of ISODATA clustering and Jeffries-Matusita (JM) distance analysis, enabling effective feature selection for classification. To assess the utility of the spectral library, three advanced machine learning classifiers were rigorously evaluated: Parzen Window (PW), Gradient Boosted Machine (GBM), and Stochastic Gradient Descent (SGD). The GBM classifier achieved the highest accuracy, with an overall accuracy (OAA) of 0.94 and a Kappa coefficient of 0.93 across five PFTs.
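The feature-selection step above ranks candidate spectral features by class separability via the Jeffries-Matusita (JM) distance. As a rough illustrative sketch (not the authors' code, and with purely hypothetical toy reflectance data), the JM distance between two classes under a Gaussian assumption can be computed as:

```python
import numpy as np

def jeffries_matusita(mean1, cov1, mean2, cov2):
    """Jeffries-Matusita distance between two Gaussian class distributions.

    JM = 2 * (1 - exp(-B)), where B is the Bhattacharyya distance.
    JM saturates toward 2.0 for perfectly separable classes.
    """
    mean_diff = mean1 - mean2
    cov_avg = (cov1 + cov2) / 2.0
    # Bhattacharyya distance: mean-difference term + covariance term
    term1 = 0.125 * mean_diff @ np.linalg.inv(cov_avg) @ mean_diff
    term2 = 0.5 * np.log(
        np.linalg.det(cov_avg)
        / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))
    )
    b = term1 + term2
    return 2.0 * (1.0 - np.exp(-b))

# Toy 2-band reflectance samples for two plant functional types (hypothetical)
rng = np.random.default_rng(0)
pft_a = rng.normal([0.30, 0.50], 0.02, size=(200, 2))
pft_b = rng.normal([0.45, 0.70], 0.02, size=(200, 2))
jm = jeffries_matusita(pft_a.mean(0), np.cov(pft_a.T),
                       pft_b.mean(0), np.cov(pft_b.T))
print(round(jm, 3))  # close to 2.0: the two toy PFTs are well separated
```

In a band-selection loop, features whose pairwise JM values approach 2.0 for all class pairs would be retained.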
Affiliation(s)
- Agradeep Mohanta, Garge Sandhya Kiran, Ramandeep Kaur M Malhi, Pankajkumar C Prajapati, Kavi K Oza, Shrishti Rajput: Ecophysiology and RS-GIS Laboratory, Department of Botany, Faculty of Science, The Maharaja Sayajirao University of Baroda, Vadodara, India
- Sanjay Shitole: Department of Information Technology, Usha Mittal Institute of Technology, SNDT Women's University, Mumbai, India
- Prashant Kumar Srivastava: Remote Sensing Laboratory, Institute of Environment and Sustainable Development, Banaras Hindu University, Varanasi, India
2
Lyakhova UA, Lyakhov PA. Systematic review of approaches to detection and classification of skin cancer using artificial intelligence: Development and prospects. Comput Biol Med 2024; 178:108742. [PMID: 38875908] [DOI: 10.1016/j.compbiomed.2024.108742]
Abstract
In recent years, there has been a significant improvement in the accuracy of classifying pigmented skin lesions using artificial intelligence algorithms. Intelligent analysis and classification systems are significantly superior to the visual diagnostic methods used by dermatologists and oncologists. However, the application of such systems in clinical practice is severely limited by a lack of generalizability and the risk of misclassification. Successful implementation of artificial-intelligence-based tools into clinicopathological practice requires a comprehensive study of the effectiveness and performance of existing models, as well as of promising areas for future research. The purpose of this systematic review is to investigate and evaluate the accuracy of artificial intelligence technologies for detecting malignant forms of pigmented skin lesions. For the study, 10,589 scientific research and review articles were retrieved from electronic scientific publishers, of which 171 articles were included in the presented systematic review. All selected articles are organized by the type of neural network algorithm, from machine learning to multimodal intelligent architectures, and are described in the corresponding sections of the manuscript. The review covers automated skin cancer recognition systems, from simple machine learning algorithms to multimodal ensemble systems based on advanced encoder-decoder models, vision transformers (ViT), and generative and spiking neural networks. Finally, future directions, prospects, and the potential for further development of automated neural network systems for classifying pigmented skin lesions are discussed.
Affiliation(s)
- U A Lyakhova: Department of Mathematical Modeling, North-Caucasus Federal University, 355017, Stavropol, Russia
- P A Lyakhov: Department of Mathematical Modeling and North-Caucasus Center for Mathematical Research, North-Caucasus Federal University, 355017, Stavropol, Russia
3
Li C, Zhang F, Du Y, Li H. Classification of brain tumor types through MRIs using parallel CNNs and firefly optimization. Sci Rep 2024; 14:15057. [PMID: 38956224] [PMCID: PMC11219740] [DOI: 10.1038/s41598-024-65714-w]
Abstract
Image segmentation is a critical and challenging task in medicine. Magnetic resonance imaging (MRI) is a helpful method for locating abnormal brain tissue, but diagnosing and classifying a tumor from many images is a difficult undertaking for radiologists. This work develops an intelligent method for accurately identifying brain tumors, investigating the identification of brain tumor types from MRI data using convolutional neural networks and optimization strategies. Two novel approaches are presented: the first is a segmentation technique based on firefly optimization (FFO) that assesses segmentation quality against several criteria, and the second is a combination of two types of convolutional neural networks to categorize tumor traits and identify the kind of tumor. These upgrades are intended to raise the overall efficacy of the MRI workflow and increase identification accuracy. Testing on MRI scans from BraTS 2018 shows that the suggested approach achieves improved performance, with an average accuracy of 98.6%.
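The firefly optimization underlying the segmentation step can be illustrated with a minimal, textbook-style sketch. This is not the paper's segmentation objective; it minimizes a toy sphere function, and every parameter value here is an assumption:

```python
import numpy as np

def firefly_minimize(f, dim, n_fireflies=15, n_iter=60,
                     beta0=1.0, gamma=0.01, alpha=0.2, seed=0):
    """Minimal firefly algorithm: dimmer (higher-cost) fireflies move toward
    brighter ones, with attractiveness decaying as exp(-gamma * r^2)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_fireflies, dim))
    for t in range(n_iter):
        fitness = np.array([f(xi) for xi in x])
        step = alpha * (1 - t / n_iter)          # shrink the random walk over time
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if fitness[j] < fitness[i]:      # j is "brighter" (lower cost)
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + step * rng.uniform(-0.5, 0.5, dim)
                    fitness[i] = f(x[i])
    fitness = np.array([f(xi) for xi in x])
    return x[np.argmin(fitness)]

best = firefly_minimize(lambda v: np.sum(v ** 2), dim=2)
print(best)  # best candidate found; for the sphere function, near the origin
```

In the paper's setting, `f` would instead score a candidate segmentation against the quality criteria the authors define.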
Affiliation(s)
- Chen Li, Faxue Zhang, Huachao Li: Department of Neurosurgery, Shandong Provincial Third Hospital, Shandong University, No. 12 Wuyingshan Middle Road, Jinan 250031, Shandong, China
- Yongjian Du: Department of Neurosurgery, The Fifth People's Hospital of Jinan, No. 24297 Jingshi Road, Jinan 250022, Shandong, China
4
Abid MH, Ashraf R, Mahmood T, Faisal CMN. Multi-modal medical image classification using deep residual network and genetic algorithm. PLoS One 2023; 18:e0287786. [PMID: 37384779] [PMCID: PMC10309999] [DOI: 10.1371/journal.pone.0287786]
Abstract
Artificial intelligence (AI) development across the health sector has recently become crucial. Early medical information, identification, diagnosis, classification, and analysis, along with viable remedies, are always beneficial developments. Precise and consistent image classification is critical for diagnosis and tactical decisions in healthcare, and the core issue in image classification is the semantic gap. Conventional machine learning algorithms for classification rely mainly on low-level rather than high-level characteristics and employ handmade features to close the gap, which demands intensive feature extraction and classification effort. Deep learning is a powerful tool that has made considerable advances in recent years, with deep convolutional neural networks (CNNs) succeeding at image classification. The main goal of this work is to bridge the semantic gap and enhance the classification performance of multi-modal medical images using a deep learning model based on ResNet50. The dataset included 28,378 multi-modal medical images used to train and validate the model. Overall accuracy, precision, recall, and F1-score evaluation parameters were calculated. The proposed model classifies medical images more accurately than other state-of-the-art methods, attaining an accuracy of 98.61%. The suggested study directly benefits the health service.
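ResNet50's defining ingredient is the residual (skip) connection. A minimal, framework-free sketch of that idea, using a toy fully connected block rather than the paper's convolutional model:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Simplified (fully connected) residual block: the identity shortcut adds
    the input back onto the transformed features, so the block only has to
    learn a residual F(x) rather than the full mapping -- the core idea
    behind ResNet50's stacked bottleneck blocks."""
    out = relu(x @ w1)
    out = out @ w2
    return relu(out + x)   # identity shortcut

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 8))                  # batch of 4 toy feature vectors
w1 = rng.normal(scale=0.1, size=(8, 16))
w2 = rng.normal(scale=0.1, size=(16, 8))
y = residual_block(x, w1, w2)
print(y.shape)  # (4, 8): shape is preserved, so blocks can be stacked deeply
```

With the weights at zero the block reduces to `relu(x)`, which is why very deep residual stacks remain trainable: each block can start close to the identity.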
Affiliation(s)
- Muhammad Haris Abid, Rehan Ashraf, Toqeer Mahmood, C. M. Nadeem Faisal: Department of Computer Science, National Textile University, Faisalabad, Pakistan
5
Shaheed K, Szczuko P, Abbas Q, Hussain A, Albathan M. Computer-Aided Diagnosis of COVID-19 from Chest X-ray Images Using Hybrid-Features and Random Forest Classifier. Healthcare (Basel) 2023; 11:837. [PMID: 36981494] [PMCID: PMC10047954] [DOI: 10.3390/healthcare11060837]
Abstract
In recent years, a lot of attention has been paid to using radiology imaging to find COVID-19 automatically. (1) Background: A number of computer-aided diagnostic schemes now help radiologists and doctors perform diagnostic COVID-19 tests quickly, accurately, and consistently. (2) Methods: Using chest X-ray images, this study proposed a cutting-edge scheme for the automatic recognition of COVID-19 and pneumonia. First, a pre-processing method based on a Gaussian filter and a logarithmic operator is applied to input chest X-ray (CXR) images to improve poor-quality images by enhancing contrast, reducing noise, and smoothing the image. Second, robust features are extracted from each enhanced image using a Convolutional Neural Network (CNN) and an optimal collection of grey-level co-occurrence matrix (GLCM) features such as contrast, correlation, entropy, and energy. Finally, based on the extracted features, a random forest machine learning classifier assigns images to one of three classes: COVID-19, pneumonia, or normal. The predicted output from the model is combined with Gradient-weighted Class Activation Mapping (Grad-CAM) visualisation for diagnosis. (3) Results: The work is evaluated on public datasets with three different train-test splits (70-30%, 80-20%, and 90-10%), achieving an average accuracy, F1-score, recall, and precision of 97%, 96%, 96%, and 96%, respectively. (4) Conclusions: A comparative study shows that the proposed method outperforms existing and similar work, and the approach can be utilised to screen COVID-19-infected patients effectively.
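The GLCM features named in the abstract (contrast, correlation, entropy, energy) all derive from a normalised co-occurrence table. A small illustrative implementation, assuming a single horizontal pixel offset and a toy 4-level image (offset choice and image are illustrative, not the paper's settings):

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel offset (dx, dy),
    normalised into a joint probability table."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Contrast, correlation, entropy and energy of a normalised GLCM --
    the feature set the abstract pairs with CNN features."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * p))
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * p))
    correlation = np.sum((i - mu_i) * (j - mu_j) * p) / (sd_i * sd_j)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    energy = np.sum(p ** 2)
    return contrast, correlation, entropy, energy

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
feats = glcm_features(glcm(img, levels=4))
print([round(f, 3) for f in feats])
```

A real pipeline would average features over several offsets and orientations before feeding them to the classifier.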
Affiliation(s)
- Kashif Shaheed, Piotr Szczuko: Department of Multimedia Systems, Faculty of Electronics, Telecommunication and Informatics, Gdansk University of Technology, 80-233 Gdansk, Poland
- Qaisar Abbas, Mubarak Albathan: College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Ayyaz Hussain: Department of Computer Science, Quaid-i-Azam University, Islamabad 44000, Pakistan
- Correspondence: Tel.: +966-503451575
6
Color Image Retrieval Method Using Low Dimensional Salient Visual Feature Descriptors for IoT Applications. Comput Intell Neurosci 2023; 2023:6257573. [PMID: 36873380] [PMCID: PMC9981286] [DOI: 10.1155/2023/6257573]
Abstract
Digital data are growing fast as Internet technology advances, arriving from many sources such as smartphones, social networking sites, IoT devices, and other communication channels. Successfully storing, searching, and retrieving desired images from such large-scale databases is therefore critical, and low-dimensional feature descriptors play an essential role in speeding up retrieval over large datasets. The proposed system constructs a low-dimensional feature descriptor through a feature extraction approach that integrates color and texture content: color content is quantified from a preprocessed, quantized HSV color image, while texture content is retrieved from a Sobel edge-detection-based preprocessed V plane of the HSV image using block-level DCT (discrete cosine transformation) and a gray-level co-occurrence matrix. The suggested image retrieval scheme is validated on a benchmark image dataset, and the experimental outcomes, compared against ten cutting-edge image retrieval algorithms, show that it outperforms them in the vast majority of cases.
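The block-level DCT texture step can be sketched generically: build an orthonormal DCT-II basis and keep a few low-frequency coefficients per block as a compact descriptor. The block size and number of retained coefficients below are assumptions, not values taken from the paper:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, applied block-wise to a grey/V plane."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = np.sqrt(2.0 / n) * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)   # DC row gets the 1/sqrt(n) normalisation
    return c

def block_dct_descriptor(v_plane, block=8):
    """Split a plane into blocks, apply a 2-D DCT, and keep only the 2x2
    low-frequency corner of each block as a compact texture descriptor."""
    c = dct_matrix(block)
    h, w = v_plane.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeff = c @ v_plane[y:y + block, x:x + block] @ c.T
            feats.append(coeff[:2, :2].ravel())  # 4 low-frequency terms per block
    return np.concatenate(feats)

plane = np.tile(np.arange(16, dtype=float), (16, 1))  # simple gradient "V plane"
desc = block_dct_descriptor(plane)
print(desc.shape)  # (16,): 4 blocks x 4 retained coefficients
```

Keeping only the low-frequency corner is what makes the final descriptor low-dimensional while still summarising each block's dominant structure.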
7
Babu EK, Mistry K, Anwar MN, Zhang L. Facial Feature Extraction Using a Symmetric Inline Matrix-LBP Variant for Emotion Recognition. Sensors (Basel) 2022; 22:8635. [PMID: 36433232] [PMCID: PMC9696972] [DOI: 10.3390/s22228635]
Abstract
With the large number of Local Binary Pattern (LBP) variants currently in use, the significance of visual descriptors in computer vision applications is evident. This paper presents a novel visual descriptor, SIM-LBP, which employs a new matrix technique called the Symmetric Inline Matrix generator method and acts as a new variant of LBP. The key feature separating this variant from existing counterparts is its efficiency in extracting facial expression features such as the eyes, eyebrows, nose, and mouth under a wide range of lighting conditions. To test the model, SIM-LBP was applied to the JAFFE dataset to convert all images to their corresponding SIM-LBP-transformed versions. These transformed images were then used to train a Convolutional Neural Network (CNN)-based deep learning model for facial expression recognition (FER). Several performance evaluation metrics, i.e., recognition accuracy rate, precision, recall, and F1-score, were used to test model efficiency in comparison with the traditional LBP descriptor and other LBP variants. The model outperformed the baseline methods on all four metrics when the proposed SIM-LBP transformation was applied to the input images, and comparative analysis with other state-of-the-art methods shows the usefulness of the proposed SIM-LBP model. The proposed SIM-LBP transformation can also be applied to facial images to identify a person's mental state and predict mood variations.
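For context, the classic 8-neighbour LBP that SIM-LBP varies can be sketched as follows; the symmetric inline-matrix ordering itself is specific to the paper and is not reproduced here:

```python
import numpy as np

def lbp_image(img):
    """Classic 8-neighbour LBP: threshold each neighbour against the centre
    pixel and pack the eight comparison bits into a code in [0, 255].
    (SIM-LBP derives its codes from a different, symmetric inline-matrix
    ordering of the neighbourhood.)"""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    h, w = img.shape
    centre = img[1:h - 1, 1:w - 1]
    out = np.zeros_like(centre, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbour >= centre).astype(int) << bit
    return out

img = np.array([[10, 10, 10],
                [10, 20, 10],
                [10, 10, 10]])
print(lbp_image(img))  # [[0]]: every neighbour is darker than the centre
```

Histograms of these per-pixel codes (here, of SIM-LBP codes) form the texture features fed to the downstream classifier.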
Affiliation(s)
- Eaby Kollonoor Babu, Kamlesh Mistry, Muhammad Naveed Anwar: Faculty of Engineering and Environment, Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne NE1 8ST, UK
- Li Zhang: Department of Computer Science, Royal Holloway, University of London, Surrey TW20 0EX, UK
8
Kim YJ. Machine Learning Model Based on Radiomic Features for Differentiation between COVID-19 and Pneumonia on Chest X-ray. Sensors (Basel) 2022; 22:6709. [PMID: 36081170] [PMCID: PMC9460643] [DOI: 10.3390/s22176709]
Abstract
Machine learning approaches are employed to analyze differences in real-time reverse transcription polymerase chain reaction scans to differentiate between COVID-19 and pneumonia. However, these methods suffer from large training data requirements, unreliable images, and uncertain clinical diagnosis. Thus, in this paper, we used a machine learning model to differentiate between COVID-19 and pneumonia via radiomic features using a bias-minimized dataset of chest X-ray scans. We used logistic regression (LR), naive Bayes (NB), support vector machine (SVM), k-nearest neighbor (KNN), bagging, random forest (RF), extreme gradient boosting (XGB), and light gradient boosting machine (LGBM) to differentiate between COVID-19 and pneumonia based on training data. Further, we used a grid search to determine optimal hyperparameters for each machine learning model and 5-fold cross-validation to prevent overfitting. The identification performances for COVID-19 and pneumonia were compared on separately constructed test data for models trained using the maximum probability, contrast, and difference variance of the gray level co-occurrence matrix (GLCM), and the skewness, as input variables. The LGBM and bagging models showed the highest and lowest performances, respectively; the GLCM difference variance showed a high overall effect in all models. Thus, we confirmed that the radiomic features in chest X-rays can be used as indicators to differentiate between COVID-19 and pneumonia using machine learning.
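The grid-search-with-5-fold-cross-validation pattern described above can be illustrated generically. This toy sketch tunes k for a plain k-nearest-neighbour classifier on synthetic stand-in "radiomic" features; it is not the paper's pipeline, and the data and grid values are hypothetical:

```python
import numpy as np

def knn_predict(train_x, train_y, test_x, k):
    """Plain k-nearest-neighbour majority vote (Euclidean distance)."""
    d = np.linalg.norm(test_x[:, None, :] - train_x[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (train_y[nearest].mean(axis=1) >= 0.5).astype(int)

def grid_search_cv(x, y, k_grid, folds=5, seed=0):
    """Pick k by mean accuracy over `folds` cross-validation splits --
    the same model-selection pattern the paper applies per classifier."""
    rng = np.random.default_rng(seed)
    chunks = np.array_split(rng.permutation(len(x)), folds)
    scores = {}
    for k in k_grid:
        accs = []
        for f in range(folds):
            test_i = chunks[f]
            train_i = np.concatenate([chunks[g] for g in range(folds) if g != f])
            pred = knn_predict(x[train_i], y[train_i], x[test_i], k)
            accs.append(np.mean(pred == y[test_i]))
        scores[k] = float(np.mean(accs))
    best_k = max(scores, key=scores.get)
    return best_k, scores

# Two well-separated synthetic feature clusters (hypothetical stand-in data)
rng = np.random.default_rng(42)
x = np.vstack([rng.normal(0, 1, (60, 4)), rng.normal(3, 1, (60, 4))])
y = np.array([0] * 60 + [1] * 60)
best_k, scores = grid_search_cv(x, y, k_grid=[1, 3, 5, 7])
print(best_k, scores[best_k])
```

The same loop structure works for any hyperparameter grid: each candidate is scored only on held-out folds, which is what guards against overfitting the hyperparameters.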
Affiliation(s)
- Young Jae Kim: Department of Biomedical Engineering, Gachon University, 21 Namdong-daero 774 beon-gil, Namdong-gu, Inchon 21936, Korea
9
Zhang H, Liang W, Li C, Xiong Q, Shi H, Hu L, Li G. DCML: Deep contrastive mutual learning for COVID-19 recognition. Biomed Signal Process Control 2022; 77:103770. [PMID: 35530170] [PMCID: PMC9058053] [DOI: 10.1016/j.bspc.2022.103770]
Abstract
COVID-19 is a disease triggered by a new strain of coronavirus. Automatic COVID-19 recognition using computer-aided methods is beneficial for speeding up diagnosis. Current research usually focuses on deeper or wider neural networks for COVID-19 recognition, and the implicit contrastive relationship between different samples has not been fully explored. To address these problems, we propose a novel model, called deep contrastive mutual learning (DCML), to diagnose COVID-19 more effectively. A multi-way data augmentation strategy based on Fast AutoAugment (FAA) was employed to enrich the original training dataset, which helps reduce the risk of overfitting. Then, we incorporated the popular contrastive learning idea into the conventional deep mutual learning (DML) framework to mine the relationship between diverse samples and created more discriminative image features through a new adaptive model fusion method. Experimental results on three public datasets demonstrate that the DCML model outperforms other state-of-the-art baselines. More importantly, DCML is easier to reproduce and relatively efficient, strengthening its practicality.
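The contrastive ingredient DCML borrows can be illustrated with a generic normalized temperature-scaled cross-entropy (NT-Xent-style) loss over a batch of paired views. DCML's actual objective combines this idea with deep mutual learning and adaptive model fusion, which are not reproduced here:

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """Contrastive loss over cosine similarities: each embedding's paired
    view is its positive, and every other sample in the batch is a negative."""
    z = np.vstack([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalise
    sim = z @ z.T / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # never its own negative
    # positives sit n rows apart: view 1 of item i <-> view 2 of item i
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 16))
aligned = anchor + 0.01 * rng.normal(size=(8, 16))    # views nearly identical
shuffled = rng.normal(size=(8, 16))                   # unrelated "views"
print(nt_xent_loss(anchor, aligned) < nt_xent_loss(anchor, shuffled))  # True
```

Minimising this loss pulls paired views together and pushes unrelated samples apart, which is the sample-relationship signal the abstract says prior work left unexplored.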
Affiliation(s)
- Hongbin Zhang, Weinan Liang, Qipeng Xiong, Haowei Shi, Lang Hu: School of Software, East China Jiaotong University, China
- Chuanxiu Li, Guangli Li: School of Information Engineering, East China Jiaotong University, China
10
Soni M, Gomathi S, Kumar P, Churi PP, Mohammed MA, Salman AO. Hybridizing Convolutional Neural Network for Classification of Lung Diseases. Int J Swarm Intell Res 2022. [DOI: 10.4018/ijsir.287544]
Abstract
Pulmonary disease is widespread worldwide, including persistent obstruction of the lungs, pneumonia, asthma, tuberculosis, and others, so it is essential to diagnose lung disease promptly; machine learning models have been developed for this reason. For lung disease prediction, many deep learning technologies, including the CNN and the capsule network, are used. The basic CNN handles rotated, tilted, or otherwise irregularly oriented images poorly. Therefore, by integrating the spatial transformer network (STN) with a CNN, we propose a new hybrid deep learning architecture named STNCNN, implemented on the NIH chest X-ray image dataset from the Kaggle repository. STNCNN has an accuracy of 69% on the entire dataset, while the accuracies of vanilla grey, vanilla RGB, and hybrid CNN are 67.8%, 69.5%, and 63.8%, respectively. When the sample dataset is applied, STNCNN takes much less time to train at the cost of slightly lower validation reliability. The proposed STNCNN system thus simplifies the diagnosis of lung disease for both specialists and physicians.
Affiliation(s)
- S. Gomathi: UK International Qualifications, Ltd., India
- Pankaj Kumar: Noida Institute of Engineering and Technology, Greater Noida, India
11
Renal Cancer Detection: Fusing Deep and Texture Features from Histopathology Images. Biomed Res Int 2022; 2022:9821773. [PMID: 35386304] [PMCID: PMC8979690] [DOI: 10.1155/2022/9821773]
Abstract
Histopathological images contain morphological markers of disease progression that have diagnostic and predictive value, and many computer-aided diagnosis systems based on common deep learning methods have been proposed to save time and labour. Although deep learning methods are end-to-end, they perform exceptionally well only given a large dataset and often show relatively inferior results on small datasets; in contrast, traditional feature extraction methods are more robust and perform well with small or medium datasets. Moreover, a global, texture-representation-based approach is commonly used to classify histological tissue images without requiring explicit segmentation to extract structural properties. Considering the scarcity of medical datasets and the usefulness of texture representation, this work integrates the advantages of deep learning and traditional machine learning, i.e., texture representation. To accomplish this, a classification model is proposed to detect renal cancer in a histopathology dataset by fusing features from a deep learning model with extracted texture feature descriptors. Five texture feature descriptors from three texture feature families were applied to complement AlexNet for extensive validation of the fusion between deep and texture features: (1) the statistical family: histogram of gradients, gray-level co-occurrence matrix, and local binary pattern; (2) the transform-based family: Gabor filters; and (3) the model-based family: Markov random field. The final classification results outperformed both AlexNet and any single texture descriptor, showing the effectiveness of combining deep and texture features in renal cancer detection.
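Of the texture families listed, the transform-based one is easy to sketch: a real Gabor kernel plus a tiny orientation-response feature vector. The kernel size and parameters below are illustrative assumptions, and the deep-feature fusion step is omitted:

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma_ratio=0.5):
    """Real Gabor filter: a sinusoid at orientation `theta` with wavelength
    `lam`, under an elliptical Gaussian envelope."""
    half = size // 2
    grid = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    y, x = grid
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma_ratio * y_t) ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * x_t / lam + psi)

def gabor_bank_features(img, n_orient=4):
    """Mean absolute filter response per orientation: a tiny texture vector."""
    feats = []
    h, w = img.shape
    for k in range(n_orient):
        kern = gabor_kernel(size=9, sigma=2.0, theta=k * np.pi / n_orient, lam=4.0)
        s = kern.shape[0]
        # valid-mode correlation via an explicit sliding window
        resp = np.array([[np.sum(img[i:i + s, j:j + s] * kern)
                          for j in range(w - s + 1)]
                         for i in range(h - s + 1)])
        feats.append(np.abs(resp).mean())
    return np.array(feats)

# Vertical stripes of period 4 respond most to the 0-radian, wavelength-4 filter
img = np.tile((np.arange(16) % 4 < 2).astype(float), (16, 1))
f = gabor_bank_features(img)
print(f.argmax())
```

In the fusion scheme described, such per-orientation responses would be concatenated with the deep features before classification.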
12
Praveena HD, Guptha NS, Kazemzadeh A, Parameshachari BD, Hemalatha KL. Effective CBMIR System Using Hybrid Features-Based Independent Condensed Nearest Neighbor Model. J Healthc Eng 2022; 2022:3297316. [PMID: 35378946] [PMCID: PMC8976656] [DOI: 10.1155/2022/3297316]
Abstract
In recent times, large numbers of medical images have been generated due to the evolution of digital imaging modalities and computer vision applications. Because of variation in the shape and size of the images, the retrieval task becomes tedious in large medical databases, so it is essential to design an effective automated system for medical image retrieval. In this research study, the input medical images are acquired from the new Pap smear dataset, and the visible quality of the acquired images is improved by applying an image normalization technique. Furthermore, hybrid feature extraction is accomplished using the histogram of oriented gradients and a modified local binary pattern, extracting color and texture feature vectors that significantly reduce the semantic gap. The obtained feature vectors are fed to an independent condensed nearest neighbor classifier to classify the seven classes of cell images, and relevant medical images are finally retrieved using the chi-square distance measure. Simulation results confirmed that the proposed model performs effectively in image retrieval in terms of specificity, recall, precision, accuracy, and F-score, achieving almost 98.88% retrieval accuracy, which is better than deep learning models such as the long short-term memory network, deep neural network, and convolutional neural network.
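The final retrieval step, ranking database images by chi-square distance between feature histograms, can be sketched as follows (the histograms here are random stand-ins, not actual HOG/modified-LBP features):

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two L1-normalised feature histograms --
    the similarity measure used for the final retrieval step."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def retrieve(query, database, top_k=3):
    """Return indices of the top_k most similar feature vectors."""
    dists = np.array([chi_square_distance(query, d) for d in database])
    return np.argsort(dists)[:top_k]

# Hypothetical stand-in feature histograms, L1-normalised
rng = np.random.default_rng(7)
db = rng.random((10, 32))
db /= db.sum(axis=1, keepdims=True)
query = db[4] + 0.01 * rng.random(32)   # a slightly perturbed copy of item 4
query /= query.sum()
print(retrieve(query, db))  # item 4 ranks first
```

Because chi-square weights each bin's squared difference by the bin mass, it penalises mismatches in sparse bins more strongly than plain Euclidean distance, which is why it is popular for histogram descriptors.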
Affiliation(s)
- Hirald Dwaraka Praveena: Department of Electronics and Communication Engineering, Sree Vidyanikethan Engineering College, Tirupati 517102, Andhra Pradesh, India
- Nirmala S. Guptha: Department of CSE-Artificial Intelligence, Sri Venkateshwara College of Engineering, Bengaluru 562157, India
- B. D. Parameshachari: Department of Telecommunication Engineering, GSSS Institute of Engineering and Technology for Women, Mysuru 570016, India
- K. L. Hemalatha: Department of ISE, Sri Krishna Institute of Technology, Bengaluru 560090, India
13
V SS, R RK. Iris Recognition using Multi Objective Artificial Bee Colony Optimization Algorithm with Autoencoder Classifier. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-10775-z]
14
De Jesus KLM, Senoro DB, Dela Cruz JC, Chan EB. Neuro-Particle Swarm Optimization Based In-Situ Prediction Model for Heavy Metals Concentration in Groundwater and Surface Water. Toxics 2022; 10:95. [PMID: 35202281] [PMCID: PMC8879014] [DOI: 10.3390/toxics10020095]
Abstract
Limited monitoring activities to assess heavy metal (HM) concentrations contribute to worldwide concern about environmental quality and the degree of toxicants in areas with elevated metal concentrations. Hence, this study used in-situ physicochemical parameters to estimate the limited HM concentration data in surface water (SW) and groundwater (GW). The study site was Marinduque Island Province in the Philippines, which has experienced two mining disasters. Prediction results showed that the SW models for the dry and wet seasons recorded mean squared errors (MSE) ranging from 6 × 10⁻⁷ to 0.070276, and the GW models from 5 × 10⁻⁸ to 0.045373, all approaching the ideal MSE value of 0; Kling-Gupta efficiency values of the developed models were all greater than 0.95. The developed neural network-particle swarm optimization (NN-PSO) models for SW and GW were compared with linear and support vector machine (SVM) models and with previously published deterministic and artificial intelligence (AI) models. The findings indicated that the NN-PSO models are superior, up to 1.60 and 1.40 times better than the best linear and SVM models for SW and GW, respectively, and on par with previously published deterministic and AI-based models in prediction capability. Sensitivity analysis using Olden's connection weights approach showed that pH significantly influenced HM concentration. Based on these findings, NN-PSO is an effective and practical approach for predicting HM concentration in water resources, contributing a solution to the problem of limited monitored HM data.
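The particle swarm component of NN-PSO can be illustrated with a minimal, generic PSO. In the paper it tunes neural-network weights; this sketch simply minimises a sphere test function, and all hyperparameters are assumed:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, n_iter=80,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser: each velocity blends inertia, a pull
    toward the particle's personal best, and a pull toward the global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(f(gbest))

best, val = pso_minimize(lambda p: np.sum(p ** 2), dim=3)
print(val)  # very close to 0, the sphere-function minimum
```

In an NN-PSO setting, `f` would instead evaluate a network's training error with the particle's position used as the flattened weight vector.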
Affiliation(s)
- Kevin Lawrence M. De Jesus
- School of Graduate Studies, Mapua University, Manila 1002, Philippines; (K.L.M.D.J.); (J.C.D.C.)
- School of Chemical, Biological, Materials Engineering and Sciences, Mapua University, Manila 1002, Philippines
- Resiliency and Sustainable Development Center, Yuchengco Innovation Center, Mapua University, Manila 1002, Philippines
- Delia B. Senoro
- School of Graduate Studies, Mapua University, Manila 1002, Philippines; (K.L.M.D.J.); (J.C.D.C.)
- School of Chemical, Biological, Materials Engineering and Sciences, Mapua University, Manila 1002, Philippines
- Resiliency and Sustainable Development Center, Yuchengco Innovation Center, Mapua University, Manila 1002, Philippines
- School of Civil, Environmental and Geological Engineering, Mapua University, Manila 1002, Philippines
- Jennifer C. Dela Cruz
- School of Graduate Studies, Mapua University, Manila 1002, Philippines; (K.L.M.D.J.); (J.C.D.C.)
- School of Electrical, Electronics and Computer Engineering, Mapua University, Manila 1002, Philippines
- Eduardo B. Chan
- Dyson College of Arts and Science, Pace University, New York, NY 10038, USA
15
PCA-Based Advanced Local Octa-Directional Pattern (ALODP-PCA): A Texture Feature Descriptor for Image Retrieval. ELECTRONICS 2022. [DOI: 10.3390/electronics11020202] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
This paper presents a novel feature descriptor termed principal component analysis (PCA)-based Advanced Local Octa-Directional Pattern (ALODP-PCA) for content-based image retrieval. Conventional approaches compare each pixel of an image with certain neighboring pixels, providing discrete image information. The proposed descriptor utilizes the local intensity of pixels in all eight directions of the neighborhood. The local octa-directional pattern yields two patterns, magnitude and directional, each quantized into a 40-bin histogram; a joint histogram is created by concatenating the two. The Manhattan distance is used to measure similarity between images. Moreover, to contain the computational cost, PCA is applied to reduce dimensionality. The proposed methodology is tested on a subset of the Multi-PIE face dataset, which contains almost 800,000 images of over 300 people with different poses and a wide range of facial expressions. Results were compared with state-of-the-art local patterns, namely the local tri-directional pattern (LTriDP), local tetra-directional pattern (LTetDP), and local ternary pattern (LTP). The proposed model surpasses previous work in terms of precision, accuracy, and recall.
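The final stages of the descriptor described above (joint 40-bin magnitude/directional histograms compared by Manhattan distance) can be sketched as follows; the magnitude values in [0, 1] and direction codes 0-7 are hypothetical stand-ins, and the actual ALODP pattern extraction is more involved:

```python
import numpy as np

def joint_histogram(magnitude, direction, bins=40):
    """Concatenate a 40-bin magnitude histogram and a 40-bin directional
    histogram into one joint feature vector, as in ALODP."""
    h_mag, _ = np.histogram(magnitude, bins=bins, range=(0, 1))
    h_dir, _ = np.histogram(direction, bins=bins, range=(0, 8))
    return np.concatenate([h_mag, h_dir]).astype(float)

def manhattan(a, b):
    """L1 (Manhattan) distance used to rank dataset images against a query."""
    return float(np.abs(a - b).sum())

rng = np.random.default_rng(0)
q = joint_histogram(rng.random(100), rng.integers(0, 8, 100))  # "query" image
d = joint_histogram(rng.random(100), rng.integers(0, 8, 100))  # "dataset" image
assert q.shape == (80,)          # 40 magnitude + 40 directional bins
assert manhattan(q, q) == 0.0    # identical images have zero distance
```

In the paper, PCA is then applied to such joint histograms to reduce their dimensionality before retrieval.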
16
Yi J, Lei X, Zhang L, Zheng Q, Jin J, Xie C, Jin X, Ai Y. The Influence of Different Ultrasonic Machines on Radiomics Models in Prediction Lymph Node Metastasis for Patients with Cervical Cancer. Technol Cancer Res Treat 2022; 21:15330338221118412. [PMID: 35971568 PMCID: PMC9386859 DOI: 10.1177/15330338221118412] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022] Open
Abstract
Objective To investigate the effects of different ultrasonic machines on the performance of radiomics models built from ultrasound (US) images for the preoperative prediction of lymph node metastasis (LNM) in patients with cervical cancer (CC). Methods A total of 536 CC patients with confirmed histological characteristics and lymph node status after radical hysterectomy and pelvic lymphadenectomy were enrolled. Radiomics features were extracted and selected from US images acquired with ATL HDI5000, Voluson E8, MyLab classC, ACUSON S2000, and HI VISION Preirus machines to build radiomics models for LNM prediction using support vector machine (SVM) and logistic regression, respectively. Results There were 148 patients (training vs validation: 102:46) scanned on the HDI5000, 75 (53:22) on the Voluson E8, 100 (69:31) on the MyLab classC, 110 (76:34) on the ACUSON S2000, and 103 (73:30) on the HI VISION Preirus. Few radiomics features were reproducible across machines. The areas under the curve (AUCs) ranged from 0.75 to 0.86 and 0.73 to 0.86 in the training cohorts, and from 0.71 to 0.82 and 0.70 to 0.80 in the validation cohorts, for the SVM and logistic regression models, respectively. The largest difference in AUCs between machines reached 17.8% and 15.5% in the training and validation cohorts, respectively. Conclusions The performance of a radiomics model depends on the type of scanner. Scanner dependency of radiomics features should be considered, and its effects minimized, in future studies of US images.
Affiliation(s)
- Jinling Yi
- Radiotherapy Center, The 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Xiyao Lei
- Radiotherapy Center, The 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Lei Zhang
- Radiotherapy Center, The 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Qiao Zheng
- Radiotherapy Center, The 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Juebin Jin
- Department of Medical Engineering, The 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Congying Xie
- Radiotherapy Center, The 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China; Department of Radiation and Medical Oncology, The 2nd Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Xiance Jin
- Radiotherapy Center, The 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China; School of Basic Medical Science, Wenzhou Medical University, Wenzhou, China
- Yao Ai
- Radiotherapy Center, The 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
17
Deep Learning Framework to Detect Ischemic Stroke Lesion in Brain MRI Slices of Flair/DW/T1 Modalities. Symmetry (Basel) 2021. [DOI: 10.3390/sym13112080] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Ischemic stroke lesion (ISL) is a brain abnormality, and studies have shown that early detection and treatment can reduce the disease's impact. This research aimed to develop a deep learning (DL) framework to detect ISLs in multi-modality magnetic resonance imaging (MRI) slices. It proposes convolutional neural network (CNN)-supported segmentation and classification to execute a consistent disease detection framework. The developed framework consists of the following phases: (i) VGG16-supported SegNet (VGG-SegNet)-based ISL mining, (ii) handcrafted feature extraction, (iii) deep feature extraction using the chosen DL scheme, (iv) feature ranking and serial feature concatenation, and (v) classification using binary classifiers. Fivefold cross-validation was employed, and the best feature set was selected as the final result. The attained results were separately examined for (i) segmentation, (ii) deep-feature-based classification, and (iii) concatenated-feature-based classification. The experimental investigation uses the Ischemic Stroke Lesion Segmentation (ISLES2015) database. The results confirm that the proposed ISL detection framework gives better segmentation and classification results, with the VGG16 scheme achieving accuracy above 97% with deep features and above 98% with concatenated features.
18
19
Modeling of texture quantification and image classification for change prediction due to COVID lockdown using Skysat and Planetscope imagery. ACTA ACUST UNITED AC 2021; 8:2767-2792. [PMID: 34458559 PMCID: PMC8384559 DOI: 10.1007/s40808-021-01258-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2021] [Accepted: 08/09/2021] [Indexed: 12/26/2022]
Abstract
This research work models two methods together to provide maximum information about a study area. The quantification of image texture is performed using the grey level co-occurrence matrix (GLCM) technique, and image-classification-based object-based change detection (OBCD) methods are used to visually represent the transformation of the study area. Pre-COVID and post-COVID (during lockdown) panchromatic images of Connaught Place, New Delhi, are investigated to develop a model for the study area. Texture classification is performed on visual texture features for eight distances and four orientations. Six image classification methodologies are used for mapping the study area: parallelepiped classification (PC), minimum distance classification (MDC), maximum likelihood classification (MLC), spectral angle mapper (SAM), spectral information divergence (SID), and support vector machine (SVM). GLCM calculations reveal a pattern in the texture features contrast, correlation, angular second moment (ASM), and inverse difference moment (IDM). Maximum classification accuracies of 83.68% and 73.65% are obtained for the pre-COVID and post-COVID image data with the MLC technique. Finally, a model is presented to analyze before- and after-COVID images to obtain complete numerical and visual information about the study area.
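The GLCM texture features named in this abstract (contrast, ASM, IDM) can be computed from first principles as sketched below; this is a generic single-offset GLCM on a toy quantized image, not the authors' implementation:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey level co-occurrence matrix for one (dx, dy) offset, normalized
    so its entries sum to 1. `img` must already be quantized to `levels`."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def glcm_features(p):
    """Contrast, ASM (energy), and IDM (homogeneity) of a normalized GLCM."""
    i, j = np.indices(p.shape)
    contrast = float(((i - j) ** 2 * p).sum())
    asm = float((p ** 2).sum())
    idm = float((p / (1.0 + (i - j) ** 2)).sum())
    return contrast, asm, idm

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img)                      # horizontal offset (1, 0)
contrast, asm, idm = glcm_features(p)
assert abs(p.sum() - 1.0) < 1e-12  # valid probability matrix
```

Repeating this for the paper's eight distances and four orientations would simply vary `(dx, dy)`.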
20
Kim YJ. Machine Learning Models for Sarcopenia Identification Based on Radiomic Features of Muscles in Computed Tomography. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:ijerph18168710. [PMID: 34444459 PMCID: PMC8394435 DOI: 10.3390/ijerph18168710] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 08/13/2021] [Accepted: 08/16/2021] [Indexed: 12/12/2022]
Abstract
The diagnosis of sarcopenia requires accurate muscle quantification. As an alternative to manual muscle mass measurement through computed tomography (CT), artificial intelligence can be leveraged for the automation of these measurements. Although generally difficult to identify with the naked eye, the radiomic features in CT images are informative. In this study, the radiomic features were extracted from L3 CT images of the entire muscle area and partial areas of the erector spinae collected from non-small cell lung carcinoma (NSCLC) patients. The first-order statistics and gray-level co-occurrence, gray-level size zone, gray-level run length, neighboring gray-tone difference, and gray-level dependence matrices were the radiomic features analyzed. The identification performances of the following machine learning models were evaluated: logistic regression, support vector machine (SVM), random forest, and extreme gradient boosting (XGB). Sex, coarseness, skewness, and cluster prominence were selected as the relevant features effectively identifying sarcopenia. The XGB model demonstrated the best performance for the entire muscle, whereas the SVM was the worst-performing model. Overall, the models demonstrated improved performance for the entire muscle compared to the erector spinae. Although further validation is required, the radiomic features presented here could become reliable indicators for quantifying the phenomena observed in the muscles of NSCLC patients, thus facilitating the diagnosis of sarcopenia.
Affiliation(s)
- Young Jae Kim
- Department of Biomedical Engineering, Gachon University, Inchon 21936, Korea
21
Combining Spectral and Texture Features of UAV Images for the Remote Estimation of Rice LAI throughout the Entire Growing Season. REMOTE SENSING 2021. [DOI: 10.3390/rs13153001] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Leaf area index (LAI) estimation is important not only for canopy structure analysis but also for yield prediction. The unmanned aerial vehicle (UAV) is a promising platform for LAI estimation due to its applicability and flexibility. At present, the vegetation index (VI) is still the most widely used method for LAI estimation because it is fast and simple to calculate. However, VI reflects only spectral information and ignores the texture information of images, so it is difficult to adapt to the unique and complex morphological changes of rice across growth stages. In this study we propose a novel method that combines texture information derived from local binary pattern and variance features (LBP and VAR) with spectral information based on VI to improve the estimation accuracy of rice LAI throughout the entire growing season. Multitemporal images of two study areas, located in Hainan and Hubei, were acquired with a 12-band camera, and the main bands for constituting VIs (green, red, red edge, and near-infrared) were selected to analyze their changes in spectrum and texture over the growing season. After mathematically combining plot-level spectrum and texture values, new indices were constructed to estimate rice LAI. Compared with the corresponding VIs, the new indices were all less sensitive to the appearance of panicles and slightly alleviated the saturation issue. The coefficient of determination (R2) improved for all tested VIs throughout the entire growing season. The results show that combining spectral and texture features yields better predictive ability than VI alone for estimating rice LAI. The method uses only the texture and spectral information of the UAV image itself; it is fast, easy to operate, needs no manual intervention, and can serve as a low-cost way to monitor crop growth.
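A minimal sketch of the spectral-plus-texture idea on synthetic reflectance data; the fusion formula here is a hypothetical illustration, not the paper's actual index combinations:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red and NIR reflectance."""
    return (nir - red) / (nir + red + 1e-9)

def local_variance(band, k=3):
    """VAR texture: variance over a k x k window (valid region only)."""
    h, w = band.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = band[y:y + k, x:x + k].var()
    return out

rng = np.random.default_rng(1)
red = rng.random((8, 8)) * 0.2            # synthetic red reflectance
nir = rng.random((8, 8)) * 0.5 + 0.3      # synthetic NIR reflectance
vi = ndvi(red, nir)[1:-1, 1:-1]           # crop to match the texture map
tex = local_variance(nir)
combined = vi * (1.0 + tex)               # one simple spectral x texture fusion
assert vi.shape == tex.shape == (6, 6)
```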
22
Singh PD, Kaur R, Singh KD, Dhiman G. A Novel Ensemble-based Classifier for Detecting the COVID-19 Disease for Infected Patients. INFORMATION SYSTEMS FRONTIERS : A JOURNAL OF RESEARCH AND INNOVATION 2021; 23:1385-1401. [PMID: 33935584 PMCID: PMC8068562 DOI: 10.1007/s10796-021-10132-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 03/30/2021] [Indexed: 05/02/2023]
Abstract
SARS-CoV-2, the recently discovered coronavirus first detected in Wuhan, China, at the end of 2019, has spread worldwide and is still under study. Detection of COVID-19 at an early stage is essential to provide adequate healthcare to affected patients and protect the uninfected community. This paper aims to design and develop a novel ensemble-based classifier to predict COVID-19 cases at a very early stage so that appropriate action can be taken by patients, doctors, health organizations, and the government. A synthetic COVID-19 dataset is generated by a dataset generation algorithm, and a novel ensemble-based machine learning classifier is applied to it to predict the disease. A convex hull-based approach is also applied to the data to improve the accuracy and speed of the proposed ensemble classifier. The model is implemented in Python and compared with popular classifiers, i.e., Decision Tree, ID3, and support vector machine. The results indicate that the proposed classifier provides better precision, kappa statistic, root mean square error, recall, F-measure, and accuracy.
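A hard-voting ensemble over the baseline classifiers mentioned here can be sketched with scikit-learn; the synthetic data, the choice of base learners (an "entropy" criterion as an ID3-like split), and all hyperparameters are illustrative assumptions, not the paper's model:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary-classification data standing in for the generated dataset.
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("cart", DecisionTreeClassifier(random_state=0)),    # Gini tree
        ("id3", DecisionTreeClassifier(criterion="entropy",  # ID3-like split
                                       random_state=0)),
        ("svm", SVC(kernel="rbf", random_state=0)),
    ],
    voting="hard",  # majority vote across the base classifiers
)
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
assert 0.5 < acc <= 1.0  # the ensemble beats chance on this toy problem
```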
Affiliation(s)
- Prabh Deep Singh
- Department of Computer Science & Engineering, Punjabi University, Patiala, Punjab, India
- Rajbir Kaur
- Department of Electronics & Communication Engineering, Punjabi University, Patiala, Punjab, India
- Kiran Deep Singh
- Department of Computer Science & Engineering, IKG Punjab Technical University, Punjab, India
- Gaurav Dhiman
- Department of Computer Science, Government Bikram College of Commerce, Punjabi University, Patiala, Punjab, India
23
Ubiquitous Vehicular Ad-Hoc Network Computing Using Deep Neural Network with IoT-Based Bat Agents for Traffic Management. ELECTRONICS 2021. [DOI: 10.3390/electronics10070785] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
In this paper, Deep Neural Networks (DNN) with the Bat Algorithm (BA) offer a dynamic form of traffic control in Vehicular Ad-hoc Networks (VANETs). The former routes vehicles away from highly congested paths to enhance efficiency with lower average latency; the latter, combined with the Internet of Things (IoT), moves across the VANET to analyze the traffic congestion status between network nodes. The experimental analysis tests the effectiveness of DNN-IoT-BA against various machine and deep learning algorithms in VANETs, validated through network metrics such as packet delivery ratio, latency, and packet error rate. The simulation results show that the proposed method provides lower energy consumption and latency than conventional methods in support of real-time traffic conditions.
24
Integration of Discrete Wavelet Transform, DBSCAN, and Classifiers for Efficient Content Based Image Retrieval. ELECTRONICS 2020. [DOI: 10.3390/electronics9111886] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
In the domain of computer vision, efficiently representing an image feature vector for image retrieval remains a significant problem. Extensive research has been undertaken on Content-Based Image Retrieval (CBIR) using various descriptors, and machine learning algorithms paired with certain descriptors have significantly improved the performance of these systems. This work implements a new CBIR scheme to address the semantic gap issue and form an efficient feature vector. The technique is based on histogram formation for the query and dataset images: the auto-correlogram of each image is computed in RGB space, followed by moment extraction, and the Discrete Wavelet Transform (DWT) is applied in a multi-resolution framework to form efficient feature vectors. A codebook is formed using a density-based clustering approach, Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The similarity index is computed as the Euclidean distance between the feature vector of the query image and those of the dataset images. Different classifiers, including Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Decision Tree, are used for image classification. Experiments on three publicly available datasets compare the proposed framework with other state-of-the-art frameworks, showing favorable accuracy.
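The DBSCAN codebook and Euclidean similarity steps can be sketched as follows; the toy Gaussian "descriptors" and the eps/min_samples settings are illustrative assumptions standing in for the DWT/auto-correlogram features:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy feature vectors standing in for the DWT/auto-correlogram descriptors:
# two well-separated groups of "similar images" in a 4-D feature space.
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(0.0, 0.05, (20, 4)),   # one dense cluster
    rng.normal(1.0, 0.05, (20, 4)),   # a second dense cluster
])

# DBSCAN groups dense regions of descriptor space; the mean of each cluster
# serves as a codebook entry (label -1 marks noise and is excluded).
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(features)
codebook = np.array([features[labels == k].mean(axis=0)
                     for k in sorted(set(labels)) if k != -1])

def euclidean(a, b):
    """Similarity index between query and codebook feature vectors."""
    return float(np.linalg.norm(a - b))

query = rng.normal(0.0, 0.05, 4)      # a query "image" near the first cluster
nearest = min(range(len(codebook)), key=lambda k: euclidean(query, codebook[k]))
assert len(codebook) == 2             # both synthetic clusters recovered
```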
25
MoSSE: a novel hybrid multi-objective meta-heuristic algorithm for engineering design problems. Soft comput 2020. [DOI: 10.1007/s00500-020-05046-9] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]