1. Soundarya B, Poongodi C. A novel hybrid feature fusion approach using handcrafted features with transfer learning model for enhanced skin cancer classification. Comput Biol Med 2025;190:110104. [PMID: 40168807] [DOI: 10.1016/j.compbiomed.2025.110104]
Abstract
Skin cancer is a deadly disease with one of the fastest-rising incidence rates globally. It arises from aberrant skin cells, often caused by prolonged exposure to ultraviolet rays from sunlight or artificial tanning devices. Dermatologists rely on visual inspection to identify suspicious lesions, and prompt, accurate diagnosis is pivotal for effective treatment and for improving the chances of recovery. Recently, machine learning and deep learning algorithms have been applied to skin cancer prediction for early detection. This work presents a novel hybrid feature extraction approach in which handcrafted features are fused with a deep learning model for dermoscopic image analysis. Skin lesion images from sources such as ISIC were pre-processed, and features were extracted using the Grey-Level Co-Occurrence Matrix (GLCM), the Redundant Discrete Wavelet Transform (RDWT) and various pre-trained models. After evaluating all combinations, the proposed feature fusion model outperformed all other models. The fusion model combines GLCM, RDWT, and DenseNet121 features, which were evaluated with various classifiers; an impressive accuracy of 93.46% was obtained with the XGBoost classifier and 94.25% with the ensemble classifier. This study underscores the efficacy of integrating diverse feature extraction techniques to increase the reliability and effectiveness of skin cancer diagnosis.
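As an illustration of the fusion pipeline described in this abstract, the following sketch combines GLCM texture properties, a level-1 stationary wavelet transform (one common realisation of the RDWT), and ImageNet-pretrained DenseNet121 features ahead of an XGBoost classifier. It is an editor's example under assumed settings, not the authors' code; image arrays and hyperparameters are placeholders.

```python
# Editor's illustrative sketch, not the authors' implementation.
# Assumptions: a level-1 stationary wavelet transform stands in for the RDWT,
# DenseNet121 uses ImageNet weights with global average pooling, and images are
# 224x224 RGB with an 8-bit greyscale copy (even side lengths are required by swt2).
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from tensorflow.keras.applications import DenseNet121
from tensorflow.keras.applications.densenet import preprocess_input
from xgboost import XGBClassifier

cnn = DenseNet121(weights="imagenet", include_top=False, pooling="avg")  # 1024-d output

def glcm_features(gray_u8):
    glcm = graycomatrix(gray_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def rdwt_features(gray_f32):
    # Level-1 redundant (stationary) wavelet transform: mean/std of each sub-band.
    (cA, (cH, cV, cD)), = pywt.swt2(gray_f32, wavelet="db2", level=1)
    return np.array([f(band) for band in (cA, cH, cV, cD) for f in (np.mean, np.std)])

def fused_features(rgb_224, gray_u8):
    deep = cnn.predict(preprocess_input(rgb_224[None].astype("float32")), verbose=0).ravel()
    return np.hstack([glcm_features(gray_u8), rdwt_features(gray_u8.astype("float32")), deep])

# X = np.stack([fused_features(rgb, gray) for rgb, gray in lesion_images]); y = labels
# clf = XGBClassifier(n_estimators=300, learning_rate=0.05).fit(X, y)
```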
Affiliation(s)
- B Soundarya
- Bannari Amman Institute of Technology Sathyamangalam, India.
- C Poongodi
- Bannari Amman Institute of Technology Sathyamangalam, India.
2. Omari Alaoui A, Boutahir MK, El Bahi O, Hessane A, Farhaoui Y, El Allaoui A. Accelerating deep learning model development—towards scalable automated architecture generation for optimal model design. Multimedia Tools and Applications 2024;84:3053-3069. [DOI: 10.1007/s11042-024-20481-8]
3. Ramamoorthy P, Ramakantha Reddy BR, Askar SS, Abouhawwash M. Histopathology-based breast cancer prediction using deep learning methods for healthcare applications. Front Oncol 2024;14:1300997. [PMID: 38894870] [PMCID: PMC11184215] [DOI: 10.3389/fonc.2024.1300997]
Abstract
Breast cancer (BC) is the leading cause of female cancer mortality and a major threat to women's health. Deep learning methods have recently been used extensively in many medical domains, especially in detection and classification applications. Studying histological images for the automatic diagnosis of BC is important for patients and their prognosis; owing to the complexity and variety of histology images, manual examination can be difficult and error-prone, and it requires experienced pathologists. Therefore, the publicly accessible BreakHis and invasive ductal carcinoma (IDC) datasets are used in this study to analyze histopathological images of BC. First, the gathered BreakHis and IDC images are pre-processed with super-resolution generative adversarial networks (SRGANs), which create high-resolution images from low-quality ones, to provide useful inputs for the prediction stage. The SRGAN concept combines components of conventional generative adversarial network (GAN) loss functions with efficient sub-pixel nets. The high-quality images are then passed to a data augmentation stage, where new data points are created by making small adjustments to the dataset using rotation, random cropping, mirroring, and color-shifting. Next, patch-based feature extraction using Inception V3 and ResNet-50 (PFE-INC-RES) is employed to extract features from the augmented images. After the features have been extracted, transductive long short-term memory (TLSTM) is applied to improve classification accuracy by decreasing the number of false positives. The suggested PFE-INC-RES is evaluated against existing methods on the BreakHis dataset, with respect to accuracy (99.84%), specificity (99.71%), sensitivity (99.78%), and F1-score (99.80%), and it performs better on the IDC dataset in terms of F1-score (99.08%), accuracy (99.79%), specificity (98.97%), and sensitivity (99.17%).
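The augmentation step mentioned above (rotation, random cropping, mirroring, color-shifting) can be sketched as follows; this is an editor's illustration using Keras' ImageDataGenerator with assumed parameter values, not the authors' configuration.

```python
# Editor's illustrative sketch of the augmentation step; parameter values are assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=30,          # random rotation
    width_shift_range=0.1,      # shifts approximate random cropping
    height_shift_range=0.1,
    horizontal_flip=True,       # mirroring
    vertical_flip=True,
    channel_shift_range=20.0,   # colour shifting
    fill_mode="reflect",
)

# flow() yields augmented batches; X is an (N, H, W, 3) image array, y the labels.
# for batch_x, batch_y in augmenter.flow(X, y, batch_size=32):
#     ...train on, or extract features from, the augmented batch
```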
Affiliation(s)
- Prabhu Ramamoorthy
- Department of Electronics and Communication Engineering, Gnanamani College of Technology, Namakkal, India
- S. S. Askar
- Department of Statistics and Operations Research, College of Science, King Saud University, Riyadh, Saudi Arabia
- Mohamed Abouhawwash
- Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, Egypt
4. Sharkas M, Attallah O. Color-CADx: a deep learning approach for colorectal cancer classification through triple convolutional neural networks and discrete cosine transform. Sci Rep 2024;14:6914. [PMID: 38519513] [PMCID: PMC10959971] [DOI: 10.1038/s41598-024-56820-w]
Abstract
Colorectal cancer (CRC) exhibits a significant death rate that consistently impacts human lives worldwide. Histopathological examination is the standard method for CRC diagnosis; however, it is complicated, time-consuming, and subjective. Computer-aided diagnostic (CAD) systems using digital pathology can help pathologists diagnose CRC faster and more accurately than manual histopathology examination. Deep learning algorithms, especially convolutional neural networks (CNNs), are advocated for the diagnosis of CRC. Nevertheless, most previous CAD systems obtained features from a single CNN, these features are of very high dimension, and they relied on spatial information only to achieve classification. In this paper, a CAD system called "Color-CADx" is proposed for CRC recognition. Different CNNs, namely ResNet50, DenseNet201, and AlexNet, are used for end-to-end classification at different training-testing ratios. Moreover, features are extracted from these CNNs and reduced using the discrete cosine transform (DCT). DCT is also utilized to acquire a spectral representation, which is then used to further select a reduced set of deep features. Furthermore, the DCT coefficients obtained in the previous step are concatenated, and the analysis of variance (ANOVA) feature selection approach is applied to choose significant features. Finally, machine learning classifiers are employed for CRC classification. Two publicly available datasets were investigated, the NCT-CRC-HE-100 K dataset and the Kather_texture_2016_image_tiles dataset. The highest achieved accuracy reached 99.3% for the NCT-CRC-HE-100 K dataset and 96.8% for the Kather_texture_2016_image_tiles dataset. DCT and ANOVA successfully lowered the feature dimensionality, thus reducing complexity. Color-CADx has demonstrated efficacy in terms of accuracy, with performance surpassing that of the most recent advancements.
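A minimal sketch of the DCT-plus-ANOVA reduction described here, assuming placeholder feature matrices and an SVM as the downstream classifier (the paper evaluates several classifiers):

```python
# Editor's illustrative sketch, not the authors' code: reduce deep-feature dimensionality
# with the discrete cosine transform, select coefficients with an ANOVA F-test, and
# classify with an SVM. Feature matrices, sizes and k values are placeholders.
import numpy as np
from scipy.fft import dct
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
deep_feats = rng.normal(size=(500, 2048))   # e.g. concatenated CNN features per image
labels = rng.integers(0, 2, size=500)

# Keep the first k low-frequency DCT coefficients of each feature vector.
k = 256
dct_feats = dct(deep_feats, type=2, norm="ortho", axis=1)[:, :k]

pipe = make_pipeline(StandardScaler(),
                     SelectKBest(f_classif, k=64),   # ANOVA-based selection
                     SVC(kernel="rbf", C=10))
print(cross_val_score(pipe, dct_feats, labels, cv=5).mean())
```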
Affiliation(s)
- Maha Sharkas
- Electronics and Communications Engineering Department, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt
- Omneya Attallah
- Electronics and Communications Engineering Department, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt.
- Wearables, Biosensing, and Biosignal Processing Laboratory, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt.
5. Chhillar I, Singh A. A feature engineering-based machine learning technique to detect and classify lung and colon cancer from histopathological images. Med Biol Eng Comput 2024;62:913-924. [PMID: 38091162] [DOI: 10.1007/s11517-023-02984-y]
Abstract
Globally, lung and colon cancers are among the most prevalent and lethal tumors, and early identification is essential to increase the likelihood of survival. Histopathological images are considered an appropriate tool for diagnosing cancer, but their analysis is tedious and error-prone if done manually. Recently, machine learning methods based on feature engineering have gained prominence in automatic histopathological image classification. Furthermore, these methods are more interpretable than deep learning, which operates in a "black box" manner; in the medical profession, the interpretability of a technique is critical to gaining the trust of end users. In view of the above, this work aims to create an accurate and interpretable machine learning technique for the automated classification of lung and colon cancers from histopathology images. In the proposed approach, following the preprocessing steps, texture and color features are retrieved using the Haralick and color histogram feature extraction algorithms, respectively, and the obtained features are concatenated to form a single feature set. The three feature sets (texture, color, and combined features) are passed to the Light Gradient Boosting Machine (LightGBM) classifier, and their performance is evaluated on the LC25000 dataset using hold-out and stratified 10-fold cross-validation (Stratified 10-FCV) techniques. On the hold-out test set, LightGBM classifies the lung and colon cancer images with 97.72%, 99.92%, and 100% accuracy using the texture, color, and combined features, respectively. Stratified 10-fold cross-validation also shows that LightGBM with the combined or color features performs well, with an excellent mean auc_mu score and a low mean multi_logloss value. Thus, the proposed technique can help histologists detect and classify lung and colon histopathology images more efficiently, effectively, and economically, resulting in greater productivity.
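A minimal sketch of the texture-plus-color feature engineering described above, using GLCM-based Haralick-style texture properties, per-channel color histograms, and a LightGBM classifier; arrays, bin counts, and hyperparameters are placeholders, not the authors' settings:

```python
# Editor's illustrative sketch, not the authors' code.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from lightgbm import LGBMClassifier

def texture_features(gray_u8):
    # Haralick-style texture descriptors from a grey-level co-occurrence matrix.
    glcm = graycomatrix(gray_u8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def color_histogram(rgb_u8, bins=32):
    # Normalised per-channel histograms as colour features.
    return np.hstack([np.histogram(rgb_u8[..., c], bins=bins, range=(0, 255),
                                   density=True)[0] for c in range(3)])

def combined_features(rgb_u8, gray_u8):
    return np.hstack([texture_features(gray_u8), color_histogram(rgb_u8)])

# X = np.stack([combined_features(rgb, gray) for rgb, gray in images]); y = labels
# clf = LGBMClassifier(n_estimators=500, learning_rate=0.05).fit(X, y)
```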
Affiliation(s)
- Indu Chhillar
- Department of Computer Science and Engineering, Deenbandhu Chhotu Ram University of Science and Technology, Murthal, Haryana, India.
- Ajmer Singh
- Department of Computer Science and Engineering, Deenbandhu Chhotu Ram University of Science and Technology, Murthal, Haryana, India
6. Pham TD, Holmes SB, Coulthard P. A review on artificial intelligence for the diagnosis of fractures in facial trauma imaging. Front Artif Intell 2024;6:1278529. [PMID: 38249794] [PMCID: PMC10797131] [DOI: 10.3389/frai.2023.1278529]
Abstract
Patients with facial trauma may suffer from injuries such as broken bones, bleeding, swelling, bruising, lacerations, burns, and facial deformity. Common causes of facial-bone fractures are road accidents, violence, and sports injuries. Surgery is needed if, based on radiological findings, the trauma patient would otherwise be deprived of normal functioning or subject to facial deformity. Although image reading by radiologists is useful for evaluating suspected facial fractures, there are certain challenges in human-based diagnostics. Artificial intelligence (AI) is making a quantum leap in radiology, producing significant improvements in reports and workflows. Here, an updated literature review is presented on the impact of AI in facial trauma, with special reference to fracture detection in radiology. The purpose is to gain insight into current developments and the demand for future research in facial trauma. This review also discusses the limitations to be overcome and the important open issues that must be investigated to make AI applications to facial trauma more effective and realistic in practical settings. The publications selected for review were chosen based on their clinical significance, journal metrics, and journal indexing.
Affiliation(s)
- Tuan D. Pham
- Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
7. Omari Alaoui A, El Bahi O, Rida Fethi M, Farhaoui O, El Allaoui A, Farhaoui Y. Pre-trained CNNs: Evaluating Emergency Vehicle Image Classification. Data and Metadata 2023;2:153. [DOI: 10.56294/dm2023153]
Abstract
In this paper, we provide a comprehensive analysis of image classification in the context of emergency vehicle classification. We conducted an in-depth investigation of the effectiveness of six pre-trained Convolutional Neural Network (CNN) models, namely VGG19, VGG16, MobileNetV3Large, MobileNetV3Small, MobileNetV2, and MobileNetV1, which were thoroughly examined and evaluated for emergency vehicle classification. The research methodology follows a systematic approach, including thorough preparation of the datasets, deliberate modifications to the model architectures, careful selection of layer operations, and fine-tuning of model compilation. To gain a comprehensive understanding of performance, we conducted a detailed series of experiments and analyzed nuanced performance metrics such as accuracy, loss, and training time, considering important factors in the evaluation process. The results provide a comprehensive understanding of the advantages and disadvantages of each model, and they emphasize the crucial importance of choosing a suitable pre-trained CNN model for image classification tasks. In essence, this article offers an overview of image classification, highlighting the significance of pre-trained CNN models in achieving precise outcomes, especially in the demanding field of emergency vehicle classification.
8. Rai HM, Yoo J. Analysis of Colorectal and Gastric Cancer Classification: A Mathematical Insight Utilizing Traditional Machine Learning Classifiers. Mathematics 2023;11:4937. [DOI: 10.3390/math11244937]
Abstract
Cancer remains a formidable global health challenge, claiming millions of lives annually. Timely and accurate cancer diagnosis is imperative. While numerous reviews have explored cancer classification using machine learning and deep learning techniques, scant literature focuses on traditional ML methods. In this manuscript, we undertake a comprehensive review of colorectal and gastric cancer detection specifically employing traditional ML classifiers. This review emphasizes the mathematical underpinnings of cancer detection, encompassing preprocessing techniques, feature extraction, machine learning classifiers, and performance assessment metrics. We provide mathematical formulations for these key components. Our analysis is limited to peer-reviewed articles published between 2017 and 2023, exclusively considering medical imaging datasets. Benchmark and publicly available imaging datasets for colorectal and gastric cancers are presented. This review synthesizes findings from 20 articles on colorectal cancer and 16 on gastric cancer, culminating in a total of 36 research articles. A significant focus is placed on mathematical formulations for commonly used preprocessing techniques, features, ML classifiers, and assessment metrics. Crucially, we introduce our optimized methodology for the detection of both colorectal and gastric cancers. Our performance metrics analysis reveals remarkable results: 100% accuracy in both cancer types, but with the lowest sensitivity recorded at 43.1% for gastric cancer.
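For reference, the performance assessment metrics that the review formalizes are conventionally defined from the confusion-matrix counts (TP, TN, FP, FN) as:

```latex
\mathrm{Accuracy}=\frac{TP+TN}{TP+TN+FP+FN},\qquad
\mathrm{Sensitivity}=\frac{TP}{TP+FN},\qquad
\mathrm{Specificity}=\frac{TN}{TN+FP},\qquad
F_{1}=\frac{2\,TP}{2\,TP+FP+FN}
```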
Affiliation(s)
- Hari Mohan Rai
- School of Computing, Gachon University, Seongnam-si 13120, Republic of Korea
- Joon Yoo
- School of Computing, Gachon University, Seongnam-si 13120, Republic of Korea
9. Umuhoza IS, Mukamakuza C. SVM Model-Based Digital System for Malaria Screening and Parasite Monitoring. 2023 IEEE Third International Conference on Signal, Control and Communication (SCC) 2023:1-6. [DOI: 10.1109/scc59637.2023.10527507]
10. Rajinikanth V, Kadry S, Mohan R, Rama A, Khan MA, Kim J. Colon histology slide classification with deep-learning framework using individual and fused features. Mathematical Biosciences and Engineering 2023;20:19454-19467. [PMID: 38052609] [DOI: 10.3934/mbe.2023861]
Abstract
Cancer occurrence rates are gradually rising in the population, which creates a heavy diagnostic burden globally. The rate of colorectal (bowel) cancer (CC) is gradually rising, and it is currently listed as the third most common cancer globally. Therefore, early screening and treatment with a recommended clinical protocol are necessary to treat the cancer. The aim of this research is to develop a Deep-Learning Framework (DLF) to classify colon histology slides into normal/cancer classes using deep-learning-based features. The stages of the framework include the following: (ⅰ) image collection, resizing, and pre-processing; (ⅱ) Deep-Features (DF) extraction with a chosen scheme; (ⅲ) binary classification with 5-fold cross-validation; and (ⅳ) verification of the clinical significance. This work classifies the considered image database using (ⅰ) individual DF, (ⅱ) fused DF, and (ⅲ) ensemble DF, and the achieved results are separately verified using binary classifiers. The proposed work considered 4000 (2000 normal and 2000 cancer) histology slides for the examination. The results confirm that the fused DF helps to achieve a detection accuracy of 99% with the K-Nearest Neighbor (KNN) classifier, whereas the individual and ensemble DF provide classification accuracies of 93.25% and 97.25%, respectively.
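A minimal sketch of serial deep-feature fusion verified with a binary classifier and 5-fold cross-validation, as outlined above; the choice of DenseNet121 and ResNet50 as the two feature extractors and of KNN with k = 5 is an assumption for illustration, not the authors' exact scheme:

```python
# Editor's illustrative sketch, not the authors' code.
import numpy as np
from tensorflow.keras.applications import DenseNet121, ResNet50
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

net_a = DenseNet121(weights="imagenet", include_top=False, pooling="avg")
net_b = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def fused_deep_features(batch_224_rgb):
    # batch_224_rgb: float32 array of shape (N, 224, 224, 3), already preprocessed
    fa = net_a.predict(batch_224_rgb, verbose=0)
    fb = net_b.predict(batch_224_rgb, verbose=0)
    return np.hstack([fa, fb])          # serial (concatenation) fusion

# X = fused_deep_features(images); y = labels (0 = normal, 1 = cancer)
# knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
# print(cross_val_score(knn, X, y, cv=5, scoring="accuracy").mean())
```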
Affiliation(s)
- Venkatesan Rajinikanth
- Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, SIMATS, Chennai 602105, India
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman 346, United Arab Emirates
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos 1401, Lebanon
- Ramya Mohan
- Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, SIMATS, Chennai 602105, India
- Arunmozhi Rama
- Department of Computer Science and Engineering, Division of Research and Innovation, Saveetha School of Engineering, SIMATS, Chennai 602105, India
- Muhammad Attique Khan
- Department of Computer Science and Mathematics, Lebanese American University, Beirut, Lebanon
- Jungeun Kim
- Department of Software, Kongju National University, Cheonan, 31080, Korea
11. Jing Y, Li C, Du T, Jiang T, Sun H, Yang J, Shi L, Gao M, Grzegorzek M, Li X. A comprehensive survey of intestine histopathological image analysis using machine vision approaches. Comput Biol Med 2023;165:107388. [PMID: 37696178] [DOI: 10.1016/j.compbiomed.2023.107388]
Abstract
Colorectal cancer (CRC) is currently one of the most common and deadly cancers: it is the third most common malignancy and the fourth leading cause of cancer death worldwide, and it ranks as the second most frequent cause of cancer-related deaths in the United States and other developed countries. Because histopathological images contain sufficient phenotypic information, they play an indispensable role in the diagnosis and treatment of CRC. To improve objectivity and diagnostic efficiency, computer-aided diagnosis (CAD) methods based on machine learning (ML) are widely applied to the image analysis of intestinal histopathology. In this investigation, we conduct a comprehensive study of recent ML-based methods for the image analysis of intestinal histopathology. First, we discuss commonly used datasets from basic research studies together with the medically relevant background of intestinal histopathology. Second, we introduce the traditional ML methods commonly used in intestinal histopathology, as well as deep learning (DL) methods. We then provide a comprehensive review of recent developments in ML methods for segmentation, classification, detection, and recognition, among other tasks, for histopathological images of the intestine. Finally, the existing methods are summarized and the application prospects of these methods in this field are given.
Affiliation(s)
- Yujie Jing
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China.
- Tianming Du
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
- Hongzan Sun
- Shengjing Hospital of China Medical University, Shenyang, China
- Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Liyu Shi
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Minghe Gao
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
- Xiaoyan Li
- Cancer Hospital of China Medical University, Liaoning Cancer Hospital, Shenyang, China.
12. Wanjiku RN, Nderu L, Kimwele M. Improved transfer learning using textural features conflation and dynamically fine-tuned layers. PeerJ Comput Sci 2023;9:e1601. [PMID: 37810335] [PMCID: PMC10557498] [DOI: 10.7717/peerj-cs.1601]
Abstract
Transfer learning involves using the knowledge a model has previously learnt on one task to address another task. However, this process works well only when the tasks are closely related, so it is important to select data points that are closely relevant to the previous task and to fine-tune the suitable layers of the pre-trained model for effective transfer. This work utilises the target-dataset samples with the least divergent textural features together with dynamically selected layers of the pre-trained model, minimising the knowledge lost during the transfer learning process. The study extends previous work on selecting data points with good textural features and on dynamically selecting layers using divergence measures by combining both into one model pipeline. Five pre-trained models are used: ResNet50, DenseNet169, InceptionV3, VGG16 and MobileNetV2, on nine datasets: CIFAR-10, CIFAR-100, MNIST, Fashion-MNIST, Stanford Dogs, Caltech 256, ISIC 2016, ChestX-ray8 and MIT Indoor Scenes. Experimental results show that data points with lower textural feature divergence and layers with more positive weights give better accuracy than other data points and layers. The data points with lower divergence give an average improvement of 3.54% to 6.75%, while the layer selection improves accuracy by 2.42% to 13.04% on the CIFAR-100 dataset. Combining the two methods gives an extra accuracy improvement of 1.56%. This combined approach shows that data points with lower divergence from the source dataset samples can lead to better adaptation for the target task. The results also demonstrate that selecting layers with more positive weights reduces trial and error in choosing fine-tuning layers for pre-trained models.
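A minimal sketch of divergence-based data-point selection in the spirit of the approach above; the grey-level histogram descriptor and the KL divergence used here are assumptions for illustration, not the authors' exact textural features or divergence measure:

```python
# Editor's illustrative sketch, not the authors' method: rank target-domain images by the
# divergence of their grey-level histograms from the source-domain average, so the least
# divergent samples can be prioritised for fine-tuning.
import numpy as np
from scipy.stats import entropy

def grey_histogram(gray_u8, bins=64):
    h, _ = np.histogram(gray_u8, bins=bins, range=(0, 255))
    return (h + 1e-8) / (h.sum() + 1e-8 * bins)   # smoothed probability histogram

def rank_by_divergence(target_imgs, source_imgs):
    source_ref = np.mean([grey_histogram(im) for im in source_imgs], axis=0)
    divs = [entropy(grey_histogram(im), source_ref) for im in target_imgs]  # KL divergence
    return np.argsort(divs)    # indices of target images, least divergent first

# order = rank_by_divergence(target_images, source_images)
# fine_tune_subset = [target_images[i] for i in order[:1000]]
```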
Affiliation(s)
- Lawrence Nderu
- Computing, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya
- Michael Kimwele
- Computing, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya
13. Arora P, Singh P, Girdhar A, Vijayvergiya R. Calcification Detection in Intravascular Ultrasound (IVUS) Images Using Transfer Learning Based MultiSVM model. Ultrasonic Imaging 2023;45:136-150. [PMID: 37052393] [DOI: 10.1177/01617346231164574]
Abstract
Cardiovascular disease is the leading cause of death worldwide, and calcification detection is considered an important factor in cardiovascular disease. Currently, medical practitioners visually inspect the presence of calcification in intravascular ultrasound (IVUS) images. This study aims to grade the extent of calcification in IVUS images acquired at 40 MHz, with classes I and II treated as mild calcification and classes III and IV as dense calcification. To detect calcification, features were extracted using an improved AlexNet architecture and then fed into machine learning classifiers. The experiments were carried out using 14 real IVUS pullbacks from 10 patients. Experimental results show that combining traditional machine learning with deep learning significantly improves accuracy, and that support vector machines outperform all other classifiers. The proposed model is compared with two other pre-trained models, GoogLeNet (98.8%) and SqueezeNet (99.2%), and exhibits a considerable improvement in classification accuracy (99.8%). In the future, other models such as Vision Transformers could be explored, together with additional feature selection methods such as ReliefF, PSO, and ACO, to improve the overall accuracy of diagnosis.
Affiliation(s)
- Priyanka Arora
- IKG Punjab Technical University, Punjab, India
- Department of Computer Science and Engineering, Guru Nanak Dev Engineering College, Ludhiana, Punjab, India
- Parminder Singh
- Department of Computer Science and Engineering, Guru Nanak Dev Engineering College, Ludhiana, Punjab, India
- Akshay Girdhar
- Department of Information Technology, Guru Nanak Dev Engineering College, Ludhiana, Punjab, India
- Rajesh Vijayvergiya
- Department of Cardiology, Postgraduate Institute of Medical Education and Research (PGIMER), Chandigarh, India
14. Shao H, Wang S. Deep Classification with Linearity-Enhanced Logits to Softmax Function. Entropy (Basel) 2023;25:727. [PMID: 37238482] [DOI: 10.3390/e25050727]
Abstract
Recently, there has been a rapid increase in deep classification tasks such as image recognition and target detection. As one of the most crucial components in Convolutional Neural Network (CNN) architectures, softmax arguably encourages CNNs to achieve better performance in image recognition. Under this scheme, we present a conceptually intuitive learning objective function: Orthogonal-Softmax. The primary property of the loss function is its use of a linear approximation model designed by Gram-Schmidt orthogonalization. Firstly, compared with the traditional softmax and Taylor-Softmax, Orthogonal-Softmax has a stronger relationship through orthogonal polynomial expansion. Secondly, a new loss function is advanced to acquire highly discriminative features for classification tasks. Finally, we present a linear softmax loss that further promotes intra-class compactness and inter-class discrepancy simultaneously. The results of extensive experiments on four benchmark datasets demonstrate the validity of the presented method. In future work, we plan to explore non-ground-truth samples.
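For reference, the standard softmax cross-entropy that Orthogonal-Softmax and Taylor-Softmax modify is, for a logit vector z with C classes and ground-truth class y:

```latex
p_{i} = \frac{e^{z_{i}}}{\sum_{j=1}^{C} e^{z_{j}}},
\qquad
\mathcal{L}_{\mathrm{softmax}} = -\log p_{y} = -\log \frac{e^{z_{y}}}{\sum_{j=1}^{C} e^{z_{j}}}
```

The paper's variants replace or approximate the exponential term; see the article for their exact formulations.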
Affiliation(s)
- Hao Shao
- School of Mathematics and Statistics, Yunnan University, Kunming 650504, China
- Shunfang Wang
- School of Information Science and Engineering, Yunnan University, Kunming 650504, China
- The Key Lab of Intelligent Systems and Computing of Yunnan Province, Yunnan University, Kunming 650504, China
15. Arora P, Singh P, Girdhar A, Vijayvergiya R, Chaudhary P. CADNet: an advanced architecture for automatic detection of coronary artery calcification and shadow border in intravascular ultrasound (IVUS) images. Phys Eng Sci Med 2023;46:773-786. [PMID: 37039978] [PMCID: PMC10088744] [DOI: 10.1007/s13246-023-01250-7]
Abstract
Intravascular ultrasound (IVUS) is a medical imaging modality widely used for the detection and treatment of coronary heart disease, and the detection of vascular structures is extremely important for accurate treatment procedures. Manual detection of the lumen and calcification is very time-consuming and requires technical experience, and ultrasound imaging suffers from artifacts that obstruct the clear delineation of structures. To provide special attention to crucial areas, convolutional block attention modules (CBAM) are integrated into an encoder-decoder-based U-Net architecture along with Atrous Spatial Pyramid Pooling (ASPP) to detect the vessel components: lumen, calcification and shadow borders. The attention modules prove effective in dealing with areas of special interest by assigning additional weights to crucial channels and preserving spatial features. IVUS data from 12 patients undergoing treatment are used for this study. The novelty of the model design is that it can detect the lumen area both in the presence and in the absence of calcification and bifurcation artifacts, and it efficiently detects the calcification area even in severely complex lesions with shadows behind them. A main contribution of the work is that IVUS images with degrees of calcification of up to 360° are also considered, a case usually neglected in previous studies. The experimental results on 1097 IVUS images of 12 patients yield a mean IoU of 0.7894 ± 0.011, a Dice coefficient of 0.8763 ± 0.070, a precision of 0.8768 ± 0.069 and a recall of 0.8774 ± 0.071 for the proposed CADNet model, which shows its effectiveness relative to other state-of-the-art methods.
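For reference, the overlap metrics reported above are defined on a predicted mask P and a ground-truth mask G as:

```latex
\mathrm{IoU}(P,G)=\frac{|P\cap G|}{|P\cup G|},
\qquad
\mathrm{Dice}(P,G)=\frac{2\,|P\cap G|}{|P|+|G|}
```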
Affiliation(s)
- Priyanka Arora
- IKG Punjab Technical University, Punjab, India.
- Department of Computer Science & Engineering, Guru Nanak Dev Engineering College, Ludhiana, Punjab, India.
- Parminder Singh
- Department of Computer Science & Engineering, Guru Nanak Dev Engineering College, Ludhiana, Punjab, India
- Akshay Girdhar
- Department of Information Technology, Guru Nanak Dev Engineering College, Ludhiana, Punjab, India
- Rajesh Vijayvergiya
- Department of Cardiology, Postgraduate Institute of Medical Education and Research (PGIMER), Chandigarh, India
- Prince Chaudhary
- Business Development Manager, Therapy Awareness Group (TAG), Boston Scientific India Private Limited, Gurgaon, India
16. Pham TD, Ravi V, Luo B, Fan C, Sun XF. Artificial intelligence fusion for predicting survival of rectal cancer patients using immunohistochemical expression of Ras homolog family member B in biopsy. Exploration of Targeted Anti-tumor Therapy 2023;4:1-16. [PMID: 36937315] [PMCID: PMC10017185] [DOI: 10.37349/etat.2023.00119]
Abstract
Aim: The process of biomarker discovery is being accelerated by the application of artificial intelligence (AI), including machine learning. Biomarkers of disease are useful because they are indicators of pathogenesis or measures of response to therapeutic treatment, and they therefore play a key role in new drug development. Proteins are among the candidate biomarkers of rectal cancer and need to be explored using state-of-the-art AI for prediction, prognosis, and therapeutic treatment. This paper aims to investigate the predictive power of the Ras homolog family member B (RhoB) protein in rectal cancer.
Methods: This study introduces the integration of pre-trained convolutional neural networks and support vector machines (SVMs) for classifying biopsy samples of immunohistochemical expression of the protein RhoB in rectal-cancer patients, to validate its biological measure in biopsy. Features of the immunohistochemical expression images were extracted by the pre-trained networks and used for binary classification by the SVMs into groups with survival rates of less than and more than 5 years.
Results: The fusion of NASNet-Large for deep-layer feature extraction with an SVM classifier provided the best average classification performance, with a total accuracy of 85%, prediction of a survival rate of more than 5 years of 90%, and prediction of a survival rate of less than 5 years of 75%.
Conclusions: The findings obtained from the use of AI in this study suggest that RhoB expression in rectal-cancer biopsy can potentially be used as a biomarker for predicting survival outcomes in rectal-cancer patients, which can inform clinical decision making on whether a patient should be recommended for preoperative therapy.
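A minimal sketch of the pretrained-CNN-plus-SVM fusion described in the Methods, assuming NASNet-Large with global average pooling as the feature extractor and an RBF-kernel SVM; tile sizes, labels, and hyperparameters are placeholders, not the authors' protocol:

```python
# Editor's illustrative sketch, not the authors' code.
import numpy as np
from tensorflow.keras.applications import NASNetLarge
from tensorflow.keras.applications.nasnet import preprocess_input
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

extractor = NASNetLarge(weights="imagenet", include_top=False, pooling="avg",
                        input_shape=(331, 331, 3))

def deep_features(tiles_331_rgb):
    # tiles_331_rgb: uint8 array of shape (N, 331, 331, 3) of IHC image tiles
    return extractor.predict(preprocess_input(tiles_331_rgb.astype("float32")), verbose=0)

# X = deep_features(tiles); y = survival_labels (0 = <5 years, 1 = >5 years)
# svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# print(cross_val_score(svm, X, y, cv=5, scoring="accuracy").mean())
```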
Affiliation(s)
- Tuan D. Pham
- Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar 34754, Saudi Arabia
- Vinayakumar Ravi
- Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar 34754, Saudi Arabia
- Bin Luo
- Department of Biomedical and Clinical Sciences, Linköping University, 58183 Linköping, Sweden
- Department of Gastrointestinal Surgery, Sichuan Provincial People’s Hospital, Chengdu 610032, Sichuan, China
- Chuanwen Fan
- Department of Biomedical and Clinical Sciences, Linköping University, 58183 Linköping, Sweden
- Xiao-Feng Sun
- Department of Biomedical and Clinical Sciences, Linköping University, 58183 Linköping, Sweden
17. Hyperparameter Optimizer with Deep Learning-Based Decision-Support Systems for Histopathological Breast Cancer Diagnosis. Cancers (Basel) 2023;15:885. [PMID: 36765839] [PMCID: PMC9913140] [DOI: 10.3390/cancers15030885]
Abstract
Histopathological images are a commonly used imaging modality for breast cancer. As manual analysis of histopathological images is difficult, automated tools utilizing artificial intelligence (AI) and deep learning (DL) methods should be developed. Recent advancements in DL approaches help establish maximal image classification performance in numerous application areas. This study develops an arithmetic optimization algorithm with deep-learning-based histopathological breast cancer classification (AOADL-HBCC) technique for healthcare decision making. The AOADL-HBCC technique employs noise removal based on median filtering (MF) and a contrast enhancement process. In addition, the presented AOADL-HBCC technique applies an AOA with a SqueezeNet model to derive feature vectors. Finally, a deep belief network (DBN) classifier with an Adamax hyperparameter optimizer is applied for the breast cancer classification process. A comparative study shows that the AOADL-HBCC technique displays better performance than other recent methodologies, with a maximum accuracy of 96.77%.
18. AL-Ghamdi ASALM, Ragab M. Tunicate swarm algorithm with deep convolutional neural network-driven colorectal cancer classification from histopathological imaging data. Electronic Research Archive 2023;31:2793-2812. [DOI: 10.3934/era.2023141]
Abstract
Colorectal cancer (CRC) is one of the most common cancers among both men and women, with increasing incidence. The increased analytical workload from the pathology laboratory, together with the described intra- and inter-observer variabilities in the assessment of biomarkers, has prompted the quest for robust machine-based approaches to be combined with routine practice. In histopathology, deep learning (DL) techniques have been widely applied due to their potential for supporting the analysis and forecasting of medically relevant molecular phenotypes and microsatellite instability. Against this background, the current research work presents a metaheuristics technique with deep convolutional neural network-based colorectal cancer classification from histopathological imaging data (MDCNN-C3HI). The presented MDCNN-C3HI technique mainly examines histopathological images for the classification of CRC. At the initial stage, the MDCNN-C3HI technique applies a bilateral filtering approach to remove noise. The proposed technique then uses an enhanced capsule network with the Adam optimizer to extract feature vectors. For CRC classification, the MDCNN-C3HI technique uses a DL-modified neural network classifier, with the tunicate swarm algorithm used to fine-tune its hyperparameters. A wide range of experiments was conducted to demonstrate the enhanced performance of the proposed approach, and the outcomes confirm its superiority over existing techniques, achieving a maximum accuracy of 99.45%, a sensitivity of 99.45% and a specificity of 99.45%.
Affiliation(s)
- Abdullah S. AL-Malaise AL-Ghamdi
- Information Systems Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Information Systems Department, HECI School, Dar Alhekma University, Jeddah, Saudi Arabia
- Mahmoud Ragab
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Mathematics Department, Faculty of Science, Al-Azhar University, Naser City 11884, Cairo, Egypt
19. Li G, Wu G, Xu G, Li C, Zhu Z, Ye Y, Zhang H. Pathological image classification via embedded fusion mutual learning. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104181]
20. Felipe GZ, Teixeira LO, Pereira RM, Zanoni JN, Souza SRG, Nanni L, Cavalcanti GDC, Costa YMG. Cancer Identification in Enteric Nervous System Preclinical Images Using Handcrafted and Automatic Learned Features. Neural Process Lett 2022. [DOI: 10.1007/s11063-022-11114-y]
21. An Improved Endoscopic Automatic Classification Model for Gastroesophageal Reflux Disease Using Deep Learning Integrated Machine Learning. Diagnostics (Basel) 2022;12:2827. [PMID: 36428887] [PMCID: PMC9689126] [DOI: 10.3390/diagnostics12112827]
Abstract
Gastroesophageal reflux disease (GERD) is a common digestive tract disease, and most physicians use the Los Angeles classification to grade the severity of the disease and provide appropriate treatment. With the advancement of artificial intelligence, deep learning models have been used successfully to help physicians with clinical diagnosis. This study combines deep learning and machine learning techniques and proposes a two-stage process for endoscopic classification in GERD: transfer learning techniques are applied to the target dataset to extract more precise image features, and machine learning algorithms are then used to build the best classification model. The experimental results demonstrate that the performance of the GerdNet-RF model proposed in this work is better than that of previous studies, with test accuracy improved from 78.8% ± 8.5% to 92.5% ± 2.1%. By enhancing the automated diagnostic capabilities of AI models, patient health care can be better assured.
22. Kosaraju S, Park J, Lee H, Yang JW, Kang M. Deep learning-based framework for slide-based histopathological image analysis. Sci Rep 2022;12:19075. [PMID: 36351997] [PMCID: PMC9646838] [DOI: 10.1038/s41598-022-23166-0]
Abstract
Digital pathology coupled with advanced machine learning (e.g., deep learning) has been changing the paradigm of whole-slide histopathological image (WSI) analysis. Major applications of machine learning in digital pathology include automatic cancer classification, survival analysis, and subtyping from pathological images. While most pathological image analyses are based on patch-wise processing due to the extremely large size of histopathology images, several applications predict a single clinical outcome or perform a pathological diagnosis per slide (e.g., cancer classification, survival analysis). However, current slide-based analyses are task-dependent, and a general framework for slide-based analysis of WSIs has seldom been investigated. We propose a novel slide-based histopathology analysis framework that creates a WSI representation map, called HipoMap, which can be applied to any slide-based problem in combination with convolutional neural networks. HipoMap converts a WSI of arbitrary shape and size into a structured image-type representation. Our proposed HipoMap outperformed existing methods in intensive experiments with various settings and datasets. HipoMap showed an Area Under the Curve (AUC) of 0.96±0.026 (a 5% improvement) in experiments on lung cancer classification, and a c-index of 0.787±0.013 (3.5% improved) and a coefficient of determination (R²) of 0.978±0.032 (24% improved) in survival analysis and survival prediction with TCGA lung cancer data, respectively, as a general and flexible framework for slide-based analysis. The results showed significant improvement compared to the current state-of-the-art methods on each task. We further discussed the experimental results of HipoMap from pathological viewpoints and verified the performance using publicly available TCGA datasets. A Python package is available at https://pypi.org/project/hipomap and can be easily installed using Python PIP. The open-source Python code is available at: https://github.com/datax-lab/HipoMap .
Affiliation(s)
- Sai Kosaraju
- Department of Computer Science, University of Nevada, Las Vegas, Las Vegas, NV 89154, USA
- Jeongyeon Park
- Department of Computer Science, Sun Moon University, Asan, 336708, South Korea
- Hyun Lee
- Department of Computer Science, Sun Moon University, Asan, 336708, South Korea
- Jung Wook Yang
- Department of Pathology, Gyeongsang National University Hospital, Gyeongsang National University College of Medicine, Jinju, South Korea
- Mingon Kang
- Department of Computer Science, University of Nevada, Las Vegas, Las Vegas, NV 89154, USA
23. CJT-DEO: Condorcet’s Jury Theorem and Differential Evolution Optimization based ensemble of deep neural networks for pulmonary and colorectal cancer classification. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109872]
24. X-Ray Lung Image Classification Using a Canny Edge Detector. Journal of Electrical and Computer Engineering 2022. [DOI: 10.1155/2022/3081584]
Abstract
Medical imaging techniques are used to obtain images of the tissue of a specific part of the human body without any surgical intervention. Differences in the clinical experience of individual doctors can lead to discrepancies in the analysis and understanding of medical images, and thus affect the accuracy of the diagnosis of the patient's condition. Using computer-based medical imaging systems for diagnosis can therefore lead to high diagnostic accuracy, and the need to improve the performance of the computer-aided diagnostic systems used in the medical imaging process has consequently increased. Medical image classification can perform a preliminary analysis and interpretation of medical images and can identify the affected parts of the human body, helping doctors reach an optimal diagnosis. Classifying medical images requires extracting image features so that classification can be carried out with high accuracy; one such feature is the set of image edges detected with a Canny edge detector. This is the approach taken in this research, and the experimental results show the effectiveness of the method.
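A minimal sketch of using Canny edge maps as classification features, as the abstract describes; the sigma value, image size, and random-forest classifier are assumptions for illustration, not the authors' settings:

```python
# Editor's illustrative sketch, not the authors' code.
import numpy as np
from skimage.feature import canny
from skimage.transform import resize
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def edge_features(gray_img, size=(128, 128), sigma=2.0):
    small = resize(gray_img, size, anti_aliasing=True)
    edges = canny(small, sigma=sigma)          # boolean Canny edge map
    return edges.astype(np.float32).ravel()    # flattened edge map as a feature vector

# X = np.stack([edge_features(img) for img in xray_images]); y = labels
# clf = RandomForestClassifier(n_estimators=300, random_state=0)
# print(cross_val_score(clf, X, y, cv=5).mean())
```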
25. Chang CC, Li YZ, Wu HC, Tseng MH. Melanoma Detection Using XGB Classifier Combined with Feature Extraction and K-Means SMOTE Techniques. Diagnostics (Basel) 2022;12:1747. [PMID: 35885650] [PMCID: PMC9320570] [DOI: 10.3390/diagnostics12071747]
Abstract
Melanoma, a very severe form of skin cancer, spreads quickly and has a high mortality rate if not treated early. Recently, machine learning, deep learning, and related technologies have been successfully applied to computer-aided diagnostic tasks for skin lesions; however, some issues in image feature extraction and imbalanced data still need to be addressed. Building on a method in which dermatologists manually annotate image features, we developed a melanoma detection model with four improvement strategies: applying transfer learning to automatically extract image features, adding gender and age metadata, using an oversampling technique for imbalanced data, and comparing machine learning algorithms. According to the experimental results, the proposed improvement strategies yield statistically significant performance gains. In particular, our proposed ensemble model outperforms previous related models.
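A minimal sketch of the oversampling and classification strategy described above, combining K-Means SMOTE with an XGBoost classifier on fused image features and metadata; all arrays and parameter values are placeholders, not the authors' settings:

```python
# Editor's illustrative sketch, not the authors' code.
import numpy as np
from imblearn.over_sampling import KMeansSMOTE
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
deep_feats = rng.normal(size=(1000, 512))      # e.g. transfer-learning image features
meta = rng.normal(size=(1000, 2))              # e.g. encoded age and sex metadata
X = np.hstack([deep_feats, meta])
y = (rng.random(1000) < 0.15).astype(int)      # imbalanced labels (melanoma = 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class with K-Means SMOTE on the training split only.
X_res, y_res = KMeansSMOTE(cluster_balance_threshold=0.1,
                           random_state=0).fit_resample(X_tr, y_tr)

clf = XGBClassifier(n_estimators=400, learning_rate=0.05, eval_metric="logloss")
clf.fit(X_res, y_res)
print("test accuracy:", clf.score(X_te, y_te))
```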
Affiliation(s)
- Chih-Chi Chang
- Department of Medical Informatics, Chung Shan Medical University, Taichung 402, Taiwan
- Yu-Zhen Li
- Department of Medical Informatics, Chung Shan Medical University, Taichung 402, Taiwan
- Hui-Ching Wu
- Department of Medical Sociology and Social Work, Chung Shan Medical University, Taichung 402, Taiwan
- Ming-Hseng Tseng
- Department of Medical Informatics, Chung Shan Medical University, Taichung 402, Taiwan
- Information Technology Office, Chung Shan Medical University Hospital, Taichung 402, Taiwan
26. Transfer learning for histopathology images: an empirical study. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07516-7]
27. Sohail A, Yu Z, Nutini A. COVID-19 Variants and Transfer Learning for the Emerging Stringency Indices. Neural Process Lett 2022;55:1-10. [PMID: 35573262] [PMCID: PMC9087157] [DOI: 10.1007/s11063-022-10834-5]
Abstract
The pandemics in the history of the World Health Organization have always left memorable hallmarks on the health care systems and the economies of highly affected areas. The ongoing pandemic is one of the most harmful and is threatening due to its transformation into more contagious variants. In this manuscript, we first outline the variants and then their impact on associated health issues. Deep learning algorithms are useful for developing models from high-dimensional problems/datasets, but these algorithms fail to provide insight into the training process and do not generalize across conditions. Transfer learning, a newer subfield of machine learning, has acquired fame due to its ability to exploit the information/learning gained from a previous process to improve generalization for the next; in short, transfer learning is the optimization of stored knowledge. With the aid of transfer learning, we show that the stringency index and cardiovascular death rates were the most important and appropriate predictors for developing a model to forecast COVID-19 death rates.
Affiliation(s)
- Ayesha Sohail
- Department of Mathematics, Comsats University Islamabad, Lahore Campus, Lahore, Pakistan
- Zhenhua Yu
- Institute of Systems Security and Control, College of Computer Science and Technology, Xi’an University of Science and Technology, Xi’an, 710054 China
- Alessandro Nutini
- Centro Studi Attività Motorie - Biology and Biomechanics Department, Via di Tiglio 94, loc. Arancio, 55100 Lucca, Italy
28. TweezBot: An AI-Driven Online Media Bot Identification Algorithm for Twitter Social Networks. Electronics 2022. [DOI: 10.3390/electronics11050743]
Abstract
In the ultra-connected age of information, online social media platforms have become an indispensable part of our daily routines. Recently, this online public space has become largely occupied by suspicious and manipulative social media bots. Such automated deceptive bots often attempt to distort ground realities and manipulate global trends, thus creating astroturfing attacks on social media portals. Moreover, these bots often participate in duplicitous activities, including the promotion of hidden agendas and indulgence in biased propagation for personal gain or scams. Online bots have thus become one of the biggest menaces for social media platforms. We therefore propose an AI-driven social media bot identification framework, TweezBot, which can identify fraudulent Twitter bots. The proposed bot detection method analyzes Twitter-specific user profiles with essential profile-centric features and several activity-centric characteristics. We have constructed a set of filtering criteria and devised an exhaustive bag of words for language-based processing. To substantiate our research, we performed a comparative study of our model against existing benchmark classifiers, such as Support Vector Machine, Categorical Naïve Bayes, Bernoulli Naïve Bayes, Multilayer Perceptron, Decision Trees, Random Forest and other automation identifiers.
29. Prediction Model of Tunnel Boring Machine Disc Cutter Replacement Using Kernel Support Vector Machine. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12052267]
Abstract
During tunneling processes, disc cutters of a tunnel boring machine (TBM) usually need to be frequently and unexpectedly replaced. Regular inspections are needed to check disc cutters’ status, which significantly reduces the work efficiency and increases the cost. This paper proposes a new prediction model based on TBM operational parameters and geological conditions that determines whether disc cutter replacement is needed. Firstly, an evaluation criterion for whether the cutters need to be replaced is constructed. Secondly, specific parameters related to the evaluation criterion are analyzed and 18 features are established on tunneling monitoring information. Then, the mapping model between the cutter replacement judgement and the established features is built based on a kernel support vector machine (KSVM). Finally, the data obtained from a Jilin water transport tunnel project is utilized to verify the performance of the proposed model. Test results show that the new model can obtain an average accuracy of 90.0% and an average F1 score of 86.2% on field data prediction based on data from past tunneling days. Therefore, the proposed data-predictive model can be used in tunneling to accurately predict whether disc cutters need to be replaced before human judgment, and thereby greatly improve tunneling safety and efficiency.
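A minimal sketch of a kernel SVM mapping tunnelling features to a binary cutter-replacement decision, in the spirit of the model above; the feature set, parameter grid, and F1-based tuning are assumptions for illustration, not the authors' exact configuration:

```python
# Editor's illustrative sketch, not the authors' code.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(800, 18))                 # 18 tunnelling/monitoring features
y = (rng.random(800) < 0.3).astype(int)        # 1 = disc cutter replacement needed

pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
param_grid = {"svc__C": [1, 10, 100], "svc__gamma": ["scale", 0.01, 0.1]}
search = GridSearchCV(pipe, param_grid, scoring="f1",
                      cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
search.fit(X, y)
print("best F1:", round(search.best_score_, 3), "params:", search.best_params_)
```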
30. Bag of Features (BoF) Based Deep Learning Framework for Bleached Corals Detection. Big Data and Cognitive Computing 2021. [DOI: 10.3390/bdcc5040053]
Abstract
Coral reefs are sub-aqueous calcium carbonate structures built by the invertebrates known as corals. The charm and beauty of coral reefs attract tourists, and reefs play a vital role in preserving biodiversity, reducing coastal erosion, and promoting trade. However, they are declining because of over-exploitation, damaging fishing practices, marine pollution, and global climate change. Coral reefs also contribute to treatments for human immunodeficiency virus (HIV) and heart disease. The corals of Australia's Great Barrier Reef have started bleaching due to ocean acidification and global warming, which is an alarming threat to the Earth's ecosystem. Many techniques have been developed to address such issues, but each has limitations due to the low resolution of images, diverse weather conditions, and similar factors. In this paper, we propose a bag of features (BoF) based approach that can detect and localize bleached corals before protective measures are applied. The dataset contains images of bleached and unbleached corals, and various kernels are used with a support vector machine to classify the extracted features. The accuracy of handcrafted descriptors and deep convolutional neural networks is analyzed and reported in detail in comparison with current methods. Handcrafted descriptors, including the local binary pattern, the histogram of oriented gradients, the locally encoded transform feature histogram, the gray-level co-occurrence matrix, and the completed joint-scale local binary pattern, are used for feature extraction, and deep convolutional neural networks such as AlexNet, GoogLeNet, VGG-19, ResNet-50, Inception v3, and CoralNet are also used for feature extraction. The experimental analysis shows that the proposed technique outperforms the current state-of-the-art methods, achieving 99.08% accuracy with a classification error of 0.92%. A novel bleached-coral positioning algorithm is also proposed to locate bleached corals in coral reef images.
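A minimal sketch of a bag-of-features pipeline in the spirit of the approach above, building a k-means visual vocabulary from raw grey-level patches and classifying the resulting word histograms with an SVM; the patch descriptor, vocabulary size, and kernel are assumptions rather than the descriptors listed in the abstract:

```python
# Editor's illustrative sketch, not the authors' code.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.svm import SVC

PATCH, WORDS = 8, 128

def patches(gray_img, n=200, seed=0):
    # Random 8x8 grey-level patches flattened into descriptors.
    p = extract_patches_2d(gray_img, (PATCH, PATCH), max_patches=n, random_state=seed)
    return p.reshape(len(p), -1).astype(np.float32)

def fit_vocabulary(train_imgs):
    desc = np.vstack([patches(im) for im in train_imgs])
    return MiniBatchKMeans(n_clusters=WORDS, random_state=0).fit(desc)

def bof_histogram(gray_img, vocab):
    words = vocab.predict(patches(gray_img))
    hist, _ = np.histogram(words, bins=np.arange(WORDS + 1))
    return hist / max(hist.sum(), 1)           # normalised visual-word histogram

# vocab = fit_vocabulary(train_images)
# X = np.stack([bof_histogram(im, vocab) for im in train_images]); y = train_labels
# clf = SVC(kernel="rbf", C=10).fit(X, y)
```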