1. Mahmood T, Saba T, Al-Otaibi S, Ayesha N, Almasoud AS. AI-Driven Microscopy: Cutting-Edge Approach for Breast Tissue Prognosis Using Microscopic Images. Microsc Res Tech 2025; 88:1335-1359. [PMID: 39748498] [DOI: 10.1002/jemt.24788]
Abstract
Microscopic imaging aids disease diagnosis by quantitatively describing cell morphology and tissue size. However, the high spatial resolution of these images poses significant challenges for manual quantitative evaluation. This project proposes computer-aided analysis methods to address these challenges, enabling rapid and precise clinical diagnosis, disease-course analysis, and prognostic prediction. The research introduces advanced deep learning components such as squeeze-and-excitation and dilated dense convolution blocks to tackle the complexity of quantifying small, intricate breast cancer tissues and to meet the real-time requirements of pathological image analysis. The proposed framework integrates a dense convolutional network (DenseNet) with an attention mechanism, enhancing the capability for rapid and accurate clinical assessment. These multi-classification models facilitate precise prediction and segmentation of breast lesions in microscopic images by leveraging lightweight multi-scale feature extraction, dynamic region attention, sub-region classification, and regional regularization loss functions. Transfer learning and data augmentation are employed to further improve learning and prevent overfitting. We propose fine-tuning pre-trained architectures such as VGGNet-19, ResNet152V2, EfficientNetV2-B1, and DenseNet-121, replacing the final pooling layer in each model's last block with a spatial pyramid pooling (SPP) layer and an associated batch normalization (BN) layer. The study uses both labeled and unlabeled data for tissue microscopic image analysis, strengthening the models' feature robustness and classification ability. This approach reduces the cost and time of traditional methods, alleviating the burden of data labeling in computational pathology. The goal is a sophisticated, efficient quantitative pathological image analysis solution that improves clinical outcomes and advances the computational field.
The model, trained, validated, and tested on a microscopic breast image dataset, achieved recognition accuracy of 99.6% for binary benign/malignant classification and 99.4% for classification of eight breast subtypes. The proposed approach substantially improves on existing methods, which generally report accuracies between 85% and 94% for breast subtype classification. This high accuracy underscores the potential of the approach to provide reliable diagnostic support and enhance precision in clinical decision-making.
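The SPP head described above replaces a single fixed pooling with max pooling over grids at several resolutions, so any input size yields a fixed-length vector. A minimal pure-Python sketch of generic spatial pyramid max pooling for one channel (an illustration of the technique, not the authors' implementation):

```python
def spp_max_pool(feature_map, levels=(1, 2, 4)):
    """Spatial pyramid pooling over a 2D feature map (list of lists).

    For each pyramid level n, the map is split into an n x n grid and the
    maximum of each cell is taken; all maxima are concatenated, giving a
    fixed-length vector regardless of the input's height and width.
    """
    h, w = len(feature_map), len(feature_map[0])
    pooled = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                # Cell boundaries (floor start, ceil end, so cells cover the map).
                r0, r1 = (i * h) // n, ((i + 1) * h + n - 1) // n
                c0, c1 = (j * w) // n, ((j + 1) * w + n - 1) // n
                pooled.append(max(feature_map[r][c]
                                  for r in range(r0, r1)
                                  for c in range(c0, c1)))
    return pooled

fm = [[r * 4 + c + 1 for c in range(4)] for r in range(4)]  # 4x4 toy map
vec = spp_max_pool(fm)
print(len(vec))  # 21 bins per channel: 1 + 4 + 16
```

With levels (1, 2, 4) every input produces 21 values per channel, which is what lets a fixed-size classifier head sit on top of variable-size microscopy crops.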
Affiliation(s)
- Tariq Mahmood
  - Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
  - Faculty of Information Sciences, University of Education, Vehari Campus, Vehari, Pakistan
- Tanzila Saba
  - Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
- Shaha Al-Otaibi
  - Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Noor Ayesha
  - Center of Excellence in CyberSecurity, Prince Sultan University, Riyadh, Saudi Arabia
- Ahmed S Almasoud
  - Artificial Intelligence and Data Analytics (AIDA) Lab, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
2. Li J. Fusion feature-based hybrid methods for diagnosing oral squamous cell carcinoma in histopathological images. Front Oncol 2025; 15:1551876. [PMID: 40265007] [PMCID: PMC12011784] [DOI: 10.3389/fonc.2025.1551876]
Abstract
Objective: This experimental study assesses the effectiveness of the Cross-Attention Vision Transformer (CrossViT) in the early detection of oral squamous cell carcinoma (OSCC) and proposes a hybrid model that combines CrossViT features with manually extracted features to improve the accuracy and robustness of OSCC diagnosis.
Methods: We employed the CrossViT architecture, which uses a dual attention mechanism to process multi-scale features, in combination with convolutional neural network (CNN) technology for effective analysis of image patches. In parallel, features were manually extracted by experts from OSCC pathological images and fused with the CrossViT features to enhance diagnostic performance. Classification was performed with an artificial neural network (ANN) to further improve diagnostic accuracy. Model performance was evaluated by classification accuracy on two independent OSCC datasets.
Results: The proposed hybrid feature model performed excellently in pathological diagnosis, achieving accuracies of 99.36% and 99.59% on the two datasets. Compared to CNN and Vision Transformer (ViT) models, the hybrid model was more effective at distinguishing malignant from benign lesions, significantly improving diagnostic accuracy.
Conclusion: Combining CrossViT with expert features significantly enhanced diagnostic accuracy for OSCC, validating the potential of hybrid artificial intelligence models in clinical pathology. Future research will expand the dataset and explore the model's interpretability to facilitate practical application in clinical settings.
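The fusion step in the Methods can be illustrated schematically. A minimal sketch, assuming the common recipe of standardizing each feature stream before serial concatenation so neither stream's scale dominates the downstream classifier (the paper's exact fusion details may differ):

```python
from statistics import mean, stdev

def zscore(features):
    """Standardize one feature stream to zero mean, unit variance."""
    m, s = mean(features), stdev(features)
    return [(x - m) / s for x in features]

def fuse(deep_features, handcrafted_features):
    """Normalize each stream separately, then concatenate (serial fusion).

    Normalizing first keeps a large-valued stream (e.g. raw morphometric
    measurements) from swamping small-valued deep activations.
    """
    return zscore(deep_features) + zscore(handcrafted_features)

# Toy vectors: 4 hypothetical deep features + 4 hypothetical expert features.
fused = fuse([0.1, 0.9, 0.4, 0.6], [120.0, 80.0, 100.0, 95.0])
print(len(fused))  # 8 = 4 deep + 4 handcrafted
```

The fused vector would then feed the ANN classifier the abstract describes.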
Affiliation(s)
- Jiaxing Li
  - Baoan Central Hospital of Shenzhen, Shenzhen, Guangdong, China
3. Aftab J, Khan MA, Arshad S, Rehman SU, AlHammadi DA, Nam Y. Artificial intelligence based classification and prediction of medical imaging using a novel framework of inverted and self-attention deep neural network architecture. Sci Rep 2025; 15:8724. [PMID: 40082642] [PMCID: PMC11906919] [DOI: 10.1038/s41598-025-93718-7]
Abstract
Classifying medical images is essential in computer-aided diagnosis (CAD). Although the recent success of deep learning in classification tasks has proven advantages over traditional feature extraction techniques, the task remains challenging due to the inter- and intra-class similarity caused by the diversity of imaging modalities (i.e., dermoscopy, mammography, wireless capsule endoscopy, and CT). In this work, we propose a novel deep learning framework for classifying several medical imaging modalities. In the training phase, data augmentation is first performed on all selected datasets. Two novel custom deep learning architectures are then introduced, the Inverted Residual Convolutional Neural Network (IRCNN) and the Self Attention CNN (SACNN), both trained on the augmented datasets with manual hyperparameter selection. Each dataset's testing images are used to extract features during the testing stage, and the extracted features are fused using a modified serial fusion with a strong correlation approach. A salp-swarm-controlled standard error mean (SScSEM) optimization algorithm selects the best features, which are passed to a shallow wide neural network (SWNN) classifier for final classification. Grad-CAM, an explainable artificial intelligence (XAI) approach, is used to analyze the custom models. The proposed architecture was tested on five publicly available datasets of different imaging modalities and obtained improved accuracies of 98.6% (INBreast), 95.3% (KVASIR), 94.3% (ISIC2018), 95.0% (Lung Cancer), and 98.8% (Oral Cancer). A detailed comparison based on precision and accuracy shows that the proposed architecture performs better. The implemented models are available on GitHub ( https://github.com/ComputerVisionLabPMU/ScientificImagingPaper.git ).
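The "strong correlation" step in the fusion pipeline is not fully specified in the abstract; one plausible, simplified reading is correlation-based feature screening before classification. A hypothetical sketch of that idea (not the paper's SScSEM procedure):

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def select_correlated(feature_columns, labels, threshold=0.5):
    """Keep indices of feature columns strongly correlated with the label."""
    return [i for i, col in enumerate(feature_columns)
            if abs(pearson(col, labels)) >= threshold]

# Toy data: three feature columns over five samples, binary labels.
cols = [[1, 2, 3, 4, 5],   # rises with the label -> kept
        [5, 1, 4, 2, 3],   # noise -> dropped
        [2, 2, 1, 0, 0]]   # falls with the label -> kept
labels = [0, 0, 1, 1, 1]
print(select_correlated(cols, labels))  # [0, 2]
```

In the paper's pipeline the surviving fused features would then go to the SWNN classifier.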
Affiliation(s)
- Junaid Aftab
  - Department of Computer Engineering, HITEC University, Taxila, 47080, Pakistan
- Muhammad Attique Khan
  - Department of Artificial Intelligence, College of Computer Engineering and Science, Prince Mohammad bin Fahd University, Al Khobar, Saudi Arabia
- Sobia Arshad
  - Department of Computer Engineering, HITEC University, Taxila, 47080, Pakistan
- Shams Ur Rehman
  - Department of Computer Engineering, HITEC University, Taxila, 47080, Pakistan
- Dina Abdulaziz AlHammadi
  - Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
- Yunyoung Nam
  - Department of ICT Convergence, Soonchunhyang University, Asan, South Korea
4. Vinay V, Jodalli P, Chavan MS, Buddhikot CS, Luke AM, Ingafou MSH, Reda R, Pawar AM, Testarelli L. Artificial Intelligence in Oral Cancer: A Comprehensive Scoping Review of Diagnostic and Prognostic Applications. Diagnostics (Basel) 2025; 15:280. [PMID: 39941210] [PMCID: PMC11816433] [DOI: 10.3390/diagnostics15030280]
Abstract
Background/Objectives: Oral cancer, the sixth most common cancer worldwide, is linked to smoking, alcohol, and HPV. This scoping review summarizes AI applications in early oral cancer diagnosis to address a gap in the literature.
Methods: A scoping review identified, selected, and synthesized the literature on AI-based oral cancer diagnosis, screening, and prognosis. Study quality and relevance were verified using established frameworks and inclusion criteria. The search covered keywords, MeSH terms, and PubMed. AI applications in oral cancer were examined through data extraction and synthesis.
Results: AI outperforms traditional approaches to oral cancer screening, analysis, and prediction. Convolutional neural networks can diagnose oral cancer from medical images, and smartphone- and AI-enabled telemedicine makes screening affordable and accessible in resource-constrained areas. AI methods predict oral cancer risk from patient data and can support treatment planning using histopathology images, though data heterogeneity, limited longitudinal research, inclusion in clinical practice, and ethical and legal difficulties must be addressed. Future potential includes uniform standards, long-term investigations, ethical and regulatory frameworks, and healthcare professional training.
Conclusions: AI may transform oral cancer diagnosis and treatment through early detection, risk modelling, imaging of phenotypic change, and prognosis. AI approaches should be standardized and tested longitudinally, and the ethical and practical issues of real-world deployment should be addressed.
Affiliation(s)
- Vineet Vinay
  - Department of Public Health Dentistry, Manipal College of Dental Sciences Mangalore, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
  - Department of Public Health Dentistry, Sinhgad Dental College & Hospital, Pune 411041, Maharashtra, India
- Praveen Jodalli
  - Department of Public Health Dentistry, Manipal College of Dental Sciences Mangalore, Manipal Academy of Higher Education, Manipal 576104, Karnataka, India
- Mahesh S. Chavan
  - Department of Oral Medicine and Radiology, Sinhgad Dental College & Hospital, Pune 411041, Maharashtra, India
- Chaitanya S. Buddhikot
  - Department of Public Health Dentistry, Dr. D. Y. Patil Dental College and Hospital Pune, Dr. D. Y. Patil Vidyapeeth Pimpri Pune, Pune 411018, Maharashtra, India
- Alexander Maniangat Luke
  - Department of Clinical Science, College of Dentistry, Ajman University, Al-Jruf, Ajman P.O. Box 346, United Arab Emirates
  - Centre of Medical and Bio-Allied Health Science Research, Ajman University, Al-Jruf, Ajman P.O. Box 346, United Arab Emirates
- Mohamed Saleh Hamad Ingafou
  - Department of Clinical Science, College of Dentistry, Ajman University, Al-Jruf, Ajman P.O. Box 346, United Arab Emirates
  - Centre of Medical and Bio-Allied Health Science Research, Ajman University, Al-Jruf, Ajman P.O. Box 346, United Arab Emirates
- Rodolfo Reda
  - Department of Oral and Maxillo-Facial Sciences, Sapienza University of Rome, Via Caserta 06, 00161 Rome, Italy
- Ajinkya M. Pawar
  - Department of Conservative Dentistry and Endodontics, Nair Hospital Dental College, Mumbai 400034, Maharashtra, India
- Luca Testarelli
  - Department of Oral and Maxillo-Facial Sciences, Sapienza University of Rome, Via Caserta 06, 00161 Rome, Italy
5. Kulkarni P, Sarwe N, Pingale A, Sarolkar Y, Patil RR, Shinde G, Kaur G. Exploring the efficacy of various CNN architectures in diagnosing oral cancer from squamous cell carcinoma. MethodsX 2024; 13:103034. [PMID: 39610794] [PMCID: PMC11603122] [DOI: 10.1016/j.mex.2024.103034]
Abstract
Oral cancer can result from mutations in cells of the lips or mouth. Diagnosing oral cavity squamous cell carcinoma (OCSCC) is particularly challenging, as it is often detected only at advanced stages. To address this, computer-aided diagnosis methods are increasingly being used. This work presents a deep learning-based approach using models such as VGG16, ResNet50, LeNet-5, MobileNetV2, and Inception V3. The NEOR and OCSCC datasets were used for feature extraction, with virtual slide images divided into tiles and classified as normal or squamous cell carcinoma. Performance metrics including accuracy, F1-score, AUC, precision, and recall were analyzed to determine the prerequisites for optimal CNN performance. The proposed CNN approaches were effective for classifying OCSCC and oral dysplasia, with the highest accuracy of 95.41% achieved using MobileNetV2.
Key findings: Deep learning models, particularly MobileNetV2, achieved high classification accuracy (95.41%) for OCSCC. CNN-based methods show promise for early-stage OCSCC and oral dysplasia diagnosis. Performance parameters such as precision, recall, and F1-score help optimize CNN model selection for this task.
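The metrics compared across these architectures all derive from the binary (normal vs. carcinoma) confusion matrix in the standard way. A minimal sketch with illustrative tile counts (hypothetical numbers, not the paper's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from a binary confusion matrix."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": accuracy}

# Toy tile counts for one model on a 200-tile test split.
m = classification_metrics(tp=90, fp=10, fn=5, tn=95)
print(m["accuracy"])   # 0.925
```

Comparing models on precision and recall together, rather than accuracy alone, matters when the normal/carcinoma tile classes are imbalanced.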
Affiliation(s)
- Prerna Kulkarni
  - Department of CSE (AIML), Vishwakarma Institute of Information Technology, Kondhwa (Budruk), Pune, Maharashtra 411048, India
- Nidhi Sarwe
  - Department of CSE (AIML), Vishwakarma Institute of Information Technology, Kondhwa (Budruk), Pune, Maharashtra 411048, India
- Abhishek Pingale
  - Department of CSE (AIML), Vishwakarma Institute of Information Technology, Kondhwa (Budruk), Pune, Maharashtra 411048, India
- Yash Sarolkar
  - Department of CSE (AIML), Vishwakarma Institute of Information Technology, Kondhwa (Budruk), Pune, Maharashtra 411048, India
- Rutuja Rajendra Patil
  - Department of CSE (AIML), Vishwakarma Institute of Information Technology, Kondhwa (Budruk), Pune, Maharashtra 411048, India
- Gitanjali Shinde
  - Department of CSE (AIML), Vishwakarma Institute of Information Technology, Kondhwa (Budruk), Pune, Maharashtra 411048, India
- Gagandeep Kaur
  - CSE Department, Symbiosis Institute of Technology, Nagpur Campus, Symbiosis International (Deemed University), Pune, India
6. Sahoo RK, Sahoo KC, Dash GC, Kumar G, Baliarsingh SK, Panda B, Pati S. Diagnostic performance of artificial intelligence in detecting oral potentially malignant disorders and oral cancer using medical diagnostic imaging: a systematic review and meta-analysis. Front Oral Health 2024; 5:1494867. [PMID: 39568787] [PMCID: PMC11576460] [DOI: 10.3389/froh.2024.1494867]
Abstract
Objective: Oral cancer is a widespread global health problem with high mortality rates, for which early detection is critical to better survival outcomes and quality of life. While visual examination is the primary method for detecting oral cancer, it may not be practical in remote areas. AI algorithms have shown promise in detecting cancer from medical images, but their effectiveness in oral cancer detection remains underexplored. This systematic review provides an extensive assessment of the existing evidence on the diagnostic accuracy of AI-driven approaches for detecting oral potentially malignant disorders (OPMDs) and oral cancer using medical diagnostic imaging.
Methods: Adhering to PRISMA guidelines, the review examined literature from the PubMed, Scopus, and IEEE databases, focusing on the performance of AI architectures across diverse imaging modalities for detecting these conditions.
Results: Model performance, measured by sensitivity and specificity, was assessed using a hierarchical summary receiver operating characteristic (SROC) curve, with heterogeneity quantified by the I² statistic; a random-effects model accounted for inter-study variability. Of 296 screened articles, 55 studies were included for qualitative synthesis and 18 for meta-analysis. AI-based methods showed a high pooled sensitivity of 0.87 and specificity of 0.81. The diagnostic odds ratio (DOR) of 131.63 indicates a high likelihood of accurate diagnosis of oral cancer and OPMDs, and the SROC AUC of 0.9758 indicates exceptional diagnostic performance. Deep learning (DL) architectures, especially convolutional neural networks (CNNs), performed best at detecting OPMDs and oral cancer, with histopathological images exhibiting the greatest sensitivity and specificity.
Conclusion: These findings suggest that AI algorithms have the potential to function as reliable tools for the early diagnosis of OPMDs and oral cancer, offering significant advantages, particularly in resource-constrained settings.
Systematic Review Registration: https://www.crd.york.ac.uk/, PROSPERO (CRD42023476706).
Affiliation(s)
- Rakesh Kumar Sahoo
  - School of Public Health, Kalinga Institute of Industrial Technology (KIIT) Deemed to be University, Bhubaneswar, India
  - Health Technology Assessment in India (HTAIn), ICMR-Regional Medical Research Centre, Bhubaneswar, India
- Krushna Chandra Sahoo
  - Health Technology Assessment in India (HTAIn), Department of Health Research, Ministry of Health & Family Welfare, Govt. of India, New Delhi, India
- Gunjan Kumar
  - Kalinga Institute of Dental Sciences, KIIT Deemed to be University, Bhubaneswar, India
- Bhuputra Panda
  - School of Public Health, Kalinga Institute of Industrial Technology (KIIT) Deemed to be University, Bhubaneswar, India
- Sanghamitra Pati
  - Health Technology Assessment in India (HTAIn), ICMR-Regional Medical Research Centre, Bhubaneswar, India
7. Ragab M, Asar TO. Deep transfer learning with improved crayfish optimization algorithm for oral squamous cell carcinoma cancer recognition using histopathological images. Sci Rep 2024; 14:25348. [PMID: 39455617] [PMCID: PMC11512072] [DOI: 10.1038/s41598-024-75330-3]
Abstract
Oral squamous cell carcinoma (OSCC) poses a severe challenge in oncology due to the lack of diagnostic devices, leading to delays in detecting the disorder. OSCC diagnosis through histopathology demands an expert pathologist because the cellular presentation is highly variable and complex. Existing diagnostic approaches for OSCC have specific efficiency and accuracy restrictions, highlighting the need for more reliable techniques. The rise of deep neural network (DNN) models and their applications in medical imaging has been instrumental in disease diagnosis and detection. Automatic detection systems using deep learning (DL) approaches show tremendous promise for investigating medical imagery with speed, efficiency, and accuracy. For OSCC, such systems allow the diagnostic process to be streamlined, facilitating earlier diagnosis and enhancing survival rates. Automatic analysis of histopathological images (HIs) can assist in accurately detecting and identifying tumorous tissue, reducing diagnostic turnaround times and increasing the efficacy of pathologists. This study presents Squeeze-Excitation with Hybrid Deep Learning for Oral Squamous Cell Carcinoma Recognition (SEHDL-OSCCR) on HIs. The SEHDL-OSCCR technique focuses on detecting oral cancer (OC) using hybrid DL models. Bilateral filtering (BF) is first used to remove noise. Next, the technique employs the SE-CapsNet model for feature extraction, with an improved crayfish optimization algorithm (ICOA) used to improve SE-CapsNet's performance. Finally, classification is performed with a convolutional neural network combined with a bidirectional long short-term memory (CNN-BiLSTM) model. Simulation results obtained with the SEHDL-OSCCR technique were investigated on a benchmark medical image dataset, and experimental validation showed a superior accuracy of 98.75% compared to recent approaches.
Affiliation(s)
- Mahmoud Ragab
  - Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
- Turky Omar Asar
  - Department of Biology, College of Science and Arts at Alkamil, University of Jeddah, Jeddah, Saudi Arabia
8. Kapoor DU, Saini PK, Sharma N, Singh A, Prajapati BG, Elossaily GM, Rashid S. AI illuminates paths in oral cancer: transformative insights, diagnostic precision, and personalized strategies. EXCLI J 2024; 23:1091-1116. [PMID: 39391057] [PMCID: PMC11464865] [DOI: 10.17179/excli2024-7253]
Abstract
Oral cancer has one of the lowest survival rates worldwide despite recent therapeutic advancements, remaining a tenacious challenge in healthcare. Artificial intelligence exhibits noteworthy potential for improving diagnostic and treatment procedures. This review covers the traditional imaging techniques used in oral cancer care and discusses the role of artificial intelligence in oral cancer prognosis, including predictive modeling, identification of prognostic factors, and risk stratification. It also encompasses the use of artificial intelligence for oral cancer diagnosis and treatment, such as automated image analysis, computer-aided detection and diagnosis, and the integration of machine learning algorithms, as well as customized treatment approaches through AI-based personalized medicine. See also the graphical abstract (Fig. 1).
Affiliation(s)
- Devesh U. Kapoor
  - Dr. Dayaram Patel Pharmacy College, Bardoli-394601, Gujarat, India
- Pushpendra Kumar Saini
  - Department of Pharmaceutics, Sri Balaji College of Pharmacy, Jaipur, Rajasthan-302013, India
- Narendra Sharma
  - Department of Pharmaceutics, Sri Balaji College of Pharmacy, Jaipur, Rajasthan-302013, India
- Ankul Singh
  - Faculty of Pharmacy, Department of Pharmacology, Dr MGR Educational and Research Institute, Velapanchavadi, Chennai-77, Tamil Nadu, India
- Bhupendra G. Prajapati
  - Shree S. K. Patel College of Pharmaceutical Education and Research, Ganpat University, Kherva-384012, Gujarat, India
  - Faculty of Pharmacy, Silpakorn University, Nakhon Pathom 73000, Thailand
- Gehan M. Elossaily
  - Department of Basic Medical Sciences, College of Medicine, AlMaarefa University, P.O. Box 71666, Riyadh, 11597, Saudi Arabia
- Summya Rashid
  - Department of Pharmacology & Toxicology, College of Pharmacy, Prince Sattam Bin Abdulaziz University, P.O. Box 173, Al-Kharj 11942, Saudi Arabia
9. Silva AB, Martins AS, Tosta TAA, Loyola AM, Cardoso SV, Neves LA, de Faria PR, do Nascimento MZ. OralEpitheliumDB: A Dataset for Oral Epithelial Dysplasia Image Segmentation and Classification. J Imaging Inform Med 2024; 37:1691-1710. [PMID: 38409608] [PMCID: PMC11589032] [DOI: 10.1007/s10278-024-01041-w]
Abstract
Early diagnosis of potentially malignant disorders, such as oral epithelial dysplasia, is the most reliable way to prevent oral cancer. Computational algorithms have been used as auxiliary tools to aid specialists in this process. Usually, experiments are performed on private data, making the results difficult to reproduce. Several public datasets of histological images exist, but studies focused on oral dysplasia images use inaccessible datasets, preventing improvement of algorithms aimed at this lesion. This study introduces an annotated public dataset of oral epithelial dysplasia tissue images. The dataset includes 456 images acquired from 30 mouse tongues. The images were categorized by lesion grade, with nuclear structures manually marked by a trained specialist and validated by a pathologist. Experiments were also carried out to illustrate the potential of the proposed dataset in the classification and segmentation processes commonly explored in the literature. Convolutional neural network (CNN) models for semantic and instance segmentation were applied to the images, which were pre-processed with stain normalization methods. The segmented and non-segmented images were then classified with CNN architectures and machine learning algorithms, and the data obtained through these processes are available in the dataset. The segmentation stage achieved an F1-score of 0.83, obtained with the U-Net model using ResNet-50 as a backbone. At the classification stage, the most expressive result was achieved with the Random Forest method, with an accuracy of 94.22%. The results show that segmentation contributed to the classification results, but further studies are needed to improve these stages of automated diagnosis. The original, gold standard, normalized, and segmented images are publicly available and may be used to improve clinical applications of CAD methods on oral epithelial dysplasia tissue images.
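For binary segmentation masks, the F1-score reported above coincides with the Dice coefficient over foreground pixels. A minimal sketch of the metric (toy masks for illustration, not the dataset's evaluation code):

```python
def dice_f1(pred_mask, true_mask):
    """F1/Dice between two binary masks given as flat 0/1 lists.

    Dice = 2*|P & T| / (|P| + |T|), which equals the F1-score when the
    positive class is the set of foreground (nucleus) pixels.
    """
    intersection = sum(p * t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    return 2 * intersection / total if total else 1.0

pred = [1, 1, 0, 0, 1, 0]  # predicted nucleus pixels
true = [1, 0, 0, 1, 1, 0]  # gold-standard annotation
print(dice_f1(pred, true))  # 2*2 / (3+3) = 0.666...
```

In practice the masks would be flattened image arrays, and the score would be averaged over the test images.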
Affiliation(s)
- Adriano Barbosa Silva
  - Faculty of Computer Science (FACOM), Federal University of Uberlândia (UFU), Av. João Naves de Ávila 2121, BLB, 38400-902, Uberlândia, MG, Brazil
- Alessandro Santana Martins
  - Federal Institute of Triângulo Mineiro (IFTM), R. Belarmino Vilela Junqueira, S/N, 38305-200, Ituiutaba, MG, Brazil
- Thaína Aparecida Azevedo Tosta
  - Science and Technology Institute, Federal University of São Paulo (UNIFESP), Av. Cesare Mansueto Giulio Lattes, 1201, 12247-014, São José dos Campos, SP, Brazil
- Adriano Mota Loyola
  - School of Dentistry, Federal University of Uberlândia (UFU), Av. Pará 1720, 38405-320, Uberlândia, MG, Brazil
- Sérgio Vitorino Cardoso
  - School of Dentistry, Federal University of Uberlândia (UFU), Av. Pará 1720, 38405-320, Uberlândia, MG, Brazil
- Leandro Alves Neves
  - Department of Computer Science and Statistics (DCCE), São Paulo State University (UNESP), R. Cristóvão Colombo, 2265, 38305-200, São José do Rio Preto, SP, Brazil
- Paulo Rogério de Faria
  - Department of Histology and Morphology, Institute of Biomedical Science, Federal University of Uberlândia (UFU), Av. Amazonas, S/N, 38405-320, Uberlândia, MG, Brazil
- Marcelo Zanchetta do Nascimento
  - Faculty of Computer Science (FACOM), Federal University of Uberlândia (UFU), Av. João Naves de Ávila 2121, BLB, 38400-902, Uberlândia, MG, Brazil
10. Malhotra M, Shaw AK, Priyadarshini SR, Metha SS, Sahoo PK, Gachake A. Diagnostic Accuracy of Artificial Intelligence Compared to Biopsy in Detecting Early Oral Squamous Cell Carcinoma: A Systematic Review and Meta Analysis. Asian Pac J Cancer Prev 2024; 25:2593-2603. [PMID: 39205556] [PMCID: PMC11495466] [DOI: 10.31557/apjcp.2024.25.8.2593]
Abstract
OBJECTIVE: To summarize and compare the existing evidence on the diagnostic accuracy of artificial intelligence (AI) models in detecting early oral squamous cell carcinoma (OSCC).
METHOD: The review was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis - Diagnostic Test Accuracy (PRISMA-DTA) checklist, and the protocol is registered under PROSPERO (CRD42023456355). PubMed, Google Scholar, and EBSCOhost were searched from January 2000 to November 2023 to identify the diagnostic potential of AI-based tools and models. True-positive, false-positive, true-negative, false-negative, sensitivity, and specificity values were extracted, or calculated when not reported, for each study. The quality of the selected studies was evaluated with the QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies) tool. Meta-analysis was performed in Meta-DiSc 1.4 and Review Manager (RevMan) 5.3 using a bivariate model for sensitivity and specificity, and summary points, the summary receiver operating characteristic (SROC) curve, the diagnostic odds ratio (DOR) confidence region, and the area under the curve (AUC) were calculated.
RESULTS: Fourteen studies were included for qualitative synthesis and meta-analysis. The included studies showed low to moderate risk of bias. A pooled sensitivity of 0.43 (CI 0.18-0.71) and specificity of 0.50 (CI 0.20-0.80) were observed, with a pooled positive likelihood ratio (PLR) of 0.86 (0.43-1.71), a negative likelihood ratio (NLR) of 1.04 (0.42-1.68), a DOR of 0.78 (0.12-5.18), and an overall accuracy (AUC) of 0.45.
CONCLUSION: AI-based tools showed poor to moderate overall diagnostic accuracy in this analysis. To validate these findings, further standardized diagnostic accuracy studies with proper QUADAS-2 reporting should be conducted before AI-based tools are relied upon for secondary prevention of early OSCC through early diagnosis and prompt treatment.
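The likelihood ratios and DOR reported above relate to sensitivity and specificity in a standard way; a quick illustrative check (note that the paper's pooled values come from a bivariate model, so direct recomputation from the pooled point estimates only approximates them):

```python
def diagnostic_ratios(sensitivity, specificity):
    """Positive/negative likelihood ratios and diagnostic odds ratio.

    PLR = sens / (1 - spec); NLR = (1 - sens) / spec; DOR = PLR / NLR.
    A DOR below 1 means the test discriminates worse than chance.
    """
    plr = sensitivity / (1 - specificity)
    nlr = (1 - sensitivity) / specificity
    return plr, nlr, plr / nlr

# Pooled point estimates reported in the review: sens 0.43, spec 0.50.
plr, nlr, dor = diagnostic_ratios(0.43, 0.50)
print(round(plr, 2), round(nlr, 2), round(dor, 2))
```

This reproduces the reported PLR of 0.86 exactly and lands close to the reported NLR (1.04) and DOR (0.78), with the small gaps attributable to the bivariate pooling.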
Affiliation(s)
- Mehak Malhotra: BDS, Bharati Vidyapeeth (Deemed to be University), Dental College and Hospital, Pune, Maharashtra, India.
- Amar Kumar Shaw: Assistant Professor, Department of Public Health Dentistry, Bharati Vidyapeeth (Deemed to be University), Dental College and Hospital, Pune, Maharashtra, India.
- Smita R Priyadarshini: BDS, Bharati Vidyapeeth (Deemed to be University), Dental College and Hospital, Pune, Maharashtra, India.
- Samruddhi Swapnil Metha: Professor, Department of Oral Medicine and Radiology, Institute of Dental Sciences, Siksha O Anusandhan University, Bhubaneswar, Odisha, India.
- Pradyumna Kumar Sahoo: Assistant Professor, Department of Oral Medicine and Radiology, Bharati Vidyapeeth (Deemed to be University), Dental College and Hospital, Sangli, Maharashtra, India.
- Arti Gachake: Professor, Department of Prosthodontics, Institute of Dental Sciences, Siksha O Anusandhan University, Bhubaneshwar, Odisha, India; Assistant Professor, Bharati Vidyapeeth (Deemed to be University), Dental College and Hospital, Pune, Maharashtra, India.

11
Alajaji SA, Khoury ZH, Jessri M, Sciubba JJ, Sultan AS. An Update on the Use of Artificial Intelligence in Digital Pathology for Oral Epithelial Dysplasia Research. Head Neck Pathol 2024; 18:38. [PMID: 38727841] [PMCID: PMC11087425] [DOI: 10.1007/s12105-024-01643-4] [Received: 02/08/2024] [Accepted: 03/30/2024] [Indexed: 05/13/2024]
Abstract
INTRODUCTION Oral epithelial dysplasia (OED) is a precancerous histopathological finding and the most important prognostic indicator for determining the risk of malignant transformation into oral squamous cell carcinoma (OSCC). The gold standard for the diagnosis and grading of OED is histopathological examination, which is subject to inter- and intra-observer variability, impacting accurate diagnosis and prognosis. The aim of this review article is to examine current advances in digital pathology for artificial intelligence (AI) applications used in OED diagnosis. MATERIALS AND METHODS We included studies that used AI for the diagnosis, grading, or prognosis of OED on histopathology images or intraoral clinical images. Studies utilizing imaging modalities other than routine light microscopy (e.g., scanning electron microscopy), immunohistochemistry-stained histology slides, or immunofluorescence were excluded. Studies not focusing on oral dysplasia grading and diagnosis (e.g., those aiming only to discriminate OSCC from normal epithelial tissue) were also excluded. RESULTS A total of 24 studies were included in this review. Nineteen studies utilized deep learning (DL) convolutional neural networks for histopathological OED analysis, and four used machine learning (ML) models. Studies were summarized by AI method, main study outcomes, predictive value for malignant transformation, strengths, and limitations. CONCLUSION ML/DL studies for OED grading and prediction of malignant transformation are emerging as promising adjunctive tools in the field of digital pathology. These objective adjunctive tools can ultimately aid the pathologist in more accurate diagnosis and prognosis prediction. However, further supportive studies that focus on generalization, explainable decisions, and prognosis prediction are needed.
Affiliation(s)
- Shahd A Alajaji: Department of Oncology and Diagnostic Sciences, University of Maryland School of Dentistry, 650 W. Baltimore Street, 7th Floor, Baltimore, MD, 21201, USA; Department of Oral Medicine and Diagnostic Sciences, College of Dentistry, King Saud University, Riyadh, Saudi Arabia; Division of Artificial Intelligence Research, University of Maryland School of Dentistry, Baltimore, MD, USA
- Zaid H Khoury: Department of Oral Diagnostic Sciences and Research, Meharry Medical College School of Dentistry, Nashville, TN, USA
- Maryam Jessri: Oral Medicine and Pathology Department, School of Dentistry, University of Queensland, Herston, QLD, Australia; Oral Medicine Department, Metro North Hospital and Health Services, Queensland Health, Brisbane, QLD, Australia
- James J Sciubba: Department of Otolaryngology, Head & Neck Surgery, The Johns Hopkins University, Baltimore, MD, USA
- Ahmed S Sultan: Department of Oncology and Diagnostic Sciences, University of Maryland School of Dentistry, 650 W. Baltimore Street, 7th Floor, Baltimore, MD, 21201, USA; Division of Artificial Intelligence Research, University of Maryland School of Dentistry, Baltimore, MD, USA; University of Maryland Marlene and Stewart Greenebaum Comprehensive Cancer Center, Baltimore, MD, USA

12
Chudobiński C, Świderski B, Antoniuk I, Kurek J. Enhancements in Radiological Detection of Metastatic Lymph Nodes Utilizing AI-Assisted Ultrasound Imaging Data and the Lymph Node Reporting and Data System Scale. Cancers (Basel) 2024; 16:1564. [PMID: 38672646] [PMCID: PMC11048706] [DOI: 10.3390/cancers16081564] [Received: 03/19/2024] [Revised: 04/11/2024] [Accepted: 04/17/2024] [Indexed: 04/28/2024]
Abstract
The paper presents a novel approach for the automatic detection of neoplastic lesions in lymph nodes (LNs). It leverages the latest advances in machine learning (ML) together with the LN Reporting and Data System (LN-RADS) scale. By integrating diverse datasets and network structures, the research investigates the effectiveness of ML algorithms in improving diagnostic accuracy and automation potential. Both Multinomial Logistic Regression (MLR)-integrated and fully connected neuron layers are included in the analysis. The methods were trained using three variants of combinations of histopathological data and LN-RADS scale labels to assess their utility. The findings demonstrate that the LN-RADS scale improves prediction accuracy. MLR integration achieves higher accuracy, while the fully connected neuron approach excels in AUC performance. All of the above suggests the possibility of significant improvement in the early detection and prognosis of cancer using AI techniques. The study underlines the importance of further exploration into combined datasets and network architectures, which could potentially lead to even greater improvements in the diagnostic process.
Affiliation(s)
- Cezary Chudobiński: Copernicus Regional Multi-Specialty Oncology and Trauma Centre, 93-513 Łódź, Poland
- Bartosz Świderski: Department of Artificial Intelligence, Institute of Information Technology, Warsaw University of Life Sciences, 02-776 Warsaw, Poland
- Izabella Antoniuk: Department of Artificial Intelligence, Institute of Information Technology, Warsaw University of Life Sciences, 02-776 Warsaw, Poland
- Jarosław Kurek: Department of Artificial Intelligence, Institute of Information Technology, Warsaw University of Life Sciences, 02-776 Warsaw, Poland

13
Mhaske S, Ramalingam K, Nair P, Patel S, Menon P A, Malik N, Mhaske S. Automated Analysis of Nuclear Parameters in Oral Exfoliative Cytology Using Machine Learning. Cureus 2024; 16:e58744. [PMID: 38779230] [PMCID: PMC11110917] [DOI: 10.7759/cureus.58744] [Accepted: 04/22/2024] [Indexed: 05/25/2024]
Abstract
BACKGROUND As oral cancer remains a major worldwide health concern, sophisticated diagnostic tools are needed to aid early diagnosis. Non-invasive methods such as exfoliative cytology, aided by artificial intelligence (AI), have drawn additional interest. AIM The study aimed to harness machine learning algorithms for the automated analysis of nuclear parameters in oral exfoliative cytology, and to compare the accuracy of two AI approaches, convolutional neural networks (CNN) and support vector machines (SVM). METHODS A comparative diagnostic study was performed in two groups of patients (n=60): a control group without evidence of lesions (n=30) and a group with clinically suspicious oral malignancy (n=30). All patients underwent cytological smears using an exfoliative cytology brush, followed by routine hematoxylin and eosin staining. Image preprocessing, data splitting, machine learning model development, feature extraction, and model evaluation were performed. An independent t-test was run on each nuclear characteristic, and Pearson's correlation coefficient test was performed with Statistical Package for the Social Sciences (SPSS) software (IBM SPSS Statistics for Windows, Version 28.0, IBM Corp, Armonk, NY, USA). RESULTS The study found substantial variations between the study and control groups in nuclear size (p<0.05), nuclear shape (p<0.01), and chromatin distribution (p<0.001). The Pearson correlation coefficient was 0.6472 for SVM and 0.7790 for CNN, indicating closer agreement with the reference diagnosis for the CNN model. CONCLUSION The availability of multidimensional datasets, combined with breakthroughs in high-performance computing and new deep-learning architectures, has resulted in an explosion of AI use in numerous areas of oncology research. The diagnostic accuracy exhibited by the SVM and CNN models suggests prospective improvements in early detection rates, potentially improving patient outcomes and enhancing healthcare practices.
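The Pearson correlation coefficient this study uses to compare model output against the reference diagnosis is straightforward to compute. A minimal plain-Python sketch, with hypothetical labels and prediction scores (not the study's data):

```python
import math

# Pearson's correlation coefficient between two equal-length sequences.
def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical ground-truth labels vs. a model's predicted probabilities:
# a coefficient closer to 1.0 indicates closer agreement.
truth = [0, 0, 1, 1, 1, 0, 1, 0]
scores = [0.1, 0.3, 0.7, 0.6, 0.9, 0.4, 0.8, 0.2]
r = pearson_r(truth, scores)
```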
Affiliation(s)
- Shubhangi Mhaske: Oral Pathology and Microbiology, Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, IND; Oral and Maxillofacial Pathology, People's College of Dental Science and Research Center, Bhopal, IND
- Karthikeyan Ramalingam: Oral Pathology and Microbiology, Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, IND
- Preeti Nair: Oral Medicine and Radiology, People's College of Dental Science and Research Center, Bhopal, IND
- Shubham Patel: Oral and Maxillofacial Pathology, People's College of Dental Science and Research Center, Bhopal, IND
- Arathi Menon P: Dentistry, Indian Council of Medical Research, Bhopal, IND
- Nida Malik: Periodontics, Kamala Nehru Hospital, Bhopal, IND
- Sumedh Mhaske: Medicine, Government Medical College & Hospital, Aurangabad, IND

14
Warin K, Suebnukarn S. Deep learning in oral cancer - a systematic review. BMC Oral Health 2024; 24:212. [PMID: 38341571] [PMCID: PMC10859022] [DOI: 10.1186/s12903-024-03993-5] [Received: 10/27/2023] [Accepted: 02/06/2024] [Indexed: 02/12/2024]
Abstract
BACKGROUND Oral cancer is a life-threatening malignancy that affects the survival rate and quality of life of patients. The aim of this systematic review was to review deep learning (DL) studies in the diagnosis and prognostic prediction of oral cancer. METHODS This systematic review was conducted following the PRISMA guidelines. Databases (Medline via PubMed, Google Scholar, Scopus) were searched for relevant studies published from January 2000 to June 2023. RESULTS Fifty-four studies qualified for inclusion, covering diagnosis (n = 51) and prognostic prediction (n = 3). Thirteen studies showed a low risk of bias in all domains, and 40 studies showed low concern regarding applicability. DL models reported accuracies of 85.0-100%, F1-scores of 79.31-89.0%, Dice coefficient indices of 76.0-96.3%, and concordance indices of 0.78-0.95 for classification, object detection, segmentation, and prognostic prediction, respectively. The pooled diagnostic odds ratio was 2549.08 (95% CI 410.77-4687.39) for classification studies. CONCLUSIONS The number of DL studies in oral cancer is increasing, with diverse architectures. The reported accuracy shows promising DL performance in oral cancer studies, with potential utility in improving informed clinical decision-making for oral cancer.
Affiliation(s)
- Kritsasith Warin: Faculty of Dentistry, Thammasat University, Pathum Thani, Thailand

15
Albalawi E, Thakur A, Ramakrishna MT, Bhatia Khan S, SankaraNarayanan S, Almarri B, Hadi TH. Oral squamous cell carcinoma detection using EfficientNet on histopathological images. Front Med (Lausanne) 2024; 10:1349336. [PMID: 38348235] [PMCID: PMC10859441] [DOI: 10.3389/fmed.2023.1349336] [Received: 12/04/2023] [Accepted: 12/28/2023] [Indexed: 02/15/2024]
Abstract
Introduction Oral Squamous Cell Carcinoma (OSCC) poses a significant challenge in oncology due to the absence of precise diagnostic tools, leading to delays in identifying the condition. Current diagnostic methods for OSCC have limitations in accuracy and efficiency, highlighting the need for more reliable approaches. This study aims to explore the discriminative potential of histopathological images of oral epithelium and OSCC. By utilizing a database containing 1224 images from 230 patients, captured at varying magnifications and publicly available, a customized deep learning model based on EfficientNetB3 was developed. The model's objective was to differentiate between normal epithelium and OSCC tissues by employing advanced techniques such as data augmentation, regularization, and optimization. Methods The research utilized a histopathological imaging database for Oral Cancer analysis, incorporating 1224 images from 230 patients. These images, taken at various magnifications, formed the basis for training a specialized deep learning model built upon the EfficientNetB3 architecture. The model underwent training to distinguish between normal epithelium and OSCC tissues, employing sophisticated methodologies including data augmentation, regularization techniques, and optimization strategies. Results The customized deep learning model achieved significant success, showcasing a remarkable 99% accuracy when tested on the dataset. This high accuracy underscores the model's efficacy in effectively discerning between normal epithelium and OSCC tissues. Furthermore, the model exhibited impressive precision, recall, and F1-score metrics, reinforcing its potential as a robust diagnostic tool for OSCC. Discussion This research demonstrates the promising potential of employing deep learning models to address the diagnostic challenges associated with OSCC. 
The model's ability to achieve a 99% accuracy rate on the test dataset signifies a considerable leap forward in earlier and more accurate detection of OSCC. Leveraging advanced techniques in machine learning, such as data augmentation and optimization, has shown promising results in improving patient outcomes through timely and precise identification of OSCC.
Affiliation(s)
- Eid Albalawi: Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, Al-Ahsa, Saudi Arabia
- Arastu Thakur: Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- Mahesh Thyluru Ramakrishna: Department of Computer Science and Engineering, Faculty of Engineering and Technology, JAIN (Deemed-to-be University), Bangalore, India
- Surbhi Bhatia Khan: Department of Data Science, School of Science, Engineering and Environment, University of Salford, Salford, United Kingdom; Department of Electrical and Computer Engineering, Lebanese American University, Byblos, Lebanon
- Suresh SankaraNarayanan: Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, Al-Ahsa, Saudi Arabia
- Badar Almarri: Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, Al-Ahsa, Saudi Arabia
- Theyazn Hassn Hadi: Applied College in Abqaiq, King Faisal University, Al-Ahsa, Saudi Arabia

16
Shamsan A, Senan EM, Ahmad Shatnawi HS. Predicting of diabetic retinopathy development stages of fundus images using deep learning based on combined features. PLoS One 2023; 18:e0289555. [PMID: 37862328] [PMCID: PMC10588832] [DOI: 10.1371/journal.pone.0289555] [Received: 04/04/2023] [Accepted: 07/20/2023] [Indexed: 10/22/2023]
Abstract
The number of diabetic retinopathy (DR) patients is increasing every year, which poses a public health problem. Regular screening of diabetes patients is therefore necessary to avoid progression of DR to advanced stages that lead to blindness. Manual diagnosis requires effort and expertise, and is prone to errors and differing expert opinions; artificial intelligence techniques can help doctors reach a proper diagnosis and resolve conflicting assessments. This study developed three approaches, each with two systems, for early diagnosis of DR disease progression. All colour fundus images were subjected to image enhancement and contrast enhancement of the region of interest through filters. All features extracted by DenseNet-121 and AlexNet (Dense-121 and Alex) were fed to the Principal Component Analysis (PCA) method to select important features and reduce their dimensionality. The first approach analyzes DR images for early prediction of disease progression using an Artificial Neural Network (ANN) with the selected, low-dimensional features of the Dense-121 and Alex models. The second approach integrates the important, low-dimensional features of the Dense-121 and Alex models before and after PCA. The third approach uses an ANN with radiomic features, which combine the features of the CNN models (Dense-121 and Alex), taken separately, with handcrafted features extracted by the Discrete Wavelet Transform (DWT), Local Binary Pattern (LBP), Fuzzy Colour Histogram (FCH), and Gray Level Co-occurrence Matrix (GLCM) methods. With the radiomic features of the Alex model and the handcrafted features, the ANN reached a sensitivity of 97.92%, an AUC of 99.56%, an accuracy of 99.1%, a specificity of 99.4%, and a precision of 99.06%.
Affiliation(s)
- Ahlam Shamsan: Computer Department, Applied College, Najran University, Najran, Saudi Arabia
- Ebrahim Mohammed Senan: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen

17
Pošta P, Kolk A, Pivovarčíková K, Liška J, Genčur J, Moztarzadeh O, Micopulos C, Pěnkava A, Frolo M, Bissinger O, Hauer L. Clinical Experience with Autofluorescence Guided Oral Squamous Cell Carcinoma Surgery. Diagnostics (Basel) 2023; 13:3161. [PMID: 37891982] [PMCID: PMC10605623] [DOI: 10.3390/diagnostics13203161] [Received: 09/21/2023] [Revised: 10/02/2023] [Accepted: 10/05/2023] [Indexed: 10/29/2023]
Abstract
In our study, the effect of autofluorescence visualization (Visually Enhanced Lesion Scope, VELscope) on the success rate of surgical treatment of oral squamous cell carcinoma (OSCC) was investigated. Our hypothesis was tested on a group of 122 patients suffering from OSCC who met the inclusion criteria and were randomized into a study group and a control group. In the study group, a preoperative VELscope examination was performed and the extent of fluorescence loss was marked before surgery; we developed a unique mucosal tattoo marking technique for this purpose. The histopathological results after surgical treatment, i.e., the margin status, were then compared. In the study group, we achieved pathological free margins (pFM) in 55 patients and pathological close margins (pCM) in 6 cases, and we encountered no cases of pathological positive margins (pPM) in the mucosal layer. In comparison, the control group results revealed pPM in 7 cases, pCM in 14 cases, and pFM in 40 cases in the mucosal layer. This study demonstrated that preoperative autofluorescence assessment of the mucosal surroundings of OSCC increased the odds of achieving a pFM resection 4.8-fold in terms of lateral margins.
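The reported 4.8-fold figure can be reproduced from the margin counts in the abstract as an odds ratio; a quick arithmetic check (my computation, not the paper's code):

```python
# Odds ratio for achieving pathological free margins (pFM), computed from the
# counts reported in the abstract: study group 55 pFM vs 6 non-pFM (6 pCM + 0 pPM);
# control group 40 pFM vs 21 non-pFM (14 pCM + 7 pPM).

def odds_ratio(events_a: int, non_events_a: int,
               events_b: int, non_events_b: int) -> float:
    return (events_a / non_events_a) / (events_b / non_events_b)

or_pfm = odds_ratio(55, 6, 40, 21)  # = 4.8125, matching the ~4.8x in the abstract
```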
Affiliation(s)
- Petr Pošta: Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Andreas Kolk: Department of Oral and Maxillofacial Surgery, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Kristýna Pivovarčíková: Sikl's Department of Pathology, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic; Bioptic Laboratory Ltd., 32600 Pilsen, Czech Republic
- Jan Liška: Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Jiří Genčur: Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Omid Moztarzadeh: Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic; Department of Anatomy, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Christos Micopulos: Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Adam Pěnkava: Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Maria Frolo: Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic
- Oliver Bissinger: Department of Oral and Maxillofacial Surgery, Medical University of Innsbruck, 6020 Innsbruck, Austria
- Lukáš Hauer: Department of Stomatology, University Hospital Pilsen, Faculty of Medicine, Charles University, 32300 Pilsen, Czech Republic

18
Song S, Ren X, He J, Gao M, Wang J, Wang B. An Optimal Hierarchical Approach for Oral Cancer Diagnosis Using Rough Set Theory and an Amended Version of the Competitive Search Algorithm. Diagnostics (Basel) 2023; 13:2454. [PMID: 37510198] [PMCID: PMC10377835] [DOI: 10.3390/diagnostics13142454] [Received: 05/03/2023] [Revised: 07/06/2023] [Accepted: 07/19/2023] [Indexed: 07/30/2023]
Abstract
Oral cancer is the uncontrolled growth of cells that destroys and damages nearby tissues. It occurs when a sore or lump grows in the mouth and does not disappear. Cancers of the cheeks, lips, floor of the mouth, tongue, sinuses, hard and soft palate, and pharynx (throat) are types of this cancer that can be deadly if not detected and treated in the early stages. The present study proposes a new pipeline procedure providing an efficient diagnosis system for oral cancer images. In this procedure, after preprocessing the input images and segmenting the area of interest, useful characteristics are extracted. A subset of useful features is then selected, and the others are removed to reduce the method's complexity. Finally, the selected features are passed to a support vector machine (SVM) to classify the images by the selected characteristics. The feature selection and classification steps are optimized by an amended version of the competitive search optimizer. The technique is implemented on the Oral Cancer (Lips and Tongue) images (OCI) dataset, and its performance is confirmed by comparison with several recent techniques, including weight balancing, a support vector machine, a gray-level co-occurrence matrix (GLCM), a deep method, transfer learning, mobile microscopy, and quadratic discriminant analysis. The simulation results were assessed by four indicators and indicate the suggested method's efficiency relative to the others in diagnosing oral cancer cases.
Affiliation(s)
- Simin Song: The Second Medical Center, Chinese People's Liberation Army General Hospital, Beijing 100089, China
- Xiaojing Ren: The First Medical Center, Chinese People's Liberation Army General Hospital, Beijing 100853, China
- Jing He: The Second Medical Center, Chinese People's Liberation Army General Hospital, Beijing 100089, China
- Meng Gao: The Second Medical Center, Chinese People's Liberation Army General Hospital, Beijing 100089, China
- Jia'nan Wang: The Second Medical Center, Chinese People's Liberation Army General Hospital, Beijing 100089, China
- Bin Wang: The Second Medical Center, Chinese People's Liberation Army General Hospital, Beijing 100089, China

19
Hamdi M, Senan EM, Jadhav ME, Olayah F, Awaji B, Alalayah KM. Hybrid Models Based on Fusion Features of a CNN and Handcrafted Features for Accurate Histopathological Image Analysis for Diagnosing Malignant Lymphomas. Diagnostics (Basel) 2023; 13:2258. [PMID: 37443652] [DOI: 10.3390/diagnostics13132258] [Received: 05/24/2023] [Revised: 06/10/2023] [Accepted: 06/28/2023] [Indexed: 07/15/2023]
Abstract
Malignant lymphoma is one of the most severe diseases, leading to death as a result of malignant transformation of lymphocytes. The transformation of cells from indolent B-cell lymphoma to diffuse B-cell lymphoma (DBCL) is life-threatening. Biopsies taken from the patient are the gold standard for lymphoma analysis. Glass slides under a microscope are converted into whole slide images (WSI) to be analyzed by AI techniques through biomedical image processing. Because of the multiplicity of types of malignant lymphoma, manual diagnosis by pathologists is difficult, tedious, and subject to disagreement among physicians. The use of artificial intelligence (AI) in the early diagnosis of malignant lymphoma offers numerous benefits, including improved accuracy, faster diagnosis, and risk stratification. This study developed several strategies based on hybrid systems to analyze histopathological images of malignant lymphomas. For all proposed models, the images and the extraction of malignant lymphocytes were optimized by the gradient vector flow (GVF) algorithm. The first strategy relied on a hybrid system combining three types of deep learning (DL) networks with XGBoost and decision tree (DT) classifiers, based on the GVF algorithm. The second strategy fused the features of the MobileNet-VGG16, VGG16-AlexNet, and MobileNet-AlexNet models and classified them with XGBoost and DT algorithms based on the ant colony optimization (ACO) algorithm. Colour, shape, and texture features, known as handcrafted features, were extracted by four traditional feature extraction algorithms. Because of the similarity in the biological characteristics of early-stage malignant lymphomas, the fused features of the MobileNet-VGG16, VGG16-AlexNet, and MobileNet-AlexNet models were also combined with the handcrafted features and classified by the XGBoost and DT algorithms based on the ACO algorithm. The XGBoost and DT classifiers with features fused from DL networks and handcrafted features achieved the best performance: the XGBoost network based on the fused MobileNet-VGG16 and handcrafted features reached an AUC of 99.43%, accuracy of 99.8%, precision of 99.77%, sensitivity of 99.7%, and specificity of 99.8%. These results highlight the significant role of AI in the early diagnosis of malignant lymphoma, offering improved accuracy, expedited diagnosis, and enhanced risk stratification, and demonstrate that fusing handcrafted features with features extracted from DL networks enhances the performance of the classification models.
Affiliation(s)
- Mohammed Hamdi: Department of Computer Science, Faculty of Computer Science and Information System, Najran University, Najran 66462, Saudi Arabia
- Ebrahim Mohammed Senan: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
- Mukti E Jadhav: Shri Shivaji Science & Arts College, Chikhli Dist., Buldana 443201, India
- Fekry Olayah: Department of Information System, Faculty of Computer Science and Information System, Najran University, Najran 66462, Saudi Arabia
- Bakri Awaji: Department of Computer Science, Faculty of Computer Science and Information System, Najran University, Najran 66462, Saudi Arabia
- Khaled M Alalayah: Department of Computer Science, Faculty of Science and Arts, Sharurah, Najran University, Najran 66462, Saudi Arabia

20
Khanagar SB, Alkadi L, Alghilan MA, Kalagi S, Awawdeh M, Bijai LK, Vishwanathaiah S, Aldhebaib A, Singh OG. Application and Performance of Artificial Intelligence (AI) in Oral Cancer Diagnosis and Prediction Using Histopathological Images: A Systematic Review. Biomedicines 2023; 11:1612. [PMID: 37371706] [DOI: 10.3390/biomedicines11061612] [Received: 04/27/2023] [Revised: 05/27/2023] [Accepted: 05/31/2023] [Indexed: 06/29/2023]
Abstract
Oral cancer (OC) is one of the most common forms of head and neck cancer and continues to have among the lowest survival rates worldwide, even with advancements in research and therapy. The prognosis of OC has not significantly improved in recent years, presenting a persistent challenge in the biomedical field. In the field of oncology, artificial intelligence (AI) has seen rapid development, with notable successes reported in recent times. This systematic review aimed to critically appraise the available evidence regarding the utilization of AI in the diagnosis, classification, and prediction of oral cancer (OC) using histopathological images. An electronic search of several databases, including PubMed, Scopus, Embase, the Cochrane Library, Web of Science, Google Scholar, and the Saudi Digital Library, was conducted for articles published between January 2000 and January 2023. Nineteen articles that met the inclusion criteria were subjected to critical analysis using QUADAS-2, and the certainty of the evidence was assessed using the GRADE approach. AI models have been widely applied in diagnosing oral cancer, differentiating normal and malignant regions, predicting the survival of OC patients, and grading OC. The AI models used in these studies displayed accuracies ranging from 89.47% to 100%, sensitivities from 97.76% to 99.26%, and specificities from 92% to 99.42%. The models' abilities to diagnose, classify, and predict the occurrence of OC outperform existing clinical approaches. This demonstrates the potential for AI to deliver a superior level of precision and accuracy, helping pathologists significantly improve their diagnostic outcomes and reduce the probability of errors. Considering these advantages, regulatory bodies and policymakers should expedite the approval and marketing of these products for application in clinical scenarios.
Affiliation(s)
- Sanjeev B Khanagar: Preventive Dental Science Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia; King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Lubna Alkadi: King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia; Restorative and Prosthetic Dental Sciences Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- Maryam A Alghilan: King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia; Restorative and Prosthetic Dental Sciences Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- Sara Kalagi: King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia; Restorative and Prosthetic Dental Sciences Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- Mohammed Awawdeh: Preventive Dental Science Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia; King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Lalitytha Kumar Bijai: King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia; Maxillofacial Surgery and Diagnostic Sciences Department, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- Satish Vishwanathaiah: Department of Preventive Dental Sciences, Division of Pediatric Dentistry, College of Dentistry, Jazan University, Jazan 45142, Saudi Arabia
- Ali Aldhebaib: King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia; Radiological Sciences Program, College of Applied Medical Sciences, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
- Oinam Gokulchandra Singh: King Abdullah International Medical Research Centre, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia; Radiological Sciences Program, College of Applied Medical Sciences, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11426, Saudi Arabia
21
Ghaleb Al-Mekhlafi Z, Mohammed Senan E, Sulaiman Alshudukhi J, Abdulkarem Mohammed B. Hybrid Techniques for Diagnosing Endoscopy Images for Early Detection of Gastrointestinal Disease Based on Fusion Features. INT J INTELL SYST 2023. [DOI: 10.1155/2023/8616939]
Abstract
Gastrointestinal (GI) diseases, particularly tumours, are among the most widespread and dangerous diseases, so timely care and early detection are needed to reduce deaths. Endoscopy is an effective technique for diagnosing GI diseases, but each procedure produces a video containing thousands of frames, which are difficult and time-consuming for a gastroenterologist to analyse exhaustively. Artificial intelligence systems address this challenge by analysing thousands of images quickly and accurately. Hence, systems with different methodologies were developed in this work. The first methodology diagnoses endoscopy images of GI diseases using VGG-16 + SVM and DenseNet-121 + SVM. The second methodology uses an artificial neural network (ANN) on features fused from VGG-16 and DenseNet-121, before and after dimensionality reduction by principal component analysis (PCA). The third methodology uses an ANN on features fused between VGG-16 and handcrafted features, and between DenseNet-121 and handcrafted features. Here, the handcrafted features combine grey-level co-occurrence matrix (GLCM), discrete wavelet transform (DWT), fuzzy colour histogram (FCH), and local binary pattern (LBP) descriptors. All systems achieved promising results for diagnosing endoscopy images of the gastroenterology dataset. Based on the fused VGG-16 and handcrafted features, the ANN reached an accuracy, sensitivity, precision, specificity, and AUC of 98.9%, 98.70%, 98.94%, 99.69%, and 99.51%, respectively.
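The fusion-then-reduction pipeline described in the second methodology can be sketched as follows. This is a minimal illustration, not the paper's implementation: random matrices stand in for VGG-16 and DenseNet-121 feature vectors, and the feature dimensions, PCA components, and ANN layer sizes are assumptions.

```python
# Feature-level fusion of two CNN backbones, PCA reduction, ANN classifier.
# Random matrices stand in for VGG-16 and DenseNet-121 embeddings.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_images = 200
vgg_feats = rng.normal(size=(n_images, 512))     # stand-in for VGG-16 features
dense_feats = rng.normal(size=(n_images, 1024))  # stand-in for DenseNet-121 features
labels = rng.integers(0, 2, size=n_images)       # synthetic binary disease labels

# Fuse by concatenation, then reduce dimensionality before the ANN.
fused = np.concatenate([vgg_feats, dense_feats], axis=1)
clf = make_pipeline(
    PCA(n_components=50),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0))
clf.fit(fused, labels)
print(fused.shape)  # (200, 1536)
```

Concatenation keeps both backbones' information, while PCA removes the redundancy that the joint 1536-dimensional vector inevitably carries before the ANN is trained.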
Affiliation(s)
- Zeyad Ghaleb Al-Mekhlafi: Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
- Ebrahim Mohammed Senan: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana’a, Yemen
- Jalawi Sulaiman Alshudukhi: Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
- Badiea Abdulkarem Mohammed: Department of Computer Engineering, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
22
Al-Jabbar M, Alshahrani M, Senan EM, Ahmed IA. Histopathological Analysis for Detecting Lung and Colon Cancer Malignancies Using Hybrid Systems with Fused Features. Bioengineering (Basel) 2023; 10:383. [PMID: 36978774] [PMCID: PMC10045080] [DOI: 10.3390/bioengineering10030383]
Abstract
Lung and colon cancer are among humanity's most common and deadly cancers. In 2020, 4.19 million people were diagnosed with lung or colon cancer, and more than 2.7 million died worldwide. Some people develop lung and colon cancer simultaneously: smoking causes lung cancer and is accompanied by an abnormal diet, which also contributes to colon cancer. There are many techniques for diagnosing lung and colon cancer, most notably biopsy and its laboratory analysis; however, health centres and medical staff are scarce, especially in developing countries, and manual diagnosis takes a long time and is subject to differing opinions among doctors. Artificial intelligence techniques address these challenges. In this study, three strategies were developed, each with two systems, for early diagnosis of the histological images of the LC25000 dataset. The histological images were enhanced, and the contrast of affected areas was increased. Because the GoogLeNet and VGG-19 models in all systems produce high-dimensional features, redundant and unnecessary features were removed by the PCA method to reduce dimensionality while retaining essential features. The first strategy diagnoses the histological images by an ANN using the crucial features of the GoogLeNet and VGG-19 models separately. The second strategy uses an ANN with the combined features of GoogLeNet and VGG-19; one system reduces dimensions before combining, while the other combines the high-dimensional features and then reduces them. The third strategy uses an ANN with features fusing the CNN models (GoogLeNet and VGG-19) with handcrafted features. With the fused VGG-19 and handcrafted features, the ANN reached a sensitivity of 99.85%, a precision of 100%, an accuracy of 99.64%, a specificity of 100%, and an AUC of 99.86%.
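The two systems of the second strategy differ only in the order of fusion and PCA reduction. A minimal sketch of both orderings follows; random matrices stand in for GoogLeNet and VGG-19 features, and all dimensions are illustrative assumptions rather than the paper's settings.

```python
# Two orderings of fusion and PCA: reduce-then-combine vs. combine-then-reduce.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
g = rng.normal(size=(100, 1024))  # stand-in for GoogLeNet features
v = rng.normal(size=(100, 4096))  # stand-in for VGG-19 features

# System 1: reduce each feature set first, then concatenate.
reduced_then_combined = np.concatenate(
    [PCA(n_components=32).fit_transform(g),
     PCA(n_components=32).fit_transform(v)], axis=1)

# System 2: concatenate the high-dimensional features, then reduce.
combined_then_reduced = PCA(n_components=64).fit_transform(
    np.concatenate([g, v], axis=1))

print(reduced_then_combined.shape, combined_then_reduced.shape)  # (100, 64) (100, 64)
```

Both orderings yield vectors of the same length, but System 1 guarantees each backbone an equal share of the retained components, while System 2 lets PCA allocate components to whichever fused directions carry the most variance.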
Affiliation(s)
- Mohammed Al-Jabbar: Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Mohammed Alshahrani: Computer Department, Applied College, Najran University, Najran 66462, Saudi Arabia
- Ebrahim Mohammed Senan: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana'a, Yemen
23
Ananthakrishnan B, Shaik A, Kumar S, Narendran SO, Mattu K, Kavitha MS. Automated Detection and Classification of Oral Squamous Cell Carcinoma Using Deep Neural Networks. Diagnostics (Basel) 2023; 13:918. [PMID: 36900062] [PMCID: PMC10001077] [DOI: 10.3390/diagnostics13050918]
Abstract
This work aims to classify normal and carcinogenic cells in the oral cavity using two different approaches, with high accuracy as the goal. The first approach extracts local binary patterns and histogram-derived metrics from the dataset and feeds them to several machine-learning models. The second approach uses a combination of neural networks as a backbone feature extractor and a random forest for classification. The results show that these approaches can learn effectively from limited training images. Some prior approaches use deep learning algorithms to generate a bounding box that locates the suspected lesion; others use handcrafted textural feature extraction techniques and feed the resultant feature vectors to a classification model. The proposed method extracts image features using pre-trained convolutional neural networks (CNNs) and trains a classification model on the resulting feature vectors. By using features extracted from a pre-trained CNN model to train a random forest, the problem of requiring a large amount of data to train deep learning models is bypassed. The study selected a dataset of 1224 images, divided into two sets of different resolutions. Model performance is measured by accuracy, specificity, sensitivity, and the area under the curve (AUC). The proposed work produces a highest test accuracy of 96.94% and an AUC of 0.976 using 696 images at 400× magnification, and a highest test accuracy of 99.65% and an AUC of 0.9983 using only 528 images at 100× magnification.
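The second approach, a frozen pre-trained CNN feeding a random forest, can be sketched as follows. This is a hedged illustration: random embeddings stand in for the CNN output, and the labels, dimensions, and split are synthetic assumptions, not the study's data.

```python
# Pre-trained-CNN features + random forest: no deep-model training required.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 256))  # stand-in for pre-trained CNN embeddings
y = (X[:, 0] > 0).astype(int)    # synthetic normal-vs-carcinoma labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
acc = accuracy_score(y_test, forest.predict(X_test))
print(round(acc, 3))
```

Because only the forest is trained while the CNN stays fixed, a few hundred labeled images can suffice, which is the data-efficiency argument the abstract makes.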
Affiliation(s)
- Balasundaram Ananthakrishnan: Centre for Cyber Physical Systems, Vellore Institute of Technology, Chennai 600127, India; School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- Ayesha Shaik: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- Soham Kumar: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- S. O. Narendran: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- Khushi Mattu: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, India
- Muthu Subash Kavitha: School of Information and Data Sciences, Nagasaki University, Nagasaki 852-8521, Japan
- Correspondence: (B.A.); (A.S.)