1
Wu C, Chen Q, Wang H, Guan Y, Mian Z, Huang C, Ruan C, Song Q, Jiang H, Pan J, Li X. A review of deep learning approaches for multimodal image segmentation of liver cancer. J Appl Clin Med Phys 2024;25:e14540. PMID: 39374312; PMCID: PMC11633801; DOI: 10.1002/acm2.14540.
Abstract
This review examines recent developments in deep learning (DL) techniques applied to multimodal fusion image segmentation for liver cancer. Hepatocellular carcinoma is a highly dangerous malignant tumor whose effective treatment and disease monitoring depend on accurate image segmentation. Multimodal image fusion can offer more comprehensive information and more precise segmentation, and DL techniques have achieved remarkable progress in this domain. The paper opens with an introduction to liver cancer, explains the preprocessing and fusion methods for multimodal images, and then explores the application of DL methods in this area. Various DL architectures, such as convolutional neural networks (CNNs) and U-Net, are discussed along with their benefits for multimodal image fusion segmentation. The evaluation metrics and datasets currently used to measure the performance of segmentation models are also reviewed. Alongside this progress, the challenges of current research, such as data imbalance, model generalization, and model interpretability, are emphasized, and future research directions are suggested. The application of DL to multimodal image segmentation for liver cancer is transforming medical imaging and is expected to further enhance the accuracy and efficiency of clinical decision making. The review provides useful insights and guidance for medical practitioners.
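Among the evaluation metrics such segmentation studies report, the Dice coefficient and intersection-over-union (IoU) are the most common. A minimal NumPy sketch of both, with toy masks that are purely illustrative (not data from the review):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity between two binary masks (1 = tumor, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union (Jaccard index) of two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

# toy 4x4 masks: prediction covers 2 of the 3 ground-truth foreground pixels
pred = np.zeros((4, 4), dtype=int); pred[0, 0] = pred[0, 1] = 1
gt = np.zeros((4, 4), dtype=int);   gt[0, 0] = gt[0, 1] = gt[0, 2] = 1
```

Here Dice = 2·2/(2+3) = 0.8 while IoU = 2/3, illustrating why the two metrics are reported separately: Dice weights the overlap more generously than IoU.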
Affiliation(s)
- Chaopeng Wu
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Qiyao Chen
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Haoyu Wang
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Yu Guan
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Zhangyang Mian
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Cong Huang
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Changli Ruan
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Qibin Song
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- Hao Jiang
- School of Electronic Information, Wuhan University, Wuhan, Hubei, China
- Jinghui Pan
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
- School of Electronic Information, Wuhan University, Wuhan, Hubei, China
- Xiangpan Li
- Department of Radiation Oncology, Renmin Hospital, Wuhan University, Wuhan, Hubei, China
2
Kaur J, Kaur P. A systematic literature analysis of multi-organ cancer diagnosis using deep learning techniques. Comput Biol Med 2024;179:108910. PMID: 39032244; DOI: 10.1016/j.compbiomed.2024.108910.
Abstract
Cancer has become one of the deadliest diseases identified among individuals worldwide. Its mortality rate has risen rapidly every year, which has driven progress in the diagnostic technologies used to manage the illness. Manual segmentation and classification across a large set of data modalities is a challenging task, so there is a crucial need to develop computer-assisted diagnostic systems for initial cancer identification. This article offers a systematic review of deep learning approaches using various image modalities to detect multi-organ cancers from 2012 to 2023. It emphasizes the detection of five of the most predominant tumors: breast, brain, lung, skin, and liver. An extensive review was carried out by collecting research articles, conference articles, and book chapters from reputed international databases (Springer Link, IEEE Xplore, Science Direct, PubMed, and Wiley) that fulfill the criteria for quality evaluation. The review summarizes the convolutional neural network architectures and datasets used for identifying and classifying diverse categories of cancer, and gives an inclusive picture of the ensemble deep learning models that have achieved the best evaluation results for classifying images as cancerous or healthy. The paper gives research scientists in medical imaging a broad understanding of which deep learning techniques perform best on which types of dataset, the feature-extraction approaches used, the open challenges, and anticipated solutions to these complex problems. Lastly, challenges and issues that govern health emergencies are discussed.
Affiliation(s)
- Jaspreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
- Prabhpreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
3
Lilhore UK, Dalal S, Faujdar N, Margala M, Chakrabarti P, Chakrabarti T, Simaiya S, Kumar P, Thangaraju P, Velmurugan H. Hybrid CNN-LSTM model with efficient hyperparameter tuning for prediction of Parkinson's disease. Sci Rep 2023;13:14605. PMID: 37669970; PMCID: PMC10480168; DOI: 10.1038/s41598-023-41314-y.
Abstract
Vocal changes in patients with Parkinson's disease (PD) can be identified early, allowing management before physically incapacitating symptoms appear. In this work, both static and dynamic speech characteristics relevant to PD identification are examined. Speech changes and communication issues are among the challenges individuals with Parkinson's may encounter, so avoiding the potential consequences of disease-related speech difficulties depends on an appropriate early diagnosis. The speech signals of PD patients differ significantly from those of healthy individuals. This research presents a hybrid model that applies CNN and LSTM networks to enhanced speech signals with dynamic feature decomposition. The proposed model uses a pre-trained CNN with an LSTM to recognize PD from linguistic features, working on Mel-spectrograms derived from normalized voice signals and dynamic mode decomposition. The hybrid model works in several phases: noise removal, extraction of Mel-spectrograms, feature extraction with the pre-trained CNN model ResNet-50, and a final classification stage. An experimental analysis was performed on the PC-GITA disease dataset, and the proposed hybrid model was compared with a traditional neural network and the well-known machine learning models CART, SVM, and XGBoost, which achieved accuracies of 72.69%, 84.21%, 73.51%, and 90.81%, respectively. Under tenfold cross-validation with dataset splits that keep any one individual's samples from spanning folds, the proposed hybrid model achieves an accuracy of 93.51%, significantly outperforming traditional ML models that rely on static features for detecting Parkinson's disease.
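The subject-disjoint splitting described above ("dataset splitting without samples overlapping one individual") can be sketched in pure Python: assign each speaker's recordings wholly to a single fold so no individual leaks across the train/test boundary. The speaker IDs and fold count below are hypothetical, not from the paper:

```python
from collections import defaultdict

def group_folds(sample_speakers, n_folds):
    """Assign sample indices to folds so no speaker's samples span two folds."""
    by_speaker = defaultdict(list)
    for idx, spk in enumerate(sample_speakers):
        by_speaker[spk].append(idx)
    folds = [[] for _ in range(n_folds)]
    # greedy balancing: place each speaker (largest first) into the smallest fold
    for spk, idxs in sorted(by_speaker.items(), key=lambda kv: -len(kv[1])):
        smallest = min(range(n_folds), key=lambda f: len(folds[f]))
        folds[smallest].extend(idxs)
    return folds

# toy example: 8 recordings from 4 speakers, split into 2 folds
speakers = ["a", "a", "b", "b", "b", "c", "d", "d"]
folds = group_folds(speakers, 2)
```

scikit-learn's `GroupKFold` implements the same idea; the sketch just makes the constraint explicit.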
Affiliation(s)
- Umesh Kumar Lilhore
- Department of Computer Science and Engineering, Chandigarh University, Chandigarh, Punjab, India
- Surjeet Dalal
- Amity School of Engineering and Technology, Amity University Haryana, Gurugram, India
- Neetu Faujdar
- Department of Computer Engineering and Application, GLA University, Mathura, Uttar Pradesh, India
- Martin Margala
- School of Computing and Informatics, University of Louisiana at Lafayette, Lafayette, USA
- Prasun Chakrabarti
- Department of Computer Science and Engineering, Sir Padampat Singhania University, Udaipur 313601, Rajasthan, India
- Sarita Simaiya
- Department of Computer Science and Engineering, Chandigarh University, Chandigarh, Punjab, India
- Apex Institute of Technology, Department of Computer Science and Engineering, Chandigarh University, Mohali, Punjab, India
- Pawan Kumar
- Department of Computer Science and Engineering, Chandigarh University, Chandigarh, Punjab, India
- College of Computing Sciences & IT, Teerthanker Mahaveer University, Moradabad, Uttar Pradesh, India
- Pugazhenthan Thangaraju
- Department of Pharmacology, All India Institute of Medical Sciences, Raipur, Chhattisgarh, India
- Hemasri Velmurugan
- Department of Pharmacology, All India Institute of Medical Sciences, Raipur, Chhattisgarh, India
4
Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023;15:3608. PMID: 37509272; PMCID: PMC10377683; DOI: 10.3390/cancers15143608.
Abstract
(1) Background: Applying deep learning to cancer diagnosis based on medical images is one of the research hotspots in artificial intelligence and computer vision. Deep learning methods are developing rapidly, cancer diagnosis demands very high accuracy and timeliness, and medical imaging carries inherent particularity and complexity, so a comprehensive review of relevant studies is needed to help readers understand the current research status and ideas. (2) Methods: Five types of radiological image, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), as well as histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced neural network approaches emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The applications of deep learning in medical image-based cancer analysis are then sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, and challenges remain in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: More public standard databases for cancer are needed. Pre-trained deep neural network models still have room for improvement, and special attention should be paid to research on multimodal data fusion and the supervised paradigm. Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
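Of the overfitting-prevention methods listed above, dropout is the simplest to sketch concretely. A minimal NumPy illustration of inverted dropout (the array shape and drop probability are illustrative, not taken from the review):

```python
import numpy as np

def dropout(x, p, rng, training=True):
    """Inverted dropout: zero each activation with prob p, rescale survivors by 1/(1-p)."""
    if not training or p == 0.0:
        return x                         # identity at inference time
    mask = rng.random(x.shape) >= p      # keep with probability 1 - p
    return x * mask / (1.0 - p)          # rescaling keeps E[output] == input

rng = np.random.default_rng(0)
activations = np.ones((4, 8))
dropped = dropout(activations, p=0.5, rng=rng)
```

The 1/(1-p) rescaling is what makes the layer a no-op at inference: the expected activation magnitude matches training, so no separate scaling pass is needed.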
Grants
- RM32G0178B8 BBSRC
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
5
Wajeed MA, Tiwari S, Gupta R, Ahmad AJ, Agarwal S, Jamal SS, Hinga SK. A Breast Cancer Image Classification Algorithm with 2c Multiclass Support Vector Machine. J Healthc Eng 2023;2023:3875525. PMID: 37457494; PMCID: PMC10349674; DOI: 10.1155/2023/3875525.
Abstract
Breast cancer is the most frequent type of cancer in women, but early identification has reduced the mortality rate associated with the condition. Studies have demonstrated that the earlier the disease is detected by mammography, the lower the death rate. Mammography is therefore a critical technique for early identification of breast cancer, since it can reveal abnormalities in the breast months or years before a patient becomes aware of them. Mammography images the breasts with x-rays and produces high-resolution digital pictures; once these digital images are captured and transmitted to high-tech digital mammography equipment, radiologists evaluate them to establish the precise position and extent of disease in the breast. Compared with the many classifiers typically used in the literature, the suggested Multiclass Support Vector Machine (MSVM) approach produces promising results, and it may pave the way for more advanced statistical characteristics in future cancer prognostic models. This paper demonstrates that the suggested 2C algorithm with MSVM outperforms a decision tree model in terms of accuracy, in line with prior findings. These findings suggest that new screening mammography technologies can increase the accuracy and accessibility of screening mammography worldwide.
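A multiclass SVM is commonly built from binary classifiers in a one-vs-rest scheme: each class gets its own hyperplane, and the class whose hyperplane scores highest wins. A minimal NumPy sketch of that decision rule; the hyperplanes below are hand-set for illustration, not trained, and the paper's specific 2C algorithm is not reproduced here:

```python
import numpy as np

def ovr_predict(X, W, b):
    """One-vs-rest decision: score each class's hyperplane, pick the argmax."""
    scores = X @ W.T + b          # shape (n_samples, n_classes)
    return np.argmax(scores, axis=1)

# toy 2-D features with 3 hand-set class hyperplanes (illustrative only)
W = np.array([[ 1.0,  0.0],      # class 0: positive x side
              [-1.0,  0.0],      # class 1: negative x side
              [ 0.0,  1.0]])     # class 2: positive y side
b = np.zeros(3)
X = np.array([[2.0, 0.1], [-3.0, 0.0], [0.0, 4.0]])
preds = ovr_predict(X, W, b)
```

In practice each row of `W` would come from a binary SVM trained to separate one class from the rest; only the argmax combination step is shown.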
Affiliation(s)
- Mohammed Abdul Wajeed
- Department of Computer Science and Engineering, Swami Vivekananda Institute of Technology, Secunderabad, Telangana, India
- Shivam Tiwari
- Department of Computer Science and Engineering, G L Bajaj Institute of Technology and Management, Greater Noida, Uttar Pradesh, India
- Rajat Gupta
- Engineering and Technology, Career Point University, Kota, Rajasthan, India
- Aamir Junaid Ahmad
- Department of Computer Science and Engineering, Maulana Azad College of Engineering and Technology, Patna, India
- Seema Agarwal
- SRM Institute of Science and Technology, Delhi-NCR Campus, Ghaziabad, India
- Sajjad Shaukat Jamal
- Department of Mathematics, College of Science, King Khalid University, Abha, Saudi Arabia
- Simon Karanja Hinga
- Department of Electrical and Electronic Engineering, Technical University of Mombasa, Mombasa, Kenya
6
G S, Appadurai JP, Kavin BP, C K, Lai WC. En-DeNet Based Segmentation and Gradational Modular Network Classification for Liver Cancer Diagnosis. Biomedicines 2023;11:1309. PMID: 37238979; DOI: 10.3390/biomedicines11051309.
Abstract
Liver cancer ranks as the sixth most prevalent cancer globally. Computed tomography (CT) is a non-invasive analytic imaging modality that provides greater insight into human structures than the traditional X-rays typically used for diagnosis. The final product of a CT scan is usually a three-dimensional image constructed from a series of interlaced two-dimensional slices, and not all slices deliver useful information for tumor detection. Recently, deep learning techniques have been used to segment the liver and its tumors in CT images. The primary goal of this study is to develop a deep learning-based system for automatically segmenting the liver and its tumors from CT images, reducing the time and labor required to diagnose liver cancer. At its core, the Encoder-Decoder Network (En-DeNet) uses a deep neural network built on UNet as the encoder and a pre-trained EfficientNet as the decoder. To improve liver segmentation, specialized preprocessing techniques were developed, including the production of multichannel images, de-noising, contrast enhancement, ensembling, and taking the union of model predictions. The study then proposes the Gradational Modular Network (GraMNet), a unique and efficient deep learning technique in which smaller networks, called SubNets, are used to construct larger and more robust networks in a variety of alternative configurations. Only one new SubNet module is updated for learning at each level, which helps optimize the network and minimizes the computational resources needed for training. Segmentation and classification performance is compared against the Liver Tumor Segmentation Benchmark (LiTS) and the 3D Image Reconstruction for Comparison of Algorithms Database (3DIRCADb01). By breaking down the components of deep learning in this way, state-of-the-art performance can be attained in the evaluated scenarios. Compared to more conventional deep learning architectures, the GraMNets generated here have low computational complexity: the straightforward GraMNet trains faster, consumes less memory, and processes images more rapidly than the benchmark methods.
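The "union of model predictions" step in the preprocessing pipeline above can be read as a pixel-wise logical OR over binary segmentation masks. A minimal NumPy sketch with hypothetical masks from two models:

```python
import numpy as np

def union_ensemble(masks):
    """Combine binary segmentation masks from several models by pixel-wise OR."""
    stacked = np.stack(masks, axis=0).astype(bool)
    return np.any(stacked, axis=0).astype(np.uint8)

# two toy 3x3 masks from two hypothetical models
m1 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0]])
m2 = np.array([[0, 0, 0], [0, 1, 1], [0, 0, 0]])
combined = union_ensemble([m1, m2])
```

A union favors recall (any model's detection survives) at the cost of precision; a majority vote or intersection would trade off the other way.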
Affiliation(s)
- Suganeshwari G
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, Tamil Nadu, India
- Jothi Prabha Appadurai
- Computer Science and Engineering Department, Kakatiya Institute of Technology and Science, Warangal 506015, Telangana, India
- Balasubramanian Prabhu Kavin
- Department of Data Science and Business Systems, College of Engineering and Technology, SRM Institute of Science and Technology, SRM Nagar, Chengalpattu District, Kattankulathur 603203, Tamil Nadu, India
- Kavitha C
- Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Jeppiaar Nagar, Rajiv Gandhi Salai, Chennai 600119, Tamil Nadu, India
- Wen-Cheng Lai
- Bachelor Program in Industrial Projects, National Yunlin University of Science and Technology, Douliu 640301, Taiwan
- Department of Electronic Engineering, National Yunlin University of Science and Technology, Douliu 640301, Taiwan
7
Lakshmi KL, Muthulakshmi P, Nithya AA, Jeyavathana RB, Usharani R, Das NS, Devi GNR. Recognition of emotions in speech using deep CNN and ResNet. Soft Comput 2023. DOI: 10.1007/s00500-023-07969-5.
8
Region-Based Segmentation and Classification for Ovarian Cancer Detection Using Convolution Neural Network. Contrast Media Mol Imaging 2022;2022:5968939. PMID: 36475297; PMCID: PMC9701126; DOI: 10.1155/2022/5968939.
Abstract
Ovarian cancer is a serious disease of elderly women. According to the data, it is the seventh leading cause of death in women and the fifth most frequent disease worldwide. Many researchers have classified ovarian cancer using artificial neural networks (ANNs). Doctors consider classification accuracy an important aspect of decision making, since improved accuracy supports proper treatment, and early, precise diagnosis lowers mortality rates and saves lives. Based on region-of-interest (ROI) segmentation, this research presents a novel annotated ovarian image classification method using FaRe-ConvNN (a rapid region-based convolutional neural network). The input images were divided into three categories (epithelial, germ, and stroma cells), then segmented and preprocessed, after which FaRe-ConvNN performs the annotation procedure. For region-based classification, the method compares manually annotated features with features trained in FaRe-ConvNN. Because human annotation achieved lower accuracy in previous studies, this work aims to show empirically that ML classification provides higher accuracy in disease identification. After region-based training in FaRe-ConvNN, classification is performed with a combination of SVC and Gaussian NB classifiers; this ensemble technique was employed due to better data indexing. To diagnose ovarian cancer, the simulation accurately identifies the relevant portion of the input image. FaRe-ConvNN achieves a precision of more than 95%; SVC reaches a precision of 95.96% and Gaussian NB 97.7%, with FR-CNN enhancing the precision of Gaussian NB. For recall/sensitivity, SVC reaches 94.31% and Gaussian NB 97.7%, while for specificity, SVC reaches 97.39% and Gaussian NB 98.69% using FaRe-ConvNN.
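The precision, recall/sensitivity, and specificity figures quoted above all derive from a binary confusion matrix. As a brief reminder of the formulas, computed on toy counts (not the paper's data):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Precision, recall (sensitivity), and specificity from confusion-matrix counts."""
    precision = tp / (tp + fp)     # of predicted positives, how many were right
    recall = tp / (tp + fn)        # of actual positives, how many were found
    specificity = tn / (tn + fp)   # of actual negatives, how many were kept negative
    return precision, recall, specificity

# toy counts for illustration only
p, r, s = confusion_metrics(tp=90, fp=10, tn=80, fn=20)
```

Reporting all three together, as the abstract does, matters because a classifier can trade one off against the others by shifting its decision threshold.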
9
Analysis of Smart Lung Tumour Detector and Stage Classifier Using Deep Learning Techniques with Internet of Things. Comput Intell Neurosci 2022;2022:4608145. PMID: 36148416; PMCID: PMC9489382; DOI: 10.1155/2022/4608145.
Abstract
The use of artificial intelligence (AI) and the Internet of Things (IoT), a developing technology in medical applications that assists physicians in making more informed treatment decisions, has become increasingly widespread in healthcare in recent years. At the same time, the number of PET scans being performed is rising, and radiologists are becoming significantly overworked. As a direct result, computer-aided diagnosis is being investigated as a way to reduce these tremendous workloads. This study presents the Smart Lung Tumour Detector and Stage Classifier (SLD-SC), a hybrid technique for PET scans that identifies the stage of a lung tumour. After developing a modified LSTM for lung tumour detection, the proposed SLD-SC adds a Multilayer Convolutional Neural Network (M-CNN) for classifying the stages of lung cancer, which was modelled and validated on standard benchmark images. Evaluated on lung cancer images from patients with the disease, the recommended method gave good results compared with other approaches in the literature, performing outstandingly on the assessed metrics of accuracy, recall, and precision. The proposed method outperformed its rivals on each of the test images used and achieves an average accuracy of 97 percent in lung tumour classification, much higher than the accuracy achieved by the other approaches.
10
Rahman A, Hossain MS, Muhammad G, Kundu D, Debnath T, Rahman M, Khan MSI, Tiwari P, Band SS. Federated learning-based AI approaches in smart healthcare: concepts, taxonomies, challenges and open issues. Cluster Comput 2022;26:1-41. PMID: 35996680; PMCID: PMC9385101; DOI: 10.1007/s10586-022-03658-4.
Abstract
Federated learning (FL), artificial intelligence (AI), and explainable artificial intelligence (XAI) are among the most trending and exciting technologies in intelligent healthcare. Traditionally, the healthcare system has worked through centralized agents sharing their raw data, so huge vulnerabilities and challenges still exist in this system. Integrated with AI, the system instead becomes a set of collaborating agents capable of communicating efficiently with their desired hosts. FL adds another interesting feature: it works in a decentralized manner, maintaining communication through a model in the preferred system without transferring the raw data. The combination of FL, AI, and XAI techniques can minimize several limitations and challenges in the healthcare system. This paper presents a complete analysis of FL using AI for smart healthcare applications. It first discusses contemporary concepts of the emerging technologies, namely FL, AI, XAI, and the healthcare system, and then integrates and classifies FL-AI with healthcare technologies across different domains. It further addresses existing problems in the healthcare field, including security, privacy, stability, and reliability, and guides readers toward solution strategies for healthcare using FL and AI. Finally, it lays out extensive research areas and future potential prospects for FL-based AI research in the healthcare management system.
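The decentralized training FL relies on is commonly coordinated by federated averaging (FedAvg): clients train locally on their own data and a server combines the resulting model parameters, weighted by each client's sample count, so raw data never leaves the client. A minimal NumPy sketch of just the aggregation step; the parameter vectors and sample counts are toy values, not from the paper:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Average client model parameters, weighted by each client's sample count."""
    total = sum(client_sizes)
    avg = np.zeros_like(client_weights[0], dtype=float)
    for w, n in zip(client_weights, client_sizes):
        avg += (n / total) * w       # larger clients contribute proportionally more
    return avg

# two toy clients with the same model shape but different local data sizes
w_a = np.array([1.0, 2.0])   # client A parameters (100 local samples)
w_b = np.array([3.0, 4.0])   # client B parameters (300 local samples)
global_w = fedavg([w_a, w_b], [100, 300])
```

The size weighting is the design choice that keeps the global model faithful to the overall data distribution rather than to whichever client happens to train most.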
Affiliation(s)
- Anichur Rahman
- Department of Computer Science and Engineering, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka, Bangladesh
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Md. Sazzad Hossain
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Ghulam Muhammad
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Dipanjali Kundu
- Department of Computer Science and Engineering, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka, Bangladesh
- Tanoy Debnath
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Muaz Rahman
- Department of Computer Science and Engineering, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka, Bangladesh
- Md. Saikat Islam Khan
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Prayag Tiwari
- Department of Computer Science, Aalto University, Espoo, Finland
- Shahab S. Band
- Future Technology Research Center, National Yunlin University of Science and Technology, 123 University Road, Section 3, Douliou, Yunlin 64002, Taiwan
11
Othman E, Mahmoud M, Dhahri H, Abdulkader H, Mahmood A, Ibrahim M. Automatic Detection of Liver Cancer Using Hybrid Pre-Trained Models. Sensors (Basel) 2022;22:5429. PMID: 35891111; PMCID: PMC9322134; DOI: 10.3390/s22145429.
Abstract
Liver cancer is a life-threatening illness and one of the fastest-growing cancer types in the world, so its early detection leads to lower mortality rates. This work aims to build a model that helps clinicians determine the type of tumor occurring within the liver region by analyzing images of tissue taken from a biopsy of that tumor. This stage normally demands the effort, time, and accumulated experience of a tissue expert to determine whether the tumor is malignant and needs treatment; a histology expert can instead use the model to obtain an initial diagnosis. The study proposes a deep learning model based on convolutional neural networks (CNNs) that transfers knowledge from pre-trained global models and distills it into a single model to help diagnose liver tumors from CT scans, yielding a hybrid model capable of detecting CT images of a liver tumor biopsy. The best results obtained in this research reached an accuracy of 0.995, a precision of 0.864, and a recall of 0.979, higher than those obtained with other models. It is worth noting that the model was tested on a limited set of data and still gave good detection results. The model can support the decisions of specialists in this field and save their effort, and it also saves the effort and time incurred in treating this type of cancer, especially during yearly periodic examination campaigns.
Affiliation(s)
- Esam Othman
- Faculty of Applied Computer Science, King Saud University, Riyadh 11451, Saudi Arabia
- Muhammad Mahmoud
- Department of Information Systems, Madina Higher Institute of Management and Technology, Shabramant 12947, Egypt
- Habib Dhahri
- Faculty of Applied Computer Science, King Saud University, Riyadh 11451, Saudi Arabia
- Hatem Abdulkader
- Department of Information Systems, Faculty of Computers and Information, Menoufia University, Shebin El-kom 32511, Menoufia, Egypt
- Awais Mahmood
- Faculty of Applied Computer Science, King Saud University, Riyadh 11451, Saudi Arabia
- Mina Ibrahim
- Department of Information Technology, Faculty of Computers and Information, Menoufia University, Shebin El-kom 32511, Menoufia, Egypt
12
Instinctive Recognition of Pathogens in Rice Using Reformed Fractional Differential Segmentation and Innovative Fuzzy Logic-Based Probabilistic Neural Network. J FOOD QUALITY 2022. [DOI: 10.1155/2022/8662254] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Rice is an essential primary food crop and plays a significant part in national economies. It is the most widely eaten staple food and is in great demand as the world's population continues to expand, so rice output should be boosted to fulfil that demand. Plant diseases, however, diminish crop yields, and removing them is necessary to raise the production of agricultural fields. This study presents methods for recognising three types of rice plant disease, as well as a healthy leaf, comprising image capture, image preprocessing, segmentation, feature extraction, and classification. Following K-means segmentation, features are extracted using three criteria: colour, shape, and texture. A novel intensity-based technique is proposed to retrieve colour features from the infected section, while its shape parameters, such as area and diameter, and its texture characteristics are extracted using a grey-level co-occurrence matrix. The proposed fuzzy logic-based probabilistic neural network surpassed all three previous techniques on a range of performance metrics, obtaining greater accuracy. Finally, the result is validated using fivefold cross-validation, with the final accuracy for bacterial leaf blight, brown spot, healthy leaf, and rice blast being 95.20, 97.60, 99.20, and 98.40 percent, respectively, and 95.40 percent for the disease brown spot.
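Of the three feature families above, the texture features come from a grey-level co-occurrence matrix (GLCM). A minimal numpy sketch of a single-offset GLCM and three common statistics derived from it; the specific offsets and statistics the paper uses are not stated, so these are illustrative choices:

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Normalised grey-level co-occurrence matrix for one pixel offset (dx, dy)."""
    m = np.zeros((levels, levels))
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            # Count how often grey level a occurs offset (dx, dy) from grey level b
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(m):
    """Contrast, energy, and homogeneity of a normalised GLCM."""
    i, j = np.indices(m.shape)
    contrast = float(((i - j) ** 2 * m).sum())
    energy = float((m ** 2).sum())
    homogeneity = float((m / (1.0 + np.abs(i - j))).sum())
    return contrast, energy, homogeneity

# A perfectly uniform patch has zero contrast and maximal energy/homogeneity
patch = np.zeros((4, 4), dtype=int)
print(glcm_features(glcm(patch, levels=4)))  # (0.0, 1.0, 1.0)
```

In practice, several offsets (e.g. 0°, 45°, 90°, 135°) are computed and their statistics averaged to make the texture description rotation-tolerant.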
13
The Application of the Unsupervised Migration Method Based on Deep Learning Model in the Marketing Oriented Allocation of High Level Accounting Talents. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:5653942. [PMID: 35707184 PMCID: PMC9192229 DOI: 10.1155/2022/5653942] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Revised: 02/18/2022] [Accepted: 02/22/2022] [Indexed: 12/02/2022]
Abstract
Deep learning is a branch of machine learning that uses neural networks to mimic the behaviour of the human brain. Deep learning technology uses various types of models; this article examines two important ones, the supervised and the unsupervised model, and concentrates in particular on the unsupervised learning methodology. The main difference between the two is how they are trained. Supervised models are trained on a dataset together with its known outcomes, using regression to predict continuous quantities and classification to predict discrete class labels. Unsupervised models receive only input data, with no set outcome to learn from and no predicting/forecasting column; they use clustering to group similar items and association learning to find associations between items. Unsupervised migration combines the unsupervised learning method with migration, an effective tool for processing and imaging data. Unsupervised learning allows a model to work independently on unlabeled data to discover previously undetected patterns and information, and it can achieve more complex processing tasks than supervised learning, although its behaviour is less predictable. Popular unsupervised learning algorithms include k-means clustering, hierarchical clustering, the Apriori algorithm, anomaly detection, association mining, and neural networks. In this research article, we apply this deep learning model to the market-oriented asset allocation of high-level accounting talents. When the proposed unsupervised migration algorithm was compared to the existing Fractional Hausdorff Grey Model, the proposed system provided 99.12% accuracy for high-level accounting talent candidates in market-oriented asset allocation.
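Of the algorithms listed, k-means clustering is the simplest to sketch: alternate between assigning each point to its nearest centroid and moving each centroid to the mean of its assigned points. A minimal numpy version, illustrative rather than the paper's implementation:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means on the rows of X; returns cluster labels and centroids."""
    rng = np.random.default_rng(seed)
    # Initialise centroids as k distinct data points
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Distance of every point to every centroid, then nearest-centroid labels
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of the points assigned to it
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated blobs end up in two different clusters
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels, _ = kmeans(X, k=2)
```

Because there are no labels to learn from, the only supervision is the choice of k, which is what makes the method unsupervised.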
14
An Innovative Machine Learning Approach for Classifying ECG Signals in Healthcare Devices. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:7194419. [PMID: 35463679 PMCID: PMC9020932 DOI: 10.1155/2022/7194419] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Revised: 02/20/2022] [Accepted: 02/23/2022] [Indexed: 12/24/2022]
Abstract
An ECG is a diagnostic technique that examines and records the heart's electrical impulses. Categorising the ECG signal with the conventional method of obtaining ECG features is a significant issue: it is a difficult and time-consuming chore for cardiologists and medical professionals. The proposed classifier eliminates these limitations, and machine learning in healthcare equipment reduces such errors. This study's primary purpose is to calculate the R-R interval and analyse blockages using simple algorithms and approaches that give high accuracy. The data may be reconstructed from the MIT-BIH dataset and may include both normal and abnormal ECGs. A Gabor filter is employed to generate a noiseless signal, and DCT-DOST is used to calculate the signal's amplitude, which is then examined to detect cardiac anomalies. A genetic algorithm derives the main features from the R peak and the cycle segment length underlying the ECG signal, so combining data with these specific qualities maximises identification; the genetic algorithm also supports multi-objective optimisation. Finally, a Radial Basis Function Neural Network (RBFNN), an efficient feedforward network that reduces the number of local minima, classifies the signal, showing progress in identifying both normal and abnormal ECGs.
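The central quantity above, the R-R interval, is simply the spacing between successive R peaks divided by the sampling rate. A naive threshold-based sketch on a synthetic trace (a real detector such as Pan-Tompkins is considerably more involved; the 360 Hz rate matches MIT-BIH recordings):

```python
import numpy as np

def r_peaks(signal, threshold):
    """Indices of local maxima exceeding a fixed threshold (naive R-peak detector)."""
    return np.array([i for i in range(1, len(signal) - 1)
                     if signal[i] > threshold
                     and signal[i] >= signal[i - 1]
                     and signal[i] > signal[i + 1]])

def rr_intervals(peaks, fs):
    """R-R intervals in seconds from peak sample indices at sampling rate fs (Hz)."""
    return np.diff(peaks) / fs

# Synthetic trace: one spike every 180 samples at fs = 360 Hz -> RR = 0.5 s (120 bpm)
fs = 360
sig = np.zeros(1000)
sig[90::180] = 1.0
rr = rr_intervals(r_peaks(sig, threshold=0.5), fs)
```

Irregularities in the resulting R-R series are exactly what downstream features (cycle segment length, rhythm statistics) are built on.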
15
Augmented Reality-Centered Position Navigation for Wearable Devices with Machine Learning Techniques. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:1083978. [PMID: 35432829 PMCID: PMC9010156 DOI: 10.1155/2022/1083978] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 02/17/2022] [Accepted: 02/22/2022] [Indexed: 11/25/2022]
Abstract
People have always relied on some form of instrument to help them reach their destination, from hand-drawn maps and compasses to technology-based navigation systems. Many individuals these days carry a smartphone at all times, making it a common part of their routine. Using GPS technology, these phones offer applications such as Google Maps that let people find their way around the outside world. Indoor navigation, on the other hand, does not offer the same level of precision, and the development of indoor navigation systems is still ongoing. Bluetooth, Wi-Fi, RFID, and computer vision are some of the technologies used for indoor navigation in current systems. In this article, we discuss the shortcomings of current indoor navigation solutions and offer an alternative approach based on augmented reality and ARCore, which brings augmented reality to a smartphone or tablet and makes navigating an indoor environment easier.
16
Aggarwal A, Srivastava A, Agarwal A, Chahal N, Singh D, Alnuaim AA, Alhadlaq A, Lee HN. Two-Way Feature Extraction for Speech Emotion Recognition Using Deep Learning. SENSORS 2022; 22:s22062378. [PMID: 35336548 PMCID: PMC8949356 DOI: 10.3390/s22062378] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Revised: 03/03/2022] [Accepted: 03/15/2022] [Indexed: 02/01/2023]
Abstract
Recognizing human emotions by machines is a complex task. Deep learning models attempt to automate this process by enabling machines to exhibit learning capabilities, yet identifying human emotions from speech with good performance is still challenging. Although deep learning algorithms have recently been applied to this problem, most past research focused on only one method of feature extraction for training. In this research, we explore two different methods of extracting features for effective speech emotion recognition. First, two-way feature extraction is proposed, utilizing super convergence to extract two sets of potential features from the speech data: principal component analysis (PCA) is applied to obtain the first feature set, which is then fed to a deep neural network (DNN) with dense and dropout layers. In the second approach, mel-spectrogram images are extracted from the audio files, and the 2D images are given as input to the pre-trained VGG-16 model. Extensive experiments and an in-depth comparative analysis of both feature extraction methods, across multiple algorithms and two datasets, are performed in this work. On the RAVDESS dataset, the spectrogram-based approach provided significantly better accuracy than using numeric features on a DNN.
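For the first feature set, PCA amounts to projecting centred data onto the directions of greatest variance, which can be read off an SVD. A minimal numpy sketch; the paper does not specify its PCA implementation, so dimensions and data here are illustrative:

```python
import numpy as np

def pca_project(X, n_components):
    """Project the rows of X onto the top n_components principal directions."""
    Xc = X - X.mean(axis=0)                # centre each feature column
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T        # scores in component space

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))             # 100 utterances, 20 raw numeric features
Z = pca_project(X, n_components=2)         # compact feature set for the DNN
```

Because the singular values come out in decreasing order, the first projected column always carries at least as much variance as the second, which is the defining property of a principal-component feature set.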
Affiliation(s)
- Apeksha Aggarwal
- Department of Computer Science Engineering & Information Technology, Jaypee Institute of Information Technology, A 10, Sector 62, Noida 201307, India;
- Akshat Srivastava
- School of Computer Science Engineering and Technology, Bennett University, Plot Nos 8-11, TechZone 2, Greater Noida 201310, India;
- Ajay Agarwal
- Department of Information Technology, KIET Group of Institutions, Delhi-NCR, Meerut Road (NH-58), Ghaziabad 201206, India;
- Nidhi Chahal
- NIIT Limited, Gurugram 110019, India;
- Dilbag Singh
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju 61005, Korea;
- Abeer Ali Alnuaim
- Department of Computer Science and Engineering, College of Applied Studies and Community Services, King Saud University, P.O. Box 22459, Riyadh 11495, Saudi Arabia; (A.A.A.); (A.A.)
- Aseel Alhadlaq
- Department of Computer Science and Engineering, College of Applied Studies and Community Services, King Saud University, P.O. Box 22459, Riyadh 11495, Saudi Arabia; (A.A.A.); (A.A.)
- Heung-No Lee
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju 61005, Korea;
- Correspondence: