1. Singh YP, Lobiyal D. Automatic prediction of epileptic seizure using hybrid deep ResNet-LSTM model. AI Commun 2023. DOI: 10.3233/aic-220177.
Abstract
Numerous advanced data processing and machine learning techniques for identifying epileptic seizures have been developed in the last two decades. Nonetheless, many of these solutions need massive data sets and intricate computations. Our approach transforms electroencephalogram (EEG) data into the time-frequency domain using a short-time Fourier transform (STFT) and feeds the resulting spectrogram (t-f) images into the input stage of the deep learning model. Using EEG data, we constructed a hybrid model comprising a deep convolutional network (ResNet50) and a Long Short-Term Memory (LSTM) network for predicting epileptic seizures. Spectrogram images are used to train the proposed hybrid model for feature extraction and classification. We analyzed the CHB-MIT scalp EEG dataset. Experiments are conducted for preictal periods of 5, 15, and 30 minutes to evaluate the performance of the proposed model. The experimental results indicate that the proposed model performed best with a 5-minute preictal duration, achieving an average accuracy of 94.5%, an average sensitivity of 93.7%, an F1-score of 0.9376, and an average false positive rate (FPR) of 0.055. Our proposed technique surpassed the random predictor and other current seizure-prediction algorithms for all patients' data in the dataset. The effectiveness of the proposed model can help in the early diagnosis of epilepsy and enable early treatment.
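A minimal sketch of the kind of pipeline this abstract describes (STFT spectrograms of EEG windows fed to a ResNet50 feature extractor followed by an LSTM), assuming TensorFlow/Keras and the 256 Hz CHB-MIT sampling rate; the number of frames per example, spectrogram parameters, and layer sizes are illustrative assumptions, not the authors' settings.

```python
# Sketch: EEG window -> STFT spectrogram images -> ResNet50 per frame -> LSTM.
import numpy as np
from scipy.signal import spectrogram
import tensorflow as tf
from tensorflow.keras import layers, models

FS = 256               # CHB-MIT sampling rate (Hz)
SEGMENTS = 8           # spectrogram frames per example fed to the LSTM (assumed)
IMG_SIZE = (224, 224)  # ResNet50 input size

def eeg_to_spectrogram_images(eeg_segments):
    """Convert a list of 1-D EEG segments to 3-channel spectrogram images."""
    images = []
    for seg in eeg_segments:
        _, _, Sxx = spectrogram(seg, fs=FS, nperseg=128, noverlap=64)
        Sxx = np.log1p(Sxx)                                  # compress dynamic range
        Sxx = (Sxx - Sxx.min()) / (np.ptp(Sxx) + 1e-8)       # scale to [0, 1]
        img = tf.image.resize(Sxx[..., None], IMG_SIZE)      # -> (224, 224, 1)
        images.append(tf.repeat(img, 3, axis=-1))            # replicate to 3 channels
    return tf.stack(images)                                  # (SEGMENTS, 224, 224, 3)

def build_resnet_lstm(num_classes=2):
    """ResNet50 as a per-frame feature extractor, LSTM over the frame sequence."""
    backbone = tf.keras.applications.ResNet50(include_top=False, weights=None,
                                              pooling="avg")
    inputs = layers.Input(shape=(SEGMENTS, *IMG_SIZE, 3))
    feats = layers.TimeDistributed(backbone)(inputs)         # (batch, SEGMENTS, 2048)
    x = layers.LSTM(128)(feats)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)

model = build_resnet_lstm()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```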
Affiliation(s)
- D.K. Lobiyal, School of Computer & Systems Sciences, Jawaharlal Nehru University, New Delhi 110067, India
2. von Atzingen GV, Arteaga H, da Silva AR, Ortega NF, Costa EJX, Silva ACDS. The convolutional neural network as a tool to classify electroencephalography data resulting from the consumption of juice sweetened with caloric or non-caloric sweeteners. Front Nutr 2022; 9:901333. PMID: 35928831; PMCID: PMC9343958; DOI: 10.3389/fnut.2022.901333.
Abstract
Sweetener type can influence sensory properties and consumers' acceptance of and preference for low-calorie products. An ideal sweetener does not exist, and each sweetener must be used in the situations to which it is best suited. Aspartame and sucralose can be good substitutes for sucrose in passion fruit juice. Despite the interest in artificial sweeteners, little is known about how they are processed in the human brain. Here, we applied a convolutional neural network (CNN) to evaluate the brain signals of 11 healthy subjects when they tasted passion fruit juice equivalently sweetened with sucrose (9.4 g/100 g), sucralose (0.01593 g/100 g), or aspartame (0.05477 g/100 g). Electroencephalograms were recorded at two sites over the gustatory cortex (i.e., C3 and C4). Data with artifacts were disregarded, and the artifact-free data were used to feed a deep neural network with three branches that applied convolutions and pooling for different kinds of feature filtering and selection. The CNN received the raw signal as input for multiclass classification and, with supervised training, was able to extract underlying features and patterns from the signal with better performance than handcrafted filters such as the FFT. Our results indicate that the CNN is a useful tool for electroencephalography (EEG) analysis and the classification of perceptually similar tastes.
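A minimal sketch of a three-branch 1-D CNN over raw two-channel EEG (C3, C4) in the spirit of the classifier described above, written in TensorFlow/Keras; the segment length, kernel sizes, and filter counts are illustrative assumptions rather than the authors' architecture.

```python
# Sketch: raw two-channel EEG segment -> three conv/pool branches -> softmax.
import tensorflow as tf
from tensorflow.keras import layers, models

SAMPLES = 1024   # samples per EEG segment (assumed)
CHANNELS = 2     # C3 and C4
NUM_CLASSES = 3  # sucrose, sucralose, aspartame

def conv_branch(x, kernel_size):
    """One branch: two conv/pool stages with a branch-specific kernel size."""
    x = layers.Conv1D(32, kernel_size, padding="same", activation="relu")(x)
    x = layers.MaxPooling1D(4)(x)
    x = layers.Conv1D(64, kernel_size, padding="same", activation="relu")(x)
    return layers.GlobalAveragePooling1D()(x)

inputs = layers.Input(shape=(SAMPLES, CHANNELS))
branches = [conv_branch(inputs, k) for k in (3, 7, 15)]   # three receptive fields
x = layers.Concatenate()(branches)
x = layers.Dense(64, activation="relu")(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```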
Affiliation(s)
- Hubert Arteaga, Escuela Ingeniería de Industrias Alimentarias, Universidad Nacional de Jaén, Jaén, Peru
- Nathalia Fontanari Ortega, Departamento de Ciências Básicas, Faculdade de Zootecnia e Engenharia de Alimentos, Universidade de São Paulo, São Paulo, Brazil
- Ernane Jose Xavier Costa, Departamento de Ciências Básicas, Faculdade de Zootecnia e Engenharia de Alimentos, Universidade de São Paulo, São Paulo, Brazil
- Ana Carolina de Sousa Silva, Departamento de Ciências Básicas, Faculdade de Zootecnia e Engenharia de Alimentos, Universidade de São Paulo, São Paulo, Brazil
3. Advanced Fusion-Based Speech Emotion Recognition System Using a Dual-Attention Mechanism with Conv-Caps and Bi-GRU Features. Electronics 2022. DOI: 10.3390/electronics11091328.
Abstract
Recognizing the speaker's emotional state from speech signals plays a very crucial role in human–computer interaction (HCI). Nowadays, numerous linguistic resources are available, but most of them contain samples of a discrete length. In this article, we address the leading challenge in Speech Emotion Recognition (SER): how to extract the essential emotional features from utterances of variable length. To obtain better emotional information from the speech signals and increase the diversity of the information, we present an advanced fusion-based dual-channel self-attention mechanism using convolutional capsule (Conv-Cap) and bi-directional gated recurrent unit (Bi-GRU) networks. We extracted six spectral features (Mel-spectrograms, Mel-frequency cepstral coefficients, chromagrams, spectral contrast, the zero-crossing rate, and the root mean square). The Conv-Cap module processed the Mel-spectrograms, while the Bi-GRU processed the remaining spectral features from the input tensor. A self-attention layer was employed in each module to selectively focus on optimal cues and determine the attention weights that yield high-level features. Finally, we utilized a confidence-based fusion method to fuse all high-level features and passed them through fully connected layers to classify the emotional states. The proposed model was evaluated on the Berlin (EMO-DB), Interactive Emotional Dyadic Motion Capture (IEMOCAP), and Odia (SITB-OSED) datasets. In our experiments, the proposed model achieved weighted accuracy (WA) and unweighted accuracy (UA) values of 90.31% and 87.61%, 76.84% and 70.34%, and 87.52% and 86.19%, respectively, outperforming state-of-the-art models on the same datasets.
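A minimal sketch of the Bi-GRU channel with a simple additive attention pooling layer, written in TensorFlow/Keras; the Conv-Cap channel and the confidence-based fusion are omitted here, and the utterance length, feature dimension, and number of emotion classes are illustrative assumptions.

```python
# Sketch: padded spectral-feature sequence -> Bi-GRU -> attention pooling -> softmax.
import tensorflow as tf
from tensorflow.keras import layers, models

MAX_FRAMES = 300    # utterances assumed zero-padded to this many frames
NUM_FEATS = 40      # spectral features per frame (assumed)
NUM_EMOTIONS = 4

class AttentionPooling(layers.Layer):
    """Additive attention: score each frame, softmax over time, weighted sum."""
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.score = layers.Dense(1)

    def call(self, inputs):
        weights = tf.nn.softmax(self.score(inputs), axis=1)   # (batch, T, 1)
        return tf.reduce_sum(weights * inputs, axis=1)        # (batch, features)

inputs = layers.Input(shape=(MAX_FRAMES, NUM_FEATS))
x = layers.Bidirectional(layers.GRU(128, return_sequences=True))(inputs)
x = AttentionPooling()(x)
x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(NUM_EMOTIONS, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```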
4. Sathies Kumar T, Arun C, Ezhumalai P. An approach for brain tumor detection using optimal feature selection and optimized deep belief network. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103440.
5. Brain Tumor Segmentation of MRI Images Using Processed Image Driven U-Net Architecture. Computers 2021. DOI: 10.3390/computers10110139.
Abstract
Brain tumor segmentation seeks to separate healthy tissue from tumorous regions. This is an essential step in diagnosis and treatment planning that maximizes the likelihood of successful treatment. Magnetic resonance imaging (MRI) provides detailed information about brain tumor anatomy, making it an important tool for effective diagnosis and a candidate to replace manual detection, in which patients rely on the skills and expertise of a human reader. To address this problem, a brain tumor segmentation and detection system is proposed and evaluated on the BraTS 2018 dataset. This dataset contains four MRI modalities for each patient (T1, T2, T1Gd, and FLAIR), together with a ground-truth tumor segmentation (class labels). A fully automatic methodology for segmenting gliomas in pre-operative MRI scans is developed using a U-Net-based deep learning model. The input image data are first transformed and processed through several techniques (subset division, narrow object region extraction, category brain slicing, the watershed algorithm, and feature scaling) before being fed into the U-Net model, which performs pixel-level segmentation of the tumor region. The algorithm reached high accuracy on the BraTS 2018 training, validation, and testing sets, achieving Dice coefficients of 0.9815, 0.9844, 0.9804, and 0.9954 on the testing sets HGG-1, HGG-2, HGG-3, and LGG-1, respectively.
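A minimal 2-D U-Net sketch with a Dice-coefficient metric in TensorFlow/Keras, assuming single-modality 240x240 slices; the depth, filter counts, and the paper's preprocessing steps (subset division, watershed, and so on) are not reproduced, so this illustrates the general architecture rather than the authors' exact model.

```python
# Sketch: small encoder-decoder U-Net with skip connections and a Dice metric.
import tensorflow as tf
from tensorflow.keras import layers, models

def dice_coefficient(y_true, y_pred, smooth=1e-6):
    """Dice = 2 * intersection / (|true| + |pred|), on flattened probability maps."""
    y_true_f = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred_f = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth)

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(240, 240, 1)):
    inputs = layers.Input(shape=input_shape)
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, 128)                                   # bottleneck
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)       # skip connection
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)       # skip connection
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)   # tumor mask
    return models.Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[dice_coefficient])
```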
6. AbuRahma AF, Avgerinos ED, Chang RW, Darling RC, Duncan AA, Forbes TL, Malas MB, Perler BA, Powell RJ, Rockman CB, Zhou W. The Society for Vascular Surgery implementation document for management of extracranial cerebrovascular disease. J Vasc Surg 2021; 75:26S-98S. PMID: 34153349; DOI: 10.1016/j.jvs.2021.04.074.
Affiliation(s)
- Ali F AbuRahma, Department of Surgery, West Virginia University-Charleston Division, Charleston, WV
- Efthymios D Avgerinos, Division of Vascular Surgery, University of Pittsburgh School of Medicine, UPMC Heart & Vascular Institute, Pittsburgh, Pa
- Robert W Chang, Vascular Surgery, Permanente Medical Group, San Francisco, Calif
- Audra A Duncan, Division of Vascular & Endovascular Surgery, University of Western Ontario, London, Ontario, Canada
- Thomas L Forbes, Division of Vascular & Endovascular Surgery, University of Western Ontario, London, Ontario, Canada
- Mahmoud B Malas, Vascular & Endovascular Surgery, University of California San Diego, La Jolla, Calif
- Bruce Alan Perler, Division of Vascular Surgery & Endovascular Therapy, Johns Hopkins, Baltimore, Md
- Caron B Rockman, Division of Vascular Surgery, New York University Langone, New York, NY
- Wei Zhou, Division of Vascular Surgery, University of Arizona, Tucson, Ariz
7. Machine-Learning-Based Elderly Stroke Monitoring System Using Electroencephalography Vital Signals. Appl Sci (Basel) 2021. DOI: 10.3390/app11041761.
Abstract
Stroke is the third highest cause of death worldwide after cancer and heart disease, and the number of stroke cases due to aging is expected to at least triple by 2030. As the top three causes of death worldwide are all related to chronic disease, the importance of healthcare is increasing even more. Models that can predict real-time health conditions and diseases using various healthcare services are attracting increasing attention. Most diagnosis and prediction methods of stroke for the elderly involve imaging techniques such as magnetic resonance imaging (MRI). It is difficult to rapidly and accurately diagnose and predict stroke due to the long testing times and high costs associated with MRI. Thus, in this paper, we design and implement a health monitoring system that can predict the precursors of stroke in the elderly in real time during daily walking. First, raw electroencephalography (EEG) data from six channels were preprocessed via the Fast Fourier Transform (FFT). EEG power values were then extracted from the spectra for the alpha (α), beta (β), gamma (γ), delta (δ), and theta (θ) bands, as well as the low β, high β, and θ-to-β ratio. The experiments in this paper confirm that these EEG features alone, recorded during walking, can determine stroke precursors and occurrence in the elderly with more than 90% accuracy. Further, the Random Forest algorithm with quartiles and Z-score normalization validates the clinical significance and performance of the proposed system, with a 92.51% stroke prediction accuracy. The proposed system can be implemented at a low cost, and it can be applied for early disease detection and prediction using the precursor symptoms of real-time stroke. Furthermore, it is expected to be able to detect other diseases, such as cancer and heart disease, in the future.
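A minimal sketch of the band-power feature extraction and Random Forest classification described above, assuming six-channel EEG windows sampled at 256 Hz; the sampling rate, window length, band edges, normalization choice, and forest size are illustrative assumptions, not the authors' configuration.

```python
# Sketch: per-channel PSD band powers (plus theta/beta ratio) -> Random Forest.
import numpy as np
from scipy.signal import welch
from scipy.integrate import simpson
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FS = 256  # assumed sampling rate (Hz)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "low_beta": (13, 20), "high_beta": (20, 30), "gamma": (30, 45)}

def band_power_features(window):
    """window: (n_channels, n_samples) -> flat vector of band powers + theta/beta ratio."""
    feats = []
    for ch in window:
        freqs, psd = welch(ch, fs=FS, nperseg=FS * 2)
        powers = {}
        for name, (lo, hi) in BANDS.items():
            idx = (freqs >= lo) & (freqs < hi)
            powers[name] = simpson(psd[idx], x=freqs[idx])   # area under the PSD
        beta = powers["low_beta"] + powers["high_beta"]
        feats.extend(list(powers.values()) + [powers["theta"] / (beta + 1e-12)])
    return np.array(feats)

# X_windows: array of shape (n_windows, 6, n_samples); y: labels (assumed given)
# X = np.stack([band_power_features(w) for w in X_windows])
# clf = make_pipeline(StandardScaler(), RandomForestClassifier(n_estimators=200))
# clf.fit(X, y)
```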
8. Gong S, Xing K, Cichocki A, Li J. Deep Learning in EEG: Advance of the Last Ten-Year Critical Period. IEEE Trans Cogn Dev Syst 2021. DOI: 10.1109/tcds.2021.3079712.
9. Jheng YC, Chou YB, Kao CL, Yarmishyn AA, Hsu CC, Lin TC, Chen PY, Kao ZK, Chen SJ, Hwang DK. A novelty route for smartphone-based artificial intelligence approach to ophthalmic screening. J Chin Med Assoc 2020; 83:898-899. PMID: 32520771; PMCID: PMC7526562; DOI: 10.1097/jcma.0000000000000369.
Abstract
Artificial intelligence (AI) has been widely applied in the medical field and has achieved major milestones in helping specialists make diagnostic and treatment decisions, particularly for eye diseases and ophthalmic screening. AI-based systems require substantial hardware and software resources for optimal performance, and in many places such resources are highly limited. Hence, smartphone-based AI systems can provide a remote route to quickly screen for eye diseases such as diabetic retinopathy or diabetic macular edema. However, the performance of such mobile AI systems is still uncharted territory. In this article, we discuss the computing-resource consumption and performance of mobile device-based AI systems and highlight recent research on the feasibility and future potential of applying them in telemedicine.
Affiliation(s)
- Ying-Chun Jheng, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan, ROC; Big Data Center, Taipei Veterans General Hospital, Taipei, Taiwan, ROC; Department of Physical Medicine & Rehabilitation, Taipei Veterans General Hospital, Taipei, Taiwan, ROC; Department of Physical Medicine & Rehabilitation, School of Medicine, National Yang-Ming University, Taipei, Taiwan, ROC
- Yu-Bai Chou, Department of Ophthalmology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC; Faculty of Medicine, National Yang-Ming University, Taipei, Taiwan, ROC
- Chung-Lan Kao, Department of Physical Medicine & Rehabilitation, Taipei Veterans General Hospital, Taipei, Taiwan, ROC; Department of Physical Medicine & Rehabilitation, School of Medicine, National Yang-Ming University, Taipei, Taiwan, ROC
- Chih-Chien Hsu, Department of Ophthalmology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC; Faculty of Medicine, National Yang-Ming University, Taipei, Taiwan, ROC
- Tai-Chi Lin, Department of Ophthalmology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC; Faculty of Medicine, National Yang-Ming University, Taipei, Taiwan, ROC
- Po-Yin Chen, Department of Physical Medicine & Rehabilitation, Taipei Veterans General Hospital, Taipei, Taiwan, ROC; Department of Physical Medicine & Rehabilitation, School of Medicine, National Yang-Ming University, Taipei, Taiwan, ROC
- Zih-Kai Kao, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- Shih-Jen Chen, Department of Ophthalmology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC; Faculty of Medicine, National Yang-Ming University, Taipei, Taiwan, ROC
- De-Kuang Hwang, Department of Ophthalmology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC; Faculty of Medicine, National Yang-Ming University, Taipei, Taiwan, ROC. Address correspondence: Dr. De-Kuang Hwang, Department of Ophthalmology, Taipei Veterans General Hospital, 201, Section 2, Shi-Pai Road, Taipei 112, Taiwan, ROC. E-mail address: (D.-K. Hwang)
10. Çınar A, Tuncer SA. Classification of normal sinus rhythm, abnormal arrhythmia and congestive heart failure ECG signals using LSTM and hybrid CNN-SVM deep neural networks. Comput Methods Biomech Biomed Engin 2020; 24:203-214. PMID: 32955928; DOI: 10.1080/10255842.2020.1821192.
Abstract
Effective monitoring of heart patients based on heart signals can save many lives. In the last decade, the classification and prediction of heart diseases from ECG signals has gained great importance for patients and doctors. In this paper, deep learning architectures, which have achieved high accuracy and popularity in recent years, are proposed for the classification of Normal Sinus Rhythm (NSR), Abnormal Arrhythmia (ARR), and Congestive Heart Failure (CHF) ECG signals. The proposed architecture is based on a hybrid AlexNet-SVM (Support Vector Machine). There are 96 ARR, 30 CHF, and 36 NSR signals, for a total of 162 ECG signals. To demonstrate the classification performance of deep learning architectures, the ARR, CHF, and NSR signals are first classified with the SVM and KNN algorithms, achieving 68.75% and 65.63% accuracy. The signals are then classified in their raw form with an LSTM (Long Short-Term Memory) network with 90.67% accuracy. Finally, spectrograms of the signals are obtained, the hybrid AlexNet-SVM algorithm is applied to these images, and 96.77% accuracy is obtained. The results show that the proposed deep learning architecture classifies ECG signals with higher accuracy than conventional machine learning classifiers.
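A minimal sketch of the "CNN features on spectrograms + SVM" pattern described above. Keras does not bundle AlexNet, so an ImageNet-pretrained MobileNetV2 stands in as the feature extractor here; the sampling rate, spectrogram parameters, and SVM hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Sketch: ECG record -> log-spectrogram image -> pretrained CNN features -> SVM.
import numpy as np
from scipy.signal import spectrogram
import tensorflow as tf
from sklearn.svm import SVC

FS = 128              # assumed ECG sampling rate (Hz)
IMG_SIZE = (224, 224)

backbone = tf.keras.applications.MobileNetV2(include_top=False,
                                             weights="imagenet", pooling="avg")

def ecg_to_feature_vector(signal):
    """1-D ECG signal -> log-spectrogram image -> deep feature vector."""
    _, _, Sxx = spectrogram(signal, fs=FS, nperseg=64, noverlap=32)
    img = np.log1p(Sxx)
    img = (img - img.min()) / (np.ptp(img) + 1e-8)            # scale to [0, 1]
    img = tf.image.resize(img[..., None], IMG_SIZE)
    img = tf.repeat(img, 3, axis=-1)[None]                    # (1, 224, 224, 3)
    img = tf.keras.applications.mobilenet_v2.preprocess_input(img * 255.0)
    return backbone(img).numpy().ravel()

# X_sig: list of 1-D ECG records; y: labels in {NSR, ARR, CHF} (assumed given)
# X = np.stack([ecg_to_feature_vector(s) for s in X_sig])
# svm = SVC(kernel="rbf", C=10).fit(X, y)
```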
Affiliation(s)
- Ahmet Çınar, Faculty of Engineering, Computer Engineering, Fırat University, Elazığ, Turkey
- Seda Arslan Tuncer, Faculty of Engineering, Software Engineering, Fırat University, Elazığ, Turkey
11. Detection of tumors on brain MRI images using the hybrid convolutional neural network architecture. Med Hypotheses 2020; 139:109684. DOI: 10.1016/j.mehy.2020.109684.
12. Rim B, Sung NJ, Min S, Hong M. Deep Learning in Physiological Signal Data: A Survey. Sensors (Basel) 2020; 20:E969. PMID: 32054042; PMCID: PMC7071412; DOI: 10.3390/s20040969.
Abstract
Deep Learning (DL), a successful and promising approach for discriminative and generative tasks, has recently proved its high potential in 2D medical image analysis; however, physiological data in the form of 1D signals have yet to be fully exploited by this approach for the desired medical tasks. Therefore, in this paper we survey the latest scientific research on deep learning applied to physiological signal data such as the electromyogram (EMG), electrocardiogram (ECG), electroencephalogram (EEG), and electrooculogram (EOG). We found 147 papers published between January 2018 and October 2019, inclusive, from various journals and publishers. The objective of this paper is to conduct a detailed study to comprehend, categorize, and compare the key parameters of the deep-learning approaches that have been used in physiological signal analysis for various medical applications. The key parameters we review are the input data type, deep-learning task, deep-learning model, training architecture, and dataset sources; these are the main parameters that affect system performance. We taxonomize the research works that use deep learning in physiological signal analysis from (1) a physiological signal data perspective, such as data modality and medical application, and (2) a deep-learning concept perspective, such as training architecture and dataset sources.
Affiliation(s)
- Beanbonyka Rim, Department of Computer Science, Soonchunhyang University, Asan 31538, Korea
- Nak-Jun Sung, Department of Computer Science, Soonchunhyang University, Asan 31538, Korea
- Sedong Min, Department of Medical IT Engineering, Soonchunhyang University, Asan 31538, Korea
- Min Hong, Department of Computer Software Engineering, Soonchunhyang University, Asan 31538, Korea