1
van Gorp H, van Gilst MM, Overeem S, Dujardin S, Pijpers A, van Wetten B, Fonseca P, van Sloun RJG. Single-channel EOG sleep staging on a heterogeneous cohort of subjects with sleep disorders. Physiol Meas 2024; 45:055007. [PMID: 38653318] [DOI: 10.1088/1361-6579/ad4251]
Abstract
Objective. Sleep staging based on full polysomnography is the gold standard in the diagnosis of many sleep disorders. It is, however, costly, complex, and obtrusive due to the use of multiple electrodes. Automatic sleep staging based on single-channel electro-oculography (EOG) is a promising alternative, requiring fewer electrodes, which could be self-applied below the hairline. EOG sleep staging algorithms have, however, yet to be validated in clinical populations with sleep disorders. Approach. We utilized the SOMNIA dataset, comprising 774 recordings from subjects with various sleep disorders, including insomnia, sleep-disordered breathing, hypersomnolence, circadian rhythm disorders, parasomnias, and movement disorders. The recordings were divided into train (574), validation (100), and test (100) groups. We trained a neural network that integrated transformers within a U-Net backbone. This design facilitated learning of arbitrary-distance temporal relationships within and between the EOG and hypnogram. Main results. For 5-class sleep staging, we achieved median accuracies of 85.0% and 85.2% and Cohen's kappas of 0.781 and 0.796 for the left and right EOG, respectively. The performance using the right EOG was significantly better than using the left EOG, possibly because, in the recommended AASM setup, this electrode is located closer to the scalp. The proposed model is robust to the presence of a variety of sleep disorders, displaying no significant difference in performance for subjects with a given sleep disorder compared to those without. Significance. The results show that accurate sleep staging using single-channel EOG can be done reliably for subjects with a variety of sleep disorders.
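The Cohen's kappa values reported above measure epoch-by-epoch agreement between the automatic and manual hypnograms, corrected for chance. A minimal, dependency-free sketch of the statistic (the toy hypnograms and label names are illustrative, not study data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two label sequences, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of each rater's marginals.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Toy hypnograms scored in 30 s epochs (W, N1, N2, N3, R = 5-class staging).
manual = ["W", "W", "N1", "N2", "N2", "N3", "N3", "R", "R", "W"]
auto   = ["W", "W", "N2", "N2", "N2", "N3", "N3", "R", "W", "W"]
print(round(cohens_kappa(manual, auto), 3))  # → 0.737
```

Here raw agreement is 0.8, but kappa discounts the 0.24 agreement expected by chance from the stage distributions alone.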
Affiliation(s)
- Hans van Gorp
- Department of Electrical Engineering, Eindhoven University of Technology, The Netherlands
- Philips Sleep and Respiratory Care, Eindhoven, The Netherlands
- Merel M van Gilst
- Department of Electrical Engineering, Eindhoven University of Technology, The Netherlands
- Sleep Medicine Centre Kempenhaeghe, Heeze, The Netherlands
- Sebastiaan Overeem
- Department of Electrical Engineering, Eindhoven University of Technology, The Netherlands
- Sleep Medicine Centre Kempenhaeghe, Heeze, The Netherlands
- Pedro Fonseca
- Department of Electrical Engineering, Eindhoven University of Technology, The Netherlands
- Philips Sleep and Respiratory Care, Eindhoven, The Netherlands
- Ruud J G van Sloun
- Department of Electrical Engineering, Eindhoven University of Technology, The Netherlands
2
Oh S, Kweon YS, Shin GH, Lee SW. Association Between Sleep Quality and Deep Learning-Based Sleep Onset Latency Distribution Using an Electroencephalogram. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1806-1816. [PMID: 38696294] [DOI: 10.1109/tnsre.2024.3396169]
Abstract
To evaluate sleep quality, it is necessary to monitor overnight sleep duration. However, sleep monitoring typically requires more than 7 hours, which can be inefficient in terms of data size and analysis. We therefore developed a deep learning-based model that uses a 30 s sleep electroencephalogram (EEG) segment from early in the sleep cycle to predict the sleep onset latency (SOL) distribution and explored associations with sleep quality (SQ). The proposed deep learning model combines a structure that decomposes and restores the signal in epoch units with a structure that predicts the SOL distribution. We used the Sleep Heart Health Study public dataset, which includes a large number of study subjects, to train and evaluate the proposed model. The model estimated the SOL distribution and divided it into four clusters. Its advantage is that it shows the process of falling asleep for individual participants as a probability graph over time. Furthermore, we compared baselines of good SQ and SOL and showed that an SOL of less than 10 minutes correlated best with good SQ. Moreover, SOL was the most suitable sleep feature that could be predicted from early EEG, compared with total sleep time, sleep efficiency, and actual sleep time. Our study showed the feasibility of estimating the SOL distribution using deep learning with early EEG and showed that an SOL within 10 minutes was associated with good SQ.
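Sleep onset latency itself is a simple quantity once a hypnogram is available: the time from lights-off to the first epoch scored as sleep. A small sketch of that definition (epoch length and stage labels follow the usual 30 s AASM convention; the toy hypnogram is illustrative):

```python
def sleep_onset_latency(hypnogram, epoch_sec=30):
    """Minutes from lights-off (epoch 0) to the first non-wake epoch.

    `hypnogram` is a list of per-epoch stage labels; "W" marks wakefulness.
    Returns None if the subject never falls asleep in the recording.
    """
    for i, stage in enumerate(hypnogram):
        if stage != "W":
            return i * epoch_sec / 60
    return None

# 14 wake epochs of 30 s, then sleep onset -> SOL = 7.0 minutes,
# under the 10-minute threshold the study associates with good sleep quality.
hyp = ["W"] * 14 + ["N1", "N2", "N2", "N3"]
print(sleep_onset_latency(hyp))  # → 7.0
```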
3
Lee JH, Nam H, Kim DH, Koo DL, Choi JW, Hong SN, Jeon ET, Lim S, Jang GS, Kim BH. Developing a deep learning model for sleep stage prediction in obstructive sleep apnea cohort using 60 GHz frequency-modulated continuous-wave radar. J Sleep Res 2024; 33:e14050. [PMID: 37752626] [DOI: 10.1111/jsr.14050]
Abstract
Given the significant impact of sleep on overall health, radar technology offers a promising, non-invasive, and cost-effective avenue for the early detection of sleep disorders, even prior to relying on polysomnography (PSG)-based classification. In this study, we employed an attention-based bidirectional long short-term memory (Attention Bi-LSTM) model to accurately predict sleep stages using 60 GHz frequency-modulated continuous-wave (FMCW) radar. Our dataset comprised 78 participants from an ongoing obstructive sleep apnea (OSA) cohort, recruited between July 2021 and November 2022, who underwent overnight polysomnography alongside radar sensor monitoring. The dataset encompasses comprehensive polysomnography recordings, spanning both sleep and wakefulness states. The predictions achieved a Cohen's kappa coefficient of 0.746 and an overall accuracy of 85.2% in classifying wakefulness, rapid-eye-movement (REM) sleep, and non-REM (NREM) sleep (N1 + N2 + N3). The results demonstrated that the models incorporating both Radar 1 and Radar 2 data consistently outperformed those using only Radar 1 data, indicating the potential benefits of utilising multiple radars for sleep stage classification. Although the performance of the models tended to decline with increasing OSA severity, the addition of Radar 2 data notably improved the classification accuracy. These findings demonstrate the potential of radar technology as a valuable screening tool for sleep stage classification.
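The 3-class problem above (wakefulness, REM, NREM = N1 + N2 + N3) is just a relabelling of the standard 5-stage hypnogram before scoring accuracy. A minimal sketch of that collapse (the mapping keys and toy sequences are illustrative):

```python
# Collapse standard 5-stage labels into the 3 classes used for radar scoring:
# wakefulness, REM, and NREM (N1 + N2 + N3).
THREE_CLASS = {"W": "W", "R": "REM", "N1": "NREM", "N2": "NREM", "N3": "NREM"}

def collapse(stages):
    return [THREE_CLASS[s] for s in stages]

def accuracy(truth, pred):
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

truth = collapse(["W", "N1", "N2", "N3", "R", "N2"])
pred  = collapse(["W", "N2", "N2", "N2", "R", "W"])
print(truth)  # → ['W', 'NREM', 'NREM', 'NREM', 'REM', 'NREM']
print(accuracy(truth, pred))
```

Note that collapsing before scoring forgives 5-class confusions that stay within NREM (e.g., N1 vs. N2), which is why 3-class metrics are typically higher than 5-class ones.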
Affiliation(s)
- Ji Hyun Lee
- Department of Radiology, Seoul Metropolitan Government - Seoul National University Boramae Medical Center, Seoul National University College of Medicine, Seoul, Korea
- Hyunwoo Nam
- Department of Neurology, Seoul Metropolitan Government - Seoul National University Boramae Medical Center, Seoul National University College of Medicine, Seoul, Korea
- Dong Hyun Kim
- Department of Radiology, Seoul Metropolitan Government - Seoul National University Boramae Medical Center, Seoul National University College of Medicine, Seoul, Korea
- Dae Lim Koo
- Department of Neurology, Seoul Metropolitan Government - Seoul National University Boramae Medical Center, Seoul National University College of Medicine, Seoul, Korea
- Jae Won Choi
- Department of Radiology, Armed Forces Yangju Hospital, Yangju, Korea
- Seung-No Hong
- Department of Otorhinolaryngology - Head and Neck Surgery, Seoul Metropolitan Government - Seoul National University Boramae Medical Center, Seoul National University College of Medicine, Seoul, Korea
- Eun-Tae Jeon
- Department of Radiology, Seoul Metropolitan Government - Seoul National University Boramae Medical Center, Seoul National University College of Medicine, Seoul, Korea
4
Van Der Aar JF, Van Den Ende DA, Fonseca P, Van Meulen FB, Overeem S, Van Gilst MM, Peri E. Deep transfer learning for automated single-lead EEG sleep staging with channel and population mismatches. Front Physiol 2024; 14:1287342. [PMID: 38250654] [PMCID: PMC10796543] [DOI: 10.3389/fphys.2023.1287342]
Abstract
Introduction: Automated sleep staging using deep learning models typically requires training on hundreds of sleep recordings, and pre-training on public databases is therefore common practice. However, suboptimal sleep staging performance may result from mismatches between source and target datasets, such as differences in population characteristics (e.g., an unrepresented sleep disorder) or sensors (e.g., alternative channel locations for wearable EEG). Methods: We investigated three strategies for training an automated single-channel EEG sleep stager: pre-training (i.e., training on the original source dataset), training-from-scratch (i.e., training on the new target dataset), and fine-tuning (i.e., training on the original source dataset, then fine-tuning on the new target dataset). As the source dataset, we used the F3-M2 channel of healthy subjects (N = 94). Performance of the different training strategies was evaluated using Cohen's kappa (κ) in eight smaller target datasets consisting of healthy subjects (N = 60) and patients with obstructive sleep apnea (OSA, N = 60), insomnia (N = 60), and REM sleep behavior disorder (RBD, N = 22), combined with two EEG channels, F3-M2 and F3-F4. Results: No differences in performance between the training strategies were observed in the age-matched F3-M2 datasets, with an average performance across strategies of κ = .83 in healthy, κ = .77 in insomnia, and κ = .74 in OSA subjects. However, in the RBD set, where data availability was limited, fine-tuning was the preferred method (κ = .67), with an average increase in κ of .15 over pre-training and training-from-scratch. In the presence of channel mismatches, targeted training is required, either through training-from-scratch or fine-tuning, increasing performance by κ = .17 on average.
Discussion: We found that, when channel and/or population mismatches cause suboptimal sleep staging performance, a fine-tuning approach can yield similar or superior performance compared to building a model from scratch, while requiring a smaller sample size. In contrast to insomnia and OSA, RBD data contain characteristics, either inherent to the pathology or age-related, that apparently demand targeted training.
Affiliation(s)
- Jaap F. Van Der Aar
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Philips Research, Eindhoven, Netherlands
- Pedro Fonseca
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Philips Research, Eindhoven, Netherlands
- Fokke B. Van Meulen
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Kempenhaeghe Center for Sleep Medicine, Heeze, Netherlands
- Sebastiaan Overeem
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Kempenhaeghe Center for Sleep Medicine, Heeze, Netherlands
- Merel M. Van Gilst
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Kempenhaeghe Center for Sleep Medicine, Heeze, Netherlands
- Elisabetta Peri
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
5
Reddy YRM, Muralidhar P, Srinivas M. An Effective Hybrid Deep Learning Model for Single-Channel EEG-Based Subject-Independent Drowsiness Recognition. Brain Topogr 2024; 37:1-18. [PMID: 37995000] [DOI: 10.1007/s10548-023-01016-0]
Abstract
Nowadays, road accidents pose a severe risk in cases of sleep disorders. We propose a novel hybrid deep-learning model for detecting drowsiness to address this issue. The proposed model combines the strengths of discrete wavelet long short-term memory (DWLSTM) and convolutional neural network (CNN) models to classify single-channel electroencephalogram (EEG) signals. Baseline models such as support vector machines (SVM), linear discriminant analysis (LDA), back-propagation neural networks (BPNN), CNN, and CNN merged with LSTM (CNN+LSTM) do not fully utilize the time-sequence information. Our proposed model incorporates majority voting between LSTM layers integrated with the discrete wavelet transform (DWT) and a CNN model fed with spectrograms as images. The features extracted from the sub-bands generated by the DWT are more informative and discriminative than the raw EEG signal. Similarly, spectrogram images fed to the CNN capture the specific patterns and features of different levels of drowsiness. Furthermore, the proposed model outperformed state-of-the-art deep learning techniques and conventional baseline methods, achieving average accuracies of 74.62% and 77.76% (using a rounding and an F1-score-maximization approach, respectively, for generating labels) on 11 subjects with the leave-one-subject-out method. It achieved high accuracy while maintaining relatively short training and testing times, making it desirable for quicker drowsiness detection. The performance metrics (accuracy, precision, recall, F1-score) were evaluated over 100 randomized tests along with a 95% confidence interval for classification. Additionally, we validated the mean accuracies for five wavelet families, including Daubechies, Symlets, biorthogonal, Coiflets, and Haar, merged with LSTM layers.
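The DWT sub-band decomposition at the heart of this feature extractor is easy to illustrate with the simplest family the authors test, Haar. A dependency-free one-level sketch (the input signal is a toy sequence, not EEG; real pipelines would iterate the transform on the approximation band):

```python
import math

def haar_dwt_level(signal):
    """One level of the Haar discrete wavelet transform.

    Splits a signal of even length into an approximation sub-band
    (low-pass, scaled pairwise sums) and a detail sub-band
    (high-pass, scaled pairwise differences).
    """
    s = 1 / math.sqrt(2)
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_dwt_level(x)
print([round(v, 3) for v in a])  # → [7.071, 15.556, 9.899, 7.071]
print([round(v, 3) for v in d])  # → [-1.414, -1.414, 1.414, 0.0]
```

The 1/√2 scaling makes the transform orthonormal, so signal energy is preserved across the two sub-bands — one reason sub-band statistics make well-behaved classifier features.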
Affiliation(s)
- Y Rama Muni Reddy
- Department of Electronics and Communication Engineering, National Institute of Technology, Warangal, Telangana, 506004, India.
- P Muralidhar
- Department of Electronics and Communication Engineering, National Institute of Technology, Warangal, Telangana, 506004, India
- M Srinivas
- Department of Computer Science Engineering, National Institute of Technology, Warangal, Telangana, 506004, India
6
Garcia-Molina G. Feasibility of Unobtrusively Estimating Blood Pressure Using Load Cells under the Legs of a Bed. Sensors (Basel) 2023; 24:96. [PMID: 38202958] [PMCID: PMC10780971] [DOI: 10.3390/s24010096]
Abstract
The ability to monitor blood pressure unobtrusively and continuously, even during sleep, may promote the prevention of cardiovascular diseases, enable the early detection of cardiovascular risk, and facilitate the timely administration of treatment. Publicly available data from forty participants containing synchronously recorded signals from four force sensors (load cells located under each leg of a bed) and continuous blood pressure waveforms were leveraged in this research. The focus of this study was on using a deep neural network with load-cell data as input, composed of three recurrent layers, to reconstruct blood pressure (BP) waveforms. Systolic (SBP) and diastolic (DBP) blood pressure values were estimated from the reconstructed BP waveform. The dataset was partitioned into training, validation, and testing sets, such that the data from a given participant were only used in a single set. The BP waveform reconstruction performance resulted in an R2 of 0.61 and a mean absolute error < 0.1 mmHg. The estimation of the mean SBP and DBP values was characterized by Bland-Altman-derived limits of agreement in intervals of [-11.99 to +15.52 mmHg] and [-7.95 to +3.46 mmHg], respectively. These results may enable the detection of abnormally large or small variations in blood pressure, which indicate cardiovascular health degradation. The apparent contrast between the small reconstruction error and the limit-of-agreement width owes to the fact that reconstruction errors manifest more prominently at the maxima and minima, which are relevant for SBP and DBP estimation. While the focus here was on SBP and DBP estimation, reconstructing the entire BP waveform enables the calculation of additional hemodynamic parameters.
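The Bland-Altman limits of agreement quoted above are the bias of the paired differences plus or minus 1.96 standard deviations. A minimal sketch of the computation (the paired readings below are hypothetical numbers, not the study's data):

```python
from statistics import mean, stdev

def bland_altman_limits(reference, estimate):
    """Bland-Altman analysis: mean bias and 95% limits of agreement
    (bias +/- 1.96 * SD of the pairwise differences)."""
    diffs = [e - r for r, e in zip(reference, estimate)]
    bias = mean(diffs)
    spread = 1.96 * stdev(diffs)
    return bias, bias - spread, bias + spread

# Hypothetical paired systolic BP readings (mmHg): reference vs. model estimate.
ref = [118, 122, 130, 125, 140, 135]
est = [120, 121, 134, 128, 138, 139]
bias, lo, hi = bland_altman_limits(ref, est)
print(round(bias, 2), round(lo, 2), round(hi, 2))  # → 1.67 -3.39 6.73
```

Wide limits despite a small mean error are exactly the pattern the abstract describes: the bias can be tiny while individual differences still spread widely.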
Affiliation(s)
- Gary Garcia-Molina
- Sleep Number Labs, San Jose, CA 95113, USA
- Center for Sleep and Consciousness, Department of Psychiatry, University of Wisconsin-Madison, Madison, WI 53719, USA
7
Yeckle J, Manian V. Automated Sleep Stage Classification in Home Environments: An Evaluation of Seven Deep Neural Network Architectures. Sensors (Basel) 2023; 23:8942. [PMID: 37960641] [PMCID: PMC10649735] [DOI: 10.3390/s23218942]
Abstract
Sleep is an essential human physiological need that has garnered increasing scientific attention due to the burgeoning prevalence of sleep-related disorders and their impact on public health. Among contemporary challenges, the demand for authentic sleep monitoring outside the confines of specialized laboratories, ideally within the home environment, has arisen. Addressing this, we explore the development of pragmatic approaches that facilitate implementation within domestic settings. Such approaches necessitate the deployment of streamlined, computationally efficient automated classifiers. In pursuit of a sleep stage classifier tailored for home use, this study rigorously assessed seven conventional neural network architectures prominent in deep learning (LeNet, ResNet, VGG, MLP, LSTM-CNN, LSTM, BLSTM). Leveraging sleep recordings from a cohort of 20 subjects, we show that LeNet, VGG, and ResNet exhibit superior performance compared to recent advancements reported in the literature. Furthermore, a comprehensive architectural analysis was conducted, illuminating the strengths and limitations of each in the context of home-based sleep monitoring. Our findings distinctly identify LeNet as the most amenable architecture for this purpose, with LSTM and BLSTM demonstrating relatively lesser compatibility. Ultimately, this research substantiates the feasibility of automating sleep stage classification employing lightweight neural networks, thereby accommodating scenarios with constrained computational resources. This advancement aims at revolutionizing the field of sleep monitoring, making it more accessible and reliable for individuals in their homes.
Affiliation(s)
- Jaime Yeckle
- Department of Electrical and Computer Engineering, University of Puerto Rico, Mayaguez, PR 00681, USA
8
Vaquerizo-Villar F, Gutiérrez-Tobal GC, Calvo E, Álvarez D, Kheirandish-Gozal L, Del Campo F, Gozal D, Hornero R. An explainable deep-learning model to stage sleep states in children and propose novel EEG-related patterns in sleep apnea. Comput Biol Med 2023; 165:107419. [PMID: 37703716] [DOI: 10.1016/j.compbiomed.2023.107419]
Abstract
Automatic deep-learning models used for sleep scoring in children with obstructive sleep apnea (OSA) are perceived as black boxes, limiting their implementation in clinical settings. Accordingly, we aimed to develop an accurate and interpretable deep-learning model for sleep staging in children using single-channel electroencephalogram (EEG) recordings. We used EEG signals from the Childhood Adenotonsillectomy Trial (CHAT) dataset (n = 1637) and a clinical sleep database (n = 980). Three distinct deep-learning architectures were explored to automatically classify sleep stages from single-channel EEG data. Gradient-weighted Class Activation Mapping (Grad-CAM), an explainable artificial intelligence (XAI) algorithm, was then applied to provide an interpretation of the singular EEG patterns contributing to each predicted sleep stage. Among the tested architectures, a standard convolutional neural network (CNN) demonstrated the highest performance for automated sleep stage detection in the CHAT test set (accuracy = 86.9% and five-class kappa = 0.827). Furthermore, the CNN-based estimation of total sleep time exhibited strong agreement in the clinical dataset (intra-class correlation coefficient = 0.772). Our XAI approach using Grad-CAM effectively highlighted the EEG features associated with each sleep stage, emphasizing their influence on the CNN's decision-making process in both datasets. Grad-CAM heatmaps also allowed us to identify and analyze epochs within a recording with a high likelihood of being misclassified, revealing mixed features from different sleep stages within these epochs. Finally, Grad-CAM heatmaps unveiled novel features contributing to sleep scoring using a single EEG channel. Consequently, integrating an explainable CNN-based deep-learning model in the clinical environment could enable automatic sleep staging in pediatric sleep apnea tests.
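The Grad-CAM step itself is a small computation once a CNN's feature maps and gradients are in hand: weight each map by the global average of its gradient, sum, and rectify. A minimal 1-D sketch with hand-picked toy numbers standing in for a network's activations and gradients (not the authors' model or data):

```python
def grad_cam_1d(activations, gradients):
    """Grad-CAM for 1-D signals: weight each feature map by the global
    average of its gradient, sum across maps, apply ReLU, and
    normalise the resulting heatmap to [0, 1]."""
    weights = [sum(g) / len(g) for g in gradients]  # GAP of gradients per map
    length = len(activations[0])
    cam = [max(0.0, sum(w * a[t] for w, a in zip(weights, activations)))
           for t in range(length)]
    peak = max(cam) or 1.0
    return [c / peak for c in cam]

# Two toy feature maps over 4 EEG time steps, and the gradients of one
# class score (e.g., "N3") with respect to those maps.
acts  = [[1.0, 2.0, 3.0, 0.0], [0.0, 1.0, 0.0, 2.0]]
grads = [[0.5, 0.5, 0.5, 0.5], [-1.0, -1.0, -1.0, -1.0]]
print([round(v, 3) for v in grad_cam_1d(acts, grads)])  # → [0.333, 0.0, 1.0, 0.0]
```

The heatmap highlights the time steps whose activations push the class score up, which is how the study localises the EEG patterns behind each stage decision.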
Affiliation(s)
- Fernando Vaquerizo-Villar
- Biomedical Engineering Group, University of Valladolid, Valladolid, Spain; CIBER de Bioingeniería, Biomateriales y Nanomedicina, Instituto de Salud Carlos III, Valladolid, Spain.
- Gonzalo C Gutiérrez-Tobal
- Biomedical Engineering Group, University of Valladolid, Valladolid, Spain; CIBER de Bioingeniería, Biomateriales y Nanomedicina, Instituto de Salud Carlos III, Valladolid, Spain
- Eva Calvo
- Biomedical Engineering Group, University of Valladolid, Valladolid, Spain
- Daniel Álvarez
- Biomedical Engineering Group, University of Valladolid, Valladolid, Spain; CIBER de Bioingeniería, Biomateriales y Nanomedicina, Instituto de Salud Carlos III, Valladolid, Spain; Sleep-Ventilation Unit, Pneumology Department, Río Hortega University Hospital, Valladolid, Spain
- Leila Kheirandish-Gozal
- Departments of Neurology and Child Health and Child Health Research Institute, The University of Missouri School of Medicine, Columbia, MO, USA
- Félix Del Campo
- Biomedical Engineering Group, University of Valladolid, Valladolid, Spain; CIBER de Bioingeniería, Biomateriales y Nanomedicina, Instituto de Salud Carlos III, Valladolid, Spain; Sleep-Ventilation Unit, Pneumology Department, Río Hortega University Hospital, Valladolid, Spain
- David Gozal
- Office of The Dean, Joan C. Edwards School of Medicine, Marshall University, 1600 Medical Center Dr, Huntington, WV, 25701, USA
- Roberto Hornero
- Biomedical Engineering Group, University of Valladolid, Valladolid, Spain; CIBER de Bioingeniería, Biomateriales y Nanomedicina, Instituto de Salud Carlos III, Valladolid, Spain
9
Ji X, Li Y, Wen P. 3DSleepNet: A Multi-Channel Bio-Signal Based Sleep Stages Classification Method Using Deep Learning. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3513-3523. [PMID: 37639413] [DOI: 10.1109/tnsre.2023.3309542]
Abstract
A novel multi-channel-based 3D convolutional neural network (3D-CNN) is proposed in this paper to classify sleep stages. Time domain features, frequency domain features, and time-frequency domain features are extracted from electroencephalography (EEG), electromyogram (EMG), and electrooculogram (EOG) channels and fed into the 3D-CNN model to classify sleep stages. Intrinsic connections among different bio-signals and different frequency bands in the time series and time-frequency representations are learned by 3D convolutional layers, while the frequency relations are learned by 2D convolutional layers. Partial dot-product attention layers help this model find the most important channels and frequency bands in different sleep stages. A long short-term memory unit is added to learn the transition rules among neighboring epochs. Classification experiments were conducted using both the ISRUC-S3 dataset and the ISRUC-S1 sleep-disorder dataset. The experimental results showed that the overall accuracy reached 0.832 and the F1-score and Cohen's kappa reached 0.814 and 0.783, respectively, on ISRUC-S3, which is a competitive classification performance with the state-of-the-art baselines. The overall accuracy, F1-score, and Cohen's kappa on ISRUC-S1 reached 0.820, 0.797, and 0.768, respectively, which also demonstrates its generality on unhealthy subjects. Further experiments were conducted on an ISRUC-S3 subset to evaluate its training time. The training time on 10 subjects from ISRUC-S3 with 8549 epochs is 4493 s, which indicates the highest calculation speed compared with the existing high-performance graph convolutional networks and U-Net architecture algorithms.
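The frequency-domain features such models consume typically boil down to per-band spectral power of each 30 s epoch. A dependency-free sketch using a textbook O(n²) DFT (a real extractor would use an FFT; the signal, sampling rate, and band edges below are illustrative):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` in the [f_lo, f_hi) Hz band via a naive DFT.

    The O(n^2) direct DFT keeps this sketch dependency-free.
    """
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f < f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

# 1 s toy "EEG" at 64 Hz: a 10 Hz alpha-band sine plus a weak 2 Hz delta-band sine.
fs = 64
x = [math.sin(2 * math.pi * 10 * t / fs) + 0.3 * math.sin(2 * math.pi * 2 * t / fs)
     for t in range(fs)]
alpha = band_power(x, fs, 8, 13)
delta = band_power(x, fs, 0.5, 4)
print(alpha > delta)  # → True: the alpha component dominates, as constructed
```

Stacking such band powers per channel and per epoch yields exactly the kind of channel x band x time tensor a 3D-CNN can convolve over.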
10
Abstract
Automatic polysomnography analysis can be leveraged to shorten scoring times, reduce associated costs, and ultimately improve the overall diagnosis of sleep disorders. Multiple and diverse strategies have been attempted for implementation of this technology at scale in the routine workflow of sleep centers. The field, however, is complex and presents unsolved challenges in a number of areas. Recent developments in computer science and artificial intelligence are nevertheless closing the gap. Technological advances are also opening new pathways for expanding our current understanding of the domain and its analysis.
Affiliation(s)
- Diego Alvarez-Estevez
- Center for Information and Communications Technology Research (CITIC), Universidade da Coruña, 15071 A Coruña, Spain.
11
He Z, Tang M, Wang P, Du L, Chen X, Cheng G, Fang Z. Cross-scenario automatic sleep stage classification using transfer learning and single-channel EEG. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104501]
12
Pei W, He T, Yang P, Lv X, Jiao B, Meng F, Yan Y, Cui L, He G, Zhou X, Wen G, Ruan J, Lu L. Acupuncture combined with cognitive-behavioural therapy for insomnia (CBT-I) in patients with insomnia: study protocol for a randomised controlled trial. BMJ Open 2022; 12:e063442. [PMID: 36585134] [PMCID: PMC9809230] [DOI: 10.1136/bmjopen-2022-063442]
Abstract
INTRODUCTION Insomnia affects physical and mental health due to the lack of continuous and complete sleep architecture. Polysomnograms (PSGs) are used to record electrical information from which sleep architecture is derived using deep learning. Acupuncture combined with cognitive-behavioural therapy for insomnia (CBT-I) may not only improve sleep quality and relieve anxiety and depression but also ameliorate poor sleep habits and detrimental cognitions. Therefore, this study will focus on the effects of electroacupuncture combined with CBT-I on sleep architecture, analysed with deep learning. METHODS AND ANALYSIS This randomised controlled trial will evaluate the efficacy and effectiveness of electroacupuncture combined with CBT-I in patients with insomnia. Participants will be randomised to receive either electroacupuncture combined with CBT-I or sham acupuncture combined with CBT-I and followed up for 4 weeks. The primary outcome is sleep quality, evaluated by the Pittsburgh Sleep Quality Index. The secondary outcome measures include measurements of depression severity, anxiety, maladaptive cognitions associated with sleep, and adverse events. Sleep architecture will be assessed using deep learning on PSGs. ETHICS AND DISSEMINATION This trial has been approved by the institutional review boards and ethics committees of the First Affiliated Hospital of Sun Yat-sen University (2021763). The results will be disseminated through peer-reviewed publications and conference abstracts or posters. TRIAL REGISTRATION NUMBER CTR2100052502.
Affiliation(s)
- Wenya Pei
- Department of Acupuncture, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Te He
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Pei Yang
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Xiaozhou Lv
- Department of Traditional Chinese Medicine, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Boyu Jiao
- Department of Acupuncture, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Fanqi Meng
- Department of Acupuncture, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Yingshuo Yan
- Department of Respiratory Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Liqian Cui
- Department of Neurology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Guanheng He
- Department of Acupuncture, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Xin Zhou
- Department of Acupuncture, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Guihua Wen
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
- Jingwen Ruan
- Department of Acupuncture, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Liming Lu
- South China Research Center for Acupuncture and Moxibustion, Medical College of Acu-Moxi and Rehabilitation, Guangzhou University of Chinese Medicine, Guangzhou, China
13
Xie J, Zhang J, Sun J, Ma Z, Qin L, Li G, Zhou H, Zhan Y. A Transformer-Based Approach Combining Deep Learning Network and Spatial-Temporal Information for Raw EEG Classification. IEEE Trans Neural Syst Rehabil Eng 2022; 30:2126-2136. [PMID: 35914032] [DOI: 10.1109/tnsre.2022.3194600]
Abstract
The attention mechanism of the Transformer has the advantage of extracting feature correlations in long-sequence data and of visualizing the model. As time-series data, the spatial and temporal dependencies of EEG signals between time points and across channels contain important information for accurate classification. So far, Transformer-based approaches have not been widely explored in motor-imagery EEG classification and visualization, especially lacking general models based on cross-individual validation. Taking advantage of the Transformer model and the spatial-temporal characteristics of EEG signals, we designed Transformer-based models for classification of motor-imagery EEG based on the PhysioNet dataset. With 3 s of EEG data, our models obtained the best classification accuracies of 83.31%, 74.44%, and 64.22% on two-, three-, and four-class motor-imagery tasks in cross-individual validation, which outperformed other state-of-the-art models by 0.88%, 2.11%, and 1.06%. The inclusion of the positional embedding modules in the Transformer could improve EEG classification performance. Furthermore, the visualization results of attention weights provided insights into the working mechanism of the Transformer-based networks during motor-imagery tasks. The topography of the attention weights revealed a pattern of event-related desynchronization (ERD) which was consistent with the results from the spectral analysis of mu and beta rhythms over the sensorimotor areas. Together, our deep learning methods not only provide novel and powerful tools for classifying and understanding EEG data but also have broad applications for brain-computer interface (BCI) systems.
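The positional embedding module credited above with improving EEG classification is, in the original Transformer formulation, a fixed sinusoidal table added to the input sequence. A minimal sketch (sequence length and model width below are arbitrary illustration values, not the paper's configuration):

```python
import math

def positional_embedding(seq_len, d_model):
    """Sinusoidal positional embeddings as in the original Transformer:
    PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(same angle)."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

pe = positional_embedding(seq_len=480, d_model=8)  # e.g., 480 EEG time steps
print(pe[0][:4])  # → [0.0, 1.0, 0.0, 1.0]: at position 0, sines are 0, cosines 1
```

Because each dimension oscillates at a different wavelength, the sum of signal and embedding lets attention layers distinguish time points that would otherwise be order-blind.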
14
A comprehensive evaluation of contemporary methods used for automatic sleep staging. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103819.
15
Automatic sleep stage classification: From classical machine learning methods to deep learning. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103751.
16
Ji X, Li Y, Wen P. Jumping Knowledge Based Spatial-temporal Graph Convolutional Networks for Automatic Sleep Stage Classification. IEEE Trans Neural Syst Rehabil Eng 2022; 30:1464-1472. PMID: 35584068. DOI: 10.1109/tnsre.2022.3176004.
Abstract
A novel jumping knowledge spatial-temporal graph convolutional network (JK-STGCN) is proposed in this paper to classify sleep stages. Based on this method, different types of multi-channel bio-signals, including electroencephalography (EEG), electromyogram (EMG), electrooculogram (EOG), and electrocardiogram (ECG), are utilized to classify sleep stages after features are extracted by a standard convolutional neural network (CNN) named FeatureNet. Intrinsic connections among different bio-signal channels from the same epoch and from neighboring epochs are obtained through two adaptive adjacency-matrix learning methods. A jumping knowledge spatial-temporal graph convolution module helps the JK-STGCN model extract spatial features efficiently from the graph convolutions, while temporal features are extracted from its standard convolutions to learn the transition rules among sleep stages. Experimental results on the ISRUC-S3 dataset showed an overall accuracy of 0.831, with an F1-score of 0.814 and a Cohen's kappa of 0.782, which is competitive with state-of-the-art baselines. Further experiments on the ISRUC-S3 dataset evaluated the execution efficiency of the JK-STGCN model: training on 10 subjects took 2621 s and testing on 50 subjects took 6.8 s, the fastest among existing high-performance graph convolutional networks and U-Net architecture algorithms. Experimental results on the ISRUC-S1 dataset also demonstrate its generality, with accuracy, F1-score, and Cohen's kappa of 0.820, 0.798, and 0.767, respectively.
17
Deep learning for predicting respiratory rate from biosignals. Comput Biol Med 2022; 144:105338. DOI: 10.1016/j.compbiomed.2022.105338.
18
Garcia-Molina G, Jiang J. Interbeat interval-based sleep staging: work in progress toward real-time implementation. Physiol Meas 2022; 43. PMID: 35297780. DOI: 10.1088/1361-6579/ac5a78.
Abstract
Objective. Cardiac activity changes during sleep enable real-time sleep staging. We developed a deep neural network (DNN) to detect sleep stages using interbeat intervals (IBIs) extracted from electrocardiogram signals. Approach. Data from healthy and apnea subjects were used for training and validation; two additional datasets (healthy subjects and subjects with sleep disorders) were used for testing. R-peak detection was used to determine IBIs before resampling at 2 Hz; the resulting signal was segmented into 150 s windows (30 s shift). The DNN output approximated the probabilities of a window belonging to the light, deep, REM, or wake stages. Cohen's kappa, accuracy, and per-stage sensitivity/specificity were determined, and kappa was optimized using thresholds on probability ratios for each stage versus light sleep. Main results. Mean (SD) kappa and accuracy for 4 sleep stages were 0.44 (0.09) and 0.65 (0.07), respectively, in healthy subjects. For 3 sleep stages (light+deep, REM, and wake), kappa and accuracy were 0.52 (0.12) and 0.76 (0.07), respectively. Algorithm performance on data from subjects with REM behavior disorder or periodic limb movement disorder was significantly worse, with kappas of 0.24 (0.09) and 0.36 (0.12), respectively. Average processing time by an ARM microprocessor for a 300-sample window was 19.2 ms. Significance. IBIs can be obtained from a variety of cardiac signals, including electrocardiogram, photoplethysmography, and ballistocardiography. The DNN algorithm presented is three orders of magnitude smaller than state-of-the-art algorithms and was developed to perform real-time, IBI-based sleep staging. With high specificity and moderate sensitivity for deep and REM sleep, a small footprint, and causal processing, this algorithm may be used across different platforms to perform real-time sleep staging and direct intervention strategies. Novelty & Significance. This article describes the development and testing of a deep neural network-based algorithm to detect sleep stages using interbeat intervals, which can be obtained from a variety of cardiac signals including photoplethysmography, electrocardiogram, and ballistocardiography. Based on the interbeat intervals identified in electrocardiogram signals, the algorithm architecture included a group of convolution layers and a group of long short-term memory layers. With its small footprint, fast processing time, high specificity, and good sensitivity for deep and REM sleep, this algorithm may provide a good option for real-time sleep staging to direct interventions.
Affiliation(s)
- Jiewei Jiang
- Sleep Number Labs, San Jose, CA, United States of America
19
Kwon K, Kwon S, Yeo WH. Automatic and Accurate Sleep Stage Classification via a Convolutional Deep Neural Network and Nanomembrane Electrodes. Biosensors 2022; 12:155. PMID: 35323425. PMCID: PMC8946692. DOI: 10.3390/bios12030155.
Abstract
Sleep stage classification is an essential process in diagnosing sleep disorders and related diseases. Automatic sleep stage classification using machine learning has been widely studied due to its higher efficiency compared with manual scoring. Typically, a few polysomnography channels are selected as input signals, and human experts label the corresponding sleep stages manually. However, the manual process is subject to human error and inconsistency in scoring and stage classification. Here, we present a convolutional neural network (CNN)-based classification method that offers highly accurate, automatic sleep stage detection, validated on a public dataset and on new data measured by wearable nanomembrane dry electrodes. First, we train and validate a model using a public dataset with two brain-signal and two eye-signal channels. Then, we validate this model with a new dataset measured by a set of nanomembrane electrodes. The automatic sleep stage classification results show that our CNN model with multi-taper spectrogram pre-processing achieved 88.85% accuracy on the public validation dataset and 81.52% prediction accuracy on our laboratory dataset. These results confirm the reliability of our classification method on the standard polysomnography dataset and the transferability of our CNN model to other datasets measured with the wearable electrodes.
Affiliation(s)
- Kangkyu Kwon
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Shinjae Kwon
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Woon-Hong Yeo
- IEN Center for Human-Centric Interfaces and Engineering, Institute for Electronics and Nanotechnology, Georgia Institute of Technology, Atlanta, GA 30332, USA
- George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Wallace H. Coulter Department of Biomedical Engineering, Parker H. Petit Institute for Bioengineering and Biosciences, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Neural Engineering Center, Institute for Materials, Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Correspondence: Tel.: +1-404-385-5710; Fax: +1-404-894-1658
20
A Novel FPGA-Based Intent Recognition System Utilizing Deep Recurrent Neural Networks. Electronics 2021. DOI: 10.3390/electronics10202495.
Abstract
In recent years, systems that monitor and control home environments based on non-vocal and non-manual interfaces have been introduced to improve the quality of life of people with mobility difficulties. In this work, we present the reconfigurable implementation and optimization of such a novel system that utilizes a recurrent neural network (RNN). As demonstrated by the real-world results, FPGAs have proved to be very efficient for implementing RNNs. In particular, our reconfigurable implementation is more than 150× faster than a high-end Intel Xeon CPU executing the reference inference tasks. Moreover, the proposed system achieves a more than 300× improvement in energy efficiency compared with the server CPU, while in terms of achieved GFLOPS/W it outperforms even a server-tailored GPU. An additional important contribution of this work is that the implementation and optimization process demonstrated can also serve as a reference for anyone implementing RNN inference in reconfigurable hardware; this is further facilitated by the fact that our C++ code, tailored for a high-level-synthesis (HLS) tool, is distributed as open source and can easily be incorporated into existing HLS libraries.
21
Home-Use and Real-Time Sleep-Staging System Based on Eye Masks and Mobile Devices with a Deep Learning Model. J Med Biol Eng 2021; 41:659-668. PMID: 34512223. PMCID: PMC8418457. DOI: 10.1007/s40846-021-00649-5.
Abstract
Purpose. Sleep is an important human activity. Comfortable sensing and accurate analysis in sleep monitoring benefit many healthcare and medical applications. Since 2020, owing to the COVID-19 pandemic, which spreads through close physical contact, willingness to visit hospitals for care has declined; care-at-home is the trend in modern healthcare. Therefore, a home-use, real-time sleep-staging system is developed in this paper. Methods. We developed and implemented a real-time sleep staging system that integrates a wearable eye mask for high-quality electroencephalogram/electrooculogram measurement and a mobile device running a MobileNetV2 deep learning model for sleep-stage identification. In the experiments, 25 all-night recordings were acquired, 17 of which were used for training and the remaining eight for testing. Results. The averaged scoring agreements for the wake, light sleep, deep sleep, and rapid eye movement stages were 85.20%, 87.17%, 82.87%, and 89.30%, respectively, for our system compared with manual scoring of PSG recordings. In addition, the mean absolute errors of four objective sleep measurements, namely sleep efficiency, total sleep time, sleep onset time, and wake after sleep onset time, were 1.68%, 7.56 min, 5.50 min, and 3.94 min, respectively. No significant differences were observed between the proposed system and manual PSG scoring in terms of the percentage of each stage or the objective sleep measurements. Conclusion. These experimental results demonstrate that our system provides high scoring agreement in sleep staging and unbiased sleep measurements owing to the use of EEG and EOG signals and powerful mobile computing based on deep learning networks. These results also suggest that our system is applicable to home-use real-time sleep monitoring.
22
Matsuda N, Kinoshita K, Okamura A, Shirakawa T, Suzuki I. Histograms of Frequency-Intensity Distribution Deep Learning to Predict the Seizure Liability of Drugs in Electroencephalography. Toxicol Sci 2021; 182:229-242. PMID: 34021344. DOI: 10.1093/toxsci/kfab061.
Abstract
Detection of seizures, as well as of seizure auras, is effective in improving the predictive accuracy of the seizure liability of drugs. Whereas electroencephalography is known to be effective for detecting seizure liability, no established methods are available for detecting seizure auras. We developed a method for detecting seizure auras through machine learning using frequency-characteristic images of electroencephalograms. Histograms of frequency-intensity distribution, prepared from electroencephalograms of rats recorded during seizures induced with 4-aminopyridine (6 mg/kg), strychnine (3 mg/kg), and pilocarpine (400 mg/kg), were used to create an artificial intelligence (AI) system that learned the features of frequency-characteristic images during seizures. The AI system detected, with 100% accuracy, the seizure states it had learned in advance, even for seizures induced by convulsants acting through different mechanisms, and it flagged seizure risk before a seizure was detectable by general observation. The AI system determined that the unlearned convulsant tramadol (150 mg/kg) carried a seizure risk, while the negative compounds aspirin and vehicle were judged negative. Moreover, the AI system detected seizure liability even in electroencephalography data associated with 4-aminopyridine (3 mg/kg), strychnine (1 mg/kg), and pilocarpine (150 mg/kg), doses that did not induce seizures detectable by general observation. These results suggest that the AI system developed herein is an effective means of electroencephalographic detection of seizure auras, raising expectations for its practical use as a new analytical method that allows sensitive detection of drug seizure liability previously overlooked in preclinical studies.
Affiliation(s)
- Naoki Matsuda
- Department of Electronics, Graduate School of Engineering, Tohoku Institute of Technology, Sendai, Miyagi 982-8577, Japan
- Kenichi Kinoshita
- Drug Safety Research Labs, Astellas Pharma Inc., Tsukuba, Ibaraki 305-8585, Japan
- Ai Okamura
- Drug Safety Research Labs, Astellas Pharma Inc., Tsukuba, Ibaraki 305-8585, Japan
- Takafumi Shirakawa
- Drug Safety Research Labs, Astellas Pharma Inc., Tsukuba, Ibaraki 305-8585, Japan
- Ikuro Suzuki
- Department of Electronics, Graduate School of Engineering, Tohoku Institute of Technology, Sendai, Miyagi 982-8577, Japan
23
Alvarez-Estevez D, Rijsman RM. Inter-database validation of a deep learning approach for automatic sleep scoring. PLoS One 2021; 16:e0256111. PMID: 34398931. PMCID: PMC8366993. DOI: 10.1371/journal.pone.0256111.
Abstract
STUDY OBJECTIVES Development of inter-database generalizable sleep staging algorithms is challenging due to increased data variability across different datasets. Sharing data between centers is also problematic due to restrictions arising from patient privacy protection. In this work, we describe a new deep learning approach for automatic sleep staging and address its generalization capabilities on a wide range of public sleep staging databases. We also examine the suitability of a novel approach that uses an ensemble of individual local models and evaluate its impact on the resulting inter-database generalization performance. METHODS A general deep learning network architecture for automatic sleep staging is presented. Different preprocessing and architectural variants are tested. The resulting prediction capabilities are evaluated and compared on a heterogeneous collection of six public sleep staging datasets. Validation is carried out in the context of independent local and external dataset generalization scenarios. RESULTS Best results were achieved using the CNN_LSTM_5 neural network variant. Average prediction on independent local testing sets achieved a kappa score of 0.80. When individual local models predict data from external datasets, the average kappa score decreases to 0.54. Using the proposed ensemble-based approach, average kappa performance in the external dataset prediction scenario increases to 0.62. To our knowledge, this is the largest study by number of datasets so far to validate the generalization capabilities of an automatic sleep staging algorithm using external databases. CONCLUSIONS Validation results show good general performance of our method compared with expected levels of human agreement, as well as with state-of-the-art automatic sleep staging methods. The proposed ensemble-based approach enables flexible and scalable design, allowing dynamic integration of local models into the final ensemble, preserving data locality, and at the same time increasing the generalization capabilities of the resulting system.
Affiliation(s)
- Diego Alvarez-Estevez
- Sleep Center, Haaglanden Medisch Centrum, The Hague, South-Holland, The Netherlands
- Center for Information and Communications Technology Research (CITIC), University of A Coruña, A Coruña, Spain
- Roselyne M. Rijsman
- Sleep Center, Haaglanden Medisch Centrum, The Hague, South-Holland, The Netherlands
24
Tezuka T, Kumar D, Singh S, Koyanagi I, Naoi T, Sakaguchi M. Real-time, automatic, open-source sleep stage classification system using single EEG for mice. Sci Rep 2021; 11:11151. PMID: 34045518. PMCID: PMC8160151. DOI: 10.1038/s41598-021-90332-1.
Abstract
We developed a real-time sleep stage classification system with a convolutional neural network using only a single-channel electroencephalogram source from mice and features universally available in any time-series data: the raw signal, its spectrum, and zeitgeber time. To accommodate historical information from each subject, we included a long short-term memory recurrent neural network in combination with the universal features. The resulting system (UTSN-L) achieved 90% overall accuracy and an 81% multi-class Matthews correlation coefficient, with particularly high-quality judgements for rapid eye movement sleep (91% sensitivity and 98% specificity). This system can enable automatic real-time interventions during rapid eye movement sleep, which has been difficult due to its relatively low abundance and short duration. Further, it eliminates the need for prior calibration, electromyogram recording, and manual classification, and is thus scalable. The code is open source with a graphical user interface and closed-feedback-loop capability, making it easily adaptable to a wide variety of end-user needs. By allowing large-scale, automatic, and real-time sleep stage-specific interventions, this system can aid further investigations of the functions of sleep and the development of new therapeutic strategies for sleep-related disorders.
Affiliation(s)
- Taro Tezuka
- Faculty of Library, Information and Media Science/Center for Artificial Intelligence Research (C-AIR), University of Tsukuba, Tsukuba, Japan.
- Deependra Kumar
- International Institute for Integrative Sleep Medicine (WPI-IIIS), University of Tsukuba, Tsukuba, Japan
- Sima Singh
- International Institute for Integrative Sleep Medicine (WPI-IIIS), University of Tsukuba, Tsukuba, Japan
- Iyo Koyanagi
- International Institute for Integrative Sleep Medicine (WPI-IIIS), University of Tsukuba, Tsukuba, Japan
- Toshie Naoi
- International Institute for Integrative Sleep Medicine (WPI-IIIS), University of Tsukuba, Tsukuba, Japan
- Masanori Sakaguchi
- International Institute for Integrative Sleep Medicine (WPI-IIIS), University of Tsukuba, Tsukuba, Japan
25
Casciola AA, Carlucci SK, Kent BA, Punch AM, Muszynski MA, Zhou D, Kazemi A, Mirian MS, Valerio J, McKeown MJ, Nygaard HB. A Deep Learning Strategy for Automatic Sleep Staging Based on Two-Channel EEG Headband Data. Sensors 2021; 21:3316. PMID: 34064694. PMCID: PMC8151443. DOI: 10.3390/s21103316.
Abstract
Sleep disturbances are common in Alzheimer’s disease and other neurodegenerative disorders, and together represent a potential therapeutic target for disease modification. A major barrier for studying sleep in patients with dementia is the requirement for overnight polysomnography (PSG) to achieve formal sleep staging. This is not only costly, but also spending a night in a hospital setting is not always advisable in this patient group. As an alternative to PSG, portable electroencephalography (EEG) headbands (HB) have been developed, which reduce cost, increase patient comfort, and allow sleep recordings in a person’s home environment. However, naïve applications of current automated sleep staging systems tend to perform inadequately with HB data, due to their relatively lower quality. Here we present a deep learning (DL) model for automated sleep staging of HB EEG data to overcome these critical limitations. The solution includes a simple band-pass filtering, a data augmentation step, and a model using convolutional (CNN) and long short-term memory (LSTM) layers. With this model, we have achieved 74% (±10%) validation accuracy on low-quality two-channel EEG headband data and 77% (±10%) on gold-standard PSG. Our results suggest that DL approaches achieve robust sleep staging of both portable and in-hospital EEG recordings, and may allow for more widespread use of ambulatory sleep assessments across clinical conditions, including neurodegenerative disorders.
Affiliation(s)
- Amelia A. Casciola
- Department of Electrical and Computer Engineering Capstone, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Sebastiano K. Carlucci
- Department of Electrical and Computer Engineering Capstone, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Brianne A. Kent
- Djavad Mowafaghian Centre for Brain Health, Division of Neurology, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Department of Psychology, Simon Fraser University, Burnaby, BC V5A 1S6, Canada
- Amanda M. Punch
- Department of Electrical and Computer Engineering Capstone, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Michael A. Muszynski
- Department of Electrical and Computer Engineering Capstone, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Daniel Zhou
- Department of Electrical and Computer Engineering Capstone, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Alireza Kazemi
- Center for Mind and Brain, Department of Psychology, University of California, Davis, CA 95618, USA
- Maryam S. Mirian
- Djavad Mowafaghian Centre for Brain Health, Division of Neurology, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Jason Valerio
- Djavad Mowafaghian Centre for Brain Health, Division of Neurology, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Martin J. McKeown
- Djavad Mowafaghian Centre for Brain Health, Division of Neurology, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Correspondence: (M.J.M.); (H.B.N.)
- Haakon B. Nygaard
- Djavad Mowafaghian Centre for Brain Health, Division of Neurology, University of British Columbia, Vancouver, BC V6T 1Z3, Canada
- Correspondence: (M.J.M.); (H.B.N.)
26
A compact and interpretable convolutional neural network for cross-subject driver drowsiness detection from single-channel EEG. Methods 2021; 202:173-184. PMID: 33901644. DOI: 10.1016/j.ymeth.2021.04.017.
Abstract
Driver drowsiness is one of the main factors leading to road fatalities and hazards in the transportation industry. Electroencephalography (EEG) has been considered as one of the best physiological signals to detect drivers' drowsy states, since it directly measures neurophysiological activities in the brain. However, designing a calibration-free system for driver drowsiness detection with EEG is still a challenging task, as EEG suffers from serious mental and physical drifts across different subjects. In this paper, we propose a compact and interpretable Convolutional Neural Network (CNN) to discover shared EEG features across different subjects for driver drowsiness detection. We incorporate the Global Average Pooling (GAP) layer in the model structure, allowing the Class Activation Map (CAM) method to be used for localizing regions of the input signal that contribute most for classification. Results show that the proposed model can achieve an average accuracy of 73.22% on 11 subjects for 2-class cross-subject EEG signal classification, which is higher than conventional machine learning methods and other state-of-art deep learning methods. It is revealed by the visualization technique that the model has learned biologically explainable features, e.g., Alpha spindles and Theta burst, as evidence for the drowsy state. It is also interesting to see that the model uses artifacts that usually dominate the wakeful EEG, e.g., muscle artifacts and sensor drifts, to recognize the alert state. The proposed model illustrates a potential direction to use CNN models as a powerful tool to discover shared features related to different mental states across different subjects from EEG signals.
27
Abou Jaoude M, Sun H, Pellerin KR, Pavlova M, Sarkis RA, Cash SS, Westover MB, Lam AD. Expert-level automated sleep staging of long-term scalp electroencephalography recordings using deep learning. Sleep 2021; 43:5849506. PMID: 32478820. DOI: 10.1093/sleep/zsaa112.
Abstract
STUDY OBJECTIVES Develop a high-performing, automated sleep scoring algorithm that can be applied to long-term scalp electroencephalography (EEG) recordings. METHODS Using a clinical dataset of polysomnograms from 6,431 patients (MGH-PSG dataset), we trained a deep neural network to classify sleep stages based on scalp EEG data. The algorithm consists of a convolutional neural network for feature extraction, followed by a recurrent neural network that extracts temporal dependencies of sleep stages. The algorithm's inputs are four scalp EEG bipolar channels (F3-C3, C3-O1, F4-C4, and C4-O2), which can be derived from any standard PSG or scalp EEG recording. We initially trained the algorithm on the MGH-PSG dataset and used transfer learning to fine-tune it on a dataset of long-term (24-72 h) scalp EEG recordings from 112 patients (scalpEEG dataset). RESULTS The algorithm achieved a Cohen's kappa of 0.74 on the MGH-PSG holdout testing set and cross-validated Cohen's kappa of 0.78 after optimization on the scalpEEG dataset. The algorithm also performed well on two publicly available PSG datasets, demonstrating high generalizability. Performance on all datasets was comparable to the inter-rater agreement of human sleep staging experts (Cohen's kappa ~ 0.75 ± 0.11). The algorithm's performance on long-term scalp EEGs was robust over a wide age range and across common EEG background abnormalities. CONCLUSION We developed a deep learning algorithm that achieves human expert level sleep staging performance on long-term scalp EEG recordings. This algorithm, which we have made publicly available, greatly facilitates the use of large long-term EEG clinical datasets for sleep-related research.
Affiliation(s)
- Maurice Abou Jaoude
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- Haoqi Sun
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- Kyle R Pellerin
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- Milena Pavlova
- Department of Neurology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA
- Rani A Sarkis
- Department of Neurology, Brigham and Women's Hospital and Harvard Medical School, Boston, MA
- Sydney S Cash
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- M Brandon Westover
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA
- Alice D Lam
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA
28
Motor Imagery Classification Based on a Recurrent-Convolutional Architecture to Control a Hexapod Robot. Mathematics 2021. DOI: 10.3390/math9060606.
Abstract
Advances in the field of Brain-Computer Interfaces (BCIs) aim, among other applications, to improve the movement capacities of people suffering from the loss of motor skills. The main challenge in this area is to achieve real-time, accurate bio-signal processing for pattern recognition, especially in Motor Imagery (MI). Meaningful interaction between brain signals and controllable machines requires instantaneous decoding of brain data. In this study, an embedded BCI system based on fist MI signals is developed. It uses an Emotiv EPOC+ Brainwear®, an Altera SoCKit® development board, and a hexapod robot for testing locomotion imagery commands. The system is tested on detecting the imagined closing and opening of the left and right hand to control the robot's locomotion. Electroencephalogram (EEG) signals associated with the motion tasks are sensed over the human sensorimotor cortex. The SoCKit then processes the data to identify the commands that control the robot's locomotion. The classification of MI-EEG signals from the F3, F4, FC5, and FC6 sensors is performed using a hybrid architecture of Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. This approach exploits a deep learning recognition model to build a real-time embedded BCI system, where signal processing must be seamless and precise. The proposed method is evaluated using k-fold cross-validation on both a newly created dataset and a public Scientific Data dataset. Our dataset comprises 2400 three-second trials from four test subjects imagining closing and opening a fist. The recognition tasks reach 84.69% and 79.2% accuracy using our data and the state-of-the-art dataset, respectively. Numerical results support that motor imagery EEG signals can be successfully applied in BCI systems to control mobile robots and related applications such as intelligent vehicles.
29
Fu M, Wang Y, Chen Z, Li J, Xu F, Liu X, Hou F. Deep Learning in Automatic Sleep Staging With a Single Channel Electroencephalography. Front Physiol 2021; 12:628502. [PMID: 33746774 PMCID: PMC7965953 DOI: 10.3389/fphys.2021.628502] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Accepted: 02/01/2021] [Indexed: 11/13/2022] Open
Abstract
This study centers on automatic sleep staging with single-channel electroencephalography (EEG) and reports some significant findings for sleep staging. In this study, we proposed a deep learning-based network integrating an attention mechanism with a bidirectional long short-term memory neural network (AT-BiLSTM) to classify wakefulness, rapid eye movement (REM) sleep, and the non-REM (NREM) sleep stages N1, N2 and N3. The AT-BiLSTM network outperformed five other networks and achieved an accuracy of 83.78%, a Cohen's kappa coefficient of 0.766 and a macro F1-score of 82.14% on the PhysioNet Sleep-EDF Expanded dataset, and an accuracy of 81.72%, a Cohen's kappa coefficient of 0.751 and a macro F1-score of 80.74% on the DREAMS Subjects dataset. The proposed AT-BiLSTM network even achieved higher accuracy than existing methods based on traditional feature extraction. Moreover, better performance was obtained by the AT-BiLSTM network with the frontal EEG derivations than with EEG channels located over the central, occipital or parietal lobes. As EEG signals can be easily acquired using dry electrodes on the forehead, our findings might provide a promising solution for automatic sleep scoring without feature extraction and may prove very useful for the screening of sleep disorders.
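The attention mechanism named above typically scores each BiLSTM hidden state and takes a softmax-weighted sum, letting the network emphasise the most stage-relevant portions of an epoch. A framework-free sketch of that pooling step (the weights here are random for illustration; in the network they are learned):

```python
import numpy as np

def attention_pool(H, w):
    """Softmax-weighted sum of per-timestep hidden states H (T x d),
    scored against a vector w (d,). Returns context vector and weights."""
    scores = H @ w                       # one relevance score per timestep
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                 # attention weights, sum to 1
    return alpha @ H, alpha              # context vector (d,), weights (T,)

rng = np.random.default_rng(0)
H = rng.standard_normal((30, 8))   # e.g. 30 timesteps of BiLSTM outputs
w = rng.standard_normal(8)
context, alpha = attention_pool(H, w)
print(context.shape, round(float(alpha.sum()), 6))  # → (8,) 1.0
```

The context vector then feeds the final stage classifier in place of, say, the last hidden state alone.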
Affiliation(s)
- Mingyu Fu, School of Science, China Pharmaceutical University, Nanjing, China
- Yitian Wang, School of Science, China Pharmaceutical University, Nanjing, China
- Zixin Chen, College of Engineering, University of California, Berkeley, Berkeley, CA, United States
- Jin Li, College of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
- Fengguo Xu, Key Laboratory of Drug Quality Control and Pharmacovigilance, China Pharmaceutical University, Nanjing, China
- Xinyu Liu, School of Science, China Pharmaceutical University, Nanjing, China
- Fengzhen Hou, School of Science, China Pharmaceutical University, Nanjing, China
30
Gong S, Xing K, Cichocki A, Li J. Deep Learning in EEG: Advance of the Last Ten-Year Critical Period. IEEE Trans Cogn Dev Syst 2021. [DOI: 10.1109/tcds.2021.3079712] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
31
Mentink LJ, Thomas J, Melis RJF, Olde Rikkert MGM, Overeem S, Claassen JAHR. Home-EEG assessment of possible compensatory mechanisms for sleep disruption in highly irregular shift workers - The ANCHOR study. PLoS One 2020; 15:e0237622. [PMID: 33382689 PMCID: PMC7774973 DOI: 10.1371/journal.pone.0237622] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2020] [Accepted: 11/12/2020] [Indexed: 11/30/2022] Open
Abstract
Study objectives While poor sleep quality has been related to an increased risk of Alzheimer's disease, long-time shift workers (maritime pilots) did not manifest evidence of early Alzheimer's disease in a recent study. We explored two hypotheses of possible compensatory mechanisms for sleep disruption: increased efficiency in generating deep sleep during workweeks (model 1) and rebound sleep during rest weeks (model 2). Methods We used data from ten male maritime pilots (mean age: 51.6±2.4 years) with a history of approximately 18 years of irregular shift work. Subjective sleep quality was assessed with the Pittsburgh Sleep Quality Index (PSQI). A single-lead EEG device was used to investigate sleep in the home/work environment, quantifying total sleep time (TST), deep sleep time (DST), and deep sleep time percentage (DST%). Using multilevel models, we studied the sleep architecture of maritime pilots over time, at the transition from a workweek to a rest week. Results Maritime pilots reported worse sleep quality in workweeks compared to rest weeks (PSQI = 8.2±2.2 vs. 3.9±2.0; p<0.001). Model 1 showed a trend towards an increase in DST% of 0.6% per day during the workweek (p = 0.08). Model 2 did not display an increase in DST% in the rest week (p = 0.87). Conclusions Our findings indicated that increased efficiency in generating deep sleep during workweeks is a more likely compensatory mechanism for sleep disruption in this maritime pilot cohort than rebound sleep during rest weeks. Compensatory mechanisms for poor sleep quality might mitigate the sleep disruption-related risk of developing Alzheimer's disease. These results should serve as a starting point for future studies including larger, more diverse populations of shift workers.
Affiliation(s)
- Lara J. Mentink
  - Department of Geriatric Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
  - Radboud Alzheimer Centre, Radboud University Medical Center, Nijmegen, The Netherlands
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Jana Thomas
  - Department of Geriatric Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
  - Radboud Alzheimer Centre, Radboud University Medical Center, Nijmegen, The Netherlands
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- René J. F. Melis
  - Department of Geriatric Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
  - Radboud Alzheimer Centre, Radboud University Medical Center, Nijmegen, The Netherlands
  - Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, The Netherlands
- Marcel G. M. Olde Rikkert
  - Department of Geriatric Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
  - Radboud Alzheimer Centre, Radboud University Medical Center, Nijmegen, The Netherlands
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Sebastiaan Overeem
  - Sleep Medicine Center Kempenhaeghe, Heeze, The Netherlands
  - Biomedical Diagnostics Group, Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Jurgen A. H. R. Claassen
  - Department of Geriatric Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
  - Radboud Alzheimer Centre, Radboud University Medical Center, Nijmegen, The Netherlands
  - Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
32
Tortora S, Ghidoni S, Chisari C, Micera S, Artoni F. Deep learning-based BCI for gait decoding from EEG with LSTM recurrent neural network. J Neural Eng 2020; 17:046011. [DOI: 10.1088/1741-2552/ab9842] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
33
Garcia-Molina G, Kalyan B, Aquino A. Closed-loop Electroencephalogram-based modulated audio to fall and deepen sleep faster. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:565-568. [PMID: 33018052 DOI: 10.1109/embc44109.2020.9175689] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
The transition from wake to sleep is a continuum that is well characterized by the electroencephalogram (EEG) power spectral ratio (ρ) between the beta (15 to 30 Hz) and theta (4 to 8 Hz) bands. From wake to sleep, the value of ρ gradually decreases. We have designed and implemented a single-EEG-signal closed-loop system that leverages ρ to modulate the volume of pink-noise audio such that the volume becomes gradually softer as sleep initiates. A proof-of-concept trial of this system found that it reduced both sleep latency and the latency to deep sleep.
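The control signal ρ described above is a ratio of band powers. A minimal sketch of its estimation, assuming a single EEG channel and Welch's method (sampling rate, window length, and the synthetic signals are illustrative, not the paper's):

```python
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    """Total PSD in the band [lo, hi] Hz (uniform bin spacing assumed)."""
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()

def beta_theta_ratio(eeg, fs):
    """rho = beta (15-30 Hz) power / theta (4-8 Hz) power."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    return band_power(freqs, psd, 15, 30) / band_power(freqs, psd, 4, 8)

# Synthetic check: a beta-dominated signal should give a larger rho than a
# theta-dominated one, mirroring the wake-to-sleep decrease in rho.
fs = 250
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(0)
wake = np.sin(2 * np.pi * 20 * t) + 0.2 * rng.standard_normal(t.size)
sleep = np.sin(2 * np.pi * 6 * t) + 0.2 * rng.standard_normal(t.size)
print(beta_theta_ratio(wake, fs) > beta_theta_ratio(sleep, fs))  # True
```

In the closed-loop system, a running estimate like this would be mapped to audio volume, softening the pink noise as ρ falls.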
34
Garcia-Molina G, Tsoneva T, Neff A, Salazar J, Bresch E, Grossekathofer U, Pastoor S, Aquino A. Hybrid in-phase and continuous auditory stimulation significantly enhances slow wave activity during sleep. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2019:4052-4055. [PMID: 31946762 DOI: 10.1109/embc.2019.8857678] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Recent evidence has shown that enhancing slow-wave activity (SWA) during sleep has positive effects on cognitive, metabolic, and autonomic function. We have developed an integrated consumer device that automatically detects sleep stages from a single electroencephalogram (EEG) signal and delivers auditory stimulation in a closed-loop manner. The stimulation was delivered in blocks of 15 auditory tones, with blocks separated from each other by at least 15 seconds. The first tone in a block was synchronized to the up-state of a detected slow wave, while subsequent tones were separated by a constant 1-second inter-tone interval. The system was tested in a study involving 22 participants, and SWA enhancement (average 45.8%; p=0.0027) was found in 19/22 participants.
35
Honrado C, McGrath JS, Reale R, Bisegna P, Swami NS, Caselli F. A neural network approach for real-time particle/cell characterization in microfluidic impedance cytometry. Anal Bioanal Chem 2020; 412:3835-3845. [PMID: 32189012 DOI: 10.1007/s00216-020-02497-9] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2019] [Revised: 01/30/2020] [Accepted: 02/06/2020] [Indexed: 11/26/2022]
Abstract
Microfluidic applications such as active particle sorting or selective enrichment require particle classification techniques that are capable of working in real time. In this paper, we explore the use of neural networks for fast label-free particle characterization during microfluidic impedance cytometry. A recurrent neural network is designed to process data from a novel impedance chip layout for enabling real-time multiparametric analysis of the measured impedance data streams. As demonstrated with both synthetic and experimental datasets, the trained network is able to characterize with good accuracy size, velocity, and cross-sectional position of beads, red blood cells, and yeasts, with a unitary prediction time of 0.4 ms. The proposed approach can be extended to other device designs and cell types for electrical parameter extraction. This combination of microfluidic impedance cytometry and machine learning can serve as a stepping stone to real-time single-cell analysis and sorting.
Affiliation(s)
- Carlos Honrado, Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA, 22904, USA
- John S McGrath, Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA, 22904, USA
- Riccardo Reale, Department of Civil Engineering and Computer Science, University of Rome Tor Vergata, Via del Politecnico 1, 00133, Rome, Italy
- Paolo Bisegna, Department of Civil Engineering and Computer Science, University of Rome Tor Vergata, Via del Politecnico 1, 00133, Rome, Italy
- Nathan S Swami, Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA, 22904, USA
- Frederica Caselli, Department of Civil Engineering and Computer Science, University of Rome Tor Vergata, Via del Politecnico 1, 00133, Rome, Italy
36
Rim B, Sung NJ, Min S, Hong M. Deep Learning in Physiological Signal Data: A Survey. Sensors (Basel, Switzerland) 2020; 20:E969. [PMID: 32054042 PMCID: PMC7071412 DOI: 10.3390/s20040969] [Citation(s) in RCA: 66] [Impact Index Per Article: 16.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/07/2020] [Revised: 01/31/2020] [Accepted: 02/09/2020] [Indexed: 12/11/2022]
Abstract
Deep Learning (DL), a successful and promising approach for discriminative and generative tasks, has recently proved its high potential in 2D medical imaging analysis; however, physiological data in the form of 1D signals have yet to benefit fully from this approach for the desired medical tasks. Therefore, in this paper we survey the latest scientific research on deep learning in physiological signal data such as the electromyogram (EMG), electrocardiogram (ECG), electroencephalogram (EEG), and electrooculogram (EOG). We found 147 papers published between January 2018 and October 2019 inclusive across various journals and publishers. The objective of this paper is to conduct a detailed study to comprehend, categorize, and compare the key parameters of the deep learning approaches that have been used in physiological signal analysis for various medical applications. The key parameters that we review are the input data type, deep learning task, deep learning model, training architecture, and dataset sources; these are the main parameters that affect system performance. We taxonomize the research works using deep learning methods in physiological signal analysis based on: (1) a physiological signal data perspective, such as data modality and medical application; and (2) a deep learning concept perspective, such as training architecture and dataset sources.
Affiliation(s)
- Beanbonyka Rim, Department of Computer Science, Soonchunhyang University, Asan 31538, Korea
- Nak-Jun Sung, Department of Computer Science, Soonchunhyang University, Asan 31538, Korea
- Sedong Min, Department of Medical IT Engineering, Soonchunhyang University, Asan 31538, Korea
- Min Hong, Department of Computer Software Engineering, Soonchunhyang University, Asan 31538, Korea
37
Xu Z, Yang X, Sun J, Liu P, Qin W. Sleep Stage Classification Using Time-Frequency Spectra From Consecutive Multi-Time Points. Front Neurosci 2020; 14:14. [PMID: 32047422 PMCID: PMC6997491 DOI: 10.3389/fnins.2020.00014] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2019] [Accepted: 01/08/2020] [Indexed: 11/20/2022] Open
Abstract
Sleep stage classification is an open challenge in the field of sleep research. Considering the relatively small size of the datasets used by previous studies, in this paper we used the Sleep Heart Health Study dataset from the National Sleep Research Resource database. A long short-term memory (LSTM) network using the time-frequency spectra of several consecutive 30 s time points as input was used to perform sleep stage classification. Four classical convolutional neural networks (CNNs) using the time-frequency spectrum of a single 30 s time point as input were used for comparison. Results showed that, when considering the temporal information within the time-frequency spectrum of a single 30 s time point, the LSTM network had better classification performance than the CNNs. Moreover, when additional temporal information was taken into consideration, the classification performance of the LSTM network gradually increased. It reached its peak when temporal information from three consecutive 30 s time points was considered, with a classification accuracy of 87.4% and a Cohen's Kappa coefficient of 0.8216. Compared with CNNs, our results indicate that for sleep stage classification, the temporal information within the data, or within the features extracted from the data, should be considered. LSTM networks take this temporal information into account and thus may be more suitable for sleep stage classification.
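The input representation described above, a time-frequency spectrum per 30 s epoch with neighbouring epochs appended for temporal context, can be sketched as follows (sampling rate, FFT parameters, and the context width k are illustrative assumptions, not the paper's values):

```python
import numpy as np
from scipy.signal import spectrogram

def epoch_spectra(eeg, fs, epoch_s=30):
    """One time-frequency spectrum (freq x time) per 30 s epoch."""
    n = fs * epoch_s
    epochs = eeg[: len(eeg) // n * n].reshape(-1, n)
    return np.stack([spectrogram(e, fs=fs, nperseg=fs * 2)[2] for e in epochs])

def with_context(spectra, k=1):
    """Concatenate each epoch's spectrum with its k neighbours on either
    side along the time axis, yielding one sequence per scored epoch."""
    padded = np.pad(spectra, ((k, k), (0, 0), (0, 0)), mode="edge")
    return np.stack([
        np.concatenate(padded[i : i + 2 * k + 1], axis=-1)
        for i in range(len(spectra))
    ])

fs = 100
eeg = np.random.default_rng(1).standard_normal(fs * 30 * 5)  # 5 epochs
s = epoch_spectra(eeg, fs)   # (epochs, freq_bins, time_bins)
c = with_context(s, k=1)     # 3 consecutive epochs of context per sample
print(s.shape[0], c.shape[-1] == 3 * s.shape[-1])  # → 5 True
```

Each context-augmented spectrum would then be fed to the LSTM as a sequence of per-time-bin feature vectors; k=1 corresponds to the three-epoch window at which the paper's performance peaked.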
Affiliation(s)
- Ziliang Xu, Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Sciences and Technology, Xidian University, Xi'an, China
- Xuejuan Yang, Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Sciences and Technology, Xidian University, Xi'an, China
- Jinbo Sun, Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Sciences and Technology, Xidian University, Xi'an, China
- Peng Liu, Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Sciences and Technology, Xidian University, Xi'an, China
- Wei Qin, Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Sciences and Technology, Xidian University, Xi'an, China
38
Dubost C, Humbert P, Benizri A, Tourtier JP, Vayatis N, Vidal PP. Selection of the Best Electroencephalogram Channel to Predict the Depth of Anesthesia. Front Comput Neurosci 2019; 13:65. [PMID: 31632257 PMCID: PMC6779712 DOI: 10.3389/fncom.2019.00065] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2018] [Accepted: 09/06/2019] [Indexed: 11/13/2022] Open
Abstract
The precise cerebral dynamics of anesthetic action are a challenge for neuroscientists. This explains why there is no gold standard for monitoring the Depth of Anesthesia (DoA) and why experimental studies may use several electroencephalogram (EEG) channels, ranging from 2 to 128. Our study aimed at finding the scalp area providing valuable information about brain activity under general anesthesia (GA), in order to select the optimal EEG channel to characterize the DoA. We included 30 patients undergoing elective, minor surgery under GA and used a 32-channel EEG to record their electrical brain activity. In addition, we recorded their physiological parameters and the BIS monitor. Each individual EEG channel's data were processed to test its ability to differentiate awake from asleep states. Due to the strict quality criteria adopted for the EEG data and the difficulties of the real-life setting of the study, only 8 patients' recordings were taken into consideration in the final analysis. Using 2 classification algorithms, we identified the optimal channels to discriminate between asleep and awake states: the frontal F8 and temporal T7 were the two best channels to monitor DoA. Then, using only data from the F8 channel, we tried to minimize the number of features required to discriminate between the awake and asleep states. The best algorithm turned out to be Gaussian Naïve Bayes (GNB), requiring only 5 features (Area Under the ROC Curve, AUC, of 0.93 ± 0.04). This finding may pave the way to improving the assessment of DoA by combining one EEG channel's recordings with multimodal physiological monitoring of the brain state under GA. Further work is needed to see whether these results are valid for assessing the depth of sedation in the ICU.
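The best-performing classifier here, Gaussian Naive Bayes over 5 features, models each feature as an independent per-class Gaussian and picks the class with the highest posterior. A minimal from-scratch sketch on synthetic awake/asleep feature vectors (the data are invented for illustration, not the study's EEG features):

```python
import numpy as np

class GaussianNB:
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # Log-likelihood of each sample under each class's
        # independent-Gaussian feature model, plus the log prior.
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None]
                     + (X[:, None] - self.mu) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

rng = np.random.default_rng(0)
# 5 synthetic features per epoch: "awake" (class 1) shifted from "asleep" (0).
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
y = np.repeat([0, 1], 50)
acc = np.mean(GaussianNB().fit(X, y).predict(X) == y)
print(acc > 0.9)  # True: well-separated classes classify near-perfectly
```

With only 5 features and closed-form fitting, such a model is cheap enough for the kind of real-time DoA monitoring the abstract envisions.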
Affiliation(s)
- Clement Dubost
  - Department of Anesthesiology and Intensive Care, Begin Military Hospital, Saint-Mande, France
  - Cognac-G Cognition and Action Group, CNRS, Université Paris Descartes, SSA, Paris, France
- Pierre Humbert
  - Centre de Mathematiques et de Leurs Applications, CNRS, ENS Paris-Saclay, Université Paris-Saclay, Cachan, France
- Arno Benizri
  - Cognac-G Cognition and Action Group, CNRS, Université Paris Descartes, SSA, Paris, France
- Jean-Pierre Tourtier
  - Department of Anesthesiology and Intensive Care, Begin Military Hospital, Saint-Mande, France
- Nicolas Vayatis
  - Centre de Mathematiques et de Leurs Applications, CNRS, ENS Paris-Saclay, Université Paris-Saclay, Cachan, France
- Pierre-Paul Vidal
  - Cognac-G Cognition and Action Group, CNRS, Université Paris Descartes, SSA, Paris, France
  - Institute of Information and Control, Hangzhou Dianzi University, Zhejiang, China
39
Craik A, He Y, Contreras-Vidal JL. Deep learning for electroencephalogram (EEG) classification tasks: a review. J Neural Eng 2019; 16:031001. [PMID: 30808014 DOI: 10.1088/1741-2552/ab0ab5] [Citation(s) in RCA: 421] [Impact Index Per Article: 84.2] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
OBJECTIVE Electroencephalography (EEG) analysis has been an important tool with applications in neuroscience, neural engineering (e.g. brain-computer interfaces, BCIs), and even commercial products. Many of the analytical tools used in EEG studies have used machine learning to uncover relevant information for neural classification and neuroimaging. Recently, the availability of large EEG data sets and advances in machine learning have together led to the deployment of deep learning architectures, especially in the analysis of EEG signals and in understanding the information they may contain about brain functionality. The robust automatic classification of these signals is an important step towards making the use of EEG more practical in many applications and less reliant on trained professionals. Towards this goal, a systematic review of the literature on deep learning applications to EEG classification was performed to address the following critical questions: (1) Which EEG classification tasks have been explored with deep learning? (2) What input formulations have been used for training the deep networks? (3) Are there specific deep learning network structures suitable for specific types of tasks? APPROACH A systematic literature review of EEG classification using deep learning was performed on the Web of Science and PubMed databases, resulting in 90 identified studies. Those studies were analyzed based on type of task, EEG preprocessing methods, input type, and deep learning architecture. MAIN RESULTS For EEG classification tasks, convolutional neural networks, recurrent neural networks, and deep belief networks outperform stacked auto-encoders and multi-layer perceptron neural networks in classification accuracy. The tasks that used deep learning fell into six general groups: emotion recognition, motor imagery, mental workload, seizure detection, event-related potential detection, and sleep scoring. For each type of task, we describe the specific input formulation, major characteristics, and end-classifier recommendations found through this review. SIGNIFICANCE This review summarizes the current practices and performance outcomes in the use of deep learning for EEG classification. Practical suggestions on the selection of many hyperparameters are provided in the hope that they will promote or guide the deployment of deep learning to EEG datasets in future research.
40
Korkalainen H, Leppänen T, Aakko J, Nikkonen S, Kainulainen S, Leino A, Duce B, Afara IO, Myllymaa S, Töyräs J. Accurate Deep Learning-Based Sleep Staging in a Clinical Population with Suspected Obstructive Sleep Apnea. IEEE J Biomed Health Inform 2019; 24:2073-2081. [DOI: 10.1109/jbhi.2019.2951346] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]