1
Ramezani M, Kim JH, Liu X, Ren C, Alothman A, De-Eknamkul C, Wilson MN, Cubukcu E, Gilja V, Komiyama T, Kuzum D. High-density transparent graphene arrays for predicting cellular calcium activity at depth from surface potential recordings. Nature Nanotechnology 2024; 19:504-513. [PMID: 38212523] [DOI: 10.1038/s41565-023-01576-z]
Abstract
Optically transparent neural microelectrodes enable simultaneous electrophysiological recording from the brain surface alongside optical imaging and stimulation of neural activity. A remaining challenge is to scale the electrode dimensions down to single-cell size and to increase density, so that neural activity can be recorded with high spatial resolution across large areas and nonlinear neural dynamics can be captured. Here we developed high-density transparent graphene microelectrode arrays of up to 256 channels, with ultrasmall openings and a large transparent recording area free of gold extensions in the field of view. We used platinum nanoparticles to overcome the quantum capacitance limit of graphene and scale the microelectrode diameter down to 20 µm, and introduced an interlayer-doped double-layer graphene to prevent open-circuit failures. In multimodal experiments, we combined cortical potential recordings from the microelectrode arrays with two-photon calcium imaging of the mouse visual cortex. Visually evoked responses were spatially localized for high-frequency bands, particularly the multiunit activity band, and multiunit activity power was correlated with cellular calcium activity. Leveraging this, we used dimensionality reduction techniques and neural networks to show that single-cell and average calcium activities can be decoded from surface potentials recorded by high-density transparent graphene arrays.
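The "dimensionality reduction + decoder" pipeline described in this abstract can be illustrated with a minimal sketch: project high-channel-count band-power features onto a few principal components, then fit a ridge readout to a calcium trace. All data below are synthetic stand-ins (latent dimensions, channel counts, and noise levels are invented for illustration), not the paper's recordings or its actual network decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: surface multiunit-activity (MUA) band power across
# 256 channels driven by a few shared latents, plus a cellular calcium
# trace tied to one latent (illustrative numbers only).
T, C, k_lat = 500, 256, 5
latents = rng.standard_normal((T, k_lat))
loadings = rng.standard_normal((k_lat, C))
mua_power = latents @ loadings + 0.1 * rng.standard_normal((T, C))
calcium = latents[:, 0]

# Dimensionality reduction (PCA via SVD) followed by a ridge readout,
# loosely mirroring a "dimensionality reduction + decoder" pipeline.
X = mua_power - mua_power.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Z = X @ Vt[:20].T                       # project onto top-20 components

lam = 1e-3
w = np.linalg.solve(Z.T @ Z + lam * np.eye(20), Z.T @ calcium)
pred = Z @ w

r = np.corrcoef(pred, calcium)[0, 1]    # decoding quality (correlation)
print(r > 0.8)
```

Because the channels share low-rank structure, the top components capture the latent activity and a linear readout suffices here; the paper's neural-network decoders address the nonlinear case.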
Affiliation(s)
- Mehrdad Ramezani
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA
- Jeong-Hoon Kim
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA
- Xin Liu
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA
- Chi Ren
- Department of Neurosciences, University of California San Diego, La Jolla, CA, USA
- Abdullah Alothman
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA
- Chawina De-Eknamkul
- Department of NanoEngineering, University of California San Diego, La Jolla, CA, USA
- Madison N Wilson
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA
- Ertugrul Cubukcu
- Department of NanoEngineering, University of California San Diego, La Jolla, CA, USA
- Vikash Gilja
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA
- Takaki Komiyama
- Department of Neurosciences, University of California San Diego, La Jolla, CA, USA
- Duygu Kuzum
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA
2
Kothe C, Hanada G, Mullen S, Mullen T. On decoding of rapid motor imagery in a diverse population using a high-density NIRS device. Frontiers in Neuroergonomics 2024; 5:1355534. [PMID: 38529269] [PMCID: PMC10961353] [DOI: 10.3389/fnrgo.2024.1355534]
Abstract
Introduction: Functional near-infrared spectroscopy (fNIRS) aims to infer cognitive states, such as the type of movement imagined by a study participant in a given trial, using an optical method that differentiates between oxygenation states of blood in the brain and thereby, indirectly, between neuronal activity levels. We present findings from an fNIRS study that tested the applicability of a high-density (>3,000 channels) NIRS device for short-duration (2 s) left/right hand motor imagery decoding in a diverse, but not explicitly balanced, subject population. A secondary aim was to assess relationships between data quality, self-reported demographic characteristics, and brain-computer interface (BCI) performance, with no subjects rejected from recruitment or analysis. Methods: BCI performance was quantified using several published methods, including subject-specific and subject-independent approaches, along with a high-density fNIRS decoder previously validated in a separate study. Results: Decoding of motor imagery in this population proved extremely challenging across all tested methods. Overall accuracy of the best-performing method (the high-density decoder) was 59.1 +/- 6.7% after excluding subjects with almost no optode-scalp contact over motor cortex, and 54.7 +/- 7.6% when all recorded sessions were included. Deeper investigation revealed that signal quality, hemodynamic responses, and BCI performance were all strongly impacted by the hair-phenotype and demographic factors under investigation, with over half of the variance in signal quality explained by demographic factors alone. Discussion: Our results contribute to the literature on the challenges of using current-generation NIRS devices on subjects with long, dense, dark, and less pliable hair types, and on the resulting potential for bias. They confirm the need for increased focus on these populations and for accurate reporting of data-rejection choices across subject intake, curation, and final analysis, and they signal a need for NIRS optode designs better optimized for the general population to facilitate more robust and inclusive research outcomes.
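The "over half of variance in signal quality explained by demographic factors" claim is the kind of statistic obtained from a variance-explained (R²) regression. A minimal sketch, with entirely hypothetical covariate coding and effect sizes (not the study's actual variables or model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-subject "signal quality" scores and coded hair/demographic
# covariates; the coefficients below are invented for illustration.
n = 60
hair_type = rng.integers(0, 4, n).astype(float)   # e.g. an ordinal hair category
hair_len = rng.standard_normal(n)                 # e.g. standardized hair length
quality = -0.8 * hair_type + 0.3 * hair_len + 0.5 * rng.standard_normal(n)

# Fraction of variance in signal quality explained by the covariates (R^2),
# via ordinary least squares with an intercept.
X = np.column_stack([np.ones(n), hair_type, hair_len])
beta, *_ = np.linalg.lstsq(X, quality, rcond=None)
resid = quality - X @ beta
r2 = 1 - resid.var() / quality.var()
print(0.0 < r2 < 1.0)
```

With real data the same computation (or an ANOVA decomposition) quantifies how much of the quality variance the demographic factors account for.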
3
Xie H, Yang H, Zhang P, Dong Z, He J, Jiang M, Wang L, Yuan Z, Chen X. Evaluation of the learning state of online video courses based on functional near infrared spectroscopy. Biomedical Optics Express 2024; 15:1486-1499. [PMID: 38495712] [PMCID: PMC10942712] [DOI: 10.1364/boe.516174]
Abstract
Studying brain activity during online learning can ground research on brain function in real online learning situations and promote the scientific evaluation of online education. Existing research focuses on enhancing learning effects and evaluating the learning process of online learning from an attentional perspective. We aimed to comparatively analyze differences in prefrontal cortex (PFC) activity across resting, studying, and question-answering states in online learning, and to establish a classification model of the learning state useful for the evaluation of online learning. Nineteen university students performed experiments in which functional near-infrared spectroscopy (fNIRS) monitored the prefrontal lobes. The rest period at the start of the experiment served as the resting state, watching 13 videos as the learning state, and answering questions after each video as the answering state. Differences in activity between the three states were analyzed using a general linear model; 1 s fNIRS data clips, and features including averages from the three states, were classified using machine-learning models such as support vector machines and k-nearest neighbors. The results show that the resting state is more active than the learning state in the dorsolateral prefrontal cortex, that answering questions is the most active of the three states across the entire PFC, and that k-nearest neighbors achieves 98.5% classification accuracy on 1 s fNIRS data. These results clarify the differences in PFC activity between resting, learning, and question-answering states in online learning scenarios and support the feasibility of an online learning assessment system built on fNIRS and machine learning.
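The k-nearest-neighbors classification of short fNIRS clips described above can be sketched in a few lines. The data here are synthetic (hypothetical channel counts and state-dependent activation levels), standing in for 1 s clips labeled rest/learning/answering:

```python
import numpy as np

rng = np.random.default_rng(2)

def knn_predict(train_X, train_y, X, k=5):
    """Plain k-nearest-neighbors: Euclidean distance, majority vote."""
    preds = []
    for x in X:
        d = np.linalg.norm(train_X - x, axis=1)
        nearest = train_y[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

# Hypothetical 1 s fNIRS "clips": mean HbO2 per channel, with each state
# (0=rest, 1=learning, 2=answering) at a different activation level.
n_per, ch = 40, 16
means = [0.0, 0.8, 1.6]
X = np.vstack([m + 0.3 * rng.standard_normal((n_per, ch)) for m in means])
y = np.repeat([0, 1, 2], n_per)

idx = rng.permutation(len(y))           # shuffled train/test split
tr, te = idx[:90], idx[90:]
acc = (knn_predict(X[tr], y[tr], X[te]) == y[te]).mean()
print(acc > 0.85)
```

With well-separated state features, as in this toy setup, kNN is near-perfect; the study's 98.5% on real clips suggests similarly strong separability in PFC activity.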
Affiliation(s)
- Hui Xie
- Center for Biomedical-Photonics and Molecular Imaging, Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710126, China
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi’an, Shaanxi 710126, China
- Huiting Yang
- Center for Biomedical-Photonics and Molecular Imaging, Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710126, China
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi’an, Shaanxi 710126, China
- Pengyuan Zhang
- Center for Biomedical-Photonics and Molecular Imaging, Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710126, China
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi’an, Shaanxi 710126, China
- Zexiao Dong
- Center for Biomedical-Photonics and Molecular Imaging, Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710126, China
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi’an, Shaanxi 710126, China
- Jiangshan He
- Center for Biomedical-Photonics and Molecular Imaging, Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710126, China
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi’an, Shaanxi 710126, China
- Mingzhe Jiang
- Innovation Center for Advanced Medical Imaging and Intelligent Medicine, Guangzhou Institute of Technology, Xidian University, Guangzhou, Guangdong 51055, China
- Lin Wang
- School of Computer Science and Engineering, Xi’an University of Technology, Xi’an, Shaanxi 710048, China
- Zhen Yuan
- Faculty of Health Sciences, University of Macau, Macau, 999078, China
- Xueli Chen
- Center for Biomedical-Photonics and Molecular Imaging, Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-Scale Life Information, School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710126, China
- Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education, Xi’an, Shaanxi 710126, China
- Innovation Center for Advanced Medical Imaging and Intelligent Medicine, Guangzhou Institute of Technology, Xidian University, Guangzhou, Guangdong 51055, China
4
Nagarajan A, Robinson N, Ang KK, Chua KSG, Chew E, Guan C. Transferring a deep learning model from healthy subjects to stroke patients in a motor imagery brain-computer interface. J Neural Eng 2024; 21:016007. [PMID: 38091617] [DOI: 10.1088/1741-2552/ad152f]
Abstract
Objective. Motor imagery (MI) brain-computer interfaces (BCIs) based on electroencephalography (EEG) have been developed primarily for stroke rehabilitation; however, due to limited stroke data, current deep learning methods for cross-subject classification rely on data from healthy subjects. This study assesses the feasibility of applying MI-BCI models pre-trained on data from healthy individuals to detect MI in stroke patients. Approach. We introduce a transfer learning approach in which features from two-class MI data of healthy individuals are used to detect MI in stroke patients, and compare the results with those obtained from analyses within the stroke data. Experiments were conducted using Deep ConvNet and state-of-the-art subject-specific machine-learning MI classifiers, evaluated on OpenBMI two-class MI-EEG data from healthy subjects and two-class MI-versus-rest data from stroke patients. Main results. Through domain adaptation of a model pre-trained on healthy subjects' data, an average MI detection accuracy of 71.15% (±12.46%) was achieved across 71 stroke patients; the accuracy of the pre-trained model increased by 18.15% after transfer learning (p<0.001). The proposed transfer learning method also outperforms the subject-specific results achieved by Deep ConvNet and FBCSP, with significant improvements of 7.64% (p<0.001) and 5.55% (p<0.001), respectively. Notably, healthy-to-stroke transfer learning achieved performance similar to stroke-to-stroke transfer learning, with no significant difference (p>0.05). Explainable-AI analyses of the transfer models identified channel-relevance patterns indicating contributions from bilateral motor, frontal, and parietal cortical regions to MI detection in stroke patients. Significance. Transfer learning from healthy subjects to stroke patients can enhance the clinical use of BCI algorithms by overcoming the challenge of insufficient clinical data for optimal training.
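The pre-train-then-fine-tune pattern behind this transfer learning approach can be sketched with a logistic-regression stand-in for the deep model: train on abundant "source" (healthy) trials, then warm-start brief training on scarce "target" (stroke) trials. Everything below is synthetic and illustrative (a shared toy discriminative direction replaces real EEG features); it is not the paper's Deep ConvNet or its adaptation procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, w=None, lr=0.5, steps=400):
    """Logistic regression by gradient descent; pass w to warm-start (transfer)."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

d = 8
w_true = np.ones(d) / np.sqrt(d)        # shared discriminative direction (toy)

# Abundant "healthy" MI trials -> pre-train.
Xh = rng.standard_normal((400, d))
yh = (Xh @ w_true > 0).astype(float)
w_pre = train(Xh, yh)

# Scarce "stroke" trials from a slightly shifted domain -> brief fine-tuning.
Xp = 1.2 * rng.standard_normal((20, d)) + 0.2
yp = (Xp @ w_true > 0).astype(float)
w_ft = train(Xp, yp, w=w_pre.copy(), lr=0.1, steps=50)

# Evaluate on held-out "stroke" trials.
Xt = 1.2 * rng.standard_normal((200, d)) + 0.2
yt = (Xt @ w_true > 0).astype(float)
acc = ((sigmoid(Xt @ w_ft) > 0.5) == yt.astype(bool)).mean()
print(acc > 0.8)
```

The warm start carries the source-domain decision boundary into the target domain, so only a small target dataset is needed to adapt it, which is the core argument of the paper.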
Affiliation(s)
- Aarthy Nagarajan
- School of Computer Science and Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798, Singapore
- Neethu Robinson
- School of Computer Science and Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798, Singapore
- Kai Keng Ang
- School of Computer Science and Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798, Singapore
- Institute for Infocomm Research, Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, Singapore 138632, Singapore
- Karen Sui Geok Chua
- Department of Rehabilitation Medicine, Tan Tock Seng Hospital, 11 Jln Tan Tock Seng, Singapore 308433, Singapore
- Effie Chew
- National University Health System, 1E Kent Ridge Road, Singapore 119228, Singapore
- Cuntai Guan
- School of Computer Science and Engineering, Nanyang Technological University, 50 Nanyang Ave, Singapore 639798, Singapore
5
Chen J, Xia Y, Zhou X, Vidal Rosas E, Thomas A, Loureiro R, Cooper RJ, Carlson T, Zhao H. fNIRS-EEG BCIs for Motor Rehabilitation: A Review. Bioengineering (Basel) 2023; 10:1393. [PMID: 38135985] [PMCID: PMC10740927] [DOI: 10.3390/bioengineering10121393]
Abstract
Motor impairment profoundly affects a significant number of individuals, creating substantial demand for rehabilitation services. Through brain-computer interfaces (BCIs), people with severe motor disabilities can communicate more easily with others and control appropriately designed robotic prosthetics, thereby (at least partially) restoring their motor abilities. BCIs thus play a pivotal role in promoting smoother communication and interaction between individuals with motor impairments and others, and they enable direct control of assistive devices through brain signals. Their most significant potential lies in motor rehabilitation, where BCIs can offer real-time feedback to assist users in training and continuously monitor the brain's state throughout the rehabilitation process. Hybridizing different brain-sensing modalities, especially functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG), has shown great promise for building BCIs to rehabilitate motor-impaired populations: EEG, as a well-established methodology, can be combined with fNIRS so that each compensates for the other's inherent disadvantages, achieving higher combined temporal and spatial resolution. This paper reviews recent work on hybrid fNIRS-EEG BCIs for motor rehabilitation, emphasizing methodologies that use motor imagery. An overview of the BCI system and its key components is given, followed by an introduction to various devices, the strengths and weaknesses of different signal-processing techniques, and applications in neuroscience and clinical contexts. The review concludes by discussing challenges and opportunities for future development.
Affiliation(s)
- Jianan Chen
- HUB of Intelligent Neuro-engineering (HUBIN), Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London (UCL), Stanmore, London HA7 4LP, UK; (J.C.); (Y.X.); (X.Z.); (A.T.)
- Yunjia Xia
- HUB of Intelligent Neuro-engineering (HUBIN), Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London (UCL), Stanmore, London HA7 4LP, UK; (J.C.); (Y.X.); (X.Z.); (A.T.)
- DOT-HUB, Department of Medical Physics & Biomedical Engineering, University College London (UCL), London WC1E 6BT, UK; (E.V.R.); (R.J.C.)
- Xinkai Zhou
- HUB of Intelligent Neuro-engineering (HUBIN), Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London (UCL), Stanmore, London HA7 4LP, UK; (J.C.); (Y.X.); (X.Z.); (A.T.)
- Ernesto Vidal Rosas
- DOT-HUB, Department of Medical Physics & Biomedical Engineering, University College London (UCL), London WC1E 6BT, UK; (E.V.R.); (R.J.C.)
- Digital Health and Biomedical Engineering, School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK
- Alexander Thomas
- HUB of Intelligent Neuro-engineering (HUBIN), Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London (UCL), Stanmore, London HA7 4LP, UK; (J.C.); (Y.X.); (X.Z.); (A.T.)
- Aspire CREATe, Department of Orthopaedics & Musculoskeletal Science, University College London (UCL), Stanmore, London HA7 4LP, UK; (R.L.); (T.C.)
- Rui Loureiro
- Aspire CREATe, Department of Orthopaedics & Musculoskeletal Science, University College London (UCL), Stanmore, London HA7 4LP, UK; (R.L.); (T.C.)
- Robert J. Cooper
- DOT-HUB, Department of Medical Physics & Biomedical Engineering, University College London (UCL), London WC1E 6BT, UK; (E.V.R.); (R.J.C.)
- Tom Carlson
- Aspire CREATe, Department of Orthopaedics & Musculoskeletal Science, University College London (UCL), Stanmore, London HA7 4LP, UK; (R.L.); (T.C.)
- Hubin Zhao
- HUB of Intelligent Neuro-engineering (HUBIN), Aspire CREATe, IOMS, Division of Surgery and Interventional Science, University College London (UCL), Stanmore, London HA7 4LP, UK; (J.C.); (Y.X.); (X.Z.); (A.T.)
- DOT-HUB, Department of Medical Physics & Biomedical Engineering, University College London (UCL), London WC1E 6BT, UK; (E.V.R.); (R.J.C.)
6
Zhang C, Chu H, Ma M. Decoding Algorithm of Motor Imagery Electroencephalogram Signal Based on CLRNet Network Model. Sensors (Basel) 2023; 23:7694. [PMID: 37765751] [PMCID: PMC10536050] [DOI: 10.3390/s23187694]
Abstract
EEG decoding based on motor imagery is an important part of brain-computer interface technology and a key determinant of overall BCI performance. Because motor imagery EEG features are complex to analyze, traditional classification models rely heavily on the signal preprocessing and feature design stages. End-to-end deep neural networks have been applied to motor imagery EEG classification with good results. This study combines a convolutional neural network (CNN) and a long short-term memory (LSTM) network to extract spatial information and temporal correlations from EEG signals, and uses cross-layer connectivity to reduce gradient dispersion and enhance the stability of the overall network model. The effectiveness of the model, which integrates CNN, BiLSTM, and ResNet (called CLRNet in this study), is demonstrated on BCI Competition IV dataset 2a. The network combining CNN and BiLSTM achieved 87.0% accuracy in classifying four motor imagery classes; adding ResNet-style cross-layer connectivity further improved accuracy by 2.0%, to 89.0%. The experimental results show that CLRNet performs well in decoding the motor imagery EEG dataset and offers a strong solution for motor imagery EEG decoding in brain-computer interface research.
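The cross-layer (residual) connectivity credited here with stabilizing training can be shown in a minimal numpy forward pass: the block's output is the input plus a learned transform of it, so the identity path always survives. The shapes and weights below are toy stand-ins for CLRNet's CNN/BiLSTM feature maps, not the actual architecture.

```python
import numpy as np

rng = np.random.default_rng(4)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Two-layer transform with a cross-layer (identity) shortcut:
    out = relu(x + F(x)), the ResNet idea used for stability."""
    return relu(x + relu(x @ W1) @ W2)

# Toy "feature" sequence standing in for CNN/BiLSTM outputs (time x features).
x = rng.standard_normal((10, 32))
W1 = 0.1 * rng.standard_normal((32, 32))
W2 = 0.1 * rng.standard_normal((32, 32))

out = residual_block(x, W1, W2)
print(out.shape)
```

Because gradients can flow through the identity shortcut unchanged, stacking such blocks avoids the gradient dispersion the abstract mentions; with zero weights the block reduces exactly to `relu(x)`.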
Affiliation(s)
- Chaozhu Zhang
- Department of Electronics Electricity and Control, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
7
Wu Z, Tang X, Wu J, Huang J, Shen J, Hong H. Portable deep-learning decoder for motor imaginary EEG signals based on a novel compact convolutional neural network incorporating spatial-attention mechanism. Med Biol Eng Comput 2023; 61:2391-2404. [PMID: 37095297] [DOI: 10.1007/s11517-023-02840-z]
Abstract
Due to high computational requirements, deep-learning decoders for motor imagery (MI) electroencephalography (EEG) signals are usually implemented on bulky, heavy computing devices that are inconvenient for physical activity. To date, deep-learning techniques have not been extensively explored in independent portable brain-computer interface (BCI) devices. In this study, we proposed a high-accuracy MI EEG decoder that incorporates a spatial-attention mechanism into a convolutional neural network (CNN), and deployed it on a fully integrated single-chip microcontroller unit (MCU). After the CNN model was trained on a workstation computer using the GigaDB MI dataset (52 subjects), its parameters were extracted and converted to build a deep-learning architecture interpreter on the MCU. For comparison, an EEG-Inception model was also trained on the same dataset and deployed on the MCU. The results indicate that our deep-learning model can independently decode imagined left-/right-hand motions. The mean accuracy of the proposed compact CNN reaches 96.75 ± 2.41% (8 channels: Frontocentral3 (FC3), FC4, Central1 (C1), C2, Central-Parietal1 (CP1), CP2, C3, and C4), versus 76.96 ± 19.08% for EEG-Inception (6 channels: FC3, FC4, C1, C2, CP1, and CP2). To the best of our knowledge, this is the first portable deep-learning decoder for MI EEG signals. The findings demonstrate high-accuracy deep-learning decoding of MI EEG in a portable mode, with great implications for hand-disabled patients. Being computationally inexpensive and convenient for real-life application, our portable system can be used to develop artificial-intelligence wearable BCI devices.
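The spatial-attention idea, scoring each EEG channel and reweighting it before convolution, can be sketched as follows. The scoring weights here are random placeholders (in the paper they are learned end-to-end), and the 8-channel montage is taken from the abstract:

```python
import numpy as np

rng = np.random.default_rng(5)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# One hypothetical 8-channel epoch (channels x samples), using the montage
# listed in the abstract.
channels = ["FC3", "FC4", "C1", "C2", "CP1", "CP2", "C3", "C4"]
eeg = rng.standard_normal((8, 250))

# Spatial attention: score each channel (a learned scoring vector is faked
# with small random weights), softmax into attention weights, then reweight
# the channels before they enter the CNN.
score_w = 0.05 * rng.standard_normal(250)
scores = eeg @ score_w                  # one score per channel
att = softmax(scores)                   # attention weights, sum to 1
weighted = eeg * att[:, None]           # channel-reweighted CNN input

print(weighted.shape)
```

The softmax forces the network to allocate a fixed attention budget across channels, which is what lets a compact model concentrate on the sensorimotor electrodes that matter for left/right imagery.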
Affiliation(s)
- Zhanxiong Wu
- School of Electronic Information, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China
- Xudong Tang
- School of Electronic Information, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China
- Jinhui Wu
- School of Electronic Information, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China
- Jiye Huang
- School of Electronic Information, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China
- Jian Shen
- Neurosurgery Department, The First Affiliated Hospital of Zhejiang University School of Medicine, Zhejiang University, Hangzhou, 310003, Zhejiang, China
- Hui Hong
- School of Electronic Information, Hangzhou Dianzi University, Hangzhou, 310018, Zhejiang, China
8
Ma D, Izzetoglu M, Holtzer R, Jiao X. Deep Learning Based Walking Tasks Classification in Older Adults Using fNIRS. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3437-3447. [PMID: 37594868] [PMCID: PMC11044905] [DOI: 10.1109/tnsre.2023.3306365]
Abstract
Decline in gait features is common in older adults and indicates increased risk of disability, morbidity, and mortality. Under dual-task walking (DTW) conditions, further degradation in the performance of both the gait task and the secondary cognitive task has been found in older adults and is significantly correlated with falls history. Cortical control of gait during DTW in older adults, specifically in the prefrontal cortex (PFC) as measured by functional near-infrared spectroscopy (fNIRS), has recently been studied; however, automatic classification of the differences in cognitive activation under single- and dual-task gait conditions has not been extensively examined. In this paper, treating single-task walking (STW) as a lower-attention walking state and DTW as a higher-attention walking state, we formulate the problem as automatic detection of low- versus high-attention walking states and apply deep-learning methods to classify them. Analysis of the data samples reveals characteristic differences between HbO2 and Hb values, which we subsequently use as additional features. We engineer the fNIRS features into a 3-channel image and apply various image-processing techniques for data augmentation to enhance the performance of the deep-learning models. Experimental results show that pre-trained deep-learning models, fine-tuned on the collected fNIRS dataset together with gender and cognitive status information, achieve around 81% classification accuracy, about 10% higher than traditional machine-learning algorithms. We report additional metrics, including the confusion matrix, precision, and F1 score, as well as accuracy for two-way classification between condition pairings. An extensive ablation study evaluating the voxel locations, input-image channels, zero-padding, and pre-training of the deep-learning model shows that using a pre-trained model, all voxel locations, and HbO2 - Hb as the third channel of the input image achieves the best classification accuracy.
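The feature engineering described above, stacking HbO2, Hb, and their difference as the three channels of an image for a pre-trained vision model, is straightforward to sketch. The 16x16 spatial layout below is a hypothetical placeholder for the paper's voxel-by-time arrangement:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical fNIRS sample: oxy- and deoxy-hemoglobin values arranged on a
# 2D grid (e.g. voxels x time); real dimensions depend on the montage.
hbo2 = rng.standard_normal((16, 16))
hb = rng.standard_normal((16, 16))

# Stack HbO2, Hb, and HbO2 - Hb as the three "image" channels, matching the
# ablation finding that the difference works best as the third channel.
img = np.stack([hbo2, hb, hbo2 - hb], axis=-1)   # shape (16, 16, 3)
print(img.shape)
```

Shaping the data as a 3-channel image is what allows off-the-shelf ImageNet-pre-trained backbones to be fine-tuned on fNIRS, which the paper reports gives roughly a 10% accuracy gain over classical methods.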
9
Yu M, Peterson MR, Cherukuri V, Hehnly C, Mbabazi-Kabachelor E, Mulondo R, Kaaya BN, Broach JR, Schiff SJ, Monga V. Infection diagnosis in hydrocephalus CT images: a domain enriched attention learning approach. J Neural Eng 2023; 20. [PMID: 37253355] [PMCID: PMC11099590] [DOI: 10.1088/1741-2552/acd9ee]
Abstract
Objective. Hydrocephalus is the leading indication for pediatric neurosurgical care worldwide. Distinguishing postinfectious hydrocephalus (PIH) from non-postinfectious hydrocephalus, and identifying the pathogen involved in PIH, is crucial for developing an appropriate treatment plan; accurate identification currently requires clinical diagnosis by neuroscientists and microbiological analysis, which are time-consuming and expensive. In this study, we develop a domain-enriched AI method for computed tomography (CT)-based infection diagnosis in hydrocephalic imagery. State-of-the-art (SOTA) convolutional neural network (CNN) approaches are an attractive neural-engineering solution for this problem, since pathogen-specific features must be discovered, yet black-box deep networks often need unrealistically abundant training data and are not easily interpreted. Approach. We propose a novel brain attention regularizer that encourages the CNN to focus inside brain regions during feature extraction and decision making. The approach is then extended to a hybrid 2D/3D network that mines inter-slice information, with a new regularization strategy enabling collaboration between the 2D and 3D branches. Main results. Our method achieves SOTA results on a CURE Children's Hospital of Uganda dataset, with 95.8% accuracy in hydrocephalus classification and 84% in pathogen classification; statistical analysis demonstrates significant improvements over existing SOTA alternatives. Significance. Such attention-regularized learning is particularly beneficial when training data are limited, enhancing generalizability. To the best of our knowledge, these findings are among the first efforts toward interpretable AI-based models for classifying hydrocephalus and its underlying pathogen from CT scans.
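One simple way to realize the brain-attention-regularizer idea is to penalize whatever fraction of the network's attention mass falls outside a brain mask. The sketch below is an illustrative guess at such a penalty on a tiny grid; the paper's actual loss formulation and masks are not reproduced here.

```python
import numpy as np

def brain_attention_penalty(att_map, brain_mask):
    """Penalize attention mass outside the brain mask, nudging the CNN to
    ground its features and decisions in brain tissue (sketch only)."""
    att = att_map / att_map.sum()          # normalize to a distribution
    return float(att[~brain_mask].sum())   # fraction of attention off-brain

mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True                      # toy "brain" region of a CT slice

inside = np.where(mask, 1.0, 0.0)          # attention entirely on the brain
spread = np.ones((8, 8))                   # uniform attention everywhere

print(brain_attention_penalty(inside, mask))   # 0.0
print(brain_attention_penalty(spread, mask))
```

Added to the classification loss, such a term is zero when attention stays inside the brain and grows as attention leaks onto skull or background, which is also what makes the resulting saliency maps easier to interpret.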
Affiliation(s)
- Mingzhao Yu
- Department of Electrical Engineering, the Pennsylvania State University, University Park, PA 16801, United States of America
- Center for Neural Engineering, the Pennsylvania State University, University Park, PA 16801, United States of America
- Mallory R Peterson
- Center for Neural Engineering, the Pennsylvania State University, University Park, PA 16801, United States of America
- Venkateswararao Cherukuri
- Department of Electrical Engineering, the Pennsylvania State University, University Park, PA 16801, United States of America
- Center for Neural Engineering, the Pennsylvania State University, University Park, PA 16801, United States of America
- Christine Hehnly
- College of Medicine, the Pennsylvania State University, University Park, PA 16801, United States of America
- James R Broach
- College of Medicine, the Pennsylvania State University, University Park, PA 16801, United States of America
- Steven J Schiff
- Department of Neurosurgery, Yale University, New Haven, CT 06510, United States of America
- Vishal Monga
- Department of Electrical Engineering, the Pennsylvania State University, University Park, PA 16801, United States of America
10
Lakshminarayanan K, Shah R, Daulat SR, Moodley V, Yao Y, Madathil D. The effect of combining action observation in virtual reality with kinesthetic motor imagery on cortical activity. Front Neurosci 2023; 17:1201865. [PMID: 37383098] [PMCID: PMC10299830] [DOI: 10.3389/fnins.2023.1201865]
Abstract
Introduction: Various techniques have been used to improve motor imagery (MI), such as immersive virtual reality (VR) and kinesthetic rehearsal. While electroencephalography (EEG) has been used to study differences in brain activity between VR-based action observation and kinesthetic motor imagery (KMI), their combined effect has not been investigated. Prior research has demonstrated that VR-based action observation can enhance MI by providing both visual information and embodiment, the perception of oneself as part of the observed entity; KMI, in turn, has been found to produce brain activity similar to physically performing a task. We therefore hypothesized that using VR to provide an immersive visual scenario for action observation while participants performed kinesthetic motor imagery would significantly improve MI-related cortical activity. Methods: Fifteen participants (9 male, 6 female) performed kinesthetic motor imagery of three hand tasks (drinking, wrist flexion-extension, and grabbing), both with and without VR-based action observation. Results: Combining VR-based action observation with KMI enhances brain rhythmic patterns and provides better task differentiation than KMI without action observation. Discussion: These findings suggest that VR-based action observation alongside kinesthetic motor imagery can improve motor imagery performance.
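The "brain rhythmic patterns" such MI studies quantify are typically sensorimotor band-power changes, e.g. mu-rhythm (8-13 Hz) event-related desynchronization (ERD) during imagery relative to rest. A minimal sketch on synthetic signals (the amplitudes and the simple FFT periodogram are illustrative choices, not this study's analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(7)
fs = 250
t = np.arange(fs * 2) / fs   # two 2 s epochs

# Toy sensorimotor signals: a 10 Hz mu rhythm that is strong at rest and
# shrinks (desynchronizes) during motor imagery.
rest = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))
mi = 0.8 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))

def band_power(x, fs, lo=8, hi=13):
    """Mean spectral power in [lo, hi] Hz from the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    sel = (freqs >= lo) & (freqs <= hi)
    return psd[sel].mean()

# Event-related desynchronization: percent power change vs. rest (negative).
erd = 100 * (band_power(mi, fs) - band_power(rest, fs)) / band_power(rest, fs)
print(erd < 0)
```

A stronger (more negative) ERD and clearer separation between task-specific patterns is the kind of enhancement the VR + KMI condition is reported to produce.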
Affiliation(s)
- Kishor Lakshminarayanan
- Neuro-Rehabilitation Lab, School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Rakshit Shah
- Department of Chemical and Biomedical Engineering, Cleveland State University, Cleveland, OH, United States
- Sohail R. Daulat
- Department of Physiology, University of Arizona College of Medicine – Tucson, Tucson, AZ, United States
- Viashen Moodley
- Arizona Center for Hand to Shoulder Surgery, Phoenix, AZ, United States
- Yifei Yao
- Soft Tissue Biomechanics Laboratory, School of Biomedical Engineering, Med-X Research Institute, Shanghai Jiao Tong University, Shanghai, China
- Deepa Madathil
- Jindal Institute of Behavioural Sciences, O.P. Jindal Global University, Sonipat, Haryana, India

11
Zhang Y, Qiu S, He H. Multimodal motor imagery decoding method based on temporal spatial feature alignment and fusion. J Neural Eng 2023; 20. [PMID: 36854181 DOI: 10.1088/1741-2552/acbfdf] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2022] [Accepted: 02/28/2023] [Indexed: 03/02/2023]
Abstract
Objective: A motor imagery-based brain-computer interface (MI-BCI) translates spontaneous movement intention from the brain to external devices. Multimodal MI-BCIs that use multiple neural signals contain rich common and complementary information and are promising for enhancing decoding accuracy. However, the heterogeneity of the different modalities makes the multimodal decoding task difficult, and how to use multimodal information effectively remains an open question.
Approach: In this study, a multimodal MI decoding neural network was proposed. Spatial feature alignment losses were designed to enhance the feature representations extracted from the heterogeneous data and to guide the fusion of features from different modalities. An attention-based modality fusion module was built to align and fuse the features in the temporal dimension. To evaluate the proposed decoding method, a five-class MI electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) dataset was constructed.
Main results and significance: The comparison experiments showed that the proposed decoding method achieved higher decoding accuracy than the compared methods on both the self-collected dataset and a public dataset. Ablation results verified the effectiveness of each part of the proposed method, and feature-distribution visualizations showed that the proposed losses enhance the feature representations of the EEG and fNIRS modalities. The proposed method based on EEG and fNIRS modalities has significant potential for improving the decoding performance of MI tasks.
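As an illustration of the attention-based modality fusion this abstract describes, the following framework-free sketch scores each modality's feature vector against a query vector and returns a softmax-weighted sum; the feature dimension, the query, and all names are illustrative assumptions, not the authors' implementation.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_fuse(modality_feats, query):
    """Score each modality's feature vector against a (learned) query,
    then return the attention-weighted sum of the vectors."""
    scores = [sum(q * f for q, f in zip(query, feats)) for feats in modality_feats]
    weights = softmax(scores)
    dim = len(modality_feats[0])
    fused = [sum(w * feats[i] for w, feats in zip(weights, modality_feats))
             for i in range(dim)]
    return fused, weights

# Toy example: one EEG and one fNIRS feature vector in a shared 3-D space.
eeg = [1.0, 0.0, 2.0]
fnirs = [0.5, 1.0, 0.0]
fused, weights = attention_fuse([eeg, fnirs], query=[1.0, 1.0, 1.0])
```

In a trained network the query would be learned jointly with the feature extractors; here it is fixed only to show the weighting mechanism.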
Affiliation(s)
- Yukun Zhang
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, People's Republic of China; Laboratory of Brain Atlas and Brain-Inspired Intelligence, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, People's Republic of China
- Shuang Qiu
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, People's Republic of China; Laboratory of Brain Atlas and Brain-Inspired Intelligence, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, People's Republic of China
- Huiguang He
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, People's Republic of China; Laboratory of Brain Atlas and Brain-Inspired Intelligence, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, People's Republic of China

12
Shibu CJ, Sreedharan S, Arun KM, Kesavadas C, Sitaram R. Explainable artificial intelligence model to predict brain states from fNIRS signals. Front Hum Neurosci 2023; 16:1029784. [PMID: 36741783 PMCID: PMC9892761 DOI: 10.3389/fnhum.2022.1029784] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2022] [Accepted: 11/21/2022] [Indexed: 01/20/2023] Open
Abstract
Objective: Most deep learning (DL) methods for classifying functional near-infrared spectroscopy (fNIRS) signals do so without explaining which features contribute to the classification of a task or imagery. Here we describe an explainable artificial intelligence (xAI) system that decomposes the DL model's output onto the input variables of fNIRS signals. Approach: We propose an xAI-fNIRS system consisting of a classification module and an explanation module. The classification module comprises two separately trained sliding window-based classifiers: (i) a 1-D convolutional neural network (CNN) and (ii) a long short-term memory (LSTM) network. The explanation module uses SHAP (SHapley Additive exPlanations) to explain the CNN model's output in terms of the model's input. Main results: The classification module classified two types of datasets, (a) a motor task (MT) dataset acquired from three subjects and (b) a motor imagery (MI) dataset acquired from 29 subjects, with an accuracy above 96% for both the CNN and LSTM models. The explanation module identified the channels contributing most to the classification of MI or MT, and thereby the channel locations and whether they correspond to oxy- or deoxy-hemoglobin levels at those locations. Significance: The xAI-fNIRS system can distinguish between the brain states related to overt and covert motor imagery from fNIRS signals with high classification accuracy and can explain the signal features that discriminate between the brain states of interest.
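The sliding window-based classifiers mentioned above operate on fixed-length segments of the fNIRS time series; a minimal segmentation sketch follows (the window length and step size are arbitrary illustrative choices, not values from the paper).

```python
def sliding_windows(signal, win_len, step):
    """Split a 1-D time series into overlapping fixed-length windows,
    as used to feed window-based classifiers."""
    if win_len > len(signal):
        return []
    return [signal[i:i + win_len]
            for i in range(0, len(signal) - win_len + 1, step)]

sig = list(range(10))                     # stand-in for one fNIRS channel
wins = sliding_windows(sig, win_len=4, step=2)
# With step < win_len, consecutive windows overlap, giving the classifier
# several partially shifted views of each event.
```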
Affiliation(s)
- Caleb Jones Shibu
- Department of Computer Science, University of Arizona, Tucson, AZ, United States
- Sujesh Sreedharan
- Division of Artificial Internal Organs, Department of Medical Devices Engineering, Biomedical Technology Wing, Sree Chitra Tirunal Institute for Medical Sciences and Technology, Trivandrum, India
- KM Arun
- Department of Imaging Sciences and Interventional Radiology, Sree Chitra Tirunal Institute for Medical Sciences and Technology, Trivandrum, India
- Chandrasekharan Kesavadas
- Department of Imaging Sciences and Interventional Radiology, Sree Chitra Tirunal Institute for Medical Sciences and Technology, Trivandrum, India
- Ranganatha Sitaram
- Department of Diagnostic Imaging, St. Jude Children’s Research Hospital, Memphis, TN, United States

13
Khademi Z, Ebrahimi F, Kordy HM. A review of critical challenges in MI-BCI: From conventional to deep learning methods. J Neurosci Methods 2023; 383:109736. [PMID: 36349568 DOI: 10.1016/j.jneumeth.2022.109736] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2021] [Revised: 09/20/2022] [Accepted: 10/27/2022] [Indexed: 11/08/2022]
Abstract
Brain-computer interfaces (BCIs) have achieved significant success in controlling external devices through electroencephalogram (EEG) signal processing. Motor imagery (MI)-based BCI systems bridge the brain and external devices, serving as communication tools to control, for example, wheelchairs for people with disabilities, robots, and exoskeletons. This success depends largely on machine learning (ML) approaches such as deep learning (DL) models. DL algorithms provide effective and powerful models for optimally analyzing compact and complex EEG data in MI-BCI applications. DL models with CNN architectures have revolutionized computer vision through end-to-end learning from raw data, while RNN architectures can decode EEG signals by processing sequences of time-series data. However, many challenges in the MI-BCI field affect the performance of DL models. A major challenge is the individual difference in EEG signals between subjects, which requires the model to be retrained from scratch for each new subject and thus incurs computational costs. Analyzing EEG signals is also challenging because of their low signal-to-noise ratio and non-stationary nature. Additionally, the limited size of existing datasets can lead to overfitting, which can be mitigated by transfer learning (TL) approaches. The main contributions of this study are to identify major challenges in the MI-BCI field by reviewing state-of-the-art machine learning models and then to suggest solutions to these challenges, focusing on feature selection, feature extraction, and classification methods.
Affiliation(s)
- Zahra Khademi
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Shariati Ave., Babol, Iran
- Farideh Ebrahimi
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Shariati Ave., Babol, Iran
- Hussain Montazery Kordy
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Shariati Ave., Babol, Iran

14
Sorkhi M, Jahed-Motlagh MR, Minaei-Bidgoli B, Daliri MR. Hybrid fuzzy deep neural network toward temporal-spatial-frequency features learning of motor imagery signals. Sci Rep 2022; 12:22334. [PMID: 36567362 PMCID: PMC9790889 DOI: 10.1038/s41598-022-26882-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Accepted: 12/21/2022] [Indexed: 12/26/2022] Open
Abstract
Achieving an efficient and reliable method of interpreting a user's brain waves and delivering an accurate response is essential in biomedical signal processing. However, EEG patterns exhibit high variability across time and uncertainty due to noise, which is a significant problem in mental tasks such as motor imagery. Fuzzy components may therefore help to provide a higher tolerance to noisy conditions. With the advent of deep learning and its considerable contributions to artificial intelligence and data analysis, numerous efforts have been made to evaluate and analyze brain signals. In this study, to make use of neural activity phenomena, feature extraction preprocessing is applied based on multi-scale filter-bank CSP. A hybrid series architecture named EEG-CLFCNet is then proposed, which extracts the frequency and spatial features using a compact CNN and the temporal features using an LSTM network; classification is performed by merging a fully connected network with a fuzzy neural block. The proposed method is validated on the BCI competition IV-2a dataset and compared using two hyperparameter-tuning methods, coordinate descent and the Bayesian optimization algorithm. The proposed architecture, with the fuzzy neural block and Bayesian optimization as the tuning approach, achieves better classification accuracy than the state-of-the-art literature. The remarkable performance of the proposed model, EEG-CLFCNet, and the general integration of fuzzy units into other classifiers would pave the way for enhanced MI-based BCI systems.
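The filter-bank CSP preprocessing named above rests on the classical common-spatial-patterns computation; a minimal single-band sketch under the usual whitening formulation is shown below, with toy two-channel covariances and no filter bank or multi-scale extension, and it is not the authors' code.

```python
import numpy as np

def csp_filters(cov_a, cov_b):
    """Classical CSP: whiten the composite covariance, then diagonalize
    class A in the whitened space. Rows of w are spatial filters; the
    filters for the extreme eigenvalues maximize the between-class
    variance ratio in each direction."""
    d, u = np.linalg.eigh(cov_a + cov_b)
    p = u @ np.diag(1.0 / np.sqrt(d)) @ u.T       # whitening transform
    ratios, v = np.linalg.eigh(p @ cov_a @ p.T)   # ascending eigenvalues
    w = v.T @ p                                   # spatial filters (rows)
    return w, ratios

# Toy class covariances with opposite variance structure across 2 channels.
cov_a = np.diag([4.0, 1.0])
cov_b = np.diag([1.0, 4.0])
w, ratios = csp_filters(cov_a, cov_b)
```

On this toy example the last filter (largest eigenvalue) concentrates class-A variance, giving a 4:1 variance ratio between the two classes after projection.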
Affiliation(s)
- Maryam Sorkhi
- School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
- Mohammad Reza Jahed-Motlagh
- School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran; School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Behrouz Minaei-Bidgoli
- School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
- Mohammad Reza Daliri
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran

15
Bibliometric analysis on Brain-computer interfaces in a 30-year period. APPL INTELL 2022. [DOI: 10.1007/s10489-022-04226-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
16
Mussi MG, Adams KD. EEG hybrid brain-computer interfaces: A scoping review applying an existing hybrid-BCI taxonomy and considerations for pediatric applications. Front Hum Neurosci 2022; 16:1007136. [DOI: 10.3389/fnhum.2022.1007136] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2022] [Accepted: 10/27/2022] [Indexed: 11/18/2022] Open
Abstract
Most hybrid brain-computer interfaces (hBCI) aim to improve the performance of single-input BCIs. Many combinations are possible when configuring an hBCI, such as using multiple brain input signals, different stimuli, or more than one input system. Multiple studies since 2010 have tested and analyzed such interfaces. Results and conclusions are promising, but little has been discussed as to the best approach for the pediatric population, should they use hBCI as an assistive technology. Children might face greater challenges when using BCI and might benefit from less complex interfaces. Hence, in this scoping review we included 42 papers that developed hBCI systems for the control of assistive devices or communication software, and we analyzed them through the lenses of potential use in clinical settings and with children. We extracted taxonomic categories proposed in previous studies to describe the types of interfaces that have been developed. We also proposed interface characteristics that can be observed across different hBCIs, such as the type of target, the number of targets, and the number of steps before selection. We then discussed how each of the extracted characteristics could influence the overall complexity of the system and what the best options for applications for children might be. Effectiveness and efficiency were also collected and included in the analysis. We concluded that the least complex hBCI interfaces might involve a brain input and an external input, a sequential mode of operation, and visual stimuli. Such interfaces might also use a minimal number of targets of the strobic type, with one or two steps before the final selection. We hope this review can serve as a guideline for future hBCI developments and as an incentive to design interfaces that can also serve children who have motor impairments.
17
Liu S. Applying antagonistic activation pattern to the single-trial classification of mental arithmetic. Heliyon 2022; 8:e11102. [PMID: 36303917 PMCID: PMC9593203 DOI: 10.1016/j.heliyon.2022.e11102] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Revised: 06/28/2022] [Accepted: 10/11/2022] [Indexed: 11/05/2022] Open
Abstract
Background: The application of fNIRS in the field of brain-computer interfaces (BCI) is currently a hot topic. Through an fNIRS-BCI, the brain can control external devices. A state-of-the-art BCI system has five stages: cerebral cortex signal acquisition, data pre-processing, feature selection and extraction, feature classification, and the application interface. Proper feature selection and extraction are crucial to the final fNIRS-BCI performance. This paper proposes a feature selection and extraction method for the mental arithmetic task. Specifically, we modified the antagonistic activation pattern approach and used combinations of antagonistic activation patterns to extract features, enhancing classification accuracy at low computational cost.
Methods: Experiments were conducted on an open dataset comprising fNIRS signals from eight healthy subjects performing mental arithmetic (MA) and rest tasks. First, the signals were band-pass filtered to remove noise. Second, channels were selected using prior knowledge about antagonistic activation patterns. We used the cerebral blood volume (CBV) and cerebral oxygen exchange (COE) of each selected channel to build novel attributes, yielding three attribute groups: CBV, COE, and CBV + COE. From the attributes generated by the proposed method, we calculated temporal statistical measures (average, variance, maximum, minimum, and slope), and any two of the five measures were combined as feature sets.
Main results: With the LDA, QDA, and SVM classifiers, the proposed method obtained higher classification accuracies than the baseline method. The maximum classification accuracies achieved by the proposed method are 67.45 ± 14.56% with the LDA classifier, 89.73 ± 5.71% with the QDA classifier, and 87.04 ± 6.88% with the SVM classifier. The novel method also reduced the running time by a factor of 3.75 compared with the method incorporating all channels into the feature set, thus reducing computational costs while maintaining high classification accuracy. The results were validated on another open-access dataset of MA and rest tasks from 29 healthy subjects.
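The CBV and COE attributes used above are linear combinations of the oxy- (HbO2) and deoxy-hemoglobin (HbR) signals; the sketch below assumes the common definitions CBV ∝ HbO2 + HbR and COE ∝ HbR − HbO2 (scaling constants omitted, and possibly differing from the paper), with the slope feature taken as a least-squares fit over the window.

```python
def cbv_coe(hbo, hbr):
    # CBV tracks total hemoglobin; COE tracks the deoxy/oxy balance.
    cbv = [o + r for o, r in zip(hbo, hbr)]
    coe = [r - o for o, r in zip(hbo, hbr)]
    return cbv, coe

def temporal_features(x):
    """Average, variance, maximum, minimum and least-squares slope
    of one attribute's time course."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    t_mean = (n - 1) / 2
    num = sum((t - t_mean) * (v - mean) for t, v in enumerate(x))
    den = sum((t - t_mean) ** 2 for t in range(n))
    slope = num / den
    return {"avg": mean, "var": var, "max": max(x), "min": min(x), "slope": slope}

hbo = [0.1, 0.2, 0.4, 0.7]       # toy oxy-Hb trace for one channel
hbr = [0.05, 0.03, 0.0, -0.02]   # toy deoxy-Hb trace
cbv, coe = cbv_coe(hbo, hbr)
feats = temporal_features(cbv)
```

Pairs of these five measures would then be concatenated across the selected channels to form the candidate feature sets.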
Affiliation(s)
- Shixian Liu
- Department of Mechatronics Engineering, Qingdao University of Science and Technology, Qingdao, China

18
Hosni SMI, Borgheai SB, McLinden J, Zhu S, Huang X, Ostadabbas S, Shahriari Y. A Graph-Based Nonlinear Dynamic Characterization of Motor Imagery Toward an Enhanced Hybrid BCI. Neuroinformatics 2022; 20:1169-1189. [PMID: 35907174 DOI: 10.1007/s12021-022-09595-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/05/2022] [Indexed: 12/31/2022]
Abstract
Decoding neural responses from multimodal information sources, including electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), has the transformative potential to advance hybrid brain-computer interfaces (hBCIs). However, the modest performance improvements of existing hBCIs may be attributed to the lack of computational frameworks that exploit complementary, synergistic properties in multimodal features. This study proposes a multimodal data fusion framework to represent and decode synergistic multimodal motor imagery (MI) neural responses. We hypothesize that exploiting EEG nonlinear dynamics adds a new informative dimension to the commonly combined EEG-fNIRS features and will ultimately increase the synergy between EEG and fNIRS features toward an enhanced hBCI. The EEG nonlinear dynamics were quantified by extracting graph-based recurrence quantification analysis (RQA) features to complement the commonly used spectral features in a multimodal configuration with fNIRS. The high-dimensional multimodal features were then passed to a feature selection algorithm relying on the least absolute shrinkage and selection operator (LASSO) for fused feature selection. A linear support vector machine (SVM) was then used to evaluate the framework. The mean hybrid classification performance improved by up to 15% and 4% compared with unimodal EEG and fNIRS, respectively. When the nonlinear dynamics were introduced, the proposed graph-based framework substantially increased the contribution of EEG features to hBCI classification, from 28.16% up to 52.9%, and improved performance by approximately 2%. These findings suggest that graph-based nonlinear dynamics can increase the synergy between EEG and fNIRS features for an enhanced MI response representation that is not dominated by a single modality.
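The recurrence structure underlying graph-based RQA can be illustrated with a thresholded distance matrix: treated as the adjacency matrix of a graph, it supports graph features, and the recurrence rate computed below is one of the simplest RQA measures. The embedding step, the threshold value, and the specific graph features of the paper are omitted; this is a sketch, not the authors' pipeline.

```python
def recurrence_matrix(series, eps):
    """Binary recurrence (adjacency) matrix: states i and j are linked
    when their distance falls below the threshold eps."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

def recurrence_rate(rm):
    # Fraction of recurrent pairs, excluding the trivial self-recurrences
    # on the diagonal.
    n = len(rm)
    off = sum(rm[i][j] for i in range(n) for j in range(n) if i != j)
    return off / (n * (n - 1))

x = [0.0, 0.1, 1.0, 0.05]        # toy one-dimensional state sequence
rm = recurrence_matrix(x, eps=0.2)
rr = recurrence_rate(rm)
```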
Affiliation(s)
- Sarah M I Hosni
- Department of Electrical, Computer & Biomedical Engineering, University of Rhode Island (URI), Kingston, RI, 02881, USA
- Seyyed B Borgheai
- Department of Electrical, Computer & Biomedical Engineering, University of Rhode Island (URI), Kingston, RI, 02881, USA
- John McLinden
- Department of Electrical, Computer & Biomedical Engineering, University of Rhode Island (URI), Kingston, RI, 02881, USA
- Shaotong Zhu
- Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, 02115, USA
- Xiaofei Huang
- Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, 02115, USA
- Sarah Ostadabbas
- Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, 02115, USA
- Yalda Shahriari
- Department of Electrical, Computer & Biomedical Engineering, University of Rhode Island (URI), Kingston, RI, 02881, USA

19
Eastmond C, Subedi A, De S, Intes X. Deep learning in fNIRS: a review. NEUROPHOTONICS 2022; 9:041411. [PMID: 35874933 PMCID: PMC9301871 DOI: 10.1117/1.nph.9.4.041411] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Accepted: 06/22/2022] [Indexed: 05/28/2023]
Abstract
Significance: Optical neuroimaging has become a well-established clinical and research tool for monitoring cortical activation in the human brain. Notably, the outcomes of functional near-infrared spectroscopy (fNIRS) studies depend heavily on the data-processing pipeline and classification model employed. Recently, deep learning (DL) methodologies have demonstrated fast and accurate performance in data-processing and classification tasks across many biomedical fields. Aim: We review the emerging DL applications in fNIRS studies. Approach: We first introduce some commonly used DL techniques, then summarize current DL work in some of the most active areas of the field, including brain-computer interfaces, neuro-impairment diagnosis, and neuroscience discovery. Results: Of the 63 papers considered in this review, 32 report a comparison of DL techniques with traditional machine learning techniques, and in 26 of these DL outperforms the latter in classification accuracy. In addition, eight studies use DL to reduce the amount of preprocessing typically done with fNIRS data or to increase the amount of data via data augmentation. Conclusions: Applying DL techniques to fNIRS studies has been shown to mitigate many of the hurdles present in fNIRS research, such as lengthy data preprocessing and small sample sizes, while achieving comparable or improved classification accuracy.
Affiliation(s)
- Condell Eastmond
- Center for Modeling, Simulation and Imaging for Medicine, Rensselaer Polytechnic, Department of Biomedical Engineering, Troy, New York, United States
- Aseem Subedi
- Center for Modeling, Simulation and Imaging for Medicine, Rensselaer Polytechnic, Department of Biomedical Engineering, Troy, New York, United States
- Suvranu De
- Center for Modeling, Simulation and Imaging for Medicine, Rensselaer Polytechnic, Department of Biomedical Engineering, Troy, New York, United States
- Xavier Intes
- Center for Modeling, Simulation and Imaging for Medicine, Rensselaer Polytechnic, Department of Biomedical Engineering, Troy, New York, United States

20
Bourguignon NJ, Bue SL, Guerrero-Mosquera C, Borragán G. Bimodal EEG-fNIRS in Neuroergonomics. Current Evidence and Prospects for Future Research. FRONTIERS IN NEUROERGONOMICS 2022; 3:934234. [PMID: 38235461 PMCID: PMC10790898 DOI: 10.3389/fnrgo.2022.934234] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Accepted: 06/20/2022] [Indexed: 01/19/2024]
Abstract
Neuroergonomics focuses on the brain signatures and associated mental states underlying behavior in order to design human-machine interfaces that enhance performance in the cognitive and physical domains. Brain imaging techniques such as functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) are considered key methods for achieving this goal. Recent research stresses the value of combining EEG and fNIRS to improve the mental-state decoding abilities of these interface systems, but little is known about whether these improvements generalize across different paradigms and methodologies, or about the potential for using such systems in the real world. We review 33 studies comparing mental-state decoding accuracy between bimodal EEG-fNIRS and unimodal EEG or fNIRS in several subdomains of neuroergonomics. In light of these studies, we also consider the challenges of exploiting wearable versions of these systems in real-world contexts. Overall, the studies reviewed suggest that bimodal EEG-fNIRS outperforms unimodal EEG or fNIRS despite major differences in their conceptual and methodological aspects. Much work, however, remains to be done to reach practical applications of bimodal EEG-fNIRS in naturalistic conditions. We consider these points to identify aspects of bimodal EEG-fNIRS research in which progress is expected or desired.
Affiliation(s)
- Salvatore Lo Bue
- Department of Life Sciences, Royal Military Academy of Belgium, Brussels, Belgium
- Guillermo Borragán
- Center for Research in Cognition and Neuroscience, Université Libre de Bruxelles, Brussels, Belgium

21
Erdoğan SB, Yükselen G. Four-Class Classification of Neuropsychiatric Disorders by Use of Functional Near-Infrared Spectroscopy Derived Biomarkers. SENSORS (BASEL, SWITZERLAND) 2022; 22:5407. [PMID: 35891088 PMCID: PMC9322944 DOI: 10.3390/s22145407] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Revised: 06/20/2022] [Accepted: 06/22/2022] [Indexed: 06/15/2023]
Abstract
Diagnosis of most neuropsychiatric disorders relies on subjective measures, which makes the reliability of final clinical decisions questionable. The aim of this study was to propose a machine learning-based classification approach for the objective diagnosis of three disorders of neuropsychiatric or neurological origin using functional near-infrared spectroscopy (fNIRS)-derived biomarkers. Thirteen healthy adolescents and sixty-seven patients clinically diagnosed with migraine, obsessive compulsive disorder, or schizophrenia performed a Stroop task while prefrontal cortex hemodynamics were monitored with fNIRS. Hemodynamic and cognitive features were extracted to train three supervised learning algorithms (naïve Bayes (NB), linear discriminant analysis (LDA), and support vector machines (SVM)). The performance of each algorithm in correctly predicting the class of each participant across the four classes was tested with ten runs of a ten-fold cross-validation procedure. All algorithms achieved four-class classification accuracies above 81% and specificities above 94%. SVM had the highest performance in terms of accuracy (85.1 ± 1.77%), sensitivity (84 ± 1.7%), specificity (95 ± 0.5%), precision (86 ± 1.6%), and F1-score (85 ± 1.7%). fNIRS-derived features carry no subjective-report bias when used for automated classification. The presented methodology might have significant potential for assisting in the objective diagnosis of neuropsychiatric disorders associated with frontal lobe dysfunction.
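The ten-fold cross-validation protocol above partitions the participants into disjoint folds, training on nine folds and testing on the held-out one; a minimal index-partitioning sketch is shown below (round-robin assignment is an illustrative choice, not the authors' exact split, and stratification by class is omitted).

```python
def k_fold_indices(n_samples, k):
    """Partition sample indices into k roughly equal, disjoint folds
    by round-robin assignment; each fold serves once as the test set."""
    folds = [[] for _ in range(k)]
    for i in range(n_samples):
        folds[i % k].append(i)
    return folds

# 20 toy participants split into 10 folds of 2.
folds = k_fold_indices(20, 10)
```

Repeating this with fresh shuffles gives the "ten runs" of the procedure, and averaging over runs reduces the variance of the accuracy estimate.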
22
Chen J, Wang D, Hu B, Yi W, Xu M, Chen D, Zhao Q. MCFHNet: Multi-Channel Fusion Hybrid Network for Efficient EEG-fNIRS Multi-modal Motor Imagery Decoding. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:4821-4825. [PMID: 36085621 DOI: 10.1109/embc48229.2022.9871385] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Motor imagery-based brain-computer interfaces (MI-BCI) are typical active BCIs focused on motor intention identification. Hybrid motor imagery (MI) decoding methods based on multimodal fusion of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), especially deep learning-based methods, have become popular in recent MI-BCI studies. However, the fusion strategies and network designs in deep learning-based methods are complex. To address this, we proposed the multi-channel fusion method (MCF) to simplify current fusion methods, and we designed a multi-channel fusion hybrid network (MCFHNet) based on MCF. MCFHNet combines depthwise convolutional layers, a channel attention mechanism, and bidirectional long short-term memory (Bi-LSTM) layers, enabling strong feature extraction in the spatial and temporal domains. A comparison between MCFHNet and representative deep learning-based methods on an open EEG-fNIRS dataset showed that the proposed method yields superior performance (mean accuracy of 99.641% in 5-fold cross-validation of an intra-subject experiment). This work provides a new option for multimodal MI decoding that can be applied in the rehabilitation field through hybrid BCI systems.
23
Yi H. Efficient machine learning algorithm for electroencephalogram modeling in brain–computer interfaces. Neural Comput Appl 2022. [DOI: 10.1007/s00521-020-04861-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
24
Brain Melody Interaction: Understanding Effects of Music on Cerebral Hemodynamic Responses. MULTIMODAL TECHNOLOGIES AND INTERACTION 2022. [DOI: 10.3390/mti6050035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Music elicits strong emotional reactions in people, regardless of their gender, age, or cultural background. Understanding the effects of music on brain activity can enhance existing music therapy techniques and lead to improvements in various areas of medical and affective computing research. We explore the effects of three different music genres on people’s cerebral hemodynamic responses. Functional near-infrared spectroscopy (fNIRS) signals were collected from 27 participants while they listened to 12 different pieces of music. The signals were pre-processed to reflect oxyhemoglobin (HbO2) and deoxyhemoglobin (HbR) concentrations in the brain. K-nearest neighbors (KNN), random forest (RF), and a one-dimensional (1D) convolutional neural network (CNN) were used to classify the signals, with music genre and the participants’ subjective responses as labels. The highest accuracy in distinguishing the three music genres was achieved by the deep learning models (73.4% accuracy in music genre classification and 80.5% accuracy when predicting participants’ subjective rating of the emotional content of music). This study provides strong motivation for using fNIRS signals to detect people’s emotional state while listening to music, and could support personalized music recommendations based on brain activity to improve emotional well-being.
|
25
|
Yang J, Liu L, Yu H, Ma Z, Shen T. Multi-Hierarchical Fusion to Capture the Latent Invariance for Calibration-Free Brain-Computer Interfaces. Front Neurosci 2022; 16:824471. [PMID: 35546894 PMCID: PMC9082749 DOI: 10.3389/fnins.2022.824471] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Accepted: 02/17/2022] [Indexed: 11/29/2022] Open
Abstract
Motor imagery (MI)-based brain-computer interfaces (BCI) have become a research hotspot for establishing a flexible communication channel for patients with apoplexy or degenerative pathologies. Accurate decoding of motor imagery electroencephalography (MI-EEG) signals, while essential for effective BCI systems, is still challenging due to the significant noise inherent in EEG signals and the lack of informative correlation between the signals and brain activities. Deep learning for EEG feature representation has rarely been investigated, despite its potential to improve the performance of motor imagery classification. This paper proposes a deep learning decoding method based on multi-hierarchical representation fusion (MHRF) for MI-EEG. It consists of a concurrent framework of bidirectional LSTM (Bi-LSTM) and convolutional neural network (CNN) branches to fully capture the contextual correlations of MI-EEG and its spectral features. A stacked sparse autoencoder (SSAE) is then employed to condense these two domain features into a high-level representation for cross-session and cross-subject training guidance. Experimental analysis demonstrated the efficacy and practicality of the proposed approach on a public dataset from BCI competition IV and a private one collected in our own MI task. The proposed approach serves as a robust and competitive method to improve inter-session and inter-subject transferability, adding anticipation and prospective thoughts to the practical implementation of a calibration-free BCI system.
Affiliation(s)
- Tao Shen
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
|
26
|
Khademi Z, Ebrahimi F, Kordy HM. A transfer learning-based CNN and LSTM hybrid deep learning model to classify motor imagery EEG signals. Comput Biol Med 2022; 143:105288. [PMID: 35168083 DOI: 10.1016/j.compbiomed.2022.105288] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Revised: 01/21/2022] [Accepted: 01/24/2022] [Indexed: 12/26/2022]
Abstract
In the Motor Imagery (MI)-based Brain Computer Interface (BCI), users' intention is converted into a control signal by processing a specific pattern in brain signals that reflects motor characteristics. Classification of MI Electroencephalogram (EEG) signals is restricted by the limited size of existing datasets and their low signal-to-noise ratio. Machine learning (ML) methods, particularly Deep Learning (DL), have largely overcome these limitations. In this study, three hybrid models were proposed to classify the EEG signal in the MI-based BCI. The proposed hybrid models consist of convolutional neural networks (CNN) and Long Short-Term Memory (LSTM) networks. In the first model, CNNs with different numbers of convolutional-pooling blocks (from shallow to deep) were examined, and a two-block CNN model, unaffected by the vanishing gradient problem yet able to extract desirable features, was employed; the second and third models contained pre-trained CNNs enabling the exploration of more complex features. Transfer learning and data augmentation were applied to overcome the limited size of the datasets by transferring learning from one model to another. This was achieved by employing two powerful pre-trained convolutional neural networks, ResNet-50 and Inception-v3. The continuous wavelet transform (CWT) was used to generate images for the CNN. The performance of the proposed models was evaluated on the BCI Competition IV dataset 2a. Mean accuracy values of 86%, 90%, and 92%, and mean Kappa values of 81%, 86%, and 88% were obtained for the hybrid network with the customized CNN, the hybrid network with ResNet-50, and the hybrid network with Inception-v3, respectively; the hybrid network with Inception-v3 thus outperformed the two other models.
The best result obtained in the present study improves the previous best result in the literature by 7% in terms of classification accuracy. The findings indicate that transfer learning based on a pre-trained CNN in combination with LSTM is a novel method in MI-based BCI. The study also has implications for discriminating motor imagery tasks in individual EEG recording channels and different brain regions, which could reduce computational time in future work by selecting only the most effective channels.
Affiliation(s)
- Zahra Khademi
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Shariati Ave., Babol, Iran
- Farideh Ebrahimi
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Shariati Ave., Babol, Iran
- Hussain Montazery Kordy
- Faculty of Electrical and Computer Engineering, Babol Noshirvani University of Technology, Shariati Ave., Babol, Iran
|
27
|
Khalil K, Asgher U, Ayaz Y. Novel fNIRS study on homogeneous symmetric feature-based transfer learning for brain-computer interface. Sci Rep 2022; 12:3198. [PMID: 35210460 PMCID: PMC8873341 DOI: 10.1038/s41598-022-06805-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Accepted: 01/04/2022] [Indexed: 01/23/2023] Open
Abstract
The brain-computer interface (BCI) provides an alternative means of communication between the brain and external devices by recognizing brain activities and translating them into external commands. Functional near-infrared spectroscopy (fNIRS) is becoming popular as a non-invasive modality for brain activity detection. Recent trends show that deep learning has significantly enhanced the performance of BCI systems, but its inherent bottleneck in this domain is the requirement of a vast amount of training data, lengthy recalibration times, and expensive computational resources for training deep networks. Building a high-quality, large-scale annotated dataset for deep learning-based BCI systems is exceptionally tedious, complex, and expensive. This study investigates the novel application of transfer learning to fNIRS-based BCI to address three concerns: insufficient training data, long training time, and classification accuracy. We applied symmetric homogeneous feature-based transfer learning to a convolutional neural network (CNN) designed explicitly for fNIRS data collected from twenty-six (26) participants performing the n-back task. The results suggest that the proposed method reaches its maximum saturated accuracy sooner and outperformed the traditional CNN model's average accuracy by 25.58% within the same training time, reducing training time, recalibration time, and computational resources.
Affiliation(s)
- Khurram Khalil
- National Center of Artificial Intelligence (NCAI), School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, 44000, Pakistan
- Umer Asgher
- National Center of Artificial Intelligence (NCAI), School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, 44000, Pakistan; Department of Mechatronics Engineering, College of Electrical and Mechanical Engineering, National University of Sciences and Technology (NUST), Islamabad, 44000, Pakistan
- Yasar Ayaz
- National Center of Artificial Intelligence (NCAI), School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Islamabad, 44000, Pakistan
|
28
|
Kwak Y, Song WJ, Kim SE. FGANet: fNIRS-guided Attention Network for Hybrid EEG-fNIRS Brain-Computer Interfaces. IEEE Trans Neural Syst Rehabil Eng 2022; 30:329-339. [PMID: 35130163 DOI: 10.1109/tnsre.2022.3149899] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Non-invasive brain-computer interfaces (BCIs) have been widely used for neural decoding, linking neural signals to control devices. Hybrid BCI systems using electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) have received significant attention for overcoming the limitations of EEG- and fNIRS-standalone BCI systems. However, most hybrid EEG-fNIRS BCI studies have focused on late fusion because of discrepancies in their temporal resolutions and recording locations. Despite the enhanced performance of hybrid BCIs, late fusion methods have difficulty in extracting correlated features in both EEG and fNIRS signals. Therefore, in this study, we proposed a deep learning-based early fusion structure, which combines two signals before the fully-connected layer, called the fNIRS-guided attention network (FGANet). First, 1D EEG and fNIRS signals were converted into 3D EEG and fNIRS tensors to spatially align EEG and fNIRS signals at the same time point. The proposed fNIRS-guided attention layer extracted a joint representation of EEG and fNIRS tensors based on neurovascular coupling, in which the spatially important regions were identified from fNIRS signals, and detailed neural patterns were extracted from EEG signals. Finally, the final prediction was obtained by weighting the sum of the prediction scores of the EEG and fNIRS-guided attention features to alleviate performance degradation owing to delayed fNIRS response. In the experimental results, the FGANet significantly outperformed the EEG-standalone network. Furthermore, the FGANet has 4.0% and 2.7% higher accuracy than the state-of-the-art algorithms in mental arithmetic and motor imagery tasks, respectively.
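The final decision step this abstract describes, weighting the sum of EEG and fNIRS-branch prediction scores, can be sketched in a few lines. This is a toy illustration with an assumed weight, not the paper's trained FGANet:

```python
# Sketch of weighted late-stage score combination between two modalities.
# The weight w=0.7 is an assumption for illustration, not from the paper.
import numpy as np

def fuse_scores(score_eeg, score_fnirs, w=0.7):
    """Weighted sum of per-class prediction scores from two modalities."""
    return w * np.asarray(score_eeg) + (1 - w) * np.asarray(score_fnirs)

fused = fuse_scores([0.2, 0.8], [0.6, 0.4])
print(int(np.argmax(fused)))  # class 1: 0.7*0.8 + 0.3*0.4 = 0.68 vs 0.32
```

In the paper the two score vectors come from the EEG branch and the fNIRS-guided attention branch; here they are hard-coded for clarity.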
|
29
|
Chen C, Ma Z, Liu Z, Zhou L, Wang G, Li Y, Zhao J. An Energy-Efficient Wearable Functional Near-infrared Spectroscopy System Employing Dual-level Adaptive Sampling Technique. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2022; 16:119-128. [PMID: 35133967 DOI: 10.1109/tbcas.2022.3149766] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Functional near-infrared spectroscopy (fNIRS) is a powerful medical imaging tool in brain science and psychology; it can also be employed in brain-computer interfaces (BCI) owing to its noninvasive nature and low sensitivity to artifacts. Conventional ways to detect large-area brain activity using near-infrared (NIR) technology are based on time-division or frequency-division modulation, which traverses all physical sensory channels in a specific period. To achieve higher imaging resolution or brain-task classification accuracy, an NIRS system requires higher density and more channels, which conflicts with limited battery capacity. Inspired by the functional atlas of the human brain, this paper proposes a spatial adaptive sampling (SAS) method. It changes the active channel pattern of the fNIRS system to match real-time brain activity, increasing energy efficiency without significantly reducing brain imaging quality or brain-activity classification accuracy. The average number of enabled channels is thus dramatically reduced in practice. To verify the proposed SAS technique, a wearable and flexible NIRS system has been implemented in which each channel's light-emitting diode (LED) drive circuit and photodiode (PD) detection circuit can be power-gated independently. Brain-task experiments validate the proposed method: the power consumption of the LED drive module is reduced by 46.58% compared to operation without SAS while maintaining an average brain-imaging PSNR (peak signal-to-noise ratio) of 35 dB. The brain-task classification accuracy is 80.47%, a 2.67% reduction compared to operation without the SAS technique.
|
30
|
Deep CNN model based on serial-parallel structure optimization for four-class motor imagery EEG classification. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103338] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
31
|
Kim M, Lee S, Dan I, Tak S. A deep convolutional neural network for estimating hemodynamic response function with reduction of motion artifacts in fNIRS. J Neural Eng 2022; 19. [PMID: 35038682 DOI: 10.1088/1741-2552/ac4bfc] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2021] [Accepted: 01/17/2022] [Indexed: 11/11/2022]
Abstract
OBJECTIVE Functional near-infrared spectroscopy (fNIRS) is a neuroimaging technique for monitoring hemoglobin concentration changes in a non-invasive manner. However, subject movements are often significant sources of artifacts. While several methods have been developed for suppressing this confounding noise, conventional techniques have limitations in optimally selecting model parameters across participants or brain regions. To address this shortcoming, we propose a method based on a deep convolutional neural network (CNN). APPROACH The U-net is employed as the CNN architecture. Specifically, large-scale training and testing data are generated by combining variants of the hemodynamic response function (HRF) with experimental measurements of motion noise. The neural network is then trained to reconstruct the hemodynamic response coupled to neuronal activity with a reduction of motion artifacts. MAIN RESULTS Using extensive analysis, we show that the proposed method estimates the task-related HRF more accurately than the existing methods of wavelet decomposition and autoregressive models. Specifically, the mean squared error and variance of HRF estimates based on the CNN are the smallest among all methods considered in this study. These results are more prominent when the semi-simulated data contain variants of HRF shapes and amplitudes. SIGNIFICANCE The proposed CNN method allows for accurately estimating the amplitude and shape of the HRF with a significant reduction of motion artifacts. This method may have great potential for monitoring HRF changes in real-life settings that involve excessive motion artifacts.
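For readers unfamiliar with the HRF variants mentioned above, the widely used double-gamma model (our assumption here; the paper trains on HRF variants rather than this exact closed form) can be sketched directly:

```python
# Hedged sketch of the canonical double-gamma HRF: a peak gamma density
# minus a scaled, later "undershoot" gamma density, normalized to unit peak.
import math
import numpy as np

def gamma_pdf(t, shape):
    """Gamma density with unit scale, valid for t >= 0 and shape > 1."""
    return t ** (shape - 1) * np.exp(-t) / math.gamma(shape)

def double_gamma_hrf(t, peak=6.0, undershoot=16.0, ratio=1 / 6.0):
    """Canonical HRF; parameter values are common defaults, assumed here."""
    h = gamma_pdf(t, peak) - ratio * gamma_pdf(t, undershoot)
    return h / np.abs(h).max()

t = np.arange(0.0, 30.0, 0.5)  # seconds
hrf = double_gamma_hrf(t)
print(t[np.argmax(hrf)])       # peak latency: 5.0 s for these parameters
```

Varying the peak/undershoot parameters and amplitudes yields the kind of HRF variants a denoising network can be trained against.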
Affiliation(s)
- MinWoo Kim
- School of Biomedical Convergence Engineering, Pusan National University, 49 Busandaehak-ro, Mulgeum-eup, Yangsan-si, Gyeongsangnam-do, Yangsan, 50612, Korea (the Republic of)
- Seonjin Lee
- Research Center for Bioconvergence Analysis, Korea Basic Science Institute, 162 Yeongudanji-ro, Cheongwon-gu, Ochang-eup, Cheongju, 28119, Korea (the Republic of)
- Ippeita Dan
- Faculty of Science and Engineering, Chuo University, Tama Campus, 742-1 Higashinakano, Hachioji-shi, Tokyo, 192-0393, Japan
- Sungho Tak
- Research Center for Bioconvergence Analysis, Korea Basic Science Institute, 162 Yeongudanji-ro, Cheongwon-gu, Ochang-eup, Cheongju, 28119, Korea (the Republic of)
|
32
|
Wang Z, Zhang J, Zhang X, Chen P, Wang B. Transformer Model for Functional Near-Infrared Spectroscopy Classification. IEEE J Biomed Health Inform 2022; 26:2559-2569. [PMID: 34986110 DOI: 10.1109/jbhi.2022.3140531] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Functional near-infrared spectroscopy (fNIRS) is a promising neuroimaging technology. The fNIRS classification problem has always been the focus of the brain-computer interface (BCI). Inspired by the success of Transformer based on self-attention mechanism in the fields of natural language processing and computer vision, we propose an fNIRS classification network based on Transformer, named fNIRS-T. We explore the spatial-level and channel-level representation of fNIRS signals to improve data utilization and network representation capacity. Besides, a preprocessing module, which consists of one-dimensional average pooling and layer normalization, is designed to replace filtering and baseline correction of data preprocessing. It makes fNIRS-T an end-to-end network, called fNIRS-PreT. Compared with traditional machine learning classifiers, convolutional neural network (CNN), and long short-term memory (LSTM), the proposed models obtain the best accuracy on three open-access datasets. Specifically, in the most extensive ternary classification task (30 subjects) that includes three types of overt movements, fNIRS-T, CNN, and LSTM obtain 75.49%, 72.89%, and 61.94% on test sets, respectively. Compared to traditional classifiers, fNIRS-T is at least 27.41% higher than statistical features and 6.79% higher than well-designed features. In the individual subject experiment of the ternary classification task, fNIRS-T achieves an average subject accuracy of 78.22% and surpasses CNN and LSTM by a large margin of +4.75% and +11.33%. fNIRS-PreT using raw data also achieves competitive performance to fNIRS-T. Therefore, the proposed models improve the performance of fNIRS-based BCI significantly.
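The self-attention mechanism at the core of Transformer models like the fNIRS-T described above can be illustrated in miniature. This is a loose, toy-dimension sketch of scaled dot-product attention, not the fNIRS-T implementation:

```python
# Toy single-head scaled dot-product self-attention over an fNIRS-like
# sequence (10 time steps x 8 channels). Weight matrices are random here.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Return attended outputs and the row-stochastic attention weights."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 8))
W = [rng.normal(size=(8, 8)) for _ in range(3)]
out, attn = self_attention(X, *W)
print(out.shape, bool(np.allclose(attn.sum(axis=1), 1.0)))
```

Each output row is a weighted mixture of all time steps, which is what lets such models relate distant parts of the hemodynamic signal.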
|
33
|
Arif A, Jawad Khan M, Javed K, Sajid H, Rubab S, Naseer N, Irfan Khan T. Hemodynamic Response Detection Using Integrated EEG-fNIRS-VPA for BCI. COMPUTERS, MATERIALS & CONTINUA 2022; 70:535-555. [DOI: 10.32604/cmc.2022.018318] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/04/2021] [Accepted: 04/21/2021] [Indexed: 09/01/2023]
|
34
|
Xu R, Spataro R, Allison BZ, Guger C. Brain-Computer Interfaces in Acute and Subacute Disorders of Consciousness. J Clin Neurophysiol 2022; 39:32-39. [PMID: 34474428 DOI: 10.1097/wnp.0000000000000810] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
SUMMARY Disorders of consciousness include coma, unresponsive wakefulness syndrome (also known as vegetative state), and minimally conscious state. Neurobehavioral scales such as coma recovery scale-revised are the gold standard for disorder of consciousness assessment. Brain-computer interfaces have been emerging as an alternative tool for these patients. The application of brain-computer interfaces in disorders of consciousness can be divided into four fields: assessment, communication, prediction, and rehabilitation. The operational theoretical model of consciousness that brain-computer interfaces explore was reviewed in this article, with a focus on studies with acute and subacute patients. We then proposed a clinically friendly guideline, which could contribute to the implementation of brain-computer interfaces in neurorehabilitation settings. Finally, we discussed limitations and future directions, including major challenges and possible solutions.
Affiliation(s)
- Ren Xu
- Guger Technologies OG, Schiedlberg, Austria
- Rossella Spataro
- g.tec medical engineering GmbH, Schiedlberg, Austria
- IRCCS Centro Neurolesi Bonino Pulejo, Palermo, Italy
- Brendan Z Allison
- Cognitive Science Department, University of California San Diego, La Jolla, California, U.S.A.
- Christoph Guger
- Guger Technologies OG, Schiedlberg, Austria
- g.tec medical engineering GmbH, Schiedlberg, Austria
|
35
|
Cooney C, Folli R, Coyle D. A bimodal deep learning architecture for EEG-fNIRS decoding of overt and imagined speech. IEEE Trans Biomed Eng 2021; 69:1983-1994. [PMID: 34874850 DOI: 10.1109/tbme.2021.3132861] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE Brain-computer interface (BCI) studies are increasingly leveraging different attributes of multiple signal modalities simultaneously. Bimodal data acquisition protocols combining the temporal resolution of electroencephalography (EEG) with the spatial resolution of functional near-infrared spectroscopy (fNIRS) require novel approaches to decoding. METHODS We present an EEG-fNIRS hybrid BCI that employs a new bimodal deep neural network architecture consisting of two convolutional sub-networks (subnets) to decode overt and imagined speech. Features from each subnet are fused before further feature extraction and classification. Nineteen participants performed overt and imagined speech in a novel cue-based paradigm enabling investigation of stimulus and linguistic effects on decoding. RESULTS Using the hybrid approach, classification accuracies (46.31% and 34.29% for overt and imagined speech, respectively; chance: 25%) indicated a significant improvement over EEG used independently for imagined speech (p=0.020), while tending towards significance for overt speech (p=0.098). In comparison with fNIRS, significant improvements for both speech types were achieved with bimodal decoding (p<0.001). There was a mean difference of ~12.02% between overt and imagined speech, with accuracies as high as 87.18% and 53%. Deeper subnets enhanced performance, while stimulus type affected overt and imagined speech in significantly different ways. CONCLUSION The bimodal approach was a significant improvement on unimodal results for several tasks. Results indicate the potential of multi-modal deep learning for enhancing neural signal decoding. SIGNIFICANCE This novel architecture can be used to enhance speech decoding from bimodal neural signals.
|
36
|
Moslehi AH, Davies TC. EEG Electrode Selection for a Two-Class Motor Imagery Task in a BCI Using fNIRS Prior Data. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:6627-6630. [PMID: 34892627 DOI: 10.1109/embc46164.2021.9630786] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
This study investigated the possibility of using functional near-infrared spectroscopy (fNIRS) during right- and left-hand motor imagery tasks to select an optimal set of electroencephalography (EEG) electrodes for a brain-computer interface. fNIRS has better spatial resolution, allowing areas of brain activity to be identified more readily. The ReliefF algorithm was used to identify the most reliable fNIRS channels; then, EEG electrodes adjacent to those channels were selected for classification. The study examined the proposed method using three different classifiers: linear and quadratic discriminant analyses and a support vector machine. Clinical Relevance: Reducing the number of sensors in a BCI makes the system more usable for patients with severe disabilities.
|
37
|
Zhu S, Hosni SI, Huang X, Borgheai SB, McLinden J, Shahriari Y, Ostadabbas S. A Graph-Based Feature Extraction Algorithm Towards a Robust Data Fusion Framework for Brain-Computer Interfaces. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:878-881. [PMID: 34891430 DOI: 10.1109/embc46164.2021.9630804] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
OBJECTIVE The topological information hidden in EEG spectral dynamics is ignored in the majority of existing brain-computer interface (BCI) systems. Moreover, a systematic multimodal fusion of EEG with other informative brain signals, such as functional near-infrared spectroscopy (fNIRS), towards enhancing the performance of BCI systems has not been fully investigated. In this study, we present a robust EEG-fNIRS data fusion framework utilizing a series of graph-based EEG features and investigate their performance on a motor imagery (MI) classification task. METHOD We first extract the amplitude and phase sequences of users' multi-channel EEG signals based on complex Morlet wavelet time-frequency maps, and then convert them into an undirected graph to extract EEG topological features. The graph-based features from EEG are selected by a thresholding method and fused with the temporal features from fNIRS signals after each has been selected by the least absolute shrinkage and selection operator (LASSO) algorithm. The fused features were then classified as MI task vs. baseline by a linear support vector machine (SVM) classifier. RESULTS The time-frequency graphs of EEG signals improved MI classification accuracy by ∼5% compared to graphs built on band-pass filtered temporal EEG signals. Our proposed graph-based method also showed performance comparable to classical EEG features based on power spectral density (PSD), but with a much smaller standard deviation, showing its robustness for potential use in a practical BCI system. Our fusion analysis revealed a considerable improvement of ∼17% over the highest average EEG-only accuracy and ∼3% over the highest fNIRS-only accuracy, demonstrating enhanced performance when modality fusion is used relative to single-modality outcomes.
SIGNIFICANCE Our findings indicate the potential use of the proposed data fusion framework utilizing graph-based features in hybrid BCI systems by making motor imagery inference more accurate and more robust.
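The general flavor of converting multi-channel signals into an undirected graph and reading off topological features can be sketched as follows. This is a simplified illustration with a plain correlation graph, not the authors' Morlet-wavelet time-frequency construction:

```python
# Hedged sketch: build an undirected graph from thresholded pairwise
# channel correlations, then extract simple topological (degree) features.
import numpy as np

rng = np.random.default_rng(1)
eeg = rng.normal(size=(8, 256))          # 8 channels x 256 samples (synthetic)
corr = np.corrcoef(eeg)                  # channel-by-channel correlation
adj = (np.abs(corr) > 0.1).astype(int)   # threshold -> adjacency matrix
np.fill_diagonal(adj, 0)                 # undirected graph, no self-loops
degrees = adj.sum(axis=1)                # per-node degree as a feature vector
print(adj.shape, degrees.sum() % 2 == 0) # degree sum is twice the edge count
```

In the paper, amplitude and phase sequences from time-frequency maps replace the raw channels, and the resulting graph features are LASSO-selected before fusion with fNIRS features.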
|
38
|
A novel classification framework using multiple bandwidth method with optimized CNN for brain–computer interfaces with EEG-fNIRS signals. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06202-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
39
|
Zhang B, Shi Y, Hou L, Yin Z, Chai C. TSMG: A Deep Learning Framework for Recognizing Human Learning Style Using EEG Signals. Brain Sci 2021; 11:brainsci11111397. [PMID: 34827396 PMCID: PMC8615788 DOI: 10.3390/brainsci11111397] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2021] [Revised: 10/20/2021] [Accepted: 10/21/2021] [Indexed: 11/16/2022] Open
Abstract
Educational theory claims that integrating learning style into learning-related activities can improve academic performance. Traditional methods to recognize learning styles are mostly based on questionnaires and online behavior analyses; these methods are highly subjective and inaccurate. Electroencephalography (EEG) signals have significant potential for use in measuring learning style. This study uses EEG signals to design a deep-learning-based recognition model that identifies people's learning styles from EEG features by using a non-overlapping sliding window, one-dimensional spatio-temporal convolutions, multi-scale feature extraction, global average pooling, and a group voting mechanism; this model is named the TSMG (Temporal-Spatial-Multiscale-Global) model. It solves the problem of processing EEG data of variable length, and improves learning-style recognition accuracy by nearly 5% compared with prevalent methods, while reducing computational cost by 41.93%. The proposed TSMG model can also recognize variable-length data in other fields. The authors also compiled a dataset of EEG signals (the LSEEG dataset) containing features of the learning-style processing dimension that can be used to test and compare recognition models. This dataset is also conducive to the application and further development of EEG technology for recognizing people's learning styles.
Affiliation(s)
- Bingxue Zhang
- Department of Optical-Electrical & Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Yang Shi
- Department of Optical-Electrical & Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Longfeng Hou
- Department of Energy & Power Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Zhong Yin
- Department of Optical-Electrical & Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Chengliang Chai
- Department of Optical-Electrical & Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
|
40
|
Sahonero-Alvarez G, Singh AK, Sayrafian K, Bianchi L, Roman-Gonzalez A. A Functional BCI Model by the P2731 Working Group: Transducer. BRAIN-COMPUTER INTERFACES 2021. [DOI: 10.1080/2326263x.2021.1968633] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Affiliation(s)
- Kamran Sayrafian
- Information Technology Laboratory, National Institute of Standards & Technology, Gaithersburg, USA
- Luigi Bianchi
- Civil Engineering and Computer Science Engineering Dept., Tor Vergata University of Rome, Rome, Italy
|
41
|
Zhang Y, Chen W, Lin CL, Pei Z, Chen J, Chen Z. Boosting-LDA algriothm with multi-domain feature fusion for motor imagery EEG decoding. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102983] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
42
|
Ortega P, Faisal AA. Deep learning multimodal fNIRS and EEG signals for bimanual grip force decoding. J Neural Eng 2021; 18. [PMID: 34350839 DOI: 10.1088/1741-2552/ac1ab3] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Accepted: 08/04/2021] [Indexed: 11/12/2022]
Abstract
Objective. Non-invasive brain-machine interfaces (BMIs) offer an alternative, safe and accessible way to interact with the environment. To enable meaningful and stable physical interactions, BMIs need to decode forces. Although previously addressed in the unimanual case, controlling forces from both hands would enable BMI users to perform a greater range of interactions. We here investigate the decoding of hand-specific forces. Approach. We maximise cortical information by using electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) and developing a deep-learning architecture with attention and residual layers (cnnatt) to improve their fusion. Our task required participants to generate hand-specific force profiles on which we trained and tested our deep-learning and linear decoders. Main results. The use of EEG and fNIRS improved the decoding of bimanual force and the deep-learning models outperformed the linear model. In both cases, the greatest gain in performance was due to the detection of force generation. In particular, the detection of forces was hand-specific and better for the right dominant hand, and cnnatt was better at fusing EEG and fNIRS. Consequently, the study of cnnatt revealed that forces from each hand were differently encoded at the cortical level. Cnnatt also revealed traces of the cortical activity being modulated by the level of force, which was not previously found using linear models. Significance. Our results can be applied to avoid hand cross-talk during hand force decoding to improve the robustness of BMI robotic devices. In particular, we improve the fusion of EEG and fNIRS signals and offer hand-specific interpretability of the encoded forces, which are valuable during motor rehabilitation assessment.
Affiliation(s)
- Pablo Ortega
- Brain and Behaviour Lab, Department of Bioengineering, Imperial College London, London SW7 2AZ, United Kingdom; Brain and Behaviour Lab, Department of Computing, Imperial College London, London SW7 2AZ, United Kingdom
- A Aldo Faisal
- Brain and Behaviour Lab, Department of Bioengineering, Imperial College London, London SW7 2AZ, United Kingdom; Brain and Behaviour Lab, Department of Computing, Imperial College London, London SW7 2AZ, United Kingdom; Data Science Institute, Imperial College London, London, United Kingdom; MRC London Institute of Medical Sciences, SW7 2AZ London, United Kingdom

43
Lu HY, Lorenc ES, Zhu H, Kilmarx J, Sulzer J, Xie C, Tobler PN, Watrous AJ, Orsborn AL, Lewis-Peacock J, Santacruz SR. Multi-scale neural decoding and analysis. J Neural Eng 2021; 18. [PMID: 34284369 PMCID: PMC8840800 DOI: 10.1088/1741-2552/ac160f] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2020] [Accepted: 07/20/2021] [Indexed: 12/15/2022]
Abstract
Objective. Complex spatiotemporal neural activity encodes rich information related to behavior and cognition. Conventional research has focused on neural activity acquired using one of many different measurement modalities, each of which provides useful but incomplete assessment of the neural code. Multi-modal techniques can overcome tradeoffs in the spatial and temporal resolution of a single modality to reveal a deeper and more comprehensive understanding of system-level neural mechanisms. Uncovering multi-scale dynamics is essential for a mechanistic understanding of brain function and for harnessing neuroscientific insights to develop more effective clinical treatment. Approach. We discuss conventional methodologies used for characterizing neural activity at different scales and review contemporary examples of how these approaches have been combined. Then we present our case for integrating activity across multiple scales to benefit from the combined strengths of each approach and elucidate a more holistic understanding of neural processes. Main results. We examine various combinations of neural activity at different scales and analytical techniques that can be used to integrate or illuminate information across scales, as well as the technologies that enable such studies. We conclude with challenges facing future multi-scale studies, and a discussion of the power and potential of these approaches. Significance. This roadmap will lead the readers toward a broad range of multi-scale neural decoding techniques and their benefits over single-modality analyses. This Review article highlights the importance of multi-scale analyses for systematically interrogating complex spatiotemporal mechanisms underlying cognition and behavior.
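A recurring practical step in the multi-scale integration this review discusses is aligning modalities recorded at different rates onto a common time base before joint analysis. The toy example below (synthetic data, hypothetical rates and bin size) shows one minimal way to do this: binning a fast spike train and a slower LFP-like signal into shared analysis windows.

```python
import numpy as np

# Toy multi-scale recordings: a spike train at 1 kHz (fine scale) and an
# LFP-like signal at 100 Hz (coarser scale), aligned into 100 ms bins.
fs_spikes, fs_lfp, dur = 1000, 100, 2.0            # Hz, Hz, seconds
rng = np.random.default_rng(1)
spikes = rng.random(int(fs_spikes * dur)) < 0.02   # binary spike train
lfp = np.sin(2 * np.pi * 8 * np.arange(int(fs_lfp * dur)) / fs_lfp)

bin_s = 0.1                                        # 100 ms analysis bins
n_bins = int(dur / bin_s)
spike_counts = spikes.reshape(n_bins, -1).sum(axis=1)    # counts per bin
lfp_power = (lfp.reshape(n_bins, -1) ** 2).mean(axis=1)  # mean power per bin

# Both modalities now share one time base and can be decoded jointly.
print(spike_counts.shape, lfp_power.shape)  # (20,) (20,)
```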
Affiliation(s)
- Hung-Yun Lu
- The University of Texas at Austin, Biomedical Engineering, Austin, TX, United States of America
- Elizabeth S Lorenc
- The University of Texas at Austin, Psychology, Austin, TX, United States of America; The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America
- Hanlin Zhu
- Rice University, Electrical and Computer Engineering, Houston, TX, United States of America
- Justin Kilmarx
- The University of Texas at Austin, Mechanical Engineering, Austin, TX, United States of America
- James Sulzer
- The University of Texas at Austin, Mechanical Engineering, Austin, TX, United States of America; The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America
- Chong Xie
- Rice University, Electrical and Computer Engineering, Houston, TX, United States of America
- Philippe N Tobler
- University of Zurich, Neuroeconomics and Social Neuroscience, Zurich, Switzerland
- Andrew J Watrous
- The University of Texas at Austin, Neurology, Austin, TX, United States of America
- Amy L Orsborn
- University of Washington, Electrical and Computer Engineering, Seattle, WA, United States of America; University of Washington, Bioengineering, Seattle, WA, United States of America; Washington National Primate Research Center, Seattle, WA, United States of America
- Jarrod Lewis-Peacock
- The University of Texas at Austin, Psychology, Austin, TX, United States of America; The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America
- Samantha R Santacruz
- The University of Texas at Austin, Biomedical Engineering, Austin, TX, United States of America; The University of Texas at Austin, Institute for Neuroscience, Austin, TX, United States of America

44
Wickramaratne SD, Mahmud MS. Conditional-GAN Based Data Augmentation for Deep Learning Task Classifier Improvement Using fNIRS Data. Front Big Data 2021; 4:659146. [PMID: 34396092 PMCID: PMC8362663 DOI: 10.3389/fdata.2021.659146] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Accepted: 07/16/2021] [Indexed: 11/27/2022] Open
Abstract
Functional near-infrared spectroscopy (fNIRS) is a neuroimaging technique used for mapping the functioning human cortex. Because the technology is economical, non-invasive, and portable, fNIRS can be widely used in population studies. fNIRS can also be used for task classification, a crucial component of brain-computer interfaces (BCIs). fNIRS data are multidimensional and complex, making them well suited to deep learning algorithms for classification. Deep learning classifiers typically need a large amount of data to be trained appropriately without over-fitting. Generative networks can be used in cases where a substantial amount of data is required but its collection is difficult due to various constraints. Conditional Generative Adversarial Networks (CGANs) can generate artificial samples of a specific category to improve the accuracy of a deep learning classifier when the sample size is insufficient. The proposed system uses a CGAN with a CNN classifier to enhance accuracy through data augmentation. The system can determine whether the subject's task is a left finger tap, right finger tap, or foot tap based on the fNIRS data patterns. The authors obtained a task classification accuracy of 96.67% for the CGAN-CNN combination.
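The key CGAN mechanism the abstract relies on is conditioning the generator on a class label so that samples of one specific task can be requested. The sketch below is not the paper's model; it is an illustrative forward pass of a single-layer conditional generator with untrained random weights and hypothetical dimensions, showing only how noise and a one-hot label are combined.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, z_dim, out_dim = 3, 8, 32  # 3 tap tasks; out_dim: fNIRS feature length

# Randomly initialised single-layer conditional generator (weights untrained;
# a real CGAN would learn W, b adversarially against a discriminator).
W = rng.standard_normal((z_dim + n_classes, out_dim)) * 0.1
b = np.zeros(out_dim)

def generate(label, n_samples):
    """Concatenate noise with a one-hot label so the generator can be asked
    for samples of one specific class (left tap, right tap, foot tap)."""
    z = rng.standard_normal((n_samples, z_dim))
    onehot = np.zeros((n_samples, n_classes))
    onehot[:, label] = 1.0
    return np.tanh(np.concatenate([z, onehot], axis=1) @ W + b)

fake_left_taps = generate(label=0, n_samples=5)
print(fake_left_taps.shape)  # (5, 32)
```

After adversarial training, such class-conditioned samples can be mixed into the minority classes of the real training set before fitting the CNN classifier.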
Affiliation(s)
- Sajila D Wickramaratne
- Department of Electrical and Computer Engineering, University of New Hampshire, Durham, NH, United States
- Md Shaad Mahmud
- Department of Electrical and Computer Engineering, University of New Hampshire, Durham, NH, United States

45

46

47
Alhudhaif A. An effective classification framework for brain-computer interface system design based on combining of fNIRS and EEG signals. PeerJ Comput Sci 2021; 7:e537. [PMID: 34013040 PMCID: PMC8114820 DOI: 10.7717/peerj-cs.537] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 04/20/2021] [Indexed: 05/23/2023]
Abstract
BACKGROUND The brain-computer interface (BCI) is a relatively new but highly promising field that is actively used in basic neuroscience. BCIs provide interfaces for human-computer communication based directly on the neural activity underlying mental processes. Fundamental BCI components consist of different units: in the first stage, the EEG and NIRS signals obtained from individuals are preprocessed and brought to a common standard. METHODS To realize the proposed framework, a dataset containing motor imagery and mental activity tasks was prepared with electroencephalography (EEG) and near-infrared spectroscopy (NIRS) signals. First, HbO and HbR curves are obtained from the NIRS signals. HbO, HbR, HbO+HbR, EEG, EEG+HbO, and EEG+HbR feature tables are created from the features obtained using the HbO, HbR, and EEG signals, and feature weighting is carried out with the k-Means clustering centers based attribute weighting method (KMCC-based) and the k-Means clustering centers difference based attribute weighting method (KMCCD-based). Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), and k-Nearest Neighbors (kNN) classifiers are used to examine the differences between classifiers. RESULTS This study achieved an accuracy of 99.7% (with the kNN classifier and KMCCD-based weighting) on the motor imagery dataset and, similarly, an accuracy of 99.9% (with the SVM and kNN classifiers and KMCCD-based weighting) on the mental activity dataset. The weighting method is used to increase classification accuracy, and it has been shown that it will contribute to the classification of EEG and NIRS BCI systems. The results show that the proposed method increases classifier performance while requiring less processing power and offering ease of application.
In the future, studies could be carried out by combining the k-Means clustering center-based weighted hybrid BCI method with deep learning architectures. Further improved classifier performance could be achieved by combining both systems.
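The abstract does not spell out the KMCC weighting formula, so the sketch below is only one plausible reading: compute k-means cluster centers over the feature table and scale each feature by the ratio of its global mean to the mean of its cluster centers. The toy k-means, the k=2 choice, and the exact ratio are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def kmeans_centers(X, k, iters=20, seed=0):
    """Plain k-means returning the cluster centers, shape (k, n_features)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def kmcc_weights(X, k=2):
    # Hypothetical KMCC-style weighting: ratio of each feature's global mean
    # to the mean of its k-means cluster centers.
    centers = kmeans_centers(X, k)
    return X.mean(axis=0) / centers.mean(axis=0)

rng = np.random.default_rng(3)
X = np.abs(rng.standard_normal((40, 6))) + 1.0   # toy positive HbO/HbR/EEG features
Xw = X * kmcc_weights(X)                         # weighted feature table
print(Xw.shape)  # (40, 6)
```

The weighted table `Xw` would then be passed to LDA, SVM, or kNN in place of the raw features.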
Affiliation(s)
- Adi Alhudhaif
- Department of Computer Science, College of Computer Engineering and Sciences in Al-Kharj, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia

48
Mrachacz-Kersting N, Ibáñez J, Farina D. Towards a mechanistic approach for the development of non-invasive brain-computer interfaces for motor rehabilitation. J Physiol 2021; 599:2361-2374. [PMID: 33728656 DOI: 10.1113/jp281314] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2021] [Accepted: 03/05/2021] [Indexed: 12/11/2022] Open
Abstract
Brain-computer interfaces (BCIs) designed for motor rehabilitation use brain signals associated with motor-processing states to guide neuroplastic changes in a state-dependent manner. These technologies are uniquely positioned to induce targeted and functionally relevant plastic changes in the human motor nervous system. However, while several studies have shown that BCI-based neuromodulation interventions may improve motor function in patients with lesions in the central nervous system, the neurophysiological structures and processes targeted with the BCI interventions have not been identified. In this review, we first summarize current knowledge of the changes in the central nervous system associated with learning new motor skills. Then, we propose a classification of current BCI paradigms for plasticity induction and motor rehabilitation based on the expected neural plastic changes promoted. This classification proposes four paradigms based on two criteria: the plasticity induction methods and the brain states targeted. The existing evidence regarding the brain circuits and processes targeted with these different BCIs is discussed in detail. The proposed classification aims to serve as a starting point for future studies trying to elucidate the underlying plastic changes following BCI interventions.
Affiliation(s)
- Jaime Ibáñez
- Department of Bioengineering, Centre for Neurotechnologies, Imperial College London, London, UK
- Department of Clinical and Movement Neuroscience, Institute of Neurology, University College London, London, UK
- Dario Farina
- Department of Bioengineering, Centre for Neurotechnologies, Imperial College London, London, UK

49
Ma T, Wang S, Xia Y, Zhu X, Evans J, Sun Y, He S. CNN-based classification of fNIRS signals in motor imagery BCI system. J Neural Eng 2021; 18. [PMID: 33761480 DOI: 10.1088/1741-2552/abf187] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2020] [Accepted: 03/24/2021] [Indexed: 11/11/2022]
Abstract
Objective. Development of a brain-computer interface (BCI) requires classification of brain neural activity into different states. Functional near-infrared spectroscopy (fNIRS) can measure brain activity and has great potential for BCI. In recent years a large number of classification algorithms have been proposed, among which deep learning methods, especially convolutional neural network (CNN) methods, are successful. Because fNIRS signals have typical time-series properties, we combined fNIRS data with several CNN-based time series classification (TSC) methods to classify the BCI task. Approach. In this study, participants were recruited for a left- and right-hand motor imagery experiment, and cerebral neural activity was recorded by fNIRS equipment (FOIRE-3000). TSC methods were used to distinguish the brain activity when imagining the left or right hand. We tested classification across all participants, for single participants, and across all participants with single channels, and these methods achieved excellent results. We also compared the CNN-based TSC methods with traditional classification methods such as the support vector machine. Main results. Experiments showed that the CNN-based methods have significant advantages in classification accuracy: they achieved remarkable results in classifying left- and right-hand imagery tasks, reaching 98.6% accuracy across participants and 100% accuracy for single participants, and in single-channel classification an accuracy of 80.1% was achieved with the best-performing channel. Significance. These results suggest that CNN-based TSC methods can significantly improve BCI performance and also lay the foundation for the miniaturization and portability of training rehabilitation equipment.
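The core operation behind the CNN-based TSC methods the abstract compares is a temporal convolution over multichannel fNIRS time series followed by pooling and a classification head. The sketch below is a generic, untrained forward pass with hypothetical channel counts and filter sizes, not the architecture used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid-mode 1-D convolution: x (channels, time), kernels (out, channels, width)."""
    out_ch, _, width = kernels.shape
    steps = x.shape[1] - width + 1
    out = np.empty((out_ch, steps))
    for t in range(steps):
        out[:, t] = np.tensordot(kernels, x[:, t:t + width], axes=([1, 2], [0, 1]))
    return out

# Toy fNIRS trial: 4 channels x 100 time points; untrained random weights.
x = rng.standard_normal((4, 100))
k1 = rng.standard_normal((8, 4, 5)) * 0.1   # 8 temporal filters, width 5
W = rng.standard_normal((8, 2)) * 0.1       # linear head: left vs right imagery

h = np.maximum(conv1d(x, k1), 0.0)   # ReLU feature maps, shape (8, 96)
pooled = h.mean(axis=1)              # global average pooling over time
logits = pooled @ W                  # class scores for the two imagery tasks
print(logits.shape)  # (2,)
```

Global average pooling keeps the head independent of the trial length, which is one reason this layout is common across TSC architectures.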
Affiliation(s)
- Tengfei Ma
- Centre for Optical and Electromagnetic Research, State Key Laboratory of Modern Optical Instrumentations, Zhejiang University, Hangzhou, People's Republic of China; Ningbo Research Institute, Zhejiang University, Ningbo 315100, People's Republic of China
- Shasha Wang
- Center for Optical and Electromagnetic Research, South China Academy of Advanced Optoelectronics, South China Normal University, Guangzhou, People's Republic of China
- Yuting Xia
- Centre for Optical and Electromagnetic Research, State Key Laboratory of Modern Optical Instrumentations, Zhejiang University, Hangzhou, People's Republic of China; Ningbo Research Institute, Zhejiang University, Ningbo 315100, People's Republic of China
- Xinhua Zhu
- Ningbo Aolai Technology Ltd, Ningbo, People's Republic of China
- Julian Evans
- Centre for Optical and Electromagnetic Research, State Key Laboratory of Modern Optical Instrumentations, Zhejiang University, Hangzhou, People's Republic of China
- Yaoran Sun
- Centre for Optical and Electromagnetic Research, State Key Laboratory of Modern Optical Instrumentations, Zhejiang University, Hangzhou, People's Republic of China; Ningbo Research Institute, Zhejiang University, Ningbo 315100, People's Republic of China
- Sailing He
- Centre for Optical and Electromagnetic Research, State Key Laboratory of Modern Optical Instrumentations, Zhejiang University, Hangzhou, People's Republic of China; Ningbo Research Institute, Zhejiang University, Ningbo 315100, People's Republic of China; Center for Optical and Electromagnetic Research, South China Academy of Advanced Optoelectronics, South China Normal University, Guangzhou, People's Republic of China

50
Singanamalla SKR, Lin CT. Spiking Neural Network for Augmenting Electroencephalographic Data for Brain Computer Interfaces. Front Neurosci 2021; 15:651762. [PMID: 33867928 PMCID: PMC8047134 DOI: 10.3389/fnins.2021.651762] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Accepted: 02/22/2021] [Indexed: 11/28/2022] Open
Abstract
With the advent of advanced machine learning methods, the performance of brain–computer interfaces (BCIs) has improved unprecedentedly. However, electroencephalography (EEG), a commonly used brain imaging method for BCI, is characterized by a tedious experimental setup and frequent data loss due to artifacts, and recording enough trials in bulk to take advantage of deep learning classifiers is time consuming. Some studies have tried to address this issue by generating artificial EEG signals, but several of these methods are limited in retaining the prominent features or biomarkers of the signal. Other deep learning-based generative methods require a huge number of samples for training, and a majority of these models can handle data augmentation for only one category or class of data in any training session. There is therefore a need for a generative model that can generate synthetic, multi-class EEG samples from as few available trials as possible while retaining the biomarkers of the signal. Since the EEG signal represents an accumulation of action potentials from neuronal populations beneath the scalp surface, and since a spiking neural network (SNN), a biologically closer artificial neural network, communicates via spiking behavior, we propose an SNN-based approach using surrogate-gradient descent learning to reconstruct and generate multi-class artificial EEG signals from just a few original samples. The network was employed for augmenting motor imagery (MI) and steady-state visually evoked potential (SSVEP) data. These artificial data were further validated through classification and correlation metrics to assess their resemblance to the original data, and in turn enhanced MI classification performance.
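Surrogate-gradient learning, the training technique this abstract names, works around the non-differentiable spike of a leaky integrate-and-fire (LIF) neuron by substituting a smooth function for the Heaviside derivative during backpropagation. The sketch below shows only the forward spiking dynamics and one common surrogate shape (a scaled arctan-derivative bump); the parameters and the specific surrogate are illustrative assumptions, not the paper's.

```python
import numpy as np

def lif_forward(inputs, tau=0.9, v_th=1.0):
    """Leaky integrate-and-fire neuron: returns spike train and membrane trace."""
    v, spikes, vs = 0.0, [], []
    for i in inputs:
        v = tau * v + i                  # leaky integration of input current
        s = 1.0 if v >= v_th else 0.0    # non-differentiable Heaviside spike
        v = v * (1.0 - s)                # reset membrane potential on spike
        spikes.append(s)
        vs.append(v)
    return np.array(spikes), np.array(vs)

def surrogate_grad(v, v_th=1.0, alpha=2.0):
    # Surrogate for d(spike)/d(v): a smooth bump centered on the threshold,
    # replacing the Heaviside's zero-almost-everywhere derivative so that
    # gradients can flow during backpropagation through time.
    return alpha / (2.0 * (1.0 + (np.pi / 2.0 * alpha * (v - v_th)) ** 2))

rng = np.random.default_rng(0)
spikes, vs = lif_forward(rng.random(50))
grads = surrogate_grad(vs)   # would scale upstream gradients in training
```

During training, the forward pass keeps the hard spike while the backward pass uses `surrogate_grad` in its place, which is what makes gradient-based generation of spiking activity feasible.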
Affiliation(s)
- Sai Kalyan Ranga Singanamalla
- Computational Intelligence and Brain Computer Interface Lab, School of Computer Science, University of Technology Sydney, Sydney, NSW, Australia
- Chin-Teng Lin
- Computational Intelligence and Brain Computer Interface Lab, School of Computer Science, University of Technology Sydney, Sydney, NSW, Australia; Centre for Artificial Intelligence, University of Technology Sydney, Sydney, NSW, Australia