1. Sujatha Ravindran A, Contreras-Vidal J. An empirical comparison of deep learning explainability approaches for EEG using simulated ground truth. Sci Rep 2023; 13:17709. PMID: 37853010; PMCID: PMC10584975; DOI: 10.1038/s41598-023-43871-8. Received 05/02/2023; accepted 09/29/2023. Open access.
Abstract
Recent advances in machine learning and deep learning (DL) based neural decoders have significantly improved decoding from scalp electroencephalography (EEG). However, the interpretability of DL models remains an under-explored area. In this study, we compared multiple model explanation methods to identify the most suitable method for EEG and to understand when some of these approaches might fail. A simulation framework was developed to evaluate the robustness and sensitivity of twelve back-propagation-based visualization methods by comparing their outputs to ground-truth features. Multiple methods tested here showed reliability issues after randomization of either model weights or labels: e.g., the saliency approach, the most widely used visualization technique in EEG, was neither class- nor model-specific. We found that DeepLift consistently and robustly detected the three key attributes tested here (temporal, spatial, and spectral precision). Overall, this study provides a review of model explanation methods for DL-based neural decoders and recommendations for understanding when some of these methods fail and what they can capture in EEG.
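The weight-randomization sanity check referenced above can be illustrated with a minimal NumPy sketch. This is a toy linear decoder, not any of the paper's models, and every name and size in it is illustrative: for a linear model, the input-gradient saliency of a class logit is simply the corresponding weight column, so shuffling the weights should destroy the attribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "decoder": logits = x @ W.  For a linear model, the
# input-gradient saliency for class c is just the weight column W[:, c].
# All names and sizes here are illustrative, not from the paper.
n_features, n_classes = 64, 4
W = rng.normal(size=(n_features, n_classes))

def saliency(weights, target):
    """Absolute input gradient of the target logit (linear model)."""
    return np.abs(weights[:, target])

s_trained = saliency(W, target=0)

# Model-randomization sanity check: attributions produced by a model
# with shuffled weights should NOT resemble those of the trained model.
W_random = rng.permutation(W.ravel()).reshape(W.shape)
s_random = saliency(W_random, target=0)

corr = np.corrcoef(s_trained, s_random)[0, 1]
print(f"saliency correlation after weight randomization: {corr:.3f}")
```

A method whose attributions survive such a randomization largely unchanged (as the study reports for some approaches) cannot be model-specific.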
Affiliation(s)
- Akshay Sujatha Ravindran
- Noninvasive Brain-Machine Interface System Laboratory, Department of Electrical and Computer Engineering, University of Houston, Houston, 77204, USA.
- IUCRC BRAIN, University of Houston, Houston, 77204, USA.
- Alto Neuroscience, Los Altos, CA, 94022, USA.
- Jose Contreras-Vidal
- Noninvasive Brain-Machine Interface System Laboratory, Department of Electrical and Computer Engineering, University of Houston, Houston, 77204, USA
- IUCRC BRAIN, University of Houston, Houston, 77204, USA
2. Xu M, Ouyang Y, Yuan Z. Deep Learning Aided Neuroimaging and Brain Regulation. Sensors (Basel) 2023; 23(11):4993. PMID: 37299724; DOI: 10.3390/s23114993. Received 04/24/2023; revised 05/15/2023; accepted 05/22/2023.
Abstract
Deep learning aided medical imaging is becoming a focal point of applied AI and a likely direction for precision neuroscience. This review provides comprehensive insights into recent progress in deep learning and its applications to medical imaging for brain monitoring and regulation. The article starts with an overview of current brain imaging methods, highlighting their limitations and introducing the potential benefits of deep learning techniques in overcoming them. We then delve into the details of deep learning, explaining the basic concepts and providing examples of its use in medical imaging. In particular, we discuss the types of deep learning models applicable to medical imaging, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs), as applied to magnetic resonance imaging (MRI), positron emission tomography (PET)/computed tomography (CT), electroencephalography (EEG)/magnetoencephalography (MEG), optical imaging, and other modalities. Overall, this review of deep learning aided medical imaging for brain monitoring and regulation offers a reference point for the intersection of deep-learning-aided neuroimaging and brain regulation.
Affiliation(s)
- Mengze Xu
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai 519087, China
- Centre for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau SAR 999078, China
- Yuanyuan Ouyang
- Nanomicro Sino-Europe Technology Company Limited, Zhuhai 519031, China
- Jiangfeng China-Portugal Technology Co., Ltd., Macau SAR 999078, China
- Zhen Yuan
- Centre for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau SAR 999078, China
3. Petrosyan A, Voskoboinikov A, Sukhinin D, Makarova A, Skalnaya A, Arkhipova N, Sinkin M, Ossadtchi A. Speech decoding from a small set of spatially segregated minimally invasive intracranial EEG electrodes with a compact and interpretable neural network. J Neural Eng 2022; 19. PMID: 36356309; DOI: 10.1088/1741-2552/aca1e1. Received 06/07/2022; accepted 11/10/2022.
Abstract
Objective. Speech decoding, one of the most intriguing brain-computer interface applications, opens up plentiful opportunities, from patient rehabilitation to direct and seamless communication between humans. Typical solutions rely on invasive recordings with a large number of distributed electrodes implanted through craniotomy. Here we explored the possibility of creating a speech prosthesis in a minimally invasive setting with a small number of spatially segregated intracranial electrodes. Approach. We collected one hour of data (over two sessions) from two patients implanted with invasive electrodes. We then used only the contacts belonging to a single stereo-electroencephalographic (sEEG) shaft or a single electrocorticographic (ECoG) strip to decode neural activity into 26 words and one silence class. We employed a compact convolutional network-based architecture whose spatial and temporal filter weights allow a physiologically plausible interpretation. Main results. In classifying the 26+1 overtly pronounced words, we achieved 55% average accuracy using only six channels recorded with a single minimally invasive sEEG electrode in the first patient, and 70% accuracy using only eight channels from a single ECoG strip in the second patient. Our compact architecture did not require pre-engineered features, learned quickly, and produced a stable, interpretable, and physiologically meaningful decision rule that operated successfully on a contiguous dataset collected during a different time interval than that used for training. Spatial characteristics of the pivotal neuronal populations agree with active and passive speech-mapping results and exhibit the inverse space-frequency relationship characteristic of neural activity. Our compact solution performed on par with or better than architectures recently featured in the neural speech decoding literature. Significance. We showcase the possibility of building a speech prosthesis with a small number of electrodes, based on a compact, feature-engineering-free decoder derived from a small amount of training data.
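Claims that spatial filter weights admit a physiologically plausible interpretation usually rest on converting the filters into activation patterns; a common recipe is the Haufe et al. transformation a = cov(X) w, although the paper's exact procedure may differ. A minimal NumPy sketch with a simulated one-source mixture (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated 6-channel recording with one latent source (illustrative
# numbers only; this is not the paper's data or exact procedure).
n_samples, n_ch = 5000, 6
source = rng.normal(size=n_samples)
mixing = np.array([0.1, 0.8, 1.0, 0.6, 0.2, 0.05])   # forward model
X = np.outer(source, mixing) + 0.5 * rng.normal(size=(n_samples, n_ch))

# A spatial filter w extracts the source as s_hat = X @ w.  The filter
# weights themselves are hard to interpret; the activation pattern
# a = cov(X) @ w lives in sensor space, like the forward model does.
w, *_ = np.linalg.lstsq(X, source, rcond=None)
pattern = np.cov(X, rowvar=False) @ w
pattern /= np.abs(pattern).max()

print("normalized activation pattern:", np.round(pattern, 2))
```

The recovered pattern matches the simulated forward model up to scale, whereas the raw filter weights need not.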
Affiliation(s)
- Artur Petrosyan
- Center for Bioelectric Interfaces, Higher School of Economics, Moscow, Russia
- Dmitrii Sukhinin
- Center for Bioelectric Interfaces, Higher School of Economics, Moscow, Russia
- Anna Makarova
- Center for Bioelectric Interfaces, Higher School of Economics, Moscow, Russia
- Mikhail Sinkin
- Moscow State University of Medicine and Dentistry; Scientific Research Institute of First Aid named after N.V. Sklifosovsky, Moscow, Russia
- Alexei Ossadtchi
- Center for Bioelectric Interfaces, Higher School of Economics, Moscow, Russia
- Artificial Intelligence Research Institute (AIRI), Moscow, Russia
4. Hong KS, Khan MNA, Ghafoor U. Non-invasive transcranial electrical brain stimulation guided by functional near-infrared spectroscopy for targeted neuromodulation: a review. J Neural Eng 2022; 19. PMID: 35905708; DOI: 10.1088/1741-2552/ac857d. Received 02/16/2021; accepted 07/29/2022.
Abstract
One of the primary goals of cognitive neuroscience is to understand the neural mechanisms on which cognition is based. Researchers are trying to determine how cognitive mechanisms relate to the oscillations generated by brain activity. Research on this topic has been considerably aided by the development of non-invasive brain stimulation techniques, which can affect the dynamics of brain networks and the resulting behavior, making them a focus of interest in many experimental and clinical fields. One essential non-invasive technique is transcranial electrical stimulation (tES), subdivided into transcranial direct and alternating current stimulation. tES has recently become better known because of effective results in treating chronic conditions, and there has been exceptional progress in the interpretation and feasibility of tES techniques. Summarizing the beneficial effects of tES, this article provides an updated account of what has been accomplished to date, a brief history, and the open questions that need to be addressed. An essential issue in the field is stimulation duration; this review briefly covers the stimulation durations that have been used while monitoring the brain with functional near-infrared spectroscopy-based imaging.
Affiliation(s)
- Keum-Shik Hong
- Department of Cogno-mechatronics Engineering, Pusan National University, 2 Busandaehak-ro, Geumjeong-gu, Busan, 609735, Korea (the Republic of)
- M N Afzal Khan
- Pusan National University, Department of Mechanical Engineering, Busan, 46241, Korea (the Republic of)
- Usman Ghafoor
- School of Mechanical Engineering, Pusan National University College of Engineering, room 204, Busan, 46241, Korea (the Republic of)
5. Hammer J, Schirrmeister RT, Hartmann K, Marusic P, Schulze-Bonhage A, Ball T. Interpretable functional specialization emerges in deep convolutional networks trained on brain signals. J Neural Eng 2022; 19. PMID: 35421857; DOI: 10.1088/1741-2552/ac6770. Received 12/07/2021; accepted 04/14/2022.
Abstract
Objective. Functional specialization is fundamental to neural information processing. Here, we study whether and how functional specialization emerges in artificial deep convolutional neural networks (CNNs) during a brain-computer interfacing (BCI) task. Approach. We trained CNNs to predict hand movement speed from intracranial EEG (iEEG) and delineated how units across the different CNN hidden layers learned to represent the iEEG signal. Main results. We show that distinct, functionally interpretable neural populations emerged as a result of the training process. While some units became sensitive to either iEEG amplitude or phase, others showed bimodal behavior with significant sensitivity to both features. Pruning of highly sensitive units resulted in a steep drop in decoding accuracy not observed for pruning of less sensitive units, highlighting the functional relevance of the amplitude- and phase-specialized populations. Significance. We anticipate that emergent functional specialization as uncovered here will become a key concept in research towards interpretable deep learning for neuroscience and BCI applications.
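The pruning experiment described above can be sketched in miniature: rank hidden units by a simple sensitivity score, zero out the most or the least sensitive ones, and compare the accuracy drop. This toy NumPy example (a linear readout over synthetic hidden activations, not the paper's CNN; all names illustrative) reproduces the qualitative effect:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: 200 trials, 16 hidden units, of which only the first 3
# actually carry class information (illustrative, not the paper's model).
n_trials, n_units = 200, 16
y = rng.integers(0, 2, n_trials)
H = rng.normal(size=(n_trials, n_units))
H[:, :3] += 2.0 * y[:, None]          # informative units

# Linear readout fitted by least squares on centered labels.
w, *_ = np.linalg.lstsq(H, y - 0.5, rcond=None)

def accuracy(H, w, y):
    return np.mean(((H @ w) > 0) == y)

base = accuracy(H, w, y)

# "Sensitivity" of a unit: contribution magnitude |w_i| times activation std.
sens = np.abs(w) * H.std(axis=0)
order = np.argsort(sens)

def prune(units):
    Hp = H.copy()
    Hp[:, units] = 0.0               # zero out the selected units
    return accuracy(Hp, w, y)

acc_prune_low = prune(order[:3])      # least sensitive units
acc_prune_high = prune(order[-3:])    # most sensitive units

print(f"baseline: {base:.2f}, prune-low: {acc_prune_low:.2f}, "
      f"prune-high: {acc_prune_high:.2f}")
```

Pruning the high-sensitivity units collapses accuracy toward chance, while pruning low-sensitivity units barely changes it, mirroring the steep versus flat drops reported in the study.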
Affiliation(s)
- Jiri Hammer
- Neuromedical AI Lab, Department of Neurosurgery, University of Freiburg, Engelbergerstraße 21, Freiburg, 79106, Germany
- Kay Hartmann
- Neuromedical AI Lab, Department of Neurosurgery, University of Freiburg, Engelbergerstraße 21, Freiburg, 79106, Germany
- Petr Marusic
- Department of Neurology, Motol University Hospital, V Úvalu 84, Prague, 150 06, Czech Republic
- Andreas Schulze-Bonhage
- Epilepsy Center, University Clinics, Albert-Ludwigs-University Freiburg, Freiburg, 79095, Germany
- Tonio Ball
- Epilepsy Center, University Clinics, Albert-Ludwigs-University Freiburg, Freiburg, 79106, Germany
6. Fathi Y, Erfanian A. Decoding Bilateral Hindlimb Kinematics From Cat Spinal Signals Using Three-Dimensional Convolutional Neural Network. Front Neurosci 2022; 16:801818. PMID: 35401098; PMCID: PMC8990134; DOI: 10.3389/fnins.2022.801818. Received 10/25/2021; accepted 03/02/2022. Open access.
Abstract
To date, decoding of limb kinematic information has mostly relied on neural signals recorded from the peripheral nerves, dorsal root ganglia (DRG), ventral roots, spinal cord gray matter, and the sensorimotor cortex. In the current study, we demonstrated that neural signals recorded from the lateral and dorsal columns within the spinal cord can also decode hindlimb kinematics during locomotion. Experiments were conducted on intact cats trained to walk on a moving belt in a hindlimb-only condition, with their forelimbs resting on the front body of the treadmill. Bilateral hindlimb joint angles were decoded from local field potential signals recorded with a microelectrode array implanted in the dorsal and lateral columns on both the left and right sides of the spinal cord. The results show that contralateral hindlimb kinematics can be decoded as accurately as ipsilateral kinematics. Interestingly, the kinematics of both legs can be accurately decoded from the lateral columns of one side of the spinal cord during hindlimb-only locomotion. There was no significant difference between decoding performance obtained from the dorsal and from the lateral columns. Time-frequency analysis shows that event-related synchronization (ERS) and event-related desynchronization (ERD) patterns in all frequency bands reveal the dynamics of the neural signals during movement, with movement onset and offset clearly identifiable from the ERD/ERS patterns. Mutual information (MI) analysis showed that the theta band carried significantly more limb kinematic information than the other bands, and theta power increased with locomotion speed.
Affiliation(s)
- Yaser Fathi
- Department of Biomedical Engineering, School of Electrical Engineering, Iran Neural Technology Research Centre, Iran University of Science and Technology, Tehran, Iran
- Abbas Erfanian
- Department of Biomedical Engineering, School of Electrical Engineering, Iran Neural Technology Research Centre, Iran University of Science and Technology, Tehran, Iran
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
- Correspondence: Abbas Erfanian