1
Kurmanavičiūtė D, Kataja H, Jas M, Välilä A, Parkkonen L. Target of selective auditory attention can be robustly followed with MEG. Sci Rep 2023; 13:10959. PMID: 37414861; PMCID: PMC10325959; DOI: 10.1038/s41598-023-37959-4
Abstract
Selective auditory attention enables filtering of relevant acoustic information from irrelevant input. Specific auditory responses, measurable by magneto- and electroencephalography (MEG/EEG), are known to be modulated by attention to the evoking stimuli. However, such attention effects have typically been studied in unnatural conditions (e.g. during dichotic listening of pure tones) and have been demonstrated mostly in averaged auditory evoked responses. To test how reliably we can detect the attention target from unaveraged brain responses, we recorded MEG data from 15 healthy subjects who were presented with two human speakers continuously uttering the words "Yes" and "No" in an interleaved manner. The subjects were asked to attend to one speaker. To investigate which temporal and spatial aspects of the responses carry the most information about the target of auditory attention, we performed spatially and temporally resolved classification of the unaveraged MEG responses using a support vector machine. Sensor-level decoding of the responses to attended vs. unattended words resulted in a mean accuracy of [Formula: see text] (N = 14) for both stimulus words. The discriminating information was mostly available 200-400 ms after the stimulus onset. Spatially resolved source-level decoding indicated that the most informative sources were in the auditory cortices, in both the left and right hemisphere. Our result corroborates attention modulation of auditory evoked responses and shows that such modulations are detectable in unaveraged MEG responses with high accuracy, which could be exploited e.g. in an intuitive brain-computer interface.
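The temporally resolved decoding described here can be reproduced in outline with standard tools. Below is a minimal, hypothetical sketch (not the authors' code): a linear SVM is cross-validated separately at each time point of synthetic epoched data standing in for the unaveraged MEG responses; all shapes, names, and values are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_epochs, n_sensors, n_times = 200, 306, 100        # e.g., a 306-channel MEG system
X = rng.standard_normal((n_epochs, n_sensors, n_times))  # unaveraged single-trial epochs
y = rng.integers(0, 2, n_epochs)                    # 0 = unattended, 1 = attended word

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
# Train and score a separate classifier at every time point
scores = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print(f"peak decoding accuracy: {scores.max():.2f}")
```

With real epochs, MNE-Python's SlidingEstimator wraps this same per-timepoint pattern.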
Affiliation(s)
- Dovilė Kurmanavičiūtė: Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, 00076 Aalto, Finland
- Hanna Kataja: Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, 00076 Aalto, Finland
- Mainak Jas: Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, 00076 Aalto, Finland; Athinoula A. Martinos Center for Biomedical Imaging, 149 Thirteenth Street, Charlestown, MA 02129, USA
- Anne Välilä: Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, 00076 Aalto, Finland
- Lauri Parkkonen: Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, 00076 Aalto, Finland; Aalto NeuroImaging, Aalto University, 00076 Aalto, Finland
2
Russell MK. Age and Auditory Spatial Perception in Humans: Review of Behavioral Findings and Suggestions for Future Research. Front Psychol 2022; 13:831670. PMID: 35250777; PMCID: PMC8888835; DOI: 10.3389/fpsyg.2022.831670
Abstract
It is well documented, and fairly well known, that sensory impairment increases with chronological age. As most people realize, our hearing suffers as we get older; hence the increased need for hearing aids. The first portion of the present paper reviews how age affects auditory judgments of sound source position. A summary of the literature evaluating changes in the perception of sound source location and the perception of sound source motion as a function of chronological age is presented. The review is limited to empirical studies with behavioral findings involving humans. It is the view of the author that we have a severely limited understanding of how chronological age affects the perception of space based on sound. The latter part of the paper discusses how auditory spatial perception research is traditionally conducted in the laboratory. There are sound theoretical reasons for conducting research in the manner it has been. Nonetheless, from an ecological perspective, the vast majority of previous research can be considered unnatural and greatly lacking in ecological validity. Suggestions for an alternative and more ecologically valid approach to the investigation of auditory spatial perception are proposed. It is believed that an ecological approach to auditory spatial perception will enhance our understanding of the extent to which individuals perceive sound source location and how those perceptual judgments change with increasing chronological age.
3
Velasco-Álvarez F, Fernández-Rodríguez Á, Medina-Juliá MT, Ron-Angevin R. Speech stream segregation to control an ERP-based auditory BCI. J Neural Eng 2021; 18. PMID: 33470970; DOI: 10.1088/1741-2552/abdd44
Abstract
OBJECTIVE: The use of natural sounds in auditory brain-computer interfaces (BCI) has been shown to improve classification results and usability. Some auditory BCIs are based on stream segregation, in which the subjects must attend to one audio stream and ignore the other(s); these streams include some kind of stimuli to be detected. In this work we focus on event-related potentials (ERP) and study whether providing intelligible content in each audio stream could help users concentrate better on the desired stream, and thus better attend to the target stimuli and ignore the non-target ones.
APPROACH: In addition to a control condition, two experimental conditions, based on selective attention and the cocktail party effect, were tested using two simultaneous and spatialized audio streams: i) condition A2 consisted of an overlay of auditory stimuli (single syllables) on a background of natural speech for each stream; ii) in condition A3, brief alterations of the natural flow of each speech were used as stimuli.
MAIN RESULTS: The two experimental proposals improved on the results of the control condition (single words as stimuli without a speech background), both in a cross-validation analysis of the calibration part and in the online test. The analysis of the ERP responses also showed better discriminability for the two proposals in comparison to the control condition. The results of subjective questionnaires support the better usability of the first experimental condition.
SIGNIFICANCE: The use of natural speech as background improves stream segregation in an ERP-based auditory BCI (with significant results in the performance metrics, the ERP waveforms, and the preference parameter in subjective questionnaires). Future work in the field of ERP-based stream segregation should study the use of natural speech in combination with easily perceived but not distracting stimuli.
Affiliation(s)
- Francisco Velasco-Álvarez: Department of Electronic Technology, Universidad de Málaga, E.T.S.I. Telecomunicación, Campus de Teatinos s/n, Málaga, 29071, Spain
- Álvaro Fernández-Rodríguez: Department of Electronic Technology, Universidad de Málaga, E.T.S.I. Telecomunicación, Campus de Teatinos s/n, Málaga, 29071, Spain
- M Teresa Medina-Juliá: Department of Electronic Technology, Universidad de Málaga, E.T.S.I. Telecomunicación, Campus de Teatinos s/n, Málaga, 29071, Spain
- Ricardo Ron-Angevin: Department of Electronic Technology, Universidad de Málaga, E.T.S.I. Telecomunicación, Campus de Teatinos s/n, Málaga, 29071, Spain
4
Abstract
Electroencephalogram signals are used to assess neurodegenerative diseases and develop sophisticated brain-machine interfaces for rehabilitation and gaming. Most such applications use only motor imagery or evoked potentials. Here, a deep learning network based on a sensory-motor paradigm (auditory, olfactory, movement, and motor imagery) that employs a subject-agnostic Bidirectional Long Short-Term Memory (BLSTM) network is developed to assess cognitive functions and identify their relationship with brain signal features, which are hypothesized to consistently indicate cognitive decline. Testing occurred with healthy subjects aged 20–40, 40–60, and >60 years, and with mildly cognitively impaired (MCI) subjects. Auditory and olfactory stimuli were presented to the subjects, and the subjects imagined and executed movements of each arm, during which electroencephalogram (EEG)/electromyogram (EMG) signals were recorded. A deep BLSTM neural network is trained with principal component features from evoked signals and assesses their corresponding pathways. Wavelet analysis is used to decompose evoked signals and calculate the band power of component frequency bands. This deep learning system performs better than conventional deep neural networks in detecting MCI. Most features studied peaked in the 40–60 age range and were lower for the MCI group than for any other group tested. Detection accuracy of left-hand motor imagery signals best indicated cognitive aging (p = 0.0012); here, the mean classification accuracy per age group declined from 91.93% to 81.64%, and was 69.53% for MCI subjects. Motor-imagery-evoked band power, particularly in the gamma bands, also strongly indicated cognitive aging (p = 0.007). Overall, the classification accuracy of the evoked potentials most effectively distinguished cognitive aging from MCI (p < 0.05), followed by gamma-band power.
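As a rough illustration of the kind of pipeline this abstract describes, the following hypothetical Keras snippet trains a bidirectional LSTM on principal-component features of synthetic epochs. The layer sizes, component count, and data shapes are assumptions for illustration, not details from the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
n_trials, n_times, n_channels = 300, 128, 32
X = rng.standard_normal((n_trials, n_times, n_channels))  # evoked EEG epochs
y = rng.integers(0, 2, n_trials)  # e.g., 0 = healthy, 1 = MCI

# Reduce each time sample to its leading principal components
pca = PCA(n_components=8)
X_pc = pca.fit_transform(X.reshape(-1, n_channels)).reshape(n_trials, n_times, 8)

model = models.Sequential([
    layers.Input(shape=(n_times, 8)),
    layers.Bidirectional(layers.LSTM(32)),   # BLSTM over the time axis
    layers.Dense(1, activation="sigmoid"),   # binary decision
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_pc, y, epochs=5, batch_size=32, validation_split=0.2)
```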
5
Hübner D, Schall A, Prange N, Tangermann M. Eyes-Closed Increases the Usability of Brain-Computer Interfaces Based on Auditory Event-Related Potentials. Front Hum Neurosci 2018; 12:391. PMID: 30323749; PMCID: PMC6172854; DOI: 10.3389/fnhum.2018.00391
Abstract
Recent research has demonstrated how brain-computer interfaces (BCI) based on auditory stimuli can be used for communication and rehabilitation. In these applications, users are commonly instructed to avoid eye movements while keeping their eyes open. This secondary task can lead to exhaustion, and subjects may not succeed in suppressing eye movements. In this work, we investigate the option of using a BCI with eyes closed. Twelve healthy subjects participated in a single electroencephalography (EEG) session in which they listened to a rapid stream of bisyllabic words while alternately having their eyes open or closed. In addition, we assessed usability aspects for the two conditions with a questionnaire. Our analysis shows that eyes-closed use does not reduce the number of eye artifacts and that event-related potential (ERP) responses and classification accuracies are comparable between the two conditions. Importantly, we found that subjects expressed a significant general preference for the eyes-closed condition and were also less tense in that condition. Furthermore, switching between eyes-closed and eyes-open and vice versa is possible without a severe drop in classification accuracy. These findings suggest that eyes-closed operation should be considered a viable alternative in auditory BCIs and might be especially useful for subjects with limited control over their eye movements.
Affiliation(s)
- David Hübner: Brain State Decoding Lab, Department of Computer Science, University of Freiburg, Freiburg, Germany; Cluster of Excellence BrainLinks-BrainTools, Freiburg, Germany
- Albrecht Schall: Brain State Decoding Lab, Department of Computer Science, University of Freiburg, Freiburg, Germany
- Natalie Prange: Brain State Decoding Lab, Department of Computer Science, University of Freiburg, Freiburg, Germany
- Michael Tangermann: Brain State Decoding Lab, Department of Computer Science, University of Freiburg, Freiburg, Germany; Cluster of Excellence BrainLinks-BrainTools, Freiburg, Germany
6
Sugi M, Hagimoto Y, Nambu I, Gonzalez A, Takei Y, Yano S, Hokari H, Wada Y. Improving the Performance of an Auditory Brain-Computer Interface Using Virtual Sound Sources by Shortening Stimulus Onset Asynchrony. Front Neurosci 2018; 12:108. PMID: 29535602; PMCID: PMC5835086; DOI: 10.3389/fnins.2018.00108
Abstract
Recently, a brain-computer interface (BCI) using virtual sound sources has been proposed for estimating user intention via electroencephalogram (EEG) in an oddball task. However, its performance is still insufficient for practical use. In this study, we examine the impact of shortening the stimulus onset asynchrony (SOA) on this auditory BCI. While a very short SOA might improve performance, sound perception and task performance become difficult, and event-related potentials (ERPs) may not be induced if the SOA is too short. Therefore, we carried out behavioral and EEG experiments to determine the optimal SOA. In the experiments, participants were instructed to direct attention to one of six virtual sounds (target direction). We used eight SOA conditions: 200, 300, 400, 500, 600, 700, 800, and 1,100 ms. In the behavioral experiment, we recorded participants' behavioral responses to the target direction and evaluated recognition performance of the stimuli. In all SOA conditions, recognition accuracy was over 85%, indicating that participants could recognize the target stimuli correctly. Next, using a silent counting task in the EEG experiment, we found significant differences between target and non-target sound directions in all but the 200-ms SOA condition. When we calculated identification accuracy using Fisher discriminant analysis (FDA), the SOA could be shortened to 400 ms without decreasing identification accuracy. Thus, improvements in performance (evaluated by BCI utility) could be achieved. On average, higher BCI utilities were obtained in the 400- and 500-ms SOA conditions. Thus, auditory BCI performance can be optimized for both behavioral and neurophysiological responses by shortening the SOA.
Affiliation(s)
- Miho Sugi: Graduate School of Engineering, Nagaoka University of Technology, Nagaoka, Japan
- Yutaka Hagimoto: Graduate School of Engineering, Nagaoka University of Technology, Nagaoka, Japan
- Isao Nambu: Graduate School of Engineering, Nagaoka University of Technology, Nagaoka, Japan
- Alejandro Gonzalez: Graduate School of Engineering, Nagaoka University of Technology, Nagaoka, Japan
- Yoshinori Takei: Department of Electrical and Information Engineering, National Institute of Technology, Akita College, Akita, Japan
- Shohei Yano: Department of Electrical and Electronic Systems Engineering, National Institute of Technology, Nagaoka College, Nagaoka, Japan
- Haruhide Hokari: Graduate School of Engineering, Nagaoka University of Technology, Nagaoka, Japan
- Yasuhiro Wada: Graduate School of Engineering, Nagaoka University of Technology, Nagaoka, Japan
7
Convolutional Neural Networks with 3D Input for P300 Identification in Auditory Brain-Computer Interfaces. Comput Intell Neurosci 2017; 2017:8163949. PMID: 29250108; PMCID: PMC5698603; DOI: 10.1155/2017/8163949
Abstract
From enabling basic communication to allowing movement through an environment, several attempts are being made in the field of brain-computer interfaces (BCI) to assist people who find it difficult or impossible to perform certain activities. Focusing on these people as potential users of BCI, we obtained electroencephalogram (EEG) readings from nine healthy subjects who were presented with auditory stimuli via earphones from six different virtual directions. We presented the stimuli following the oddball paradigm to elicit P300 waves within the subjects' brain activity for later identification and classification using convolutional neural networks (CNN). The CNN models are given a novel single-trial three-dimensional (3D) representation of the EEG data as input, maintaining temporal and spatial information as close to the experimental setup as possible, a relevant characteristic since eliciting the P300 has been shown to cause stronger activity in certain brain regions. Here, we present the results of CNN models using the proposed 3D input for three different stimulus presentation time intervals (500, 400, and 300 ms) and compare them to previous studies and other common classifiers. Our results show >80% accuracy for all the CNN models using the proposed 3D input in single-trial P300 classification.
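Below is a hypothetical sketch of a CNN over a 3D single-trial input (electrode grid x time), in the spirit of this approach; the grid size, time window, and architecture are illustrative assumptions, not the architecture used in the paper.

```python
import numpy as np
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
n_trials, grid_h, grid_w, n_times = 400, 8, 8, 64
# Each trial: electrodes arranged on an 8x8 scalp grid, 64 time samples
X = rng.standard_normal((n_trials, grid_h, grid_w, n_times, 1))
y = rng.integers(0, 2, n_trials)  # 1 = target (P300 present), 0 = non-target

model = models.Sequential([
    layers.Input(shape=(grid_h, grid_w, n_times, 1)),
    layers.Conv3D(16, kernel_size=(3, 3, 5), activation="relu"),
    layers.MaxPooling3D(pool_size=(2, 2, 2)),
    layers.Conv3D(32, kernel_size=(3, 3, 5), activation="relu"),
    layers.GlobalAveragePooling3D(),
    layers.Dense(1, activation="sigmoid"),   # single-trial target/non-target
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```

The 3D convolution slides jointly over the two spatial axes of the electrode grid and the time axis, which is what lets the model exploit the spatially localized P300 activity the abstract mentions.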
8
Zhou S, Allison BZ, Kübler A, Cichocki A, Wang X, Jin J. Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI. Front Comput Neurosci 2016; 10:105. PMID: 27790111; PMCID: PMC5061745; DOI: 10.3389/fncom.2016.00105
Abstract
Several studies have explored brain-computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.
Affiliation(s)
- Sijie Zhou: Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, China
- Brendan Z. Allison: Department of Cognitive Science, University of California San Diego, La Jolla, CA, USA
- Andrea Kübler: Institute of Psychology, University of Würzburg, Würzburg, Germany
- Andrzej Cichocki: Laboratory for Advanced Brain Signal Processing, Brain Science Institute, RIKEN, Wako-shi, Japan; Skolkovo Institute of Science and Technology, Moscow, Russia; Nicolaus Copernicus University (UMK), Torun, Poland
- Xingyu Wang: Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, China
- Jing Jin: Key Laboratory of Advanced Control and Optimization for Chemical Processes, Ministry of Education, East China University of Science and Technology, Shanghai, China
9
Yin E, Zeyl T, Saab R, Hu D, Zhou Z, Chau T. An Auditory-Tactile Visual Saccade-Independent P300 Brain–Computer Interface. Int J Neural Syst 2016; 26:1650001. DOI: 10.1142/s0129065716500015
Abstract
Most P300 event-related potential (ERP)-based brain–computer interface (BCI) studies focus on gaze shift-dependent BCIs, which cannot be used by people who have lost voluntary eye movement. However, the performance of visual saccade-independent P300 BCIs is generally poor. To improve saccade-independent BCI performance, we propose a bimodal P300 BCI approach that simultaneously employs auditory and tactile stimuli. The proposed P300 BCI is a vision-independent system because no visual interaction is required of the user. Specifically, we designed a direction-congruent bimodal paradigm by randomly and simultaneously presenting auditory and tactile stimuli from the same direction. Furthermore, the channels and number of trials were tailored to each user to improve online performance. With 12 participants, the average online information transfer rate (ITR) of the bimodal approach improved by 45.43% and 51.05% over that attained, respectively, with the auditory and tactile approaches individually. Importantly, the average online ITR of the bimodal approach, including the break time between selections, reached 10.77 bits/min. These findings suggest that the proposed bimodal system holds promise as a practical visual saccade-independent P300 BCI.
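For reference, the ITR figures quoted above are conventionally computed per selection with Wolpaw's formula and scaled by the selection time. A minimal sketch follows; the example numbers are illustrative, not values from the study.

```python
import math

def itr_bits_per_min(n_classes: int, p: float, t_sel: float) -> float:
    """Wolpaw ITR: n_classes possible selections, classification accuracy p,
    t_sel seconds per selection (including any break time)."""
    if p <= 1.0 / n_classes:
        return 0.0                      # at or below chance: no information
    if p >= 1.0:
        bits = math.log2(n_classes)     # perfect accuracy
    else:
        bits = (math.log2(n_classes)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n_classes - 1)))
    return bits * 60.0 / t_sel

# Illustrative values only: 4 directions, 85% accuracy, 12 s per selection
print(round(itr_bits_per_min(4, 0.85, 12.0), 2), "bits/min")
```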
Affiliation(s)
- Erwei Yin: College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha, Hunan 410073, P. R. China; National Key Laboratory of Human Factors Engineering, China Astronaut Research and Training Center, Beijing 100094, P. R. China
- Timothy Zeyl: Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON M4G 1R8, Canada
- Rami Saab: Department of Electrical and Computer Engineering, McMaster University, Hamilton, ON L8S 4L8, Canada
- Dewen Hu: College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha, Hunan 410073, P. R. China
- Zongtan Zhou: College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha, Hunan 410073, P. R. China
- Tom Chau: Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON M4G 1R8, Canada
10
11
EEG channel selection using particle swarm optimization for the classification of auditory event-related potentials. ScientificWorldJournal 2014; 2014:350270. PMID: 24982944; PMCID: PMC3984837; DOI: 10.1155/2014/350270
Abstract
Brain-machine interfaces (BMI) rely on the accurate classification of event-related potentials (ERPs), and their performance greatly depends on the appropriate selection of classifier parameters and features from dense-array electroencephalography (EEG) signals. Moreover, in order to achieve a portable and more compact BMI for practical applications, it is also desirable to use a system capable of accurate classification using information from as few EEG channels as possible. In the present work, we propose a method for classifying P300 ERPs using a combination of Fisher discriminant analysis (FDA) and a multiobjective hybrid real-binary particle swarm optimization (MHPSO) algorithm. Specifically, the algorithm searches for the set of EEG channels and classifier parameters that simultaneously maximizes the classification accuracy and minimizes the number of channels used. The performance of the method is assessed through offline analyses on datasets of auditory ERPs from sound discrimination experiments. The proposed method achieved higher classification accuracy than traditional methods while also using fewer channels. It was also found that the number of channels used for classification can be significantly reduced without greatly compromising the classification accuracy.
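The following is a deliberately simplified, hypothetical sketch of binary PSO for channel selection, not the study's actual algorithm (which is a multiobjective hybrid real-binary PSO that also tunes classifier parameters). Here the fitness is cross-validated LDA accuracy minus a small per-channel penalty, and all data and constants are illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_feats = 200, 32, 4
X = rng.standard_normal((n_trials, n_channels, n_feats))  # per-channel ERP features
y = rng.integers(0, 2, n_trials)

def fitness(mask):
    """Accuracy of LDA on the selected channels, penalizing channel count."""
    sel = mask.astype(bool)
    if not sel.any():
        return -1.0
    Xs = X[:, sel, :].reshape(n_trials, -1)
    acc = cross_val_score(LinearDiscriminantAnalysis(), Xs, y, cv=3).mean()
    return acc - 0.005 * sel.sum()  # prefer fewer channels

n_particles, n_iters = 10, 15
vel = rng.standard_normal((n_particles, n_channels))
pos = (rng.random((n_particles, n_channels)) > 0.5).astype(float)  # 0/1 channel masks
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, n_channels))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    # Binary PSO update: sigmoid of velocity gives the bit-flip probability
    pos = (rng.random((n_particles, n_channels)) < 1 / (1 + np.exp(-vel))).astype(float)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected channels:", np.flatnonzero(gbest))
```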
12
Towards a near infrared spectroscopy-based estimation of operator attentional state. PLoS One 2014; 9:e92045. PMID: 24632819; PMCID: PMC3954803; DOI: 10.1371/journal.pone.0092045
Abstract
Given the critical risks to public health and safety posed by lapses in attention (e.g., their implication in workplace accidents), researchers have sought to develop cognitive-state tracking technologies capable of alerting individuals engaged in cognitively demanding tasks to potentially dangerous decrements in their levels of attention. The purpose of the present study was to address this issue through an investigation of the reliability of optical measures of cortical correlates of attention, in conjunction with machine learning techniques, for distinguishing between states of full attention and states characterized by reduced attention capacity during a sustained attention task. Seven subjects engaged in a 30-minute sustained attention reaction time task with near infrared spectroscopy (NIRS) monitoring over the prefrontal and right parietal areas. NIRS signals from the first 10 minutes of the task were taken to characterize the 'full attention' class, while NIRS signals from the last 10 minutes were taken to characterize the 'attention decrement' class. A two-class support vector machine algorithm was used to distinguish between the two levels of attention using appropriate NIRS-derived signal features. Attention decrement occurred during the task, as revealed by the significant increase in reaction time in the last 10 minutes compared to the first 10 minutes of the task (p < .05). The results demonstrate relatively good classification accuracy, ranging from 65 to 90%. The highest classification accuracies were obtained when exploiting the oxyhemoglobin signals (i.e., from 77 to 89%, depending on the cortical area considered) rather than the deoxyhemoglobin signals (i.e., from 65 to 66%). Moreover, the classification accuracy increased to 90% when using signals from the right parietal area rather than from the prefrontal cortex. The results support the feasibility of developing cognitive tracking technologies using NIRS and machine learning techniques.