1. Liu X, Hu B, Si Y, Wang Q. The role of eye movement signals in non-invasive brain-computer interface typing system. Med Biol Eng Comput 2024;62:1981-1990. PMID: 38509350. DOI: 10.1007/s11517-024-03070-7.
Abstract
Brain-computer interfaces (BCIs) have shown great potential in providing communication and control for individuals with severe motor disabilities. However, traditional BCIs that rely on electroencephalography (EEG) signals suffer from low information transfer rates and high variability across users. Recently, eye movement signals have emerged as a promising alternative due to their high accuracy and robustness. Eye movement signals are the electrical or mechanical signals generated by movements of the eyes; they capture diverse oculomotor behaviors such as fixations, smooth pursuit, and blinking. This article reviews recent studies on BCI typing systems that incorporate eye movement signals. We first discuss the basic principles of BCIs and recent advances in text entry. We then provide a comprehensive summary of the latest BCI typing systems that leverage eye movement signals, including an in-depth analysis of hybrid BCIs built on the integration of electrooculography (EOG) and eye tracking technology to enhance system performance and functionality. Moreover, we highlight the advantages and limitations of the different approaches, as well as potential future directions. Overall, eye movement signals hold great potential for enhancing the usability and accessibility of BCI typing systems, and further research in this area could lead to more effective communication and control for individuals with motor disabilities.
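As an illustration of the EOG side of such hybrid systems, blink-based selection is often the simplest building block: a blink produces a large, brief deflection in the vertical EOG channel that can be detected by thresholding. The sketch below is not taken from the reviewed paper; the threshold, refractory gap, and simulated signal are all illustrative, and a real system would calibrate them per user.

```python
import numpy as np

def detect_blinks(eog, fs, threshold=200.0, min_gap_s=0.2):
    """Flag blink events in a vertical EOG trace (microvolts) by threshold
    crossing with a refractory gap. Parameters are illustrative only."""
    above = (eog > threshold).astype(int)
    edges = np.flatnonzero(np.diff(above) == 1)  # rising threshold crossings
    min_gap = int(min_gap_s * fs)
    blinks, last = [], -min_gap
    for e in edges:
        if e - last >= min_gap:  # ignore re-crossings within the refractory gap
            blinks.append(int(e))
            last = e
    return blinks

# Simulate 2 s of noisy EOG at 250 Hz with two blink-like deflections
rng = np.random.default_rng(0)
fs = 250
t = np.arange(0, 2, 1 / fs)
eog = 20 * rng.standard_normal(t.size)
for center in (0.5, 1.4):  # simulated blink times in seconds
    eog += 400 * np.exp(-((t - center) ** 2) / (2 * 0.03 ** 2))

print(len(detect_blinks(eog, fs)))  # 2
```

In a typing interface, each detected blink event would then confirm the currently highlighted key, while gaze position from the eye tracker selects which key is highlighted.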
Affiliation(s)
- Xi Liu
- Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Key Laboratory of Biomedical Spectroscopy of Xi'an, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- Bingliang Hu
- Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- Key Laboratory of Biomedical Spectroscopy of Xi'an, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- Yang Si
- Department of Neurology, Sichuan Academy of Medical Science and Sichuan Provincial People's Hospital, Chengdu, 611731, China
- University of Electronic Science and Technology of China, Chengdu, 611731, China
- Quan Wang
- Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- Key Laboratory of Biomedical Spectroscopy of Xi'an, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
2. Wolf P, Götzelmann T. VEPdgets: Towards Richer Interaction Elements Based on Visually Evoked Potentials. Sensors (Basel) 2023;23:9127. PMID: 38005515. PMCID: PMC10674685. DOI: 10.3390/s23229127.
Abstract
For brain-computer interfaces, a variety of technologies and applications already exist. However, current approaches use visual evoked potentials (VEPs) only as action triggers or in combination with other input technologies. This paper shows that the loss of the visually evoked potential after looking away from a stimulus provides a reliable temporal parameter, and that the associated latency can be used to control time-varying variables via the VEP. In this context, we introduce VEP interaction elements (VEP widgets) for the input of numeric values, which can be applied in various ways and are based purely on VEP technology. We carried out a user study in a desktop as well as in a virtual reality setting. The results for both settings showed that the temporal control approach using latency correction could be applied to value input with the proposed VEP widgets. Even though value input is not very accurate under untrained conditions, users could enter numerical values. Our concept of applying latency correction to VEP widgets is not limited to the input of numbers.
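The core idea above can be sketched numerically: scan a sliding FFT window for the moment the power at the stimulus frequency collapses, then subtract a calibrated detection latency to recover the instant the user actually looked away, and hence the value shown on a ramping counter at that instant. This is a toy reconstruction under stated assumptions, not the paper's detector; real VEP pipelines are more elaborate (e.g., CCA-based), and here the latency is derived from the simulated ground truth rather than calibrated.

```python
import numpy as np

def vep_loss_time(sig, fs, f_stim, win_s=0.5, drop_ratio=0.25):
    """Return the time (s) at which power at the stimulus frequency drops
    below drop_ratio of its level in the first window (sliding FFT scan)."""
    win = int(win_s * fs)
    step = max(1, win // 4)
    freqs = np.fft.rfftfreq(win, 1 / fs)
    k = int(np.argmin(np.abs(freqs - f_stim)))  # FFT bin of the stimulus
    p0 = None
    for start in range(0, sig.size - win + 1, step):
        p = np.abs(np.fft.rfft(sig[start:start + win])[k]) ** 2
        if p0 is None:
            p0 = p  # reference power while the stimulus is attended
        elif p < drop_ratio * p0:
            return (start + win / 2) / fs  # center of the detecting window
    return None

# Simulated recording: a 12 Hz VEP that vanishes when the user looks away at t = 2.0 s
fs, f_stim, look_away = 250, 12.0, 2.0
t = np.arange(0, 4, 1 / fs)
sig = np.where(t < look_away, np.sin(2 * np.pi * f_stim * t), 0.0)

t_detect = vep_loss_time(sig, fs, f_stim)

# Latency correction: the display ramps at `rate` units/s; subtracting the
# detection latency recovers the value shown at the true look-away moment.
rate = 1.0
latency = t_detect - look_away  # known here from simulation; calibrated in practice
value = (t_detect - latency) * rate
print(value)
```

The detection lags the true look-away by roughly the window length used, which is exactly the latency the widgets must compensate for.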
Affiliation(s)
- Timo Götzelmann
- Nuremberg Institute of Technology, Chair of Ambient Intelligence, D-90489 Nuremberg, Germany
3. Cardona-Álvarez YN, Álvarez-Meza AM, Cárdenas-Peña DA, Castaño-Duque GA, Castellanos-Dominguez G. A Novel OpenBCI Framework for EEG-Based Neurophysiological Experiments. Sensors (Basel) 2023;23:3763. PMID: 37050823. PMCID: PMC10098804. DOI: 10.3390/s23073763.
Abstract
An Open Brain-Computer Interface (OpenBCI) provides unparalleled freedom and flexibility through low-cost, open-source hardware and firmware. It exploits robust hardware platforms and powerful software development kits to create customized drivers with advanced capabilities. Still, several restrictions may significantly reduce OpenBCI performance, including limited communication between computers and peripheral devices and insufficient flexibility for rapid configuration under specific neurophysiological protocols. This paper describes a flexible and scalable OpenBCI framework for electroencephalographic (EEG) experiments using the Cyton acquisition board with updated drivers that maximize the hardware benefits of ADS1299 platforms. The framework handles distributed computing tasks and supports multiple sampling rates, multiple communication protocols, free electrode placement, and single-marker synchronization. As a result, the OpenBCI system delivers real-time feedback and controlled execution of EEG-based clinical protocols, implementing the steps of neural recording, decoding, stimulation, and real-time analysis. In addition, the system incorporates automatic background configuration and user-friendly widgets for stimulus delivery. A motor imagery task validates the closed-loop BCI design, demonstrating real-time streaming within the required latency and jitter ranges. The presented framework therefore offers a promising solution for tailored neurophysiological data processing.
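Single-marker synchronization of a continuous stream, as described above, can be illustrated with a thread-safe ring buffer that records an absolute sample index for each marker and later extracts the epoch that follows it. This is a minimal sketch of the concept, not the framework's actual driver code; all class and method names are invented for illustration.

```python
import threading
from collections import deque

class MarkerRingBuffer:
    """Sketch of a ring buffer with single-marker synchronization for a
    streamed signal. Names and structure are illustrative only."""
    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)
        self._markers = {}      # label -> absolute index of first post-marker sample
        self._count = 0         # total samples ever pushed
        self._lock = threading.Lock()

    def push(self, sample):
        with self._lock:
            self._buf.append(sample)
            self._count += 1

    def mark(self, label):
        with self._lock:
            self._markers[label] = self._count

    def epoch(self, label, n_samples):
        """Return n_samples recorded after the marker, or None if not yet
        available or already overwritten by the ring buffer."""
        with self._lock:
            start = self._markers.get(label)
            if start is None or self._count - start < n_samples:
                return None
            oldest = self._count - len(self._buf)  # absolute index of buf[0]
            lo = start - oldest
            if lo < 0:
                return None  # marker data overwritten
            return list(self._buf)[lo:lo + n_samples]

buf = MarkerRingBuffer(capacity=1000)
for i in range(100):
    buf.push(i)          # pre-stimulus samples 0..99
buf.mark("stim")         # stimulus onset marker
for i in range(100, 200):
    buf.push(i)          # post-stimulus samples
print(buf.epoch("stim", 5))  # [100, 101, 102, 103, 104]
```

Anchoring markers to an absolute sample count, rather than to wall-clock time, is what keeps epochs jitter-free regardless of when the consumer thread reads them.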
Affiliation(s)
- Germán Albeiro Castaño-Duque
- Cultura de la Calidad en la Educación Research Group, Universidad Nacional de Colombia, Manizales 170003, Colombia
4. Garcia PP, Santos TG, Machado MA, Mendes N. Deep Learning Framework for Controlling Work Sequence in Collaborative Human-Robot Assembly Processes. Sensors (Basel) 2023;23:553. PMID: 36617153. PMCID: PMC9823442. DOI: 10.3390/s23010553.
Abstract
The human-robot collaboration (HRC) solutions presented so far have the disadvantage that interaction between humans and robots is based on the human's state or on specific gestures purposely performed by the human, which increases the time required to perform a task and slows the pace of human labor, making such solutions unattractive. In this study, a different HRC concept is introduced: an HRC framework for managing assembly processes that are executed simultaneously or individually by humans and robots. This framework, based on deep learning models, uses only one type of data, RGB camera data, to make predictions about the collaborative workspace and human actions, and consequently to manage the assembly process. To validate the framework, an industrial HRC demonstrator was built to assemble a mechanical component. Four variants of the framework were created, based on the convolutional neural network (CNN) structures Faster R-CNN with ResNet-50 and ResNet-101, YOLOv2, and YOLOv3. The variant using YOLOv3 performed best, achieving a mean average performance of 72.26% and allowing the HRC industrial demonstrator to successfully complete all assembly tasks within the desired time window. The framework has proven effective for industrial assembly applications.
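The sequencing decision that sits downstream of such a detector can be sketched as a simple rule: if any detected human hand overlaps the shared workspace, pause the robot; otherwise dispatch the next assembly task. This is not the paper's controller; the labels, zone coordinates, and IoU threshold below are all hypothetical, and the detections would come from the YOLO model rather than being hard-coded.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def next_robot_step(detections, shared_zone, pending_task, iou_thresh=0.1):
    """Pause the robot when a detected human hand overlaps the shared
    workspace; otherwise dispatch the pending assembly task.
    Purely illustrative sequencing logic."""
    for label, box in detections:
        if label == "hand" and iou(box, shared_zone) > iou_thresh:
            return "pause"
    return pending_task

zone = (0, 0, 100, 100)  # hypothetical shared workspace in pixel coordinates
print(next_robot_step([("hand", (50, 50, 150, 150))], zone, "fit_bearing"))    # pause
print(next_robot_step([("hand", (200, 200, 300, 300))], zone, "fit_bearing"))  # fit_bearing
```

In practice this rule would run once per camera frame, with hysteresis or debouncing so that a single missed detection does not resume the robot prematurely.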
Affiliation(s)
- Pedro P. Garcia
- UNIDEMI, Department of Mechanical and Industrial Engineering, NOVA School of Science and Technology, Universidade NOVA de Lisboa, 2829-516 Caparica, Portugal
- Telmo G. Santos
- UNIDEMI, Department of Mechanical and Industrial Engineering, NOVA School of Science and Technology, Universidade NOVA de Lisboa, 2829-516 Caparica, Portugal
- Laboratório Associado de Sistemas Inteligentes, LASI, 4800-058 Guimarães, Portugal
- Miguel A. Machado
- UNIDEMI, Department of Mechanical and Industrial Engineering, NOVA School of Science and Technology, Universidade NOVA de Lisboa, 2829-516 Caparica, Portugal
- Laboratório Associado de Sistemas Inteligentes, LASI, 4800-058 Guimarães, Portugal
- Nuno Mendes
- UNIDEMI, Department of Mechanical and Industrial Engineering, NOVA School of Science and Technology, Universidade NOVA de Lisboa, 2829-516 Caparica, Portugal
- Laboratório Associado de Sistemas Inteligentes, LASI, 4800-058 Guimarães, Portugal