1
Diaz-Hernandez O. A worldwide research overview of Artificial Proprioception in prosthetics. PLoS Digit Health 2025; 4:e0000809. [PMID: 40261833] [PMCID: PMC12013951] [DOI: 10.1371/journal.pdig.0000809]
Abstract
Proprioception is the body's ability to sense its position and movement, which is essential for motor control. Its loss after amputation poses significant challenges for prosthesis users. Artificial Proprioception enhances sensory feedback and motor control in prosthetic devices. This review provides a global overview of current research and technology in the field, emphasizing feedback mechanisms, neural interfaces, and biomechatronic integration. This work examines innovations in sensory feedback for amputees, including electrotactile and vibrotactile stimulation, artificial intelligence, and neural interfaces to enhance prosthetic control. The methodology involved reviewing studies from Scopus, Web of Science, and PubMed on prosthetic proprioceptive feedback from 2004 to 2024, evaluating sensory feedback research by author, country, and affiliation with a synthesis provided. Countries like the United States and Italy are collaborating to advance global research. The paper concludes with potential developments, such as advanced, user-centered prosthetics that meet amputees' sensory needs and significantly enhance their quality of life.
Affiliation(s)
- Octavio Diaz-Hernandez
- Escuela Nacional de Estudios Superiores Unidad Juriquilla, Universidad Nacional Autónoma de México, Mexico City, México
2
Ricci S, Torrigino D, Minuto M, Casadio M. A Visuo-Haptic System for Nodule Detection Training: Insights From EEG and Behavioral Analysis. IEEE Trans Haptics 2024; 17:946-956. [PMID: 39499593] [DOI: 10.1109/toh.2024.3487522]
Abstract
Medical palpation is a key skill for clinicians. It is typically trained using animal and synthetic models, which, however, raise ethical concerns and generate high volumes of consumables. Visuo-haptic simulations could be an alternative, although their training efficacy has not yet been proven. The assessment of palpatory skills requires objective methods, which can be achieved by combining performance metrics with electroencephalography (EEG). The goals of this study were to: (i) develop a visuo-haptic system to train nodule detection, combining a Geomagic Touch haptic device with a visuo-haptic simulation of a skin patch and a nodule, implemented using the SOFA framework; (ii) assess whether this system could be used for training and evaluation. To do so, we collected performance and EEG data from 19 subjects performing multiple repetitions of a nodule detection task. Results revealed that participants could be divided into low and high performers: the former applied greater pressure when looking for the nodule and showed a higher EEG alpha (8.5-13 Hz) peak at rest, while the latter explored the skin while remaining on its surface and were characterized by low alpha power. Furthermore, alpha power correlated positively with error and negatively with palpation depth. Altogether, these results suggest that alpha power might be an indicator of performance, denoting an increase in vigilance, attention, information processing, cognitive processes, and engagement, ultimately affecting strategy and performance. Moreover, combining EEG with performance data can provide an objective measure of the user's palpation ability.
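The alpha-band (8.5-13 Hz) power used as a performance indicator in this abstract can be estimated offline from raw EEG by integrating a power spectral density over that band. The sketch below does this with Welch's method on a synthetic signal; the sampling rate, signal, and Welch parameters are illustrative assumptions, not the study's settings.

```python
# Sketch: alpha-band power estimation via Welch's method (illustrative only).
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, band):
    """Integrate the Welch power spectral density over a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    # Uniform frequency grid, so a rectangle sum approximates the integral.
    return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))

# Synthetic example: a 10 Hz oscillation (inside the alpha band) plus noise.
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

alpha = band_power(eeg, fs, (8.5, 13.0))
beta = band_power(eeg, fs, (14.0, 30.0))
```

Because the synthetic signal concentrates its power at 10 Hz, the alpha estimate dominates the beta estimate by a large factor.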
3
Jin W, Zhu X, Qian L, Wu C, Yang F, Zhan D, Kang Z, Luo K, Meng D, Xu G. Electroencephalogram-based adaptive closed-loop brain-computer interface in neurorehabilitation: a review. Front Comput Neurosci 2024; 18:1431815. [PMID: 39371523] [PMCID: PMC11449715] [DOI: 10.3389/fncom.2024.1431815]
Abstract
Brain-computer interfaces (BCIs) represent a groundbreaking approach to enabling direct communication for individuals with severe motor impairments, circumventing traditional neural and muscular pathways. Among the diverse array of BCI technologies, electroencephalogram (EEG)-based systems are particularly favored due to their non-invasive nature, user-friendly operation, and cost-effectiveness. Recent advancements have facilitated the development of adaptive bidirectional closed-loop BCIs, which dynamically adjust to users' brain activity, thereby enhancing responsiveness and efficacy in neurorehabilitation. These systems support real-time modulation and continuous feedback, fostering personalized therapeutic interventions that align with users' neural and behavioral responses. By incorporating machine learning algorithms, these BCIs optimize user interaction and promote recovery outcomes through mechanisms of activity-dependent neuroplasticity. This paper reviews the current landscape of EEG-based adaptive bidirectional closed-loop BCIs, examining their applications in the recovery of motor and sensory functions, as well as the challenges encountered in practical implementation. The findings underscore the potential of these technologies to significantly enhance patients' quality of life and social interaction, while also identifying critical areas for future research aimed at improving system adaptability and performance. As advancements in artificial intelligence continue, the evolution of sophisticated BCI systems holds promise for transforming neurorehabilitation and expanding applications across various domains.
Affiliation(s)
- Wenjie Jin
- Department of Rehabilitation Medicine, Nanjing Medical University, Nanjing, China
- Rehabilitation Medicine Center, Zhejiang Chinese Medical University Affiliated Jiaxing TCM Hospital, Jiaxing, China
- XinXin Zhu
- Rehabilitation Medicine Center, Zhejiang Chinese Medical University Affiliated Jiaxing TCM Hospital, Jiaxing, China
- Lifeng Qian
- Rehabilitation Medicine Center, Zhejiang Chinese Medical University Affiliated Jiaxing TCM Hospital, Jiaxing, China
- Cunshu Wu
- Department of Rehabilitation Medicine, Nanjing Medical University, Nanjing, China
- Fan Yang
- Rehabilitation Medicine Center, Zhejiang Chinese Medical University Affiliated Jiaxing TCM Hospital, Jiaxing, China
- Daowei Zhan
- Rehabilitation Medicine Center, Zhejiang Chinese Medical University Affiliated Jiaxing TCM Hospital, Jiaxing, China
- Zhaoyin Kang
- Rehabilitation Medicine Center, Zhejiang Chinese Medical University Affiliated Jiaxing TCM Hospital, Jiaxing, China
- Kaitao Luo
- Rehabilitation Medicine Center, Zhejiang Chinese Medical University Affiliated Jiaxing TCM Hospital, Jiaxing, China
- Dianhuai Meng
- Department of Rehabilitation Medicine, Nanjing Medical University, Nanjing, China
- Rehabilitation Medicine Center, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Guangxu Xu
- Department of Rehabilitation Medicine, Nanjing Medical University, Nanjing, China
- Rehabilitation Medicine Center, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
4
Amann LK, Casasnovas V, Hainke J, Gail A. Optimality of multisensory integration while compensating for uncertain visual target information with artificial vibrotactile cues during reach planning. J Neuroeng Rehabil 2024; 21:155. [PMID: 39252006] [PMCID: PMC11382450] [DOI: 10.1186/s12984-024-01448-0]
Abstract
BACKGROUND Planning and executing movements requires the integration of different sensory modalities, such as vision and proprioception. However, neurological diseases like stroke can lead to full or partial loss of proprioception, resulting in impaired movements. Recent advances have focused on providing additional sensory feedback to patients to compensate for the sensory loss, with vibrotactile stimulation proving a viable option as it is inexpensive and easy to implement. Here, we test how such vibrotactile information can be integrated with visual signals to estimate the spatial location of a reach target. METHODS We used a center-out reach paradigm with 31 healthy human participants to investigate how artificial vibrotactile stimulation can be integrated with visual-spatial cues indicating target location. Specifically, we provided multisite vibrotactile stimulation to the moving dominant arm using eccentric rotating mass (ERM) motors. As the integration of inputs across multiple sensory modalities becomes especially relevant when one of them is uncertain, we additionally modulated the reliability of the visual cues. We then compared the weighting of vibrotactile and visual inputs as a function of visual uncertainty to predictions from the maximum likelihood estimation (MLE) framework to decide whether participants achieve quasi-optimal integration. RESULTS Our results show that participants could estimate target locations based on vibrotactile instructions. After short training, combined visual and vibrotactile cues led to higher hit rates and reduced reach errors when visual cues were uncertain. Additionally, we observed lower reaction times in trials with low visual uncertainty when vibrotactile stimulation was present. Using MLE predictions, we found that integration of vibrotactile and visual cues was optimal when vibrotactile cues required the detection of one or two active motors. However, if estimating the location of a target required discriminating the intensities of two cues, integration violated MLE predictions. CONCLUSION We conclude that participants can quickly learn to integrate visual and artificial vibrotactile information. Therefore, additional vibrotactile stimulation may serve as a promising way to improve rehabilitation or the control of prosthetic devices by patients suffering loss of proprioception.
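The MLE benchmark this study compares against has a standard closed form: each cue is weighted by its relative reliability (inverse variance), and the fused estimate is never less reliable than the best single cue. The sketch below uses illustrative numbers, not the study's data.

```python
# Sketch: maximum-likelihood integration of independent Gaussian cue estimates.
import numpy as np

def mle_integrate(means, sigmas):
    """Reliability-weighted fusion; returns fused mean, fused sigma, weights."""
    means = np.asarray(means, dtype=float)
    reliabilities = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = reliabilities / reliabilities.sum()
    fused_mean = float(np.dot(weights, means))
    fused_sigma = float(np.sqrt(1.0 / reliabilities.sum()))
    return fused_mean, fused_sigma, weights

# An uncertain visual cue (sigma = 4 deg) combined with a more reliable
# vibrotactile cue (sigma = 2 deg): the tactile cue gets 80% of the weight.
mean, sigma, w = mle_integrate([10.0, 14.0], [4.0, 2.0])
```

Under this framework the fused standard deviation (about 1.79 deg here) is below that of either cue alone, which is what makes the MLE prediction a useful optimality benchmark.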
Affiliation(s)
- Lukas K Amann
- Faculty of Biology and Psychology, Georg-August University, Goßlerstr. 14, 37073, Göttingen, Germany
- Sensorimotor Group, German Primate Center, Kellnerweg 4, 37077, Göttingen, Germany
- Virginia Casasnovas
- Faculty of Biology and Psychology, Georg-August University, Goßlerstr. 14, 37073, Göttingen, Germany
- Sensorimotor Group, German Primate Center, Kellnerweg 4, 37077, Göttingen, Germany
- Jannis Hainke
- Faculty of Biology and Psychology, Georg-August University, Goßlerstr. 14, 37073, Göttingen, Germany
- Sensorimotor Group, German Primate Center, Kellnerweg 4, 37077, Göttingen, Germany
- Alexander Gail
- Faculty of Biology and Psychology, Georg-August University, Goßlerstr. 14, 37073, Göttingen, Germany
- Sensorimotor Group, German Primate Center, Kellnerweg 4, 37077, Göttingen, Germany
- Bernstein Center of Computational Neuroscience, Heinrich-Düker-Weg 12, 37077, Göttingen, Germany
- Leibniz ScienceCampus Primate Cognition, Kellnerweg 4, 37077, Göttingen, Germany
5
Wei Y, Marshall AG, McGlone FP, Makdani A, Zhu Y, Yan L, Ren L, Wei G. Human tactile sensing and sensorimotor mechanism: from afferent tactile signals to efferent motor control. Nat Commun 2024; 15:6857. [PMID: 39127772] [PMCID: PMC11316806] [DOI: 10.1038/s41467-024-50616-2]
Abstract
In tactile sensing, decoding the journey from afferent tactile signals to efferent motor commands is a significant challenge, primarily due to the difficulty in capturing population-level afferent nerve signals during active touch. This study integrates a finite element hand model with a neural dynamic model by using microneurography data to predict neural responses based on contact biomechanics and membrane transduction dynamics. This research focuses specifically on tactile sensation and its direct translation into motor actions. Evaluations of muscle synergy during in vivo experiments revealed transduction functions linking tactile signals and muscle activation. These functions suggest similar sensorimotor strategies for grasping influenced by object size and weight. The decoded transduction mechanism was validated by restoring human-like sensorimotor performance on a tendon-driven biomimetic hand. This research advances our understanding of translating tactile sensation into motor actions, offering valuable insights into prosthetic design, robotics, and the development of next-generation prosthetics with neuromorphic tactile feedback.
Affiliation(s)
- Yuyang Wei
- Department of Engineering Science, University of Oxford, Oxford, OX1 3PJ, UK
- Department of Mechanical, Aerospace and Civil Engineering, The University of Manchester, Manchester, M13 9PL, UK
- Andrew G Marshall
- Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, L69 3BX, UK
- Francis P McGlone
- Department of Neuroscience and Biomedical Engineering, Aalto University, Otakaari 24, Helsinki, Finland
- Adarsh Makdani
- School of Natural Sciences and Psychology, Liverpool John Moores University, Liverpool, L3 5UX, UK
- Yiming Zhu
- Department of Mechanical, Aerospace and Civil Engineering, The University of Manchester, Manchester, M13 9PL, UK
- Lingyun Yan
- Department of Mechanical, Aerospace and Civil Engineering, The University of Manchester, Manchester, M13 9PL, UK
- Lei Ren
- Department of Mechanical, Aerospace and Civil Engineering, The University of Manchester, Manchester, M13 9PL, UK
- Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University, Jilin, China
- Guowu Wei
- School of Science, Engineering and Environment, University of Salford, Manchester, M5 4WT, UK
6
Yoshida KT, Zook ZA, Choi H, Luo M, O'Malley MK, Okamura AM. Design and Evaluation of a 3-DoF Haptic Device for Directional Shear Cues on the Forearm. IEEE Trans Haptics 2024; 17:483-495. [PMID: 38349838] [DOI: 10.1109/toh.2024.3365669]
Abstract
Wearable haptic devices on the forearm can relay information from virtual agents, robots, and other humans while leaving the hands free. We introduce and test a new wearable haptic device that uses soft actuators to provide normal and shear force to the skin of the forearm. A rigid housing and gear motor are used to control the direction of the shear force. A 6-axis force/torque sensor, distance sensor, and pressure sensors are integrated to quantify how the soft tactor interacts with the skin. When worn by participants, the device delivered consistent shear forces of up to 0.64 N and normal forces of up to 0.56 N over distances as large as 14.3 mm. To understand cue saliency, we conducted a user study asking participants to identify linear shear directional cues in a 4-direction task and an 8-direction task with different cue speeds, travel distances, and contact patterns. Participants identified cues with longer travel distances best, with an 85.1% accuracy in the 4-direction task, and a 43.5% accuracy in the 8-direction task. Participants had a directional bias, with a preferential response in the axis towards and away from the wrist bone.
7
Deo DR, Willett FR, Avansino DT, Hochberg LR, Henderson JM, Shenoy KV. Brain control of bimanual movement enabled by recurrent neural networks. Sci Rep 2024; 14:1598. [PMID: 38238386] [PMCID: PMC10796685] [DOI: 10.1038/s41598-024-51617-3]
Abstract
Brain-computer interfaces have so far focused largely on enabling the control of a single effector, for example a single computer cursor or robotic arm. Restoring multi-effector motion could unlock greater functionality for people with paralysis (e.g., bimanual movement). However, it may prove challenging to decode the simultaneous motion of multiple effectors, as we recently found that a compositional neural code links movements across all limbs and that neural tuning changes nonlinearly during dual-effector motion. Here, we demonstrate the feasibility of high-quality bimanual control of two cursors via neural network (NN) decoders. Through simulations, we show that NNs leverage a neural 'laterality' dimension to distinguish between left and right-hand movements as neural tuning to both hands become increasingly correlated. In training recurrent neural networks (RNNs) for two-cursor control, we developed a method that alters the temporal structure of the training data by dilating/compressing it in time and re-ordering it, which we show helps RNNs successfully generalize to the online setting. With this method, we demonstrate that a person with paralysis can control two computer cursors simultaneously. Our results suggest that neural network decoders may be advantageous for multi-effector decoding, provided they are designed to transfer to the online setting.
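As a much-simplified illustration of the "laterality" dimension described above, a single axis separating left- and right-hand trials can be taken as the difference of class means and used to project neural activity. The synthetic data, dimensions, and method below are assumptions for illustration, not the paper's decoder.

```python
# Sketch: projecting synthetic firing rates onto a crude "laterality" axis.
import numpy as np

rng = np.random.default_rng(1)
n_units, n_trials = 50, 200

# Left- and right-hand trials share most variability but differ along one
# hidden direction, mimicking strongly correlated bimanual tuning.
hidden = rng.standard_normal(n_units)
left = rng.standard_normal((n_trials, n_units)) + 0.8 * hidden
right = rng.standard_normal((n_trials, n_units)) - 0.8 * hidden

# Class-mean difference as the laterality axis (unit length).
axis = left.mean(axis=0) - right.mean(axis=0)
axis /= np.linalg.norm(axis)

proj_left = left @ axis
proj_right = right @ axis
separation = float(proj_left.mean() - proj_right.mean())
```

Even though individual units respond to both hands, the projections separate cleanly, which is the intuition behind a decoder exploiting such a dimension.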
Affiliation(s)
- Darrel R Deo
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Francis R Willett
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
- Donald T Avansino
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
- Leigh R Hochberg
- School of Engineering, Brown University, Providence, RI, USA
- Carney Institute for Brain Science, Brown University, Providence, RI, USA
- VA RR&D Center for Neurorestoration and Neurotechnology, Rehabilitation R&D Service, Providence VA Medical Center, Providence, RI, USA
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jaimie M Henderson
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Krishna V Shenoy
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
8
Deo DR, Willett FR, Avansino DT, Hochberg LR, Henderson JM, Shenoy KV. Translating deep learning to neuroprosthetic control. bioRxiv 2023:2023.04.21.537581. [PMID: 37131830] [PMCID: PMC10153231] [DOI: 10.1101/2023.04.21.537581]
Abstract
Advances in deep learning have given rise to neural network models of the relationship between movement and brain activity that appear to far outperform prior approaches. Brain-computer interfaces (BCIs) that enable people with paralysis to control external devices, such as robotic arms or computer cursors, might stand to benefit greatly from these advances. We tested recurrent neural networks (RNNs) on a challenging nonlinear BCI problem: decoding continuous bimanual movement of two computer cursors. Surprisingly, we found that although RNNs appeared to perform well in offline settings, they did so by overfitting to the temporal structure of the training data and failed to generalize to real-time neuroprosthetic control. In response, we developed a method that alters the temporal structure of the training data by dilating/compressing it in time and re-ordering it, which we show helps RNNs successfully generalize to the online setting. With this method, we demonstrate that a person with paralysis can control two computer cursors simultaneously, far outperforming standard linear methods. Our results provide evidence that preventing models from overfitting to temporal structure in training data may, in principle, aid in translating deep learning advances to the BCI setting, unlocking improved performance for challenging applications.
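The temporal augmentation described above (dilating/compressing trials in time and re-ordering them) can be sketched as follows. Trial shapes, the dilation range, and linear interpolation are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch: time-dilation/compression plus trial re-ordering as a data
# augmentation against overfitting to a training set's temporal structure.
import numpy as np

def dilate(trial, factor):
    """Resample a (time, channels) trial by `factor` via linear interpolation."""
    t_old = np.linspace(0.0, 1.0, trial.shape[0])
    t_new = np.linspace(0.0, 1.0, max(2, int(round(trial.shape[0] * factor))))
    return np.stack([np.interp(t_new, t_old, trial[:, c])
                     for c in range(trial.shape[1])], axis=1)

def augment(trials, rng, lo=0.7, hi=1.3):
    """Randomly stretch/squeeze each trial in time, then shuffle trial order."""
    out = [dilate(tr, rng.uniform(lo, hi)) for tr in trials]
    rng.shuffle(out)
    return out

rng = np.random.default_rng(0)
trials = [rng.standard_normal((100, 8)) for _ in range(5)]
augmented = augment(trials, rng)
```

Each pass over the data then presents trials with different durations and in a different order, so a recurrent decoder cannot simply memorize when events happened in the original recording.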
9
Cometa A, Falasconi A, Biasizzo M, Carpaneto J, Horn A, Mazzoni A, Micera S. Clinical neuroscience and neurotechnology: An amazing symbiosis. iScience 2022; 25:105124. [PMID: 36193050] [PMCID: PMC9526189] [DOI: 10.1016/j.isci.2022.105124]
Abstract
In the last decades, clinical neuroscience found a novel ally in neurotechnologies, devices able to record and stimulate electrical activity in the nervous system. These technologies improved the ability to diagnose and treat neural disorders. Neurotechnologies are concurrently enabling a deeper understanding of healthy and pathological dynamics of the nervous system through stimulation and recordings during brain implants. On the other hand, clinical neurosciences are not only driving neuroengineering toward the most relevant clinical issues, but are also shaping the neurotechnologies thanks to clinical advancements. For instance, understanding the etiology of a disease informs the location of a therapeutic stimulation, but also the way stimulation patterns should be designed to be more effective/naturalistic. Here, we describe cases of fruitful integration such as Deep Brain Stimulation and cortical interfaces to highlight how this symbiosis between clinical neuroscience and neurotechnology is closer to a novel integrated framework than to a simple interdisciplinary interaction.
Affiliation(s)
- Andrea Cometa
- The Biorobotics Institute, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
- Antonio Falasconi
- Friedrich Miescher Institute for Biomedical Research, 4058 Basel, Switzerland
- Biozentrum, University of Basel, 4056 Basel, Switzerland
- Marco Biasizzo
- The Biorobotics Institute, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
- Jacopo Carpaneto
- The Biorobotics Institute, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
- Andreas Horn
- Center for Brain Circuit Therapeutics, Department of Neurology, Brigham & Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
- MGH Neurosurgery & Center for Neurotechnology and Neurorecovery (CNTR) at MGH Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Movement Disorder and Neuromodulation Unit, Department of Neurology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, 10117 Berlin, Germany
- Alberto Mazzoni
- The Biorobotics Institute, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
- Silvestro Micera
- The Biorobotics Institute, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
- Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, 56127 Pisa, Italy
- Translational Neural Engineering Lab, School of Engineering, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
10
Particle rider optimization driven classification for brain-computer interface. Int J Swarm Intell Res 2022. [DOI: 10.4018/ijsir.302607]
Abstract
The brain-computer interface (BCI) is an emerging technology for translating human intentions into control signals. BCIs help patients with complete motor dysfunction interact with other people. In this research, a method for abnormality assessment in humans from the perspective of the BCI was proposed by developing a hybrid optimization algorithm based on electroencephalography (EEG). The hybrid optimization algorithm, called the Particle Rider Optimization Algorithm (PROA), is designed through the incorporation of Particle Swarm Optimization (PSO) and the Rider Optimization Algorithm (ROA). Pre-processing is done to filter noise and remove artifacts: the noise is removed through Common Average Referencing (CAR) and Laplacian filters, whereas the artifacts are eliminated by Principal Component Analysis (PCA).
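Of the pre-processing steps named above, Common Average Referencing has a particularly simple form: each channel is re-referenced by subtracting the instantaneous mean across all channels at every sample. The array shapes below are illustrative.

```python
# Sketch: Common Average Referencing (CAR) for a (channels, samples) EEG array.
import numpy as np

def common_average_reference(eeg):
    """Subtract the across-channel mean at each time sample."""
    return eeg - eeg.mean(axis=0, keepdims=True)

rng = np.random.default_rng(0)
raw = rng.standard_normal((16, 512)) + 5.0  # shared offset on every channel
car = common_average_reference(raw)
```

After CAR, signal components common to all electrodes (such as the shared offset above) are removed, leaving the channel average at every sample numerically zero.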