1. Liu R, Song Q, Ma T, Pan H, Li H, Zhao X. SoftBoMI: a non-invasive wearable body-machine interface for mapping movement of shoulder to commands. J Neural Eng 2024;21:066007. PMID: 39454612. DOI: 10.1088/1741-2552/ad8b6e.
Abstract
Objective. Customized human-machine interfaces for controlling assistive devices are vital in improving the self-help ability of upper limb amputees and tetraplegic patients. Given that most of them possess residual shoulder mobility, using it to generate commands to operate assistive devices can serve as a complementary approach to brain-computer interfaces. Approach. We propose a hybrid body-machine interface prototype that integrates soft sensors and an inertial measurement unit. This study introduces both a rule-based data decoding method and a user intent inference-based decoding method to map human shoulder movements into continuous commands. Additionally, by incorporating prior knowledge of the user's operational performance into a shared autonomy framework, we implement an adaptive switching command mapping approach. This approach enables seamless transitions between the two decoding methods, enhancing their adaptability across different tasks. Main results. The proposed method has been validated on individuals with cervical spinal cord injury, bilateral arm amputation, and healthy subjects through a series of center-out target reaching tasks and a virtual powered wheelchair driving task. The experimental results show that using both the soft sensors and the gyroscope exhibits the most well-rounded performance in intent inference. Additionally, the rule-based method demonstrates better dynamic performance for wheelchair operation, while the intent inference method is more accurate but has higher latency. Adaptive switching decoding methods offer the best adaptability by seamlessly transitioning between decoding methods for different tasks. Furthermore, we discussed the differences and characteristics among the various types of participants in the experiment. Significance. The proposed method has the potential to be integrated into clothing, enabling non-invasive interaction with assistive devices in daily life, and could serve as a tool for rehabilitation assessment in the future.
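The paper does not publish code; purely as an illustration of the kind of logic the abstract describes, the Python sketch below blends a rule-based decoder with an intent-inference decoder, shifting weight toward the rule-based mapping when a running measure of the user's task error grows. All names, gains, thresholds, and the logistic blending rule are hypothetical assumptions, not details taken from the article.

```python
import numpy as np

# Minimal sketch (not the authors' code): blending a rule-based decoder with an
# intent-inference decoder for shoulder-driven 2D velocity commands. All names,
# thresholds, and the logistic blending rule are illustrative assumptions.

def rule_based_decoder(features):
    """Map two shoulder features directly to velocity through fixed gains and a deadzone."""
    gains = np.array([1.5, 1.5])          # feature units -> command units (assumed)
    deadzone = 0.1
    cmd = gains * features[:2]
    cmd[np.abs(cmd) < deadzone] = 0.0     # suppress jitter around the rest posture
    return cmd

def intent_inference_decoder(features, W):
    """Linear regression of intended velocity from the full feature vector."""
    return W @ features

def adaptive_command(features, W, recent_task_error, error_threshold=0.25):
    """Blend the two decoders; weight shifts toward the rule-based mapping when recent
    tracking error (a proxy for the user's operational performance) is high."""
    alpha = 1.0 / (1.0 + np.exp(-(recent_task_error - error_threshold) * 10.0))
    return alpha * rule_based_decoder(features) + (1 - alpha) * intent_inference_decoder(features, W)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.2, size=(2, 4))        # hypothetical trained regression weights
    feats = np.array([0.6, -0.3, 0.1, 0.05])      # e.g. [pitch, roll, strain_1, strain_2]
    print(adaptive_command(feats, W, recent_task_error=0.4))
```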
Affiliation(s)
- Rongkai Liu: Hefei Institutes of Physical Science (HFIPS), Chinese Academy of Sciences, Hefei 230031, Anhui, People's Republic of China; University of Science and Technology of China (USTC), Hefei 230026, Anhui, People's Republic of China
- Quanjun Song: Hefei Institutes of Physical Science (HFIPS), Chinese Academy of Sciences, Hefei 230031, Anhui, People's Republic of China
- Tingting Ma: Hefei Institutes of Physical Science (HFIPS), Chinese Academy of Sciences, Hefei 230031, Anhui, People's Republic of China; University of Science and Technology of China (USTC), Hefei 230026, Anhui, People's Republic of China
- Hongqing Pan: Hefei Institutes of Physical Science (HFIPS), Chinese Academy of Sciences, Hefei 230031, Anhui, People's Republic of China
- Hao Li: Hefei Institutes of Physical Science (HFIPS), Chinese Academy of Sciences, Hefei 230031, Anhui, People's Republic of China
- Xinyan Zhao: Hefei Institutes of Physical Science (HFIPS), Chinese Academy of Sciences, Hefei 230031, Anhui, People's Republic of China; University of Science and Technology of China (USTC), Hefei 230026, Anhui, People's Republic of China
2. Lee JM, Gebrekristos T, De Santis D, Nejati-Javaremi M, Gopinath D, Parikh B, Mussa-Ivaldi FA, Argall BD. Learning to control complex robots using high-dimensional body-machine interfaces. ACM Transactions on Human-Robot Interaction 2024;13:38. PMID: 39478971. PMCID: PMC11524533. DOI: 10.1145/3630264.
Abstract
When individuals are paralyzed from injury or damage to the brain, upper body movement and function can be compromised. While the use of body motions to interface with machines has been shown to be an effective noninvasive strategy to provide movement assistance and to promote physical rehabilitation, learning to use such interfaces to control complex machines is not well understood. In a five-session study, we demonstrate that a subset of an uninjured population is able to learn and improve their ability to use a high-dimensional Body-Machine Interface (BoMI) to control a robotic arm. We use a sensor net of four inertial measurement units, placed bilaterally on the upper body, and a BoMI with the capacity to directly control a robot in six dimensions. We consider whether the way in which the robot control space is mapped from human inputs has any impact on learning. Our results suggest that the space of robot control does play a role in the evolution of human learning: specifically, though robot control in joint space appears to be more intuitive initially, control in task space is found to have a greater capacity for longer-term improvement and learning. Our results further suggest that there is an inverse relationship between control dimension couplings and task performance.
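As a rough, hypothetical sketch of how such a body-machine interface is commonly built (not the authors' implementation), the snippet below fits a PCA-style linear map from IMU-derived body signals to a 6-D command vector; the dimensions, gain, and calibration data are assumptions for illustration.

```python
import numpy as np

# Illustrative sketch only (not the study's interface): a PCA-style linear map from
# IMU-derived body signals to a 6-D command. The same 6-D command can be routed to
# the robot either as joint velocities (joint-space control) or as an end-effector
# twist [vx, vy, vz, wx, wy, wz] that an inverse-kinematics step converts to joint
# motion (task-space control). Dimensions, gain, and calibration data are assumptions.

def fit_bomi_map(calibration_data, n_dims=6):
    """Principal directions of free-exploration body motion define the control map."""
    centered = calibration_data - calibration_data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_dims]                                   # (n_dims, n_body_signals)

def body_to_command(map_matrix, body_sample, gain=0.5):
    return gain * (map_matrix @ body_sample)

rng = np.random.default_rng(1)
calibration = rng.normal(size=(2000, 16))                # e.g. 4 IMUs x 4 orientation features
A = fit_bomi_map(calibration)

command = body_to_command(A, rng.normal(size=16))
joint_space_cmd = command                                # interpreted as 6 joint velocities
task_space_cmd = command                                 # interpreted as an end-effector twist
print(joint_space_cmd.round(3), task_space_cmd.round(3))
```

The distinction studied in the paper lies in how the 6-D command is interpreted by the robot, not in the body-side map itself: a joint-space mapping drives the joints directly, while a task-space mapping drives the end effector and relies on inverse kinematics to produce joint motion.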
Affiliation(s)
- Jongmin M Lee: Northwestern University, USA and Shirley Ryan AbilityLab, USA
- Deepak Gopinath: Northwestern University, USA and Shirley Ryan AbilityLab, USA
- Biraj Parikh: Northwestern University, USA and Shirley Ryan AbilityLab, USA
- Brenna D Argall: Northwestern University, USA and Shirley Ryan AbilityLab, USA
3. Augenstein TE, Nagalla D, Mohacey A, Cubillos LH, Lee MH, Ranganathan R, Krishnan C. A novel virtual robotic platform for controlling six degrees of freedom assistive devices with body-machine interfaces. Comput Biol Med 2024;178:108778. PMID: 38925086. DOI: 10.1016/j.compbiomed.2024.108778.
Abstract
Body-machine interfaces (BoMIs)-systems that control assistive devices (e.g., a robotic manipulator) with a person's movements-offer a robust and non-invasive alternative to brain-machine interfaces for individuals with neurological injuries. However, commercially-available assistive devices offer more degrees of freedom (DOFs) than can be efficiently controlled with a user's residual motor function. Therefore, BoMIs often rely on nonintuitive mappings between body and device movements. Learning these mappings requires considerable practice time in a lab/clinic, which can be challenging. Virtual environments can potentially address this challenge, but there are limited options for high-DOF assistive devices, and it is unclear if learning with a virtual device is similar to learning with its physical counterpart. We developed a novel virtual robotic platform that replicated a commercially-available 6-DOF robotic manipulator. Participants controlled the physical and virtual robots using four wireless inertial measurement units (IMUs) fixed to the upper torso. Forty-three neurologically unimpaired adults practiced a target-matching task using either the physical (sample size n = 25) or virtual device (sample size n = 18) involving pre-, mid-, and post-tests separated by four training blocks. We found that both groups made similar improvements from pre-test in movement time at mid-test (Δvirtual: 9.9 ± 9.5 s; Δphysical: 11.1 ± 9.9 s) and post-test (Δvirtual: 11.1 ± 9.1 s; Δphysical: 11.8 ± 10.5 s) and in path length at mid-test (Δvirtual: 6.1 ± 6.3 m/m; Δphysical: 3.3 ± 3.5 m/m) and post-test (Δvirtual: 6.6 ± 6.2 m/m; Δphysical: 3.5 ± 4.0 m/m). Our results indicate the feasibility of using virtual environments for learning to control assistive devices. Future work should determine how these findings generalize to clinical populations.
Affiliation(s)
- Thomas E Augenstein: Robotics Department, University of Michigan, Ann Arbor, MI, USA; NeuRRo Lab, Department of Physical Medicine and Rehabilitation, University of Michigan, Ann Arbor, MI, USA
- Deepak Nagalla: Robotics Department, University of Michigan, Ann Arbor, MI, USA; NeuRRo Lab, Department of Physical Medicine and Rehabilitation, University of Michigan, Ann Arbor, MI, USA
- Alexander Mohacey: Robotics Department, University of Michigan, Ann Arbor, MI, USA; NeuRRo Lab, Department of Physical Medicine and Rehabilitation, University of Michigan, Ann Arbor, MI, USA
- Luis H Cubillos: Robotics Department, University of Michigan, Ann Arbor, MI, USA; NeuRRo Lab, Department of Physical Medicine and Rehabilitation, University of Michigan, Ann Arbor, MI, USA
- Mei-Hua Lee: Department of Kinesiology, Michigan State University, Lansing, MI, USA
- Rajiv Ranganathan: Department of Kinesiology, Michigan State University, Lansing, MI, USA; Department of Mechanical Engineering, Michigan State University, Lansing, MI, USA
- Chandramouli Krishnan: Robotics Department, University of Michigan, Ann Arbor, MI, USA; NeuRRo Lab, Department of Physical Medicine and Rehabilitation, University of Michigan, Ann Arbor, MI, USA; Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA; Department of Kinesiology, University of Michigan, Ann Arbor, MI, USA; Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI, USA; Department of Physical Therapy, University of Michigan, Flint, MI, USA
4. Losanno E, Ceradini M, Agnesi F, Righi G, Del Popolo G, Shokur S, Micera S. A virtual reality-based protocol to determine the preferred control strategy for hand neuroprostheses in people with paralysis. IEEE Trans Neural Syst Rehabil Eng 2024;32:2261-2269. PMID: 38865234. DOI: 10.1109/tnsre.2024.3413192.
Abstract
Hand neuroprostheses restore voluntary movement in people with paralysis through neuromodulation protocols. There are a variety of strategies to control hand neuroprostheses, which can be based on residual body movements or brain activity. There is no universally superior solution; rather, the best approach may vary from patient to patient. Here, we propose a protocol based on an immersive virtual reality (VR) environment that simulates the use of a hand neuroprosthesis to allow patients to experience and familiarize themselves with various control schemes in clinically relevant tasks and choose the preferred one. We used our VR environment to compare two alternative control strategies over 5 days of training in four patients with C6 spinal cord injury: (a) control via the ipsilateral wrist and (b) control via the contralateral shoulder. We did not find a one-size-fits-all solution but rather a subject-specific preference that could not be predicted based only on a general clinical assessment. The main results were that the VR simulation allowed participants to experience the pros and cons of the proposed strategies and make an educated choice, and that there was a longitudinal improvement. This shows that our VR-based protocol is a useful tool for personalization and training of the control strategy of hand neuroprostheses, which could help to promote user comfort and thus acceptance.
5. Xu T, Zhao K, Hu Y, Li L, Wang W, Wang F, Zhou Y, Li J. Transferable non-invasive modal fusion-transformer (NIMFT) for end-to-end hand gesture recognition. J Neural Eng 2024;21:026034. PMID: 38565124. DOI: 10.1088/1741-2552/ad39a5.
Abstract
Objective. Recent studies have shown that integrating inertial measurement unit (IMU) signals with surface electromyographic (sEMG) signals can greatly improve hand gesture recognition (HGR) performance in applications such as prosthetic control and rehabilitation training. However, current deep learning models for multimodal HGR encounter difficulties in invasive modal fusion, complex feature extraction from heterogeneous signals, and limited inter-subject model generalization. To address these challenges, this study aims to develop an end-to-end and inter-subject transferable model that utilizes non-invasively fused sEMG and acceleration (ACC) data. Approach. The proposed non-invasive modal fusion-transformer (NIMFT) model utilizes 1D-convolutional neural network-based patch embedding for local information extraction and employs a multi-head cross-attention (MCA) mechanism to non-invasively integrate sEMG and ACC signals, stabilizing the variability induced by sEMG. The proposed architecture undergoes detailed ablation studies after hyperparameter tuning. Transfer learning is employed by fine-tuning a pre-trained model on a new subject, and a comparative analysis is performed between the fine-tuned and subject-specific models. Additionally, the performance of NIMFT is compared to state-of-the-art fusion models. Main results. The NIMFT model achieved recognition accuracies of 93.91%, 91.02%, and 95.56% on the three action sets in the Ninapro DB2 dataset. The proposed embedding method and MCA outperformed the traditional invasive modal fusion transformer by 2.01% (embedding) and 1.23% (fusion), respectively. In comparison to subject-specific models, the fine-tuned model exhibited the highest average accuracy improvement of 2.26%, achieving a final accuracy of 96.13%. Moreover, the NIMFT model demonstrated superiority in terms of accuracy, recall, precision, and F1-score compared to the latest modal fusion models with similar model scale. Significance. The NIMFT is a novel end-to-end HGR model that utilizes a non-invasive MCA mechanism to integrate long-range intermodal information effectively. Compared to recent modal fusion models, it demonstrates superior performance in inter-subject experiments and offers higher training efficiency and accuracy through transfer learning than subject-specific approaches.
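The authors' code is not reproduced here; the PyTorch sketch below only illustrates the general pattern the abstract describes: per-modality 1D-convolutional patch embedding followed by multi-head cross-attention between sEMG and ACC tokens. Channel counts, patch length, layer sizes, and the pooling/classification head are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Not the authors' implementation: a compact sketch of 1D-convolutional patch
# embedding per modality plus multi-head cross-attention in which ACC tokens attend
# to sEMG tokens. Sizes and the classification head are illustrative assumptions.

class PatchEmbed1D(nn.Module):
    def __init__(self, in_ch, d_model=64, patch=20):
        super().__init__()
        self.proj = nn.Conv1d(in_ch, d_model, kernel_size=patch, stride=patch)

    def forward(self, x):                         # x: (batch, channels, time)
        return self.proj(x).transpose(1, 2)       # -> (batch, tokens, d_model)

class CrossModalFusion(nn.Module):
    def __init__(self, d_model=64, heads=4, n_classes=10):
        super().__init__()
        self.emg_embed = PatchEmbed1D(in_ch=12, d_model=d_model)   # e.g. 12 sEMG channels
        self.acc_embed = PatchEmbed1D(in_ch=36, d_model=d_model)   # e.g. 36 ACC channels
        self.cross_attn = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, emg, acc):
        e, a = self.emg_embed(emg), self.acc_embed(acc)
        fused, _ = self.cross_attn(query=a, key=e, value=e)   # ACC queries, sEMG keys/values
        return self.head(fused.mean(dim=1))                   # pooled tokens -> gesture logits

model = CrossModalFusion()
logits = model(torch.randn(8, 12, 400), torch.randn(8, 36, 400))   # 400-sample windows
print(logits.shape)                                                 # torch.Size([8, 10])
```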
Affiliation(s)
- Tianxiang Xu: School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China; The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Kunkun Zhao: School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China; The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Yuxiang Hu: School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China; The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Liang Li: School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China; The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Wei Wang: School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China; The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Fulin Wang: School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China; Nanjing PANDA Electronics Equipment Co., Ltd, Nanjing 210033, People's Republic of China
- Yuxuan Zhou: School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China; The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
- Jianqing Li: School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China; The Engineering Research Center of Intelligent Theranostics Technology and Instruments, Ministry of Education, School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, People's Republic of China
6. Albanese GA, Bucchieri A, Podda J, Tacchino A, Buccelli S, De Momi E, Laffranchi M, Mannella K, Holmes MWR, Zenzeri J, De Michieli L, Brichetto G, Barresi G. Robotic systems for upper-limb rehabilitation in multiple sclerosis: a SWOT analysis and the synergies with virtual and augmented environments. Front Robot AI 2024;11:1335147. PMID: 38638271. PMCID: PMC11025362. DOI: 10.3389/frobt.2024.1335147.
Abstract
The robotics discipline is exploring precise and versatile solutions for upper-limb rehabilitation in Multiple Sclerosis (MS). People with MS can greatly benefit from robotic systems to help combat the complexities of this disease, which can impair the ability to perform activities of daily living (ADLs). In order to present the potential and the limitations of smart mechatronic devices in the mentioned clinical domain, this review is structured to propose a concise SWOT (Strengths, Weaknesses, Opportunities, and Threats) Analysis of robotic rehabilitation in MS. Through the SWOT Analysis, a method mostly adopted in business management, this paper addresses both internal and external factors that can promote or hinder the adoption of upper-limb rehabilitation robots in MS. Subsequently, it discusses how the synergy with another category of interaction technologies - the systems underlying virtual and augmented environments - may empower Strengths, overcome Weaknesses, expand Opportunities, and handle Threats in rehabilitation robotics for MS. The impactful adaptability of these digital settings (extensively used in rehabilitation for MS, even to approach ADL-like tasks in safe simulated contexts) is the main reason for presenting this approach to face the critical issues of the aforementioned SWOT Analysis. This methodological proposal aims at paving the way for devising further synergistic strategies based on the integration of medical robotic devices with other promising technologies to help upper-limb functional recovery in MS.
Affiliation(s)
- Anna Bucchieri: Rehab Technologies Lab, Istituto Italiano di Tecnologia, Genoa, Italy; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Jessica Podda: Scientific Research Area, Italian Multiple Sclerosis Foundation (FISM), Genoa, Italy
- Andrea Tacchino: Scientific Research Area, Italian Multiple Sclerosis Foundation (FISM), Genoa, Italy
- Stefano Buccelli: Rehab Technologies Lab, Istituto Italiano di Tecnologia, Genoa, Italy
- Elena De Momi: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Matteo Laffranchi: Rehab Technologies Lab, Istituto Italiano di Tecnologia, Genoa, Italy
- Kailynn Mannella: Department of Kinesiology, Brock University, St. Catharines, ON, Canada
- Giampaolo Brichetto: Scientific Research Area, Italian Multiple Sclerosis Foundation (FISM), Genoa, Italy; AISM Rehabilitation Center Liguria, Italian Multiple Sclerosis Society (AISM), Genoa, Italy
- Giacinto Barresi: Rehab Technologies Lab, Istituto Italiano di Tecnologia, Genoa, Italy
7. Pierella C, D'Antuono C, Marchesi G, Menotti CE, Casadio M. A computer interface controlled by upper limb muscles: effects of a two weeks training on younger and older adults. IEEE Trans Neural Syst Rehabil Eng 2023;31:3744-3751. PMID: 37676798. DOI: 10.1109/tnsre.2023.3312981.
Abstract
As the population worldwide ages, there is a growing need for assistive technology and effective human-machine interfaces to address the wider range of motor disabilities that older adults may experience. Motor disabilities can make it difficult for individuals to perform basic daily tasks, such as getting dressed, preparing meals, or using a computer. The goal of this study was to investigate the effect of two weeks of training with a myoelectric computer interface (MCI) on motor functions in younger and older adults. Twenty people were recruited in the study: thirteen younger (range: 22-35 years old) and seven older (range: 61-78 years old) adults. Participants completed six training sessions of about 2 hours each, during which the activity of the right and left biceps and trapezius was mapped into a control signal for a computer cursor. Results highlighted significant improvements in cursor control, and therefore in muscle coordination, in both groups. All participants became faster and more accurate with training, although people in different age ranges learned with different dynamics. Results of the questionnaire on system usability and quality highlighted a general consensus about ease of use and intuitiveness. These findings suggest that the proposed MCI training can be a powerful tool in the framework of assistive technologies for both younger and older adults. Further research is needed to determine the optimal duration and intensity of MCI training for different age groups and to investigate the long-term effects of training on physical and cognitive function.
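For readers unfamiliar with how such myoelectric cursor control is typically constructed, the generic sketch below maps four smoothed EMG envelopes (left/right biceps and trapezius) to a 2-D cursor velocity through a fixed weight matrix; it is not the study's code, and the weights, smoothing constant, and baselines are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the study's implementation): four EMG envelopes mapped
# linearly to 2-D cursor velocity. Weights, smoothing, and baselines are assumptions.

def emg_envelope(raw_window, prev_env, alpha=0.1):
    """Rectify and exponentially smooth one window of raw EMG per muscle."""
    return (1 - alpha) * prev_env + alpha * np.mean(np.abs(raw_window), axis=1)

def cursor_velocity(envelope, baseline, weights, gain=200.0):
    """Subtract the resting baseline and map residual activation to (vx, vy) in px/s."""
    activation = np.clip(envelope - baseline, 0.0, None)
    return gain * (weights @ activation)

weights = np.array([[ 1.0, -1.0,  0.0,  0.0],    # right/left biceps push the cursor right/left
                    [ 0.0,  0.0,  1.0, -1.0]])   # right/left trapezius push the cursor up/down
baseline = np.full(4, 0.02)

rng = np.random.default_rng(2)
env = np.zeros(4)
for _ in range(50):                               # 50 consecutive 20-ms windows of raw EMG
    env = emg_envelope(rng.normal(scale=0.05, size=(4, 40)), env)
print(cursor_velocity(env, baseline, weights))
```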
8. Portnova-Fahreeva AA, Rizzoglio F, Mussa-Ivaldi FA, Rombokas E. Autoencoder-based myoelectric controller for prosthetic hands. Front Bioeng Biotechnol 2023;11:1134135. PMID: 37434753. PMCID: PMC10331017. DOI: 10.3389/fbioe.2023.1134135.
Abstract
In the past, linear dimensionality-reduction techniques, such as Principal Component Analysis, have been used to simplify the myoelectric control of high-dimensional prosthetic hands. Nonetheless, their nonlinear counterparts, such as Autoencoders, have been shown to be more effective at compressing and reconstructing complex hand kinematics data. As a result, they have the potential to be a more accurate tool for prosthetic hand control. Here, we present a novel Autoencoder-based controller, in which the user is able to control a high-dimensional (17D) virtual hand via a low-dimensional (2D) space. We assess the efficacy of the controller via a validation experiment with four unimpaired participants. All the participants were able to significantly decrease the time it took for them to match a target gesture with a virtual hand to an average of 6.9 s, and three out of four participants significantly improved path efficiency. Our results suggest that the Autoencoder-based controller has the potential to be used to manipulate high-dimensional hand systems via a myoelectric interface with higher accuracy than PCA; however, more exploration needs to be done on the most effective ways of learning such a controller.
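As a minimal sketch of the general construction (not the published controller), the snippet below trains an autoencoder with a 2-D bottleneck on 17-D hand kinematics and then uses only the decoder online, so that a 2-D command drives a 17-D hand posture. Layer widths, training length, and the synthetic data stand in for details not specified here.

```python
import torch
import torch.nn as nn

# Sketch only: an autoencoder with a 2-D latent space trained on 17-D hand-joint data.
# Online, a 2-D myoelectric command is fed to the decoder to produce a hand posture.
# Layer sizes, epochs, and the random training data are assumptions.

class HandAutoencoder(nn.Module):
    def __init__(self, n_joints=17, latent=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_joints, 32), nn.Tanh(), nn.Linear(32, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 32), nn.Tanh(), nn.Linear(32, n_joints))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = HandAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
kinematics = torch.randn(1024, 17)                  # stand-in for recorded hand kinematics

for _ in range(200):                                # offline training on kinematic data
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(kinematics), kinematics)
    loss.backward()
    opt.step()

with torch.no_grad():                               # online use: 2-D control -> 17-D posture
    latent_command = torch.tensor([[0.5, -0.2]])    # e.g. derived from two EMG channels
    print(model.decoder(latent_command).shape)      # torch.Size([1, 17])
```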
Affiliation(s)
| | - Fabio Rizzoglio
- Department of Neuroscience, Northwestern University, Chicago, IL, United States
| | - Ferdinando A. Mussa-Ivaldi
- Department of Mechanical Engineering, Northwestern University, Evanston, IL, United States
- Department of Neuroscience, Northwestern University, Chicago, IL, United States
| | - Eric Rombokas
- Department of Mechanical Engineering, University of Washington, Seattle, WA, United States
- Department of Electrical Engineering, University of Washington, Seattle, WA, United States
| |
9. Portnova-Fahreeva AA, Rizzoglio F, Casadio M, Mussa-Ivaldi FA, Rombokas E. Learning to operate a high-dimensional hand via a low-dimensional controller. Front Bioeng Biotechnol 2023;11:1139405. PMID: 37214310. PMCID: PMC10192906. DOI: 10.3389/fbioe.2023.1139405.
Abstract
Dimensionality reduction techniques have proven useful in simplifying complex hand kinematics. They may allow for a low-dimensional kinematic or myoelectric interface to be used to control a high-dimensional hand. Controlling a high-dimensional hand, however, is difficult to learn since the relationship between the low-dimensional controls and the high-dimensional system can be hard to perceive. In this manuscript, we explore how training practices that make this relationship more explicit can aid learning. We outline three studies that explore different factors which affect learning of an autoencoder-based controller, in which a user is able to operate a high-dimensional virtual hand via a low-dimensional control space. We compare computer mouse and myoelectric control as one factor contributing to learning difficulty. We also compare training paradigms in which the dimensionality of the training task matched or did not match the true dimensionality of the low-dimensional controller (both 2D). The training paradigms were a) a full-dimensional task, in which the user was unaware of the underlying controller dimensionality, b) an implicit 2D training, which allowed the user to practice on a simple 2D reaching task before attempting the full-dimensional one, without establishing an explicit connection between the two, and c) an explicit 2D training, during which the user was able to observe the relationship between their 2D movements and the higher-dimensional hand. We found that operating a myoelectric interface did not pose a big challenge to learning the low-dimensional controller and was not the main reason for the poor performance. Implicit 2D training was found to be as good as, but not better than, training directly on the high-dimensional hand. What truly aided the user's ability to learn the controller was the 2D training that established an explicit connection between the low-dimensional control space and the high-dimensional hand movements.
Affiliation(s)
| | - Fabio Rizzoglio
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
| | - Maura Casadio
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
| | - Ferdinando A. Mussa-Ivaldi
- Department of Mechanical Engineering, Northwestern University, Evanston, IL, United States
- Department of Neuroscience, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
| | - Eric Rombokas
- Department of Mechanical Engineering, University of Washington, Seattle, WA, United States
- Department of Electrical Engineering, University of Washington, Seattle, WA, United States
| |
10. Zhou B, Feng N, Wang H, Lu Y, Wei C, Jiang D, Li Z. Non-invasive dual attention TCN for electromyography and motion data fusion in lower limb ambulation prediction. J Neural Eng 2022;19. PMID: 35970137. DOI: 10.1088/1741-2552/ac89b4.
Abstract
OBJECTIVE Recent technological advances show the feasibility of fusing surface electromyography (sEMG) signals and movement data to predict lower limb ambulation intentions. However, since the invasive fusion of different signals is a major impediment to improving predictive performance, searching for a non-invasive fusion mechanism for lower limb ambulation pattern recognition based on different modal features is crucial. APPROACH We propose an end-to-end sequence prediction model with non-invasive dual attention temporal convolutional networks (NIDA-TCN) as a core to elegantly address the essential deficiencies of traditional decision models with heterogeneous signal fusion. Notably, the NIDA-TCN is a weighted fusion of sEMG and inertial measurement unit (IMU) data with time-dependent effective hidden information in the temporal and channel dimensions using TCN and self-attention mechanisms. The new model can better discriminate between four lower limb activities of daily living (ADL): walking, jumping, walking downstairs, and walking upstairs. MAIN RESULTS The results of this study show that the NIDA-TCN models produce predictions that significantly outperform both frame-wise and TCN models in terms of accuracy, sensitivity, precision, F1 score, and stability. Particularly, the NIDA-TCN with sequence decision fusion (NIDA-TCN-SDF) models have maximum accuracy and stability increments of 3.37% and 4.95% relative to the frame-wise model, respectively, without manual feature-encoding and complex model parameters. SIGNIFICANCE It is concluded that the results demonstrate the validity and feasibility of the NIDA-TCN-SDF models to ensure the prediction of daily lower limb ambulation activities, paving the way to the development of fused heterogeneous signal decoding with better prediction performance.
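The sketch below is not the NIDA-TCN implementation; it only conveys the flavor of a dilated temporal convolution over concatenated sEMG and IMU channels followed by channel-wise and time-wise attention with attention-weighted pooling. Channel counts, dilation, and the classification head are assumptions.

```python
import torch
import torch.nn as nn

# Rough, generic sketch (not the published model): dilated 1D convolution over fused
# sEMG + IMU channels, a channel-attention gate, a temporal-attention pooling step,
# and a 4-class head for the ambulation activities. All sizes are assumptions.

class DualAttentionTCNBlock(nn.Module):
    def __init__(self, channels=40, hidden=64, dilation=2, n_classes=4):
        super().__init__()
        self.tcn = nn.Conv1d(channels, hidden, kernel_size=3, padding=dilation, dilation=dilation)
        self.channel_attn = nn.Sequential(nn.Linear(hidden, hidden), nn.Sigmoid())
        self.temporal_attn = nn.Conv1d(hidden, 1, kernel_size=1)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                       # x: (batch, sEMG+IMU channels, time)
        h = torch.relu(self.tcn(x))             # (batch, hidden, time)
        c_w = self.channel_attn(h.mean(dim=2))  # channel weights from time-averaged features
        h = h * c_w.unsqueeze(-1)
        t_w = torch.softmax(self.temporal_attn(h), dim=-1)   # (batch, 1, time)
        pooled = (h * t_w).sum(dim=-1)          # attention-weighted temporal pooling
        return self.head(pooled)                # logits, e.g. walking/jumping/downstairs/upstairs

model = DualAttentionTCNBlock()
print(model(torch.randn(16, 40, 200)).shape)    # torch.Size([16, 4])
```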
Affiliation(s)
- Bin Zhou: Department of Mechanical Engineering and Automation, Northeastern University, No. 3-11, Wenhua Road, Heping District, Shenyang 110819, China
- Naishi Feng: Department of Mechanical Engineering and Automation, Northeastern University, No. 3-11, Wenhua Road, Heping District, Shenyang 110819, China
- Hong Wang: Department of Mechanical Engineering and Automation, Northeastern University, No. 3-11, Wenhua Road, Heping District, Shenyang 110819, China
- Yanzheng Lu: Department of Mechanical Engineering and Automation, Northeastern University, No. 3-11, Wenhua Road, Heping District, Shenyang 110819, China
- Chunfeng Wei: Department of Mechanical Engineering and Automation, Northeastern University, No. 3-11, Wenhua Road, Heping District, Shenyang 110819, China
- Daqi Jiang: Department of Mechanical Engineering and Automation, Northeastern University, No. 3-11, Wenhua Road, Heping District, Shenyang 110819, China
- Ziyang Li: Department of Mechanical Engineering and Automation, Northeastern University, No. 3-11, Wenhua Road, Heping District, Shenyang 110819, China
11. Pomplun E, Thomas A, Corrigan E, Shah VA, Mrotek LA, Scheidt RA. Vibrotactile perception for sensorimotor augmentation: perceptual discrimination of vibrotactile stimuli induced by low-cost eccentric rotating mass motors at different body locations in young, middle-aged, and older adults. Front Rehabil Sci 2022;3:895036. PMID: 36188929. PMCID: PMC9397814. DOI: 10.3389/fresc.2022.895036.
Abstract
Sensory augmentation technologies are being developed to convey useful supplemental sensory cues to people in comfortable, unobtrusive ways for the purpose of improving the ongoing control of volitional movement. Low-cost vibration motors are strong contenders for providing supplemental cues intended to enhance or augment closed-loop feedback control of limb movements in patients with proprioceptive deficits, but who still retain the ability to generate movement. However, it remains unclear what form such cues should take and where on the body they may be applied to enhance the perception-cognition-action cycle implicit in closed-loop feedback control. As a step toward addressing this knowledge gap, we used low-cost, wearable technology to examine the perceptual acuity of vibrotactile stimulus intensity discrimination at several candidate sites on the body in a sample of participants spanning a wide age range. We also sought to determine the extent to which the acuity of vibrotactile discrimination can improve over several days of discrimination training. Healthy adults performed a series of 2-alternative forced choice experiments that quantified capability to perceive small differences in the intensity of stimuli provided by low-cost eccentric rotating mass vibration motors fixed at various body locations. In one set of experiments, we found that the acuity of intensity discrimination was poorer in older participants than in middle-aged and younger participants, and that stimuli applied to the torso were systematically harder to discriminate than stimuli applied to the forearm, knee, or shoulders, which all had similar acuities. In another set of experiments, we found that older adults could improve intensity discrimination over the course of 3 days of practice on that task such that their final performance did not differ significantly from that of younger adults. These findings may be useful for future development of wearable technologies intended to improve the control of movements through the application of supplemental vibrotactile cues.
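For context on the analysis that a 2-alternative forced-choice protocol of this kind typically supports, the toy sketch below simulates noisy perceived intensities, computes proportion correct at several intensity differences, and interpolates a 75%-correct discrimination threshold. The noise level, stimulus values, and trial counts are invented for illustration and do not reflect the study's data or pipeline.

```python
import numpy as np

# Generic 2AFC intensity-discrimination sketch (not the study's analysis): simulate
# noisy perception of a standard vs. a comparison stimulus, estimate proportion
# correct per intensity difference, and interpolate the 75%-correct threshold.

rng = np.random.default_rng(3)
standard = 1.0                                   # normalized vibration amplitude (assumed)
deltas = np.array([0.02, 0.05, 0.10, 0.20, 0.40])
noise_sd = 0.12                                  # hypothetical perceptual noise
n_trials = 200

p_correct = []
for d in deltas:
    perceived_std = standard + rng.normal(0, noise_sd, n_trials)
    perceived_cmp = standard + d + rng.normal(0, noise_sd, n_trials)
    p_correct.append(np.mean(perceived_cmp > perceived_std))
p_correct = np.array(p_correct)

threshold = np.interp(0.75, p_correct, deltas)   # assumes p_correct rises monotonically here
print(dict(zip(deltas, p_correct.round(2))), "estimated 75% threshold:", round(threshold, 3))
```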
Affiliation(s)
- Ella Pomplun: Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, United States
- Ashiya Thomas: Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, United States
- Erin Corrigan: Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, United States
- Valay A. Shah: Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, United States; Department of Applied Physiology and Kinesiology, University of Florida, Gainesville, FL, United States
- Leigh A. Mrotek: Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, United States
- Robert A. Scheidt: Department of Biomedical Engineering, Marquette University and Medical College of Wisconsin, Milwaukee, WI, United States
12. Li W, Shi P, Yu H. Gesture recognition using surface electromyography and deep learning for prostheses hand: state-of-the-art, challenges, and future. Front Neurosci 2021;15:621885. PMID: 33981195. PMCID: PMC8107289. DOI: 10.3389/fnins.2021.621885.
Abstract
Amputation of the upper limb brings a heavy burden to amputees, reduces their quality of life, and limits their performance in activities of daily life. The realization of natural control for prosthetic hands is crucial to improving the quality of life of amputees. The surface electromyography (sEMG) signal is one of the most widely used biological signals for the prediction of upper limb motor intention, which is an essential element of the control systems of prosthetic hands. The conversion of sEMG signals into effective control signals often requires a lot of computational power and complex processing. Existing commercial prosthetic hands can only provide natural control for very few active degrees of freedom. Deep learning (DL) has performed surprisingly well in the development of intelligent systems in recent years. The significant improvement of hardware equipment and the continuous emergence of large sEMG data sets have also boosted DL research in sEMG signal processing. DL can effectively improve the accuracy of sEMG pattern recognition and reduce the influence of interference factors. This paper analyzes the applicability and efficiency of DL in sEMG-based gesture recognition and reviews the key techniques of DL-based sEMG pattern recognition for the prosthetic hand, including signal acquisition, signal preprocessing, feature extraction, classification of patterns, post-processing, and performance evaluation. Finally, the current challenges and future prospects in the clinical application of these techniques are outlined and discussed.
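To make the reviewed pipeline stages concrete, the generic sketch below strings together windowing of a raw sEMG stream, a small 1D-CNN classifier, and majority-vote post-processing. It does not reproduce any specific model from the review; the window length, channel count, and class count are assumptions.

```python
import torch
import torch.nn as nn

# Generic sEMG gesture-recognition pipeline sketch (not from the review): sliding
# windows -> small 1D CNN -> per-window predictions -> majority-vote smoothing.

def sliding_windows(emg, win=200, step=50):          # emg: (channels, samples)
    starts = range(0, emg.shape[1] - win + 1, step)
    return torch.stack([emg[:, s:s + win] for s in starts])   # (n_windows, channels, win)

class SEMGNet(nn.Module):
    def __init__(self, channels=8, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=9), nn.ReLU(), nn.AdaptiveAvgPool1d(1))
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

emg_stream = torch.randn(8, 2000)                    # ~1 s of 8-channel sEMG at 2 kHz (synthetic)
windows = sliding_windows(emg_stream)
preds = SEMGNet()(windows).argmax(dim=1)
gesture = torch.mode(preds).values                   # majority vote smooths window-level decisions
print(windows.shape, gesture.item())
```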
Affiliation(s)
- Wei Li: Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, Shanghai, China
- Ping Shi: Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, Shanghai, China
- Hongliu Yu: Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, Shanghai, China
13. De Santis D. A framework for optimizing co-adaptation in body-machine interfaces. Front Neurorobot 2021;15:662181. PMID: 33967733. PMCID: PMC8097093. DOI: 10.3389/fnbot.2021.662181.
Abstract
The operation of a human-machine interface is increasingly often referred to as a two-learners problem, where both the human and the interface independently adapt their behavior based on shared information to improve joint performance over a specific task. Drawing inspiration from the field of body-machine interfaces, we take a different perspective and propose a framework for studying co-adaptation in scenarios where the evolution of the interface is dependent on the users' behavior and that do not require task goals to be explicitly defined. Our mathematical description of co-adaptation is built upon the assumption that the interface and the user agents co-adapt toward maximizing the interaction efficiency rather than optimizing task performance. This work describes a mathematical framework for body-machine interfaces where a naïve user interacts with an adaptive interface. The interface, modeled as a linear map from a space with high dimension (the user input) to a lower dimensional feedback, acts as an adaptive "tool" whose goal is to minimize transmission loss following an unsupervised learning procedure and has no knowledge of the task being performed by the user. The user is modeled as a non-stationary multivariate Gaussian generative process that produces a sequence of actions that is either statistically independent or correlated. Dependent data is used to model the output of an action selection module concerned with achieving some unknown goal dictated by the task. The framework assumes that in parallel to this explicit objective, the user is implicitly learning a suitable but not necessarily optimal way to interact with the interface. Implicit learning is modeled as use-dependent learning modulated by a reward-based mechanism acting on the generative distribution. Through simulation, the work quantifies how the system evolves as a function of the learning time scales when a user learns to operate a static vs. an adaptive interface. We show that this novel framework can be directly exploited to readily simulate a variety of interaction scenarios, to facilitate the exploration of the parameters that lead to optimal learning dynamics of the joint system, and to provide an empirical proof for the superiority of human-machine co-adaptation over user adaptation.
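A toy numerical sketch of this kind of setup (not the paper's code) is shown below: the user is a Gaussian generative process whose dominant movement directions slowly rotate, and the interface is a linear map from an 8-D body space to a 2-D feedback space adapted online with Oja's subspace rule so as to reduce reconstruction (transmission) loss, compared against a static map. Dimensions, drift rate, and learning rate are assumptions.

```python
import numpy as np

# Toy simulation inspired by the framework's setup (not the published model): a
# non-stationary Gaussian "user" and a linear "interface" adapted with Oja's
# subspace rule to track the user's principal subspace. All parameters are assumed.

rng = np.random.default_rng(4)
dim_body, dim_ctrl, eta = 8, 2, 0.01

def user_covariance(t):
    """Non-stationary user: dominant movement directions rotate slowly over time."""
    theta = 0.002 * t
    basis = np.eye(dim_body)
    basis[:2, :2] = [[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]
    return basis @ np.diag([4.0, 2.0] + [0.1] * (dim_body - 2)) @ basis.T

W_adaptive = rng.normal(scale=0.1, size=(dim_ctrl, dim_body))
W_static = W_adaptive.copy()

for t in range(5000):
    x = rng.multivariate_normal(np.zeros(dim_body), user_covariance(t))
    y = W_adaptive @ x
    W_adaptive += eta * (np.outer(y, x) - np.outer(y, y) @ W_adaptive)   # Oja's subspace rule

def transmission_loss(W, t):
    """Body-signal variance not captured by the interface's 2-D subspace."""
    C = user_covariance(t)
    P = W.T @ np.linalg.pinv(W @ W.T) @ W            # projector onto the interface subspace
    return np.trace(C) - np.trace(P @ C)

print("adaptive:", round(transmission_loss(W_adaptive, 5000), 3),
      "static:", round(transmission_loss(W_static, 5000), 3))
```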
Affiliation(s)
- Dalia De Santis: Department of Robotics, Brain and Cognitive Sciences, Center for Human Technologies, Istituto Italiano di Tecnologia, Genova, Italy
14. Pierella C, Galofaro E, De Luca A, Losio L, Gamba S, Massone A, Mussa-Ivaldi FA, Casadio M. Recovery of distal arm movements in spinal cord injured patients with a body-machine interface: a proof-of-concept study. Sensors (Basel) 2021;21:2243. PMID: 33807007. PMCID: PMC8004832. DOI: 10.3390/s21062243.
Abstract
BACKGROUND The recovery of upper limb mobility and functions is essential for people with cervical spinal cord injuries (cSCI) to maximize independence in daily activities and ensure a successful return to normality. The rehabilitative path should include a thorough neuromotor evaluation and personalized treatments aimed at recovering motor functions. Body-machine interfaces (BoMI) have been proven to be capable of harnessing residual joint motions to control objects like computer cursors and virtual or physical wheelchairs and to promote motor recovery. However, their therapeutic application has still been limited to shoulder movements. Here, we expanded the use of BoMI to promote the whole arm's mobility, with a special focus on elbow movements. We also developed an instrumented evaluation test and a set of kinematic indicators for assessing residual abilities and recovery. METHODS Five inpatient cSCI subjects (four acute, one chronic) participated in a BoMI treatment complementary to their standard rehabilitative routine. The subjects wore a BoMI with sensors placed on both proximal and distal arm districts and practiced for 5 weeks. The BoMI was programmed to promote symmetry between right and left arms use and the forearms' mobility while playing games. To evaluate the effectiveness of the treatment, the subjects' kinematics were recorded while performing an evaluation test that involved functional bilateral arms movements, before, at the end, and three months after training. RESULTS At the end of the training, all subjects learned to efficiently use the interface despite being compelled by it to engage their most impaired movements. The subjects completed the training with bilateral symmetry in body recruitment, already present at the end of the familiarization, and they increased the forearm activity. The instrumental evaluation confirmed this. The elbow motion's angular amplitude improved for all subjects, and other kinematic parameters showed a trend towards the normality range. CONCLUSION The outcomes are preliminary evidence supporting the efficacy of the proposed BoMI as a rehabilitation tool to be considered for clinical practice. It also suggests an instrumental evaluation protocol and a set of indicators to assess and evaluate motor impairment and recovery in cSCI.
Affiliation(s)
- Camilla Pierella: Department of Neuroscience, Rehabilitation, Ophthalmology, Genetics, Maternal and Child Health (DINOGMI), University of Genova, 16132 Genoa, Italy; Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genoa, 16145 Genoa, Italy; Department of Physiology, Northwestern University, Chicago, IL 60611, USA; Shirley Ryan AbilityLab, Chicago, IL 60611, USA
- Elisa Galofaro: Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genoa, 16145 Genoa, Italy; Assistive Robotics and Interactive Exosuits (ARIES) Lab, Institute of Computer Engineering (ZITI), University of Heidelberg, 69117 Heidelberg, Germany
- Alice De Luca: Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genoa, 16145 Genoa, Italy; Movendo Technology, 16128 Genoa, Italy; Recovery and Functional Reeducation Unit, Santa Corona Hospital, ASL2 Savonese, 17027 Pietra Ligure, Italy
- Luca Losio: S.C. Unità Spinale Unipolare, Santa Corona Hospital, ASL2 Savonese, 17027 Pietra Ligure, Italy; Italian Spinal Cord Laboratory (SCIL), 17027 Pietra Ligure, Italy
- Simona Gamba: S.C. Unità Spinale Unipolare, Santa Corona Hospital, ASL2 Savonese, 17027 Pietra Ligure, Italy; Italian Spinal Cord Laboratory (SCIL), 17027 Pietra Ligure, Italy
- Antonino Massone: S.C. Unità Spinale Unipolare, Santa Corona Hospital, ASL2 Savonese, 17027 Pietra Ligure, Italy; Italian Spinal Cord Laboratory (SCIL), 17027 Pietra Ligure, Italy
- Ferdinando A. Mussa-Ivaldi: Department of Physiology, Northwestern University, Chicago, IL 60611, USA; Shirley Ryan AbilityLab, Chicago, IL 60611, USA; Department of Physical Medicine and Rehabilitation, Northwestern University, Evanston, IL 60208, USA; Department of Biomedical Engineering, Northwestern University, Evanston, IL 60208, USA
- Maura Casadio: Department of Informatics, Bioengineering, Robotics and Systems Engineering (DIBRIS), University of Genoa, 16145 Genoa, Italy; Department of Physiology, Northwestern University, Chicago, IL 60611, USA; Italian Spinal Cord Laboratory (SCIL), 17027 Pietra Ligure, Italy
15. Rizzoglio F, Casadio M, De Santis D, Mussa-Ivaldi FA. Building an adaptive interface via unsupervised tracking of latent manifolds. Neural Netw 2021;137:174-187. PMID: 33636657. DOI: 10.1016/j.neunet.2021.01.009.
Abstract
In human-machine interfaces, decoder calibration is critical to enable an effective and seamless interaction with the machine. However, recalibration is often necessary as the decoder off-line predictive power does not generally imply ease-of-use, due to closed loop dynamics and user adaptation that cannot be accounted for during the calibration procedure. Here, we propose an adaptive interface that makes use of a non-linear autoencoder trained iteratively to perform online manifold identification and tracking, with the dual goal of reducing the need for interface recalibration and enhancing human-machine joint performance. Importantly, the proposed approach avoids interrupting the operation of the device and it neither relies on information about the state of the task, nor on the existence of a stable neural or movement manifold, allowing it to be applied in the earliest stages of interface operation, when the formation of new neural strategies is still on-going. In order to more directly test the performance of our algorithm, we defined the autoencoder latent space as the control space of a body-machine interface. After an initial offline parameter tuning, we evaluated the performance of the adaptive interface versus that of a static decoder in approximating the evolving low-dimensional manifold of users simultaneously learning to perform reaching movements within the latent space. Results show that the adaptive approach increased the representational efficiency of the interface decoder. Concurrently, it significantly improved users' task-related performance, indicating that the development of a more accurate internal model is encouraged by the online co-adaptation process.
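As a schematic illustration of the iterative idea (not the published algorithm), the sketch below re-fits an autoencoder, whose 2-D latent space acts as the control space, on a sliding buffer of recent body signals so that the decoder tracks a slowly drifting low-dimensional manifold without using any task information. Buffer size, update cadence, the drift model, and network sizes are assumptions.

```python
import torch
import torch.nn as nn

# Schematic sketch only: periodic, unsupervised re-fitting of an autoencoder on a
# buffer of recent body signals, so its 2-D latent (the control space) follows the
# user's evolving manifold. All data, sizes, and schedules are assumptions.

class AE(nn.Module):
    def __init__(self, n_in=10, latent=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_in, 16), nn.Tanh(), nn.Linear(16, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 16), nn.Tanh(), nn.Linear(16, n_in))

    def forward(self, x):
        return self.dec(self.enc(x))

ae, buffer = AE(), []
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

for t in range(3000):                              # streaming body-signal samples
    drift = torch.tensor([0.001 * t, 0.0])         # the user's manifold slowly shifts
    latent_true = torch.randn(2) + drift
    sample = torch.cat([latent_true, 0.05 * torch.randn(8)])   # 10-D signal, ~2-D structure
    buffer.append(sample)
    buffer = buffer[-500:]                          # keep only recent behavior

    if t % 50 == 0 and len(buffer) >= 100:          # periodic online update, no task info used
        batch = torch.stack(buffer)
        for _ in range(20):
            opt.zero_grad()
            loss = nn.functional.mse_loss(ae(batch), batch)
            loss.backward()
            opt.step()

with torch.no_grad():                               # latent space = control space for the interface
    print(ae.enc(torch.stack(buffer)).std(dim=0))   # spread of recent activity in control coordinates
```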
Affiliation(s)
- Fabio Rizzoglio: Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, 16145 Genoa, Italy; Department of Physiology, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Shirley Ryan AbilityLab, Chicago, IL 60611, USA
- Maura Casadio: Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, 16145 Genoa, Italy
- Dalia De Santis: Department of Physiology, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Shirley Ryan AbilityLab, Chicago, IL 60611, USA; Department of Robotics, Brain and Cognitive Sciences, Istituto Italiano di Tecnologia, Via Enrico Melen 83, 16152 Genoa, Italy
- Ferdinando A Mussa-Ivaldi: Department of Physiology, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Shirley Ryan AbilityLab, Chicago, IL 60611, USA