1. Molinaro DD, Kang I, Young AJ. Estimating human joint moments unifies exoskeleton control, reducing user effort. Sci Robot 2024;9:eadi8852. PMID: 38507475. DOI: 10.1126/scirobotics.adi8852.
Abstract
Robotic lower-limb exoskeletons can augment human mobility, but current systems require extensive, context-specific considerations, limiting their real-world viability. Here, we present a unified exoskeleton control framework that autonomously adapts assistance on the basis of instantaneous user joint moment estimates from a temporal convolutional network (TCN). When deployed on our hip exoskeleton, the TCN achieved an average root mean square error of 0.142 newton-meters per kilogram across 35 ambulatory conditions without any user-specific calibration. Further, the unified controller significantly reduced user metabolic cost and lower-limb positive work during level-ground and incline walking compared with walking without wearing the exoskeleton. This advancement bridges the gap between in-lab exoskeleton technology and real-world human ambulation, making exoskeleton control technology viable for a broad community.
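To make the control idea concrete, here is a minimal sketch (not the authors' TCN or gains): a single causal 1-D convolution stands in for the joint moment estimator, and the estimate is scaled into an assistance torque. The kernel, sensor stream, and gain are arbitrary placeholders.

```python
import numpy as np

def causal_conv1d(history, kernel):
    """Causal 1-D convolution: the output at time t only sees samples <= t."""
    pad = np.concatenate([np.zeros(len(kernel) - 1), history])
    return np.convolve(pad, kernel, mode="valid")

def assistance_torque(moment_estimate, gain=0.5):
    """Scale the estimated biological joint moment into an exoskeleton command."""
    return gain * moment_estimate

# Toy sensor stream (e.g., one hip-angle channel) and an arbitrary kernel.
rng = np.random.default_rng(0)
signal = rng.standard_normal(200)
kernel = rng.standard_normal(8)

moments = causal_conv1d(signal, kernel)  # one moment estimate per time step
torques = assistance_torque(moments)     # proportional assistance command
```

A real TCN stacks many such dilated causal convolutions with learned weights; the point here is only the causal, per-time-step structure that lets assistance track the gait continuously.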
Affiliation(s)
- Dean D Molinaro
  - George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
  - Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Inseung Kang
  - Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Aaron J Young
  - George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
  - Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, USA
2. Liu Y, Chen C, Wang Z, Tian Y, Wang S, Xiao Y, Yang F, Wu X. Continuous Locomotion Mode and Task Identification for an Assistive Exoskeleton Based on Neuromuscular-Mechanical Fusion. Bioengineering (Basel) 2024;11:150. PMID: 38391636. PMCID: PMC10886133. DOI: 10.3390/bioengineering11020150.
Abstract
Human walking parameters exhibit significant variability depending on terrain, speed, and load. Current assistive exoskeletons focus on recognizing the locomotion terrain while ignoring the locomotion task, which is equally essential for control strategies. The aim of this study was to develop an interface for locomotion mode and task identification based on a neuromuscular-mechanical fusion algorithm. The modes of level and incline walking and the tasks of varying speed and load were explored with seven able-bodied participants. A continuous stream of assistive decisions supporting timely exoskeleton control was achieved from the classification of locomotion. We investigated the optimal algorithm, feature set, window increment, window length, and robustness for precise identification and for synchronization between the exoskeleton assistive force and human limb movements (human-machine collaboration). The best recognition results were obtained with a support vector machine, a root mean square/waveform length/acceleration feature set, a window length of 170, and a window increment of 20. The average identification accuracy reached 98.7% ± 1.3%. These results suggest that fused surface electromyography and acceleration signals can be used effectively for locomotion mode and task identification. This study contributes to the development of locomotion mode and task recognition and to exoskeleton control for seamless transitions.
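The windowed feature extraction described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: it computes the root mean square (RMS) and waveform length (WL) features over sliding windows of length 170 with an increment of 20 (units as in the abstract, assumed to be samples) for a single sEMG channel; the classifier (an SVM in the paper) and the acceleration channels are omitted.

```python
import numpy as np

def sliding_features(x, win=170, step=20):
    """Per-window RMS and waveform length (WL) features for one signal channel."""
    feats = []
    for start in range(0, len(x) - win + 1, step):
        w = x[start:start + win]
        rms = np.sqrt(np.mean(w ** 2))        # signal energy in the window
        wl = np.sum(np.abs(np.diff(w)))       # cumulative sample-to-sample variation
        feats.append((rms, wl))
    return np.asarray(feats)

rng = np.random.default_rng(1)
emg = rng.standard_normal(1000)               # toy sEMG stream
F = sliding_features(emg)                     # feature matrix, one row per window
```

In the full system, each row of the feature matrix (augmented with acceleration features) would be fed to the trained SVM, producing a continuous stream of mode/task decisions every window increment.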
Affiliation(s)
- Yao Liu
  - Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Chunjie Chen
  - Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Zhuo Wang
  - Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yongtang Tian
  - Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Sheng Wang
  - Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yang Xiao
  - Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Fangliang Yang
  - Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xinyu Wu
  - Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - Guangdong Provincial Key Lab of Robotics and Intelligent System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
3. Hybart R, Ferris D. Gait variability of outdoor vs treadmill walking with bilateral robotic ankle exoskeletons under proportional myoelectric control. PLoS One 2023;18:e0294241. PMID: 37956157. PMCID: PMC10642814. DOI: 10.1371/journal.pone.0294241.
Abstract
Lower-limb robotic exoskeletons are often studied in the context of steady-state treadmill walking in laboratory environments, but the end goal of these devices is adoption into everyday life. To move outside of the laboratory, exoskeletons need to be studied in real-world, complex environments. One way to study the human-machine interaction is to examine how the exoskeleton affects the user's gait. In this study, we assessed changes in gait spatiotemporal variability when using robotic ankle exoskeletons under proportional myoelectric control, both inside on a treadmill and outside overground. We hypothesized that walking with the exoskeletons would not significantly change variability either inside on a treadmill or outside, and that walking outside would yield higher variability than treadmill walking both with and without the exoskeletons. In support of the second hypothesis, we found significantly higher coefficients of variation of stride length, stance time, and swing time when walking outside, both with and without the exoskeleton. Contrary to the first hypothesis, exoskeleton use significantly increased variability inside on the treadmill, although it did not significantly increase variability outside overground. The value of this study is that it emphasizes the importance of studying exoskeletons in the environment in which they are meant to be used: had we examined only indoor spatiotemporal measures, we might have concluded that the exoskeletons increase variability, which could be unsafe for certain target populations. In the context of the literature, we show that robotic ankle exoskeletons under proportional myoelectric control do not elicit larger changes in stride time variability than previously reported for other daily living tasks (uneven terrain, load carriage, or cognitive tasks).
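The variability measure used in this study, the coefficient of variation (CV), normalizes the spread of a stride parameter by its mean so that strides of different lengths or durations are comparable. A minimal sketch, with made-up stride lengths for illustration:

```python
import numpy as np

def coefficient_of_variation(samples):
    """CV (%) = 100 * sample std / mean: a scale-free measure of
    stride-to-stride variability."""
    samples = np.asarray(samples, dtype=float)
    return 100.0 * samples.std(ddof=1) / samples.mean()

# Hypothetical stride lengths (m) from a few consecutive strides.
stride_lengths_m = [1.38, 1.41, 1.36, 1.44, 1.39, 1.40]
cv = coefficient_of_variation(stride_lengths_m)  # about 2%
```

The same computation applies to stance time and swing time; comparing the CV across conditions (treadmill vs. outdoors, with vs. without the exoskeleton) is the core of the analysis described above.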
Affiliation(s)
- Rachel Hybart
  - J. Crayton Pruitt Department of Biomedical Engineering, University of Florida, Gainesville, Florida, United States of America
- Daniel Ferris
  - J. Crayton Pruitt Department of Biomedical Engineering, University of Florida, Gainesville, Florida, United States of America
4. Zhao S, Yu Z, Wang Z, Liu H, Zhou Z, Ruan L, Wang Q. A Learning-Free Method for Locomotion Mode Prediction by Terrain Reconstruction and Visual-Inertial Odometry. IEEE Trans Neural Syst Rehabil Eng 2023;31:3895-3905. PMID: 37782585. DOI: 10.1109/tnsre.2023.3321077.
Abstract
This research introduces a precise, learning-free approach to locomotion mode prediction with potential for broad application in lower-limb wearable robotics. The study is the first to combine 3D terrain reconstruction and Visual-Inertial Odometry (VIO) into a locomotion mode prediction method, yielding robust performance across diverse subjects and terrains and resilience to camera view, walking direction, step size, and disturbances from moving obstacles, all without parameter adjustments. The proposed Depth-enhanced Visual-Inertial Odometry (D-VIO) is designed to operate within the computational constraints of wearable configurations while remaining robust to unpredictable human movements and sparse features. Its accuracy and runtime are validated on an open-source dataset and in closed-loop evaluations. Comprehensive experiments verified prediction accuracy across subjects, scenarios, sensor mounting positions, camera views, step sizes, walking directions, and disturbances from moving obstacles; an overall prediction accuracy of 99.00% confirms the efficacy, generality, and robustness of the proposed method.
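To illustrate why a reconstructed terrain profile enables learning-free mode prediction, the sketch below classifies the terrain ahead from a ground-height profile using simple slope and step-height thresholds. This is a hypothetical heuristic for illustration only, not the paper's D-VIO pipeline; the thresholds and sampling spacing are invented.

```python
import numpy as np

def predict_mode(heights, dx=0.05, slope_thresh=0.15, step_thresh=0.08):
    """Classify upcoming terrain from a reconstructed ground-height profile.

    heights: ground elevation (m) sampled every dx metres ahead of the foot.
    Returns 'stairs' on an abrupt rise, 'ramp' on a steady grade, else 'level'.
    """
    dh = np.diff(heights)
    if np.max(np.abs(dh)) > step_thresh:               # abrupt rise: stair edge
        return "stairs"
    slope = (heights[-1] - heights[0]) / (dx * (len(heights) - 1))
    if abs(slope) > slope_thresh:                       # steady grade: ramp
        return "ramp"
    return "level"

level = predict_mode(np.zeros(20))                      # flat profile
ramp = predict_mode(np.linspace(0.0, 0.3, 20))          # 0.3 m rise over ~1 m
stairs = predict_mode(np.repeat([0.0, 0.17], 10))       # one 17 cm riser
```

Because the rule operates directly on reconstructed geometry, no training data are needed; the robustness claims in the abstract come from making the reconstruction itself (via D-VIO) accurate under real-world motion.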
5. Choi S, Ko C, Kong K. Walking-Speed-Adaptive Gait Phase Estimation for Wearable Robots. Sensors (Basel) 2023;23:8276. PMID: 37837106. PMCID: PMC10575403. DOI: 10.3390/s23198276.
Abstract
This paper introduces a Gait Phase Estimation Module (GPEM) and its real-time algorithm designed to estimate gait phases continuously and monotonically across a range of walking speeds and accelerations/decelerations. To address the challenges of real-world applications, we propose a speed-adaptive online gait phase estimation algorithm, which enables precise estimation of gait phases during both constant speed locomotion and dynamic speed changes. Experimental verification demonstrates that the proposed method offers smooth, continuous, and repetitive gait phase estimation when compared to conventional approaches such as the phase portrait method and time-based estimation. The proposed method achieved a 48% reduction in gait phase deviation compared to time-based estimation and a 48.29% reduction compared to the phase portrait method. The proposed algorithm is integrated within the GPEM, allowing for its versatile application in controlling gait assistive robots without incurring additional computational burden. The results of this study contribute to the development of robust and efficient gait phase estimation techniques for various robotic applications.
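A bare-bones version of speed-adaptive phase estimation can be sketched as follows (an illustrative stand-in, not the GPEM algorithm): phase is the elapsed time in the current stride normalized by a running estimate of stride duration, so the estimate speeds up or slows down with the walker. The window of three recent strides is an arbitrary choice.

```python
import numpy as np

def gait_phase(t, last_heel_strike, stride_times):
    """Phase in [0, 1): elapsed time in the current stride, normalized by a
    running estimate of stride duration so the output adapts to walking speed."""
    expected = np.mean(stride_times[-3:])   # adapt using the most recent strides
    return min((t - last_heel_strike) / expected, 0.999)

# The walker sped up: stride durations shrank from 1.2 s to 1.0 s.
history = [1.2, 1.1, 1.0, 1.0, 1.0]
phase = gait_phase(t=10.5, last_heel_strike=10.0, stride_times=history)  # 0.5
```

Methods like the one in this paper go further by updating the phase rate continuously within a stride, which is what keeps the estimate monotonic during accelerations and decelerations rather than jumping at each heel strike.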
Affiliation(s)
- Kyoungchul Kong
  - Department of Mechanical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea; (S.C.); (C.K.)
6. Kim M, Hargrove LJ. Generating synthetic gait patterns based on benchmark datasets for controlling prosthetic legs. J Neuroeng Rehabil 2023;20:115. PMID: 37667313. PMCID: PMC10476332. DOI: 10.1186/s12984-023-01232-6.
Abstract
BACKGROUND Prosthetic legs help individuals with an amputation regain locomotion. Recently, deep neural network (DNN)-based control methods, which take advantage of the end-to-end learning capability of the network, have been proposed. One prominent challenge for these learning-based approaches is obtaining training data, particularly for a mid-level controller. In this study, we propose a method for generating synthetic gait patterns (vertical load and lower-limb joint angles) using a generative adversarial network (GAN). This approach enables a mid-level controller to execute ambulation modes that are not included in the training datasets. METHODS The conditional GAN is trained on benchmark datasets containing gait data of individuals without amputation; synthetic gait patterns are generated from user input. A DNN-based controller for generating impedance parameters is then trained using the synthetic gait patterns and the corresponding synthetic stiffness and damping coefficients. RESULTS The trained GAN generated synthetic gait patterns with a coefficient of determination of 0.97 and a structural similarity index of 0.94 relative to benchmark data not included in the training datasets. We trained a DNN-based controller using the GAN-generated synthetic gait patterns for level-ground walking, standing-to-sitting motion, and sitting-to-standing motion. Four individuals without amputation participated in bypass testing and demonstrated the ambulation modes. The model successfully generated control parameters for the knee and ankle based on thigh angle and vertical load. CONCLUSIONS This study demonstrates that synthetic gait patterns can be used to train DNN models for impedance control. We believe a conditional GAN trained on benchmark datasets can provide reliable gait data for ambulation modes that are not included in its training datasets; thus, designing gait data with a conditional GAN could facilitate efficient and effective training of controllers for prosthetic legs.
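The coefficient of determination used above to score synthetic against benchmark trajectories can be computed directly. The sketch below applies it to a toy knee-angle trajectory with added noise; the trajectory shape and noise level are invented for illustration.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination between reference and synthetic signals:
    1 minus residual sum of squares over total sum of squares."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

t = np.linspace(0, 1, 100)
knee_ref = 60 * np.sin(np.pi * t) ** 2       # toy knee-angle trajectory (deg)
knee_syn = knee_ref + np.random.default_rng(2).normal(0, 2, 100)  # "GAN" output
r2 = r_squared(knee_ref, knee_syn)
```

An R² near 1 means the synthetic gait pattern reproduces nearly all of the variance of the benchmark trajectory, which is the basis of the 0.97 figure reported in the abstract.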
Affiliation(s)
- Minjae Kim
  - Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA
  - Regenstein Center for Bionic Medicine, Shirley Ryan AbilityLab, Chicago, IL, USA
- Levi J Hargrove
  - Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL, USA
  - Regenstein Center for Bionic Medicine, Shirley Ryan AbilityLab, Chicago, IL, USA
7. Kim M, Simon AM, Shah K, Hargrove LJ. Machine Learning-Based Gait Mode Prediction for Hybrid Knee Prosthesis Control. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-6. PMID: 38083529. DOI: 10.1109/embc40787.2023.10340388.
Abstract
Recently, hybrid prosthetic knees, which combine the advantages of passive and active prosthetic knees, have been proposed for individuals with a transfemoral amputation. Users can exploit passive knee mechanics during walking and active power generation during stair ascent. One challenge in controlling hybrid knees is accurate gait mode prediction for seamless transitions between passive and active modes; however, data imbalance between the modes may degrade classifier performance. In this study, we used a dataset collected from nine individuals with a unilateral transfemoral amputation as they ambulated over level ground, inclines, and stairs. We evaluated several machine learning-based classifiers on the prediction of the passive modes (level-ground walking, incline walking, descending stairs, and donning and doffing the prosthesis) and the active mode (ascending stairs). In addition, we developed a generative adversarial network (GAN) to create synthetic data for improving classification performance. The results indicated that linear discriminant analysis and random forest were the strongest classifiers in terms of sensitivity to the active mode and overall accuracy, respectively. Further, we demonstrated that training with the GAN-based synthetic data improves classifier sensitivity.
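Why sensitivity matters more than raw accuracy here: with a rare active mode, a classifier can score high overall accuracy while missing most stair-ascent strides. The sketch below computes per-class sensitivity (recall) from a confusion matrix; the counts are invented to mimic the imbalance described above, with the last row as the rare active mode.

```python
import numpy as np

def per_class_sensitivity(conf):
    """Sensitivity (recall) per class: true positives over all actual positives.
    Rows of `conf` are true classes, columns are predicted classes."""
    conf = np.asarray(conf, dtype=float)
    return np.diag(conf) / conf.sum(axis=1)

# Hypothetical counts: rows 0-3 are passive modes, row 4 is stair ascent (active).
conf = np.array([
    [950,  10,   5,   0,   5],
    [ 12, 480,   8,   0,   0],
    [  6,   4, 310,   0,   0],
    [  1,   0,   0,  99,   0],
    [  3,   0,   0,   0,  57],
])
sens = per_class_sensitivity(conf)   # last entry: active-mode sensitivity
```

Reporting the active-mode entry of this vector alongside overall accuracy is what separates the two "best" classifiers in the abstract, and it is the figure that GAN-based data augmentation targets.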
8. Medrano RL, Thomas GC, Keais CG, Rouse EJ, Gregg RD. Real-Time Gait Phase and Task Estimation for Controlling a Powered Ankle Exoskeleton on Extremely Uneven Terrain. IEEE Trans Robot 2023;39:2170-2182. PMID: 37304231. PMCID: PMC10249462. DOI: 10.1109/tro.2023.3235584.
Abstract
Positive biomechanical outcomes have been reported with lower-limb exoskeletons in laboratory settings, but these devices have difficulty delivering appropriate assistance in synchrony with human gait as the task or the rate of phase progression changes in real-world environments. This paper presents a controller for an ankle exoskeleton that uses a data-driven kinematic model to continuously estimate the phase, phase rate, stride length, and ground incline during locomotion, enabling real-time adaptation of torque assistance to match human torques observed in a multi-activity database of 10 able-bodied subjects. In live experiments with a new cohort of 10 able-bodied participants, the controller yielded phase estimates comparable to the state of the art while estimating task variables with accuracy similar to recent machine learning approaches. The implemented controller successfully adapted its assistance in response to changing phase and task variables, both during controlled treadmill trials (N = 10, phase RMSE: 4.8 ± 2.4%) and during a real-world stress test on extremely uneven terrain (N = 1, phase RMSE: 4.8 ± 2.7%).
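A subtlety behind the phase RMSE figures above: gait phase is circular, so an estimate of 99% against a ground truth of 1% is a 2% error, not 98%. A minimal sketch of a wrap-aware RMSE (an illustrative implementation, not necessarily the authors' exact metric):

```python
import numpy as np

def phase_rmse(est, true):
    """RMSE between phase estimates in percent of stride, wrapping each error
    into (-50, 50] so disagreements across the 0/100% boundary stay small."""
    err = (np.asarray(est) - np.asarray(true) + 50.0) % 100.0 - 50.0
    return np.sqrt(np.mean(err ** 2))

# Errors of +2, -2, +2 percent, two of them straddling the stride boundary.
rmse = phase_rmse([1.0, 99.0, 52.0], [99.0, 1.0, 50.0])  # 2.0
```

Without the wrap, the first two samples would register as ~98% errors and dominate the metric, misrepresenting a controller that is actually tracking the stride boundary closely.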
Affiliation(s)
- Connor G Keais
  - Department of Robotics, University of Michigan, Ann Arbor, MI 48109
- Elliott J Rouse
  - Department of Mechanical Engineering; Department of Robotics, University of Michigan, Ann Arbor, MI 48109
- Robert D Gregg
  - Department of Robotics, University of Michigan, Ann Arbor, MI 48109
9. Chen C, Zhang K, Leng Y, Chen X, Fu C. Unsupervised Sim-to-Real Adaptation for Environmental Recognition in Assistive Walking. IEEE Trans Neural Syst Rehabil Eng 2022;30:1350-1360. PMID: 35584064. DOI: 10.1109/tnsre.2022.3176410.
Abstract
Powered lower-limb prostheses with vision sensors are expected to restore amputees' mobility in various environments using supervised learning-based environmental recognition. Due to the sim-to-real gap, including real-world unstructured terrains and the perspective and performance limitations of the vision sensor, simulated data alone cannot meet the requirements of supervised learning. To mitigate this gap, this paper presents an unsupervised sim-to-real adaptation method to accurately classify five common real-world terrains (level ground, stair ascent, stair descent, ramp ascent, and ramp descent) and assist amputees' terrain-adaptive locomotion. Augmented simulated environments are generated from a virtual camera perspective to better approximate the real world. Unsupervised domain adaptation is then incorporated: the proposed adaptation network, consisting of a feature extractor and two classifiers, is trained on simulated data and unlabeled real-world data to minimize the domain shift between the source domain (simulation) and the target domain (real world). To interpret the classification mechanism visually, essential features of different terrains extracted by the network are visualized. Classification results in walking experiments indicate that the average accuracy across eight subjects reaches 98.06% ± 0.71% indoors and 95.91% ± 1.09% outdoors, close to the result of supervised learning using both types of labeled data (98.37% and 97.05%). These promising results suggest that the proposed method can achieve accurate real-world environmental classification and successful sim-to-real transfer.
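To make the notion of "domain shift" concrete, the sketch below measures a simple linear maximum mean discrepancy (MMD) between simulated and real feature distributions: the squared distance between the two domains' mean feature vectors. Note this is a swapped-in illustrative measure; the paper minimizes shift via classifier discrepancy, not MMD, and the feature statistics here are invented.

```python
import numpy as np

def linear_mmd(source_feats, target_feats):
    """Linear maximum mean discrepancy: squared distance between the mean
    feature vectors of two domains (0 when the domain means align)."""
    gap = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(gap @ gap)

rng = np.random.default_rng(3)
sim = rng.normal(0.0, 1.0, size=(200, 16))    # simulated-domain features
real = rng.normal(0.5, 1.0, size=(200, 16))   # shifted real-world features

gap_before = linear_mmd(sim, real)            # large: domains disagree
gap_aligned = linear_mmd(sim, real - real.mean(axis=0) + sim.mean(axis=0))
```

Driving such a discrepancy toward zero while keeping the simulated-data classification loss low is the general recipe behind unsupervised sim-to-real adaptation, whichever specific discrepancy measure is used.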