1. Godoy RV, Guan B, Dwivedi A, Owen M, Liarokapis M. A Video Dataset of Everyday Life Grasps for the Training of Shared Control Operation Models for Myoelectric Prosthetic Hands. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-5. PMID: 40039393; DOI: 10.1109/embc53108.2024.10782638.
Abstract
The control of high-degree-of-freedom prosthetic hands requires considerable cognitive effort and becomes harder to manage as prosthetic hands grow more advanced. To reduce control complexity, vision-based shared control methods have been proposed. Such methods rely on image processing and/or machine learning to identify object features and classes, introducing a level of autonomy into the grasping process. However, currently available image datasets pay little attention to prosthetic grasping. In this paper, we therefore present a new dataset capturing user interactions with a wide variety of everyday objects using a fully actuated, human-like robot hand and an onboard camera. The dataset includes videos of over 100 grasps of 50 objects from 35 classes. A video analysis was conducted using established grasp taxonomies to compile a list of grasp types based on the user's interactions with the objects. The resulting dataset can be used to develop more efficient prosthetic hand operation systems based on shared control frameworks.

2. Mora MC, García-Ortiz JV, Cerdá-Boluda J. sEMG-Based Robust Recognition of Grasping Postures with a Machine Learning Approach for Low-Cost Hand Control. Sensors (Basel) 2024; 24:2063. PMID: 38610275; PMCID: PMC11013908; DOI: 10.3390/s24072063.
Abstract
The design and control of artificial hands remains a challenge in engineering. Popular prostheses are biomechanically simple, with restricted manipulation capabilities, while advanced devices are expensive or are abandoned because communicating with the hand is difficult. For social robots, interpreting human intention is key to their integration into daily life. This can be achieved with machine learning (ML) algorithms, which have rarely been used for grasping posture recognition. This work proposes an ML approach to recognize, in real time, nine hand postures representing 90% of the activities of daily living, using an sEMG human-robot interface (HRI). Data from 20 subjects wearing a Myo armband (8 sEMG signals) were gathered from the NinaPro DB5 dataset and from experimental tests with the YCB Object Set, and were used jointly to develop a simple multilayer perceptron in MATLAB, achieving a global success rate of 73% using only two features. GPU-based implementations were run to select the best architecture, with generalization capability, robustness against electrode shift, low memory expense, and real-time performance. This architecture enables the implementation of grasping posture recognition in low-cost devices, aimed at the development of affordable functional prostheses and HRIs for social robots.
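
Illustrative sketch (not from the paper): a minimal Python version of the kind of two-feature, eight-channel sEMG posture classifier this abstract describes. The choice of mean absolute value and waveform length as the two features, the hidden-layer size, and the synthetic data are assumptions for illustration only; the authors' implementation was built in MATLAB.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def emg_features(window):
    """Two common time-domain features per channel: mean absolute
    value (MAV) and waveform length (WL). window: (samples, 8)."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, wl])  # 16-dim feature vector

# Hypothetical data: sEMG windows (200 samples x 8 channels) and
# posture labels 0..8 standing in for the nine recognized postures.
rng = np.random.default_rng(0)
X_raw = [rng.standard_normal((200, 8)) for _ in range(90)]
y = np.arange(90) % 9

X = np.array([emg_features(w) for w in X_raw])
clf = MLPClassifier(hidden_layer_sizes=(40,), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```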
Affiliation(s)
- Marta C. Mora
- Department of Mechanical Engineering and Construction, Universitat Jaume I, Avda de Vicent Sos Baynat s/n, 12071 Castelló de la Plana, Spain
- José V. García-Ortiz
- Department of Mechanical Engineering and Construction, Universitat Jaume I, Avda de Vicent Sos Baynat s/n, 12071 Castelló de la Plana, Spain
- Joaquín Cerdá-Boluda
- Instituto de Instrumentación para Imagen Molecular (I3M), Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain

3. Yang B, Shi C, Liu Z, Hu Y, Cheng M, Jiang L. Fingertip Proximity-Based Grasping Pattern Prediction of Transradial Myoelectric Prosthesis. IEEE Trans Neural Syst Rehabil Eng 2023; 31:1483-1491. PMID: 37027667; DOI: 10.1109/tnsre.2023.3247580.
Abstract
For transradial amputees, especially those with insufficient residual muscle activity, it is challenging to quickly obtain an appropriate grasping pattern for a multigrasp prosthesis. To address this problem, this study proposed a fingertip proximity sensor and a grasping pattern prediction method based on it. Rather than relying exclusively on the subject's EMG for grasping pattern recognition, the proposed method uses fingertip proximity sensing to predict the appropriate grasping pattern automatically. We established a five-fingertip proximity training dataset for five common classes of grasping patterns (spherical grip, cylindrical grip, tripod pinch, lateral pinch, and hook). A neural network-based classifier was proposed and achieved high accuracy (96%) on the training dataset. We assessed the combined proximity/EMG-based method (PS-EMG) on six non-disabled subjects and one transradial amputee performing "reach-and-pick-up" tasks on novel objects, comparing its performance with that of typical pure-EMG methods. Results indicated that non-disabled subjects could reach the object and initiate prosthesis grasping with the desired grasping pattern within 1.93 s on average, completing the tasks 7.30% faster on average with the PS-EMG method than with the pattern recognition-based EMG method. The amputee subject was, on average, 25.58% faster in completing tasks with the proposed PS-EMG method than with the switch-based EMG method. The results showed that the proposed method allows the user to obtain the desired grasping pattern quickly and reduces the requirement for EMG sources.
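
Illustrative sketch (not from the paper): the proximity-based prediction idea reduces to feeding five fingertip proximity readings into a small neural network that outputs one of the five grasp classes. Only the five-input/five-class structure comes from the abstract; the network shape and the synthetic training data below are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

GRASPS = ["spherical", "cylindrical", "tripod", "lateral", "hook"]

# Hypothetical training data: each row holds the five fingertip
# proximity readings (thumb..little); each label is a grasp class.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(500, 5))     # normalized proximity values
y = rng.integers(0, len(GRASPS), size=500)   # placeholder labels

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=1)
net.fit(X, y)

reading = np.array([[0.9, 0.8, 0.7, 0.2, 0.1]])  # a single new reading
print("predicted grasp:", GRASPS[int(net.predict(reading)[0])])
```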

4. Flower XL, Poonguzhali S. Performance improvement and complexity reduction in the classification of EMG signals with mRMR-based CNN-KNN combined model. J Intell Fuzzy Syst 2022. DOI: 10.3233/jifs-220811.
Abstract
For real-time applications, performance in classifying movement should be as high as possible while computational complexity stays low. This paper focuses on the classification of five upper-arm movements that can serve as controls for human-machine interface (HMI)-based applications. Conventional machine learning algorithms are used for classification with both time- and frequency-domain features, and the k-nearest neighbor (KNN) classifier outperforms the others. To further improve classification accuracy, pretrained CNN architectures are employed, which increases computational complexity and memory requirements. To overcome this, a deep convolutional neural network (CNN) model with three convolutional layers is introduced. To push performance further, which is the key requirement for real-time applications, a hybrid CNN-KNN model is proposed. Although its performance is high, the computational cost of the hybrid method is greater, so minimum redundancy maximum relevance (mRMR), a feature selection method, is applied to reduce the feature dimensionality. As a result, the proposed CNN-KNN method with mRMR achieves better performance with reduced computational complexity and memory requirements, reaching a mean prediction accuracy of about 99.05±0.25% with 100 features.
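
Illustrative sketch (not from the paper): the pipeline described here (deep features, mRMR selection down to 100 features, then KNN) could look roughly as follows. The greedy mutual-information/correlation formulation of mRMR and the random stand-in for the CNN features are assumptions made to keep the sketch self-contained.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier

def mrmr_select(X, y, k):
    """Greedy mRMR: maximize relevance (MI with the label) minus
    redundancy (mean absolute correlation with chosen features)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        candidates = [j for j in range(X.shape[1]) if j not in selected]
        scores = [relevance[j] - corr[j, selected].mean() for j in candidates]
        selected.append(candidates[int(np.argmax(scores))])
    return selected

# Stand-in for CNN features: in the paper these would come from the
# three-layer CNN; random data keeps the sketch runnable.
rng = np.random.default_rng(2)
X = rng.standard_normal((300, 512))
y = rng.integers(0, 5, size=300)  # five upper-arm movements

idx = mrmr_select(X, y, k=100)    # keep 100 features, as in the paper
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:, idx], y)
print("accuracy on training data:", knn.score(X[:, idx], y))
```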
Affiliation(s)
- X. Little Flower
- Department of Electronics and Communication Engineering, College of Engineering Guindy (CEG), Anna University, Chennai, India
- S. Poonguzhali
- Department of Electronics and Communication Engineering, College of Engineering Guindy (CEG), Anna University, Chennai, India

5. Mouchoux J, Carisi S, Dosen S, Farina D, Schilling AF, Markovic M. Artificial Perception and Semiautonomous Control in Myoelectric Hand Prostheses Increases Performance and Decreases Effort. IEEE Trans Robot 2021. DOI: 10.1109/tro.2020.3047013.

6. Bettoni MC, Castellini C. Interaction in Assistive Robotics: A Radical Constructivist Design Framework. Front Neurorobot 2021; 15:675657. PMID: 34177510; PMCID: PMC8221426; DOI: 10.3389/fnbot.2021.675657.
Abstract
Despite decades of research, muscle-based control of assistive devices (myocontrol) is still unreliable; for instance, upper-limb prostheses, each year more dexterous and human-like, still provide hardly enough functionality to justify their cost and the effort required to use them. To close this gap, we propose shifting the goal of myocontrol from guessing intended movements to creating new circular reactions in the constructivist sense defined by Piaget. To this end, the myocontrol system must be able to acquire new knowledge and forget past knowledge, and knowledge acquisition/forgetting must happen on demand, requested either by the user or by the system itself. We propose a unifying framework based upon Radical Constructivism for the design of such a myocontrol system, including its user interface and user-device interaction strategy.
Affiliation(s)
- Marco C Bettoni
- Steinbeis Consulting Centre, Knowledge Management and Collaboration (KMC), Basel, Switzerland
- Claudio Castellini
- The Adaptive Bio-Interfaces Group, German Aerospace Centre (DLR), Institute of Robotics and Mechatronics, Oberpfaffenhofen, Germany

7. Xiong J, Chen J, Lee PS. Functional Fibers and Fabrics for Soft Robotics, Wearables, and Human-Robot Interface. Adv Mater 2021; 33:e2002640. PMID: 33025662; PMCID: PMC11468729; DOI: 10.1002/adma.202002640.
Abstract
Soft robots inspired by the movement of living organisms, with excellent adaptability and accuracy for accomplishing tasks, are highly desirable for efficient operation and safe interaction with humans. With the emergence of wearable electronics, greater tactility and skin affinity are pursued for safe and user-friendly human-robot interactions. Fabrics interlocked by fibers perform traditional static functions such as warming, protection, and fashion. Recently, dynamic fibers and fabrics have become attractive for delivering active stimulus responses, such as sensing and actuation, for soft robots and wearables. First, the responsive mechanisms of fiber/fabric actuators and their performance under various external stimuli are reviewed. Fiber/yarn-based artificial muscles for soft-robot manipulation and assistance in human motion are discussed, as well as smart clothes for improving human perception. Second, the geometric designs, fabrications, mechanisms, and functions of fibers/fabrics for sensing and energy harvesting from the human body and the environment are summarized. Effective integration of electronic components with garments, human skin, and living organisms is illustrated, presenting multifunctional platforms with self-powered potential for human-robot interactions and biomedicine. Lastly, the relationships between robotic/wearable fibers/fabrics and external stimuli are discussed, together with the challenges and possible routes for revolutionizing robotic fibers/fabrics and wearables in this new era.
Affiliation(s)
- Jiaqing Xiong
- School of Materials Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Jian Chen
- School of Materials Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Pooi See Lee
- School of Materials Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore

8. Shi C, Yang D, Zhao J, Liu H. Computer Vision-Based Grasp Pattern Recognition With Application to Myoelectric Control of Dexterous Hand Prosthesis. IEEE Trans Neural Syst Rehabil Eng 2020; 28:2090-2099. PMID: 32746315; DOI: 10.1109/tnsre.2020.3007625.
Abstract
Artificial intelligence opens new possibilities for the control of dexterous prostheses. To achieve suitable grasps of various objects, a novel computer vision-based classification method that sorts objects into different grasp patterns is proposed in this paper. This method can be applied in the autonomous control of multi-fingered prosthetic hands, as it helps users rapidly complete "reach-and-pick-up" tasks on various daily objects with low demands on myoelectric control. First, an RGB-D image database (121 objects) was established according to four important grasp patterns (cylindrical, spherical, tripod, and lateral). The image samples in the RGB-D dataset were acquired for a large variety of daily objects of different sizes, shapes, and postures (16), as well as different illumination conditions (4) and camera positions (4). Then, different inputs and structures of the discrimination model (a multilayer CNN) were tested in terms of classification success rate through cross-validation. Our results showed that depth data play an important role in grasp pattern recognition. Bimodal data (Gray-D) integrating both grayscale and depth information about the objects effectively improved the classification accuracy obtained from RGB images (by more than 10%). Within the database, the network achieved classification with high accuracy (98%); it also showed strong generalization to novel samples (93.9 ± 3.0%). We finally applied the method to a dexterous prosthetic hand and tested the whole system on "reach-and-pick-up" tasks. The experiments showed that the proposed computer vision-based myoelectric control method (Vision-EMG) significantly improved control effectiveness (6.4 s) in comparison to the traditional coding-based myoelectric control method (Coding-EMG, 13 s).
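
Illustrative sketch (not from the paper): the bimodal Gray-D input amounts to stacking the grayscale and depth images as two channels of a single tensor before a multilayer CNN. The layer sizes and image resolution below are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GrayDNet(nn.Module):
    """Small CNN over bimodal Gray-D input: channel 0 = grayscale,
    channel 1 = depth. Layer sizes are illustrative only."""
    def __init__(self, n_classes=4):  # cylindrical, spherical, tripod, lateral
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, n_classes))

    def forward(self, x):
        return self.head(self.features(x))

gray = torch.rand(8, 1, 64, 64)      # grayscale images (batch of 8)
depth = torch.rand(8, 1, 64, 64)     # aligned depth maps
x = torch.cat([gray, depth], dim=1)  # stack into a 2-channel Gray-D tensor
logits = GrayDNet()(x)
print(logits.shape)                  # torch.Size([8, 4])
```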

9. Shehata AW, Engels LF, Controzzi M, Cipriani C, Scheme EJ, Sensinger JW. Improving internal model strength and performance of prosthetic hands using augmented feedback. J Neuroeng Rehabil 2018; 15:70. PMID: 30064477; PMCID: PMC6069837; DOI: 10.1186/s12984-018-0417-4.
Abstract
Background: The loss of an arm presents a substantial challenge for upper-limb amputees when performing activities of daily living. Myoelectric prosthetic devices partially replace lost hand functions; however, the lack of sensory feedback and of a strong understanding of the myoelectric control system prevents prosthesis users from interacting with their environment effectively. Although most research in augmented sensory feedback has focused on real-time regulation, sensory feedback is also essential for enabling the development and correction of internal models, which in turn are used for planning movements and reacting to control variability faster than otherwise possible in the presence of sensory delays.

Methods: Our recent work has demonstrated that audio-augmented feedback can improve both performance and internal model strength for an abstract target-acquisition task. Here we use this concept in controlling a robotic hand, which has inherent dynamics and variability, and apply it to a more functional grasp-and-lift task. We assessed internal model strength using psychophysical tests and used an instrumented Virtual Egg to assess performance.

Results: Results obtained from 14 able-bodied subjects show that a classifier-based controller augmented with audio feedback enabled a stronger internal model (p = 0.018) and better performance (p = 0.028) than a controller without this feedback.

Conclusions: We extended our previous work and accomplished the first steps on a path towards bridging the gap between research and clinical usability of a hand prosthesis. The main goal was to assess whether the ability to decouple internal model strength and motion variability using continuous audio-augmented feedback extends to real-world use, where the inherent mechanical variability and dynamics of the mechanisms may contribute to a more complicated interplay between internal model formation and motion variability. We concluded that the benefits of using audio-augmented feedback for improving the internal model strength of myoelectric controllers extend beyond a virtual target-acquisition task to the control of a prosthetic hand.
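
Illustrative sketch (not from the paper): the instrumented Virtual Egg performance test can be reduced to a simple grip-force rule, where the trial fails if grip force ever exceeds a break threshold and succeeds once the object is held firmly enough to lift. The thresholds below are invented for illustration and are not the study's calibration.

```python
def virtual_egg_trial(grip_forces, break_force=20.0, lift_force=5.0):
    """Simplified Virtual Egg logic over a sequence of grip-force
    samples (N); thresholds are assumed, not the study's values."""
    for f in grip_forces:
        if f > break_force:
            return "broken"      # squeezed too hard: trial failed
        if f >= lift_force:
            return "lifted"      # firm enough to lift: trial succeeded
    return "not lifted"

print(virtual_egg_trial([1.0, 3.0, 6.0]))  # -> "lifted"
print(virtual_egg_trial([1.0, 25.0]))      # -> "broken"
```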
Affiliation(s)
- Ahmed W Shehata
- Institute of Biomedical Engineering, University of New Brunswick, Fredericton, NB, E3B 5A3, Canada
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB, E3B 5A3, Canada
- Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Alberta, Edmonton, AB, T6G 2E1, Canada
- Leonard F Engels
- Scuola Superiore Sant'Anna, The BioRobotics Institute, V.le R. Piaggio 34, 56025, Pontedera, PI, Italy
- Marco Controzzi
- Scuola Superiore Sant'Anna, The BioRobotics Institute, V.le R. Piaggio 34, 56025, Pontedera, PI, Italy
- Christian Cipriani
- Scuola Superiore Sant'Anna, The BioRobotics Institute, V.le R. Piaggio 34, 56025, Pontedera, PI, Italy
- Erik J Scheme
- Institute of Biomedical Engineering, University of New Brunswick, Fredericton, NB, E3B 5A3, Canada
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB, E3B 5A3, Canada
- Jonathon W Sensinger
- Institute of Biomedical Engineering, University of New Brunswick, Fredericton, NB, E3B 5A3, Canada
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB, E3B 5A3, Canada

10. Batzianoulis I, Krausz NE, Simon AM, Hargrove L, Billard A. Decoding the grasping intention from electromyography during reaching motions. J Neuroeng Rehabil 2018; 15:57. PMID: 29940991; PMCID: PMC6020187; DOI: 10.1186/s12984-018-0396-5.
Abstract
Background: Active upper-limb prostheses are used to restore important hand functionalities, such as grasping. In conventional approaches, a pattern recognition system is trained on a number of static grasping gestures. However, training a classifier in a static position results in lower classification accuracy when performing dynamic motions, such as reach-to-grasp. We propose an electromyography-based learning approach that decodes the grasping intention during the reaching motion, leading to a faster and more natural response of the prosthesis.

Methods and results: Eight able-bodied subjects and four individuals with transradial amputation gave informed consent and participated in our study. All subjects performed reach-to-grasp motions for five grasp types while the electromyographic (EMG) activity and the extension of the arm were recorded. We separated the reach-to-grasp motion into three phases with respect to the extension of the arm. A multivariate analysis of variance (MANOVA) on the muscular activity revealed significant differences among the motion phases. Additionally, we examined the classification performance in these phases, comparing three pattern recognition methods: linear discriminant analysis (LDA), support vector machines (SVM) with linear and non-linear kernels, and an echo state network (ESN) approach. Our offline analysis shows that high classification performance, above 80%, is achievable before the end of the motion with three grasp types. An online evaluation with an upper-limb prosthesis shows that including the reaching motion in the training of the classifier substantially improves classification accuracy and enables the detection of grasp intention early in the reaching motion.

Conclusions: This method offers more natural and intuitive control of prosthetic devices, as it enables controlling grasp closure in synergy with the reaching motion. This work contributes to decreasing the delay between the user's intention and the device's response and improves the coordination of the device with the motion of the arm.
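
Illustrative sketch (not from the paper): one way to realize phase-dependent decoding is to segment the reach by arm extension and train a classifier per phase. The abstract does not specify whether the authors trained separate per-phase models, so the per-phase LDA below, the phase boundaries, and the synthetic data are all assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def phase_of(extension):
    """Map normalized arm extension (0 = start, 1 = object reached)
    to one of three motion phases; equal thirds are an assumption."""
    return 0 if extension < 1/3 else (1 if extension < 2/3 else 2)

rng = np.random.default_rng(3)
n = 600
X = rng.standard_normal((n, 16))   # EMG features per window (stand-in)
ext = rng.uniform(0, 1, size=n)    # arm extension at each window
y = rng.integers(0, 5, size=n)     # five grasp types (placeholder labels)

# One LDA classifier per motion phase
models = {}
for p in range(3):
    mask = np.array([phase_of(e) == p for e in ext])
    models[p] = LinearDiscriminantAnalysis().fit(X[mask], y[mask])

# At run time, pick the classifier matching the current phase
window, extension = X[0], ext[0]
pred = models[phase_of(extension)].predict(window.reshape(1, -1))
print("predicted grasp in phase", phase_of(extension), "->", int(pred[0]))
```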
Affiliation(s)
- Iason Batzianoulis
- Learning Algorithms and Systems Laboratory (LASA), School of Engineering, École Polytechnique Fédérale de Lausanne (EPFL), Route Cantonale, Lausanne, CH-1015, Switzerland
- Nili E. Krausz
- Center for Bionic Medicine, Shirley Ryan AbilityLab, E Erie St., Chicago, 60611, IL, USA
- Dept. of Physical Medicine and Rehabilitation, Northwestern University, N Lake Shore, Chicago, 60611, IL, USA
- Ann M. Simon
- Center for Bionic Medicine, Shirley Ryan AbilityLab, E Erie St., Chicago, 60611, IL, USA
- Dept. of Physical Medicine and Rehabilitation, Northwestern University, N Lake Shore, Chicago, 60611, IL, USA
- Levi Hargrove
- Center for Bionic Medicine, Shirley Ryan AbilityLab, E Erie St., Chicago, 60611, IL, USA
- Dept. of Physical Medicine and Rehabilitation, Northwestern University, N Lake Shore, Chicago, 60611, IL, USA
- Dept. of Biomedical Engineering, Northwestern University, Evanston, 60208, IL, USA
- Aude Billard
- Learning Algorithms and Systems Laboratory (LASA), School of Engineering, École Polytechnique Fédérale de Lausanne (EPFL), Route Cantonale, Lausanne, CH-1015, Switzerland

11. Shehata AW, Scheme EJ, Sensinger JW. Audible Feedback Improves Internal Model Strength and Performance of Myoelectric Prosthesis Control. Sci Rep 2018; 8:8541. PMID: 29867147; PMCID: PMC5986794; DOI: 10.1038/s41598-018-26810-w.
Abstract
Myoelectric prosthetic devices are commonly used to help upper-limb amputees perform activities of daily living; however, amputees still lack the sensory feedback required for reliable and precise control. Augmented feedback may play an important role in both short-term performance, through real-time regulation, and long-term performance, through the development of stronger internal models. In this work, we investigate the potential tradeoff between controllers that enable better short-term performance and those that provide sufficient feedback to develop a strong internal model. We hypothesize that augmented feedback may be used to mitigate this tradeoff, ultimately improving both short- and long-term control. We used psychometric measures to assess the internal model developed while using a filtered myoelectric controller with augmented audio feedback, imitating classification-based control but with augmented regression-based feedback. In addition, we evaluated short-term performance using a multi-degree-of-freedom, constrained-time target-acquisition task. Results obtained from 24 able-bodied subjects show that an augmented feedback control strategy using audio cues enables the development of a stronger internal model than filtered control with filtered feedback, and significantly better path efficiency than both raw and filtered control strategies. These results suggest that augmented feedback control strategies may improve both short-term and long-term performance.
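
Illustrative sketch (not from the paper): one simple way a regression-based control signal could be rendered as a continuous audio cue. The logarithmic pitch mapping and the frequency range are assumptions, not the study's protocol.

```python
import numpy as np

def audio_cue_hz(control_signal, f_min=200.0, f_max=2000.0):
    """Map a normalized regression-based control output (0..1) to the
    frequency of an audio cue; a log scale roughly matches pitch
    perception. Range and mapping are illustrative assumptions."""
    s = float(np.clip(control_signal, 0.0, 1.0))
    return f_min * (f_max / f_min) ** s

for s in (0.0, 0.5, 1.0):
    print(f"control {s:.1f} -> {audio_cue_hz(s):7.1f} Hz")
```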
Affiliation(s)
- Ahmed W Shehata
- University of New Brunswick, Electrical and Computer Engineering, Fredericton, E3B 5A3, Canada
- Institute of Biomedical Engineering, Fredericton, E3B 5A3, Canada
- Erik J Scheme
- University of New Brunswick, Electrical and Computer Engineering, Fredericton, E3B 5A3, Canada
- Institute of Biomedical Engineering, Fredericton, E3B 5A3, Canada
- Jonathon W Sensinger
- University of New Brunswick, Electrical and Computer Engineering, Fredericton, E3B 5A3, Canada
- Institute of Biomedical Engineering, Fredericton, E3B 5A3, Canada

12. Patel GK, Hahne JM, Castellini C, Farina D, Dosen S. Context-dependent adaptation improves robustness of myoelectric control for upper-limb prostheses. J Neural Eng 2017; 14:056016. PMID: 28691694; DOI: 10.1088/1741-2552/aa7e82.
Abstract
Objective: Dexterous upper-limb prostheses are available today to restore grasping, but effective and reliable feed-forward control is still missing. The aim of this work was to improve the robustness and reliability of myoelectric control by using context information from sensors embedded within the prosthesis.

Approach: We developed a context-driven myoelectric control scheme (cxMYO) that incorporates the inference of context information from proprioception (inertial measurement unit) and exteroception (force and grip aperture) sensors to modulate the outputs of myoelectric control. A realistic online evaluation of the cxMYO was then performed in able-bodied subjects using three functional tasks, during which the cxMYO was compared to purely machine-learning-based myoelectric control (MYO).

Main results: The results demonstrated that utilizing context information decreased the number of unwanted commands, improving performance (success rate and dropped objects) in all three functional tasks. Specifically, the median number of objects dropped per round with cxMYO was zero in all three tasks, and a significant increase in the number of successful transfers was seen in two of the three functional tasks. Additionally, the subjects reported a better user experience.

Significance: This is the first online evaluation of a method integrating information from multiple on-board prosthesis sensors to modulate the output of a machine-learning-based myoelectric controller. The proposed scheme is general and presents a simple, non-invasive, and cost-effective approach to improving the robustness of myoelectric control.
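
Illustrative sketch (not from the paper): the core of the cxMYO idea, using sensed context to veto implausible myoelectric commands, can be caricatured with a couple of rules. The thresholds and rules below are invented for illustration and do not reproduce the paper's inference scheme.

```python
from dataclasses import dataclass

@dataclass
class Context:
    grip_force: float   # from an embedded force sensor (N)
    aperture: float     # grip aperture (0 = closed, 1 = open)
    arm_moving: bool    # inferred from the IMU

def gate_command(emg_command: str, ctx: Context) -> str:
    """Suppress myoelectric commands that contradict the sensed
    context, in the spirit of cxMYO (rules are illustrative)."""
    if emg_command == "open" and ctx.grip_force > 1.0 and ctx.arm_moving:
        # Opening while holding an object and moving would drop it
        return "hold"
    if emg_command == "close" and ctx.aperture <= 0.0:
        return "hold"  # already fully closed; ignore spurious close
    return emg_command

print(gate_command("open", Context(grip_force=2.5, aperture=0.4, arm_moving=True)))
# -> "hold": a spurious open command is blocked mid-transport
```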
Affiliation(s)
- Gauravkumar K Patel
- Department of Trauma Surgery, Neurorehabilitation Systems Research Group, Orthopedics and Plastic Surgery, University Medical Center Göttingen, 37075 Göttingen, Germany

13. Ghazaei G, Alameer A, Degenaar P, Morgan G, Nazarpour K. Deep learning-based artificial vision for grasp classification in myoelectric hands. J Neural Eng 2017; 14:036025. PMID: 28467317; DOI: 10.1088/1741-2552/aa6802.
Abstract
Objective: Computer vision-based assistive technology solutions can revolutionise the quality of care for people with sensorimotor disorders. The goal of this work was to enable trans-radial amputees to use a simple, yet efficient, computer vision system to grasp and move common household objects with a two-channel myoelectric prosthetic hand.

Approach: We developed a deep learning-based artificial vision system to augment the grasp functionality of a commercial prosthesis. Our main conceptual novelty is that we classify objects with respect to the grasp pattern without explicitly identifying them or measuring their dimensions. A convolutional neural network (CNN) was trained with images of over 500 graspable objects; for each object, 72 images, at 5° intervals, were available. Objects were categorised into four grasp classes: pinch, tripod, palmar wrist neutral, and palmar wrist pronated. The CNN was first tuned and tested offline and then in real time with objects or object views that were not included in the training set.

Main results: The classification accuracy in the offline tests reached [Formula: see text] for the seen and [Formula: see text] for the novel objects, reflecting the generalisability of the grasp classification. We then implemented the proposed framework in real time on a standard laptop computer and achieved an overall score of [Formula: see text] in classifying a set of novel as well as seen but randomly rotated objects. Finally, the system was tested with two trans-radial amputee volunteers controlling an i-limb Ultra™ prosthetic hand and a Motion Control™ prosthetic wrist, augmented with a webcam. After training, subjects successfully picked up and moved the target objects with an overall success rate of up to [Formula: see text]. In addition, we show that with training, subjects' performance improved in terms of the time required to accomplish a block of 24 trials, despite a decreasing level of visual feedback.

Significance: The proposed design constitutes a substantial conceptual improvement for the control of multi-functional prosthetic hands. We show for the first time that deep learning-based computer vision systems can considerably enhance the grip functionality of myoelectric hands.
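
Illustrative sketch (not from the paper): at run time, a system like this must turn the CNN's four-class output into a preshape command. A plausible, assumed wrapper (the confidence threshold and fallback are design choices not described in the abstract) defers to manual myoelectric control when the network is unsure.

```python
import numpy as np

GRASPS = ["pinch", "tripod", "palmar wrist neutral", "palmar wrist pronated"]

def select_grasp(logits, threshold=0.6):
    """Turn CNN logits over the four grasp classes into a preshape
    command, falling back to manual myoelectric control when the
    network is not confident (threshold is an assumption)."""
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    best = int(np.argmax(probs))
    if probs[best] >= threshold:
        return GRASPS[best]
    return "manual EMG control"

print(select_grasp(np.array([2.4, 0.3, 0.1, -0.5])))  # -> "pinch"
print(select_grasp(np.array([0.2, 0.1, 0.0, 0.1])))   # -> fallback
```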
Affiliation(s)
- Ghazal Ghazaei
- School of Electrical and Electronic Engineering, Newcastle University, Newcastle-upon-Tyne NE1 7RU, United Kingdom