1
Reichert C, Klemm L, Mushunuri RV, Kalyani A, Schreiber S, Kuehn E, Azañón E. Discriminating Free Hand Movements Using Support Vector Machine and Recurrent Neural Network Algorithms. Sensors (Basel) 2022; 22:6101. [PMID: 36015862] [PMCID: PMC9412700] [DOI: 10.3390/s22166101]
Abstract
Decoding natural hand movements is of interest for human-computer interaction and may constitute a helpful tool in the diagnosis of motor diseases and rehabilitation monitoring. However, the accurate measurement of complex hand movements and the decoding of dynamic movement data remain challenging. Here, we introduce two algorithms, one based on support vector machine (SVM) classification combined with dynamic time warping, and the other based on a long short-term memory (LSTM) neural network, which were designed to discriminate small differences in defined sequences of hand movements. We recorded hand movement data from 17 younger and 17 older adults using an exoskeletal data glove while they were performing six different movement tasks. Accuracy rates in decoding the different movement types were similarly high for SVM and LSTM in across-subject classification, but, for within-subject classification, SVM outperformed LSTM. The SVM-based approach therefore appears particularly promising for the development of movement decoding tools, especially if the goal is to generalize across age groups, for example to detect specific motor disorders or track their progress over time.
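The pairing of dynamic time warping with an SVM is a standard recipe for variable-speed movement data: warping aligns two recordings that unfold at different rates before a distance-based classifier compares them. The paper's own code is not reproduced here; the following is only a minimal, illustrative sketch of the classic DTW distance on 1-D sensor traces, with all names invented for the example:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    D[i][j] holds the minimal cumulative cost of aligning a[:i] with b[:j];
    the three-way min allows one sequence to stretch relative to the other.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # a advances (stretch b)
                                 D[i][j - 1],      # b advances (stretch a)
                                 D[i - 1][j - 1])  # both advance (match)
    return D[n][m]
```

A glove trace recorded slowly and the same gesture performed quickly yield a small DTW distance even though their lengths differ, which is what makes the measure useful as an SVM kernel ingredient.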
Affiliation(s)
- Christoph Reichert
- Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118 Magdeburg, Germany
- Center for Behavioral Brain Sciences (CBBS), Universitaetsplatz 2, 39106 Magdeburg, Germany
- Forschungscampus STIMULATE, Otto-Hahn-Str. 2, 39106 Magdeburg, Germany
- Lisa Klemm
- Department of Neurology, University Medical Center, Leipziger Str. 44, 39120 Magdeburg, Germany
- Avinash Kalyani
- Institute for Cognitive Neurology and Dementia Research (IKND), Otto-von-Guericke University, Leipziger Str. 44, 39120 Magdeburg, Germany
- German Center for Neurodegenerative Diseases (DZNE), Leipziger Str. 44, 39120 Magdeburg, Germany
- Stefanie Schreiber
- Center for Behavioral Brain Sciences (CBBS), Universitaetsplatz 2, 39106 Magdeburg, Germany
- Department of Neurology, University Medical Center, Leipziger Str. 44, 39120 Magdeburg, Germany
- Esther Kuehn
- Center for Behavioral Brain Sciences (CBBS), Universitaetsplatz 2, 39106 Magdeburg, Germany
- Institute for Cognitive Neurology and Dementia Research (IKND), Otto-von-Guericke University, Leipziger Str. 44, 39120 Magdeburg, Germany
- German Center for Neurodegenerative Diseases (DZNE), Leipziger Str. 44, 39120 Magdeburg, Germany
- Hertie Institute for Clinical Brain Research (HIH), Otfried Mueller-Str. 27, 72076 Tuebingen, Germany
- Elena Azañón
- Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118 Magdeburg, Germany
- Center for Behavioral Brain Sciences (CBBS), Universitaetsplatz 2, 39106 Magdeburg, Germany
- Department of Neurology, University Medical Center, Leipziger Str. 44, 39120 Magdeburg, Germany
2
Hemeren P, Veto P, Thill S, Li C, Sun J. Kinematic-Based Classification of Social Gestures and Grasping by Humans and Machine Learning Techniques. Front Robot AI 2021; 8:699505. [PMID: 34746242] [PMCID: PMC8565478] [DOI: 10.3389/frobt.2021.699505]
Abstract
The affective motion of humans conveys messages that other humans perceive and understand without conventional linguistic processing. This ability to classify human movement into meaningful gestures or segments also plays a critical role in creating social interaction between humans and robots. In the research presented here, grasping and social gesture recognition by humans and four machine learning techniques (k-Nearest Neighbor, Locality-Sensitive Hashing Forest, Random Forest and Support Vector Machine) is assessed by using human classification data as a reference for evaluating the classification performance of machine learning techniques for thirty hand/arm gestures. The gestures are rated according to the extent of grasping motion in one task and the extent to which the same gestures are perceived as social in another task. The results indicate that humans rate the gestures differently depending on the task. The machine learning techniques provide a similar classification of the actions according to grasping kinematics and social quality. Furthermore, there is a strong association between gesture kinematics and judgments of both the grasping and the social quality of the hand/arm gestures. Our results support previous research on intention-from-movement understanding that demonstrates the reliance on kinematic information for perceiving the social aspects and intentions in different grasping actions as well as communicative point-light actions.
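Of the four techniques compared, k-Nearest Neighbor is the simplest to illustrate. A hedged sketch, assuming gestures have already been reduced to fixed-length kinematic feature vectors (the features, labels, and data below are invented for the example, not taken from the study):

```python
import math

def knn_classify(query, examples, k=3):
    """Classify a kinematic feature vector by majority vote among its
    k nearest labelled examples, using Euclidean distance."""
    dists = sorted(
        (math.dist(query, feats), label) for feats, label in examples
    )
    votes = {}
    for _, label in dists[:k]:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

Using human ratings as the reference labels, as in the paper, amounts to filling `examples` with human-annotated gestures and checking how often the machine vote agrees.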
Affiliation(s)
- Paul Hemeren
- School of Informatics, University of Skövde, Skövde, Sweden
- Peter Veto
- School of Informatics, University of Skövde, Skövde, Sweden
- Serge Thill
- School of Informatics, University of Skövde, Skövde, Sweden
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Cai Li
- Pin An Technology Co. Ltd., Shenzhen, China
3
Zou L, Ge C, Wang ZJ, Cretu E, Li X. Novel Tactile Sensor Technology and Smart Tactile Sensing Systems: A Review. Sensors (Basel) 2017; 17:E2653. [PMID: 29149080] [PMCID: PMC5713637] [DOI: 10.3390/s17112653]
Abstract
During the last decades, smart tactile sensing systems based on different sensing techniques have been developed due to their high potential in industry and biomedical engineering. However, smart tactile sensing technologies and systems are still in their infancy, as many technological and system issues remain unresolved and require strong interdisciplinary efforts to address them. This paper provides an overview of smart tactile sensing systems, with a focus on the signal processing technologies used to interpret the measured information from tactile sensors and/or sensors for other sensory modalities. Tactile sensing transduction principles, fabrication methods and structures are also discussed, along with their merits and demerits. Finally, the challenges that tactile sensing technology needs to overcome are highlighted.
Affiliation(s)
- Liang Zou
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada.
- Chang Ge
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada.
- Z Jane Wang
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada.
- Edmond Cretu
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada.
- Xiaoou Li
- College of Medical Instruments, Shanghai University of Medicine and Health Sciences, Shanghai 201318, China.
4
Bowen C, Alterovitz R. Asymptotically Optimal Motion Planning for Tasks Using Learned Virtual Landmarks. IEEE Robot Autom Lett 2016. [DOI: 10.1109/lra.2016.2530877]
5
Bowen C, Ye G, Alterovitz R. Asymptotically Optimal Motion Planning for Learned Tasks Using Time-Dependent Cost Maps. IEEE Trans Autom Sci Eng 2015; 12:171-182. [PMID: 26279642] [PMCID: PMC4535732] [DOI: 10.1109/tase.2014.2342718]
Abstract
In unstructured environments in people's homes and workspaces, robots executing a task may need to avoid obstacles while satisfying task motion constraints, e.g., keeping a plate of food level to avoid spills or properly orienting a finger to push a button. We introduce a sampling-based method for computing motion plans that are collision-free and minimize a cost metric that encodes task motion constraints. Our time-dependent cost metric, learned from a set of demonstrations, encodes features of a task's motion that are consistent across the demonstrations and, hence, are likely required to successfully execute the task. Our sampling-based motion planner uses the learned cost metric to compute plans that simultaneously avoid obstacles and satisfy task constraints. The motion planner is asymptotically optimal and minimizes the Mahalanobis distance between the planned trajectory and the distribution of demonstrations in a feature space parameterized by the locations of task-relevant objects. The motion planner also leverages the distribution of the demonstrations to significantly reduce plan computation time. We demonstrate the method's effectiveness and speed using a small humanoid robot performing tasks requiring both obstacle avoidance and satisfaction of learned task constraints.

Note to Practitioners: Motivated by the desire to enable robots to autonomously operate in cluttered home and workplace environments, this paper presents an approach for intuitively training a robot in a manner that enables it to repeat the task in novel scenarios and in the presence of unforeseen obstacles in the environment. Based on user-provided demonstrations of the task, our method learns features of the task that are consistent across the demonstrations and that we expect should be repeated by the robot when performing the task. We next present an efficient algorithm for planning robot motions to perform the task based on the learned features while avoiding obstacles. We demonstrate the effectiveness of our motion planner for scenarios requiring transferring a powder and pushing a button in environments with obstacles, and we plan to extend our results to more complex tasks in the future.
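The Mahalanobis distance at the heart of the learned cost metric measures how far a candidate trajectory's features fall from the distribution of demonstrations. A rough sketch, deliberately simplified to a diagonal covariance (the paper itself does not make that restriction, and all names here are invented):

```python
def mahalanobis_diag(x, demos):
    """Mahalanobis distance from feature vector x to the demonstration set,
    under a diagonal-covariance approximation.

    demos: list of feature vectors (each the same length as x), all of
    which must vary in every feature so the variances are nonzero.
    """
    n, d = len(demos), len(x)
    mean = [sum(f[j] for f in demos) / n for j in range(d)]
    var = [sum((f[j] - mean[j]) ** 2 for f in demos) / n for j in range(d)]
    # each squared deviation is scaled by that feature's spread across demos
    return sum((x[j] - mean[j]) ** 2 / var[j] for j in range(d)) ** 0.5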
Affiliation(s)
- Chris Bowen
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Gu Ye
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Ron Alterovitz
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
6

7
Samadani AA, Ghodsi A, Kulić D. Discriminative functional analysis of human movements. Pattern Recognit Lett 2013. [DOI: 10.1016/j.patrec.2012.12.018]
8

9
Hoshino K, Tamaki E, Tanimoto T. Copycat hand - robot hand imitating human motions at high speed and with high accuracy. Adv Robot 2012. [DOI: 10.1163/156855307782506183]
Affiliation(s)
- Kiyoshi Hoshino
- University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan
- Emi Tamaki
- University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan
- Takanobu Tanimoto
- Matsushita Electric Industrial Co., Ltd, 1006 Kadoma, Kadoma, Osaka 571-8573, Japan
10
Ju Z, Liu H, Zhu X, Xiong Y. Dynamic Grasp Recognition Using Time Clustering, Gaussian Mixture Models and Hidden Markov Models. Adv Robot 2012. [DOI: 10.1163/156855309x462628]
Affiliation(s)
- Zhaojie Ju
- University of Portsmouth, Institute of Industrial Research, Burnaby Road, Portsmouth PO1 3QL, UK
- Honghai Liu
- University of Portsmouth, Institute of Industrial Research, Burnaby Road, Portsmouth PO1 3QL, UK
- Xiangyang Zhu
- Robotics Institute, Shanghai Jiao Tong University, Shanghai 200204, P. R. China
- Youlun Xiong
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, P. R. China
11
Cheng J, Xie C, Bian W, Tao D. Feature fusion for 3D hand gesture recognition by learning a shared hidden space. Pattern Recognit Lett 2012. [DOI: 10.1016/j.patrec.2010.12.009]
12

13
Sun J, Moore JL, Bobick A, Rehg JM. Learning Visual Object Categories for Robot Affordance Prediction. Int J Rob Res 2009. [DOI: 10.1177/0278364909356602]
Abstract
A fundamental requirement of any autonomous robot system is the ability to predict the affordances of its environment. The set of affordances defines the actions that are available to the agent given the robot's context. A standard approach to affordance learning is direct perception, which learns direct mappings from sensor measurements to affordance labels. For example, a robot designed for cross-country navigation could map stereo depth information and image features directly into predictions about the traversability of terrain regions. While this approach can succeed for a small number of affordances, it does not scale well as the number of affordances increases. In this paper, we show that visual object categories can be used as an intermediate representation that makes the affordance learning problem scalable. We develop a probabilistic graphical model which we call the Category-Affordance (CA) model, which describes the relationships between object categories, affordances, and appearance. This model casts visual object categorization as an intermediate inference step in affordance prediction. We describe several novel affordance learning and training strategies that are supported by our new model. Experimental results with indoor mobile robots evaluate these different strategies and demonstrate the advantages of the CA model in affordance learning, especially when learning from limited size data sets.
Affiliation(s)
- Jie Sun
- Georgia Institute of Technology, 85 5th Street, Atlanta, GA 30332, USA
- Joshua L. Moore
- Georgia Institute of Technology, 85 5th Street, Atlanta, GA 30332, USA
- Aaron Bobick
- Georgia Institute of Technology, 85 5th Street, Atlanta, GA 30332, USA
- James M. Rehg
- Georgia Institute of Technology, 85 5th Street, Atlanta, GA 30332, USA
14
Kulić D, Takano W, Nakamura Y. Online Segmentation and Clustering From Continuous Observation of Whole Body Motions. IEEE Trans Robot 2009. [DOI: 10.1109/tro.2009.2026508]
15
Abstract
Motion trajectory can be an informative and descriptive clue that is suitable for the characterization of motion. Studying motion trajectory for effective motion description and recognition is important in many applications. For instance, motion trajectory can play an important role in the representation, recognition and learning of most long-term human or robot actions, behaviors and activities. However, effective trajectory descriptors are lacking and most reported work just uses motion trajectory in its raw data form. In this paper, we propose a novel motion trajectory signature descriptor and study its rich descriptive invariants, which benefit effective motion trajectory recognition. These invariants are key measures of the flexibility and effectiveness of a descriptor. Substantial descriptive invariants can be deduced from the proposed trajectory signature, which is attributed to the computational locality of the signature components. We first present the signature definition and its robust implementation. Then the signature's invariants are elaborated. A non-linear inter-signature matching algorithm is developed to measure signature similarity for trajectory recognition. Experiments are conducted on human sign language recognition, in which both synthetic and real data are used to verify the signature's invariants and to illustrate its effectiveness in recognition.
Affiliation(s)
- Shandong Wu
- Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong
- Y.F. Li
- Department of Manufacturing Engineering and Engineering Management, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong
16
Kulić D, Takano W, Nakamura Y. Incremental Learning, Clustering and Hierarchy Formation of Whole Body Motion Patterns using Adaptive Hidden Markov Chains. Int J Rob Res 2008. [DOI: 10.1177/0278364908091153]
Abstract
This paper describes a novel approach for autonomous and incremental learning of motion pattern primitives by observation of human motion. Human motion patterns are abstracted into a dynamic stochastic model, which can be used for both subsequent motion recognition and generation, analogous to the mirror neuron hypothesis in primates. The model size is adaptable based on the discrimination requirements in the associated region of the current knowledge base. A new algorithm for sequentially training the Markov chains is developed, to reduce the computation cost during model adaptation. As new motion patterns are observed, they are incrementally grouped together using hierarchical agglomerative clustering based on their relative distance in the model space. The clustering algorithm forms a tree structure, with specialized motions at the tree leaves, and generalized motions closer to the root. The generated tree structure will depend on the type of training data provided, so that the most specialized motions will be those for which the most training has been received. Tests with motion capture data for a variety of motion primitives demonstrate the efficacy of the algorithm.
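The hierarchical agglomerative step can be pictured with a toy single-linkage clusterer over pairwise distances. In the paper the distance is between learned motion models in model space; here plain numbers with absolute difference stand in, purely as an illustration (all names are invented):

```python
def agglomerate(items, dist, threshold):
    """Greedy single-linkage agglomerative clustering: repeatedly merge the
    two closest clusters until the closest remaining pair exceeds threshold."""
    clusters = [[x] for x in items]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance between the closest pair of members
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        d, i, j = best
        if d > threshold:
            break
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

Recording the merge order instead of a flat result yields the tree structure the paper describes, with specialized motions at the leaves and generalized motions nearer the root.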
Affiliation(s)
- Dana Kulić
- Department of Mechano-Informatics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Wataru Takano
- Department of Mechano-Informatics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
- Yoshihiko Nakamura
- Department of Mechano-Informatics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
17
Dipietro L, Sabatini A, Dario P. A Survey of Glove-Based Systems and Their Applications. IEEE Trans Syst Man Cybern C Appl Rev 2008. [DOI: 10.1109/tsmcc.2008.923862]
18
Alves N, Chau T. Vision-based segmentation of continuous mechanomyographic grasping sequences. IEEE Trans Biomed Eng 2008; 55:765-73. [PMID: 18270015] [DOI: 10.1109/tbme.2007.902223]
Abstract
In detecting motor related activity from mechanomyographic (MMG) recordings, the acquisition of long, continuous streams of MMG signals is typically preferred over the painstaking collection of individual, isolated contractions. However, a major challenge with continuous collection is the subsequent separation of the MMG data stream into segments representing individual contractions. This paper proposes a method for segmenting continuously recorded MMG data streams using computer vision while providing a highly reduced set of key images for rapid human expert verification. Transverse plane video recordings of functional grasp sequences were synchronized with the acquisition of MMG signals from the forearm. An automatic, vision-based algorithm exploiting skin color detection, motion estimation, and template matching provided segmentation cues for MMG signals arising from multiple grips. The automatic segmentation method tolerated extraneous hand movements, differentiated among multiple grips and estimated grip transition times. Our implementation segmented two grips with an average accuracy of 97.8 ± 4%, and up to seven grips with an accuracy of 73 ± 20%. The automatically extracted contraction initiation and termination times were within 173 ± 133 ms of the times obtained via manual segmentation. It is suggested that the proposed method would be particularly conducive to the assembly of large collections of signals for training MMG-driven prostheses.
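Template matching, one of the cues the pipeline combines, can be illustrated in one dimension: slide a stored template over the signal and keep the offset with the smallest squared error. This is only a generic sketch with invented names, not the authors' vision-based implementation:

```python
def best_match(signal, template):
    """Return the offset where the template best fits the signal,
    scored by sum of squared differences (lower is better)."""
    best_off, best_err = 0, float("inf")
    for off in range(len(signal) - len(template) + 1):
        err = sum((signal[off + k] - template[k]) ** 2
                  for k in range(len(template)))
        if err < best_err:
            best_off, best_err = off, err
    return best_off
```

In a segmentation setting, the matched offsets serve as candidate contraction boundaries that a human expert can then verify from a handful of key images.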
Affiliation(s)
- Natasha Alves
- Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada.