1. Xu J, Wang R, Shang S, Chen A, Winterbottom L, Hsu TL, Chen W, Ahmed K, Rotta PLL, Zhu X, Nilsen DM, Stein J, Ciocarlie M. ChatEMG: Synthetic Data Generation to Control a Robotic Hand Orthosis for Stroke. IEEE Robot Autom Lett 2025; 10:907-914. [PMID: 39711823] [PMCID: PMC11661792] [DOI: 10.1109/lra.2024.3511372]
Abstract
Intent inferral on a hand orthosis for stroke patients is challenging due to the difficulty of data collection. Additionally, EMG signals exhibit significant variations across different conditions, sessions, and subjects, making it hard for classifiers to generalize. Traditional approaches require a large labeled dataset from the new condition, session, or subject to train intent classifiers; however, this data collection process is burdensome and time-consuming. In this paper, we propose ChatEMG, an autoregressive generative model that can generate synthetic EMG signals conditioned on prompts (i.e., a given sequence of EMG signals). ChatEMG enables us to collect only a small dataset from the new condition, session, or subject and expand it with synthetic samples conditioned on prompts from this new context. ChatEMG leverages a vast repository of previous data via generative training while still remaining context-specific via prompting. Our experiments show that these synthetic samples are classifier-agnostic and can improve intent inferral accuracy for different types of classifiers. We demonstrate that our complete approach can be integrated into a single patient session, including the use of the classifier for functional orthosis-assisted tasks. To the best of our knowledge, this is the first time an intent classifier trained partially on synthetic data has been deployed for functional control of an orthosis by a stroke survivor.
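The prompt-conditioned, autoregressive generation described above can be pictured with a minimal sketch (an illustration only, not the authors' implementation; the window length, channel count, and the stand-in predictor are assumptions): a next-sample model is seeded with a short EMG prompt recorded in the new condition and then rolled out to synthesize additional context-specific samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def next_sample(context: np.ndarray) -> np.ndarray:
    """Stand-in for a trained autoregressive model: predicts the next
    8-channel EMG sample from a window of past samples. A decaying-memory
    average plus noise is used purely for illustration."""
    weights = np.linspace(0.2, 1.0, context.shape[0])[:, None]
    mean = (weights * context).sum(axis=0) / weights.sum()
    return mean + rng.normal(scale=2.0, size=context.shape[1])

def generate_synthetic_emg(prompt: np.ndarray, n_new: int, window: int = 50) -> np.ndarray:
    """Autoregressive rollout: seed with a prompt recorded from the new
    condition/session/subject, then repeatedly append model predictions."""
    signal = prompt.copy()
    for _ in range(n_new):
        context = signal[-window:]               # condition on recent history
        signal = np.vstack([signal, next_sample(context)])
    return signal[len(prompt):]                  # return only the synthetic part

# A 100-sample, 8-channel prompt from the new context (placeholder data).
prompt = rng.normal(size=(100, 8))
synthetic = generate_synthetic_emg(prompt, n_new=400)
print(synthetic.shape)  # (400, 8) synthetic samples to augment a small dataset
```

In the paper the predictor is a generative model trained on a large repository of previous sessions; here it is a trivial placeholder so that only the prompting and rollout control flow is shown.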
Affiliation(s)
- Jingxi Xu: Department of Computer Science, Columbia University in the City of New York, NY, USA
- Runsheng Wang: Department of Mechanical Engineering, Columbia University in the City of New York, NY, USA
- Siqi Shang: Department of Computer Science, Columbia University in the City of New York, NY, USA
- Ava Chen: Department of Mechanical Engineering, Columbia University in the City of New York, NY, USA
- Lauren Winterbottom: Department of Rehabilitation and Regenerative Medicine, Columbia University Irving Medical Center, New York, NY 10032, USA
- To-Liang Hsu: Department of Computer Science, Columbia University in the City of New York, NY, USA
- Wenxi Chen: Department of Mechanical Engineering, Columbia University in the City of New York, NY, USA
- Khondoker Ahmed: Department of Computer Science, Columbia University in the City of New York, NY, USA
- Pedro Leandro La Rotta: Department of Mechanical Engineering, Columbia University in the City of New York, NY, USA
- Xinyue Zhu: Department of Computer Science, Columbia University in the City of New York, NY, USA
- Dawn M Nilsen: Department of Rehabilitation and Regenerative Medicine, Columbia University Irving Medical Center, New York, NY 10032, USA
- Joel Stein: Department of Rehabilitation and Regenerative Medicine, Columbia University Irving Medical Center, New York, NY 10032, USA
- Matei Ciocarlie: Department of Mechanical Engineering, Columbia University in the City of New York, NY, USA

2. Bhatia A, Hanna J, Stuart T, Kasper KA, Clausen DM, Gutruf P. Wireless Battery-free and Fully Implantable Organ Interfaces. Chem Rev 2024; 124:2205-2280. [PMID: 38382030] [DOI: 10.1021/acs.chemrev.3c00425]
Abstract
Advances in soft materials, miniaturized electronics, sensors, stimulators, radios, and battery-free power supplies are resulting in a new generation of fully implantable organ interfaces that leverage volumetric reduction and soft mechanics by eliminating electrochemical power storage. This device class offers the ability to provide high-fidelity readouts of physiological processes, enables stimulation, and allows control over organs to realize new therapeutic and diagnostic paradigms. Driven by seamless integration with connected infrastructure, these devices enable personalized digital medicine. Key to these advances are carefully designed material, electrophysical, electrochemical, and electromagnetic systems that form implantables with mechanical properties closely matched to the target organ and that deliver functionality supporting high-fidelity sensors and stimulators. The elimination of electrochemical power supplies enables device operation over durations ranging from acute use to lifetimes matching the target subject, with physical dimensions that support imperceptible operation. This review provides a comprehensive overview of the basic building blocks of battery-free organ interfaces and related topics such as implantation, delivery, sterilization, and user acceptance. State-of-the-art examples categorized by organ system are highlighted, together with an outlook on interconnection and advanced computational strategies that leverage the consistent power influx to elevate the functionality of this device class beyond current battery-powered strategies.
Affiliation(s)
- Aman Bhatia: Department of Biomedical Engineering, The University of Arizona, Tucson, Arizona 85721, United States
- Jessica Hanna: Department of Biomedical Engineering, The University of Arizona, Tucson, Arizona 85721, United States
- Tucker Stuart: Department of Biomedical Engineering, The University of Arizona, Tucson, Arizona 85721, United States
- Kevin Albert Kasper: Department of Biomedical Engineering, The University of Arizona, Tucson, Arizona 85721, United States
- David Marshall Clausen: Department of Biomedical Engineering, The University of Arizona, Tucson, Arizona 85721, United States
- Philipp Gutruf: Department of Biomedical Engineering; Department of Electrical and Computer Engineering; Bio5 Institute; Neuroscience Graduate Interdisciplinary Program (GIDP), The University of Arizona, Tucson, Arizona 85721, United States

3. Choo YJ, Chang MC. Use of machine learning in the field of prosthetics and orthotics: A systematic narrative review. Prosthet Orthot Int 2023; 47:226-240. [PMID: 36811961] [DOI: 10.1097/pxr.0000000000000199]
Abstract
Although machine learning is not yet being used in clinical practice within the fields of prosthetics and orthotics, several studies on its application to prostheses and orthoses have been conducted. We intend to provide relevant knowledge by conducting a systematic review of prior studies on the use of machine learning in the fields of prosthetics and orthotics. We searched the Medical Literature Analysis and Retrieval System Online (MEDLINE), Cochrane, Embase, and Scopus databases and retrieved studies published until July 18, 2021. The review included studies applying machine learning algorithms to upper-limb and lower-limb prostheses and orthoses. The criteria of the Quality in Prognosis Studies tool were used to assess the methodological quality of the studies. A total of 13 studies were included in this systematic review. In the realm of prostheses, machine learning has been used to identify the prosthesis, select an appropriate prosthesis, train users after fitting the prosthesis, detect falls, and manage the temperature in the socket. In the field of orthotics, machine learning was used to control real-time movement while wearing an orthosis and to predict the need for an orthosis. The studies included in this systematic review are limited to the algorithm development stage. However, if the developed algorithms are applied in clinical practice, they are expected to be useful for medical staff and users in handling prostheses and orthoses.
Affiliation(s)
- Yoo Jin Choo: Production R&D Division Advanced Interdisciplinary Team, Medical Device Development Center, Daegu-Gyeongbuk Medical Innovation Foundation, Daegu, South Korea
- Min Cheol Chang: Department of Rehabilitation Medicine, College of Medicine, Yeungnam University, Daegu, South Korea

4. Farina D, Vujaklija I, Brånemark R, Bull AMJ, Dietl H, Graimann B, Hargrove LJ, Hoffmann KP, Huang HH, Ingvarsson T, Janusson HB, Kristjánsson K, Kuiken T, Micera S, Stieglitz T, Sturma A, Tyler D, Weir RFF, Aszmann OC. Toward higher-performance bionic limbs for wider clinical use. Nat Biomed Eng 2023; 7:473-485. [PMID: 34059810] [DOI: 10.1038/s41551-021-00732-x]
Abstract
Most prosthetic limbs can autonomously move with dexterity, yet they are not perceived by the user as belonging to their own body. Robotic limbs can convey information about the environment with higher precision than biological limbs, but their actual performance is substantially limited by current technologies for the interfacing of the robotic devices with the body and for transferring motor and sensory information bidirectionally between the prosthesis and the user. In this Perspective, we argue that direct skeletal attachment of bionic devices via osseointegration, the amplification of neural signals by targeted muscle innervation, improved prosthesis control via implanted muscle sensors and advanced algorithms, and the provision of sensory feedback by means of electrodes implanted in peripheral nerves, should all be leveraged towards the creation of a new generation of high-performance bionic limbs. These technologies have been clinically tested in humans, and alongside mechanical redesigns and adequate rehabilitation training should facilitate the wider clinical use of bionic limbs.
Affiliation(s)
- Dario Farina: Department of Bioengineering, Imperial College London, London, UK
- Ivan Vujaklija: Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland
- Rickard Brånemark: Center for Extreme Bionics, Biomechatronics Group, MIT Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Sahlgrenska University Hospital, Gothenburg, Sweden
- Anthony M J Bull: Department of Bioengineering, Imperial College London, London, UK
- Hans Dietl: Ottobock Products SE & Co. KGaA, Vienna, Austria
- Levi J Hargrove: Center for Bionic Medicine, Shirley Ryan AbilityLab, Chicago, IL, USA; Department of Physical Medicine & Rehabilitation, Northwestern University, Chicago, IL, USA; Department of Biomedical Engineering, Northwestern University, Chicago, IL, USA
- Klaus-Peter Hoffmann: Department of Medical Engineering & Neuroprosthetics, Fraunhofer-Institut für Biomedizinische Technik, Sulzbach, Germany
- He Helen Huang: NCSU/UNC Joint Department of Biomedical Engineering, North Carolina State University, Raleigh, NC, USA; University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Thorvaldur Ingvarsson: Department of Research and Development, Össur Iceland, Reykjavík, Iceland; Faculty of Medicine, University of Iceland, Reykjavík, Iceland
- Hilmar Bragi Janusson: School of Engineering and Natural Sciences, University of Iceland, Reykjavík, Iceland
- Todd Kuiken: Center for Bionic Medicine, Shirley Ryan AbilityLab, Chicago, IL, USA; Department of Physical Medicine & Rehabilitation, Northwestern University, Chicago, IL, USA; Department of Biomedical Engineering, Northwestern University, Chicago, IL, USA
- Silvestro Micera: The Biorobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pontedera, Italy; Bertarelli Foundation Chair in Translational NeuroEngineering, Center for Neuroprosthetics and Institute of Bioengineering, School of Engineering, Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland
- Thomas Stieglitz: Laboratory for Biomedical Microtechnology, Department of Microsystems Engineering-IMTEK, BrainLinks-BrainTools Center and Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
- Agnes Sturma: Department of Bioengineering, Imperial College London, London, UK; Clinical Laboratory for Bionic Extremity Reconstruction, Department of Plastic and Reconstructive Surgery, Medical University of Vienna, Vienna, Austria
- Dustin Tyler: Case School of Engineering, Case Western Reserve University, Cleveland, OH, USA; Louis Stokes Veterans Affairs Medical Centre, Cleveland, OH, USA
- Richard F Ff Weir: Biomechatronics Development Laboratory, Bioengineering Department, University of Colorado Denver and VA Eastern Colorado Healthcare System, Aurora, CO, USA
- Oskar C Aszmann: Clinical Laboratory for Bionic Extremity Reconstruction, Department of Plastic and Reconstructive Surgery, Medical University of Vienna, Vienna, Austria

5. Hinson RM, Berman J, Filer W, Kamper D, Hu X, Huang H. Offline Evaluation Matters: Investigation of the Influence of Offline Performance on Real-Time Operation of Electromyography-Based Neural-Machine Interfaces. IEEE Trans Neural Syst Rehabil Eng 2023; 31:680-689. [PMID: 37015358] [DOI: 10.1109/tnsre.2022.3226229]
Abstract
There has been a debate on the most appropriate way to evaluate electromyography (EMG)-based neural-machine interfaces (NMIs). Accordingly, this study examined whether a relationship between offline kinematic predictive accuracy (R2) and user real-time task performance while using the interface could be identified. A virtual posture-matching task was developed to evaluate motion capture-based control and myoelectric control with artificial neural networks (ANNs) trained to low (R2 ≈ 0.4), moderate (R2 ≈ 0.6), and high (R2 ≈ 0.8) offline performance levels. Twelve non-disabled subjects trained with each offline performance level decoder before evaluating final real-time posture matching performance. Moderate to strong relationships were detected between offline performance and all real-time task performance metrics: task completion percentage (r = 0.66, p < 0.001), normalized task completion time (r = -0.51, p = 0.001), path efficiency (r = 0.74, p < 0.001), and target overshoots (r = -0.79, p < 0.001). Significant improvements in each real-time task evaluation metric were also observed between the different offline performance levels. Additionally, subjects rated myoelectric controllers with higher offline performance more favorably. The results of this study support the use and validity of offline analyses for optimization of NMIs in myoelectric control research and development.
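As a rough illustration of the reported correlation analysis (a sketch with placeholder numbers, not the study's data), an offline accuracy score can be related to a real-time task metric with a Pearson correlation:

```python
import numpy as np
from scipy import stats

# Placeholder values: one offline R2 and one real-time task-completion
# percentage per subject/decoder pairing (not the study's data).
offline_r2 = np.array([0.41, 0.43, 0.58, 0.62, 0.60, 0.79, 0.81, 0.78])
completion = np.array([55.0, 60.0, 68.0, 72.0, 70.0, 85.0, 88.0, 84.0])

r, p = stats.pearsonr(offline_r2, completion)
print(f"r = {r:.2f}, p = {p:.3g}")  # a positive r would mirror the reported trend
```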

6. Singh SK, Chaturvedi A. Leveraging deep feature learning for wearable sensors based handwritten character recognition. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104198]

7. Haptic shared control improves neural efficiency during myoelectric prosthesis use. Sci Rep 2023; 13:484. [PMID: 36627340] [PMCID: PMC9832035] [DOI: 10.1038/s41598-022-26673-2]
Abstract
Clinical myoelectric prostheses lack the sensory feedback and sufficient dexterity required to complete activities of daily living efficiently and accurately. Providing haptic feedback of relevant environmental cues to the user or imbuing the prosthesis with autonomous control authority have been separately shown to improve prosthesis utility. Few studies, however, have investigated the effect of combining these two approaches in a shared control paradigm, and none have evaluated such an approach from the perspective of neural efficiency (the relationship between task performance and mental effort measured directly from the brain). In this work, we analyzed the neural efficiency of 30 non-amputee participants in a grasp-and-lift task of a brittle object. Here, a myoelectric prosthesis featuring vibrotactile feedback of grip force and autonomous control of grasping was compared with a standard myoelectric prosthesis with and without vibrotactile feedback. As a measure of mental effort, we captured the prefrontal cortex activity changes using functional near infrared spectroscopy during the experiment. It was expected that the prosthesis with haptic shared control would improve both task performance and mental effort compared to the standard prosthesis. Results showed that only the haptic shared control system enabled users to achieve high neural efficiency, and that vibrotactile feedback was important for grasping with the appropriate grip force. These results indicate that the haptic shared control system synergistically combines the benefits of haptic feedback and autonomous controllers, and is well-poised to inform such hybrid advancements in myoelectric prosthesis technology.

8. Mathewson KW, Parker ASR, Sherstan C, Edwards AL, Sutton RS, Pilarski PM. Communicative capital: a key resource for human-machine shared agency and collaborative capacity. Neural Comput Appl 2022; 35:16805-16819. [PMID: 37455836] [PMCID: PMC10338399] [DOI: 10.1007/s00521-022-07948-1]
Abstract
In this work, we present a perspective on the role machine intelligence can play in supporting human abilities. In particular, we consider research in rehabilitation technologies such as prosthetic devices, as this domain requires tight coupling between human and machine. Taking an agent-based view of such devices, we propose that human-machine collaborations have a capacity to perform tasks which is a result of the combined agency of the human and the machine. We introduce communicative capital as a resource developed by a human and a machine working together in ongoing interactions. Development of this resource enables the partnership to eventually perform tasks at a capacity greater than either individual could achieve alone. We then examine the benefits and challenges of increasing the agency of prostheses by surveying literature which demonstrates that building communicative resources enables more complex, task-directed interactions. The viewpoint developed in this article extends current thinking on how best to support the functional use of increasingly complex prostheses, and establishes insight toward creating more fruitful interactions between humans and supportive, assistive, and augmentative technologies.
Affiliation(s)
- Adam S. R. Parker: University of Alberta, Edmonton, Canada; Alberta Machine Intelligence Institute (Amii), Edmonton, Canada
- Richard S. Sutton: DeepMind, Montreal, Canada; University of Alberta, Edmonton, Canada; Alberta Machine Intelligence Institute (Amii), Edmonton, Canada; DeepMind, Edmonton, Canada
- Patrick M. Pilarski: DeepMind, Montreal, Canada; University of Alberta, Edmonton, Canada; Alberta Machine Intelligence Institute (Amii), Edmonton, Canada; DeepMind, Edmonton, Canada

9. Fu J, Choudhury R, Hosseini SM, Simpson R, Park JH. Myoelectric Control Systems for Upper Limb Wearable Robotic Exoskeletons and Exosuits-A Systematic Review. Sensors (Basel) 2022; 22:8134. [PMID: 36365832] [PMCID: PMC9655258] [DOI: 10.3390/s22218134]
Abstract
In recent years, myoelectric control systems have emerged for upper limb wearable robotic exoskeletons to provide movement assistance and/or to restore motor functions in people with motor disabilities and to augment human performance in able-bodied individuals. In myoelectric control, electromyographic (EMG) signals from muscles are utilized to implement control strategies in exoskeletons and exosuits, improving adaptability and human-robot interactions during various motion tasks. This paper reviews the state-of-the-art myoelectric control systems designed for upper-limb wearable robotic exoskeletons and exosuits, and highlights the key focus areas for future research directions. Here, different modalities of existing myoelectric control systems were described in detail, and their advantages and disadvantages were summarized. Furthermore, key design aspects (i.e., supported degrees of freedom, portability, and intended application scenario) and the type of experiments conducted to validate the efficacy of the proposed myoelectric controllers were also discussed. Finally, the challenges and limitations of current myoelectric control systems were analyzed, and future research directions were suggested.
Affiliation(s)
- Jirui Fu: Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA
- Renoa Choudhury: Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA
- Saba M. Hosseini: Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, USA
- Rylan Simpson: Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, USA
- Joon-Hyuk Park: Department of Mechanical and Aerospace Engineering, University of Central Florida, Orlando, FL 32816, USA

10. Faridi P, Mehr JK, Wilson D, Sharifi M, Tavakoli M, Pilarski PM, Mushahwar VK. Machine-learned Adaptive Switching in Voluntary Lower-limb Exoskeleton Control: Preliminary Results. IEEE Int Conf Rehabil Robot 2022; 2022:1-6. [PMID: 36176101] [DOI: 10.1109/icorr55369.2022.9896611]
Abstract
Lower-limb exoskeletons utilize fixed control strategies and are not adaptable to the user's intention. To this end, the goal of this study was to investigate the potential of using temporal-difference learning and general value functions for predicting the next possible walking mode that will be selected by users wearing exoskeletons, in order to reduce the effort and cognitive load of switching between different modes of walking. Experiments were performed with a user wearing the Indego exoskeleton and given the authority to switch between five walking modes that differed in terms of speed and turn direction. The user's switching preferences were learned and predicted from device-centric and room-centric measurements by considering similarities in the movements being performed. A switching list was updated to show the most probable next modes to be selected by the user. In contrast to other approaches that either can only predict a single time step or require intensive offline training, this work used a computationally inexpensive method for learning and has the potential of providing temporally extended sets of predictions in real time. Comparing the number of required manual switches between the machine-learned switching list and the best possible static lists showed an average decrease of 42.44% in the required switches for the machine-learned adaptive strategy. These promising results will facilitate the path toward real-time application of this technique.
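The machine-learned switching list can be pictured with the simplified sketch below (illustrative only, not the study's implementation; mode names and the step size are assumptions): an incremental estimate of how often each mode follows the current one is updated online, and candidate modes are offered in descending order of that estimate.

```python
from collections import defaultdict

MODES = ["fast", "slow", "left", "right", "stop"]  # assumed mode names

class SwitchingList:
    """Online, incremental ranking of likely next walking modes;
    a simplified stand-in for the temporal-difference/GVF predictions
    used in the paper."""

    def __init__(self, step_size: float = 0.1):
        self.step_size = step_size
        # value[(current, candidate)] ~ learned tendency to switch current -> candidate
        self.value = defaultdict(float)

    def update(self, current: str, chosen: str) -> None:
        # Move each estimate toward 1 for the observed switch and toward 0 otherwise.
        for candidate in MODES:
            target = 1.0 if candidate == chosen else 0.0
            key = (current, candidate)
            self.value[key] += self.step_size * (target - self.value[key])

    def ranked(self, current: str) -> list[str]:
        return sorted(MODES, key=lambda m: self.value[(current, m)], reverse=True)

switcher = SwitchingList()
for nxt in ["left", "left", "fast", "left"]:   # user repeatedly turns left after "slow"
    switcher.update("slow", nxt)
print(switcher.ranked("slow"))                 # "left" rises to the top of the list
```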

11. Xu J, Meeker C, Chen A, Winterbottom L, Fraser M, Park S, Weber LM, Miya M, Nilsen D, Stein J, Ciocarlie M. Adaptive Semi-Supervised Intent Inferral to Control a Powered Hand Orthosis for Stroke. IEEE Int Conf Robot Autom (ICRA) 2022; 2022:8097-8103. [PMID: 37181542] [PMCID: PMC10181849] [DOI: 10.1109/icra46639.2022.9811932]
Abstract
In order to provide therapy in a functional context, controls for wearable robotic orthoses need to be robust and intuitive. We have previously introduced an intuitive, user-driven, EMG-based method to operate a robotic hand orthosis, but the process of training a control that is robust to concept drift (changes in the input signal) places a substantial burden on the user. In this paper, we explore semi-supervised learning as a paradigm for controlling a powered hand orthosis for stroke subjects. To the best of our knowledge, this is the first use of semi-supervised learning for an orthotic application. Specifically, we propose a disagreement-based semi-supervision algorithm for handling intrasession concept drift based on multimodal ipsilateral sensing. We evaluate the performance of our algorithm on data collected from five stroke subjects. Our results show that the proposed algorithm helps the device adapt to intrasession drift using unlabeled data and reduces the training burden placed on the user. We also validate the feasibility of our proposed algorithm with a functional task; in these experiments, two subjects successfully completed multiple instances of a pick-and-handover task.
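The disagreement-based self-labelling idea can be sketched as follows (a simplified illustration with assumed details, not the authors' algorithm): two classifiers trained on different ipsilateral modalities exchange pseudo-labels on the unlabeled stream only where they disagree and one of them is confident, allowing adaptation to intrasession drift without extra labeling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder data: two sensing modalities ("views") of the same arm,
# e.g. EMG features and another ipsilateral signal, with 3 intent classes.
n_lab, n_unlab, d = 60, 300, 8
X_emg_lab, X_other_lab = rng.normal(size=(n_lab, d)), rng.normal(size=(n_lab, d))
y_lab = rng.integers(0, 3, size=n_lab)
X_emg_unlab, X_other_unlab = rng.normal(size=(n_unlab, d)), rng.normal(size=(n_unlab, d))

clf_emg = LogisticRegression(max_iter=1000).fit(X_emg_lab, y_lab)
clf_other = LogisticRegression(max_iter=1000).fit(X_other_lab, y_lab)

# One round of disagreement-based co-labelling: where the two classifiers
# disagree but the second view is confident, adopt its label as a pseudo-label.
p_emg = clf_emg.predict_proba(X_emg_unlab)
p_other = clf_other.predict_proba(X_other_unlab)
pred_emg, pred_other = p_emg.argmax(1), p_other.argmax(1)
confident_other = (pred_emg != pred_other) & (p_other.max(1) > 0.9)

if confident_other.any():
    # Retrain the EMG classifier with pseudo-labels supplied by the other view.
    X_aug = np.vstack([X_emg_lab, X_emg_unlab[confident_other]])
    y_aug = np.concatenate([y_lab, pred_other[confident_other]])
    clf_emg = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
```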
Affiliation(s)
- Jingxi Xu: Department of Computer Science, Columbia University, New York, NY 10027, USA
- Cassie Meeker: Department of Mechanical Engineering, Columbia University, New York, NY 10027, USA
- Ava Chen: Department of Mechanical Engineering, Columbia University, New York, NY 10027, USA
- Lauren Winterbottom: Department of Rehabilitation and Regenerative Medicine, Columbia University, New York, NY 10032, USA
- Michaela Fraser: Department of Rehabilitation and Regenerative Medicine, Columbia University, New York, NY 10032, USA
- Sangwoo Park: Department of Mechanical Engineering, Columbia University, New York, NY 10027, USA
- Lynne M Weber: Department of Rehabilitation and Regenerative Medicine, Columbia University, New York, NY 10032, USA
- Mitchell Miya: Department of Mechanical Engineering, Columbia University, New York, NY 10027, USA
- Dawn Nilsen: Department of Rehabilitation and Regenerative Medicine, Columbia University, New York, NY 10032, USA (Co-Principal Investigator)
- Joel Stein: Department of Rehabilitation and Regenerative Medicine, Columbia University, New York, NY 10032, USA (Co-Principal Investigator)
- Matei Ciocarlie: Department of Mechanical Engineering, Columbia University, New York, NY 10027, USA (Co-Principal Investigator)

12. Kearney A, Günther J, Pilarski PM. Prediction, Knowledge, and Explainability: Examining the Use of General Value Functions in Machine Knowledge. Front Artif Intell 2022; 5:826724. [PMID: 35434609] [PMCID: PMC9010283] [DOI: 10.3389/frai.2022.826724]
Abstract
Within computational reinforcement learning, a growing body of work seeks to express an agent's knowledge of its world through large collections of predictions. While systems that encode predictions as General Value Functions (GVFs) have seen numerous developments in both theory and application, whether such approaches are explainable is unexplored. In this perspective piece, we explore GVFs as a form of explainable AI. To do so, we articulate a subjective agent-centric approach to explainability in sequential decision-making tasks. We propose that prior to explaining its decisions to others, a self-supervised agent must be able to introspectively explain decisions to itself. To clarify this point, we review prior applications of GVFs that involve human-agent collaboration. In doing so, we demonstrate that by making their subjective explanations public, predictive knowledge agents can improve the clarity of their operation in collaborative tasks.
Affiliation(s)
- Alex Kearney (corresponding author): Reinforcement Learning and Artificial Intelligence Lab, Department of Computing Science, University of Alberta, Edmonton, AB, Canada
- Johannes Günther: Reinforcement Learning and Artificial Intelligence Lab, Department of Computing Science, University of Alberta, Edmonton, AB, Canada; Alberta Machine Intelligence Institute, Edmonton, AB, Canada
- Patrick M. Pilarski: Reinforcement Learning and Artificial Intelligence Lab, Department of Computing Science, University of Alberta, Edmonton, AB, Canada; Alberta Machine Intelligence Institute, Edmonton, AB, Canada; Department of Medicine, University of Alberta, Edmonton, AB, Canada

13. sEMG Signals Characterization and Identification of Hand Movements by Machine Learning Considering Sex Differences. Appl Sci (Basel) 2022. [DOI: 10.3390/app12062962]
Abstract
Developing a robust machine-learning algorithm to detect hand motion is one of the most challenging aspects of prosthetic hand and exoskeleton design. Machine-learning methods that considered sex differences were used to identify and describe hand movement patterns in healthy individuals. To this purpose, surface electromyographic (sEMG) signals were acquired from muscles in the forearm and hand. The results of statistical analysis indicated that most of the same muscle pairs in the right hand (females and males) showed significant differences during the six hand movements. Time features were used as an input to machine-learning algorithms for the recognition of six gestures. Specifically, two types of hand-gesture recognition methods that considered sex differences (differentiating sex datasets and adding a sex label) were proposed and applied to the k-nearest neighbor (k-NN), support vector machine (SVM), and artificial neural network (ANN) algorithms for comparison. In addition, a t-test statistical analysis approach and 5-fold cross-validation were used as complements to verify whether considering sex differences could significantly improve classification performance. It was demonstrated that considering sex differences can significantly improve classification performance. The ANN algorithm with the addition of a sex label performed best in movement classification (98.4% accuracy). In the future, hand movement recognition algorithms considering sex differences could be applied to control systems for prosthetic hands or exoskeletons.
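The "adding a sex label" variant amounts to appending a binary feature to the time-domain feature vector before classification. A minimal scikit-learn sketch (synthetic placeholder features, not the study's data):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Placeholder time-domain sEMG features (e.g. MAV, RMS, ZC, WL per channel)
# for 200 trials, plus a binary sex label per trial.
X = rng.normal(size=(200, 16))
sex = rng.integers(0, 2, size=(200, 1))          # 0 = female, 1 = male
y = rng.integers(0, 6, size=200)                 # six hand gestures

X_with_sex = np.hstack([X, sex])                 # "adding a sex label" variant

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
acc_plain = cross_val_score(clf, X, y, cv=5).mean()
acc_sex = cross_val_score(clf, X_with_sex, y, cv=5).mean()
print(f"5-fold accuracy without sex label: {acc_plain:.2f}, with sex label: {acc_sex:.2f}")
```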

14. Jaber HA, Rashid MT, Fortuna L. Online myoelectric pattern recognition based on hybrid spatial features. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102482]

15. Dalrymple AN, Roszko DA, Sutton RS, Mushahwar VK. Pavlovian control of intraspinal microstimulation to produce over-ground walking. J Neural Eng 2020; 17:036002. [PMID: 32348970] [DOI: 10.1088/1741-2552/ab8e8e]
Abstract
Objective: Neuromodulation technologies are increasingly used for improving function after neural injury. To achieve a symbiotic relationship between device and user, the device must augment remaining function, and independently adapt to day-to-day changes in function. The goal of this study was to develop predictive control strategies to produce over-ground walking in a model of hemisection spinal cord injury (SCI) using intraspinal microstimulation (ISMS).
Approach: Eight cats were anaesthetized and placed in a sling over a walkway. The residual function of a hemisection SCI was mimicked by manually moving one hind-limb through the walking cycle. ISMS targeted motor networks in the lumbosacral enlargement to activate muscles in the other, presumably 'paralyzed' limb, using low levels of current (<130 μA). Four people took turns to move the 'intact' limb, generating four different walking styles. Two control strategies, which used ground reaction force and angular velocity information about the manually moved 'intact' limb to control the timing of the transitions of the 'paralyzed' limb through the step cycle, were compared. The first strategy used thresholds on the raw sensor values to initiate transitions. The second strategy used reinforcement learning and Pavlovian control to learn predictions about the sensor values. Thresholds on the predictions were then used to initiate transitions.
Main results: Both control strategies were able to produce alternating, over-ground walking. Transitions based on raw sensor values required manual tuning of thresholds for each person to produce walking, whereas Pavlovian control did not. Learning occurred quickly during walking: predictions of the sensor signals were learned rapidly, initiating correct transitions after ≤4 steps. Pavlovian control was resilient to different walking styles and different cats, and recovered from induced mistakes during walking.
Significance: This work demonstrates, for the first time, that Pavlovian control can augment remaining function and facilitate personalized walking with minimal tuning requirements.
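The second strategy, thresholding learned predictions rather than raw sensor values, can be sketched with a simple temporal-difference update (an illustrative simplification with assumed constants and features, not the study's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

gamma, alpha = 0.9, 0.1      # discount and step size (assumed values)
w = np.zeros(4)              # weights over a tiny binary state representation
threshold = 0.5              # transition fires when the prediction exceeds this

def features(ground_force: float, ang_vel: float) -> np.ndarray:
    """Coarse binary features of the 'intact' limb's sensors (illustrative)."""
    return np.array([ground_force > 20.0, ground_force <= 20.0,
                     ang_vel > 0.0, ang_vel <= 0.0], dtype=float)

x = features(0.0, 0.0)
for t in range(200):
    force = 30.0 * (np.sin(t / 5.0) > 0)        # toy stance/swing force signal
    vel = np.cos(t / 5.0) + rng.normal(0, 0.05) # toy angular velocity signal
    cumulant = float(force > 20.0)              # signal of interest to predict
    x_next = features(force, vel)
    # TD(0) update of the prediction (a general value function of the cumulant).
    delta = cumulant + gamma * w @ x_next - w @ x
    w += alpha * delta * x
    x = x_next
    if w @ x > threshold:
        pass  # Pavlovian control: trigger the step-cycle transition here
```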
Affiliation(s)
- Ashley N Dalrymple: Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB, Canada; Sensory Motor Adaptive Rehabilitation Technology (SMART) Network, University of Alberta, Edmonton, AB, Canada

16. Günther J, Ady NM, Kearney A, Dawson MR, Pilarski PM. Examining the Use of Temporal-Difference Incremental Delta-Bar-Delta for Real-World Predictive Knowledge Architectures. Front Robot AI 2020; 7:34. [PMID: 33501202] [PMCID: PMC7805647] [DOI: 10.3389/frobt.2020.00034]
Abstract
Predictions and predictive knowledge have seen recent success in improving not only robot control but also other applications ranging from industrial process control to rehabilitation. A property that makes these predictive approaches well-suited for robotics is that they can be learned online and incrementally through interaction with the environment. However, a remaining challenge for many prediction-learning approaches is an appropriate choice of prediction-learning parameters, especially parameters that control the magnitude of a learning machine's updates to its predictions (the learning rates or step sizes). Typically, these parameters are chosen based on an extensive parameter search—an approach that neither scales well nor is well-suited for tasks that require changing step sizes due to non-stationarity. To begin to address this challenge, we examine the use of online step-size adaptation using the Modular Prosthetic Limb: a sensor-rich robotic arm intended for use by persons with amputations. Our method of choice, Temporal-Difference Incremental Delta-Bar-Delta (TIDBD), learns and adapts step sizes on a feature level; importantly, TIDBD allows step-size tuning and representation learning to occur at the same time. As a first contribution, we show that TIDBD is a practical alternative for classic Temporal-Difference (TD) learning via an extensive parameter search. Both approaches perform comparably in terms of predicting future aspects of a robotic data stream, but TD only achieves comparable performance with a carefully hand-tuned learning rate, while TIDBD uses a robust meta-parameter and tunes its own learning rates. Secondly, our results show that for this particular application TIDBD allows the system to automatically detect patterns characteristic of sensor failures common to a number of robotic applications. As a third contribution, we investigate the sensitivity of classic TD and TIDBD with respect to the initial step-size values on our robotic data set, reaffirming the robustness of TIDBD as shown in previous papers. Together, these results promise to improve the ability of robotic devices to learn from interactions with their environments in a robust way, providing key capabilities for autonomous agents and robots.
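To make the step-size adaptation concrete, the following is a compact sketch in the spirit of TD learning with IDBD-style per-feature step sizes (a simplified reconstruction with assumed meta-parameters and with eligibility traces omitted; see the paper for the exact TIDBD algorithm):

```python
import numpy as np

def td_idbd_step(w, beta, h, x, x_next, cumulant, gamma=0.9, meta=0.01):
    """One TD(0) update with per-feature step sizes adapted by stochastic
    meta-descent (IDBD-style; eligibility traces omitted for brevity)."""
    delta = cumulant + gamma * w @ x_next - w @ x     # TD error
    beta += meta * delta * x * h                      # meta-descent on log step sizes
    alpha = np.exp(beta)                              # per-feature step sizes
    w += alpha * delta * x                            # prediction weights
    # Decaying memory of recent weight changes, used to correlate successive errors.
    h = h * np.clip(1.0 - alpha * x * x, 0.0, None) + alpha * delta * x
    return w, beta, h

n = 16
w, beta, h = np.zeros(n), np.full(n, np.log(0.05)), np.zeros(n)
rng = np.random.default_rng(0)
x = (rng.random(n) < 0.3).astype(float)               # sparse binary features
for _ in range(1000):
    x_next = (rng.random(n) < 0.3).astype(float)
    cumulant = x_next[0]                              # predict a sensor-like signal
    w, beta, h = td_idbd_step(w, beta, h, x, x_next, cumulant)
    x = x_next
```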
Affiliation(s)
- Johannes Günther: Department of Computing Science, University of Alberta, Edmonton, AB, Canada; Alberta Machine Intelligence Institute, Edmonton, AB, Canada
- Nadia M Ady: Department of Computing Science, University of Alberta, Edmonton, AB, Canada
- Alex Kearney: Department of Computing Science, University of Alberta, Edmonton, AB, Canada
- Michael R Dawson: Alberta Machine Intelligence Institute, Edmonton, AB, Canada; Department of Medicine, University of Alberta, Edmonton, AB, Canada
- Patrick M Pilarski: Department of Computing Science, University of Alberta, Edmonton, AB, Canada; Alberta Machine Intelligence Institute, Edmonton, AB, Canada; Department of Medicine, University of Alberta, Edmonton, AB, Canada

17. Real-Time EMG Based Pattern Recognition Control for Hand Prostheses: A Review on Existing Methods, Challenges and Future Implementation. Sensors (Basel) 2019; 19:4596. [PMID: 31652616] [PMCID: PMC6832440] [DOI: 10.3390/s19204596]
Abstract
Upper limb amputation is a condition that significantly restricts amputees from performing their daily activities. The myoelectric prosthesis, using signals from residual stump muscles, is aimed at restoring the function of such lost limbs seamlessly. Unfortunately, the acquisition and use of such myosignals are cumbersome and complicated. Furthermore, once acquired, they usually require heavy computational power to be turned into a user control signal. The transition to a practical prosthesis solution is still challenged by various factors, particularly those related to the fact that each amputee has different mobility, muscle contraction forces, limb positional variations, and electrode placements. Thus, a solution that can adapt or otherwise tailor itself to each individual is required for maximum utility across amputees. Modified machine learning schemes for pattern recognition have the potential to significantly reduce the factors (movement of users and contraction of the muscle) affecting traditional electromyography (EMG) pattern recognition methods. Although recent intelligent pattern recognition techniques can discriminate multiple degrees of freedom with high accuracy, their efficiency has rarely been demonstrated in real-world (amputee) applications. This review examined the suitability of upper limb prosthesis (ULP) inventions in the healthcare sector from a technical control perspective. More focus was given to the review of real-world applications and the use of pattern recognition control with amputees. We first review the overall structure of pattern recognition schemes for myoelectric prosthetic systems and then discuss their real-time use on amputee upper limbs. Finally, we conclude with a discussion of existing challenges and future research recommendations.

18. Aranceta-Garza A, Conway BA. Differentiating Variations in Thumb Position From Recordings of the Surface Electromyogram in Adults Performing Static Grips, a Proof of Concept Study. Front Bioeng Biotechnol 2019; 7:123. [PMID: 31192205] [PMCID: PMC6541154] [DOI: 10.3389/fbioe.2019.00123]
Abstract
Hand gesture and grip formations are produced by the muscle synergies arising between extrinsic and intrinsic hand muscles, and many functional hand movements involve repositioning of the thumb relative to other digits. In this study we explored whether changes in thumb posture in able-bodied volunteers can be identified and classified from the modulation of forearm muscle surface electromyography (sEMG) alone, without reference to activity from the intrinsic musculature. In this proof-of-concept study, our goal was to determine if there is scope to develop prosthetic hand control systems that may incorporate myoelectric thumb-position control. Healthy volunteers performed a controlled isometric grip task with their thumb held in four different opposing postures. Grip force during task performance was maintained at 30% of maximal voluntary force, and sEMG signals from the forearm were recorded using 2D high-density sEMG (HD-sEMG) arrays. Correlations of sEMG amplitude and root-mean-square estimates with variation in thumb position were investigated using principal-component analysis and self-organizing feature maps. Results demonstrate that forearm muscle sEMG patterns possess classifiable parameters that correlate with variations in static thumb position (accuracy of 88.25 ± 0.5% for anterior and 91.25 ± 2.5% for posterior forearm muscle sites). Importantly, this suggests that in transradial amputees, despite the loss of access to the intrinsic muscles that control thumb action, an acceptable level of control over a thumb component within myoelectric devices may be achievable. Accordingly, further work exploring the potential to provide myoelectric control over the thumb within a prosthetic hand is warranted.
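As a minimal illustration of relating HD-sEMG features to discrete thumb postures (placeholder data, and a generic PCA-plus-classifier pipeline standing in for the paper's principal-component and self-organizing-map analysis):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder RMS features from a 64-electrode HD-sEMG grid, 4 thumb postures.
X = rng.normal(size=(160, 64))
y = np.repeat(np.arange(4), 40)

model = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(model, X, y, cv=5).mean())  # chance-level here; real data separates
```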
Affiliation(s)
- Bernard Arthur Conway: Department of Biomedical Engineering, University of Strathclyde, Glasgow, United Kingdom

19. Raveh E, Portnoy S, Friedman J. Myoelectric Prosthesis Users Improve Performance Time and Accuracy Using Vibrotactile Feedback When Visual Feedback Is Disturbed. Arch Phys Med Rehabil 2018; 99:2263-2270. [DOI: 10.1016/j.apmr.2018.05.019]

20. A Neuromuscular Interface for Robotic Devices Control. Comput Math Methods Med 2018; 2018:8948145. [PMID: 30140303] [PMCID: PMC6081556] [DOI: 10.1155/2018/8948145]
Abstract
A neuromuscular interface (NI) that can be employed to operate external robotic devices (RD), including commercial ones, is proposed. A multichannel electromyographic (EMG) signal is used in the control loop. The control signal can also be supplemented with electroencephalography (EEG), limb kinematics, or other modalities. The multiple-electrode approach takes advantage of the massive resources of the human brain for solving nontrivial tasks, such as movement coordination. A multilayer artificial neural network was used for feature classification and further to provide command and/or proportional control of three robotic devices. The possibility of using biofeedback can compensate for control errors and implement a fundamentally important feature that has previously limited the development of intelligent exoskeletons, prostheses, and other medical devices. The control system can be integrated with wearable electronics. Examples of technical devices under control of the NI are presented.

21. Michael B, Howard M. Gait Reconstruction From Motion Artefact Corrupted Fabric-Embedded Sensors. IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2807810]

22. Argall BD. Autonomy in Rehabilitation Robotics: An Intersection. Annu Rev Control Robot Auton Syst 2018; 1:441-463. [PMID: 34316543] [PMCID: PMC8313033] [DOI: 10.1146/annurev-control-061417-041727]
Abstract
Within the field of human rehabilitation, robotic machines are used both to rehabilitate the body and to perform functional tasks. Robotics autonomy able to perceive the external world and reason about high-level control decisions, however, is seldom present in these machines. For functional tasks in particular, autonomy could help to decrease the operational burden on the human and perhaps even increase access, and this potential only grows as human motor impairments become more severe. There are, however, serious and often subtle considerations in introducing clinically feasible robotics autonomy to rehabilitation robots and machines. Today the fields of robotics autonomy and rehabilitation robotics are largely separate. The topic of this article is at the intersection of these fields: the introduction of clinically feasible autonomy solutions to rehabilitation robots, and opportunities for autonomy within the rehabilitation domain.
Affiliation(s)
- Brenna D Argall: McCormick School of Engineering and Feinberg School of Medicine, Northwestern University, Evanston, IL 60208, USA; Shirley Ryan AbilityLab (formerly the Rehabilitation Institute of Chicago), Chicago, IL 60611, USA

23. Adding vibrotactile feedback to a myoelectric-controlled hand improves performance when online visual feedback is disturbed. Hum Mov Sci 2018; 58:32-40. [PMID: 29353091] [DOI: 10.1016/j.humov.2018.01.008]
Abstract
We investigated whether adding vibrotactile feedback to a myoelectric-controlled hand, when visual feedback is disturbed, can improve performance during a functional test. For this purpose, able-bodied subjects activating a myoelectric-controlled hand attached to their right hand performed the modified Box & Blocks test, grasping and manipulating wooden blocks over a partition. This was performed in three conditions, using a repeated-measures design: in full light, and in a dark room where visual feedback was disturbed and no auditory feedback was available, once with tactile feedback provided during object grasping and manipulation and once without any tactile feedback. The average time needed to transfer one block was measured, and an infrared camera was used to give information on the number of grasping errors during performance of the test. Our results show that when vibrotactile feedback was provided, performance time was reduced significantly compared with when no vibrotactile feedback was available. Furthermore, the accuracy of grasping and manipulation was improved, reflected by significantly fewer errors during test performance. In conclusion, adding vibrotactile feedback to a myoelectric-controlled hand has positive effects on functional performance when visual feedback is disturbed. This may have applications to current myoelectric-controlled hands, as adding tactile feedback may help prosthesis users to improve their functional ability during daily life activities in different environments, particularly when limited visual feedback is available or desirable.

24. Vasan G, Pilarski PM. Learning from demonstration: Teaching a myoelectric prosthesis with an intact limb via reinforcement learning. IEEE Int Conf Rehabil Robot 2017; 2017:1457-1464. [PMID: 28814025] [DOI: 10.1109/icorr.2017.8009453]
Abstract
Prosthetic arms should restore and extend the capabilities of someone with an amputation. They should move naturally and be able to perform elegant, coordinated movements that approximate those of a biological arm. Despite these objectives, the control of modern-day prostheses is often nonintuitive and taxing. Existing devices and control approaches do not yet give users the ability to effect highly synergistic movements during their daily-life control of a prosthetic device. As a step towards improving the control of prosthetic arms and hands, we introduce an intuitive approach to training a prosthetic control system that helps a user achieve hard-to-engineer control behaviours. Specifically, we present an actor-critic reinforcement learning method that for the first time promises to allow someone with an amputation to use their non-amputated arm to teach their prosthetic arm how to move through a wide range of coordinated motions and grasp patterns. We evaluate our method during the myoelectric control of a multi-joint robot arm by non-amputee users, and demonstrate that by using our approach a user can train their arm to perform simultaneous gestures and movements in all three degrees of freedom in the robot's hand and wrist based only on information sampled from the robot and the user's above-elbow myoelectric signals. Our results indicate that this learning-from-demonstration paradigm may be well suited to use by both patients and clinicians with minimal technical knowledge, as it allows a user to personalize the control of his or her prosthesis without having to know the underlying mechanics of the prosthetic limb. These preliminary results also suggest that our approach may extend in a straightforward way to next-generation prostheses with precise finger and wrist control, such that these devices may someday allow users to perform fluid and intuitive movements like playing the piano, catching a ball, and comfortably shaking hands.
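The learning-from-demonstration setup can be pictured with a toy one-dimensional actor-critic update in which the reward reflects how closely a prosthetic joint tracks the motion demonstrated by the intact limb (an illustrative sketch with an assumed plant and reward, not the authors' controller):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_w, alpha_theta, gamma, sigma = 0.1, 0.01, 0.95, 0.2

w = 0.0          # critic: state-value estimate (single constant feature)
theta = 0.0      # actor: mean joint-velocity command of a Gaussian policy

joint = 0.0
for t in range(2000):
    demo = np.sin(t / 50.0)                     # motion demonstrated by the intact limb
    action = theta + sigma * rng.normal()       # sample a velocity command
    joint += 0.05 * action                      # toy plant: integrate the command
    reward = -abs(joint - demo)                 # closer tracking -> higher reward
    # One-step actor-critic update with a constant feature of 1.0.
    delta = reward + gamma * w - w              # TD error
    w += alpha_w * delta
    theta += alpha_theta * delta * (action - theta) / sigma**2  # policy-gradient step
```

This toy only shows the shape of the update equations; the paper learns from multi-channel myoelectric signals and controls several degrees of freedom simultaneously.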

25. Travnik JB, Pilarski PM. Representing high-dimensional data to intelligent prostheses and other wearable assistive robots: A first comparison of tile coding and selective Kanerva coding. IEEE Int Conf Rehabil Robot 2017; 2017:1443-1450. [PMID: 28814023] [DOI: 10.1109/icorr.2017.8009451]
Abstract
Prosthetic devices have advanced in their capabilities and in the number and type of sensors included in their design. As the space of sensorimotor data available to a conventional or machine learning prosthetic control system increases in dimensionality and complexity, it becomes increasingly important that this data be represented in a useful and computationally efficient way. Well-structured sensory data allows prosthetic control systems to make informed, appropriate control decisions. In this study, we explore the impact that increased sensorimotor information has on current machine learning prosthetic control approaches. Specifically, we examine the effect that high-dimensional sensory data has on the computation time and prediction performance of a true-online temporal-difference learning prediction method as embedded within a resource-limited upper-limb prosthesis control system. We present results comparing tile coding, the dominant linear representation for real-time prosthetic machine learning, with a newly proposed modification to Kanerva coding that we call selective Kanerva coding. In addition to showing promising results for selective Kanerva coding, our results confirm potential limitations to tile coding as the number of sensory input dimensions increases. To our knowledge, this study is the first to explicitly examine representations for real-time machine learning prosthetic devices in general terms. This work therefore provides an important step towards forming an efficient prosthesis-eye view of the world, wherein prompt and accurate representations of high-dimensional data may be provided to machine learning control systems within artificial limbs and other assistive rehabilitation technologies.
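The representational difference discussed above can be illustrated with a short sketch of selective Kanerva coding, in which an observation activates only the c prototypes closest to it (a simplified reconstruction; prototype count, dimensionality, and the distance metric are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def selective_kanerva_code(x: np.ndarray, prototypes: np.ndarray, c: int) -> np.ndarray:
    """Binary feature vector with exactly c active features: the c prototypes
    nearest (Euclidean) to the input x."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    code = np.zeros(len(prototypes))
    code[np.argpartition(dists, c)[:c]] = 1.0
    return code

# 2000 random prototypes spanning a normalized 12-dimensional sensor space.
prototypes = rng.random((2000, 12))
x = rng.random(12)                      # one normalized sensorimotor observation
phi = selective_kanerva_code(x, prototypes, c=50)
print(int(phi.sum()))                   # always 50 active features, regardless of dimension
```

Unlike tile coding, the size of this representation does not grow exponentially with the number of input dimensions, which is the practical point of comparison raised in the abstract.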