1. Williams HE, Shehata AW, Cheng KY, Hebert JS, Pilarski PM. A multifaceted suite of metrics for comparative myoelectric prosthesis controller research. PLoS One 2024; 19:e0291279. [PMID: 38739557] [PMCID: PMC11090368] [DOI: 10.1371/journal.pone.0291279]
Abstract
Upper limb robotic (myoelectric) prostheses are technologically advanced, but challenging to use. In response, substantial research is being done to develop person-specific prosthesis controllers that can predict a user's intended movements. Most studies that test and compare new controllers rely on simple assessment measures such as task scores (e.g., number of objects moved across a barrier) or duration-based measures (e.g., overall task completion time). These assessment measures, however, fail to capture valuable details about: the quality of device arm movements; whether these movements match users' intentions; the timing of specific wrist and hand control functions; and users' opinions regarding overall device reliability and controller training requirements. In this work, we present a comprehensive and novel suite of myoelectric prosthesis control evaluation metrics that better facilitates analysis of device movement details, spanning measures of task performance, control characteristics, and user experience. As a case example of their use and research viability, we applied these metrics in real-time control experimentation. Here, eight participants without upper limb impairment compared device control offered by a deep learning-based controller (recurrent convolutional neural network-based classification with transfer learning, or RCNN-TL) to that of a commonly used controller (linear discriminant analysis, or LDA). The participants wore a simulated prosthesis and performed complex functional tasks across multiple limb positions. Analysis resulting from our suite of metrics identified 16 instances of a user-facing problem known as the "limb position effect". We determined that RCNN-TL performed the same as or significantly better than LDA in four such problem instances. We also confirmed that transfer learning can minimize user training burden.
Overall, this study contributes a multifaceted new suite of control evaluation metrics, along with a guide to their application, for use in research and testing of myoelectric controllers today, and potentially for use in broader rehabilitation technologies of the future.
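The LDA baseline compared above is a standard pattern-recognition controller: EMG is windowed, simple time-domain features are extracted per channel, and a linear discriminant assigns each window to a motion class. A minimal NumPy-only sketch under stated assumptions (the feature set and class layout here are illustrative, not the authors' implementation, and the RCNN-TL controller is beyond a short example):

```python
import numpy as np

def emg_features(window):
    """Time-domain features from one EMG window of shape (samples, channels):
    mean absolute value (MAV) and waveform length (WL) per channel."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, wl])

class LDAClassifier:
    """Linear discriminant analysis with a pooled covariance estimate."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        centered = np.vstack([X[y == c] - X[y == c].mean(axis=0)
                              for c in self.classes_])
        cov = centered.T @ centered / (len(X) - len(self.classes_))
        cov += 1e-6 * np.eye(cov.shape[0])  # small ridge for numerical stability
        self.icov_ = np.linalg.inv(cov)
        self.priors_ = np.array([(y == c).mean() for c in self.classes_])
        return self

    def predict(self, X):
        # Linear discriminant score per class k:
        #   x^T S^-1 mu_k - 0.5 mu_k^T S^-1 mu_k + log(prior_k)
        lin = X @ self.icov_ @ self.means_.T
        const = (-0.5 * np.sum(self.means_ @ self.icov_ * self.means_, axis=1)
                 + np.log(self.priors_))
        return self.classes_[np.argmax(lin + const, axis=1)]
```

In a limb-position-effect study, a classifier like this would typically be trained on data from one limb position and evaluated in others, which is where the degradation the authors quantify appears.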
Affiliation(s)
- Heather E. Williams: Department of Biomedical Engineering, University of Alberta, Edmonton, AB, Canada; Alberta Machine Intelligence Institute (Amii), Edmonton, AB, Canada
- Ahmed W. Shehata: Department of Biomedical Engineering, University of Alberta, Edmonton, AB, Canada
- Kodi Y. Cheng: Department of Biomedical Engineering, University of Alberta, Edmonton, AB, Canada
- Jacqueline S. Hebert: Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Alberta, Edmonton, AB, Canada
- Patrick M. Pilarski: Alberta Machine Intelligence Institute (Amii), Edmonton, AB, Canada; Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Alberta, Edmonton, AB, Canada
2. Lavoie E, Hebert JS, Chapman CS. Comparing eye-hand coordination between controller-mediated virtual reality, and a real-world object interaction task. J Vis 2024; 24:9. [PMID: 38393742] [PMCID: PMC10905649] [DOI: 10.1167/jov.24.2.9]
Abstract
Virtual reality (VR) technology has advanced significantly in recent years, with many potential applications. However, it is unclear how well VR simulations mimic real-world experiences, particularly in terms of eye-hand coordination. This study compares eye-hand coordination from a previously validated real-world object interaction task to the same task re-created in controller-mediated VR. We recorded eye and body movements and segmented participants' gaze data using the movement data. In the real-world condition, participants wore a head-mounted eye tracker and motion capture markers and moved a pasta box into and out of a set of shelves. In the VR condition, participants wore a VR headset and moved a virtual box using handheld controllers. Unsurprisingly, VR participants took longer to complete the task. Before picking up or dropping off the box, participants in the real world visually fixated the box about half a second before their hand arrived at the area of action. This 500-ms minimum fixation time before the hand arrived was preserved in VR. Real-world participants disengaged their eyes from the box almost immediately after their hand initiated or terminated the interaction, but VR participants stayed fixated on the box for much longer after it was picked up or dropped off. We speculate that the limited haptic feedback during object interactions in VR forces users to maintain visual fixation on objects longer than in the real world, altering eye-hand coordination. These findings suggest that current VR technology does not replicate real-world experience in terms of eye-hand coordination.
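The ~500 ms eye-lead described above can be computed directly from synchronized gaze and hand event streams. A small sketch, assuming pre-segmented event timestamps (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def eye_lead_times(fixation_onsets, hand_arrivals):
    """For each hand-arrival event, the lead time (in seconds) since the
    most recent fixation onset on the target object. Arrivals with no
    preceding fixation are skipped."""
    fixation_onsets = np.asarray(fixation_onsets, dtype=float)
    leads = []
    for t_hand in hand_arrivals:
        prior = fixation_onsets[fixation_onsets <= t_hand]
        if prior.size:
            leads.append(t_hand - prior.max())
    return np.array(leads)
```

Applied to pick-up and drop-off events like those in the study, values clustering around 0.5 s would correspond to the reported minimum fixation time before the hand arrives.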
Affiliation(s)
- Ewen Lavoie: Faculty of Kinesiology, Sport, and Recreation, Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB, Canada
- Jacqueline S Hebert: Division of Physical Medicine and Rehabilitation, Department of Biomedical Engineering, University of Alberta, Edmonton, AB, Canada; Glenrose Rehabilitation Hospital, Alberta Health Services, Edmonton, AB, Canada
- Craig S Chapman: Faculty of Kinesiology, Sport, and Recreation, Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB, Canada
3. Taghlabi KM, Cruz-Garza JG, Hassan T, Potnis O, Bhenderu LS, Guerrero JR, Whitehead RE, Wu Y, Luan L, Xie C, Robinson JT, Faraji AH. Clinical outcomes of peripheral nerve interfaces for rehabilitation in paralysis and amputation: a literature review. J Neural Eng 2024; 21:011001. [PMID: 38237175] [DOI: 10.1088/1741-2552/ad200f]
Abstract
Peripheral nerve interfaces (PNIs) are electrical systems designed to integrate with peripheral nerves, for example following central nervous system (CNS) injury, to augment or replace CNS control and restore function. We review the literature for clinical trials and studies containing clinical outcome measures to explore the utility of human applications of PNIs. We discuss the various types of electrodes currently used in PNI systems, along with their functionalities and limitations. We discuss important design characteristics of PNI systems, including biocompatibility, resolution and specificity, efficacy, and longevity, to highlight their importance in the current and future development of PNIs. The clinical outcomes of PNI systems are also discussed. Finally, we review the PNI clinical trials conducted to date to restore sensory and motor function of the upper or lower limbs in amputees, spinal cord injury patients, or intact individuals, and describe their significant findings. This review highlights current progress in the field of PNIs and serves as a foundation for the future development and application of PNI systems.
Affiliation(s)
- Khaled M Taghlabi: Department of Neurological Surgery, Houston Methodist Hospital, Houston, TX 77030, USA; Center for Neural Systems Restoration, Houston Methodist Research Institute, Houston, TX 77030, USA; Clinical Innovations Laboratory, Houston Methodist Research Institute, Houston, TX 77030, USA
- Jesus G Cruz-Garza: Department of Neurological Surgery, Houston Methodist Hospital, Houston, TX 77030, USA; Center for Neural Systems Restoration, Houston Methodist Research Institute, Houston, TX 77030, USA; Clinical Innovations Laboratory, Houston Methodist Research Institute, Houston, TX 77030, USA
- Taimur Hassan: Department of Neurological Surgery, Houston Methodist Hospital, Houston, TX 77030, USA; Center for Neural Systems Restoration, Houston Methodist Research Institute, Houston, TX 77030, USA; Clinical Innovations Laboratory, Houston Methodist Research Institute, Houston, TX 77030, USA; School of Medicine, Texas A&M University, Bryan, TX 77807, USA
- Ojas Potnis: Department of Neurological Surgery, Houston Methodist Hospital, Houston, TX 77030, USA; Center for Neural Systems Restoration, Houston Methodist Research Institute, Houston, TX 77030, USA; Clinical Innovations Laboratory, Houston Methodist Research Institute, Houston, TX 77030, USA; School of Engineering Medicine, Texas A&M University, Houston, TX 77030, USA
- Lokeshwar S Bhenderu: Department of Neurological Surgery, Houston Methodist Hospital, Houston, TX 77030, USA; Center for Neural Systems Restoration, Houston Methodist Research Institute, Houston, TX 77030, USA; Clinical Innovations Laboratory, Houston Methodist Research Institute, Houston, TX 77030, USA; School of Medicine, Texas A&M University, Bryan, TX 77807, USA
- Jaime R Guerrero: Department of Neurological Surgery, Houston Methodist Hospital, Houston, TX 77030, USA; Center for Neural Systems Restoration, Houston Methodist Research Institute, Houston, TX 77030, USA; Clinical Innovations Laboratory, Houston Methodist Research Institute, Houston, TX 77030, USA
- Rachael E Whitehead: Department of Academic Affairs, Houston Methodist Academic Institute, Houston, TX 77030, USA
- Yu Wu: Rice Neuroengineering Initiative, Rice University, Houston, TX 77005, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Lan Luan: Rice Neuroengineering Initiative, Rice University, Houston, TX 77005, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Chong Xie: Rice Neuroengineering Initiative, Rice University, Houston, TX 77005, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Jacob T Robinson: Rice Neuroengineering Initiative, Rice University, Houston, TX 77005, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Amir H Faraji: Department of Neurological Surgery, Houston Methodist Hospital, Houston, TX 77030, USA; Center for Neural Systems Restoration, Houston Methodist Research Institute, Houston, TX 77030, USA; Clinical Innovations Laboratory, Houston Methodist Research Institute, Houston, TX 77030, USA; Rice Neuroengineering Initiative, Rice University, Houston, TX 77005, USA; Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
4. Stone SA, Boser QA, Dawson TR, Vette AH, Hebert JS, Pilarski PM, Chapman CS. Generating accurate 3D gaze vectors using synchronized eye tracking and motion capture. Behav Res Methods 2024; 56:18-31. [PMID: 36085543] [DOI: 10.3758/s13428-022-01958-6]
Abstract
Assessing gaze behavior during real-world tasks is difficult: dynamic bodies moving through dynamic worlds make gaze data hard to analyze, and current approaches involve laborious manual coding of pupil positions. In settings where motion capture and mobile eye tracking are used concurrently in naturalistic tasks, it is critical that data collection be simple, efficient, and systematic. One solution is to combine eye tracking with motion capture to generate 3D gaze vectors. When combined with tracked or known object locations, 3D gaze vector generation can be automated. Here we use combined eye and motion capture and explore how linear regression models generate accurate 3D gaze vectors. We compared the spatial accuracy of models derived from four short calibration routines across three pupil data inputs, assessing each model on the calibration routines themselves, on a validation task requiring short fixations on task-relevant locations, and on a naturalistic object interaction task that bridges the gap between laboratory and "in the wild" studies. Further, we generated and compared models using spherical and Cartesian coordinate systems and monocular (left or right) or binocular data. All calibration routines performed similarly, with the best performance (i.e., sub-centimeter errors) coming from the naturalistic task trials when the participant is looking at an object in front of them. We found that spherical coordinate systems generate the most accurate gaze vectors, with no differences in accuracy when using monocular or binocular data. Overall, we recommend 1-min calibration routines using binocular pupil data combined with a spherical world coordinate system to produce the highest-quality gaze vectors.
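The regression approach described above can be sketched as an affine least-squares map from 2D pupil position to spherical gaze angles (azimuth, elevation), converted back to a unit gaze vector. This is a simplified illustration under stated assumptions; the function names, design matrix, and angle conventions are not taken from the paper:

```python
import numpy as np

def fit_gaze_model(pupil_xy, gaze_dirs):
    """Fit an affine least-squares map from 2D pupil positions (n, 2) to the
    (azimuth, elevation) angles of known calibration gaze directions (n, 3)."""
    az = np.arctan2(gaze_dirs[:, 0], gaze_dirs[:, 2])
    el = np.arcsin(np.clip(gaze_dirs[:, 1], -1.0, 1.0))
    A = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])  # affine design matrix
    coef, *_ = np.linalg.lstsq(A, np.column_stack([az, el]), rcond=None)
    return coef  # shape (3, 2)

def predict_gaze(coef, pupil_xy):
    """Map pupil positions to unit 3D gaze vectors in the head frame."""
    A = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])
    az, el = (A @ coef).T
    return np.column_stack([np.cos(el) * np.sin(az),
                            np.sin(el),
                            np.cos(el) * np.cos(az)])
```

In a motion-capture setup like the one described, the predicted head-frame vector would then be rotated into the world frame using the tracked head pose before intersecting it with object locations.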
Affiliation(s)
- Scott A Stone: Department of Psychology, University of Alberta, Edmonton, Alberta, Canada; Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
- Quinn A Boser: Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- T Riley Dawson: Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Albert H Vette: Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta, Canada
- Jacqueline S Hebert: Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Patrick M Pilarski: Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Craig S Chapman: Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada; Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, Alberta, Canada
5. Cheng KY, Rehani M, Hebert JS. A scoping review of eye tracking metrics used to assess visuomotor behaviours of upper limb prosthesis users. J Neuroeng Rehabil 2023; 20:49. [PMID: 37095489] [PMCID: PMC10127019] [DOI: 10.1186/s12984-023-01180-1]
Abstract
Advanced upper limb prostheses aim to restore coordinated hand and arm function. However, this objective can be difficult to quantify as coordinated movements require an intact visuomotor system. Eye tracking has recently been applied to study the visuomotor behaviours of upper limb prosthesis users by enabling the calculation of eye movement metrics. This scoping review aims to characterize the visuomotor behaviours of upper limb prosthesis users as described by eye tracking metrics, to summarize the eye tracking metrics used to describe prosthetic behaviour, and to identify gaps in the literature and potential areas for future research. A review of the literature was performed to identify articles that reported eye tracking metrics to evaluate the visual behaviours of individuals using an upper limb prosthesis. Data on the level of amputation, type of prosthetic device, type of eye tracker, primary eye metrics, secondary outcome metrics, experimental task, aims, and key findings were extracted. Seventeen studies were included in this scoping review. A consistently reported finding is that prosthesis users have a characteristic visuomotor behaviour that differs from that of individuals with intact arm function. Visual attention has been reported to be directed more towards the hand and less towards the target during object manipulation tasks. A gaze switching strategy and a delay in disengaging gaze from the current target have also been reported. Differences in the type of prosthetic device and experimental task have revealed some distinct gaze behaviours. Control factors have been shown to be related to gaze behaviour, while sensory feedback and training interventions have been demonstrated to reduce the visual attention associated with prosthesis use. Eye tracking metrics have also been used to assess the cognitive load and sense of agency of prosthesis users.
Overall, there is evidence that eye tracking is an effective tool to quantitatively assess the visuomotor behaviour of prosthesis users and the recorded eye metrics are sensitive to change in response to various factors. Additional studies are needed to validate the eye metrics used to assess cognitive load and sense of agency in upper limb prosthesis users.
Affiliation(s)
- Kodi Y Cheng: Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, College of Health Science, University of Alberta, Edmonton, AB, Canada; Department of Biomedical Engineering, Faculty of Medicine and Dentistry, College of Health Science, University of Alberta, Edmonton, AB, Canada
- Mayank Rehani: Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, College of Health Science, University of Alberta, Edmonton, AB, Canada
- Jacqueline S Hebert: Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, College of Health Science, University of Alberta, Edmonton, AB, Canada; Department of Biomedical Engineering, Faculty of Medicine and Dentistry, College of Health Science, University of Alberta, Edmonton, AB, Canada; Glenrose Rehabilitation Hospital, Alberta Health Services, Edmonton, AB, Canada
6. Tang Z, Liu X, Huo H, Tang M, Qiao X, Chen D, Dong Y, Fan L, Wang J, Du X, Guo J, Tian S, Fan Y. Eye movement characteristics in a mental rotation task presented in virtual reality. Front Neurosci 2023; 17:1143006. [PMID: 37051147] [PMCID: PMC10083294] [DOI: 10.3389/fnins.2023.1143006]
Abstract
Introduction: Eye-tracking technology provides a reliable and cost-effective approach to characterize mental representation according to specific patterns. Mental rotation tasks, referring to the mental representation and transformation of visual information, have been widely used to examine visuospatial ability. In these tasks, participants visually perceive three-dimensional (3D) objects and mentally rotate them until they identify whether the paired objects are identical or mirrored. In most studies, 3D objects are presented using two-dimensional (2D) images on a computer screen. Currently, visual neuroscience tends to investigate visual behavior in response to naturalistic stimuli rather than image stimuli. Virtual reality (VR) is an emerging technology used to provide naturalistic stimuli, allowing the investigation of behavioral features in an immersive environment similar to the real world. However, mental rotation tasks using 3D objects in immersive VR have rarely been reported.
Methods: Here, we designed a VR mental rotation task using 3D stimuli presented in a head-mounted display (HMD). An eye tracker incorporated into the HMD was used to examine eye movement characteristics during the task synchronously. The stimuli were virtual paired objects oriented at specific angular disparities (0, 60, 120, and 180°). We recruited thirty-three participants who were required to determine whether the paired 3D objects were identical or mirrored.
Results: Behavioral results demonstrated that response times were longer when comparing mirrored objects than when comparing identical objects. Eye-movement results showed that the percent fixation time, the number of within-object fixations, and the number of saccades for mirrored objects were significantly lower than those for identical objects, providing further explanation for the behavioral results.
Discussion: In the present work, we examined behavioral and eye movement characteristics during a VR mental rotation task using 3D stimuli. Significant differences were observed in response times and eye movement metrics between identical and mirrored objects. The eye movement data provided further explanation for the behavioral results in the VR mental rotation task.
Affiliation(s)
- Zhili Tang, Hongqiang Huo, Min Tang, Xiaofeng Qiao, Duo Chen, Ying Dong, Linyuan Fan, Jinghui Wang, Xin Du, Jieyi Guo, Shan Tian: Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China
- Xiaoyu Liu (corresponding author), Yubo Fan: Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering and School of Engineering Medicine, Beihang University, Beijing, China; State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
7. Hunt CL, Sun Y, Wang S, Shehata AW, Hebert JS, Gonzalez-Fernandez M, Kaliki RR, Thakor NV. Limb loading enhances skill transfer between augmented and physical reality tasks during limb loss rehabilitation. J Neuroeng Rehabil 2023; 20:16. [PMID: 36707817] [PMCID: PMC9881335] [DOI: 10.1186/s12984-023-01136-5]
Abstract
BACKGROUND Virtual and augmented reality (AR) have become popular modalities for training myoelectric prosthesis control with upper-limb amputees. While some systems have shown moderate success, it is unclear how well the complex motor skills learned in an AR simulation transfer to completing the same tasks in physical reality. Limb loading is a possible dimension of motor skill execution that is absent in current AR solutions and that may help to increase skill transfer between the virtual and physical domains. METHODS We implemented an immersive AR environment where individuals could operate a myoelectric virtual prosthesis to accomplish a variety of object relocation manipulations. Intact limb participants were separated into three groups, the load control (CGLD; [Formula: see text]), the AR control (CGAR; [Formula: see text]), and the experimental group (EG; [Formula: see text]). Both the CGAR and EG completed a 5-session prosthesis training protocol in AR while the CGLD performed simple muscle training. The EG attempted manipulations in AR while undergoing limb loading. The CGAR attempted the same manipulations without loading. All participants performed the same manipulations in physical reality while operating a real prosthesis pre- and post-training. The main outcome measure was the change in the number of manipulations completed during the physical reality assessments (i.e., completion rate). Secondary outcomes included movement kinematics and visuomotor behavior. RESULTS The EG experienced a greater increase in completion rate post-training than both the CGAR and CGLD. This performance increase was accompanied by a shorter motor learning phase, the EG's performance saturating in fewer sessions of AR training than the CGAR. CONCLUSION The results demonstrated that limb loading plays an important role in transferring complex motor skills learned in virtual spaces to their physical reality analogs.
While participants who did not receive limb loading were able to receive some functional benefit from AR training, participants who received the loading experienced a greater positive change in motor performance with their performance saturating in fewer training sessions.
Affiliation(s)
- Christopher L. Hunt: Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, USA
- Yinghe Sun: Department of Electrical and Computer Engineering, Tufts University, Medford, USA
- Shipeng Wang: Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, USA
- Ahmed W. Shehata: Division of Physical Medicine & Rehabilitation, University of Alberta, Edmonton, Canada
- Jacqueline S. Hebert: Division of Physical Medicine & Rehabilitation, University of Alberta, Edmonton, Canada
- Marlis Gonzalez-Fernandez: Department of Physical Medicine and Rehabilitation, The Johns Hopkins University, Baltimore, USA
- Rahul R. Kaliki: Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, USA; Infinite Biomedical Technologies, Baltimore, USA
- Nitish V. Thakor: Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, USA
8. Lukashova-Sanz O, Agarwala R, Wahl S. Context matters during pick-and-place in VR: Impact on search and transport phases. Front Psychol 2022; 13:881269. [PMID: 36160516] [PMCID: PMC9493493] [DOI: 10.3389/fpsyg.2022.881269]
Abstract
For external assistive systems for people with motor impairments, gaze has been shown to be a powerful signal: it anticipates motor actions and is promising for understanding an individual's intentions even before the action. Up until now, the vast majority of studies investigating coordinated eye and hand movement in grasping tasks have focused on the manipulation of single objects without placing them in a meaningful scene. Very little is known about the impact of scene context on how we manipulate objects in an interactive task. In the present study, we investigated how scene context affects human object manipulation in a pick-and-place task in a realistic scenario implemented in VR. During the experiment, participants were instructed to find the target object in a room, pick it up, and transport it to a predefined final location. The impact of the scene context on different stages of the task was then examined using head and hand movement, as well as eye tracking. As the main result, the scene context had a significant effect on the search and transport phases, but not on the reach phase of the task. The present work provides insights for the development of potential intention-predicting support systems, revealing the dynamics of pick-and-place task behavior once it is realized in a realistic, context-rich scenario.
Affiliation(s)
- Olga Lukashova-Sanz
- Zeiss Vision Science Lab, Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Carl Zeiss Vision International Gesellschaft mit beschränkter Haftung (GmbH), Aalen, Germany
- *Correspondence: Olga Lukashova-Sanz
- Rajat Agarwala
- Zeiss Vision Science Lab, Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Siegfried Wahl
- Zeiss Vision Science Lab, Institute for Ophthalmic Research, University of Tübingen, Tübingen, Germany
- Carl Zeiss Vision International Gesellschaft mit beschränkter Haftung (GmbH), Aalen, Germany
9
Development of Low-Fidelity Virtual Replicas of Products for Usability Testing. Appl Sci (Basel) 2022. [DOI: 10.3390/app12146937]
Abstract
Designers perform early-stage formative usability tests with low-fidelity prototypes to improve the design of new products. This low-tech prototype style reduces the manufacturing resources required but limits the functions that can be assessed. Recent advances in technology enable designers to create low-fidelity 3D models for users to engage with in a virtual environment. Three-dimensional models communicate design concepts but are not often used in formative usability testing. The proposed method discusses how to create a virtual replica of a product by assessing key human interaction steps, and addresses the limitations of translating those steps into a virtual environment. In addition, the paper provides a framework to evaluate the usability of a product in a virtual setting, with a specific emphasis on low-resource online testing in the user population. A study was performed to pilot subjects' experience with the proposed approach and determine how the virtual online simulation impacted performance. The study outcomes demonstrated that subjects were able to successfully interact with the virtual replica and found the simulation realistic. This method can be followed to perform formative usability tests earlier and incorporate subject feedback into future design iterations, which can improve safety and product efficacy.
10
Cheng KY, Chapman CS, Hebert JS. Spatiotemporal Coupling of Hand and Eye Movements When Using a Myoelectric Prosthetic Hand. IEEE Int Conf Rehabil Robot 2022; 2022:1-6. [PMID: 36176081] [DOI: 10.1109/icorr55369.2022.9896491]
Abstract
Upper limb prosthesis users have disruptions in hand-eye coordination, with increased fixations towards the hand and less visual allocation for feedforward planning. The purpose of this study was to explore whether improved motor planning, as reflected by eye gaze behaviour, was associated with more efficient hand movement patterns. Able-bodied participants wore a simulated prosthesis while performing a functional object movement task. Motion and eye tracking data were collected to quantify eye gaze and hand movement during object interaction. The results of this study demonstrated that the latency of the eye to precede the hand at pick-up was correlated with measures of hand function, including hand variability, movement units, and grasp time, but not reach time. During transport and release, a longer latency to disengage gaze from the grasped object and look ahead towards the target was correlated with the hand kinematics of hand variability, distance travelled, and transport time. In addition, the latency of the eye to disengage from the drop-off location was correlated with release time. Together, these findings may point to control issues with opening and closing the prosthetic hand. Overall, increased feedforward fixations towards the target and reduced feedback fixations towards the hand were related to improved measures of hand function. Hence, coordination between eye and hand movements when using a myoelectric prosthesis may prove to be a useful metric for assessing motor planning.
11
Koskinen J, Torkamani-Azar M, Hussein A, Huotarinen A, Bednarik R. Automated tool detection with deep learning for monitoring kinematics and eye-hand coordination in microsurgery. Comput Biol Med 2021; 141:105121. [PMID: 34968859] [DOI: 10.1016/j.compbiomed.2021.105121]
Abstract
In microsurgical procedures, surgeons use micro-instruments under high magnification to handle delicate tissues. These procedures require highly skilled attentional and motor control for planning and implementing eye-hand coordination strategies. Eye-hand coordination in surgery has mostly been studied in open, laparoscopic, and robot-assisted surgeries, as there are no available tools to perform automatic tool detection in microsurgery. We introduce and investigate a method for simultaneous detection and processing of micro-instruments and gaze during microsurgery. We train and evaluate a convolutional neural network for detecting 17 microsurgical tools with a dataset of 7500 frames from 20 videos of simulated and real surgical procedures. Model evaluations yield a mean average precision at the 0.5 threshold of 89.5-91.4% for validation and 69.7-73.2% for testing over partially unseen surgical settings, with an average inference rate of 39.90 ± 1.2 frames per second. While prior research has mostly evaluated surgical tool detection on homogeneous datasets with a limited number of tools, we demonstrate the feasibility of transfer learning and conclude that detectors that generalize reliably to new settings require data from several different surgical procedures. In a case study, we apply the detector with a microscope eye tracker to investigate tool use and eye-hand coordination during an intracranial vessel dissection task. The results show that tool kinematics differentiate microsurgical actions. The gaze-to-microscissors distances are also smaller during dissection than during other actions, when the surgeon has more space to maneuver. The presented detection pipeline provides the clinical and research communities with a valuable resource for automatic content extraction and objective skill assessment in various microsurgical environments.
Affiliation(s)
- Jani Koskinen
- School of Computing, University of Eastern Finland, Länsikatu 15, Joensuu, 80100, Pohjois-Karjala, Finland.
- Mastaneh Torkamani-Azar
- School of Computing, University of Eastern Finland, Länsikatu 15, Joensuu, 80100, Pohjois-Karjala, Finland
- Ahmed Hussein
- Microsurgery Center, Kuopio University Hospital, Kuopio, 70211, Pohjois-Savo, Finland; Department of Neurosurgery, Faculty of Medicine, Assiut University, Assiut, 71111, Egypt
- Antti Huotarinen
- Microsurgery Center, Kuopio University Hospital, Kuopio, 70211, Pohjois-Savo, Finland; Department of Neurosurgery, Institute of Clinical Medicine, Kuopio University Hospital, Kuopio, 70211, Pohjois-Savo, Finland
- Roman Bednarik
- School of Computing, University of Eastern Finland, Länsikatu 15, Joensuu, 80100, Pohjois-Karjala, Finland
12
Marasco PD, Hebert JS, Sensinger JW, Beckler DT, Thumser ZC, Shehata AW, Williams HE, Wilson KR. Neurorobotic fusion of prosthetic touch, kinesthesia, and movement in bionic upper limbs promotes intrinsic brain behaviors. Sci Robot 2021; 6:eabf3368. [PMID: 34516746] [DOI: 10.1126/scirobotics.abf3368]
Abstract
[Figure: see text].
Affiliation(s)
- Paul D Marasco
- Laboratory for Bionic Integration, Department of Biomedical Engineering, Lerner Research Institute, Cleveland Clinic, 9500 Euclid Avenue, ND20, Cleveland, OH 44195, USA; Advanced Platform Technology Center, Louis Stokes Cleveland Department of Veterans Affairs Medical Center, 10701 East Boulevard 151 W/APT, Cleveland, OH 44106, USA
- Jacqueline S Hebert
- Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Alberta, Edmonton, Alberta T6G 2E1, Canada; Glenrose Rehabilitation Hospital, Alberta Health Services, 10230-111 Avenue, Edmonton, Alberta T5G 0B7, Canada
- Jonathon W Sensinger
- Institute of Biomedical Engineering, University of New Brunswick, 25 Dineen Drive, Fredericton, New Brunswick E3B 5A3, Canada
- Dylan T Beckler
- Laboratory for Bionic Integration, Department of Biomedical Engineering, Lerner Research Institute, Cleveland Clinic, 9500 Euclid Avenue, ND20, Cleveland, OH 44195, USA
- Zachary C Thumser
- Laboratory for Bionic Integration, Department of Biomedical Engineering, Lerner Research Institute, Cleveland Clinic, 9500 Euclid Avenue, ND20, Cleveland, OH 44195, USA; Research Service, Louis Stokes Cleveland Department of Veterans Affairs Medical Center, 10701 East Boulevard, Research 151, Cleveland, OH 44106, USA
- Ahmed W Shehata
- Division of Physical Medicine and Rehabilitation, Department of Medicine, University of Alberta, Edmonton, Alberta T6G 2E1, Canada
- Heather E Williams
- Department of Biomedical Engineering, University of Alberta, Edmonton, Alberta T6G 2E1, Canada
- Kathleen R Wilson
- Institute of Biomedical Engineering, University of New Brunswick, 25 Dineen Drive, Fredericton, New Brunswick E3B 5A3, Canada
13
Dötsch D, Kurz J, Helm F, Hegele M, Munzert J, Schubö A. End in view: Joint end-state comfort depends on gaze and extraversion. Hum Mov Sci 2021; 80:102867. [PMID: 34492422] [DOI: 10.1016/j.humov.2021.102867]
Abstract
This study investigated how humans adapt to a partner's movement in a joint pick-and-place task and examined the role of gaze behavior and personality traits in adapting to a partner. Two participants sitting side-by-side transported a cup from one end of a table to the other. The participant sitting on the left (the agent) moved the cup to an intermediate position from where the participant sitting on the right (the partner) transported it to a goal position with varying orientations. Hand, finger, and cup movements as well as gaze behavior were recorded synchronously via motion tracking and portable eye tracking devices. Results showed interindividual differences in the extent of the agents' motor adaptation to the joint action goal, which were accompanied by differences in gaze patterns. The longer agents directed their gaze to a cue indicating the goal orientation, the more they adapted the rotation of the cup's handle when placing it at the intermediate position. Personality trait assessment showed that higher extraverted tendencies to strive for social potency went along with more adaptation to the joint goal. These results indicate that agents who consider their partner's end-state comfort use their gaze to gather more information about the joint action goal than agents who do not. Moreover, the disposition to enjoy leadership and make decisions in interpersonal situations seems to play a role in determining who adapts to a partner's task in joint action.
Affiliation(s)
- Dominik Dötsch
- Cognitive Neuroscience of Perception and Action, Faculty of Psychology, Philipps University Marburg, Marburg, Germany.
- Johannes Kurz
- Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus-Liebig-University Giessen, Giessen, Germany
- Fabian Helm
- Department of Psychology and Sports Sciences, Goethe-University Frankfurt/Main, Germany
- Mathias Hegele
- Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus-Liebig-University Giessen, Giessen, Germany
- Jörn Munzert
- Neuromotor Behavior Laboratory, Department of Psychology and Sport Science, Justus-Liebig-University Giessen, Giessen, Germany
- Anna Schubö
- Cognitive Neuroscience of Perception and Action, Faculty of Psychology, Philipps University Marburg, Marburg, Germany
14
Bäckström A, Johansson AM, Rudolfsson T, Rönnqvist L, von Hofsten C, Rosander K, Domellöf E. Motor planning and movement execution during goal-directed sequential manual movements in 6-year-old children with autism spectrum disorder: A kinematic analysis. Res Dev Disabil 2021; 115:104014. [PMID: 34174471] [DOI: 10.1016/j.ridd.2021.104014]
Abstract
BACKGROUND: Atypical motor functioning is prevalent in children with autism spectrum disorder (ASD). Knowledge of the underlying kinematic properties of these problems is sparse.
AIMS: To investigate characteristics of manual motor planning and performance difficulties/diversity in children with ASD by detailed kinematic measurements. Further, associations between movement parameters and cognitive functions were explored.
METHODS AND PROCEDURES: Six-year-old children with ASD (N = 12) and typically developing (TD) peers (N = 12) performed a sequential manual task comprising grasping and fitting a semi-circular peg into a goal-slot. The goal-slot orientation was manipulated to impose different motor planning constraints. Movements were recorded by an optoelectronic system.
OUTCOMES AND RESULTS: The ASD group displayed less efficient motor planning than the TD group, evident in the reach-to-grasp and transport kinematics and in less proactive adjustment of the peg to the goal-slot orientations. The intra-individual variation of movement kinematics was higher in the ASD group than in the TD group. Further, in the ASD group, movement performance was negatively associated with cognitive functions.
CONCLUSIONS AND IMPLICATIONS: Planning and execution of sequential manual movements proved challenging for children with ASD, likely contributing to problems in everyday actions. Detailed kinematic investigations contribute to the generation of specific knowledge about the nature of atypical motor performance/diversity in ASD. This is of potential clinical relevance.
Affiliation(s)
- Anna Bäckström
- Department of Psychology, Umeå University, Umeå, Sweden.
- Thomas Rudolfsson
- Department of Psychology, Umeå University, Umeå, Sweden; Centre for Musculoskeletal Research, Department of Occupational Health Science and Psychology, University of Gävle, Gävle, Sweden
- Erik Domellöf
- Department of Psychology, Umeå University, Umeå, Sweden
15
Krishna R, Pathirana PN, Horne MK, Corben LA, Szmulewicz DJ. Quantitative Assessment of Friedreich Ataxia via Self-Drinking Activity. IEEE J Biomed Health Inform 2021; 25:1985-1996. [PMID: 33764881] [DOI: 10.1109/jbhi.2021.3069007]
Abstract
Effective monitoring of the progression of neurodegenerative conditions can be significantly improved by objective assessments. Clinical assessments of conditions such as Friedreich's Ataxia (FA) currently rely on subjective measures commonly practiced in clinics, as well as on the ability of the affected individual to perform conventional tests of the neurological examination. In this study, we propose an ataxia-measuring device in the form of a pressure canister capable of sensing kinetic and kinematic parameters of interest, in order to quantify participants' impairment levels when engaged in an activity closely associated with daily living. In particular, the functional task of simulated drinking was utilised to capture characteristic features of disability manifestation, in terms of both diagnosis (separation of individuals with FA from controls) and severity assessment of individuals diagnosed with this debilitating condition. Time- and frequency-domain analysis of these biomarkers enabled classification of individuals with FA and control subjects with an accuracy of 98%, and a correlation with clinical scores reaching 96%.
16
Arthur T, Harris DJ, Allen K, Naylor CE, Wood G, Vine S, Wilson MR, Tsaneva-Atanasova K, Buckingham G. Visuo-motor attention during object interaction in children with developmental coordination disorder. Cortex 2021; 138:318-328. [PMID: 33780720] [PMCID: PMC8064026] [DOI: 10.1016/j.cortex.2021.02.013]
Abstract
Developmental coordination disorder (DCD) describes a condition of poor motor performance in the absence of intellectual impairment. Despite being one of the most prevalent developmental disorders, little is known about how fundamental visuomotor processes might function in this group. One prevalent idea is that children with DCD interact with their environment in a less predictive fashion than typically developing children. A metric of prediction that has not been examined in this group is the degree to which the hands and eyes are coordinated when performing manual tasks. To this end, we examined hand and eye movements during an object lifting task in a group of children with DCD (n = 19) and an age-matched group of children without DCD (n = 39). We observed no differences between the groups in how well they coordinated their hands and eyes when lifting objects, nor in the degree by which the eye led the hand. We thus find no evidence to support the proposition that children with DCD coordinate their hands and eyes in a non-predictive fashion. In a follow-up exploratory analysis we did, however, note differences in fundamental patterns of eye movements between the groups, with children in the DCD group showing some evidence of atypical visual sampling strategies and gaze anchoring behaviours during the task.
Affiliation(s)
- Tom Arthur
- Department of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, UK
- David J Harris
- Department of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, UK
- Kate Allen
- Department of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, UK; Department of Health and Social Care, College of Medicine and Health, University of Exeter, UK
- Greg Wood
- Department of Sport and Exercise Sciences, Research Centre for Musculoskeletal Science and Sports Medicine, Manchester Metropolitan University, UK
- Sam Vine
- Department of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, UK
- Mark R Wilson
- Department of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, UK
- Krasimira Tsaneva-Atanasova
- Department of Mathematics, College of Engineering, Mathematics, and Physical Sciences, University of Exeter, UK; Translational Research Exchange @ Exeter, University of Exeter, UK
- Gavin Buckingham
- Department of Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, UK; Translational Research Exchange @ Exeter, University of Exeter, UK.
17
Alyaman M, Sobuh M, Zaid AA, Kenney L, Galpin AJ, Al-Taee MA. Towards automation of dynamic-gaze video analysis taking functional upper-limb tasks as a case study. Comput Methods Programs Biomed 2021; 203:106041. [PMID: 33756186] [DOI: 10.1016/j.cmpb.2021.106041]
Abstract
BACKGROUND AND OBJECTIVE: Previous studies in motor control have yielded clear evidence that gaze behavior (where someone looks) quantifies the attention paid to performing actions. However, eliciting clinically meaningful results from gaze data has so far been done manually, rendering the process tedious, time-consuming, and highly subjective. This paper studies the feasibility of automating the coding of gaze data, taking functional upper-limb tasks as a case study.
METHODS: This is achieved by developing a new algorithm capable of coding the collected gaze data through three main stages: data preparation, data processing, and output generation. The input data, in the form of a crosshair and a gaze video, are converted into a 25 Hz frame-rate sequence. Key frames and non-key frames are then obtained and processed using a combination of image processing techniques and a fuzzy logic controller. In each trial, the location and duration of gaze fixation at the areas of interest (AOIs) are obtained. Once the gaze data are coded, they can be presented in different forms and formats, including stacked color bars.
RESULTS: The developed coding algorithm agreed highly with the manual coding method but was significantly faster and less prone to unsystematic errors. Statistical analysis showed that Cohen's kappa ranges from 0.705 to 1.0. Moreover, based on the intra-class correlation coefficient (ICC), the agreement index between the computerized and manual coding methods was found to be (i) 0.908 with 95% confidence interval (0.867, 0.937) for the anatomical hand and (ii) 0.923 with 95% confidence interval (0.888, 0.948) for the prosthetic hand. A Bland-Altman plot also showed that all data points were closely scattered around the mean. These findings confirm the validity and effectiveness of the developed coding algorithm.
CONCLUSION: The developed algorithm demonstrates that it is feasible to automate the coding of gaze data, reduce coding time, and improve the reliability of the coding process.
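As an illustration of the agreement statistic reported in this entry, the following is a minimal sketch of Cohen's kappa for two coders' frame-by-frame AOI labels. The label values and data are hypothetical examples, not taken from the cited paper.

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length sequences of categorical labels."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of frames where both coders chose the same AOI.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: sum over labels of the product of each coder's marginals.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical AOI codes per video frame for two coders.
a = ["hand", "hand", "target", "target", "away", "away", "hand", "target"]
b = ["hand", "hand", "target", "target", "away", "hand", "hand", "target"]
print(round(cohen_kappa(a, b), 3))  # → 0.805
```

Kappa corrects raw percent agreement (here 7/8 = 0.875) for the agreement expected by chance, which is why the paper reports it alongside the ICC.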
Affiliation(s)
- Musa Alyaman
- Mechatronics Engineering Department, School of Engineering, The University of Jordan, Amman, 11942, Jordan.
- Mohammad Sobuh
- Department of Orthotics & Prosthetics, School of Rehabilitation Sciences, The University of Jordan, Amman, 11942, Jordan
- Alaa Abu Zaid
- Mechatronics Engineering Department, School of Engineering, The University of Jordan, Amman, 11942, Jordan
- Laurence Kenney
- School of Health and Society, University of Salford, Manchester M5 4WT, UK
- Adam J Galpin
- School of Health and Society, University of Salford, Manchester M5 4WT, UK
- Majid A Al-Taee
- School of Electrical Engineering, Electronics and Computer Science, University of Liverpool, Liverpool L69 3BX, UK
18
Williams HE, Chapman CS, Pilarski PM, Vette AH, Hebert JS. Myoelectric prosthesis users and non-disabled individuals wearing a simulated prosthesis exhibit similar compensatory movement strategies. J Neuroeng Rehabil 2021; 18:72. [PMID: 33933105] [PMCID: PMC8088043] [DOI: 10.1186/s12984-021-00855-x]
Abstract
Background: Research studies on upper limb prosthesis function often rely on the use of simulated myoelectric prostheses (attached to and operated by individuals with intact limbs), primarily to increase participant sample size. However, it is not known if these devices elicit the same movement strategies as myoelectric prostheses (operated by individuals with amputation). The objective of this study was to address the question of whether non-disabled individuals using simulated prostheses employ the same compensatory movements (measured by hand and upper body kinematics) as individuals who use actual myoelectric prostheses.
Methods: The upper limb movements of two participant groups were investigated: (1) twelve non-disabled individuals wearing a simulated prosthesis, and (2) three individuals with transradial amputation using their custom-fitted myoelectric devices. Motion capture was used for data collection while participants performed a standardized functional task. Performance metrics, hand movements, and upper body angular kinematics were calculated. For each participant group, these measures were compared to those from a normative baseline dataset. Each deviation from normative movement behaviour, by either participant group, indicated that compensatory movements were used during task performance.
Results: Results show that participants using either a simulated or actual myoelectric prosthesis exhibited similar deviations from normative behaviour in phase durations, hand velocities, hand trajectories, number of movement units, grip aperture plateaus, and trunk and shoulder ranges of motion.
Conclusions: This study suggests that the use of a simulated prosthetic device in upper limb research offers a reasonable approximation of compensatory movements employed by a low- to moderately-skilled transradial myoelectric prosthesis user.
Affiliation(s)
- Heather E Williams
- Department of Biomedical Engineering, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada.
- Craig S Chapman
- Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, AB, Canada
- Patrick M Pilarski
- Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Albert H Vette
- Department of Biomedical Engineering, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada; Department of Mechanical Engineering, Faculty of Engineering, University of Alberta, Edmonton, AB, Canada; Glenrose Rehabilitation Hospital, Alberta Health Services, Edmonton, AB, Canada
- Jacqueline S Hebert
- Department of Biomedical Engineering, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada; Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada; Glenrose Rehabilitation Hospital, Alberta Health Services, Edmonton, AB, Canada
19
Lavoie E, Chapman CS. What's limbs got to do with it? Real-world movement correlates with feelings of ownership over virtual arms during object interactions in virtual reality. Neurosci Conscious 2021. [DOI: 10.1093/nc/niaa027]
Abstract
Humans will initially move awkwardly so that the end-state of their movement is comfortable. But what is comfortable? We might assume it refers to a particular physical body posture; however, humans have been shown to move a computer cursor on a screen with an out-of-sight hand less efficiently (curved) such that the visual representation appears more efficient (straight). This suggests that movement plans are made in large part to satisfy the demands of their visual appearance, rather than their physical movement properties. So, what determines whether a body movement is comfortable: how it feels or how it looks? We translated an object-interaction task from the real world into immersive virtual reality (IVR) to dissociate a movement from its visual appearance. Participants completed at least 20 trials in two conditions: Controllers, in which participants saw a visual representation of the hand-held controllers, and Arms, in which they saw a set of virtual limbs. We found that participants seeing virtual limbs moved in a less biomechanically efficient manner in order to make the limbs look as they would when interacting with a real-world object. These movement changes correlated with an increase in self-reported feelings of ownership over the limbs as compared to the controllers. Overall, this suggests we plan our movements to provide optimal visual feedback, even at the cost of being less efficient. Moreover, we speculate that a detailed measurement of how people move in IVR may provide a new tool for assessing their degree of embodiment. There is something about seeing a set of limbs in front of you, doing your actions, that affects your moving and, in essence, your thinking.
Affiliation(s)
- Ewen Lavoie
- Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, AB, Canada
- Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB, Canada
- Craig S Chapman
- Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, AB, Canada
- Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB, Canada
20
de Brouwer AJ, Flanagan JR, Spering M. Functional Use of Eye Movements for an Acting System. Trends Cogn Sci 2021; 25:252-263. [PMID: 33436307] [DOI: 10.1016/j.tics.2020.12.006]
Abstract
Movements of the eyes assist vision and support hand and body movements in a cooperative way. Despite their strong functional coupling, different types of movements are usually studied independently. We integrate knowledge from behavioral, neurophysiological, and clinical studies on how eye movements are coordinated with goal-directed hand movements and how they facilitate motor learning. Understanding the coordinated control of eye and hand movements can provide important insights into brain functions that are essential for performing or learning daily tasks in health and disease. This knowledge can also inform applications such as robotic manipulation and clinical rehabilitation.
Affiliation(s)
- Anouk J de Brouwer
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada.
- J Randall Flanagan
- Centre for Neuroscience Studies, Queen's University, Kingston, Canada; Department of Psychology, Queen's University, Kingston, Canada
- Miriam Spering
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, Canada; Djavad Mowafaghian Centre for Brain Health, University of British Columbia, Vancouver, Canada
21
Sensinger JW, Dosen S. A Review of Sensory Feedback in Upper-Limb Prostheses From the Perspective of Human Motor Control. Front Neurosci 2020; 14:345. [PMID: 32655344] [PMCID: PMC7324654] [DOI: 10.3389/fnins.2020.00345]
Abstract
This manuscript reviews historical and recent studies that focus on supplementary sensory feedback for use in upper limb prostheses. It shows that the inability of many studies to speak to the issue of meaningful performance improvements in real-life scenarios is caused by the complexity of the interactions of supplementary sensory feedback with other types of feedback, along with other portions of the motor control process. To do this, the manuscript frames the question of supplementary feedback from the perspective of computational motor control, providing a brief review of the main advances in that field over the last 20 years. It then separates studies on closed-loop prosthesis control into distinct categories, defined by relating the impact of feedback to the relevant components of the motor control framework, and reviews the work that has been done over the last 50+ years in each of those categories. It ends with a discussion of the studies, along with suggestions for experimental construction and connections with other areas of research, such as machine learning.
Affiliation(s)
- Jonathon W. Sensinger
- Institute of Biomedical Engineering, University of New Brunswick, Fredericton, NB, Canada
- Strahinja Dosen
- Department of Health Science and Technology, The Faculty of Medicine, Integrative Neuroscience, Aalborg University, Aalborg, Denmark
22
Foulsham T. Beyond the picture frame: The function of fixations in interactive tasks. Psychology of Learning and Motivation 2020. [DOI: 10.1016/bs.plm.2020.06.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/23/2022]
23
Williams HE, Chapman CS, Pilarski PM, Vette AH, Hebert JS. Gaze and Movement Assessment (GaMA): Inter-site validation of a visuomotor upper limb functional protocol. PLoS One 2019; 14:e0219333. [PMID: 31887218 PMCID: PMC6936776 DOI: 10.1371/journal.pone.0219333] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Received: 06/15/2019] [Accepted: 12/10/2019] [Indexed: 11/18/2022]
Abstract
Background Successful hand-object interactions require precise hand-eye coordination with continual movement adjustments. Quantitative measurement of this visuomotor behaviour could provide valuable insight into upper limb impairments. The Gaze and Movement Assessment (GaMA) was developed to provide protocols for simultaneous motion capture and eye tracking during the administration of two functional tasks, along with data analysis methods to generate standard measures of visuomotor behaviour. The objective of this study was to investigate the reproducibility of the GaMA protocol across two independent groups of non-disabled participants, with different raters using different motion capture and eye tracking technology. Methods Twenty non-disabled adults performed the Pasta Box Task and the Cup Transfer Task. Upper body and eye movements were recorded using motion capture and eye tracking, respectively. Measures of hand movement, angular joint kinematics, and eye gaze were compared to those from a different sample of twenty non-disabled adults who had previously performed the same protocol with different technology, rater and site. Results Participants took longer to perform the tasks than participants in the earlier study, although the relative time of each movement phase was similar. Measures that were dissimilar between the groups included hand distances travelled, hand trajectories, number of movement units, eye latencies, and peak angular velocities. Similarities included all hand velocity and grip aperture measures, eye fixations, and most peak joint angle and range of motion measures. Discussion The reproducibility of GaMA was confirmed by this study, despite a few differences introduced by learning effects, task demonstration variation, and limitations of the kinematic model.
GaMA accurately quantifies the typical behaviours of a non-disabled population, producing precise quantitative measures of hand function, trunk and angular joint kinematics, and associated visuomotor behaviour. This work advances the consideration for use of GaMA in populations with upper limb sensorimotor impairment.
Affiliation(s)
- Heather E. Williams
- Department of Mechanical Engineering, Faculty of Engineering, University of Alberta, Edmonton, Alberta, Canada
- Craig S. Chapman
- Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, Alberta, Canada
- Patrick M. Pilarski
- Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Albert H. Vette
- Department of Mechanical Engineering, Faculty of Engineering, University of Alberta, Edmonton, Alberta, Canada
- Department of Biomedical Engineering, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Glenrose Rehabilitation Hospital, Alberta Health Services, Edmonton, Alberta, Canada
- Jacqueline S. Hebert
- Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Department of Biomedical Engineering, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Glenrose Rehabilitation Hospital, Alberta Health Services, Edmonton, Alberta, Canada
24
Naufel S, Knaack GL, Miranda R, Best TK, Fitzpatrick K, Emondi AA, Van Gieson E, McClure-Begley T. DARPA investment in peripheral nerve interfaces for prosthetics, prescriptions, and plasticity. J Neurosci Methods 2019; 332:108539. [PMID: 31805301 DOI: 10.1016/j.jneumeth.2019.108539] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Received: 05/01/2019] [Revised: 11/28/2019] [Accepted: 12/01/2019] [Indexed: 01/02/2023]
Abstract
BACKGROUND Peripheral nerve interfaces have emerged as alternative solutions for a variety of therapeutic and performance improvement applications. The Defense Advanced Research Projects Agency (DARPA) has widely invested in these interfaces to provide motor control and sensory feedback to prosthetic limbs, identify non-pharmacological interventions to treat disease, and facilitate neuromodulation to accelerate learning or improve performance on cognitive, sensory, or motor tasks. In this commentary, we highlight some of the design considerations for optimizing peripheral nerve interfaces depending on the application space. We also discuss the ethical considerations that accompany these advances.
Affiliation(s)
- Gretchen L Knaack
- Quantitative Scientific Solutions, 4601 Fairfax Dr #1200, Arlington, VA 22203, USA
- Robbin Miranda
- Infinimetrics Corporation, 12020 Sunrise Valley Dr., Suite 100, Reston, VA 20191, USA
- Tyler K Best
- Booz Allen Hamilton, Inc., 3811 Fairfax Dr. Ste. 600, Arlington, VA 22203, USA
- Karrie Fitzpatrick
- Strategic Analysis Inc., 4075 Wilson Boulevard, Suite 200, Arlington, VA 22203, USA
- Al A Emondi
- Defense Advanced Research Projects Agency, Biological Technologies Office, 675 N Randolph St., Arlington, VA 22203, USA
- Eric Van Gieson
- Defense Advanced Research Projects Agency, Biological Technologies Office, 675 N Randolph St., Arlington, VA 22203, USA
- Tristan McClure-Begley
- Defense Advanced Research Projects Agency, Biological Technologies Office, 675 N Randolph St., Arlington, VA 22203, USA
25
Gregori V, Cognolato M, Saetta G, Atzori M, Gijsberts A. On the Visuomotor Behavior of Amputees and Able-Bodied People During Grasping. Front Bioeng Biotechnol 2019; 7:316. [PMID: 31799243 PMCID: PMC6874164 DOI: 10.3389/fbioe.2019.00316] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Received: 08/22/2019] [Accepted: 10/24/2019] [Indexed: 11/15/2022]
Abstract
Visual attention is often predictive for future actions in humans. In manipulation tasks, the eyes tend to fixate an object of interest even before the reach-to-grasp is initiated. Some recent studies have proposed to exploit this anticipatory gaze behavior to improve the control of dexterous upper limb prostheses. This requires a detailed understanding of visuomotor coordination to determine in which temporal window gaze may provide helpful information. In this paper, we verify and quantify the gaze and motor behavior of 14 transradial amputees who were asked to grasp and manipulate common household objects with their missing limb. For comparison, we also include data from 30 able-bodied subjects who executed the same protocol with their right arm. The dataset contains gaze, first person video, angular velocities of the head, and electromyography and accelerometry of the forearm. To analyze the large amount of video, we developed a procedure based on recent deep learning methods to automatically detect and segment all objects of interest. This allowed us to accurately determine the pixel distances between the gaze point, the target object, and the limb in each individual frame. Our analysis shows a clear coordination between the eyes and the limb in the reach-to-grasp phase, confirming that both intact and amputated subjects precede the grasp with their eyes by more than 500 ms. Furthermore, we note that the gaze behavior of amputees was remarkably similar to that of the able-bodied control group, despite their inability to physically manipulate the objects.
Affiliation(s)
- Valentina Gregori
- Department of Computer, Control, and Management Engineering, University of Rome La Sapienza, Rome, Italy; VANDAL Laboratory, Istituto Italiano di Tecnologia, Genoa, Italy
- Matteo Cognolato
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland; Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland
- Gianluca Saetta
- Department of Neurology, University Hospital of Zurich, Zurich, Switzerland
- Manfredo Atzori
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland
- Arjan Gijsberts
- VANDAL Laboratory, Istituto Italiano di Tecnologia, Genoa, Italy
26
Hebert JS, Boser QA, Valevicius AM, Tanikawa H, Lavoie EB, Vette AH, Pilarski PM, Chapman CS. Quantitative Eye Gaze and Movement Differences in Visuomotor Adaptations to Varying Task Demands Among Upper-Extremity Prosthesis Users. JAMA Netw Open 2019; 2:e1911197. [PMID: 31517965 PMCID: PMC6745056 DOI: 10.1001/jamanetworkopen.2019.11197] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Indexed: 01/10/2023]
Abstract
IMPORTANCE New treatments for upper-limb amputation aim to improve movement quality and reduce visual attention to the prosthesis. However, evaluation is limited by a lack of understanding of the essential features of human-prosthesis behavior and by an absence of consistent task protocols. OBJECTIVE To evaluate whether task selection is a factor in visuomotor adaptations by prosthesis users to accomplish 2 tasks easily performed by individuals with normal arm function. DESIGN, SETTING, AND PARTICIPANTS This cross-sectional study was conducted in a single research center at the University of Alberta, Edmonton, Alberta, Canada. Upper-extremity prosthesis users were recruited from January 1, 2016, through December 31, 2016, and individuals with normal arm function were recruited from October 1, 2015, through November 30, 2015. Eight prosthesis users and 16 participants with normal arm function were asked to perform 2 goal-directed tasks with synchronized motion capture and eye tracking. Data analysis was performed from December 3, 2018, to April 15, 2019. MAIN OUTCOMES AND MEASURES Movement time, eye fixation, and range of motion of the upper body during 2 object transfer tasks (cup and box) were the main outcomes. RESULTS A convenience sample comprised 8 male prosthesis users with acquired amputation (mean [range] age, 45 [30-64] years), along with 16 participants with normal arm function (8 [50%] of whom were men; mean [range] age, 26 [18-43] years; mean [range] height, 172.3 [158.0-186.0] cm; all right handed). Prosthesis users spent a disproportionately prolonged mean (SD) time in grasp and release phases when handling the cups (grasp: 2.0 [2.3] seconds vs 0.9 [0.8] seconds; P < .001; release: 1.1 [0.6] seconds vs 0.7 [0.4] seconds; P < .001). Prosthesis users also had increased mean (SD) visual fixations on the hand for the cup compared with the box task during reach (10.2% [12.1%] vs 2.2% [2.8%]) and transport (37.1% [9.7%] vs 22.3% [7.6%]).
Fixations on the hand for both tasks were significantly greater for prosthesis users compared with normative values. Prosthesis users had significantly more trunk flexion and extension for the box task compared with the cup task (mean [SD] trunk range of motion, 32.1 [10.7] degrees vs 21.2 [3.7] degrees; P = .01), with all trunk motions greater than normative values. The box task required greater shoulder movements compared with the cup task for prosthesis users (mean [SD] flexion and extension: 51.3 [12.6] degrees vs 41.0 [9.4] degrees, P = .01; abduction and adduction: 40.5 [7.2] degrees vs 32.3 [5.1] degrees, P = .02; rotation: 50.6 [15.7] degrees vs 35.5 [10.0] degrees, P = .02). However, other than shoulder abduction and adduction for the box task, these values were less than those seen for participants with normal arm function. CONCLUSIONS AND RELEVANCE This study suggests that prosthesis users have an inherently different way of adapting to varying task demands; task selection is therefore crucial in evaluating visuomotor performance. The cup task required greater compensatory visual fixations and prolonged grasp and release movements, and the box task required specific kinematic compensatory strategies as well as increased visual fixation. This is the first study to examine visuomotor differences in prosthesis users across varying task demands, and the findings appear to highlight the advantages of quantitative assessment in understanding human-prosthesis interaction.
Affiliation(s)
- Jacqueline S. Hebert
- Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Department of Biomedical Engineering, University of Alberta, Edmonton, Alberta, Canada
- Quinn A. Boser
- Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Aïda M. Valevicius
- Department of Biomedical Engineering, University of Alberta, Edmonton, Alberta, Canada
- Hiroki Tanikawa
- Faculty of Rehabilitation, School of Health Sciences, Fujita Health University, Toyoake, Aichi, Japan
- Ewen B. Lavoie
- Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, Alberta, Canada
- Albert H. Vette
- Department of Biomedical Engineering, University of Alberta, Edmonton, Alberta, Canada
- Department of Mechanical Engineering, Faculty of Engineering, University of Alberta, Edmonton, Alberta, Canada
- Patrick M. Pilarski
- Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
- Craig S. Chapman
- Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, Alberta, Canada
27
Arthur T, Vine S, Brosnan M, Buckingham G. Exploring how material cues drive sensorimotor prediction across different levels of autistic-like traits. Exp Brain Res 2019; 237:2255-2267. [PMID: 31250036 PMCID: PMC6675774 DOI: 10.1007/s00221-019-05586-z] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Received: 02/15/2019] [Accepted: 06/15/2019] [Indexed: 12/25/2022]
Abstract
Recent research proposes that sensorimotor difficulties, such as those experienced by many autistic people, may arise from atypicalities in prediction. Accordingly, we examined the relationship between non-clinical autistic-like traits and sensorimotor prediction in the material-weight illusion, where prior expectations derived from material cues typically bias one’s perception and action. Specifically, prediction-related tendencies in perception of weight, gaze patterns, and lifting actions were probed using a combination of self-report, eye-tracking, motion-capture, and force-based measures. No prediction-related associations between autistic-like traits and sensorimotor control emerged for any of these variables. Follow-up analyses, however, revealed that greater autistic-like traits were correlated with reduced adaptation of gaze with changes in environmental uncertainty. These findings challenge proposals of gross predictive atypicalities in autistic people, but suggest that the dynamic integration of prior information and environmental statistics may be related to autistic-like traits. Further research into this relationship is warranted in autistic populations, to assist the development of future movement-based coaching methods.
Affiliation(s)
- Tom Arthur
- Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Heavitree Road, Exeter, EX1 2LU, Devon, UK
- Sam Vine
- Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Heavitree Road, Exeter, EX1 2LU, Devon, UK
- Mark Brosnan
- Department of Psychology, University of Bath, Bath, BA2 7AY, UK
- Gavin Buckingham
- Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, St Luke's Campus, Heavitree Road, Exeter, EX1 2LU, Devon, UK
28
Kaufman CL, Bhutiani N, Ramirez A, Tien HY, Palazzo MD, Galvis E, Farner S, Ozyurekoglu T, Jones CM. Current Status of Vascularized Composite Allotransplantation. Am Surg 2019. [DOI: 10.1177/000313481908500628] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Indexed: 12/12/2022]
Abstract
The field of vascularized composite allotransplantation (VCA) has moved from a highly experimental procedure to, at least for some patients, one of the best treatment alternatives for catastrophic tissue loss or dysfunction. Although the worldwide experience is still limited, progress has been made in translation to the clinic, and hand transplantation was recently designated standard of care and is now covered in full by the British Health System. This progress is tempered by the long-term challenges of systemic immunosuppression and by rapidly evolving indications for VCA, such as urogenital transplantation. This update covers the state of the field and recent changes in it, along with an update on the Louisville VCA program, as our initial recipient, the first person to receive a hand transplant in the United States, celebrates the 20th anniversary of his transplant. The achievements and complications encountered over the last two decades are reviewed. In addition, potential directions for research and collaboration, as well as practical issues of how third-party payers and funding are affecting growth of the field, are presented.