1
Gießler M, Waltersberger B, Götz T, Rockenfeller R. A multi-method framework for establishing an angular acceleration reference in sensor calibration and uncertainty quantification. Communications Engineering 2025;4:65. PMID: 40195528; PMCID: PMC11977018; DOI: 10.1038/s44172-025-00384-8.
Abstract
Robots are increasingly being used across various sectors, from industry and healthcare to household applications. In practice, a pivotal challenge is reacting to unexpected external disturbances, whose real-time feedback often relies on (noisy) sensor measurements. Subsequent inverse-dynamics calculations demand noise-amplifying numerical differentiation, leading to impracticable results. Although much effort has been spent on establishing direct measurement approaches, their measurement uncertainty has either not been quantified or been addressed only insufficiently in the literature. Here, we propose a multi-method framework to develop an angular acceleration reference and provide evidence that it can serve as a measurement standard to calibrate various kinematic sensors. Within the framework, we use Monte-Carlo simulations to quantify the uncertainty of a direct measurement sensor recently developed by our team: the inertial measurement cluster (IMC). For angular accelerations up to 21 rad/s², the standard deviation of the IMC was on average only 0.3 rad/s² (95% CI: [0.28, 0.31] rad/s²), which constitutes a reliable data-sheet record. Further, using least-squares optimization, we show that the deviation of the IMC from the reference was smaller not only at the level of angular acceleration but also at the levels of angular velocity and angle, when compared to other direct and indirect measurement methods.
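The Monte-Carlo uncertainty quantification described in this abstract can be sketched in a few lines; the noise level, reference profile, and trial count below are illustrative assumptions, not the authors' actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference angular acceleration: an assumed sinusoidal profile peaking
# near the 21 rad/s^2 range discussed in the paper (illustrative only).
t = np.linspace(0.0, 2.0, 1000)
alpha_ref = 21.0 * np.sin(2.0 * np.pi * t)  # rad/s^2

def simulate_measurement(alpha, noise_std, rng):
    """One Monte-Carlo draw: the sensor sees the reference plus noise."""
    return alpha + rng.normal(0.0, noise_std, size=alpha.shape)

# Propagate an assumed per-sample noise level through many trials and
# summarise the sensor's deviation from the reference.
n_trials = 2000
errors = np.array([
    simulate_measurement(alpha_ref, noise_std=0.3, rng=rng) - alpha_ref
    for _ in range(n_trials)
])
per_trial_std = errors.std(axis=1)          # one std estimate per trial
mean_std = per_trial_std.mean()
ci_low, ci_high = np.percentile(per_trial_std, [2.5, 97.5])
print(f"std = {mean_std:.3f} rad/s^2, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```

Repeating the simulation per trial, rather than pooling all samples, is what yields the confidence interval on the standard deviation itself.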
Affiliation(s)
- Maximilian Gießler
- Department of Mechanical and Process Engineering, Offenburg University of Applied Sciences, Offenburg, Germany
- Mathematical Institute, University of Koblenz, Koblenz, Germany
- Bernd Waltersberger
- Department of Mechanical and Process Engineering, Offenburg University of Applied Sciences, Offenburg, Germany
- Thomas Götz
- Mathematical Institute, University of Koblenz, Koblenz, Germany
- Robert Rockenfeller
- Mathematical Institute, University of Koblenz, Koblenz, Germany
- MTI Mittelrhein, University of Koblenz, Koblenz, Germany
2
Xi Y, Li Z, Vatatheeswaran S, Devecchi V, Gallina A. Assessment of Pelvic Motion During Single-Leg Weight-Bearing Tasks Using Smartphone Sensors: Validity Study. JMIR Rehabil Assist Technol 2025;12:e65342. PMID: 40168648; PMCID: PMC11978237; DOI: 10.2196/65342.
Abstract
Background: Clinicians and athletic training specialists often assess the performance of single-leg, weight-bearing tasks to monitor rehabilitation progress and guide exercise progression. Some of the key metrics assessed are excessive pelvic motion, balance, and the duration of each repetition of the exercise. Motion can be objectively characterized using motion capture (MOCAP); however, MOCAP is often not available in clinics due to its high cost and the complexity of the analyses. Smartphones have built-in sensors that can measure changes in body segment orientation and acceleration, which may make them a more feasible and affordable technology to use in practice.
Objective: This study aimed to determine whether, compared to gold-standard MOCAP, smartphone sensors can provide valid measures of pelvic orientation, acceleration, and repetition duration during single-leg tasks in healthy individuals.
Methods: Overall, 52 healthy participants performed single-leg squats and step-down tasks from heights of 15 and 20 cm. Pelvic motion was assessed using MOCAP and a smartphone placed over the sacrum. The MATLAB (MathWorks) mobile app was used to collect smartphone acceleration and orientation data. Individual repetitions of each exercise were manually identified, and the following outcomes were extracted: duration of the repetition, mediolateral acceleration, and 3D pelvic orientation at peak squat. Validity was assessed by comparing metrics obtained with the smartphone and MOCAP using intraclass correlation coefficients (ICCs) and paired Wilcoxon tests. Differences between tasks were compared using 1-way ANOVA or the Friedman test.
Results: Across the 3 single-leg tasks, smartphone estimates demonstrated consistently high agreement with MOCAP for all metrics (ICC point estimates: >0.8 for mediolateral acceleration and frontal-plane orientation; >0.9 for squat duration and orientation in the sagittal and transverse planes). Bias was identified for most outcomes (multiple P<.001). Both smartphone and MOCAP recordings identified clear differences between tasks, with step-down tasks usually requiring larger changes in pelvic orientation and larger mediolateral sways. Duration did not differ between tasks.
Conclusions: Despite a consistent bias, the smartphone demonstrated good to excellent validity relative to gold-standard MOCAP for most outcomes. This demonstrates that smartphones offer an accessible and affordable tool to objectively characterize pelvic motion during different single-leg weight-bearing tasks in healthy participants. Together with earlier reports of good between-day reliability of similar measures during single-leg squats, our results suggest that smartphone sensors can be used to assess and monitor single-leg task performance. Future studies should investigate whether smartphone sensors can aid in the assessment and treatment of people with musculoskeletal disorders. More user-friendly interfaces and data analysis procedures may also facilitate the implementation of this technology in practice.
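The validity analysis rests on intraclass correlation coefficients. A minimal ICC(2,1) implementation (two-way random effects, absolute agreement, single measure) is sketched below; the abstract does not state which ICC form was used, so this particular form is an assumption:

```python
import numpy as np

def icc_2_1(y):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    `y` is an (n subjects x k raters) matrix, e.g. one column of
    smartphone-derived angles and one column of MOCAP angles.
    """
    y = np.asarray(y, dtype=float)
    n, k = y.shape
    grand = y.mean()
    row_means = y.mean(axis=1)   # per-subject means
    col_means = y.mean(axis=0)   # per-method means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # methods MS
    resid = y - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # error MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy check: two methods measuring the same quantity with small noise.
rng = np.random.default_rng(0)
truth = rng.uniform(0, 45, size=30)              # e.g. pelvic angles (deg)
measurements = np.column_stack([truth + rng.normal(0, 1, 30),
                                truth + rng.normal(0, 1, 30)])
print(f"ICC(2,1) = {icc_2_1(measurements):.3f}")
```

Absolute agreement (rather than consistency) penalizes a systematic offset between methods, which matters here because the study explicitly reports bias between the smartphone and MOCAP.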
Affiliation(s)
- Yu Xi
- School of Sport, Exercise and Rehabilitation Sciences, College of Life Sciences, University of Birmingham, Y14, Birmingham, B15 2TT, United Kingdom
- Zhongsheng Li
- School of Sport, Exercise and Rehabilitation Sciences, College of Life Sciences, University of Birmingham, Y14, Birmingham, B15 2TT, United Kingdom
- Surendran Vatatheeswaran
- School of Sport, Exercise and Rehabilitation Sciences, College of Life Sciences, University of Birmingham, Y14, Birmingham, B15 2TT, United Kingdom
- Valter Devecchi
- School of Sport, Exercise and Rehabilitation Sciences, College of Life Sciences, University of Birmingham, Y14, Birmingham, B15 2TT, United Kingdom
- Alessio Gallina
- School of Sport, Exercise and Rehabilitation Sciences, College of Life Sciences, University of Birmingham, Y14, Birmingham, B15 2TT, United Kingdom
3
Taetz B, Lorenz M, Miezal M, Stricker D, Bleser-Taetz G. JointTracker: Real-time inertial kinematic chain tracking with joint position estimation. Open Research Europe 2025;4:33. PMID: 38953016; PMCID: PMC11216284; DOI: 10.12688/openreseurope.16939.1.
Abstract
In-field motion capture is drawing increasing attention due to its multitude of application areas, in particular human motion capture (HMC). Much research currently goes into camera-based markerless HMC, which, however, suffers from the inherent drawbacks of a limited field of view and occlusions. In contrast, inertial motion capture does not suffer from occlusions and is thus a promising approach for capturing motion outside the laboratory. However, one major challenge of such methods is the necessity of spatial registration: typically, during a predefined calibration sequence, the orientation and location of each inertial sensor are registered with respect to an underlying skeleton model. This work contributes to calibration-free inertial motion capture by proposing a recursive estimator for the simultaneous online estimation of all sensor poses and joint positions of a kinematic chain model such as the human skeleton. The full derivation from an optimization objective is provided. The approach can be applied directly to a synchronized data stream from a body-mounted inertial sensor network. Successful evaluations are demonstrated on noisy simulated data from a three-link chain, on real lower-body walking data from 25 young, healthy persons, and on walking data captured from a humanoid robot.
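The recursive estimator itself is beyond a short snippet, but the underlying geometric constraint — two segment-mounted IMUs must observe the same acceleration magnitude at the shared joint centre — can be sketched as a batch least-squares problem on synthetic data. The variable names and the simulation below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import least_squares

def joint_acc(acc, w, wd, o):
    # Transport each sensor acceleration sample to the point at offset o
    # (sensor frame): a_joint = a_sensor + wd x o + w x (w x o).
    return acc + np.cross(wd, o) + np.cross(w, np.cross(w, o))

def residuals(x, a1, w1, wd1, a2, w2, wd2):
    # Both sensors observe the same joint centre, so the magnitudes of
    # the transported accelerations must agree at every sample.
    o1, o2 = x[:3], x[3:]
    n1 = np.linalg.norm(joint_acc(a1, w1, wd1, o1), axis=1)
    n2 = np.linalg.norm(joint_acc(a2, w2, wd2, o2), axis=1)
    return n1 - n2

# Synthetic two-segment data consistent with known sensor-to-joint offsets.
rng = np.random.default_rng(0)
o1_true = np.array([0.10, 0.00, 0.20])
o2_true = np.array([-0.15, 0.05, 0.00])
T = 100
w1, wd1 = rng.normal(size=(T, 3)), rng.normal(size=(T, 3))
w2, wd2 = rng.normal(size=(T, 3)), rng.normal(size=(T, 3))
mag = rng.uniform(1.0, 5.0, size=(T, 1))    # shared joint-acc. magnitude
u1 = rng.normal(size=(T, 3)); u1 /= np.linalg.norm(u1, axis=1, keepdims=True)
u2 = rng.normal(size=(T, 3)); u2 /= np.linalg.norm(u2, axis=1, keepdims=True)
a1 = mag * u1 - np.cross(wd1, o1_true) - np.cross(w1, np.cross(w1, o1_true))
a2 = mag * u2 - np.cross(wd2, o2_true) - np.cross(w2, np.cross(w2, o2_true))

x0 = np.concatenate([o1_true, o2_true]) + 0.02   # perturbed initial guess
fit = least_squares(residuals, x0, args=(a1, w1, wd1, a2, w2, wd2))
```

The paper turns an objective of this flavour into a recursive online estimator; the batch form above only illustrates why joint positions are observable from synchronized IMU data at all.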
Affiliation(s)
- Bertram Taetz
- Augmented Vision, German Research Center for Artificial Intelligence, Kaiserslautern, Rhineland-Palatinate, 67663, Germany
- IT & Engineering, International University of Applied Sciences, Erfurt, Thuringia, 99084, Germany
- Michael Lorenz
- Augmented Vision, German Research Center for Artificial Intelligence, Kaiserslautern, Rhineland-Palatinate, 67663, Germany
- Markus Miezal
- Augmented Vision, German Research Center for Artificial Intelligence, Kaiserslautern, Rhineland-Palatinate, 67663, Germany
- Didier Stricker
- Augmented Vision, German Research Center for Artificial Intelligence, Kaiserslautern, Rhineland-Palatinate, 67663, Germany
- Gabriele Bleser-Taetz
- Augmented Vision, German Research Center for Artificial Intelligence, Kaiserslautern, Rhineland-Palatinate, 67663, Germany
- IT & Engineering, International University of Applied Sciences, Erfurt, Thuringia, 99084, Germany
4
Smith TJ, Smith TR, Faruk F, Bendea M, Kumara ST, Capadona JR, Hernandez-Reynoso AG, Pancrazio JJ. A Real-Time Approach for Assessing Rodent Engagement in a Nose-Poking Go/No-Go Behavioral Task Using ArUco Markers. Bio Protoc 2024;14:e5098. PMID: 39525969; PMCID: PMC11543608; DOI: 10.21769/bioprotoc.5098.
Abstract
Behavioral neuroscience requires precise and unbiased methods for animal behavior assessment to elucidate complex brain-behavior interactions. Traditional manual scoring methods are labor-intensive and prone to error, necessitating advances in automated techniques. Recent innovations in computer vision have led to both marker-based and markerless tracking systems. In this protocol, we outline the procedures required for utilizing Augmented Reality University of Cordoba (ArUco) markers, a marker-based tracking approach, to automate the assessment and scoring of rodent engagement during an established intracortical microstimulation-based nose-poking go/no-go task. In short, this protocol provides detailed instructions for building a suitable behavioral chamber, installing and configuring all required software packages, constructing and attaching an ArUco marker pattern to a rat, running the behavioral software to track marker positions, and analyzing the engagement data to determine optimal task durations. These methods provide a robust framework for real-time behavioral analysis without the need for extensive training data or high-end computational resources. The main advantages of this protocol include its computational efficiency, ease of implementation, and adaptability to various experimental setups, making it an accessible tool for laboratories with diverse resources. Overall, this approach streamlines the process of behavioral scoring, enhancing both the scalability and reproducibility of behavioral neuroscience research. All resources, including software, 3D models, and example data, are freely available at https://github.com/tomcatsmith19/ArucoDetection.
Key features:
• The ArUco marker mounting hardware is lightweight, compact, and detachable to minimize interference with natural animal behavior.
• Requires minimal computational resources and commercially available equipment, ensuring ease of use in diverse laboratory settings.
• Instructions for extracting the necessary code are included to enhance accessibility within custom environments.
• Developed for real-time assessment and scoring of rodent engagement across a diverse array of pre-loaded behavioral tasks; instructions for adding custom tasks are included.
• Engagement analysis allows the quantification of optimal task durations for consistent behavioral data collection without confirmation biases.
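The engagement-analysis step can be sketched as follows, assuming marker-centre positions per video frame have already been obtained (e.g., from OpenCV's `cv2.aruco` detector). The ROI-based engagement metric and drop-off threshold below are hypothetical illustrations, not the protocol's exact scoring rule:

```python
import numpy as np

def engagement_fraction(centers, roi, window):
    """Fraction of frames per window in which the tracked ArUco-marker
    centre (N x 2 array of pixel coordinates) lies inside the
    nose-poke region of interest, roi = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = roi
    inside = ((centers[:, 0] >= x0) & (centers[:, 0] <= x1) &
              (centers[:, 1] >= y0) & (centers[:, 1] <= y1))
    n = len(inside) // window * window       # drop the incomplete tail
    return inside[:n].reshape(-1, window).mean(axis=1)

def optimal_duration(per_window, threshold=0.5):
    """Number of consecutive windows before engagement first drops
    below the threshold; taken here as the useful session length."""
    below = np.nonzero(per_window < threshold)[0]
    return int(below[0]) if below.size else len(per_window)

# Demo: 300 engaged frames near (5, 5), then 300 disengaged at (50, 50).
centers = np.vstack([np.full((300, 2), 5.0), np.full((300, 2), 50.0)])
per_window = engagement_fraction(centers, roi=(0, 0, 10, 10), window=100)
print(per_window, optimal_duration(per_window))
```

Windowing the inside/outside flags rather than averaging the whole session is what lets the analysis locate *when* engagement declines, which is the quantity the protocol uses to set task durations.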
Affiliation(s)
- Thomas J. Smith
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA
- Trevor R. Smith
- Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, WV, USA
- Fareeha Faruk
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA
- Mihai Bendea
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA
- Jeffrey R. Capadona
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Advanced Platform Technology Center, Louis Stokes Cleveland Veterans Affairs Medical Center, Cleveland, OH, USA
- Joseph J. Pancrazio
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX, USA
5
Sun J, Huang L, Wang H, Zheng C, Qiu J, Islam MT, Xie E, Zhou B, Xing L, Chandrasekaran A, Black MJ. Localization and recognition of human action in 3D using transformers. Communications Engineering 2024;3:125. PMID: 39227676; PMCID: PMC11372174; DOI: 10.1038/s44172-024-00272-7.
Abstract
Understanding a person's behavior from their 3D motion sequence is a fundamental problem in computer vision with many applications. An important component of this problem is 3D action localization: recognizing what actions a person is performing and when those actions occur in the sequence. To promote progress in the 3D action localization community, we introduce a new, challenging, and more complex benchmark dataset for 3D action localization, BABEL-TAL (BT). Important baselines, evaluation metrics, and human evaluations are carefully established on this benchmark. We also propose a strong baseline model, Localizing Actions with Transformers (LocATe), that jointly localizes and recognizes actions in a 3D sequence. LocATe shows superior performance on BABEL-TAL as well as on the large-scale PKU-MMD dataset, achieving state-of-the-art performance while using only 10% of the labeled training data. Our research could advance the development of more accurate and efficient systems for human behavior analysis, with potential applications in areas such as human-computer interaction and healthcare.
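Temporal action localization is commonly evaluated by matching predicted segments to ground truth at a temporal-IoU threshold. A generic sketch of that scoring step is shown below; this reflects standard practice, not necessarily the paper's exact protocol:

```python
def temporal_iou(a, b):
    """Intersection-over-union of two temporal segments (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def match_detections(preds, gts, iou_thr=0.5):
    """Greedily match predicted segments, highest score first, to
    unused ground-truth segments. `preds` is a list of
    ((start, end), score) pairs; returns a true/false-positive flag
    per prediction, in score order."""
    used = [False] * len(gts)
    flags = []
    for seg, score in sorted(preds, key=lambda p: -p[1]):
        best, best_iou = -1, iou_thr
        for i, gt in enumerate(gts):
            iou = temporal_iou(seg, gt)
            if not used[i] and iou >= best_iou:
                best, best_iou = i, iou
        if best >= 0:
            used[best] = True
            flags.append(True)
        else:
            flags.append(False)
    return flags
```

From these per-prediction flags, precision-recall curves and mean average precision at each IoU threshold follow directly.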
Affiliation(s)
- Jiankai Sun
- School of Engineering, Stanford University, Stanford, CA, USA
- Linjiang Huang
- Department of Information Engineering, The Chinese University of Hong Kong, Hong Kong SAR
- Hongsong Wang
- Department of Computer Science and Engineering, Southeast University, Nanjing, Jiangsu, China
- Chuanyang Zheng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong SAR
- Jianing Qiu
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong SAR
- Md Tauhidul Islam
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
- Enze Xie
- Department of Computer Science, The University of Hong Kong, Hong Kong SAR
- Bolei Zhou
- Department of Computer Science, University of California, Los Angeles, Los Angeles, CA, USA
- Lei Xing
- School of Engineering, Stanford University, Stanford, CA, USA
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
- Arjun Chandrasekaran
- Perceiving Systems Department, Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Michael J Black
- Perceiving Systems Department, Max Planck Institute for Intelligent Systems, Tübingen, Germany
6
Hwang S, Kim J, Yang S, Moon HJ, Cho KH, Youn I, Sung JK, Han S. Machine Learning Based Abnormal Gait Classification with IMU Considering Joint Impairment. Sensors (Basel) 2024;24:5571. PMID: 39275482; PMCID: PMC11397963; DOI: 10.3390/s24175571.
Abstract
Gait analysis systems are critical for assessing motor function in rehabilitation and elderly care. This study aimed to develop and optimize an abnormal gait classification algorithm that accounts for joint impairments, using inertial measurement units (IMUs) and walkway systems. Ten healthy male participants simulated normal walking, walking with knee impairment, and walking with ankle impairment under three conditions: without joint braces, with a knee brace, and with an ankle brace. Based on these simulated gaits, we developed three classification models: one distinguishing abnormal gait due to joint impairment, one identifying the specific impaired joint, and a combined model for both tasks. Recursive Feature Elimination with Cross-Validation (RFECV) was used for feature selection, and models were fine-tuned using a support vector machine (SVM), random forest (RF), and extreme gradient boosting (XGB). The IMU-based system achieved over 91% accuracy in classifying the three gait types. In contrast, the walkway system achieved less than 77% accuracy, primarily due to high misclassification rates between knee and ankle impairments. The IMU-based system shows promise for accurate gait assessment in patients with joint impairments, suggesting future research on improvements for clinical application in rehabilitation and patient management.
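The RFECV-plus-classifier pipeline can be sketched with scikit-learn; the synthetic features below merely stand in for the paper's IMU-derived gait features, and the three classes stand in for the three gait types:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for IMU gait features: 3 classes (normal, knee
# impairment, ankle impairment), 20 candidate features, 5 informative.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_classes=3, n_clusters_per_class=1,
                           random_state=0)

# Recursive Feature Elimination with Cross-Validation keeps the feature
# subset that maximises cross-validated accuracy of a linear SVM.
selector = RFECV(SVC(kernel="linear"), step=1, cv=5)
selector.fit(X, y)
X_sel = selector.transform(X)

# Evaluate the classifier on the selected features only.
scores = cross_val_score(SVC(kernel="linear"), X_sel, y, cv=5)
print(f"{selector.n_features_} features selected, "
      f"accuracy {scores.mean():.2f}")
```

In the study, the same selection step would precede each of the SVM, RF, and XGB candidates before fine-tuning; only the linear-SVM variant is shown here.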
Affiliation(s)
- Soree Hwang
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea
- School of Biomedical Engineering, Korea University, Seoul 02841, Republic of Korea
- Jongman Kim
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea
- Sumin Yang
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea
- Hyuk-June Moon
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea
- Kyung-Hee Cho
- Department of Neurology, Korea University Anam Hospital, Korea University College of Medicine, Seoul 02841, Republic of Korea
- Inchan Youn
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea
- Joon-Kyung Sung
- School of Biomedical Engineering, Korea University, Seoul 02841, Republic of Korea
- Sungmin Han
- Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea
- Division of Bio-Medical Science & Technology, KIST School, Korea University of Science and Technology, Seoul 02792, Republic of Korea
- KHU-KIST Department of Converging Science and Technology, Kyung Hee University, Seoul 02447, Republic of Korea
7
Geißler D, Zhou B, Bello H, Sorysz J, Ray L, Javaheri H, Rüb M, Herbst J, Zahn E, Woop E, Bian S, Schotten HD, Joost G, Lukowicz P. Embedding textile capacitive sensing into smart wearables as a versatile solution for human motion capturing. Sci Rep 2024;14:15797. PMID: 38982105; PMCID: PMC11233671; DOI: 10.1038/s41598-024-66165-z.
Abstract
This work presents a novel and versatile approach that employs textile capacitive sensing as an effective solution for capturing human body movement through fashionable, everyday garments. Conductive textile patches sense the movement without requiring strain or direct body contact; the patches sense purely from their deformation within the garment. This principle allows the sensing area to be decoupled from the wearer's body for improved wearing comfort and more pleasant integration. We demonstrate our technology with multiple prototypes, developed by an interdisciplinary team of electrical engineers, computer scientists, digital artists, and smart fashion designers through several iterations, that seamlessly incorporate capacitive sensing, together with the corresponding design considerations, into textile materials. The resulting collection of textile capacitive sensing wearables showcases the versatile application possibilities of our technology, from single-joint angle measurements to multi-joint body part tracking.
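Turning a patch's raw capacitance into a joint angle requires a calibration against a reference measurement (e.g., a goniometer or optical system). The quadratic capacitance-angle characteristic and noise level below are purely illustrative assumptions, not the article's measured sensor response:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed characteristic: capacitance grows mildly nonlinearly with the
# joint angle as the textile patch deforms (illustrative values).
angle_ref = np.linspace(0.0, 90.0, 50)                      # degrees
capacitance = 220.0 + 0.8 * angle_ref + 1e-3 * angle_ref**2  # picofarads
capacitance += rng.normal(0.0, 0.2, size=angle_ref.shape)    # sensor noise

# Calibration: fit a low-order polynomial mapping capacitance -> angle.
coeffs = np.polyfit(capacitance, angle_ref, deg=2)

def capacitance_to_angle(c):
    """Apply the fitted calibration curve to raw capacitance readings."""
    return np.polyval(coeffs, c)

estimate = capacitance_to_angle(capacitance)
cal_rmse = np.sqrt(np.mean((estimate - angle_ref) ** 2))
print(f"calibration RMSE: {cal_rmse:.2f} deg")
```

A per-garment calibration of this kind is what lets deformation-based patches report angles even though they never touch the skin or stretch with it.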
Affiliation(s)
- Daniel Geißler
- Embedded Intelligence, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
- Bo Zhou
- Embedded Intelligence, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
- Hymalai Bello
- Embedded Intelligence, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
- Joanna Sorysz
- Embedded Intelligence, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
- Lala Ray
- Embedded Intelligence, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
- Hamraz Javaheri
- Embedded Intelligence, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
- Matthias Rüb
- Intelligent Networks, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
- Jan Herbst
- Intelligent Networks, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
- Esther Zahn
- Design Research eXplorations, German Research Center for Artificial Intelligence (DFKI), Berlin, Germany
- Emil Woop
- Design Research eXplorations, German Research Center for Artificial Intelligence (DFKI), Berlin, Germany
- Sizhen Bian
- Information Technology and Electrical Engineering, ETH Zurich, Zurich, Switzerland
- Hans D Schotten
- Intelligent Networks, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
- Gesche Joost
- Design Research eXplorations, German Research Center for Artificial Intelligence (DFKI), Berlin, Germany
- Paul Lukowicz
- Embedded Intelligence, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
8
García-Ruiz P, Romero-Ramirez FJ, Muñoz-Salinas R, Marín-Jiménez MJ, Medina-Carnicer R. Large-Scale Indoor Camera Positioning Using Fiducial Markers. Sensors (Basel) 2024;24:4303. PMID: 39001083; PMCID: PMC11244017; DOI: 10.3390/s24134303.
Abstract
Estimating the pose of a large set of fixed indoor cameras is a requirement for certain applications in augmented reality, autonomous navigation, video surveillance, and logistics. However, accurately mapping the positions of these cameras remains an unsolved problem. While providing partial solutions, existing alternatives are limited by their dependence on distinct environmental features, their requirement for large overlapping camera views, and other specific conditions. This paper introduces a novel approach to estimating the pose of a large set of cameras using a small subset of fiducial markers printed on regular pieces of paper. By placing the markers in areas visible to multiple cameras, we obtain an initial estimate of the pair-wise spatial relationship between them. The markers can be moved throughout the environment to obtain the relationships between all cameras, thus creating a graph connecting all cameras. In the final step, our method performs a full optimization, minimizing the reprojection errors of the observed markers and enforcing physical constraints, such as camera and marker coplanarity and control points. We validated our approach using novel artificial and real datasets with varying levels of complexity. Our experiments demonstrate superior performance over existing state-of-the-art techniques and increased effectiveness in real-world applications. Accompanying this paper, we provide the research community with access to our code, tutorials, and an application framework to support the deployment of our methodology.
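Chaining the pair-wise camera relationships into a single connected set of poses (the step preceding the final full optimization) can be sketched as follows; the breadth-first propagation and the example transforms are illustrative, not the paper's implementation:

```python
import numpy as np

def make_pose(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain_poses(edges, ref=0):
    """Propagate camera poses through pair-wise relative transforms.

    `edges` maps a camera pair (i, j) to T_ij, the pose of camera j
    expressed in camera i's frame (e.g., as estimated from a marker
    seen by both cameras). Poses are chained outward from a reference
    camera by breadth-first traversal of the camera graph.
    """
    poses = {ref: np.eye(4)}
    frontier = [ref]
    while frontier:
        i = frontier.pop(0)
        for (a, b), T_ab in edges.items():
            if a == i and b not in poses:
                poses[b] = poses[a] @ T_ab
                frontier.append(b)
            elif b == i and a not in poses:
                poses[a] = poses[b] @ np.linalg.inv(T_ab)
                frontier.append(a)
    return poses

# Three cameras in a line: 0 -> 1 one metre along x, 1 -> 2 one metre
# along y. Camera 2's pose follows by composing the two transforms.
edges = {(0, 1): make_pose(np.eye(3), [1.0, 0.0, 0.0]),
         (1, 2): make_pose(np.eye(3), [0.0, 1.0, 0.0])}
poses = chain_poses(edges)
```

Chained poses of this kind accumulate drift along long paths, which is exactly why the paper follows up with a global optimization over reprojection errors and physical constraints.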
Affiliation(s)
- Pablo García-Ruiz
- Departamento de Informática y Análisis Numérico, Edificio Einstein, Campus de Rabanales, Universidad de Córdoba, 14071 Córdoba, Spain
- Francisco J. Romero-Ramirez
- Departamento de Teoría de la Señal y Comunicaciones y Sistemas Telemáticos y Computación, Campus de Fuenlabrada, Universidad Rey Juan Carlos, 28942 Fuenlabrada, Spain
- Rafael Muñoz-Salinas
- Departamento de Informática y Análisis Numérico, Edificio Einstein, Campus de Rabanales, Universidad de Córdoba, 14071 Córdoba, Spain
- Instituto Maimónides de Investigación en Biomedicina (IMIBIC), Avenida Menéndez Pidal s/n, 14004 Córdoba, Spain
- Manuel J. Marín-Jiménez
- Departamento de Informática y Análisis Numérico, Edificio Einstein, Campus de Rabanales, Universidad de Córdoba, 14071 Córdoba, Spain
- Instituto Maimónides de Investigación en Biomedicina (IMIBIC), Avenida Menéndez Pidal s/n, 14004 Córdoba, Spain
- Rafael Medina-Carnicer
- Departamento de Informática y Análisis Numérico, Edificio Einstein, Campus de Rabanales, Universidad de Córdoba, 14071 Córdoba, Spain
- Instituto Maimónides de Investigación en Biomedicina (IMIBIC), Avenida Menéndez Pidal s/n, 14004 Córdoba, Spain
9
Jeung S, Cockx H, Appelhoff S, Berg T, Gramann K, Grothkopp S, Warmerdam E, Hansen C, Oostenveld R, Welzel J. Motion-BIDS: an extension to the brain imaging data structure to organize motion data for reproducible research. Sci Data 2024;11:716. PMID: 38956071; PMCID: PMC11219788; DOI: 10.1038/s41597-024-03559-8.
Affiliation(s)
- Sein Jeung
- Technical University of Berlin, Berlin, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Helena Cockx
- Radboud University, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
- Robert Oostenveld
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
- Karolinska Institutet, Stockholm, Sweden
10
Anderson I, Cosma C, Zhang Y, Mishra V, Kiourti A. Wearable Loop Sensors for Knee Flexion Monitoring: Dynamic Measurements on Human Subjects. IEEE Open Journal of Engineering in Medicine and Biology 2024;5:542-550. PMID: 39050975; PMCID: PMC11268931; DOI: 10.1109/ojemb.2024.3417376.
Abstract
Goals: We have recently introduced a new class of wearable loop sensors for joint flexion monitoring that overcomes limitations in the state of the art. Our previous studies reported a proof of concept on a cylindrical phantom limb, under static scenarios and with a rigid sensor. In this work, we evaluate our sensors for the first time on human subjects, under dynamic scenarios, using a flexible textile-based prototype tethered to a network analyzer. An untethered version is also presented and validated on phantoms, aiming towards a fully wearable design.
Methods: Three dynamic activities (walking, brisk walking, and full flexion/extension, all performed in place) are used to validate the tethered sensor on ten (10) adults. The untethered sensor is validated on a cylindrical phantom that is bent manually at random speed. A calibration mechanism is developed to derive the sensor-measured angles. These angles are then compared to gold-standard angles simultaneously captured by a light detection and ranging (LiDAR) depth camera, using root mean square error (RMSE) and Pearson's correlation coefficient as metrics.
Results: We find excellent correlation (≥ 0.981) with the gold-standard angles. The sensor achieves an RMSE of 4.463° ± 1.266° for walking, 5.541° ± 2.082° for brisk walking, 3.657° ± 1.815° for full flexion/extension, and 0.670° ± 0.366° for the phantom bending test.
Conclusion: The tethered sensor achieves similar or slightly higher RMSE compared to other wearable flexion sensors on human subjects, while the untethered version achieves excellent RMSE on the phantom model. Concurrently, our sensors are reliable over time, injury-safe, and do not obstruct natural movement. Our results set the ground for future improvements in angular resolution and for realizing fully wearable designs, while maintaining the above-mentioned advantages over the state of the art.
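The two validation metrics, RMSE and Pearson's correlation coefficient, can be computed directly with NumPy; the toy sensor and reference traces below are illustrative only, not the study's data:

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between sensor and gold-standard angles."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def pearson_r(a, b):
    """Pearson's correlation coefficient between two angle traces."""
    return float(np.corrcoef(a, b)[0, 1])

# Toy traces: a sensor tracking a knee-flexion reference with a small
# offset and noise (hypothetical values for demonstration).
rng = np.random.default_rng(0)
t = np.linspace(0, 4, 400)
reference = 30 + 25 * np.sin(2 * np.pi * t)             # LiDAR angles (deg)
sensor = reference + 2.0 + rng.normal(0, 1.5, t.size)   # loop-sensor angles
print(f"RMSE = {rmse(sensor, reference):.2f} deg, "
      f"r = {pearson_r(sensor, reference):.3f}")
```

Reporting both metrics is informative because they fail differently: a constant calibration offset inflates RMSE while leaving Pearson's r untouched.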
Affiliation(s)
- Ian Anderson
- ElectroScience Laboratory, Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43212, USA
- Christopher Cosma
- ElectroScience Laboratory, Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43212, USA
- Yingzhe Zhang
- ElectroScience Laboratory, Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43212, USA
- Vigyanshu Mishra
- ElectroScience Laboratory, Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43212, USA
- Center for Applied Research in Electronics, Indian Institute of Technology, New Delhi 110016, India
- Asimina Kiourti
- ElectroScience Laboratory, Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43212, USA
11
Maduwantha K, Jayaweerage I, Kumarasinghe C, Lakpriya N, Madushan T, Tharanga D, Wijethunga M, Induranga A, Gunawardana N, Weerakkody P, Koswattage K. Accessibility of Motion Capture as a Tool for Sports Performance Enhancement for Beginner and Intermediate Cricket Players. Sensors (Basel) 2024;24:3386. PMID: 38894175; PMCID: PMC11175015; DOI: 10.3390/s24113386.
Abstract
Motion Capture (MoCap) has become an integral tool in fields such as sports, medicine, and the entertainment industry. The cost of deploying high-end equipment and a lack of expertise and knowledge prevent MoCap from being used to its full potential, especially at beginner and intermediate levels of sports coaching. The challenges faced while developing affordable MoCap systems for such levels are discussed in order to initiate an easily accessible system built with minimal resources.
Affiliation(s)
- Kaveendra Maduwantha
- Faculty of Technology, Sabaragamuwa University of Sri Lanka, Belihuloya 70140, Sri Lanka; (K.M.); (C.K.); (A.I.)
- Ishan Jayaweerage
- Faculty of Computing, Sabaragamuwa University of Sri Lanka, Belihuloya 70140, Sri Lanka;
- Chamara Kumarasinghe
- Faculty of Technology, Sabaragamuwa University of Sri Lanka, Belihuloya 70140, Sri Lanka; (K.M.); (C.K.); (A.I.)
- Nimesh Lakpriya
- Faculty of Technology, Sabaragamuwa University of Sri Lanka, Belihuloya 70140, Sri Lanka; (K.M.); (C.K.); (A.I.)
- Thilina Madushan
- Faculty of Technology, Sabaragamuwa University of Sri Lanka, Belihuloya 70140, Sri Lanka; (K.M.); (C.K.); (A.I.)
- Dasun Tharanga
- Faculty of Technology, Sabaragamuwa University of Sri Lanka, Belihuloya 70140, Sri Lanka; (K.M.); (C.K.); (A.I.)
- Mahela Wijethunga
- Faculty of Technology, Sabaragamuwa University of Sri Lanka, Belihuloya 70140, Sri Lanka; (K.M.); (C.K.); (A.I.)
- Ashan Induranga
- Faculty of Technology, Sabaragamuwa University of Sri Lanka, Belihuloya 70140, Sri Lanka; (K.M.); (C.K.); (A.I.)
- Niroshan Gunawardana
- Faculty of Technology, Sabaragamuwa University of Sri Lanka, Belihuloya 70140, Sri Lanka; (K.M.); (C.K.); (A.I.)
- Pathum Weerakkody
- Faculty of Applied Sciences, Sabaragamuwa University of Sri Lanka, Belihuloya 70140, Sri Lanka
- Kaveenga Koswattage
- Faculty of Technology, Sabaragamuwa University of Sri Lanka, Belihuloya 70140, Sri Lanka; (K.M.); (C.K.); (A.I.)
12
He T, Yang T, Konomi S. Human Motion Enhancement and Restoration via Unconstrained Human Structure Learning. SENSORS (BASEL, SWITZERLAND) 2024; 24:3123. [PMID: 38793976 PMCID: PMC11125183 DOI: 10.3390/s24103123] [Received: 04/14/2024] [Revised: 05/08/2024] [Accepted: 05/13/2024] [Indexed: 05/26/2024]
Abstract
Human motion capture technology, which leverages sensors to track the movement trajectories of key skeleton points, has been progressively transitioning from industrial applications to broader civilian applications in recent years. It finds extensive use in fields such as game development, digital human modeling, and sport science. However, the affordability of these sensors often comes at the cost of accuracy: low-cost motion capture methods frequently introduce errors into the captured motion data. We introduce a novel approach for human motion reconstruction and enhancement using spatio-temporal attention-based graph convolutional networks (ST-ATGCNs), which efficiently learn the human skeleton structure and the motion logic without requiring prior human kinematic knowledge. This method enables unsupervised motion data restoration and significantly reduces the costs associated with obtaining precise motion capture data. Our experiments, conducted on two extensive motion datasets and with real motion capture sensors such as the SONY (Tokyo, Japan) mocopi, demonstrate the method's effectiveness in enhancing the quality of low-precision motion capture data. The experiments indicate the ST-ATGCN's potential to improve both the accessibility and accuracy of motion capture technology.
Affiliation(s)
- Tianjia He
- Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka 819-0395, Japan
- Tianyuan Yang
- Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka 819-0395, Japan
- Shin’ichi Konomi
- Faculty of Arts and Science, Kyushu University, Fukuoka 819-0395, Japan
13
Mulás-Tejeda E, Gómez-Espinosa A, Escobedo Cabello JA, Cantoral-Ceballos JA, Molina-Leal A. Implementation of a Long Short-Term Memory Neural Network-Based Algorithm for Dynamic Obstacle Avoidance. SENSORS (BASEL, SWITZERLAND) 2024; 24:3004. [PMID: 38793861 PMCID: PMC11124987 DOI: 10.3390/s24103004] [Received: 04/20/2024] [Revised: 05/02/2024] [Accepted: 05/08/2024] [Indexed: 05/26/2024]
Abstract
Autonomous mobile robots are essential to industry, and human-robot interactions are becoming more common nowadays. These interactions require that the robots navigate scenarios with static and dynamic obstacles safely, avoiding collisions. This paper presents a physical implementation of a method for dynamic obstacle avoidance using a long short-term memory (LSTM) neural network that obtains information from the mobile robot's LiDAR, enabling it to navigate through scenarios with static and dynamic obstacles while avoiding collisions and reaching its goal. The model is implemented on a TurtleBot3 mobile robot within an OptiTrack motion capture (MoCap) system that provides its position at any given time. The user operates the robot through these scenarios, recording its LiDAR readings, target point, position inside the MoCap system, and its linear and angular velocities, all of which serve as the input for the LSTM network. The model is trained on data from multiple user-operated trajectories across five different scenarios, outputting the linear and angular velocities for the mobile robot. Physical experiments show that the model successfully allows the mobile robot to reach the target point in each scenario while avoiding the dynamic obstacle, with a validation accuracy of 98.02%.
Affiliation(s)
- Alfonso Gómez-Espinosa
- Tecnologico de Monterrey, Escuela de Ingeniería y Ciencias, Av. Epigmenio González 500, Fracc. San Pablo, Querétaro 76130, Mexico; (E.M.-T.); (J.A.E.C.); (J.A.C.-C.); (A.M.-L.)
14
Cruz J, Gonçalves SB, Neves MC, Silva HP, Silva MT. Intraoperative Angle Measurement of Anatomical Structures: A Systematic Review. SENSORS (BASEL, SWITZERLAND) 2024; 24:1613. [PMID: 38475148 PMCID: PMC10934548 DOI: 10.3390/s24051613] [Received: 01/30/2024] [Revised: 02/22/2024] [Accepted: 02/28/2024] [Indexed: 03/14/2024]
Abstract
Ensuring precise angle measurement during surgical correction of orientation-related deformities is crucial for optimal postoperative outcomes, yet no ideal commercial solution exists. Current measurement sensors and instrumentation have limitations that make their use context-specific, demanding a methodical evaluation of the field. A systematic review was carried out in March 2023. Studies reporting technologies and validation methods for intraoperative angular measurement of anatomical structures were analyzed. A total of 32 studies were included: 17 focused on image-based technologies (6 fluoroscopy, 4 camera-based tracking, and 7 CT-based), while 15 explored non-image-based technologies (6 manual instruments and 9 inertial sensor-based instruments). Image-based technologies offer better accuracy and 3D capabilities but pose challenges such as additional equipment, increased radiation exposure, time, and cost. Non-image-based technologies are cost-effective but may be influenced by the surgeon's perception and require careful calibration. Nevertheless, the choice of the proper technology should take into consideration the expected measurement error, the type of surgery, and the radiation dose limit. This comprehensive review serves as a valuable guide for surgeons seeking precise angle measurements intraoperatively. It not only explores the performance and application of existing technologies but also aids in the future development of innovative solutions.
Affiliation(s)
- João Cruz
- IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal; (J.C.); (S.B.G.)
- Sérgio B. Gonçalves
- IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal; (J.C.); (S.B.G.)
- Hugo Plácido Silva
- IT—Instituto de Telecomunicações, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal;
- Miguel Tavares Silva
- IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa, Portugal; (J.C.); (S.B.G.)
15
Smith TJ, Smith TR, Faruk F, Bendea M, Tirumala Kumara S, Capadona JR, Hernandez-Reynoso AG, Pancrazio JJ. Real-Time Assessment of Rodent Engagement Using ArUco Markers: A Scalable and Accessible Approach for Scoring Behavior in a Nose-Poking Go/No-Go Task. eNeuro 2024; 11:ENEURO.0500-23.2024. [PMID: 38351132 PMCID: PMC11046262 DOI: 10.1523/eneuro.0500-23.2024] [Received: 11/28/2023] [Revised: 01/30/2024] [Accepted: 02/05/2024] [Indexed: 03/07/2024]
Abstract
In the field of behavioral neuroscience, the classification and scoring of animal behavior play pivotal roles in the quantification and interpretation of complex behaviors displayed by animals. Traditional methods have relied on video examination by investigators, which is labor-intensive and susceptible to bias. To address these challenges, research efforts have focused on computational methods and image-processing algorithms for automated behavioral classification. Two primary approaches have emerged: marker-based and markerless tracking systems. In this study, we showcase the utility of "Augmented Reality University of Cordoba" (ArUco) markers as a marker-based tracking approach for assessing rat engagement during a nose-poking go/no-go behavioral task. In addition, we introduce a two-state engagement model based on ArUco marker tracking data that can be analyzed with a rectangular kernel convolution to identify critical transition points between states of engagement and distraction. In this study, we hypothesized that ArUco markers could be utilized to accurately estimate animal engagement in a nose-poking go/no-go behavioral task, enabling the computation of optimal task durations for behavioral testing. Here, we present the performance of our ArUco tracking program, demonstrating a classification accuracy of 98% that was validated against the manual curation of video data. Furthermore, our convolution analysis revealed that, on average, our animals became disengaged with the behavioral task at ∼75 min, providing a quantitative basis for limiting experimental session durations. Overall, our approach offers a scalable, efficient, and accessible solution for automated scoring of rodent engagement during behavioral data collection.
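The rectangular kernel convolution described above amounts to a moving average over a two-state engagement series, with a threshold crossing marking the engaged-to-distracted transition. A minimal sketch, with an invented 0/1 engagement series and assumed kernel width and threshold (not the study's parameters):

```python
def boxcar_smooth(series, width):
    # Convolve a 0/1 engagement series with a rectangular (boxcar) kernel,
    # i.e. a centered moving average, clipping the window at the edges.
    half = width // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        window = series[lo:hi]
        out.append(sum(window) / len(window))
    return out

def first_disengagement(series, width=5, threshold=0.5):
    # Index where smoothed engagement first drops below the threshold,
    # taken as the engaged-to-distracted transition point; None if never.
    smoothed = boxcar_smooth(series, width)
    for i, v in enumerate(smoothed):
        if v < threshold:
            return i
    return None

# Invented example: engaged early in the session, distracted later.
engagement = [1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0]
print(first_disengagement(engagement))
```

Mapping the returned index back through the trial timestamps would yield a session-duration estimate like the ∼75 min figure reported above.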
Affiliation(s)
- Thomas J Smith
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas 75080
- Trevor R Smith
- Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, West Virginia 26506
- Fareeha Faruk
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas 75080
- Mihai Bendea
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas 75080
- Shreya Tirumala Kumara
- Department of Bioengineering, The University of Texas at Dallas, Richardson, Texas 75080
- Jeffrey R Capadona
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio 44106
- Advanced Platform Technology Center, Louis Stokes Cleveland Veterans Affairs Medical Center, Cleveland, Ohio 44106
- Joseph J Pancrazio
- Department of Bioengineering, The University of Texas at Dallas, Richardson, Texas 75080
16
Mohammed H, Daniel BK, Farella M. Smile analysis in dentistry and orthodontics - a review. J R Soc N Z 2024; 55:192-205. [PMID: 39649672 PMCID: PMC11619023 DOI: 10.1080/03036758.2024.2316226] [Received: 02/15/2023] [Accepted: 02/01/2024] [Indexed: 12/11/2024]
Abstract
The desire for an attractive smile is a major reason people seek orthodontic and other forms of cosmetic dental treatment. An understanding of the features of a smile is important for dental diagnosis and treatment planning. The common methods of smile analysis rely on the visual analysis of smile aesthetics using posed photographs and videos, and on gathering information about smiles through patient questionnaires and diaries. Recent emerging trends utilise artificial intelligence and automated systems capable of detecting and analysing smiles using motion capture, computer vision, computer graphics, infrared and thermal imaging, electromyography, and optical sensors. This review aims to provide an up-to-date summary of emerging trends in smile analysis in dentistry and orthodontics. Understanding the advantages and limitations of emerging tools for smile analysis will enable clinicians to provide tailored and up-to-date treatment plans.
Affiliation(s)
- Hisham Mohammed
- Discipline of Orthodontics, Faculty of Dentistry, University of Otago, Dunedin, New Zealand
- Ben K. Daniel
- Higher Education Development Centre, University of Otago, Dunedin, New Zealand
- Mauro Farella
- Discipline of Orthodontics, Faculty of Dentistry, University of Otago, Dunedin, New Zealand
- Discipline of Orthodontics, Department of Surgical Sciences, University of Cagliari, Cagliari, Italy
17
Bian Q, Castellani M, Shepherd D, Duan J, Ding Z. Gait Intention Prediction Using a Lower-Limb Musculoskeletal Model and Long Short-Term Memory Neural Networks. IEEE Trans Neural Syst Rehabil Eng 2024; 32:822-830. [PMID: 38345960 DOI: 10.1109/tnsre.2024.3365201] [Indexed: 02/15/2024]
Abstract
The prediction of gait motion intention is essential for achieving intuitive control of assistive devices and diagnosing gait disorders. To reduce the cost associated with using multimodal signals and signal processing, we proposed a novel method that integrates machine learning with musculoskeletal modelling techniques for the prediction of time-series joint angles, using only kinematic signals. Additionally, we hypothesised that a stacked long short-term memory (LSTM) neural network architecture can perform the task without relying on any ahead-of-motion features typically provided by electromyography signals. Optical cameras and inertial measurement unit (IMU) sensors were used to track level-gait kinematics. Joint angles were modelled using the musculoskeletal model, and the optimal LSTM architecture for the prediction task was determined. Joint angle predictions were performed for joints in the sagittal plane, benefiting from joint angle modelling using signals from optical cameras and IMU sensors. Our proposed method predicted upcoming joint angles over a prediction horizon of 10 ms, with an average root mean square error of 5.3° and a coefficient of determination of 0.81. Moreover, in support of our hypothesis, the recurrent stacked LSTM network demonstrated its ability to predict intended motion accurately and efficiently in gait, outperforming two other neural network architectures: a feedforward MLP and a hybrid LSTM-MLP. The method paves the way for the development of a cost-effective, single-modal control system for assistive devices in gait rehabilitation.
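The paper's predictor is a stacked LSTM, which is not reproduced here; the sketch below only illustrates the underlying supervised framing (sliding windows of past joint angles mapped to the angle one horizon ahead) together with a naive persistence baseline against which such predictors are judged by RMSE. The window length, horizon, and angle trace are invented for the example.

```python
import math

def make_windows(angles, history, horizon):
    # Turn a joint-angle time series into (input window, future target) pairs:
    # each sample maps `history` past angles to the angle `horizon` steps ahead.
    pairs = []
    for t in range(history, len(angles) - horizon + 1):
        x = angles[t - history:t]
        y = angles[t + horizon - 1]
        pairs.append((x, y))
    return pairs

def persistence_rmse(pairs):
    # Naive baseline: predict that the future angle equals the last observed one.
    errs = [(x[-1] - y) ** 2 for x, y in pairs]
    return math.sqrt(sum(errs) / len(errs))

# Invented sagittal-plane knee-angle trace (degrees) sampled during gait.
trace = [5, 8, 14, 22, 31, 38, 42, 40, 33, 24, 15, 8, 5]
pairs = make_windows(trace, history=4, horizon=1)
print(len(pairs), round(persistence_rmse(pairs), 2))
```

An LSTM trained on such pairs is worthwhile only if its RMSE beats this persistence baseline at the chosen horizon.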
18
Villalba-Meneses F, Guevara C, Lojan AB, Gualsaqui MG, Arias-Serrano I, Velásquez-López PA, Almeida-Galárraga D, Tirado-Espín A, Marín J, Marín JJ. Classification of the Pathological Range of Motion in Low Back Pain Using Wearable Sensors and Machine Learning. SENSORS (BASEL, SWITZERLAND) 2024; 24:831. [PMID: 38339548 PMCID: PMC10857033 DOI: 10.3390/s24030831] [Received: 10/17/2023] [Revised: 12/14/2023] [Accepted: 01/23/2024] [Indexed: 02/12/2024]
Abstract
Low back pain (LBP) is a highly common musculoskeletal condition and the leading cause of work absenteeism. This project aims to develop a medical test to help healthcare professionals decide on and assign physical treatment for patients with nonspecific LBP. The design uses machine learning (ML) models based on the classification of motion capture (MoCap) data obtained from range of motion (ROM) exercises performed by healthy participants and patients clinically diagnosed with LBP in Imbabura, Ecuador. The following seven ML algorithms were tested for evaluation and comparison: logistic regression, decision tree, random forest, support vector machine (SVM), k-nearest neighbor (KNN), multilayer perceptron (MLP), and gradient boosting algorithms. All ML techniques obtained an accuracy above 80%, and three models (SVM, random forest, and MLP) obtained an accuracy of >90%. SVM was found to be the best-performing algorithm. This article aims to improve the applicability of inertial MoCap in healthcare by making use of precise spatiotemporal measurements within a data-driven treatment approach to improve the quality of life of people with chronic LBP.
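None of the seven classifiers from the study is reproduced here; as a stand-in, the sketch below shows the shared pattern (fit on labelled ROM features, predict, report accuracy) using a deliberately simple nearest-centroid classifier on invented features.

```python
import math

def centroids(X, y):
    # Mean feature vector per class label.
    sums, counts = {}, {}
    for features, label in zip(X, y):
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {c: [s / counts[c] for s in acc] for c, acc in sums.items()}

def predict(model, features):
    # Assign the class whose centroid is nearest in Euclidean distance.
    return min(model, key=lambda c: math.dist(features, model[c]))

def accuracy(model, X, y):
    # Fraction of samples whose predicted label matches the true label.
    hits = sum(predict(model, f) == label for f, label in zip(X, y))
    return hits / len(y)

# Invented ROM features: [trunk flexion range, lateral bend range] in degrees.
X = [[60, 25], [58, 27], [62, 24], [35, 12], [38, 14], [33, 11]]
y = ["healthy", "healthy", "healthy", "LBP", "LBP", "LBP"]
model = centroids(X, y)
print(accuracy(model, X, y))
```

A real evaluation like the study's would of course hold out a test split rather than score on the training data as this toy example does.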
Affiliation(s)
- Fernando Villalba-Meneses
- IDERGO (Research and Development in Ergonomics), I3A (Instituto de Investigación en Ingeniería de Aragón), University of Zaragoza, C/Mariano Esquillor s/n, 50018 Zaragoza, Spain; (J.M.); (J.J.M.)
- School of Biological Sciences and Engineering, Yachay Tech University, Hacienda San José s/n, San Miguel de Urcuquí 100119, Ecuador; (A.B.L.); (M.G.G.); (I.A.-S.); (P.A.V.-L.); (D.A.-G.)
- Department of Design and Manufacturing Engineering, University of Zaragoza, C/Mariano Esquillor s/n, 50018 Zaragoza, Spain
- Cesar Guevara
- Centro de Investigación en Mecatrónica y Sistemas Interactivos—MIST, Universidad Tecnológica Indoamérica, Quito 170103, Ecuador;
- Alejandro B. Lojan
- School of Biological Sciences and Engineering, Yachay Tech University, Hacienda San José s/n, San Miguel de Urcuquí 100119, Ecuador; (A.B.L.); (M.G.G.); (I.A.-S.); (P.A.V.-L.); (D.A.-G.)
- Mario G. Gualsaqui
- School of Biological Sciences and Engineering, Yachay Tech University, Hacienda San José s/n, San Miguel de Urcuquí 100119, Ecuador; (A.B.L.); (M.G.G.); (I.A.-S.); (P.A.V.-L.); (D.A.-G.)
- Isaac Arias-Serrano
- School of Biological Sciences and Engineering, Yachay Tech University, Hacienda San José s/n, San Miguel de Urcuquí 100119, Ecuador; (A.B.L.); (M.G.G.); (I.A.-S.); (P.A.V.-L.); (D.A.-G.)
- Paolo A. Velásquez-López
- School of Biological Sciences and Engineering, Yachay Tech University, Hacienda San José s/n, San Miguel de Urcuquí 100119, Ecuador; (A.B.L.); (M.G.G.); (I.A.-S.); (P.A.V.-L.); (D.A.-G.)
- Diego Almeida-Galárraga
- School of Biological Sciences and Engineering, Yachay Tech University, Hacienda San José s/n, San Miguel de Urcuquí 100119, Ecuador; (A.B.L.); (M.G.G.); (I.A.-S.); (P.A.V.-L.); (D.A.-G.)
- Andrés Tirado-Espín
- School of Mathematical and Computational Sciences, Yachay Tech University, Hacienda San José s/n, San Miguel de Urcuquí 100119, Ecuador;
- Javier Marín
- IDERGO (Research and Development in Ergonomics), I3A (Instituto de Investigación en Ingeniería de Aragón), University of Zaragoza, C/Mariano Esquillor s/n, 50018 Zaragoza, Spain; (J.M.); (J.J.M.)
- Department of Design and Manufacturing Engineering, University of Zaragoza, C/Mariano Esquillor s/n, 50018 Zaragoza, Spain
- José J. Marín
- IDERGO (Research and Development in Ergonomics), I3A (Instituto de Investigación en Ingeniería de Aragón), University of Zaragoza, C/Mariano Esquillor s/n, 50018 Zaragoza, Spain; (J.M.); (J.J.M.)
- Department of Design and Manufacturing Engineering, University of Zaragoza, C/Mariano Esquillor s/n, 50018 Zaragoza, Spain
19
Armstrong K, Zhang L, Wen Y, Willmott AP, Lee P, Ye X. A marker-less human motion analysis system for motion-based biomarker identification and quantification in knee disorders. Front Digit Health 2024; 6:1324511. [PMID: 38384738 PMCID: PMC10880093 DOI: 10.3389/fdgth.2024.1324511] [Received: 10/19/2023] [Accepted: 01/09/2024] [Indexed: 02/23/2024]
Abstract
In recent years the healthcare industry has had increased difficulty seeing all low-risk patients, including but not limited to suspected osteoarthritis (OA) patients. To help address the increased waiting lists and shortages of staff, we propose a novel method of automated biomarker identification and quantification for monitoring treatment or disease progression through the analysis of clinical motion data captured from a standard RGB video camera. The proposed method allows for the measurement of biomechanics information and analysis of its clinical significance, as a cheap and sensitive alternative to traditional motion capture techniques. These methods and results validate the capabilities of standard RGB cameras in clinical environments to capture clinically relevant motion data. Our method focuses on generating 3D human shape and pose from 2D video data via adversarial training of a deep neural network with a self-attention mechanism that encodes both spatial and temporal information. Biomarker identification using Principal Component Analysis (PCA) produces representative features from the motion data, which are used to generate a clinical report automatically. These new biomarkers can then be used to assess the success of treatment, track the progress of rehabilitation, or monitor the progression of the disease. The methods were validated in a small clinical study in which a local anaesthetic was administered to a small population with knee pain, allowing the new representative biomarkers to be confirmed as statistically significant (p-value < 0.05). These significant biomarkers include the cumulative acceleration of elbow flexion/extension in a sit-to-stand, as well as the smoothness of knee and elbow flexion/extension in both a squat and a sit-to-stand.
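The PCA step described above extracts representative directions of variation from motion features. Below is a minimal sketch of finding the leading principal component via power iteration on the scatter matrix, in plain Python with an invented two-feature dataset; the study's actual pipeline and feature set are not reproduced.

```python
import math

def leading_component(X, iters=200):
    # First principal component of row-sample matrix X via power iteration
    # on the scatter matrix S = C^T C of the mean-centred data C.
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[v - means[j] for j, v in enumerate(row)] for row in X]
    S = [[sum(C[k][i] * C[k][j] for k in range(n)) for j in range(d)]
         for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(S[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Invented example: two motion features that vary together.
X = [[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, 7.8]]
pc1 = leading_component(X)
print([round(x, 3) for x in pc1])
```

Projecting each trial's features onto such components is what yields the low-dimensional "biomarker" scores whose statistical significance can then be tested.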
Affiliation(s)
- Kai Armstrong
- Laboratory of Vision Engineering, School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Lei Zhang
- Laboratory of Vision Engineering, School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Yan Wen
- Laboratory of Vision Engineering, School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Alexander P. Willmott
- School of Sport and Exercise Science, University of Lincoln, Lincoln, United Kingdom
- Paul Lee
- School of Sport and Exercise Science, University of Lincoln, Lincoln, United Kingdom
- MSK Doctors, Sleaford, United Kingdom
- Xujiong Ye
- Laboratory of Vision Engineering, School of Computer Science, University of Lincoln, Lincoln, United Kingdom
20
García-Luna MA, Ruiz-Fernández D, Tortosa-Martínez J, Manchado C, García-Jaén M, Cortell-Tormo JM. Transparency as a Means to Analyse the Impact of Inertial Sensors on Users during the Occupational Ergonomic Assessment: A Systematic Review. SENSORS (BASEL, SWITZERLAND) 2024; 24:298. [PMID: 38203160 PMCID: PMC10781389 DOI: 10.3390/s24010298] [Received: 11/14/2023] [Revised: 12/19/2023] [Accepted: 01/03/2024] [Indexed: 01/12/2024]
Abstract
The literature has yielded promising data over the past decade regarding the use of inertial sensors for the analysis of occupational ergonomics. However, despite their significant advantages (e.g., portability, lightness, low cost), their widespread implementation in actual workplaces has not yet been realized, possibly due to wearer discomfort or potential alteration of the worker's behaviour. This systematic review has two main objectives: (i) to synthesize and evaluate studies that have employed inertial sensors in ergonomic analysis based on the RULA method; and (ii) to propose a system for evaluating the transparency of this technology to the user, as a potential factor that could influence the behaviour and/or movements of the worker. A search was conducted on the Web of Science and Scopus databases. The studies were summarized and categorized based on the type of industry, objective, type and number of sensors used, body parts analysed, combination (or not) with other technologies, real or controlled environment, and transparency. A total of 17 studies were included in this review. The Xsens MVN system was the most widely used, and the majority of studies were classified as having a moderate level of transparency. It is noteworthy, however, that worryingly few studies have been conducted in uncontrolled real environments.
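The RULA method referenced above scores posture from joint angles via lookup tables, which is what makes it a natural fit for inertial-sensor pipelines. As a small illustrative fragment, the sketch below encodes only the standard RULA upper-arm base score from shoulder flexion; the full method combines many body segments and adjustment factors (shoulder raised, arm abducted, arm supported), all omitted here.

```python
def rula_upper_arm_score(flexion_deg):
    # Simplified RULA upper-arm base score from shoulder flexion angle
    # in degrees (negative = extension). Adjustment factors are omitted.
    if -20 <= flexion_deg <= 20:
        return 1  # 20 deg extension to 20 deg flexion
    if flexion_deg < -20 or flexion_deg <= 45:
        return 2  # >20 deg extension, or 20-45 deg flexion
    if flexion_deg <= 90:
        return 3  # 45-90 deg flexion
    return 4      # >90 deg flexion

# An IMU-based pipeline would feed estimated flexion angles per frame:
for angle in (-30, 10, 30, 60, 120):
    print(angle, rula_upper_arm_score(angle))
```

In the reviewed studies, such per-segment scores computed from IMU joint angles replace the assessor's visual angle estimates.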
Affiliation(s)
- Marco A. García-Luna
- Department of General and Specific Didactics, Faculty of Education, University of Alicante, 03690 Alicante, Spain; (J.T.-M.); (C.M.); (M.G.-J.); (J.M.C.-T.)
- Daniel Ruiz-Fernández
- Department of Computer Science and Technology, University of Alicante, 03690 Alicante, Spain;
- Juan Tortosa-Martínez
- Department of General and Specific Didactics, Faculty of Education, University of Alicante, 03690 Alicante, Spain; (J.T.-M.); (C.M.); (M.G.-J.); (J.M.C.-T.)
- Carmen Manchado
- Department of General and Specific Didactics, Faculty of Education, University of Alicante, 03690 Alicante, Spain; (J.T.-M.); (C.M.); (M.G.-J.); (J.M.C.-T.)
- Miguel García-Jaén
- Department of General and Specific Didactics, Faculty of Education, University of Alicante, 03690 Alicante, Spain; (J.T.-M.); (C.M.); (M.G.-J.); (J.M.C.-T.)
- Juan M. Cortell-Tormo
- Department of General and Specific Didactics, Faculty of Education, University of Alicante, 03690 Alicante, Spain; (J.T.-M.); (C.M.); (M.G.-J.); (J.M.C.-T.)
21
Bhupal N, Bures L, Peterson E, Nicol S, Figeys M, Cruz AM. Technological interventions in Functional Capacity Evaluations: An insight into current applications. Work 2024; 79:1613-1626. [PMID: 38875068 DOI: 10.3233/wor-230560] [Indexed: 06/16/2024]
Abstract
BACKGROUND Functional Capacity Evaluation (FCE) is a crucial component of return-to-work decision making. However, clinician-based physical FCE interpretation may introduce variability and biases. The rise of technological applications such as machine learning and artificial intelligence could ensure consistent and precise results. OBJECTIVE This review investigates the application of information and communication technologies (ICT) in physical FCEs specific to return-to-work assessments. METHODS Adhering to the PRISMA guidelines, a search was conducted across five databases, extracting study specifics, populations, and technological tools employed, through dual independent reviews. RESULTS Nine studies were identified that used ICT in FCEs. These technologies included electromyography, heart rate monitors, cameras, motion detectors, and specific software. Notably, although some devices are commercially available, these technologies were at a technology readiness level of 5-6 within the field of FCE. A prevailing trend was the combined use of diverse technologies rather than a single, unified solution. Moreover, the primary emphasis was on the application of technology within study protocols, rather than a direct evaluation of the technology's usability and feasibility. CONCLUSION The literature underscores limited ICT integration in FCEs. The current landscape of FCEs, marked by a high dependence on clinician observations, presents challenges regarding consistency and cost-effectiveness. There is an evident need for a standardized technological approach that introduces objective metrics to streamline the FCE process and potentially enhance its outcomes.
Affiliation(s)
- Nake Bhupal
- Department of Occupational Therapy, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, AB, Canada
- Laura Bures
- Department of Occupational Therapy, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, AB, Canada
- Emika Peterson
- Department of Occupational Therapy, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, AB, Canada
- Spencer Nicol
- Department of Occupational Therapy, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, AB, Canada
- Mathieu Figeys
- Department of Occupational Therapy, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, AB, Canada
- Antonio Miguel Cruz
- Department of Occupational Therapy, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, AB, Canada
- Glenrose Rehabilitation Research, Innovation & Technology (GRRIT), Glenrose Rehabilitation Hospital, Edmonton, AB, Canada
- School of Public Health Sciences, University of Waterloo, Waterloo, ON, Canada
22
Yaldiz CO, Sebkhi N, Bhavsar A, Wang J, Inan OT. Improving Reliability of Magnetic Localization Using Input Space Transformation. IEEE SENSORS JOURNAL 2023; 23:28390-28398. [PMID: 38962278 PMCID: PMC11218913 DOI: 10.1109/jsen.2023.3320033] [Indexed: 07/05/2024]
Abstract
Body motion tracking for medical applications has the potential to improve quality of life for people with physical or speech motor disorders. Current solutions available on the market are either inaccurate, not affordable, or impractical for a medical setting or home use. Magnetic localization can address these issues thanks to its high accuracy, simplicity of use, wearability, and use of inexpensive sensors such as magnetometers. However, sources of unreliability affect magnetometers to such an extent that a localization model trained in a controlled environment might exhibit poor tracking accuracy when deployed to end users. Traditional magnetic calibration methods, such as ellipsoid fit (EF), do not sufficiently attenuate the effect of these sources of unreliability to reach a positional accuracy that is both consistent and satisfactory for our target applications. To improve reliability, we developed a calibration method called post-deployment input space transformation (PDIST) that reduces the distribution shift in the magnetic measurements between model training and deployment. In this paper, we focused on changes in magnetization or in the magnetometer itself as sources of unreliability. Our results show that PDIST performs better than EF, decreasing positional errors by a factor of ~3 when magnetization is distorted, and up to ~7 when our localization model is tested on a different magnetometer than the one it was trained with. Furthermore, PDIST is shown to perform reliably by providing consistent results across all our data collections, which tested various combinations of the sources of unreliability.
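Ellipsoid fit, the baseline above, corrects both hard-iron (constant offset) and soft-iron (scaling/skew) distortions; a full implementation is beyond this sketch. Shown instead is the much simpler offset-only (hard-iron) correction, which assumes readings lie on a sphere shifted by a constant bias recoverable from per-axis extremes. The readings are invented, and this is not the authors' PDIST method.

```python
def hard_iron_offset(samples):
    # Estimate the constant (hard-iron) bias of a magnetometer from
    # readings collected while rotating the sensor through all attitudes:
    # with offset-only distortion the readings lie on a sphere shifted by
    # the bias, so the midpoint of per-axis extremes recovers it.
    return tuple(
        (min(s[i] for s in samples) + max(s[i] for s in samples)) / 2
        for i in range(3)
    )

def apply_offset(sample, bias):
    # Subtract the estimated bias from a raw reading.
    return tuple(v - b for v, b in zip(sample, bias))

# Invented readings: points on a radius-50 sphere shifted by bias (12, -7, 4).
readings = [
    (62, -7, 4), (-38, -7, 4),    # +/- x extremes
    (12, 43, 4), (12, -57, 4),    # +/- y extremes
    (12, -7, 54), (12, -7, -46),  # +/- z extremes
]
bias = hard_iron_offset(readings)
print(bias)
```

Soft-iron distortion turns the sphere into an ellipsoid, which is why the ellipsoid-fit baseline, and in turn PDIST, go beyond this offset-only correction.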
Affiliation(s)
- Cem O Yaldiz
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Nordine Sebkhi
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Arpan Bhavsar
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Jun Wang
- Department of Speech, Language, and Hearing Sciences and the Department of Neurology, The University of Texas at Austin, Austin, TX 78712, USA
- Omer T Inan
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
23
Ma T, Zhang Y, Choi SD, Xiong S. Modelling for design and evaluation of industrial exoskeletons: A systematic review. Applied Ergonomics 2023; 113:104100. [PMID: 37490791] [DOI: 10.1016/j.apergo.2023.104100]
Abstract
Industrial exoskeletons are developed to relieve workers' physical demands in the workplace and to alleviate ergonomic issues associated with work-related musculoskeletal disorders. As a safe and economical alternative to empirical/experimental methods, modelling is considered a powerful tool for the design and evaluation of industrial exoskeletons. This systematic review aims to provide a comprehensive understanding of the current literature on the design and evaluation of industrial exoskeletons through modelling. A systematic search of five electronic databases was conducted using general keywords, covering the last two decades (2003-2022). Out of the 701 records initially retrieved, 33 eligible articles were included and analyzed in the final review, presenting a variety of model inputs, model development approaches, and model outputs. This review revealed that existing modelling methods can evaluate the biomechanical and physiological effects of industrial exoskeletons and provide some design parameters. However, modelling is currently unable to cover some of the main evaluation metrics supported by experimental assessments, such as task performance, user experience/discomfort, and changes in metabolic cost. Standard guidelines for model construction and implementation, as well as validation of human-exoskeleton interactions, remain to be established.
Affiliation(s)
- Tiejun Ma
- Human Factors and Ergonomics Laboratory, Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, South Korea
- Yanxin Zhang
- Department of Exercise Sciences, University of Auckland, Newmarket, Auckland, New Zealand
- Sang D Choi
- Department of Global and Community Health, George Mason University, Fairfax, VA, 22030, USA
- Shuping Xiong
- Human Factors and Ergonomics Laboratory, Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, South Korea
24
Giraudet C, Moiroud C, Beaumont A, Gaulmin P, Hatrisse C, Azevedo E, Denoix JM, Ben Mansour K, Martin P, Audigié F, Chateau H, Marin F. Development of a Methodology for Low-Cost 3D Underwater Motion Capture: Application to the Biomechanics of Horse Swimming. Sensors (Basel) 2023; 23:8832. [PMID: 37960531] [PMCID: PMC10647488] [DOI: 10.3390/s23218832]
Abstract
Hydrotherapy has been utilized in horse rehabilitation programs for over four decades. However, a comprehensive description of the swimming cycle of horses is still lacking. One of the challenges in studying this motion is 3D underwater motion capture, which holds potential not only for understanding equine locomotion but also for enhancing human swimming performance. In this study, a marker-based system that combines underwater cameras and markers drawn on horses is developed. This system enables the reconstruction of the 3D motion of the front and hind limbs of six horses throughout an entire swimming cycle, with a total of twelve recordings. The procedures for pre- and post-processing the videos are described in detail, along with an assessment of the estimated error. Reconstruction error was assessed on a checkerboard, yielding errors of less than 10 mm for segments of tens of centimeters and less than 1 degree for angles of tens of degrees. The study computes the 3D joint angles of the front limbs (shoulder, elbow, carpus, and front fetlock) and hind limbs (hip, stifle, tarsus, and hind fetlock) during a complete swimming cycle for the six horses. The ranges of motion observed are as follows: shoulder: 17 ± 3°; elbow: 76 ± 11°; carpus: 99 ± 10°; front fetlock: 68 ± 12°; hip: 39 ± 3°; stifle: 68 ± 7°; tarsus: 99 ± 6°; hind fetlock: 94 ± 8°. Comparing the joint angles during a swimming cycle to those observed during classical gaits reveals a greater range of motion (ROM) for most joints during swimming, except for the front and hind fetlocks. This larger ROM is usually achieved through a larger maximal flexion angle (a smaller minimal joint angle). Finally, the versatility of the system allows applications beyond horses, including other large animals and even humans.
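The checkerboard-based error assessment described above amounts to comparing reconstructed 3D distances against known geometry. A minimal sketch of that bookkeeping, with invented example coordinates (not data from the study):

```python
import numpy as np

def segment_length_errors(points_3d, pairs, nominal):
    """Signed error between reconstructed point-pair distances and the
    known (nominal) length, in the same units as the input."""
    dists = np.array([np.linalg.norm(points_3d[i] - points_3d[j])
                      for i, j in pairs])
    return dists - nominal

# Hypothetical reconstructed corners along one 40 mm checkerboard row
# (coordinates in mm; values invented for illustration only).
corners = np.array([[0.0, 0.0, 0.0],
                    [40.3, 0.2, -0.1],
                    [79.8, -0.3, 0.2]])
errors = segment_length_errors(corners, pairs=[(0, 1), (1, 2)], nominal=40.0)
```

Aggregating such per-segment errors over many checkerboard poses gives the kind of sub-centimetre accuracy figure the study reports.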
Affiliation(s)
- Chloé Giraudet
- Laboratoire de BioMécanique et BioIngénierie (UMR CNRS 7338), Centre of Excellence for Human and Animal Movement Biomechanics (CoEMoB), Université de Technologie de Compiègne (UTC), Alliance Sorbonne Université, 60200 Compiègne, France
- Claire Moiroud
- CIRALE, USC 957 BPLC, Ecole Nationale Vétérinaire d’Alfort, 94700 Maisons-Alfort, France
- Audrey Beaumont
- CIRALE, USC 957 BPLC, Ecole Nationale Vétérinaire d’Alfort, 94700 Maisons-Alfort, France
- Pauline Gaulmin
- CIRALE, USC 957 BPLC, Ecole Nationale Vétérinaire d’Alfort, 94700 Maisons-Alfort, France
- Chloé Hatrisse
- CIRALE, USC 957 BPLC, Ecole Nationale Vétérinaire d’Alfort, 94700 Maisons-Alfort, France
- Univ Lyon, Univ Gustave Eiffel, Univ Claude Bernard Lyon 1, LBMC UMR_T 9406, 69622 Lyon, France
- Emeline Azevedo
- CIRALE, USC 957 BPLC, Ecole Nationale Vétérinaire d’Alfort, 94700 Maisons-Alfort, France
- Jean-Marie Denoix
- CIRALE, USC 957 BPLC, Ecole Nationale Vétérinaire d’Alfort, 94700 Maisons-Alfort, France
- Khalil Ben Mansour
- Laboratoire de BioMécanique et BioIngénierie (UMR CNRS 7338), Centre of Excellence for Human and Animal Movement Biomechanics (CoEMoB), Université de Technologie de Compiègne (UTC), Alliance Sorbonne Université, 60200 Compiègne, France
- Pauline Martin
- LIM France, Chemin Fontaine de Fanny, 24300 Nontron, France
- Fabrice Audigié
- CIRALE, USC 957 BPLC, Ecole Nationale Vétérinaire d’Alfort, 94700 Maisons-Alfort, France
- Henry Chateau
- CIRALE, USC 957 BPLC, Ecole Nationale Vétérinaire d’Alfort, 94700 Maisons-Alfort, France
- Frédéric Marin
- Laboratoire de BioMécanique et BioIngénierie (UMR CNRS 7338), Centre of Excellence for Human and Animal Movement Biomechanics (CoEMoB), Université de Technologie de Compiègne (UTC), Alliance Sorbonne Université, 60200 Compiègne, France
25
Ang Z. Application of IoT technology based on neural networks in basketball training motion capture and injury prevention. Prev Med 2023; 175:107660. [PMID: 37573953] [DOI: 10.1016/j.ypmed.2023.107660]
Abstract
Basketball players frequently perform demanding physical movements during games, which places a burden on their bodies and can easily lead to various sports injuries. Preventing sports injuries is therefore crucial in basketball teaching. This paper also studies basketball motion trajectory capture, which preserves the motion posture of the target person in three-dimensional space. Because machine-vision motion capture systems often encounter occlusion or self-occlusion in application scenes, human motion capture remains a challenging research problem. This article designs a multi-perspective human motion trajectory capture framework that uses a deep-learning-based two-dimensional pose estimation algorithm to estimate the position distribution of human joint points in the two-dimensional image from each perspective. By combining knowledge of the camera poses across perspectives, these distributions are lifted to the three-dimensional spatial distribution of the joint points, yielding the final estimate of the target's 3D pose. The article applies research results on neural networks and IoT devices to basketball motion capture methods, further developing basketball motion capture systems.
Affiliation(s)
- Zhao Ang
- Hui Shang Vocational College, Hefei 230022, China.
26
Salisu S, Ruhaiyem NIR, Eisa TAE, Nasser M, Saeed F, Younis HA. Motion Capture Technologies for Ergonomics: A Systematic Literature Review. Diagnostics (Basel) 2023; 13:2593. [PMID: 37568956] [PMCID: PMC10416907] [DOI: 10.3390/diagnostics13152593]
Abstract
Musculoskeletal disorders are a difficult challenge faced by the working population. Motion capture (MoCap) is used to record the movement of people for clinical, ergonomic and rehabilitation solutions. However, knowledge barriers have made these MoCap systems difficult to use for many people. Despite this, no state-of-the-art literature review on MoCap systems for human clinical, rehabilitation and ergonomic analysis has been conducted. Medical diagnosis using AI applies machine learning algorithms and motion capture technologies to analyze patient data, enhancing diagnostic accuracy, enabling early disease detection and facilitating personalized treatment plans; it harnesses data-driven insights for improved patient outcomes and efficient clinical decision-making. The current review aimed to investigate: (i) the MoCap systems most used for clinical work, ergonomics and rehabilitation, (ii) their applications and (iii) the target populations. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Google Scholar, PubMed, Scopus and Web of Science were searched for relevant published articles. The articles obtained were scrutinized by reading the abstracts and titles to determine their eligibility, and articles with insufficient or irrelevant information were excluded from the screening. The search included studies published between 2013 and 2023 (with additional criteria). A total of 40 articles were eligible for review. The selected articles were further categorized in terms of the types of MoCap used, their application and the domain of the experiments. This review will serve as a guide for researchers and organizational management.
Affiliation(s)
- Sani Salisu
- School of Computer Sciences, Universiti Sains Malaysia, Gelugor 11800, Malaysia
- Department of Information Technology, Federal University Dutse, Dutse 720101, Nigeria
- Maged Nasser
- Computer & Information Sciences Department, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Malaysia
- Faisal Saeed
- DAAI Research Group, Department of Computing and Data Science, School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7XG, UK
- Hussain A. Younis
- School of Computer Sciences, Universiti Sains Malaysia, Gelugor 11800, Malaysia
- College of Education for Women, University of Basrah, Basrah 61004, Iraq
27
Tian W, Ding Y, Du X, Li K, Wang Z, Wang C, Deng C, Liao W. A Review of Intelligent Assembly Technology of Small Electronic Equipment. Micromachines 2023; 14:1126. [PMID: 37374711] [DOI: 10.3390/mi14061126]
Abstract
Electronic equipment, including phased array radars, satellites, high-performance computers, etc., has been widely used in military and civilian fields. Its importance and significance are self-evident. Electronic equipment has many small components, various functions, and complex structures, making assembly an essential step in the manufacturing process of electronic equipment. In recent years, the traditional assembly methods have had difficulty meeting the increasingly complex assembly needs of military and civilian electronic equipment. With the rapid development of Industry 4.0, emerging intelligent assembly technology is replacing the original "semi-automatic" assembly technology. Aiming at the assembly requirements of small electronic equipment, we first evaluate the existing problems and technical difficulties. Then, we analyze the intelligent assembly technology of electronic equipment from three aspects: visual positioning, path and trajectory planning, and force-position coordination control technology. Further, we describe and summarize the research status and the application of the technology and discuss possible future research directions in the intelligent assembly technology of small electronic equipment.
Affiliation(s)
- Wei Tian
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Yifan Ding
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Xiaodong Du
- No. 29 Research Institute of CETC, Chengdu 610036, China
- Ke Li
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Zihang Wang
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Changrui Wang
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Chao Deng
- No. 29 Research Institute of CETC, Chengdu 610036, China
- Wenhe Liao
- School of Mechanical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
28
Digital manufacturing of personalised footwear with embedded sensors. Sci Rep 2023; 13:1962. [PMID: 36737477] [PMCID: PMC9898262] [DOI: 10.1038/s41598-023-29261-0]
Abstract
The strong clinical demand for more accurate and personalized health monitoring technologies has called for the development of additively manufactured wearable devices. While the materials palette for additive manufacturing continues to expand, the integration of materials, designs and digital fabrication methods in a unified workflow remains challenging. In this work, a 3D printing platform is proposed for the integrated fabrication of silicone-based soft wearables with embedded piezoresistive sensors. Silicone-based inks containing cellulose nanocrystals and/or carbon black fillers were thoroughly designed and used for the direct ink writing of a shoe insole demonstrator with encapsulated sensors capable of measuring both normal and shear forces. By fine-tuning the material properties to the expected plantar pressures, the patient-customized shoe insole was fully 3D printed at room temperature to measure in-situ gait forces during physical activity. Moreover, the digitized approach allows for rapid adaptation of the sensor layout to meet specific user needs and thereby fabricate improved insoles in multiple quick iterations. The developed materials and workflow enable a new generation of fully 3D printed soft electronic devices for health monitoring.
29
Lorenzini M, Lagomarsino M, Fortini L, Gholami S, Ajoudani A. Ergonomic human-robot collaboration in industry: A review. Front Robot AI 2023; 9:813907. [PMID: 36743294] [PMCID: PMC9893795] [DOI: 10.3389/frobt.2022.813907]
Abstract
In the current industrial context, the importance of assessing and improving workers' health conditions is widely recognised. Both physical and psycho-social factors contribute to jeopardising the underlying comfort and well-being, boosting the occurrence of diseases and injuries, and affecting their quality of life. Human-robot interaction and collaboration frameworks stand out among the possible solutions to prevent and mitigate workplace risk factors. The increasingly advanced control strategies and planning schemes featured by collaborative robots have the potential to foster fruitful and efficient coordination during the execution of hybrid tasks, by meeting their human counterparts' needs and limits. To this end, a thorough and comprehensive evaluation of an individual's ergonomics, i.e. direct effect of workload on the human psycho-physical state, must be taken into account. In this review article, we provide an overview of the existing ergonomics assessment tools as well as the available monitoring technologies to drive and adapt a collaborative robot's behaviour. Preliminary attempts of ergonomic human-robot collaboration frameworks are presented next, discussing state-of-the-art limitations and challenges. Future trends and promising themes are finally highlighted, aiming to promote safety, health, and equality in worldwide workplaces.
Affiliation(s)
- Marta Lorenzini
- Human-Robot Interfaces and Physical Interaction Laboratory, Italian Institute of Technology, Genoa, Italy
- Marta Lagomarsino
- Human-Robot Interfaces and Physical Interaction Laboratory, Italian Institute of Technology, Genoa, Italy
- Neuroengineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Polytechnic University of Milan, Milan, Italy
- Luca Fortini
- Human-Robot Interfaces and Physical Interaction Laboratory, Italian Institute of Technology, Genoa, Italy
- Neuroengineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Polytechnic University of Milan, Milan, Italy
- Soheil Gholami
- Human-Robot Interfaces and Physical Interaction Laboratory, Italian Institute of Technology, Genoa, Italy
- Neuroengineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Polytechnic University of Milan, Milan, Italy
- Arash Ajoudani
- Human-Robot Interfaces and Physical Interaction Laboratory, Italian Institute of Technology, Genoa, Italy
30
Janela D, Costa F, Weiss B, Areias AC, Molinos M, Scheer JK, Lains J, Bento V, Cohen SP, Correia FD, Yanamadala V. Effectiveness of biofeedback-assisted asynchronous telerehabilitation in musculoskeletal care: A systematic review. Digit Health 2023; 9:20552076231176696. [PMID: 37325077] [PMCID: PMC10262679] [DOI: 10.1177/20552076231176696]
Abstract
Background Musculoskeletal conditions are the leading cause of disability worldwide. Telerehabilitation may be a viable option in the management of these conditions, facilitating access and patient adherence. Nevertheless, the impact of biofeedback-assisted asynchronous telerehabilitation remains unknown. Objective To systematically review and assess the effectiveness of exercise-based asynchronous biofeedback-assisted telerehabilitation on pain and function in individuals with musculoskeletal conditions. Methods This systematic review followed Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines. The search was conducted using three databases: PubMed, Scopus, and PEDro. Study criteria included articles written in English and published from January 2017 to August 2022, reporting interventional trials evaluating exercise-based asynchronous telerehabilitation using biofeedback in adults with musculoskeletal disorders. The risks of bias and certainty of evidence were appraised using the Cochrane tool and Grading of Recommendations, Assessment, Development, and Evaluation (GRADE), respectively. The results are narratively summarized, and the effect sizes of the main outcomes were calculated. Results Fourteen trials were included: 10 using motion tracker technology (N = 1284) and four with camera-based biofeedback (N = 467). Telerehabilitation with motion trackers yields at least similar improvements in pain and function in people with musculoskeletal conditions (effect sizes: 0.19-1.45; low certainty of evidence). Uncertain evidence exists for the effectiveness of camera-based telerehabilitation (effect sizes: 0.11-0.13; very low evidence). No study found superior results in a control group. Conclusions Asynchronous telerehabilitation may be an option in the management of musculoskeletal conditions. Considering its potential for scalability and access democratization, additional high-quality research is needed to address long-term outcomes, comparativeness, and cost-effectiveness and identify treatment responders.
Affiliation(s)
- Brandon Weiss
- Lake Erie College of Osteopathic Medicine, Erie, PA, USA
- Justin K. Scheer
- Department of Neurological Surgery, University of California, San Francisco, CA, USA
- Jorge Lains
- Rovisco Pais Medical and Rehabilitation Centre, Tocha, Portugal
- Faculty of Medicine, Coimbra University, Coimbra, Portugal
- Steven P. Cohen
- Departments of Anesthesiology & Critical Care Medicine, Physical Medicine and Rehabilitation, Neurology, and Psychiatry and Behavioral Sciences, Johns Hopkins School of Medicine, Baltimore, MD, USA
- Departments of Anesthesiology and Physical Medicine and Rehabilitation and Anesthesiology, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Fernando Dias Correia
- Sword Health, Inc, Draper, UT, USA
- Neurology Department, Centro Hospitalar e Universitário do Porto, Porto, Portugal
- Vijay Yanamadala
- Sword Health, Inc, Draper, UT, USA
- Department of Surgery, Quinnipiac University Frank H. Netter School of Medicine, Hamden, CT, USA
- Department of Neurosurgery, Hartford Healthcare Medical Group, Westport, CT, USA
31
Arrowsmith C, Burns D, Mak T, Hardisty M, Whyne C. Physiotherapy Exercise Classification with Single-Camera Pose Detection and Machine Learning. Sensors (Basel) 2022; 23:363. [PMID: 36616961] [PMCID: PMC9824820] [DOI: 10.3390/s23010363]
Abstract
Access to healthcare, including physiotherapy, is increasingly occurring through virtual formats. At-home adherence to physical therapy programs is often poor and few tools exist to objectively measure participation. The aim of this study was to develop and evaluate the potential for performing automatic, unsupervised video-based monitoring of at-home low-back and shoulder physiotherapy exercises using a mobile phone camera. Joint locations were extracted from the videos of healthy subjects performing low-back and shoulder physiotherapy exercises using an open source pose detection framework. A convolutional neural network was trained to classify physiotherapy exercises based on the segments of keypoint time series data. The model's performance as a function of input keypoint combinations was studied in addition to its robustness to variation in the camera angle. The CNN model achieved optimal performance using a total of 12 pose estimation landmarks from the upper and lower body (low-back exercise classification: 0.995 ± 0.009; shoulder exercise classification: 0.963 ± 0.020). Training the CNN on a variety of angles was found to be effective in making the model robust to variations in video filming angle. This study demonstrates the feasibility of using a smartphone camera and a supervised machine learning model to effectively classify at-home physiotherapy participation and could provide a low-cost, scalable method for tracking adherence to physical therapy exercise programs in a variety of settings.
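The pipeline summarized above feeds fixed-length segments of keypoint time series to a CNN. A sketch of the windowing step that produces such segments (array shapes, window length, and stride are our assumptions, not the authors' settings):

```python
import numpy as np

def window_keypoints(series, win_len, stride):
    """Cut a (frames, keypoints, 2) pose time series into overlapping
    (win_len, keypoints, 2) windows -- the unit a classifier consumes."""
    starts = range(0, len(series) - win_len + 1, stride)
    return np.stack([series[s:s + win_len] for s in starts])

# 300 frames of 12 two-dimensional landmarks (12 keypoints echoes the
# configuration reported optimal above; window/stride are invented here).
series = np.random.default_rng(1).normal(size=(300, 12, 2))
batch = window_keypoints(series, win_len=60, stride=30)  # shape (9, 60, 12, 2)
```

Each window in `batch` would then be a single training or inference example for the exercise classifier.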
Affiliation(s)
- Colin Arrowsmith
- Orthopaedic Biomechanics Lab, Holland Bone and Joint Program, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Halterix Corporation, Toronto, ON M5E 1L4, Canada
- David Burns
- Orthopaedic Biomechanics Lab, Holland Bone and Joint Program, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Halterix Corporation, Toronto, ON M5E 1L4, Canada
- Division of Orthopaedic Surgery, University of Toronto, Toronto, ON M5T 1P5, Canada
- Thomas Mak
- Halterix Corporation, Toronto, ON M5E 1L4, Canada
- Michael Hardisty
- Orthopaedic Biomechanics Lab, Holland Bone and Joint Program, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Division of Orthopaedic Surgery, University of Toronto, Toronto, ON M5T 1P5, Canada
- Cari Whyne
- Orthopaedic Biomechanics Lab, Holland Bone and Joint Program, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Division of Orthopaedic Surgery, University of Toronto, Toronto, ON M5T 1P5, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
32
Liu K, Wan D, Wang W, Fei C, Zhou T, Guo D, Bai L, Li Y, Ni Z, Lu J. A Time-Division Position-Sensitive Detector Image System for High-Speed Multitarget Trajectory Tracking. Adv Mater 2022; 34:e2206638. [PMID: 36114665] [DOI: 10.1002/adma.202206638]
Abstract
High-speed trajectory tracking with real-time processing capability is particularly important in the fields of pilotless automobiles, guidance systems, robotics, and filmmaking. The conventional optical approach to high-speed trajectory tracking involves charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) image sensors, which suffer from trade-offs between resolution and frame rate, from system complexity, and from enormous data-analysis requirements. Here, a high-speed trajectory tracking system is designed by using a time-division position-sensitive detector (TD-PSD) based on a graphene-silicon Schottky heterojunction. Benefiting from the high-speed optoelectronic response and sub-micrometer positional accuracy of the TD-PSD, multitarget real-time trajectory tracking is realized, with a maximum image output frame rate of up to 62,000 frames per second. Moreover, multichannel trajectory tracking and image-distortion correction functionalities are realized by TD-PSD systems through frequency-related image preprocessing, which significantly improves the capacity for real-time information processing and image quality in complicated light environments.
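By way of background for the PSD concept above: a conventional 1-D lateral-effect PSD infers the light spot's position from how the photocurrent divides between two edge electrodes, rather than from a pixel array. A textbook-style sketch of that readout (the graphene-silicon TD-PSD adds time-division multiplexing on top of this general principle):

```python
def psd_position(i1, i2, length):
    """1-D lateral-effect PSD: light-spot offset from the detector centre,
    computed from the photocurrents collected at the two edge electrodes.
    `length` is the active length of the detector."""
    return 0.5 * length * (i2 - i1) / (i1 + i2)

# A spot 2.5 mm off-centre on a 10 mm detector splits the current 1:3
x = psd_position(1.0, 3.0, length=10.0)  # -> 2.5 (mm)
```

Because the position is a simple ratio of two analog currents, the readout rate is limited only by the photodetector response, which is what enables frame rates far beyond pixel-array sensors.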
Affiliation(s)
- Kaiyang Liu
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
- Dongyang Wan
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
- Wenhui Wang
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
- Cheng Fei
- Shandong University, Center for Optics Research and Engineering, Qingdao, Shandong, 266237, P. R. China
- Tao Zhou
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
- Dingli Guo
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
- Lin Bai
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
- Yongfu Li
- Shandong University, Center for Optics Research and Engineering, Qingdao, Shandong, 266237, P. R. China
- Zhenhua Ni
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
- Purple Mountain Laboratories, Nanjing, 211111, China
- Junpeng Lu
- School of Physics, Frontiers Science Center for Mobile Information Communication and Security, Quantum Information Research Center, Southeast University, Nanjing, 211189, China
33
Weiss S, Weisse V, Korthaus A, Bannas P, Frosch KH, Schlickewei C, Barg A, Priemel M. Clinical Presentation and MRI Characteristics of Appendicular Soft Tissue Lymphoma: A Systematic Review. Diagnostics (Basel) 2022; 12:1623. [PMID: 35885528] [PMCID: PMC9320678] [DOI: 10.3390/diagnostics12071623]
Abstract
Appendicular soft tissue lymphoma (ASTL) is rare and is frequently misinterpreted as soft tissue sarcoma (STS). Studies investigating magnetic resonance imaging (MRI) characteristics of ASTL are scarce and showed heterogeneous investigation criteria and results. The purpose of this study was to systematically review clinical presentations and MRI characteristics of ASTL as described in the current literature. For that purpose, we performed a systematic literature review in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Patient demographics, clinical presentation and MRI imaging characteristics of ASTL were investigated, resulting in nine included studies reporting a total of 77 patients. Signal intensity of lymphoma compared to muscle tissue was mostly described as isointense (53%) or slightly hyperintense (39%) in T1-weighted images and always as hyperintense in proton-density- and T2-weighted images. Multicompartmental involvement was reported in 59% of cases and subcutaneous stranding in 74%. Long segmental involvement was present in 80% of investigated cases. Involvement of neurovascular structures was reported in 41% of cases and the presence of traversing vessels in 83% of patients. The presence of these findings should lead to the inclusion of ASTL in the differential diagnosis of soft tissue masses.
Collapse
Affiliation(s)
- Sebastian Weiss
- Department of Trauma and Orthopaedic Surgery, University Medical Center Hamburg-Eppendorf, 20251 Hamburg, Germany
| | - Valentin Weisse
- Department of Trauma and Orthopaedic Surgery, University Medical Center Hamburg-Eppendorf, 20251 Hamburg, Germany
| | - Alexander Korthaus
- Department of Trauma and Orthopaedic Surgery, University Medical Center Hamburg-Eppendorf, 20251 Hamburg, Germany
| | - Peter Bannas
- Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, 20251 Hamburg, Germany
| | - Karl-Heinz Frosch
- Department of Trauma and Orthopaedic Surgery, University Medical Center Hamburg-Eppendorf, 20251 Hamburg, Germany
- Department of Trauma Surgery, Orthopaedics and Sports Traumatology, BG Klinikum Hamburg, 21033 Hamburg, Germany
| | - Carsten Schlickewei
- Department of Trauma and Orthopaedic Surgery, University Medical Center Hamburg-Eppendorf, 20251 Hamburg, Germany
| | - Alexej Barg
- Department of Trauma and Orthopaedic Surgery, University Medical Center Hamburg-Eppendorf, 20251 Hamburg, Germany
- Department of Orthopaedics, University of Utah, Salt Lake City, UT 84108, USA
| | - Matthias Priemel
- Department of Trauma and Orthopaedic Surgery, University Medical Center Hamburg-Eppendorf, 20251 Hamburg, Germany
| |
Collapse
|
34
|
Application Effect of Motion Capture Technology in Basketball Resistance Training and Shooting Hit Rate in Immersive Virtual Reality Environment. Computational Intelligence and Neuroscience 2022; 2022:4584980. [PMID: 35785072 PMCID: PMC9249460 DOI: 10.1155/2022/4584980] [Received: 04/13/2022] [Revised: 05/19/2022] [Accepted: 05/31/2022] [Indexed: 11/18/2022]
Abstract
With the progress of society, sports have become a mainstream part of social development, and strengthening the athletic ability of basketball players can effectively improve their shooting percentage. Firstly, virtual reality (VR) technology and motion capture technology are summarized. Secondly, the resistance training and shooting training of basketball players are analyzed and explained. Finally, an algorithm based on motion capture technology is designed to capture and optimize the movements of athletes, and a comprehensive evaluation of the players' shooting percentage is carried out. The results show that the proposed motion capture technology effectively captures the shooting action of basketball players, and that resistance training improves their shooting percentage. Among all athletes, the highest improvement in shooting percentage is around 14% and the lowest around 4%; across all groups, athletes of different heights show the largest differences in improvement. This work thus demonstrates the role of VR technology in improving shooting percentage, providing technical support for basketball training and contributing to the development of athletes' overall athletic ability.
Collapse
|
35
|
Memar Ardestani M, Yan H. Noise Reduction in Human Motion-Captured Signals for Computer Animation based on B-Spline Filtering. Sensors (Basel) 2022; 22:4629. [PMID: 35746410 PMCID: PMC9230363 DOI: 10.3390/s22124629] [Received: 05/03/2022] [Revised: 06/06/2022] [Accepted: 06/17/2022] [Indexed: 06/01/2023]
Abstract
Motion capture is used to record the natural movements of humans performing a particular task. The recorded motions are extensively used to produce animation characters with natural movements and for virtual reality (VR) devices. The raw captured motion signals, however, contain noise introduced during the capturing process, so the signals must be effectively processed before they can be applied to animation characters. In this study, we analyzed several common methods for smoothing signals and compared the smoothed signals based on defined smoothness metrics. We conclude that filtering based on the B-spline least-squares method can achieve high-quality outputs with predetermined continuity and minimal parameter adjustment for a variety of motion signals.
Collapse
Affiliation(s)
- Mehdi Memar Ardestani
- Center for Intelligent Multidimensional Data Analysis, Hong Kong Science Park, Shatin, Hong Kong;
| | - Hong Yan
- Center for Intelligent Multidimensional Data Analysis, Hong Kong Science Park, Shatin, Hong Kong;
- Department of Electrical Engineering, City University of Hong Kong, Kowloon, Hong Kong
| |
Collapse
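The B-spline least-squares filtering advocated in the entry above can be sketched compactly. The sketch below is illustrative only, not the authors' implementation: the function names, the clamped cubic knot layout, and the normal-equation solve are our assumptions. It fits spline control points to a noisy 1-D motion channel by ordinary least squares over a Cox-de Boor basis.

```python
def cox_de_boor(j, k, t, knots):
    """Degree-k B-spline basis function B_{j,k}(t), Cox-de Boor recursion."""
    if k == 0:
        if knots[j] <= t < knots[j + 1]:
            return 1.0
        # close the final half-open interval so t == knots[-1] is covered
        return 1.0 if t == knots[-1] and knots[j] < knots[j + 1] == knots[-1] else 0.0
    out = 0.0
    if knots[j + k] > knots[j]:
        out += (t - knots[j]) / (knots[j + k] - knots[j]) * cox_de_boor(j, k - 1, t, knots)
    if knots[j + k + 1] > knots[j + 1]:
        out += (knots[j + k + 1] - t) / (knots[j + k + 1] - knots[j + 1]) * cox_de_boor(j + 1, k - 1, t, knots)
    return out


def solve_linear(a, b):
    """Solve a x = b by Gaussian elimination with partial pivoting (small systems)."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x


def bspline_smooth(ts, ys, n_ctrl, degree=3):
    """Least-squares fit of a clamped B-spline to samples (ts, ys); returns the
    smoothed signal evaluated back at ts."""
    interior = [ts[0] + (ts[-1] - ts[0]) * i / (n_ctrl - degree)
                for i in range(1, n_ctrl - degree)]
    knots = [ts[0]] * (degree + 1) + interior + [ts[-1]] * (degree + 1)
    design = [[cox_de_boor(j, degree, t, knots) for j in range(n_ctrl)] for t in ts]
    # normal equations: (A^T A) c = A^T y
    ata = [[sum(row[p] * row[q] for row in design) for q in range(n_ctrl)]
           for p in range(n_ctrl)]
    aty = [sum(design[i][p] * ys[i] for i in range(len(ts))) for p in range(n_ctrl)]
    ctrl = solve_linear(ata, aty)
    return [sum(design[i][j] * ctrl[j] for j in range(n_ctrl)) for i in range(len(ts))]
```

Because the clamped cubic spline space contains all global cubics, a signal that is already a straight line is reproduced exactly; fewer control points mean stronger smoothing. In practice one would likely reach for `scipy.interpolate.make_lsq_spline` rather than hand-rolling the solve.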
|
36
|
Scoring of Human Body-Balance Ability on Wobble Board Based on the Geometric Solution. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12125967] [Indexed: 02/04/2023]
Abstract
Many studies have reported that human body-balance ability is essential in the early detection and self-management of chronic diseases. However, devices to measure balance, such as motion capture systems and force plates, are expensive and require dedicated installation space as well as specialized knowledge for analysis. This study therefore aimed to propose and verify a new algorithm to score human body-balance ability on a wobble board (HBBAWB), based on a geometric solution using a cheap and portable device. Although the center of gravity (COG), the projection of the center of mass (COM) onto the fixed ground, is generally used as an index of balance ability, it is not appropriate when no fixed support surface exists, because the COG carries no information about the slope of the wobble board. This study therefore defined a new index, the perpendicular-projection point (PPP), namely the projection of the COM onto the tilted plane. The proposed geometric solution relates three points (the PPP, the COM, and the midpoint between the two feet) via linear regression. The experimental results showed that the geometric solution, which exploits the relationship between the three angles of the equivalent model, enables scoring of the HBBAWB.
Collapse
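The perpendicular-projection point defined above, the COM projected orthogonally onto the tilted board plane, is ordinary vector geometry. A minimal sketch, not the paper's code, with variable names of our choosing:

```python
def project_onto_plane(point, plane_point, normal):
    """Perpendicular projection of `point` onto the plane that passes through
    `plane_point` with normal vector `normal` (all 3-tuples)."""
    n2 = sum(c * c for c in normal)  # squared length of the normal
    # signed distance from the plane to the point, in units of the normal
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, normal)) / n2
    return tuple(p - d * n for p, n in zip(point, normal))
```

With a horizontal plane this reduces to the classical COG (dropping the COM straight down); with a tilted normal it yields the PPP, which is exactly the distinction the abstract draws.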
|
37
|
Auledas-Noguera M, Liaqat A, Tiwari A. Mobile Robots for In-Process Monitoring of Aircraft Systems Assemblies. Sensors (Basel) 2022; 22:3362. [PMID: 35591052 PMCID: PMC9104221 DOI: 10.3390/s22093362] [Received: 04/03/2022] [Revised: 04/22/2022] [Accepted: 04/22/2022] [Indexed: 06/15/2023]
Abstract
Currently, systems installed on large-scale aerospace structures are manually equipped by trained operators. To improve current methods, an automated system that ensures quality control and process adherence could be used. This work presents a mobile robot capable of autonomously inspecting aircraft systems and providing feedback to workers. The mobile robot can follow operators and localise the position of the inspection using a thermal camera and 2D lidars. While moving, a depth camera collects 3D data about the system being installed. The in-process monitoring algorithm uses this information to check if the system has been correctly installed. Finally, based on these measurements, indications are shown on a screen to provide feedback to the workers. The performance of this solution has been validated in a laboratory environment, replicating a trailing edge equipping task. During testing, the tracking and localisation systems have proven to be reliable. The in-process monitoring system was also found to provide accurate feedback to the operators. Overall, the results show that the solution is promising for industrial applications.
Collapse
Affiliation(s)
- Marc Auledas-Noguera
- Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield S1 3JD, UK;
| | - Amer Liaqat
- Assembly Innovation & Development, Airbus, Broughton CH4 0DR, UK;
| | - Ashutosh Tiwari
- Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield S1 3JD, UK;
| |
Collapse
|
38
|
Ergonomic Design of a Workplace Using Virtual Reality and a Motion Capture Suit. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12042150] [Indexed: 11/16/2022]
Abstract
Musculoskeletal disorders are among the most frequent disorders associated with manual work, and employers worldwide pay high costs for their treatment and prevention. We present an innovative method for designing an ergonomic workplace. The method uses new technologies and supports not only ergonomics but also a general improvement in the design of the manufacturing process. Although many researchers see huge potential for disruptive technologies such as virtual reality and motion capture in ergonomics, a comprehensive methodological basis for implementing them is still lacking. Our approach was designed using the expert group method. The manufacturing process and its ergonomics are validated using a motion capture (MoCap) suit and a head-mounted display (HMD); since there are no legislative restrictions on the tools used for ergonomic analyses, the outputs can be used for workplace scoring. First, the anthropometrics of the proband are measured. The proband is then immersed in virtual reality and goes through the manufacturing process while ergonomics data are collected. The design of one or more workplaces can be validated in real time based on reactions, measurements, and input, and after processing the data the workplace can be adjusted accordingly. The proposed method brings time and economic benefits to workplace design, optimises workplace ergonomics, and shortens the time required for designing the production line layout. It also includes optional validation steps using conventional methods; these steps were used to validate the method on a representative workplace in on-site experiments with 20 healthy operators working in automotive production (ages 22 to 35). A comparison study describes classic methods of workplace ergonomics evaluation and compares classic evaluation based on biomechanical analysis with modern evaluation using a MoCap suit coupled with virtual reality. This comparison study confirmed the validity of the method. The results also revealed further issues for examination, such as the role of peripheral vision and haptic feedback.
Collapse
|
39
|
Duarte J, Rodrigues F, Castelo Branco J. Sensing Technology Applications in the Mining Industry-A Systematic Review. International Journal of Environmental Research and Public Health 2022; 19:2334. [PMID: 35206524 PMCID: PMC8872082 DOI: 10.3390/ijerph19042334] [Received: 12/22/2021] [Revised: 02/07/2022] [Accepted: 02/10/2022] [Indexed: 11/16/2022]
Abstract
Introduction: Industry 4.0 has enhanced technological development in all fields. Currently, one can analyse, treat, and model completely different variables in real time, including production, environmental, and occupational variables. As a result, there has been a significant improvement in the quality of life of workers, the environment, and businesses in general, encouraging the implementation of continuous improvement measures. However, it is not entirely clear how the mining industry is evolving alongside this industrial evolution. With this in mind, this systematic review aimed to identify sensing technology applications within the sector, in order to assist the mining industry in its goal of evolving digitally. Methodology: The research and reporting of this article were carried out by means of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Results and discussion: A total of 29 papers were included in the study, with sensors applied in several fields, namely safety, management, and localisation. Three implementation phases were identified: prototype, trial, and (already) implemented. The overall results highlighted that many mechanisms need improvement in underground settings, which may be due to the particular safety challenges of underground mining. Conclusions: Ventilation and mapping are the primary issues to be solved in the underground setting; in the surface setting, the focus is on slope stability and on improving its monitoring and prevention. The literature screening revealed a tendency for these systems to keep advancing technologically, becoming increasingly intelligent. In the near future, a more technologically advanced mining industry is expected to arise, created and sustained by the optimisation of processes, equipment, and work practices, in order to improve both people's quality of life and the health of the environment.
Collapse
Affiliation(s)
- Joana Duarte
- Associated Laboratory for Energy, Transports and Aeronautics (PROA/LAETA), Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal;
| | - Fernanda Rodrigues
- RISCO, Civil Engineering Department, University of Aveiro, 3810-193 Aveiro, Portugal;
| | - Jacqueline Castelo Branco
- Associated Laboratory for Energy, Transports and Aeronautics (PROA/LAETA), Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal;
- Correspondence:
| |
Collapse
|
40
|
Motion Capture Sensor-Based Emotion Recognition Using a Bi-Modular Sequential Neural Network. Sensors (Basel) 2022; 22:403. [PMID: 35009944 PMCID: PMC8749847 DOI: 10.3390/s22010403] [Received: 12/03/2021] [Revised: 12/29/2021] [Accepted: 12/30/2021] [Indexed: 11/17/2022]
Abstract
Motion capture sensor-based gait emotion recognition is an emerging sub-domain of human emotion recognition. Its applications span a variety of fields including smart home design, border security, robotics, virtual reality, and gaming. In recent years, several deep learning-based approaches have been successful in solving the Gait Emotion Recognition (GER) problem. However, a vast majority of such methods rely on Deep Neural Networks (DNNs) with a significant number of model parameters, which leads to model overfitting as well as increased inference time. This paper contributes to the domain of knowledge by proposing a new lightweight bi-modular architecture with handcrafted features that is trained using an RMSprop optimizer and stratified data shuffling. The method is highly effective in correctly inferring human emotions from gait, achieving a micro-mean average precision of 0.97 on the Edinburgh Locomotive Mocap Dataset. It outperforms all recent deep-learning methods, while having the lowest inference time of 16.3 milliseconds per gait sample. This research study is beneficial to applications spanning various fields, such as emotionally aware assistive robotics, adaptive therapy and rehabilitation, and surveillance.
Collapse
|
41
|
Niemann F, Lüdtke S, Bartelt C, ten Hompel M. Context-Aware Human Activity Recognition in Industrial Processes. Sensors (Basel) 2021; 22:134. [PMID: 35009677 PMCID: PMC8749739 DOI: 10.3390/s22010134] [Received: 11/24/2021] [Revised: 12/13/2021] [Accepted: 12/21/2021] [Indexed: 05/14/2023]
Abstract
The automatic, sensor-based assessment of human activities is highly relevant for production and logistics, to optimise the economics and ergonomics of these processes. One challenge for accurate activity recognition in these domains is the context-dependence of activities: similar movements can correspond to different activities, depending on, e.g., the object handled or the location of the subject. In this paper, we propose to explicitly make use of such context information in an activity recognition model. Our first contribution is a publicly available, semantically annotated motion capture dataset of subjects performing order picking and packaging activities, in which context information is recorded explicitly. The second contribution is an activity recognition model that integrates movement data and context information. We empirically show that using context information increases activity recognition performance substantially. Additionally, we analyse which pieces of context information are most relevant for activity recognition. The insights provided by this paper can help others to design appropriate sensor set-ups in real warehouses for time management.
Collapse
Affiliation(s)
- Friedrich Niemann
- Chair of Materials Handling and Warehousing, TU Dortmund University, Joseph-von-Fraunhofer-Str. 2-4, 44227 Dortmund, Germany;
- Correspondence:
| | - Stefan Lüdtke
- Institute for Enterprise Systems, University of Mannheim, L15 1, 68131 Mannheim, Germany; (S.L.); (C.B.)
| | - Christian Bartelt
- Institute for Enterprise Systems, University of Mannheim, L15 1, 68131 Mannheim, Germany; (S.L.); (C.B.)
| | - Michael ten Hompel
- Chair of Materials Handling and Warehousing, TU Dortmund University, Joseph-von-Fraunhofer-Str. 2-4, 44227 Dortmund, Germany;
| |
Collapse
|
42
|
An Automated Recognition of Work Activity in Industrial Manufacturing Using Convolutional Neural Networks. Electronics 2021. [DOI: 10.3390/electronics10232946] [Indexed: 01/18/2023]
Abstract
The automated assessment and analysis of employee activity in a manufacturing enterprise operating in accordance with the concept of Industry 4.0 is essential for a quick and precise diagnosis of work quality, especially when training a new employee. In industrial settings, many approaches to the recognition and detection of work activity are based on Convolutional Neural Networks (CNNs). Despite the wide use of CNNs, it is difficult to find solutions that support automated checking of the work activities performed by trained employees. We propose a novel framework for the automatic generation of workplace instructions and real-time recognition of worker activities. The proposed method integrates a CNN, a CNN combined with a Support Vector Machine (SVM), and a region-based CNN (YOLOv3 Tiny) for recognizing and checking completed work tasks. First, video recordings of the work process are analyzed and reference video frames corresponding to the stages of the work activity are determined. Next, work-related features and objects are identified using the CNN with SVM (achieving 94% accuracy) and the YOLOv3 Tiny network, based on the characteristics of the reference frames. Additionally, a matching matrix between the reference frames and the test frames is built, using mean absolute error (MAE) as the measure of error between paired observations. Finally, the practical usefulness of the proposed approach is demonstrated by applying it to support the automatic training of new employees and to check the correctness of their work on solid-fuel boiler equipment in a manufacturing company. The developed information system can be integrated with other Industry 4.0 technologies introduced within an enterprise.
Collapse
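The frame-matching step named in the entry above, pairing test frames with reference frames under mean absolute error, can be sketched in a few lines. This is an illustration of the stated metric, not the authors' code; we assume frames are equally sized flattened grayscale arrays, and the function names are ours:

```python
def mae(frame_a, frame_b):
    """Mean absolute error between two equally sized flattened frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)


def matching_matrix(reference_frames, test_frames):
    """Matrix M[i][j] = MAE between reference frame i and test frame j."""
    return [[mae(r, t) for t in test_frames] for r in reference_frames]


def match_tests_to_references(reference_frames, test_frames):
    """Assign each test frame to the reference frame with minimal MAE."""
    m = matching_matrix(reference_frames, test_frames)
    return [min(range(len(reference_frames)), key=lambda i: m[i][j])
            for j in range(len(test_frames))]
```

A low MAE against the expected reference frame then indicates that the corresponding work stage was carried out as instructed, which is the check the framework performs in real time.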
|
43
|
Interoperability of Digital Tools for the Monitoring and Control of Construction Projects. Applied Sciences (Basel) 2021. [DOI: 10.3390/app112110370] [Indexed: 01/14/2023]
Abstract
Monitoring progress on a construction site during the construction phase is crucial. An inadequate understanding of project status can lead to mistakes and inappropriate actions, causing delays and increased costs. Monitoring and controlling projects via digital tools would reduce the risk of error and enable timely corrective actions. Although a wide range of technologies currently exists for these purposes, their adoption and the interoperability between them are still limited, so it is important to understand the possibilities for their integration and interoperable implementation. This article presents a bibliographic synthesis and interpretation of 30 nonconventional digital tools for monitoring progress, in terms of field data capture technologies (FDCT) and the communication and collaborative technologies (CT) responsible for information processing and management. This research aims to analyse the integration and interoperability of these technologies to demonstrate their potential for monitoring and controlling construction projects during the execution phase. A network analysis was conducted, and the results suggest that the triad formed by building information modeling (BIM), unmanned aerial vehicles (UAVs), and photogrammetry is an effective tool whose use extends not only to monitoring and control but to all phases of a project.
Collapse
|
44
|
Hallett M, DelRosso LM, Elble R, Ferri R, Horak FB, Lehericy S, Mancini M, Matsuhashi M, Matsumoto R, Muthuraman M, Raethjen J, Shibasaki H. Evaluation of movement and brain activity. Clin Neurophysiol 2021; 132:2608-2638. [PMID: 34488012 PMCID: PMC8478902 DOI: 10.1016/j.clinph.2021.04.023] [Received: 01/23/2021] [Revised: 04/07/2021] [Accepted: 04/25/2021] [Indexed: 11/25/2022]
Abstract
Clinical neurophysiology studies can contribute important information about the physiology of human movement and the pathophysiology and diagnosis of different movement disorders. Some techniques can be accomplished in a routine clinical neurophysiology laboratory and others require some special equipment. This review, initiating a series of articles on this topic, focuses on the methods and techniques. The methods reviewed include EMG, EEG, MEG, evoked potentials, coherence, accelerometry, posturography (balance), gait, and sleep studies. Functional MRI (fMRI) is also reviewed as a physiological method that can be used independently or together with other methods. A few applications to patients with movement disorders are discussed as examples, but the detailed applications will be the subject of other articles.
Collapse
Affiliation(s)
- Mark Hallett
- Human Motor Control Section, National Institute of Neurological Disorders and Stroke, NIH, Bethesda, MD, USA.
| | | | - Rodger Elble
- Department of Neurology, Southern Illinois University School of Medicine, Springfield, IL, USA
| | | | - Fay B Horak
- Department of Neurology, Oregon Health & Science University, Portland, OR, USA
| | - Stephan Lehericy
- Paris Brain Institute (ICM), Centre de NeuroImagerie de Recherche (CENIR), Team "Movement, Investigations and Therapeutics" (MOV'IT), INSERM U 1127, CNRS UMR 7225, Sorbonne Université, Paris, France
| | - Martina Mancini
- Department of Neurology, Oregon Health & Science University, Portland, OR, USA
| | - Masao Matsuhashi
- Department of Epilepsy, Movement Disorders and Physiology, Kyoto University Graduate, School of Medicine, Japan
| | - Riki Matsumoto
- Division of Neurology, Kobe University Graduate School of Medicine, Japan
| | - Muthuraman Muthuraman
- Section of Movement Disorders and Neurostimulation, Biomedical Statistics and Multimodal Signal Processing unit, Department of Neurology, Focus Program Translational Neuroscience (FTN), University Medical Center of the Johannes Gutenberg-University Mainz, Mainz, Germany
| | - Jan Raethjen
- Neurology Outpatient Clinic, Preusserstr. 1-9, 24105 Kiel, Germany
| | | |
Collapse
|
45
|
Huang X, Lin D, Liang Z, Deng Y, He Z, Wang M, Tan J, Li Y, Yang Y, Huang W. Mechanical Parameters and Trajectory of Two Chinese Cervical Manipulations Compared by a Motion Capture System. Front Bioeng Biotechnol 2021; 9:714292. [PMID: 34381767 PMCID: PMC8351596 DOI: 10.3389/fbioe.2021.714292] [Received: 05/25/2021] [Accepted: 06/29/2021] [Indexed: 12/29/2022]
Abstract
Objective: To compare the mechanical parameters and trajectories of the oblique pulling manipulation and the cervical rotation–traction manipulation. Methods: An experimental study measuring kinematic parameters and recording the motion trajectories of the two cervical manipulations was carried out. A total of 48 healthy volunteers participated and were randomly divided into two groups of 24, one for each manipulation. A clinician performed the corresponding manipulation in each group, and a motion capture system was used to monitor and analyze kinematic parameters during the operation. Results: The two cervical manipulations had similar thrust time, displacement, mean velocity, maximum velocity, and maximum acceleration. There were no significant differences in active and passive amplitudes between the two manipulations. The thrust amplitudes of the oblique pulling manipulation and the cervical rotation–traction manipulation were 5.735 ± 3.041° and 2.142 ± 1.742°, respectively; the thrust amplitude of the oblique pulling manipulation was significantly greater (P < 0.001). Conclusion: Compared with the oblique pulling manipulation, the cervical rotation–traction manipulation produces a smaller thrust amplitude.
Collapse
Affiliation(s)
- Xuecheng Huang
- National Key Discipline of Human Anatomy, School of Basic Medical Sciences, Southern Medical University, Guangzhou, China.,Guangdong Engineering Research Center for Translation of Medical 3D Printing Application, Southern Medical University, Guangzhou, China.,Guangdong Provincial Key Laboratory of Medical Biomechanics, Southern Medical University, Guangzhou, China.,Shenzhen Hospital of Guangzhou University of Chinese Medicine, Shenzhen, China
| | - Dongxin Lin
- National Key Discipline of Human Anatomy, School of Basic Medical Sciences, Southern Medical University, Guangzhou, China.,Guangdong Engineering Research Center for Translation of Medical 3D Printing Application, Southern Medical University, Guangzhou, China.,Guangdong Provincial Key Laboratory of Medical Biomechanics, Southern Medical University, Guangzhou, China
| | - Zeyu Liang
- National Key Discipline of Human Anatomy, School of Basic Medical Sciences, Southern Medical University, Guangzhou, China.,Guangdong Engineering Research Center for Translation of Medical 3D Printing Application, Southern Medical University, Guangzhou, China.,Guangdong Provincial Key Laboratory of Medical Biomechanics, Southern Medical University, Guangzhou, China
| | - Yuping Deng
- National Key Discipline of Human Anatomy, School of Basic Medical Sciences, Southern Medical University, Guangzhou, China.,Guangdong Engineering Research Center for Translation of Medical 3D Printing Application, Southern Medical University, Guangzhou, China.,Guangdong Provincial Key Laboratory of Medical Biomechanics, Southern Medical University, Guangzhou, China
| | - Zaopeng He
- Hand and Foot Surgery and Plastic Surgery, Affiliated Shunde Hospital of Guangzhou Medical University, Foshan, China
| | - Mian Wang
- National Key Discipline of Human Anatomy, School of Basic Medical Sciences, Southern Medical University, Guangzhou, China.,Guangdong Engineering Research Center for Translation of Medical 3D Printing Application, Southern Medical University, Guangzhou, China.,Guangdong Provincial Key Laboratory of Medical Biomechanics, Southern Medical University, Guangzhou, China
| | - Jinchuan Tan
- National Key Discipline of Human Anatomy, School of Basic Medical Sciences, Southern Medical University, Guangzhou, China.,Guangdong Engineering Research Center for Translation of Medical 3D Printing Application, Southern Medical University, Guangzhou, China.,Guangdong Provincial Key Laboratory of Medical Biomechanics, Southern Medical University, Guangzhou, China
| | - Yikai Li
- School of Chinese Medicine, Southern Medical University, Guangzhou, China
| | - Yang Yang
- National Key Discipline of Human Anatomy, School of Basic Medical Sciences, Southern Medical University, Guangzhou, China.,Guangdong Engineering Research Center for Translation of Medical 3D Printing Application, Southern Medical University, Guangzhou, China.,Guangdong Provincial Key Laboratory of Medical Biomechanics, Southern Medical University, Guangzhou, China
| | - Wenhua Huang
- National Key Discipline of Human Anatomy, School of Basic Medical Sciences, Southern Medical University, Guangzhou, China.,Guangdong Engineering Research Center for Translation of Medical 3D Printing Application, Southern Medical University, Guangzhou, China.,Guangdong Provincial Key Laboratory of Medical Biomechanics, Southern Medical University, Guangzhou, China
| |
Collapse
|
46
|
Liu J, Wang L, Zhou H. The Application of Human-Computer Interaction Technology Fused With Artificial Intelligence in Sports Moving Target Detection Education for College Athlete. Front Psychol 2021; 12:677590. [PMID: 34366996 PMCID: PMC8339562 DOI: 10.3389/fpsyg.2021.677590] [Received: 03/08/2021] [Accepted: 06/18/2021] [Indexed: 12/29/2022]
Abstract
The purposes of this study are to digitalize and intellectualize current professional sports training and to enrich the application scenarios of motion capture technology for moving targets, based on artificial intelligence (AI) and human–computer interaction (HCI). From the perspective of educational psychology, sports technique is a cognitive ability and a form of tacit knowledge, while language, images, and other methods play only an auxiliary role in sports training. Here, a general framework for a Knowledge-Based Coaching System (KBCS) is proposed, using HCI technology and sports knowledge to accomplish autonomous and intelligent sports training. The KBCS is then applied to table tennis training, where athletic performance is evaluated quantitatively through the calculation of motion features, motion recognition, and the division of hitting stages. Results demonstrate that the velocity calculated from positions after trajectory mosaicking shows better continuity when the initial frame of an unmarked segment is compared with the end frame of the marked segment. Typical serve and return trajectories for three serving modes (slight spin, top spin, and back spin), as well as the trajectories of common service and return errors, are obtained by classifying the serving and receiving of table tennis. Comparison shows that the serving accuracy of slight spin and back spin is better than that of top spin, and that a lower serve speed yields higher accuracy. Experiments with three participants show that the skill levels computed by the system are consistent with the actual situation, in terms of both the quality of the returned ball and the standard of the motion, proving that the proposed KBCS and algorithm are useful even with a small sample, thereby further improving the accuracy of athlete pose restoration in sports training.
Collapse
Affiliation(s)
- Jie Liu
- Sports Training and Health Care, Zhoukou Normal University, Zhoukou, China
- Le Wang
- Sports Training, Zhongyuan University of Technology, Zhengzhou, China
- Hang Zhou
- Computer Science and Technology, Zhoukou Normal University, Zhoukou, China
Collapse
|
47
|
González L, Álvarez JC, López AM, Álvarez D. Metrological Evaluation of Human-Robot Collaborative Environments Based on Optical Motion Capture Systems. SENSORS 2021; 21:s21113748. [PMID: 34071352 PMCID: PMC8198618 DOI: 10.3390/s21113748] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Revised: 05/16/2021] [Accepted: 05/26/2021] [Indexed: 11/16/2022]
Abstract
In the context of human-robot collaborative shared environments, there has been an increase in the use of optical motion capture (OMC) systems for human motion tracking. The accuracy and precision of OMC technology must be assessed to ensure safe human-robot interaction, but the accuracy specifications provided by manufacturers are easily affected by the many factors that influence the measurements. This article describes a new methodology for the metrological evaluation of a human-robot collaborative environment based on OMC systems. Inspired by the ASTM E3064 test guide, and taking advantage of an industrial robot already present in the production cell, the system is evaluated for mean error, error spread, and repeatability. A detailed statistical study of the error distribution across the capture area is carried out, supported by a Mann-Whitney U-test for median comparisons. Based on the results, optimal capture areas for using the capture system are suggested. The metrological characteristics obtained are compatible with, and comparable in quality to, those of other methods that do not require an industrial robot.
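The median-comparison step above can be illustrated with a minimal sketch: a two-sided Mann-Whitney U-test on the positional errors measured in two capture zones, using `scipy.stats.mannwhitneyu`. The function name and the zone-error framing are assumptions for illustration, not the paper's implementation:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_zone_errors(errors_a, errors_b, alpha=0.05):
    """Two-sided Mann-Whitney U-test on positional errors from two capture
    zones; reports whether their medians differ at significance level alpha."""
    stat, p = mannwhitneyu(errors_a, errors_b, alternative="two-sided")
    return {"U": float(stat), "p": float(p), "different": bool(p < alpha)}
```

The U-test is a natural choice here because OMC error distributions are typically skewed, so a nonparametric median comparison is more robust than a t-test on means.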
Collapse
|
48
|
Jarque-Bou NJ, Sancho-Bru JL, Vergara M. Synergy-Based Sensor Reduction for Recording the Whole Hand Kinematics. SENSORS 2021; 21:s21041049. [PMID: 33557063 PMCID: PMC7913855 DOI: 10.3390/s21041049] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 01/28/2021] [Accepted: 02/02/2021] [Indexed: 12/02/2022]
Abstract
Simultaneous measurement of the kinematics of all hand segments is cumbersome due to sensor placement constraints, occlusions, and environmental disturbances. The aim of this study is to reduce the number of sensors required by using kinematic synergies, which are considered the basic building blocks underlying hand motions. Synergies were identified from the public KIN-MUS UJI database (22 subjects, 26 representative daily activities). Ten synergies per subject were extracted as the principal components explaining at least 95% of the total variance of the angles recorded across all tasks. The 220 resulting synergies were clustered, and candidate angles for estimating the remaining angles were obtained from these groups. Different combinations of candidates were tested, and the one providing the lowest error was selected; its goodness was then evaluated against kinematic data from another dataset (KINE-ADL BE-UJI). Consequently, the original 16 joint angles were reduced to eight: carpometacarpal flexion and abduction of the thumb, metacarpophalangeal and interphalangeal flexion of the thumb, proximal interphalangeal flexion of the index and ring fingers, metacarpophalangeal flexion of the ring finger, and the palmar arch. Average estimation errors across joints were below 10% of the range of motion of each joint angle for all activities. Across activities, errors ranged between 3.1% and 16.8%.
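The synergy-extraction step above (principal components explaining at least 95% of the variance of the recorded joint angles) can be sketched with plain PCA via the SVD. This is a generic illustration of the variance-threshold criterion, not the study's processing pipeline; the function name and data layout are assumptions:

```python
import numpy as np

def n_synergies(angles, var_threshold=0.95):
    """Number of principal components ('synergies') needed to explain at
    least var_threshold of the variance in joint-angle recordings.

    angles: (n_samples, n_joints) matrix of joint angles.
    """
    centered = angles - angles.mean(axis=0)
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    var_ratio = np.cumsum(s**2) / np.sum(s**2)  # cumulative explained variance
    return int(np.searchsorted(var_ratio, var_threshold) + 1)
```

If hand motion were driven by two latent synergies, this function would return 2 even when many more joint angles are recorded, which is exactly the redundancy the sensor-reduction approach exploits.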
Collapse
|