1
Jaén-Vargas M, Pagán J, Li S, Trujillo-Guerrero MF, Kazemi N, Sansò A, Codina-Casals B, Abi Zeid Daou R, Serrano Olmedo JJ. AI-driven balance evaluation: a comparative study between blind and non-blind individuals using the mini-BESTest. PeerJ Comput Sci 2025; 11:e2695. [PMID: 40134862 PMCID: PMC11935781 DOI: 10.7717/peerj-cs.2695] [Received: 10/01/2024] [Accepted: 01/21/2025] [Indexed: 03/27/2025]
Abstract
Worldwide, 2.2 billion people are visually impaired and 285 million are blind. The vestibular system, together with sight and hearing, plays a fundamental role in a person's balance, so blind people require physical therapy to improve their balance. Several clinical tests have been developed to evaluate balance, such as the mini-BESTest. This test has been used to evaluate the balance of people with neurological diseases, but no previous study has evaluated the balance of blind individuals. Furthermore, although the scoring criteria of these tests are objective, the rating of some activities is subject to the physiotherapist's bias. Tele-rehabilitation is a growing field that aims to provide physical therapy to people with disabilities. Among the technologies used in tele-rehabilitation are inertial measurement units (IMUs), which can be used to monitor an individual's balance. These devices collect large amounts of data, and deep learning models can help analyze them. The objective of this study is therefore to analyze, for the first time, the balance of blind individuals using the mini-BESTest and IMUs, and to identify the activities that best differentiate blind from sighted individuals. We use the OpenSense RT monitoring device to collect IMU data, and we develop machine learning and deep learning models to predict the score of the most relevant mini-BESTest activities. Twenty-nine blind and sighted individuals participated in the study. The one-legged stance is the activity that best differentiates blind from sighted individuals. An analysis of the acceleration data suggests that physiotherapists' evaluations do not fully conform to the test criteria. Cluster analysis suggests that inertial data cannot distinguish between three levels of evaluation. Nevertheless, our models achieve an F1-score of 85.6% in predicting the mini-BESTest score as a binary classification problem. The results of this study can help physiotherapists evaluate their patients' balance more objectively and support the development of tele-rehabilitation systems for blind individuals.
Affiliation(s)
- Milagros Jaén-Vargas
- Bioinstrumentation and Nanomedicine Laboratory, Center for Biomedical Technology (CTB), Universidad Politécnica de Madrid, Madrid, Spain
- Instituto Nacional de Investigaciones Científicas Avanzadas en Tecnologías de Información y Comunicación (INDICATIC AIP), Panama City, Panama
- Josué Pagán
- Department of Electronic Engineering, Universidad Politécnica de Madrid, Madrid, Spain
- Center for Computational Simulation, Universidad Politécnica de Madrid, Madrid, Spain
- Shiyang Li
- Bioinstrumentation and Nanomedicine Laboratory, Center for Biomedical Technology (CTB), Universidad Politécnica de Madrid, Madrid, Spain
- María Fernanda Trujillo-Guerrero
- Bioinstrumentation and Nanomedicine Laboratory, Center for Biomedical Technology (CTB), Universidad Politécnica de Madrid, Madrid, Spain
- Niloufar Kazemi
- Bioinstrumentation and Nanomedicine Laboratory, Center for Biomedical Technology (CTB), Universidad Politécnica de Madrid, Madrid, Spain
- Alessio Sansò
- Bioinstrumentation and Nanomedicine Laboratory, Center for Biomedical Technology (CTB), Universidad Politécnica de Madrid, Madrid, Spain
- Benito Codina-Casals
- Didactic and Educational Research Department, Universidad de La Laguna, San Cristóbal de La Laguna, Spain
- Spanish Blind Organization (ONCE), Santa Cruz de Tenerife, Spain
- Roy Abi Zeid Daou
- Faculty of Engineering–Polytech, Biomedical Engineering Department, Université La Sagesse, Furn El Chebbak, Lebanon
- Jose Javier Serrano Olmedo
- Bioinstrumentation and Nanomedicine Laboratory, Center for Biomedical Technology (CTB), Universidad Politécnica de Madrid, Madrid, Spain
- CIBER-BBN, Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina, Madrid, Spain
2
Kattenbeck M, Giannopoulos I, Alinaghi N, Golab A, Montello DR. Predicting spatial familiarity by exploiting head and eye movements during pedestrian navigation in the real world. Sci Rep 2025; 15:7970. [PMID: 40055417 PMCID: PMC11889111 DOI: 10.1038/s41598-025-92274-4] [Received: 10/06/2024] [Accepted: 02/26/2025] [Indexed: 03/12/2025] Open Access
Abstract
Spatial familiarity has a long history of interest in wayfinding research. To date, however, no studies have systematically assessed the behavioral correlates of spatial familiarity, including eye and body movements. In this study, we take a step towards filling this gap by reporting the results of an in-situ, within-subject study with [Formula: see text] pedestrian wayfinders that combines eye tracking and body-movement sensors. Participants walked both a familiar route and an unfamiliar route by following auditory, landmark-based route instructions. We monitored participants' behavior using a mobile eye tracker, a high-precision Global Navigation Satellite System receiver, and a high-precision, head-mounted Inertial Measurement Unit. We conducted machine learning experiments using gradient-boosted trees for binary classification, testing different feature sets (gaze only, Inertial Measurement Unit data only, and their combination) to classify a person as familiar or unfamiliar with a particular route. We achieve the highest accuracy of [Formula: see text] using exclusively Inertial Measurement Unit data, exceeding gaze alone at [Formula: see text] and gaze and Inertial Measurement Unit data together at [Formula: see text]. For the highest accuracy achieved, yaw and acceleration values are most important. This finding indicates that head movements ("looking around to orient oneself") are a particularly valuable indicator for distinguishing familiar from unfamiliar environments for pedestrian wayfinders.
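As a rough illustration of the classification setup this entry describes (not the authors' code or data), the sketch below trains a gradient-boosted-tree classifier on synthetic per-window head-movement features. The feature choice (yaw variability, acceleration variability) and the class separation are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-window IMU features: [yaw std, accel-magnitude std].
# Assumption (not from the paper): unfamiliar walkers look around more,
# so their yaw variability is higher.
n = 200
familiar = rng.normal(loc=[0.2, 0.8], scale=0.05, size=(n, 2))
unfamiliar = rng.normal(loc=[0.6, 1.0], scale=0.05, size=(n, 2))
X = np.vstack([familiar, unfamiliar])
y = np.array([0] * n + [1] * n)  # 0 = familiar, 1 = unfamiliar

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

On such cleanly separated synthetic clusters the classifier is near-perfect; the study's real accuracies come from far noisier in-situ sensor streams.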
Affiliation(s)
- Negar Alinaghi
- Research Unit Geoinformation, TU Wien, 1040, Vienna, Austria
- Antonia Golab
- Energy Economics Group, TU Wien, 1040, Vienna, Austria
- Daniel R Montello
- Department of Geography, UC Santa Barbara, Santa Barbara, CA, 93117, USA
3
Jia Yi T, Ripin ZM, Ridzwan MIZ, Razali MF, Ying Heng Y, Jaafar NAB, Wai Teng AT, Binti Ahmad Yusof H, Hanafi MH. Development and validation of an automated Trunk Impairment Scale 2.0 scoring system using rule-based classification. Proc Inst Mech Eng H 2025; 239:212-226. [PMID: 39960035 DOI: 10.1177/09544119251317614] [Indexed: 04/03/2025]
Abstract
The Trunk Impairment Scale Version 2.0 (TIS 2.0) measures motor impairment of the trunk after a stroke by evaluating dynamic sitting balance and coordination of trunk movement. Evaluations by physiotherapists depend on their ability to detect minor changes in motion and to observe limb movements; these evaluations can be time-consuming and reduce physiotherapists' availability for rehabilitation work. An automated scoring system for TIS 2.0 was proposed to provide a more reproducible and standardized alternative to manual assessment. In the development phase, motion data from lay actors simulating stroke conditions were collected using the video motion-capture system OpenCap. These data were used to create metrics and establish cut-off values for a rule-based classification. The discriminant abilities of the metrics were evaluated using the area under the curve (AUC). In the testing phase, the performance of the developed system was assessed on 19 stroke survivors (Berg Balance Scale score of 20-55) using both the automated system and manual scoring by nine physiotherapists. The discriminant abilities of the features in the dynamic sitting balance subscale are considered excellent to outstanding (AUC ≥ 0.717), while those in the coordination subscale ranged from poor to outstanding (AUC ≥ 0.667). The automated scores aligned with the physiotherapists' scores, achieving an average agreement of 71.1%. The total TIS 2.0 scores generated by the automated method showed moderate correlation with the sum of mode-determined task scores (R = 0.526, p < 0.05). These findings suggest that the proposed automated system demonstrates validity comparable to assessment by physiotherapists.
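The rule-based scoring idea described here, picking a cut-off on a motion metric that best separates impaired from unimpaired trials, can be sketched as follows. The trunk-lean metric, the Youden's J threshold rule, and all synthetic values are illustrative assumptions, not the paper's actual metrics or cut-offs.

```python
import numpy as np

def best_cutoff(metric, labels):
    """Pick the cut-off on a motion metric that maximizes Youden's J = TPR - FPR."""
    best_t, best_j = metric.min(), -1.0
    for t in np.sort(metric):
        pred = metric >= t
        tpr = (pred & (labels == 1)).sum() / max((labels == 1).sum(), 1)
        fpr = (pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
        j = tpr - fpr
        if j > best_j:
            best_j, best_t = j, t
    return best_t

rng = np.random.default_rng(3)
# Hypothetical trunk-lean range (degrees) per trial: impaired trials show larger sway.
unimpaired = rng.normal(5, 1, 50)
impaired = rng.normal(12, 2, 50)
metric = np.concatenate([unimpaired, impaired])
labels = np.concatenate([np.zeros(50, int), np.ones(50, int)])

t = best_cutoff(metric, labels)
score = (metric >= t).astype(int)  # rule-based task score: 1 = impaired
print(f"cut-off ≈ {t:.1f} deg, agreement = {(score == labels).mean():.2f}")
```

In the actual system, AUC analysis plays the role this exhaustive threshold search plays here, and one such rule exists per TIS 2.0 task.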
Affiliation(s)
- Tay Jia Yi
- School of Mechanical Engineering, Engineering Campus, Universiti Sains Malaysia, Pulau Pinang, Malaysia
- Zaidi Mohd Ripin
- School of Mechanical Engineering, Engineering Campus, Universiti Sains Malaysia, Pulau Pinang, Malaysia
- Yeo Ying Heng
- School of Mechanical Engineering, Engineering Campus, Universiti Sains Malaysia, Pulau Pinang, Malaysia
- Nur Akasyah Binti Jaafar
- School of Mechanical Engineering, Engineering Campus, Universiti Sains Malaysia, Pulau Pinang, Malaysia
- Alexander Tan Wai Teng
- School of Mechanical Engineering, Engineering Campus, Universiti Sains Malaysia, Pulau Pinang, Malaysia
- Muhammad Hafiz Hanafi
- School of Medical Sciences, Health Campus, Universiti Sains Malaysia, Kelantan, Malaysia
4
Smits Serena R, Hinterwimmer F, Burgkart R, von Eisenhart-Rothe R, Rueckert D. The Use of Artificial Intelligence and Wearable Inertial Measurement Units in Medicine: Systematic Review. JMIR Mhealth Uhealth 2025; 13:e60521. [PMID: 39880389 PMCID: PMC11822330 DOI: 10.2196/60521] [Received: 05/14/2024] [Revised: 10/20/2024] [Accepted: 11/12/2024] [Indexed: 01/31/2025] Open Access
Abstract
BACKGROUND Artificial intelligence (AI) has already revolutionized the analysis of image, text, and tabular data, bringing significant advances across many medical sectors. Now, combined with wearable inertial measurement units (IMUs), AI could transform health care again by opening new opportunities in patient care and medical research. OBJECTIVE This systematic review aims to evaluate the integration of AI models with wearable IMUs in health care, identifying current applications, challenges, and future opportunities. The focus is on the types of models used, the characteristics of the datasets, and the potential for expanding and enhancing the use of this technology to improve patient care and advance medical research. METHODS This study examines the synergy of AI models and IMU data using a systematic methodology, following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, to explore 3 core questions: (1) Which medical fields are most actively researching AI and IMU data? (2) Which models are being used to analyze IMU data in these fields? (3) What are the characteristics of the datasets used in these fields? RESULTS The median dataset size is 50 participants, which poses significant limitations for AI models given their dependency on large datasets for effective training and generalization. Our analysis reveals the current dominance of machine learning models, used in 76% of the surveyed studies, suggesting a preference for traditional models such as linear regression, support vector machines, and random forests, but also indicating significant growth potential for deep learning models in this area. Notably, 93% of the studies used supervised learning, revealing an underuse of unsupervised learning and indicating an important area for future exploration: discovering hidden patterns and insights without predefined labels or outcomes. In addition, studies were preferentially conducted in clinical settings (77%) rather than in real-life scenarios, a choice that, along with underuse of the full potential of wearable IMUs, limits practical applicability. Furthermore, the focus of 65% of the studies on neurological issues suggests an opportunity to broaden the research scope to other clinical areas, such as musculoskeletal applications, where AI could have significant impact. CONCLUSIONS The review calls for a collaborative effort to address the highlighted challenges, including improving data collection, increasing dataset sizes (a move that inherently pushes the field toward more complex deep learning models), and expanding the application of AI models to IMU data across various medical fields. This approach aims to enhance the reliability, generalizability, and clinical applicability of research findings, ultimately improving patient outcomes and advancing medical research.
Affiliation(s)
- Ricardo Smits Serena
- Department of Orthopaedics and Sports Orthopaedics, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Institute for AI and Informatics in Medicine, Technical University of Munich, Munich, Germany
- Florian Hinterwimmer
- Department of Orthopaedics and Sports Orthopaedics, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Institute for AI and Informatics in Medicine, Technical University of Munich, Munich, Germany
- Rainer Burgkart
- Department of Orthopaedics and Sports Orthopaedics, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Rudiger von Eisenhart-Rothe
- Department of Orthopaedics and Sports Orthopaedics, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Daniel Rueckert
- Institute for AI and Informatics in Medicine, Technical University of Munich, Munich, Germany
5
Akter N, Molnar A, Georgakopoulos D. Toward Improving Human Training by Combining Wearable Full-Body IoT Sensors and Machine Learning. Sensors (Basel) 2024; 24:7351. [PMID: 39599128 PMCID: PMC11598817 DOI: 10.3390/s24227351] [Received: 08/23/2024] [Revised: 10/06/2024] [Accepted: 11/14/2024] [Indexed: 11/29/2024]
Abstract
This paper proposes DigitalUpSkilling, a novel IoT- and AI-based framework for improving and personalising the training of workers in physical-labour-intensive jobs. DigitalUpSkilling uses wearable IoT sensors to observe how individuals perform work activities. These sensor observations are continuously processed to synthesise an avatar-like kinematic model for each worker being trained, referred to as the worker's digital twin. The framework incorporates novel work-activity recognition using generative adversarial network (GAN) and machine learning (ML) models to recognise the types and sequences of work activities by analysing an individual's kinematic model. Finally, skill-proficiency ML models are proposed to evaluate each trainee's proficiency in individual work activities and in the overall task. To illustrate DigitalUpSkilling, from wearable-IoT-sensor-driven kinematic models to GAN-ML models for work-activity recognition and skill-proficiency assessment, the paper presents a comprehensive study on how specific meat-processing activities in a real-world work environment can be recognised and assessed. In the study, DigitalUpSkilling achieved 99% accuracy in recognising specific work activities performed by meat workers. The study also presents an evaluation of worker proficiency by comparing kinematic data from trainees performing work activities. The proposed DigitalUpSkilling framework lays the foundation for next-generation digital personalised training.
Affiliation(s)
- Andreea Molnar
- School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Melbourne 3122, Australia
6
Tseng YH, Wen CY. Hybrid Learning Models for IMU-Based HAR with Feature Analysis and Data Correction. Sensors (Basel) 2023; 23:7802. [PMID: 37765863 PMCID: PMC10537876 DOI: 10.3390/s23187802] [Received: 05/30/2023] [Revised: 07/26/2023] [Accepted: 09/06/2023] [Indexed: 09/29/2023]
Abstract
This paper proposes a novel approach to the human activity recognition (HAR) problem. Four classes of body-movement data, namely stand-up, sit-down, run, and walk, are used for HAR. Instead of using vision-based solutions, we address the HAR challenge by implementing a real-time HAR system architecture with a wearable inertial measurement unit (IMU) sensor, covering networked sensing and sampling of human activity data, data pre-processing and feature analysis, data generation and correction, and activity classification using hybrid learning models. Based on the experimental results, the proposed system selects the pre-trained eXtreme Gradient Boosting (XGBoost) model as the classifier and the Convolutional Variational Autoencoder (CVAE) model as the generator, with 96.03% classification accuracy.
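Before a classifier such as XGBoost can be applied, IMU streams like these are typically segmented into overlapping windows and summarized with simple statistics. Below is a minimal sketch of that pre-processing step; the window length, stride, and feature set are arbitrary choices for illustration, not the paper's actual pipeline.

```python
import numpy as np

def sliding_windows(signal, win, step):
    """Segment a 1-D IMU stream into overlapping windows of `win` samples, stride `step`."""
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

def window_features(windows):
    """Simple per-window statistics often fed to a tree-based classifier."""
    return np.column_stack([
        windows.mean(axis=1),
        windows.std(axis=1),
        windows.min(axis=1),
        windows.max(axis=1),
    ])

rng = np.random.default_rng(1)
accel_z = rng.normal(0.0, 1.0, 1000)   # stand-in for one accelerometer axis
W = sliding_windows(accel_z, win=128, step=64)
F = window_features(W)
print(W.shape, F.shape)                # (14, 128) (14, 4)
```

Each feature row would then be labeled with the activity performed during that window and passed to the classifier.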
Affiliation(s)
- Yu-Hsuan Tseng
- Department of Computer Science and Engineering, National Chung Hsing University, Taichung 40227, Taiwan
- Chih-Yu Wen
- Department of Electrical Engineering, National Chung Hsing University, Taichung 40227, Taiwan
- Smart Sustainable New Agriculture Research Center (SMARTer), National Chung Hsing University, Taichung 40227, Taiwan
- Innovation and Development Center of Sustainable Agriculture (IDCSA), National Chung Hsing University, Taichung 40227, Taiwan
7
Jaramillo IE, Chola C, Jeong JG, Oh JH, Jung H, Lee JH, Lee WH, Kim TS. Human Activity Prediction Based on Forecasted IMU Activity Signals by Sequence-to-Sequence Deep Neural Networks. Sensors (Basel) 2023; 23:6491. [PMID: 37514789 PMCID: PMC10385571 DOI: 10.3390/s23146491] [Received: 05/10/2023] [Revised: 07/10/2023] [Accepted: 07/14/2023] [Indexed: 07/30/2023]
Abstract
Human Activity Recognition (HAR) has gained significant attention due to its broad range of applications, such as healthcare, industrial work safety, activity assistance, and driver monitoring. Most prior HAR systems recognize human activities from recorded sensor data (i.e., past information); HAR work based on future sensor data to predict human activities is rare. Human Activity Prediction (HAP) can benefit multiple applications, such as fall detection or exercise routines, by helping to prevent injuries. This work presents a novel HAP system based on forecasted activity data from Inertial Measurement Units (IMUs). Our HAP system consists of a deep learning forecaster of IMU activity signals and a deep learning classifier to recognize future activities. The forecaster is based on a sequence-to-sequence structure with attention and positional-encoding layers. A pre-trained deep learning Bi-LSTM classifier then classifies future activities from the forecasted IMU data. We tested our HAP system on five daily activities with two tri-axial IMU sensors. The forecasted signals show an average correlation of 91.6% with the actual measured signals of the five activities. The proposed HAP system achieves an average accuracy of 97.96% in predicting future activities.
Affiliation(s)
- Ismael Espinoza Jaramillo
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Channabasava Chola
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Jin-Gyun Jeong
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Ji-Heon Oh
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Hwanseok Jung
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Jin-Hyuk Lee
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Won Hee Lee
- Department of Software Convergence, Kyung Hee University, Yongin 17104, Republic of Korea
- Tae-Seong Kim
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
8
Jameer S, Syed H. Deep SE-BiLSTM with IFPOA Fine-Tuning for Human Activity Recognition Using Mobile and Wearable Sensors. Sensors (Basel) 2023; 23:4319. [PMID: 37177523 PMCID: PMC10181789 DOI: 10.3390/s23094319] [Received: 01/12/2023] [Revised: 02/18/2023] [Accepted: 04/07/2023] [Indexed: 05/15/2023]
Abstract
Pervasive computing, human-computer interaction, human behavior analysis, and human activity recognition (HAR) have grown significantly as fields. Deep learning (DL)-based techniques have recently been used effectively to predict various human actions from time-series data collected by wearable sensors and mobile devices. Despite their excellent performance in activity detection, DL-based techniques still struggle with time-series data, which poses problems such as heavily biased data and difficult feature extraction. For HAR, this research designs an ensemble of deep SqueezeNet (SE) and bidirectional long short-term memory (BiLSTM) with an improved flower pollination optimization algorithm (IFPOA) to construct a reliable classification model from wearable sensor data. Significant features are extracted automatically from the raw sensor data by the multi-branch SE-BiLSTM. Owing to SqueezeNet and BiLSTM, the model can learn both short-term dependencies and long-term features in sequential data, and it effectively captures different temporal local dependencies, enhancing feature extraction. The hyperparameters of the BiLSTM network are optimized by the IFPOA. Model performance is analyzed on three benchmark datasets, MHEALTH, KU-HAR, and PAMAP2, on which the proposed model achieves accuracies of 99.98%, 99.76%, and 99.54%, respectively. According to these experimental results, the proposed model outperforms other approaches and delivers competitive results compared with state-of-the-art techniques on the publicly accessible datasets.
Affiliation(s)
- Shaik Jameer
- School of Computer Science and Engineering, VIT AP University, Amaravati 522237, India
- Hussain Syed
- School of Computer Science and Engineering, VIT AP University, Amaravati 522237, India
9
Wang D, Gu X, Yu H. Sensors and algorithms for locomotion intention detection of lower limb exoskeletons. Med Eng Phys 2023; 113:103960. [PMID: 36966000 DOI: 10.1016/j.medengphy.2023.103960] [Received: 07/27/2022] [Revised: 02/13/2023] [Accepted: 02/15/2023] [Indexed: 02/18/2023]
Abstract
In recent years, lower limb exoskeletons (LLEs) have received much attention due to their potential to help people with paraplegia regain upright-legged locomotion. However, one major hindrance to converting prototypes into actual products is the lack of a balance-recovery function. Detecting locomotion intention can be a first step towards balance assistance, so its significance continues to grow. Many researchers focus on this topic, but a general discussion of the field has been lacking; the purpose of this work is therefore to systematize these data for the benefit of future research. The review is divided into two parts covering the main components of locomotion intention detection: the placement of sensors/devices and the evaluation criteria of algorithms. We found that sensor/device placement is still concentrated on the lower limbs, but most researchers have recognized the importance of the chest. The peak power of the signal collected from the chest may be overestimated because the chest undergoes higher vertical velocity and acceleration during rotation. However, despite differences in peak power between the upper and lower back, high correlations were found across tasks, especially sit-to-stand. Since peak power is based on vertical acceleration and velocity, it can be considered a metric that is robust to changes in sensor location, so data acquisition from the chest is effective. As some researchers in locomotion intention recognition have noted, sensors placed on the chest may nevertheless be subject to positional shifts. Regarding evaluation criteria, we found that deep learning algorithms such as the back-propagation artificial neural network (BPANN) are outstanding, while the support vector machine (SVM) is the most cost-effective algorithm. In accuracy, sensitivity, and specificity, BPANN achieved nearly 100%. SVM variants differ: the best achieves 98% accuracy with 100% sensitivity and 100% specificity, but accuracy as low as 87.8% has also been reported, so its performance is not stable. Convolutional neural networks (CNNs) can be used for image classification and reach an accuracy of around 87%, which may be lower than the two algorithms above. Other algorithms also report high accuracy, sensitivity, and specificity, but these evaluation criteria were rarely all ideal at the same time. Based on these results, we also point out remaining problems. In general, applying these algorithms to LLEs can contribute to intention recognition, support balance research, and ultimately make LLEs more suitable for daily use.
Affiliation(s)
- Duojin Wang
- Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, 516 Jungong Road, Shanghai 200093, China
- Shanghai Engineering Research Center of Assistive Devices, 516 Jungong Road, Shanghai 200093, China
- Xiaoping Gu
- Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, 516 Jungong Road, Shanghai 200093, China
- Hongliu Yu
- Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, 516 Jungong Road, Shanghai 200093, China
- Shanghai Engineering Research Center of Assistive Devices, 516 Jungong Road, Shanghai 200093, China
10
Gourrame K, Griškevičius J, Haritopoulos M, Lukšys D, Jatužis D, Kaladytė-Lokominienė R, Bunevičiūtė R, Mickutė G. Parkinson's disease classification with CWNN: Using wavelet transformations and IMU data fusion for improved accuracy. Technol Health Care 2023; 31:2447-2455. [PMID: 37955069 DOI: 10.3233/thc-235010] [Indexed: 11/14/2023]
Abstract
BACKGROUND Parkinson's disease (PD) is a chronic neurodegenerative disorder characterized by motor impairments and various other symptoms. Early and accurate classification of PD patients is crucial for timely intervention and personalized treatment. Inertial measurement units (IMUs) have emerged as a promising tool for gathering movement data and aiding PD classification. OBJECTIVE This paper proposes a Convolutional Wavelet Neural Network (CWNN) approach for PD classification using IMU data. CWNNs have emerged as effective models for sensor-data classification. The objective is to determine the optimal combination of wavelet transform and IMU data type that yields the highest classification accuracy for PD. METHODS The proposed CWNN architecture integrates convolutional neural networks and wavelet neural networks to capture spatial and temporal dependencies in IMU data. Different wavelet functions, such as Morlet, Mexican Hat, and Gaussian, are employed in the continuous wavelet transform (CWT) step. The CWNN is trained and evaluated using various combinations of accelerometer data, gyroscope data, and fused data. RESULTS Extensive experiments are conducted using a comprehensive dataset of IMU data collected from individuals with and without PD. The performance of the proposed CWNN is evaluated in terms of classification accuracy, precision, recall, and F1-score. The results demonstrate the impact of different wavelet functions and IMU data types on PD classification performance, revealing that the combination of the Morlet wavelet function and IMU data fusion achieves the highest accuracy. CONCLUSION The findings highlight the significance of combining the CWT with IMU data fusion for PD classification using CWNNs. The integration of CWT-based feature extraction and the fusion of IMU data from multiple sensors enhances the representation of PD-related patterns, leading to improved classification accuracy. This research provides valuable insights into the potential of the CWT and IMU data fusion for advancing PD classification models, enabling more accurate and reliable diagnosis.
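A minimal sketch of the CWT step this entry describes, using a hand-rolled complex Morlet wavelet on a synthetic gyroscope-like signal; the scale range, wavelet parameter, and signal are assumptions for illustration, since the paper's actual implementation is not specified here. The resulting scalogram is the kind of time-scale image a CNN would consume.

```python
import numpy as np

def morlet(t, w=5.0):
    """Complex Morlet mother wavelet (admissibility correction term omitted)."""
    return np.pi ** -0.25 * np.exp(1j * w * t - t ** 2 / 2)

def cwt_morlet(signal, scales):
    """Continuous wavelet transform by direct convolution at each integer scale."""
    out = np.empty((len(scales), len(signal)), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)        # wavelet support, in samples
        psi = morlet(t / s) / np.sqrt(s)        # scaled, roughly L2-normalised wavelet
        out[i] = np.convolve(signal, np.conj(psi)[::-1], mode="same")
    return out

gyro = np.sin(2 * np.pi * np.arange(200) / 20.0)   # synthetic periodic gyro-like trace
scales = np.arange(1, 21)
scalogram = np.abs(cwt_morlet(gyro, scales))       # |CWT| image, shape (scales, time)
print(scalogram.shape)                             # (20, 200)
```

In practice a library routine (e.g., a dedicated wavelet package) would replace this direct convolution, and one scalogram per IMU channel would be stacked as CNN input.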
Affiliation(s)
| | - Julius Griškevičius
- Department of Biomechanical Engineering, Vilnius Gediminas Technical University, Vilnius, Lithuania
| | | | - Donatas Lukšys
- Department of Biomechanical Engineering, Vilnius Gediminas Technical University, Vilnius, Lithuania
| | - Dalius Jatužis
- Clinics of Neurology and Neurosurgery, Faculty of Medicine, Vilnius University, Vilnius, Lithuania
- Santaros Klinikos Hospital, Vilnius University, Vilnius, Lithuania
| | - Rūta Kaladytė-Lokominienė
- Clinics of Neurology and Neurosurgery, Faculty of Medicine, Vilnius University, Vilnius, Lithuania
- Santaros Klinikos Hospital, Vilnius University, Vilnius, Lithuania
| | - Ramunė Bunevičiūtė
- Clinics of Neurology and Neurosurgery, Faculty of Medicine, Vilnius University, Vilnius, Lithuania
- Santaros Klinikos Hospital, Vilnius University, Vilnius, Lithuania
| | - Gabrielė Mickutė
- Centre of Rehabilitation, Physical and Sports Medicine, Santaros Klinikos Hospital, Vilnius University, Vilnius, Lithuania
11
Kim YW, Lee S. Data Valuation Algorithm for Inertial Measurement Unit-Based Human Activity Recognition. SENSORS (BASEL, SWITZERLAND) 2022; 23:184. [PMID: 36616781 PMCID: PMC9823777 DOI: 10.3390/s23010184] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/15/2022] [Revised: 12/19/2022] [Accepted: 12/20/2022] [Indexed: 06/17/2023]
Abstract
This paper proposes a data valuation algorithm, based on meta reinforcement learning, for inertial measurement unit-based human activity recognition (IMU-based HAR) data. Unlike previous approaches that operate on feature-level input, the proposed algorithm adds a feature-extraction structure to the data valuation pipeline, allowing it to accept raw-level input while achieving excellent performance. As IMU-based HAR data are multivariate time series, the algorithm uses an architecture capable of extracting both local and global features, inserting a transformer encoder after the one-dimensional convolutional neural network (1D-CNN) backbone of the data value estimator. In addition, a 1D-CNN-based stacking ensemble, which is highly efficient and performs well on IMU-based HAR data, is used as the predictor that supervises model training. The Berg Balance Scale (BBS) IMU-based HAR dataset and the public UCI-HAR, WISDM, and PAMAP2 datasets are used for performance evaluation. The proposed algorithm shows excellent valuation performance on IMU-based HAR data: the rate of discovering corrupted data exceeds 96% on all datasets, and classification performance improves when the discovered low-value data are suppressed.
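The paper's value estimator is a meta-reinforcement-learning model; as a much simpler stand-in, leave-one-out valuation with a nearest-centroid classifier illustrates the underlying idea that corrupted (mislabelled) samples receive low values. The data, classifier, and value definition below are synthetic illustrations, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic two-class "activity" training set; flip a few labels
# to emulate corrupted data
X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(4, 1, (20, 3))])
y = np.array([0] * 20 + [1] * 20)
corrupt = np.array([0, 1, 20, 21])
y[corrupt] = 1 - y[corrupt]

# Clean validation split
Xv = np.vstack([rng.normal(0, 1, (10, 3)), rng.normal(4, 1, (10, 3))])
yv = np.array([0] * 10 + [1] * 10)

def margin(Xt, yt):
    # Mean nearest-centroid margin on the validation split:
    # distance to the wrong centroid minus distance to the own centroid
    c = np.stack([Xt[yt == 0].mean(0), Xt[yt == 1].mean(0)])
    d = np.linalg.norm(Xv[:, None, :] - c[None, :, :], axis=2)
    idx = np.arange(len(yv))
    return np.mean(d[idx, 1 - yv] - d[idx, yv])

base = margin(X, y)
# Leave-one-out data value: how much each point helps the validation margin
values = np.array([base - margin(np.delete(X, i, 0), np.delete(y, i, 0))
                   for i in range(len(X))])

# Mislabelled points should receive the lowest values on average
print(values[corrupt].mean() < np.delete(values, corrupt).mean())
```

Suppressing (removing or down-weighting) the lowest-valued samples before training is the step that yields the classification improvement the abstract reports.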
Affiliation(s)
- Yeon-Wook Kim
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Republic of Korea
- Sangmin Lee
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Republic of Korea
- Department of Smart Engineering Program in Biomedical Science & Engineering, Inha University, Incheon 22212, Republic of Korea
12
Jaramillo IE, Jeong JG, Lopez PR, Lee CH, Kang DY, Ha TJ, Oh JH, Jung H, Lee JH, Lee WH, Kim TS. Real-Time Human Activity Recognition with IMU and Encoder Sensors in Wearable Exoskeleton Robot via Deep Learning Networks. SENSORS (BASEL, SWITZERLAND) 2022; 22:9690. [PMID: 36560059 PMCID: PMC9783602 DOI: 10.3390/s22249690] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 12/02/2022] [Accepted: 12/08/2022] [Indexed: 06/17/2023]
Abstract
Wearable exoskeleton robots have become a promising technology for supporting human motion in multiple tasks. Real-time activity recognition provides useful information for enhancing the robot's control assistance in daily tasks. This work implements a real-time activity recognition system based on signals from an inertial measurement unit (IMU) and a pair of rotary encoders integrated into the exoskeleton robot. Five deep learning models were trained and evaluated for activity recognition, and a subset of optimized models was transferred to an edge device for real-time evaluation in a continuous-action environment covering eight common human tasks: stand, bend, crouch, walk, sit-down, sit-up, and ascending and descending stairs. These eight wearer activities are recognized with an average accuracy of 97.35% in real-time tests, with an inference time under 10 ms and an overall latency of 0.506 s per recognition on the selected edge device.
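Real-time recognition systems of this kind typically segment the continuous sensor stream into overlapping fixed-length windows before each inference. A minimal sketch follows; the window length, hop size, and channel count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sliding_windows(stream, win, hop):
    # Segment a continuous multi-axis sensor stream into fixed-length,
    # overlapping windows suitable for feeding a recognition model
    n = 1 + (len(stream) - win) // hop
    return np.stack([stream[i * hop: i * hop + win] for i in range(n)])

# 10 s of a simulated 6-axis IMU stream at 100 Hz
stream = np.random.default_rng(0).normal(size=(1000, 6))
windows = sliding_windows(stream, win=100, hop=50)  # 1 s windows, 50% overlap
print(windows.shape)  # (19, 100, 6)
```

In deployment, each new window is classified as it completes, so the hop size (here 0.5 s) bounds how often a prediction can be emitted, which is one component of the per-recognition latency the abstract reports.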
Affiliation(s)
- Ismael Espinoza Jaramillo
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Jin Gyun Jeong
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Do-Yeon Kang
- Hyundai Rotem, Uiwang-si 16082, Republic of Korea
- Tae-Jun Ha
- Hyundai Rotem, Uiwang-si 16082, Republic of Korea
- Ji-Heon Oh
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Hwanseok Jung
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Jin Hyuk Lee
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Won Hee Lee
- Department of Software Convergence, Kyung Hee University, Yongin 17104, Republic of Korea
- Tae-Seong Kim
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
13
Buisseret F, Dierick F, Van der Perre L. Wearable Sensors Applied in Movement Analysis. SENSORS (BASEL, SWITZERLAND) 2022; 22:8239. [PMID: 36365937 PMCID: PMC9658576 DOI: 10.3390/s22218239] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Revised: 10/17/2022] [Accepted: 10/20/2022] [Indexed: 06/16/2023]
Abstract
Recent advances in the miniaturization of electronics have resulted in sensors whose sizes and weights are such that they can be attached to living systems without interfering with their natural movements and behaviors [...].
Affiliation(s)
- Fabien Buisseret
- Centre de Recherche, d’Étude et de Formation Continue de la Haute Ecole Louvain en Hainaut (CeREF Technique), Chaussée de Binche 159, 7000 Mons, Belgium
- Service de Physique Nucléaire et Subnucléaire, Research Institute for Complex Systems, UMONS Université de Mons, Place du Parc 20, 7000 Mons, Belgium
- Frédéric Dierick
- Centre de Recherche, d’Étude et de Formation Continue de la Haute Ecole Louvain en Hainaut (CeREF Technique), Chaussée de Binche 159, 7000 Mons, Belgium
- Centre National de Rééducation Fonctionnelle et de Réadaptation–Rehazenter, Laboratoire d’Analyse du Mouvement et de la Posture (LAMP), Rue André Vésale 1, 2674 Luxembourg, Luxembourg
- Faculté des Sciences de la Motricité, UCLouvain, Place Pierre de Coubertin 1-2, 1348 Ottignies-Louvain-la-Neuve, Belgium
14
Kim YW, Cho WH, Kim KS, Lee S. Inertial-Measurement-Unit-Based Novel Human Activity Recognition Algorithm Using Conformer. SENSORS (BASEL, SWITZERLAND) 2022; 22:3932. [PMID: 35632341 PMCID: PMC9144209 DOI: 10.3390/s22103932] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 05/19/2022] [Accepted: 05/20/2022] [Indexed: 02/01/2023]
Abstract
Inertial-measurement-unit (IMU)-based human activity recognition (HAR) studies have improved their performance owing to advances in classification models. In this study, the conformer, a state-of-the-art model from speech recognition, is introduced to HAR to improve on transformer-based HAR models. The transformer's multi-head self-attention structure extracts temporal dependencies well, much like recurrent neural network (RNN) models, while being more computationally efficient. However, recent HAR studies have achieved good performance by combining RNN and convolutional neural network (CNN) models, which suggests that a transformer-based HAR model can be improved by adding a CNN layer that extracts local features well; the conformer addresses exactly this point. To evaluate the proposed model, the WISDM, UCI-HAR, and PAMAP2 datasets were used, with the synthetic minority oversampling technique (SMOTE) applied for data augmentation. In the experiments, the conformer-based HAR model outperformed the baseline transformer-based and 1D-CNN HAR models, and its performance was also superior to that of algorithms proposed in recent similar studies that do not use RNNs.
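The conformer architecture itself is too large to sketch here, but the SMOTE augmentation step the abstract mentions is simple: each synthetic sample interpolates between a minority-class point and one of its k nearest neighbours. This is a generic, NumPy-only SMOTE sketch, not the authors' exact configuration.

```python
import numpy as np

def smote(X_min, n_new, k=3, rng=None):
    # SMOTE: synthesize minority-class samples by interpolating each
    # chosen point toward one of its k nearest minority neighbours
    rng = rng if rng is not None else np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]  # skip index 0 (the point itself)
        j = rng.choice(nbrs)
        lam = rng.random()             # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = np.random.default_rng(1).normal(size=(10, 4))
synth = smote(X_min, n_new=15)
print(synth.shape)  # (15, 4)
```

Balancing the classes this way before training keeps the classifier from being dominated by majority-class activities, which is why it is a common preprocessing step on the imbalanced public HAR datasets named above.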
Affiliation(s)
- Yeon-Wook Kim
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea
- Woo-Hyeong Cho
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea
- Kyu-Sung Kim
- Department of Otorhinolaryngology, Inha University Hospital, Incheon 22332, Korea
- Sangmin Lee
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea
- Department of Smart Engineering Program in Biomedical Science & Engineering, Inha University, Incheon 22212, Korea
- Correspondence: Tel.: +82-32-860-7420
15
Hobert MA, Jamour M. [Assessment of mobility-Geriatric assessment instruments for mobility impairments and perspectives of instrumentation]. Z Gerontol Geriatr 2022; 55:116-122. [PMID: 35181808 DOI: 10.1007/s00391-022-02040-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2021] [Accepted: 02/14/2022] [Indexed: 10/19/2022]
Abstract
Mobility and its limitations play an important role in the quality of life of geriatric patients and influence activity and participation. The assessment of mobility is therefore particularly important for treatment and treatment planning in geriatric patients. A variety of assessment tools exist, but not every tool can be used in every patient group, e.g. due to floor effects. This article provides an overview of common assessment tools and facilitates their evaluation and use, with special consideration given to performance-oriented aspects and current technical developments such as wearables.
Affiliation(s)
- Markus A Hobert
- Klinik für Neurologie, UKSH Campus Kiel, Christian-Albrechts-University zu Kiel, Arnold-Heller-Str. 3, 24105, Kiel, Germany
- Michael Jamour
- Allgemeine Innere Medizin und Geriatrie, Alb-Donau-Klinikum, Ehingen, Germany
- Geriatrische Rehabilitationsklinik Ehingen, Ehingen, Germany