1
Yu Z, Dang J. The effects of the generative adversarial network and personalized virtual reality platform in improving frailty among the elderly. Sci Rep 2025;15:8220. PMID: 40065129; PMCID: PMC11894045; DOI: 10.1038/s41598-025-93553-w.
Abstract
As society ages, improving the health of the elderly through effective training programs has become a pressing issue. Virtual reality (VR) technology, with its immersive experience, is increasingly used as a vital tool in rehabilitation training for the elderly. To further enhance training outcomes and improve health conditions among the elderly, this work proposes an integrated model that combines a Generative Adversarial Network (GAN), a Variational Autoencoder (VAE), and a Long Short-Term Memory (LSTM) network. The GAN generates realistic, personalized virtual environments, the VAE builds training models closely related to health data, and the LSTM network provides precise motion monitoring and feedback; together, these components improve training effectiveness and help the elderly enhance their health. First, the work optimizes the GAN through alternating training of the generator and discriminator to create personalized virtual environments. Next, the VAE is trained by maximizing the marginal log-likelihood of observed and generated data, and the personalized training model is constructed. Finally, the optimized LSTM network is used to implement a motion monitoring and feedback system. Experimental evaluations reveal that the optimized GAN outperforms the non-optimized version in both image quality scores and diversity indices. The optimized VAE shows improvements in reconstruction error and personalized fitness scores, with a slight reduction in image generation time, and its training time is also reduced. After training, the elderly participants exhibit a significant increase in daily step count and weekly exercise frequency (p < 0.01), indicating a substantial improvement in physical activity. Assessments of psychological health show a notable decrease in anxiety and depression scores among the elderly participants.
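For readers unfamiliar with the VAE objective mentioned above, the sketch below shows, in generic PyTorch, how such a model is typically trained by maximizing an evidence lower bound on the marginal log-likelihood (reconstruction term plus KL divergence). It is an illustration only; the layer sizes and the 64-dimensional health-feature input are assumptions, not details from the paper.

```python
# Minimal VAE training sketch (illustrative only; dimensions are assumed, not from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=64, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, z_dim), nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    recon = F.mse_loss(x_hat, x, reduction="sum")                 # reconstruction error
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL(q(z|x) || N(0, I))
    return recon + kl

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 64)            # stand-in for a batch of health-related features
x_hat, mu, logvar = model(x)
loss = elbo_loss(x, x_hat, mu, logvar)
opt.zero_grad(); loss.backward(); opt.step()
```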
Affiliation(s)
- Zhendong Yu
- School of Physical Education and Health, East China Jiaotong University, Nanchang, 330001, China
- Jianan Dang
- College of Education, University of the Visayas, Cebu, 6000, Philippines
2
Schneider M, Seeser-Reich K, Fiedler A, Frese U. Enhancing Slip, Trip, and Fall Prevention: Real-World Near-Fall Detection with Advanced Machine Learning Technique. Sensors (Basel) 2025;25:1468. PMID: 40096348; PMCID: PMC11902511; DOI: 10.3390/s25051468.
Abstract
Slips, trips, and falls (STFs) are a major occupational hazard that contributes significantly to workplace injuries and the associated financial costs. The real-world applicability of traditional fall detection techniques is limited because they are usually based on simulated falls. By using kinematic data from real near-fall incidents that occurred in physically demanding work environments, this study overcomes this limitation and improves the ecological validity of fall detection algorithms. Several machine-learning architectures for near-fall detection are systematically tested using the Prev-Fall dataset, which consists of high-resolution inertial measurement unit (IMU) data from 110 workers. Convolutional neural networks (CNNs), residual networks (ResNets), convolutional long short-term memory networks (convLSTMs), and InceptionTime models were trained and evaluated over a range of temporal window lengths using a neural architecture search. The best-performing models, particularly CNNs and InceptionTime, achieved high validation F1 scores, indicating their effectiveness in near-fall classification. However, recurrent false positives in subsequent tests on previously unobserved occupational data, especially during biomechanically demanding activities such as bending and squatting, highlighted the need for additional contextual variables to increase robustness. Nevertheless, our findings suggest the applicability of machine-learning-based STF prevention systems for workplace safety monitoring and, more generally, for fall mitigation. To further improve the accuracy and generalizability of the system, future research should investigate multimodal data integration and improved classification techniques.
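As a rough illustration of the windowed-CNN approach described above (not the authors' implementation), the following sketch segments an IMU recording into fixed-length windows and scores each window with a small 1D CNN; the window length, channel count, and layer sizes are assumptions.

```python
# Illustrative sliding-window 1D CNN for near-fall classification (not the authors' code).
import torch
import torch.nn as nn

def sliding_windows(signal, win, stride):
    """signal: (T, C) IMU tensor -> (N, C, win) windows for a 1D CNN."""
    return signal.unfold(0, win, stride)

class NearFallCNN(nn.Module):
    def __init__(self, in_channels=6, n_classes=2):  # e.g. 3-axis accel + 3-axis gyro (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (N, C, win)
        return self.head(self.features(x).squeeze(-1))

imu = torch.randn(1000, 6)                      # dummy recording: 1000 samples, 6 channels
x = sliding_windows(imu, win=128, stride=64)
logits = NearFallCNN()(x)                       # per-window near-fall scores
```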
Affiliation(s)
- Moritz Schneider
- Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), 53757 Sankt Augustin, Germany
- Kevin Seeser-Reich
- Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), 53757 Sankt Augustin, Germany
- Armin Fiedler
- RheinAhrCampus, Koblenz University of Applied Sciences, 53424 Remagen, Germany
- Udo Frese
- German Research Center for Artificial Intelligence (DFKI), 28359 Bremen, Germany
3
Park S, Youm M, Kim J. IMU Sensor-Based Worker Behavior Recognition and Construction of a Cyber-Physical System Environment. Sensors (Basel) 2025;25:442. PMID: 39860812; PMCID: PMC11768975; DOI: 10.3390/s25020442.
Abstract
According to South Korea's Ministry of Employment and Labor, approximately 25,000 construction workers suffered various injuries between 2015 and 2019. In addition, about 500 fatalities occur annually, and multiple studies are being conducted to prevent these accidents and to identify them quickly enough to secure the critical treatment window ("golden time") for the injured. Recently, AI-based video analysis systems for detecting safety accidents have been introduced. However, such systems are limited to areas where CCTV is installed, and locations such as construction sites have numerous blind spots due to the limits of CCTV coverage. To address this issue, there is active research on the use of MEMS (micro-electromechanical systems) sensors to detect abnormal conditions in workers. In particular, methods such as using accelerometers and gyroscopes within MEMS sensors to acquire data based on workers' angles, utilizing three-axis accelerometers and barometric pressure sensors to improve the accuracy of fall detection systems, and measuring the wearer's gait using the x-, y-, and z-axis data from accelerometers and gyroscopes are being studied. However, most methods rely on MEMS sensors embedded in smartphones, typically attached to one or two specific body parts. Therefore, in this study, we developed a novel miniaturized IMU (inertial measurement unit) sensor that can be simultaneously attached to multiple body parts of construction workers (head, body, hands, and legs). The sensor integrates accelerometers, gyroscopes, and barometric pressure sensors to measure various worker movements in real time (e.g., walking, jumping, standing, and working at heights). Additionally, incorporating PPG (photoplethysmography), body temperature, and acoustic sensors enables the comprehensive observation of both physiological signals and environmental changes. The collected sensor data are preprocessed using Kalman and extended Kalman filters, among others, and an algorithm was proposed to evaluate workers' safety status and update health-related data in real time. Experimental results demonstrated that the proposed IMU sensor can classify work activities with over 90% accuracy even at a low sampling rate of 15 Hz. Furthermore, by integrating internal filtering, communication modules, and server connectivity within an application, we established a cyber-physical system (CPS), enabling real-time monitoring and immediate alert transmission to safety managers. Through this approach, we verified improved performance in terms of miniaturization, measurement accuracy, and server integration compared to existing commercial sensors.
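The paper's exact filter design is not given here; as a generic illustration of the Kalman-filter preprocessing it mentions, the sketch below smooths one noisy sensor channel with a scalar constant-state Kalman filter. All noise parameters are assumed.

```python
# Scalar Kalman filter for smoothing one noisy IMU channel (generic illustration, assumed parameters).
import numpy as np

def kalman_smooth(z, q=1e-4, r=1e-2):
    """z: 1D array of noisy measurements; q: process noise; r: measurement noise."""
    x, p = z[0], 1.0                  # initial state estimate and covariance
    out = np.empty_like(z, dtype=float)
    for k, zk in enumerate(z):
        p = p + q                     # predict (constant-state model)
        K = p / (p + r)               # Kalman gain
        x = x + K * (zk - x)          # update with the new measurement
        p = (1 - K) * p
        out[k] = x
    return out

t = np.linspace(0, 10, 150)           # 15 Hz over 10 s, matching the reported sampling rate
accel_z = np.sin(t) + 0.3 * np.random.randn(t.size)   # synthetic noisy channel
filtered = kalman_smooth(accel_z)
```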
Affiliation(s)
- Minkyo Youm
- Advanced Institute of Convergence Technology, 145 Gwanggyo-ro, Yeongtong-gu, Suwon-si 16229, Gyeonggi-do, Republic of Korea
- Junkyeong Kim
- Advanced Institute of Convergence Technology, 145 Gwanggyo-ro, Yeongtong-gu, Suwon-si 16229, Gyeonggi-do, Republic of Korea
4
Chen X, Yan J, Qin S, Li P, Ning S, Liu Y. Fall Detection Method Based on a Human Electrostatic Field and VMD-ECANet Architecture. IEEE J Biomed Health Inform 2025;29:583-595. PMID: 39405150; DOI: 10.1109/jbhi.2024.3481237.
Abstract
Falls are one of the most serious health risks faced by older adults worldwide, and they can have a significant impact on physical and mental well-being as well as quality of life. Detecting falls promptly and accurately and providing assistance can effectively reduce the harm that falls cause to older adults. This paper proposes a noncontact fall detection method based on the human electrostatic field and a VMD-ECANet framework. An electrostatic measurement system was used to measure the electrostatic signals of four types of falling postures and five types of daily actions. The signals were randomly split, both proportionally and by individual, into training and test sets. The proposed fall detection model decomposes the electrostatic signals into modal component signals using the variational mode decomposition (VMD) technique. These signals are then fed into a multichannel convolutional neural network for feature extraction, and information fusion is achieved through an efficient channel attention network (ECANet) module. Finally, the extracted features are input into a classifier to obtain the output results. The constructed model achieved an accuracy of 96.44%. The proposed fall detection solution has several advantages: it is noncontact, cost-effective, and privacy-friendly. It is suitable for detecting indoor falls by older individuals living alone and helps to reduce the harm caused by falls.
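As a hedged sketch of the channel-attention component (not the paper's exact module), the block below implements an ECA-style attention layer over the channels of 1D feature maps, such as those obtained from VMD modes; the kernel size and tensor shapes are assumptions.

```python
# Sketch of an ECA-style channel attention block for 1D feature maps (illustrative only).
import torch
import torch.nn as nn

class ECA1d(nn.Module):
    def __init__(self, k=3):                      # kernel size over channels (assumed)
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                          # x: (N, C, L) feature maps
        w = x.mean(dim=-1)                         # global average pooling -> (N, C)
        w = self.conv(w.unsqueeze(1)).squeeze(1)   # 1D conv across channels -> (N, C)
        w = torch.sigmoid(w).unsqueeze(-1)         # channel weights in (0, 1)
        return x * w                               # reweight channels

feats = torch.randn(8, 16, 256)                    # e.g. 16 channels derived from VMD modes
fused = ECA1d()(feats)
```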
5
Muniasamy A. Revolutionizing health monitoring: Integrating transformer models with multi-head attention for precise human activity recognition using wearable devices. Technol Health Care 2025;33:395-409. PMID: 39269866; DOI: 10.3233/thc-241064.
Abstract
BACKGROUND: A daily activity routine is vital for overall health and well-being, supporting physical and mental fitness. Consistent physical activity is linked to a multitude of benefits for the body, mind, and emotions and plays a key role in maintaining a healthy lifestyle. Wearable devices have become essential in health and fitness, facilitating the monitoring of daily activities. While convolutional neural networks (CNNs) have proven effective, challenges remain in quickly adapting to a variety of activities. OBJECTIVE: This study aimed to develop a model for precise human activity recognition using wearable devices by integrating transformer models with multi-head attention. METHODS: The Human Activity Recognition (HAR) algorithm uses deep learning to classify human activities from spectrogram data. It uses a pretrained convolutional neural network (CNN) based on MobileNetV2 to extract features, a dense residual transformer network (DRTN), and a multi-head multi-level attention architecture (MH-MLA) to capture time-related patterns. The model then blends information from both layers through an adaptive attention mechanism and uses a softmax function to provide classification probabilities for various human activities. RESULTS: The integrated approach, which combines a pretrained CNN with transformer models into a thorough and effective system for recognizing human activities from spectrogram data, outperformed competing methods on several datasets, achieving accuracies of 92.81%, 97.98%, and 95.32% on HARTH, KU-HAR, and HuGaDB, respectively. This suggests that integrating diverse methodologies captures nuanced human activities well across different datasets. The comparison analysis showed that the integrated system consistently performs better on dynamic human activity recognition datasets. CONCLUSION: Maintaining a routine of daily activities is crucial for overall health and well-being, and regular physical activity contributes substantially to a healthy lifestyle, benefiting both the body and the mind. The integration of wearable devices has simplified the monitoring of daily routines. This research introduces an innovative approach to human activity recognition that combines a CNN with a dense residual transformer network (DRTN) and multi-head multi-level attention (MH-MLA) within the transformer architecture to enhance its capability.
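A minimal sketch of the general idea, combining MobileNetV2 features with multi-head self-attention over spectrogram inputs, is shown below. It is not the paper's DRTN/MH-MLA architecture; the token pooling, head count, and class count are assumptions.

```python
# Hedged sketch: MobileNetV2 features + multi-head self-attention for activity classification
# over spectrogram inputs (illustrative architecture, not the paper's exact model).
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class CNNTransformerHAR(nn.Module):
    def __init__(self, n_classes=6, d_model=1280, n_heads=8):
        super().__init__()
        self.backbone = mobilenet_v2(weights=None).features      # pretrained weights omitted here
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                      # x: (N, 3, H, W) spectrogram images
        f = self.backbone(x)                   # (N, 1280, h, w)
        tokens = f.flatten(2).transpose(1, 2)  # (N, h*w, 1280) spatial tokens
        attended, _ = self.attn(tokens, tokens, tokens)
        return self.head(attended.mean(dim=1)) # pool tokens, then classify

logits = CNNTransformerHAR()(torch.randn(2, 3, 224, 224))
```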
6
Jeantet L, Zondo K, Delvenne C, Martin J, Chevallier D, Dufourq E. Automatic identification of the endangered hawksbill sea turtle behavior using deep learning and cross-species transfer learning. J Exp Biol 2024;227:jeb249232. PMID: 39555892; PMCID: PMC11698059; DOI: 10.1242/jeb.249232.
Abstract
The accelerometer, an onboard sensor, enables remote monitoring of animal posture and movement, allowing researchers to deduce behaviors. Despite the automated analysis capabilities provided by deep learning, data scarcity remains a challenge in ecology. We explored transfer learning to classify behaviors from acceleration data of critically endangered hawksbill sea turtles (Eretmochelys imbricata). Transfer learning reuses a model trained on one task from a large dataset to solve a related task. We applied this method using a model trained on green turtles (Chelonia mydas) and adapted it to identify hawksbill behaviors such as swimming, resting and feeding. We also compared this with a model trained on human activity data. The results showed an 8% and 4% F1-score improvement with transfer learning from green turtle and human datasets, respectively. Transfer learning allows researchers to adapt existing models to their study species, leveraging deep learning and expanding the use of accelerometers for wildlife monitoring.
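The following generic PyTorch sketch illustrates the transfer-learning recipe described: copy a backbone trained on the source species' accelerometer data, freeze it, and fine-tune only a new classification head for the target species. The network shape and class counts are assumptions, not the authors' model.

```python
# Minimal transfer-learning sketch: reuse a network trained on one species' accelerometer data
# and fine-tune only a new classification head on the target species (illustrative, assumed shapes).
import torch
import torch.nn as nn

class AccelCNN(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(3, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (N, 3, T) tri-axial acceleration windows
        return self.head(self.backbone(x))

source = AccelCNN(n_classes=5)                  # stand-in for a model trained on green-turtle data
target = AccelCNN(n_classes=3)                  # e.g. swimming / resting / feeding
target.backbone.load_state_dict(source.backbone.state_dict())
for p in target.backbone.parameters():          # freeze transferred layers
    p.requires_grad = False
opt = torch.optim.Adam(target.head.parameters(), lr=1e-3)  # train only the new head
```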
Affiliation(s)
- Lorène Jeantet
- African Institute for Mathematical Sciences, Muizenberg, Cape Town, 7945, South Africa
- African Institute for Mathematical Sciences, Research and Innovation Centre, KN3 Kigali, Rwanda
- Stellenbosch University, 7602, Stellenbosch, South Africa
- Kukhanya Zondo
- African Institute for Mathematical Sciences, Muizenberg, Cape Town, 7945, South Africa
- Cyrielle Delvenne
- Unité de Recherche BOREA, MNHN, CNRS 8067, SU, IRD 207, UCN, UA, Station de Recherche Marine de Martinique, Quartier Degras, Petite Anse, 97217 Les Anses d'Arlet, Martinique, France
- Jordan Martin
- Unité de Recherche BOREA, MNHN, CNRS 8067, SU, IRD 207, UCN, UA, Station de Recherche Marine de Martinique, Quartier Degras, Petite Anse, 97217 Les Anses d'Arlet, Martinique, France
- Damien Chevallier
- Unité de Recherche BOREA, MNHN, CNRS 8067, SU, IRD 207, UCN, UA, Station de Recherche Marine de Martinique, Quartier Degras, Petite Anse, 97217 Les Anses d'Arlet, Martinique, France
- Emmanuel Dufourq
- African Institute for Mathematical Sciences, Muizenberg, Cape Town, 7945, South Africa
- African Institute for Mathematical Sciences, Research and Innovation Centre, KN3 Kigali, Rwanda
- Stellenbosch University, 7602, Stellenbosch, South Africa
7
Choi A, Hyong Kim T, Chae S, Hwan Mun J. Improved Transfer Learning for Detecting Upper-Limb Movement Intention Using Mechanical Sensors in an Exoskeletal Rehabilitation System. IEEE Trans Neural Syst Rehabil Eng 2024;32:3953-3965. PMID: 39453796; DOI: 10.1109/tnsre.2024.3486444.
Abstract
The objective of this study was to propose a novel strategy for detecting upper-limb motion intentions from mechanical sensor signals using deep and heterogeneous transfer learning techniques. Three sensor types, surface electromyography (sEMG), force-sensitive resistors (FSRs), and inertial measurement units (IMUs), were combined to capture biometric signals during arm-up, hold, and arm-down movements. To distinguish motion intentions, deep learning models were constructed using the CIFAR-ResNet18 and CIFAR-MobileNetV2 architectures. The input features of the source models were sEMG, FSR, and IMU signals. The target model was trained using only FSR and IMU sensor signals. Optimization techniques determined appropriate layer structures and learning rates of each layer for effective transfer learning. The source model on CIFAR-ResNet18 exhibited the highest performance, achieving an accuracy of 95% and an F-1 score of 0.95. The target model with optimization strategies performed comparably to the source model, achieving an accuracy of 93% and an F-1 score of 0.93. The results show that mechanical sensors alone can achieve performance comparable to models including sEMG. The proposed approach can serve as a convenient and precise algorithm for human-robot collaboration in rehabilitation assistant robots.
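One concrete way to realize the layer-wise learning rates mentioned above is through optimizer parameter groups, sketched below with a ResNet18 backbone for illustration; the specific rates and the three-class head are assumptions, not values from the study.

```python
# Hedged sketch of layer-wise learning rates when fine-tuning a transferred network
# (ResNet18 used for illustration; the rates themselves are assumptions).
import torch
from torchvision.models import resnet18

model = resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 3)     # arm-up / hold / arm-down (3 classes)

# Layers not listed below receive no parameter updates, i.e. they stay effectively frozen.
optimizer = torch.optim.Adam([
    {"params": model.layer1.parameters(), "lr": 1e-5},  # early layers: small updates
    {"params": model.layer2.parameters(), "lr": 1e-5},
    {"params": model.layer3.parameters(), "lr": 1e-4},
    {"params": model.layer4.parameters(), "lr": 1e-4},  # later layers: larger updates
    {"params": model.fc.parameters(),     "lr": 1e-3},  # new task head: largest rate
])
```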
8
Wang X, Yu L, Wang H, Tsui KL, Zhao Y. Sensor-Based Multifaceted Feature Extraction and Ensemble Elastic Net Approach for Assessing Fall Risk in Community-Dwelling Older Adults. IEEE J Biomed Health Inform 2024;28:6661-6673. PMID: 39172618; DOI: 10.1109/jbhi.2024.3447705.
Abstract
Accurate identification of community-dwelling older adults at high fall risk can facilitate timely intervention and significantly reduce fall incidents. Analyzing gait and balance capabilities via feature extraction and modeling through sensor-based motion data has emerged as a viable approach for fall risk assessment. However, the existing approaches for extracting key features related to fall risk lack inclusiveness, with limited consideration of the non-linear characteristics of sensor signals, such as signal complexity, self-similarity, and local stability. In this study, we developed a multifaceted feature extraction scheme employing diverse feature types, including demographic, descriptive statistical, non-linear, spatiotemporal and spectral features, derived from three-axis accelerometers and gyroscope data. This study is the first attempt to investigate non-linear features related to fall risk in multi-task scenarios from a dynamic system perspective. Based on the extracted multifaceted features, we propose an ensemble elastic net (E-E-N) approach for handling imbalanced data and offering high model interpretability. The E-E-N utilizes bootstrap sampling to construct base classifiers and employs a weighting mechanism to aggregate the base classifiers. We conducted a set of validation experiments using real-world data for comprehensive comparative analysis. The results demonstrate that the E-E-N approach exhibits superior predictive performance on fall risk classification. Our proposed approach offers a cost-effective tool for accurately assessing fall risk and alleviating the burden of continuous health monitoring in the long term.
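A generic sketch of the bootstrap-ensemble idea behind E-E-N is given below using scikit-learn's elastic-net logistic regression; the number of base classifiers, the accuracy-based weighting, and the synthetic data are assumptions rather than the authors' configuration.

```python
# Illustrative bootstrap ensemble of elastic-net logistic classifiers with weighted voting.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.RandomState(0)
X = rng.randn(200, 30)                      # stand-in multifaceted gait/balance features
y = (rng.rand(200) < 0.2).astype(int)       # imbalanced fall-risk labels

models, weights = [], []
for b in range(25):                         # bootstrap base classifiers
    Xb, yb = resample(X, y, random_state=b)
    clf = LogisticRegression(penalty="elasticnet", solver="saga",
                             l1_ratio=0.5, C=1.0, max_iter=5000).fit(Xb, yb)
    models.append(clf)
    weights.append(clf.score(X, y))         # simple accuracy-based weight (assumed scheme)

weights = np.array(weights) / np.sum(weights)
proba = sum(w * m.predict_proba(X)[:, 1] for w, m in zip(weights, models))
y_pred = (proba >= 0.5).astype(int)
```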
9
Lim ZK, Connie T, Goh MKO, Saedon N'IB. Fall risk prediction using temporal gait features and machine learning approaches. Front Artif Intell 2024;7:1425713. PMID: 39263525; PMCID: PMC11389313; DOI: 10.3389/frai.2024.1425713.
Abstract
Introduction: Falls have been acknowledged as a major public health issue around the world. Early detection of fall risk is pivotal for preventive measures. Traditional clinical assessments, although reliable, are resource-intensive and may not always be feasible. Methods: This study explores the efficacy of artificial intelligence (AI) in predicting fall risk, leveraging gait analysis through computer vision and machine learning techniques. Data were collected using the Timed Up and Go (TUG) test and the JHFRAT assessment from MMU collaborators and augmented with a public dataset from Mendeley involving older adults. The study introduces a robust approach for extracting and analyzing gait features, such as stride time, step time, cadence, and stance time, to distinguish between fallers and non-fallers. Results: Two experimental setups were investigated: one considering separate gait features for each foot and another analyzing averaged features for both feet. The proposed solutions produce promising outcomes, with LightGBM achieving the highest accuracy of 96% in the prediction task. Discussion: The findings demonstrate that simple machine learning models can successfully identify individuals at higher fall risk based on gait characteristics, with promising results that could streamline fall risk assessment processes. However, several limitations were identified, including a small dataset with limited variation, which restricts the model's generalizability; these issues are raised for future work. Overall, this research contributes to the growing body of knowledge on fall risk prediction and underscores the potential of AI in enhancing public health strategies through the early identification of at-risk individuals.
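As an illustration of the modeling step (not the study's code or data), the sketch below trains a LightGBM classifier on placeholder temporal gait features such as stride time and cadence.

```python
# Sketch of fall-risk classification from temporal gait features with LightGBM
# (feature values and labels are synthetic placeholders, not the study's dataset).
import numpy as np
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.RandomState(42)
X = pd.DataFrame({
    "stride_time_s": rng.normal(1.10, 0.10, 300),
    "step_time_s":   rng.normal(0.55, 0.05, 300),
    "cadence_spm":   rng.normal(105, 10, 300),
    "stance_time_s": rng.normal(0.68, 0.06, 300),
})
y = rng.randint(0, 2, 300)                  # 1 = faller, 0 = non-faller (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
model = LGBMClassifier(n_estimators=200, learning_rate=0.05).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```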
Affiliation(s)
- Zhe Khae Lim
- Faculty of Information Science and Technology, Multimedia University, Melaka, Malaysia
- Tee Connie
- Faculty of Information Science and Technology, Multimedia University, Melaka, Malaysia
- Michael Kah Ong Goh
- Faculty of Information Science and Technology, Multimedia University, Melaka, Malaysia
10
Schneider M, Reich K, Hartmann U, Hermanns I, Kaufmann M, Kluge A, Fiedler A, Frese U, Ellegast R. Acquisition of Data on Kinematic Responses to Unpredictable Gait Perturbations: Collection and Quality Assurance of Data for Use in Machine Learning Algorithms for (Near-)Fall Detection. Sensors (Basel) 2024;24:5381. PMID: 39205074; PMCID: PMC11360659; DOI: 10.3390/s24165381.
Abstract
Slip, trip, and fall (STF) accidents cause high rates of absence from work in many companies. During the 2022 reporting period, the German Social Accident Insurance recorded 165,420 STF accidents, of which 12 were fatal and 2485 led to disability pensions. Particularly in the traffic, transport and logistics sector, STF accidents are the most frequently reported occupational accidents. Accurate detection of near-falls is therefore critical to improving worker safety. Efficient detection algorithms are essential for this, but their performance depends heavily on large, well-curated datasets. Current datasets, however, have drawbacks, including small sample sizes, an emphasis on older demographics, and a reliance on simulated rather than real data. In this paper we report the collection of a standardised kinematic STF dataset from real-world STF events affecting parcel delivery workers and steelworkers. We further discuss the use of the data to evaluate dynamic stability control during locomotion for machine learning and to build a standardised database. We describe the data collection, discuss the classification of the data, summarise the complete dataset statistically, and compare it with existing databases. A significant research gap in previous studies is the limited number of participants, the focus on older populations, and the reliance on simulated rather than real-world data; our study addresses these gaps by providing a larger dataset of real-world STF events from a working population with physically demanding jobs. The study population comprised 110 participants, consisting of 55 parcel delivery drivers and 55 steelworkers, both male and female, aged between 19 and 63 years. This diverse participant base allows for a more comprehensive understanding of STF incidents in different working environments.
Affiliation(s)
- Moritz Schneider
- Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), 53757 Sankt Augustin, Germany
- Kevin Reich
- Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), 53757 Sankt Augustin, Germany
- Ulrich Hartmann
- RheinAhrCampus, Koblenz University of Applied Sciences, 53424 Remagen, Germany
- Ingo Hermanns
- Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), 53757 Sankt Augustin, Germany
- Mirko Kaufmann
- RheinAhrCampus, Koblenz University of Applied Sciences, 53424 Remagen, Germany
- Annette Kluge
- Chair of Work, Organisational & Business Psychology, Ruhr University Bochum, 44801 Bochum, Germany
- Armin Fiedler
- RheinAhrCampus, Koblenz University of Applied Sciences, 53424 Remagen, Germany
- Udo Frese
- German Research Center for Artificial Intelligence (DFKI), 28359 Bremen, Germany
- Rolf Ellegast
- Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA), 53757 Sankt Augustin, Germany
11
Jiang X, Li J, Zhu Z, Liu X, Yuan Y, Chou C, Yan S, Dai C, Jia F. MovePort: Multimodal Dataset of EMG, IMU, MoCap, and Insole Pressure for Analyzing Abnormal Movements and Postures in Rehabilitation Training. IEEE Trans Neural Syst Rehabil Eng 2024;32:2633-2643. PMID: 39024074; DOI: 10.1109/tnsre.2024.3429637.
Abstract
In most real-world rehabilitation training, patients are trained to regain motion capabilities with the aid of functional/epidural electrical stimulation (FES/EES), under the support of gravity-assist systems to prevent falls. However, the lack of motion analysis datasets designed specifically for rehabilitation-related applications largely limits pilot research. We provide an open-access dataset, consisting of multimodal data collected via 16 electromyography (EMG) sensors, 6 inertial measurement unit (IMU) sensors, and 230 insole pressure sensors (IPS) per foot, together with a 26-sensor motion capture system, under different MOVEments and POstures for Rehabilitation Training (MovePort). Data were collected under diverse experimental paradigms. Twenty-four participants first imitated multiple normal and abnormal body postures, including (1) normal standing still, (2) leaning forward, (3) leaning back, and (4) half-squat, which, in practical applications, can be detected and used as feedback to tune the parameters of FES/EES and gravity-assist systems to keep patients in a target body posture. Data under imitated abnormal gaits, e.g., (1) with legs raised higher under excessive electrical stimulation and (2) with dragging legs under insufficient stimulation, were also collected. Data under normal gaits at low, medium, and high speeds are also included. Pathological gait data from a subject with spastic paraplegia further increase the clinical value of our dataset. We also provide source code to perform both intra- and inter-participant motion analyses of our dataset. We expect our dataset to provide a unique platform to promote collaboration among neurorehabilitation engineers.
12
Yuhai O, Choi A, Cho Y, Kim H, Mun JH. Deep-Learning-Based Recovery of Missing Optical Marker Trajectories in 3D Motion Capture Systems. Bioengineering (Basel) 2024;11:560. PMID: 38927796; PMCID: PMC11200691; DOI: 10.3390/bioengineering11060560.
Abstract
Motion capture (MoCap) technology, essential for biomechanics and motion analysis, faces challenges from data loss due to occlusions and technical issues. Traditional recovery methods, based on inter-marker relationships or independent marker treatment, have limitations. This study introduces a novel U-net-inspired bi-directional long short-term memory (U-Bi-LSTM) autoencoder-based technique for recovering missing MoCap data across multi-camera setups. Leveraging multi-camera and triangulated 3D data, this method employs a sophisticated U-shaped deep learning structure with an adaptive Huber regression layer, enhancing outlier robustness and minimizing reconstruction errors, proving particularly beneficial for long-term data loss scenarios. Our approach surpasses traditional piecewise cubic spline and state-of-the-art sparse low-rank methods, demonstrating statistically significant improvements in reconstruction error across various gap lengths and numbers. This research not only advances the technical capabilities of MoCap systems but also enriches the analytical tools available for biomechanical research, offering new possibilities for enhancing athletic performance, optimizing rehabilitation protocols, and developing personalized treatment plans based on precise biomechanical data.
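A much-simplified sketch of the underlying idea, a bidirectional-LSTM autoencoder trained with a Huber loss to reconstruct marker trajectories, is shown below; it omits the U-shaped structure and adaptive Huber regression layer, and all sizes are assumptions.

```python
# Rough sketch of a bidirectional-LSTM autoencoder with a Huber loss for filling gaps in
# marker trajectories (not the paper's architecture; sizes are assumptions).
import torch
import torch.nn as nn

class BiLSTMAutoencoder(nn.Module):
    def __init__(self, n_markers=39, hidden=128):
        super().__init__()
        d = n_markers * 3                                   # x, y, z per marker
        self.encoder = nn.LSTM(d, hidden, batch_first=True, bidirectional=True)
        self.decoder = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, d)

    def forward(self, x):                                   # x: (N, T, n_markers*3) with gaps zeroed
        h, _ = self.encoder(x)
        h, _ = self.decoder(h)
        return self.out(h)

model = BiLSTMAutoencoder()
loss_fn = nn.HuberLoss()                                    # robust to outlier frames
x_gappy = torch.randn(4, 200, 39 * 3)                       # dummy trajectories with missing spans
x_clean = torch.randn(4, 200, 39 * 3)                       # dummy ground-truth trajectories
loss = loss_fn(model(x_gappy), x_clean)
```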
Affiliation(s)
- Oleksandr Yuhai
- Department of Bio-Mechatronic Engineering, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Ahnryul Choi
- Department of Biomedical Engineering, College of Medical Convergence, Catholic Kwandong University, Gangneung 25601, Republic of Korea
- Yubin Cho
- Department of Bio-Mechatronic Engineering, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Hyunggun Kim
- Department of Bio-Mechatronic Engineering, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Joung Hwan Mun
- Department of Bio-Mechatronic Engineering, College of Biotechnology and Bioengineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
13
Vuong TH, Doan T, Takasu A. Deep Wavelet Convolutional Neural Networks for Multimodal Human Activity Recognition Using Wearable Inertial Sensors. Sensors (Basel) 2023;23:9721. PMID: 38139567; PMCID: PMC10747357; DOI: 10.3390/s23249721.
Abstract
Recent advances in wearable systems have made inertial sensors, such as accelerometers and gyroscopes, compact, lightweight, multimodal, low-cost, and highly accurate. Wearable inertial sensor-based multimodal human activity recognition (HAR) methods utilize the rich sensing data from embedded multimodal sensors to infer human activities. However, existing HAR approaches either rely on domain knowledge or fail to address the time-frequency dependencies of multimodal sensor signals. In this paper, we propose a novel method called deep wavelet convolutional neural networks (DWCNN) designed to learn features from the time-frequency domain and improve accuracy for multimodal HAR. DWCNN introduces a framework that combines continuous wavelet transforms (CWT) with enhanced deep convolutional neural networks (DCNN) to capture the dependencies of sensing signals in the time-frequency domain, thereby enhancing the feature representation ability for multiple wearable inertial sensor-based HAR tasks. Within the CWT, we further propose an algorithm to estimate the wavelet scale parameter. This helps enhance the performance of CWT when computing the time-frequency representation of the input signals. The output of the CWT then serves as input for the proposed DCNN, which consists of residual blocks for extracting features from different modalities and attention blocks for fusing these features of multimodal signals. We conducted extensive experiments on five benchmark HAR datasets: WISDM, UCI-HAR, Heterogeneous, PAMAP2, and UniMiB SHAR. The experimental results demonstrate the superior performance of the proposed model over existing competitors.
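For concreteness, the snippet below computes a continuous wavelet transform of a single inertial channel with PyWavelets, producing the kind of time-frequency scalogram a DCNN stage could consume; the Morlet wavelet, scale range, and sampling rate are assumptions, not the paper's settings.

```python
# Hedged sketch: CWT of one inertial channel into a scalogram for a downstream CNN
# (wavelet, scales, and sampling rate are assumptions; PyWavelets provides the CWT).
import numpy as np
import pywt

fs = 50.0                                       # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
accel = np.sin(2 * np.pi * 2 * t) + 0.2 * np.random.randn(t.size)   # dummy accelerometer channel

scales = np.arange(1, 65)                       # wavelet scale parameters
coeffs, freqs = pywt.cwt(accel, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                      # (n_scales, n_samples) time-frequency image

# Stacking one scalogram per sensor channel yields a multi-channel "image" for the DCNN stage.
print(scalogram.shape)
```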
Affiliation(s)
- Thi Hong Vuong
- Department of Informatics, National Institute of Informatics, Tokyo 101-0003, Japan
- Tung Doan
- Department of Computer Engineering, School of Information and Communication Technology, Hanoi University of Science and Technology, Hanoi 11615, Vietnam
- Atsuhiro Takasu
- Department of Informatics, National Institute of Informatics, Tokyo 101-0003, Japan
14
Hellmers S, Krey E, Gashi A, Koschate J, Schmidt L, Stuckenschneider T, Hein A, Zieschang T. Comparison of machine learning approaches for near-fall-detection with motion sensors. Front Digit Health 2023;5:1223845. PMID: 37564882; PMCID: PMC10410450; DOI: 10.3389/fdgth.2023.1223845.
Abstract
Introduction: Falls are one of the most common causes of emergency hospital visits in older people. Early recognition of an increased fall risk, which can be indicated by the occurrence of near-falls, is important for initiating interventions. Methods: In a study with 87 subjects, we simulated near-fall events on a perturbation treadmill and recorded them with inertial measurement units (IMUs) at seven different positions. We investigated different machine learning models for near-fall detection, including support vector machines, AdaBoost, convolutional neural networks, and bidirectional long short-term memory networks. Additionally, we analyzed the influence of the sensor position on the classification results. Results: The best results were achieved by a DeepConvLSTM with an F1 score of 0.954 (precision 0.969, recall 0.942) at the sensor position "left wrist." Discussion: Since these results were obtained in the laboratory, the next step is to evaluate the suitability of the classifiers in the field.
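A generic DeepConvLSTM-style model of the kind compared in this study is sketched below (convolutional feature extraction followed by LSTM layers); the layer sizes and window shape are assumptions and do not reproduce the study's configuration.

```python
# Generic DeepConvLSTM-style model for near-fall detection from IMU windows (illustrative sketch).
import torch
import torch.nn as nn

class DeepConvLSTM(nn.Module):
    def __init__(self, n_channels=6, n_classes=2, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.lstm = nn.LSTM(64, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                    # x: (N, C, T) sensor windows
        f = self.conv(x).transpose(1, 2)     # (N, T, 64) temporal feature sequence
        h, _ = self.lstm(f)
        return self.head(h[:, -1])           # classify from the last time step

logits = DeepConvLSTM()(torch.randn(8, 6, 128))
```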
Affiliation(s)
- Sandra Hellmers
- Assistance Systems and Medical Device Technology, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
- Elias Krey
- Assistance Systems and Medical Device Technology, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
- Arber Gashi
- Geriatric Medicine, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
- Jessica Koschate
- Geriatric Medicine, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
- Laura Schmidt
- Geriatric Medicine, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
- Tim Stuckenschneider
- Geriatric Medicine, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
- Andreas Hein
- Assistance Systems and Medical Device Technology, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
- Tania Zieschang
- Geriatric Medicine, Department for Health Services Research, Carl von Ossietzky University, Oldenburg, Germany
15
Mohammad Z, Anwary AR, Mridha MF, Shovon MSH, Vassallo M. An Enhanced Ensemble Deep Neural Network Approach for Elderly Fall Detection System Based on Wearable Sensors. Sensors (Basel) 2023;23:4774. PMID: 37430686; DOI: 10.3390/s23104774.
Abstract
Fatal injuries and hospitalizations caused by accidental falls are significant problems among the elderly. Detecting falls in real time is challenging, as many falls occur in a short period. Developing an automated monitoring system that can predict falls before they happen, provide safeguards during the fall, and issue remote notifications after the fall is essential to improving the level of care for the elderly. This study proposed a concept for a wearable monitoring framework that aims to anticipate falls during their beginning and descent, activating a safety mechanism to minimize fall-related injuries and issuing a remote notification after the body impacts the ground. In this study, however, the concept was demonstrated through offline analysis of existing data with an ensemble deep neural network architecture based on a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN); no hardware or other elements beyond the developed algorithm were implemented. The proposed approach utilized the CNN for robust feature extraction from accelerometer and gyroscope data and the RNN to model the temporal dynamics of the falling process. A distinct class-based ensemble architecture was developed, in which each ensemble model identified a specific class. The proposed approach was evaluated on the annotated SisFall dataset and achieved mean accuracies of 95%, 96%, and 98% for Non-Fall, Pre-Fall, and Fall detection events, respectively, outperforming state-of-the-art fall detection methods. The overall evaluation demonstrated the effectiveness of the developed deep learning architecture. Such a wearable monitoring system could help prevent injuries and improve the quality of life of elderly individuals.
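The class-wise ensemble idea can be sketched as one binary CNN-RNN detector per event class, with the final label taken from the most confident detector, as below; this is an illustration with assumed layer sizes, not the authors' architecture or training code.

```python
# Sketch of a class-wise ensemble: one binary CNN-RNN detector per event class, final label
# chosen from the most confident detector (illustrative sizes, training loop omitted).
import torch
import torch.nn as nn

class BinaryConvRNN(nn.Module):
    def __init__(self, n_channels=6):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(n_channels, 32, 5, padding=2), nn.ReLU())
        self.rnn = nn.GRU(32, 64, batch_first=True)
        self.head = nn.Linear(64, 1)

    def forward(self, x):                          # x: (N, C, T)
        h, _ = self.rnn(self.conv(x).transpose(1, 2))
        return torch.sigmoid(self.head(h[:, -1]))  # probability that this class is present

classes = ["Non-Fall", "Pre-Fall", "Fall"]
ensemble = {c: BinaryConvRNN() for c in classes}   # each model would be trained one-vs-rest

x = torch.randn(4, 6, 200)                         # dummy accelerometer + gyroscope windows
scores = torch.cat([ensemble[c](x) for c in classes], dim=1)   # (N, 3) per-class confidences
pred = [classes[int(i)] for i in scores.argmax(dim=1)]
```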
Affiliation(s)
- Zabir Mohammad
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Arif Reza Anwary
- School of Computing, Edinburgh Napier University, Edinburgh EH10 5DT, UK
- Muhammad Firoz Mridha
- Department of Computer Science, American International University-Bangladesh (AIUB), Dhaka 1229, Bangladesh
- Md Sakib Hossain Shovon
- Department of Computer Science, American International University-Bangladesh (AIUB), Dhaka 1229, Bangladesh
16
Jung S, de l’Escalopier N, Oudre L, Truong C, Dorveaux E, Gorintin L, Ricard D. A Machine Learning Pipeline for Gait Analysis in a Semi Free-Living Environment. Sensors (Basel) 2023;23:4000. PMID: 37112339; PMCID: PMC10145775; DOI: 10.3390/s23084000.
Abstract
This paper presents a novel approach to creating a graphical summary of a subject's activity during a protocol in a Semi Free-Living Environment. Thanks to this new visualization, human behavior, in particular locomotion, can now be condensed into an easy-to-read and user-friendly output. As time series collected while monitoring patients in Semi Free-Living Environments are often long and complex, our contribution relies on an innovative pipeline of signal processing methods and machine learning algorithms. Once learned, the graphical representation is able to sum up all activities present in the data and can quickly be applied to newly acquired time series. In a nutshell, raw data from inertial measurement units are first segmented into homogeneous regimes with an adaptive change-point detection procedure, then each segment is automatically labeled. Then, features are extracted from each regime, and lastly, a score is computed using these features. The final visual summary is constructed from the scores of the activities and their comparisons to healthy models. This graphical output is a detailed, adaptive, and structured visualization that helps better understand the salient events in a complex gait protocol.
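As a hedged sketch of the segmentation step, the snippet below uses the ruptures change-point library to split a synthetic signal into homogeneous regimes; the cost model and penalty are assumptions, and the library choice is illustrative rather than confirmed by the paper.

```python
# Minimal sketch of change-point segmentation of an IMU-like signal into homogeneous regimes
# (cost model and penalty are assumptions; the signal is synthetic).
import numpy as np
import ruptures as rpt

rng = np.random.RandomState(0)
signal = np.concatenate([rng.normal(0.0, 0.5, 200),    # e.g. standing
                         rng.normal(2.0, 0.5, 300),    # e.g. walking
                         rng.normal(0.5, 0.5, 150)])   # e.g. turning

algo = rpt.Pelt(model="rbf").fit(signal.reshape(-1, 1))
breakpoints = algo.predict(pen=10)          # indices where homogeneous regimes end
segments = np.split(signal, breakpoints[:-1])
# Each segment would then be labeled and summarized into features for the activity score.
```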
Affiliation(s)
- Sylvain Jung
- Université Paris Saclay, Université Paris Cité, ENS Paris Saclay, CNRS, SSA, INSERM, Centre Borelli, F-91190 Gif-sur-Yvette, France
- Université Sorbonne Paris Nord, L2TI, UR 3043, F-93430 Villetaneuse, France
- AbilyCare, 130 Rue de Lourmel, F-75015 Paris, France
- ENGIE Lab CRIGEN, F-93249 Stains, France
- Nicolas de l’Escalopier
- Université Paris Cité, Université Paris Saclay, ENS Paris Saclay, CNRS, SSA, INSERM, Centre Borelli, F-75006 Paris, France
- Service de Neurologie, Service de Santé des Armées, HIA Percy, F-92190 Clamart, France
- Laurent Oudre
- Université Paris Saclay, Université Paris Cité, ENS Paris Saclay, CNRS, SSA, INSERM, Centre Borelli, F-91190 Gif-sur-Yvette, France
- Charles Truong
- Université Paris Saclay, Université Paris Cité, ENS Paris Saclay, CNRS, SSA, INSERM, Centre Borelli, F-91190 Gif-sur-Yvette, France
- Eric Dorveaux
- AbilyCare, 130 Rue de Lourmel, F-75015 Paris, France
- Louis Gorintin
- Novakamp, 10-12 Avenue du Bosquet, F-95560 Baillet en France, France
- Damien Ricard
- Université Paris Cité, Université Paris Saclay, ENS Paris Saclay, CNRS, SSA, INSERM, Centre Borelli, F-75006 Paris, France
- Service de Neurologie, Service de Santé des Armées, HIA Percy, F-92190 Clamart, France
- Ecole du Val-de-Grâce, Service de Santé des Armées, F-75005 Paris, France
17
Cardenas JD, Gutierrez CA, Aguilar-Ponce R. Deep Learning Multi-Class Approach for Human Fall Detection Based on Doppler Signatures. Int J Environ Res Public Health 2023;20:1123. PMID: 36673883; PMCID: PMC9858740; DOI: 10.3390/ijerph20021123.
Abstract
Falling events are a global health concern with short- and long-term physical and psychological implications, especially for the elderly population. This work aims to monitor human activity in an indoor environment and recognize falling events without requiring users to carry a device or sensor on their bodies. A sensing platform based on the transmission of a continuous wave (CW) radio-frequency (RF) probe signal was developed using general-purpose equipment. The CW probe signal is similar to the pilot subcarriers transmitted by commercial off-the-shelf WiFi devices. As a result, our methodology can easily be integrated into a joint radio sensing and communication scheme. The sensing process is carried out by analyzing the changes in phase, amplitude, and frequency that the probe signal suffers when it is reflected or scattered by static and moving bodies. These features are commonly extracted from the channel state information (CSI) of WiFi signals. However, CSI relies on complex data acquisition and channel estimation processes. Doppler radars have also been used to monitor human activity. While effective, a radar-based fall detection system requires dedicated hardware. In this paper, we follow an alternative method to characterize falling events on the basis of the Doppler signatures imprinted on the CW probe signal by a falling person. A multi-class deep learning framework for classification was conceived to differentiate falling events from other activities that can be performed in indoor environments. Two neural network models were implemented. The first is based on a long-short-term memory network (LSTM) and the second on a convolutional neural network (CNN). A series of experiments comprising 11 subjects were conducted to collect empirical data and test the system's performance. Falls were detected with an accuracy of 92.1% for the LSTM case, while for the CNN, an accuracy rate of 92.1% was obtained. The results demonstrate the viability of human fall detection based on a radio sensing system such as the one described in this paper.
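As a rough illustration of the processing chain (not the authors' setup), the sketch below converts a synthetic received signal into a Doppler-style spectrogram with SciPy and classifies the frame sequence with a small LSTM; the signal, sampling rate, and model sizes are all assumptions.

```python
# Hedged sketch: spectrogram of a received CW probe signal followed by an LSTM classifier
# (synthetic signal and assumed model sizes; illustrative only).
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
rx = np.cos(2 * np.pi * 60 * t * (1 + 0.2 * t)) + 0.1 * np.random.randn(t.size)  # dummy Doppler-like chirp

f, ts, Sxx = spectrogram(rx, fs=fs, nperseg=128, noverlap=64)    # (freq_bins, time_frames)
x = torch.tensor(Sxx.T, dtype=torch.float32).unsqueeze(0)        # (1, time_frames, freq_bins)

class DopplerLSTM(nn.Module):
    def __init__(self, n_freq_bins, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_freq_bins, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, s):                     # s: (N, time_frames, freq_bins)
        h, _ = self.lstm(s)
        return self.head(h[:, -1])            # classify from the last frame's state

logits = DopplerLSTM(n_freq_bins=Sxx.shape[0])(x)
```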