1
Ye X, Sakurai K, Nair NKC, Wang KIK. Machine Learning Techniques for Sensor-Based Human Activity Recognition with Data Heterogeneity: A Review. Sensors (Basel) 2024; 24:7975. [PMID: 39771711; PMCID: PMC11679906; DOI: 10.3390/s24247975]
Abstract
Sensor-based Human Activity Recognition (HAR) is crucial in ubiquitous computing, analyzing behaviors through multi-dimensional observations. Despite research progress, HAR confronts challenges, particularly in data distribution assumptions. Most studies assume uniform data distributions across datasets, contrasting with the varied nature of practical sensor data in human activities. Addressing data heterogeneity issues can improve performance, reduce computational costs, and aid in developing personalized, adaptive models with fewer annotated data. This review investigates how machine learning addresses data heterogeneity in HAR by categorizing data heterogeneity types, applying corresponding suitable machine learning methods, summarizing available datasets, and discussing future challenges.
Affiliation(s)
- Xiaozhou Ye
- Department of Electrical, Computer, and Software Engineering, The University of Auckland, Auckland 1010, New Zealand
- Kouichi Sakurai
- Department of Informatics, Kyushu University, Fukuoka 819-0395, Japan
- Nirmal-Kumar C. Nair
- Department of Electrical, Computer, and Software Engineering, The University of Auckland, Auckland 1010, New Zealand
- Kevin I-Kai Wang
- Department of Electrical, Computer, and Software Engineering, The University of Auckland, Auckland 1010, New Zealand
2
ZhuParris A, de Goede AA, Yocarini IE, Kraaij W, Groeneveld GJ, Doll RJ. Machine Learning Techniques for Developing Remotely Monitored Central Nervous System Biomarkers Using Wearable Sensors: A Narrative Literature Review. Sensors (Basel) 2023; 23:5243. [PMID: 37299969; DOI: 10.3390/s23115243]
Abstract
BACKGROUND: Central nervous system (CNS) disorders benefit from ongoing monitoring to assess disease progression and treatment efficacy. Mobile health (mHealth) technologies offer a means for the remote and continuous symptom monitoring of patients. Machine Learning (ML) techniques can process and engineer mHealth data into a precise and multidimensional biomarker of disease activity. OBJECTIVE: This narrative literature review aims to provide an overview of the current landscape of biomarker development using mHealth technologies and ML. Additionally, it proposes recommendations to ensure the accuracy, reliability, and interpretability of these biomarkers. METHODS: This review extracted relevant publications from databases such as PubMed, IEEE, and CTTI. The ML methods employed across the selected publications were then extracted, aggregated, and reviewed. RESULTS: This review synthesized and presented the diverse approaches of 66 publications that address creating mHealth-based biomarkers using ML. The reviewed publications provide a foundation for effective biomarker development and offer recommendations for creating representative, reproducible, and interpretable biomarkers for future clinical trials. CONCLUSION: mHealth-based and ML-derived biomarkers have great potential for the remote monitoring of CNS disorders. However, further research and standardization of study designs are needed to advance this field. With continued innovation, mHealth-based biomarkers hold promise for improving the monitoring of CNS disorders.
Affiliation(s)
- Ahnjili ZhuParris
- Centre for Human Drug Research (CHDR), Zernikedreef 8, 2333 CL Leiden, The Netherlands
- Leiden Institute of Advanced Computer Science (LIACS), Snellius Gebouw, Niels Bohrweg 1, 2333 CA Leiden, The Netherlands
- Leiden University Medical Center (LUMC), Albinusdreef 2, 2333 ZA Leiden, The Netherlands
- Annika A de Goede
- Centre for Human Drug Research (CHDR), Zernikedreef 8, 2333 CL Leiden, The Netherlands
- Iris E Yocarini
- Leiden Institute of Advanced Computer Science (LIACS), Snellius Gebouw, Niels Bohrweg 1, 2333 CA Leiden, The Netherlands
- Wessel Kraaij
- Leiden Institute of Advanced Computer Science (LIACS), Snellius Gebouw, Niels Bohrweg 1, 2333 CA Leiden, The Netherlands
- The Netherlands Organisation for Applied Scientific Research (TNO), Anna van Buerenplein 1, 2595 DA Den Haag, The Netherlands
- Geert Jan Groeneveld
- Centre for Human Drug Research (CHDR), Zernikedreef 8, 2333 CL Leiden, The Netherlands
- Leiden Institute of Advanced Computer Science (LIACS), Snellius Gebouw, Niels Bohrweg 1, 2333 CA Leiden, The Netherlands
- Robert Jan Doll
- Centre for Human Drug Research (CHDR), Zernikedreef 8, 2333 CL Leiden, The Netherlands
3
Venkatachalam K, Yang Z, Trojovský P, Bacanin N, Deveci M, Ding W. Bimodal HAR: An Efficient Approach to Human Activity Analysis and Recognition Using Bimodal Hybrid Classifiers. Inf Sci (N Y) 2023. [DOI: 10.1016/j.ins.2023.01.121]
4
Jaramillo IE, Jeong JG, Lopez PR, Lee CH, Kang DY, Ha TJ, Oh JH, Jung H, Lee JH, Lee WH, Kim TS. Real-Time Human Activity Recognition with IMU and Encoder Sensors in Wearable Exoskeleton Robot via Deep Learning Networks. Sensors (Basel) 2022; 22:9690. [PMID: 36560059; PMCID: PMC9783602; DOI: 10.3390/s22249690]
Abstract
Wearable exoskeleton robots have become a promising technology for supporting human motion in multiple tasks. Real-time activity recognition provides useful information for enhancing the robot's control assistance in daily tasks. This work implements a real-time activity recognition system based on the activity signals of an inertial measurement unit (IMU) and a pair of rotary encoders integrated into the exoskeleton robot. Five deep learning models were trained and evaluated for activity recognition. A subset of optimized deep learning models was then transferred to an edge device for real-time evaluation in a continuous-action environment covering eight common human tasks: stand, bend, crouch, walk, sit-down, sit-up, and ascend and descend stairs. These eight activities of the robot wearer were recognized with an average accuracy of 97.35% in real-time tests, with an inference time under 10 ms and an overall latency of 0.506 s per recognition on the selected edge device.
Affiliation(s)
- Ismael Espinoza Jaramillo
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Jin Gyun Jeong
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Do-Yeon Kang
- Hyundai Rotem, Uiwang-si 16082, Republic of Korea
- Tae-Jun Ha
- Hyundai Rotem, Uiwang-si 16082, Republic of Korea
- Ji-Heon Oh
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Hwanseok Jung
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Jin Hyuk Lee
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
- Won Hee Lee
- Department of Software Convergence, Kyung Hee University, Yongin 17104, Republic of Korea
- Tae-Seong Kim
- Department of Electronics and Information Convergence Engineering, Kyung Hee University, Yongin 17104, Republic of Korea
5
Fu Z, Zhang B, He X, Li Y, Wang H, Huang J. Emotion recognition based on multi-modal physiological signals and transfer learning. Front Neurosci 2022; 16:1000716. [PMID: 36161186; PMCID: PMC9493208; DOI: 10.3389/fnins.2022.1000716]
Abstract
In emotion recognition based on physiological signals, collecting enough labeled data from a single subject for training is time-consuming and expensive. Individual differences in physiological signals and their inherent noise significantly affect emotion recognition accuracy. To overcome inter-subject differences in physiological signals, we propose a joint probability domain adaptation with bi-projection matrix algorithm (JPDA-BPM). The bi-projection matrix method fully considers the different feature distributions of the source and target domains; it can better project the source and target domains into the feature space, thereby increasing the algorithm's performance. To overcome the effect of noise in physiological signals, we propose a substructure-based joint probability domain adaptation algorithm (SSJPDA), which avoids both domain-level matching that is too coarse and sample-level matching that is susceptible to noise. To verify the effectiveness of the proposed transfer learning algorithm for emotion recognition based on physiological signals, we evaluated it on the Database for Emotion Analysis using Physiological Signals (DEAP). The experimental results show that the average recognition accuracy of the proposed SSJPDA-BPM algorithm on the multimodal fusion physiological data from the DEAP dataset is 63.6% for valence and 64.4% for arousal. Compared with joint probability domain adaptation (JPDA), valence and arousal recognition accuracy increased by 17.6% and 13.4%, respectively.
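As a rough intuition for what matching feature distributions across subjects means, the sketch below shows a much simpler classical baseline (CORAL-style second-order alignment), not the JPDA-BPM or SSJPDA algorithms proposed above; the toy data and variable names are hypothetical.

```python
import numpy as np

def coral_align(Xs, Xt, eps=1e-6):
    """CORAL-style alignment: shift and re-colour target-domain features
    so their mean and covariance match the source domain. A deliberately
    simple stand-in for cross-subject distribution matching; NOT the
    JPDA-BPM algorithm from the paper."""
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    whiten = np.linalg.inv(np.linalg.cholesky(Ct))    # decorrelate target
    recolour = np.linalg.cholesky(Cs)                 # impose source covariance
    return (Xt - Xt.mean(axis=0)) @ whiten.T @ recolour.T + Xs.mean(axis=0)

# Toy "subjects": same task, different per-person signal statistics.
rng = np.random.default_rng(1)
Xs = rng.normal(size=(500, 3)) * 2.0 + 1.0   # source subject's features
Xt = rng.normal(size=(500, 3)) * 0.5 - 1.0   # target subject's features
Xt_aligned = coral_align(Xs, Xt)
```

After alignment, a classifier trained on the source subject sees target features with matching first- and second-order statistics, which is the minimal version of the cross-domain projection idea.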
6
Kyamakya K, Tavakkoli V, McClatchie S, Arbeiter M, Scholte van Mast BG. A Comprehensive "Real-World Constraints"-Aware Requirements Engineering Related Assessment and a Critical State-of-the-Art Review of the Monitoring of Humans in Bed. Sensors (Basel) 2022; 22:6279. [PMID: 36016040; PMCID: PMC9414192; DOI: 10.3390/s22166279]
Abstract
Abnormality detection and prediction is currently a very active research topic; in this paper, we address it in the context of monitoring the activity of a human in bed. The paper presents a comprehensive requirements engineering dossier for a monitoring system of a "human in bed" for abnormal behavior detection and forecasting. Practical, real-world constraints and concerns were identified and taken into consideration in the requirements dossier. A comprehensive and holistic discussion of the anomaly concept lays the ground for a realistic specification book for the anomaly detection system. Some relevant systems engineering issues, e.g., verification and validation, are also briefly addressed. A structured critical review of the relevant literature identified four major approaches of interest, which were then evaluated from the perspective of the requirements dossier. This evaluation clearly demonstrated that the approach integrating graph networks and advanced deep-learning schemes (Graph-DL) is the one capable of fully addressing the challenging issues expressed in the real-world-conditions-aware specification book. Nevertheless, to meet immediate market needs, systems based on advanced statistical methods can, after a series of adaptations, already satisfy the important requirements related to, e.g., low cost, solid data security, and a fully embedded, self-sufficient implementation. To conclude, some recommendations regarding system architecture and overall systems engineering are formulated.
Affiliation(s)
- Kyandoghere Kyamakya
- Institute of Smart Systems Technologies, Universitaet Klagenfurt, 9020 Klagenfurt, Austria
- Vahid Tavakkoli
- Institute of Smart Systems Technologies, Universitaet Klagenfurt, 9020 Klagenfurt, Austria
7
Personalised Gait Recognition for People with Neurological Conditions. Sensors (Basel) 2022; 22:3980. [PMID: 35684600; PMCID: PMC9183078; DOI: 10.3390/s22113980]
Abstract
There is growing interest in monitoring gait patterns in people with neurological conditions. The democratisation of wearable inertial sensors has enabled the study of gait in free-living environments. One pivotal aspect of gait assessment in uncontrolled environments is the ability to accurately recognise gait instances. Previous work has focused on wavelet transform methods or general machine learning models to detect gait; the former assume a comparable gait pattern between people and the latter assume training datasets that represent a diverse population. In this paper, we argue that these approaches are unsuitable for people with severe motor impairments and their distinct gait patterns, and make the case for a lightweight personalised alternative. We propose an approach that builds on top of a general model, fine-tuning it with personalised data. A comparative proof-of-concept evaluation with general machine learning (NN and CNN) approaches and personalised counterparts showed that the latter improved the overall accuracy by 3.5% for the NN and 5.3% for the CNN. More importantly, for participants who were ill-represented by the general model (the most extreme cases), the personalised approaches improved the recognition of gait instances by up to 16.9% for the NN and 20.5% for the CNN. It is common to say that people with neurological conditions, such as Parkinson's disease, present very individual motor patterns, and that in a sense they are all outliers; we expect that our results will motivate researchers to explore alternative approaches that value personalisation rather than harvesting datasets that may not be able to represent these differences.
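The personalisation idea above (a general model fine-tuned with a small amount of personal data) can be illustrated with a deliberately tiny stand-in: a logistic-regression "gait detector" rather than the paper's NN/CNN, on synthetic features. Every name and data distribution here is hypothetical.

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.1, epochs=300):
    """Logistic regression via gradient descent. Passing `w` continues
    training from existing weights, i.e. fine-tunes a general model."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))      # predicted gait probability
        w = w - lr * X.T @ (p - y) / len(y)     # cross-entropy gradient step
    return w

def accuracy(X, y, w):
    return float((((X @ w) > 0).astype(float) == y).mean())

rng = np.random.default_rng(0)
# General population: the "gait" label depends on feature 0.
X_gen = rng.normal(size=(200, 3))
y_gen = (X_gen[:, 0] > 0).astype(float)
# One atypical walker: their label depends on feature 1 instead,
# pushed away from the boundary so the personal data are well separated.
X_per = rng.normal(size=(40, 3))
X_per[:, 1] += np.sign(X_per[:, 1])
y_per = (X_per[:, 1] > 0).astype(float)

w_general = train_logreg(X_gen, y_gen)
# Personalisation: start from the general weights, train on personal data.
w_personal = train_logreg(X_per, y_per, w=w_general.copy(), lr=0.5, epochs=2000)
```

On the atypical walker's data, the general model is near chance while the fine-tuned copy recovers their individual pattern, which is the qualitative effect the paper reports for its ill-represented participants.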
8
Ariza-Colpas PP, Vicario E, Oviedo-Carrascal AI, Butt Aziz S, Piñeres-Melo MA, Quintero-Linero A, Patara F. Human Activity Recognition Data Analysis: History, Evolutions, and New Trends. Sensors (Basel) 2022; 22:3401. [PMID: 35591091; PMCID: PMC9103712; DOI: 10.3390/s22093401]
Abstract
The Ambient Assisted Living (AAL) research area focuses on generating innovative technology, products, and services that provide assistance, medical care, and rehabilitation to older adults, extending the time these people can live independently, whether or not they suffer from neurodegenerative diseases or a disability. This important area is responsible for the development of Activity Recognition Systems (ARS), a valuable tool for identifying the type of activity carried out by older adults and providing them with assistance that allows them to carry out their daily activities with complete normality. This article reviews the literature on, and the evolution of, the different techniques for processing this type of data (supervised, unsupervised, and ensemble learning, deep learning, reinforcement learning, transfer learning, and metaheuristic approaches) applied to this sector of health science, presenting the metrics of recent experiments for researchers in this area of knowledge. As a result, models based on reinforcement or transfer learning can be identified as a good line of work for the processing and analysis of human activity recognition.
Affiliation(s)
- Paola Patricia Ariza-Colpas
- Department of Computer Science and Electronics, Universidad de la Costa CUC, Barranquilla 080002, Colombia
- Faculty of Engineering in Information and Communication Technologies, Universidad Pontificia Bolivariana, Medellín 050031, Colombia
- Enrico Vicario
- Department of Information Engineering, University of Florence, 50139 Firenze, Italy
- Ana Isabel Oviedo-Carrascal
- Faculty of Engineering in Information and Communication Technologies, Universidad Pontificia Bolivariana, Medellín 050031, Colombia
- Shariq Butt Aziz
- Department of Computer Science and IT, University of Lahore, Lahore 44000, Pakistan
- Fulvio Patara
- Department of Information Engineering, University of Florence, 50139 Firenze, Italy
9
Mathematical Criteria for a Priori Performance Estimation of Activities of Daily Living Recognition. Sensors (Basel) 2022; 22:2439. [PMID: 35408054; PMCID: PMC9002689; DOI: 10.3390/s22072439]
Abstract
Monitoring Activities of Daily Living (ADL) has become a major focus in responding to the aging population and preventing frailty. To do this, the scientific community is using Machine Learning (ML) techniques to learn the lifestyle habits of people at home. The most-used formalisms to represent the behaviour of the inhabitant are the Hidden Markov Model (HMM) and Probabilistic Finite Automata (PFA), where event streams are considered. A common decomposition for designing ADL using a mathematical model is Activities–Actions–Events (AAE). In this paper, we propose mathematical criteria to evaluate a priori the performance of these instrumentations for the goals of ADL recognition. We also present a case study to illustrate the use of these criteria.
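As a minimal illustration of the event-stream formalisms mentioned above (HMM/PFA), the sketch below estimates a transition matrix between sensor events from an observed stream. It is a toy stand-in for those models, not the paper's a priori criteria; the event labels are made up.

```python
import numpy as np

def transition_matrix(stream, states):
    """Probabilistic-automaton sketch: estimate transition probabilities
    between sensor events from an observed event stream. Rows with no
    observed outgoing transitions stay all-zero."""
    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for a, b in zip(stream, stream[1:]):      # count consecutive pairs
        counts[idx[a], idx[b]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

stream = list("ABABAC")                       # toy stream over sensors A, B, C
P = transition_matrix(stream, ["A", "B", "C"])
```

A full HMM or PFA would add hidden activity states on top of such event statistics; this only shows the event-stream counting that underlies both formalisms.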
10
Online Activity Recognition Combining Dynamic Segmentation and Emergent Modeling. Sensors (Basel) 2022; 22:2250. [PMID: 35336420; PMCID: PMC8955624; DOI: 10.3390/s22062250]
Abstract
Activity recognition is fundamental to many applications envisaged in pervasive computing, especially in smart environments, where residents' sensor data are mapped to human activities. Previous research usually focuses on scripted or pre-segmented sequences related to activities, whereas many real-world deployments require information about ongoing activities in real time. In this paper, we propose an online activity recognition model for streaming sensor data that combines a spatio-temporal-correlation-based dynamic segmentation method with a stigmergy-based emergent modeling method to recognize activities as new sensor events are recorded. The dynamic segmentation approach integrates sensor correlation and time correlation to judge whether two consecutive sensor events belong to the same window, keeping events from very different functional areas, or separated by a long time interval, out of the same window; this yields a segmented window for every single event. The emergent paradigm with marker-based stigmergy is then adopted to build activity features, explicitly represented as a directed weighted network that defines the context for the last sensor event in the window, without requiring sophisticated domain knowledge. We validate the proposed method on the real-world Aruba dataset from the CASAS project, and the results show its effectiveness.
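The segmentation rule described above (start a new window when consecutive events are too far apart in time or come from different functional areas) can be sketched as follows. This is an illustrative greedy rule with made-up sensor names and a made-up gap threshold, not the paper's exact correlation measures.

```python
from dataclasses import dataclass

@dataclass
class Event:
    sensor: str      # sensor identifier, e.g. a motion sensor ID
    area: str        # functional area the sensor belongs to, e.g. "kitchen"
    t: float         # timestamp in seconds

def segment(events, max_gap=30.0):
    """Greedy dynamic segmentation sketch: open a new window whenever
    the time gap is too large or the functional area changes, so each
    window stays temporally and spatially coherent."""
    windows, current = [], []
    for e in events:
        if current and (e.t - current[-1].t > max_gap
                        or e.area != current[-1].area):
            windows.append(current)
            current = []
        current.append(e)
    if current:
        windows.append(current)
    return windows
```

Each emitted window would then feed the stigmergy-based feature construction; here only the windowing step is shown.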
11
Daily Living Activity Recognition In-The-Wild: Modeling and Inferring Activity-Aware Human Contexts. Electronics 2022. [DOI: 10.3390/electronics11020226]
Abstract
Advancements in smart sensing and computing technologies have provided a dynamic opportunity to develop intelligent systems for human activity monitoring and thus assisted living. Consequently, many researchers have put their efforts into implementing sensor-based activity recognition systems. However, recognizing people's natural behavior and physical activities in diverse contexts is still a challenging problem, because human physical activities are often distracted by changes in their surroundings. In addition to physical activity recognition, it is therefore also vital to model and infer the user's context information in order to realize human-environment interactions in a better way. This research paper proposes a new idea for activity recognition in-the-wild, which entails modeling and identifying detailed human contexts (such as human activities, behavioral environments, and phone states) using portable accelerometer sensors. The proposed scheme offers a detailed, fine-grained representation of natural human activities with contexts, which is crucial for effectively modeling human-environment interactions in context-aware applications and systems. The proposed idea is validated using a series of experiments, achieving an average balanced accuracy of 89.43%, which demonstrates its effectiveness.
12
Yen CT, Liao JX, Huang YK. Feature Fusion of a Deep-Learning Algorithm into Wearable Sensor Devices for Human Activity Recognition. Sensors (Basel) 2021; 21:8294. [PMID: 34960388; PMCID: PMC8706653; DOI: 10.3390/s21248294]
Abstract
This paper presents a wearable device, fitted on the waist of a participant, that recognizes six activities of daily living (walking, walking upstairs, walking downstairs, sitting, standing, and lying) through a deep-learning human activity recognition (HAR) algorithm. The wearable device comprises a single-board computer (SBC) and six-axis sensors. The deep-learning algorithm employs three parallel convolutional neural networks for local feature extraction, whose outputs are concatenated to establish feature fusion models of varying kernel size. By using kernels of different sizes, relevant local features of varying lengths were identified, thereby increasing the accuracy of human activity recognition. For experimental data, the University of California, Irvine (UCI) database and self-recorded data were used separately. The self-recorded data were obtained by having 21 participants wear the device on their waist and perform six common activities in the laboratory; these data were used to verify the performance of the proposed deep-learning algorithm on the wearable device. The accuracy for the six activities was 97.49% on the UCI dataset and 96.27% on the self-recorded data; the accuracies under tenfold cross-validation were 99.56% and 97.46%, respectively. The experimental results successfully verify the proposed convolutional neural network (CNN) architecture, which can be used in rehabilitation assessment for people unable to exercise vigorously.
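The multi-kernel feature-fusion idea above (parallel convolution branches with different kernel sizes, concatenated) can be sketched in a few lines of numpy. Random kernels stand in for trained CNN filters, and the signal is synthetic; this shows the structure of the fusion, not the paper's trained model.

```python
import numpy as np

def conv1d_valid(x, k):
    """'Valid' 1-D cross-correlation of signal x with kernel k."""
    n = len(x) - len(k) + 1
    return np.array([x[i:i + len(k)] @ k for i in range(n)])

def multi_kernel_features(x, kernel_sizes=(3, 5, 7), rng=None):
    """Feature-fusion sketch: run parallel convolutions with different
    kernel sizes over one sensor channel, global-max-pool each branch,
    and concatenate, capturing local patterns of several lengths."""
    rng = rng if rng is not None else np.random.default_rng(42)
    feats = []
    for ks in kernel_sizes:
        kernel = rng.normal(size=ks)              # stand-in for a learned filter
        feats.append(conv1d_valid(x, kernel).max())  # global max pooling
    return np.array(feats)

x = np.sin(np.linspace(0, 6 * np.pi, 128))        # toy "accelerometer" trace
f = multi_kernel_features(x)
```

In the actual architecture each branch has many filters and the concatenated features feed dense classification layers; the point here is only that branches of different kernel size see local patterns of different lengths.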
Affiliation(s)
- Chih-Ta Yen
- Department of Electrical Engineering, National Taiwan Ocean University, Keelung City 202301, Taiwan
- Jia-Xian Liao
- Department of Electrical Engineering, National Formosa University, Yunlin County 632, Taiwan
- Yi-Kai Huang
- Department of Electrical Engineering, National Formosa University, Yunlin County 632, Taiwan
13
Liu L, He J, Ren K, Lungu J, Hou Y, Dong R. An Information Gain-Based Model and an Attention-Based RNN for Wearable Human Activity Recognition. Entropy 2021; 23:1635. [PMID: 34945941; PMCID: PMC8700115; DOI: 10.3390/e23121635]
Abstract
Wearable sensor-based human activity recognition (HAR) is a popular human activity perception method. However, due to the lack of a unified human activity model, the number and positions of sensors in existing wearable HAR systems vary, which hinders promotion and application. In this paper, an information gain-based human activity model is established, and an attention-based recurrent neural network (Attention-RNN) for human activity recognition is designed. The Attention-RNN, which combines bidirectional long short-term memory (BiLSTM) with an attention mechanism, was tested on the UCI Opportunity challenge dataset. Experiments show that the proposed human activity model provides guidance for the deployment locations of sensors and a basis for selecting the number of sensors, which can reduce the number of sensors needed to achieve the same classification effect. In addition, the proposed Attention-RNN achieves F1 scores of 0.898 and 0.911 on the Modes of Locomotion (ML) and Gesture Recognition (GR) tasks, respectively.
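The attention mechanism over recurrent outputs described above can be reduced to a small numpy sketch: score each time step's hidden state against a query vector, softmax the scores, and take the weighted sum. Both the hidden states and the query here are random placeholders for what a trained BiLSTM would produce, not the paper's model.

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_pool(H, w):
    """Attention-pooling sketch: H is a (T, d) matrix of per-time-step
    hidden states, w a (d,) query vector. Returns the attention-weighted
    context vector and the attention weights themselves."""
    scores = H @ w             # (T,) relevance of each time step
    alpha = softmax(scores)    # attention weights, non-negative, sum to 1
    return alpha @ H, alpha    # (d,) context vector fed to the classifier

rng = np.random.default_rng(0)
H = rng.normal(size=(20, 8))   # 20 time steps, 8-dim hidden states
w = rng.normal(size=8)
context, alpha = attention_pool(H, w)
```

The weights `alpha` also make the model partially interpretable: they indicate which time steps dominated the activity decision.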
Affiliation(s)
- Leyuan Liu
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Jian He
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Engineering Research Center for IOT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Keyan Ren
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Engineering Research Center for IOT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Jonathan Lungu
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Yibin Hou
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Engineering Research Center for IOT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Ruihai Dong
- School of Computer Science, University College Dublin, D04 V1W8 Dublin 4, Ireland
14
Manikkath J, Subramony JA. Toward closed-loop drug delivery: Integrating wearable technologies with transdermal drug delivery systems. Adv Drug Deliv Rev 2021; 179:113997. [PMID: 34634396; DOI: 10.1016/j.addr.2021.113997]
Abstract
The recent advancement and prevalence of wearable technologies and their ability to make digital measurements of vital signs and wellness parameters have triggered a new paradigm in the management of diseases. Drug delivery as a function of stimuli or response from wearable, closed-loop systems can offer real-time on-demand or preprogrammed drug delivery capability and offer total management of disease states. Here we review the key opportunities in this space for development of closed-loop systems, given the advent of digital wearable technologies. Particular considerations and focus are given to closed-loop systems combined with transdermal drug delivery technologies.
15
Stoeve M, Schuldhaus D, Gamp A, Zwick C, Eskofier BM. From the Laboratory to the Field: IMU-Based Shot and Pass Detection in Football Training and Game Scenarios Using Deep Learning. Sensors (Basel) 2021; 21:3071. [PMID: 33924985; PMCID: PMC8124919; DOI: 10.3390/s21093071]
Abstract
The applicability of sensor-based human activity recognition in sports has been repeatedly shown for laboratory settings. However, transferability to real-world scenarios cannot be taken for granted, due to limitations in data and evaluation methods. Using the example of football shot and pass detection against a null class, we explore the influence of those factors on real-world event classification in field sports. For this purpose, we compare the performance of a Support Vector Machine (SVM) established in the literature for laboratory settings across three evaluation scenarios gradually evolving from laboratory settings to real-world scenarios. In addition, three different types of neural networks are compared: a convolutional neural network (CNN), a long short-term memory network (LSTM), and a convolutional LSTM (convLSTM). Results indicate that the SVM is not able to reliably solve the investigated three-class problem. In contrast, all deep learning models reach high classification scores, showing the general feasibility of event detection in real-world sports scenarios using deep learning. The best performance, a weighted F1-score of 0.93, was achieved by the CNN. The study provides valuable insights for sports assessment under practically relevant conditions. In particular, it shows that (1) the discriminative power of established features needs to be re-evaluated when real-world conditions are assessed, (2) the selection of an appropriate dataset and evaluation method are both required to evaluate real-world applicability, and (3) deep learning-based methods yield promising results for real-world HAR in sports despite high variation in the execution of activities.
Collapse
Affiliation(s)
- Maike Stoeve
- Machine Learning and Data Analytics Lab, Department of Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052 Erlangen, Germany;
- Correspondence:
- Axel Gamp
- Adidas AG, 91074 Herzogenaurach, Germany; (D.S.); (A.G.); (C.Z.)
- Constantin Zwick
- Adidas AG, 91074 Herzogenaurach, Germany; (D.S.); (A.G.); (C.Z.)
- Bjoern M. Eskofier
- Machine Learning and Data Analytics Lab, Department of Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052 Erlangen, Germany;
16
Mekruksavanich S, Jitpattanakul A. LSTM Networks Using Smartphone Data for Sensor-Based Human Activity Recognition in Smart Homes. SENSORS 2021; 21:s21051636. [PMID: 33652697 PMCID: PMC7956629 DOI: 10.3390/s21051636] [Citation(s) in RCA: 40] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 02/22/2021] [Accepted: 02/22/2021] [Indexed: 11/16/2022]
Abstract
Human Activity Recognition (HAR) employing inertial motion data has gained considerable momentum in recent years, both in research and industrial applications. From a broad perspective, this has been driven by an acceleration in the building of intelligent and smart environments and systems that cover all aspects of human life, including healthcare, sports, manufacturing, and commerce. Such environments and systems necessitate and subsume activity recognition, which aims to recognize the actions, characteristics, and goals of one or more individuals from a temporal series of observations streamed from one or more sensors. Because conventional Machine Learning (ML) techniques rely on handcrafted features in the extraction process, current research suggests that deep-learning approaches are more suitable for automated feature extraction from raw sensor data. In this work, a generic HAR framework for smartphone sensor data is proposed, based on Long Short-Term Memory (LSTM) networks for time-series domains. Four baseline LSTM networks are comparatively studied to analyze the impact of using different kinds of smartphone sensor data. In addition, a hybrid LSTM network called 4-layer CNN-LSTM is proposed to improve recognition performance. The HAR method is evaluated on the public smartphone-based UCI-HAR dataset through various combinations of sample generation processes (overlapping (OW) and non-overlapping (NOW) windows) and validation protocols (10-fold and leave-one-subject-out (LOSO) cross-validation). Moreover, Bayesian optimization techniques are used in this study since they are advantageous for tuning the hyperparameters of each LSTM network. The experimental results indicate that the proposed 4-layer CNN-LSTM network performs well in activity recognition, improving the average accuracy by up to 2.24% over prior state-of-the-art approaches.
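The sample generation (OW vs. NOW) and LOSO protocol mentioned in the abstract are standard preprocessing and evaluation steps. A minimal sketch, assuming a generic (time, channels) sensor stream and hypothetical window parameters rather than the paper's exact settings:

```python
import numpy as np

def segment(signal, win=128, overlap=0.5):
    """Slice a (T, C) sensor stream into fixed-length windows.

    overlap=0.5 yields overlapping windows (OW);
    overlap=0.0 yields non-overlapping windows (NOW).
    """
    step = max(1, int(win * (1 - overlap)))
    n = (len(signal) - win) // step + 1
    return np.stack([signal[i * step:i * step + win] for i in range(n)])

def loso_splits(subjects):
    """Leave-one-subject-out: yield (train_idx, test_idx) per held-out subject."""
    subjects = np.asarray(subjects)
    for s in np.unique(subjects):
        yield np.where(subjects != s)[0], np.where(subjects == s)[0]

stream = np.random.default_rng(1).normal(size=(1000, 6))  # e.g. acc + gyro
ow = segment(stream, win=128, overlap=0.5)   # OW: 50% overlap
now = segment(stream, win=128, overlap=0.0)  # NOW: disjoint windows
print(ow.shape, now.shape)  # (14, 128, 6) (7, 128, 6)

subj = np.array([1, 1, 2, 2, 3])             # toy per-window subject labels
folds = list(loso_splits(subj))
print(len(folds))  # 3 folds, one per held-out subject
```

OW multiplies the number of training samples at the cost of correlated windows, which is why the paper evaluates both schemes under both 10-fold and the stricter LOSO protocol.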
Affiliation(s)
- Sakorn Mekruksavanich
- Department of Computer Engineering, School of Information and Communication Technology, University of Phayao, Phayao 56000, Thailand;
- Anuchit Jitpattanakul
- Intelligent and Nonlinear Dynamic Innovations Research Center, Department of Mathematics, Faculty of Applied Science, King Mongkut’s University of Technology North Bangkok, Bangkok 10800, Thailand
- Correspondence:
17
An Ambient Intelligence-Based Human Behavior Monitoring Framework for Ubiquitous Environments. INFORMATION 2021. [DOI: 10.3390/info12020081] [Citation(s) in RCA: 48] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
This framework for human behavior monitoring takes a holistic approach to studying, tracking, monitoring, and analyzing human behavior during activities of daily living (ADLs). The framework provides two novel functionalities. First, it performs semantic analysis of user interactions with diverse contextual parameters during ADLs to identify a list of distinct behavioral patterns associated with different complex activities. Second, it includes an intelligent decision-making algorithm that analyzes these behavioral patterns and their relationships with the dynamic contextual and spatial features of the environment to detect any anomalies in user behavior that could constitute an emergency. These functionalities of this interdisciplinary framework were developed by integrating the latest advancements and technologies in human–computer interaction, machine learning, the Internet of Things (IoT), pattern recognition, and ubiquitous computing. The framework was evaluated on a dataset of ADLs, and the accuracies of these two functionalities were found to be 76.71% and 83.87%, respectively. These results uphold the relevance and potential of this framework for improving the quality of life and assisted living of the aging population in future IoT-based ubiquitous living environments, e.g., smart homes.
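The abstract does not detail the decision-making algorithm, so purely as an illustrative sketch (with invented baseline data and a simple deviation rule, not the paper's method), flagging anomalous behavior against a learned per-activity baseline might look like:

```python
import statistics

# toy baseline: typical durations (minutes) of ADLs learned per user
baseline = {
    "sleeping": [460, 445, 470, 455],
    "cooking":  [25, 30, 28, 35],
}

def is_anomalous(activity, duration, k=3.0):
    """Flag a duration more than k standard deviations from the
    user's historical mean for that activity."""
    hist = baseline[activity]
    mu, sd = statistics.mean(hist), statistics.stdev(hist)
    return abs(duration - mu) > k * sd

print(is_anomalous("cooking", 29))   # prints False: within baseline
print(is_anomalous("cooking", 180))  # prints True: far outside baseline
```

A real system along the lines described in the abstract would additionally condition on contextual and spatial features of the environment before deciding that a deviation constitutes an emergency.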