1. Qi W, Xu X, Qian K, Schuller BW, Fortino G, Aliverti A. A Review of AIoT-Based Human Activity Recognition: From Application to Technique. IEEE J Biomed Health Inform 2025; 29:2425-2438. [PMID: 38809724] [DOI: 10.1109/jbhi.2024.3406737]
Abstract
This scoping review redefines the Artificial Intelligence-based Internet of Things (AIoT)-driven Human Activity Recognition (HAR) field by systematically extrapolating from various application domains to deduce potential techniques and algorithms. We distill a general model with adaptive learning and optimization mechanisms by analyzing human activity types in detail and the use of contact and non-contact devices. The review presents various mathematical paradigms for system integration driven by multimodal data fusion, covering the prediction of complex behaviors and identifying valuable methods, devices, and systems for HAR. It also establishes benchmarks for behavior recognition across different application requirements, from simple localized actions to group activities, and summarizes open research directions, including data diversity and volume, computational limitations, interoperability, real-time recognition, data security, and privacy concerns. Finally, the review aims to serve as a comprehensive and foundational resource for researchers delving into the complex and burgeoning realm of AIoT-enhanced HAR, providing insights and guidance for future innovations and developments.
2. López JL, Espinilla M, Verdejo Á. Evaluation of the Impact of the Sustainable Development Goals on an Activity Recognition Platform for Healthcare Systems. Sensors (Basel) 2023; 23:3563. [PMID: 37050622] [PMCID: PMC10099385] [DOI: 10.3390/s23073563]
Abstract
The Sustainable Development Goals (SDGs), also known as the Global Goals, were adopted by the United Nations in 2015 as a universal call to end poverty, protect the planet and ensure peace and prosperity for all by 2030. The 17 SDGs have been designed to end poverty, hunger, AIDS and discrimination against women and girls. Despite the clear SDG framework, there is a significant gap in the literature to establish the alignment of systems, projects or tools with the SDGs. In this research work, we assess the SDG alignment of an activity recognition platform for healthcare systems, called ACTIVA. This new platform, designed to be deployed in environments inhabited by vulnerable people, is based on sensors and artificial intelligence, and includes a mobile application to report anomalous situations and ensure a rapid response from healthcare personnel. In this work, the ACTIVA platform and its compliance with each of the SDGs are assessed, providing a detailed evaluation of SDG 7-ensuring access to affordable, reliable, sustainable and modern energy for all. In addition, a website is presented where the ACTIVA platform's compliance with the 17 SDGs has been evaluated in detail. The comprehensive assessment of this novel platform's compliance with the SDGs provides a roadmap for the evaluation of future and past systems in relation to sustainability.
Affiliation(s)
- José L. López, Computer Science Department, University of Jaén, Campus Las Lagunillas s/n, 23071 Jaén, Spain
- Macarena Espinilla, Computer Science Department, University of Jaén, Campus Las Lagunillas s/n, 23071 Jaén, Spain
- Ángeles Verdejo, Electrical Engineering Department, University of Jaén, Campus Las Lagunillas s/n, 23071 Jaén, Spain
3. Hartmann KV, Primc N, Rubeis G. Lost in translation? Conceptions of privacy and independence in the technical development of AI-based AAL. Med Health Care Philos 2023; 26:99-110. [PMID: 36348209] [PMCID: PMC9984520] [DOI: 10.1007/s11019-022-10126-8]
Abstract
Ambient assisted living (AAL) encompasses smart home technologies installed in the personal living environment to support older, disabled, and chronically ill people, with the goal of delaying or reducing their need for nursing care in a care facility. Artificial intelligence (AI) is seen as an important tool for assisting the target group in their daily lives. A literature search and qualitative content analysis of 255 articles from computer science and engineering were conducted to explore the usage of ethical concepts. From an ethical point of view, the concepts of independence and self-determination on the one hand and the possible loss of privacy on the other are widely discussed in the context of AAL. These concepts are adopted by the technical discourse in the sense that independence, self-determination and privacy are recognized as important values. Nevertheless, our research shows that these concepts have different usages and meanings in the ethical and the technical discourses. In this paper, we aim to map the different meanings of independence, self-determination and privacy as they appear in technological research on AI-based AAL systems, and to investigate the interpretations of these ethical and social concepts that technicians try to build into AAL systems. In a second step, these interpretations are contextualized with concepts from the ethical discourse on AI-based assistive technologies.
Affiliation(s)
- Kris Vera Hartmann, Institute for History and Ethics of Medicine, Faculty of Medicine, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Nadia Primc, Institute for History and Ethics of Medicine, Faculty of Medicine, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Giovanni Rubeis, Department of General Health Studies, Division Biomedical and Public Health Ethics, Karl Landsteiner Private University for Health Sciences, Krems, Austria
4. Ye J, Jiang H, Zhong J. A Graph-Attention-Based Method for Single-Resident Daily Activity Recognition in Smart Homes. Sensors (Basel) 2023; 23:1626. [PMID: 36772666] [PMCID: PMC9921809] [DOI: 10.3390/s23031626]
Abstract
In ambient-assisted living facilitated by smart home systems, the recognition of daily human activities is of great importance. It aims to infer the resident's daily activities from triggered sensor observation sequences with varying time intervals between successive readouts. This paper introduces a novel deep learning framework based on embedding technology and graph attention networks, namely the time-oriented and location-oriented graph attention (TLGAT) network. The embedding technology converts sensor observations into corresponding feature vectors. Afterward, TLGAT represents a sensor observation sequence as a fully connected graph to model both the temporal correlation and the sensor-location correlation among sensor observations, and it refines the feature representation of each sensor observation by attending to and weighting the other observations. Experiments were conducted on two public datasets under diverse setups of sensor event sequence length, and the results show that the proposed method achieved favorable performance across all setups.
Affiliation(s)
- Jiancong Ye, Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou 511442, China
- Hongjie Jiang, Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou 511442, China
- Junpei Zhong, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong
5. Yuan L, Andrews J, Mu H, Vakil A, Ewing R, Blasch E, Li J. Interpretable Passive Multi-Modal Sensor Fusion for Human Identification and Activity Recognition. Sensors (Basel) 2022; 22:5787. [PMID: 35957343] [PMCID: PMC9371208] [DOI: 10.3390/s22155787]
Abstract
Human monitoring applications in indoor environments depend on accurate human identification and activity recognition (HIAR). Single-modality sensor systems have been shown to be accurate for HIAR, but they have shortcomings such as privacy, intrusion, and cost. To overcome these shortcomings for a long-term monitoring solution, an interpretable, passive, multi-modal sensor fusion system, PRF-PIR, is proposed in this work. PRF-PIR is composed of one software-defined radio (SDR) device and one novel passive infrared (PIR) sensor system. A recurrent neural network (RNN) is built as the HIAR model to handle the temporal dependence of the passive information captured by both modalities. We validate the proposed PRF-PIR system as a potential human monitoring system by collecting data on eleven activities from twelve human subjects in an academic office environment. From this data collection, the efficacy of the sensor fusion system is demonstrated with an accuracy of 0.9866 for human identification and 0.9623 for activity recognition. The results are supported with explainable artificial intelligence (XAI) methodologies, validating sensor fusion over the deployment of single-sensor solutions. PRF-PIR provides a passive, non-intrusive, and highly accurate system that is robust to uncertain, highly similar, and complex at-home activities performed by a variety of human subjects.
Affiliation(s)
- Liangqi Yuan, Department of Electrical and Computer Engineering, Oakland University, Rochester, MI 48309, USA
- Jack Andrews, Department of Electrical and Computer Engineering, Oakland University, Rochester, MI 48309, USA
- Huaizheng Mu, Department of Electrical and Computer Engineering, Oakland University, Rochester, MI 48309, USA
- Asad Vakil, Department of Electrical and Computer Engineering, Oakland University, Rochester, MI 48309, USA
- Robert Ewing, Sensors Directorate, Air Force Research Laboratory, WPAFB, Dayton, OH 45433, USA
- Erik Blasch, Information Directorate, Air Force Research Laboratory, Rome, NY 13441, USA
- Jia Li, Department of Electrical and Computer Engineering, Oakland University, Rochester, MI 48309, USA
6. Sarwar MU, Gillani LF, Almadhor A, Shakya M, Tariq U. Improving Recognition of Overlapping Activities with Less Interclass Variations in Smart Homes through Clustering-Based Classification. Comput Intell Neurosci 2022; 2022:8303856. [PMID: 35694589] [PMCID: PMC9184152] [DOI: 10.1155/2022/8303856]
Abstract
Sensing technology combined with machine learning provides robust smart home solutions that benefit health monitoring, elderly care, and independent living. This study addresses the overlapping problem in activities performed by a smart home resident and improves the recognition performance for overlapping activities. The overlapping problem arises from low interclass variation (i.e., similar sensors triggered by more than one activity, and different activities performed in the same location). The proposed approach, overlapping activity recognition using cluster-based classification (OAR-CbC), builds a generic model for this problem: a soft partitioning technique separates homogeneous from non-homogeneous activities at a coarse-grained level, then the activities within each cluster are balanced and a classifier is trained to recognize the activities of each cluster independently at a fine-grained level. We examine four partitioning and classification techniques within the same hierarchy for a fair comparison. OAR-CbC is evaluated on the Aruba and Milan smart home datasets using threefold and leave-one-day-out cross-validation, with precision, recall, F score, accuracy, and confusion matrices as evaluation metrics to ensure the model's reliability. OAR-CbC shows promising results on both datasets, boosting the recognition rate of all overlapping activities beyond state-of-the-art results.
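The coarse-then-fine pipeline described in this abstract can be pictured with a minimal sketch: partition the feature space first, then train and query one classifier per cluster. This is a hypothetical NumPy-only illustration, not the authors' code: hard k-means and a nearest-class-mean classifier stand in for the paper's soft partitioning and trained classifiers, and the synthetic data and names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))        # stand-in for sensor-event features
y = rng.integers(0, 4, size=200)     # stand-in for activity labels

# --- coarse level: naive k-means partition (a hard-assignment stand-in
# for the paper's soft partitioning technique) ---
k = 3
centroids = X[:k].copy()
for _ in range(10):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    for c in range(k):
        if (assign == c).any():
            centroids[c] = X[assign == c].mean(axis=0)

# --- fine level: one nearest-class-mean classifier per cluster,
# trained only on that cluster's samples ---
class_means = {}
for c in range(k):
    Xc, yc = X[assign == c], y[assign == c]
    class_means[c] = {lab: Xc[yc == lab].mean(axis=0) for lab in np.unique(yc)}

def predict(x):
    """Route the sample to its cluster, then classify within that cluster."""
    c = int(np.linalg.norm(centroids - x, axis=1).argmin())
    labs = list(class_means[c])
    dists = [np.linalg.norm(class_means[c][lab] - x) for lab in labs]
    return int(labs[int(np.argmin(dists))])

print(predict(X[0]))
```

The point of the two levels is that the fine-grained classifier only has to separate the easily confused (overlapping) classes that landed in the same cluster, rather than all classes at once.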
Affiliation(s)
- Muhammad Usman Sarwar, Department of Computer Science, National University of Computer and Emerging Sciences, Islamabad, Pakistan
- Labiba Fahad Gillani, Department of Computer Science, National University of Computer and Emerging Sciences, Islamabad, Pakistan
- Ahmad Almadhor, College of Computer and Information Sciences, Al Jouf University, Sakakah, Saudi Arabia
- Manoj Shakya, Department of Computer Science and Engineering, Kathmandu University, Dhulikhel, Nepal
- Usman Tariq, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
7. Online Activity Recognition Combining Dynamic Segmentation and Emergent Modeling. Sensors (Basel) 2022; 22:2250. [PMID: 35336420] [PMCID: PMC8955624] [DOI: 10.3390/s22062250]
Abstract
Activity recognition is fundamental to many applications envisaged in pervasive computing, especially in smart environments where the resident's sensor data are mapped to human activities. Previous research usually focuses on scripted or pre-segmented activity sequences, whereas many real-world deployments require information about ongoing activities in real time. In this paper, we propose an online activity recognition model for streaming sensor data that incorporates a spatio-temporal correlation-based dynamic segmentation method and a stigmergy-based emergent modeling method to recognize activities as new sensor events are recorded. The dynamic segmentation approach, integrating sensor correlation and time correlation, judges whether two consecutive sensor events belong to the same window, keeping events from very different functional areas, or events separated by a long time interval, out of the same window, and thus obtains a segmented window for every single event. The emergent paradigm with marker-based stigmergy is then adopted to build activity features, explicitly represented as a directed weighted network that defines the context for the last sensor event in the window, without requiring sophisticated domain knowledge. We validate the proposed method on the real-world Aruba dataset from the CASAS project, and the results show its effectiveness.
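The window-membership rule this abstract describes (two consecutive events stay in the same window only if their sensors are correlated and the time gap is short) can be sketched as follows. The functional-area map, the 60-second threshold, and the event data are invented for illustration and are not from the paper, which derives sensor correlation from the data rather than a fixed table.

```python
# Map each sensor to a functional area; a shared area stands in for
# "correlated sensors" in this simplified sketch.
AREA = {"M001": "kitchen", "M002": "kitchen", "M010": "bedroom"}
MAX_GAP = 60.0  # seconds; illustrative threshold

def segment(events):
    """events: time-ordered list of (timestamp_seconds, sensor_id)."""
    windows, current = [], []
    for t, s in events:
        if current:
            t_prev, s_prev = current[-1]
            same_area = AREA[s] == AREA[s_prev]
            close_in_time = (t - t_prev) <= MAX_GAP
            # A change of functional area or a long gap closes the window.
            if not (same_area and close_in_time):
                windows.append(current)
                current = []
        current.append((t, s))
    if current:
        windows.append(current)
    return windows

events = [(0, "M001"), (10, "M002"), (30, "M001"), (200, "M010")]
print(len(segment(events)))  # → 2: the area change plus long gap starts a new window
```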
8.
Abstract
In recent years, research on convolutional neural networks (CNN) and recurrent neural networks (RNN) in deep learning has been actively conducted. In order to provide more personalized and advanced functions in smart home services, studies on deep learning applications are becoming more frequent, and deep learning is acknowledged as an efficient method for recognizing the voices and activities of users. In this context, this study aims to systematically review the smart home studies that apply CNN and RNN/LSTM as their main solution. Of the 632 studies retrieved from the Web of Science, Scopus, IEEE Xplore, and PubMed databases, 43 studies were selected and analyzed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology. In this paper, we examine which smart home applications CNN and RNN/LSTM are applied to and compare how they were implemented and evaluated. The selected studies dealt with a total of 15 application areas for smart homes, where activity recognition was covered the most. This study provides essential data for all researchers who want to apply deep learning for smart homes, identifies the main trends, and can help to guide design and evaluation decisions for particular smart home services.
9. Shum LC, Faieghi R, Borsook T, Faruk T, Kassam S, Nabavi H, Spasojevic S, Tung J, Khan SS, Iaboni A. Indoor Location Data for Tracking Human Behaviours: A Scoping Review. Sensors (Basel) 2022; 22:1220. [PMID: 35161964] [PMCID: PMC8839091] [DOI: 10.3390/s22031220]
Abstract
Real-time location systems (RTLS) record locations of individuals over time and are valuable sources of spatiotemporal data that can be used to understand patterns of human behaviour. Location data are used in a wide breadth of applications, from locating individuals to contact tracing or monitoring health markers. To support the use of RTLS in many applications, the varied ways location data can describe patterns of human behaviour should be examined. The objective of this review is to investigate behaviours described using indoor location data, and particularly the types of features extracted from RTLS data to describe behaviours. Four major applications were identified: health status monitoring, consumer behaviours, developmental behaviour, and workplace safety/efficiency. RTLS data features used to analyse behaviours were categorized into four groups: dwell time, activity level, trajectory, and proximity. Passive sensors that provide non-uniform data streams and features with lower complexity were common. Few studies analysed social behaviours between more than one individual at once. Less than half the health status monitoring studies examined clinical validity against gold-standard measures. Overall, spatiotemporal data from RTLS technologies are useful to identify behaviour patterns, provided there is sufficient richness in location data, the behaviour of interest is well-characterized, and a detailed feature analysis is undertaken.
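Of the four RTLS feature groups the review identifies (dwell time, activity level, trajectory, proximity), dwell time is the simplest to derive from a location stream. A minimal sketch, assuming a time-ordered list of (timestamp, zone) fixes; the zone names and data are invented:

```python
from collections import defaultdict

def dwell_times(track):
    """track: time-ordered list of (timestamp_seconds, zone) location fixes."""
    totals = defaultdict(float)
    for (t0, zone), (t1, _) in zip(track, track[1:]):
        totals[zone] += t1 - t0  # credit each interval to the zone it started in
    return dict(totals)

track = [(0, "bedroom"), (120, "kitchen"), (300, "bedroom"), (360, "bedroom")]
print(dwell_times(track))  # → {'bedroom': 180.0, 'kitchen': 180.0}
```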
Affiliation(s)
- Leia C. Shum, KITE—Toronto Rehabilitation Institute, University Health Network, Toronto, ON M5G 2A2, Canada
- Reza Faieghi, KITE—Toronto Rehabilitation Institute, University Health Network, Toronto, ON M5G 2A2, Canada; Department of Aerospace Engineering, Ryerson University, Toronto, ON M5B 2K3, Canada
- Terry Borsook, KITE—Toronto Rehabilitation Institute, University Health Network, Toronto, ON M5G 2A2, Canada
- Tamim Faruk, KITE—Toronto Rehabilitation Institute, University Health Network, Toronto, ON M5G 2A2, Canada
- Souraiya Kassam, KITE—Toronto Rehabilitation Institute, University Health Network, Toronto, ON M5G 2A2, Canada
- Hoda Nabavi, KITE—Toronto Rehabilitation Institute, University Health Network, Toronto, ON M5G 2A2, Canada
- Sofija Spasojevic, KITE—Toronto Rehabilitation Institute, University Health Network, Toronto, ON M5G 2A2, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
- James Tung, Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Shehroz S. Khan, KITE—Toronto Rehabilitation Institute, University Health Network, Toronto, ON M5G 2A2, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
- Andrea Iaboni, KITE—Toronto Rehabilitation Institute, University Health Network, Toronto, ON M5G 2A2, Canada; Department of Psychiatry, University of Toronto, Toronto, ON M5T 1R8, Canada
10. Chaudhary A, Gupta HP, Shukla KK. Real-Time Activities of Daily Living Recognition Under Long-Tailed Class Distribution. IEEE Trans Emerg Top Comput Intell 2022. [DOI: 10.1109/tetci.2022.3150757]
11. Liu L, He J, Ren K, Lungu J, Hou Y, Dong R. An Information Gain-Based Model and an Attention-Based RNN for Wearable Human Activity Recognition. Entropy 2021; 23:1635. [PMID: 34945941] [PMCID: PMC8700115] [DOI: 10.3390/e23121635]
Abstract
Wearable sensor-based human activity recognition (HAR) is a popular method of perceiving human activity. However, due to the lack of a unified human activity model, the number and positions of sensors differ across existing wearable HAR systems, which hinders promotion and application. In this paper, an information gain-based human activity model is established, and an attention-based recurrent neural network (Attention-RNN) for human activity recognition is designed. The Attention-RNN, which combines a bidirectional long short-term memory (BiLSTM) network with an attention mechanism, was tested on the UCI OPPORTUNITY challenge dataset. Experiments show that the proposed human activity model provides guidance for sensor placement and a basis for choosing the number of sensors, reducing the number of sensors needed to achieve the same classification performance. In addition, the proposed Attention-RNN achieves F1 scores of 0.898 and 0.911 on the Modes of Locomotion (ML) and Gesture Recognition (GR) tasks, respectively.
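The attention step on top of a BiLSTM, as used by models like the one in this entry, can be pictured with a small NumPy sketch: score each timestep's hidden state, softmax the scores across timesteps, and pool the sequence into one context vector for classification. The shapes, the scoring vector, and the random data are illustrative assumptions, not the paper's architecture; in practice the scoring vector is trained end to end.

```python
import numpy as np

rng = np.random.default_rng(1)
T, H = 20, 32                     # timesteps, hidden size (BiLSTM concat dim)
hidden = rng.normal(size=(T, H))  # stand-in for per-timestep BiLSTM outputs
w = rng.normal(size=(H,))         # attention scoring vector (trainable in practice)

scores = hidden @ w               # one scalar relevance score per timestep
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()              # softmax over timesteps: attention weights
context = alpha @ hidden          # weighted sum of hidden states -> (H,) vector

print(alpha.shape, context.shape)  # → (20,) (32,)
```

The context vector, rather than only the last hidden state, is what the downstream classifier sees, letting the model emphasize the timesteps most indicative of the activity.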
Affiliation(s)
- Leyuan Liu, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Jian He, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Engineering Research Center for IOT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Keyan Ren, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Engineering Research Center for IOT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Jonathan Lungu, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Yibin Hou, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; Beijing Engineering Research Center for IOT Software and Systems, Beijing University of Technology, Beijing 100124, China
- Ruihai Dong, School of Computer Science, University College Dublin, D04 V1W8 Dublin 4, Ireland
12. Bouchabou D, Nguyen SM, Lohr C, LeDuc B, Kanellos I. A Survey of Human Activity Recognition in Smart Homes Based on IoT Sensors Algorithms: Taxonomies, Challenges, and Opportunities with Deep Learning. Sensors (Basel) 2021; 21:6037. [PMID: 34577243] [PMCID: PMC8469092] [DOI: 10.3390/s21186037]
Abstract
Recent advances in Internet of Things (IoT) technologies and the falling cost of sensors have encouraged the development of smart environments, such as smart homes. Smart homes can offer home assistance services to improve the quality of life, autonomy, and health of their residents, especially the elderly and dependent. To provide such services, a smart home must be able to understand the daily activities of its residents. Techniques for recognizing human activity in smart homes are advancing rapidly, but new challenges keep emerging. In this paper, we present recent algorithms, works, challenges, and a taxonomy of the field of human activity recognition in smart homes through ambient sensors. Since activity recognition in smart homes is a young field, we also raise specific open problems, as well as missing and needed contributions, and propose directions, research opportunities, and solutions to accelerate advances in this field.
Affiliation(s)
- Damien Bouchabou, IMT Atlantique Engineer School, 29238 Brest, France; Delta Dore Company, 35270 Bonnemain, France
- Sao Mai Nguyen, IMT Atlantique Engineer School, 29238 Brest, France
- Christophe Lohr, IMT Atlantique Engineer School, 29238 Brest, France
- Ioannis Kanellos, IMT Atlantique Engineer School, 29238 Brest, France
13. Tan TH, Badarch L, Zeng WX, Gochoo M, Alnajjar FS, Hsieh JW. Binary Sensors-Based Privacy-Preserved Activity Recognition of Elderly Living Alone Using an RNN. Sensors (Basel) 2021; 21:5371. [PMID: 34450809] [PMCID: PMC8398125] [DOI: 10.3390/s21165371]
Abstract
The recent growth of the elderly population has led to a requirement for constant home monitoring as solitary living becomes popular. This protects older people who live alone from adverse events such as falls or disease-related deterioration. However, although wearable devices and camera-based systems can provide relatively precise information about human motion, they invade the privacy of the elderly. One way to detect abnormal behavior of elderly residents while maintaining privacy is to equip the residence with an Internet of Things system based on a non-invasive binary motion sensor array. We propose to concatenate external features (previous activity and beginning time-stamp) with features extracted by a bi-directional long short-term memory (Bi-LSTM) neural network to recognize activities of daily living with higher accuracy. The concatenated features are classified by a fully connected neural network (FCNN). The proposed model was evaluated on an open dataset from the Center for Advanced Studies in Adaptive Systems (CASAS) at Washington State University. The experimental results show that the proposed method outperformed state-of-the-art models by a margin of more than 6.25% in F1 score on the same dataset.
Affiliation(s)
- Tan-Hsu Tan, Department of Electrical Engineering, National Taipei University of Technology, Taipei 10617, Taiwan
- Luubaatar Badarch, Department of Electronics, School of Information and Communication Technology, Mongolian University of Science and Technology, Ulaanbaatar 13341, Mongolia
- Munkhjargal Gochoo, Department of Electrical Engineering, National Taipei University of Technology, Taipei 10617, Taiwan; Department of Computer Science and Software Engineering, College of Information Technology, United Arab Emirates University, Al-Ain P.O. Box 15551, United Arab Emirates
- Fady S. Alnajjar, Department of Computer Science and Software Engineering, College of Information Technology, United Arab Emirates University, Al-Ain P.O. Box 15551, United Arab Emirates
- Jun-Wei Hsieh, College of AI, National Chiao Tung University, Hsinchu 30010, Taiwan
14. Wang T, Cook DJ. sMRT: Multi-Resident Tracking in Smart Homes With Sensor Vectorization. IEEE Trans Pattern Anal Mach Intell 2021; 43:2809-2821. [PMID: 32070942] [PMCID: PMC7423766] [DOI: 10.1109/tpami.2020.2973571]
Abstract
Smart homes equipped with anonymous binary sensors offer a low-cost, unobtrusive solution that powers activity-aware applications, such as building automation, health monitoring, behavioral intervention, and home security. However, when multiple residents are living in a smart home, associating sensor events with the corresponding residents can pose a major challenge. Previous approaches to multi-resident tracking in smart homes rely on extra information, such as sensor layouts, floor plans, and annotated data, which may not be available or inconvenient to obtain in practice. To address those challenges in real-life deployment, we introduce the sMRT algorithm that simultaneously tracks the location of each resident and estimates the number of residents in the smart home, without relying on ground-truth annotated sensor data or other additional information. We evaluate the performance of our approach using two smart home datasets recorded in real-life settings and compare sMRT with two other methods that rely on sensor layout and ground truth-labeled sensor data.
15. Classical Machine Learning Versus Deep Learning for the Older Adults Free-Living Activity Classification. Sensors (Basel) 2021; 21:4669. [PMID: 34300409] [PMCID: PMC8309623] [DOI: 10.3390/s21144669]
Abstract
Physical activity has a strong influence on mental and physical health and is essential in healthy ageing and wellbeing for the ever-growing elderly population. Wearable sensors can provide a reliable and economical measure of activities of daily living (ADLs) by capturing movements through, e.g., accelerometers and gyroscopes. This study explores the potential of using classical machine learning and deep learning approaches to classify the most common ADLs: walking, sitting, standing, and lying. We validate the results on the ADAPT dataset, the most detailed dataset to date of inertial sensor data, synchronised with high frame-rate video labelled data recorded in a free-living environment from older adults living independently. The findings suggest that both approaches can accurately classify ADLs, showing high potential in profiling ADL patterns of the elderly population in free-living conditions. In particular, both long short-term memory (LSTM) networks and Support Vector Machines combined with ReliefF feature selection performed equally well, achieving around 97% F-score in profiling ADLs.
Collapse
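The ReliefF-plus-SVM pipeline mentioned above can be illustrated with a minimal sketch. Assumptions: this is a simplified binary-class ReliefF variant written from the general idea of the algorithm, not the paper's implementation, and the toy data stand in for windowed accelerometer features:

```python
import numpy as np

def relieff_weights(X, y, n_iter=None):
    """Simplified ReliefF-style feature weighting (binary-class sketch).
    For each sampled instance, reward features that differ from the nearest
    miss (other class) and penalise features that differ from the nearest hit."""
    rng = np.random.default_rng(0)
    n, d = X.shape
    idx = rng.permutation(n)[: n_iter or n]
    w = np.zeros(d)
    for i in idx:
        dist = np.abs(X - X[i]).sum(axis=1)  # L1 distance to all instances
        dist[i] = np.inf                     # exclude the instance itself
        same = y == y[i]
        hit = np.argmin(np.where(same, dist, np.inf))   # nearest same-class
        miss = np.argmin(np.where(~same, dist, np.inf)) # nearest other-class
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / len(idx)

# Toy data: feature 0 separates the classes, feature 1 is noise.
X = np.array([[0.0, 0.5], [0.1, 0.9], [1.0, 0.4], [0.9, 0.8]])
y = np.array([0, 0, 1, 1])
w = relieff_weights(X, y)
print(w.argmax())  # feature 0 should score highest
```

The top-weighted features would then feed a standard SVM classifier, mirroring the ReliefF-plus-SVM combination the abstract reports.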
|
16
|
Gochoo M, Alnajjar F, Tan TH, Khalid S. Towards Privacy-Preserved Aging in Place: A Systematic Review. SENSORS (BASEL, SWITZERLAND) 2021; 21:3082. [PMID: 33925161 PMCID: PMC8124768 DOI: 10.3390/s21093082] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Revised: 04/15/2021] [Accepted: 04/20/2021] [Indexed: 11/16/2022]
Abstract
Owing to progressive population aging, elderly people (aged 65 and above) face challenges in carrying out activities of daily living, while placement of the elderly in a care facility is expensive and mentally taxing for them. Thus, there is a need to turn their own homes into smart homes using new technologies. However, this raises concerns about privacy and data security for users, since such systems can be accessed remotely. Hence, with advancing technologies, it is important to overcome this challenge using privacy-preserving and non-intrusive models. For this review, 235 articles were scanned from databases, out of which 31 articles pertaining to in-home technologies that assist the elderly in living independently were shortlisted for inclusion. They described the adoption of various methodologies, such as different sensor-based mechanisms, wearables, camera-based techniques, robots, and machine learning strategies, to provide a safe and comfortable environment for the elderly. Recent innovations have rendered these technologies more unobtrusive and privacy-preserving, with increasing use of environmental sensors and less use of cameras and other devices that may compromise the privacy of individuals. There is a need to develop a comprehensive system for smart homes which ensures patient safety, privacy, and data security; in addition, robots should be integrated with existing sensor-based platforms to assist in carrying out daily activities and therapies as required.
Collapse
Affiliation(s)
- Munkhjargal Gochoo
- Department of Computer Science & Software Engineering, College of Information Technology, United Arab Emirates University, Al Ain 15551, United Arab Emirates; (F.A.); (S.K.)
- Department of Electrical Engineering, National Taipei University of Technology, Taipei 106, Taiwan;
| | - Fady Alnajjar
- Department of Computer Science & Software Engineering, College of Information Technology, United Arab Emirates University, Al Ain 15551, United Arab Emirates; (F.A.); (S.K.)
- Intelligent Behavior Control Unit, RIKEN Center for Brain Science (CBS), Wako 463-0003, Japan
| | - Tan-Hsu Tan
- Department of Electrical Engineering, National Taipei University of Technology, Taipei 106, Taiwan;
| | - Sumayya Khalid
- Department of Computer Science & Software Engineering, College of Information Technology, United Arab Emirates University, Al Ain 15551, United Arab Emirates; (F.A.); (S.K.)
| |
Collapse
|
17
|
Zhang Y, D’Haeseleer I, Coelho J, Vanden Abeele V, Vanrumste B. Recognition of Bathroom Activities in Older Adults Using Wearable Sensors: A Systematic Review and Recommendations. SENSORS 2021; 21:s21062176. [PMID: 33804626 PMCID: PMC8003704 DOI: 10.3390/s21062176] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Revised: 03/10/2021] [Accepted: 03/17/2021] [Indexed: 11/16/2022]
Abstract
This article provides a systematic review of studies on recognising bathroom activities in older adults using wearable sensors. Bathroom activities are an important part of Activities of Daily Living (ADL). Performance on ADLs is used to predict the ability of older adults to live independently. This paper aims to provide an overview of the studied bathroom activities, the wearable sensors used, the different methodologies applied and the activity recognition techniques tested. Six databases were screened up to March 2020, based on four categories of keywords: older adults, activity recognition, bathroom activities and wearable sensors. In total, 4262 unique papers were found, of which only seven met the inclusion criteria. This small number shows that few studies have been conducted in this field. Therefore, in addition, this critical review resulted in several recommendations for future studies. In particular, we recommend that future studies (1) examine complex bathroom activities involving multiple movements; (2) recruit participants from the target population; (3) conduct both lab and real-life experiments; (4) investigate the optimal number and positions of wearable sensors; (5) choose a suitable annotation method; (6) investigate deep learning models; (7) evaluate the generality of classifiers; and (8) investigate both the detection of an activity and the quality of its performance.
Collapse
Affiliation(s)
- Yiyuan Zhang
- KU Leuven, e-Media Research Lab, 3000 Leuven, Belgium; (I.D.); (V.V.A.); (B.V.)
- KU Leuven, Stadius, Department of Electrical Engineering, 3001 Leuven, Belgium
- Correspondence:
| | - Ine D’Haeseleer
- KU Leuven, e-Media Research Lab, 3000 Leuven, Belgium; (I.D.); (V.V.A.); (B.V.)
- KU Leuven, HCI, Department of Computer Science, 3001 Leuven, Belgium
| | - José Coelho
- LaSIGE, Departamento de Informática, Faculdade de Ciências, Universidade de Lisboa, Campo Grande, 1749-016 Lisboa, Portugal;
| | - Vero Vanden Abeele
- KU Leuven, e-Media Research Lab, 3000 Leuven, Belgium; (I.D.); (V.V.A.); (B.V.)
- KU Leuven, HCI, Department of Computer Science, 3001 Leuven, Belgium
| | - Bart Vanrumste
- KU Leuven, e-Media Research Lab, 3000 Leuven, Belgium; (I.D.); (V.V.A.); (B.V.)
- KU Leuven, Stadius, Department of Electrical Engineering, 3001 Leuven, Belgium
| |
Collapse
|
18
|
Stochastic Remote Sensing Event Classification over Adaptive Posture Estimation via Multifused Data and Deep Belief Network. REMOTE SENSING 2021. [DOI: 10.3390/rs13050912] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
Advances in video capture devices enable adaptive posture estimation (APE) and event classification over multiple human-based videos for smart systems. Accurate event classification and adaptive posture estimation remain challenging problems despite considerable research effort. In this research article, we propose a novel method to classify stochastic remote sensing events and to perform adaptive posture estimation. We performed human silhouette extraction using a Gaussian Mixture Model (GMM) and a saliency map. After that, we performed human body part detection and used a unified pseudo-2D stick model for adaptive posture estimation. Multifused data, including energy, 3D Cartesian view, angular geometric, skeleton zigzag and movable body part features, were applied. Using a charged system search, we optimized our feature vector and deep belief network. We classified complex events in the Sports Videos in the Wild (SVW), Olympic Sports, UCF aerial action and UT-Interaction datasets. The mean accuracy of human body part detection was 83.57% on UT-Interaction, 83.00% on Olympic Sports and 83.78% on SVW. The mean event classification accuracy was 91.67% on UT-Interaction, 92.50% on Olympic Sports and 89.47% on SVW. These results are superior to those of existing state-of-the-art methods.
Collapse
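The GMM-based silhouette extraction step can be sketched loosely. Assumptions: the paper applies a GMM with a saliency map to real video; the code below is a generic per-pixel GMM background model using scikit-learn's `GaussianMixture`, with synthetic frames standing in for video and the `thresh` log-likelihood cutoff chosen arbitrarily:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def silhouette_mask(frames, test_frame, k=2, thresh=-10.0):
    """Simplified GMM background subtraction: fit a GMM to pixel
    intensities seen in background-only frames, then mark test-frame
    pixels with low log-likelihood as foreground (silhouette)."""
    gmm = GaussianMixture(n_components=k, random_state=0)
    gmm.fit(frames.reshape(-1, 1))                    # one intensity per row
    scores = gmm.score_samples(test_frame.reshape(-1, 1))
    return (scores < thresh).reshape(test_frame.shape)

# Toy 4x4 frames: background ~0.1, a bright "person" patch ~0.9 appears.
rng = np.random.default_rng(1)
bg = rng.normal(0.1, 0.01, size=(10, 4, 4))
frame = rng.normal(0.1, 0.01, size=(4, 4))
frame[1:3, 1:3] = 0.9
mask = silhouette_mask(bg, frame)
print(int(mask.sum()))  # number of foreground pixels
```

A production system would model each pixel's GMM separately and update it online; this single shared model only conveys the likelihood-thresholding idea.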
|
19
|
Ranieri CM, MacLeod S, Dragone M, Vargas PA, Romero RF. Activity Recognition for Ambient Assisted Living with Videos, Inertial Units and Ambient Sensors. SENSORS (BASEL, SWITZERLAND) 2021; 21:768. [PMID: 33498829 PMCID: PMC7865705 DOI: 10.3390/s21030768] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/15/2020] [Revised: 01/08/2021] [Accepted: 01/21/2021] [Indexed: 11/16/2022]
Abstract
Worldwide demographic projections point to a progressively older population. This fact has fostered research on Ambient Assisted Living, which includes developments on smart homes and social robots. To endow such environments with truly autonomous behaviours, algorithms must extract semantically meaningful information from whichever sensor data is available. Human activity recognition is one of the most active fields of research within this context. Proposed approaches vary according to the input modality and the environments considered. Different from others, this paper addresses the problem of recognising heterogeneous activities of daily living centred in home environments considering simultaneously data from videos, wearable IMUs and ambient sensors. For this, two contributions are presented. The first is the creation of the Heriot-Watt University/University of Sao Paulo (HWU-USP) activities dataset, which was recorded at the Robotic Assisted Living Testbed at Heriot-Watt University. This dataset differs from other multimodal datasets in that it consists of daily living activities with either periodical patterns or long-term dependencies, captured in a very rich and heterogeneous sensing environment. In particular, this dataset combines data from a humanoid robot's RGBD (RGB + depth) camera with inertial sensors from wearable devices and ambient sensors from a smart home. The second contribution is the proposal of a Deep Learning (DL) framework, which provides multimodal activity recognition based on videos, inertial sensors and ambient sensors from the smart home, on their own or fused with each other. The classification DL framework was also validated on our dataset and on the University of Texas at Dallas Multimodal Human Activities Dataset (UTD-MHAD), a widely used benchmark for activity recognition based on videos and inertial sensors, providing a comparative analysis of the results on the two datasets considered. Results demonstrate that the introduction of data from ambient sensors markedly improved the accuracy.
Collapse
Affiliation(s)
- Caetano Mazzoni Ranieri
- Institute of Mathematical and Computer Sciences, University of Sao Paulo, Sao Carlos, SP 13566-590, Brazil;
| | - Scott MacLeod
- Edinburgh Centre for Robotics, Heriot-Watt University, Edinburgh, EH14 4AS, UK; (S.M.); (M.D.); (P.A.V.)
| | - Mauro Dragone
- Edinburgh Centre for Robotics, Heriot-Watt University, Edinburgh, EH14 4AS, UK; (S.M.); (M.D.); (P.A.V.)
| | - Patricia Amancio Vargas
- Edinburgh Centre for Robotics, Heriot-Watt University, Edinburgh, EH14 4AS, UK; (S.M.); (M.D.); (P.A.V.)
| | | |
Collapse
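The modality-fusion idea (videos, IMUs and ambient sensors, on their own or fused) can be illustrated with a generic score-level fusion sketch; this is not the paper's DL architecture, and the per-modality class posteriors below are hypothetical:

```python
import numpy as np

def late_fusion(prob_per_modality, weights=None):
    """Score-level fusion: combine each modality's class-probability
    vector by a (optionally weighted) average, then take the argmax."""
    probs = np.stack(prob_per_modality)                  # (M, n_classes)
    w = np.ones(len(probs)) if weights is None else np.asarray(weights, float)
    fused = (w[:, None] * probs).sum(axis=0) / w.sum()
    return fused, int(fused.argmax())

# Hypothetical per-modality posteriors over 3 activity classes.
video   = np.array([0.5, 0.3, 0.2])
imu     = np.array([0.2, 0.6, 0.2])
ambient = np.array([0.1, 0.7, 0.2])
fused, label = late_fusion([video, imu, ambient])
print(label)  # class 1 wins once ambient evidence is added
```

With only the video posterior, class 0 would be chosen; the toy run shows how adding ambient evidence can flip the decision, which matches the paper's finding that ambient sensor data improves accuracy.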
|
20
|
Abstract
The smart home has begun playing an important role in supporting independent living by monitoring the activities of daily living, typically for the elderly who live alone. Activity recognition in smart homes has been studied by many researchers with much effort spent on modeling user activities to predict behaviors. Most people, when performing their daily activities, interact with multiple objects both in space and through time. The interactions between user and objects in the home can provide rich contextual information in interpreting human activity. This paper shows the importance of spatial and temporal information for reasoning in smart homes and demonstrates how such information is represented for activity recognition. Evaluation was conducted on three publicly available smart-home datasets. Our method achieved an average recognition accuracy of more than 81% when predicting user activities given the spatial and temporal information.
Collapse
|
21
|
Diyan M, Khan M, Nathali Silva B, Han K. Scheduling Sensor Duty Cycling Based on Event Detection Using Bi-Directional Long Short-Term Memory and Reinforcement Learning. SENSORS (BASEL, SWITZERLAND) 2020; 20:E5498. [PMID: 32992795 PMCID: PMC7583935 DOI: 10.3390/s20195498] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Revised: 09/22/2020] [Accepted: 09/22/2020] [Indexed: 12/04/2022]
Abstract
A smart home provides a facilitated environment for detecting human activity, with appropriate deep learning algorithms manipulating data collected from numerous sensors attached to various smart things in the home. Human activities comprise expected and unexpected behavior events; therefore, detecting events that consist of mutually dependent activities poses a key challenge in the activity detection paradigm. Moreover, the battery-powered sensors that ubiquitously and continuously monitor activities are subject to energy depletion. To address these challenges, we propose an Energy and Event Aware Sensor Duty Cycling scheme. The proposed model predicts the next expected event using a Bi-Directional Long Short-Term Memory model and allocates Predictive Sensors to the predicted event. To detect unexpected events, the proposed model localizes a Monitor Sensor within a cluster of Hibernate Sensors using the Jaccard Similarity Index. Finally, we optimize the performance of our proposed scheme by employing the Q-Learning algorithm to track missed or undetected events. The simulation is run against conventional machine learning algorithms for sensor duty-cycle scheduling to reduce sensor energy consumption and improve activity detection accuracy. The experimental evaluation of our proposed scheme shows a significant improvement in activity detection accuracy, from 94.12% to 96.12%. Moreover, the effective rotation of the Monitor Sensor significantly reduces the energy consumption of each sensor and extends the network lifetime.
Collapse
Affiliation(s)
- Muhammad Diyan
- School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Korea; (M.D.); (M.K.)
| | - Murad Khan
- School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Korea; (M.D.); (M.K.)
| | - Bhagya Nathali Silva
- Department of Computer Engineering, Faculty of Engineering, University of Sri Jayewardenepura, Nugegoda 10250, Sri Lanka;
| | - Kijun Han
- School of Computer Science and Engineering, Kyungpook National University, Daegu 41566, Korea; (M.D.); (M.K.)
| |
Collapse
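The use of the Jaccard Similarity Index to localize a Monitor Sensor within a cluster of Hibernate Sensors might look like the following sketch; the `coverage` event sets and the selection criterion (highest mean Jaccard similarity to the rest of the cluster) are assumptions for illustration, not the paper's exact scheme:

```python
def jaccard(a, b):
    """Jaccard similarity of two event sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def pick_monitor(coverage):
    """Pick as Monitor the sensor whose event coverage overlaps most
    (highest mean Jaccard index) with the rest of the cluster, so a
    single awake sensor best represents the hibernating ones."""
    best, best_score = None, -1.0
    for s, events in coverage.items():
        score = sum(jaccard(events, o) for t, o in coverage.items() if t != s)
        score /= max(len(coverage) - 1, 1)
        if score > best_score:
            best, best_score = s, score
    return best

coverage = {
    "s1": {"door", "kitchen"},
    "s2": {"door", "kitchen", "hall"},  # overlaps both neighbours most
    "s3": {"hall", "bedroom"},
}
print(pick_monitor(coverage))  # s2
```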
|
22
|
Smart Environments and Social Robots for Age-Friendly Integrated Care Services. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2020; 17:ijerph17113801. [PMID: 32471108 PMCID: PMC7312538 DOI: 10.3390/ijerph17113801] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/14/2020] [Revised: 05/25/2020] [Accepted: 05/26/2020] [Indexed: 12/13/2022]
Abstract
The world is facing major societal challenges because of an aging population that is putting increasing pressure on the sustainability of care. While demand for care and social services is steadily increasing, the supply is constrained by a decreasing workforce. The development of smart, physical, social and age-friendly environments is identified by the World Health Organization (WHO) as a key intervention point, enabling older adults to remain in their residences as long as possible, delaying institutionalization, and ultimately improving quality of life. In this study, we survey smart environments, machine learning and assistive robot technologies that can support the independent living of older adults and provide age-friendly care services. We describe two examples of integrated care services that use assistive technologies in innovative ways to assess needs and deliver timely interventions for polypharmacy management and for social and cognitive activity support in older adults. We describe the architectural views of these services, focusing on details of technology usage, end-user interaction flows and the data models developed or enhanced to achieve the envisioned objective of healthier, safer, more independent and socially connected older people.
Collapse
|
23
|
Toward Flexible and Efficient Home Context Sensing: Capability Evaluation and Verification of Image-Based Cognitive APIs. SENSORS 2020; 20:s20051442. [PMID: 32155806 PMCID: PMC7085595 DOI: 10.3390/s20051442] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/26/2019] [Revised: 03/01/2020] [Accepted: 03/02/2020] [Indexed: 01/12/2023]
Abstract
A Cognitive Application Program Interface (API) is an API to emerging artificial intelligence (AI)-based cloud services, which extracts various contextual information from non-numerical multimedia data, including images and audio. Our interest is in applying image-based cognitive APIs to implement flexible and efficient context sensing services in a smart home. In our existing machine learning approach, as the complexity of the recognized objects and the number of user-defined contexts increase, a moderate amount of training data must still be labeled manually, and multiple cognitive APIs must be called repeatedly for feature extraction. In this paper, we propose a novel method that uses a small amount of labeled data to evaluate the capability of cognitive APIs in advance, before training on the APIs' features with machine learning, for flexible and efficient home context sensing. In the proposed method, we exploit document similarity measures and integrate the concepts of internal cohesion and external isolation into clustering results to assess how capable each cognitive API is of recognizing each context. By selecting the cognitive APIs that best fit the defined contexts and data based on the evaluation results, we achieve flexible integration and efficient processing of cognitive APIs for home context sensing.
Collapse
|
24
|
Recognition of Daily Activities of Two Residents in a Smart Home Based on Time Clustering. SENSORS 2020; 20:s20051457. [PMID: 32155888 PMCID: PMC7085800 DOI: 10.3390/s20051457] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/05/2020] [Revised: 01/30/2020] [Accepted: 02/27/2020] [Indexed: 11/22/2022]
Abstract
With the aging of the population, the recognition of elderly residents' activities in smart homes has received increasing attention. In recent years, single-resident activity recognition based on smart homes has made great progress. However, few researchers have focused on multi-resident activity recognition. In this paper, we propose a method to recognize two-resident activities based on time clustering. First, we use a de-noising method to extract features from the dataset. Second, we cluster the dataset based on activity begin and end times. Finally, we complete activity recognition using a similarity matching method. To test the performance of the method, we used two two-resident datasets provided by the Center for Advanced Studies in Adaptive Systems (CASAS). We evaluated our method by comparing it with some common classifiers. The results show that our method improves accuracy, recall, precision, and F-measure. At the end of the paper, we explain the parameter selection and summarize our method.
Collapse
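The clustering and matching steps (group instances by begin and end times, then match new instances by similarity) can be sketched as follows; the `gap` threshold, the incremental cluster-centre update, and the majority-label matching are assumed details, since the abstract does not specify them:

```python
from statistics import mean

def cluster_by_time(instances, gap=30):
    """Group labelled activity instances whose (begin, end) minutes-of-day
    both lie within `gap` minutes of an existing cluster centre."""
    clusters = []  # each: {"begin": centre, "end": centre, "members": [...]}
    for begin, end, label in sorted(instances):
        for c in clusters:
            if abs(begin - c["begin"]) <= gap and abs(end - c["end"]) <= gap:
                c["members"].append((begin, end, label))
                c["begin"] = mean(b for b, _, _ in c["members"])
                c["end"] = mean(e for _, e, _ in c["members"])
                break
        else:
            clusters.append({"begin": begin, "end": end,
                             "members": [(begin, end, label)]})
    return clusters

def match(clusters, begin, end):
    """Similarity matching: majority label of the nearest cluster in time."""
    c = min(clusters,
            key=lambda c: abs(begin - c["begin"]) + abs(end - c["end"]))
    labels = [l for _, _, l in c["members"]]
    return max(set(labels), key=labels.count)

data = [(420, 440, "breakfast"), (430, 455, "breakfast"),
        (720, 760, "lunch"), (735, 770, "lunch")]
clusters = cluster_by_time(data)
print(match(clusters, 425, 450))  # breakfast
```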
|
25
|
Gautam A, Panwar M, Biswas D, Acharyya A. MyoNet: A Transfer-Learning-Based LRCN for Lower Limb Movement Recognition and Knee Joint Angle Prediction for Remote Monitoring of Rehabilitation Progress From sEMG. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2020; 8:2100310. [PMID: 32190428 PMCID: PMC7062147 DOI: 10.1109/jtehm.2020.2972523] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/26/2019] [Revised: 12/29/2019] [Accepted: 01/09/2020] [Indexed: 12/02/2022]
Abstract
Clinical assessment technologies, such as remote monitoring of rehabilitation progress for lower limb ailments, rely on the automatic evaluation of the movement performed along with an estimation of joint angle information. In this paper, we introduce a transfer-learning-based Long-term Recurrent Convolutional Network (LRCN) named 'MyoNet' for the classification of lower limb movements, along with the prediction of the corresponding knee joint angle. The model consists of three blocks: (i) a feature extractor block, (ii) a joint angle prediction block, and (iii) a movement classification block. Initially, the model is trained end-to-end for knee joint angle prediction; the knowledge of the trained model is then transferred to movement classification through a transfer-learning approach, making for a memory- and computation-efficient design. The proposed MyoNet was evaluated on the publicly available University of California (UC) Irvine machine learning repository dataset of the lower limb, covering 11 healthy subjects and 11 subjects with knee pathology and three movement types: walking, standing with knee flexion, and sitting with knee extension. The average mean absolute errors (MAE) in joint angle prediction for healthy subjects and subjects with knee pathology were 8.1% and 9.2%, respectively. Subsequently, average classification accuracies of 98.1% and 92.4% were achieved for healthy subjects and subjects with knee pathology, respectively. The significance of this study is promising in itself, with a substantial improvement in performance compared to state-of-the-art methodologies. Such surface electromyography (sEMG)-based movement recognition and joint angle prediction systems could be clinically beneficial for remote monitoring of rehabilitation progress by physiotherapists using wearables.
Collapse
Affiliation(s)
- Arvind Gautam
- Department of Electrical Engineering, Indian Institute of Technology Hyderabad, Hyderabad 502205, India
| | - Madhuri Panwar
- Department of Electrical Engineering, Indian Institute of Technology Hyderabad, Hyderabad 502205, India
| | | | - Amit Acharyya
- Department of Electrical Engineering, Indian Institute of Technology Hyderabad, Hyderabad 502205, India
| |
Collapse
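The freeze-then-retrain transfer pattern behind MyoNet (train the feature extractor for angle regression, then reuse it for movement classification) can be illustrated with a toy NumPy linear model; the synthetic data, the `tanh` feature extractor, and the least-squares heads are all stand-ins for the paper's LRCN, not its actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for windowed sEMG features; task A = knee-angle regression,
# task B = movement classification (hypothetical data, not the UCI dataset).
X = rng.normal(size=(200, 8))
angle = X @ rng.normal(size=8)          # synthetic knee-joint angle
label = (angle > 0).astype(int)         # synthetic movement class

# 1) "Pretrain": shared feature extractor plus regression head on task A.
W_feat = 0.3 * rng.normal(size=(8, 8))  # stand-in for the LRCN feature block
F = np.tanh(X @ W_feat)                 # extracted (frozen) features
w_reg, *_ = np.linalg.lstsq(F, angle, rcond=None)

# 2) Transfer: keep W_feat frozen, fit only a new classification head on F.
w_clf, *_ = np.linalg.lstsq(F, 2.0 * label - 1.0, rcond=None)
pred = (F @ w_clf > 0).astype(int)
acc = (pred == label).mean()
print(round(acc, 2))
```

Only the small head is fit in step 2, which is what makes the transfer approach memory- and computation-efficient relative to retraining the whole network.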
|
26
|
Jiao B. Anti-Motion Interference Wearable Device for Monitoring Blood Oxygen Saturation Based on Sliding Window Algorithm. IEEE ACCESS 2020; 8:124675-124687. [DOI: 10.1109/access.2020.3005981] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
|