1. Kumar V, Alam MN, Manik G, Park SS. Recent Advancements in Rubber Composites for Physical Activity Monitoring Sensors: A Critical Review. Polymers (Basel) 2025; 17:1085. [PMID: 40284349] [PMCID: PMC12030466] [DOI: 10.3390/polym17081085]
Abstract
This review surveys developments from 2020 to 2025 in composite-based physical activity monitoring sensors built from carbon-reinforced silicone rubber. These composites enable a new generation of non-invasive sensors for monitoring sports activities such as running, cycling, and swimming. The review first gives a brief overview of carbon nanomaterials and silicone rubber-based composites, and then discusses the prospects of such sensors in terms of their mechanical and electrical properties, with particular focus on electrical characteristics such as resistance change, response time, and gauge factor. Industrial uses of these sensors are then summarized, covering sports activities such as boxing and everyday physical activities such as walking, squatting, and running, together with the role of fracture toughness in achieving high sensor durability. Finally, the key challenges in material stability, scalability, and the integration of multifunctional aspects of these composite sensors are addressed, and future research prospects, advantages, and limitations of composite-based sensors are outlined.
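The electrical figure of merit named above, the gauge factor, follows from the one-line relation GF = (ΔR/R0)/ε. The sketch below is a minimal illustration of that arithmetic; the resistance and strain numbers are assumptions, not data from the review.

```python
# Minimal sketch: gauge factor of a piezoresistive composite strain sensor,
# GF = (delta_R / R0) / strain. All numbers below are illustrative assumptions.

def gauge_factor(r0_ohm: float, r_ohm: float, strain: float) -> float:
    """Return the gauge factor for a given resistance change under a given strain."""
    delta_r_rel = (r_ohm - r0_ohm) / r0_ohm  # relative resistance change dR/R0
    return delta_r_rel / strain

if __name__ == "__main__":
    r0 = 1.0e4   # unstrained resistance (ohm), assumed
    r = 1.6e4    # resistance at 20% strain (ohm), assumed
    eps = 0.20   # applied strain (dimensionless)
    print(f"dR/R0 = {(r - r0) / r0:.2f}, GF = {gauge_factor(r0, r, eps):.2f}")
```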
Affiliation(s)
- Vineet Kumar: School of Mechanical Engineering, Yeungnam University, 280 Daehak-Ro, Gyeongsan 38541, Gyeongbuk, Republic of Korea
- Md Najib Alam: School of Mechanical Engineering, Yeungnam University, 280 Daehak-Ro, Gyeongsan 38541, Gyeongbuk, Republic of Korea
- Gaurav Manik: Department of Polymer and Process Engineering, Indian Institute of Technology Roorkee, Saharanpur Campus, Saharanpur 247001, Uttar Pradesh, India
- Sang-Shin Park: School of Mechanical Engineering, Yeungnam University, 280 Daehak-Ro, Gyeongsan 38541, Gyeongbuk, Republic of Korea
2. Zhang X, Liu X, Wang M, Zhang J, Liu K, Xu Z, Chen W, Hu J, Zhang P, Zhang Y, Dong L, Xu W, Pan Z. A Bioinspired Defect-Tolerant Hydrogel Medical Patch for Abdominal Wall Defect Repair. ACS Nano 2025; 19:11075-11090. [PMID: 40091215] [DOI: 10.1021/acsnano.4c17122]
Abstract
Point-wise suturing is the standard method for ensuring that patches effectively perform mechanically supportive functions in tissue repair. However, stress concentrations around suture holes can compromise the mechanical stability of patches. In this study, we develop a suturable hydrogel patch with flaw-tolerance capabilities by leveraging multiscale stress deconcentration, inspired by natural silk. This design mitigates stress concentration across two scales through the synergistic integration of nanoscale high-energy crystalline domains and intermolecular interactions. The resulting integral hydrogel patch exhibits superior flaw resistance compared to conventional patches and effectively addresses tissue adhesion issues. To validate the efficacy of the patch, we demonstrate successful in vivo repair of abdominal wall defects in rats, comparing the performance of the proposed patch to commercial mesh patches (Prolene). The integral patch design strategy presented here offers a valuable approach for developing patches that can be tailored to meet the mechanical support needs of various tissue repair applications.
Affiliation(s)
- Xiang Zhang: Department of Geriatrics, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230001, China
- Xiaoning Liu: Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Zhejiang Cancer Hospital, Hangzhou 310022, China; School of Molecular Medicine, Hangzhou Institute for Advanced Study, University of the Chinese Academy of Sciences, Hangzhou 310024, China
- Mohan Wang: Department of Oral and Maxillofacial Surgery, Ninth People's Hospital, College of Stomatology, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Jingjing Zhang: Department of Orthopedics, Department of Spine Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei 230022, China
- Ke Liu: Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Zhejiang Cancer Hospital, Hangzhou 310022, China
- Ziming Xu: Department of Ophthalmology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230001, China
- Wanfeng Chen: Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Zhejiang Cancer Hospital, Hangzhou 310022, China; College of Materials Science and Engineering, Zhejiang University of Technology, Hangzhou 310014, China
- Jun Hu: Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Zhejiang Cancer Hospital, Hangzhou 310022, China; College of Materials Science and Engineering, Zhejiang University of Technology, Hangzhou 310014, China
- Pan Zhang: Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Zhejiang Cancer Hospital, Hangzhou 310022, China; College of Materials Science and Engineering, Zhejiang University of Technology, Hangzhou 310014, China
- Yinshun Zhang: Department of Orthopedics, Department of Spine Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei 230022, China
- Liang Dong: Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Zhejiang Cancer Hospital, Hangzhou 310022, China; School of Molecular Medicine, Hangzhou Institute for Advanced Study, University of the Chinese Academy of Sciences, Hangzhou 310024, China; College of Materials Science and Engineering, Zhejiang University of Technology, Hangzhou 310014, China
- Weiping Xu: Department of Geriatrics, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230001, China; Department of Geriatrics, Gerontology Institute of Anhui Province, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230001, China; Anhui Provincial Key Laboratory of Tumor Immunotherapy and Nutrition Therapy, Hefei 230001, China
- Zhao Pan: Department of Geriatrics, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230001, China; Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Zhejiang Cancer Hospital, Hangzhou 310022, China; School of Molecular Medicine, Hangzhou Institute for Advanced Study, University of the Chinese Academy of Sciences, Hangzhou 310024, China; College of Materials Science and Engineering, Zhejiang University of Technology, Hangzhou 310014, China
3. Boyle LD, Giriteka L, Marty B, Sandgathe L, Haugarvoll K, Steihaug OM, Husebo BS, Patrascu M. Activity and Behavioral Recognition Using Sensing Technology in Persons with Parkinson's Disease or Dementia: An Umbrella Review of the Literature. Sensors (Basel) 2025; 25:668. [PMID: 39943307] [PMCID: PMC11820304] [DOI: 10.3390/s25030668]
Abstract
BACKGROUND With a progressively aging global population, the prevalence of Parkinson's Disease and dementia will increase, thus multiplying the healthcare burden worldwide. Sensing technology can complement the current measures used for symptom management and monitoring. The aim of this umbrella review is to provide future researchers with a synthesis of the current methodologies and metrics of sensing technologies for the management and monitoring of activities and behavioral symptoms in older adults with neurodegenerative disease. This is of key importance when considering the rapid obsolescence of and potential for future implementation of these technologies into real-world healthcare settings. METHODS Seven medical and technical databases were searched for systematic reviews (2018-2024) that met our inclusion/exclusion criteria. Articles were screened independently using Rayyan. PRISMA guidelines, the Cochrane Handbook for Systematic Reviews, and the Joanna Briggs Institute Critical Appraisal Checklist for Systematic Reviews were utilized for the assessment of bias, quality, and research synthesis. A narrative synthesis combines the study findings. RESULTS After screening 1458 articles, 9 systematic reviews were eligible for inclusion, synthesizing 402 primary studies. This umbrella review reveals that the use of sensing technologies for the observation and management of activities and behavioral symptoms is promising; however, their application is diverse, the methods used are heterogeneous, and applying them within clinical settings remains challenging. CONCLUSIONS Human activity and behavioral recognition requires true interdisciplinary collaboration between the engineering, data science, and healthcare domains. The standardization of metrics, ethical AI development, and a culture of research-friendly technology and support are the next crucial developments needed for this rising field.
Affiliation(s)
- Lydia D. Boyle: Centre for Elderly and Nursing Home Medicine, Department of Global Public Health and Primary Care, University of Bergen, Årstadveien 17, 5009 Bergen, Norway; Neuro-SysMed, Department of Clinical Medicine, University of Bergen, Jonas vei 65, 5021 Bergen, Norway; Helse Vest, Helse Bergen HF, Haukeland Universitetssjukehus, Postboks 1400, 5021 Bergen, Norway
- Lionel Giriteka: Centre for Elderly and Nursing Home Medicine, Department of Global Public Health and Primary Care, University of Bergen, Årstadveien 17, 5009 Bergen, Norway
- Brice Marty: Centre for Elderly and Nursing Home Medicine, Department of Global Public Health and Primary Care, University of Bergen, Årstadveien 17, 5009 Bergen, Norway; Neuro-SysMed, Department of Clinical Medicine, University of Bergen, Jonas vei 65, 5021 Bergen, Norway
- Lucas Sandgathe: Centre for Elderly and Nursing Home Medicine, Department of Global Public Health and Primary Care, University of Bergen, Årstadveien 17, 5009 Bergen, Norway; Department of Orthopedic Surgery, Voss Hospital, Sjukehusvegen 16, 5704 Voss, Norway
- Kristoffer Haugarvoll: Neuro-SysMed, Department of Clinical Medicine, University of Bergen, Jonas vei 65, 5021 Bergen, Norway; Helse Vest, Helse Bergen HF, Haukeland Universitetssjukehus, Postboks 1400, 5021 Bergen, Norway; Department of Neurology, Haukeland University Hospital, Haukelandsveien 22, 2009 Bergen, Norway
- Ole Martin Steihaug: Department of Internal Medicine, Haraldsplass Deaconess Hospital, Ulriksdal 8, 5009 Bergen, Norway
- Bettina S. Husebo: Centre for Elderly and Nursing Home Medicine, Department of Global Public Health and Primary Care, University of Bergen, Årstadveien 17, 5009 Bergen, Norway; Neuro-SysMed, Department of Clinical Medicine, University of Bergen, Jonas vei 65, 5021 Bergen, Norway
- Monica Patrascu: Centre for Elderly and Nursing Home Medicine, Department of Global Public Health and Primary Care, University of Bergen, Årstadveien 17, 5009 Bergen, Norway; Neuro-SysMed, Department of Clinical Medicine, University of Bergen, Jonas vei 65, 5021 Bergen, Norway; Complex Systems Laboratory, University Politehnica of Bucharest, Splaiul Independentei 313, 060042 Bucharest, Romania
4. Newaz NT, Hanada E. An Approach to Fall Detection Using Statistical Distributions of Thermal Signatures Obtained by a Stand-Alone Low-Resolution IR Array Sensor Device. Sensors (Basel) 2025; 25:504. [PMID: 39860873] [PMCID: PMC11769196] [DOI: 10.3390/s25020504]
Abstract
Infrared array sensor-based fall detection and activity recognition systems have gained momentum as promising solutions for enhancing healthcare monitoring and safety in various environments. Unlike camera-based systems, which can be privacy-intrusive, IR array sensors offer a non-invasive, reliable approach for fall detection and activity recognition while preserving privacy. This work proposes a novel method to distinguish between normal motion and fall incidents by analyzing thermal patterns captured by infrared array sensors. Data were collected from two subjects who performed a range of activities of daily living, including sitting, standing, walking, and falling. Data for each state were collected over multiple trials and extended periods to ensure robustness and variability in the measurements. The collected thermal data were compared with multiple statistical distributions using the Earth Mover's Distance (EMD). Experimental results showed that normal activities exhibited low EMD values with Beta and Normal distributions, suggesting that these distributions closely matched the thermal patterns associated with regular movements. Conversely, fall events exhibited high EMD values, indicating greater variability in thermal signatures. The system was implemented using a Raspberry Pi-based stand-alone device that provides a cost-effective solution without the need for additional computational devices. This study demonstrates the effectiveness of using IR array sensors for non-invasive, real-time fall detection and activity recognition, which offer significant potential for improving healthcare monitoring and ensuring the safety of fall-prone individuals.
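As a rough illustration of the distribution-matching step described above, the sketch below fits Beta and Normal distributions to a window of thermal pixel values and scores the match with the Earth Mover's (Wasserstein) distance. The simulated data and the decision threshold are assumptions, not the authors' calibrated pipeline.

```python
"""Sketch: compare a window of IR-array pixel values against fitted reference
distributions using Earth Mover's Distance (simulated data, assumed threshold)."""
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def emd_to_fits(window: np.ndarray) -> dict:
    """Fit Beta and Normal distributions to the window and return the EMD to each."""
    # Rescale to (0, 1) so that a Beta fit is well defined.
    w = (window - window.min()) / (np.ptp(window) + 1e-9)
    w = np.clip(w, 1e-6, 1 - 1e-6)
    a, b, loc, scale = stats.beta.fit(w, floc=0, fscale=1)
    mu, sigma = stats.norm.fit(w)
    beta_sample = stats.beta.rvs(a, b, size=w.size, random_state=0)
    norm_sample = stats.norm.rvs(mu, sigma, size=w.size, random_state=0)
    return {"beta": stats.wasserstein_distance(w, beta_sample),
            "normal": stats.wasserstein_distance(w, norm_sample)}

# Simulated windows: a concentrated "walking" thermal blob vs. a spread-out "fall".
walking = rng.normal(loc=30.0, scale=0.8, size=512)
fall = np.concatenate([rng.normal(24, 0.5, 380), rng.normal(31, 2.5, 132)])

for name, window in [("walking", walking), ("fall", fall)]:
    emd = emd_to_fits(window)
    flagged = min(emd.values()) > 0.05   # assumed decision threshold
    print(name, {k: round(v, 4) for k, v in emd.items()}, "fall?", flagged)
```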
Affiliation(s)
- Nishat Tasnim Newaz: Graduate School of Science and Engineering, Saga University, Saga 840-8502, Japan
- Eisuke Hanada: Faculty of Science and Engineering, Saga University, Saga 840-8502, Japan
5. Alarfaj M, Al Madini A, Alsafran A, Farag M, Chtourou S, Afifi A, Ahmad A, Al Rubayyi O, Al Harbi A, Al Thunaian M. Wearable sensors based on artificial intelligence models for human activity recognition. Front Artif Intell 2024; 7:1424190. [PMID: 39015365] [PMCID: PMC11250658] [DOI: 10.3389/frai.2024.1424190]
Abstract
Human motion detection technology holds significant potential in medicine, health care, and physical exercise. This study introduces a novel approach to human activity recognition (HAR) using convolutional neural networks (CNNs) designed for individual sensor types to enhance the accuracy and address the challenge of diverse data shapes from accelerometers, gyroscopes, and barometers. Specific CNN models are constructed for each sensor type, enabling them to capture the characteristics of their respective sensors. These adapted CNNs are designed to effectively process varying data shapes and sensor-specific characteristics to accurately classify a wide range of human activities. The late-fusion technique is employed to combine predictions from various models to obtain comprehensive estimates of human activity. The proposed CNN-based approach is compared to a standard support vector machine (SVM) classifier using the one-vs-rest methodology. The late-fusion CNN model showed significantly improved performance, with validation and final test accuracies of 99.35 and 94.83% compared to the conventional SVM classifier at 87.07 and 83.10%, respectively. These findings provide strong evidence that combining multiple sensors and a barometer and utilizing an additional filter algorithm greatly improves the accuracy of identifying different human movement patterns.
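A minimal sketch of the sensor-specific CNN plus late-fusion idea described above, assuming 100-sample windows and four activity classes; the layer sizes and the equal-weight probability averaging are illustrative assumptions rather than the authors' exact architecture.

```python
"""Sketch: one small 1-D CNN per sensor type, fused by averaging softmax outputs."""
import numpy as np
import tensorflow as tf

N_CLASSES = 4
WIN = 100  # samples per window (assumed)

def sensor_cnn(n_channels: int) -> tf.keras.Model:
    """Small 1-D CNN for a single sensor stream (e.g. accelerometer, gyroscope, barometer)."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(WIN, n_channels)),
        tf.keras.layers.Conv1D(32, 5, activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(64, 5, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])

models = {"acc": sensor_cnn(3), "gyro": sensor_cnn(3), "baro": sensor_cnn(1)}
for m in models.values():
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy training data per sensor (replace with real windows).
rng = np.random.default_rng(0)
x = {"acc": rng.normal(size=(256, WIN, 3)),
     "gyro": rng.normal(size=(256, WIN, 3)),
     "baro": rng.normal(size=(256, WIN, 1))}
y = rng.integers(0, N_CLASSES, size=256)
for name, m in models.items():
    m.fit(x[name], y, epochs=1, verbose=0)

# Late fusion: average the per-sensor class probabilities, then take the argmax.
probs = np.mean([m.predict(x[name], verbose=0) for name, m in models.items()], axis=0)
print("fused predictions:", probs.argmax(axis=1)[:10])
```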
Affiliation(s)
- Mohammed Alarfaj: Department of Electrical Engineering, College of Engineering, King Faisal University, Al-Ahsa, Saudi Arabia
- Azzam Al Madini: Department of Electrical Engineering, College of Engineering, King Faisal University, Al-Ahsa, Saudi Arabia
- Ahmed Alsafran: Department of Electrical Engineering, College of Engineering, King Faisal University, Al-Ahsa, Saudi Arabia
- Mohammed Farag: Department of Electrical Engineering, College of Engineering, King Faisal University, Al-Ahsa, Saudi Arabia
- Slim Chtourou: Department of Electrical Engineering, College of Engineering, King Faisal University, Al-Ahsa, Saudi Arabia
- Ahmed Afifi: Department of Computer Science, College of Computer Science and Information Technology, King Faisal University, Al-Ahsa, Saudi Arabia
- Ayaz Ahmad: Department of Electrical Engineering, College of Engineering, King Faisal University, Al-Ahsa, Saudi Arabia
- Osama Al Rubayyi: Department of Electrical Engineering, College of Engineering, King Faisal University, Al-Ahsa, Saudi Arabia
- Ali Al Harbi: Department of Electrical Engineering, College of Engineering, King Faisal University, Al-Ahsa, Saudi Arabia
- Mustafa Al Thunaian: Department of Electrical Engineering, College of Engineering, King Faisal University, Al-Ahsa, Saudi Arabia
6. Hollinger D, Schall MC, Chen H, Zabala M. The Effect of Sensor Feature Inputs on Joint Angle Prediction across Simple Movements. Sensors (Basel) 2024; 24:3657. [PMID: 38894447] [PMCID: PMC11175352] [DOI: 10.3390/s24113657]
Abstract
The use of wearable sensors, such as inertial measurement units (IMUs), and machine learning for human intent recognition in health-related areas has grown considerably. However, there is limited research exploring how IMU quantity and placement affect human movement intent prediction (HMIP) at the joint level. The objective of this study was to analyze various combinations of IMU input signals to maximize the machine learning prediction accuracy for multiple simple movements. We trained a Random Forest algorithm to predict future joint angles across these movements using various sensor features. We hypothesized that joint angle prediction accuracy would increase with the addition of IMUs attached to adjacent body segments and that non-adjacent IMUs would not increase the prediction accuracy. The results indicated that the addition of adjacent IMUs to current joint angle inputs did not significantly increase the prediction accuracy (RMSE of 1.92° vs. 3.32° at the ankle, 8.78° vs. 12.54° at the knee, and 5.48° vs. 9.67° at the hip). Additionally, including non-adjacent IMUs did not increase the prediction accuracy (RMSE of 5.35° vs. 5.55° at the ankle, 20.29° vs. 20.71° at the knee, and 14.86° vs. 13.55° at the hip). These results demonstrated how future joint angle prediction during simple movements did not improve with the addition of IMUs alongside current joint angle inputs.
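The sketch below mirrors the study's setup in spirit: a Random Forest regressor predicts a future joint angle from the current angle and IMU-derived features. The synthetic signal, 100 ms prediction horizon, and feature choices are assumptions for illustration only.

```python
"""Sketch: Random Forest prediction of a future joint angle from IMU-style features."""
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
t = np.arange(n) / 100.0                      # 100 Hz timeline (assumed)
angle = 20 * np.sin(2 * np.pi * 0.5 * t)      # synthetic knee angle (deg)
gyro = np.gradient(angle) * 100               # angular velocity proxy
acc = np.gradient(gyro) * 100                 # angular acceleration proxy

horizon = 10                                  # predict 100 ms ahead (assumed)
X = np.column_stack([angle, gyro, acc])[:-horizon]
y = angle[horizon:]                           # future joint angle target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, shuffle=False)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5
print(f"RMSE on held-out data: {rmse:.2f} deg")
```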
Affiliation(s)
- David Hollinger: Department of Mechanical Engineering, Auburn University, Auburn, AL 36849, USA
- Mark C. Schall: Department of Industrial & Systems Engineering, Auburn University, Auburn, AL 36849, USA
- Howard Chen: Department of Industrial & Systems Engineering and Engineering Management, University of Alabama-Huntsville, Huntsville, AL 35899, USA
- Michael Zabala: Department of Mechanical Engineering, Auburn University, Auburn, AL 36849, USA
7. Hossain MB, LaMunion SR, Crouter SE, Melanson EL, Sazonov E. A CNN Model for Physical Activity Recognition and Energy Expenditure Estimation from an Eyeglass-Mounted Wearable Sensor. Sensors (Basel) 2024; 24:3046. [PMID: 38793899] [PMCID: PMC11125058] [DOI: 10.3390/s24103046]
Abstract
Metabolic syndrome poses a significant health challenge worldwide, prompting the need for comprehensive strategies integrating physical activity monitoring and energy expenditure. Wearable sensor devices have been used both for energy intake and energy expenditure (EE) estimation. Traditionally, sensors are attached to the hip or wrist. The primary aim of this research is to investigate the use of an eyeglass-mounted wearable energy intake sensor (Automatic Ingestion Monitor v2, AIM-2) for simultaneous recognition of physical activity (PAR) and estimation of steady-state EE as compared to a traditional hip-worn device. Study data were collected from six participants performing six structured activities, with the reference EE measured using indirect calorimetry (COSMED K5) and reported as metabolic equivalents of tasks (METs). Next, a novel deep convolutional neural network-based multitasking model (Multitasking-CNN) was developed for PAR and EE estimation. The Multitasking-CNN was trained with a two-step progressive training approach for higher accuracy, where in the first step the model for PAR was trained, and in the second step the model was fine-tuned for EE estimation. Finally, the performance of Multitasking-CNN on AIM-2 attached to eyeglasses was compared to the ActiGraph GT9X (AG) attached to the right hip. On the AIM-2 data, Multitasking-CNN achieved a maximum of 95% testing accuracy of PAR, a minimum of 0.59 METs mean square error (MSE), and 11% mean absolute percentage error (MAPE) in EE estimation. Conversely, on AG data, the Multitasking-CNN model achieved a maximum of 82% testing accuracy in PAR, a minimum of 0.73 METs MSE, and 13% MAPE in EE estimation. These results suggest the feasibility of using an eyeglass-mounted sensor for both PAR and EE estimation.
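A minimal sketch of the two-step progressive scheme described above: a shared 1-D CNN backbone is first trained with an activity-classification head, then fine-tuned with an added energy-expenditure regression head. Window length, layer sizes, and loss weighting are assumptions, not the published Multitasking-CNN.

```python
"""Sketch: shared CNN backbone with PAR (classification) and EE (regression) heads,
trained in two progressive steps (toy data; architecture details are assumed)."""
import numpy as np
import tensorflow as tf

WIN, CH, N_ACT = 300, 3, 6          # window length, channels, activity classes (assumed)
rng = np.random.default_rng(0)
x = rng.normal(size=(512, WIN, CH)).astype("float32")
y_act = rng.integers(0, N_ACT, size=512)
y_met = rng.uniform(1.0, 8.0, size=512).astype("float32")   # METs targets

# Shared backbone.
inp = tf.keras.Input(shape=(WIN, CH))
h = tf.keras.layers.Conv1D(32, 7, activation="relu")(inp)
h = tf.keras.layers.MaxPooling1D(2)(h)
h = tf.keras.layers.Conv1D(64, 7, activation="relu")(h)
h = tf.keras.layers.GlobalAveragePooling1D()(h)

# Step 1: train the backbone with the activity-recognition head only.
act_out = tf.keras.layers.Dense(N_ACT, activation="softmax", name="par")(h)
par_model = tf.keras.Model(inp, act_out)
par_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
par_model.fit(x, y_act, epochs=2, verbose=0)

# Step 2: add an EE head and fine-tune both tasks jointly.
ee_out = tf.keras.layers.Dense(1, name="ee")(h)
multi = tf.keras.Model(inp, [act_out, ee_out])
multi.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss={"par": "sparse_categorical_crossentropy", "ee": "mse"},
              loss_weights={"par": 0.2, "ee": 1.0})          # assumed weighting
multi.fit(x, {"par": y_act, "ee": y_met}, epochs=2, verbose=0)
print(multi.evaluate(x, {"par": y_act, "ee": y_met}, verbose=0))
```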
Affiliation(s)
- Md Billal Hossain: Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
- Samuel R. LaMunion: Department of Kinesiology, Recreation and Sport Studies, The University of Tennessee, Knoxville, TN 37996, USA
- Scott E. Crouter: Department of Kinesiology, Recreation and Sport Studies, The University of Tennessee, Knoxville, TN 37996, USA
- Edward L. Melanson: Division of Endocrinology, Metabolism, and Diabetes, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- Edward Sazonov: Department of Electrical and Computer Engineering, The University of Alabama, Tuscaloosa, AL 35487, USA
8. Gilmore J, Nasseri M. Human Activity Recognition Algorithm with Physiological and Inertial Signals Fusion: Photoplethysmography, Electrodermal Activity, and Accelerometry. Sensors (Basel) 2024; 24:3005. [PMID: 38793858] [PMCID: PMC11124986] [DOI: 10.3390/s24103005]
Abstract
Inertial signals are the most widely used signals in human activity recognition (HAR) applications, and extensive research has been performed on developing HAR classifiers using accelerometer and gyroscope data. This study aimed to investigate the potential enhancement of HAR models through the fusion of biological signals with inertial signals. The classification of eight common low-, medium-, and high-intensity activities was assessed using machine learning (ML) algorithms, trained on accelerometer (ACC), blood volume pulse (BVP), and electrodermal activity (EDA) data obtained from a wrist-worn sensor. Two types of ML algorithms were employed: a random forest (RF) trained on features; and a pre-trained deep learning (DL) network (ResNet-18) trained on spectrogram images. Evaluation was conducted on both individual activities and more generalized activity groups, based on similar intensity. Results indicated that RF classifiers outperformed corresponding DL classifiers at both individual and grouped levels. However, the fusion of EDA and BVP signals with ACC data improved DL classifier performance compared to a baseline DL model with ACC-only data. The best performance was achieved by a classifier trained on a combination of ACC, EDA, and BVP images, yielding F1-scores of 69 and 87 for individual and grouped activity classifications, respectively. For DL models trained with additional biological signals, almost all individual activity classifications showed improvement (p-value < 0.05). In grouped activity classifications, DL model performance was enhanced for low- and medium-intensity activities. Exploring the classification of two specific activities, ascending/descending stairs and cycling, revealed significantly improved results using a DL model trained on combined ACC, BVP, and EDA spectrogram images (p-value < 0.05).
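The sketch below illustrates one way to build the fused spectrogram "images" the DL classifier consumes: a log-magnitude spectrogram per signal, stacked as channels. The sampling rate, window length, and three-channel stacking are assumptions, not the study's exact preprocessing.

```python
"""Sketch: stack ACC, BVP, and EDA spectrograms into a 3-channel image for a CNN."""
import numpy as np
from scipy import signal

FS = 64                     # common (resampled) sampling rate in Hz, assumed
WIN_SEC = 30                # analysis window length in seconds, assumed
rng = np.random.default_rng(1)

def spec_image(x: np.ndarray, fs: int = FS) -> np.ndarray:
    """Log-magnitude spectrogram of one signal, min-max scaled to [0, 1]."""
    _, _, sxx = signal.spectrogram(x, fs=fs, nperseg=128, noverlap=64)
    sxx = np.log1p(sxx)
    return (sxx - sxx.min()) / (sxx.max() - sxx.min() + 1e-9)

# One 30 s window of (already resampled) accelerometer magnitude, BVP, and EDA.
n = FS * WIN_SEC
acc = rng.normal(size=n)
bvp = np.sin(2 * np.pi * 1.2 * np.arange(n) / FS) + 0.1 * rng.normal(size=n)
eda = np.cumsum(rng.normal(scale=0.01, size=n))

image = np.stack([spec_image(acc), spec_image(bvp), spec_image(eda)], axis=-1)
print("fused spectrogram image shape (freq x time x channel):", image.shape)
# `image` can now be resized and fed to a pretrained CNN such as ResNet-18.
```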
Affiliation(s)
- Justin Gilmore: Department of Electrical and Computer Engineering, University of Central Florida, Orlando, FL 32816, USA
- Mona Nasseri: School of Engineering, University of North Florida, Jacksonville, FL 32224, USA
9. Saddaf Khan N, Qadir S, Anjum G, Uddin N. StresSense: Real-time detection of stress-displaying behaviors. Int J Med Inform 2024; 185:105401. [PMID: 38493546] [DOI: 10.1016/j.ijmedinf.2024.105401]
Abstract
BACKGROUND Wrist-worn gadgets like smartphones are ideal for unobtrusively gathering user data, in various fields such as health and fitness monitoring, communication, and productivity enhancement. They seamlessly integrate into users' daily lives, providing valuable insights and features without the need for constant attention or disruption. In sensitive domains like mental health, these devices provide user-friendly, privacy-protected means of diagnosis and treatment, offering a secure and cost-effective avenue for seeking help. OBJECTIVES This study addresses the limitations of traditional mental health assessment techniques, such as intrusive sensing and subjective self-reporting, by harnessing the unobtrusive data collection capabilities of smartphones. Equipped with accelerometers and other sensors, these devices offer a novel approach to mental health research. Our objective was to develop methods for real-time detection of stress and boredom behavior markers using smart devices and machine learning algorithms. METHODOLOGY By leveraging data from accelerometers (A), gyroscopes (G), and magnetometers (M), we compiled a dataset indicative of stress-related behaviors and trained various machine-learning models for predictive accuracy. The methodology involved collecting data from motion sensors (A, G, and M) on the dominant arm's wrist-worn smartphone, followed by data preprocessing, transformation from time series format, and training a Deep Neural Network (DNN) model for activity recognition. FINDINGS Remarkably, the DNN achieved an accuracy of 93.50% on test data, outperforming traditional and ensemble machine learning methods across different window sizes, and demonstrated real-time accuracy of 77.78%, validating its practical application. CONCLUSION In conclusion, this research presents a novel dataset for detecting stress and boredom behaviors using smartphones, reducing reliance on costly devices and offering a more objective assessment. It also proposes a DNN-based method for wrist-worn devices to accurately identify complex activities associated with stress and boredom, with benefits in terms of privacy and user convenience. This advancement represents a significant contribution to the field of mental health research, providing a less intrusive and more user-friendly approach to monitoring mental well-being.
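As a rough sketch of the pipeline above, the code below windows the accelerometer, gyroscope, and magnetometer streams, flattens each window out of time-series format, and trains a small dense network. The window length, overlap, and layer sizes are assumptions, not the StresSense configuration.

```python
"""Sketch: windowed A/G/M signals -> flattened features -> small DNN classifier."""
import numpy as np
import tensorflow as tf

FS, WIN_S, STEP_S = 50, 4, 2              # 50 Hz, 4 s windows, 2 s hop (assumed)
rng = np.random.default_rng(3)
n = FS * 600                              # ten minutes of toy data
stream = rng.normal(size=(n, 9))          # acc(3) + gyro(3) + mag(3)
labels = (rng.uniform(size=n) < 0.3).astype(int)   # 0 = neutral, 1 = stress marker

def make_windows(x, y, win=FS * WIN_S, step=FS * STEP_S):
    """Slice the streams into overlapping windows; window label = majority vote."""
    xs, ys = [], []
    for start in range(0, len(x) - win + 1, step):
        xs.append(x[start:start + win].reshape(-1))          # flatten the time axis
        ys.append(int(y[start:start + win].mean() > 0.5))
    return np.array(xs, dtype="float32"), np.array(ys)

X, Y = make_windows(stream, labels)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, Y, validation_split=0.2, epochs=3, verbose=0)
print("accuracy on toy data:", round(model.evaluate(X, Y, verbose=0)[1], 3))
```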
Affiliation(s)
- Nida Saddaf Khan: CITRIC Health Data Science Centre, Medical College, Agha Khan University, Stadium Road, P.O. Box 3500, Karachi 74800, Pakistan; Telecommunication Research Lab (TRL), School of Mathematics and Computer Science, Institute of Business Administration, Karachi, Pakistan
- Saleeta Qadir: National High-Performance Computing Center, Friedrich-Alexander-Universität, Erlangen-Nürnberg, Schloßplatz 4, 91054 Erlangen, Germany; Telecommunication Research Lab (TRL), School of Mathematics and Computer Science, Institute of Business Administration, Karachi, Pakistan
- Gulnaz Anjum: Department of Psychology, University of Oslo, Forskningsveien 3A, Harald Schjelderups hus, 0373 Oslo, Norway
- Nasir Uddin: School of Computer Science, National University of Computer and Emerging Sciences, Karachi Campus, Pakistan
10. Paola Patricia AC, Rosberg PC, Butt-Aziz S, Marlon Alberto PM, Roberto-Cesar MO, Miguel UT, Naz S. Semi-supervised ensemble learning for human activity recognition in CASAS Kyoto dataset. Heliyon 2024; 10:e29398. [PMID: 38655356] [PMCID: PMC11035997] [DOI: 10.1016/j.heliyon.2024.e29398]
Abstract
The automatic identification of human physical activities, commonly referred to as Human Activity Recognition (HAR), has garnered significant interest and application across various sectors, including entertainment, sports, and notably health. Within the realm of health, a myriad of applications exists, contingent upon the nature of experimentation, the activities under scrutiny, and the methodology employed for data and information acquisition. This diversity opens doors to multifaceted applications, including support for the well-being and safeguarding of elderly individuals afflicted with neurodegenerative diseases, especially in the context of smart homes. Within the existing literature, a multitude of datasets from both indoor and outdoor environments have surfaced, significantly contributing to the activity identification process. One prominent dataset, from the CASAS project developed by Washington State University (WSU), encompasses experiments conducted in indoor settings. This dataset facilitates the identification of a range of activities, such as cleaning, cooking, eating, washing hands, and even making phone calls. This article introduces a model founded on the principles of semi-supervised ensemble learning, enabling the harnessing of the potential inherent in distance-based clustering analysis. This technique aids in the identification of distinct clusters, each encapsulating unique activity characteristics. These clusters serve as pivotal inputs for the subsequent classification process, which leverages supervised techniques. The outcomes of this approach exhibit great promise, as evidenced by the analysis of quality metrics, which shows favorable results compared with existing state-of-the-art methods. This integrated framework not only contributes to the field of HAR but also holds immense potential for enhancing the capabilities of smart homes and related applications.
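A compact sketch of the general idea (clusters found without labels feeding a supervised classifier): the cluster assignment and cluster distances are appended as extra features before training. The clusterer, number of clusters, and classifier are assumptions rather than the authors' exact ensemble.

```python
"""Sketch: distance-based clustering output used as input features for a
supervised classifier (toy data; k, the clusterer, and the classifier are assumed)."""
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Toy sensor-event features for three in-home activities.
X = np.vstack([rng.normal(m, 0.7, size=(200, 5)) for m in (0, 3, 6)])
y = np.repeat([0, 1, 2], 200)                      # cooking / eating / washing hands

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Unsupervised step: clusters and centre distances computed without labels.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_tr)

def augment(x):
    d = km.transform(x)                            # distance to each cluster centre
    return np.hstack([x, d, km.predict(x)[:, None]])

# Supervised step: classifier trained on the cluster-augmented features.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(augment(X_tr), y_tr)
print("macro F1:", round(f1_score(y_te, clf.predict(augment(X_te)), average="macro"), 3))
```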
Affiliation(s)
- Pacheco-Cuentas Rosberg: Universidad de la Costa, Department of Computer Science and Electronics, Barranquilla, Colombia
- Shariq Butt-Aziz: School of Systems and Technology, Department of Computer Science, University of Management and Technology, Lahore, Pakistan
- Urina-Triana Miguel: Universidad Simón Bolívar, Faculty of Health Sciences, Barranquilla, Colombia
- Sumera Naz: Department of Mathematics, Division of Science and Technology, University of Education, Lahore, Pakistan
11. Ryu S, Yun S, Lee S, Jeong IC. Exploring the Possibility of Photoplethysmography-Based Human Activity Recognition Using Convolutional Neural Networks. Sensors (Basel) 2024; 24:1610. [PMID: 38475146] [DOI: 10.3390/s24051610]
Abstract
Various sensing modalities, including external and internal sensors, have been employed in research on human activity recognition (HAR). Among these, internal sensors, particularly wearable technologies, hold significant promise due to their lightweight nature and simplicity. Recently, HAR techniques leveraging wearable biometric signals, such as electrocardiography (ECG) and photoplethysmography (PPG), have been proposed using publicly available datasets. However, to facilitate broader practical applications, a more extensive analysis based on larger databases with cross-subject validation is required. In pursuit of this objective, we initially gathered PPG signals from 40 participants engaged in five common daily activities. Subsequently, we evaluated the feasibility of classifying these activities using deep learning architecture. The model's performance was assessed in terms of accuracy, precision, recall, and F-1 measure via cross-subject cross-validation (CV). The proposed method successfully distinguished the five activities considered, with an average test accuracy of 95.14%. Furthermore, we recommend an optimal window size based on a comprehensive evaluation of performance relative to the input signal length. These findings confirm the potential for practical HAR applications based on PPG and indicate its prospective extension to various domains, such as healthcare or fitness applications, by concurrently analyzing behavioral and health data through a single biometric signal.
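The sketch below focuses on the cross-subject evaluation point made above: participant IDs drive a GroupKFold split so that no subject appears in both training and test folds of a 1-D CNN over PPG windows. The window length, architecture, and fold count are assumptions.

```python
"""Sketch: cross-subject validation of a PPG-based 1-D CNN using GroupKFold."""
import numpy as np
import tensorflow as tf
from sklearn.model_selection import GroupKFold

FS, WIN_S, N_ACT = 64, 8, 5          # assumed sampling rate, window length, classes
rng = np.random.default_rng(0)
n_windows = 400
X = rng.normal(size=(n_windows, FS * WIN_S, 1)).astype("float32")   # PPG windows
y = rng.integers(0, N_ACT, size=n_windows)
subjects = rng.integers(0, 40, size=n_windows)                      # 40 participants

def make_cnn():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(FS * WIN_S, 1)),
        tf.keras.layers.Conv1D(16, 7, activation="relu"),
        tf.keras.layers.MaxPooling1D(4),
        tf.keras.layers.Conv1D(32, 7, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(N_ACT, activation="softmax"),
    ])

accs = []
for tr, te in GroupKFold(n_splits=5).split(X, y, groups=subjects):
    model = make_cnn()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X[tr], y[tr], epochs=2, verbose=0)
    accs.append(model.evaluate(X[te], y[te], verbose=0)[1])
print("cross-subject accuracy per fold:", [round(a, 3) for a in accs])
```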
Affiliation(s)
- Semin Ryu: Department of Artificial Intelligence Convergence, Hallym University, Chuncheon 24252, Republic of Korea; Cerebrovascular Disease Research Center, Hallym University, Chuncheon 24252, Republic of Korea
- Suyeon Yun: Department of Artificial Intelligence Convergence, Hallym University, Chuncheon 24252, Republic of Korea; Cerebrovascular Disease Research Center, Hallym University, Chuncheon 24252, Republic of Korea
- Sunghan Lee: Cerebrovascular Disease Research Center, Hallym University, Chuncheon 24252, Republic of Korea
- In Cheol Jeong: Department of Artificial Intelligence Convergence, Hallym University, Chuncheon 24252, Republic of Korea; Cerebrovascular Disease Research Center, Hallym University, Chuncheon 24252, Republic of Korea; Department of Population Health Science and Policy, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
12. Novak R, Robinson JA, Kanduč T, Sarigiannis D, Džeroski S, Kocman D. Empowering Participatory Research in Urban Health: Wearable Biometric and Environmental Sensors for Activity Recognition. Sensors (Basel) 2023; 23:9890. [PMID: 38139735] [PMCID: PMC10747712] [DOI: 10.3390/s23249890]
Abstract
Participatory exposure research, which tracks behaviour and assesses exposure to stressors like air pollution, traditionally relies on time-activity diaries. This study introduces a novel approach, employing machine learning (ML) to empower laypersons in human activity recognition (HAR), aiming to reduce dependence on manual recording by leveraging data from wearable sensors. Recognising complex activities such as smoking and cooking presents unique challenges due to specific environmental conditions. In this research, we combined wearable environment/ambient and wrist-worn activity/biometric sensors for complex activity recognition in an urban stressor exposure study, measuring parameters like particulate matter concentrations, temperature, and humidity. Two groups, Group H (88 individuals) and Group M (18 individuals), wore the devices and manually logged their activities hourly and minutely, respectively. Prioritising accessibility and inclusivity, we selected three classification algorithms: k-nearest neighbours (IBk), decision trees (J48), and random forests (RF), based on: (1) proven efficacy in existing literature, (2) understandability and transparency for laypersons, (3) availability on user-friendly platforms like WEKA, and (4) efficiency on basic devices such as office laptops or smartphones. Accuracy improved with finer temporal resolution and detailed activity categories. However, when compared to other published human activity recognition research, our accuracy rates, particularly for less complex activities, were not as competitive. Misclassifications were higher for vague activities (resting, playing), while well-defined activities (smoking, cooking, running) had few errors. Including environmental sensor data increased accuracy for all activities, especially playing, smoking, and running. Future work should consider exploring other explainable algorithms available on diverse tools and platforms. Our findings underscore ML's potential in exposure studies, emphasising its adaptability and significance for laypersons while also highlighting areas for improvement.
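The study ran IBk, J48, and RF in WEKA; for readers working in Python, the sketch below uses the closest scikit-learn analogues (k-nearest neighbours, a decision tree, and a random forest) on toy activity/environment features. The analogy is approximate: J48 implements C4.5, whereas scikit-learn trees are CART.

```python
"""Sketch: scikit-learn analogues of the WEKA learners used in the study
(IBk ~ k-NN, J48 ~ decision tree, RF ~ random forest) on toy features."""
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# Toy feature matrix: wrist activity counts, PM2.5, temperature, humidity.
X = rng.normal(size=(600, 4))
y = rng.integers(0, 4, size=600)          # e.g. resting / cooking / smoking / running

models = {
    "IBk (k-NN)": KNeighborsClassifier(n_neighbors=5),
    "J48-like tree": DecisionTreeClassifier(max_depth=8, random_state=0),
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in models.items():
    score = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```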
Affiliation(s)
- Rok Novak: Department of Environmental Sciences, Jožef Stefan Institute, 1000 Ljubljana, Slovenia; Ecotechnologies Programme, Jožef Stefan International Postgraduate School, 1000 Ljubljana, Slovenia
- Johanna Amalia Robinson: Department of Environmental Sciences, Jožef Stefan Institute, 1000 Ljubljana, Slovenia; Ecotechnologies Programme, Jožef Stefan International Postgraduate School, 1000 Ljubljana, Slovenia; Centre for Research and Development, Slovenian Institute for Adult Education, 1000 Ljubljana, Slovenia
- Tjaša Kanduč: Department of Environmental Sciences, Jožef Stefan Institute, 1000 Ljubljana, Slovenia
- Dimosthenis Sarigiannis: Environmental Engineering Laboratory, Department of Chemical Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece; HERACLES Research Centre on the Exposome and Health, Centre for Interdisciplinary Research and Innovation, 57001 Thessaloniki, Greece; Environmental Health Engineering, Department of Science, Technology and Society, University School of Advanced Study IUSS, 27100 Pavia, Italy
- Sašo Džeroski: Ecotechnologies Programme, Jožef Stefan International Postgraduate School, 1000 Ljubljana, Slovenia; Department of Knowledge Technologies, Jožef Stefan Institute, 1000 Ljubljana, Slovenia
- David Kocman: Department of Environmental Sciences, Jožef Stefan Institute, 1000 Ljubljana, Slovenia
13. Niță VA, Magyar P. Improving Balance and Movement Control in Fencing Using IoT and Real-Time Sensorial Feedback. Sensors (Basel) 2023; 23:9801. [PMID: 38139647] [PMCID: PMC10747936] [DOI: 10.3390/s23249801]
Abstract
Fencing, a sport emphasizing the equilibrium and movement control of participants, forms the focal point of inquiry in the current study. The research endeavors to assess the efficacy of a novel system designed for real-time monitoring of fencers' balance and movement control, augmented by modules incorporating visual feedback and haptic feedback, to ascertain its potential for performance enhancement. Over a span of five weeks, three distinct groups, each comprising ten fencers, underwent specific training: a control group, a cohort utilizing the system with a visual real-time feedback module, and a cohort using the system with a haptic real-time feedback module. Positive outcomes were observed across all three groups, a typical occurrence following a 5-week training regimen. However, noteworthy advancements were particularly discerned in the second group, reaching approximately 15%. In contrast, the improvements in the remaining two groups were below 5%. Statistical analyses employing the Wilcoxon signed-rank test for repeated measures were applied to assess the significance of the results. Significance was solely ascertained for the second group, underscoring the efficacy of the system integrated with visual real-time feedback in yielding statistically noteworthy performance enhancements.
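The statistical step named above is simple enough to sketch directly: paired pre/post scores for a group are compared with the Wilcoxon signed-rank test. The numbers below are invented placeholders, not the study's measurements.

```python
"""Sketch: Wilcoxon signed-rank test on paired pre/post training scores
(placeholder numbers, not the study's data)."""
from scipy.stats import wilcoxon

# Balance-control scores for ten fencers before and after the 5-week block.
pre  = [62, 58, 71, 65, 60, 67, 55, 70, 63, 59]
post = [71, 66, 80, 75, 68, 77, 63, 81, 72, 66]

stat, p = wilcoxon(post, pre)          # paired, two-sided by default
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")
# A small p-value (e.g. < 0.05) indicates a statistically significant change.
```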
Affiliation(s)
- Valentin-Adrian Niță: Department of Communications, University Politehnica of Timișoara, 300006 Timișoara, Romania
- Petra Magyar: Department of Physical and Sports Education, West University of Timisoara, 300223 Timișoara, Romania
14. Tan Q, Qin Y, Tang R, Wu S, Cao J. A Multi-Layer Classifier Model XR-KS of Human Activity Recognition for the Problem of Similar Human Activity. Sensors (Basel) 2023; 23:9613. [PMID: 38067987] [PMCID: PMC10708779] [DOI: 10.3390/s23239613]
Abstract
Sensor-based human activity recognition is now well developed, but there are still many challenges, such as insufficient accuracy in the identification of similar activities. To overcome this issue, we collect data during similar human activities using three-axis acceleration and gyroscope sensors. We developed a model capable of classifying similar activities of human behavior, and the effectiveness and generalization capabilities of this model are evaluated. Based on the standardization and normalization of data, we consider the inherent similarities of human activity behaviors by introducing the multi-layer classifier model. The first layer of the proposed model is a random forest model based on the XGBoost feature selection algorithm. In the second layer of this model, similar human activities are extracted by applying the kernel Fisher discriminant analysis (KFDA) with feature mapping. Then, the support vector machine (SVM) model is applied to classify similar human activities. Our model is experimentally evaluated, and it is also applied to four benchmark datasets: UCI DSA, UCI HAR, WISDM, and IM-WSHA. The experimental results demonstrate that the proposed approach achieves recognition accuracies of 97.69%, 97.92%, 98.12%, and 90.6%, indicating excellent recognition performance. Additionally, we performed K-fold cross-validation on the random forest model and utilized ROC curves for the SVM classifier to assess the model's generalization ability. The results indicate that our multi-layer classifier model exhibits robust generalization capabilities.
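A rough sketch of the layered idea, with two plainly declared substitutions: gradient-boosting importances stand in for the XGBoost feature selection, and KernelPCA followed by LDA stands in for kernel Fisher discriminant analysis (scikit-learn has no KFDA). The confidence-based routing of "similar" samples to the second layer is also an assumption.

```python
"""Sketch of a layered pipeline in the spirit of XR-KS (substitutions noted above)."""
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(900, 20))                     # toy windowed acc/gyro features
y = rng.integers(0, 3, size=900)                   # three easily confused activities
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Layer 1: feature selection (boosting importances) feeding a random forest.
sel = SelectFromModel(GradientBoostingClassifier(random_state=0)).fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(sel.transform(X_tr), y_tr)
proba = rf.predict_proba(sel.transform(X_te))
pred = rf.predict(sel.transform(X_te))

# Layer 2: samples the forest is unsure about ("similar" activities) get a second
# opinion from a kernel feature mapping, a discriminant projection, and an SVM.
kmap = KernelPCA(n_components=5, kernel="rbf").fit(sel.transform(X_tr))
lda = LinearDiscriminantAnalysis().fit(kmap.transform(sel.transform(X_tr)), y_tr)
svm = SVC().fit(lda.transform(kmap.transform(sel.transform(X_tr))), y_tr)

unsure = proba.max(axis=1) < 0.6                   # assumed confidence threshold
if unsure.any():
    pred[unsure] = svm.predict(
        lda.transform(kmap.transform(sel.transform(X_te[unsure]))))
print("accuracy on toy data:", round(accuracy_score(y_te, pred), 3))
```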
Affiliation(s)
- Qiancheng Tan: College of Mathematics and Computing Science, Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, Guilin University of Electronic Technology, Guilin 541004, China
- Yonghui Qin: College of Mathematics and Computing Science, Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, Guilin University of Electronic Technology, Guilin 541004, China; Center for Applied Mathematics of Guangxi (GUET), Guilin 541004, China; Guangxi Key Laboratory of Automatic Detecting Technology and Instruments, Guilin University of Electronic Technology, Guilin 541004, China
- Rui Tang: School of Advanced Manufacturing, Fuzhou University, Fuzhou 350108, China
- Sixuan Wu: College of Mathematics and Computing Science, Guangxi Colleges and Universities Key Laboratory of Data Analysis and Computation, Guilin University of Electronic Technology, Guilin 541004, China
- Jing Cao: College of Electrical Engineering and Information, Northeast Agricultural University, Harbin 150030, China
15. Nakanishi K, Goto H. A New Index for the Quantitative Evaluation of Surgical Invasiveness Based on Perioperative Patients' Behavior Patterns: Machine Learning Approach Using Triaxial Acceleration. JMIR Perioper Med 2023; 6:e50188. [PMID: 37962919] [PMCID: PMC10685283] [DOI: 10.2196/50188]
Abstract
BACKGROUND The minimally invasive nature of thoracoscopic surgery is well recognized; however, the absence of a reliable evaluation method remains challenging. We hypothesized that the postoperative recovery speed is closely linked to surgical invasiveness, where recovery signifies the patient's behavior transition back to their preoperative state during the perioperative period. OBJECTIVE This study aims to determine whether machine learning using triaxial acceleration data can effectively capture perioperative behavior changes and establish a quantitative index for quantifying variations in surgical invasiveness. METHODS We trained 7 distinct machine learning models using a publicly available human acceleration data set as supervised data. The 3 top-performing models were selected to predict patient actions, as determined by the Matthews correlation coefficient scores. Two patients who underwent different levels of invasive thoracoscopic surgery were selected as participants. Acceleration data were collected via chest sensors for 8 hours during the preoperative and postoperative hospitalization days. These data were categorized into 4 actions (walking, standing, sitting, and lying down) using the selected models. The actions predicted by the model with intermediate results were adopted as the actions of the participants. The daily appearance probability was calculated for each action. The 2 differences between 2 appearance probabilities (sitting vs standing and lying down vs walking) were calculated using 2 coordinates on the x- and y-axes. A 2D vector composed of coordinate values was defined as the index of behavior pattern (iBP) for the day. All daily iBPs were graphed, and the enclosed area and distance between points were calculated and compared between participants to assess the relationship between changes in the indices and invasiveness. RESULTS Patients 1 and 2 underwent lung lobectomy and incisional tumor biopsy, respectively. The selected predictive model was a light-gradient boosting model (mean Matthews correlation coefficient 0.98, SD 0.0027; accuracy: 0.98). The acceleration data yielded 548,466 points for patient 1 and 466,407 points for patient 2. The iBPs of patient 1 were [(0.32, 0.19), (-0.098, 0.46), (-0.15, 0.13), (-0.049, 0.22)] and those of patient 2 were [(0.55, 0.30), (0.77, 0.21), (0.60, 0.25), (0.61, 0.31)]. The enclosed areas were 0.077 and 0.0036 for patients 1 and 2, respectively. Notably, the distances for patient 1 were greater than those for patient 2 ({0.44, 0.46, 0.37, 0.26} vs {0.23, 0.0065, 0.059}; P=.03 [Mann-Whitney U test]). CONCLUSIONS The selected machine learning model effectively predicted the actions of the surgical patients with high accuracy. The temporal distribution of action times revealed changes in behavior patterns during the perioperative phase. The proposed index may facilitate the recognition and visualization of perioperative changes in patients and differences in surgical invasiveness.
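The index described above reduces to a short computation: daily appearance probabilities of the four actions give a 2-D point, the points enclose an area, and point-to-point distances are compared between patients with a Mann-Whitney U test. The sketch below runs that arithmetic on the coordinates quoted in the abstract; the shoelace-on-ordered-points area and consecutive-day distances are plausible readings rather than the paper's exact geometric definitions, so the printed values need not match the reported ones.

```python
"""Sketch: index of behavior pattern (iBP) arithmetic on the coordinates quoted
in the abstract (area via shoelace formula, Mann-Whitney U on daily distances)."""
import numpy as np
from scipy.stats import mannwhitneyu

def ibp(p_sit, p_stand, p_lie, p_walk):
    """Daily iBP = (P(sitting) - P(standing), P(lying down) - P(walking))."""
    return np.array([p_sit - p_stand, p_lie - p_walk])

def shoelace_area(points: np.ndarray) -> float:
    """Area enclosed by the ordered daily iBP points (shoelace formula)."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def consecutive_distances(points: np.ndarray) -> np.ndarray:
    """Euclidean distance between each pair of consecutive daily points."""
    return np.linalg.norm(np.diff(points, axis=0), axis=1)

patient1 = np.array([(0.32, 0.19), (-0.098, 0.46), (-0.15, 0.13), (-0.049, 0.22)])
patient2 = np.array([(0.55, 0.30), (0.77, 0.21), (0.60, 0.25), (0.61, 0.31)])

d1, d2 = consecutive_distances(patient1), consecutive_distances(patient2)
print("areas:", round(shoelace_area(patient1), 4), round(shoelace_area(patient2), 4))
print("distances:", np.round(d1, 3), np.round(d2, 3))
print("Mann-Whitney U p-value:", round(mannwhitneyu(d1, d2).pvalue, 3))
```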
Affiliation(s)
- Kozo Nakanishi: Department of General Thoracic Surgery, National Hospital Organization Saitama Hospital, Wako Saitama, Japan
- Hidenori Goto: Department of General Thoracic Surgery, National Hospital Organization Saitama Hospital, Wako Saitama, Japan
16. Hoelzemann A, Romero JL, Bock M, Laerhoven KV, Lv Q. Hang-Time HAR: A Benchmark Dataset for Basketball Activity Recognition Using Wrist-Worn Inertial Sensors. Sensors (Basel) 2023; 23:5879. [PMID: 37447730] [DOI: 10.3390/s23135879]
Abstract
We present a benchmark dataset for evaluating physical human activity recognition methods from wrist-worn sensors, for the specific setting of basketball training, drills, and games. Basketball activities lend themselves well for measurement by wrist-worn inertial sensors, and systems that are able to detect such sport-relevant activities could be used in applications of game analysis, guided training, and personal physical activity tracking. The dataset was recorded from two teams in separate countries (USA and Germany) with a total of 24 players who wore an inertial sensor on their wrist, during both a repetitive basketball training session and a game. Particular features of this dataset include an inherent variance through cultural differences in game rules and styles as the data was recorded in two countries, as well as different sport skill levels since the participants were heterogeneous in terms of prior basketball experience. We illustrate the dataset's features in several time-series analyses and report on a baseline classification performance study with two state-of-the-art deep learning architectures.
Affiliation(s)
- Julia Lee Romero: Computer Science, University of Colorado Boulder, Boulder, CO 80302, USA
- Marius Bock: Ubiquitous Computing, University of Siegen, 57076 Siegen, Germany
- Qin Lv: Computer Science, University of Colorado Boulder, Boulder, CO 80302, USA
17. Sopidis G, Haslgrübler M, Ferscha A. Counting Activities Using Weakly Labeled Raw Acceleration Data: A Variable-Length Sequence Approach with Deep Learning to Maintain Event Duration Flexibility. Sensors (Basel) 2023; 23:5057. [PMID: 37299784] [DOI: 10.3390/s23115057]
Abstract
This paper presents a novel approach for counting hand-performed activities using deep learning and inertial measurement units (IMUs). The particular challenge in this task is finding the correct window size for capturing activities with different durations. Traditionally, fixed window sizes have been used, which occasionally result in incorrectly represented activities. To address this limitation, we propose segmenting the time series data into variable-length sequences using ragged tensors to store and process the data. Additionally, our approach utilizes weakly labeled data to simplify the annotation process and reduce the time to prepare annotated data for machine learning algorithms. Thus, the model receives only partial information about the performed activity. Therefore, we propose an LSTM-based architecture, which takes into account both the ragged tensors and the weak labels. To the best of our knowledge, no prior studies attempted counting utilizing variable-size IMU acceleration data with relatively low computational requirements using the number of completed repetitions of hand-performed activities as a label. Hence, we present the data segmentation method we employed and the model architecture that we implemented to show the effectiveness of our approach. Our results are evaluated using the Skoda public dataset for Human activity recognition (HAR) and demonstrate a repetition error of ±1 even in the most challenging cases. The findings of this study have applications and can be beneficial for various fields, including healthcare, sports and fitness, human-computer interaction, robotics, and the manufacturing industry.
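A minimal sketch of the variable-length idea using TensorFlow ragged tensors: each sequence keeps its own length, and the only supervision is a repetition count per sequence. The toy data, LSTM size, and regression-style count head are assumptions rather than the paper's architecture, and the ragged Keras input assumes a tf.keras version that supports ragged tensors (roughly TF 2.3-2.15).

```python
"""Sketch: variable-length IMU sequences stored as a ragged tensor and fed to an
LSTM that predicts a (weakly labeled) repetition count per sequence."""
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

# Variable-length tri-axial acceleration sequences (one per work segment).
sequences = [rng.normal(size=(int(n), 3)).astype("float32")
             for n in rng.integers(80, 400, size=64)]
rep_counts = rng.integers(1, 10, size=64).astype("float32")   # weak labels

x = tf.ragged.constant([s.tolist() for s in sequences], ragged_rank=1)

inputs = tf.keras.Input(shape=(None, 3), ragged=True)
h = tf.keras.layers.LSTM(32)(inputs)                  # consumes ragged sequences directly
out = tf.keras.layers.Dense(1, activation="relu")(h)  # non-negative count estimate
model = tf.keras.Model(inputs, out)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(x, rep_counts, epochs=2, verbose=0)
print("MAE on toy data:", round(model.evaluate(x, rep_counts, verbose=0)[1], 3))
```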
Affiliation(s)
- Alois Ferscha: Institute of Pervasive Computing, Johannes Kepler University, Altenberger Straße 69, 4040 Linz, Austria
18. Strommen KJ, Tørresen J, Côté-Allard U. Latent space unsupervised semantic segmentation. Front Physiol 2023; 14:1151312. [PMID: 37179829] [PMCID: PMC10166858] [DOI: 10.3389/fphys.2023.1151312]
Abstract
The development of compact and energy-efficient wearable sensors has led to an increase in the availability of biosignals. To effectively and efficiently analyze continuously recorded and multidimensional time series at scale, the ability to perform meaningful unsupervised data segmentation is an auspicious target. A common way to achieve this is to identify change-points within the time series as the segmentation basis. However, traditional change-point detection algorithms often come with drawbacks, limiting their real-world applicability. Notably, they generally rely on the complete time series being available and thus cannot be used for real-time applications. Another common limitation is that they handle the segmentation of multidimensional time series poorly or not at all. Consequently, the main contribution of this work is to propose a novel unsupervised segmentation algorithm for multidimensional time series named Latent Space Unsupervised Semantic Segmentation (LS-USS), which was designed to work easily with both online and batch data. LS-USS addresses the challenge of multivariate change-point detection by utilizing an autoencoder to learn a 1-dimensional latent space on which change-point detection is then performed. To address the challenge of real-time time series segmentation, this work introduces the Local Threshold Extraction Algorithm (LTEA) and a "batch collapse" algorithm. The "batch collapse" algorithm enables LS-USS to process streaming data by dividing it into manageable batches, while LTEA is employed to detect change-points in the time series whenever the metric computed by LS-USS exceeds a predefined threshold. By using these algorithms in combination, our approach is able to accurately segment time series data in real time, making it well suited for applications where timely detection of changes is critical. When evaluated on a variety of real-world datasets, LS-USS systematically achieves equal or better performance than the other state-of-the-art change-point detection algorithms it is compared to, in both offline and real-time settings.
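A sketch in the spirit of the approach above: an autoencoder compresses multivariate windows into a 1-dimensional latent series, and change-points are flagged where a simple novelty score on that series exceeds a threshold. The scoring rule and threshold below are assumptions standing in for LTEA, not the published algorithm.

```python
"""Sketch: 1-D latent space from an autoencoder, then threshold-based change-point
flagging on the latent series (toy data; scoring and threshold are assumptions)."""
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Toy 3-channel biosignal with a regime change at t = 1500.
a = rng.normal(0, 1, size=(1500, 3))
b = rng.normal(0, 1, size=(1500, 3)) @ np.diag([3.0, 0.3, 1.5]) + [2.0, -1.0, 0.5]
series = np.vstack([a, b]).astype("float32")

WIN = 50
windows = np.stack([series[i:i + WIN].reshape(-1)
                    for i in range(len(series) - WIN)])

# Autoencoder with a 1-dimensional bottleneck (the latent space).
inp = tf.keras.Input(shape=(windows.shape[1],))
z = tf.keras.layers.Dense(32, activation="relu")(inp)
latent = tf.keras.layers.Dense(1, name="latent")(z)
dec = tf.keras.layers.Dense(32, activation="relu")(latent)
out = tf.keras.layers.Dense(windows.shape[1])(dec)
ae = tf.keras.Model(inp, out)
ae.compile(optimizer="adam", loss="mse")
ae.fit(windows, windows, epochs=3, batch_size=64, verbose=0)

encoder = tf.keras.Model(inp, latent)
z1d = encoder.predict(windows, verbose=0).ravel()

# Simple change-point score: difference of local means on the latent series.
HALF = 100
score = np.array([abs(z1d[i - HALF:i].mean() - z1d[i:i + HALF].mean())
                  for i in range(HALF, len(z1d) - HALF)])
threshold = score.mean() + 3 * score.std()        # assumed threshold rule
change_points = np.where(score > threshold)[0] + HALF
print("first flagged index (true change near 1500):",
      change_points[0] if change_points.size else "none")
```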
Collapse
Affiliation(s)
| | - Jim Tørresen
- Department of Informatics, University of Oslo, Oslo, Norway
- RITMO, University of Oslo, Oslo, Norway
| | - Ulysse Côté-Allard
- Department of Informatics, University of Oslo, Oslo, Norway
- RITMO, University of Oslo, Oslo, Norway
| |
Collapse
|
19
|
Lee SH, Lee DW, Kim MS. A Deep Learning-Based Semantic Segmentation Model Using MCNN and Attention Layer for Human Activity Recognition. SENSORS (BASEL, SWITZERLAND) 2023; 23:2278. [PMID: 36850876 PMCID: PMC9965081 DOI: 10.3390/s23042278] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/27/2023] [Revised: 02/14/2023] [Accepted: 02/16/2023] [Indexed: 06/18/2023]
Abstract
With the development of wearable devices such as smartwatches, several studies have been conducted on the recognition of various human activities. Various types of data are used, e.g., acceleration data collected using an inertial measurement unit sensor. Most scholars segment the entire time-series data with a fixed window size before performing recognition. However, this approach has limitations in performance because the execution time of the human activity is usually unknown. Therefore, there have been many attempts to solve this problem by sliding the classification window along the time axis. In this study, we propose a method that classifies every frame rather than a window-based recognition method. For implementation, features extracted using multiple convolutional neural networks with different kernel sizes were fused. In addition, similar to the convolutional block attention module, an attention layer is applied at the channel and spatial levels to improve recognition performance. To verify the performance of the proposed model and prove the effectiveness of the proposed method for human activity recognition, evaluation experiments were performed. For comparison, models built from various basic deep learning modules were applied, as well as frame-wise classification models originally designed to recognize specific waves in electrocardiography data. As a result, the proposed model reported the best F1-score (over 0.9) for all target activities compared with the other deep learning-based recognition models. Furthermore, to verify the improvement brought by the proposed CEF method, it was compared with three types of SW (sliding window) method; the proposed method reported an F1-score 0.154 higher than SW, and for the designed model the F1-score was higher by as much as 0.184.
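The sketch below illustrates (without reproducing the authors' model) how multi-kernel Conv1D branches can be fused and every frame classified, with a simple channel-attention block standing in for the CBAM-style attention; all shapes and sizes are hypothetical.

```python
# Illustrative sketch only: Conv1D branches with different kernel sizes are fused,
# a channel-attention block re-weights the fused features, and a 1x1 convolution
# labels every frame.
import tensorflow as tf
from tensorflow.keras import layers

n_channels, n_classes = 6, 5                        # hypothetical IMU channels / activities
inputs = tf.keras.Input(shape=(None, n_channels))   # variable-length sequence

branches = [layers.Conv1D(32, k, padding="same", activation="relu")(inputs)
            for k in (3, 7, 15)]                    # multiple kernel sizes
x = layers.Concatenate()(branches)                  # fused features, 96 channels

# Channel attention: squeeze (global average pooling) -> excite (per-channel weights).
w = layers.Dense(96, activation="sigmoid")(layers.GlobalAveragePooling1D()(x))
x = layers.Lambda(lambda t: t[0] * tf.expand_dims(t[1], axis=1))([x, w])

outputs = layers.Conv1D(n_classes, 1, activation="softmax")(x)  # per-frame labels
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```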
Collapse
|
20
|
Balch JA, Ruppert MM, Shickel B, Ozrazgat-Baslanti T, Tighe PJ, Efron PA, Upchurch GR, Rashidi P, Bihorac A, Loftus TJ. Building an automated, machine learning-enabled platform for predicting post-operative complications. Physiol Meas 2023; 44:024001. [PMID: 36657179 PMCID: PMC9910093 DOI: 10.1088/1361-6579/acb4db] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2022] [Revised: 12/29/2022] [Accepted: 01/19/2023] [Indexed: 01/21/2023]
Abstract
Objective. In 2019, the University of Florida College of Medicine launched the MySurgeryRisk algorithm to predict eight major post-operative complications using automatically extracted data from the electronic health record. Approach. This project was developed in parallel with our Intelligent Critical Care Center and represents a culmination of efforts to build an efficient and accurate model for data processing and predictive analytics. Main Results and Significance. This paper discusses how our model was constructed and improved upon. We highlight the consolidation of the database, processing of fixed and time-series physiologic measurements, development and training of predictive models, and expansion of those models into different aspects of patient assessment and treatment. We end by discussing future directions of the model.
Collapse
Affiliation(s)
- Jeremy A Balch
- Intelligent Critical Care Center, University of Florida, Gainesville, FL, United States of America
- Department of Surgery, University of Florida, Gainesville, Florida, United States of America
| | - Matthew M Ruppert
- Intelligent Critical Care Center, University of Florida, Gainesville, FL, United States of America
- Department of Medicine, University of Florida, Gainesville, Florida, United States of America
| | - Benjamin Shickel
- Intelligent Critical Care Center, University of Florida, Gainesville, FL, United States of America
- Department of Medicine, University of Florida, Gainesville, Florida, United States of America
| | - Tezcan Ozrazgat-Baslanti
- Intelligent Critical Care Center, University of Florida, Gainesville, FL, United States of America
- Department of Medicine, University of Florida, Gainesville, Florida, United States of America
| | - Patrick J Tighe
- Department of Anesthesiology, University of Florida, Gainesville, Florida, United States of America
| | - Philip A Efron
- Department of Surgery, University of Florida, Gainesville, Florida, United States of America
| | - Gilbert R Upchurch
- Department of Surgery, University of Florida, Gainesville, Florida, United States of America
| | - Parisa Rashidi
- Intelligent Critical Care Center, University of Florida, Gainesville, FL, United States of America
- Department of Biomedical Engineering, University of Florida, Gainesville, Florida, United States of America
| | - Azra Bihorac
- Intelligent Critical Care Center, University of Florida, Gainesville, FL, United States of America
- Department of Medicine, University of Florida, Gainesville, Florida, United States of America
| | - Tyler J Loftus
- Intelligent Critical Care Center, University of Florida, Gainesville, FL, United States of America
- Department of Surgery, University of Florida, Gainesville, Florida, United States of America
| |
Collapse
|
21
|
Ye J, Jiang H, Zhong J. A Graph-Attention-Based Method for Single-Resident Daily Activity Recognition in Smart Homes. SENSORS (BASEL, SWITZERLAND) 2023; 23:1626. [PMID: 36772666 PMCID: PMC9921809 DOI: 10.3390/s23031626] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/24/2022] [Revised: 01/22/2023] [Accepted: 01/25/2023] [Indexed: 06/18/2023]
Abstract
In ambient-assisted living facilitated by smart home systems, the recognition of daily human activities is of great importance. It aims to infer the resident's daily activities from triggered sensor observation sequences with varying time intervals between successive readouts. This paper introduces a novel deep learning framework based on embedding technology and graph attention networks, namely the time-oriented and location-oriented graph attention (TLGAT) network. The embedding technology converts sensor observations into corresponding feature vectors. Afterward, TLGAT represents a sensor observation sequence as a fully connected graph to model the temporal correlation as well as the sensor-location correlation among sensor observations, and it enriches the feature representation of each sensor observation by receiving and weighting the other sensor observations. The experiments were conducted on two public datasets under diverse setups of sensor event sequence length. The experimental results revealed that the proposed method achieved favorable performance under diverse setups.
Collapse
Affiliation(s)
- Jiancong Ye
- Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou 511442, China
| | - Hongjie Jiang
- Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou 511442, China
| | - Junpei Zhong
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong
| |
Collapse
|
22
|
Khan YA, Imaduddin S, Singh YP, Wajid M, Usman M, Abbas M. Artificial Intelligence Based Approach for Classification of Human Activities Using MEMS Sensors Data. SENSORS (BASEL, SWITZERLAND) 2023; 23:1275. [PMID: 36772315 PMCID: PMC9919731 DOI: 10.3390/s23031275] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Revised: 01/15/2023] [Accepted: 01/18/2023] [Indexed: 06/18/2023]
Abstract
The integration of Micro-Electro-Mechanical Systems (MEMS) sensor technology in smartphones has greatly improved the capability for Human Activity Recognition (HAR). By utilizing Machine Learning (ML) techniques and data from these sensors, various human motion activities can be classified. This study performed experiments and compiled a large dataset of nine daily activities, including Laying Down, Stationary, Walking, Brisk Walking, Running, Stairs-Up, Stairs-Down, Squatting, and Cycling. Several ML models, such as the Decision Tree Classifier, Random Forest Classifier, K-Neighbors Classifier, Multinomial Logistic Regression, Gaussian Naive Bayes, and Support Vector Machine, were trained on data collected from the accelerometer, gyroscope, and magnetometer embedded in smartphones and wearable devices. The highest test accuracy of 95% was achieved using the random forest algorithm. Additionally, a custom-built Bidirectional Long Short-Term Memory (Bi-LSTM) model, a type of Recurrent Neural Network (RNN), was proposed and yielded an improved test accuracy of 98.1%. This approach differs from the traditional algorithmic human activity detection used in current wearable technologies and results in improved accuracy.
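A minimal, hypothetical Python sketch of the classical-classifier comparison stage follows (it is not the study's code, and the features and labels are random placeholders).

```python
# Illustrative sketch only: comparing several classical classifiers on windowed
# accelerometer/gyroscope/magnetometer features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X = np.random.randn(500, 36)            # e.g. mean/std/min/max per axis per sensor
y = np.random.randint(0, 9, size=500)   # nine daily activities
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [("RandomForest", RandomForestClassifier(n_estimators=200)),
                  ("DecisionTree", DecisionTreeClassifier()),
                  ("kNN", KNeighborsClassifier()),
                  ("SVM", SVC())]:
    clf.fit(X_tr, y_tr)
    print(name, round(clf.score(X_te, y_te), 3))
```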
Collapse
Affiliation(s)
- Yusuf Ahmed Khan
- Department of Electronics Engineering, ZHCET, Aligarh Muslim University, Aligarh 202002, India
| | - Syed Imaduddin
- Department of Electronics Engineering, ZHCET, Aligarh Muslim University, Aligarh 202002, India
| | - Yash Pratap Singh
- Department of Electronics Engineering, ZHCET, Aligarh Muslim University, Aligarh 202002, India
| | - Mohd Wajid
- Department of Electronics Engineering, ZHCET, Aligarh Muslim University, Aligarh 202002, India
| | - Mohammed Usman
- Department of Electrical Engineering, King Khalid University, Abha 61411, Saudi Arabia
| | - Mohamed Abbas
- Electrical Engineering Department, College of Engineering, King Khalid University, Abha 61421, Saudi Arabia
- Electronics and Communication Department, College of Engineering, Delta University for Science and Technology, Gamasa 35712, Egypt
| |
Collapse
|
23
|
Wrapper-based deep feature optimization for activity recognition in the wearable sensor networks of healthcare systems. Sci Rep 2023; 13:965. [PMID: 36653370 PMCID: PMC9846703 DOI: 10.1038/s41598-022-27192-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Accepted: 12/28/2022] [Indexed: 01/19/2023] Open
Abstract
The Human Activity Recognition (HAR) problem leverages pattern recognition to classify physical human activities as they are captured by several sensor modalities. Remote monitoring of an individual's activities has gained importance due to the reduction in travel and physical activities during the pandemic. Research on HAR enables one person to either remotely monitor or recognize another person's activity via the ubiquitous mobile device or by using sensor-based Internet of Things (IoT). Our proposed work focuses on the accurate classification of daily human activities from both accelerometer and gyroscope sensor data after conversion into spectrogram images. Feature extraction then leverages the pre-trained weights of two popular and efficient transfer-learning convolutional neural network models. Finally, a wrapper-based feature selection method is employed to select the optimal feature subset, which both reduces the training time and improves the final classification performance. The proposed HAR model has been tested on three benchmark datasets, namely HARTH, KU-HAR, and HuGaDB, and has achieved accuracies of 88.89%, 97.97%, and 93.82%, respectively. It is to be noted that the proposed HAR model achieves improvements of about 21%, 20%, and 6% in overall classification accuracy while utilizing only 52%, 45%, and 60% of the original feature set for the HuGaDB, KU-HAR, and HARTH datasets, respectively. This proves the effectiveness of our proposed wrapper-based feature selection HAR methodology.
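For illustration only, the sketch below shows a generic wrapper-based feature selector (a stand-in for the paper's method) applied to hypothetical pooled deep features.

```python
# Illustrative sketch only: a wrapper-based selector that searches for a reduced
# feature subset by repeatedly refitting the downstream classifier.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.ensemble import RandomForestClassifier

X = np.random.randn(300, 64)             # e.g. features pooled from pre-trained CNNs
y = np.random.randint(0, 6, size=300)    # six activity classes

selector = SequentialFeatureSelector(
    RandomForestClassifier(n_estimators=50),
    n_features_to_select=32,              # keep roughly half of the features
    direction="forward", cv=3, n_jobs=-1)
selector.fit(X, y)
print("selected feature indices:", np.flatnonzero(selector.get_support()))
```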
Collapse
|
24
|
Kapoor B, Nagpal B, Jain PK, Abraham A, Gabralla LA. Epileptic Seizure Prediction Based on Hybrid Seek Optimization Tuned Ensemble Classifier Using EEG Signals. SENSORS (BASEL, SWITZERLAND) 2022; 23:423. [PMID: 36617019 PMCID: PMC9824897 DOI: 10.3390/s23010423] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Revised: 12/15/2022] [Accepted: 12/21/2022] [Indexed: 06/17/2023]
Abstract
Visual analysis of an electroencephalogram (EEG) by medical professionals is highly time-consuming, and the information is difficult to process. To overcome these limitations, several automated seizure detection strategies have been introduced by combining signal processing and machine learning. This paper proposes a hybrid optimization-controlled ensemble classifier comprising an AdaBoost classifier, a random forest (RF) classifier, and a decision tree (DT) classifier for the automatic analysis of an EEG signal dataset to predict epileptic seizures. The EEG signal is initially pre-processed to make it suitable for feature selection. The feature selection process receives the alpha, beta, delta, theta, and gamma wave data from the EEG, and the significant features, such as statistical features, wavelet features, and entropy-based features, are extracted by the proposed hybrid seek optimization algorithm. These extracted features are fed to the proposed ensemble classifier, which produces the predicted output. The hybrid seek optimization technique, developed by combining corvid and gregarious search-agent characteristics, is used to tune the fusion parameters of the ensemble classifier. The suggested technique's accuracy, sensitivity, and specificity are determined to be 96.6120%, 94.6736%, and 91.3684%, respectively, for the CHB-MIT database, demonstrating its effectiveness for early seizure prediction. The accuracy, sensitivity, and specificity of the proposed technique are 95.3090%, 93.1766%, and 90.0654%, respectively, for the Siena Scalp database, again demonstrating its efficacy in the early seizure prediction process.
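A hypothetical Python sketch of the ensemble idea follows; in the paper the fusion weights are tuned by the hybrid seek optimizer, whereas here they are fixed by hand purely for illustration.

```python
# Illustrative sketch only: an AdaBoost / random forest / decision tree soft-voting
# ensemble with hand-picked weights standing in for optimizer-tuned fusion parameters.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X = np.random.randn(400, 20)              # e.g. statistical / wavelet / entropy features
y = np.random.randint(0, 2, size=400)     # seizure vs. non-seizure segments

ensemble = VotingClassifier(
    estimators=[("ada", AdaBoostClassifier()),
                ("rf", RandomForestClassifier(n_estimators=200)),
                ("dt", DecisionTreeClassifier(max_depth=5))],
    voting="soft",
    weights=[1.0, 1.5, 0.5])               # stand-ins for optimizer-tuned fusion weights
print(cross_val_score(ensemble, X, y, cv=5).mean())
```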
Collapse
Affiliation(s)
- Bhaskar Kapoor
- Ambedkar Institute of Advanced Communication Technologies & Research (AIACT&R), Guru Gobind Singh Indraprastha University, New Delhi 110078, India
| | - Bharti Nagpal
- NSUT (East Campus) (Formerly AIACT&R), Delhi 110031, India
| | - Praphula Kumar Jain
- Department of Computer Engineering & Applications, GLA University, Mathura 281406, India
| | - Ajith Abraham
- Machine Intelligence Research Labs (MIR Labs), Auburn, WA 98071, USA
| | - Lubna Abdelkareim Gabralla
- Department of Computer Science and Information Technology, College of Applied, Princess Nourah bint Abdulrahman University, Riyadh 11564, Saudi Arabia
| |
Collapse
|
25
|
Gad G, Fadlullah Z. Federated Learning via Augmented Knowledge Distillation for Heterogenous Deep Human Activity Recognition Systems. SENSORS (BASEL, SWITZERLAND) 2022; 23:6. [PMID: 36616609 PMCID: PMC9823596 DOI: 10.3390/s23010006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Revised: 12/11/2022] [Accepted: 12/16/2022] [Indexed: 06/17/2023]
Abstract
Deep learning-based Human Activity Recognition (HAR) systems have received considerable interest for health monitoring and activity tracking on wearable devices. The availability of large and representative datasets is often a requirement for training accurate deep learning models. To keep private data on users' devices while still utilizing them to train deep learning models on large datasets, Federated Learning (FL) was introduced as an inherently private distributed training paradigm. However, standard FL (FedAvg) lacks the capability to train heterogeneous model architectures. In this paper, we propose Federated Learning via Augmented Knowledge Distillation (FedAKD) for distributed training of heterogeneous models. FedAKD is evaluated on two HAR datasets: a waist-mounted tabular HAR dataset and a wrist-mounted time-series HAR dataset. FedAKD is more flexible than standard federated learning (FedAvg) as it enables collaboration among heterogeneous deep learning models with various learning capacities. In the considered FL experiments, the communication overhead under FedAKD is 200 times lower than that of FL methods that communicate models' gradients or weights. Relative to other model-agnostic FL methods, results show that FedAKD boosts client performance gains by up to 20 percent. Furthermore, FedAKD is shown to be relatively more robust under statistically heterogeneous scenarios.
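The following sketch is a generic knowledge-distillation-style federated round, not the FedAKD algorithm itself: clients with different architectures exchange soft predictions on a shared public set instead of weights. All data, sizes, and the aggregation rule are hypothetical simplifications.

```python
# Illustrative sketch only: one distillation-based federated round between two
# heterogeneous clients that share predictions, not parameters.
import numpy as np
import tensorflow as tf

def make_client(hidden_units):             # heterogeneous client architectures
    return tf.keras.Sequential([tf.keras.layers.Dense(hidden_units, activation="relu"),
                                tf.keras.layers.Dense(6, activation="softmax")])

public_x = np.random.randn(256, 40).astype("float32")   # shared (unlabeled) data
clients = [make_client(32), make_client(128)]

# One round: average the clients' soft predictions, then distill into each client.
soft = np.mean([c.predict(public_x, verbose=0) for c in clients], axis=0)
for c in clients:
    c.compile(optimizer="adam", loss=tf.keras.losses.KLDivergence())
    c.fit(public_x, soft, epochs=1, verbose=0)
```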
Collapse
Affiliation(s)
- Gad Gad
- Department of Computer Science, Lakehead University, Thunder Bay, ON P7B 5E1, Canada
| | - Zubair Fadlullah
- Department of Computer Science, Lakehead University, Thunder Bay, ON P7B 5E1, Canada
- Thunder Bay Regional Health Research Institute (TBRHRI), Thunder Bay, ON P7B 7A5, Canada
| |
Collapse
|
26
|
Celik Y, Aslan MF, Sabanci K, Stuart S, Woo WL, Godfrey A. Improving Inertial Sensor-Based Activity Recognition in Neurological Populations. SENSORS (BASEL, SWITZERLAND) 2022; 22:9891. [PMID: 36560259 PMCID: PMC9783358 DOI: 10.3390/s22249891] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/24/2022] [Revised: 12/14/2022] [Accepted: 12/14/2022] [Indexed: 06/17/2023]
Abstract
Inertial sensor-based human activity recognition (HAR) has a range of healthcare applications as it can indicate the overall health status or functional capabilities of people with impaired mobility. Typically, artificial intelligence models achieve high recognition accuracies when trained with rich and diverse inertial datasets. However, obtaining such datasets may not be feasible in neurological populations due to, e.g., impaired patient mobility to perform many daily activities. This study proposes a novel framework to overcome the challenge of creating rich and diverse datasets for HAR in neurological populations. The framework produces images from numerical inertial time-series data (initial state) and then artificially augments the number of produced images (enhanced state) to achieve a larger dataset. Here, we used convolutional neural network (CNN) architectures with image input. In addition, CNNs enable transfer learning, which allows limited datasets to benefit from models trained with big data. Initially, two benchmarked public datasets were used to verify the framework. Afterward, the approach was tested on limited local datasets of healthy subjects (HS), a Parkinson's disease (PD) population, and stroke survivors (SS) to further investigate validity. The experimental results show that when data augmentation is applied, recognition accuracies increased in HS, SS, and PD by 25.6%, 21.4%, and 5.8%, respectively, compared with the no-augmentation state. In addition, data augmentation contributes to better detection of stair ascent and stair descent by 39.1% and 18.0%, respectively, in the limited local datasets. Findings also suggest that CNN architectures with a small number of deep layers can achieve high accuracy. This study has the potential to reduce the burden on participants and researchers where only limited datasets can be accrued.
Collapse
Affiliation(s)
- Yunus Celik
- Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne NE1 8ST, UK
| | - M. Fatih Aslan
- Department of Electrical and Electronics Engineering, Karamanoglu Mehmetbey University, Karaman 70100, Turkey
| | - Kadir Sabanci
- Department of Electrical and Electronics Engineering, Karamanoglu Mehmetbey University, Karaman 70100, Turkey
| | - Sam Stuart
- Department of Sport, Exercise and Rehabilitation, Northumbria University, Newcastle upon Tyne NE1 8ST, UK
| | - Wai Lok Woo
- Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne NE1 8ST, UK
| | - Alan Godfrey
- Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne NE1 8ST, UK
| |
Collapse
|
27
|
Jovanovic M, Mitrov G, Zdravevski E, Lameski P, Colantonio S, Kampel M, Tellioglu H, Florez-Revuelta F. Ambient Assisted Living: Scoping Review of Artificial Intelligence Models, Domains, Technology, and Concerns. J Med Internet Res 2022; 24:e36553. [PMID: 36331530 PMCID: PMC9675018 DOI: 10.2196/36553] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Revised: 08/15/2022] [Accepted: 09/23/2022] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Ambient assisted living (AAL) is a common name for various artificial intelligence (AI)-infused applications and platforms that support users in need across multiple activities, from health to daily living. These systems use different approaches to learn about their users and make automated decisions, known as AI models, for personalizing their services and increasing outcomes. Given the numerous systems developed and deployed for people with different needs, health conditions, and dispositions toward the technology, it is critical to obtain clear and comprehensive insights concerning AI models used, along with their domains, technology, and concerns, to identify promising directions for future work. OBJECTIVE This study aimed to provide a scoping review of the literature on AI models in AAL. In particular, we analyzed specific AI models used in AAL systems, the target domains of the models, the technology using the models, and the major concerns from the end-user perspective. Our goal was to consolidate research on this topic and inform end users, health care professionals and providers, researchers, and practitioners in developing, deploying, and evaluating future intelligent AAL systems. METHODS This study was conducted as a scoping review to identify, analyze, and extract the relevant literature. It used a natural language processing toolkit to retrieve the article corpus for an efficient and comprehensive automated literature search. Relevant articles were then extracted from the corpus and analyzed manually. This review included 5 digital libraries: IEEE, PubMed, Springer, Elsevier, and MDPI. RESULTS We included a total of 108 articles. The annual distribution of relevant articles showed a growing trend for all categories from January 2010 to July 2022. The AI models mainly used unsupervised and semisupervised approaches. The leading models are deep learning, natural language processing, instance-based learning, and clustering. Activity assistance and recognition were the most common target domains of the models. Ambient sensing, mobile technology, and robotic devices mainly implemented the models. Older adults were the primary beneficiaries, followed by patients and frail persons of various ages. Availability was a top beneficiary concern. CONCLUSIONS This study presents the analytical evidence of AI models in AAL and their domains, technologies, beneficiaries, and concerns. Future research on intelligent AAL should involve health care professionals and caregivers as designers and users, comply with health-related regulations, improve transparency and privacy, integrate with health care technological infrastructure, explain their decisions to the users, and establish evaluation metrics and design guidelines. TRIAL REGISTRATION PROSPERO (International Prospective Register of Systematic Reviews) CRD42022347590; https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42022347590.
Collapse
Affiliation(s)
- Mladjan Jovanovic
- Department of Computer Science, Singidunum University, Belgrade, Serbia
| | - Goran Mitrov
- Faculty of Computer Science and Engineering, University Saints Cyril and Methodius, Skopje, North Macedonia
| | - Eftim Zdravevski
- Faculty of Computer Science and Engineering, University Saints Cyril and Methodius, Skopje, North Macedonia
| | - Petre Lameski
- Faculty of Computer Science and Engineering, University Saints Cyril and Methodius, Skopje, North Macedonia
| | - Sara Colantonio
- Signals & Images Lab, Institute of Information Science and Technologies, National Research Council of Italy, Pisa, Italy
| | - Martin Kampel
- Faculty of Informatics, Vienna University of Technology, Vienna, Austria
| | - Hilda Tellioglu
- Faculty of Informatics, Vienna University of Technology, Vienna, Austria
| | | |
Collapse
|
28
|
Wang X, Zhang H, Tian W. Impact of assistive devices use on levels of depression in older adults: Evidence from China. HEALTH & SOCIAL CARE IN THE COMMUNITY 2022; 30:e4628-e4638. [PMID: 35712791 DOI: 10.1111/hsc.13869] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 05/06/2022] [Accepted: 05/28/2022] [Indexed: 06/15/2023]
Abstract
The purpose of this study is to evaluate the effect of assistive devices on the level of depression among older adults. Using data from the 2015 and 2018 waves of the China Health and Retirement Longitudinal Study (CHARLS), we analysed this effect through a PSM-DID model and verified the mechanism of the effect through Hayes' mediating effect model. The results showed that assistive devices increased depression levels in older adults. Moreover, there were significant differences among different groups of older adults: the use of assistive devices had a deeper impact on depression levels among older adults in developed areas, women, people under 75 years old, and socially active older people. Differences in the type and number of assistive devices used also affect the level of depression in older people. Furthermore, assistive device use in older adults increases depression levels by decreasing health satisfaction. This study provides new evidence on the relationship between the use of assistive devices and depression levels in older adults. Meanwhile, our research illustrates the importance of developing products and services with age-friendly technology.
Collapse
Affiliation(s)
- Xiaoyu Wang
- School of Social Development and Public Policy, Beijing Normal University, Beijing, China
| | - Huan Zhang
- School of Social Development and Public Policy, Beijing Normal University, Beijing, China
| | - Wenze Tian
- College of Politics and Public Administration, Qingdao University, Qingdao, China
| |
Collapse
|
29
|
Mapping features and patterns of accelerometry data on human movement in different age groups and associated health problems: A cross-sectional study. Exp Gerontol 2022; 168:111949. [PMID: 36089174 DOI: 10.1016/j.exger.2022.111949] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 09/01/2022] [Accepted: 09/03/2022] [Indexed: 11/21/2022]
Abstract
PURPOSE Human movement is considered one of the important factors for maintaining an independent life. Individuals in different age groups have different locomotion patterns, and some health conditions can affect, or be affected by, mobility changes. Few studies clarify or present data about the influence of different ages and biopsychosocial factors on accelerometry features. The aim of this study was to identify characteristics and variables in the frequency signals for different age groups and their relationship with associated health conditions, in raw accelerometry data obtained from a triaxial accelerometer worn during 7 days of activities of daily living. METHOD A cross-sectional study was conducted based on the database of the first evaluations of the Epidemiological Study of Movement (EPIMOV) cohort. Frequency, signal amplitude, and entropy accelerometry features of EPIMOV participants who used a triaxial accelerometer for 7 days were extracted. Sociodemographic, clinical, anthropometric, and physical activity assessments were also performed. Two-way ANOVA was performed to compare accelerometry features across age groups. A series of stepwise multiple regressions were performed on accelerometry variables to analyze their relationships with demographic, anthropometric, and cardiovascular risk variables. RESULTS The sample consisted mostly of female, white participants with a high-school education. The most prevalent cardiovascular risk factors were sedentary behavior and obesity. Analysis of the accelerometry variables showed that the entropy feature and the counts decrease in the older-adult group, while the harmonic components of gait (frequency × amplitude) increase in the older-adult group. Regarding the amplitude feature, there were no significant differences between the groups. Through stepwise multiple linear regression, it was observed that demographic, anthropometric, and cardiovascular risk factors are associated with most accelerometry variables. CONCLUSION The results confirm that human movement can be influenced by age, sex, demographic, anthropometric, and cardiovascular risk factors. Future studies and clinical analyses can use the methods proposed in this research to adjust movement patterns for sex and different age groups, thus obtaining new interpretations of human movement.
Collapse
|
30
|
Kyamakya K, Tavakkoli V, McClatchie S, Arbeiter M, Scholte van Mast BG. A Comprehensive "Real-World Constraints"-Aware Requirements Engineering Related Assessment and a Critical State-of-the-Art Review of the Monitoring of Humans in Bed. SENSORS (BASEL, SWITZERLAND) 2022; 22:6279. [PMID: 36016040 PMCID: PMC9414192 DOI: 10.3390/s22166279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Revised: 08/12/2022] [Accepted: 08/17/2022] [Indexed: 06/15/2023]
Abstract
Currently, abnormality detection and/or prediction is a very active topic. In this paper, we address it in the context of activity monitoring of a human in bed. This paper presents a comprehensive formulation of a requirements engineering dossier for a monitoring system of a "human in bed" for abnormal behavior detection and forecasting. Practical and real-world constraints and concerns were identified and taken into consideration in the requirements dossier. A comprehensive and holistic discussion of the anomaly concept was conducted and contributed to laying the groundwork for a realistic specification book for the anomaly detection system. Some systems engineering issues were also briefly addressed, e.g., verification and validation. A structured critical review of the relevant literature led to identifying four major approaches of interest. These four approaches were evaluated from the perspective of the requirements dossier. It was thereby clearly demonstrated that the approach integrating graph networks and advanced deep-learning schemes (Graph-DL) is the one capable of fully fulfilling the challenging issues expressed in the real-world-conditions-aware specification book. Nevertheless, to meet immediate market needs, systems based on advanced statistical methods, after a series of adaptations, already ensure and satisfy the important requirements related to, e.g., low cost, solid data security, and a fully embedded and self-sufficient implementation. To conclude, some recommendations regarding system architecture and overall systems engineering were formulated.
Collapse
Affiliation(s)
- Kyandoghere Kyamakya
- Institute of Smart Systems Technologies, Universitaet Klagenfurt, 9020 Klagenfurt, Austria
| | - Vahid Tavakkoli
- Institute of Smart Systems Technologies, Universitaet Klagenfurt, 9020 Klagenfurt, Austria
| | | | | | | |
Collapse
|
31
|
Azadi B, Haslgrübler M, Anzengruber-Tanase B, Grünberger S, Ferscha A. Alpine Skiing Activity Recognition Using Smartphone's IMUs. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22155922. [PMID: 35957479 PMCID: PMC9371385 DOI: 10.3390/s22155922] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/04/2022] [Revised: 08/01/2022] [Accepted: 08/05/2022] [Indexed: 06/12/2023]
Abstract
Many studies on alpine skiing are limited to a few gates or to data collected under controlled conditions. In contrast, it is more practical to have a sensor setup and a fast algorithm that can work in any situation, collect data, and distinguish alpine skiing activities for further analysis. This study aims to detect alpine skiing activities via smartphone inertial measurement units (IMUs) in an unsupervised manner that is feasible for daily use. Data from full skiing sessions of novice to expert skiers were collected under varied conditions using smartphone IMUs. The recorded data are preprocessed and analyzed using unsupervised algorithms to distinguish skiing activities from the other possible activities during a day of skiing. We employed a windowing strategy to extract features from different combinations of window size and sliding rate. To reduce the dimensionality of the extracted features, we used Principal Component Analysis. Three unsupervised techniques were examined and compared: KMeans, Ward's method, and the Gaussian Mixture Model. The results show that unsupervised learning can detect alpine skiing activities accurately, independent of skiers' skill level, in any condition. Among the studied methods and settings, the best model had 99.25% accuracy.
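A short, hypothetical Python sketch of such an unsupervised pipeline follows (window features are random placeholders, not the study's data).

```python
# Illustrative sketch only: standardize window features, reduce with PCA, and compare
# the three unsupervised models named in the abstract.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.mixture import GaussianMixture

features = np.random.randn(1000, 24)           # e.g. per-window statistics from the IMU
X = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(features))

labels = {
    "kmeans": KMeans(n_clusters=2, n_init=10).fit_predict(X),
    "ward": AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X),
    "gmm": GaussianMixture(n_components=2).fit(X).predict(X),
}
for name, lab in labels.items():
    print(name, np.bincount(lab))              # cluster sizes: skiing vs. other activity
```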
Collapse
Affiliation(s)
- Behrooz Azadi
- Pro2Future GmbH, Altenberger Strasse 69, 4040 Linz, Austria
| | | | | | - Stefan Grünberger
- Institute of Pervasive Computing, Johannes Kepler University, Altenberger Straße 69, 4040 Linz, Austria
| | - Alois Ferscha
- Institute of Pervasive Computing, Johannes Kepler University, Altenberger Straße 69, 4040 Linz, Austria
| |
Collapse
|
32
|
|
33
|
Klingenberg A, Purrucker V, Schuler W, Ganapathy N, Spicher N, Deserno TM. Human activity recognition from textile electrocardiograms. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:3434-3437. [PMID: 36086499 DOI: 10.1109/embc48229.2022.9871210] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Textile sensors for physiological signals bear the potential of unobtrusive and continuous application in daily life. Recently, textile electrocardiography (ECG) sensors became available, which are of particular interest for physical activity monitoring due to the strong effect of exercise on heart rate. In this work, we evaluate the effectiveness of a single-lead ECG signal acquired using a non-medical-grade ECG shirt for human activity recognition (HAR). Healthy volunteers (N=10) wore the shirt during four different activities (sleeping, sitting, walking, running) in an uncontrolled environment, and ECG data (256 Hz, 12 bit) were stored and manually checked, with unusable segments (e.g., no sensor contact) removed, resulting in a total of 228 hours of recording. Signals were split into short segments of different durations (10, 30, 60 s), transformed into spectrogram images using the Short-time Fourier Transform (STFT), and fed into a state-of-the-art convolutional neural network (CNN). The best configuration results in an F1-score of 73% and an accuracy of 77% on the test set. Results with leave-one-subject-out cross-validation show F1-scores ranging from 41% to 80%. Thus, a single-lead, wearable-generated ECG has informative value for HAR to a certain extent. In future work, we aim to use more sensors of the smart shirt and sensor fusion.
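A minimal Python sketch of the STFT-spectrogram-plus-CNN idea follows; the ECG segment is random placeholder data and the small CNN only stands in for the state-of-the-art architecture used in the paper.

```python
# Illustrative sketch only: a fixed-length single-lead ECG segment is turned into a
# spectrogram with the short-time Fourier transform and classified by a small CNN.
import numpy as np
import tensorflow as tf
from scipy.signal import stft

fs = 256                                              # sampling rate given in the abstract
segment = np.random.randn(30 * fs)                    # one hypothetical 30 s segment
_, _, Z = stft(segment, fs=fs, nperseg=128)
spec = np.log1p(np.abs(Z))[..., np.newaxis]           # (freq, time, 1) image

model = tf.keras.Sequential([
    tf.keras.Input(shape=spec.shape),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),   # sleeping / sitting / walking / running
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
print(model.predict(spec[np.newaxis], verbose=0))
```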
Collapse
|
34
|
Celik Y, Stuart S, Woo WL, Pearson LT, Godfrey A. Exploring human activity recognition using feature level fusion of inertial and electromyography data. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:1766-1769. [PMID: 36086572 DOI: 10.1109/embc48229.2022.9870909] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Wearables are objective tools for human activity recognition (HAR). Advances in wearables enable synchronized multi-sensing within a single device. This has resulted in studies investigating the use of single or multiple wearable sensor modalities for HAR. Some studies use inertial data, others use surface electromyography (sEMG) from multiple muscles, and different post-processing approaches are applied. Yet, questions remain about accuracies relating to, e.g., multi-modal approaches and sEMG post-processing. Here, we explored how inertial and sEMG data could be efficiently combined with machine learning and used with post-processing methods for better HAR. This study aims to recognize four basic daily-life activities: walking, standing, stair ascent, and stair descent. First, we created a new feature vector based on domain knowledge gained from previous mobility studies. Then, a feature-level data fusion approach was used to combine inertial and sEMG data. Finally, two supervised learning classifiers (Support Vector Machine, SVM, and k-Nearest Neighbors, kNN) were tested with 5-fold cross-validation. Results show that the use of inertial data with sEMG increased overall accuracy by 3.5% (SVM) and 6.3% (kNN). Extracting features from linear envelopes instead of bandpass-filtered sEMG improves overall HAR accuracy for both classifiers. Clinical Relevance: Post-processing of sEMG signals can improve the performance of multimodal HAR.
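The sketch below illustrates the general recipe with random placeholder signals (it is not the study's code): a linear envelope of sEMG (rectify + low-pass), feature-level fusion with accelerometer features, and 5-fold cross-validation of SVM and kNN.

```python
# Illustrative sketch only: linear-envelope sEMG features fused with IMU features and
# evaluated with SVM and kNN under 5-fold cross-validation.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def linear_envelope(emg, fs=1000, cutoff=6):
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, np.abs(emg))                 # rectify, then low-pass filter

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):                                   # 200 hypothetical 1 s windows
    emg = rng.standard_normal(1000)                    # 1 kHz sEMG window
    acc = rng.standard_normal((100, 3))                # 100 Hz accelerometer window
    env = linear_envelope(emg)
    feats = np.concatenate([[env.mean(), env.std()], acc.mean(0), acc.std(0)])
    X.append(feats)
    y.append(rng.integers(0, 4))                       # four activities
X, y = np.array(X), np.array(y)

for name, clf in [("SVM", SVC()), ("kNN", KNeighborsClassifier())]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```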
Collapse
|
35
|
Issa ME, Helmi AM, Al-Qaness MAA, Dahou A, Abd Elaziz M, Damaševičius R. Human Activity Recognition Based on Embedded Sensor Data Fusion for the Internet of Healthcare Things. Healthcare (Basel) 2022; 10:healthcare10061084. [PMID: 35742136 PMCID: PMC9222808 DOI: 10.3390/healthcare10061084] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 06/05/2022] [Accepted: 06/09/2022] [Indexed: 12/31/2022] Open
Abstract
Nowadays, the emerging information technologies in smart handheld devices are motivating the research community to make use of the embedded sensors in such devices for healthcare purposes. In particular, inertial measurement sensors such as accelerometers and gyroscopes embedded in smartphones and smartwatches can provide sensory data fusion for human activities and gestures. Thus, the concepts of the Internet of Healthcare Things (IoHT) paradigm can be applied to handle such sensory data and maximize the benefits of collecting and analyzing them. The application areas include, but are not restricted to, the rehabilitation of elderly people, fall detection, smoking control, sports exercises, and monitoring of daily life activities. In this work, a public dataset collected using two smartphones (in pocket and wrist positions) is considered for IoHT applications. Three-dimensional inertia signals of thirteen timestamped human activities such as Walking, Walking Upstairs, Walking Downstairs, Writing, Smoking, and others are registered. Here, an efficient human activity recognition (HAR) model is presented based on efficient handcrafted features and Random Forest as a classifier. Simulation results confirm the superiority of the applied model over others introduced in the literature for the same dataset. Moreover, different approaches to evaluating such models are considered, as well as implementation issues. The accuracy of the current model reaches 98.7% on average. The model's performance is also verified using the WISDM v1 dataset.
Collapse
Affiliation(s)
- Mohamed E. Issa
- Computer and Systems Engineering Department, Faculty of Engineering, Zagazig University, Zagazig 44519, Egypt; (M.E.I.); (A.M.H.)
| | - Ahmed M. Helmi
- Computer and Systems Engineering Department, Faculty of Engineering, Zagazig University, Zagazig 44519, Egypt; (M.E.I.); (A.M.H.)
- College of Engineering and Information Technology, Buraydah Private Colleges, Buraydah 51418, Saudi Arabia
| | - Mohammed A. A. Al-Qaness
- State Key Laboratory for Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China
- Faculty of Engineering, Sana’a University, Sana’a 12544, Yemen
- Correspondence: (M.A.A.A.-Q.); (R.D.)
| | - Abdelghani Dahou
- LDDI Laboratory, Faculty of Science and Technology, University of Ahmed DRAIA, Adrar 01000, Algeria;
| | - Mohamed Abd Elaziz
- Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt;
- Artificial Intelligence Research Center (AIRC), College of Engineering and Information Technology, Ajman University, Ajman 346, United Arab Emirates
- Department of Mathematics, Faculty of Science, Zagazig University, Zagazig 44519, Egypt
| | - Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
- Correspondence: (M.A.A.A.-Q.); (R.D.)
| |
Collapse
|
36
|
Kim YW, Cho WH, Kim KS, Lee S. Inertial-Measurement-Unit-Based Novel Human Activity Recognition Algorithm Using Conformer. SENSORS 2022; 22:s22103932. [PMID: 35632341 PMCID: PMC9144209 DOI: 10.3390/s22103932] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 05/19/2022] [Accepted: 05/20/2022] [Indexed: 02/01/2023]
Abstract
Inertial-measurement-unit (IMU)-based human activity recognition (HAR) studies have improved their performance owing to the latest classification models. In this study, the conformer, which is a state-of-the-art (SOTA) model in the field of speech recognition, is introduced to HAR to improve the performance of the transformer-based HAR model. The transformer model has a multi-head self-attention structure that can extract temporal dependency well, similar to recurrent neural network (RNN)-series models, while having higher computational efficiency than the RNN series. However, recent HAR studies have shown good performance by combining RNN-series and convolutional neural network (CNN) models. Therefore, the performance of transformer-based HAR can be improved by adding a CNN layer that extracts local features well. The model that incorporates these points is the conformer-based model. To evaluate the proposed model, the WISDM, UCI-HAR, and PAMAP2 datasets were used. The synthetic minority oversampling technique was used as the data augmentation algorithm to improve the datasets. In the experiments, the conformer-based HAR model showed better performance than the baseline models: the transformer-based and 1D-CNN HAR models. Moreover, the performance of the proposed algorithm was superior to that of algorithms proposed in recent similar studies that do not use RNN series.
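A brief, hypothetical Python sketch of the class-rebalancing step follows (it only illustrates SMOTE from imbalanced-learn applied before model training, not the conformer itself).

```python
# Illustrative sketch only: rebalancing imbalanced HAR classes with SMOTE before training.
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

X = np.random.randn(600, 50)                      # flattened or extracted window features
y = np.array([0] * 400 + [1] * 150 + [2] * 50)    # imbalanced activity classes

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))           # minority classes are oversampled
```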
Collapse
Affiliation(s)
- Yeon-Wook Kim
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea; (Y.-W.K.); (W.-H.C.)
| | - Woo-Hyeong Cho
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea; (Y.-W.K.); (W.-H.C.)
| | - Kyu-Sung Kim
- Department of Otorhinolaryngology, Inha University Hospital, Incheon 22332, Korea;
| | - Sangmin Lee
- Department of Electrical and Computer Engineering, Inha University, Incheon 22212, Korea; (Y.-W.K.); (W.-H.C.)
- Department of Smart Engineering Program in Biomedical Science & Engineering, Inha University, Incheon 22212, Korea
- Correspondence: ; Tel.: +82-32-860-7420
| |
Collapse
|
37
|
Brard R, Bellanger L, Chevreuil L, Doistau F, Drouin P, Stamm A. A Novel Walking Activity Recognition Model for Rotation Time Series Collected by a Wearable Sensor in a Free-Living Environment. SENSORS (BASEL, SWITZERLAND) 2022; 22:3555. [PMID: 35591247 PMCID: PMC9101770 DOI: 10.3390/s22093555] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Revised: 05/03/2022] [Accepted: 05/05/2022] [Indexed: 11/16/2022]
Abstract
Solutions to assess walking deficiencies are widespread and largely used in healthcare. Wearable sensors are particularly appealing, as they offer the possibility to monitor gait in everyday life, outside a facility in which the context of evaluation biases the measure. While some wearable sensors are powerful enough to integrate complex walking activity recognition models, non-invasive lightweight sensors do not always have the computing or memory capacity to run them. In this paper, we propose a walking activity recognition model that offers a viable solution to this problem for any wearable sensors that measure rotational motion of body parts. Specifically, the model was trained and tuned using data collected by a motion sensor in the form of a unit quaternion time series recording the hip rotation over time. This time series was then transformed into a real-valued time series of geodesic distances between consecutive quaternions. Moving average and moving standard deviation versions of this time series were fed to standard machine learning classification algorithms. To compare the different models, we used metrics to assess classification performance (precision and accuracy) while maintaining the detection prevalence at the level of the prevalence of walking activities in the data, as well as metrics to assess change point detection capability and computation time. Our results suggest that the walking activity recognition model with a decision tree classifier yields the best compromise in terms of precision and computation time. The sensor that was used had purposely low computing and memory capacity so that reported performances can be thought of as the lower bounds of what can be achieved. Walking activity recognition is performed online, i.e., on-the-fly, which further extends the range of applicability of our model to sensors with very low memory capacity.
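A minimal Python sketch of the described pipeline follows, using random placeholder data: quaternions are converted to consecutive geodesic distances, moving-average and moving-std features are derived, and a decision tree classifies them.

```python
# Illustrative sketch only: quaternion geodesic-distance features plus a decision tree,
# following the pipeline outlined in the abstract. Labels here are random placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def geodesic(q1, q2):
    # angle between unit quaternions; abs() handles the double cover (q and -q are equal)
    return 2 * np.arccos(np.clip(np.abs(np.sum(q1 * q2, axis=-1)), 0.0, 1.0))

rng = np.random.default_rng(0)
q = rng.standard_normal((5000, 4))
q /= np.linalg.norm(q, axis=1, keepdims=True)       # hypothetical hip-rotation quaternions

d = geodesic(q[:-1], q[1:])                         # real-valued distance series
win = 50
mov_avg = np.convolve(d, np.ones(win) / win, mode="valid")
mov_std = np.array([d[i:i + win].std() for i in range(len(d) - win + 1)])

X = np.column_stack([mov_avg, mov_std])
y = rng.integers(0, 2, size=len(X))                 # placeholder walking / non-walking labels
print(DecisionTreeClassifier(max_depth=4).fit(X, y).score(X, y))
```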
Collapse
Affiliation(s)
- Raphaël Brard
- Department of Mathematics Jean Leray, UMR CNRS 6629, Nantes University, 44322 Nantes, France; (R.B.); (L.B.); (P.D.)
- UmanIT, 13 Place Sophie Trébuchet, 44000 Nantes, France; (L.C.); (F.D.)
| | - Lise Bellanger
- Department of Mathematics Jean Leray, UMR CNRS 6629, Nantes University, 44322 Nantes, France; (R.B.); (L.B.); (P.D.)
| | - Laurent Chevreuil
- UmanIT, 13 Place Sophie Trébuchet, 44000 Nantes, France; (L.C.); (F.D.)
| | - Fanny Doistau
- UmanIT, 13 Place Sophie Trébuchet, 44000 Nantes, France; (L.C.); (F.D.)
| | - Pierre Drouin
- Department of Mathematics Jean Leray, UMR CNRS 6629, Nantes University, 44322 Nantes, France; (R.B.); (L.B.); (P.D.)
- UmanIT, 13 Place Sophie Trébuchet, 44000 Nantes, France; (L.C.); (F.D.)
| | - Aymeric Stamm
- Department of Mathematics Jean Leray, UMR CNRS 6629, Nantes University, 44322 Nantes, France; (R.B.); (L.B.); (P.D.)
| |
Collapse
|
38
|
Feasibility of DRNN for Identifying Built Environment Barriers to Walkability Using Wearable Sensor Data from Pedestrians’ Gait. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12094384] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Identifying built environment barriers to walkability is the first step toward monitoring and improving our walking environment. Although conventional approaches (i.e., surveys by experts or pedestrians, walking interviews, etc.) to identifying built environment barriers have contributed to improving the walking environment, these approaches may require considerable time and effort. To address the limitations of conventional approaches, wearable sensing technologies and data analysis techniques have recently been adopted in the investigation of the built environment. Among various wearable sensors, an inertial measurement unit (IMU) can continuously capture gait-related data, which can be used to identify built environment barriers to walkability. To propose a more efficient method, the author adopts a cascaded bidirectional and unidirectional long short-term memory (LSTM)-based deep recurrent neural network (DRNN) model for classifying human gait activities (normal and abnormal walking) according to walking environmental conditions (i.e., normal and abnormal conditions). This study uses 101,607 gait data samples collected in the author's previous study for training and testing the DRNN model. In addition, 31,142 gait data samples (20 participants) were newly collected to validate whether the DRNN model remains effective on newly added gait data. The gait activity classification results show that the proposed method can classify normal and abnormal gaits with an accuracy of about 95%. The results also indicate that the proposed method can be used to monitor environmental barriers and improve the walking environment.
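For illustration (not the author's network), a minimal cascaded bidirectional-then-unidirectional LSTM classifier can be sketched as below; the window shape and layer sizes are hypothetical.

```python
# Illustrative sketch only: a cascaded Bi-LSTM -> LSTM binary classifier for
# normal vs. abnormal gait windows.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 6)),                               # hypothetical IMU window
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.LSTM(32),                                     # unidirectional stage
    tf.keras.layers.Dense(1, activation="sigmoid"),               # normal vs. abnormal gait
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```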
Collapse
|
39
|
UCA-EHAR: A Dataset for Human Activity Recognition with Embedded AI on Smart Glasses. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12083849] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Human activity recognition can help in elderly care by monitoring the physical activities of a subject and identifying a degradation in physical abilities. Vision-based approaches require setting up cameras in the environment, while most body-worn sensor approaches can be a burden on the elderly due to the need to wear additional devices. Another solution consists of using smart glasses, a much less intrusive device that also leverages the fact that the elderly often already wear glasses. In this article, we propose UCA-EHAR, a novel dataset for human activity recognition using smart glasses. UCA-EHAR addresses the lack of usable data from smart glasses for human activity recognition purposes. The data are collected from a gyroscope, an accelerometer, and a barometer embedded in smart glasses, with 20 subjects performing 8 different activities (STANDING, SITTING, WALKING, LYING, WALKING_DOWNSTAIRS, WALKING_UPSTAIRS, RUNNING, and DRINKING). Results of the classification task are provided using a residual neural network. Additionally, the neural network is quantized and deployed on the smart glasses using the open-source MicroAI framework in order to provide a live human activity recognition application based on our dataset. Power consumption is also analysed when performing live inference on the smart glasses' microcontroller.
Collapse
|
40
|
Huang EJ, Yan K, Onnela JP. Smartphone-Based Activity Recognition Using Multistream Movelets Combining Accelerometer and Gyroscope Data. SENSORS (BASEL, SWITZERLAND) 2022; 22:2618. [PMID: 35408232 PMCID: PMC9002497 DOI: 10.3390/s22072618] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/05/2022] [Revised: 03/03/2022] [Accepted: 03/26/2022] [Indexed: 02/01/2023]
Abstract
Physical activity patterns can reveal information about one's health status. Built-in sensors in a smartphone, in comparison to a patient's self-report, can collect activity recognition data more objectively, unobtrusively, and continuously. A variety of data analysis approaches have been proposed in the literature. In this study, we applied the movelet method to classify the activities performed, using smartphone accelerometer and gyroscope data, which measure a phone's acceleration and angular velocity, respectively. The movelet method constructs a personalized dictionary for each participant using training data and classifies activities in new data with the dictionary. Our results show that this method has the advantages of being interpretable and transparent. A unique aspect of our movelet application is the optimal extraction of distinct information from multiple sensors. In comparison to single-sensor applications, our approach jointly incorporates the accelerometer and gyroscope sensors with the movelet method. Our findings show that combining data from the two sensors can result in more accurate activity recognition than using each sensor alone. In particular, the joint-sensor method reduces errors of the gyroscope-only method in differentiating between standing and sitting. It also reduces errors of the accelerometer-only method when classifying vigorous activities.
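The sketch below illustrates a movelet-style classifier in Python with random placeholder data (not the study's dictionary or distance metric): short labeled reference segments per person, with a new window labeled by the closest movelet and distances summed over both sensors.

```python
# Illustrative sketch only: nearest-movelet classification using a joint
# accelerometer + gyroscope distance.
import numpy as np

rng = np.random.default_rng(0)
# Personalized dictionary: a few labeled movelets per activity (accelerometer + gyroscope).
dictionary = [(rng.standard_normal((50, 3)), rng.standard_normal((50, 3)), act)
              for act in ("walk", "sit", "stand") for _ in range(5)]

def classify(acc_win, gyr_win):
    # joint-sensor distance: accelerometer distance plus gyroscope distance
    dists = [np.linalg.norm(acc_win - a) + np.linalg.norm(gyr_win - g)
             for a, g, _ in dictionary]
    return dictionary[int(np.argmin(dists))][2]

print(classify(rng.standard_normal((50, 3)), rng.standard_normal((50, 3))))
```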
Collapse
Affiliation(s)
- Emily J Huang
- Department of Mathematics and Statistics, Wake Forest University, Winston-Salem, NC 27106, USA
| | - Kebin Yan
- Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, PA 19104, USA
| | | |
Collapse
|
41
|
Nair R, Ragab M, Mujallid OA, Mohammad KA, Mansour RF, Viju GK. Impact of Wireless Sensor Data Mining with Hybrid Deep Learning for Human Activity Recognition. WIRELESS COMMUNICATIONS AND MOBILE COMPUTING 2022; 2022:1-8. [DOI: 10.1155/2022/9457536] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/28/2024]
Abstract
Human activity recognition (HAR) is a time series classification problem that is difficult to solve. Traditional signal processing approaches and domain expertise are necessary to appropriately create features from raw data and fit a machine learning model for predicting a person's movement. This work aims to demonstrate how a hybrid deep learning model may be used to recognize human behavior. Deep learning methodologies such as convolutional neural networks and recurrent neural networks extract the features and achieve the classification goal. The suggested model uses wireless sensor data mining datasets to predict human activity. The model's performance has been assessed using the confusion matrix, accuracy, training loss, and testing loss. The model achieved greater than 96% accuracy, superior to other state-of-the-art algorithms in this field.
Collapse
Affiliation(s)
- Rajit Nair: School of Computing Science and Engineering, Vellore Institute of Technology, Bhopal, Madhya Pradesh, India
- Mahmoud Ragab: Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; Centre of Artificial Intelligence for Precision Medicines, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Osama A. Mujallid: Public Administration Department, Faculty of Economic and Administration, King Abdul-Aziz University, Jeddah 21589, Saudi Arabia
- Khadijah Ahmad Mohammad: Department of Pharmaceutical Chemistry, Faculty of Pharmacy, King Abdulaziz University, Alsulaymanyah, Jeddah 21589, Saudi Arabia
- Romany F. Mansour: Professor and Dean, Post Graduate Studies, University of Garden City, Khartoum, Sudan
- G. K. Viju: Department of Mathematics, Faculty of Science, New Valley University, El-Kharga 72511, Egypt
42
Saqlain M, Kim D, Cha J, Lee C, Lee S, Baek S. 3DMesh-GAR: 3D Human Body Mesh-Based Method for Group Activity Recognition. SENSORS (BASEL, SWITZERLAND) 2022; 22:1464. [PMID: 35214365 PMCID: PMC8877503 DOI: 10.3390/s22041464] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/27/2021] [Revised: 02/05/2022] [Accepted: 02/10/2022] [Indexed: 06/14/2023]
Abstract
Group activity recognition is a prime research topic in video understanding and has many practical applications, such as crowd behavior monitoring and video surveillance. To understand multi-person/group actions, a model should not only identify each individual person's action in context but also describe their collective activity. Many previous works adopt skeleton-based approaches with graph convolutional networks for group activity recognition; however, these approaches are limited in scalability, robustness, and interoperability. In this paper, we propose 3DMesh-GAR, a novel approach to 3D human body mesh-based group activity recognition, which relies on a body center heatmap, camera map, and mesh parameter map instead of the complex and noisy 3D skeleton of each person in the input frames. We adopt a 3D mesh creation method that is conceptually simple, single-stage, and bounding-box free, and that handles highly occluded, multi-person scenes without additional computational cost. We implement 3DMesh-GAR on a standard group activity dataset, the Collective Activity Dataset, and achieve state-of-the-art performance for group activity recognition.
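The paper's mesh-based pipeline is not reproduced here, but the following hedged PyTorch sketch shows one plausible final stage of such a design: per-person mesh parameter vectors (the dimension of 85 and the max-pooling choice are assumptions, not details from the paper) are embedded and pooled into a single permutation-invariant group feature before classification, which keeps the prediction independent of how many people appear in the scene.

```python
import torch
import torch.nn as nn

class GroupActivityHead(nn.Module):
    """Illustrative head: embed each person's mesh parameters, pool over
    people so the model is invariant to the number of detections, then
    classify the collective activity."""
    def __init__(self, mesh_dim=85, hidden=256, n_classes=5):
        super().__init__()
        self.person_mlp = nn.Sequential(nn.Linear(mesh_dim, hidden), nn.ReLU())
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, mesh_params):                     # (n_people, mesh_dim)
        person_feats = self.person_mlp(mesh_params)     # (n_people, hidden)
        group_feat = person_feats.max(dim=0).values     # permutation-invariant pooling
        return self.classifier(group_feat)              # group activity logits

logits = GroupActivityHead()(torch.randn(7, 85))        # e.g., 7 detected people
```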
Affiliation(s)
- Muhammad Saqlain: AI Graduate School, Ulsan National Institute of Science and Technology, Ulsan 44919, Korea
- Donguk Kim: AI Graduate School, Ulsan National Institute of Science and Technology, Ulsan 44919, Korea
- Junuk Cha: AI Graduate School, Ulsan National Institute of Science and Technology, Ulsan 44919, Korea
- Changhwa Lee: Department of Computer Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Korea
- Seongyeong Lee: Department of Computer Science and Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Korea
- Seungryul Baek: AI Graduate School, Ulsan National Institute of Science and Technology, Ulsan 44919, Korea
43
Joo H, Kim H, Ryu JK, Ryu S, Lee KM, Kim SC. Estimation of Fine-Grained Foot Strike Patterns with Wearable Smartwatch Devices. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:ijerph19031279. [PMID: 35162308 PMCID: PMC8835219 DOI: 10.3390/ijerph19031279] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Revised: 01/19/2022] [Accepted: 01/21/2022] [Indexed: 12/14/2022]
Abstract
People who exercise may benefit or be injured depending on their foot strike (FS) style. In this study, we propose an intelligent system that can recognize subtle differences in FS patterns during walking and running using measurements from a wearable smartwatch device. Although such patterns could be measured directly from the pressure distribution of the feet as they strike the ground, we instead analyze hand movements, on the assumption that striking patterns affect the temporal movements of the whole body. The advantage of the proposed approach is that FS patterns can be estimated in a portable and less invasive manner. To this end, we first developed a wearable system for measuring inertial movements of the hands and conducted an experiment in which participants walked and ran while wearing a smartwatch. Second, we trained and tested the captured multivariate time series signals in supervised learning settings. The experimental results demonstrated high and robust classification performance (weighted-average F1 score > 90%) when recent deep neural network models, such as 1D-CNNs and GRUs, were employed. We conclude with a discussion of potential future work and applications of the proposed approach for promoting proper walking and running.
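Since the abstract reports a weighted-average F1 score, the short example below shows how that metric is typically computed with scikit-learn. The foot-strike class names and the predictions are purely hypothetical; only the choice of the weighted average mirrors the evaluation quoted above.

```python
from sklearn.metrics import f1_score, classification_report

# Hypothetical per-window predictions for three foot-strike classes
# (e.g., rearfoot, midfoot, forefoot); labels are illustrative only.
y_true = ["rear", "rear", "mid", "fore", "fore", "mid", "rear", "fore"]
y_pred = ["rear", "mid",  "mid", "fore", "fore", "mid", "rear", "rear"]

# The weighted average weights each class's F1 by its support, which is the
# summary statistic the study reports exceeding 90%.
print(f1_score(y_true, y_pred, average="weighted"))
print(classification_report(y_true, y_pred))
```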
Affiliation(s)
- Hyeyeoun Joo: Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul 08826, Korea
- Hyejoo Kim: Machine Learning Systems Laboratory, Department of Sports Science, Sungkyunkwan University, Suwon 16419, Korea
- Jeh-Kwang Ryu: Department of Physical Education, College of Education, Dongguk University, Seoul 04620, Korea
- Semin Ryu: Intelligent Robotics Laboratory, School of Artificial Intelligence Convergence, Hallym University, Chuncheon 24252, Korea
- Kyoung-Min Lee: Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul 08826, Korea
- Seung-Chan Kim: Machine Learning Systems Laboratory, Department of Sports Science, Sungkyunkwan University, Suwon 16419, Korea (corresponding author; Tel.: +82-31-299-6918)
44
Gupta N, Gupta SK, Pathak RK, Jain V, Rashidi P, Suri JS. Human activity recognition in artificial intelligence framework: a narrative review. Artif Intell Rev 2022; 55:4755-4808. [PMID: 35068651 PMCID: PMC8763438 DOI: 10.1007/s10462-021-10116-x] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Human activity recognition (HAR) has multifaceted applications owing to the widespread use of acquisition devices, such as smartphones and video cameras, and their ability to capture human activity data. While electronic devices and their applications are steadily growing, advances in artificial intelligence (AI) have revolutionized the ability to extract deeply hidden information for accurate detection and interpretation. This yields a better understanding of the three pillars of HAR: rapidly growing acquisition devices, AI, and applications. Many review articles have been published on the general characteristics of HAR, a few have compared all HAR devices at the same time, and few have explored the impact of evolving AI architectures. The proposed review presents a detailed narration of the three pillars of HAR, covering the period from 2011 to 2021, along with recommendations for improved HAR design, reliability, and stability. Five major findings were: (1) HAR rests on three major pillars: devices, AI, and applications; (2) HAR has dominated the healthcare industry; (3) hybrid AI models are in their infancy and need considerable work to provide stable and reliable designs, and such trained models must deliver solid predictions, high accuracy, and generalization, and meet application objectives without bias; (4) little work was observed on abnormality detection during actions; and (5) almost no work has been done on forecasting actions. We conclude that (a) the HAR industry will evolve in terms of the three pillars of electronic devices, applications, and the type of AI, and (b) AI will provide a powerful impetus to the HAR industry in the future. Supplementary Information: The online version contains supplementary material available at 10.1007/s10462-021-10116-x.
Affiliation(s)
- Neha Gupta: CSE Department, Bennett University, Greater Noida, UP, India; Bharati Vidyapeeth's College of Engineering, Paschim Vihar, New Delhi, India
- Vanita Jain: Bharati Vidyapeeth's College of Engineering, Paschim Vihar, New Delhi, India
- Parisa Rashidi: Intelligent Health Laboratory, Department of Biomedical Engineering, University of Florida, Gainesville, USA
- Jasjit S. Suri: Stroke Diagnostic and Monitoring Division, AtheroPoint, Roseville, CA 95661, USA; Global Biomedical Technologies, Inc., Roseville, CA, USA
45
Khatun MA, Yousuf MA, Ahmed S, Uddin MZ, Alyami SA, Al-Ashhab S, Akhdar HF, Khan A, Azad A, Moni MA. Deep CNN-LSTM With Self-Attention Model for Human Activity Recognition Using Wearable Sensor. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2022; 10:2700316. [PMID: 35795873 PMCID: PMC9252338 DOI: 10.1109/jtehm.2022.3177710] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/24/2022] [Revised: 05/15/2022] [Accepted: 05/16/2022] [Indexed: 12/21/2022]
Abstract
Human Activity Recognition (HAR) systems are devised for continuously observing human behavior, primarily in the fields of environmental compatibility, sports injury detection, senior care, rehabilitation, entertainment, and surveillance in intelligent home settings. Inertial sensors, e.g., accelerometers, linear acceleration sensors, and gyroscopes, are frequently employed for this purpose and are now compacted into smart devices such as smartphones. Since the use of smartphones is so widespread nowadays, activity data acquisition for HAR systems is a pressing need. In this article, we conducted smartphone sensor-based raw data collection, namely H-Activity, using an Android-OS-based application for the accelerometer, gyroscope, and linear acceleration. Furthermore, a hybrid deep learning model is proposed, coupling a convolutional neural network and a long short-term memory network (CNN-LSTM), empowered by a self-attention mechanism to enhance the predictive capabilities of the system. In addition to our collected dataset (H-Activity), the model has been evaluated on benchmark datasets, e.g., MHEALTH and UCI-HAR, to demonstrate its comparative performance. The proposed model achieved an accuracy of 99.93% on our collected H-Activity data, and 98.76% and 93.11% on the MHEALTH and UCI-HAR datasets, respectively, indicating its efficacy in human activity recognition. We hope that the developed model will be applicable in clinical settings and that the collected data will be useful for further research.
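A hedged PyTorch sketch of the CNN-LSTM-with-self-attention idea is given below. It is not the published architecture: the nine input channels mirror the accelerometer, gyroscope, and linear-acceleration streams mentioned in the abstract, while the layer sizes, attention head count, and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTMSelfAttention(nn.Module):
    """Sketch: convolutions extract local features, an LSTM encodes their
    order, and self-attention lets every time step weight the others
    before pooling and classification."""
    def __init__(self, n_channels=9, n_classes=12):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(64, 64, batch_first=True)
        self.attn = nn.MultiheadAttention(embed_dim=64, num_heads=4,
                                          batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)   # (batch, time, 64)
        h, _ = self.lstm(h)                # (batch, time, 64)
        h, _ = self.attn(h, h, h)          # self-attention over time steps
        return self.head(h.mean(dim=1))    # average-pool, then classify

logits = CNNLSTMSelfAttention()(torch.randn(4, 9, 128))
```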
Affiliation(s)
- Mst. Alema Khatun: Institute of Information Technology, Jahangirnagar University, Savar, Dhaka, Bangladesh
- Mohammad Abu Yousuf: Institute of Information Technology, Jahangirnagar University, Savar, Dhaka, Bangladesh
- Sabbir Ahmed: Institute of Information Technology, Jahangirnagar University, Savar, Dhaka, Bangladesh
- Salem A. Alyami: Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Samer Al-Ashhab: Department of Mathematics and Statistics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Hanan F. Akhdar: Department of Physics, Faculty of Science, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
- Asaduzzaman Khan: School of Health and Rehabilitation Sciences, Faculty of Health and Behavioural Sciences, The University of Queensland, Saint Lucia, QLD, Australia
- Akm Azad: Faculty of Science, Engineering & Technology, Swinburne University of Technology Sydney, Parramatta, NSW, Australia
- Mohammad Ali Moni: School of Health and Rehabilitation Sciences, Faculty of Health and Behavioural Sciences, The University of Queensland, Saint Lucia, QLD, Australia
46
Şengül G, Karakaya M, Misra S, Abayomi-Alli OO, Damaševičius R. Deep learning based fall detection using smartwatches for healthcare applications. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103242] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
47
Ciliberto M, Fortes Rey V, Calatroni A, Lukowicz P, Roggen D. Opportunity++: A Multimodal Dataset for Video- and Wearable, Object and Ambient Sensors-Based Human Activity Recognition. FRONTIERS IN COMPUTER SCIENCE 2021. [DOI: 10.3389/fcomp.2021.792065] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
48
Ionut-Cristian S, Dan-Marius D. Using Inertial Sensors to Determine Head Motion-A Review. J Imaging 2021; 7:265. [PMID: 34940732 PMCID: PMC8708381 DOI: 10.3390/jimaging7120265] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Revised: 11/16/2021] [Accepted: 11/22/2021] [Indexed: 12/13/2022] Open
Abstract
Human activity recognition and classification are among the most interesting research fields, especially given the rising popularity of wearable devices, such as mobile phones and smartwatches, in our daily lives. Determining head motion and activities through wearable devices has applications in different domains, such as medicine, entertainment, health monitoring, and sports training. In addition, understanding head motion is important for modern topics such as metaverse systems, virtual reality, and touchless interfaces. Head-mounted motion-monitoring systems also offer better wearability and usability than systems that rely on sensors attached to other parts of the body. The current paper presents an overview of the technical literature from the last decade on state-of-the-art head motion monitoring systems based on inertial sensors, covering the acquisition methods, prototype structures, preprocessing steps, computational methods, and techniques used to validate these systems. From a preliminary inspection of the technical literature, we observe that this is the first work to look specifically at head motion systems based on inertial sensors and their techniques. The research was conducted using four internet databases: IEEE Xplore, Elsevier, MDPI, and Springer. According to this survey, most studies focus on analyzing general human activity rather than a specific activity. For each method, concept, and final solution, this study provides a comprehensive set of references that document the advantages and disadvantages of the inertial sensors used to read head motion. The results help to contextualize emerging inertial sensor technology in relation to the broader goal of helping people suffering from partial or total paralysis of the body.
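Many of the surveyed systems must first turn raw inertial samples into an orientation estimate before any activity classification. As a worked illustration only (the review itself covers a range of acquisition and processing techniques), the sketch below shows a classic complementary filter that fuses gyroscope integration with the accelerometer's gravity direction to estimate head pitch; the sampling interval and blending factor are assumptions.

```python
import numpy as np

def estimate_pitch(acc, gyro_y, dt=0.01, alpha=0.98):
    """Complementary filter: integrate the gyroscope for smooth short-term
    pitch and correct long-term drift with the accelerometer's gravity
    direction.
    acc:    (N, 3) accelerometer samples [ax, ay, az] in m/s^2
    gyro_y: (N,)   pitch-rate samples in rad/s"""
    pitch = 0.0
    estimates = []
    for a, w in zip(acc, gyro_y):
        # Pitch implied by the gravity vector measured by the accelerometer
        acc_pitch = np.arctan2(-a[0], np.sqrt(a[1] ** 2 + a[2] ** 2))
        # Blend gyroscope integration (fast, drifting) with the gravity cue
        pitch = alpha * (pitch + w * dt) + (1 - alpha) * acc_pitch
        estimates.append(pitch)
    return np.array(estimates)
```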
Affiliation(s)
- Severin Ionut-Cristian: Faculty of Electronics, Telecommunication and Information Technology, “Gheorghe Asachi” Technical University, 679048 Iași, Romania
49
Logacjov A, Bach K, Kongsvold A, Bårdstu HB, Mork PJ. HARTH: A Human Activity Recognition Dataset for Machine Learning. SENSORS (BASEL, SWITZERLAND) 2021; 21:7853. [PMID: 34883863 PMCID: PMC8659926 DOI: 10.3390/s21237853] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 11/17/2021] [Accepted: 11/22/2021] [Indexed: 11/29/2022]
Abstract
Existing accelerometer-based human activity recognition (HAR) benchmark datasets that were recorded during free living suffer from non-fixed sensor placement, the usage of only one sensor, and unreliable annotations. We make two contributions in this work. First, we present the publicly available Human Activity Recognition Trondheim dataset (HARTH). Twenty-two participants were recorded for 90 to 120 min during their regular working hours using two three-axial accelerometers, attached to the thigh and lower back, and a chest-mounted camera. Experts annotated the data independently using the camera's video signal and achieved high inter-rater agreement (Fleiss' kappa = 0.96). They labeled twelve activities. The second contribution of this paper is the training of seven different baseline machine learning models for HAR on our dataset. We used a support vector machine, k-nearest neighbor, random forest, extreme gradient boost, convolutional neural network, bidirectional long short-term memory, and convolutional neural network with multi-resolution blocks. The support vector machine achieved the best results with an F1-score of 0.81 (standard deviation: ±0.18), recall of 0.85 ± 0.13, and precision of 0.79 ± 0.22 in a leave-one-subject-out cross-validation. Our highly professional recordings and annotations provide a promising benchmark dataset for researchers to develop innovative machine learning approaches for precise HAR in free living.
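The leave-one-subject-out protocol described above maps directly onto scikit-learn's LeaveOneGroupOut splitter. The sketch below is illustrative rather than a reproduction of the baseline: the feature matrix, labels, and SVM hyperparameters are stand-ins, with only the participant-wise grouping and fold structure mirroring the paper's evaluation.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-ins for windowed features, activity labels, and the
# participant ID of each window (HARTH has 22 participants and 12 labeled
# activities; the shapes and values below are illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(2200, 40))            # feature vector per window
y = rng.integers(0, 12, size=2200)         # activity label per window
subjects = np.repeat(np.arange(22), 100)   # one group per participant

# Leave-one-subject-out: each fold holds out every window of one participant.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10))
scores = cross_val_score(model, X, y, groups=subjects,
                         cv=LeaveOneGroupOut(), scoring="f1_macro")
print(scores.mean(), scores.std())
```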
Affiliation(s)
- Aleksej Logacjov: Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, 7034 Trondheim, Norway
- Kerstin Bach: Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, 7034 Trondheim, Norway
- Atle Kongsvold: Department of Public Health and Nursing, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, 7034 Trondheim, Norway
- Hilde Bremseth Bårdstu: Department of Neuromedicine and Movement Science, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, 7034 Trondheim, Norway; Department of Sport, Food and Natural Sciences, Faculty of Education, Arts and Sports, Western Norway University of Applied Sciences, 6851 Sogndal, Norway
- Paul Jarle Mork: Department of Public Health and Nursing, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, 7034 Trondheim, Norway
50
Human Behavior Recognition Model Based on Feature and Classifier Selection. SENSORS 2021; 21:s21237791. [PMID: 34883795 PMCID: PMC8659462 DOI: 10.3390/s21237791] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/12/2021] [Revised: 11/07/2021] [Accepted: 11/19/2021] [Indexed: 02/04/2023]
Abstract
With the rapid development of computing and sensor technology, inertial sensor data have been widely used in human activity recognition. Most relevant studies divide human activities into basic actions and transitional actions, in which basic actions are classified with unified features, while transitional actions usually rely on context information to determine the category. Because no single existing method handles human activity recognition well on its own, this paper proposes a human activity classification and recognition model based on smartphone inertial sensor data. The model accounts for the differing feature characteristics of different types of actions: it segments the inertial sensor data with a fixed sliding window, extracts features suited to each action type, and performs recognition with different classifiers. The experimental results show that dynamic and transitional actions obtained the best recognition performance with support vector machines, while static actions were classified best with ensemble classifiers. Regarding feature selection, frequency-domain features gave a high recognition rate for dynamic actions, up to 99.35%, whereas time-domain features yielded better rates for static and transitional actions, 98.40% and 91.98%, respectively.
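A minimal sketch of the kind of pipeline the abstract describes is given below: fixed sliding-window segmentation of a smartphone inertial stream followed by separate time-domain and frequency-domain feature extraction. The window length, step size, and feature set are assumptions, not the paper's exact configuration.

```python
import numpy as np

def sliding_windows(signal, window=128, step=64):
    """Segment a (T, C) inertial signal with a fixed sliding window."""
    return np.stack([signal[s:s + window]
                     for s in range(0, len(signal) - window + 1, step)])

def time_domain_features(windows):
    """Mean, standard deviation, and mean absolute value per channel."""
    return np.concatenate([windows.mean(axis=1),
                           windows.std(axis=1),
                           np.abs(windows).mean(axis=1)], axis=1)

def frequency_domain_features(windows, k=5):
    """Magnitudes of the first k non-DC FFT coefficients per channel."""
    spectrum = np.abs(np.fft.rfft(windows, axis=1))[:, 1:k + 1, :]
    return spectrum.reshape(len(windows), -1)

# Illustrative use: a 3-axis accelerometer stream; the resulting feature
# matrices would then be fed to, e.g., an SVM for dynamic/transitional
# actions or an ensemble classifier for static actions.
acc = np.random.randn(10_000, 3)
w = sliding_windows(acc)
X_time, X_freq = time_domain_features(w), frequency_domain_features(w)
```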