1. Understanding Smartwatch Battery Utilization in the Wild. Sensors 2020; 20:3784. [PMID: 32640587; PMCID: PMC7374306; DOI: 10.3390/s20133784]
Abstract
Smartwatch battery limitations are one of the biggest hurdles to their acceptance in the consumer market. Despite promising studies analyzing smartwatch battery data, to our knowledge little research has examined the battery usage of a diverse set of smartwatches in a real-world setting. To address this gap, this paper utilizes a smartwatch dataset collected from 832 real-world users, covering different smartwatch brands and geographic locations. First, we employ clustering to identify common patterns of smartwatch battery utilization; second, we introduce a transparent low-parameter convolutional neural network model, which allows us to identify latent patterns of smartwatch battery utilization. Our model casts the battery consumption rate as a binary classification problem, i.e., low versus high consumption. It achieves 85.3% accuracy in predicting high battery discharge events, outperforming other machine learning algorithms used in state-of-the-art research. Moreover, unlike those algorithms, it allows information to be extracted from the learned filters of its feature extractor. Third, we introduce an indexing method, including a longitudinal study, to quantify changes in smartwatch battery quality over time. These findings can assist device manufacturers, vendors, application developers, and end-users in improving smartwatch battery utilization.
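The binary framing described in this abstract can be sketched in a few lines: compute a discharge rate from consecutive battery readings and label each interval low or high. This is an illustrative reconstruction only; the function name and the 10%-per-hour threshold are assumptions, not values from the paper.

```python
# Hypothetical sketch: turn a stream of (timestamp_hours, battery_percent)
# readings into binary low/high discharge labels, mirroring the paper's
# framing of battery consumption rate as a two-class problem.
# The 10 %/hour threshold is an illustrative assumption, not the paper's value.

def discharge_labels(readings, threshold=10.0):
    """readings: time-ordered list of (t_hours, battery_percent).
    Returns one 'high'/'low' label per consecutive pair of readings."""
    labels = []
    for (t0, b0), (t1, b1) in zip(readings, readings[1:]):
        rate = (b0 - b1) / (t1 - t0)  # percent of charge lost per hour
        labels.append("high" if rate > threshold else "low")
    return labels
```

Labels produced this way would then serve as training targets for a classifier over the preceding usage window.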

2. Novel Approaches to Air Pollution Exposure and Clinical Outcomes Assessment in Environmental Health Studies. Atmosphere 2020. [DOI: 10.3390/atmos11020122]
Abstract
Accurate assessment of pollutant exposure and precise evaluation of clinical outcomes pose two major challenges to contemporary environmental health research. Common methods for exposure assessment are based on residential addresses and are prone to many biases. Pollution levels are defined based on monitoring stations that are sparsely distributed and often far from residential addresses. In addition, the degree of association between outdoor and indoor air pollution levels has not been fully elucidated, making exposure assessment all the more inaccurate. Clinical outcome assessment, on the other hand, mostly relies on access to medical records from hospital admissions and outpatient clinic visits. This method is biased by health-care-seeking behavior and is therefore problematic for evaluating the onset, duration, and severity of an outcome. In the current paper, we review a number of novel solutions aimed at mitigating the aforementioned biases. First, a hybrid satellite-based modeling approach provides daily continuous spatiotemporal estimations on improved spatial resolutions of 1 × 1 km and 200 × 200 m grids, and thus allows more accurate exposure assessment. Low-cost air pollution sensors that directly measure indoor air pollution levels can further validate these models. Furthermore, real spatiotemporal activity can be assessed with the GPS tracking capability of individuals' smartphones. Widespread use of smart devices can help obtain objective measurements of some clinical outcomes, such as vital signs and glucose levels. Finally, human biomonitoring can be performed efficiently at the population level, providing accurate estimates of in-vivo absorbed pollutants and allowing body responses to be evaluated through biomarker examination.
We suggest that the adoption of these novel methods will shift the research paradigm away from its heavy reliance on ecological methodology and support the development of new clinical practices that prevent adverse environmental effects on human health.

3. The Design of an Automated System for the Analysis of the Activity and Emotional Patterns of Dogs with Wearable Sensors Using Machine Learning. Applied Sciences (Basel) 2019. [DOI: 10.3390/app9224938]
Abstract
The safety and welfare of companion animals such as dogs have become a significant concern in the last few years. To assess a dog's well-being, it is important to understand its activity patterns and emotional behavior. A wearable, sensor-based system is well suited to this end, as it can monitor dogs in real time. However, several questions remain open: what kind of data should be used to detect activity and emotional patterns, where should the sensors be placed, and how can the system be automated? To date, no system addresses these concerns. The main purposes of this study were (1) to develop a system that detects activities and emotions from accelerometer and gyroscope signals and (2) to automate the system with robust machine learning techniques for real-time use. We therefore propose a system based on data collected from 10 dogs, spanning nine breeds of various sizes and ages, and both sexes. We used machine learning classification techniques to automate the detection and evaluation process. Ground truth for evaluation was obtained from frame-by-frame video recordings captured in parallel with the wearable sensor data. The system was evaluated using an artificial neural network (ANN), random forest, support vector machine (SVM), k-nearest neighbors (KNN), and a naïve Bayes classifier, and its robustness was assessed with independent training and validation sets. We achieved accuracies of 96.58% for activity detection and 92.87% for emotional behavior detection.
This system will help dog owners track their dogs' behavior and emotions in real-life situations, across various breeds and scenarios.
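Classifiers of the kind this entry compares are typically trained on per-window statistics of the inertial signals. The sketch below shows one conventional feature vector (per-axis mean and standard deviation plus signal magnitude area); the specific feature choice is a common convention assumed here, not taken from the paper.

```python
# Illustrative per-window feature extraction for a 3-axis accelerometer
# stream, of the sort fed to ANN / random forest / SVM / KNN classifiers.
import math

def features(ax, ay, az):
    """One window of 3-axis samples -> flat feature vector:
    [mean_x, mean_y, mean_z, std_x, std_y, std_z, signal_magnitude_area]."""
    def mean(v):
        return sum(v) / len(v)

    def std(v):
        m = mean(v)
        return math.sqrt(sum((x - m) ** 2 for x in v) / len(v))

    # Signal magnitude area: average summed absolute acceleration per sample.
    sma = sum(abs(x) + abs(y) + abs(z) for x, y, z in zip(ax, ay, az)) / len(ax)
    return [mean(ax), mean(ay), mean(az), std(ax), std(ay), std(az), sma]
```

The gyroscope channels would contribute an analogous block of features, concatenated to the same vector.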

4. Aich S, Pradhan PM, Park J, Sethi N, Vathsa VSS, Kim HC. A Validation Study of Freezing of Gait (FoG) Detection and Machine-Learning-Based FoG Prediction Using Estimated Gait Characteristics with a Wearable Accelerometer. Sensors 2018; 18:3287. [PMID: 30274340; PMCID: PMC6210779; DOI: 10.3390/s18103287]
Abstract
Freezing of gait (FoG) is one of the most common symptoms observed in Parkinson's disease patients; it affects movement patterns and is related to the risk of falling. Systematic assessment of FoG requires objective quantification of gait parameters and automatic detection of FoG, which will help personalize treatment. The objectives of this study are (1) to quantify gait parameters objectively using data collected from wearable accelerometers; (2) to compare five gait parameters estimated by the proposed algorithm with their counterparts obtained from a 3D motion capture system, in terms of mean error rate and Pearson's correlation coefficient (PCC); and (3) to automatically discriminate FoG patients from patients without FoG using machine learning techniques. The five gait parameters showed a high level of agreement, with PCCs ranging from 0.961 to 0.984, and the mean error rate between the accelerometer-based estimates and the 3D motion capture system was less than 10%. The classifiers were compared on the basis of accuracy; the best result was achieved by the SVM classifier, with an accuracy of approximately 88%. The proposed approach provides sufficient evidence for its applicability in real-life scenarios, where a wearable accelerometer-based system would be recommended to assess and monitor FoG.
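The two agreement statistics this study reports, PCC and mean error rate, are straightforward to compute. A minimal sketch, with made-up sample values and hypothetical function names:

```python
# Agreement between accelerometer-estimated gait parameters and 3D
# motion-capture references, via the two statistics used in the study.
import math

def pearson(x, y):
    """Pearson's correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def mean_error_rate(est, ref):
    """Mean absolute relative error between estimate and reference, in percent."""
    return 100.0 * sum(abs(e - r) / r for e, r in zip(est, ref)) / len(est)
```

On the study's data these would yield PCCs of 0.961 to 0.984 and mean error rates under 10%.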
Affiliation(s)
- Satyabrata Aich: Department of Computer Engineering/Institute of Digital Anti-Aging Healthcare, Inje University, Gimhae 50834, Korea.
- Pyari Mohan Pradhan: Department of Electronics and Communication Engineering, IIT Roorkee, Uttarakhand 247667, India.
- Jinse Park: Department of Neurology, Haeundae Paik Hospital, Inje University, Busan 47392, Korea.
- Nitin Sethi: Department of Electronics and Communication Engineering, IIT Roorkee, Uttarakhand 247667, India.
- Vemula Sai Sri Vathsa: Department of Electronics and Communication Engineering, IIT Roorkee, Uttarakhand 247667, India.
- Hee-Cheol Kim: Department of Computer Engineering/Institute of Digital Anti-Aging Healthcare, Inje University, Gimhae 50834, Korea.

5. A Novel GMM-Based Behavioral Modeling Approach for Smartwatch-Based Driver Authentication. Sensors 2018; 18:1007. [PMID: 29597285; PMCID: PMC5948624; DOI: 10.3390/s18041007]
Abstract
All drivers have their own distinct driving habits, and usually hold and operate the steering wheel differently in different driving scenarios. In this study, we proposed a novel Gaussian mixture model (GMM)-based method that can improve the traditional GMM in modeling driving behavior. This new method can be applied to build a better driver authentication system based on the accelerometer and orientation sensor of a smartwatch. To demonstrate the feasibility of the proposed method, we created an experimental system that analyzes driving behavior using the built-in sensors of a smartwatch. The experimental results for driver authentication—an equal error rate (EER) of 4.62% in the simulated environment and an EER of 7.86% in the real-traffic environment—confirm the feasibility of this approach.
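The equal error rate (EER) figures above come from sweeping a decision threshold over genuine and impostor match scores until the false accept and false reject rates coincide. A minimal sketch of that computation, with synthetic scores; the paper's GMM scoring itself is not shown.

```python
# EER from genuine (enrolled driver) and impostor match scores.
# Convention: higher score = more likely the enrolled driver.

def equal_error_rate(genuine, impostor):
    """Sweep thresholds over all observed scores; return the EER,
    i.e. the operating point where FAR and FRR are closest."""
    best = None
    for thr in sorted(genuine + impostor):
        far = sum(s >= thr for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < thr for s in genuine) / len(genuine)     # false rejects
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]
```

A perfectly separable score distribution gives an EER of 0; the study's 4.62% (simulated) and 7.86% (real traffic) reflect partially overlapping distributions.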

6. Xu X, Zheng Y, Yao S, Sun G, Xu B, Chen X. A low-cost multimodal head-mounted display system for neuroendoscopic surgery. Brain Behav 2018; 8:e00891. [PMID: 29568688; PMCID: PMC5853619; DOI: 10.1002/brb3.891]
Abstract
BACKGROUND With rapid advances in technology, wearable devices such as head-mounted displays (HMDs) have been adopted for various uses in medical science, ranging from fitness aids to surgical assistance. We aimed to investigate the feasibility and practicability of a low-cost multimodal HMD system in neuroendoscopic surgery. METHODS A multimodal HMD system, consisting mainly of an HMD with two built-in displays, an action camera, and a laptop computer displaying reconstructed medical images, was developed to assist neuroendoscopic surgery. With this tightly integrated system, the neurosurgeon could freely switch between endoscopic images, three-dimensional (3D) reconstructed virtual endoscopy images, and images of the surrounding environment. Using a Leap Motion controller, the neurosurgeon could adjust or rotate the 3D virtual endoscopic images at a distance, to better understand the positional relation between lesions and normal tissues. RESULTS A total of 21 consecutive patients with ventricular system diseases underwent neuroendoscopic surgery with the aid of this system. All operations were completed successfully, and no system-related complications occurred. The HMD was comfortable to wear and easy to operate, and its screen resolution was high enough for careful surgical manipulation. With the system, the neurosurgeon could gain a better understanding of lesions by freely switching among images of different modalities. The system was quick to learn, with skill increasing rapidly over the first uses, and it was relatively low-cost compared with commercially available surgical assistant instruments. CONCLUSIONS The multimodal HMD system is feasible, practical, helpful, and relatively cost-efficient in neuroendoscopic surgery.
Affiliation(s)
- Xinghua Xu: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
- Yi Zheng: Department of Dermatology, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Shujing Yao: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
- Guochen Sun: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
- Bainan Xu: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China
- Xiaolei Chen: Department of Neurosurgery, Chinese PLA General Hospital, Beijing, China

7. A Radar-Based Smart Sensor for Unobtrusive Elderly Monitoring in Ambient Assisted Living Applications. Biosensors (Basel) 2017; 7:55. [PMID: 29186786; PMCID: PMC5746778; DOI: 10.3390/bios7040055]
Abstract
Continuous in-home monitoring of older adults living alone aims to improve their quality of life and independence by detecting early signs of illness, functional decline, or emergency conditions. To meet the requirements for technology acceptance by seniors (unobtrusiveness, non-intrusiveness, and privacy preservation), this study presents and discusses a new smart sensor system for detecting abnormalities during daily activities. It is based on ultra-wideband radar, which provides rich, non-privacy-sensitive information useful for sensing both cardiorespiratory activity and body movements, regardless of ambient lighting conditions and physical obstructions (through-wall sensing). Radar sensing is a promising technology that enables the measurement of vital signs and body movements at a distance, thus meeting the requirements of both unobtrusiveness and accuracy. In particular, impulse-radio ultra-wideband radar has attracted considerable attention in recent years thanks to the many properties that make it useful for assisted living. The proposed sensing system, evaluated in meaningful assisted living scenarios involving 30 participants, was able to detect vital signs, discriminate between dangerous situations and activities of daily living, and accommodate individual physical characteristics and habits. The reported results show that vital signs can be detected even while daily activities are carried out or after a fall event (post-fall phase), with accuracy varying according to the level of movement, reaching up to 95% and 91% for respiration and heart rates, respectively. Similarly, good results were achieved in fall detection using the micro-motion signature and unsupervised learning, with sensitivity and specificity greater than 97% and 90%, respectively.

8. Hossain SM, Hnat T, Saleheen N, Nasrin NJ, Noor J, Ho BJ, Condie T, Srivastava M, Kumar S. mCerebrum: A Mobile Sensing Software Platform for Development and Validation of Digital Biomarkers and Interventions. Proceedings of the International Conference on Embedded Networked Sensor Systems (SenSys '17) 2017. [PMID: 30288504; DOI: 10.1145/3131672.3131694]
Abstract
Development and validation studies of new multisensory biomarkers and sensor-triggered interventions require collecting raw sensor data with associated labels in the natural field environment. Unlike platforms for traditional mHealth apps, a software platform for such studies needs not only to support high-rate data ingestion, but also to share raw high-rate sensor data with researchers, while supporting high-rate sense-analyze-act functionality in real time. We present mCerebrum, a realization of such a platform, which supports high-rate data collection from multiple sensors with real-time assessment of data quality. A scalable storage architecture (with near-optimal performance) ensures quick response despite rapidly growing data volume. Micro-batching and efficient sharing of data among multiple source and sink apps allow computations to be reused, enabling real-time computation of multiple biomarkers without saturating the CPU or memory. Finally, a reconfigurable scheduler manages all prompts to participants in a burden- and context-aware manner. With a modular design currently spanning 23+ apps, mCerebrum provides a comprehensive ecosystem of system services and utility apps. Its design has evolved during concurrent use in scientific field studies at ten sites spanning 106,806 person-days. Evaluations show that, compared with other platforms, mCerebrum's architecture and design choices support 1.5 times higher data rates and 4.3 times higher storage throughput, while incurring 8.4 times lower CPU usage.

9. Keshavarz H, Abadeh MS. Accurate frequency-based lexicon generation for opinion mining. Journal of Intelligent & Fuzzy Systems 2017. [DOI: 10.3233/jifs-16562]
Affiliation(s)
- Hamidreza Keshavarz: Faculty of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
- Mohammad Saniee Abadeh: Faculty of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran

10. An Ultra-Low Power Turning Angle Based Biomedical Signal Compression Engine with Adaptive Threshold Tuning. Sensors 2017; 17:1809. [PMID: 28783079; PMCID: PMC5579728; DOI: 10.3390/s17081809]
Abstract
Intelligent sensing is drastically changing everyday life, including healthcare, through biomedical signal monitoring, collection, and analytics. However, long-term healthcare monitoring generates a tremendous data volume and demands significant wireless transmission power, which poses a major challenge for wearable healthcare sensors that are usually battery-powered. Efficient compression engine design that reduces the wireless transmission data rate at ultra-low power consumption is therefore essential for wearable miniaturized healthcare sensor systems. This paper presents an ultra-low power biomedical signal compression engine for healthcare data sensing and analytics in the era of big data and sensor intelligence. It extracts the feature points of the biomedical signal by window-based turning angle detection. The proposed approach has low complexity, and thus low power consumption, while achieving a large compression ratio (CR) and good reconstructed signal quality. Near-threshold design techniques are adopted to further reduce power consumption at the circuit level. In addition, the angle threshold for compression can be adaptively tuned according to the error between the original and reconstructed signals, addressing the variation of signal characteristics from person to person or channel to channel to meet the required signal quality with optimal CR. For demonstration, the proposed compression engine has been applied to and evaluated on ECG compression. It achieves an average CR of 71.08% and a percentage root-mean-square difference (PRD) of 5.87% while consuming only 39 nW. Compared with several state-of-the-art ECG compression engines, the proposed design has significantly lower power consumption while achieving similar CR and PRD, making it suitable for long-term wearable miniaturized sensor systems that sense and collect healthcare data for remote analytics.
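The core idea of turning-angle feature-point selection can be sketched in software: a sample is kept when the signal trajectory bends sharply enough at that point, and the signal is later rebuilt by interpolating between kept points. The angle computation and threshold below are simplified assumptions for illustration, not the paper's circuit-level design.

```python
# Hedged sketch of turning-angle feature-point selection on a sampled signal.
import math

def turning_angle(p0, p1, p2):
    """Absolute change of direction (radians) at p1 between segments
    p0->p1 and p1->p2; points are (t, value) pairs."""
    a = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
    b = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    return abs(b - a)

def compress(signal, threshold=0.5):
    """signal: list of (t, value). Keep endpoints plus every interior point
    whose turning angle exceeds the (adaptively tunable) threshold."""
    kept = [signal[0]]
    for prev, cur, nxt in zip(signal, signal[1:], signal[2:]):
        if turning_angle(prev, cur, nxt) > threshold:
            kept.append(cur)
    kept.append(signal[-1])
    return kept
```

Raising the threshold keeps fewer points (higher CR, higher PRD); the paper's adaptive tuning loop adjusts it from the reconstruction error.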

11. Threats of Password Pattern Leakage Using Smartwatch Motion Recognition Sensors. Symmetry (Basel) 2017. [DOI: 10.3390/sym9070101]
Abstract
Thanks to the development of Internet of Things (IoT) technologies, wearable markets have been growing rapidly. The smartwatch is arguably the most representative product in these markets, and it incorporates various hardware technologies to overcome the limitations of its small form factor; motion recognition sensors are a representative example. However, smartwatches and their motion recognition sensors, worn directly on the body, may pose the security threat of password pattern leakage. In this paper, we conduct experiments in which the password patterns entered by users are inferred from motion recognition sensor data, and we verify the results and report their accuracy.

12. Filippoupolitis A, Oliff W, Takand B, Loukas G. Location-Enhanced Activity Recognition in Indoor Environments Using Off the Shelf Smart Watch Technology and BLE Beacons. Sensors 2017; 17:1230. [PMID: 28555022; PMCID: PMC5492220; DOI: 10.3390/s17061230]
Abstract
Activity recognition in indoor spaces benefits context awareness and improves the efficiency of applications related to personalised health monitoring, building energy management, security, and safety. The majority of activity recognition frameworks, however, employ a network of specialised building sensors or a network of body-worn sensors. As this approach suffers in terms of practicality, we propose the use of commercial off-the-shelf devices. In this work, we design and evaluate an activity recognition system composed of a smart watch enhanced with location information from Bluetooth Low Energy (BLE) beacons. We evaluate the performance of this approach on a variety of activities performed in an indoor laboratory environment, using four supervised machine learning algorithms. Our experimental results indicate that the location-enhanced system reaches a classification accuracy between 92% and 100%, whereas without location information the accuracy can drop to as low as 50% in some cases, depending on the window size chosen for data segmentation.
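The two preprocessing steps the reported accuracies hinge on, window-based segmentation and attaching a location feature from BLE beacons, can be sketched as follows. The window size and the strongest-beacon assignment rule are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: fixed-size segmentation of a sensor stream, plus a location
# feature derived from BLE beacon RSSI readings.

def segment(samples, window):
    """Split a list of samples into consecutive non-overlapping windows."""
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, window)]

def nearest_beacon(rssi_by_beacon):
    """Pick the beacon with the strongest (least negative) RSSI as the
    location label for the current window."""
    return max(rssi_by_beacon, key=rssi_by_beacon.get)
```

Each window's inertial features, concatenated with its beacon-derived location label, would then form one training example for the supervised classifiers.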
Affiliation(s)
- Avgoustinos Filippoupolitis: Computing and Information Systems Department, University of Greenwich, Old Royal Naval College, Park Row, London SE10 9LS, UK.
- William Oliff: Computing and Information Systems Department, University of Greenwich, Old Royal Naval College, Park Row, London SE10 9LS, UK.
- Babak Takand: Computing and Information Systems Department, University of Greenwich, Old Royal Naval College, Park Row, London SE10 9LS, UK.
- George Loukas: Computing and Information Systems Department, University of Greenwich, Old Royal Naval College, Park Row, London SE10 9LS, UK.

13. Urbanski M, Reyes CG, Noh J, Sharma A, Geng Y, Subba Rao Jampani V, Lagerwall JPF. Liquid crystals in micron-scale droplets, shells and fibers. Journal of Physics: Condensed Matter 2017; 29:133003. [PMID: 28199222; DOI: 10.1088/1361-648x/aa5706]
Abstract
The extraordinary responsiveness and large diversity of self-assembled structures of liquid crystals are well documented, and they have been used extensively in devices such as displays. For a long time, this application route strongly influenced academic research, which frequently focused on the performance of liquid crystals in display-like geometries, typically between flat, rigid substrates of glass or similar solids. Today a new trend is clearly visible, in which liquid crystals confined within curved, often soft and flexible, interfaces are in focus. Innovation in microfluidic technology has enabled high-throughput production of liquid crystal droplets or shells with exquisite monodispersity, and modern characterization methods allow detailed analysis of complex director arrangements. The introduction of electrospinning in liquid crystal research has enabled encapsulation in optically transparent polymeric cylinders of very small radius, allowing studies of confinement effects that were not easily accessible before. It has also opened the prospect of functionalizing textile fibers with liquid crystals in the core, triggering activities that target wearable devices with a true textile form factor for seamless integration in clothing. Together, these developments have brought center stage issues that might previously have been considered esoteric, such as the interaction of topological defects on spherical surfaces, saddle-splay curvature-induced spontaneous chiral symmetry breaking, or the non-trivial shape changes of curved liquid crystal elastomers with non-uniform director fields that undergo a phase transition to an isotropic state. The new research thrusts are motivated equally by the intriguing soft matter physics showcased by liquid crystals in these unconventional geometries and by the many novel application opportunities that arise when such systems can be reproducibly manufactured on a commercial scale.
This review attempts to summarize the current understanding of liquid crystals in spherical and cylindrical geometries, the state of the art in producing such samples, and the perspectives for the innovative applications that have been put forward.

14. Dobbins C, Rawassizadeh R, Momeni E. Detecting physical activity within lifelogs towards preventing obesity and aiding ambient assisted living. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.02.088]

15. Pérez-Torres R, Torres-Huitzil C, Galeana-Zapién H. Full On-Device Stay Points Detection in Smartphones for Location-Based Mobile Applications. Sensors 2016; 16:1693. [PMID: 27754388; PMCID: PMC5087481; DOI: 10.3390/s16101693]
Abstract
The tracking of frequently visited places, also known as stay points, is a critical feature in location-aware mobile applications, as it allows the information and services provided to smartphone users to be adapted to their movement patterns. Location-based applications usually employ the GPS receiver, along with Wi-Fi hotspots and cellular tower mechanisms, to estimate user location. Typically, fine-grained GPS location data are collected by the smartphone and transferred to dedicated servers for trajectory analysis and stay points detection. This Mobile Cloud Computing approach has been successfully employed to extend smartphone battery lifetime by trading computation for communication, under the assumption that on-device stay points detection is prohibitively expensive. In this article, we propose, and validate the feasibility of, an alternative event-driven mechanism for stay points detection that is executed fully on-device and provides greater energy savings by avoiding communication costs. Our solution is encapsulated in a sensing middleware for Android smartphones, in which a stream of GPS location updates is collected in the background, supporting duty cycling schemes, and incrementally analyzed following an event-driven paradigm for stay points detection. To evaluate the performance of the proposed middleware, real-world experiments were conducted under different stress levels, validating its power efficiency against a Mobile Cloud Computing oriented solution.
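The kind of stay points computation at issue here is commonly defined by two thresholds: a stay point is declared when successive GPS fixes remain within a distance bound for at least a minimum duration. The sketch below follows that conventional definition, not this paper's event-driven middleware; the thresholds and the flat-earth distance approximation are simplifying assumptions.

```python
# Minimal threshold-based stay point detection over a GPS trace.
import math

def dist_m(p, q):
    """Rough metres between (lat, lon) pairs; adequate at city scale."""
    dlat = (q[0] - p[0]) * 111_320
    dlon = (q[1] - p[1]) * 111_320 * math.cos(math.radians(p[0]))
    return math.hypot(dlat, dlon)

def stay_points(fixes, d_max=200.0, t_min=20 * 60):
    """fixes: time-ordered list of (t_seconds, lat, lon).
    Returns a list of ((centroid_lat, centroid_lon), arrive_t, leave_t)."""
    out, i = [], 0
    while i < len(fixes):
        j = i + 1
        # Extend the candidate cluster while fixes stay near the anchor fix.
        while j < len(fixes) and dist_m(fixes[i][1:], fixes[j][1:]) <= d_max:
            j += 1
        if fixes[j - 1][0] - fixes[i][0] >= t_min:
            pts = fixes[i:j]
            lat = sum(p[1] for p in pts) / len(pts)
            lon = sum(p[2] for p in pts) / len(pts)
            out.append(((lat, lon), fixes[i][0], fixes[j - 1][0]))
        i = j
    return out
```

An on-device, event-driven variant would evaluate this incrementally as each fix arrives instead of batch-scanning the trace.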
Affiliation(s)
- Rafael Pérez-Torres: Information Technology Laboratory, CINVESTAV-Tamaulipas, Ciudad Victoria C.P. 87130, Tamaulipas, Mexico.
- César Torres-Huitzil: Information Technology Laboratory, CINVESTAV-Tamaulipas, Ciudad Victoria C.P. 87130, Tamaulipas, Mexico.
- Hiram Galeana-Zapién: Information Technology Laboratory, CINVESTAV-Tamaulipas, Ciudad Victoria C.P. 87130, Tamaulipas, Mexico.

16. Guo H, Huang H, Huang L, Sun YE. Recognizing the Operating Hand and the Hand-Changing Process for User Interface Adjustment on Smartphones. Sensors 2016; 16:1314. [PMID: 27556461; PMCID: PMC5017479; DOI: 10.3390/s16081314]
Abstract
As smartphone touchscreens have grown larger in recent years, single-handed operability has worsened, especially for female users. We envision that user experience could be significantly improved if smartphones were able to recognize the current operating hand, detect the hand-changing process, and adjust their user interfaces accordingly. In this paper, we propose, implement, and evaluate two novel systems. The first leverages user-generated touchscreen traces to recognize the current operating hand, and the second utilizes accelerometer and gyroscope data from all kinds of activities in the user's daily life to detect the hand-changing process. The two systems are based on two supervised classifiers constructed from a series of refined touchscreen trace, accelerometer, and gyroscope features. As opposed to existing solutions, which all require users to select the current operating hand or confirm the hand-changing process manually, our systems are far more convenient and practical, allowing users to change the operating hand frequently without any harm to the user experience. We conducted extensive experiments on Samsung Galaxy S4 smartphones. The evaluation results demonstrate that our systems can recognize the current operating hand and detect the hand-changing process with 94.1% and 93.9% precision and 94.1% and 93.7% True Positive Rates (TPR), respectively, when deciding from a single touchscreen trace or accelerometer-gyroscope data segment, with False Positive Rates (FPR) as low as 2.6% and 0.7%, respectively. The two systems can either work independently, each achieving high accuracy, or work jointly to further improve recognition accuracy.
Affiliation(s)
- Hansong Guo: School of Computer Science and Technology, University of Science and Technology of China, Hefei 230000, China.
- He Huang: School of Computer Science and Technology, Soochow University, Soochow 215000, China.
- Liusheng Huang: School of Computer Science and Technology, University of Science and Technology of China, Hefei 230000, China.
- Yu-E Sun: School of Urban Rail Transportation, Soochow University, Soochow 215000, China; School of Computer and Software, Nanjing University of Information Science and Technology, Nanjing 210000, China.
17
Kutafina E, Laukamp D, Bettermann R, Schroeder U, Jonas SM. Wearable Sensors for eLearning of Manual Tasks: Using Forearm EMG in Hand Hygiene Training. SENSORS 2016; 16:s16081221. [PMID: 27527167 PMCID: PMC5017386 DOI: 10.3390/s16081221] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Received: 04/13/2016] [Revised: 07/08/2016] [Accepted: 07/26/2016] [Indexed: 11/16/2022]
Abstract
In this paper, we propose a novel approach to eLearning that makes use of smart wearable sensors. Traditional eLearning supports the remote and mobile learning of mostly theoretical knowledge. Here we discuss the possibilities of eLearning to support the training of manual skills. We employ forearm armbands with inertial measurement units and surface electromyography sensors to detect and analyse the user’s hand motions and evaluate their performance. Hand hygiene is chosen as the example activity, as it is a highly standardized manual task that is often not properly executed. The World Health Organization guidelines on hand hygiene are taken as a model of the optimal hygiene procedure, due to their algorithmic structure. Gesture recognition procedures based on artificial neural networks and hidden Markov modeling were developed, achieving recognition rates of 98.30% (±1.26%) for individual gestures. Our approach is shown to be promising for further research and application in the mobile eLearning of manual skills.
Affiliation(s)
- Ekaterina Kutafina: Department of Medical Informatics, Uniklinik RWTH Aachen, Pauwelsstrasse 30, 52057 Aachen, Germany; Faculty of Applied Mathematics, AGH University of Science and Technology, Mickiewicza 30, 30-059 Cracow, Poland.
- David Laukamp: Department of Medical Informatics, Uniklinik RWTH Aachen, Pauwelsstrasse 30, 52057 Aachen, Germany.
- Ralf Bettermann: Department of Medical Informatics, Uniklinik RWTH Aachen, Pauwelsstrasse 30, 52057 Aachen, Germany.
- Ulrik Schroeder: Computer Supported Learning Group, RWTH Aachen University, Ahornstrasse 55, 52074 Aachen, Germany.
- Stephan M Jonas: Department of Medical Informatics, Uniklinik RWTH Aachen, Pauwelsstrasse 30, 52057 Aachen, Germany.
18
On Curating Multimodal Sensory Data for Health and Wellness Platforms. SENSORS 2016; 16:s16070980. [PMID: 27355955 PMCID: PMC4970031 DOI: 10.3390/s16070980] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Received: 02/04/2016] [Revised: 06/14/2016] [Accepted: 06/21/2016] [Indexed: 11/26/2022]
Abstract
In recent years, the focus of healthcare and wellness technologies has shifted significantly towards personal vital-signs devices. The technology has evolved from smartphone-based wellness applications to fitness bands and smartwatches. The novelty of these devices is the accumulation of activity data as their users go about their daily routine. However, these implementations are device specific and lack the ability to incorporate multimodal data sources. The data accumulated in their usage does not offer the rich contextual information needed to provide a holistic view of a user's lifelog. As a result, decisions and recommendations based on this data are single dimensional. In this paper, we present our Data Curation Framework (DCF), which is device independent and accumulates a user's sensory data from multimodal data sources in real time. DCF curates the context of this accumulated data over the user's lifelog and provides rule-based anomaly detection over this context-rich lifelog in real time. To provide computation and persistence over the large volume of sensory data, DCF utilizes the distributed and ubiquitous environment of the cloud platform. DCF has been evaluated for its performance, correctness, ability to detect complex anomalies, and management support for a large volume of sensory data.
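The rule-based anomaly detection this abstract describes can be pictured as range checks over lifelog records. A minimal sketch, not the authors' DCF implementation; the record keys and rule format here are hypothetical:

```python
def detect_anomalies(record, rules):
    """Return the rule violations for one lifelog record.

    record: dict of sensor readings, e.g. {"heart_rate": 55, "steps": 120}.
    rules: dict mapping sensor key -> (low, high) inclusive bounds.
    """
    violations = []
    for key, (low, high) in rules.items():
        value = record.get(key)
        if value is not None and not (low <= value <= high):
            violations.append((key, value))
    return violations
```

A real curation framework would evaluate such rules continuously over a stream of multimodal records rather than one dict at a time.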
19
Shoaib M, Bosch S, Incel OD, Scholten H, Havinga PJM. Complex Human Activity Recognition Using Smartphone and Wrist-Worn Motion Sensors. SENSORS 2016; 16:426. [PMID: 27023543 PMCID: PMC4850940 DOI: 10.3390/s16040426] [Citation(s) in RCA: 106] [Impact Index Per Article: 13.3] [Received: 01/22/2016] [Revised: 02/23/2016] [Accepted: 03/17/2016] [Indexed: 12/03/2022]
Abstract
The position of on-body motion sensors plays an important role in human activity recognition. Most often, mobile phone sensors at the trouser pocket or an equivalent position are used for this purpose. However, this position is not suitable for recognizing activities that involve hand gestures, such as smoking, eating, drinking coffee and giving a talk. To recognize such activities, wrist-worn motion sensors are used. However, these two positions are mainly used in isolation. To use richer context information, we evaluate three motion sensors (accelerometer, gyroscope and linear acceleration sensor) at both wrist and pocket positions. Using three classifiers, we show that the combination of these two positions outperforms the wrist position alone, mainly at smaller segmentation windows. Another problem is that less-repetitive activities, such as smoking, eating, giving a talk and drinking coffee, cannot be recognized easily at smaller segmentation windows unlike repetitive activities, like walking, jogging and biking. For this purpose, we evaluate the effect of seven window sizes (2–30 s) on thirteen activities and show how increasing window size affects these various activities in different ways. We also propose various optimizations to further improve the recognition of these activities. For reproducibility, we make our dataset publicly available.
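The segmentation-window analysis this abstract describes starts from splitting the sensor stream into fixed-length, overlapping windows and computing features per window. A minimal sketch of that preprocessing step, not the authors' code; the 50% overlap and mean/standard-deviation features are common defaults, assumed here for illustration:

```python
import math

def sliding_windows(samples, rate_hz, window_s, overlap=0.5):
    """Split a stream of sensor samples into fixed-length windows.

    samples: list of per-sample readings (e.g. accelerometer magnitudes).
    rate_hz: sampling rate in Hz; window_s: window length in seconds.
    """
    size = int(rate_hz * window_s)
    step = max(1, int(size * (1 - overlap)))
    return [samples[i:i + size]
            for i in range(0, len(samples) - size + 1, step)]

def window_features(window):
    """Simple per-window features: mean and population standard deviation."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return (mean, math.sqrt(var))
```

Larger `window_s` values give each window more context, which, as the paper reports, helps less-repetitive activities at the cost of slower decisions.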
Affiliation(s)
- Muhammad Shoaib: Pervasive Systems Group, Department of Computer Science, Zilverling Building, PO-Box 217, 7500 AE Enschede, The Netherlands.
- Stephan Bosch: Pervasive Systems Group, Department of Computer Science, Zilverling Building, PO-Box 217, 7500 AE Enschede, The Netherlands.
- Ozlem Durmaz Incel: Department of Computer Engineering, Galatasaray University, Ortakoy, 34349 Istanbul, Turkey.
- Hans Scholten: Pervasive Systems Group, Department of Computer Science, Zilverling Building, PO-Box 217, 7500 AE Enschede, The Netherlands.
- Paul J M Havinga: Pervasive Systems Group, Department of Computer Science, Zilverling Building, PO-Box 217, 7500 AE Enschede, The Netherlands.
20
Lesson Learned from Collecting Quantified Self Information via Mobile and Wearable Devices. JOURNAL OF SENSOR AND ACTUATOR NETWORKS 2015. [DOI: 10.3390/jsan4040315] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Indexed: 11/16/2022]
21
Can smartwatches replace smartphones for posture tracking? SENSORS 2015; 15:26783-800. [PMID: 26506354 PMCID: PMC4634473 DOI: 10.3390/s151026783] [Citation(s) in RCA: 50] [Impact Index Per Article: 5.6] [Received: 07/14/2015] [Revised: 10/15/2015] [Accepted: 10/16/2015] [Indexed: 11/17/2022]
Abstract
This paper introduces a human posture tracking platform that identifies the postures of sitting, standing and lying down based on a smartwatch. The work develops this system as a proof-of-concept study to investigate a smartwatch's ability to be used in future remote health monitoring systems and applications. It validates the smartwatch's ability to track a user's posture accurately in a laboratory setting while reducing the sampling rate to potentially improve battery life, a first step towards verifying that such a system would work in future clinical settings. The algorithm classifies the transitions between the three posture states by identifying the transition movements, as well as other movements that might be mistaken for them. The system is trained and developed on a Samsung Galaxy Gear smartwatch, and the algorithm was validated through leave-one-subject-out cross-validation over 20 subjects. The system identifies the appropriate transitions at only 10 Hz with an F-score of 0.930, indicating its ability to effectively replace smartphones, if needed.
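The leave-one-subject-out protocol used in this validation trains on all subjects but one and tests on the held-out subject, cycling through every subject. A minimal sketch of that loop, not the authors' implementation; the `train_fn` and `eval_fn` callables are hypothetical placeholders for any classifier and scoring function:

```python
def leave_one_subject_out(data_by_subject, train_fn, eval_fn):
    """Leave-one-subject-out cross-validation.

    data_by_subject: dict mapping subject_id -> that subject's dataset.
    train_fn(train_sets) -> model; eval_fn(model, test_set) -> score.
    Returns the mean score over all held-out subjects.
    """
    scores = []
    for held_out, test_set in data_by_subject.items():
        # Train on every subject except the held-out one.
        train_sets = [d for s, d in data_by_subject.items() if s != held_out]
        model = train_fn(train_sets)
        scores.append(eval_fn(model, test_set))
    return sum(scores) / len(scores)
```

Holding out whole subjects, rather than random samples, is what makes the reported F-score an estimate of performance on unseen users.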