1. Sahin AS. A Computer Vision and AI-based Approach to Remote Monitoring of Vital Signs Under Rubble. Disaster Med Public Health Prep 2025; 19:e61. PMID: 40104942. DOI: 10.1017/dmp.2025.64.
Affiliation(s)
- Abdul Samet Sahin
- Karadeniz Technical University, Faculty of Medicine, Department of Emergency Medicine, Trabzon, Turkey
2. Couderc JP, Page A, Lutz M, Pham T, Tsouri GR, Hall B. Real-world evidence for passive video-based cardiac monitoring from smartphones used by patients with a history of AF. J Electrocardiol 2025; 89:153860. PMID: 39754789. DOI: 10.1016/j.jelectrocard.2024.153860.
Abstract
Passive cardiac monitoring has become synonymous with wearable technologies, requiring patients to incorporate new devices into their daily routines. While this requirement may not be a burden for many, it is a constraint for individuals with chronic diseases whose daily routines are already demanding. In this study, we introduce an innovative technology that harnesses the front-facing camera of smartphones to capture pulsatile signals discreetly while users engage in other activities on their device. We conducted a clinical study to gather real-world evidence that passive video-based cardiac monitoring is feasible and can be used to collect daily information about the cardiac status of patients with a history of atrial fibrillation (AF). The study involved 16 patients who used an application called HealthKam AFib (HK) on their Android smartphones for a period of 14 days. They also wore an ECG patch during the first 7 days, which served as the reference device. Subjects were asked to perform self-testing procedures using video selfies twice a day, but measurements were also collected in the background during normal device usage. The 16 subjects had the HK app installed on their devices for an average of 12.8±2.3 days. On average, the measurement rate was 2.1±1.6 measurements per hour of smartphone use. Heart rate measurements were highly accurate, with a mean error of -0.3 bpm. The study revealed that passive facial video monitoring collected reliable data in real-world conditions.
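As a concrete illustration of the kind of processing behind such camera-based pulse measurements (a generic sketch, not the HealthKam AFib pipeline; the function name, band limits, and ROI handling are assumptions), a pulse rate can be estimated from the mean green-channel intensity of a facial region of interest by band-pass filtering and spectral peak picking:

```python
# Minimal sketch (not the HealthKam AFib pipeline): estimate pulse rate from the
# mean green-channel intensity of a facial region of interest (ROI) over time.
import numpy as np
from scipy.signal import butter, filtfilt, periodogram

def pulse_rate_bpm(roi_green_means, fps):
    """roi_green_means: 1-D array of per-frame mean green values of a face ROI."""
    x = np.asarray(roi_green_means, dtype=float)
    x = x - x.mean()                                    # remove DC component
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)  # 42-240 bpm pulse band
    x = filtfilt(b, a, x)
    f, pxx = periodogram(x, fs=fps)                     # power spectral density
    band = (f >= 0.7) & (f <= 4.0)
    return 60.0 * f[band][np.argmax(pxx[band])]         # dominant frequency -> bpm

# Example with a synthetic 72-bpm pulse sampled at 30 frames per second:
fps, t = 30, np.arange(0, 20, 1 / 30)
signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
print(round(pulse_rate_bpm(signal, fps)))  # 72
```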
Affiliation(s)
- A Page
- VPG Medical, Inc., Rochester, NY, USA
- M Lutz
- VPG Medical, Inc., Rochester, NY, USA
- T Pham
- VPG Medical, Inc., Rochester, NY, USA
- B Hall
- VPG Medical, Inc., Rochester, NY, USA
3. Ferreira S, Marinheiro C, Mateus C, Rodrigues PP, Rodrigues MA, Rocha N. Overcoming Challenges in Video-Based Health Monitoring: Real-World Implementation, Ethics, and Data Considerations. Sensors (Basel) 2025; 25:1357. PMID: 40096177. PMCID: PMC11902461. DOI: 10.3390/s25051357.
Abstract
In the context of evolving healthcare technologies, this study investigates the application of AI and machine learning in video-based health monitoring systems, focusing on the challenges and potential of implementing such systems in real-world scenarios, specifically for knowledge workers. The research underscores the criticality of addressing technological, ethical, and practical hurdles in deploying these systems outside controlled laboratory environments. Methodologically, the study spanned three months and employed advanced facial recognition technology embedded in participants' computing devices to collect physiological metrics such as heart rate, blinking frequency, and emotional states, thereby contributing to a stress detection dataset. This approach ensured data privacy and aligned with ethical standards. The results reveal significant challenges in data collection and processing, including biases in video datasets, the need for high-resolution videos, and the complexities of maintaining data quality and consistency, with 42% of the data (after adjustments) lost. In conclusion, this research emphasizes the necessity for rigorous, ethical, and technologically adapted methodologies to fully realize the benefits of these systems in diverse healthcare contexts.
Affiliation(s)
- Simão Ferreira
- RISE-Health, Center for Translational Health and Medical Biotechnology Research (TBIO), ESS, Polytechnic of Porto, R. Dr. António Bernardino de Almeida, 400, 4200-072 Porto, Portugal
- Catarina Marinheiro
- Centro Hospitalar de Vila Nova de Gaia/Espinho, 4430-999 Vila Nova de Gaia, Portugal
- Faculdade de Ciências da Saúde e Enfermagem, Universidade Católica Portuguesa, 1649-023 Lisboa, Portugal
- Catarina Mateus
- RISE-Health, Center for Translational Health and Medical Biotechnology Research (TBIO), ESS, Polytechnic of Porto, R. Dr. António Bernardino de Almeida, 400, 4200-072 Porto, Portugal
- Pedro Pereira Rodrigues
- MEDCIDS—Department of Community Medicine, Information and Decision Sciences, Faculty of Medicine, University of Porto, 4200-450 Porto, Portugal
- CINTESIS@RISE—Centre for Health Technologies and Services Research, 4200-450 Porto, Portugal
- Matilde A. Rodrigues
- RISE-Health, Center for Translational Health and Medical Biotechnology Research (TBIO), ESS, Polytechnic of Porto, R. Dr. António Bernardino de Almeida, 400, 4200-072 Porto, Portugal
- Nuno Rocha
- RISE-Health, Center for Translational Health and Medical Biotechnology Research (TBIO), ESS, Polytechnic of Porto, R. Dr. António Bernardino de Almeida, 400, 4200-072 Porto, Portugal
4. Pinnelli M, Lo Presti D, Silvestri S, Setola R, Schena E, Massaroni C. Towards the Instrumentation of Facemasks Used as Personal Protective Equipment for Unobtrusive Breathing Monitoring of Workers. Sensors (Basel) 2024; 24:5815. PMID: 39275726. PMCID: PMC11397801. DOI: 10.3390/s24175815.
Abstract
This study focuses on the integration and validation of a filtering facepiece 3 (FFP3) facemask module for monitoring breathing activity in industrial environments. The key objective is to ensure accurate, real-time respiratory rate (RR) monitoring while maintaining workers' comfort. RR is monitored through temperature variations detected by temperature sensors tested in two configurations: sensor t1, integrated inside the exhalation valve and requiring structural modifications of the mask, and sensor t2, mounted externally in a 3D-printed structure, thus preserving the mask's certification as personal protective equipment (PPE). Ten healthy volunteers participated in static and dynamic tests, simulating typical daily-life and industrial occupational activities while wearing the breathing activity monitoring module and a chest strap as the reference instrument. These tests were carried out in both indoor and outdoor settings. The results demonstrate comparable mean absolute error (MAE) for t1 and t2 in both indoor (0.31 bpm and 0.34 bpm) and outdoor conditions (0.43 bpm and 0.83 bpm). During simulated working activities, both sensors showed MAE values consistent with the static tests and were not influenced by motion artifacts, with more than 97% of RR estimation errors within ±2 bpm. These findings demonstrate the effectiveness of integrating a smart module into protective masks, enhancing occupational health monitoring by providing continuous and precise RR data without requiring additional wearable devices.
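For orientation, the sketch below shows one generic way to turn a breath-induced temperature trace into a respiratory rate and to score it with the MAE metric reported above; it is an assumed illustration, not the paper's processing chain (window length, prominence, and minimum peak spacing are placeholder values):

```python
# Illustrative sketch (not the paper's exact processing): count breaths from a
# temperature trace sampled near the mask valve and compare against a reference.
import numpy as np
from scipy.signal import find_peaks

def respiratory_rate_bpm(temp_signal, fs):
    """Estimate respiratory rate (breaths/min) from a temperature waveform.
    Exhalation warms the sensor, so each breath appears as one temperature peak."""
    x = np.asarray(temp_signal, dtype=float)
    x = x - np.convolve(x, np.ones(int(10 * fs)) / (10 * fs), mode="same")  # detrend
    # Peaks at least 1.5 s apart -> upper bound of 40 breaths/min
    peaks, _ = find_peaks(x, distance=int(1.5 * fs), prominence=0.05)
    duration_min = len(x) / fs / 60.0
    return len(peaks) / duration_min

def mean_absolute_error(estimates, references):
    return float(np.mean(np.abs(np.asarray(estimates) - np.asarray(references))))

# Synthetic example: 15 breaths/min sampled at 10 Hz for 60 s
fs, t = 10, np.arange(0, 60, 1 / 10)
temp = 0.3 * np.sin(2 * np.pi * (15 / 60) * t) + 0.02 * np.random.randn(t.size)
print(respiratory_rate_bpm(temp, fs))               # ~15
print(mean_absolute_error([15.2, 14.8], [15, 15]))  # 0.2
```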
Affiliation(s)
- Mariangela Pinnelli
- Unit of Measurements and Biomedical Instrumentation, Departmental Faculty of Engineering, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 21, 00128 Rome, Italy
- Unit of Automatic Control, Departmental Faculty of Engineering, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 21, 00128 Rome, Italy
- Daniela Lo Presti
- Unit of Measurements and Biomedical Instrumentation, Departmental Faculty of Engineering, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 21, 00128 Rome, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo, 200, 00128 Rome, Italy
- Sergio Silvestri
- Unit of Measurements and Biomedical Instrumentation, Departmental Faculty of Engineering, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 21, 00128 Rome, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo, 200, 00128 Rome, Italy
- Roberto Setola
- Unit of Automatic Control, Departmental Faculty of Engineering, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 21, 00128 Rome, Italy
- Emiliano Schena
- Unit of Measurements and Biomedical Instrumentation, Departmental Faculty of Engineering, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 21, 00128 Rome, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo, 200, 00128 Rome, Italy
- Carlo Massaroni
- Unit of Measurements and Biomedical Instrumentation, Departmental Faculty of Engineering, Università Campus Bio-Medico di Roma, Via Alvaro del Portillo, 21, 00128 Rome, Italy
- Fondazione Policlinico Universitario Campus Bio-Medico, Via Alvaro del Portillo, 200, 00128 Rome, Italy
5. Talala S, Shvimmer S, Simhon R, Gilead M, Yitzhaky Y. Emotion Classification Based on Pulsatile Images Extracted from Short Facial Videos via Deep Learning. Sensors (Basel) 2024; 24:2620. PMID: 38676235. PMCID: PMC11053953. DOI: 10.3390/s24082620.
Abstract
Most human emotion recognition methods largely depend on classifying stereotypical facial expressions that represent emotions. However, such facial expressions do not necessarily correspond to actual emotional states and may instead reflect communicative intentions. In other cases, emotions are hidden, cannot be expressed, or have lower arousal manifested by less pronounced facial expressions, as may occur during passive video viewing. This study improves an emotion classification approach developed in a previous study, which classifies emotions remotely from short facial video data without relying on stereotypical facial expressions or contact-based methods. In this approach, we remotely sense transdermal cardiovascular spatiotemporal facial patterns associated with different emotional states and analyze these data via machine learning. In this paper, we propose several improvements, including better remote heart rate estimation via a preliminary skin segmentation, an improved heartbeat peak and trough detection process, and better emotion classification accuracy achieved by employing an appropriate deep learning classifier using data from an RGB camera only. We used the dataset obtained in the previous study, which contains facial videos of 110 participants who passively viewed 150 short videos eliciting five emotion types: amusement, disgust, fear, sexual arousal, and no emotion, while three cameras with different wavelength sensitivities (visible spectrum, near-infrared, and longwave infrared) recorded them simultaneously. From the short facial videos, we extracted unique high-resolution spatiotemporal, physiologically affected features and examined them as input features with different deep learning approaches. An EfficientNet-B0 model classified participants' emotional states with an overall average accuracy of 47.36% using a single input spatiotemporal feature map obtained from a regular RGB camera.
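A minimal sketch of the classification stage only, under assumed settings (torchvision EfficientNet-B0 with ImageNet weights, spatiotemporal maps treated as 3-channel images, five classes); this is not the authors' code:

```python
# Sketch of the classification stage (assumed setup, not the authors' code):
# fine-tune an EfficientNet-B0 to map a spatiotemporal feature map, treated as a
# 3-channel image, onto the five emotion classes described in the abstract.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

NUM_CLASSES = 5  # amusement, disgust, fear, sexual arousal, no emotion

model = efficientnet_b0(weights=EfficientNet_B0_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(batch_maps, batch_labels):
    """batch_maps: (N, 3, H, W) spatiotemporal maps; batch_labels: (N,) class ids."""
    model.train()
    optimizer.zero_grad()
    logits = model(batch_maps)
    loss = criterion(logits, batch_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy forward/backward pass with random data, just to show the expected shapes:
print(train_step(torch.randn(4, 3, 224, 224), torch.randint(0, NUM_CLASSES, (4,))))
```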
Affiliation(s)
- Shlomi Talala
- Department of Electro-Optics and Photonics Engineering, School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel
- Shaul Shvimmer
- Department of Electro-Optics and Photonics Engineering, School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel
- Rotem Simhon
- School of Psychology, Tel Aviv University, Tel Aviv 39040, Israel
- Michael Gilead
- School of Psychology, Tel Aviv University, Tel Aviv 39040, Israel
- Yitzhak Yitzhaky
- Department of Electro-Optics and Photonics Engineering, School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beer Sheva 84105, Israel
6. Cheng CH, Yuen Z, Chen S, Wong KL, Chin JW, Chan TT, So RHY. Contactless Blood Oxygen Saturation Estimation from Facial Videos Using Deep Learning. Bioengineering (Basel) 2024; 11:251. PMID: 38534525. DOI: 10.3390/bioengineering11030251.
Abstract
Blood oxygen saturation (SpO2) is an essential physiological parameter for evaluating a person's health. While conventional SpO2 measurement devices like pulse oximeters require skin contact, advanced computer vision technology can enable remote SpO2 monitoring through a regular camera without skin contact. In this paper, we propose novel deep learning models to measure SpO2 remotely from facial videos and evaluate them using a public benchmark database, VIPL-HR. We utilize a spatial-temporal representation to encode SpO2 information recorded by conventional RGB cameras and directly pass it into selected convolutional neural networks to predict SpO2. The best deep learning model achieves a mean absolute error of 1.274% and a root mean squared error of 1.71%, well within the 4% error limit specified by the international standard for approved pulse oximeters. Our results significantly outperform the conventional analytical Ratio of Ratios model for contactless SpO2 measurement. Sensitivity analyses of the influence of spatial-temporal representation color spaces, subject scenarios, acquisition devices, and SpO2 ranges on model performance are reported, together with explainability analyses, to provide more insights for this emerging research field.
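The classical Ratio of Ratios baseline mentioned above can be written compactly; the linear calibration coefficients below are illustrative placeholders only, not values from the paper or from any approved oximeter:

```python
# Classical Ratio-of-Ratios baseline in minimal form. The calibration constants
# (a, b) are illustrative placeholders, not device or paper values.
import numpy as np

def ratio_of_ratios_spo2(red, blue, a=100.0, b=5.0):
    """Estimate SpO2 (%) from per-frame mean red and blue channel traces of a skin ROI.
    AC is taken as the standard deviation of the pulsatile component, DC as the mean."""
    red, blue = np.asarray(red, float), np.asarray(blue, float)
    ac_red, dc_red = red.std(), red.mean()
    ac_blue, dc_blue = blue.std(), blue.mean()
    rr = (ac_red / dc_red) / (ac_blue / dc_blue)   # the "ratio of ratios"
    return a - b * rr

# Toy example: two pulsatile traces with different relative AC amplitudes
t = np.linspace(0, 10, 300)
red = 120 + 1.0 * np.sin(2 * np.pi * 1.2 * t)
blue = 80 + 0.9 * np.sin(2 * np.pi * 1.2 * t)
print(round(ratio_of_ratios_spo2(red, blue), 1))  # ~96.3 with these placeholders
```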
Affiliation(s)
- Chun-Hong Cheng
- Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2AZ, UK
- Zhikun Yuen
- Department of Computer Science, University of Ottawa, Ottawa, ON K1H 8M5, Canada
- Shutao Chen
- PanopticAI, Hong Kong Science and Technology Parks, New Territories, Hong Kong, China
- Kwan-Long Wong
- PanopticAI, Hong Kong Science and Technology Parks, New Territories, Hong Kong, China
- Jing-Wei Chin
- PanopticAI, Hong Kong Science and Technology Parks, New Territories, Hong Kong, China
- Tsz-Tai Chan
- PanopticAI, Hong Kong Science and Technology Parks, New Territories, Hong Kong, China
- Richard H Y So
- PanopticAI, Hong Kong Science and Technology Parks, New Territories, Hong Kong, China
- Department of Industrial Engineering and Decision Analytics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
7. Huang B, Hu S, Liu Z, Lin CL, Su J, Zhao C, Wang L, Wang W. Challenges and prospects of visual contactless physiological monitoring in clinical study. NPJ Digit Med 2023; 6:231. PMID: 38097771. PMCID: PMC10721846. DOI: 10.1038/s41746-023-00973-x.
Abstract
The monitoring of physiological parameters is crucial for promoting human health and an indispensable approach for assessing physiological status and diagnosing diseases. It holds particular value for patients who require long-term monitoring or who have underlying cardiovascular disease. To this end, Visual Contactless Physiological Monitoring (VCPM) can use videos recorded by a consumer camera to monitor the blood volume pulse (BVP) signal, heart rate (HR), respiratory rate (RR), oxygen saturation (SpO2) and blood pressure (BP). Recently, deep learning-based pipelines have attracted numerous scholars and achieved unprecedented development. Although VCPM is still an emerging digital medical technology and presents many challenges and opportunities, it has the potential to revolutionize clinical medicine, digital health, telemedicine and other areas. VCPM offers a viable solution that can be integrated into these systems for measuring vital parameters during video consultation, owing to its merits of contactless measurement, cost-effectiveness, user-friendly passive monitoring and the sole requirement of an off-the-shelf camera. Research on VCPM technologies, particularly AI-based approaches, has grown rapidly in recent years, but few methods are employed in clinical settings. Here we provide, for the first time, a comprehensive overview of the applications, challenges, and prospects of VCPM from the perspective of clinical settings and AI technologies. The thorough exploration and analysis of clinical scenarios will provide profound guidance for the research and development of VCPM technologies in clinical settings.
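As one concrete example of the camera-based BVP extraction this review surveys, the chrominance-based (CHROM) projection of de Haan and Jeanne is sketched below; it is a widely used hand-crafted rPPG method, not a method proposed in the review, and the band limits are assumptions:

```python
# One widely used hand-crafted rPPG method (the chrominance/CHROM projection),
# shown as a concrete instance of camera-based BVP extraction.
import numpy as np
from scipy.signal import butter, filtfilt

def chrom_bvp(rgb_means, fps):
    """rgb_means: (N, 3) array of per-frame mean R, G, B values of a skin ROI.
    Returns a blood-volume-pulse (BVP) waveform."""
    rgb = np.asarray(rgb_means, float)
    norm = rgb / rgb.mean(axis=0)                 # normalize each channel by its mean
    r, g, b = norm[:, 0], norm[:, 1], norm[:, 2]
    x = 3 * r - 2 * g                             # chrominance signals
    y = 1.5 * r + g - 1.5 * b
    bb, aa = butter(3, [0.7, 4.0], btype="band", fs=fps)
    xf, yf = filtfilt(bb, aa, x), filtfilt(bb, aa, y)
    alpha = np.std(xf) / np.std(yf)
    return xf - alpha * yf                        # motion-robust pulse signal

# Usage: bvp = chrom_bvp(per_frame_rgb_means, fps=30); HR follows from its spectrum.
```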
Affiliation(s)
- Bin Huang
- AI Research Center, Hangzhou Innovation Institute, Beihang University, 99 Juhang Rd., Binjiang Dist., Hangzhou, Zhejiang, China
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Shen Hu
- Department of Obstetrics, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Department of Epidemiology, The Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Zimeng Liu
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
- Chun-Liang Lin
- College of Electrical Engineering and Computer Science, National Chung Hsing University, 145 Xingda Rd., South Dist., Taichung, Taiwan
- Junfeng Su
- Department of General Intensive Care Unit, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Key Laboratory of Early Warning and Intervention of Multiple Organ Failure, China National Ministry of Education, Hangzhou, Zhejiang, China
- Changchen Zhao
- AI Research Center, Hangzhou Innovation Institute, Beihang University, 99 Juhang Rd., Binjiang Dist., Hangzhou, Zhejiang, China
- Li Wang
- Department of Rehabilitation Medicine, The First Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Wenjin Wang
- Department of Biomedical Engineering, Southern University of Science and Technology, 1088 Xueyuan Ave, Nanshan Dist., Shenzhen, Guangdong, China
8. Arrow C, Ward M, Eshraghian J, Dwivedi G. Capturing the pulse: a state-of-the-art review on camera-based jugular vein assessment. Biomed Opt Express 2023; 14:6470-6492. PMID: 38420308. PMCID: PMC10898581. DOI: 10.1364/boe.507418.
Abstract
Heart failure is associated with a rehospitalisation rate of up to 50% within six months. Elevated central venous pressure may serve as an early warning sign. While invasive procedures are used to measure central venous pressure to guide treatment in hospital, this becomes impractical upon discharge. A non-invasive estimation technique exists, in which the clinician visually inspects the pulsation of the jugular veins in the neck, but it is less reliable due to human limitations. Video and signal processing technologies may offer a high-fidelity alternative. This state-of-the-art review analyses existing literature on camera-based methods for jugular vein assessment. We summarize key design considerations and suggest avenues for future research. Our review highlights the neck as a rich imaging target beyond the jugular veins, capturing comprehensive cardiac signals, and outlines factors affecting signal quality and measurement accuracy. Addressing an often-quoted limitation in the field, we also propose minimum reporting standards for future studies.
Affiliation(s)
- Coen Arrow
- School of Medicine, University of Western Australia, Perth, Australia
- Advanced Clinical and Translational Cardiovascular Imaging, Harry Perkins Institute of Medical Research, University of Western Australia, Perth, Australia
- Max Ward
- Department of Computer Science and Software Engineering, University of Western Australia, Perth, Australia
- Jason Eshraghian
- Department of Electrical and Computer Engineering, University of California (Santa Cruz), California, USA
- Girish Dwivedi
- School of Medicine, University of Western Australia, Perth, Australia
- Advanced Clinical and Translational Cardiovascular Imaging, Harry Perkins Institute of Medical Research, University of Western Australia, Perth, Australia
- Department of Cardiology, Fiona Stanley Hospital, Perth, Australia
9. Szankin M, Kwasniewska A, Ruminski J. Thermal Image Processing for Respiratory Estimation from Cubical Data with Expandable Depth. J Imaging 2023; 9:184. PMID: 37754948. PMCID: PMC10532126. DOI: 10.3390/jimaging9090184.
Abstract
As healthcare costs continue to rise, finding affordable and non-invasive ways to monitor vital signs is increasingly important. One of the key metrics for assessing overall health and identifying potential issues early on is respiratory rate (RR). Most existing methods require multiple image- and signal-processing steps, which can be difficult to deploy on edge devices that often lack specialized digital signal processors (DSPs). Therefore, the goal of this study is to develop a single neural network that realizes the entire RR estimation process in a single forward pass. The proposed solution builds on recent advances in video recognition, capturing both spatial and temporal information in a multi-path network. The paths process the data at different sampling rates to capture the rapid and slow changes associated with temperature differences in the nostril area during breathing episodes. The preliminary results show that the introduced end-to-end solution achieves better performance compared to state-of-the-art methods, without requiring additional pre/post-processing steps or signal-processing techniques. In addition, the presented results demonstrate its robustness on low-resolution thermal video sequences, which are often used at the embedded edge due to the size and power constraints of such systems. Taking that into account, the proposed approach has the potential for efficient and convenient respiratory rate estimation in solutions deployed locally, close to end users, across various markets.
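A schematic two-path 3D CNN in the spirit of the multi-path design described above is sketched below; the layer sizes, temporal sampling factor, and regression head are assumptions, not the authors' architecture:

```python
# Schematic two-path 3-D CNN: slow (sparsely sampled) and fast (densely sampled)
# views of a low-resolution thermal clip, fused to regress breaths per minute.
import torch
import torch.nn as nn

class TwoPathRRNet(nn.Module):
    def __init__(self):
        super().__init__()
        def path():
            return nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
        self.slow, self.fast = path(), path()
        self.head = nn.Linear(32, 1)   # regress breaths per minute

    def forward(self, clip):
        """clip: (N, 1, T, H, W) low-resolution thermal video tensor."""
        slow = self.slow(clip[:, :, ::4])          # sparse temporal sampling
        fast = self.fast(clip)                     # dense temporal sampling
        feats = torch.cat([slow.flatten(1), fast.flatten(1)], dim=1)
        return self.head(feats)

rr = TwoPathRRNet()(torch.randn(2, 1, 64, 32, 32))  # two clips, 64 frames of 32x32
print(rr.shape)  # torch.Size([2, 1])
```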
Affiliation(s)
- Maciej Szankin
- Intel Corporation, 16409 W Bernardo Dr Suite 100, San Diego, CA 92127, USA
- Jacek Ruminski
- Department of Biomedical Engineering, Gdansk University of Technology, Gabriela Narutowicza 11/12, 80233 Gdansk, Poland
10. Fleischhauer V, Bruhn J, Rasche S, Zaunseder S. Photoplethysmography upon cold stress-impact of measurement site and acquisition mode. Front Physiol 2023; 14:1127624. PMID: 37324389. PMCID: PMC10267461. DOI: 10.3389/fphys.2023.1127624.
Abstract
Photoplethysmography (PPG) allows various statements about the physiological state. It supports multiple recording setups, i.e., application to various body sites and different acquisition modes, rendering the technique a versatile tool for various situations. Owing to anatomical, physiological and metrological factors, PPG signals differ with the actual setup. Research on such differences can deepen the understanding of prevailing physiological mechanisms and pave the way towards improved or novel methods for PPG analysis. The presented work systematically investigates the impact of the cold pressor test (CPT), i.e., a painful stimulus, on the morphology of PPG signals considering different recording setups. Our investigation compares contact PPG recorded at the finger, contact PPG recorded at the earlobe and imaging PPG (iPPG), i.e., non-contact PPG, recorded at the face. The study is based on our own experimental data from 39 healthy volunteers. For each recording setup, we derived four common morphological PPG features from three intervals around the CPT. For the same intervals, we derived blood pressure and heart rate as reference. To assess differences between the intervals, we used repeated measures ANOVA together with paired t-tests for each feature, and we calculated Hedges' g to quantify effect sizes. Our analyses show a distinct impact of the CPT. As expected, blood pressure shows a highly significant and persistent increase. Independently of the recording setup, all PPG features show significant changes upon CPT as well. However, there are marked differences between recording setups. Effect sizes generally differ, with finger PPG showing the strongest response. Moreover, one feature (pulse width at half amplitude) shows an inverse behavior in finger PPG and head PPG (earlobe PPG and iPPG). In addition, iPPG features behave partially differently from contact PPG features, as they tend to return to baseline values while contact PPG features remain altered. Our findings underline the importance of the recording setup and of the physiological as well as metrological differences that relate to the setups. The actual setup must be considered in order to properly interpret features and use PPG. The existence of differences between recording setups and a deeper knowledge of such differences might open up novel diagnostic methods in the future.
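For reference, the effect-size statistic named above can be computed as follows for paired (before/during) measurements; the numbers below are synthetic, not study data, and the small-sample correction shown is the standard approximation:

```python
# Paired t-test plus Hedges' g for a feature measured before and during a stimulus.
import numpy as np
from scipy import stats

def hedges_g_paired(before, after):
    """Hedges' g for paired samples: Cohen's d of the differences times the
    small-sample correction J = 1 - 3 / (4*df - 1)."""
    diff = np.asarray(after, float) - np.asarray(before, float)
    n = diff.size
    d = diff.mean() / diff.std(ddof=1)
    j = 1.0 - 3.0 / (4.0 * (n - 1) - 1.0)
    return j * d

rng = np.random.default_rng(0)
before = rng.normal(100, 10, size=39)          # e.g., baseline feature values
during = before + rng.normal(5, 8, size=39)    # shifted values under the stimulus
t_stat, p_value = stats.ttest_rel(during, before)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, g = {hedges_g_paired(before, during):.2f}")
```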
Affiliation(s)
- Vincent Fleischhauer
- Laboratory for Advanced Measurements and Biomedical Data Analysis, Faculty of Information Technology, FH Dortmund, Dortmund, Germany
- Jan Bruhn
- Laboratory for Advanced Measurements and Biomedical Data Analysis, Faculty of Information Technology, FH Dortmund, Dortmund, Germany
- Stefan Rasche
- Faculty of Medicine Carl Gustav Carus, TU Dresden, Dresden, Germany
- Sebastian Zaunseder
- Laboratory for Advanced Measurements and Biomedical Data Analysis, Faculty of Information Technology, FH Dortmund, Dortmund, Germany
- Professorship for Diagnostic Sensing, Faculty of Applied Computer Science, University Augsburg, Augsburg, Germany
11. Savur C, Dautov R, Bukum K, Xia X, Couderc JP, Tsouri GR. Monitoring Pulse Rate in the Background Using Front Facing Cameras of Mobile Devices. IEEE J Biomed Health Inform 2023; 27:2208-2218. PMID: 35939479. PMCID: PMC10244025. DOI: 10.1109/jbhi.2022.3197076.
Abstract
We propose a novel framework to passively monitor pulse rate during the time users spend on their personal mobile devices. Our framework is based on passively capturing the user's pulse signal using the front-facing camera. Signal capture is performed in the background while the user interacts with the device as they normally would, e.g., watching movies, reading email, texting, and playing games. The framework does not require subject participation in the monitoring procedure, thereby addressing the well-known problem of low adherence with such procedures. We investigate various techniques to suppress the impact of spontaneous user motion and fluctuations in ambient light conditions expected in non-participatory environments. Techniques include traditional signal processing, machine learning classifiers, and deep learning methods. Our performance evaluation is based on a clinical study encompassing 113 patients with a history of atrial fibrillation (Afib) who were passively monitored at home using a tablet for a period of two weeks. Our results show that the proposed framework accurately monitors pulse rate, thereby providing a gateway for long-term monitoring without relying on subject participation or the use of a dedicated wearable device.
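One generic ingredient of such artifact suppression is a signal-quality gate that discards measurement windows whose pulse-band spectral SNR is too low; the sketch below is an assumed illustration (threshold and band are placeholders), not the paper's classifiers:

```python
# Accept a measurement window only if its pulse-band spectral SNR is high enough.
import numpy as np
from scipy.signal import periodogram

def window_is_usable(bvp_window, fps, snr_db_threshold=3.0):
    """Return True if the dominant pulse-band peak stands out from the rest of the band."""
    f, pxx = periodogram(np.asarray(bvp_window, float), fs=fps)
    band = (f >= 0.7) & (f <= 4.0)
    fb, pb = f[band], pxx[band]
    peak_idx = np.argmax(pb)
    near_peak = np.abs(fb - fb[peak_idx]) <= 0.1          # +/- 0.1 Hz around the peak
    signal_power = pb[near_peak].sum()
    noise_power = pb[~near_peak].sum() + 1e-12
    snr_db = 10.0 * np.log10(signal_power / noise_power)
    return snr_db >= snr_db_threshold

# Clean 72-bpm window passes; white noise usually fails.
fps, t = 30, np.arange(0, 10, 1 / 30)
print(window_is_usable(np.sin(2 * np.pi * 1.2 * t), fps))   # True
print(window_is_usable(np.random.randn(t.size), fps))       # usually False
```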
12. Islam SMM. Radar-based remote physiological sensing: Progress, challenges, and opportunities. Front Physiol 2022; 13:955208. PMID: 36304581. PMCID: PMC9592800. DOI: 10.3389/fphys.2022.955208.
Abstract
Modern microwave Doppler radar-based physiological sensing is playing an important role in healthcare applications, and during the last decade there has been significant advancement in this non-contact respiration sensing technology. The advantages of contactless, unobtrusive respiration monitoring have drawn interest in various medical applications such as sleep apnea, sudden infant death syndrome (SIDS), and remote respiratory monitoring of burn victims and COVID patients. This paper provides a perspective on recent advances in biomedical and healthcare applications of Doppler radar, which detects tiny movements of the chest surface to extract heartbeat, respiration, and associated vital-sign parameters (tidal volume, heart rate variability (HRV), and so on) of the human subject. It also highlights the challenges and opportunities of this remote physiological sensing technology and lays out several future research directions for deploying this sensor technology in day-to-day life.
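The classic arctangent demodulation used in quadrature Doppler radar sensing is sketched below as a minimal illustration of how chest displacement is recovered from I/Q channels; the 24 GHz carrier and sampling values are example choices only:

```python
# Arctangent demodulation: recover chest displacement from baseband I/Q radar signals.
import numpy as np

def chest_displacement(i_channel, q_channel, carrier_hz=24e9):
    """Recover chest displacement (metres) from quadrature baseband signals."""
    wavelength = 3e8 / carrier_hz
    phase = np.unwrap(np.arctan2(np.asarray(q_channel, float),
                                 np.asarray(i_channel, float)))
    # A displacement x(t) modulates the phase by 4*pi*x(t)/wavelength.
    return phase * wavelength / (4.0 * np.pi)

# Synthetic check: 5 mm breathing motion at 0.25 Hz sampled at 100 Hz
fs, t = 100, np.arange(0, 20, 1 / 100)
x_true = 0.005 * np.sin(2 * np.pi * 0.25 * t)
phase_true = 4 * np.pi * x_true / (3e8 / 24e9)
i, q = np.cos(phase_true), np.sin(phase_true)
x_est = chest_displacement(i, q)
print(np.allclose(x_est, x_true, atol=1e-9))  # True
```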
13. Couderc JP, Page A, Lutz M, Tsouri GR, Hall B. Assessment of facial video-based detection of atrial fibrillation across human complexion. Cardiovasc Digit Health J 2022; 3:305-312. PMID: 36589315. PMCID: PMC9795266. DOI: 10.1016/j.cvdhj.2022.08.003.
Abstract
Background: Early self-detection of atrial fibrillation (AF) can help delay and/or prevent significant associated complications, including embolic stroke and heart failure. We developed a facial video technology, videoplethysmography (VPG), to detect AF based on the analysis of facial pulsatile signals.
Objective: The purpose of this study was to evaluate the accuracy of a video-based technology to detect AF on a smartphone and to test the performance of the technology in AF patients across the whole spectrum of skin complexion and under various recording conditions.
Methods: The performance of video-based monitoring depends on a set of factors such as the angle and the distance between the camera and the patient's face, the strength of illumination, and the patient's skin tone. We conducted a clinical study involving 60 subjects with a confirmed diagnosis of AF. A continuous electrocardiogram was used as the gold standard for cardiac rhythm annotation. The VPG technology was fine-tuned on a smartphone for the first 15 subjects. Validation recordings were then done using 7053 measurements collected from the remaining 45 subjects.
Results: The VPG technology detected the presence of AF using the video camera from a common smartphone with sensitivity and specificity ≥90%. The ambient level of illumination needs to be ≥100 lux for the technology to deliver consistent performance across all skin tones.
Conclusion: We demonstrated that facial video-based detection of AF provides accurate outpatient cardiac monitoring including high pulse rate accuracy and medical-grade performance for AF detection.
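The reported detection performance can be computed from per-measurement AF calls and the ECG-annotated rhythm with the usual definitions of sensitivity and specificity (the labels below are toy values, not study data):

```python
# Sensitivity and specificity of per-measurement AF calls against ECG annotation.
import numpy as np

def sensitivity_specificity(ecg_is_af, called_af):
    """Both inputs: boolean arrays, one entry per measurement."""
    ecg_is_af = np.asarray(ecg_is_af, bool)
    called_af = np.asarray(called_af, bool)
    tp = np.sum(ecg_is_af & called_af)     # true positives
    fn = np.sum(ecg_is_af & ~called_af)    # missed AF measurements
    tn = np.sum(~ecg_is_af & ~called_af)   # true negatives
    fp = np.sum(~ecg_is_af & called_af)    # false alarms
    return tp / (tp + fn), tn / (tn + fp)

truth = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], bool)
calls = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 1], bool)
print(sensitivity_specificity(truth, calls))  # (0.75, 0.8333...)
```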
Affiliation(s)
- Jean-Philippe Couderc
- VPG Medical Inc., 375 White Spruce Blvd, Rochester, NY 14610 (correspondence)
14. Molinaro N, Schena E, Silvestri S, Massaroni C. Multi-ROI Spectral Approach for the Continuous Remote Cardio-Respiratory Monitoring from Mobile Device Built-In Cameras. Sensors (Basel) 2022; 22:2539. PMID: 35408151. PMCID: PMC9002464. DOI: 10.3390/s22072539.
Abstract
Heart rate (HR) and respiratory rate (fR) can be estimated by processing videos framing the upper body and face regions without any physical contact with the subject. This paper proposes a technique for continuously monitoring HR and fR via a multi-ROI approach based on the spectral analysis of RGB video frames recorded with a mobile device (i.e., a smartphone's camera). The respiratory signal was estimated from the motion of the chest, whereas the cardiac signal was retrieved from the pulsatile activity at the level of the right and left cheeks and the forehead. Videos were recorded from 18 healthy volunteers in four sessions with different user-camera distances (i.e., 0.5 m and 1.0 m) and illumination conditions (i.e., natural and artificial light). For HR estimation, three approaches based on single- or multi-ROI processing were investigated. A commercially available multiparametric device was used to record reference respiratory signals and the electrocardiogram (ECG). The results demonstrated that the multi-ROI approach outperforms the single-ROI approach, providing temporal trends of both vital parameters comparable to those provided by the reference, with a mean absolute error (MAE) consistently below 1 breath·min-1 for fR in all scenarios, and a MAE between 0.7 bpm and 6 bpm for HR estimation, with values increasing at larger distances.
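A minimal sketch of the multi-ROI spectral idea described above: average the power spectra of the pulsatile traces from several facial ROIs and take the dominant in-band frequency as the heart rate. ROI choice, window length, and band are assumptions, not the paper's implementation:

```python
# Fuse several facial ROIs in the spectral domain and pick the dominant pulse frequency.
import numpy as np
from scipy.signal import welch

def multi_roi_heart_rate(roi_signals, fps):
    """roi_signals: list of 1-D per-frame mean-intensity traces, one per ROI."""
    spectra = []
    for sig in roi_signals:
        x = np.asarray(sig, float)
        f, pxx = welch(x - x.mean(), fs=fps, nperseg=min(len(x), 600))
        spectra.append(pxx)
    mean_spectrum = np.mean(spectra, axis=0)          # fuse ROIs in the spectral domain
    band = (f >= 0.7) & (f <= 3.0)                    # 42-180 bpm
    return 60.0 * f[band][np.argmax(mean_spectrum[band])]

# Three synthetic ROI traces sharing a 1.2 Hz (72 bpm) pulse with different noise:
fps, t = 30, np.arange(0, 30, 1 / 30)
rois = [np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.random.randn(t.size) for _ in range(3)]
print(round(multi_roi_heart_rate(rois, fps)))  # 72
```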
Affiliation(s)
- Nunzia Molinaro
- Unit of Measurements and Biomedical Instrumentation, Departmental Faculty of Engineering, Università Campus Bio-Medico di Roma, 00128 Rome, Italy
- Emiliano Schena
- Unit of Measurements and Biomedical Instrumentation, Departmental Faculty of Engineering, Università Campus Bio-Medico di Roma, 00128 Rome, Italy
- Sergio Silvestri
- Unit of Measurements and Biomedical Instrumentation, Departmental Faculty of Engineering, Università Campus Bio-Medico di Roma, 00128 Rome, Italy
- Carlo Massaroni
- Unit of Measurements and Biomedical Instrumentation, Departmental Faculty of Engineering, Università Campus Bio-Medico di Roma, 00128 Rome, Italy