1. Baseman C, Fayfman M, Schechter MC, Ostadabbas S, Santamarina G, Ploetz T, Arriaga RI. Intelligent Care Management for Diabetic Foot Ulcers: A Scoping Review of Computer Vision and Machine Learning Techniques and Applications. J Diabetes Sci Technol 2023:19322968231213378. [PMID: 37953531 DOI: 10.1177/19322968231213378]
Abstract
Ten percent of adults in the United States have a diagnosis of diabetes, and up to a third of these individuals will develop a diabetic foot ulcer (DFU) in their lifetime. Of those who develop a DFU, a fifth will ultimately require amputation, with a mortality rate of up to 70% within five years. The human suffering, economic burden, and disproportionate impact of diabetes on communities of color have led to increasing interest in the use of computer vision (CV) and machine learning (ML) techniques to aid the detection, characterization, monitoring, and even prediction of DFUs. Remote monitoring and automated classification are expected to revolutionize wound care by allowing patients to self-monitor their wound pathology, assisting clinicians in the remote triage of patients, and enabling more immediate interventions when necessary. This scoping review provides an overview of applicable CV and ML techniques, including automated CV methods developed for remote assessment of wound photographs as well as predictive ML algorithms that leverage heterogeneous data streams. We discuss the benefits of such applications and the role they may play in diabetic foot care moving forward. We highlight both the need for, and the possibilities of, computational sensing systems to improve diabetic foot care and bring greater knowledge to patients in need.
Affiliation(s)
- Cynthia Baseman: School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, USA
- Maya Fayfman: Grady Health System, Division of Endocrinology, Metabolism, and Lipids, Department of Medicine, School of Medicine, Emory University, Atlanta, GA, USA
- Marcos C Schechter: Grady Health System, Division of Infectious Diseases, Department of Medicine, School of Medicine, Emory University, Atlanta, GA, USA
- Sarah Ostadabbas: Department of Electrical & Computer Engineering, Northeastern University, Boston, MA, USA
- Gabriel Santamarina: Department of Medicine and Orthopaedics, School of Medicine, Emory University, Atlanta, GA, USA
- Thomas Ploetz: School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, USA
- Rosa I Arriaga: School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, USA
2. Duncan L, Zhu S, Pergolotti M, Giri S, Salsabili H, Faezipour M, Ostadabbas S, Mirbozorgi SA. Camera-Based Short Physical Performance Battery and Timed Up and Go Assessment for Older Adults With Cancer. IEEE Trans Biomed Eng 2023; 70:2529-2539. [PMID: 37028022 DOI: 10.1109/tbme.2023.3253061]
Abstract
This paper presents an automatic camera-based device to monitor and evaluate the gait speed, standing balance, and 5 times sit-stand (5TSS) tests of the Short Physical Performance Battery (SPPB) and the Timed Up and Go (TUG) test. The proposed design measures and calculates the parameters of the SPPB tests automatically. The SPPB data can be used for physical performance assessment of older patients under cancer treatment. This stand-alone device has a Raspberry Pi (RPi) computer, three cameras, and two DC motors. The left and right cameras are used for the gait speed tests. The center camera is used for the standing balance, 5TSS, and TUG tests and for angle positioning of the camera platform toward the subject, with the DC motors turning the cameras left/right and tilting them up/down. The key algorithm for operating the proposed system is developed using Channel and Spatial Reliability Tracking in the cv2 module in Python. Graphical user interfaces (GUIs) on the RPi are developed to run tests and adjust cameras, controlled remotely via a smartphone and its Wi-Fi hotspot. We have tested the implemented camera setup prototype and extracted all SPPB and TUG parameters by conducting experiments on a population of 8 volunteers (male and female, light and dark complexions) in 69 test runs. The measured data and calculated outputs of the system consist of gait speed (0.041 to 1.92 m/s, with average accuracy >95%) and of standing balance, 5TSS, and TUG times, all with average time accuracy >97%.
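The speed computation described above reduces to accumulating the frame-to-frame displacement of the tracked subject and dividing by elapsed time. A minimal sketch of that step; the positions are invented here, and in the paper's system they would come from the CSRT tracker (`cv2.TrackerCSRT_create`) rather than being supplied directly:

```python
# Hypothetical sketch: average gait speed from per-frame tracked positions.
# positions_m: list of (x, z) subject positions in metres, one per frame.

def gait_speed(positions_m, fps):
    """Average walking speed (m/s) over the tracked path."""
    if len(positions_m) < 2:
        return 0.0
    total = 0.0
    for (x0, z0), (x1, z1) in zip(positions_m, positions_m[1:]):
        total += ((x1 - x0) ** 2 + (z1 - z0) ** 2) ** 0.5
    duration = (len(positions_m) - 1) / fps
    return total / duration

# Example: a subject walking 4 m in a straight line over 100 frames at 25 fps.
path = [(0.04 * i, 0.0) for i in range(101)]
print(round(gait_speed(path, 25.0), 2))  # → 1.0
```

The same per-frame distances also give the instantaneous (frame-by-frame) speed the system reports.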
3. Saha R, Jiang L, Salsabili H, Faezipour M, Ostadabbas S, Larimer B, Mirbozorgi SA. Toward a Smart Sensing System to Monitor Small Animal's Physical State via Multi-Frequency Resonator Array. IEEE Trans Biomed Circuits Syst 2023; PP:1-13. [PMID: 37307182 DOI: 10.1109/tbcas.2023.3284823]
Abstract
This paper presents a highly scalable and rack-mountable wireless sensing system for long-term monitoring (i.e., sensing and estimation) of a small animal's physical state (SAPS), such as changes in location and posture within a standard cage. Conventional tracking systems may lack one or more features, such as scalability, cost efficiency, rack-mountability, and insensitivity to lighting conditions, needed to operate 24/7 on a large scale. The proposed sensing mechanism relies on relative changes of multiple resonance frequencies due to the animal's presence over the sensor unit. The sensor unit can track SAPS changes based on changes in electrical properties in the sensor's near field, which appear in the resonance frequencies, i.e., an electromagnetic (EM) signature, within the 200-300 MHz frequency range. The sensing unit is located underneath a standard mouse cage and consists of thin layers of a reading coil and six resonators tuned to six distinct frequencies. ANSYS HFSS software is used to model and optimize the proposed sensor unit and calculate the Specific Absorption Rate (SAR), obtained under 0.05 W/kg. Multiple prototypes have been implemented to test, validate, and characterize the performance of the design by conducting in vitro and in vivo experiments on mice. The in vitro test results have shown a 15 mm spatial resolution in detecting the mouse's location over the sensor array, with maximum frequency shifts of 832 kHz, and posture detection with under 30° resolution. The in vivo experiment on mouse displacement resulted in frequency shifts of up to 790 kHz, indicating the system's capability to detect the mouse's physical state.
4. Liu S, Ostadabbas S. Pressure eye: In-bed contact pressure estimation via contact-less imaging. Med Image Anal 2023; 87:102835. [PMID: 37150066 DOI: 10.1016/j.media.2023.102835]
Abstract
Computer vision has achieved great success in interpreting semantic meanings from images, yet estimating the underlying (non-visual) physical properties of an object is often limited to bulk values rather than a reconstructed dense map. In this work, we present our pressure eye (PEye) approach to estimate, directly from vision signals, the contact pressure between a human body and the surface they are lying on at high resolution. The PEye approach could ultimately enable the prediction and early detection of pressure ulcers in bed-bound patients, which currently depends on the use of expensive pressure mats. Our PEye network is configured in a dual-encoding, shared-decoding form to fuse visual cues and relevant physical parameters in order to reconstruct high-resolution pressure maps (PMs). We also present a pixel-wise resampling approach based on a naive Bayes assumption to further enhance PM regression performance. A percentage of correct sensing (PCS) metric, tailored to evaluating sensing estimation accuracy, is also proposed, providing another perspective on performance under varying error tolerances. We tested our approach via extensive experiments using multimodal sensing technologies to collect data from 102 subjects while lying on a bed. An individual's high-resolution contact pressure data could be estimated from their RGB or long-wavelength infrared (LWIR) images with 91.8% and 91.2% estimation accuracy under the PCS (efs0.1) criterion, superior to state-of-the-art methods in related image regression/translation tasks.
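A PCS-style metric scores a dense estimate by the fraction of pixels whose error falls within a tolerance. The sketch below illustrates the idea only; the tolerance here (a fraction of the ground-truth range) and the arrays are invented, and the paper defines its own efs-based tolerance:

```python
import numpy as np

# Hedged sketch of a "percentage of correct sensing" (PCS) style metric:
# the fraction of pixels whose absolute error is within a tolerance,
# taken here as a fraction of the ground-truth pressure range (an assumption).

def pcs(pred, gt, tol_frac=0.1):
    tol = tol_frac * (gt.max() - gt.min())
    return float(np.mean(np.abs(pred - gt) <= tol))

gt = np.array([[0.0, 10.0], [20.0, 30.0]])
pred = gt + np.array([[1.0, 2.0], [5.0, 0.0]])  # per-pixel errors 1, 2, 5, 0; tol = 3
print(pcs(pred, gt))  # → 0.75
```

Sweeping `tol_frac` traces out the error-tolerance curve the abstract alludes to.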
Affiliation(s)
- Shuangjun Liu: Augmented Cognition Lab, Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA
- Sarah Ostadabbas: Augmented Cognition Lab, Department of Electrical and Computer Engineering, Northeastern University, Boston, MA, USA
5. Zhu S, Hosni SI, Huang X, Wan M, Borgheai SB, McLinden J, Shahriari Y, Ostadabbas S. A dynamical graph-based feature extraction approach to enhance mental task classification in brain-computer interfaces. Comput Biol Med 2023; 153:106498. [PMID: 36634598 DOI: 10.1016/j.compbiomed.2022.106498]
Abstract
Graph theoretic approaches for analyzing the spatiotemporal dynamics of brain activities are under-studied but could be a very promising direction in developing effective brain-computer interfaces (BCIs). Many existing BCI systems use electroencephalogram (EEG) signals to record and decode human neural activities noninvasively. Often, however, the features extracted from the EEG signals ignore the topological information hidden in the EEG temporal dynamics. Moreover, existing graph theoretic approaches are mostly used to reveal the topological patterns of brain functional networks based on synchronization between signals from distinct spatial regions, rather than interdependence between states at different timestamps. In this study, we present a robust fold-wise hyperparameter optimization framework utilizing a series of conventional graph-based measurements combined with spectral graph features, and investigate its discriminative performance on classification of a designed mental task in 6 participants with amyotrophic lateral sclerosis (ALS). Across all participants, we reached an average accuracy of 71.1%±4.5% for mental task classification by combining the global graph-based measurements and the spectral graph features, higher than the conventional non-graph-based feature performance (67.1%±7.5%). Compared to using either type of graph feature alone (66.3%±6.5% for the eigenvalues and 65.9%±5.2% for the global graph features), our feature combination strategy shows considerable improvement in both accuracy and robustness. Our results indicate the feasibility and advantage of the presented fold-wise optimization framework utilizing graph-based features in BCI systems targeted at end-users.
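As an illustration of spectral graph features (not the authors' pipeline), one can build a graph whose nodes are signal windows and use the Laplacian eigenvalue spectrum as a feature vector. The window length, correlation-based adjacency, and threshold below are all assumptions for the sketch:

```python
import numpy as np

# Illustrative sketch: nodes are fixed-length windows of one channel,
# edges connect windows whose absolute Pearson correlation exceeds a
# threshold, and the sorted Laplacian spectrum serves as the feature vector.

def spectral_graph_features(signal, win=32, thresh=0.2):
    n = len(signal) // win
    windows = signal[: n * win].reshape(n, win)
    corr = np.corrcoef(windows)                  # n x n window similarity
    adj = (np.abs(corr) > thresh).astype(float)
    np.fill_diagonal(adj, 0.0)
    lap = np.diag(adj.sum(axis=1)) - adj         # combinatorial Laplacian
    return np.sort(np.linalg.eigvalsh(lap))      # eigenvalues, ascending

rng = np.random.default_rng(0)
feats = spectral_graph_features(rng.standard_normal(256))
print(feats.shape)  # → (8,)
```

The smallest Laplacian eigenvalue is always (numerically) zero; the rest encode the graph's connectivity structure.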
Affiliation(s)
- Shaotong Zhu: Department of Electrical and Computer Engineering, Northeastern University, 360 Huntington Ave, Boston, MA 02115, USA
- Sarah Ismail Hosni: Electrical, Computer, and Biomedical Engineering Department, University of Rhode Island, Kingston, RI 02881, USA
- Xiaofei Huang: Department of Electrical and Computer Engineering, Northeastern University, 360 Huntington Ave, Boston, MA 02115, USA
- Michael Wan: Department of Electrical and Computer Engineering, Northeastern University, 360 Huntington Ave, Boston, MA 02115, USA
- Seyyed Bahram Borgheai: Electrical, Computer, and Biomedical Engineering Department, University of Rhode Island, Kingston, RI 02881, USA
- John McLinden: Electrical, Computer, and Biomedical Engineering Department, University of Rhode Island, Kingston, RI 02881, USA
- Yalda Shahriari: Electrical, Computer, and Biomedical Engineering Department, University of Rhode Island, Kingston, RI 02881, USA
- Sarah Ostadabbas: Department of Electrical and Computer Engineering, Northeastern University, 360 Huntington Ave, Boston, MA 02115, USA
6. Liu S, Huang X, Fu N, Li C, Su Z, Ostadabbas S. Simultaneously-Collected Multimodal Lying Pose Dataset: Enabling In-Bed Human Pose Monitoring. IEEE Trans Pattern Anal Mach Intell 2023; 45:1106-1118. [PMID: 35239476 DOI: 10.1109/tpami.2022.3155712]
Abstract
The computer vision field has achieved great success in interpreting semantic meanings from images, yet its algorithms can be brittle for tasks with adverse vision conditions and for those suffering from limited data/label pairs. Among these tasks is in-bed human pose monitoring, which has significant value in many healthcare applications. In-bed pose monitoring in natural settings involves pose estimation in complete darkness or full occlusion. The lack of publicly available in-bed pose datasets hinders the applicability of many successful human pose estimation algorithms for this task. In this paper, we introduce our Simultaneously-collected multimodal Lying Pose (SLP) dataset, which includes in-bed pose images from 109 participants captured using multiple imaging modalities: RGB, long-wave infrared (LWIR), depth, and pressure map. We also present a physical hyperparameter tuning strategy for ground-truth pose label generation under adverse vision conditions. The SLP design is compatible with mainstream human pose datasets; therefore, state-of-the-art 2D pose estimation models can be trained effectively on the SLP data, with promising performance as high as 95% at PCKh@0.5 on a single modality. The pose estimation performance of these models can be further improved by including additional modalities through the proposed collaborative scheme.
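The PCKh@0.5 score quoted above counts a predicted keypoint as correct when it lies within half the head-segment length of its ground truth. A minimal sketch with invented coordinates:

```python
import numpy as np

# Sketch of the PCKh metric: a predicted keypoint counts as correct when it
# lies within alpha x the head-segment length of the ground-truth keypoint.

def pckh(pred, gt, head_len, alpha=0.5):
    dists = np.linalg.norm(pred - gt, axis=-1)
    return float(np.mean(dists <= alpha * head_len))

gt = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
pred = gt + np.array([[1.0, 0.0], [0.0, 3.0], [6.0, 0.0]])  # errors 1, 3, 6
print(round(pckh(pred, gt, head_len=10.0), 2))  # → 0.67
```

Normalizing by head size makes the threshold scale-invariant across subjects and image resolutions.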
7. Hosni SMI, Borgheai SB, McLinden J, Zhu S, Huang X, Ostadabbas S, Shahriari Y. A Graph-Based Nonlinear Dynamic Characterization of Motor Imagery Toward an Enhanced Hybrid BCI. Neuroinformatics 2022; 20:1169-1189. [PMID: 35907174 DOI: 10.1007/s12021-022-09595-2]
Abstract
Decoding neural responses from multimodal information sources, including electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), has the transformative potential to advance hybrid brain-computer interfaces (hBCIs). However, the modest performance improvements of existing hBCIs might be attributed to the lack of computational frameworks that exploit complementary, synergistic properties in multimodal features. This study proposes a multimodal data fusion framework to represent and decode synergistic multimodal motor imagery (MI) neural responses. We hypothesize that exploiting EEG nonlinear dynamics adds a new informative dimension to the commonly combined EEG-fNIRS features and will ultimately increase the synergy between EEG and fNIRS features toward an enhanced hBCI. The EEG nonlinear dynamics were quantified by extracting graph-based recurrence quantification analysis (RQA) features to complement the commonly used spectral features for an enhanced multimodal configuration when combined with fNIRS. The high-dimensional multimodal features were then given to a feature selection algorithm relying on the least absolute shrinkage and selection operator (LASSO) for fused feature selection. A linear support vector machine (SVM) was then used to evaluate the framework. The mean hybrid classification performance improved by up to 15% and 4% compared to unimodal EEG and fNIRS, respectively. The proposed graph-based framework substantially increased the contribution of EEG features to hBCI classification, from 28.16% up to 52.9%, when the nonlinear dynamics were introduced, and improved the performance by approximately 2%. These findings suggest that graph-based nonlinear dynamics can increase the synergy between EEG and fNIRS features for an enhanced MI response representation that is not dominated by a single modality.
Affiliation(s)
- Sarah M I Hosni: Department of Electrical, Computer & Biomedical Engineering, University of Rhode Island (URI), Kingston, RI 02881, USA
- Seyyed B Borgheai: Department of Electrical, Computer & Biomedical Engineering, University of Rhode Island (URI), Kingston, RI 02881, USA
- John McLinden: Department of Electrical, Computer & Biomedical Engineering, University of Rhode Island (URI), Kingston, RI 02881, USA
- Shaotong Zhu: Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115, USA
- Xiaofei Huang: Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115, USA
- Sarah Ostadabbas: Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115, USA
- Yalda Shahriari: Department of Electrical, Computer & Biomedical Engineering, University of Rhode Island (URI), Kingston, RI 02881, USA
8. Mak J, Kocanaogullari D, Huang X, Kersey J, Shih M, Grattan ES, Skidmore ER, Wittenberg GF, Ostadabbas S, Akcakaya M. Detection of Stroke-Induced Visual Neglect and Target Response Prediction Using Augmented Reality and Electroencephalography. IEEE Trans Neural Syst Rehabil Eng 2022; 30:1840-1850. [PMID: 35786558 DOI: 10.1109/tnsre.2022.3188184]
Abstract
We aim to build a system incorporating electroencephalography (EEG) and augmented reality (AR) that is capable of identifying the presence of visual spatial neglect (SN) and mapping the estimated neglected visual field. An EEG-based brain-computer interface (BCI) was used to identify those spatiospectral features that best detect participants with SN among stroke survivors using their EEG responses to ipsilesional and contralesional visual stimuli. Frontal-central delta and alpha, frontal-parietal theta, Fp1 beta, and left frontal gamma were found to be important features for neglect detection. Additionally, temporal analysis of the responses shows that the proposed model is accurate in detecting potentially neglected targets. These targets were predicted using common spatial patterns as the feature extraction algorithm and regularized discriminant analysis combined with kernel density estimation for classification. With our preliminary results, our system shows promise for reliably detecting the presence of SN and predicting visual target responses in stroke patients with SN.
9. Huang X, Mak J, Wears A, Price RB, Akcakaya M, Ostadabbas S, Woody ML. Using Neurofeedback from Steady-State Visual Evoked Potentials to Target Affect-Biased Attention in Augmented Reality. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:2314-2318. [PMID: 36085716 PMCID: PMC9801955 DOI: 10.1109/embc48229.2022.9871982]
Abstract
Biases in attention to emotional stimuli (i.e., affect-biased attention) contribute to the development and maintenance of depression and anxiety and may be a promising target for intervention. Past attempts to therapeutically modify affect-biased attention have been unsatisfactory due to issues with reliability and precision. Electroencephalogram (EEG)-derived steady-state visual evoked potentials (SSVEPs) provide a temporally sensitive biological index of attention to competing visual stimuli at the level of neuronal populations in the visual cortex. SSVEPs can potentially be used to quantify whether affective distractors vs. task-relevant stimuli have "won" the competition for attention at a trial-by-trial level during neurofeedback sessions. This study piloted a protocol for SSVEP-based neurofeedback training to modify affect-biased attention using a portable augmented-reality (AR) EEG interface. During neurofeedback sessions with five healthy participants, significantly greater attention was given to the task-relevant stimulus (a Gabor patch) than to affective distractors (negative emotional expressions) across SSVEP indices (p < 0.0001). SSVEP indices exhibited excellent internal consistency, as evidenced by a maximum Guttman split-half coefficient of 0.97 when comparing even to odd trials. Further testing is required, but the findings suggest several SSVEP neurofeedback calculation methods most deserving of additional investigation and support ongoing efforts to develop and implement SSVEP-guided AR-based neurofeedback training to modify affect-biased attention in adolescent girls at high risk for depression.
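A Guttman split-half coefficient of the kind reported above can be computed from even- and odd-trial halves of an index. The sketch below uses the lambda-4 form on synthetic data; the paper's exact estimator and data are not reproduced here:

```python
import numpy as np

# Hedged sketch of the Guttman split-half (lambda-4) reliability coefficient:
# lambda4 = 2 * (1 - (var(A) + var(B)) / var(A + B)), where A and B are the
# even- and odd-trial halves of the same index.

def guttman_split_half(even, odd):
    total = np.asarray(even) + np.asarray(odd)
    return 2.0 * (1.0 - (np.var(even) + np.var(odd)) / np.var(total))

rng = np.random.default_rng(2)
true_score = rng.standard_normal(50)           # shared "signal" per trial pair
even = true_score + 0.1 * rng.standard_normal(50)
odd = true_score + 0.1 * rng.standard_normal(50)
print(round(guttman_split_half(even, odd), 2))  # close to 1 for consistent halves
```

The coefficient approaches 1 as the two halves covary strongly relative to their individual variances.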
Affiliation(s)
- Xiaofei Huang: Department of Electrical and Computer Engineering, Northeastern University, 360 Huntington Ave, Boston, Massachusetts 02115, USA
- Jennifer Mak: Department of Bioengineering, University of Pittsburgh, 3700 O'Hara St, Pittsburgh, PA 15213, USA
- Anna Wears: University of Pittsburgh School of Medicine, 3550 Terrace Street, Pittsburgh, PA 15261, USA
- Rebecca B. Price: University of Pittsburgh School of Medicine, 3550 Terrace Street, Pittsburgh, PA 15261, USA
- Murat Akcakaya: Department of Electrical and Computer Engineering, University of Pittsburgh, 3700 O'Hara St, Pittsburgh, PA 15213, USA
- Sarah Ostadabbas (corresponding author): Department of Electrical and Computer Engineering, Northeastern University, 360 Huntington Ave, Boston, Massachusetts 02115, USA
- Mary L. Woody: University of Pittsburgh School of Medicine, 3550 Terrace Street, Pittsburgh, PA 15261, USA
10. Kamath CV, Liu S, Ostadabbas S. Privacy-Preserving In-Bed Pose and Posture Tracking on Edge. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:3365-3369. [PMID: 36085982 DOI: 10.1109/embc48229.2022.9870881]
Abstract
In-bed behavior monitoring is commonly needed for bed-bound patients and has long been confined to wearable devices or expensive pressure mapping systems. Meanwhile, vision-based human pose and posture tracking, despite receiving much attention and success in the computer vision field, has seen limited use for in-bed cases due to the significant privacy concerns surrounding this application. Moreover, the inference models for mainstream pose and posture estimation often require excessive computing resources, impeding their implementation on edge devices. In this paper, we introduce a privacy-preserving in-bed pose and posture tracking system running entirely on an edge device, with added functionality to detect stable motion and to set user-specific alerts for given poses. We evaluated the estimation accuracy of our system on a series of retrospective long-wave infrared (LWIR) images as well as samples from a real-world test environment. Our test results reached over 93.6% estimation accuracy for in-bed poses and over 95.9% accuracy in estimating three in-bed posture categories.
11. Zhu S, Hosni SI, Huang X, Borgheai SB, McLinden J, Shahriari Y, Ostadabbas S. A Graph-Based Feature Extraction Algorithm Towards a Robust Data Fusion Framework for Brain-Computer Interfaces. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:878-881. [PMID: 34891430 DOI: 10.1109/embc46164.2021.9630804]
Abstract
OBJECTIVE: The topological information hidden in EEG spectral dynamics is ignored in the majority of existing brain-computer interface (BCI) systems. Moreover, a systematic multimodal fusion of EEG with other informative brain signals, such as functional near-infrared spectroscopy (fNIRS), toward enhancing the performance of BCI systems has not been fully investigated. In this study, we present a robust EEG-fNIRS data fusion framework utilizing a series of graph-based EEG features and investigate their performance on a motor imagery (MI) classification task.
METHOD: We first extract the amplitude and phase sequences of users' multi-channel EEG signals based on complex Morlet wavelet time-frequency maps, and then convert them into an undirected graph to extract EEG topological features. The graph-based features from EEG are selected by a thresholding method and fused with the temporal features from fNIRS signals, after each set is selected by the least absolute shrinkage and selection operator (LASSO) algorithm. The fused features are then classified as MI task vs. baseline by a linear support vector machine (SVM) classifier.
RESULTS: The time-frequency graphs of EEG signals improved the MI classification accuracy by ~5% compared to graphs built on the band-pass-filtered temporal EEG signals. Our proposed graph-based method also showed performance comparable to classical EEG features based on power spectral density (PSD), but with a much smaller standard deviation, showing its robustness for potential use in a practical BCI system. Our fusion analysis revealed a considerable improvement of ~17% over the highest EEG-only average accuracy and ~3% over the highest fNIRS-only accuracy, demonstrating enhanced performance when modality fusion is used relative to single-modality outcomes.
SIGNIFICANCE: Our findings indicate the potential of the proposed data fusion framework utilizing graph-based features in hybrid BCI systems, making motor imagery inference more accurate and more robust.
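The LASSO-then-SVM fusion pattern described above can be sketched with scikit-learn on synthetic features. The data, alpha, and feature counts are invented, and the classifier is scored on its own training data purely to keep the example short; a real pipeline would cross-validate, as the paper does:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.svm import LinearSVC

# Minimal sketch of the select-with-LASSO, classify-with-linear-SVM pattern.
rng = np.random.default_rng(1)
X = rng.standard_normal((120, 40))           # 120 trials, 40 fused features
y = (X[:, 3] - X[:, 7] > 0).astype(int)      # labels driven by only 2 features

sel = Lasso(alpha=0.05).fit(X, y)
keep = np.flatnonzero(sel.coef_)             # feature indices LASSO retained
clf = LinearSVC(dual=False).fit(X[:, keep], y)
acc = clf.score(X[:, keep], y)               # training accuracy, for illustration
print(len(keep), round(acc, 2))
```

LASSO's L1 penalty zeroes out most uninformative coefficients, so the SVM trains on a much smaller fused feature set.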
12. Kocanaogullari D, Huang X, Mak J, Shih M, Skidmore E, Wittenberg GF, Ostadabbas S, Akcakaya M. Fine-tuning and Personalization of EEG-based Neglect Detection in Stroke Patients. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:1096-1099. [PMID: 34891478 DOI: 10.1109/embc46164.2021.9630794]
Abstract
Spatial neglect (SN) is a neurological disorder that causes inattention to visual stimuli in the contralesional visual field, stemming from unilateral brain injury such as stroke. The current gold standard method of SN assessment, the conventional Behavioral Inattention Test (BIT-C), is highly variable and inconsistent in its results. In our previous work, we built an augmented reality (AR)-based BCI to overcome the limitations of the BIT-C and classified between neglected and non-neglected targets with high accuracy. Our previous approach included personalization of the neglect detection classifier but the process required rigorous retraining from scratch and time-consuming feature selection for each participant. Future steps of our work will require rapid personalization of the neglect classifier; therefore, in this paper, we investigate fine-tuning of a neural network model to hasten the personalization process.
13. Hosni SMI, Borgheai SB, McLinden J, Zhu S, Huang X, Ostadabbas S, Shahriari Y. Graph-based Recurrence Quantification Analysis of EEG Spectral Dynamics for Motor Imagery-based BCIs. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:6453-6457. [PMID: 34892589 DOI: 10.1109/embc46164.2021.9630068]
Abstract
Despite continuous research, communication approaches based on brain-computer interfaces (BCIs) are not yet an efficient and reliable means that severely disabled patients can depend on. To date, most motor imagery (MI)-based BCI systems use conventional spectral analysis methods to extract discriminative features and classify the associated electroencephalogram (EEG)-based sensorimotor rhythm (SMR) dynamics, which results in relatively low performance. In this study, we investigated the feasibility of using recurrence quantification analysis (RQA) and complex-network-theory graph-based feature extraction methods as a novel way to improve MI-BCI performance. Rooted in chaos theory, these features explore the nonlinear dynamics underlying the MI neural responses as a new informative dimension for classifying MI.
METHOD: EEG time series recorded from six healthy participants performing MI-rest tasks were projected into multidimensional phase-space trajectories in order to construct the corresponding recurrence plots (RPs). Eight nonlinear graph-based RQA features were extracted from the RPs and then compared to the classical spectral features through a 5-fold nested cross-validation procedure for parameter optimization using a linear support vector machine (SVM) classifier.
RESULTS: The nonlinear graph-based RQA features improved the average performance of the MI-BCI by 5.8% compared to the classical features.
SIGNIFICANCE: These findings suggest that RQA and complex network analysis could provide new informative dimensions for the nonlinear characteristics of EEG signals in order to enhance MI-BCI performance.
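A recurrence plot of the kind used above is built by time-delay embedding a signal and thresholding pairwise distances between embedded states. The embedding dimension, delay, and radius below are illustrative choices, not the paper's parameters:

```python
import numpy as np

# Sketch of a recurrence plot: time-delay embed a 1-D signal, then mark
# pairs of states whose Euclidean distance is below a radius eps.

def recurrence_plot(x, dim=3, tau=2, eps=0.5):
    n = len(x) - (dim - 1) * tau
    # Row j of emb is the embedded state [x[j], x[j+tau], x[j+2*tau], ...].
    emb = np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < eps).astype(int)

t = np.linspace(0, 4 * np.pi, 100)
rp = recurrence_plot(np.sin(t))
print(rp.shape, rp.trace() == rp.shape[0])  # → (96, 96) True
```

RQA features (recurrence rate, determinism, and so on) are then statistics computed over this binary matrix, and its graph interpretation treats each embedded state as a node with recurrences as edges.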
14. Duncan L, Gulati P, Giri S, Ostadabbas S, Abdollah Mirbozorgi S. Camera-Based Human Gait Speed Monitoring and Tracking for Performance Assessment of Elderly Patients with Cancer. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3522-3525. [PMID: 34891999 DOI: 10.1109/embc46164.2021.9630474]
Abstract
This paper presents a camera-based device for monitoring walking gait speed. The walking gait speed data will be used for performance assessment of elderly patients with cancer and for calibrating wearable walking gait speed monitoring devices. This standalone device has a Raspberry Pi computer, three cameras (two for finding the trajectory and gait speed of the subject and one for tracking the subject), and two stepper motors. The stepper motors turn the camera platform left and right and tilt it up and down using video footage from the center camera. The left and right cameras are used to record videos of the person walking. The algorithm for operating the proposed system is developed in Python. The measured data and calculated outputs of the system consist of frame times, distances from the center camera, horizontal angles, distances moved, instantaneous (frame-by-frame) gait speed, total distance walked, and average speed. The system covers a large lab area of 134.3 m² and has achieved errors of less than 5% for gait speed calculation.
Clinical Relevance: This project will help specialists adjust the chemotherapy dosage for elderly patients with cancer. The results will also be used to analyze human walking movements for frailty estimation and rehabilitation applications.
Collapse
|
15
|
Yang X, Jiang L, Giri S, Ostadabbas S, Abdollah Mirbozorgi S. A Wearable Walking Gait Speed-Sensing Device using Frequency Bifurcations of Multi-Resonator Inductive Link. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:7272-7275. [PMID: 34892777 DOI: 10.1109/embc46164.2021.9630127] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
This paper describes a wearable inductive sensing system to monitor (i.e., sense and estimate) walking gait speed. The proposed design relies on a multi-resonance inductive link to quantify the angle between the human legs for calculating walking speed. The walking gait speed can be used to estimate frailty in elderly patients with cancer. We have designed, optimized, and implemented a multi-resonator sensor unit to precisely measure the angle between the legs during walking. The couplings between resonators change with lateral displacement during walking, and a reading coil senses the frequency bifurcations corresponding to the changes in the angle between the legs. The proposed design is optimized using ANSYS HFSS and implemented using copper foil. The specific absorption rate (SAR) in the human body is calculated to be 0.035 W/kg using the developed HFSS model. The operating frequency range of the proposed sensor is 25 MHz to 46 MHz, and it can measure angles over a 90° span (-45° to +45°). The measured angle-estimation resolution shows that the sensor is capable of calculating the walking speed with a resolution finer than 0.1 m/s.
Collapse
|
16
|
Farnoosh A, Wang Z, Zhu S, Ostadabbas S. A Bayesian Dynamical Approach for Human Action Recognition. Sensors (Basel) 2021; 21:s21165613. [PMID: 34451054 PMCID: PMC8402468 DOI: 10.3390/s21165613] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Revised: 08/16/2021] [Accepted: 08/16/2021] [Indexed: 11/24/2022]
Abstract
We introduce a generative Bayesian switching dynamical model for action recognition in 3D skeletal data. Our model encodes highly correlated skeletal data into a few sets of low-dimensional switching temporal processes, and from there decodes to the motion data and their associated action labels. We parameterize these temporal processes with a switching deep autoregressive prior to accommodate both multimodal and higher-order nonlinear inter-dependencies. This results in a dynamical deep generative latent model that parses meaningful intrinsic states in skeletal dynamics and enables action recognition. These sequences of states provide visual and quantitative interpretations of the motion primitives that give rise to each action class, which had not been explored previously. In contrast to previous works, which often overlook temporal dynamics, our method explicitly models temporal transitions and is generative. Our experiments on two large-scale 3D skeletal datasets substantiate the superior performance of our model in comparison with state-of-the-art methods. Specifically, our method achieved 6.3% higher action classification accuracy (by incorporating a dynamical generative framework) and 3.5% lower predictive error (by employing a nonlinear second-order dynamical transition model) compared with the best-performing competitors.
Collapse
|
17
|
Kocanaogullari D, Mak J, Kersey J, Khalaf A, Ostadabbas S, Wittenberg G, Skidmore E, Akcakaya M. EEG-based Neglect Detection for Stroke Patients. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:264-267. [PMID: 33017979 DOI: 10.1109/embc44109.2020.9176378] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Spatial neglect (SN) is a neurological syndrome in stroke patients, commonly due to unilateral brain injury, that results in inattention to stimuli in the contralesional visual field. The current gold standard for SN assessment is the behavioral inattention test (BIT), a series of pen-and-paper tests. These tests can be unreliable due to high variability in subtest performance; they are limited in their ability to measure the extent of neglect, and they do not assess patients in a realistic and dynamic environment. In this paper, we present an electroencephalography (EEG)-based brain-computer interface (BCI) that utilizes the Starry Night Test to overcome the limitations of traditional SN assessment tests. Our overall goal with this EEG-based Starry Night neglect detection system is to provide a more detailed assessment of SN, specifically to detect both the presence of SN and its severity. To achieve this goal, as an initial step, we utilize a convolutional neural network (CNN)-based model to analyze EEG data and accordingly propose a neglect detection method to distinguish between stroke patients without neglect and stroke patients with neglect. Clinical relevance: The proposed EEG-based BCI can be used to detect neglect in stroke patients with high accuracy, specificity, and sensitivity. Further research will additionally allow estimation of a patient's field of view (FOV) for a more detailed assessment of neglect.
Collapse
|
18
|
Hejazi D, Liu S, Farnoosh A, Ostadabbas S, Kar S. Development of use-specific high-performance cyber-nanomaterial optical detectors by effective choice of machine learning algorithms. Mach Learn : Sci Technol 2020. [DOI: 10.1088/2632-2153/ab8967] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
Abstract
Due to their inherent variabilities, nanomaterials-based sensors are challenging to translate into real-world applications, where reliability and reproducibility are key. Machine learning (ML) can be a powerful approach for obtaining reliable inferences from data generated by such sensors. Here, we show that the best choice of ML algorithm in a cyber-nanomaterial detector is largely determined by the specific use considerations, including accuracy, computational cost, speed, and resilience against drift and long-term ageing effects. When sufficient data and computing resources are available, the highest sensing accuracy is achieved by the k-nearest neighbors (kNN) and Bayesian inference algorithms; however, these algorithms can be computationally expensive for real-time applications. In contrast, artificial neural networks (ANNs) are computationally expensive to train (offline), but they provide the fastest results under testing conditions (online) while remaining reasonably accurate. When access to data is limited, support vector machines (SVMs) perform well even with small training sample sizes, while other algorithms show considerable reduction in accuracy if data are scarce, hence setting a lower limit on the required training data size. We also show that, by tracking and modeling the long-term drift of the detector performance over a one-year time frame, it is possible to dramatically improve the predictive accuracy without any re-calibration. Our research shows for the first time that, if the ML algorithm is chosen specific to the use case, low-cost solution-processed cyber-nanomaterial detectors can be practically implemented under diverse operational requirements, despite their inherent variabilities.
Collapse
|
19
|
Liu S, Yin Y, Ostadabbas S. In-Bed Pose Estimation: Deep Learning With Shallow Dataset. IEEE J Transl Eng Health Med 2019; 7:4900112. [PMID: 30792942 PMCID: PMC6360998 DOI: 10.1109/jtehm.2019.2892970] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/28/2018] [Revised: 01/06/2019] [Accepted: 01/07/2019] [Indexed: 11/09/2022]
Abstract
This paper presents a robust human posture and body parts detection method for a specific application scenario known as in-bed pose estimation. Although human pose estimation for various computer vision (CV) applications has been studied extensively in the last few decades, in-bed pose estimation using camera-based vision methods has been ignored by the CV community because it is assumed to be identical to the general-purpose pose estimation problem. However, in-bed pose estimation has its own specialized aspects and comes with specific challenges, including notable differences in lighting conditions throughout the day and a pose distribution different from the common human surveillance viewpoint. In this paper, we demonstrate that these challenges significantly reduce the effectiveness of existing general-purpose pose estimation models. To address the lighting variation challenge, an infrared selective (IRS) image acquisition technique is proposed to provide uniform-quality data under various lighting conditions. In addition, to deal with the unconventional pose perspective, a two-end histogram of oriented gradients (HOG) rectification method is presented. Deep learning frameworks have proven to be the most effective models for human pose estimation; however, the lack of a large public dataset of in-bed poses prevents training a large network from scratch. In this paper, we explored the idea of employing a convolutional neural network (CNN) model pre-trained on large public datasets of general human poses and fine-tuning it using our own shallow (limited in size and different in perspective and color) in-bed IRS dataset. We developed an IRS imaging system and collected IRS image data from several realistic life-size mannequins in a simulated hospital room environment. A pre-trained CNN called the convolutional pose machine (CPM) was fine-tuned for in-bed pose estimation by re-training its specific intermediate layers. Using the HOG rectification method, the pose estimation performance of CPM improved significantly, by 26.4% in the probability of correct keypoint (PCK) criterion at PCK0.1, compared to the model without such rectification. Even when testing only with well-aligned in-bed pose images, our fine-tuned model still surpassed the traditionally tuned CNN by another 16.6% increase in pose estimation accuracy.
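The PCK criterion used for evaluation can be sketched as follows; this is a generic implementation with a hypothetical reference size (e.g., torso length or bounding-box diagonal), not necessarily the paper's exact normalization:

```python
import numpy as np

def pck(pred, gt, ref_size, alpha=0.1):
    """Probability of Correct Keypoint: a prediction counts as correct when its
    distance to the ground truth is below alpha * ref_size.
    pred, gt: (num_joints, 2) arrays; returns the fraction of correct joints."""
    dist = np.linalg.norm(pred - gt, axis=1)
    return float((dist < alpha * ref_size).mean())

# Toy skeleton with four joints; one prediction misses badly
gt = np.array([[10.0, 10.0], [50.0, 40.0], [80.0, 90.0], [30.0, 70.0]])
pred = gt + np.array([[1.0, 0.0], [0.0, 2.0], [30.0, 0.0], [1.0, 1.0]])
print(pck(pred, gt, ref_size=100.0))  # three of four joints within 0.1 * 100 px
```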
Collapse
Affiliation(s)
- Shuangjun Liu
- Augmented Cognition Laboratory, Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
- Yu Yin
- Augmented Cognition Laboratory, Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
- Sarah Ostadabbas
- Augmented Cognition Laboratory, Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
Collapse
|
20
|
Nabian M, Yin Y, Wormwood J, Quigley KS, Barrett LF, Ostadabbas S. An Open-Source Feature Extraction Tool for the Analysis of Peripheral Physiological Data. IEEE J Transl Eng Health Med 2018; 6:2800711. [PMID: 30443441 PMCID: PMC6231905 DOI: 10.1109/jtehm.2018.2878000] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/20/2018] [Revised: 09/05/2018] [Accepted: 10/22/2018] [Indexed: 11/09/2022]
Abstract
Electrocardiogram, electrodermal activity, electromyogram, continuous blood pressure, and impedance cardiography are among the most commonly used peripheral physiological signals (biosignals) in psychological studies and healthcare applications, including health tracking, sleep quality assessment, disease early-detection/diagnosis, and understanding human emotional and affective phenomena. This paper presents the development of a biosignal-specific processing toolbox (Bio-SP tool) for preprocessing and feature extraction of these physiological signals according to the state-of-the-art studies reported in the scientific literature and feedback received from the field experts. Our open-source Bio-SP tool is intended to assist researchers in affective computing, digital and mobile health, and telemedicine to extract relevant physiological patterns (i.e., features) from these biosignals semi-automatically and reliably. In this paper, we describe the successful algorithms used for signal-specific quality checking, artifact/noise filtering, and segmentation along with introducing features shown to be highly relevant to category discrimination in several healthcare applications (e.g., discriminating patterns associated with disease versus non-disease). Further, the Bio-SP tool is a publicly-available software written in MATLAB with a user-friendly graphical user interface (GUI), enabling future crowd-sourced modification to these tools. The GUI is compatible with MathWorks Classification Learner app for inference model development, such as model training, cross-validation scheme farming, and classification result computation.
Collapse
Affiliation(s)
- Mohsen Nabian
- Augmented Cognition Lab, Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
- Harvard Medical School, Boston, MA 02115, USA
- Yu Yin
- Augmented Cognition Lab, Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
- Lisa F. Barrett
- Department of Psychology, Northeastern University, Boston, MA 02115, USA
- Sarah Ostadabbas
- Augmented Cognition Lab, Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
Collapse
|
21
|
Heydarzadeh M, Nourani M, Ostadabbas S. Gait variability assessment in neuro-degenerative patients by measuring complexity of independent sources. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2017:3186-3189. [PMID: 29060575 DOI: 10.1109/embc.2017.8037534] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Patients suffering from neurodegenerative diseases have difficulty with normal locomotion, a problem that progresses with the course of the disease. Gait assessment is an effective way of diagnosing the disease and quantifying its progression, which can effectively help prevent falls. In this paper, an automatic assessment method for analyzing gait data obtained from force-sensor insoles is introduced. The gait analysis method is based on measuring the complexity of the gait data after extracting independent sources. The results are promising, with an average accuracy of 94% across three different diseases.
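One common signal-complexity measure that could serve the role described above is sample entropy; the simplified sketch below (not necessarily the measure used in the paper) contrasts a regular signal with an irregular one:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Simplified sample entropy: -log of the ratio of (m+1)-length to m-length
    template matches within tolerance r * std(x). Higher = less regular."""
    x = np.asarray(x, float)
    tol = r * x.std()

    def matches(m):
        # All overlapping templates of length m
        tmpl = np.array([x[i : i + m] for i in range(len(x) - m + 1)])
        # Chebyshev distance between every pair of templates
        d = np.max(np.abs(tmpl[:, None, :] - tmpl[None, :, :]), axis=-1)
        return (d < tol).sum() - len(tmpl)  # exclude self-matches

    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))   # highly regular gait-like signal
noisy = rng.standard_normal(500)                    # irregular signal
print(sample_entropy(regular) < sample_entropy(noisy))  # regular is less complex
```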
Collapse
|
22
|
Heydarzadeh M, Nourani M, Ostadabbas S. In-bed posture classification using deep autoencoders. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2016:3839-3842. [PMID: 28269123 DOI: 10.1109/embc.2016.7591565] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Pressure ulcers are high-prevalence complications among bed-bound patients that are not only extremely painful and difficult to treat but also impose a great burden on our healthcare system. We target automatic posture detection, a key module in all pressure ulcer monitoring platforms. Using data collected from a commercially available pressure mapping system, we applied deep neural networks to automatically classify in-bed posture using features extracted with the histogram of gradients technique. High accuracy of up to 98% was achieved in classifying five different in-bed postures across more than 60,000 pressure images.
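A whole-image variant of the histogram-of-gradients descriptor mentioned above can be sketched as follows (real HOG aggregates per-cell histograms, and the toy pressure maps are invented for illustration):

```python
import numpy as np

def hog_features(img, bins=8):
    """Histogram of gradient orientations, weighted by gradient magnitude --
    a compact descriptor of a pressure image (whole-image simplification)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi  # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)  # normalize for scale invariance

# Two toy "pressure maps": a horizontal bar vs. a vertical bar
horiz = np.zeros((16, 16))
horiz[7:9, 2:14] = 1.0
vert = horiz.T
f1, f2 = hog_features(horiz), hog_features(vert)
print(np.argmax(f1) != np.argmax(f2))  # dominant gradient orientations differ
```

In a full pipeline such descriptors (or autoencoder codes built on top of them) would feed a classifier over the five posture classes.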
Collapse
|
23
|
Ostadabbas S, Housley SN, Sebkhi N, Richards K, Wu D, Zhang Z, Rodriguez MG, Warthen L, Yarbrough C, Belagaje S, Butler AJ, Ghovanloo M. Tongue-controlled robotic rehabilitation: A feasibility study in people with stroke. J Rehabil Res Dev 2017; 53:989-1006. [PMID: 28475207 DOI: 10.1682/jrrd.2015.06.0122] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2015] [Revised: 01/25/2016] [Indexed: 11/05/2022]
Abstract
Stroke survivors with severe upper limb (UL) impairment face years of therapy to recover function. Robot-assisted therapy (RT) is increasingly used in the field for goal-oriented rehabilitation as a means to improve UL function. To be used effectively for wrist and hand therapy, current RT systems require the patient to have a minimal active range of movement in the UL, so patients without active voluntary movement cannot use these systems. We have overcome this limitation by harnessing tongue motion to allow patients to control a robot using synchronous tongue and hand movement. This novel RT device combines a commercially available UL exoskeleton, the Hand Mentor, with our custom-designed Tongue Drive System as its controller. We conducted a proof-of-concept study on six nondisabled participants to evaluate system usability, and a case series on three participants with movement limitations from poststroke hemiparesis. Data from two stroke survivors indicate that, for patients with chronic, moderate UL impairment following stroke, a 15-session training regimen resulted in modest decreases in impairment, with functional improvement and improved quality of life. The improvement met the standard of minimal clinically important difference for activities of daily living, mobility, and strength assessments.
Collapse
Affiliation(s)
- Sarah Ostadabbas
- Electrical and Computer Engineering Department, Northeastern University, Boston, MA
- Stephen N Housley
- School of Nursing & Health Professions, Georgia State University, Atlanta, GA
- Nordine Sebkhi
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA
- Kimberly Richards
- School of Nursing & Health Professions, Georgia State University, Atlanta, GA
- David Wu
- School of Nursing & Health Professions, Georgia State University, Atlanta, GA
- Zhenxuan Zhang
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA
- Lindsey Warthen
- School of Nursing & Health Professions, Georgia State University, Atlanta, GA
- Crystal Yarbrough
- School of Nursing & Health Professions, Georgia State University, Atlanta, GA
- Andrew J Butler
- School of Nursing & Health Professions, Georgia State University, Atlanta, GA; Department of Physical Therapy, Georgia State University, Atlanta, GA
- Maysam Ghovanloo
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA
Collapse
|
24
|
Rezaei B, Lowe J, Yee JR, Porges S, Ostadabbas S. Non-contact automatic respiration monitoring in restrained rodents. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2016:4946-4950. [PMID: 28269378 DOI: 10.1109/embc.2016.7591837] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Prairie voles are socially monogamous rodents that form social bonds similar to those seen in primates. Investigating social behavior in this species, including their breathing regulation, can provide an invaluable psychological model for understanding social and emotional functions in both animals and humans. Several studies have examined the respiratory patterns of these species in states of fear-induced defense. However, the non-invasive measurement methods employed so far lack a natural experimental environment for the rodents. In this paper, we present a remote depth-based system that applies a modified autocorrelation algorithm to automatically extract respiration patterns in small rodents. We evaluated our estimation accuracy through a series of experiments, comparing the extracted results with breathing rates obtained from visual inspection of synchronously collected RGB videos. In a preliminary test on a human participant, breathing rate was estimated with 100% accuracy, while the estimation accuracy was 94.8% for a restrained vole. Finally, we monitored the respiratory alterations of three voles in transition from a baseline to a fearful state and back to a normal state; the estimated breathing rates confirmed the existing hypothesis regarding animal defense strategies.
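A basic autocorrelation-based rate estimator in the spirit of the approach above might look like this; the frame rate, band limit, and simulated depth signal are assumptions, not the paper's modified algorithm:

```python
import numpy as np

def breathing_rate(signal, fs):
    """Estimate breaths per minute from a chest-depth time series via the first
    non-trivial autocorrelation peak (the dominant breathing period)."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1 :]  # lags 0..N-1
    # Skip lag 0 and anything faster than 1 Hz (60 breaths/min) -- assumed bound
    min_lag = int(fs / 1.0)
    period = (min_lag + np.argmax(ac[min_lag:])) / fs
    return 60.0 / period

fs = 30.0  # depth-camera frame rate (assumed)
t = np.arange(0, 30, 1 / fs)
# Simulated chest depth: 0.4 Hz breathing plus sensor noise
depth = np.sin(2 * np.pi * 0.4 * t) + 0.05 * np.random.default_rng(1).standard_normal(len(t))
print(round(breathing_rate(depth, fs)))  # ~24 breaths/min for a 0.4 Hz signal
```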
Collapse
|
25
|
Ostadabbas S, Sebkhi N, Zhang M, Rahim S, Anderson LJ, Lee FEH, Ghovanloo M. A Vision-Based Respiration Monitoring System for Passive Airway Resistance Estimation. IEEE Trans Biomed Eng 2015; 63:1904-1913. [PMID: 26660514 DOI: 10.1109/tbme.2015.2505732] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
OBJECTIVE Airway resistance is the mechanical cause of most symptoms in obstructive pulmonary disease and can be considered the primary measure of disease severity. A low-cost, noninvasive method to measure airway resistance that does not require patient effort could be of great benefit in evaluating the severity of lung diseases, especially in patient populations unable to use spirometry, such as young children. METHODS The Vision-Based Passive Airway Resistance Estimation (VB-PARE) technology is a passive method to measure airway resistance noninvasively. The airway resistance is estimated from: 1) airflow extracted by processing depth data captured by a Microsoft Kinect, and 2) pulsus paradoxus extracted from a pulse oximeter (SpO2). RESULTS To verify the validity and accuracy of VB-PARE, two experimental phases were conducted. In Phase I, spontaneous breathing data were collected from 14 healthy participants with externally induced airway obstruction, and an accuracy of 76.2±13.8% was achieved in predicting three levels of obstruction severity. In Phase II, VB-PARE outputs were compared with clinical results from 14 patients; VB-PARE estimated tidal volume with an average error of 0.07±0.06 liter, and patients with airway obstruction were detected with 80% accuracy. CONCLUSION Using the information extracted from the Kinect and SpO2, we present a quantitative method to measure the severity of airway obstruction without requiring active patient involvement. SIGNIFICANCE The proposed VB-PARE system contributes to state-of-the-art respiration monitoring methods by expanding the idea of passive and noninvasive airway resistance measurement.
Collapse
|
26
|
Baran Pouyan M, Ostadabbas S, Nourani M, Pompeo M. Classifying bed inclination using pressure images. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2014:4663-6. [PMID: 25571032 DOI: 10.1109/embc.2014.6944664] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Pressure ulcers are among the most prevalent problems for bed-bound patients in hospitals and nursing homes. They are painful for patients and costly for healthcare systems. Accurate in-bed posture analysis can significantly help in preventing pressure ulcers; specifically, bed inclination (back angle) is a factor contributing to pressure ulcer development. In this paper, an efficient methodology is proposed to classify bed inclination. Our approach uses pressure values collected from a commercial pressure mat system; by applying a number of image processing and machine learning techniques, the approximate bed angle is estimated and classified. The proposed algorithm was tested on 15 subjects of various sizes and weights. The experimental results indicate that our method predicts bed inclination in three classes with 80.3% average accuracy.
Collapse
|
27
|
Ostadabbas S, Ghovanloo M, John Butler A. Developing a Tongue Controlled Exoskeleton for a Wrist Tracking Exercise: A Preliminary Study. J Med Device 2015. [DOI: 10.1115/1.4030605] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022] Open
Affiliation(s)
- Sarah Ostadabbas
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30308
- Maysam Ghovanloo
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30308
- Andrew John Butler
- Department of Physical Therapy, Georgia State University, Atlanta, GA 30308
Collapse
|
28
|
Yousefi R, Nourani M, Ostadabbas S, Panahi I. A motion-tolerant adaptive algorithm for wearable photoplethysmographic biosensors. IEEE J Biomed Health Inform 2014; 18:670-81. [PMID: 24608066 DOI: 10.1109/jbhi.2013.2264358] [Citation(s) in RCA: 86] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The performance of portable and wearable biosensors is strongly influenced by motion artifact. In this paper, a novel real-time adaptive algorithm is proposed for accurate, motion-tolerant extraction of heart rate (HR) and pulse oximeter oxygen saturation (SpO2) from wearable photoplethysmographic (PPG) biosensors. The proposed algorithm removes motion artifact arising from various sources, including tissue effects and venous blood changes during body movement, and provides noise-free PPG waveforms for further feature extraction. A two-stage normalized least-mean-squares adaptive noise canceler is designed and validated using a novel synthetic reference signal at each stage. The proposed algorithm is evaluated by Bland-Altman agreement and correlation analyses against reference HR from commercial ECG and SpO2 sensors during standing, walking, and running under different conditions in single- and multi-subject scenarios. Experimental results indicate high agreement and high correlation (more than 0.98 for HR and 0.7 for SpO2 extraction) between measurements by the reference sensors and our algorithm.
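A single-stage normalized LMS noise canceler, the building block of the two-stage design described above, can be sketched as follows; the filter order, step size, and simulated motion-artifact path are assumptions for the toy example:

```python
import numpy as np

def nlms(reference, corrupted, order=8, mu=0.1, eps=1e-6):
    """Normalized LMS adaptive noise canceler: a reference correlated with the
    artifact (e.g., a synthetic motion signal) is filtered to estimate and
    subtract the noise; the error output is the cleaned PPG sample."""
    w = np.zeros(order)
    cleaned = np.zeros(len(corrupted))
    for n in range(order, len(corrupted)):
        u = reference[n - order + 1 : n + 1][::-1]  # newest reference samples first
        e = corrupted[n] - w @ u                    # error = cleaned signal sample
        w += mu * e * u / (u @ u + eps)             # normalized weight update
        cleaned[n] = e
    return cleaned

rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)
ppg = np.sin(2 * np.pi * 1.2 * t)                   # 72 bpm pulse waveform
motion = rng.standard_normal(len(t))                # accelerometer-like reference
artifact = 0.8 * np.convolve(motion, [0.5, 0.3, 0.2])[: len(t)]  # causal FIR path
corrupted = ppg + artifact
cleaned = nlms(motion, corrupted)
err_before = np.mean((corrupted - ppg) ** 2)
err_after = np.mean((cleaned[200:] - ppg[200:]) ** 2)  # after convergence
print(err_after < err_before)
```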
Collapse
|
29
|
Abstract
It is known that prolonged pressure on the plantar area is one of the main factors in the development of foot ulcers. With current technology, electronic pressure monitoring systems can be placed as an insole in regular shoes to continuously monitor the plantar area, providing evidence of the ulcer formation process as well as insight for proper orthotic footwear design. The reliability of these systems depends heavily on the spatial resolution of their sensor platforms. However, due to cost and energy constraints, practical wireless in-shoe pressure monitoring systems have a limited number of sensors, i.e., typically K < 10. In this paper, we present a knowledge-based regression model (SCPM) to reconstruct a spatially continuous plantar pressure image from a small number of pressure sensors. The model uses high-resolution pressure data collected clinically to train a per-subject regression function. SCPM is shown to outperform all other tested interpolation methods for K < 60 sensors, with less than one-third of the error at K = 10 sensors. SCPM bridges the gap between technological capability and medical need and can play an important role in the adoption of sensing insoles for a wide range of medical applications.
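As a generic baseline for the reconstruction problem described above (one of the interpolation methods SCPM is compared against could look like this; the sensor layout and readings are invented), inverse-distance weighting fills in a continuous pressure image from K sparse sensors:

```python
import numpy as np

def idw_reconstruct(sensor_xy, sensor_p, grid_shape, power=2.0):
    """Inverse-distance-weighted reconstruction of a continuous plantar
    pressure image from K sparse sensor readings (a baseline, not SCPM)."""
    h, w = grid_shape
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    # Distance from every grid cell to every sensor
    d = np.linalg.norm(grid[:, None, :] - sensor_xy[None, :, :], axis=-1)
    wts = 1.0 / np.maximum(d, 1e-9) ** power
    img = (wts @ sensor_p) / wts.sum(axis=1)  # weighted average of readings
    return img.reshape(h, w)

# K = 5 hypothetical insole sensors (x, y in grid units) and their readings (kPa)
sensor_xy = np.array([[2.0, 2.0], [12.0, 2.0], [7.0, 8.0], [3.0, 14.0], [11.0, 14.0]])
sensor_p = np.array([80.0, 75.0, 40.0, 120.0, 110.0])
img = idw_reconstruct(sensor_xy, sensor_p, (16, 15))
print(img.shape)  # sensor readings are recovered exactly at sensor locations
```

SCPM's advantage, per the abstract, comes from replacing such generic weights with a per-subject regression trained on clinical high-resolution data.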
Collapse
|
30
|
Ostadabbas S, Bulach C, Ku DN, Anderson LJ, Ghovanloo M. A passive quantitative measurement of airway resistance using depth data. Annu Int Conf IEEE Eng Med Biol Soc 2014; 2014:5743-5747. [PMID: 25571300 DOI: 10.1109/embc.2014.6944932] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Respiratory syncytial virus (RSV) is the most common cause of serious lower respiratory tract infections in infants and young children. RSV often causes increased airway resistance, clinically detected as wheezing by chest auscultation. In this disease, expiratory flows are significantly reduced due to the high resistance in the patient's airway passages. A quantitative method for measuring resistance would greatly benefit the diagnosis and management of children with RSV infections as well as other lung diseases. Airway resistance is defined as the lung pressure divided by the airflow. In this paper, we propose a method to quantify resistance through a simple, non-contact measurement of chest volume that can act as a surrogate measure of lung pressure and volumetric airflow. We used depth data collected by a Microsoft Kinect camera to measure lung volume over time. In our experiments, breathing through a number of plastic straws induced different airway resistances. For a standard spirometry test, our volume/flow estimation using the Kinect showed strong correlation with flow data collected by a commercially available spirometer (five subjects, each performing 20 breathing trials; correlation coefficient = 0.88 with 95% confidence interval). As the number of straws decreased, emulating higher airway obstruction, our algorithm was able to distinguish between several levels of airway resistance.
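The defining relation above, resistance = pressure / airflow, with flow taken as the time derivative of the estimated chest volume, can be illustrated on simulated data (all signal parameters below are invented for the toy example):

```python
import numpy as np

fs = 30.0                                   # depth-camera frame rate (assumed)
t = np.arange(0, 10, 1 / fs)
# Chest volume as it might be estimated from depth data: 0.5 L peak-to-peak
# tidal breathing at 15 breaths/min around a 3 L baseline
volume = 3.0 + 0.25 * np.sin(2 * np.pi * 0.25 * t)   # liters
flow = np.gradient(volume, 1 / fs)                   # L/s, surrogate airflow
pressure = 2.0 * flow                                # toy pressure signal, cmH2O

# Airway resistance R = pressure / airflow, fitted over the whole trial
R = np.sum(pressure * flow) / np.sum(flow ** 2)
print(round(float(R), 2))  # recovers the simulated 2.0 cmH2O/(L/s)
```

The least-squares ratio makes the estimate robust to the zero-flow crossings where a pointwise division would blow up.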
Collapse
|
31
|
Ostadabbas S, Saeed A, Nourani M, Pompeo M. Sensor architectural tradeoff for diabetic foot ulcer monitoring. Annu Int Conf IEEE Eng Med Biol Soc 2013; 2012:6687-90. [PMID: 23367463 DOI: 10.1109/embc.2012.6347528] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Diabetic foot complications constitute a tremendous challenge for patients, caregivers, and the healthcare system. Studies show that up to 25% of diabetic individuals will develop a foot ulcer during their lifetime, and many of these patients must eventually undergo amputation as a result of infection due to untreated foot ulcers. With current technology, in-shoe monitoring systems can be implemented to continuously monitor at-risk ulceration sites based on known indicators such as peak pressure. The important parameters in designing a pressure-sensing insole include the number, location, and size of the sensors. In this paper, we aim to show the criticality of the sensor architectural tradeoff in developing in-shoe plantar pressure monitoring systems. We evaluate this tradeoff using our custom-made platform for data collection during normal walking.
Collapse
Affiliation(s)
- Sarah Ostadabbas
- Department of Electrical and Computer Engineering, University of Texas at Dallas, Richardson, TX 75080, USA.
|
32
|
Ostadabbas S, Yousefi R, Nourani M, Faezipour M, Tamil L, Pompeo MQ. A resource-efficient planning for pressure ulcer prevention. IEEE Trans Inf Technol Biomed 2012; 16:1265-73. [PMID: 22922729 DOI: 10.1109/titb.2012.2214443] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.7]
Abstract
Pressure ulcers are a critical problem for bed-ridden and wheelchair-bound patients, diabetics, and the elderly. Patients need to be regularly repositioned to prevent excessive pressure on a single area of the body, which can lead to ulcers. Pressure ulcers are extremely costly to treat and may lead to several other health problems, including death. The current standard for prevention is to reposition at-risk patients every two hours. Even when done properly, a fixed schedule is not sufficient to prevent all ulcers, and it may overwork nurses by having them turn some patients too frequently. In this paper, we present an algorithm for finding a nurse-effort-optimal repositioning schedule that prevents pressure ulcer formation over a finite planning horizon. Our proposed algorithm uses data from a commercial pressure mat assembled on the bed's surface and provides a sequence of next positions and repositioning times for each patient.
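The scheduling idea can be sketched as a greedy planner: stay in the current posture until its accumulated pressure-time exposure would cross a risk threshold, then switch to the least-exposed posture. This is a simplified illustration under assumed per-posture pressures; it omits tissue-recovery dynamics and the nurse-effort optimality guarantees of the paper's algorithm.

```python
def plan_repositioning(pressures, threshold, horizon, dt=1):
    """Greedy repositioning plan over a finite horizon.

    pressures: dict mapping posture name -> representative interface pressure.
    threshold: maximum tolerated accumulated pressure-time per posture.
    Returns a list of (posture, start_time) repositioning events."""
    exposure = {p: 0.0 for p in pressures}
    current = min(pressures, key=pressures.get)  # start in lowest-pressure posture
    schedule = [(current, 0)]
    t = 0
    while t < horizon:
        # Switch before this step would push the current posture over threshold.
        if exposure[current] + pressures[current] * dt > threshold:
            current = min(exposure, key=exposure.get)
            schedule.append((current, t))
        exposure[current] += pressures[current] * dt
        t += dt
    return schedule
```

A posture with higher interface pressure exhausts its budget sooner, so the planner rotates through postures at unequal intervals rather than on a fixed two-hour clock.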
|
33
|
Yousefi R, Ostadabbas S, Faezipour M, Farshbaf M, Nourani M, Tamil L, Pompeo M. Bed posture classification for pressure ulcer prevention. Annu Int Conf IEEE Eng Med Biol Soc 2012; 2011:7175-8. [PMID: 22255993 DOI: 10.1109/iembs.2011.6091813] [Citation(s) in RCA: 55] [Impact Index Per Article: 4.6]
Abstract
Pressure ulcers are an age-old problem that imposes a huge cost on our healthcare system. Detecting and keeping a record of a patient's posture in bed helps caregivers reposition the patient more efficiently and reduces the risk of developing pressure ulcers. In this paper, a commercial pressure mapping system is used to create a time-stamped, whole-body pressure map of the patient. An image-based processing algorithm is developed to keep an unobtrusive and informative record of the patient's bed posture over time. The experimental results show that the proposed algorithm can predict the patient's bed posture with up to 97.7% average accuracy. This algorithm could ultimately be used with current support surface technologies to reduce the risk of ulcer development.
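Image-based posture classification from a pressure map can be sketched as nearest-template matching on normalised pressure images. This is a deliberately simple stand-in for the paper's algorithm; the template dictionary and normalisation scheme are assumptions.

```python
import numpy as np

def classify_posture(pressure_map, templates):
    """Nearest-template classifier: L2-normalise the whole-body pressure
    image and return the label of the closest posture template."""
    x = pressure_map / (np.linalg.norm(pressure_map) + 1e-9)
    best_label, best_dist = None, np.inf
    for label, tpl in templates.items():
        t = tpl / (np.linalg.norm(tpl) + 1e-9)
        dist = np.linalg.norm(x - t)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

Normalising both the input map and the templates makes the comparison insensitive to overall body weight, so only the spatial pressure distribution drives the decision.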
Affiliation(s)
- R Yousefi
- Quality of Life Technology Laboratory The University of Texas at Dallas, Richardson, TX 75080, USA.
|
34
|
Ostadabbas S, Jafari R. Spectral Spatio-Temporal template extraction from EEG signals. Annu Int Conf IEEE Eng Med Biol Soc 2010; 2010:4678-4682. [PMID: 21096006 DOI: 10.1109/iembs.2010.5626411] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Analysis of event-related potentials (ERPs) produced by brain activity can provide insight into the timing of underlying brain function. ERPs can be classified by their time/frequency characteristics and spatial location on the scalp. Traditionally, ERPs are located manually from temporally and spatially averaged EEG signals; this process is error-prone and sensitive to a priori assumptions. Our proposed algorithm is a general neuroscience-focused data mining algorithm that performs time and frequency analysis on ERPs and automatically extracts templates corresponding to Spectral Spatio-Temporal (SST) regions exhibiting significant differences between experimental outcomes. The method uses time-aligned templates, which preserve the characteristics of the signal important to cognitive researchers. The ability of the selected signal templates to differentiate between stimulus responses has been verified using a pattern recognition procedure. SST template extraction is tested on data from a Go/NoGo task and shown to find both relationships consistent with the published neuroscience literature and novel ones.
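The core idea of marking spectro-temporal regions that differ between conditions can be sketched with per-trial spectrograms and a bin-wise two-sample t-test. This is a minimal illustration of the concept, not the paper's pipeline: the t-test stands in for its SST region selection, and no multiple-comparison correction is applied.

```python
import numpy as np
from scipy.signal import spectrogram
from scipy.stats import ttest_ind

def sst_mask(trials_a, trials_b, fs, alpha=0.05):
    """Binary spectro-temporal mask marking time-frequency bins whose
    power differs significantly between two experimental conditions."""
    def power(trials):
        # One spectrogram per trial, stacked along a trial axis.
        return np.stack([spectrogram(tr, fs=fs, nperseg=64)[2] for tr in trials])
    pa, pb = power(np.asarray(trials_a)), power(np.asarray(trials_b))
    _, pvals = ttest_ind(pa, pb, axis=0)
    return pvals < alpha
```

The resulting mask plays the role of a template: averaging an unlabeled trial's spectrogram over the masked bins gives a feature that separates the two conditions.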
Affiliation(s)
- Sarah Ostadabbas
- Embedded Systems and Signal Processing Lab, Department of Electrical Engineering, University of Texas at Dallas, Richardson, TX 75080, USA.
|
35
|
Ghassemzadeh H, Guenterberg E, Ostadabbas S, Jafari R. A motion sequence fusion technique based on PCA for activity analysis in body sensor networks. Annu Int Conf IEEE Eng Med Biol Soc 2009; 2009:3146-3149. [PMID: 19963575 DOI: 10.1109/iembs.2009.5332589] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1]
Abstract
Human movement analysis by means of mobile sensory platforms is an ever-growing area that promises to revolutionize the delivery of healthcare services. An effective data fusion technique is essential for understanding the inertial information obtained from distributed sensor nodes. In this paper, we develop a data fusion model based on the concept of principal component analysis. Unlike traditional fusion techniques, which operate on a statistical feature space, our model operates on motion transcripts, where each movement is represented as a sequence of basic building blocks called primitives. We describe how our model transforms the transcripts of different nodes into a unified transcript by integrating the most relevant movement primitives. Finally, we demonstrate the performance of our transcript fusion model for action recognition using real data collected from three subjects.
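The PCA-based fusion idea can be sketched by representing each node's transcript as a primitive-occurrence histogram and weighting nodes by how strongly they load on the dominant principal component. This is a loose illustration under that histogram assumption; the paper fuses primitive sequences, not histograms.

```python
import numpy as np

def fuse_transcripts(node_histograms):
    """Fuse per-node primitive histograms into one vector, weighting each
    node by its loading on the first principal component."""
    X = np.asarray(node_histograms, dtype=float)  # nodes x primitives
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    pc1 = vt[0]
    # Nodes that vary most along the dominant direction get the most weight.
    weights = np.abs(Xc @ pc1)
    weights = weights / (weights.sum() + 1e-9)
    return weights @ X
```

Nodes whose transcripts carry little discriminative variation (near the mean) receive near-zero weight, so the fused transcript is dominated by the most informative body locations.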
Affiliation(s)
- Hassan Ghassemzadeh
- Embedded Systems and Signal Processing Lab, Department of Electrical Engineering, University of Texas at Dallas, Richardson, TX, 75080, USA.
|