1. Monti K, Williams K, Ivins B, Uomoto J, Skarda-Craft J, Dretsch M. Feasibility and usability of microinteraction ecological momentary assessment using a smartwatch in military personnel with a history of traumatic brain injury. Front Neurol 2025;16:1564657. PMID: 40297860; PMCID: PMC12036483; DOI: 10.3389/fneur.2025.1564657. Received 01/21/2025; accepted 03/13/2025. Open access.
Abstract
Introduction Microinteraction Ecological Momentary Assessment (miEMA) addresses the challenges of traditional self-report questionnaires by collecting data in real time. The purpose of this study was to examine the feasibility and usability of employing miEMA using a smartwatch in military service members undergoing traumatic brain injury (TBI) rehabilitation. Materials and methods Twenty-eight United States active duty service members with a TBI history were recruited as patients from a military outpatient TBI rehabilitation center, enrolled in either a 2-week or 3-week study arm, and administered miEMA surveys via a custom smartwatch app. The 3-week arm participants were also concurrently receiving cognitive rehabilitation. Select constructs evaluated with miEMA included mood, fatigue, pain, headache, self-efficacy, and cognitive strategy use. Outcome measures of adherence were completion (percentage of questions answered out of questions delivered) and compliance (percentage of questions answered out of questions scheduled). The Mobile Health Application Usability Questionnaire (MAUQ) and System Usability Scale (SUS) assessed participants' perceptions of smartwatch and app usability. Results Completion and compliance rates were 80.1% and 77.4%, respectively. Mean participant completion and compliance were 81.1% ± 12.0% and 78.1% ± 13.0%, respectively. Mean participant completion increased to 87.7% ± 8.8% when using an embedded question retry mechanism. Mean participant survey set completion was lowest during the early morning (69.8% ± 18.3%) and held steady during the late morning/early afternoon (85.7% ± 12.8%), afternoon (86.2% ± 12.6%), and late afternoon/evening (85.0% ± 14.7%). The mean overall item score for the MAUQ was 6.3 ± 1.1 out of 7. The mean SUS score was 89.0 ± 7.2 out of 100 and the mean SUS percentile ranking was 96.4% ± 8.4%. Conclusion Overall adherence was similar to previous studies in civilian populations. Participants rated the miEMA app and smartwatch as having high usability. These findings suggest that miEMA using a smartwatch for tracking symptoms and treatment strategy use is feasible in military service members with a TBI history, including those undergoing rehabilitation for cognitive difficulties.
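The two adherence metrics defined in this abstract differ only in their denominator: completion divides by questions actually delivered to the watch, compliance by questions scheduled. A minimal Python sketch of that distinction follows; this is illustrative, not the study's code, and the counts are hypothetical.

```python
# Illustrative sketch of the two adherence metrics; counts are hypothetical.
def completion_rate(answered: int, delivered: int) -> float:
    """Percentage of questions answered out of questions delivered."""
    return 100.0 * answered / delivered

def compliance_rate(answered: int, scheduled: int) -> float:
    """Percentage of questions answered out of questions scheduled.
    Scheduled >= delivered, since some scheduled prompts may never reach
    the participant (e.g., the watch is off or out of battery), so
    compliance is always <= completion for the same answered count."""
    return 100.0 * answered / scheduled

answered, delivered, scheduled = 801, 1000, 1035  # hypothetical counts
print(f"completion: {completion_rate(answered, delivered):.1f}%")  # completion: 80.1%
print(f"compliance: {compliance_rate(answered, scheduled):.1f}%")
```

Because the denominators differ, compliance can lag completion even when participants answer nearly everything they see, which is why the abstract reports both.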
Affiliation(s)
- Katrina Monti: Traumatic Brain Injury Center of Excellence, Silver Spring, MD, United States; CICONIX LLC, Annapolis, MD, United States; Madigan Army Medical Center, Tacoma, WA, United States
- Katie Williams: Traumatic Brain Injury Center of Excellence, Silver Spring, MD, United States; CICONIX LLC, Annapolis, MD, United States; Madigan Army Medical Center, Tacoma, WA, United States
- Brian Ivins: Traumatic Brain Injury Center of Excellence, Silver Spring, MD, United States; General Dynamics Information Technology, Falls Church, VA, United States
- Jay Uomoto: Department of Psychiatry and Behavioral Sciences, University of Washington, Seattle, WA, United States
- Michael Dretsch: Traumatic Brain Injury Center of Excellence, Silver Spring, MD, United States; Madigan Army Medical Center, Tacoma, WA, United States; General Dynamics Information Technology, Falls Church, VA, United States
2. Cook D, Walker A, Minor B, Luna C, Tomaszewski Farias S, Wiese L, Weaver R, Schmitter-Edgecombe M. Understanding the Relationship Between Ecological Momentary Assessment Methods, Sensed Behavior, and Responsiveness: Cross-Study Analysis. JMIR Mhealth Uhealth 2025;13:e57018. PMID: 40209210; PMCID: PMC12005599; DOI: 10.2196/57018. Received 02/06/2024; revised 09/10/2024; accepted 03/03/2025. Open access.
Abstract
Background Ecological momentary assessment (EMA) offers an effective method to collect frequent, real-time data on an individual's well-being. However, challenges exist in response consistency, completeness, and accuracy. Objective This study examines EMA response patterns and their relationship with sensed behavior for data collected from diverse studies. We hypothesize that EMA response rate (RR) will vary with prompt time of day, number of questions, and behavior context. In addition, we postulate that response quality will decrease over the study duration and that relationships will exist between EMA responses, participant demographics, behavior context, and study purpose. Methods Data from 454 participants in 9 clinical studies were analyzed, comprising 146,753 EMA mobile prompts over study durations ranging from 2 weeks to 16 months. Concurrently, sensor data were collected using smartwatch or smart home sensors. Digital markers, such as activity level, time spent at home, and proximity to activity transitions (change points), were extracted to provide context for the EMA responses. All studies used the same data collection software and EMA interface but varied in participant groups, study length, and the number of EMA questions and tasks. We analyzed RR, completeness, quality, alignment with sensor-observed behavior, impact of study design, and ability to model the series of responses. Results The average RR was 79.95%. Of those prompts that received a response, the proportion of fully completed response and task sessions was 88.37%. Participants were most responsive in the evening (82.31%) and on weekdays (80.43%), although results varied by study demographics. While overall RRs were similar for weekday and weekend prompts, older adults were more responsive during the week (an increase of 0.27), whereas younger adults responded less during the week (a decrease of 3.25). RR was negatively correlated with the number of EMA questions (r=-0.433, P<.001). Additional correlations were observed between RR and sensor-detected activity level (r=0.045, P<.001), time spent at home (r=0.174, P<.001), and proximity to change points (r=0.124, P<.001). Response quality showed a decline over time, with careless responses increasing by 0.022 (P<.001) and response variance decreasing by 0.363 (P<.001). The within-study dynamic time warping distance between response sequences averaged 14.141 (SD 11.957), compared with the 33.246 (SD 4.971) between-study average distance. ARIMA (Autoregressive Integrated Moving Average) models fit the aggregated time series with high log-likelihood values, indicating strong model fit with low complexity. Conclusions EMA response patterns are significantly influenced by participant demographics and study parameters. Tailoring EMA prompt strategies to specific participant characteristics can improve RRs and quality. Findings from this analysis suggest that timing EMA prompts close to detected activity transitions and minimizing the duration of EMA interactions may improve RR. Similarly, strategies such as gamification may be introduced to maintain participant engagement and retain response variance.
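The dynamic time warping (DTW) distance this abstract uses to compare response sequences can be illustrated with a textbook O(nm) implementation. This is a generic sketch, not the authors' pipeline, and the example sequences are made up.

```python
# Generic textbook DTW with absolute-difference cost; not the study's code.
def dtw_distance(a, b):
    """Minimum cumulative cost to align sequence a to sequence b,
    allowing elements of either sequence to be stretched (repeated)."""
    inf = float("inf")
    n, m = len(a), len(b)
    # d[i][j] = best alignment cost of a[:i] against b[:j]
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch b
                                 d[i][j - 1],      # stretch a
                                 d[i - 1][j - 1])  # match step
    return d[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0 (b is a time-warped copy of a)
```

Because DTW tolerates local stretching, two response sequences with similar shape but shifted timing score as close, which is what makes the small within-study versus large between-study distances reported above interpretable.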
Affiliation(s)
- Diane Cook: Department of Psychology, College of Arts and Sciences, Washington State University, 3160 Folsom Blvd, Sacramento, WA, 95816, United States, 1 5093354985
- Aiden Walker: Department of Psychology, College of Arts and Sciences, Washington State University, United States
- Bryan Minor: Department of Psychology, College of Arts and Sciences, Washington State University, United States
- Catherine Luna: Department of Psychology, College of Arts and Sciences, Washington State University, United States
- Sarah Tomaszewski Farias: Department of Neurology, UC Davis Medical Center, University of California at Davis, Davis, CA, United States
- Lisa Wiese: Christine E Lynn College of Nursing, Florida Atlantic University, Boca Raton, FL, United States
- Raven Weaver: Department of Psychology, College of Arts and Sciences, Washington State University, United States
- Maureen Schmitter-Edgecombe: Department of Psychology, College of Arts and Sciences, Washington State University, United States
3. Stone C, Adams S, Wootton RE, Skinner A. Smartwatch-Based Ecological Momentary Assessment for High-Temporal-Density, Longitudinal Measurement of Alcohol Use (AlcoWatch): Feasibility Evaluation. JMIR Form Res 2025;9:e63184. PMID: 40131326; PMCID: PMC11979524; DOI: 10.2196/63184. Received 06/12/2024; revised 02/12/2025; accepted 03/02/2025. Open access.
Abstract
BACKGROUND Ecological momentary assessment methods have recently been adapted for use on smartwatches. One particular class of these methods, developed to minimize participant burden and maximize engagement and compliance, is referred to as microinteraction-based ecological momentary assessment (μEMA). OBJECTIVE This study explores the feasibility of using smartwatch-based μEMA methods to capture longitudinal, high-temporal-density self-report data about alcohol consumption in a nonclinical population selected to represent high- and low-socioeconomic position (SEP) groups. METHODS A total of 32 participants from the Avon Longitudinal Study of Parents and Children (13 high and 19 low SEP) wore a smartwatch running a custom-developed μEMA app for 3 months between October 2019 and June 2020. Every day over a 12-week period, participants were asked 5 times a day about any alcoholic drinks they had consumed in the previous 2 hours and the context in which they were consumed. They were also asked whether they had missed recording any alcoholic drinks the day before. As a comparison, participants also completed fortnightly online diaries of alcohol consumed using the Timeline Followback (TLFB) method. At the end of the study, participants completed a semistructured interview about their experiences. RESULTS Among all participants who started the study, compliance with the smartwatch μEMA method decreased from around 70% in week 1 to 45% in week 12, whereas compliance with the online TLFB method held flatter at around 50% over the 12 weeks. Among participants still active in the study, compliance with the smartwatch μEMA method was much flatter, remaining around 70% for the whole 12 weeks, while compliance with the online TLFB method varied between 50% and 80% over the same period. The completion rate for the smartwatch μEMA method hovered around 80% across the 12 weeks. Within the high- and low-SEP groups there was considerable variation in compliance and completion at each week of the study for both methods. However, almost all point estimates for both smartwatch μEMA and online TLFB indicated lower levels of engagement for low-SEP participants. Participants rated their experiences of using the 2 methods equally highly, with willingness to use the method again slightly higher for smartwatch μEMA. CONCLUSIONS Our findings demonstrate the acceptability and potential utility of smartwatch μEMA methods for capturing data on alcohol consumption. These methods have the benefits of capturing higher-temporal-density longitudinal data on alcohol consumption, promoting greater participant engagement with less missing data, and potentially being less susceptible to recall errors than established methods such as TLFB. Future studies should explore the factors impacting participant attrition (the biggest reason for reduced engagement), latency issues, and the validity of alcohol data captured with these methods. The consistent pattern of lower engagement among low-SEP participants than high-SEP participants indicates that further work is warranted to explore the impact and causes of these differences.
Affiliation(s)
- Chris Stone: Integrative Cancer Epidemiology Programme, Bristol Medical School, University of Bristol, Bristol, United Kingdom; School of Psychological Science, University of Bristol, Bristol, United Kingdom
- Sally Adams: School of Psychology, University of Birmingham, Birmingham, United Kingdom
- Robyn E Wootton: School of Psychological Science, University of Bristol, Bristol, United Kingdom; Nic Waals Institute, Lovisenberg Diaconal Hospital, Oslo, Norway; Medical Research Council Integrative Epidemiology Unit, Bristol Medical School, University of Bristol, Bristol, United Kingdom; PsychGen Centre of Genetic Epidemiology and Mental Health, Norwegian Institute of Public Health, Oslo, Norway
- Andy Skinner: Integrative Cancer Epidemiology Programme, Bristol Medical School, University of Bristol, Bristol, United Kingdom
4. Li J, Ponnada A, Wang WL, Dunton GF, Intille SS. Ask Less, Learn More: Adapting Ecological Momentary Assessment Survey Length by Modeling Question-Answer Information Gain. Proc ACM Interact Mob Wearable Ubiquitous Technol 2024;8:166. PMID: 39664111; PMCID: PMC11633767; DOI: 10.1145/3699735.
Abstract
Ecological momentary assessment (EMA) is an approach to collect self-reported data repeatedly on mobile devices in natural settings. EMAs allow for temporally dense, ecologically valid data collection, but frequent interruptions with lengthy surveys on mobile devices can burden users, impacting compliance and data quality. We propose a method that reduces the length of each EMA question set measuring interrelated constructs, with only modest information loss. By estimating the potential information gain of each EMA question using question-answer prediction models, this method can prioritize the presentation of the most informative question in a question-by-question sequence and skip uninformative questions. We evaluated the proposed method by simulating question omission using four real-world datasets from three different EMA studies. When compared against the random question omission approach that skips 50% of the questions, our method reduces imputation errors by 15%-52%. In surveys with five answer options for each question, our method can reduce the mean survey length by 34%-56% with a real-time prediction accuracy of 72%-95% for the skipped questions. The proposed method may either allow more constructs to be surveyed without adding user burden or reduce response burden for more sustainable longitudinal EMA data collection.
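The core idea above, prioritizing the questions whose answers the model is least able to predict and skipping the rest, can be sketched with predictive entropy as the information-gain proxy. This is a simplified illustration, not the paper's method; the answer distributions below are hypothetical stand-ins for real question-answer prediction models.

```python
# Simplified sketch: rank EMA questions by the entropy of a model's
# predicted answer distribution and ask only the most uncertain ones.
# The distributions are hypothetical; a real system would update them
# after each answered question.
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def select_questions(predicted_dists, budget):
    """Return the `budget` question ids with the highest predictive
    entropy; the remaining questions are skipped and imputed."""
    ranked = sorted(predicted_dists,
                    key=lambda q: entropy(predicted_dists[q]),
                    reverse=True)
    return ranked[:budget]

dists = {  # hypothetical model outputs over 5 answer options each
    "mood":    [0.2, 0.2, 0.2, 0.2, 0.2],      # maximally uncertain -> ask
    "fatigue": [0.9, 0.05, 0.03, 0.01, 0.01],  # confidently predicted -> skip
    "stress":  [0.4, 0.3, 0.2, 0.05, 0.05],
}
print(select_questions(dists, budget=2))  # ['mood', 'stress']
```

In a question-by-question sequence, recomputing the distributions after each answer lets earlier answers sharpen the predictions for (and justify skipping) later, correlated questions.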
5. Sokolovsky AW, Gunn RL, Wycoff AM, Boyle HK, White HR, Jackson KM. Compliance and response consistency in a lengthy intensive longitudinal data protocol. Psychol Assess 2024;36:606-617. PMID: 39101913; PMCID: PMC11864101; DOI: 10.1037/pas0001332.
Abstract
Research on real-world patterns of substance use increasingly involves intensive longitudinal data (ILD) collection, requiring long assessment windows. The present study extends limited prior research examining event- and person-level influences on compliance and response consistency by investigating how these behaviors are sustained over time in an ILD study of alcohol and cannabis co-use in college students. Participants (n = 316) completed two 28-day bursts of ILD comprising five daily surveys, which included a morning survey of prior-day drinking. We used linear mixed effects models in a multilevel interrupted time series framework to evaluate the associations of time and measurement burst with (a) noncompliance (count of missed surveys) and (b) response consistency (difference between same-day report of drinking and morning report of prior-day drinking). We observed that time was positively associated with noncompliance, with no discontinuity associated with measurement burst. The slope of time was more positive in the second burst. Neither time nor measurement burst was significantly associated with consistent reporting. However, survey nonresponse and consistency of responding appeared to be impacted by same-day use of substances. Overall, compliance decreased while consistency was stable across the duration of a lengthy ILD protocol. Shorter assessment windows or adaptive prompting strategies may improve overall study compliance. Further work examining daily burden and context is needed to inform future ILD design.
Affiliation(s)
- Rachel L. Gunn: Center for Alcohol and Addiction Studies, Brown University
- Andrea M. Wycoff: Center for Alcohol and Addiction Studies, Brown University; Department of Psychiatry, University of Missouri
- Holly K. Boyle: Center for Alcohol and Addiction Studies, Brown University
- Helene R. White: Rutgers Center of Alcohol and Substance Studies, Rutgers University
6. King ZD, Yu H, Vaessen T, Myin-Germeys I, Sano A. Investigating Receptivity and Affect Using Machine Learning: Ecological Momentary Assessment and Wearable Sensing Study. JMIR Mhealth Uhealth 2024;12:e46347. PMID: 38324358; PMCID: PMC10882474; DOI: 10.2196/46347. Received 02/08/2023; revised 11/01/2023; accepted 11/21/2023. Open access.
Abstract
BACKGROUND As mobile health (mHealth) studies become increasingly productive owing to advancements in wearable and mobile sensor technology, our ability to monitor and model human behavior will be constrained by participant receptivity. Many health constructs depend on subjective responses, and without such responses, researchers are left with little to no ground truth to accompany ever-growing biobehavioral data. This issue can significantly impact the quality of a study, particularly for populations known to exhibit lower compliance rates. To address this challenge, researchers have proposed innovative approaches that use machine learning (ML) and sensor data to modify the timing and delivery of surveys. However, an overarching concern is the potential introduction of biases or unintended influences on participants' responses when implementing new survey delivery methods. OBJECTIVE This study aims to demonstrate the potential impact of an ML-based ecological momentary assessment (EMA) delivery system (using receptivity as the predictor variable) on participants' reported emotional state. We examine the factors that affect participants' receptivity to EMAs in a 10-day wearable and EMA-based emotional state-sensing mHealth study. We study the physiological relationships indicative of receptivity and affect while also analyzing the interaction between the 2 constructs. METHODS We collected data from 45 healthy participants wearing 2 devices measuring electrodermal activity, acceleration, electrocardiography, and skin temperature while answering 10 EMAs daily, containing questions about perceived mood. Owing to the nature of our constructs, we can only obtain ground truth measures for both affect and receptivity during responses. Therefore, we used unsupervised and supervised ML methods to infer affect when a participant did not respond. Our unsupervised method used k-means clustering to determine the relationship between physiology and receptivity and then inferred the emotional state during nonresponses. For the supervised learning method, we primarily used random forests and neural networks to predict the affect of unlabeled data points as well as receptivity. RESULTS Our findings showed that using a receptivity model to trigger EMAs decreased reported negative affect by >3 points, or 0.29 SDs, on our self-reported affect measure (scored between 13 and 91). The findings also showed a bimodal distribution of predicted affect during nonresponses, indicating that the system initiates EMAs more commonly during states of higher positive emotion. CONCLUSIONS Our results showed a clear relationship between affect and receptivity. This relationship can affect the efficacy of an mHealth study, particularly those that use an ML algorithm to trigger EMAs. Therefore, we propose that future work focus on a smart trigger that promotes EMA receptivity without influencing affect during sampled time points.
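The unsupervised step described above can be illustrated with a toy one-dimensional k-means (Lloyd's algorithm): cluster a physiological feature at labeled moments, then label non-response moments by their nearest cluster. The feature values and the two-cluster framing are assumptions for illustration only, not the study's actual features or code.

```python
# Toy 1-D k-means (Lloyd's algorithm); illustrative only, not the study's code.
def kmeans_1d(values, centers, iters=20):
    """Alternate assignment and centroid-update steps on scalar data."""
    for _ in range(iters):
        # Assignment step: each value joins its nearest center's group.
        groups = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        # Update step: recompute centers (keep the old center if a group is empty).
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

def assign(v, centers):
    """Cluster index for a new (e.g., non-response) observation."""
    return min(range(len(centers)), key=lambda i: abs(v - centers[i]))

# Hypothetical electrodermal-activity features captured at response times.
eda = [0.1, 0.2, 0.15, 0.9, 1.1, 1.0]
centers = kmeans_1d(eda, centers=[0.0, 1.0])
print(assign(0.95, centers))  # 1 (the high-arousal cluster)
```

A real pipeline would cluster multivariate features and map clusters to affect via the responses observed within each cluster; the sketch only shows the assignment mechanics.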
Affiliation(s)
- Zachary D King: Department of Electrical and Computer Engineering, Rice University, Houston, TX, United States
- Han Yu: Department of Electrical and Computer Engineering, Rice University, Houston, TX, United States
- Thomas Vaessen: Center for Contextual Psychiatry, Katholieke Universiteit Leuven, Leuven, Belgium; Department of Psychology, Health & Technology, University of Twente, Enschede, Netherlands
- Inez Myin-Germeys: Center for Contextual Psychiatry, Katholieke Universiteit Leuven, Leuven, Belgium
- Akane Sano: Department of Electrical and Computer Engineering, Rice University, Houston, TX, United States
7. Edney S, Goh CM, Chua XH, Low A, Chia J, S Koek D, Cheong K, van Dam R, Tan CS, Müller-Riemenschneider F. Evaluating the Effects of Rewards and Schedule Length on Response Rates to Ecological Momentary Assessment Surveys: Randomized Controlled Trials. J Med Internet Res 2023;25:e45764. PMID: 37856188; PMCID: PMC10623229; DOI: 10.2196/45764. Received 01/16/2023; revised 05/31/2023; accepted 07/28/2023. Open access.
Abstract
BACKGROUND Ecological momentary assessments (EMAs) are short, repeated surveys designed to collect information on experiences in real-time, real-life contexts. Embedding periodic bursts of EMAs within cohort studies enables the study of experiences on multiple timescales and could greatly enhance the accuracy of self-reported information. However, the burden on participants may be high and should be minimized to optimize EMA response rates. OBJECTIVE We aimed to evaluate the effects of study design features on EMA response rates. METHODS Embedded within an ongoing cohort study (Health@NUS), 3 bursts of EMAs were implemented over a 7-month period (April to October 2021). The response rate (percentage of completed EMA surveys from all sent EMA surveys; 30-42 individual EMA surveys sent/burst) for each burst was examined. Following a low response rate in burst 1, changes were made to the subsequent implementation strategy (SMS text message announcements instead of emails). In addition, 2 consecutive randomized controlled trials were conducted to evaluate the efficacy of 4 different reward structures (with fixed and bonus components) and 2 different schedule lengths (7 or 14 d) on changes to the EMA response rate. Analyses were conducted from 2021 to 2022 using ANOVA and analysis of covariance to examine group differences and mixed models to assess changes across all 3 bursts. RESULTS Participants (N=384) were university students (n=232, 60.4% female; mean age 23, SD 1.3 y) in Singapore. Changing the reward structure did not significantly change the response rate (F3,380=1.75; P=.16). Changing the schedule length did significantly change the response rate (F1,382=6.23; P=.01); the response rate was higher for the longer schedule (14 d; mean 48.34%, SD 33.17%) than the shorter schedule (7 d; mean 38.52%, SD 33.44%). The average response rate was higher in burst 2 and burst 3 (mean 50.56, SD 33.61 and mean 48.34, SD 33.17, respectively) than in burst 1 (mean 25.78, SD 30.12), and the difference was statistically significant (F2,766=93.83; P<.001). CONCLUSIONS Small changes to the implementation strategy (SMS text messages instead of emails) may have contributed to increasing the response rate over time. Changing the available rewards did not lead to a significant difference in the response rate, whereas changing the schedule length did. Our study provides novel insights on how to implement EMA surveys in ongoing cohort studies. This knowledge is essential for conducting high-quality studies using EMA surveys. TRIAL REGISTRATION ClinicalTrials.gov NCT05154227; https://clinicaltrials.gov/ct2/show/NCT05154227.
Affiliation(s)
- Sarah Edney: Physical Activity and Nutrition Determinants in Asia Programme, Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, Singapore
- Claire Marie Goh: Physical Activity and Nutrition Determinants in Asia Programme, Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, Singapore
- Xin Hui Chua: Physical Activity and Nutrition Determinants in Asia Programme, Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, Singapore
- Alicia Low: Singapore Health Promotion Board, Singapore Government, Singapore
- Janelle Chia: Singapore Health Promotion Board, Singapore Government, Singapore
- Daphne S Koek: Singapore Health Promotion Board, Singapore Government, Singapore
- Karen Cheong: Singapore Health Promotion Board, Singapore Government, Singapore
- Rob van Dam: Physical Activity and Nutrition Determinants in Asia Programme, Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, Singapore; Department of Exercise and Nutrition Sciences and Epidemiology, Milken Institute of Public Health, The George Washington University, Washington, DC, United States
- Chuen Seng Tan: Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, Singapore
- Falk Müller-Riemenschneider: Physical Activity and Nutrition Determinants in Asia Programme, Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Digital Health Center, Berlin Institute of Health, Charité-Universitätsmedizin Berlin, Berlin, Germany
8. Hester J, Le H, Intille S, Meier E. A feasibility study on the use of audio-based ecological momentary assessment with persons with aphasia. ASSETS: Annual ACM Conference on Assistive Technologies 2023;2023:55. PMID: 38549687; PMCID: PMC10969676; DOI: 10.1145/3597638.3608419.
Abstract
We describe a smartphone/smartwatch system to evaluate anomia in individuals with aphasia by using audio-recording-based ecological momentary assessments. The system delivers object-naming assessments to a participant's smartwatch, whereby a prompt signals the availability of images of these objects on the watch screen. Participants attempt to speak the names of the images that appear on the watch display out loud and into the watch as they go about their lives. We conducted a three-week feasibility study with six participants with mild to moderate aphasia. Participants were assigned to either a nine-item (four prompts per day with nine images) or single-item (36 prompts per day with one image each) ecological momentary assessment protocol. Compliance in recording an audio response to a prompt was approximately 80% for both protocols. Qualitative analysis of the participants' interviews suggests that the participants felt capable of completing the protocol, but opinions about using a smartwatch were mixed. We review participant feedback and highlight the importance of considering a population's specific cognitive or motor impairments when designing technology and training protocols.
Affiliation(s)
- Jack Hester: Khoury College of Computer Sciences and Bouvé College of Health Sciences, Northeastern University
- Ha Le: Khoury College of Computer Sciences, Northeastern University
- Stephen Intille: Khoury College of Computer Sciences and Bouvé College of Health Sciences, Northeastern University
- Erin Meier: Bouvé College of Health Sciences, Northeastern University