1. Koorathota S, Ma JL, Faller J, Hong L, Lapborisuth P, Sajda P. Pupil-linked arousal correlates with neural activity prior to sensorimotor decisions. J Neural Eng 2023; 20:066031. [PMID: 38016448] [DOI: 10.1088/1741-2552/ad1055] [Received: 06/13/2023; Accepted: 11/28/2023]
Abstract
Objective. Sensorimotor decisions require the brain to process external information and combine it with relevant knowledge prior to actions. In this study, we explore the neural predictors of motor actions in a novel, realistic driving task designed to study decisions while driving. Approach. Through a spatiospectral assessment of functional connectivity during the premotor period, we identified the organization of visual cortex regions of interest into a distinct scene processing network. Additionally, we identified a motor action selection network characterized by coherence between the anterior cingulate cortex (ACC) and dorsolateral prefrontal cortex (DLPFC). Main results. We show that steering behavior can be predicted from oscillatory power in the visual cortex, DLPFC, and ACC. Power during the premotor periods (specific to the theta and beta bands) correlates with pupil-linked arousal and saccade duration. Significance. We interpret our findings in the context of network-level correlations with saccade-related behavior and show that the DLPFC is a key node in arousal circuitry and in sensorimotor decisions.
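The premotor-period analysis above rests on band-limited oscillatory power estimates. As a minimal sketch (not the authors' pipeline), theta- and beta-band power for a single EEG epoch can be estimated with Welch's method; the sampling rate, epoch length, and band edges below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import welch

fs = 256                                    # assumed sampling rate in Hz
rng = np.random.default_rng(0)
epoch = rng.standard_normal(2 * fs)         # synthetic 2 s premotor EEG epoch

# Welch power spectral density, 1 Hz frequency resolution with nperseg=fs
freqs, psd = welch(epoch, fs=fs, nperseg=fs)

def band_power(freqs, psd, lo, hi):
    """Integrate the PSD over [lo, hi) Hz with a simple Riemann sum."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

theta_power = band_power(freqs, psd, 4, 8)    # theta band
beta_power = band_power(freqs, psd, 13, 30)   # beta band
```

In practice such band-power values, computed per region of interest and trial, would feed the kind of correlation analyses with steering behavior and arousal that the abstract describes.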
Affiliation(s)
- Sharath Koorathota
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Jia Li Ma
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Josef Faller
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Linbi Hong
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Pawan Lapborisuth
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Paul Sajda
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Department of Electrical Engineering, Columbia University, New York, NY, United States of America
- Data Science Institute, Columbia University, New York, NY, United States of America
2. Lapborisuth P, Koorathota S, Sajda P. Pupil-linked arousal modulates network-level EEG signatures of attention reorienting during immersive multitasking. J Neural Eng 2023; 20:046043. [PMID: 37595578] [DOI: 10.1088/1741-2552/acf1cb] [Received: 01/29/2023; Accepted: 08/18/2023]
Abstract
Objective. When multitasking, we must dynamically reorient our attention between different tasks. Attention reorienting is thought to arise through interactions of physiological arousal and brain-wide network dynamics. In this study, we investigated the relationship between pupil-linked arousal and electroencephalography (EEG) brain dynamics in a multitask driving paradigm conducted in virtual reality. We hypothesized that there would be an interaction between arousal and EEG dynamics and that this interaction would correlate with multitasking performance. Approach. We collected EEG and eye tracking data while subjects drove a motorcycle through a simulated city environment, with the instructions to count the number of target images they observed while avoiding crashing into a lead vehicle. The paradigm required the subjects to continuously reorient their attention between the two tasks. Subjects performed the paradigm under two conditions, one more difficult than the other. Main results. We found that task difficulty did not strongly correlate with pupil-linked arousal, and overall task performance increased as arousal level increased. A single-trial analysis revealed several interesting relationships between pupil-linked arousal and task-relevant EEG dynamics. Employing exact low-resolution electromagnetic tomography, we found that higher pupil-linked arousal led to greater EEG oscillatory activity, especially in regions associated with the dorsal attention network and ventral attention network (VAN). Consistent with our hypothesis, we found a relationship between EEG functional connectivity and pupil-linked arousal as a function of multitasking performance. Specifically, we found decreased functional connectivity between regions in the salience network (SN) and the VAN as pupil-linked arousal increased, suggesting that improved multitasking performance at high arousal levels may be due to a down-regulation in coupling between the VAN and the SN.
Our results suggest that when multitasking, our brain rebalances arousal-based reorienting so that individual task demands can be met without prematurely reorienting to competing tasks.
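Functional connectivity between region time courses, as examined above, is commonly quantified with magnitude-squared coherence. A hedged sketch follows, with synthetic signals standing in for two source time courses (e.g. one SN and one VAN region); this is an illustration, not the paper's eLORETA source estimates:

```python
import numpy as np
from scipy.signal import coherence

fs = 256                                         # assumed sampling rate in Hz
rng = np.random.default_rng(1)
common = rng.standard_normal(8 * fs)             # shared drive creates coupling
ch_a = common + 0.5 * rng.standard_normal(8 * fs)   # stand-in for an SN region
ch_b = common + 0.5 * rng.standard_normal(8 * fs)   # stand-in for a VAN region

# Magnitude-squared coherence via Welch-averaged cross-spectra
freqs, coh = coherence(ch_a, ch_b, fs=fs, nperseg=fs)
alpha_coh = coh[(freqs >= 8) & (freqs <= 12)].mean()  # mean band coherence
```

Tracking a band coherence value like `alpha_coh` across trials binned by pupil size is one way to test the arousal-dependent coupling changes the abstract reports.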
Affiliation(s)
- Pawan Lapborisuth
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Sharath Koorathota
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Paul Sajda
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Department of Electrical Engineering, Columbia University, New York, NY, United States of America
- Department of Radiology, Columbia University Irving Medical Center, New York, NY, United States of America
- Data Science Institute, Columbia University, New York, NY, United States of America
3. Ma JL, Koorathota S, Sajda P. Neurophysiological Predictors of Self-Reported Difficulty in a Virtual-Reality Driving Scenario. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38082974] [DOI: 10.1109/embc40787.2023.10340597]
Abstract
Our perception of subjective difficulty in complex tasks, such as driving, is a judgment that likely results from dynamic interactions between distributed brain regions. In this paper, we investigate how neurophysiological markers associated with arousal state are informative of this perceived difficulty throughout a driving task. We do this by classifying subjects' subjective difficulty reports using a set of features that includes neural, autonomic, and eye-behavior markers, and we subsequently assess the importance of these features in the classification. We find that although multiple EEG features linked to cognitive control, as well as motor performance, contribute to the classification of subjective difficulty, only pupil diameter, a measure of pupil-linked arousal, is strongly linked to both self-reported difficulty and actual task performance. We interpret our findings in the context of arousal pathways influencing performance and discuss their relevance to future brain-computer interface systems.
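Classifying difficulty reports from mixed feature sets and then ranking feature importance, as described above, can be sketched with a random forest (a stand-in; the paper's classifier and exact features are not specified here). The feature names and the dependence of the label on pupil diameter are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 400
# Synthetic single-trial features (names are illustrative, not the paper's)
pupil = rng.standard_normal(n)           # pupil diameter (arousal proxy)
eeg_theta = rng.standard_normal(n)       # a cognitive-control EEG marker
steering_var = rng.standard_normal(n)    # a motor-performance marker
X = np.column_stack([pupil, eeg_theta, steering_var])
# Label depends mostly on pupil, mirroring the abstract's finding
y = (pupil + 0.2 * eeg_theta + 0.3 * rng.standard_normal(n)) > 0

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
importance = dict(zip(["pupil", "eeg_theta", "steering_var"],
                      clf.feature_importances_))
```

With this construction, `importance["pupil"]` dominates, the analogue of the abstract's conclusion that pupil diameter is the feature most strongly linked to difficulty.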
4. Ranzenhofer LM, Solhjoo S, Crosby RD, Kim BH, Korn R, Koorathota S, Lloyd EC, Walsh BT, Haigney MC. Autonomic indices and loss-of-control eating in adolescents: an ecological momentary assessment study. Psychol Med 2023; 53:4742-4750. [PMID: 35920245] [PMCID: PMC10336770] [DOI: 10.1017/s0033291722001684]
Abstract
BACKGROUND Loss-of-control (LOC) eating commonly develops during adolescence, and it predicts full-syndrome eating disorders and excess weight gain. Although negative emotions and emotion dysregulation are hypothesized to precede and predict LOC eating, they are rarely examined outside the self-report domain. Autonomic indices, including heart rate (HR) and heart rate variability (HRV), may provide information about stress and capacity for emotion regulation in response to stress. METHODS We studied whether autonomic indices predict LOC eating in real-time in adolescents with LOC eating and body mass index (BMI) ⩾70th percentile. Twenty-four adolescents aged 12-18 (67% female; BMI percentile mean ± standard deviation = 92.6 ± 9.4) who reported at least twice-monthly LOC episodes wore biosensors to monitor HR, HRV, and physical activity for 1 week. They reported their degree of LOC after all eating episodes on a visual analog scale (0-100) using a smartphone. RESULTS Adjusting for physical activity and time of day, higher HR and lower HRV predicted higher self-reported LOC after eating. Parsing between- and within-subjects effects, there was a significant, positive, within-subjects association between pre-meal HR and post-meal LOC rating. However, there was no significant within-subjects effect for HRV, nor were there between-subjects effects for either electrophysiologic variable. CONCLUSIONS Findings suggest that autonomic indices may either be a marker of risk for subsequent LOC eating or contribute to LOC eating. Linking physiological markers with behavior in the natural environment can improve knowledge of illness mechanisms and provide new avenues for intervention.
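Parsing between- and within-subjects effects, as in the analysis above, typically starts with person-mean centering of the repeated predictor (here, pre-meal heart rate). A minimal sketch with synthetic data follows; the subject counts and HR values are assumptions for illustration, and the centered terms would then enter a mixed-effects model:

```python
import numpy as np

rng = np.random.default_rng(3)
subjects = np.repeat(np.arange(6), 20)        # 6 subjects, 20 meals each
# Synthetic pre-meal heart rate (bpm) with subject-level offsets
hr = 70 + 5 * rng.standard_normal(subjects.size) + subjects

# Person-mean centering separates within- from between-subject variation
subj_means = np.array([hr[subjects == s].mean() for s in np.unique(subjects)])
between = subj_means[subjects]    # each subject's own average HR
within = hr - between             # meal-to-meal deviation from own average
```

In the abstract's terms, a coefficient on `within` tests whether a meal preceded by higher-than-usual HR gets a higher LOC rating, while a coefficient on `between` tests whether high-average-HR adolescents report more LOC overall.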
Affiliation(s)
- Lisa M Ranzenhofer
- Columbia University Irving Medical Center, New York, NY, USA
- New York State Psychiatric Institute, New York, NY, USA
- Soroosh Solhjoo
- Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Ross D Crosby
- Sanford Center for Biobehavioral Research, Fargo, ND, USA
- Brittany H Kim
- Columbia University Irving Medical Center, New York, NY, USA
- New York State Psychiatric Institute, New York, NY, USA
- Rachel Korn
- Columbia University Irving Medical Center, New York, NY, USA
- New York State Psychiatric Institute, New York, NY, USA
- E Caitlin Lloyd
- Columbia University Irving Medical Center, New York, NY, USA
- New York State Psychiatric Institute, New York, NY, USA
- B Timothy Walsh
- Columbia University Irving Medical Center, New York, NY, USA
- Mark C Haigney
- F. Edward Hébert School of Medicine, Bethesda, MD, USA
- Military Cardiovascular Outcomes Research (MiCOR), Bethesda, MD, USA
5. Koorathota S, Khan Z, Lapborisuth P, Sajda P. Multimodal Neurophysiological Transformer for Emotion Recognition. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:3563-3567. [PMID: 36086657] [DOI: 10.1109/embc48229.2022.9871421]
Abstract
Understanding neural function often requires multiple modalities of data, including electrophysiological recordings, imaging techniques, and demographic surveys. In this paper, we introduce a novel neurophysiological model to tackle major challenges in modeling multimodal data. First, we avoid non-alignment issues between raw signals and extracted frequency-domain features by addressing the issue of variable sampling rates. Second, we encode modalities through "cross-attention" with other modalities. Lastly, we utilize properties of our parent transformer architecture to model long-range dependencies between segments across modalities and assess intermediary weights to better understand how source signals affect prediction. We apply our Multimodal Neurophysiological Transformer (MNT) to predict valence and arousal in an existing open-source dataset. Experiments on non-aligned multimodal time series show that our model performs similarly to and, in some cases, outperforms existing methods in classification tasks. In addition, qualitative analysis suggests that MNT is able to model neural influences on autonomic activity in predicting arousal. Our architecture has the potential to be fine-tuned to a variety of downstream tasks, including brain-computer interface (BCI) systems.
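The "cross-attention" step described above lets one modality query another: queries come from one modality's segment embeddings, while keys and values come from the other's. A bare NumPy sketch of single-head scaled dot-product cross-attention follows; the token counts, embedding dimension, and the EEG/autonomic labeling are illustrative assumptions, not the MNT architecture itself:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_tokens, kv_tokens, d):
    """Q from one modality, K/V from another (single head, no projections)."""
    scores = q_tokens @ kv_tokens.T / np.sqrt(d)   # (n_q, n_kv) similarities
    weights = softmax(scores, axis=-1)             # attention over kv tokens
    return weights @ kv_tokens, weights            # fused tokens, weights

rng = np.random.default_rng(4)
d = 16
eeg = rng.standard_normal((10, d))   # 10 EEG segment embeddings (assumed)
ecg = rng.standard_normal((7, d))    # 7 autonomic segment embeddings (assumed)
fused, weights = cross_attention(eeg, ecg, d)
```

Because the attention weights form a distribution over the key/value modality's segments, inspecting them is one way to "assess intermediary weights" as the abstract describes.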
6. Lapborisuth P, Koorathota S, Wang Q, Sajda P. Integrating neural and ocular attention reorienting signals in virtual reality. J Neural Eng 2021; 18:066052. [PMID: 34937017] [DOI: 10.1088/1741-2552/ac4593] [Received: 06/21/2021; Accepted: 12/22/2021]
Abstract
Objective. Reorienting is central to how humans direct attention to different stimuli in their environment. Previous studies typically employ well-controlled paradigms with limited eye and head movements to study the neural and physiological processes underlying attention reorienting. Here, we aim to better understand the relationship between gaze and attention reorienting using a naturalistic virtual reality (VR)-based target detection paradigm. Approach. Subjects were navigated through a city and instructed to count the number of targets that appeared on the street. Subjects performed the task in a fixed condition with no head movement and in a free condition where head movements were allowed. Electroencephalography (EEG), gaze and pupil data were collected. To investigate how neural and physiological reorienting signals are distributed across different gaze events, we used hierarchical discriminant component analysis (HDCA) to identify EEG- and pupil-based discriminating components. Mixed-effects general linear models (GLM) were used to determine the correlation between these discriminating components and the timing of the different gaze events. HDCA was also used to combine EEG, pupil and dwell time signals to classify reorienting events. Main results. In both EEG and pupil, dwell time contributes most significantly to the reorienting signals. However, when dwell times were orthogonalized against other gaze events, the distributions of the reorienting signals were different across the two modalities, with EEG reorienting signals leading the pupil reorienting signals. We also found that a hybrid classifier that integrates EEG, pupil and dwell time features detects the reorienting signals in both the fixed (AUC = 0.79) and the free (AUC = 0.77) conditions. Significance. We show that the neural and ocular reorienting signals are distributed differently across gaze events when a subject is immersed in VR, but nevertheless can be captured and integrated to classify target vs. distractor objects to which the human subject orients.
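A hybrid classifier of the kind described above combines per-event EEG, pupil, and dwell-time features and is scored by AUC. The sketch below uses logistic regression on synthetic features as a stand-in for the HDCA pipeline; the feature construction and effect sizes are assumptions for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 600
eeg_comp = rng.standard_normal(n)     # EEG discriminating component (HDCA-style)
pupil_comp = rng.standard_normal(n)   # pupil discriminating component
dwell = rng.standard_normal(n)        # dwell time on the fixated object
# Synthetic target (1) vs distractor (0) labels driven by all three features
y = (0.8 * eeg_comp + 0.5 * pupil_comp + 1.2 * dwell
     + rng.standard_normal(n)) > 0
X = np.column_stack([eeg_comp, pupil_comp, dwell])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

Held-out AUC is the same metric reported for the fixed (0.79) and free (0.77) conditions, though the values here come from synthetic data.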
Affiliation(s)
- Pawan Lapborisuth
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Sharath Koorathota
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Qi Wang
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Paul Sajda
- Department of Biomedical Engineering, Columbia University, New York, NY, United States of America
- Department of Radiology, Columbia University Irving Medical Center, New York, NY 10032, United States of America
- Department of Electrical Engineering, Columbia University, New York, NY 10027, United States of America
- Data Science Institute, Columbia University, New York, NY 10027, United States of America
7. Koorathota S, Thakoor K, Hong L, Mao Y, Adelman P, Sajda P. A Recurrent Neural Network for Attenuating Non-cognitive Components of Pupil Dynamics. Front Psychol 2021; 12:604522. [PMID: 33597908] [PMCID: PMC7882598] [DOI: 10.3389/fpsyg.2021.604522] [Received: 09/16/2020; Accepted: 01/04/2021] [Open Access]
Abstract
There is increasing interest in how the pupil dynamics of the eye reflect underlying cognitive processes and brain states. Problematic, however, is that pupil changes can be due to non-cognitive factors, for example, luminance changes in the environment, accommodation, and movement. In this paper, we consider how, by modeling the response of the pupil in real-world environments, we can capture these non-cognitive changes and remove them to extract a residual signal that is a better index of cognition and performance. Specifically, we utilize sequence measures such as fixation position, duration, saccades, and blink-related information as inputs to a deep recurrent neural network (RNN) model for predicting subsequent pupil diameter. We build and evaluate the model for a task where subjects watch educational videos and are subsequently asked questions based on the content. Compared to commonly used models for this task, the RNN had the lowest error rates in predicting subsequent pupil dilation given sequence data. Most important was how the model output related to subjects' cognitive performance as assessed by a post-viewing test. Consistent with our hypothesis that the model captures non-cognitive pupil dynamics, we found that (1) the model's root-mean-square error was lower for lower-performing subjects than for those having better performance on the post-viewing test, (2) the residuals of the RNN (LSTM) model had the highest correlation with subjects' post-viewing test scores, and (3) the residuals had the highest discriminability (assessed via area under the ROC curve, AUC) for classifying high and low test performers, compared to the true pupil size or the RNN model predictions. This suggests that deep learning sequence models may be good for separating components of pupil responses that are linked to luminance and accommodation from those that are linked to cognition and arousal.
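The residual logic above, fit a model to the non-cognitive drivers of pupil size, then treat what the model cannot explain as the cognitive signal, can be sketched with a plain linear predictor standing in for the RNN. All feature names and effect sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
luminance = rng.standard_normal(n)       # non-cognitive driver (assumed)
fix_duration = rng.standard_normal(n)    # non-cognitive driver (assumed)
cognitive = rng.standard_normal(n)       # latent arousal-linked component
# Synthetic pupil diameter: mostly non-cognitive, partly cognitive
pupil = 0.9 * luminance + 0.4 * fix_duration + 0.5 * cognitive

# Stand-in for the RNN: least-squares fit from non-cognitive features only
X = np.column_stack([luminance, fix_duration, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, pupil, rcond=None)
residual = pupil - X @ coef              # candidate index of cognition/arousal

r = np.corrcoef(residual, cognitive)[0, 1]
```

Because the predictor only sees the non-cognitive features, the residual tracks the latent cognitive component, the property the paper exploits when correlating residuals with test scores.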
Affiliation(s)
- Sharath Koorathota
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Fovea Inc., New York, NY, United States
- Kaveri Thakoor
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Linbi Hong
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
- Yaoli Mao
- Department of Cognitive Science, Columbia University, New York, NY, United States
- Paul Sajda
- Department of Biomedical Engineering, Columbia University, New York, NY, United States
8. Bergelson E, Amatuni A, Dailey S, Koorathota S, Tor S. Day by day, hour by hour: Naturalistic language input to infants. Dev Sci 2019; 22:e12715. [PMID: 30094888] [PMCID: PMC6294661] [DOI: 10.1111/desc.12715] [Received: 01/15/2018; Accepted: 06/13/2018]
Abstract
Measurements of infants' quotidian experiences provide critical information about early development. However, the role of sampling methods in providing these measurements is rarely examined. Here we directly compare language input from hour-long video-recordings and daylong audio-recordings within the same group of 44 infants at 6 and 7 months. We compared 12 measures of language quantity and lexical diversity, talker variability, utterance-type, and object presence, finding moderate correlations across recording-types. However, video-recordings generally featured far denser noun input across these measures compared to the daylong audio-recordings, more akin to 'peak' audio hours (though not as high in talkers and word-types). Although audio-recordings captured ~10 times more awake-time than videos, the noun input in them was only 2-4 times greater. Notably, whether we compared videos to daylong audio-recordings or peak audio times, videos featured relatively fewer declaratives and more questions; furthermore, the most common video-recorded nouns were less consistent across families than the top audio-recording nouns were. Thus, hour-long videos and daylong audio-recordings revealed fairly divergent pictures of the language infants hear and learn from in their daily lives. We suggest that short video-recordings provide a dense and somewhat different sample of infants' language experiences, rather than a typical one, and should be used cautiously for extrapolation about common words, talkers, utterance-types, and contexts at larger timescales. If theories of language development are to be held accountable to 'facts on the ground' from observational data, greater care is needed to unpack the ramifications of sampling methods of early language input.
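The comparison above hinges on simple corpus measures: token counts, type counts, most common nouns, and input density per hour of recording. A toy sketch of these measures follows; the word lists and the 1 h video vs. 10 h audio durations are made-up stand-ins, not the study's corpus:

```python
from collections import Counter

# Hypothetical noun tokens from the two recording types
video_nouns = ["ball", "dog", "ball", "cup", "dog", "ball"]
audio_nouns = ["cup", "dog", "milk", "cup", "shoe", "dog", "cup", "book"]

def summarize(tokens):
    """Token count, type count, and most frequent nouns for one sample."""
    counts = Counter(tokens)
    return {"tokens": len(tokens), "types": len(counts),
            "top": counts.most_common(2)}

video_summary = summarize(video_nouns)
audio_summary = summarize(audio_nouns)

# Input density per recorded hour (assumed durations: 1 h video, 10 h audio)
video_density = video_summary["tokens"] / 1
audio_density = audio_summary["tokens"] / 10
```

Even in this toy example, the shorter recording can show higher per-hour noun density while the longer one captures more total tokens, the kind of sampling-method divergence the abstract cautions about.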