1. Akkoyun M, Koçoğlu K, Eraslan Boz H, Tüfekci IY, Ekin M, Akdal G. Visual search for real-world scenes in patients with Alzheimer's disease and amnestic mild cognitive impairment. Brain Behav 2024; 14:e3567. PMID: 38841742; PMCID: PMC11154822; DOI: 10.1002/brb3.3567.
Abstract
Background: Visual attention-related processes that underlie visual search behavior are impaired both in the early stages of Alzheimer's disease (AD) and in amnestic mild cognitive impairment (aMCI), which is considered a risk factor for AD. Although traditional computer-based array tasks have been used to investigate visual search, information on the visual search patterns of AD and aMCI patients in real-world environments is limited.
Aim: To evaluate differences in visual search behavior among individuals with AD, individuals with aMCI, and healthy controls (HCs) in real-world scenes.
Materials and methods: A total of 92 participants were enrolled: 28 with AD, 32 with aMCI, and 32 HCs. During the visual search task, participants were instructed to look at a single target object amid distractors while their eye movements were recorded.
Results: Patients with AD made more fixations on distractors and fewer fixations on the target than both the aMCI and HC groups. AD patients also had longer fixation durations on distractors and spent less time looking at the target than both aMCI patients and HCs.
Discussion: These findings suggest that visual search behavior is impaired in patients with AD and distinguishes them from aMCI patients and healthy individuals. Future studies should monitor visual search behavior longitudinally across the progression from aMCI to AD.
Conclusion: This study helps elucidate the interplay between impairments in attention, visual processes, and other underlying cognitive processes that contributes to the functional decline observed in individuals with AD and aMCI.
Affiliation(s)
- Müge Akkoyun: Department of Neuroscience, Institute of Health Sciences, Dokuz Eylul University, Izmir, Türkiye
- Koray Koçoğlu: Department of Neuroscience, Institute of Health Sciences, Dokuz Eylul University, Izmir, Türkiye
- Hatice Eraslan Boz: Department of Neuroscience, Institute of Health Sciences, Dokuz Eylul University, Izmir, Türkiye; Department of Neurology, Unit of Neuropsychology, Dokuz Eylul University, Izmir, Türkiye
- Işıl Yağmur Tüfekci: Department of Neuroscience, Institute of Health Sciences, Dokuz Eylul University, Izmir, Türkiye
- Merve Ekin: Department of Neuroscience, Institute of Health Sciences, Dokuz Eylul University, Izmir, Türkiye
- Gülden Akdal: Department of Neuroscience, Institute of Health Sciences, Dokuz Eylul University, Izmir, Türkiye; Department of Neurology, Faculty of Medicine, Dokuz Eylul University, Izmir, Türkiye
2. Ren Y, Zhang Y, Liu Z, Xie N. Eye-Hand Typing: Eye Gaze Assisted Finger Typing via Bayesian Processes in AR. IEEE Trans Vis Comput Graph 2024; 30:2496-2506. PMID: 38498759; DOI: 10.1109/tvcg.2024.3372106.
Abstract
Nowadays, AR head-mounted displays (HMDs) are widely used in scenarios such as intelligent manufacturing and digital factories. In a factory environment, fast and accurate text input is crucial for operators' efficiency and task completion quality. However, the traditional AR keyboard may not meet this requirement, and noisy environments are unsuitable for voice input. In this article, we introduce Eye-Hand Typing, an intelligent AR keyboard. We leverage the speed advantage of eye gaze and use a Bayesian process based on gaze-point information to infer users' text input intentions. We improve the underlying keyboard algorithm without changing user input habits, thereby improving factory users' text input speed and accuracy. In real-time use, when the user's gaze point is on the keyboard, the Bayesian process predicts the characters, vocabulary, or commands the user is most likely to input, based on the position and duration of the gaze point and the input history. The system then enlarges and highlights the recommended input options, improving input efficiency. A user study showed that, compared with the current HoloLens 2 system keyboard, Eye-Hand Typing reduced input error rates by 28.31% and improved text input speed by 14.5%. It also outperformed a gaze-only technique, being 43.05% more accurate and 39.55% faster, with no significant increase in eye fatigue. Users also expressed positive preferences.
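The underlying inference is classical Bayes: a prior over candidate keys (from the input history) is combined with the likelihood of the observed gaze samples around each key center. The Python sketch below illustrates that computation; the toy key layout, the Gaussian gaze-noise model, and all parameter values are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of gaze-based Bayesian key inference (illustrative, not the paper's code).
import math

KEY_CENTERS = {"a": (0.0, 0.0), "s": (1.0, 0.0), "d": (2.0, 0.0)}  # toy layout, key pitch = 1

def gaze_likelihood(gaze_xy, key_xy, sigma=0.6):
    """Gaussian likelihood of one gaze sample given the intended key center."""
    dx, dy = gaze_xy[0] - key_xy[0], gaze_xy[1] - key_xy[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

def posterior_over_keys(gaze_samples, prior):
    """Combine a history-based prior with gaze evidence accumulated over dwell time."""
    post = dict(prior)
    for g in gaze_samples:                      # each dwell sample adds evidence
        for key, center in KEY_CENTERS.items():
            post[key] *= gaze_likelihood(g, center)
    z = sum(post.values())
    return {k: v / z for k, v in post.items()}

# Usage: a prior skewed toward "s" by input history, plus two gaze samples near "d".
prior = {"a": 0.2, "s": 0.5, "d": 0.3}
print(posterior_over_keys([(1.8, 0.1), (2.1, -0.1)], prior))
```

The system described above would then enlarge or highlight the highest-posterior candidates rather than committing to a single key.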
3. Walter K, Freeman M, Bex P. Quantifying task-related gaze. Atten Percept Psychophys 2024; 86:1318-1329. PMID: 38594445; PMCID: PMC11093728; DOI: 10.3758/s13414-024-02883-w.
Abstract
Competing theories attempt to explain what guides eye movements when exploring natural scenes: bottom-up image salience and top-down semantic salience. In one study, we apply language-based analyses to quantify the well-known observation that task influences gaze in natural scenes. Subjects viewed ten scenes as if they were performing one of two tasks. We found that the semantic similarity between the task and the labels of objects in the scenes captured the task-dependence of gaze (t(39) = 13.083; p < .001). In another study, we examined whether image salience or semantic salience better predicts gaze during a search task, and whether viewing strategies are affected by searching for targets of high or low semantic relevance to the scene. Subjects searched 100 scenes for a high- or low-relevance object. We found that image salience becomes a worse predictor of gaze across successive fixations, while semantic salience remains a consistent predictor (χ2(1, N = 40) = 75.148, p < .001). Furthermore, we found that semantic salience decreased as object relevance decreased (t(39) = 2.304; p = .027). These results suggest that semantic salience is a useful predictor of gaze during task-related scene viewing and that, even in target-absent trials, gaze is modulated by the relevance of a search target to the scene in which it might be located.
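The language-based analysis reduces to a similarity computation between word embeddings: each labeled object is scored by the cosine similarity between its label vector and the task vector. A minimal Python sketch, assuming toy 3-d vectors in place of the pretrained embeddings (e.g., GloVe) such a study would use:

```python
# Sketch of semantic-salience scoring via embedding cosine similarity (toy vectors).
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical 3-d embeddings; a real analysis would load pretrained vectors.
embeddings = {
    "wash": np.array([0.9, 0.1, 0.0]),   # the task word
    "sink": np.array([0.8, 0.2, 0.1]),   # object labels in the scene
    "sofa": np.array([0.1, 0.9, 0.2]),
}

task = embeddings["wash"]
semantic_salience = {obj: cosine(task, vec)
                     for obj, vec in embeddings.items() if obj != "wash"}
print(semantic_salience)   # "sink" outranks "sofa" for a washing task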
Affiliation(s)
- Kerri Walter: Department of Psychology, Northeastern University, Boston, MA, USA
- Michelle Freeman: Department of Psychology, Northeastern University, Boston, MA, USA
- Peter Bex: Department of Psychology, Northeastern University, Boston, MA, USA
4. Aizenman AM, Gegenfurtner KR, Goettker A. Oculomotor routines for perceptual judgments. J Vis 2024; 24:3. PMID: 38709511; PMCID: PMC11078167; DOI: 10.1167/jov.24.5.3.
Abstract
In everyday life we frequently make simple visual judgments about object properties, for example, how big or how wide a certain object is. Our goal was to test whether there are task-specific oculomotor routines that support perceptual judgments, similar to the well-established exploratory routines for haptic perception. In a first study, observers saw different scenes with two objects presented in a photorealistic virtual reality environment. Observers were asked to judge which of the two objects was taller or wider while gaze was tracked. All tasks were performed with the same set of virtual objects in the same scenes, so that we could compare the spatial characteristics of exploratory gaze behavior and quantify oculomotor routines for each task. Width judgments showed fixations around the center of the objects, with larger horizontal spread. In contrast, for height judgments, gaze was shifted toward the top of the objects, with larger vertical spread. These results suggest task-specific strategies in gaze behavior that are presumably used for perceptual judgments. To test the causal link between oculomotor behavior and perception, in a second study observers either gazed freely at the object or viewed it in a gaze-contingent setup that forced fixation at specific positions on the object. Discrimination performance was similar between the free-gaze and gaze-contingent conditions for both width and height judgments. These results suggest that although gaze is adapted to the task, performance seems to be based on a perceptual strategy that is independent of potential cues provided by the oculomotor system.
Affiliation(s)
- Avi M Aizenman: Psychology Department, Giessen University, Giessen, Germany (http://aviaizenman.com/)
- Karl R Gegenfurtner: Psychology Department, Giessen University, Giessen, Germany (https://www.allpsych.uni-giessen.de/karl/)
- Alexander Goettker: Psychology Department, Giessen University, Giessen, Germany (https://alexgoettker.com/)
5. Yamada Y, Shinkawa K, Kobayashi M, Nemoto M, Ota M, Nemoto K, Arai T. Distinct eye movement patterns to complex scenes in Alzheimer's disease and Lewy body disease. Front Neurosci 2024; 18:1333894. PMID: 38646608; PMCID: PMC11026598; DOI: 10.3389/fnins.2024.1333894.
Abstract
Background: Alzheimer's disease (AD) and Lewy body disease (LBD), the two most common causes of neurodegenerative dementia, have similar clinical manifestations, and both show impaired visual attention and altered eye movements. However, prior studies have used structured tasks or restricted stimuli, limiting insight into how eye movements are altered, and differ between AD and LBD, in daily life.
Objective: We aimed to comprehensively characterize the eye movements of AD and LBD patients on naturalistic complex scenes containing broad categories of objects, providing a context closer to real-world free viewing, and to identify disease-specific patterns of altered eye movements.
Methods: We collected spontaneous viewing behavior on 200 naturalistic complex scenes from patients with AD or LBD at the prodromal or dementia stage, as well as matched control participants. We then investigated eye movement patterns using a computational visual attention model with high-level image features of object properties and semantic information.
Results: Compared with matched controls, we identified two disease-specific patterns of altered eye movements: diminished visual exploration, which correlates differentially with cognitive impairment in AD and with motor impairment in LBD; and reduced gaze allocation to objects, attributable to a weaker attention bias toward high-level image features in AD and to a greater image-center bias in LBD.
Conclusion: Our findings may help differentiate AD and LBD patients and aid understanding of their real-world visual behaviors, with the goal of mitigating the widespread impact of impaired visual attention on daily activities.
Affiliation(s)
- Yasunori Yamada: Digital Health, IBM Research, Tokyo, Japan; Department of Psychiatry, Division of Clinical Medicine, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Masatomo Kobayashi: Digital Health, IBM Research, Tokyo, Japan; Department of Psychiatry, Division of Clinical Medicine, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Miyuki Nemoto: Department of Psychiatry, Division of Clinical Medicine, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Miho Ota: Department of Psychiatry, Division of Clinical Medicine, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Kiyotaka Nemoto: Department of Psychiatry, Division of Clinical Medicine, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Tetsuaki Arai: Department of Psychiatry, Division of Clinical Medicine, Institute of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
6. Backhaus D, Engbert R. How body postures affect gaze control in scene viewing under specific task conditions. Exp Brain Res 2024; 242:745-756. PMID: 38300280; DOI: 10.1007/s00221-023-06771-x.
Abstract
Gaze movements during visual exploration of natural scenes are typically investigated with the static picture viewing paradigm in the laboratory. While this paradigm is attractive for its highly controlled conditions, concerns have frequently been raised about how well the resulting findings generalize to more natural viewing behavior. Here, we address the combined influences of body posture and viewing task on gaze behavior in the static picture viewing paradigm, with free viewing as a baseline condition. We recorded gaze data using mobile eye tracking during postural manipulations in scene viewing. Specifically, in Experiment 1, we compared gaze behavior during head-supported sitting and quiet standing under two task conditions. We found that task affects temporal and spatial gaze parameters, while posture produces no effects on temporal parameters and only small effects on spatial parameters. In Experiment 2, we investigated body posture further by introducing four conditions (sitting with chin rest, head-free sitting, quiet standing, and standing on an unstable platform). Again, we found no effects on temporal and only small effects on spatial gaze parameters. In our experiments, gaze behavior was largely unaffected by body posture, while task conditions readily produced effects. We conclude that results from static picture viewing may allow predictions of gaze statistics under more natural viewing conditions; however, viewing tasks should be chosen carefully because of their potential effects on gaze characteristics.
Affiliation(s)
- Daniel Backhaus: Department of Psychology, University of Potsdam, Karl-Liebknecht-Str. 24-25, 14476 Potsdam, Germany
- Ralf Engbert: Department of Psychology, University of Potsdam, Karl-Liebknecht-Str. 24-25, 14476 Potsdam, Germany; Research Focus Cognitive Sciences, University of Potsdam, Karl-Liebknecht-Str. 24-25, 14476 Potsdam, Germany
7. Walter K, Manley CE, Bex PJ, Merabet LB. Visual search patterns during exploration of naturalistic scenes are driven by saliency cues in individuals with cerebral visual impairment. Sci Rep 2024; 14:3074. PMID: 38321069; PMCID: PMC10847433; DOI: 10.1038/s41598-024-53642-8.
Abstract
We investigated the relative influence of image salience and image semantics during visual search of naturalistic scenes, comparing performance in individuals with cerebral visual impairment (CVI) and controls with neurotypical development. Participants searched for a prompted target presented as either an image or a text cue. Success rate and reaction time were collected, and gaze behavior was recorded with an eye tracker. A receiver operating characteristic (ROC) analysis compared the distribution of individual gaze landings with the predictions of an image salience model (Graph-Based Visual Saliency) and an image semantics model (Global Vectors for Word Representations combined with Linguistic Analysis of Semantic Salience). CVI participants were less likely to find the target and slower when they did. Their visual search behavior was also associated with a larger visual search area and a greater number of fixations. ROC scores were lower in CVI than in controls for both model predictions. Furthermore, search strategies in the CVI group were not affected by cue type, although search times and accuracy in text-cued searches correlated significantly with verbal IQ scores. These results suggest that visual search patterns in CVI are driven mainly by image salience, and they further characterize the higher-order processing deficits observed in this population.
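The ROC analysis asks how well a model's salience map separates fixated from non-fixated image locations. A minimal Python sketch of that score, assuming a synthetic salience map and uniformly sampled non-fixated pixels (the study's maps came from GBVS and GloVe/LASS models, which are not reproduced here):

```python
# Sketch of a saliency-map ROC/AUC score against observed fixation landings.
import numpy as np

def saliency_auc(salience, fixations, n_nonfix=1000, seed=0):
    """AUC for salience at fixated pixels vs. randomly sampled pixels."""
    rng = np.random.default_rng(seed)
    pos = np.array([salience[y, x] for x, y in fixations])   # fixated values
    ys = rng.integers(0, salience.shape[0], n_nonfix)
    xs = rng.integers(0, salience.shape[1], n_nonfix)
    neg = salience[ys, xs]                                   # background values
    # Probability that a fixated pixel outranks a random pixel (ties count 1/2).
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

salience = np.random.default_rng(1).random((120, 160))        # stand-in model output
fixations = [(80, 60), (40, 30), (120, 90)]                   # (x, y) gaze landings
print(round(saliency_auc(salience, fixations), 3))            # ~0.5 for a random map
```

An AUC near 0.5 means the map predicts gaze no better than chance; higher values mean gaze concentrated on regions the model flagged as salient.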
Affiliation(s)
- Kerri Walter: Translational Vision Lab, Department of Psychology, Northeastern University, Boston, MA, USA
- Claire E Manley: The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, 20 Staniford Street, Boston, MA, 02114, USA
- Peter J Bex: Translational Vision Lab, Department of Psychology, Northeastern University, Boston, MA, USA
- Lotfi B Merabet: The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, 20 Staniford Street, Boston, MA, 02114, USA
8. Yamashita J, Takimoto Y, Oishi H, Kumada T. How do personality traits modulate real-world gaze behavior? Generated gaze data shows situation-dependent modulations. Front Psychol 2024; 14:1144048. PMID: 38268808; PMCID: PMC10805946; DOI: 10.3389/fpsyg.2023.1144048.
Abstract
It would have both scientific and practical benefits to substantiate the theoretical prediction that personality (Big Five) traits systematically modulate gaze behavior in various real-world (working) situations. Nevertheless, previous methods, which required controlled situations and large numbers of participants, have failed to support real-world personality-modulation analysis. One cause of this research gap is that the effects of individual attributes (e.g., the accumulated attributes of age, gender, and degree of measurement noise) and of personality traits are mixed in gaze data. Previous studies may have used larger sample sizes to average out the possible concentration of specific individual attributes in some personality traits, and may have imposed controlled situations to prevent unexpected interactions between these possibly biased individual attributes and complex, realistic situations. Therefore, we generated and analyzed real-world gaze behavior in which the effects of personality traits are separated from individual attributes. In Experiment 1, we provided a methodology for generating such sensor data on head and eye movements from a small sample of participants who performed realistic nonsocial (data-entry) and social (conversation) work tasks (the first contribution). In Experiment 2, we evaluated the effectiveness of the generated gaze behavior for real-world personality-modulation analysis. We showed how openness systematically modulates the autocorrelation coefficients of the sensor data, which reflect the periodicity of head and eye movements, in the data-entry and conversation tasks (the second contribution). We found different openness modulations of the autocorrelation coefficients in the generated sensor data of the two tasks. These modulations could not be detected in real sensor data because of contamination by individual attributes. In conclusion, our method is a potentially powerful tool for understanding the theoretically expected, systematic, situation-specific personality modulation of real-world gaze behavior.
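The gaze feature at the center of Experiment 2 is easy to state: the lag-k autocorrelation of a head or eye sensor trace, which indexes how periodic the movement is. A minimal Python sketch on a synthetic head-yaw signal; the lag values and noise level are illustrative, not taken from the study:

```python
# Sketch of the lag-k autocorrelation coefficient of a head/eye sensor trace.
import numpy as np

def autocorr(x, lag):
    """Lag-k autocorrelation coefficient of a 1-d signal."""
    x = np.asarray(x, dtype=float) - np.mean(x)   # remove the mean first
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

t = np.arange(500)
head_yaw = np.sin(2 * np.pi * t / 50) + 0.3 * np.random.default_rng(0).standard_normal(500)
print(round(autocorr(head_yaw, lag=50), 3))   # high: the trace repeats every 50 samples
print(round(autocorr(head_yaw, lag=25), 3))   # negative: half-period lag is anti-phase
```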
Affiliation(s)
- Jumpei Yamashita: NTT Access Network Service Systems Laboratories, Nippon Telegraph and Telephone Corporation, Tokyo, Japan; Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Yoshiaki Takimoto: NTT Human Informatics Laboratories, Nippon Telegraph and Telephone Corporation, Kanagawa, Japan
- Haruo Oishi: NTT Access Network Service Systems Laboratories, Nippon Telegraph and Telephone Corporation, Tokyo, Japan
9. Hooge ITC, Niehorster DC, Nyström M, Hessels RS. Large eye-head gaze shifts measured with a wearable eye tracker and an industrial camera. Behav Res Methods 2024. PMID: 38200239; DOI: 10.3758/s13428-023-02316-w.
Abstract
We built a novel setup to record large gaze shifts (up to 140°). The setup consists of a wearable eye tracker and a high-speed camera with fiducial marker technology to track the head. We tested our setup by replicating findings from the classic eye-head gaze shift literature. We conclude that our new, inexpensive setup is good enough to investigate the dynamics of large eye-head gaze shifts. This novel setup could be used for future research on large eye-head gaze shifts, but also for research on gaze during, e.g., human interaction. We further discuss reference frames and terminology in head-free eye tracking. Despite a transition from head-fixed eye tracking to head-free gaze tracking, researchers still use head-fixed eye movement terminology when discussing world-fixed gaze phenomena. We propose more specific terminology for world-fixed phenomena, including gaze fixation, gaze pursuit, and gaze saccade.
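The terminology point rests on simple reference-frame bookkeeping: world-fixed gaze is the composition of head-in-world orientation (here measured via fiducial markers) and eye-in-head direction (from the wearable tracker). A Python sketch reduced to horizontal (yaw) angles, with illustrative numbers:

```python
# Sketch of reference-frame composition for head-free gaze (yaw angles only).
def gaze_in_world(head_yaw_deg, eye_in_head_deg):
    """World-fixed gaze azimuth = head-in-world azimuth + eye-in-head azimuth."""
    return head_yaw_deg + eye_in_head_deg

# A 140-degree gaze shift typically combines a large head movement with a
# smaller eye-in-head rotation (values illustrative).
before = gaze_in_world(head_yaw_deg=0.0, eye_in_head_deg=-10.0)
after = gaze_in_world(head_yaw_deg=110.0, eye_in_head_deg=20.0)
print(after - before)  # 140.0: the world-fixed ("gaze saccade") amplitude
```

This is why head-fixed terms like "saccade" become ambiguous once the head is free: the eye-in-head and gaze-in-world amplitudes of the same movement differ.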
Affiliation(s)
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster: Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Marcus Nyström: Lund University Humanities Lab, Lund University, Lund, Sweden
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
10. Marshall TR, Ruesseler M, Hunt LT, O'Reilly JX. The representation of priors and decisions in the human parietal cortex. PLoS Biol 2024; 22:e3002383. PMID: 38285671; PMCID: PMC10824454; DOI: 10.1371/journal.pbio.3002383.
Abstract
Animals actively sample their environment through orienting actions such as saccadic eye movements. Saccadic targets are selected based both on sensory evidence immediately preceding the saccade and on a "salience map" or prior built up over multiple saccades. In the primate cortex, the selection of each individual saccade depends on competition between target-selective cells that ramp up their firing rate to saccade release. However, it is less clear how a cross-saccade prior might be implemented, either in neural firing or through an activity-silent mechanism such as modification of synaptic weights on sensory inputs. Here, we present evidence from magnetoencephalography for two distinct processes underlying the selection of the current saccade and the representation of the prior in human parietal cortex. While the classic ramping decision process for each saccade was reflected in neural firing rates (measured in the event-related field), a prior built up over multiple saccades was implemented via modulation of the gain on sensory inputs from the preferred target, as evidenced by rapid frequency tagging. A cascade of computations over time (initial representation of the prior, followed by evidence accumulation and then an integration of prior and evidence) provides a mechanism by which a salience map may be built up across saccades in parietal cortex. It also provides insight into the apparent contradiction that inactivation of parietal cortex has been shown not to affect performance on single trials, despite the presence of clear evidence accumulation signals in this region.
Affiliation(s)
- Tom R. Marshall: Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham, United Kingdom; Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, Oxford University, Oxford, United Kingdom
- Maria Ruesseler: Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, Oxford University, Oxford, United Kingdom
- Laurence T. Hunt: Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, Oxford University, Oxford, United Kingdom
- Jill X. O'Reilly: Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, Oxford University, Oxford, United Kingdom; Wellcome Centre for Integrative Neuroimaging, Nuffield Department for Clinical Neurosciences, Oxford University, Oxford, United Kingdom
11. Stone SA, Boser QA, Dawson TR, Vette AH, Hebert JS, Pilarski PM, Chapman CS. Generating accurate 3D gaze vectors using synchronized eye tracking and motion capture. Behav Res Methods 2024; 56:18-31. PMID: 36085543; DOI: 10.3758/s13428-022-01958-6.
Abstract
Assessing gaze behavior during real-world tasks is difficult: dynamic bodies moving through dynamic worlds complicate gaze analysis, and current approaches involve laborious coding of pupil positions. In settings where motion capture and mobile eye tracking are used concurrently in naturalistic tasks, it is critical that data collection be simple, efficient, and systematic. One solution is to combine eye tracking with motion capture to generate 3D gaze vectors; when combined with tracked or known object locations, 3D gaze vector generation can be automated. Here we use combined eye and motion capture and explore how linear regression models generate accurate 3D gaze vectors. We compared the spatial accuracy of models derived from four short calibration routines across three tasks: the calibration routines themselves, a validation task requiring short fixations on task-relevant locations, and a naturalistic object interaction task that bridges the gap between laboratory and "in the wild" studies. Further, we generated and compared models using spherical and Cartesian coordinate systems and using monocular (left or right) or binocular pupil data. All calibration routines performed similarly, with the best performance (i.e., sub-centimeter errors) coming from the naturalistic task trials when the participant was looking at an object in front of them. We found that spherical coordinate systems generate the most accurate gaze vectors, with no differences in accuracy between monocular and binocular data. Overall, we recommend 1-min calibration routines using binocular pupil data combined with a spherical world coordinate system to produce the highest-quality gaze vectors.
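The recommended pipeline can be sketched as a least-squares fit from pupil coordinates to gaze angles in spherical coordinates, followed by conversion to a 3D unit gaze vector. The Python sketch below uses synthetic calibration data; the feature layout (binocular pupil x/y plus an intercept) and the angle conventions are assumptions for illustration, not the authors' exact model:

```python
# Sketch: linear calibration from binocular pupil positions to a 3D gaze vector.
import numpy as np

rng = np.random.default_rng(0)
pupils = rng.random((30, 4))                     # left+right pupil (x, y) per sample
true_W = rng.standard_normal((4, 2))
angles = pupils @ true_W + 0.01 * rng.standard_normal((30, 2))  # (azimuth, elevation), rad

# Least-squares calibration with an intercept term.
X = np.hstack([pupils, np.ones((30, 1))])
W, *_ = np.linalg.lstsq(X, angles, rcond=None)

def gaze_vector(pupil_sample):
    """Map one pupil sample to a unit gaze direction in head coordinates."""
    az, el = np.append(pupil_sample, 1.0) @ W
    # Spherical -> Cartesian unit vector.
    return np.array([np.cos(el) * np.sin(az), np.sin(el), np.cos(el) * np.cos(az)])

print(gaze_vector(pupils[0]))
```

With the head pose from motion capture, such a head-frame vector can then be rotated into the world frame and intersected with tracked object locations.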
Affiliation(s)
- Scott A Stone: Department of Psychology, University of Alberta, Edmonton, Alberta, Canada; Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
- Quinn A Boser: Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- T Riley Dawson: Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Albert H Vette: Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta, Canada
- Jacqueline S Hebert: Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Patrick M Pilarski: Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Craig S Chapman: Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada; Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, Alberta, Canada
12. Hooge ITC, Niehorster DC, Hessels RS, Benjamins JS, Nyström M. How robust are wearable eye trackers to slow and fast head and body movements? Behav Res Methods 2023; 55:4128-4142. PMID: 36326998; PMCID: PMC10700439; DOI: 10.3758/s13428-022-02010-3.
Abstract
How well can modern wearable eye trackers cope with head and body movement? To investigate this question, we asked four participants to stand still, walk, skip, and jump while fixating a static physical target in space. We did this for six different eye trackers. All the eye trackers were capable of recording gaze during the most dynamic episodes (skipping and jumping). The accuracy became worse as movement got wilder. During skipping and jumping, the biggest error was 5.8°. However, most errors were smaller than 3°. We discuss the implications of decreased accuracy in the context of different research scenarios.
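Accuracy in this setting is an angular error: the angle between the reported gaze direction and the true direction from the eye to the fixation target. A minimal Python sketch with illustrative direction vectors:

```python
# Sketch of angular accuracy: angle between reported gaze and true target direction.
import numpy as np

def angular_error_deg(gaze_dir, target_dir):
    g = np.asarray(gaze_dir) / np.linalg.norm(gaze_dir)
    t = np.asarray(target_dir) / np.linalg.norm(target_dir)
    # Clip guards against floating-point values just outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(np.dot(g, t), -1.0, 1.0))))

# During vigorous movement the reported gaze may deviate by a few degrees.
print(round(angular_error_deg([0.05, 0.02, 1.0], [0.0, 0.0, 1.0]), 2))
```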
Affiliation(s)
- Ignace T C Hooge: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Diederick C Niehorster: Lund University Humanities Lab and Department of Psychology, Lund University, Lund, Sweden
- Roy S Hessels: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Jeroen S Benjamins: Experimental Psychology, Helmholtz Institute, and Social, Health and Organisational Psychology, Utrecht University, Utrecht, The Netherlands
- Marcus Nyström: Lund University Humanities Lab, Lund University, Lund, Sweden
13. Testoni A, Bernardi R, Ruggeri A. The Efficiency of Question-Asking Strategies in a Real-World Visual Search Task. Cogn Sci 2023; 47:e13396. PMID: 38142430; DOI: 10.1111/cogs.13396.
Abstract
In recent years, a multitude of datasets of human-human conversations has been released for the main purpose of training conversational agents based on data-hungry artificial neural networks. In this paper, we argue that datasets of this sort represent a useful and underexplored source to validate, complement, and enhance cognitive studies on human behavior and language use. We present a method that leverages the recent development of powerful computational models to obtain the fine-grained annotation required to apply metrics and techniques from Cognitive Science to large datasets. Previous work in Cognitive Science has investigated the question-asking strategies of human participants by employing different variants of the so-called 20-question-game setting and proposing several evaluation methods. In our work, we focus on GuessWhat, a task proposed within the Computer Vision and Natural Language Processing communities that is similar in structure to the 20-question-game setting. Crucially, the GuessWhat dataset contains tens of thousands of dialogues based on real-world images, making it a suitable setting to investigate the question-asking strategies of human players on a large scale and in a natural setting. Our results demonstrate the effectiveness of computational tools to automatically code how the hypothesis space changes throughout the dialogue in complex visual scenes. On the one hand, we confirm findings from previous work on smaller and more controlled settings. On the other hand, our analyses allow us to highlight the presence of "uninformative" questions (in terms of Expected Information Gain) at specific rounds of the dialogue. We hypothesize that these questions fulfill pragmatic constraints that are exploited by human players to solve visual tasks in complex scenes successfully. Our work illustrates a method that brings together efforts and findings from different disciplines to gain a better understanding of human question-asking strategies on large-scale datasets, while at the same time posing new questions about the development of conversational systems.
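Question informativeness in this literature is typically scored by Expected Information Gain (EIG): the expected reduction in entropy over the space of candidate targets produced by a question's answer. The Python sketch below computes EIG for yes/no questions over a toy hypothesis space; the objects and the uniform prior are illustrative, not taken from the GuessWhat dataset:

```python
# Sketch of Expected Information Gain for yes/no questions over candidate targets.
import math

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def eig_binary(prior, yes_set):
    """EIG of a question answered 'yes' exactly for the hypotheses in yes_set."""
    p_yes = sum(prior[h] for h in yes_set)
    if p_yes in (0.0, 1.0):
        return 0.0                                   # answer already known: 0 bits
    post_yes = [prior[h] / p_yes for h in yes_set]
    post_no = [prior[h] / (1 - p_yes) for h in prior if h not in yes_set]
    expected_post = p_yes * entropy(post_yes) + (1 - p_yes) * entropy(post_no)
    return entropy(list(prior.values())) - expected_post

prior = {"red car": 0.25, "blue car": 0.25, "red bike": 0.25, "dog": 0.25}
print(eig_binary(prior, {"red car", "blue car"}))    # "Is it a car?" -> 1.0 bit
print(eig_binary(prior, set()))                      # uninformative question -> 0.0 bits
```

Questions with near-zero EIG at specific dialogue rounds are exactly the "uninformative" questions the analysis above flags as potentially serving pragmatic rather than information-seeking goals.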
Affiliation(s)
- Alberto Testoni: Institute for Logic, Language and Computation (ILLC), University of Amsterdam
- Raffaella Bernardi: Center for Mind/Brain Sciences (CIMeC), University of Trento; Department of Information Engineering and Computer Science (DISI), University of Trento
- Azzurra Ruggeri: MPRG iSearch, Max Planck Institute for Human Development, Berlin; School of Social Sciences and Technology, Technical University Munich; Department of Cognitive Science, Central European University
14. Solbach MD, Tsotsos JK. The psychophysics of human three-dimensional active visuospatial problem-solving. Sci Rep 2023; 13:19967. PMID: 37968501; PMCID: PMC10651907; DOI: 10.1038/s41598-023-47188-4.
Abstract
Our understanding of how visual systems detect, analyze, and interpret visual stimuli has advanced greatly. However, the visual systems of all animals do much more; they enable visual behaviors. How well the visual system performs while interacting with the visual environment, and how vision is used in the real world, are far from fully understood, especially in humans. It has been suggested that comparison is the most primitive of psychophysical tasks. Thus, as a probe into these active visual behaviors, we use a same-different task: are two physical 3D objects visually the same? This task is a fundamental cognitive ability. We pose this question to human subjects who are free to move about and examine two real objects in a physical 3D space. The experimental design is such that all behaviors are directed toward viewpoint change. Without any training, our participants achieved a mean accuracy of 93.82%. No learning effect was observed on accuracy over many trials, but some effect was seen for response time, number of fixations, and extent of head movement. Our probe task, even though easily executed at high performance levels, uncovered a surprising variety of complex strategies for viewpoint control, suggesting that solutions were developed dynamically and deployed in a seemingly directed hypothesize-and-test manner tailored to the specific task. Subjects did not need to acquire task-specific knowledge; instead, they formulated effective solutions from the outset and progressively refined them over a series of attempts, becoming more efficient without compromising accuracy.
Affiliation(s)
- Markus D Solbach: Department of Electrical Engineering and Computer Science, York University, Toronto, ON, M3J 1P3, Canada
- John K Tsotsos: Department of Electrical Engineering and Computer Science, York University, Toronto, ON, M3J 1P3, Canada
15. Fooken J, Baltaretu BR, Barany DA, Diaz G, Semrau JA, Singh T, Crawford JD. Perceptual-Cognitive Integration for Goal-Directed Action in Naturalistic Environments. J Neurosci 2023; 43:7511-7522. PMID: 37940592; PMCID: PMC10634571; DOI: 10.1523/jneurosci.1373-23.2023.
Abstract
Real-world actions require one to simultaneously perceive, think, and act on the surrounding world, requiring the integration of (bottom-up) sensory information and (top-down) cognitive and motor signals. Studying these processes involves the intellectual challenge of cutting across traditional neuroscience silos, and the technical challenge of recording data in uncontrolled natural environments. However, recent advances in techniques, such as neuroimaging, virtual reality, and motion tracking, allow one to address these issues in naturalistic environments for both healthy participants and clinical populations. In this review, we survey six topics in which naturalistic approaches have advanced both our fundamental understanding of brain function and how neurologic deficits influence goal-directed, coordinated action in naturalistic environments. The first part conveys fundamental neuroscience mechanisms related to visuospatial coding for action, adaptive eye-hand coordination, and visuomotor integration for manual interception. The second part discusses applications of such knowledge to neurologic deficits, specifically, steering in the presence of cortical blindness, impact of stroke on visual-proprioceptive integration, and impact of visual search and working memory deficits. This translational approach-extending knowledge from lab to rehab-provides new insights into the complex interplay between perceptual, motor, and cognitive control in naturalistic tasks that are relevant for both basic and clinical research.
Affiliation(s)
- Jolande Fooken: Centre for Neuroscience, Queen's University, Kingston, Ontario K7L3N6, Canada
- Bianca R Baltaretu: Department of Psychology, Justus Liebig University, Giessen, 35394, Germany
- Deborah A Barany: Department of Kinesiology, University of Georgia, and Augusta University/University of Georgia Medical Partnership, Athens, Georgia 30602
- Gabriel Diaz: Center for Imaging Science, Rochester Institute of Technology, Rochester, New York 14623
- Jennifer A Semrau: Department of Kinesiology and Applied Physiology, University of Delaware, Newark, Delaware 19713
- Tarkeshwar Singh: Department of Kinesiology, Pennsylvania State University, University Park, Pennsylvania 16802
- J Douglas Crawford: Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
16. Kümmerer M, Bethge M. Predicting Visual Fixations. Annu Rev Vis Sci 2023; 9:269-291. PMID: 37419107; DOI: 10.1146/annurev-vision-120822-072528.
Abstract
As we navigate and behave in the world, we are constantly deciding, a few times per second, where to look next. The outcomes of these decisions in response to visual input are comparatively easy to measure as trajectories of eye movements, offering insight into many unconscious and conscious visual and cognitive processes. In this article, we review recent advances in predicting where we look. We focus on evaluating and comparing models: How can we consistently measure how well models predict eye movements, and how can we judge the contribution of different mechanisms? Probabilistic models facilitate a unified approach to fixation prediction that allows us to use information gain to compare different models across different settings, such as static and video saliency, as well as scanpath prediction. We review how the large variety of saliency maps and scanpath models can be translated into this unifying framework, how much different factors contribute, and how we can select the most informative examples for model comparison. We conclude that the universal scale of information gain offers a powerful tool for the inspection of candidate mechanisms and experimental design that helps us understand the continual decision-making process that determines where we look.
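For probabilistic fixation models, information gain can be read as the average log-likelihood advantage (in bits per fixation) of the model's predicted fixation density over a baseline such as a uniform map. A minimal Python sketch with a synthetic density and synthetic fixations:

```python
# Sketch of information gain: bits/fixation of a predicted density over uniform.
import numpy as np

def information_gain(model_density, fixations):
    """Average log2 likelihood ratio of the model vs. a uniform baseline."""
    density = model_density / model_density.sum()       # normalize to a probability map
    uniform = 1.0 / density.size
    logliks = [np.log2(density[y, x] / uniform) for x, y in fixations]
    return float(np.mean(logliks))

density = np.ones((60, 80))
density[20:40, 30:50] = 8.0                 # model predicts a salient central region
fixations = [(35, 25), (40, 30), (45, 35)]  # (x, y) fixations landing inside it
print(round(information_gain(density, fixations), 3))   # positive: better than uniform
```

Because the score is in a universal unit (bits), saliency maps and scanpath models can be compared on the same scale once both are expressed as fixation densities.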
Affiliation(s)
- Matthias Bethge: Tübingen AI Center, University of Tübingen, Tübingen, Germany
17. de la Malla C, Goettker A. The effect of impaired velocity signals on goal-directed eye and hand movements. Sci Rep 2023; 13:13646. PMID: 37607970; PMCID: PMC10444871; DOI: 10.1038/s41598-023-40394-0.
Abstract
Information about position and velocity is essential to predict where moving targets will be in the future and to accurately move toward them. But how are the two signals combined over time to complete goal-directed movements? We show that when velocity information is impaired by the use of second-order motion stimuli, saccades directed toward moving targets land at positions where the targets were ~100 ms before saccade initiation, whereas hand movements remain accurate. Importantly, the longer latencies of hand movements allow additional time to process the available sensory information. When the moving target is visible for longer before the saccade is made, saccades become accurate. In line with this, hand movements with short latencies show higher curvature, indicating corrections based on an update of incoming sensory information. These results suggest that movements are controlled by an independent and evolving combination of sensory information about the target's position and velocity.
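The ~100 ms lag has a simple arithmetic reading: for a target moving at constant velocity, a saccade programmed from stale position information lands where the target was 100 ms earlier, while an accurate response extrapolates position with velocity. A worked Python sketch with illustrative numbers (10 deg/s target speed, 300 ms saccade onset):

```python
# Worked example of the landing-position lag for a constant-velocity target.
def target_position(t_ms, speed_deg_per_s=10.0):
    """Rightward motion at constant speed; position in degrees at time t."""
    return speed_deg_per_s * t_ms / 1000.0

onset = 300.0                                # saccade onset (ms after motion start)
lagged = target_position(onset - 100.0)      # lands at the ~100 ms stale position
accurate = target_position(onset)            # uses up-to-date velocity information
print(lagged, accurate, accurate - lagged)   # 2.0 3.0 1.0 (degrees behind the target)
```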
Affiliation(s)
- Cristina de la Malla: Vision and Control of Action Group, Department of Cognition, Development, and Psychology of Education, Institute of Neurosciences, Universitat de Barcelona, Barcelona, Catalonia, Spain
- Alexander Goettker: Justus Liebig Universität Giessen, Giessen, Germany; Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University, Giessen, Germany
18. Recker L, Poth CH. Test-retest reliability of eye tracking measures in a computerized Trail Making Test. J Vis 2023; 23:15. PMID: 37594452; PMCID: PMC10445213; DOI: 10.1167/jov.23.8.15.
Abstract
The Trail Making Test (TMT) is a frequently applied neuropsychological test that evaluates participants' executive functions based on the time they take to connect a sequence of numbers (TMT-A) or alternating numbers and letters (TMT-B). Test performance is associated with various cognitive functions ranging from visuomotor speed to working memory capabilities. However, although the test can screen for impaired executive functioning in a variety of neuropsychiatric disorders, it provides little information about which specific cognitive impairments underlie performance detriments. To resolve this lack of specificity, recent cognitive research has combined the TMT with eye tracking so that eye movements can help uncover the reasons for performance impairments. However, using eye-tracking-based test scores to examine differences between persons, and ultimately to apply the scores for diagnostics, presupposes that the reliability of the scores is established. We therefore investigated the test-retest reliabilities of scores in an eye-tracking version of the TMT recently introduced by Recker et al. (2022). We examined two healthy samples performing an initial test and then a retest 3 days (n = 31) or 10 to 30 days (n = 34) later. Results reveal that, although the reliabilities of classic completion times were overall good and comparable with earlier versions, the reliabilities of eye-tracking-based scores ranged from excellent (e.g., durations of fixations) to poor (e.g., number of fixations guiding manual responses). These findings indicate that some eye-tracking measures offer a strong basis for assessing interindividual differences beyond classic behavioral measures when examining information accumulation, but are less suitable for diagnosing differences in eye-hand coordination.
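Test-retest reliability amounts to asking how strongly scores from the two sessions correlate across participants. The Python sketch below simulates a stable trait measured twice with noise and uses a plain Pearson correlation as the reliability estimate; the study reports dedicated reliability coefficients, so this is only a stand-in illustration:

```python
# Sketch of a test-retest reliability estimate via session-to-session correlation.
import numpy as np

def test_retest_r(session1, session2):
    return float(np.corrcoef(session1, session2)[0, 1])

rng = np.random.default_rng(0)
trait = rng.normal(250, 40, 31)          # stable "true" fixation duration (ms), n = 31
s1 = trait + rng.normal(0, 10, 31)       # session 1 = trait + measurement noise
s2 = trait + rng.normal(0, 10, 31)       # session 2, days later
print(round(test_retest_r(s1, s2), 2))   # high r -> the score ranks people consistently
```

Measures dominated by trait variance (like fixation durations above) yield high correlations; measures dominated by session-specific noise yield the poor reliabilities the study reports.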
Affiliation(s)
- Lukas Recker: Neuro-Cognitive Psychology and Center for Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany (https://orcid.org/0000-0001-8465-9643; https://www.uni-bielefeld.de/fakultaeten/psychologie/abteilung/arbeitseinheiten/01/people/scientificstaff/recker/)
- Christian H Poth: Neuro-Cognitive Psychology and Center for Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany (https://orcid.org/0000-0003-1621-4911)
19. Baltaretu BR, Stevens WD, Freud E, Crawford JD. Occipital and parietal cortex participate in a cortical network for transsaccadic discrimination of object shape and orientation. Sci Rep 2023; 13:11628. PMID: 37468709; DOI: 10.1038/s41598-023-38554-3.
Abstract
Saccades change eye position and interrupt vision several times per second, necessitating neural mechanisms for continuous perception of object identity, orientation, and location. Neuroimaging studies suggest that occipital and parietal cortex play complementary roles in transsaccadic perception of intrinsic versus extrinsic spatial properties; e.g., dorsomedial occipital cortex (cuneus) is sensitive to changes in spatial frequency, whereas the supramarginal gyrus (SMG) is modulated by changes in object orientation. Based on this, we hypothesized that both structures would be recruited to simultaneously monitor object identity and orientation across saccades. To test this, we merged two previous neuroimaging protocols: 21 participants viewed a 2D object and then, after sustained fixation or a saccade, judged whether the shape or orientation of the re-presented object had changed. We then performed a bilateral region-of-interest analysis on the identified cuneus and SMG sites. As hypothesized, cuneus showed both saccade and feature (i.e., object orientation vs. shape change) modulations, and right SMG showed saccade-feature interactions. Further, the cuneus activity time course correlated with that of several other cortical saccade/visual areas, suggesting a 'functional network' for feature discrimination. These results confirm the involvement of occipital/parietal cortex in transsaccadic vision and support complementary roles in spatial versus identity updating.
Affiliation(s)
- B R Baltaretu: Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada; Department of Biology, York University, Toronto, ON, M3J 1P3, Canada; Department of Psychology, Justus-Liebig University Giessen, Otto-Behaghel-Strasse 10F, 35394, Giessen, Hesse, Germany
- W Dale Stevens: Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada; Department of Psychology and Neuroscience Graduate Diploma Program, York University, Toronto, ON, M3J 1P3, Canada
- E Freud: Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada; Department of Psychology and Neuroscience Graduate Diploma Program, York University, Toronto, ON, M3J 1P3, Canada
- J D Crawford: Centre for Vision Research and Vision: Science to Applications (VISTA) Program, York University, Toronto, ON, M3J 1P3, Canada; Department of Biology, York University, Toronto, ON, M3J 1P3, Canada; Department of Psychology and Neuroscience Graduate Diploma Program, York University, Toronto, ON, M3J 1P3, Canada; School of Kinesiology and Health Sciences, York University, Toronto, ON, M3J 1P3, Canada
20. Troncoso A, Soto V, Gomila A, Martínez-Pernía D. Moving beyond the lab: investigating empathy through the Empirical 5E approach. Front Psychol 2023; 14:1119469. PMID: 37519389; PMCID: PMC10374225; DOI: 10.3389/fpsyg.2023.1119469.
Abstract
Empathy is a complex and multifaceted phenomenon that plays a crucial role in human social interactions. Recent developments in social neuroscience have provided valuable insights into the neural underpinnings and bodily mechanisms underlying empathy. This methodology often prioritizes precision, replicability, internal validity, and confound control. However, fully understanding the complexity of empathy seems unattainable by relying solely on artificial and controlled laboratory settings while overlooking a comprehensive view of empathy through an ecological experimental approach. In this article, we propose an integrative theoretical and methodological framework based on the 5E approach (the "E"s stand for the embodied, embedded, enacted, emotional, and extended perspectives of empathy), highlighting the relevance of studying empathy as an active interaction between embodied agents embedded in a shared real-world environment. In addition, we illustrate how a novel multimodal approach, including mobile brain/body imaging (MoBI) combined with phenomenological methods and the implementation of interactive paradigms in natural contexts, is an adequate procedure for studying empathy from the 5E perspective. In doing so, we present the Empirical 5E approach (E5E) as an integrative scientific framework that bridges brain/body and phenomenological attributes in an interbody interactive setting. Progressing toward an E5E approach can be crucial for understanding empathy in accordance with the complexity of how it is experienced in the real world.
Affiliation(s)
- Alejandro Troncoso: Center for Social and Cognitive Neuroscience, School of Psychology, Adolfo Ibáñez University, Santiago, Chile
- Vicente Soto: Center for Social and Cognitive Neuroscience, School of Psychology, Adolfo Ibáñez University, Santiago, Chile
- Antoni Gomila: Department of Psychology, University of the Balearic Islands, Palma de Mallorca, Spain
- David Martínez-Pernía: Center for Social and Cognitive Neuroscience, School of Psychology, Adolfo Ibáñez University, Santiago, Chile
21. Talley J, Pusdekar S, Feltenberger A, Ketner N, Evers J, Liu M, Gosh A, Palmer SE, Wardill TJ, Gonzalez-Bellido PT. Predictive saccades and decision making in the beetle-predating saffron robber fly. Curr Biol 2023:S0960-9822(23)00770-4. PMID: 37379842; DOI: 10.1016/j.cub.2023.06.019.
Abstract
Internal predictions about the sensory consequences of self-motion, encoded by corollary discharge, are ubiquitous in the animal kingdom, including for fruit flies, dragonflies, and humans. In contrast, predicting the future location of an independently moving external target requires an internal model. With the use of internal models for predictive gaze control, vertebrate predatory species compensate for their sluggish visual systems and long sensorimotor latencies. This ability is crucial for the timely and accurate decisions that underpin a successful attack. Here, we directly demonstrate that the robber fly Laphria saffrana, a specialized beetle predator, also uses predictive gaze control when head tracking potential prey. Laphria uses this predictive ability to perform the difficult categorization and perceptual decision task of differentiating a beetle from other flying insects with a low spatial resolution retina. Specifically, we show that (1) this predictive behavior is part of a saccade-and-fixate strategy, (2) the relative target angular position and velocity, acquired during fixation, inform the subsequent predictive saccade, and (3) the predictive saccade provides Laphria with additional fixation time to sample the frequency of the prey's specular wing reflections. We also demonstrate that Laphria uses such wing reflections as a proxy for the wingbeat frequency of the potential prey and that consecutively flashing LEDs to produce apparent motion elicits attacks when the LED flicker frequency matches that of the beetle's wingbeat cycle.
Affiliation(s)
- Jennifer Talley: Air Force Research Laboratory, Munitions Directorate, Eglin AFB, FL 32542, USA
- Siddhant Pusdekar: Department of Ecology, Evolution and Behavior, University of Minnesota, Saint Paul, MN 55108, USA
- Aaron Feltenberger: Air Force Research Laboratory, Munitions Directorate, Eglin AFB, FL 32542, USA
- Natalie Ketner: Air Force Research Laboratory, Munitions Directorate, Eglin AFB, FL 32542, USA
- Johnny Evers: Air Force Research Laboratory, Munitions Directorate, Eglin AFB, FL 32542, USA
- Molly Liu: Department of Ecology, Evolution and Behavior, University of Minnesota, Saint Paul, MN 55108, USA
- Atishya Gosh: Department of Ecology, Evolution and Behavior, University of Minnesota, Saint Paul, MN 55108, USA; Department of Biomedical Informatics and Computational Biology, University of Minnesota, Minneapolis, MN 55455, USA
- Stephanie E Palmer: Department of Organismal Biology and Anatomy, The University of Chicago, Chicago, IL 60637, USA
- Trevor J Wardill: Department of Ecology, Evolution and Behavior, University of Minnesota, Saint Paul, MN 55108, USA; Department of Biomedical Informatics and Computational Biology, University of Minnesota, Minneapolis, MN 55455, USA
- Paloma T Gonzalez-Bellido: Department of Ecology, Evolution and Behavior, University of Minnesota, Saint Paul, MN 55108, USA; Department of Biomedical Informatics and Computational Biology, University of Minnesota, Minneapolis, MN 55455, USA
22. Ahmad Rudin AM, Abd Rahman NH, Rosli SA, Asrullah M. Effect of Contrast Polarity Towards Eye Fixation Rates When Reading On Smartphone. Environment-Behaviour Proceedings Journal 2023; 8:347-353. DOI: 10.21834/ebpj.v8i24.4680.
Abstract
This study investigated the effect of contrast polarity on eye fixation patterns when reading text on a smartphone in bright and dark conditions, approximating real-life smartphone reading. The number of fixations and the duration of fixations showed no statistically significant differences between polarities (p = 0.160 and p = 0.099, respectively). However, emmetropic subjects showed higher values in bright conditions than myopic subjects (p = 0.046). This suggests that eye movement efficiency in emmetropes is superior, possibly due to lower spherical aberration as pupil size decreases under bright illumination.
23. Mineiro J, Buckingham G. O hand, where art thou? Mapping hand location across the visual field during common activities. Exp Brain Res 2023; 241:1227-1239. PMID: 36961553; PMCID: PMC10130124; DOI: 10.1007/s00221-023-06597-7.
Abstract
Humans employ visually-guided actions during a myriad of daily activities. These ubiquitous but precise manual actions rely on synergistic work between eye and hand movements. During this close cooperation between hands and eyes, the hands persist in sight in a way which is unevenly distributed across our visual field. One common assertion is that most hand actions occur in the lower visual field (LVF) because the arms are anatomically lower than the head, and objects typically rest on waist-high table surfaces. While experimental work has shown that humans are more efficient at reaching for and grasping targets located below their visual midline (Goodale and Danckert, Exp Brain Res 137:303-308, 2001), there is almost no empirical data detailing where the hands lie in the visual fields during natural hand actions. To build a comprehensive picture of hand location during natural visually guided manual actions, we analyzed data from a large-scale open-access dataset containing 100 h of non-scripted manual object interactions during domestic kitchen tasks filmed from a head-mounted camera. We found a clear vertical visual asymmetry with hands located in the lower visual scene (LVS) in more than 70% of image frames, particularly in ipsilateral space. These findings provide the first direct evidence for the established assumption that hands spend more time in the lower than in the upper visual field (UVF). Further work is required to determine whether this LVF asymmetry differs across the lifespan, in different professions, and in clinical populations.
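As an illustration of the frame-counting analysis described above, here is a minimal sketch assuming per-frame hand detections are already available as normalized vertical centroids (all names and values are hypothetical, not the authors' pipeline):

```python
import numpy as np

def lower_visual_scene_fraction(hand_y, midline=0.5):
    """Fraction of frames whose hand centroid falls below the vertical
    midline of the image (image y grows downward, so y > midline = lower)."""
    return float(np.mean(np.asarray(hand_y) > midline))

# Toy example: normalized vertical hand positions for 100 frames.
rng = np.random.default_rng(0)
hand_y = rng.beta(5, 2, size=100)   # skewed toward the bottom of the frame
print(f"LVS fraction: {lower_visual_scene_fraction(hand_y):.2f}")
```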
Affiliation(s)
- Joao Mineiro
- Department of Public Health and Sports Sciences, University of Exeter, Exeter, UK.
| | - Gavin Buckingham
- Department of Public Health and Sports Sciences, University of Exeter, Exeter, UK
| |
|
24
|
Torricelli F, Tomassini A, Pezzulo G, Pozzo T, Fadiga L, D'Ausilio A. Motor invariants in action execution and perception. Phys Life Rev 2023; 44:13-47. [PMID: 36462345 DOI: 10.1016/j.plrev.2022.11.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Accepted: 11/21/2022] [Indexed: 11/27/2022]
Abstract
The nervous system is sensitive to statistical regularities of the external world and forms internal models of these regularities to predict environmental dynamics. Given the inherently social nature of human behavior, being capable of building reliable predictive models of others' actions may be essential for successful interaction. While social prediction might seem to be a daunting task, the study of human motor control has accumulated ample evidence that our movements follow a series of kinematic invariants, which can be used by observers to reduce their uncertainty during social exchanges. Here, we provide an overview of the most salient regularities that shape biological motion, examine the role of these invariants in recognizing others' actions, and speculate that anchoring socially-relevant perceptual decisions to such kinematic invariants provides a key computational advantage for inferring conspecifics' goals and intentions.
Affiliation(s)
- Francesco Torricelli
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
| | - Alice Tomassini
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
| | - Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Via San Martino della Battaglia 44, 00185 Rome, Italy
| | - Thierry Pozzo
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; INSERM UMR1093-CAPS, UFR des Sciences du Sport, Université Bourgogne Franche-Comté, F-21000, Dijon, France
| | - Luciano Fadiga
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
| | - Alessandro D'Ausilio
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy.
| |
|
25
|
Earl B. Humans, fish, spiders and bees inherited working memory and attention from their last common ancestor. Front Psychol 2023; 13:937712. [PMID: 36814887 PMCID: PMC9939904 DOI: 10.3389/fpsyg.2022.937712] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2022] [Accepted: 11/11/2022] [Indexed: 02/08/2023] Open
Abstract
All brain processes that generate behaviour, apart from reflexes, operate with information that is in an "activated" state. This activated information, which is known as working memory (WM), is generated by the effect of attentional processes on incoming information or information previously stored in short-term or long-term memory (STM or LTM). Information in WM tends to remain the focus of attention; and WM, attention and STM together enable information to be available to mental processes and the behaviours that follow on from them. WM and attention underpin all flexible mental processes, such as solving problems, making choices, preparing for opportunities or threats that could be nearby, or simply finding the way home. Neither WM nor attention is necessarily conscious, and both may have evolved long before consciousness. WM and attention, with similar properties, are possessed by humans, archerfish, and other vertebrates; jumping spiders, honey bees, and other arthropods; and members of other clades, whose last common ancestor (LCA) is believed to have lived more than 600 million years ago. It has been reported that very similar genes control the development of vertebrate and arthropod brains, and that these genes were likely inherited from their LCA. Genes that control brain development are conserved because brains generate adaptive behaviour. However, the neural processes that generate behaviour operate with the activated information in WM, so WM and attention must have existed prior to the evolution of brains. It is proposed that WM and attention are widespread amongst animal species because they are phylogenetically conserved mechanisms that are essential to all mental processing, and were inherited from the LCA of vertebrates, arthropods, and some other animal clades.
|
26
|
Zhang Z, Cesanek E, Ingram JN, Flanagan JR, Wolpert DM. Object weight can be rapidly predicted, with low cognitive load, by exploiting learned associations between the weights and locations of objects. J Neurophysiol 2023; 129:285-297. [PMID: 36350057 PMCID: PMC9886355 DOI: 10.1152/jn.00414.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Accepted: 10/30/2022] [Indexed: 11/11/2022] Open
Abstract
Weight prediction is critical for dexterous object manipulation. Previous work has focused on lifting objects presented in isolation and has examined how the visual appearance of an object is used to predict its weight. Here we tested the novel hypothesis that when interacting with multiple objects, as is common in everyday tasks, people exploit the locations of objects to directly predict their weights, bypassing slower and more demanding processing of visual properties to predict weight. Using a three-dimensional robotic and virtual reality system, we developed a task in which participants were presented with a set of objects. In each trial a randomly chosen object translated onto the participant's hand and they had to anticipate the object's weight by generating an equivalent upward force. Across conditions we could control whether the visual appearance and/or location of the objects were informative as to their weight. Using this task, and a set of analogous web-based experiments, we show that when location information was predictive of the objects' weights participants used this information to achieve faster prediction than observed when prediction is based on visual appearance. We suggest that by "caching" associations between locations and weights, the sensorimotor system can speed prediction while also lowering working memory demands involved in predicting weight from object visual properties.NEW & NOTEWORTHY We use a novel object support task using a three-dimensional robotic interface and virtual reality system to provide evidence that the locations of objects are used to predict their weights. Using location information, rather than the visual appearance of the objects, supports fast prediction, thereby avoiding processes that can be demanding on working memory.
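The "caching" idea can be made concrete with a toy sketch: a location lookup is immediate, while predicting weight from visual appearance is modeled as a slower fallback. This illustrates the hypothesis only, not the authors' implementation; the locations, weights, and delay are invented:

```python
import time

# Hypothetical cached location -> weight associations (kg).
location_cache = {"left": 0.2, "middle": 0.5, "right": 0.9}

def weight_from_appearance(visual_features):
    """Stand-in for the slower, memory-demanding visual route."""
    time.sleep(0.05)   # placeholder for slow visual processing
    return sum(visual_features) / len(visual_features)

def predict_weight(location, visual_features):
    if location in location_cache:           # fast path: location is predictive
        return location_cache[location]
    return weight_from_appearance(visual_features)

print(predict_weight("middle", [0.4, 0.6]))  # instant, from the cache
print(predict_weight("top", [0.4, 0.6]))     # slower, from appearance
```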
Affiliation(s)
- Zhaoran Zhang
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York
- Department of Neuroscience, Columbia University, New York, New York
| | - Evan Cesanek
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York
- Department of Neuroscience, Columbia University, New York, New York
| | - James N Ingram
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York
- Department of Neuroscience, Columbia University, New York, New York
| | - J Randall Flanagan
- Department of Psychology and Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
| | - Daniel M Wolpert
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York
- Department of Neuroscience, Columbia University, New York, New York
| |
|
27
|
Aizenman AM, Koulieris GA, Gibaldi A, Sehgal V, Levi DM, Banks MS. The Statistics of Eye Movements and Binocular Disparities during VR Gaming: Implications for Headset Design. ACM TRANSACTIONS ON GRAPHICS 2023; 42:7. [PMID: 37122317 PMCID: PMC10139447 DOI: 10.1145/3549529] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
The human visual system evolved in environments with statistical regularities. Binocular vision is adapted to these regularities such that depth perception and eye movements are more precise, faster, and performed comfortably in environments consistent with them. We measured the statistics of eye movements and binocular disparities in virtual-reality (VR) gaming environments and found that they are quite different from those in the natural environment. Fixation distance and direction are more restricted in VR, and fixation distance is farther. The pattern of disparity across the visual field is less regular in VR and does not conform to a prominent property of naturally occurring disparities. From this we predict that double vision is more likely in VR than in the natural environment. We also determined the optimal screen distance to minimize discomfort due to the vergence-accommodation conflict, and the optimal nasal-temporal positioning of head-mounted display (HMD) screens to maximize binocular field of view. Finally, in a user study we investigated how VR content affects comfort and performance. Content that is more consistent with the statistics of the natural world yields less discomfort than content that is not. Furthermore, consistent content yields slightly better performance than inconsistent content.
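The vergence-accommodation trade-off behind the optimal-screen-distance result can be sketched as below; the uniform fixation-distance sample is a placeholder, whereas the paper derives it from measured gaze statistics during VR gaming:

```python
import numpy as np

def va_conflict(fixation_dist_m, screen_dist_m):
    """Vergence-accommodation conflict in diopters: vergence follows the
    virtual fixation distance while accommodation stays at the HMD focal plane."""
    return np.abs(1.0 / fixation_dist_m - 1.0 / screen_dist_m)

# Placeholder sample of fixation distances (m) during play.
fixations = np.random.default_rng(1).uniform(0.5, 4.0, size=10_000)
candidates = np.linspace(0.3, 3.0, 200)        # candidate focal distances (m)
mean_conflict = [va_conflict(fixations, d).mean() for d in candidates]
best = candidates[int(np.argmin(mean_conflict))]
print(f"Focal distance minimizing mean conflict: {best:.2f} m")
```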
|
28
|
Jing M, Kadooka K, Franchak J, Kirkorian HL. The effect of narrative coherence and visual salience on children's and adults' gaze while watching video. J Exp Child Psychol 2023; 226:105562. [PMID: 36257254 DOI: 10.1016/j.jecp.2022.105562] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 09/12/2022] [Accepted: 09/14/2022] [Indexed: 11/05/2022]
Abstract
Low-level visual features (e.g., motion, contrast) predict eye gaze during video viewing. The current study investigated the effect of narrative coherence on the extent to which low-level visual salience predicts eye gaze. Eye movements were recorded as 4-year-olds (n = 20) and adults (n = 20) watched a cohesive versus random sequence of video shots from a 4.5-min full vignette from Sesame Street. Overall, visual salience was a stronger predictor of gaze in adults than in children, especially when viewing a random shot sequence. The impact of narrative coherence on children's gaze was limited to the short period of time surrounding cuts to new video shots. The discussion considers potential direct effects of visual salience as well as incidental effects due to overlap between salient features and semantic content. The findings are also discussed in the context of developing video comprehension.
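One standard way to quantify how well a salience map predicts gaze, close in spirit to the analysis above though not necessarily the authors' exact method, is normalized scanpath saliency (NSS):

```python
import numpy as np

def normalized_scanpath_saliency(salience_map, rows, cols):
    """Mean z-scored salience at fixated pixels; values above 0 indicate
    gaze lands on more salient locations than expected by chance."""
    z = (salience_map - salience_map.mean()) / salience_map.std()
    return float(z[rows, cols].mean())

rng = np.random.default_rng(2)
salience_map = rng.random((480, 640))   # stand-in for a computed salience map
rows = rng.integers(0, 480, size=50)    # fixation pixel coordinates
cols = rng.integers(0, 640, size=50)
print(f"NSS: {normalized_scanpath_saliency(salience_map, rows, cols):.3f}")
```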
Affiliation(s)
- Mengguo Jing
- Department of Human Development and Family Studies, University of Wisconsin-Madison, Madison, WI 53705, USA.
| | - Kellan Kadooka
- Department of Psychology, University of California, Riverside, Riverside, CA 92521, USA
| | - John Franchak
- Department of Psychology, University of California, Riverside, Riverside, CA 92521, USA
| | - Heather L Kirkorian
- Department of Human Development and Family Studies, University of Wisconsin-Madison, Madison, WI 53705, USA
| |
|
29
|
Glennerster A. Understanding 3D vision as a policy network. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210448. [PMID: 36511403 PMCID: PMC9745881 DOI: 10.1098/rstb.2021.0448] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
It is often assumed that the brain builds 3D coordinate frames, in retinal coordinates (with binocular disparity giving the third dimension), head-centred, body-centred and world-centred coordinates. This paper questions that assumption and begins to sketch an alternative based on, essentially, a set of reflexes. A 'policy network' is a term used in reinforcement learning to describe the set of actions that are generated by an agent depending on its current state. This is an untypical starting point for describing 3D vision, but a policy network can serve as a useful representation both for the 3D layout of a scene and the location of the observer within it. It avoids 3D reconstruction of the type used in computer vision but is similar to recent representations for navigation generated through reinforcement learning. A policy network for saccades (pure rotations of the camera/eye) is a logical starting point for understanding (i) an ego-centric representation of space (e.g. Marr's 2½-D sketch; Marr 1982, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information) and (ii) a hierarchical, compositional representation for navigation. The potential neural implementation of policy networks is straightforward; a network with a large range of sensory and task-related inputs such as the cerebellum would be capable of implementing this input/output function. This is not the case for 3D coordinate transformations in the brain: no neurally implementable proposals have yet been put forward that could carry out a transformation of a visual scene from retinal to world-based coordinates. Hence, if the representation underlying 3D vision can be described as a policy network (in which the actions are either saccades or head translations), this would be a significant step towards a neurally plausible model of 3D vision. This article is part of the theme issue 'New approaches to 3D vision'.
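A toy reading of the proposal, assuming a discretized retinal input and a discrete saccade repertoire: the "representation" is nothing more than a mapping from sensory state to action. The weights below are random for illustration; in the paper's framing they would be learned through reinforcement learning:

```python
import numpy as np

rng = np.random.default_rng(3)

N_INPUT = 16 * 16   # flattened toy "retinal" image
N_ACTIONS = 9       # eight saccade directions plus "hold fixation"
W = rng.normal(0.0, 0.1, size=(N_ACTIONS, N_INPUT))  # would be learned by RL

def policy(retinal_image):
    """Map the current sensory state directly to a saccade command,
    with no intermediate 3D reconstruction of the scene."""
    scores = W @ retinal_image.ravel()
    return int(np.argmax(scores))

print("selected saccade action:", policy(rng.random((16, 16))))
```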
Affiliation(s)
- Andrew Glennerster
- School of Psychology and Clinical Language Sciences, University of Reading, RG6 6AL Reading, UK
| |
|
30
|
Nakayama K, Moher J, Song JH. Rethinking Vision and Action. Annu Rev Psychol 2023; 74:59-86. [PMID: 36652303 DOI: 10.1146/annurev-psych-021422-043229] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
Action is an important arbitrator as to whether an individual or a species will survive. Yet, action has not been well integrated into the study of psychology. Action or motor behavior is a field apart. This is traditional science with its need for specialization. The sequence in a typical laboratory experiment of see → decide → act provides the rationale for broad disciplinary categorizations. With renewed interest in action itself, surprising and exciting anomalous findings at odds with this simplified caricature have emerged. They reveal a much more intimate coupling of vision and action, which we describe. In turn, this prompts us to identify and dwell on three pertinent theories deserving of greater notice.
Affiliation(s)
- Ken Nakayama
- Department of Psychology, University of California, Berkeley, California, USA;
| | - Jeff Moher
- Department of Psychology, Connecticut College, New London, Connecticut, USA;
| | - Joo-Hyun Song
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, Rhode Island, USA;
| |
|
31
|
Niederhauser L, Gunser S, Waser M, Mast FW, Caversaccio M, Anschuetz L. Training and proficiency level in endoscopic sinus surgery change residents' eye movements. Sci Rep 2023; 13:79. [PMID: 36596830 PMCID: PMC9810736 DOI: 10.1038/s41598-022-25518-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Accepted: 11/30/2022] [Indexed: 01/04/2023] Open
Abstract
Endoscopic sinus surgery is challenging and requires extensive training for safe and efficient treatment. Eye tracking can provide an objective assessment to measure residents' learning curve. The aim of the current study was to assess residents' fixation duration and other dependent variables over the course of a dedicated training in functional endoscopic sinus surgery (FESS). Sixteen residents performed a FESS training over 18 sessions, split into three surgical steps. Eye movements, in terms of percent fixation on the screen and average fixation duration, were measured, in addition to residents' completion time, cognitive load, and surgical performance. Results indicated improvements in completion time and surgical performance. Cognitive load and average fixation duration showed a significant change within the last step of training. Percent fixation on screen increased within the first step and then stagnated. Results showed that eye movements and cognitive load differed between residents of different proficiency levels. In conclusion, eye tracking is a helpful objective measuring tool in FESS. It provides additional insights into training level and changes with increasing performance. Expert-like gaze was obtained after half of the training sessions, and increased proficiency in FESS was associated with increased fixation duration.
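The two gaze measures reported (percent fixation on screen and average fixation duration) are typically computed from raw gaze samples with a dispersion-based detector; a simplified sketch, with illustrative rather than study-specific thresholds:

```python
import numpy as np

def fixation_durations(x, y, t, disp_thresh=1.0, min_dur=0.1):
    """Simplified dispersion-based (I-DT) detector: grow a window while its
    x+y dispersion stays under disp_thresh; windows lasting at least
    min_dur seconds count as fixations."""
    durations, start = [], 0
    for i in range(2, len(t) + 1):
        disp = (x[start:i].max() - x[start:i].min()
                + y[start:i].max() - y[start:i].min())
        if disp > disp_thresh:                 # sample i-1 broke the window
            if t[i - 2] - t[start] >= min_dur:
                durations.append(t[i - 2] - t[start])
            start = i - 1
    if t[-1] - t[start] >= min_dur:            # close the final window
        durations.append(t[-1] - t[start])
    return durations

rng = np.random.default_rng(4)
t = np.arange(0.0, 10.0, 0.004)                # 250 Hz eye tracker
x = np.cumsum(rng.normal(0.0, 0.05, t.size))   # random-walk gaze trace (deg)
y = np.cumsum(rng.normal(0.0, 0.05, t.size))
durs = fixation_durations(x, y, t)
if durs:
    print(f"{len(durs)} fixations, mean duration {np.mean(durs):.3f} s")
```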
Affiliation(s)
- Laura Niederhauser
- Department of Psychology, University of Bern, Bern, Switzerland
| | - Sandra Gunser
- Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, University Hospital, Freiburgstrasse 18, University of Bern, 3010 Bern, Switzerland
| | - Manuel Waser
- Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, University Hospital, Freiburgstrasse 18, University of Bern, 3010 Bern, Switzerland
| | - Fred W. Mast
- Department of Psychology, University of Bern, Bern, Switzerland
| | - Marco Caversaccio
- Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, University Hospital, Freiburgstrasse 18, University of Bern, 3010 Bern, Switzerland
| | - Lukas Anschuetz
- Department of Otorhinolaryngology, Head and Neck Surgery, Inselspital, University Hospital, Freiburgstrasse 18, University of Bern, 3010 Bern, Switzerland
| |
|
32
|
Bosco A, Sanz Diez P, Filippini M, Fattori P. The influence of action on perception spans different effectors. Front Syst Neurosci 2023; 17:1145643. [PMID: 37205054 PMCID: PMC10185787 DOI: 10.3389/fnsys.2023.1145643] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Accepted: 04/10/2023] [Indexed: 05/21/2023] Open
Abstract
Perception and action are fundamental processes that characterize our life and our possibility to modify the world around us. Several pieces of evidence have shown an intimate and reciprocal interaction between perception and action, leading us to believe that these processes rely on a common set of representations. The present review focuses on one particular aspect of this interaction: the influence of action on perception from a motor effector perspective during two phases, action planning and the phase following execution of the action. The movements performed by eyes, hands, and legs have a different impact on object and space perception; studies that use different approaches and paradigms have formed an interesting general picture that demonstrates the existence of an action effect on perception, before as well as after its execution. Although the mechanisms of this effect are still being debated, different studies have demonstrated that most of the time this effect pragmatically shapes and primes perception of relevant features of the object or environment which calls for action; at other times it improves our perception through motor experience and learning. Finally, a future perspective is provided, in which we suggest that these mechanisms can be exploited to increase trust in artificial intelligence systems that are able to interact with humans.
Affiliation(s)
- Annalisa Bosco
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence (Alma Human AI), University of Bologna, Bologna, Italy
- *Correspondence: Annalisa Bosco
| | - Pablo Sanz Diez
- Carl Zeiss Vision International GmbH, Aalen, Germany
- Institute for Ophthalmic Research, Eberhard Karls University Tüebingen, Tüebingen, Germany
| | - Matteo Filippini
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
- Alma Mater Research Institute for Human-Centered Artificial Intelligence (Alma Human AI), University of Bologna, Bologna, Italy
| |
|
33
|
Forsthofer M, Straka H. Homeostatic plasticity of eye movement performance in Xenopus tadpoles following prolonged visual image motion stimulation. J Neurol 2023; 270:57-70. [PMID: 35947153 PMCID: PMC9813097 DOI: 10.1007/s00415-022-11311-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Revised: 07/08/2022] [Accepted: 07/19/2022] [Indexed: 01/09/2023]
Abstract
Visual image motion-driven ocular motor behaviors such as the optokinetic reflex (OKR) provide sensory feedback for optimizing gaze stability during head/body motion. The performance of this visuo-motor reflex is subject to plastic alterations depending on requirements imposed by specific eco-physiological or developmental circumstances. While visuo-motor plasticity can be experimentally induced by various combinations of motion-related stimuli, the extent to which such evoked behavioral alterations contribute to the behavioral demands of an environment remains often obscure. Here, we used isolated preparations of Xenopus laevis tadpoles to assess the extent and ontogenetic dependency of visuo-motor plasticity during prolonged visual image motion. While a reliable attenuation of large OKR amplitudes can be induced already in young larvae, a robust response magnitude-dependent bidirectional plasticity is present only at older developmental stages. The possibility of older larvae to faithfully enhance small OKR amplitudes coincides with the developmental maturation of inferior olivary-Purkinje cell signal integration. This conclusion was supported by the loss of behavioral plasticity following transection of the climbing fiber pathway and by the immunohistochemical demonstration of a considerable volumetric extension of the Purkinje cell dendritic area between the two tested stages. The bidirectional behavioral alterations with different developmental onsets might functionally serve to standardize the motor output, comparable to the known differential adaptability of vestibulo-ocular reflexes in these animals. This homeostatic plasticity potentially equilibrates the working range of ocular motor behaviors during altered visuo-vestibular conditions or prolonged head/body motion to fine-tune resultant eye movements.
Affiliation(s)
- Michael Forsthofer
- Faculty of Biology, Ludwig-Maximilians-University Munich, Großhaderner Str. 2, 82152 Planegg, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-University Munich, Großhaderner Str. 2, 82152 Planegg, Germany
| | - Hans Straka
- Faculty of Biology, Ludwig-Maximilians-University Munich, Großhaderner Str. 2, 82152 Planegg, Germany.
| |
|
34
|
Bruner E, Battaglia-Mayer A, Caminiti R. The parietal lobe evolution and the emergence of material culture in the human genus. Brain Struct Funct 2023; 228:145-167. [PMID: 35451642 DOI: 10.1007/s00429-022-02487-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Accepted: 03/24/2022] [Indexed: 02/07/2023]
Abstract
Traditional and new disciplines converge in suggesting that the parietal lobe underwent a considerable expansion during human evolution. Through the study of endocasts and shape analysis, paleoneurology has shown an increased globularity of the braincase and bulging of the parietal region in modern humans, as compared to other human species, including Neandertals. Cortical complexity increased in both the superior and inferior parietal lobules. Emerging fields bridging archaeology and neuroscience supply further evidence of the involvement of the parietal cortex in human-specific behaviors related to visuospatial capacity, technological integration, self-awareness, numerosity, mathematical reasoning and language. Here, we complement these inferences on the parietal lobe evolution, with results from more classical neuroscience disciplines, such as behavioral neurophysiology, functional neuroimaging, and brain lesions; and apply these to define the neural substrates and the role of the parietal lobes in the emergence of functions at the core of material culture, such as tool-making, tool use and constructional abilities.
Affiliation(s)
- Emiliano Bruner
- Centro Nacional de Investigación Sobre la Evolución Humana, Burgos, Spain
| | | | - Roberto Caminiti
- Neuroscience and Behavior Laboratory, Istituto Italiano di Tecnologia (IIT), Roma, Italy.
| |
|
35
|
Moskowitz JB, Berger SA, Fooken J, Castelhano MS, Gallivan JP, Flanagan JR. The influence of movement-related costs when searching to act and acting to search. J Neurophysiol 2023; 129:115-130. [PMID: 36475897 DOI: 10.1152/jn.00305.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022] Open
Abstract
Real-world search behavior often involves limb movements, either during search or after search. Here we investigated whether movement-related costs influence search behavior in two kinds of search tasks. In our visual search tasks, participants made saccades to find a target object among distractors and then moved a cursor, controlled by the handle of a robotic manipulandum, to the target. In our manual search tasks, participants moved the cursor to perform the search, placing it onto objects to reveal their identity as either a target or a distractor. In all tasks, there were multiple targets. Across experiments, we manipulated either the effort or time costs associated with movement such that these costs varied across the search space. We varied effort by applying different resistive forces to the handle, and we varied time costs by altering the speed of the cursor. Our analysis of cursor and eye movements during manual and visual search, respectively, showed that effort influenced manual search but did not influence visual search. In contrast, time costs influenced both visual and manual search. Our results demonstrate that, in addition to perceptual and cognitive factors, movement-related costs can also influence search behavior.NEW & NOTEWORTHY Numerous studies have investigated the perceptual and cognitive factors that influence decision making about where to look, or move, in search tasks. However, little is known about how search is influenced by movement-related costs associated with acting on an object once it has been visually located or acting during manual search. In this article, we show that movement time costs can bias visual and manual search and that movement effort costs bias manual search.
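The cost manipulation can be illustrated with a toy decision rule that combines time and effort costs per candidate target; the weights and units are assumptions, not the study's fitted values:

```python
import numpy as np

def choose_target(distances_m, resistive_force_N, speed_m_s, effort_weight=1.0):
    """Pick the target minimizing combined movement costs:
    time cost = distance / cursor speed; effort cost ~ force x distance."""
    d = np.asarray(distances_m, dtype=float)
    total_cost = d / speed_m_s + effort_weight * resistive_force_N * d
    return int(np.argmin(total_cost))

# Three equally valid targets; a resistive force field penalizes far reaches.
print(choose_target([0.10, 0.25, 0.40], resistive_force_N=8.0, speed_m_s=0.3))
```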
Affiliation(s)
- Joshua B Moskowitz
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada
| | - Sarah A Berger
- Department of Psychology, Queen's University, Kingston, Ontario, Canada
| | - Jolande Fooken
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
| | - Monica S Castelhano
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada
| | - Jason P Gallivan
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada
| | - J Randall Flanagan
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada
| |
|
36
|
Kenderla P, Kibbe MM. Explore versus store: Children strategically trade off reliance on exploration versus working memory during a complex task. J Exp Child Psychol 2023; 225:105535. [PMID: 36041236 DOI: 10.1016/j.jecp.2022.105535] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Revised: 08/03/2022] [Accepted: 08/04/2022] [Indexed: 10/15/2022]
Abstract
During complex tasks, we use working memory to actively maintain goal sets and direct attention toward goal-relevant information in the environment. However, working memory is severely limited, and storing information in working memory is cognitively effortful. Previous work by Kibbe and Kowler [2011, Journal of Vision, 11(3), Article 14] showed that adults strategically modulate reliance on working memory during complex, goal-oriented tasks, varying the amount of information they store in working memory depending both on the cognitive demands of the task and on the ease with which task-relevant information can be accessed from the environment. We asked whether children, whose working memory and executive functions are undergoing significant developmental change, also use working memory strategically during complex tasks. Forty-six 8-10-year-old children searched through arrays of hidden objects to find three that belonged to a given category defined over the objects' features. We manipulated the cognitive demands of the task by increasing the complexity of the category. We manipulated the exploration costs of the task by varying the rate at which task-relevant information could be accessed. We measured children's search patterns to gain insights into how the children used working memory during the task. We found that as the cognitive demands of the task increased, children stored less information in working memory, relying more on exploration. When exploration was costlier, children explored less, storing more in working memory. These results suggest that developing children, like adults, make strategic decisions about when to explore versus when to store during a complex, goal-oriented task.
Affiliation(s)
- Praveen Kenderla
- Department of Psychological & Brain Sciences, Boston University, Boston, MA 02215, USA
| | - Melissa M Kibbe
- Department of Psychological & Brain Sciences, Boston University, Boston, MA 02215, USA; Center for Systems Neuroscience, Boston University, Boston, MA 02215, USA.
| |
|
37
|
Cunha F, Gutiérrez-Ibáñez C, Brinkman B, Wylie DR, Iwaniuk AN. The relative sizes of nuclei in the oculomotor complex vary by order and behaviour in birds. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2022; 209:341-360. [PMID: 36522507 DOI: 10.1007/s00359-022-01598-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Revised: 10/26/2022] [Accepted: 11/25/2022] [Indexed: 12/23/2022]
Abstract
Eye movements are a critical component of visually guided behaviours, allowing organisms to scan the environment and bring stimuli of interest to regions of acuity in the retina. Although the control and modulation of eye movements by cranial nerve nuclei are highly conserved across vertebrates, species variation in visually guided behaviour and eye morphology could lead to variation in the size of oculomotor nuclei. Here, we test for differences in the size and neuron numbers of the oculomotor nuclei among birds that vary in behaviour and eye morphology. Using unbiased stereology, we measured the volumes and numbers of neurons of the oculomotor (nIII), trochlear (nIV), abducens (nVI), and Edinger-Westphal (EW) nuclei across 71 bird species and analysed these with phylogeny-informed statistics. Owls had relatively smaller nIII, nIV, nVI and EW nuclei than other birds, which reflects their limited degrees of eye movements. In contrast, nVI was relatively larger in falcons and hawks, likely reflecting how these predatory species must shift focus between the central and temporal foveae during foraging and prey capture. Unexpectedly, songbirds had an enlarged EW and relatively more nVI neurons, which might reflect accommodation and horizontal eye movements. Finally, the one merganser we measured also has an enlarged EW, which is associated with the high accommodative power needed for pursuit diving. Overall, these differences reflect species and clade level variation in behaviour, but more data are needed on eye movements in birds across species to better understand the relationships among behaviour, retinal anatomy, and brain anatomy.
Affiliation(s)
- Felipe Cunha
- Department of Neuroscience, University of Lethbridge, 4401 University Dr W, Lethbridge, AB, T1K 3M4, Canada
| | | | - Benjamin Brinkman
- Department of Neuroscience, University of Lethbridge, 4401 University Dr W, Lethbridge, AB, T1K 3M4, Canada
| | - Douglas R Wylie
- Department of Biological Sciences, University of Alberta, Edmonton, AB, Canada
| | - Andrew N Iwaniuk
- Department of Neuroscience, University of Lethbridge, 4401 University Dr W, Lethbridge, AB, T1K 3M4, Canada.
| |
|
38
|
Lyu J, Maýe A, Görner M, Ruppel P, Engel AK, Zhang J. Coordinating human-robot collaboration by EEG-based human intention prediction and vigilance control. Front Neurorobot 2022; 16:1068274. [DOI: 10.3389/fnbot.2022.1068274] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2022] [Accepted: 11/08/2022] [Indexed: 12/04/2022] Open
Abstract
In human-robot collaboration scenarios with shared workspaces, a highly desired performance boost is offset by strict requirements for human safety, limiting the speed and torque of the robot drives to levels which cannot harm the human body. Especially for complex tasks with flexible human behavior, it becomes vital to maintain safe working distances and coordinate tasks efficiently. An established approach in this regard is reactive servoing in response to the current human pose. However, such an approach does not exploit expectations of the human's behavior and can therefore fail to react to fast human motions in time. To adapt the robot's behavior as soon as possible, predicting human intention early becomes vital but hard to achieve. Here, we employ a recently developed type of brain-computer interface (BCI) which can detect the focus of the human's overt attention as a predictor of impending action. In contrast to other types of BCI, direct projection of stimuli onto the workspace facilitates seamless integration into workflows. Moreover, we demonstrate how the signal-to-noise ratio of the brain response can be used to adjust the velocity of the robot movements to the vigilance or alertness level of the human. Analyzing this adaptive system with respect to performance and safety margins in a physical robot experiment, we found that the proposed method improved both collaboration efficiency and safety distance.
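The SNR-to-velocity coupling the authors describe can be sketched as a bounded linear mapping; the thresholds and velocity limits below are invented for illustration:

```python
import numpy as np

def robot_velocity(snr_db, snr_lo=0.0, snr_hi=10.0, v_min=0.05, v_max=0.50):
    """Scale end-effector velocity (m/s) with the SNR of the brain response:
    high SNR (alert user) permits faster motion, low SNR slows the robot."""
    alpha = np.clip((snr_db - snr_lo) / (snr_hi - snr_lo), 0.0, 1.0)
    return v_min + alpha * (v_max - v_min)

for snr in (-2.0, 4.0, 12.0):
    print(f"SNR {snr:5.1f} dB -> v = {robot_velocity(snr):.2f} m/s")
```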
|
39
|
Walter K, Bex P. Low-level factors increase gaze-guidance under cognitive load: A comparison of image-salience and semantic-salience models. PLoS One 2022; 17:e0277691. [PMID: 36441789 PMCID: PMC9704686 DOI: 10.1371/journal.pone.0277691] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Accepted: 11/01/2022] [Indexed: 11/29/2022] Open
Abstract
Growing evidence links eye movements and cognitive functioning; however, there is debate concerning what image content is fixated in natural scenes. Competing approaches have argued that low-level/feedforward and high-level/feedback factors contribute to gaze-guidance. We used one low-level model (Graph-Based Visual Salience, GBVS) and a novel language-based high-level model (Global Vectors for Word Representation, GloVe) to predict gaze locations in a natural image search task, and we examined how fixated locations during this task vary under increasing levels of cognitive load. Participants (N = 30) freely viewed a series of 100 natural scenes for 10 seconds each. Between scenes, subjects identified a target object from the scene a specified number of trials (N) back among three distracter objects of the same type but from alternate scenes. The N-back was adaptive: N-back increased following two correct trials and decreased following one incorrect trial. Receiver operating characteristic (ROC) analysis of gaze locations showed that as cognitive load increased, there was a significant increase in prediction power for GBVS, but not for GloVe. Similarly, there was no significant difference in the area under the ROC between the minimum and maximum N-back achieved across subjects for GloVe (t(29) = -1.062, p = 0.297), while there was a consistent upward trend for GBVS (t(29) = -1.975, p = 0.058), although not significant. A permutation analysis showed that gaze locations were correlated with GBVS, indicating that salient features were more likely to be fixated. However, gaze locations were anti-correlated with GloVe, indicating that objects with low semantic consistency with the scene were more likely to be fixated. These results suggest that fixations are drawn towards salient low-level image features and that this bias increases with cognitive load. Additionally, there is a bias towards fixating improbable objects that does not vary under increasing levels of cognitive load.
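A self-contained version of the ROC analysis, scoring how well a salience map separates fixated pixels from randomly sampled ones via the rank-sum (Mann-Whitney) identity; a common construction, not necessarily the paper's exact procedure:

```python
import numpy as np

def salience_auc(salience_map, fix_rows, fix_cols, n_neg=10_000, seed=0):
    """ROC area for salience as a classifier separating fixated pixels
    from randomly sampled pixels."""
    rng = np.random.default_rng(seed)
    pos = salience_map[fix_rows, fix_cols]
    neg = salience_map[rng.integers(0, salience_map.shape[0], n_neg),
                       rng.integers(0, salience_map.shape[1], n_neg)]
    scores = np.concatenate([pos, neg])
    ranks = scores.argsort().argsort() + 1        # 1-based ranks
    r_pos = ranks[:pos.size].sum()                # rank sum of positives
    return (r_pos - pos.size * (pos.size + 1) / 2) / (pos.size * n_neg)

rng = np.random.default_rng(5)
smap = rng.random((480, 640))                     # stand-in salience map
auc = salience_auc(smap, rng.integers(0, 480, 60), rng.integers(0, 640, 60))
print(f"AUC: {auc:.3f}")                          # ~0.5 for a random map
```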
Affiliation(s)
- Kerri Walter
- Psychology Department, Northeastern University, Boston, MA, United States of America
| | - Peter Bex
- Psychology Department, Northeastern University, Boston, MA, United States of America
| |
|
40
|
Sidenmark L, Parent M, Wu CH, Chan J, Glueck M, Wigdor D, Grossman T, Giordano M. Weighted Pointer: Error-aware Gaze-based Interaction through Fallback Modalities. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2022; 28:3585-3595. [PMID: 36048981 DOI: 10.1109/tvcg.2022.3203096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Gaze-based interaction is a fast and ergonomic type of hands-free interaction that is often used with augmented and virtual reality when pointing at targets. Such interaction, however, can be cumbersome whenever user, tracking, or environmental factors cause eye tracking errors. Recent research has suggested that fallback modalities could be leveraged to ensure stable interaction irrespective of the current level of eye tracking error. This work thus presents Weighted Pointer interaction, a collection of error-aware pointing techniques that determine whether pointing should be performed by gaze, a fallback modality, or a combination of the two, depending on the level of eye tracking error that is present. These techniques enable users to accurately point at targets when eye tracking is accurate and inaccurate. A virtual reality target selection study demonstrated that Weighted Pointer techniques were more performant and preferred over techniques that required the use of manual modality switching.
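The core weighting idea can be sketched as an error-dependent blend of the gaze ray and a fallback pointer; the error thresholds here are illustrative, and the paper's technique collection also differs in how the error level is estimated:

```python
import numpy as np

def weighted_pointer(gaze_xy, fallback_xy, error_deg, err_lo=0.5, err_hi=3.0):
    """Blend the gaze ray with a fallback modality (e.g., head or controller):
    pure gaze when estimated tracking error is low, pure fallback when high."""
    w = np.clip((error_deg - err_lo) / (err_hi - err_lo), 0.0, 1.0)
    return (1.0 - w) * np.asarray(gaze_xy) + w * np.asarray(fallback_xy)

print(weighted_pointer([0.40, 0.20], [0.46, 0.26], error_deg=1.2))
```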
|
41
|
Cognitive benefits of using non-invasive compared to implantable neural feedback. Sci Rep 2022; 12:16696. [PMID: 36202893 PMCID: PMC9537330 DOI: 10.1038/s41598-022-21057-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Accepted: 09/22/2022] [Indexed: 11/26/2022] Open
Abstract
Non-optimal integration of a prosthesis into an amputee's body schema carries important functional and health consequences after lower limb amputation. These include low perception of the prosthesis as a part of the body, experiencing it as heavier than the natural limb, and cognitively exhausting use. Invasive approaches, exploiting the surgical implantation of electrodes in residual nerves, improved prosthesis integration by restoring natural and somatotopic sensory feedback in transfemoral amputees. A non-invasive alternative that avoids surgery would reduce costs and shorten certification time, significantly increasing the adoption of such systems. To explore this possibility, we compared results from a non-invasive, electro-cutaneous stimulation system to outcomes observed with the use of implants in above-the-knee amputees. This non-invasive solution was tested in transfemoral amputees through evaluation of their ability to perceive and recognize touch intensity and locations, or movements of a prosthesis, and its cognitive integration (through dual task performance and perceived prosthesis weight). While this managed to evoke the perception of different locations on the artificial foot, and closures of the leg, it was less performant than invasive solutions. Non-invasive stimulation induced improvements in dual motor and cognitive tasks similar to those seen with neural feedback. On the other hand, the results demonstrate that remapped, evoked sensations are less informative and intuitive than neurally evoked somatotopic sensations. The device therefore fails to improve prosthesis embodiment and its associated weight perception. This preliminary evaluation meaningfully highlights the drawbacks of non-invasive systems, but also demonstrates benefits when performing multiple tasks at once. Importantly, the improved dual task performance is consistent with invasive devices, taking steps towards the expedited development of a certified device for widespread use.
|
42
|
Predicting Product Preferences on Retailers’ Web Shops through Measurement of Gaze and Pupil Size Dynamics. J Cogn 2022; 5:45. [PMID: 36304586 PMCID: PMC9541120 DOI: 10.5334/joc.240] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Accepted: 08/23/2022] [Indexed: 11/22/2022] Open
Abstract
Previous studies used gaze behavior to predict product preference in value-based decision-making, based on gaze angle variables such as dwell time, fixation duration, and the first fixated product. While the application to online retail seems obvious, research with realistic web shop stimuli has been lacking so far. Here, we studied the decision process for 60 Dutch web shops from a variety of retailers by measuring eye movements and pupil size during the viewing of web shop images. An ordinal linear regression model showed that a combination of gaze angle variables accurately predicted product choice, with total dwell time being the most predictive gaze dynamic. Although pupillometric analysis showed a positive relationship between pupil dilation and product preference, adding pupil size to the model only slightly improved the prediction accuracy. The current findings hold the potential to substantially improve retargeting mechanisms in online marketing based on consumers' gaze information. Gaze-based product preference also proves to be a valuable metric for pre-testing product introductions in market research, helping to prevent product launches from failing.
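As a simplified stand-in for the ordinal regression reported above, a softmax choice model over dwell times captures the headline result that total dwell time dominates prediction; the beta parameter and dwell values are invented:

```python
import numpy as np

def choice_probabilities(dwell_times_s, beta=1.5):
    """Softmax choice model: the probability that a product is chosen grows
    with its total dwell time relative to the alternatives on the page."""
    z = beta * np.asarray(dwell_times_s, dtype=float)
    z -= z.max()                   # numerical stability
    p = np.exp(z)
    return p / p.sum()

# Dwell times (s) on four products shown on one web-shop image.
print(choice_probabilities([2.1, 0.6, 1.4, 0.3]))
```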
|
43
|
Hwang SH, Ra Y, Paeng S, Kim HF. Motivational salience drives habitual gazes during value memory retention and facilitates relearning of forgotten value. iScience 2022; 25:105104. [PMID: 36185371 PMCID: PMC9519605 DOI: 10.1016/j.isci.2022.105104] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Revised: 05/08/2022] [Accepted: 09/07/2022] [Indexed: 11/25/2022] Open
Abstract
A habitual gaze is critical to efficiently identify and exploit valuable objects. However, it is unclear what salience components drive habitual gaze choice. Here, we trained subjects to assign positive, neutral, and negative values to objects and found that motivational salience guided habitual gaze choices over 30 days of memory retention. A habitual preference for negatively valued objects emerged during memory retention. This habitual choice was not explained by a general model with salience components driven by the physical features of objects and the rank of learned values. Instead, it is better explained by a model that contains an additional component driven by motivational salience. In a simulated value-forgotten condition, these motivational salience-based habitual choices facilitated re-learning. Our data indicate that after long-term retention, habitual gaze results from increased motivational salience, potentially facilitating the re-learning of forgotten values. Highlights: habitual preference for negatively valued objects emerged during long-term retention; changes in habitual preference were driven by three salience components over time; preference for negatively valued objects facilitates re-learning of forgotten values.
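The three-component salience model can be sketched as a weighted sum per object; the component values and weights below are purely illustrative:

```python
import numpy as np

def gaze_scores(physical, value_rank, motivational, w=(1.0, 1.0, 1.0)):
    """Linear combination of three salience components per object; the key
    claim is that the motivational term is needed on top of the other two."""
    return (w[0] * np.asarray(physical) + w[1] * np.asarray(value_rank)
            + w[2] * np.asarray(motivational))

# Positive, neutral, negative objects: motivational salience is high for
# both strongly positive and strongly negative value associations.
scores = gaze_scores(physical=[0.2, 0.2, 0.2],
                     value_rank=[1.0, 0.5, 0.0],
                     motivational=[1.0, 0.1, 0.9])
print("first gaze goes to object", int(np.argmax(scores)))
```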
|
44
|
Orlowska-Feuer P, Ebrahimi AS, Zippo AG, Petersen RS, Lucas RJ, Storchi R. Look-up and look-down neurons in the mouse visual thalamus during freely moving exploration. Curr Biol 2022; 32:3987-3999.e4. [PMID: 35973431 PMCID: PMC9616738 DOI: 10.1016/j.cub.2022.07.049] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Revised: 05/24/2022] [Accepted: 07/20/2022] [Indexed: 12/28/2022]
Abstract
Visual information reaches cortex via the thalamic dorsal lateral geniculate nucleus (dLGN). dLGN activity is modulated by global sleep/wake states and arousal, indicating that it is not simply a passive relay station. However, its potential for more specific visuomotor integration is largely unexplored. We addressed this question by developing robust 3D video reconstruction of mouse head and body during spontaneous exploration paired with simultaneous neuronal recordings from dLGN. Unbiased evaluation of a wide range of postures and movements revealed a widespread coupling between neuronal activity and few behavioral parameters. In particular, postures associated with the animal looking up/down correlated with activity in >50% neurons, and the extent of this effect was comparable with that induced by full-body movements (typically locomotion). By contrast, thalamic activity was minimally correlated with other postures or movements (e.g., left/right head and body torsions). Importantly, up/down postures and full-body movements were largely independent and jointly coupled to neuronal activity. Thus, although most units were excited during full-body movements, some expressed highest firing when the animal was looking up ("look-up" neurons), whereas others expressed highest firing when the animal was looking down ("look-down" neurons). These results were observed in the dark, thus representing a genuine behavioral modulation, and were amplified in a lit arena. Our results demonstrate that the primary visual thalamus, beyond global modulations by sleep/awake states, is potentially involved in specific visuomotor integration and reveal two distinct couplings between up/down postures and neuronal activity.
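The look-up/look-down classification reduces, in sketch form, to the sign of the correlation between head pitch and a unit's firing rate (the study itself uses 3D video reconstruction and richer statistics; all values below are simulated):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy session: head pitch (deg, positive = looking up) and one unit's rate.
pitch = rng.uniform(-40.0, 40.0, size=2000)
rate = np.clip(0.5 * pitch + rng.normal(0.0, 8.0, pitch.size), 0.0, None)

r = np.corrcoef(pitch, rate)[0, 1]
print(f"pitch/rate correlation r = {r:.2f} ->",
      "look-up neuron" if r > 0 else "look-down neuron")
```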
Affiliation(s)
- Patrycja Orlowska-Feuer
- University of Manchester, Faculty of Biology, Medicine and Health, School of Biological Science, Division of Neuroscience and Experimental Psychology, Oxford Road, M139PL Manchester, UK
| | - Aghileh S Ebrahimi
- University of Manchester, Faculty of Biology, Medicine and Health, School of Biological Science, Division of Neuroscience and Experimental Psychology, Oxford Road, M139PL Manchester, UK
| | - Antonio G Zippo
- Institute of Neuroscience, Consiglio Nazionale delle Ricerche, Via Raoul Follereau, 3, 20854 Vedano al Lambro, Italy
| | - Rasmus S Petersen
- University of Manchester, Faculty of Biology, Medicine and Health, School of Biological Science, Division of Neuroscience and Experimental Psychology, Oxford Road, M139PL Manchester, UK
| | - Robert J Lucas
- University of Manchester, Faculty of Biology, Medicine and Health, School of Biological Science, Division of Neuroscience and Experimental Psychology, Oxford Road, M139PL Manchester, UK
| | - Riccardo Storchi
- University of Manchester, Faculty of Biology, Medicine and Health, School of Biological Science, Division of Neuroscience and Experimental Psychology, Oxford Road, M139PL Manchester, UK.
| |
|
45
|
Chee L, Valle G, Marazzi M, Preatoni G, Haufe FL, Xiloyannis M, Riener R, Raspopovic S. Optimally-calibrated non-invasive feedback improves amputees' metabolic consumption, balance and walking confidence. J Neural Eng 2022; 19. [PMID: 35944515 DOI: 10.1088/1741-2552/ac883b] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Accepted: 08/09/2022] [Indexed: 11/11/2022]
Abstract
OBJECTIVE Lower-limb amputees suffer from a variety of health problems, including higher metabolic consumption and low mobility. These conditions are linked to the lack of natural sensory feedback from their prosthetic device, which forces them to adopt compensatory walking strategies that increase fatigue. Recently, both invasive (i.e. requiring surgery) and non-invasive approaches have been able to provide artificial sensations via neurostimulation, inducing multiple functional and cognitive benefits. Implants helped to improve patient mobility and significantly reduce metabolic consumption. A wearable, non-invasive alternative that provides similar health benefits would eliminate surgery-related risks and costs, thereby increasing the accessibility and spread of such neurotechnologies. APPROACH Here, we present a non-invasive sensory feedback system exploiting optimally calibrated (JND-based) electro-cutaneous stimulation to encode intensity-modulated foot-ground and knee-angle information, personalized to the user's just-noticeable perceptual threshold. This device was holistically evaluated in three transfemoral amputees by examination of metabolic consumption while walking outdoors, walking over different inclinations on a treadmill indoors, and balance maintenance in reaction to unexpected perturbation on a treadmill indoors. We then collected spatio-temporal parameters (i.e. gait dynamics and kinematics) and self-reported prosthesis confidence while the patients were walking with and without the sensory feedback. MAIN RESULTS This non-invasive sensory feedback system, encoding different distinctly perceived levels of tactile and knee-flexion information, successfully enabled subjects to decrease metabolic consumption while walking and to increase prosthesis confidence. Remarkably, more physiological walking strategies and increased stability in response to external perturbations were observed while walking with the sensory feedback. SIGNIFICANCE The health benefits observed with this non-invasive device, previously observed only with invasive technologies, mark an important step towards the development of a practical, non-invasive alternative for restoring sensory feedback in leg amputees.
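The JND-based calibration can be illustrated with Weber-law-spaced stimulation levels between a detection threshold and a comfort limit; the Weber fraction and current ranges are assumptions, as the study calibrated these per subject:

```python
import numpy as np

def jnd_levels(threshold_mA, comfort_max_mA, weber_fraction=0.2):
    """Stimulation amplitudes spaced one just-noticeable difference apart,
    from detection threshold up to the comfort limit (Weber-law spacing)."""
    levels = [threshold_mA]
    while levels[-1] * (1.0 + weber_fraction) <= comfort_max_mA:
        levels.append(levels[-1] * (1.0 + weber_fraction))
    return np.array(levels)

def encode_pressure(pressure_norm, levels):
    """Map a normalized foot-pressure reading (0..1) to the nearest level."""
    idx = int(round(pressure_norm * (len(levels) - 1)))
    return levels[idx]

levels = jnd_levels(1.0, 4.0)
print(f"{levels.size} distinct levels; "
      f"mid pressure -> {encode_pressure(0.5, levels):.2f} mA")
```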
Affiliation(s)
- Lauren Chee
- ETH Zürich, Tannenstrasse 1, 8092 Zurich, Switzerland
| | - Giacomo Valle
- ETH Zürich, Tannenstrasse 1, 8092 Zurich, Switzerland
| | - Michele Marazzi
- ETH Zürich, Tannenstrasse 1, 8092 Zurich, Switzerland
| | - Greta Preatoni
- ETH Zürich, Tannenstrasse 1, 8092 Zurich, Switzerland
| | - Florian L Haufe
- ETH Zürich, Tannenstrasse 1, 8092 Zurich, Switzerland
| | | | - Robert Riener
- ETH Zürich, Tannenstrasse 1, 8092 Zurich, Switzerland
| | - Stanisa Raspopovic
- ETH Zürich, Tannenstrasse 1, 8092 Zurich, Switzerland
| |
|
46
|
Yeo L, Romero R. Optical ultrasound simulation-based training in obstetric sonography. J Matern Fetal Neonatal Med 2022; 35:2469-2484. [PMID: 32635783 PMCID: PMC10544761 DOI: 10.1080/14767058.2020.1786519] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2020] [Revised: 04/27/2020] [Accepted: 06/19/2020] [Indexed: 12/30/2022]
Abstract
Ultrasound is an imaging modality that is highly operator dependent. This article reviews the challenges in learning how to perform obstetric sonography, as well as the processes necessary to acquire expert performance skills in sonography. Simulation-based education and learning, and the value of medical simulation, are also discussed. Ultrasound simulators are an effective means of teaching obstetric sonography because they provide training, deliberate practice, and performance evaluation/feedback that allows continuous and critical self-evaluation. We review evidence that simulation can improve performance in obstetric ultrasound examination, review current simulators, and discuss the current problems/gaps in ultrasound simulation. Optical positioning ultrasound simulation, a novel high-fidelity simulation learning system that addresses many of these problems/gaps, is introduced here for the first time.
Collapse
Affiliation(s)
- Lami Yeo
- Perinatology Research Branch, Division of Obstetrics and Maternal-Fetal Medicine, Division of Intramural Research, Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, U.S. Department of Health and Human Services, Bethesda, MD and Detroit, MI, USA
- Detroit Medical Center, Detroit, MI, USA
- Department of Obstetrics and Gynecology, Wayne State University School of Medicine, Detroit, MI, USA
| | - Roberto Romero
- Perinatology Research Branch, Division of Obstetrics and Maternal-Fetal Medicine, Division of Intramural Research, Eunice Kennedy Shriver National Institute of Child Health and Human Development, National Institutes of Health, U.S. Department of Health and Human Services, Bethesda, MD and Detroit, MI, USA
- Detroit Medical Center, Detroit, MI, USA
- Department of Obstetrics and Gynecology, University of Michigan, Ann Arbor, MI, USA
- Department of Epidemiology and Biostatistics, Michigan State University, East Lansing, MI, USA
- Center for Molecular Medicine and Genetics, Wayne State University, Detroit, MI, USA
| |
Collapse
|
47
|
Visual control during climbing: Variability in practice fosters a proactive gaze pattern. PLoS One 2022; 17:e0269794. [PMID: 35687600 PMCID: PMC9187105 DOI: 10.1371/journal.pone.0269794] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Accepted: 05/30/2022] [Indexed: 11/19/2022] Open
Abstract
In climbing, the visual system is confronted with a dual demand: controlling ongoing movement and searching for upcoming movement possibilities. The aims of the present research were: (i) to investigate the effect of different modes of practice on how learners deal with this dual demand; and (ii) to analyze the extent to which this effect may facilitate transfer of learning to a new climbing route. The effects of constant practice, an imposed schedule of variations, and a self-controlled schedule of variations on the gaze behaviors and climbing fluency of novices were compared. Results showed that the constant practice group outperformed the imposed variability group on the training route, while the three groups' climbing fluency on the transfer route did not differ. Analyses of the gaze behaviors showed that the constant practice group used more online gaze control during the last session, whereas the imposed variability group relied on a more proactive gaze control. This latter gaze pattern was also used on the transfer route by the imposed variability group. The self-controlled variability group displayed more interindividual differences in gaze behaviors. These findings indicate that learning protocols induce different timings of gaze patterns, which may differentially facilitate adaptation to new climbing routes.
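The online/proactive distinction is quantifiable. Below is an illustrative Python sketch, not the study's analysis code, of one way to score a climb's gaze pattern: count fixation time spent on holds above the one currently in use as proactive search; all data structures and thresholds are assumptions.

```python
# Illustrative proxy for "online" vs. "proactive" gaze control in climbing:
# fixations on holds above the currently used hold count as proactive search,
# fixations at or below it as online movement control. Assumed data model.

from dataclasses import dataclass

@dataclass
class Fixation:
    hold_index: int      # index of the fixated hold along the route
    duration_ms: float   # fixation duration in milliseconds

def proactive_gaze_ratio(fixations, current_hold_per_fixation):
    """Share of total fixation time spent on holds above the hold
    currently being used -- a simple proxy for proactive gaze control."""
    proactive = sum(f.duration_ms
                    for f, current in zip(fixations, current_hold_per_fixation)
                    if f.hold_index > current)
    total = sum(f.duration_ms for f in fixations)
    return proactive / total if total > 0 else 0.0

# Example: three fixations recorded while the climber is on hold 4.
fixations = [Fixation(4, 300), Fixation(6, 250), Fixation(7, 150)]
print(proactive_gaze_ratio(fixations, [4, 4, 4]))  # ~0.571
```

A metric of this kind would let the practice groups described above be compared on a single continuous scale rather than by qualitative labels.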
Collapse
|
48
|
Jangir R, Hansen N, Ghosal S, Jain M, Wang X. Look Closer: Bridging Egocentric and Third-Person Views With Transformers for Robotic Manipulation. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3144512] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
49
|
James S, Davison AJ. Q-Attention: Enabling Efficient Learning for Vision-Based Robotic Manipulation. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3140817] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
50
|
Lappi O. Gaze Strategies in Driving-An Ecological Approach. Front Psychol 2022; 13:821440. [PMID: 35360580 PMCID: PMC8964278 DOI: 10.3389/fpsyg.2022.821440] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Accepted: 02/07/2022] [Indexed: 01/16/2023] Open
Abstract
Human performance in natural environments is deeply impressive, and still much beyond current AI. Experimental techniques, such as eye tracking, may be useful to understand the cognitive basis of this performance, and "the human advantage." Driving is a domain where these techniques may be deployed, in tasks ranging from rigorously controlled laboratory settings through high-fidelity simulations to naturalistic experiments in the wild. This research has revealed robust patterns that can be reliably identified and replicated in the field and reproduced in the lab. The purpose of this review is to cover the basics of what is known about these gaze behaviors, and some of their implications for understanding visually guided steering. The phenomena reviewed will be of interest to those working on any domain where visual guidance and control with similar task demands is involved (e.g., many sports). The paper is intended to be accessible to the non-specialist, without oversimplifying the complexity of real-world visual behavior. The literature reviewed will provide an information base useful for researchers working on oculomotor behaviors and physiology in the lab who wish to extend their research into more naturalistic locomotor tasks, or for researchers in more applied fields (sports, transportation) who wish to bring aspects of the real-world ecology under experimental scrutiny. As part of a Research Topic on gaze strategies in closed self-paced tasks, this aspect of the driving task is discussed. In particular, it is emphasized why it is important to carefully separate the visual strategies of driving itself (quite closed and self-paced) from visual behaviors relevant to other forms of driver behavior (an open-ended menagerie of behaviors). There is always a balance to strike between ecological complexity and experimental control. One way to reconcile these demands is to look for natural, real-world tasks and behaviors that are rich enough to be interesting yet sufficiently constrained and well understood to be replicated in simulators and the lab. This ecological approach to driving as a model behavior, and the way the connection between "lab" and "real world" can be spanned in this research, is of interest to anyone keen to develop more ecologically representative designs for studying human gaze behavior.
Collapse
Affiliation(s)
- Otto Lappi
- Cognitive Science/TRU, University of Helsinki, Helsinki, Finland
| |
Collapse
|