1. Adam KCS, Klatt LI, Miller JA, Rösner M, Fukuda K, Kiyonaga A. Beyond Routine Maintenance: Current Trends in Working Memory Research. J Cogn Neurosci 2025; 37:1035-1052. PMID: 39792640; DOI: 10.1162/jocn_a_02298.
Abstract
Working memory (WM) is an evolving concept. Our understanding of the neural functions that support WM develops iteratively alongside the approaches used to study it, and both can be profoundly shaped by available tools and prevailing theoretical paradigms. Here, the organizers of the 2024 Working Memory Symposium, inspired by this year's meeting, highlight current trends and looming questions in WM research. This review is organized into sections describing (1) ongoing efforts to characterize WM function across sensory modalities, (2) the growing appreciation that WM representations are malleable to context and future actions, (3) the enduring problem of how multiple WM items and features are structured and integrated, and (4) new insights about whether WM shares function with other cognitive processes that have conventionally been considered distinct. This review aims to chronicle where the field is headed and calls attention to issues that are paramount for future research.

2. Blaser E, Kaldy Z. How attention and working memory work together in the pursuit of goals: The development of the sampling-remembering trade-off. Developmental Review 2025; 75:101187. PMID: 39990591; PMCID: PMC11845231; DOI: 10.1016/j.dr.2025.101187.
Abstract
Most work in the last 50 years on visual working memory and attention has used a classic psychophysical setup: participants are instructed to attend to, or remember, a set of items. This setup sidesteps the role of cognitive control; effort is maximal, tasks are simple, and strategies are limited. While this approach has yielded important insights, it provides no clear path toward an integrative theory (Kristjánsson & Draschkow, 2021) and, like studying a town's walkability by having its college students run the 50-yard dash, it runs the danger of focusing on edge cases. Here, in this theoretical opinion article, we argue for an approach in which dynamic relationships between the agent and the environment are understood functionally, in light of an agent's goals. This means a shift in emphasis from the performance of the mechanisms underlying a narrow task ("remember these items!") to their control in pursuit of a naturalistic goal ("make a sandwich!", Land & Hayhoe, 2001). Specifically, we highlight the sampling-remembering trade-off between exploiting goal-relevant information in the environment versus maintaining it in working memory. We present a dynamic feedback model of this trade-off, in which the individual weighs the subjective costs of accessing external information against those of maintaining it in memory, drawing on existing cognitive control models based on economic principles (Kool & Botvinick, 2018). This trade-off is particularly interesting in children, because the optimal use of internal resources is even more crucial when those resources are limited. Our model makes specific predictions for future research: (1) an individual child strikes a preferred balance between the effort to attend to goal-relevant information in the environment and the effort to maintain it in working memory, (2) to maintain this balance as underlying memory and cognitive control mechanisms improve with age, the child will have to shift increasingly toward remembering, and (3) older children will show greater adaptability to changing task demands.
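The cost-weighing decision rule described above can be made concrete with a small sketch. The following Python snippet is an illustrative formalization only (parameter names, cost functions, and values are assumptions, not the authors' implementation): an agent compares the subjective cost of re-sampling the environment against the cost of maintaining items in working memory and picks the cheaper strategy. Raising the capacity parameter shifts choices toward remembering, in line with developmental prediction (2) above.

```python
# Minimal illustrative sketch of the sampling-remembering trade-off described above.
# All parameter names and cost functions are assumptions for illustration, not the
# authors' implementation.

def expected_cost_sampling(access_effort: float, n_items: int) -> float:
    """Subjective cost of re-inspecting the environment for each needed item."""
    return access_effort * n_items

def expected_cost_remembering(maintenance_effort: float, n_items: int,
                              capacity: float, error_penalty: float) -> float:
    """Cost of holding items in working memory plus the expected cost of errors
    when the load exceeds an (assumed) effective capacity."""
    overload = max(0.0, n_items - capacity)
    return maintenance_effort * n_items + error_penalty * overload

def choose_strategy(n_items: int, access_effort: float = 1.0,
                    maintenance_effort: float = 0.6, capacity: float = 3.0,
                    error_penalty: float = 2.5) -> str:
    """Pick whichever strategy has the lower subjective cost."""
    sample = expected_cost_sampling(access_effort, n_items)
    remember = expected_cost_remembering(maintenance_effort, n_items,
                                         capacity, error_penalty)
    return "sample" if sample < remember else "remember"

if __name__ == "__main__":
    # Small loads are cheap to remember; large loads exceed capacity and favor re-sampling.
    for load in range(1, 7):
        print(load, choose_strategy(load))
```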
Affiliation(s)
- Erik Blaser: University of Massachusetts Boston, Department of Psychology, Developmental and Brain Sciences Program, 100 Morrissey Blvd., Boston, MA, 02125, USA
- Zsuzsa Kaldy: University of Massachusetts Boston, Department of Psychology, Developmental and Brain Sciences Program, 100 Morrissey Blvd., Boston, MA, 02125, USA

3. Britt N, Chau J, Sun HJ. Context-dependent modulation of spatial attention: prioritizing behaviourally relevant stimuli. Cogn Res Princ Implic 2025; 10:4. PMID: 39920517; PMCID: PMC11806188; DOI: 10.1186/s41235-025-00612-x.
Abstract
Human attention can be guided by semantic information conveyed by individual objects in the environment. Over time, we learn to allocate attentional resources towards stimuli that are behaviourally relevant to ongoing action, leading to attention capture by meaningful peripheral stimuli. A common example: while driving, stimuli that imply a potentially hazardous scenario (e.g., a pedestrian about to cross the road) warrant attentional prioritization to ensure safety. In the current study, we report a novel phenomenon in which the guidance of attention depends on stimuli appearing in a behaviourally relevant context. Using a driving simulator, we simulated a real-world driving task representing an overlearned behaviour for licensed drivers. While driving, participants underwent a peripheral cue-target paradigm in which a roadside pedestrian avatar (target) appeared following a cylinder cue. Results revealed that, during simulated driving, participants (all with driver's licenses) showed greater attentional facilitation when pedestrians were oriented towards the road than when they were oriented away from it. This orientation-specific selectivity was not seen when the 3-D context was removed (Experiment 1), when the same visual scene was presented but participants' viewpoint remained stationary (Experiment 2), or when an inanimate object served as the target during simulated driving (Experiment 3). This context-specific attentional modulation likely reflects drivers' expertise in automatically attending to behaviourally relevant information in a context-dependent manner.
Affiliation(s)
- Noah Britt: McMaster University, 1280 Main Street West, Hamilton, ON, Canada
- Jackie Chau: McMaster University, 1280 Main Street West, Hamilton, ON, Canada
- Hong-Jin Sun: McMaster University, 1280 Main Street West, Hamilton, ON, Canada

4. Allegretti E, D'Innocenzo G, Coco MI. The Visual Integration of Semantic and Spatial Information of Objects in Naturalistic Scenes (VISIONS) database: attentional, conceptual, and perceptual norms. Behav Res Methods 2025; 57:42. PMID: 39753746; DOI: 10.3758/s13428-024-02535-9.
Abstract
The complex interplay between low- and high-level mechanisms governing our visual system can only be fully understood within ecologically valid naturalistic contexts. For this reason, in recent years, substantial efforts have been devoted to equipping the scientific community with datasets of realistic images normed on semantic or spatial features. Here, we introduce VISIONS, an extensive database of 1136 naturalistic scenes, normed by 185 English speakers on a wide range of perceptual and conceptual variables at three levels of granularity: isolated object, whole scene, and object-in-scene. Each naturalistic scene contains a critical object that is systematically manipulated and normed with respect to its semantic consistency (e.g., a toothbrush vs. a flashlight in a bathroom) and spatial position (i.e., left, right). Normative data are also available for low-level (i.e., clarity, visual complexity) and high-level (i.e., name agreement, confidence, familiarity, prototypicality, manipulability) features of the critical object and its embedding scene context. Eye-tracking data collected during a free-viewing task further confirm the experimental validity of our manipulations while demonstrating, theoretically, that object semantics is acquired in extra-foveal vision and used to guide early overt attention. To our knowledge, VISIONS is the first database to exhaustively cover norms about the integration of objects in scenes while also providing several perceptual and conceptual norms for objects and scenes taken independently. We expect VISIONS to become an invaluable image dataset for examining and answering timely questions above and beyond vision science, where a diversity of perceptual, attentional, mnemonic, or linguistic processes can be explored as they develop, age, or become neuropathological.
Affiliation(s)
- Elena Allegretti: Department of Psychology, Sapienza, University of Rome, Rome, Italy
- Moreno I Coco: Department of Psychology, Sapienza, University of Rome, Rome, Italy; I.R.C.C.S. Fondazione Santa Lucia, Rome, Italy

5. Sefranek M, Zokaei N, Draschkow D, Nobre AC. Comparing the impact of contextual associations and statistical regularities in visual search and attention orienting. PLoS One 2024; 19:e0302751. PMID: 39570820; PMCID: PMC11581329; DOI: 10.1371/journal.pone.0302751.
Abstract
During visual search, we quickly learn to attend to an object's likely location. Research has shown that this process can be guided by learning target locations based on consistent spatial contextual associations or other statistical regularities. Here, we tested how different types of associations guide learning and the utilisation of established memories for different purposes. Participants learned contextual associations or rule-like statistical regularities that predicted target locations within different scenes. The consequences of this learning for subsequent performance were then evaluated on attention-orienting and memory-recall tasks. Participants demonstrated facilitated attention-orienting and recall performance based on both contextual associations and statistical regularities. Contextual associations facilitated attention orienting with a different time course compared to statistical regularities. Benefits to memory-recall performance depended on the alignment between the learned association or regularity and the recall demands. The distinct patterns of behavioural facilitation by contextual associations and statistical regularities show how different forms of long-term memory may influence neural information processing through different modulatory mechanisms.
Affiliation(s)
- Marcus Sefranek: Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Nahid Zokaei: Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Dejan Draschkow: Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Anna C. Nobre: Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom; Wu Tsai Institute, Yale University, New Haven, CT, United States of America; Department of Psychology, Yale University, New Haven, CT, United States of America

6. Le STT, Kristjánsson Á, MacInnes WJ. Target selection during "snapshot" foraging. Atten Percept Psychophys 2024; 86:2778-2793. PMID: 39604757; DOI: 10.3758/s13414-024-02988-2.
Abstract
While previous foraging studies have identified key variables that determine attentional selection, they are affected by the global statistics of the tasks. In most studies, targets are selected one at a time without replacement while distractor numbers remain constant, steadily reducing the ratio of targets to distractors with every selection. We designed a foraging task with a sequence of local "snapshots" of foraging displays, each snapshot requiring a target selection. This enabled tighter control of local target and distractor type ratios while maintaining the flavor of a sequential, multiple-target foraging task. Observers saw only six items for each target selection during a "snapshot" containing varying numbers of two target types and two distractor types. After each selection, a new six-item array (the following snapshot) immediately appeared, centered on the locus of the last selected target. We contrasted feature-based and conjunction-based foraging and analyzed the data by the proportion of different target types in each trial. We found that target type proportion affected selection, with longer response times during conjunction foraging when alternate target types outnumbered the repeated target type. In addition, the choice of target in each snapshot was influenced by the relative positions of selected targets and distractors during preceding snapshots. Importantly, this shows to what degree previous findings on foraging can be attributed to changing global statistics of the foraging array. We propose that "snapshot foraging" can increase experimental control in understanding how people choose targets during continuous attentional orienting.

7. Donenfeld J, Blaser E, Kaldy Z. The resolution of proactive interference in a novel visual working memory task: A behavioral and pupillometric study. Atten Percept Psychophys 2024; 86:2345-2362. PMID: 38898344; DOI: 10.3758/s13414-024-02888-5.
Abstract
Proactive interference (PI) occurs when previously learned information impairs memory for more recently learned information. Most PI studies have employed verbal stimuli, while the role of PI in visual working memory (VWM) has had relatively little attention. In the verbal domain, Johansson and colleagues (2018) found that pupil diameter - a real-time neurophysiological index of cognitive effort - reflects the accumulation and resolution of PI. Here we use a novel, naturalistic paradigm to test the behavioral and pupillary correlates of PI resolution for what-was-where item-location bindings in VWM. Importantly, in our paradigm, trials (PI vs. no-PI condition) are mixed in a block, and participants are naïve to the condition until they are tested. This design sidesteps concerns about differences in encoding strategies or generalized effort differences between conditions. Across three experiments (N = 122 total) we assessed PI's effect on VWM and whether PI resolution during memory retrieval is associated with greater cognitive effort (as indexed by the phasic, task-evoked pupil response). We found strong support for PI's detrimental effect on VWM (even with our spatially distributed stimuli), but no consistent link between interference resolution and effort during memory retrieval (this, even though the pupil was a reliable indicator that higher-performing individuals tried harder during memory encoding). We speculate that when explicit strategies are minimized, and PI resolution relies primarily on implicit processing, the effect may not be sufficient to trigger a robust pupillometric response.
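For readers unfamiliar with the pupillometric measure mentioned here, the sketch below shows one common way a phasic, task-evoked pupil response is quantified: baseline-correct the pupil trace against a short pre-event window and average the post-event change. Sampling rate, window lengths, and variable names are assumptions for illustration, not details of the authors' pipeline.

```python
# Illustrative sketch (not the authors' analysis pipeline) of how a phasic,
# task-evoked pupil response is commonly quantified: subtract a pre-stimulus
# baseline from the post-stimulus pupil trace and summarize the evoked change.
import numpy as np

def task_evoked_pupil_response(pupil_trace: np.ndarray,
                               event_index: int,
                               fs: float = 60.0,
                               baseline_s: float = 0.5,
                               response_s: float = 2.0) -> float:
    """Mean baseline-corrected pupil diameter in a post-event window."""
    baseline_n = int(baseline_s * fs)
    response_n = int(response_s * fs)
    baseline = pupil_trace[event_index - baseline_n:event_index].mean()
    response = pupil_trace[event_index:event_index + response_n]
    return float((response - baseline).mean())

# Example with synthetic data: a trace that dilates slightly after the event.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trace = np.concatenate([np.full(120, 3.0), np.full(120, 3.2)])
    trace += rng.normal(0, 0.01, trace.size)
    print(round(task_evoked_pupil_response(trace, event_index=120), 3))
```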
Affiliation(s)
- Jamie Donenfeld: Department of Psychology, University of Massachusetts Boston, 100 William T. Morrissey Blvd, Boston, MA, 02125-3393, USA
- Erik Blaser: Department of Psychology, University of Massachusetts Boston, 100 William T. Morrissey Blvd, Boston, MA, 02125-3393, USA
- Zsuzsa Kaldy: Department of Psychology, University of Massachusetts Boston, 100 William T. Morrissey Blvd, Boston, MA, 02125-3393, USA

8. Aivar MP, Li CL, Tong MH, Kit DM, Hayhoe MM. Knowing where to go: Spatial memory guides eye and body movements in a naturalistic visual search task. J Vis 2024; 24:1. PMID: 39226069; PMCID: PMC11373708; DOI: 10.1167/jov.24.9.1.
Abstract
Most research on visual search has used simple tasks presented on a computer screen. However, in natural situations visual search almost always involves eye, head, and body movements in a three-dimensional (3D) environment. The different constraints imposed by these two types of search tasks might explain some of the discrepancies in our understanding concerning the use of memory resources and the role of contextual objects during search. To explore this issue, we analyzed a visual search task performed in an immersive virtual reality apartment. Participants searched for a series of geometric 3D objects while eye movements and head coordinates were recorded. Participants explored the apartment to locate target objects whose location and visibility were manipulated. For objects with reliable locations, we found that repeated searches led to a decrease in search time and number of fixations and to a reduction of errors. Searching for those objects that had been visible in previous trials but were only tested at the end of the experiment was also easier than finding objects for the first time, indicating incidental learning of context. More importantly, we found that body movements showed changes that reflected memory for target location: trajectories were shorter and movement velocities were higher, but only for those objects that had been searched for multiple times. We conclude that memory of 3D space and target location is a critical component of visual search and also modifies movement kinematics. In natural search, memory is used to optimize movement control and reduce energetic costs.
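As a rough illustration of the movement kinematics reported above (shorter trajectories, higher movement velocities), the sketch below computes trajectory length and mean speed from sampled 3D head coordinates. The array layout and sampling rate are assumed and are not taken from the paper.

```python
# Small sketch (an assumption, not the authors' code) of the kind of kinematic
# summary described above: total path length and mean movement speed computed
# from sampled 3D head positions.
import numpy as np

def path_length_and_speed(positions: np.ndarray, fs: float = 90.0):
    """positions: (n_samples, 3) head coordinates in meters; fs in Hz."""
    steps = np.diff(positions, axis=0)          # displacement per sample
    step_lengths = np.linalg.norm(steps, axis=1)
    total_length = step_lengths.sum()           # trajectory length (m)
    duration = (len(positions) - 1) / fs        # seconds
    mean_speed = total_length / duration        # m/s
    return total_length, mean_speed

if __name__ == "__main__":
    t = np.linspace(0, 5, 450)                  # 5 s at 90 Hz
    walk = np.stack([t * 0.8, np.zeros_like(t), np.zeros_like(t)], axis=1)
    print(path_length_and_speed(walk))          # ~ (4.0 m, 0.8 m/s)
```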
Affiliation(s)
- M Pilar Aivar: Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain; https://www.psicologiauam.es/aivar/
- Chia-Ling Li: Institute of Neuroscience, The University of Texas at Austin, Austin, TX, USA (present address: Apple Inc., Cupertino, California, USA)
- Matthew H Tong: Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA (present address: IBM Research, Cambridge, Massachusetts, USA)
- Dmitry M Kit: Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA (present address: F5, Boston, Massachusetts, USA)
- Mary M Hayhoe: Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA

9. Bays PM, Schneegans S, Ma WJ, Brady TF. Representation and computation in visual working memory. Nat Hum Behav 2024; 8:1016-1034. PMID: 38849647; DOI: 10.1038/s41562-024-01871-2.
Abstract
The ability to sustain internal representations of the sensory environment beyond immediate perception is a fundamental requirement of cognitive processing. In recent years, debates regarding the capacity and fidelity of the working memory (WM) system have advanced our understanding of the nature of these representations. In particular, there is growing recognition that WM representations are not merely imperfect copies of a perceived object or event. New experimental tools have revealed that observers possess richer information about the uncertainty in their memories and take advantage of environmental regularities to use limited memory resources optimally. Meanwhile, computational models of visuospatial WM formulated at different levels of implementation have converged on common principles relating capacity to variability and uncertainty. Here we review recent research on human WM from a computational perspective, including the neural mechanisms that support it.
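One of the "common principles relating capacity to variability" mentioned in this review can be illustrated with a toy resource model: a fixed encoding resource divided among more items yields less precision, and hence more variable recall, per item. The power-law form and parameter values below are illustrative assumptions, not any specific published model.

```python
# Toy sketch of a resource-style principle: recall variability grows with set
# size because a fixed resource is divided among more items. The functional
# form and numbers are illustrative assumptions only.
import numpy as np

def recall_error_sd(set_size: int, total_resource: float = 40.0,
                    alpha: float = 1.0) -> float:
    """Standard deviation of recall error (arbitrary units) when a fixed
    resource is divided among `set_size` items; precision = resource per item."""
    precision_per_item = total_resource / (set_size ** alpha)
    return float(np.sqrt(1.0 / precision_per_item))

if __name__ == "__main__":
    for n in (1, 2, 4, 8):
        print(n, round(recall_error_sd(n), 3))   # variability grows with load
```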
Affiliation(s)
- Paul M Bays: Department of Psychology, University of Cambridge, Cambridge, UK
- Wei Ji Ma: Center for Neural Science and Department of Psychology, New York University, New York, NY, USA
- Timothy F Brady: Department of Psychology, University of California, San Diego, La Jolla, CA, USA

10. Makarov I, Unnthorsson R, Kristjánsson Á, Thornton IM. The effects of visual and auditory synchrony on human foraging. Atten Percept Psychophys 2024; 86:909-930. PMID: 38253985; DOI: 10.3758/s13414-023-02840-z.
Abstract
Can synchrony in stimulation guide attention and aid perceptual performance? Here, in a series of three experiments, we tested the influence of visual and auditory synchrony on attentional selection during a novel human foraging task. Human foraging tasks are a recent extension of the classic visual search paradigm in which multiple targets must be located on a given trial, making it possible to capture a wide range of performance metrics. Experiment 1 was performed online, where the task was to forage for 10 (out of 20) vertical lines among 60 randomly oriented distractor lines that changed color between yellow and blue at random intervals. The targets either changed colors in visual synchrony or not. In another condition, a non-spatial sound additionally occurred synchronously with the color change of the targets. Experiment 2 was run in the laboratory (within-subjects) with the same design. When the targets changed color in visual synchrony, foraging times were significantly shorter than when they randomly changed colors, but there was no additional benefit for the sound synchrony, in contrast to predictions from the so-called "pip-and-pop" effect (Van der Burg et al., Journal of Experimental Psychology, 1053-1065, 2008). In Experiment 3, task difficulty was increased as participants foraged for as many 45° rotated lines as possible among lines of different orientations within 10 s, with the same synchrony conditions as in Experiments 1 and 2. Again, there was a large benefit of visual synchrony but no additional benefit for sound synchronization. Our results provide strong evidence that visual synchronization can guide attention during multiple target foraging. This likely reflects the local grouping of the synchronized targets. Importantly, there was no additional benefit for sound synchrony, even when the foraging task was quite difficult (Experiment 3).
Affiliation(s)
- Ivan Makarov: Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik, Iceland; Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, University of Iceland, Reykjavik, Iceland
- Runar Unnthorsson: Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, University of Iceland, Reykjavik, Iceland
- Árni Kristjánsson: Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik, Iceland
- Ian M Thornton: Department of Cognitive Science, Faculty of Media & Knowledge Science, University of Malta, Msida, Malta

11. Martarelli CS, Chiquet S, Ertl M. Keeping track of reality: embedding visual memory in natural behaviour. Memory 2023; 31:1295-1305. PMID: 37727126; DOI: 10.1080/09658211.2023.2260148.
Abstract
Since immersive virtual reality (IVR) emerged as a research method in the 1980s, the focus has been on the similarities between IVR and actual reality. In this vein, it has been suggested that IVR methodology might fill the gap between laboratory studies and real life. IVR allows for high internal validity (i.e., a high degree of experimental control and experimental replicability), as well as high external validity by letting participants engage with the environment in an almost natural manner. Despite internal validity being crucial to experimental designs, external validity also matters in terms of the generalizability of results. In this paper, we first highlight and summarise the similarities and differences between IVR, desktop situations (both non-immersive VR and computer experiments), and reality. In the second step, we propose that IVR is a promising tool for visual memory research in terms of investigating the representation of visual information embedded in natural behaviour. We encourage researchers to carry out experiments on both two-dimensional computer screens and in immersive virtual environments to investigate visual memory and validate and replicate the findings. IVR is valuable because of its potential to improve theoretical understanding and increase the psychological relevance of the findings.
Affiliation(s)
- Sandra Chiquet: Faculty of Psychology, UniDistance Suisse, Brig, Switzerland
- Matthias Ertl: Department of Psychology, University of Bern, Bern, Switzerland

12. Draschkow D, Anderson NC, David E, Gauge N, Kingstone A, Kumle L, Laurent X, Nobre AC, Shiels S, Võ MLH. Using XR (Extended Reality) for Behavioral, Clinical, and Learning Sciences Requires Updates in Infrastructure and Funding. Policy Insights from the Behavioral and Brain Sciences 2023; 10:317-323. PMID: 37900910; PMCID: PMC10602770; DOI: 10.1177/23727322231196305.
Abstract
Extended reality (XR, including augmented and virtual reality) creates a powerful intersection between information technology and cognitive, clinical, and education sciences. XR technology has long captured the public imagination, and its development is the focus of major technology companies. This article demonstrates the potential of XR to (1) deliver behavioral insights, (2) transform clinical treatments, and (3) improve learning and education. However, without appropriate policy, funding, and infrastructural investment, many research institutions will struggle to keep pace with the advances and opportunities of XR. To realize the full potential of XR for basic and translational research, funding should incentivize (1) appropriate training, (2) open software solutions, and (3) collaborations between complementary academic and industry partners. Bolstering the XR research infrastructure with the right investments and incentives is vital for delivering on the potential for transformative discoveries, innovations, and applications.
Affiliation(s)
- Dejan Draschkow: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK
- Nicola C. Anderson: Department of Psychology, University of British Columbia, Vancouver, Canada
- Erwan David: Department of Psychology, Scene Grammar Lab, Goethe University Frankfurt, Frankfurt am Main, Germany
- Nathan Gauge: OxSTaR Oxford Simulation Teaching and Research, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Alan Kingstone: Department of Psychology, University of British Columbia, Vancouver, Canada
- Levi Kumle: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK
- Xavier Laurent: Centre for Teaching and Learning, University of Oxford, Oxford, UK
- Anna C. Nobre: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK; Wu Tsai Institute, Yale University, New Haven, USA
- Sally Shiels: OxSTaR Oxford Simulation Teaching and Research, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Melissa L.-H. Võ: Department of Psychology, Scene Grammar Lab, Goethe University Frankfurt, Frankfurt am Main, Germany

13. Kallmayer A, Võ MLH, Draschkow D. Viewpoint dependence and scene context effects generalize to depth rotated three-dimensional objects. J Vis 2023; 23:9. PMID: 37707802; PMCID: PMC10506680; DOI: 10.1167/jov.23.10.9.
Abstract
Viewpoint effects on object recognition interact with object-scene consistency effects. While recognition of objects seen from "noncanonical" viewpoints (e.g., a cup from below) is typically impeded compared to processing of objects seen from canonical viewpoints (e.g., the string-side of a guitar), this effect is reduced by meaningful scene context information. In the present study, we investigated whether these findings, established using photographic images, generalize to strongly noncanonical orientations of three-dimensional (3D) models of objects. Using 3D models allowed us to probe a broad range of viewpoints and empirically establish viewpoints with very strong noncanonical and canonical orientations. In Experiment 1, we presented 3D models of objects from six different viewpoints (0°, 60°, 120°, 180°, 240°, 300°) in color (Experiment 1a) and grayscale (Experiment 1b) in a sequential matching task. Viewpoint had a significant effect on accuracy and response times. Based on the viewpoint effect in Experiments 1a and 1b, we could empirically determine the most canonical and noncanonical viewpoints from our set to use in Experiment 2. In Experiment 2, participants again performed a sequential matching task; however, the objects were now paired with scene backgrounds that could be either consistent (e.g., a cup in the kitchen) or inconsistent (e.g., a guitar in the bathroom) with the object. Viewpoint interacted significantly with scene consistency, in that object recognition was less affected by viewpoint when consistent scene information was provided compared to inconsistent information. Our results show that scene context supports object recognition even for extremely noncanonical orientations of depth-rotated 3D objects. This supports the important role that object-scene processing plays for object constancy, especially under conditions of high uncertainty.
Affiliation(s)
- Aylin Kallmayer: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Melissa L-H Võ: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Dejan Draschkow: Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK

14. Chawoush B, Draschkow D, van Ede F. Capacity and selection in immersive visual working memory following naturalistic object disappearance. J Vis 2023; 23:9. PMID: 37548958; PMCID: PMC10411649; DOI: 10.1167/jov.23.8.9.
Abstract
Visual working memory-holding past visual information in mind for upcoming behavior-is commonly studied following the abrupt removal of visual objects from static two-dimensional (2D) displays. In everyday life, visual objects do not typically vanish from the environment in front of us. Rather, visual objects tend to enter working memory following self or object motion: disappearing from view gradually and changing the spatial relation between memoranda and observer. Here, we used virtual reality (VR) to investigate whether two classic findings from visual working memory research-a capacity of around three objects and the reliance on space for object selection-generalize to more naturalistic modes of object disappearance. Our static reference condition mimicked traditional laboratory tasks whereby visual objects were held static in front of the participant and removed from view abruptly. In our critical flow condition, the same visual objects flowed by participants, disappearing from view gradually and behind the observer. We considered visual working memory performance and capacity, as well as space-based mnemonic selection, indexed by directional biases in gaze. Despite vastly distinct modes of object disappearance and altered spatial relations between memoranda and observer, we found comparable capacity and comparable gaze signatures of space-based mnemonic selection. This finding reveals how classic findings from visual working memory research generalize to immersive situations with more naturalistic modes of object disappearance and with dynamic spatial relations between memoranda and observer.
Affiliation(s)
- Babak Chawoush: Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Dejan Draschkow: Department of Experimental Psychology, University of Oxford, United Kingdom; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Freek van Ede: Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands

15. Le STT, Kristjánsson Á, MacInnes WJ. Bayesian approximations to the theory of visual attention (TVA) in a foraging task. Q J Exp Psychol (Hove) 2023; 76:497-510. PMID: 35361003; DOI: 10.1177/17470218221094572.
Abstract
Foraging as a natural visual search for multiple targets has increasingly been studied in humans in recent years. Here, we aimed to model the differences in foraging strategies between feature and conjunction foraging tasks found by Á. Kristjánsson et al. Bundesen proposed the theory of visual attention (TVA) as a computational model of attentional function that divides the selection process into filtering and pigeonholing. The theory describes a mechanism by which the strength of sensory evidence serves to categorise elements. We combined these ideas to train augmented Naïve Bayesian classifiers using data from Á. Kristjánsson et al. as input. Specifically, we asked whether Bayesian classifiers can predict how frequently observers switch between different target types across consecutive selections (switches) during feature and conjunction foraging. We formulated 11 new parameters that represent key sensory and bias information that could be used for each selection during the foraging task and tested them with multiple Bayesian models. Separate Bayesian networks were trained on feature and conjunction foraging data, and parameters that had no impact on the model's predictability were pruned away. We report high accuracy for switch prediction in both tasks from the classifiers, although the model for conjunction foraging was more accurate. We also report our Bayesian parameters in terms of their theoretical associations with the TVA parameters π_j (denoting the pertinence value) and β_i (denoting the decision-making bias).
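A minimal sketch of the classification idea described above is given below, assuming a Gaussian Naïve Bayes classifier and two hypothetical per-selection predictors (distance to the nearest same-type versus different-type target) standing in for the paper's 11 parameters; the training data are synthetic.

```python
# Hedged sketch of the approach described above: a Naïve Bayes classifier
# predicting whether the next selection is a "switch" (different target type
# than the last selection) from per-selection predictors. The two predictors
# used here are hypothetical stand-ins, not the paper's actual parameters.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
n = 500

# Synthetic training data: switches tend to happen when the nearest
# different-type target is closer than the nearest same-type target.
dist_same = rng.uniform(1, 10, n)
dist_diff = rng.uniform(1, 10, n)
switch = (dist_diff + rng.normal(0, 1, n) < dist_same).astype(int)

X = np.column_stack([dist_same, dist_diff])
clf = GaussianNB().fit(X, switch)

# Predicted probability of a switch for two new selection opportunities.
print(round(clf.predict_proba([[8.0, 2.0]])[0, 1], 2))  # far same-type, near different-type
print(round(clf.predict_proba([[2.0, 8.0]])[0, 1], 2))  # near same-type, far different-type
```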
Affiliation(s)
- Sofia Tkhan Tin Le: School of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Árni Kristjánsson: School of Psychology, National Research University Higher School of Economics, Moscow, Russia; Department of Psychology, University of Iceland, Reykjavik, Iceland
- W Joseph MacInnes: School of Psychology, National Research University Higher School of Economics, Moscow, Russia

16. Priming of probabilistic attentional templates. Psychon Bull Rev 2023; 30:22-39. PMID: 35831678; DOI: 10.3758/s13423-022-02125-w.
Abstract
Attentional priming has a dominating influence on vision: it speeds visual search, releases items from crowding, reduces masking effects, and, during free choice, primed targets are chosen over unprimed ones. Many accounts postulate that templates stored in working memory control what we attend to and mediate the priming. But what is the nature of these templates (or representations)? Analyses of real-world visual scenes suggest that tuning templates to exact color or luminance values would be impractical, since those can vary greatly because of changes in environmental circumstances and perceptual interpretation. Tuning templates to a range of the most probable values would be more efficient. Recent evidence does indeed suggest that the visual system represents such probability, gradually encoding statistical variation in the environment through repeated exposure to input statistics. This is consistent with evidence from neurophysiology and theoretical neuroscience, as well as computational evidence of probabilistic representations in visual perception. I argue that such probabilistic representations are the unit of attentional priming and that priming of, say, a repeated single-color value simply involves priming of a distribution with no variance. This "priming of probability" view can be modelled within a Bayesian framework where priming provides contextual priors. Priming can therefore be thought of as learning of the underlying probability density function of the target or distractor sets in a given continuous task.
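The "priming provides contextual priors" idea can be written down compactly. The following is one illustrative Bayesian formalization (a sketch, not equations taken from the paper), in which the primed template is the posterior predictive distribution over the next target feature.

```latex
% Illustrative formalization of "priming provides contextual priors"
% (a sketch under stated assumptions, not the paper's own equations).
% After t trials with target features f_1,...,f_t, the primed template is the
% posterior predictive distribution over the next target feature f_{t+1}:
\[
  p(f_{t+1} \mid f_{1:t}) = \int p(f_{t+1} \mid \theta)\, p(\theta \mid f_{1:t})\, d\theta ,
  \qquad
  p(\theta \mid f_{1:t}) \propto p(f_{1:t} \mid \theta)\, p(\theta).
\]
% Repeating a single color corresponds to the special case in which the
% predictive distribution collapses toward zero variance.
```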

17. Helbing J, Draschkow D, Võ MLH. Auxiliary Scene-Context Information Provided by Anchor Objects Guides Attention and Locomotion in Natural Search Behavior. Psychol Sci 2022; 33:1463-1476. PMID: 35942922; DOI: 10.1177/09567976221091838.
Abstract
Successful adaptive behavior requires efficient attentional and locomotive systems. Previous research has thoroughly investigated how we achieve this efficiency during natural behavior by exploiting prior knowledge related to targets of our actions (e.g., attending to metallic targets when looking for a pot) and to the environmental context (e.g., looking for the pot in the kitchen). Less is known about whether and how individual nontarget components of the environment support natural behavior. In our immersive virtual reality task, 24 adult participants searched for objects in naturalistic scenes in which we manipulated the presence and arrangement of large, static objects that anchor predictions about targets (e.g., the sink provides a prediction for the location of the soap). Our results show that gaze and body movements in this naturalistic setting are strongly guided by these anchors. These findings demonstrate that objects auxiliary to the target are incorporated into the representations guiding attention and locomotion.
Affiliation(s)
- Jason Helbing: Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
- Dejan Draschkow: Brain and Cognition Laboratory, Department of Experimental Psychology, University of Oxford; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford
- Melissa L-H Võ: Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt

18. Tünnermann J, Kristjánsson Á, Petersen A, Schubö A, Scharlau I. Advances in the application of a computational Theory of Visual Attention (TVA): Moving towards more naturalistic stimuli and game-like tasks. Open Psychology 2022. DOI: 10.1515/psych-2022-0002.
Abstract
The theory of visual attention, “TVA”, is an influential and formal theory of attentional selection. It is widely applied in clinical assessment of attention and fundamental attention research. However, most TVA-based research is based on accuracy data from letter report experiments performed in controlled laboratory environments. While such basic approaches to questions regarding attentional selection are undoubtedly useful, recent technological advances have enabled the use of increasingly sophisticated experimental paradigms involving more realistic scenarios. Notably, these studies have in many cases resulted in different estimates of capacity limits than those found in studies using traditional TVA-based assessment. Here we review recent developments in TVA-based assessment of attention that goes beyond the use of letter report experiments and experiments performed in controlled laboratory environments. We show that TVA can be used with other tasks and new stimuli, that TVA-based parameter estimation can be embedded into complex scenarios, such as games that can be used to investigate particular problems regarding visual attention, and how TVA-based simulations of “visual foraging” can elucidate attentional control in more naturalistic tasks. We also discuss how these developments may inform future advances of TVA.
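For context, TVA's formal core, as standardly presented in the literature (paraphrased here rather than quoted from this article), is the rate equation governing how quickly an object is categorized:

```latex
% TVA's core rate equations as standardly presented in the literature
% (background for the review above, not taken from this article):
% v(x, i) is the rate at which object x is categorized as a member of category i.
\[
  v(x, i) = \eta(x, i)\,\beta_i\,\frac{w_x}{\sum_{z \in S} w_z},
  \qquad
  w_x = \sum_{j \in R} \eta(x, j)\,\pi_j ,
\]
% where \eta(x, i) is the sensory evidence that x belongs to category i,
% \beta_i is the decision bias for category i, \pi_j is the pertinence of
% category j, and S and R are the sets of display objects and relevant
% categories, respectively.
```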
Affiliation(s)
- Jan Tünnermann: Philipps-University Marburg, Department of Psychology, Marburg, Germany
- Árni Kristjánsson: Icelandic Vision Laboratory, School of Health Sciences, University of Iceland, Reykjavík, Iceland; National Research University Higher School of Economics, Moscow, Russian Federation
- Anders Petersen: Center for Visual Cognition, Department of Psychology, University of Copenhagen, Copenhagen, Denmark
- Anna Schubö: Philipps-University Marburg, Department of Psychology, Marburg, Germany
- Ingrid Scharlau: Department of Arts and Humanities, Paderborn University, Paderborn, Germany

19. Draschkow D, Nobre AC, van Ede F. Multiple spatial frames for immersive working memory. Nat Hum Behav 2022; 6:536-544. PMID: 35058640; PMCID: PMC7612679; DOI: 10.1038/s41562-021-01245-y.
Abstract
As we move around, relevant information that disappears from sight can still be held in working memory to serve upcoming behaviour. How we maintain and select visual information as we move through the environment remains poorly understood because most laboratory tasks of working memory rely on removing visual material while participants remain still. We used virtual reality to study visual working memory following self-movement in immersive environments. Directional biases in gaze revealed the recruitment of more than one spatial frame for maintaining and selecting memoranda following self-movement. The findings bring the important realization that multiple spatial frames support working memory in natural behaviour. The results also illustrate how virtual reality can be a critical experimental tool to characterize this core memory system.
Affiliation(s)
- Dejan Draschkow: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK
- Anna C Nobre: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK
- Freek van Ede: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK; Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands

20. He C, Gunalp P, Meyerhoff HS, Rathbun Z, Stieff M, Franconeri SL, Hegarty M. Visual working memory for connected 3D objects: effects of stimulus complexity, dimensionality and connectivity. Cogn Res Princ Implic 2022; 7:19. PMID: 35182236; PMCID: PMC8857738; DOI: 10.1186/s41235-022-00367-9.
Abstract
Visual working memory (VWM) is typically measured using arrays of two-dimensional isolated stimuli with simple visual identities (e.g., color or shape), and these studies typically find strong capacity limits. Science, technology, engineering and mathematics (STEM) experts are tasked with reasoning with representations of three-dimensional (3D) connected objects, raising questions about whether those stimuli would be subject to the same limits. Here, we use a color change detection task to examine working memory capacity for 3D objects made up of differently colored cubes. Experiment 1a shows that increasing the number of parts of an object leads to less sensitivity to color changes, while change-irrelevant structural dimensionality (the number of dimensions into which parts of the structure extend) does not. Experiment 1b shows that sensitivity to color changes decreases similarly with increased complexity for multipart 3D connected objects and disconnected 2D squares, while sensitivity is slightly higher with 3D objects. Experiments 2a and 2b find that when other stimulus characteristics, such as size and visual angle, are controlled, change-irrelevant dimensionality and connectivity have no effect on performance. These results suggest that detecting color changes on 3D connected objects and on displays of isolated 2D stimuli are subject to similar set size effects and are not affected by dimensionality and connectivity when these properties are change-irrelevant, ruling out one possible explanation for scientists’ advantages in storing and manipulating representations of complex 3D objects.
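Change-detection performance of the kind reported here is commonly summarized with a sensitivity measure; the sketch below shows two frequent choices, signal-detection d' and Cowan's K, computed from hit and false-alarm rates. The paper may use a different measure, so treat this only as background.

```python
# Hedged sketch of two common summary measures for change-detection
# performance (the paper may use different ones): signal-detection
# sensitivity d' and Cowan's K, both computed from hit and false-alarm rates.
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def cowans_k(hit_rate: float, fa_rate: float, set_size: int) -> float:
    """Estimated number of items held in memory for a single-probe task."""
    return set_size * (hit_rate - fa_rate)

if __name__ == "__main__":
    print(round(d_prime(0.85, 0.20), 2))      # ~1.88
    print(round(cowans_k(0.85, 0.20, 6), 2))  # ~3.9
```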
Affiliation(s)
- Chuanxiuyue He: Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, 93106, USA
- Peri Gunalp: Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, 93106, USA
- Zoe Rathbun: Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, 93106, USA
- Mike Stieff: University of Illinois, Chicago, Chicago, USA
- Mary Hegarty: Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA, 93106, USA

21. Korkusuz S, Top E. Does the combination of physical activity and attention training affect the motor skills and cognitive activities of individuals with mild intellectual disability? International Journal of Developmental Disabilities 2021; 69:654-662. PMID: 37547556; PMCID: PMC10402842; DOI: 10.1080/20473869.2021.1995640.
Abstract
Individuals with mild intellectual disability (MID) perform worse than their typically developing peers on motor skills and attention-demanding tasks. In this study, the effect of a 14-week physical activity and attention training programme on the motor skills, visual retention, perception, and attention levels of students with MID was analysed. Twenty-two individuals between 7 and 14 years of age participated voluntarily. The experimental group received activities aimed at developing attention skills together with physical activities enhancing fine and gross motor skills (40 + 60 min, 2 days per week, for 14 weeks). The d2 Test of Attention, the Benton Visual Retention Test, and the Bruininks-Oseretsky Test of Motor Proficiency (2nd edition) were used as data collection tools. There were significant group-by-time differences in the total number of items processed, commissions, raw error scores, total number of items minus error scores, concentration performance, Benton visual retention and perception scores, fine motor precision, fine motor integration, manual dexterity, and upper-limb coordination (p < .05). However, there was no significant difference in omissions and fluctuation rate (p > .05). As a result, the combination of physical activity and attention training was found to have a positive effect on the visual retention, perception, attention, and motor skill levels of students with MID.
Affiliation(s)
- Sevda Korkusuz: Institute of Health Sciences, University of Usak, Usak, Turkey
- Elif Top: Faculty of Sport Sciences, University of Usak, Usak, Turkey

22. Schröder R, Baumert PM, Ettinger U. Replicability and reliability of the background and target velocity effects in smooth pursuit eye movements. Acta Psychol (Amst) 2021; 219:103364. PMID: 34245980; DOI: 10.1016/j.actpsy.2021.103364.
Abstract
When we follow a slowly moving target with our eyes, we perform smooth pursuit eye movements (SPEM). Previous investigations point to significantly and robustly reduced SPEM performance in the presence of a stationary background and at higher compared to lower target velocities. However, the reliability of these background and target velocity effects has not yet been investigated systematically. To address this issue, 45 healthy participants (17 m, 28 f) took part in two experimental sessions 7 days apart. In each session, participants were instructed to follow a horizontal SPEM target moving sinusoidally between ±7.89° at three different target velocities, corresponding to frequencies of 0.2, 0.4 and 0.6 Hz. Each target velocity was presented once with and once without a stationary background, resulting in six blocks. The blocks were presented twice per session in order to additionally explore potential task length effects. To assess SPEM performance, velocity gain was calculated as the ratio of eye to target velocity. In line with previous research, detrimental background and target velocity effects were replicated robustly in both sessions with large effect sizes. Good to excellent test-retest reliabilities were obtained at higher target velocities and in the presence of a stationary background, whereas lower reliabilities occurred with slower targets and in the absence of background stimuli. Target velocity and background effects resulted in largely good to excellent reliabilities. These findings not only replicated robust experimental effects of background and target velocity at group level, but also revealed that these effects can be translated into reliable individual difference measures.
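As a concrete illustration of the gain measure used in this abstract, the sketch below computes velocity gain as the ratio of eye to target velocity for a sinusoidal pursuit target. The sampling rate, the synthetic eye trace, and the omission of saccade removal are simplifying assumptions, not details of the authors' analysis.

```python
# Illustrative computation of smooth pursuit velocity gain (eye velocity /
# target velocity) for a sinusoidal target. Saccade removal and other
# preprocessing steps used in real pursuit analyses are omitted here.
import numpy as np

fs = 1000.0                      # assumed sampling rate (Hz)
freq = 0.4                       # target frequency (Hz), one of 0.2/0.4/0.6
amplitude = 7.89                 # degrees, as in the task description

t = np.arange(0, 10, 1 / fs)
target_pos = amplitude * np.sin(2 * np.pi * freq * t)
eye_pos = 0.9 * target_pos       # a hypothetical eye trace tracking at 90% gain

target_vel = np.gradient(target_pos, 1 / fs)
eye_vel = np.gradient(eye_pos, 1 / fs)

# Gain as the ratio of eye to target velocity, summarized over the trial
# (restricted to samples where the target moves appreciably).
moving = np.abs(target_vel) > 1.0
gain = np.median(eye_vel[moving] / target_vel[moving])
print(round(float(gain), 2))     # ~0.90
```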
Affiliation(s)
- Rebekka Schröder: Department of Psychology, University of Bonn, Kaiser-Karl-Ring 9, 53111 Bonn, Germany
- Ulrich Ettinger: Department of Psychology, University of Bonn, Kaiser-Karl-Ring 9, 53111 Bonn, Germany