1. Williams LH, Wiegand I, Lavelle M, Wolfe JM, Fukuda K, Peelen MV, Drew T. Electrophysiological Correlates of Visual Memory Search. J Cogn Neurosci 2025;37:63-85. PMID: 39378181. DOI: 10.1162/jocn_a_02256.
Abstract
In everyday life, we frequently engage in 'hybrid' visual and memory search, where we look for multiple items stored in memory (e.g., a mental shopping list) in our visual environment. Across three experiments, we used event-related potentials to better understand the contributions of visual working memory (VWM) and long-term memory (LTM) during the memory search component of hybrid search. Experiments 1 and 2 demonstrated that the FN400 (an index of LTM recognition) and the CDA (an index of VWM load) increased with memory set size (target load), suggesting that both VWM and LTM are involved in memory search, even when target load exceeds capacity limitations of VWM. In Experiment 3, we used these electrophysiological indices to test how categorical similarity of targets and distractors affects memory search. The CDA and FN400 were modulated by memory set size only if items resembled targets. This suggests that dissimilar distractor items can be rejected before eliciting a memory search. Together, our findings demonstrate the interplay of VWM and LTM processes during memory search for multiple targets.
2. Saltzmann SM, Eich B, Moen KC, Beck MR. Activated long-term memory and visual working memory during hybrid visual search: Effects on target memory search and distractor memory. Mem Cognit 2024;52:2156-2171. PMID: 38528298. DOI: 10.3758/s13421-024-01556-1.
Abstract
In hybrid visual search, observers must maintain multiple target templates and subsequently search for any one of those targets. If the number of potential target templates exceeds visual working memory (VWM) capacity, then the target templates are assumed to be maintained in activated long-term memory (aLTM). Observers must search the array for potential targets (visual search), as well as search through memory (target memory search). Increasing the target memory set size reduces accuracy, increases search response times (RTs), and increases dwell time on distractors. However, the extent of observers' memory for distractors during hybrid search is largely unknown. In the current study, the impact of hybrid search on target memory search (measured by dwell time on distractors, false alarms, and misses) and distractor memory (measured by distractor revisits and recognition memory of recently viewed distractors) was measured. Specifically, we aimed to better understand how changes in behavior during hybrid search impact distractor memory. Increased target memory set size led to an increase in search RTs, distractor dwell times, false alarms, and target identification misses. Increasing target memory set size also increased revisits to distractors, suggesting impaired distractor location memory, but had no effect on a two-alternative forced-choice (2AFC) distractor recognition memory test presented during the search trial. These results suggest a lack of interference between the memory stores maintaining target template representations (aLTM) and distractor information (VWM): loading aLTM with more target templates does not impact VWM for distracting information.
Affiliation(s)
- Stephanie M Saltzmann
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
- Brandon Eich
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
- Katherine C Moen
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
- Department of Psychology, University of Nebraska at Kearney, 2504 9th Ave, Kearney, NE, 68849, USA
- Melissa R Beck
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA.
3. Barbosa A, Ruarte G, Ries AJ, Kamienkowski JE, Ison MJ. Investigating the effects of context, visual working memory, and inhibitory control in hybrid visual search. Front Hum Neurosci 2024;18:1436564. PMID: 39257697. PMCID: PMC11384996. DOI: 10.3389/fnhum.2024.1436564.
Abstract
Introduction: In real-life scenarios, individuals frequently engage in tasks that involve searching for one of several distinct items stored in memory. This combined process of visual search and memory search is known as hybrid search. To date, most hybrid search studies have been restricted to average observers looking for previously well-memorized targets on blank backgrounds. Methods: We investigated the effects of context and the role of memory in hybrid search by modifying the task's memorization phase to occur in all-new single trials. In addition, we aimed to assess how individual differences in visual working memory capacity and inhibitory control influence performance during hybrid search. In an online experiment, 110 participants searched for potential targets in images with and without context. Change detection and go/no-go tasks were also performed to measure working memory capacity and inhibitory control, respectively. Results: We show that, in target-present trials, the main hallmarks of hybrid search remain present, with a linear relationship between reaction time and visual set size and a logarithmic relationship between reaction time and memory set size. These behavioral results can be reproduced with a simple drift-diffusion model. Finally, working memory capacity did not predict most search performance measures, and inhibitory control, where relationships were significant, accounted for only a small portion of the variability in the data. Discussion: This study provides insights into the effects of context and individual differences on search efficiency and termination.
Affiliation(s)
- Alessandra Barbosa
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Gonzalo Ruarte
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación (Universidad de Buenos Aires - Consejo Nacional de Investigaciones Científicas y Técnicas), Buenos Aires, Argentina
- Anthony J Ries
- DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, United States
- Juan E Kamienkowski
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación (Universidad de Buenos Aires - Consejo Nacional de Investigaciones Científicas y Técnicas), Buenos Aires, Argentina
- Departamento de Computación (Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires), Buenos Aires, Argentina
- Matias J Ison
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
4. Zheng Y, Lou J, Lu Y, Li Z. Multiple visual items can be simultaneously compared with target templates in memory. Atten Percept Psychophys 2024;86:1641-1652. PMID: 38839716. DOI: 10.3758/s13414-024-02906-6.
Abstract
When we search for something, we often rely on both what we see and what we remember. This process can be divided into three stages: selecting items, identifying those items, and comparing them with what we are trying to find in our memory. It has been suggested that we select items one by one but can identify several items at once. In the present study, we tested whether we need to finish comparing a selected item in the visual display with one or more target templates in memory before we can move on to the next selected item. In Experiment 1, observers looked for either one or two target types in a rapid serial stream of stimuli. The time interval between the onsets of successive items in the stream was varied to obtain a threshold. For search for one target, the threshold was 89 ms; when looking for either of two targets, it was 192 ms. This threshold difference provided a baseline. In Experiment 2, observers looked for one or two types of target in a search array. If they compared each identified item separately, we should expect a jump in the slope of the RT × Set Size function on the order of the baseline obtained in Experiment 1. However, the slope difference was only 13 ms/item, suggesting that several identified items can be compared at once with target templates in memory. Experiment 3 showed that this slope difference was not just a memory-load cost.
Affiliation(s)
- Yujie Zheng
- Department of Psychology and Behavioral Sciences, Zhejiang University, 148 Tian Mu Shan Road, Hangzhou, 310007, People's Republic of China
- Jiafei Lou
- Department of Psychology and Behavioral Sciences, Zhejiang University, 148 Tian Mu Shan Road, Hangzhou, 310007, People's Republic of China
- Yunrong Lu
- Department of Psychiatry, The Fourth Affiliated Hospital, Zhejiang University School of Medicine and International Institutes of Medicine of Zhejiang University, Yiwu, 322000, People's Republic of China.
- Department of Psychiatry, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, People's Republic of China.
- Zhi Li
- Department of Psychology and Behavioral Sciences, Zhejiang University, 148 Tian Mu Shan Road, Hangzhou, 310007, People's Republic of China.
- Department of Psychiatry, The Fourth Affiliated Hospital, Zhejiang University School of Medicine and International Institutes of Medicine of Zhejiang University, Yiwu, 322000, People's Republic of China.
5. Hong I, Kim MS. Attenuation of spatial bias with target template variation. Sci Rep 2024;14:7869. PMID: 38570555. PMCID: PMC10991434. DOI: 10.1038/s41598-024-57255-z.
Abstract
This study investigated the impact of target template variation or consistency on attentional bias in location probability learning. Participants performed a visual search task to find a heterogeneous shape among a homogeneous set of distractors. The target and distractor shapes were either fixed throughout the experiment (target-consistent group) or varied unpredictably on each trial (target-variant group). The target frequently appeared in one possible search region, unbeknownst to the participants. When the target template was consistent throughout the biased visual search, spatial attention was persistently biased toward the frequent target location. However, when the target template was inconsistent and varied during the biased search, the spatial bias was attenuated, so that attention was less prioritized to the frequent target location. The results suggest that the alternating use of target templates may interfere with the emergence of a persistent spatial bias. The regularity-based spatial bias thus depends not only on the number of attentional shifts to the frequent target location but also on search-relevant contexts.
Affiliation(s)
- Injae Hong
- Visual Attention Lab, Brigham and Women's Hospital, Boston, MA, 02215, USA
- Harvard Medical School, Boston, MA, 02115, USA
- Min-Shik Kim
- Department of Psychology, Yonsei University, Seoul, 03722, Republic of Korea.
6. Shang L, Yeh LC, Zhao Y, Wiegand I, Peelen MV. Category-based attention facilitates memory search. eNeuro 2024;11:ENEURO.0012-24.2024. PMID: 38331577. PMCID: PMC10897531. DOI: 10.1523/eneuro.0012-24.2024.
Abstract
We often need to decide whether the object we look at is also the object we look for. When we look for one specific object, this process can be facilitated by feature-based attention. However, when we look for many objects at the same time (e.g., the products on our shopping list) such a strategy may no longer be possible, as research has shown that we can actively prepare to detect only one or two objects at a time. Therefore, looking for multiple objects additionally requires long-term memory search, slowing down decision making. Interestingly, however, previous research has shown that distractor objects can be efficiently rejected during memory search when they are from a different category than the items in the memory set. Here, using EEG, we show that this efficiency is supported by top-down attention at the category level. In Experiment 1, human participants (both sexes) performed a memory search task on individually presented objects from different categories, most of which were distractors. We observed category-level attentional modulation of distractor processing from ∼150 ms after stimulus onset, expressed both as an evoked response modulation and as an increase in decoding accuracy of same-category distractors. In Experiment 2, memory search was performed on two concurrently presented objects. When both objects were distractors, spatial attention (indexed by the N2pc component) was directed to the object that was of the same category as the objects in the memory set. Together, these results demonstrate how top-down attention can facilitate memory search.
Significance Statement: When we are in the supermarket, we repeatedly decide whether a product we look at (e.g., a banana) is on our memorized shopping list (e.g., apples, oranges, kiwis). This requires searching our memory, which takes time. However, when the product is of an entirely different category (e.g., dairy instead of fruit), the decision can be made quickly. Here, we used EEG to show that this between-category advantage in memory search tasks is supported by top-down attentional modulation of visual processing: the visual response evoked by distractor objects was modulated by category membership, and spatial attention was quickly directed to the location of within-category (vs. between-category) distractors. These results demonstrate a close link between attention and memory.
Affiliation(s)
- Linlin Shang
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 GD Nijmegen, The Netherlands
- Lu-Chun Yeh
- Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-University Gießen, 35392 Gießen, Germany
- Yuanfang Zhao
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Iris Wiegand
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 GD Nijmegen, The Netherlands
- Marius V Peelen
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6525 GD Nijmegen, The Netherlands
7. Plater L, Giammarco M, Joubran S, Al-Aidroos N. Control over attentional capture within 170 ms by long-term memory control settings: Evidence from the N2pc. Psychon Bull Rev 2024;31:283-292. PMID: 37566216. DOI: 10.3758/s13423-023-02352-9.
Abstract
Observers adopt attentional control settings (ACSs) based on their goals that guide the capture of attention: Searched-for stimuli capture attention, and stimuli that are not searched for do not. While previous behavioural research indicates that observers can adopt long-term memory (LTM) ACSs (Giammarco et al. Visual Cognition, 24, 78-101, 2016), it seems surprising that representations in LTM could guide attention quickly enough to control attentional capture. To assess the claim that LTM ACSs exert control over early attentional orienting, we recorded electroencephalography while participants studied and searched for 30 target objects in an attention cueing task. Participants reported the studied target and ignored the preceding cues. To control for perceptual evoked responses, on each trial we presented two cue objects (one studied and one nonstudied). Even though participants were instructed to ignore the cues, studied cues produced the N2pc event-related potential, indicating early attentional orienting that was preferentially directed towards the studied cue versus the nonstudied cue. Critically, the N2pc was detectable within 170 ms, confirming that LTM ACSs rapidly control early capture. We propose an update to contemporary models of attentional capture to account for rapid attentional guidance by LTM ACSs.
Affiliation(s)
- Lindsay Plater
- Department of Psychology, University of Guelph, Guelph, ON, N1G 2W1, Canada.
- Maria Giammarco
- Department of Psychology, University of Guelph, Guelph, ON, N1G 2W1, Canada
- Samantha Joubran
- Department of Psychology, University of Guelph, Guelph, ON, N1G 2W1, Canada
- Naseem Al-Aidroos
- Department of Psychology, University of Guelph, Guelph, ON, N1G 2W1, Canada
8. Zou B, Huang Z, Alaoui-Soce A, Wolfe JM. Hybrid visual and memory search for scenes and objects with variable viewpoints. J Vis 2024;24:5. PMID: 38197740. PMCID: PMC10787592. DOI: 10.1167/jov.24.1.5.
Abstract
In hybrid search, observers search visual arrays for any of several target types held in memory. The key finding in hybrid search is that response times (RTs) increase as a linear function of the number of items in a display (visual set size), but increase linearly with the log of the memory set size. Previous experiments have shown this result for specific targets (find exactly this picture of a boot on a blank background) and for broad categorical targets (find any animal). Arguably, these are rather unnatural situations: in the real world, objects are parts of scenes and are seen from multiple viewpoints. The present experiments generalize the hybrid search findings to scenes (Experiment 1) and multiple viewpoints (Experiment 2). The results replicated the basic pattern of hybrid search: RTs increased logarithmically with the number of scene photos/categories held in memory. Experiment 3 controlled which viewpoints were seen in the initial learning phase, and the results replicated the findings of Experiment 2. Experiment 4 compared hybrid search for specific viewpoints, variable viewpoints, and categorical targets; search difficulty increased from specific viewpoints to variable viewpoints and then to categorical targets. Together, the four experiments show the generality of logarithmic memory search in hybrid search.
Affiliation(s)
- Bochao Zou
- School of Computer and Communication Engineering, University of Science and Technology Beijing, China
- Abla Alaoui-Soce
- Department of Psychology, Princeton University, Princeton, NJ, USA
- Jeremy M Wolfe
- Visual Attention Lab, Harvard Medical School and Brigham & Women's Hospital, Boston, MA, USA
9. Yu X, Zhou Z, Becker SI, Boettcher SEP, Geng JJ. Good-enough attentional guidance. Trends Cogn Sci 2023;27:391-403. PMID: 36841692. DOI: 10.1016/j.tics.2023.01.007.
Abstract
Theories of attention posit that attentional guidance operates on information held in a target template within memory. The template is often thought to contain veridical target features, akin to a photograph, and to guide attention to objects that match the exact target features. However, recent evidence suggests that attentional guidance is highly flexible and often guided by non-veridical features, a subset of features, or only associated features. We integrate these findings and propose that attentional guidance maximizes search efficiency based on a 'good-enough' principle to rapidly localize candidate target objects. Candidates are then serially interrogated to make target-match decisions using more precise information. We suggest that good-enough guidance optimizes the speed-accuracy-effort trade-offs inherent in each stage of visual search.
Affiliation(s)
- Xinger Yu
- Center for Mind and Brain, University of California Davis, Davis, CA, USA; Department of Psychology, University of California Davis, Davis, CA, USA
- Zhiheng Zhou
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Stefanie I Becker
- School of Psychology, University of Queensland, Brisbane, QLD, Australia
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, USA; Department of Psychology, University of California Davis, Davis, CA, USA.
10. Plater L, Nyman S, Joubran S, Al-Aidroos N. Repetition enhances the effects of activated long-term memory. Q J Exp Psychol (Hove) 2023;76:621-631. PMID: 35400220. PMCID: PMC9936439. DOI: 10.1177/17470218221095755.
Abstract
Recent research indicates that visual long-term memory (vLTM) representations directly interface with perception and guide attention. This may be accomplished through a state known as activated LTM; however, little is known about the nature of activated LTM. Is it possible to enhance the attentional effects of these activated representations? And is activated LTM discrete (i.e., a representation is either active or not active, and only active representations interact with perception) or continuous (i.e., there are different levels within the active state that all interact with perception)? To answer these questions, in the present study we measured intrusion effects during a modified Sternberg task. Participants saw two lists of three complex visual objects, were cued that only one list was relevant for the current trial (the other list was, thus, irrelevant), and then their memory for the cued list was probed. Critically, half of the trials contained repeat objects (shown 10 times each), and half contained non-repeat objects (shown only once each). Results indicated that repetition enhanced activated LTM, as the intrusion effect (i.e., longer reaction times to irrelevant-list objects than to novel objects) was larger for repeat trials than for non-repeat trials. These initial findings provide preliminary support for the view that LTM activation is continuous, as the intrusion effect was not the same size for repeat and non-repeat trials. We conclude that researchers should repeat stimuli to increase the size of their effects and enhance how LTM representations interact with perception.
Affiliation(s)
- Lindsay Plater
- Department of Psychology, University of Guelph, Guelph, ON, N1G 2W1, Canada
11. Yu X, Johal SK, Geng JJ. Visual search guidance uses coarser template information than target-match decisions. Atten Percept Psychophys 2022;84:1432-1445. PMID: 35474414. PMCID: PMC9232460. DOI: 10.3758/s13414-022-02478-3.
Abstract
When searching for an object, we use a target template in memory that contains task-relevant information to guide visual attention to potential targets and to determine the identity of attended objects. These processes in visual search have typically been assumed to rely on a common source of template information. However, our recent work (Yu et al., 2022) argued that attentional guidance and target-match decisions rely on different information during search, with guidance using a "fuzzier" version of the template compared with target decisions. That work, however, was based on the special case of search for a target amongst linearly separable distractors (e.g., search for an orange target amongst yellower distractors). Real-world search targets are infrequently linearly separable from distractors, and it remained unclear whether the difference in the precision of template information used for guidance compared with target decisions also applies under more typical conditions. In four experiments, we tested this question by varying distractor similarity during visual search and measuring the likelihood of attentional guidance to distractors and target misidentifications. We found that early attentional guidance is indeed less precise than subsequent match decisions under varying exposure durations and distractor set sizes. These results suggest that attentional guidance operates on a coarser code than decisions, perhaps because guidance is constrained by lower acuity in peripheral vision or by the need to rapidly explore a wide region of space, whereas decisions about selected objects are more precise to optimize decision accuracy.
Affiliation(s)
- Xinger Yu
- Center for Mind and Brain, University of California, 267 Cousteau Pl, Davis, CA, 95618, USA.
- Department of Psychology, University of California, One Shields Ave, Davis, CA, 95616, USA.
- Simran K Johal
- Department of Psychology, University of California, One Shields Ave, Davis, CA, 95616, USA
- Joy J Geng
- Center for Mind and Brain, University of California, 267 Cousteau Pl, Davis, CA, 95618, USA.
- Department of Psychology, University of California, One Shields Ave, Davis, CA, 95616, USA.
12. Moon A, Zhao J, Peters MAK, Wu R. Interaction of prior category knowledge and novel statistical patterns during visual search for real-world objects. Cogn Res Princ Implic 2022;7:21. PMID: 35244797. PMCID: PMC8897521. DOI: 10.1186/s41235-022-00356-y.
Abstract
Two aspects of real-world visual search are typically studied in parallel: category knowledge (e.g., searching for food) and visual patterns (e.g., predicting an upcoming street sign from prior street signs). Previous visual search studies have shown that prior category knowledge hinders search when targets and distractors are from the same category. Other studies have shown that task-irrelevant patterns of non-target objects can enhance search when targets appear in locations that previously contained these irrelevant patterns. Combining EEG (N2pc ERP component, a neural marker of target selection) and behavioral measures, the present study investigated how search efficiency is simultaneously affected by prior knowledge of real-world objects (food and toys) and irrelevant visual patterns (sequences of runic symbols) within the same paradigm. We did not observe behavioral differences between locating items in patterned versus random locations. However, the N2pc components emerged sooner when search items appeared in the patterned location, compared to the random location, with a stronger effect when search items were targets, as opposed to non-targets categorically related to the target. A multivariate pattern analysis revealed that neural responses during search trials in the same time window reflected where the visual patterns appeared. Our finding contributes to our understanding of how knowledge acquired prior to the search task (e.g., category knowledge) interacts with new content within the search task.
Affiliation(s)
- Austin Moon
- Department of Psychology, University of California, 900 University Ave, Riverside, CA, 92521, USA.
- Jiaying Zhao
- Department of Psychology and Institute for Resources, Environment and Sustainability, University of British Columbia, Vancouver, Canada
- Megan A K Peters
- Department of Cognitive Sciences, University of California, Irvine, USA
- Department of Bioengineering, University of California, Riverside, USA
- Rachel Wu
- Department of Psychology, University of California, 900 University Ave, Riverside, CA, 92521, USA
13. Yu X, Hanks TD, Geng JJ. Attentional Guidance and Match Decisions Rely on Different Template Information During Visual Search. Psychol Sci 2021;33:105-120. PMID: 34878949. DOI: 10.1177/09567976211032225.
Abstract
When searching for a target object, we engage in a continuous "look-identify" cycle in which we use known features of the target to guide attention toward potential targets and then to decide whether the selected object is indeed the target. Target information in memory (the target template or attentional template) is typically characterized as having a single, fixed source. However, debate has recently emerged over whether flexibility in the target template is relational or optimal. On the basis of evidence from two experiments using college students (Ns = 30 and 70, respectively), we propose that initial guidance of attention uses a coarse relational code, but subsequent decisions use an optimal code. Our results offer a novel perspective that the precision of template information differs when guiding sensory selection and when making identity decisions during visual search.
Affiliation(s)
- Xinger Yu
- Center for Mind and Brain, University of California, Davis
- Department of Psychology, University of California, Davis
- Timothy D Hanks
- Center for Neuroscience, University of California, Davis
- Department of Neurology, University of California, Davis
- Joy J Geng
- Center for Mind and Brain, University of California, Davis
- Department of Psychology, University of California, Davis
14. Moon A, He C, Ditta AS, Cheung OS, Wu R. Rapid category selectivity for animals versus man-made objects: An N2pc study. Int J Psychophysiol 2021;171:20-28. PMID: 34856220. DOI: 10.1016/j.ijpsycho.2021.11.004.
Abstract
Visual recognition occurs rapidly at multiple categorization levels, including the superordinate level (e.g., animal), basic level (e.g., cat), or exemplar level (e.g., my cat). Visual search for animals is faster than for man-made objects, even when the images from those categories have comparable gist statistics (i.e., low- or mid-level visual information), which suggests that higher-level, conceptual influences may support this search advantage for animals. However, it remains unclear whether the search advantage can be explained in part by early visual search processes via the N2pc ERP component, which emerges earlier than behavioral responses, across different categorization levels. Participants searched for 1) an exact image (e.g., a specific squirrel image, Exemplar-level Search), 2) any images of an item (e.g., any squirrels, Basic-level Search), or 3) any items in a category (e.g., any animals, Superordinate-level Search). In addition to Target Present trials, Foil trials measured involuntary attentional selection of task-irrelevant images related to the targets (e.g., other squirrel images when searching for a specific squirrel image, or other animals when searching for squirrels). ERP results revealed 1) a larger N2pc amplitude during Foil trials in Exemplar-level Search for animals than man-made objects, and 2) faster onset latencies for animal search than man-made object search across all categorization levels. These results suggest that the search advantage for animals over man-made objects emerges early, and that attentional selection is more biased toward the basic-level (e.g., squirrel) for animals than for man-made objects during visual search.
Collapse
Affiliation(s)
- Austin Moon
- Department of Psychology, University of California, Riverside, United States of America.
| | - Chenxi He
- INSERM, U992, Cognitive Neuroimaging Unit, Gif/Yvette, France
| | - Annie S Ditta
- Department of Psychology, University of California, Riverside, United States of America
| | - Olivia S Cheung
- Department of Psychology, Division of Science, New York University Abu Dhabi, United Arab Emirates
| | - Rachel Wu
- Department of Psychology, University of California, Riverside, United States of America
| |
Collapse
|
15
|
Adamo SH, Gereke BJ, Shomstein S, Schmidt J. From "satisfaction of search" to "subsequent search misses": a review of multiple-target search errors across radiology and cognitive science. Cogn Res Princ Implic 2021; 6:59. [PMID: 34455466 PMCID: PMC8403090 DOI: 10.1186/s41235-021-00318-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Accepted: 07/15/2021] [Indexed: 11/10/2022] Open
Abstract
For over 50 years, the satisfaction of search effect has been studied within the field of radiology. Defined as a decrease in detection rates for a subsequent target when an initial target is found within the image, these multiple target errors are known to underlie errors of omission (e.g., a radiologist is more likely to miss an abnormality if another abnormality is identified). More recently, they have also been found to underlie lab-based search errors in cognitive science experiments (e.g., an observer is more likely to miss a target 'T' if a different target 'T' was detected). This phenomenon was renamed the subsequent search miss (SSM) effect in cognitive science. Here we review the SSM literature in both radiology and cognitive science and discuss: (1) the current SSM theories (i.e., satisfaction, perceptual set, and resource depletion theories), (2) the eye movement errors that underlie the SSM effect, (3) the existing efforts tested to alleviate SSM errors, and (4) the evolution of methodologies and analyses used when calculating the SSM effect. Finally, we present the attentional template theory, a novel mechanistic explanation for SSM errors, which ties together our current understanding of SSM errors and the attentional template literature.
Collapse
Affiliation(s)
- Stephen H Adamo
- Department of Cognitive Psychology, University of Central Florida, Orlando, USA.
| | - Brian J Gereke
- Department of Neuroscience, University of Texas at Austin, Austin, USA
| | - Sarah Shomstein
- Department of Cognitive Neuroscience, The George Washington University, Washington, USA
| | - Joseph Schmidt
- Department of Cognitive Psychology, University of Central Florida, Orlando, USA
| |
Collapse
|
16
|
Abstract
This paper describes Guided Search 6.0 (GS6), a revised model of visual search. When we encounter a scene, we can see something everywhere. However, we cannot recognize more than a few items at a time. Attention is used to select items so that their features can be "bound" into recognizable objects. Attention is "guided" so that items can be processed in an intelligent order. In GS6, this guidance comes from five sources of preattentive information: (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g., priming), (4) reward, and (5) scene syntax and semantics. These sources are combined into a spatial "priority map," a dynamic attentional landscape that evolves over the course of search. Selective attention is guided to the most active location in the priority map approximately 20 times per second. Guidance will not be uniform across the visual field. It will favor items near the point of fixation. Three types of functional visual field (FVFs) describe the nature of these foveal biases. There is a resolution FVF, an FVF governing exploratory eye movements, and an FVF governing covert deployments of attention. To be identified as targets or rejected as distractors, items must be compared to target templates held in memory. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 ms/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid of serial and parallel processes. In GS6, if a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Setting of that threshold is adaptive, allowing feedback about performance to shape subsequent searches. Simulation shows that the combination of asynchronous diffusion and a quitting signal can produce the basic patterns of response time and error data from a range of search experiments.
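The GS6 mechanics summarized above (a roughly 20-Hz selection cycle, asynchronous identification taking > 150 ms per item, and an accumulating quitting signal) can be illustrated with a toy simulation. This is a minimal sketch with made-up parameter values; it omits guidance entirely and is not the fitted model:

```python
import random

def gs6_trial(n_items, target_present, rng,
              select_interval=50,    # ms between selections (~20 Hz cycle)
              id_time=160,           # ms of diffusion to identify one item
              quit_threshold=2000):  # time (ms) at which the quitting signal fires
    """Toy GS6-style trial. Items are selected from a priority map one per
    select_interval; each is identified asynchronously id_time ms after its
    selection, so several items are 'in the diffuser' at once. If no target
    is identified before the quitting signal reaches threshold, the trial
    ends with an 'absent' response."""
    order = list(range(n_items))
    rng.shuffle(order)                        # priority-map selection order
    target = rng.choice(order) if target_present else None
    for rank, item in enumerate(order):
        identified_at = rank * select_interval + id_time
        if identified_at > quit_threshold:    # quitting signal fired first
            break
        if item == target:
            return "present", identified_at   # hit
    return "absent", quit_threshold           # correct rejection (or miss)

def mean_rt(n_items, target_present, trials=2000):
    """Average simulated response time over many trials."""
    rng = random.Random(1)
    return sum(gs6_trial(n_items, target_present, rng)[1]
               for _ in range(trials)) / trials
```

Because identification lags selection by more than one selection interval, multiple items are undergoing identification at the same time, which is the serial/parallel hybrid the abstract describes; `mean_rt` grows with the visual set size on target-present trials, matching the basic response-time pattern.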
Collapse
Affiliation(s)
- Jeremy M Wolfe
- Ophthalmology and Radiology, Brigham & Women's Hospital/Harvard Medical School, Cambridge, MA, USA.
- Visual Attention Lab, 65 Landsdowne St, 4th Floor, Cambridge, MA, 02139, USA.
| |
Collapse
|
17
|
Lavelle M, Alonso D, Luria R, Drew T. Visual working memory load plays limited, to no role in encoding distractor objects during visual search. VISUAL COGNITION 2021. [DOI: 10.1080/13506285.2021.1914256] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Affiliation(s)
- Mark Lavelle
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
| | - David Alonso
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
| | - Roy Luria
- The School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
| | - Trafton Drew
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
| |
Collapse
|
18
|
Schienle A, Potthoff J, Schönthaler E, Schlintl C. Disgust-Related Memory Bias in Children and Adults. EVOLUTIONARY PSYCHOLOGY 2021; 19:1474704921996585. [PMID: 33902359 PMCID: PMC10303556 DOI: 10.1177/1474704921996585] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Accepted: 02/02/2021] [Indexed: 11/17/2022] Open
Abstract
Studies with adults found a memory bias for disgust, such that memory for disgusting stimuli was enhanced compared to neutral and frightening stimuli. We investigated whether this bias is more pronounced in females and whether it is already present in children. Moreover, we analyzed whether the visual exploration of disgust stimuli during encoding is associated with memory retrieval. In a first recognition experiment with intentional learning, 50 adults (mean age; M = 23 years) and 52 children (M = 11 years) were presented with disgusting, frightening, and neutral pictures. Both children and adults showed a better recognition performance for disgusting images compared to the other image categories. Males and females did not differ in their memory performance. In a second free recall experiment with eye-tracking, 50 adults (M = 22 years) viewed images from the categories disgust, fear, and neutral. Disgusting and neutral images were matched for color, complexity, brightness, and contrast. The participants, who were not instructed to remember the stimuli, showed a disgust memory bias as well as shorter fixation durations and longer scan paths for disgusting images compared to neutral images. This "hyperscanning pattern" correlated with the number of correctly recalled disgust images. In conclusion, we found a disgust-related memory bias in both children and adults regardless of sex and independently of the memorization method used (recognition/free recall; intentional/incidental).
Collapse
Affiliation(s)
- Anne Schienle
- Clinical Psychology, University of Graz, BioTechMed Graz, Austria
| | - Jonas Potthoff
- Clinical Psychology, University of Graz, BioTechMed Graz, Austria
| | | | - Carina Schlintl
- Clinical Psychology, University of Graz, BioTechMed Graz, Austria
| |
Collapse
|
19
|
Allocation of resources in working memory: Theoretical and empirical implications for visual search. Psychon Bull Rev 2021; 28:1093-1111. [PMID: 33733298 PMCID: PMC8367923 DOI: 10.3758/s13423-021-01881-5] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/08/2021] [Indexed: 01/09/2023]
Abstract
Recently, working memory (WM) has been conceptualized as a limited resource, distributed flexibly and strategically between an unlimited number of representations. In addition to improving the precision of representations in WM, the allocation of resources may also shape how these representations act as attentional templates to guide visual search. Here, we reviewed recent evidence in favor of this assumption and proposed three main principles that govern the relationship between WM resources and template-guided visual search. First, the allocation of resources to an attentional template has an effect on visual search, as it may improve the guidance of visual attention, facilitate target recognition, and/or protect the attentional template against interference. Second, the allocation of the largest amount of resources to a representation in WM is not sufficient to give this representation the status of attentional template and thus, the ability to guide visual search. Third, the representation obtaining the status of attentional template, whether at encoding or during maintenance, receives an amount of WM resources proportional to its relevance for visual search. Thus defined, the resource hypothesis of visual search constitutes a parsimonious and powerful framework, which provides new perspectives on previous debates and complements existing models of template-guided visual search.
Collapse
|
20
|
Papesh MH, Hout MC, Guevara Pinto JD, Robbins A, Lopez A. Eye movements reflect expertise development in hybrid search. Cogn Res Princ Implic 2021; 6:7. [PMID: 33587219 PMCID: PMC7884546 DOI: 10.1186/s41235-020-00269-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2020] [Accepted: 12/23/2020] [Indexed: 11/10/2022] Open
Abstract
Domain-specific expertise changes the way people perceive, process, and remember information from that domain. This is often observed in visual domains involving skilled searches, such as athletics referees, or professional visual searchers (e.g., security and medical screeners). Although existing research has compared expert to novice performance in visual search, little work has directly documented how accumulating experiences change behavior. A longitudinal approach to studying visual search performance may permit a finer-grained understanding of experience-dependent changes in visual scanning, and the extent to which various cognitive processes are affected by experience. In this study, participants acquired experience by taking part in many experimental sessions over the course of an academic semester. Searchers looked for 20 categories of targets simultaneously (which appeared with unequal frequency), in displays with 0-3 targets present, while having their eye movements recorded. With experience, accuracy increased and response times decreased. Fixation probabilities and durations decreased with increasing experience, but saccade amplitudes and visual span increased. These findings suggest that the behavioral benefits endowed by expertise emerge from oculomotor behaviors that reflect enhanced reliance on memory to guide attention and the ability to process more of the visual field within individual fixations.
Collapse
Affiliation(s)
- Megan H Papesh
- Department of Psychology, New Mexico State University, P.O. Box 30001/MSC 3452, Las Cruces, NM, 88003, USA.
| | - Michael C Hout
- Department of Psychology, New Mexico State University, P.O. Box 30001/MSC 3452, Las Cruces, NM, 88003, USA
| | | | - Arryn Robbins
- Department of Psychology, New Mexico State University, P.O. Box 30001/MSC 3452, Las Cruces, NM, 88003, USA
- Carthage College, Kenosha, WI, USA
| | - Alexis Lopez
- Department of Psychology, New Mexico State University, P.O. Box 30001/MSC 3452, Las Cruces, NM, 88003, USA
| |
Collapse
|
21
|
Boettcher SEP, van Ede F, Nobre AC. Functional biases in attentional templates from associative memory. J Vis 2020; 20:7. [PMID: 33296459 PMCID: PMC7729124 DOI: 10.1167/jov.20.13.7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
In everyday life, attentional templates—which facilitate the perception of task-relevant sensory inputs—are often based on associations in long-term memory. We ask whether templates retrieved from memory are necessarily faithful reproductions of the encoded information or if associative-memory templates can be functionally adapted after retrieval in service of current task demands. Participants learned associations between four shapes and four colored gratings, each with a characteristic combination of color (green or pink) and orientation (left or right tilt). On each trial, observers saw one shape followed by a grating and indicated whether the pair matched the learned shape-grating association. Across experimental blocks, we manipulated the types of nonmatch (lure) gratings most often presented. In some blocks the lures were most likely to differ in color but not tilt, whereas in other blocks this was reversed. If participants functionally adapt the retrieved template such that the distinguishing information between lures and targets is prioritized, then they should overemphasize the most commonly diagnostic feature dimension within the template. We found evidence for this in the behavioral responses to the lures: participants were more accurate and faster when responding to common versus rare lures, as predicted by the functional—but not the strictly veridical—template hypothesis. This shows that templates retrieved from memory can be functionally biased to optimize task performance in a flexible, context-dependent, manner.
Collapse
Affiliation(s)
- Sage E P Boettcher
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
| | - Freek van Ede
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK; Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, The Netherlands
| | - Anna C Nobre
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
| |
Collapse
|
22
|
Abstract
In visual search tasks, observers look for targets among distractors. In the lab, this often takes the form of multiple searches for a simple shape that may or may not be present among other items scattered at random on a computer screen (e.g., Find a red T among other letters that are either black or red.). In the real world, observers may search for multiple classes of target in complex scenes that occur only once (e.g., As I emerge from the subway, can I find lunch, my friend, and a street sign in the scene before me?). This article reviews work on how search is guided intelligently. I ask how serial and parallel processes collaborate in visual search, describe the distinction between search templates in working memory and target templates in long-term memory, and consider how searches are terminated.
Collapse
Affiliation(s)
- Jeremy M. Wolfe
- Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts 02115, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115, USA
- Visual Attention Lab, Brigham & Women's Hospital, Cambridge, Massachusetts 02139, USA
| |
Collapse
|
23
|
Nartker MS, Alaoui-Soce A, Wolfe JM. Visual search errors are persistent in a laboratory analog of the incidental finding problem. COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS 2020; 5:32. [PMID: 32728864 PMCID: PMC7391453 DOI: 10.1186/s41235-020-00235-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/19/2020] [Accepted: 06/11/2020] [Indexed: 11/18/2022]
Abstract
When radiologists search for a specific target (e.g., lung cancer), they are also asked to report any other clinically significant “incidental findings” (e.g., pneumonia). These incidental findings are missed at an undesirably high rate. In an effort to understand and reduce these errors, Wolfe et al. (Cognitive Research: Principles and Implications 2:35, 2017) developed “mixed hybrid search” as a model system for incidental findings. In this task, non-expert observers memorize six targets: half of these targets are specific images (analogous to the suspected diagnosis in the clinical task). The other half are broader, categorically defined targets, like “animals” or “cars” (analogous to the less well-specified incidental findings). In subsequent search through displays for any instances of any of the targets, observers miss about one third of the categorical targets, mimicking the incidental finding problem. In the present paper, we attempted to reduce the number of errors in the mixed hybrid search task with the goal of finding methods that could be deployed in a clinical setting. In Experiments 1a and 1b, we reminded observers about the categorical targets by inserting non-search trials in which categorical targets were clearly marked. In Experiment 2, observers responded twice on each trial: once to confirm the presence or absence of the specific targets, and once to confirm the presence or absence of the categorical targets. In Experiment 3, observers were required to confirm the presence or absence of every target on every trial using a checklist procedure. Only Experiment 3 produced a marked decline in categorical target errors, but at the cost of a substantial increase in response time.
Collapse
Affiliation(s)
- Makaela S Nartker
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA.
| | - Abla Alaoui-Soce
- Department of Psychology, Princeton University, Princeton, NJ, USA
| | - Jeremy M Wolfe
- Visual Attention Laboratory, Department of Surgery, Brigham and Women's Hospital, Boston, MA, USA; Department of Ophthalmology and Radiology, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
24
|
Not looking for any trouble? Purely affective attentional settings do not induce goal-driven attentional capture. Atten Percept Psychophys 2020; 82:1150-1165. [DOI: 10.3758/s13414-019-01895-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
25
|
Gorbunova E. Prospects for using visual search tasks in modern cognitive psychology. СОВРЕМЕННАЯ ЗАРУБЕЖНАЯ ПСИХОЛОГИЯ 2020. [DOI: 10.17759/jmfp.2020090209] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The article describes the main results of modern foreign studies employing modifications of the classical visual search task, and proposes a classification of such modifications. The essence of visual search is to find target stimuli among distractors; the standard task involves finding one target stimulus, which is usually a simple object. Modifications of the standard task may include the presence of more than one target on the screen, search for more than one type of target, and variants that combine both of these modifications. These modifications of the standard task make it possible not only to study new aspects of visual attention, but also to approximate real-life tasks within laboratory studies.
Collapse
Affiliation(s)
- E.S. Gorbunova
- School of Psychology, National Research University Higher School of Economics
| |
Collapse
|
26
|
Madrid J, Hout MC. Examining the effects of passive and active strategies on behavior during hybrid visual memory search: evidence from eye tracking. Cogn Res Princ Implic 2019; 4:39. [PMID: 31549256 PMCID: PMC6757087 DOI: 10.1186/s41235-019-0191-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2018] [Accepted: 08/03/2019] [Indexed: 11/23/2022] Open
Abstract
Hybrid search requires observers to search both through a visual display and through the contents of memory in order to find designated target items. Because professional hybrid searchers such as airport baggage screeners are required to look for many items simultaneously, it is important to explore any potential strategies that may beneficially impact performance during these societally important tasks. The aim of the current study was to investigate the role that cognitive strategies play in facilitating hybrid search. We hypothesized that observers in a hybrid search task would naturally adopt a strategy in which they remained somewhat passive, allowing targets to "pop out." Alternatively, we considered the possibility that observers could adopt a strategy in which they more actively directed their attention around the visual display. In experiment 1, we compared behavioral responses during uninstructed, passive, and active hybrid search. We found that uninstructed search tended to be more active in nature, but that adopting a passive strategy led to above average performance as indicated by a combined measure of speed and accuracy called a balanced integration score (BIS). We replicated these findings in experiment 2. Additionally, we found that oculomotor behavior in passive hybrid search was characterized by longer saccades, improved attentional guidance, and an improved ability to identify items as targets or distractors (relative to active hybrid search). These results have implications for understanding hybrid visual search and the effect that strategy use has on performance and oculomotor behavior during this common, and at times societally important, task.
Collapse
Affiliation(s)
- Jessica Madrid
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, New Mexico 88003 USA
| | - Michael C. Hout
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, New Mexico 88003 USA
| |
Collapse
|
27
|
Abstract
In hybrid foraging tasks, observers search visual displays, so called patches, for multiple instances of any of several types of targets with the goal of collecting targets as quickly as possible. Here, targets were photorealistic objects. Younger and older adults collected targets by mouse clicks. They could move to the next patch whenever they decided to do so. The number of targets held in memory varied between 8 and 64 objects, and the number of items (targets and distractors) in the patches varied between 60 and 105 objects. Older adults foraged somewhat less efficiently than younger adults due to a more exploitative search strategy. When target items became depleted in a patch and search slowed down, younger adults acted according to the optimal foraging theory and moved on to the next patch when the instantaneous rate of collection was close to their average rate of collection. Older adults, by contrast, were more likely to stay longer and spend time searching for the last few targets. Within a patch, both younger and older adults tended to collect the same type of target in "runs." This behavior is more efficient than continual switching between target types. Furthermore, after correction for general age-related slowing, RT × set size functions revealed largely preserved attention and memory functions in older age. Hybrid foraging tasks share features with important real-world search tasks. Differences between younger and older observers on this task may therefore help to explain age differences in many complex search tasks of daily life.
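The patch-leaving behavior described here follows optimal foraging theory: leave the current patch when the instantaneous collection rate falls to the forager's session-average rate (the marginal value theorem). A minimal sketch of that decision rule, with a hypothetical helper name and window size, not code from the study:

```python
def should_leave_patch(collection_times, overall_rate, window=3):
    """Optimal-foraging leaving rule: leave when the instantaneous rate of
    collection within the current patch drops to the average rate across
    the whole session. collection_times are timestamps (in seconds) of
    targets collected in this patch; overall_rate is the session-average
    collection rate in targets per second."""
    if len(collection_times) < window + 1:
        return False                 # too few pickups to estimate a rate
    recent = collection_times[-(window + 1):]
    # Instantaneous rate: `window` targets collected over the recent span.
    inst_rate = window / (recent[-1] - recent[0])
    return inst_rate <= overall_rate
```

On this rule, the "more exploitative" older adults correspond to foragers who keep searching even after `inst_rate` has dropped below `overall_rate`.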
Collapse
Affiliation(s)
- Iris Wiegand
- Visual Attention Lab, Brigham and Women's Hospital
| | | | - Jeremy Wolfe
- Visual Attention Lab, Brigham and Women's Hospital
| |
Collapse
|
28
|
van Bergen G, Flecken M, Wu R. Rapid target selection of object categories based on verbs: Implications for language-categorization interactions. Psychophysiology 2019; 56:e13395. [PMID: 31115079 DOI: 10.1111/psyp.13395] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2018] [Revised: 03/13/2019] [Accepted: 04/26/2019] [Indexed: 11/29/2022]
Abstract
Although much is known about how nouns facilitate object categorization, very little is known about how verbs (e.g., posture verbs such as stand or lie) facilitate object categorization. Native Dutch speakers are a unique population to investigate this issue with because the configurational categories distinguished by staan (to stand) and liggen (to lie) are inherent in everyday Dutch language. Using an ERP component (N2pc), four experiments demonstrate that selection of posture verb categories is rapid (between 220-320 ms). The effect was attenuated, though present, when removing the perceptual distinction between categories. A similar attenuated effect was obtained in native English speakers, where the category distinction is less familiar, and when category labels were implicit for native Dutch speakers. Our results are among the first to demonstrate that category search based on verbs can be rapid, although extensive linguistic experience and explicit labels may not be necessary to facilitate categorization in this case.
Collapse
Affiliation(s)
- Geertje van Bergen
- Max Planck Institute for Psycholinguistics, Radboud University Nijmegen, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
| | - Monique Flecken
- Max Planck Institute for Psycholinguistics, Radboud University Nijmegen, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
| | - Rachel Wu
- Department of Psychology, University of California, Riverside, Riverside, California
| |
Collapse
|
29
|
Wiegand I, Wolfe JM. Age doesn't matter much: hybrid visual and memory search is preserved in older adults. AGING NEUROPSYCHOLOGY AND COGNITION 2019; 27:220-253. [PMID: 31050319 DOI: 10.1080/13825585.2019.1604941] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
We tested younger and older observers' attention and long-term memory functions in a "hybrid search" task, in which observers look through visual displays for instances of any of several types of targets held in memory. Apart from a general slowing, search efficiency did not change with age. In both age groups, reaction times increased linearly with the visual set size and logarithmically with the memory set size, with similar relative costs of increasing load (Experiment 1). We replicated the finding and further showed that performance remained comparable between age groups when familiarity cues were made irrelevant (Experiment 2) and target-context associations were to be retrieved (Experiment 3). Our findings are at variance with theories of cognitive aging that propose age-specific deficits in attention and memory. As hybrid search resembles many real-world searches, our results might be relevant to improve the ecological validity of assessing age-related cognitive decline.
Collapse
Affiliation(s)
- Iris Wiegand
- Visual Attention Lab, Brigham & Women's Hospital, Cambridge, MA, USA; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany; Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
| | - Jeremy M Wolfe
- Visual Attention Lab, Brigham & Women's Hospital, Cambridge, MA, USA; Departments of Ophthalmology & Radiology, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
30
|
Abstract
In Hybrid Foraging tasks, observers search for multiple instances of several types of target. Collecting all the dirty laundry and kitchenware out of a child's room would be a real-world example. How are such foraging episodes structured? A series of four experiments shows that selection of one item from the display makes it more likely that the next item will be of the same type. This pattern holds if the targets are defined by basic features like color and shape but not if they are defined by their identity (e.g., the letters p & d). Additionally, switching between target types during search is expensive in time, with longer response times between successive selections if the target type changes than if they are the same. Finally, the decision to leave a screen/patch for the next screen in these foraging tasks is imperfectly consistent with the predictions of optimal foraging theory. The results of these hybrid foraging studies cast new light on the ways in which prior selection history guides subsequent visual search in general.
Affiliation(s)
- Jeremy M Wolfe
- Visual Attention Laboratory, Department of Surgery, Brigham and Women's Hospital, Boston, MA, USA.
- Department of Ophthalmology and Radiology, Harvard Medical School, Boston, MA, USA.
- Visual Attention Laboratory, Department of Surgery, Brigham and Women's Hospital, 64 Sidney St. Suite. 170, Cambridge, MA, 02139-4170, USA.
- Matthew S Cain
- US Army Natick Soldier Research, Development, and Engineering Center, Natick, MA, USA
- Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA, USA
- Avigael M Aizenman
- Vision Science Department, University of California Berkeley, Berkeley, CA, USA
|
31
|
Abstract
For some real-world color searches, the target colors are not precisely known, and any item within a range of color values should be attended. Thus, a target representation that captures multiple similar colors would be advantageous. If such a multicolor search is possible, then search for two targets (e.g., Stroud, Menneer, Cave, & Donnelly, Journal of Experimental Psychology: Human Perception and Performance, 38(1), 113-122, 2012) might be guided by a target representation that included the target colors as well as the continuum of colors falling between them within a contiguous region of color space. The results of Stroud et al. (2012) suggest otherwise, however. The current set of experiments shows that guidance by a set of colors drawn from a single region of color space can be reasonably effective if the targets are depicted as specific discrete colors. Specifically, Experiments 1-3 demonstrate that search can be guided by four and even eight colors under the appropriate conditions. However, Experiment 5 gives evidence that guidance is sometimes sensitive to how informative the target preview is for the search. Experiments 6 and 7 show that a stimulus showing a continuous range of target colors is not translated into a search target representation. Thus, search can be guided by multiple discrete colors from a single region of color space, but this approach was not adopted in a search for two targets with intervening distractor colors.
|
32
|
Madrid J, Cunningham CA, Robbins A, Hout MC. You’re looking for what? Comparing search for familiar, nameable objects to search for unfamiliar, novel objects. VISUAL COGNITION 2019. [DOI: 10.1080/13506285.2019.1577318] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- Jessica Madrid
- Department of Psychology, New Mexico State University, Las Cruces, NM, USA
- Corbin A. Cunningham
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Arryn Robbins
- Department of Psychology, New Mexico State University, Las Cruces, NM, USA
- Michael C. Hout
- Department of Psychology, New Mexico State University, Las Cruces, NM, USA
|
33
|
Berggren N, Eimer M. Visual Working Memory Load Disrupts Template-guided Attentional Selection during Visual Search. J Cogn Neurosci 2018; 30:1902-1915. [DOI: 10.1162/jocn_a_01324] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Mental representations of target features (attentional templates) control the selection of candidate target objects in visual search. The question of where templates are maintained remains controversial. We employed the N2pc component as an electrophysiological marker of template-guided target selection to investigate whether and under which conditions templates are held in visual working memory (vWM). In two experiments, participants memorized one or four shapes (low vs. high vWM load) before either being tested on their memory or performing a visual search task. When targets were defined by one of two possible colors (e.g., red or green), target N2pcs were delayed under high vWM load. This suggests that the maintenance of multiple shapes in vWM interfered with the activation of color-specific search templates, supporting the hypothesis that these templates are held in vWM. This was the case despite participants always searching for the same two target colors. In contrast, the speed of target selection in a task where a single target color remained relevant throughout was unaffected by concurrent load, indicating that a constant search template for a single feature may be maintained outside vWM in a different store. In addition, early visual N1 components to search and memory test displays were attenuated under high load, suggesting a competition between external and internal attention. The size of this attenuation predicted individual vWM performance. These results provide new electrophysiological evidence for impairment of top-down attentional control mechanisms by high vWM load, demonstrating that vWM is involved in the guidance of attentional target selection during search.
|
34
|
Friedman GN, Johnson L, Williams ZM. Long-Term Visual Memory and Its Role in Learning Suppression. Front Psychol 2018; 9:1896. [PMID: 30369895 PMCID: PMC6194155 DOI: 10.3389/fpsyg.2018.01896] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2018] [Accepted: 09/18/2018] [Indexed: 11/13/2022] Open
Abstract
Long-term memory is a core aspect of human learning that permits a wide range of skills and behaviors often important for survival. While this core ability has been broadly observed for procedural and declarative memory, whether similar mechanisms subserve basic sensory or perceptual processes remains unclear. Here, we use a visual learning paradigm to show that training humans to search for common visual features in the environment leads to a persistent improvement in performance over consecutive days but, surprisingly, suppresses the subsequent ability to learn similar visual features. This suppression is reversed if the memory is prevented from consolidating, while still permitting the ability to learn multiple visual features simultaneously. These findings reveal a memory mechanism that may enable salient sensory patterns to persist in memory over prolonged durations, but which also functions to prevent false-positive detection by proactively suppressing new learning.
Affiliation(s)
- Gabriel N Friedman
- Department of Neurosurgery, Harvard Medical School, Massachusetts General Hospital, Boston, MA, United States
- Lance Johnson
- Department of Neurobiology, Harvard University, Cambridge, MA, United States
- Ziv M Williams
- Department of Neurosurgery, Harvard Medical School, Massachusetts General Hospital, Boston, MA, United States
- Harvard-MIT Health Sciences and Technology, Boston, MA, United States
- Program in Neuroscience, Harvard Medical School, Harvard University, Boston, MA, United States
|
35
|
The Time Course of Target Template Activation Processes during Preparation for Visual Search. J Neurosci 2018; 38:9527-9538. [PMID: 30242053 DOI: 10.1523/jneurosci.0409-18.2018] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2018] [Revised: 07/31/2018] [Accepted: 08/28/2018] [Indexed: 11/21/2022] Open
Abstract
Search for target objects in visual scenes is guided by mental representations of target features (attentional templates). However, it is unknown when such templates are activated during each search episode and whether this can be controlled by temporal expectations. We used electrophysiological measures to track search template activation processes in real time. In three experiments, female and male humans searched for a color-defined target object in search displays where targets were accompanied by distractors in different nontarget colors. Brief task-irrelevant color singleton probes that matched the target template were flashed rapidly (every 200 ms) throughout each block. Probes presented at times when the target template is active should capture attention, whereas probes presented at other times should not. To assess this, N2pc components were measured as markers of attentional capture, separately for probes at each successive temporal position between two search displays. Results demonstrated that search templates were active from ∼1000 ms before the arrival of the next search display, and were deactivated after each search episode, even when the preceding search display did not contain a target object. Templates were activated later when the predictable interval between search displays was increased. Results demonstrate that search templates are not continuously active but are transiently activated during the preparation for each new search episode. These activation states are regulated in a top-down fashion by temporal expectations about when an attentional template will become task-relevant.
SIGNIFICANCE STATEMENT: It is often assumed that observers prepare for a visual search task by activating mental representations of search target objects (attentional templates). However, the time course of such template activation processes is completely unknown. By using a new sequential probe presentation technique and electrophysiological measures of attentional processing, we demonstrate that target templates are rapidly activated and deactivated before and after each successive search display, and that these template activation states are tuned to observers' temporal expectations. These results provide novel insights into the temporal dynamics of cognitive control processes in visual attention. They show that attentional templates for visual search are preparatory states that are activated in a transient fashion before each new search episode.
|
36
|
Wu R, McGee B, Echiverri C, Zinszer BD. Prior knowledge of category size impacts visual search. Psychophysiology 2018; 55:e13075. [DOI: 10.1111/psyp.13075] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2017] [Revised: 01/03/2018] [Accepted: 02/19/2018] [Indexed: 11/28/2022]
Affiliation(s)
- Rachel Wu
- Department of Psychology, University of California, Riverside, Riverside, CA, USA
- Brianna McGee
- Department of Psychology, University of California, Riverside, Riverside, CA, USA
- Chelsea Echiverri
- Department of Psychology, University of California, Riverside, Riverside, CA, USA
- Benjamin D. Zinszer
- Communication Sciences and Disorders, University of Texas at Austin, Austin, TX, USA
|
37
|
Nordfang M, Wolfe JM. Guided search through memory. VISUAL COGNITION 2018. [DOI: 10.1080/13506285.2018.1439851] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Affiliation(s)
- Maria Nordfang
- Department of Neurology, Copenhagen University Hospital - Rigshospitalet, Glostrup, Denmark
- Jeremy M. Wolfe
- Departments of Ophthalmology and Radiology, Harvard Medical School, Boston, USA
- Department of Surgery, Brigham and Women’s Hospital, Cambridge, USA
|
38
|
Who should be searching? Differences in personality can affect visual search accuracy. PERSONALITY AND INDIVIDUAL DIFFERENCES 2017. [DOI: 10.1016/j.paid.2017.04.045] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
39
|
Drew T, Boettcher SEP, Wolfe JM. One visual search, many memory searches: An eye-tracking investigation of hybrid search. J Vis 2017; 17:5. [PMID: 28892812 PMCID: PMC5596794 DOI: 10.1167/17.11.5] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Suppose you go to the supermarket with a shopping list of 10 items held in memory. Your shopping expedition can be seen as a combination of visual search and memory search. This is known as "hybrid search." There is a growing interest in understanding how hybrid search tasks are accomplished. We used eye tracking to examine how manipulating the number of possible targets (the memory set size [MSS]) changes how observers (Os) search. We found that dwell time on each distractor increased with MSS, suggesting a memory search was being executed each time a new distractor was fixated. Meanwhile, although the rate of refixation increased with MSS, it was not nearly enough to suggest a strategy that involves repeatedly searching visual space for subgroups of the target set. These data provide a clear demonstration that hybrid search tasks are carried out via a "one visual search, many memory searches" heuristic in which Os examine items in the visual array once with a very low rate of refixations. For each item selected, Os activate a memory search that produces logarithmic response time increases with increased MSS. Furthermore, the percentage of distractors fixated was strongly modulated by the MSS: More items in the MSS led to a higher percentage of fixated distractors. Searching for more potential targets appears to significantly alter how Os approach the task, ultimately resulting in more eye movements and longer response times.
Affiliation(s)
- Jeremy M Wolfe
- Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
|
40
|
Wolfe JM, Alaoui Soce A, Schill HM. How did I miss that? Developing mixed hybrid visual search as a 'model system' for incidental finding errors in radiology. COGNITIVE RESEARCH-PRINCIPLES AND IMPLICATIONS 2017; 2:35. [PMID: 28890920 PMCID: PMC5569644 DOI: 10.1186/s41235-017-0072-5] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/01/2017] [Accepted: 07/10/2017] [Indexed: 12/21/2022]
Abstract
In a real-world search, it can be important to keep ‘an eye out’ for items of interest that are not the primary subject of the search. For instance, you might look for the exit sign on the freeway, but you should also respond to the armadillo crossing the road. In medicine, these items are known as “incidental findings,” findings of possible clinical significance that were not the main object of search. These errors (e.g., missing a broken rib while looking for pneumonia) have medical consequences for the patient and potential legal consequences for the physician. Here we report three experiments intended to develop a ‘model system’ for incidental findings: a paradigm that could be used in the lab to develop strategies to reduce incidental finding errors in the clinic. All three experiments involve ‘hybrid’ visual search for any of several targets held in memory. In this ‘mixed hybrid search task,’ observers search for any of three specific targets (e.g., this rabbit, this truck, and this spoon) and three categorical targets (e.g., masks, furniture, and plants). The hypothesis is that the specific items are like the specific goals of a real-world search and the categorical targets are like the less well-defined incidental findings that might be present and that should be reported. In all these experiments, varying target prevalence, number of targets, etc., the categorical targets are missed at a much higher rate than the specific targets. This paradigm shows promise as a model of the incidental finding problem.
Affiliation(s)
- Jeremy M Wolfe
- Ophthalmology and Radiology Departments, Harvard Medical School, 64 Sidney St. Suite 170, Cambridge, MA 02139, USA
- Visual Attention Lab, Brigham and Women's Hospital, 64 Sidney St. Suite 170, Cambridge, MA 02139, USA
- Abla Alaoui Soce
- Visual Attention Lab, Brigham and Women's Hospital, 64 Sidney St. Suite 170, Cambridge, MA 02139, USA
- Hayden M Schill
- Visual Attention Lab, Brigham and Women's Hospital, 64 Sidney St. Suite 170, Cambridge, MA 02139, USA
|
41
|
Abstract
The items on a memorized grocery list are not relevant in every aisle; for example, it is useless to search for the cabbage in the cereal aisle. It might be beneficial if one could mentally partition the list so only the relevant subset was active, so that vegetables would be activated in the produce section. In four experiments, we explored observers' abilities to partition memory searches. For example, if observers held 16 items in memory, but only eight of the items were relevant, would response times resemble a search through eight or 16 items? In Experiments 1a and 1b, observers were not faster for the partition set; however, they suffered relatively small deficits when "lures" (items from the irrelevant subset) were presented, indicating that they were aware of the partition. In Experiment 2 the partitions were based on semantic distinctions, and again, observers were unable to restrict search to the relevant items. In Experiments 3a and 3b, observers attempted to remove items from the list one trial at a time but did not speed up over the course of a block, indicating that they also could not limit their memory searches. Finally, Experiments 4a, 4b, 4c, and 4d showed that observers were able to limit their memory searches when a subset was relevant for a run of trials. Overall, observers appear to be unable or unwilling to partition memory sets from trial to trial, yet they are capable of restricting search to a memory subset that remains relevant for several trials. This pattern is consistent with a cost to switching between currently relevant memory items.
|
42
|
|
43
|
Ort E, Fahrenfort JJ, Olivers CNL. Lack of Free Choice Reveals the Cost of Having to Search for More Than One Object. Psychol Sci 2017; 28:1137-1147. [PMID: 28661761 PMCID: PMC5659593 DOI: 10.1177/0956797617705667] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
It is debated whether people can actively search for more than one object or whether this results in switch costs. Using a gaze-contingent eye-tracking paradigm, we revealed a crucial role for cognitive control in multiple-target search. We instructed participants to simultaneously search for two target objects presented among distractors. In one condition, both targets were available, which gave the observer free choice of what to search for and allowed for proactive control. In the other condition, only one of the two targets was available, so that the choice was imposed, and a reactive mechanism would be required. No switch costs emerged when target choice was free, but switch costs emerged reliably when targets were imposed. Bridging contradictory findings, the results are consistent with models of visual selection in which only one attentional template actively drives selection and in which the efficiency of switching targets depends on the type of cognitive control allowed for by the environment.
Affiliation(s)
- Eduard Ort
- Department of Experimental and Applied Psychology, Institute for Brain and Behaviour, Vrije Universiteit Amsterdam
- Johannes J Fahrenfort
- Department of Experimental and Applied Psychology, Institute for Brain and Behaviour, Vrije Universiteit Amsterdam
- Christian N L Olivers
- Department of Experimental and Applied Psychology, Institute for Brain and Behaviour, Vrije Universiteit Amsterdam
|
44
|
Horowitz TS. Prevalence in Visual Search: From the Clinic to the Lab and Back Again. JAPANESE PSYCHOLOGICAL RESEARCH 2017. [DOI: 10.1111/jpr.12153] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
45
|
Abstract
Suppose you were monitoring a group of people in order to determine if any one of them did something suspicious (e.g., putting down a bag) or if any two interacted in a suspicious manner (e.g., trading bags). How large a group could you monitor successfully? This paper reports on six experiments in which observers monitor a group of entities, watching for an event. Whether the event was performed by a single entity or was an interaction between a pair, the capacity for event monitoring was two to three items. This was lower than the multiple object tracking capacity for the same stimuli (approximately six items). Capacity was essentially the same whether entities were identical circles or unique cartoon animals; nor was capacity changed by an added requirement to identify the entities involved in an event. Event monitoring appears to be related to, but not identical to, multiple object tracking.
Affiliation(s)
- Chia-Chien Wu
- Harvard Medical School, Boston, USA
- Visual Attention Lab, Brigham and Women’s Hospital, Boston, USA
- Jeremy M. Wolfe
- Harvard Medical School, Boston, USA
- Visual Attention Lab, Brigham and Women’s Hospital, Boston, USA
|
46
|
Horstmann G, Ansorge U. Surprise capture and inattentional blindness. Cognition 2016; 157:237-249. [DOI: 10.1016/j.cognition.2016.09.005] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2016] [Revised: 09/01/2016] [Accepted: 09/10/2016] [Indexed: 10/21/2022]
|
47
|
Searching while loaded: Visual working memory does not interfere with hybrid search efficiency but hybrid search uses working memory capacity. Psychon Bull Rev 2016; 23:201-12. [PMID: 26055755 DOI: 10.3758/s13423-015-0874-8] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In "hybrid search" tasks, such as finding items on a grocery list, one must search the scene for targets while also searching the list in memory. How is the representation of a visual item compared with the representations of items in the memory set? Predominant theories would propose a role for visual working memory (VWM) either as the site of the comparison or as a conduit between visual and memory systems. In seven experiments, we loaded VWM in different ways and found little or no effect on hybrid search performance. However, the presence of a hybrid search task did reduce the measured capacity of VWM by a constant amount regardless of the size of the memory or visual sets. These data are broadly consistent with an account in which VWM must dedicate a fixed amount of its capacity to passing visual representations to long-term memory for comparison to the items in the memory set. The data cast doubt on models in which the search template resides in VWM or where memory set item representations are moved from LTM through VWM to earlier areas for comparison to visual items.
|
48
|
Giammarco M, Paoletti A, Guild EB, Al-Aidroos N. Attentional capture by items that match episodic long-term memory representations. VISUAL COGNITION 2016. [DOI: 10.1080/13506285.2016.1195470] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
49
|
|
50
|
Wu R, Pruitt Z, Runkle M, Scerif G, Aslin RN. A neural signature of rapid category-based target selection as a function of intra-item perceptual similarity, despite inter-item dissimilarity. Atten Percept Psychophys 2016; 78:749-60. [PMID: 26732265 PMCID: PMC4811727 DOI: 10.3758/s13414-015-1039-6] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Previous work on visual search has suggested that only a single attentional template can be prioritized at any given point in time. Grouping features into objects and objects into categories can facilitate search performance by maximizing the amount of information carried by an attentional template. From infancy to adulthood, earlier studies on perceptual similarity have shown that consistent features increase the likelihood of grouping features into objects (e.g., Quinn & Bhatt, Psychological Science, 20, 933-938, 2009) and objects into categories (e.g., shape bias; Landau, Smith, & Jones, Cognitive Development, 3, 299-321, 1988). Here we asked whether lower-level, intra-item similarity facilitates higher-level categorization, despite inter-item dissimilarity. Adults participated in four visual search tasks in which targets were defined as either one item (a specific alien) or a category (any alien) with either similar features (e.g., circle belly shape and circle back spikes) or dissimilar features (e.g., circle belly shape and triangle back spikes). Using behavioral and neural measures (i.e., the N2pc event-related potential component, which typically emerges 200 ms poststimulus), we found that intra-item feature similarity facilitated categorization, despite dissimilar features across the category items. Our results demonstrate that feature similarity builds novel categories and activates a task-appropriate abstract categorical search template. In other words, grouping at the lower, item level facilitates grouping at the higher, category level, which allows us to overcome efficiency limitations in visual search.
Affiliation(s)
- Rachel Wu
- Department of Psychology, University of California, Riverside, CA, USA.
- Zoe Pruitt
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Megan Runkle
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Gaia Scerif
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Richard N Aslin
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
|