1
Děchtěrenko F, Lukavský J, Adámek P. Low detail retention in visual memory despite focused effort. Q J Exp Psychol (Hove) 2025:17470218251335636. PMID: 40205740; DOI: 10.1177/17470218251335636.
Abstract
Humans can recognize a vast number of previously seen images, yet their ability to recall fine details from visual memory remains limited. This study investigated whether prolonged study of a small number of stimuli could improve recognition accuracy for scene details. We developed a novel experimental paradigm that allowed repeated testing of memory for individual images, letting us query each image multiple times and measure which parts of a scene were remembered and which were forgotten. Our results revealed that participants struggled to achieve high accuracy in detail-oriented memory tasks, even with extensive effort and focus. Follow-up experiments explored potential factors contributing to this limitation, shedding light on why memorizing fine details is inherently difficult. These findings underscore the challenges of achieving high-detail long-term visual memory for complex scenes: although we can memorize large numbers of scenes with low fidelity, we cannot memorize the details of even a small number of scenes.
Affiliation(s)
- Filip Děchtěrenko
- Institute of Psychology, Czech Academy of Sciences, Prague, Czech Republic
- Jiří Lukavský
- Institute of Psychology, Czech Academy of Sciences, Prague, Czech Republic
- Petr Adámek
- National Institute of Mental Health, Klecany, Czech Republic
- Third Faculty of Medicine, Charles University, Prague, Czech Republic
2
Daggett EW, Hout MC. A tutorial review on methods for collecting similarity judgments from human observers. Atten Percept Psychophys 2025; 87:737-751. PMID: 40069479; DOI: 10.3758/s13414-025-03044-3.
Abstract
Similarity is a central concept in the study of cognition, having been identified as an explanatory factor in the dynamics of myriad psychological phenomena. The collection of similarity judgments, however, can be a difficult, laborious, and time-consuming process. There is presently a vast and diverse array of methodologies applied throughout the psychological sciences from which to gather judgments of similarity perceptions, and each carries its own relative advantages and disadvantages. Each method may be suitable for a specific set of contexts and stimuli but be inappropriate for others. This tutorial review is meant to serve as a guided tour of common similarity judgment-gathering methods currently utilized in the psychological sciences, and to provide an overview of how and when researchers should leverage them.
Affiliation(s)
- Eben W Daggett
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Michael C Hout
- Department of Psychology, New Mexico State University, P.O. Box 30001 / MSC 3452, Las Cruces, NM, 88003, USA
- Department of Kinesiology, New Mexico State University, Las Cruces, NM, USA
3
Kyle-Davidson C, Solis O, Robinson S, Tan RTW, Evans KK. Scene complexity and the detail trace of human long-term visual memory. Vision Res 2025; 227:108525. PMID: 39644707; DOI: 10.1016/j.visres.2024.108525.
Abstract
Humans can remember a vast number of scene images, an ability often attributed to encoding only low-fidelity gist traces of each scene. Yet studies show that a surprising amount of detail is retained for each scene image, allowing scenes to be distinguished from highly similar in-category distractors. The gist trace of an image can be captured relatively easily through both computational and behavioural techniques, but capturing detail is much harder. While detail can be broadly estimated at the categorical level (e.g., man-made scenes are more complex than natural ones), there is a lack of both ground-truth detail data at the sample level and a way to operationalise detail for measurement purposes. Here, through three studies, we investigate whether the perceptual complexity of scenes can serve as a suitable analogue for the detail present in a scene, and hence whether complexity can be used to determine the relationship between scene detail and visual long-term memory for scenes. First, we examine this relationship directly using the VISCHEMA datasets to determine whether the perceived complexity of a scene interacts with memorability, finding a significant positive correlation between complexity and memory, in contrast to the U-shaped relation often hypothesised in the literature. In the second study, we model complexity by artificial means and find that even predicted measures of complexity correlate with the ground-truth memorability of a scene, indicating that complexity and memorability cannot be easily disentangled. Finally, we investigate how cognitive load modulates the influence of scene complexity on image memorability. Together, the findings indicate that complexity and memorability do vary non-linearly, but generally only at the extremes of the image complexity range. The effect of complexity on memory closely mirrors previous findings that detail enhances memory, and suggests that complexity is a suitable analogue for detail in visual long-term scene memory.
Affiliation(s)
- Oscar Solis
- University of York, Dept. of Psychology, York, YO10 5NA, UK
- Karla K Evans
- University of York, Dept. of Psychology, York, YO10 5NA, UK
4
Cárdenas-Miller N, O'Donnell RE, Tam J, Wyble B. Surprise! Draw the scene: Visual recall reveals poor incidental working memory following visual search in natural scenes. Mem Cognit 2025; 53:19-32. PMID: 37770695; DOI: 10.3758/s13421-023-01465-9.
Abstract
Searching within natural scenes can induce incidental encoding of information about the scene and the target, particularly when the scene is complex or repeated. However, recent evidence from attribute amnesia (AA) suggests that in some situations, searchers can find a target without building a robust incidental memory of its task-relevant features. Through drawing-based visual recall and an AA search task, we investigated whether search in natural scenes necessitates memory encoding. Participants repeatedly searched for and located an easily detected item in novel scenes for numerous trials before being unexpectedly prompted to draw either the entire scene (Experiment 1) or their search target (Experiment 2) directly after viewing the search image. Naïve raters assessed the similarity of the drawings to the original information. We found that surprise-trial drawings of the scene and search target were both poorly recognizable, but the same drawers produced highly recognizable drawings on the next trial, when they had an expectation to draw the image. Experiment 3 further showed that the poor surprise-trial memory could not merely be attributed to interference from the surprising event. Our findings suggest that even for searches done in natural scenes, it is possible to locate a target without creating a robust memory of either the target or the scene it was in, even if it was attended just a few seconds prior. This disconnection between attention and memory might reflect a fundamental property of cognitive computations designed to optimize task performance and minimize resource use.
Affiliation(s)
- Ryan E O'Donnell
- Pennsylvania State University, University Park, PA, USA
- Drexel University, Philadelphia, PA, USA
- Joyce Tam
- Pennsylvania State University, University Park, PA, USA
- Brad Wyble
- Pennsylvania State University, University Park, PA, USA
5
Sefranek M, Zokaei N, Draschkow D, Nobre AC. Comparing the impact of contextual associations and statistical regularities in visual search and attention orienting. PLoS One 2024; 19:e0302751. PMID: 39570820; PMCID: PMC11581329; DOI: 10.1371/journal.pone.0302751.
Abstract
During visual search, we quickly learn to attend to an object's likely location. Research has shown that this process can be guided by learning target locations based on consistent spatial contextual associations or other statistical regularities. Here, we tested how different types of associations guide learning and the utilisation of established memories for different purposes. Participants learned contextual associations or rule-like statistical regularities that predicted target locations within different scenes. The consequences of this learning for subsequent performance were then evaluated on attention-orienting and memory-recall tasks. Participants demonstrated facilitated attention-orienting and recall performance based on both contextual associations and statistical regularities. Contextual associations facilitated attention orienting with a different time course compared to statistical regularities. Benefits to memory-recall performance depended on the alignment between the learned association or regularity and the recall demands. The distinct patterns of behavioural facilitation by contextual associations and statistical regularities show how different forms of long-term memory may influence neural information processing through different modulatory mechanisms.
Affiliation(s)
- Marcus Sefranek
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Nahid Zokaei
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Dejan Draschkow
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Anna C. Nobre
- Brain and Cognition Lab, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
- Wu Tsai Institute, Yale University, New Haven, CT, United States of America
- Department of Psychology, Yale University, New Haven, CT, United States of America
6
White B, Daggett E, Hout MC. Similarity ratings for basic-level categories from the Nosofsky et al. (2018) database of rock images. Front Psychol 2024; 15:1438901. PMID: 39582996; PMCID: PMC11582978; DOI: 10.3389/fpsyg.2024.1438901.
Affiliation(s)
- Bryan White
- Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Eben Daggett
- Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Michael C Hout
- Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Department of Kinesiology, New Mexico State University, Las Cruces, NM, United States
7
Saltzmann SM, Eich B, Moen KC, Beck MR. Activated long-term memory and visual working memory during hybrid visual search: Effects on target memory search and distractor memory. Mem Cognit 2024; 52:2156-2171. PMID: 38528298; DOI: 10.3758/s13421-024-01556-1.
Abstract
In hybrid visual search, observers must maintain multiple target templates and subsequently search for any one of those targets. If the number of potential target templates exceeds visual working memory (VWM) capacity, then the target templates are assumed to be maintained in activated long-term memory (aLTM). Observers must search the array for potential targets (visual search), as well as search through memory (target memory search). Increasing the target memory set size reduces accuracy, increases search response times (RT), and increases dwell time on distractors. However, the extent of observers' memory for distractors during hybrid search is largely unknown. In the current study, the impact of hybrid search on target memory search (measured by dwell time on distractors, false alarms, and misses) and distractor memory (measured by distractor revisits and recognition memory of recently viewed distractors) was measured. Specifically, we aimed to better understand how changes in behavior during hybrid search impacts distractor memory. Increased target memory set size led to an increase in search RTs, distractor dwell times, false alarms, and target identification misses. Increasing target memory set size increased revisits to distractors, suggesting impaired distractor location memory, but had no effect on a two-alternative forced-choice (2AFC) distractor recognition memory test presented during the search trial. The results from the current study suggest a lack of interference between memory stores maintaining target template representations (aLTM) and distractor information (VWM). Loading aLTM with more target templates does not impact VWM for distracting information.
Affiliation(s)
- Stephanie M Saltzmann
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
- Brandon Eich
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
- Katherine C Moen
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
- Department of Psychology, University of Nebraska at Kearney, 2504 9th Ave, Kearney, NE, 68849, USA
- Melissa R Beck
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
8
Aivar MP, Li CL, Tong MH, Kit DM, Hayhoe MM. Knowing where to go: Spatial memory guides eye and body movements in a naturalistic visual search task. J Vis 2024; 24:1. PMID: 39226069; PMCID: PMC11373708; DOI: 10.1167/jov.24.9.1.
Abstract
Most research on visual search has used simple tasks presented on a computer screen. However, in natural situations visual search almost always involves eye, head, and body movements in a three-dimensional (3D) environment. The different constraints imposed by these two types of search tasks might explain some of the discrepancies in our understanding concerning the use of memory resources and the role of contextual objects during search. To explore this issue, we analyzed a visual search task performed in an immersive virtual reality apartment. Participants searched for a series of geometric 3D objects while eye movements and head coordinates were recorded. Participants explored the apartment to locate target objects whose location and visibility were manipulated. For objects with reliable locations, we found that repeated searches led to a decrease in search time and number of fixations and to a reduction of errors. Searching for those objects that had been visible in previous trials but were only tested at the end of the experiment was also easier than finding objects for the first time, indicating incidental learning of context. More importantly, we found that body movements showed changes that reflected memory for target location: trajectories were shorter and movement velocities were higher, but only for those objects that had been searched for multiple times. We conclude that memory of 3D space and target location is a critical component of visual search and also modifies movement kinematics. In natural search, memory is used to optimize movement control and reduce energetic costs.
Affiliation(s)
- M Pilar Aivar
- Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
- https://www.psicologiauam.es/aivar/
- Chia-Ling Li
- Institute of Neuroscience, The University of Texas at Austin, Austin, TX, USA
- Present address: Apple Inc., Cupertino, California, USA
- Matthew H Tong
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
- Present address: IBM Research, Cambridge, Massachusetts, USA
- Dmitry M Kit
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
- Present address: F5, Boston, Massachusetts, USA
- Mary M Hayhoe
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
9
Zsido AN, Hout MC, Hernandez M, White B, Polák J, Kiss BL, Godwin HJ. No evidence of attentional prioritization for threatening targets in visual search. Sci Rep 2024; 14:5651. PMID: 38454142; PMCID: PMC10920919; DOI: 10.1038/s41598-024-56265-1.
Abstract
Throughout human evolutionary history, snakes have been associated with danger and threat. Research has shown that snakes are prioritized by our attentional system, despite many of us rarely encountering them in our daily lives. We conducted two high-powered, pre-registered experiments (total N = 224) manipulating target prevalence to understand this heightened prioritization of threatening targets. Target prevalence refers to the proportion of trials wherein a target is presented; reductions in prevalence consistently reduce the likelihood that targets will be found. We reasoned that snake targets in visual search should experience weaker effects of low target prevalence compared to non-threatening targets (rabbits) because they should be prioritized by searchers despite appearing rarely. In both experiments, we found evidence of classic prevalence effects but (contrasting prior work) we also found that search for threatening targets was slower and less accurate than for nonthreatening targets. This surprising result is possibly due to methodological issues common in prior studies, including comparatively smaller sample sizes, fewer trials, and a tendency to exclusively examine conditions of relatively high prevalence. Our findings call into question accounts of threat prioritization and suggest that prior attention findings may be constrained to a narrow range of circumstances.
Affiliation(s)
- Andras N Zsido
- Institute of Psychology, University of Pécs, 6 Ifjusag Street, Pécs, 7624, Baranya, Hungary
- Szentágothai Research Centre, University of Pécs, Pécs, Hungary
- Michael C Hout
- Department of Psychology, New Mexico State University, Las Cruces, USA
- Marko Hernandez
- Department of Psychology, New Mexico State University, Las Cruces, USA
- Bryan White
- Department of Psychology, New Mexico State University, Las Cruces, USA
- Jakub Polák
- Department of Economy and Management, Ambis University, Prague, Czech Republic
- Faculty of Science, Charles University, Prague, Czech Republic
- Botond L Kiss
- Institute of Psychology, University of Pécs, 6 Ifjusag Street, Pécs, 7624, Baranya, Hungary
- Hayward J Godwin
- School of Psychology, University of Southampton, Southampton, UK
10
Moriya J. Long-term memory for distractors: Effects of involuntary attention from working memory. Mem Cognit 2024; 52:401-416. PMID: 37768481; DOI: 10.3758/s13421-023-01469-5.
Abstract
In a visual search task, attention to task-irrelevant distractors impedes search performance. But is it maladaptive for future performance? Here, I show that distractors attended during a visual search task were better remembered in long-term memory (LTM) on a subsequent surprise recognition task than non-attended distractors. In four experiments, participants performed a visual search task using real-world objects of a single color. They encoded a color in working memory (WM) during the task; because each object had a different color, participants directed their attention to the distractor whose color matched the WM content. Then, in the surprise recognition task, participants indicated whether an object had been shown in the earlier visual search task, regardless of its color. Attended distractors were remembered better in LTM than non-attended distractors (Experiments 1 and 2). Moreover, the more participants directed their attention to distractors, the better they explicitly remembered them. Participants did not explicitly remember the color of the attended distractors (Experiment 3) but did remember integrated object-and-color information (Experiment 4). When the color of a distractor in the recognition task mismatched its color in the visual search task, LTM decreased relative to color-matched distractors. These results suggest that attention to distractors impairs search for a target but helps in remembering those distractors in LTM. When task-irrelevant distractors become task-relevant in the future, attention to them becomes beneficial.
Affiliation(s)
- Jun Moriya
- Faculty of Sociology, Kansai University, 3-3-35 Yamate-cho, Suita-shi, Osaka, Japan
11
How does searching for faces among similar-looking distractors affect distractor memory? Mem Cognit 2023:10.3758/s13421-023-01405-7. PMID: 36849759; DOI: 10.3758/s13421-023-01405-7.
Abstract
Prior research has shown that searching for multiple targets in a visual search task enhances distractor memory in a subsequent recognition test. Three non-mutually exclusive accounts have been offered to explain this phenomenon. The mental comparison hypothesis states that searching for multiple targets requires participants to make more mental comparisons between the targets and the distractors, which enhances distractor memory. The attention allocation hypothesis states that participants allocate more attention to distractors because a multiple-target search cue leads them to expect a more difficult search. Finally, the partial match hypothesis states that searching for multiple targets increases the amount of featural overlap between targets and distractors, which necessitates greater attention in order to reject each distractor. In two experiments, we examined these hypotheses by manipulating visual working memory (VWM) load and target-distractor similarity of AI-generated faces in a visual search (i.e., RSVP) task. Distractor similarity was manipulated using a multidimensional scaling model constructed from facial landmarks and other metadata of each face. In both experiments, distractors from multiple-target searches were recognized better than distractors from single-target searches. Experiment 2 additionally revealed that increased target-distractor similarity during search improved distractor recognition memory, consistent with the partial match hypothesis.
12
Zhang Q, Luo C, Ngetich R, Zhang J, Jin Z, Li L. Visual Selective Attention P300 Source in Frontal-Parietal Lobe: ERP and fMRI Study. Brain Topogr 2022; 35:636-650. PMID: 36178537; DOI: 10.1007/s10548-022-00916-x.
Abstract
Visual selective attention can be divided into bottom-up and top-down attention, and different selective attention tasks involve different modes of attentional control: the pop-out task relies more on bottom-up attention, whereas the search task relies more on top-down attention. The P300, a positive potential generated by the brain 300-600 ms after a stimulus, reflects attentional processing, yet there is no consensus on its source. The aim of the present study was to localize the source of the P300 elicited by different forms of visual selective attention. We recorded the P300 elicited by pop-out and search tasks in thirteen participants using event-related potentials (ERP), and measured brain activation during the same tasks in twenty-six participants using functional magnetic resonance imaging (fMRI). We then analyzed the sources of the P300 by integrating ERP and fMRI, combining high temporal resolution with high spatial resolution. ERP results indicated that the pop-out task induced a larger P300 than the search task. The P300 induced by both tasks was distributed over the frontal and parietal lobes, with the pop-out P300 mainly at the parietal lobe and the search P300 mainly at the frontal lobe. Further ERP-fMRI integration analysis showed that the neural sources of the P300 difference were the right precentral gyrus, left superior frontal gyrus (medial orbital), left middle temporal gyrus, left rolandic operculum, right postcentral gyrus, and left angular gyrus. Our study suggests that the frontal and parietal lobes contribute to the P300 component of visual selective attention.
Affiliation(s)
- Qiuzhu Zhang
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Cimei Luo
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Ronald Ngetich
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Junjun Zhang
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Zhenlan Jin
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Ling Li
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
13
Marian V, Hayakawa S, Schroeder SR. Memory after visual search: Overlapping phonology, shared meaning, and bilingual experience influence what we remember. Brain Lang 2021; 222:105012. PMID: 34464828; PMCID: PMC8554070; DOI: 10.1016/j.bandl.2021.105012.
Abstract
How we remember the things that we see can be shaped by our prior experiences. Here, we examine how linguistic and sensory experiences interact to influence visual memory. Objects in a visual search that shared phonology (cat-cast) or semantics (dog-fox) with a target were later remembered better than unrelated items. Phonological overlap had a greater influence on memory when targets were cued by spoken words, while semantic overlap had a greater effect when targets were cued by characteristic sounds. The influence of overlap on memory varied as a function of individual differences in language experience: greater bilingual experience was associated with a decreased impact of overlap on memory. We conclude that phonological and semantic features of objects influence memory differently depending on individual differences in language experience, guiding not only what we initially look at, but also what we later remember.
Affiliation(s)
- Viorica Marian
- Department of Communication Sciences and Disorders, Northwestern University, 2240 North Campus Drive, Evanston, IL 60208, United States
- Sayuri Hayakawa
- Department of Communication Sciences and Disorders, Northwestern University, 2240 North Campus Drive, Evanston, IL 60208, United States
- Scott R Schroeder
- Department of Speech, Language, Hearing Sciences, Hofstra University, 110, Hempstead, NY 11549, United States
14
Flexible attention allocation dynamically impacts incidental encoding in prospective memory. Mem Cognit 2021; 50:112-128. PMID: 34184211; DOI: 10.3758/s13421-021-01199-6.
Abstract
Remembering to fulfill an intention at a later time often requires people to monitor the environment for cues that it is time to act. This monitoring involves the strategic allocation of attentional resources, ramping attention up more in some contexts than others. In addition to interfering with ongoing task performance, flexibly shifting attention may affect whether task-irrelevant information is later remembered. In the present investigation, we manipulated contextual expectations in event-related prospective memory (PM) to examine the consequences of flexible attention allocation on incidental memory. Across two experiments, participants completed a color-matching task while monitoring for ill-defined (Experiment 1) or specific (Experiment 2) PM targets. To manipulate contextual expectations, some participants were explicitly told about the trial types in which PM targets could (or not) appear, while others were given less precise or no expectations. Across experiments, participants' color-matching decisions were slower in high-expectation trials, relative to trials when targets were not expected. Additionally, participants had better incidental memory for PM-irrelevant items from high-expectation trials, but only when they received explicit contextual expectations. These results confirm that participants flexibly allocate attention based on explicit trial-by-trial expectations. Furthermore, the present study indicates that greater attention to item identity yields better incidental memory even for PM-irrelevant items, irrespective of processing time.
15
Kristjánsson Á, Draschkow D. Keeping it real: Looking beyond capacity limits in visual cognition. Atten Percept Psychophys 2021; 83:1375-1390. [PMID: 33791942 PMCID: PMC8084831 DOI: 10.3758/s13414-021-02256-7] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/23/2020] [Indexed: 11/23/2022]
Abstract
Research within visual cognition has made tremendous strides in uncovering the basic operating characteristics of the visual system by reducing the complexity of natural vision to artificial but well-controlled experimental tasks and stimuli. This reductionist approach has for example been used to assess the basic limitations of visual attention, visual working memory (VWM) capacity, and the fidelity of visual long-term memory (VLTM). The assessment of these limits is usually made in a pure sense, irrespective of goals, actions, and priors. While it is important to map out the bottlenecks our visual system faces, we focus here on selected examples of how such limitations can be overcome. Recent findings suggest that during more natural tasks, capacity may be higher than reductionist research suggests and that separable systems subserve different actions, such as reaching and looking, which might provide important insights about how pure attentional or memory limitations could be circumvented. We also review evidence suggesting that the closer we get to naturalistic behavior, the more we encounter implicit learning mechanisms that operate "for free" and "on the fly." These mechanisms provide a surprisingly rich visual experience, which can support capacity-limited systems. We speculate whether natural tasks may yield different estimates of the limitations of VWM, VLTM, and attention, and propose that capacity measurements should also pass the real-world test within naturalistic frameworks. Our review highlights various approaches for this and suggests that our understanding of visual cognition will benefit from incorporating the complexities of real-world cognition in experimental approaches.
Affiliation(s)
- Árni Kristjánsson
- School of Health Sciences, University of Iceland, Reykjavík, Iceland.
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia.
- Dejan Draschkow
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.
16
Lavelle M, Alonso D, Luria R, Drew T. Visual working memory load plays limited to no role in encoding distractor objects during visual search. Vis Cogn 2021. [DOI: 10.1080/13506285.2021.1914256] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Affiliation(s)
- Mark Lavelle
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- David Alonso
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Roy Luria
- The School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Trafton Drew
- Department of Psychology, University of Utah, Salt Lake City, UT, USA