1
Shakerian F, Kushki R, Pashkam MV, Dehaqani MRA, Esteky H. Heterogeneity in Category Recognition across the Visual Field. eNeuro 2025; 12:ENEURO.0331-24.2024. [PMID: 39788731 PMCID: PMC11772044 DOI: 10.1523/eneuro.0331-24.2024]
Abstract
Visual information from extrafoveal locations is important for visual search, saccadic eye movement control, and spatial attention allocation. Our everyday sensory experience with visual object categories varies across different parts of the visual field, which may produce location-contingent variations in visual object recognition. We investigated this possibility with a two-alternative forced-choice object category recognition task using animal body and chair images. The images, presented at various levels of visual ambiguity, appeared at the fovea and at different extrafoveal locations along the vertical and horizontal meridians. We found heterogeneous body and chair category recognition across the visual field. Specifically, while recognition performance for bodies and chairs presented at the fovea was similar, it varied across extrafoveal locations. The largest difference was observed when body and chair images were presented in the lower-left and upper-right visual fields, respectively. The lower-field bias for body recognition and the upper-field bias for chair recognition were observed particularly at low and high stimulus signal levels, respectively. Finally, when subjects' performance was adjusted for a potential location-contingent decision bias in category recognition by subtracting category detection rates in the full-noise condition, location-dependent category recognition was observed only for the body category. These results suggest a heterogeneous body recognition bias across the visual field, potentially due to more frequent exposure of the lower visual field to body stimuli.
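The bias adjustment described in this abstract (subtracting the full-noise response rate per location) can be sketched as follows. The hit rates, locations, and signal levels below are invented for illustration only; they are not the paper's data.

```python
import numpy as np

# Hypothetical "body" hit rates: rows = visual-field locations,
# columns = increasing stimulus signal levels (invented numbers).
locations = ["fovea", "lower-left", "upper-right"]
body_hits = np.array([
    [0.55, 0.75, 0.92],   # fovea
    [0.62, 0.84, 0.94],   # lower-left
    [0.48, 0.70, 0.90],   # upper-right
])

# Proportion of "body" responses in the full-noise (0% signal) condition,
# taken as an estimate of each location's decision bias.
full_noise_body = np.array([0.50, 0.55, 0.45])

# Bias-adjusted recognition: subtract the full-noise rate per location,
# so that any remaining location differences reflect recognition rather
# than a tendency to report "body" at that location.
adjusted = body_hits - full_noise_body[:, None]
```

With this correction, a location that merely says "body" more often in pure noise no longer looks like a location with better body recognition.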
Affiliation(s)
- Farideh Shakerian
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran 1956836613, Iran
- Department of Brain and Cognitive Sciences, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran 141554364, Iran
- Pasargad Institute for Advanced Innovative Solutions (PIAIS), Tehran 1991633357, Iran
- Roxana Kushki
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences (IPM), Tehran 1956836613, Iran
- Maryam Vaziri Pashkam
- Movement and Visual Perception Lab, Department of Psychological and Brain Sciences, University of Delaware, Newark, Delaware 19711
- Mohammad-Reza A Dehaqani
- Department of Brain and Cognitive Sciences, Cell Science Research Center, Royan Institute for Stem Cell Biology and Technology, ACECR, Tehran 141554364, Iran
- School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran 1439957131, Iran
- Hossein Esteky
- Pasargad Institute for Advanced Innovative Solutions (PIAIS), Tehran 1991633357, Iran
- Research Group for Brain and Cognitive Science, Shahid Beheshti Medical University, Tehran 1983969411, Iran
2
Keshvari S, Wijntjes MWA. Peripheral material perception. J Vis 2024; 24:13. [PMID: 38625088 PMCID: PMC11033595 DOI: 10.1167/jov.24.4.13]
Abstract
Humans can rapidly identify materials, such as wood or leather, even within a complex visual scene. Given a single image, one can easily identify the underlying "stuff," even though a given material can have highly variable appearance; fabric comes in unlimited variations of shape, pattern, color, and smoothness, yet we have little trouble categorizing it as fabric. What visual cues do we use to determine material identity? Prior research suggests that simple "texture" features of an image, such as the power spectrum, capture information about material properties and identity. Few studies, however, have tested richer and biologically motivated models of texture. We compared baseline material classification performance to performance with synthetic textures generated from the Portilla-Simoncelli model and several common image degradations. The textures retain statistical information but are otherwise random. We found that performance with textures and most degradations was well below baseline, suggesting insufficient information to support foveal material perception. Interestingly, modern research suggests that peripheral vision might use a statistical, texture-like representation. In a second set of experiments, we found that peripheral performance is more closely predicted by texture and other image degradations. These findings delineate the nature of peripheral material classification.
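One of the simple "texture" features this abstract mentions, the power spectrum, can be isolated with phase scrambling: keep an image's Fourier amplitude spectrum and randomize its phases. This is a much simpler stand-in for the full Portilla-Simoncelli model, and the noise image below is purely illustrative.

```python
import numpy as np

def phase_scramble(img, rng):
    """Keep an image's Fourier amplitude spectrum, randomize its phases.

    The random phase field is taken from the FFT of a real noise image so
    that it is Hermitian-symmetric and the result stays real-valued.
    """
    amplitude = np.abs(np.fft.fft2(img))
    random_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    return np.fft.ifft2(amplitude * np.exp(1j * random_phase)).real

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))   # stand-in for a material photograph
scrambled = phase_scramble(img, rng)
```

The scrambled image has exactly the original power spectrum but none of its spatial structure, which is what makes it a useful probe of whether spectral statistics alone support material classification.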
Affiliation(s)
- Maarten W A Wijntjes
- Perceptual Intelligence Lab, Industrial Design Engineering, Delft University of Technology, Delft, Netherlands
3
Margolles P, Elosegi P, Mei N, Soto D. Unconscious Manipulation of Conceptual Representations with Decoded Neurofeedback Impacts Search Behavior. J Neurosci 2024; 44:e1235232023. [PMID: 37985180 PMCID: PMC10866193 DOI: 10.1523/jneurosci.1235-23.2023]
Abstract
The necessity of conscious awareness in human learning has been a long-standing topic in psychology and neuroscience. Previous research on non-conscious associative learning is limited by the low signal-to-noise ratio of subliminal stimuli, and the evidence remains controversial, including failures to replicate. Using functional MRI decoded neurofeedback (DecNef), we guided participants of both sexes to generate neural patterns akin to those observed when visually perceiving real-world entities (e.g., dogs). Importantly, participants remained unaware of the actual content represented by these patterns. We used an associative DecNef approach to imbue perceptual meaning (e.g., dogs) into Japanese hiragana characters that held no inherent meaning for our participants, bypassing any conscious link between the characters and the dog concept. Despite their lack of awareness of the neurofeedback objective, participants successfully learned to activate the target perceptual representations in the bilateral fusiform. The behavioral significance of the training was evaluated in a visual search task: DecNef and control participants searched for dog or scissors targets that were pre-cued by the hiragana used during DecNef training or by a control hiragana. The DecNef hiragana did not prime search for its associated target; strikingly, participants were instead impaired at searching for the targeted perceptual category. Hence, conscious awareness may function to support higher-order associative learning, whereas lower-level forms of re-learning, modification, or plasticity in existing neural representations can occur unconsciously, with behavioral consequences outside the original training context. The work also provides an account of DecNef effects in terms of neural representational drift.
Affiliation(s)
- Pedro Margolles
- Basque Center on Cognition, Brain and Language (BCBL), Donostia - San Sebastián, Gipuzkoa 20009, Spain
- Universidad del País Vasco/Euskal Herriko Unibertsitatea (UPV/EHU), Leioa, Bizkaia 48940, Spain
- Patxi Elosegi
- Basque Center on Cognition, Brain and Language (BCBL), Donostia - San Sebastián, Gipuzkoa 20009, Spain
- Universidad del País Vasco/Euskal Herriko Unibertsitatea (UPV/EHU), Leioa, Bizkaia 48940, Spain
- Ning Mei
- Basque Center on Cognition, Brain and Language (BCBL), Donostia - San Sebastián, Gipuzkoa 20009, Spain
- David Soto
- Basque Center on Cognition, Brain and Language (BCBL), Donostia - San Sebastián, Gipuzkoa 20009, Spain
- Ikerbasque, Basque Foundation for Science, Bilbao, Bizkaia 48009, Spain
4
Jérémie JN, Perrinet LU. Ultrafast Image Categorization in Biology and Neural Models. Vision (Basel) 2023; 7:29. [PMID: 37092462 PMCID: PMC10123664 DOI: 10.3390/vision7020029]
Abstract
Humans are able to categorize images very efficiently, in particular to detect the presence of an animal very quickly. Recently, deep learning algorithms based on convolutional neural networks (CNNs) have achieved higher-than-human accuracy for a wide range of visual categorization tasks. However, the tasks on which these artificial networks are typically trained and evaluated tend to be highly specialized and do not generalize well; e.g., accuracy drops after image rotation. In this respect, biological visual systems are more flexible and efficient than artificial systems for more general tasks, such as recognizing an animal. To further the comparison between biological and artificial neural networks, we re-trained the standard VGG-16 CNN on two independent tasks that are ecologically relevant to humans: detecting the presence of an animal or an artifact. We show that re-training the network achieves a human-like level of performance, comparable to that reported in psychophysical tasks. In addition, we show that categorization is better when the outputs of the two models are combined. Indeed, animals (e.g., lions) tend to be less present in photographs that contain artifacts (e.g., buildings). Furthermore, these re-trained models were able to reproduce some unexpected behavioral observations from human psychophysics, such as robustness to rotation (e.g., an upside-down or tilted image) or to a grayscale transformation. Finally, we quantified the number of CNN layers required to achieve such performance and showed that good accuracy for ultrafast image categorization can be achieved with only a few layers, challenging the belief that image recognition requires deep sequential analysis of visual objects. We hope to extend this framework to biomimetic deep neural architectures designed for ecological tasks, but also to guide future model-based psychophysical experiments that would deepen our understanding of biological vision.
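The benefit of combining the animal and artifact detectors' outputs, which the abstract attributes to the negative co-occurrence of the two categories, can be illustrated with a toy simulation. Everything below (co-occurrence probabilities, noise levels, combination weight) is an invented sketch, not the authors' model.

```python
import numpy as np

# Toy world: animal and artifact labels co-occur negatively,
# as in natural photographs (lions rarely appear next to buildings).
rng = np.random.default_rng(0)
n = 20000

animal = rng.random(n) < 0.5                     # is an animal present?
p_artifact = np.where(animal, 0.2, 0.8)          # artifacts rarer with animals
artifact = rng.random(n) < p_artifact

# Each detector sees its own label through Gaussian noise.
s_animal = animal + rng.normal(0, 0.6, n)
s_artifact = artifact + rng.normal(0, 0.6, n)

# Single-detector decision vs. a combined decision that counts
# artifact evidence against the animal hypothesis.
pred_single = s_animal > 0.5
pred_combined = (s_animal - 0.3 * (s_artifact - 0.5)) > 0.5

acc_single = np.mean(pred_single == animal)
acc_combined = np.mean(pred_combined == animal)
```

Because the two labels are anti-correlated, the artifact score carries extra information about animal presence, and the combined decision is reliably more accurate than the single detector in this toy setting.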
Affiliation(s)
- Jean-Nicolas Jérémie
- Institut de Neurosciences de la Timone (UMR 7289), Aix Marseille University, CNRS, 13005 Marseille, France
- Laurent U. Perrinet
- Institut de Neurosciences de la Timone (UMR 7289), Aix Marseille University, CNRS, 13005 Marseille, France
5
Pareidolic faces receive prioritized attention in the dot-probe task. Atten Percept Psychophys 2023; 85:1106-1126. [PMID: 36918509 DOI: 10.3758/s13414-023-02685-6]
Abstract
Face pareidolia occurs when random or ambiguous inanimate objects are perceived as faces. While real faces automatically receive prioritized attention compared with nonface objects, it is unclear whether pareidolic faces receive similar special attention. We hypothesized that, given the evolutionary importance of broadly detecting animacy, pareidolic faces may have enough faceness to activate a broad face template, triggering prioritized attention. To test this hypothesis, and to explore where along the faceness continuum pareidolic faces fall, we conducted a series of dot-probe experiments in which we paired pareidolic faces with other images directly competing for attention: objects, animal faces, and human faces. We found that pareidolic faces elicited more prioritized attention than objects, a process that was disrupted by inversion, suggesting this prioritized attention was unlikely to be driven by low-level features. Unexpectedly, however, pareidolic faces received more privileged attention than animal faces and attention similar to that given human faces. This attentional efficiency may be due to pareidolic faces being perceived not only as face-like but also as human-like, and to their larger facial features (eyes and mouths) compared with real faces. Together, our findings suggest that pareidolic faces are automatically attentionally privileged, similar to human faces. They are consistent with the proposal of a highly sensitive broad face detection system that is activated by pareidolic faces, triggering false alarms (i.e., illusory faces) that, evolutionarily, are less detrimental than missing potentially relevant signals (e.g., conspecific or heterospecific threats). In sum, pareidolic faces appear "special" in attracting attention.
6
Nelson MJ, Moeller S, Seckin M, Rogalski EJ, Mesulam MM, Hurley RS. The eyes speak when the mouth cannot: Using eye movements to interpret omissions in primary progressive aphasia. Neuropsychologia 2023; 184:108530. [PMID: 36906222 PMCID: PMC10166577 DOI: 10.1016/j.neuropsychologia.2023.108530]
Abstract
Though it may seem simple, object naming is a complex multistage process that can be impaired by lesions at various sites of the language network. Individuals with neurodegenerative disorders of language, known as primary progressive aphasias (PPA), have difficulty with naming objects, and instead frequently say "I don't know" or fail to give a vocal response at all, known as an omission. Whereas other types of naming errors (paraphasias) give clues as to which aspects of the language network have been compromised, the mechanisms underlying omissions remain largely unknown. In this study, we used a novel eye tracking approach to probe the cognitive mechanisms of omissions in the logopenic and semantic variants of PPA (PPA-L and PPA-S). For each participant, we identified pictures of common objects (e.g., animals, tools) that they could name aloud correctly, as well as pictures that elicited an omission. In a separate word-to-picture matching task, those pictures appeared as targets embedded among an array with 15 foils. Participants were given a verbal cue and tasked with pointing to the target, while eye movements were monitored. On trials with correctly-named targets, controls and both PPA groups ceased visual search soon after foveating the target. On omission trials, however, the PPA-S group failed to stop searching, and went on to view many foils "post-target". As further indication of impaired word knowledge, gaze of the PPA-S group was subject to excessive "taxonomic capture", such that they spent less time viewing the target and more time viewing related foils on omission trials. In contrast, viewing behavior of the PPA-L group was similar to controls on both correctly-named and omission trials. These results indicate that the mechanisms of omission in PPA differ by variant. In PPA-S, anterior temporal lobe degeneration causes taxonomic blurring, such that words from the same category can no longer be reliably distinguished. In PPA-L, word knowledge remains relatively intact, and omissions instead appear to be caused by downstream factors (e.g., lexical access, phonological encoding). These findings demonstrate that when words fail, eye movements can be particularly informative.
Affiliation(s)
- M J Nelson
- Mesulam Center for Cognitive Neurology and Alzheimer's Disease, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Department of Neurological Surgery, Feinberg School of Medicine, Northwestern University, USA; Department of Neurosurgery, School of Medicine, University of Alabama at Birmingham, Birmingham, AL 35249, USA
- S Moeller
- Mesulam Center for Cognitive Neurology and Alzheimer's Disease, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Department of Psychology, University of Nevada, Las Vegas, NV 89154, USA
- M Seckin
- Mesulam Center for Cognitive Neurology and Alzheimer's Disease, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Department of Neurology, Acıbadem Mehmet Ali Aydınlar University School of Medicine, İstanbul 34684, Turkey
- E J Rogalski
- Mesulam Center for Cognitive Neurology and Alzheimer's Disease, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Department of Psychiatry and Behavioral Sciences, Feinberg School of Medicine, Northwestern University, USA
- M-M Mesulam
- Mesulam Center for Cognitive Neurology and Alzheimer's Disease, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Department of Neurology, Feinberg School of Medicine, Northwestern University, USA
- R S Hurley
- Mesulam Center for Cognitive Neurology and Alzheimer's Disease, Feinberg School of Medicine, Northwestern University, Chicago, IL 60611, USA; Department of Psychology, Cleveland State University, Cleveland, OH 44115, USA
7
Li X, Lin Z, Chen Y, Gong M. Working memory modulates the anger superiority effect in central and peripheral visual fields. Cogn Emot 2022; 37:271-283. [PMID: 36565287 DOI: 10.1080/02699931.2022.2161483]
Abstract
Angry faces are detected more efficiently in a crowd of distractors than happy faces, a phenomenon known as the anger superiority effect (ASE). The present study investigated whether the ASE can be modulated by top-down manipulation of working memory (WM) in central and peripheral visual fields. In central vision, participants held a colour in WM for a final memory test while simultaneously performing a visual search task that required them to determine whether one face showed a different expression from the other coloured faces. The colour held in WM matched either the colour of the target face (target-matching), the colour of a distractor face (distractor-matching), or neither (non-matching). Results showed that the ASE was observed when the probability of target-matching trials was low. However, when the top-down WM effect was strengthened by raising the probability of target-matching trials, the ASE in the target-matching condition was completely eliminated. Intriguingly, when the visual search task was replaced by a peripheral crowding task, results similar to central vision were found in the target-matching condition. Taken together, our findings indicate that the ASE is subject to top-down WM effects, regardless of the visual field.
Affiliation(s)
- Xiang Li
- School of Psychology, Jiangxi Normal University, Nanchang, People's Republic of China
- Zhen Lin
- School of Psychology, Jiangxi Normal University, Nanchang, People's Republic of China
- Yufei Chen
- School of Psychology, Jiangxi Normal University, Nanchang, People's Republic of China
- Mingliang Gong
- School of Psychology, Jiangxi Normal University, Nanchang, People's Republic of China
8
Srikantharajah J, Ellard C. How central and peripheral vision influence focal and ambient processing during scene viewing. J Vis 2022; 22:4. [PMID: 36322076 PMCID: PMC9639699 DOI: 10.1167/jov.22.12.4]
Abstract
Central and peripheral vision carry out different functions during scene processing. The ambient mode of visual processing is more likely to involve peripheral visual processes, whereas the focal mode is more likely to involve central visual processes. Although the ambient mode is responsible for navigating space and comprehending scene layout, the focal mode gathers detailed information as central vision is oriented to salient areas of the visual field. Previous work suggests that during the time course of scene viewing there is a transition from ambient processing during the first few seconds to focal processing during later time intervals, characterized by longer fixations and shorter saccades. In this study, we identify the influence of central and peripheral vision on changes in eye movements and on the transition from ambient to focal processing during scene viewing. Using a gaze-contingent protocol, we restricted the visual field to central or peripheral vision while participants freely viewed scenes for 20 seconds. Results indicated that fixation durations were shorter when vision was restricted to central vision than under normal vision. During late visual processing, fixations in peripheral vision were longer than those in central vision. We show that a transition from more ambient to more focal processing during scene viewing occurs even when vision is restricted to only central or only peripheral vision.
Affiliation(s)
- Colin Ellard
- Department of Psychology, University of Waterloo, Waterloo, Canada
9
Braaten LF, Arntzen E. Peripheral vision in matching-to-sample procedures. J Exp Anal Behav 2022; 118:425-441. [PMID: 36053794 DOI: 10.1002/jeab.795]
Abstract
Eye-tracking has been used to investigate observing responses in matching-to-sample procedures. However, in visual search, peripheral vision plays an important role. Therefore, three experiments were conducted to investigate the extent to which adult participants can discriminate stimuli that vary in size and position in the periphery. Experiment 1 used arbitrary matching with abstract stimuli, Experiment 2 used identity matching with abstract stimuli, and Experiment 3 used identity matching with simple (familiar) shapes. In all three experiments, participants were taught eight conditional discriminations establishing four 3-member classes of stimuli. Four stimulus sizes and three stimulus positions were manipulated across the 12 peripheral test phases. In these test trials, participants had to fixate their gaze on the sample stimulus in the middle of the screen while selecting a comparison stimulus. Eye movements were measured with a head-mounted eye-tracker during both training and testing. Experiment 1 showed that participants can discriminate small, arbitrarily related abstract stimuli in the periphery. Experiment 2 showed that matching identical stimuli does not improve peripheral discrimination relative to arbitrarily related stimuli. Experiment 3, however, showed that discrimination improves when the stimuli are well-known simple shapes.
10
Panchuk D, Maloney M. A Perception-Action Assessment of the Functionality of Peripheral Vision in Expert and Novice Australian Footballers. J Sport Exerc Psychol 2022; 44:327-334. [PMID: 35894962 DOI: 10.1123/jsep.2021-0121]
Abstract
Although widely acknowledged as important for team-sport performance, the contribution of peripheral vision is poorly understood. This study aimed to better understand the role of far peripheral vision in team sport by exploring how domain experts and novices used far peripheral vision to support decision making and action control. Expert (n = 25) and novice (n = 23) Australian football players completed a perception-only task to assess the extent of their peripheral field. Next, they completed two sport-specific variations (response and recognition) of a "no-look" pass task that required passing a ball to a teammate who appeared in their far peripheral field. In the perception-only task, novices outperformed experts. However, in the sport-specific action response and recognition tasks, experts demonstrated superior performance, responding to stimuli farther from central vision and more accurately. The results demonstrate expertise effects in the use of far peripheral vision in sport.
Affiliation(s)
- Derek Panchuk
- Movement Science, Australian Institute of Sport, Bruce, ACT, Australia
- Derek Panchuk Consulting, Canberra, ACT, Australia
- Michael Maloney
- Movement Science, Australian Institute of Sport, Bruce, ACT, Australia
11
Wolfe JM, Suresh SB, Dewulf AW, Lyu W. Priming effects in inefficient visual search: Real, but transient. Atten Percept Psychophys 2022; 84:1417-1431. [PMID: 35578002 PMCID: PMC9109951 DOI: 10.3758/s13414-022-02503-5]
Abstract
In visual search tasks, responses to targets on one trial can influence responses on the next trial. Most typically, target repetition speeds response while switching to a different target slows response. Such "priming" effects have sometimes been given very significant roles in theories of search (e.g., Theeuwes, Philosophical Transactions of the Royal Society B: Biological Sciences, 368, 1628, 2013). Most work on priming has involved "singleton" or "popout" tasks. In non-popout priming tasks, observers must often perform a task-switching operation because the guiding template for one target (e.g., a red vertical target in a conjunction task) is incompatible with efficient search for the other target (green horizontal, in this example). We examined priming in inefficient search where the priming feature (Color: Experiments 1-3, Shape: Experiments 4-5) was irrelevant to the task of finding a T among Ls. We wished to determine if finding a red T on one trial helped observers to be more efficient if the next T was also red. In all experiments, we found additive priming effects. The reaction time (RT) for the second trial was shorter if the color of the T was repeated. However, there was no interaction with set size. The slope of the RT × Set Size function was not shallower for runs of the same target color, compared to trials where the target color switched. We propose that priming might produce transient guidance of the earliest deployments of attention on the next trial or it might speed decisions about a selected target. Priming does not appear to guide attention over the entire search.
Affiliation(s)
- Jeremy M Wolfe
- Visual Attention Lab, Department of Surgery, Brigham and Women's Hospital, 900 Commonwealth Ave, Boston, MA, 02215, USA.
- Harvard Medical School, Boston, MA, USA.
- Sneha B Suresh
- Visual Attention Lab, Department of Surgery, Brigham and Women's Hospital, 900 Commonwealth Ave, Boston, MA, 02215, USA
- Wanyi Lyu
- Visual Attention Lab, Department of Surgery, Brigham and Women's Hospital, 900 Commonwealth Ave, Boston, MA, 02215, USA
12
Capparini C, To MPS, Reid VM. Identifying the limits of peripheral visual processing in 9‐month‐old infants. Dev Psychobiol 2022; 64:e22274. [DOI: 10.1002/dev.22274]
Affiliation(s)
- Chiara Capparini
- Department of Psychology, Lancaster University, Lancaster, United Kingdom
- Michelle P. S. To
- Department of Psychology, Lancaster University, Lancaster, United Kingdom
- Vincent M. Reid
- School of Psychology, University of Waikato, Hamilton, New Zealand
13
Jang H, Tong F. Convolutional neural networks trained with a developmental sequence of blurry to clear images reveal core differences between face and object processing. J Vis 2021; 21:6. [PMID: 34767621 PMCID: PMC8590164 DOI: 10.1167/jov.21.12.6]
Abstract
Although convolutional neural networks (CNNs) provide a promising model for understanding human vision, most CNNs lack robustness to challenging viewing conditions, such as image blur, whereas human vision is much more reliable. Might robustness to blur be attributable to vision during infancy, given that acuity is initially poor but improves considerably over the first several months of life? Here, we evaluated the potential consequences of such early experiences by training CNN models on face and object recognition tasks while gradually reducing the amount of blur applied to the training images. For CNNs trained on blurry to clear faces, we observed sustained robustness to blur, consistent with a recent report by Vogelsang and colleagues (2018). By contrast, CNNs trained with blurry to clear objects failed to retain robustness to blur. Further analyses revealed that the spatial frequency tuning of the two CNNs was profoundly different. The blurry to clear face-trained network successfully retained a preference for low spatial frequencies, whereas the blurry to clear object-trained CNN exhibited a progressive shift toward higher spatial frequencies. Our findings provide novel computational evidence showing how face recognition, unlike object recognition, allows for more holistic processing. Moreover, our results suggest that blurry vision during infancy is insufficient to account for the robustness of adult vision to blurry objects.
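The blurry-to-clear training regime described in this abstract can be sketched as a blur curriculum: a Gaussian blur whose width shrinks across training epochs, mimicking the improvement of infant acuity. The linear schedule, kernel construction, and noise image below are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel truncated at roughly 3 standard deviations."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via two 1-D convolutions."""
    if sigma <= 0:
        return img
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def blur_schedule(epoch, n_epochs, sigma_start=4.0):
    """Linearly decreasing blur: strong at 'birth', zero by the last epoch."""
    return sigma_start * (1 - epoch / (n_epochs - 1))

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))          # stand-in for a training image
sigmas = [blur_schedule(e, 10) for e in range(10)]
first_epoch_img = gaussian_blur(img, sigmas[0])   # heavily blurred early input
```

Each epoch would train on images blurred with the current sigma; early epochs therefore only expose low spatial frequencies, which is the property the abstract links to the retained blur robustness of face-trained networks.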
Affiliation(s)
- Hojin Jang
- Department of Psychology and Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, USA
- Frank Tong
- Department of Psychology and Vanderbilt Vision Research Center, Vanderbilt University, Nashville, TN, USA
14
Huber-Huber C, Buonocore A, Melcher D. The extrafoveal preview paradigm as a measure of predictive, active sampling in visual perception. J Vis 2021; 21:12. [PMID: 34283203 PMCID: PMC8300052 DOI: 10.1167/jov.21.7.12]
Abstract
A key feature of visual processing in humans is the use of saccadic eye movements to look around the environment. Saccades are typically used to bring relevant information, which is glimpsed with extrafoveal vision, into the high-resolution fovea for further processing. With the exception of some unusual circumstances, such as the first fixation when walking into a room, our saccades are mainly guided based on this extrafoveal preview. In contrast, the majority of experimental studies in vision science have investigated "passive" behavioral and neural responses to suddenly appearing and often temporally or spatially unpredictable stimuli. As reviewed here, a growing number of studies have investigated visual processing of objects under more natural viewing conditions in which observers move their eyes to a stationary stimulus, visible previously in extrafoveal vision, during each trial. These studies demonstrate that the extrafoveal preview has a profound influence on visual processing of objects, both for behavior and neural activity. Starting from the preview effect in reading research we follow subsequent developments in vision research more generally and finally argue that taking such evidence seriously leads to a reconceptualization of the nature of human visual perception that incorporates the strong influence of prediction and action on sensory processing. We review theoretical perspectives on visual perception under naturalistic viewing conditions, including theories of active vision, active sensing, and sampling. Although the extrafoveal preview paradigm has already provided useful information about the timing of, and potential mechanisms for, the close interaction of the oculomotor and visual systems while reading and in natural scenes, the findings thus far also raise many new questions for future research.
Affiliation(s)
- Christoph Huber-Huber
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, The Netherlands
- CIMeC, University of Trento, Italy
- Antimo Buonocore
- Werner Reichardt Centre for Integrative Neuroscience, Tübingen University, Tübingen, BW, Germany
- Hertie Institute for Clinical Brain Research, Tübingen University, Tübingen, BW, Germany
- David Melcher
- CIMeC, University of Trento, Italy
- Division of Science, New York University Abu Dhabi, UAE
15
Abstract
SIGNIFICANCE: This study summarizes the empirical evidence on the use of peripheral vision for the most-researched peripheral vision tools in sports. The objective of this review was to explain whether and how these tools can be used to investigate peripheral vision usage, and how empirical findings obtained with them might transfer to sports situations. The data sources used were Scopus, ScienceDirect, and PubMed. We additionally searched the manufacturers' Web pages and used Google Scholar to find full texts that were not available elsewhere. Studies were included if they were published in a peer-reviewed journal, were written in English, and were conducted in a sports context. Of the 10 tools searched for, we included the five with the most published studies, identifying 93 studies in our topical search. Surprisingly, none of these studies used eye-tracking methods to control for the use of peripheral vision. The best "passive" control is achieved by tools using (foveal) secondary tasks (Dynavision D2 and Vienna Test System). The best transfer to sports tasks is expected for tools demanding action responses (FitLight, Dynavision D2). Tools are likely to train peripheral monitoring (NeuroTracker), peripheral reaction time (Dynavision D2, Vienna Test System), or peripheral preview (FitLight), whereas one tool showed no link to peripheral vision processes (Nike SPARQ Vapor Strobe).
Affiliation(s)
- Hans Strasburger
- Institute of Medical Psychology, Ludwig-Maximilians-Universität München, Munich, Germany
16
Abstract
Visual processing varies dramatically across the visual field. These differences start in the retina and continue all the way to the visual cortex. Despite these differences in processing, the perceptual experience of humans is remarkably stable and continuous across the visual field. Research in the last decade has shown that processing in peripheral and foveal vision is not independent, but is more directly connected than previously thought. We address three core questions on how peripheral and foveal vision interact, and review recent findings on potentially related phenomena that could provide answers to these questions. First, how is the processing of peripheral and foveal signals related during fixation? Peripheral signals seem to be processed in foveal retinotopic areas to facilitate peripheral object recognition, and foveal information seems to be extrapolated toward the periphery to generate a homogeneous representation of the environment. Second, how are peripheral and foveal signals re-calibrated? Transsaccadic changes in object features lead to a reduction in the discrepancy between peripheral and foveal appearance. Third, how is peripheral and foveal information stitched together across saccades? Peripheral and foveal signals are integrated across saccadic eye movements to average percepts and to reduce uncertainty. Together, these findings illustrate that peripheral and foveal processing are closely connected, mastering the compromise between a large peripheral visual field and high resolution at the fovea.
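The transsaccadic integration summarized in the third point is commonly modelled as reliability-weighted (inverse-variance) averaging of the peripheral and foveal estimates. A minimal sketch of that standard model follows; the function and variable names are illustrative, not taken from the reviewed work:

```python
def integrate(pre_mean, pre_var, post_mean, post_var):
    """Fuse a presaccadic (peripheral) and a postsaccadic (foveal) estimate.

    Each estimate is weighted by its reliability (inverse variance).  The
    fused variance is smaller than either input variance, which is the
    formal sense in which transsaccadic integration "reduces uncertainty".
    """
    w_pre = 1.0 / pre_var
    w_post = 1.0 / post_var
    fused_mean = (w_pre * pre_mean + w_post * post_mean) / (w_pre + w_post)
    fused_var = 1.0 / (w_pre + w_post)
    return fused_mean, fused_var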
Affiliation(s)
- Emma E M Stewart
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
- Matteo Valsecchi
- Dipartimento di Psicologia, Università di Bologna, Bologna, Italy
- Alexander C Schütz
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany; Center for Mind, Brain and Behavior, Philipps-Universität Marburg, Marburg, Germany. https://www.uni-marburg.de/en/fb04/team-schuetz/team/alexander-schutz
17
Ambard M. Sunny Pointer: Designing a mouse pointer for people with peripheral vision loss. Assist Technol 2021; 34:454-467. PMID: 33465018; DOI: 10.1080/10400435.2021.1872735.
Abstract
We introduce a new mouse cursor designed to facilitate mouse use by people with peripheral vision loss. The pointer consists of a collection of converging straight lines covering the whole screen and following the position of the mouse cursor. We measured its effects in a group of participants with peripheral vision loss of different kinds and found that it can reduce the time required to complete a mouse targeting task by a factor of seven. Using eye tracking, we show that this system makes it possible to initiate the movement toward the target without having to precisely locate the mouse pointer. Using Fitts' law, we compare these performances with those of full-visual-field users in order to understand the relation between the accuracy of the estimated mouse cursor position and the index of performance obtained with our tool.
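Fitts' law, which the study uses to compare performance across groups, predicts movement time from target distance and width. A hedged sketch of the standard Shannon formulation (the coefficient values here are illustrative, not the study's fitted parameters):

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predict movement time (s) from the Shannon formulation of Fitts' law.

    distance: distance to the target centre; width: target width (same units).
    a, b: empirically fitted intercept and slope -- illustrative values here.
    """
    index_of_difficulty = math.log2(distance / width + 1.0)  # in bits
    return a + b * index_of_difficulty

# A harder task (farther away, smaller target) yields a longer predicted time.
easy = fitts_movement_time(distance=100, width=50)
hard = fitts_movement_time(distance=400, width=10)
```

The index of performance the abstract mentions is the reciprocal of the slope `b` (bits per second), so a seven-fold reduction in targeting time corresponds to a large gain in that index.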
Affiliation(s)
- Maxime Ambard
- LEAD - CNRS UMR 5022, Université Bourgogne Franche-Comté, Dijon, France
18
Ringer RV. Investigating Visual Crowding of Objects in Complex Real-World Scenes. Iperception 2021; 12:2041669521994150. PMID: 35145614; PMCID: PMC8822316; DOI: 10.1177/2041669521994150.
Abstract
Visual crowding, the impairment of object recognition in peripheral vision due to flanking objects, has generally been studied using simple stimuli on blank backgrounds. While crowding is widely assumed to occur in natural scenes, it has not been shown rigorously yet. Given that scene contexts can facilitate object recognition, crowding effects may be dampened in real-world scenes. Therefore, this study investigated crowding using objects in computer-generated real-world scenes. In two experiments, target objects were presented with four flanker objects placed uniformly around the target. Previous research indicates that crowding occurs when the distance between the target and flanker is approximately less than half the retinal eccentricity of the target. In each image, the spacing between the target and flanker objects was varied considerably above or below the standard (0.5) threshold to either suppress or facilitate the crowding effect. Experiment 1 cued the target location and then briefly flashed the scene image before participants could move their eyes. Participants then selected the target object's category from a 15-alternative forced choice response set (including all objects shown in the scene). Experiment 2 used eye tracking to ensure participants were centrally fixating at the beginning of each trial and showed the image for the duration of the participant's fixation. Both experiments found object recognition accuracy decreased with smaller spacing between targets and flanker objects. Thus, this study rigorously shows crowding of objects in semantically consistent real-world scenes.
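The 0.5 spacing threshold referred to above is Bouma's rule of thumb: crowding is expected when target-flanker spacing falls below roughly half the target's eccentricity. As a simple check (the function name is ours, not the paper's):

```python
def is_crowded(spacing_deg, eccentricity_deg, ratio=0.5):
    """Bouma's rule of thumb: flankers closer to the target than
    ratio * eccentricity (all in degrees of visual angle) are expected
    to crowd it; flankers farther away are not."""
    return spacing_deg < ratio * eccentricity_deg

# A target at 10 deg eccentricity with flankers 3 deg away is predicted
# to be crowded; with flankers 8 deg away it is not.
```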
Affiliation(s)
- Ryan V. Ringer
- Department of Psychology, Wichita State University, Wichita, Kansas, United States
19
Tatler BW. Searching in CCTV: effects of organisation in the multiplex. Cogn Res Princ Implic 2021; 6:11. PMID: 33599890; PMCID: PMC7892658; DOI: 10.1186/s41235-021-00277-2.
Abstract
CCTV plays a prominent role in public security, health and safety. Monitoring large arrays of CCTV camera feeds is a visually and cognitively demanding task. Arranging the scenes by geographical proximity in the surveilled environment has been recommended to reduce this demand, but empirical tests of this method have failed to find any benefit. The present study tests an alternative method for arranging scenes, based on psychological principles from literature on visual search and scene perception: grouping scenes by semantic similarity. Searching for a particular scene in the array-a common task in reactive and proactive surveillance-was faster when scenes were arranged by semantic category. This effect was found only when scenes were separated by gaps for participants who were not made aware that scenes in the multiplex were grouped by semantics (Experiment 1), but irrespective of whether scenes were separated by gaps or not for participants who were made aware of this grouping (Experiment 2). When target frequency varied between scene categories-mirroring unequal distributions of crime over space-the benefit of organising scenes by semantic category was enhanced for scenes in the most frequently searched-for category, without any statistical evidence for a cost when searching for rarely searched-for categories (Experiment 3). The findings extend current understanding of the role of within-scene semantics in visual search, to encompass between-scene semantic relationships. Furthermore, the findings suggest that arranging scenes in the CCTV control room by semantic category is likely to assist operators in finding specific scenes during surveillance.
Affiliation(s)
- Benjamin W Tatler
- School of Psychology, University of Aberdeen, Aberdeen, AB24 3FX, Scotland, UK.
20
Global and local interference effects in ensemble encoding are best explained by interactions between summary representations of the mean and the range. Atten Percept Psychophys 2021; 83:1106-1128. PMID: 33506350; PMCID: PMC8049940; DOI: 10.3758/s13414-020-02224-7.
Abstract
Through ensemble encoding, the visual system compresses redundant statistical properties from multiple items into a single summary metric (e.g., average size). Numerous studies have shown that global summary information is extracted quickly, does not require access to single-item representations, and often interferes with reports of single items from the set. Yet a thorough understanding of ensemble processing would benefit from a more extensive investigation at the local level. Thus, the purpose of this study was to provide a more critical inspection of global-local processing in ensemble perception. Taking inspiration from Navon (Cognitive Psychology, 9(3), 353-383, 1977), we employed a novel paradigm that independently manipulates the degree of interference at the global (mean) or local (single item) level of the ensemble. Initial results were consistent with reciprocal interference between global and local ensemble processing. However, further testing revealed that local interference effects were better explained by interference from another summary statistic, the range of the set. Furthermore, participants were unable to disambiguate single items from the ensemble display from other items that were within the ensemble range but, critically, were not actually present in the ensemble. Thus, it appears that local item values are likely inferred based on their relationship to higher-order summary statistics such as the range and the mean. These results conflict with claims that local information is captured alongside global information in summary representations. In such studies, successful identification of set members was not compared against misidentification of items that fell within the range of the set but were never actually presented.
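The two summary statistics at issue, the set mean and the set range, and the membership confusion the authors report can be illustrated with a toy readout: any probe falling inside the ensemble's range is a plausible member under a summary-based representation, even if it was never shown. Names and values here are illustrative:

```python
def ensemble_summary(sizes):
    """Return the mean and range summaries of a set of item sizes."""
    mean = sum(sizes) / len(sizes)
    spread = max(sizes) - min(sizes)
    return mean, spread

def within_range(probe, sizes):
    """True if the probe lies inside the ensemble's range -- i.e. a plausible,
    though possibly never-presented, member under a summary-based readout."""
    return min(sizes) <= probe <= max(sizes)

sizes = [2.0, 4.0, 6.0, 8.0]  # item sizes in a single ensemble display
```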
21
Törnqvist H, Somppi S, Kujala MV, Vainio O. Observing animals and humans: dogs target their gaze to the biological information in natural scenes. PeerJ 2020; 8:e10341. PMID: 33362955; PMCID: PMC7749655; DOI: 10.7717/peerj.10341.
Abstract
BACKGROUND This study examines, using eye gaze tracking, how dogs observe images of natural scenes containing living creatures (wild animals, dogs, and humans). Because dogs have had limited exposure to wild animals in their lives, we also consider the natural novelty of the wild animal images for the dogs. METHODS The eye gaze of dogs was recorded while they viewed natural images containing dogs, humans, and wild animals. Three categories of images were used: naturalistic landscape images containing single humans or animals, full body images containing a single human or an animal, and full body images containing a pair of humans or animals. The gazing behavior of two dog populations, family and kennel dogs, was compared. RESULTS As a main effect, dogs gazed at living creatures (object areas) longer than the background areas of the images; heads longer than bodies; heads longer than background areas; and bodies longer than background areas. Dogs gazed less at the object areas vs. the background in landscape images than in the other image categories. Both dog groups also gazed at wild animal heads longer than human or dog heads in the images. When viewing single animal and human images, family dogs focused their gaze very prominently on the head areas, but in images containing a pair of animals or humans, they gazed more at the body than the head areas. In kennel dogs, the difference in gazing times of the head and body areas within single or paired images failed to reach significance. DISCUSSION Dogs focused their gaze on living creatures in all image categories, also detecting them in the natural landscape images. Generally, they also gazed at the biologically informative areas of the images, such as the head, which supports the importance of the head/face area for dogs in obtaining social information. The natural novelty of the species represented in the images as well as the image category affected the gazing behavior of dogs. Furthermore, differences in gazing strategy between family and kennel dogs were observed, suggesting an influence of different social living environments and life experiences.
Affiliation(s)
- Heini Törnqvist
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Sanni Somppi
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Miiamaaria V Kujala
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Department of Psychology, Faculty of Education and Psychology, University of Jyväskylä, Jyväskylä, Finland
- Outi Vainio
- Department of Equine and Small Animal Medicine, Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
23
Maurage P, Bollen Z, Masson N, D'Hondt F. Eye Tracking Studies Exploring Cognitive and Affective Processes among Alcohol Drinkers: a Systematic Review and Perspectives. Neuropsychol Rev 2020; 31:167-201. PMID: 33099714; DOI: 10.1007/s11065-020-09458-0.
Abstract
Acute alcohol intoxication and alcohol use disorders are characterized by a wide range of psychological and cerebral impairments, which have been widely explored using neuropsychological and neuroscientific techniques. Eye tracking has recently emerged as an innovative tool to renew this exploration, as eye movements offer complementary information on the processes underlying perceptive, attentional, memory or executive abilities. Building on this, the present systematic and critical literature review provides a comprehensive overview of eye tracking studies exploring cognitive and affective processes among alcohol drinkers. Using PRISMA guidelines, 36 papers that measured eye movements among alcohol drinkers were extracted from three databases (PsycINFO, PubMed, Scopus). They were assessed for methodological quality using a standardized procedure, and categorized based on the main cognitive function measured, namely perceptive abilities, attentional bias, executive function, emotion and prevention/intervention. Eye tracking indexes showed that alcohol-related disorders are related to: (1) a stable pattern of basic eye movement impairments, particularly during alcohol intoxication; (2) a robust attentional bias, indexed by increased dwell times for alcohol-related stimuli; (3) a reduced inhibitory control on saccadic movements; (4) an increased pupillary reactivity to visual stimuli, regardless of their emotional content; (5) a limited visual attention to prevention messages. Perspectives for future research are proposed, notably encouraging the exploration of eye movements in severe alcohol use disorders and the establishment of methodological gold standards for eye tracking measures in this field.
Affiliation(s)
- Pierre Maurage
- Louvain Experimental Psychopathology research group (LEP), Psychological Sciences Research Institute, UCLouvain, Louvain-la-Neuve, Belgium.
- Zoé Bollen
- Louvain Experimental Psychopathology research group (LEP), Psychological Sciences Research Institute, UCLouvain, Louvain-la-Neuve, Belgium
- Nicolas Masson
- Numerical Cognition Group, Psychological Sciences Research Institute and Neuroscience Institute, UCLouvain, Louvain-la-Neuve, Belgium; Institute of Cognitive Science and Assessment (COSA), Department of Behavioural and Cognitive Sciences (DBCS), Faculty of Humanities, Education and Social Sciences (FHSE), University of Luxembourg, Luxembourg, Luxembourg
- Fabien D'Hondt
- Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neuroscience & Cognition, Université de Lille, Lille, France; Centre National de Ressources et de Résilience (CN2R), Lille, France
24
Codispoti M, Micucci A, De Cesarei A. Time will tell: Object categorization and emotional engagement during processing of degraded natural scenes. Psychophysiology 2020; 58:e13704. PMID: 33090526; DOI: 10.1111/psyp.13704.
Abstract
The aim of the present study was to examine the relationship between object categorization in natural scenes and the engagement of cortico-limbic appetitive and defensive systems (emotional engagement) by manipulating both the bottom-up information and the top-down context. Concerning the bottom-up information, we manipulated the computational load by scrambling the phase of the spatial frequency spectrum, and asked participants to classify natural scenes as containing an animal or a person. The role of the top-down context was assessed by comparing an incremental condition, in which pictures were progressively revealed, to a condition in which no probabilistic relationship existed between each stimulus and the following one. In two experiments, the categorization and response to emotional and neutral scenes were similarly modulated by the computational load. The Late Positive Potential (LPP) was affected by the emotional content of the scenes, and by categorization accuracy. When the phase of the spatial frequency spectrum was scrambled by a large amount (>58%), chance categorization resulted, and affective LPP modulation was eliminated. With less degraded scenes, categorization accuracy was higher (.82 in Experiment 1, .86 in Experiment 2) and affective modulation of the LPP was observed at a late window (>800 ms), indicating that it is possible to delay the time of engagement of the motivational systems which are responsible for the LPP affective modulation. The present data strongly support the view that semantic analysis of visual scenes, operationalized here as object categorization, is a necessary condition for emotional engagement at the electrocortical level (LPP).
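Phase scrambling of the spatial frequency spectrum, the degradation method described above, keeps an image's amplitude spectrum while replacing a proportion of its phase with noise. A minimal NumPy sketch under the simplifying assumption that original and random phases are mixed linearly (the authors' exact procedure may differ):

```python
import numpy as np

def phase_scramble(image, proportion, rng=None):
    """Scramble `proportion` (0..1) of an image's phase spectrum.

    The mixed spectrum keeps the original amplitudes and blends the original
    phase with a uniform random phase field; the real part of the inverse
    transform is returned as the degraded image.
    """
    rng = np.random.default_rng(rng)
    spectrum = np.fft.fft2(image)
    amplitude = np.abs(spectrum)
    phase = np.angle(spectrum)
    random_phase = rng.uniform(-np.pi, np.pi, size=image.shape)
    mixed_phase = (1 - proportion) * phase + proportion * random_phase
    scrambled = np.fft.ifft2(amplitude * np.exp(1j * mixed_phase))
    return np.real(scrambled)
```

With `proportion=0` the image is returned unchanged; in the study's terms, scrambling above roughly 58% drove categorization to chance.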
Affiliation(s)
- Antonia Micucci
- Department of Psychology, University of Bologna, Bologna, Italy
25
General and own-species attentional face biases. Atten Percept Psychophys 2020; 83:187-198. PMID: 33025467; DOI: 10.3758/s13414-020-02132-w.
Abstract
Humans demonstrate enhanced processing of human faces compared with animal faces, known as own-species bias. This bias is important for identifying people who may cause harm, as well as for recognizing friends and kin. However, growing evidence also indicates a more general face bias. Faces have high evolutionary importance beyond conspecific interactions, as they aid in detecting predators and prey. Few studies have explored the interaction of these biases together. In three experiments, we explored processing of human and animal faces, compared with each other and to nonface objects, which allowed us to examine both own-species and broader face biases. We used a dot-probe paradigm to examine human adults' covert attentional biases for task-irrelevant human faces, animal faces, and objects. We replicated the own-species attentional bias for human faces relative to animal faces. We also found an attentional bias for animal faces relative to objects, consistent with the proposal that faces broadly receive privileged processing. Our findings suggest that humans may be attracted to a broad class of faces. Further, we found that while participants rapidly attended to human faces across all cue display durations, they attended to animal faces only when they had sufficient time to process them. Our findings reveal that the dot-probe paradigm is sensitive for capturing both own-species and more general face biases, and that each has a different attentional signature, possibly reflecting their unique but overlapping evolutionary importance.
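In a dot-probe paradigm like the one described above, attentional bias is conventionally scored as the reaction-time advantage for probes that replace the cue of interest. A sketch with illustrative numbers (the scoring convention is standard, but the values are not from this study):

```python
def attentional_bias_ms(rt_probe_at_face, rt_probe_at_object):
    """Dot-probe bias score in milliseconds.

    Positive values mean attention was drawn to the face cue: probes that
    appear at the face location are answered faster than probes that appear
    at the object location.  Inputs are mean reaction times in ms.
    """
    return rt_probe_at_object - rt_probe_at_face

# e.g. 510 ms for probes at the object vs. 480 ms at the face -> 30 ms bias
```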
26
Exogeneous Spatial Cueing beyond the Near Periphery: Cueing Effects in a Discrimination Paradigm at Large Eccentricities. Vision (Basel) 2020; 4:vision4010013. PMID: 32079326; PMCID: PMC7157755; DOI: 10.3390/vision4010013.
Abstract
Although visual attention is one of the most thoroughly investigated topics in experimental psychology and vision science, most of this research tends to be restricted to the near periphery. Eccentricities used in attention studies usually do not exceed 20° to 30°, but most studies even make use of considerably smaller maximum eccentricities. Thus, empirical knowledge about attention beyond this range is sparse, probably due to a previous lack of suitable experimental devices to investigate attention in the far periphery. This is currently changing due to the development of temporal high-resolution projectors and head-mounted displays (HMDs) that allow displaying experimental stimuli at far eccentricities. In the present study, visual attention was investigated beyond the near periphery (15°, 30°, 56° Exp. 1) and (15°, 35°, 56° Exp. 2) in a peripheral Posner cueing paradigm using a discrimination task with placeholders. Interestingly, cueing effects were revealed for the whole range of eccentricities although the inhomogeneity of the visual field and its functional subdivisions might lead one to suspect otherwise.
27
On the roles of central and peripheral vision in the extraction of material and form from a scene. Atten Percept Psychophys 2019; 81:1209-1219. PMID: 30989582; DOI: 10.3758/s13414-019-01731-6.
Abstract
Conventional wisdom tells us that the appreciation of local (detail) and global (form and spatial relations) information from a scene is preferentially processed by central and peripheral vision, respectively. Using an eye monitor with high spatial and temporal precision, we sought to provide direct evidence for this idea by controlling whether carefully designed hierarchical scenes were viewed only with central vision (the periphery was masked), only with peripheral vision (the central region was masked), or with full vision. The scenes consisted of a neutral form (a D shape) composed of target circles or squares, or a target circle or square composed of neutral material (Ds). The task was for the participant to determine as quickly as possible whether the scene contained circle(s) or square(s). Increasing the size of the masked region had deleterious effects on performance. This deleterious effect was greater for the extraction of form information when the periphery was masked, and greater for the extraction of material information when central vision was masked, thus providing direct evidence for conventional ideas about the processing predilections of central and peripheral vision.
28
Wolfe JM, Utochkin IS. What is a preattentive feature? Curr Opin Psychol 2019; 29:19-26. PMID: 30472539; PMCID: PMC6513732; DOI: 10.1016/j.copsyc.2018.11.005.
Abstract
The concept of a preattentive feature has been central to vision and attention research for about half a century. A preattentive feature is a feature that guides attention in visual search and that cannot be decomposed into simpler features. While that definition seems straightforward, there is no simple diagnostic test that infallibly identifies a preattentive feature. This paper briefly reviews the criteria that have been proposed and illustrates some of the difficulties of definition.
Affiliation(s)
- Jeremy M Wolfe
- Visual Attention Lab, Department of Surgery, Brigham & Women's Hospital; Departments of Ophthalmology and Radiology, Harvard Medical School, 64 Sidney St., Suite 170, Cambridge, MA 02139-4170 (corresponding author)
- Igor S Utochkin
- National Research University Higher School of Economics, Armyansky per. 4, 101000 Moscow, Russian Federation
29
Yu CP, Liu H, Samaras D, Zelinsky GJ. Modelling attention control using a convolutional neural network designed after the ventral visual pathway. Visual Cognition 2019. DOI: 10.1080/13506285.2019.1661927.
Affiliation(s)
- Chen-Ping Yu
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Huidong Liu
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
| | - Dimitrios Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
| | - Gregory J. Zelinsky
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Department of Psychology, Stony Brook University, Stony Brook, NY, USA
| |
30
Ramezani F, Kheradpisheh SR, Thorpe SJ, Ghodrati M. Object categorization in visual periphery is modulated by delayed foveal noise. J Vis 2019; 19:1. [PMID: 31369042] [DOI: 10.1167/19.9.1]
Abstract
Behavioral studies in humans indicate that peripheral vision supports object recognition to some extent. Moreover, recent studies have shown that some information from brain regions retinotopic to the visual periphery is fed back to regions retinotopic to the fovea, and disrupting this feedback impairs object recognition in humans. However, it is unclear to what extent information in the visual periphery contributes to human object categorization. Here, we designed two series of rapid object categorization tasks to first investigate the performance of human peripheral vision in categorizing natural object images at different eccentricities and abstraction levels (superordinate, basic, and subordinate). Then, using a delayed foveal noise mask, we studied how modulating the foveal representation impacts peripheral object categorization at each abstraction level. We found that peripheral vision can quickly and accurately accomplish superordinate categorization, while its performance at finer categorization levels drops dramatically as the object is presented farther in the periphery. We also found that a 300-ms delayed foveal noise mask significantly disturbs categorization performance at the basic and subordinate levels, while it has no effect at the superordinate level. Our results suggest that human peripheral vision can easily process objects at high abstraction levels, and that this information is fed back to prime the foveal cortex for finer categorization when a saccade is made toward the target object.
Affiliation(s)
- Farzad Ramezani
- Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran
- Saeed Reza Kheradpisheh
- Department of Computer and Data Sciences, Faculty of Mathematical Sciences, Shahid Beheshti University, Tehran, Iran
- Simon J Thorpe
- Centre de Recherche Cerveau et Cognition (CerCo), Université Paul Sabatier, Toulouse, France
- Masoud Ghodrati
- Neuroscience Program, Biomedicine Discovery Institute, Monash University, Clayton, Victoria, Australia
31
Awad D, Emery NJ, Mareschal I. The Role of Emotional Expression and Eccentricity on Gaze Perception. Front Psychol 2019; 10:1129. [PMID: 31164853] [PMCID: PMC6536623] [DOI: 10.3389/fpsyg.2019.01129]
Abstract
The perception of another's gaze direction and facial expression complements verbal communication and modulates how we interact with other people. However, our perception of these two cues is not always accurate, even when we are looking directly at the person. In addition, in many cases social communication occurs within groups of people where we can't always look directly at every person in the group. Here, we sought to examine how the presence of other people influences our perception of a target face. We asked participants to judge the direction of gaze of the target face as either looking to their left, to their right or directly at them, when the face was viewed on its own or viewed within a group of other identity faces. The target face either had an angry or a neutral expression and was viewed directly (foveal experiment), or within peripheral vision (peripheral experiment). When the target was viewed within a group, the flanking faces also had either neutral or angry expressions and their gaze was in one of five different directions (from averted leftwards to averted rightwards in steps of 10°). When the target face was viewed foveally there was no effect of target emotion on participants' judgments of its gaze direction. There was also no effect of the presence of flankers (regardless of expression) on the perception of the target gaze. When the target face was viewed peripherally, participants judged its direction of gaze to be direct over a wider range of gaze deviations than when viewed foveally, and more so for angry faces than neutral faces. We also found that flankers (regardless of emotional expression) did not influence performance. This suggests that observers judged angry faces as looking at them over a broad range of gaze deviations in the periphery only, possibly resulting from increased uncertainty about the stimulus.
Affiliation(s)
- Deema Awad
- Department of Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, United Kingdom
- Nathan J Emery
- Department of Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, United Kingdom
- Isabelle Mareschal
- Department of Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, United Kingdom
32
Loschky LC, Szaffarczyk S, Beugnet C, Young ME, Boucart M. The contributions of central and peripheral vision to scene-gist recognition with a 180° visual field. J Vis 2019; 19:15. [DOI: 10.1167/19.5.15]
Affiliation(s)
- Sebastien Szaffarczyk
- Laboratoire de Sciences Cognitives et Affectives SCALab, Université de Lille, CNRS, Lille, France
- Clement Beugnet
- Laboratoire de Sciences Cognitives et Affectives SCALab, Université de Lille, CNRS, Lille, France
- Michael E. Young
- Psychological Sciences, Kansas State University, Manhattan, KS, USA
- Muriel Boucart
- Laboratoire de Sciences Cognitives et Affectives SCALab, Université de Lille, CNRS, Lille, France
33
Asfaw DS, Jones PR, Mönter VM, Smith ND, Crabb DP. Does Glaucoma Alter Eye Movements When Viewing Images of Natural Scenes? A Between-Eye Study. Invest Ophthalmol Vis Sci 2019; 59:3189-3198. [PMID: 29971443] [DOI: 10.1167/iovs.18-23779]
Abstract
Purpose To investigate whether glaucoma produces measurable changes in eye movements. Methods Fifteen glaucoma patients with asymmetric vision loss (difference in mean deviation [MD] > 6 dB between eyes) were asked to monocularly view 120 images of natural scenes, presented sequentially on a computer monitor. Each image was viewed twice, once each with the better and worse eye. Patients' eye movements were recorded with an Eyelink 1000 eye-tracker. Eye-movement parameters were computed and compared within participants (better eye versus worse eye). These parameters included a novel measure, saccadic reversal rate (SRR), as well as more traditional metrics such as saccade amplitude, fixation counts, fixation duration, and spread of fixation locations (bivariate contour ellipse area [BCEA]). In addition, the associations of these parameters with clinical measures of vision were investigated. Results In the worse eye, saccade amplitude (P = 0.012; −13%) and BCEA (P = 0.005; −16%) were smaller, while SRR was greater (P = 0.018; +16%). There was a significant correlation between the intereye difference in BCEA and differences in MD values (Spearman's r = 0.65; P = 0.01), while differences in SRR were associated with differences in visual acuity (Spearman's r = 0.64; P = 0.01). Furthermore, between-eye differences in BCEA were a significant predictor of between-eye differences in MD: for every 1-dB difference in MD, BCEA reduced by 6.2% (95% confidence interval, 1.6%-10.3%). Conclusions Eye movements are altered by visual field loss, and these changes are related to changes in clinical measures. Eye movements recorded while passively viewing images could potentially be used as biomarkers for visual field damage.
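Both fixation-spread measures in this abstract can be computed directly from eye-tracking output. Below is a minimal sketch: the BCEA formula is the standard one, BCEA = 2kπσxσy√(1 − ρ²) with k = −ln(1 − p), but the paper's exact SRR definition is not given here, so the reversal criterion (a direction change of more than 90° between successive saccades) is an assumption for illustration only.

```python
import math
from statistics import mean, stdev

def bcea(x, y, p=0.682):
    """Bivariate contour ellipse area covering a proportion p of fixation
    positions, via the standard formula 2*k*pi*sx*sy*sqrt(1 - rho^2) with
    k = -ln(1 - p). x and y are fixation coordinates (e.g., in degrees)."""
    k = -math.log(1.0 - p)
    sx, sy = stdev(x), stdev(y)
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    rho = cov / (sx * sy)  # Pearson correlation of horizontal/vertical position
    return 2.0 * k * math.pi * sx * sy * math.sqrt(1.0 - rho ** 2)

def saccadic_reversal_rate(directions_deg, thresh=90.0):
    """Fraction of successive saccade pairs whose direction changes by more
    than `thresh` degrees (an assumed operationalization of SRR)."""
    turns = [abs((b - a + 180.0) % 360.0 - 180.0)
             for a, b in zip(directions_deg, directions_deg[1:])]
    return sum(t > thresh for t in turns) / len(turns) if turns else 0.0
```

With uncorrelated horizontal and vertical spread the ρ term drops out, so the reported 6.2% BCEA reduction per 1-dB MD difference corresponds to the product σxσy shrinking by the same factor.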
Affiliation(s)
- Daniel S Asfaw
- Division of Optometry and Visual Science, School of Health Science, City, University of London, London, United Kingdom
- Pete R Jones
- Division of Optometry and Visual Science, School of Health Science, City, University of London, London, United Kingdom
- Vera M Mönter
- Division of Optometry and Visual Science, School of Health Science, City, University of London, London, United Kingdom
- Nicholas D Smith
- Division of Optometry and Visual Science, School of Health Science, City, University of London, London, United Kingdom
- David P Crabb
- Division of Optometry and Visual Science, School of Health Science, City, University of London, London, United Kingdom
34
Abstract
It is known that unpleasant images capture our attention. However, the causes of the emotions evoked by these images can vary. Trypophobia is the fear of clustered objects. A recent study claimed that this phobia is elicited by the specific power spectrum of such images. In the present study, we measured saccade trajectories to examine how trypophobic images possessing a characteristic power spectrum affect visual attention. The participants' task was to make a saccade in the direction indicated by a cue. Four irrelevant images with different emotional content were presented as peripheral distractors at cue-image onset asynchronies of 0 ms, 150 ms, and 450 ms. The irrelevant images consisted of trypophobic, fearful, or neutral scenes. The presence of saccade trajectory deviations induced by trypophobic images suggests that intact trypophobic images oriented attention to their location. Moreover, when the images were phase scrambled, the saccade curved away from the trypophobic images, suggesting that trypophobic power spectra also triggered attentional capture, which was weak and then led to inhibition. These findings suggest that not only the power spectral characteristics but also the gist of a trypophobic image affect attentional deployment.
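Phase scrambling, the control manipulation described above, preserves an image's power spectrum while destroying its recognizable structure. A minimal sketch follows, assuming NumPy; the study's exact scrambling procedure is not specified here, so adding the phase spectrum of a white-noise image (a common variant that keeps the result real-valued) is an assumption.

```python
import numpy as np

def phase_scramble(img, seed=None):
    """Randomize the Fourier phase of a 2-D grayscale image while keeping
    its amplitude (power) spectrum intact. Adding the phase spectrum of
    real white noise keeps the spectrum conjugate-symmetric, so the
    inverse transform is real up to floating-point error."""
    rng = np.random.default_rng(seed)
    f = np.fft.fft2(img)
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    scrambled = np.abs(f) * np.exp(1j * (np.angle(f) + noise_phase))
    return np.real(np.fft.ifft2(scrambled))
```

Because only the phases change, the scrambled image has the same power spectrum as the original, isolating the contribution of spectral content from that of image gist.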
35
Brand J, Johnson AP. The effects of distributed and focused attention on rapid scene categorization. Visual Cognition 2018. [DOI: 10.1080/13506285.2018.1485808]
Affiliation(s)
- John Brand
- Department of Epidemiology, Geisel School of Medicine, Dartmouth College, Hanover, USA
- Aaron P. Johnson
- Department of Psychology, Concordia University, Montreal, Canada
36
Jahanian A, Keshvari S, Rosenholtz R. Web pages: What can you see in a single fixation? Cogn Res Princ Implic 2018; 3:14. [PMID: 29774229] [PMCID: PMC5945715] [DOI: 10.1186/s41235-018-0099-2]
Abstract
Research in human vision suggests that in a single fixation, humans can extract a significant amount of information from a natural scene, e.g. the semantic category, spatial layout, and object identities. This ability is useful, for example, for quickly determining location, navigating around obstacles, detecting threats, and guiding eye movements to gather more information. In this paper, we ask a new question: What can we see at a glance at a web page – an artificial yet complex “real world” stimulus? Is it possible to notice the type of website, or where the relevant elements are, with only a glimpse? We find that observers, fixating at the center of a web page shown for only 120 milliseconds, are well above chance at classifying the page into one of ten categories. Furthermore, this ability is supported in part by text that they can read at a glance. Users can also understand the spatial layout well enough to reliably localize the menu bar and to detect ads, even though the latter are often camouflaged among other graphical elements. We discuss the parallels between web page gist and scene gist, and the implications of our findings for both vision science and human-computer interaction.
Affiliation(s)
- Ali Jahanian
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
- Shaiyan Keshvari
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
- Ruth Rosenholtz
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
37
Zwitserlood P, Bölte J, Hofmann R, Meier CC, Dobel C. Seeing for speaking: Semantic and lexical information provided by briefly presented, naturalistic action scenes. PLoS One 2018; 13:e0194762. [PMID: 29652939] [PMCID: PMC5898714] [DOI: 10.1371/journal.pone.0194762]
Abstract
At the interface between scene perception and speech production, we investigated how rapidly action scenes can activate semantic and lexical information. Experiment 1 examined how complex action-scene primes, presented for 150 ms, 100 ms, or 50 ms and subsequently masked, influenced the speed with which immediately following action-picture targets were named. Prime and target actions were either identical, showed the same action with different actors and environments, or were unrelated. Relative to unrelated primes, identical and same-action primes facilitated naming the target action, even when presented for 50 ms. In Experiment 2, neutral primes were used to assess the direction of these effects. Identical and same-action scenes induced facilitation but unrelated actions induced interference. In Experiment 3, written verbs were used as targets for naming, preceded by action primes. When target verbs denoted the prime action, clear facilitation was obtained. In contrast, interference was observed when target verbs were phonologically similar, but otherwise unrelated, to the names of prime actions. This is clear evidence for word-form activation by masked action scenes. Masked action pictures thus provide conceptual information that is detailed enough to facilitate apprehension and naming of immediately following scenes. Masked actions even activate their word-form information, as is evident when targets are words. We thus show how language production can be primed with briefly flashed masked action scenes, in answer to long-standing questions in scene processing.
Affiliation(s)
- Pienie Zwitserlood
- Institute for Psychology, University of Münster, Münster, Germany
- Otto-Creutzfeldt Center for Cognitive Neuroscience, University of Münster, Münster, Germany
- Jens Bölte
- Institute for Psychology, University of Münster, Münster, Germany
- Otto-Creutzfeldt Center for Cognitive Neuroscience, University of Münster, Münster, Germany
- Reinhild Hofmann
- Clinic for Phoniatrics and Pediatric Audiology, University of Münster, Münster, Germany
- Christian Dobel
- Department of Otorhinolaryngology, Medical Faculty, University of Jena, Jena, Germany
38
Affiliation(s)
- Miguel P. Eckstein
- Department of Psychological and Brain Sciences, University of California, Santa Barbara, California 93106-9660
39
Simpson MJ. Mini-review: Far peripheral vision. Vision Res 2017; 140:96-105. [PMID: 28882754] [DOI: 10.1016/j.visres.2017.08.001]
Abstract
The region of far peripheral vision, beyond 60 degrees of visual angle, is important to the evaluation of peripheral dark shadows (negative dysphotopsia) seen by some intraocular lens (IOL) patients. Theoretical calculations show that the limited diameter of an IOL affects ray paths at large angles, leading to a dimming of the main image for small pupils, and to peripheral illumination by light bypassing the IOL for larger pupils. These effects are rarely bothersome, and cataract surgery is highly successful, but there is a need to improve the characterization of far peripheral vision, for both pseudophakic and phakic eyes. Perimetry is the main quantitative test, but the purpose is to evaluate pathologies rather than characterize vision (and object and image regions are no longer uniquely related in the pseudophakic eye). The maximum visual angle is approximately 105°, but there is limited information about variations with age, race, or refractive error (in case there is an unexpected link with the development of myopia), or about how clear cornea, iris location, and the limiting retina are related. Also, the detection of peripheral motion is widely recognized to be important, yet rarely evaluated. Overall, people rarely complain specifically about this visual region, but with "normal" vision including an IOL for >5% of people, and increasing interest in virtual reality and augmented reality, there are new reasons to characterize peripheral vision more completely.
Affiliation(s)
- Michael J Simpson
- Simpson Optics LLC, 3004 Waterway Court, Arlington, TX 76012, United States.
40
Fademrecht L, Bülthoff I, de la Rosa S. Action recognition is viewpoint-dependent in the visual periphery. Vision Res 2017; 135:10-15. [DOI: 10.1016/j.visres.2017.01.011]
41
Bognár A, Csete G, Németh M, Csibri P, Kincses TZ, Sáry G. Transcranial Stimulation of the Orbitofrontal Cortex Affects Decisions about Magnocellular Optimized Stimuli. Front Neurosci 2017; 11:234. [PMID: 28491018] [PMCID: PMC5405140] [DOI: 10.3389/fnins.2017.00234]
Abstract
Visual categorization plays an important role in fast and efficient information processing; yet the neuronal basis of fast categorization has not been established. Two main hypotheses have been proposed; both agree that primary, global impressions are based on information acquired through the magnocellular pathway (MC). It is unclear whether this information is available through the MC that provides information (also) for the ventral pathway, or through top-down mechanisms via connections between the dorsal and ventral pathways through the frontal cortex. To clarify this, a categorization task was performed by 48 subjects; they had to make decisions about objects' sizes. We created stimuli specific to the magnocellular and parvocellular pathway (PC) on the basis of their spatial frequency content. Transcranial direct-current stimulation was used to assess the role of frontal areas, a target of the MC. Stimulation did not bias the accuracy of decisions when stimuli optimized for the PC were used. In the case of stimuli optimized for the MC, anodal stimulation improved the subjects' accuracy in the behavioral test, while cathodal stimulation impaired accuracy. Our results support the hypothesis that fast visual categorization relies on top-down mechanisms that promote fast predictions through coarse information carried by the MC via the orbitofrontal cortex.
Affiliation(s)
- Anna Bognár
- Department of Physiology, University of Szeged, Szeged, Hungary
- Gergő Csete
- Department of Neurology, University of Szeged, Szeged, Hungary
- Department of Anaesthesiology and Intensive Therapy, University of Szeged, Szeged, Hungary
- Margit Németh
- Department of Physiology, University of Szeged, Szeged, Hungary
- Péter Csibri
- Department of Physiology, University of Szeged, Szeged, Hungary
- Gyula Sáry
- Department of Physiology, University of Szeged, Szeged, Hungary
42
De Cesarei A, Loftus GR, Mastria S, Codispoti M. Understanding natural scenes: Contributions of image statistics. Neurosci Biobehav Rev 2017; 74:44-57. [DOI: 10.1016/j.neubiorev.2017.01.012]
43
Praß M, Grimsen C, Fahle M. Functional modulation of contralateral bias in early and object-selective areas after stroke of the occipital ventral cortices. Neuropsychologia 2017; 95:73-85. [PMID: 27956263] [DOI: 10.1016/j.neuropsychologia.2016.12.014]
Abstract
Object agnosia is a rare symptom, occurring mainly after bilateral damage of the ventral visual cortex. Most patients suffering from unilateral ventral lesions are clinically non-agnosic. Here, we studied the effect of unilateral occipito-temporal lesions on object categorization and its underlying neural correlates in visual areas. Thirteen non-agnosic stroke patients and twelve control subjects performed an event-related rapid object categorization task in the fMRI scanner where images were presented either to the left or to the right of a fixation point. Eight patients had intact central visual fields within at least 10° eccentricity while five patients showed an incomplete hemianopia. Patients made more errors than controls for both contra- and ipsilesional presentation, meaning that object categorization was impaired bilaterally in both patient groups. The activity in cortical visual areas is usually higher when a stimulus is presented contralaterally compared to presented ipsilaterally (contralateral bias). A region of interest analysis of early visual (V1-V4) and object-selective areas (lateral occipital complex, LOC; fusiform face area, FFA; and parahippocampal place area, PPA) revealed that the lesioned hemisphere of patients showed reduced contralateral bias in early visual areas and LOC. In contrast, virtually no contralateral bias in FFA and PPA was found. These findings indicate disturbed processing in the lesioned hemisphere, which might be related to the processing of visually presented objects. Thus, unilateral occipito-temporal damage leads to altered contralateral bias in the lesioned hemisphere, which might be the cause of impaired categorization performance in both visual hemifields in clinically non-agnosic patients. We conclude that both hemispheres need to be functionally intact for unimpaired object processing.
Affiliation(s)
- Maren Praß
- Center for Cognitive Science, Human Neurobiology, Bremen University, Hochschulring 18, 28359 Bremen, Germany
- Cathleen Grimsen
- Center for Cognitive Science, Human Neurobiology, Bremen University, Hochschulring 18, 28359 Bremen, Germany
- Manfred Fahle
- Center for Cognitive Science, Human Neurobiology, Bremen University, Hochschulring 18, 28359 Bremen, Germany
44
Craddock M, Oppermann F, Müller MM, Martinovic J. Modulation of microsaccades by spatial frequency during object categorization. Vision Res 2016; 130:48-56. [PMID: 27876511] [DOI: 10.1016/j.visres.2016.10.011]
Abstract
The organization of visual processing into a coarse-to-fine sequence based on the spatial frequency properties of the input forms an important facet of the object recognition process. During visual object categorization tasks, microsaccades occur frequently. One potential functional role of these eye movements is to resolve high spatial frequency information. To assess this hypothesis, we examined the rate, amplitude and speed of microsaccades in an object categorization task in which participants viewed object and non-object images and classified them as showing either natural objects, man-made objects or non-objects. Images were presented unfiltered (broadband; BB) or filtered to contain only low (LSF) or high spatial frequency (HSF) information. This allowed us to examine whether microsaccades were modulated independently by the presence of a high-level feature (the presence of an object) and by low-level stimulus characteristics (spatial frequency). We found a bimodal distribution of saccades based on their amplitude, with a split between smaller and larger microsaccades at 0.4° of visual angle. The rate of larger saccades (⩾0.4°) was higher for objects than non-objects, and higher for objects with high spatial frequency content (HSF and BB objects) than for LSF objects. No effects were observed for smaller microsaccades (<0.4°). This is consistent with a role for larger microsaccades in resolving HSF information for object identification, and with previous evidence that more microsaccades are directed towards informative image regions.
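The LSF/HSF manipulation described above is typically implemented by filtering in the Fourier domain. Here is a minimal sketch, assuming NumPy and a Gaussian transfer function; the study's actual cutoff frequencies and filter shape are not given here, so both are placeholders.

```python
import numpy as np

def sf_filter(img, cutoff_cpi, mode="low"):
    """Gaussian spatial-frequency filter for a 2-D grayscale image.
    cutoff_cpi is the cutoff in cycles per image; mode 'low' keeps
    frequencies below it (an LSF image), 'high' keeps those above it
    (an HSF image). The two outputs sum back to the original image."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None] * h  # vertical frequency, cycles/image
    fx = np.fft.fftfreq(w)[None, :] * w  # horizontal frequency, cycles/image
    lowpass = np.exp(-(np.hypot(fy, fx) / cutoff_cpi) ** 2)
    gain = lowpass if mode == "low" else 1.0 - lowpass
    return np.real(np.fft.ifft2(np.fft.fft2(img) * gain))
```

Broadband (BB) stimuli correspond to the unfiltered image; since the low-pass and high-pass gains sum to one at every frequency, LSF + HSF reconstructs BB exactly.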
Affiliation(s)
- Matt Craddock
- Institute of Psychology, University of Leipzig, Germany; School of Psychology, University of Leeds, UK
- Frank Oppermann
- Institute of Psychology, University of Leipzig, Germany; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Netherlands
45
Abstract
Vision in the fovea, the center of the visual field, is much more accurate and detailed than vision in the periphery. This is not in line with the rich phenomenology of peripheral vision. Here, we investigated a visual illusion that shows that detailed peripheral visual experience is partially based on a reconstruction of reality. Participants fixated on the center of a visual display in which central stimuli differed from peripheral stimuli. Over time, participants perceived that the peripheral stimuli changed to match the central stimuli, so that the display seemed uniform. We showed that a wide range of visual features, including shape, orientation, motion, luminance, pattern, and identity, are susceptible to this uniformity illusion. We argue that the uniformity illusion is the result of a reconstruction of sparse visual information (from the periphery) based on more readily available detailed visual information (from the fovea), which gives rise to a rich, but illusory, experience of peripheral vision.
46
Zhu W, Drewes J, Peatfield NA, Melcher D. Differential Visual Processing of Animal Images, with and without Conscious Awareness. Front Hum Neurosci 2016; 10:513. [PMID: 27790106 PMCID: PMC5061858 DOI: 10.3389/fnhum.2016.00513] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2016] [Accepted: 09/27/2016] [Indexed: 12/02/2022] Open
Abstract
The human visual system can quickly and efficiently extract categorical information from a complex natural scene. The rapid detection of animals in a scene is one compelling example of this phenomenon, and it suggests the automatic processing of at least some types of categories with little or no attentional requirements (Li et al., 2002, 2005). The aim of this study is to investigate whether this remarkable capability to categorize complex natural scenes exists in the absence of awareness, based on recent reports that “invisible” stimuli, which do not reach conscious awareness, can still be processed by the human visual system (Pasley et al., 2004; Williams et al., 2004; Fang and He, 2005; Jiang et al., 2006, 2007; Kaunitz et al., 2011a). In two experiments, we recorded event-related potentials (ERPs) in response to animal and non-animal/vehicle stimuli in both aware and unaware conditions in a continuous flash suppression (CFS) paradigm. Our results indicate that even in the “unseen” condition, the brain responds differently to animal and non-animal/vehicle images, consistent with rapid activation of animal-selective feature detectors prior to, or outside of, suppression by the CFS mask.
Affiliation(s)
- Weina Zhu
- School of Information Science, Yunnan University, Kunming, China; Department of Psychology, Giessen University, Giessen, Germany; Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy; Kunming Institute of Zoology, Chinese Academy of Sciences, Kunming, China
- Jan Drewes
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy
- Nicholas A Peatfield
- Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, BC, Canada
- David Melcher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy
47
Vanmarcke S, Calders F, Wagemans J. The Time-Course of Ultrarapid Categorization: The Influence of Scene Congruency and Top-Down Processing. Iperception 2016; 7:2041669516673384. [PMID: 27803794 PMCID: PMC5076752 DOI: 10.1177/2041669516673384] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Although categorization can take place at different levels of abstraction, classic studies on semantic labeling identified the basic level, for example, dog, as the entry point for categorization. Ultrarapid categorization tasks have contradicted these findings, indicating that participants are faster at detecting superordinate-level information, for example, animal, in a complex visual image. We argue that both seemingly contradictory findings can be reconciled within the framework of parallel distributed processing and its successor Leabra (Local, Error-driven and Associative, Biologically Realistic Algorithm). The current study aimed to verify this prediction in an ultrarapid categorization task with a dynamically changing presentation time (PT) for each briefly presented object, followed by a perceptual mask. Furthermore, we manipulated two defining task variables: level of categorization (basic vs. superordinate categorization) and object presentation mode (object-in-isolation vs. object-in-context). In contrast with previous ultrarapid categorization research, which focused on reaction time, we used accuracy as our main dependent variable. Results indicated a consistent superordinate processing advantage, coinciding with an overall improvement in performance with longer PT and a significantly more accurate detection of objects in isolation, compared with objects in context, at lower stimulus PT. This contextual disadvantage disappeared when PT increased, indicating that figure-ground separation with recurrent processing is vital for meaningful contextual processing to occur.
48
Explicit and implicit emotional processing in peripheral vision: A saccadic choice paradigm. Biol Psychol 2016; 119:91-100. [PMID: 27423626 DOI: 10.1016/j.biopsycho.2016.07.014] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2015] [Revised: 07/12/2016] [Accepted: 07/12/2016] [Indexed: 01/12/2023]
Abstract
We investigated explicit and implicit emotional processing in peripheral vision using saccadic choice tasks. Emotional-neutral pairs of scenes were presented peripherally at 10°, 30°, or 60° from fixation. The participants had to make a saccadic eye movement to the target scene: emotional vs neutral in the explicit task, and oval vs rectangular in the implicit task. In the explicit task, pleasant scenes were reliably categorized as emotional up to 60°, while performance for unpleasant scenes decreased between 10° and 30° and did not differ from chance at 60°. Categorization of neutral scenes did not differ from chance. Performance in the implicit task was significantly better for emotional targets than for neutral targets at 10°, and this beneficial effect of emotion persisted only for pleasant scenes at 30°. Thus, these findings show that explicit and implicit emotional processing in peripheral vision depends on the eccentricity and valence of the stimuli.
49
Bevilacqua A, Paas F, Krigbaum G. Effects of Motion in the Far Peripheral Visual Field on Cognitive Test Performance and Cognitive Load. Percept Mot Skills 2016; 122:452-69. [DOI: 10.1177/0031512516633344] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Cognitive load theory posits that limited attention is, in actuality, a limitation of working memory resources. The load theory of selective attention and cognitive control sees the interplay between attention and awareness as separate modifying functions that act on working memory. Reconciling the theoretical differences between these two theories has important implications for learning. Thirty-nine adult participants performed a cognitively demanding test, with and without movement in the far peripheral field. Although the effects of movement on cognitive load in this experiment were not statistically significant, men spent less time on the cognitive test in the peripheral movement condition than in the conditions without peripheral movement. No such difference was found for women. The implications of these results and recommendations for future research that extends the present study are presented.
Affiliation(s)
- Fred Paas
- Erasmus University Rotterdam, Rotterdam, the Netherlands; Early Start Research Institute, University of Wollongong, Australia
- Genomary Krigbaum
- Grand Canyon University, Phoenix, AZ, USA; Marian University College of Osteopathic Medicine, Indianapolis, IN, USA
50
Serre T. Models of visual categorization. Wiley Interdiscip Rev Cogn Sci 2016; 7:197-213. [DOI: 10.1002/wcs.1385] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/17/2013] [Revised: 01/12/2016] [Accepted: 01/13/2016] [Indexed: 11/08/2022]
Affiliation(s)
- Thomas Serre
- Cognitive, Linguistic & Psychological Sciences Department, Institute for Brain Sciences, Brown University, Providence, RI, USA