1
Yoon HK, Jung Y, Persichetti AS, Dilks DD. A scene-selective region in the superior parietal lobule for visually guided navigation. Cereb Cortex 2025; 35:bhaf082. PMID: 40264261; PMCID: PMC12014905; DOI: 10.1093/cercor/bhaf082.
Abstract
Growing evidence indicates that the occipital place area (OPA) is involved in "visually guided navigation." Here, we propose that a recently uncovered scene-selective region in the superior parietal lobule (SPL) is also involved in visually guided navigation. First, using functional magnetic resonance imaging (fMRI), we found that the SPL responds significantly more to scene stimuli than to face and object stimuli across two stimulus sets (dynamic and static), confirming its scene selectivity. Second, we found that the SPL, like the OPA, processes two kinds of information necessary for visually guided navigation: first-person perspective motion and sense (left/right) information in scenes. Third, resting-state fMRI data revealed that the SPL is preferentially connected to the OPA, compared to other scene-selective regions, indicating that the two regions are part of the same system. Fourth, analysis of previously published fMRI data showed that the SPL, like the OPA, responds significantly more while participants perform a visually guided navigation task than during either a scene categorization task or a baseline task, further supporting our hypothesis in an independent dataset. Taken together, these findings indicate the existence of a new scene-selective region for visually guided navigation and raise interesting questions about the precise role that the SPL, compared to the OPA, may play in it.
Affiliation(s)
- Hee Kyung Yoon
- Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA, 30322, United States
- Yaelan Jung
- Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA, 30322, United States
- Andrew S Persichetti
- Section on Cognitive Neuropsychology, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, United States
- Daniel D Dilks
- Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA, 30322, United States
2
Koc AN, Urgen BA, Afacan Y. Task-modulated neural responses in scene-selective regions of the human brain. Vision Res 2025; 227:108539. PMID: 39733756; DOI: 10.1016/j.visres.2024.108539.
Abstract
The study of scene perception is crucial to understanding how people interpret and interact with their environment, and how the environment impacts various cognitive functions. The literature so far has mainly focused on the impact of low-level and categorical properties of scenes and how they are represented in the scene-selective regions of the brain: the parahippocampal place area (PPA), retrosplenial cortex (RSC), and occipital place area (OPA). However, higher-level scene perception and the impact of behavioral goals is a developing research area. Moreover, stimulus selection has not been systematic and has mainly focused on outdoor environments. In this fMRI experiment, we adopted multiple behavioral tasks, selected real-life indoor stimuli with a systematic categorization approach, and used various multivariate analysis techniques to explain the neural modulation of scene perception in the scene-selective regions of the human brain. Participants (N = 21) performed categorization and approach-avoidance tasks during fMRI scans while viewing scenes from built-environment categories based on different affordances: (i) access and (ii) circulation elements, (iii) restrooms, and (iv) eating/seating areas. ROI-based classification analysis revealed that the OPA was significantly successful in decoding scene category regardless of the task, and that the task condition affected the category-decoding performance of all the scene-selective regions. Model-based representational similarity analysis (RSA) revealed that the activity patterns in scene-selective regions are best explained by task. These results contribute to the literature by extending the task and stimulus content of scene perception research and by uncovering the impact of behavioral goals on the scene-selective regions of the brain.
Affiliation(s)
- Aysu Nur Koc
- Department of Psychology, Justus Liebig University Giessen, Giessen, Germany; Interdisciplinary Neuroscience Program, Bilkent University, Ankara, Turkey
- Burcu A Urgen
- Interdisciplinary Neuroscience Program, Bilkent University, Ankara, Turkey; Department of Psychology, Bilkent University, Ankara, Turkey; Aysel Sabuncu Brain Research Center and National Magnetic Resonance Imaging Center, Bilkent University, Ankara, Turkey
- Yasemin Afacan
- Interdisciplinary Neuroscience Program, Bilkent University, Ankara, Turkey; Department of Interior Architecture and Environmental Design, Bilkent University, Ankara, Turkey; Aysel Sabuncu Brain Research Center and National Magnetic Resonance Imaging Center, Bilkent University, Ankara, Turkey
3
Guo J, Pratt J, Walther DB. No evidence for a privileged role of global ensemble statistics in rapid scene perception: A registered replication attempt. Atten Percept Psychophys 2025; 87:685-697. PMID: 39658730; DOI: 10.3758/s13414-024-02994-4.
Abstract
The nature of the visual processes underlying scene perception remains a hotly debated topic. According to one view, scene and object perception rely on similar neural mechanisms, and their processing pathways are tightly interlinked. According to another, scene gist might follow a separate pathway, relying primarily on global image properties. Recently, the latter idea was supported by a set of experiments using content priming as a probe into scene and object perception (Brady et al., Journal of Experimental Psychology: Human Perception and Performance, 43, 1160-1176, 2017). Those experiments showed that preserving only structureless global ensemble texture information in images of scenes could support rapid scene perception, whereas preserving the same information in images of objects failed to support object perception. We were intrigued by these results, since they are at odds with findings showing that scene content is primarily carried by the explicit encoding of scene structure as represented, for instance, by contours and their properties. To reconcile these results, we attempted to replicate the experiments. In our replication experiment, we failed to find any evidence for a privileged use of texture information for scene as opposed to object primes. We conclude that there is insufficient evidence for any fundamental difference in the processing pathways for object and scene perception: both rely on structural features that describe spatial relationships between constituent parts, as well as on texture information. To address this issue in the most rigorous manner possible, we here present the results of both a pilot experiment and a pre-registered replication attempt.
Affiliation(s)
- Jiongtian Guo
- Department of Psychology, University of Toronto, 100 St. George Street, Toronto, ON, M5S 3G3, Canada
- Jay Pratt
- Department of Psychology, University of Toronto, 100 St. George Street, Toronto, ON, M5S 3G3, Canada
- Dirk B Walther
- Department of Psychology, University of Toronto, 100 St. George Street, Toronto, ON, M5S 3G3, Canada
4
Naveilhan C, Saulay-Carret M, Zory R, Ramanoël S. Spatial Contextual Information Modulates Affordance Processing and Early Electrophysiological Markers of Scene Perception. J Cogn Neurosci 2024; 36:2084-2099. PMID: 39023371; DOI: 10.1162/jocn_a_02223.
Abstract
Scene perception allows humans to extract information from their environment and plan navigation efficiently. The automatic extraction of potential paths in a scene, also referred to as navigational affordance, is supported by scene-selective regions (SSRs) that enable efficient human navigation. Recent evidence suggests that the activity of these SSRs can be influenced by information from adjacent spatial memory areas. However, it remains unexplored how this contextual information could influence the extraction of bottom-up information, such as navigational affordances, from a scene, and what the underlying neural dynamics are. We therefore analyzed event-related potentials (ERPs) in 26 young adults performing scene and spatial memory tasks in artificially generated rooms with varying numbers and locations of available doorways. We found that increasing the number of navigational affordances impaired performance only in the spatial memory task. ERP results showed a similar pattern of activity for both tasks, but with increased P2 amplitude in the spatial memory task compared with the scene memory task. Finally, we observed no modulation of the P2 component by the number of affordances in either task. This modulation of early markers of visual processing suggests that the dynamics of SSR activity are influenced by a priori knowledge, with increased amplitude when participants have more contextual information about the perceived scene. Overall, our results suggest that prior spatial knowledge about the scene, such as the location of a goal, modulates early cortical activity associated with SSRs, and that this information may interact with bottom-up processing of scene content, such as navigational affordances.
Affiliation(s)
- Raphaël Zory
- LAMHESS, Université Côte d'Azur, Nice, France
- Institut Universitaire de France (IUF)
- Stephen Ramanoël
- LAMHESS, Université Côte d'Azur, Nice, France
- INSERM, CNRS, Institut de la Vision, Sorbonne Université, Paris, France
5
Kang J, Park S. Combined representation of visual features in the scene-selective cortex. Behav Brain Res 2024; 471:115110. PMID: 38871131; PMCID: PMC11375617; DOI: 10.1016/j.bbr.2024.115110.
Abstract
Visual features of separable dimensions conjoin to represent an integrated entity. We investigated how visual features bind to form a complex visual scene, focusing on two features important for visually guided navigation: direction and distance. Previous work has shown that the directions and distances of navigable paths are coded in the occipital place area (OPA). Using functional magnetic resonance imaging (fMRI), we tested how these separate features are concurrently represented in the OPA. Participants saw eight types of scenes: four with one path and four with two paths. In single-path scenes, the path direction was either to the left or to the right; in double-path scenes, both directions were present. A glass wall was placed in some paths to restrict navigational distance. To test how the OPA represents path directions and distances, we took three approaches. First, the independent-features approach examined whether the OPA codes each direction and distance. Second, the integrated-features approach explored how directions and distances are integrated into path units, as compared to pooled features, using double-path scenes. Finally, the integrated-paths approach asked how separate paths are combined into a scene. Using multi-voxel pattern similarity analysis, we found that the OPA's representations of single-path scenes were similar to those of other single-path scenes with either the same direction or the same distance. Representations of double-path scenes were similar to the combination of the two constituent single paths, as combined units of direction and distance rather than as a pooled representation of all features. These results show that the OPA combines the two features to form path units, which are then used to build multiple-path scenes. Altogether, these results suggest that visually guided navigation may be supported by the OPA, which automatically and efficiently combines multiple features relevant for navigation into a single representation, a "navigation file."
Affiliation(s)
- Jisu Kang
- Department of Psychology, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Soojin Park
- Department of Psychology, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
6
Park J, Soucy E, Segawa J, Mair R, Konkle T. Immersive scene representation in human visual cortex with ultra-wide-angle neuroimaging. Nat Commun 2024; 15:5477. PMID: 38942766; PMCID: PMC11213904; DOI: 10.1038/s41467-024-49669-0.
Abstract
While human vision spans 220°, traditional functional MRI setups display images only within the central 10-15°. Thus, it remains unknown how the brain represents a scene perceived across the full visual field. Here, we introduce a method for ultra-wide-angle display and probe signatures of immersive scene representation. An unobstructed view of 175° is achieved by bouncing the projected image off angled mirrors onto a custom-built curved screen. To avoid perceptual distortion, scenes are created with a wide field of view from custom virtual environments. We find that immersive scene representation drives medial cortex with far-peripheral preferences but shows minimal modulation in classic scene regions. Further, scene- and face-selective regions maintain their content preferences even with extreme far-peripheral stimulation, highlighting that not all far-peripheral information is automatically integrated into the computations of scene regions. This work provides clarifying evidence on content versus peripheral preferences in scene representation and opens new avenues for research on immersive vision.
Affiliation(s)
- Jeongho Park
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Edward Soucy
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Jennifer Segawa
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Ross Mair
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Talia Konkle
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Center for Brain Science, Harvard University, Cambridge, MA, USA
- Kempner Institute for Biological and Artificial Intelligence, Harvard University, Boston, MA, USA
7
Kamps FS, Chen EM, Kanwisher N, Saxe R. Representation of navigational affordances and ego-motion in the occipital place area. bioRxiv 2024:2024.04.30.591964. Preprint. PMID: 38746251; PMCID: PMC11092631; DOI: 10.1101/2024.04.30.591964.
Abstract
Humans effortlessly use vision to plan and guide navigation through the local environment, or "scene". A network of three cortical regions responds selectively to visual scene information: the occipital (OPA), parahippocampal (PPA), and medial place areas (MPA). How this network supports visually guided navigation, however, is unclear. Recent evidence suggests that one region in particular, the OPA, supports visual representations for navigation, while PPA and MPA support other aspects of scene processing. However, most previous studies tested only static scene images, which lack the dynamic experience of navigating through scenes. We used dynamic movie stimuli to test whether OPA, PPA, and MPA represent two critical kinds of navigationally relevant information: navigational affordances (e.g., can I walk to the left, to the right, or both?) and ego-motion (e.g., am I walking forward or backward? turning left or right?). We found that OPA is sensitive to both affordances and ego-motion, as well as to conflict between these cues - e.g., turning toward versus away from an open doorway. These effects were significantly weaker or absent in PPA and MPA. Responses in OPA were also dissociable from those in early visual cortex, consistent with the idea that OPA responses are not merely explained by lower-level visual features. OPA responses to affordances and ego-motion were stronger in the contralateral than in the ipsilateral visual field, suggesting that OPA encodes navigationally relevant information within an egocentric reference frame. Taken together, these results support the hypothesis that OPA contains visual representations that are useful for planning and guiding navigation through scenes.
8
Jung Y, Hsu D, Dilks DD. "Walking selectivity" in the occipital place area in 8-year-olds, not 5-year-olds. Cereb Cortex 2024; 34:bhae101. PMID: 38494889; PMCID: PMC10945045; DOI: 10.1093/cercor/bhae101.
Abstract
A recent neuroimaging study in adults found that the occipital place area (OPA) - a cortical region involved in "visually guided navigation" (i.e., moving about the immediately visible environment, avoiding boundaries and obstacles) - represents visual information about walking, not crawling, suggesting that OPA is late developing, emerging only once children are walking, not beforehand. But when precisely does this "walking selectivity" in OPA emerge: when children first begin to walk in early childhood, or, perhaps counterintuitively, much later in childhood, around 8 years of age, when children walk in an adult-like manner? To directly test these two hypotheses, using functional magnetic resonance imaging (fMRI) in two groups of children, 5- and 8-year-olds, we measured the responses in OPA to first-person perspective videos moving through scenes from a "walking" perspective, as well as from three control perspectives ("crawling," "flying," and "scrambled"). We found that the OPA in 8-year-olds - like that in adults - exhibited walking selectivity (i.e., responding significantly more to the walking videos than to any of the others, with no significant differences across the crawling, flying, and scrambled videos), while the OPA in 5-year-olds exhibited no walking selectivity. These findings reveal that OPA undergoes protracted development, with walking selectivity emerging only around 8 years of age.
Affiliation(s)
- Yaelan Jung
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
- Debbie Hsu
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
- Daniel D Dilks
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
9
Park J, Soucy E, Segawa J, Mair R, Konkle T. Immersive scene representation in human visual cortex with ultra-wide angle neuroimaging. bioRxiv 2024:2023.05.14.540275. Preprint. PMID: 37292806; PMCID: PMC10245572; DOI: 10.1101/2023.05.14.540275.
Abstract
While humans experience the visual environment in a panoramic 220° view, traditional functional MRI setups are limited to displaying images, like postcards, in the central 10-15° of the visual field. Thus, it remains unknown how a scene is represented in the brain when perceived across the full visual field. Here, we developed a novel method for ultra-wide-angle visual presentation and probed for signatures of immersive scene representation. To accomplish this, we bounced the projected image off angled mirrors directly onto a custom-built curved screen, creating an unobstructed view of 175°. Scene images were created from custom-built virtual environments with a compatible wide field of view to avoid perceptual distortion. We found that immersive scene representation drove medial cortex with far-peripheral preferences but, surprisingly, had little effect on classic scene regions: scene regions showed relatively minimal modulation over dramatic changes in visual size. Further, we found that scene- and face-selective regions maintain their content preferences even under conditions of central scotoma, when only the extreme far-peripheral visual field is stimulated. These results highlight that not all far-peripheral information is automatically integrated into the computations of scene regions, and that there are routes to high-level visual areas that do not require direct stimulation of the central visual field. Broadly, this work provides new clarifying evidence on content versus peripheral preferences in scene representation and opens new neuroimaging research avenues for understanding immersive visual representation.
Affiliation(s)
- Ross Mair
- Center for Brain Science, Harvard University
- Department of Radiology, Harvard Medical School
- Department of Radiology, Massachusetts General Hospital
- Talia Konkle
- Department of Psychology, Harvard University
- Center for Brain Science, Harvard University
- Kempner Institute for Biological and Artificial Intelligence, Harvard University
10
Chai XJ, Tang L, Gabrieli JDE, Ofen N. From vision to memory: How scene-sensitive regions support episodic memory formation during child development. Dev Cogn Neurosci 2024; 65:101340. PMID: 38218015; PMCID: PMC10825658; DOI: 10.1016/j.dcn.2024.101340.
Abstract
Previous brain imaging studies have identified three brain regions that respond selectively to visual scenes: the parahippocampal place area (PPA), the occipital place area (OPA), and the retrosplenial cortex (RSC). There is growing evidence that these scene-sensitive regions process different types of scene information and may follow different developmental timelines in supporting scene perception. How these regions support memory functions during child development is largely unknown. We investigated PPA, OPA, and RSC activations associated with episodic memory formation in childhood (5-7 years of age) and young adulthood, using a subsequent scene memory paradigm and a functional localizer for scenes. PPA, OPA, and RSC subsequent memory activation and functional connectivity differed between children and adults. Subsequent memory effects were found in the activations of all three scene regions in adults. In children, however, robust subsequent memory effects were found only in the PPA. Functional connectivity during successful encoding was significant among the three regions in adults, but not in children. PPA subsequent memory activations and PPA-RSC subsequent memory functional connectivity correlated with accuracy in adults, but not in children. These age-related differences add new evidence linking the protracted development of the scene-sensitive regions to the protracted development of episodic memory.
Affiliation(s)
- Xiaoqian J Chai
- Department of Neurology and Neurosurgery, McGill University, Canada
- Lingfei Tang
- Department of Psychology and the Institute of Gerontology, Wayne State University, USA
- John DE Gabrieli
- Department of Brain and Cognitive Sciences and McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Noa Ofen
- Department of Psychology and the Institute of Gerontology, Wayne State University, USA; Center for Vital Longevity and School of Behavioral and Brain Sciences, University of Texas at Dallas, Dallas, TX, USA
11
Kamps FS, Rennert RJ, Radwan SF, Wahab S, Pincus JE, Dilks DD. Dissociable Cognitive Systems for Recognizing Places and Navigating through Them: Developmental and Neuropsychological Evidence. J Neurosci 2023; 43:6320-6329. PMID: 37580121; PMCID: PMC10490455; DOI: 10.1523/jneurosci.0153-23.2023.
Abstract
Recent neural evidence suggests that the human brain contains dissociable systems for "scene categorization" (i.e., recognizing a place as a particular kind of place, for example, a kitchen), including the parahippocampal place area, and "visually guided navigation" (e.g., finding our way through a kitchen without running into the kitchen walls or banging into the kitchen table), including the occipital place area. However, converging behavioral data - for instance, whether scene categorization and visually guided navigation abilities develop along different timelines, and whether there is differential breakdown under neurologic deficit - would provide even stronger support for this two-scene-systems hypothesis. Thus, here we tested scene categorization and visually guided navigation abilities in 131 typically developing children between 4 and 9 years of age, as well as in 46 adults with Williams syndrome, a developmental disorder with known impairment on "action" tasks yet relative sparing on "perception" tasks in object processing. We found that (1) visually guided navigation develops later than scene categorization, and (2) adults with Williams syndrome are impaired in visually guided navigation, but not scene categorization, relative to mental-age-matched children. Together, these findings provide the first developmental and neuropsychological evidence for dissociable cognitive systems for recognizing places and navigating through them.
SIGNIFICANCE STATEMENT: Two decades ago, Milner and Goodale showed us that identifying objects and manipulating them involve distinct cognitive and neural systems. Recent neural evidence suggests that the same may be true of our interactions with our environment: identifying places and navigating through them are dissociable systems. Here we provide converging behavioral evidence supporting this two-scene-systems hypothesis, finding both differential development and differential breakdown of "scene categorization" and "visually guided navigation." This finding suggests that the division of labor between perception and action systems is a general organizing principle for the visual system, not just a principle of the object-processing system in particular.
Affiliation(s)
- Frederik S Kamps
- Department of Psychology, Emory University, Atlanta, Georgia 30322
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Samaher F Radwan
- Department of Psychology, Emory University, Atlanta, Georgia 30322
- Stephanie Wahab
- Department of Psychology, Emory University, Atlanta, Georgia 30322
- Jordan E Pincus
- Department of Psychology, Emory University, Atlanta, Georgia 30322
- Daniel D Dilks
- Department of Psychology, Emory University, Atlanta, Georgia 30322
12
Kang J, Park S. Combined representation of visual features in the scene-selective cortex. bioRxiv 2023:2023.07.24.550280. Preprint. PMID: 37546776; PMCID: PMC10402097; DOI: 10.1101/2023.07.24.550280.
Abstract
Visual features of separable dimensions, like color and shape, conjoin to represent an integrated entity. We investigated how visual features bind to form a complex visual scene, focusing on features important for visually guided navigation: direction and distance. Previous work has shown that the directions and distances of navigable paths are coded in the occipital place area (OPA). Using functional magnetic resonance imaging (fMRI), we tested how these separate features are concurrently represented in the OPA. Participants saw eight types of scenes: four with one path and four with two paths. In single-path scenes, the path direction was either to the left or to the right; in double-path scenes, both directions were present. Each path contained a glass wall located either near or far, changing the navigational distance. To test how the OPA represents paths in terms of direction and distance features, we took three approaches. First, the independent-features approach examined whether the OPA codes directions and distances independently in single-path scenes. Second, the integrated-features approach explored how directions and distances are integrated into path units, as compared to pooled features, using double-path scenes. Finally, the integrated-paths approach asked how separate paths are combined into a scene. Using multi-voxel pattern similarity analysis, we found that the OPA's representations of single-path scenes were similar to those of other single-path scenes with either the same direction or the same distance. Representations of double-path scenes were similar to the combination of the two constituent single paths, as combined units of direction and distance rather than as a pooled representation of all features. These results show that the OPA combines the two features to form path units, which are then used to build multiple-path scenes. Altogether, these results suggest that visually guided navigation may be supported by the OPA, which automatically and efficiently combines multiple features relevant for navigation into a single representation, a "navigation file."
Affiliation(s)
- Jisu Kang
- Department of Psychology, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Soojin Park
- Department of Psychology, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
13
Jones CM, Byland J, Dilks DD. The occipital place area represents visual information about walking, not crawling. Cereb Cortex 2023; 33:7500-7505. PMID: 36918999; PMCID: PMC10267618; DOI: 10.1093/cercor/bhad055.
Abstract
Recent work has shown that the occipital place area (OPA)-a scene-selective region in adult humans-supports "visually guided navigation" (i.e., moving about the local visual environment while avoiding boundaries and obstacles). But what is the precise role of the OPA in visually guided navigation? Considering that humans first move about their local environments by crawling and only later by walking, one possibility is that the OPA is involved in both modes of locomotion. Another possibility is that the OPA is specialized for walking only, since walking and crawling are different kinds of locomotion. To test these possibilities, we measured responses in the OPA to first-person perspective videos from both "walking" and "crawling" perspectives, as well as to two conditions in which humans do not navigate ("flying" and "scrambled"). We found that the OPA responded more to walking videos than to any of the others, including crawling, and did not respond more to crawling videos than to flying or scrambled ones. These results (i) reveal that the OPA represents visual information only from a walking (not crawling) perspective, (ii) suggest crawling is processed by a different neural system, and (iii) raise questions about how the OPA develops; namely, the OPA may never have supported crawling, which is consistent with the hypothesis that the OPA undergoes protracted development.
Affiliation(s)
- Christopher M Jones
- Department of Psychology, Emory University, Atlanta, GA 30322, United States
- Joshua Byland
- Department of Psychology, Emory University, Atlanta, GA 30322, United States
- Daniel D Dilks
- Department of Psychology, Emory University, Atlanta, GA 30322, United States
14
Okrent Smolar AL, Gagrani M, Ghate D. Peripheral visual field loss and activities of daily living. Curr Opin Neurol 2023; 36:19-25. [PMID: 36409221] [DOI: 10.1097/wco.0000000000001125]
Abstract
PURPOSE OF REVIEW Peripheral visual field (VF) loss affects 13% of the population over 65. Its effect on activities of daily living and higher-order visual processing is as important as it is inadequately understood. The purpose of this review is to summarize the available literature on the impact of peripheral vision loss on driving, reading, face recognition, scene recognition, and scene navigation. RECENT FINDINGS In this review, glaucoma and retrochiasmal cortical damage are used as examples of peripheral field loss that typically spares central vision, with patterns respecting the horizontal and vertical meridians, respectively. In both glaucoma and retrochiasmal damage, peripheral field loss causes driving difficulty - especially with lane maintenance - leading to driving cessation, loss of independence, and depression. Likewise, peripheral field loss can lead to slower reading speeds, decreased enjoyment of reading, and anxiety. In glaucoma and retrochiasmal field loss, face processing is impaired, which affects social functioning. Finally, scene recognition and navigation are also adversely affected, impairing wayfinding and hazard detection and leading to decreased independence as well as more frequent injury. SUMMARY Peripheral VF loss is an under-recognized cause of patient distress and disability. Not all peripheral field loss is the same: different patterns of loss affect activities of daily living (ADL) and visual processing in particular ways. Future research should aim to further characterize patterns of impaired ADL and visual processing, their correlation with types of field loss, and the associated mechanisms.
Affiliation(s)
- Meghal Gagrani
- Department of Ophthalmology, University of Pittsburgh School of Medicine Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Deepta Ghate
- Department of Ophthalmology, Emory University School of Medicine, Atlanta, Georgia
15
Three cortical scene systems and their development. Trends Cogn Sci 2022; 26:117-127. [PMID: 34857468] [PMCID: PMC8770598] [DOI: 10.1016/j.tics.2021.11.002]
Abstract
Since the discovery of three scene-selective regions in the human brain, a central assumption has been that all three regions directly support navigation. We propose instead that cortical scene-processing regions support three distinct computational goals, one of which is not for navigation at all: (i) the parahippocampal place area supports scene categorization, which involves recognizing the kind of place we are in; (ii) the occipital place area supports visually guided navigation, which involves finding our way through the immediately visible environment, avoiding boundaries and obstacles; and (iii) the retrosplenial complex supports map-based navigation, which involves finding our way from a specific place to some distant, out-of-sight place. We further hypothesize that these systems develop along different timelines, with both navigation systems developing more slowly than the scene categorization system.
16
Wilder J, Rezanejad M, Dickinson S, Siddiqi K, Jepson A, Walther DB. Neural correlates of local parallelism during naturalistic vision. PLoS One 2022; 17:e0260266. [PMID: 35061699] [PMCID: PMC8782314] [DOI: 10.1371/journal.pone.0260266]
Abstract
Human observers can rapidly perceive complex real-world scenes. Grouping visual elements into meaningful units is an integral part of this process. Yet, so far, the neural underpinnings of perceptual grouping have only been studied with simple lab stimuli. We here uncover the neural mechanisms of one important perceptual grouping cue, local parallelism. Using a new, image-computable algorithm for detecting local symmetry in line drawings and photographs, we manipulated the local parallelism content of real-world scenes. We decoded scene categories from patterns of brain activity obtained via functional magnetic resonance imaging (fMRI) in 38 human observers while they viewed the manipulated scenes. Decoding was significantly more accurate for scenes containing strong local parallelism compared to weak local parallelism in the parahippocampal place area (PPA), indicating a central role of parallelism in scene perception. To investigate the origin of the parallelism signal, we performed a model-based fMRI analysis of the public BOLD5000 dataset, looking for voxels whose activation time course matches that of the locally parallel content of the 4916 photographs viewed by the participants in the experiment. We found a strong relationship with average local symmetry in visual areas V1-4, PPA, and retrosplenial cortex (RSC). Notably, the parallelism-related signal peaked first in V4, suggesting V4 as the site for extracting parallelism from the visual input. We conclude that local parallelism is a perceptual grouping cue that influences neuronal activity throughout the visual hierarchy, presumably starting at V4. Parallelism plays a key role in the representation of scene categories in PPA.
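Decoding scene categories from activity patterns, as described above, typically means training a classifier on some scanner runs and testing it on held-out runs. A minimal, hypothetical sketch (synthetic data and a simple correlation-based nearest-centroid decoder; the study's actual classifier and preprocessing are not specified here) is:

```python
# Illustrative leave-one-run-out decoding sketch on synthetic voxel data.
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_runs = 100, 6
categories = ["beach", "forest", "city"]  # hypothetical category labels

# Synthetic data: each category has a stable voxel pattern plus run noise.
prototypes = {c: rng.normal(size=n_voxels) for c in categories}
data = {(c, r): prototypes[c] + rng.normal(scale=1.0, size=n_voxels)
        for c in categories for r in range(n_runs)}

correct = 0
for test_run in range(n_runs):
    # Training centroids: mean pattern per category over the other runs.
    centroids = {c: np.mean([data[(c, r)] for r in range(n_runs)
                             if r != test_run], axis=0)
                 for c in categories}
    for c in categories:
        # Assign the held-out pattern to the most correlated centroid.
        sims = {k: np.corrcoef(data[(c, test_run)], v)[0, 1]
                for k, v in centroids.items()}
        correct += (max(sims, key=sims.get) == c)

accuracy = correct / (n_runs * len(categories))
print(accuracy)  # chance level here would be 1/3
```

In the study, accuracies like this would be computed separately for strong- and weak-parallelism scenes and compared within each region of interest.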
Affiliation(s)
- Morteza Rezanejad
- University of Toronto, Toronto, Canada
- McGill University, Montreal, Canada
- Sven Dickinson
- University of Toronto, Toronto, Canada
- Samsung Toronto AI Research Center, Toronto, Canada
- Vector Institute, Toronto, Canada
- Allan Jepson
- University of Toronto, Toronto, Canada
- Samsung Toronto AI Research Center, Toronto, Canada
17
Harel A, Nador JD, Bonner MF, Epstein RA. Early Electrophysiological Markers of Navigational Affordances in Scenes. J Cogn Neurosci 2021; 34:397-410. [PMID: 35015877] [DOI: 10.1162/jocn_a_01810]
Abstract
Scene perception and spatial navigation are interdependent cognitive functions, and there is increasing evidence that cortical areas that process perceptual scene properties also carry information about the potential for navigation in the environment (navigational affordances). However, the temporal stages by which visual information is transformed into navigationally relevant information are not yet known. We hypothesized that navigational affordances are encoded during perceptual processing and therefore should modulate early visually evoked ERPs, especially the scene-selective P2 component. To test this idea, we recorded ERPs from participants while they passively viewed computer-generated room scenes matched in visual complexity. By simply changing the number of doors (no doors, 1 door, 2 doors, 3 doors), we were able to systematically vary the number of pathways that afford movement in the local environment, while keeping the overall size and shape of the environment constant. We found that rooms with no doors evoked a higher P2 response than rooms with three doors, consistent with prior research reporting higher P2 amplitude to closed relative to open scenes. Moreover, we found P2 amplitude scaled linearly with the number of doors in the scenes. Navigability effects on the ERP waveform were also observed in a multivariate analysis, which showed significant decoding of the number of doors and their location at earlier time windows. Together, our results suggest that navigational affordances are represented in the early stages of scene perception. This complements research showing that the occipital place area automatically encodes the structure of navigable space and strengthens the link between scene perception and navigation.
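The linear-scaling claim above (P2 amplitude decreasing with the number of doors) amounts to fitting a line to amplitude as a function of door count. A sketch with made-up illustrative amplitude values (not data from the study) shows the shape of that check:

```python
# Hypothetical P2 amplitudes (microvolts) for 0-3 doors; values invented
# purely to illustrate the linear-fit logic, not taken from the paper.
import numpy as np

doors = np.array([0, 1, 2, 3])
p2_amplitude = np.array([5.1, 4.4, 3.8, 3.0])

# Least-squares line: a negative slope means P2 shrinks as doors are added.
slope, intercept = np.polyfit(doors, p2_amplitude, 1)
predicted = slope * doors + intercept
r = np.corrcoef(p2_amplitude, predicted)[0, 1]
print(slope, r ** 2)
```

A real analysis would fit this per participant and test the slopes against zero at the group level.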
18
Koppelaar H, Kordestani-Moghadam P, Kouhkani S, Irandoust F, Segers G, de Haas L, Bantje T, van Warmerdam M. Proof of Concept of Novel Visuo-Spatial-Motor Fall Prevention Training for Old People. Geriatrics (Basel) 2021; 6:66. [PMID: 34210015] [PMCID: PMC8293049] [DOI: 10.3390/geriatrics6030066]
Abstract
Falls in the geriatric population are among the most important causes of disability in this age group, and their consequences impose a considerable economic burden on health and insurance systems. This study was conducted by a multidisciplinary team with the aim of evaluating the effect of visuo-spatial-motor training on the prevention of falls in older adults. The subjects were 31 volunteers aged 60 to 92 years, studied in three groups: (1) a group under standard physical training, (2) a group under visuo-spatial-motor interventions, and (3) a control group (without any intervention). The results showed that the visuo-spatial-motor exercises significantly reduced the subjects' risk of falls.
Affiliation(s)
- Henk Koppelaar
- Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, 2628 CD Delft, The Netherlands
- Sareh Kouhkani
- Department of Mathematics, Islamic University Shabestar Branch, Shabestar, Iran
- Farnoosh Irandoust
- Department of Ophthalmology, Lorestan University of Medical Sciences, Khorramabad, Iran
- Gijs Segers
- Gymi Sports & Visual Performance, 4907 BC Oosterhout, The Netherlands
- Lonneke de Haas
- Monné Physical Care and Exercise, 4815 HD Breda, The Netherlands
- Thijmen Bantje
- Monné Physical Care and Exercise, 4815 HD Breda, The Netherlands
19
Suzuki S, Kamps FS, Dilks DD, Treadway MT. Two scene navigation systems dissociated by deliberate versus automatic processing. Cortex 2021; 140:199-209. [PMID: 33992908] [DOI: 10.1016/j.cortex.2021.03.027]
Abstract
Successfully navigating the world requires avoiding boundaries and obstacles in one's immediately-visible environment, as well as finding one's way to distant places in the broader environment. Recent neuroimaging studies suggest that these two navigational processes involve distinct cortical scene processing systems, with the occipital place area (OPA) supporting navigation through the local visual environment, and the retrosplenial complex (RSC) supporting navigation through the broader spatial environment. Here we hypothesized that these systems are distinguished not only by the scene information they represent (i.e., the local visual versus broader spatial environment), but also based on the automaticity of the process they involve, with navigation through the broader environment (including RSC) operating deliberately, and navigation through the local visual environment (including OPA) operating automatically. We tested this hypothesis using fMRI and a maze-navigation paradigm, where participants navigated two maze structures (complex or simple, testing representation of the broader spatial environment) under two conditions (active or passive, testing deliberate versus automatic processing). Consistent with the hypothesis that RSC supports deliberate navigation through the broader environment, RSC responded significantly more to complex than simple mazes during active, but not passive navigation. By contrast, consistent with the hypothesis that OPA supports automatic navigation through the local visual environment, OPA responded strongly even during passive navigation, and did not differentiate between active versus passive conditions. Taken together, these findings suggest the novel hypothesis that navigation through the broader spatial environment is deliberate, whereas navigation through the local visual environment is automatic, shedding new light on the dissociable functions of these systems.
Affiliation(s)
- Shosuke Suzuki
- Department of Psychology, Emory University, Atlanta, GA, United States
- Frederik S Kamps
- Department of Psychology, Emory University, Atlanta, GA, United States; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Daniel D Dilks
- Department of Psychology, Emory University, Atlanta, GA, United States
- Michael T Treadway
- Department of Psychology, Emory University, Atlanta, GA, United States; Department of Psychiatry and Behavioral Sciences, Emory University, Atlanta, GA, United States
20
Cheng A, Walther DB, Park S, Dilks DD. Concavity as a diagnostic feature of visual scenes. Neuroimage 2021; 232:117920. [PMID: 33652147] [PMCID: PMC8256888] [DOI: 10.1016/j.neuroimage.2021.117920]
Abstract
Despite over two decades of research on the neural mechanisms underlying human visual scene, or place, processing, it remains unknown what exactly a “scene” is. Intuitively, we are always inside a scene, whereas we interact with objects from the outside. Hence, we hypothesize that one diagnostic feature of a scene may be concavity, portraying “inside”, and predict that if concavity is a scene-diagnostic feature, then: 1) images that depict concavity, even non-scene images (e.g., the “inside” of an object, i.e., a concave object), will be behaviorally categorized as scenes more often than those that depict convexity, and 2) the cortical scene-processing system will respond more to concave images than to convex images. As predicted, participants categorized concave objects as scenes more often than convex objects, and, using functional magnetic resonance imaging (fMRI), two scene-selective cortical regions (the parahippocampal place area, PPA, and the occipital place area, OPA) responded significantly more to concave than to convex objects. Surprisingly, we found no behavioral or neural differences between images of concave versus convex buildings. However, in a follow-up experiment using tightly controlled images, we unmasked a selective sensitivity to the concavity over convexity of scene boundaries (i.e., walls) in PPA and OPA. Furthermore, we found that even highly impoverished line drawings of concave shapes are behaviorally categorized as scenes more often than convex shapes. Together, these results provide converging behavioral and neural evidence that concavity is a diagnostic feature of visual scenes.
Affiliation(s)
- Annie Cheng
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
- Dirk B Walther
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Soojin Park
- Department of Psychology, Yonsei University, Seoul, Republic of Korea
- Daniel D Dilks
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
21
Ramanoël S, Durteste M, Bécu M, Habas C, Arleo A. Differential Brain Activity in Regions Linked to Visuospatial Processing During Landmark-Based Navigation in Young and Healthy Older Adults. Front Hum Neurosci 2020; 14:552111. [PMID: 33240060] [PMCID: PMC7668216] [DOI: 10.3389/fnhum.2020.552111]
Abstract
Older adults have difficulties in navigating unfamiliar environments and updating their wayfinding behavior when faced with blocked routes. This decline in navigational capabilities has traditionally been ascribed to memory impairments and dysexecutive function, whereas the impact of visual aging has often been overlooked. The ability to perceive visuospatial information such as salient landmarks is essential to navigating efficiently. To date, the functional and neurobiological factors underpinning landmark processing in aging remain insufficiently characterized. To address this issue, functional magnetic resonance imaging (fMRI) was used to investigate the brain activity associated with landmark-based navigation in young and healthy older participants. The performances of 25 young adults (μ = 25.4 years, σ = 2.7; seven females) and 17 older adults (μ = 73.0 years, σ = 3.9; 10 females) were assessed in a virtual-navigation task in which they had to orient using salient landmarks. The underlying whole-brain patterns of activity as well as the functional roles of specific cerebral regions involved in landmark processing, namely the parahippocampal place area (PPA), the occipital place area (OPA), and the retrosplenial cortex (RSC), were analyzed. Older adults' navigational abilities were overall diminished compared to young adults. Also, the two age groups relied on distinct navigational strategies to solve the task. Better performances during landmark-based navigation were associated with increased neural activity in an extended neural network comprising several cortical and cerebellar regions. Direct comparisons between age groups revealed that young participants had greater anterior temporal activity. Also, only young adults showed significant activity in occipital areas corresponding to the cortical projection of the central visual field during landmark-based navigation. 
The region-of-interest analysis revealed increased OPA activation in older participants during the landmark condition. There were no significant between-group differences in PPA and RSC activations. These preliminary results hint at the possibility that aging diminishes fine-grained information processing in occipital and temporal regions, thus hindering the capacity to use landmarks adequately for navigation. Bearing in mind its exploratory nature, this work contributes to a better comprehension of the neural dynamics subtending landmark-based navigation and provides new insights into the impact of age-related visuospatial processing differences on navigation capabilities.
Affiliation(s)
- Stephen Ramanoël
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Faculty of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
- University of Côte d’Azur, LAMHESS, Nice, France
- Marion Durteste
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Marcia Bécu
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Angelo Arleo
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
22
Sulpizio V, Galati G, Fattori P, Galletti C, Pitzalis S. A common neural substrate for processing scenes and egomotion-compatible visual motion. Brain Struct Funct 2020; 225:2091-2110. [PMID: 32647918] [PMCID: PMC7473967] [DOI: 10.1007/s00429-020-02112-8]
Abstract
Neuroimaging studies have revealed two separate classes of category-selective regions specialized in optic flow (egomotion-compatible) processing and in scene/place perception. Despite the importance of both optic flow and scene/place recognition for estimating changes in position and orientation within the environment during self-motion, the possible functional link between egomotion- and scene-selective regions has not yet been established. Here we reanalyzed functional magnetic resonance images from a large sample of participants performing two well-known “localizer” fMRI experiments, consisting of passive viewing of navigationally relevant stimuli such as buildings and places (scene/place stimulus) and coherently moving fields of dots simulating the visual stimulation during self-motion (flow fields). After interrogating the egomotion-selective areas with respect to the scene/place stimulus and the scene-selective areas with respect to flow fields, we found that the egomotion-selective areas V6+ and pIPS/V3A responded bilaterally more to scenes/places compared to faces, and all the scene-selective areas (parahippocampal place area or PPA, retrosplenial complex or RSC, and occipital place area or OPA) responded more to egomotion-compatible optic flow compared to random motion. The conjunction analysis between scene/place and flow field stimuli revealed that the most important focus of common activation was found in the dorsolateral parieto-occipital cortex, spanning the scene-selective OPA and the egomotion-selective pIPS/V3A. Individual inspection of the relative locations of these two regions revealed a partial overlap and a similar response profile to an independent low-level visual motion stimulus, suggesting that OPA and pIPS/V3A may be part of a unique motion-selective complex specialized in encoding both egomotion- and scene-relevant information, likely for the control of navigation in a structured environment.
Affiliation(s)
- Valentina Sulpizio
- Department of Biomedical and Neuromotor Sciences-DIBINEM, University of Bologna, Piazza di Porta San Donato 2, 40126, Bologna, Italy; Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy
- Gaspare Galati
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Brain Imaging Laboratory, Department of Psychology, Sapienza University, Rome, Italy
- Patrizia Fattori
- Department of Biomedical and Neuromotor Sciences-DIBINEM, University of Bologna, Piazza di Porta San Donato 2, 40126, Bologna, Italy
- Claudio Galletti
- Department of Biomedical and Neuromotor Sciences-DIBINEM, University of Bologna, Piazza di Porta San Donato 2, 40126, Bologna, Italy
- Sabrina Pitzalis
- Department of Cognitive and Motor Rehabilitation and Neuroimaging, Santa Lucia Foundation (IRCCS Fondazione Santa Lucia), Rome, Italy; Department of Movement, Human and Health Sciences, University of Rome ''Foro Italico'', Rome, Italy
23
Cant JS, Xu Y. One bad apple spoils the whole bushel: The neural basis of outlier processing. Neuroimage 2020; 211:116629. [PMID: 32057998] [PMCID: PMC7942194] [DOI: 10.1016/j.neuroimage.2020.116629]
Abstract
How are outliers in an otherwise homogeneous object ensemble represented by our visual system? Are outliers ignored because they are the minority? Or do outliers alter our perception of an otherwise homogeneous ensemble? We have previously demonstrated ensemble representation in human anterior-medial ventral visual cortex (overlapping the scene-selective parahippocampal place area; PPA). In this study we investigated how outliers impact object-ensemble representation in this human brain region as well as visual representation throughout posterior brain regions. We presented a homogeneous ensemble followed by an ensemble containing either identical elements or a majority of identical elements with a few outliers. Human participants ignored the outliers and made a same/different judgment between the two ensembles. In PPA, fMRI adaptation was observed when the outliers in the second ensemble matched the items in the first, even though the majority of the elements in the second ensemble were distinct from those in the first; conversely, release from fMRI adaptation was observed when the outliers in the second ensemble were distinct from the items in the first, even though the majority of the elements in the second ensemble were identical to those in the first. A similarly robust outlier effect was also found in other brain regions, including a shape-processing region in lateral occipital cortex (LO) and task-processing fronto-parietal regions. These brain regions likely work in concert to flag the presence of outliers during visual perception and then weigh the outliers appropriately in subsequent behavioral decisions. To our knowledge, this is the first time the neural mechanisms involved in outlier processing have been systematically documented in the human brain. Such an outlier effect could well provide the neural basis mediating our perceptual experience in situations like "one bad apple spoils the whole bushel".
Affiliation(s)
- Jonathan S Cant
- Department of Psychology, University of Toronto Scarborough, Toronto, ON, M1C 1A4, Canada
- Yaoda Xu
- Department of Psychology, Yale University, New Haven, CT, 06477, USA
24
Cross Recruitment of Domain-Selective Cortical Representations Enables Flexible Semantic Knowledge. J Neurosci 2020; 40:3096-3103. [PMID: 32152199] [DOI: 10.1523/jneurosci.2224-19.2020]
Abstract
Knowledge about objects encompasses not only their prototypical features but also complex, atypical, semantic knowledge (e.g., "Pizza was invented in Naples"). This fMRI study of male and female human participants combines univariate and multivariate analyses to consider the cortical representation of this more complex semantic knowledge. Using the categories of food, people, and places, this study investigates whether access to spatially related geographic semantic knowledge (1) involves the same domain-selective neural representations involved in access to prototypical taste knowledge about food; and (2) elicits activation of neural representations classically linked to places when this geographic knowledge is accessed about food and people. In three experiments using word stimuli, domain-relevant and atypical conceptual access for the categories food, people, and places were assessed. Results uncover two principles of semantic representation: food-selective representations in the left insula continue to be recruited when prototypical taste knowledge is task-irrelevant and under conditions of high cognitive demand; access to geographic knowledge for food and people categories involves the additional recruitment of classically place-selective parahippocampal gyrus, retrosplenial complex, and transverse occipital sulcus. These findings underscore the importance of object category in the representation of a broad range of knowledge, while showing how the cross recruitment of specialized representations may endow the considerable flexibility of our complex semantic knowledge.
SIGNIFICANCE STATEMENT We know not only stereotypical things about objects (an apple is round, graspable, edible) but can also flexibly combine typical and atypical features to form complex concepts (the metaphorical role an apple plays in Judeo-Christian belief).
In this fMRI study, we observe that, when atypical geographic knowledge is accessed about food dishes, domain-selective sensorimotor-related cortical representations continue to be recruited, but that regions classically associated with place perception are additionally engaged. This interplay between categorically driven representations, linked to the object being accessed, and the flexible recruitment of semantic stores linked to the content being accessed, provides a potential mechanism for the broad representational repertoire of our semantic system.
25
Kamps FS, Pincus JE, Radwan SF, Wahab S, Dilks DD. Late Development of Navigationally Relevant Motion Processing in the Occipital Place Area. Curr Biol 2020; 30:544-550.e3. [PMID: 31956027] [PMCID: PMC7730705] [DOI: 10.1016/j.cub.2019.12.008]
Abstract
Human adults flawlessly and effortlessly navigate boundaries and obstacles in the immediately visible environment, a process we refer to as "visually guided navigation." Neuroimaging work in adults suggests this ability involves the occipital place area (OPA) [1, 2]-a scene-selective region in the dorsal stream that selectively represents information necessary for visually guided navigation [3-9]. Despite progress in understanding the neural basis of visually guided navigation, however, little is known about how this system develops. Is navigationally relevant information processing present in the first few years of life? Or does this information processing only develop after many years of experience? Although a handful of studies have found selective responses to scenes (relative to objects) in OPA in childhood [10-13], no study has explored how more specific navigationally relevant information processing emerges in this region. Here, we do just that by measuring OPA responses to first-person perspective motion information-a proxy for the visual experience of actually navigating the immediate environment-using fMRI in 5- and 8-year-old children. We found that, although OPA already responded more to scenes than objects by age 5, responses to first-person perspective motion were not yet detectable at this same age and rather only emerged by age 8. This protracted development was specific to first-person perspective motion through scenes, not motion on faces or objects, and was not found in other scene-selective regions (the parahippocampal place area or retrosplenial complex) or a motion-selective region (MT). These findings therefore suggest that navigationally relevant information processing in OPA undergoes prolonged development across childhood.
Affiliation(s)
- Frederik S Kamps
- Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA 30322, USA
- Jordan E Pincus
- Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA 30322, USA
- Samaher F Radwan
- Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA 30322, USA
- Stephanie Wahab
- Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA 30322, USA
- Daniel D Dilks
- Department of Psychology, Emory University, 36 Eagle Row, Atlanta, GA 30322, USA
26
Abstract
Learning abilities are present in infancy, as they are critical for adaptation. From simple habituation and novelty responses to stimuli, learning capacities evolve throughout the lifespan. During development, learning abilities become more flexible and integrated across sensory modalities, allowing the encoding of more complex information, and in larger amounts. In turn, an increasing knowledge base leads to adaptive changes in behavior, making responses and actions more precise and effective. The objective of this chapter is to review the main behavioral manifestations of human learning abilities in early development and their biologic underpinnings, ranging from the cellular level to neurocognitive systems and mechanisms. We first focus on the ability to learn from repetitions of stimuli and how years of research in this field have recently contributed to theories of fundamental brain mechanisms whose implications for cognitive development are under study. The ability to memorize associations between different items and events is addressed next as we review the variety of contexts in which this associative memory and its neurologic bases come into play. Together, repetition-based learning and associative memory provide powerful means of understanding the surrounding environment, not only through the gathering and consolidation of specific types of information, but also by continually testing and adjusting stored information to better adapt to changing conditions.
Affiliation(s)
- Marc Philippe Lafontaine
- Research Centre, Centre Hospitalier Universitaire Sainte-Justine, Department of Psychology, Université de Montréal, Montréal, QC, Canada
- Inga Sophia Knoth
- Research Centre, Centre Hospitalier Universitaire Sainte-Justine, Department of Psychology, Université de Montréal, Montréal, QC, Canada
- Sarah Lippé
- Research Centre, Centre Hospitalier Universitaire Sainte-Justine, Department of Psychology, Université de Montréal, Montréal, QC, Canada
27
Distinct representations of spatial and categorical relationships across human scene-selective cortex. Proc Natl Acad Sci U S A 2019; 116:21312-21317. [PMID: 31570605 DOI: 10.1073/pnas.1903057116] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
We represent the locations of places (e.g., the coffee shop on 10th Street vs. the coffee shop on Peachtree Street) so that we can use them as landmarks to orient ourselves while navigating large-scale environments. While several neuroimaging studies have argued that the parahippocampal place area (PPA) represents such navigationally relevant information, evidence from other studies suggests otherwise, leaving this issue unresolved. Here we hypothesize that the PPA is, in fact, not well suited to recognize specific landmarks in the environment (e.g., the coffee shop on 10th Street), but rather is involved in recognizing the general category membership of places (e.g., a coffee shop, regardless of its location). Using fMRI multivoxel pattern analysis, we directly test this hypothesis. If the PPA represents landmark information, then it must be able to discriminate between 2 places of the same category, but in different locations. Instead, if the PPA represents general category information (as hypothesized here), then it will not represent the location of a particular place, but only the category of the place. As predicted, we found that the PPA represents 2 buildings from the same category, but in different locations, as more similar than 2 buildings from different categories, but in the same location. In contrast, another scene-selective region of cortex, the retrosplenial complex (RSC), showed the exact opposite pattern of results. Such a double dissociation suggests distinct neural systems involved in categorizing and navigating our environment, including the PPA and RSC, respectively.
28
Abstract
Humans are remarkably adept at perceiving and understanding complex real-world scenes. Uncovering the neural basis of this ability is an important goal of vision science. Neuroimaging studies have identified three cortical regions that respond selectively to scenes: parahippocampal place area, retrosplenial complex/medial place area, and occipital place area. Here, we review what is known about the visual and functional properties of these brain areas. Scene-selective regions exhibit retinotopic properties and sensitivity to low-level visual features that are characteristic of scenes. They also mediate higher-level representations of layout, objects, and surface properties that allow individual scenes to be recognized and their spatial structure ascertained. Challenges for the future include developing computational models of information processing in scene regions, investigating how these regions support scene perception under ecologically realistic conditions, and understanding how they operate in the context of larger brain networks.
Affiliation(s)
- Russell A Epstein
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Chris I Baker
- Section on Learning and Plasticity, Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, Maryland 20892, USA
29
Nag S, Berman D, Golomb JD. Category-selective areas in human visual cortex exhibit preferences for stimulus depth. Neuroimage 2019; 196:289-301. [PMID: 30978498 DOI: 10.1016/j.neuroimage.2019.04.025] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2018] [Revised: 03/21/2019] [Accepted: 04/07/2019] [Indexed: 12/01/2022] Open
Abstract
Multiple regions in the human brain are dedicated to accomplish the feat of object recognition; yet our brains must also compute the 2D and 3D locations of the objects we encounter in order to make sense of our visual environments. A number of studies have explored how various object category-selective regions are sensitive to and have preferences for specific 2D spatial locations in addition to processing their preferred-stimulus categories, but there is no survey of how these regions respond to depth information. In a blocked functional MRI experiment, subjects viewed a series of category-specific (i.e., faces, objects, scenes) and unspecific (e.g., random moving dots) stimuli with red/green anaglyph glasses. Critically, these stimuli were presented at different depth planes such that they appeared in front of, behind, or at the same (i.e., middle) depth plane as the fixation point (Experiment 1) or simultaneously in front of and behind fixation (i.e., mixed depth; Experiment 2). Comparisons of mean response magnitudes between back, middle, and front depth planes reveal that face and object regions OFA and LOC exhibit a preference for front depths, and motion area MT+ exhibits a strong linear preference for front, followed by middle, followed by back depth planes. In contrast, scene-selective regions PPA and OPA prefer front and/or back depth planes (relative to middle). Moreover, the occipital place area demonstrates a strong preference for "mixed" depth above and beyond back alone, raising potential implications about its particular role in scene perception. Crucially, the observed depth preferences in nearly all areas were evoked irrespective of the semantic stimulus category being viewed. These results reveal that the object category-selective regions may play a role in processing or incorporating depth information that is orthogonal to their primary processing of object category information.
Affiliation(s)
- Samoni Nag
- Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, USA; Department of Psychology, The George Washington University, USA
- Daniel Berman
- Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, USA
- Julie D Golomb
- Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, USA