1. Szubielska M, Szewczyk M, Augustynowicz P, Kędziora W, Möhring W. Adults' spatial scaling of tactile maps: Insights from studying sighted, early and late blind individuals. PLoS One 2024;19:e0304008. [PMID: 38814897; DOI: 10.1371/journal.pone.0304008]
Abstract
The current study investigated spatial scaling of tactile maps among blind adults and blindfolded sighted controls. We were specifically interested in identifying spatial scaling strategies as well as effects of different scaling directions (up versus down) on participants' performance. To this aim, we asked late blind participants (with visual memory, Experiment 1) and early blind participants (without visual memory, Experiment 2) as well as sighted blindfolded controls to encode a map including a target and to place a response disc at the same spot on an empty, constant-sized referent space. Maps had five different sizes resulting in five scaling factors (1:3, 1:2, 1:1, 2:1, 3:1), allowing us to investigate different scaling directions (up and down) in a single, comprehensive design. Accuracy and speed of learning the target location, as well as of responding, served as dependent variables. We hypothesized that participants who can use visual mental representations (i.e., late blind and blindfolded sighted participants) may adopt mental transformation scaling strategies. However, our results did not support this hypothesis. At the same time, we predicted the usage of relative distance scaling strategies in early blind participants, which was supported by our findings. Moreover, our results suggested that tactile maps can be scaled as accurately and even faster by blind participants than by sighted participants. Furthermore, irrespective of visual status, participants' responses gravitated towards the center of the space. Overall, it seems that a lack of visual imagery does not impair early blind adults' spatial scaling ability but causes them to use a different strategy than sighted and late blind individuals.
Affiliation(s)
- Magdalena Szubielska
- Faculty of Social Sciences, Institute of Psychology, The John Paul II Catholic University of Lublin, Poland
- Marta Szewczyk
- Faculty of Social Sciences, Institute of Psychology, The John Paul II Catholic University of Lublin, Poland
- Paweł Augustynowicz
- Faculty of Social Sciences, Institute of Psychology, The John Paul II Catholic University of Lublin, Poland
- Wenke Möhring
- Faculty of Psychology, University of Basel, Basel, Switzerland
- Department of Educational and Health Psychology, University of Education Schwäbisch Gmünd, Germany
2. Sigismondi F, Xu Y, Silvestri M, Bottini R. Altered grid-like coding in early blind people. Nat Commun 2024;15:3476. [PMID: 38658530; PMCID: PMC11043432; DOI: 10.1038/s41467-024-47747-x]
Abstract
Cognitive maps in the hippocampal-entorhinal system are central for the representation of both spatial and non-spatial relationships. Although this system, especially in humans, heavily relies on vision, the role of visual experience in shaping the development of cognitive maps remains largely unknown. Here, we test sighted and early blind individuals in both imagined navigation in fMRI and real-world navigation. During imagined navigation, the Human Navigation Network, comprising frontal, medial temporal, and parietal cortices, is reliably activated in both groups, showing resilience to visual deprivation. However, neural geometry analyses highlight crucial differences between groups. A 60° rotational symmetry, characteristic of a hexagonal grid-like coding, emerges in the entorhinal cortex of sighted but not blind people, who instead show a 90° (4-fold) symmetry, indicative of a square grid. Moreover, higher parietal cortex activity during navigation in blind people correlates with the magnitude of 4-fold symmetry. In sum, early blindness can alter the geometry of entorhinal cognitive maps, possibly as a consequence of higher reliance on parietal egocentric coding during navigation.
Affiliation(s)
- Yangwen Xu
- Center for Mind/Brain Sciences, University of Trento, 38122, Trento, Italy
- Max Planck Institute for Human Cognitive and Brain Sciences, D-04303, Leipzig, Germany
- Mattia Silvestri
- Center for Mind/Brain Sciences, University of Trento, 38122, Trento, Italy
- Roberto Bottini
- Center for Mind/Brain Sciences, University of Trento, 38122, Trento, Italy
3. Orti R, Coello Y, Ruotolo F, Vincent M, Bartolo A, Iachini T, Ruggiero G. Cortical Correlates of Visuospatial Switching Processes Between Egocentric and Allocentric Frames of Reference: A fNIRS Study. Brain Topogr 2024. [PMID: 38315347; DOI: 10.1007/s10548-023-01032-0]
Abstract
Human beings represent spatial information according to egocentric (body-to-object) and allocentric (object-to-object) frames of reference. In everyday life, we constantly switch from one frame of reference to another in order to react effectively to the specific needs of the environment and task demands. However, to the best of our knowledge, no study to date has investigated the cortical activity of switching and non-switching processes between egocentric and allocentric spatial encodings. To this aim, a custom-designed visuo-spatial memory task was administered and the cortical activities underlying switching vs non-switching spatial processes were investigated. Changes in concentrations of oxygenated and deoxygenated haemoglobin were measured using functional near-infrared spectroscopy (fNIRS). Participants were asked to memorize triads of geometric objects and then make two consecutive judgments about the same triad. In the non-switching condition, both spatial judgments considered the same frame of reference: only egocentric or only allocentric. In the switching condition, if the first judgment was egocentric, the second one was allocentric (or vice versa). The results showed generalized activation of frontal regions in the switching compared to the non-switching condition, as well as increased cortical activity in the temporo-parietal junction. Overall, these results illustrate the cortical activity underlying the processing of switching between body position and environmental stimuli, showing an important role of the temporo-parietal junction and frontal regions in the preparation and switching between egocentric and allocentric reference frames.
Affiliation(s)
- Renato Orti
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania L. Vanvitelli, Viale Ellittico, 31, 81100, Caserta, Italy
- Yann Coello
- UMR 9193, SCALab, Sciences Cognitives et Sciences Affectives, Université de Lille, 59000, Lille, France
- Francesco Ruotolo
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania L. Vanvitelli, Viale Ellittico, 31, 81100, Caserta, Italy
- Marion Vincent
- UMR 9193, SCALab, Sciences Cognitives et Sciences Affectives, Université de Lille, 59000, Lille, France
- Angela Bartolo
- UMR 9193, SCALab, Sciences Cognitives et Sciences Affectives, Université de Lille, 59000, Lille, France
- Tina Iachini
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania L. Vanvitelli, Viale Ellittico, 31, 81100, Caserta, Italy
- Gennaro Ruggiero
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania L. Vanvitelli, Viale Ellittico, 31, 81100, Caserta, Italy
4. Tivadar RI, Franceschiello B, Minier A, Murray MM. Learning and navigating digitally rendered haptic spatial layouts. NPJ Sci Learn 2023;8:61. [PMID: 38102127; PMCID: PMC10724186; DOI: 10.1038/s41539-023-00208-4]
Abstract
Learning spatial layouts and navigating through them rely not simply on sight but rather on multisensory processes, including touch. Digital haptics based on ultrasounds are effective for creating and manipulating mental images of individual objects in sighted and visually impaired participants. Here, we tested if this extends to scenes and navigation within them. Using only tactile stimuli conveyed via ultrasonic feedback on a digital touchscreen (i.e., a digital interactive map), 25 sighted, blindfolded participants first learned the basic layout of an apartment based on digital haptics only and then one of two trajectories through it. While still blindfolded, participants successfully reconstructed the haptically learned 2D spaces and navigated these spaces. Digital haptics were thus an effective means, on the one hand, to learn and translate 2D images into 3D reconstructions of layouts and, on the other hand, to guide navigation within real spaces. Digital haptics based on ultrasounds represent an alternative learning tool for complex scenes as well as for successful navigation in previously unfamiliar layouts, which can likely be further applied in the rehabilitation of spatial functions and mitigation of visual impairments.
Affiliation(s)
- Ruxandra I Tivadar
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Centre for Integrative and Complementary Medicine, Department of Anesthesiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Cognitive Computational Neuroscience Group, Institute for Computer Science, University of Bern, Bern, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Benedetta Franceschiello
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Institute of Systems Engineering, School of Engineering, University of Applied Sciences Western Switzerland (HES-SO Valais), Sion, Switzerland
- Astrid Minier
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Micah M Murray
- The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
5. Bleau M, van Acker C, Martiniello N, Nemargut JP, Ptito M. Cognitive map formation in the blind is enhanced by three-dimensional tactile information. Sci Rep 2023;13:9736. [PMID: 37322150; PMCID: PMC10272191; DOI: 10.1038/s41598-023-36578-3]
Abstract
For blind individuals, tactile maps are useful tools to form cognitive maps through touch. However, they still experience challenges in cognitive map formation and independent navigation. Three-dimensional (3D) tactile information is thus increasingly being considered to convey enriched spatial information, but it remains unclear if it can facilitate cognitive map formation compared to traditional two-dimensional (2D) tactile information. Consequently, the present study investigated the impact of the type of sensory input (tactile 2D vs. tactile 3D vs. a visual control condition) on cognitive map formation. To do so, early blind (EB, n = 13), late blind (LB, n = 12), and sighted control (SC, n = 14) participants were tasked with learning the layouts of mazes produced with different sensory information (tactile 2D vs. tactile 3D vs. visual control) and with inferring routes from memory. Results show that EB manifested stronger cognitive map formation with 3D mazes, LB performed equally well with 2D and 3D tactile mazes, and SC manifested equivalent cognitive map formation with visual and 3D tactile mazes but were negatively impacted by 2D tactile mazes. 3D tactile maps therefore have the potential to improve spatial learning for EB and newly blind individuals through a reduction of cognitive overload. Installation of 3D tactile maps in public spaces should be considered to promote universal accessibility and reduce blind individuals' wayfinding deficits related to the inaccessibility of spatial information through non-visual means.
Affiliation(s)
- Maxime Bleau
- School of Optometry, University of Montreal, Montreal, QC, Canada
- Camille van Acker
- School of Optometry, University of Montreal, Montreal, QC, Canada
- Institut Royal Pour Sourds et Aveugles, Brussels, Belgium
- Maurice Ptito
- School of Optometry, University of Montreal, Montreal, QC, Canada
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
6. Iachini T, Ruotolo F, Rapuano M, Sbordone FL, Ruggiero G. The Role of Temporal Order in Egocentric and Allocentric Spatial Representations. J Clin Med 2023;12:1132. [PMID: 36769780; PMCID: PMC9917670; DOI: 10.3390/jcm12031132]
Abstract
Several studies have shown that spatial information is encoded using two types of reference systems: egocentric (body-based) and/or allocentric (environment-based). However, most studies have been conducted in static situations, neglecting the fact that when we explore the environment, the objects closest to us are also those we encounter first, while those we encounter later are usually those closest to other environmental objects/elements. In this study, participants were shown two stimuli on a computer screen, each depicting a different geometric object, placed at different distances from the participant and from an external reference (i.e., a bar). The crucial manipulation was that the stimuli were shown sequentially. After participants had memorized the position of both stimuli, they had to indicate which object appeared closest to them (egocentric judgment) or which object appeared closest to the bar (allocentric judgment). The results showed that egocentric judgments were facilitated when the object closest to the participant was presented first, whereas allocentric judgments were facilitated when the object closest to the bar was presented second. These results show that temporal order has a different effect on egocentric and allocentric frames of reference, presumably rooted in the embodied way in which individuals dynamically explore the environment.
7. Zou X, Zhou Y. Spatial Cognition of the Visually Impaired: A Case Study in a Familiar Environment. Int J Environ Res Public Health 2023;20:1753. [PMID: 36767116; PMCID: PMC9914542; DOI: 10.3390/ijerph20031753]
Abstract
Objectives: This paper aims to explore the factors influencing the spatial cognition of the visually impaired in familiar environments. Background: Massage hospitals are some of the few places that can provide work for the visually impaired in China. Studying the spatial cognition of the visually impaired in a massage hospital could be instructive for the design of working environments for the visually impaired and other workplaces in the future. Methods: First, the subjective spatial cognition of the visually impaired was evaluated by object layout tasks for describing the spatial relationships among object parts. Second, physiological monitoring signal data, including electrodermal activity, heart rate variability, and electroencephalography, were collected while the visually impaired doctors walked along prescribed routes based on the feature analysis of the physical environment in the hospital, and then their physiological monitoring signal data for each route were compared. The visual factors, physical environmental factors, and human-environment interactive factors that significantly impact the spatial cognition of visually impaired people were discussed. Conclusions: (1) Visual acuity affects the spatial cognition of the visually impaired in familiar environments; (2) the spatial cognition of the visually impaired can be promoted by a longer staying time and a more regular sequence of the physical environment; (3) the spatial comfort of the visually impaired can be improved by increasing the amount of greenery; and (4) the visual comfort of the visually impaired can be reduced by rich interior colors and contrasting lattice floor tiles.
8. Mamus E, Speed LJ, Rissman L, Majid A, Özyürek A. Lack of Visual Experience Affects Multimodal Language Production: Evidence From Congenitally Blind and Sighted People. Cogn Sci 2023;47:e13228. [PMID: 36607157; PMCID: PMC10078191; DOI: 10.1111/cogs.13228]
Abstract
The human experience is shaped by information from different perceptual channels, but it is still debated whether and how differential experience influences language use. To address this, we compared congenitally blind, blindfolded, and sighted people's descriptions of the same motion events experienced auditorily by all participants (i.e., via sound alone) and conveyed in speech and gesture. Comparison of blind and sighted participants to blindfolded participants helped us disentangle the effects of a lifetime experience of being blind versus the task-specific effects of experiencing a motion event by sound alone. Compared to sighted people, blind people's speech focused more on path and less on manner of motion, and encoded paths in a more segmented fashion using more landmarks and path verbs. Gestures followed the speech, such that blind people pointed to landmarks more and depicted manner less than sighted people. This suggests that visual experience affects how people express spatial events in multimodal language and that blindness may enhance sensitivity to paths of motion due to changes in event construal. These findings have implications for the claims that language processes are deeply rooted in our sensory experiences.
Affiliation(s)
- Ezgi Mamus
- Centre for Language Studies, Radboud University
- Max Planck Institute for Psycholinguistics
- Lilia Rissman
- Department of Psychology, University of Wisconsin - Madison
- Asifa Majid
- Department of Experimental Psychology, University of Oxford
- Aslı Özyürek
- Centre for Language Studies, Radboud University
- Max Planck Institute for Psycholinguistics
- Donders Center for Cognition, Radboud University
9. Martolini C, Amadeo MB, Campus C, Cappagli G, Gori M. Effects of audio-motor training on spatial representations in long-term late blindness. Neuropsychologia 2022;176:108391. [DOI: 10.1016/j.neuropsychologia.2022.108391]
10. Bleau M, Paré S, Chebat DR, Kupers R, Nemargut JP, Ptito M. Neural substrates of spatial processing and navigation in blindness: An activation likelihood estimation meta-analysis. Front Neurosci 2022;16:1010354. [PMID: 36340755; PMCID: PMC9630591; DOI: 10.3389/fnins.2022.1010354]
Abstract
Even though vision is considered the best-suited sensory modality to acquire spatial information, blind individuals can form spatial representations to navigate and orient themselves efficiently in space. Consequently, many studies support the amodality hypothesis of spatial representations since sensory modalities other than vision contribute to the formation of spatial representations, independently of visual experience and imagery. However, given the high variability in abilities and deficits observed in blind populations, a clear consensus about the neural representations of space has yet to be established. To this end, we performed a meta-analysis of the literature on the neural correlates of spatial processing and navigation via sensory modalities other than vision, such as touch and audition, in individuals with early and late onset blindness. An activation likelihood estimation (ALE) analysis of the neuroimaging literature revealed that early blind individuals and sighted controls activate the same neural networks in the processing of non-visual spatial information and navigation, including the posterior parietal cortex, frontal eye fields, insula, and the hippocampal complex. Furthermore, blind individuals also recruit primary and associative occipital areas involved in visuo-spatial processing via cross-modal plasticity mechanisms. The scarcity of studies involving late blind individuals did not allow us to establish a clear consensus about the neural substrates of spatial representations in this specific population. In conclusion, the results of our analysis on neuroimaging studies involving early blind individuals support the amodality hypothesis of spatial representations.
Affiliation(s)
- Maxime Bleau
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Samuel Paré
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), Department of Psychology, Faculty of Social Sciences and Humanities, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center of Ariel University (NARCA), Ariel University, Ariel, Israel
- Ron Kupers
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Institute of Neuroscience, Faculty of Medicine, Université de Louvain, Brussels, Belgium
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
- Maurice Ptito
- École d’Optométrie, Université de Montréal, Montreal, QC, Canada
- Department of Neuroscience, University of Copenhagen, Copenhagen, Denmark
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
11. Ahulló-Fuster MA, Ortiz T, Varela-Donoso E, Nacher J, Sánchez-Sánchez ML. The Parietal Lobe in Alzheimer’s Disease and Blindness. J Alzheimers Dis 2022;89:1193-1202. [DOI: 10.3233/jad-220498]
Abstract
The progressive aging of the population will notably increase the burden of disabling diseases such as Alzheimer’s disease (AD) and ophthalmological diseases that cause a visual impairment (VI). Eye diseases that cause a VI trigger neuroplastic processes in the parietal lobe. Meanwhile, this lobe suffers a severe decline throughout AD. From this perspective, diving deeper into the particularities of the parietal lobe is of paramount importance. In this article, we discuss the functions of the parietal lobe, review the parietal anatomical and pathophysiological peculiarities in AD, and also describe some of the changes in the parietal region that occur after VI. Although the alterations in the hippocampus and the temporal lobe have been well documented in AD, the alterations of the parietal lobe have been less thoroughly explored. Recent neuroimaging studies have revealed that some metabolic and perfusion impairments along with a reduction of the white and grey matter could take place in the parietal lobe during AD. Conversely, it has been speculated that blinding ocular diseases induce a remodeling of the parietal region which is observable through the improvement of the integration of multimodal stimuli and in the increase of the volume of this cortical region. Based on current findings concerning the parietal lobe in both pathologies, we hypothesize that the increased activity of the parietal lobe in people with VI may diminish the neurodegeneration of this brain region in those who are visually impaired by ocular diseases.
Affiliation(s)
- Mónica Alba Ahulló-Fuster
- Department of Radiology, Rehabilitation and Physiotherapy, Faculty of Nursing, Physiotherapy and Podiatry, University Complutense of Madrid, Spain
- Tomás Ortiz
- Department of Legal Medicine, Psychiatry and Pathology, Faculty of Medicine, University Complutense of Madrid, Spain
- Enrique Varela-Donoso
- Department of Radiology, Rehabilitation and Physiotherapy, Faculty of Nursing, Physiotherapy and Podiatry, University Complutense of Madrid, Spain
- Juan Nacher
- Neurobiology Unit, Institute for Biotechnology and Biomedicine (BIOTECMED), University of Valencia, Spain
- CIBERSAM, Spanish National Network for Research in Mental Health, Spain
- Fundación Investigación Hospital Clínico de Valencia, INCLIVA, Valencia, Spain
- M. Luz Sánchez-Sánchez
- Physiotherapy in Motion, Multispeciality Research Group (PTinMOTION), Department of Physiotherapy, University of Valencia, Valencia, Spain
12. Ottink L, Buimer H, van Raalte B, Doeller CF, van der Geest TM, van Wezel RJA. Cognitive map formation supported by auditory, haptic, and multimodal information in persons with blindness. Neurosci Biobehav Rev 2022;140:104797. [PMID: 35902045; DOI: 10.1016/j.neubiorev.2022.104797]
Abstract
For efficient navigation, the brain needs to adequately represent the environment in a cognitive map. In this review, we sought to give an overview of the literature on cognitive map formation based on non-visual modalities in persons with blindness (PWBs) and sighted persons. The review is focused on the auditory and haptic modalities, including research that combines multiple modalities and real-world navigation. Furthermore, we addressed implications of route and survey representations. Taken together, PWBs as well as sighted persons can build up cognitive maps based on non-visual modalities, although accuracy sometimes differs somewhat between the two groups. We provide some speculations on how to deploy information from different modalities to support cognitive map formation. Furthermore, PWBs and sighted persons seem to be able to construct route as well as survey representations. PWBs can experience difficulties building up a survey representation, but this is not always the case, and research suggests that they can acquire this ability with sufficient spatial information or training. We discuss possible explanations of these inconsistencies.
Affiliation(s)
- Loes Ottink
- Donders Institute, Radboud University, Nijmegen, the Netherlands
- Hendrik Buimer
- Donders Institute, Radboud University, Nijmegen, the Netherlands
- Bram van Raalte
- Donders Institute, Radboud University, Nijmegen, the Netherlands
- Christian F Doeller
- Psychology Department, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway
- Thea M van der Geest
- Lectorate Media Design, HAN University of Applied Sciences, Arnhem, the Netherlands
- Richard J A van Wezel
- Donders Institute, Radboud University, Nijmegen, the Netherlands
- TechMed Centre, Biomedical Signals and Systems, University of Twente, Enschede, the Netherlands
13. Cognitive map formation through tactile map navigation in visually impaired and sighted persons. Sci Rep 2022;12:11567. [PMID: 35798929; PMCID: PMC9262941; DOI: 10.1038/s41598-022-15858-4]
Abstract
The human brain can form cognitive maps of a spatial environment, which can support wayfinding. In this study, we investigated cognitive map formation of an environment presented in the tactile modality, in visually impaired and sighted persons. In addition, we assessed the acquisition of route and survey knowledge. Ten persons with a visual impairment (PVIs) and ten sighted control participants learned a tactile map of a city-like environment. The map included five marked locations associated with different items. Participants subsequently estimated distances between item pairs, performed a direction pointing task, reproduced routes between items and recalled item locations. In addition, we administered questionnaires to assess general navigational abilities and the use of route or survey strategies. Overall, participants in both groups performed well on the spatial tasks. Our results did not show differences in performance between PVIs and sighted persons, indicating that both groups formed an equally accurate cognitive map. Furthermore, we found that the groups generally used similar navigational strategies, which correlated with performance on some of the tasks, and acquired similar and accurate route and survey knowledge. We therefore suggest that PVIs are able to employ a route as well as survey strategy if they have the opportunity to access route-like as well as map-like information such as on a tactile map.
14
Setti W, Cuturi LF, Cocchi E, Gori M. Spatial Memory and Blindness: The Role of Visual Loss on the Exploration and Memorization of Spatialized Sounds. Front Psychol 2022; 13:784188. [PMID: 35686077 PMCID: PMC9171105 DOI: 10.3389/fpsyg.2022.784188]
Abstract
Spatial memory relies on the encoding, storage, and retrieval of knowledge about objects' positions in the surrounding environment. Blind people have to rely on sensory modalities other than vision to memorize items that are spatially displaced; however, to date, very little is known about the influence of early visual deprivation on a person's ability to remember and process sound locations. To fill this gap, we tested sighted and congenitally blind adults and adolescents in an audio-spatial memory task inspired by the classical card game “Memory.” In this research, subjects (blind, n = 12; sighted, n = 12) had to find pairs among sounds (i.e., animal calls) displaced on an audio-tactile device composed of loudspeakers covered by tactile sensors. To accomplish this task, participants had to remember the spatialized sounds' positions and develop a proper mental spatial representation of their locations. The test was divided into two experimental conditions of increasing difficulty, depending on the number of sounds to be remembered (8 vs. 24). Results showed that sighted participants outperformed blind participants in both conditions. Findings are discussed in light of the crucial role of visual experience in properly manipulating auditory spatial representations, particularly in relation to the ability to explore complex acoustic configurations.
Affiliation(s)
- Walter Setti
- Unit for Visually Impaired People (U-VIP), Italian Institute of Technology, Genoa, Italy
- Luigi F. Cuturi
- Unit for Visually Impaired People (U-VIP), Italian Institute of Technology, Genoa, Italy
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Italian Institute of Technology, Genoa, Italy
15
Job XE, Kirsch LP, Auvray M. Spatial perspective-taking: insights from sensory impairments. Exp Brain Res 2022; 240:27-37. [PMID: 34716457 PMCID: PMC8803716 DOI: 10.1007/s00221-021-06221-6]
Abstract
Information can be perceived from a multiplicity of spatial perspectives, which is central to effectively understanding and interacting with our environment and other people. Sensory impairments such as blindness are known to impact spatial representations and perspective-taking is often thought of as a visual process. However, disturbed functioning of other sensory systems (e.g., vestibular, proprioceptive and auditory) can also influence spatial perspective-taking. These lines of research remain largely separate, yet together they may shed new light on the role that each sensory modality plays in this core cognitive ability. The findings to date reveal that spatial cognitive processes may be differently affected by various types of sensory loss. The visual system may be crucial for the development of efficient allocentric (object-to-object) representation; however, the role of vision in adopting another's spatial perspective remains unclear. On the other hand, the vestibular and the proprioceptive systems likely play an important role in anchoring the perceived self to the physical body, thus facilitating imagined self-rotations required to adopt another's spatial perspective. Findings regarding the influence of disturbed auditory functioning on perspective-taking are so far inconclusive and thus await further data. This review highlights that spatial perspective-taking is a highly plastic cognitive ability, as the brain is often able to compensate in the face of different sensory loss.
Affiliation(s)
- Xavier E Job
- Department of Neuroscience, Karolinska Institutet, Solnavägen 9, 17165, Stockholm, Sweden.
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, Paris, France.
- Louise P Kirsch
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, Paris, France.
- Integrative Neuroscience and Cognition Center (INCC), Université de Paris, Paris, France.
- Malika Auvray
- Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, Paris, France
16
From aMCI to AD: The Role of Visuo-Spatial Memory Span and Executive Functions in Egocentric and Allocentric Spatial Impairments. Brain Sci 2021; 11:1536. [PMID: 34827534 PMCID: PMC8615504 DOI: 10.3390/brainsci11111536]
Abstract
A difficulty in encoding spatial information in an egocentric (i.e., body-to-object) and especially allocentric (i.e., object-to-object) manner, together with impairments in executive function (EF), is typical of amnestic mild cognitive impairment (aMCI) and Alzheimer's disease (AD). Since executive functions are involved in spatial encoding, it is important to understand the extent of their reciprocal or selective impairment. To this end, AD patients, aMCI patients and healthy elderly people had to provide egocentric (What object was closest to you?) and allocentric (What object was closest to object X?) judgments about memorized objects. Participants' frontal functions, attentional resources and visuo-spatial memory were assessed with the Frontal Assessment Battery (FAB), the Trail Making Test (TMT) and the Corsi Block Tapping Test (forward/backward). Results showed that AD patients performed worse than all other groups in all tasks but did not differ from aMCI patients in allocentric judgments and the forward Corsi task. Regression analyses showed, although to different degrees in the three groups, a link between attentional resources, visuo-spatial memory and egocentric performance, and between frontal resources and allocentric performance. Therefore, visuo-spatial memory, especially when it involves allocentric frames and requires demanding active processing, should be carefully assessed to reveal early signs of conversion from aMCI to AD.
17
Ruggiero G, Ruotolo F, Iachini T. How ageing and blindness affect egocentric and allocentric spatial memory. Q J Exp Psychol (Hove) 2021; 75:1628-1642. [PMID: 34670454 DOI: 10.1177/17470218211056772]
Abstract
Egocentric (subject-to-object) and allocentric (object-to-object) spatial reference frames are fundamental for representing the position of objects or places around us. The literature on spatial cognition in blind people has shown that lack of vision may limit the ability to represent spatial information in an allocentric rather than egocentric way. Furthermore, much research with sighted individuals has reported that ageing has a negative impact on spatial memory. However, as far as we know, no study has assessed how ageing may affect the processing of spatial reference frames in individuals with different degrees of visual experience. To fill this gap, here we report data from a cross-sectional study in which a large sample of young and elderly participants (160 participants in total) who were congenitally blind (long-term visual deprivation), adventitiously blind (late onset of blindness), blindfolded sighted (short-term visual deprivation) and sighted (full visual availability) performed a spatial memory task that required egocentric/allocentric distance judgements with regard to memorised stimuli. The results showed that egocentric judgements were better than allocentric ones and above all that the ability to process allocentric information was influenced by both age and visual status. Specifically, the allocentric judgements of congenitally blind elderly participants were worse than those of all other groups. These findings suggest that ageing and congenital blindness can contribute to the worsening of the ability to represent spatial relationships between external, non-body-centred anchor points.
Affiliation(s)
- Gennaro Ruggiero
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania "Luigi Vanvitelli," Caserta, Italy
- Francesco Ruotolo
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania "Luigi Vanvitelli," Caserta, Italy
- Tina Iachini
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania "Luigi Vanvitelli," Caserta, Italy
18
Colorophone 2.0: A Wearable Color Sonification Device Generating Live Stereo-Soundscapes-Design, Implementation, and Usability Audit. Sensors 2021; 21:7351. [PMID: 34770658 PMCID: PMC8587929 DOI: 10.3390/s21217351]
Abstract
The successful development of a system realizing color sonification would enable auditory representation of the visual environment. The primary beneficiaries of such a system would be people who cannot directly access visual information: the visually impaired community. Despite the plethora of sensory substitution devices, developing systems that provide intuitive color sonification remains a challenge. This paper presents the design considerations, development, and usability audit of a sensory substitution device that converts spatial color information into soundscapes. The implemented wearable system uses a dedicated color space and continuously generates natural, spatialized sounds based on the information acquired from a camera. We developed two head-mounted prototype devices and two graphical user interface (GUI) versions. The first GUI is dedicated to researchers, and the second has been designed to be easily accessible to visually impaired persons. Finally, we ran fundamental usability tests to evaluate the new spatial color sonification algorithm and to compare the two prototypes. Furthermore, we propose recommendations for the development of the next iteration of the system.
19
Ottink L, Hoogendonk M, Doeller CF, Van der Geest TM, Van Wezel RJA. Cognitive map formation through haptic and visual exploration of tactile city-like maps. Sci Rep 2021; 11:15254. [PMID: 34315940 PMCID: PMC8316501 DOI: 10.1038/s41598-021-94778-1]
Abstract
In this study, we compared cognitive map formation of small-scale models of city-like environments presented in visual or tactile/haptic modalities. Previous research often addresses only a limited number of cognitive map aspects; we wanted to combine several of these aspects to provide a more complete view. Therefore, we assessed different types of spatial information and considered egocentric as well as allocentric perspectives. Furthermore, we compared haptic map learning with visual map learning. In total, 18 sighted participants (9 in a haptic condition, 9 in a visuo-haptic condition) learned three tactile maps of city-like environments. The maps differed in complexity and had five marked locations associated with unique items. Participants estimated distances between item pairs, rebuilt the map, recalled locations, and navigated two routes after learning each map. All participants overall performed well on the spatial tasks. Interestingly, only on the complex maps did participants perform worse in the haptic condition than in the visuo-haptic condition, suggesting no distinct advantage of vision on the simple map. These results support ideas of modality-independent representations of space. Although the picture is less clear for the more complex maps, our findings indicate that participants using only haptic information, or a combination of haptic and visual information, both form a quite accurate cognitive map of a simple tactile city-like map.
Affiliation(s)
- Loes Ottink
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands.
- Marit Hoogendonk
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Christian F Doeller
- Psychology Department, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Kavli Institute for Systems Neuroscience, NTNU, Trondheim, Norway
- Thea M Van der Geest
- Lectorate Media Design, HAN University of Applied Sciences, Arnhem, The Netherlands
- Richard J A Van Wezel
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Techmed Centre, Biomedical Signals and System, University of Twente, Enschede, The Netherlands
20
Bollini A, Campus C, Gori M. The development of allocentric spatial frame in the auditory system. J Exp Child Psychol 2021; 211:105228. [PMID: 34242896 DOI: 10.1016/j.jecp.2021.105228]
Abstract
The ability to encode space is a crucial aspect of interacting with the external world. Therefore, this ability appears to be fundamental for the correct development of the capacity to integrate different spatial reference frames. Spatial reference frames seem to be present in all sensory modalities. However, it has been demonstrated that different sensory modalities follow different developmental courses. Nevertheless, to date, these courses have been investigated only in people with sensory impairments, where there is a possible bias due to compensatory strategies and it is difficult to assess the exact age at which these skills emerge. For these reasons, we investigated the development of the allocentric frame in the auditory domain in a group of typically developing children aged 6-10 years. To do so, we used an auditory Simon task, a paradigm that involves implicit spatial processing, and asked children to perform the task in both the uncrossed and crossed hands postures. We demonstrated that the crossed hands posture affected performance only in younger children (6-7 years), whereas at 10 years of age children performed like adults and were not affected by this posture. Moreover, we found that performance on this task correlated with age and with developmental differences in spatial abilities. Our results support the hypothesis that the developmental course of auditory spatial cognition is similar to that of the visual modality as reported in the literature.
Affiliation(s)
- Alice Bollini
- Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, 16163 Genova, Italy.
- Claudio Campus
- Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, 16163 Genova, Italy
- Monica Gori
- Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, 16163 Genova, Italy
21
Martolini C, Cappagli G, Saligari E, Gori M, Signorini S. Allocentric spatial perception through vision and touch in sighted and blind children. J Exp Child Psychol 2021; 210:105195. [PMID: 34098165 DOI: 10.1016/j.jecp.2021.105195]
Abstract
Vision and touch play a critical role in spatial development, facilitating the acquisition of allocentric and egocentric frames of reference, respectively. Previous work has shown that children's ability to adopt an allocentric frame of reference might be impaired by the absence of visual experience during growth. In the current work, we investigated whether visual deprivation also impairs the ability to shift from egocentric to allocentric frames of reference in a switching-perspective task performed in the visual and haptic domains. Children with and without visual impairments, from 6 to 13 years of age, were asked to visually (only sighted children) or haptically (blindfolded sighted children and blind children) explore and reproduce a spatial configuration of coins by assuming either an egocentric or an allocentric perspective. Results indicated that temporary visual deprivation impaired the ability of blindfolded sighted children to switch from an egocentric to an allocentric perspective more in the haptic domain than in the visual domain. Moreover, results on visually impaired children indicated that blindness did not impair allocentric spatial coding in the haptic domain but rather affected the ability to rely on haptic egocentric cues in the switching-perspective task. Finally, our findings suggest that the total absence of vision might impair the development of an egocentric perspective when targets cross the body midline.
Affiliation(s)
- Chiara Martolini
- Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, 16152 Genoa, Italy.
- Giulia Cappagli
- Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, 16152 Genoa, Italy.
- Elena Saligari
- Center of Child NeuroOphthalmology, IRCCS Mondino Foundation, 27100 Pavia, Italy
- Monica Gori
- Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, 16152 Genoa, Italy.
- Sabrina Signorini
- Center of Child NeuroOphthalmology, IRCCS Mondino Foundation, 27100 Pavia, Italy.
22
Heimler B, Behor T, Dehaene S, Izard V, Amedi A. Core knowledge of geometry can develop independently of visual experience. Cognition 2021; 212:104716. [PMID: 33895652 DOI: 10.1016/j.cognition.2021.104716]
Abstract
Geometrical intuitions spontaneously drive visuo-spatial reasoning in human adults, children and animals. Is their emergence intrinsically linked to visual experience, or does it reflect a core property of cognition shared across sensory modalities? To address this question, we tested the sensitivity of blind-from-birth adults to geometrical invariants using a haptic deviant-figure detection task. Blind participants spontaneously used many geometric concepts, such as parallelism, right angles and geometrical shapes, to detect intruders in haptic displays, but experienced difficulties with symmetry and complex spatial transformations. Across items, their performance was highly correlated with that of sighted adults performing the same task in touch (blindfolded) and in vision, as well as with the performance of uneducated preschoolers and Amazonian adults. Our results support the existence of an amodal core system of geometry that arises independently of visual experience. However, performance at selecting geometric intruders was generally higher in the visual than in the haptic modality, suggesting that sensory-specific spatial experience may play a role in refining the properties of this core system of geometry.
Affiliation(s)
- Benedetta Heimler
- Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel; Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Tel Hashomer, Israel.
- Tomer Behor
- The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
- Stanislas Dehaene
- Cognitive Neuroimaging Unit, CEA DSV/I2BM, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France; Collège de France, 11 Place Marcelin Berthelot, 75005 Paris, France
- Véronique Izard
- Integrative Neuroscience and Cognition Center, Université de Paris, 45 rue des Saints-Pères, 75006 Paris, France; CNRS UMR 8002, 45 rue des Saints-Pères, 75006 Paris, France
- Amir Amedi
- Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel; The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
23
Gourgou E, Adiga K, Goettemoeller A, Chen C, Hsu AL. Caenorhabditis elegans learning in a structured maze is a multisensory behavior. iScience 2021; 24:102284. [PMID: 33889812 PMCID: PMC8050377 DOI: 10.1016/j.isci.2021.102284]
Abstract
We show that C. elegans nematodes learn to associate food with a combination of proprioceptive cues and information on the structure of their surroundings (maze), perceived through mechanosensation. By using the custom-made Worm-Maze platform, we demonstrate that C. elegans young adults can locate food in T-shaped mazes and, following that experience, learn to reach a specific maze arm. C. elegans learning inside the maze is possible after a single training session, it resembles working memory, and it prevails over conflicting environmental cues. We provide evidence that the observed learning is a food-triggered multisensory behavior, which requires mechanosensory and proprioceptive input, and utilizes cues about the structural features of nematodes' environment and their body actions. The CREB-like transcription factor and dopamine signaling are also involved in maze performance. Lastly, we show that the observed aging-driven decline of C. elegans learning ability in the maze can be reversed by starvation.
Highlights
- C. elegans can be trained to reach a target arm in a T-shaped maze
- Learning requires the contribution of tactile and proprioceptive cues
- C. elegans follow a kind of response learning strategy in the maze environment
- Learning is short-term and sensitive to distraction
Affiliation(s)
- Eleni Gourgou
- Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109, USA; Institute of Gerontology, University of Michigan Medical School, Ann Arbor, MI 41809, USA
- Kavya Adiga
- Department of Internal Medicine, Division of Geriatrics & Palliative Medicine, University of Michigan Medical School, Ann Arbor, MI 41809, USA
- Anne Goettemoeller
- Neuroscience Program, College of Literature, Science and the Arts, University of Michigan, Ann Arbor, MI 41809, USA
- Chieh Chen
- Institute of Biochemistry and Molecular Biology, National Yang Ming University, Taipei, 112 Taiwan
- Ao-Lin Hsu
- Department of Internal Medicine, Division of Geriatrics & Palliative Medicine, University of Michigan Medical School, Ann Arbor, MI 41809, USA; Institute of Biochemistry and Molecular Biology, National Yang Ming University, Taipei, 112 Taiwan; Research Center for Healthy Aging and Institute of New Drug Development, China Medical University, Taichung, 404, Taiwan
24
Blindness and the Reliability of Downwards Sensors to Avoid Obstacles: A Study with the EyeCane. Sensors 2021; 21:2700. [PMID: 33921202 PMCID: PMC8070041 DOI: 10.3390/s21082700]
Abstract
Vision loss has dramatic repercussions on the quality of life of affected people, particularly with respect to their orientation and mobility. Many devices are available to help blind people navigate in their environment. The EyeCane is a recently developed electronic travel aid (ETA) that is inexpensive and easy to use, allowing for the detection of obstacles lying ahead within a 2 m range. The goal of this study was to investigate the potential of the EyeCane as a primary aid for spatial navigation. Three groups of participants were recruited: early blind, late blind, and sighted. They were first trained with the EyeCane and then tested in a life-size obstacle course with four obstacle types: cube, door, post, and step. Subjects were requested to cross the corridor while detecting, identifying, and avoiding the obstacles. Each participant had to perform 12 runs with 12 different obstacle configurations. All participants were able to learn quickly to use the EyeCane and successfully completed all trials. Among the various obstacles, the step proved the hardest to detect and resulted in the most collisions. Although the EyeCane was effective for detecting obstacles lying ahead, its downward sensor did not reliably detect those on the ground, rendering downward obstacles more hazardous for navigation.
25
Ilardi CR, Iavarone A, Villano I, Rapuano M, Ruggiero G, Iachini T, Chieffi S. Egocentric and allocentric spatial representations in a patient with Bálint-like syndrome: A single-case study. Cortex 2020; 135:10-16. [PMID: 33341593 DOI: 10.1016/j.cortex.2020.11.010]
Abstract
Previous studies suggested that egocentric and allocentric spatial representations are supported by neural networks in the occipito-parietal (dorsal) and occipito-temporal (ventral) streams, respectively. The present study aimed to explore the integrity of ego- and allo-centric spatial representations in a patient (GP) who presented bilateral occipito-parietal damage consistent with the picture of a Bálint-like syndrome. GP and healthy controls were asked to provide memory-based spatial judgments on triads of objects after a short (1.5 s) or long (5 s) delay. The results showed that GP's performance was selectively impaired in the Ego/1.5 s delay condition. As a whole, our findings suggest that GP's spared ventral stream could generate short- and long-term allocentric representations. Furthermore, the stored perceptual representation processed within the ventral stream might have been used to generate long-term egocentric representations. Conversely, the generation of short-term egocentric representations appeared to be selectively undermined by the damage to the dorsal stream.
Affiliation(s)
- Ciro Rosario Ilardi
- Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy; Department of Experimental Medicine, University of Campania "Luigi Vanvitelli", Naples, Italy
- Ines Villano
- Department of Experimental Medicine, University of Campania "Luigi Vanvitelli", Naples, Italy
- Mariachiara Rapuano
- Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Gennaro Ruggiero
- Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Tina Iachini
- Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Sergio Chieffi
- Department of Experimental Medicine, University of Campania "Luigi Vanvitelli", Naples, Italy
26
Jicol C, Lloyd-Esenkaya T, Proulx MJ, Lange-Smith S, Scheller M, O'Neill E, Petrini K. Efficiency of Sensory Substitution Devices Alone and in Combination With Self-Motion for Spatial Navigation in Sighted and Visually Impaired. Front Psychol 2020; 11:1443. [PMID: 32754082 PMCID: PMC7381305 DOI: 10.3389/fpsyg.2020.01443]
Abstract
Human adults can optimally combine vision with self-motion to facilitate navigation. In the absence of visual input (e.g., dark environments and visual impairments), sensory substitution devices (SSDs) such as The vOICe or BrainPort, which translate visual information into auditory or tactile information, could be used to increase navigation precision when integrated together or with self-motion. In Experiment 1, we compared and assessed The vOICe and BrainPort together in an aerial-maps task performed by a group of sighted participants. In Experiment 2, we examined whether sighted individuals and a group of visually impaired (VI) individuals could benefit from using The vOICe, with and without self-motion, to accurately navigate a three-dimensional (3D) environment. In both experiments, 3D motion tracking data were used to determine the level of precision with which participants performed two different tasks (an egocentric and an allocentric task) under three different conditions (two unisensory conditions and one multisensory condition). In Experiment 1, we found no benefit of using the devices together. In Experiment 2, sighted participants' performance with The vOICe was almost as good as with self-motion despite a short training period, although we found no benefit (reduction in variability) of using The vOICe and self-motion in combination compared to the two in isolation. In contrast, the group of VI participants did benefit from combining The vOICe and self-motion despite the low number of trials. Finally, while both groups became more accurate in their use of The vOICe with increased trials, only the VI group showed an increased level of accuracy in the combined condition. Our findings highlight how exploiting non-visual multisensory integration to develop new assistive technologies could be key to helping blind and VI persons, especially given their difficulty in attaining allocentric information.
Affiliation(s)
- Crescent Jicol
- Department of Psychology, University of Bath, Bath, United Kingdom
- Michael J Proulx
- Department of Psychology, University of Bath, Bath, United Kingdom
- Simon Lange-Smith
- School of Sport and Exercise Sciences, Liverpool John Moores University, Liverpool, United Kingdom
- Meike Scheller
- Department of Psychology, University of Bath, Bath, United Kingdom
- Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, United Kingdom
- Karin Petrini
- Department of Psychology, University of Bath, Bath, United Kingdom
27
Martolini C, Cappagli G, Luparia A, Signorini S, Gori M. The Impact of Vision Loss on Allocentric Spatial Coding. Front Neurosci 2020; 14:565. [PMID: 32612500 PMCID: PMC7308590 DOI: 10.3389/fnins.2020.00565] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2020] [Accepted: 05/07/2020] [Indexed: 11/13/2022] Open
Abstract
Several works have demonstrated that visual experience plays a critical role in the development of allocentric spatial coding. Indeed, while typically developing children start to code space by relying on allocentric landmarks from the first year of life, blind children remain anchored to an egocentric perspective until late adolescence. Nonetheless, little is known about when and how visually impaired children acquire the ability to switch from an egocentric to an allocentric frame of reference across childhood. This work investigates whether visual experience is necessary to shift from bodily to external frames of reference. Children with visual impairment and normally sighted controls between 4 and 9 years of age were asked to solve a visual perspective-switching task requiring them to assume an egocentric or an allocentric perspective depending on the task condition. We hypothesized that, if visual experience is necessary for allocentric spatial coding, then visually impaired children would be impaired in switching from egocentric to allocentric perspectives. Results support this hypothesis, confirming a developmental delay in the ability to update spatial coordinates in visually impaired children. This suggests a pivotal role of vision in shaping allocentric spatial coding across development.
Affiliation(s)
- Chiara Martolini
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Department of Informatics, Bioengineering, Robotics and Systems Engineering, University of Genoa, Genoa, Italy
- Giulia Cappagli
- Center of Child Neuro-Ophthalmology, IRCCS Mondino Foundation, Pavia, Italy
- Antonella Luparia
- Center of Child Neuro-Ophthalmology, IRCCS Mondino Foundation, Pavia, Italy
- Sabrina Signorini
- Center of Child Neuro-Ophthalmology, IRCCS Mondino Foundation, Pavia, Italy
- Monica Gori
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
28
May KR, Tomlinson BJ, Ma X, Roberts P, Walker BN. Spotlights and Soundscapes. ACM TRANSACTIONS ON ACCESSIBLE COMPUTING 2020. [DOI: 10.1145/3378576] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
For persons with visual impairment, forming cognitive maps of unfamiliar interior spaces can be challenging. Various technical developments have converged to make it feasible, without specialized equipment, to represent a variety of useful landmark objects via spatial audio, rather than solely dispensing route information. Although such systems could be key to facilitating cognitive map formation, high-density auditory environments must be crafted carefully to avoid overloading the listener. This article recounts a set of research exercises with potential users, in which the optimization of such systems was explored. In Experiment 1, a virtual reality environment was used to rapidly prototype and adjust the auditory environment in response to participant comments. In Experiment 2, three variants of the system were evaluated in terms of their effectiveness in a real-world building. This methodology revealed a variety of optimization approaches and recommendations for designing dense mixed-reality auditory environments aimed at supporting cognitive map formation by visually impaired persons.
Affiliation(s)
- Xiaomeng Ma
- Georgia Institute of Technology, Atlanta, Georgia
29
Ruggiero G, Ruotolo F, Iavarone A, Iachini T. Allocentric coordinate spatial representations are impaired in aMCI and Alzheimer's disease patients. Behav Brain Res 2020; 393:112793. [PMID: 32619567 DOI: 10.1016/j.bbr.2020.112793] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2020] [Revised: 06/19/2020] [Accepted: 06/27/2020] [Indexed: 12/18/2022]
Abstract
Research has reported deficits in egocentric (subject-to-object) and especially allocentric (object-to-object) spatial representations in the early stages of Alzheimer's disease (eAD). In the search for early cognitive signs of neurodegenerative conversion, several studies have shown alterations in both reference frames, particularly the allocentric one, in amnestic Mild Cognitive Impairment (aMCI) and eAD patients. However, egocentric and allocentric spatial frames of reference are intrinsically connected with coordinate (metric/variant) and categorical (non-metric/invariant) spatial relations. This raises the question of whether the allocentric deficit used to detect the conversion from aMCI to dementia is differentially affected when combined with categorical or coordinate spatial relations. Here, we compared eAD and aMCI patients to Normal Controls (NC) on the Ego-Allo/Cat-Coor spatial memory task. Participants memorized triads of objects and were then asked to provide right/left (i.e. categorical) and distance-based (i.e. coordinate) judgments according to an egocentric or allocentric reference frame. Results showed a selective deficit of coordinate, but not categorical, allocentric judgments in both aMCI and eAD patients compared to the NC group. These results suggest that a sign of the departure from normal/healthy aging towards AD may be traced to elderly people's inability to represent metric distances among elements in space.
Affiliation(s)
- Gennaro Ruggiero
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania L. Vanvitelli, Caserta, Italy
- Francesco Ruotolo
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania L. Vanvitelli, Caserta, Italy
- Alessandro Iavarone
- Laboratory of Clinical Neuropsychology, Neurological Unit of "Ospedali dei Colli", Naples, Italy
- Tina Iachini
- Laboratory of Cognitive Science and Immersive Virtual Reality, CS-IVR, Department of Psychology, University of Campania L. Vanvitelli, Caserta, Italy
30
Karimpur H, Kurz J, Fiehler K. The role of perception and action on the use of allocentric information in a large-scale virtual environment. Exp Brain Res 2020; 238:1813-1826. [PMID: 32500297 PMCID: PMC7438369 DOI: 10.1007/s00221-020-05839-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Accepted: 05/23/2020] [Indexed: 01/10/2023]
Abstract
In everyday life, our brain constantly builds spatial representations of the objects surrounding us. Many studies have investigated the nature of these spatial representations. It is well established that we use allocentric information in real-time and memory-guided movements. Most studies relied on small-scale and static experiments, leaving it unclear whether similar paradigms yield the same results on a larger scale using dynamic objects. We created a virtual reality task that required participants to encode the landing position of a virtual ball thrown by an avatar. Encoding differed in the nature of the task in that it was either purely perceptual (“view where the ball landed while standing still”—Experiment 1) or involved an action (“intercept the ball with the foot just before it lands”—Experiment 2). After encoding, participants were asked to place a real ball at the remembered landing position in the virtual scene. In some trials, we subtly shifted either the thrower or the midfield line on a soccer field to manipulate allocentric coding of the ball’s landing position. In both experiments, we were able to replicate classic findings from small-scale experiments and to generalize these results to different encoding tasks (perception vs. action) and response modes (reaching vs. walking-and-placing). Moreover, we found that participants preferably encoded the ball relative to the thrower when they had to intercept the ball, suggesting that the use of allocentric information is determined by the encoding task by enhancing task-relevant allocentric information. Our findings indicate that results previously obtained from memory-guided reaching are not restricted to small-scale movements, but generalize to whole-body movements in large-scale dynamic scenes.
Affiliation(s)
- Harun Karimpur
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Johannes Kurz
- NemoLab-Neuromotor Behavior Laboratory, Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler
- Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
31
Richardson M, Thar J, Alvarez J, Borchers J, Ward J, Hamilton-Fletcher G. How Much Spatial Information Is Lost in the Sensory Substitution Process? Comparing Visual, Tactile, and Auditory Approaches. Perception 2019; 48:1079-1103. [PMID: 31547778 DOI: 10.1177/0301006619873194] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Sensory substitution devices (SSDs) can convey visuospatial information through spatialised auditory or tactile stimulation using wearable technology. However, the level of information loss associated with this transformation is unknown. In this study, novice users discriminated the location of two objects at 1.2 m using devices that transformed a 16 × 8 depth map into spatially distributed patterns of light, sound, or touch on the abdomen. Results showed that through active sensing, participants could discriminate the vertical position of objects to a visual angle of 1°, 14°, and 21°, and their distance to 2 cm, 8 cm, and 29 cm using these visual, auditory, and haptic SSDs, respectively. Visual SSDs significantly outperformed auditory and tactile SSDs on vertical localisation, whereas for depth perception, all devices significantly differed from one another (visual > auditory > haptic). Our findings highlight the high level of acuity possible for SSDs even with low spatial resolutions (e.g., 16 × 8) and quantify the level of information loss attributable to this transformation for the SSD user. Finally, we discuss ways of closing this “modality gap” found in SSDs and conclude that this process is best benchmarked against performance with SSDs that return to their primary modality (e.g., visuospatial into visual).
Affiliation(s)
- Jan Thar
- Media Computing Group, RWTH Aachen University, Germany
- James Alvarez
- Department of Psychology, University of Sussex, Brighton, UK
- Jan Borchers
- Media Computing Group, RWTH Aachen University, Germany
- Jamie Ward
- Department of Psychology, University of Sussex, Brighton, UK; Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
- Giles Hamilton-Fletcher
- Department of Psychology, University of Sussex, Brighton, UK; Neuroimaging and Visual Science Laboratory, New York University Langone Health, NY, USA
32
Interactions between egocentric and allocentric spatial coding of sounds revealed by a multisensory learning paradigm. Sci Rep 2019; 9:7892. [PMID: 31133688 PMCID: PMC6536515 DOI: 10.1038/s41598-019-44267-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2018] [Accepted: 05/08/2019] [Indexed: 11/09/2022] Open
Abstract
Although sound position is initially head-centred (egocentric coordinates), our brain can also represent sounds relative to one another (allocentric coordinates). Whether reference frames for spatial hearing are independent or interact remained largely unexplored. Here we developed a new allocentric spatial-hearing training and tested whether it can improve egocentric sound-localisation performance in normal-hearing adults listening with one ear plugged. Two groups of participants (N = 15 each) performed an egocentric sound-localisation task (point to a syllable), in monaural listening, before and after 4 days of multisensory training on triplets of white-noise bursts paired with occasional visual feedback. Critically, one group performed an allocentric task (auditory bisection task), whereas the other processed the same stimuli to perform an egocentric task (pointing to a designated sound of the triplet). Unlike most previous works, we also tested a no-training group (N = 15). Egocentric sound-localisation abilities in the horizontal plane improved for all groups in the space ipsilateral to the ear-plug. This unexpected finding highlights the importance of including a no-training group when studying sound-localisation re-learning. Yet, performance changes were qualitatively different in trained compared to untrained participants, providing initial evidence that allocentric and multisensory procedures may prove useful when aiming to promote sound-localisation re-learning.
33
Amadeo MB, Campus C, Gori M. Impact of years of blindness on neural circuits underlying auditory spatial representation. Neuroimage 2019; 191:140-149. [PMID: 30710679 DOI: 10.1016/j.neuroimage.2019.01.073] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2018] [Revised: 01/10/2019] [Accepted: 01/29/2019] [Indexed: 11/30/2022] Open
Abstract
Early visual deprivation negatively impacts spatial bisection abilities. Recently, an early (50-90 ms) ERP response, selective for sound position in space, has been observed in the visual cortex of sighted individuals during the spatial but not the temporal bisection task. Here, we clarify the role of vision in spatial bisection abilities and their neural correlates by studying late blind individuals. Results highlight that a shorter period of blindness is linked to a stronger contralateral activation in the visual cortex and a better performance during the spatial bisection task. In contrast, non-lateralized visual activation and lower performance are observed in individuals with a longer period of blindness. To conclude, the amount of time spent without vision may gradually impact neural circuits underlying the construction of spatial representations in late blind participants. These findings suggest a key relationship between visual deprivation and auditory spatial abilities in humans.
Affiliation(s)
- Maria Bianca Amadeo
- Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83 - 16152, Genova, Italy
- Università degli studi di Genova, Department of Informatics, Bioengineering, Robotics and Systems Engineering, Via all'Opera Pia, 13 - 16145, Genova, Italy
- Claudio Campus
- Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83 - 16152, Genova, Italy
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Fondazione Istituto Italiano di Tecnologia, Via E. Melen, 83 - 16152, Genova, Italy
34
Leo F, Tinti C, Chiesa S, Cavaglià R, Schmidt S, Cocchi E, Brayda L. Improving spatial working memory in blind and sighted youngsters using programmable tactile displays. SAGE Open Med 2018; 6:2050312118820028. [PMID: 30574309 PMCID: PMC6299321 DOI: 10.1177/2050312118820028] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2018] [Accepted: 11/21/2018] [Indexed: 11/28/2022] Open
Abstract
OBJECTIVE To investigate whether training with tactile matrices displayed on a programmable tactile display improves recall of spatial images in blind, low-vision and sighted youngsters, and to code and understand the behavioral underpinnings of learning two-dimensional tactile layouts in terms of spontaneous exploration strategies.
METHODS Three groups of blind, low-vision and sighted youngsters between 6 and 18 years old performed four weekly training sessions in which they were asked to memorize single or double spatial layouts, presented as two-dimensional matrices.
RESULTS All groups significantly improved their recall performance relative to the first-session baseline in the single-matrix task, with no statistical difference between groups. In the double-matrix task, however, the learning effect was reduced in visually impaired participants, whereas it remained robust in blindfolded sighted controls. We also coded tactile exploration strategies in both tasks and their correlation with performance. Sighted youngsters, in particular, favored a proprioceptive exploration strategy. Finally, performance in the double-matrix task correlated negatively with one-handed exploration and positively with a proprioceptive strategy.
CONCLUSION The results of our study indicate that blind persons do not easily process two separate spatial layouts. However, rehabilitation programs promoting bi-manual and proprioceptive approaches to tactile exploration might help improve spatial abilities. Finally, programmable tactile displays are an effective way to make spatial and graphical configurations accessible to visually impaired youngsters, and they can be profitably exploited in rehabilitation.
Affiliation(s)
- Fabrizio Leo
- Robotics, Brain and Cognitive Sciences department, Center for Human Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
- Carla Tinti
- Dipartimento di Psicologia, Università degli Studi di Torino, Turin, Italy
- Silvia Chiesa
- Dipartimento di Psicologia, Università degli Studi di Torino, Turin, Italy
- Roberta Cavaglià
- Dipartimento di Psicologia, Università degli Studi di Torino, Turin, Italy
- Susanna Schmidt
- Dipartimento di Psicologia, Università degli Studi di Torino, Turin, Italy
- Elena Cocchi
- Istituto David Chiossone per Ciechi e Ipovedenti Onlus, Genoa, Italy
- Luca Brayda
- Robotics, Brain and Cognitive Sciences department, Center for Human Technologies, Istituto Italiano di Tecnologia, Genoa, Italy
35
Setti W, Cuturi LF, Cocchi E, Gori M. A novel paradigm to study spatial memory skills in blind individuals through the auditory modality. Sci Rep 2018; 8:13393. [PMID: 30190584 PMCID: PMC6127324 DOI: 10.1038/s41598-018-31588-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2017] [Accepted: 08/09/2018] [Indexed: 11/26/2022] Open
Abstract
Spatial memory is a multimodal representation of the environment, which can be mediated by different sensory signals. Here we investigate how the auditory modality influences memorization, contributing to the mental representation of a scene. We designed an audio test for blind individuals inspired by a validated spatial memory test, the Corsi-Block test. The test was carried out in two different conditions, with non-semantic and semantic stimuli, presented in different sessions and arranged on an audio-tactile device. Furthermore, the semantic sounds were spatially arranged to reproduce an audio scene, explored by participants during the test. Thus, we verified whether semantic rather than non-semantic sounds are better recalled and whether exposure to an auditory scene can enhance memorization skills. Our results show that sighted subjects performed better than blind participants after the exploration of the semantic scene. This suggests that blind participants focus on the perceived sound positions and do not use the items’ locations learned during the exploration. We discuss these results in terms of the role of visual experience in spatial memorization skills and the ability to take advantage of semantic information stored in memory.
Affiliation(s)
- Walter Setti
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Robotics, Brain and Cognitive Science (RBCS), Istituto Italiano di Tecnologia, Genoa, Italy
- DIBRIS Department, University of Genoa, Genoa, Italy
- Luigi F Cuturi
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Istituto Italiano di Tecnologia, Genoa, Italy
36
Pasqualotto A, Furlan M, Proulx MJ, Sereno MI. Visual loss alters multisensory face maps in humans. Brain Struct Funct 2018; 223:3731-3738. [PMID: 30043118 DOI: 10.1007/s00429-018-1713-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2015] [Accepted: 07/04/2018] [Indexed: 01/09/2023]
Abstract
Topographically organised responses to visual and tactile stimulation are aligned in the ventral intraparietal cortex. The critical biological importance of this region, which is thought to mediate visually guided defensive movements of the head and upper body, suggests that these maps might be hardwired from birth. Here, we investigated whether visual experience is necessary for the creation and positioning of these maps by assessing the representation of tactile stimulation in congenitally and totally blind participants, who had no visual experience, and in late and totally blind participants. We used a single-subject approach to the analysis to focus on the potential individual differences in the functional neuroanatomy that might arise from different causes, durations and sensory experiences of visual impairment among participants. The overall results did not show any significant difference between congenitally and late blind participants; however, single-subject trends suggested that visual experience is not necessary to develop topographically organised maps in the intraparietal cortex, whilst losing vision disrupted topographic maps' integrity and organisation. These results are discussed in terms of brain plasticity and sensitive periods.
Affiliation(s)
- Achille Pasqualotto
- School of Biological and Chemical Sciences, Queen Mary University of London, London, UK
- Department of Psychology, University of Bath, Bath, UK
- Faculty of Arts and Social Sciences, Sabanci University, 34956, Tuzla, Istanbul, Turkey
- Michele Furlan
- SISSA (Scuola Internazionale Superiore di Studi Avanzati), Trieste, Italy
- Michael J Proulx
- School of Biological and Chemical Sciences, Queen Mary University of London, London, UK
- Department of Psychology, University of Bath, Bath, UK
37
Massiceti D, Hicks SL, van Rheede JJ. Stereosonic vision: Exploring visual-to-auditory sensory substitution mappings in an immersive virtual reality navigation paradigm. PLoS One 2018; 13:e0199389. [PMID: 29975734 PMCID: PMC6033394 DOI: 10.1371/journal.pone.0199389] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2017] [Accepted: 06/06/2018] [Indexed: 01/16/2023] Open
Abstract
Sighted people predominantly use vision to navigate spaces, and sight loss has negative consequences for independent navigation and mobility. The recent proliferation of devices that can extract 3D spatial information from visual scenes opens up the possibility of using such mobility-relevant information to assist blind and visually impaired people by presenting this information through modalities other than vision. In this work, we present two new methods for encoding visual scenes using spatial audio: simulated echolocation and distance-dependent hum volume modulation. We implemented both methods in a virtual reality (VR) environment and tested them using a 3D motion-tracking device. This allowed participants to physically walk through virtual mobility scenarios, generating data on real locomotion behaviour. Blindfolded sighted participants completed two tasks: maze navigation and obstacle avoidance. Results were measured against a visual baseline in which participants performed the same two tasks without blindfolds. Task completion time, speed and number of collisions were used as indicators of successful navigation, with additional metrics exploring detailed dynamics of performance. In both tasks, participants were able to navigate using only audio information after minimal instruction. While participants were 65% slower using audio compared to the visual baseline, they reduced their audio navigation time by an average 21% over just 6 trials. Hum volume modulation proved over 20% faster than simulated echolocation in both mobility scenarios, and participants also showed the greatest improvement with this sonification method. Nevertheless, we speculate that simulated echolocation remains worth exploring as it provides more spatial detail and could therefore be more useful in more complex environments. The fact that participants were intuitively able to successfully navigate space with two new visual-to-audio mappings for conveying spatial information motivates the further exploration of these and other mappings with the goal of assisting blind and visually impaired individuals with independent mobility.
Affiliation(s)
- Daniela Massiceti
- Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Stephen Lloyd Hicks
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Joram Jacob van Rheede
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
38
Hamilton-Fletcher G, Pisanski K, Reby D, Stefańczyk M, Ward J, Sorokowska A. The role of visual experience in the emergence of cross-modal correspondences. Cognition 2018; 175:114-121. [DOI: 10.1016/j.cognition.2018.02.023] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2017] [Revised: 02/20/2018] [Accepted: 02/22/2018] [Indexed: 11/26/2022]
39
Nelson JS, Kuling IA. Spatial Representation of the Workspace in Blind, Low Vision, and Sighted Human Participants. Iperception 2018; 9:2041669518781877. [PMID: 29977492 PMCID: PMC6024533 DOI: 10.1177/2041669518781877] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2017] [Accepted: 05/16/2018] [Indexed: 11/17/2022] Open
Abstract
It has been proposed that haptic spatial perception depends on one's visual abilities. We tested spatial perception in the workspace using a combination of haptic matching and line drawing tasks. There were 132 participants with varying degrees of visual ability ranging from congenitally blind to normally sighted. Each participant was blindfolded and asked to match a haptic target position felt under a table with their nondominant hand using a pen in their dominant hand. Once the pen was in position on the tabletop, they had to draw a line of equal length to a previously felt reference object by moving the pen laterally. We used targets at three different locations to evaluate whether different starting positions relative to the body give rise to different matching errors, drawn line lengths, or drawn line angles. We found no influence of visual ability on matching error, drawn line length, or line angle, but we found that early-blind participants are slightly less consistent in their matching errors across space. We conclude that the elementary haptic abilities tested in these tasks do not depend on visual experience.
Affiliation(s)
- Jacob S. Nelson
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
- Irene A. Kuling
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
40
Congenital blindness limits allocentric to egocentric switching ability. Exp Brain Res 2018; 236:813-820. [PMID: 29340716 DOI: 10.1007/s00221-018-5176-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2017] [Accepted: 01/09/2018] [Indexed: 10/18/2022]
Abstract
Many everyday spatial activities require the cooperation of, or switching between, egocentric (subject-to-object) and allocentric (object-to-object) spatial representations. The literature on blind people has reported that the lack of vision (congenital blindness) may limit the capacity to represent allocentric spatial information. However, research has mainly focused on the selective involvement of egocentric or allocentric representations, not the switching between them. Here we investigated the effect of visual deprivation on the ability to switch between spatial frames of reference. To this aim, congenitally blind (long-term visual deprivation), blindfolded sighted (temporary visual deprivation) and sighted (full visual availability) participants were compared on the Ego-Allo switching task. This task assessed the capacity to verbally judge the relative distances between memorized stimuli in switching (from egocentric-to-allocentric: Ego-Allo; from allocentric-to-egocentric: Allo-Ego) and non-switching (only-egocentric: Ego-Ego; only-allocentric: Allo-Allo) conditions. Results showed that congenitally blind participants had difficulty switching from allocentric to egocentric representations, but not when the first anchor point was egocentric. In line with previous results, a deficit in processing allocentric representations in non-switching conditions also emerged. These findings suggest that the allocentric deficit in congenital blindness may underlie a difficulty in simultaneously maintaining and combining different spatial representations. This deficit alters the capacity to switch between reference frames specifically when the first anchor point is external rather than body-centered.
|
41
|
Chiesa S, Schmidt S, Tinti C, Cornoldi C. Allocentric and contra-aligned spatial representations of a town environment in blind people. Acta Psychol (Amst) 2017; 180:8-15. [PMID: 28806576 DOI: 10.1016/j.actpsy.2017.08.001] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2017] [Revised: 05/27/2017] [Accepted: 08/04/2017] [Indexed: 11/29/2022] Open
Abstract
Evidence concerning the representation of space by blind individuals is still unclear: sometimes blind people behave as sighted people do, while at other times they show difficulties. A better understanding of blind people's difficulties, especially with reference to the strategies used to form a representation of the environment, may help to enhance knowledge of the consequences of the absence of vision. The present study examined the representation of the locations of landmarks in a real town by using pointing tasks that entailed either allocentric points of reference with mental rotations of different degrees, or contra-aligned representations. Results showed that, in general, people had difficulty when they had to point from a different perspective to aligned landmarks or from the original perspective to contra-aligned landmarks, but this difficulty was particularly evident for the blind. Examination of the strategies adopted to perform the tasks showed that only a small group of blind participants used a survey strategy, and that this group performed better than those who adopted route or verbal strategies. Implications for understanding the consequences of the absence of visual experience for spatial cognition are discussed, focusing in particular on conceivable interventions.
Affiliation(s)
- Silvia Chiesa
- University of Turin, via Verdi 10, 10124 Turin, Italy.
- Carla Tinti
- University of Turin, via Verdi 10, 10124 Turin, Italy.
|
42
|
Abstract
Visual information is extremely important for generating internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies; however, specific spatial abilities may nevertheless be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals' performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by comparing localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds, and their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a slight bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations may be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions.
|
43
|
Chieffi S, Villano I, Iavarone A, Messina A, Monda V, Viggiano A, Messina G, Monda M. Manual asymmetry for temporal and spatial parameters in sensorimotor synchronization. Exp Brain Res 2017; 235:1511-1518. [PMID: 28251335 DOI: 10.1007/s00221-017-4919-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2016] [Accepted: 02/16/2017] [Indexed: 11/29/2022]
Abstract
Previous studies suggest a right hemisphere advantage for temporal processing and a left hemisphere advantage for the planning of motor actions. In the present study, we examined sensorimotor synchronization of hand reaching movements with an auditory rhythm. Blindfolded right-handed participants were asked to synchronize left- and right-hand movements to an auditory rhythm (40 vs. 60 vs. 80 bpm) and simultaneously reproduce the amplitude of a previously shown movement. Constant and variable asynchronies and movement amplitude errors were measured. The results showed that (a) constant asynchrony was smaller with the left hand than with the right hand and (b) constant and variable amplitude errors were smaller with the right hand than with the left hand. We suggest that when hand reaching movements are synchronized with an auditory rhythm, the left hand/right hemisphere system is relatively specialized in temporally adhering to the rhythm and the right hand/left hemisphere system in performing spatially accurate movements.
Affiliation(s)
- Sergio Chieffi
- Dipartimento di Medicina Sperimentale, Seconda Università di Napoli, Via Costantinopoli 16, 80138, Napoli, Italy.
- Ines Villano
- Dipartimento di Medicina Sperimentale, Seconda Università di Napoli, Via Costantinopoli 16, 80138, Napoli, Italy
- Alessandro Iavarone
- Neurological and Stroke Unit, CTO Hospital, AORN "Ospedali dei Colli", Naples, Italy
- Antonietta Messina
- Dipartimento di Medicina Sperimentale, Seconda Università di Napoli, Via Costantinopoli 16, 80138, Napoli, Italy
- Vincenzo Monda
- Dipartimento di Medicina Sperimentale, Seconda Università di Napoli, Via Costantinopoli 16, 80138, Napoli, Italy
- Andrea Viggiano
- Department of Medicine and Surgery, University of Salerno, Baronissi, Italy
- Giovanni Messina
- Department of Clinical and Experimental Medicine, University of Foggia, Foggia, Italy
- Marcellino Monda
- Dipartimento di Medicina Sperimentale, Seconda Università di Napoli, Via Costantinopoli 16, 80138, Napoli, Italy
|
44
|
Abstract
Valuable insights into the role played by visual experience in shaping spatial representations can be gained by studying the effects of visual deprivation on the remaining sensory modalities. For instance, it has long been debated how spatial hearing evolves in the absence of visual input. While several anecdotal accounts tend to associate complete blindness with exceptional hearing abilities, experimental evidence supporting such claims is, however, matched by nearly equal amounts of evidence documenting spatial hearing deficits. The purpose of this review is to summarize the key findings which support either enhancements or deficits in spatial hearing observed following visual loss and to provide a conceptual framework that isolates the specific conditions under which they occur. Available evidence will be examined in terms of spatial dimensions (horizontal, vertical, and depth perception) and in terms of frames of reference (egocentric and allocentric). Evidence suggests that while early blind individuals show superior spatial hearing in the horizontal plane, they also show significant deficits in the vertical plane. Potential explanations underlying these contrasting findings will be discussed. Early blind individuals also show spatial hearing impairments when performing tasks that require the use of an allocentric frame of reference. Results obtained with late-onset blind individuals suggest that early visual experience plays a key role in the development of both spatial hearing enhancements and deficits.
Affiliation(s)
- Patrice Voss
- Cognitive Neuroscience Unit, Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
|
45
|
Auditory spatial representations of the world are compressed in blind humans. Exp Brain Res 2016; 235:597-606. [PMID: 27837259 PMCID: PMC5272902 DOI: 10.1007/s00221-016-4823-1] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2016] [Accepted: 11/05/2016] [Indexed: 11/30/2022]
Abstract
Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
|
46
|
Abstract
The hypothesis that highly overlapping networks underlie brain functions (neural reuse) is decisively supported by three decades of multisensory research. Multisensory areas process information from more than one sensory modality and therefore represent the best examples of neural reuse. Recent evidence of multisensory processing in primary visual cortices further indicates that neural reuse is a basic feature of the brain.
|
47
|
Ruotolo F, Iachini T, Ruggiero G, van der Ham IJM, Postma A. Frames of reference and categorical/coordinate spatial relations in a "what was where" task. Exp Brain Res 2016; 234:2687-96. [PMID: 27180248 PMCID: PMC4978766 DOI: 10.1007/s00221-016-4672-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2016] [Accepted: 05/05/2016] [Indexed: 11/30/2022]
Abstract
The aim of this study was to explore how people use egocentric (i.e., with respect to their body) and allocentric (i.e., with respect to another element in the environment) references in combination with coordinate (metric) or categorical (abstract) spatial information to identify a target element. Participants were asked to memorize triads of 3D objects or 2D figures and, either immediately or after a delay of 5 s, to verbally indicate which object/figure was: (1) closest to/farthest from them (egocentric coordinate task); (2) on their right/left (egocentric categorical task); (3) closest to/farthest from another object/figure (allocentric coordinate task); (4) on the right/left of another object/figure (allocentric categorical task). Results showed that the use of 2D figures favored categorical judgments over coordinate ones with either an egocentric or an allocentric reference frame, whereas the use of 3D objects specifically favored egocentric coordinate judgments over allocentric ones. Furthermore, egocentric judgments were more accurate than allocentric judgments when the response was immediate rather than delayed and when 3D objects rather than 2D figures were used. This pattern of results is discussed in light of the functional roles attributed to frames of reference and spatial relations by relevant theories of visuospatial processing.
Affiliation(s)
- Francesco Ruotolo
- Helmholtz Institute, Experimental Psychology, Utrecht University, Utrecht, The Netherlands; Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, Second University of Naples, Caserta, Italy.
- Tina Iachini
- Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, Second University of Naples, Caserta, Italy
- Gennaro Ruggiero
- Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, Second University of Naples, Caserta, Italy
- Ineke J M van der Ham
- Faculty of Social and Behavioral Sciences, Leiden University, Leiden, The Netherlands
- Albert Postma
- Helmholtz Institute, Experimental Psychology, Utrecht University, Utrecht, The Netherlands
|
48
|
Proulx MJ, Todorov OS, Taylor Aiken A, de Sousa AA. Where am I? Who am I? The Relation Between Spatial Cognition, Social Cognition and Individual Differences in the Built Environment. Front Psychol 2016; 7:64. [PMID: 26903893 PMCID: PMC4749931 DOI: 10.3389/fpsyg.2016.00064] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2015] [Accepted: 01/12/2016] [Indexed: 11/13/2022] Open
Abstract
Knowing who we are and where we are are two fundamental aspects of our physical and mental experience. Although the domains of spatial and social cognition are often studied independently, a few recent areas of scholarship have explored the interactions of place and self. This fits with increasing evidence for embodied theories of cognition, in which mental processes are grounded in action and perception. Who we are might be integrated with where we are, and might influence how we move through space. Individuals vary in personality, navigational strategies, and numerous cognitive and social competencies. Here we review the relation between the social and spatial spheres of existence in the realms of philosophical considerations, neural and psychological representations, and evolutionary context, and consider how we might use the built environment to suit who we are, or how it creates who we are. In particular, we investigate how two spatial reference frames, egocentric and allocentric, might extend into the social realm. We then speculate on how environments may interact with spatial cognition. Finally, we suggest how a framework encompassing spatial and social cognition might be taken into consideration by architects and urban planners.
Affiliation(s)
- Michael J Proulx
- Crossmodal Cognition Laboratory, Department of Psychology, University of Bath, Bath, UK
- Orlin S Todorov
- European Network for Brain Evolution Research, The Hague, Netherlands
|
49
|
Schinazi VR, Thrash T, Chebat DR. Spatial navigation by congenitally blind individuals. WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2015; 7:37-58. [PMID: 26683114 PMCID: PMC4737291 DOI: 10.1002/wcs.1375] [Citation(s) in RCA: 67] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/20/2015] [Revised: 10/16/2015] [Accepted: 11/17/2015] [Indexed: 11/08/2022]
Abstract
Spatial navigation in the absence of vision has been investigated from a variety of perspectives and disciplines. These different approaches have advanced our understanding of spatial knowledge acquisition by blind individuals, including their abilities, strategies, and corresponding mental representations. In this review, we propose a framework for investigating differences in spatial knowledge acquisition by blind and sighted people consisting of three longitudinal models (i.e., convergent, cumulative, and persistent). Recent advances in neuroscience and technological devices have provided novel insights into the different neural mechanisms underlying spatial navigation by blind and sighted people and the potential for functional reorganization. Despite these advances, there is still a lack of consensus regarding the extent to which locomotion and wayfinding depend on amodal spatial representations. This challenge largely stems from methodological limitations such as heterogeneity in the blind population and terminological ambiguity related to the concept of cognitive maps. Coupled with an over-reliance on potential technological solutions, the field has diffused into theoretical and applied branches that do not always communicate. Here, we review research on navigation by congenitally blind individuals with an emphasis on behavioral and neuroscientific evidence, as well as the potential of technological assistance. Throughout the article, we emphasize the need to disentangle strategy choice and performance when discussing the navigation abilities of the blind population.
Affiliation(s)
- Victor R Schinazi
- Department of Humanities, Social, and Political Sciences, ETH Zürich, Zürich, Switzerland
- Tyler Thrash
- Department of Humanities, Social, and Political Sciences, ETH Zürich, Zürich, Switzerland
|
50
|
Pawluk DTV, Adams RJ, Kitada R. Designing Haptic Assistive Technology for Individuals Who Are Blind or Visually Impaired. IEEE TRANSACTIONS ON HAPTICS 2015; 8:258-278. [PMID: 26336151 DOI: 10.1109/toh.2015.2471300] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
This paper considers issues relevant to the design and use of haptic technology in assistive devices for individuals who are blind or visually impaired in some of the major areas of importance: Braille reading, tactile graphics, and orientation and mobility. We show that there is a wealth of behavioral research that is highly applicable to assistive technology design. In a few cases, conclusions from behavioral experiments have been applied directly to design with positive results. Differences in brain organization and performance capabilities between individuals who are "early blind" and "late blind" in using the same tactile/haptic accommodations, such as Braille, suggest the importance of training and assessing these groups individually. Practical restrictions on device design, such as performance limitations of the technology and cost, raise questions as to which aspects of these restrictions are truly important to overcome in order to achieve high performance. More generally, this raises the question of what it means to provide functional equivalence as opposed to sensory equivalence.
|