1. Bruns P, Röder B. Development and experience-dependence of multisensory spatial processing. Trends Cogn Sci 2023; 27:961-973. [PMID: 37208286] [DOI: 10.1016/j.tics.2023.04.012]
Abstract
Multisensory spatial processes are fundamental for efficient interaction with the world. They include not only the integration of spatial cues across sensory modalities, but also the adjustment or recalibration of spatial representations to changing cue reliabilities, crossmodal correspondences, and causal structures. Yet how multisensory spatial functions emerge during ontogeny is poorly understood. New results suggest that temporal synchrony and enhanced multisensory associative learning capabilities first guide causal inference and initiate early coarse multisensory integration capabilities. These multisensory percepts are crucial for the alignment of spatial maps across sensory systems, and are used to derive more stable biases for adult crossmodal recalibration. The refinement of multisensory spatial integration with increasing age is further promoted by the inclusion of higher-order knowledge.
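The integration process described here is commonly formalized as reliability-weighted (maximum-likelihood) cue combination. The following minimal sketch is not from the paper; it illustrates the standard model the abstract alludes to, with all variable names and values chosen purely for illustration:

```python
import numpy as np

def integrate_cues(est_a, var_a, est_b, var_b):
    """Reliability-weighted (maximum-likelihood) integration of two
    spatial cues. Reliability is the inverse of cue variance; the
    combined estimate is the reliability-weighted average, and its
    variance is lower than either cue's alone."""
    rel_a, rel_b = 1.0 / var_a, 1.0 / var_b
    w_a = rel_a / (rel_a + rel_b)          # weight tracks relative reliability
    combined = w_a * est_a + (1.0 - w_a) * est_b
    combined_var = 1.0 / (rel_a + rel_b)   # integration reduces uncertainty
    return combined, combined_var

# Illustrative values: a visual estimate at 10 deg azimuth (low variance)
# and an auditory estimate at 14 deg (higher variance).
print(integrate_cues(10.0, 1.0, 14.0, 4.0))  # -> (10.8, 0.8)
```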
Affiliation(s)
- Patrick Bruns
  - Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Brigitte Röder
  - Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
2. Shvadron S, Snir A, Maimon A, Yizhar O, Harel S, Poradosu K, Amedi A. Shape detection beyond the visual field using a visual-to-auditory sensory augmentation device. Front Hum Neurosci 2023; 17:1058617. [PMID: 36936618] [PMCID: PMC10017858] [DOI: 10.3389/fnhum.2023.1058617]
Abstract
Current advances in technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by using a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to build a spatial representation of information arriving from a video feed at a slow sampling rate. Here, in an initial proof-of-concept study, we instead used it to cover the regions outside the visual field of sighted individuals, testing their ability to combine natural visual information with surrounding auditory sonification of visual information. Participants were tasked with recognizing and correctly localizing stimuli, with sound representing the areas outside the standard human visual field: they reported each shape's identity as well as its spatial location (front/right/back/left), so that successful performance required combining visual (90° frontal) and auditory (the remaining 270°) input; content in both vision and audition was presented in a sweeping clockwise motion around the participant. Participants performed well above chance after a brief 1-h online training session and one on-site training session averaging 20 min, and in some cases could even draw a 2D representation of the sonified image. They could also generalize, recognizing new shapes on which they had not been explicitly trained. Our findings provide an initial proof of concept that sensory augmentation devices and techniques can be used in combination with natural sensory information to expand the natural fields of sensory perception.
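For readers unfamiliar with this family of SSDs, the sketch below illustrates the general image-to-sound mapping they use: a left-to-right sweep in which pitch codes elevation and loudness codes brightness. This is an illustrative simplification, not the actual EyeMusic algorithm, and all parameter values are assumptions:

```python
import numpy as np

def sonify(image, duration=2.0, rate=44100, f_lo=220.0, f_hi=1760.0):
    """Toy visual-to-auditory sonification (an illustrative simplification,
    not the EyeMusic algorithm): columns are swept left to right over
    `duration` seconds; each row gets a fixed pitch (higher rows -> higher
    frequencies); pixel brightness in [0, 1] sets that tone's amplitude."""
    n_rows, n_cols = image.shape
    t = np.arange(int(duration * rate / n_cols)) / rate  # one column's time axis
    freqs = np.geomspace(f_hi, f_lo, n_rows)             # row 0 (top) = highest pitch
    tones = np.sin(2 * np.pi * freqs[:, None] * t)       # one sine per row
    cols = [(image[:, c, None] * tones).sum(axis=0) for c in range(n_cols)]
    signal = np.concatenate(cols)
    return signal / (np.abs(signal).max() + 1e-9)        # normalize to [-1, 1]

# Illustrative use: a rising diagonal in an 8x8 "image" becomes a rising sweep.
img = np.eye(8)[::-1]   # bright diagonal from bottom-left to top-right
audio = sonify(img)     # ~2 s of mono audio at 44.1 kHz
```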
Affiliation(s)
- Shira Shvadron (corresponding author)
  - Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
  - The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Adi Snir
  - Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
  - The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Amber Maimon
  - Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
  - The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Or Yizhar
  - Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
  - The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
  - Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
  - Max Planck Dahlem Campus of Cognition (MPDCC), Max Planck Institute for Human Development, Berlin, Germany
- Sapir Harel
  - Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
  - The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Keinan Poradosu
  - Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
  - The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
  - Weizmann Institute of Science, Rehovot, Israel
- Amir Amedi
  - Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel
  - The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
3. Nguyen KV, Tansan M, Newcombe NS. Studying the development of navigation using virtual environments. J Cogn Dev 2022; 24:1-16. [PMID: 37614812] [PMCID: PMC10445272] [DOI: 10.1080/15248372.2022.2133123]
Abstract
Research on spatial navigation is essential to understanding how mobile species adapt to their environments. Such research increasingly uses virtual environments (VEs) because, although VEs have drawbacks, they allow for standardization of procedures, precision in measuring behaviors, ease in introducing variation, and cross-investigator comparability. Developmental researchers have used a wide range of VE testing methods, including desktop computers, gaming consoles, virtual reality, and phone applications. We survey the paradigms to guide researchers' choices, organizing them by their characteristics using a framework proposed by Girard (2022) in which navigation is reactive or deliberative, and may or may not be tied to sensory input. This organization highlights which representations each paradigm taps. VE tools have enriched our picture of the development of navigation, but much research remains to be done, e.g., determining test-retest reliability, comparing performance across paradigms, validating performance against real-world behavior, and sharing paradigms openly. Reliable and valid assessments available on open-science repositories are essential for work on the development of navigation, its neural bases, and its implications for other cognitive domains.
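The Girard (2022) framework mentioned here is a simple taxonomy built on two binary dimensions (reactive vs. deliberative; tied to ongoing sensory input or not). A minimal sketch of how a paradigm inventory might encode it follows; the example entries are hypothetical illustrations, not classifications taken from the paper's survey:

```python
from dataclasses import dataclass

@dataclass
class VEParadigm:
    """A VE navigation paradigm classified on the two binary dimensions
    of the Girard (2022) framework used by the authors."""
    name: str
    deliberative: bool   # False -> reactive (stimulus-driven) navigation
    sensory_tied: bool   # True -> guided by ongoing sensory input

# Hypothetical examples for illustration; the paper's survey, not this
# sketch, is the authority on how specific paradigms are classified.
paradigms = [
    VEParadigm("beacon following", deliberative=False, sensory_tied=True),
    VEParadigm("shortcut planning from a cognitive map", deliberative=True, sensory_tied=False),
]

for p in paradigms:
    print(f"{p.name}: "
          f"{'deliberative' if p.deliberative else 'reactive'}, "
          f"{'sensory-tied' if p.sensory_tied else 'representation-based'}")
```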
Affiliation(s)
- Kim V Nguyen
  - Department of Psychology and Neuroscience, Temple University
- Merve Tansan
  - Department of Psychology and Neuroscience, Temple University
- Nora S Newcombe
  - Department of Psychology and Neuroscience, Temple University