1
Snir A, Cieśla K, Ozdemir G, Vekslar R, Amedi A. Localizing 3D motion through the fingertips: Following in the footsteps of elephants. iScience 2024; 27:109820. PMID: 38799571; PMCID: PMC11126990; DOI: 10.1016/j.isci.2024.109820.
Abstract
Each sense serves a different specific function in spatial perception, and together they form a joint multisensory spatial representation. For instance, hearing enables localization in the entire 3D external space, while touch traditionally allows localization only of objects on the body (i.e., within the peripersonal space alone). We use an in-house touch-motion algorithm (TMA) to evaluate individuals' capability to understand externalized 3D information through touch, a skill not acquired during development or evolution. Four experiments demonstrate quick learning and high accuracy in localization of motion using vibrotactile inputs on the fingertips, as well as successful audio-tactile integration in background noise. Subjective responses in some participants imply spatial experiences through visualization and perception of tactile "moving" sources beyond reach. We discuss our findings with respect to developing new skills in the adult brain, including combining a newly acquired "sense" with an existing one, and computation-based brain organization.
Affiliation(s)
- Adi Snir: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Katarzyna Cieśla: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel; World Hearing Centre, Institute of Physiology and Pathology of Hearing, Mokra 17, 05-830 Kajetany, Nadarzyn, Poland
- Gizem Ozdemir: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Rotem Vekslar: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Amir Amedi: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
2
Kumari R, Lee S, Shin J, Lee S. Effect of Perceptual Training with Sound-Guided and Kinesthetic Feedback on Human 3D Sound Localization Capabilities. Sensors (Basel) 2023; 23(11):5023. PMID: 37299750; DOI: 10.3390/s23115023.
Abstract
In this paper, we experimentally investigate how the 3D sound localization capabilities of blind people can improve through perceptual training. To this end, we develop a novel perceptual training method with sound-guided feedback and kinesthetic assistance, and evaluate its effectiveness compared to conventional training methods. In the perceptual training, we exclude visual perception by blindfolding the subjects so that the proposed method can be applied to the visually impaired. Subjects used a specially designed pointing stick that generates a sound at its tip, indicating localization error and tip position. The proposed perceptual training aims to evaluate the training effect on 3D sound localization, including variations in azimuth, elevation, and distance. Six days of training with six subjects resulted in the following outcomes: (1) In general, accuracy in full 3D sound localization can be improved through training. (2) Training based on relative error feedback is more effective than training based on absolute error feedback. (3) Subjects tend to underestimate distance when the sound source is near (less than 1000 mm) or more than 15° to the left, and to overestimate elevation when the sound source is near or central, within ±15° in azimuth.
Affiliation(s)
- Ranjita Kumari: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Sukhan Lee: Department of Artificial Intelligence, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Jonghwan Shin: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Soojin Lee: Department of Artificial Intelligence, Sungkyunkwan University, Suwon 16419, Republic of Korea
3
Shvadron S, Snir A, Maimon A, Yizhar O, Harel S, Poradosu K, Amedi A. Shape detection beyond the visual field using a visual-to-auditory sensory augmentation device. Front Hum Neurosci 2023; 17:1058617. PMID: 36936618; PMCID: PMC10017858; DOI: 10.3389/fnhum.2023.1058617.
Abstract
Current advancements in both technology and science allow us to manipulate our sensory modalities in new and unexpected ways. In the present study, we explore the potential of expanding what we perceive through our natural senses by utilizing a visual-to-auditory sensory substitution device (SSD), the EyeMusic, an algorithm that converts images to sound. The EyeMusic was initially developed to allow blind individuals to create a spatial representation of information arriving from a video feed at a slow sampling rate. In this study, we aimed to use the EyeMusic for the blind areas of sighted individuals, in an initial proof-of-concept study testing the ability of sighted subjects to combine visual information with surrounding auditory sonification representing visual information. Participants were tasked with recognizing and adequately placing the stimuli, using sound to represent the areas outside the standard human visual field. As such, the participants were asked to report shapes' identities as well as their spatial orientation (front/right/back/left), requiring combined visual (90° frontal) and auditory input (the remaining 270°) for successful performance of the task (content in both vision and audition was presented in a sweeping clockwise motion around the participant). We found that participants performed well above chance after a brief 1-h online training session and one on-site training session averaging 20 min. In some cases, they could even draw a 2D representation of the perceived image. Participants could also generalize, recognizing new shapes they were not explicitly trained on. Our findings provide an initial proof of concept indicating that sensory augmentation devices and techniques can potentially be used in combination with natural sensory information in order to expand the natural fields of sensory perception.
Affiliation(s)
- Shira Shvadron (corresponding author): Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Adi Snir: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Amber Maimon: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Or Yizhar: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel; Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany; Max Planck Dahlem Campus of Cognition (MPDCC), Max Planck Institute for Human Development, Berlin, Germany
- Sapir Harel: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Keinan Poradosu: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel; Weizmann Institute of Science, Rehovot, Israel
- Amir Amedi: Baruch Ivcher School of Psychology, The Baruch Ivcher Institute for Brain, Cognition, and Technology, Reichman University, Herzliya, Israel; The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
4
Quantitative EEG measures in profoundly deaf and normal hearing individuals while performing a vibrotactile temporal discrimination task. Int J Psychophysiol 2021; 166:71-82. PMID: 34023377; DOI: 10.1016/j.ijpsycho.2021.05.007.
Abstract
Challenges in early oral language acquisition in profoundly deaf individuals have an impact on cognitive neurodevelopment. This has led to the exploration of alternative sound perception methods involving training of vibrotactile discrimination of sounds within the language spectrum. In particular, stimulus duration plays an important role in linguistic categorical perception. We comparatively evaluated vibrotactile temporal discrimination of sound and how specific training can modify the underlying electrical brain activity. Fifteen profoundly deaf (PD) and 15 normal-hearing (NH) subjects performed a vibrotactile oddball task with simultaneous EEG recording, before and after a short training period (five one-hour sessions over 2.5-3 weeks). The stimuli consisted of 700 Hz pure tones of different durations (target: long, 500 ms; non-target: short, 250 ms). The sound-wave stimuli were delivered by a small device worn on the right index finger. A similar behavioral training effect was observed in both groups, with significant improvement in sound-duration discrimination. However, quantitative EEG measurements revealed distinct neurophysiological patterns: higher and more diffuse delta-band magnitudes in the PD group, together with a generalized decrement in absolute power in both groups that might reflect a facilitating process associated with learning. Furthermore, training-related changes were found in the beta band in NH subjects. The findings suggest that PD individuals have different cognitive adaptive mechanisms, which are not a mere amplification effect due to greater cortical excitability.
5
Bollini A, Campus C, Esposito D, Gori M. The Magnitude Effect on Tactile Spatial Representation: The Spatial-Tactile Association for Response Code (STARC) Effect. Front Neurosci 2020; 14:557063. PMID: 33132821; PMCID: PMC7550691; DOI: 10.3389/fnins.2020.557063.
Abstract
The human brain uses perceptual information to create a correct representation of the external world. Converging data indicate that the perceptual processing of space and quantities is frequently based on a shared mental magnitude system, in which low and high quantities are represented in the left and right space, respectively. The present study explores how magnitude affects spatial representation in the tactile modality. We investigated these processes using stimulus-response (S-R) compatibility tasks (i.e., sensorimotor tasks that present an association/dissociation between the perception of a stimulus and the required action, generally increasing/decreasing accuracy and decreasing/increasing the subject's reaction times). In our study, participants performed a discrimination task between high- and low-frequency vibrotactile stimuli, regardless of the stimulation's spatial position. When the response code was incompatible with the mental magnitude line (i.e., left button for high-frequency and right button for low-frequency responses), we found that the participants bypassed the spatial congruence, showing a magnitude S-R compatibility effect. We call this phenomenon the Spatial-Tactile Association of Response Codes (STARC) effect. Moreover, we observed that the STARC effect is embodied in an internal frame of reference: the participants' performance reversed between uncrossed- and crossed-hands postures, suggesting that spatial reference frames play a role in the process of expressing mental magnitude, at least in the tactile modality.
Affiliation(s)
- Alice Bollini: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Claudio Campus: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
- Davide Esposito: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy; DIBRIS, Università di Genova, Genoa, Italy
- Monica Gori: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genoa, Italy
6
Feierabend M, Karnath HO, Lewald J. Auditory Space Perception in the Blind: Horizontal Sound Localization in Acoustically Simple and Complex Situations. Perception 2019; 48:1039-1057. PMID: 31462156; DOI: 10.1177/0301006619872062.
Affiliation(s)
- Hans-Otto Karnath: Center of Neurology, Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Germany
- Jörg Lewald: Department of Cognitive Psychology, Faculty of Psychology, Ruhr University Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
7
Ahmad H, Setti W, Campus C, Capris E, Facchini V, Sandini G, Gori M. The Sound of Scotoma: Audio Space Representation Reorganization in Individuals With Macular Degeneration. Front Integr Neurosci 2019; 13:44. PMID: 31481884; PMCID: PMC6710446; DOI: 10.3389/fnint.2019.00044.
Abstract
Blindness is an ideal condition in which to study the role of visual input in the development of spatial representation, as studies have shown how audio space representation reorganizes in blindness. However, how this spatial reorganization works is still unclear. A limitation of studying blindness is that it is a "stable" condition, which does not allow studying the mechanisms that underlie the progress of this reorganization. To overcome this problem, here we study, for the first time, audio spatial reorganization in 18 adults with macular degeneration (MD), for whom the loss of vision due to scotoma is an ongoing progressive process. Our results show that the loss of vision produces immediate changes in the processing of spatial audio signals. In individuals with MD, lateral sounds are "attracted" toward the central scotoma position, resulting in a strong bias in the spatial auditory percept. This result suggests that the reorganization of audio space representation is a fast and plastic process that also occurs later in life, after vision loss.
Affiliation(s)
- Hafsah Ahmad: Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy; Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy; Department of Informatics, Bioengineering, Robotics, and Systems Engineering, University of Genoa, Genoa, Italy
- Walter Setti: Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy; Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy; Department of Informatics, Bioengineering, Robotics, and Systems Engineering, University of Genoa, Genoa, Italy
- Claudio Campus: Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
- Giulio Sandini: Robotics, Brain and Cognitive Sciences, Italian Institute of Technology, Genoa, Italy
- Monica Gori: Unit for Visually Impaired People, Italian Institute of Technology, Genoa, Italy
8
Interactions between egocentric and allocentric spatial coding of sounds revealed by a multisensory learning paradigm. Sci Rep 2019; 9:7892. PMID: 31133688; PMCID: PMC6536515; DOI: 10.1038/s41598-019-44267-3.
Abstract
Although sound position is initially head-centred (egocentric coordinates), our brain can also represent sounds relative to one another (allocentric coordinates). Whether reference frames for spatial hearing are independent or interact has remained largely unexplored. Here we developed a new allocentric spatial-hearing training and tested whether it can improve egocentric sound-localisation performance in normal-hearing adults listening with one ear plugged. Two groups of participants (N = 15 each) performed an egocentric sound-localisation task (pointing to a syllable), in monaural listening, before and after 4 days of multisensory training on triplets of white-noise bursts paired with occasional visual feedback. Critically, one group performed an allocentric task (auditory bisection task), whereas the other processed the same stimuli to perform an egocentric task (pointing to a designated sound of the triplet). Unlike most previous work, we also tested a no-training group (N = 15). Egocentric sound-localisation abilities in the horizontal plane improved for all groups in the space ipsilateral to the ear plug. This unexpected finding highlights the importance of including a no-training group when studying sound-localisation re-learning. Yet, performance changes were qualitatively different in trained compared to untrained participants, providing initial evidence that allocentric and multisensory procedures may prove useful when aiming to promote sound-localisation re-learning.
9
Raveh E, Portnoy S, Friedman J. Myoelectric Prosthesis Users Improve Performance Time and Accuracy Using Vibrotactile Feedback When Visual Feedback Is Disturbed. Arch Phys Med Rehabil 2018; 99:2263-2270. DOI: 10.1016/j.apmr.2018.05.019.
10
Spence C, Lee J, Van der Stoep N. Responding to sounds from unseen locations: crossmodal attentional orienting in response to sounds presented from the rear. Eur J Neurosci 2017; 51:1137-1150. PMID: 28973789; DOI: 10.1111/ejn.13733.
Abstract
To date, most of the research on spatial attention has focused on probing people's responses to stimuli presented in frontal space. That is, few researchers have attempted to assess what happens in the space that is currently unseen (essentially rear space). In a sense, then, 'out of sight' is, very much, 'out of mind'. In this review, we highlight what is presently known about the perception and processing of sensory stimuli (focusing on sounds) whose source is not currently visible. We briefly summarize known differences in the localizability of sounds presented from different locations in 3D space, and discuss the consequences for the crossmodal attentional and multisensory perceptual interactions taking place in various regions of space. The latest research now clearly shows that the kinds of crossmodal interactions that take place in rear space are very often different in kind from those that have been documented in frontal space. Developing a better understanding of how people respond to unseen sound sources in naturalistic environments by integrating findings emerging from multiple fields of research will likely lead to the design of better warning signals in the future. This review highlights the need for neuroscientists interested in spatial attention to spend more time researching what happens (in terms of the covert and overt crossmodal orienting of attention) in rear space.
Affiliation(s)
- Charles Spence: Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Jae Lee: Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, OX1 3UD, UK
- Nathan Van der Stoep: Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
11
Finocchietti S, Cappagli G, Gori M. Auditory Spatial Recalibration in Congenital Blind Individuals. Front Neurosci 2017; 11:76. PMID: 28261053; PMCID: PMC5309234; DOI: 10.3389/fnins.2017.00076.
Abstract
Blind individuals show impairments in auditory spatial skills that require complex spatial representation of the environment. We suggest that this is partially due to the egocentric frame of reference used by blind individuals. Here we investigate the possibility of reducing these auditory spatial impairments with audio-motor training. Our hypothesis is that the association between a motor command and the corresponding movement's sensory feedback can provide an allocentric frame of reference and consequently help blind individuals understand complex spatial relationships. Subjects were required to localize the end point of a moving sound before and after either 2 min of audio-motor training or a complete rest. During the training, subjects were asked to move their hand, and consequently the sound source, to freely explore the space around the setup and the body. Both congenitally blind participants (N = 20) and blindfolded healthy controls (N = 28) took part in the study. Results suggest that the audio-motor training was effective in improving space perception in blind individuals. The improvement was not observed in subjects who did not perform the training. This study demonstrates that it is possible to recalibrate auditory spatial representation in congenitally blind individuals with a short audio-motor training and provides new insights for rehabilitation protocols for blind people.
Affiliation(s)
- Sara Finocchietti: Unit for Visually Impaired People, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Giulia Cappagli: Unit for Visually Impaired People, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Monica Gori: Unit for Visually Impaired People, Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
12
Tonelli A, Gori M, Brayda L. The Influence of Tactile Cognitive Maps on Auditory Space Perception in Sighted Persons. Front Psychol 2016; 7:1683. PMID: 27847488; PMCID: PMC5088781; DOI: 10.3389/fpsyg.2016.01683.
Abstract
We have recently shown that vision is important for improving spatial auditory cognition. In this study, we investigate whether touch is as effective as vision in creating a cognitive map of a soundscape. In particular, we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people – one experimental and one control group – in an auditory space bisection task. In the first group, the bisection task was performed three times: the participants explored the 3D tactile model of the room with their hands and were led along the perimeter of the room between the first and second executions of the space bisection, and were then allowed to remove the blindfold for a few minutes and look at the room between the second and third executions. The control group instead repeated the space bisection task two consecutive times without any environmental exploration in between. Taking the first execution as a baseline, we found an improvement in precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that visual cues added nothing once spatial tactile cues had been internalized. No improvement was found between the first and second executions in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing auditory space task just as visual information does. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound propagation.
Affiliation(s)
- Alessia Tonelli: Unit for Visually Impaired People, Science and Technology for Children and Adults, Istituto Italiano di Tecnologia, Genova, Italy; Robotics, Brain and Cognitive Sciences Department, Istituto Italiano di Tecnologia, Genova, Italy
- Monica Gori: Unit for Visually Impaired People, Science and Technology for Children and Adults, Istituto Italiano di Tecnologia, Genova, Italy
- Luca Brayda: Robotics, Brain and Cognitive Sciences Department, Istituto Italiano di Tecnologia, Genova, Italy
13
Campana G, Maniglia M. Editorial: Improving visual deficits with perceptual learning. Front Psychol 2015; 6:491. PMID: 25954239; PMCID: PMC4404727; DOI: 10.3389/fpsyg.2015.00491.
Affiliation(s)
- Gianluca Campana: Department of General Psychology, University of Padova, Padova, Italy; Human Inspired Technologies Research Centre - HIT, University of Padova, Padova, Italy
- Marcello Maniglia: Centre de Recherche Cerveau et Cognition, Université de Toulouse-UPS, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France