1
Bordeau C, Scalvini F, Migniot C, Dubois J, Ambard M. Cross-modal correspondence enhances elevation localization in visual-to-auditory sensory substitution. Front Psychol 2023; 14:1079998. [PMID: 36777233 PMCID: PMC9909421 DOI: 10.3389/fpsyg.2023.1079998]
Abstract
Introduction: Visual-to-auditory sensory substitution devices are assistive devices for the blind that convert visual images into auditory images (or soundscapes) by mapping visual features to acoustic cues. To convey spatial information with sounds, several sensory substitution devices use a Virtual Acoustic Space (VAS) built from Head-Related Transfer Functions (HRTFs) to synthesize the natural acoustic cues used for sound localization. However, the perception of elevation is known to be inaccurate with generic spatialization, since it relies on notches in the audio spectrum that are specific to each individual. Another method for conveying elevation information is based on the audiovisual cross-modal correspondence between pitch and visual elevation; the main drawback of this second method is that the narrow spectral band of the sounds limits the ability to perceive elevation through HRTFs. Method: In this study we compared the early ability to localize objects with a visual-to-auditory sensory substitution device in which elevation is conveyed either by a purely spatialization-based method (Noise encoding) or by pitch-based methods with different spectral complexities (Monotonic and Harmonic encodings). Thirty-eight blindfolded participants had to localize a virtual target using soundscapes before and after being familiarized with the visual-to-auditory encodings. Results: Participants localized elevation more accurately with the pitch-based encodings than with the purely spatialization-based method. Only slight differences in azimuth localization performance were found between the encodings. Discussion: This study suggests that a pitch-based encoding is intuitive, with a facilitation effect of the cross-modal correspondence when non-individualized sound spatialization is used.
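The pitch-elevation correspondence described in this abstract can be sketched as a mapping from image row to sinusoid frequency, with a harmonic stack widening the spectrum. The row count, frequency range, and partial count below are illustrative assumptions, not the parameters used in the study:

```python
def elevation_to_pitch(row, n_rows=64, f_min=200.0, f_max=8000.0):
    """Map an image row (0 = top) to a frequency on a logarithmic
    scale, so that higher visual elevation yields higher pitch."""
    frac = 1.0 - row / (n_rows - 1)  # top row -> 1.0, bottom row -> 0.0
    return f_min * (f_max / f_min) ** frac

def harmonics(row, n_partials=4, **kw):
    """A Harmonic-style variant could stack integer multiples of the
    fundamental to enrich the spectrum (hypothetical illustration)."""
    f0 = elevation_to_pitch(row, **kw)
    return [f0 * k for k in range(1, n_partials + 1)]
```

A Monotonic-style encoding would use only the fundamental; the harmonic stack trades spectral simplicity for a broader band that HRTF cues can act on.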
Affiliation(s)
- Camille Bordeau
- LEAD-CNRS UMR5022, Université de Bourgogne, Dijon, France. *Correspondence: Camille Bordeau
- Julien Dubois
- ImViA EA 7535, Université de Bourgogne, Dijon, France
- Maxime Ambard
- LEAD-CNRS UMR5022, Université de Bourgogne, Dijon, France
2
Impact of a Vibrotactile Belt on Emotionally Challenging Everyday Situations of the Blind. Sensors 2021; 21:7384. [PMID: 34770689 PMCID: PMC8587958 DOI: 10.3390/s21217384]
Abstract
Spatial orientation and navigation depend primarily on vision. Blind people lack this critical source of information. To facilitate wayfinding and to increase the feeling of safety for these people, the "feelSpace belt" was developed. The belt signals magnetic north as a fixed reference frame via vibrotactile stimulation. This study investigates the effect of the belt on typical orientation and navigation tasks and evaluates the emotional impact. Eleven blind subjects wore the belt daily for seven weeks. Before, during and after the study period, they filled in questionnaires to document their experiences. A small sub-group of the subjects took part in behavioural experiments before and after four weeks of training, i.e., a straight-line walking task to evaluate the belt's effect on keeping a straight heading, an angular rotation task to examine effects on egocentric orientation, and a triangle completion navigation task to test the ability to take shortcuts. The belt reduced subjective discomfort and increased confidence during navigation. Additionally, the participants felt safer wearing the belt in various outdoor situations. Furthermore, the behavioural tasks point towards an intuitive comprehension of the belt. Altogether, the blind participants benefited from the vibrotactile belt as an assistive technology in challenging everyday situations.
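At its core, a north-signaling belt of this kind must repeatedly choose which vibrotactile actuator currently points toward magnetic north, given a compass heading. A minimal sketch with a hypothetical 16-motor layout (the feelSpace belt's actual motor count and placement may differ):

```python
def north_motor(heading_deg, n_motors=16):
    """Return the index of the vibration motor pointing toward magnetic
    north. Motor 0 faces forward; motors are numbered clockwise around
    the waist. heading_deg is the wearer's heading, clockwise from north."""
    bearing = (-heading_deg) % 360.0          # north, relative to the wearer's front
    return round(bearing / (360.0 / n_motors)) % n_motors
```

For example, facing north activates the front motor (index 0), while facing east moves the vibration to the wearer's left side, so north stays a fixed tactile reference frame as the body rotates.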
3
The effects of an object's height and weight on force calibration and kinematics when post-stroke and healthy individuals reach and grasp. Sci Rep 2021; 11:20559. [PMID: 34663848 PMCID: PMC8523696 DOI: 10.1038/s41598-021-00036-9]
Abstract
Impairment in force regulation and motor control impedes the independence of individuals with stroke by limiting their ability to perform daily activities. There is, at present, incomplete information about how individuals with stroke regulate the application of force and control their movement when reaching, grasping, and lifting objects of different weights located at different heights. In this study, we assess force regulation and kinematics when reaching, grasping, and lifting a cup of two different weights (empty and full), located at three different heights, in a total of 46 participants: 30 sub-acute stroke participants and 16 healthy individuals. We found that the height of the reached target affects both force calibration and kinematics, whereas its weight affects only force calibration, when post-stroke and healthy individuals perform a reach-to-grasp task. There was no difference between the two groups in mean and peak force values. The individuals with stroke had slower, jerkier, less efficient, and more variable movements than the control group, and this difference was more pronounced with increasing stroke severity. With increasing severity, post-stroke individuals also demonstrated altered anticipation and preparation for lifting, regardless of the side of the cortical lesion.
4
Abstract
Background: Elderly people with severe finger weakness may need assistive health technology interventions. Finger weakness impedes the elderly in executing activities of daily living such as unbuttoning shirts and opening clothes pegs. While studies have related finger weakness with ageing effects, there appears to be no research that uses an algorithmic problem-solving approach such as the theory of inventive problem-solving (TRIZ) to recommend finger grip assistive technologies that resolve the issue of finger weakness among the elderly. Using TRIZ, this study aims to conceptualise finger grip enhancer designs for elderly people. Methods: Several TRIZ tools such as the cause-and-effect chain (CEC) analysis, engineering contradiction, physical contradiction, and substance-field analysis are used to conceptualise solutions that assist elderly people in their day-to-day pinching activities. Results: Based on the segmentation principle, a finger assistant concept powered by a miniature linear actuator is recommended. Specific product development processes are used to further conceptualise the actuation system. The study concluded that the chosen concept should use a DC motor to actuate fingers through tendon cables triggered by a push start button. Conclusions: Finger pinch degradation worsens the quality of life of the elderly. A finger grip enhancer that assists in day-to-day activities may be an effective option for elderly people, not only for their physical but also their mental well-being in society.
Affiliation(s)
- Dominic Wen How Tan
- Faculty of Engineering and Technology, Multimedia University, Jalan Ayer Keroh Lama, Bukit Beruang, Melaka, 75450, Malaysia
- Poh Kiat Ng
- Faculty of Engineering and Technology, Multimedia University, Jalan Ayer Keroh Lama, Bukit Beruang, Melaka, 75450, Malaysia
- Ervina Efzan Mhd Noor
- Faculty of Engineering and Technology, Multimedia University, Jalan Ayer Keroh Lama, Bukit Beruang, Melaka, 75450, Malaysia
5
Kvansakul J, Hamilton L, Ayton LN, McCarthy C, Petoe MA. Sensory augmentation to aid training with retinal prostheses. J Neural Eng 2020; 17:045001. [PMID: 32554868 DOI: 10.1088/1741-2552/ab9e1d]
Abstract
OBJECTIVE Retinal prosthesis recipients require rehabilitative training to learn the non-intuitive nature of prosthetic 'phosphene vision'. This study investigated whether the addition of auditory cues, using The vOICe sensory substitution device (SSD), could improve functional performance with simulated phosphene vision. APPROACH Forty normally sighted subjects completed two visual tasks under three conditions. The phosphene condition converted the image to simulated phosphenes displayed on a virtual reality headset. The SSD condition provided auditory information via stereo headphones, translating the image into sound. Horizontal information was encoded as stereo timing differences between ears, vertical information as pitch, and pixel intensity as audio intensity. The third condition combined phosphenes and SSD. Tasks comprised light localisation from the Basic Assessment of Light and Motion (BaLM) and the Tumbling-E from the Freiburg Acuity and Contrast Test (FrACT). To examine learning effects, twenty of the forty subjects received SSD training prior to assessment. MAIN RESULTS Combining phosphenes with auditory SSD provided better light localisation accuracy than either phosphenes or SSD alone, suggesting a compound benefit of integrating modalities. Although response times for SSD-only were significantly longer than all other conditions, combined condition response times were as fast as phosphene-only, highlighting that audio-visual integration provided both response time and accuracy benefits. Prior SSD training provided a benefit to localisation accuracy and speed in SSD-only (as expected) and Combined conditions compared to untrained SSD-only. Integration of the two modalities did not improve spatial resolution task performance, with resolution limited to that of the higher resolution modality (SSD). 
SIGNIFICANCE Combining phosphene (visual) and SSD (auditory) modalities was effective even without SSD training and led to an improvement in light localisation accuracy and response times. Spatial resolution performance was dominated by auditory SSD. The results suggest there may be a benefit to including auditory cues when training vision prosthesis recipients.
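The image-to-sound mapping described in this abstract can be illustrated with a minimal mono sketch: columns are scanned left to right in time, rows map to pitch, and pixel brightness to amplitude. The study's SSD additionally encoded horizontal position as stereo timing differences, which this single-channel sketch omits, and all parameter values here are illustrative assumptions:

```python
import numpy as np

def image_to_soundscape(img, dur=1.0, fs=8000, f_min=200.0, f_max=3200.0):
    """Left-to-right sweep: each image column gets one time slot; each
    row contributes a sinusoid whose frequency rises with elevation and
    whose amplitude is the pixel brightness (0..1)."""
    n_rows, n_cols = img.shape
    n = int(dur * fs / n_cols)               # samples per column time slot
    t = np.arange(n) / fs
    # top row (index 0) -> f_max, bottom row -> f_min, log-spaced
    freqs = f_min * (f_max / f_min) ** (1.0 - np.arange(n_rows) / (n_rows - 1))
    slots = [
        sum(img[r, c] * np.sin(2 * np.pi * freqs[r] * t) for r in range(n_rows))
        for c in range(n_cols)
    ]
    return np.concatenate(slots)
```

A bright pixel near the top left of the image thus produces a high-pitched tone at the start of the sweep; a dark column yields silence in its time slot.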
Affiliation(s)
- Jessica Kvansakul
- Bionics Institute, East Melbourne, VIC, Australia; Department of Medical Bionics, University of Melbourne, Parkville, VIC, Australia
6
La Rocca D, Ciuciu P, Engemann DA, van Wassenhove V. Emergence of β and γ networks following multisensory training. Neuroimage 2020; 206:116313. [PMID: 31676416 PMCID: PMC7355235 DOI: 10.1016/j.neuroimage.2019.116313]
Abstract
Our perceptual reality relies on inferences about the causal structure of the world given by multiple sensory inputs. In ecological settings, multisensory events that cohere in time and space benefit inferential processes: hearing and seeing a speaker enhances speech comprehension, and the acoustic changes of flapping wings naturally pace the motion of a flock of birds. Here, we asked how a few minutes of (multi)sensory training could shape cortical interactions in a subsequent unisensory perceptual task. For this, we investigated oscillatory activity and functional connectivity as a function of individuals' sensory history during training. Human participants performed a visual motion coherence discrimination task while being recorded with magnetoencephalography. Three groups of participants performed the same task with visual stimuli only, while listening to acoustic textures temporally comodulated with the strength of visual motion coherence, or with auditory noise uncorrelated with visual motion. The functional connectivity patterns before and after training were contrasted to resting-state networks to assess the variability of common task-relevant networks, and the emergence of new functional interactions as a function of sensory history. One major finding is the emergence of a large-scale synchronization in the high γ (gamma: 60-120 Hz) and β (beta: 15-30 Hz) bands for individuals who underwent comodulated multisensory training. The post-training network involved prefrontal, parietal, and visual cortices. Our results suggest that the integration of evidence and decision-making strategies become more efficient following congruent multisensory training through plasticity in network routing and oscillatory regimes.
Affiliation(s)
- Daria La Rocca
- CEA/DRF/Joliot, Université Paris-Saclay, 91191, Gif-sur-Yvette, France; Université Paris-Saclay, Inria, CEA, Palaiseau, 91120, France
- Philippe Ciuciu
- CEA/DRF/Joliot, Université Paris-Saclay, 91191, Gif-sur-Yvette, France; Université Paris-Saclay, Inria, CEA, Palaiseau, 91120, France
- Denis-Alexander Engemann
- CEA/DRF/Joliot, Université Paris-Saclay, 91191, Gif-sur-Yvette, France; Université Paris-Saclay, Inria, CEA, Palaiseau, 91120, France
- Virginie van Wassenhove
- CEA/DRF/Joliot, Université Paris-Saclay, 91191, Gif-sur-Yvette, France; Cognitive Neuroimaging Unit, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, 91191, Gif-sur-Yvette, France
7
Abstract
Reaching movements are usually initiated by visual events and controlled visually and kinesthetically. Recent studies have focused on the possible benefit of auditory information for localization tasks, and also for movement control. This exploratory study aimed to investigate whether reaching space can be coded purely by auditory information. We therefore analyzed the precision of reaching movements to merely acoustically coded target positions, studying the efficacy of acoustically effect-based instruction and feedback, of additional acoustically performance-based instruction and feedback, and the role of visual movement control. Twenty-four participants executed reaching movements to merely acoustically presented, invisible target positions in three mutually perpendicular planes in front of them. Effector-endpoint trajectories were tracked using inertial sensors, and kinematic data for the three spatial dimensions and the movement velocity were sonified; acoustic instruction and real-time feedback of the movement trajectories and of the target position of the hand were thus provided. The subjects were able to align their reaching movements with the merely acoustically instructed targets. Reaching space can therefore be coded purely acoustically; additional visual movement control did not enhance reaching performance. On the basis of these results, a remarkable benefit of kinematic movement acoustics for the neuromotor rehabilitation of everyday motor skills can be assumed.
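Sonifying endpoint kinematics, as in the study above, amounts to mapping each sampled state of the hand to audio parameters. The particular mapping below (lateral position to stereo pan, height to pitch, speed to loudness) is a hypothetical illustration, not the study's actual scheme, and the ranges are assumed values:

```python
def sonify_state(x, z, speed, x_range=(-0.5, 0.5), z_range=(0.0, 1.0),
                 f_min=220.0, f_max=880.0, v_max=2.0):
    """Map one kinematic sample to (pan, frequency_Hz, gain).
    x: lateral position (m), z: height (m), speed: |velocity| (m/s)."""
    def norm(v, lo, hi):                      # normalize and clamp to [0, 1]
        return min(max((v - lo) / (hi - lo), 0.0), 1.0)
    pan = 2.0 * norm(x, *x_range) - 1.0       # -1 = left, +1 = right
    freq = f_min * (f_max / f_min) ** norm(z, *z_range)   # higher hand -> higher pitch
    gain = norm(speed, 0.0, v_max)            # faster movement -> louder
    return pan, freq, gain
```

Running this per sample over a tracked trajectory yields a continuous audio stream in which both the hand's position and its velocity are audible, which is the kind of real-time feedback the study provided.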
8
Tactile recognition of visual stimuli: Specificity versus generalization of perceptual learning. Vision Res 2018; 152:40-50. [DOI: 10.1016/j.visres.2017.11.007]
9
Eizicovits D, Edan Y, Tabak I, Levy-Tzedek S. Robotic gaming prototype for upper limb exercise: Effects of age and embodiment on user preferences and movement. Restor Neurol Neurosci 2018. [PMID: 29526862 PMCID: PMC5870005 DOI: 10.3233/rnn-170802]
Abstract
Background: Effective human-robot interaction in rehabilitation necessitates an understanding of how it should be tailored to the needs of the human. We report on a robotic system developed as a partner in a 3D everyday task, using a gamified approach. Objectives: (1) to design and test a prototype system, to be ultimately used for upper-limb rehabilitation; (2) to evaluate how age affects the response to such a robotic system; and (3) to identify whether the robot's physical embodiment is an important aspect in motivating users to complete a set of repetitive tasks. Methods: Sixty-two healthy participants, young (<30 years old) and old (>60 years old), played a 3D tic-tac-toe game against an embodied partner (a robotic arm) and a non-embodied one (a computer-controlled lighting system). To win, participants had to place three cups in sequence on a physical 3D grid. Cup picking-and-placing was chosen as a functional task that is often practiced in post-stroke rehabilitation. Movement of the participants was recorded using a Kinect camera. Results: The timing of the participants' movement was primed by the response time of the system: participants moved more slowly when playing with the slower embodied system (p = 0.006). The majority of participants preferred the robot over the computer-controlled system. The robot's slower response time compared to the computer-controlled system affected only the young group's motivation to continue playing. Conclusion: We demonstrated the feasibility of the system to encourage, and track, the performance of repetitive 3D functional movements. Young and old participants alike preferred to interact with the robot rather than with the non-embodied system.
We contribute to the growing knowledge concerning personalized human-robot interaction by (1) demonstrating the priming of the human movement by the robotic movement, an important design feature, and (2) identifying response speed as a design variable whose importance depends on the age of the user.
Affiliation(s)
- Danny Eizicovits
- Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Yael Edan
- Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Iris Tabak
- Department of Education, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Shelly Levy-Tzedek
- Recanati School for Community Health Professions, Department of Physical Therapy, Ben-Gurion University of the Negev, Beer-Sheva, Israel; Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
10
Buchs G, Simon N, Maidenbaum S, Amedi A. Waist-up protection for blind individuals using the EyeCane as a primary and secondary mobility aid. Restor Neurol Neurosci 2018; 35:225-235. [PMID: 28157111 PMCID: PMC5366249 DOI: 10.3233/rnn-160686]
Abstract
Background: One of the most stirring statistics in relation to the mobility of blind individuals is the high rate of upper-body injuries, even when the white cane is used. Objective: We address the rehabilitation-oriented challenge of providing blind people with a reliable tool to avoid waist-up obstacles, one of the impediments to successful mobility with currently available methods (e.g., the white cane). Methods: We used the EyeCane, a device we developed that translates distances from several angles into haptic and auditory cues in an intuitive and unobtrusive manner, serving as both a primary and a secondary mobility aid. We investigated the rehabilitation potential of such a device in facilitating visionless waist-up body protection. Results: After ∼5 minutes of training with the EyeCane, blind participants were able to successfully detect and avoid obstacles at waist height and above, significantly more often than when using the white cane alone. Because avoiding an obstacle required an additional cognitive process after its detection, the avoidance rate was significantly lower than the detection rate. Conclusion: Our work demonstrates that the EyeCane has the potential to extend the sensory world of blind individuals by expanding their currently accessible inputs, and offers them a new practical rehabilitation tool.
Affiliation(s)
- Galit Buchs
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Noa Simon
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Shachar Maidenbaum
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Amir Amedi
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; Sorbonne Universités UPMC Univ Paris 06, Institut de la Vision, Paris, France
11
Kristjánsson Á, Moldoveanu A, Jóhannesson ÓI, Balan O, Spagnol S, Valgeirsdóttir VV, Unnthorsson R. Designing sensory-substitution devices: Principles, pitfalls and potential. Restor Neurol Neurosci 2018; 34:769-787. [PMID: 27567755 PMCID: PMC5044782 DOI: 10.3233/rnn-160647]
Abstract
An exciting possibility for compensating for loss of sensory function is to augment deficient senses by conveying missing information through an intact sense. Here we present an overview of techniques that have been developed for sensory substitution (SS) for the blind, through both touch and audition, with special emphasis on the importance of training for the use of such devices, while highlighting potential pitfalls in their design. One example of a pitfall is how conveying extra information about the environment risks sensory overload. Related to this, the limits of attentional capacity make it important to focus on key information and avoid redundancies. Also, differences in processing characteristics and bandwidth between sensory systems severely constrain the information that can be conveyed. Furthermore, perception is a continuous process and does not involve a snapshot of the environment. Design of sensory substitution devices therefore requires assessment of the nature of spatiotemporal continuity for the different senses. Basic psychophysical and neuroscientific research into representations of the environment and the most effective ways of conveying information should lead to better design of sensory substitution systems. Sensory substitution devices should emphasize usability, and should not interfere with other inter- or intramodal perceptual function. Devices should be task-focused since in many cases it may be impractical to convey too many aspects of the environment. Evidence for multisensory integration in the representation of the environment suggests that researchers should not limit themselves to a single modality in their design. Finally, we recommend active training on devices, especially since it allows for externalization, where proximal sensory stimulation is attributed to a distinct exterior object.
Affiliation(s)
- Árni Kristjánsson
- Laboratory of Visual Perception and Visuomotor control, University of Iceland, Faculty of Psychology, School of Health Sciences, Reykjavik, Iceland
- Alin Moldoveanu
- University Politehnica of Bucharest, Faculty of Automatic Control and Computers, Computer Science and Engineering Department, Bucharest, Romania
- Ómar I Jóhannesson
- Laboratory of Visual Perception and Visuomotor control, University of Iceland, Faculty of Psychology, School of Health Sciences, Reykjavik, Iceland
- Oana Balan
- University Politehnica of Bucharest, Faculty of Automatic Control and Computers, Computer Science and Engineering Department, Bucharest, Romania
- Simone Spagnol
- Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, School of Engineering and Natural Sciences, Reykjavik, Iceland
- Vigdís Vala Valgeirsdóttir
- Laboratory of Visual Perception and Visuomotor control, University of Iceland, Faculty of Psychology, School of Health Sciences, Reykjavik, Iceland
- Rúnar Unnthorsson
- Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, School of Engineering and Natural Sciences, Reykjavik, Iceland
12
Levy-Tzedek S, Arbelle D, Forman D, Zlotnik Y. Improvement in upper-limb UPDRS motor scores following fast-paced arm exercise: A pilot study. Restor Neurol Neurosci 2018; 36:535-545. [PMID: 29889088 PMCID: PMC6087443 DOI: 10.3233/rnn-180818]
Abstract
BACKGROUND The symptoms of patients with Parkinson's disease (PD) have been shown to improve when they perform fast-paced rhythmic cycling movements with their lower limbs. OBJECTIVE Our goal in this pilot experiment was to test the feasibility and the benefits of a short exercise program involving fast-paced rhythmic movements of the upper limb for patients with PD. METHODS We used an experimental procedure that elicits large, fast-paced movements by the participants without the direct instructions to do so by the experimenter. Ten participants with PD (71.0±6.5 years old) performed a 50-min fast-paced rhythmic exercise of the upper limb after withdrawal from PD medication for at least 12 hours. RESULTS Participants improved their kinematic performance, in terms of accuracy and combined speed and amplitude (p < 0.02), as well as their upper-limb MDS-UPDRS motor scores (p = 0.023). CONCLUSIONS The results demonstrate the feasibility of using the described apparatus to perform an exercise session of approximately 50 min with both arms, and give a preliminary indication of the potential benefit of such an exercise program.
Affiliation(s)
- Shelly Levy-Tzedek
- Department of Physical Therapy, Recanati School for Community Health Professions, Ben Gurion University of the Negev, Beer Sheva, Israel
- Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer Sheva, Israel
- Dan Arbelle
- Department of Physical Therapy, Recanati School for Community Health Professions, Ben Gurion University of the Negev, Beer Sheva, Israel
- Dan Forman
- Department of Physical Therapy, Recanati School for Community Health Professions, Ben Gurion University of the Negev, Beer Sheva, Israel
- Yair Zlotnik
- Department of Neurology, Soroka University Medical Center, Beer Sheva, Israel
13
Graulty C, Papaioannou O, Bauer P, Pitts MA, Canseco-Gonzalez E. Hearing Shapes: Event-related Potentials Reveal the Time Course of Auditory-Visual Sensory Substitution. J Cogn Neurosci 2017; 30:498-513. [PMID: 29211649 DOI: 10.1162/jocn_a_01210]
Abstract
In auditory-visual sensory substitution, visual information (e.g., shape) can be extracted through strictly auditory input (e.g., soundscapes). Previous studies have shown that image-to-sound conversions that follow simple rules [such as the Meijer algorithm; Meijer, P. B. L. An experimental system for auditory image representation. Transactions on Biomedical Engineering, 39, 111-121, 1992] are highly intuitive and rapidly learned by both blind and sighted individuals. A number of recent fMRI studies have begun to explore the neuroplastic changes that result from sensory substitution training. However, the time course of cross-sensory information transfer in sensory substitution is largely unexplored and may offer insights into the underlying neural mechanisms. In this study, we recorded ERPs to soundscapes before and after sighted participants were trained with the Meijer algorithm. We compared these posttraining versus pretraining ERP differences with those of a control group who received the same set of 80 auditory/visual stimuli but with arbitrary pairings during training. Our behavioral results confirmed the rapid acquisition of cross-sensory mappings, and the group trained with the Meijer algorithm was able to generalize their learning to novel soundscapes at impressive levels of accuracy. The ERP results revealed an early cross-sensory learning effect (150-210 msec) that was significantly enhanced in the algorithm-trained group compared with the control group as well as a later difference (420-480 msec) that was unique to the algorithm-trained group. These ERP modulations are consistent with previous fMRI results and provide additional insight into the time course of cross-sensory information transfer in sensory substitution.
14
König SU, Schumann F, Keyser J, Goeke C, Krause C, Wache S, Lytochkin A, Ebert M, Brunsch V, Wahn B, Kaspar K, Nagel SK, Meilinger T, Bülthoff H, Wolbers T, Büchel C, König P. Learning New Sensorimotor Contingencies: Effects of Long-Term Use of Sensory Augmentation on the Brain and Conscious Perception. PLoS One 2016; 11:e0166647. [PMID: 27959914 PMCID: PMC5154504 DOI: 10.1371/journal.pone.0166647]
Abstract
Theories of embodied cognition propose that perception is shaped by sensory stimuli and by the actions of the organism. Following sensorimotor contingency theory, the mastery of lawful relations between one's own behavior and the resulting changes in sensory signals, called sensorimotor contingencies, is constitutive of conscious perception. Sensorimotor contingency theory predicts that, after training, knowledge relating to new sensorimotor contingencies develops, leading to changes in the activation of sensorimotor systems and concomitant changes in perception. In the present study, we spell out this hypothesis in detail and investigate whether it is possible to learn new sensorimotor contingencies by sensory augmentation. Specifically, we designed an fMRI-compatible sensory augmentation device, the feelSpace belt, which gives orientation information about the direction of magnetic north via vibrotactile stimulation on the waist of participants. In a longitudinal study, participants trained with this belt for seven weeks in a natural environment. Our EEG results indicate that training with the belt leads to changes in sleep architecture early in the training phase, compatible with the consolidation of procedural learning as well as increased sensorimotor processing and motor programming. The fMRI results suggest that training entails activity in sensory as well as higher motor centers and in brain areas known to be involved in navigation. These neural changes are accompanied by changes in how space and the belt signal are perceived, as well as by increased trust in navigational ability. Thus, our data on physiological processes and subjective experiences are compatible with the hypothesis that new sensorimotor contingencies can be acquired using sensory augmentation.
Affiliation(s)
- Sabine U. König
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Frank Schumann
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France
- Johannes Keyser
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Caspar Goeke
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Carina Krause
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Susan Wache
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Aleksey Lytochkin
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Manuel Ebert
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Vincent Brunsch
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Basil Wahn
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Kai Kaspar
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Department of Psychology, University of Cologne, Cologne, Germany
- Saskia K. Nagel
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Tobias Meilinger
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Thomas Wolbers
- Aging & Cognition Research Group, German Center for Neurodegenerative Diseases (DZNE), Magdeburg, Germany
- Christian Büchel
- NeuroImage Nord, Department of Systems Neuroscience, Hamburg University Hospital Eppendorf, Hamburg, Germany
- Peter König
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
15
Cecchetti L, Kupers R, Ptito M, Pietrini P, Ricciardi E. Are Supramodality and Cross-Modal Plasticity the Yin and Yang of Brain Development? From Blindness to Rehabilitation. Front Syst Neurosci 2016; 10:89. [PMID: 27877116 PMCID: PMC5099160 DOI: 10.3389/fnsys.2016.00089] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3] [Received: 09/30/2016] [Accepted: 10/27/2016] [Indexed: 12/20/2022]
Abstract
Research in blind individuals has long focused primarily on the plastic reorganization that occurs in early visual areas. Only more recently have scientists developed innovative strategies to understand to what extent vision is truly a mandatory prerequisite for the brain's fine morphological architecture to develop and function. As a whole, the studies conducted to date in sighted and congenitally blind individuals have provided ample evidence that several "visual" cortical areas develop independently of visual experience and process information content regardless of the sensory modality through which a particular stimulus is conveyed: a property named supramodality. At the same time, lack of vision leads to a structural and functional reorganization within "visual" brain areas, a phenomenon known as cross-modal plasticity. Cross-modal recruitment of the occipital cortex in visually deprived individuals represents an adaptive compensatory mechanism that mediates processing of non-visual inputs. Supramodality and cross-modal plasticity appear to be the "yin and yang" of brain development: supramodal is what takes place despite the lack of vision, whereas cross-modal is what happens because of the lack of vision. Here we provide a critical overview of the research in this field and discuss the implications that these novel findings have for the development of educative/rehabilitation approaches and sensory substitution devices (SSDs) in sensory-impaired individuals.
Affiliation(s)
- Luca Cecchetti
- Department of Surgical, Medical, Molecular Pathology and Critical Care, University of Pisa, Pisa, Italy; Clinical Psychology Branch, Pisa University Hospital, Pisa, Italy
- Ron Kupers
- BRAINlab, Department of Neuroscience and Pharmacology, Panum Institute, University of Copenhagen, Copenhagen, Denmark; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Maurice Ptito
- Laboratory of Neuropsychiatry, Psychiatric Centre Copenhagen, Copenhagen, Denmark; School of Optometry, Université de Montréal, Montréal, QC, Canada
- Emiliano Ricciardi
- Department of Surgical, Medical, Molecular Pathology and Critical Care, University of Pisa, Pisa, Italy; MOMILab, IMT School for Advanced Studies Lucca, Lucca, Italy
16
Proulx MJ, Gwinnutt J, Dell'Erba S, Levy-Tzedek S, de Sousa AA, Brown DJ. Other ways of seeing: From behavior to neural mechanisms in the online "visual" control of action with sensory substitution. Restor Neurol Neurosci 2016; 34:29-44. [PMID: 26599473 PMCID: PMC4927905 DOI: 10.3233/rnn-150541] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Indexed: 12/12/2022]
Abstract
Vision is the dominant sense for perception-for-action in humans and other higher primates. Advances in sight restoration now utilize the other intact senses, through sensory substitution, to provide information that is normally sensed visually. Sensory substitution devices translate visual information from a sensor, such as a camera or ultrasound device, into a format that the auditory or tactile systems can detect and process, so the visually impaired can see through hearing or touch. Online control of action is essential for many daily tasks such as pointing, grasping and navigating, and adapting to a sensory substitution device successfully requires extensive learning. Here we review the research on sensory substitution for vision restoration in the context of providing the means of online control of action in the blind or blindfolded. The use of sensory substitution devices appears to engage the neural visual system; this suggests the hypothesis that sensory substitution draws on the same underlying mechanisms as unimpaired visual control of action. We review the current state of the art for sensory substitution approaches to object recognition, localization, and navigation, and the potential these approaches have for revealing a metamodal behavioral and neural basis for the online control of action.
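A generic visual-to-auditory translation of the kind this abstract describes can be sketched as a left-to-right sweep in the style of the vOICe: image column maps to time within the sweep, row to pitch, and brightness to loudness. The parameter choices below are illustrative assumptions, not any particular device's specification:

```python
def encode_soundscape(image, sweep_s=1.0, f_lo=500.0, f_hi=5000.0):
    """Translate a grayscale image (list of rows, values 0..1) into sound events.

    Column index -> onset time within one left-to-right sweep,
    row index    -> sine frequency (top row = highest pitch, geometric spacing),
    brightness   -> amplitude. Returns (onset_s, freq_hz, amplitude) tuples.
    """
    rows, cols = len(image), len(image[0])
    events = []
    for c in range(cols):
        onset = sweep_s * c / cols
        for r in range(rows):
            amp = image[r][c]
            if amp > 0:
                # Geometric pitch spacing between f_hi (top) and f_lo (bottom).
                freq = f_hi * (f_lo / f_hi) ** (r / (rows - 1))
                events.append((onset, freq, amp))
    return events

# One bright pixel top-left, one dim pixel bottom-right (3x4 image):
img = [[0.0] * 4 for _ in range(3)]
img[0][0] = 1.0   # early onset, highest pitch
img[2][3] = 0.5   # late onset, lowest pitch
events = encode_soundscape(img)
```

The resulting event list would then be rendered as summed sine tones; the rendering stage is omitted here.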
Affiliation(s)
- Michael J Proulx
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK
- James Gwinnutt
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK
- Sara Dell'Erba
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK
- Shelly Levy-Tzedek
- Cognition, Aging and Rehabilitation Lab, Recanati School for Community Health Professions, Department of Physical Therapy & Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer-Sheva, Israel
- Alexandra A de Sousa
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK; Department of Science, Bath Spa University, Bath, UK
- David J Brown
- Crossmodal Cognition Lab, Department of Psychology, University of Bath, Bath, UK
17
Loria T, de Grosbois J, Tremblay L. Can You Hear That Peak? Utilization of Auditory and Visual Feedback at Peak Limb Velocity. Res Q Exerc Sport 2016; 87:254-261. [PMID: 27463070 DOI: 10.1080/02701367.2016.1196810] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Indexed: 06/06/2023]
Abstract
PURPOSE At rest, the central nervous system combines and integrates multisensory cues to yield an optimal percept. When engaging in action, the relative weighting of sensory modalities has been shown to change. Because the timing of peak velocity is the critical moment in some goal-directed movements (e.g., overarm throwing), the current study sought to test whether visual and auditory cues are optimally integrated at that specific kinematic marker when it is the critical part of the trajectory. METHODS Participants performed an upper-limb movement in which they were required to reach their peak limb velocity when the right index finger intersected a virtual target (i.e., a flinging movement). Brief auditory, visual, or audiovisual feedback (20 ms in duration) was provided to participants at peak limb velocity. Performance was assessed primarily through the resultant position of peak limb velocity and the variability of that position. RESULTS Relative to when no feedback was provided, auditory feedback significantly reduced the resultant endpoint variability of the finger position at peak limb velocity. No such reductions were found for the visual or audiovisual feedback conditions, and providing both auditory and visual cues concurrently also failed to yield the theoretically predicted improvements in endpoint variability. CONCLUSIONS Overall, the central nervous system can make significant use of an auditory cue but may not optimally integrate visual and auditory cues at peak limb velocity, even when peak velocity is the critical part of the trajectory.
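The "optimal percept" referred to in this abstract is standardly modeled as maximum-likelihood cue combination, in which each modality's estimate is weighted by its reliability (inverse variance), and the combined variance is never larger than the best single cue's. A small worked illustration; the variances are made-up numbers, not data from this study:

```python
def integrate_cues(estimates, variances):
    """Maximum-likelihood combination of unimodal estimates.

    Each cue is weighted by its reliability (1/variance); the combined
    variance 1/sum(1/v_i) is at most the smallest unimodal variance.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined_estimate = sum(w * e for w, e in zip(weights, estimates)) / total
    combined_variance = 1.0 / total
    return combined_estimate, combined_variance

# Hypothetical auditory and visual position estimates (cm) at peak velocity:
est, var = integrate_cues([10.0, 12.0], [4.0, 1.0])
# est = (0.25*10 + 1.0*12) / 1.25 = 11.6 ; var = 1 / 1.25 = 0.8
```

The theoretically predicted audiovisual improvement the authors tested is exactly this: the bimodal variance (0.8 here) should fall below the better unimodal variance (1.0).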
18
Levy-Tzedek S, Maidenbaum S, Amedi A, Lackner J. Aging and Sensory Substitution in a Virtual Navigation Task. PLoS One 2016; 11:e0151593. [PMID: 27007812 PMCID: PMC4805187 DOI: 10.1371/journal.pone.0151593] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Received: 09/17/2015] [Accepted: 03/01/2016] [Indexed: 11/21/2022]
Abstract
Virtual environments are becoming ubiquitous and are used in a variety of contexts, from entertainment to training and rehabilitation. Recently, technology for making them more accessible to blind or visually impaired users has been developed, by using sound to represent visual information. The ability of older individuals to interpret these cues has not yet been studied. In this experiment, we studied the effects of age and sensory modality (visual or auditory) on navigation through a virtual maze. We added a layer of complexity by conducting the experiment in a rotating room, in order to test the effect of the spatial bias induced by the rotation on performance. Results from 29 participants showed that with auditory cues, participants took longer to complete the mazes, followed a longer path through them, paused more, and collided with the walls more often than with visual cues. The older group likewise took longer to complete the mazes, paused more, and had more collisions with the walls than the younger group. There was no effect of room rotation on performance, nor were there any significant interactions among age, feedback modality and room rotation. We conclude that there is a decline in performance with age, and that while navigation with auditory cues is possible even at an old age, it presents more challenges than visual navigation.
Affiliation(s)
- S. Levy-Tzedek
- Recanati School for Community Health Professions, Department of Physical Therapy, Ben Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer-Sheva, Israel
- S. Maidenbaum
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- A. Amedi
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel
- Sorbonne Universités UPMC Univ Paris 06, Institut de la Vision, Paris, France
- J. Lackner
- Ashton Graybiel Spatial Orientation Laboratory, Department of Physiology, Brandeis University, Waltham, Massachusetts, United States of America
Collapse
|
19
|
Buchs G, Maidenbaum S, Levy-Tzedek S, Amedi A. Integration and binding in rehabilitative sensory substitution: Increasing resolution using a new Zooming-in approach. Restor Neurol Neurosci 2016; 34:97-105. [PMID: 26518671 PMCID: PMC4927841 DOI: 10.3233/rnn-150592] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Indexed: 11/15/2022]
Abstract
PURPOSE To visually perceive our surroundings we constantly move our eyes and focus on particular details, and then integrate them into a combined whole. Current visual rehabilitation methods, both invasive (e.g., bionic eyes) and non-invasive (e.g., Sensory Substitution Devices, SSDs), down-sample visual stimuli into low-resolution images. Zooming in to sub-parts of the scene could potentially improve detail perception. Can congenitally blind individuals integrate a 'visual' scene when offered this information via a different sensory modality, such as audition? Can they integrate visual information, perceived in parts, into larger percepts despite never having had any visual experience? METHODS We explored these questions using a zooming-in functionality embedded in the EyeMusic visual-to-auditory SSD. Eight blind participants were tasked with identifying cartoon faces by integrating their individual components recognized via the EyeMusic's zooming mechanism. RESULTS After specialized training of just 6-10 hours, blind participants successfully and actively integrated facial features into cartoon identities in 79±18% of the trials, a highly significant result (chance level 10%; rank-sum P < 1.55E-04). CONCLUSIONS These findings show that even users who lack any previous visual experience whatsoever can integrate such visual information at increased resolution. This has potentially important practical visual rehabilitation implications for both invasive and non-invasive methods.
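The zooming idea, re-encoding a cropped sub-window of the scene at the full resolution of the soundscape, can be sketched as follows. This illustrates the principle only and is not the EyeMusic's actual zoom mechanism:

```python
def zoom_crop(image, cx, cy, zoom):
    """Return the sub-window around column cx, row cy that a zoom-in
    would re-encode at full soundscape resolution.

    A zoom factor of z keeps a (rows/z) x (cols/z) window, clamped so the
    window stays inside the image. Illustrative sketch, not a device spec.
    """
    rows, cols = len(image), len(image[0])
    h, w = max(1, rows // zoom), max(1, cols // zoom)
    top = min(max(cy - h // 2, 0), rows - h)
    left = min(max(cx - w // 2, 0), cols - w)
    return [row[left:left + w] for row in image[top:top + h]]

# Example: a 4x4 image, zooming x2 around the centre pixel (2, 2).
img = [[r * 4 + c for c in range(4)] for r in range(4)]
window = zoom_crop(img, cx=2, cy=2, zoom=2)
```

Feeding the cropped window back through the soundscape encoder gives each remaining pixel proportionally more of the auditory bandwidth, which is what raises the effective resolution.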
Affiliation(s)
- Galit Buchs
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Shachar Maidenbaum
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Shelly Levy-Tzedek
- Recanati School for Community Health Professions, Department of Physical Therapy, Ben Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer-Sheva, Israel
- Amir Amedi
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- The Edmond and Lily Safra Center for Brain Research, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Sorbonne Universités UPMC Univ Paris 06, Institut de la Vision, Paris, France
20
Levy-Tzedek S, Riemer D, Amedi A. Color improves "visual" acuity via sound. Front Neurosci 2014; 8:358. [PMID: 25426015 PMCID: PMC4227506 DOI: 10.3389/fnins.2014.00358] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Received: 07/28/2014] [Accepted: 10/17/2014] [Indexed: 11/13/2022]
Abstract
Visual-to-auditory sensory substitution devices (SSDs) convey visual information via sound, with the primary goal of making visual information accessible to blind and visually impaired individuals. We developed the EyeMusic SSD, which transforms shape, location, and color information into musical notes. We tested the “visual” acuity of 23 individuals (13 blind and 10 blindfolded sighted) on the Snellen tumbling-E test, with the EyeMusic. Participants were asked to determine the orientation of the letter “E.” The test was repeated twice: in one test, the letter “E” was drawn with a single color (white), and in the other test, with two colors (red and white). In the latter case, the vertical line in the letter, when upright, was drawn in red, with the three horizontal lines drawn in white. We found no significant differences in performance between the blind and the sighted groups. We found a significant effect of the added color on the “visual” acuity. The highest acuity participants reached in the monochromatic test was 20/800, whereas with the added color, acuity doubled to 20/400. We conclude that color improves “visual” acuity via sound.
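A mapping in the spirit of the EyeMusic description above, row to musical pitch, column to time, color to instrument, can be sketched as follows. The scale, the time step, and the color-to-instrument palette here are illustrative assumptions, not the device's published parameters:

```python
# Row -> note of a five-note (pentatonic) scale, column -> time step,
# color -> instrument. All constants below are illustrative assumptions.
PENTATONIC_MIDI = [81, 79, 76, 74, 72]            # C-major pentatonic, top row highest
INSTRUMENT = {"white": "piano", "red": "trumpet"}  # assumed palette, not the device's

def pixel_to_note(row, col, color, step_s=0.1):
    """Map one image pixel to a note event (onset, MIDI pitch, instrument)."""
    return {"onset_s": col * step_s,
            "midi": PENTATONIC_MIDI[row],
            "instrument": INSTRUMENT[color]}
```

Distinguishing the red vertical bar of the "E" from its white horizontal bars then amounts to hearing a timbre difference on top of the pitch/timing pattern, which is one way to read the acuity gain reported above.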
Affiliation(s)
- Shelly Levy-Tzedek
- Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem, Jerusalem, Israel
- Dar Riemer
- Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel
- Amir Amedi
- Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem, Jerusalem, Israel; The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
21
Lancioni GE, Singh NN, O'Reilly MF, Green VA, Alberti G, Boccasini A, Smaldone A, Oliva D, Bosco A. Automatic feedback to promote safe walking and speech loudness control in persons with multiple disabilities: two single-case studies. Dev Neurorehabil 2014; 17:224-31. [PMID: 24102507 DOI: 10.3109/17518423.2012.749953] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Indexed: 11/13/2022]
Abstract
OBJECTIVE To assess automatic feedback technologies for promoting safe travel and speech loudness control in two men with multiple disabilities. METHOD The men were involved in two single-case studies. In Study I, the technology involved a microprocessor, two photocells, and a verbal feedback device. The man received verbal alerting/feedback when the photocells spotted an obstacle in front of him. In Study II, the technology involved a sound-detecting unit connected to a throat microphone, an airborne microphone, and a vibration device. Vibration occurred when the man's speech loudness exceeded a preset level. RESULTS The man included in Study I succeeded in using the automatic feedback in place of caregivers' alerting/feedback for safe travel. The man in Study II used the automatic feedback to successfully reduce his speech loudness. CONCLUSION Automatic feedback can be highly effective in helping persons with multiple disabilities improve their travel and speech performance.
Affiliation(s)
- Giulio E Lancioni
- Department of Neuroscience and Sense Organs, University of Bari, Bari, Italy
22
Maidenbaum S, Abboud S, Amedi A. Sensory substitution: closing the gap between basic research and widespread practical visual rehabilitation. Neurosci Biobehav Rev 2013; 41:3-15. [PMID: 24275274 DOI: 10.1016/j.neubiorev.2013.11.007] [Citation(s) in RCA: 89] [Impact Index Per Article: 8.1] [Received: 02/09/2013] [Revised: 10/06/2013] [Accepted: 11/08/2013] [Indexed: 11/25/2022]
Abstract
Sensory substitution devices (SSDs) have come a long way since they were first developed for visual rehabilitation. They have produced exciting experimental results and have furthered our understanding of the human brain. Unfortunately, they are still not used for practical visual rehabilitation, and are currently reserved primarily for experiments in controlled settings. Over the past decade, our understanding of the neural mechanisms behind visual restoration has changed as a result of converging evidence, much of which was gathered with SSDs. This evidence suggests that the brain is not a pure sensory machine but rather a highly flexible task machine, i.e., brain regions can maintain or regain their function in vision even with input from other senses. This complements a recent set of more promising behavioral achievements using SSDs, along with new promising technologies and tools. All these changes strongly suggest that the time has come to revive the focus on practical visual rehabilitation with SSDs, and we chart several key steps in this direction, such as training protocols and self-training tools.
Affiliation(s)
- Shachar Maidenbaum
- Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
- Sami Abboud
- Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
- Amir Amedi
- Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel; The Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem, Jerusalem 91220, Israel
23
Maidenbaum S, Levy-Tzedek S, Chebat DR, Amedi A. Increasing accessibility to the blind of virtual environments, using a virtual mobility aid based on the "EyeCane": feasibility study. PLoS One 2013; 8:e72555. [PMID: 23977316 PMCID: PMC3747209 DOI: 10.1371/journal.pone.0072555] [Citation(s) in RCA: 55] [Impact Index Per Article: 5.0] [Received: 03/10/2013] [Accepted: 07/13/2013] [Indexed: 11/30/2022]
Abstract
Virtual worlds and environments are becoming an increasingly central part of our lives, yet they are still far from accessible to the blind. This is especially unfortunate, as such environments hold great potential for uses such as social interaction and online education, and especially for virtually familiarizing a visually impaired user with a real environment, from the comfort and safety of home, before visiting it in the real world. We have implemented a simple algorithm to improve this situation using single-point depth information, enabling the blind to use a virtual cane, modeled on the "EyeCane" electronic travel aid, within any virtual environment with minimal pre-processing. Use of the Virtual-EyeCane enables this experience to potentially be reproduced later in real-world environments with stimuli identical to those from the virtual environment. We show the fast-learned practical use of this algorithm for navigation in simple environments.
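A single-point "virtual cane" of this kind typically maps the distance returned by one depth ray to a stimulation rate: the closer the obstacle, the faster the beeps or vibration pulses. A sketch under assumed constants, not the EyeCane's actual parameters:

```python
def beep_interval(distance_m, max_range_m=5.0,
                  min_interval_s=0.05, max_interval_s=1.0):
    """Map a single-point distance reading to an inter-beep interval.

    Closer obstacles -> shorter intervals (faster beeps); beyond the
    sensing range the cane stays silent (returns None). The linear
    mapping and all constants are illustrative assumptions.
    """
    if distance_m > max_range_m:
        return None
    frac = max(distance_m, 0.0) / max_range_m  # 0 (touching) .. 1 (range limit)
    return min_interval_s + frac * (max_interval_s - min_interval_s)
```

In a virtual environment the distance comes from a ray cast along the cane's pointing direction, which is what lets the same stimuli be reproduced later with a physical depth sensor.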
Affiliation(s)
- Shachar Maidenbaum
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Shelly Levy-Tzedek
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- The Edmond and Lily Safra Center for Brain Research, the Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Daniel-Robert Chebat
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- The Edmond and Lily Safra Center for Brain Research, the Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- Amir Amedi
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- The Edmond and Lily Safra Center for Brain Research, the Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
24
Levy-Tzedek S, Novick I, Arbel R, Abboud S, Maidenbaum S, Vaadia E, Amedi A. Cross-sensory transfer of sensory-motor information: visuomotor learning affects performance on an audiomotor task, using sensory-substitution. Sci Rep 2012; 2:949. [PMID: 23230514 PMCID: PMC3517987 DOI: 10.1038/srep00949] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.0] [Received: 09/12/2012] [Accepted: 11/19/2012] [Indexed: 11/09/2022]
Abstract
Visual-to-auditory sensory-substitution devices allow users to perceive a visual image using sound. Using a motor-learning task, we found that new sensory-motor information was generalized across sensory modalities. We imposed a rotation when participants reached to visual targets, and found that not only seeing, but also hearing the location of targets via a sensory-substitution device resulted in biased movements. When the rotation was removed, aftereffects occurred whether the location of targets was seen or heard. Our findings demonstrate that sensory-motor learning was not sensory-modality-specific. We conclude that novel sensory-motor information can be transferred between sensory modalities.
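The imposed rotation in such reach paradigms is a fixed angular offset applied to the displayed position of the limb or target; adaptation shows up as reaches that counter the offset, and the aftereffect is the persistence of that counter-rotation once the offset is removed. A minimal sketch of the feedback transform (the 90-degree angle below is chosen for clarity, not taken from the study):

```python
import math

def rotate_feedback(x, y, angle_deg):
    """Rotate the displayed cursor position about the start point,
    as in visuomotor-rotation paradigms (counterclockwise positive)."""
    a = math.radians(angle_deg)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# A straight-ahead reach to (0, 10) shown under a 90-degree rotation
# appears at (-10, 0); a fully adapted reach would aim the opposite way.
fx, fy = rotate_feedback(0.0, 10.0, 90.0)
```

Whether the target location was seen or heard through the sensory-substitution device, the same rotated feedback drives the biased movements and aftereffects described above.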
Affiliation(s)
- Shelly Levy-Tzedek
- Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada (IMRIC), Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel