1
Snir A, Cieśla K, Ozdemir G, Vekslar R, Amedi A. Localizing 3D motion through the fingertips: Following in the footsteps of elephants. iScience 2024; 27:109820. [PMID: 38799571; PMCID: PMC11126990; DOI: 10.1016/j.isci.2024.109820]
Abstract
Each sense serves a different specific function in spatial perception, and they all form a joint multisensory spatial representation. For instance, hearing enables localization in the entire 3D external space, while touch traditionally only allows localization of objects on the body (i.e., within the peripersonal space alone). We use an in-house touch-motion algorithm (TMA) to evaluate individuals' capability to understand externalized 3D information through touch, a skill that was not acquired during an individual's development or in evolution. Four experiments demonstrate quick learning and high accuracy in localization of motion using vibrotactile inputs on fingertips and successful audio-tactile integration in background noise. Subjective responses in some participants imply spatial experiences through visualization and perception of tactile "moving" sources beyond reach. We discuss our findings with respect to developing new skills in an adult brain, including combining a newly acquired "sense" with an existing one and computation-based brain organization.
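The touch-motion algorithm (TMA) itself is in-house and not specified in the abstract. Purely as an illustration of the general idea behind such devices (distributing a moving source's position across several vibrotactile actuators), here is a minimal sketch; the function name, the four-finger layout, and the triangular panning rule are all our own assumptions, not the authors' algorithm:

```python
def fingertip_amplitudes(azimuth_deg, n_fingers=4):
    """Illustrative amplitude panning: spread a source azimuth
    (-90..90 degrees) across n fingertip vibrators.
    NOT the authors' TMA; a hypothetical linear-panning sketch."""
    # Map azimuth onto a continuous position in [0, n_fingers - 1].
    pos = (azimuth_deg + 90.0) / 180.0 * (n_fingers - 1)
    amps = []
    for i in range(n_fingers):
        # Triangular panning: amplitude falls off linearly with the
        # distance between the source position and each fingertip.
        amps.append(max(0.0, 1.0 - abs(pos - i)))
    total = sum(amps) or 1.0
    return [a / total for a in amps]  # normalise so amplitudes sum to 1
```

With this rule, a source at the far left drives only the first fingertip, while an intermediate azimuth splits energy between the two nearest fingertips.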
Affiliation(s)
- Adi Snir
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Katarzyna Cieśla
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- World Hearing Centre, Institute of Physiology and Pathology of Hearing, Mokra 17, 05-830 Kajetany, Nadarzyn, Poland
- Gizem Ozdemir
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Rotem Vekslar
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
- Amir Amedi
- The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, HaUniversita 8, Herzliya 461010, Israel
2
Abstract
Across the millennia, and across a range of disciplines, there has been a widespread desire to connect, or translate between, the senses in a manner that is meaningful, rather than arbitrary. Early examples were often inspired by the vivid, yet mostly idiosyncratic, crossmodal matches expressed by synaesthetes, often exploited for aesthetic purposes by writers, artists, and composers. A separate approach comes from those academic commentators who have attempted to translate between structurally similar dimensions of perceptual experience (such as pitch and colour). However, neither approach has succeeded in delivering consensually agreed crossmodal matches. As such, an alternative approach to sensory translation is needed. In this narrative historical review, focusing on the translation between audition and vision, we attempt to shed light on the topic by addressing the following three questions: (1) How is the topic of sensory translation related to synaesthesia, multisensory integration, and crossmodal associations? (2) Are there common processing mechanisms across the senses that can help to guarantee the success of sensory translation, or, rather, is mapping among the senses mediated by allegedly universal (e.g., amodal) stimulus dimensions? (3) Is the term 'translation' in the context of cross-sensory mappings used metaphorically or literally? Given the general mechanisms and concepts discussed throughout the review, the answers we come to regarding the nature of audio-visual translation are likely to apply to the translation between other perhaps less-frequently studied modality pairings as well.
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, University of Oxford, Oxford, UK.
- Department of Experimental Psychology, New Radcliffe House, University of Oxford, Oxford, OX2 6BW, UK.
- Nicola Di Stefano
- Institute of Cognitive Sciences and Technologies, National Research Council of Italy (CNR), Rome, Italy
3
Hamilton-Fletcher G, Liu M, Sheng D, Feng C, Hudson TE, Rizzo JR, Chan KC. Accuracy and Usability of Smartphone-Based Distance Estimation Approaches for Visual Assistive Technology Development. IEEE Open Journal of Engineering in Medicine and Biology 2024; 5:54-58. [PMID: 38487094; PMCID: PMC10939328; DOI: 10.1109/ojemb.2024.3358562]
Abstract
Goal: Distance information is highly requested in assistive smartphone apps by people who are blind or have low vision (PBLV). However, current techniques have not been systematically evaluated for accuracy and usability. Methods: We tested five smartphone-based distance-estimation approaches in the image center and periphery at 1-3 meters, including machine learning (CoreML), infrared grid distortion (IR_self), light detection and ranging (LiDAR_back), and augmented reality room-tracking on the front (ARKit_self) and back-facing cameras (ARKit_back). Results: For accuracy in the image center, all approaches had <±2.5 cm average error, except CoreML, which had ±5.2-6.2 cm average error at 2-3 meters. In the periphery, all approaches were less accurate, with CoreML and IR_self having the highest average errors at ±41 cm and ±32 cm, respectively. For usability, CoreML fared favorably, with the lowest central processing unit usage, second-lowest battery usage, widest field of view, and no specialized sensor requirements. Conclusions: We provide key information that can help in designing reliable smartphone-based visual assistive technologies to enhance the functionality of PBLV.
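To make the accuracy metric concrete, the average error reported above can be read as the mean absolute difference between estimated and true distances. A minimal sketch (the readings below are invented for illustration, not the paper's data):

```python
def mean_abs_error_cm(estimates_m, ground_truth_m):
    """Mean absolute error, in centimetres, between estimated and
    true distances (both given in metres)."""
    assert len(estimates_m) == len(ground_truth_m)
    errs = [abs(e - g) * 100.0 for e, g in zip(estimates_m, ground_truth_m)]
    return sum(errs) / len(errs)

# Hypothetical readings at a true distance of 2 m:
readings = [1.98, 2.03, 2.05, 1.96]
# mean_abs_error_cm(readings, [2.0] * 4) is ~3.5 cm
```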
Affiliation(s)
- Giles Hamilton-Fletcher
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY 10017, USA
- Department of Rehabilitative Medicine, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY 10017, USA
- Mingxin Liu
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY 10017, USA
- Diwei Sheng
- Department of Civil and Urban Engineering & Department of Mechanical and Aerospace Engineering, New York University Tandon School of Engineering, Brooklyn, NY 11201, USA
- Chen Feng
- Department of Civil and Urban Engineering & Department of Mechanical and Aerospace Engineering, New York University Tandon School of Engineering, Brooklyn, NY 11201, USA
- Todd E. Hudson
- Department of Rehabilitative Medicine, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY 10017, USA
- John-Ross Rizzo
- Department of Rehabilitative Medicine, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY 10017, USA
- Department of Biomedical Engineering, Tandon School of Engineering, New York University, New York, NY 11201, USA
- Kevin C. Chan
- Department of Ophthalmology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY 10017, USA
- Department of Biomedical Engineering, Tandon School of Engineering, New York University, New York, NY 11201, USA
- Department of Radiology, NYU Grossman School of Medicine, NYU Langone Health, New York University, New York, NY 10017, USA
4
Pinardi M, Di Stefano N, Di Pino G, Spence C. Exploring crossmodal correspondences for future research in human movement augmentation. Front Psychol 2023; 14:1190103. [PMID: 37397340; PMCID: PMC10308310; DOI: 10.3389/fpsyg.2023.1190103]
Abstract
"Crossmodal correspondences" are the consistent mappings between perceptual dimensions or stimuli from different sensory domains, which have been widely observed in the general population and investigated by experimental psychologists in recent years. At the same time, the emerging field of human movement augmentation (i.e., the enhancement of an individual's motor abilities by means of artificial devices) has been struggling with the question of how to relay supplementary information concerning the state of the artificial device and its interaction with the environment to the user, which may help the latter to control the device more effectively. To date, this challenge has not been explicitly addressed by capitalizing on our emerging knowledge concerning crossmodal correspondences, despite these being tightly related to multisensory integration. In this perspective paper, we introduce some of the latest research findings on the crossmodal correspondences and their potential role in human augmentation. We then consider three ways in which the former might impact the latter, and the feasibility of this process. First, crossmodal correspondences, given the documented effect on attentional processing, might facilitate the integration of device status information (e.g., concerning position) coming from different sensory modalities (e.g., haptic and visual), thus increasing their usefulness for motor control and embodiment. Second, by capitalizing on their widespread and seemingly spontaneous nature, crossmodal correspondences might be exploited to reduce the cognitive burden caused by additional sensory inputs and the time required for the human brain to adapt the representation of the body to the presence of the artificial device. Third, to accomplish the first two points, the benefits of crossmodal correspondences should be maintained even after sensory substitution, a strategy commonly used when implementing supplementary feedback.
Affiliation(s)
- Mattia Pinardi
- NeXT Lab, Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Nicola Di Stefano
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Giovanni Di Pino
- NeXT Lab, Neurophysiology and Neuroengineering of Human-Technology Interaction Research Unit, Università Campus Bio-Medico di Roma, Rome, Italy
- Charles Spence
- Crossmodal Research Laboratory, University of Oxford, Oxford, United Kingdom
5
Samara M, Deriche M, Al-Sadah J, Osais Y. Design and Implementation of a Real-Time Color Recognition System for the Visually Impaired. Arabian Journal for Science and Engineering 2022. [DOI: 10.1007/s13369-022-07506-w]
6
Spence C, Di Stefano N. Coloured hearing, colour music, colour organs, and the search for perceptually meaningful correspondences between colour and sound. i-Perception 2022; 13:20416695221092802. [PMID: 35572076; PMCID: PMC9099070; DOI: 10.1177/20416695221092802]
Abstract
There has long been interest in the nature of the relationship(s) between hue and pitch or, in other words, between colour and musical/pure tones, stretching back at least as far as Newton, Goethe, Helmholtz, and beyond. In this narrative historical review, we take a closer look at the motivations that have lain behind the various assertions made in the literature concerning the analogies, and possible perceptual similarities, between colour and sound. During the last century, a number of experimental psychologists have also investigated the nature of the correspondence between these two primary dimensions of perceptual experience. The multitude of different crossmodal mappings that have been put forward over the centuries is summarized, and a distinction is drawn between physical/structural and psychological correspondences, with the latter further sub-divided into perceptual and affective categories. Interest in physical correspondences has typically been motivated by the structural similarities (analogous mappings) between the organization of perceptible dimensions of auditory and visual experience. Emphasis has been placed both on the similarity in the number of basic categories into which pitch and colour can be arranged and on the fact that both can be conceptualized as circular dimensions. A distinction is drawn between those commentators who have argued for a dimensional alignment of pitch and hue (based on a structural mapping) and those who appear to have been motivated instead by specific correspondences between particular pairs of auditory and visual stimuli (often, as we will see, based on the idiosyncratic correspondences reported by synaesthetes). Ultimately, though, the emotional-mediation account currently appears to provide the most parsimonious explanation for whatever affinity most people experience between musical sounds and colour.
7
Predicting the crossmodal correspondences of odors using an electronic nose. Heliyon 2022; 8:e09284. [PMID: 35497032; PMCID: PMC9043411; DOI: 10.1016/j.heliyon.2022.e09284]
8
Colorophone 2.0: A Wearable Color Sonification Device Generating Live Stereo-Soundscapes – Design, Implementation, and Usability Audit. Sensors 2021; 21:7351. [PMID: 34770658; PMCID: PMC8587929; DOI: 10.3390/s21217351]
Abstract
The successful development of a system realizing color sonification would enable auditory representation of the visual environment. The primary beneficiaries of such a system would be people who cannot directly access visual information—the visually impaired community. Despite the plethora of sensory substitution devices, developing systems that provide intuitive color sonification remains a challenge. This paper presents design considerations, development, and the usability audit of a sensory substitution device that converts spatial color information into soundscapes. The implemented wearable system uses a dedicated color space and continuously generates natural, spatialized sounds based on the information acquired from a camera. We developed two head-mounted prototype devices and two graphical user interface (GUI) versions. The first GUI is dedicated to researchers, and the second has been designed to be easily accessible for visually impaired persons. Finally, we ran fundamental usability tests to evaluate the new spatial color sonification algorithm and to compare the two prototypes. Furthermore, we propose recommendations for the development of the next iteration of the system.
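The abstract does not detail the dedicated color space or the sonification algorithm. As a generic illustration of color sonification (not the Colorophone mapping), one might map hue to pitch and lightness to loudness; every choice below, including the two-octave range and the A3 base tone, is a hypothetical assumption:

```python
import colorsys  # stdlib RGB <-> HLS conversion

def colour_to_tone(r, g, b):
    """Map an RGB colour (floats in 0-1) to an illustrative pure-tone
    frequency and loudness. A hypothetical sketch, NOT the Colorophone
    algorithm: hue selects pitch, lightness selects loudness."""
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    freq_hz = 220.0 * (2.0 ** (h * 2.0))  # hue spans two octaves from A3
    loudness = l                          # lighter colours sound louder
    return freq_hz, loudness
```

For example, pure red (hue 0) lands on the 220 Hz base tone, while greener hues rise in pitch.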
9
Abstract
The development of assistive technologies is improving the independent access of blind and visually impaired people to visual artworks through non-visual channels. Current single-modality tactile and auditory approaches to communicating color content must compromise between conveying a broad color palette and ease of learning, and they suffer from limited expressiveness. In this work, we propose a multi-sensory color code system that uses sound and scent to represent colors. Melodies express each color’s hue, and scents express the saturated, light, and dark color dimensions for each hue. In collaboration with eighteen participants, we evaluated the color identification rate achieved when using the multi-sensory approach. Seven (39%) of the participants improved their identification rate, five (28%) remained the same, and six (33%) performed worse when compared to an audio-only color code alternative. The participants then evaluated a color content exploration prototype that uses the proposed color code, comparing it with a tactile graphic equivalent using the System Usability Scale. For a visual artwork color exploration task, the prototype integrating the multi-sensory color code received a score of 78.61, while the tactile graphics equivalent received 61.53. User feedback indicates that the multi-sensory color code system improved the convenience and confidence of the participants.
10
Buchs G, Haimler B, Kerem M, Maidenbaum S, Braun L, Amedi A. A self-training program for sensory substitution devices. PLoS One 2021; 16:e0250281. [PMID: 33905446; PMCID: PMC8078811; DOI: 10.1371/journal.pone.0250281]
Abstract
Sensory Substitution Devices (SSDs) convey visual information through audition or touch, targeting blind and visually impaired individuals. One bottleneck to the adoption of SSDs in everyday life by blind users is the constant dependency on sighted instructors throughout the learning process. Here, we present a proof-of-concept for the efficacy of an online self-training program developed for learning the basics of the EyeMusic visual-to-auditory SSD, tested on sighted blindfolded participants. Additionally, aiming to identify the best training strategy to be later re-adapted for the blind, we compared multisensory vs. unisensory as well as perceptual vs. descriptive feedback approaches. To these aims, sighted participants performed identical SSD-stimuli identification tests before and after ~75 minutes of self-training on the EyeMusic algorithm. Participants were divided into five groups differing by the feedback delivered during training: auditory-descriptive, audio-visual textual description, audio-visual perceptual (simultaneous and interleaved), and a control group that received no training. At baseline, before any EyeMusic training, participants' identification of SSD stimuli was significantly above chance, highlighting the algorithm's intuitiveness. Furthermore, self-training led to a significant improvement in accuracy between pre- and post-training tests in each of the four feedback groups versus control, though no significant difference emerged among those groups. Nonetheless, significant correlations between individual post-training success rates and various learning measures acquired during training suggest a trend for an advantage of multisensory over unisensory feedback strategies, while no trend emerged for perceptual vs. descriptive strategies. The success at baseline strengthens the conclusion that cross-modal correspondences facilitate learning, given that SSD algorithms are based on such correspondences.
Additionally, and crucially, the results highlight the feasibility of self-training for the first stages of SSD learning, and suggest that for these initial stages, unisensory training, easily implemented also for blind and visually impaired individuals, may suffice. Together, these findings will potentially boost the use of SSDs for rehabilitation.
Affiliation(s)
- Galit Buchs
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Department of Cognitive Science, Faculty of Humanities, Hebrew University of Jerusalem, Jerusalem, Israel
- * E-mail: (AA); (GB)
- Benedetta Haimler
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Center of Advanced Technologies in Rehabilitation (CATR), The Chaim Sheba Medical Center, Ramat Gan, Israel
- Menachem Kerem
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Shachar Maidenbaum
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Department of Biomedical Engineering, Ben Gurion University, Beersheba, Israel
- Liraz Braun
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- Hebrew University of Jerusalem, Jerusalem, Israel
- Amir Amedi
- The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel
- * E-mail: (AA); (GB)
11
Zhou T, Wu Y, Meng Q, Kang J. Influence of the Acoustic Environment in Hospital Wards on Patient Physiological and Psychological Indices. Front Psychol 2020; 11:1600. [PMID: 32848994; PMCID: PMC7396688; DOI: 10.3389/fpsyg.2020.01600]
Abstract
Patients in general wards are often exposed to excessive levels of noise and activity, and high noise levels have been associated with depression and anxiety. Previous studies have found that an appropriate acoustic environment is beneficial to the patient's therapeutic and treatment process; however, the soundscape is rarely intentionally designed or operated to improve patient recovery, especially psychological rehabilitation. To obtain the most accurate, and least variable, estimate of acoustic environmental stimuli, virtual reality (VR) technology can be used to keep all other environmental factors stable and uniform, reducing their confounding stimulation. Therefore, this study aims to discuss the influence of the acoustic environment on patients' physiological/psychological indicators, and the mechanism of its effect on recovery, using VR technology. A digital three-dimensional (3D) model of a hospital room was constructed, and experimental subjects wore VR glasses to visualize a realistic ward scene. Four typical sound categories were selected to analyze the effect of the acoustic environment on recovery; physiological indicators were monitored, and psychological factors were subjectively evaluated. The results show that music plays an important role in reducing stress, aiding patients' physiological (skin conductance levels) and psychological stress recovery. Furthermore, mechanical and anthropogenic sounds exert negative effects on patients' stress recovery, although this effect is limited to psychological stress indicators. The interaction effects of demographic characteristics and the acoustic environment are not significant, and future studies could consider the socio-economic characteristics of patients.
Based on these findings, we provide evidence that indicates that a hospital's acoustic environment is an important influencing factor on the stress recovery of patients and can serve as a reference for healthcare architects and policy makers.
Affiliation(s)
- Tianfu Zhou
- Department of Architecture, Shanghai Academy of Fine Arts, Shanghai University, Shanghai, China
- Yue Wu
- Key Laboratory of Cold Region Urban and Rural Human Settlement Environment Science and Technology, Ministry of Industry and Information Technology, School of Architecture, Harbin Institute of Technology, Harbin, China
- Qi Meng
- Key Laboratory of Cold Region Urban and Rural Human Settlement Environment Science and Technology, Ministry of Industry and Information Technology, School of Architecture, Harbin Institute of Technology, Harbin, China
- Jian Kang
- UCL Institute for Environmental Design and Engineering, The Bartlett, University College London, London, United Kingdom
12
Lloyd-Esenkaya T, Lloyd-Esenkaya V, O'Neill E, Proulx MJ. Multisensory inclusive design with sensory substitution. Cognitive Research: Principles and Implications 2020; 5:37. [PMID: 32770416; PMCID: PMC7415050; DOI: 10.1186/s41235-020-00240-7]
Abstract
Sensory substitution techniques are perceptual and cognitive phenomena used to represent one sensory form with an alternative. Current applications of sensory substitution techniques are typically focused on the development of assistive technologies whereby visually impaired users can acquire visual information via auditory and tactile cross-modal feedback. But despite their evident success in scientific research and furthering theory development in cognition, sensory substitution techniques have not yet gained widespread adoption within sensory-impaired populations. Here we argue that shifting the focus from assistive to mainstream applications may resolve some of the current issues regarding the use of sensory substitution devices to improve outcomes for those with disabilities. This article provides a tutorial guide on how to use research into multisensory processing and sensory substitution techniques from the cognitive sciences to design new inclusive cross-modal displays. A greater focus on developing inclusive mainstream applications could lead to innovative technologies that could be enjoyed by every person.
Affiliation(s)
- Tayfun Lloyd-Esenkaya
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK
- Department of Computer Science, University of Bath, Bath, UK
- Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, UK
- Michael J Proulx
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK
- Department of Psychology, University of Bath, Bath, UK
13
Richardson M, Thar J, Alvarez J, Borchers J, Ward J, Hamilton-Fletcher G. How Much Spatial Information Is Lost in the Sensory Substitution Process? Comparing Visual, Tactile, and Auditory Approaches. Perception 2019; 48:1079-1103. [PMID: 31547778; DOI: 10.1177/0301006619873194]
Abstract
Sensory substitution devices (SSDs) can convey visuospatial information through spatialised auditory or tactile stimulation using wearable technology. However, the level of information loss associated with this transformation is unknown. In this study, novice users discriminated the location of two objects at 1.2 m using devices that transformed a 16 × 8-depth map into spatially distributed patterns of light, sound, or touch on the abdomen. Results showed that through active sensing, participants could discriminate the vertical position of objects to a visual angle of 1°, 14°, and 21°, and their distance to 2 cm, 8 cm, and 29 cm using these visual, auditory, and haptic SSDs, respectively. Visual SSDs significantly outperformed auditory and tactile SSDs on vertical localisation, whereas for depth perception, all devices significantly differed from one another (visual > auditory > haptic). Our findings highlight the high level of acuity possible for SSDs even with low spatial resolutions (e.g., 16 × 8) and quantify the level of information loss attributable to this transformation for the SSD user. Finally, we discuss ways of closing this “modality gap” found in SSDs and conclude that this process is best benchmarked against performance with SSDs that return to their primary modality (e.g., visuospatial into visual).
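The 16 × 8 depth map underlying all three displays can be produced by block-wise pooling of a dense depth image. A minimal sketch, assuming minimum-depth (nearest obstacle) pooling, which the study does not specify:

```python
def to_grid(depth, cols=16, rows=8):
    """Downsample a dense depth map (list of lists, in metres) to a
    cols x rows grid by taking the nearest (minimum) depth per cell.
    Minimum pooling is a plausible choice for obstacle display, but
    it is our assumption, not the study's stated method."""
    h, w = len(depth), len(depth[0])
    bh, bw = h // rows, w // cols  # block size per grid cell
    grid = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Gather every depth sample covered by this grid cell.
            block = [depth[y][x]
                     for y in range(r * bh, (r + 1) * bh)
                     for x in range(c * bw, (c + 1) * bw)]
            row.append(min(block))
        grid.append(row)
    return grid
```

Each of the 128 resulting cells can then drive one light, tone, or vibrator, which is the spatial resolution the acuity figures above are benchmarked against.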
Affiliation(s)
- Jan Thar
- Media Computing Group, RWTH Aachen University, Germany
- James Alvarez
- Department of Psychology, University of Sussex, Brighton, UK
- Jan Borchers
- Media Computing Group, RWTH Aachen University, Germany
- Jamie Ward
- Department of Psychology, University of Sussex, Brighton, UK
- Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
- Giles Hamilton-Fletcher
- Department of Psychology, University of Sussex, Brighton, UK
- Neuroimaging and Visual Science Laboratory, New York University Langone Health, NY, USA
14
What makes a shape “baba”? The shape features prioritized in sound–shape correspondence change with development. J Exp Child Psychol 2019; 179:73-89. [DOI: 10.1016/j.jecp.2018.10.005]
15
Cieśla K, Wolak T, Lorens A, Heimler B, Skarżyński H, Amedi A. Immediate improvement of speech-in-noise perception through multisensory stimulation via an auditory to tactile sensory substitution. Restor Neurol Neurosci 2019; 37:155-166. [PMID: 31006700; PMCID: PMC6598101; DOI: 10.3233/rnn-190898]
Abstract
BACKGROUND: Hearing loss is becoming a real social and health problem; its prevalence in the elderly has reached epidemic proportions, and the risk of developing hearing loss is also growing among younger people. If left untreated, hearing loss can contribute to the development of neurodegenerative diseases, including dementia. Despite recent advancements in hearing aid (HA) and cochlear implant (CI) technologies, hearing-impaired users still encounter significant practical and social challenges, with or without aids. In particular, they all struggle with understanding speech in challenging acoustic environments, especially in the presence of a competing speaker. OBJECTIVES: In the current proof-of-concept study we tested whether multisensory stimulation, pairing audition and a minimal-size touch device, would improve intelligibility of speech in noise. METHODS: To this aim we developed an audio-to-tactile sensory substitution device (SSD) transforming low-frequency speech signals into tactile vibrations delivered to two fingertips. Based on the inverse effectiveness law, i.e., that multisensory enhancement is strongest when the signal-to-noise ratio between senses is lowest, we embedded non-native-language stimuli in speech-like noise and paired them with a low-frequency input conveyed through touch. RESULTS: We found an immediate and robust improvement in speech recognition (i.e., in the signal-to-noise ratio) in the multisensory condition without any training, at the group level as well as in every participant. The reported group-level improvement of 6 dB was substantial, considering that an increase of 10 dB represents a doubling of the perceived loudness. CONCLUSIONS: These results are especially relevant when compared with previous SSD studies showing behavioral effects only after demanding cognitive training. We discuss the implications of our results for the development of SSDs and of specific rehabilitation programs for the hearing impaired, whether or not they use HAs or CIs.
We also discuss the potential application of such a set-up for sense augmentation, such as when learning a new language.
Affiliation(s)
- Katarzyna Cieśla
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
| | - Tomasz Wolak
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
| | - Artur Lorens
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
| | - Benedetta Heimler
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
| | - Henryk Skarżyński
- Institute of Physiology and Pathology of Hearing, World Hearing Center, Warsaw, Poland
| | - Amir Amedi
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada, Faculty of Medicine, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel
- The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
| |
|
16
|
Hamilton-Fletcher G, Witzel C, Reby D, Ward J. Sound Properties Associated With Equiluminant Colours. Multisens Res 2017; 30:337-362. [DOI: 10.1163/22134808-00002567] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Received: 09/30/2016] [Accepted: 03/27/2017] [Indexed: 11/19/2022]
Abstract
There is a widespread tendency to associate certain properties of sound with those of colour (e.g., higher pitches with lighter colours). Yet it is an open question how sound influences chroma or hue when properly controlling for lightness. To examine this, we asked participants to adjust physically equiluminant colours until they ‘went best’ with certain sounds. For pure tones, complex sine waves, and vocal timbres, increases in frequency were associated with increases in chroma. Increasing the loudness of pure tones also increased chroma. Hue associations varied depending on the type of stimuli. In stimuli that involved only limited bands of frequencies (pure tones, vocal timbres), frequency correlated with hue, such that low frequencies gave blue hues and progressed to yellow hues at 800 Hz. Increasing the loudness of a pure tone was also associated with a shift from blue to yellow. However, for complex sounds that share the same bandwidth of frequencies (100–3200 Hz) but vary in terms of which frequencies have the most power, all stimuli were associated with yellow hues. This suggests that the presence of high frequencies (above 800 Hz) consistently yields yellow hues. Overall, we conclude that while pitch–chroma associations appear to flexibly re-apply themselves across a variety of contexts, frequencies above 800 Hz appear to produce yellow hues irrespective of context. These findings reveal new sound–colour correspondences previously obscured by not controlling for lightness. Findings are discussed in relation to understanding the underlying rules of cross-modal correspondences, synaesthesia, and optimising the sensory substitution of visual information through sound.
Affiliation(s)
- Giles Hamilton-Fletcher
- School of Psychology, University of Sussex, Brighton, UK
- Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
| | - Christoph Witzel
- Allgemeine Psychologie, Justus-Liebig-Universität Gießen, Gießen, Germany
| | - David Reby
- School of Psychology, University of Sussex, Brighton, UK
| | - Jamie Ward
- School of Psychology, University of Sussex, Brighton, UK
- Sackler Centre for Consciousness Science, University of Sussex, Brighton, UK
| |
|
17
|
Improving training for sensory augmentation using the science of expertise. Neurosci Biobehav Rev 2016; 68:234-244. [PMID: 27264831 DOI: 10.1016/j.neubiorev.2016.05.026] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Received: 06/29/2015] [Revised: 05/04/2016] [Accepted: 05/23/2016] [Indexed: 11/20/2022]
Abstract
Sensory substitution and augmentation devices (SSADs) allow users to perceive information about their environment that is usually beyond their sensory capabilities. Despite an extensive history, SSADs are arguably not used to their fullest, both as assistive technologies for people with sensory impairment and as research tools in the psychology and neuroscience of sensory perception. Studies of the non-use of other assistive technologies suggest that one factor is the balance of benefits gained against costs incurred. We argue that improving the learning experience would improve this balance, suggest three ways in which it can be improved by leveraging existing cognitive-science findings on expertise and skill development, and acknowledge limitations and relevant concerns. We encourage the systematic evaluation of learning programs, and suggest that a more effective learning process for SSADs could reduce the barrier to uptake and allow users to reach higher levels of overall capability.
|