1
Akar S, Beaucousin V, Velin L, Lenay C, Deschamps L, Roy V. Visual-to-auditory sensory substitution with passive movements in a double participant setup. Q J Exp Psychol (Hove) 2025:17470218251334990. [PMID: 40205727] [DOI: 10.1177/17470218251334990]
Abstract
In the field of sensory substitution, research has highlighted the role of participants' actions on the sensors of sensory substitution devices. These observations are in line with the conception of perception as a dynamic process in which action plays an essential role. However, a debate remains between several conceptions. According to the ecological psychology approach, action may correspond to voluntary movements, but also to passive movements that expose us to invariants and enable perception. For the enactive cognitive science approach, action corresponds mainly to voluntary movements whose aim is to test sensorimotor contingencies and thereby give rise to perception. To contribute to this debate, we set up a visual-to-auditory sensory substitution device coupled with a pantograph system that transfers identical movements. This makes it possible to test two participants simultaneously: one acting voluntarily on the device's sensors, the other subjected to passive movements that are nonetheless correctly associated with auditory feedback. Participants were asked to recognize 2D shapes, and our results show that they improved their perception irrespective of whether the experimental condition was active or passive. Thus, our results confirm that sensory substitution is possible via passive movements.
Affiliation(s)
- Salim Akar
- Université de Rouen Normandie, CRFDP UR 7475, Rouen, France
- Laetitia Velin
- Université de Rouen Normandie, CRFDP UR 7475, Rouen, France
- Charles Lenay
- EA 2223 COSTECH (Connaissance, Organisation et Systèmes Techniques), Université de Technologie de Compiègne, Compiègne, France
- Loïc Deschamps
- Université de Rouen Normandie, CRFDP UR 7475, Rouen, France
- Vincent Roy
- Université de Rouen Normandie, CRFDP UR 7475, Rouen, France
2
Remache-Vinueza B, Trujillo-León A, Vidal-Verdú F. Vibrotactile stimulus duration threshold for perception of pulse to vibration transition. Sci Rep 2025; 15:5057. [PMID: 39934639] [PMCID: PMC11814341] [DOI: 10.1038/s41598-025-85778-6]
Abstract
This study investigates the minimum stimulus duration required to perceive the transition from pulse to vibration sensations, a critical parameter for optimizing information transmission via haptic interfaces such as smartphones, tablets, smartwatches, game consoles, and sensory substitution systems. Efficient transmission relies on minimizing stimulus duration, enabling more information to be conveyed in less time. A preliminary experiment established intensity perception thresholds (the minimum detectable vibration intensities) at 40, 80, 150, 250, 300, and 590 Hz, frequencies that primarily activate the Pacinian (Rapidly Adapting II) psychophysical channel. Subsequently, 35 participants determined the minimum durations needed to perceive the transition from pulse to vibration sensations across this frequency range. Results revealed a consistent minimum duration of approximately 30 ms, contrasting with findings in audition, where shorter durations suffice at higher frequencies, but aligning with prior studies of tactile perception.
Affiliation(s)
- Byron Remache-Vinueza
- Departamento de Electrónica, Universidad de Málaga, 29071, Málaga, Spain
- Facultad de Ingenierías, Ingeniería Industrial, Universidad Tecnológica Indoamérica, 170103, Quito, Ecuador
- Andrés Trujillo-León
- Departamento de Electrónica, Universidad de Málaga, 29071, Málaga, Spain
- Instituto Universitario de Investigación en Ingeniería Mecatrónica y Sistemas Ciberfísicos, IMECH.UMA, 29590, Campanillas, Spain
- Fernando Vidal-Verdú
- Departamento de Electrónica, Universidad de Málaga, 29071, Málaga, Spain
- Instituto Universitario de Investigación en Ingeniería Mecatrónica y Sistemas Ciberfísicos, IMECH.UMA, 29590, Campanillas, Spain
3
Irigoyen E, Larrea M, Graña M. A Narrative Review of Haptic Technologies and Their Value for Training, Rehabilitation, and the Education of Persons with Special Needs. Sensors (Basel) 2024; 24:6946. [PMID: 39517844] [PMCID: PMC11548615] [DOI: 10.3390/s24216946]
Abstract
Haptic technologies are increasingly valuable for human-computer interaction in its many forms, including virtual reality systems, which are becoming useful tools for education, training, and rehabilitation in many areas of medicine, engineering, and daily life. A broad spectrum of technologies and approaches provide haptic stimuli, ranging from well-known force feedback to subtle pseudo-haptics and visual haptics. Correspondingly, a broad spectrum of applications and system designs include haptic technologies as a relevant component and interaction feature. Their use in training for medical procedures is paramount, but they also appear in a plethora of systems deploying virtual reality applications. This narrative review covers the panorama of haptic devices and approaches and the most salient areas of application. Special emphasis is given to the education of persons with special needs, with the aim of fostering the development of innovative systems and methods that enhance the quality of life of this segment of the population.
Affiliation(s)
- Eloy Irigoyen
- Systems Engineering and Automation Department, Bilbao School of Engineering, University of the Basque Country (UPV/EHU), 48013 Bilbao, Spain
- Mikel Larrea
- Group of Computational Intelligence, Faculty of Engineering of Gipuzkoa, University of the Basque Country (UPV/EHU), 20018 San Sebastian, Spain
- Manuel Graña
- Faculty of Computer Science, University of the Basque Country (UPV/EHU), 20018 San Sebastian, Spain
4
Miklós G, Halász L, Hasslberger M, Toth E, Manola L, Hagh Gooie S, van Elswijk G, Várkuti B, Erőss L. Sensory-substitution based sound perception using a spinal computer-brain interface. Sci Rep 2024; 14:24879. [PMID: 39438593] [PMCID: PMC11496521] [DOI: 10.1038/s41598-024-75779-2]
Abstract
Sensory substitution offers a promising approach to restore lost sensory functions. Here we show that spinal cord stimulation (SCS), typically used for chronic pain management, can potentially serve as a novel auditory sensory substitution device. We recruited 13 patients undergoing SCS implantation and translated everyday sound samples into personalized SCS patterns during their trial phase. In a sound identification task, where chance-level performance was 33.3%, participants (n = 8) achieved a mean accuracy of 72.8% using only SCS input. We observed a weak positive correlation between stimulation bitrate and identification accuracy. A follow-up discrimination task (n = 5) confirmed that reduced bitrates significantly impaired participants' ability to distinguish between consecutive SCS patterns, indicating effective processing of additional information at higher bitrates. These findings demonstrate the feasibility of using existing SCS technology to create a novel neural interface for a sound prosthesis. Our results pave the way for future research to enhance stimulation fidelity, assess long-term training effects, and explore integration with other auditory aids for comprehensive hearing rehabilitation.
Affiliation(s)
- Gabriella Miklós
- Institute of Neurosurgery and Neurointervention, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- János Szentágothai Doctoral School of Neurosciences, Semmelweis University, Budapest, Hungary
- CereGate GmbH, München, Germany
- László Halász
- Institute of Neurosurgery and Neurointervention, Faculty of Medicine, Semmelweis University, Budapest, Hungary
- Albert Szent-Györgyi Medical School, Doctoral School of Clinical Medicine, Clinical and Experimental Research for Reconstructive and Organ-Sparing Surgery, University of Szeged, Szeged, Hungary
- Loránd Erőss
- Institute of Neurosurgery and Neurointervention, Faculty of Medicine, Semmelweis University, Budapest, Hungary
5
Fletcher MD, Akis E, Verschuur CA, Perry SW. Improved tactile speech perception and noise robustness using audio-to-tactile sensory substitution with amplitude envelope expansion. Sci Rep 2024; 14:15029. [PMID: 38951556] [PMCID: PMC11217272] [DOI: 10.1038/s41598-024-65510-6]
Abstract
Recent advances in haptic technology could allow haptic hearing aids, which convert audio to tactile stimulation, to become viable for supporting people with hearing loss. A tactile vocoder strategy for audio-to-tactile conversion, which exploits these advances, has recently shown significant promise. In this strategy, the amplitude envelope is extracted from several audio frequency bands and used to modulate the amplitude of a set of vibro-tactile tones. The vocoder strategy allows good consonant discrimination, but vowel discrimination is poor and the strategy is susceptible to background noise. In the current study, we assessed whether multi-band amplitude envelope expansion can effectively enhance critical vowel features, such as formants, and improve speech extraction from noise. In 32 participants with normal touch perception, tactile-only phoneme discrimination with and without envelope expansion was assessed both in quiet and in background noise. Envelope expansion improved performance in quiet by 10.3% for vowels and by 5.9% for consonants. In noise, envelope expansion improved overall phoneme discrimination by 9.6%, with no difference in benefit between consonants and vowels. The tactile vocoder with envelope expansion can be deployed in real-time on a compact device and could substantially improve clinical outcomes for a new generation of haptic hearing aids.
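The envelope-expansion idea can be illustrated with a short sketch. The abstract does not give the exact expansion function used in the study, so the power-law expander below, and all of its parameter values, are hypothetical stand-ins that show only the principle: peaks in a band's amplitude envelope (such as formants) are exaggerated relative to lower-level content and the noise floor.

```python
import numpy as np

def band_envelope(x, win=64):
    """Crude amplitude envelope: rectify, then smooth with a moving average."""
    rect = np.abs(x)
    kernel = np.ones(win) / win
    return np.convolve(rect, kernel, mode="same")

def expand_envelope(env, exponent=2.0):
    """Expansive nonlinearity (illustrative): raising the normalised envelope
    to a power > 1 deepens the modulation, so peaks stand out more.
    Normalised so the peak level is preserved."""
    peak = env.max()
    if peak == 0:
        return env
    return peak * (env / peak) ** exponent

# Toy single-band example: a 200 Hz carrier with a slow 5 Hz envelope.
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 5 * t) * np.sin(2 * np.pi * 200 * t)
env = band_envelope(x)
expanded = expand_envelope(env, exponent=2.0)
```

With the exponent above 1, every envelope value below the peak is pushed down while the peak itself is unchanged, which is one simple way to realise "envelope expansion".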
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Esma Akis
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Carl A Verschuur
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Samuel W Perry
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
6
Fletcher MD, Perry SW, Thoidis I, Verschuur CA, Goehring T. Improved tactile speech robustness to background noise with a dual-path recurrent neural network noise-reduction method. Sci Rep 2024; 14:7357. [PMID: 38548750] [PMCID: PMC10978864] [DOI: 10.1038/s41598-024-57312-7]
Abstract
Many people with hearing loss struggle to understand speech in noisy environments, making noise robustness critical for hearing-assistive devices. Recently developed haptic hearing aids, which convert audio to vibration, can improve speech-in-noise performance for cochlear implant (CI) users and assist those unable to access hearing-assistive devices. They are typically body-worn rather than head-mounted, allowing additional space for batteries and microprocessors, and so can deploy more sophisticated noise-reduction techniques. The current study assessed whether a real-time-feasible dual-path recurrent neural network (DPRNN) can improve tactile speech-in-noise performance. Audio was converted to vibration on the wrist using a vocoder method, either with or without noise reduction. Performance was tested for speech in a multi-talker noise (recorded at a party) with a 2.5-dB signal-to-noise ratio. An objective assessment showed the DPRNN improved the scale-invariant signal-to-distortion ratio by 8.6 dB and substantially outperformed traditional noise-reduction (log-MMSE). A behavioural assessment in 16 participants showed the DPRNN improved tactile-only sentence identification in noise by 8.2%. This suggests that advanced techniques like the DPRNN could substantially improve outcomes with haptic hearing aids. Low-cost haptic devices could soon be an important supplement to hearing-assistive devices such as CIs or offer an alternative for people who cannot access CI technology.
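The scale-invariant signal-to-distortion ratio (SI-SDR) used in the objective assessment has a standard definition that is straightforward to compute. The sketch below is a generic implementation of that definition, not the authors' code, with synthetic signals standing in for speech.

```python
import numpy as np

def si_sdr(estimate, target, eps=1e-9):
    """Scale-invariant signal-to-distortion ratio in dB. The target is
    rescaled by its least-squares optimal gain, so the metric ignores
    overall level differences between estimate and target."""
    target = target - target.mean()
    estimate = estimate - estimate.mean()
    alpha = np.dot(estimate, target) / (np.dot(target, target) + eps)
    projection = alpha * target          # target component of the estimate
    noise = estimate - projection        # everything else is distortion
    return 10 * np.log10((projection @ projection) / (noise @ noise + eps))

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)       # stand-in for a clean speech signal
noisy = clean + 0.1 * rng.standard_normal(16000)
score = si_sdr(noisy, clean)
score_scaled = si_sdr(3.0 * noisy, clean)  # rescaling leaves the metric unchanged
```

An "improvement of 8.6 dB" in the abstract means the SI-SDR of the denoised signal exceeds that of the noisy input by 8.6 dB on average.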
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Samuel W Perry
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Iordanis Thoidis
- School of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 54124, Thessaloniki, Greece
- Carl A Verschuur
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Tobias Goehring
- MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, UK
7
Fletcher MD, Akis E, Verschuur CA, Perry SW. Improved tactile speech perception using audio-to-tactile sensory substitution with formant frequency focusing. Sci Rep 2024; 14:4889. [PMID: 38418558] [PMCID: PMC10901863] [DOI: 10.1038/s41598-024-55429-3]
Abstract
Haptic hearing aids, which provide speech information through tactile stimulation, could substantially improve outcomes for both cochlear implant users and for those unable to access cochlear implants. Recent advances in wide-band haptic actuator technology have made new audio-to-tactile conversion strategies viable for wearable devices. One such strategy filters the audio into eight frequency bands, which are evenly distributed across the speech frequency range. The amplitude envelopes from the eight bands modulate the amplitudes of eight low-frequency tones, which are delivered through vibration to a single site on the wrist. This tactile vocoder strategy effectively transfers some phonemic information, but vowels and obstruent consonants are poorly portrayed. In 20 participants with normal touch perception, we tested (1) whether focusing the audio filters of the tactile vocoder more densely around the first and second formant frequencies improved tactile vowel discrimination, and (2) whether focusing filters at mid-to-high frequencies improved obstruent consonant discrimination. The obstruent-focused approach was found to be ineffective. However, the formant-focused approach improved vowel discrimination by 8%, without changing overall consonant discrimination. The formant-focused tactile vocoder strategy, which can readily be implemented in real time on a compact device, could substantially improve speech perception for haptic hearing aid users.
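The eight-band tactile vocoder described above can be sketched end to end: filter the audio into bands, extract each band's amplitude envelope, and use the envelopes to modulate low-frequency tones summed for a single actuator. The band edges, tone frequencies, and FFT-mask filtering below are illustrative stand-ins chosen for clarity; the abstract does not give the exact values, and a wearable device would use causal real-time filters rather than offline FFT masking.

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Zero-phase band-pass via FFT masking (offline, illustrative only)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spec, n=len(x))

def envelope(x, win=160):
    """Amplitude envelope: rectification plus a 10 ms moving average."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(x), kernel, mode="same")

def tactile_vocoder(audio, fs, band_edges, tone_freqs):
    """Sum of low-frequency tones, each amplitude-modulated by the envelope
    of one audio band, for delivery to a single vibrotactile actuator."""
    t = np.arange(len(audio)) / fs
    out = np.zeros_like(audio)
    for (lo, hi), f_tone in zip(band_edges, tone_freqs):
        env = envelope(bandpass_fft(audio, fs, lo, hi))
        out += env * np.sin(2 * np.pi * f_tone * t)
    return out / len(tone_freqs)

fs = 16000
audio = np.random.default_rng(1).standard_normal(fs)  # stand-in for speech
# Hypothetical band edges spanning the speech range and hypothetical
# low-frequency tone assignments (one tone per band):
edges = [(100, 300), (300, 600), (600, 1000), (1000, 1600),
         (1600, 2400), (2400, 3400), (3400, 4800), (4800, 7000)]
tones = [94, 110, 128, 150, 175, 204, 238, 278]
vib = tactile_vocoder(audio, fs, edges, tones)
```

"Formant frequency focusing" then amounts to choosing the `edges` list so that more bands sit around the first and second formant regions instead of spacing them evenly.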
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Esma Akis
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Carl A Verschuur
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Samuel W Perry
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
8
Kohler I, Perrotta MV, Ferreira T, Eagleman DM. Cross-Modal Sensory Boosting to Improve High-Frequency Hearing Loss: Device Development and Validation. JMIRx Med 2024; 5:e49969. [PMID: 38345294] [PMCID: PMC11008433] [DOI: 10.2196/49969]
Abstract
Background: High-frequency hearing loss is one of the most common problems in the aging population and among those with a history of exposure to loud noises. This type of hearing loss can be frustrating and disabling, making it difficult to understand speech and interact effectively with the world.
Objective: This study aimed to examine the impact of spatially unique haptic vibrations representing high-frequency phonemes on the self-perceived ability to understand conversations in everyday situations.
Methods: To address high-frequency hearing loss, a multi-motor wristband was developed that uses machine learning to listen for specific high-frequency phonemes. The wristband vibrates in spatially unique locations to represent which phoneme was present in real time. A total of 16 participants with high-frequency hearing loss were recruited and asked to wear the wristband for 6 weeks. The degree of disability associated with hearing loss was measured weekly using the Abbreviated Profile of Hearing Aid Benefit (APHAB).
Results: By the end of the 6-week study, the average APHAB benefit score across all participants reached 12.39 points, from a baseline of 40.32 to a final score of 27.93 (SD 13.11; N=16; P=.002, 2-tailed dependent t test). Those without hearing aids showed a 10.78-point larger improvement in average APHAB benefit score at 6 weeks than those with hearing aids (t14=2.14; P=.10, 2-tailed independent t test). The average benefit score across all participants was 15.44 for ease of communication (SD 13.88; N=16; P<.001, 2-tailed dependent t test), 10.88 for background noise (SD 17.54; N=16; P=.03, 2-tailed dependent t test), and 10.84 for reverberation (SD 16.95; N=16; P=.02, 2-tailed dependent t test).
Conclusions: These findings show that vibrotactile sensory substitution delivered by a wristband producing spatially distinguishable vibrations in correspondence with high-frequency phonemes helps individuals with high-frequency hearing loss improve their perceived understanding of verbal communication. Vibrotactile feedback provides benefits whether or not a person wears hearing aids, albeit in slightly different ways. Finally, individuals with the greatest perceived difficulty understanding speech experienced the greatest perceived benefit from vibrotactile feedback.
Affiliation(s)
- David M Eagleman
- Neosensory, Los Altos, CA, United States
- Department of Psychiatry, Stanford University, Stanford, CA, United States
9
Várkuti B, Halász L, Hagh Gooie S, Miklós G, Smits Serena R, van Elswijk G, McIntyre CC, Lempka SF, Lozano AM, Erőss L. Conversion of a medical implant into a versatile computer-brain interface. Brain Stimul 2024; 17:39-48. [PMID: 38145752] [DOI: 10.1016/j.brs.2023.12.011]
Abstract
Background: Information transmission into the human nervous system is the basis for a variety of prosthetic applications. Spinal cord stimulation (SCS) systems are widely available, have a well-documented safety record, can be implanted minimally invasively, and are known to stimulate afferent pathways. Nonetheless, SCS devices are not yet used for computer-brain-interfacing applications.
Objective: Here we aimed to establish computer-to-brain communication via medical SCS implants in a group of 20 individuals who had been operated on for the treatment of chronic neuropathic pain.
Methods: In the initial phase, we conducted interface calibration with the aim of determining personalized stimulation settings that yielded distinct and reproducible sensations. These settings were subsequently utilized to generate inputs for a range of behavioral tasks. We evaluated the required calibration time, task training duration, and the subsequent performance in each task.
Results: We could establish a stable spinal computer-brain interface in 18 of the 20 participants. Each of the 18 then performed one or more of the following tasks: a rhythm-discrimination task (n = 13), a Morse-decoding task (n = 3), and/or two different balance/body-posture tasks (n = 18; n = 5). The median calibration time was 79 min. The median training time for learning to use the interface in a subsequent task was 1:40 min. In each task, every participant demonstrated successful performance, surpassing chance levels.
Conclusion: The results constitute the first proof-of-concept of a general-purpose computer-brain interface paradigm that could be deployed on present-day medical SCS platforms.
Affiliation(s)
- László Halász
- Albert Szent-Györgyi Medical School, Doctoral School of Clinical Medicine, Clinical and Experimental Research for Reconstructive and Organ-Sparing Surgery, University of Szeged, Szeged, Hungary
- Gabriella Miklós
- CereGate GmbH, München, Germany
- National Institute of Mental Health, Neurology, and Neurosurgery, Budapest, Hungary
- János Szentágothai Doctoral School of Neurosciences, Semmelweis University, Budapest, Hungary
- Ricardo Smits Serena
- CereGate GmbH, München, Germany
- Department of Orthopaedics and Sports Orthopaedics, Klinikum Rechts der Isar, Technical University of Munich, München, Germany
- Cameron C McIntyre
- Department of Biomedical Engineering and Department of Neurosurgery, Duke University, Durham, NC, USA
- Scott F Lempka
- Department of Biomedical Engineering, Department of Anesthesiology and the Biointerfaces Institute, University of Michigan, Ann Arbor, MI, USA
- Andres M Lozano
- Division of Neurosurgery, Department of Surgery, University of Toronto, Toronto, Ontario, Canada
- Loránd Erőss
- National Institute of Mental Health, Neurology, and Neurosurgery, Budapest, Hungary
10
Macklin AS, Yau JM, Fischer-Baum S, O'Malley MK. Representational Similarity Analysis for Tracking Neural Correlates of Haptic Learning on a Multimodal Device. IEEE Trans Haptics 2023; 16:424-435. [PMID: 37556331] [PMCID: PMC10605963] [DOI: 10.1109/toh.2023.3303838]
Abstract
A goal of wearable haptic devices has been to enable haptic communication, where individuals learn to map information typically processed visually or aurally to haptic cues via a process of cross-modal associative learning. Neural correlates have been used to evaluate haptic perception and may provide a more objective approach to assess association performance than more commonly used behavioral measures of performance. In this article, we examine Representational Similarity Analysis (RSA) of electroencephalography (EEG) as a framework to evaluate how the neural representation of multifeatured haptic cues changes with association training. We focus on the first phase of cross-modal associative learning, perception of multimodal cues. A participant learned to map phonemes to multimodal haptic cues, and EEG data were acquired before and after training to create neural representational spaces that were compared to theoretical models. Our perceptual model showed better correlations to the neural representational space before training, while the feature-based model showed better correlations with the post-training data. These results suggest that training may lead to a sharpening of the sensory response to haptic cues. Our results show promise that an EEG-RSA approach can capture a shift in the representational space of cues, as a means to track haptic learning.
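The core of the RSA pipeline the article builds on can be condensed into a few lines: compute a representational dissimilarity matrix (RDM) from condition-wise response patterns, then correlate its off-diagonal entries with those of a theoretical model RDM. The toy data below (number of haptic cues, "EEG feature" count, noise level, and the two-group model) are invented purely for illustration and are not the study's data or models.

```python
import numpy as np

def rdm(patterns):
    """RDM: 1 - Pearson correlation between the response patterns
    (rows = conditions, columns = features) of every pair of conditions."""
    return 1.0 - np.corrcoef(patterns)

def upper(mat):
    """Vectorise the strictly upper triangle (the unique pairwise entries)."""
    i, j = np.triu_indices_from(mat, k=1)
    return mat[i, j]

rng = np.random.default_rng(2)
# 8 haptic cues x 32 "EEG features"; two cue groups with distinct prototypes.
proto = rng.standard_normal((2, 32))
neural = np.repeat(proto, 4, axis=0) + 0.3 * rng.standard_normal((8, 32))
# Model RDM: dissimilarity 0 within a cue group, 1 across groups.
model = np.kron(1 - np.eye(2), np.ones((4, 4)))
# Model fit: correlation between neural and model RDM entries.
fit = np.corrcoef(upper(rdm(neural)), upper(model))[0, 1]
```

Comparing such fits before and after training, for competing model RDMs, is how a shift in the representational space of the cues can be tracked.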
11
Flores Ramones A, del-Rio-Guerra MS. Recent Developments in Haptic Devices Designed for Hearing-Impaired People: A Literature Review. Sensors (Basel) 2023; 23:2968. [PMID: 36991680] [PMCID: PMC10055558] [DOI: 10.3390/s23062968]
Abstract
Haptic devices transmit information to the user, using tactile stimuli to augment or replace sensory input. People with limited sensory abilities, such as vision or hearing, can receive supplementary information by relying on such devices. This review analyses recent developments in haptic devices for deaf and hard-of-hearing individuals by extracting the most relevant information from each of the selected papers. The process of finding relevant literature is detailed following the PRISMA guidelines for literature reviews. The devices are categorized to better frame the review topic, and the categorization highlights several areas for future research into haptic devices for hearing-impaired users. We believe this review may be useful to researchers interested in haptic devices, assistive technologies, and human-computer interaction.
12
Oh Y, Kalpin N, Hunter J, Schwalm M. The impact of temporally coherent visual and vibrotactile cues on speech recognition in noise. JASA Express Lett 2023; 3:025203. [DOI: 10.1121/10.0017326]
Abstract
Inputs delivered to different sensory organs provide us with complementary speech information about the environment. The goal of this study was to establish which multisensory characteristics can facilitate speech recognition in noise. The major finding is that the tracking of temporal cues of visual/tactile speech synced with auditory speech can play a key role in speech-in-noise performance. This suggests that multisensory interactions are fundamentally important for speech recognition ability in noisy environments, and they require salient temporal cues. The amplitude envelope, serving as a reliable temporal cue source, can be applied through different sensory modalities when speech recognition is compromised.
Affiliation(s)
- Yonghee Oh
- Department of Otolaryngology-Head and Neck Surgery and Communicative Disorders, University of Louisville, Louisville, Kentucky 40202, USA
- Nicole Kalpin
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida 32610, USA
- Jessica Hunter
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida 32610, USA
- Meg Schwalm
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida 32610, USA
13
Eagleman DM, Perrotta MV. The future of sensory substitution, addition, and expansion via haptic devices. Front Hum Neurosci 2023; 16:1055546. [PMID: 36712151] [PMCID: PMC9880183] [DOI: 10.3389/fnhum.2022.1055546]
Abstract
Haptic devices use the sense of touch to transmit information to the nervous system. As an example, a sound-to-touch device processes auditory information and sends it to the brain via patterns of vibration on the skin for people who have lost hearing. We here summarize the current directions of such research and draw upon examples in industry and academia. Such devices can be used for sensory substitution (replacing a lost sense, such as hearing or vision), sensory expansion (widening an existing sensory experience, such as detecting electromagnetic radiation outside the visible light spectrum), and sensory addition (providing a novel sense, such as magnetoreception). We review the relevant literature, the current status, and possible directions for the future of sensory manipulation using non-invasive haptic devices.
Affiliation(s)
- David M. Eagleman
- Department of Psychiatry, Stanford University School of Medicine, Stanford, CA, United States
- Neosensory, Palo Alto, CA, United States
14
Maimon A, Yizhar O, Buchs G, Heimler B, Amedi A. A case study in phenomenology of visual experience with retinal prosthesis versus visual-to-auditory sensory substitution. Neuropsychologia 2022; 173:108305. [PMID: 35752268] [PMCID: PMC9297294] [DOI: 10.1016/j.neuropsychologia.2022.108305]
Abstract
The phenomenology of the blind has provided an age-old, unparalleled means of exploring the enigmatic link between the brain and mind. This paper delves into the unique phenomenological experience of a man who became blind in adulthood. He subsequently underwent an Argus II retinal prosthesis implant, with its associated training, and extensive training on the EyeMusic visual-to-auditory sensory substitution device (SSD), thereby becoming the first reported case of dual proficiency with both devices. He offers a firsthand account of what he considers the great potential of combining sensory substitution devices with visual prostheses as part of a complete visual restoration protocol. While the Argus II retinal prosthesis alone provided him with immediate visual percepts by way of electrically stimulated phosphenes, the EyeMusic SSD requires extensive training from the onset. Following that extensive training program, our subject reports that the SSD allowed him a richer, more complex perceptual experience that felt more "second nature" to him, whereas the Argus II prosthesis (which also requires training) did not allow him to achieve the same levels of automaticity and transparency. Following long-term use of the EyeMusic SSD, he reported that visual percepts, representing mainly but not limited to colors portrayed by the EyeMusic, are elicited in association with auditory stimuli, indicating the acquisition of a high level of automaticity. Finally, the case study indicates an additive benefit of combining both devices on the user's subjective phenomenological visual experience.
Affiliation(s)
- Amber Maimon: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
- Or Yizhar: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; Max Planck Institute for Human Development, Research Group Adaptive Memory and Decision Making, Berlin, Germany; Max Planck Institute for Human Development, Max Planck Dahlem Campus of Cognition (MPDCC), Berlin, Germany
- Galit Buchs: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel
- Benedetta Heimler: Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Ramat Gan, Israel
- Amir Amedi: The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
15.
Cieśla K, Wolak T, Lorens A, Mentzel M, Skarżyński H, Amedi A. Effects of training and using an audio-tactile sensory substitution device on speech-in-noise understanding. Sci Rep 2022; 12:3206. PMID: 35217676; PMCID: PMC8881456; DOI: 10.1038/s41598-022-06855-8. Citations in RCA: 15.
Abstract
Understanding speech in background noise is challenging, and wearing face masks, as during the COVID-19 pandemic, makes it even harder. We developed a multi-sensory setup including a sensory substitution device (SSD) that delivers speech simultaneously through audition and as vibrations on the fingertips. The vibrations correspond to low frequencies extracted from the speech input. We trained two groups of non-native English speakers in understanding distorted speech in noise. After a short session (30-45 min) of repeating sentences, with or without concurrent matching vibrations, we found a comparable mean group improvement of 14-16 dB in Speech Reception Threshold (SRT) in two test conditions: when participants repeated sentences from hearing alone, and when matching vibrations on the fingertips were also present. This is a very strong effect, considering that a 10 dB difference corresponds to a doubling of perceived loudness. The number of sentence repetitions needed to complete either type of training was comparable. Meanwhile, the mean group SNR for the audio-tactile training (14.7 ± 8.7) was significantly lower (harder) than for the auditory training (23.9 ± 11.8), indicating a potential facilitating effect of the added vibrations. In addition, both before and after training, most participants (70-80%) performed better (by 4-6 dB on average) in speech-in-noise understanding when the audio sentences were accompanied by matching vibrations. This is the same magnitude of multisensory benefit that we reported, with no training at all, in our previous study using the same experimental procedures. After training, performance in this test condition was also best in both groups (SRT ~ 2 dB). The least significant effect of both training types was found in the third test condition, in which participants repeated sentences accompanied by non-matching tactile vibrations; performance in this condition was also poorest after training. The results indicate that both types of training may remove some level of difficulty in sound perception, enabling more effective use of speech inputs delivered via vibrotactile stimulation. We discuss the implications of these novel findings with respect to basic science. In particular, we show that even in adulthood, long after the classical "critical periods" of development have passed, a new pairing between a certain computation (here, speech processing) and an atypical sensory modality (here, touch) can be established and trained, and that this process can be rapid and intuitive. We further present possible applications of our training program and the SSD for auditory rehabilitation in patients with hearing (and sight) deficits, as well as for healthy individuals in suboptimal acoustic situations.
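The vibrotactile signal described above is built from low frequencies extracted from the speech input. A minimal sketch of that extraction step, assuming a windowed-sinc low-pass filter with a 300 Hz cutoff (the study's actual signal chain and cutoff are not given in this abstract; `speech_to_vibration` and its parameters are illustrative):

```python
import numpy as np

def lowpass_fir(cutoff_hz, fs, numtaps=101):
    """Windowed-sinc low-pass FIR filter coefficients."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = 2 * cutoff_hz / fs * np.sinc(2 * cutoff_hz / fs * n)
    h *= np.hamming(numtaps)      # taper to suppress stopband ripple
    return h / h.sum()            # normalize for unity gain at DC

def speech_to_vibration(audio, fs, cutoff_hz=300.0):
    """Keep only the low-frequency band of a speech signal, as a
    stand-in for the vibrotactile feed described in the abstract."""
    return np.convolve(audio, lowpass_fir(cutoff_hz, fs), mode="same")

# Demo: a 150 Hz component (tactile band) plus 3000 Hz speech-band energy
fs = 16000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 150 * t) + np.sin(2 * np.pi * 3000 * t)
vib = speech_to_vibration(audio, fs)  # 3000 Hz component strongly attenuated
```

The FIR design keeps the sketch dependency-light; a production system would more likely use a dedicated filter-design routine and stream the audio in blocks.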
Affiliation(s)
- K Cieśla: The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel; World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- T Wolak: World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- A Lorens: World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- M Mentzel: The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
- H Skarżyński: World Hearing Centre, Institute of Physiology and Pathology of Hearing, Warsaw, Poland
- A Amedi: The Baruch Ivcher Institute for Brain, Cognition & Technology, The Baruch Ivcher School of Psychology and the Ruth and Meir Rosental Brain Imaging Center, Reichman University, Herzliya, Israel
16.
Abramson CI, Levin M. Behaviorist approaches to investigating memory and learning: A primer for synthetic biology and bioengineering. Commun Integr Biol 2021; 14:230-247. PMID: 34925687; PMCID: PMC8677006; DOI: 10.1080/19420889.2021.2005863. Citations in RCA: 15.
Abstract
The fields of developmental biology, biomedicine, and artificial life are being revolutionized by advances in synthetic morphology. The next phase of synthetic biology and bioengineering is resulting in the construction of novel organisms (biobots), which exhibit not only morphogenesis and physiology but functional behavior. It is now essential to begin to characterize the behavioral capacity of novel living constructs in terms of their ability to make decisions, form memories, learn from experience, and anticipate future stimuli. These synthetic organisms are highly diverse, and often do not resemble familiar model systems used in behavioral science. Thus, they represent an important context in which to begin to unify and standardize vocabulary and techniques across developmental biology, behavioral ecology, and neuroscience. To facilitate the study of behavior in novel living systems, we present a primer on techniques from the behaviorist tradition that can be used to probe the functions of any organism – natural, chimeric, or synthetic – regardless of the details of their construction or origin. These techniques provide a rich toolkit for advancing the fields of synthetic bioengineering, evolutionary developmental biology, basal cognition, exobiology, and robotics.
Affiliation(s)
- Charles I Abramson: Department of Psychology, Laboratory of Comparative Psychology and Behavioral Biology at Oklahoma State University, United States of America
- Michael Levin: Department of Biology, Allen Discovery Center at Tufts University, United States of America
17.
Fletcher MD, Verschuur CA. Electro-Haptic Stimulation: A New Approach for Improving Cochlear-Implant Listening. Front Neurosci 2021; 15:581414. PMID: 34177440; PMCID: PMC8219940; DOI: 10.3389/fnins.2021.581414. Citations in RCA: 7.
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring speech perception for severely to profoundly deaf individuals. Despite their success, several limitations remain, particularly in CI users' ability to understand speech in noisy environments, locate sound sources, and enjoy music. A new multimodal approach has been proposed that uses haptic stimulation to provide sound information that is poorly transmitted by the implant. This augmenting of the electrical CI signal with haptic stimulation (electro-haptic stimulation; EHS) has been shown to improve speech-in-noise performance and sound localization in CI users. There is also evidence that it could enhance music perception. We review the evidence of EHS enhancement of CI listening and discuss key areas where further research is required. These include understanding the neural basis of EHS enhancement, understanding the effectiveness of EHS across different clinical populations, and the optimization of signal-processing strategies. We also discuss the significant potential for a new generation of haptic neuroprosthetic devices to aid those who cannot access hearing-assistive technology, either because of biomedical or healthcare-access issues. While significant further research and development is required, we conclude that EHS represents a promising new approach that could, in the near future, offer a non-invasive, inexpensive means of substantially improving clinical outcomes for hearing-impaired individuals.
Affiliation(s)
- Mark D. Fletcher: University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom; Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
- Carl A. Verschuur: University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
18.
Fletcher MD, Zgheib J, Perry SW. Sensitivity to Haptic Sound-Localization Cues at Different Body Locations. Sensors (Basel) 2021; 21:3770. PMID: 34071729; PMCID: PMC8198414; DOI: 10.3390/s21113770. Citations in RCA: 6.
Abstract
Cochlear implants (CIs) restore hearing in severely to profoundly hearing-impaired people by electrically stimulating the cochlea. While they are extremely effective, they typically provide severely limited spatial hearing. Recent studies have shown that haptic stimulation can supplement the electrical CI signal (electro-haptic stimulation) and substantially improve sound localization. In haptic sound-localization studies, the signal is extracted from the audio received by behind-the-ear devices and delivered to each wrist. Localization is achieved using tactile intensity differences (TIDs) across the wrists, which match sound intensity differences across the ears (a key sound-localization cue). The current study established sensitivity to across-limb TIDs at three candidate locations for a wearable haptic device, namely the lower tricep and the palmar and dorsal wrist. At all locations, TID sensitivity was similar to normal-hearing listeners' sensitivity to across-ear intensity differences. This suggests that greater haptic sound-localization accuracy than previously shown can be achieved. The dynamic range was also measured and far exceeded that available through electrical CI stimulation at all locations, suggesting that haptic stimulation could convey additional sound-intensity information. These results indicate that an effective haptic aid could be deployed at any of the candidate locations, and could offer a low-cost, non-invasive means of improving outcomes for hearing-impaired listeners.
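The localization cue described here is a tactile intensity difference across the wrists that mirrors the across-ear level difference. One illustrative way such a mapping could work (a sketch under assumptions: `across_limb_amplitudes` and its symmetric dB split are hypothetical, not the study's implementation):

```python
def across_limb_amplitudes(left_db, right_db, base_amp=1.0):
    """Map an across-ear sound-level difference (in dB) to a matching
    across-wrist vibration amplitude difference. Purely illustrative."""
    diff_db = left_db - right_db          # interaural level difference
    # 20 dB corresponds to a factor of 10 in amplitude; split the
    # difference symmetrically across the two wrists.
    left = base_amp * 10 ** (diff_db / 40)
    right = base_amp * 10 ** (-diff_db / 40)
    return left, right

# A source 10 dB louder on the left yields a stronger left-wrist vibration
left_amp, right_amp = across_limb_amplitudes(70, 60)
```

The symmetric split keeps the product of the two amplitudes constant, so overall vibration energy stays roughly fixed while only the across-limb difference carries the directional cue.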
Affiliation(s)
- Mark D. Fletcher: Faculty of Engineering and Physical Sciences, Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton SO17 1BJ, UK; University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton SO17 1BJ, UK
- Jana Zgheib: University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton SO17 1BJ, UK
- Samuel W. Perry: Faculty of Engineering and Physical Sciences, Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton SO17 1BJ, UK; University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton SO17 1BJ, UK