1. Wang X, Bai G, Liang J, Xie Q, Chen Z, Zhou E, Li M, Wei X, Sun L, Zhang Z, Yang C, Tao TH, Zhou Z. Gustatory interface for operative assessment and taste decoding in patients with tongue cancer. Nat Commun 2024; 15:8967. PMID: 39420050; PMCID: PMC11487085; DOI: 10.1038/s41467-024-53379-y.
Abstract
Taste, a pivotal sense modality, plays a fundamental role in discerning flavors and evaluating the potential harm of food, thereby contributing to human survival, physical and mental health. Patients with tongue cancer may experience a loss of taste following extensive surgical resection with flap reconstruction. Here, we designed a gustatory interface that enables the non-invasive detection of tongue electrical activities for a comprehensive operative assessment. Moreover, it decodes gustatory information from the reconstructed tongue without taste buds. Our gustatory interface facilitates the recording and analysis of electrical activities on the tongue, yielding an electrical mapping across the entire tongue surface, which delineates the safe margin for surgical management and assesses flap viability for postoperative structure monitoring and prompt intervention. Furthermore, the gustatory interface helps patients discern tastes with an accuracy of 97.8%. Our invention offers a promising approach to clinical assessment and management and holds potential for improving the quality of life for individuals with tongue cancer.
Affiliation(s)
- Xiner Wang
- 2020 X-Lab, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, 200050, China
- School of Graduate Study, University of Chinese Academy of Sciences, Beijing, 100049, China
- Guo Bai
- Department of Oral Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai, 200011, China
- Jizhi Liang
- 2020 X-Lab, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, 200050, China
- School of Graduate Study, University of Chinese Academy of Sciences, Beijing, 100049, China
- Qianyang Xie
- Department of Oral Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai, 200011, China
- Erda Zhou
- 2020 X-Lab, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, 200050, China
- School of Graduate Study, University of Chinese Academy of Sciences, Beijing, 100049, China
- Meng Li
- School of Graduate Study, University of Chinese Academy of Sciences, Beijing, 100049, China
- State Key Laboratory of Transducer Technology, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, 200050, China
- Xiaoling Wei
- School of Graduate Study, University of Chinese Academy of Sciences, Beijing, 100049, China
- State Key Laboratory of Transducer Technology, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, 200050, China
- Liuyang Sun
- 2020 X-Lab, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, 200050, China
- School of Graduate Study, University of Chinese Academy of Sciences, Beijing, 100049, China
- Zhiyuan Zhang
- Department of Oral Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai, 200011, China
- Chi Yang
- Department of Oral Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine; College of Stomatology, Shanghai Jiao Tong University; National Center for Stomatology; National Clinical Research Center for Oral Diseases; Shanghai Key Laboratory of Stomatology; Shanghai Research Institute of Stomatology; Research Unit of Oral and Maxillofacial Regenerative Medicine, Chinese Academy of Medical Sciences, Shanghai, 200011, China.
- Tiger H Tao
- 2020 X-Lab, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, 200050, China.
- School of Graduate Study, University of Chinese Academy of Sciences, Beijing, 100049, China.
- Neuroxess Co. Ltd, Shanghai, 200023, China.
- State Key Laboratory of Transducer Technology, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, 200050, China.
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing, 100049, China.
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, 200031, China.
- Guangdong Institute of Intelligence Science and Technology, Hengqin, Zhuhai, Guangdong, 519031, China.
- Tianqiao and Chrissy Chen Institute for Translational Research, Shanghai, China.
- Zhitao Zhou
- School of Graduate Study, University of Chinese Academy of Sciences, Beijing, 100049, China.
- State Key Laboratory of Transducer Technology, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai, 200050, China.
2. Aker SC, Faulkner KF, Innes-Brown H, Marozeau J. Perceived auditory dynamic range is enhanced with wrist-based tactile stimulation. J Acoust Soc Am 2024; 156:2759-2766. PMID: 39436360; DOI: 10.1121/10.0028676.
Abstract
Tactile stimulation has been shown to increase auditory loudness judgments in listeners. This bias could be utilized to enhance perception for people with deficiencies in auditory intensity perception, such as cochlear implant users. However, several aspects of this enhancement remain uncertain. For instance, does the tactile stimulation need to be applied to the hand or body, or can it be applied to the wrist? Furthermore, can the tactile stimulation both amplify and attenuate the perceived auditory loudness? To address these questions, two loudness-matching experiments were conducted. Participants matched a comparison auditory stimulus with an auditory reference, either with or without spectro-temporally identical tactile stimulation. In the first experiment, fixed-level tactile stimulation was administered to the wrist during the comparison stimulus to assess whether perceived auditory loudness increased. The second experiment replicated the same conditions but introduced tactile stimulation to both the reference and comparison, aiming to investigate the potential decrease in perceived auditory loudness when the two tactile accompaniments were incongruent between the reference and comparison. The results provide evidence supporting the existence of the tactile loudness bias in each experiment and are a step towards wrist-based haptic devices that modulate the auditory dynamic range for a user.
Affiliation(s)
- Scott C Aker
- Music and CI Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
- Oticon A/S, Smørum, 2765, Denmark
- Hamish Innes-Brown
- Music and CI Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
- Eriksholm Research Centre, Snekkersten, 3070, Denmark
- Jeremy Marozeau
- Music and CI Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
3. Paisa R, Andersen J, Ganis F, Percy-Smith LM, Serafin S. A Concert-Based Study on Melodic Contour Identification among Varied Hearing Profiles-A Preliminary Report. J Clin Med 2024; 13:3142. PMID: 38892853; PMCID: PMC11172703; DOI: 10.3390/jcm13113142.
Abstract
Background: This study investigated how different hearing profiles influenced melodic contour identification (MCI) in a real-world concert setting with a live band including drums, bass, and a lead instrument. We aimed to determine the impact of various auditory assistive technologies on music perception in an ecologically valid environment. Methods: The study involved 43 participants with varying hearing capabilities: normal hearing, bilateral hearing aids, bimodal hearing, single-sided cochlear implants, and bilateral cochlear implants. Participants were exposed to melodies played on a piano or accordion, with and without an electric bass as a masker, accompanied by a basic drum rhythm. Bayesian logistic mixed-effects models were utilized to analyze the data. Results: The introduction of an electric bass as a masker did not significantly affect MCI performance for any hearing group when melodies were played on the piano, contrary to its effect on accordion melodies and previous studies. Greater challenges were observed with accordion melodies, especially when accompanied by an electric bass. Conclusions: MCI performance among hearing aid users was comparable to other hearing-impaired profiles, challenging the hypothesis that they would outperform cochlear implant users. A cohort of short melodies inspired by Western music styles was developed for future contour identification tasks.
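The analysis above relies on Bayesian logistic mixed-effects models fitted to trial-level identification responses. As a hedged illustration only (not the authors' code), the sketch below shows how such a model could be specified in Python with the bambi library; the synthetic data frame, column names, and factor levels are assumptions standing in for the study's actual variables.

```python
# Hypothetical sketch: Bayesian logistic mixed-effects model for trial-level
# melodic contour identification (MCI) responses. All column names, factor
# levels, and the synthetic data are illustrative assumptions.
import numpy as np
import pandas as pd
import bambi as bmb
import arviz as az

# Build a placeholder trial-level dataset: 43 listeners x 20 trials.
rng = np.random.default_rng(0)
groups = ["NH", "HA-bilateral", "bimodal", "CI-unilateral", "CI-bilateral"]
rows = []
for pid in range(43):
    for _ in range(20):
        rows.append({
            "participant": f"P{pid:02d}",
            "hearing_group": groups[pid % len(groups)],
            "instrument": rng.choice(["piano", "accordion"]),
            "masker": rng.choice(["none", "electric_bass"]),
            "correct": int(rng.integers(0, 2)),  # 1 = contour identified
        })
df = pd.DataFrame(rows)

# Logistic (Bernoulli) mixed model: fixed effects for hearing group, melody
# instrument, and masker (plus interactions); random intercept per listener.
model = bmb.Model(
    "correct ~ hearing_group * instrument * masker + (1|participant)",
    df,
    family="bernoulli",
)
idata = model.fit(draws=1000, chains=2)  # NUTS sampling via PyMC
print(az.summary(idata))
```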
Affiliation(s)
- Razvan Paisa
- Multisensory Experience Lab, Aalborg University Copenhagen, A.C. Meyers Vænge 15, 2450 Copenhagen, Denmark; (F.G.); (S.S.)
- Jesper Andersen
- The Royal Danish Academy for Music, Rosenørns Alle 22, 1970 Frederiksberg, Denmark;
- Francesco Ganis
- Multisensory Experience Lab, Aalborg University Copenhagen, A.C. Meyers Vænge 15, 2450 Copenhagen, Denmark; (F.G.); (S.S.)
- Stefania Serafin
- Multisensory Experience Lab, Aalborg University Copenhagen, A.C. Meyers Vænge 15, 2450 Copenhagen, Denmark; (F.G.); (S.S.)
4. Aker SC, Faulkner KF, Innes-Brown H, Vatti M, Marozeau J. Some, but not all, cochlear implant users prefer music stimuli with congruent haptic stimulation. J Acoust Soc Am 2024; 155:3101-3117. PMID: 38722101; DOI: 10.1121/10.0025854.
Abstract
Cochlear implant (CI) users often report being unsatisfied by music listening through their hearing device. Vibrotactile stimulation could help alleviate those challenges. Previous research has shown that musical stimuli were given higher preference ratings by normal-hearing listeners when concurrent vibrotactile stimulation was congruent in intensity and timing with the corresponding auditory signal compared to incongruent stimulation. However, it is not known whether this is also the case for CI users. Therefore, in this experiment, we presented 18 CI users and 24 normal-hearing listeners with five melodies and five different audio-to-tactile maps. Each map varied the congruence between the audio and tactile signals related to intensity, fundamental frequency, and timing. Participants were asked to rate the maps from zero to 100, based on preference. It was shown that almost all normal-hearing listeners, as well as a subset of the CI users, preferred tactile stimulation that was congruent with the audio in intensity and timing. However, many CI users had no difference in preference between timing aligned and timing unaligned stimuli. The results provide evidence that vibrotactile music enjoyment enhancement could be a solution for some CI users; however, more research is needed to understand which CI users can benefit from it most.
Affiliation(s)
- Scott C Aker
- Music and CI Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 1165, Denmark
- Oticon A/S, Smørum, 2765, Denmark
- Hamish Innes-Brown
- Eriksholm Research Centre, Snekkersten, 3070, Denmark
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 1165, Denmark
- Jeremy Marozeau
- Music and CI Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 1165, Denmark
5. Siedenburg K, Bürgel M, Özgür E, Scheicht C, Töpken S. Vibrotactile enhancement of musical engagement. Sci Rep 2024; 14:7764. PMID: 38565622; PMCID: PMC10987628; DOI: 10.1038/s41598-024-57961-8.
Abstract
Sound is sensed by the ear but can also be felt on the skin, by means of vibrotactile stimulation. Only little research has addressed perceptual implications of vibrotactile stimulation in the realm of music. Here, we studied which perceptual dimensions of music listening are affected by vibrotactile stimulation and whether the spatial segregation of vibrations improves vibrotactile stimulation. Forty-one listeners were presented with vibrotactile stimuli via a chair's surfaces (left and right arm rests, back rest, seat) in addition to music presented over headphones. Vibrations for each surface were derived from individual tracks of the music (multi condition) or conjointly by a mono-rendering, in addition to incongruent and headphones-only conditions. Listeners evaluated unknown music from popular genres according to valence, arousal, groove, the feeling of being part of a live performance, the feeling of being part of the music, and liking. Results indicated that the multi- and mono vibration conditions robustly enhanced the nature of the musical experience compared to listening via headphones alone. Vibrotactile enhancement was strong in the latent dimension of 'musical engagement', encompassing the sense of being a part of the music, arousal, and groove. These findings highlight the potential of vibrotactile cues for creating intensive musical experiences.
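The two vibration conditions above differ only in how the chair's drive signals are derived: per-track rendering for each surface versus one mono rendering shared by all surfaces. The sketch below illustrates one plausible way to build both; the low-pass cutoff, filter order, and track-to-surface assignment are assumptions rather than the study's actual rendering chain.

```python
# Hypothetical sketch of the two vibration-rendering schemes described above:
# per-track ("multi") versus downmixed ("mono") drive signals for four chair
# surfaces. Cutoff frequency, filter order, and track assignment are
# assumptions, not the study's published parameters.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 44100          # audio sample rate (Hz), assumed
CUTOFF_HZ = 250.0   # upper edge of the vibrotactile band, assumed
SURFACES = ["left_armrest", "right_armrest", "backrest", "seat"]

def lowpass(x, cutoff=CUTOFF_HZ, fs=FS, order=4):
    """Restrict a signal to the vibrotactile frequency range."""
    sos = butter(order, cutoff, btype="lowpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def render_multi(stems):
    """Multi condition: each surface is driven by one low-passed track.
    `stems` maps surface name -> mono audio track (1-D array)."""
    return {surface: lowpass(stems[surface]) for surface in SURFACES}

def render_mono(stems):
    """Mono condition: every surface is driven by the same low-passed downmix."""
    drive = lowpass(np.sum([stems[s] for s in SURFACES], axis=0))
    return {surface: drive for surface in SURFACES}

# Example with synthetic tracks (e.g. drums, bass, vocals, guitar assigned
# to the four surfaces in the multi condition).
rng = np.random.default_rng(0)
stems = {s: rng.standard_normal(FS * 2) for s in SURFACES}
multi_drive = render_multi(stems)
mono_drive = render_mono(stems)
```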
Affiliation(s)
- Kai Siedenburg
- Graz University of Technology, Signal Processing and Speech Communication Laboratory, 8010, Graz, Austria.
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany.
- Michel Bürgel
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany
- Elif Özgür
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany
- Christoph Scheicht
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany
- Stephan Töpken
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, 26129, Oldenburg, Germany
6. Fletcher MD, Akis E, Verschuur CA, Perry SW. Improved tactile speech perception using audio-to-tactile sensory substitution with formant frequency focusing. Sci Rep 2024; 14:4889. PMID: 38418558; PMCID: PMC10901863; DOI: 10.1038/s41598-024-55429-3.
Abstract
Haptic hearing aids, which provide speech information through tactile stimulation, could substantially improve outcomes for both cochlear implant users and for those unable to access cochlear implants. Recent advances in wide-band haptic actuator technology have made new audio-to-tactile conversion strategies viable for wearable devices. One such strategy filters the audio into eight frequency bands, which are evenly distributed across the speech frequency range. The amplitude envelopes from the eight bands modulate the amplitudes of eight low-frequency tones, which are delivered through vibration to a single site on the wrist. This tactile vocoder strategy effectively transfers some phonemic information, but vowels and obstruent consonants are poorly portrayed. In 20 participants with normal touch perception, we tested (1) whether focusing the audio filters of the tactile vocoder more densely around the first and second formant frequencies improved tactile vowel discrimination, and (2) whether focusing filters at mid-to-high frequencies improved obstruent consonant discrimination. The obstruent-focused approach was found to be ineffective. However, the formant-focused approach improved vowel discrimination by 8%, without changing overall consonant discrimination. The formant-focused tactile vocoder strategy, which can readily be implemented in real time on a compact device, could substantially improve speech perception for haptic hearing aid users.
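The formant-focused variant of the tactile vocoder differs from the original strategy only in where the eight audio analysis bands are placed: instead of spreading them evenly across the speech range, more bands are packed around the first and second formant regions. The sketch below shows one way such band edges could be generated; the frequency ranges are illustrative assumptions, not the authors' published filter specifications.

```python
# Hypothetical sketch: generating band edges for an 8-channel tactile vocoder.
# "Even" spreads the bands logarithmically across the speech range; "formant-
# focused" clusters more bands around typical first/second formant regions.
# All frequency values are illustrative assumptions.
import numpy as np

def even_band_edges(n_bands=8, f_lo=50.0, f_hi=7000.0):
    """Log-spaced edges covering the speech frequency range."""
    return np.geomspace(f_lo, f_hi, n_bands + 1)

def formant_focused_band_edges(f_lo=50.0, f_hi=7000.0):
    """Eight bands with extra resolution around assumed F1 (~300-1000 Hz)
    and F2 (~1000-2500 Hz) regions, and coarser coverage elsewhere."""
    f1_edges = np.geomspace(300.0, 1000.0, 4)       # 3 bands across the F1 region
    f2_edges = np.geomspace(1000.0, 2500.0, 4)[1:]  # 3 bands across the F2 region
    return np.concatenate(([f_lo], f1_edges, f2_edges, [f_hi]))

print(np.round(even_band_edges()))
print(np.round(formant_focused_band_edges()))
```

The resulting edges would then feed the band-pass filter bank of a tactile vocoder such as the one sketched under reference 8 below.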
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK.
- Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK.
- Esma Akis
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Carl A Verschuur
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Samuel W Perry
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
7. Yüksel M, Sarlik E, Çiprut A. Emotions and Psychological Mechanisms of Listening to Music in Cochlear Implant Recipients. Ear Hear 2023; 44:1451-1463. PMID: 37280743; DOI: 10.1097/aud.0000000000001388.
Abstract
OBJECTIVES Music is a multidimensional phenomenon and is classified by its arousal properties, emotional quality, and structural characteristics. Although structural features of music (i.e., pitch, timbre, and tempo) and music emotion recognition in cochlear implant (CI) recipients are popular research topics, music-evoked emotions, and related psychological mechanisms that reflect both the individual and social context of music are largely ignored. Understanding the music-evoked emotions (the "what") and related mechanisms (the "why") can help professionals and CI recipients better comprehend the impact of music on CI recipients' daily lives. Therefore, the purpose of this study is to evaluate these aspects in CI recipients and compare their findings to those of normal hearing (NH) controls. DESIGN This study included 50 CI recipients with diverse auditory experiences who were prelingually deafened (deafened at or before 6 years of age)-early implanted (N = 21), prelingually deafened-late implanted (implanted at or after 12 years of age-N = 13), and postlingually deafened (N = 16) as well as 50 age-matched NH controls. All participants completed the same survey, which included 28 emotions and 10 mechanisms (Brainstem reflex, Rhythmic entrainment, Evaluative Conditioning, Contagion, Visual imagery, Episodic memory, Musical expectancy, Aesthetic judgment, Cognitive appraisal, and Lyrics). Data were presented in detail for CI groups and compared between CI groups and between CI and NH groups. RESULTS The principal component analysis showed five emotion factors that are explained by 63.4% of the total variance, including anxiety and anger, happiness and pride, sadness and pain, sympathy and tenderness, and serenity and satisfaction in the CI group. Positive emotions such as happiness, tranquility, love, joy, and trust ranked as most often experienced in all groups, whereas negative and complex emotions such as guilt, fear, anger, and anxiety ranked lowest. The CI group ranked lyrics and rhythmic entrainment highest in the emotion mechanism, and there was a statistically significant group difference in the episodic memory mechanism, in which the prelingually deafened, early implanted group scored the lowest. CONCLUSION Our findings indicate that music can evoke similar emotions in CI recipients with diverse auditory experiences as it does in NH individuals. However, prelingually deafened and early implanted individuals lack autobiographical memories associated with music, which affects the feelings evoked by music. In addition, the preference for rhythmic entrainment and lyrics as mechanisms of music-elicited emotions suggests that rehabilitation programs should pay particular attention to these cues.
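The five emotion factors reported above come from a principal component analysis of the 28 emotion ratings. As a hedged illustration of that analysis step only, the sketch below runs a PCA on a placeholder listeners-by-emotions rating matrix and reports the variance explained by the first five components; the random data simply stand in for the survey responses.

```python
# Hypothetical sketch: PCA over music-evoked emotion ratings. The rating matrix
# below is random placeholder data standing in for the 50 CI recipients x 28
# emotion items described in the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
ratings = rng.integers(1, 6, size=(50, 28)).astype(float)  # 50 listeners, 28 emotions

# Standardise the emotion items before extracting components.
z = StandardScaler().fit_transform(ratings)

pca = PCA(n_components=5)
scores = pca.fit_transform(z)

print("Variance explained per component:", np.round(pca.explained_variance_ratio_, 3))
print("Total variance explained by 5 components:",
      round(float(pca.explained_variance_ratio_.sum()), 3))

# Loadings (emotions x components) indicate which emotions group together,
# e.g. an 'anxiety and anger' factor versus a 'happiness and pride' factor.
loadings = pca.components_.T
```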
Affiliation(s)
- Mustafa Yüksel
- Ankara Medipol University School of Health Sciences, Department of Speech and Language Therapy, Ankara, Turkey
- Esra Sarlik
- Marmara University Institute of Health Sciences, Audiology and Speech Disorders Program, Istanbul, Turkey
- Ayça Çiprut
- Marmara University Faculty of Medicine, Department of Audiology, Istanbul, Turkey
8. Fletcher MD, Verschuur CA, Perry SW. Improving speech perception for hearing-impaired listeners using audio-to-tactile sensory substitution with multiple frequency channels. Sci Rep 2023; 13:13336. PMID: 37587166; PMCID: PMC10432540; DOI: 10.1038/s41598-023-40509-7.
Abstract
Cochlear implants (CIs) have revolutionised treatment of hearing loss, but large populations globally cannot access them either because of disorders that prevent implantation or because they are expensive and require specialist surgery. Recent technology developments mean that haptic aids, which transmit speech through vibration, could offer a viable low-cost, non-invasive alternative. One important development is that compact haptic actuators can now deliver intense stimulation across multiple frequencies. We explored whether these multiple frequency channels can transfer spectral information to improve tactile phoneme discrimination. To convert audio to vibration, the speech amplitude envelope was extracted from one or more audio frequency bands and used to amplitude modulate one or more vibro-tactile tones delivered to a single-site on the wrist. In 26 participants with normal touch sensitivity, tactile-only phoneme discrimination was assessed with one, four, or eight frequency bands. Compared to one frequency band, performance improved by 5.9% with four frequency bands and by 8.4% with eight frequency bands. The multi-band signal-processing approach can be implemented in real-time on a compact device, and the vibro-tactile tones can be reproduced by the latest compact, low-powered actuators. This approach could therefore readily be implemented in a low-cost haptic hearing aid to deliver real-world benefits.
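The audio-to-tactile conversion described here (band-limited speech envelopes amplitude-modulating low-frequency vibro-tactile tones delivered to a single wrist site) can be sketched as follows. This is a simplified illustration under assumed parameters (band edges, carrier tone frequencies, envelope smoothing), not the authors' published signal chain.

```python
# Hypothetical sketch of a multi-band audio-to-tactile vocoder: split the audio
# into N bands, extract each band's amplitude envelope, and use it to amplitude-
# modulate a low-frequency tone; all tones are summed into one wrist signal.
# Band edges, tone frequencies, and smoothing are assumptions for illustration.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000  # audio sample rate (Hz), assumed

def tactile_vocoder(audio, n_bands=8, f_lo=50.0, f_hi=7000.0,
                    tone_freqs=None, env_cutoff=30.0, fs=FS):
    """Convert an audio waveform to a single-site vibro-tactile waveform."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    if tone_freqs is None:
        # Low-frequency carriers within the skin's sensitive range (assumed).
        tone_freqs = np.linspace(60.0, 300.0, n_bands)
    t = np.arange(len(audio)) / fs
    env_sos = butter(2, env_cutoff, btype="lowpass", fs=fs, output="sos")

    tactile = np.zeros_like(audio, dtype=float)
    for i in range(n_bands):
        band_sos = butter(4, [edges[i], edges[i + 1]], btype="bandpass",
                          fs=fs, output="sos")
        band = sosfiltfilt(band_sos, audio)
        envelope = sosfiltfilt(env_sos, np.abs(band))  # rectify + smooth
        envelope = np.clip(envelope, 0.0, None)
        tactile += envelope * np.sin(2 * np.pi * tone_freqs[i] * t)
    return tactile / n_bands

# Example: convert one second of noise-like input with 1, 4, or 8 bands.
audio = np.random.default_rng(1).standard_normal(FS)
for n in (1, 4, 8):
    vib = tactile_vocoder(audio, n_bands=n)
```

Running the function with n_bands set to 1, 4, or 8 mirrors the three conditions compared in the study.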
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK.
- Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK.
- Carl A Verschuur
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Samuel W Perry
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton, SO17 1BJ, UK
9. Kral A, Sharma A. Crossmodal plasticity in hearing loss. Trends Neurosci 2023; 46:377-393. PMID: 36990952; PMCID: PMC10121905; DOI: 10.1016/j.tins.2023.02.004.
Abstract
Crossmodal plasticity is a textbook example of the ability of the brain to reorganize based on use. We review evidence from the auditory system showing that such reorganization has significant limits, is dependent on pre-existing circuitry and top-down interactions, and that extensive reorganization is often absent. We argue that the evidence does not support the hypothesis that crossmodal reorganization is responsible for closing critical periods in deafness, and crossmodal plasticity instead represents a neuronal process that is dynamically adaptable. We evaluate the evidence for crossmodal changes in both developmental and adult-onset deafness, which start as early as mild-moderate hearing loss and show reversibility when hearing is restored. Finally, crossmodal plasticity does not appear to affect the neuronal preconditions for successful hearing restoration. Given its dynamic and versatile nature, we describe how this plasticity can be exploited for improving clinical outcomes after neurosensory restoration.
Affiliation(s)
- Andrej Kral
- Institute of AudioNeuroTechnology and Department of Experimental Otology, Otolaryngology Clinics, Hannover Medical School, Hannover, Germany; Australian Hearing Hub, School of Medicine and Health Sciences, Macquarie University, Sydney, NSW, Australia
- Anu Sharma
- Department of Speech Language and Hearing Science, Center for Neuroscience, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, USA.
10. Flores Ramones A, del-Rio-Guerra MS. Recent Developments in Haptic Devices Designed for Hearing-Impaired People: A Literature Review. Sensors (Basel) 2023; 23:2968. PMID: 36991680; PMCID: PMC10055558; DOI: 10.3390/s23062968.
Abstract
Haptic devices transmit information to the user, using tactile stimuli to augment or replace sensory input. People with limited sensory abilities, such as vision or hearing can receive supplementary information by relying on them. This review analyses recent developments in haptic devices for deaf and hard-of-hearing individuals by extracting the most relevant information from each of the selected papers. The process of finding relevant literature is detailed using the PRISMA guidelines for literature reviews. In this review, the devices are categorized to better understand the review topic. The categorization results have highlighted several areas of future research into haptic devices for hearing-impaired users. We believe this review may be useful to researchers interested in haptic devices, assistive technologies, and human-computer interaction.
11. Aker SC, Innes-Brown H, Faulkner KF, Vatti M, Marozeau J. Effect of audio-tactile congruence on vibrotactile music enhancement. J Acoust Soc Am 2022; 152:3396. PMID: 36586853; DOI: 10.1121/10.0016444.
Abstract
Music listening experiences can be enhanced with tactile vibrations. However, it is not known which parameters of the tactile vibration must be congruent with the music to enhance it. Devices that aim to enhance music with tactile vibrations often require coding an acoustic signal into a congruent vibrotactile signal. Therefore, understanding which of these audio-tactile congruences are important is crucial. Participants were presented with a simple sine wave melody through supra-aural headphones and a haptic actuator held between the thumb and forefinger. Incongruent versions of the stimuli were made by randomizing physical parameters of the tactile stimulus independently of the auditory stimulus. Participants were instructed to rate the stimuli against the incongruent stimuli based on preference. It was found that making the intensity of the tactile stimulus incongruent with the intensity of the auditory stimulus, as well as misaligning the two modalities in time, had the biggest negative effect on ratings for the melody used. Future vibrotactile music enhancement devices can use time alignment and intensity congruence as a baseline coding strategy against which improved strategies can be tested.
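In this design, incongruent stimuli are created by randomizing one physical parameter of the tactile signal while leaving the audio untouched. The sketch below illustrates how congruent and incongruent tactile versions of a simple sine-wave melody could be generated; the note sequence, tactile carrier frequency, and jitter ranges are assumptions made for illustration.

```python
# Hypothetical sketch: deriving a congruent tactile stimulus from a simple
# sine-wave melody, then breaking congruence in intensity or timing.
# Note sequence, carrier frequency, and jitter ranges are assumptions.
import numpy as np

FS = 8000           # sample rate (Hz), assumed
TACTILE_F0 = 110.0  # fixed tactile carrier frequency (Hz), assumed

def render_melody(notes, note_dur=0.4, fs=FS):
    """Render (frequency, amplitude) note pairs as a sine-wave melody."""
    out = []
    for freq, amp in notes:
        t = np.arange(int(note_dur * fs)) / fs
        out.append(amp * np.sin(2 * np.pi * freq * t))
    return np.concatenate(out)

def congruent_tactile(notes, note_dur=0.4, fs=FS):
    """Tactile version: same timing and intensity, fixed low-frequency carrier."""
    return render_melody([(TACTILE_F0, amp) for _freq, amp in notes], note_dur, fs)

def incongruent_tactile(notes, mode, rng, note_dur=0.4, fs=FS):
    """Break one parameter's congruence while leaving the others intact."""
    if mode == "intensity":   # shuffle note amplitudes relative to the audio
        amps = rng.permutation([amp for _f, amp in notes])
        return render_melody([(TACTILE_F0, a) for a in amps], note_dur, fs)
    if mode == "timing":      # delay the whole tactile stream by a random offset
        sig = congruent_tactile(notes, note_dur, fs)
        shift = int(rng.uniform(0.1, 0.3) * fs)
        return np.concatenate([np.zeros(shift), sig])[: len(sig)]
    raise ValueError(mode)

rng = np.random.default_rng(3)
melody = [(440, 0.2), (494, 0.6), (523, 0.4), (587, 1.0)]  # (Hz, relative level)
audio = render_melody(melody)
tactile_match = congruent_tactile(melody)
tactile_intensity_mismatch = incongruent_tactile(melody, "intensity", rng)
tactile_timing_mismatch = incongruent_tactile(melody, "timing", rng)
```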
Affiliation(s)
- Scott C Aker
- Music and Cochlear Implant Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 2800, Denmark
- Jeremy Marozeau
- Music and Cochlear Implant Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, 2800, Denmark
12. Kawar K, Kishon-Rabin L, Segal O. Identification and Comprehension of Narrow Focus by Arabic-Speaking Adolescents With Moderate-to-Profound Hearing Loss. J Speech Lang Hear Res 2022; 65:2029-2046. PMID: 35472256; DOI: 10.1044/2022_jslhr-21-00296.
Abstract
PURPOSE Processing narrow focus (NF), the stressed word in the sentence, includes both the perceptual ability to identify the stressed word in the sentence and the pragmatic-semantic ability to comprehend the nonexplicit linguistic message. NF and its underlying meaning can be conveyed only via the auditory modality. Therefore, NF can be considered as a measure for assessing the efficacy of the hearing aid (HA) and cochlear implants (CIs) for acquiring nonexplicit language skills. The purpose of this study was to assess identification and comprehension of NF by HA and CI users who are native speakers of Arabic and to associate NF outcomes with speech perception and cognitive and linguistic abilities. METHOD A total of 46 adolescents (age range: 11;2-18;8) participated: 18 with moderate-to-severe hearing loss who used HAs, 10 with severe-to-profound hearing loss who used CIs, and 18 with typical hearing (TH). Test materials included the Arabic Narrow Focus Test (ANFT), which includes three subtests assessing identification (ANFT1), comprehension of NF in simple four-word sentences (ANFT2), and longer sentences with a construction list at the clause or noun phrase level (ANFT3). In addition, speech perception, vocabulary, and working memory were assessed. RESULTS All the participants successfully identified the word carrying NF, with no significant difference between the groups. Comprehension of NF in ANFT2 and ANFT3 was reduced for HA and CI users compared with TH peers, and speech perception, hearing status, and memory for digits predicted the variability in the overall results of ANFT1, ANFT2, and ANFT3, respectively. CONCLUSIONS Arabic speakers who used HAs or CIs were able to identify NF successfully, suggesting that the acoustic cues were perceptually available to them. However, HA and CI users had considerable difficulty in understanding NF. Different factors may contribute to this difficulty, including the memory load during the task as well as pragmatic-linguistic knowledge on the possible meanings of NF.
Affiliation(s)
- Khaloob Kawar
- Department of Special Education, Beit Berl College, Kfar Saba, Israel
- Department of Communication Disorders, Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Israel
- Liat Kishon-Rabin
- Department of Communication Disorders, Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Israel
- Osnat Segal
- Department of Communication Disorders, Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Israel
13. Fletcher MD. Can Haptic Stimulation Enhance Music Perception in Hearing-Impaired Listeners? Front Neurosci 2021; 15:723877. PMID: 34531717; PMCID: PMC8439542; DOI: 10.3389/fnins.2021.723877.
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring hearing in severely-to-profoundly hearing-impaired individuals. However, users often struggle to deconstruct complex auditory scenes with multiple simultaneous sounds, which can result in reduced music enjoyment and impaired speech understanding in background noise. Hearing aid users often have similar issues, though these are typically less acute. Several recent studies have shown that haptic stimulation can enhance CI listening by giving access to sound features that are poorly transmitted through the electrical CI signal. This “electro-haptic stimulation” improves melody recognition and pitch discrimination, as well as speech-in-noise performance and sound localization. The success of this approach suggests it could also enhance auditory perception in hearing-aid users and other hearing-impaired listeners. This review focuses on the use of haptic stimulation to enhance music perception in hearing-impaired listeners. Music is prevalent throughout everyday life, being critical to media such as film and video games, and often being central to events such as weddings and funerals. It represents the biggest challenge for signal processing, as it is typically an extremely complex acoustic signal, containing multiple simultaneous harmonic and inharmonic sounds. Signal-processing approaches developed for enhancing music perception could therefore have significant utility for other key issues faced by hearing-impaired listeners, such as understanding speech in noisy environments. This review first discusses the limits of music perception in hearing-impaired listeners and the limits of the tactile system. It then discusses the evidence around integration of audio and haptic stimulation in the brain. Next, the features, suitability, and success of current haptic devices for enhancing music perception are reviewed, as well as the signal-processing approaches that could be deployed in future haptic devices. Finally, the cutting-edge technologies that could be exploited for enhancing music perception with haptics are discussed. These include the latest micro motor and driver technology, low-power wireless technology, machine learning, big data, and cloud computing. New approaches for enhancing music perception in hearing-impaired listeners could substantially improve quality of life. Furthermore, effective haptic techniques for providing complex sound information could offer a non-invasive, affordable means for enhancing listening more broadly in hearing-impaired individuals.
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
- Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
14. Fletcher MD, Verschuur CA. Electro-Haptic Stimulation: A New Approach for Improving Cochlear-Implant Listening. Front Neurosci 2021; 15:581414. PMID: 34177440; PMCID: PMC8219940; DOI: 10.3389/fnins.2021.581414.
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring speech perception for severely to profoundly deaf individuals. Despite their success, several limitations remain, particularly in CI users' ability to understand speech in noisy environments, locate sound sources, and enjoy music. A new multimodal approach has been proposed that uses haptic stimulation to provide sound information that is poorly transmitted by the implant. This augmenting of the electrical CI signal with haptic stimulation (electro-haptic stimulation; EHS) has been shown to improve speech-in-noise performance and sound localization in CI users. There is also evidence that it could enhance music perception. We review the evidence of EHS enhancement of CI listening and discuss key areas where further research is required. These include understanding the neural basis of EHS enhancement, understanding the effectiveness of EHS across different clinical populations, and the optimization of signal-processing strategies. We also discuss the significant potential for a new generation of haptic neuroprosthetic devices to aid those who cannot access hearing-assistive technology, either because of biomedical or healthcare-access issues. While significant further research and development is required, we conclude that EHS represents a promising new approach that could, in the near future, offer a non-invasive, inexpensive means of substantially improving clinical outcomes for hearing-impaired individuals.
Affiliation(s)
- Mark D. Fletcher
- Faculty of Engineering and Physical Sciences, University of Southampton Auditory Implant Service, University of Southampton, Southampton, United Kingdom
- Faculty of Engineering and Physical Sciences, Institute of Sound and Vibration Research, University of Southampton, Southampton, United Kingdom
- Carl A. Verschuur
- Faculty of Engineering and Physical Sciences, University of Southampton Auditory Implant Service, University of Southampton, Southampton, United Kingdom
15. Fletcher MD, Zgheib J, Perry SW. Sensitivity to Haptic Sound-Localization Cues at Different Body Locations. Sensors (Basel) 2021; 21:3770. PMID: 34071729; PMCID: PMC8198414; DOI: 10.3390/s21113770.
Abstract
Cochlear implants (CIs) recover hearing in severely to profoundly hearing-impaired people by electrically stimulating the cochlea. While they are extremely effective, spatial hearing is typically severely limited. Recent studies have shown that haptic stimulation can supplement the electrical CI signal (electro-haptic stimulation) and substantially improve sound localization. In haptic sound-localization studies, the signal is extracted from the audio received by behind-the-ear devices and delivered to each wrist. Localization is achieved using tactile intensity differences (TIDs) across the wrists, which match sound intensity differences across the ears (a key sound localization cue). The current study established sensitivity to across-limb TIDs at three candidate locations for a wearable haptic device, namely: the lower tricep and the palmar and dorsal wrist. At all locations, TID sensitivity was similar to the sensitivity to across-ear intensity differences for normal-hearing listeners. This suggests that greater haptic sound-localization accuracy than previously shown can be achieved. The dynamic range was also measured and far exceeded that available through electrical CI stimulation for all of the locations, suggesting that haptic stimulation could provide additional sound-intensity information. These results indicate that an effective haptic aid could be deployed for any of the candidate locations, and could offer a low-cost, non-invasive means of improving outcomes for hearing-impaired listeners.
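In these haptic sound-localization studies, the level of the audio at each behind-the-ear device is mapped onto the vibration intensity at the corresponding wrist, so that across-ear intensity differences are reproduced as across-wrist tactile intensity differences (TIDs). The sketch below illustrates one such mapping; the input level range, tactile dynamic range, and calibration offset are assumptions rather than measured device parameters.

```python
# Hypothetical sketch: converting the audio level at each behind-the-ear
# device into a vibration level for a haptic device on each wrist, so that
# larger across-ear level differences produce larger across-wrist tactile
# intensity differences (TIDs). Level ranges and calibration are assumptions.
import numpy as np

AUDIO_RANGE_DB = (30.0, 90.0)    # assumed input sound-level range (dB SPL)
TACTILE_RANGE_DB = (0.0, 40.0)   # assumed usable tactile dynamic range (dB SL)

def rms_level_db(frame, eps=1e-12):
    """RMS level of one audio frame in dB (arbitrary reference)."""
    return 20.0 * np.log10(np.sqrt(np.mean(np.square(frame))) + eps)

def map_to_tactile(level_db):
    """Compress the assumed audio level range onto the tactile dynamic range."""
    a_lo, a_hi = AUDIO_RANGE_DB
    t_lo, t_hi = TACTILE_RANGE_DB
    frac = np.clip((level_db - a_lo) / (a_hi - a_lo), 0.0, 1.0)
    return t_lo + frac * (t_hi - t_lo)

def wrist_levels(left_frame, right_frame, calib_offset_db=60.0):
    """Return (left, right) tactile levels in dB SL for one stereo audio frame."""
    left_db = rms_level_db(left_frame) + calib_offset_db    # assumed calibration
    right_db = rms_level_db(right_frame) + calib_offset_db
    return map_to_tactile(left_db), map_to_tactile(right_db)

# Example: a source to the right produces a higher right-ear level, so the
# right wrist receives the stronger vibration.
rng = np.random.default_rng(7)
noise = rng.standard_normal(4096)
print(wrist_levels(0.5 * noise, 1.0 * noise))   # ~6 dB across-ear difference
```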
Affiliation(s)
- Mark D. Fletcher
- Faculty of Engineering and Physical Sciences, Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton SO17 1BJ, UK
- University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton SO17 1BJ, UK;
- Jana Zgheib
- University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton SO17 1BJ, UK;
- Samuel W. Perry
- Faculty of Engineering and Physical Sciences, Institute of Sound and Vibration Research, University of Southampton, University Road, Southampton SO17 1BJ, UK
- University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton SO17 1BJ, UK;
16.
Abstract
The impaired brain is often difficult to restore, owing to our limited knowledge of the complex nervous system. Accumulating knowledge in systems neuroscience, combined with the development of innovative technologies, may enable brain restoration in patients with nervous system disorders that are currently untreatable. The Neuroprosthetics in Systems Neuroscience and Medicine Collection provides a platform for interdisciplinary research in neuroprosthetics.
Affiliation(s)
- Kenji Kansaku
- Department of Physiology, Dokkyo Medical University School of Medicine, 880 Kitakobayashi, Mibu, Tochigi, 321-0293, Japan.
- Center for Neuroscience and Biomedical Engineering, The University of Electro-Communications, Tokyo, Japan.
17. Finite Element Modelling of Cochlear Electrode Arrays. Journal of Biomimetics, Biomaterials and Biomedical Engineering 2021. DOI: 10.4028/www.scientific.net/jbbbe.49.47.
Abstract
The implantation of cochlear electrode arrays is now standard practice as a result of improvements in medical surgery, equipment, and material properties. In this paper, finite element modelling (FEM) is used to characterize the mechanical properties of the electrode arrays. The results show good agreement between the finite element results and the experimental results, and no significant difference between the tapered and uniform electrode designs.
18.
Abstract
Hearing aid and cochlear implant (CI) users often struggle to locate and segregate sounds. The dominant sound-localisation cues are time and intensity differences across the ears. A recent study showed that CI users locate sounds substantially better when these cues are provided through haptic stimulation on each wrist. However, the sensitivity of the wrists to these cues and the robustness of this sensitivity to aging is unknown. The current study showed that time difference sensitivity is much poorer across the wrists than across the ears and declines with age. In contrast, high sensitivity to across-wrist intensity differences was found that was robust to aging. This high sensitivity was observed across a range of stimulation intensities for both amplitude modulated and unmodulated sinusoids and matched across-ear intensity difference sensitivity for normal-hearing individuals. Furthermore, the usable dynamic range for haptic stimulation on the wrists was found to be around four times larger than for CIs. These findings suggest that high-precision haptic sound-localisation can be achieved, which could aid many hearing-impaired listeners. Furthermore, the finding that high-fidelity across-wrist intensity information can be transferred could be exploited in human-machine interfaces to enhance virtual reality and improve remote control of military, medical, or research robots.
19. Fletcher MD. Using haptic stimulation to enhance auditory perception in hearing-impaired listeners. Expert Rev Med Devices 2020; 18:63-74. PMID: 33372550; DOI: 10.1080/17434440.2021.1863782.
Abstract
INTRODUCTION Hearing-assistive devices, such as hearing aids and cochlear implants, transform the lives of hearing-impaired people. However, users often struggle to locate and segregate sounds. This leads to impaired threat detection and an inability to understand speech in noisy environments. Recent evidence suggests that segregation and localization can be improved by providing missing sound-information through haptic stimulation. AREAS COVERED This article reviews the evidence that haptic stimulation can effectively provide sound information. It then discusses the research and development required for this approach to be implemented in a clinically viable device. This includes discussion of what sound information should be provided and how that information can be extracted and delivered. EXPERT OPINION Although this research area has only recently emerged, it builds on a significant body of work showing that sound information can be effectively transferred through haptic stimulation. Current evidence suggests that haptic stimulation is highly effective at providing missing sound-information to cochlear implant users. However, a great deal of work remains to implement this approach in an effective wearable device. If successful, such a device could offer an inexpensive, noninvasive means of improving educational, work, and social experiences for hearing-impaired individuals, including those without access to hearing-assistive devices.
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, Southampton, UK
- Institute of Sound and Vibration Research, University of Southampton, Southampton, UK
20. Fletcher MD, Zgheib J. Haptic sound-localisation for use in cochlear implant and hearing-aid users. Sci Rep 2020; 10:14171. PMID: 32843659; PMCID: PMC7447810; DOI: 10.1038/s41598-020-70379-2.
Abstract
Users of hearing-assistive devices often struggle to locate and segregate sounds, which can make listening in schools, cafes, and busy workplaces extremely challenging. A recent study in unilaterally implanted CI users showed that sound-localisation was improved when the audio received by behind-the-ear devices was converted to haptic stimulation on each wrist. We built on this work, using a new signal-processing approach to improve localisation accuracy and increase generalisability to a wide range of stimuli. We aimed to: (1) improve haptic sound-localisation accuracy using a varied stimulus set and (2) assess whether accuracy improved with prolonged training. Thirty-two adults with normal touch perception were randomly assigned to an experimental or control group. The experimental group completed a 5-h training regime and the control group were not trained. Without training, haptic sound-localisation was substantially better than in previous work on haptic sound-localisation. It was also markedly better than sound-localisation by either unilaterally or bilaterally implanted CI users. After training, accuracy improved, becoming better than for sound-localisation by bilateral hearing-aid users. These findings suggest that a wrist-worn haptic device could be effective for improving spatial hearing for a range of hearing-impaired listeners.
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK.
- Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton, SO17 1BJ, UK.
- Jana Zgheib
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK
21. Fletcher MD, Song H, Perry SW. Electro-haptic stimulation enhances speech recognition in spatially separated noise for cochlear implant users. Sci Rep 2020; 10:12723. PMID: 32728109; PMCID: PMC7391652; DOI: 10.1038/s41598-020-69697-2.
Abstract
Hundreds of thousands of profoundly hearing-impaired people perceive sounds through electrical stimulation of the auditory nerve using a cochlear implant (CI). However, CI users are often poor at understanding speech in noisy environments and separating sounds that come from different locations. We provided missing speech and spatial hearing cues through haptic stimulation to augment the electrical CI signal. After just 30 min of training, we found this “electro-haptic” stimulation substantially improved speech recognition in multi-talker noise when the speech and noise came from different locations. Our haptic stimulus was delivered to the wrists at an intensity that can be produced by a compact, low-cost, wearable device. These findings represent a significant step towards the production of a non-invasive neuroprosthetic that can improve CI users’ ability to understand speech in realistic noisy environments.
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK.
- Haoheng Song
- Faculty of Engineering and Physical Sciences, University of Southampton, University Road, Southampton, SO17 1BJ, UK
- Samuel W Perry
- University of Southampton Auditory Implant Service, University of Southampton, University Road, Southampton, SO17 1BJ, UK