1. Shahin AJ, Gonzales MG, Dimitrijevic A. Cross-Modal Tinnitus Remediation: A Tentative Theoretical Framework. Brain Sci 2024; 14:95. PMID: 38275515; PMCID: PMC10813772; DOI: 10.3390/brainsci14010095.
Abstract
Tinnitus is a prevalent hearing-loss deficit that manifests as a phantom sound (internally generated by the brain), heard as a high-frequency tone by the majority of afflicted persons. Chronic tinnitus is debilitating, leading to distress, sleep deprivation, anxiety, and even suicidal thoughts. It has been theorized that, in most afflicted persons, tinnitus can be attributed to the loss of high-frequency input from the cochlea to the auditory cortex, known as deafferentation. Deafferentation due to hearing loss develops with aging and progressively causes the tonotopic regions that coded for the lost high frequencies to synchronize, leading to a phantom high-frequency sound sensation. Approaches to tinnitus remediation that have shown promise include inhibitory drugs, tinnitus-specific frequency notching to increase lateral inhibition onto the deafferented neurons, and multisensory approaches (auditory-motor and audiovisual) that work by coupling multisensory stimulation to the deafferented neural populations. The goal of this review is to put forward a theoretical framework for a multisensory approach to remedying tinnitus. Our framework posits that, because vision exerts a modulatory (inhibitory and excitatory) influence on the auditory pathway, prolonged engagement in audiovisual activity, especially during daily discourse, as opposed to auditory-only activity/discourse, can progressively reorganize deafferented neural populations, reducing the synchrony of the deafferented neurons and the severity of tinnitus over time.
Affiliation(s)
- Antoine J. Shahin
- Department of Cognitive and Information Sciences, University of California, Merced, CA 95343, USA;
- Health Science Research Institute, University of California, Merced, CA 95343, USA
- Mariel G. Gonzales
- Department of Cognitive and Information Sciences, University of California, Merced, CA 95343, USA;
- Andrew Dimitrijevic
- Sunnybrook Research Institute, University of Toronto, Toronto, ON M4N 3M5, Canada;
2. Santoyo AE, Gonzales MG, Iqbal ZJ, Backer KC, Balasubramaniam R, Bortfeld H, Shahin AJ. Neurophysiological time course of timbre-induced music-like perception. J Neurophysiol 2023. PMID: 37377190; PMCID: PMC10396220; DOI: 10.1152/jn.00042.2023.
Abstract
Traditionally, pitch variation in a sound stream has been integral to music identity. We attempt to expand music's definition by demonstrating that the neural code for musicality is independent of pitch encoding. That is, pitchless sound streams can still induce music-like perception and a neurophysiological hierarchy similar to that of pitched melodies. Previous work reported that neural processing of sounds with no-pitch, fixed-pitch, and irregular-pitch (melodic) patterns exhibits a right-lateralized hierarchical shift, with pitchless sounds preferentially processed in Heschl's gyrus, ascending laterally to non-primary auditory areas for fixed-pitch patterns and even more laterally for melodic patterns. The objective of this EEG study was to assess whether sound encoding maintains a similar hierarchical profile when musical perception is driven by timbre irregularities in the absence of pitch changes. Individuals listened to repetitions of three musical and three non-musical sound streams. The non-musical streams comprised seven 200-ms segments of white, pink, or brown noise, separated by silent gaps. Musical streams were created similarly, but with all three noise types combined in a unique order within each stream to induce timbre variations and music-like perception. Subjects classified the sound streams as musical or non-musical. Musical processing exhibited right-dominant alpha power enhancement, followed by a lateralized increase in theta phase-locking and spectral power. The theta phase-locking was stronger in musicians than in non-musicians. The lateralization of activity suggests higher-level auditory processing. Our findings validate the existence of a hierarchical shift, traditionally observed with pitched-melodic perception, underscoring that musicality can be achieved with timbre irregularities alone.
Affiliation(s)
- Alejandra E Santoyo
- Department of Cognitive and Information Sciences, University of California, Merced, Merced, CA, United States
- Mariel G Gonzales
- Department of Cognitive and Information Sciences, University of California, Merced, Merced, CA, United States
- Zunaira J Iqbal
- Department of Cognitive and Information Sciences, University of California, Merced, Merced, CA, United States
- Kristina C Backer
- Department of Cognitive and Information Sciences, University of California, Merced, Merced, CA, United States
- Health Science Research Institute, University of California, Merced, Merced, CA, United States
- Ramesh Balasubramaniam
- Department of Cognitive and Information Sciences, University of California, Merced, Merced, CA, United States
- Health Science Research Institute, University of California, Merced, Merced, CA, United States
- Heather Bortfeld
- Department of Cognitive and Information Sciences, University of California, Merced, Merced, CA, United States
- Health Science Research Institute, University of California, Merced, Merced, CA, United States
- Department of Psychology, University of California, Merced, Merced, CA, United States
- Antoine J Shahin
- Department of Cognitive and Information Sciences, University of California, Merced, Merced, CA, United States
- Health Science Research Institute, University of California, Merced, Merced, CA, United States
3. Gonzales MG, Backer KC, Yan Y, Miller LM, Bortfeld H, Shahin AJ. Audition controls the flow of visual time during multisensory perception. iScience 2022; 25:104671. PMID: 35845168; PMCID: PMC9283509; DOI: 10.1016/j.isci.2022.104671.
Abstract
Previous work addressing the influence of audition on visual perception has mainly been assessed using non-speech stimuli. Herein, we introduce the Audiovisual Time-Flow Illusion in spoken language, underscoring the role of audition in multisensory processing. When brief pauses were inserted into or brief portions were removed from an acoustic speech stream, individuals perceived the corresponding visual speech as "pausing" or "skipping", respectively, even though the visual stimulus was intact. When the stimulus manipulation was reversed (brief pauses were inserted into, or brief portions were removed from, the visual speech stream), individuals failed to perceive the illusion in the corresponding intact auditory stream. Our findings demonstrate that in the context of spoken language, people continually realign the pace of their visual perception based on that of the auditory input. In short, the auditory modality sets the pace of the visual modality during audiovisual speech processing.
Highlights
- We describe the significance of the Audiovisual Time-Flow Illusion
- Temporal perturbations to auditory speech drive perception of visual speech
- Perturbing visual speech stimuli, however, does not affect auditory perception
- Auditory processing controls the temporal perception of the visual speech stream
4. Gonzales MG, Backer KC, Mandujano B, Shahin AJ. Rethinking the Mechanisms Underlying the McGurk Illusion. Front Hum Neurosci 2021; 15:616049. PMID: 33867954; PMCID: PMC8046930; DOI: 10.3389/fnhum.2021.616049.
Abstract
The McGurk illusion occurs when listeners hear an illusory percept (e.g., "da") resulting from mismatched pairings of audiovisual (AV) speech stimuli (e.g., auditory /ba/ paired with visual /ga/). Hearing a third percept, distinct from both the auditory and visual input, has been used as evidence of AV fusion. We examined whether the McGurk illusion is instead driven by visual dominance, whereby the third percept, e.g., "da", represents a default percept for visemes with an ambiguous place of articulation (POA), like /ga/. Participants watched videos of a talker uttering various consonant vowels (CVs) with (AV) and without (V-only) audio of /ba/. Individuals transcribed the CV they saw (V-only) or heard (AV). In the V-only condition, individuals predominantly saw "da"/"ta" when viewing CVs with indiscernible POAs. Likewise, in the AV condition, upon perceiving an illusion, they predominantly heard "da"/"ta" for CVs with indiscernible POAs. The illusion was stronger in individuals who exhibited weak /ba/ auditory encoding (examined using a control auditory-only task). In Experiment 2, we attempted to replicate these findings using stimuli recorded from a different talker. The V-only results were not replicated, but again individuals predominantly heard "da"/"ta"/"tha" as an illusory percept for various AV combinations, and the illusion was stronger in individuals who exhibited weak /ba/ auditory encoding. These results demonstrate that when visual CVs with indiscernible POAs are paired with a weakly encoded auditory /ba/, listeners default to hearing "da"/"ta"/"tha", thus tempering the AV fusion account and favoring a default mechanism triggered when both AV stimuli are ambiguous.
Affiliation(s)
- Mariel G. Gonzales
- Department of Cognitive and Information Sciences, University of California, Merced, Merced, CA, United States
- Kristina C. Backer
- Department of Cognitive and Information Sciences, University of California, Merced, Merced, CA, United States
- Brenna Mandujano
- Department of Psychology, California State University, Fresno, Fresno, CA, United States
- Antoine J. Shahin
- Department of Cognitive and Information Sciences, University of California, Merced, Merced, CA, United States
5. Ohbayashi H, Endo T, Mihaesco E, Gonzales MG, Kochibe N, Kobata A. Structural studies of the asparagine-linked sugar chains of two immunoglobulin M's purified from a patient with Waldenström's macroglobulinemia. Arch Biochem Biophys 1989; 269:463-75. PMID: 2493215; DOI: 10.1016/0003-9861(89)90130-6.
Abstract
The structures of the sugar chains present in two human monoclonal IgM molecules purified from the serum of a patient with Waldenström's macroglobulinemia have been determined. The asparagine-linked sugar chains were liberated as oligosaccharides by hydrazinolysis and labeled by reduction with NaB3H4 after N-acetylation. Their structures were studied by serial lectin column chromatography and sequential exoglycosidase digestion in combination with methylation analysis. These two IgM's were shown to contain almost the same sugar chains. The sugar chains were a mixture of a series of high-mannose-type and biantennary complex-type oligosaccharides. The complex-type oligosaccharides contain Manα1→6(±GlcNAcβ1→4)(Manα1→3)Manβ1→4GlcNAcβ1→4(Fucα1→6)GlcNAc as their core, and GlcNAcβ1→, Galβ1→4GlcNAcβ1→, and Neu5Acα2→6Galβ1→4GlcNAcβ1→ groups in their outer chain moieties.
Affiliation(s)
- H Ohbayashi
- Department of Biochemistry, University of Tokyo, Japan