1. Osorio S, Assaneo MF. Anatomically distinct cortical tracking of music and speech by slow (1-8Hz) and fast (70-120Hz) oscillatory activity. PLoS One 2025;20:e0320519. PMID: 40341725; PMCID: PMC12061428; DOI: 10.1371/journal.pone.0320519.
Abstract
Music and speech encode hierarchically organized structural complexity at the service of human expressiveness and communication. Previous research has shown that populations of neurons in auditory regions track the envelope of acoustic signals within the range of slow and fast oscillatory activity. However, the extent to which cortical tracking is influenced by the interplay between stimulus type, frequency band, and brain anatomy remains an open question. In this study, we reanalyzed intracranial recordings from thirty subjects implanted with electrocorticography (ECoG) grids in the left cerebral hemisphere, drawn from an existing open-access ECoG database. Participants passively watched a movie where visual scenes were accompanied by either music or speech stimuli. Cross-correlation between brain activity and the envelope of music and speech signals, along with density-based clustering analyses and linear mixed-effects modeling, revealed both anatomically overlapping and functionally distinct mapping of the tracking effect as a function of stimulus type and frequency band. We observed widespread left-hemisphere tracking of music and speech signals in the Slow Frequency Band (SFB, the band-pass filtered low-frequency signal between 1 and 8 Hz), with near-zero temporal lags. In contrast, cortical tracking in the High Frequency Band (HFB, the envelope of the 70-120 Hz band-pass filtered signal) was higher during speech perception, was more densely concentrated in classical language processing areas, and showed a frontal-to-temporal gradient in lag values that was not observed during perception of musical stimuli. Our results highlight a complex interaction between cortical region and frequency band that shapes temporal dynamics during processing of naturalistic music and speech signals.
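The core tracking measure described in this abstract, cross-correlating band-limited neural activity with the acoustic envelope across a range of temporal lags, can be illustrated with a minimal sketch. The code below assumes synthetic single-channel data, simple Butterworth filtering, and arbitrary parameters (sampling rate, lag window); it is not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500  # sampling rate in Hz (assumed)

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def lagged_xcorr(neural, stim, fs, max_lag_s=0.5):
    """Pearson correlation between neural and stimulus signals at each lag."""
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.empty(lags.size)
    for i, lag in enumerate(lags):
        if lag >= 0:
            a, b_ = neural[lag:], stim[:stim.size - lag]
        else:
            a, b_ = neural[:lag], stim[-lag:]
        r[i] = np.corrcoef(a, b_)[0, 1]
    return lags / fs, r

# Synthetic example data: one ECoG channel tracking an acoustic envelope
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / fs)
envelope = np.abs(np.sin(2 * np.pi * 4 * t)) + 0.1 * rng.standard_normal(t.size)
ecog = np.roll(envelope, 25) + rng.standard_normal(t.size)  # delayed, noisy copy

# Slow Frequency Band: 1-8 Hz band-pass filtered signal
sfb = bandpass(ecog, 1, 8, fs)
# High Frequency Band: envelope of the 70-120 Hz band-pass filtered signal
hfb = np.abs(hilbert(bandpass(ecog, 70, 120, fs)))

lags_s, r_sfb = lagged_xcorr(sfb, envelope, fs)
_, r_hfb = lagged_xcorr(hfb, envelope, fs)
print("SFB peak lag (s):", lags_s[np.argmax(np.abs(r_sfb))])
print("HFB peak lag (s):", lags_s[np.argmax(np.abs(r_hfb))])
```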
Affiliation(s)
- Sergio Osorio
- Department of Neurology, Harvard Medical School, Massachusetts General Hospital, Boston, Massachusetts, United States of America
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, Massachusetts, United States of America
2. Chutia P, Tripathi SM, Jv A. The digitalization of psychopathology: 'TV sign' and 'Smartphone sign' as red flags for dementia. Neurocase 2025:1-5. PMID: 39982201; DOI: 10.1080/13554794.2025.2467925.
Abstract
This case series elucidates pathological signs relevant to diagnosis in two patients with dementia. The first case highlights the 'Smartphone sign', a novel psychopathological sign described by analogy with the existing 'TV sign', a rare type of delusional misidentification syndrome (DMS). The second case had symptoms consistent with the 'TV sign'. The possible underlying cause of these signs was hypothesized on the basis of psychopathology, brain region, sensory system, cognition, and environmental factors. Moreover, treatment with low doses of risperidone and escitalopram produced promising cognitive and behavioral outcomes, paving the way for the treatment of other DMS presentations.
Affiliation(s)
- Porimita Chutia
- Department of Geriatric Mental Health, King George's Medical University, Lucknow, India
- Shailendra Mohan Tripathi
- Department of Geriatric Mental Health, King George's Medical University, Lucknow, India
- Institute of Medical Sciences, University of Aberdeen, Foresterhill, Aberdeen, UK
- Ashwin Jv
- Department of Geriatric Mental Health, King George's Medical University, Lucknow, India
3. Ma D, Wang L, Liu S, Ma X, Jia F, Hua Y, Liao Y, Qu H. Brain anatomy differences in Chinese children who stutter: a preliminary study. Front Neurol 2025;16:1483157. PMID: 39931552; PMCID: PMC11807804; DOI: 10.3389/fneur.2025.1483157.
Abstract
Background and purpose: The neural mechanisms of developmental stuttering (DS) remain unknown. The aim of this study was to investigate changes in brain structural morphology in Chinese children who stutter. Methods: A case-control study was conducted to collect magnetic resonance imaging data from stuttering and non-stuttering children, and whole-brain gray matter volume and cortical morphological changes in stuttering children were analyzed. Results: A total of 108 subjects were recruited (stuttering group : control group = 1:1). Compared with healthy controls, stuttering children showed significantly decreased gray matter volume in the right temporal gyrus and bilateral cerebellum. Additionally, there was a significant reduction in cortical folding in the right insula and right superior temporal gyrus. Moreover, the gray matter volume of the right cerebellum and right temporal gyrus was related to stuttering severity scores. Conclusion: The present study proposes that the neural mechanisms underlying DS are intricately linked to the cortico-basal ganglia-thalamo-cortical loop and the dorsal language pathway. This finding is expected to provide reference value for the clinical treatment of DS.
Affiliation(s)
- Dan Ma
- Department of Rehabilitation Medicine, West China Second University Hospital, Sichuan University, Chengdu, Sichuan, China
- Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu, Sichuan, China
- Lingling Wang
- Department of Rehabilitation Medicine, West China Second University Hospital, Sichuan University, Chengdu, Sichuan, China
- Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu, Sichuan, China
- Sai Liu
- Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu, Sichuan, China
- Department of Radiology, West China Second University Hospital, Sichuan University, Chengdu, China
- XinMao Ma
- Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu, Sichuan, China
- Department of Radiology, West China Second University Hospital, Sichuan University, Chengdu, China
- Fenglin Jia
- Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu, Sichuan, China
- Department of Radiology, West China Second University Hospital, Sichuan University, Chengdu, China
- Yimin Hua
- Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu, Sichuan, China
- Department of Pediatrics, West China Second University Hospital, Sichuan University, Chengdu, Sichuan, China
- Yi Liao
- Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu, Sichuan, China
- Department of Radiology, West China Second University Hospital, Sichuan University, Chengdu, China
- Haibo Qu
- Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu, Sichuan, China
- Department of Radiology, West China Second University Hospital, Sichuan University, Chengdu, China
4. Wang W, Zhou E, Nie Z, Deng Z, Gong Q, Ma S, Kang L, Yao L, Cheng J, Liu Z. Exploring mechanisms of anhedonia in depression through neuroimaging and data-driven approaches. J Affect Disord 2024;363:409-419. PMID: 39038623; DOI: 10.1016/j.jad.2024.07.133.
Abstract
BACKGROUND: Anhedonia is a core symptom of depression that is closely related to prognosis and treatment outcomes. However, accurate and efficient treatments for anhedonia are lacking, mandating a deeper understanding of the underlying mechanisms. METHODS: A total of 303 patients diagnosed with depression and anhedonia were assessed with the Snaith-Hamilton Pleasure Scale (SHAPS) and magnetic resonance imaging (MRI). The patients were categorized into a low-anhedonia group and a high-anhedonia group using the K-means algorithm. A data-driven approach implemented in MATLAB was used to explore differences in brain structure and function across degrees of anhedonia. A random forest model was used exploratorily to test how well differences in brain structure and function predict anhedonia in depression. RESULTS: Structural and functional differences were apparent in several brain regions of patients with depression and high-level anhedonia, including the temporal lobe, paracingulate gyrus, superior frontal gyrus, inferior occipital gyrus, right insular gyrus, and superior parietal lobule. Changes in these brain regions were significantly correlated with SHAPS scores. CONCLUSIONS: These brain regions may be useful as biomarkers that provide a more objective assessment of anhedonia in depression, laying the foundation for precision medicine in this treatment-resistant group with relatively poor prognosis.
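A minimal sketch of the two data-driven steps named in this abstract, K-means to split patients by SHAPS score and an exploratory random forest predicting anhedonia group from brain measures, is shown below using scikit-learn. The data, feature dimensions, and parameters are simulated assumptions, not the study's actual variables.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_patients = 303

# Simulated SHAPS total scores and brain measures (e.g., regional volume / ALFF values)
shaps = rng.integers(14, 57, size=n_patients).astype(float)
brain_features = rng.standard_normal((n_patients, 20)) + 0.05 * shaps[:, None]

# Step 1: split patients into low- vs high-anhedonia groups with K-means (k = 2)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(shaps.reshape(-1, 1))
high_label = int(np.argmax(km.cluster_centers_.ravel()))  # cluster with higher SHAPS
group = (km.labels_ == high_label).astype(int)

# Step 2: exploratory random forest predicting anhedonia group from brain measures
rf = RandomForestClassifier(n_estimators=500, random_state=0)
acc = cross_val_score(rf, brain_features, group, cv=5, scoring="accuracy")
print(f"Cross-validated accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```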
Affiliation(s)
- Wei Wang
- Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan, China
- Enqi Zhou
- Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan, China
- Zhaowen Nie
- Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan, China
- Zipeng Deng
- Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan, China
- Qian Gong
- Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan, China
- Simeng Ma
- Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan, China
- Lijun Kang
- Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan, China
- Lihua Yao
- Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan, China
- Jing Cheng
- Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan, China
- Zhongchun Liu
- Department of Psychiatry, Renmin Hospital of Wuhan University, Wuhan, China; Taikang Center for Life and Medical Sciences, Wuhan University, Wuhan, China.
5. Karthik G, Cao CZ, Demidenko MI, Jahn A, Stacey WC, Wasade VS, Brang D. Auditory cortex encodes lipreading information through spatially distributed activity. Curr Biol 2024;34:4021-4032.e5. PMID: 39153482; PMCID: PMC11387126; DOI: 10.1016/j.cub.2024.07.073.
Abstract
Watching a speaker's face improves speech perception accuracy. This benefit is enabled, in part, by implicit lipreading abilities present in the general population. While it is established that lipreading can alter the perception of a heard word, it is unknown how these visual signals are represented in the auditory system or how they interact with auditory speech representations. One influential, but untested, hypothesis is that visual speech modulates the population-coded representations of phonetic and phonemic features in the auditory system. This model is largely supported by data showing that silent lipreading evokes activity in the auditory cortex, but these activations could alternatively reflect general effects of arousal or attention or the encoding of non-linguistic features such as visual timing information. This gap limits our understanding of how vision supports speech perception. To test the hypothesis that the auditory system encodes visual speech information, we acquired functional magnetic resonance imaging (fMRI) data from healthy adults and intracranial recordings from electrodes implanted in patients with epilepsy during auditory and visual speech perception tasks. Across both datasets, linear classifiers successfully decoded the identity of silently lipread words using the spatial pattern of auditory cortex responses. Examining the time course of classification using intracranial recordings, lipread words were classified at earlier time points relative to heard words, suggesting a predictive mechanism for facilitating speech. These results support a model in which the auditory system combines the joint neural distributions evoked by heard and lipread words to generate a more precise estimate of what was said.
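The decoding approach described here, a linear classifier applied to the spatial pattern of auditory cortex responses to identify silently lipread words, can be sketched roughly as follows. Trial counts, feature dimensions, and labels are simulated assumptions rather than the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Simulated trials: one response pattern per trial (electrodes or voxels in auditory cortex),
# with labels coding the identity of the silently lipread word
n_trials, n_features, n_words = 120, 40, 4
labels = np.repeat(np.arange(n_words), n_trials // n_words)
patterns = rng.standard_normal((n_trials, n_features))
patterns += 0.6 * rng.standard_normal((n_words, n_features))[labels]  # word-specific signal

# Cross-validated linear classification of word identity from the spatial pattern
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc = cross_val_score(clf, patterns, labels, cv=cv)
print(f"Decoding accuracy: {acc.mean():.2f} (chance = {1 / n_words:.2f})")
```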
Affiliation(s)
- Ganesan Karthik
- Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA
- Cody Zhewei Cao
- Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA
- Andrew Jahn
- Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA
- William C Stacey
- Department of Neurology, University of Michigan, Ann Arbor, MI 48109, USA
- Vibhangini S Wasade
- Henry Ford Hospital, Detroit, MI 48202, USA; Department of Neurology, Wayne State University School of Medicine, Detroit, MI 48201, USA
- David Brang
- Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA.
6. Van Engen KJ, Dey A, Sommers MS, Peelle JE. Audiovisual speech perception: Moving beyond McGurk. J Acoust Soc Am 2022;152:3216. PMID: 36586857; PMCID: PMC9894660; DOI: 10.1121/10.0015262.
Abstract
Although it is clear that sighted listeners use both auditory and visual cues during speech perception, the manner in which multisensory information is combined is a matter of debate. One approach to measuring multisensory integration is to use variants of the McGurk illusion, in which discrepant auditory and visual cues produce auditory percepts that differ from those based on unimodal input. Not all listeners show the same degree of susceptibility to the McGurk illusion, and these individual differences are frequently used as a measure of audiovisual integration ability. However, despite their popularity, we join the voices of others in the field to argue that McGurk tasks are ill-suited for studying real-life multisensory speech perception: McGurk stimuli are often based on isolated syllables (which are rare in conversations) and necessarily rely on audiovisual incongruence that does not occur naturally. Furthermore, recent data show that susceptibility to McGurk tasks does not correlate with performance during natural audiovisual speech perception. Although the McGurk effect is a fascinating illusion, truly understanding the combined use of auditory and visual information during speech perception requires tasks that more closely resemble everyday communication: namely, words, sentences, and narratives with congruent auditory and visual speech cues.
Affiliation(s)
- Kristin J Van Engen
- Department of Psychological and Brain Sciences, Washington University, St. Louis, Missouri 63130, USA
- Avanti Dey
- PLOS ONE, 1265 Battery Street, San Francisco, California 94111, USA
- Mitchell S Sommers
- Department of Psychological and Brain Sciences, Washington University, St. Louis, Missouri 63130, USA
- Jonathan E Peelle
- Department of Otolaryngology, Washington University, St. Louis, Missouri 63130, USA
7. Aller M, Økland HS, MacGregor LJ, Blank H, Davis MH. Differential Auditory and Visual Phase-Locking Are Observed during Audio-Visual Benefit and Silent Lip-Reading for Speech Perception. J Neurosci 2022;42:6108-6120. PMID: 35760528; PMCID: PMC9351641; DOI: 10.1523/jneurosci.2476-21.2022.
Abstract
Speech perception in noisy environments is enhanced by seeing facial movements of communication partners. However, the neural mechanisms by which audio and visual speech are combined are not fully understood. We explored phase-locking to auditory and visual signals in MEG recordings from 14 human participants (6 females, 8 males) who reported words from single spoken sentences. We manipulated the acoustic clarity and visual speech signals such that critical speech information was present in auditory, visual, or both modalities. MEG coherence analysis revealed that both auditory and visual speech envelopes (auditory amplitude modulations and lip aperture changes) were phase-locked to 2-6 Hz brain responses in auditory and visual cortex, consistent with entrainment to syllable-rate components. Partial coherence analysis was used to separate neural responses to correlated audio-visual signals and showed non-zero phase-locking to auditory envelope in occipital cortex during audio-visual (AV) speech. Furthermore, phase-locking to auditory signals in visual cortex was enhanced for AV speech compared with audio-only speech that was matched for intelligibility. Conversely, auditory regions of the superior temporal gyrus did not show above-chance partial coherence with visual speech signals during AV conditions but did show partial coherence in visual-only conditions. Hence, visual speech enabled stronger phase-locking to auditory signals in visual areas, whereas phase-locking of visual speech in auditory regions only occurred during silent lip-reading. Differences in these cross-modal interactions between auditory and visual speech signals are interpreted in line with cross-modal predictive mechanisms during speech perception. SIGNIFICANCE STATEMENT: Verbal communication in noisy environments is challenging, especially for hearing-impaired individuals. Seeing facial movements of communication partners improves speech perception when auditory signals are degraded or absent. The neural mechanisms supporting lip-reading or audio-visual benefit are not fully understood. Using MEG recordings and partial coherence analysis, we show that speech information is used differently in brain regions that respond to auditory and visual speech. While visual areas use visual speech to improve phase-locking to auditory speech signals, auditory areas do not show phase-locking to visual speech unless auditory speech is absent and visual speech is used to substitute for missing auditory signals. These findings highlight brain processes that combine visual and auditory signals to support speech understanding.
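Coherence and partial coherence, the key measures in this abstract, can be sketched with standard spectral estimators: partial coherence removes the part of the cross-spectrum between the brain signal and one stimulus signal that is explained by the other, correlated stimulus signal. The sampling rate, signal construction, and frequency band below are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import csd, welch

fs = 250  # sampling rate in Hz (assumed)

def coherence_and_partial(x, y, z, fs, nperseg=512):
    """Coherence between x and y, and partial coherence controlling for z."""
    f, Sxy = csd(x, y, fs=fs, nperseg=nperseg)
    _, Sxz = csd(x, z, fs=fs, nperseg=nperseg)
    _, Szy = csd(z, y, fs=fs, nperseg=nperseg)
    _, Sxx = welch(x, fs=fs, nperseg=nperseg)
    _, Syy = welch(y, fs=fs, nperseg=nperseg)
    _, Szz = welch(z, fs=fs, nperseg=nperseg)

    coh = np.abs(Sxy) ** 2 / (Sxx * Syy)

    # Partial spectra after removing the contribution of z
    Sxy_z = Sxy - Sxz * Szy / Szz
    Sxx_z = Sxx - np.abs(Sxz) ** 2 / Szz
    Syy_z = Syy - np.abs(Szy) ** 2 / Szz
    pcoh = np.abs(Sxy_z) ** 2 / (Sxx_z * Syy_z)
    return f, coh, pcoh

# Synthetic example: MEG signal (x), auditory envelope (y), lip aperture (z)
rng = np.random.default_rng(0)
n = fs * 120
z = rng.standard_normal(n)
y = 0.7 * z + 0.5 * rng.standard_normal(n)  # audio envelope correlated with lip aperture
x = 0.8 * y + rng.standard_normal(n)        # brain signal driven by the audio envelope

f, coh, pcoh = coherence_and_partial(x, y, z, fs)
band = (f >= 2) & (f <= 6)
print("2-6 Hz coherence:", coh[band].mean(), "partial coherence:", pcoh[band].mean())
```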
Affiliation(s)
- Máté Aller
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, United Kingdom
- Heidi Solberg Økland
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, United Kingdom
- Lucy J MacGregor
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, United Kingdom
- Helen Blank
- University Medical Center Hamburg-Eppendorf, Hamburg, 20246, Germany
- Matthew H Davis
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, United Kingdom
8. Zhou D, Zhang G, Dang J, Unoki M, Liu X. Detection of Brain Network Communities During Natural Speech Comprehension From Functionally Aligned EEG Sources. Front Comput Neurosci 2022;16:919215. PMID: 35874316; PMCID: PMC9301328; DOI: 10.3389/fncom.2022.919215.
Abstract
In recent years, electroencephalography (EEG) studies on speech comprehension have been extended from a controlled paradigm to a natural paradigm. Under the hypothesis that the brain can be approximated as a linear time-invariant system, the neural response to natural speech has been investigated extensively using temporal response functions (TRFs). However, most studies have modeled TRFs in the electrode space, which is a mixture of brain sources and thus cannot fully reveal the functional mechanism underlying speech comprehension. In this paper, we propose methods for investigating the brain networks of natural speech comprehension using TRFs on the basis of EEG source reconstruction. We first propose a functional hyper-alignment method with an additive average method to reduce EEG noise. Then, we reconstruct neural sources within the brain based on the EEG signals to estimate TRFs from speech stimuli to source areas, and investigate the brain networks in the neural source space using a community detection method. To evaluate TRF-based brain networks, EEG data were recorded in story listening tasks with normal speech and time-reversed speech. To obtain reliable structures of brain networks, we detected TRF-based communities at multiple scales. As a result, the proposed functional hyper-alignment method could effectively reduce the noise caused by individual settings in an EEG experiment and thus improve the accuracy of source reconstruction. The detected brain networks for normal speech comprehension were clearly distinct from those for non-semantically driven (time-reversed speech) audio processing. Our results indicate that the proposed source TRFs can reflect the cognitive processing of spoken language and that the multi-scale community detection method is powerful for investigating brain networks.
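A temporal response function (TRF) under the linear time-invariant assumption can be estimated by regularized regression from a lagged stimulus matrix to the neural signal. The following sketch uses plain ridge regression on synthetic data; the lag window, regularization strength, and signals are assumptions, not the paper's implementation.

```python
import numpy as np

def estimate_trf(stim, resp, fs, tmin=-0.1, tmax=0.4, alpha=1.0):
    """Ridge-regularized TRF mapping a 1-D stimulus feature to a 1-D neural signal.

    Returns the lag axis (in seconds) and the TRF weights.
    """
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    # Lagged design matrix (time x lags); each column is the stimulus shifted by one lag
    X = np.column_stack([np.roll(stim, lag) for lag in lags])
    # Trim the edges so that samples contaminated by np.roll wrap-around are excluded
    edge = max(abs(lags.min()), lags.max())
    X, y = X[edge:-edge], resp[edge:-edge]
    # Ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
    return lags / fs, w

# Synthetic demo: the source signal is a delayed, noisy copy of the speech envelope
fs = 128
rng = np.random.default_rng(3)
env = rng.standard_normal(fs * 300)
source = np.roll(env, int(0.1 * fs)) + rng.standard_normal(env.size)

lag_s, trf = estimate_trf(env, source, fs)
print("Peak TRF lag (s):", lag_s[np.argmax(trf)])
```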
Affiliation(s)
- Di Zhou
- School of Information Science, Japan Advanced Institute of Science and Technology, Ishikawa, Japan
- Gaoyan Zhang
- College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Jianwu Dang
- School of Information Science, Japan Advanced Institute of Science and Technology, Ishikawa, Japan
- College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Masashi Unoki
- School of Information Science, Japan Advanced Institute of Science and Technology, Ishikawa, Japan
- Xin Liu
- School of Information Science, Japan Advanced Institute of Science and Technology, Ishikawa, Japan
9. Zhang L, Du Y. Lip movements enhance speech representations and effective connectivity in auditory dorsal stream. Neuroimage 2022;257:119311. PMID: 35589000; DOI: 10.1016/j.neuroimage.2022.119311.
Abstract
Viewing a speaker's lip movements facilitates speech perception, especially under adverse listening conditions, but the neural mechanisms of this perceptual benefit at the phonemic and feature levels remain unclear. This fMRI study addressed this question by quantifying regional multivariate representation and network organization underlying audiovisual speech-in-noise perception. Behaviorally, valid lip movements improved recognition of place of articulation to aid phoneme identification. Meanwhile, lip movements enhanced neural representations of phonemes in left auditory dorsal stream regions, including frontal speech motor areas and supramarginal gyrus (SMG). Moreover, neural representations of place of articulation and voicing features were promoted differentially by lip movements in these regions, with voicing enhanced in Broca's area while place of articulation was better encoded in left ventral premotor cortex and SMG. Next, dynamic causal modeling (DCM) analysis showed that such local changes were accompanied by strengthened effective connectivity along the dorsal stream. Moreover, the neurite orientation dispersion of the left arcuate fasciculus, the structural backbone of the auditory dorsal stream, predicted the visual enhancements of neural representations and effective connectivity. Our findings provide novel insight for speech science: lip movements promote both local phonemic and feature encoding and network connectivity in the dorsal pathway, and this functional enhancement is mediated by the microstructural architecture of the circuit.
Affiliation(s)
- Lei Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China 100101; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China 100049
- Yi Du
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China 100101; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China 100049; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China 200031; Chinese Institute for Brain Research, Beijing, China 102206.
10. Li YT, Chen JW, Yan LF, Hu B, Chen TQ, Chen ZH, Sun JT, Shang YX, Lu LJ, Cui GB, Wang W. Dynamic Alterations of Functional Connectivity and Amplitude of Low-Frequency Fluctuations in Patients with Unilateral Sudden Sensorineural Hearing Loss. Neurosci Lett 2022;772:136470. PMID: 35066092; DOI: 10.1016/j.neulet.2022.136470.
Abstract
Unilateral sudden sensorineural hearing loss (SSNHL) adversely affects the quality of life, leading to increased risk of depression and cognitive decline. Our previous studies have mainly focused on static brain function abnormalities in SSNHL patients. However, the dynamic features of brain activity in SSNHL patients have not been elucidated. To explore the dynamic brain functional alterations in SSNHL patients, age- and sex-matched SSNHL patients (n=38) and healthy controls (HC, n=44) were enrolled. The dynamic functional connectivity (dFC) and dynamic amplitude of low-frequency fluctuation (dALFF) methods were used to compare the temporal features and dynamic neural activity between the two groups. In dFC analyses, the multiple functional connectivities (FCs) were clustered into 2 different states; a greater proportion of FCs in SSNHL patients showed a sparse state compared with HC. In dALFF analyses, SSNHL individuals exhibited decreased dALFF variability in bilateral inferior occipital gyrus, middle occipital gyrus, calcarine, right lingual gyrus, and right fusiform gyrus. dALFF variability showed a negative correlation with activated partial thromboplastin time. The dynamic characteristics of SSNHL patients differed from their static functional connectivity and static amplitude of low-frequency fluctuation findings, especially within the visual cortices. These findings suggest that SSNHL patients experience cross-modal plasticity and visual compensation, which may be closely related to the pathophysiology of SSNHL.
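The dFC analysis summarized here, sliding-window functional connectivity clustered into recurring states with K-means, can be sketched as follows. Window length, step size, ROI count, and the synthetic time series are assumptions for illustration only, not the study's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
n_subjects, n_timepoints, n_rois = 20, 200, 30
win, step = 40, 5  # sliding-window length and step, in volumes (assumed)

def sliding_window_fc(ts, win, step):
    """Upper-triangle correlation matrix for each sliding window of a (time x ROI) series."""
    iu = np.triu_indices(ts.shape[1], k=1)
    return np.array([np.corrcoef(ts[s:s + win].T)[iu]
                     for s in range(0, ts.shape[0] - win + 1, step)])

# Pool windowed FC patterns over subjects and cluster them into recurring states
all_windows, subject_ids = [], []
for s in range(n_subjects):
    ts = rng.standard_normal((n_timepoints, n_rois))  # placeholder ROI time series
    w = sliding_window_fc(ts, win, step)
    all_windows.append(w)
    subject_ids.extend([s] * len(w))
all_windows = np.vstack(all_windows)
subject_ids = np.array(subject_ids)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(all_windows)
# Fraction of windows each subject spends in state 0 (a simple temporal property)
occupancy = np.array([(km.labels_[subject_ids == s] == 0).mean() for s in range(n_subjects)])
print("Mean occupancy of state 0:", occupancy.mean())
```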
Affiliation(s)
- Yu-Ting Li
- Department of Radiology, Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University, 569 Xinsi Road, Xi'an 710038, Shaanxi, China
- Jia-Wei Chen
- Department of Otolaryngology Head and Neck Surgery, Tangdu Hospital, Fourth Military Medical University, Xi'an 710038, Shaanxi, China
- Lin-Feng Yan
- Department of Radiology, Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University, 569 Xinsi Road, Xi'an 710038, Shaanxi, China
- Bo Hu
- Department of Radiology, Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University, 569 Xinsi Road, Xi'an 710038, Shaanxi, China
- Tian-Qi Chen
- Institution of Basic Medicine, Fourth Military Medical University, 169 Changle Road, Xi'an 710032, Shaanxi, China
- Zhu-Hong Chen
- Department of Radiology, Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University, 569 Xinsi Road, Xi'an 710038, Shaanxi, China
- Jing-Ting Sun
- Department of Radiology, Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University, 569 Xinsi Road, Xi'an 710038, Shaanxi, China; Shaanxi University of Chinese Medicine, Middle Section of Century Avenue, Xianyang 712046, Shaanxi, China
- Yu-Xuan Shang
- Department of Radiology, Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University, 569 Xinsi Road, Xi'an 710038, Shaanxi, China
- Lian-Jun Lu
- Department of Otolaryngology Head and Neck Surgery, Tangdu Hospital, Fourth Military Medical University, Xi'an 710038, Shaanxi, China
- Guang-Bin Cui
- Department of Radiology, Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University, 569 Xinsi Road, Xi'an 710038, Shaanxi, China
- Wen Wang
- Department of Radiology, Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University, 569 Xinsi Road, Xi'an 710038, Shaanxi, China.
11. Al-Zubaidi A, Bräuer S, Holdgraf CR, Schepers IM, Rieger JW. OUP accepted manuscript. Cereb Cortex Commun 2022;3:tgac007. PMID: 35281216; PMCID: PMC8914075; DOI: 10.1093/texcom/tgac007.
Affiliation(s)
- Arkan Al-Zubaidi
- Applied Neurocognitive Psychology Lab and Cluster of Excellence Hearing4all, Oldenburg University, Oldenburg, Germany
- Research Center Neurosensory Science, Oldenburg University, 26129 Oldenburg, Germany
- Susann Bräuer
- Applied Neurocognitive Psychology Lab and Cluster of Excellence Hearing4all, Oldenburg University, Oldenburg, Germany
- Chris R Holdgraf
- Department of Statistics, UC Berkeley, Berkeley, CA 94720, USA
- International Interactive Computing Collaboration
- Inga M Schepers
- Applied Neurocognitive Psychology Lab and Cluster of Excellence Hearing4all, Oldenburg University, Oldenburg, Germany
- Jochem W Rieger
- Corresponding author: Department of Psychology, Faculty VI, Oldenburg University, 26129 Oldenburg, Germany.
12. Beynon AJ, Luijten BM, Mylanus EAM. Intracorporeal Cortical Telemetry as a Step to Automatic Closed-Loop EEG-Based CI Fitting: A Proof of Concept. Audiol Res 2021;11:691-705. PMID: 34940020; PMCID: PMC8698912; DOI: 10.3390/audiolres11040062.
Abstract
Electrically evoked auditory potentials have been used to predict auditory thresholds in patients with a cochlear implant (CI). However, with the exception of electrically evoked compound action potentials (eCAPs), conventional extracorporeal EEG recording devices are still needed. Until now, built-in (intracorporeal) back-telemetry options have been limited to eCAPs. Intracorporeal recording of auditory responses beyond the cochlea is still lacking. This study describes the feasibility of obtaining longer latency cortical responses by concatenating interleaved short recording time windows used for eCAP recordings. Extracochlear reference electrodes were dedicated to recording cortical responses, while intracochlear electrodes were used for stimulation, enabling intracorporeal telemetry (i.e., without an EEG device) to assess higher cortical processing in CI recipients. Simultaneous extra- and intra-corporeal recordings showed that it is feasible to obtain intracorporeal slow vertex potentials with a CI similar to those obtained by conventional extracorporeal EEG recordings. Our data demonstrate a proof of concept of closed-loop intracorporeal auditory cortical response telemetry (ICT) with a cochlear implant device. This research breaks new ground for next-generation CI devices to assess higher cortical neural processing based on acute or continuous EEG telemetry to enable individualized automatic and/or adaptive CI fitting with only a CI.
Affiliation(s)
- Andy J. Beynon
- Vestibular & Auditory Evoked Potential Lab, Department Oto-Rhino-Laryngology, Head & Neck Surgery, 6525 EX Nijmegen, The Netherlands
- Hearing & Implants, Department Oto-Rhino-Laryngology, Head & Neck Surgery, Donders Center Medical Neuroscience, 6525 EX Nijmegen, The Netherlands
- Bart M. Luijten
- Hearing & Implants, Department Oto-Rhino-Laryngology, Head & Neck Surgery, Donders Center Medical Neuroscience, 6525 EX Nijmegen, The Netherlands
- Emmanuel A. M. Mylanus
- Hearing & Implants, Department Oto-Rhino-Laryngology, Head & Neck Surgery, Donders Center Medical Neuroscience, 6525 EX Nijmegen, The Netherlands
13. Karthik G, Plass J, Beltz AM, Liu Z, Grabowecky M, Suzuki S, Stacey WC, Wasade VS, Towle VL, Tao JX, Wu S, Issa NP, Brang D. Visual speech differentially modulates beta, theta, and high gamma bands in auditory cortex. Eur J Neurosci 2021;54:7301-7317. PMID: 34587350; DOI: 10.1111/ejn.15482.
Abstract
Speech perception is a central component of social communication. Although principally an auditory process, accurate speech perception in everyday settings is supported by meaningful information extracted from visual cues. Visual speech modulates activity in cortical areas subserving auditory speech perception including the superior temporal gyrus (STG). However, it is unknown whether visual modulation of auditory processing is a unitary phenomenon or, rather, consists of multiple functionally distinct processes. To explore this question, we examined neural responses to audiovisual speech measured from intracranially implanted electrodes in 21 patients with epilepsy. We found that visual speech modulated auditory processes in the STG in multiple ways, eliciting temporally and spatially distinct patterns of activity that differed across frequency bands. In the theta band, visual speech suppressed the auditory response from before auditory speech onset to after auditory speech onset (-93 to 500 ms) most strongly in the posterior STG. In the beta band, suppression was seen in the anterior STG from -311 to -195 ms before auditory speech onset and in the middle STG from -195 to 235 ms after speech onset. In high gamma, visual speech enhanced the auditory response from -45 to 24 ms only in the posterior STG. We interpret the visual-induced changes prior to speech onset as reflecting crossmodal prediction of speech signals. In contrast, modulations after sound onset may reflect a decrease in sustained feedforward auditory activity. These results are consistent with models that posit multiple distinct mechanisms supporting audiovisual speech perception.
Affiliation(s)
- G Karthik
- Department of Psychology, University of Michigan, Ann Arbor, Michigan, USA
- John Plass
- Department of Psychology, University of Michigan, Ann Arbor, Michigan, USA
- Adriene M Beltz
- Department of Psychology, University of Michigan, Ann Arbor, Michigan, USA
- Zhongming Liu
- Department of Biomedical Engineering and Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, Michigan, USA
- Marcia Grabowecky
- Department of Psychology, Northwestern University, Evanston, Illinois, USA
- Satoru Suzuki
- Department of Psychology, Northwestern University, Evanston, Illinois, USA
- William C Stacey
- Department of Neurology and Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan, USA
- Vibhangini S Wasade
- Department of Neurology, Henry Ford Hospital, Detroit, Michigan, USA; Department of Neurology, Wayne State University School of Medicine, Detroit, Michigan, USA
- Vernon L Towle
- Department of Neurology, The University of Chicago, Chicago, Illinois, USA
- James X Tao
- Department of Neurology, The University of Chicago, Chicago, Illinois, USA
- Shasha Wu
- Department of Neurology, The University of Chicago, Chicago, Illinois, USA
- Naoum P Issa
- Department of Neurology, The University of Chicago, Chicago, Illinois, USA
- David Brang
- Department of Psychology, University of Michigan, Ann Arbor, Michigan, USA
14. Xie Y, Yang Q, Liu C, Zhang Q, Jiang J, Han Y. Exploring the Pattern Associated With Longitudinal Changes of β-Amyloid Deposition During Cognitively Normal Healthy Aging. Front Med (Lausanne) 2021;7:617173. PMID: 33585514; PMCID: PMC7874155; DOI: 10.3389/fmed.2020.617173.
Abstract
The aim of this study was to determine a pattern associated with longitudinal changes of β-amyloid (Aβ) deposition during cognitively normal (CN) healthy aging. We used 18F-florbetapir (AV-45) PET images of the brains of 207 cognitively normal subjects (CN1), obtained through the Alzheimer's Disease Neuroimaging Initiative (ADNI), to identify the healthy aging pattern and 76 cognitively normal healthy subjects (CN2), obtained through the Xuanwu Hospital of Capital Medical University, Beijing, China, to verify it. A voxel-based correlation analysis of the standardized uptake value ratio (SUVR) maps and age was conducted using the DPABI (Data Processing & Analysis of Brain Imaging) software to identify the pattern. The sum of squares due to errors (SSE), R-square (R2), and the root-mean-square error (RMSE) were calculated to assess the quality of curve fitting. Among them, R2 was proposed as the coherence coefficient, used as an index to assess the correlation between the SUVR value of the pattern and subjects' age. The pattern characterized by age-associated longitudinal changes of Aβ deposition was mainly distributed in the right middle and inferior temporal gyrus, the right temporal pole: middle temporal gyrus, the right inferior occipital gyrus, the right inferior frontal gyrus (triangular portion), and the right precentral gyrus. There was a significant positive correlation between the SUVR value of the pattern and age for each CN group (CN1: R2 = 0.120, p < 0.001 for quadratic model; CN2: R2 = 0.152, p = 0.002 for quadratic model). These findings suggest a pattern of changes in Aβ deposition that can be used to distinguish physiological changes from pathophysiological changes, constituting a new method for elucidating the neuropathological mechanism of Alzheimer's disease.
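The curve-fitting metrics reported here (SSE, R2, and RMSE for a quadratic model of SUVR against age) can be reproduced in a few lines. The simulated ages and SUVR values below are placeholders, not ADNI data, and the coefficients are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated mean SUVR of the age-associated pattern and subject ages (placeholders)
age = rng.uniform(55, 85, size=207)
suvr = 1.0 + 0.0004 * (age - 55) ** 2 + 0.05 * rng.standard_normal(age.size)

# Quadratic model: SUVR ~ b0 + b1*age + b2*age^2
coef = np.polyfit(age, suvr, deg=2)
pred = np.polyval(coef, age)

residuals = suvr - pred
sse = np.sum(residuals ** 2)                      # sum of squares due to error
r2 = 1 - sse / np.sum((suvr - suvr.mean()) ** 2)  # coefficient of determination
rmse = np.sqrt(sse / age.size)                    # root-mean-square error
print(f"SSE = {sse:.3f}, R^2 = {r2:.3f}, RMSE = {rmse:.3f}")
```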
Affiliation(s)
- Yunyan Xie
- Department of Neurology, Xuanwu Hospital of Capital Medical University, Beijing, China
- Qin Yang
- Department of Neurology, Xuanwu Hospital of Capital Medical University, Beijing, China
- Chunhua Liu
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Technology, Shanghai University, Shanghai, China
- Qi Zhang
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Technology, Shanghai University, Shanghai, China
- Jiehui Jiang
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, School of Communication and Information Technology, Shanghai University, Shanghai, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China; Institute of Biomedical Engineering, Shanghai University, Shanghai, China
- Ying Han
- Department of Neurology, Xuanwu Hospital of Capital Medical University, Beijing, China; Center of Alzheimer's Disease, Beijing Institute for Brain Disorders, Beijing, China; National Clinical Research Center for Geriatric Disorders, Beijing, China
15. Thézé R, Giraud AL, Mégevand P. The phase of cortical oscillations determines the perceptual fate of visual cues in naturalistic audiovisual speech. Sci Adv 2020;6(45):eabc6348. PMID: 33148648; PMCID: PMC7673697; DOI: 10.1126/sciadv.abc6348.
Abstract
When we see our interlocutor, our brain seamlessly extracts visual cues from their face and processes them along with the sound of their voice, making speech an intrinsically multimodal signal. Visual cues are especially important in noisy environments, when the auditory signal is less reliable. Neuronal oscillations might be involved in the cortical processing of audiovisual speech by selecting which sensory channel contributes more to perception. To test this, we designed computer-generated naturalistic audiovisual speech stimuli where one mismatched phoneme-viseme pair in a key word of sentences created bistable perception. Neurophysiological recordings (high-density scalp and intracranial electroencephalography) revealed that the precise phase angle of theta-band oscillations in posterior temporal and occipital cortex of the right hemisphere was crucial to select whether the auditory or the visual speech cue drove perception. We demonstrate that the phase of cortical oscillations acts as an instrument for sensory selection in audiovisual speech processing.
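One common way to test whether the phase of an oscillation predicts a bistable percept is to extract the band-limited Hilbert phase at the key word's onset and model the report with the sine and cosine of that phase. The sketch below does this on simulated trials; the sampling rate, frequency band, and effect size are illustrative assumptions, not the study's analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

fs = 512  # EEG sampling rate in Hz (assumed)
rng = np.random.default_rng(5)

def theta_phase(x, fs, lo=4.0, hi=8.0):
    """Instantaneous theta-band phase of a single-trial signal."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, x)))

# Simulated single-trial EEG segments and the phase at the key word's onset
n_trials, n_samples, onset = 200, 1024, 512
eeg = rng.standard_normal((n_trials, n_samples))
phases = np.array([theta_phase(trial, fs)[onset] for trial in eeg])

# Simulated bistable reports (1 = percept driven by the visual cue), made phase-dependent
percept = (rng.random(n_trials) < 0.5 + 0.3 * np.cos(phases)).astype(int)

# Logistic regression on the circular predictor via its sine and cosine components
X = np.column_stack([np.sin(phases), np.cos(phases)])
acc = cross_val_score(LogisticRegression(), X, percept, cv=5)
print(f"Phase-based prediction accuracy: {acc.mean():.2f}")
```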
Affiliation(s)
- Raphaël Thézé
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, 1202 Geneva, Switzerland
- Anne-Lise Giraud
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, 1202 Geneva, Switzerland
- Pierre Mégevand
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, 1202 Geneva, Switzerland.
- Division of Neurology, Department of Clinical Neurosciences, Geneva University Hospitals, 1205 Geneva, Switzerland
16. Mégevand P, Mercier MR, Groppe DM, Zion Golumbic E, Mesgarani N, Beauchamp MS, Schroeder CE, Mehta AD. Crossmodal Phase Reset and Evoked Responses Provide Complementary Mechanisms for the Influence of Visual Speech in Auditory Cortex. J Neurosci 2020;40:8530-8542. PMID: 33023923; PMCID: PMC7605423; DOI: 10.1523/jneurosci.0555-20.2020.
Abstract
Natural conversation is multisensory: when we can see the speaker's face, visual speech cues improve our comprehension. The neuronal mechanisms underlying this phenomenon remain unclear. The two main alternatives are visually mediated phase modulation of neuronal oscillations (excitability fluctuations) in auditory neurons and visual input-evoked responses in auditory neurons. Investigating this question using naturalistic audiovisual speech with intracranial recordings in humans of both sexes, we find evidence for both mechanisms. Remarkably, auditory cortical neurons track the temporal dynamics of purely visual speech using the phase of their slow oscillations and phase-related modulations in broadband high-frequency activity. Consistent with known perceptual enhancement effects, the visual phase reset amplifies the cortical representation of concomitant auditory speech. In contrast to this, and in line with earlier reports, visual input reduces the amplitude of evoked responses to concomitant auditory input. We interpret the combination of improved phase tracking and reduced response amplitude as evidence for more efficient and reliable stimulus processing in the presence of congruent auditory and visual speech inputs. SIGNIFICANCE STATEMENT: Watching the speaker can facilitate our understanding of what is being said. The mechanisms responsible for this influence of visual cues on the processing of speech remain incompletely understood. We studied these mechanisms by recording the electrical activity of the human brain through electrodes implanted surgically inside the brain. We found that visual inputs can operate by directly activating auditory cortical areas, and also indirectly by modulating the strength of cortical responses to auditory input. Our results help to understand the mechanisms by which the brain merges auditory and visual speech into a unitary perception.
Affiliation(s)
- Pierre Mégevand
- Department of Neurosurgery, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York 11549
- Feinstein Institutes for Medical Research, Manhasset, New York 11030
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, 1211 Geneva, Switzerland
- Manuel R Mercier
- Department of Neurology, Montefiore Medical Center, Bronx, New York 10467
- Department of Neuroscience, Albert Einstein College of Medicine, Bronx, New York 10461
- Institut de Neurosciences des Systèmes, Aix Marseille University, INSERM, 13005 Marseille, France
- David M Groppe
- Department of Neurosurgery, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York 11549
- Feinstein Institutes for Medical Research, Manhasset, New York 11030
- The Krembil Neuroscience Centre, University Health Network, Toronto, Ontario M5T 1M8, Canada
- Elana Zion Golumbic
- The Gonda Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, New York 10027
- Michael S Beauchamp
- Department of Neurosurgery, Baylor College of Medicine, Houston, Texas 77030
- Charles E Schroeder
- Nathan S. Kline Institute, Orangeburg, New York 10962
- Department of Psychiatry, Columbia University, New York, New York 10032
- Ashesh D Mehta
- Department of Neurosurgery, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, New York 11549
- Feinstein Institutes for Medical Research, Manhasset, New York 11030
17. Hearing-impaired listeners show increased audiovisual benefit when listening to speech in noise. Neuroimage 2019;196:261-268. PMID: 30978494; DOI: 10.1016/j.neuroimage.2019.04.017.
Abstract
Recent studies provide evidence for changes in audiovisual perception as well as for adaptive cross-modal auditory cortex plasticity in older individuals with high-frequency hearing impairments (presbycusis). We here investigated whether these changes facilitate the use of visual information, leading to an increased audiovisual benefit of hearing-impaired individuals when listening to speech in noise. We used a naturalistic design in which older participants with a varying degree of high-frequency hearing loss attended to running auditory or audiovisual speech in noise and detected rare target words. Passages containing only visual speech served as a control condition. Simultaneously acquired scalp electroencephalography (EEG) data were used to study cortical speech tracking. Target word detection accuracy was significantly increased in the audiovisual as compared to the auditory listening condition. The degree of this audiovisual enhancement was positively related to individual high-frequency hearing loss and subjectively reported listening effort in challenging daily life situations, which served as a subjective marker of hearing problems. On the neural level, the early cortical tracking of the speech envelope was enhanced in the audiovisual condition. Similar to the behavioral findings, individual differences in the magnitude of the enhancement were positively associated with listening effort ratings. Our results therefore suggest that hearing-impaired older individuals make increased use of congruent visual information to compensate for the degraded auditory input.