1.
Zheng Y, Liu L, Li R, Wu Z, Chen L, Li J, Wu C, Kong L, Zhang C, Lei M, She S, Ning Y, Li L. Impaired interaural correlation processing in people with schizophrenia. Eur J Neurosci 2021;54:6646-6662. PMID: 34494695. DOI: 10.1111/ejn.15449. Received 2021-05-26; revised 2021-08-19; accepted 2021-09-03.
Abstract
Detection of transient changes in interaural correlation depends on the temporal precision of the central representations of acoustic signals. Whether schizophrenia impairs this temporal precision during interaural correlation processing is not clear. In both participants with schizophrenia and matched healthy-control participants, this study examined the detection of a break in interaural correlation (BIC, a change in interaural correlation from 1 to 0 and back to 1), including the longest interaural delay at which a BIC was just audible, which represents the temporal extent of primitive auditory memory (PAM). Moreover, BIC-induced electroencephalograms (EEGs) were recorded, and the relationships between early binaural psychoacoustic processing and higher cognitive functions, assessed with the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS), were examined. Compared to healthy controls, participants with schizophrenia exhibited poorer BIC detection, a shorter PAM extent, and lower RBANS scores. Both BIC-detection accuracy and PAM extent were correlated with RBANS scores. Moreover, participants with schizophrenia showed a weaker BIC-induced N1-P2 amplitude, which was correlated with both theta-band power and inter-trial phase coherence. These results suggest that schizophrenia impairs the temporal precision of the central representations of acoustic signals, affecting both interaural correlation processing and higher-order cognition.
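The BIC stimulus described in this abstract can be sketched in a few lines: identical noise is presented to both ears (interaural correlation 1), an independent-noise segment is substituted in one ear (correlation drops to 0), and an interaural delay may be applied to probe the delay limit. This is a minimal illustration under assumed parameters (sample rate, durations, correlation measure), not the authors' stimulus code.

```python
import numpy as np

def bic_stimulus(fs=24000, dur=1.0, break_start=0.4, break_dur=0.2,
                 itd_ms=0.0, seed=0):
    """Sketch of a break-in-interaural-correlation (BIC) stimulus:
    identical noise at both ears (correlation 1), except for a short
    segment where the right ear receives independent noise (correlation 0).
    itd_ms circularly delays the right channel to probe the interaural-delay
    limit. All parameter values here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n = int(dur * fs)
    left = rng.standard_normal(n)
    right = left.copy()
    i0 = int(break_start * fs)
    i1 = int((break_start + break_dur) * fs)
    right[i0:i1] = rng.standard_normal(i1 - i0)   # uncorrelated break
    shift = int(round(itd_ms * 1e-3 * fs))        # interaural delay in samples
    right = np.roll(right, shift)
    return left, right

def interaural_correlation(left, right):
    """Pearson correlation between the two ear signals."""
    return np.corrcoef(left, right)[0, 1]
```

With the default (zero) interaural delay, the correlation is 1 outside the break and near 0 inside it, which is the change the listeners in the study had to detect.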
Affiliation(s)
- Yingjun Zheng: The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, China
- Lei Liu: School of Psychological and Cognitive Sciences, Key Laboratory on Machine Perception (Ministry of Education), Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Ruikeng Li: The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, China
- Zhemeng Wu: School of Psychological and Cognitive Sciences, Key Laboratory on Machine Perception (Ministry of Education), Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Liangjie Chen: School of Psychological and Cognitive Sciences, Key Laboratory on Machine Perception (Ministry of Education), Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Juanhua Li: The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, China
- Chao Wu: School of Psychological and Cognitive Sciences, Key Laboratory on Machine Perception (Ministry of Education), Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Lingzhi Kong: School of Psychological and Cognitive Sciences, Key Laboratory on Machine Perception (Ministry of Education), Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Changxin Zhang: School of Psychological and Cognitive Sciences, Key Laboratory on Machine Perception (Ministry of Education), Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Ming Lei: School of Psychological and Cognitive Sciences, Key Laboratory on Machine Perception (Ministry of Education), Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Shenglin She: The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, China
- Yuping Ning: The Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, China
- Liang Li: School of Psychological and Cognitive Sciences, Key Laboratory on Machine Perception (Ministry of Education), Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
2.
Liepins R, Kaider A, Honeder C, Auinger AB, Dahm V, Riss D, Arnoldner C. Formant frequency discrimination with a fine structure sound coding strategy for cochlear implants. Hear Res 2020;392:107970. PMID: 32339775. DOI: 10.1016/j.heares.2020.107970. Received 2019-05-14; revised 2020-03-04; accepted 2020-04-05.
Abstract
Recent sound coding strategies for cochlear implants (CI) have focused on transmitting temporal fine structure to the CI recipient. To date, the effects of fine structure coding in electrical hearing remain poorly characterized. The aim of this study was to examine whether the presence of temporal fine structure coding affects how the CI recipient perceives sound. This was done by comparing two sound coding strategies with different temporal fine structure coverage in a longitudinal cross-over setting. The more recent FS4 coding strategy provides fine structure coding on typically four apical stimulation channels, compared to FSP with usually one or two fine structure channels. Thirty-four adult CI patients with a minimum of one year of CI experience were included. All subjects were fitted according to clinical routine and used both coding strategies for three months each in a randomized sequence. Formant frequency discrimination thresholds (FFDT) were measured to assess the ability to resolve timbre information. Further outcome measures included a monosyllables test in quiet and the speech reception threshold of an adaptive matrix sentence test in noise (Oldenburger sentence test). In addition, subjective sound quality was assessed using visual analogue scales and a sound quality questionnaire after each three-month period. The extended fine structure range of FS4 yields FFDT similar to FSP for formants occurring in the frequency range covered only by FS4. There was a significant interaction (p = 0.048) between the extent of fine structure coverage in FSP and the improvement in FFDT in favour of FS4 for these stimuli. Speech perception in noise and in quiet was similar with both coding strategies. Sound quality was rated heterogeneously, showing that both strategies represent valuable options for CI fitting to allow for the best possible individual optimization.
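Threshold measures such as an FFDT or the speech reception threshold of an adaptive matrix test are typically obtained with adaptive staircase procedures. The sketch below shows a generic 2-down/1-up staircase (which converges on roughly 70.7% correct) run against a toy simulated listener; it illustrates the general technique only, and the procedure, step sizes, and psychometric function are assumptions, not the parameters used in this study.

```python
import math
import random

def staircase_threshold(listener, start=100.0, step=10.0, min_step=1.25,
                        n_reversals=8, seed=1):
    """Generic 2-down/1-up adaptive staircase: two consecutive correct
    responses make the task harder (level down), one error makes it easier
    (level up). Step size halves at each reversal down to min_step; the
    threshold estimate is the mean of the last six reversal levels."""
    rng = random.Random(seed)
    level, correct_run, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if listener(level, rng):
            correct_run += 1
            if correct_run == 2:               # 2 correct in a row -> harder
                correct_run = 0
                if direction == +1:            # turning point: record reversal
                    reversals.append(level)
                    step = max(step / 2.0, min_step)
                direction = -1
                level -= step
        else:                                  # 1 error -> easier
            correct_run = 0
            if direction == -1:                # turning point: record reversal
                reversals.append(level)
                step = max(step / 2.0, min_step)
            direction = +1
            level += step
    tail = reversals[-6:]
    return sum(tail) / len(tail)

def simulated_listener(level, rng, true_threshold=50.0, slope=5.0):
    """Toy 2AFC psychometric function (50% chance floor), for demonstration."""
    p = 0.5 + 0.5 / (1.0 + math.exp(-(level - true_threshold) / slope))
    return rng.random() < p
```

Running `staircase_threshold(simulated_listener)` yields an estimate near the simulated listener's true threshold, illustrating how a discrimination threshold is tracked adaptively rather than by exhaustive testing.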
Affiliation(s)
- R Liepins: Medical University of Vienna, Department of Otolaryngology, Head and Neck Surgery, Vienna, Austria
- A Kaider: Medical University of Vienna, Center for Medical Statistics, Informatics, and Intelligent Systems, Vienna, Austria
- C Honeder: Medical University of Vienna, Department of Otolaryngology, Head and Neck Surgery, Vienna, Austria
- A B Auinger: Medical University of Vienna, Department of Otolaryngology, Head and Neck Surgery, Vienna, Austria
- V Dahm: Medical University of Vienna, Department of Otolaryngology, Head and Neck Surgery, Vienna, Austria
- D Riss: Medical University of Vienna, Department of Otolaryngology, Head and Neck Surgery, Vienna, Austria
- C Arnoldner: Medical University of Vienna, Department of Otolaryngology, Head and Neck Surgery, Vienna, Austria
3.
Qi B, Mao Y, Liu J, Liu B, Xu L. Relative contributions of acoustic temporal fine structure and envelope cues for lexical tone perception in noise. J Acoust Soc Am 2017;141:3022. PMID: 28599529. PMCID: PMC5415402. DOI: 10.1121/1.4982247. Received 2016-10-24; revised 2017-03-21; accepted 2017-04-11.
Abstract
Previous studies have shown that lexical tone perception in quiet relies on acoustic temporal fine structure (TFS) but not on envelope (E) cues. The contribution of TFS to speech recognition in noise is under debate. In the present study, Mandarin tone tokens were mixed with speech-shaped noise (SSN) or two-talker babble (TTB) at five signal-to-noise ratios (SNRs; -18 to +6 dB). The TFS and E were then extracted from each of 30 frequency bands using the Hilbert transform. Twenty-five combinations of TFS and E from the sound mixtures of the same tone tokens at various SNRs were created. Twenty normal-hearing, native-Mandarin-speaking listeners participated in the tone-recognition test. Results showed that tone-recognition performance improved as the SNR in either TFS or E increased. The masking effects on tone perception for the TTB were weaker than those for the SSN. For both types of masker, the perceptual weights of TFS and E in tone perception in noise were nearly equivalent, with E playing a slightly greater role than TFS. Thus, the relative contributions of TFS and E cues to lexical tone perception in noise or in competing-talker maskers differ from those in quiet and from their contributions to speech perception in non-tonal languages.
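The Hilbert decomposition named in this abstract, splitting a band-limited signal into an envelope and temporal fine structure, can be sketched for a single band as follows. The filter order, band edges, and test signal are illustrative assumptions, not the paper's 30-band filterbank.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def tfs_and_envelope(signal, fs, band):
    """Band-pass one channel, then split the analytic signal into the
    envelope (magnitude of the Hilbert analytic signal) and the temporal
    fine structure (cosine of its instantaneous phase). A full TFS/E
    vocoder would repeat this over every band of a filterbank."""
    sos = butter(4, band, btype='bandpass', fs=fs, output='sos')
    x = sosfiltfilt(sos, signal)          # zero-phase band-pass filtering
    analytic = hilbert(x)                 # analytic signal via Hilbert transform
    envelope = np.abs(analytic)           # E: slow amplitude fluctuations
    tfs = np.cos(np.angle(analytic))      # TFS: unit-amplitude fine structure
    return envelope, tfs
```

Applied to a 1-kHz tone amplitude-modulated at 4 Hz, the envelope tracks the slow modulation while the TFS carries the fast carrier oscillation with its amplitude normalized away.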
Affiliation(s)
- Beier Qi: Department of Otolaryngology-Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yitao Mao: Department of Radiology, Xiangya Hospital, Central South University, Changsha, Hunan, China
- Jiaxing Liu: Department of Otolaryngology-Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Bo Liu: Department of Otolaryngology-Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Li Xu: Communication Sciences and Disorders, Ohio University, Athens, Ohio 45701, USA