1. Bader M, Schröger E, Grimm S. Auditory Pattern Representations Under Conditions of Uncertainty-An ERP Study. Front Hum Neurosci 2021;15:682820. PMID: 34305553; PMCID: PMC8299531; DOI: 10.3389/fnhum.2021.682820.
Abstract
The auditory system is able to recognize auditory objects and is thought to form predictive models of them even though the acoustic information arriving at our ears is often imperfect, intermixed, or distorted. We investigated implicit regularity extraction for acoustically intact versus disrupted six-tone sound patterns via event-related potentials (ERPs). In an exact-repetition condition, identical patterns were repeated; in two distorted-repetition conditions, one randomly chosen segment in each sound pattern was replaced either by white noise or by a wrong pitch. In a roving-standard paradigm, sound patterns were repeated 1-12 times (standards) in a row before a new pattern (deviant) occurred. The participants were not informed about the roving rule and had to detect rarely occurring loudness changes. Behavioral detectability of pattern changes was assessed in a subsequent behavioral task. Pattern changes (standard vs. deviant) elicited mismatch negativity (MMN) and P3a, and were behaviorally detected above chance level in all conditions, suggesting that the auditory system extracts regularities despite distortions in the acoustic input. However, MMN and P3a amplitudes were decreased by distortions. At the level of MMN, both types of distortions caused similar impairments, suggesting that auditory regularity extraction is largely determined by the stimulus statistics of matching information. At the level of P3a, wrong-pitch distortions caused larger decreases than white-noise distortions. Wrong-pitch distortions likely prevented the engagement of restoration mechanisms and the segregation of disrupted from true pattern segments, causing stronger informational interference with the relevant pattern information.
Affiliation(s)
- Maria Bader
  - Cognitive and Biological Psychology, Institute of Psychology-Wilhelm Wundt, Faculty of Life Sciences, Leipzig University, Leipzig, Germany
- Erich Schröger
  - Cognitive and Biological Psychology, Institute of Psychology-Wilhelm Wundt, Faculty of Life Sciences, Leipzig University, Leipzig, Germany
- Sabine Grimm
  - Cognitive and Biological Psychology, Institute of Psychology-Wilhelm Wundt, Faculty of Life Sciences, Leipzig University, Leipzig, Germany

2. Atılgan A, Çiprut A. Effects of spatial separation with better-ear listening on N1-P2 complex. Auris Nasus Larynx 2021;48:1067-1073. PMID: 33745789; DOI: 10.1016/j.anl.2021.03.005.
Abstract
OBJECTIVE The purpose of this study was to determine the effect of better-ear listening on spatial separation using the N1-P2 complex. METHODS Twenty individuals with normal hearing participated in this study. The speech stimulus /ba/ was presented from in front of the participant (0°). Continuous speech noise (5 dB signal-to-noise ratio) was presented from the front (0°), the left side (-90°), or the right side (+90°). The N1-P2 complex was recorded in quiet and in the three noise conditions. RESULTS There was a notable effect of noise direction on N1 and P2 latencies: when the noise was spatially separated from the stimulus, N1 and P2 latencies increased relative to when the noise was co-located with the stimulus. There was no statistically significant difference in N1-P2 amplitude between the stimulus-only and co-located conditions, whereas N1-P2 amplitude increased when the noise came from the sides relative to those conditions. CONCLUSION These findings demonstrate that latency shifts in the N1-P2 complex reflect cortical mechanisms of spatial separation in better-ear listening.
Affiliation(s)
- Atılım Atılgan
  - Marmara University, School of Medicine, Audiology Department, İstanbul, Turkey
  - İstanbul Medeniyet University, Faculty of Health Sciences, Audiology Department, İstanbul, Turkey
- Ayça Çiprut
  - Marmara University, School of Medicine, Audiology Department, İstanbul, Turkey

3. Fitzhugh MC, Schaefer SY, Baxter LC, Rogalsky C. Cognitive and neural predictors of speech comprehension in noisy backgrounds in older adults. Lang Cogn Neurosci 2020;36:269-287. PMID: 34250179; PMCID: PMC8261331; DOI: 10.1080/23273798.2020.1828946.
Abstract
Older adults often experience difficulty comprehending speech in noisy backgrounds, which hearing loss does not fully explain. It remains unknown how cognitive abilities, brain networks, and age-related hearing loss uniquely contribute to sentence-level speech-in-noise comprehension. In 31 older adults, using cognitive measures and resting-state fMRI, we investigated the cognitive and neural predictors of speech comprehension under energetic (broadband noise) and informational (multi-speaker) masking. Better hearing thresholds and greater working memory abilities were associated with better speech comprehension under energetic masking. Conversely, faster processing speed and stronger functional connectivity between frontoparietal and language networks were associated with better speech comprehension under informational masking. Our findings highlight the importance of the frontoparietal network in older adults' ability to comprehend speech in multi-speaker backgrounds, and show that hearing loss and working memory contribute to speech comprehension under energetic, but not informational, masking.
Affiliation(s)
- Megan C. Fitzhugh
  - Stevens Neuroimaging and Informatics Institute, University of Southern California, Los Angeles, CA
  - College of Health Solutions, Arizona State University, Tempe, AZ
- Sydney Y. Schaefer
  - School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ

4. Rao A, Koerner TK, Madsen B, Zhang Y. Investigating Influences of Medial Olivocochlear Efferent System on Central Auditory Processing and Listening in Noise: A Behavioral and Event-Related Potential Study. Brain Sci 2020;10:428. PMID: 32635442; PMCID: PMC7408540; DOI: 10.3390/brainsci10070428.
Abstract
This electrophysiological study investigated the role of the medial olivocochlear (MOC) efferents in listening in noise. Both ears of eleven normal-hearing adult participants were tested. The physiological tests consisted of transient-evoked otoacoustic emission (TEOAE) inhibition and the measurement of cortical event-related potentials (ERPs). The mismatch negativity (MMN) and P300 responses were obtained in passive and active listening tasks, respectively. Behavioral responses for the word recognition in noise test were also analyzed. Consistent with previous findings, the TEOAE data showed significant inhibition in the presence of contralateral acoustic stimulation. However, performance in the word recognition in noise test was comparable for the two conditions (i.e., without contralateral stimulation and with contralateral stimulation). Peak latencies and peak amplitudes of MMN and P300 did not show changes with contralateral stimulation. Behavioral performance was also maintained in the P300 task. Together, the results show that the peripheral auditory efferent effects captured via otoacoustic emission (OAE) inhibition might not necessarily be reflected in measures of central cortical processing and behavioral performance. As the MOC effects may not play a role in all listening situations in adults, the functional significance of the cochlear effects of the medial olivocochlear efferents and the optimal conditions conducive to corresponding effects in behavioral and cortical responses remain to be elucidated.
Affiliation(s)
- Aparna Rao
  - Department of Speech and Hearing Science, Arizona State University, Tempe, AZ 85287, USA
  - Correspondence: (A.R.); (Y.Z.); Tel.: +1-480-727-2761 (A.R.); +1-612-624-7818 (Y.Z.)
- Tess K. Koerner
  - VA RR&D National Center for Rehabilitative Auditory Research, Portland, OR 97239, USA
- Brandon Madsen
  - VA RR&D National Center for Rehabilitative Auditory Research, Portland, OR 97239, USA
- Yang Zhang
  - Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota, Minneapolis, MN 55455, USA

5. Morrison EL, DeLong CM, Wilcox KT. How humans discriminate acoustically among bottlenose dolphin signature whistles with and without masking by boat noise. J Acoust Soc Am 2020;147:4162. PMID: 32611182; DOI: 10.1121/10.0001450.
Abstract
Anthropogenic noise in the world's oceans is known to impede many species' ability to perceive acoustic signals, but little research has addressed how this noise affects the perception of bioacoustic signals used for communication in marine mammals. Bottlenose dolphins (Tursiops truncatus) use signature whistles containing identification information. Past studies have used human participants to gain insight into dolphin perception, but most previous research investigated echolocation. In Experiment 1, human participants were tested on their ability to discriminate among signature whistles from three dolphins. Participants' performance was nearly errorless. In Experiment 2, participants identified signature whistles masked by five different samples of boat noise utilizing different signal-to-noise ratios. Lower signal-to-noise ratio and proximity in frequency between the whistle and noise both significantly decreased performance. Like dolphins, human participants primarily identified whistles using frequency contour. Participants reported greater use of amplitude in noise-present vs noise-absent trials, but otherwise did not vary cue usage. These findings can be used to generate hypotheses about dolphins' performance and auditory cue use for future research. This study may provide insight into how specific characteristics of boat noise affect dolphin whistle perception and may have implications for conservation and regulations.
Affiliation(s)
- Evan L Morrison
  - Department of Psychology, College of Liberal Arts, Rochester Institute of Technology, 18 Lomb Memorial Drive, Rochester, New York 14623, USA
- Caroline M DeLong
  - Department of Psychology, College of Liberal Arts, Rochester Institute of Technology, 18 Lomb Memorial Drive, Rochester, New York 14623, USA
- Kenneth Tyler Wilcox
  - Department of Psychology, College of Arts and Letters, University of Notre Dame, 390 Corbett Family Hall, Notre Dame, Indiana 46556, USA

6. Francis AM, Knott VJ, Labelle A, Fisher DJ. Interaction of Background Noise and Auditory Hallucinations on Phonemic Mismatch Negativity (MMN) and P3a Processing in Schizophrenia. Front Psychiatry 2020;11:540738. PMID: 33093834; PMCID: PMC7523538; DOI: 10.3389/fpsyt.2020.540738.
Abstract
Auditory hallucinations (AHs) are among the cardinal symptoms of schizophrenia (SZ). During AHs, aberrant activity of the auditory cortices has been observed, including hyperactivation during AHs alone and hypoactivation when AHs are accompanied by a concurrent external auditory competitor. Mismatch negativity (MMN) and P3a are common ERPs of interest in the study of SZ, as both are robustly reduced in the chronic phase of the illness. The present study aimed to explore whether background noise altered the auditory MMN and P3a in those with SZ and treatment-resistant AHs. METHODS MMN and P3a were assessed in 12 hallucinating patients (HPs), 11 non-hallucinating patients (NPs), and 9 healthy controls (HCs) within an auditory oddball paradigm. Standard (P = 0.85) and deviant (P = 0.15) stimuli were presented during three noise conditions: silence (SL), traffic noise (TN), and wide-band white noise (WN). RESULTS HPs showed significantly greater deficits in MMN amplitude relative to NPs in all background noise conditions, though predominantly at central electrodes. Conversely, both NPs and HPs exhibited significant deficits in P3a amplitude relative to HCs under the SL condition only. SIGNIFICANCE These findings suggest that the presence of AHs may specifically impair the MMN, while the P3a appears to be more generally impaired in SZ. That MMN amplitudes were specifically reduced for HPs during background noise conditions suggests that HPs may have a harder time than NPs detecting changes in phonemic sounds in situations with external traffic or "real-world" noise.
Affiliation(s)
- Ashley M Francis
  - Department of Psychology, Saint Mary's University, Halifax, NS, Canada
- Verner J Knott
  - Royal Ottawa Mental Health Centre, Ottawa, ON, Canada
  - Department of Psychology, Carleton University, Ottawa, ON, Canada
- Alain Labelle
  - Royal Ottawa Mental Health Centre, Ottawa, ON, Canada
- Derek J Fisher
  - Department of Psychology, Saint Mary's University, Halifax, NS, Canada
  - Royal Ottawa Mental Health Centre, Ottawa, ON, Canada
  - Department of Psychology, Carleton University, Ottawa, ON, Canada
  - Department of Psychology, Mount Saint Vincent University, Halifax, NS, Canada

7. Zhang J, Meng Y, Wu C, Xiang YT, Yuan Z. Non-speech and speech pitch perception among Cantonese-speaking children with autism spectrum disorder: An ERP study. Neurosci Lett 2019;703:205-212. DOI: 10.1016/j.neulet.2019.03.021.

8. Zhang C, Tao R, Zhao H. Auditory spatial attention modulates the unmasking effect of perceptual separation in a "cocktail party" environment. Neuropsychologia 2019;124:108-116. PMID: 30659864; DOI: 10.1016/j.neuropsychologia.2019.01.009.
Abstract
The perceptual separation between a signal speech and a competing speech (masker), induced by the precedence effect, plays an important role in releasing the signal speech from the masker, especially in a reverberant environment. This perceptual-separation-induced unmasking effect has been suggested to involve multiple cognitive processes, such as selective attention. However, it is not clear whether listeners' spatial attention modulates the effect. The present study investigated how perceptual separation and auditory spatial attention interact to facilitate speech perception in a simulated noisy and reverberant environment by analyzing the cortical auditory evoked potentials to the signal speech. The results showed that the N1 wave was significantly enhanced by perceptual separation between the signal and masker regardless of whether the participants' spatial attention was directed to the signal, whereas the P2 wave was significantly enhanced by perceptual separation only when the participants attended to the signal speech. This indicates that the perceptual-separation-induced facilitation of P2 requires more attentional resources than that of N1. The results also showed that the signal speech elicited an enhanced N1 in the contralateral hemisphere regardless of attention, but an enhanced P2 in the contralateral hemisphere only when the participant attended to the signal, indicating that the hemispheric distribution of N1 is mainly affected by the perceptual features of the acoustic stimuli, while that of P2 is affected by the listener's attentional status.
Affiliation(s)
- Changxin Zhang
  - Faculty of Education, East China Normal University, Shanghai, China
  - Key Laboratory of Speech and Hearing Science, East China Normal University, Shanghai, China
- Renxia Tao
  - Faculty of Education, East China Normal University, Shanghai, China
  - Key Laboratory of Speech and Hearing Science, East China Normal University, Shanghai, China
- Hang Zhao
  - Faculty of Education, East China Normal University, Shanghai, China
  - Key Laboratory of Speech and Hearing Science, East China Normal University, Shanghai, China

9. Koerner TK, Zhang Y. Differential effects of hearing impairment and age on electrophysiological and behavioral measures of speech in noise. Hear Res 2018;370:130-142. DOI: 10.1016/j.heares.2018.10.009.

10. Niemitalo-Haapola E, Haapala S, Kujala T, Raappana A, Kujala T, Jansson-Verkasalo E. Noise Equally Degrades Central Auditory Processing in 2- and 4-Year-Old Children. J Speech Lang Hear Res 2017;60:2297-2309. PMID: 28763806; DOI: 10.1044/2017_jslhr-h-16-0267.
Abstract
PURPOSE The aim of this study was to investigate developmental and noise-induced changes in central auditory processing, indexed by event-related potentials, in typically developing children. METHOD P1, N2, and N4 responses as well as mismatch negativities (MMNs) were recorded for standard syllables and for consonant, frequency, intensity, vowel, and vowel-duration changes in silent and noisy conditions in the same 14 children at the ages of 2 and 4 years. RESULTS The P1 and N2 latencies decreased, and the N2, N4, and MMN amplitudes increased, as the children developed. The amplitude changes were strongest at frontal electrodes. At both ages, background noise decreased the P1 amplitude, increased the N2 amplitude, and shortened the N4 latency. The noise-induced amplitude changes of P1, N2, and N4 were strongest frontally. Furthermore, background noise degraded the MMN: during noise, MMN was significantly elicited only by the consonant change at both ages and, at the age of 4 years, also by the vowel-duration change. CONCLUSIONS Developmental changes indexing maturation of central auditory processing were found in every response studied. Noise degraded sound encoding and echoic memory and impaired auditory discrimination at both ages; the older children were as vulnerable to the impact of noise as the younger children. SUPPLEMENTAL MATERIALS https://doi.org/10.23641/asha.5233939.
Affiliation(s)
- Elina Niemitalo-Haapola
  - Child Language Research Center, Faculty of Humanities, University of Oulu, Finland
  - Clinical Neurophysiology, Oulu University Hospital, Finland
- Sini Haapala
  - Clinical Neurophysiology, Oulu University Hospital, Finland
  - Department of Psychology and Speech-Language Pathology, University of Turku, Finland
- Teija Kujala
  - Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Finland
- Antti Raappana
  - PEDEGO Research Unit, University of Oulu, Finland
  - Department of Otorhinolaryngology-Head and Neck Surgery, Institute of Clinical Medicine, Oulu University Hospital, Finland
- Tiia Kujala
  - PEDEGO Research Unit, University of Oulu, Finland
  - Medical Research Center Oulu, Finland

11. Koerner TK, Zhang Y, Nelson PB, Wang B, Zou H. Neural indices of phonemic discrimination and sentence-level speech intelligibility in quiet and noise: A P3 study. Hear Res 2017;350:58-67. DOI: 10.1016/j.heares.2017.04.009.

12. Mamashli F, Khan S, Bharadwaj H, Michmizos K, Ganesan S, Garel KLA, Ali Hashmi J, Herbert MR, Hämäläinen M, Kenet T. Auditory processing in noise is associated with complex patterns of disrupted functional connectivity in autism spectrum disorder. Autism Res 2017;10:631-647. PMID: 27910247; DOI: 10.1002/aur.1714.
Abstract
Autism spectrum disorder (ASD) is associated with difficulty processing speech in a noisy background, but the neural mechanisms underlying this deficit have not been mapped. To address this question, we used magnetoencephalography to compare cortical responses to a passive mismatch paradigm between ASD and typically developing (TD) individuals. We repeated the paradigm twice, once in a quiet background and once in the presence of background noise, and focused on both the evoked mismatch field (MMF) response in temporal and frontal cortical locations and the spectrally specific functional connectivity between those locations. In the quiet condition, we found common neural sources of the MMF response in both groups, in the right temporal gyrus and inferior frontal gyrus (IFG). In the noise condition, the MMF response in the right IFG was preserved in the TD group but reduced relative to the quiet condition in the ASD group; this response also correlated with severity of ASD. Moreover, in noise we found significantly reduced normalized coherence (deviant normalized by standard) in ASD relative to TD in the beta band (14-25 Hz), between left temporal and left inferior frontal sub-regions, whereas unnormalized coherence (coherence during deviant or standard) was significantly increased in ASD relative to TD in multiple frequency bands. Our findings suggest increased recruitment of neural resources in ASD irrespective of task difficulty, alongside a reduction in the top-down modulations, usually mediated by the beta band, needed to mitigate the impact of noise on auditory processing. Autism Res 2017, 10: 631-647. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
Affiliation(s)
- Fahimeh Mamashli
  - Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts
  - Athinoula A. Martinos Center for Biomedical Imaging, MGH/MIT/Harvard, Boston, Massachusetts
  - Harvard Medical School, Boston, Massachusetts
- Sheraz Khan
  - Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts
  - Athinoula A. Martinos Center for Biomedical Imaging, MGH/MIT/Harvard, Boston, Massachusetts
  - Harvard Medical School, Boston, Massachusetts
  - McGovern Institute for Brain Research, Massachusetts Institute of Technology, Boston, Massachusetts
- Hari Bharadwaj
  - Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts
  - Athinoula A. Martinos Center for Biomedical Imaging, MGH/MIT/Harvard, Boston, Massachusetts
  - Harvard Medical School, Boston, Massachusetts
- Konstantinos Michmizos
  - Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts
  - Athinoula A. Martinos Center for Biomedical Imaging, MGH/MIT/Harvard, Boston, Massachusetts
  - Harvard Medical School, Boston, Massachusetts
  - McGovern Institute for Brain Research, Massachusetts Institute of Technology, Boston, Massachusetts
- Santosh Ganesan
  - Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts
  - Athinoula A. Martinos Center for Biomedical Imaging, MGH/MIT/Harvard, Boston, Massachusetts
  - Harvard Medical School, Boston, Massachusetts
- Keri-Lee A Garel
  - Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts
  - Athinoula A. Martinos Center for Biomedical Imaging, MGH/MIT/Harvard, Boston, Massachusetts
  - Harvard Medical School, Boston, Massachusetts
- Javeria Ali Hashmi
  - Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts
  - Athinoula A. Martinos Center for Biomedical Imaging, MGH/MIT/Harvard, Boston, Massachusetts
  - Harvard Medical School, Boston, Massachusetts
- Martha R Herbert
  - Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts
  - Athinoula A. Martinos Center for Biomedical Imaging, MGH/MIT/Harvard, Boston, Massachusetts
  - Harvard Medical School, Boston, Massachusetts
  - Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts
  - Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland
- Matti Hämäläinen
  - Athinoula A. Martinos Center for Biomedical Imaging, MGH/MIT/Harvard, Boston, Massachusetts
  - Harvard Medical School, Boston, Massachusetts
- Tal Kenet
  - Department of Neurology, Massachusetts General Hospital, Boston, Massachusetts
  - Athinoula A. Martinos Center for Biomedical Imaging, MGH/MIT/Harvard, Boston, Massachusetts
  - Harvard Medical School, Boston, Massachusetts

13. Koerner TK, Zhang Y, Nelson PB, Wang B, Zou H. Neural indices of phonemic discrimination and sentence-level speech intelligibility in quiet and noise: A mismatch negativity study. Hear Res 2016;339:40-49. PMID: 27267705; DOI: 10.1016/j.heares.2016.06.001.
Abstract
Successful speech communication requires the extraction of important acoustic cues from irrelevant background noise. In order to better understand this process, this study examined the effects of background noise on mismatch negativity (MMN) latency, amplitude, and spectral power measures as well as behavioral speech intelligibility tasks. Auditory event-related potentials (AERPs) were obtained from 15 normal-hearing participants to determine whether pre-attentive MMN measures recorded in response to a consonant (from /ba/ to /bu/) and vowel change (from /ba/ to /da/) in a double-oddball paradigm can predict sentence-level speech perception. The results showed that background noise increased MMN latencies and decreased MMN amplitudes with a reduction in the theta frequency band power. Differential noise-induced effects were observed for the pre-attentive processing of consonant and vowel changes due to different degrees of signal degradation by noise. Linear mixed-effects models further revealed significant correlations between the MMN measures and speech intelligibility scores across conditions and stimuli. These results confirm the utility of MMN as an objective neural marker for understanding noise-induced variations as well as individual differences in speech perception, which has important implications for potential clinical applications.
Affiliation(s)
- Tess K Koerner
  - Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN 55455, USA
- Yang Zhang
  - Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN 55455, USA
  - Center for Neurobehavioral Development, University of Minnesota, Minneapolis, MN 55455, USA
  - Center for Applied Translational Sensory Science, University of Minnesota, Minneapolis, MN 55455, USA
- Peggy B Nelson
  - Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN 55455, USA
  - Center for Applied Translational Sensory Science, University of Minnesota, Minneapolis, MN 55455, USA
- Boxiang Wang
  - School of Statistics, University of Minnesota, Minneapolis, MN 55455, USA
- Hui Zou
  - School of Statistics, University of Minnesota, Minneapolis, MN 55455, USA

14. Attentional modulation of informational masking on early cortical representations of speech signals. Hear Res 2016;331:119-130. DOI: 10.1016/j.heares.2015.11.002.

15. Zhang C, Lu L, Wu X, Li L. Attentional modulation of the early cortical representation of speech signals in informational or energetic masking. Brain Lang 2014;135:85-95. PMID: 24992572; DOI: 10.1016/j.bandl.2014.06.002.
Abstract
It is easier to recognize masked speech when the speech and its masker are perceived as spatially segregated. Using event-related potentials, this study examined how the early cortical representation of speech is affected by different masker types and perceptual locations, when the listener is either passively or actively listening to the target speech syllable. The results showed that the two-talker-speech masker induced a much larger masking effect on the N1/P2 complex than either the steady-state-noise masker or the amplitude-modulated speech-spectrum-noise masker did. Also, a switch from the passive- to active-listening condition enhanced the N1/P2 complex only when the masker was speech. Moreover, under the active-listening condition, perceived separation between target and masker enhanced the N1/P2 complex only when the masker was speech. Thus, when a masker is present, the effect of selective attention to the target-speech signal on the early cortical representation of the speech signal is masker-type dependent.
Affiliation(s)
- Changxin Zhang
- Department of Psychology, Speech and Hearing Research Center, McGovern Institute for Brain Research at PKU, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing 100871, China
- Lingxi Lu
- Department of Psychology, Speech and Hearing Research Center, McGovern Institute for Brain Research at PKU, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing 100871, China
- Xihong Wu
- Department of Psychology, Speech and Hearing Research Center, McGovern Institute for Brain Research at PKU, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing 100871, China
- Liang Li
- Department of Psychology, Speech and Hearing Research Center, McGovern Institute for Brain Research at PKU, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing 100871, China.
16
Moreau P, Jolicœur P, Lidji P, Peretz I. Successful measurement of the mismatch negativity despite a concurrent movie soundtrack: reduced amplitude but normal component morphology. Clin Neurophysiol 2013; 124:2378-88. [PMID: 23770087] [DOI: 10.1016/j.clinph.2013.05.013]
Abstract
OBJECTIVE To examine the mechanisms responsible for the reduction of the mismatch negativity (MMN) ERP component observed in response to pitch changes when the soundtrack of a movie is presented while recording the MMN. METHODS In three experiments we measured the MMN to tones that differed in pitch from a repeated standard tone presented with a silent subtitled movie, with the soundtrack played forward or backward, or with soundtracks set at different intensity levels. RESULTS MMN amplitude was reduced when the soundtrack was presented either forward or backward compared to the silent subtitled movie. With the soundtrack, MMN amplitude increased proportionally to the increments in the sound-to-noise intensity ratio. CONCLUSION MMN was reduced in amplitude but had normal morphology with a concurrent soundtrack, most likely because of basic acoustical interference from the soundtrack with MMN-critical tones rather than from attentional effects. SIGNIFICANCE A normal MMN can be recorded with a concurrent movie soundtrack, but signal amplitudes need to be set with caution to ensure a sufficiently high sound-to-noise ratio between MMN stimuli and the soundtrack.
Affiliation(s)
- Patricia Moreau
- International Laboratory for Brain, Music, and Sound Research (BRAMS), Department of Psychology, University of Montreal, Canada.
17
Fisher DJ, Labelle A, Knott VJ. Alterations of mismatch negativity (MMN) in schizophrenia patients with auditory hallucinations experiencing acute exacerbation of illness. Schizophr Res 2012; 139:237-45. [PMID: 22727705] [DOI: 10.1016/j.schres.2012.06.004]
Abstract
Auditory verbal hallucinations (AHs), or hearing 'voices', are one of the hallmark symptoms of patients with schizophrenia. The primary objective of this study was to compare hallucinating schizophrenia patients with healthy controls with respect to deviance detection, as indexed by the auditory mismatch negativity (MMN). Patients were recruited during an acute psychotic episode requiring hospitalization, during which time symptoms of psychosis, including auditory hallucinations, are likely to be at their most severe. MMNs to duration, frequency, gap, intensity and location deviants (as elicited by the 'optimal' multi-feature paradigm) were recorded in 12 acutely ill schizophrenia patients (SZ) with persistent AHs and 15 matched healthy controls (HC). Electrical activity was recorded from 32 scalp electrodes. MMN amplitudes and latencies for each deviant were compared between groups and were correlated with trait (PSYRATS) and state measures of AH severity and Positive and Negative Syndrome Scale (PANSS) ratings in SZs. There were significant group differences for duration, gap, intensity and location MMN amplitudes, such that SZs exhibited reduced MMNs compared to HCs. Additionally, gap MMN amplitudes were correlated with measures of hallucinatory state and frequency of AHs, while location MMN was correlated with perceived location of AHs. In summary, this study corroborates previous research reporting a robust duration MMN deficit in schizophrenia, as well as reporting gap, intensity and location MMN deficits in acutely ill schizophrenia patients with persistent AHs. Additionally, MMN amplitudes were correlated with state and trait measures of AHs. These findings offer further support to previous work suggesting that the presence of auditory hallucinations may make a significant contribution to the widely reported MMN deficits in schizophrenia.
Affiliation(s)
- Derek J Fisher
- Department of Psychology, Mount Saint Vincent University, Halifax, Nova Scotia, Canada.
18
Shetake JA, Wolf JT, Cheung RJ, Engineer CT, Ram SK, Kilgard MP. Cortical activity patterns predict robust speech discrimination ability in noise. Eur J Neurosci 2011; 34:1823-38. [PMID: 22098331] [DOI: 10.1111/j.1460-9568.2011.07887.x]
Abstract
The neural mechanisms that support speech discrimination in noisy conditions are poorly understood. In quiet conditions, spike timing information appears to be used in the discrimination of speech sounds. In this study, we evaluated the hypothesis that spike timing is also used to distinguish between speech sounds in noisy conditions that significantly degrade neural responses to speech sounds. We tested speech sound discrimination in rats and recorded primary auditory cortex (A1) responses to speech sounds in background noise of different intensities and spectral compositions. Our behavioral results indicate that rats, like humans, are able to accurately discriminate consonant sounds even in the presence of background noise that is as loud as the speech signal. Our neural recordings confirm that speech sounds evoke degraded but detectable responses in noise. Finally, we developed a novel neural classifier that mimics behavioral discrimination. The classifier discriminates between speech sounds by comparing the A1 spatiotemporal activity patterns evoked on single trials with the average spatiotemporal patterns evoked by known sounds. Unlike classifiers in most previous studies, this classifier is not provided with the stimulus onset time. Neural activity analyzed with the use of relative spike timing was well correlated with behavioral speech discrimination in quiet and in noise. Spike timing information integrated over longer intervals was required to accurately predict rat behavioral speech discrimination in noisy conditions. The similarity of neural and behavioral discrimination of speech in noise suggests that humans and rats may employ similar brain mechanisms to solve this problem.
Affiliation(s)
- Jai A Shetake
- The University of Texas at Dallas, School of Behavioral Brain Sciences, 800 West Campbell Road, GR41 Richardson, TX 75080-3021, USA
19
Renvall H, Formisano E, Parviainen T, Bonte M, Vihla M, Salmelin R. Parametric merging of MEG and fMRI reveals spatiotemporal differences in cortical processing of spoken words and environmental sounds in background noise. Cereb Cortex 2011; 22:132-43. [DOI: 10.1093/cercor/bhr095]
20
Boulenger V, Hoen M, Jacquier C, Meunier F. Interplay between acoustic/phonetic and semantic processes during spoken sentence comprehension: an ERP study. Brain Lang 2011; 116:51-63. [PMID: 20965558] [DOI: 10.1016/j.bandl.2010.09.011]
Abstract
When listening to speech in everyday-life situations, our cognitive system must often cope with signal instabilities such as sudden breaks, mispronunciations, interfering noises or reverberations potentially causing disruptions at the acoustic/phonetic interface and preventing efficient lexical access and semantic integration. The physiological mechanisms allowing listeners to react instantaneously to such fast and unexpected perturbations in order to maintain intelligibility of the delivered message are still partly unknown. The present electroencephalography (EEG) study aimed at investigating the cortical responses to real-time detection of a sudden acoustic/phonetic change occurring in connected speech and how these mechanisms interfere with semantic integration. Participants listened to sentences in which final words could contain signal reversals along the temporal dimension (time-reversed speech) of varying durations and could have either a low- or high-cloze probability within sentence context. Results revealed that early detection of the acoustic/phonetic change elicited a fronto-central negativity shortly after the onset of the manipulation that matched the spatio-temporal features of the Mismatch Negativity (MMN) recorded in the same participants during an oddball paradigm. Time reversal also affected late event-related potentials (ERPs) reflecting semantic expectancies (N400) differently when words were predictable or not from the sentence context. These findings are discussed in the context of brain signatures to transient acoustic/phonetic variations in speech. They contribute to a better understanding of natural speech comprehension as they show that acoustic/phonetic information and semantic knowledge strongly interact under adverse conditions.
Affiliation(s)
- Véronique Boulenger
- Laboratoire Dynamique du Langage, CNRS, Université Lyon 2, UMR 5596, Lyon, France.
21
Miettinen I, Alku P, Salminen N, May PJ, Tiitinen H. Responsiveness of the human auditory cortex to degraded speech sounds: reduction of amplitude resolution vs. additive noise. Brain Res 2011; 1367:298-309. [DOI: 10.1016/j.brainres.2010.10.037]
22
Miettinen I, Tiitinen H, Alku P, May PJC. Sensitivity of the human auditory cortex to acoustic degradation of speech and non-speech sounds. BMC Neurosci 2010; 11:24. [PMID: 20175890] [PMCID: PMC2837048] [DOI: 10.1186/1471-2202-11-24]
Abstract
Background Recent studies have shown that the human right-hemispheric auditory cortex is particularly sensitive to reduction in sound quality, with an increase in distortion resulting in an amplification of the auditory N1m response measured with magnetoencephalography (MEG). Here, we examined whether this sensitivity is specific to the processing of acoustic properties of speech or whether it can also be observed in the processing of sounds with a simple spectral structure. We degraded speech stimuli (vowel /a/), complex non-speech stimuli (a composite of five sinusoidals), and sinusoidal tones by decreasing the amplitude resolution of the signal waveform. The amplitude resolution was impoverished by reducing the number of bits to represent the signal samples. Auditory evoked magnetic fields (AEFs) were measured in the left and right hemisphere of sixteen healthy subjects. Results We found that the AEF amplitudes increased significantly with stimulus distortion for all stimulus types, which indicates that the right-hemispheric N1m sensitivity is not related exclusively to degradation of acoustic properties of speech. In addition, the P1m and P2m responses were amplified with increasing distortion similarly in both hemispheres. The AEF latencies were not systematically affected by the distortion. Conclusions We propose that the increased activity of AEFs reflects cortical processing of acoustic properties common to both speech and non-speech stimuli. More specifically, the enhancement is most likely caused by spectral changes brought about by the decrease of amplitude resolution, in particular the introduction of periodic, signal-dependent distortion to the original sound. Converging evidence suggests that the observed AEF amplification could reflect cortical sensitivity to periodic sounds.
Affiliation(s)
- Ismo Miettinen
- Department of Biomedical Engineering and Computational Science, Aalto University School of Science and Technology, Espoo, Finland.
23
Winkler I, Horváth J, Weisz J, Trejo LJ. Deviance detection in congruent audiovisual speech: evidence for implicit integrated audiovisual memory representations. Biol Psychol 2009; 82:281-92. [DOI: 10.1016/j.biopsycho.2009.08.011]
24
Kujala T, Brattico E. Detrimental noise effects on brain's speech functions. Biol Psychol 2009; 81:135-43. [DOI: 10.1016/j.biopsycho.2009.03.010]
25
Sequeira SDS, Specht K, Hämäläinen H, Hugdahl K. The effects of different intensity levels of background noise on dichotic listening to consonant-vowel syllables. Scand J Psychol 2008; 49:305-10. [DOI: 10.1111/j.1467-9450.2008.00664.x]
26
Fisher DJ, Labelle A, Knott VJ. Auditory hallucinations and the mismatch negativity: processing speech and non-speech sounds in schizophrenia. Int J Psychophysiol 2008; 70:3-15. [PMID: 18511139] [DOI: 10.1016/j.ijpsycho.2008.04.001]
Abstract
BACKGROUND In line with emerging research strategies focusing on specific symptoms rather than global syndromes in psychiatric disorders, we examined the functional neural correlates of auditory verbal hallucinations (AHs) in schizophrenia. Recent neuroimaging and behavioural evidence suggests a reciprocal relationship between the auditory cortex response to external sounds and that induced by AHs. METHODS The mismatch negativity (MMN), a well established event-related potential (ERP) index of auditory cortex function, was assessed in 12 hallucinating patients (HP), 12 non-hallucinating patients (NP) and 12 healthy controls (HC). The primary endpoints, MMN amplitudes and latencies recorded from anterior and posterior scalp regions, were measured in response to non-phonetic and phonetic sounds. RESULTS While schizophrenia patients as a whole differed from HCs, no significant between-group differences were observed when patients were divided into hallucinating and non-hallucinating subgroups. However, whereas MMN amplitudes in NPs and HCs were greatest in response to across-phoneme change at frontal but not temporal sites, frontal MMN amplitudes in HPs did not differ significantly across the presented stimuli, and temporal MMNs in HPs were maximally sensitive to phonetic change. SIGNIFICANCE These findings demonstrate that auditory verbal hallucinations are associated with impaired pre-attentive processing of speech in fronto-temporal networks, which may involve defective attribution of significance that is sensitive to resource limitations. Overall, this research suggests that MMN may be a useful non-invasive tool for probing relationships between hallucinatory and neural states within schizophrenia and the manner in which auditory processing is altered in these afflicted patients.
Affiliation(s)
- Derek J Fisher
- Department of Psychology/Institute of Neuroscience, Carleton University, Ottawa, Ontario, Canada.
27
Bertoli S, Smurzynski J, Probst R. Effects of age, age-related hearing loss, and contralateral cafeteria noise on the discrimination of small frequency changes: psychoacoustic and electrophysiological measures. J Assoc Res Otolaryngol 2006; 6:207-22. [PMID: 16027962] [PMCID: PMC2504594] [DOI: 10.1007/s10162-005-5029-6]
Abstract
The aim of the study was to examine central auditory processes compromised by age, age-related hearing loss, and the presentation of a distracting cafeteria noise using auditory event-related potentials (ERPs). In addition, the relation of ERPs to behavioral measures of discrimination was investigated. Three groups of subjects participated: young normal hearing, elderly subjects with normal hearing for their age, and elderly hearing-impaired subjects. Psychoacoustic frequency discrimination thresholds for a 1000-Hz pure tone were determined in quiet and in the presence of a contralateral cafeteria noise. To elicit ERPs, small frequency contrasts were presented with and without noise under unattended and attended conditions. In the attended condition, behavioral measures of d' detectability and reaction times were also obtained. Noise affected all measures of behavioral frequency discrimination significantly. Except N1, all ERP components in the standard and difference waveforms decreased significantly in amplitude and increased in latency to the same degree in all three subject groups, arguing against a specific age-related sensitivity to the effects of contralateral background noise. For N1 amplitude, the effect of noise was different in the three subject groups, with a complex interaction of age, hearing loss, and attention. Behavioral frequency discrimination was not affected by age but deteriorated significantly in the elderly subjects with hearing loss. In the electrophysiological test, age-related changes occurred at various levels. The most prominent finding in the response to the standard stimuli was a sustained negativity (N2) following P2 in the young subjects that was absent in the elderly, possibly indicating a deficit in the inhibition of irrelevant information processing. 
In the attended difference waveform, significantly larger N2b and smaller P3b amplitudes and longer N2b and P3b latencies were observed in the elderly indicating different processing strategies. The pronounced age-related changes in the later cognitive components suggest that the discrimination of difficult contrasts, although behaviorally maintained, becomes more effortful in the elderly.
Affiliation(s)
- Sibylle Bertoli
- Department of Otorhinolaryngology, University Hospital, CH-4031, Basel, Switzerland.
28
Muller-Gass A, Stelmack RM, Campbell KB. The effect of visual task difficulty and attentional direction on the detection of acoustic change as indexed by the mismatch negativity. Brain Res 2006; 1078:112-30. [PMID: 16497283] [DOI: 10.1016/j.brainres.2005.12.125]
Abstract
Näätänen's model of auditory processing purports that attention does not affect the MMN. The present study investigates this claim through two different manipulations. First, the effect of visual task difficulty on the passively elicited MMN is assessed. Second, the MMNs elicited by stimuli under attended and ignored conditions are compared. In Experiment 1, subjects were presented with mixed sequences of equiprobable auditory and visual stimuli. The auditory stimuli consisted of standard (80 dB SPL 1000 Hz), frequency deviant (1050 Hz), and intensity deviant (70 dB SPL) tone pips. In a first instance, subjects were instructed to ignore the auditory stimulation and engage in an easy and difficult visual discrimination task (focused condition). Subsequently, they were asked to attend to both modalities and detect visual and auditory deviant stimuli (divided condition). The results indicate that the passively elicited MMN to frequency and intensity deviants did not significantly vary with visual task difficulty, in spite of the fact that the easy and difficult tasks showed a wide variation in performance. The manipulation of the attentional direction (focused vs. divided conditions) did result in a significant effect on the MMN elicited by the intensity, but not frequency, deviant. The intensity MMN was larger at frontal sites when subjects' attention was directed to both modalities as compared to only the visual modality. The attentional effect on the MMN to the intensity deviants only may be due to the specific deviant feature or the poorer perceptual discriminability of this deviant from the standard. Experiment 2 was designed to address this issue. The methods of Experiment 2 were identical to those of Experiment 1 with the exception that the intensity deviant (60 dB SPL) was made to be more perceptible than the frequency deviant (1016 Hz) when compared to the standard stimulus (80 dB SPL 1000 Hz). 
The results of Experiment 2 also demonstrated that the passively elicited MMN was not affected by large variations in visual task difficulty; this provides convincing evidence that the MMN is independent of visual task demands. Similarly to Experiment 1, the direction of attention again had a significant effect on the MMN. In Experiment 2, however, the frequency MMN (and not the intensity MMN) was larger at frontal sites during divided attention compared to focused visual attention. The most parsimonious explanation of these results is that attention enhances the discriminability of the deviant from the standard background stimulation. As such, small acoustic changes would benefit from attention whereas the discriminability of larger changes may not be significantly enhanced.
29
Sabri M, Campbell KB. Is the failure to detect stimulus deviance during sleep due to a rapid fading of sensory memory or a degradation of stimulus encoding? J Sleep Res 2005; 14:113-22. [PMID: 15910509] [DOI: 10.1111/j.1365-2869.2005.00446.x]
Abstract
The mismatch negativity (MMN) is thought to reflect the outcome of a system responsible for the detection of change in an otherwise repetitive, homogenous acoustic environment. This process depends on the storage and maintenance of a sensory representation of the frequently presented stimulus to which the deviant stimulus is compared. Few studies have been able to record the MMN in non-rapid eye movement (NREM) sleep. This pattern of results might be explained by either a rapid fading of sensory memory or an inhibition of stimulus input prior to entry into the cortical MMN generator site. The present study used a very rapid rate of presentation in an attempt to capture mismatch-related negativity prior to the fading of sensory memory. Auditory event-related potentials were recorded from 12 subjects during a single sleep period. A 1000 Hz standard stimulus was presented every 150 ms. At random, on 6.6% of the trials, the standard was changed to either a large 2000 Hz or a small 1100 Hz deviant. In wakefulness, the large deviant elicited an extended negativity that was reduced in amplitude following the presentation of the small deviant. This negativity was also apparent during REM sleep following the presentation of the large deviant. These deviant-related negativities (DRNs) were probably a composite of N1 and MMN activity. During NREM sleep (stage 2 and slow-wave sleep), only the large deviant continued to elicit a DRN. However this DRN might be overlapped by the initial activity of a component that is unique to sleep, the N350. There was little evidence of the DRN or the MMN during sleep following the presentation of the small deviant. A rapid rate of presentation, therefore, does not preserve the MMN following small deviance within sleep. It is possible that inhibition of sensory input occurs before entry into the MMN generating system in the temporal cortex.
Affiliation(s)
- Merav Sabri
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI 53226-3548, USA.
30
Abstract
OBJECTIVE This study investigated the effects of decreased audibility in low-frequency spectral regions, produced by low-pass noise masking, on cortical event-related potentials (ERPs) to the speech sounds /ba/ and /da/. DESIGN The speech sounds were presented to normal-hearing adults (N = 10) at 65- and 80-dB peak-to-peak equivalent SPL while they were engaged in an active condition (pressing a button to deviant sounds) and a passive condition (ignoring the stimuli and reading a book). Broadband masking noise was simultaneously presented at an intensity sufficient to mask the response to the 65-dB speech sounds and subsequently low-pass filtered. The conditions were quiet (no masking), low-pass noise cutoff frequencies of 250, 500, 1000, 2000, and 4000 Hz, and broadband noise. RESULTS As the cutoff frequency of the low-pass noise masker was raised, ERP latencies increased and amplitudes decreased. The low-pass noise affected N1 differently than the other ERP or behavioral measures, particularly for responses to 80-dB speech stimuli. N1 showed a smaller decrease in amplitude and a smaller increase in latency compared with the other measures. Further, the cutoff frequency where changes first occurred was different for N1. For 80-dB stimuli, N1 amplitudes showed significant changes when the low-pass noise masker cutoff was raised to 4000 Hz. In contrast, d', MMN, N2, and P3 amplitudes did not change significantly until the low-pass noise masker was raised to 2000 Hz. N1 latencies showed significant changes when the low-pass noise masker was raised to 1000 Hz, whereas RT, MMN, N2, and P3 latencies did not change significantly until the low-pass noise masker was raised to 2000 Hz. No significant differences in response amplitudes were seen across the hemispheres (electrode sites C3M versus C4M) in quiet, or in masking noise. 
CONCLUSIONS These results indicate that decreased audibility, resulting from the masking, affects N1 in a differential manner compared with MMN, N2, P3, and behavioral measures. N1 indexes the presence of audible stimulus energy, being present when speech sounds are audible, whether or not they are discriminable. MMN indexes stimulus discrimination at a pre-attentive level. It was present only when behavioral measures indicated the ability to differentiate the speech sounds. N2 and P3 also were present only when the speech sounds were behaviorally discriminated. N2 and P3 index stimulus discrimination at a conscious level. These cortical ERP in low-pass noise studies provide insight into the changes in brain processes and behavioral performance that occur when audibility is reduced, such as with low frequency hearing loss.
Affiliation(s)
- Brett A Martin
- School of Graduate Medical Education, Seton Hall University, South Orange, New Jersey, USA
31
Kozou H, Kujala T, Shtyrov Y, Toppila E, Starck J, Alku P, Näätänen R. The effect of different noise types on the speech and non-speech elicited mismatch negativity. Hear Res 2005; 199:31-9. [PMID: 15574298] [DOI: 10.1016/j.heares.2004.07.010]
Abstract
The effect of different types of real-life noise on the central auditory processing of speech and non-speech sounds was evaluated by means of the mismatch negativity (MMN) and behavioral responses. Subjects (19-34 years old; 6 males, 4 females) were presented, in separate conditions, with either speech or non-speech stimuli of approximately equal complexity in five background conditions: babble noise, industrial noise, traffic noise, wide band noise, and a silent condition. Whereas there were no effects of stimuli or noise on the behavioral responses, the MMN results revealed that speech and non-speech sounds are processed differently both in silent and noisy conditions. Speech processing was more affected than non-speech processing in all noise conditions. Moreover, different noise types had differential effects on the pre-attentive discrimination of speech and non-speech sounds, as reflected in the MMN. Babble and industrial noises dramatically reduced the MMN amplitudes for both stimulus types, while traffic noise affected only speech stimuli.
Affiliation(s)
- H Kozou
- Department of Psychology, Cognitive Brain Research Unit, University of Helsinki, P.O. Box 9, FIN-00014 Helsinki, Finland
32
Muller-Gass A, Campbell K. Event-related potential measures of the inhibition of information processing: I. Selective attention in the waking state. Int J Psychophysiol 2002; 46:177-95. [PMID: 12445947] [DOI: 10.1016/s0167-8760(02)00111-3]
Abstract
This article reviews the effects of selective attention on event-related potentials (ERPs). Attention has little, if any, effect on short-latency exogenous ERPs. The longer-latency ERPs can be markedly affected by manipulation of the subject's level of attention. For example, a late positive wave, P300, appears to occur only if subjects actively detect an infrequently occurring target stimulus. However, a number of other late positive waves may also occur independently of the direction of attention, particularly if elicited by highly biologically or psychologically relevant stimuli. Attention may also interact with an earlier, apparently exogenous, negative waveform, N1. This could be due to the overlapping and summating effect of an attentional-related waveform, the processing negativity. The presentation of a physically deviant stimulus occurring among a train of homogeneous standard stimuli will elicit another negative wave, the mismatch negativity (MMN). The MMN has traditionally been thought to occur independently of attention. More recent studies have, however, shown that attention can modulate the MMN. This may, however, be explained by the summating effects of other overlapping components. Interpreting the scalp-recorded ERP can therefore require judicious care. Design of experiments must take into account the fact that the magnitude of attentional effects will depend on a number of different influences, some of which are very subtle and complex. A problem with any study in the waking and alert human is that the subject may not be able to completely ignore stimuli, in spite of instructions to do so. For this reason, the study of unconscious states, such as sleep, may prove to be especially fruitful in understanding the effects of attention in the waking state.
|
33
|
Hertrich I, Mathiak K, Lutzenberger W, Ackermann H. Hemispheric lateralization of the processing of consonant-vowel syllables (formant transitions): effects of stimulus characteristics and attentional demands on evoked magnetic fields. Neuropsychologia 2002; 40:1902-17. [PMID: 12207989 DOI: 10.1016/s0028-3932(02)00063-5] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
It is still unsettled to what extent the temporal resolution of dynamic acoustic events (formant transitions) or phonetic/linguistic processes contribute to the predominant left-hemisphere encoding of consonant-vowel syllables. To further elucidate the underlying mechanisms, evoked magnetic fields in response to consonant-vowel events (synthetic versus spoken) were recorded (oddball design: standards = binaural /ba/, deviants = dichotic /ba/-/da/; 20 right-handed subjects) under different attentional conditions (visual distraction versus stimulus identification). Spoken events yielded a left-lateralized peak phase of the mismatch field (MMF; 150-200 ms post-stimulus onset) in response to right-ear deviants during distraction. By contrast, pre-attentive processing of synthetic items gave rise to a left-enhanced MMF onset (100 ms) but failed to elicit later lateralization effects. In case of directed attention, synthetic deviants elicited a left-pronounced MMF peak resembling the pre-attentive response to natural syllables. These interactions of MMF asymmetry with signal structure and attentional load indicate two distinct successive left-lateralization effects: signal-related operations and representation of 'phonetic traces'. Furthermore, a right-lateralized early MMF component (100 ms) emerged in response to natural syllables during pre-attentive processing and to synthetic stimuli in case of directed attention. Conceivably, these effects indicate right-hemisphere operations prior to phonetic evaluation, such as periodicity representation. Two distinct time windows showed correlations between dichotic listening performance and ear effects on magnetic responses, reflecting early gain factors (ca. 75 ms post-stimulus onset) and binaural fusion strategies (ca. 200 ms), respectively. Finally, gender interacted with MMF lateralization, indicating different processing strategies in case of artificial speech signals.
Affiliation(s)
- Ingo Hertrich
- Department of Neurology, University of Tübingen, Otfried-Müller-Straße 47, Tübingen, Germany.
|