1
Whitten A, Key AP, Mefferd AS, Bodfish JW. Auditory event-related potentials index faster processing of natural speech but not synthetic speech over nonspeech analogs in children. Brain and Language 2020; 207:104825. PMID: 32563764. DOI: 10.1016/j.bandl.2020.104825.
Abstract
Given the crucial role of speech sounds in human language, it may be beneficial for speech to be supported by more efficient auditory and attentional neural processing mechanisms compared to nonspeech sounds. However, previous event-related potential (ERP) studies have found either no differences or slower auditory processing of speech than nonspeech, as well as inconsistent attentional processing. We hypothesized that this may be due to the use of synthetic stimuli in past experiments. The present study measured ERP responses during passive listening to both synthetic and natural speech and complexity-matched nonspeech analog sounds in 22 8-11-year-old children. We found that although children were more likely to show immature auditory ERP responses to the more complex natural stimuli, ERP latencies were significantly faster to natural speech compared to cow vocalizations, but were significantly slower to synthetic speech compared to tones. The attentional results indicated a P3a orienting response only to the cow sound, and we discuss potential methodological reasons for this. We conclude that our results support more efficient auditory processing of natural speech sounds in children, though more research with a wider array of stimuli will be necessary to confirm these results. Our results also highlight the importance of using natural stimuli in research investigating the neurobiology of language.
Affiliation(s)
- Allison Whitten
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA.
- Alexandra P Key
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt Psychiatric Hospital, 1601 23rd Ave. S, Nashville, TN, USA; Vanderbilt Kennedy Center, 110 Magnolia Cir, Nashville, TN, USA
- Antje S Mefferd
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA; Vanderbilt Kennedy Center, 110 Magnolia Cir, Nashville, TN, USA
- James W Bodfish
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt Psychiatric Hospital, 1601 23rd Ave. S, Nashville, TN, USA; Vanderbilt Kennedy Center, 110 Magnolia Cir, Nashville, TN, USA; Vanderbilt Brain Institute, 6133 Medical Research Building III, 465 21st Avenue S., Nashville, TN, USA
2
Zhang X, Li X, Chen J, Gong Q. Background Suppression and its Relation to Foreground Processing of Speech Versus Non-speech Streams. Neuroscience 2018; 373:60-71. PMID: 29337239. DOI: 10.1016/j.neuroscience.2018.01.009.
Abstract
Since sound perception takes place against a background with a certain amount of noise, both speech and non-speech processing involve extraction of target signals and suppression of background noise. Previous works on early processing of speech phonemes largely neglected how background noise is encoded and suppressed. This study aimed to fill in this gap. We adopted an oddball paradigm where speech (vowels) or non-speech stimuli (complex tones) were presented with or without a background of amplitude-modulated noise and analyzed cortical responses related to foreground stimulus processing, including mismatch negativity (MMN), N2b, and P300, as well as neural representations of the background noise, that is, auditory steady-state response (ASSR). We found that speech deviants elicited later and weaker MMN, later N2b, and later P300 than non-speech ones, but N2b and P300 had similar strength, suggesting more complex processing of certain acoustic features in speech. Only for vowels, background noise enhanced N2b strength relative to silence, suggesting an attention-related speech-specific process to improve perception of foreground targets. In addition, noise suppression in speech contexts, quantified by ASSR amplitude reduction after stimulus onset, was lateralized towards the left hemisphere. The left-lateralized suppression following N2b was associated with the N2b enhancement in noise for speech, indicating that foreground processing may interact with background suppression, particularly during speech processing. Together, our findings indicate that the differences between perception of speech and non-speech sounds involve not only the processing of target information in the foreground but also the suppression of irrelevant aspects in the background.
Affiliation(s)
- Xiaochen Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Xiaolin Li
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Jingjing Chen
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Qin Gong
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; Research Center of Biomedical Engineering, Graduate School at Shenzhen, Tsinghua University, Shenzhen, Guangdong Province, China.
3
Beukema S, Gonzalez-Lara LE, Finoia P, Kamau E, Allanson J, Chennu S, Gibson RM, Pickard JD, Owen AM, Cruse D. A hierarchy of event-related potential markers of auditory processing in disorders of consciousness. Neuroimage Clin 2016; 12:359-71. PMID: 27595064. PMCID: PMC4995605. DOI: 10.1016/j.nicl.2016.08.003.
Abstract
Functional neuroimaging of covert perceptual and cognitive processes can inform the diagnoses and prognoses of patients with disorders of consciousness, such as the vegetative and minimally conscious states (VS; MCS). Here we report an event-related potential (ERP) paradigm for detecting a hierarchy of auditory processes in a group of healthy individuals and patients with disorders of consciousness. Simple cortical responses to sounds were observed in all 16 patients; 7/16 (44%) patients exhibited markers of the differential processing of speech and noise; and 1 patient produced evidence of the semantic processing of speech (i.e., the N400 effect). In several patients, the level of auditory processing that was evident from ERPs was higher than the abilities that were evident from behavioural assessment, indicating a greater sensitivity of ERPs in some cases. However, there were no differences in auditory processing between the VS and MCS patient groups, indicating a lack of diagnostic specificity for this paradigm. Reliably detecting semantic processing by means of the N400 effect in passively listening single subjects remains a challenge, and multiple assessment methods are needed in order to fully characterise the abilities of patients with disorders of consciousness.
Affiliation(s)
- Steve Beukema
- The Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- McGill Vision Research, Department of Ophthalmology, McGill University, Montreal, Canada
- Paola Finoia
- Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- Evelyn Kamau
- Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- Judith Allanson
- Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- Srivas Chennu
- School of Computing, University of Kent, Chatham Maritime, UK
- Department of Clinical Neurosciences, The University of Cambridge, Cambridge, UK
- Raechelle M. Gibson
- The Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- John D. Pickard
- Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- Adrian M. Owen
- The Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- Damian Cruse
- School of Psychology, University of Birmingham, Birmingham, UK
4
Auditory discrimination predicts linguistic outcome in Italian infants with and without familial risk for language learning impairment. Dev Cogn Neurosci 2016; 20:23-34. PMID: 27295127. PMCID: PMC6987703. DOI: 10.1016/j.dcn.2016.03.002.
Abstract
Highlights
- Italian infants with familial risk for LLI show deficits in RAP abilities.
- Early multi-feature RAP skills predict later expressive language skills.
- Different acoustic features are critical to normative language acquisition.
- Early RAP skills represent a stable cross-linguistic risk marker for LLI.
- Early intervention programs should be implemented based on these results.
Infants’ ability to discriminate between auditory stimuli presented in rapid succession and differing in fundamental frequency (Rapid Auditory Processing [RAP] abilities) has been shown to be anomalous in infants at familial risk for Language Learning Impairment (LLI) and to predict later language outcomes. This study represents the first attempt to investigate RAP in Italian infants at risk for LLI (FH+), examining two critical acoustic features: frequency and duration, both embedded in a rapidly-presented acoustic environment. RAP skills of 24 FH+ and 32 control (FH−) Italian 6-month-old infants were characterized via EEG/ERP using a multi-feature oddball paradigm. Outcome measures of expressive vocabulary were collected at 20 months. Group differences favoring FH− infants were identified: in FH+ infants, the latency of the N2* peak was delayed and the mean amplitude of the positive mismatch response was reduced, primarily for frequency discrimination and within the right hemisphere. Moreover, both EEG measures were correlated with language scores at 20 months. Results indicate that RAP abilities are atypical in Italian infants with a first-degree relative affected by LLI and that this impacts later linguistic skills. These findings provide a compelling cross-linguistic comparison with previous research on American infants, supporting the biological unity hypothesis of LLI.
5
Bendixen A, Schwartze M, Kotz SA. Temporal dynamics of contingency extraction from tonal and verbal auditory sequences. Brain and Language 2015; 148:64-73. PMID: 25512177. DOI: 10.1016/j.bandl.2014.11.009.
Abstract
Consecutive sound events are often to some degree predictive of each other. Here we investigated the brain's capacity to detect contingencies between consecutive sounds by means of electroencephalography (EEG) during passive listening. Contingencies were embedded either within tonal or verbal stimuli. Contingency extraction was measured indirectly via the elicitation of the mismatch negativity (MMN) component of the event-related potential (ERP) by contingency violations. The MMN results indicate that structurally identical forms of predictability can be extracted from both tonal and verbal stimuli. We also found similar generators to underlie the processing of contingency violations across stimulus types, as well as similar performance in an active-listening follow-up test. However, the process of passive contingency extraction was considerably slower (twice as many rule exemplars were needed) for verbal than for tonal stimuli. These results suggest caution in transferring findings on complex predictive regularity processing obtained with tonal stimuli directly to the speech domain.
Affiliation(s)
- Alexandra Bendixen
- Auditory Psychophysiology Lab, Department of Psychology, Cluster of Excellence "Hearing4all", European Medical School, Carl von Ossietzky University of Oldenburg, D-26111 Oldenburg, Germany; Institute of Psychology, University of Leipzig, D-04103 Leipzig, Germany.
- Michael Schwartze
- School of Psychological Sciences, University of Manchester, M13 9PL Manchester, UK; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, D-04103 Leipzig, Germany.
- Sonja A Kotz
- School of Psychological Sciences, University of Manchester, M13 9PL Manchester, UK; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, D-04103 Leipzig, Germany.
6
Christmann CA, Berti S, Steinbrink C, Lachmann T. Differences in sensory processing of German vowels and physically matched non-speech sounds as revealed by the mismatch negativity (MMN) of the human event-related brain potential (ERP). Brain and Language 2014; 136:8-18. PMID: 25108306. DOI: 10.1016/j.bandl.2014.07.004.
Abstract
We compared the processing of speech and non-speech by means of the mismatch negativity (MMN). For this purpose, the MMN elicited by vowels was compared to that elicited by two non-speech stimulus types: spectrally rotated vowels, which have the same stimulus complexity as the speech stimuli, and sounds based on the formant bands of the vowels, representing non-speech stimuli of lower complexity than the other stimulus types. This design makes it possible to control for effects of stimulus complexity when comparing neural correlates of speech and non-speech processing. Deviants within a modified multi-feature design differed either in duration or in spectral property. Moreover, the difficulty of discriminating between the standard and the two deviants was controlled for each stimulus type by means of an additional active discrimination task. Vowels elicited a larger MMN than both non-speech stimulus types, supporting the concept of language-specific phoneme representations and the role of the participants' prior experience.
Affiliation(s)
- Corinna A Christmann
- Center for Cognitive Science, Cognitive and Developmental Psychology Unit, University of Kaiserslautern, Kaiserslautern, Germany.
- Stefan Berti
- Department of Clinical Psychology and Neuropsychology, Institute for Psychology, Johannes Gutenberg University Mainz, Mainz, Germany
- Claudia Steinbrink
- Center for Cognitive Science, Cognitive and Developmental Psychology Unit, University of Kaiserslautern, Kaiserslautern, Germany
- Thomas Lachmann
- Center for Cognitive Science, Cognitive and Developmental Psychology Unit, University of Kaiserslautern, Kaiserslautern, Germany
7
Varvatsoulias G. Voice-Sensitive Areas in the Brain: A Single Participant Study Coupled With Brief Evolutionary Psychological Considerations. Psychological Thought 2014. DOI: 10.5964/psyct.v7i1.98.
8
Kuuluvainen S, Nevalainen P, Sorokin A, Mittag M, Partanen E, Putkinen V, Seppänen M, Kähkönen S, Kujala T. The neural basis of sublexical speech and corresponding nonspeech processing: a combined EEG-MEG study. Brain and Language 2014; 130:19-32. PMID: 24576806. DOI: 10.1016/j.bandl.2014.01.008.
Abstract
We addressed the neural organization of speech versus nonspeech sound processing by investigating preattentive cortical auditory processing of changes in five features of a consonant-vowel syllable (consonant, vowel, sound duration, frequency, and intensity) and their acoustically matched nonspeech counterparts in a simultaneous EEG-MEG recording of mismatch negativity (MMN/MMNm). Overall, speech-sound processing was enhanced compared to nonspeech sound processing. This effect was strongest in the left hemisphere for changes that affect word meaning (consonant, vowel, and vowel duration), and also held in the right hemisphere for the vowel identity change. Furthermore, in the right hemisphere, speech-sound frequency and intensity changes were processed faster than their nonspeech counterparts, and there was a trend for speech enhancement in frequency processing. In summary, the results support the proposed existence of long-term memory traces for speech sounds in the auditory cortices, and indicate at least partly distinct neural substrates for speech and nonspeech sound processing.
Affiliation(s)
- Soila Kuuluvainen
- Cognitive Brain Research Unit, Institute of Behavioral Sciences, P.O. Box 9, 00014 University of Helsinki, Finland.
- Päivi Nevalainen
- BioMag Laboratory, Hospital District of Helsinki and Uusimaa, HUS Medical Imaging Center, P.O. Box 340, 00029 HUS, Helsinki University Central Hospital, Helsinki, Finland
- Alexander Sorokin
- Cognitive Brain Research Unit, Institute of Behavioral Sciences, P.O. Box 9, 00014 University of Helsinki, Finland; Laboratory of Neurophysiology, Mental Health Research Centre, Russian Academy of Medical Sciences, Kashirskoe sh. 34, 115522 Moscow, Russia; Centre of Neurobiological Diagnostics, Moscow State University of Psychology and Education, Sretenka 29, 127051 Moscow, Russia
- Maria Mittag
- Cognitive Brain Research Unit, Institute of Behavioral Sciences, P.O. Box 9, 00014 University of Helsinki, Finland; University of Washington, Institute for Learning and Brain Sciences, Seattle, Washington, United States of America
- Eino Partanen
- Cognitive Brain Research Unit, Institute of Behavioral Sciences, P.O. Box 9, 00014 University of Helsinki, Finland; Center of Excellence in Interdisciplinary Music Research, Department of Music, P.O. Box 35, 40014 University of Jyväskylä, Finland
- Vesa Putkinen
- Cognitive Brain Research Unit, Institute of Behavioral Sciences, P.O. Box 9, 00014 University of Helsinki, Finland; Center of Excellence in Interdisciplinary Music Research, Department of Music, P.O. Box 35, 40014 University of Jyväskylä, Finland
- Miia Seppänen
- Cognitive Brain Research Unit, Institute of Behavioral Sciences, P.O. Box 9, 00014 University of Helsinki, Finland; Center of Excellence in Interdisciplinary Music Research, Department of Music, P.O. Box 35, 40014 University of Jyväskylä, Finland
- Seppo Kähkönen
- BioMag Laboratory, Hospital District of Helsinki and Uusimaa, HUS Medical Imaging Center, P.O. Box 340, 00029 HUS, Helsinki University Central Hospital, Helsinki, Finland
- Teija Kujala
- Cognitive Brain Research Unit, Institute of Behavioral Sciences, P.O. Box 9, 00014 University of Helsinki, Finland; CICERO Learning, Institute of Behavioral Sciences, P.O. Box 9, 00014 University of Helsinki, Finland
9
Baart M, Stekelenburg JJ, Vroomen J. Electrophysiological evidence for speech-specific audiovisual integration. Neuropsychologia 2014; 53:115-21. DOI: 10.1016/j.neuropsychologia.2013.11.011.
10
Lohvansuu K, Hämäläinen JA, Tanskanen A, Bartling J, Bruder J, Honbolygó F, Schulte-Körne G, Démonet JF, Csépe V, Leppänen PHT. Separating mismatch negativity (MMN) response from auditory obligatory brain responses in school-aged children. Psychophysiology 2013; 50:640-52. DOI: 10.1111/psyp.12048.
Affiliation(s)
- Kaisa Lohvansuu
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Annika Tanskanen
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Jürgen Bartling
- Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, University of Munich, München, Germany
- Jennifer Bruder
- Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, University of Munich, München, Germany
- Ferenc Honbolygó
- Institute for Psychology, Hungarian Academy of Sciences, Budapest, Hungary
- Gerd Schulte-Körne
- Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, University of Munich, München, Germany
- Valéria Csépe
- Institute for Psychology, Hungarian Academy of Sciences, Budapest, Hungary
11
Fast parametric evaluation of central speech-sound processing with mismatch negativity (MMN). Int J Psychophysiol 2013. DOI: 10.1016/j.ijpsycho.2012.11.010.
12
Leung S, Cornella M, Grimm S, Escera C. Is fast auditory change detection feature specific? An electrophysiological study in humans. Psychophysiology 2012; 49:933-42. DOI: 10.1111/j.1469-8986.2012.01375.x.
13
The effects of visual material and temporal synchrony on the processing of letters and speech sounds. Exp Brain Res 2011; 211:287-98. DOI: 10.1007/s00221-011-2686-z.
14
How can the brain's resting state activity generate hallucinations? A 'resting state hypothesis' of auditory verbal hallucinations. Schizophr Res 2011; 127:202-14. PMID: 21146961. DOI: 10.1016/j.schres.2010.11.009.
Abstract
While several hypotheses about the neural mechanisms underlying auditory verbal hallucinations (AVH) have been suggested, the exact role of the recently highlighted intrinsic resting state activity of the brain remains unclear. Based on recent findings, we therefore developed what we call the 'resting state hypothesis' of AVH. Our hypothesis suggests that AVH may be traced back to abnormally elevated resting state activity in the auditory cortex itself, abnormal modulation of the auditory cortex by anterior cortical midline regions as part of the default-mode network, and neural confusion between auditory cortical resting state changes and stimulus-induced activity. We discuss evidence in favour of our 'resting state hypothesis' and show its correspondence with phenomenological accounts.
15
Zevin JD, Datta H, Maurer U, Rosania KA, McCandliss BD. Native language experience influences the topography of the mismatch negativity to speech. Front Hum Neurosci 2010; 4:212. PMID: 21267425. PMCID: PMC3024563. DOI: 10.3389/fnhum.2010.00212.
Abstract
The ability to learn second language speech sound categories declines during development. We examined this phenomenon by studying the mismatch negativity (MMN) to the /r/ - /l/ distinction in native English speakers and learners of English as a second language who are native speakers of Japanese. Previous studies have suggested that the MMN is remarkably plastic when evaluated as a waveform at a central electrode. We replicated this finding: analyses of the MMN at a typical electrode location (Fz) revealed only small, non-significant differences between groups, despite large behavioral differences in the ability to discriminate these sounds from one another. Topographic analyses, however, revealed reliable differences in lateralization of the MMN, such that native English speakers' responses were left-lateralized relative to native Japanese speakers' responses.
Affiliation(s)
- Jason D. Zevin
- Sackler Institute for Developmental Psychobiology, Weill Cornell Medical College, New York, NY, USA
- Neuroscience Program, Weill Cornell Medical College, New York, NY, USA
- Hia Datta
- Sackler Institute for Developmental Psychobiology, Weill Cornell Medical College, New York, NY, USA
- Urs Maurer
- Department of Child and Adolescent Psychiatry, University of Zurich, Zurich, Switzerland
- Kara A. Rosania
- Neuroscience Program, Weill Cornell Medical College, New York, NY, USA
- Bruce D. McCandliss
- Department of Psychology, Peabody College of Education, Vanderbilt University, Nashville, USA
16
Effects of various articulatory features of speech on cortical event-related potentials and behavioral measures of speech-sound processing. Ear Hear 2010; 31:491-504. PMID: 20453651. DOI: 10.1097/aud.0b013e3181d8683d.
Abstract
OBJECTIVE: To investigate the effects of three articulatory features of speech (i.e., vowel-space contrast, place of articulation of stop consonants, and voiced/voiceless distinctions) on cortical event-related potentials (ERPs) (waves N1, mismatch negativity, N2b, and P3b) and their related behavioral measures of discrimination (d-prime sensitivity and reaction time [RT]) in normal-hearing adults, to increase our knowledge of how the brain responds to acoustical differences that occur within an articulatory speech feature and across articulatory features of speech.
DESIGN: Cortical ERPs were recorded to three sets of consonant-vowel speech stimuli (/bi/ versus /bu/, /ba/ versus /da/, /da/ versus /ta/) presented at 65 and 80 dB peak-to-peak equivalent SPL from 20 normal-hearing adults. All speech stimuli were presented in an oddball paradigm. Cortical ERPs were recorded from 10 individuals in the active-listening condition and another 10 individuals in the passive-listening condition. All listeners were tested at both stimulus intensities.
RESULTS: Mean amplitudes for all ERP components were considerably larger for the responses to the vowel contrast than for the responses to the two consonant contrasts. Similarly, the mean mismatch negativity, P3b, and RT latencies were significantly shorter for the responses to the vowel versus consonant contrasts. For the majority of ERP components, only small nonsignificant differences occurred in either the ERP amplitude or the latency response measurements for stimuli within a particular articulatory feature of speech.
CONCLUSIONS: The larger response amplitudes and earlier latencies for the cortical ERPs to the vowel versus consonant stimuli are likely related, in part, to the large spectral differences present in these speech contrasts. The measurements of response strength (amplitudes and d-prime scores) and response timing (ERP and RT latencies) for the various cortical ERPs suggest that the brain may have an easier task processing the steady-state information present in the vowel stimuli than the rapidly changing formant transitions in the consonant stimuli.
17
Steinberg J, Truckenbrodt H, Jacobsen T. Preattentive Phonotactic Processing as Indexed by the Mismatch Negativity. J Cogn Neurosci 2010; 22:2174-85. DOI: 10.1162/jocn.2009.21408.
Abstract
Processing of an obligatory phonotactic restriction outside the focus of the participants' attention was investigated by means of ERPs using (reversed) experimental oddball blocks. Dorsal fricative assimilation (DFA) is a phonotactic constraint in German grammar that is violated in *[ɛx] but not in [ɔx], [ɛ∫], and [ɔ∫]. These stimulus sequences engage the auditory deviance detection mechanism as reflected by the MMN component of the ERP. In Experiment 1 (n = 16), stimuli were contrasted pairwise such that they shared the initial vowel but differed with regard to the fricative. Phonotactically ill-formed deviants elicited stronger MMN responses than well-formed deviants that differed acoustically in the same way from the standard stimulation but did not contain a phonotactic violation. In Experiment 2 (n = 16), stimuli were contrasted such that they differed with regard to the vowel but shared the fricative. MMN was elicited by the vowel change. An additional, later MMN response was observed for the phonotactically ill-formed syllable only. This MMN cannot be attributed to any phonetic or segmental difference between standard and deviant. These findings suggest that implicit phonotactic knowledge is activated and applied in preattentive speech processing.
Affiliation(s)
- Thomas Jacobsen
- University of Leipzig, Germany
- Helmut Schmidt University/University of the Federal Armed Forces, Hamburg, Germany
18
Sorokin A, Alku P, Kujala T. Change and novelty detection in speech and non-speech sound streams. Brain Res 2010; 1327:77-90. DOI: 10.1016/j.brainres.2010.02.052.
19
Milovanov R, Huotilainen M, Esquef PAA, Alku P, Välimäki V, Tervaniemi M. The role of musical aptitude and language skills in preattentive duration processing in school-aged children. Neurosci Lett 2009; 460:161-5. PMID: 19481587. DOI: 10.1016/j.neulet.2009.05.063.
Abstract
We examined 10-12-year-old elementary school children's ability to preattentively process sound durations in music and speech stimuli. In total, 40 children had either advanced foreign-language production skills and higher musical aptitude, or less advanced results in both the musicality and linguistic tests. Event-related potential (ERP) recordings of the mismatch negativity (MMN) show that duration changes in musical sounds are processed more prominently and accurately than changes in speech sounds. Moreover, children with advanced pronunciation and musicality skills displayed enhanced MMNs to duration changes in both speech and musical sounds. Thus, our study provides further evidence for the claim that musical aptitude and linguistic skills are interconnected and that the musical features of the stimuli may play a preponderant role in preattentive duration processing.
Affiliation(s)
- Riia Milovanov
- Department of English, University of Turku, Finland; Centre for Cognitive Neuroscience, University of Turku, Finland.
20
Inouchi M, Kubota M, Ohta K, Matsushima E, Ferrari P, Scovel T. Neuromagnetic mismatch field (MMF) dependence on the auditory temporal integration window and the existence of categorical boundaries: comparisons between dissyllabic words and their equivalent tones. Brain Res 2008; 1232:155-62. [PMID: 18671951 DOI: 10.1016/j.brainres.2008.07.026] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2008] [Revised: 07/05/2008] [Accepted: 07/08/2008] [Indexed: 11/26/2022]
Abstract
Previous duration-related auditory mismatch response studies have tested vowels, words, and tones. Recently, the elicitation of strong neuromagnetic mismatch field (MMF) components in response to large (>32%) vowel-duration decrements was clearly observed within dissyllabic words. To date, however, the issues of whether this MMF duration-decrement effect also extends to duration increments, and to what degree these duration decrements and increments are attributable to their corresponding non-speech acoustic properties, remain to be resolved. Accordingly, this magnetoencephalographic (MEG) study investigated whether prominent MMF components would be evoked by both duration decrements and increments for dissyllabic word stimuli as well as frequency-band-matched tones, in order to corroborate the relation between MMF elicitation and the direction of duration changes in speech and non-speech. Further, peak latency effects depending on stimulus type (words vs. tones) were examined. MEG responses were recorded with a whole-head 148-channel magnetometer while subjects passively listened to the stimuli presented within an odd-ball paradigm for both shortened duration (180-->100%) and lengthened duration (100-->180%). Prominent MMF components were observed in the shortened and lengthened paradigms for the word stimuli, but only in the shortened paradigm for tones. The MMF peak latency results showed that the words led to earlier peak latencies than the tones. These findings suggest that duration lengthening as well as shortening in words produces a salient acoustic MMF response when the divergent point between the long and short durations falls within the temporal window of auditory integration post sound onset (<200 ms), and that the earlier latency of the dissyllabic word stimuli over tones is due to a prominent syllable structure in words which is used to generate temporal categorical boundaries.
Affiliation(s)
- Mayako Inouchi
- Center for Japanese Language, Waseda University, Tokyo, Japan
21
Musical aptitude and second language pronunciation skills in school-aged children: neural and behavioral evidence. Brain Res 2007; 1194:81-9. [PMID: 18182165 DOI: 10.1016/j.brainres.2007.11.042] [Citation(s) in RCA: 71] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2007] [Revised: 10/24/2007] [Accepted: 11/19/2007] [Indexed: 11/23/2022]
Abstract
The main focus of this study was to examine the relationship between musical aptitude and second language pronunciation skills. We investigated whether children with superior performance in foreign language production represent musical sound features more readily at the preattentive level of neural processing compared with children with less-advanced production skills. Sound processing accuracy was examined in elementary school children by means of event-related potential (ERP) recordings and behavioral measures. Children with good linguistic skills had better musical skills as measured by the Seashore musicality test than children with less accurate linguistic skills. The ERP data corroborate the results of the behavioral tests: children with good linguistic skills showed more pronounced sound-change-evoked activation with the music stimuli than children with less accurate linguistic skills. Taken together, the results imply that musical and linguistic skills could partly be based on shared neural mechanisms.
22
Joanisse MF, Robertson EK, Newman RL. Mismatch negativity reflects sensory and phonetic speech processing. Neuroreport 2007; 18:901-5. [PMID: 17515798 DOI: 10.1097/wnr.0b013e3281053c4e] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
We examined phonetic and sensory processes in speech perception using mismatch negativity, an event-related potential component congruent with discrimination, but which occurs for unattended stimuli. Adult listeners (N=16) heard a repeated standard (the syllable 'da') that was interrupted infrequently by a phonetically different 'deviant' syllable ('ba'). The acoustic difference between standard and deviant was manipulated to create both acoustically Strong and Weak deviant stimuli. Mismatch negativities in response to the Strong deviant were significantly greater than those for the Weak deviant, in spite of the fact that both represented stable instances of the phonetic category. The data suggest that the mismatch negativity component can be strongly influenced by sensory factors beyond what is predicted by overt categorization and discrimination judgments.
Affiliation(s)
- Marc F Joanisse
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada.
23
Nittrouer S, Lowenstein JH. Children's weighting strategies for word-final stop voicing are not explained by auditory sensitivities. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2007; 50:58-73. [PMID: 17344548 PMCID: PMC1994088 DOI: 10.1044/1092-4388(2007/005)] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
PURPOSE It has been reported that children and adults weight differently the various acoustic properties of the speech signal that support phonetic decisions. This finding is generally attributed to the fact that the amount of weight assigned to various acoustic properties by adults varies across languages, and that children have not yet discovered the mature weighting strategies of their own native languages. But an alternative explanation exists: Perhaps children's auditory sensitivities for some acoustic properties of speech are poorer than those of adults, and children cannot categorize stimuli based on properties to which they are not keenly sensitive. The purpose of the current study was to test that hypothesis. METHOD Edited-natural, synthetic-formant, and sine wave stimuli were all used, and all were modeled after words with voiced and voiceless final stops. Adults and children (5 and 7 years of age) listened to pairs of stimuli in 5 conditions: 2 involving a temporal property (1 with speech and 1 with nonspeech stimuli) and 3 involving a spectral property (1 with speech and 2 with nonspeech stimuli). An AX discrimination task was used in which a standard stimulus (A) was compared with all other stimuli (X) equal numbers of times (method of constant stimuli). RESULTS Adults and children had similar difference thresholds (i.e., 50% point on the discrimination function) for 2 of the 3 sets of nonspeech stimuli (1 temporal and 1 spectral), but children's thresholds were greater for both sets of speech stimuli. CONCLUSION Results are interpreted as evidence that children's auditory sensitivities are adequate to support weighting strategies similar to those of adults, and so observed differences between children and adults in speech perception cannot be explained by differences in auditory perception. 
Furthermore, it is concluded that listeners bring expectations to the listening task about the nature of the signals they are hearing based on their experiences with those signals.
24
Hewson-Stoate N, Schönwiesner M, Krumbholz K. Vowel processing evokes a large sustained response anterior to primary auditory cortex. Eur J Neurosci 2007; 24:2661-71. [PMID: 17100854 DOI: 10.1111/j.1460-9568.2006.05096.x] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
The present study uses electroencephalography (EEG) and a new stimulation paradigm, the 'continuous stimulation paradigm', to investigate the neural correlate of phonological processing in human auditory cortex. Evoked responses were recorded to stimuli consisting of a control sound (1000 ms) immediately followed by a test sound (150 ms). On half of the trials, the control sound was a noise and the test sound a vowel; to control for unavoidable effects of spectral change at the transition, the roles of the stimuli were reversed on the other half of the trials. The acoustical properties of the vowel and noise sounds were carefully matched to isolate the response specific to phonological processing. As the unspecific response to sound energy onset has subsided by the transition to the test sound, we hypothesized that the transition response from a noise to a vowel would reveal vowel-specific processing. Contrary to this expectation, however, the most striking difference between vowel and noise processing was a large, vertex-negative sustained response to the vowel control sound, which had a fast onset (30-50 ms) and remained constant throughout presentation of the vowel. The vowel-specific response was isolated using a subtraction technique analogous to that commonly applied in neuroimaging studies. This similarity in analysis methodology enabled close comparison of the EEG data collected in the present study with relevant functional magnetic resonance (fMRI) literature. Dipole source analysis revealed the vowel-specific component to be located anterior and inferior to primary auditory cortex, consistent with previous data investigating speech processing with fMRI.
25
Kujala T, Tervaniemi M, Schröger E. The mismatch negativity in cognitive and clinical neuroscience: Theoretical and methodological considerations. Biol Psychol 2007; 74:1-19. [PMID: 16844278 DOI: 10.1016/j.biopsycho.2006.06.001] [Citation(s) in RCA: 355] [Impact Index Per Article: 20.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2005] [Revised: 05/12/2006] [Accepted: 06/03/2006] [Indexed: 11/20/2022]
Abstract
The mismatch negativity (MMN) component of the event-related brain potential has become popular in cognitive and clinical brain research in recent years. It is an early response to a violation of an auditory rule, such as an infrequent change in a physical feature of a repetitive sound. There is considerable evidence for an association between MMN parameters and behavioral discrimination ability, although this relationship is not always straightforward. Since the MMN reflects sound-discrimination accuracy, it can be used to probe how well different groups of individuals perceive sound differences, and how training or remediation affects this ability. In the present review, we first introduce some of the essential MMN findings in probing sound discrimination, memory, and their deficits. Thereafter, issues that need to be taken into account in MMN investigations, as well as new, improved recording paradigms, are discussed.
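The MMN analysis that this review standardizes can be sketched as a deviant-minus-standard difference wave, with the peak latency read off within a conventional post-stimulus window. This is a generic illustration on synthetic single-channel data, not the authors' pipeline; the 100-250 ms window, epoch counts, and noise levels are assumptions for the demo.

```python
import numpy as np

def mmn_difference_wave(deviant_epochs, standard_epochs):
    """Deviant-minus-standard difference wave.

    Both inputs have shape (n_epochs, n_times); the MMN is conventionally
    isolated by subtracting the averaged standard ERP from the averaged
    deviant ERP.
    """
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

def peak_latency_ms(diff_wave, times_ms, tmin=100.0, tmax=250.0):
    """Latency (ms) of the most negative deflection in the MMN window."""
    mask = (times_ms >= tmin) & (times_ms <= tmax)
    return float(times_ms[mask][np.argmin(diff_wave[mask])])

# Synthetic demo: deviant epochs carry a negative deflection peaking ~150 ms.
rng = np.random.default_rng(0)
times = np.arange(-100.0, 400.0, 2.0)          # ms relative to sound onset
standard = rng.normal(0.0, 0.1, (60, times.size))
deviant = rng.normal(0.0, 0.1, (60, times.size)) \
    - 2.0 * np.exp(-((times - 150.0) ** 2) / (2 * 20.0 ** 2))
mmn = mmn_difference_wave(deviant, standard)
latency = peak_latency_ms(mmn, times)          # close to 150 ms
```

Reading amplitude and latency from the difference wave in a fixed window is what makes MMN parameters comparable across the groups and training conditions the review discusses.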
Affiliation(s)
- Teija Kujala
- Helsinki Collegium for Advanced Studies, University of Helsinki, FIN-00014 Helsinki, Finland.
26
Zhang P, Chen X, Yuan P, Zhang D, He S. The effect of visuospatial attentional load on the processing of irrelevant acoustic distractors. Neuroimage 2006; 33:715-24. [PMID: 16956775 DOI: 10.1016/j.neuroimage.2006.07.015] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2006] [Revised: 06/29/2006] [Accepted: 07/20/2006] [Indexed: 10/24/2022] Open
Abstract
This work investigated the role of cognitive control functions in selective attention when task-relevant and -irrelevant stimuli come from different sensory modalities. We parametrically manipulated the load of an attentive tracking task and investigated its effect on irrelevant acoustic change-related processing. While subjects were performing the visual attentive tracking task, event-related potentials (ERPs) were recorded for frequent standard tones and rare deviant tones presented as auditory distractors. The deviant tones elicited two change-related ERP components: the mismatch negativity (MMN) and the P3a. The amplitude of the MMN, which indexes the early detection of irregular changes, increased with increasing attentional load, whereas the subsequent P3a component, which indicates the involuntary orienting of attention to deviants, was significant only in the lowest load condition. These findings suggest that active exclusion of the early detection process of irrelevant acoustic changes depends on available resources of cognitive control, whereas the late involuntary orienting of attention to deviants can be passively suppressed by high demand on central attentional resources. The present study thus reveals opposing visual attentional load effects at different temporal and functional stages in the rejection of deviant auditory distractors and provides a new perspective on the resolution of the long-standing early versus late attention selection debate.
Affiliation(s)
- Peng Zhang
- Department of Neurobiology and Biophysics, Hefei National Laboratory for Physical Science at Microscale, and School of Life Science, University of Science and Technology of China, Hefei, Anhui, 230026, PR China
27
Shtyrov Y, Pihko E, Pulvermüller F. Determinants of dominance: Is language laterality explained by physical or linguistic features of speech? Neuroimage 2005; 27:37-47. [PMID: 16023039 DOI: 10.1016/j.neuroimage.2005.02.003] [Citation(s) in RCA: 100] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2004] [Revised: 01/28/2005] [Accepted: 02/03/2005] [Indexed: 11/19/2022] Open
Abstract
The nature of cerebral asymmetry of the language function is still not fully understood. Two main views are that laterality is best explained (1) by left cortical specialization for the processing of spectrally rich and rapidly changing sounds, and (2) by a predisposition of one hemisphere to develop a module for phonemes. We tested both of these views by investigating magnetic brain responses to the same brief acoustic stimulus, placed in contexts where it was perceived either as a noise burst with no resemblance to speech, or as a native language sound forming part of a meaningless pseudoword. In further experiments, the same acoustic element was placed in the context of words. We found reliable left hemispheric dominance only when the sound was placed in word context. These results, obtained in a passive odd-ball paradigm, suggest that neither physical properties nor phoneme status of a sound are sufficient for laterality. In order to elicit left lateralized cortical activation in normal right-handed individuals, a rapidly changing spectrally rich sound with phoneme status needs to be placed in the context of frequently encountered larger language elements, such as words. This demonstrates that language laterality is bound to the processing of sounds as units of frequently occurring meaningful items and can thus be linked to the processes of learning and memory trace formation for such items rather than to their physical or phonological properties.
Affiliation(s)
- Yury Shtyrov
- MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 2EF, UK.
28
Takegata R, Nakagawa S, Tonoike M, Näätänen R. Hemispheric processing of duration changes in speech and non-speech sounds. Neuroreport 2004; 15:1683-6. [PMID: 15232307 DOI: 10.1097/01.wnr.0000134929.04561.64] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Sound duration conveys phonemic information in some languages. The present study, using magnetoencephalography (MEG), examined whether the hemispheric activation associated with the processing of duration is different between speech and non-speech sounds in subjects whose native language uses duration as a phonemic cue. The magnetic mismatch negativity (MMNm) response was recorded for equal-duration decrements in vowel, sinusoidal, and spectrally rich complex sounds. Although the MMNm responses to duration changes were predominant in the right hemisphere, the distribution of this response for the vowel stimuli was significantly displaced leftward compared with that for the other two types of stimuli. The results suggest that the hemispheric distribution of the MMNm response to duration change depends on the linguistic relevance of the change.
Affiliation(s)
- Rika Takegata
- Cognitive Brain Research Unit, Department of Psychology, P.O. Box 9, University of Helsinki, FI-00014 Helsinki, Finland.
29
Sussman E, Kujala T, Halmetoja J, Lyytinen H, Alku P, Näätänen R. Automatic and controlled processing of acoustic and phonetic contrasts. Hear Res 2004; 190:128-40. [PMID: 15051135 DOI: 10.1016/s0378-5955(04)00016-4] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/04/2003] [Accepted: 12/16/2003] [Indexed: 11/24/2022]
Abstract
Changes in the temporal properties of the speech signal provide important cues for phoneme identification. An impairment or inability to detect such changes may adversely affect one's ability to understand spoken speech. The difference in meaning between the Finnish words tuli (fire) and tuuli (wind), for example, lies in the difference between the duration of the vowel /u/. Detecting changes in the temporal properties of the speech signal, therefore, is critical for distinguishing between phonemes and identifying words. In the current study, we tested whether detection of changes in speech sounds, in native Finnish speakers, would vary as a function of the position within the word that the informational changes occurred (beginning, middle, or end) by evaluating how length contrasts in segments of three-syllable Finnish pseudo-words and their acoustic correlates were discriminated. We recorded a combination of cortical components of event-related brain potentials (MMN, N2b, P3b) along with behavioral measures of the perception of the same sounds. It was found that speech sounds were not processed differently than non-speech sounds in the early stages of auditory processing indexed by MMN. Differences occurred only in later stages associated with controlled processes. The effects of position and attention on speech and non-speech stimuli are discussed.
Affiliation(s)
- Elyse Sussman
- Department of Neuroscience and Department of Otolaryngology, Albert Einstein College of Medicine, 1410 Pelham Parkway S., Bronx, NY, USA.
30
Abstract
In the present review, we summarize the most recent findings and current views about the structural and functional basis of human brain lateralization in the auditory modality. Main emphasis is given to hemodynamic and electromagnetic data from healthy adult participants with regard to music- vs. speech-sound encoding. Moreover, a selective set of behavioral dichotic-listening (DL) results and clinical findings (e.g., schizophrenia, dyslexia) are included. It is shown that the human brain has a strong predisposition to process speech sounds in the left and music sounds in the right auditory cortex in the temporal lobe. To a great extent, an auditory area located at the posterior end of the temporal lobe (called the planum temporale [PT]) underlies this functional asymmetry. However, the predisposition is not bound to informational sound content but to rapid temporal information, which is more common in speech than in music sounds. Finally, we present evidence for the vulnerability of this functional specialization of sound processing. Such altered forms of lateralization may be caused by top-down and bottom-up effects both inter- and intraindividually. In other words, relatively small changes in acoustic sound features or in their familiarity may modify the degree to which the left vs. right auditory areas contribute to sound encoding.
Affiliation(s)
- Mari Tervaniemi
- Cognitive Brain Research Unit, Department of Psychology, University of Helsinki, Helsinki, Finland.
31
Menning H, Imaizumi S, Zwitserlood P, Pantev C. Plasticity of the human auditory cortex induced by discrimination learning of non-native, mora-timed contrasts of the Japanese language. Learn Mem 2002; 9:253-67. [PMID: 12359835 PMCID: PMC187135 DOI: 10.1101/lm.49402] [Citation(s) in RCA: 64] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
In this magnetoencephalographic (MEG) study, we examined with high temporal resolution the traces of learning in the speech-dominant left-hemispheric auditory cortex as a function of newly trained mora-timing. In Japanese, the "mora" is a temporal unit that divides words into almost isochronous segments (e.g., na-ka-mu-ra and to-o-kyo-o each comprises four mora). Changes in the brain responses of a group of German and Japanese subjects to differences in the mora structure of Japanese words were compared. German subjects performed a discrimination training in 10 sessions of 1.5 h each day. They learned to discriminate Japanese pairs of words (in a consonant, anni-ani; and a vowel, kiyo-kyo, condition), where the second word was shortened by one mora in eight steps of 15 msec each. A significant increase in learning performance, as reflected by behavioral measures, was observed, accompanied by a significant increase of the amplitude of the Mismatch Negativity Field (MMF). The German subjects' hit rate for detecting durational deviants increased by up to 35%. Reaction times and MMF latencies decreased significantly across training sessions. Japanese subjects showed a more sensitive MMF to smaller differences. Thus, even in young adults, perceptual learning of non-native mora-timing occurs rapidly and deeply. The enhanced behavioral and neurophysiological sensitivity found after training indicates a strong relationship between learning and (plastic) changes in the cortical substrate.
Affiliation(s)
- Hans Menning
- Center for Biomagnetism, Institute of Experimental Audiology, Münster, Germany.