1. Burunat I, Levitin DJ, Toiviainen P. Breaking (musical) boundaries by investigating brain dynamics of event segmentation during real-life music-listening. Proc Natl Acad Sci U S A 2024; 121:e2319459121. PMID: 39186645; PMCID: PMC11388323; DOI: 10.1073/pnas.2319459121.
Abstract
The perception of musical phrase boundaries is a critical aspect of human musical experience: it allows us to organize, understand, derive pleasure from, and remember music. Identifying boundaries is a prerequisite for segmenting music into meaningful chunks, facilitating efficient processing and storage while providing an enjoyable, fulfilling listening experience through the anticipation of upcoming musical events. Expanding on Sridharan et al.'s [Neuron 55, 521-532 (2007)] work on coarse musical boundaries between symphonic movements, we examined finer-grained boundaries. We measured the fMRI responses of 18 musicians and 18 nonmusicians during music listening. Using a general linear model, independent component analysis, and Granger causality, we observed heightened auditory integration in anticipation of musical boundaries, and an extensive decrease of activity within the fronto-temporo-parietal network during and immediately following boundaries. Notably, responses were modulated by musicianship. These findings uncover the intricate interplay between musical structure, expertise, and cognitive processing, advancing our knowledge of how the brain makes sense of music.
Affiliations
- Iballa Burunat: Centre of Excellence in Music, Mind, Body and Brain, Department of Music, Arts and Culture Studies, University of Jyväskylä, Jyväskylä 40014, Finland
- Daniel J Levitin: School of Social Sciences, Minerva University, San Francisco, CA 94103; Department of Psychology, McGill University, Montreal, QC H3A 1G1, Canada
- Petri Toiviainen: Centre of Excellence in Music, Mind, Body and Brain, Department of Music, Arts and Culture Studies, University of Jyväskylä, Jyväskylä 40014, Finland
2. Chang R, Zhang Q, Yang X. The impact of music training on temporal order processing in Mandarin Chinese sentence reading: Evidence from event-related potentials (ERPs). Cogn Affect Behav Neurosci 2024; 24:766-778. PMID: 38773021; DOI: 10.3758/s13415-024-01195-8.
Abstract
The objective of this study was to investigate the impact of music training on the processing of temporal order in Mandarin sentence reading using event-related potentials (ERPs). Two-clause sentences with temporal connectives ("before" or "after") were presented to both musicians and nonmusicians. Additionally, a verbal N-back task was used to evaluate the participants' working memory capacity. The results revealed that musicians, but not nonmusicians, showed a more negative amplitude in the second clauses of "before" sentences than of "after" sentences. In the N-back task, musicians exhibited faster reaction times than nonmusicians in the two-back condition. Furthermore, among musicians, the ERP amplitude differences (before vs. after) correlated with the reaction time differences in the N-back task (0-back vs. 2-back). These findings suggest that music training enhances the depth of temporal order processing, potentially mediated by improvements in working memory capacity.
Affiliations
- Ruohan Chang: School of Psychology, Beijing Language and Culture University, Beijing, China
- Qian Zhang: Naval Medical Center, Naval Medical University, Shanghai, China
- Xiaohong Yang: Department of Psychology, Renmin University of China, Beijing, China; Jiangsu Collaborative Innovation Center for Language Ability, Jiangsu Normal University, Xuzhou, China
3. Sun M, Xing W, Yu W, Slevc LR, Li W. ERP evidence for cross-domain prosodic priming from music to speech. Brain Lang 2024; 254:105439. PMID: 38945108; DOI: 10.1016/j.bandl.2024.105439.
Abstract
Considerable work has investigated similarities between the processing of music and language, but it remains unclear whether typical, genuine music can influence speech processing via cross-domain priming. To investigate this, we measured ERPs to musical phrases and to syntactically ambiguous Chinese phrases that could be disambiguated by early or late prosodic boundaries. Musical primes also had either early or late prosodic boundaries, and participants judged whether the prime and target had the same structure. Within musical phrases, prosodic boundaries elicited a reduced N1 and an enhanced P2 component (relative to the no-boundary condition), and musical phrases with late boundaries exhibited a closure positive shift (CPS) component. More importantly, primed target phrases elicited a smaller CPS than non-primed phrases, regardless of the type of ambiguous phrase. These results suggest that prosodic priming can occur across domains, supporting the existence of common neural processes in music and language processing.
Affiliations
- Mingjiang Sun: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Huanghe Road 850, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Weijing Xing: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Huanghe Road 850, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Wenjing Yu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Huanghe Road 850, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- L Robert Slevc: Department of Psychology, University of Maryland, College Park, MD, USA
- Weijun Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Huanghe Road 850, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
4. Coderre EL, Cohn N. Individual differences in the neural dynamics of visual narrative comprehension: The effects of proficiency and age of acquisition. Psychon Bull Rev 2024; 31:89-103. PMID: 37578688; PMCID: PMC10866750; DOI: 10.3758/s13423-023-02334-x.
Abstract
Understanding visual narrative sequences, as found in comics, is known to recruit cognitive mechanisms similar to those used for verbal language. As measured by event-related potentials (ERPs), these manifest as initial negativities (N400, LAN) and subsequent positivities (P600). While these components are thought to index discrete processing stages, they arise differentially across participants for any given stimulus. In language contexts, proficiency modulates brain responses: smaller N400 effects and larger P600 effects appear with increasing proficiency. In visual narratives, recent work has also emphasized the role of proficiency in neural response patterns. We therefore explored whether individual differences in proficiency modulate neural responses to visual narrative sequencing in ways similar to language. We combined ERP data from 12 studies examining semantic and/or grammatical processing of visual narrative sequences. Using linear mixed effects modeling, we demonstrate differential effects of visual language proficiency and "age of acquisition" on N400 and P600 responses. Our results align with those reported in language contexts, providing further evidence for the similarity of linguistic and visual narrative processing, and emphasize the role of both proficiency and age of acquisition in visual narrative comprehension.
Affiliations
- Emily L Coderre: Department of Communication Sciences and Disorders, University of Vermont, 489 Main St, Burlington, VT, 05405, USA
- Neil Cohn: Department of Communication and Cognition, Tilburg School of Humanities and Digital Sciences, Tilburg Center for Cognition and Communication (TiCC), Tilburg University, Tilburg, The Netherlands
5. Cecchetti G, Tomasini CA, Herff SA, Rohrmeier MA. Interpreting rhythm as parsing: Syntactic-processing operations predict the migration of visual flashes as perceived during listening to musical rhythms. Cogn Sci 2023; 47:e13389. PMID: 38038624; DOI: 10.1111/cogs.13389.
Abstract
Music can be interpreted by attributing syntactic relationships to sequential musical events; computationally, such musical interpretation is a combinatorial task analogous to syntactic processing in language. While this perspective has primarily been addressed in the domain of harmony, we focus here on rhythm in the Western tonal idiom, and we propose for the first time a framework for modeling the moment-by-moment execution of processing operations involved in the interpretation of music. Our approach is based on (1) a music-theoretically motivated grammar formalizing the competence of rhythmic interpretation in terms of three basic types of dependency (preparation, syncopation, and split; Rohrmeier, 2020), and (2) psychologically plausible predictions about the complexity of structural integration and memory storage operations, necessary for parsing hierarchical dependencies, derived from the dependency locality theory (Gibson, 2000). With a behavioral experiment, we exemplify an empirical implementation of the proposed theoretical framework. One hundred listeners were asked to reproduce the location of a visual flash presented while listening to three rhythmic excerpts, each exemplifying a different interpretation under the formal grammar. The hypothesized execution of syntactic-processing operations was a significant predictor of the observed displacement between the reported and objective locations of the flashes. Overall, this study presents a theoretical approach and a first empirical proof of concept for modeling the cognitive process underlying such interpretation as a form of syntactic parsing with algorithmic similarities to its linguistic counterpart. Results from this small-scale experiment should not be read as a final test of the theory, but they are consistent with the theoretical predictions after controlling for several possible confounding factors and may form the basis for further large-scale and ecological testing.
Affiliations
- Gabriele Cecchetti: Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- Cédric A Tomasini: Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
- Steffen A Herff: Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University
- Martin A Rohrmeier: Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne
6. Lu Y, Jin P, Ding N, Tian X. Delta-band neural tracking primarily reflects rule-based chunking instead of semantic relatedness between words. Cereb Cortex 2023; 33:4448-4458. PMID: 36124831; PMCID: PMC10110438; DOI: 10.1093/cercor/bhac354.
Abstract
It is debated whether cortical responses matching the time scales of phrases and sentences mediate the mental construction of syntactic chunks or are simply caused by the semantic properties of words. Here, we investigate to what extent delta-band neural responses to speech can be explained by semantic relatedness between words. To dissociate the contribution of semantic relatedness from sentential structure, participants listened to sentence sequences and paired-word sequences in which semantically related words repeated at 1 Hz. Semantic relatedness in the two types of sequences was quantified using a word2vec model that captured the semantic relation between words without considering sentential structure. The word2vec model predicted comparable 1-Hz responses for paired-word sequences and sentence sequences. However, empirical neural activity, recorded using magnetoencephalography, showed a weaker 1-Hz response to paired-word sequences than to sentence sequences in a word-level task that did not require sentential processing. Furthermore, when listeners applied a task-related rule to parse paired-word sequences into multi-word chunks, the 1-Hz response was stronger than in the word-level task on the same sequences. Our results suggest that cortical activity tracks multi-word chunks constructed by either syntactic rules or task-related rules, whereas the semantic relatedness between words contributes only in a minor way.
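The word2vec-based relatedness measure described above reduces to the cosine similarity between word embedding vectors. As an illustration only, a minimal sketch (the vectors below are made-up toy values, not the output of a trained word2vec model):

```python
import numpy as np

# Toy embedding table standing in for trained word2vec vectors
# (values are illustrative, not from a real model).
vectors = {
    "dog": np.array([0.9, 0.1, 0.3]),
    "cat": np.array([0.8, 0.2, 0.35]),
    "tax": np.array([0.1, 0.9, 0.0]),
}

def relatedness(w1, w2):
    """Cosine similarity between the embedding vectors of two words."""
    v1, v2 = vectors[w1], vectors[w2]
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# A semantically related pair scores higher than an unrelated pair.
print(relatedness("dog", "cat") > relatedness("dog", "tax"))
```

In the study's logic, a score like this quantifies relatedness between adjacent words without any reference to sentential structure, which is what allowed the authors to pit semantic relatedness against rule-based chunking.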
Affiliations
- Yuhan Lu: Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China; Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou 310027, China
- Peiqing Jin: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou 310027, China
- Nai Ding: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou 310027, China; Research Center for Applied Mathematics and Machine Intelligence, Research Institute of Basic Theories, Zhejiang Lab, Hangzhou 311121, China
- Xing Tian: Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China; Division of Arts and Sciences, New York University Shanghai
7. Music training is associated with better clause segmentation during spoken language processing. Psychon Bull Rev 2022; 29:1472-1479. PMID: 35318581; DOI: 10.3758/s13423-022-02076-2.
Abstract
Musical expertise is known to affect speech perception at units below the clause/sentence level. This study investigated whether the musician's advantage extends to a higher and more central level of speech processing, namely clause segmentation. Two groups of participants (musicians vs. nonmusicians) were presented with sentences containing an internal clause boundary. The acoustic correlates of the boundary were manipulated across six conditions: all-cue, pause-only, final-lengthening-only, pitch-reset-only, pause-and-final-lengthening-in-combination, and no-cue. Participants judged whether the sentence they heard had an internal boundary. Results showed that the musicians detected more boundaries than the nonmusicians in the all-cue and pause-only conditions, but fewer boundaries in the no-cue condition. Further analyses of cue weighting showed that both musicians and nonmusicians placed more importance on pause than on the other two cues, but this weighting bias was more pronounced for the musicians. These results suggest that music training is associated with increased perceptual acuity not only to the acoustic markings of speech boundaries but also in the weighting of the cues. Our findings extend the role of musical expertise to sentence-level speech processing.
8. White PA. The extended present: an informational context for perception. Acta Psychol (Amst) 2021; 220:103403. PMID: 34454251; DOI: 10.1016/j.actpsy.2021.103403.
Abstract
Several previous authors have proposed a kind of specious or subjective present moment that covers a few seconds of recent information. This article proposes a new hypothesis about the subjective present, renamed the extended present, defined not in terms of time covered but as a thematically connected information structure held in working memory and in transiently accessible form in long-term memory. The three key features of the extended present are that information in it is thematically connected, both internally and to current attended perceptual input, it is organised in a hierarchical structure, and all information in it is marked with temporal information, specifically ordinal and duration information. Temporal boundaries to the information structure are determined by hierarchical structure processing and by limits on processing and storage capacity. Supporting evidence for the importance of hierarchical structure analysis is found in the domains of music perception, speech and language processing, perception and production of goal-directed action, and exact arithmetical calculation. Temporal information marking is also discussed and a possible mechanism for representing ordinal and duration information on the time scale of the extended present is proposed. It is hypothesised that the extended present functions primarily as an informational context for making sense of current perceptual input, and as an enabler for perception and generation of complex structures and operations in language, action, music, exact calculation, and other domains.
9. Popescu T, Widdess R, Rohrmeier M. Western listeners detect boundary hierarchy in Indian music: a segmentation study. Sci Rep 2021; 11:3112. PMID: 33542358; PMCID: PMC7862587; DOI: 10.1038/s41598-021-82629-y.
Abstract
How are listeners able to follow and enjoy complex pieces of music? Several theoretical frameworks suggest links between the process of listening and the formal structure of music, involving a division of the musical surface into structural units at multiple hierarchical levels. Whether boundaries between structural units are perceivable to listeners unfamiliar with the style, and are identified congruently between naïve listeners and experts, remains unclear. Here, we focused on the case of Indian music and asked 65 Western listeners (of mixed levels of musical training, most unfamiliar with Indian music) to intuitively segment into phrases recordings of sitar ālāp in two different rāga-modes. Each recording was also segmented by two experts, who identified boundary regions at the section and phrase levels. Participant- and region-wise scores were computed on the basis of "clicks" inserted inside or outside boundary regions (hits/false alarms) and earlier or later within those regions (high/low "promptness"). We found substantial agreement, expressed as hit rates and click densities, among participants and between participants' and experts' segmentations. The agreement and promptness scores differed between participants, levels, and recordings. We found no effect of musical training, but detected real-time awareness of grouping completion and boundary hierarchy. The findings may potentially be explained by underlying general bottom-up processes, implicit learning of structural relationships, cross-cultural musical similarities, or universal cognitive capacities.
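The hit/false-alarm scoring scheme described above can be sketched in a few lines: each participant click counts as a hit if it falls inside an expert-defined boundary region, otherwise as a false alarm. The interval bounds and click times below are invented for illustration and do not come from the study.

```python
def score_clicks(clicks, boundary_regions):
    """Classify each click time (seconds) as a hit if it falls inside
    any expert-defined boundary region, otherwise as a false alarm.
    Returns (hits, false_alarms)."""
    hits, false_alarms = 0, 0
    for t in clicks:
        if any(start <= t <= end for start, end in boundary_regions):
            hits += 1
        else:
            false_alarms += 1
    return hits, false_alarms

# Hypothetical expert boundary regions (seconds) and participant clicks.
regions = [(10.0, 12.0), (31.5, 33.0)]
clicks = [11.2, 20.4, 32.1]
print(score_clicks(clicks, regions))  # (2, 1): two hits, one false alarm
```

The study's "promptness" measure would additionally look at where within each region a hit falls (early vs. late), which this sketch omits.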
Affiliations
- Tudor Popescu: Department of Behavioural and Cognitive Biology, Universität Wien, Althanstrasse 14, 1090 Vienna, Austria; Medizinische Universität Wien, Spitalgasse 23, 1090 Vienna, Austria
- Richard Widdess: Department of Music, School of Arts, SOAS University of London, London, UK
- Martin Rohrmeier: Centre for Music and Science, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
10. Sun L, Hu L, Ren G, Yang Y. Musical tension associated with violations of hierarchical structure. Front Hum Neurosci 2020; 14:578112. PMID: 33192408; PMCID: PMC7531224; DOI: 10.3389/fnhum.2020.578112.
Abstract
Tension is one of the core principles of emotion evoked by music, linking objective musical events and subjective experience. The present study used continuous behavioral rating and electroencephalography (EEG) to investigate the dynamic process of tension generation and its underlying neurocognitive mechanisms, specifically tension induced by structural violations at different hierarchical levels of the music. In the experiment, twenty-four musicians rated felt tension continuously in real time while listening to music sequences with either well-formed structure, phrase violations, or period violations. The behavioral data showed that structural violations gave rise to an increasing and accumulating tension experience as the music unfolded; tension was increased dramatically by structural violations. Correspondingly, structural violations elicited an N5 at global field power (GFP) peaks and induced a decrease of neural oscillation power in the alpha frequency band (8–13 Hz). Furthermore, compared with phrase violations, period violations elicited a larger N5 and induced a longer-lasting decrease of power in the alpha band, suggesting a hierarchical manner of musical processing. These results demonstrate the important role of musical structure in the generation of the experience of tension, providing support for the dynamic view of musical emotion and the hierarchical manner of tension processing.
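The alpha-band (8–13 Hz) power measure can be illustrated with a toy spectral computation on a synthetic signal. The sampling rate and component amplitudes below are invented; a real EEG analysis would use artifact-corrected data and, typically, time-frequency methods rather than a single FFT.

```python
import numpy as np

fs = 250                      # sampling rate in Hz (illustrative value)
t = np.arange(0, 4, 1 / fs)   # 4 s of synthetic "EEG"
# A 10 Hz alpha component plus a slower 2 Hz component.
signal = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 2 * t)

def band_power(x, fs, lo, hi):
    """Summed FFT power of x within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return float(power[band].sum())

alpha = band_power(signal, fs, 8, 13)   # dominated by the 10 Hz component
delta = band_power(signal, fs, 1, 4)
print(alpha > delta)
```

A decrease in such a band-power estimate over time, relative to baseline, is the kind of effect the study reports following structural violations.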
Affiliations
- Lijun Sun: Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Li Hu: Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Guiqin Ren: College of Psychology, Liaoning Normal University, Dalian, China
- Yufang Yang: Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
11. Jin P, Lu Y, Ding N. Low-frequency neural activity reflects rule-based chunking during speech listening. eLife 2020; 9:e55613. PMID: 32310082; PMCID: PMC7213976; DOI: 10.7554/eLife.55613.
Abstract
Chunking is a key mechanism for sequence processing. Studies on speech sequences have suggested that low-frequency cortical activity tracks spoken phrases, that is, chunks of words defined by tacit linguistic knowledge. Here, we investigate whether low-frequency cortical activity reflects a general mechanism for sequence chunking and can track chunks defined by temporarily learned artificial rules. The experiment records magnetoencephalographic (MEG) responses to a sequence of spoken words. To dissociate word properties from the chunk structures, two tasks separately require listeners to group pairs of semantically similar or semantically dissimilar words into chunks. In the MEG spectrum, a clear response is observed at the chunk rate. More importantly, the chunk-rate response is task-dependent: it is phase locked to chunk boundaries rather than to the semantic relatedness between words. The results strongly suggest that cortical activity can track chunks constructed on the basis of task-related rules and potentially reflects a general mechanism for chunk-level representations.
From digital personal assistants like Siri and Alexa to customer service chatbots, computers are slowly learning to talk to us. But as anyone who has interacted with them will appreciate, the results are often imperfect. Each time we speak or write, we use grammatical rules to combine words in a specific order. These rules enable us to produce new sentences that we have never seen or heard before, and to understand the sentences of others. But computer scientists adopt a different strategy when training computers to use language. Instead of grammar, they provide the computers with vast numbers of example sentences and phrases. The computers then use this input to calculate how likely one word is to follow another in a given context: "the sky is blue" is more common than "the sky is green", for example. But is it possible that the human brain also uses this approach? When we listen to speech, the brain shows patterns of activity that correspond to units such as sentences. But previous research has been unable to tell whether the brain is using grammatical rules to recognise sentences or whether it relies on a probability-based approach like a computer. Using a simple artificial language, Jin et al. have now managed to tease apart these alternatives. Healthy volunteers listened to lists of words while lying inside a brain scanner. The volunteers had to group the words into pairs, otherwise known as chunks, by following various rules that simulated the grammatical rules present in natural languages. Crucially, the volunteers' brain activity tracked the chunks, which differed depending on which rule had been applied, rather than the individual words. This suggests that the brain processes speech using abstract rules instead of word probabilities. While computers are now much better at processing language, they still perform worse than people. Understanding how the human brain solves this task could ultimately help to improve the performance of personal digital assistants.
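The frequency-tagging logic behind the chunk-rate response, a spectral peak at the chunk rate that appears only when words are grouped, can be sketched with a simulated response. The rates and amplitudes below are illustrative only, not the study's stimulus parameters.

```python
import numpy as np

fs = 100                       # sampling rate in Hz (illustrative)
t = np.arange(0, 8, 1 / fs)    # 8 s of simulated neural response
# Word-rate (4 Hz) activity plus a chunk-rate (1 Hz) component that
# would appear only when listeners group words into chunks.
resp = 0.6 * np.sin(2 * np.pi * 1 * t) + 0.4 * np.sin(2 * np.pi * 4 * t)

freqs = np.fft.rfftfreq(len(resp), 1 / fs)
spectrum = np.abs(np.fft.rfft(resp))
peak_hz = freqs[np.argmax(spectrum)]
print(peak_hz)  # 1.0: the chunk-rate peak dominates the spectrum
```

In the task-dependent contrast reported above, the 1 Hz component of the response would be present under the chunking rule and absent (leaving only the word-rate peak) when no grouping is required.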
Affiliations
- Peiqing Jin: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
- Yuhan Lu: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
- Nai Ding: Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China; Research Center for Advanced Artificial Intelligence Theory, Zhejiang Lab, Hangzhou, China
13. Ma X, Ding N, Tao Y, Yang YF. Syntactic complexity and musical proficiency modulate neural processing of non-native music. Neuropsychologia 2018; 121:164-174. PMID: 30359654; DOI: 10.1016/j.neuropsychologia.2018.10.005.
Abstract
In music, chords are organized into hierarchical structures on the basis of musical syntax, and the syntax of Western music can be implicitly acquired by listeners growing up in a Western musical culture. Here, we investigated whether Western musical syntax of different complexities can be implicitly acquired by non-native listeners growing up in China. We used electroencephalography (EEG) to measure neural responses to musical sequences that followed either a simple rule, i.e., a finite state grammar (FSG), or a complex rule, i.e., a phrase structure grammar (PSG). We tested three groups of Chinese listeners who varied in their proficiency and experience in Western music. Only the high-proficiency group had received formal Western musical training, whereas the low- and moderate-proficiency groups varied in their degree of exposure to Western music. The results showed that in the FSG condition, the event-related potentials (ERPs) evoked by regular and irregular final chords did not differ significantly in the low-proficiency group, whereas in the moderate- and high-proficiency groups, irregular final chords evoked an ERAN-N5 biphasic response. In the PSG condition, however, only the high-proficiency group showed an ERAN-N5 biphasic response to irregular final chords. This study provides evidence that although simple structures of Western music, such as FSG, can be acquired through long-term implicit learning, more complex structures, such as PSG, may not be acquired as easily from mere exposure to Western music.
Affiliations
- Xie Ma: Institute of Psychology, Chinese Academy of Sciences, Beijing, China; College of Educational Science and Management, Yunnan Normal University, Kunming, China; Key Laboratory of Educational Informatization for Nationalities, Yunnan Normal University, Kunming, China
- Nai Ding: College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China; Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, China; State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou, China
- Yun Tao: College of Educational Science and Management, Yunnan Normal University, Kunming, China; Key Laboratory of Educational Informatization for Nationalities, Yunnan Normal University, Kunming, China
- Yu Fang Yang: Institute of Psychology, Chinese Academy of Sciences, Beijing, China
14. Zhang J, Zhou X, Chang R, Yang Y. Effects of global and local contexts on chord processing: An ERP study. Neuropsychologia 2018; 109:149-154. DOI: 10.1016/j.neuropsychologia.2017.12.016.