1. Lenc T, Peter V, Hooper C, Keller PE, Burnham D, Nozaradan S. Infants show enhanced neural responses to musical meter frequencies beyond low-level features. Dev Sci 2023; 26:e13353. PMID: 36415027. DOI: 10.1111/desc.13353.
Abstract
Music listening often entails spontaneous perception of, and body movement to, a periodic pulse-like meter. There is increasing evidence that this cross-cultural ability relates to neural processes that selectively enhance metric periodicities, even when these periodicities are not prominent in the acoustic stimulus. However, whether these neural processes emerge early in development remains largely unknown. Here, we recorded the electroencephalogram (EEG) of 20 healthy 5- to 6-month-old infants while they were exposed to two rhythms known to induce the perception of meter consistently across Western adults. One rhythm contained prominent acoustic periodicities corresponding to the meter, whereas the other rhythm did not. Infants showed significantly enhanced representations of meter periodicities in their EEG responses to both rhythms. This effect is unlikely to reflect the tracking of salient acoustic features in the stimulus, as it was observed irrespective of the prominence of meter periodicities in the audio signals. Moreover, as previously observed in adults, the neural enhancement of meter was greater when the rhythm was delivered by low-pitched sounds. Together, these findings indicate that the endogenous enhancement of metric periodicities beyond low-level acoustic features is a neural property that is already present soon after birth. These high-level neural processes could set the stage for internal representations of musical meter that are critical for human movement coordination during rhythmic musical behavior.
Research highlights:
- 5- to 6-month-old infants were presented with auditory rhythms that induce the perception of a periodic pulse-like meter in adults.
- Infants showed selective enhancement of EEG activity at meter-related frequencies irrespective of the prominence of these frequencies in the stimulus.
- Responses at meter-related frequencies were boosted when the rhythm was conveyed by bass sounds.
- High-level neural processes that transform rhythmic auditory stimuli into internal meter templates emerge early after birth.
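The frequency-tagging logic behind this kind of analysis can be sketched in a few lines: take the spectrum of a response signal and compare amplitude at meter-related versus other rhythm-related frequencies. This is a toy illustration, not the authors' pipeline; the frequencies and the prominence index are invented for the example.

```python
import numpy as np

def meter_prominence(signal, fs, meter_freqs, other_freqs):
    """Relative spectral amplitude at meter-related vs. other rhythm-related
    frequencies; values above 0.5 mean meter frequencies dominate."""
    n = len(signal)
    amps = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    def amp_at(f):
        return amps[np.argmin(np.abs(freqs - f))]  # closest FFT bin

    meter_amp = np.mean([amp_at(f) for f in meter_freqs])
    other_amp = np.mean([amp_at(f) for f in other_freqs])
    return meter_amp / (meter_amp + other_amp)

# Toy "neural response": strong 1.25 Hz (meter-related) component,
# weak 1.75 Hz (meter-unrelated) component.
fs, dur = 500, 60.0
t = np.arange(0, dur, 1.0 / fs)
resp = np.sin(2 * np.pi * 1.25 * t) + 0.2 * np.sin(2 * np.pi * 1.75 * t)
score = meter_prominence(resp, fs, meter_freqs=[1.25], other_freqs=[1.75])
```

Comparing this index between the EEG and the acoustic stimulus is what lets one argue the enhancement is endogenous rather than inherited from the sound.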
Affiliation(s)
- Tomas Lenc: Institute of Neuroscience (IONS), Université catholique de Louvain (UCL), Brussels, Belgium; MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Varghese Peter: MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; School of Health and Behavioural Sciences, University of the Sunshine Coast, Queensland, Australia
- Caitlin Hooper: MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Peter E Keller: MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Center for Music in the Brain & Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Denis Burnham: MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Sylvie Nozaradan: Institute of Neuroscience (IONS), Université catholique de Louvain (UCL), Brussels, Belgium; MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
2. Modeling enculturated bias in entrainment to rhythmic patterns. PLoS Comput Biol 2022; 18:e1010579. PMID: 36174063. PMCID: PMC9553061. DOI: 10.1371/journal.pcbi.1010579.
Abstract
Long-term and culture-specific experience of music shapes rhythm perception, leading to enculturated expectations that make certain rhythms easier to track and more conducive to synchronized movement. However, the influence of enculturated bias on the moment-to-moment dynamics of rhythm tracking is not well understood. Recent modeling work has formulated entrainment to rhythms as a formal inference problem, where phase is continuously estimated based on precise event times and their correspondence to timing expectations: PIPPET (Phase Inference from Point Process Event Timing). Here we propose that the problem of optimally tracking a rhythm also requires an ongoing process of inferring which pattern of event timing expectations is most suitable to predict a stimulus rhythm. We formalize this insight as an extension of PIPPET called pPIPPET (PIPPET with pattern inference). The variational solution to this problem introduces terms representing the likelihood that a stimulus is based on a particular member of a set of event timing patterns, which we initialize according to culturally-learned prior expectations of a listener. We evaluate pPIPPET in three experiments. First, we demonstrate that pPIPPET can qualitatively reproduce enculturated bias observed in human tapping data for simple two-interval rhythms. Second, we simulate categorization of a continuous three-interval rhythm space by Western-trained musicians through derivation of a comprehensive set of priors for pPIPPET from metrical patterns in a sample of Western rhythms. Third, we simulate iterated reproduction of three-interval rhythms, and show that models configured with notated rhythms from different cultures exhibit both universal and enculturated biases as observed experimentally in listeners from those cultures. These results suggest that the influence of enculturated timing expectations on human perceptual and motor entrainment can be understood as approximating optimal inference about the rhythmic stimulus, with respect to prototypical patterns in an empirical sample of rhythms that represent the music-cultural environment of the listener.

Cross-cultural studies have highlighted that listeners from non-Western cultures can precisely tap along with complex rhythms present in music from their culture that are challenging for participants from Western cultures. Therefore, while most adults can synchronize movements with simple periodic patterns (e.g. a ticking clock, a metronome), the ability to precisely track more complex rhythmic patterns depends on musical experience. Many computer models have been developed to describe the remarkable precision of human "entrainment", but they have done little to explain how this ability depends on cultural musical experience. Here, we describe this as the problem of estimating the phase of a cycle underlying an auditory rhythm in real time, by drawing upon learned patterns (reference structures) that could plausibly describe the structure of observed events. By creating a model that solves this inference problem, and configuring these patterns to reflect specific musical features, we are able to simulate cultural variation in synchronization to rhythm. These results highlight that while humans universally move to musical rhythm, the ability to do so depends on musical experience within a cultural tradition, as reflected by the distinct "categories" of rhythm learned during such experience.
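The pattern-inference step can be caricatured as ordinary Bayesian model comparison over a small set of culturally learned interval patterns. The sketch below is a discrete-time simplification, not the paper's variational model; the patterns, prior, and noise level are all illustrative.

```python
import numpy as np

def pattern_posterior(intervals, patterns, prior, sigma=0.05):
    """Bayesian update of the belief over candidate interval patterns:
    each observed inter-onset interval is scored under each pattern with
    a Gaussian timing-noise likelihood (sigma, in seconds)."""
    log_post = np.log(np.asarray(prior, dtype=float))
    for k, obs in enumerate(intervals):
        for i, pat in enumerate(patterns):
            expected = pat[k % len(pat)]
            # Gaussian log-likelihood of the observed interval under pattern i
            log_post[i] += -0.5 * ((obs - expected) / sigma) ** 2
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# Two culturally plausible two-interval patterns (seconds): 1:1 vs. 2:1.
patterns = [np.array([0.5, 0.5]), np.array([0.6, 0.3])]
prior = [0.5, 0.5]
observed = [0.58, 0.32, 0.61, 0.29]  # a noisy 2:1 performance
post = pattern_posterior(observed, patterns, prior)
```

After a few intervals the posterior concentrates on the 2:1 pattern; a culturally biased prior would shift how quickly (and whether) that happens.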
3. Vuust P, Heggli OA, Friston KJ, Kringelbach ML. Music in the brain. Nat Rev Neurosci 2022; 23:287-305. PMID: 35352057. DOI: 10.1038/s41583-022-00578-5.
Abstract
Music is ubiquitous across human cultures - as a source of affective and pleasurable experience, moving us both physically and emotionally - and learning to play music shapes both brain structure and brain function. Music processing in the brain - namely, the perception of melody, harmony and rhythm - has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature of music perception. We show that music perception, action, emotion and learning all rest on the human brain's fundamental capacity for prediction - as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.
Affiliation(s)
- Peter Vuust: Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Ole A Heggli: Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Karl J Friston: Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Morten L Kringelbach: Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark; Department of Psychiatry, University of Oxford, Oxford, UK; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK
4. Lenc T, Merchant H, Keller PE, Honing H, Varlet M, Nozaradan S. Mapping between sound, brain and behaviour: four-level framework for understanding rhythm processing in humans and non-human primates. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200325. PMID: 34420381. PMCID: PMC8380981. DOI: 10.1098/rstb.2020.0325.
Abstract
Humans perceive and spontaneously move to one or several levels of periodic pulses (a meter, for short) when listening to musical rhythm, even when the sensory input does not provide prominent periodic cues to their temporal location. Here, we review a multi-levelled framework for understanding how external rhythmic inputs are mapped onto internally represented metric pulses. This mapping is studied using an approach to quantify and directly compare representations of metric pulses in signals corresponding to sensory inputs, neural activity and behaviour (typically body movement). Based on this approach, recent empirical evidence can be drawn together into a conceptual framework that unpacks the phenomenon of meter into four levels. Each level highlights specific functional processes that critically enable and shape the mapping from sensory input to internal meter. We discuss the nature, constraints and neural substrates of these processes, starting with fundamental mechanisms investigated in macaque monkeys that enable basic forms of mapping between simple rhythmic stimuli and internally represented metric pulse. We propose that human evolution has gradually built a robust and flexible system upon these fundamental processes, allowing more complex levels of mapping to emerge in musical behaviours. This approach opens promising avenues to understand the many facets of rhythmic behaviours across individuals and species. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
Affiliation(s)
- Tomas Lenc: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia; Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Brussels 1200, Belgium
- Hugo Merchant: Instituto de Neurobiologia, UNAM, Campus Juriquilla, Querétaro 76230, Mexico
- Peter E. Keller: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- Henkjan Honing: Amsterdam Brain and Cognition (ABC), Institute for Logic, Language and Computation (ILLC), University of Amsterdam, Amsterdam 1090 GE, The Netherlands
- Manuel Varlet: The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia; School of Psychology, Western Sydney University, Penrith, New South Wales 2751, Australia
- Sylvie Nozaradan: Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Brussels 1200, Belgium
5. Cannon J. Expectancy-based rhythmic entrainment as continuous Bayesian inference. PLoS Comput Biol 2021; 17:e1009025. PMID: 34106918. PMCID: PMC8216548. DOI: 10.1371/journal.pcbi.1009025.
Abstract
When presented with complex rhythmic auditory stimuli, humans are able to track underlying temporal structure (e.g., a "beat"), both covertly and with their movements. This capacity goes far beyond that of a simple entrained oscillator, drawing on contextual and enculturated timing expectations and adjusting rapidly to perturbations in event timing, phase, and tempo. Previous modeling work has described how entrainment to rhythms may be shaped by event timing expectations, but sheds little light on any underlying computational principles that could unify the phenomenon of expectation-based entrainment with other brain processes. Inspired by the predictive processing framework, we propose that the problem of rhythm tracking is naturally characterized as a problem of continuously estimating an underlying phase and tempo based on precise event times and their correspondence to timing expectations. We present two inference problems formalizing this insight: PIPPET (Phase Inference from Point Process Event Timing) and PATIPPET (Phase and Tempo Inference). Variational solutions to these inference problems resemble previous "Dynamic Attending" models of perceptual entrainment, but introduce new terms representing the dynamics of uncertainty and the influence of expectations in the absence of sensory events. These terms allow us to model multiple characteristics of covert and motor human rhythm tracking not addressed by other models, including sensitivity of error corrections to inter-event interval and perceived tempo changes induced by event omissions. We show that positing these novel influences in human entrainment yields a range of testable behavioral predictions. Guided by recent neurophysiological observations, we attempt to align the phase inference framework with a specific brain implementation. We also explore the potential of this normative framework to guide the interpretation of experimental data and serve as building blocks for even richer predictive processing and active inference models of timing.
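The core inference loop can be sketched as a Kalman-style filter: the phase estimate drifts forward between events while its uncertainty grows, and each observed event pulls the estimate toward the nearest expected phase with a gain set by the relative uncertainties. This is a loose caricature of the PIPPET idea, not the model itself; all parameter values are made up, and phase wraparound is not handled.

```python
import numpy as np

def track_phase(event_times, expected_phases, period=0.5,
                drift_var=0.01, obs_var=0.005):
    """Kalman-style caricature of phase inference from event timing:
    drift between events, corrective update at each event."""
    mu, var = 0.0, 0.01          # phase estimate (cycles) and its uncertainty
    t_prev = 0.0
    trace = []
    for t in event_times:
        dt = t - t_prev
        mu += dt / period         # phase advances at the assumed tempo
        var += drift_var * dt     # uncertainty grows between events
        # Event observed: the nearest expected phase acts as a measurement
        # (simplified: no wraparound across the cycle boundary).
        frac = mu % 1.0
        nearest = min(expected_phases, key=lambda p: abs(frac - p))
        gain = var / (var + obs_var)
        mu += gain * (nearest - frac)   # pull estimate toward expectation
        var *= (1.0 - gain)             # and shrink uncertainty
        trace.append(mu)
        t_prev = t
    return trace

# Isochronous events slightly slower than the assumed 0.5 s period:
# the estimate is repeatedly corrected back toward whole cycles.
events = [0.52, 1.04, 1.56, 2.08]
trace = track_phase(events, expected_phases=[0.0])
```

The terms the paper adds (expectation-driven dynamics between events, uncertainty-dependent corrections) are what distinguish the full model from this bare filter.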
Affiliation(s)
- Jonathan Cannon: Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
6. Benedetto A, Baud-Bovy G. Tapping Force Encodes Metrical Aspects of Rhythm. Front Hum Neurosci 2021; 15:633956. PMID: 33986651. PMCID: PMC8111927. DOI: 10.3389/fnhum.2021.633956.
Abstract
Humans possess the ability to extract highly organized perceptual structures from sequences of temporal stimuli. For instance, we can organize specific rhythmical patterns into hierarchical, or metrical, systems. Despite the evidence of a fundamental influence of the motor system in achieving this skill, few studies have attempted to investigate the organization of our motor representation of rhythm. To this end, we studied, in musicians and non-musicians, the ability to perceive and reproduce different rhythms. In a first experiment, participants performed a temporal order-judgment task for rhythmical sequences presented via the auditory or tactile modality. In a second experiment, they were asked to reproduce the same rhythmic sequences, while their tapping force and timing were recorded. We demonstrate that tapping force encodes the metrical aspect of the rhythm, and the strength of the coding correlates with the individual's perceptual accuracy. We suggest that the similarity between perception and tapping-force organization indicates a common representation of rhythm, shared between the perceptual and motor systems.
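One way to quantify "tapping force encodes meter" is to correlate the mean force at each sequence position with that position's level in the metrical hierarchy. The numbers below are hypothetical, invented purely to illustrate the computation; they are not data from the paper.

```python
import numpy as np

# Hypothetical per-position averages for an eight-event rhythm in 4/4.
metrical_level = np.array([3, 0, 1, 0, 2, 0, 1, 0])  # downbeat=3 ... offbeat=0
tap_force = np.array([2.1, 0.9, 1.3, 0.8, 1.8, 1.0, 1.2, 0.9])  # peak force (N)

# Strength of metrical coding in force = correlation across positions.
r = np.corrcoef(metrical_level, tap_force)[0, 1]
```

A per-participant coefficient like this is the kind of quantity one could then relate to perceptual accuracy.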
Affiliation(s)
- Gabriel Baud-Bovy: Robotics, Brain and Cognitive Science Unit, Italian Institute of Technology, Genoa, Italy; Faculty of Psychology, Vita-Salute San Raffaele University, Milan, Italy
7. Samuels B, Grahn J, Henry MJ, MacDougall-Shackleton SA. European starlings (Sturnus vulgaris) discriminate rhythms by rate, not temporal patterns. J Acoust Soc Am 2021; 149:2546. PMID: 33940875. DOI: 10.1121/10.0004215.
Abstract
Humans can perceive a regular psychological pulse in music known as the beat. The evolutionary origins and neural mechanisms underlying this ability are hypothetically linked to imitative vocal learning, a rare trait found only in some species of mammals and birds. Beat perception has been demonstrated in vocal learning parrots but not in songbirds. We trained European starlings (Sturnus vulgaris) on two sound discriminations to investigate their perception of the beat and temporal structure in rhythmic patterns. First, we trained birds on a two-choice discrimination between rhythmic patterns of tones that contain or lack a regular beat. Despite receiving extensive feedback, the starlings were unable to distinguish these two pattern types. Next, we probed the temporal cues that starlings use for discriminating rhythms in general. We trained birds to discriminate a baseline set of isochronous and triplet tone sequences. On occasional probe trials, we presented transformations of the baseline patterns. The starlings' responses to the probes suggest they relied on absolute temporal features to sort the sounds into "fast" and "slow" and otherwise ignored the rhythmic patterns that were present. Our results support that starlings attend to local features in rhythms and are less sensitive to the global temporal organization.
Affiliation(s)
- Brendon Samuels: Brain and Mind Institute, Department of Psychology, University of Western Ontario, 1151 Richmond Street, London, Ontario, N6A 5K7, Canada
- Jessica Grahn: Brain and Mind Institute, Department of Psychology, University of Western Ontario, 1151 Richmond Street, London, Ontario, N6A 5K7, Canada
- Molly J Henry: Brain and Mind Institute, Department of Psychology, University of Western Ontario, 1151 Richmond Street, London, Ontario, N6A 5K7, Canada
8. Proksch S, Comstock DC, Médé B, Pabst A, Balasubramaniam R. Motor and Predictive Processes in Auditory Beat and Rhythm Perception. Front Hum Neurosci 2020; 14:578546. PMID: 33061902. PMCID: PMC7518112. DOI: 10.3389/fnhum.2020.578546.
Abstract
In this article, we review recent advances in research on rhythm and musical beat perception, focusing on the role of predictive processes in auditory-motor interactions. We suggest that experimental evidence of the motor system's role in beat perception, including in passive listening, may be explained by the generation and maintenance of internal predictive models, concordant with the Active Inference framework of sensory processing. We highlight two complementary hypotheses for the neural underpinnings of rhythm perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis (Patel and Iversen, 2014) and the Gradual Audiomotor Evolution (GAE) hypothesis (Merchant and Honing, 2014), and review recent experimental progress supporting each of these hypotheses. While the initial formulations of ASAP and GAE explain different aspects of beat-based timing (the involvement of motor structures in the absence of movement, and physical entrainment to an auditory beat, respectively), we suggest that work under both hypotheses provides converging evidence toward understanding the predictive role of the motor system in the perception of rhythm, and the specific neural mechanisms involved. We discuss future experimental work necessary to further evaluate the causal neural mechanisms underlying beat and rhythm perception.
Affiliation(s)
- Shannon Proksch, Daniel C Comstock, Butovens Médé, Alexandria Pabst, Ramesh Balasubramaniam: Sensorimotor Neuroscience Laboratory, Cognitive & Information Sciences, University of California, Merced, Merced, CA, United States
9. Lenc T, Keller PE, Varlet M, Nozaradan S. Neural and Behavioral Evidence for Frequency-Selective Context Effects in Rhythm Processing in Humans. Cereb Cortex Commun 2020; 1:tgaa037. PMID: 34296106. PMCID: PMC8152888. DOI: 10.1093/texcom/tgaa037.
Abstract
When listening to music, people often perceive and move along with a periodic meter. However, the dynamics of mapping between meter perception and the acoustic cues to meter periodicities in the sensory input remain largely unknown. To capture these dynamics, we recorded the electroencephalogram (EEG) while nonmusician and musician participants listened to nonrepeating rhythmic sequences, where acoustic cues to meter frequencies either gradually decreased (from regular to degraded) or increased (from degraded to regular). The results revealed greater neural activity selectively elicited at meter frequencies when the sequence gradually changed from regular to degraded compared with the opposite. Importantly, this effect was unlikely to arise from overall gain, or low-level auditory processing, as revealed by physiological modeling. Moreover, the context effect was more pronounced in nonmusicians, who also demonstrated facilitated sensory-motor synchronization with the meter for sequences that started as regular. In contrast, musicians showed weaker effects of recent context in their neural responses and robust ability to move along with the meter irrespective of stimulus degradation. Together, our results demonstrate that brain activity elicited by rhythm reflects not only passive tracking of stimulus features but also continuous integration of sensory input with recent context.
Affiliation(s)
- Tomas Lenc: MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Penrith, Sydney, NSW 2751, Australia
- Peter E Keller: MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Penrith, Sydney, NSW 2751, Australia
- Manuel Varlet: MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Penrith, Sydney, NSW 2751, Australia; School of Psychology, Western Sydney University, Penrith, Sydney, NSW 2751, Australia
- Sylvie Nozaradan: MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Penrith, Sydney, NSW 2751, Australia; Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Brussels 1200, Belgium; International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal QC H3C 3J7, Canada
10. Bouwer FL, Honing H, Slagter HA. Beat-based and Memory-based Temporal Expectations in Rhythm: Similar Perceptual Effects, Different Underlying Mechanisms. J Cogn Neurosci 2020; 32:1221-1241. PMID: 31933432. DOI: 10.1162/jocn_a_01529.
Abstract
Predicting the timing of incoming information allows the brain to optimize information processing in dynamic environments. Behaviorally, temporal expectations have been shown to facilitate processing of events at expected time points, such as sounds that coincide with the beat in musical rhythm. Yet, temporal expectations can develop based on different forms of structure in the environment, not just the regularity afforded by a musical beat. Little is still known about how different types of temporal expectations are neurally implemented and affect performance. Here, we orthogonally manipulated the periodicity and predictability of rhythmic sequences to examine the mechanisms underlying beat-based and memory-based temporal expectations, respectively. Behaviorally and using EEG, we looked at the effects of beat-based and memory-based expectations on auditory processing when rhythms were task-relevant or task-irrelevant. At expected time points, both beat-based and memory-based expectations facilitated target detection and led to attenuation of P1 and N1 responses, even when expectations were task-irrelevant (unattended). For beat-based expectations, we additionally found reduced target detection and enhanced N1 responses for events at unexpected time points (e.g., off-beat), regardless of the presence of memory-based expectations or task relevance. This latter finding supports the notion that periodicity selectively induces rhythmic fluctuations in neural excitability and furthermore indicates that, although beat-based and memory-based expectations may similarly affect auditory processing of expected events, their underlying neural mechanisms may be different.
11. Kaliakatsos-Papakostas M, Cambouropoulos E. Conceptual blending of high-level features and data-driven salience computation in melodic generation. Cogn Syst Res 2019. DOI: 10.1016/j.cogsys.2019.05.003.
12. Task-set control, chunking, and hierarchical timing in rhythm production. Psychol Res 2019; 83:1685-1702. PMID: 29909429. DOI: 10.1007/s00426-018-1038-z.
Abstract
We investigated task-set control processes and chunking in 16 novices and 16 amateur musicians, who produced unimanual rhythms in three experimental conditions: low-level timing tasks required isochronous tapping at constant target durations; sequencing tasks consisted of individual rhythmic patterns comprising multiple target durations; the task-set control condition required alternations between two rhythmic patterns. According to our hierarchical timing control model, the conditions differed in their task-set control demands necessary to provide rhythm programs for the sequencing of individual intervals. Transitions at predicted chunk boundaries were marked by increased frequencies of sequence errors, relative lengthening of intervals preceding the switch to a new rhythm chunk, and increased variability in intervals immediately following a switch. Amateur musicians showed superior timing (less variability) in complex rhythm tasks. Moreover, they made fewer sequence errors than novices at set-switch points, with their error patterns suggesting that they relied on larger chunks compared with novices. Our findings elucidate the time course of task reconfiguration processes in rhythm production and the role of chunking in the context of musical skill.
13. Morgan E, Fogel A, Nair A, Patel AD. Statistical learning and Gestalt-like principles predict melodic expectations. Cognition 2019; 189:23-34. PMID: 30913527. DOI: 10.1016/j.cognition.2018.12.015.
Abstract
Expectation, or prediction, has become a major theme in cognitive science. Music offers a powerful system for studying how expectations are formed and deployed in the processing of richly structured sequences that unfold rapidly in time. We ask to what extent expectations about an upcoming note in a melody are driven by two distinct factors: Gestalt-like principles grounded in the auditory system (e.g., a preference for subsequent notes to move in small intervals), and statistical learning of melodic structure. We use multinomial regression modeling to evaluate the predictions of computationally implemented models of melodic expectation against behavioral data from a musical cloze task, in which participants hear a novel melodic opening and are asked to sing the note they expect to come next. We demonstrate that both Gestalt-like principles and statistical learning contribute to listeners' online expectations. In conjunction with results in the domain of language, our results point to a larger-than-previously-assumed role for statistical learning in predictive processing across cognitive domains, even in cases that seem potentially governed by a smaller set of theoretically motivated rules. However, we also find that both of the models tested here leave much variance in the human data unexplained, pointing to a need for models of melodic expectation that incorporate underlying hierarchical and/or harmonic structure. We propose that our combined behavioral (melodic cloze) and modeling (multinomial regression) approach provides a powerful method for further testing and development of models of melodic expectation.
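The idea of blending corpus statistics with a Gestalt-like proximity preference can be sketched as a softmax over weighted log-probability and interval-size terms. This is only loosely in the spirit of the paper's multinomial regression: the bigram corpus, candidate notes, and weights below are arbitrary, not fitted to any data.

```python
import math
from collections import Counter

def expectation_probs(context_note, candidates, bigram_counts,
                      w_stat=1.0, w_prox=0.5):
    """Softmax blend of add-one-smoothed bigram statistics (statistical
    learning term) and pitch proximity (Gestalt-like term) over candidate
    continuations, given one context note (MIDI numbers)."""
    total = sum(bigram_counts.get((context_note, c), 0) for c in candidates) or 1
    scores = []
    for c in candidates:
        p_stat = (bigram_counts.get((context_note, c), 0) + 1) / (total + len(candidates))
        proximity = -abs(c - context_note)   # small intervals preferred
        scores.append(w_stat * math.log(p_stat) + w_prox * proximity)
    z = max(scores)
    exps = [math.exp(s - z) for s in scores]
    return [e / sum(exps) for e in exps]

# Toy corpus: note 60 is usually followed by 62, occasionally by 67.
counts = Counter({(60, 62): 8, (60, 67): 2})
probs = expectation_probs(60, candidates=[62, 67, 72], bigram_counts=counts)
```

Here both terms favor the stepwise continuation (62), so it dominates; fitting the weights to cloze responses is what lets one ask how much each factor contributes.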
Affiliation(s)
- Emily Morgan
- Department of Psychology, Tufts University, 490 Boston Ave, Medford, MA 02155, United States; Department of Linguistics, University of California, Davis, United States.
- Allison Fogel
- Department of Psychology, Tufts University, 490 Boston Ave, Medford, MA 02155, United States
- Anjali Nair
- Department of Psychology, Tufts University, 490 Boston Ave, Medford, MA 02155, United States
- Aniruddh D Patel
- Department of Psychology, Tufts University, 490 Boston Ave, Medford, MA 02155, United States; Azrieli Program in Brain, Mind, & Consciousness, Canadian Institute for Advanced Research (CIFAR), Canada; Radcliffe Institute for Advanced Studies, Harvard University, United States
14
Omigie D, Pearce M, Lehongre K, Hasboun D, Navarro V, Adam C, Samson S. Intracranial Recordings and Computational Modeling of Music Reveal the Time Course of Prediction Error Signaling in Frontal and Temporal Cortices. J Cogn Neurosci 2019; 31:855-873. [PMID: 30883293 DOI: 10.1162/jocn_a_01388] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Indexed: 12/30/2022]
Abstract
Prediction is held to be a fundamental process underpinning perception, action, and cognition. To examine the time course of prediction error signaling, we recorded intracranial EEG activity from nine presurgical epileptic patients while they listened to melodies whose information theoretical predictability had been characterized using a computational model. We examined oscillatory activity in the superior temporal gyrus (STG), the middle temporal gyrus (MTG), and the pars orbitalis of the inferior frontal gyrus, lateral cortical areas previously implicated in auditory predictive processing. We also examined activity in anterior cingulate gyrus (ACG), insula, and amygdala to determine whether signatures of prediction error signaling may also be observable in these subcortical areas. Our results demonstrate that the information content (a measure of unexpectedness) of musical notes modulates the amplitude of low-frequency oscillatory activity (theta to beta power) in bilateral STG and right MTG from within 100 and 200 msec of note onset, respectively. Our results also show this cortical activity to be accompanied by low-frequency oscillatory modulation in ACG and insula-areas previously associated with mediating physiological arousal. Finally, we showed that modulation of low-frequency activity is followed by that of high-frequency (gamma) power from approximately 200 msec in the STG, between 300 and 400 msec in the left insula, and between 400 and 500 msec in the ACG. We discuss these results with respect to models of neural processing that emphasize gamma activity as an index of prediction error signaling and highlight the usefulness of musical stimuli in revealing the wide-reaching neural consequences of predictive processing.
Affiliation(s)
- Diana Omigie
- Max Planck Institute for Empirical Aesthetics; Goldsmiths, University of London
- Katia Lehongre
- AP-HP, GH Pitié-Salpêtrière-Charles Foix; Inserm U 1127, CNRS UMR 7225, Sorbonne Université, UPMC Univ Paris 06 UMR S 1127, Institut du Cerveau et de la Moelle épinière, ICM, F-75013
- Vincent Navarro
- AP-HP, GH Pitié-Salpêtrière-Charles Foix; Inserm U 1127, CNRS UMR 7225, Sorbonne Université, UPMC Univ Paris 06 UMR S 1127, Institut du Cerveau et de la Moelle épinière, ICM, F-75013
- Severine Samson
- AP-HP, GH Pitié-Salpêtrière-Charles Foix; University of Lille
15
Harrison PMC, Müllensiefen D. Development and Validation of the Computerised Adaptive Beat Alignment Test (CA-BAT). Sci Rep 2018; 8:12395. [PMID: 30120265 PMCID: PMC6097996 DOI: 10.1038/s41598-018-30318-8] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Received: 12/06/2017] [Accepted: 07/26/2018] [Indexed: 01/20/2023]
Abstract
Beat perception is increasingly being recognised as a fundamental musical ability. A number of psychometric instruments have been developed to assess this ability, but these tests do not take advantage of modern psychometric techniques, and rarely receive systematic validation. The present research addresses this gap in the literature by developing and validating a new test, the Computerised Adaptive Beat Alignment Test (CA-BAT), a variant of the Beat Alignment Test (BAT) that leverages recent advances in psychometric theory, including item response theory, adaptive testing, and automatic item generation. The test is constructed and validated in four empirical studies. The results support the reliability and validity of the CA-BAT for laboratory testing, but suggest that the test is not well-suited to online testing, owing to its reliance on fine perceptual discrimination.
Affiliation(s)
- Peter M C Harrison
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, E1 4NS, United Kingdom.
- Daniel Müllensiefen
- Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, United Kingdom; University of Music, Drama, and Media, Hanover, Germany
16
Ozernov-Palchik O, Patel AD. Musical rhythm and reading development: does beat processing matter? Ann N Y Acad Sci 2018; 1423:166-175. [PMID: 29781084 DOI: 10.1111/nyas.13853] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Received: 01/19/2018] [Revised: 04/13/2018] [Accepted: 04/23/2018] [Indexed: 01/24/2023]
Abstract
There is mounting evidence for links between musical rhythm processing and reading-related cognitive skills, such as phonological awareness. This may be because music and speech are rhythmic: both involve processing complex sound sequences with systematic patterns of timing, accent, and grouping. Yet, there is a salient difference between musical and speech rhythm: musical rhythm is often beat-based (based on an underlying grid of equal time intervals), while speech rhythm is not. Thus, the role of beat-based processing in the reading-rhythm relationship is not clear. Is there a distinct relation between beat-based processing mechanisms and reading-related language skills, or is the rhythm-reading link entirely due to shared mechanisms for processing nonbeat-based aspects of temporal structure? We discuss recent evidence for a distinct link between beat-based processing and early reading abilities in young children, and suggest experimental designs that would allow one to further methodically investigate this relationship. We propose that beat-based processing taps into a listener's ability to use rich contextual regularities to form predictions, a skill important for reading development.
Affiliation(s)
- Ola Ozernov-Palchik
- Eliot Pearson Department of Child Study and Human Development, Tufts University, Medford, Massachusetts
- Aniruddh D Patel
- Department of Psychology, Tufts University, Medford, Massachusetts
- Azrieli Program in Brain, Mind and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Ontario, Canada
17
Pearce MT. Statistical learning and probabilistic prediction in music cognition: mechanisms of stylistic enculturation. Ann N Y Acad Sci 2018; 1423:378-395. [PMID: 29749625 PMCID: PMC6849749 DOI: 10.1111/nyas.13654] [Citation(s) in RCA: 51] [Impact Index Per Article: 8.5] [Received: 11/10/2017] [Revised: 01/31/2018] [Accepted: 02/06/2018] [Indexed: 11/28/2022]
Abstract
Music perception depends on internal psychological models derived through exposure to a musical culture. It is hypothesized that this musical enculturation depends on two cognitive processes: (1) statistical learning, in which listeners acquire internal cognitive models of statistical regularities present in the music to which they are exposed; and (2) probabilistic prediction based on these learned models that enables listeners to organize and process their mental representations of music. To corroborate these hypotheses, I review research that uses a computational model of probabilistic prediction based on statistical learning (the information dynamics of music (IDyOM) model) to simulate data from empirical studies of human listeners. The results show that a broad range of psychological processes involved in music perception-expectation, emotion, memory, similarity, segmentation, and meter-can be understood in terms of a single, underlying process of probabilistic prediction using learned statistical models. Furthermore, IDyOM simulations of listeners from different musical cultures demonstrate that statistical learning can plausibly predict causal effects of differential cultural exposure to musical styles, providing a quantitative model of cultural distance. Understanding the neural basis of musical enculturation will benefit from close coordination between empirical neuroimaging and computational modeling of underlying mechanisms, as outlined here.
Affiliation(s)
- Marcus T. Pearce
- Cognitive Science Research Group, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, UK
- Centre for Music in the Brain, Aarhus University, Aarhus, Denmark
18
Bouwer FL, Burgoyne JA, Odijk D, Honing H, Grahn JA. What makes a rhythm complex? The influence of musical training and accent type on beat perception. PLoS One 2018; 13:e0190322. [PMID: 29320533 PMCID: PMC5761885 DOI: 10.1371/journal.pone.0190322] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Received: 09/14/2017] [Accepted: 12/12/2017] [Indexed: 11/18/2022]
Abstract
Perception of a regular beat in music is inferred from different types of accents. For example, increases in loudness cause intensity accents, and the grouping of time intervals in a rhythm creates temporal accents. Accents are expected to occur on the beat: when accents are "missing" on the beat, the beat is more difficult to find. However, it is unclear whether accents occurring off the beat alter beat perception similarly to missing accents on the beat. Moreover, no one has examined whether intensity accents influence beat perception more or less strongly than temporal accents, nor how musical expertise affects sensitivity to each type of accent. In two experiments, we obtained ratings of difficulty in finding the beat in rhythms with either temporal or intensity accents, and which varied in the number of accents on the beat as well as the number of accents off the beat. In both experiments, the occurrence of accents on the beat facilitated beat detection more in musical experts than in musical novices. In addition, the number of accents on the beat affected beat finding more in rhythms with temporal accents than in rhythms with intensity accents. The effect of accents off the beat was much weaker than the effect of accents on the beat and appeared to depend on musical expertise, as well as on the number of accents on the beat: when many accents on the beat are missing, beat perception is quite difficult, and adding accents off the beat may not reduce beat perception further. Overall, the different types of accents were processed qualitatively differently, depending on musical expertise. Therefore, these findings indicate the importance of designing ecologically valid stimuli when testing beat perception in musical novices, who may need different types of accent information than musical experts to be able to find a beat. Furthermore, our findings stress the importance of carefully designing rhythms for social and clinical applications of beat perception, as not all listeners treat all rhythms alike.
Affiliation(s)
- Fleur L. Bouwer
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
- J. Ashley Burgoyne
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Daan Odijk
- Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Henkjan Honing
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Jessica A. Grahn
- Brain and Mind Institute, Department of Psychology, University of Western Ontario, London (ON), Canada
19
Ravignani A, Honing H, Kotz SA. Editorial: The Evolution of Rhythm Cognition: Timing in Music and Speech. Front Hum Neurosci 2017; 11:303. [PMID: 28659775 PMCID: PMC5468413 DOI: 10.3389/fnhum.2017.00303] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Received: 04/29/2017] [Accepted: 05/26/2017] [Indexed: 01/12/2023]
Affiliation(s)
- Andrea Ravignani
- Veterinary and Research Department, Sealcentre Pieterburen, Pieterburen, Netherlands; Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium
- Henkjan Honing
- Music Cognition Group, Amsterdam Brain and Cognition, Institute for Logic, Language, and Computation, University of Amsterdam, Amsterdam, Netherlands
- Sonja A Kotz
- Basic and Applied NeuroDynamics Lab, Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany