1. Neural dynamics of predictive timing and motor engagement in music listening. Science Advances 2024; 10:eadi2525. PMID: 38446888; PMCID: PMC10917349; DOI: 10.1126/sciadv.adi2525.
Abstract
Why do humans spontaneously dance to music? To test the hypothesis that motor dynamics reflect predictive timing during music listening, we created melodies with varying degrees of rhythmic predictability (syncopation) and asked participants to rate their wanting-to-move (groove) experience. Degree of syncopation and groove ratings are quadratically correlated. Magnetoencephalography data showed that, while auditory regions track the rhythm of melodies, beat-related 2-hertz activity and neural dynamics at delta (1.4 hertz) and beta (20 to 30 hertz) rates in the dorsal auditory pathway code for the experience of groove. Critically, the left sensorimotor cortex coordinates these groove-related delta and beta activities. These findings align with the predictions of a neurodynamic model, suggesting that oscillatory motor engagement during music listening reflects predictive timing and is effected by the interaction of neural dynamics along the dorsal auditory pathway.
2. Testing a computational model for aural detection of aircraft in ambient noise. The Journal of the Acoustical Society of America 2023; 154:3799-3809. PMID: 38109404; DOI: 10.1121/10.0023933.
Abstract
Computational models are used to predict the performance of human listeners for carefully specified signal and noise conditions. However, there may be substantial discrepancies between the conditions under which listeners are tested and those used for model predictions. Thus, models may predict better performance than exhibited by the listeners, or they may "fail" to capture the ability of the listener to respond to subtle stimulus conditions. This study tested a computational model devised to predict a listener's ability to detect an aircraft in various soundscapes. The model and listeners processed the same sound recordings under carefully specified testing conditions. Details of signal and masker calibration were carefully matched, and the model was tested using the same adaptive tracking paradigm. Perhaps most importantly, the behavioral results were not available to the modeler before the model predictions were presented. Recordings from three different aircraft were used as the target signals. Maskers were derived from recordings obtained at nine locations ranging from very quiet rural environments to suburban and urban settings. Overall, with a few exceptions, model predictions matched the performance of the listeners very well. Discussion focuses on those differences and possible reasons for their occurrence.
3. Hebbian learning with elasticity explains how the spontaneous motor tempo affects music performance synchronization. PLoS Comput Biol 2023; 19:e1011154. PMID: 37285380; DOI: 10.1371/journal.pcbi.1011154.
Abstract
A musician's spontaneous rate of movement, called spontaneous motor tempo (SMT), can be measured while the musician spontaneously plays a simple melody. Data show that the SMT influences the musician's tempo and synchronization. In this study we present a model that captures these phenomena. We review the results from three previously published studies: solo musical performance with a pacing metronome tempo that is different from the SMT, solo musical performance without a metronome at a tempo that is faster or slower than the SMT, and duet musical performance between musicians with matching or mismatching SMTs. These studies showed, respectively, that the asynchrony between the pacing metronome and the musician's tempo grew as a function of the difference between the metronome tempo and the musician's SMT, that musicians drifted away from the initial tempo toward the SMT, and that absolute asynchronies were smaller if musicians had matching SMTs. We hypothesize that the SMT constantly acts as a pulling force on musical actions performed at a tempo different from the musician's SMT. To test this hypothesis, we developed a model consisting of a nonlinear oscillator with Hebbian tempo learning and a pulling force toward the model's spontaneous frequency. While the model's spontaneous frequency emulates the SMT, elastic Hebbian learning allows the oscillator's frequency to match a stimulus frequency. We first fit model parameters to the data in the first of the three studies and then asked whether the same model would explain the data in the remaining two studies without further tuning. The model's dynamics allowed it to explain all three experiments with a single set of parameters. Our theory offers a dynamical-systems explanation of how an individual's SMT affects synchronization in realistic music performance settings, and the model also enables predictions about performance settings not yet tested.
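As a rough illustration of the mechanism described above (not the authors' published model; the oscillator form and all parameter values here are invented for the sketch), a phase oscillator whose frequency is pulled toward the stimulus by Hebbian learning and back toward the SMT by an elastic force settles at a compromise tempo between the two:

```python
import numpy as np

def entrain(f_stim=2.0, f_smt=1.5, K=4.0, k_learn=1.0, k_elastic=0.5,
            T=60.0, dt=0.001):
    """Phase oscillator with adaptive frequency: Hebbian learning pulls the
    frequency toward the stimulus tempo, while an elastic force pulls it
    back toward the spontaneous motor tempo (SMT)."""
    phi, f = 0.0, f_smt                     # oscillator phase (rad), frequency (Hz)
    for n in range(int(T / dt)):
        psi = 2 * np.pi * f_stim * n * dt   # stimulus phase
        err = np.sin(psi - phi)             # phase-error signal
        phi += dt * (2 * np.pi * f + K * err)
        f += dt * (k_learn * err - k_elastic * (f - f_smt))
    return f
```

With the elastic term active, the learned tempo ends up between the SMT (1.5 Hz) and the stimulus (2 Hz); setting `k_elastic = 0` removes the pull and the oscillator converges to the stimulus tempo.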
4. Dynamic models for musical rhythm perception and coordination. Front Comput Neurosci 2023; 17:1151895. PMID: 37265781; PMCID: PMC10229831; DOI: 10.3389/fncom.2023.1151895.
Abstract
Rhythmicity permeates large parts of human experience. Humans generate various motor and brain rhythms spanning a range of frequencies. We also experience and synchronize to externally imposed rhythmicity, for example from music and song or from the 24-h light-dark cycle of the sun. In the context of music, humans have the ability to perceive, generate, and anticipate rhythmic structures, for example, "the beat." Experimental and behavioral studies offer clues about the biophysical and neural mechanisms that underlie our rhythmic abilities and about the different brain areas involved, but many open questions remain. In this paper, we review several theoretical and computational approaches, each centered on a different level of description, that address specific aspects of musical rhythm generation, perception, attention, perception-action coordination, and learning. We survey methods and results from applications of dynamical systems theory, neuro-mechanistic modeling, and Bayesian inference. Some frameworks rely on synchronization of intrinsic brain rhythms that span the relevant frequency range; some formulations involve real-time adaptation schemes for error correction to align the phase and frequency of a dedicated circuit; others involve learning and dynamically adjusting expectations to make rhythm-tracking predictions. Each of these approaches, while initially designed to answer specific questions, offers the possibility of being integrated into a larger framework that provides insights into our ability to perceive and generate rhythmic patterns.
5. Neural Entrainment to Musical Pulse in Naturalistic Music Is Preserved in Aging: Implications for Music-Based Interventions. Brain Sci 2022; 12:1676. PMID: 36552136; PMCID: PMC9775503; DOI: 10.3390/brainsci12121676.
Abstract
Neural entrainment to musical rhythm is thought to underlie the perception and production of music. In aging populations, the strength of neural entrainment to rhythm has been found to be attenuated, particularly during attentive listening to auditory streams. However, previous studies on neural entrainment to rhythm and aging have often employed artificial auditory rhythms or a limited set of recorded, naturalistic music, failing to account for the diversity of rhythmic structures found in natural music. As part of a larger project assessing a novel music-based intervention for healthy aging, we investigated neural entrainment to musical rhythms in the electroencephalogram (EEG) while younger and older adults listened to self-selected musical recordings. We specifically measured neural entrainment at the level of the musical pulse, quantified here as the phase-locking value (PLV), after normalizing the PLVs to each musical recording's detected pulse frequency. As predicted, we observed strong neural phase-locking to the musical pulse, and to the subharmonic and harmonic levels of the musical meter. Overall, PLVs did not differ significantly between older and younger adults. This preserved neural entrainment to musical pulse and rhythm could support the design of music-based interventions that aim to modulate endogenous brain activity via self-selected music for healthy cognitive aging.
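The PLV used above is simple to compute once instantaneous phases are available (in a real EEG pipeline the phase would come from band-passed data, e.g. via a Hilbert transform; the signals below are synthetic stand-ins):

```python
import numpy as np

def plv(phase_a, phase_b):
    """Phase-locking value: magnitude of the mean phase-difference vector
    (1 = perfect locking, near 0 = no consistent phase relationship)."""
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 0.01)             # 10 s at 100 Hz
pulse = 2 * np.pi * 2.0 * t                # 2 Hz musical pulse phase

locked = pulse + 0.3 * rng.standard_normal(t.size)  # jittered but locked
drifting = 2 * np.pi * 2.3 * t             # 2.3 Hz: phase drifts past the pulse
```

A signal phase-locked to the pulse yields a PLV near 1 despite jitter, while a signal at a nearby but different frequency yields a PLV near 0.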
6. The relationship between entrainment dynamics and reading fluency assessed by sensorimotor perturbation. Exp Brain Res 2022; 240:1775-1790. PMID: 35507069; DOI: 10.1007/s00221-022-06369-9.
Abstract
A consistent relationship has been found between rhythmic processing and reading skills. Impairment of the ability to entrain movements to an auditory rhythm in clinical populations with language-related deficits, such as children with developmental dyslexia, has been found in both behavioral and neural studies. In this study, we explored the relationship between rhythmic entrainment, behavioral synchronization, reading fluency, and reading comprehension in neurotypical English- and Mandarin-speaking adults. First, we examined entrainment stability by asking participants to coordinate taps with an auditory metronome in which unpredictable perturbations were introduced to disrupt entrainment. Next, we assessed behavioral synchronization by asking participants to coordinate taps with the syllables they produced while reading sentences as naturally as possible (tap to syllable task). Finally, we measured reading fluency and reading comprehension for native English and native Mandarin speakers. Stability of entrainment correlated strongly with tap to syllable task performance and with reading fluency, and both findings generalized across English and Mandarin speakers.
7. A Dynamical, Radically Embodied, and Ecological Theory of Rhythm Development. Front Psychol 2022; 13:653696. PMID: 35282203; PMCID: PMC8907845; DOI: 10.3389/fpsyg.2022.653696.
Abstract
Musical rhythm abilities-the perception of and coordinated action to the rhythmic structure of music-undergo remarkable change over human development. In the current paper, we introduce a theoretical framework for modeling the development of musical rhythm. The framework, based on Neural Resonance Theory (NRT), explains rhythm development in terms of resonance and attunement, which are formalized using a general theory that includes non-linear resonance and Hebbian plasticity. First, we review the developmental literature on musical rhythm, highlighting several developmental processes related to rhythm perception and action. Next, we offer an exposition of Neural Resonance Theory and argue that elements of the theory are consistent with dynamical, radically embodied (i.e., non-representational) and ecological approaches to cognition and development. We then discuss how dynamical models, implemented as self-organizing networks of neural oscillations with Hebbian plasticity, predict key features of music development. We conclude by illustrating how the notions of dynamical embodiment, resonance, and attunement provide a conceptual language for characterizing musical rhythm development, and, when formalized in physiologically informed dynamical models, provide a theoretical framework for generating testable empirical predictions about musical rhythm development, such as the kinds of native and non-native rhythmic structures infants and children can learn, steady-state evoked potentials to native and non-native musical rhythms, and the effects of short-term (e.g., infant bouncing, infant music classes), long-term (e.g., perceptual narrowing to musical rhythm), and very-long term (e.g., music enculturation, musical training) learning on music perception-action.
8. Bouncing the network: A dynamical systems model of auditory-vestibular interactions underlying infants' perception of musical rhythm. Dev Sci 2021; 24:e13103. PMID: 33570778; DOI: 10.1111/desc.13103.
Abstract
Previous work suggests that auditory-vestibular interactions, which emerge during bodily movement to music, can influence the perception of musical rhythm. In a seminal study on the ontogeny of musical rhythm, Phillips-Silver and Trainor (2005) found that bouncing infants to an unaccented rhythm influenced infants' perceptual preferences for accented rhythms that matched the rate of bouncing. In the current study, we ask whether nascent, diffuse coupling between auditory and motor systems is sufficient to bootstrap short-term Hebbian plasticity in the auditory system and explain infants' preferences for accented rhythms thought to arise from auditory-vestibular interactions. First, we specify a nonlinear, dynamical system in which two oscillatory neural networks, representing developmentally nascent auditory and motor systems, interact through weak, non-specific coupling. The auditory network was equipped with short-term Hebbian plasticity, allowing the auditory network to tune its intrinsic resonant properties. Next, we simulate the effect of vestibular input (e.g., infant bouncing) on infants' perceptual preferences for accented rhythms. We found that simultaneous auditory-vestibular training shaped the model's response to musical rhythm, enhancing vestibular-related frequencies in auditory-network activity. Moreover, simultaneous auditory-vestibular training, relative to auditory- or vestibular-only training, facilitated short-term auditory plasticity in the model, producing stronger oscillator connections in the auditory network. Finally, when tested on a musical rhythm, models which received simultaneous auditory-vestibular training, but not models that received auditory- or vestibular-only training, resonated strongly at frequencies related to their "bouncing," a finding qualitatively similar to infants' preferences for accented rhythms that matched the rate of infant bouncing.
9. Multifrequency Hebbian plasticity in coupled neural oscillators. Biological Cybernetics 2021; 115:43-57. PMID: 33399947; DOI: 10.1007/s00422-020-00854-6.
Abstract
We study multifrequency Hebbian plasticity by analyzing phenomenological models of weakly connected neural networks. We start with an analysis of a model for single-frequency networks previously shown to learn and memorize phase differences between component oscillators. We then study a model for gradient frequency neural networks (GrFNNs) which extends the single-frequency model by introducing frequency detuning and nonlinear coupling terms for multifrequency interactions. Our analysis focuses on models of two coupled oscillators and examines the dynamics of steady-state behaviors in multiple parameter regimes available to the models. We find that the model for two distinct frequencies shares essential dynamical properties with the single-frequency model and that Hebbian learning results in stronger connections for simple frequency ratios than for complex ratios. We then compare the analysis of the two-frequency model with numerical simulations of the GrFNN model and show that Hebbian plasticity in the latter is locally dominated by a nonlinear resonance captured by the two-frequency model.
10. Delayed feedback embedded in perception-action coordination cycles results in anticipation behavior during synchronized rhythmic action: A dynamical systems approach. PLoS Comput Biol 2019; 15:e1007371. PMID: 31671096; PMCID: PMC6822724; DOI: 10.1371/journal.pcbi.1007371.
Abstract
Dancing and playing music require people to coordinate actions with auditory rhythms. In laboratory perception-action coordination tasks, people are asked to synchronize taps with a metronome. When synchronizing with a metronome, people tend to anticipate stimulus onsets, tapping slightly before the stimulus. The anticipation tendency increases with longer stimulus periods of up to 3500 ms, but is less pronounced in trained individuals like musicians compared to non-musicians. Furthermore, external factors influence the timing of tapping. These factors include the presence of auditory feedback from one's own taps, the presence of a partner performing coordinated joint tapping, and transmission latencies (TLs) between coordinating partners. Phenomena like the anticipation tendency can be explained by delay-coupled systems, which may be inherent to the sensorimotor system during perception-action coordination. Here we tested whether a dynamical systems model based on this hypothesis reproduces observed patterns of human synchronization. We simulated behavior with a model consisting of an oscillator receiving its own delayed activity as input. Three simulation experiments were conducted using previously published behavioral data from 1) simple tapping, 2) two-person alternating beat-tapping, and 3) two-person alternating rhythm-clapping in the presence of a range of constant auditory TLs. In Experiment 1, our model replicated the larger anticipation observed for longer stimulus intervals, and adjusting the amplitude of the delayed feedback reproduced the difference between musicians and non-musicians. In Experiment 2, by connecting two models we replicated the smaller anticipation observed in human joint tapping with bi-directional auditory feedback compared to joint tapping without feedback. In Experiment 3, we varied TLs between two models alternately receiving signals from one another. Results showed reciprocal lags at points of alternation, consistent with behavioral patterns. Overall, our model explains various anticipatory behaviors, and has potential to inform theories of adaptive human synchronization.

When navigating a busy sidewalk, people coordinate their behavior in an orderly manner. Other activities require people to carefully synchronize periodic actions, as in group rowing or marching. When individuals tap in synchrony with a metronome, their taps tend to anticipate the metronome. Experiments have revealed that factors like musical expertise, the presence of a synchronizing partner, auditory feedback, and the sound travel time all systematically affect the tendency to anticipate. While researchers have hypothesized a number of potential mechanisms for such anticipatory behavior, none have successfully accounted for all of the effects. Previous research on coupled physical systems has shown that when one system receives input from a second system, plus its own delayed signal as input, this causes system 1 to anticipate system 2. We hypothesize that the tendency to anticipate is the result of delayed communication between neurons. Our work demonstrates the ability of delay-coupled physical systems to capture human anticipation and the effect of external factors on the anticipation tendency. Our model supports the theory that delayed communication within the nervous system is crucial to understanding anticipatory coordinative behavior.
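The anticipation mechanism can be sketched with a textbook "anticipating synchronization" setup, in which an oscillator is driven by the stimulus and by its own delayed phase; the stable solution then leads the stimulus by roughly the feedback delay. This is a generic illustration with invented parameters, not the authors' fitted model:

```python
import numpy as np

def anticipation(tau=0.05, K=2.0, f=2.0, T=20.0, dt=0.001):
    """Phase oscillator coupled to a metronome through its own delayed
    phase.  Returns the steady-state phase lead (rad); a positive value
    means the oscillator 'taps' before the beat."""
    omega = 2 * np.pi * f
    steps, d = int(T / dt), int(tau / dt)
    theta = np.empty(steps)
    theta[:d + 1] = omega * dt * np.arange(d + 1)   # start in phase
    for n in range(d + 1, steps):
        psi = omega * (n - 1) * dt                  # metronome phase
        theta[n] = theta[n - 1] + dt * (omega + K * np.sin(psi - theta[n - 1 - d]))
    return theta[-1] - omega * (steps - 1) * dt
```

The lead converges to roughly `omega * tau` and grows with the feedback delay, mirroring the negative mean asynchrony of human tapping.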
11. A canonical oscillator model of cochlear dynamics. Hear Res 2019; 380:100-107. PMID: 31234108; DOI: 10.1016/j.heares.2019.06.001.
Abstract
Nonlinear responses to acoustic signals arise through active processes in the cochlea, which has an exquisite sensitivity and wide dynamic range that can be explained by critical nonlinear oscillations of outer hair cells. Here we ask how the interaction of critical nonlinearities with the basilar membrane and other organ of Corti components could determine tuning properties of the mammalian cochlea. We propose a canonical oscillator model that captures the dynamics of the interaction between the basilar membrane and organ of Corti, using a pair of coupled oscillators for each place along the cochlea. We analyze two models in which a linear oscillator, representing basilar membrane dynamics, is coupled to a nonlinear oscillator poised at a Hopf instability. The coupling in the first model is unidirectional, and that of the second is bidirectional. Parameters are determined by fitting 496 auditory-nerve (AN) tuning curves of macaque monkeys. We find that the unidirectionally and bidirectionally coupled models account equally well for threshold tuning. In addition, however, the bidirectionally coupled model exhibits low-amplitude, spontaneous oscillation in the absence of stimulation, predicting that phase locking will occur before a significant increase in firing frequency, in accordance with well known empirical observations. This leads us to a canonical oscillator cochlear model based on the fundamental principles of critical nonlinear oscillation and coupling dynamics. The model is more biologically realistic than widely used linear or nonlinear filter-based models, yet parsimoniously displays key features of nonlinear mechanistic models. It is efficient enough for computational studies of auditory perception and auditory physiology.
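The signature behavior of a critical Hopf element, cube-root compression of the response at resonance, can be checked in a few lines. This is the single-oscillator caricature (integrated in the rotating frame, driven exactly at its natural frequency, with invented forcing values), not the paper's coupled basilar-membrane/organ-of-Corti model:

```python
def hopf_response(F, alpha=0.0, T=60.0, dt=0.01):
    """Steady-state amplitude of a Hopf oscillator driven at its natural
    frequency, rotating-frame equation:  w' = alpha*w - |w|^2 * w + F.
    At criticality (alpha = 0) the amplitude obeys r**3 = F."""
    w = 0j
    for _ in range(int(T / dt)):        # Euler integration to steady state
        w += dt * (alpha * w - abs(w) ** 2 * w + F)
    return abs(w)
```

An eightfold increase in forcing only doubles the response amplitude, the compressive nonlinearity that gives the cochlea its wide dynamic range.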
12. Modeling infants' perceptual narrowing to musical rhythms: neural oscillation and Hebbian plasticity. Ann N Y Acad Sci 2019; 1453:125-139. PMID: 31021447; DOI: 10.1111/nyas.14050.
Abstract
Previous research suggests that infants' perception of musical rhythm is fine-tuned to culture-specific rhythmic structures over the first postnatal year of human life. To date, however, little is known about the neurobiological principles that may underlie this process. In the current study, we used a dynamical systems model featuring neural oscillation and Hebbian plasticity to simulate infants' perceptual learning of culture-specific musical rhythms. First, we demonstrate that oscillatory activity in an untrained network reflects the rhythmic structure of either a Western or a Balkan training rhythm in a veridical fashion. Next, during a period of unsupervised learning, we show that the network learns the rhythmic structure of either a Western or a Balkan training rhythm through the self-organization of network connections. Finally, we demonstrate that the learned connections affect the networks' response to violations to the metrical structure of native and nonnative rhythms, a pattern of findings that mirrors the behavioral data on infants' perceptual narrowing to musical rhythms.
13. Mode locking in periodically forced gradient frequency neural networks. Phys Rev E 2019; 99:022421. PMID: 30934299; DOI: 10.1103/physreve.99.022421.
Abstract
We study mode locking in a canonical model of gradient frequency neural networks under periodic forcing. The canonical model is a generic mathematical model for a network of nonlinear oscillators tuned to a range of distinct frequencies. It is mathematically more tractable than biological neuron models and allows close analysis of mode-locking behaviors. Here we analyze individual modes of synchronization for a periodically forced canonical model and present a complete set of driven behaviors for all parameter regimes available in the model. Using a closed-form approximation, we show that the Arnold tongue (i.e., locking region) for k:m synchronization gets narrower as k and m increase. We find that numerical simulations of the canonical model closely follow the analysis of individual modes when forcing is weak, but they deviate at high forcing amplitudes for which oscillator dynamics are simultaneously influenced by multiple modes of synchronization.
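The qualitative result, that higher-order k:m tongues are narrower, also shows up in the simplest standard model of mode locking, the sine circle map, used here as a generic stand-in for the canonical GrFNN analysis (the forcing strength K and scan resolution are arbitrary choices):

```python
import math

def rotation_number(Omega, K=0.9, n_trans=200, n_iter=2000):
    """Rotation number of the sine circle map
       theta_{n+1} = theta_n + Omega - (K / 2pi) * sin(2pi * theta_n)."""
    theta = 0.0
    for _ in range(n_trans):            # discard transient
        theta += Omega - K / (2 * math.pi) * math.sin(2 * math.pi * theta)
    start = theta
    for _ in range(n_iter):
        theta += Omega - K / (2 * math.pi) * math.sin(2 * math.pi * theta)
    return (theta - start) / n_iter

def tongue_width(p, q, K=0.9, step=0.002, tol=1e-3):
    """Fraction of Omega in [0, 1) locked at rotation number p/q."""
    omegas = [i * step for i in range(int(1 / step))]
    locked = sum(abs(rotation_number(O, K) - p / q) < tol for O in omegas)
    return locked / len(omegas)
```

Scanning the detuning shows the 0:1 tongue is far wider than the 1:2 tongue, which in turn dwarfs higher-order tongues.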
14. Cortical tracking of rhythm in music and speech. Neuroimage 2018; 185:96-101. PMID: 30336253; DOI: 10.1016/j.neuroimage.2018.10.037.
Abstract
Neural activity phase-locks to rhythm in both music and speech. However, the literature lacks a direct test of whether cortical tracking of rhythm is comparable across the two domains when the rhythmic structure itself is matched. Moreover, although musical training improves multiple aspects of music and speech perception, the relationship between musical training and cortical tracking of rhythm has not been compared directly across domains. We recorded the electroencephalogram (EEG) from 28 participants (14 female) with a range of musical training who listened to melodies and sentences with identical rhythmic structure. We compared cerebral-acoustic coherence (CACoh) between the EEG signal and single-trial stimulus envelopes (as a measure of cortical entrainment) across domains and correlated years of musical training with CACoh. We hypothesized that neural activity would be comparably phase-locked across domains, and that the amount of musical training would be associated with increasingly strong phase locking in both domains. We found that participants with only a few years of musical training had a comparable cortical response to music and speech rhythm, partially supporting the hypothesis. However, the cortical response to music rhythm increased with years of musical training while the response to speech rhythm did not, leading to an overall greater cortical response to music rhythm across all participants. We suggest that task demands shaped the asymmetric cortical tracking across domains.
15. Author Correction: Overcoming Bias: Cognitive Control Reduces Susceptibility to Framing Effects in Evaluating Musical Performance. Sci Rep 2018; 8:8662. PMID: 29849068; PMCID: PMC5976720; DOI: 10.1038/s41598-018-26663-3.
Abstract
A correction to this article has been published and is linked from the HTML and PDF versions of this paper. The error has been fixed in the paper.
16. Mode-locking behavior of Izhikevich neurons under periodic external forcing. Phys Rev E 2017; 95:062414. PMID: 28709287; DOI: 10.1103/physreve.95.062414.
Abstract
Many neurons in the auditory system of the brain must encode periodic signals. These neurons under periodic stimulation display rich dynamical states including mode locking and chaotic responses. Periodic stimuli such as sinusoidal waves and amplitude modulated sounds can lead to various forms of n:m mode-locked states, in which a neuron fires n action potentials per m cycles of the stimulus. Here, we study mode-locking in the Izhikevich neurons, a reduced model of the Hodgkin-Huxley neurons. The Izhikevich model is much simpler in terms of the dimension of the coupled nonlinear differential equations compared with other existing models, but excellent for generating the complex spiking patterns observed in real neurons. We obtained the regions of existence of the various mode-locked states on the frequency-amplitude plane, called Arnold tongues, for the Izhikevich neurons. Arnold tongue analysis provides useful insight into the organization of mode-locking behavior of neurons under periodic forcing. We find these tongues for both class-1 and class-2 excitable neurons in both deterministic and noisy regimes.
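The Izhikevich model is compact enough to reproduce in a few lines. The sketch below uses the standard regular-spiking parameters with an invented sinusoidal drive and simply reports spikes per forcing cycle (a crude proxy for the n:m ratio; the paper's Arnold-tongue analysis scans forcing frequency and amplitude systematically):

```python
import numpy as np

def izhikevich_spikes(I_dc=10.0, A=5.0, f_stim=10.0, T=2000.0, dt=0.25,
                      a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich neuron (regular-spiking parameters), time in ms, driven
    by I(t) = I_dc + A*sin(2*pi*f_stim*t).  Returns spike times (ms)."""
    v, u, spikes = -65.0, -65.0 * b, []
    for n in range(int(T / dt)):
        t = n * dt
        I = I_dc + A * np.sin(2 * np.pi * f_stim * t / 1000.0)
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                       # spike threshold: reset
            spikes.append(t)
            v, u = c, u + d
    return spikes

spikes = izhikevich_spikes()
ratio = len(spikes) / (10.0 * 2.0)          # spikes per forcing cycle (10 Hz, 2 s)
```

Sweeping `f_stim` and `A` over a grid and recording this ratio traces out the Arnold tongues described in the abstract.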
17. Editorial: Overlap of Neural Systems for Processing Language and Music. Front Psychol 2016; 7:876. PMID: 27378976; PMCID: PMC4905966; DOI: 10.3389/fpsyg.2016.00876.
18. Scaling the Dynamic Approach to Path Planning and Control: Competition among Behavioral Constraints. Int J Rob Res 2016. DOI: 10.1177/027836499901800103.
Abstract
The dynamical-systems approach to robot path planning defines a dynamics of robot behavior in which task constraints contribute independently to a nonlinear vector field that governs robot actions. We address problems that arise in scaling this approach to handle complex behavioral requirements. We propose a dynamics that operates in the space of task constraints, determining the relative contribution of each constraint to the behavioral dynamics. Competition among task constraints is able to deal with problems that arise when combining constraint contributions, making it possible to specify tasks that are more complex than simple navigation. To demonstrate the utility of this approach, we design a system of two agents to perform a cooperative navigation task. We show how competition among constraints enables agents to decide which behavior to execute in a given situation, resulting in the execution of sequences of behaviors that satisfy task requirements. We discuss the scalability of the competitive-dynamics approach to the design of more complex autonomous systems.
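The core of the behavioral-dynamics idea, attractors and repellers defined on the robot's heading angle, fits in a few lines (a toy sketch with invented gains, not the paper's full competitive constraint dynamics):

```python
import math

def steer(phi0, psi_target, psi_obstacle=None, k_tar=2.0, k_obs=3.0,
          sigma=0.4, T=10.0, dt=0.01):
    """Heading dynamics: the target bearing contributes an attractor,
    an obstacle bearing a narrow Gaussian-windowed repeller."""
    phi = phi0
    for _ in range(int(T / dt)):
        dphi = -k_tar * math.sin(phi - psi_target)      # target attractor
        if psi_obstacle is not None:                    # obstacle repeller
            diff = phi - psi_obstacle
            dphi += k_obs * diff * math.exp(-diff * diff / (2 * sigma ** 2))
        phi += dt * dphi
    return phi
```

Without an obstacle the heading relaxes to the target bearing; an obstacle on the direct path shifts the equilibrium heading away from the obstacle's bearing.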
Collapse
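The additive vector-field idea above can be sketched with a single agent, one target attractor, and one obstacle repellor on the heading variable. All gains, ranges, and positions are illustrative assumptions, and this omits the constraint-competition layer the paper adds on top.

```python
import math

def navigate(start=(0.0, 0.0), heading=0.0, target=(10.0, 0.0),
             obstacle=(5.0, 0.6), speed=0.05, dt=1.0, steps=400):
    """Heading dynamics with two independent additive contributions:
    an attractor at the target direction and a repellor at the
    obstacle direction whose strength decays with distance."""
    x, y = start
    for _ in range(steps):
        psi_tar = math.atan2(target[1] - y, target[0] - x)
        psi_obs = math.atan2(obstacle[1] - y, obstacle[0] - x)
        d_obs = math.hypot(obstacle[0] - x, obstacle[1] - y)
        # attractor: relaxes heading toward the target direction
        f_tar = -math.sin(heading - psi_tar)
        # repellor: pushes heading away from the obstacle direction
        delta = math.atan2(math.sin(heading - psi_obs),
                           math.cos(heading - psi_obs))
        f_obs = (2.0 * delta * math.exp(-delta * delta / 0.5)
                 * math.exp(-d_obs / 1.5))
        heading += dt * (f_tar + f_obs)
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
        if math.hypot(target[0] - x, target[1] - y) < 0.3:
            break
    return x, y

x, y = navigate()
```

The agent detours slightly around the obstacle and still reaches the target, because each constraint contributes its own attractor or repellor to the same heading vector field.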
|
19
|
Beat Keeping in a Sea Lion As Coupled Oscillation: Implications for Comparative Understanding of Human Rhythm. Front Neurosci 2016; 10:257. [PMID: 27375418 PMCID: PMC4891632 DOI: 10.3389/fnins.2016.00257] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2016] [Accepted: 05/23/2016] [Indexed: 12/11/2022] Open
Abstract
Human capacity for entraining movement to external rhythms (i.e., beat keeping) is ubiquitous, but its evolutionary history and neural underpinnings remain a mystery. Recent findings of entrainment to simple and complex rhythms in non-human animals pave the way for a novel comparative approach to assess the origins and mechanisms of rhythmic behavior. The most reliable non-human beat keeper to date is a California sea lion, Ronan, who was trained to match head movements to isochronous repeating stimuli and showed spontaneous generalization of this ability to novel tempos and to the complex rhythms of music. Does Ronan's performance rely on the same neural mechanisms as human rhythmic behavior? In the current study, we presented Ronan with simple rhythmic stimuli at novel tempos. On some trials, we introduced "perturbations," altering either tempo or phase in the middle of a presentation. Ronan quickly adjusted her behavior following all perturbations, recovering her consistent phase and tempo relationships to the stimulus within a few beats. Ronan's performance was consistent with predictions of mathematical models describing coupled oscillation: a model relying solely on phase coupling strongly matched her behavior, and the model was further improved with the addition of period coupling. These findings are the clearest evidence yet for parity in human and non-human beat keeping and support the view that the human ability to perceive and move in time to rhythm may be rooted in broadly conserved neural mechanisms.
Collapse
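The phase-coupling account of perturbation recovery can be sketched as a single relative-phase equation. The coupling strength, perturbation size, and time scales below are assumptions for illustration, not fitted values from Ronan's data.

```python
import numpy as np

def phase_coupling_recovery(k=2.0, d_omega=0.0, kick=0.6,
                            t_max=10.0, dt=0.005):
    """Relative phase psi between stimulus and movement evolves as
    d(psi)/dt = d_omega - k*sin(psi). A sudden phase perturbation
    ('kick') is applied halfway through; psi then relaxes back to
    its stable fixed point."""
    n = int(t_max / dt)
    psi = 0.0
    trace = np.empty(n)
    for i in range(n):
        if i == n // 2:
            psi += kick          # perturbation: sudden phase shift
        psi += dt * (d_omega - k * np.sin(psi))
        trace[i] = psi
    return trace

trace = phase_coupling_recovery()
```

Ronan's data were better fit when a period-adaptation term was added to a phase-only model like this one; that extension corresponds to letting the movement oscillator's natural frequency drift toward the stimulus rate, rather than holding `d_omega` fixed.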
|
20
|
Signal Processing in Periodically Forced Gradient Frequency Neural Networks. Front Comput Neurosci 2015; 9:152. [PMID: 26733858 PMCID: PMC4689852 DOI: 10.3389/fncom.2015.00152] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2015] [Accepted: 12/10/2015] [Indexed: 11/25/2022] Open
Abstract
Oscillatory instability at the Hopf bifurcation is a dynamical phenomenon that has been suggested to characterize active non-linear processes observed in the auditory system. Networks of oscillators poised near Hopf bifurcation points and tuned to tonotopically distributed frequencies have been used as models of auditory processing at various levels, but systematic investigation of the dynamical properties of such oscillatory networks is still lacking. Here we provide a dynamical systems analysis of a canonical model for gradient frequency neural networks driven by a periodic signal. We use linear stability analysis to identify various driven behaviors of canonical oscillators for all possible ranges of model and forcing parameters. The analysis shows that canonical oscillators exhibit qualitatively different sets of driven states and transitions for different regimes of model parameters. We classify the parameter regimes into four main categories based on their distinct signal processing capabilities. This analysis will lead to deeper understanding of the diverse behaviors of neural systems under periodic forcing and can inform the design of oscillatory network models of auditory signal processing.
Collapse
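The driven behavior of an oscillator poised below a Hopf bifurcation can be sketched as follows. This is a simplified normal form in the spirit of the canonical model analyzed here; the full canonical model includes frequency-scaling and higher-order coupling terms omitted in this sketch, and all parameter values are illustrative.

```python
import numpy as np

def driven_hopf(alpha=-1.0, beta=1.0, omega=2 * np.pi * 2.0,
                force=0.5, omega_0=2 * np.pi * 2.0,
                t_max=10.0, dt=0.001):
    """Complex oscillator z below a Hopf bifurcation (alpha < 0),
    driven by a periodic force at frequency omega_0:
    dz/dt = z*(alpha + i*omega - beta*|z|^2) + F*exp(i*omega_0*t).
    Returns the amplitude |z| after the transient has decayed."""
    z = 0.1 + 0.0j
    for i in range(int(t_max / dt)):
        t = i * dt
        dz = (z * (alpha + 1j * omega - beta * abs(z) ** 2)
              + force * np.exp(1j * omega_0 * t))
        z += dt * dz
    return abs(z)

amp_res = driven_hopf(omega_0=2 * np.pi * 2.0)   # driven at resonance
amp_far = driven_hopf(omega_0=2 * np.pi * 4.0)   # driven off resonance
```

A bank of such oscillators tuned to tonotopically distributed values of `omega` responds selectively near its own frequency, which is the basic filtering behavior the linear stability analysis in the paper characterizes across parameter regimes.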
|
21
|
|
22
|
Abstract
Entrainment of cortical rhythms to acoustic rhythms has been hypothesized to be the neural correlate of pulse and meter perception in music. Dynamic attending theory first proposed synchronization of endogenous perceptual rhythms nearly 40 years ago, but only recently has the pivotal role of neural synchrony been demonstrated. Significant progress has since been made in understanding the role of neural oscillations and the neural structures that support synchronized responses to musical rhythm. Synchronized neural activity has been observed in auditory and motor networks, and has been linked with attentional allocation and movement coordination. Here we describe a neurodynamic model that shows how self-organization of oscillations in interacting sensory and motor networks could be responsible for the formation of the pulse percept in complex rhythms. In a pulse synchronization study, we test the model's key prediction that pulse can be perceived at a frequency for which no spectral energy is present in the amplitude envelope of the acoustic rhythm. The result shows that participants perceive the pulse at the theoretically predicted frequency. This model is one of the few consistent with neurophysiological evidence on the role of neural oscillation, and it explains a phenomenon that other computational models fail to explain. Because it is based on a canonical model, the predictions hold for an entire family of dynamical systems, not only a specific one. Thus, this model provides a theoretical link between oscillatory neurodynamics and the induction of pulse and meter in musical rhythm.
Collapse
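The model's key prediction involves a pulse frequency at which the stimulus has no spectral energy, and that stimulus property is easy to verify numerically. The event pattern below is a hypothetical example constructed so that its DFT vanishes exactly at the pulse bin; it is not a stimulus from the study.

```python
import numpy as np

# 16-step grid at 8 steps/s: the loop lasts 2 s, DFT bin k is k * 0.5 Hz,
# and a 2 Hz pulse corresponds to bin 4.
events = [0, 2, 5, 7, 12, 14]   # hypothetical syncopated pattern
rhythm = np.zeros(16)
rhythm[events] = 1.0

spectrum = np.abs(np.fft.fft(rhythm))
pulse_bin = 4                   # 2 Hz
# spectrum[pulse_bin] is ~0: the rhythm has no energy at the pulse
# rate, yet a nonlinearly coupled oscillator network can still form
# a stable oscillation there, matching the perceived pulse.
```

Linear envelope-tracking accounts have nothing to lock onto at bin 4, which is why a nonlinear, self-organizing model is needed to explain pulse perception at that frequency.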
|
23
|
Abstract
The emergence of speech and music in the human species represent major evolutionary transitions that enabled the use of complex, temporally structured acoustic signals to coordinate social interaction. While the fundamental capacity for temporal coordination with complex acoustic signals has been shown in a few distantly related species, the extent to which nonhuman primates exhibit sensitivity to auditory rhythms remains controversial. In Experiment 1, we assessed spontaneous motor tempo and tempo matching in a bonobo (Pan paniscus), in the context of a social drumming interaction. In Experiment 2, the bonobo spontaneously entrained and synchronized her drum strikes within a range around her spontaneous motor tempo. Our results are consistent with the hypothesis that the evolution of acoustic communication builds upon fundamental neurodynamic mechanisms that can be found in a wide range of species, and are recruited for social interactions.
Collapse
|
24
|
Fractal structure enables temporal prediction in music. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2014; 136:EL256-EL262. [PMID: 25324107 DOI: 10.1121/1.4890198] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
1/f serial correlations and statistical self-similarity (fractal structure) have been measured in various dimensions of musical compositions. Musical performances also display 1/f properties in expressive tempo fluctuations, and listeners predict tempo changes when synchronizing. Here the authors show that the 1/f structure is sufficient for listeners to predict the onset times of upcoming musical events. These results reveal what information listeners use to anticipate events in complex, non-isochronous acoustic rhythms, and will require innovative models of temporal synchronization. This finding could improve therapies for Parkinson's and related disorders and inform deeper understanding of how endogenous neural rhythms anticipate events in complex, temporally structured communication signals.
Collapse
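The 1/f structure discussed above can be generated by spectral synthesis and verified from the slope of the periodogram in log-log coordinates. A minimal sketch (series length and seed are arbitrary):

```python
import numpy as np

def one_over_f(n=4096, seed=0):
    """Spectral synthesis of a 1/f series: set the magnitude of each
    frequency component to f**(-1/2) (so power goes as 1/f), draw
    random phases, and inverse-FFT back to the time domain."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n)
    mags = np.zeros_like(freqs)
    mags[1:] = freqs[1:] ** -0.5
    phases = rng.uniform(0, 2 * np.pi, len(freqs))
    return np.fft.irfft(mags * np.exp(1j * phases), n)

x = one_over_f()
# periodogram slope in log-log coordinates should be close to -1
# (DC and Nyquist bins excluded from the fit)
f = np.fft.rfftfreq(len(x))[1:-1]
p = np.abs(np.fft.rfft(x))[1:-1] ** 2
slope = np.polyfit(np.log(f), np.log(p), 1)[0]
```

A series like `x`, scaled to milliseconds, could serve as the fluctuating inter-onset-interval sequence of an expressive performance in a synchronization experiment.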
|
25
|
Abstract
Is there something special about the way music communicates feelings? Theorists since Meyer (1956) have attempted to explain how music could stimulate varied and subtle affective experiences by violating learned expectancies, or by mimicking other forms of social interaction. Our proposal is that music speaks to the brain in its own language; it need not imitate any other form of communication. We review recent theoretical and empirical literature, which suggests that all conscious processes consist of dynamic neural events, produced by spatially dispersed processes in the physical brain. Intentional thought and affective experience arise as dynamical aspects of neural events taking place in multiple brain areas simultaneously. At any given moment, this content comprises a unified "scene" that is integrated into a dynamic core through synchrony of neuronal oscillations. We propose that (1) neurodynamic synchrony with musical stimuli gives rise to musical qualia including tonal and temporal expectancies, and that (2) music-synchronous responses couple into core neurodynamics, enabling music to directly modulate core affect. Expressive music performance, for example, may recruit rhythm-synchronous neural responses to support affective communication. We suggest that the dynamic relationship between musical expression and the experience of affect presents a unique opportunity for the study of emotional experience. This may help elucidate the neural mechanisms underlying arousal and valence, and offer a new approach to exploring the complex dynamics of the how and why of emotional experience.
Collapse
|
26
|
Mode-locking neurodynamics predict human auditory brainstem responses to musical intervals. Hear Res 2013; 308:41-9. [PMID: 24091182 DOI: 10.1016/j.heares.2013.09.010] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/22/2013] [Revised: 09/13/2013] [Accepted: 09/17/2013] [Indexed: 11/25/2022]
Abstract
The auditory nervous system is highly nonlinear. Some nonlinear responses arise through active processes in the cochlea, while others may arise in neural populations of the cochlear nucleus, inferior colliculus and higher auditory areas. In humans, auditory brainstem recordings reveal nonlinear population responses to combinations of pure tones, and to musical intervals composed of complex tones. Yet the biophysical origin of central auditory nonlinearities, their signal processing properties, and their relationship to auditory perception remain largely unknown. Both stimulus components and nonlinear resonances are well represented in auditory brainstem nuclei due to neural phase-locking. Recently mode-locking, a generalization of phase-locking that implies an intrinsically nonlinear processing of sound, has been observed in mammalian auditory brainstem nuclei. Here we show that a canonical model of mode-locked neural oscillation predicts the complex nonlinear population responses to musical intervals that have been observed in the human brainstem. The model makes predictions about auditory signal processing and perception that are different from traditional delay-based models, and may provide insight into the nature of auditory population responses. We anticipate that the application of dynamical systems analysis will provide the starting point for generic models of auditory population dynamics, and lead to a deeper understanding of nonlinear auditory signal processing possibly arising in excitatory-inhibitory networks of the central auditory nervous system. This approach has the potential to link neural dynamics with the perception of pitch, music, and speech, and lead to dynamical models of auditory system development.
Collapse
|
27
|
What is special about musical emotion? Phys Life Rev 2013; 10:267-8. [PMID: 23928472 DOI: 10.1016/j.plrev.2013.07.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2013] [Accepted: 07/16/2013] [Indexed: 10/26/2022]
|
28
|
Abstract
Tonal relationships are foundational in music, providing the basis upon which musical structures, such as melodies, are constructed and perceived. A recent dynamic theory of musical tonality predicts that networks of auditory neurons resonate nonlinearly to musical stimuli. Nonlinear resonance leads to stability and attraction relationships among neural frequencies, and these neural dynamics give rise to the perception of relationships among tones that we collectively refer to as tonal cognition. Because this model describes the dynamics of neural populations, it makes specific predictions about human auditory neurophysiology. Here, we show how predictions about the auditory brainstem response (ABR) are derived from the model. To illustrate, we derive a prediction about population responses to musical intervals that has been observed in the human brainstem. Our modeled ABR shows qualitative agreement with important features of the human ABR. This provides a source of evidence that fundamental principles of auditory neurodynamics might underlie the perception of tonal relationships, and forces reevaluation of the role of learning and enculturation in tonal cognition.
Collapse
|
29
|
Temporal coordination and adaptation to rate change in music performance. J Exp Psychol Hum Percept Perform 2011; 37:1292-309. [PMID: 21553990 DOI: 10.1037/a0023102] [Citation(s) in RCA: 38] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
People often coordinate their actions with sequences that exhibit temporal variability and unfold at multiple periodicities. We compared oscillator- and timekeeper-based accounts of temporal coordination by examining musicians' coordination of rhythmic musical sequences with a metronome that gradually changed rate at the end of a musical phrase (Experiment 1) or at the beginning of a phrase (Experiment 2). The rhythms contained events that occurred at the same periodic rate as the metronome and at half the period. Rate change consisted of a linear increase or decrease in intervals between metronome onsets. Musicians coordinated their performances better with a metronome that decreased than increased in tempo (as predicted by an oscillator model), at both beginnings and ends of musical phrases. Model performance was tested with an oscillator period or timekeeper interval set to the same period as the metronome (1:1 coordination) or half the metronome period (2:1 coordination). Only the oscillator model was able to predict musicians' coordination at both periods. These findings suggest that coordination is based on internal neural oscillations that entrain to external sequences.
Collapse
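The contrast between an adapting oscillator and a fixed-interval timekeeper can be sketched with a generic two-parameter error-correction model: one gain corrects phase, the other adapts the internal period. This is not the specific oscillator model tested in the paper, and all gains and the tempo-change profile are illustrative assumptions.

```python
def tap_asynchronies(alpha=0.5, beta=0.2, n_beats=60):
    """Tap onsets follow a two-parameter error-correction rule:
    alpha corrects phase, beta corrects period; beta = 0 gives a
    fixed-period, phase-only (timekeeper-like) model. The metronome
    slows by 2 ms per beat. Returns the asynchronies in ms."""
    onsets, interval = [0.0], 500.0
    for k in range(n_beats):
        onsets.append(onsets[-1] + interval + 2.0 * k)
    tap, period, asyn = 0.0, 500.0, []
    for k in range(n_beats):
        a = tap - onsets[k]
        asyn.append(a)
        tap += period - alpha * a      # phase correction
        period -= beta * a             # period adaptation
    return asyn

adaptive = tap_asynchronies(beta=0.2)    # period coupling on
phase_only = tap_asynchronies(beta=0.0)  # phase coupling alone
```

With period adaptation the asynchronies stay bounded as the metronome slows, while the fixed-period model falls progressively behind, which is the qualitative signature distinguishing the two model classes.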
|
30
|
EEG Correlates of Song Prosody: A New Look at the Relationship between Linguistic and Musical Rhythm. Front Psychol 2011; 2:352. [PMID: 22144972 PMCID: PMC3225926 DOI: 10.3389/fpsyg.2011.00352] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2011] [Accepted: 11/09/2011] [Indexed: 11/26/2022] Open
Abstract
Song composers incorporate linguistic prosody into their music when setting words to melody, a process called “textsetting.” Composers tend to align the expected stress of the lyrics with strong metrical positions in the music. The present study was designed to explore the idea that temporal alignment helps listeners to better understand song lyrics by directing listeners’ attention to instances where strong syllables occur on strong beats. Three types of textsettings were created by aligning metronome clicks with all, some or none of the strong syllables in sung sentences. Electroencephalographic recordings were taken while participants listened to the sung sentences (primes) and performed a lexical decision task on subsequent words and pseudowords (targets, presented visually). Comparison of misaligned and well-aligned sentences showed that temporal alignment between strong/weak syllables and strong/weak musical beats were associated with modulations of induced beta and evoked gamma power, which have been shown to fluctuate with rhythmic expectancies. Furthermore, targets that followed well-aligned primes elicited greater induced alpha and beta activity, and better lexical decision task performance, compared with targets that followed misaligned and varied sentences. Overall, these findings suggest that alignment of linguistic stress and musical meter in song enhances musical beat tracking and comprehension of lyrics by synchronizing neural activity with strong syllables. This approach may begin to explain the mechanisms underlying the relationship between linguistic and musical rhythm in songs, and how rhythmic attending facilitates learning and recall of song lyrics. Moreover, the observations reported here coincide with a growing number of studies reporting interactions between the linguistic and musical dimensions of song, which likely stem from shared neural resources for processing music and speech.
Collapse
|
31
|
Neural responses to complex auditory rhythms: the role of attending. Front Psychol 2010; 1:224. [PMID: 21833279 PMCID: PMC3153829 DOI: 10.3389/fpsyg.2010.00224] [Citation(s) in RCA: 58] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2010] [Accepted: 11/26/2010] [Indexed: 11/13/2022] Open
Abstract
The aim of this study was to explore the role of attention in pulse and meter perception using complex rhythms. We used a selective attention paradigm in which participants attended to either a complex auditory rhythm or a visually presented word list. Performance on a reproduction task was used to gauge whether participants were attending to the appropriate stimulus. We hypothesized that attention to complex rhythms – which contain no energy at the pulse frequency – would lead to activations in motor areas involved in pulse perception. Moreover, because multiple repetitions of a complex rhythm are needed to perceive a pulse, activations in pulse-related areas would be seen only after sufficient time had elapsed for pulse perception to develop. Selective attention was also expected to modulate activity in sensory areas specific to the modality. We found that selective attention to rhythms led to increased BOLD responses in basal ganglia, and basal ganglia activity was observed only after the rhythms had cycled enough times for a stable pulse percept to develop. These observations suggest that attention is needed to recruit motor activations associated with the perception of pulse in complex rhythms. Moreover, attention to the auditory stimulus enhanced activity in an attentional sensory network including primary auditory cortex, insula, anterior cingulate, and prefrontal cortex, and suppressed activity in sensory areas associated with attending to the visual stimulus.
Collapse
|
32
|
|
33
|
|
34
|
Abstract
The experience of musical rhythm is a remarkable psychophysical phenomenon, in part because the perception of periodicities, namely pulse and meter, arise from stimuli that are not periodic. One possible function of such a transformation is to enable synchronization between individuals through perception of a common abstract temporal structure (e.g., during music performance). Thus, understanding the brain processes that underlie rhythm perception is fundamental to explaining musical behavior. Here, we propose that neural resonance provides an excellent account of many aspects of human rhythm perception. Our framework is consistent with recent brain-imaging studies showing neural correlates of rhythm perception in high-frequency oscillatory activity, and leads to the hypothesis that perception of pulse and meter result from rhythmic bursts of high-frequency neural activity in response to musical rhythms. High-frequency bursts of activity may enable communication between neural areas, such as auditory and motor cortices, during rhythm perception and production.
Collapse
|
35
|
|
36
|
|
37
|
Abstract
We investigated people's ability to adapt to the fluctuating tempi of music performance. In Experiment 1, four pieces from different musical styles were chosen, and performances were recorded from a skilled pianist who was instructed to play with natural expression. Spectral and rescaled range analyses on interbeat interval time-series revealed long-range (1/f type) serial correlations and fractal scaling in each piece. Stimuli for Experiment 2 included two of the performances from Experiment 1, with mechanical versions serving as controls. Participants tapped the beat at ¼- and ⅛-note metrical levels, successfully adapting to large tempo fluctuations in both performances. Participants predicted the structured tempo fluctuations, with superior performance at the ¼-note level. Thus, listeners may exploit long-range correlations and fractal scaling to predict tempo changes in music.
Collapse
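Rescaled range (R/S) analysis, used here to detect fractal scaling in interbeat intervals, can be sketched as follows. This is the naive estimator without small-sample bias corrections, and the window sizes are illustrative.

```python
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Naive rescaled-range (R/S) estimate of the Hurst exponent:
    for each window size, average R/S over non-overlapping segments,
    then fit the slope of log(R/S) against log(window size)."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation
            r = dev.max() - dev.min()           # range of the profile
            s = seg.std()                       # segment std deviation
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]

rng = np.random.default_rng(1)
h_white = hurst_rs(rng.normal(size=4096))   # ~0.5 for white noise
```

An uncorrelated series yields a Hurst exponent near 0.5, whereas the long-range-correlated interbeat intervals reported above would yield values substantially closer to 1.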
|
38
|
Abstract
We outline a theory of tonality that predicts tonal stability, attraction, and categorization based on the principles of nonlinear resonance. Perception of tonality is the natural consequence of neural resonance, arising from central auditory nonlinearities.
Collapse
|
39
|
Gamma-band activity reflects the metric structure of rhythmic tone sequences. ACTA ACUST UNITED AC 2005; 24:117-26. [PMID: 15922164 DOI: 10.1016/j.cogbrainres.2004.12.014] [Citation(s) in RCA: 138] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2004] [Revised: 12/16/2004] [Accepted: 12/28/2004] [Indexed: 10/25/2022]
Abstract
Relatively little is known about the dynamics of auditory cortical rhythm processing using non-invasive methods, partly because resolving responses to events in patterns is difficult using long-latency auditory neuroelectric responses. We studied the relationship between short-latency gamma-band (20-60 Hz) activity (GBA) and the structure of rhythmic tone sequences. We show that induced (non-phase-locked) GBA predicts tone onsets and persists when expected tones are omitted. Evoked (phase-locked) GBA occurs in response to tone onsets with approximately 50 ms latency, and is strongly diminished during tone omissions. These properties of auditory GBA correspond with perception of meter in acoustic sequences and provide evidence for the dynamic allocation of attention to temporally structured auditory sequences.
Collapse
|
40
|
Tempo dependence of middle- and long-latency auditory responses: power and phase modulation of the EEG at multiple time-scales. Clin Neurophysiol 2004; 115:1885-95. [PMID: 15261867 DOI: 10.1016/j.clinph.2004.03.024] [Citation(s) in RCA: 18] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/15/2004] [Indexed: 11/28/2022]
Abstract
OBJECTIVE We measured the influences of power and phase modulations of neuroelectric activity on auditory responses to pure-tone patterns with inter-onset intervals typical of music. METHODS Tones were presented to 8 subjects at 10 different tempos from 150 to 3125 ms and with random intervals. We quantified time-frequency (TF) power with respect to a pre-tone-onset baseline and the TF phase coherence across trials. Peak-to-peak event-related potential (ERP) amplitude values for the middle and long-latency auditory responses were obtained for comparison. RESULTS ERP amplitude, size of power modulation, and amount of phase coherence were larger at slower tempos for the long-latency response (LLR) but not for the middle-latency response (MLR). Multiple regression analysis indicated that for MLR and LLR, phase modulation was a better predictor of ERP amplitude than power modulation. CONCLUSIONS Phase modulation is a better predictor of ERP amplitude than power modulation for middle and long-latency auditory responses. SIGNIFICANCE Lack of diminution of the MLR at fast tempos indicates its usefulness for studying early cortical processing of music and speech patterns.
Collapse
|
41
|
Abstract
The measurement of time is fundamental to the perception of complex, temporally structured acoustic signals such as speech and music, yet the mechanisms of temporal sensitivity in the auditory system remain largely unknown. Recently, temporal feature detectors have been discovered in several vertebrate auditory systems. For example, midbrain neurons in the fish Pollimyrus are activated by specific rhythms contained in the simple sounds they use for communication. This poses the significant challenge of uncovering the neuro-computational mechanisms that underlie temporal feature detection. Here we describe a model network that responds selectively to temporal features of communication sounds, yielding temporal selectivity in output neurons that matches the selectivity functions found in the auditory system of Pollimyrus. The output of the network depends upon the timing of excitatory and inhibitory input and post-inhibitory rebound excitation. Interval tuning is achieved in a behaviorally relevant range (10 to 40 ms) using a biologically constrained model, providing a simple mechanism that is suitable for the neural extraction of the relatively long duration temporal cues (i.e. tens to hundreds of ms) that are important in animal communication and human speech.
Collapse
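The excitation-plus-rebound mechanism for interval tuning can be sketched by summing response kernels on a detector: fast direct excitation from each pulse, plus a rebound that peaks a fixed delay after the pulse (standing in for post-inhibitory rebound). Kernel shapes, delays, and amplitudes are illustrative assumptions, not the paper's biophysical model.

```python
import numpy as np

def interval_response(interval_ms, rebound_delay=25.0, width=6.0,
                      dt=0.1, a_exc=1.0, a_reb=0.8):
    """Peak drive on a detector receiving fast direct excitation plus
    a rebound peaking rebound_delay ms after each input pulse. Two
    pulses arrive interval_ms apart; the response is largest when the
    rebound from pulse 1 coincides with the excitation from pulse 2."""
    t = np.arange(0.0, 200.0, dt)
    exc = a_exc * (t / 2.0) * np.exp(1.0 - t / 2.0)   # peaks at ~2 ms
    reb = a_reb * np.exp(-(t - rebound_delay) ** 2 / (2 * width ** 2))
    kernel = exc + reb
    drive = np.zeros_like(t)
    for onset in (0.0, interval_ms):
        idx = int(round(onset / dt))
        drive[idx:] += kernel[:len(t) - idx]
    return drive.max()

r_short = interval_response(10.0)   # interval below the preferred one
r_pref = interval_response(25.0)    # interval matching the rebound delay
r_long = interval_response(50.0)    # interval above the preferred one
```

Thresholding `drive` turns this band-pass drive profile into the interval-selective firing described above; shifting `rebound_delay` shifts the preferred interval within the behaviorally relevant 10 to 40 ms range.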
|
42
|
Abstract
We address issues of synchronization to rhythms of musical complexity. In two experiments, synchronization to simple and more complex rhythmic sequences was investigated. Experiment 1 examined responses to phase and tempo perturbations within simple, structurally isochronous sequences, presented at different base rates. Experiment 2 investigated responses to similar perturbations embedded within more complex, metrically structured sequences; participants were explicitly instructed to synchronize at different metrical levels (i.e., tap at different rates to the same rhythmic patterns) on different trials. We found evidence that (1) the intrinsic tapping frequency adapts in response to temporal perturbations in both simple (isochronous) and complex (metrically structured) rhythms, (2) people can synchronize with unpredictable, metrically structured rhythms at different metrical levels, with qualitatively different patterns of synchronization seen at higher versus lower levels of metrical structure, and (3) synchronization at each tapping level reflects information from other metrical levels. The latter finding provides evidence for a dynamic and flexible internal representation of the sequence's metrical structure.
Collapse
|
43
|
|
44
|
|