1
Kim M, Schachner A. Sounds of Hidden Agents: The Development of Causal Reasoning About Musical Sounds. Dev Sci 2025; 28:e70021. [PMID: 40313093 PMCID: PMC12046371 DOI: 10.1111/desc.70021]
Abstract
Listening to music activates representations of movement and social agents. Why? We test whether causal reasoning plays a role, and find that from childhood, people can intuitively reason about how musical sounds were generated, inferring the events and agents that caused the sounds. In Experiment 1 (N = 120, pre-registered), 6-year-old children and adults inferred the presence of an unobserved animate agent from hearing musical sounds, by integrating information from the sounds' timing with knowledge of the visual context. Thus, children inferred that an agent was present when producing the sounds would require self-propelled movement, given the current visual context (e.g., unevenly-timed notes from evenly-spaced xylophone bars). Consistent with Bayesian causal inference, this reasoning was flexible, allowing people to make inferences not only about unobserved agents, but also about the structure of the visual environment in which sounds were produced (Experiment 2, N = 114). Across experiments, we found evidence of developmental change: younger children ages 4-5 years failed to integrate auditory and visual information, focusing solely on auditory features (Experiment 1) and failing to connect sounds to the visual contexts that produced them (Experiment 2). Our findings support a developmental account in which, before age 6, children's reasoning about the causes of musical sounds is limited by a failure to integrate information from multiple modalities when engaging in causal reasoning. By age 6, children and adults integrate auditory information with other knowledge to reason about how musical sounds were generated, and thereby link musical sounds with the agents, contexts, and events that caused them.
Affiliation(s)
- Minju Kim
- Department of Psychology, University of California, San Diego, California, USA
- Teaching and Learning Commons, University of California, San Diego, California, USA
- Adena Schachner
- Department of Psychology, University of California, San Diego, California, USA
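The Bayesian causal inference invoked here can be made concrete with a toy model: compare the likelihood of the heard note timing under a mechanical cause (an object rolling across the visible bar layout) against an animate agent, then apply Bayes' rule. A minimal sketch, with illustrative distributions and parameters of our own choosing rather than the authors' model:

```python
import numpy as np

def p_agent(iois, bars_evenly_spaced, prior_agent=0.5):
    """Posterior probability that an unobserved agent produced the notes.
    iois: inter-onset intervals (s). A mechanical cause predicts near-uniform
    IOIs when the bars are evenly spaced (tight Gaussian around the mean IOI),
    looser timing otherwise; an agent can produce any timing (uniform density
    over 0.1-1.0 s per note). All distributions and values are assumptions."""
    sd = 0.04 if bars_evenly_spaced else 0.25
    mu = iois.mean()
    lik_mech = np.prod(np.exp(-((iois - mu) ** 2) / (2 * sd**2)) / (sd * np.sqrt(2 * np.pi)))
    lik_agent = (1.0 / 0.9) ** len(iois)  # uniform density over 0.1-1.0 s per note
    return lik_agent * prior_agent / (lik_agent * prior_agent + lik_mech * (1 - prior_agent))

uneven = np.array([0.30, 0.62, 0.21, 0.55])
even = np.array([0.40, 0.41, 0.39, 0.40])
print(p_agent(uneven, bars_evenly_spaced=True))  # ~1.0: timing demands a self-propelled agent
print(p_agent(even, bars_evenly_spaced=True))    # low: a rolling object suffices
```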
2
Yuan Z, Ransbeeck WV, Wiggins GA, Botteldooren D. A Dynamic Systems Approach to Modeling Human-Machine Rhythm Interaction. IEEE Trans Cybern 2025; 55:2052-2064. [PMID: 40131747 DOI: 10.1109/tcyb.2025.3547216]
Abstract
Rhythm is an inherent aspect of human behavior, present from infancy and embedded in cultural practices. At the core of rhythm perception lies meter anticipation, a spontaneous process in the human brain that typically occurs before the actual beats. This anticipation can be framed as a time series prediction problem. However, although many models have been developed for time series prediction, most prioritize accuracy over biological realism, in contrast to the natural imprecision of human internal clocks and embodied behavior. Neuroscientific evidence, such as infants' natural meter synchronization, underscores the need for biologically plausible models. We therefore propose a neuron oscillator-based dynamic system that simulates human behavior during meter perception. The model introduces two tunable parameters for local and global adjustments, fine-tuning the oscillation combinations to emulate human-like rhythmic behavior. Experiments conducted under three common human-machine interaction scenarios demonstrate that the proposed model exhibits human-like reactions. Additionally, experiments involving human-machine and interhuman interactions show that the model successfully replicates real-world rhythmic behavior, advancing toward more natural and synchronized human-machine rhythm interaction.
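A minimal stand-in for the model's two tunable parameters (local and global adjustment) is a linear phase-and-period correction loop; the sketch below is illustrative, not the paper's oscillator, and the parameter names and values are assumptions:

```python
import numpy as np

def anticipated_onsets(onsets, period0, alpha=0.5, beta=0.1):
    """Predict each upcoming beat from the previous ones.
    alpha: local (phase) correction; beta: global (period/tempo) correction.
    Returns the model's anticipated time for each onset after the first."""
    period, pred = period0, onsets[0] + period0
    preds = []
    for onset in onsets[1:]:
        preds.append(pred)
        err = pred - onset                   # positive: the model ran late
        period -= beta * err                 # global tempo adjustment
        pred = pred - alpha * err + period   # local adjustment, then next cycle
    return np.array(preds)

# Isochronous beats that speed up halfway; anticipation errors re-converge
# to zero after the tempo change.
beats = np.concatenate([np.arange(0, 5, 0.5), 5 + np.arange(0, 5, 0.4)])
print(np.round(anticipated_onsets(beats, period0=0.5) - beats[1:], 3))
```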
3
Mondok C, Wiener M. A coupled oscillator model predicts the effect of neuromodulation and a novel human tempo-matching bias. J Neurophysiol 2025; 133:1607-1617. [PMID: 40298211 DOI: 10.1152/jn.00348.2024]
Abstract
Humans are known to exhibit endogenous neural oscillations in response to rhythmic stimuli that are phase-locked and frequency-matched to those stimuli, a process known as entrainment. Yet whether entrainment, as measured by electrophysiological recordings, reflects actual processing of rhythms or merely the periodic nature of the stimulus is debated. Prior evidence for entrainment as a perceptual phenomenon comes from studies requiring subjects to listen to, sequentially compare, or detect features in rhythmic stimuli. However, one paradigm not yet used is one in which subjects must listen to two simultaneous rhythms at different frequencies and adjust them to match. Here, human participants performed this task during EEG recordings (experiment 1), demonstrating spectral peaks at both tempo frequencies at frontocentral electrodes that shifted into alignment over the course of each trial. Behaviorally, participants tended to anchor the matched tempo to the starting comparison frequency, such that they underestimated the tempo for slower initial conditions and overestimated it for faster initial conditions. A model of phase-coupled oscillators, in which both tempos were pulled toward one another, replicated both effects. This model further predicted that enhancing the coupling strength of the constant-tempo oscillator could reduce both bias effects. To test this, a second group of subjects performed the task while receiving 2 Hz transcranial alternating current stimulation (tACS) over the frontocentral region. Consistent with model predictions, tACS attenuated both behavioral effects, particularly for initially slower conditions. These results support entrainment as an endogenous process that mediates beat perception.

NEW & NOTEWORTHY: This work proposes how humans perceive the difference between two simultaneously presented tempos and bring them into perceived synchrony. EEG data provide evidence of entrainment to both tempos that move into alignment, and transcranial alternating current stimulation (tACS) data provide causal evidence that strengthening one tempo improves performance.
Affiliation(s)
- Chloe Mondok
- Department of Psychology, George Mason University, Fairfax, Virginia, United States
- Martin Wiener
- Department of Psychology, George Mason University, Fairfax, Virginia, United States
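The anchoring bias falls naturally out of mutual coupling. A reduced, frequency-only caricature of the phase-coupled oscillator model (coupling constants are assumptions, not fitted values):

```python
def settle(f_adj, f_const, k_adj=1.0, k_const=0.3, dt=0.01, steps=5000):
    """Each tempo representation is pulled toward the other at its own rate.
    k_adj: pull on the adjustable tempo; k_const: residual pull on the
    'constant' tempo (the source of the anchoring bias). The pair settles
    at the weighted mean (k_const*f_adj + k_adj*f_const) / (k_adj + k_const),
    so the match is biased toward the starting tempo whenever k_const > 0."""
    f1, f2 = f_adj, f_const
    for _ in range(steps):
        f1, f2 = f1 + k_adj * (f2 - f1) * dt, f2 + k_const * (f1 - f2) * dt
    return f1

print(settle(1.5, 2.0))  # starts slow: settles below 2 Hz (underestimate)
print(settle(2.5, 2.0))  # starts fast: settles above 2 Hz (overestimate)
# Raising k_adj relative to k_const shrinks both biases, paralleling the
# model's prediction for strengthening the constant-tempo oscillator.
```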
4
Nave KM, Hannon EE, Snyder JS. Registered Report: Replication and Extension of Nozaradan et al. (2011). bioRxiv 2025:2025.03.13.643168. [PMID: 40166247 PMCID: PMC11956986 DOI: 10.1101/2025.03.13.643168]
Abstract
Cognitive neuroscience research has attempted to disentangle stimulus-driven processing from conscious perceptual processing for decades. Some prior evidence for neural processing of perceived musical beat (periodic pulse) may be confounded by stimulus-driven neural activity. However, one study used frequency tagging, which measures electrical brain activity at frequencies present in a stimulus, to show increased brain activity at imagery-related frequencies when listeners imagined a metrical pattern while listening to an isochronous auditory stimulus (Nozaradan et al., 2011), in a manner that controlled for stimulus factors. It is unclear, though, whether this represents repeatable evidence for conscious perception of beat and whether the effect is influenced by relevant music experience, such as music and dance training. This registered report details the results of 13 independent conceptual replications of Nozaradan et al. (2011), all using the same vetted protocol. Listeners performed the same imagery tasks as in Nozaradan et al. (2011), with the addition of a behavioral task on each trial to measure conscious perception. Meta-analyses examined the effect of imagery condition, revealing smaller raw effect sizes (binary: 0.03 μV; ternary: 0.03 μV) than in the original study (binary: 0.12 μV; ternary: 0.20 μV), with no moderating effects of music or dance training. The difference in estimated effect sizes (this study: n = 152, ηp² = .03-.04; 2011 study: n = 8, ηp² = .62-.76) suggests that large sample sizes may be required to reliably observe these effects, which challenges the use of frequency tagging as a method to study (neural correlates of) beat perception. Furthermore, a binary logistic regression on individual trials revealed that only neural activity at the stimulus frequency predicted performance on the imagery-related task; contrary to our hypothesis, neural activity at the imagery-related frequency was not a significant predictor. We discuss possible explanations for discrepancies between these findings and the original study, and the implications of the extensions provided by this registered report.
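Frequency tagging itself is straightforward to sketch: take the FFT of the (typically trial-averaged) EEG and compare the amplitude in the target bin against neighboring bins as a noise baseline. Illustrative only; bin choices and scaling are assumptions:

```python
import numpy as np

def tagged_amplitude(eeg, sfreq, freq, noise_bins=(2, 5)):
    """Frequency-tagging measure: FFT amplitude at a target frequency minus
    the mean amplitude of neighboring bins, the usual noise correction for
    steady-state evoked responses. eeg: (..., n_times) averaged epochs."""
    n = eeg.shape[-1]
    amp = np.abs(np.fft.rfft(eeg, axis=-1)) * 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    i = int(np.argmin(np.abs(freqs - freq)))
    lo, hi = noise_bins
    idx = np.r_[i - hi:i - lo + 1, i + lo:i + hi + 1]
    return amp[..., i] - amp[..., idx].mean(axis=-1)

# Nozaradan-style analysis: stimulus tagged at 2.4 Hz; imagined binary meter
# tags 1.2 Hz and ternary meter 0.8 Hz (frequencies of the original paradigm).
```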
5
Quiroga-Martinez DR, Rubio GF, Bonetti L, Achyutuni KG, Tzovara A, Knight RT, Vuust P. Decoding reveals the neural representation of perceived and imagined musical sounds. PLoS Biol 2024; 22:e3002858. [PMID: 39432519 PMCID: PMC11527242 DOI: 10.1371/journal.pbio.3002858]
Abstract
Vividly imagining a song or a melody is a skill that many people accomplish with relatively little effort. However, we are only beginning to understand how the brain represents, holds, and manipulates these musical "thoughts." Here, we decoded perceived and imagined melodies from magnetoencephalography (MEG) brain data (N = 71) to characterize their neural representation. We found that, during perception, auditory regions represent the sensory properties of individual sounds. In contrast, a widespread network including fronto-parietal cortex, hippocampus, basal nuclei, and sensorimotor regions holds the melody as an abstract unit during both perception and imagination. Furthermore, the mental manipulation of a melody systematically changes its neural representation, reflecting volitional control of auditory images. Our work sheds light on the nature and dynamics of auditory representations, informing future research on neural decoding of auditory imagination.
Affiliation(s)
- David R. Quiroga-Martinez
- Helen Wills Neuroscience Institute & Department of Psychology and Neuroscience, University of California Berkeley, Berkeley, California, United States of America
- Psychology Department, University of Copenhagen, Copenhagen, Denmark
- Gemma Fernández Rubio
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
- Center for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, United Kingdom
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Kriti G. Achyutuni
- Helen Wills Neuroscience Institute & Department of Psychology and Neuroscience, University of California Berkeley, Berkeley, California, United States of America
- Athina Tzovara
- Helen Wills Neuroscience Institute & Department of Psychology and Neuroscience, University of California Berkeley, Berkeley, California, United States of America
- Institute of Computer Science, University of Bern, Bern, Switzerland
- Center for Experimental Neurology, Sleep Wake Epilepsy Center, NeuroTec, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Robert T. Knight
- Helen Wills Neuroscience Institute & Department of Psychology and Neuroscience, University of California Berkeley, Berkeley, California, United States of America
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus, Denmark
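A generic version of such decoding is a time-resolved classifier over sensors; this sketch uses a plain scikit-learn pipeline and is not the authors' exact analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def decode_over_time(X, y, cv=5):
    """Time-resolved decoding of melody identity from sensor data.
    X: (n_trials, n_sensors, n_times); y: melody labels per trial.
    Returns cross-validated accuracy at each time point."""
    scores = []
    for t in range(X.shape[-1]):
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        scores.append(cross_val_score(clf, X[:, :, t], y, cv=cv).mean())
    return np.array(scores)
```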
6
Chang A, Teng X, Assaneo MF, Poeppel D. The human auditory system uses amplitude modulation to distinguish music from speech. PLoS Biol 2024; 22:e3002631. [PMID: 38805517 PMCID: PMC11132470 DOI: 10.1371/journal.pbio.3002631]
Abstract
Music and speech are complex and distinct auditory signals that are both foundational to the human experience. The mechanisms underpinning each domain are widely investigated. However, what perceptual mechanism transforms a sound into music or speech, and what basic acoustic information is required to distinguish between them, remain open questions. Here, we hypothesized that a sound's amplitude modulation (AM), an essential temporal acoustic feature driving the auditory system across processing levels, is critical for distinguishing music and speech. Specifically, in contrast to paradigms using naturalistic acoustic signals (which can be challenging to interpret), we used a noise-probing approach to untangle the auditory mechanism: if AM rate and regularity are critical for perceptually distinguishing music and speech, judgments of artificially noise-synthesized, ambiguous audio signals should align with their AM parameters. Across 4 experiments (N = 335), signals with a higher peak AM frequency tended to be judged as speech, and those with a lower peak AM frequency as music. Interestingly, this principle was consistently used by all listeners for speech judgments, but only by musically sophisticated listeners for music. In addition, signals with more regular AM were judged as music over speech, and this feature was more critical for music judgment, regardless of musical sophistication. The data suggest that the auditory system can rely on an acoustic property as low-level as AM to distinguish music from speech, a simple principle that invites both neurophysiological and evolutionary experiments and speculation.
Affiliation(s)
- Andrew Chang
- Department of Psychology, New York University, New York, New York, United States of America
- Xiangbin Teng
- Department of Psychology, Chinese University of Hong Kong, Hong Kong SAR, China
- M. Florencia Assaneo
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Juriquilla, Querétaro, México
- David Poeppel
- Department of Psychology, New York University, New York, New York, United States of America
- Ernst Struengmann Institute for Neuroscience, Frankfurt am Main, Germany
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, United States of America
- Music and Audio Research Lab (MARL), New York University, New York, New York, United States of America
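The AM feature at the center of this study can be approximated in a few lines: extract the temporal envelope and find its dominant modulation rate. A sketch under assumed parameters (band limits, spectral resolution):

```python
import numpy as np
from scipy.signal import hilbert, welch

def am_peak_frequency(audio, sfreq, fmin=0.5, fmax=20.0):
    """Dominant amplitude-modulation rate of a sound: take the temporal
    envelope via the Hilbert transform, then locate the peak of its power
    spectrum between fmin and fmax. Per this study, a higher peak AM
    frequency biases judgments toward speech, a lower one toward music
    (speech envelopes typically peak around 4-5 Hz)."""
    env = np.abs(hilbert(audio))
    nper = min(len(env), int(4 * sfreq))  # ~0.25 Hz resolution when possible
    freqs, power = welch(env - env.mean(), fs=sfreq, nperseg=nper)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(power[band])]
```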
7
Pando-Naude V, Matthews TE, Højlund A, Jakobsen S, Østergaard K, Johnsen E, Garza-Villarreal EA, Witek MAG, Penhune V, Vuust P. Dopamine dysregulation in Parkinson's disease flattens the pleasurable urge to move to musical rhythms. Eur J Neurosci 2024; 59:101-118. [PMID: 37724707 DOI: 10.1111/ejn.16128]
Abstract
The pleasurable urge to move to music (PLUMM) activates motor and reward areas of the brain and is thought to be driven by predictive processes. Dopamine in motor and limbic networks is implicated in beat-based timing and music-induced pleasure, suggesting a central role of basal ganglia (BG) dopaminergic systems in PLUMM. This study tested this hypothesis by comparing PLUMM in participants with Parkinson's disease (PD), age-matched controls, and young controls. Participants listened to musical sequences with varying rhythmic and harmonic complexity (low, medium, and high), and rated their experienced pleasure and urge to move to the rhythm. In line with previous results, healthy younger participants showed an inverted U-shaped relationship between rhythmic complexity and ratings, with a preference for medium-complexity rhythms, while age-matched controls showed a similar, but weaker, inverted U-shaped response. Conversely, the PD group showed a significantly flattened response for both the urge to move and pleasure. Crucially, this flattened response could not be attributed to differences in rhythm discrimination and did not reflect an overall decrease in ratings. For harmonic complexity, the PD group showed a negative linear pattern for both the urge to move and pleasure, while healthy age-matched controls showed the same pattern for pleasure and an inverted U for the urge to move. This contrasts with the pattern observed in young healthy controls in previous studies, suggesting that both healthy aging and PD also influence affective responses to harmonic complexity. Together, these results support the role of dopamine within cortico-striatal circuits in the predictive processes that form the link between the perceptual processing of rhythmic patterns and the affective and motor responses to rhythmic music.
Affiliation(s)
- Victor Pando-Naude
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Tomas Edward Matthews
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Andreas Højlund
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark
- Department of Linguistics, Cognitive Science and Semiotics, School of Communication and Culture, Aarhus University, Aarhus, Denmark
- Sebastian Jakobsen
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark
- Department of Linguistics, Cognitive Science and Semiotics, School of Communication and Culture, Aarhus University, Aarhus, Denmark
- Karen Østergaard
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark
- Department of Neurology, Aarhus University Hospital, Aarhus, Denmark
- Sano Private Hospital, Aarhus, Denmark
- Erik Johnsen
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University Hospital, Aarhus, Denmark
- Department of Neurology, Aarhus University Hospital, Aarhus, Denmark
- Eduardo A Garza-Villarreal
- Instituto de Neurobiología, Universidad Nacional Autónoma de México (UNAM), Juriquilla, Querétaro, Mexico
- Maria A G Witek
- Department of Music, School of Languages, Cultures, Art History and Music, University of Birmingham, Birmingham, UK
- Virginia Penhune
- Department of Psychology, Concordia University, Montreal, Quebec, Canada
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
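The inverted-U versus flattened profiles can be quantified with a quadratic fit of ratings against complexity; this toy sketch is not the authors' mixed-model analysis:

```python
import numpy as np

def inverted_u(complexity, rating):
    """Fit rating = b0 + b1*c + b2*c^2. A negative b2 with an interior peak
    indicates an inverted U; b2 near zero indicates a flattened response."""
    b2, b1, b0 = np.polyfit(complexity, rating, deg=2)
    peak = -b1 / (2 * b2) if b2 != 0 else np.nan
    return {"quadratic": b2, "peak": peak}

# Toy data: complexity coded low=1, medium=2, high=3 (values invented).
c = np.array([1, 2, 3])
print(inverted_u(c, np.array([3.0, 4.6, 3.2])))  # control-like: b2 < 0, peak near 2
print(inverted_u(c, np.array([3.4, 3.5, 3.3])))  # PD-like: b2 near 0 (flattened)
```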
8
Meng J, Zhao Y, Wang K, Sun J, Yi W, Xu F, Xu M, Ming D. Rhythmic temporal prediction enhances neural representations of movement intention for brain-computer interface. J Neural Eng 2023; 20:066004. [PMID: 37875107 DOI: 10.1088/1741-2552/ad0650]
Abstract
Objective. Detecting movement intention is a typical use of brain-computer interfaces (BCI). However, as an endogenous electroencephalography (EEG) feature, the neural representation of movement is insufficient for improving motor-based BCI. This study aimed to develop a new movement-augmentation BCI encoding paradigm by incorporating the cognitive function of rhythmic temporal prediction, and to test the feasibility of this new paradigm in optimizing the detection of movement intention.

Methods. A visual-motion synchronization task was designed with two movement intentions (left vs. right) and three rhythmic temporal prediction conditions (1000 ms vs. 1500 ms vs. no temporal prediction). Behavioural and EEG data from 24 healthy participants were recorded. Event-related potentials (ERPs), event-related spectral perturbations induced by left- and right-finger movements, the common spatial pattern (CSP) with a support vector machine, and the Riemann tangent space algorithm with logistic regression were used and compared across the three temporal prediction conditions, to test the impact of temporal prediction on movement detection.

Results. Behavioural results showed significantly smaller deviation times for the 1000 ms and 1500 ms conditions. ERP analyses revealed that the 1000 ms and 1500 ms conditions led to rhythmic oscillations with a time lag in areas contralateral and ipsilateral to the movement. Compared with no temporal prediction, the 1000 ms condition exhibited greater beta event-related desynchronization (ERD) lateralization in the motor area (P < 0.001) and larger beta ERD in the frontal area (P < 0.001). The 1000 ms condition achieved an average left-right decoding accuracy of 89.71% using CSP and 97.30% using the Riemann tangent space, both significantly higher than with no temporal prediction. Moreover, movement and temporal information could be decoded simultaneously, achieving 88.51% four-class accuracy.

Significance. The results not only confirm the effectiveness of rhythmic temporal prediction in enhancing the detection ability of motor-based BCI, but also highlight the dual encoding of movement and temporal information within a single BCI paradigm, which is promising for expanding the range of intentions that can be decoded by the BCI.
Affiliation(s)
- Jiayuan Meng
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, People's Republic of China
- Yingru Zhao
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Kun Wang
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, People's Republic of China
- Jinsong Sun
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Weibo Yi
- Beijing Machine and Equipment Institute, Beijing, People's Republic of China
- Fangzhou Xu
- International School for Optoelectronic Engineering, Qilu University of Technology (Shandong Academy of Sciences), Jinan, People's Republic of China
- Minpeng Xu
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, People's Republic of China
- International School for Optoelectronic Engineering, Qilu University of Technology (Shandong Academy of Sciences), Jinan, People's Republic of China
- Dong Ming
- The Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, People's Republic of China
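The first of the paper's two reported decoding pipelines, CSP features with a linear SVM, is straightforward to sketch with MNE-Python and scikit-learn (preprocessing choices and hyperparameters here are assumptions):

```python
from mne.decoding import CSP
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def decode_left_right(epochs_data, labels, cv=5):
    """CSP + linear SVM left/right classification.
    epochs_data: (n_trials, n_channels, n_times) band-passed EEG;
    labels: 0 = left, 1 = right. Returns mean cross-validated accuracy."""
    clf = make_pipeline(CSP(n_components=4, log=True), SVC(kernel="linear"))
    return cross_val_score(clf, epochs_data, labels, cv=cv).mean()
```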
9
Torres NL, Castro SL, Silva S. Beat cues facilitate time estimation at longer intervals. Front Psychol 2023; 14:1130788. [PMID: 37842702 PMCID: PMC10576433 DOI: 10.3389/fpsyg.2023.1130788]
Abstract
Introduction. Time perception in humans can be relative (beat-based) or absolute (duration-based). Although the classic view in the field points to different neural substrates underlying beat-based vs. duration-based mechanisms, recent neuroimaging evidence has provided support for a unified model wherein these two systems overlap. In line with this, previous research demonstrated that internalized beat cues benefit motor reproduction of longer intervals (> 5.5 s) by reducing underestimation, but little is known about this effect in purely perceptual tasks. The present study was designed to investigate whether and how interval estimation is modulated by available beat cues.

Methods. To that end, we asked 155 participants to estimate auditory intervals ranging from 500 ms to 10 s, while manipulating the presence of cues before the interval, as well as the reinforcement of these cues by beat-related interference within the interval (vs. beat-unrelated and no interference).

Results. Beat cues aided time estimation depending on interval duration: for intervals longer than 5 s, estimation was better in the cue than in the no-cue condition. Specifically, levels of underestimation decreased in the presence of cues, indicating that beat cues had a facilitating effect on time perception very similar to the one observed previously for time production.

Discussion. Interference had no effects, suggesting that this manipulation was not effective. Our findings are consistent with the idea of cooperation between beat- and duration-based systems and suggest that this cooperation is quite similar across production and perception.
Affiliation(s)
- Nathércia L. Torres
- Speech Laboratory, Faculty of Psychology and Education Sciences, University of Porto, Porto, Portugal
10
Rosso M, Moens B, Leman M, Moumdjian L. Neural entrainment underpins sensorimotor synchronization to dynamic rhythmic stimuli. Neuroimage 2023; 277:120226. [PMID: 37321359 DOI: 10.1016/j.neuroimage.2023.120226]
Abstract
Neural entrainment, defined as the unidirectional synchronization of neural oscillations to an external rhythmic stimulus, is a topic of major interest in neuroscience. Despite broad scientific consensus on its existence, its pivotal role in sensory and motor processes, and its fundamental definition, empirical research struggles to quantify it with non-invasive electrophysiology. To date, broadly adopted state-of-the-art methods still fail to capture the dynamics underlying the phenomenon. Here, we present event-related frequency adjustment (ERFA) as a methodological framework to induce and measure neural entrainment in human participants, optimized for multivariate EEG datasets. By applying dynamic phase and tempo perturbations to isochronous auditory metronomes during a finger-tapping task, we analyzed adaptive changes in the instantaneous frequency of entrained oscillatory components during error correction. Spatial filter design allowed us to untangle, from the multivariate EEG signal, perceptual and sensorimotor oscillatory components attuned to the stimulation frequency. Both components dynamically adjusted their frequency in response to perturbations, tracking the stimulus dynamics by slowing down and speeding up the oscillation over time. Source separation revealed that sensorimotor processing enhanced the entrained response, supporting the notion that active engagement of the motor system plays a critical role in processing rhythmic stimuli. In the case of phase shifts, motor engagement was a necessary condition for observing any response, whereas sustained tempo changes induced frequency adjustment even in the perceptual oscillatory component. Although the magnitude of the perturbations was controlled across positive and negative directions, we observed a general bias in the frequency adjustments towards positive changes, which points to intrinsic dynamics constraining neural entrainment. We conclude that our findings provide compelling evidence for neural entrainment as a mechanism underlying overt sensorimotor synchronization, and that our methodology offers a paradigm and a measure for quantifying its oscillatory dynamics by means of non-invasive electrophysiology, rigorously informed by the fundamental definition of entrainment.
Affiliation(s)
- Mattia Rosso
- IPEM Institute for Systematic Musicology, Ghent University, Ghent, Belgium; Université de Lille, ULR 4072 - PSITEC - Psychologie: Interactions, Temps, Emotions, Cognition, Lille, France.
- Bart Moens
- IPEM Institute for Systematic Musicology, Ghent University, Ghent, Belgium
- Marc Leman
- IPEM Institute for Systematic Musicology, Ghent University, Ghent, Belgium
- Lousin Moumdjian
- IPEM Institute for Systematic Musicology, Ghent University, Ghent, Belgium; REVAL Rehabilitation Research Center, Faculty of Rehabilitation Sciences, Hasselt University, Hasselt, Belgium; UMSC Hasselt, Pelt, Belgium
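The core ERFA quantity, the instantaneous frequency of a component attuned to the stimulation rate, can be estimated from the Hilbert phase. A single-channel sketch (the paper derives components with spatial filters instead, and the band edges here are assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def instantaneous_frequency(x, sfreq, band=(1.5, 2.5)):
    """Band-pass a signal around the stimulation rate, Hilbert-transform it,
    and differentiate the unwrapped phase to get frequency in Hz over time.
    Phase/tempo perturbations of the metronome should appear as transient or
    sustained excursions of this trace during error correction."""
    b, a = butter(4, band, btype="bandpass", fs=sfreq)
    phase = np.unwrap(np.angle(hilbert(filtfilt(b, a, x))))
    return np.diff(phase) * sfreq / (2 * np.pi)  # length len(x) - 1
```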
11
Correa JP. Cross-Modal Musical Expectancy in Complex Sound Music: A Grounded Theory. J Cogn 2023; 6:33. [PMID: 37426063 PMCID: PMC10327858 DOI: 10.5334/joc.281]
Abstract
Expectancy is a core mechanism for constructing affective and cognitive experiences of music. However, research on musical expectations has been largely founded upon the perception of tonal music. Therefore, it remains to be determined how this mechanism explains the cognition of sound-based acoustic and electroacoustic music, such as complex sound music (CSM). Additionally, the dominant methodologies have consisted of well-controlled experimental designs with low ecological validity that have overlooked the listening experience as described by the listeners. This paper presents results concerning musical expectancy from a qualitative research project that investigated the listening experiences of 15 participants accustomed to CSM listening. Corbin and Strauss' (2015) grounded theory was used to triangulate data from interviews along with musical analyses of the pieces chosen by the participants to describe their listening experiences. Cross-modal musical expectancy (CMME) emerged from the data as a subcategory that explained prediction through the interaction of multimodal elements beyond just the acoustic properties of music. The results led to the hypothesis that multimodal information coming from sounds, performance gestures, and indexical, iconic, and conceptual associations re-enacts cross-modal schemata and episodic memories in which real and imagined sounds, objects, actions, and narratives interrelate to give rise to CMME processes. This construct emphasises the effect of CSM's subversive acoustic features and performance practices on the listening experience. Further, it reveals the multiplicity of factors involved in musical expectancy, such as cultural values, subjective musical and non-musical experiences, music structure, listening situation, and psychological mechanisms. Following these ideas, CMME is conceived as a grounded cognition process.
12
Meng J, Zhao Y, Wang H, Sun J, Xu M, Ming D. Temporal prediction changes motor-related EEG phase synchronization and network centrality in alpha and beta band. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083725 DOI: 10.1109/embc40787.2023.10340297]
Abstract
Much neurophysiological evidence has revealed that the motor system is involved in temporal prediction. However, it remains unknown how temporal prediction influences motor-related neural representations, and more neural evidence is needed to better understand how temporal prediction influences the motor system. This study designed a rhythmic finger-tap task with three temporal prediction conditions: 1000 ms temporal prediction, 1500 ms temporal prediction, and no temporal prediction. Behavioral and EEG data from 24 healthy subjects were recorded. The weighted phase lag index was calculated to measure the degree of phase synchronization. Eigenvector centrality and betweenness centrality were used to measure brain connectivity. Behavioral results showed that tap-visual asynchronies decreased when temporal prediction was present. Phase synchronization results showed that, compared to no temporal prediction, alpha-band phase synchronization between the frontal and central areas was reduced under 1000 ms temporal prediction, and beta-band phase synchronization between the frontal and parietal areas was decreased under 1500 ms temporal prediction. As for brain connectivity, compared to the no-temporal-prediction condition, the eigenvector centrality of the left frontal area under 1500 ms temporal prediction was decreased in the alpha band, and the betweenness centrality of the right temporal area under 1000 ms temporal prediction was reduced in the alpha band. These results provide new neural evidence for a better understanding of temporal prediction and motor interactions.
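Both measures named here are easy to sketch: the weighted phase lag index from band-limited analytic signals, and graph centralities over the resulting connectivity matrix (band edges, filter order, and graph conventions are assumptions):

```python
import numpy as np
import networkx as nx
from scipy.signal import butter, filtfilt, hilbert

def wpli(x, y, sfreq, band):
    """Weighted phase lag index between two channels across epochs.
    x, y: (n_epochs, n_times). wPLI = |E[Im(Sxy)]| / E[|Im(Sxy)|], where
    Sxy is the cross-spectrum of the band-limited analytic signals."""
    b, a = butter(4, band, btype="bandpass", fs=sfreq)
    ax = hilbert(filtfilt(b, a, x, axis=-1), axis=-1)
    ay = hilbert(filtfilt(b, a, y, axis=-1), axis=-1)
    im = np.imag(ax * np.conj(ay))
    return np.abs(im.mean()) / (np.abs(im).mean() + 1e-12)

def centralities(wpli_matrix):
    """Eigenvector and betweenness centrality over the wPLI matrix.
    Note: betweenness treats weights as path costs; connectivity analyses
    often invert weights first (a convention, not from the paper)."""
    g = nx.from_numpy_array(wpli_matrix)
    return (nx.eigenvector_centrality_numpy(g, weight="weight"),
            nx.betweenness_centrality(g, weight="weight"))
```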
13
Luo L, Lu L. Studying rhythm processing in speech through the lens of auditory-motor synchronization. Front Neurosci 2023; 17:1146298. [PMID: 36937684 PMCID: PMC10017839 DOI: 10.3389/fnins.2023.1146298]
Abstract
Continuous speech is organized into a hierarchy of rhythms. Accurate processing of this rhythmic hierarchy through the interactions of auditory and motor systems is fundamental to speech perception and production. In this mini-review, we aim to evaluate the implementation of behavioral auditory-motor synchronization paradigms when studying rhythm processing in speech. First, we present an overview of the classic finger-tapping paradigm and its application in revealing differences in auditory-motor synchronization between the typical and clinical populations. Next, we highlight key findings on rhythm hierarchy processing in speech and non-speech stimuli from finger-tapping studies. Following this, we discuss the potential caveats of the finger-tapping paradigm and propose the speech-speech synchronization (SSS) task as a promising tool for future studies. Overall, we seek to raise interest in developing new methods to shed light on the neural mechanisms of speech processing.
Affiliation(s)
- Lu Luo
- School of Psychology, Beijing Sport University, Beijing, China
- Laboratory of Sports Stress and Adaptation of General Administration of Sport, Beijing, China
- Lingxi Lu
- Center for the Cognitive Science of Language, Beijing Language and Culture University, Beijing, China
14
Chen WG, Iversen JR, Kao MH, Loui P, Patel AD, Zatorre RJ, Edwards E. Music and Brain Circuitry: Strategies for Strengthening Evidence-Based Research for Music-Based Interventions. J Neurosci 2022; 42:8498-8507. [PMID: 36351825 PMCID: PMC9665917 DOI: 10.1523/jneurosci.1135-22.2022]
Abstract
The neuroscience of music and music-based interventions (MBIs) is a fascinating but challenging research field. While music is a ubiquitous component of every human society, MBIs may encompass listening to music, performing music, music-based movement, undergoing music education and training, or receiving treatment from music therapists. Unraveling the brain circuits activated and influenced by MBIs may help us gain a better understanding of the therapeutic and educational value of MBIs by gathering strong research evidence. However, the complexity and variety of MBIs impose unique research challenges. This article reviews the recent endeavor led by the National Institutes of Health to support evidence-based research on MBIs and their impact on health and disease. It also highlights fundamental challenges and strategies of MBI research, with emphasis on the utilization of animal models, human brain imaging and stimulation technologies, behavior and motion capture tools, and computational approaches. It concludes with suggested basic requirements for studying MBIs and promising future directions to further strengthen evidence-based research on MBIs in connection with brain circuitry.

SIGNIFICANCE STATEMENT: Music and music-based interventions (MBIs) engage a wide range of brain circuits and hold promising therapeutic potential for a variety of health conditions. Comparative studies using animal models have helped uncover brain circuit activities involved in rhythm perception, while human imaging, brain stimulation, and motion capture technologies have enabled analysis of the neural circuits underlying the effects of MBIs on motor, affective/reward, and cognitive function. Combining computational analyses, such as prediction methods, with mechanistic studies in animal models and humans may unravel the complexity of MBIs and their effects on health and disease.
Affiliation(s)
- Wen Grace Chen
- Division of Extramural Research, National Center for Complementary and Integrative Health, National Institutes of Health, Bethesda, Maryland, 20892
- Mimi H Kao
- Tufts University, Medford, Massachusetts 02155
- Psyche Loui
- Northeastern University, Boston, Massachusetts 02115
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A2B4, Canada
- Emmeline Edwards
- Division of Extramural Research, National Center for Complementary and Integrative Health, National Institutes of Health, Bethesda, Maryland, 20892