1
Pazdera JK, Trainor LJ. Pitch biases sensorimotor synchronization to auditory rhythms. Sci Rep 2025;15:17012. PMID: 40379668. DOI: 10.1038/s41598-025-00827-4.
Abstract
Current models of rhythm perception propose that humans track musical beats using the phase, period, and amplitude of sound patterns. However, a growing body of evidence suggests that pitch can also influence the perceived timing of auditory signals. In the present study, we conducted two experiments to investigate whether pitch affects the phase and period of sensorimotor synchronization. To do so, we asked participants to synchronize with a repeating tone, whose pitch on each trial was drawn from one of six different octaves (110-3520 Hz). In Experiment 1, we observed U-shaped patterns in both mean asynchrony and continuation tapping rates, with participants tapping latest and slowest when synchronizing to low and extremely high (above 2000 Hz) pitches, and tapping earliest and fastest to moderately high pitches. In Experiment 2, we found that extremely high pitches still produced slower timing than moderately high pitches when participants were exposed to an exclusively high-pitched context. Based on our results, we advocate for the incorporation of pitch into models of rhythm perception and discuss possible origins of these effects.
Affiliation(s)
- Jesse K Pazdera
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
- Laurel J Trainor
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
- McMaster Institute for Music and the Mind, McMaster University, Hamilton, ON, Canada
- Rotman Research Institute, Baycrest Hospital, Toronto, ON, Canada
2
Seeberg AB, Matthews TE, Højlund A, Vuust P, Petersen B. Beyond syncopation: The number of rhythmic layers shapes the pleasurable urge to move to music. Cognition 2025;262:106178. PMID: 40373735. DOI: 10.1016/j.cognition.2025.106178.
Abstract
People experience the strongest pleasurable urge to move to music (PLUMM) with rhythms of medium complexity, showing an inverted U-shaped relationship. Rhythmic complexity is typically defined by syncopation but likely interacts with the number and instrumentation of rhythmic layers (e.g., snare only vs snare and bass drum) in affecting PLUMM. This study investigated this interaction by comparing PLUMM ratings of rhythms with varying numbers of rhythmic layers and degrees of syncopation. Two online studies (Study I, n = 108; Study II, n = 46) asked participants to rate how much they wanted to move and the pleasure they felt while listening to rhythms. Each study used 12 rhythms in four versions: 1) snare only (SN) in Study I and bass drum only (BD) in Study II; 2) snare and hi-hat (SN + HH) in Study I and bass drum and hi-hat (BD + HH) in Study II; 3) snare and bass drum (SN + BD); and 4) the original with snare, bass drum, and hi-hat (SN + BD + HH) in both studies, totaling 48 stimuli per study. We tested for linear and quadratic effects of syncopation and rhythmic layers on PLUMM ratings. Study I showed a significant interaction between syncopation and rhythmic layers: the SN + BD + HH versions exhibited the strongest inverted U as an effect of syncopation, followed by SN + BD and SN + HH, while SN showed a near-flat pattern of ratings. Study II had similar findings, but differences between versions were smaller, and the interaction was mainly driven by differences between BD and BD + HH and between SN + BD and SN + BD + HH, especially at moderate syncopation levels. These findings suggest that the PLUMM response is shaped by the number of rhythmic layers, the roles of the different instruments, and the way they interact with each other and with syncopation, thus extending our understanding of the rhythmic features that drive motor and hedonic responses to music.
Affiliation(s)
- Alberte B Seeberg
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Tomas E Matthews
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Andreas Højlund
- Department of Linguistics and Cognitive Science, Aarhus University, Aarhus, Denmark
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Bjørn Petersen
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
3
Bürgel M, Mares D, Siedenburg K. Enhanced salience of edge frequencies in auditory pattern recognition. Atten Percept Psychophys 2024;86:2811-2820. PMID: 39461935. DOI: 10.3758/s13414-024-02971-x.
Abstract
Within musical scenes or textures, sounds from certain instruments capture attention more prominently than others, hinting at biases in the perception of multisource mixtures. Besides musical factors, these effects might be related to frequency biases in auditory perception. Using an auditory pattern-recognition task, we studied the existence of such frequency biases. Mixtures of pure tone melodies were presented in six frequency bands. Listeners were instructed to assess whether the target melody was part of the mixture or not, with the target melody presented either before or after the mixture. In Experiment 1, the mixture always contained melodies in five out of the six bands. In Experiment 2, the mixture contained three bands that stemmed from the lower or the higher part of the range. As expected, Experiments 1 and 2 both highlighted strong effects of presentation order, with higher accuracies for the target presented before the mixture. Notably, Experiment 1 showed that edge frequencies yielded superior accuracies compared with center frequencies. Experiment 2 corroborated this finding by yielding enhanced accuracies for edge frequencies irrespective of the absolute frequency region. Our results highlight the salience of sound elements located at spectral edges within complex musical scenes. Overall, this implies that neither the high voice superiority effect nor the insensitivity to bass instruments observed by previous research can be explained by absolute frequency biases in auditory perception.
Affiliation(s)
- Michel Bürgel
- Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, 26129 Oldenburg, Germany
- Diana Mares
- Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, 26129 Oldenburg, Germany
- Kai Siedenburg
- Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, 26129 Oldenburg, Germany
- Signal Processing and Speech Communication Laboratory, Graz University of Technology, 8010 Graz, Austria
4
Cirelli LK, Talukder LS, Kragness HE. Infant attention to rhythmic audiovisual synchrony is modulated by stimulus properties. Front Psychol 2024;15:1393295. PMID: 39027053. PMCID: PMC11256966. DOI: 10.3389/fpsyg.2024.1393295.
Abstract
Musical interactions are a common and multimodal part of an infant's daily experience. Infants hear their parents sing while watching their lips move and see their older siblings dance along to music playing over the radio. Here, we explore whether 8- to 12-month-old infants associate musical rhythms they hear with synchronous visual displays by tracking their dynamic visual attention to matched and mismatched displays. Visual attention was measured using eye-tracking while infants attended to a screen displaying two videos of a finger tapping at different speeds. These videos were presented side by side while infants listened to an auditory rhythm (high or low pitch) synchronized with one of the two videos. Infants attended more to the low-pitch trials than to the high-pitch trials but did not display a preference for the synchronous hand over the asynchronous hand within trials. Exploratory evidence, however, suggests that tempo, pitch, and rhythmic complexity interactively engage infants' visual attention to a tapping hand, especially when that hand is aligned with the auditory stimulus. For example, when the rhythm was complex and the auditory stimulus was low in pitch, infants attended more to the fast hand when it was aligned with the auditory stream than when it was misaligned. These results suggest that audiovisual integration in rhythmic non-speech contexts is influenced by stimulus properties.
Affiliation(s)
- Laura K. Cirelli
- Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
- Labeeb S. Talukder
- Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
- Haley E. Kragness
- Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
- Psychology Department, Bucknell University, Lewisburg, PA, United States
5
Black T, Jenkins BW, Laprairie RB, Howland JG. Therapeutic potential of gamma entrainment using sensory stimulation for cognitive symptoms associated with schizophrenia. Neurosci Biobehav Rev 2024;161:105681. PMID: 38641090. DOI: 10.1016/j.neubiorev.2024.105681.
Abstract
Schizophrenia is a complex neuropsychiatric disorder with significant morbidity. Treatment options that address the spectrum of symptoms are limited, highlighting the need for innovative therapeutic approaches. Gamma Entrainment Using Sensory Stimulation (GENUS) is an emerging treatment for neuropsychiatric disorders that uses sensory stimulation to entrain impaired oscillatory network activity and restore brain function. Aberrant oscillatory activity often underlies the symptoms experienced by patients with schizophrenia. We propose that GENUS has therapeutic potential for schizophrenia. This paper reviews the current status of schizophrenia treatment and explores the use of sensory stimulation as an adjunctive treatment, specifically through gamma entrainment. Impaired gamma frequency entrainment is observed in patients, particularly in response to auditory and visual stimuli. Thus, sensory stimulation, such as music listening, may have therapeutic potential for individuals with schizophrenia. GENUS holds novel therapeutic potential to improve the lives of individuals with schizophrenia, but further research is required to determine the efficacy of GENUS, optimize its delivery and therapeutic window, and develop strategies for its implementation in specific patient populations.
Affiliation(s)
- Tallan Black
- College of Pharmacy and Nutrition, University of Saskatchewan, Saskatoon, SK, Canada
- Bryan W Jenkins
- Division of Behavioral Biology, Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Robert B Laprairie
- College of Pharmacy and Nutrition, University of Saskatchewan, Saskatoon, SK, Canada
- Department of Pharmacology, College of Medicine, Dalhousie University, Halifax, NS, Canada
- John G Howland
- Department of Anatomy, Physiology, and Pharmacology, College of Medicine, University of Saskatchewan, Saskatoon, SK, Canada
6
Etani T, Miura A, Kawase S, Fujii S, Keller PE, Vuust P, Kudo K. A review of psychological and neuroscientific research on musical groove. Neurosci Biobehav Rev 2024;158:105522. PMID: 38141692. DOI: 10.1016/j.neubiorev.2023.105522.
Abstract
When listening to music, we naturally move our bodies rhythmically to the beat, which can be pleasurable and difficult to resist. This pleasurable sensation of wanting to move the body to music has been called "groove." Following pioneering humanities research, psychological and neuroscientific studies have provided insights into the associated musical features, behavioral responses, phenomenological aspects, and brain structural and functional correlates of the groove experience. Groove research has advanced the field of music science and, more generally, informed our understanding of bidirectional links between perception and action and of the role of the motor system in prediction. Activity in motor and reward-related brain networks during music listening is associated with the groove experience, and this neural activity is linked to temporal prediction and learning. This article reviews research on groove as a psychological phenomenon with neurophysiological correlates that link musical rhythm perception, sensorimotor prediction, and reward processing. Promising future research directions range from elucidating specific neural mechanisms to exploring clinical applications and socio-cultural implications of groove.
Affiliation(s)
- Takahide Etani
- School of Medicine, College of Medical, Pharmaceutical, and Health, Kanazawa University, Kanazawa, Japan
- Graduate School of Media and Governance, Keio University, Fujisawa, Japan
- Advanced Research Center for Human Sciences, Waseda University, Tokorozawa, Japan
- Akito Miura
- Faculty of Human Sciences, Waseda University, Tokorozawa, Japan
- Satoshi Kawase
- The Faculty of Psychology, Kobe Gakuin University, Kobe, Japan
- Shinya Fujii
- Faculty of Environment and Information Studies, Keio University, Fujisawa, Japan
- Peter E Keller
- Center for Music in the Brain, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia
- Peter Vuust
- Center for Music in the Brain, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Kazutoshi Kudo
- Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
7
Yang H, Cai B, Tan W, Luo L, Zhang Z. Pitch Improvement in Attentional Blink: A Study across Audiovisual Asymmetries. Behav Sci (Basel) 2024;14:145. PMID: 38392498. PMCID: PMC10885858. DOI: 10.3390/bs14020145.
Abstract
Attentional blink (AB) is a phenomenon in which the perception of a second target is impaired when it appears within 200-500 ms after the first target. Sound affects the AB and is accompanied by an asymmetry during audiovisual integration, but it is not known whether this is related to the tonal representation of sound. The aim of the present study was to investigate the effect of audiovisual asymmetry on the attentional blink and whether the presentation of pitch improves the ability to detect a target during an AB accompanied by audiovisual asymmetry. The results showed that as the lag increased, participants' target recognition improved, and pitch produced further improvements. These improvements exhibited a significant asymmetry across the audiovisual channel. Our findings could contribute to better utilization of audiovisual integration resources to counteract attentional lapses and declines in auditory recognition, which could be useful in areas such as driving and education.
Affiliation(s)
- Haoping Yang
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
- Suzhou Cognitive Psychology Co-Operative Society, Soochow University, Suzhou 215021, China
- Biye Cai
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
- Wenjie Tan
- Suzhou Cognitive Psychology Co-Operative Society, Soochow University, Suzhou 215021, China
- Department of Physical Education, South China University of Technology, Guangzhou 518100, China
- Li Luo
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
- Zonghao Zhang
- School of Physical Education and Sports Science, Soochow University, Suzhou 215021, China
8
Bowling DL. Biological principles for music and mental health. Transl Psychiatry 2023;13:374. PMID: 38049408. PMCID: PMC10695969. DOI: 10.1038/s41398-023-02671-4.
Abstract
Efforts to integrate music into healthcare systems and wellness practices are accelerating, but the biological foundations supporting these initiatives remain underappreciated. As a result, music-based interventions are often sidelined in medicine. Here, I bring together advances in music research from neuroscience, psychology, and psychiatry to bridge music's specific foundations in human biology with its specific therapeutic applications. The framework I propose organizes the neurophysiological effects of music around four core elements of human musicality: tonality, rhythm, reward, and sociality. For each, I review key concepts, biological bases, and evidence of clinical benefits. Within this framework, I outline a strategy to increase music's impact on health based on standardizing treatments and aligning them with individual differences in responsivity to these musical elements. I propose that an integrated biological understanding of human musicality, describing each element's functional origins, development, phylogeny, and neural bases, is critical to advancing rational applications of music in mental health and wellness.
Affiliation(s)
- Daniel L Bowling
- Department of Psychiatry and Behavioral Sciences, Stanford University, School of Medicine, Stanford, CA, USA
- Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, School of Humanities and Sciences, Stanford, CA, USA
9
Lenc T, Peter V, Hooper C, Keller PE, Burnham D, Nozaradan S. Infants show enhanced neural responses to musical meter frequencies beyond low-level features. Dev Sci 2023;26:e13353. PMID: 36415027. DOI: 10.1111/desc.13353.
Abstract
Music listening often entails spontaneous perception and body movement to a periodic pulse-like meter. There is increasing evidence that this cross-cultural ability relates to neural processes that selectively enhance metric periodicities, even when these periodicities are not prominent in the acoustic stimulus. However, whether these neural processes emerge early in development remains largely unknown. Here, we recorded the electroencephalogram (EEG) of 20 healthy 5- to 6-month-old infants, while they were exposed to two rhythms known to induce the perception of meter consistently across Western adults. One rhythm contained prominent acoustic periodicities corresponding to the meter, whereas the other rhythm did not. Infants showed significantly enhanced representations of meter periodicities in their EEG responses to both rhythms. This effect is unlikely to reflect the tracking of salient acoustic features in the stimulus, as it was observed irrespective of the prominence of meter periodicities in the audio signals. Moreover, as previously observed in adults, the neural enhancement of meter was greater when the rhythm was delivered by low-pitched sounds. Together, these findings indicate that the endogenous enhancement of metric periodicities beyond low-level acoustic features is a neural property that is already present soon after birth. These high-level neural processes could set the stage for internal representations of musical meter that are critical for human movement coordination during rhythmic musical behavior.
Research highlights:
- 5- to 6-month-old infants were presented with auditory rhythms that induce the perception of a periodic pulse-like meter in adults.
- Infants showed selective enhancement of EEG activity at meter-related frequencies irrespective of the prominence of these frequencies in the stimulus.
- Responses at meter-related frequencies were boosted when the rhythm was conveyed by bass sounds.
- High-level neural processes that transform rhythmic auditory stimuli into internal meter templates emerge early after birth.
Affiliation(s)
- Tomas Lenc
- Institute of Neuroscience (IONS), Université catholique de Louvain (UCL), Brussels, Belgium
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Varghese Peter
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- School of Health and Behavioural Sciences, University of the Sunshine Coast, Queensland, Australia
- Caitlin Hooper
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Peter E Keller
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Center for Music in the Brain & Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Denis Burnham
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- Sylvie Nozaradan
- Institute of Neuroscience (IONS), Université catholique de Louvain (UCL), Brussels, Belgium
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
10
Merseal HM, Beaty RE, Kenett YN, Lloyd-Cox J, de Manzano Ö, Norgaard M. Representing melodic relationships using network science. Cognition 2023;233:105362. PMID: 36628852. DOI: 10.1016/j.cognition.2022.105362.
Abstract
Music is a complex system consisting of many dimensions and hierarchically organized information, the organization of which, to date, we do not fully understand. Network science provides a powerful approach to representing such complex systems, from the social networks of people to modelling the underlying network structures of different cognitive mechanisms. In the present research, we explored whether network science methodology can be extended to model the melodic patterns underlying expert improvised music. Using a large corpus of transcribed improvisations, we constructed a network model in which 5-pitch sequences were linked when they occurred consecutively, constituting 116,403 nodes (sequences) and 157,429 edges connecting them. We then investigated whether mathematical graph modelling relates to musical characteristics in real-world listening situations via a behavioral experiment paralleling those used to examine language. We found that as melodic distance within the network increased, participants judged melodic sequences as less related. Moreover, the relationship between distance and reaction time (RT) judgements was quadratic: participants' RTs slowed up to distance four, then accelerated, a finding that parallels research on language networks. This study offers insights into the hidden network structure of improvised tonal music and suggests that humans are sensitive to the property of melodic distance in this network. More generally, our work demonstrates the similarity between music and language as complex systems, and how network science methods can be used to quantify different aspects of musical complexity.
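The network construction described in this abstract can be sketched with a toy corpus. The snippet below is an illustrative reconstruction, not the authors' code: it assumes networkx, uses 2-pitch windows rather than the paper's 5-pitch sequences, and a made-up two-melody corpus, purely to keep the example small.

```python
import networkx as nx

def melodic_network(melodies, n=2):
    """Directed graph whose nodes are n-pitch sequences; two sequences
    are linked when one immediately follows the other in a melody."""
    g = nx.DiGraph()
    for melody in melodies:
        windows = [tuple(melody[i:i + n]) for i in range(len(melody) - n + 1)]
        for a, b in zip(windows, windows[1:]):
            g.add_edge(a, b)
    return g

# Toy corpus: two short "improvisations" as MIDI pitch lists (hypothetical data).
corpus = [[60, 62, 64, 62, 60], [60, 62, 64, 65, 64]]
g = melodic_network(corpus)

# "Melodic distance" between two sequences = directed shortest-path length.
dist = nx.shortest_path_length(g, (60, 62), (62, 60))
print(g.number_of_nodes(), dist)  # 6 nodes; distance 3
```

Melodic distance in this graph is the quantity the behavioral experiment relates to relatedness judgments and RTs.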
Affiliation(s)
- Hannah M Merseal
- Department of Psychology, Pennsylvania State University, United States
- Roger E Beaty
- Department of Psychology, Pennsylvania State University, United States
- Yoed N Kenett
- Faculty of Data and Decision Sciences, Technion - Israel Institute of Technology, Israel
- James Lloyd-Cox
- Department of Cognitive Neuroscience, Goldsmiths, University of London, England, United Kingdom
- Örjan de Manzano
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Germany
- Martin Norgaard
- Department of Music Education, Georgia State University, United States
11
Tichko P, Page N, Kim JC, Large EW, Loui P. Neural Entrainment to Musical Pulse in Naturalistic Music Is Preserved in Aging: Implications for Music-Based Interventions. Brain Sci 2022;12:1676. PMID: 36552136. PMCID: PMC9775503. DOI: 10.3390/brainsci12121676.
Abstract
Neural entrainment to musical rhythm is thought to underlie the perception and production of music. In aging populations, the strength of neural entrainment to rhythm has been found to be attenuated, particularly during attentive listening to auditory streams. However, previous studies on neural entrainment to rhythm and aging have often employed artificial auditory rhythms or limited pieces of recorded, naturalistic music, failing to account for the diversity of rhythmic structures found in natural music. As part of a larger project assessing a novel music-based intervention for healthy aging, we investigated neural entrainment to musical rhythms in the electroencephalogram (EEG) while participants listened to self-selected musical recordings, across a sample of younger and older adults. We specifically measured neural entrainment at the level of the musical pulse, quantified here as the phase-locking value (PLV), after normalizing the PLVs to each musical recording's detected pulse frequency. As predicted, we observed strong neural phase-locking to the musical pulse, and to the sub-harmonic and harmonic levels of musical meter. Overall, PLVs were not significantly different between older and younger adults. This preserved neural entrainment to musical pulse and rhythm could support the design of music-based interventions that aim to modulate endogenous brain activity via self-selected music for healthy cognitive aging.
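The phase-locking value mentioned in this abstract has a standard definition that can be sketched directly. The snippet below is a generic Hilbert-transform PLV on synthetic signals, not the authors' EEG pipeline; the 2 Hz "pulse" and noisy sinusoid are stand-ins for a recording's detected pulse and a listener's EEG.

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV between two signals: |mean of exp(i * phase difference)|.

    Phases come from the Hilbert transform; PLV approaches 1 when the
    phase difference is constant over time and 0 when it is random.
    """
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Synthetic example: a 2 Hz "pulse", a noisy signal locked to it, and noise.
fs, dur = 250, 20                      # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
pulse = np.sin(2 * np.pi * 2 * t)      # stand-in for the musical pulse
rng = np.random.default_rng(0)
locked = np.sin(2 * np.pi * 2 * t + 0.5) + 0.3 * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)    # unrelated "EEG"

print(phase_locking_value(pulse, locked))  # high, near 1
print(phase_locking_value(pulse, noise))   # low
```

A constant phase lag (here 0.5 rad) does not reduce the PLV; only variability in the lag does, which is why the measure indexes entrainment rather than simultaneity.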
Affiliation(s)
- Parker Tichko
- Department of Music, Northeastern University, Boston, MA 02115, USA
- Nicole Page
- Department of Music, Northeastern University, Boston, MA 02115, USA
- Ji Chul Kim
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Edward W. Large
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Psyche Loui
- Department of Music, Northeastern University, Boston, MA 02115, USA
12
Thomassen S, Hartung K, Einhäuser W, Bendixen A. Low-high-low or high-low-high? Pattern effects on sequential auditory scene analysis. J Acoust Soc Am 2022;152:2758. PMID: 36456271. DOI: 10.1121/10.0015054.
Abstract
Sequential auditory scene analysis (ASA) is often studied using sequences of two alternating tones, such as ABAB or ABA_, with "_" denoting a silent gap, and "A" and "B" sine tones differing in frequency (nominally low and high). Many studies implicitly assume that the specific arrangement (ABAB vs ABA_, as well as low-high-low vs high-low-high within ABA_) plays a negligible role, such that decisions about the tone pattern can be governed by other considerations. To explicitly test this assumption, a systematic comparison of different tone patterns for two-tone sequences was performed in three different experiments. Participants were asked to report whether they perceived the sequences as originating from a single sound source (integrated) or from two interleaved sources (segregated). Results indicate that core findings of sequential ASA, such as an effect of frequency separation on the proportion of integrated and segregated percepts, are similar across the different patterns during prolonged listening. However, at sequence onset, the integrated percept was more likely to be reported by the participants in ABA_low-high-low than in ABA_high-low-high sequences. This asymmetry is important for models of sequential ASA, since the formation of percepts at onset is an integral part of understanding how auditory interpretations build up.
Affiliation(s)
- Sabine Thomassen
- Cognitive Systems Lab, Faculty of Natural Sciences, Chemnitz University of Technology, 09107 Chemnitz, Germany
- Kevin Hartung
- Cognitive Systems Lab, Faculty of Natural Sciences, Chemnitz University of Technology, 09107 Chemnitz, Germany
- Wolfgang Einhäuser
- Physics of Cognition Group, Faculty of Natural Sciences, Chemnitz University of Technology, 09107 Chemnitz, Germany
- Alexandra Bendixen
- Cognitive Systems Lab, Faculty of Natural Sciences, Chemnitz University of Technology, 09107 Chemnitz, Germany
13
Bürgel M, Picinali L, Siedenburg K. Listening in the Mix: Lead Vocals Robustly Attract Auditory Attention in Popular Music. Front Psychol 2021;12:769663. PMID: 35024038. PMCID: PMC8744650. DOI: 10.3389/fpsyg.2021.769663.
Abstract
Listeners can attend to and track instruments or singing voices in complex musical mixtures, even though the acoustical energy of sounds from individual instruments may overlap in time and frequency. In popular music, lead vocals are often accompanied by sound mixtures from a variety of instruments, such as drums, bass, keyboards, and guitars. However, little is known about how the perceptual organization of such musical scenes is affected by selective attention, and which acoustic features play the most important role. To investigate these questions, we explored the role of auditory attention in a realistic musical scenario. We conducted three online experiments in which participants detected single cued instruments or voices in multi-track musical mixtures. Stimuli consisted of 2-s multi-track excerpts of popular music. In one condition, the target cue preceded the mixture, allowing listeners to selectively attend to the target. In another condition, the target was presented after the mixture, requiring a more “global” mode of listening. Performance differences between these two conditions were interpreted as effects of selective attention. In Experiment 1, results showed that detection performance was generally dependent on the target’s instrument category, but listeners were more accurate when the target was presented prior to the mixture rather than the opposite. Lead vocals appeared to be nearly unaffected by this change in presentation order and achieved the highest accuracy compared with the other instruments, which suggested a particular salience of vocal signals in musical mixtures. In Experiment 2, filtering was used to avoid potential spectral masking of target sounds. Although detection accuracy increased for all instruments, a similar pattern of results was observed regarding the instrument-specific differences between presentation orders. 
In Experiment 3, adjusting the sound level differences between the targets reduced the effect of presentation order, but did not affect the differences between instruments. While both acoustic manipulations facilitated the detection of targets, vocal signals remained particularly salient, which suggests that the manipulated features did not contribute to vocal salience. These findings demonstrate that lead vocals serve as robust attractor points of auditory attention regardless of the manipulation of low-level acoustical cues.
Collapse
Affiliation(s)
- Michel Bürgel
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
| | - Lorenzo Picinali
- Dyson School of Design Engineering, Imperial College London, London, United Kingdom
| | - Kai Siedenburg
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
| |
Collapse
|
14
|
Emmanouil A, Rousanoglou E, Georgaki A, Boudolos KD. When Musical Accompaniment Allows the Preferred Spatio-Temporal Pattern of Movement. Sports Med Int Open 2021; 5:E81-E90. [PMID: 34646934 PMCID: PMC8500738 DOI: 10.1055/a-1553-7063] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Accepted: 05/11/2021] [Indexed: 11/24/2022] Open
Abstract
A musical accompaniment is often used in movement coordination and stability exercise modalities, although it is considered potentially obstructive to their fundamental principle of a preferred movement pace. This study examined whether the rhythmic strength of musical excerpts used in movement coordination and exercise modalities allows the preferred spatio-temporal pattern of movement. Voluntary and spontaneous body sway (70 s) were tested (N=20 young women) in a non-musical (preferred) condition and two rhythmic strength (RS) musical conditions (Higher: HrRS, Lower: LrRS). The center of pressure trajectory was used to derive the spatio-temporal characteristics of body sway (Kistler forceplate, 100 Hz). Statistics included paired t-tests between each musical condition and the non-musical one, as well as between musical conditions (p≤0.05). Results indicated no significant difference between the musical and the non-musical conditions (p>0.05). The HrRS differed significantly from the LrRS only in voluntary body sway, with increased sway duration (p=0.03), center of pressure path (p=0.04), and velocity (p=0.01). The findings provide evidence-based support for the rhythmic strength recommendations in movement coordination and stability exercise modalities. The HrRS-to-LrRS differences in voluntary body sway most likely indicate that low-frequency musical features, and not just tempo and pulse clarity, are also important.
Collapse
Affiliation(s)
- Analina Emmanouil
- National and Kapodistrian University of Athens, Faculty of Physical Education and Sport Science, Department of Sport Medicine and Biology of Exercise, Sport Biomechanics Lab, Daphne, Greece
| | - Elissavet Rousanoglou
- National and Kapodistrian University of Athens, Faculty of Physical Education and Sport Science, Department of Sport Medicine and Biology of Exercise, Sport Biomechanics Lab, Daphne, Greece
| | - Anastasia Georgaki
- National and Kapodistrian University of Athens, Department of Music Studies, Athens, Greece
| | - Konstantinos D Boudolos
- National and Kapodistrian University of Athens, Faculty of Physical Education and Sport Science, Department of Sport Medicine and Biology of Exercise, Sport Biomechanics Lab, Daphne, Greece
| |
Collapse
|
15
|
The influence of auditory rhythms on the speed of inferred motion. Atten Percept Psychophys 2021; 84:2360-2383. [PMID: 34435321 DOI: 10.3758/s13414-021-02364-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/26/2021] [Indexed: 12/24/2022]
Abstract
The present research explored the influence of isochronous auditory rhythms on the timing of movement-related prediction in two experiments. In both experiments, participants observed a moving disc that was visible for a predetermined period before disappearing behind a small, medium, or large occluded area for the remainder of its movement. In Experiment 1, the disc was visible for 1 s. During this period, participants were exposed to either a fast or slow auditory rhythm, or they heard nothing. They were instructed to press a key to indicate when they believed the moving disc had reached a specified location on the other side of the occluded area. The procedure measured the (signed) error in participants' estimate of the time it would take for a moving object to contact a stationary one. The principal results of Experiment 1 were main effects of the rate of the auditory rhythm and of the size of the occlusion on participants' judgments. In Experiment 2, the period of visibility was varied with the size of the occluded area to keep the total movement time constant for all three levels of occlusion. The results replicated the main effect of rhythm found in Experiment 1 and showed a small but significant interaction, but indicated no main effect of occlusion size. Overall, the results indicate that exposure to fast isochronous auditory rhythms during an interval of inferred motion can influence the imagined rate of such motion and suggest a possible role of an internal rhythmicity in the maintenance of temporally accurate dynamic mental representations.
Collapse
|
16
|
Møller C, Stupacher J, Celma-Miralles A, Vuust P. Beat perception in polyrhythms: Time is structured in binary units. PLoS One 2021; 16:e0252174. [PMID: 34415911 PMCID: PMC8378699 DOI: 10.1371/journal.pone.0252174] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2021] [Accepted: 08/01/2021] [Indexed: 11/19/2022] Open
Abstract
In everyday life, we group and subdivide time to understand the sensory environment surrounding us. Organizing time in units, such as diurnal rhythms, phrases, and beat patterns, is fundamental to behavior, speech, and music. When listening to music, our perceptual system extracts and nests rhythmic regularities to create a hierarchical metrical structure that enables us to predict the timing of the next events. Foot tapping and head bobbing to musical rhythms are observable evidence of this process. In the special case of polyrhythms, at least two metrical structures compete to become the reference for these temporal regularities, rendering several possible beats with which we can synchronize our movements. While there is general agreement that tempo, pitch, and loudness influence beat perception in polyrhythms, we focused on the yet neglected influence of beat subdivisions, i.e., the least common denominator of a polyrhythm ratio. In three online experiments, 300 participants listened to a range of polyrhythms and tapped their index fingers in time with the perceived beat. The polyrhythms consisted of two simultaneously presented isochronous pulse trains with different ratios (2:3, 2:5, 3:4, 3:5, 4:5, 5:6) and different tempi. For ratios 2:3 and 3:4, we additionally manipulated the pitch of the pulse trains. Results showed a highly robust influence of subdivision grouping on beat perception. This was manifested as a propensity towards beats that are subdivided into two or four equally spaced units, as opposed to beats with three or more complex groupings of subdivisions. Additionally, lower pitched pulse trains were more often perceived as the beat. Our findings suggest that subdivisions, not beats, are the basic unit of beat perception, and that the principle underlying the binary grouping of subdivisions reflects a propensity towards simplicity. This preference for simple grouping is widely applicable to human perception and cognition of time.
Collapse
Affiliation(s)
- Cecilie Møller
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
| | - Jan Stupacher
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
| | - Alexandre Celma-Miralles
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
| | - Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
| |
Collapse
|
17
|
Beveridge S, Cano E, Herff SA. The effect of low-frequency equalisation on preference and sensorimotor synchronisation in music. Q J Exp Psychol (Hove) 2021; 75:475-490. [PMID: 34293989 DOI: 10.1177/17470218211037145] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Equalisation, a signal processing technique commonly used to shape the sound of music, is defined as the adjustment of the energy in specific frequency components of a signal. In this work, we investigate the effects of equalisation on preference and sensorimotor synchronisation in music. A total of 21 participants engaged in goal-directed upper body movements in synchrony with stimuli equalised in three low-frequency sub-bands (0-50, 50-100, and 100-200 Hz). To quantify the effect of equalisation, music features including spectral flux, pulse clarity, and beat confidence were extracted from seven differently equalised versions of music tracks: one original and six manipulated versions for each music track. These music tracks were then used in a movement synchronisation task. Bayesian mixed-effects models revealed different synchronisation behaviours in response to the three sub-bands considered. Boosting energy in the 100-200 Hz sub-band reduced synchronisation performance irrespective of the sub-band energy of the original version. An energy boost in the 0-50 Hz band resulted in increased synchronisation performance only when the sub-band energy of the original version was high. An energy boost in the 50-100 Hz band increased synchronisation performance only when the sub-band energy of the original version was low. Boosting the energy in any of the three sub-bands increased preference regardless of the energy of the original version. Our results provide empirical support for the importance of low-frequency information for sensorimotor synchronisation and suggest that the effects of equalisation on preference and synchronisation are largely independent of one another.
Collapse
Affiliation(s)
- Scott Beveridge
- Social and Cognitive Computing, Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
| | - Estefanía Cano
- Social and Cognitive Computing, Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
| | - Steffen A Herff
- École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland; Music Cognition and Action Research Group (MCA), MARCS Institute for Brain, Behaviour & Development, Western Sydney University (WSU), Sydney, NSW, Australia
| |
Collapse
|
18
|
Varlet M, Nozaradan S, Trainor L, Keller PE. Dynamic Modulation of Beta Band Cortico-Muscular Coupling Induced by Audio-Visual Rhythms. Cereb Cortex Commun 2021; 1:tgaa043. [PMID: 34296112 PMCID: PMC8263089 DOI: 10.1093/texcom/tgaa043] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2020] [Revised: 07/27/2020] [Accepted: 07/28/2020] [Indexed: 12/18/2022] Open
Abstract
Human movements often spontaneously fall into synchrony with auditory and visual environmental rhythms. Related behavioral studies have shown that motor responses are automatically and unintentionally coupled with external rhythmic stimuli. However, the neurophysiological processes underlying such motor entrainment remain largely unknown. Here, we investigated with electroencephalography (EEG) and electromyography (EMG) the modulation of neural and muscular activity induced by periodic audio and/or visual sequences. The sequences were presented at either 1 or 2 Hz, while participants maintained constant finger pressure on a force sensor. The results revealed that although there was no change of amplitude in participants' EMG in response to the sequences, the synchronization between EMG and EEG recorded over motor areas in the beta (12-40 Hz) frequency band was dynamically modulated, with maximal coherence occurring about 100 ms before each stimulus. These modulations in beta EEG-EMG motor coherence were found for the 2-Hz audio-visual sequences, confirming at a neurophysiological level the enhancement of motor entrainment with multimodal rhythms that fall within preferred perceptual and movement frequency ranges. Our findings identify beta band cortico-muscular coupling as a potential underlying mechanism of motor entrainment, further elucidating the nature of the link between sensory and motor systems in humans.
Collapse
Affiliation(s)
- Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia
| | - Sylvie Nozaradan
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia
| | - Laurel Trainor
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
| | - Peter E Keller
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia
| |
Collapse
|
19
|
Cannon J. Expectancy-based rhythmic entrainment as continuous Bayesian inference. PLoS Comput Biol 2021; 17:e1009025. [PMID: 34106918 PMCID: PMC8216548 DOI: 10.1371/journal.pcbi.1009025] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Revised: 06/21/2021] [Accepted: 04/29/2021] [Indexed: 11/18/2022] Open
Abstract
When presented with complex rhythmic auditory stimuli, humans are able to track underlying temporal structure (e.g., a "beat"), both covertly and with their movements. This capacity goes far beyond that of a simple entrained oscillator, drawing on contextual and enculturated timing expectations and adjusting rapidly to perturbations in event timing, phase, and tempo. Previous modeling work has described how entrainment to rhythms may be shaped by event timing expectations, but sheds little light on any underlying computational principles that could unify the phenomenon of expectation-based entrainment with other brain processes. Inspired by the predictive processing framework, we propose that the problem of rhythm tracking is naturally characterized as a problem of continuously estimating an underlying phase and tempo based on precise event times and their correspondence to timing expectations. We present two inference problems formalizing this insight: PIPPET (Phase Inference from Point Process Event Timing) and PATIPPET (Phase and Tempo Inference). Variational solutions to these inference problems resemble previous "Dynamic Attending" models of perceptual entrainment, but introduce new terms representing the dynamics of uncertainty and the influence of expectations in the absence of sensory events. These terms allow us to model multiple characteristics of covert and motor human rhythm tracking not addressed by other models, including sensitivity of error corrections to inter-event interval and perceived tempo changes induced by event omissions. We show that positing these novel influences in human entrainment yields a range of testable behavioral predictions. Guided by recent neurophysiological observations, we attempt to align the phase inference framework with a specific brain implementation. 
We also explore the potential of this normative framework to guide the interpretation of experimental data and serve as building blocks for even richer predictive processing and active inference models of timing.
Collapse
Affiliation(s)
- Jonathan Cannon
- Department of Brain and Cognitive Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
| |
Collapse
|
20
|
What makes music memorable? Relationships between acoustic musical features and music-evoked emotions and memories in older adults. PLoS One 2021; 16:e0251692. [PMID: 33989366 PMCID: PMC8121320 DOI: 10.1371/journal.pone.0251692] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Accepted: 05/03/2021] [Indexed: 11/19/2022] Open
Abstract
BACKGROUND AND OBJECTIVES: Music has a unique capacity to evoke both strong emotions and vivid autobiographical memories. Previous music information retrieval (MIR) studies have shown that the emotional experience of music is influenced by a combination of musical features, including tonal, rhythmic, and loudness features. Here, our aim was to explore the relationship between music-evoked emotions and music-evoked memories and how musical features (derived with MIR) can predict them both.
METHODS: Healthy older adults (N = 113, age ≥ 60 years) participated in a listening task in which they rated a total of 140 song excerpts, comprising folk songs and popular songs from the 1950s to the 1980s, on five domains measuring the emotional (valence, arousal, emotional intensity) and memory (familiarity, autobiographical salience) experience of the songs. A set of 24 musical features was extracted from the songs using computational MIR methods. Principal component analyses were applied to reduce multicollinearity, resulting in six core musical components, which were then used to predict the behavioural ratings in multiple regression analyses.
RESULTS: All correlations between behavioural ratings were positive and ranged from moderate to very high (r = 0.46-0.92). Emotional intensity showed the highest correlation to both autobiographical salience and familiarity. In the MIR data, three musical components measuring salience of the musical pulse (Pulse strength), relative strength of high harmonics (Brightness), and fluctuation in the frequencies between 200-800 Hz (Low-mid) predicted both music-evoked emotions and memories. Emotional intensity (and valence to a lesser extent) mediated the predictive effect of the musical components on music-evoked memories.
CONCLUSIONS: The results suggest that music-evoked emotions are strongly related to music-evoked memories in healthy older adults and that both music-evoked emotions and memories are predicted by the same core musical features.
Collapse
|
21
|
Keeping in time with social and non-social stimuli: Synchronisation with auditory, visual, and audio-visual cues. Sci Rep 2021; 11:8805. [PMID: 33888822 PMCID: PMC8062473 DOI: 10.1038/s41598-021-88112-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2020] [Accepted: 03/31/2021] [Indexed: 02/02/2023] Open
Abstract
Everyday social interactions require us to closely monitor, predict, and synchronise our movements with those of an interacting partner. Experimental studies of social synchrony typically examine the social-cognitive outcomes associated with synchrony, such as affiliation. On the other hand, research on the sensorimotor aspects of synchronisation generally uses non-social stimuli (e.g. a moving dot). To date, the differences in sensorimotor aspects of synchronisation to social compared to non-social stimuli remain largely unknown. The present study aims to address this gap using a verbal response paradigm where participants were asked to synchronise a 'ba' response in time with social and non-social stimuli, which were presented auditorily, visually, or audio-visually combined. For social stimuli a video/audio recording of an actor performing the same verbal 'ba' response was presented, whereas for non-social stimuli a moving dot, an auditory metronome or both combined were presented. The impact of autistic traits on participants' synchronisation performance was examined using the Autism Spectrum Quotient (AQ). Our results revealed more accurate synchronisation for social compared to non-social stimuli, suggesting that greater familiarity with and motivation in attending to social stimuli may enhance our ability to better predict and synchronise with them. Individuals with fewer autistic traits demonstrated greater social learning, as indexed through an improvement in synchronisation performance to social vs non-social stimuli across the experiment.
Collapse
|
22
|
Turn the beat around: Commentary on "Slow and fast beat sequences are represented differently through space" (De Tommaso & Prpic, 2020, in Attention, Perception, & Psychophysics). Atten Percept Psychophys 2021; 83:1518-1521. [PMID: 33686588 PMCID: PMC8084794 DOI: 10.3758/s13414-021-02247-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/08/2021] [Indexed: 11/12/2022]
Abstract
There has been increasing interest in the spatial mapping of various perceptual and cognitive magnitudes, such as expanding the spatial-numerical association of response codes (SNARC) effect into domains outside of numerical cognition. Recently, De Tommaso and Prpic (Attention, Perception, & Psychophysics, 82, 2765–2773, 2020) reported in this journal that only fast tempos over 104 beats per minute have spatial associations, with more right-sided associations and faster responses for faster tempos. After discussing the role of perceived loudness and possible response strategies, we propose and recommend methodological improvements for further research.
Collapse
|
23
|
Gilmore SA, Russo FA. Neural and Behavioral Evidence for Vibrotactile Beat Perception and Bimodal Enhancement. J Cogn Neurosci 2021; 33:635-650. [PMID: 33475449 DOI: 10.1162/jocn_a_01673] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The ability to synchronize movements to a rhythmic stimulus, referred to as sensorimotor synchronization (SMS), is a behavioral measure of beat perception. Although SMS is generally superior when rhythms are presented in the auditory modality, recent research has demonstrated near-equivalent SMS for vibrotactile presentations of isochronous rhythms [Ammirante, P., Patel, A. D., & Russo, F. A. Synchronizing to auditory and tactile metronomes: A test of the auditory-motor enhancement hypothesis. Psychonomic Bulletin & Review, 23, 1882-1890, 2016]. The current study aimed to replicate and extend this study by incorporating a neural measure of beat perception. Nonmusicians were asked to tap to rhythms or to listen passively while EEG data were collected. Rhythmic complexity (isochronous, nonisochronous) and presentation modality (auditory, vibrotactile, bimodal) were fully crossed. Tapping data were consistent with those observed by Ammirante et al. (2016), revealing near-equivalent SMS for isochronous rhythms across modality conditions and a drop-off in SMS for nonisochronous rhythms, especially in the vibrotactile condition. EEG data revealed a greater degree of neural entrainment for isochronous compared to nonisochronous trials as well as for auditory and bimodal compared to vibrotactile trials. These findings led us to three main conclusions. First, isochronous rhythms lead to higher levels of beat perception than nonisochronous rhythms across modalities. Second, beat perception is generally enhanced for auditory presentations of rhythm but still possible under vibrotactile presentation conditions. Finally, exploratory analysis of neural entrainment at harmonic frequencies suggests that beat perception may be enhanced for bimodal presentations of rhythm.
Collapse
|
24
|
Takehana A, Uehara T, Sakaguchi Y. Audiovisual synchrony perception in observing human motion to music. PLoS One 2019; 14:e0221584. [PMID: 31454393 PMCID: PMC6711538 DOI: 10.1371/journal.pone.0221584] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2019] [Accepted: 08/09/2019] [Indexed: 11/18/2022] Open
Abstract
To examine how individuals perceive synchrony between music and body motion, we investigated the characteristics of synchrony perception during observation of a Japanese Radio Calisthenics routine. We used the constant stimuli method to present video clips of an individual performing an exercise routine. We generated stimuli with a range of temporal shifts between the visual and auditory streams, and asked participants to make synchrony judgments. We then examined which movement-feature points agreed with music beats when the participants perceived synchrony. We found that extremities (e.g., hands and feet) reached the movement endpoint or moved through the lowest position at music beats associated with synchrony. Movement onsets never agreed with music beats. To investigate whether visual information about the feature points was necessary for synchrony perception, we conducted a second experiment where only limited portions of video clips were presented to the participants. Participants consistently judged synchrony even when the video image did not contain the critical feature points, suggesting that a prediction mechanism contributes to synchrony perception. To discuss the meaning of these feature points with respect to synchrony perception, we examined the temporal relationship between the motion of body parts and the ground reaction force (GRF) of exercise performers, which reflected the total force acting on the performer. Interestingly, vertical GRF showed local peaks consistently synchronized with music beats for most exercises, with timing that was closely correlated with the timing of movement feature points. This result suggests that synchrony perception in humans is based on some global variable anticipated from visual information, instead of the feature points found in the motion of individual body parts. 
In summary, the present results indicate that synchrony perception during observation of human motion to music depends largely on spatiotemporal prediction of the performer's motion.
Collapse
Affiliation(s)
- Akira Takehana
- Department of Mechanical Engineering and Intelligent Systems, Graduate School of Informatics and Engineering, University of Electro-Communications, Chofu, Tokyo, Japan
| | - Tsukasa Uehara
- Department of Mechanical Engineering and Intelligent Systems, Graduate School of Informatics and Engineering, University of Electro-Communications, Chofu, Tokyo, Japan
| | - Yutaka Sakaguchi
- Department of Mechanical Engineering and Intelligent Systems, Graduate School of Informatics and Engineering, University of Electro-Communications, Chofu, Tokyo, Japan
- Research Center for Performance Art Science, University of Electro-Communications, Chofu, Tokyo, Japan
| |
Collapse
|
25
|
Ogata T, Katayama T, Ota J. Cross-feedback with Partner Contributes to Performance Accuracy in Finger-tapping Rhythm Synchronization between One Leader and Two Followers. Sci Rep 2019; 9:7800. [PMID: 31127127 PMCID: PMC6534596 DOI: 10.1038/s41598-019-43352-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2018] [Accepted: 04/12/2019] [Indexed: 11/12/2022] Open
Abstract
As observed in musical ensembles, people synchronize with a leader together with other people. This study aimed to investigate whether interdependency with a partner improves performance accuracy in rhythm synchronization with the leader. Participants performed a synchronization task via auditory signal by finger tapping in which two followers simultaneously synchronized with a leader: an isochronous metronome or a human leader with or without feedback from the followers. This task was conducted with and without cross-feedback (CFB) between the followers. The followers’ weak mutual tempo tracking via the CFB and the followers’ strong tempo tracking to the leader improved the tempo stability. Additionally, because the interdependency between the followers was weaker than the followers’ dependency on the human leader, the CFB did not enlarge the synchronization error between the human leader and the followers, which occurred in synchronization with the metronome. Thus, the CFB between the followers contributed to accuracy in synchronization with the human leader. The results suggest that in ensembles, players should strongly attend to the leader and should attempt to be less conscious of partners to maintain the appropriate balance between influences from the leader and partners.
Collapse
Affiliation(s)
- Taiki Ogata
- Research into Artifacts, Center for Engineering (RACE), The University of Tokyo, Kashiwano-ha, 5-1-5, Kashiwa, Chiba, 277-8568, Japan.
| | - Takahiro Katayama
- Department of Precision Engineering, School of Engineering, The University of Tokyo, Hongo 7-3-1, Bunkyo, Tokyo, 113-8656, Japan
| | - Jun Ota
- Research into Artifacts, Center for Engineering (RACE), The University of Tokyo, Kashiwano-ha, 5-1-5, Kashiwa, Chiba, 277-8568, Japan
| |
Collapse
|
26
|
Hove MJ, Vuust P, Stupacher J. Increased levels of bass in popular music recordings 1955-2016 and their relation to loudness. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 145:2247. [PMID: 31046334 DOI: 10.1121/1.5097587] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/09/2018] [Accepted: 03/23/2019] [Indexed: 06/09/2023]
Abstract
The sound of recorded music has changed over time. These changes can be captured by different audio features. Over the past decades, popular songs have shown clear increases in root-mean-square (RMS) energy and loudness, but far less attention has been paid to whether this upward trend is more prevalent in specific frequency bands, such as the bass. Bass frequencies are especially important for movement induction, such as foot tapping or dancing, and might offer competitive advantages by capturing attention and increasing engagement. Here, the authors examined the evolution of audio features, such as RMS energy, loudness, and spectral fluctuations (changes in the audio signal's frequency content), in ten frequency bands from songs on the Billboard Hot 100 charts from 1955 to 2016. Over time, RMS energy and loudness increased while dynamic range decreased. The largest increases were found in the bass range: spectral flux increased most strongly in the lowest frequency bands (0-100 Hz), and when controlling for overall RMS, only the lowest frequency bands showed an increase over time. The upward trend of bass could reflect changes in technology and style, but based on links between bass and movement, it is likely a widespread technique to increase engagement and contribute to chart success.
Affiliation(s)
- Michael J Hove
- Fitchburg State University, Fitchburg, Massachusetts 01420, USA
| | - Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Denmark
| | - Jan Stupacher
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Denmark
27
Bowling DL, Graf Ancochea P, Hove MJ, Fitch WT. Pupillometry of Groove: Evidence for Noradrenergic Arousal in the Link Between Music and Movement. Front Neurosci 2019; 12:1039. [PMID: 30686994 PMCID: PMC6335267 DOI: 10.3389/fnins.2018.01039] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2018] [Accepted: 12/21/2018] [Indexed: 11/16/2022] Open
Abstract
The capacity to entrain motor action to rhythmic auditory stimulation is highly developed in humans and extremely limited in our closest relatives. An important aspect of auditory-motor entrainment is that not all forms of rhythmic stimulation motivate movement to the same degree. This variation is captured by the concept of musical groove: high-groove music stimulates a strong desire for movement, whereas low-groove music does not. Here, we utilize this difference to investigate the neurophysiological basis of our capacity for auditory-motor entrainment. In a series of three experiments we examine pupillary responses to musical stimuli varying in groove. Our results show stronger pupil dilation in response to (1) high- vs. low-groove music, (2) high vs. low spectral content, and (3) syncopated vs. straight drum patterns. We additionally report evidence for consistent sex differences in music-induced pupillary responses, with males exhibiting larger differences between responses, but females exhibiting stronger responses overall. These results imply that the biological link between movement and auditory rhythms in our species is supported by the capacity of high-groove music to stimulate arousal in the central and peripheral nervous system, presumably via highly conserved noradrenergic mechanisms.
Affiliation(s)
- Daniel L. Bowling
- Department of Cognitive Biology, University of Vienna, Vienna, Austria
- Michael J. Hove
- Department of Psychological Science, Fitchburg State University, Fitchburg, MA, United States
| | - W. Tecumseh Fitch
- Department of Cognitive Biology, University of Vienna, Vienna, Austria
28
Ravignani A. Honing, H. (Ed.). The Origins of Musicality. Perception 2018. [DOI: 10.1177/0301006618817430] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
29
Tagging the musical beat: Neural entrainment or event-related potentials? Proc Natl Acad Sci U S A 2018; 115:E11002-E11003. [PMID: 30425178 DOI: 10.1073/pnas.1815311115] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
30
Varlet M, Williams R, Keller PE. Effects of pitch and tempo of auditory rhythms on spontaneous movement entrainment and stabilisation. Psychol Res 2018; 84:568-584. [PMID: 30116886 DOI: 10.1007/s00426-018-1074-8] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2018] [Accepted: 08/09/2018] [Indexed: 10/28/2022]
Abstract
Human movements spontaneously entrain to auditory rhythms, which can help to stabilise movements in time and space. The properties of auditory rhythms supporting the occurrence of this phenomenon, however, remain largely unclear. Here, we investigate in two experiments the effects of pitch and tempo on spontaneous movement entrainment and stabilisation. We examined spontaneous entrainment of hand-held pendulum swinging in time with low-pitched (100 Hz) and high-pitched (1600 Hz) metronomes to test whether low pitch favours movement entrainment and stabilisation. To investigate whether stimulation and movement tempi moderate these effects of pitch, we manipulated (1) participants' preferred movement tempo by varying pendulum mechanical constraints (Experiment 1) and (2) stimulation tempo, which was either equal to, or slightly slower or faster (± 10%) than the participant's preferred movement tempo (Experiment 2). The results showed that participants' movements spontaneously entrained to auditory rhythms, and that this effect was stronger with low-pitched rhythms independently of stimulation and movement tempi. Results also indicated that auditory rhythms can lead to increased movement amplitude and stabilisation of movement tempo and amplitude, particularly when low-pitched. However, stabilisation effects were found to depend on intrinsic movement variability. Auditory rhythms decreased movement variability of individuals with higher intrinsic variability but increased movement variability of individuals with lower intrinsic variability. These findings provide new insights into factors that influence auditory-motor entrainment and how they may be optimised to enhance movement efficiency.
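Entrainment strength of the kind measured here is commonly quantified with circular statistics on the relative phase between movement events and metronome onsets. A minimal sketch, not the authors' exact analysis (the peak-extraction step and nearest-onset assignment are assumptions):

```python
import numpy as np

def entrainment_strength(movement_peaks, metronome_onsets, period):
    """Mean resultant length R of movement-metronome relative phase:
    R = 0 means no phase locking, R = 1 means perfect locking."""
    peaks = np.asarray(movement_peaks, dtype=float)
    onsets = np.asarray(metronome_onsets, dtype=float)
    # Phase of each movement peak relative to its nearest metronome onset
    nearest = onsets[np.argmin(np.abs(onsets[None, :] - peaks[:, None]), axis=1)]
    phases = 2 * np.pi * (peaks - nearest) / period
    return float(np.abs(np.mean(np.exp(1j * phases))))

period = 0.6                                  # 600 ms metronome period
onsets = np.arange(50) * period
locked = onsets + 0.05                        # stable 50 ms lag: high R
rng = np.random.default_rng(0)
jittery = onsets + rng.uniform(-0.3, 0.3, onsets.size)  # wandering phase
```

Comparing R between low- and high-pitched metronome conditions would then index how strongly each pitch supports entrainment.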
Affiliation(s)
- Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, NSW, 2751, Australia.
- Rohan Williams
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, NSW, 2751, Australia
- Peter E Keller
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, NSW, 2751, Australia
31
Abstract
Bass sounds play a special role in conveying the rhythm and stimulating motor entrainment to the beat of music. However, the biological roots of this culturally widespread musical practice remain mysterious, despite its fundamental relevance in the sciences and arts and for music-assisted clinical rehabilitation of motor disorders. Here, we show that this convention may exploit a neurophysiological mechanism whereby low-frequency sounds shape neural representations of rhythmic input at the cortical level by boosting selective neural locking to the beat, thus explaining the privileged role of bass sounds in driving people to move along with the musical beat. Cortical activity was captured using electroencephalography (EEG) while participants listened to a regular rhythm or to a relatively complex syncopated rhythm conveyed either by low tones (130 Hz) or high tones (1236.8 Hz). We found that cortical activity at the frequency of the perceived beat is selectively enhanced compared with other frequencies in the EEG spectrum when rhythms are conveyed by bass sounds. This effect is unlikely to arise from early cochlear processes, as revealed by auditory physiological modeling, and was particularly pronounced for the complex rhythm requiring endogenous generation of the beat. Nor is the effect attributable to differences in perceived loudness between low and high tones, as a control experiment manipulating sound intensity alone did not yield similar results. Finally, the privileged role of bass sounds is contingent on allocation of attentional resources to the temporal properties of the stimulus, as revealed by a further control experiment examining the role of a behavioral task. Together, our results provide a neurobiological basis for the convention of using bass instruments to carry the rhythmic foundations of music and to drive people to move to the beat.
32
Zuk NJ, Carney LH, Lalor EC. Preferred Tempo and Low-Audio-Frequency Bias Emerge From Simulated Sub-cortical Processing of Sounds With a Musical Beat. Front Neurosci 2018; 12:349. [PMID: 29896080 PMCID: PMC5987030 DOI: 10.3389/fnins.2018.00349] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2018] [Accepted: 05/07/2018] [Indexed: 11/17/2022] Open
Abstract
Prior research has shown that musical beats are salient at the level of the cortex in humans. Yet below the cortex there is considerable sub-cortical processing that could influence beat perception. Some biases, such as a tempo preference and an audio frequency bias for beat timing, could result from sub-cortical processing. Here, we used models of the auditory-nerve and midbrain-level amplitude modulation filtering to simulate sub-cortical neural activity to various beat-inducing stimuli, and we used the simulated activity to determine the tempo or beat frequency of the music. First, irrespective of the stimulus being presented, the preferred tempo was around 100 beats per minute, which is within the range of tempi where tempo discrimination and tapping accuracy are optimal. Second, sub-cortical processing predicted a stronger influence of lower audio frequencies on beat perception. However, the tempo identification algorithm that was optimized for simple stimuli often failed for recordings of music. For music, the most highly synchronized model activity occurred at a multiple of the beat frequency. Using bottom-up processes alone is insufficient to produce beat-locked activity. Instead, a learned and possibly top-down mechanism that scales the synchronization frequency to derive the beat frequency greatly improves the performance of tempo identification.
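The tempo-identification step described above, picking the periodicity that best explains the activity pattern, can be illustrated with a simple autocorrelation detector. This is a generic sketch in the spirit of the paper, not its model; the BPM search range and envelope construction are assumptions.

```python
import numpy as np

def estimate_tempo(activity, sr, bpm_range=(60, 180)):
    """Return the tempo (BPM) whose lag maximizes the autocorrelation
    of an activity envelope, searched within plausible period bounds."""
    ac = np.correlate(activity, activity, mode='full')[len(activity) - 1:]
    lags = np.arange(len(ac)) / sr                       # lag in seconds
    valid = (lags >= 60.0 / bpm_range[1]) & (lags <= 60.0 / bpm_range[0])
    best_lag = lags[valid][np.argmax(ac[valid])]
    return 60.0 / best_lag

# Impulse train with a 0.6 s period (100 BPM)
sr = 200
env = np.zeros(10 * sr)
env[::120] = 1.0          # one impulse every 120 samples = 0.6 s
tempo = estimate_tempo(env, sr)
```

As the abstract notes, for real music the strongest periodicity often sits at a multiple of the beat frequency, which is why a purely bottom-up detector like this needs an additional scaling step.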
Affiliation(s)
- Nathaniel J. Zuk
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States
- Laurel H. Carney
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States
- Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, United States
- Edmund C. Lalor
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States
- Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, United States
- Del Monte Institute for Neuroscience, University of Rochester Medical Center, Rochester, NY, United States
- Trinity Centre for Bioengineering, Trinity College Dublin, Dublin, Ireland
33
Comstock DC, Balasubramaniam R. Neural responses to perturbations in visual and auditory metronomes during sensorimotor synchronization. Neuropsychologia 2018; 117:55-66. [PMID: 29768189 DOI: 10.1016/j.neuropsychologia.2018.05.013] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2017] [Revised: 04/05/2018] [Accepted: 05/11/2018] [Indexed: 10/16/2022]
Abstract
Tapping in synchrony to an isochronous rhythm involves several key functions of the sensorimotor system, including timing, prediction, and error correction. While auditory sensorimotor synchronization (SMS) has been well studied, much less is known about the mechanisms involved in visual SMS. By comparing error correction in auditory and visual SMS, it can be determined whether the neural mechanisms for detection and correction of synchronization errors are generalized or domain specific. To study this problem, we measured EEG while subjects tapped in synchrony to separate visual and auditory metronomes that both contained small temporal perturbations to induce errors. The metronomes had inter-onset intervals of 600 ms, and the perturbations were of four kinds: ± 66 ms to induce period corrections, and ± 16 ms to induce phase corrections. We hypothesized that, given the less precise nature of visual SMS, error correction to perturbed visual flashing rhythms would be more gradual than with the equivalent auditory perturbations, and that this more gradual error correction would be reflected in the visual evoked potentials. Our findings indicate that the visual system is capable of only gradual phase corrections, even to the larger induced errors, as opposed to the swifter period correction of the auditory system to large induced errors. The EEG data showed that the peak N1 auditory evoked potential is modulated by the size and direction of an induced error, in line with previous research, while the P1 visual evoked potential was affected only by the large late-coming perturbations, which reduced its peak latency. In the error-response EEG data, an error-related negativity (ERN) and related error positivity (Pe) were found only in the auditory +66 condition, while no ERN or Pe was found in any of the visual perturbation conditions. In addition to the ERPs, we performed a dipole source localization and clustering analysis, which indicated that the anterior cingulate was active in error detection of the perturbed stimulus for both auditory and visual conditions, in addition to being involved in producing the ERN and Pe induced by the auditory +66 perturbation. Taken together, these results confirm that the visual system is less well suited to synchronizing with and correcting errors to flashing rhythms, as evidenced by its more gradual error correction. The reduced latency of the P1 to the visual +66 perturbation suggests that the visual system can detect these errors, but that detection does not translate into any meaningful improvement in error correction. This indicates that the visual system is not as tightly coupled to the motor system as the auditory system is for SMS, suggesting that the mechanisms of SMS are not completely domain general.
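Phase correction of the kind probed by the ± 16 ms shifts is classically modeled as a first-order linear correction process: each inter-tap interval equals the metronome period minus a fraction α of the last asynchrony. A textbook sketch, not the authors' fitted model (α, the noiseless setting, and the on-beat first tap are assumptions):

```python
import numpy as np

def simulate_taps(onsets, period, alpha):
    """Linear phase-correction model: the next tap follows the last
    tap by the period minus alpha times the previous asynchrony."""
    taps = [onsets[0]]                     # assume the first tap is on the beat
    for k in range(1, len(onsets)):
        asyn = taps[-1] - onsets[k - 1]    # tap-minus-onset asynchrony
        taps.append(taps[-1] + period - alpha * asyn)
    return np.asarray(taps)

# Metronome with a +16 ms phase shift from the 11th onset onward
period = 0.6
onsets = np.arange(20) * period
onsets[10:] += 0.016
asyn = simulate_taps(onsets, period, alpha=0.5) - onsets
```

After the shift the asynchrony jumps to about -16 ms and then decays geometrically by the factor (1 - α), which is the gradual recovery profile phase correction predicts.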
Affiliation(s)
- Daniel C Comstock
- Cognitive and Information Sciences, University of California, Merced, CA 95343, USA
34
Haumann NT, Vuust P, Bertelsen F, Garza-Villarreal EA. Influence of Musical Enculturation on Brain Responses to Metric Deviants. Front Neurosci 2018; 12:218. [PMID: 29720932 PMCID: PMC5915898 DOI: 10.3389/fnins.2018.00218] [Citation(s) in RCA: 42] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2017] [Accepted: 03/19/2018] [Indexed: 11/13/2022] Open
Abstract
The ability to recognize metric accents is fundamental in both music and language perception. It has been suggested that music listeners prefer rhythms that follow simple binary meters, which are common in Western music. This means that listeners expect odd-numbered beats to be strong and even-numbered beats to be weak. In support of this, studies have shown that listeners exposed to Western music show stronger novelty and incongruity related P3 and irregularity detection related mismatch negativity (MMN) brain responses to attenuated odd- than attenuated even-numbered metric positions. Furthermore, behavioral evidence suggests that music listeners' preferences can be changed by long-term exposure to non-Western rhythms and meters, e.g., by listening to African or Balkan music. In our study, we investigated whether it might be possible to measure effects of music enculturation on neural responses to attenuated tones on specific metric positions. We compared the magnetic mismatch negativity (MMNm) to attenuated beats in a “Western group” of listeners (n = 12) mainly exposed to Western music and a “Bicultural group” of listeners (n = 13) exposed for at least 1 year to Sub-Saharan African music in addition to Western music. We found that in the “Western group” the MMNm was higher in amplitude to deviant tones on odd compared to even metric positions, but not in the “Bicultural group.” In support of this finding, there was also a trend for the “Western group” to rate omitted beats as more surprising on odd than even metric positions, whereas the “Bicultural group” seemed to discriminate less between metric positions in terms of surprise ratings. Also, we observed that the overall latency of the MMNm was significantly shorter in the Bicultural group compared to the Western group. These effects were not biased by possible differences in rhythm perception ability or music training, measured with the Musical Ear Test (MET).
Furthermore, source localization analyses suggest that auditory, inferior temporal, sensory-motor, superior frontal, and parahippocampal regions might be involved in eliciting the MMNm to the metric deviants. These findings suggest that effects of music enculturation can be measured on MMNm responses to attenuated tones on specific metric positions.
Affiliation(s)
- Niels T Haumann
- Department of Aesthetics and Communication (Musicology), Faculty of Arts, Aarhus University, Aarhus, Denmark
- Department of Clinical Medicine, Center for Music in the Brain, Royal Academy of Music, Aarhus University, Aarhus, Denmark
- Peter Vuust
- Department of Clinical Medicine, Center for Music in the Brain, Royal Academy of Music, Aarhus University, Aarhus, Denmark
- Freja Bertelsen
- Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
- Department of Nuclear Medicine and PET Centre, Aarhus University Hospital, Aarhus, Denmark
- Eduardo A Garza-Villarreal
- Department of Clinical Medicine, Center for Music in the Brain, Royal Academy of Music, Aarhus University, Aarhus, Denmark
- Clinical Research Division, Instituto Nacional de Psiquiatría Ramón de la Fuente Muñiz (INPRFM), Mexico City, Mexico
- Department of Neurology, Faculty of Medicine and University Hospital, Universidad Autonoma de Nuevo Leon, Monterrey, Mexico
35
Nozaradan S, Schönwiesner M, Keller PE, Lenc T, Lehmann A. Neural bases of rhythmic entrainment in humans: critical transformation between cortical and lower-level representations of auditory rhythm. Eur J Neurosci 2018; 47:321-332. [PMID: 29356161 DOI: 10.1111/ejn.13826] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2017] [Revised: 01/12/2018] [Accepted: 01/15/2018] [Indexed: 11/29/2022]
Abstract
The spontaneous ability to entrain to meter periodicities is central to music perception and production across cultures. There is increasing evidence that this ability involves selective neural responses to meter-related frequencies. This phenomenon has been observed in the human auditory cortex, yet it could be the product of evolutionarily older lower-level properties of brainstem auditory neurons, as suggested by recent recordings from rodent midbrain. We addressed this question by taking advantage of a new method to simultaneously record human EEG activity originating from cortical and lower-level sources, in the form of slow (< 20 Hz) and fast (> 150 Hz) responses to auditory rhythms. Cortical responses showed increased amplitudes at meter-related frequencies compared to meter-unrelated frequencies, regardless of the prominence of the meter-related frequencies in the modulation spectrum of the rhythmic inputs. In contrast, frequency-following responses showed increased amplitudes at meter-related frequencies only in rhythms with prominent meter-related frequencies in the input but not for a more complex rhythm requiring more endogenous generation of the meter. This interaction with rhythm complexity suggests that the selective enhancement of meter-related frequencies does not fully rely on subcortical auditory properties, but is critically shaped at the cortical level, possibly through functional connections between the auditory cortex and other, movement-related, brain structures. This process of temporal selection would thus enable endogenous and motor entrainment to emerge with substantial flexibility and invariance with respect to the rhythmic input in humans in contrast with non-human animals.
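The frequency-tagging readout used in this line of work, comparing spectral amplitude at meter-related versus meter-unrelated frequencies, can be sketched as follows. The simulated signal, sampling rate, and frequencies are illustrative, not the study's stimuli or data.

```python
import numpy as np

def tagged_amplitudes(eeg, sr, meter_freqs, other_freqs):
    """FFT amplitude at each frequency of interest, contrasting
    meter-related bins against meter-unrelated bins."""
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / sr)

    def amp(f):
        return spectrum[np.argmin(np.abs(freqs - f))]

    return (np.mean([amp(f) for f in meter_freqs]),
            np.mean([amp(f) for f in other_freqs]))

# Simulated response: strong 1.25 Hz beat component, weaker 1.875 Hz one
sr, dur = 128, 32                 # 32 s of data gives 1/32 Hz resolution
t = np.arange(sr * dur) / sr
eeg = np.sin(2 * np.pi * 1.25 * t) + 0.3 * np.sin(2 * np.pi * 1.875 * t)
beat_amp, other_amp = tagged_amplitudes(eeg, sr, [1.25], [1.875])
```

The paper's contrast is essentially this comparison, computed separately for slow cortical responses and fast frequency-following responses.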
Affiliation(s)
- Sylvie Nozaradan
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, Sydney, NSW, 2751, Australia
- Institute of Neuroscience (IONS), Université catholique de Louvain (UCL), Louvain, Belgium
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Marc Schönwiesner
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Center for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Faculty of Psychology, Université de Montréal, Montreal, QC, Canada
- Peter E Keller
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, Sydney, NSW, 2751, Australia
- Tomas Lenc
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, Sydney, NSW, 2751, Australia
- Alexandre Lehmann
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Center for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Faculty of Psychology, Université de Montréal, Montreal, QC, Canada
- Otolaryngology Department, Faculty of Medicine, McGill University Hospital, Montreal, QC, Canada
36
Hove MJ, Gravel N, Spencer RMC, Valera EM. Finger tapping and pre-attentive sensorimotor timing in adults with ADHD. Exp Brain Res 2017; 235:3663-3672. [PMID: 28913612 PMCID: PMC5671889 DOI: 10.1007/s00221-017-5089-y] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2016] [Accepted: 09/10/2017] [Indexed: 10/18/2022]
Abstract
Sensorimotor timing deficits are considered central to attention-deficit/hyperactivity disorder (ADHD). However, the tasks establishing timing impairments often involve interconnected processes, including low-level sensorimotor timing and higher level executive processes such as attention. Thus, the source of timing deficits in ADHD remains unclear. Low-level sensorimotor timing can be isolated from higher level processes in a finger-tapping task that examines the motor response to unexpected shifts of metronome onsets. In this study, adults with ADHD and ADHD-like symptoms (n = 25) and controls (n = 26) performed two finger-tapping tasks. The first assessed tapping variability in a standard tapping task (metronome-paced and unpaced). In the other task, participants tapped along with a metronome that contained unexpected shifts (±15, 50 ms); the timing adjustment on the tap following the shift captures pre-attentive sensorimotor timing (i.e., phase correction) and thus should be free of potential higher order confounds (e.g., attention). In the standard tapping task, as expected, the ADHD group had higher timing variability in both paced and unpaced tapping. However, in the pre-attentive task, performance did not differ between the ADHD and control groups. Together, results suggest that low-level sensorimotor timing and phase correction are largely preserved in ADHD and that some timing impairments observed in ADHD may stem from higher level factors (such as sustained attention).
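The standard tapping measures this abstract refers to, variability of inter-tap intervals and of tap-metronome asynchronies, reduce to simple statistics. A minimal sketch with made-up tap times, not study data:

```python
import numpy as np

def iti_variability(tap_times):
    """Sample standard deviation of inter-tap intervals (ITIs)."""
    return float(np.std(np.diff(tap_times), ddof=1))

def asynchrony_stats(tap_times, onsets):
    """Mean and SD of tap-minus-metronome asynchronies (paced tapping)."""
    asyn = np.asarray(tap_times) - np.asarray(onsets)
    return float(np.mean(asyn)), float(np.std(asyn, ddof=1))

onsets = np.arange(30) * 0.6                        # 600 ms metronome
steady = onsets - 0.03                              # constant 30 ms lead
noisy = steady + np.random.default_rng(1).normal(0, 0.02, 30)
```

Higher ITI and asynchrony SDs in one group versus another is the kind of group difference reported for the standard (non-perturbed) task.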
Affiliation(s)
- Michael J Hove
- Department of Psychiatry, Harvard Medical School, Boston, USA.
- Department of Psychological Science, Fitchburg State University, 160 Pearl Street, Fitchburg, MA, 01420, USA.
- Nickolas Gravel
- Department of Psychological and Brain Sciences, University of Massachusetts, Amherst, USA
- Rebecca M C Spencer
- Department of Psychological and Brain Sciences, University of Massachusetts, Amherst, USA
- Neuroscience and Behavior Program, University of Massachusetts, Amherst, USA
- Eve M Valera
- Department of Psychiatry, Harvard Medical School, Boston, USA
37
Synchronization to metrical levels in music depends on low-frequency spectral components and tempo. Psychol Res 2017; 82:1195-1211. [DOI: 10.1007/s00426-017-0894-2] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2016] [Accepted: 07/11/2017] [Indexed: 10/19/2022]
38
Bridwell DA, Leslie E, McCoy DQ, Plis SM, Calhoun VD. Cortical Sensitivity to Guitar Note Patterns: EEG Entrainment to Repetition and Key. Front Hum Neurosci 2017; 11:90. [PMID: 28298889 PMCID: PMC5331856 DOI: 10.3389/fnhum.2017.00090] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2016] [Accepted: 02/14/2017] [Indexed: 11/13/2022] Open
Abstract
Music is ubiquitous throughout recent human culture, and many individuals have an innate ability to appreciate and understand music. Our appreciation of music likely emerges from the brain's ability to process a series of repeated complex acoustic patterns. In order to understand these processes further, cortical responses were measured to a series of guitar notes presented with or without a musical pattern. ERP responses to individual notes were measured using a 24-electrode Bluetooth mobile EEG system (Smarting mBrainTrain) while 13 healthy non-musicians listened to structured (i.e., within musical keys and with repetition) or random sequences of guitar notes for 10 min each. We demonstrate an increased amplitude of the ERP that appears ~200 ms after notes presented within the musical sequence. This amplitude difference between random and patterned notes likely reflects individuals' cortical sensitivity to guitar note patterns. These amplitudes were compared to ERP responses to a rare note embedded within a stream of frequent notes to determine whether the sensitivity to complex musical structure overlaps with the sensitivity to simple irregularities reflected in traditional auditory oddball experiments. Response amplitudes at the negative peak at ~175 ms are statistically correlated with the mismatch negativity (MMN) response measured to a rare note presented among a series of frequent notes (i.e., in a traditional oddball sequence), but responses at the subsequent positive peak at ~200 ms do not show a statistical relationship with the P300 response. Thus, the sensitivity to musical structure identified with 4 Hz note patterns appears somewhat distinct from the sensitivity to statistical regularities reflected in the traditional "auditory oddball" sequence. Overall, we suggest that this is a promising approach to examining individuals' sensitivity to complex acoustic patterns, which may overlap with higher-level cognitive processes, including language.
Affiliation(s)
- Dakarai Q McCoy
- The Mind Research Network, Albuquerque, NM, USA
- Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM, USA
- The MARC Program, University of New Mexico, Albuquerque, NM, USA
- Vince D Calhoun
- The Mind Research Network, Albuquerque, NM, USA
- Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM, USA
39
Comparison of DP3 Signals Evoked by Comfortable 3D Images and 2D Images - an Event-Related Potential Study using an Oddball Task. Sci Rep 2017; 7:43110. [PMID: 28225044 PMCID: PMC5320480 DOI: 10.1038/srep43110] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2016] [Accepted: 01/19/2017] [Indexed: 11/21/2022] Open
Abstract
Horizontal binocular disparity is a critical factor in the visual fatigue induced by watching stereoscopic TVs. Stereoscopic images whose disparity lies within the ‘comfort zones’ and which remain still in the depth direction are considered as comfortable to viewers as 2D images. However, the difference in brain activity between processing such comfortable stereoscopic images and 2D images remains little studied. The DP3 (differential P3) signal refers to an event-related potential (ERP) component indicating attentional processes, which is typically evoked by rare target stimuli among standard stimuli in an oddball task. The present study found that the DP3 signal elicited by comfortable 3D images exhibits delayed peak latency and enhanced peak amplitude over the anterior and central scalp regions compared to 2D images. This finding suggests that, compared to the processing of 2D images, more attentional resources are involved in the processing of stereoscopic images even when they are subjectively comfortable.
40
Huberth M, Fujioka T. Neural representation of a melodic motif: Effects of polyphonic contexts. Brain Cogn 2017; 111:144-155. [DOI: 10.1016/j.bandc.2016.11.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2016] [Revised: 09/26/2016] [Accepted: 11/11/2016] [Indexed: 11/28/2022]
41
Rhythm judgments reveal a frequency asymmetry in the perception and neural coding of sound synchrony. Proc Natl Acad Sci U S A 2017; 114:1201-1206. [PMID: 28096408 DOI: 10.1073/pnas.1615669114] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
In modern Western music, melody is commonly conveyed by pitch changes in the highest-register voice, whereas meter or rhythm is often carried by instruments with lower pitches. An intriguing and recently suggested possibility is that the custom of assigning rhythmic functions to lower-pitch instruments may have emerged because of fundamental properties of the auditory system that result in superior time encoding for low pitches. Here we compare rhythm and synchrony perception between low- and high-frequency tones, using both behavioral and EEG techniques. Both methods were consistent in showing no superiority in time encoding for low over high frequencies. However, listeners were consistently more sensitive to timing differences between two nearly synchronous tones when the high-frequency tone followed the low-frequency tone than vice versa. The results demonstrate no superiority of low frequencies in timing judgments but reveal a robust asymmetry in the perception and neural coding of synchrony that reflects greater tolerance for delays of low- relative to high-frequency sounds than vice versa. We propose that this asymmetry exists to compensate for inherent and variable time delays in cochlear processing, as well as the acoustical properties of sound sources in the natural environment, thereby providing veridical perceptual experiences of simultaneity.
42
Nozaradan S, Mouraux A, Jonas J, Colnat-Coulbois S, Rossion B, Maillard L. Intracerebral evidence of rhythm transform in the human auditory cortex. Brain Struct Funct 2016; 222:2389-2404. [PMID: 27990557 DOI: 10.1007/s00429-016-1348-0] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2016] [Accepted: 12/06/2016] [Indexed: 01/23/2023]
Abstract
Musical entrainment is shared by all human cultures and the perception of a periodic beat is a cornerstone of this entrainment behavior. Here, we investigated whether beat perception might have its roots in the earliest stages of auditory cortical processing. Local field potentials were recorded from 8 patients implanted with depth-electrodes in Heschl's gyrus and the planum temporale (55 recording sites in total), usually considered as human primary and secondary auditory cortices. Using a frequency-tagging approach, we show that both low-frequency (<30 Hz) and high-frequency (>30 Hz) neural activities in these structures faithfully track auditory rhythms through frequency-locking to the rhythm envelope. A selective gain in amplitude of the response frequency-locked to the beat frequency was observed for the low-frequency activities but not for the high-frequency activities, and was sharper in the planum temporale, especially for the more challenging syncopated rhythm. Hence, this gain process is not systematic in all activities produced in these areas and depends on the complexity of the rhythmic input. Moreover, this gain was disrupted when the rhythm was presented at fast speed, revealing low-pass response properties which could account for the propensity to perceive a beat only within the musical tempo range. Together, these observations show that, even though part of these neural transforms of rhythms could already take place in subcortical auditory processes, the earliest auditory cortical processes shape the neural representation of rhythmic inputs in favor of the emergence of a periodic beat.
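The frequency-tagging logic described in this abstract can be sketched in a few lines: activity that is frequency-locked to a rhythm shows a spectral peak at the beat frequency, which can be read off the FFT of the recording. The signal below is synthetic (a 2.4 Hz "beat-locked" sine plus noise, an assumed stand-in for neural data); this is an illustration of the general approach, not the authors' analysis pipeline.

```python
import numpy as np

def frequency_tagged_amplitude(signal, fs, target_hz):
    """Amplitude of the FFT component closest to target_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - target_hz))]

# Synthetic "neural" signal: activity frequency-locked to a 2.4 Hz beat
# (a typical musical beat rate), plus broadband noise.
fs = 250.0                      # sampling rate, Hz
t = np.arange(0, 60, 1.0 / fs)  # 60 s recording
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 2.4 * t) + 0.5 * rng.standard_normal(len(t))

beat_amp = frequency_tagged_amplitude(signal, fs, 2.4)
off_amp = frequency_tagged_amplitude(signal, fs, 3.1)  # control frequency
assert beat_amp > 5 * off_amp  # clear peak at the tagged beat frequency
```

With a 60 s window the spectral resolution is 1/60 Hz, so both the tagged and control frequencies fall exactly on FFT bins, avoiding leakage.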
Affiliation(s)
- Sylvie Nozaradan
- Institute of Neuroscience (IoNS), Université catholique de Louvain (UCL), 53 Avenue Mounier, 1200 Brussels, Belgium; The MARCS Institute, Western Sydney University, Sydney, NSW 2214, Australia; International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, H3C 3J7, Canada
- André Mouraux
- Institute of Neuroscience (IoNS), Université catholique de Louvain (UCL), 53 Avenue Mounier, 1200 Brussels, Belgium
- Jacques Jonas
- Institute of Neuroscience (IoNS), Université catholique de Louvain (UCL), 53 Avenue Mounier, 1200 Brussels, Belgium; Service de Neurologie, Centre Hospitalier Universitaire de Nancy, 54035 Nancy, France; CRAN UMR 7039, CNRS, Université de Lorraine, 54035 Nancy, France
- Sophie Colnat-Coulbois
- Neurosurgery Department, Centre Hospitalier Universitaire de Nancy, 54035 Nancy, France
- Bruno Rossion
- Institute of Neuroscience (IoNS), Université catholique de Louvain (UCL), 53 Avenue Mounier, 1200 Brussels, Belgium; Service de Neurologie, Centre Hospitalier Universitaire de Nancy, 54035 Nancy, France; Psychological Sciences Research Institute, Université catholique de Louvain (UCL), 1348 Louvain-la-Neuve, Belgium
- Louis Maillard
- Service de Neurologie, Centre Hospitalier Universitaire de Nancy, 54035 Nancy, France; CRAN UMR 7039, CNRS, Université de Lorraine, 54035 Nancy, France
43
Harris R, van Kranenburg P, de Jong BM. Behavioral Quantification of Audiomotor Transformations in Improvising and Score-Dependent Musicians. PLoS One 2016; 11:e0166033. [PMID: 27835631] [PMCID: PMC5105996] [DOI: 10.1371/journal.pone.0166033]
Abstract
The historically developed practice of learning to play a music instrument from notes instead of by imitation or improvisation makes it possible to contrast two types of skilled musicians, characterized not only by dissimilar performance practices but also by disparate methods of audiomotor learning. In a recent fMRI study comparing these two groups of musicians while they either imagined playing along with a recording or covertly assessed the quality of the performance, we observed activation of a right-hemisphere network of posterior superior parietal and dorsal premotor cortices in improvising musicians, indicating more efficient audiomotor transformation. In the present study, we investigated the detailed performance characteristics underlying the ability of both groups of musicians to replicate music on the basis of aural perception alone. Twenty-two classically trained improvising and score-dependent musicians listened to short, unfamiliar two-part excerpts presented over headphones. They played along or replicated the excerpts by ear on a digital piano, either with or without aural feedback. In addition, they were asked to harmonize or transpose some of the excerpts, either to a different key or to the relative minor. MIDI recordings of their performances were compared with recordings of the aural model. Concordance was expressed as an audiomotor alignment score computed with the help of music information retrieval algorithms. Significant differences in alignment scores were found when contrasting groups, voices, and tasks. The present study demonstrates the superior ability of improvising musicians to replicate both the pitch and rhythm of aurally perceived music at the keyboard, not only in the original key but also in other tonalities. Taken together with the enhanced activation of the right dorsal frontoparietal network found in our previous fMRI study, these results underscore the conclusion that the practice of improvising music can be associated with enhanced audiomotor transformation in response to aurally perceived music.
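The study's alignment score came from music information retrieval algorithms; as a toy stand-in, a normalized Levenshtein (edit-distance) similarity between the model's and the performer's MIDI pitch sequences captures the same idea. The pitch lists below are hypothetical, not data from the study.

```python
def alignment_score(model, performance):
    """Normalized similarity (0-1) between two MIDI pitch sequences,
    computed from the Levenshtein (edit) distance."""
    m, n = len(model), len(performance)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if model[i - 1] == performance[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 1.0 - d[m][n] / max(m, n)

# Aural model vs. a replication with one wrong note (MIDI note numbers).
model = [60, 62, 64, 65, 67]
performance = [60, 62, 63, 65, 67]
print(round(alignment_score(model, performance), 2))  # 0.8
```

A real implementation would also weight note onsets and durations so that rhythm, not just pitch, contributes to the score.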
Affiliation(s)
- Robert Harris
- Department of Neurology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- BCN Neuroimaging Center, University of Groningen, Groningen, The Netherlands
- Prince Claus Conservatoire, Hanze University of Applied Sciences, Groningen, The Netherlands
- Bauke M. de Jong
- Department of Neurology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- BCN Neuroimaging Center, University of Groningen, Groningen, The Netherlands
44
Wesolowski BC, Hofmann A. There's More to Groove than Bass in Electronic Dance Music: Why Some People Won't Dance to Techno. PLoS One 2016; 11:e0163938. [PMID: 27798645] [PMCID: PMC5087899] [DOI: 10.1371/journal.pone.0163938]
Abstract
The purpose of this study was to explore the relationship between audio descriptors for groove-based electronic dance music (EDM) and raters' perceived cognitive, affective, and psychomotor responses. From 198 musical excerpts (length: 15 sec.) representing 11 subgenres of EDM, 19 low-level audio feature descriptors were extracted. A principal component analysis of the feature vectors indicated that the musical excerpts could effectively be classified using five complex measures, describing the rhythmical properties of (a) the high-frequency band, (b) the mid-frequency band, and (c) the low-frequency band, as well as overall fluctuations in (d) dynamics and (e) timbres. Using these five complex audio measures, four meaningful clusters of the EDM excerpts emerged with distinct musical attributes comprising music with: (a) isochronous bass and static timbres, (b) isochronous bass with fluctuating dynamics and rhythmical variations in the mid-frequency range, (c) non-isochronous bass and fluctuating timbres, and (d) non-isochronous bass with rhythmical variations in the high frequencies. Raters (N = 99) were each asked to respond to four musical excerpts using a four-point Likert-type scale consisting of items representing cognitive (n = 9), affective (n = 9), and psychomotor (n = 3) domains. Musical excerpts falling under the cluster of "non-isochronous bass with rhythmical variations in the high frequencies" demonstrated the overall highest composite scores as evaluated by the raters. Musical samples falling under the cluster of "isochronous bass with static timbres" demonstrated the overall lowest composite scores as evaluated by the raters. Moreover, music preference was shown to significantly affect the systematic patterning of raters' responses for those with a musical preference for "contemporary" music, "sophisticated" music, and "intense" music.
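The dimensionality-reduction step in this abstract (19 descriptors collapsed to 5 components via PCA) can be sketched with a minimal SVD-based PCA. The feature matrix below is random noise standing in for the study's 198 x 19 descriptor matrix; it illustrates the technique only, not the reported components.

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project standardized feature vectors onto their top principal
    components via SVD (a minimal PCA with no external dependencies)."""
    X = (features - features.mean(axis=0)) / features.std(axis=0)
    # Rows of Vt are the principal axes, ordered by singular value.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T
    explained = (S**2 / np.sum(S**2))[:n_components]
    return scores, explained

# Hypothetical stand-in for the study's data: 198 excerpts x 19
# low-level audio descriptors (random numbers, for illustration only).
rng = np.random.default_rng(1)
X = rng.standard_normal((198, 19))
scores, explained = pca_reduce(X, n_components=5)
print(scores.shape)  # (198, 5)
```

The component scores could then feed a clustering step (e.g., k-means) to recover excerpt groups like the four clusters described above.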
Affiliation(s)
- Brian C. Wesolowski
- Hugh Hodgson School of Music, The University of Georgia, Athens, GA, United States of America
- Alex Hofmann
- Austrian Research Institute for Artificial Intelligence (OFAI), Freyung 6/6, A-1010 Vienna, Austria
- Institute of Music Acoustics, The University of Performing Arts, Anton-von-Webern-Platz 1, 1030 Vienna, Austria
45
Enhanced brainstem and cortical encoding of sound during synchronized movement. Neuroimage 2016; 142:231-240. [PMID: 27397623] [DOI: 10.1016/j.neuroimage.2016.07.015]
Abstract
Movement to a steady beat has been widely studied as a model of alignment of motor outputs on sensory inputs. However, how the encoding of sensory inputs is shaped during synchronized movements along the sensory pathway remains unknown. To investigate this, we simultaneously recorded brainstem and cortical electro-encephalographic activity while participants listened to periodic amplitude-modulated tones. Participants listened either without moving or while tapping in sync on every second beat. Cortical responses were identified at the envelope modulation rate (beat frequency), whereas brainstem responses were identified at the partials frequencies of the chord and at their modulation by the beat frequency (sidebands). During sensorimotor synchronization, cortical responses at beat frequency were larger than during passive listening. Importantly, brainstem responses were also enhanced, with a selective amplification of the sidebands, in particular at the lower-pitched tone of the chord, and no significant correlation with electromyographic measures at tapping frequency. These findings provide the first evidence of an online gain in the cortical and subcortical encoding of sounds during synchronized movement, selective to behavior-relevant sound features. Moreover, the frequency-tagging method, which isolates concurrent brainstem and cortical activities even during actual movements, appears promising for revealing coordinated processes along the human auditory pathway.
46
Rodger MWM, Craig CM. Beyond the Metronome: Auditory Events and Music May Afford More than Just Interval Durations as Gait Cues in Parkinson's Disease. Front Neurosci 2016; 10:272. [PMID: 27378841] [PMCID: PMC4906221] [DOI: 10.3389/fnins.2016.00272]
Affiliation(s)
- Cathy M Craig
- School of Psychology, Queen's University Belfast, Belfast, UK
47

48
Chang A, Bosnyak DJ, Trainor LJ. Unpredicted Pitch Modulates Beta Oscillatory Power during Rhythmic Entrainment to a Tone Sequence. Front Psychol 2016; 7:327. [PMID: 27014138] [PMCID: PMC4782565] [DOI: 10.3389/fpsyg.2016.00327]
Abstract
Extracting temporal regularities in external stimuli in order to predict upcoming events is an essential aspect of perception. Fluctuations in induced power of beta band (15–25 Hz) oscillations in auditory cortex are involved in predictive timing during rhythmic entrainment, but whether such fluctuations are affected by prediction in the spectral (frequency/pitch) domain remains unclear. We tested whether unpredicted (i.e., unexpected) pitches in a rhythmic tone sequence modulate beta band activity by recording EEG while participants passively listened to isochronous auditory oddball sequences with occasional unpredicted deviant pitches at two different presentation rates. The results showed that the power in low-beta (15–20 Hz) was larger around 200–300 ms following deviant tones compared to standard tones, and this effect was larger when the deviant tones were less predicted. Our results suggest that the induced beta power activities in auditory cortex are consistent with a role in sensory prediction of both “when” (timing) upcoming sounds will occur as well as the prediction precision error of “what” (spectral content in this case). We suggest, further, that both timing and content predictions may co-modulate beta oscillations via attention. These findings extend earlier work on neural oscillations by investigating the functional significance of beta oscillations for sensory prediction. The findings help elucidate the functional significance of beta oscillations in perception.
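The core comparison in this abstract, low-beta (15-20 Hz) power following deviant versus standard tones, can be illustrated with a simple FFT-based band-power measure. The two "epochs" below are synthetic sinusoids with assumed amplitudes; real induced-power analyses would use time-frequency decomposition of EEG epochs, not this shortcut.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean squared spectral amplitude within the [lo, hi] Hz band."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return float(np.mean(spec[band] ** 2))

# Synthetic 1 s "epochs": a deviant tone evoking a larger low-beta
# response than a standard tone (amplitudes are hypothetical).
fs = 500.0
t = np.arange(0, 1, 1 / fs)
standard = 0.2 * np.sin(2 * np.pi * 17 * t)
deviant = 0.5 * np.sin(2 * np.pi * 17 * t)
print(band_power(deviant, fs, 15, 20) > band_power(standard, fs, 15, 20))  # True
```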
Affiliation(s)
- Andrew Chang
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada
- Dan J Bosnyak
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada; McMaster Institute for Music and the Mind, McMaster University, Hamilton, ON, Canada
- Laurel J Trainor
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, ON, Canada; McMaster Institute for Music and the Mind, McMaster University, Hamilton, ON, Canada; Rotman Research Institute, Baycrest Hospital, Toronto, ON, Canada
49
Fitch WT. Dance, Music, Meter and Groove: A Forgotten Partnership. Front Hum Neurosci 2016; 10:64. [PMID: 26973489] [PMCID: PMC4771755] [DOI: 10.3389/fnhum.2016.00064]
Abstract
I argue that core aspects of musical rhythm, especially “groove” and syncopation, can only be fully understood in the context of their origins in the participatory social experience of dance. Musical meter is first considered in the context of bodily movement. I then offer an interpretation of the pervasive but somewhat puzzling phenomenon of syncopation in terms of acoustic emphasis on certain offbeat components of the accompanying dance style. The reasons for the historical tendency of many musical styles to divorce themselves from their dance-based roots are also briefly considered. To the extent that musical rhythms only make sense in the context of bodily movement, researchers interested in ecologically valid approaches to music cognition should make a more concerted effort to extend their analyses to dance, particularly if we hope to understand the cognitive constraints underlying rhythmic aspects of music like meter and groove.
Affiliation(s)
- W Tecumseh Fitch
- Department of Cognitive Biology, University of Vienna, Vienna, Austria
50
Novembre G, Varlet M, Muawiyath S, Stevens CJ, Keller PE. The E-music box: an empirical method for exploring the universal capacity for musical production and for social interaction through music. R Soc Open Sci 2015; 2:150286. [PMID: 26715993] [PMCID: PMC4680608] [DOI: 10.1098/rsos.150286]
Abstract
Humans are assumed to have a natural, universal predisposition for making music and for musical interaction. Research in this domain is, however, typically conducted with musically trained individuals, and is therefore confounded with expertise. Here, we present a rediscovered and updated invention, the E-music box, which we establish as an empirical method to investigate musical production and interaction in everyone. The E-music box transforms rotatory cyclical movements into pre-programmable digital musical output, with tempo varying according to rotation speed. The user's movements are coded as continuous oscillatory data, which can be analysed using linear or nonlinear analytical tools. We conducted a proof-of-principle experiment to demonstrate that, using this method, pairs of non-musically trained individuals can interact according to conventional musical practices (leader/follower roles and lower-pitch dominance). The results suggest that the E-music box brings 'active' and 'interactive' musical capacities within everyone's reach. We discuss the potential of this method for exploring the universal predisposition for music making and interaction in developmental and cross-cultural contexts, and for neurologic musical therapy and rehabilitation.
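The device's core mapping, rotation speed to musical tempo, can be sketched numerically: estimate angular velocity from sampled crank angles, then convert rotations per second to beats per minute. The beats-per-rotation constant and the sampled trajectory below are hypothetical choices for illustration, not the E-music box's actual firmware behavior.

```python
import numpy as np

def rotation_to_tempo(angles_rad, fs, beats_per_rotation=4):
    """Instantaneous tempo (BPM) from sampled crank angles.

    Angular velocity (rotations/s) is estimated from the unwrapped
    angle; each full rotation is assumed to trigger a fixed number
    of beats (a hypothetical mapping, for illustration).
    """
    rotations_per_s = np.gradient(np.unwrap(angles_rad), 1.0 / fs) / (2 * np.pi)
    return 60.0 * beats_per_rotation * rotations_per_s

# A crank turned steadily at 0.5 rotations/s -> 120 BPM at 4 beats/rotation.
fs = 100.0
t = np.arange(0, 2, 1 / fs)
angles = 2 * np.pi * 0.5 * t
tempo = rotation_to_tempo(angles, fs)
print(round(float(np.median(tempo)), 1))  # 120.0
```

Because the angle is unwrapped before differentiation, the estimate stays continuous across the 0/2π boundary, which is what makes the movement data usable as a continuous oscillatory signal.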
Affiliation(s)
- Giacomo Novembre
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- Shujau Muawiyath
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- Catherine J. Stevens
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- School of Social Sciences and Psychology, Western Sydney University, Penrith, New South Wales 2751, Australia
- Peter E. Keller
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia