1. Fiorin G, Delfitto D. Syncopation as structure bootstrapping: the role of asymmetry in rhythm and language. Front Psychol 2024; 15:1304485. PMID: 38440243; PMCID: PMC10911290; DOI: 10.3389/fpsyg.2024.1304485.
Abstract
Syncopation - the occurrence of a musical event on a metrically weak position preceding a rest on a metrically strong position - represents an important challenge in the study of the mapping between rhythm and meter. In this contribution, we present the hypothesis that syncopation is an effective strategy to elicit the bootstrapping of a multi-layered, hierarchically organized metric structure from a linear rhythmic surface. The hypothesis is inspired by a parallel with the problem of linearization in natural language syntax, which is the problem of how hierarchically organized phrase-structure markers are mapped onto linear sequences of words. The hypothesis has important consequences for the role of meter in music perception and cognition and, more particularly, for its role in the relationship between rhythm and bodily entrainment.
Affiliation(s)
- Gaetano Fiorin
- Department of Humanities, University of Trieste, Trieste, Italy
- Denis Delfitto
- Department of Cultures and Civilizations, University of Verona, Verona, Italy
2. Vathagavorakul R, Gonjo T, Homma M. The influence of sound waves and musical experiences on movement coordination with beats. Hum Mov Sci 2024; 93:103170. PMID: 38043482; DOI: 10.1016/j.humov.2023.103170.
Abstract
Synchronizing movement with external stimuli is important for musicians and athletes. This study investigated the effects of sound characteristics, namely sound with harmonics (square wave) and without harmonics (sine wave), and of expertise in sports and music on rhythmic ability. Thirty-two university students participated in the study: sixteen music education (ME) and sixteen physical education (PE) majors. They performed finger-tapping tasks at beat rates of 1, 2, and 3 Hz, tapping in time with sine- and square-wave beats produced by a metronome. The relative phase angle between finger taps and metronome sound onsets was calculated using circular statistics. The results showed that the type of wave and musical experience affected participants' rhythmic ability. Our study highlights the importance of the wave type for rhythmic ability, especially for participants with no musical background; the square wave is recommended for athletes learning to synchronize their movements with beats.
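The circular-statistics step described in this abstract can be illustrated with a short sketch. This is not the authors' code: the tap times, beat period, and random seed below are invented for illustration. Synchrony is summarized by the circular mean direction and the mean resultant length R.

```python
import numpy as np

def relative_phase(tap_times, beat_times, period):
    """Relative phase angle (radians) of each tap with respect to the
    nearest metronome onset, given the inter-beat period in seconds."""
    taps = np.asarray(tap_times, dtype=float)
    beats = np.asarray(beat_times, dtype=float)
    # signed asynchrony of each tap to its nearest beat onset
    async_ = taps[:, None] - beats[None, :]
    nearest = async_[np.arange(len(taps)), np.abs(async_).argmin(axis=1)]
    return 2 * np.pi * nearest / period

def circular_mean_and_r(angles):
    """Circular mean direction and mean resultant length R in [0, 1];
    R close to 1 means tightly synchronized tapping."""
    z = np.exp(1j * np.asarray(angles)).mean()
    return np.angle(z), np.abs(z)

# Illustrative data: taps at a 2 Hz beat (period 0.5 s),
# slightly anticipating each onset (negative mean asynchrony)
beats = np.arange(0, 5, 0.5)
taps = beats + np.random.default_rng(0).normal(-0.02, 0.01, beats.size)
phi = relative_phase(taps, beats, period=0.5)
mu, r = circular_mean_and_r(phi)
```

A mean angle near zero with R close to 1 indicates tight, unbiased synchronization; a negative mean angle, as in this sketch, corresponds to anticipatory tapping.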
Affiliation(s)
- Ravisara Vathagavorakul
- Division of Health and Physical Education, Department of Curriculum and Instruction, Faculty of Education, Chulalongkorn University, Bangkok, Thailand.
- Tomohiro Gonjo
- School of Energy, Geoscience, Infrastructure and Society, Institute for Life and Earth Sciences, Heriot-Watt University, Edinburgh, UK
- Miwako Homma
- Institute of Health and Sport Sciences, University of Tsukuba, Japan
3. Criscuolo A, Schwartze M, Prado L, Ayala Y, Merchant H, Kotz SA. Macaque monkeys and humans sample temporal regularities in the acoustic environment. Prog Neurobiol 2023; 229:102502. PMID: 37442410; DOI: 10.1016/j.pneurobio.2023.102502.
Abstract
Many animal species show comparable abilities to detect basic rhythms and produce rhythmic behavior. Yet, the capacities to process complex rhythms and synchronize rhythmic behavior appear to be species-specific: vocal learning animals can, but some primates might not. This discrepancy is of high interest as there is a putative link between rhythm processing and the development of sophisticated sensorimotor behavior in humans. Do our closest ancestors show comparable endogenous dispositions to sample the acoustic environment in the absence of task instructions and training? We recorded EEG from macaque monkeys and humans while they passively listened to isochronous equitone sequences. Individual- and trial-level analyses showed that macaque monkeys' and humans' delta-band neural oscillations encoded and tracked the timing of auditory events. Further, mu- (8-15 Hz) and beta-band (12-20 Hz) oscillations revealed the superimposition of varied accentuation patterns on a subset of trials. These observations suggest convergence in the encoding and dynamic attending of temporal regularities in the acoustic environment, bridging a gap in the phylogenesis of rhythm cognition.
Affiliation(s)
- Antonio Criscuolo
- Department of Neuropsychology & Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands
- Michael Schwartze
- Department of Neuropsychology & Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands
- Luis Prado
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, 76230 Queretaro, QRO, Mexico
- Yaneri Ayala
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, 76230 Queretaro, QRO, Mexico
- Hugo Merchant
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, 76230 Queretaro, QRO, Mexico
- Sonja A Kotz
- Department of Neuropsychology & Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, the Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
4. Radchenko G, Demareva V, Gromov K, Zayceva I, Rulev A, Zhukova M, Demarev A. Neural mechanisms of temporal and rhythmic structure processing in non-musicians. Front Neurosci 2023; 17:1124038. PMID: 37234263; PMCID: PMC10206032; DOI: 10.3389/fnins.2023.1124038.
Abstract
Music is increasingly used as a therapeutic tool in rehabilitation medicine and psychophysiology. One of its key components is its temporal organization. The neurocognitive processes underlying the perception of musical meter at different tempi were studied using the event-related potentials technique. The study involved 20 volunteers (6 men; median age 23 years). Participants listened to 4 experimental series that differed in tempo (fast vs. slow) and meter (duple vs. triple). Each series consisted of 625 audio stimuli, 85% of which were organized with a standard metric structure (standard stimulus) while 15% included unexpected accents (deviant stimulus). The results revealed that the type of metric structure influences the detection of change in the stimuli: the N200 wave occurred significantly earlier for stimuli with duple meter and fast tempo and latest for stimuli with triple meter and fast tempo.
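The stimulus stream described above (625 stimuli per series, 85% standard, 15% deviant) can be sketched as follows. The proportions come from the abstract; the shuffling scheme and seed are illustrative assumptions, and real oddball designs usually also constrain the spacing between deviants.

```python
import random

def make_oddball_sequence(n_stimuli=625, p_deviant=0.15, seed=1):
    """Return a shuffled list of 'standard'/'deviant' labels with the
    requested deviant proportion (rounded to a whole number of trials)."""
    n_dev = round(n_stimuli * p_deviant)
    seq = ["deviant"] * n_dev + ["standard"] * (n_stimuli - n_dev)
    # hypothetical: a fully random order; real designs often forbid
    # back-to-back deviants
    random.Random(seed).shuffle(seq)
    return seq

seq = make_oddball_sequence()
```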
5. MacIntyre AD, Lo HYJ, Cross I, Scott S. Task-irrelevant auditory metre shapes visuomotor sequential learning. Psychol Res 2023; 87:872-893. PMID: 35690927; PMCID: PMC10017598; DOI: 10.1007/s00426-022-01690-y.
Abstract
The ability to learn and reproduce sequences is fundamental to everyday life, and deficits in sequential learning are associated with developmental disorders such as specific language impairment. Individual differences in sequential learning are usually investigated using the serial reaction time task (SRTT), wherein a participant responds to a series of regularly timed, seemingly random visual cues that in fact follow a repeating deterministic structure. Although manipulating inter-cue interval timing has been shown to adversely affect sequential learning, the role of metre (the patterning of salience across time) remains unexplored within the regularly timed, visual SRTT. The current experiment consists of an SRTT adapted to include task-irrelevant auditory rhythms conferring a sense of metre. We predicted that (1) participants' (n = 41) reaction times would reflect the auditory metric structure; (2) disrupting the correspondence between the learned visual sequence and the auditory metre would impede performance; and (3) individual differences in sensitivity to rhythm would predict the magnitude of these effects. Altering this relationship via a phase shift between the trained visual sequence and the auditory metre slowed reaction times. Sensitivity to rhythm was predictive of reaction times overall. In an exploratory analysis, we moreover found that approximately half of the participants made systematically different responses to visual cues on the basis of the cues' position within the auditory metre. We demonstrate the influence of auditory temporal structures on visuomotor sequential learning in a widely used task where metre and timing are rarely considered. The current results indicate sensitivity to metre as a possible latent factor underpinning individual differences in SRTT performance.
Affiliation(s)
- Alexis Deighton MacIntyre
- Institute of Cognitive Neuroscience, University College London, London, UK
- Centre for Music and Science, University of Cambridge, Cambridge, UK
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Ian Cross
- Centre for Music and Science, University of Cambridge, Cambridge, UK
- Sophie Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
6. Criscuolo A, Schwartze M, Henry MJ, Obermeier C, Kotz SA. Individual neurophysiological signatures of spontaneous rhythm processing. Neuroimage 2023; 273:120090. PMID: 37028735; DOI: 10.1016/j.neuroimage.2023.120090.
Abstract
When sensory input conveys rhythmic regularity, we can form predictions about the timing of upcoming events. Although rhythm processing capacities differ considerably between individuals, these differences are often obscured by participant- and trial-level data averaging procedures in M/EEG research. Here, we systematically assessed the neurophysiological variability displayed by individuals listening to isochronous (1.54 Hz) equitone sequences interspersed with unexpected (amplitude-attenuated) deviant tones. Our approach aimed at revealing time-varying adaptive neural mechanisms for sampling the acoustic environment at multiple timescales. Rhythm tracking analyses confirmed that individuals encode temporal regularities and form temporal expectations, as indicated by delta-band (1.54 Hz) power and its anticipatory phase alignment to expected tone onsets. Zooming into tone- and participant-level data, we further characterized intra- and inter-individual variability in phase alignment across auditory sequences. Furthermore, individual modelling of beta-band tone-locked responses showed that a subset of auditory sequences was sampled rhythmically by superimposing binary (strong-weak; S-w), ternary (S-w-w) and mixed accentuation patterns. In these sequences, neural responses to standard and deviant tones were modulated by a binary accentuation pattern, thus pointing towards a mechanism of dynamic attending. Altogether, the current results point toward complementary roles of delta- and beta-band activity in rhythm processing and further highlight diverse and adaptive mechanisms to track and sample the acoustic environment at multiple timescales, even in the absence of task-specific instructions.
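Phase alignment of delta-band activity at the stimulation rate is commonly quantified with inter-trial phase coherence (ITPC). The sketch below assumes single-channel trial data and uses the 1.54 Hz rate from the abstract; the sampling rate, trial counts, and synthetic signals are illustrative, not the study's data or code.

```python
import numpy as np

def itpc_at_freq(trials, fs, freq):
    """Inter-trial phase coherence at one frequency.
    trials: (n_trials, n_samples) array; fs: sampling rate in Hz.
    Returns a value in [0, 1]; 1 = identical phase on every trial."""
    trials = np.asarray(trials, dtype=float)
    n = trials.shape[1]
    t = np.arange(n) / fs
    # complex Fourier coefficient at `freq` for each trial (direct DFT)
    coeffs = trials @ np.exp(-2j * np.pi * freq * t)
    phases = coeffs / np.abs(coeffs)  # keep phase only (unit vectors)
    return np.abs(phases.mean())

# Synthetic check: 1.54 Hz sinusoids with consistent vs. random phase
fs, f0 = 250.0, 1.54
t = np.arange(int(5 * fs)) / fs
rng = np.random.default_rng(0)
aligned = np.array([np.sin(2 * np.pi * f0 * t + 0.3)
                    + 0.5 * rng.standard_normal(t.size)
                    for _ in range(30)])
random_phase = np.array([np.sin(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi))
                         for _ in range(30)])
itpc_aligned = itpc_at_freq(aligned, fs, f0)
itpc_random = itpc_at_freq(random_phase, fs, f0)
```

High ITPC at the stimulation frequency, concentrated just before expected onsets, is the kind of evidence the abstract describes as anticipatory phase alignment.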
Affiliation(s)
- A Criscuolo
- Department of Neuropsychology & Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, the Netherlands
- M Schwartze
- Department of Neuropsychology & Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, the Netherlands
- M J Henry
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Department of Psychology, Toronto Metropolitan University, Canada
- C Obermeier
- BG Klinikum Bergmannstrost Halle, Halle 06112, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- S A Kotz
- Department of Neuropsychology & Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, the Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
7. Linares Gutiérrez D, Schmidt S, Meissner K, Wittmann M. Changes in Subjective Time and Self during Meditation. Biology 2022; 11:1116. PMID: 35892973; PMCID: PMC9330740; DOI: 10.3390/biology11081116.
Simple Summary
Meditation induces an altered state of consciousness, which is often described by meditators as being in the present moment and losing one’s sense of time and self. Few studies have assessed these experiences. We invited 22 experienced meditators to participate in two experimental sessions lasting 20 min each: (1) to meditate and (2) to read a story as a control condition. We measured their heart and breathing rates during these two sessions and conducted a metronome task before and after each session. In this task, participants had to group metronome beats into perceptual units, a measure of the duration of the present moment. In comparison to the reading condition, the heart and breathing rates showed a mix of increased as well as decreased bodily activity in the meditation condition. In the meditation condition, participants subjectively perceived their body boundaries less strongly, paid less attention to time, and felt time pass more quickly compared to the control condition. No differences between conditions were apparent for the metronome task. This study is the first to show how the sense of self and time are relatively diminished during meditation.
Abstract
This study examined the effects of meditative states in experienced meditators on present-moment awareness, subjective time, and self-awareness while assessing meditation-induced changes in heart-rate variability and breathing rate. A sample of 22 experienced meditators who practiced meditation techniques stressing awareness of the present moment (average 20 years of practice) filled out subjective scales pertaining to the sense of time and the bodily self and completed a metronome task as an operationalization of present-moment awareness before and after a 20 min meditation session (experimental condition) and a 20 min reading session (control condition) according to a within-subject design. A mixed pattern of increased sympathetic and parasympathetic activity was found during meditation regarding heart-rate measures. Breathing intervals were prolonged during meditation. Participants perceived their body boundaries as less salient during meditation than while reading the story; they also felt time passed more quickly and paid less attention to time during meditation. This is probably the first quantitative study to show how the experience of time during a meditation session is altered together with the sense of the bodily self.
Affiliation(s)
- Damisela Linares Gutiérrez
- Institute of Frontier Areas of Psychology and Mental Health, 79098 Freiburg, Germany
- Palliative Care Unit, Department of Internal Medicine, Medical Faculty, University of Freiburg, 79106 Freiburg, Germany
- Stefan Schmidt
- Institute of Frontier Areas of Psychology and Mental Health, 79098 Freiburg, Germany
- Department of Psychosomatic Medicine and Psychotherapy, Medical Center-University of Freiburg, Medical Faculty, University of Freiburg, 79104 Freiburg, Germany
- Karin Meissner
- Division of Integrative Health Promotion, Department of Social Work and Health, Coburg University of Applied Sciences, 96450 Coburg, Germany
- Marc Wittmann
- Institute of Frontier Areas of Psychology and Mental Health, 79098 Freiburg, Germany
8. Daniel S, Wimpory D, Delafield-Butt JT, Malloch S, Holck U, Geretsegger M, Tortora S, Osborne N, Schögler B, Koch S, Elias-Masiques J, Howorth MC, Dunbar P, Swan K, Rochat MJ, Schlochtermeier R, Forster K, Amos P. Rhythmic Relating: Bidirectional Support for Social Timing in Autism Therapies. Front Psychol 2022; 13:793258. PMID: 35693509; PMCID: PMC9186469; DOI: 10.3389/fpsyg.2022.793258.
Abstract
We propose Rhythmic Relating for autism: a system of supports for friends, therapists, parents, and educators, which aims to augment bidirectional communication and complement existing therapeutic approaches. We begin by summarizing the developmental significance of social timing and the social-motor-synchrony challenges observed in early autism. Meta-analyses conclude that such challenges arise early, yet cite the lack of focused therapies. We identify core relational parameters in support of social-motor-synchrony and systematize these using the communicative musicality constructs: pulse, quality, and narrative. Rhythmic Relating aims to augment the clarity, contiguity, and pulse-beat of spontaneous behavior by recruiting rhythmic supports (cues, accents, turbulence) and relatable vitality, facilitating the predictive flow and just-ahead-in-time planning needed for good-enough social timing. From here, we describe possibilities for playful therapeutic interaction, small-step co-regulation, and layered sensorimotor integration. Lastly, we include several clinical case examples demonstrating the use of Rhythmic Relating within four different therapeutic approaches (Dance Movement Therapy, Improvisational Music Therapy, Play Therapy, and Musical Interaction Therapy). These clinical case examples are introduced here, and several more are included in the Supplementary Material (Examples of Rhythmic Relating in Practice). A suite of pilot intervention studies is proposed to assess the efficacy of combining Rhythmic Relating with different therapeutic approaches in playful work with individuals with autism. Further experimental hypotheses are outlined, designed to clarify the significance of certain key features of the Rhythmic Relating approach.
Affiliation(s)
- Stuart Daniel
- British Association of Play Therapists, London, United Kingdom
- Dawn Wimpory
- BCU Health Board (NHS), Bangor, United Kingdom
- School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
- Jonathan T. Delafield-Butt
- Laboratory for Innovation in Autism, University of Strathclyde, Glasgow, United Kingdom
- School of Education, University of Strathclyde, Glasgow, United Kingdom
- Stephen Malloch
- Westmead Psychotherapy Program, School of Medicine, University of Sydney, Sydney, NSW, Australia
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia
- Ulla Holck
- Music Therapy, Department of Communication and Psychology, Aalborg University, Aalborg, Denmark
- Monika Geretsegger
- The Grieg Academy Music Therapy Research Centre, NORCE Norwegian Research Centre, Bergen, Norway
- Suzi Tortora
- Dancing Dialogue, LCAT, New York, NY, United States
- Nigel Osborne
- Department of Music, University of Edinburgh, Edinburgh, United Kingdom
- Benjaman Schögler
- Perception Movement Action Research Consortium, University of Edinburgh, Edinburgh, United Kingdom
- Sabine Koch
- Research Institute for Creative Arts Therapies, Alanus University, Alfter, Germany
- School of Therapy Sciences, Creative Arts Therapies, SRH University Heidelberg, Heidelberg, Germany
- Judit Elias-Masiques
- BCU Health Board (NHS), Bangor, United Kingdom
- School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
- Karrie Swan
- Department of Counseling, Leadership, and Special Education, Missouri State University, Springfield, MO, United States
- Magali J. Rochat
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bologna, Italy
- Katharine Forster
- BCU Health Board (NHS), Bangor, United Kingdom
- School of Human and Behavioural Sciences, Bangor University, Bangor, United Kingdom
- Pat Amos
- Independent Researcher, Ardmore, PA, United States
9. Flaten E, Marshall SA, Dittrich A, Trainor L. Evidence for top-down meter perception in infancy as shown by primed neural responses to an ambiguous rhythm. Eur J Neurosci 2022; 55:2003-2023. PMID: 35445451; DOI: 10.1111/ejn.15671.
Abstract
From auditory rhythm patterns, listeners extract the underlying steady beat and perceptually group beats to form meters. While previous studies show infants discriminate different auditory meters, it remains unknown whether they can maintain (imagine) a metrical interpretation of an ambiguous rhythm through top-down processes. We investigated this via electroencephalographic mismatch responses. We primed 6-month-old infants (N = 24) to hear a 6-beat ambiguous rhythm either in duple meter (n = 13) or in triple meter (n = 11) through loudness accents on every second or every third beat. Periods of priming were inserted before sequences of the ambiguous unaccented rhythm. To elicit mismatch responses, occasional pitch deviants occurred on either beat 4 (strong beat in triple meter; weak in duple) or beat 5 (strong in duple; weak in triple) of the unaccented trials. At frontal left sites, we found a significant interaction between beat and priming group in the predicted direction. Post-hoc analyses showed mismatch response amplitudes were significantly larger for beat 5 in the duple- than triple-primed group (p = .047) and non-significantly larger for beat 4 in the triple- than duple-primed group. Further, amplitudes were generally larger in infants with musically experienced parents. At frontal right sites, mismatch responses were generally larger in the duple-primed than the triple-primed group, which may reflect a processing advantage for duple meter. These results indicate infants can impose a top-down, internally generated meter on ambiguous auditory rhythms, an ability that would aid early language and music learning.
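The priming manipulation can be sketched as a simple accent-pattern generator: loudness accents on every second beat (duple) or every third beat (triple) of the 6-beat rhythm, with deviants placed on beat 4 or 5 of the unaccented test trials. This is a hypothetical illustration of the design logic, not the authors' stimulus code; the loudness multipliers are invented.

```python
def accent_pattern(meter, n_beats=6, accent=1.0, base=0.6):
    """Loudness multipliers for one cycle of the 6-beat rhythm:
    accents on every 2nd beat (duple) or every 3rd beat (triple),
    counting from beat 1."""
    step = {"duple": 2, "triple": 3}[meter]
    return [accent if i % step == 0 else base for i in range(n_beats)]

def deviant_strength(meter, deviant_beat):
    """True if the deviant's beat is metrically strong under the primed
    meter. Beats are numbered 1..6; strong beats fall on 1, 3, 5 in
    duple meter and on 1, 4 in triple meter."""
    step = {"duple": 2, "triple": 3}[meter]
    return (deviant_beat - 1) % step == 0

duple = accent_pattern("duple")    # accents on beats 1, 3, 5
triple = accent_pattern("triple")  # accents on beats 1, 4
```

Under this scheme beat 4 is strong only for triple-primed infants and beat 5 only for duple-primed infants, which is exactly the contrast the mismatch analysis exploits.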
Affiliation(s)
- Erica Flaten
- Department of Psychology, Neuroscience and Behaviour, McMaster University
- Sara A Marshall
- Department of Psychology, Neuroscience and Behaviour, McMaster University
- Angela Dittrich
- Department of Psychology, Neuroscience and Behaviour, McMaster University
- Laurel Trainor
- Department of Psychology, Neuroscience and Behaviour, McMaster University
- McMaster Institute for Music and the Mind, McMaster University
- Rotman Research Institute, Baycrest Hospital, Toronto, ON, Canada
10. Vuust P, Heggli OA, Friston KJ, Kringelbach ML. Music in the brain. Nat Rev Neurosci 2022; 23:287-305. PMID: 35352057; DOI: 10.1038/s41583-022-00578-5.
Abstract
Music is ubiquitous across human cultures - as a source of affective and pleasurable experience, moving us both physically and emotionally - and learning to play music shapes both brain structure and brain function. Music processing in the brain - namely, the perception of melody, harmony and rhythm - has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature of music perception. We show that music perception, action, emotion and learning all rest on the human brain's fundamental capacity for prediction - as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.
Affiliation(s)
- Peter Vuust
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark.
- Ole A Heggli
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Morten L Kringelbach
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Department of Psychiatry, University of Oxford, Oxford, UK
- Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK
11. Henrich K, Scharinger M. Predictive Processing in Poetic Language: Event-Related Potentials Data on Rhythmic Omissions in Metered Speech. Front Psychol 2022; 12:782765. PMID: 35069363; PMCID: PMC8769205; DOI: 10.3389/fpsyg.2021.782765.
Abstract
Predictions during language comprehension are currently discussed from many points of view. One area where predictive processing may play a particular role concerns poetic language that is regularized by meter and rhyme, thus allowing strong predictions regarding the timing and stress of individual syllables. While there is growing evidence that these prosodic regularities influence language processing, less is known about the potential influence of prosodic preferences (binary, strong-weak patterns) on neurophysiological processes. To this end, the present electroencephalogram (EEG) study examined whether the predictability of strong and weak syllables within metered speech would differ as a function of meter (trochee vs. iamb). Strong (i.e., accented) positions within a foot should be more predictable than weak (i.e., unaccented) positions. Our focus was on disyllabic pseudowords that differed solely between trochaic and iambic structure, with trochees providing the preferred foot in German. Methodologically, we focused on the omission mismatch negativity (oMMN) that is elicited when an anticipated auditory stimulus is omitted. The resulting electrophysiological brain response is particularly interesting because its elicitation does not depend on a physical stimulus. Omissions in deviant position of a passive oddball paradigm occurred at either the first- or second-syllable position of the aforementioned pseudowords, resulting in a 2-by-2 design with the factors foot type and omission position. Analyses focused on the mean oMMN amplitude and latency differences across the four conditions. The result pattern was characterized by an interaction of the effects of foot type and omission position for both amplitudes and latencies. In first position, omissions resulted in larger and earlier oMMNs for trochees than for iambs. In second position, omissions resulted in larger oMMNs for iambs than for trochees, but the oMMN latency did not differ.
The results suggest that omissions, particularly in initial position, are modulated by a trochaic preference in German. The preferred strong-weak pattern may have strengthened the prosodic prediction, especially for matching, trochaic stimuli, such that the violation of this prediction led to an earlier and stronger prediction error. Altogether, predictive processing seems to play a particular role in metered speech, especially if the meter is based on the preferred foot type.
Affiliation(s)
- Karen Henrich
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Mathias Scharinger
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Research Group Phonetics, Philipps-University of Marburg, Marburg, Germany
- Center for Mind, Brain, and Behavior, Universities of Marburg and Giessen, Marburg, Germany
12. Lenc T, Merchant H, Keller PE, Honing H, Varlet M, Nozaradan S. Mapping between sound, brain and behaviour: four-level framework for understanding rhythm processing in humans and non-human primates. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200325. PMID: 34420381; PMCID: PMC8380981; DOI: 10.1098/rstb.2020.0325.
Abstract
Humans perceive and spontaneously move to one or several levels of periodic pulses (a meter, for short) when listening to musical rhythm, even when the sensory input does not provide prominent periodic cues to their temporal location. Here, we review a multi-levelled framework for understanding how external rhythmic inputs are mapped onto internally represented metric pulses. This mapping is studied using an approach to quantify and directly compare representations of metric pulses in signals corresponding to sensory inputs, neural activity and behaviour (typically body movement). Based on this approach, recent empirical evidence can be drawn together into a conceptual framework that unpacks the phenomenon of meter into four levels. Each level highlights specific functional processes that critically enable and shape the mapping from sensory input to internal meter. We discuss the nature, constraints and neural substrates of these processes, starting with fundamental mechanisms investigated in macaque monkeys that enable basic forms of mapping between simple rhythmic stimuli and an internally represented metric pulse. We propose that human evolution has gradually built a robust and flexible system upon these fundamental processes, allowing more complex levels of mapping to emerge in musical behaviours. This approach opens promising avenues to understand the many facets of rhythmic behaviours across individuals and species. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
Collapse
Affiliation(s)
- Tomas Lenc
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Brussels 1200, Belgium
| | - Hugo Merchant
- Instituto de Neurobiologia, UNAM, Campus Juriquilla, Querétaro 76230, Mexico
| | - Peter E. Keller
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
| | - Henkjan Honing
- Amsterdam Brain and Cognition (ABC), Institute for Logic, Language and Computation (ILLC), University of Amsterdam, Amsterdam 1090 GE, The Netherlands
| | - Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- School of Psychology, Western Sydney University, Penrith, New South Wales 2751, Australia
| | - Sylvie Nozaradan
- Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Brussels 1200, Belgium
| |
Collapse
|
13
|
Kondoh S, Okanoya K, Tachibana RO. Switching perception of musical meters by listening to different acoustic cues of biphasic sound stimulus. PLoS One 2021; 16:e0256712. [PMID: 34460855 PMCID: PMC8405023 DOI: 10.1371/journal.pone.0256712] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2021] [Accepted: 08/12/2021] [Indexed: 11/20/2022] Open
Abstract
Meter is one of the core features of music perception. It is the cognitive grouping of regular sound sequences, typically into units of 2, 3, or 4 beats. Previous studies have suggested that listeners can not only passively perceive meter from acoustic cues such as the loudness, pitch, and duration of sound elements, but also actively perceive it by paying attention to isochronous sound events without any acoustic cues. Studying the interaction of top-down and bottom-up processing in meter perception helps clarify the cognitive system's ability to perceive the overall structure of music. The present study aimed to demonstrate that meter perception requires a top-down process (which maintains and switches attention between cues) as well as a bottom-up process for discriminating acoustic cues. We created a "biphasic" sound stimulus consisting of successive tone sequences designed to provide cues for both triple and quadruple meters in two different sound attributes: frequency and duration. Participants were asked to focus on either the frequency or the duration of the stimulus, and to report the meter they perceived on a five-point scale (ranging from "strongly triple" to "strongly quadruple"). We found that participants perceived different meters by switching their attention to specific cues. This result adds evidence to the idea that meter perception involves an interaction between top-down and bottom-up processes.
Collapse
Affiliation(s)
- Sotaro Kondoh
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
| | - Kazuo Okanoya
- Department of Life Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- Center for Evolutionary Cognitive Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- RIKEN Center for Brain Science, Saitama, Japan
- * E-mail: (KO); (ROT)
| | - Ryosuke O. Tachibana
- Center for Evolutionary Cognitive Sciences, Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- * E-mail: (KO); (ROT)
| |
Collapse
|
14
|
Møller C, Stupacher J, Celma-Miralles A, Vuust P. Beat perception in polyrhythms: Time is structured in binary units. PLoS One 2021; 16:e0252174. [PMID: 34415911 PMCID: PMC8378699 DOI: 10.1371/journal.pone.0252174] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2021] [Accepted: 08/01/2021] [Indexed: 11/19/2022] Open
Abstract
In everyday life, we group and subdivide time to understand the sensory environment surrounding us. Organizing time in units, such as diurnal rhythms, phrases, and beat patterns, is fundamental to behavior, speech, and music. When listening to music, our perceptual system extracts and nests rhythmic regularities to create a hierarchical metrical structure that enables us to predict the timing of the next events. Foot tapping and head bobbing to musical rhythms are observable evidence of this process. In the special case of polyrhythms, at least two metrical structures compete to become the reference for these temporal regularities, rendering several possible beats with which we can synchronize our movements. While there is general agreement that tempo, pitch, and loudness influence beat perception in polyrhythms, we focused on the yet neglected influence of beat subdivisions, i.e., the least common denominator of a polyrhythm ratio. In three online experiments, 300 participants listened to a range of polyrhythms and tapped their index fingers in time with the perceived beat. The polyrhythms consisted of two simultaneously presented isochronous pulse trains with different ratios (2:3, 2:5, 3:4, 3:5, 4:5, 5:6) and different tempi. For ratios 2:3 and 3:4, we additionally manipulated the pitch of the pulse trains. Results showed a highly robust influence of subdivision grouping on beat perception. This was manifested as a propensity towards beats that are subdivided into two or four equally spaced units, as opposed to beats with three or more complex groupings of subdivisions. Additionally, lower pitched pulse trains were more often perceived as the beat. Our findings suggest that subdivisions, not beats, are the basic unit of beat perception, and that the principle underlying the binary grouping of subdivisions reflects a propensity towards simplicity. This preference for simple grouping is widely applicable to human perception and cognition of time.
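The subdivision arithmetic behind these ratios is easy to sketch. Reading the abstract's "least common denominator" of an a:b polyrhythm as a shared grid of lcm(a, b) equal subdivisions per cycle, each pulse train then falls on every (grid/a)-th or (grid/b)-th grid unit. A minimal illustration (the function name is ours, purely for exposition):

```python
from math import lcm

def subdivision_grid(a: int, b: int) -> tuple[int, int, int]:
    """For an a:b polyrhythm, return the number of equal subdivisions in one
    full cycle, plus the onset spacing (in subdivisions) of each pulse train."""
    n = lcm(a, b)               # shared subdivision grid: one polyrhythm cycle
    return n, n // a, n // b    # grid length, a-train spacing, b-train spacing

# A 2:3 polyrhythm shares a 6-unit grid: one train strikes every 3 units,
# the other every 2 units.
print(subdivision_grid(2, 3))
```

On this reading, a 3:4 polyrhythm shares a 12-unit grid (one train every 4 units, the other every 3), and the studied ratios differ in whether a perceived beat can be carved from that grid in groups of two or four units versus more complex groupings.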
Collapse
Affiliation(s)
- Cecilie Møller
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
| | - Jan Stupacher
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
| | - Alexandre Celma-Miralles
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
| | - Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
| |
Collapse
|
15
|
Takeya R, Nakamura S, Tanaka M. Spontaneous grouping of saccade timing in the presence of task-irrelevant objects. PLoS One 2021; 16:e0248530. [PMID: 33724997 PMCID: PMC7963089 DOI: 10.1371/journal.pone.0248530] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Accepted: 02/27/2021] [Indexed: 11/26/2022] Open
Abstract
Sequential movements are often grouped into several chunks, as evidenced by the modulation of the timing of each elemental movement. Even during synchronized tapping with a metronome, we sometimes feel a subjective accent on every few taps. To examine whether motor segmentation emerges during synchronized movements, we trained monkeys to generate a series of predictive saccades synchronized with visual stimuli that appeared sequentially, at a fixed interval (400 or 600 ms), at six circularly arranged landmark locations. We found two types of motor segmentation, both featuring periodic modulation of saccade timing. First, the intersaccadic interval (ISI) depended on target location and saccade direction, indicating that particular combinations of saccades were integrated into motor chunks. Second, when a task-irrelevant rectangular contour surrounding three landmarks (an "inducer") was presented, the ISI was significantly modulated depending on the location of the target relative to the inducer. All patterns of individual differences seen in monkeys were also observed in humans. Importantly, the effects of the inducer greatly decreased or disappeared when the animals were trained to generate only reactive saccades (latency >100 ms), indicating that the motor segmentation may depend on internal rhythms. Thus, our results demonstrate two types of motor segmentation during synchronized movements: one related to the hierarchical organization of sequential movements and the other related to the spontaneous grouping of rhythmic events. This experimental paradigm can be used to investigate the neural mechanisms underlying temporal grouping during rhythm production.
Collapse
Affiliation(s)
- Ryuji Takeya
- Department of Physiology, Hokkaido University School of Medicine, Sapporo, Japan
- * E-mail: (RT); (MT)
| | - Shuntaro Nakamura
- Department of Physiology, Hokkaido University School of Medicine, Sapporo, Japan
| | - Masaki Tanaka
- Department of Physiology, Hokkaido University School of Medicine, Sapporo, Japan
- * E-mail: (RT); (MT)
| |
Collapse
|
16
|
Zhao TC, Kuhl PK. Neural and physiological relations observed in musical beat and meter processing. Brain Behav 2020; 10:e01836. [PMID: 32920995 PMCID: PMC7667306 DOI: 10.1002/brb3.1836] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/10/2020] [Accepted: 08/22/2020] [Indexed: 12/12/2022] Open
Abstract
INTRODUCTION Music is ubiquitous and powerful in the world's cultures. Music listening involves abundant information processing (e.g., pitch, rhythm) in the central nervous system and can also induce physiological changes, such as in heart rate and perspiration. Yet, previous studies have tended to examine music information processing in the brain separately from physiological changes. In the current study, we focused on the temporal structure of music (i.e., beat and meter) and examined the physiology, the neural processing, and, most importantly, the relation between the two. METHODS Simultaneous MEG and ECG data were collected from a group of adults (N = 15) while they passively listened to duple and triple rhythmic patterns. To characterize physiology, we measured heart rate variability (HRV), indexing parasympathetic nervous system (PSNS) function. To characterize neural processing of beat and meter, we examined neural entrainment and calculated the beat-to-meter ratio to index the relation between beat-level and meter-level entrainment. Specifically, the current study investigated three related questions: (a) whether listening to musical rhythms affects HRV; (b) whether the neural beat-to-meter ratio differs between metrical conditions; and (c) whether the neural beat-to-meter ratio is related to HRV. RESULTS Results suggest that while, at the group level, both HRV and neural processing are highly similar across metrical conditions, at the individual level the neural beat-to-meter ratio significantly predicts HRV, establishing a neural-physiological link. CONCLUSION This observed link is discussed under the theoretical "neurovisceral integration model," and it provides important new perspectives for music cognition and auditory neuroscience research.
Collapse
Affiliation(s)
- T. Christina Zhao
- Institute for Learning and Brain SciencesUniversity of WashingtonSeattleWAUSA
| | - Patricia K. Kuhl
- Institute for Learning and Brain SciencesUniversity of WashingtonSeattleWAUSA
| |
Collapse
|
17
|
Lenc T, Keller PE, Varlet M, Nozaradan S. Neural and Behavioral Evidence for Frequency-Selective Context Effects in Rhythm Processing in Humans. Cereb Cortex Commun 2020; 1:tgaa037. [PMID: 34296106 PMCID: PMC8152888 DOI: 10.1093/texcom/tgaa037] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Revised: 06/30/2020] [Accepted: 07/16/2020] [Indexed: 01/17/2023] Open
Abstract
When listening to music, people often perceive and move along with a periodic meter. However, the dynamics of mapping between meter perception and the acoustic cues to meter periodicities in the sensory input remain largely unknown. To capture these dynamics, we recorded electroencephalography (EEG) while nonmusician and musician participants listened to nonrepeating rhythmic sequences, where acoustic cues to meter frequencies either gradually decreased (from regular to degraded) or increased (from degraded to regular). The results revealed greater neural activity selectively elicited at meter frequencies when the sequence gradually changed from regular to degraded than in the opposite direction. Importantly, this effect was unlikely to arise from overall gain or low-level auditory processing, as revealed by physiological modeling. Moreover, the context effect was more pronounced in nonmusicians, who also demonstrated facilitated sensory-motor synchronization with the meter for sequences that started as regular. In contrast, musicians showed weaker effects of recent context in their neural responses and a robust ability to move along with the meter irrespective of stimulus degradation. Together, our results demonstrate that brain activity elicited by rhythm reflects not only passive tracking of stimulus features but also continuous integration of sensory input with recent context.
Collapse
Affiliation(s)
- Tomas Lenc
- MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Penrith, Sydney, NSW 2751, Australia
| | - Peter E Keller
- MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Penrith, Sydney, NSW 2751, Australia
| | - Manuel Varlet
- MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Penrith, Sydney, NSW 2751, Australia
- School of Psychology, Western Sydney University, Penrith, Sydney, NSW 2751, Australia
| | - Sylvie Nozaradan
- MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Penrith, Sydney, NSW 2751, Australia
- Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Brussels 1200, Belgium
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal QC H3C 3J7, Canada
| |
Collapse
|
18
|
Why do we move to the beat? A multi-scale approach, from physical principles to brain dynamics. Neurosci Biobehav Rev 2020; 112:553-584. [DOI: 10.1016/j.neubiorev.2019.12.024] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2019] [Revised: 10/20/2019] [Accepted: 12/13/2019] [Indexed: 01/08/2023]
|
19
|
Jasmin K, Dick F, Holt LL, Tierney A. Tailored perception: Individuals' speech and music perception strategies fit their perceptual abilities. J Exp Psychol Gen 2020; 149:914-934. [PMID: 31589067 PMCID: PMC7133494 DOI: 10.1037/xge0000688] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2018] [Revised: 08/09/2019] [Accepted: 08/12/2019] [Indexed: 01/09/2023]
Abstract
Perception involves integration of multiple dimensions that often serve overlapping, redundant functions, for example, pitch, duration, and amplitude in speech. Individuals tend to prioritize these dimensions differently (stable, individualized perceptual strategies), but the reason for this has remained unclear. Here we show that perceptual strategies relate to perceptual abilities. In a speech cue weighting experiment (trial N = 990), we first demonstrate that individuals with a severe deficit for pitch perception (congenital amusics; N = 11) categorize linguistic stimuli similarly to controls (N = 11) when the main distinguishing cue is duration, which they perceive normally. In contrast, in a prosodic task where pitch cues are the main distinguishing factor, we show that amusics place less importance on pitch and instead rely more on duration cues, even when pitch differences in the stimuli are large enough for amusics to discern. In a second experiment testing musical and prosodic phrase interpretation (N = 16 amusics; 15 controls), we found that relying on duration allowed amusics to overcome their pitch deficits to perceive speech and music successfully. We conclude that auditory signals, because of their redundant nature, are robust to impairments for specific dimensions, and that optimal speech and music perception strategies depend not only on invariant acoustic dimensions (the physical signal) but on perceptual dimensions whose precision varies across individuals. Computational models of speech perception (indeed, all types of perception involving redundant cues, e.g., vision and touch) should therefore aim to account for the precision of perceptual dimensions and characterize individuals as well as groups. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
Collapse
Affiliation(s)
| | - Fred Dick
- Department of Psychological Sciences
| | | | | |
Collapse
|
20
|
Bouwer FL, Honing H, Slagter HA. Beat-based and Memory-based Temporal Expectations in Rhythm: Similar Perceptual Effects, Different Underlying Mechanisms. J Cogn Neurosci 2020; 32:1221-1241. [PMID: 31933432 DOI: 10.1162/jocn_a_01529] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Predicting the timing of incoming information allows the brain to optimize information processing in dynamic environments. Behaviorally, temporal expectations have been shown to facilitate processing of events at expected time points, such as sounds that coincide with the beat in musical rhythm. Yet, temporal expectations can develop based on different forms of structure in the environment, not just the regularity afforded by a musical beat. Little is still known about how different types of temporal expectations are neurally implemented and affect performance. Here, we orthogonally manipulated the periodicity and predictability of rhythmic sequences to examine the mechanisms underlying beat-based and memory-based temporal expectations, respectively. Behaviorally and using EEG, we looked at the effects of beat-based and memory-based expectations on auditory processing when rhythms were task-relevant or task-irrelevant. At expected time points, both beat-based and memory-based expectations facilitated target detection and led to attenuation of P1 and N1 responses, even when expectations were task-irrelevant (unattended). For beat-based expectations, we additionally found reduced target detection and enhanced N1 responses for events at unexpected time points (e.g., off-beat), regardless of the presence of memory-based expectations or task relevance. This latter finding supports the notion that periodicity selectively induces rhythmic fluctuations in neural excitability and furthermore indicates that, although beat-based and memory-based expectations may similarly affect auditory processing of expected events, their underlying neural mechanisms may be different.
Collapse
|
21
|
Abstract
This paper presents a model capable of learning the rhythmic characteristics of a music signal through unsupervised learning. The model learns a multi-layer hierarchy of rhythmic patterns, ranging from simple structures on lower layers to more complex patterns on higher layers. The learned hierarchy is fully transparent, which enables observation and explanation of the structure of the learned patterns. The model employs tempo-invariant encoding of patterns and can thus learn and perform inference on tempo-varying and noisy input data. We demonstrate the model's capability of learning the distinctive rhythmic structures of different music genres in an unsupervised manner. To test its robustness, we show how the model can efficiently extract rhythmic structures from songs with changing time signatures and from live recordings. Additionally, the model's time complexity is empirically tested to show its usability for analysis-related applications.
Collapse
|
22
|
Abstract
In typical Western music, important pitches occur disproportionately often on important beats, referred to as the tonal-metric hierarchy (Prince & Schmuckler, 2014, Music Perception, 31, 254-270). We tested whether listeners are sensitive to this alignment of pitch and temporal structure. In Experiment 1, the stimuli were 200 artificial melodies with random pitch contours; all melodies had both a regular beat and a pitch class distribution that favored one musical key, but had either high or low agreement with the tonal-metric hierarchy. Thirty-two listeners rated the goodness of each melody, and another 41 listeners rated the melodies' metric clarity (how clear the beat was). The tonal-metric hierarchy did not affect either rating type, likely because the melodies may have only weakly (at best) established a musical key. In Experiment 2, we shuffled the pitches in 60 composed melodies (scrambling pitch contour, but not rhythm) to generate versions with high and low agreement with the tonal-metric hierarchy. Both ratings of goodness (N = 40) and metric clarity (N = 40) revealed strong evidence of the tonal-metric hierarchy influencing ratings; there was no effect of musical training. In Experiment 3, we phase-shifted, rather than shuffled, the pitches from the composed melodies, thus preserving pitch contour. Both rating types (goodness N = 43, metric clarity N = 32) replicated the results of Experiment 2. These findings establish the psychological reality of the tonal-metric hierarchy.
Collapse
|
23
|
Celma-Miralles A, Toro JM. Ternary meter from spatial sounds: Differences in neural entrainment between musicians and non-musicians. Brain Cogn 2019; 136:103594. [DOI: 10.1016/j.bandc.2019.103594] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2018] [Revised: 08/02/2019] [Accepted: 08/03/2019] [Indexed: 11/26/2022]
|
24
|
Sidiras C, Iliadou VV, Nimatoudis I, Grube M, Griffiths T, Bamiou DE. Deficits in Auditory Rhythm Perception in Children With Auditory Processing Disorder Are Unrelated to Attention. Front Neurosci 2019; 13:953. [PMID: 31551701 PMCID: PMC6743378 DOI: 10.3389/fnins.2019.00953] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Accepted: 08/23/2019] [Indexed: 11/13/2022] Open
Abstract
Auditory processing disorder (APD) is defined as a specific deficit in the processing of auditory information along the central auditory nervous system, including bottom-up and top-down neural connectivity. Even though music comprises a large part of audition, testing music perception in the APD population has not yet gained wide attention in research. This work tests the hypothesis that deficits in rhythm perception occur in a group of subjects with APD. The primary aim of this study was to measure perception of a simple auditory rhythm, i.e., short isochronous sequences of beats, in children with APD and to compare their performance to that of age-matched normal controls. The secondary aim was to study the relationship between cognition and auditory processing in rhythm perception. We tested 39 children with APD and 25 control children aged between 6 and 12 years via (a) clinical APD tests, including a monaural speech-in-noise test; (b) an isochrony task, which measures the detection of small deviations from perfect isochrony in an isochronous beat sequence; and (c) two cognitive tests (auditory memory and auditory attention). Children with APD scored worse on the isochrony task than the age-matched control group. In the APD group, neither measure of cognition (attention or memory) correlated with performance on the isochrony task. Left-ear (but not right-ear) speech-in-noise performance correlated with isochrony-task performance. In the control group, a large correlation (r = -0.701, p = 0.001) was observed between the isochrony task and attention, but not memory. The results demonstrate a deficit in the perception of regularly timed sequences in APD that is relevant to the perception of speech in noise, a ubiquitous complaint in this condition. Our results suggest (a) the existence of a rhythm perception deficit in children with APD that is unrelated to attention, and (b) differential effects of attention on task performance in normal vs. APD children. The potential benefit of music/rhythm training for rehabilitation purposes in children with APD remains to be explored.
Collapse
Affiliation(s)
- Christos Sidiras
- Clinical Psychoacoustics Lab, Third Department of Psychiatry, Neuroscience Sector, Medical School, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Vasiliki Vivian Iliadou
- Clinical Psychoacoustics Lab, Third Department of Psychiatry, Neuroscience Sector, Medical School, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Ioannis Nimatoudis
- Clinical Psychoacoustics Lab, Third Department of Psychiatry, Neuroscience Sector, Medical School, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Manon Grube
- Auditory Group, Medical School, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
| | - Tim Griffiths
- Auditory Group, Medical School, Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
| | - Doris-Eva Bamiou
- Faculty of Brain Sciences, UCL Ear Institute, University College London, London, United Kingdom
- Hearing and Deafness Biomedical Research Centre, National Institute for Health Research, London, United Kingdom
| |
Collapse
|
25
|
Karageorghis CI, Lyne LP, Bigliassi M, Vuust P. Effects of auditory rhythm on movement accuracy in dance performance. Hum Mov Sci 2019; 67:102511. [PMID: 31450067 DOI: 10.1016/j.humov.2019.102511] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2019] [Revised: 08/16/2019] [Accepted: 08/17/2019] [Indexed: 11/17/2022]
Abstract
The present study addressed the impact of the rhythmic complexity of music on the accuracy of dance performance, examining the effects of different levels of auditory syncopation on the execution of a dance sequence by trained dancers and exercisers (i.e., nondancers). It was hypothesized that nondancers would make more errors when synchronizing movements with moderately and highly syncopated rhythms, while no performance degradation would manifest among trained dancers. Participants performed a dance sequence synchronized with three different rhythm tracks: regular, moderately syncopated, and highly syncopated. We found significant performance degradation when comparing the no-syncopation and high-syncopation conditions for both trained dancers (p = .002) and nondancers (p = .001). Dancers and nondancers did not differ in how they managed to execute the task with increasing levels of syncopation (p = .384), and the pattern of difference between the two groups was similar across the no-syncopation and highly syncopated conditions. The present findings may have marked implications for practitioners, given that the tasks employed were analogous to those frequently observed in real-life dance settings.
Collapse
Affiliation(s)
| | | | | | - Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Denmark and the Royal Academy of Music, Aarhus/Aalborg, Denmark
| |
Collapse
|
26
|
Baltzell LS, Srinivasan R, Richards V. Hierarchical organization of melodic sequences is encoded by cortical entrainment. Neuroimage 2019; 200:490-500. [PMID: 31254649 DOI: 10.1016/j.neuroimage.2019.06.054] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2019] [Revised: 06/20/2019] [Accepted: 06/23/2019] [Indexed: 11/26/2022] Open
Abstract
Natural speech is organized according to a hierarchical structure, with individual speech sounds combining to form abstract linguistic units, and abstract linguistic units combining to form higher-order linguistic units. Since the boundaries between these units are not always indicated by acoustic cues, they must often be computed internally. Signatures of this internal computation were reported by Ding et al. (2016), who presented isochronous sequences of mono-syllabic words that combined to form phrases that combined to form sentences, and showed that cortical responses simultaneously encode boundaries at multiple levels of the linguistic hierarchy. In the present study, we designed melodic sequences that were hierarchically organized according to Western music conventions. Specifically, isochronous sequences of "sung" nonsense syllables were constructed such that syllables combined to form triads outlining individual chords, which combined to form harmonic progressions. EEG recordings were made while participants listened to these sequences with the instruction to detect when violations in the sequence structure occurred. We show that cortical responses simultaneously encode boundaries at multiple levels of a melodic hierarchy, suggesting that the encoding of hierarchical structure is not unique to speech. No effect of musical training on cortical encoding was observed.
Collapse
Affiliation(s)
- Lucas S Baltzell
- Department of Cognitive Sciences, University of California, Irvine, 3151 Social Sciences Plaza, Irvine, CA, 92687, USA.
| | - Ramesh Srinivasan
- Department of Cognitive Sciences, University of California, Irvine, 3151 Social Sciences Plaza, Irvine, CA, 92687, USA; Department of Biomedical Engineering, University of California, Irvine, 3151 Social Sciences Plaza, Irvine, CA, 92687, USA
| | - Virginia Richards
- Department of Cognitive Sciences, University of California, Irvine, 3151 Social Sciences Plaza, Irvine, CA, 92687, USA
| |
Collapse
|
27
|
Bouvet CJ, Varlet M, Dalla Bella S, Keller PE, Bardy BG. Accent-induced stabilization of spontaneous auditory-motor synchronization. PSYCHOLOGICAL RESEARCH 2019; 84:2196-2209. [PMID: 31203454 DOI: 10.1007/s00426-019-01208-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2018] [Accepted: 06/03/2019] [Indexed: 01/12/2023]
Abstract
Humans spontaneously synchronize their movements with external auditory rhythms such as a metronome or music. Although such synchronization preferentially occurs at a simple 1:1 movement-sound frequency ratio, the parameters facilitating spontaneous synchronization at more complex frequency ratios remain largely unclear. The present study investigates the dynamics of spontaneous auditory-motor synchronization across a range of frequency ratios between movement and sound, and examines the benefit of a simple accentuation pattern for the emergence and stability of synchronization. Participants performed index finger oscillations at their preferred tempo while listening to a metronome presented at either their preferred tempo or twice or three times faster (frequency ratios of 1:1, 1:2, or 1:3), with different patterns of accentuation (unaccented, binary accented, or ternary accented) and no instruction to synchronize. Participants' movements were spontaneously entrained to the auditory stimuli in all three frequency-ratio conditions. Moreover, the emergence and stability of the modes of coordination were influenced by the interaction between frequency ratio and accentuation pattern. Coherent patterns, such as a 1:3 frequency ratio supported by ternary accentuation, facilitated the emergence and stability of the corresponding mode of coordination. Furthermore, ternary accentuation induced a greater gain in stability for the corresponding mode of coordination than binary accentuation. Together, these findings demonstrate the importance of matching accentuation pattern and movement tempo for enhanced synchronization, opening new perspectives for stabilizing complex rhythmic motor behaviors, such as running.
Affiliation(s)
- Cécile J Bouvet
- EuroMov, Univ. Montpellier, Montpellier, France.
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia.
- Manuel Varlet
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia
- School of Social Sciences and Psychology, Western Sydney University, Penrith, Australia
- Simone Dalla Bella
- EuroMov, Univ. Montpellier, Montpellier, France
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Department of Psychology, University of Montreal, Montreal, Canada
- Department of Cognitive Psychology, WSFiZ in Warsaw, Warsaw, Poland
- Peter E Keller
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, Australia
28
Jiam NT, Limb CJ. Rhythm processing in cochlear implant-mediated music perception. Ann N Y Acad Sci 2019; 1453:22-28. [PMID: 31168793 DOI: 10.1111/nyas.14130] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2019] [Revised: 04/24/2019] [Accepted: 05/03/2019] [Indexed: 11/29/2022]
Abstract
Cochlear implants (CIs) are biomedical devices that provide sound to people with severe-to-profound hearing loss by direct electrical stimulation of auditory neurons in the cochlea. Despite the remarkable achievements with respect to speech perception in quiet environments, music perception with CIs remains generally poor due to the degradation of auditory input. Prior studies have shown that both pitch perception and timbre discrimination are poor in CI users, whereas the performance on rhythmic tasks is nearly equivalent to normal hearing participants. There are several caveats, however, to this generalization regarding rhythm processing for CI users. The purpose of this article is to summarize the literature on rhythmic perception for CI users while highlighting important limitations within these studies. We will also identify areas for future research and development of CI-mediated music processing. It is likely that rhythm processing will continue to advance as our understanding of electrical current delivery to the auditory nerve improves.
Affiliation(s)
- Nicole T Jiam
- Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, California
- Charles J Limb
- Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, California
29
Maróti E, Honbolygó F, Weiss B. Neural entrainment to the beat in multiple frequency bands in 6-7-year-old children. Int J Psychophysiol 2019; 141:45-55. [PMID: 31078641 DOI: 10.1016/j.ijpsycho.2019.05.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2018] [Revised: 05/03/2019] [Accepted: 05/08/2019] [Indexed: 11/28/2022]
Abstract
Entrainment to periodic acoustic stimuli has been found to relate to both the auditory and motor cortices, and it could be influenced by the maturity of these brain regions. However, existing research on this topic provides data about different oscillatory brain activities in different age groups with different musical backgrounds. In order to obtain a more coherent picture and examine early manifestations of entrainment, we assessed brain oscillations at multiple time scales (beta: 15-25 Hz, gamma: 28-48 Hz) and in steady-state evoked potentials (SS-EPs) in 6-7-year-old children with no musical background right at the start of primary school, before they learnt to read. Our goal was to exclude the effect of music training and reading, since previous studies have shown that sensorimotor entrainment (movement synchronization to the beat) is related to musical and reading abilities. We found evidence for endogenous anticipatory processing in the gamma band related to meter perception, and stimulus-related frequency-specific responses. However, we did not find evidence for an interaction between auditory and motor networks, which suggests that endogenous mechanisms related to auditory processing may mature earlier than those that underlie motor actions, such as sensorimotor synchronization.
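Steady-state evoked potentials of this kind are typically read out as amplitude peaks in the EEG spectrum at beat- and meter-related frequencies. A toy illustration of that readout (sampling rate, beat frequency, and noise level are invented, not the study's parameters):

```python
import numpy as np

fs = 500.0                                   # sampling rate in Hz (illustrative)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)
# toy EEG: a steady-state response at a 2.4-Hz beat frequency buried in noise
eeg = 0.4 * np.cos(2 * np.pi * 2.4 * t) + rng.standard_normal(t.size)

amp = np.abs(np.fft.rfft(eeg)) * 2 / t.size  # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
beat_bin = int(np.argmin(np.abs(freqs - 2.4)))
print(f"amplitude at the beat frequency: {amp[beat_bin]:.2f}")  # ~0.4
```

With 20 s of data the frequency resolution is 0.05 Hz, so the 2.4-Hz response falls exactly on a bin and stands far above the noise floor.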
Affiliation(s)
- Emese Maróti
- Brain Imaging Centre, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; Department of Cognitive Science, Budapest University of Technology and Economics, Budapest, Hungary.
- Ferenc Honbolygó
- Brain Imaging Centre, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary; Institute of Psychology, Eötvös Loránd University, Budapest, Hungary
- Béla Weiss
- Brain Imaging Centre, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Budapest, Hungary
30
Linares Gutierrez D, Kübel S, Giersch A, Schmidt S, Meissner K, Wittmann M. Meditation-Induced States, Vagal Tone, and Breathing Activity Are Related to Changes in Auditory Temporal Integration. Behav Sci (Basel) 2019; 9:bs9050051. [PMID: 31067755 PMCID: PMC6562910 DOI: 10.3390/bs9050051] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2019] [Revised: 05/04/2019] [Accepted: 05/05/2019] [Indexed: 11/30/2022] Open
Abstract
This study examined the relationship between meditation, the experience of the present moment, and psychophysiology. We employed the metronome task to operationalize the extension of the present moment in a pre-post longitudinal design, comparing performance before and after the interventions (meditation, story). The aim was to assess whether physiological changes (heart, breathing) during meditation influence the temporal integration (TI) of metronome beats. Mindfulness meditators either meditated (n = 41) or listened to a story (n = 43). Heart and breathing activity were recorded during the intervention and compared to a resting-state condition. Applying path analyses, we found that meditation led to an increase in the duration of integration intervals at the slowest metronome frequency (inter-stimulus interval, ISI = 3 s). After meditation, the higher the heart-rate variability (i.e., the root mean square of successive differences, RMSSD), the longer the duration of integration intervals at the fastest frequency (ISI = 0.33 s). Moreover, the higher the breathing rate during meditation, the greater the integration of intervals at ISI = 1 s. These findings add to the evidence for meditation-induced changes in the TI of metronome beats and for the embodiment of mental functioning.
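RMSSD, the vagal-tone index used here, is computed directly from successive inter-beat intervals; a minimal sketch (the interval values below are invented for illustration):

```python
import numpy as np

def rmssd(ibi_ms):
    """Root mean square of successive differences of inter-beat intervals (ms),
    a standard time-domain index of heart-rate variability."""
    diffs = np.diff(np.asarray(ibi_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

print(rmssd([800, 800, 800]))            # 0.0: a perfectly regular heart
print(round(rmssd([800, 810, 790]), 2))  # beat-to-beat variability raises it
```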
Affiliation(s)
- Sebastian Kübel
- Institute for Frontier Areas of Psychology and Mental Health, 79098 Freiburg, Germany.
- Anne Giersch
- INSERM U1114, 67091 Strasbourg, France.
- FMTS, Psychiatry Department, University Hospital of Strasbourg, 67200 Strasbourg, France.
- Stefan Schmidt
- Department of Psychosomatic Medicine and Psychotherapy, Medical Faculty, Medical Center-University of Freiburg, 79104 Freiburg, Germany.
- Karin Meissner
- Division of Integrative Health Promotion, Department of Social Work and Health, University of Applied Sciences, 96450 Coburg, Germany.
- Institute of Medical Psychology, Ludwig-Maximilian University Munich, 80336 Munich, Germany.
- Marc Wittmann
- Institute for Frontier Areas of Psychology and Mental Health, 79098 Freiburg, Germany.
31
Predictive Processes and the Peculiar Case of Music. Trends Cogn Sci 2019; 23:63-77. [DOI: 10.1016/j.tics.2018.10.006] [Citation(s) in RCA: 185] [Impact Index Per Article: 37.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2018] [Revised: 10/23/2018] [Accepted: 10/24/2018] [Indexed: 12/18/2022]
32
Cross-Modal Priming Effect of Rhythm on Visual Word Recognition and Its Relationships to Music Aptitude and Reading Achievement. Brain Sci 2018; 8:brainsci8120210. [PMID: 30501073 PMCID: PMC6316040 DOI: 10.3390/brainsci8120210] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2018] [Revised: 11/26/2018] [Accepted: 11/28/2018] [Indexed: 11/17/2022] Open
Abstract
Recent evidence suggests the existence of shared neural resources for rhythm processing in language and music. Such overlaps could be the basis of the facilitating effect of regular musical rhythm on spoken word processing previously reported for typical children and adults, as well as adults with Parkinson’s disease and children with developmental language disorders. The present study builds upon these previous findings by examining whether non-linguistic rhythmic priming also influences visual word processing, and the extent to which such a cross-modal priming effect of rhythm is related to individual differences in musical aptitude and reading skills. An electroencephalogram (EEG) was recorded while participants listened to a rhythmic tone prime, followed by a visual target word with a stress pattern that either matched or mismatched the rhythmic structure of the auditory prime. Participants were also administered standardized assessments of musical aptitude and reading achievement. Event-related potentials (ERPs) elicited by target words with a mismatching stress pattern showed an increased fronto-central negativity. Additionally, the size of the negative effect correlated with individual differences in musical rhythm aptitude and reading comprehension skills. Results support the existence of shared neurocognitive resources for linguistic and musical rhythm processing, and have important implications for the use of rhythm-based activities for reading interventions.
33
Harding EE, Sammler D, Henry MJ, Large EW, Kotz SA. Cortical tracking of rhythm in music and speech. Neuroimage 2018; 185:96-101. [PMID: 30336253 DOI: 10.1016/j.neuroimage.2018.10.037] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2018] [Revised: 10/08/2018] [Accepted: 10/13/2018] [Indexed: 11/26/2022] Open
Abstract
Neural activity phase-locks to rhythm in both music and speech. However, the literature currently lacks a direct test of whether cortical tracking of comparable rhythmic structure is similar across domains. Moreover, although musical training improves multiple aspects of music and speech perception, the relationship between musical training and cortical tracking of rhythm has not been compared directly across domains. We recorded the electroencephalogram (EEG) from 28 participants (14 female) with a range of musical training who listened to melodies and sentences with identical rhythmic structure. We compared cerebral-acoustic coherence (CACoh) between the EEG signal and single-trial stimulus envelopes (as a measure of cortical entrainment) across domains and correlated years of musical training with CACoh. We hypothesized that neural activity would be comparably phase-locked across domains, and that the amount of musical training would be associated with increasingly strong phase locking in both domains. We found that participants with only a few years of musical training had a comparable cortical response to music and speech rhythm, partially supporting the hypothesis. However, the cortical response to music rhythm increased with years of musical training while the response to speech rhythm did not, leading to an overall greater cortical response to music rhythm across all participants. We suggest that task demands shaped the asymmetric cortical tracking across domains.
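Cerebral-acoustic coherence of this kind can be estimated as Welch-averaged magnitude-squared coherence between the EEG and the stimulus envelope. A toy sketch with synthetic signals (sampling rate, segment length, and signal-to-noise ratio are illustrative; this is not the authors' pipeline):

```python
import numpy as np
from scipy.signal import coherence

fs = 250.0                              # EEG sampling rate in Hz (illustrative)
t = np.arange(0, 60, 1 / fs)
envelope = np.sin(2 * np.pi * 2.0 * t)  # 2-Hz stimulus amplitude envelope
rng = np.random.default_rng(0)
eeg = 0.5 * envelope + rng.standard_normal(t.size)  # entrained EEG + noise

f, coh = coherence(eeg, envelope, fs=fs, nperseg=1024)
beat = int(np.argmin(np.abs(f - 2.0)))
print(f"coherence near 2 Hz: {coh[beat]:.2f}")  # high despite the noise
```

Coherence is bounded between 0 and 1, so values at the envelope rate can be compared directly between the music and speech conditions.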
Affiliation(s)
- Eleanor E Harding
- Department of Neuropsychology, Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Daniela Sammler
- Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Molly J Henry
- Max Planck Research Group "Auditory Cognition", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Brain and Mind Institute, Department of Psychology, The University of Western Ontario, London, Ontario, Canada
- Edward W Large
- Department of Psychology, University of Connecticut, Storrs, Connecticut, USA
- Sonja A Kotz
- Department of Neuropsychology, Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands.
34
Kazemi Esfeh T, Hatami J, Lavasani MG. Influence of metrical structure on learning of positional regularities in movement sequences. Psychol Res 2018; 84:611-624. [PMID: 30229296 DOI: 10.1007/s00426-018-1096-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2018] [Accepted: 09/10/2018] [Indexed: 10/28/2022]
Abstract
Sequential stimuli are usually perceived as having hierarchical temporal structures. However, some of these structures have been investigated in only one type of sequence, despite existing evidence for the domain-generality of their representation. Here, we assess whether the hierarchical representation of regularly segmented action sequences resembles the perceived metrical patterns that organize the representation of events hierarchically in temporally regular sequences. In all our experiments, we presented participants with sequences of human movements and tested the perception of metrical pattern by segmenting the movement streams into temporally equal groups of four movements. In Experiment 1, we found that a movement sequence with temporally equal groupings improves the learning of positional regularities inherent within each group of movements. To further clarify the degree to which this learning mechanism is affected by perceived metrical patterns, we conducted Experiments 2a and 2b, which examined the relative saliencies of the first and last positions in the movement groups, respectively. The results showed that, although rule-conforming first positions are as effective for the learning of positional regularities as when both first and last positions are legal, last positions are not as influential. Based on these findings, we conclude that, in grouped sequences, learning of positional regularities may be modulated by the metrical saliency patterns imposed by the temporal regularity of the sequential grouping pattern.
Affiliation(s)
- Talieh Kazemi Esfeh
- Faculty of Psychology and Education, University of Tehran, Jalal Al-e-Ahmad Avenue, Tehran, 1445983861, Iran.
- Javad Hatami
- Faculty of Psychology and Education, University of Tehran, Jalal Al-e-Ahmad Avenue, Tehran, 1445983861, Iran
- Masoud Gholamali Lavasani
- Faculty of Psychology and Education, University of Tehran, Jalal Al-e-Ahmad Avenue, Tehran, 1445983861, Iran
35
Vuust P, Dietz MJ, Witek M, Kringelbach ML. Now you hear it: a predictive coding model for understanding rhythmic incongruity. Ann N Y Acad Sci 2018; 1423:19-29. [PMID: 29683495 DOI: 10.1111/nyas.13622] [Citation(s) in RCA: 60] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2017] [Revised: 12/13/2017] [Accepted: 12/22/2017] [Indexed: 12/30/2022]
Abstract
Rhythmic incongruity in the form of syncopation is a prominent feature of many contemporary musical styles. Syncopations afford incongruity between rhythmic patterns and the meter, giving rise to mental models of differently accented isochronous beats. Syncopations occur either in isolation or as part of rhythmic patterns, so-called grooves. On the basis of the predictive coding framework, we discuss how brain processing of rhythm can be seen as a special case of predictive coding. We present a simple, yet powerful model for how the brain processes rhythmic incongruity: the model for predictive coding of rhythmic incongruity. Our model proposes that a given rhythm's syncopation and its metrical uncertainty (precision) are at the heart of how the brain models rhythm and meter based on priors, predictions, and prediction error. Our minimal model can explain prominent features of brain processing of syncopation: why isolated syncopations lead to stronger prediction error in the brains of musicians, as evidenced by larger event-related potentials to rhythmic incongruity, and why we all experience a stronger urge to move to grooves with a medium level of syncopation compared with low and high levels of syncopation.
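The degree of syncopation that drives such prediction error can be quantified from a grid of metrical weights, in the spirit of the Longuet-Higgins and Lee measure: a silent strong position whose most recent onset fell on a weaker position scores the weight difference. The weight template and example patterns below are illustrative, not the authors' stimuli:

```python
# Metrical weights for one 4/4 bar at sixteenth-note resolution
# (higher = metrically stronger); template and scoring are illustrative.
WEIGHTS16 = [4, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]

def syncopation_score(onsets, weights=WEIGHTS16):
    """Sum, over silent positions, of the weight difference to the most
    recent (weaker) sounded position. Higher = more syncopated."""
    n = len(onsets)
    score = 0
    for j in range(n):
        if onsets[j]:
            continue
        for k in range(1, n):          # scan backwards, wrapping the bar
            i = (j - k) % n
            if onsets[i]:
                if weights[j] > weights[i]:
                    score += weights[j] - weights[i]
                break
    return score

four_on_floor = [1 if i % 4 == 0 else 0 for i in range(16)]  # on every beat
offbeats = [1 if i % 4 == 2 else 0 for i in range(16)]       # between beats
print(syncopation_score(four_on_floor), syncopation_score(offbeats))  # 0 6
```

Patterns with intermediate scores on such a measure are the "medium level of syncopation" grooves the abstract links to the strongest urge to move.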
Affiliation(s)
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Martin J Dietz
- Center for Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- Maria Witek
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Morten L Kringelbach
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
- The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Department of Psychiatry, University of Oxford, Oxford, United Kingdom
36
Haumann NT, Vuust P, Bertelsen F, Garza-Villarreal EA. Influence of Musical Enculturation on Brain Responses to Metric Deviants. Front Neurosci 2018; 12:218. [PMID: 29720932 PMCID: PMC5915898 DOI: 10.3389/fnins.2018.00218] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2017] [Accepted: 03/19/2018] [Indexed: 11/13/2022] Open
Abstract
The ability to recognize metric accents is fundamental in both music and language perception. It has been suggested that music listeners prefer rhythms that follow simple binary meters, which are common in Western music. This means that listeners expect odd-numbered beats to be strong and even-numbered beats to be weak. In support of this, studies have shown that listeners exposed to Western music show stronger novelty and incongruity related P3 and irregularity detection related mismatch negativity (MMN) brain responses to attenuated odd- than attenuated even-numbered metric positions. Furthermore, behavioral evidence suggests that music listeners' preferences can be changed by long-term exposure to non-Western rhythms and meters, e.g., by listening to African or Balkan music. In our study, we investigated whether it might be possible to measure effects of music enculturation on neural responses to attenuated tones on specific metric positions. We compared the magnetic mismatch negativity (MMNm) to attenuated beats in a “Western group” of listeners (n = 12) mainly exposed to Western music and a “Bicultural group” of listeners (n = 13) exposed for at least 1 year to Sub-Saharan African music in addition to Western music. We found that in the “Western group” the MMNm was higher in amplitude to deviant tones on odd compared to even metric positions, but not in the “Bicultural group.” In support of this finding, there was also a trend for the “Western group” to rate omitted beats as more surprising on odd than even metric positions, whereas the “Bicultural group” seemed to discriminate less between metric positions in terms of surprise ratings. Also, we observed that the overall latency of the MMNm was significantly shorter in the Bicultural group compared to the Western group. These effects were not biased by possible differences in rhythm perception ability or music training, measured with the Musical Ear Test (MET). Furthermore, source localization analyses suggest that auditory, inferior temporal, sensory-motor, superior frontal, and parahippocampal regions might be involved in eliciting the MMNm to the metric deviants. These findings suggest that effects of music enculturation can be measured on MMNm responses to attenuated tones on specific metric positions.
Affiliation(s)
- Niels T Haumann
- Department of Aesthetics and Communication (Musicology), Faculty of Arts, Aarhus University, Aarhus, Denmark; Department of Clinical Medicine, Center for Music in the Brain, Royal Academy of Music, Aarhus University, Aarhus, Denmark
- Peter Vuust
- Department of Clinical Medicine, Center for Music in the Brain, Royal Academy of Music, Aarhus University, Aarhus, Denmark
- Freja Bertelsen
- Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark; Department of Nuclear Medicine and PET Centre, Aarhus University Hospital, Aarhus, Denmark
- Eduardo A Garza-Villarreal
- Department of Clinical Medicine, Center for Music in the Brain, Royal Academy of Music, Aarhus University, Aarhus, Denmark; Clinical Research Division, Instituto Nacional de Psiquiatría Ramón de la Fuente Muñiz (INPRFM), Mexico City, Mexico; Department of Neurology, Faculty of Medicine and University Hospital, Universidad Autonoma de Nuevo Leon, Monterrey, Mexico
37
Hannon EE, Nave-Blodgett JE, Nave KM. The Developmental Origins of the Perception and Production of Musical Rhythm. Child Dev Perspect 2018. [DOI: 10.1111/cdep.12285] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
38
Bouwer FL, Burgoyne JA, Odijk D, Honing H, Grahn JA. What makes a rhythm complex? The influence of musical training and accent type on beat perception. PLoS One 2018; 13:e0190322. [PMID: 29320533 PMCID: PMC5761885 DOI: 10.1371/journal.pone.0190322] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Accepted: 12/12/2017] [Indexed: 11/18/2022] Open
Abstract
Perception of a regular beat in music is inferred from different types of accents. For example, increases in loudness cause intensity accents, and the grouping of time intervals in a rhythm creates temporal accents. Accents are expected to occur on the beat: when accents are "missing" on the beat, the beat is more difficult to find. However, it is unclear whether accents occurring off the beat alter beat perception similarly to missing accents on the beat. Moreover, no one has examined whether intensity accents influence beat perception more or less strongly than temporal accents, nor how musical expertise affects sensitivity to each type of accent. In two experiments, we obtained ratings of difficulty in finding the beat in rhythms with either temporal or intensity accents, and which varied in the number of accents on the beat as well as the number of accents off the beat. In both experiments, the occurrence of accents on the beat facilitated beat detection more in musical experts than in musical novices. In addition, the number of accents on the beat affected beat finding more in rhythms with temporal accents than in rhythms with intensity accents. The effect of accents off the beat was much weaker than the effect of accents on the beat and appeared to depend on musical expertise, as well as on the number of accents on the beat: when many accents on the beat are missing, beat perception is quite difficult, and adding accents off the beat may not reduce beat perception further. Overall, the different types of accents were processed qualitatively differently, depending on musical expertise. Therefore, these findings indicate the importance of designing ecologically valid stimuli when testing beat perception in musical novices, who may need different types of accent information than musical experts to be able to find a beat. Furthermore, our findings stress the importance of carefully designing rhythms for social and clinical applications of beat perception, as not all listeners treat all rhythms alike.
Affiliation(s)
- Fleur L. Bouwer
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
- J. Ashley Burgoyne
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Daan Odijk
- Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Henkjan Honing
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, The Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, The Netherlands
- Jessica A. Grahn
- Brain and Mind Institute, Department of Psychology, University of Western Ontario, London, Ontario, Canada
39
Affiliation(s)
- Daniel J. Levitin
- Department of Psychology, McGill University, Montreal, QC H3A 1G1, Canada
- Jessica A. Grahn
- Department of Psychology and Brain and Mind Institute, Western University, London, Ontario N6A 5B7, Canada
- Justin London
- Departments of Music and Cognitive Science, Carleton College, Northfield, Minnesota 55057
40
Haegens S, Zion Golumbic E. Rhythmic facilitation of sensory processing: A critical review. Neurosci Biobehav Rev 2017; 86:150-165. [PMID: 29223770 DOI: 10.1016/j.neubiorev.2017.12.002] [Citation(s) in RCA: 156] [Impact Index Per Article: 22.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2017] [Revised: 11/02/2017] [Accepted: 12/03/2017] [Indexed: 11/17/2022]
Abstract
Here we review the role of brain oscillations in sensory processing. We examine the idea that neural entrainment of intrinsic oscillations underlies the processing of rhythmic stimuli in the context of simple isochronous rhythms as well as in music and speech. This has been a topic of growing interest over recent years; however, many issues remain highly controversial: how do fluctuations of intrinsic neural oscillations, both spontaneous and entrained to external stimuli, affect perception, and does this occur automatically or can it be actively controlled by top-down factors? Some of the controversy in the literature stems from confounding use of terminology. Moreover, it is not straightforward how theories and findings regarding isochronous rhythms generalize to more complex, naturalistic stimuli, such as speech and music. Here we aim to clarify terminology, and distinguish between different phenomena that are often lumped together as reflecting "neural entrainment" but may actually vary in their mechanistic underpinnings. Furthermore, we discuss specific caveats and confounds related to making inferences about oscillatory mechanisms from human electrophysiological data.
Affiliation(s)
- Saskia Haegens
- Department of Neurological Surgery, Columbia University College of Physicians and Surgeons, New York, NY 10032, USA; Centre for Cognitive Neuroimaging, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, 6500 HB Nijmegen, The Netherlands
41
Rajendran VG, Harper NS, Garcia-Lazaro JA, Lesica NA, Schnupp JWH. Midbrain adaptation may set the stage for the perception of musical beat. Proc Biol Sci 2017; 284:20171455. [PMID: 29118141 PMCID: PMC5698641 DOI: 10.1098/rspb.2017.1455] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2017] [Accepted: 10/13/2017] [Indexed: 11/20/2022] Open
Abstract
The ability to spontaneously feel a beat in music is a phenomenon widely believed to be unique to humans. Though beat perception involves the coordinated engagement of sensory, motor and cognitive processes in humans, the contribution of low-level auditory processing to the activation of these networks in a beat-specific manner is poorly understood. Here, we present evidence from a rodent model that midbrain preprocessing of sounds may already be shaping where the beat is ultimately felt. For the tested set of musical rhythms, on-beat sounds on average evoked higher firing rates than off-beat sounds, and this difference was a defining feature of the set of beat interpretations most commonly perceived by human listeners over others. Basic firing rate adaptation provided a sufficient explanation for these results. Our findings suggest that midbrain adaptation, by encoding the temporal context of sounds, creates points of neural emphasis that may influence the perceptual emergence of a beat.
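The adaptation account in this abstract can be caricatured in a few lines: if each sound's evoked rate recovers exponentially with the silent gap preceding it, sounds after long gaps (typical of on-beat positions in these rhythms) evoke stronger responses. The time constant and onset times below are invented for illustration, not fitted to the midbrain data:

```python
import numpy as np

def adapted_rates(onset_times, tau=0.4):
    """Toy adaptation model: evoked firing rate recovers exponentially
    with the time elapsed since the previous sound (tau in seconds)."""
    onsets = np.asarray(onset_times, dtype=float)
    gaps = np.diff(onsets, prepend=onsets[0] - 10.0)  # first sound fully recovered
    return 1.0 - np.exp(-gaps / tau)

# A sound after a 0.8-s gap evokes a stronger response than one after 0.2 s.
rates = adapted_rates([0.0, 0.2, 1.0])
print(np.round(rates, 2))
```

The resulting points of neural emphasis would then bias which beat interpretation listeners settle on.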
Affiliation(s)
- Vani G Rajendran
- Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Nicol S Harper
- Auditory Neuroscience Group, Department of Physiology, Anatomy, and Genetics, University of Oxford, Oxford, UK
- Nicholas A Lesica
- UCL Ear Institute, 332 Grays Inn Rd, Kings Cross, London WC1X 8EE, UK
- Jan W H Schnupp
- Department of Biomedical Sciences, City University of Hong Kong, 1/F, Block 1, To Yuen Building, 31 To Yuen Street, Hong Kong
42
Rajendran VG, Teki S, Schnupp JWH. Temporal Processing in Audition: Insights from Music. Neuroscience 2017; 389:4-18. [PMID: 29108832 PMCID: PMC6371985 DOI: 10.1016/j.neuroscience.2017.10.041] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2017] [Revised: 10/24/2017] [Accepted: 10/27/2017] [Indexed: 11/28/2022]
Abstract
Highlights
- What music psychology reveals about the natural bounds of human temporal processing.
- Psychoacoustics of beat perception.
- Neurophysiology of beat perception.
- Predictable timing in auditory perception.
- Neural mechanisms of timing.
Music is a curious example of a temporally patterned acoustic stimulus, and a compelling pan-cultural phenomenon. This review strives to bring some insights from decades of music psychology and sensorimotor synchronization (SMS) literature into the mainstream auditory domain, arguing that musical rhythm perception is shaped in important ways by temporal processing mechanisms in the brain. The feature that unites these disparate disciplines is an appreciation of the central importance of timing, sequencing, and anticipation. Perception of musical rhythms relies on an ability to form temporal predictions, a general feature of temporal processing that is equally relevant to auditory scene analysis, pattern detection, and speech perception. By bringing together findings from the music and auditory literature, we hope to inspire researchers to look beyond the conventions of their respective fields and consider the cross-disciplinary implications of studying auditory temporal sequence processing. We begin by highlighting music as an interesting sound stimulus that may provide clues to how temporal patterning in sound drives perception. Next, we review the SMS literature and discuss possible neural substrates for the perception of, and synchronization to, musical beat. We then move away from music to explore the perceptual effects of rhythmic timing in pattern detection, auditory scene analysis, and speech perception. Finally, we review the neurophysiology of general timing processes that may underlie aspects of the perception of rhythmic patterns. We conclude with a brief summary and outlook for future research.
Collapse
Affiliation(s)
- Vani G Rajendran
- Auditory Neuroscience Group, University of Oxford, Department of Physiology, Anatomy, and Genetics, Oxford, UK
| | - Sundeep Teki
- Auditory Neuroscience Group, University of Oxford, Department of Physiology, Anatomy, and Genetics, Oxford, UK
| | - Jan W H Schnupp
- City University of Hong Kong, Department of Biomedical Sciences, 31 To Yuen Street, Kowloon Tong, Hong Kong.
| |
Collapse
|
43
|
Neural processing of musical meter in musicians and non-musicians. Neuropsychologia 2017; 106:289-297. [DOI: 10.1016/j.neuropsychologia.2017.10.007] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2017] [Revised: 10/01/2017] [Accepted: 10/03/2017] [Indexed: 11/17/2022]
|
44
|
Okawa H, Suefusa K, Tanaka T. Neural Entrainment to Auditory Imagery of Rhythms. Front Hum Neurosci 2017; 11:493. [PMID: 29081742 PMCID: PMC5645537 DOI: 10.3389/fnhum.2017.00493] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2017] [Accepted: 09/26/2017] [Indexed: 11/13/2022] Open
Abstract
A method of reconstructing perceived or imagined music by analyzing brain activity has not yet been established. As a first step toward developing such a method, we aimed to reconstruct the imagery of rhythm, which is one element of music. It has been reported that a periodic electroencephalogram (EEG) response is elicited while a human imagines a binary or ternary meter on a musical beat. However, it is not clear whether or not brain activity synchronizes with fully imagined beat and meter without auditory stimuli. To investigate neural entrainment to imagined rhythm during auditory imagery of beat and meter, we recorded EEG while nine participants (eight males and one female) imagined three types of rhythm without auditory stimuli but with visual timing, and then we analyzed the amplitude spectra of the EEG. We also recorded EEG while the participants only gazed at the visual timing as a control condition to confirm the visual effect. Furthermore, we derived features of the EEG using canonical correlation analysis (CCA) and conducted an experiment to individually classify the three types of imagined rhythm from the EEG. The results showed that classification accuracies exceeded the chance level in all participants. These results suggest that auditory imagery of meter elicits a periodic EEG response that changes at the imagined beat and meter frequency even in the fully imagined conditions. This study represents the first step toward the realization of a method for reconstructing the imagined music from brain activity.
Collapse
Affiliation(s)
- Haruki Okawa
- Department of Electrical and Electronic Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan
| | - Kaori Suefusa
- Department of Electrical and Information Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan
| | - Toshihisa Tanaka
- Department of Electrical and Electronic Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan; RIKEN Brain Science Institute, Saitama, Japan
| |
Collapse
|
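The entry above reports that auditory imagery of beat and meter elicits a periodic EEG response at the imagined frequencies. The core idea can be sketched on simulated data; this is a toy illustration, not the authors' CCA pipeline, and the sampling rate, beat frequency (2 Hz), and binary-meter frequency (1 Hz) are illustrative assumptions:

```python
import numpy as np

fs = 256            # EEG sampling rate in Hz (illustrative)
beat_hz = 2.0       # imagined beat frequency (assumed value)
meter_hz = 1.0      # binary meter: an accent every second beat
t = np.arange(0, 10, 1 / fs)   # 10 s of simulated signal

rng = np.random.default_rng(0)
# Simulated EEG: weak oscillations at the beat and meter rates buried in noise
eeg = (0.4 * np.sin(2 * np.pi * beat_hz * t)
       + 0.3 * np.sin(2 * np.pi * meter_hz * t)
       + rng.normal(0.0, 1.0, t.size))

# Amplitude spectrum of the recording
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amplitude_at(f):
    """Spectral amplitude in the bin closest to frequency f (Hz)."""
    return spectrum[np.argmin(np.abs(freqs - f))]
```

With 10 s of data the frequency resolution is 0.1 Hz, so the beat and meter bins stand out clearly against neighboring noise-only bins.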
45
|
Dean T, Chubb C. Scale-sensitivity: A cognitive resource basic to music perception. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2017; 142:1432. [PMID: 28964076 DOI: 10.1121/1.4998572] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
A tone-scramble is a rapid, randomly ordered sequence of pure tones. Chubb, Dickson, Dean, Fagan, Mann, Wright, Guan, Silva, Gregersen, and Kowalski [(2013). J. Acoust. Soc. Am. 134(4), 3067-3078] showed that a task requiring listeners to classify major vs minor tone-scrambles yielded a strikingly bimodal distribution. The current study sought to clarify the nature of the skill required in this task. In each of the "semitone" tasks, all tone-scrambles contained eight each of the notes G5, D6, and G6 (to establish G as the tonic) and eight copies of a target note. The target note was either A♭ or A in the "2" task, B♭ or B in the "3" task, C or D♭ in the "4" task, E♭ or E in the "6" task, and F or G♭ in the "7" task. On each trial, the listener strove to classify each stimulus according to its target note. Performance was best (and nearly equal) in the 2, 3, and 6 tasks, intermediate in the 4 task and worst in the 7 task. The results were well-described by a model in which a single cognitive resource controls performance in all five semitone tasks. This resource is called "scale sensitivity" here because it seems to confer general sensitivity to variations in scale in the presence of a fixed tonic.
Collapse
Affiliation(s)
- Tyler Dean
- Department of Cognitive Sciences, University of California at Irvine, Irvine, California 92697-5100, USA
| | - Charles Chubb
- Department of Cognitive Sciences, University of California at Irvine, Irvine, California 92697-5100, USA
| |
Collapse
|
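The tone-scramble stimulus described above (eight each of G5, D6, G6 plus eight copies of a target note, in random order) can be sketched in a few lines. Pitches are standard equal-tempered values; the 65 ms tone duration, 5 ms ramps, and 44.1 kHz sampling rate are assumptions for illustration, not the published stimulus parameters:

```python
import numpy as np

# Equal-tempered frequencies (A4 = 440 Hz); B5/Bb5 shown as example targets
FREQ = {"G5": 783.99, "D6": 1174.66, "G6": 1567.98,
        "B5": 987.77, "Bb5": 932.33}

def tone_scramble(target, fs=44100, tone_dur=0.065, seed=None):
    """Random-order sequence: 8 each of G5, D6, G6 plus 8 target tones."""
    rng = np.random.default_rng(seed)
    notes = ["G5", "D6", "G6"] * 8 + [target] * 8
    order = rng.permutation(len(notes))
    notes = [notes[i] for i in order]
    t = np.arange(int(fs * tone_dur)) / fs
    # Linear 5 ms onset/offset ramps to avoid clicks between tones
    ramp = np.minimum(1.0, np.minimum(t, t[::-1]) / 0.005)
    tones = [np.sin(2 * np.pi * FREQ[n] * t) * ramp for n in notes]
    return notes, np.concatenate(tones)

notes, audio = tone_scramble("B5", seed=1)
```

The listener's task is then to classify each 32-tone sequence by its target note (e.g., B vs B-flat in the "3" task).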
46
|
Transitional Probabilities Are Prioritized over Stimulus/Pattern Probabilities in Auditory Deviance Detection: Memory Basis for Predictive Sound Processing. J Neurosci 2017; 36:9572-9. [PMID: 27629709 DOI: 10.1523/jneurosci.1041-16.2016] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2016] [Accepted: 07/25/2016] [Indexed: 11/21/2022] Open
Abstract
Representations encoding the probabilities of auditory events do not directly support predictive processing. In contrast, information about the probability with which a given sound follows another (transitional probability) allows predictions of upcoming sounds. We tested whether behavioral and cortical auditory deviance detection (the latter indexed by the mismatch negativity event-related potential) relies on probabilities of sound patterns or on transitional probabilities. We presented healthy adult volunteers with three types of rare tone-triplets among frequent standard triplets of high-low-high (H-L-H) or L-H-L pitch structure: proximity deviant (H-H-H/L-L-L), reversal deviant (L-H-L/H-L-H), and first-tone deviant (L-L-H/H-H-L). If deviance detection was based on pattern probability, reversal and first-tone deviants should be detected with similar latency because both differ from the standard at the first pattern position. If deviance detection was based on transitional probabilities, then reversal deviants should be the most difficult to detect because, unlike the other two deviants, they contain no low-probability pitch transitions. The data clearly showed that both behavioral and cortical auditory deviance detection uses transitional probabilities. Thus, the memory traces underlying cortical deviance detection may provide a link between stimulus probability-based change/novelty detectors operating at lower levels of the auditory system and higher auditory cognitive functions that involve predictive processing. SIGNIFICANCE STATEMENT Our research presents the first definite evidence for the auditory system prioritizing transitional probabilities over probabilities of individual sensory events. Forming representations for transitional probabilities paves the way for predictions of upcoming sounds. Several recent theories suggest that predictive processing provides the general basis of human perception, including important auditory functions, such as auditory scene analysis. Our results demonstrate that the memory traces underlying cortical deviance detection form a link between stimulus probability-based change/novelty detectors operating at lower levels of the auditory system and higher auditory cognitive functions that involve predictive processing.
Collapse
|
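The distinction the entry above turns on, pattern probability versus transitional probability, can be made concrete with a few lines. This is a toy illustration, not the study's stimuli or analysis; the sequence below embeds one L-H-L "reversal" deviant among H-L-H standards:

```python
from collections import Counter

def stimulus_probs(seq):
    """Probability of each individual sound, ignoring order."""
    n = len(seq)
    return {s: c / n for s, c in Counter(seq).items()}

def transitional_probs(seq):
    """P(next sound | current sound), estimated from adjacent pairs."""
    pair_counts = Counter(zip(seq, seq[1:]))
    first_counts = Counter(seq[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

# Frequent H-L-H standard triplets with one L-H-L "reversal" deviant:
# the deviant is rare as a pattern, yet it is built entirely from the
# common H->L and L->H transitions, so transitional probabilities
# barely register it, matching the behavior the study reports.
seq = "HLH" * 20 + "LHL" + "HLH" * 20
tp = transitional_probs(seq)
```

In this sequence every L is followed by H with probability 1.0, so the reversal deviant introduces no low-probability transition even though its triplet pattern is rare.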
47
|
Sidiras C, Iliadou V, Nimatoudis I, Reichenbach T, Bamiou DE. Spoken Word Recognition Enhancement Due to Preceding Synchronized Beats Compared to Unsynchronized or Unrhythmic Beats. Front Neurosci 2017; 11:415. [PMID: 28769752 PMCID: PMC5513984 DOI: 10.3389/fnins.2017.00415] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2017] [Accepted: 07/04/2017] [Indexed: 11/16/2022] Open
Abstract
The relation between rhythm and language has been investigated over recent decades, with evidence from several different strands of research that the two share overlapping perceptual mechanisms. Dynamic Attending Theory posits that neural entrainment to musical rhythm results in synchronized oscillations in attention, enhancing perception of other events occurring at the same rate. In this study, this prediction was tested in 10-year-old children by means of a psychoacoustic speech-recognition-in-babble paradigm. It was hypothesized that rhythm effects evoked by a short isochronous sequence of beats would yield optimal word recognition in babble when beats and word are in sync. We compared speech-recognition-in-babble performance when the preceding beat sequence was isochronous and in sync with the word vs. non-isochronous or out of sync. Results showed that (a) word recognition was best when rhythm and word were in sync, and (b) the effect was not uniform across syllables and the gender of subjects. Our results suggest that pure-tone beats affect speech recognition at early levels of sensory or phonemic processing.
Collapse
Affiliation(s)
- Christos Sidiras
- Clinical Psychoacoustics Laboratory, Neuroscience Division, 3rd Psychiatric Department, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Vasiliki Iliadou
- Clinical Psychoacoustics Laboratory, Neuroscience Division, 3rd Psychiatric Department, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Ioannis Nimatoudis
- Clinical Psychoacoustics Laboratory, Neuroscience Division, 3rd Psychiatric Department, Aristotle University of Thessaloniki, Thessaloniki, Greece
| | - Tobias Reichenbach
- Department of Bioengineering, Imperial College London, London, United Kingdom
| | - Doris-Eva Bamiou
- Faculty of Brain Sciences, UCL Ear Institute, University College London, London, United Kingdom
| |
Collapse
|
48
|
Abstract
Studies of musical corpora have given empirical grounding to the various features that characterize particular musical styles and genres. Palmer & Krumhansl (1990) found that in Western classical music the likeliest places for a note to occur are the most strongly accented beats in a measure, and this was also found in subsequent studies using both Western classical and folk music corpora (Huron & Ommen, 2006; Temperley, 2010). We present a rhythmic analysis of a corpus of 15 performances of percussion music from Bamako, Mali. In our corpus, the relative frequency of note onsets in a given metrical position does not correspond to patterns of metrical accent, though there is a stable relationship between onset frequency and metrical position. The implications of this non-congruence between simple statistical likelihood and metrical structure for the ways in which meter and metrical accent may be learned and understood are discussed, along with the importance of cross-cultural studies for psychological research.
Collapse
|
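The corpus measure in the entry above, the relative frequency of note onsets at each metrical position, can be sketched as follows. The cycle length and onset data are toy values, not the Bamako corpus:

```python
from collections import Counter

def onset_profile(onset_positions, cycle_length):
    """Relative frequency of note onsets at each metrical position.

    onset_positions: onset times in subdivision units from the start
    cycle_length: number of subdivision positions per metric cycle
    """
    counts = Counter(p % cycle_length for p in onset_positions)
    total = sum(counts.values())
    return {p: counts.get(p, 0) / total for p in range(cycle_length)}

# Toy pattern in a 12-subdivision cycle; in the Western corpora cited,
# most onset mass falls on the strongly accented positions (0 and 6 here),
# whereas a non-congruent corpus would show a stable but different profile.
profile = onset_profile([0, 3, 6, 9, 12, 15, 18, 20, 24], 12)
```

Comparing such a profile against the positions of metrical accent is what reveals the congruence (or non-congruence) the authors discuss.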
49
|
Abstract
Musical rhythm positively impacts subsequent speech processing. However, the neural mechanisms underlying this phenomenon are so far unclear. We investigated whether carryover effects from a preceding musical cue to a speech stimulus result from a continuation of neural phase entrainment to periodicities that are present in both music and speech. Participants listened to and memorized French metrical sentences that contained (quasi-)periodic recurrences of accents and syllables. Speech stimuli were preceded by a rhythmically regular or irregular musical cue. Our results show that the presence of a regular cue modulates the neural response, as estimated by EEG power spectral density, intertrial coherence, and source analyses at critical frequencies during speech processing, compared with the irregular condition. Importantly, intertrial coherences for regular cues were indicative of the participants' success in memorizing the subsequent speech stimuli. These findings underscore the highly adaptive nature of neural phase entrainment across fundamentally different auditory stimuli. They also support current models of neural phase entrainment as a tool of predictive timing and attentional selection across cognitive domains.
Collapse
Affiliation(s)
- Simone Falk
- Aix-Marseille Univ, LPL, UMR 7309, CNRS, Aix-en-Provence, France; Université Sorbonne Nouvelle Paris-3, LPP, UMR 7018, CNRS, Paris, France; Ludwig-Maximilians-University, Munich, Germany
| |
Collapse
|
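Intertrial coherence, the measure the entry above links to memorization success, is the length of the mean unit phase vector across trials at a given frequency. A minimal sketch on simulated trials follows; the sampling rate (250 Hz), target frequency (4 Hz), and trial counts are illustrative assumptions, not the study's parameters:

```python
import numpy as np

def intertrial_coherence(trials, fs, freq):
    """ITC at one frequency: length of the mean unit phase vector across
    trials (1.0 = identical phase on every trial, near 0 = random phase)."""
    spectra = np.fft.rfft(trials, axis=1)
    freqs = np.fft.rfftfreq(trials.shape[1], 1 / fs)
    k = np.argmin(np.abs(freqs - freq))
    unit_phases = spectra[:, k] / np.abs(spectra[:, k])
    return np.abs(unit_phases.mean())

fs, f0 = 250, 4.0                 # sampling rate and target frequency (assumed)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)

# Phase-locked trials (regular cue): oscillation starts at the same phase
locked = np.array([np.sin(2 * np.pi * f0 * t) + rng.normal(0, 1, t.size)
                   for _ in range(30)])
# Phase-jittered trials (irregular cue): random starting phase each trial
jittered = np.array([np.sin(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi))
                     + rng.normal(0, 1, t.size) for _ in range(30)])
```

Phase-locked trials yield an ITC near 1 even in heavy noise, while phase-jittered trials yield a value near 1/sqrt(n_trials), which is why ITC is a sensitive index of entrainment continuing from cue to speech.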
50
|
Stupacher J, Wood G, Witte M. Neural Entrainment to Polyrhythms: A Comparison of Musicians and Non-musicians. Front Neurosci 2017; 11:208. [PMID: 28446864 PMCID: PMC5388767 DOI: 10.3389/fnins.2017.00208] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2016] [Accepted: 03/28/2017] [Indexed: 11/13/2022] Open
Abstract
Music can be thought of as a dynamic path over time. In most cases, the rhythmic structure of this path, such as specific sequences of strong and weak beats or recurring patterns, allows us to predict what and particularly when sounds are going to happen. Without this ability we would not be able to entrain body movements to music, as we do when we dance. By combining EEG and behavioral measures, the current study provides evidence illustrating the importance of ongoing neural oscillations at beat-related frequencies (i.e., neural entrainment) for tracking and predicting musical rhythms. Participants (13 musicians and 13 non-musicians) listened to drum rhythms that switched from a quadruple rhythm to a 3-over-4 polyrhythm. After a silent period of ~2-3 s, participants had to decide whether a target stimulus was presented on time with the triple beat of the polyrhythm, too early, or too late. Results showed that neural oscillations reflected the rhythmic structure of both the simple quadruple rhythm and the more complex polyrhythm, with no differences between musicians and non-musicians. During silent periods, both time-frequency plots and more conventional frequency-spectrum analyses suggest that beat-related neural oscillations were more pronounced in musicians than in non-musicians. Neural oscillations during silent periods are not driven by an external input and therefore are thought to reflect top-down controlled endogenous neural entrainment. The functional relevance of endogenous neural entrainment was demonstrated by a positive correlation between the amplitude of task-relevant neural oscillations during silent periods and the number of correctly identified target stimuli. In sum, our findings add to the evidence supporting the neural resonance theory of pulse and meter. Furthermore, they indicate that beat-related top-down controlled neural oscillations can exist without external stimulation and suggest that those endogenous oscillations are strengthened by musical expertise. Finally, this study shows that the analysis of neural oscillations can be a useful tool to assess how we perceive and process complex auditory stimuli such as polyrhythms.
Collapse
Affiliation(s)
- Jan Stupacher
- Department of Psychology, University of Graz, Graz, Austria
| | - Guilherme Wood
- Department of Psychology, University of Graz, Graz, Austria; BioTechMed-Graz, Graz, Austria
| | - Matthias Witte
- Department of Psychology, University of Graz, Graz, Austria
| |
Collapse
|