1
Di Stefano N, Spence C. Perceiving temporal structure within and between the senses: A multisensory/crossmodal perspective. Atten Percept Psychophys 2025. [PMID: 40295425] [DOI: 10.3758/s13414-025-03045-2]
Abstract
The literature demonstrates that people perceive temporal structure in sequences of auditory, tactile, or visual stimuli. However, to date, much less attention has been devoted to studying the perception of temporal structure that results from the presentation of stimuli to the chemical senses and/or crossmodally. In this review, we examine the literature on the perception of temporal features in the unisensory, multisensory and crossmodal domains in an attempt to answer, among others, the following foundational questions: Is the ability to perceive the temporal structure of stimuli demonstrated beyond the spatial senses (i.e., in the chemical senses)? Is the intriguing idea of an amodal, or supramodal, temporal processor in the human brain empirically grounded? Is the perception of temporal structure in crossmodal patterns (even) possible? Does the ability to perceive temporal patterns convey any biological advantage to humans? Overall, the reviewed literature suggests that humans perceive rhythmic structures, such as beat and metre, across audition, vision and touch, exhibiting similar behavioural traits. In contrast, only a limited number of studies have demonstrated this ability in crossmodal contexts (e.g., audiotactile interactions). Similar evidence within the chemical senses remains scarce and unconvincing, posing challenges to the concept of an amodal temporal processor and raising questions about its potential biological advantages. These limitations highlight the need for further investigation. To address these gaps, we propose several directions for future research, which may provide valuable insights into the nature and mechanisms of temporal processing across sensory modalities.
Affiliation(s)
- Nicola Di Stefano
- Institute of Cognitive Sciences and Technologies, Via Gian Domenico Romagnosi 18A, 00196, Rome, Italy.
- Charles Spence
- Crossmodal Research Laboratory, University of Oxford, Oxford, UK
2
Rajendran VG, Tsdaka Y, Keung TY, Schnupp JW, Nelken I. Rats synchronize predictively to metronomes. iScience 2024; 27:111053. [PMID: 39507253] [PMCID: PMC11539146] [DOI: 10.1016/j.isci.2024.111053]
Abstract
Predictive auditory-motor synchronization, in which rhythmic movements anticipate rhythmic sounds, is at the core of the human capacity for music. Rodents show impressive capabilities in timing and motor tasks, but their ability to predictively coordinate sensation and action has not been demonstrated. Here, we reveal a clear capacity for predictive auditory-motor synchronization in a rodent species using a modeling approach for the quantitative exploration of synchronization behaviors. We trained 8 rats to synchronize their licking to metronomes with tempi ranging from 0.5 to 2 Hz and observed periodic lick patterns locked to metronome beats. We developed a flexible Markovian modeling framework to formally test how well different candidate strategies could explain the observed lick patterns. The best models required predictive control of licking that could not be explained by reactive strategies, indicating that predictive auditory-motor synchronization may be more widely shared across mammalian species than previously appreciated.
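An illustrative sketch (not the authors' Markov framework) of the distinction the models test: a reactive strategy licks at a fixed latency after each heard beat, whereas a predictive strategy times licks to the upcoming beat, typically slightly early; the sign of the mean lick-to-beat asynchrony separates the two. The 1 Hz period, latencies, and jitter are assumed values chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
period = 1.0                      # illustrative 1 Hz metronome (within the 0.5-2 Hz range)
beats = np.arange(20) * period    # beat onset times in seconds

# Reactive strategy: lick a fixed sensorimotor latency AFTER each beat is heard.
reactive_licks = beats + 0.15 + rng.normal(0, 0.02, beats.size)

# Predictive strategy: time licks to the UPCOMING beat, typically slightly early.
predictive_licks = beats + period - 0.05 + rng.normal(0, 0.02, beats.size)

def mean_asynchrony(licks, beats, period):
    """Signed lick-minus-nearest-beat asynchrony; negative values indicate anticipation."""
    nearest = np.round((licks - beats[0]) / period) * period + beats[0]
    return float(np.mean(licks - nearest))

print("reactive asynchrony:  ", mean_asynchrony(reactive_licks, beats, period))    # ~ +0.15 s
print("predictive asynchrony:", mean_asynchrony(predictive_licks, beats, period))  # ~ -0.05 s
```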
Affiliation(s)
- Vani G. Rajendran
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China
- Instituto de Fisiología Celular, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Yehonadav Tsdaka
- Edmond and Lily Safra Center for Brain Sciences and the Department for Neurobiology, Hebrew University of Jerusalem, Jerusalem, Israel
- Tung Yee Keung
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China
- Jan W.H. Schnupp
- Department of Biomedical Sciences, City University of Hong Kong, Hong Kong, China
- Gerald Choa Neuroscience Institute, The Chinese University of Hong Kong, Hong Kong, China
- Department of Otolaryngology, Chinese University of Hong Kong, Hong Kong SAR, China
- Israel Nelken
- Edmond and Lily Safra Center for Brain Sciences and the Department for Neurobiology, Hebrew University of Jerusalem, Jerusalem, Israel
- Instituto de Fisiología Celular, Universidad Nacional Autónoma de México, Mexico City, Mexico
3
Sebastianelli M, Lukhele SM, Secomandi S, de Souza SG, Haase B, Moysi M, Nikiforou C, Hutfluss A, Mountcastle J, Balacco J, Pelan S, Chow W, Fedrigo O, Downs CT, Monadjem A, Dingemanse NJ, Jarvis ED, Brelsford A, vonHoldt BM, Kirschel ANG. A genomic basis of vocal rhythm in birds. Nat Commun 2024; 15:3095. [PMID: 38653976] [DOI: 10.1038/s41467-024-47305-5]
Abstract
Vocal rhythm plays a fundamental role in sexual selection and species recognition in birds, but little is known of its genetic basis due to the confounding effect of vocal learning in model systems. Uncovering its genetic basis could facilitate identifying genes potentially important in speciation. Here we investigate the genomic underpinnings of rhythm in vocal non-learning Pogoniulus tinkerbirds using 135 individual whole genomes distributed across a southern African hybrid zone. We find rhythm speed is associated with two genes that are also known to affect human speech, Neurexin-1 and Coenzyme Q8A. Models leveraging ancestry reveal these candidate loci also impact rhythmic stability, a trait linked with motor performance which is an indicator of quality. Character displacement in rhythmic stability suggests possible reinforcement against hybridization, supported by evidence of asymmetric assortative mating in the species producing faster, more stable rhythms. Because rhythm is omnipresent in animal communication, candidate genes identified here may shape vocal rhythm across birds and other vertebrates.
Affiliation(s)
- Matteo Sebastianelli
- Department of Biological Sciences, University of Cyprus, PO Box 20537, Nicosia, 1678, Cyprus.
- Department of Medical Biochemistry and Microbiology, Uppsala University, Box 582, 751 23, Uppsala, Sweden.
- Sifiso M Lukhele
- Department of Biological Sciences, University of Cyprus, PO Box 20537, Nicosia, 1678, Cyprus
- Simona Secomandi
- Department of Biological Sciences, University of Cyprus, PO Box 20537, Nicosia, 1678, Cyprus
- Stacey G de Souza
- Department of Biological Sciences, University of Cyprus, PO Box 20537, Nicosia, 1678, Cyprus
- Bettina Haase
- Vertebrate Genome Lab, The Rockefeller University, New York, NY, USA
- Michaella Moysi
- Department of Biological Sciences, University of Cyprus, PO Box 20537, Nicosia, 1678, Cyprus
- Christos Nikiforou
- Department of Biological Sciences, University of Cyprus, PO Box 20537, Nicosia, 1678, Cyprus
- Alexander Hutfluss
- Behavioural Ecology, Faculty of Biology, LMU Munich (LMU), 82152, Planegg-Martinsried, Germany
- Jennifer Balacco
- Vertebrate Genome Lab, The Rockefeller University, New York, NY, USA
- Olivier Fedrigo
- Vertebrate Genome Lab, The Rockefeller University, New York, NY, USA
- Colleen T Downs
- Centre for Functional Biodiversity, School of Life Sciences, University of KwaZulu-Natal, Pietermaritzburg, 3209, South Africa
- Ara Monadjem
- Department of Biological Sciences, University of Eswatini, Kwaluseni, Eswatini
- Mammal Research Institute, Department of Zoology & Entomology, University of Pretoria, Private Bag 20, Hatfield, 0028, Pretoria, South Africa
- Niels J Dingemanse
- Behavioural Ecology, Faculty of Biology, LMU Munich (LMU), 82152, Planegg-Martinsried, Germany
- Erich D Jarvis
- Vertebrate Genome Lab, The Rockefeller University, New York, NY, USA
- Laboratory of Neurogenetics of Language, The Rockefeller University, New York, NY, USA
- Howard Hughes Medical Institute, Chevy Chase, MD, USA
- Alan Brelsford
- Department of Evolution, Ecology and Organismal Biology, University of California Riverside, Riverside, CA, 92521, USA
- Bridgett M vonHoldt
- Department of Ecology & Evolutionary Biology, Princeton University, Princeton, NJ, 08544, USA
- Alexander N G Kirschel
- Department of Biological Sciences, University of Cyprus, PO Box 20537, Nicosia, 1678, Cyprus.
4
Chen S, Thielk M, Gentner TQ. Auditory Feature-based Perceptual Distance. bioRxiv 2024:2024.02.28.582631. [PMID: 38464215] [PMCID: PMC10925319] [DOI: 10.1101/2024.02.28.582631]
Abstract
Studies comparing acoustic signals often rely on pixel-wise differences between spectrograms, as in, for example, mean squared error (MSE). Pixel-wise errors are not representative of perceptual sensitivity, however, and such measures can be highly sensitive to small local signal changes that may be imperceptible. In computer vision, high-level visual features extracted with convolutional neural networks (CNNs) can be used to calculate the fidelity of computer-generated images. Here, we propose the auditory perceptual distance (APD) metric based on acoustic features extracted with an unsupervised CNN and validated by perceptual behavior. Using complex vocal signals from songbirds, we trained a Siamese CNN on a self-supervised task using spectrograms rescaled to match the auditory frequency sensitivity of European starlings, Sturnus vulgaris. We define APD for any pair of sounds as the cosine distance between corresponding feature vectors extracted by the trained CNN. We show that APD is more robust to temporal and spectral translation than MSE, and captures the sigmoidal shape of typical behavioral psychometric functions over complex acoustic spaces. When fine-tuned using starlings' behavioral judgments of naturalistic song syllables, the APD model yields even more accurate predictions of perceptual sensitivity, discrimination, and categorization on novel complex (high-dimensional) acoustic dimensions, including diverging decisions for identical stimuli following different training conditions. Thus, the APD model outperforms MSE in robustness and perceptual accuracy, and offers tunability to match experience-dependent perceptual biases.
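A minimal sketch of the two distances contrasted in this abstract: pixel-wise MSE between spectrograms versus APD as the cosine distance between feature vectors from a trained CNN. The `fake_extractor` stand-in (a mean-pooled random projection) exists only so the snippet runs; in the paper the features come from the self-supervised Siamese CNN.

```python
import numpy as np

def mse_distance(spec_a: np.ndarray, spec_b: np.ndarray) -> float:
    """Pixel-wise mean squared error between two equally sized spectrograms."""
    return float(np.mean((spec_a - spec_b) ** 2))

def apd(spec_a: np.ndarray, spec_b: np.ndarray, feature_extractor) -> float:
    """Auditory perceptual distance: cosine distance between CNN feature vectors."""
    fa = feature_extractor(spec_a).ravel()
    fb = feature_extractor(spec_b).ravel()
    cos_sim = np.dot(fa, fb) / (np.linalg.norm(fa) * np.linalg.norm(fb))
    return 1.0 - float(cos_sim)

# Stand-in "extractor" (mean-pooled random projection) just to make the example runnable;
# a real APD would use features from the trained Siamese CNN described in the abstract.
rng = np.random.default_rng(1)
proj = rng.normal(size=(128, 64))
fake_extractor = lambda spec: spec.reshape(128, -1).mean(axis=1) @ proj

s1, s2 = rng.random((128, 100)), rng.random((128, 100))
print(mse_distance(s1, s2), apd(s1, s2, fake_extractor))
```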
Affiliation(s)
- Shukai Chen
- Department of Bioengineering, University of California, San Diego, La Jolla, CA, 92093
- Department of Psychology, University of California, San Diego, La Jolla, CA 92093
- Marvin Thielk
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA, 92093
- Timothy Q. Gentner
- Department of Psychology, University of California, San Diego, La Jolla, CA 92093
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA, 92093
- Neurobiology Section, Division of Biological Sciences, University of California, San Diego, La Jolla, CA, 92093
5
Crespo-Bojorque P, Cauvet E, Pallier C, Toro JM. Recognizing structure in novel tunes: differences between human and rats. Anim Cogn 2024; 27:17. [PMID: 38429431] [PMCID: PMC10907461] [DOI: 10.1007/s10071-024-01848-8]
Abstract
A central feature in music is the hierarchical organization of its components. Musical pieces are not a simple concatenation of chords, but are characterized by rhythmic and harmonic structures. Here, we explore if sensitivity to music structure might emerge in the absence of any experience with musical stimuli. For this, we tested if rats detect the difference between structured and unstructured musical excerpts and compared their performance with that of humans. Structured melodies were excerpts of Mozart's sonatas. Unstructured melodies were created by the recombination of fragments of different sonatas. We trained listeners (both human participants and Long-Evans rats) with a set of structured and unstructured excerpts, and tested them with completely novel excerpts they had not heard before. After hundreds of training trials, rats were able to tell apart novel structured from unstructured melodies. Human listeners required only a few trials to reach better performance than rats. Interestingly, such performance was increased in humans when tonality changes were included, while it decreased to chance in rats. Our results suggest that, with enough training, rats might learn to discriminate acoustic differences differentiating hierarchical music structures from unstructured excerpts. More importantly, the results point toward species-specific adaptations on how tonality is processed.
Affiliation(s)
- Elodie Cauvet
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, Gif-Sur-Yvette, France
- DIS Study Abroad in Scandinavia, Stockholm, Sweden
- Christophe Pallier
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin Center, Gif-Sur-Yvette, France
- Juan M Toro
- Universitat Pompeu Fabra, C. Ramon Trias Fargas, 25-27, CP. 08005, Barcelona, Spain.
- Institució Catalana de Recerca I Estudis Avançats (ICREA), Barcelona, Spain.
6
Verga L, Kotz SA, Ravignani A. The evolution of social timing. Phys Life Rev 2023; 46:131-151. [PMID: 37419011] [DOI: 10.1016/j.plrev.2023.06.006]
Abstract
Sociality and timing are tightly interrelated in human interaction, as seen in turn-taking or synchronised dance movements. Sociality and timing are also evident in the communicative acts of other species, acts that may be pleasurable but are also necessary for survival. Sociality and timing often co-occur, but their shared phylogenetic trajectory is unknown: How, when, and why did they become so tightly linked? Answering these questions is complicated by several constraints; these include the use of divergent operational definitions across fields and species, the focus on diverse mechanistic explanations (e.g., physiological, neural, or cognitive), and the frequent adoption of anthropocentric theories and methodologies in comparative research. These limitations hinder the development of an integrative framework on the evolutionary trajectory of social timing and make comparative studies less fruitful than they could be. Here, we outline a theoretical and empirical framework to test contrasting hypotheses on the evolution of social timing with species-appropriate paradigms and consistent definitions. To facilitate future research, we introduce an initial set of representative species and empirical hypotheses. The proposed framework aims at building and contrasting evolutionary trees of social timing toward and beyond the crucial branch represented by our own lineage. Given the integration of cross-species and quantitative approaches, this research line might lead to an integrated empirical-theoretical paradigm and, as a long-term goal, explain why humans are such socially coordinated animals.
Affiliation(s)
- Laura Verga
- Comparative Bioacoustic Group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands.
- Sonja A Kotz
- Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Andrea Ravignani
- Comparative Bioacoustic Group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Department of Human Neurosciences, Sapienza University of Rome, Rome, Italy
7
Fiveash A, Ferreri L, Bouwer FL, Kösem A, Moghimi S, Ravignani A, Keller PE, Tillmann B. Can rhythm-mediated reward boost learning, memory, and social connection? Perspectives for future research. Neurosci Biobehav Rev 2023; 149:105153. [PMID: 37019245] [DOI: 10.1016/j.neubiorev.2023.105153]
Abstract
Studies of rhythm processing and of reward have progressed separately, with little connection between the two. However, consistent links between rhythm and reward are beginning to surface, with research suggesting that synchronization to rhythm is rewarding, and that this rewarding element may in turn also boost synchronization. The current mini review shows that the combined study of rhythm and reward can be beneficial to better understand their independent and combined roles across two central aspects of cognition: 1) learning and memory, and 2) social connection and interpersonal synchronization, which have so far been studied largely independently. On this basis, it is discussed how connections between rhythm and reward can be applied to learning and memory and social connection across different populations, taking into account individual differences, clinical populations, human development, and animal research. Future research will need to consider the rewarding nature of rhythm, and that rhythm can in turn boost reward, potentially enhancing other cognitive and social processes.
Affiliation(s)
- A Fiveash
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR 5292, INSERM U1028, F-69000 Lyon, France; University of Lyon 1, Lyon, France; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia.
- L Ferreri
- Department of Brain and Behavioural Sciences, University of Pavia, Pavia, Italy; Laboratoire d'Étude des Mécanismes Cognitifs, Université Lumière Lyon 2, Lyon, France
- F L Bouwer
- Department of Psychology, Brain and Cognition, University of Amsterdam, Amsterdam, the Netherlands
- A Kösem
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR 5292, INSERM U1028, F-69000 Lyon, France
- S Moghimi
- Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, INSERM U1105, Amiens, France
- A Ravignani
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, the Netherlands; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
- P E Keller
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, Australia; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Denmark
- B Tillmann
- Lyon Neuroscience Research Center, CRNL, CNRS, UMR 5292, INSERM U1028, F-69000 Lyon, France; University of Lyon 1, Lyon, France; Laboratory for Research on Learning and Development, LEAD - CNRS UMR5022, Université de Bourgogne, Dijon, France
8
Cortical encoding of rhythmic kinematic structures in biological motion. Neuroimage 2023; 268:119893. [PMID: 36693597] [DOI: 10.1016/j.neuroimage.2023.119893]
Abstract
Biological motion (BM) perception is of great survival value to human beings. The critical characteristics of BM information lie in kinematic cues containing rhythmic structures. However, how rhythmic kinematic structures of BM are dynamically represented in the brain and contribute to visual BM processing remains largely unknown. Here, we probed this issue in three experiments using electroencephalogram (EEG). We found that neural oscillations of observers entrained to the hierarchical kinematic structures of the BM sequences (i.e., step-cycle and gait-cycle for point-light walkers). Notably, only the cortical tracking of the higher-level rhythmic structure (i.e., gait-cycle) exhibited a BM processing specificity, manifested by enhanced neural responses to upright over inverted BM stimuli. This effect could be extended to different motion types and tasks, with its strength positively correlated with the perceptual sensitivity to BM stimuli at the right temporal brain region dedicated to visual BM processing. Modeling results further suggest that the neural encoding of spatiotemporally integrative kinematic cues, in particular the opponent motions of bilateral limbs, drives the selective cortical tracking of BM information. These findings underscore the existence of a cortical mechanism that encodes periodic kinematic features of body movements, which underlies the dynamic construction of visual BM perception.
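One common way to quantify cortical tracking at the stimulus rhythms described here is to read out spectral amplitude at the gait-cycle and step-cycle frequencies of the point-light sequences; the sketch below illustrates that logic on synthetic data (assumed sampling rate, frequencies, and signals) and is not the study's actual analysis pipeline.

```python
import numpy as np

fs = 250.0                         # assumed EEG sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)       # 60 s of recording
gait_f, step_f = 0.6, 1.2          # illustrative gait-cycle and step-cycle frequencies (Hz)

def amplitude_at(signal, freq, fs):
    """Amplitude of the Fourier component closest to `freq`."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# Synthetic "upright" response entrains at the gait-cycle rate; "inverted" does not.
rng = np.random.default_rng(2)
upright = 2.0 * np.sin(2 * np.pi * gait_f * t) + rng.normal(0, 1, t.size)
inverted = rng.normal(0, 1, t.size)

for label, eeg in [("upright", upright), ("inverted", inverted)]:
    print(label, amplitude_at(eeg, gait_f, fs), amplitude_at(eeg, step_f, fs))
```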
9
Chen WG, Iversen JR, Kao MH, Loui P, Patel AD, Zatorre RJ, Edwards E. Music and Brain Circuitry: Strategies for Strengthening Evidence-Based Research for Music-Based Interventions. J Neurosci 2022; 42:8498-8507. [PMID: 36351825] [PMCID: PMC9665917] [DOI: 10.1523/jneurosci.1135-22.2022]
Abstract
The neuroscience of music and music-based interventions (MBIs) is a fascinating but challenging research field. While music is a ubiquitous component of every human society, MBIs may encompass listening to music, performing music, music-based movement, undergoing music education and training, or receiving treatment from music therapists. Unraveling the brain circuits activated and influenced by MBIs may help us gain a better understanding of the therapeutic and educational values of MBIs by gathering strong research evidence. However, the complexity and variety of MBIs impose unique research challenges. This article reviews the recent endeavor led by the National Institutes of Health to support evidence-based research of MBIs and their impact on health and diseases. It also highlights fundamental challenges and strategies of MBI research with emphases on the utilization of animal models, human brain imaging and stimulation technologies, behavior and motion capturing tools, and computational approaches. It concludes with suggestions of basic requirements when studying MBIs and promising future directions to further strengthen evidence-based research on MBIs in connection with brain circuitry.
SIGNIFICANCE STATEMENT: Music and music-based interventions (MBIs) engage a wide range of brain circuits and hold promising therapeutic potential for a variety of health conditions. Comparative studies using animal models have helped uncover brain circuit activities involved in rhythm perception, while human imaging, brain stimulation, and motion capture technologies have enabled neural circuit analysis underlying the effects of MBIs on motor, affective/reward, and cognitive function. Combining computational analysis, such as predictive methods, with mechanistic studies in animal models and humans may unravel the complexity of MBIs and their effects on health and disease.
Affiliation(s)
- Wen Grace Chen
- Division of Extramural Research, National Center for Complementary and Integrative Health, National Institutes of Health, Bethesda, Maryland, 20892
- Mimi H Kao
- Tufts University, Medford, Massachusetts 02155
- Psyche Loui
- Northeastern University, Boston, Massachusetts 02115
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A2B4, Canada
- Emmeline Edwards
- Division of Extramural Research, National Center for Complementary and Integrative Health, National Institutes of Health, Bethesda, Maryland, 20892
10
Verga L, Sroka MGU, Varola M, Villanueva S, Ravignani A. Spontaneous rhythm discrimination in a mammalian vocal learner. Biol Lett 2022; 18:20220316. [PMID: 36285461] [PMCID: PMC9597408] [DOI: 10.1098/rsbl.2022.0316]
Abstract
Rhythm and vocal production learning are building blocks of human music and speech. Vocal learning has been hypothesized as a prerequisite for rhythmic capacities. Yet no mammalian vocal learner other than humans has shown the capacity to flexibly and spontaneously discriminate rhythmic patterns. Here we tested untrained rhythm discrimination in a mammalian vocal learning species, the harbour seal (Phoca vitulina). Twenty wild-born seals were exposed to music-like playbacks of conspecific call sequences varying in basic rhythmic properties. These properties were call length, sequence regularity, and overall tempo. All three features significantly influenced seals' reactions (the number and duration of looks), demonstrating spontaneous rhythm discrimination in a vocal learning mammal. This finding supports the rhythm–vocal learning hypothesis and showcases pinnipeds as promising models for comparative research on rhythmic phylogenies.
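A minimal sketch of how playback sequences could be parameterized along the three rhythmic properties manipulated here: call length (call duration), sequence regularity (jitter on the inter-onset intervals), and overall tempo (mean inter-onset interval). The numeric values are illustrative assumptions, not the study's stimulus parameters.

```python
import numpy as np

def make_sequence(n_calls=10, tempo_ioi=0.8, jitter_sd=0.0, call_dur=0.3, seed=0):
    """Return (onsets, durations) for a call sequence.

    tempo_ioi : mean inter-onset interval in seconds (overall tempo)
    jitter_sd : SD of Gaussian jitter on each IOI; 0 gives a fully regular (isochronous) sequence
    call_dur  : duration of each call (call length)
    """
    rng = np.random.default_rng(seed)
    iois = tempo_ioi + rng.normal(0, jitter_sd, n_calls - 1)
    onsets = np.concatenate([[0.0], np.cumsum(np.clip(iois, 0.05, None))])
    return onsets, np.full(n_calls, call_dur)

regular_fast = make_sequence(tempo_ioi=0.5, jitter_sd=0.0)     # regular, fast tempo
irregular_slow = make_sequence(tempo_ioi=1.0, jitter_sd=0.2)   # irregular, slow tempo
print(np.diff(regular_fast[0]), np.diff(irregular_slow[0]))
```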
Affiliation(s)
- Laura Verga
- Comparative Bioacoustics Research Group, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
- Marlene G. U. Sroka
- Department of Behavioural Biology, University of Münster, Münster, Germany; Research Department, Sealcentre Pieterburen, Pieterburen, The Netherlands
- Mila Varola
- Comparative Bioacoustics Research Group, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Research Department, Sealcentre Pieterburen, Pieterburen, The Netherlands
- Stella Villanueva
- Research Department, Sealcentre Pieterburen, Pieterburen, The Netherlands
- Andrea Ravignani
- Comparative Bioacoustics Research Group, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Research Department, Sealcentre Pieterburen, Pieterburen, The Netherlands; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark
11
Xing J, Sainburg T, Taylor H, Gentner TQ. Syntactic modulation of rhythm in Australian pied butcherbird song. R Soc Open Sci 2022; 9:220704. [PMID: 36177196] [PMCID: PMC9515642] [DOI: 10.1098/rsos.220704]
Abstract
The acoustic structure of birdsong is spectrally and temporally complex. Temporal complexity is often investigated in a syntactic framework focusing on the statistical features of symbolic song sequences. Alternatively, temporal patterns can be investigated in a rhythmic framework that focuses on the relative timing between song elements. Here, we investigate the merits of combining both frameworks by integrating syntactic and rhythmic analyses of Australian pied butcherbird (Cracticus nigrogularis) songs, which exhibit organized syntax and diverse rhythms. We show that rhythms of the pied butcherbird song bouts in our sample are categorically organized and predictable by the song's first-order sequential syntax. These song rhythms remain categorically distributed and strongly associated with the first-order sequential syntax even after controlling for variance in note length, suggesting that the silent intervals between notes induce a rhythmic structure on note sequences. We discuss the implication of syntactic-rhythmic relations as a relevant feature of song complexity with respect to signals such as human speech and music, and advocate for a broader conception of song complexity that takes into account syntax, rhythm, and their interaction with other acoustic and perceptual features.
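A minimal sketch of the combined analysis described above: compute inter-onset intervals (the rhythmic framework) and group them by the first-order note transition that spans them (the syntactic framework); tight within-transition clustering would indicate rhythm that is predictable from first-order syntax. The note labels and onset times are invented placeholders, not data from the paper.

```python
from collections import defaultdict
import numpy as np

# Illustrative bout: symbolic note labels with their onset times (seconds).
notes  = ["A", "B", "A", "C", "A", "B", "A", "C"]
onsets = np.array([0.0, 0.45, 0.90, 1.60, 2.05, 2.50, 2.95, 3.65])

# Inter-onset intervals indexed by the first-order transition that spans them.
iois_by_transition = defaultdict(list)
for i in range(len(notes) - 1):
    transition = (notes[i], notes[i + 1])
    iois_by_transition[transition].append(onsets[i + 1] - onsets[i])

# If rhythm is predictable from first-order syntax, IOIs cluster tightly within transitions.
for transition, iois in iois_by_transition.items():
    print(transition, np.round(np.mean(iois), 3), np.round(np.std(iois), 3))
```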
Affiliation(s)
- Jeffrey Xing
- Department of Psychology, University of California San Diego, La Jolla, CA, USA
- Tim Sainburg
- Department of Psychology, University of California San Diego, La Jolla, CA, USA
- Hollis Taylor
- Sydney Conservatorium of Music, University of Sydney, Sydney, New South Wales, Australia
- Timothy Q. Gentner
- Department of Psychology, University of California San Diego, La Jolla, CA, USA
- Neurobiology Section, Division of Biological Sciences, University of California San Diego, La Jolla, CA, USA
- Kavli Institute for Brain and Mind, University of California San Diego, La Jolla, CA, USA
12
Xing J, Sainburg T, Taylor H, Gentner TQ. Syntactic modulation of rhythm in Australian pied butcherbird song. R Soc Open Sci 2022; 9:220704. [PMID: 36177196] [DOI: 10.6084/m9.figshare.c.6197494] (duplicate record of entry 11)
13
Kriengwatana BP, Mott R, ten Cate C. Music for animal welfare: a critical review & conceptual framework. Appl Anim Behav Sci 2022. [DOI: 10.1016/j.applanim.2022.105641]
14
Mahmod SR, Narayanan LT, Abu Hasan R, Supriyanto E. Regulated Monosyllabic Talk Test vs. Counting Talk Test During Incremental Cardiorespiratory Exercise: Determining the Implications of the Utterance Rate on Exercise Intensity Estimation. Front Physiol 2022; 13:832647. [PMID: 35422713] [PMCID: PMC9002174] [DOI: 10.3389/fphys.2022.832647]
Abstract
Purpose: When utilizing breathing for speech, the rate and volume of inhalation, as well as the rate of exhalation during the utterance, seem to be largely governed by the speech-controlling system and its requirements with respect to phrasing, loudness, and articulation. However, since the Talk Test represents a non-standardized form of assessment of exercise intensity estimation, this study aimed to compare the utterance rate and the estimated exercise intensity using a newly introduced time-controlled monosyllabic Talk Test (tMTT) versus a self-paced Counting Talk Test (CTT) across incremental exercise stages and examined their associations with the exercise physiological measures.
Methods: Twenty-four participants, 10 males and 14 females (25 ± 4.0 yr; 160 ± 10 cm; 62 ± 14.5 kg), performed two sessions of submaximal cardiorespiratory exercise at incremental heart rate reserve (HRR) stages ranging from 40 to 85% of HRR: one session was performed with a currently available CTT that was affixed to a wall in front of the participants, and the other session was conducted with a tMTT with a 1-s inter-stimulus interval that was displayed on a tablet. In each session, the participants performed six stages of exercise at 40, 50, 60, 70, 80, and 85% HRR on a treadmill and were also asked to rate their perceived exertion based on Borg's 6 to 20 Rating of Perceived Exertion (RPE) at each exercise stage.
Results: The newly designed tMTT significantly delineated all six stages of incremental exercise (p ≤ 0.017), while the CTT could only delineate exercise stages at 60, 80, and 85% HRR. However, in estimations of exercise intensity, the tMTT demonstrated only moderate associations with HRR and Borg's RPE, similarly to the CTT.
Conclusion: If the purpose of exercise monitoring is to detect light, moderate, and vigorous exercise intensities, the tMTT could be more universally applicable. However, due to its larger variability of speech rate across exercise intensities, the time-regulated approach may alter the speech breathing characteristics of the exercising individuals in other ways that should be investigated in future research.
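For readers unfamiliar with %HRR staging, the sketch below maps the six exercise stages onto target heart rates using the standard Karvonen (heart-rate reserve) formula; the resting and maximal heart rates are assumed values, and the abstract does not state which prescription formula the authors used.

```python
def target_hr(pct_hrr: float, hr_rest: float, hr_max: float) -> float:
    """Karvonen formula: target HR = resting HR + %HRR x (max HR - resting HR)."""
    return hr_rest + pct_hrr * (hr_max - hr_rest)

hr_rest, hr_max = 70.0, 195.0   # illustrative values (e.g., 220 - age for a 25-year-old)
for pct in (0.40, 0.50, 0.60, 0.70, 0.80, 0.85):
    print(f"{int(pct * 100)}% HRR -> {target_hr(pct, hr_rest, hr_max):.0f} bpm")
```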
Affiliation(s)
- Siti Ruzita Mahmod
- Cardiorespiratory Physiotherapy Laboratory, School of Biomedical Engineering and Health Sciences, Faculty of Engineering, Universiti Teknologi Malaysia, Johor Bahru, Malaysia
- Leela T. Narayanan
- Cardiorespiratory Physiotherapy Laboratory, School of Biomedical Engineering and Health Sciences, Faculty of Engineering, Universiti Teknologi Malaysia, Johor Bahru, Malaysia
- Rumaisa Abu Hasan
- Cardiorespiratory Physiotherapy Laboratory, School of Biomedical Engineering and Health Sciences, Faculty of Engineering, Universiti Teknologi Malaysia, Johor Bahru, Malaysia
- Eko Supriyanto
- Department of Biomedical Engineering, School of Biomedical Engineering and Health Sciences, Faculty of Engineering, Universiti Teknologi Malaysia, Johor Bahru, Malaysia
15
Burchardt LS, Picciulin M, Parmentier E, Bolgan M. A primer on rhythm quantification for fish sounds: a Mediterranean case study. R Soc Open Sci 2021; 8:210494. [PMID: 34567587] [PMCID: PMC8456132] [DOI: 10.1098/rsos.210494]
Abstract
We used a recently established workflow to quantify rhythms of three fish sound types recorded in different areas of the Mediterranean Sea. So far, the temporal structure of fish sound sequences has only been described qualitatively. Here, we propose a standardized approach to quantify them, opening the way for assessment and comparison of an often underestimated but potentially critical aspect of fish sounds. Our approach is based on the analysis of inter-onset intervals (IOIs), the intervals between the start of one sound element and the next. We calculate exact beats of a sequence using Fourier analysis and IOI analysis. Furthermore, we report on important parameters describing the variability in timing within a given sound sequence. Datasets were chosen to depict different possible rhythmic properties: Sciaena umbra sounds have a simple isochronous, metronome-like rhythm. The /Kwa/ sound type emitted by Scorpaena spp. has a more complex rhythm, still presenting an underlying isochronous pattern. Calls of Ophidion rochei males present no rhythm, but a random temporal succession of sounds. This approach holds great potential for shedding light on important aspects of fish bioacoustics. Applications span from the characterization of specific behaviours to the potential discrimination of species that are not yet distinguishable.
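A minimal sketch of the two complementary beat estimates named above, computed from a list of sound-element onset times: an IOI-based beat (reciprocal of the median inter-onset interval, with the coefficient of variation as a timing-variability measure) and a Fourier-based beat taken from the spectrum of a binary onset train. The onset times and grid resolution are illustrative assumptions, not the published workflow code.

```python
import numpy as np

onsets = np.array([0.00, 0.52, 1.01, 1.49, 2.02, 2.51, 3.00])   # element onset times (s)

# IOI analysis: interval statistics and an IOI-based beat estimate.
iois = np.diff(onsets)
ioi_beat_hz = 1.0 / np.median(iois)
cv = np.std(iois) / np.mean(iois)        # variability in timing within the sequence

# Fourier analysis: spectrum of a binary onset train on a fine time grid.
dt = 0.01                                 # 10 ms grid
grid = np.zeros(int(onsets[-1] / dt) + 1)
grid[np.round(onsets / dt).astype(int)] = 1.0
freqs = np.fft.rfftfreq(grid.size, d=dt)
power = np.abs(np.fft.rfft(grid))
fourier_beat_hz = freqs[1:][np.argmax(power[1:])]   # skip the DC component

print(f"IOI beat: {ioi_beat_hz:.2f} Hz, CV of IOIs: {cv:.3f}, Fourier beat: {fourier_beat_hz:.2f} Hz")
```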
Affiliation(s)
- Lara S Burchardt
- Museum für Naturkunde - Leibniz Institute for Evolution and Biodiversity Science, Invalidenstraße 43, 10115 Berlin, Germany
- Institute of Animal Behaviour, Freie Universität Berlin, Takustr. 6, 14195 Berlin, Germany
- Eric Parmentier
- Laboratory of Functional and Evolutionary Morphology (Freshwater and Oceanic sCience Unit of reSearch), Institut de Chimie B6c, University of Liège, Liège, Belgium
- Marta Bolgan
- Laboratory of Functional and Evolutionary Morphology (Freshwater and Oceanic sCience Unit of reSearch), Institut de Chimie B6c, University of Liège, Liège, Belgium