1. Gonan S, Vallortigara G, Chiandetti C. When sounds come alive: animacy in the auditory sense. Front Psychol 2024;15:1498702. PMID: 39526129; PMCID: PMC11543492; DOI: 10.3389/fpsyg.2024.1498702.
Abstract
Despite the interest in animacy perception, few studies have considered sensory modalities other than vision. However, even everyday experience suggests that the auditory sense can also contribute to the recognition of animate beings, for example through the identification of voice-like sounds or through the perception of sounds that are by-products of locomotion. Here we review the studies that have investigated the responses of humans and other animals to different acoustic features that may indicate the presence of a living entity, with particular attention to the neurophysiological mechanisms underlying such perception. Specifically, we identify three auditory animacy cues in the existing literature: voicelikeness, consonance, and acoustic motion. The first two are exclusive to the auditory sense and indicate the presence of an animate being capable of producing vocalizations or harmonic sounds (the adaptive value of consonance is also exploited in musical compositions in which the musician wants to convey certain meanings). Acoustic motion, by contrast, is closely linked to the perception of animacy in the visual sense, in particular to self-propelled and biological motion stimuli. The results presented here support the existence of a multifaceted auditory sense of animacy that is shared by distantly related species and probably represents an innate predisposition; they also suggest that the mechanisms underlying the perception of living things may all be part of an integrated network involving different sensory modalities.
Affiliation(s)
- Stefano Gonan
- Department of Life Sciences, University of Trieste, Trieste, Italy
2. Hersh TA, Ravignani A, Whitehead H. Cetaceans are the next frontier for vocal rhythm research. Proc Natl Acad Sci U S A 2024;121:e2313093121. PMID: 38814875; PMCID: PMC11194516; DOI: 10.1073/pnas.2313093121.
Abstract
While rhythm can facilitate and enhance many aspects of behavior, its evolutionary trajectory in vocal communication systems remains enigmatic. We can trace evolutionary processes by investigating rhythmic abilities in different species, but research to date has largely focused on songbirds and primates. We present evidence that cetaceans (whales, dolphins, and porpoises) are a missing piece of the puzzle for understanding why rhythm evolved in vocal communication systems. Cetaceans not only produce rhythmic vocalizations but also exhibit behaviors known or thought to play a role in the evolution of different features of rhythm. These behaviors include vocal learning abilities, advanced breathing control, sexually selected vocal displays, prolonged mother-infant bonds, and behavioral synchronization. The untapped comparative potential of cetaceans is further enhanced by high interspecific diversity, which generates natural ranges of vocal and social complexity for investigating various evolutionary hypotheses. We show that rhythm (particularly isochronous rhythm, in which sounds are equally spaced in time) is prevalent in cetacean vocalizations but is used in different contexts by baleen and toothed whales. We also highlight key questions and research areas that will enhance understanding of vocal rhythms across taxa. By coupling an infraorder-level taxonomic assessment of vocal rhythm production with comparisons to other species, we illustrate how broadly comparative research can contribute to a more nuanced understanding of the prevalence, evolution, and possible functions of rhythm in animal communication.
Affiliation(s)
- Taylor A. Hersh
- Marine Mammal Institute, Oregon State University, Newport, OR 97365
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands
- Department of Biology, Dalhousie University, Halifax, NS B3H 4R2, Canada
- Andrea Ravignani
- Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Nijmegen 6525 XD, The Netherlands
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus 8000, Denmark
- Department of Human Neurosciences, Sapienza University of Rome, Rome 00185, Italy
- Hal Whitehead
- Department of Biology, Dalhousie University, Halifax, NS B3H 4R2, Canada
3. Pouw W, Fuchs S. Origins of vocal-entangled gesture. Neurosci Biobehav Rev 2022;141:104836. PMID: 36031008; DOI: 10.1016/j.neubiorev.2022.104836.
Abstract
Gestures during speaking are typically understood in a representational framework: they represent absent or distal states of affairs by means of pointing, resemblance, or symbolic replacement. However, humans also gesture along with the rhythm of speaking, which is amenable to a non-representational perspective. Such a perspective centers on the phenomenon of vocal-entangled gestures and builds on evidence showing that when an upper limb with a certain mass decelerates/accelerates sufficiently, it yields impulses on the body that cascade in various ways into the respiratory-vocal system. It entails a physical entanglement between body motions, respiration, and vocal activities. It is shown that vocal-entangled gestures are realized in infant vocal-motor babbling before any representational use of gesture develops. Similarly, an overview is given of vocal-entangled processes in non-human animals; they are frequently found in rats, bats, birds, and a range of other species that diverged even earlier in the phylogenetic tree. Thus, the origins of human gesture lie in biomechanics, emerging early in ontogeny and running deep in phylogeny.
Affiliation(s)
- Wim Pouw
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands.
- Susanne Fuchs
- Leibniz Center General Linguistics, Berlin, Germany.
4. Papachatzis N, Slivka DR, Pipinos II, Schmid KK, Takahashi KZ. Does the heel’s dissipative energetic behavior affect its thermodynamic responses during walking? Front Bioeng Biotechnol 2022;10:908725. PMID: 35832413; PMCID: PMC9271620; DOI: 10.3389/fbioe.2022.908725.
Abstract
Most terrestrial legged gaits, like human walking, necessitate energy dissipation upon ground collision. In humans, the heel performs mostly net-negative work during collisions, and it is currently unclear how it dissipates that energy. Based on the laws of thermodynamics, one possibility is that the net-negative collision work is dissipated as heat. If supported, such a finding would inform the thermoregulation capacity of human feet, which may have implications for understanding foot complications and tissue damage. Here, we examined the correlation between energy dissipation and thermal responses by experimentally increasing the heel’s collisional forces. Twenty healthy young adults walked overground on force plates and for 10 min on a treadmill (both at 1.25 m s−1) while wearing a vest with three different levels of added mass (+0%, +15%, and +30% of body mass). We estimated the heel’s work using a unified deformable segment analysis during overground walking and measured the heel’s temperature immediately before and after each treadmill trial. We hypothesized that the heel’s temperature and net-negative work would increase when walking with added mass, and that the temperature change would be correlated with the increased net-negative work. We found that walking with +30% added mass significantly increased the heel’s temperature change by 0.72 ± 1.91 °C (p = 0.009) and the magnitude of net-negative work (extrapolated to 10 min of walking) by 326.94 ± 379.92 J (p = 0.005). However, we found no correlation between the heel’s net-negative work and temperature changes (p = 0.277). While this result refuted our second hypothesis, our findings likely demonstrate the heel’s dynamic thermoregulatory capacity: if all the negative work were dissipated as heat, we would expect excessive skin temperature elevation during prolonged walking, which may cause skin complications. Therefore, our results likely indicate that various heat dissipation mechanisms control the heel’s thermodynamic responses, which may protect the health and integrity of the surrounding tissue, and that additional mechanical factors besides energy dissipation explain the heel’s temperature rise. Future experiments may therefore explore alternative factors affecting thermodynamic responses, including mechanical (e.g., sound and shear stress) and physiological mechanisms (e.g., sweating, local metabolic rate, and blood flow).
Affiliation(s)
- Nikolaos Papachatzis
- Department of Biomechanics, University of Nebraska at Omaha, Omaha, NE, United States
- Dustin R. Slivka
- School of Health and Kinesiology, University of Nebraska at Omaha, Omaha, NE, United States
- Iraklis I. Pipinos
- Department of Surgery, University of Nebraska Medical Center, Omaha, NE, United States
- Kendra K. Schmid
- Department of Biostatistics, University of Nebraska Medical Center, Omaha, NE, United States
- Kota Z. Takahashi
- Department of Biomechanics, University of Nebraska at Omaha, Omaha, NE, United States
- *Correspondence: Kota Z. Takahashi
5. Rösch AD, Taub E, Gschwandtner U, Fuhr P. Evaluating a speech-specific and a computerized step-training-specific rhythmic intervention in Parkinson's disease: a cross-over, multi-arms parallel study. Front Rehabil Sci 2022;2:783259. PMID: 36188780; PMCID: PMC9397933; DOI: 10.3389/fresc.2021.783259.
Abstract
Background: Recent studies suggest that movements of speech and gait in patients with Parkinson's disease (PD) are impaired by a common underlying rhythmic dysfunction. If this is the case, motor deficits in speech and gait should benefit equally from rhythmic interventions, regardless of whether the approach is speech-specific or step-training-specific. Objective: In this intervention trial, we studied the effects of two rhythmic interventions on speech and gait. These programs are similar in intensity and frequency (i.e., 3x per week, 45-min sessions, for 4 weeks in total) but differ in therapeutic approach (rhythmic speech vs. rhythmic balance-mobility training). Methods: This study is a cross-over, parallel multi-arms, single-blind intervention trial in which PD patients treated with rhythmic speech-language therapy (rSLT; N = 16), rhythmic balance-mobility training (rBMT; N = 10), or no therapy (NT; N = 18) were compared to healthy controls (HC; N = 17; matched by age, sex, and education: p > 0.82). Velocity and cadence in speech and gait were evaluated at baseline (BL), 4 weeks (4W-T1), and 6 months (6M-T2) and correlated. Statistical analyses involved repeated-measures ANOVA across groups and time, as well as independent- and one-sample t-tests for within-group analyses, and were amplified using Reliable Change (RC) and Reliable Change Indexes (RCI) to identify true clinically significant changes due to treatment at the individual patient level. Results: Parameters in speech and gait (i.e., speaking and walking velocity, as well as speech rhythm with gait cadence) were positively correlated across groups (p < 0.01). The rhythmic intervention groups improved across variables and time (total mean difference: 3.07 [SD 1.8]; 95% CI 0.2–11.36) compared to the NT group, whose performance declined significantly at 6 months (p < 0.01). HC outperformed the rBMT and NT groups across variables and time (p < 0.001); the rSLT group performed similarly to HC at 4 weeks and 6 months in speech rhythm and respiration. Conclusions: Speech and gait deficits in PD may share a common mechanism in the underlying cortical circuits. Further, rSLT was more beneficial to dysrhythmic PD patients than rBMT, likely because of the nature of the rhythmic cue.
Affiliation(s)
- Anne Dorothée Rösch
- Department of Clinical Neurophysiology/Neurology, Hospital of the University of Basel, Basel, Switzerland
- Ethan Taub
- Department of Neurosurgery, Hospital of the University of Basel, Basel, Switzerland
- Ute Gschwandtner
- Department of Clinical Neurophysiology/Neurology, Hospital of the University of Basel, Basel, Switzerland
- *Correspondence: Ute Gschwandtner
- Peter Fuhr
- Department of Clinical Neurophysiology/Neurology, Hospital of the University of Basel, Basel, Switzerland
6.
7. Rocha S, Southgate V, Mareschal D. Rate of infant carrying impacts infant spontaneous motor tempo. R Soc Open Sci 2021;8:210608. PMID: 34540253; PMCID: PMC8441131; DOI: 10.1098/rsos.210608.
Abstract
Rhythm production is a critical component of human interaction, not least forming the basis of our musicality. Infants demonstrate a spontaneous motor tempo (SMT), or natural rate of rhythmic movement. Here, we ask whether infant SMT is influenced by the rate of locomotion infants experience when being carried. Ten-month-old, non-walking infants were tested using a free drumming procedure before and after 10 min of being carried by an experimenter walking at a slower (98 BPM) or faster (138 BPM) than average tempo. We find that infant SMT is differentially impacted by carrying experience, depending on the tempo at which infants were carried: infants in the slow-walked group exhibited a slower SMT from pre-test to post-test, while infants in the fast-walked group showed a faster SMT from pre-test to post-test. Heart-rate data suggest that this effect is not due to a general change in state of arousal. We argue that being carried during caregiver locomotion is a predominant experience for infants throughout the first years of life and, as a source of regular vestibular information, may at least partially form the basis of their sense of rhythm.
Affiliation(s)
- Sinead Rocha
- Department of Psychology, University of Cambridge, Cambridge CB2 1TN, UK
- Birkbeck, University of London, London WC1E 7HX, UK
8. Møller C, Stupacher J, Celma-Miralles A, Vuust P. Beat perception in polyrhythms: time is structured in binary units. PLoS One 2021;16:e0252174. PMID: 34415911; PMCID: PMC8378699; DOI: 10.1371/journal.pone.0252174.
Abstract
In everyday life, we group and subdivide time to understand the sensory environment surrounding us. Organizing time in units, such as diurnal rhythms, phrases, and beat patterns, is fundamental to behavior, speech, and music. When listening to music, our perceptual system extracts and nests rhythmic regularities to create a hierarchical metrical structure that enables us to predict the timing of the next events. Foot tapping and head bobbing to musical rhythms are observable evidence of this process. In the special case of polyrhythms, at least two metrical structures compete to become the reference for these temporal regularities, rendering several possible beats with which we can synchronize our movements. While there is general agreement that tempo, pitch, and loudness influence beat perception in polyrhythms, we focused on the yet neglected influence of beat subdivisions, i.e., the least common denominator of a polyrhythm ratio. In three online experiments, 300 participants listened to a range of polyrhythms and tapped their index fingers in time with the perceived beat. The polyrhythms consisted of two simultaneously presented isochronous pulse trains with different ratios (2:3, 2:5, 3:4, 3:5, 4:5, 5:6) and different tempi. For ratios 2:3 and 3:4, we additionally manipulated the pitch of the pulse trains. Results showed a highly robust influence of subdivision grouping on beat perception. This was manifested as a propensity towards beats that are subdivided into two or four equally spaced units, as opposed to beats with three or more complex groupings of subdivisions. Additionally, lower pitched pulse trains were more often perceived as the beat. Our findings suggest that subdivisions, not beats, are the basic unit of beat perception, and that the principle underlying the binary grouping of subdivisions reflects a propensity towards simplicity. This preference for simple grouping is widely applicable to human perception and cognition of time.
Affiliation(s)
- Cecilie Møller
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
- Jan Stupacher
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
- Alexandre Celma-Miralles
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus C, Denmark
9. Ratnayake CP, Zhou Y, Dawson Pell FSE, Potvin DA, Radford AN, Magrath RD. Visual obstruction, but not moderate traffic noise, increases reliance on heterospecific alarm calls. Behav Ecol 2021. DOI: 10.1093/beheco/arab051.
Abstract
Animals rely on both personal and social information about danger to minimize risk, yet environmental conditions constrain information. Both visual obstructions and background noise can reduce detectability of predators, which may increase reliance on social information, such as from alarm calls. Furthermore, a combination of visual and auditory constraints might greatly increase reliance on social information, because the loss of information from one source cannot be compensated by the other. Testing these possibilities requires manipulating personal information while broadcasting alarm calls. We therefore experimentally tested the effects of a visual barrier, traffic noise, and their combination on the response of Australian magpies, Cracticus tibicen, to heterospecific alarm calls. The barrier blocked only visual cues, while playback of moderate traffic noise could mask subtle acoustic cues of danger, such as of a predator’s movement, but not the alarm-call playback. We predicted that the response to alarm calls would increase with either visual or acoustic constraint, and that there would be a disproportionate response when both were present. As predicted, individuals responded more strongly to alarm calls when there was a visual barrier. However, moderate traffic noise did not affect responses, and the effect of the visual barrier was not greater during traffic-noise playback. We conclude that a reduction of personal, visual information led to a greater reliance on social information from alarm calls, confirming indirect evidence from other species. The absence of a traffic-noise effect could be because, for Australian magpies, hearing subtle acoustic cues is less important than vision in detecting predators.
Affiliation(s)
- Chaminda P Ratnayake
- Division of Ecology and Evolution, Research School of Biology, 46 Sullivan’s Creek Road, Australian National University, Canberra 2600, Australia
- You Zhou
- Division of Ecology and Evolution, Research School of Biology, 46 Sullivan’s Creek Road, Australian National University, Canberra 2600, Australia
- Francesca S E Dawson Pell
- Division of Ecology and Evolution, Research School of Biology, 46 Sullivan’s Creek Road, Australian National University, Canberra 2600, Australia
- School of Biological Sciences, University of Bristol, 24 Tyndall Avenue, Bristol BS8 1TQ, UK
- Dominique A Potvin
- Division of Ecology and Evolution, Research School of Biology, 46 Sullivan’s Creek Road, Australian National University, Canberra 2600, Australia
- Andrew N Radford
- School of Biological Sciences, University of Bristol, 24 Tyndall Avenue, Bristol BS8 1TQ, UK
- Robert D Magrath
- Division of Ecology and Evolution, Research School of Biology, 46 Sullivan’s Creek Road, Australian National University, Canberra 2600, Australia
10. Katsu N, Yuki S, Okanoya K. Production of regular rhythm induced by external stimuli in rats. Anim Cogn 2021;24:1133-1141. PMID: 33751275; DOI: 10.1007/s10071-021-01505-4.
Abstract
Rhythmic ability is important for locomotion, communication, and coordination between group members during the daily life of animals. We aimed to examine rhythm perception and production abilities in rats in the range from subsecond intervals to a few seconds. We trained rats to respond to audio-visual stimuli presented in regular, isochronous rhythms at six time intervals (0.5-2 s). Five out of six rats successfully learned to respond to the sequential stimuli, and all subjects showed periodic actions. Responses to regular stimuli were faster than those to randomly presented stimuli in the medium-tempo conditions. In slower and faster tempo conditions, the actions of some subjects were not periodic or phase-matched to the stimuli. Asynchrony relative to stimulus onset became larger or smaller when the last stimulus of the sequence was presented at a deviant timing. Thus, the actions of the rats were tempo-matched to the regular rhythm, but not completely anticipative. We also compared the extent of phase-matching and the variability of rhythm production among the interval conditions; in interval conditions longer than 1.5 s, variability tended to be larger. In conclusion, rats showed a tempo-matching ability to regular rhythms to a certain degree, but maintaining a constant tempo in slower rhythm conditions was difficult. Our findings suggest that non-vocal-learning mammals have the potential to produce flexible rhythms at subsecond timescales.
Affiliation(s)
- Noriko Katsu
- Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- Japan Society for the Promotion of Science, Tokyo, Japan
- Shoko Yuki
- Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
- Kazuo Okanoya
- Graduate School of Arts and Sciences, The University of Tokyo, Tokyo, Japan
11. Valencia GN, Khoo S, Wong T, Ta J, Hou B, Barsalou LW, Hazen K, Lin HH, Wang S, Brefczynski-Lewis JA, Frum CA, Lewis JW. Chinese-English bilinguals show linguistic-perceptual links in the brain associating short spoken phrases with corresponding real-world natural action sounds by semantic category. Lang Cogn Neurosci 2021;36:773-790. PMID: 34568509; PMCID: PMC8462789; DOI: 10.1080/23273798.2021.1883073.
Abstract
Higher cognitive functions such as linguistic comprehension must ultimately relate to perceptual systems in the brain, though how and why this relationship forms remains unclear. The different brain networks that mediate perception when hearing real-world natural sounds have recently been proposed to respect a taxonomic model of acoustic-semantic categories. Using functional magnetic resonance imaging (fMRI) with Chinese/English bilingual listeners, the present study explored whether reception of short spoken phrases, in both Chinese (Mandarin) and English, describing corresponding sound-producing events would engage overlapping brain regions at a semantic category level. The results revealed a double dissociation of cortical regions that were preferential for representing knowledge of human versus environmental action events, whether conveyed through natural sounds or through the corresponding spoken phrases in either language. These findings of cortical hubs exhibiting linguistic-perceptual knowledge links at a semantic category level should help to advance neurocomputational models of the neurodevelopment of language systems.
Affiliation(s)
- Gabriela N. Valencia
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University (WVU), Morgantown, WV 26506, USA
- Stephanie Khoo
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University (WVU), Morgantown, WV 26506, USA
- Ting Wong
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University (WVU), Morgantown, WV 26506, USA
- Joseph Ta
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University (WVU), Morgantown, WV 26506, USA
- Bob Hou
- Department of Radiology, Center for Advanced Imaging
- Kirk Hazen
- Department of English, West Virginia University
- Shuo Wang
- Department of Chemical and Biomedical Engineering
- Julie A. Brefczynski-Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University (WVU), Morgantown, WV 26506, USA
- Chris A. Frum
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University (WVU), Morgantown, WV 26506, USA
- James W. Lewis
- Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University (WVU), Morgantown, WV 26506, USA
12.
Abstract
Music comprises a diverse category of cognitive phenomena that likely represent both the effects of psychological adaptations that are specific to music (e.g., rhythmic entrainment) and the effects of adaptations for non-musical functions (e.g., auditory scene analysis). How did music evolve? Here, we show that prevailing views on the evolution of music (that music is a byproduct of other evolved faculties, evolved for social bonding, or evolved to signal mate quality) are incomplete or wrong. We argue instead that music evolved as a credible signal in at least two contexts: coalitional interactions and infant care. Specifically, we propose that (1) the production and reception of coordinated, entrained rhythmic displays is a co-evolved system for credibly signaling coalition strength, size, and coordination ability; and (2) the production and reception of infant-directed song is a co-evolved system for credibly signaling parental attention to secondarily altricial infants. These proposals, supported by interdisciplinary evidence, suggest that basic features of music, such as melody and rhythm, result from adaptations in the proper domain of human music. The adaptations provide a foundation for the cultural evolution of music in its actual domain, yielding the diversity of musical forms and musical behaviors found worldwide.
Affiliation(s)
- Samuel A Mehr
- Department of Psychology, Harvard University, Cambridge, MA 02138; https://projects.iq.harvard.edu/epl
- Data Science Initiative, Harvard University, Cambridge, MA 02138
- School of Psychology, Victoria University of Wellington, Wellington 6012, New Zealand
- Max M Krasnow
- Department of Psychology, Harvard University, Cambridge, MA 02138; https://projects.iq.harvard.edu/epl
- Gregory A Bryant
- Department of Communication, University of California Los Angeles, Los Angeles, CA 90095; https://gabryant.bol.ucla.edu
- Center for Behavior, Evolution, & Culture, University of California Los Angeles, Los Angeles, CA 90095
- Edward H Hagen
- Department of Anthropology, Washington State University, Vancouver, WA 98686, USA; https://anthro.vancouver.wsu.edu/people/hagen
13. Slow and fast beat sequences are represented differently through space. Atten Percept Psychophys 2020;82:2765-2773. DOI: 10.3758/s13414-019-01945-8.
14. Kliger Amrani A, Zion Golumbic E. Spontaneous and stimulus-driven rhythmic behaviors in ADHD adults and controls. Neuropsychologia 2020;146:107544. PMID: 32598965; DOI: 10.1016/j.neuropsychologia.2020.107544.
Abstract
Many aspects of human behavior are inherently rhythmic, requiring the production of rhythmic motor actions as well as synchronization to rhythms in the environment. It is well established that individuals with ADHD exhibit deficits in temporal estimation and timing functions, which may impact their ability to accurately produce and interact with rhythmic stimuli. In the current study we seek to understand the specific aspects of rhythmic behavior that are implicated in ADHD. We specifically ask whether they are attributable to imprecision in the internal generation of rhythms or to reduced acuity in rhythm perception. We also test key predictions of the Preferred Period Hypothesis, which suggests that both perceptual and motor rhythmic behaviors are biased towards a specific personal 'default' tempo. To this end, we tested several aspects of rhythmic behavior and the correspondence between them, including spontaneous motor tempo (SMT), preferred auditory perceptual tempo (PPT), and synchronization-continuation tapping across a broad range of rhythms, from sub-second to supra-second intervals. Moreover, we evaluated the intra-subject consistency of rhythmic preferences as a means of testing the reality and reliability of personal 'default' rhythms. We used a modified operational definition for assessing SMT and PPT, instructing participants to tap or calibrate the rhythms most comfortable for them to count along with, to avoid subjective interpretations of the task. Our results shed new light on the specific aspects of rhythmic deficits implicated in adults with ADHD. We find that individuals with ADHD are primarily challenged in producing and maintaining isochronous self-generated motor rhythms, during both spontaneous and memory-paced tapping.
However, they nonetheless exhibit good flexibility in synchronizing to a broad range of external rhythms, suggesting that auditory-motor entrainment for simple rhythms is preserved in ADHD and that the presence of an external pacer allows these individuals to overcome their inherent difficulty in self-generating isochronous motor rhythms. In addition, both groups showed optimal memory-paced tapping for rhythms near their 'counting-based' SMT and PPT, which were slightly faster in the ADHD group. This is in line with the predictions of the Preferred Period Hypothesis, indicating that, at least for this well-defined rhythmic behavior (i.e., counting), individuals tend to prefer similar time-scales in both motor production and perceptual evaluation.
15
Lewis JW, Silberman MJ, Donai JJ, Frum CA, Brefczynski-Lewis JA. Hearing and orally mimicking different acoustic-semantic categories of natural sound engage distinct left hemisphere cortical regions. Brain Lang 2018; 183:64-78. [PMID: 29966815 PMCID: PMC6461214 DOI: 10.1016/j.bandl.2018.05.002]
Abstract
Oral mimicry is thought to represent an essential process for the neurodevelopment of spoken language systems in infants, the evolution of language in hominins, and a process that could possibly aid recovery in stroke patients. Using functional magnetic resonance imaging (fMRI), we previously reported a divergence of auditory cortical pathways mediating perception of specific categories of natural sounds. However, it remained unclear if or how this fundamental sensory organization by the brain might relate to motor output, such as sound mimicry. Here, using fMRI, we revealed a dissociation of activated brain regions preferential for hearing with the intent to imitate and the oral mimicry of animal action sounds versus animal vocalizations as distinct acoustic-semantic categories. This functional dissociation may reflect components of a rudimentary cortical architecture that links systems for processing acoustic-semantic universals of natural sound with motor-related systems mediating oral mimicry at a category level. The observation of different brain regions involved in different aspects of oral mimicry may inform targeted therapies for rehabilitation of functional abilities after stroke.
Affiliation(s)
- James W Lewis
- Rockefeller Neurosciences Institute, Department of Physiology, Pharmacology & Neuroscience, West Virginia University, Morgantown, WV 26506, USA.
- Magenta J Silberman
- Rockefeller Neurosciences Institute, Department of Physiology, Pharmacology & Neuroscience, West Virginia University, Morgantown, WV 26506, USA
- Jeremy J Donai
- Rockefeller Neurosciences Institute, Department of Communication Sciences and Disorders, West Virginia University, Morgantown, WV 26506, USA
- Chris A Frum
- Rockefeller Neurosciences Institute, Department of Physiology, Pharmacology & Neuroscience, West Virginia University, Morgantown, WV 26506, USA
- Julie A Brefczynski-Lewis
- Rockefeller Neurosciences Institute, Department of Physiology, Pharmacology & Neuroscience, West Virginia University, Morgantown, WV 26506, USA
16
Is the Capacity for Vocal Learning in Vertebrates Rooted in Fish Schooling Behavior? Evol Biol 2018; 45:359-373. [PMID: 30459479 PMCID: PMC6223759 DOI: 10.1007/s11692-018-9457-8]
Abstract
The capacity to learn and reproduce vocal sounds has evolved in phylogenetically distant tetrapod lineages. Vocal learners in all these lineages express similar neural circuitry and genetic factors when perceiving, processing, and reproducing vocalization, suggesting that brain pathways for vocal learning evolved within strong constraints from a common ancestor, potentially fish. We hypothesize that the auditory-motor circuits and genes involved in entrainment have their origins in fish schooling behavior and respiratory-motor coupling. In this acoustic advantages hypothesis, aural costs and benefits played a key role in shaping a wide variety of traits, which could readily be exapted for entrainment and vocal learning, including social grouping, group movement, and respiratory-motor coupling. Specifically, incidental sounds of locomotion and respiration (ISLR) may have reinforced synchronization by communicating important spatial and temporal information between school-members and extending windows of silence to improve situational awareness. This process would be mutually reinforcing. Neurons in the telencephalon, which were initially involved in linking ISLR with forelimbs, could have switched functions to serve vocal machinery (e.g. mouth, beak, tongue, larynx, syrinx). While previous vocal learning hypotheses invoke transmission of neurons from visual tasks (gestures) to the auditory channel, this hypothesis involves the auditory channel from the onset. Acoustic benefits of locomotor-respiratory coordination in fish may have selected for genetic factors and brain circuitry capable of synchronizing respiratory and limb movements, predisposing tetrapod lines to synchronized movement, vocalization, and vocal learning. We discuss how the capacity to entrain is manifest in fish, amphibians, birds, and mammals, and propose predictions to test our acoustic advantages hypothesis.
17
Hamacher D, Schley F, Hollander K, Zech A. Effects of manipulated auditory information on local dynamic gait stability. Hum Mov Sci 2018; 58:219-223. [PMID: 29486428 DOI: 10.1016/j.humov.2018.02.010]
Abstract
Auditory information affects sensorimotor control of gait. Noise or active noise cancelling alters the perception of movement-related sounds and, probably, gait stability. The aim of the current study was to evaluate the effects of noise cancelling on gait stability. Twenty-five healthy older subjects (70 ± 6 years) were included in a randomized cross-over study. Gait stability (largest Lyapunov exponent) in normal overground walking was determined for the following hearing conditions: no manipulation and active noise cancelling. To assess differences between the two hearing conditions (no manipulation vs. active noise cancelling), Student's repeated-measures t-test was used. The results indicate an improvement in gait stability when using active noise cancelling compared to normal hearing. In conclusion, our results indicate that auditory information might not be needed for stable gait in the elderly.
Affiliation(s)
- Daniel Hamacher
- Institute of Sport Science, Friedrich Schiller University Jena, Seidelstraße 20, 07749 Jena, Germany.
- Franziska Schley
- Institute of Sport Science, Friedrich Schiller University Jena, Seidelstraße 20, 07749 Jena, Germany.
- Karsten Hollander
- Department of Sports and Exercise Medicine, Institute of Human Movement Science, University of Hamburg, Turmweg 2, 20148 Hamburg, Germany; Department of Sports and Rehabilitation Medicine, BG Trauma Hospital of Hamburg, Bergedorfer Str. 10, 21033 Hamburg, Germany.
- Astrid Zech
- Institute of Sport Science, Friedrich Schiller University Jena, Seidelstraße 20, 07749 Jena, Germany.
18
Rhythmic entrainment: Why humans want to, fireflies can't help it, pet birds try, and sea lions have to be bribed. Psychon Bull Rev 2017; 23:1647-1659. [PMID: 26920589 DOI: 10.3758/s13423-016-1013-x]
Abstract
Until recently, the literature on rhythmic ability took for granted that only humans are able to synchronize body movements to an external beat, that is, to entrain. This assumption has been undercut by findings of beat-matching in various species of parrots and, more recently, in a sea lion, several species of primates, and possibly horses. This throws open the question of how widespread beat-matching ability is in the animal kingdom. Here we reassess the arguments and evidence for an absence of beat-matching in animals, and conclude that in fact no convincing case against beat-matching in animals has been made. Instead, such evidence as there is suggests that this capacity could be quite widespread. Furthermore, mutual entrainment of oscillations is a general principle of physical systems, both biological and nonbiological, suggesting that entrainment of motor systems by sensory systems may be a default rather than an oddity. The question then becomes not why a few privileged species are able to beat-match, but why species do not always do so, and why they vary in both spontaneous and learned beat-matching. We propose that when entrainment is not driven by fixed, mandatory connections between input and output (as in the case of, e.g., fireflies entraining to each other's flashes), it depends on voluntary control over, and voluntary or learned coupling of, sensory and motor systems, which can paradoxically lead to apparent failures of entrainment. Among the factors that affect whether an animal will entrain are sufficient control over the motor behavior to be entrained, sufficient perceptual sophistication to extract the entraining beat from the overall sensory environment, and the current cognitive state of the animal, including attention and motivation. The extent of entrainment in the animal kingdom potentially has widespread implications, not only for understanding the roots of human dance, but also for understanding the neural and cognitive architectures of animals.
19
Ravignani A, Madison G. The Paradox of Isochrony in the Evolution of Human Rhythm. Front Psychol 2017; 8:1820. [PMID: 29163252 PMCID: PMC5681750 DOI: 10.3389/fpsyg.2017.01820]
Abstract
Isochrony is crucial to the rhythm of human music. Some neural, behavioral and anatomical traits underlying rhythm perception and production are shared with a broad range of species. These may either have a common evolutionary origin, or have evolved into similar traits under different evolutionary pressures. Other traits underlying rhythm are rare across species, only found in humans and few other animals. Isochrony, or stable periodicity, is common to most human music, but isochronous behaviors are also found in many species. It appears paradoxical that humans are particularly good at producing and perceiving isochronous patterns, although this ability does not conceivably confer any evolutionary advantage to modern humans. This article will attempt to solve this conundrum. To this end, we define the concept of isochrony from the present functional perspective of physiology, cognitive neuroscience, signal processing, and interactive behavior, and review available evidence on isochrony in the signals of humans and other animals. We then attempt to resolve the paradox of isochrony by expanding an evolutionary hypothesis about the function that isochronous behavior may have had in early hominids. Finally, we propose avenues for empirical research to examine this hypothesis and to understand the evolutionary origin of isochrony in general.
Affiliation(s)
- Andrea Ravignani
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Veterinary and Research Department, Sealcentre Pieterburen, Pieterburen, Netherlands; Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium
- Guy Madison
- Department of Psychology, Umeå University, Umeå, Sweden
20
Brefczynski-Lewis JA, Lewis JW. Auditory object perception: A neurobiological model and prospective review. Neuropsychologia 2017; 105:223-242. [PMID: 28467888 PMCID: PMC5662485 DOI: 10.1016/j.neuropsychologia.2017.04.034]
Abstract
Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory inputs enter the cortex with their own set of unique qualities and lead to their use in oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and how it may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation. Though mostly derived from the human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound-source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. The model's predictive value is further discussed in the context of anthropological theories of oral communication evolution and the neurodevelopment of spoken language proto-networks in infants/toddlers.
These phylogenetic and ontogenetic frameworks both entail cortical network maturations that are proposed to at least in part be organized around a number of universal acoustic-semantic signal attributes of natural sounds, which are addressed herein.
Affiliation(s)
- Julie A Brefczynski-Lewis
- Blanchette Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA; Department of Physiology, Pharmacology, & Neuroscience, West Virginia University, PO Box 9229, Morgantown, WV 26506, USA
- James W Lewis
- Blanchette Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA; Department of Physiology, Pharmacology, & Neuroscience, West Virginia University, PO Box 9229, Morgantown, WV 26506, USA.
21
Webster PJ, Skipper-Kallal LM, Frum CA, Still HN, Ward BD, Lewis JW. Divergent Human Cortical Regions for Processing Distinct Acoustic-Semantic Categories of Natural Sounds: Animal Action Sounds vs. Vocalizations. Front Neurosci 2017; 10:579. [PMID: 28111538 PMCID: PMC5216875 DOI: 10.3389/fnins.2016.00579]
Abstract
A major gap in our understanding of natural sound processing is knowledge of where or how in a cortical hierarchy differential processing leads to categorical perception at a semantic level. Here, using functional magnetic resonance imaging (fMRI) we sought to determine if and where cortical pathways in humans might diverge for processing action sounds vs. vocalizations as distinct acoustic-semantic categories of real-world sound when matched for duration and intensity. This was tested by using relatively less semantically complex natural sounds produced by non-conspecific animals rather than humans. Our results revealed a striking double-dissociation of activated networks bilaterally. This included a previously well described pathway preferential for processing vocalization signals directed laterally from functionally defined primary auditory cortices to the anterior superior temporal gyri, and a less well-described pathway preferential for processing animal action sounds directed medially to the posterior insulae. We additionally found that some of these regions and associated cortical networks showed parametric sensitivity to high-order quantifiable acoustic signal attributes and/or to perceptual features of the natural stimuli, such as the degree of perceived recognition or intentional understanding. Overall, these results supported a neurobiological theoretical framework for how the mammalian brain may be fundamentally organized to process acoustically and acoustic-semantically distinct categories of ethologically valid, real-world sounds.
Affiliation(s)
- Paula J. Webster
- Blanchette Rockefeller Neurosciences Institute, Department of Neurobiology & Anatomy, West Virginia University, Morgantown, WV, USA
- Laura M. Skipper-Kallal
- Blanchette Rockefeller Neurosciences Institute, Department of Neurobiology & Anatomy, West Virginia University, Morgantown, WV, USA
- Department of Neurology, Georgetown University Medical Campus, Washington, DC, USA
- Chris A. Frum
- Department of Physiology and Pharmacology, West Virginia University, Morgantown, WV, USA
- Hayley N. Still
- Blanchette Rockefeller Neurosciences Institute, Department of Neurobiology & Anatomy, West Virginia University, Morgantown, WV, USA
- B. Douglas Ward
- Department of Biophysics, Medical College of Wisconsin, Milwaukee, WI, USA
- James W. Lewis
- Blanchette Rockefeller Neurosciences Institute, Department of Neurobiology & Anatomy, West Virginia University, Morgantown, WV, USA
22
Richter J, Ostovar R. "It Don't Mean a Thing if It Ain't Got that Swing" - an Alternative Concept for Understanding the Evolution of Dance and Music in Human Beings. Front Hum Neurosci 2016; 10:485. [PMID: 27774058 PMCID: PMC5054692 DOI: 10.3389/fnhum.2016.00485]
Abstract
The functions of dance and music in human evolution are a mystery. Current research on the evolution of music has mainly focused on its melodic attribute, which would have evolved alongside (proto-)language. Instead, we propose an alternative conceptual framework which focuses on the co-evolution of rhythm and dance (R&D) as intertwined aspects of a multimodal phenomenon characterized by the unity of action and perception. Reviewing the current literature from this viewpoint, we propose the hypothesis that R&D have co-evolved long before other musical attributes and (proto-)language. Our view is supported by increasing experimental evidence, particularly in infants and children: beat is perceived and anticipated already by newborns, and rhythm perception depends on body movement. Infants and toddlers spontaneously move to a rhythm irrespective of their cultural background. The impulse to dance may have been prepared by the susceptibility of infants to be soothed by rocking. Conceivable evolutionary functions of R&D include sexual attraction and transmission of mating signals. Social functions include bonding, synchronization of many individuals, appeasement of hostile individuals, and pre- and extra-verbal communication enabling embodied individual and collective memorizing. In many cultures R&D are used for entering trance, a base for shamanism and early religions. Individual benefits of R&D include improvement of body coordination, as well as painkilling, anti-depressive, and anti-boredom effects. Rhythm most likely paved the way for human speech, as supported by studies confirming the overlaps between cognitive and neural resources recruited for language and rhythm. In addition, dance encompasses visual and gestural communication. In future studies, attention should be paid to which attribute of music is in focus, and the close mutual relation between R&D should be taken into account. The possible evolutionary functions of dance deserve more attention.
Affiliation(s)
- Joachim Richter
- Institute of Tropical Medicine and International Health, Charité Universitätsmedizin, Berlin, Germany
23
Niese RL, Tobalske BW. Specialized primary feathers produce tonal sounds during flight in rock pigeons (Columba livia). J Exp Biol 2016; 219:2173-81. [PMID: 27207645 DOI: 10.1242/jeb.131649]
Abstract
For centuries, naturalists have suggested that the tonal elements of pigeon wing sounds may be sonations (non-vocal acoustic signals) of alarm. However, spurious tonal sounds may be produced passively as a result of aeroelastic flutter in the flight feathers of almost all birds. Using mechanistic criteria emerging from recent work on sonations, we sought to: (1) identify characteristics of rock pigeon flight feathers that might be adapted for sound production rather than flight, and (2) provide evidence that this morphology is necessary for in vivo sound production and is sufficient to replicate in vivo sounds. Pigeons produce tonal sounds (700±50 Hz) during the latter two-thirds of each downstroke during take-off. These tones are produced when a small region of long, curved barbs on the inner vane of the outermost primary feather (P10) aeroelastically flutters. Tones were silenced in live birds when we experimentally increased the stiffness of this region to prevent flutter. Isolated P10 feathers were sufficient to reproduce in vivo sounds when spun at the peak angular velocity of downstroke (53.9-60.3 rad s(-1)), but did not produce tones at average downstroke velocity (31.8 rad s(-1)), whereas P9 and P1 feathers never produced tones. P10 feathers had significantly lower coefficients of resultant aerodynamic force (CR) when spun at peak angular velocity than at average angular velocity, revealing that production of tonal sounds incurs an aerodynamic cost. P9 and P1 feathers did not show this difference in CR. These mechanistic results suggest that the tonal sounds produced by P10 feathers are not incidental and may function in communication.
Affiliation(s)
- Robert L Niese
- Field Research Station at Fort Missoula, Division of Biological Sciences, University of Montana, Missoula, MT 59812, USA; Slater Museum of Natural History, Biology Department, University of Puget Sound, Tacoma, WA 98416, USA
- Bret W Tobalske
- Field Research Station at Fort Missoula, Division of Biological Sciences, University of Montana, Missoula, MT 59812, USA
24
Phonological perception by birds: budgerigars can perceive lexical stress. Anim Cogn 2016; 19:643-54. [PMID: 26914456 PMCID: PMC4824828 DOI: 10.1007/s10071-016-0968-3]
Abstract
Metrical phonology is the perceptual "strength" in language of some syllables relative to others. The ability to perceive lexical stress is important, as it can help a listener segment speech and distinguish the meaning of words and sentences. Despite this importance, there has been little comparative work on the perception of lexical stress across species. We used a go/no-go operant paradigm to train human participants and budgerigars (Melopsittacus undulatus) to distinguish trochaic (stress-initial) from iambic (stress-final) two-syllable nonsense words. Once participants learned the task, we presented both novel nonsense words, and familiar nonsense words that had certain cues removed (e.g., pitch, duration, loudness, or vowel quality) to determine which cues were most important in stress perception. Members of both species learned the task and were then able to generalize to novel exemplars, showing categorical learning rather than rote memorization. Tests using reduced stimuli showed that humans could identify stress patterns with amplitude and pitch alone, but not with only duration or vowel quality. Budgerigars required more than one cue to be present and had trouble if vowel quality or amplitude were missing as cues. The results suggest that stress patterns in human speech can be decoded by other species. Further comparative stress-perception research with more species could help to determine what species characteristics predict this ability. In addition, tests with a variety of stimuli could help to determine how much this ability depends on general pattern learning processes versus vocalization-specific cues.
25
Ashoori A, Eagleman DM, Jankovic J. Effects of Auditory Rhythm and Music on Gait Disturbances in Parkinson's Disease. Front Neurol 2015; 6:234. [PMID: 26617566 PMCID: PMC4641247 DOI: 10.3389/fneur.2015.00234]
Abstract
Gait abnormalities, such as shuffling steps, start hesitation, and freezing, are common and often incapacitating symptoms of Parkinson’s disease (PD) and other parkinsonian disorders. Pharmacological and surgical approaches have only limited efficacy in treating these gait disorders. Rhythmic auditory stimulation (RAS), such as playing marching music and dance therapy, has been shown to be a safe, inexpensive, and an effective method in improving gait in PD patients. However, RAS that adapts to patients’ movements may be more effective than rigid, fixed-tempo RAS used in most studies. In addition to auditory cueing, immersive virtual reality technologies that utilize interactive computer-generated systems through wearable devices are increasingly used for improving brain–body interaction and sensory–motor integration. Using multisensory cues, these therapies may be particularly suitable for the treatment of parkinsonian freezing and other gait disorders. In this review, we examine the affected neurological circuits underlying gait and temporal processing in PD patients and summarize the current studies demonstrating the effects of RAS on improving these gait deficits.
Affiliation(s)
- Aidin Ashoori
- Columbia University College of Physicians & Surgeons, New York, NY, USA
- David M Eagleman
- Department of Neuroscience, Baylor College of Medicine, Houston, TX, USA
- Joseph Jankovic
- Department of Neurology, Parkinson's Disease Center and Movement Disorders Clinic, Baylor College of Medicine, Houston, TX, USA
26
Larsson M, Ekström SR, Ranjbar P. Effects of sounds of locomotion on speech perception. Noise Health 2015; 17:227-32. [PMID: 26168953 PMCID: PMC4900485 DOI: 10.4103/1463-1741.160711]
Abstract
Human locomotion typically creates noise, a possible consequence of which is the masking of sound signals originating in the surroundings. When walking side by side, people often subconsciously synchronize their steps. The neurophysiological and evolutionary background of this behavior is unclear. The present study investigated the potential of sound created by walking to mask perception of speech and compared the masking produced by walking in step with that produced by unsynchronized walking. The masking sound (footsteps on gravel) and the target sound (speech) were presented through the same speaker to 15 normal-hearing subjects. The original recorded walking sound was modified to mimic the sound of two individuals walking in pace or walking out of synchrony. The participants were instructed to adjust the sound level of the target sound until they could just comprehend the speech signal (“just follow conversation” or JFC level) when presented simultaneously with synchronized or unsynchronized walking sound at 40 dBA, 50 dBA, 60 dBA, or 70 dBA. Synchronized walking sounds produced slightly less masking of speech than did unsynchronized sound. The median JFC threshold in the synchronized condition was 38.5 dBA, while the corresponding value for the unsynchronized condition was 41.2 dBA. Combined results at all sound pressure levels showed an improvement in the signal-to-noise ratio (SNR) for synchronized footsteps; the median difference was 2.7 dB and the mean difference was 1.2 dB [P < 0.001, repeated-measures analysis of variance (RM-ANOVA)]. The difference was significant for masker levels of 50 dBA and 60 dBA, but not for 40 dBA or 70 dBA. This study provides evidence that synchronized walking may reduce the masking potential of footsteps.
Affiliation(s)
- Matz Larsson
- The Cardiology-Lung Clinic; School of Health and Medical Sciences, Örebro University; Institute of Environmental Medicine, Karolinska Institutet, Örebro, Stockholm, Sweden
27
Tool-use-associated sound in the evolution of language. Anim Cogn 2015; 18:993-1005. [PMID: 26118672 DOI: 10.1007/s10071-015-0885-x]
Abstract
Proponents of the motor theory of language evolution have primarily focused on the visual domain and communication through observation of movements. In the present paper, it is hypothesized that the production and perception of sound, particularly of incidental sound of locomotion (ISOL) and tool-use sound (TUS), also contributed. Human bipedalism resulted in rhythmic and more predictable ISOL. It has been proposed that this stimulated the evolution of musical abilities, auditory working memory, and abilities to produce complex vocalizations and to mimic natural sounds. Since the human brain proficiently extracts information about objects and events from the sounds they produce, TUS, and mimicry of TUS, might have achieved an iconic function. The prevalence of sound symbolism in many extant languages supports this idea. Self-produced TUS activates multimodal brain processing (motor neurons, hearing, proprioception, touch, vision), and TUS stimulates primate audiovisual mirror neurons, which is likely to stimulate the development of association chains. Tool use and auditory gestures involve motor processing of the forelimbs, which is associated with the evolution of vertebrate vocal communication. The production, perception, and mimicry of TUS may have resulted in a limited number of vocalizations or protowords that were associated with tool use. A new way to communicate about tools, especially when out of sight, would have had selective advantage. A gradual change in acoustic properties and/or meaning could have resulted in arbitrariness and an expanded repertoire of words. Humans have been increasingly exposed to TUS over millions of years, coinciding with the period during which spoken language evolved. ISOL and tool-use-related sound are worth further exploration.
|
28
|
Know thy sound: perceiving self and others in musical contexts. Acta Psychol (Amst) 2014; 152:67-74. [PMID: 25113128 DOI: 10.1016/j.actpsy.2014.07.002] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2013] [Revised: 04/07/2014] [Accepted: 07/07/2014] [Indexed: 12/14/2022] Open
Abstract
This review article provides a summary of the findings from empirical studies that investigated recognition of an action's agent by using music and/or other auditory information. Embodied cognition accounts ground higher cognitive functions in lower level sensorimotor functioning. Action simulation, the recruitment of an observer's motor system and its neural substrates when observing actions, has been proposed to be particularly potent for actions that are self-produced. This review examines evidence for such claims from the music domain. It covers studies in which trained or untrained individuals generated and/or perceived (musical) sounds, and were subsequently asked to identify who was the author of the sounds (e.g., the self or another individual) in immediate (online) or delayed (offline) research designs. The review is structured according to the complexity of auditory-motor information available and includes sections on: 1) simple auditory information (e.g., clapping, piano, drum sounds), 2) complex instrumental sound sequences (e.g., piano/organ performances), and 3) musical information embedded within audiovisual performance contexts, when action sequences are both viewed as movements and/or listened to in synchrony with sounds (e.g., conductors' gestures, dance). This work has proven to be informative in unraveling the links between perceptual-motor processes, supporting embodied accounts of human cognition that address action observation. The reported findings are examined in relation to cues that contribute to agency judgments, and their implications for research concerning action understanding and applied musical practice.
|
29
|
Chauvigné LAS, Gitau KM, Brown S. The neural basis of audiomotor entrainment: an ALE meta-analysis. Front Hum Neurosci 2014; 8:776. [PMID: 25324765 PMCID: PMC4179708 DOI: 10.3389/fnhum.2014.00776] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2014] [Accepted: 09/12/2014] [Indexed: 11/17/2022] Open
Abstract
Synchronization of body movement to an acoustic rhythm is a major form of entrainment, such as occurs in dance. This is exemplified in experimental studies of finger tapping. Entrainment to a beat is contrasted with movement that is internally driven and is therefore self-paced. In order to examine brain areas important for entrainment to an acoustic beat, we meta-analyzed the functional neuroimaging literature on finger tapping (43 studies) using activation likelihood estimation (ALE) meta-analysis with a focus on the contrast between externally-paced and self-paced tapping. The results demonstrated a dissociation between two subcortical systems involved in timing, namely the cerebellum and the basal ganglia. Externally-paced tapping highlighted the importance of the spinocerebellum, most especially the vermis, which was not activated at all by self-paced tapping. In contrast, the basal ganglia, including the putamen and globus pallidus, were active during both types of tapping, but preferentially during self-paced tapping. These results suggest a central role for the spinocerebellum in audiomotor entrainment. We conclude with a theoretical discussion about the various forms of entrainment in humans and other animals.
Affiliation(s)
- Léa A S Chauvigné, NeuroArts Lab, Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, ON, Canada
- Kevin M Gitau, NeuroArts Lab, Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, ON, Canada
- Steven Brown, NeuroArts Lab, Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, ON, Canada
|
30
|
The influence of auditory-motor coupling on fractal dynamics in human gait. Sci Rep 2014; 4:5879. [PMID: 25080936 PMCID: PMC4118321 DOI: 10.1038/srep05879] [Citation(s) in RCA: 55] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2014] [Accepted: 07/03/2014] [Indexed: 01/24/2023] Open
Abstract
Humans exhibit an innate ability to synchronize their movements to music. The field of gait rehabilitation has sought to capitalize on this phenomenon by prompting patients to walk in time to rhythmic auditory cues with a view to improving pathological gait. However, the temporal structure of the auditory cue, and hence of the target behavior, has not been sufficiently explored. This study reveals the plasticity of auditory-motor coupling in human walking in relation to 'complex' auditory cues. The authors demonstrate that auditory-motor coupling can be driven by differently coloured auditory noise signals (e.g. white, brown), shifting the fractal temporal structure of gait dynamics towards the statistical properties of the signals used. This adaptive capability, observed in whole-body movement, could potentially be harnessed for targeted neuromuscular rehabilitation in patient groups, depending on the specific treatment goal.
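The "coloured" noise cues in this abstract differ in the exponent β of their 1/f^β power spectrum (β ≈ 0 for white noise, β ≈ 2 for brown noise); the study's claim is that gait variability drifts towards the exponent of the cue. As an illustrative sketch only (not the authors' actual stimulus-generation code), coloured noise can be synthesized by spectral shaping, and its exponent recovered from the slope of the log-log power spectrum:

```python
import numpy as np

def coloured_noise(n, beta, seed=0):
    """Generate noise with power spectrum ~ 1/f**beta by shaping
    white Gaussian noise in the frequency domain.
    beta = 0 -> white noise, beta = 2 -> brown noise."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n)
    # Random complex spectrum with flat expected power
    spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
    spectrum[1:] /= freqs[1:] ** (beta / 2)  # amplitude ~ 1/f^(beta/2)
    spectrum[0] = 0.0                        # zero-mean signal
    return np.fft.irfft(spectrum, n)

def spectral_exponent(x):
    """Estimate beta as minus the slope of log(power) vs log(frequency)."""
    f = np.fft.rfftfreq(len(x))[1:]
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(f), np.log(power), 1)
    return -slope

white = coloured_noise(2**14, beta=0.0)
brown = coloured_noise(2**14, beta=2.0)
print(spectral_exponent(white))  # close to 0
print(spectral_exponent(brown))  # close to 2
```

The same slope-fitting idea (applied to stride-interval series rather than the raw cue) is one common way fractal gait dynamics are quantified; the paper itself may use a different estimator such as detrended fluctuation analysis.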
|