1
MacDonald A, Hebling A, Wei XP, Yackle K. The breath shape controls intonation of mouse vocalizations. eLife 2024; 13:RP93079. PMID: 38963785; PMCID: PMC11223766; DOI: 10.7554/elife.93079.
Abstract
Intonation in speech is the control of vocal pitch to layer expressive meaning to communication, like increasing pitch to indicate a question. Also, stereotyped patterns of pitch are used to create distinct sounds with different denotations, like in tonal languages and, perhaps, the 10 sounds in the murine lexicon. A basic tone is created by exhalation through a constricted laryngeal voice box, and it is thought that more complex utterances are produced solely by dynamic changes in laryngeal tension. But perhaps, the shifting pitch also results from altering the swiftness of exhalation. Consistent with the latter model, we describe that intonation in most vocalization types follows deviations in exhalation that appear to be generated by the re-activation of the cardinal breathing muscle for inspiration. We also show that the brainstem vocalization central pattern generator, the iRO, can create this breath pattern. Consequently, ectopic activation of the iRO not only induces phonation, but also the pitch patterns that compose most of the vocalizations in the murine lexicon. These results reveal a novel brainstem mechanism for intonation.
Affiliation(s)
- Alastair MacDonald
- Department of Physiology, University of California-San Francisco, San Francisco, United States
- Alina Hebling
- Neuroscience Graduate Program, University of California-San Francisco, San Francisco, United States
- Xin Paul Wei
- Department of Physiology, University of California-San Francisco, San Francisco, United States
- Biomedical Sciences Graduate Program, University of California-San Francisco, San Francisco, United States
- Kevin Yackle
- Department of Physiology, University of California-San Francisco, San Francisco, United States
2
Rendall D. Aping Language: Historical Perspectives on the Quest for Semantics, Syntax, and Other Rarefied Properties of Human Language in the Communication of Primates and Other Animals. Front Psychol 2021; 12:675172. PMID: 34366994; PMCID: PMC8345011; DOI: 10.3389/fpsyg.2021.675172.
Abstract
In 1980, Robert Seyfarth, Dorothy Cheney and Peter Marler published a landmark paper in Science claiming language-like semantic communication in the alarm calls of vervet monkeys. This article and the career research program it spawned for its authors catalyzed countless other studies searching for semantics, and then also syntax and other rarefied properties of language, in the communication systems of non-human primates and other animals. It also helped bolster a parallel tradition of teaching symbolism and syntax in artificial language systems to great apes. Although the search for language rudiments in the communications of primates long predates the vervet alarm call story, it is difficult to overstate the impact of the vervet research, for it fueled field and laboratory research programs for several generations of primatologists and kept busy an equal number of philosophers, linguists, and cognitive scientists debating possible implications for the origins and evolution of language and other vaunted elements of the human condition. Now 40 years on, the original vervet alarm call findings have been revised and claims of semanticity recanted, while other evidence for semantics and syntax in the natural communications of non-humans is sparse and weak. Ultimately, we are forced to conclude that there are simply few substantive precedents in the natural communications of animals for the high-level informational and representational properties of language, nor its complex syntax. This conclusion does not mean primates cannot be taught some version of these elements of language in artificial language systems - in fact, they can. Nor does it mean there is no continuity between the natural communications of animals and humans that could inform the evolution of language - in fact, there is such continuity. It just does not lie in the specialized semantic and syntactic properties of language. In reviewing these matters, I consider why it is that primates do not evince high-level properties of language in their natural communications but why we so readily accepted that they did or should; and what lessons we might draw from that experience. In the process, I also consider why accounts of human-like characteristics in animals can be so irresistibly appealing.
Affiliation(s)
- Drew Rendall
- Department of Biology, University of New Brunswick, Fredericton, NB, Canada
3
Zhang YS, Takahashi DY, Liao DA, Ghazanfar AA, Elemans CPH. Vocal state change through laryngeal development. Nat Commun 2019; 10:4592. PMID: 31597928; PMCID: PMC6785551; DOI: 10.1038/s41467-019-12588-6.
Abstract
Across vertebrates, progressive changes in vocal behavior during postnatal development are typically attributed solely to developing neural circuits. How the changing body influences vocal development remains unknown. Here we show that state changes in the contact vocalizations of infant marmoset monkeys, which transition from noisy, low frequency cries to tonal, higher pitched vocalizations in adults, are caused partially by laryngeal development. Combining analyses of natural vocalizations, motorized excised larynx experiments, tensile material tests and high-speed imaging, we show that vocal state transition occurs via a sound source switch from vocal folds to apical vocal membranes, producing louder vocalizations with higher efficiency. We show with an empirically based model of descending motor control how neural circuits could interact with changing laryngeal dynamics, leading to adaptive vocal development. Our results emphasize the importance of embodied approaches to vocal development, where exploiting biomechanical consequences of changing material properties can simplify motor control, reducing the computational load on the developing brain.
Affiliation(s)
- Yisi S Zhang
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, 08544, USA
- Daniel Y Takahashi
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, 08544, USA
- Diana A Liao
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, 08544, USA
- Asif A Ghazanfar
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, 08544, USA
- Department of Psychology, Princeton University, Princeton, NJ, 08544, USA
- Department of Ecology & Evolutionary Biology, Princeton University, Princeton, NJ, 08544, USA
- Coen P H Elemans
- Department of Biology, University of Southern Denmark, 5230, Odense M, Denmark
4
Ahmadi F, Noorian F, Novakovic D, van Schaik A. A pneumatic Bionic Voice prosthesis - Pre-clinical trials of controlling the voice onset and offset. PLoS One 2018; 13:e0192257. PMID: 29466455; PMCID: PMC5821320; DOI: 10.1371/journal.pone.0192257.
Abstract
Despite emergent progress in many fields of bionics, a functional Bionic Voice prosthesis for laryngectomy patients (larynx amputees) has not yet been achieved, leading to a lifetime of vocal disability for these patients. This study introduces a novel framework of Pneumatic Bionic Voice Prostheses as an electronic adaptation of the Pneumatic Artificial Larynx (PAL) device. The PAL is a non-invasive mechanical voice source, driven exclusively by respiration with an exceptionally high voice quality, comparable to the existing gold standard of Tracheoesophageal (TE) voice prosthesis. Following PAL design closely as the reference, Pneumatic Bionic Voice Prostheses seem to have a strong potential to substitute the existing gold standard by generating a similar voice quality while remaining non-invasive and non-surgical. This paper designs the first Pneumatic Bionic Voice prosthesis and evaluates its onset and offset control against the PAL device through pre-clinical trials on one laryngectomy patient. The evaluation on a database of more than five hours of continuous/isolated speech recordings shows a close match between the onset/offset control of the Pneumatic Bionic Voice and the PAL with an accuracy of 98.45 ±0.54%. When implemented in real-time, the Pneumatic Bionic Voice prosthesis controller has an average onset/offset delay of 10 milliseconds compared to the PAL. Hence it addresses a major disadvantage of previous electronic voice prostheses, including myoelectric Bionic Voice, in meeting the short time-frames of controlling the onset/offset of the voice in continuous speech.
Affiliation(s)
- Farzaneh Ahmadi
- The MARCS Institute for Brain Behaviour and Development, Western Sydney University, Sydney, New South Wales, Australia
- Farzad Noorian
- School of Electrical and Information Engineering, The University of Sydney, Sydney, New South Wales, Australia
- Daniel Novakovic
- Central Clinical School, Faculty of Medicine, The University of Sydney, Sydney, New South Wales, Australia
- André van Schaik
- The MARCS Institute for Brain Behaviour and Development, Western Sydney University, Sydney, New South Wales, Australia
5
Fischer J, Price T. Meaning, intention, and inference in primate vocal communication. Neurosci Biobehav Rev 2017; 82:22-31. DOI: 10.1016/j.neubiorev.2016.10.014.
6
The origins and diversity of bat songs. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2016; 202:535-54. DOI: 10.1007/s00359-016-1105-0.
7
8
Brain mechanisms of acoustic communication in humans and nonhuman primates: An evolutionary perspective. Behav Brain Sci 2014; 37:529-46. DOI: 10.1017/s0140525x13003099.
Abstract
Any account of “what is special about the human brain” (Passingham 2008) must specify the neural basis of our unique ability to produce speech and delineate how these remarkable motor capabilities could have emerged in our hominin ancestors. Clinical data suggest that the basal ganglia provide a platform for the integration of primate-general mechanisms of acoustic communication with the faculty of articulate speech in humans. Furthermore, neurobiological and paleoanthropological data point at a two-stage model of the phylogenetic evolution of this crucial prerequisite of spoken language: (i) monosynaptic refinement of the projections of motor cortex to the brainstem nuclei that steer laryngeal muscles, presumably, as part of a “phylogenetic trend” associated with increasing brain size during hominin evolution; (ii) subsequent vocal-laryngeal elaboration of cortico-basal ganglia circuitries, driven by human-specific FOXP2 mutations. This concept implies vocal continuity of spoken language evolution at the motor level, elucidating the deep entrenchment of articulate speech into a “nonverbal matrix” (Ingold 1994), which is not accounted for by gestural-origin theories. Moreover, it provides a solution to the question for the adaptive value of the “first word” (Bickerton 2009) since even the earliest and most simple verbal utterances must have increased the versatility of vocal displays afforded by the preceding elaboration of monosynaptic corticobulbar tracts, giving rise to enhanced social cooperation and prestige. At the ontogenetic level, the proposed model assumes age-dependent interactions between the basal ganglia and their cortical targets, similar to vocal learning in some songbirds. In this view, the emergence of articulate speech builds on the “renaissance” of an ancient organizational principle and, hence, may represent an example of “evolutionary tinkering” (Jacob 1977).
9
Frühholz S, Klaas HS, Patel S, Grandjean D. Talking in Fury: The Cortico-Subcortical Network Underlying Angry Vocalizations. Cereb Cortex 2014; 25:2752-62. PMID: 24735671; DOI: 10.1093/cercor/bhu074.
Abstract
Although the neural basis for the perception of vocal emotions has been described extensively, the neural basis for the expression of vocal emotions is almost unknown. Here, we asked participants both to repeat and to express high-arousing angry vocalizations to command (i.e., evoked expressions). First, repeated expressions elicited activity in the left middle superior temporal gyrus (STG), pointing to a short auditory memory trace for the repetition of vocal expressions. Evoked expressions activated the left hippocampus, suggesting the retrieval of long-term stored scripts. Second, angry compared with neutral expressions elicited activity in the inferior frontal cortex (IFC) and the dorsal basal ganglia (BG), specifically during evoked expressions. Angry expressions also activated the amygdala and anterior cingulate cortex (ACC), and the latter correlated with pupil size as an indicator of bodily arousal during emotional output behavior. Though uncorrelated, both ACC activity and pupil diameter were also increased during repetition trials, indicating increased control demands during the more constrained production type of precisely repeating prosodic intonations. Finally, different acoustic measures of angry expressions were associated with activity in the left STG, bilateral inferior frontal gyrus, and dorsal BG.
Affiliation(s)
- Sascha Frühholz
- Neuroscience of Emotion and Affective Dynamics Laboratory (NEAD), Department of Psychology, University of Geneva, Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Hannah S Klaas
- Neuroscience of Emotion and Affective Dynamics Laboratory (NEAD), Department of Psychology, University of Geneva, Geneva, Switzerland
- Sona Patel
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Didier Grandjean
- Neuroscience of Emotion and Affective Dynamics Laboratory (NEAD), Department of Psychology, University of Geneva, Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
10
Liu Y, Feng J, Metzner W. Different auditory feedback control for echolocation and communication in horseshoe bats. PLoS One 2013; 8:e62710. PMID: 23638137; PMCID: PMC3634746; DOI: 10.1371/journal.pone.0062710.
Abstract
Auditory feedback from the animal's own voice is essential during bat echolocation: to optimize signal detection, bats continuously adjust various call parameters in response to changing echo signals. Auditory feedback seems also necessary for controlling many bat communication calls, although it remains unclear how auditory feedback control differs in echolocation and communication. We tackled this question by analyzing echolocation and communication in greater horseshoe bats, whose echolocation pulses are dominated by a constant frequency component that matches the frequency range they hear best. To maintain echoes within this “auditory fovea”, horseshoe bats constantly adjust their echolocation call frequency depending on the frequency of the returning echo signal. This Doppler-shift compensation (DSC) behavior represents one of the most precise forms of sensory-motor feedback known. We examined the variability of echolocation pulses emitted at rest (resting frequencies, RFs) and one type of communication signal which resembles an echolocation pulse but is much shorter (short constant frequency communication calls, SCFs) and produced only during social interactions. We found that while RFs varied from day to day, corroborating earlier studies in other constant frequency bats, SCF-frequencies remained unchanged. In addition, RFs overlapped for some bats whereas SCF-frequencies were always distinctly different. This indicates that auditory feedback during echolocation changed with varying RFs but remained constant or may have been absent during emission of SCF calls for communication. This fundamentally different feedback mechanism for echolocation and communication may have enabled these bats to use SCF calls for individual recognition whereas they adjusted RF calls to accommodate the daily shifts of their auditory fovea.
Affiliation(s)
- Ying Liu
- Jilin Key Laboratory of Animal Resource Conservation and Utilization, Northeast Normal University, Changchun, Jilin, China
- Department of Integrative Biology and Physiology, University of California Los Angeles, Los Angeles, California, United States of America
- Jiang Feng
- Jilin Key Laboratory of Animal Resource Conservation and Utilization, Northeast Normal University, Changchun, Jilin, China
- Walter Metzner
- Department of Integrative Biology and Physiology, University of California Los Angeles, Los Angeles, California, United States of America
- Neurosensing and Bionavigation Research Center, Doshisha University, Kyotanabe, Kyoto, Japan
11
Bolhuis JJ, Okanoya K, Scharff C. Twitter evolution: converging mechanisms in birdsong and human speech. Nat Rev Neurosci 2010; 11:747-59. PMID: 20959859; DOI: 10.1038/nrn2931.
Abstract
Vocal imitation in human infants and in some orders of birds relies on auditory-guided motor learning during a sensitive period of development. It proceeds from 'babbling' (in humans) and 'subsong' (in birds) through distinct phases towards the full-fledged communication system. Language development and birdsong learning have parallels at the behavioural, neural and genetic levels. Different orders of birds have evolved networks of brain regions for song learning and production that have a surprisingly similar gross anatomy, with analogies to human cortical regions and basal ganglia. Comparisons between different songbird species and humans point towards both general and species-specific principles of vocal learning and have identified common neural and molecular substrates, including the forkhead box P2 (FOXP2) gene.
Affiliation(s)
- Johan J Bolhuis
- Behavioural Biology, Department of Biology and Helmholtz Institute, Utrecht University, Padualaan 8, Utrecht, the Netherlands.