1. Fournier F, Perrier L, Girard-Buttoz C, Keenan S, Bortolato T, Wittig R, Crockford C, Levrero F. Emotions mediate nonlinear phenomena production in the vocalizations of two ape species. Philos Trans R Soc Lond B Biol Sci 2025; 380:20240013. PMID: 40176511; PMCID: PMC11966156; DOI: 10.1098/rstb.2024.0013.
Abstract
Nonlinear phenomena (NLP) are widely observed in mammal vocalizations. One prominent, albeit rarely empirically tested, theory suggests that NLP serve to communicate individual emotional states. Here, we test this 'emotional hypothesis' by assessing NLP production in the vocalizations of chimpanzees and bonobos across various social contexts. These two species are well suited to testing this hypothesis because bonobos are more socially opportunistic than chimpanzees. We found that both species produced the same five distinct NLP types, albeit at different frequencies. Contextual valence influenced NLP production in both species, with negative valence associated with more frequent NLP production than positive or neutral valence. Using aggression severity and caller role as proxies for arousal, we found that in bonobos, but not in chimpanzees, vocalizations uttered during contact aggression or by victims and females contained more NLP. In contrast, the type of NLP produced was influenced by neither valence nor arousal in either species. Our study supports the emotional hypothesis regarding the occurrence of NLP production in mammals, particularly in socially opportunistic species such as bonobos. This reinforces the hypothesis of an adaptive role of NLP in animal communication and prompts further investigation into their communicative functions. This article is part of the theme issue 'Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions'.
Affiliation(s)
- Floriane Fournier
- ENES Bioacoustics Research Laboratory, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne 42100, France
- Léo Perrier
- ENES Bioacoustics Research Laboratory, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne 42100, France
- Cedric Girard-Buttoz
- ENES Bioacoustics Research Laboratory, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne 42100, France
- Taï Chimpanzee Project, CSRS, Abidjan, Ivory Coast
- Department of Human Behaviour, Ecology and Culture, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Sumir Keenan
- ENES Bioacoustics Research Laboratory, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne 42100, France
- Tatiana Bortolato
- Taï Chimpanzee Project, CSRS, Abidjan, Ivory Coast
- Department of Human Behaviour, Ecology and Culture, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Ape Social Mind Lab, Institute of Cognitive Science Marc Jeannerod, UMR 5229, CNRS, Lyon, France
- Roman Wittig
- Taï Chimpanzee Project, CSRS, Abidjan, Ivory Coast
- Department of Human Behaviour, Ecology and Culture, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Ape Social Mind Lab, Institute of Cognitive Science Marc Jeannerod, UMR 5229, CNRS, Lyon, France
- Catherine Crockford
- Taï Chimpanzee Project, CSRS, Abidjan, Ivory Coast
- Department of Human Behaviour, Ecology and Culture, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
- Ape Social Mind Lab, Institute of Cognitive Science Marc Jeannerod, UMR 5229, CNRS, Lyon, France
- Florence Levrero
- ENES Bioacoustics Research Laboratory, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne 42100, France
- Institut universitaire de France, Paris, France
2. Linossier J, Charrier I, Mathevon N, Casey C, Reichmuth C. Nonlinear phenomena in pinnipeds: a preliminary investigation in the contact calls of northern elephant seal pups. Philos Trans R Soc Lond B Biol Sci 2025; 380:20240016. PMID: 40176523; PMCID: PMC11966153; DOI: 10.1098/rstb.2024.0016.
Abstract
As acoustic markers of emotional state, nonlinear phenomena (NLP) are commonly found in the calls that young mammals produce to solicit attention from their parents. However, data are lacking to assess the ontogeny of these NLP during early development, including the extent to which these acoustic cues vary with the age and sex of the emitter. In the present study, we evaluated the occurrence of NLP in the contact calls that northern elephant seal (Mirounga angustirostris) pups emit to solicit maternal care during the three-week period of maternal dependence. We found that five types of NLP are present at an early age. The relative occurrence of these NLP types varies with pup age, with more biphonation, chaos and subharmonics as pups get older, while the occurrence of vibrato-like frequency-modulated components varies with both age and sex. Our results suggest that developmental changes, including body growth, facilitate increased flexibility in the vocal apparatus, which subsequently impacts the production of certain types of NLP. The production of nonlinear components within the calls of rapidly growing elephant seal pups is likely linked to their arousal state, which in turn is related to their high demand for maternal care; this demand can fluctuate throughout the lactation period and vary between male and female pups. This article is part of the theme issue 'Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions'.
Affiliation(s)
- Juliette Linossier
- Long Marine Laboratory, Institute of Marine Sciences, University of California Santa Cruz, Santa Cruz, CA 95060, USA
- Biophonia, Sualello 20232, Oletta, France
- Isabelle Charrier
- Institut des Neurosciences Paris-Saclay, Université Paris-Saclay, CNRS, UMR 9197, Saclay 91400, France
- Nicolas Mathevon
- ENES Bioacoustics Research Lab, CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France
- Ecole Pratique des Hautes Etudes, CHArt lab, PSL University, Paris, France
- Institut universitaire de France, Paris, France
- Caroline Casey
- Long Marine Laboratory, Institute of Marine Sciences, University of California Santa Cruz, Santa Cruz, CA 95060, USA
- Colleen Reichmuth
- Long Marine Laboratory, Institute of Marine Sciences, University of California Santa Cruz, Santa Cruz, CA 95060, USA
- Alaska SeaLife Center, Seward, AK 99664, USA
3. Blumstein DT. Nonlinear phenomena in marmot alarm calls: a mechanism encoding fear? Philos Trans R Soc Lond B Biol Sci 2025; 380:20240008. PMID: 40176508; PMCID: PMC11966161; DOI: 10.1098/rstb.2024.0008.
Abstract
I review a case study of marmots that contributed to the empirical basis of the nonlinearity and fear hypothesis, which explains why certain nonlinear acoustic phenomena (NLP) are produced in extremely high-risk situations and communicate high urgency. In response to detecting predatory threats, yellow-bellied marmots (Marmota flaviventer) emit alarm calls and, in some situations, fear screams. Prior work on marmots has shown that call production is associated with the degree of risk the caller experiences and that calls are individually distinctive. Receivers respond to calls and are sensitive to variation in caller reliability. Calls also contain nonlinear acoustic phenomena. Socially isolated animals and those infected with Eimeria, an intestinal parasite, produced 'noisier' calls. However, animals that were likely under greater stress (as measured with faecal glucocorticoid metabolites) produced more structured and less noisy calls. The addition of NLP increases responsiveness in receivers. NLP in alarm calls have modest heritability. Taken together, the study of NLP in marmots has enhanced our understanding of the potential information encoded in alarm calls and is consistent with the hypothesis that variation in NLP production communicates fear, stimulating work with other species, including humans. This article is part of the theme issue 'Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions'.
Affiliation(s)
- Daniel T. Blumstein
- Department of Ecology and Evolutionary Biology, University of California, Los Angeles, CA 90095-1606, USA
- The Rocky Mountain Biological Laboratory, Crested Butte, CO 81224, USA
4. Valente D, Magnard C, Koutseff A, Patural H, Chauleur C, Reby D, Pisanski K. Vocal communication and perception of pain in childbirth vocalizations. Philos Trans R Soc Lond B Biol Sci 2025; 380:20240009. PMID: 40176506; PMCID: PMC11966154; DOI: 10.1098/rstb.2024.0009.
Abstract
Nonlinear acoustic phenomena (NLP) likely facilitate the expression of distress in animal vocalizations, making calls perceptually rough and hard to ignore. Yet their function in adult human vocal communication remains poorly understood. Here, to examine the production and perception of acoustic correlates of pain in spontaneous human nonverbal vocalizations, we take advantage of childbirth, a natural context in which labouring women experiencing excruciating pain typically produce a range of highly evocative loud vocalizations, including moans and screams. We combine acoustic analyses of these real-life pain vocalizations with psychoacoustic experiments involving the playback of natural and synthetic calls to both naïve and expert listeners. We show that vocalizations become acoustically rougher, higher in fundamental frequency (pitch), less stable, louder and longer as labour progresses, paralleling a rise in women's self-assessed pain. In perception experiments, we show that both naïve listeners and obstetric professionals assign the highest pain ratings to vocalizations produced in the final expulsion phase of labour. Experiments with synthetic vocal stimuli confirm that listeners rely largely on nonlinear phenomena to assess pain. Our study confirms that nonlinear phenomena communicate intense, pain-induced distress in humans, consistent with their widespread function of signalling distress and arousal in vertebrate vocal signals. This article is part of the theme issue 'Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions'.
Affiliation(s)
- Daria Valente
- Department of Life Sciences and Systems Biology, University of Turin, Torino 10123, Italy
- Cecile Magnard
- Lucie Hussel Hospital, Maternity Ward, Montée du Dr Chapuis, Vienne 38200, France
- Alexis Koutseff
- ENES Bioacoustics Research Laboratory, CRNL Center for Research in Neuroscience in Lyon, Jean Monnet University of Saint Étienne, St-Étienne 42023, France
- Hugues Patural
- Department of Pediatrics, University Hospital Centre of Saint-Étienne, Saint-Étienne 42055, France
- Celine Chauleur
- Department of Gynecology and Obstetrics, University Hospital Centre of Saint-Étienne, Saint-Étienne 42055, France
- David Reby
- ENES Bioacoustics Research Laboratory, CRNL Center for Research in Neuroscience in Lyon, Jean Monnet University of Saint Étienne, St-Étienne 42023, France
- Institut Universitaire de France, Paris, Île-de-France 75005, France
- Katarzyna Pisanski
- ENES Bioacoustics Research Laboratory, CRNL Center for Research in Neuroscience in Lyon, Jean Monnet University of Saint Étienne, St-Étienne 42023, France
- Institute of Psychology, University of Wrocław, Wrocław 50-527, Poland
5. Anikin A, Herbst CT. How to analyse and manipulate nonlinear phenomena in voice recordings. Philos Trans R Soc Lond B Biol Sci 2025; 380:20240003. PMID: 40176526; PMCID: PMC11966163; DOI: 10.1098/rstb.2024.0003.
Abstract
We address two research applications in this methodological review: starting from an audio recording, the goal may be to characterize nonlinear phenomena (NLP) at the level of voice production or to test their perceptual effects on listeners. A crucial prerequisite for this work is the ability to detect NLP in acoustic signals, which can then be correlated with biologically relevant information about the caller and with listeners' reactions. NLP are often annotated manually, but this is labour-intensive and not very reliable; advanced visualization aids such as reassigned spectrograms and phasegrams can help. Objective acoustic features can also be useful, including general descriptives (harmonics-to-noise ratio, cepstral peak prominence, vocal roughness), statistics derived from nonlinear dynamics (correlation dimension) and NLP-specific measures (depth of modulation and subharmonics). On the perception side, playback studies can greatly benefit from tools for directly manipulating NLP in recordings. Adding frequency jumps, amplitude modulation and subharmonics is relatively straightforward; creating biphonation, imitating chaos or removing NLP from a recording is more challenging, but feasible with parametric voice synthesis. We describe the most promising algorithms for analysing and manipulating NLP and provide detailed examples with audio files and R code in the supplementary material. This article is part of the theme issue 'Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions'.
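The manipulation side described in this abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical Python illustration (not the authors' R code) of imposing two of the NLP types mentioned, amplitude modulation and subharmonics, on a synthetic harmonic source; all function names, sample rates and depth values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def harmonic_tone(f0, dur, sr=16000, n_harm=5):
    """Synthesize a simple harmonic source: a stand-in for a clean call."""
    t = np.arange(int(dur * sr)) / sr
    # Sum the first n_harm harmonics with 1/k amplitude roll-off.
    return sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, n_harm + 1))

def add_amplitude_modulation(x, am_freq=70.0, depth=0.5, sr=16000):
    """Impose slow amplitude modulation, which creates sidebands around each harmonic."""
    t = np.arange(len(x)) / sr
    return x * (1 - depth / 2 + (depth / 2) * np.cos(2 * np.pi * am_freq * t))

def add_subharmonic(x, f0, sr=16000, depth=0.3):
    """Mix in an f0/2 component: the simplest approximation of period doubling."""
    t = np.arange(len(x)) / sr
    return x + depth * np.sin(2 * np.pi * (f0 / 2) * t)

x = harmonic_tone(440, 0.5)           # clean 440 Hz tone, 0.5 s
y = add_amplitude_modulation(x)       # rough-sounding, sideband-rich version
z = add_subharmonic(x, 440)           # version with an added 220 Hz subharmonic
```

In real playback stimuli, the modulation rate and the relative level of the subharmonic would be matched to values measured in natural calls, and removing NLP or imitating chaos would require full parametric resynthesis, as the abstract notes.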
Affiliation(s)
- Andrey Anikin
- Division of Cognitive Science, Lund University, Lund, Sweden
- ENES Bioacoustics Research Laboratory, Université Jean Monnet Saint-Étienne, Saint-Étienne, France
- Christian T. Herbst
- University of Vienna, Vienna, Austria
- Department of Communication Sciences and Disorders, College of Liberal Arts and Sciences, University of Iowa, Iowa City, Iowa, USA
6. Corvin S, Massenet M, Hardy A, Patural H, Peyron R, Fauchon C, Mathevon N. Nonlinear acoustic phenomena affect the perception of pain in human baby cries. Philos Trans R Soc Lond B Biol Sci 2025; 380:20240023. PMID: 40176515; PMCID: PMC11966150; DOI: 10.1098/rstb.2024.0023.
Abstract
What makes the painful cries of human babies so difficult to ignore? Vocal traits known as 'nonlinear phenomena' are prime candidates. These acoustic irregularities are common in babies' cries and are typically associated with high levels of distress or pain. Despite the vital importance of cries for a baby's survival, how these nonlinear phenomena drive pain perception in adult listeners has not previously been systematically investigated. Here, by combining acoustic analyses of cries recorded in different contexts with playback experiments using natural and synthetic cries, we show that baby cries expressing acute pain are characterized by a pronounced presence of different nonlinear phenomena, and that these nonlinear phenomena drive pain evaluation by adult listeners. While adult listeners rated cries containing any of these nonlinear phenomena as expressing more pain, they were particularly sensitive to the presence of chaos. Our results thus show that nonlinear phenomena, especially chaos, encode pain information in baby cries and may be critically helpful for the development of vocal-based tools for monitoring babies' needs in paediatric care. This article is part of the theme issue 'Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions'.
Affiliation(s)
- Siloé Corvin
- ENES Bioacoustics Research Lab, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne 42100, France
- NEUROPAIN Team, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne 42270, France
- Mathilde Massenet
- ENES Bioacoustics Research Lab, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne 42100, France
- Angélique Hardy
- NEUROPAIN Team, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne 42270, France
- Neuro-Dol, Inserm, University of Clermont Auvergne, University Hospital of Clermont-Ferrand, Clermont-Ferrand 63100, France
- Hugues Patural
- Neonatal and Pediatric Intensive Care Unit, SAINBIOSE Laboratory, Inserm, University Hospital of Saint-Etienne, University of Saint-Etienne, Saint-Etienne 42270, France
- Roland Peyron
- NEUROPAIN Team, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne 42270, France
- Camille Fauchon
- NEUROPAIN Team, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne 42270, France
- Neuro-Dol, Inserm, University of Clermont Auvergne, University Hospital of Clermont-Ferrand, Clermont-Ferrand 63100, France
- Nicolas Mathevon
- ENES Bioacoustics Research Lab, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne 42100, France
- Institut Universitaire de France, Paris 75005, France
- Ecole Pratique des Hautes Etudes, CHArt lab, EPHE - PSL University, Paris, France
7. Arnal LH, Gonçalves N. Rough is salient: a conserved vocal niche to hijack the brain's salience system. Philos Trans R Soc Lond B Biol Sci 2025; 380:20240020. PMID: 40176527; PMCID: PMC11966164; DOI: 10.1098/rstb.2024.0020.
Abstract
The propensity to communicate extreme emotional states and arousal through salient, non-referential vocalizations is ubiquitous among mammals and beyond. Screams, whether intended to warn conspecifics or deter aggressors, require a rapid increase of airflow through the vocal folds, inducing nonlinear distortions of the signal. These distortions contain salient, temporally patterned acoustic features in a restricted range of the audible spectrum, and these features may have a biological significance, triggering fast behavioural responses in receivers. We present converging neurophysiological and behavioural evidence from humans and animals supporting the idea that the properties emerging from nonlinear vocal phenomena are ideally adapted to induce efficient sensory, emotional and behavioural responses. We argue that these fast temporal modulations, perceived as roughness, are unlikely to be an epiphenomenon of vocal production but rather the result of selective evolutionary pressure on vocal warning signals to promote efficient communication. In this view, rough features may have been selected and conserved as an acoustic trait that recruits ancestral sensory salience pathways and elicits optimal reactions in the receiver. By exploring the impact of rough vocalizations at the receiver's end, we review the perceptual, behavioural and neural factors that may have shaped these signals into powerful communication tools. This article is part of the theme issue 'Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions'.
Affiliation(s)
- Luc H. Arnal
- Université Paris Cité, Institut Pasteur, AP-HP, INSERM, CNRS, Fondation Pour l'Audition, Institut de l’Audition, IHU reConnect, Paris 75012, France
- Noémi Gonçalves
- Université Paris Cité, Institut Pasteur, AP-HP, INSERM, CNRS, Fondation Pour l'Audition, Institut de l’Audition, IHU reConnect, Paris 75012, France
8. Massenet M, Pisanski K, Reynaud K, Mathevon N, Reby D, Anikin A. Acoustic context and dynamics of nonlinear phenomena in mammalian calls: the case of puppy whines. Philos Trans R Soc Lond B Biol Sci 2025; 380:20240022. PMID: 40176516; PMCID: PMC11966151; DOI: 10.1098/rstb.2024.0022.
Abstract
Nonlinear phenomena (NLP) are often associated with high arousal and function to grab attention and/or signal urgency in vocalizations such as distress calls. Although biomechanical models and in vivo/ex vivo experiments suggest that their occurrence reflects the destabilization of vocal fold vibration under intense subglottal pressure and muscle tension, comprehensive descriptions of the dynamics of NLP occurrence in natural vocal signals are critically lacking. Here, to fill this gap, we report the timing, type, extent and acoustic context of NLP in 12 011 whines produced by Beagle puppies (Canis familiaris) during a brief separation from their mothers. Within bouts of whines, we show that both the proportion of time spent vocalizing and the number of whines containing NLP, especially those with chaos, increase with time since separation, presumably reflecting heightened arousal. Within whines, we show that NLP are typically produced during the first half of the call, following the steepest rises in pitch (fundamental frequency, fo) and amplitude. While our study reinforces the notion that NLP arise in calls due to instabilities in vocal production during high arousal, it also provides novel and efficient analytical tools for quantifying nonlinear acoustics in ecologically relevant mammal vocal communication contexts. This article is part of the theme issue 'Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions'.
Affiliation(s)
- Mathilde Massenet
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France
- Division of Cognitive Science, Lund University, Lund, Sweden
- Katarzyna Pisanski
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France
- DDL Dynamics of Language Laboratory, University of Lyon 2, Lyon, France
- Institute of Psychology, University of Wrocław, Wrocław, Poland
- Karine Reynaud
- École Nationale Vétérinaire d’Alfort, EnvA, Maisons-Alfort, France
- Physiologie de la Reproduction et des Comportements, CNRS, INRAE, Université de Tours, PRC, Nouzilly, France
- Nicolas Mathevon
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France
- Institut universitaire de France, Paris, France
- Ecole Pratique des Hautes Etudes, University Paris-Sciences-Lettres, Paris, France
- David Reby
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France
- Institut universitaire de France, Paris, France
- Andrey Anikin
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France
- Division of Cognitive Science, Lund University, Lund, Sweden
9. De Gregorio C, Valente D, Cristiano W, Carugati F, Prealta M, Ferrario V, Raimondi T, Torti V, Ratsimbazafy J, Giacoma C, Gamba M. Singing out of tune: sexual and developmental differences in the occurrence of nonlinear phenomena in primate songs. Philos Trans R Soc Lond B Biol Sci 2025; 380:20240021. PMID: 40176518; PMCID: PMC11966165; DOI: 10.1098/rstb.2024.0021.
Abstract
Animal vocalizations contain a varying degree of nonlinear phenomena (NLP) caused by irregular or chaotic vocal organ dynamics. Several hypotheses have been proposed to explain the presence of NLP, from unintentional by-products of poor vocal technique to a functional communicative role. We aimed to disentangle the roles of sex, age and physiological constraints in the occurrence of NLP in the songs of the lemur Indri indri, which are complex harmonic vocal displays organized in phrases. Age and sex affected the presence and type of NLP in songs. In particular, the proportion of the phenomena considered decreased with age, except for subharmonics. Subharmonics potentially mediate the perception of lower pitch, making signallers appear larger. Subharmonics and frequency jumps occurred in lower-pitched notes than regular units, while chaos and sidebands occurred in higher-pitched units. This suggests that different types of NLP are associated with different vocal constraints. Finally, indris might experience short-term vocal fatigue, with units occurring in the last position of a phrase having the highest probability of containing NLP. The presence of NLP in indris might thus result from proximate causes, such as physiological constraints, and ultimate causes, such as the evolutionary pressures that shaped the communicative role of NLP. This article is part of the theme issue 'Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions'.
Affiliation(s)
- Chiara De Gregorio
- Department of Life Sciences and Systems Biology, University of Torino, Torino 10123, Italy
- Department of Psychology, University of Warwick, Coventry CV4 7A, UK
- Daria Valente
- Department of Life Sciences and Systems Biology, University of Torino, Torino 10123, Italy
- Parco Natura Viva Garda Zoological Park (PNV), Bussolengo 37012, Italy
- Walter Cristiano
- Department of Life Sciences and Systems Biology, University of Torino, Torino 10123, Italy
- Environment and Health Department, Italian National Institute of Health, Roma 00161, Italy
- Filippo Carugati
- Department of Life Sciences and Systems Biology, University of Torino, Torino 10123, Italy
- Michela Prealta
- Department of Life Sciences and Systems Biology, University of Torino, Torino 10123, Italy
- Valeria Ferrario
- Department of Life Sciences and Systems Biology, University of Torino, Torino 10123, Italy
- Chester Zoo, Caughall Road, Chester CH2 1LE, UK
- Teresa Raimondi
- Department of Life Sciences and Systems Biology, University of Torino, Torino 10123, Italy
- Department of Human Neurosciences, Sapienza University of Rome, Roma 00185, Italy
- Valeria Torti
- Department of Life Sciences and Systems Biology, University of Torino, Torino 10123, Italy
- Jonah Ratsimbazafy
- Groupe d’Etude et de Recherche sur les Primates de Madagascar, Antananarivo 779, Madagascar
- Cristina Giacoma
- Department of Life Sciences and Systems Biology, University of Torino, Torino 10123, Italy
- Marco Gamba
- Department of Life Sciences and Systems Biology, University of Torino, Torino 10123, Italy
10. Ponsonnet M, Coupé C, Pellegrino F, Garcia Arasco A, Pisanski K. Vowel signatures in emotional interjections and nonlinguistic vocalizations expressing pain, disgust, and joy across languages. J Acoust Soc Am 2024; 156:3118-3139. PMID: 39531311; DOI: 10.1121/10.0032454.
Abstract
In this comparative cross-linguistic study, we test whether expressive interjections (words like ouch or yay) share similar vowel signatures across the world's languages, and whether these can be traced back to nonlinguistic vocalizations (like screams and cries) expressing the same emotions of pain, disgust, and joy. We analyze vowels in interjections from dictionaries of 131 languages (over 600 tokens) and compare these with nearly 500 vowels based on formant frequency measures from voice recordings of volitional nonlinguistic vocalizations. We show that across the globe, pain interjections feature a-like vowels and wide falling diphthongs ("ai" as in Ayyy!, "aw" as in Ouch!), whereas disgust and joy interjections do not show robust vowel regularities that extend geographically. In nonlinguistic vocalizations, all emotions yield distinct vowel signatures: pain prompts open vowels such as [a], disgust prompts schwa-like central vowels, and joy prompts front vowels such as [i]. Our results show that pain is the only affective experience tested with a clear, robust vowel signature that is preserved between nonlinguistic vocalizations and interjections across languages. These results offer empirical evidence for iconicity in some expressive interjections. We consider potential mechanisms and origins, from evolutionary pressures and sound symbolism to colexification, proposing testable hypotheses for future research.
Affiliation(s)
- Maïa Ponsonnet
- Dynamique Du Langage, CNRS et Université Lumière Lyon 2, Lyon, France
- School of Social Sciences, The University of Western Australia, Perth, Australia
- Christophe Coupé
- Department of Linguistics, The University of Hong Kong, Hong Kong SAR, China
- Katarzyna Pisanski
- Dynamique Du Langage, CNRS et Université Lumière Lyon 2, Lyon, France
- ENES Bioacoustics Research Laboratory, University Jean Monnet of Saint-Etienne, CRNL, CNRS, Saint-Etienne, France
- Institute of Psychology, University of Wrocław, Wrocław, Poland
11. Pisanski K, Reby D, Oleszkiewicz A. Humans need auditory experience to produce typical volitional nonverbal vocalizations. Commun Psychol 2024; 2:65. PMID: 39242947; PMCID: PMC11332021; DOI: 10.1038/s44271-024-00104-6.
Abstract
Human nonverbal vocalizations such as screams and cries often reflect their evolved functions. Although the universality of these putatively primordial vocal signals and their phylogenetic roots in animal calls suggest a strong reflexive foundation, many of the emotional vocalizations that we humans produce are under our voluntary control. This suggests that, like speech, volitional vocalizations may require auditory input to develop typically. Here, we acoustically analyzed hundreds of volitional vocalizations produced by profoundly deaf adults and typically hearing controls. We show that deaf adults produce unconventional and homogeneous vocalizations of aggression and pain that are unusually high-pitched, unarticulated, and with extremely few harsh-sounding nonlinear phenomena compared to controls. In contrast, fear vocalizations of deaf adults are relatively acoustically typical. In four lab experiments involving a range of perception tasks with 444 participants, listeners were less accurate in identifying the intended emotions of vocalizations produced by deaf vocalizers than by controls, perceived their vocalizations as less authentic, and reliably detected deafness. Vocalizations of congenitally deaf adults with zero auditory experience were the most atypical, suggesting additive effects of auditory deprivation. Vocal learning in humans may thus be required not only for speech, but also to acquire the full repertoire of volitional non-linguistic vocalizations.
Affiliation(s)
- Katarzyna Pisanski
- ENES Bioacoustics Research Laboratory, CRNL Center for Research in Neuroscience in Lyon, University of Saint-Étienne, 42023, Saint-Étienne, France.
- CNRS French National Centre for Scientific Research, DDL Dynamics of Language Lab, University of Lyon 2, 69007, Lyon, France.
- Institute of Psychology, University of Wrocław, 50-527, Wrocław, Poland.
- David Reby
- ENES Bioacoustics Research Laboratory, CRNL Center for Research in Neuroscience in Lyon, University of Saint-Étienne, 42023, Saint-Étienne, France
- Institut Universitaire de France, Paris, France
- Anna Oleszkiewicz
- Institute of Psychology, University of Wrocław, 50-527, Wrocław, Poland.
- Department of Otorhinolaryngology, Smell and Taste Clinic, Carl Gustav Carus Medical School, Technische Universitaet Dresden, 01307, Dresden, Germany.
12
Kamiloğlu RG, Sauter DA. Sounds like a fight: listeners can infer behavioural contexts from spontaneous nonverbal vocalisations. Cogn Emot 2024; 38:277-295. [PMID: 37997898 PMCID: PMC11057848 DOI: 10.1080/02699931.2023.2285854] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 11/13/2023] [Indexed: 11/25/2023]
Abstract
When we hear another person laugh or scream, can we tell the kind of situation they are in - for example, whether they are playing or fighting? Nonverbal expressions are theorised to vary systematically across behavioural contexts. Perceivers might be sensitive to these putative systematic mappings and thereby correctly infer contexts from others' vocalisations. Here, in two pre-registered experiments, we test the prediction that listeners can accurately deduce production contexts (e.g. being tickled, discovering threat) from spontaneous nonverbal vocalisations, like sighs and grunts. In Experiment 1, listeners (total n = 3120) matched 200 nonverbal vocalisations to one of 10 contexts using yes/no response options. Using signal detection analysis, we show that listeners were accurate at matching vocalisations to nine of the contexts. In Experiment 2, listeners (n = 337) categorised the production contexts by selecting from 10 response options in a forced-choice task. By analysing unbiased hit rates, we show that participants categorised all 10 contexts at better-than-chance levels. Together, these results demonstrate that perceivers can infer contexts from nonverbal vocalisations at rates that exceed that of random selection, suggesting that listeners are sensitive to systematic mappings between acoustic structures in vocalisations and behavioural contexts.
Affiliation(s)
- Roza G. Kamiloğlu
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Disa A. Sauter
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
13
Kreiman J. Information conveyed by voice quality. J Acoust Soc Am 2024; 155:1264-1271. [PMID: 38345424 DOI: 10.1121/10.0024609] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/21/2023] [Accepted: 01/09/2024] [Indexed: 02/15/2024]
Abstract
The problem of characterizing voice quality has long caused debate and frustration. The richness of the available descriptive vocabulary is overwhelming, but the density and complexity of the information voices convey lead some to conclude that language can never adequately specify what we hear. Others argue that terminology lacks an empirical basis, so that language-based scales are inadequate a priori. Efforts to provide meaningful instrumental characterizations have also had limited success. Such measures may capture sound patterns but cannot at present explain what characteristics, intentions, or identity listeners attribute to the speaker based on those patterns. However, some terms continually reappear across studies. These terms align with acoustic dimensions accounting for variance across speakers and languages and correlate with size and arousal across species. This suggests that labels for quality rest on a bedrock of biology: We have evolved to perceive voices in terms of size/arousal, and these factors structure both voice acoustics and descriptive language. Such linkages could help integrate studies of signals and their meaning, producing a truly interdisciplinary approach to the study of voice.
Affiliation(s)
- Jody Kreiman
- Departments of Head and Neck Surgery and Linguistics, University of California, Los Angeles, Los Angeles, California 90095-1794, USA
14
Thévenet J, Papet L, Coureaud G, Boyer N, Levréro F, Grimault N, Mathevon N. Crocodile perception of distress in hominid baby cries. Proc Biol Sci 2023; 290:20230201. [PMID: 37554035 PMCID: PMC10410202 DOI: 10.1098/rspb.2023.0201] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2023] [Accepted: 07/07/2023] [Indexed: 08/10/2023] Open
Abstract
It is generally argued that distress vocalizations, a common modality for alerting conspecifics across a wide range of terrestrial vertebrates, share acoustic features that allow heterospecific communication. Yet studies suggest that the acoustic traits used to decode distress may vary between species, leading to decoding errors. Here we found through playback experiments that Nile crocodiles are attracted to infant hominid cries (bonobo, chimpanzee and human), and that the intensity of crocodile response depends critically on a set of specific acoustic features (mainly deterministic chaos, harmonicity and spectral prominences). Our results suggest that crocodiles are sensitive to the degree of distress encoded in the vocalizations of phylogenetically very distant vertebrates. A comparison of these results with those obtained with human subjects confronted with the same stimuli further indicates that crocodiles and humans use different acoustic criteria to assess the distress encoded in infant cries. Interestingly, the acoustic features driving crocodile reaction are likely to be more reliable markers of distress than those used by humans. These results highlight that the acoustic features encoding information in vertebrate sound signals are not necessarily identical across species.
Affiliation(s)
- Julie Thévenet
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Equipe Cognition Auditive et Psychoacoustique, CRNL, CNRS, Inserm, University Lyon 1, Villeurbanne 69622, France
- Léo Papet
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Equipe Cognition Auditive et Psychoacoustique, CRNL, CNRS, Inserm, University Lyon 1, Villeurbanne 69622, France
- Gérard Coureaud
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Nicolas Boyer
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Florence Levréro
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Nicolas Grimault
- Equipe Cognition Auditive et Psychoacoustique, CRNL, CNRS, Inserm, University Lyon 1, Villeurbanne 69622, France
- Nicolas Mathevon
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, Rhône-Alpes, France
- Institut universitaire de France, Paris, Île-de-France, France
15
Erb WM, Barrow EJ, Hofner AN, Lecorchick JL, Mitra Setia T, Vogel ER. Wildfire smoke linked to vocal changes in wild Bornean orangutans. iScience 2023; 26:107088. [PMID: 37456857 PMCID: PMC10339020 DOI: 10.1016/j.isci.2023.107088] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2021] [Revised: 06/23/2022] [Accepted: 06/07/2023] [Indexed: 07/18/2023] Open
Abstract
Tropical peatlands are the sites of Earth's largest fire events, with outsized contributions to greenhouse gases, toxic smoke, and haze rich with particulate matter. The human health risks from wildfire smoke are well known, but its effects on wildlife inhabiting these ecosystems are poorly understood. In 2015, peatland fires on Borneo created a thick haze of smoke that blanketed the region. We studied its effects on the long call vocalizations of four adult male Bornean orangutans (Pongo pygmaeus wurmbii) in a peat swamp forest. During the period of heavy smoke, orangutans called less often and showed reduced vocal quality-lower pitch, increased harshness and perturbations, and more nonlinear phenomena-similar to changes in human smokers. Most of these changes persisted for two months after the smoke had cleared and likely signal changes in health. Our work contributes valuable information to support non-invasive acoustic monitoring of this Critically Endangered primate.
Affiliation(s)
- Wendy M. Erb
- K. Lisa Yang Center for Conservation Bioacoustics, Cornell Lab of Ornithology, Cornell University, Ithaca, NY 14850, USA
- Department of Anthropology, Rutgers, The State University of New Jersey, New Brunswick, NJ 08901, USA
- Elizabeth J. Barrow
- Department of Social Sciences, Oxford Brookes University, Headington, Oxford OX3 0BP, UK
- Gunung Palung Orangutan Conservation Program, West Kalimantan, Ketapang 78811, Indonesia
- Alexandra N. Hofner
- Department of Integrative Conservation, University of Georgia, Athens, GA 30602, USA
- Department of Anthropology, University of Georgia, Athens, GA 30602, USA
- Jessica L. Lecorchick
- K. Lisa Yang Center for Conservation Bioacoustics, Cornell Lab of Ornithology, Cornell University, Ithaca, NY 14850, USA
- Tatang Mitra Setia
- Fakultas Biologi, Universitas Nasional, Jakarta 12520, Indonesia
- Primate Research Center, Universitas Nasional, Jakarta 12520, Indonesia
- Erin R. Vogel
- Department of Anthropology, Rutgers, The State University of New Jersey, New Brunswick, NJ 08901, USA
- Center for Human Evolutionary Studies, Rutgers, The State University of New Jersey, New Brunswick, NJ 08901, USA
16
Grollero D, Petrolini V, Viola M, Morese R, Lettieri G, Cecchetti L. The structure underlying core affect and perceived affective qualities of human vocal bursts. Cogn Emot 2022; 37:1-17. [PMID: 36300588 DOI: 10.1080/02699931.2022.2139661] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
Abstract
Vocal bursts are non-linguistic affectively-laden sounds with a crucial function in human communication, yet their affective structure is still debated. Studies showed that ratings of valence and arousal follow a V-shaped relationship in several kinds of stimuli: high arousal ratings are more likely to go on a par with very negative or very positive valence. Across two studies, we asked participants to listen to 1,008 vocal bursts and judge both how they felt when listening to the sound (i.e. core affect condition), and how the speaker felt when producing it (i.e. perception of affective quality condition). We show that a V-shaped fit outperforms a linear model in explaining the valence-arousal relationship across conditions and studies, even after equating the number of exemplars across emotion categories. Also, although subjective experience can be significantly predicted using affective quality ratings, core affect scores are significantly lower in arousal, less extreme in valence, more variable between individuals, and less reproducible between studies. Nonetheless, stimuli rated with opposite valence between conditions range from 11% (study 1) to 17% (study 2). Lastly, we demonstrate that ambiguity in valence (i.e. high between-participants variability) explains violations of the V-shape and relates to higher arousal.
Affiliation(s)
- Demetrio Grollero
- Social and Affective Neuroscience (SANe) Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Valentina Petrolini
- Lindy Lab - Language in Neurodiversity, Department of Linguistics and Basque Studies, University of the Basque Country (UPV/EHU), Vitoria-Gasteiz, Spain
- Marco Viola
- Department of Philosophy and Education, University of Turin, Turin, Italy
- Rosalba Morese
- Faculty of Communication, Culture and Society, Università della Svizzera Italiana, Lugano, Switzerland
- Faculty of Biomedical Sciences, Università della Svizzera Italiana, Lugano, Switzerland
- Giada Lettieri
- Social and Affective Neuroscience (SANe) Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Crossmodal Perception and Plasticity Laboratory, IPSY, University of Louvain, Louvain-la-Neuve, Belgium
- Luca Cecchetti
- Social and Affective Neuroscience (SANe) Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
17
Di Stefano N, Spence C. Roughness perception: A multisensory/crossmodal perspective. Atten Percept Psychophys 2022; 84:2087-2114. [PMID: 36028614 PMCID: PMC9481510 DOI: 10.3758/s13414-022-02550-y] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/01/2022] [Indexed: 11/08/2022]
Abstract
Roughness is a perceptual attribute typically associated with certain stimuli that are presented in one of the spatial senses. In auditory research, the term is typically used to describe the harsh effects that are induced by particular sound qualities (i.e., dissonance) and human/animal vocalizations (e.g., screams, distress cries). In the tactile domain, roughness is a crucial factor determining the perceptual features of a surface. The same feature can also be ascertained visually, by means of the extraction of pattern features that determine the haptic quality of surfaces, such as grain size and density. By contrast, the term roughness has rarely been applied to the description of those stimuli perceived via the chemical senses. In this review, we take a critical look at the putative meaning(s) of the term roughness, when used in both unisensory and multisensory contexts, in an attempt to answer two key questions: (1) Is the use of the term 'roughness' the same in each modality when considered individually? and (2) Do crossmodal correspondences involving roughness match distinct perceptual features or (at least on certain occasions) do they merely pick up on an amodal property? We start by examining the use of the term in the auditory domain. Next, we summarize the ways in which the term roughness has been used in the literature on tactile and visual perception, and in the domain of olfaction and gustation. Then, we move on to the crossmodal context, reviewing the literature on the perception of roughness in the audiovisual, audiotactile, and auditory-gustatory/olfactory domains. Finally, we highlight some limitations of the reviewed literature and we outline a number of key directions for future empirical research in roughness perception.
Affiliation(s)
- Nicola Di Stefano
- National Research Council, Institute for Cognitive Sciences and Technologies, Rome, Italy.
18
Massenet M, Anikin A, Pisanski K, Reynaud K, Mathevon N, Reby D. Nonlinear vocal phenomena affect human perceptions of distress, size and dominance in puppy whines. Proc Biol Sci 2022; 289:20220429. [PMID: 35473375 PMCID: PMC9043735 DOI: 10.1098/rspb.2022.0429] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Accepted: 03/31/2022] [Indexed: 11/12/2022] Open
Abstract
While nonlinear phenomena (NLP) are widely reported in animal vocalizations, often causing perceptual harshness and roughness, their communicative function remains debated. Several hypotheses have been put forward: attention-grabbing, communication of distress, exaggeration of body size and dominance. Here, we use state-of-the-art sound synthesis to investigate how NLP affect the perception of puppy whines by human listeners. Listeners assessed the distress, size or dominance conveyed by synthetic puppy whines with manipulated NLP, including frequency jumps and varying proportions of subharmonics, sidebands and deterministic chaos. We found that the presence of chaos increased the puppy's perceived level of distress and that this effect held across a range of representative fundamental frequency (fo) levels. Adding sidebands and subharmonics also increased perceived distress among listeners who have extensive caregiving experience with pre-weaned puppies (e.g. breeders, veterinarians). Finally, we found that whines with added chaos, subharmonics or sidebands were associated with larger and more dominant puppies, although these biases were attenuated in experienced caregivers. Together, our results show that nonlinear phenomena in puppy whines can convey rich information to human listeners and therefore may be crucial for offspring survival during breeding of a domesticated species.
Affiliation(s)
- Mathilde Massenet
- Equipe de Neuro-Ethologie Sensorielle, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France
- Andrey Anikin
- Equipe de Neuro-Ethologie Sensorielle, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France
- Division of Cognitive Science, University of Lund, 22100 Lund, Sweden
- Katarzyna Pisanski
- Equipe de Neuro-Ethologie Sensorielle, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France
- CNRS, French National Centre for Scientific Research, Laboratoire de Dynamique du Langage, University of Lyon 2, 69007 Lyon, France
- Karine Reynaud
- École Nationale Vétérinaire d'Alfort, EnvA, 94700 Maisons-Alfort, France
- Physiologie de la Reproduction et des Comportements, CNRS, IFCE, INRAE, University of Tours, PRC, Nouzilly, France
- Nicolas Mathevon
- Equipe de Neuro-Ethologie Sensorielle, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France
- Institut universitaire de France, Paris, France
- David Reby
- Equipe de Neuro-Ethologie Sensorielle, ENES/CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France
- Institut universitaire de France, Paris, France
19
Pisanski K, Bryant GA, Cornec C, Anikin A, Reby D. Form follows function in human nonverbal vocalisations. Ethol Ecol Evol 2022. [DOI: 10.1080/03949370.2022.2026482] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Katarzyna Pisanski
- ENES Sensory Neuro-Ethology Lab, CRNL, Jean Monnet University of Saint Étienne, UMR 5293, St-Étienne 42023, France
- CNRS French National Centre for Scientific Research, DDL Dynamics of Language Lab, University of Lyon 2, Lyon 69007, France
- Gregory A. Bryant
- Department of Communication, Center for Behavior, Evolution, and Culture, University of California, Los Angeles, California, USA
- Clément Cornec
- ENES Sensory Neuro-Ethology Lab, CRNL, Jean Monnet University of Saint Étienne, UMR 5293, St-Étienne 42023, France
- Andrey Anikin
- ENES Sensory Neuro-Ethology Lab, CRNL, Jean Monnet University of Saint Étienne, UMR 5293, St-Étienne 42023, France
- Division of Cognitive Science, Lund University, Lund 22100, Sweden
- David Reby
- ENES Sensory Neuro-Ethology Lab, CRNL, Jean Monnet University of Saint Étienne, UMR 5293, St-Étienne 42023, France
20
Kleisner K, Leongómez JD, Pisanski K, Fiala V, Cornec C, Groyecka-Bernard A, Butovskaya M, Reby D, Sorokowski P, Akoko RM. Predicting strength from aggressive vocalizations versus speech in African bushland and urban communities. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200403. [PMID: 34719250 PMCID: PMC8558769 DOI: 10.1098/rstb.2020.0403] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/23/2021] [Indexed: 02/03/2023] Open
Abstract
The human voice carries information about a vocalizer's physical strength that listeners can perceive and that may influence mate choice and intrasexual competition. Yet, reliable acoustic correlates of strength in human speech remain unclear. Compared to speech, aggressive nonverbal vocalizations (roars) may function to maximize perceived strength, suggesting that their acoustic structure has been selected to communicate formidability, similar to the vocal threat displays of other animals. Here, we test this prediction in two non-WEIRD African samples: an urban community of Cameroonians and rural nomadic Hadza hunter-gatherers in the Tanzanian bushlands. Participants produced standardized speech and volitional roars and provided handgrip strength measures. Using acoustic analysis and information-theoretic multi-model inference and averaging techniques, we show that strength can be measured from both speech and roars, and as predicted, strength is more reliably gauged from roars than vowels, words or greetings. The acoustic structure of roars explains 40-70% of the variance in actual strength within adults of either sex. However, strength is predicted by multiple acoustic parameters whose combinations vary by sex, sample and vocal type. Thus, while roars may maximally signal strength, more research is needed to uncover consistent and likely interacting acoustic correlates of strength in the human voice. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.
Affiliation(s)
- Karel Kleisner
- Department of Philosophy and History of Science, Charles University, Prague, 12800, Czech Republic
- Juan David Leongómez
- Human Behaviour Lab (LACH), Faculty of Psychology, Universidad El Bosque, Bogota, DC, 110121, Colombia
- Katarzyna Pisanski
- Equipe de Neuro-Ethologie Sensorielle, Centre de Recherche en Neurosciences de Lyon, Jean Monnet University of Saint-Etienne, 42100, France
- CNRS | Centre National de la Recherche Scientifique, Laboratoire Dynamique du Langage, Université Lyon 2, Lyon, 69363, France
- Institute of Psychology, University of Wroclaw, 50–527, Poland
- Vojtěch Fiala
- Department of Philosophy and History of Science, Charles University, Prague, 12800, Czech Republic
- Clément Cornec
- Equipe de Neuro-Ethologie Sensorielle, Centre de Recherche en Neurosciences de Lyon, Jean Monnet University of Saint-Etienne, 42100, France
- Marina Butovskaya
- Institute of Ethnology and Anthropology, Russian Academy of Science, Russia
- Russian State University for the Humanities, Moscow, 125047, Russia
- David Reby
- Equipe de Neuro-Ethologie Sensorielle, Centre de Recherche en Neurosciences de Lyon, Jean Monnet University of Saint-Etienne, 42100, France
- Robert Mbe Akoko
- Department of Communication and Development Studies, University of Bamenda, PO Box 39, Bambili, Bamenda, Cameroon
21
Bedoya D, Arias P, Rachman L, Liuni M, Canonne C, Goupil L, Aucouturier JJ. Even violins can cry: specifically vocal emotional behaviours also drive the perception of emotions in non-vocal music. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200396. [PMID: 34719254 PMCID: PMC8558776 DOI: 10.1098/rstb.2020.0396] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
A wealth of theoretical and empirical arguments have suggested that music triggers emotional responses by resembling the inflections of expressive vocalizations, but have done so using low-level acoustic parameters (pitch, loudness, speed) that, in fact, may not be processed by the listener in reference to human voice. Here, we take the opportunity of the recent availability of computational models that allow the simulation of three specifically vocal emotional behaviours: smiling, vocal tremor and vocal roughness. When applied to musical material, we find that these three acoustic manipulations trigger emotional perceptions that are remarkably similar to those observed on speech and scream sounds, and identical across musician and non-musician listeners. Strikingly, this not only applied to singing voice with and without musical background, but also to purely instrumental material. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part I)’.
Affiliation(s)
- D Bedoya
- Science and Technology of Music and Sound, IRCAM/CNRS/Sorbonne Université, Paris, France
- P Arias
- Science and Technology of Music and Sound, IRCAM/CNRS/Sorbonne Université, Paris, France
- Department of Cognitive Science, Lund University, Lund, Sweden
- L Rachman
- Faculty of Medical Sciences, University of Groningen, Groningen, The Netherlands
- M Liuni
- Alta Voce SAS, Houilles, France
- C Canonne
- Science and Technology of Music and Sound, IRCAM/CNRS/Sorbonne Université, Paris, France
- L Goupil
- BabyDevLab, University of East London, London, UK
- J-J Aucouturier
- FEMTO-ST Institute, Université de Bourgogne Franche-Comté/CNRS, Besançon, France
22
Marx A, Lenkei R, Pérez Fraga P, Wallis L, Kubinyi E, Faragó T. Age-dependent changes in dogs’ (Canis familiaris) separation-related behaviours in a longitudinal study. Appl Anim Behav Sci 2021. [DOI: 10.1016/j.applanim.2021.105422] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
23
Anikin A, Pisanski K, Massenet M, Reby D. Harsh is large: nonlinear vocal phenomena lower voice pitch and exaggerate body size. Proc Biol Sci 2021; 288:20210872. [PMID: 34229494 PMCID: PMC8261225 DOI: 10.1098/rspb.2021.0872] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
A lion's roar, a dog's bark, an angry yell in a pub brawl: what do these vocalizations have in common? They all sound harsh due to nonlinear vocal phenomena (NLP)—deviations from regular voice production, hypothesized to lower perceived voice pitch and thereby exaggerate the apparent body size of the vocalizer. To test this yet uncorroborated hypothesis, we synthesized human nonverbal vocalizations, such as roars, groans and screams, with and without NLP (amplitude modulation, subharmonics and chaos). We then measured their effects on nearly 700 listeners' perceptions of three psychoacoustic (pitch, timbre, roughness) and three ecological (body size, formidability, aggression) characteristics. In an explicit rating task, all NLP lowered perceived voice pitch, increased voice darkness and roughness, and caused vocalizers to sound larger, more formidable and more aggressive. Key results were replicated in an implicit associations test, suggesting that the ‘harsh is large’ bias will arise in ecologically relevant confrontational contexts that involve a rapid, and largely implicit, evaluation of the opponent's size. In sum, nonlinearities in human vocalizations can flexibly communicate both formidability and intention to attack, suggesting they are not a mere byproduct of loud vocalizing, but rather an informative acoustic signal well suited for intimidating potential opponents.
Affiliation(s)
- Andrey Anikin
- Division of Cognitive Science, Lund University, 22100 Lund, Sweden
- Equipe de Neuro-Ethologie Sensorielle, CNRS and University of Saint Étienne, UMR 5293, 42023 St-Étienne, France
- Katarzyna Pisanski
- Equipe de Neuro-Ethologie Sensorielle, CNRS and University of Saint Étienne, UMR 5293, 42023 St-Étienne, France
- CNRS, French National Centre for Scientific Research, Laboratoire de Dynamique du Langage, University of Lyon 2, 69007 Lyon, France
- Mathilde Massenet
- Equipe de Neuro-Ethologie Sensorielle, CNRS and University of Saint Étienne, UMR 5293, 42023 St-Étienne, France
- David Reby
- Equipe de Neuro-Ethologie Sensorielle, CNRS and University of Saint Étienne, UMR 5293, 42023 St-Étienne, France
24
Marx A, Lenkei R, Pérez Fraga P, Bakos V, Kubinyi E, Faragó T. Occurrences of non-linear phenomena and vocal harshness in dog whines as indicators of stress and ageing. Sci Rep 2021; 11:4468. [PMID: 33627739 PMCID: PMC7904949 DOI: 10.1038/s41598-021-83614-1] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Accepted: 02/01/2021] [Indexed: 11/30/2022] Open
Abstract
During social interactions, acoustic parameters of tetrapods' vocalisations reflect the emotional state of the caller. Higher levels of spectral noise and the occurrence of irregularities (non-linear phenomena, NLP) might be negative arousal indicators in alarm calls, although less is known about other distress vocalisations. Family dogs experience different levels of stress during separation from their owner and may vocalise extensively. Analysing their whines can provide evidence for the relationship between arousal and NLP. We recorded 167 family dogs' separation behaviour including vocalisations, assessed their stress level based on behaviour and tested how these, their individual features, and owner-reported separation-related problems (SRP) relate to the spectral noise and NLP of their whines (N = 4086). Dogs with SRP were more likely to produce NLP whines. More active dogs and dogs that tried to escape produced noisier whines. Older dogs' whines were more harmonic than younger dogs', but they also showed a higher NLP ratio. Our results show that vocal harshness and NLP are associated with arousal in contact calls, and thus might function as stress indicators. The higher occurrence of NLP in older dogs irrespective of separation stress suggests a loss of precise neural control of the larynx, and hence may be a potential ageing indicator.
Affiliation(s)
- András Marx
- Department of Ethology, Eötvös Loránd University, Budapest, Hungary
- Rita Lenkei
- Department of Ethology, Eötvös Loránd University, Budapest, Hungary
- Paula Pérez Fraga
- Department of Ethology, Eötvös Loránd University, Budapest, Hungary
- MTA-ELTE 'Lendület' Neuroethology of Communication Research Group, Eötvös Loránd Research Network, Budapest, Hungary
- Viktória Bakos
- Department of Ethology, Eötvös Loránd University, Budapest, Hungary
- Enikő Kubinyi
- Department of Ethology, Eötvös Loránd University, Budapest, Hungary
- Tamás Faragó
- Department of Ethology, Eötvös Loránd University, Budapest, Hungary.
| |