1.
Corvin S, Massenet M, Hardy A, Patural H, Peyron R, Fauchon C, Mathevon N. Nonlinear acoustic phenomena affect the perception of pain in human baby cries. Philos Trans R Soc Lond B Biol Sci 2025; 380:20240023. PMID: 40176515; PMCID: PMC11966150; DOI: 10.1098/rstb.2024.0023.
Abstract
What makes the painful cries of human babies so difficult to ignore? Vocal traits known as 'nonlinear phenomena' are prime candidates. These acoustic irregularities are common in babies' cries and are typically associated with high levels of distress or pain. Despite the vital importance of cries for a baby's survival, how these nonlinear phenomena drive pain perception in adult listeners has not previously been systematically investigated. Here, by combining acoustic analyses of cries recorded in different contexts with playback experiments using natural and synthetic cries, we show that baby cries expressing acute pain are characterized by a pronounced presence of different nonlinear phenomena, and that these nonlinear phenomena drive pain evaluation by adult listeners. While adult listeners rated all cries presenting any of these nonlinear phenomena as expressing more pain, they were particularly sensitive to the presence of chaos. Our results thus show that nonlinear phenomena, especially chaos, encode pain information in baby cries and may be critically helpful for the development of vocal-based tools for monitoring babies' needs in the context of paediatric care. This article is part of the theme issue 'Nonlinear phenomena in vertebrate vocalizations: mechanisms and communicative functions'.
Affiliation(s)
- Siloé Corvin
- ENES Bioacoustics Research Lab, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne 42100, France
- NEUROPAIN Team, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne 42270, France
- Mathilde Massenet
- ENES Bioacoustics Research Lab, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne 42100, France
- Angélique Hardy
- NEUROPAIN Team, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne 42270, France
- Neuro-Dol, Inserm, University of Clermont Auvergne, University Hospital of Clermont-Ferrand, Clermont-Ferrand 63100, France
- Hugues Patural
- Neonatal and Pediatric Intensive Care Unit, SAINBIOSE Laboratory, Inserm, University Hospital of Saint-Etienne, University of Saint-Etienne, Saint-Etienne 42270, France
- Roland Peyron
- NEUROPAIN Team, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne 42270, France
- Camille Fauchon
- NEUROPAIN Team, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne 42270, France
- Neuro-Dol, Inserm, University of Clermont Auvergne, University Hospital of Clermont-Ferrand, Clermont-Ferrand 63100, France
- Nicolas Mathevon
- ENES Bioacoustics Research Lab, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne 42100, France
- Institut Universitaire de France, Paris 75005, France
- Ecole Pratique des Hautes Etudes, CHArt lab, EPHE - PSL University, Paris, France
2.
Corvin S, Fauchon C, Patural H, Peyron R, Reby D, Theunissen F, Mathevon N. Pain cues override identity cues in baby cries. iScience 2024; 27:110375. PMID: 39055954; PMCID: PMC11269312; DOI: 10.1016/j.isci.2024.110375.
Abstract
Baby cries can convey both static information related to individual identity and dynamic information related to the baby's emotional and physiological state. How do these dimensions interact? Are they transmitted independently, or do they compete against one another? Here we show that the universal acoustic expression of pain in distress cries overrides individual differences, at the expense of identity signaling. Our acoustic analyses show that pain cries, compared with discomfort cries, are characterized by a more unstable source, thus interfering with the production of identity cues. Machine learning analyses and psychoacoustic experiments reveal that while the baby's identity remains encoded in pain cries, the identity signal is considerably weaker than in discomfort cries. Our results are consistent with the prediction that the costs of failing to signal distress outweigh the cost of weakening cues to identity.
Affiliation(s)
- Siloé Corvin
- ENES Bioacoustics Research Lab, CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France
- Université Jean-Monnet-Saint-Etienne, INSERM, CNRS, UCBL, CRNL U1028, NeuroPain team, 42023 Saint-Etienne, France
- Camille Fauchon
- Université Jean-Monnet-Saint-Etienne, INSERM, CNRS, UCBL, CRNL U1028, NeuroPain team, 42023 Saint-Etienne, France
- Université Clermont Auvergne, CHU de Clermont-Ferrand, Inserm, Neuro-Dol, Clermont-Ferrand, France
- Hugues Patural
- Neonatal and Pediatric Intensive Care Unit, SAINBIOSE laboratory, Inserm, University Hospital of Saint-Etienne, University of Saint-Etienne, Saint-Etienne, France
- Roland Peyron
- Université Jean-Monnet-Saint-Etienne, INSERM, CNRS, UCBL, CRNL U1028, NeuroPain team, 42023 Saint-Etienne, France
- David Reby
- ENES Bioacoustics Research Lab, CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France
- Institut universitaire de France, Paris, France
- Frédéric Theunissen
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Department of Integrative Biology, University of California, Berkeley, Berkeley, CA 94720, USA
- Nicolas Mathevon
- ENES Bioacoustics Research Lab, CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France
- Institut universitaire de France, Paris, France
- Ecole Pratique des Hautes Etudes, CHArt lab, PSL University, Paris, France
3.
Lockhart-Bouron M, Anikin A, Pisanski K, Corvin S, Cornec C, Papet L, Levréro F, Fauchon C, Patural H, Reby D, Mathevon N. Infant cries convey both stable and dynamic information about age and identity. Communications Psychology 2023; 1:26. PMID: 39242685; PMCID: PMC11332224; DOI: 10.1038/s44271-023-00022-z.
Abstract
What information is encoded in the cries of human babies? While it is widely recognized that cries can encode distress levels, whether cries reliably encode the cause of crying remains disputed. Here, we collected 39201 cries from 24 babies recorded in their homes longitudinally, from 15 days to 3.5 months of age, a database we share publicly for reuse. Based on the parental action that stopped the crying, which matched the parental evaluation of cry cause in 75% of cases, each cry was classified as caused by discomfort, hunger, or isolation. Our analyses show that baby cries provide reliable information about age and identity. Baby voices become more tonal and less shrill with age, while individual acoustic signatures drift throughout the first months of life. In contrast, neither machine learning algorithms nor trained adult listeners can reliably recognize the causes of crying.
Affiliation(s)
- Marguerite Lockhart-Bouron
- Neonatal and Pediatric Intensive Care Unit, SAINBIOSE laboratory, Inserm, University Hospital of Saint-Etienne, University of Saint-Etienne, Saint-Etienne, France
- Andrey Anikin
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France
- Division of Cognitive Science, Lund University, Lund, Sweden
- Katarzyna Pisanski
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France
- Laboratoire Dynamique du Langage DDL, CNRS, University of Lyon 2, Lyon, France
- Siloé Corvin
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France
- Central Integration of Pain-Neuropain Laboratory, CRNL, CNRS, Inserm, UCB Lyon 1, University of Saint-Etienne, Saint-Etienne, France
- Clément Cornec
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France
- Léo Papet
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France
- Florence Levréro
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France
- Institut universitaire de France, Paris, France
- Camille Fauchon
- Central Integration of Pain-Neuropain Laboratory, CRNL, CNRS, Inserm, UCB Lyon 1, University of Saint-Etienne, Saint-Etienne, France
- Hugues Patural
- Neonatal and Pediatric Intensive Care Unit, SAINBIOSE laboratory, Inserm, University Hospital of Saint-Etienne, University of Saint-Etienne, Saint-Etienne, France
- David Reby
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France
- Institut universitaire de France, Paris, France
- Nicolas Mathevon
- ENES Bioacoustics Research Laboratory, CRNL, CNRS, Inserm, University of Saint-Etienne, Saint-Etienne, France
- Institut universitaire de France, Paris, France
- Ecole Pratique des Hautes Etudes, CHArt Lab, PSL Research University, Paris, France
4.
Groyecka-Bernard A, Pisanski K, Frąckowiak T, Kobylarek A, Kupczyk P, Oleszkiewicz A, Sabiniewicz A, Wróbel M, Sorokowski P. Do voice-based judgments of socially relevant speaker traits differ across speech types? Journal of Speech, Language, and Hearing Research 2022; 65:3674-3694. PMID: 36167068; DOI: 10.1044/2022_jslhr-21-00690.
Abstract
PURPOSE: The human voice is a powerful and evolved social tool, with hundreds of studies showing that nonverbal vocal parameters robustly influence listeners' perceptions of socially meaningful speaker traits, ranging from perceived gender and age to attractiveness and trustworthiness. However, these studies have utilized a wide variety of voice stimuli to measure listeners' voice-based judgments of these traits. Here, in the largest scale study known to date, we test whether listeners judge the same unseen speakers differently depending on the complexity of the neutral speech stimulus, from single vowel sounds to a full paragraph. METHOD: In a playback experiment testing 2,618 listeners, we examine whether commonly studied voice-based judgments of attractiveness, trustworthiness, dominance, likability, femininity/masculinity, and health differ if listeners hear isolated vowels, a series of vowels, single words, single sentences (greeting), counting from 1 to 10, or a full paragraph recited aloud (Rainbow Passage), recorded from the same 208 men and women. Data were collected using a custom-designed interface in which vocalizers and traits were randomly assigned to raters. RESULTS: Linear mixed models show that the type of voice stimulus does indeed consistently affect listeners' judgments. Overall, ratings of attractiveness, trustworthiness, dominance, likability, health, masculinity among men, and femininity among women increase as speech duration increases. At the same time, speaker-level regression analyses show that interindividual differences in perceived speaker traits are largely preserved across voice stimuli, especially among those of a similar duration. CONCLUSIONS: Socially relevant perceptions of speakers are not wholly changed but rather moderated by the length of their speech. Indeed, the same vocalizer is perceived in a similar way regardless of which neutral statements they speak, with the caveat that longer utterances explain the most shared variance in listeners' judgments and elicit the highest ratings on all traits, possibly by providing additional nonverbal information to listeners. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.21158890.
Affiliation(s)
- Katarzyna Pisanski
- Institute of Psychology, University of Wrocław, Poland
- ENES Bioacoustics Research Laboratory, University of Saint-Etienne, France
- CNRS Centre National de la Recherche Scientifique, Laboratoire Dynamique du Langage, Université Lyon 2, France
- Piotr Kupczyk
- Institute of Psychology, University of Wrocław, Poland
- Anna Oleszkiewicz
- Institute of Psychology, University of Wrocław, Poland
- Smell and Taste Clinic, Department of Otolaryngology, Technische Universität Dresden, Germany
- Agnieszka Sabiniewicz
- Institute of Psychology, University of Wrocław, Poland
- Smell and Taste Clinic, Department of Otolaryngology, Technische Universität Dresden, Germany
- Monika Wróbel
- Institute of Psychology, University of Wrocław, Poland
5.
Pisanski K, Groyecka-Bernard A, Sorokowski P. Human voice pitch measures are robust across a variety of speech recordings: methodological and theoretical implications. Biol Lett 2021; 17:20210356. PMID: 34582736; DOI: 10.1098/rsbl.2021.0356.
Abstract
Fundamental frequency (fo), perceived as voice pitch, is the most sexually dimorphic, perceptually salient and intensively studied voice parameter in human nonverbal communication. Thousands of studies have linked human fo to biological and social speaker traits and life outcomes, from reproductive to economic. Critically, researchers have used myriad speech stimuli to measure fo and infer its functional relevance, from individual vowels to longer bouts of spontaneous speech. Here, we acoustically analysed fo in nearly 1000 affectively neutral speech utterances (vowels, words, counting, greetings, read paragraphs and free spontaneous speech) produced by the same 154 men and women, aged 18-67, with two aims: first, to test the methodological validity of comparing fo measures from diverse speech stimuli, and second, to test the prediction that the vast inter-individual differences in habitual fo found between same-sex adults are preserved across speech types. Indeed, despite differences in linguistic content, duration, scripted or spontaneous production and within-individual variability, we show that 42-81% of inter-individual differences in fo can be explained between any two speech types. Beyond methodological implications, together with recent evidence that inter-individual differences in fo are remarkably stable across the lifespan and generalize to emotional speech and nonverbal vocalizations, our results further substantiate voice pitch as a robust and reliable biomarker in human communication.
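The per-utterance measurement at the heart of this abstract, fundamental frequency (fo), can be illustrated with a minimal sketch. This is not the authors' analysis pipeline; it is a toy autocorrelation-based fo estimator run on a synthetic vowel-like tone, with the function name, search range, and test signal invented for illustration.

```python
# Toy fo (fundamental frequency) estimator via autocorrelation.
# Not the authors' method -- a hedged sketch of the kind of per-utterance
# pitch measure that can then be compared across speech types.
import numpy as np

def estimate_f0(signal, sr, fmin=75.0, fmax=500.0):
    """Return an fo estimate in Hz for a short voiced segment."""
    sig = signal - signal.mean()
    # Autocorrelation at non-negative lags.
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    # Restrict the peak search to lags inside the plausible pitch range.
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi + 1])
    return sr / lag

sr = 16000
t = np.arange(int(0.2 * sr)) / sr
# Vowel-like test tone: 220 Hz fundamental plus two weaker harmonics.
tone = (np.sin(2 * np.pi * 220 * t)
        + 0.5 * np.sin(2 * np.pi * 440 * t)
        + 0.25 * np.sin(2 * np.pi * 660 * t))
print(estimate_f0(tone, sr))  # prints an estimate near 220 Hz
```

The estimate is quantized to integer lags (here roughly 219-222 Hz for a 220 Hz tone); production pitch trackers refine the peak by interpolation and add voicing decisions, which this sketch omits.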
Affiliation(s)
- Katarzyna Pisanski
- University of Wroclaw, Wroclaw, Poland
- CNRS/Centre National de la Recherche Scientifique, Laboratoire Dynamique du Langage, Université Lyon 2, Lyon, France
- Equipe de Neuro-Ethologie Sensorielle, Centre de Recherche en Neurosciences de Lyon, Jean Monnet University of Saint-Etienne, France
- Agata Groyecka-Bernard
- University of Wroclaw, Wroclaw, Poland
- Johannes Gutenberg-Universität Mainz, Mainz, Germany
6.
Pisanski K, Raine J, Reby D. Individual differences in human voice pitch are preserved from speech to screams, roars and pain cries. Royal Society Open Science 2020; 7:191642. PMID: 32257325; PMCID: PMC7062086; DOI: 10.1098/rsos.191642.
Abstract
Fundamental frequency (F0, perceived as voice pitch) predicts sex and age, hormonal status, mating success and a range of social traits, and thus functions as an important biosocial marker in modal speech. Yet, the role of F0 in human nonverbal vocalizations remains unclear, and given considerable variability in F0 across call types, it is not known whether F0 cues to vocalizer attributes are shared across speech and nonverbal vocalizations. Here, using a corpus of vocal sounds from 51 men and women, we examined whether individual differences in F0 are retained across neutral speech, valenced speech and nonverbal vocalizations (screams, roars and pain cries). Acoustic analyses revealed substantial variability in F0 across vocal types, with mean F0 increasing as much as 10-fold in screams compared to speech in the same individual. Despite these extreme pitch differences, sexual dimorphism was preserved within call types and, critically, inter-individual differences in F0 correlated across vocal types (r = 0.36-0.80) with stronger relationships between vocal types of the same valence (e.g. 38% of the variance in roar F0 was predicted by aggressive speech F0). Our results indicate that biologically and socially relevant indexical cues in the human voice are preserved in simulated valenced speech and vocalizations, including vocalizations characterized by extreme F0 modulation, suggesting that voice pitch may function as a reliable individual and biosocial marker across disparate communication contexts.
Affiliation(s)
- Katarzyna Pisanski
- Equipe de Neuro-Ethologie Sensorielle ENES/CRNL, University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, Saint-Etienne, France
- Jordan Raine
- Mammal Vocal Communication and Cognition Research Group, School of Psychology, University of Sussex, Brighton, UK
- David Reby
- Equipe de Neuro-Ethologie Sensorielle ENES/CRNL, University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, Saint-Etienne, France
- Mammal Vocal Communication and Cognition Research Group, School of Psychology, University of Sussex, Brighton, UK
7.
Cartei V, Banerjee R, Garnham A, Oakhill J, Roberts L, Anns S, Bond R, Reby D. Physiological and perceptual correlates of masculinity in children's voices. Horm Behav 2020; 117:104616. PMID: 31644889; DOI: 10.1016/j.yhbeh.2019.104616.
Abstract
Low frequency components (i.e. a low pitch (F0) and low formant spacing (ΔF)) signal high salivary testosterone and height in adult male voices and are associated with high masculinity attributions by unfamiliar listeners (both men and women). However, the relation between the physiological, acoustic and perceptual dimensions of speakers' masculinity prior to puberty remains unknown. In this study, 110 pre-pubertal children (58 girls), aged 3 to 10, were recorded as they described a cartoon picture. A total of 315 adults (182 women) rated the children's masculinity from the voice alone after listening to the speakers' audio recordings. On the basis of their voices alone, boys who had higher salivary testosterone levels were rated as more masculine, and the relation between testosterone and perceived masculinity was partially mediated by F0. The voices of taller boys were also rated as more masculine, but the relation between height and perceived masculinity was not mediated by the considered acoustic parameters, indicating that acoustic cues other than F0 and ΔF may signal stature. Both boys and girls who had lower F0 were also rated as more masculine, while ΔF did not affect ratings. These findings highlight the interdependence of physiological, acoustic and perceptual dimensions, and suggest that inter-individual variation in male voices, particularly F0, may advertise hormonal masculinity from a very early age.
Affiliation(s)
- Robin Banerjee
- School of Psychology, University of Sussex, Brighton, UK
- Alan Garnham
- School of Psychology, University of Sussex, Brighton, UK
- Jane Oakhill
- Equipe Neuro-Ethologie Sensorielle, ENES/CRNL, CNRS UMR5292, INSERM UMR_S 1028, University of Lyon, Saint-Etienne, France
- Lucy Roberts
- School of Psychology, University of Sussex, Brighton, UK
- Sophie Anns
- School of Psychology, University of Sussex, Brighton, UK
- Rod Bond
- School of Psychology, University of Sussex, Brighton, UK
- David Reby
- School of Psychology, University of Sussex, Brighton, UK; Equipe Neuro-Ethologie Sensorielle, ENES/CRNL, CNRS UMR5292, INSERM UMR_S 1028, University of Lyon, Saint-Etienne, France
8.
The Jena Speaker Set (JESS): a database of voice stimuli from unfamiliar young and old adult speakers. Behav Res Methods 2019; 52:990-1007. PMID: 31637667; DOI: 10.3758/s13428-019-01296-0.
Abstract
Here we describe the Jena Speaker Set (JESS), a free database for unfamiliar adult voice stimuli, comprising voices from 61 young (18-25 years) and 59 old (60-81 years) female and male speakers uttering various sentences, syllables, read text, semi-spontaneous speech, and vowels. Listeners rated two voice samples (short sentences) per speaker for attractiveness, likeability, two measures of distinctiveness ("deviation"-based [DEV] and "voice in the crowd"-based [VITC]), regional accent, and age. Interrater reliability was high, with Cronbach's α between .82 and .99. Young voices were generally rated as more attractive than old voices, but particularly so when male listeners judged female voices. Moreover, young female voices were rated as more likeable than both young male and old female voices. Young voices were judged to be less distinctive than old voices according to the DEV measure, with no differences in the VITC measure. In age ratings, listeners almost perfectly discriminated young from old voices; additionally, young female voices were perceived as being younger than young male voices. Correlations between the rating dimensions above demonstrated (among other things) that DEV-based distinctiveness was strongly negatively correlated with rated attractiveness and likeability. By contrast, VITC-based distinctiveness was uncorrelated with rated attractiveness and likeability in young voices, although a moderate negative correlation was observed for old voices. Overall, the present results demonstrate systematic effects of vocal age and gender on impressions based on the voice and inform as to the selection of suitable voice stimuli for further research into voice perception, learning, and memory.
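The interrater reliability statistic reported above (Cronbach's α between .82 and .99) has a simple closed form: α = k/(k−1) · (1 − Σ item variances / variance of totals). As a hedged illustration only, not the JESS analysis code, it can be computed from a voices × raters matrix, with the data below invented for the example:

```python
# Toy Cronbach's alpha for interrater consistency: each rater is treated as
# an "item", each rated voice as an observation. Illustrative sketch only.
import numpy as np

def cronbach_alpha(ratings):
    """ratings: 2-D array, rows = rated voices, columns = raters."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                      # number of raters
    item_vars = ratings.var(axis=0, ddof=1)   # per-rater variance
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Three raters in near-perfect agreement over five voices.
ratings = np.array([[1, 1, 2],
                    [2, 2, 2],
                    [3, 3, 4],
                    [4, 4, 4],
                    [5, 5, 5]])
print(round(cronbach_alpha(ratings), 2))  # 0.98
```

Values close to 1 indicate that raters order the voices almost identically, which is what the α range of .82-.99 reported for the JESS rating dimensions reflects.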