1
Daunay V, Reby D, Bryant GA, Pisanski K. Production and perception of volitional laughter across social contexts. J Acoust Soc Am 2025; 157:2774-2789. PMID: 40227885. DOI: 10.1121/10.0036388.
Abstract
Human nonverbal vocalizations such as laughter communicate emotion, motivation, and intent during social interactions. While differences between spontaneous and volitional laughs have been described, little is known about the communicative functions of volitional (voluntary) laughter, a complex signal used across diverse social contexts. Here, we examined whether the acoustic structure of volitional laughter encodes social contextual information recognizable by humans and computers. We asked men and women to produce volitional laughs in eight distinct social contexts ranging in valence from positive (e.g., watching a comedy) to negative (e.g., embarrassment). Human listeners and machine classification algorithms accurately identified most laughter contexts above chance. However, confusion often arose within valence categories and could be largely explained by shared acoustics. Some acoustic features varied across social contexts, including fundamental frequency (perceived as voice pitch) and energy parameters (entropy variance, loudness, spectral centroid, and cepstral peak prominence), and these features also predicted listeners' recognition of laughter contexts; nevertheless, laughs evoked in different social contexts often overlapped in acoustic and perceptual space. Thus, we show that volitional laughter can convey some reliable information about social context, but much of this information is tied to valence, suggesting that volitional laughter is a graded rather than discrete vocal signal.
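The machine-classification step described in this abstract can be illustrated with a minimal, hypothetical sketch: given a table of per-laugh acoustic features (the study names fundamental frequency, entropy variance, loudness, spectral centroid, and cepstral peak prominence), a standard classifier is cross-validated against the eight context labels and compared with chance (1/8). The file name, column names, and choice of classifier below are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch: cross-validated classification of laughter context from
# per-laugh acoustic features. File and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

laughs = pd.read_csv("laugh_features.csv")             # hypothetical feature table
feature_cols = ["f0_mean", "entropy_var", "loudness",
                "spectral_centroid", "cpp"]            # assumed column names
X, y = laughs[feature_cols], laughs["context"]         # eight social contexts

clf = RandomForestClassifier(n_estimators=300, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"Mean cross-validated accuracy: {scores.mean():.3f} (chance = {1/8:.3f})")
```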
Affiliation(s)
- Virgile Daunay
- ENES Bioacoustics Research Laboratory, CRNL Center for Research in Neuroscience in Lyon, University of Saint-Étienne, 42023 Saint-Étienne, France
- DDL Dynamics of Language Lab, CNRS French National Centre for Scientific Research, University of Lyon 2, 69363 Lyon, France
- David Reby
- ENES Bioacoustics Research Laboratory, CRNL Center for Research in Neuroscience in Lyon, University of Saint-Étienne, 42023 Saint-Étienne, France
- Gregory A Bryant
- Department of Communication, Center for Behavior, Evolution, and Culture, University of California, Los Angeles, California 90095, USA
- Katarzyna Pisanski
- ENES Bioacoustics Research Laboratory, CRNL Center for Research in Neuroscience in Lyon, University of Saint-Étienne, 42023 Saint-Étienne, France
- DDL Dynamics of Language Lab, CNRS French National Centre for Scientific Research, University of Lyon 2, 69363 Lyon, France
2
Biotti F, Sidnick L, Hatton AL, Abdlkarim D, Wing A, Treasure J, Happé F, Brewer R. Development and validation of the Interoceptive States Vocalisations (ISV) and Interoceptive States Point Light Displays (ISPLD) databases. Behav Res Methods 2025; 57:133. PMID: 40164853. PMCID: PMC11958399. DOI: 10.3758/s13428-024-02514-0.
Abstract
The ability to perceive others' emotions and one's own interoceptive states has been the subject of extensive research. Very little work, however, has investigated the ability to recognise others' interoceptive states, such as whether an individual is feeling breathless, nauseated, or fatigued. This is likely owing to the dearth of stimuli available for use in research studies, despite the clear relevance of this ability to social interaction and effective caregiving. This paper describes the development and validation of two stimulus sets for use in research into the perception of others' interoceptive states. The Interoceptive States Vocalisations (ISV) database and the Interoceptive States Point Light Displays (ISPLD) database include 191 vocalisation and 159 point light display stimuli. Both stimulus sets underwent two phases of validation, and all stimuli were scored in terms of their quality and recognisability, using five different measures. The ISV also includes control stimuli featuring non-interoceptive vocalisations. Some interoceptive states were consistently recognised better than others, but variability was observed within, as well as between, stimulus categories. Stimuli are freely available for use in research, and are presented alongside all stimulus quality scores, in order for researchers to select the most appropriate stimuli based on individual research questions.
Affiliation(s)
- Lily Sidnick
- Royal Holloway, University of London, Egham Hill, Egham, TW20 0EX, UK
- Alan Wing
- University of Birmingham, Birmingham, UK
- Rebecca Brewer
- Royal Holloway, University of London, Egham Hill, Egham, TW20 0EX, UK.
3
Konopkina K, Hirvaskoski H, Hietanen JK, Saarimäki H. Multicomponent approach reveals differences in affective responses among children and adolescents. Sci Rep 2025; 15:10179. PMID: 40128269. PMCID: PMC11933308. DOI: 10.1038/s41598-025-94309-2.
Abstract
Investigating age-related shifts in affective responses to emotionally salient stimuli is key to comprehending emotional development during childhood and adolescence. Most of the research regarding emotional experiences has focused on adults, while the understanding of the development of emotional experiences across childhood remains elusive. To address this gap, we explored whether physiological and behavioural responses as well as self-reported emotions elicited in children and adolescents by naturalistic stimuli differ from those in adults. We developed a set of emotional videos to elicit different emotions - fear, joy, anger, sadness, amusement, and tenderness - and measured emotional intensity ratings, electrocardiography, and eye movements from 8-15-year-old children and adults during the viewing of the videos. We identified age-related changes in all measured responses. Emotional intensity and behavioural responses varied across emotion categories. Furthermore, specific emotions showed different maturation patterns. The study highlights the importance of a multicomponent approach to accurately discern and understand emotional states.
Affiliation(s)
- Kseniia Konopkina
- Human Information Processing Laboratory, Faculty of Social Sciences, Tampere University, Tampere, FI-33014, Finland
- Department of Psychology, University of Otago, Dunedin, 9016, New Zealand
- Hilla Hirvaskoski
- Human Information Processing Laboratory, Faculty of Social Sciences, Tampere University, Tampere, FI-33014, Finland
- Jari K Hietanen
- Human Information Processing Laboratory, Faculty of Social Sciences, Tampere University, Tampere, FI-33014, Finland
- Heini Saarimäki
- Human Information Processing Laboratory, Faculty of Social Sciences, Tampere University, Tampere, FI-33014, Finland.
4
Lundell-Creagh R, Monroy M, Ocampo J, Keltner D. Blocking lower facial features reduces emotion identification accuracy in static faces and full body dynamic expressions. Cogn Emot 2025:1-12. PMID: 40094937. DOI: 10.1080/02699931.2025.2477745.
Abstract
During the COVID-19 pandemic, much of the world wore masks covering the lower face to prevent the spread of disease. These masks obscure lower facial features, but how vital are those features to the recognition of facial expressions of emotion? Going beyond the Ekman six emotions, in Study 1 (N = 372) we used a multilevel logistic regression to examine how artificially rendered masks influence emotion recognition from static photos of facial muscle configurations for many commonly experienced positive and negative emotions. On average, masks reduced emotion recognition accuracy by 17% for negative emotions and 23% for positive emotions. In Study 2 (N = 338), we asked whether these results generalised to multimodal full-body expressions of emotions accompanied by vocal expressions. Participants viewed videos from a previously validated set in which the lower facial features were blurred from the nose down. Here the decreases in emotion recognition were noticeably less pronounced, highlighting the power of multimodal information, but we still observed meaningful decreases for certain specific emotions and for positive emotions overall. Results are discussed in the context of the social and emotional consequences of compromised emotion recognition, as well as the unique facial features that accompany certain emotions.
Affiliation(s)
- Ryan Lundell-Creagh
- Department of Psychology, Kwantlen Polytechnic University, Surrey, BC, Canada
- Department of Psychology, University of California, Berkeley, CA, USA
- Maria Monroy
- Department of Psychology, Yale University, New Haven, CT, USA
- Joseph Ocampo
- Department of Psychology, San Diego State University, San Diego, CA, USA
- Dacher Keltner
- Department of Psychology, University of California, Berkeley, CA, USA
5
Lavan N, Ahmed A, Tyrene Oteng C, Aden M, Nasciemento-Krüger L, Raffiq Z, Mareschal I. Similarities in emotion perception from faces and voices: evidence from emotion sorting tasks. Cogn Emot 2025:1-17. PMID: 40088052. DOI: 10.1080/02699931.2025.2478478.
Abstract
Emotions are expressed via many features including facial displays, vocal intonation, and touch, and perceivers can often interpret emotional displays across the different modalities with high accuracy. Here, we examine how emotion perception from faces and voices relates to one another, probing individual differences in emotion recognition abilities across visual and auditory modalities. We developed a novel emotion sorting task, in which participants were tasked with freely grouping different stimuli into perceived emotional categories, without requiring pre-defined emotion labels. Participants completed two emotion sorting tasks, one using silent videos of facial expressions, the other with audio recordings of vocal expressions. We furthermore manipulated the emotional intensity, contrasting more subtle, lower intensity vs higher intensity emotion portrayals. We find that participants' performance on the emotion sorting task was similar for face and voice stimuli. As expected, performance was lower when stimuli were of low emotional intensity. Consistent with previous reports, we find that task performance was positively correlated across the two modalities. Our findings show that emotion perception in the visual and auditory modalities may be underpinned by similar and/or shared processes, highlighting that emotion sorting tasks are powerful paradigms to investigate emotion recognition from voices, cross-modal and multimodal emotion recognition.
Affiliation(s)
- Nadine Lavan
- Department of Biological and Experimental Psychology, School of Biological and Behavioural Sciences, Centre for Brain and Behaviour, Queen Mary University of London, London, UK
- Aleena Ahmed
- Department of Biological and Experimental Psychology, School of Biological and Behavioural Sciences, Centre for Brain and Behaviour, Queen Mary University of London, London, UK
- Chantelle Tyrene Oteng
- Department of Biological and Experimental Psychology, School of Biological and Behavioural Sciences, Centre for Brain and Behaviour, Queen Mary University of London, London, UK
- Munira Aden
- Department of Biological and Experimental Psychology, School of Biological and Behavioural Sciences, Centre for Brain and Behaviour, Queen Mary University of London, London, UK
- Luisa Nasciemento-Krüger
- Department of Biological and Experimental Psychology, School of Biological and Behavioural Sciences, Centre for Brain and Behaviour, Queen Mary University of London, London, UK
- Zahra Raffiq
- Department of Biological and Experimental Psychology, School of Biological and Behavioural Sciences, Centre for Brain and Behaviour, Queen Mary University of London, London, UK
- Isabelle Mareschal
- Department of Biological and Experimental Psychology, School of Biological and Behavioural Sciences, Centre for Brain and Behaviour, Queen Mary University of London, London, UK
6
García-García L, Martí-Vilar M, Hidalgo-Fuentes S, Cabedo-Peris J. Enhancing Emotional Intelligence in Autism Spectrum Disorder Through Intervention: A Systematic Review. Eur J Investig Health Psychol Educ 2025; 15:33. PMID: 40136772. PMCID: PMC11941702. DOI: 10.3390/ejihpe15030033.
Abstract
People with autism spectrum disorder often show limitations in some of the emotional abilities that are conceptualized in the definition of emotional intelligence. The main objective of this study is to analyze the effectiveness of interventions designed to enhance emotional recognition and emotional regulation in this population. A systematic review was carried out in the PsycINFO, Web of Science, Scopus, and PubMed databases, identifying a total of 572 articles, of which 29 met the inclusion criteria. The total sample included 1061 participants, mainly children aged between 4 and 13 years. The analyzed interventions focused on improving emotional recognition, with significant results in the identification of emotions such as happiness, sadness, and anger, although some showed limitations in the duration of these effects. The most widely used programs included training in facial recognition, virtual reality, and new technologies such as robots; these produced improvements in both emotional recognition and social skills. Other interventions, such as music therapy and drama techniques, were also implemented. However, a gender bias and a lack of consistency between results from different cultures were observed. The conclusions indicate that, although the interventions reviewed appear effective, more research is needed to maximize their impact on the ASD population.
Affiliation(s)
- Laura García-García
- Basic Psychology Department, Faculty of Psychology and Speech Therapy, Universitat de València, 46010 Valencia, Spain; (L.G.-G.); (S.H.-F.)
- Manuel Martí-Vilar
- Basic Psychology Department, Faculty of Psychology and Speech Therapy, Universitat de València, 46010 Valencia, Spain; (L.G.-G.); (S.H.-F.)
- Sergio Hidalgo-Fuentes
- Basic Psychology Department, Faculty of Psychology and Speech Therapy, Universitat de València, 46010 Valencia, Spain; (L.G.-G.); (S.H.-F.)
- Javier Cabedo-Peris
- Faculty of Health Sciences, Universidad Internacional de Valencia (VIU), 46002 Valencia, Spain;
7
Angkasirisan T. Naturalistic multimodal emotion data with deep learning can advance the theoretical understanding of emotion. Psychol Res 2024; 89:36. PMID: 39708231. PMCID: PMC11663169. DOI: 10.1007/s00426-024-02068-y.
Abstract
What are emotions? Despite being a century-old question, emotion scientists have yet to agree on what emotions exactly are. Emotions are diversely conceptualised as innate responses (evolutionary view), mental constructs (constructivist view), cognitive evaluations (appraisal view), or self-organising states (dynamical systems view). This enduring fragmentation likely stems from the limitations of traditional research methods, which often adopt narrow methodological approaches. Methods from artificial intelligence (AI), particularly those leveraging big data and deep learning, offer promising approaches for overcoming these limitations. By integrating data from multimodal markers of emotion, including subjective experiences, contextual factors, brain-bodily physiological signals and expressive behaviours, deep learning algorithms can uncover and map their complex relationships within multidimensional spaces. This multimodal emotion framework has the potential to provide novel, nuanced insights into long-standing questions, such as whether emotion categories are innate or learned and whether emotions exhibit coherence or degeneracy, thereby refining emotion theories. Significant challenges remain, particularly in obtaining comprehensive naturalistic multimodal emotion data, highlighting the need for advances in synchronous measurement of naturalistic multimodal emotion.
8
Goldy SP, Hendricks PS, Keltner D, Yaden DB. Considering distinct positive emotions in psychedelic science. Int Rev Psychiatry 2024; 36:908-919. PMID: 39980212. DOI: 10.1080/09540261.2024.2394221.
Abstract
In this review, we discuss psychedelics' acute subjective and persisting therapeutic effects, outline the science of positive emotions, and highlight the value in considering distinct positive emotions in psychedelic science. Psychedelics produce a wide variety of acute subjective effects (i.e. the 'trip'), including positive emotions and affective states such as awe and joy. However, despite a rich literature on distinct emotions and their different correlates and sequelae, distinct emotions in psychedelic science remain understudied. Insofar as psychedelics' acute subjective effects may play a role in their downstream therapeutic effects (e.g. decreased depression, anxiety, and substance misuse), considering the role of distinct positive emotions in psychedelic experiences has the potential to yield more precise statements about psychedelic-related subjective processes and outcomes. We propose here that understanding the role of positive emotions within the context of psychedelic experiences could help elucidate the connection between psychedelics' acute subjective effects and therapeutic outcomes.
Affiliation(s)
- Sean P Goldy
- Center for Psychedelic and Consciousness Research, Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Peter S Hendricks
- Department of Psychiatry and Behavioral Neurobiology, University of Alabama School of Medicine, Birmingham, AL, USA
- Dacher Keltner
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
- David B Yaden
- Center for Psychedelic and Consciousness Research, Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
9
Zhang Z, Zerwas FK, Keltner D. Emotion specificity, coherence, and cultural variation in conceptualizations of positive emotions: a study of body sensations and emotion recognition. Cogn Emot 2024:1-14. PMID: 39586014. DOI: 10.1080/02699931.2024.2430400.
Abstract
The present study examines the association between people's interoceptive representation of physical sensations and the recognition of vocal and facial expressions of emotion. We used body maps to study the granularity of the interoceptive conceptualisation of 11 positive emotions (amusement, awe, compassion, contentment, desire, love, joy, interest, pride, relief, and triumph) and a new emotion recognition test (Emotion Expression Understanding Test) to assess the ability to recognise emotions from vocal and facial behaviour. Overall, we found evidence for distinct interoceptive conceptualizations of 11 positive emotions across Asian American, European American, and Latino/a American cultures, as well as the reliable identification of emotion in facial and vocal expressions. Central to new theorising about emotion-related representation, the granularity of physical sensations did not covary with emotion recognition accuracy, suggesting that two kinds of emotion conceptualisation processes might be distinct.
Affiliation(s)
- Zaiyao Zhang
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
- Felicia K Zerwas
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
- Dacher Keltner
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
10
Nestor PG, Woodhull AA. Neuropsychology of social cognition: culture, display rules, and emotional expressivity. J Clin Exp Neuropsychol 2024; 46:811-827. PMID: 39579335. DOI: 10.1080/13803395.2024.2428728.
Abstract
INTRODUCTION We investigated the roles of group ethnicity and display rules of emotions in the neuropsychology of social cognition in Asian American and White participants recruited from a majority-minority college campus. METHOD 128 participants (mean age = 24.9 years) completed: 1) the Advanced Clinical Solutions-Social Perception (ACS-SP), which includes separate measures of affect naming of facial expressions and emotional prosody interpretation of audio statements; and 2) the Display Rule Assessment Inventory (DRAI), a self-report measure of emotional expressivity across four settings (family, close friends, colleagues, and strangers) and in two distinct domains (should/actual) that asks participants what they believe people should do (social value) and what they would actually do (behavioral self-report). RESULTS The ACS-SP revealed evidence of cultural bias, as reflected by group ethnicity differences, for recognition of emotional prosody but not of emotional facial expressions in Asian American versus White participants. The DRAI showed significant cultural differences only for family relationships, with White participants endorsing stronger belief in the social value of expressing the negative emotions of sadness, aversion, and fear. These ACS-SP and DRAI group differences remained significant when covarying for spoken English language, as measured by an oral word reading test. Hierarchical regression results indicated that group ethnicity and family display rules each made specific and significant contributions to neuropsychological performance, but in very different and distinct ways. Group ethnicity exerted its greatest effect on prosody interpretation, whereas family display rules had their most pronounced influence on affect naming. CONCLUSIONS The current results may help inform and advance culturally responsive neuropsychological models of social cognition.
Affiliation(s)
- Paul G Nestor
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
- Laboratory of Neuroscience, Harvard Medical School, Brockton, MA, USA
- Ashley-Ann Woodhull
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
11
Ponsonnet M, Coupé C, Pellegrino F, Garcia Arasco A, Pisanski K. Vowel signatures in emotional interjections and nonlinguistic vocalizations expressing pain, disgust, and joy across languages. J Acoust Soc Am 2024; 156:3118-3139. PMID: 39531311. DOI: 10.1121/10.0032454.
Abstract
In this comparative cross-linguistic study we test whether expressive interjections (words like ouch or yay) share similar vowel signatures across the world's languages, and whether these can be traced back to nonlinguistic vocalizations (like screams and cries) expressing the same emotions of pain, disgust, and joy. We analyze vowels in interjections from dictionaries of 131 languages (over 600 tokens) and compare these with nearly 500 vowels based on formant frequency measures from voice recordings of volitional nonlinguistic vocalizations. We show that across the globe, pain interjections feature a-like vowels and wide falling diphthongs ("ai" as in Ayyy! "aw" as in Ouch!), whereas disgust and joy interjections do not show robust vowel regularities that extend geographically. In nonlinguistic vocalizations, all emotions yield distinct vowel signatures: pain prompts open vowels such as [a], disgust schwa-like central vowels, and joy front vowels such as [i]. Our results show that pain is the only affective experience tested with a clear, robust vowel signature that is preserved between nonlinguistic vocalizations and interjections across languages. These results offer empirical evidence for iconicity in some expressive interjections. We consider potential mechanisms and origins, from evolutionary pressures and sound symbolism to colexification, proposing testable hypotheses for future research.
Affiliation(s)
- Maïa Ponsonnet
- Dynamique Du Langage, CNRS et Université Lumière Lyon 2, Lyon, France
- School of Social Sciences, The University of Western Australia, Perth, Australia
- Christophe Coupé
- Department of Linguistics, The University of Hong Kong, Hong Kong SAR, China
- Katarzyna Pisanski
- Dynamique Du Langage, CNRS et Université Lumière Lyon 2, Lyon, France
- ENES Bioacoustics Research Laboratory, University Jean Monnet of Saint-Etienne, CRNL, CNRS, Saint-Etienne, France
- Institute of Psychology, University of Wrocław, Wrocław, Poland
12
Jang D, Lybeck M, Cortes DS, Elfenbein HA, Laukka P. Estrogen predicts multimodal emotion recognition accuracy across the menstrual cycle. PLoS One 2024; 19:e0312404. PMID: 39436872. PMCID: PMC11495617. DOI: 10.1371/journal.pone.0312404.
Abstract
Researchers have proposed that variation in sex hormones across the menstrual cycle modulate the ability to recognize emotions in others. Existing research suggests that accuracy is higher during the follicular phase and ovulation compared to the luteal phase, but findings are inconsistent. Using a repeated measures design with a sample of healthy naturally cycling women (N = 63), we investigated whether emotion recognition accuracy varied between the follicular and luteal phases, and whether accuracy related to levels of estrogen (estradiol) and progesterone. Two tasks assessed recognition of a range of positive and negative emotions via brief video recordings presented in visual, auditory, and multimodal blocks, and non-linguistic vocalizations (e.g., laughter, sobs, and sighs). Multilevel models did not show differences in emotion recognition between cycle phases. However, coefficients for estrogen were significant for both emotion recognition tasks. Higher within-person levels of estrogen predicted lower accuracy, whereas higher between-person estrogen levels predicted greater accuracy. This suggests that in general having higher estrogen levels increases accuracy, but that higher-than-usual estrogen at a given time decreases it. Within-person estrogen further interacted with cycle phase for both tasks and showed a quadratic relationship with accuracy for the multimodal task. In particular, women with higher levels of estrogen were more accurate in the follicular phase and middle of the menstrual cycle. We propose that the differing role of within- and between-person hormone levels could explain some of the inconsistency in previous findings.
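The within- versus between-person estrogen effects reported here rest on person-mean centering before the multilevel model is fit. A minimal sketch of that decomposition is shown below; the file and column names are hypothetical and the formula is a simplified stand-in for the authors' models, which also included interaction and task terms.

```python
# Sketch: split estradiol into between-person (person mean) and within-person
# (deviation from own mean) components, then fit a random-intercept model.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cycle_emotion_recognition.csv")       # hypothetical data file
person_mean = df.groupby("participant")["estradiol"].transform("mean")
df["estradiol_between"] = person_mean                    # participant's usual level
df["estradiol_within"] = df["estradiol"] - person_mean   # deviation at this session

model = smf.mixedlm("accuracy ~ estradiol_within + estradiol_between + C(phase)",
                    data=df, groups=df["participant"]).fit()
print(model.summary())
```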
Affiliation(s)
- Daisung Jang
- Melbourne Business School, University of Melbourne, Carlton, Victoria, Australia
- Max Lybeck
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Hillary Anger Elfenbein
- Olin Business School, Washington University in St. Louis, St. Louis, Missouri, United States of America
- Petri Laukka
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Department of Psychology, Uppsala University, Uppsala, Sweden
13
Gao C, Oh S, Yang X, Stanley JM, Shinkareva SV. Neural Representations of Emotions in Visual, Auditory, and Modality-Independent Regions Reflect Idiosyncratic Conceptual Knowledge. Hum Brain Mapp 2024; 45:e70040. PMID: 39394899. PMCID: PMC11470372. DOI: 10.1002/hbm.70040.
Abstract
Growing evidence suggests that conceptual knowledge influences emotion perception, yet the neural mechanisms underlying this effect are not fully understood. Recent studies have shown that brain representations of facial emotion categories in visual-perceptual areas are predicted by conceptual knowledge, but it remains to be seen if auditory regions are similarly affected. Moreover, it is not fully clear whether these conceptual influences operate at a modality-independent level. To address these questions, we conducted a functional magnetic resonance imaging study presenting participants with both facial and vocal emotional stimuli. This dual-modality approach allowed us to investigate effects on both modality-specific and modality-independent brain regions. Using univariate and representational similarity analyses, we found that brain representations in both visual (middle and lateral occipital cortices) and auditory (superior temporal gyrus) regions were predicted by conceptual understanding of emotions for faces and voices, respectively. Additionally, we discovered that conceptual knowledge also influenced supra-modal representations in the superior temporal sulcus. Dynamic causal modeling revealed a brain network showing both bottom-up and top-down flows, suggesting a complex interplay of modality-specific and modality-independent regions in emotional processing. These findings collectively indicate that the neural representations of emotions in both sensory-perceptual and modality-independent regions are likely shaped by each individual's conceptual knowledge.
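The representational similarity analysis mentioned above reduces to comparing two dissimilarity structures: a neural representational dissimilarity matrix (RDM) computed from condition-wise activation patterns, and a model RDM derived from conceptual ratings of the same emotions. The sketch below uses random placeholder arrays purely to show the mechanics; it is not the authors' analysis code.

```python
# Sketch of representational similarity analysis (RSA): correlate a neural RDM
# with a conceptual-model RDM. Placeholder arrays stand in for real data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
patterns = rng.normal(size=(10, 500))   # placeholder: 10 emotion conditions x 500 voxels
ratings = rng.normal(size=(10, 14))     # placeholder: conceptual ratings per condition

# Pairwise dissimilarities (1 - Pearson r), returned as condensed upper-triangle vectors.
neural_rdm = pdist(patterns, metric="correlation")
model_rdm = pdist(ratings, metric="correlation")

rho, p = spearmanr(neural_rdm, model_rdm)
print(f"Neural-model RDM correlation: rho = {rho:.3f}, p = {p:.3f}")
```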
Affiliation(s)
- Chuanji Gao
- School of Psychology, Nanjing Normal University, Nanjing, China
- Sewon Oh
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, South Carolina, USA
- Xuan Yang
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, South Carolina, USA
- Jacob M. Stanley
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, South Carolina, USA
- Svetlana V. Shinkareva
- Department of Psychology, Institute for Mind and Brain, University of South Carolina, Columbia, South Carolina, USA
14
Daikoku T. Temporal dynamics of uncertainty and prediction error in musical improvisation across different periods. Sci Rep 2024; 14:22297. PMID: 39333792. PMCID: PMC11437158. DOI: 10.1038/s41598-024-73689-x.
Abstract
Human improvisational acts carry an innate individuality, derived from experiences shaped by epochal and cultural backgrounds. Musical improvisation, much like spontaneous speech, reveals intricate facets of the improviser's state of mind and emotional character. However, the specific musical components that reveal such individuality remain largely unexplored. Within the framework of human statistical learning and predictive processing, this study examined the temporal dynamics of uncertainty and surprise (prediction error) in musical improvisation. This cognitive process reconciles raw auditory cues, such as melody and rhythm, with musical predictive models shaped by prior experience. The study employed the Hierarchical Bayesian Statistical Learning (HBSL) model to analyze a corpus of 456 jazz improvisations, spanning 1905 to 2009, from 78 distinct jazz musicians. The results indicated distinctive temporal patterns of surprise and uncertainty, especially in pitch and pitch-rhythm sequences, revealing era-specific features from the early 20th to the 21st centuries. Conversely, rhythm sequences exhibited a consistent degree of uncertainty across eras, and the acoustic properties remained unchanged across periods. These findings highlight how the temporal dynamics of surprise and uncertainty in improvisational music change over time, shaping the distinctive methods artists adopt for improvisation in each era, and suggest that the development of improvisational music can be attributed to adaptive statistical learning mechanisms. Such period-specific shifts offer a window into how artists intuitively adapt their craft to the cultural zeitgeist and emotional landscape of their times.
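The HBSL model itself is too involved for a short example, but the two quantities it tracks, surprisal (prediction error) and uncertainty, can be illustrated with a much simpler incrementally updated bigram model over a pitch sequence: before each note, uncertainty is the entropy of the predictive distribution; once the note arrives, surprisal is its negative log probability. The sketch below is a simplified conceptual stand-in, not the published model.

```python
# Simplified illustration of surprisal and uncertainty over a pitch sequence,
# using an incrementally updated bigram model with add-one smoothing.
# Conceptual stand-in only; the paper uses a Hierarchical Bayesian model.
import numpy as np
from collections import defaultdict

def surprisal_and_uncertainty(pitches, alphabet_size=128):
    counts = defaultdict(lambda: np.ones(alphabet_size))     # add-one prior
    surprisal, uncertainty = [], []
    for prev, nxt in zip(pitches[:-1], pitches[1:]):
        p = counts[prev] / counts[prev].sum()                 # predictive distribution
        uncertainty.append(float(-(p * np.log2(p)).sum()))    # entropy before the note
        surprisal.append(float(-np.log2(p[nxt])))             # prediction error for the note
        counts[prev][nxt] += 1                                # learn from the observation
    return np.array(surprisal), np.array(uncertainty)

melody = [60, 62, 64, 65, 64, 62, 60, 60, 62, 64]             # toy MIDI pitch sequence
s, u = surprisal_and_uncertainty(melody)
print(s.round(2), u.round(2))
```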
Affiliation(s)
- Tatsuya Daikoku
- Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan.
- Centre for Neuroscience in Education, University of Cambridge, Cambridge, UK.
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan.
15
Stamkou E, Keltner D, Corona R, Aksoy E, Cowen AS. Emotional palette: a computational mapping of aesthetic experiences evoked by visual art. Sci Rep 2024; 14:19932. PMID: 39198545. PMCID: PMC11358466. DOI: 10.1038/s41598-024-69686-9.
Abstract
Despite the evolutionary history and cultural significance of visual art, the structure of aesthetic experiences it evokes has only attracted recent scientific attention. What kinds of experience does visual art evoke? Guided by Semantic Space Theory, we identify the concepts that most precisely describe people's aesthetic experiences using new computational techniques. Participants viewed 1457 artworks sampled from diverse cultural and historical traditions and reported on the emotions they felt and their perceived artwork qualities. Results show that aesthetic experiences are high-dimensional, comprising 25 categories of feeling states. Extending well beyond hedonism and broad evaluative judgments (e.g., pleasant/unpleasant), aesthetic experiences involve emotions of daily social living (e.g., "sad", "joy"), the imagination (e.g., "psychedelic", "mysterious"), profundity (e.g., "disgust", "awe"), and perceptual qualities attributed to the artwork (e.g., "whimsical", "disorienting"). Aesthetic emotions and perceptual qualities jointly predict viewers' liking of the artworks, indicating that we conceptualize aesthetic experiences in terms of the emotions we feel but also the qualities we perceive in the artwork. Aesthetic experiences are often mixed and lie along continuous gradients between categories rather than within discrete clusters. Our collection of artworks is visualized within an interactive map ( https://barradeau.com/2021/emotions-map/ ), revealing the high-dimensional space of aesthetic experiences associated with visual art.
Affiliation(s)
- Eftychia Stamkou
- Department of Psychology, University of Amsterdam, 1001 NK, Amsterdam, The Netherlands.
- Dacher Keltner
- Department of Psychology, University of California Berkeley, Berkeley, CA, 94720, USA
- Rebecca Corona
- Department of Psychology, University of California Berkeley, Berkeley, CA, 94720, USA
- Eda Aksoy
- Google Arts and Culture, 75009, Paris, France
- Alan S Cowen
- Department of Psychology, University of California Berkeley, Berkeley, CA, 94720, USA
- Hume AI, New York, NY, 10010, USA
16
Paletz SBF, Golonka EM, Pandža NB, Stanton G, Ryan D, Adams N, Rytting CA, Murauskaite EE, Buntain C, Johns MA, Bradley P. Social media emotions annotation guide (SMEmo): Development and initial validity. Behav Res Methods 2024; 56:4435-4485. PMID: 37697206. DOI: 10.3758/s13428-023-02195-1.
Abstract
The proper measurement of emotion is vital to understanding the relationship between emotional expression in social media and other factors, such as online information sharing. This work develops a standardized annotation scheme for quantifying emotions in social media using recent emotion theory and research. Human annotators assessed both social media posts and their own reactions to the posts' content on scales of 0 to 100 for each of 20 (Study 1) and 23 (Study 2) emotions. For Study 1, we analyzed English-language posts from Twitter (N = 244) and YouTube (N = 50). Associations between emotion ratings and text-based measures (LIWC, VADER, EmoLex, NRC-EIL, Emotionality) demonstrated convergent and discriminant validity. In Study 2, we tested an expanded version of the scheme in-country, in-language, on Polish (N = 3648) and Lithuanian (N = 1934) multimedia Facebook posts. While the correlations were lower than with English, patterns of convergent and discriminant validity with EmoLex and NRC-EIL still held. Coder reliability was strong across samples, with intraclass correlations of .80 or higher for 10 different emotions in Study 1 and 16 different emotions in Study 2. This research improves the measurement of emotions in social media to include more dimensions, multimedia, and context compared to prior schemes.
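Coder reliability of the kind reported above (intraclass correlations of .80 or higher) can be computed from a posts-by-coders rating matrix using a two-way ANOVA decomposition. The function below is a generic implementation of the standard two-way random-effects ICC (single-rater and average-rater forms) for illustration; it is not the authors' analysis script, and the toy data are invented.

```python
# Generic two-way random-effects ICC from a targets x raters matrix of ratings
# (e.g., social media posts rated by several coders on one emotion scale).
import numpy as np

def icc_two_way_random(x):
    """x: 2-D array, rows = rated posts (targets), columns = coders (raters)."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-target mean square
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # between-rater mean square
    sse = ((x - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                              # residual mean square
    icc_single = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    icc_average = (msr - mse) / (msr + (msc - mse) / n)
    return icc_single, icc_average

ratings = np.array([[10, 12, 11], [55, 60, 58], [0, 5, 2], [80, 85, 90]])  # toy data
print(icc_two_way_random(ratings))
```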
Affiliation(s)
- Susannah B F Paletz
- College of Information Studies, University of Maryland, College Park, MD, USA.
- Ewa M Golonka
- Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA
- Nick B Pandža
- Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA
- Program in Second Language Acquisition, University of Maryland, College Park, MD, USA
- Grace Stanton
- Department of Criminology, University of Maryland, College Park, MD, USA
- David Ryan
- Feminist, Gender, and Sexuality Studies, Stanford University, Stanford, CA, USA
- Nikki Adams
- Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA
- C Anton Rytting
- Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA
- Cody Buntain
- College of Information Studies, University of Maryland, College Park, MD, USA
- Michael A Johns
- Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA
- Petra Bradley
- Applied Research Laboratory for Intelligence and Security (ARLIS), University of Maryland, College Park, MD, USA
17
Cowen AS, Brooks JA, Prasad G, Tanaka M, Kamitani Y, Kirilyuk V, Somandepalli K, Jou B, Schroff F, Adam H, Sauter D, Fang X, Manokara K, Tzirakis P, Oh M, Keltner D. How emotion is experienced and expressed in multiple cultures: a large-scale experiment across North America, Europe, and Japan. Front Psychol 2024; 15:1350631. PMID: 38966733. PMCID: PMC11223574. DOI: 10.3389/fpsyg.2024.1350631.
Abstract
Core to understanding emotion are subjective experiences and their expression in facial behavior. Past studies have largely focused on six emotions and prototypical facial poses, reflecting limitations in scale and narrow assumptions about the variety of emotions and their patterns of expression. We examine 45,231 facial reactions to 2,185 evocative videos, largely in North America, Europe, and Japan, collecting participants' self-reported experiences in English or Japanese and manual and automated annotations of facial movement. Guided by Semantic Space Theory, we uncover 21 dimensions of emotion in the self-reported experiences of participants in Japan, the United States, and Western Europe, and considerable cross-cultural similarities in experience. Facial expressions predict at least 12 dimensions of experience, despite massive individual differences in experience. We find considerable cross-cultural convergence in the facial actions involved in the expression of emotion, alongside culture-specific display tendencies: many facial movements differ in intensity in Japan compared to the U.S./Canada and Europe but represent similar experiences. These results quantitatively detail how people in dramatically different cultures experience and express emotion in a high-dimensional, categorical, and broadly similar, though complex, fashion.
Affiliation(s)
- Alan S. Cowen
- Hume AI, New York, NY, United States
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Jeffrey A. Brooks
- Hume AI, New York, NY, United States
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Misato Tanaka
- Advanced Telecommunications Research Institute, Kyoto, Japan
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Yukiyasu Kamitani
- Advanced Telecommunications Research Institute, Kyoto, Japan
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Krishna Somandepalli
- Google Research, Mountain View, CA, United States
- Department of Electrical Engineering, University of Southern California, Los Angeles, CA, United States
- Brendan Jou
- Google Research, Mountain View, CA, United States
- Hartwig Adam
- Google Research, Mountain View, CA, United States
- Disa Sauter
- Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands
- Xia Fang
- Zhejiang University, Zhejiang, China
- Kunalan Manokara
- Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands
- Moses Oh
- Hume AI, New York, NY, United States
- Dacher Keltner
- Hume AI, New York, NY, United States
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
18
Ferrari C, Arioli M, Atias D, Merabet LB, Cattaneo Z. Perception and discrimination of real-life emotional vocalizations in early blind individuals. Front Psychol 2024; 15:1386676. PMID: 38784630. PMCID: PMC11112099. DOI: 10.3389/fpsyg.2024.1386676.
Abstract
Introduction The capacity to understand others' emotions and react accordingly is a key social ability. However, it may be compromised in the case of profound sensory loss, which limits the contribution of available contextual cues (e.g., facial expression, gestures, body posture) to interpreting emotions expressed by others. In this study, we specifically investigated whether early blindness affects the capacity to interpret emotional vocalizations, whose valence may be difficult to recognize without a meaningful context. Methods We asked a group of early blind (N = 22) and sighted controls (N = 22) to evaluate the valence and the intensity of spontaneous fearful and joyful non-verbal vocalizations. Results Our data showed that emotional vocalizations presented alone (i.e., with no contextual information) are similarly ambiguous for blind and sighted individuals but are perceived as more intense by the former, possibly reflecting their higher salience when visual experience is unavailable. Discussion Our study contributes to a better understanding of how sensory experience shapes emotion recognition.
Affiliation(s)
- Chiara Ferrari
- Department of Humanities, University of Pavia, Pavia, Italy
- IRCCS Mondino Foundation, Pavia, Italy
- Maria Arioli
- Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
- Doron Atias
- Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
- Lotfi B. Merabet
- The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, United States
- Zaira Cattaneo
- Department of Human and Social Sciences, University of Bergamo, Bergamo, Italy
19
Trevizan-Baú P, Stanić D, Furuya WI, Dhingra RR, Dutschmann M. Neuroanatomical frameworks for volitional control of breathing and orofacial behaviors. Respir Physiol Neurobiol 2024; 323:104227. PMID: 38295924. DOI: 10.1016/j.resp.2024.104227.
Abstract
Breathing is the only vital function that can be volitionally controlled. However, a detailed understanding of how volitional (cortical) motor commands transform vital breathing activity into adaptive breathing patterns that accommodate orofacial behaviors such as swallowing, vocalization, or sniffing remains to be developed. Recent neuroanatomical tract-tracing studies have identified the patterns and origins of descending forebrain projections that target brain nuclei involved in laryngeal adductor function, which is critical for orofacial behavior. These nuclei include the midbrain periaqueductal gray and nuclei of the respiratory rhythm- and pattern-generating network in the brainstem, specifically the pontine Kölliker-Fuse nucleus and the pre-Bötzinger complex in the medulla oblongata. This review discusses the functional implications of the forebrain-brainstem anatomical connectivity that could underlie the volitional control and coordination of orofacial behaviors with breathing.
Affiliation(s)
- Pedro Trevizan-Baú
- The Florey Institute, University of Melbourne, Victoria, Australia; Department of Physiological Sciences, University of Florida, Gainesville, FL, USA
- Davor Stanić
- The Florey Institute, University of Melbourne, Victoria, Australia
- Werner I Furuya
- The Florey Institute, University of Melbourne, Victoria, Australia
- Rishi R Dhingra
- The Florey Institute, University of Melbourne, Victoria, Australia; Division of Pulmonary, Critical Care and Sleep Medicine, Case Western Reserve University, Cleveland, OH, USA
- Mathias Dutschmann
- The Florey Institute, University of Melbourne, Victoria, Australia; Division of Pulmonary, Critical Care and Sleep Medicine, Case Western Reserve University, Cleveland, OH, USA.
20
Nestor PG, Woodhull AA. Exploring cultural contributions to the neuropsychology of social cognition: the advanced clinical solutions. J Clin Exp Neuropsychol 2024; 46:303-315. PMID: 38717033. DOI: 10.1080/13803395.2024.2348212.
Abstract
INTRODUCTION Culture and social cognition are deeply intertwined, yet how this rich intersectionality is expressed neuropsychologically remains an important question. METHOD In a convenience sample of 128 young adults (mean age = 24.9 years) recruited from a majority-minority urban university, we examined performance-based neuropsychological measures of social cognition, the Advanced Clinical Solutions-Social Perception (ACS-SP), in relation to both cultural orientation, as assessed by the Individualism-Collectivism Scale (ICS), and spoken English language, as assessed by the oral word pronunciation measure of the Wide Range Achievement Test-4 (WRAT4). RESULTS Higher WRAT4 scores correlated with better performance across all ACS-SP measures of social cognition. Controlling for these associations with spoken English, partial correlations linked lower scores on both the prosody interpretation and affect naming ACS-SP tasks with a propensity to view social relationships vertically, irrespective of individualistic or collectivistic orientation. Hierarchical regression results showed that cultural orientation and English-language familiarity each specifically and uniquely contributed to ACS-SP performance for matching prosody with facial expressions. CONCLUSIONS These findings underscore the importance of incorporating and prioritizing both language and cultural factors in neuropsychological studies of social cognition. They may be viewed as offering strong support for expanding the boundaries of the construct of social cognition beyond its current theoretical framework, which privileges Western, educated, industrialized, rich, and democratic (WEIRD) values, customs, and epistemologies.
Affiliation(s)
- Paul G Nestor
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
- Laboratory of Neuroscience, Harvard Medical School, Brockton, MA, USA
- Ashley-Ann Woodhull
- Department of Psychology, University of Massachusetts Boston, Boston, MA, USA
21
Kamiloğlu RG, Sauter DA. Sounds like a fight: listeners can infer behavioural contexts from spontaneous nonverbal vocalisations. Cogn Emot 2024; 38:277-295. PMID: 37997898. PMCID: PMC11057848. DOI: 10.1080/02699931.2023.2285854.
Abstract
When we hear another person laugh or scream, can we tell the kind of situation they are in - for example, whether they are playing or fighting? Nonverbal expressions are theorised to vary systematically across behavioural contexts. Perceivers might be sensitive to these putative systematic mappings and thereby correctly infer contexts from others' vocalisations. Here, in two pre-registered experiments, we test the prediction that listeners can accurately deduce production contexts (e.g. being tickled, discovering threat) from spontaneous nonverbal vocalisations, like sighs and grunts. In Experiment 1, listeners (total n = 3120) matched 200 nonverbal vocalisations to one of 10 contexts using yes/no response options. Using signal detection analysis, we show that listeners were accurate at matching vocalisations to nine of the contexts. In Experiment 2, listeners (n = 337) categorised the production contexts by selecting from 10 response options in a forced-choice task. By analysing unbiased hit rates, we show that participants categorised all 10 contexts at better-than-chance levels. Together, these results demonstrate that perceivers can infer contexts from nonverbal vocalisations at rates that exceed that of random selection, suggesting that listeners are sensitive to systematic mappings between acoustic structures in vocalisations and behavioural contexts.
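The unbiased hit rate analysed in Experiment 2 corrects raw accuracy for how often a response category was used at all: for each context, the squared number of correct classifications is divided by the product of the number of stimuli from that context and the number of times that context was chosen, and the result is compared against a chance estimate built from the same marginals (values are typically arcsine-transformed before testing). The sketch below is a generic illustration from a confusion matrix, not the authors' analysis code, and the counts are invented.

```python
# Generic computation of unbiased hit rates (Hu) and matched chance values
# from a confusion matrix of counts (rows = true context, cols = chosen context).
import numpy as np

def unbiased_hit_rates(confusion):
    confusion = np.asarray(confusion, dtype=float)
    stim_totals = confusion.sum(axis=1)       # stimuli per production context
    resp_totals = confusion.sum(axis=0)       # times each context was chosen
    n = confusion.sum()
    correct = np.diag(confusion)
    hu = correct ** 2 / (stim_totals * resp_totals)
    chance = (stim_totals * resp_totals) / n ** 2
    return hu, chance

# Toy 3-context example.
conf = np.array([[18, 1, 1],
                 [4, 12, 4],
                 [2, 3, 15]])
hu, chance = unbiased_hit_rates(conf)
print(hu.round(3), chance.round(3))
```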
Affiliation(s)
- Roza G. Kamiloğlu
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Disa A. Sauter
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
22
Lettieri G, Handjaras G, Cappello EM, Setti F, Bottari D, Bruno V, Diano M, Leo A, Tinti C, Garbarini F, Pietrini P, Ricciardi E, Cecchetti L. Dissecting abstract, modality-specific and experience-dependent coding of affect in the human brain. Sci Adv 2024; 10:eadk6840. PMID: 38457501. PMCID: PMC10923499. DOI: 10.1126/sciadv.adk6840.
Abstract
Emotion and perception are tightly intertwined, as affective experiences often arise from the appraisal of sensory information. Nonetheless, whether the brain encodes emotional instances using a sensory-specific code or in a more abstract manner is unclear. Here, we answer this question by measuring the association between emotion ratings collected during a unisensory or multisensory presentation of a full-length movie and brain activity recorded in typically developed, congenitally blind and congenitally deaf participants. Emotional instances are encoded in a vast network encompassing sensory, prefrontal, and temporal cortices. Within this network, the ventromedial prefrontal cortex stores a categorical representation of emotion independent of modality and previous sensory experience, and the posterior superior temporal cortex maps the valence dimension using an abstract code. Sensory experience more than modality affects how the brain organizes emotional information outside supramodal regions, suggesting the existence of a scaffold for the representation of emotional states where sensory inputs during development shape its functioning.
Affiliation(s)
- Giada Lettieri
- Crossmodal Perception and Plasticity Laboratory, Institute of Research in Psychology & Institute of Neuroscience, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Giacomo Handjaras
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Elisa M. Cappello
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
| | - Francesca Setti
- Sensorimotor Experiences and Mental Representations Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
| | - Davide Bottari
- Sensorimotor Experiences and Mental Representations Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Sensory Experience Dependent Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
| | | | - Matteo Diano
- Department of Psychology, University of Turin, Turin, Italy
| | - Andrea Leo
- Department of Translational Research and Advanced Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
| | - Carla Tinti
- Department of Psychology, University of Turin, Turin, Italy
| | | | - Pietro Pietrini
- Forensic Neuroscience and Psychiatry Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
| | - Emiliano Ricciardi
- Sensorimotor Experiences and Mental Representations Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Sensory Experience Dependent Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
| | - Luca Cecchetti
- Social and Affective Neuroscience Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
| |
Collapse
|
23
|
Khan N, Plunk A, Zheng Z, Adiani D, Staubitz J, Weitlauf A, Sarkar N. Pilot study of a real-time early agitation capture technology (REACT) for children with intellectual and developmental disabilities. Digit Health 2024; 10:20552076241287884. [PMID: 39435330 PMCID: PMC11492225 DOI: 10.1177/20552076241287884] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2024] [Accepted: 09/10/2024] [Indexed: 10/23/2024] Open
Abstract
Objective Children and adolescents with intellectual and developmental disabilities (IDD), particularly those with autism spectrum disorder, are at increased risk of challenging behaviors such as self-injury, aggression, elopement, and property destruction. To mitigate these challenges, it is crucial to focus on early signs of distress that may lead to these behaviors. These early signs might not be visible to the human eye but could be detected by predictive machine learning (ML) models that utilize real-time sensing. Current behavioral assessment practices lack such proactive predictive models. This study developed and pilot-tested real-time early agitation capture technology (REACT), a real-time multimodal ML model to detect early signs of distress, termed "agitations." Integrating multimodal sensing, ML, and human expertise could make behavioral assessments for people with IDD safer and more efficient. Methods We leveraged wearable technology to collect behavioral and physiological data from three children with IDD aged 6 to 9 years. The effectiveness of the REACT system was measured using the F1 score, assessing its performance from the time of agitation to 20 s prior. Results The REACT system was able to detect agitations with an average F1 score of 78.69% at the time of agitation and 68.20% 20 s prior. Conclusion The findings support the use of the REACT model for real-time, proactive detection of agitations in children with IDD. This approach not only improves the accuracy of detecting distress signals that are imperceptible to the human eye but also increases the window for timely intervention before behavioral escalation, thereby enhancing safety, well-being, and inclusion for this vulnerable population. We believe that such a technological support system will enhance user autonomy, self-advocacy, and self-determination.
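To make the reported metric concrete, the snippet below is a minimal, hypothetical F1 computation for a windowed binary agitation detector. The label arrays, the window length, and the circular shift used to score early predictions are illustrative assumptions, not the study's data or evaluation code.

```python
# Illustrative F1 computation for a binary agitation detector.
# Labels are made up; they stand in for per-window ground truth and predictions.
import numpy as np
from sklearn.metrics import f1_score

# 1 = agitation present in this window, 0 = calm
y_true = np.array([0, 0, 1, 1, 1, 0, 0, 1, 0, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 0, 1, 0, 0])

print("F1 at time of agitation:", f1_score(y_true, y_pred))

# To score an "early warning" model, shift the labels so that a prediction made
# before onset counts as correct. Here two hypothetical 10-second windows stand
# in for a 20 s lead; np.roll wraps around, which is acceptable for this toy case.
lead_windows = 2
y_true_early = np.roll(y_true, -lead_windows)
print("F1 20 s prior (shifted labels):", f1_score(y_true_early, y_pred))
```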
Collapse
Affiliation(s)
- Nibraas Khan
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
| | - Abigale Plunk
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, Tennessee, USA
| | - Zhaobo Zheng
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
| | - Deeksha Adiani
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
| | - John Staubitz
- Treatment and Research Institute for Autism Spectrum Disorders, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| | - Amy Weitlauf
- Treatment and Research Institute for Autism Spectrum Disorders, Vanderbilt University Medical Center, Nashville, Tennessee, USA
| | - Nilanjan Sarkar
- Department of Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
| |
Collapse
|
24
|
Mortillaro M, Schlegel K. Embracing the Emotion in Emotional Intelligence Measurement: Insights from Emotion Theory and Research. J Intell 2023; 11:210. [PMID: 37998709 PMCID: PMC10672494 DOI: 10.3390/jintelligence11110210] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 10/16/2023] [Accepted: 10/28/2023] [Indexed: 11/25/2023] Open
Abstract
Emotional intelligence (EI) has gained significant popularity as a scientific construct over the past three decades, yet its conceptualization and measurement still face limitations. Applied EI research often overlooks its components, treating it as a global characteristic, and there are few widely used performance-based tests for assessing ability EI. The present paper proposes avenues for advancing ability EI measurement by connecting the main EI components to models and theories from the emotion science literature and related fields. For emotion understanding and emotion recognition, we discuss the implications of basic emotion theory, dimensional models, and appraisal models of emotion for creating stimuli, scenarios, and response options. For the regulation and management of one's own and others' emotions, we discuss how the process model of emotion regulation and its extensions to interpersonal processes can inform the creation of situational judgment items. In addition, we emphasize the importance of incorporating context, cross-cultural variability, and attentional and motivational factors into future models and measures of ability EI. We hope this article will foster exchange among scholars in the fields of ability EI, basic emotion science, social cognition, and emotion regulation, leading to an enhanced understanding of the individual differences in successful emotional functioning and communication.
Collapse
Affiliation(s)
- Marcello Mortillaro
- Swiss Center for Affective Sciences, University of Geneva, 1202 Geneva, Switzerland
| | - Katja Schlegel
- Institute of Psychology, University of Bern, 3012 Bern, Switzerland
| |
Collapse
|
25
|
Ziereis A, Schacht A. Motivated attention and task relevance in the processing of cross-modally associated faces: Behavioral and electrophysiological evidence. COGNITIVE, AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2023; 23:1244-1266. [PMID: 37353712 PMCID: PMC10545602 DOI: 10.3758/s13415-023-01112-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 05/09/2023] [Indexed: 06/25/2023]
Abstract
It has repeatedly been shown that visually presented stimuli can gain additional relevance by their association with affective stimuli. Studies have shown effects of associated affect in event-related potentials (ERP) like the early posterior negativity (EPN), late positive complex (LPC), and even earlier components such as the P1 or N170. However, findings are mixed as to the extent to which associated affect requires directed attention to the emotional quality of a stimulus and which ERP components are sensitive to task instructions during retrieval. In this preregistered study ( https://osf.io/ts4pb ), we tested cross-modal associations of vocal affect-bursts (positive, negative, neutral) to faces displaying neutral expressions in a flash-card-like learning task, in which participants studied face-voice pairs and learned to correctly assign them to each other. In the subsequent EEG test session, we applied both an implicit ("old-new") and an explicit ("valence-classification") task to investigate whether the behavior at retrieval and the neurophysiological activation of the affect-based associations depended on the type of motivated attention. We collected behavioral and neurophysiological data from 40 participants who reached the preregistered learning criterion. Results showed EPN effects of associated negative valence after learning, independent of the task. In contrast, modulations of later stages (LPC) by positive and negative associated valence were restricted to the explicit, i.e., valence-classification, task. These findings highlight the importance of the task at different processing stages and show that cross-modal affect can successfully be associated with faces.
Collapse
Affiliation(s)
- Annika Ziereis
- Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Georg-August-University of Göttingen, Goßlerstraße 14, 37073 Göttingen, Germany
| | - Anne Schacht
- Department for Cognition, Emotion and Behavior, Affective Neuroscience and Psychophysiology Laboratory, Georg-August-University of Göttingen, Goßlerstraße 14, 37073 Göttingen, Germany
| |
Collapse
|
26
|
Mazza A, Ciorli T, Mirlisenna I, D'Onofrio I, Mantellino S, Zaccaria M, Pia L, Dal Monte O. Pain perception and physiological responses are modulated by active support from a romantic partner. Psychophysiology 2023; 60:e14299. [PMID: 36961121 DOI: 10.1111/psyp.14299] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2022] [Revised: 01/24/2023] [Accepted: 03/08/2023] [Indexed: 03/25/2023]
Abstract
As social animals, humans are strongly affected by social bonds and interpersonal interactions. Proximity and social support from significant others may buffer the negative outcomes of a painful experience. Several studies have investigated the role of romantic partners' support in pain modulation, mostly focusing on tactile support and showing its effectiveness in reducing pain perception. Nevertheless, no study so far has investigated the role of supportive speaking on pain modulation, nor has compared the effects of a tactile and vocal support within the same couples. The present study directly compared for the first time the efficacy of mere presence (Passive Support) and different forms of active (Touch, Voice, Touch + Voice) support from a romantic partner during a painful experience in a naturalistic setting. We assessed pain modulation in 37 romantic couples via both subjective (self-reported ratings) and physiological (skin conductance) measurements. We found that all three types of active support were equally more effective than passive support in reducing the painful experience at both subjective and physiological levels; interestingly, our results suggest that supportive speaking can reduce pain perception with respect to passive support to a similar extent as tactile support does. Overall, this study highlights the relevance of an active support in reducing pain perception, with active types of support being more effective than passive support, regardless of its specific modality.
Collapse
Affiliation(s)
| | - Tommaso Ciorli
- Department of Psychology, University of Turin, Torino, Italy
| | | | | | | | | | - Lorenzo Pia
- Department of Psychology, University of Turin, Torino, Italy
| | - Olga Dal Monte
- Department of Psychology, University of Turin, Torino, Italy
- Department of Psychology, Yale University, New Haven, Connecticut, 06520, USA
| |
Collapse
|
27
|
Abstract
How do experiences in nature or in spiritual contemplation or in being moved by music or with psychedelics promote mental and physical health? Our proposal in this article is awe. To make this argument, we first review recent advances in the scientific study of awe, an emotion often considered ineffable and beyond measurement. Awe engages five processes that benefit well-being: shifts in neurophysiology, a diminished focus on the self, increased prosocial relationality, greater social integration, and a heightened sense of meaning. We then apply this model to illuminate how experiences of awe that arise in nature, spirituality, music, collective movement, and psychedelics strengthen the mind and body.
Collapse
Affiliation(s)
- Maria Monroy
- Department of Psychology, University of California, Berkeley
| | - Dacher Keltner
- Department of Psychology, University of California, Berkeley
| |
Collapse
|
28
|
Barca L, Candidi M, Lancia GL, Maglianella V, Pezzulo G. Mapping the mental space of emotional concepts through kinematic measures of decision uncertainty. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210367. [PMID: 36571117 PMCID: PMC9791479 DOI: 10.1098/rstb.2021.0367] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Accepted: 08/09/2022] [Indexed: 12/27/2022] Open
Abstract
Emotional concepts and their mental representations have been extensively studied. Yet, some ecologically relevant aspects, such as how they are processed in ambiguous contexts (e.g., in relation to other emotional stimuli that share similar characteristics), are incompletely known. We employed a similarity judgement of emotional concepts and manipulated the contextual congruency of the responses along the two main affective dimensions of hedonic valence and physiological activation, respectively. Behavioural and kinematic (mouse-tracking) measures were combined to gather a novel 'similarity index' between emotional concepts, used to derive topographical maps of their mental representations. Self-report (interoceptive sensibility, positive-negative affectivity, depression) and physiological measures (heart rate variability, HRV) were collected to explore their possible association with emotional conceptual representation. Results indicate that emotional concepts typically associated with low arousal benefit from contextual congruency, with faster responses and reduced uncertainty when contextual ambiguity decreases. The emotional maps recreate two almost orthogonal axes of valence and arousal, and the similarity measure captures the smooth boundaries between emotions. The emotional map of a subgroup of individuals with low positive affectivity reveals a narrower conceptual distribution, with variations in positive emotions and in individuals with reduced arousal (such as those with reduced HRV). Our work introduces a novel methodology for studying emotional conceptual representations, bringing the behavioural dynamics of decision-making processes and choice uncertainty into the affective domain. This article is part of the theme issue 'Concepts in interaction: social engagement and inner experiences'.
Collapse
Affiliation(s)
- Laura Barca
- Institute of Cognitive Sciences and Technologies, National Research Council, 00185 Rome, Italy
| | - Matteo Candidi
- Department of Psychology, University of Rome ‘La Sapienza’, 00185 Rome, Italy
| | - Gian Luca Lancia
- Institute of Cognitive Sciences and Technologies, National Research Council, 00185 Rome, Italy
| | - Valerio Maglianella
- Department of Psychology, University of Rome ‘La Sapienza’, 00185 Rome, Italy
| | - Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, 00185 Rome, Italy
| |
Collapse
|
29
|
Liu J, Huo Y, Wang J, Bai Y, Zhao M, Di M. Awe of nature and well-being: Roles of nature connectedness and powerlessness. PERSONALITY AND INDIVIDUAL DIFFERENCES 2023. [DOI: 10.1016/j.paid.2022.111946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
30
|
Brooks JA, Tzirakis P, Baird A, Kim L, Opara M, Fang X, Keltner D, Monroy M, Corona R, Metrick J, Cowen AS. Deep learning reveals what vocal bursts express in different cultures. Nat Hum Behav 2023; 7:240-250. [PMID: 36577898 DOI: 10.1038/s41562-022-01489-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Accepted: 10/26/2022] [Indexed: 12/29/2022]
Abstract
Human social life is rich with sighs, chuckles, shrieks and other emotional vocalizations, called 'vocal bursts'. Nevertheless, the meaning of vocal bursts across cultures is only beginning to be understood. Here, we combined large-scale experimental data collection with deep learning to reveal the shared and culture-specific meanings of vocal bursts. A total of n = 4,031 participants in China, India, South Africa, the USA and Venezuela mimicked vocal bursts drawn from 2,756 seed recordings. Participants also judged the emotional meaning of each vocal burst. A deep neural network tasked with predicting the culture-specific meanings people attributed to vocal bursts while disregarding context and speaker identity discovered 24 acoustic dimensions, or kinds, of vocal expression with distinct emotion-related meanings. The meanings attributed to these complex vocal modulations were 79% preserved across the five countries and three languages. These results reveal the underlying dimensions of human emotional vocalization in remarkable detail.
Collapse
Affiliation(s)
- Jeffrey A Brooks
- Research Division, Hume AI, New York, NY, USA
- University of California, Berkeley, Berkeley, CA, USA
| | | | - Alice Baird
- Research Division, Hume AI, New York, NY, USA
| | - Lauren Kim
- Research Division, Hume AI, New York, NY, USA
| | | | - Xia Fang
- Zhejiang University, Hangzhou, China
| | - Dacher Keltner
- Research Division, Hume AI, New York, NY, USA
- University of California, Berkeley, Berkeley, CA, USA
| | - Maria Monroy
- University of California, Berkeley, Berkeley, CA, USA
| | | | | | - Alan S Cowen
- Research Division, Hume AI, New York, NY, USA
- University of California, Berkeley, Berkeley, CA, USA
| |
Collapse
|
31
|
Emotional contagion in online groups as a function of valence and status. COMPUTERS IN HUMAN BEHAVIOR 2023. [DOI: 10.1016/j.chb.2022.107543] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
32
|
Barrett LF. Context reconsidered: Complex signal ensembles, relational meaning, and population thinking in psychological science. AMERICAN PSYCHOLOGIST 2022; 77:894-920. [PMID: 36409120 PMCID: PMC9683522 DOI: 10.1037/amp0001054] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/26/2023]
Abstract
This article considers the status and study of "context" in psychological science through the lens of research on emotional expressions. The article begins by updating three well-trod methodological debates on the role of context in emotional expressions to reconsider several fundamental assumptions lurking within the field's dominant methodological tradition: namely, that certain expressive movements have biologically prepared, inherent emotional meanings that issue from singular, universal processes which are independent of but interact with contextual influences. The second part of this article considers the scientific opportunities that await if we set aside this traditional understanding of "context" as a moderator of signals with inherent psychological meaning and instead consider the possibility that psychological events emerge in ecosystems of signal ensembles, such that the psychological meaning of any individual signal is entirely relational. Such a fundamental shift has radical implications not only for the science of emotion but for psychological science more generally. It offers opportunities to improve the validity and trustworthiness of psychological science beyond what can be achieved with improvements to methodological rigor alone. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
Collapse
|
33
|
Grollero D, Petrolini V, Viola M, Morese R, Lettieri G, Cecchetti L. The structure underlying core affect and perceived affective qualities of human vocal bursts. Cogn Emot 2022; 37:1-17. [PMID: 36300588 DOI: 10.1080/02699931.2022.2139661] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
Abstract
Vocal bursts are non-linguistic affectively-laden sounds with a crucial function in human communication, yet their affective structure is still debated. Studies showed that ratings of valence and arousal follow a V-shaped relationship in several kinds of stimuli: high arousal ratings are more likely to accompany very negative or very positive valence. Across two studies, we asked participants to listen to 1,008 vocal bursts and judge both how they felt when listening to the sound (i.e. core affect condition), and how the speaker felt when producing it (i.e. perception of affective quality condition). We show that a V-shaped fit outperforms a linear model in explaining the valence-arousal relationship across conditions and studies, even after equating the number of exemplars across emotion categories. Also, although subjective experience can be significantly predicted using affective quality ratings, core affect scores are significantly lower in arousal, less extreme in valence, more variable between individuals, and less reproducible between studies. Nonetheless, stimuli rated with opposite valence between conditions range from 11% (study 1) to 17% (study 2). Lastly, we demonstrate that ambiguity in valence (i.e. high between-participants variability) explains violations of the V-shape and relates to higher arousal.
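The V-shaped relationship described here can be formalised as arousal predicted from the absolute deviation of valence from a turning point. The sketch below compares such a fit against a plain linear fit on synthetic ratings; the variable names, simulated data, and model-comparison criterion are assumptions for illustration, not the study's stimuli or code.

```python
# Toy comparison of a linear vs. V-shaped fit of arousal on valence.
# Synthetic ratings only; shapes and noise levels are arbitrary.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
valence = rng.uniform(-1, 1, 500)
arousal = 0.8 * np.abs(valence) + rng.normal(0, 0.1, 500)  # V-shaped ground truth

def linear(v, a, b):
    return a * v + b

def v_shape(v, a, b, c):
    # Arousal rises with distance from a valence turning point c
    return a * np.abs(v - c) + b

for name, model, p0 in [("linear", linear, (0, 0)), ("V-shape", v_shape, (1, 0, 0))]:
    params, _ = curve_fit(model, valence, arousal, p0=p0)
    resid = arousal - model(valence, *params)
    sse = np.sum(resid ** 2)
    # Bayesian information criterion for Gaussian errors: lower is better
    bic = len(valence) * np.log(sse / len(valence)) + len(p0) * np.log(len(valence))
    print(f"{name}: BIC = {bic:.1f}")
```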
Collapse
Affiliation(s)
- Demetrio Grollero
- Social and Affective Neuroscience (SANe) Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
| | - Valentina Petrolini
- Lindy Lab - Language in Neurodiversity, Department of Linguistics and Basque Studies, University of the Basque Country (UPV/EHU), Vitoria-Gasteiz, Spain
| | - Marco Viola
- Department of Philosophy and Education, University of Turin, Turin, Italy
| | - Rosalba Morese
- Faculty of Communication, Culture and Society, Università della Svizzera Italiana, Lugano, Switzerland
- Faculty of Biomedical Sciences, Università della Svizzera Italiana, Lugano, Switzerland
| | - Giada Lettieri
- Social and Affective Neuroscience (SANe) Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Crossmodal Perception and Plasticity Laboratory, IPSY, University of Louvain, Louvain-la-Neuve, Belgium
| | - Luca Cecchetti
- Social and Affective Neuroscience (SANe) Group, MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
| |
Collapse
|
34
|
Wood A, Sievert S, Martin J. Semantic Similarity of Social Functional Smiles and Laughter. JOURNAL OF NONVERBAL BEHAVIOR 2022. [DOI: 10.1007/s10919-022-00405-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
|
35
|
Effects of mild-to-moderate sensorineural hearing loss and signal amplification on vocal emotion recognition in middle-aged–older individuals. PLoS One 2022; 17:e0261354. [PMID: 34995305 PMCID: PMC8740977 DOI: 10.1371/journal.pone.0261354] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Accepted: 11/29/2021] [Indexed: 11/19/2022] Open
Abstract
Previous research has shown deficits in vocal emotion recognition in sub-populations of individuals with hearing loss, making this a high-priority research topic. However, previous research has only examined vocal emotion recognition using verbal material, in which emotions are expressed through emotional prosody. There is evidence that older individuals with hearing loss suffer from deficits in general prosody recognition, not specific to emotional prosody. No study has examined the recognition of non-verbal vocalizations, which constitute another important source for the vocal communication of emotions. It might be the case that individuals with hearing loss have specific difficulties in recognizing emotions expressed through prosody in speech, but not non-verbal vocalizations. We aim to examine whether vocal emotion recognition difficulties in middle-aged to older individuals with sensorineural mild-to-moderate hearing loss are better explained by deficits in vocal emotion recognition specifically, or by deficits in prosody recognition more generally, by including both sentences and non-verbal expressions. Furthermore, some of the studies which have concluded that individuals with mild-to-moderate hearing loss have deficits in vocal emotion recognition ability have also found that the use of hearing aids does not improve recognition accuracy in this group. We aim to examine the effects of linear amplification and audibility on the recognition of different emotions expressed both verbally and non-verbally. Besides examining accuracy for different emotions, we will also look at patterns of confusion (which specific emotions are mistaken for which others, and at what rates) during both amplified and non-amplified listening, and we will analyze all material acoustically and relate the acoustic content to performance. Together these analyses will provide clues to the effects of amplification on the perception of different emotions. For these purposes, a total of 70 middle-aged to older individuals, half with mild-to-moderate hearing loss and half with normal hearing, will perform a computerized forced-choice vocal emotion recognition task with and without amplification.
Collapse
|
36
|
Bryant GA. Vocal communication across cultures: theoretical and methodological issues. Philos Trans R Soc Lond B Biol Sci 2022; 377:20200387. [PMID: 34775828 PMCID: PMC8591381 DOI: 10.1098/rstb.2020.0387] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Accepted: 08/03/2021] [Indexed: 11/12/2022] Open
Abstract
The study of human vocal communication has been conducted primarily in Western, educated, industrialized, rich, democratic (WEIRD) societies. Recently, cross-cultural investigations in several domains of voice research have been expanding into more diverse populations. Theoretically, it is important to understand how universals and cultural variations interact in vocal production and perception, but cross-cultural voice research presents many methodological challenges. Experimental methods typically used in WEIRD societies are often not possible to implement in many populations such as rural, small-scale societies. Moreover, theoretical and methodological issues are often unnecessarily intertwined. Here, I focus on three areas of cross-cultural voice modulation research: (i) vocal signalling of formidability and dominance, (ii) vocal emotions, and (iii) production and perception of infant-directed speech. Research in these specific areas illustrates challenges that apply more generally across the human behavioural sciences but also reveals promise as we develop our understanding of the evolution of human communication. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part II)'.
Collapse
Affiliation(s)
- Gregory A. Bryant
- Department of Communication, Center for Behavior, Evolution, and Culture, University of California, Los Angeles, 2225 Rolfe Hall, Los Angeles, CA 90095-1563, USA
| |
Collapse
|
37
|
Superior Communication of Positive Emotions Through Nonverbal Vocalisations Compared to Speech Prosody. JOURNAL OF NONVERBAL BEHAVIOR 2021; 45:419-454. [PMID: 34744232 PMCID: PMC8553689 DOI: 10.1007/s10919-021-00375-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/22/2021] [Indexed: 11/29/2022]
Abstract
The human voice communicates emotion through two different types of vocalizations: nonverbal vocalizations (brief non-linguistic sounds like laughs) and speech prosody (tone of voice). Research examining recognizability of emotions from the voice has mostly focused on either nonverbal vocalizations or speech prosody, and included few categories of positive emotions. In two preregistered experiments, we compare human listeners’ (total n = 400) recognition performance for 22 positive emotions from nonverbal vocalizations (n = 880) to that from speech prosody (n = 880). The results show that listeners were more accurate in recognizing most positive emotions from nonverbal vocalizations compared to prosodic expressions. Furthermore, acoustic classification experiments with machine learning models demonstrated that positive emotions are expressed with more distinctive acoustic patterns for nonverbal vocalizations as compared to speech prosody. Overall, the results suggest that vocal expressions of positive emotions are communicated more successfully when expressed as nonverbal vocalizations compared to speech prosody.
Collapse
|
38
|
Neves L, Martins M, Correia AI, Castro SL, Lima CF. Associations between vocal emotion recognition and socio-emotional adjustment in children. ROYAL SOCIETY OPEN SCIENCE 2021; 8:211412. [PMID: 34804582 PMCID: PMC8595998 DOI: 10.1098/rsos.211412] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Accepted: 10/20/2021] [Indexed: 06/13/2023]
Abstract
The human voice is a primary channel for emotional communication. It is often presumed that being able to recognize vocal emotions is important for everyday socio-emotional functioning, but evidence for this assumption remains scarce. Here, we examined relationships between vocal emotion recognition and socio-emotional adjustment in children. The sample included 141 6- to 8-year-old children, and the emotion tasks required them to categorize five emotions (anger, disgust, fear, happiness, sadness, plus neutrality), as conveyed by two types of vocal emotional cues: speech prosody and non-verbal vocalizations such as laughter. Socio-emotional adjustment was evaluated by the children's teachers using a multidimensional questionnaire of self-regulation and social behaviour. Based on frequentist and Bayesian analyses, we found that, for speech prosody, higher emotion recognition related to better general socio-emotional adjustment. This association remained significant even when the children's cognitive ability, age, sex and parental education were held constant. Follow-up analyses indicated that higher emotional prosody recognition was more robustly related to the socio-emotional dimensions of prosocial behaviour and cognitive and behavioural self-regulation. For emotion recognition in non-verbal vocalizations, no associations with socio-emotional adjustment were found. A similar null result was obtained for an additional task focused on facial emotion recognition. Overall, these results support the close link between children's emotional prosody recognition skills and their everyday social behaviour.
Collapse
Affiliation(s)
- Leonor Neves
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Av. das Forças Armadas, 1649-026 Lisboa, Portugal
| | - Marta Martins
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Av. das Forças Armadas, 1649-026 Lisboa, Portugal
| | - Ana Isabel Correia
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Av. das Forças Armadas, 1649-026 Lisboa, Portugal
| | - São Luís Castro
- Centro de Psicologia da Universidade do Porto (CPUP), Faculdade de Psicologia e de Ciências da Educação da Universidade do Porto (FPCEUP), Porto, Portugal
| | - César F. Lima
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Av. das Forças Armadas, 1649-026 Lisboa, Portugal
- Institute of Cognitive Neuroscience, University College London, London, UK
| |
Collapse
|
39
|
Investigating individual differences in emotion recognition ability using the ERAM test. Acta Psychol (Amst) 2021; 220:103422. [PMID: 34592586 DOI: 10.1016/j.actpsy.2021.103422] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Revised: 09/22/2021] [Accepted: 09/23/2021] [Indexed: 12/14/2022] Open
Abstract
Individuals vary in emotion recognition ability (ERA), but the causes and correlates of this variability are not well understood. Previous studies have largely focused on unimodal facial or vocal expressions and a small number of emotion categories, which may not reflect how emotions are expressed in everyday interactions. We investigated individual differences in ERA using a brief test containing dynamic multimodal (facial and vocal) expressions of 5 positive and 7 negative emotions (the ERAM test). Study 1 (N = 593) showed that ERA was positively correlated with emotional understanding, empathy, and openness, and negatively correlated with alexithymia. Women also had higher ERA than men. Study 2 was conducted online and replicated the recognition rates from Study 1 (which was conducted in lab) in a different sample (N = 106). Study 2 also showed that participants who had higher ERA were more accurate in their meta-cognitive judgments about their own accuracy. Recognition rates for visual, auditory, and audio-visual expressions were substantially correlated in both studies. Results provide further clues about the underlying structure of ERA and its links to broader affective processes. The ERAM test can be used for both lab and online research, and is freely available for academic research.
Collapse
|
40
|
Do People Agree on How Positive Emotions Are Expressed? A Survey of Four Emotions and Five Modalities Across 11 Cultures. JOURNAL OF NONVERBAL BEHAVIOR 2021. [DOI: 10.1007/s10919-021-00376-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
While much is known about how negative emotions are expressed in different modalities, our understanding of the nonverbal expressions of positive emotions remains limited. In the present research, we draw upon disparate lines of theoretical and empirical work on positive emotions, and systematically examine which channels are thought to be used for expressing four positive emotions: feeling moved, gratitude, interest, and triumph. Employing the intersubjective approach, an established method in cross-cultural psychology, we first explored how the four positive emotions were reported to be expressed in two North American community samples (Studies 1a and 1b: n = 1466). We next confirmed the cross-cultural generalizability of our findings by surveying respondents from ten countries that diverged on cultural values (Study 2: n = 1826). Feeling moved was thought to be signaled with facial expressions, gratitude with the use of words, interest with words, face and voice, and triumph with body posture, vocal cues, facial expressions, and words. These findings provide cross-culturally consistent findings of differential expressions across positive emotions. Notably, positive emotions were thought to be expressed via modalities that go beyond the face.
Collapse
|
41
|
Farley SD. Introduction to the Special Issue on Emotional Expression Beyond the Face: On the Importance of Multiple Channels of Communication and Context. JOURNAL OF NONVERBAL BEHAVIOR 2021. [DOI: 10.1007/s10919-021-00377-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
42
|
Charbonneau I, Guérette J, Cormier S, Blais C, Lalonde-Beaudoin G, Smith FW, Fiset D. The role of spatial frequencies for facial pain categorization. Sci Rep 2021; 11:14357. [PMID: 34257357 PMCID: PMC8277883 DOI: 10.1038/s41598-021-93776-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Accepted: 06/25/2021] [Indexed: 11/16/2022] Open
Abstract
Studies on low-level visual information underlying pain categorization have led to inconsistent findings. Some show an advantage for low spatial frequency information (SFs) and others a preponderance of mid SFs. This study aims to clarify this gap in knowledge, since these results have different theoretical and practical implications, such as how far away an observer can be in order to categorize pain. This study addresses this question by using two complementary methods: a data-driven method without a priori expectations about the most useful SFs for pain recognition, and a more ecological method that simulates the distance of stimuli presentation. We reveal a broad range of important SFs for pain recognition, starting from low to relatively high SFs, and show that performance is optimal at a short to medium distance (1.2-4.8 m) but declines significantly when mid SFs are no longer available. This study reconciles previous results that show an advantage of LSFs over HSFs when using arbitrary cutoffs, but above all reveals the prominent role of mid-SFs for pain recognition across two complementary experimental tasks.
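Spatial-frequency cutoffs of the kind compared in this study are typically imposed with a band-pass filter in the Fourier domain. The snippet below sketches that operation on a synthetic image; the cutoff values (in cycles per image) and the image itself are placeholders, not the study's stimuli or filtering code.

```python
# Band-pass spatial-frequency filtering of an image via the 2D FFT.
# The synthetic image and the cutoffs (cycles per image) are illustrative.
import numpy as np

def bandpass_filter(image, low_cpi, high_cpi):
    """Keep only spatial frequencies between low_cpi and high_cpi cycles/image."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h          # cycles per image, vertical
    fx = np.fft.fftfreq(w) * w          # cycles per image, horizontal
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = (radius >= low_cpi) & (radius <= high_cpi)
    spectrum = np.fft.fft2(image)
    return np.real(np.fft.ifft2(spectrum * mask))

image = np.random.default_rng(2).normal(size=(256, 256))
low_sf = bandpass_filter(image, 0, 8)     # coarse structure only
mid_sf = bandpass_filter(image, 8, 32)    # mid spatial frequencies
print(low_sf.std(), mid_sf.std())
```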
Collapse
Affiliation(s)
- Isabelle Charbonneau
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
| | - Joël Guérette
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
| | - Stéphanie Cormier
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
| | - Caroline Blais
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
| | - Guillaume Lalonde-Beaudoin
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada
| | - Fraser W Smith
- University of East Anglia School of Psychology, Norwich, NR4 7TJ, UK
| | - Daniel Fiset
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, QC, J8X3X7, Canada.
| |
Collapse
|
43
|
Guyer JJ, Briñol P, Vaughan-Johnston TI, Fabrigar LR, Moreno L, Petty RE. Paralinguistic Features Communicated through Voice can Affect Appraisals of Confidence and Evaluative Judgments. JOURNAL OF NONVERBAL BEHAVIOR 2021; 45:479-504. [PMID: 34744233 PMCID: PMC8553728 DOI: 10.1007/s10919-021-00374-2] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/25/2021] [Indexed: 11/07/2022]
Abstract
This article unpacks the basic mechanisms by which paralinguistic features communicated through the voice can affect evaluative judgments and persuasion. Special emphasis is placed on exploring the rapidly emerging literature on vocal features linked to appraisals of confidence (e.g., vocal pitch, intonation, speech rate, loudness, etc.), and their subsequent impact on information processing and meta-cognitive processes of attitude change. The main goal of this review is to advance understanding of the different psychological processes by which paralinguistic markers of confidence can affect attitude change, specifying the conditions under which they are more likely to operate. In sum, we highlight the importance of considering basic mechanisms of attitude change to predict when and why appraisals of paralinguistic markers of confidence can lead to more or less persuasion.
Collapse
Affiliation(s)
- Joshua J. Guyer
- Department of Social Psychology and Methodology, Universidad Autónoma de Madrid, Madrid, Spain
| | - Pablo Briñol
- Department of Social Psychology and Methodology, Universidad Autónoma de Madrid, Madrid, Spain
| | | | | | - Lorena Moreno
- Department of Social Psychology and Methodology, Universidad Autónoma de Madrid, Madrid, Spain
| | - Richard E. Petty
- Department of Psychology, The Ohio State University, Columbus, USA
| |
Collapse
|
44
|
Jonauskaite D, Sutton A, Cristianini N, Mohr C. English colour terms carry gender and valence biases: A corpus study using word embeddings. PLoS One 2021; 16:e0251559. [PMID: 34061875 PMCID: PMC8168888 DOI: 10.1371/journal.pone.0251559] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Accepted: 04/29/2021] [Indexed: 11/19/2022] Open
Abstract
In Western societies, the stereotype prevails that pink is for girls and blue is for boys. A third possible gendered colour is red. While liked by women, it represents power, stereotypically a masculine characteristic. Empirical studies confirmed such gendered connotations when testing colour-emotion associations or colour preferences in males and females. Furthermore, empirical studies demonstrated that pink is a positive colour, blue is mainly a positive colour, and red is both a positive and a negative colour. Here, we assessed whether the same valence and gender connotations appear in widely available written texts (Wikipedia and newswire articles). Using a word embedding method (GloVe), we extracted gender and valence biases for blue, pink, and red, as well as for the remaining basic colour terms, from a large English-language corpus containing six billion words. We found and confirmed that pink was biased towards femininity and positivity, and blue was biased towards positivity. We found no strong gender bias for blue, and no strong gender or valence biases for red. For the remaining colour terms, we only found that green, white, and brown were positively biased. Our finding on pink shows that writers of widely available English texts use this colour term to convey femininity. This gendered communication reinforces the notion that results from research studies find their analogue in real-world phenomena. Other findings were either consistent or inconsistent with results from research studies. We argue that widely available written texts have biases of their own, because they have been filtered according to context, time, and what is appropriate to be reported.
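Embedding bias scores of this general kind are usually obtained by comparing a target word's vector with anchor words at the two ends of a semantic axis. The sketch below assumes pre-trained GloVe vectors are available as a plain text file; the file path, anchor words, and helper names are illustrative assumptions rather than the authors' procedure.

```python
# Hypothetical gender-bias score for colour terms from GloVe vectors.
# The vector file path and anchor word lists are placeholders.
import numpy as np

def load_glove(path):
    """Read a GloVe text file ("word v1 v2 ...") into a {word: vector} dict."""
    vectors = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            word, *values = line.rstrip().split(" ")
            vectors[word] = np.array(values, dtype=float)
    return vectors

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_bias(word, vectors, feminine=("she", "woman"), masculine=("he", "man")):
    """Positive values lean feminine, negative lean masculine (illustrative axis)."""
    v = vectors[word]
    fem = np.mean([cosine(v, vectors[w]) for w in feminine])
    mas = np.mean([cosine(v, vectors[w]) for w in masculine])
    return fem - mas

vectors = load_glove("glove.6B.300d.txt")  # placeholder path to pre-trained vectors
for colour in ("pink", "blue", "red"):
    print(colour, round(gender_bias(colour, vectors), 3))
```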
Collapse
Affiliation(s)
| | - Adam Sutton
- Department of Computer Science, University of Bristol, Bristol, United Kingdom
| | - Nello Cristianini
- Department of Computer Science, University of Bristol, Bristol, United Kingdom
| | - Christine Mohr
- Institute of Psychology, University of Lausanne, Lausanne, Switzerland
| |
Collapse
|
45
|
|
46
|
Lima CF, Arriaga P, Anikin A, Pires AR, Frade S, Neves L, Scott SK. Authentic and posed emotional vocalizations trigger distinct facial responses. Cortex 2021; 141:280-292. [PMID: 34102411 DOI: 10.1016/j.cortex.2021.04.015] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 04/21/2021] [Accepted: 04/27/2021] [Indexed: 11/28/2022]
Abstract
The ability to recognize the emotions of others is a crucial skill. In the visual modality, sensorimotor mechanisms provide an important route for emotion recognition. Perceiving facial expressions often evokes activity in facial muscles and in motor and somatosensory systems, and this activity relates to performance in emotion tasks. It remains unclear whether and how similar mechanisms extend to audition. Here we examined facial electromyographic and electrodermal responses to nonverbal vocalizations that varied in emotional authenticity. Participants (N = 100) passively listened to laughs and cries that could reflect an authentic or a posed emotion. Bayesian mixed models indicated that listening to laughter evoked stronger facial responses than listening to crying. These responses were sensitive to emotional authenticity. Authentic laughs evoked more activity than posed laughs in the zygomaticus and orbicularis, muscles typically associated with positive affect. We also found that activity in the orbicularis and corrugator related to subjective evaluations in a subsequent authenticity perception task. Stronger responses in the orbicularis predicted higher perceived laughter authenticity. Stronger responses in the corrugator, a muscle associated with negative affect, predicted lower perceived laughter authenticity. Moreover, authentic laughs elicited stronger skin conductance responses than posed laughs. This arousal effect did not predict task performance, however. For crying, physiological responses were not associated with authenticity judgments. Altogether, these findings indicate that emotional authenticity affects peripheral nervous system responses to vocalizations. They also point to a role of sensorimotor mechanisms in the evaluation of authenticity in the auditory modality.
Collapse
Affiliation(s)
- César F Lima
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Institute of Cognitive Neuroscience, University College London, London, UK
| | | | - Andrey Anikin
- Equipe de Neuro-Ethologie Sensorielle (ENES)/Centre de Recherche en Neurosciences de Lyon (CRNL), University of Lyon/Saint-Etienne, CNRS UMR5292, INSERM UMR_S 1028, Saint-Etienne, France
- Division of Cognitive Science, Lund University, Lund, Sweden
| | - Ana Rita Pires
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
| | - Sofia Frade
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
| | - Leonor Neves
- Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
| | - Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
| |
Collapse
|
47
|
Hall A, Kawai K, Graber K, Spencer G, Roussin C, Weinstock P, Volk MS. Acoustic analysis of surgeons’ voices to assess change in the stress response during surgical in situ simulation. BMJ SIMULATION & TECHNOLOGY ENHANCED LEARNING 2021; 7:471-477. [DOI: 10.1136/bmjstel-2020-000727] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 03/23/2021] [Indexed: 11/04/2022]
Abstract
Introduction: Stress may serve as an adjunct (challenge) or hindrance (threat) to the learning process. Determining the effect of an individual's response to situational demands in either a real or simulated situation may enable optimisation of the learning environment. Studies of acoustic analysis suggest that mean fundamental frequency and formant frequencies of voice vary with an individual's response during stressful events. This hypothesis is reviewed within the otolaryngology (ORL) simulation environment to assess whether acoustic analysis could be used as a tool to determine participants' stress response and cognitive load in medical simulation. Such an assessment could lead to optimisation of the learning environment. Methodology: ORL simulation scenarios were performed to teach the participants teamwork and refine clinical skills. Each was performed in an actual operating room (OR) environment (in situ) with a multidisciplinary team consisting of ORL surgeons, OR nurses and anaesthesiologists. Ten of the scenarios were led by an ORL attending and ten were led by an ORL fellow. The vocal communication of each of the 20 individual leaders was analysed using long-term pitch analysis in the PRAAT software (autocorrelation method) to obtain mean fundamental frequency (F0) and the first four formant frequencies (F1, F2, F3 and F4). In reviewing individual scenarios, each leader's voice was analysed during a non-stressful portion of the scenario (the WHO sign-out procedure) and compared with their voice during a stressful portion (responding to deteriorating oxygen saturations in the manikin). Results: The mean unstressed F0 for the male voice was 161.4 Hz and for the female voice was 217.9 Hz. The mean fundamental frequency of speech in the ORL fellow (lead surgeon) group increased by 34.5 Hz between the scenario's baseline and stressful portions. This was significantly different from the mean change of −0.5 Hz noted in the attending group (p=0.01). No changes were seen in F1, F2, F3 or F4. Conclusions: This study demonstrates a method of acoustic analysis of the voices of participants taking part in medical simulations. It suggests acoustic analysis of participants may offer a simple, non-invasive, non-intrusive adjunct in evaluating and titrating the stress response during simulation.
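The acoustic measures described here (mean F0 from an autocorrelation pitch track, plus the first formants) can be reproduced in Python with the praat-parselmouth wrapper around Praat, assuming that package is installed. The WAV path below is a placeholder, default analysis settings are used, and this is a sketch of the general measurement rather than the study's exact parameters.

```python
# Sketch of extracting mean F0 and formant frequencies from a recording,
# assuming the praat-parselmouth package; the WAV path is a placeholder.
import numpy as np
import parselmouth

sound = parselmouth.Sound("leader_speech.wav")   # hypothetical recording

# Autocorrelation-based pitch track; keep only voiced frames (F0 > 0)
pitch = sound.to_pitch()                          # default 75-600 Hz range
f0 = pitch.selected_array["frequency"]
mean_f0 = f0[f0 > 0].mean()

# Burg formant analysis, sampled at the times of the voiced pitch frames
formant = sound.to_formant_burg()                 # default settings
times = pitch.xs()[f0 > 0]
means = [np.nanmean([formant.get_value_at_time(n, t) for t in times])
         for n in (1, 2, 3, 4)]

print(f"mean F0: {mean_f0:.1f} Hz")
print("F1-F4 (Hz):", [round(m) for m in means])
```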
Collapse
|
48
|
Hobaiter C. A Very Long Look Back at Language Development. MINNESOTA SYMPOSIA ON CHILD PSYCHOLOGY 2021. [DOI: 10.1002/9781119684527.ch1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
49
|
Marin Vargas A, Cominelli L, Dell’Orletta F, Scilingo EP. Verbal Communication in Robotics: A Study on Salient Terms, Research Fields and Trends in the Last Decades Based on a Computational Linguistic Analysis. FRONTIERS IN COMPUTER SCIENCE 2021. [DOI: 10.3389/fcomp.2020.591164] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023] Open
Abstract
Verbal communication is an expanding field in robotics, showing significant growth in both industrial and research settings. The application of verbal communication in robotics aims to reach a natural, human-like interaction with robots. In this study, we investigated how salient terms related to verbal communication in robotics have evolved over the years, what topics recur in the related literature, and what their trends are. The study is based on a computational linguistic analysis conducted on a database of 7,435 scientific publications over the last two decades. This comprehensive dataset was extracted from the Scopus database using specific keywords. Our results show how relevant terms of verbal communication evolved, which are the main coherent topics, and how they have changed over the years. We highlight positive and negative trends for the most coherent topics and the distribution over the years for the most significant ones. In particular, verbal communication proved highly relevant for social robotics. Potentially, achieving natural verbal communication with a robot can have a great impact on the scientific, societal, and economic role of robotics in the future.
Collapse
|
50
|
Direito B, Ramos M, Pereira J, Sayal A, Sousa T, Castelo-Branco M. Directly Exploring the Neural Correlates of Feedback-Related Reward Saliency and Valence During Real-Time fMRI-Based Neurofeedback. Front Hum Neurosci 2021; 14:578119. [PMID: 33613202 PMCID: PMC7893090 DOI: 10.3389/fnhum.2020.578119] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Accepted: 12/28/2020] [Indexed: 01/04/2023] Open
Abstract
Introduction: The potential therapeutic efficacy of real-time fMRI Neurofeedback has received increasing attention in a variety of psychological and neurological disorders and as a tool to probe cognition. Despite its growing popularity, the success rate varies significantly, and the underlying neural mechanisms are still a matter of debate. The question whether an individually tailored framework positively influences neurofeedback success remains largely unexplored. Methods: To address this question, participants were trained to modulate the activity of a target brain region, the visual motion area hMT+/V5, based on the performance of three imagery tasks with increasing complexity: imagery of a static dot, imagery of a moving dot with two and with four opposite directions. Participants received auditory feedback in the form of vocalizations with either negative, neutral or positive valence. The modulation thresholds were defined for each participant according to the maximum BOLD signal change of their target region during the localizer run. Results: We found that 4 out of 10 participants were able to modulate brain activity in this region-of-interest during neurofeedback training. This rate of success (40%) is consistent with the neurofeedback literature. Whole-brain analysis revealed the recruitment of specific cortical regions involved in cognitive control, reward monitoring, and feedback processing during neurofeedback training. Individually tailored feedback thresholds did not correlate with the success level. We found region-dependent neuromodulation profiles associated with task complexity and feedback valence. Discussion: Findings support the strategic role of task complexity and feedback valence on the modulation of the network nodes involved in monitoring and feedback control, key variables in neurofeedback frameworks optimization. Considering the elaborate design, the small sample size here tested (N = 10) impairs external validity in comparison to our previous studies. Future work will address this limitation. Ultimately, our results contribute to the discussion of individually tailored solutions, and justify further investigation concerning volitional control over brain activity.
Collapse
Affiliation(s)
- Bruno Direito
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Coimbra, Portugal
- Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal
| | - Manuel Ramos
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Coimbra, Portugal
| | - João Pereira
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Coimbra, Portugal
- Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal
| | - Alexandre Sayal
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Coimbra, Portugal
- Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal
- Siemens Healthineers, Lisbon, Portugal
| | - Teresa Sousa
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Coimbra, Portugal
- Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal
| | - Miguel Castelo-Branco
- Coimbra Institute for Biomedical Imaging and Translational Research (CIBIT), University of Coimbra, Coimbra, Portugal
- Institute of Nuclear Sciences Applied to Health (ICNAS), University of Coimbra, Coimbra, Portugal
- Faculty of Medicine, University of Coimbra, Coimbra, Portugal
| |
Collapse
|