1
Daikoku T, Horii T, Yamawaki S. Body maps of sound pitch and relevant individual differences in alexithymic trait and depressive state. BMC Psychol 2025; 13:547. PMID: 40405276; PMCID: PMC12101003; DOI: 10.1186/s40359-025-02900-z.
Abstract
Sound perception extends beyond the boundaries of auditory sensation, encompassing an engagement with the human body. In this study, we examined the relationship between our perception of sound pitch and our bodily sensations, while also exploring the role of emotions in shaping this cross-modal correspondence. We also compared the topography of pitch-triggered body sensations between depressive and non-depressive groups, and between alexithymic and non-alexithymic groups. Further, we examined their associations with anxiety. Our findings reveal that individuals with alexithymic traits and depressive states experience less localized body sensations in response to sound pitch, accompanied by heightened anxiety and negative emotions. These findings suggest that diffuse bodily sensations in response to sound may be associated with heightened feelings of anxiety. Monitoring pitch-triggered body sensations could therefore serve as a potential indicator of emotional tendencies linked to disorders such as depression and alexithymia. Our study sheds light on the importance of bodily sensation in response to sounds, a phenomenon that may be mediated by interoception. This research enhances our understanding of the intricate relationship between sound, emotions, and the human body, offering insights for potential interventions in emotional disorders.
Affiliation(s)
- Tatsuya Daikoku
- Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
- Centre for Neuroscience in Education, University of Cambridge, Cambridge, UK
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Takato Horii
- Graduate School of Engineering Science, Osaka University, Osaka, Japan
- Shigeto Yamawaki
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
2
Li MG, Olsen KN, Thompson WF. Cross-Cultural Biases of Emotion Perception in Music. Brain Sci 2025; 15:477. PMID: 40426648; PMCID: PMC12110013; DOI: 10.3390/brainsci15050477.
Abstract
Objectives: Emotion perception in music is shaped by cultural background, yet the extent of cultural biases remains unclear. This study investigated how Western listeners perceive emotion in music across cultures, focusing on the accuracy and intensity of emotion recognition and the musical features that predict emotion perception. Methods: White-European (Western) listeners from the UK, USA, New Zealand, and Australia (N = 100) listened to 48 ten-second excerpts of Western classical and Chinese traditional bowed-string music that were validated by experts to convey happiness, sadness, agitation, and calmness. After each excerpt, participants rated the familiarity, enjoyment, and perceived intensity of the four emotions. Musical features were computationally extracted for regression analyses. Results: Western listeners experienced Western classical music as more familiar and enjoyable than Chinese music. Happiness and sadness were recognised more accurately in Western classical music, whereas agitation was more accurately identified in Chinese music. The perceived intensity of happiness and sadness was greater for Western classical music; conversely, the perceived intensity of agitation was greater for Chinese music. Furthermore, emotion perception was influenced by both culture-shared (e.g., timbre) and culture-specific (e.g., dynamics) musical features. Conclusions: Our findings reveal clear cultural biases in the way individuals perceive and classify music, highlighting how these biases are shaped by the interaction between cultural familiarity and the emotional and structural qualities of the music. We discuss the possibility that purposeful engagement with music from diverse cultural traditions, especially in educational and therapeutic settings, may cultivate intercultural empathy and an appreciation of the values and aesthetics of other cultures.
Affiliation(s)
- Marjorie G. Li
- School of Psychological Sciences, Macquarie University, Macquarie Park, NSW 2109, Australia
- Kirk N. Olsen
- Australian Institute of Health Innovation, Macquarie University, Macquarie Park, NSW 2109, Australia
- William Forde Thompson
- School of Psychological Sciences, Macquarie University, Macquarie Park, NSW 2109, Australia
- Faculty of Society and Design, Bond University, Gold Coast, QLD 4229, Australia
3
Tsuchiya N, Bruza P, Yamada M, Saigo H, Pothos EM. Quantum-like Qualia hypothesis: from quantum cognition to quantum perception. Front Psychol 2025; 15:1406459. PMID: 40322731; PMCID: PMC12046633; DOI: 10.3389/fpsyg.2024.1406459.
Abstract
To arbitrate theories of consciousness, scientists need to understand the mathematical structure of the quality of consciousness, or qualia. The dominant view regards qualia as points in a dimensional space. This view implicitly assumes that qualia can be measured without any effect on them. This contrasts with intuitions and empirical findings showing that, by means of internal attention, qualia can change when they are measured. What is a proper mathematical structure for entities that are affected by the act of measurement? Here we propose the mathematical structure used in quantum theory, in which we consider qualia as "observables" (i.e., entities that can, in principle, be observed), sensory inputs and internal attention as "states" that specify the context in which a measurement takes place, and "measurement outcomes" as the probabilities that qualia observables take particular values. Based on this mathematical structure, the Quantum-like Qualia (QQ) hypothesis proposes that qualia observables interact with the world, as if through an interface of sensory inputs and internal attention. We argue that this qualia-interface-world scheme has the same mathematical structure as observables-states-environment in quantum theory. Moreover, within this structure, the concept of a "measurement instrument" in quantum theory can precisely model how measurements affect qualia observables and states. We argue that QQ naturally explains known properties of qualia and predicts that qualia are sometimes indeterminate. Such predictions can be empirically tested via the presence of order effects or violations of Bell inequalities. Confirmation of such predictions substantiates our overarching claim that the mathematical structure of QQ will offer novel insights into the nature of consciousness.
Affiliation(s)
- Naotsugu Tsuchiya
- Faculty of Medicine, Nursing, and Health Sciences, School of Psychological Sciences, Turner Institute for Brain and Mental Health, Monash University, Melbourne, VIC, Australia
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Suita-shi, Osaka, Japan
- Laboratory of Qualia Structure, ATR Computational Neuroscience Laboratories, Kyoto, Japan
- Peter Bruza
- School of Information Systems, Queensland University of Technology, Brisbane, QLD, Australia
- Makiko Yamada
- National Institutes for Quantum and Radiological Science and Technology, Chiba, Japan
- Hayato Saigo
- Nagahama Institute of Bio-Science and Technology, Nagahama, Japan
- Emmanuel M. Pothos
- Department of Psychology, City, University of London, London, United Kingdom
4
Bignardi G, Wesseldijk LW, Mas-Herrero E, Zatorre RJ, Ullén F, Fisher SE, Mosing MA. Twin modelling reveals partly distinct genetic pathways to music enjoyment. Nat Commun 2025; 16:2904. PMID: 40133299; PMCID: PMC11937235; DOI: 10.1038/s41467-025-58123-8.
Abstract
Humans engage with music for various reasons that range from emotional regulation and relaxation to social bonding. While there are large inter-individual differences in how much humans enjoy music, little is known about the origins of those differences. Here, we disentangle the genetic factors underlying such variation. We collect data on several facets of music reward sensitivity, as measured by the Barcelona Music Reward Questionnaire, plus music perceptual abilities and general reward sensitivity from a large sample of Swedish twins (N = 9169; 2305 complete pairs). We estimate that genetic effects contribute up to 54% of the variability in music reward sensitivity, with 70% of these effects being independent of music perceptual abilities and general reward sensitivity. Furthermore, multivariate analyses show that genetic and environmental influences on the different facets of music reward sensitivity are partly distinct, uncovering distinct pathways to music enjoyment and different patterns of genetic associations with objectively assessed music perceptual abilities. These results paint a complex picture in which partially distinct sources of variation contribute to different aspects of musical enjoyment.
Affiliation(s)
- Giacomo Bignardi
- Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Max Planck School of Cognition, Leipzig, Germany
- Laura W Wesseldijk
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Department of Psychiatry, Amsterdam UMC, University of Amsterdam, Amsterdam, The Netherlands
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Ernest Mas-Herrero
- Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Barcelona, Spain
- Institute of Neurosciences, Universitat de Barcelona, Barcelona, Spain
- Cognition and Brain Plasticity Group, Institut d'Investigació Biomèdica de Bellvitge (IDIBELL), Hospitalet de Llobregat, Barcelona, Spain
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Fredrik Ullén
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Simon E Fisher
- Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Miriam A Mosing
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Department of Cognitive Neuropsychology, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Solna, Sweden
- Melbourne School of Psychological Sciences, Faculty of Medicine, Dentistry, and Health Sciences, University of Melbourne, Melbourne, VIC, Australia
5
Eerola T, Saari P. What emotions does music express? Structure of affect terms in music using iterative crowdsourcing paradigm. PLoS One 2025; 20:e0313502. PMID: 39841646; PMCID: PMC11753638; DOI: 10.1371/journal.pone.0313502.
Abstract
Music is assumed to express a wide range of emotions. The vocabulary and structure of affects are typically explored outside the context in which music is experienced, leading to abstract notions about what affects music may express. In a series of three experiments utilising three separate and iterative association tasks, contextualised with typical activities associated with specific music and affect terms, we identified plausible affect terms and structures to capture the wide range of emotions expressed by music. The first experiment produced a list of frequently nominated affect terms (88 out of 647 candidates), and the second experiment established and confirmed multiple factor structures of 21, 14, and 7 dimensions. The third experiment compared the terms with external datasets on discrete emotions and emotion dimensions, which verified the 7-factor structure and identified a compact 4-factor structure. These structures of affects expressed by music did not conform to music-induced emotion structures, nor could they be explained by basic emotions or the affective circumplex. The established affect structures were largely positive and contained concepts such as "romantic" and "free", and terms such as "in love", "dreamy", and "festive" that have rarely featured in past research.
Affiliation(s)
- Tuomas Eerola
- Department of Music, Durham University, Durham, United Kingdom
- Pasi Saari
- Department of Music, Arts and Culture, University of Jyväskylä, Jyväskylä, Finland
6
Keltner D, Stamkou E. Possible Worlds Theory: How the Imagination Transcends and Recreates Reality. Annu Rev Psychol 2025; 76:329-358. PMID: 39476410; DOI: 10.1146/annurev-psych-080123-102254.
Abstract
The imagination is central to human social life but undervalued worldwide and underexplored in psychology. Here, we offer Possible Worlds Theory as a synthetic theory of the imagination. We first define the imagination, mapping the mental states it touches, from dreams and hallucinations to satire and fiction. The conditions that prompt people to imagine range from trauma to physical and social deprivation, and they challenge the sense of reality, stirring a need to create possible worlds. We theorize about four cognitive operations underlying the structure of the mental states of the imagination. We then show how people embody the imagination in social behaviors such as pretense and ritual, which give rise to experiences of a special class of feelings defined by their freedom from reality. We extend Possible Worlds Theory to four domains (play, spirituality, morality, and art) and show how in flights of the imagination people create new social realities shared with others.
Affiliation(s)
- Dacher Keltner
- Department of Psychology, University of California, Berkeley, California, USA
- Eftychia Stamkou
- Department of Psychology, University of Amsterdam, Amsterdam, The Netherlands
7
Goldy SP, Hendricks PS, Keltner D, Yaden DB. Considering distinct positive emotions in psychedelic science. Int Rev Psychiatry 2024; 36:908-919. PMID: 39980212; DOI: 10.1080/09540261.2024.2394221.
Abstract
In this review, we discuss psychedelics' acute subjective and persisting therapeutic effects, outline the science of positive emotions, and highlight the value in considering distinct positive emotions in psychedelic science. Psychedelics produce a wide variety of acute subjective effects (i.e. the 'trip'), including positive emotions and affective states such as awe and joy. However, despite a rich literature on distinct emotions and their different correlates and sequelae, distinct emotions in psychedelic science remain understudied. Insofar as psychedelics' acute subjective effects may play a role in their downstream therapeutic effects (e.g. decreased depression, anxiety, and substance misuse), considering the role of distinct positive emotions in psychedelic experiences has the potential to yield more precise statements about psychedelic-related subjective processes and outcomes. We propose here that understanding the role of positive emotions within the context of psychedelic experiences could help elucidate the connection between psychedelics' acute subjective effects and therapeutic outcomes.
Affiliation(s)
- Sean P Goldy
- Center for Psychedelic and Consciousness Research, Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Peter S Hendricks
- Department of Psychiatry and Behavioral Neurobiology, University of Alabama School of Medicine, Birmingham, AL, USA
- Dacher Keltner
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
- David B Yaden
- Center for Psychedelic and Consciousness Research, Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
8
Pring EX, Olsen KN, Mobbs AED, Thompson WF. Music communicates social emotions: Evidence from 750 music excerpts. Sci Rep 2024; 14:27766. PMID: 39532962; PMCID: PMC11557968; DOI: 10.1038/s41598-024-78156-1.
Abstract
Humans perceive a range of basic emotional connotations from music, such as joy, sadness, and fear, which can be decoded from structural characteristics of music, such as rhythm, harmony, and timbre. However, despite theory and evidence that music has multiple social functions, little research has examined whether music conveys emotions specifically associated with social status and social connection. This investigation aimed to determine whether the social emotions of dominance and affiliation are perceived in music and whether structural features of music predict social emotions, just as they predict basic emotions. Participants (N = 1513) listened to subsets of 750 music excerpts and provided ratings of energy arousal, tension arousal, valence, dominance, and affiliation. Ratings were modelled based on ten structural features of music. Dominance and affiliation were readily perceived in music and predicted by structural features including rhythm, harmony, dynamics, and timbre. In turn, energy arousal, tension arousal and valence were also predicted by musical structure. We discuss the results in view of current models of music and emotion and propose research to illuminate the significance of social emotions in music.
Affiliation(s)
- Elliot X Pring
- School of Psychological Sciences, Macquarie University, Sydney, Australia
- Kirk N Olsen
- Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
- Anthony E D Mobbs
- School of Psychological Sciences, Macquarie University, Sydney, Australia
- William Forde Thompson
- School of Psychological Sciences, Macquarie University, Sydney, Australia
- Faculty of Society and Design, Bond University, Gold Coast, Australia
9
Marinelli L, Lucht P, Saitis C. A multimodal understanding of the role of sound and music in gendered toy marketing. PLoS One 2024; 19:e0311876. PMID: 39504306; PMCID: PMC11540170; DOI: 10.1371/journal.pone.0311876.
Abstract
Literature in music theory and psychology shows that, even in isolation, musical sounds can reliably encode gender-loaded messages. Musical material can be imbued with many ideological dimensions, and gender is just one of them. Nonetheless, studies of the gendering of music within multimodal communicative events are sparse and lack an encompassing theoretical framework. The present study addresses this gap by employing a critical quantitative analysis of music in gendered toy marketing, integrating a content analytical approach with multimodal affective and music-focused perceptual responses. Ratings were collected on a set of 606 commercials spanning a ten-year time frame, and strong gender polarization was observed in nearly all of the collected variables. Gendered music styles in toy commercials exhibit synergistic design choices, as music in masculine-targeted adverts was substantially more abrasive (louder, more inharmonious, and more distorted) than in feminine-targeted ones. Thus, toy advertising music appeared deliberately and consistently in line with traditional gender norms. In addition, music perceptual scales and voice-related content analytical variables explain the heavily polarized affective ratings quite well. This study presents an empirical understanding of the gendering of music as constructed within multimodal discourse, reiterating the importance of the sociocultural underpinnings of music cognition. We provide a public repository with all code and data necessary to reproduce the results of this study at github.com/marinelliluca/music-role-gender-marketing.
Affiliation(s)
- Luca Marinelli
- Centre for Digital Music, Queen Mary University of London, London, United Kingdom
- Petra Lucht
- Center for Interdisciplinary Women's and Gender Studies, Technical University of Berlin, Berlin, Germany
- Charalampos Saitis
- Centre for Digital Music, Queen Mary University of London, London, United Kingdom
10
Stamkou E, Keltner D, Corona R, Aksoy E, Cowen AS. Emotional palette: a computational mapping of aesthetic experiences evoked by visual art. Sci Rep 2024; 14:19932. PMID: 39198545; PMCID: PMC11358466; DOI: 10.1038/s41598-024-69686-9.
Abstract
Despite the evolutionary history and cultural significance of visual art, the structure of aesthetic experiences it evokes has only attracted recent scientific attention. What kinds of experience does visual art evoke? Guided by Semantic Space Theory, we identify the concepts that most precisely describe people's aesthetic experiences using new computational techniques. Participants viewed 1457 artworks sampled from diverse cultural and historical traditions and reported on the emotions they felt and their perceived artwork qualities. Results show that aesthetic experiences are high-dimensional, comprising 25 categories of feeling states. Extending well beyond hedonism and broad evaluative judgments (e.g., pleasant/unpleasant), aesthetic experiences involve emotions of daily social living (e.g., "sad", "joy"), the imagination (e.g., "psychedelic", "mysterious"), profundity (e.g., "disgust", "awe"), and perceptual qualities attributed to the artwork (e.g., "whimsical", "disorienting"). Aesthetic emotions and perceptual qualities jointly predict viewers' liking of the artworks, indicating that we conceptualize aesthetic experiences in terms of the emotions we feel but also the qualities we perceive in the artwork. Aesthetic experiences are often mixed and lie along continuous gradients between categories rather than within discrete clusters. Our collection of artworks is visualized within an interactive map (https://barradeau.com/2021/emotions-map/), revealing the high-dimensional space of aesthetic experiences associated with visual art.
Affiliation(s)
- Eftychia Stamkou
- Department of Psychology, University of Amsterdam, 1001 NK, Amsterdam, The Netherlands
- Dacher Keltner
- Department of Psychology, University of California Berkeley, Berkeley, CA, 94720, USA
- Rebecca Corona
- Department of Psychology, University of California Berkeley, Berkeley, CA, 94720, USA
- Eda Aksoy
- Google Arts and Culture, 75009, Paris, France
- Alan S Cowen
- Department of Psychology, University of California Berkeley, Berkeley, CA, 94720, USA
- Hume AI, New York, NY, 10010, USA
11
Qiu L, Wan X. Nature's beauty versus urban bustle: Chinese folk music influences food choices by inducing mental imagery of different scenes. Appetite 2024; 199:107507. PMID: 38768925; DOI: 10.1016/j.appet.2024.107507.
Abstract
Previous research has demonstrated that music can impact people's food choices by triggering emotional states. We report two virtual reality (VR) experiments designed to examine how Chinese folk music influences people's food choices by inducing mental imagery of different scenes. In both experiments, young healthy Chinese participants were asked to select three dishes from an assortment of two meat and two vegetable dishes while listening to Chinese folk music that could elicit mental imagery of nature or urban scenes. The results of Experiment 1 revealed that they chose vegetable-forward meals more frequently while listening to Chinese folk music eliciting mental imagery of nature versus urban scenes. In Experiment 2, the participants were randomly divided into three groups, in which the prevalence of their mental imagery was enhanced, moderately suppressed, or strongly suppressed by performing different tasks while listening to the music pieces. We replicated the results of Experiment 1 when the participants' mental imagery was enhanced, whereas no such effect was observed when the participants' mental imagery was moderately or strongly suppressed. Collectively, these findings suggest that music may influence the food choices people make in virtual food choice tasks by inducing mental imagery, which provides insights into utilizing environmental cues to promote healthier food choices.
Affiliation(s)
- Linbo Qiu
- Department of Psychological and Cognitive Sciences, Tsinghua University, Beijing, China
- Xiaoang Wan
- Department of Psychological and Cognitive Sciences, Tsinghua University, Beijing, China
12
Cowen AS, Brooks JA, Prasad G, Tanaka M, Kamitani Y, Kirilyuk V, Somandepalli K, Jou B, Schroff F, Adam H, Sauter D, Fang X, Manokara K, Tzirakis P, Oh M, Keltner D. How emotion is experienced and expressed in multiple cultures: a large-scale experiment across North America, Europe, and Japan. Front Psychol 2024; 15:1350631. PMID: 38966733; PMCID: PMC11223574; DOI: 10.3389/fpsyg.2024.1350631.
Abstract
Core to understanding emotion are subjective experiences and their expression in facial behavior. Past studies have largely focused on six emotions and prototypical facial poses, reflecting limitations in scale and narrow assumptions about the variety of emotions and their patterns of expression. We examine 45,231 facial reactions to 2,185 evocative videos, largely in North America, Europe, and Japan, collecting participants' self-reported experiences in English or Japanese and manual and automated annotations of facial movement. Guided by Semantic Space Theory, we uncover 21 dimensions of emotion in the self-reported experiences of participants in Japan, the United States, and Western Europe, and considerable cross-cultural similarities in experience. Facial expressions predict at least 12 dimensions of experience, despite massive individual differences in experience. We find considerable cross-cultural convergence in the facial actions involved in the expression of emotion, as well as culture-specific display tendencies: many facial movements differ in intensity in Japan compared to the U.S./Canada and Europe but represent similar experiences. These results quantitatively detail that people in dramatically different cultures experience and express emotion in a high-dimensional, categorical, and similar but complex fashion.
Affiliation(s)
- Alan S. Cowen
- Hume AI, New York, NY, United States
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Jeffrey A. Brooks
- Hume AI, New York, NY, United States
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Misato Tanaka
- Advanced Telecommunications Research Institute, Kyoto, Japan
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Yukiyasu Kamitani
- Advanced Telecommunications Research Institute, Kyoto, Japan
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Krishna Somandepalli
- Google Research, Mountain View, CA, United States
- Department of Electrical Engineering, University of Southern California, Los Angeles, CA, United States
- Brendan Jou
- Google Research, Mountain View, CA, United States
- Hartwig Adam
- Google Research, Mountain View, CA, United States
- Disa Sauter
- Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands
- Xia Fang
- Zhejiang University, Zhejiang, China
- Kunalan Manokara
- Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands
- Moses Oh
- Hume AI, New York, NY, United States
- Dacher Keltner
- Hume AI, New York, NY, United States
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
13
Hannon E, Snyder J. What rhythm production can tell us about culture. Trends Cogn Sci 2024; 28:487-488. PMID: 38664158; DOI: 10.1016/j.tics.2024.04.004.
Abstract
Jacoby and colleagues used an iterative rhythm reproduction paradigm with listeners from around the world to provide evidence for both rhythm universals (simple-integer ratios 1:1 and 2:1) and cross-cultural variation for specific rhythmic categories that can be linked to local music traditions in different regions of the world.
14
Alberhasky M, Durkee PK. Songs tell a story: The Arc of narrative for music. PLoS One 2024; 19:e0303188. [PMID: 38753825 PMCID: PMC11098490 DOI: 10.1371/journal.pone.0303188] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Accepted: 04/19/2024] [Indexed: 05/18/2024] Open
Abstract
Research suggests that a core lexical structure characterized by words that define plot staging, plot progression, and cognitive tension underlies written narratives. Here, we investigate the extent to which song lyrics follow this underlying narrative structure. Using a text analytic approach and two publicly available datasets of song lyrics including a larger dataset (N = 12,280) and a smaller dataset of greatest hits (N = 2,823), we find that music lyrics tend to exhibit a core Arc of Narrative structure: setting the stage at the beginning, progressing the plot steadily until the end of the song, and peaking in cognitive tension in the middle. We also observe differences in narrative structure based on musical genre, suggesting different genres set the scene in greater detail (Country, Rap) or progress the plot faster and have a higher rate of internal conflict (Pop). These findings add to the evidence that storytelling exhibits predictable language patterns and that storytelling is evident in music lyrics.
Affiliation(s)
- Max Alberhasky
- Department of Marketing, California State University Long Beach, Long Beach, CA, United States of America
- Patrick K. Durkee
- Department of Psychology, California State University Fresno, Fresno, CA, United States of America
15
Abrams EB, Namballa R, He R, Poeppel D, Ripollés P. Elevator music as a tool for the quantitative characterization of reward. Ann N Y Acad Sci 2024; 1535:121-136. [PMID: 38566486 DOI: 10.1111/nyas.15131] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/04/2024]
Abstract
While certain musical genres and songs are widely popular, there is still large variability in the music that individuals find rewarding or emotional, even among those with a similar musical enculturation. Interestingly, there is one Western genre that is intended to attract minimal attention and evoke a mild emotional response: elevator music. In a series of behavioral experiments, we show that elevator music consistently elicits low pleasure and surprise. Participants reported elevator music as being less pleasurable than music from popular genres, even when participants did not regularly listen to the comparison genre. Participants reported elevator music to be familiar even when they had not explicitly heard the presented song before. Computational and behavioral measures of surprisal showed that elevator music was less surprising, and thus more predictable, than other well-known genres. Elevator music covers of popular songs were rated as less pleasurable, surprising, and arousing than their original counterparts. Finally, we used elevator music as a control for self-selected rewarding songs in a proof-of-concept physiological (electrodermal activity and piloerection) experiment. Our results suggest that elevator music elicits low emotional responses consistently across Western music listeners, making it a unique control stimulus for studying musical novelty, pleasure, and surprise.
Affiliation(s)
- Ellie Bean Abrams
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
- Richa Namballa
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
- Richard He
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
- David Poeppel
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Pablo Ripollés
- Department of Psychology, New York University, New York, New York, USA
- Center for Language, Music, and Emotion (CLaME), New York University, New York, New York, USA
- Music and Audio Research Laboratory (MARL), New York University, New York, New York, USA
16
Wu D, Jia X, Rao W, Dou W, Li Y, Li B. Construction of a Chinese traditional instrumental music dataset: A validated set of naturalistic affective music excerpts. Behav Res Methods 2024; 56:3757-3778. [PMID: 38702502 PMCID: PMC11133124 DOI: 10.3758/s13428-024-02411-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/22/2024] [Indexed: 05/06/2024]
Abstract
Music is omnipresent among human cultures and moves us both physically and emotionally. The perception of emotions in music is influenced by both psychophysical and cultural factors. Chinese traditional instrumental music differs significantly from Western music in cultural origin and music elements. However, previous studies on music emotion perception are based almost exclusively on Western music. Therefore, the construction of a dataset of Chinese traditional instrumental music is important for exploring the perception of music emotions in the context of Chinese culture. The present dataset included 273 10-second naturalistic music excerpts. We provided rating data for each excerpt on ten variables: familiarity, dimensional emotions (valence and arousal), and discrete emotions (anger, gentleness, happiness, peacefulness, sadness, solemnness, and transcendence). The excerpts were rated by a total of 168 participants on a seven-point Likert scale for the ten variables. Three labels for the excerpts were obtained: familiarity, discrete emotion, and cluster. Our dataset demonstrates good reliability, and we believe it could contribute to cross-cultural studies on emotional responses to music.
Affiliation(s)
- Di Wu
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China
- Xi Jia
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China
- Wenxin Rao
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Wenjie Dou
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China
- Yangping Li
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- School of Foreign Studies, Xi'an Jiaotong University, Xi'an, 710049, China
- Baoming Li
- Institute of Brain Science and Department of Physiology, School of Basic Medical Sciences, Hangzhou Normal University, Hangzhou, 311121, China
- Zhejiang Philosophy and Social Science Laboratory for Research in Early Development and Childcare, Hangzhou Normal University, Hangzhou, 311121, China
17
Strauss H, Vigl J, Jacobsen PO, Bayer M, Talamini F, Vigl W, Zangerle E, Zentner M. The Emotion-to-Music Mapping Atlas (EMMA): A systematically organized online database of emotionally evocative music excerpts. Behav Res Methods 2024; 56:3560-3577. [PMID: 38286947 PMCID: PMC11133078 DOI: 10.3758/s13428-024-02336-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/02/2024] [Indexed: 01/31/2024]
Abstract
Selecting appropriate musical stimuli to induce specific emotions represents a recurring challenge in music and emotion research. Most existing stimuli have been categorized according to taxonomies derived from general emotion models (e.g., basic emotions, affective circumplex), have been rated for perceived emotions, and are rarely defined in terms of interrater agreement. To redress these limitations, we present research that served in the development of a new interactive online database, including an initial set of 364 music excerpts from three different genres (classical, pop, and hip/hop) that were rated for felt emotion using the Geneva Emotion Music Scale (GEMS), a music-specific emotion scale. The sample comprised 517 English- and German-speaking participants and each excerpt was rated by an average of 28.76 participants (SD = 7.99). Data analyses focused on research questions that are of particular relevance for musical database development, notably the number of raters required to obtain stable estimates of emotional effects of music and the adequacy of the GEMS as a tool for describing music-evoked emotions across three prominent music genres. Overall, our findings suggest that 10-20 raters are sufficient to obtain stable estimates of emotional effects of music excerpts in most cases, and that the GEMS shows promise as a valid and comprehensive annotation tool for music databases.
Affiliation(s)
- Hannah Strauss
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria
- Julia Vigl
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria
- Peer-Ole Jacobsen
- Department of Computer Science, Universität Innsbruck, Innsbruck, Austria
- Martin Bayer
- Department of Computer Science, Universität Innsbruck, Innsbruck, Austria
- Francesca Talamini
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria
- Wolfgang Vigl
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria
- Eva Zangerle
- Department of Computer Science, Universität Innsbruck, Innsbruck, Austria
- Marcel Zentner
- Department of Psychology, University of Innsbruck, Universitätsstrasse 15, 6020, Innsbruck, Austria
18
Brooks JA, Kim L, Opara M, Keltner D, Fang X, Monroy M, Corona R, Tzirakis P, Baird A, Metrick J, Taddesse N, Zegeye K, Cowen AS. Deep learning reveals what facial expressions mean to people in different cultures. iScience 2024; 27:109175. [PMID: 38433918 PMCID: PMC10906517 DOI: 10.1016/j.isci.2024.109175] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2023] [Revised: 09/05/2023] [Accepted: 02/06/2024] [Indexed: 03/05/2024] Open
Abstract
Cross-cultural studies of the meaning of facial expressions have largely focused on judgments of small sets of stereotypical images by small numbers of people. Here, we used large-scale data collection and machine learning to map what facial expressions convey in six countries. Using a mimicry paradigm, 5,833 participants formed facial expressions found in 4,659 naturalistic images, resulting in 423,193 participant-generated facial expressions. In their own language, participants also rated each expression in terms of 48 emotions and mental states. A deep neural network tasked with predicting the culture-specific meanings people attributed to facial movements while ignoring physical appearance and context discovered 28 distinct dimensions of facial expression, with 21 dimensions showing strong evidence of universality and the remainder showing varying degrees of cultural specificity. These results capture the underlying dimensions of the meanings of facial expressions within and across cultures in unprecedented detail.
Affiliation(s)
- Jeffrey A. Brooks
- Research Division, Hume AI, New York, NY 10010, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Lauren Kim
- Research Division, Hume AI, New York, NY 10010, USA
- Dacher Keltner
- Research Division, Hume AI, New York, NY 10010, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Xia Fang
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, Zhejiang, China
- Maria Monroy
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Rebecca Corona
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Alice Baird
- Research Division, Hume AI, New York, NY 10010, USA
- Alan S. Cowen
- Research Division, Hume AI, New York, NY 10010, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
19
Zhang Z, Fort JM, Giménez Mateu L. Decoding emotional responses to AI-generated architectural imagery. Front Psychol 2024; 15:1348083. [PMID: 38533213 PMCID: PMC10963507 DOI: 10.3389/fpsyg.2024.1348083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2023] [Accepted: 02/29/2024] [Indexed: 03/28/2024] Open
Abstract
Introduction: The integration of AI in architectural design represents a significant shift toward creating emotionally resonant spaces. This research investigates AI's ability to evoke specific emotional responses through architectural imagery and examines the impact of professional training on emotional interpretation.
Methods: We utilized Midjourney AI software to generate images based on direct and metaphorical prompts across two architectural settings: home interiors and museum exteriors. A survey was designed to capture participants' emotional responses to these images, employing a scale that rated their immediate emotional reaction. The study involved 789 university students, categorized into architecture majors (Group A) and non-architecture majors (Group B), to explore differences in emotional perception attributable to educational background.
Results: Findings revealed that AI is particularly effective in depicting joy, especially in interior settings. However, it struggles to accurately convey negative emotions, indicating a gap in AI's emotional range. Architecture students exhibited a greater sensitivity to emotional nuances in the images compared to non-architecture students, suggesting that architectural training enhances emotional discernment. Notably, the study observed minimal differences in the perception of emotions between direct and metaphorical prompts among architecture students, indicating a consistent emotional interpretation across prompt types.
Conclusion: AI holds significant promise in creating spaces that resonate on an emotional level, particularly in conveying positive emotions like joy. The study contributes to the understanding of AI's role in architectural design, emphasizing the importance of emotional intelligence in creating spaces that reflect human experiences. Future research should focus on expanding AI's emotional range and further exploring the impact of architectural training on emotional perception.
Affiliation(s)
| | - Josep M. Fort
- Escola Tècnica Superior d'Arquitectura de Barcelona, Universitat Politècnica de Catalunya, Barcelona, Spain
| | | |
20
Curwen C, Timmers R, Schiavio A. Action, emotion, and music-colour synaesthesia: an examination of sensorimotor and emotional responses in synaesthetes and non-synaesthetes. Psychol Res 2024; 88:348-362. [PMID: 37453940 PMCID: PMC10857979 DOI: 10.1007/s00426-023-01856-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Accepted: 06/27/2023] [Indexed: 07/18/2023]
Abstract
Synaesthesia has been conceptualised as a joining of sensory experiences. Taking a holistic, embodied perspective, we investigate in this paper the role of action and emotion, testing hypotheses concerning whether (1) changes to action-related qualities of a musical stimulus affect the resulting synaesthetic experience; (2) a comparable relationship exists between music, sensorimotor and emotional responses in synaesthetes and the general population; and (3) sensorimotor responses are more strongly associated with synaesthesia than emotion. Twenty-nine synaesthetes and 33 non-synaesthetes listened to 12 musical excerpts performed on a musical instrument they had first-hand experience playing, an instrument never played before, and a deadpan performance generated by notation software, i.e., a performance without expression. They evaluated the intensity of their experience of the music using a list of dimensions that relate to sensorimotor, emotional or synaesthetic sensations. Results demonstrated that the intensity of listeners' responses was most strongly influenced by whether or not the music was performed by a human, more so than by familiarity with a particular instrument. Furthermore, our findings reveal a shared relationship between emotional and sensorimotor responses among both synaesthetes and non-synaesthetes. Yet it was sensorimotor intensity that was shown to be fundamentally associated with the intensity of the synaesthetic response. Overall, the research argues for, and gives first evidence of, a key role of action in shaping the experiences of music-colour synaesthesia.
Affiliation(s)
- Caroline Curwen
- Department of Music, The University of Sheffield, Jessop Building, 34 Leavygreave Road, Sheffield, S3 7RD, UK
- Renee Timmers
- Department of Music, The University of Sheffield, Jessop Building, 34 Leavygreave Road, Sheffield, S3 7RD, UK
- Andrea Schiavio
- School of Arts and Creative Technologies, University of York, Sally Baldwin Building D, York, YO10 5DD, UK
21
Putkinen V, Zhou X, Gan X, Yang L, Becker B, Sams M, Nummenmaa L. Bodily maps of musical sensations across cultures. Proc Natl Acad Sci U S A 2024; 121:e2308859121. [PMID: 38271338 PMCID: PMC10835118 DOI: 10.1073/pnas.2308859121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Accepted: 12/01/2023] [Indexed: 01/27/2024] Open
Abstract
Emotions, bodily sensations and movement are integral parts of musical experiences. Yet, it remains unknown i) whether emotional connotations and structural features of music elicit discrete bodily sensations and ii) whether these sensations are culturally consistent. We addressed these questions in a cross-cultural study with Western (European and North American, n = 903) and East Asian (Chinese, n = 1035) participants. We presented participants with silhouettes of human bodies and asked them to indicate the bodily regions whose activity they felt changing while listening to Western and Asian musical pieces with varying emotional and acoustic qualities. The resulting bodily sensation maps (BSMs) varied as a function of the emotional qualities of the songs, particularly in the limb, chest, and head regions. Music-induced emotions and corresponding BSMs were replicable across Western and East Asian subjects. The BSMs clustered similarly across cultures, and cluster structures were similar for BSMs and self-reports of emotional experience. The acoustic and structural features of music were consistently associated with the emotion ratings and music-induced bodily sensations across cultures. These results highlight the importance of subjective bodily experience in music-induced emotions and demonstrate consistent associations between musical features, music-induced emotions, and bodily sensations across distant cultures.
Affiliation(s)
- Vesa Putkinen
- Turku PET Centre, University of Turku, Turku 20520, Finland
- Turku Institute for Advanced Studies, Department of Psychology, University of Turku, Turku 20014, Finland
- Xinqi Zhou
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu 610066, China
- Xianyang Gan
- The Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 610072, China
- MOE Key Laboratory for Neuroinformation, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, China
- Linyu Yang
- College of Mathematics, Sichuan University, Chengdu 610064, China
- Benjamin Becker
- State Key Laboratory of Brain and Cognitive Sciences, The University of Hong Kong, Hong Kong, China
- Department of Psychology, The University of Hong Kong, Hong Kong, China
- Mikko Sams
- Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Espoo 00076, Finland
- Lauri Nummenmaa
- Turku PET Centre, University of Turku, Turku 20520, Finland
- Department of Psychology, University of Turku, Turku 20520, Finland
22
Monno Y, Nawa NE, Yamagishi N. Duration of mood effects following a Japanese version of the mood induction task. PLoS One 2024; 19:e0293871. [PMID: 38180997 PMCID: PMC10769078 DOI: 10.1371/journal.pone.0293871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2022] [Accepted: 10/23/2023] [Indexed: 01/07/2024] Open
Abstract
Researchers have employed a variety of methodologies to induce positive and negative mood states in study participants to investigate the influence that mood has on psychological, physiological, and cognitive processes both in health and illness. Here, we investigated the effectiveness and the duration of mood effects following the mood induction task (MIT), a protocol that combines mood-inducing sentences, auditory stimuli, and autobiographical memory recall in a cohort of healthy Japanese adult individuals. In Study 1, we translated and augmented the mood-inducing sentences originally proposed by Velten in 1968 and verified that people perceived the translations as being largely congruent with the valence of the original sentences. In Study 2, we developed a Japanese version of the mood induction task (J-MIT) and examined its effectiveness using an online implementation. Results based on data collected immediately after induction showed that the J-MIT was able to modulate the mood in the intended direction. However, mood effects were not observed during the subsequent performance of a cognitive task, the Tower of London task, suggesting that the effects did not persist long enough. Overall, the current results show that mood induction procedures such as the J-MIT can alter the mood of study participants in the short term; however, at the same time, they highlight the need to further examine how mood effects evolve and persist through time to better understand how mood induction protocols can be used to study affective processes more effectively.
Affiliation(s)
- Yasunaga Monno
- Research Organization of Open Innovation and Collaboration, Ritsumeikan University, Ibaraki, Osaka, Japan
- Center for Information and Neural Networks, Advanced ICT Research Institute, National Institute of Information and Communications Technology, Suita, Osaka, Japan
- Norberto Eiji Nawa
- Center for Information and Neural Networks, Advanced ICT Research Institute, National Institute of Information and Communications Technology, Suita, Osaka, Japan
- Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan
- Noriko Yamagishi
- Center for Information and Neural Networks, Advanced ICT Research Institute, National Institute of Information and Communications Technology, Suita, Osaka, Japan
- College of Global Liberal Arts, Ritsumeikan University, Ibaraki, Osaka, Japan
23
Parada-Cabaleiro E, Batliner A, Zentner M, Schedl M. Exploring emotions in Bach chorales: a multi-modal perceptual and data-driven study. R Soc Open Sci 2023; 10:230574. [PMID: 38126059 PMCID: PMC10731325 DOI: 10.1098/rsos.230574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/30/2023] [Accepted: 11/20/2023] [Indexed: 12/23/2023]
Abstract
The relationship between music and emotion has been addressed within several disciplines, from more historico-philosophical and anthropological ones, such as musicology and ethnomusicology, to others that are traditionally more empirical and technological, such as psychology and computer science. Yet, understanding the link between music and emotion is limited by the scarce interconnections between these disciplines. Trying to narrow this gap, this data-driven exploratory study aims at assessing the relationship between linguistic, symbolic and acoustic features (extracted from lyrics, music notation and audio recordings) and perception of emotion. Employing a listening experiment, statistical analysis and unsupervised machine learning, we investigate how a data-driven multi-modal approach can be used to explore the emotions conveyed by eight Bach chorales. Through a feature selection strategy based on a set of more than 300 Bach chorales and a transdisciplinary methodology integrating approaches from psychology, musicology and computer science, we aim to initiate an efficient dialogue between disciplines, able to promote a more integrative and holistic understanding of emotions in music.
Affiliation(s)
- Emilia Parada-Cabaleiro
- Institute of Computational Perception, Johannes Kepler University Linz, Linz, Austria
- Human-Centered AI Group, AI Laboratory, Linz Institute of Technology (LIT), Linz, Austria
- Department of Music Pedagogy, Nuremberg University of Music, Nuremberg, Germany
- Anton Batliner
- Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Augsburg, Germany
- Marcel Zentner
- Department of Psychology, University of Innsbruck, Innsbruck, Austria
- Markus Schedl
- Institute of Computational Perception, Johannes Kepler University Linz, Linz, Austria
- Human-Centered AI Group, AI Laboratory, Linz Institute of Technology (LIT), Linz, Austria
24
Hou Y, Ren Q, Zhang H, Mitchell A, Aletta F, Kang J, Botteldooren D. AI-based soundscape analysis: Jointly identifying sound sources and predicting annoyance. J Acoust Soc Am 2023; 154:3145-3157. [PMID: 37966335 DOI: 10.1121/10.0022408] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/22/2023] [Accepted: 10/31/2023] [Indexed: 11/16/2023]
Abstract
Soundscape studies typically attempt to capture the perception and understanding of sonic environments by surveying users. However, for long-term monitoring or assessing interventions, sound-signal-based approaches are required. To this end, most previous research focused on psycho-acoustic quantities or automatic sound recognition. Few attempts were made to include appraisal (e.g., in circumplex frameworks). This paper proposes an artificial intelligence (AI)-based dual-branch convolutional neural network with cross-attention-based fusion (DCNN-CaF) to analyze automatic soundscape characterization, including sound recognition and appraisal. Using the DeLTA dataset containing human-annotated sound source labels and perceived annoyance, the DCNN-CaF is proposed to perform sound source classification (SSC) and human-perceived annoyance rating prediction (ARP). Experimental findings indicate that (1) the proposed DCNN-CaF using loudness and Mel features outperforms the DCNN-CaF using only one of them. (2) The proposed DCNN-CaF with cross-attention fusion outperforms other typical AI-based models and soundscape-related traditional machine learning methods on the SSC and ARP tasks. (3) Correlation analysis reveals that the relationship between sound sources and annoyance is similar for humans and the proposed AI-based DCNN-CaF model. (4) Generalization tests show that the proposed model's ARP in the presence of model-unknown sound sources is consistent with expert expectations and can explain previous findings from the literature on soundscape augmentation.
Affiliation(s)
- Yuanbo Hou
- Wireless, Acoustics, Environmental, and Expert Systems Research Group, Department of Information Technology, Ghent University, Gent, 9052, Belgium
- Qiaoqiao Ren
- AI and Robotics, Internet Technology and Data Science Lab, Department of Electronics and Information Systems, Interuniversity Microelectronics Centre, Ghent University, Gent, 9052, Belgium
- Huizhong Zhang
- Institute for Environmental Design and Engineering, The Bartlett, University College London, London, WC1H 0NN, United Kingdom
- Andrew Mitchell
- Institute for Environmental Design and Engineering, The Bartlett, University College London, London, WC1H 0NN, United Kingdom
- Francesco Aletta
- Institute for Environmental Design and Engineering, The Bartlett, University College London, London, WC1H 0NN, United Kingdom
- Jian Kang
- Institute for Environmental Design and Engineering, The Bartlett, University College London, London, WC1H 0NN, United Kingdom
- Dick Botteldooren
- Wireless, Acoustics, Environmental, and Expert Systems Research Group, Department of Information Technology, Ghent University, Gent, 9052, Belgium
25
Korsmit IR, Montrey M, Wong-Min AYT, McAdams S. A comparison of dimensional and discrete models for the representation of perceived and induced affect in response to short musical sounds. Front Psychol 2023; 14:1287334. [PMID: 38023037 PMCID: PMC10644370 DOI: 10.3389/fpsyg.2023.1287334] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Accepted: 10/09/2023] [Indexed: 12/01/2023] Open
Abstract
Introduction: In musical affect research, there is considerable discussion on the best method to represent affective response. This discussion mainly revolves around the dimensional (valence, tension arousal, energy arousal) and discrete (anger, fear, sadness, happiness, tenderness) models of affect. Here, we compared these models' ability to capture self-reported affect in response to short, affectively ambiguous sounds.
Methods: In two online experiments (n1 = 263, n2 = 152), participants rated perceived and induced affect in response to single notes (Exp 1) and chromatic scales (Exp 2), which varied across instrument family and pitch register. Additionally, participants completed questionnaires measuring pre-existing mood, trait empathy, Big-Five personality, musical sophistication, and musical preferences.
Results: Rater consistency and agreement were high across all affect scales. Correlation and principal component analyses showed that two dimensions or two affect categories captured most of the variation in affective response. Canonical correlation and regression analyses also showed that energy arousal varied in a manner that was not captured by discrete affect ratings. Furthermore, all sources of individual differences were moderately correlated with all affect scales, particularly pre-existing mood and dimensional affect.
Discussion: We conclude that when it comes to single notes and chromatic scales, the dimensions of valence and energy arousal best capture the perceived and induced affective response to affectively ambiguous sounds, although the role of individual differences should also be considered.
Affiliation(s)
- Iza Ray Korsmit
- Music Research Department, Schulich School of Music, McGill University, Montreal, QC, Canada
- Marcel Montrey
- Department of Psychology, McGill University, Montreal, QC, Canada
- Stephen McAdams
- Music Research Department, Schulich School of Music, McGill University, Montreal, QC, Canada

26
Xiao X, Tan J, Liu X, Zheng M. The dual effect of background music on creativity: perspectives of music preference and cognitive interference. Front Psychol 2023; 14:1247133. [PMID: 37868605 PMCID: PMC10588669 DOI: 10.3389/fpsyg.2023.1247133]
Abstract
Music, an influential environmental factor, significantly shapes cognitive processing and everyday experiences, thus rendering its effects on creativity a dynamic topic within the field of cognitive science. However, debates continue about whether music bolsters, obstructs, or exerts a dual influence on individual creativity. Among the points of contention is the impact of contrasting musical emotions-both positive and negative-on creative tasks. In this study, we focused on traditional Chinese music, drawn from a culture known for its 'preference for sadness,' as our selected emotional stimulus and background music. This choice, underrepresented in previous research, was based on its uniqueness. We examined the effects of differing music genres (including vocal and instrumental), each characterized by a distinct emotional valence (positive or negative), on performance in the Alternative Uses Task (AUT). To conduct this study, we utilized an affective arousal paradigm, with a quiet background serving as a neutral control setting. A total of 114 participants were randomly assigned to three distinct groups after completing a music preference questionnaire: instrumental, vocal, and silent. Our findings showed that when compared to a quiet environment, both instrumental and vocal music as background stimuli significantly affected AUT performance. Notably, music with a negative emotional charge bolstered individual originality in creative performance. These results lend support to the dual role of background music in creativity, with instrumental music appearing to enhance creativity through factors such as emotional arousal, cognitive interference, music preference, and psychological restoration. This study challenges conventional understanding that only positive background music boosts creativity and provides empirical validation for the two-path model (positive and negative) of emotional influence on creativity.
Affiliation(s)
- Xinyao Xiao
- China Institute of Music Mental Health, Chongqing, China
- School of Music, Southwest University, Chongqing, China
- Junying Tan
- Guizhou University of Finance and Economics, Guiyang, China
- Xiaolin Liu
- China Institute of Music Mental Health, Chongqing, China
- School of Psychology, Southwest University, Chongqing, China
- Maoping Zheng
- China Institute of Music Mental Health, Chongqing, China
- School of Music, Southwest University, Chongqing, China

27
Yurdum L, Singh M, Glowacki L, Vardy T, Atkinson QD, Hilton CB, Sauter D, Krasnow MM, Mehr SA. Universal interpretations of vocal music. Proc Natl Acad Sci U S A 2023; 120:e2218593120. [PMID: 37676911 PMCID: PMC10500275 DOI: 10.1073/pnas.2218593120]
Abstract
Despite the variability of music across cultures, some types of human songs share acoustic characteristics. For example, dance songs tend to be loud and rhythmic, and lullabies tend to be quiet and melodious. Human perceptual sensitivity to the behavioral contexts of songs, based on these musical features, suggests that basic properties of music are mutually intelligible, independent of linguistic or cultural content. Whether these effects reflect universal interpretations of vocal music, however, is unclear because prior studies focus almost exclusively on English-speaking participants, a group that is not representative of humans. Here, we report shared intuitions concerning the behavioral contexts of unfamiliar songs produced in unfamiliar languages, in participants living in Internet-connected industrialized societies (n = 5,516 native speakers of 28 languages) or smaller-scale societies with limited access to global media (n = 116 native speakers of three non-English languages). Participants listened to songs randomly selected from a representative sample of human vocal music, originally used in four behavioral contexts, and rated the degree to which they believed the song was used for each context. Listeners in both industrialized and smaller-scale societies inferred the contexts of dance songs, lullabies, and healing songs, but not love songs. Within and across cohorts, inferences were mutually consistent. Further, increased linguistic or geographical proximity between listeners and singers only minimally increased the accuracy of the inferences. These results demonstrate that the behavioral contexts of three common forms of music are mutually intelligible cross-culturally and imply that musical diversity, shaped by cultural evolution, is nonetheless grounded in some universal perceptual phenomena.
Affiliation(s)
- Lidya Yurdum
- Child Study Center, Yale University, New Haven, CT 06520
- Department of Psychology, University of Amsterdam, Amsterdam 1018 WT, Netherlands
- Manvir Singh
- Department of Anthropology, University of California, Davis, Davis, CA 95616
- Luke Glowacki
- Department of Anthropology, Boston University, Boston, MA 02215
- Thomas Vardy
- School of Psychology, University of Auckland, Auckland 1010, New Zealand
- Disa Sauter
- Department of Psychology, University of Amsterdam, Amsterdam 1018 WT, Netherlands
- Max M. Krasnow
- Division of Continuing Education, Harvard University, Cambridge, MA 02138
- Samuel A. Mehr
- Child Study Center, Yale University, New Haven, CT 06520
- School of Psychology, University of Auckland, Auckland 1010, New Zealand

28
Wang X, Huang W. Determining the role of music attitude and its precursors in stimulating the psychological wellbeing of immigrants during COVID quarantine - a moderated mediation approach. Front Psychol 2023; 14:1121180. [PMID: 37519375 PMCID: PMC10382205 DOI: 10.3389/fpsyg.2023.1121180]
Abstract
Based on social cognitive theory (SCT), the purpose of this study is to examine the role of music attitude and its essential precursors in stimulating the psychological wellbeing of immigrants in isolation (quarantine) during the COVID pandemic. This study employed quantitative methodology; an online survey was administered to collect sufficient data from 300 immigrants who traveled to China during the pandemic. Data were collected from five centralized quarantine centers situated in different cities in China. Additionally, the valid data set was analyzed using structural equation modeling (SEM) via AMOS 24 and SPSS 24. The results indicate that potential predictors such as cognitive - music experience (MEX), environmental - social media peer influence (SPI), and cultural factors such as native music (NM) have a direct, significant, and positive effect on music attitude (MA), which further influences immigrants' psychological wellbeing (PW) during their quarantine period. Moreover, in the presence of the mediator (MA), the mediating relationships between MEX and PW, and NM and PW, are positive, significant, and regarded as partial mediation. However, the moderated mediation effects of music type (MT) on MEX-MA-PW and NM-MA-PW were found to be statistically not significant and unsupported. This study contributes to the literature on the effectiveness of individuals' music attitude and its associated outcomes, focusing on mental health care in lonely situations such as quarantine during the COVID pandemic. More importantly, this study has raised awareness about music, music attitude, and their beneficial outcomes, such as mental calmness and peacefulness for the general public, particularly during social distancing, isolation, and quarantine in the COVID pandemic situation.
Affiliation(s)
- Xiaokang Wang
- College of Music and Dance, Guizhou Minzu University, Guiyang, Guizhou, China

29
Singh M, Mehr SA. Universality, domain-specificity, and development of psychological responses to music. Nat Rev Psychol 2023; 2:333-346. [PMID: 38143935 PMCID: PMC10745197 DOI: 10.1038/s44159-023-00182-z]
Abstract
Humans can find music happy, sad, fearful, or spiritual. They can be soothed by it or urged to dance. Whether these psychological responses reflect cognitive adaptations that evolved expressly for responding to music is an ongoing topic of study. In this Review, we examine three features of music-related psychological responses that help to elucidate whether the underlying cognitive systems are specialized adaptations: universality, domain-specificity, and early expression. Focusing on emotional and behavioural responses, we find evidence that the relevant psychological mechanisms are universal and arise early in development. However, the existing evidence cannot establish that these mechanisms are domain-specific. To the contrary, many findings suggest that universal psychological responses to music reflect more general properties of emotion, auditory perception, and other human cognitive capacities that evolved for non-musical purposes. Cultural evolution, driven by the tinkering of musical performers, evidently crafts music to compellingly appeal to shared psychological mechanisms, resulting in both universal patterns (such as form-function associations) and culturally idiosyncratic styles.
Affiliation(s)
- Manvir Singh
- Institute for Advanced Study in Toulouse, University of Toulouse 1 Capitole, Toulouse, France
- Samuel A. Mehr
- Yale Child Study Center, Yale University, New Haven, CT, USA
- School of Psychology, University of Auckland, Auckland, New Zealand

30
Kurzom N, Lorenzi I, Mendelsohn A. Increasing the complexity of isolated musical chords benefits concurrent associative memory formation. Sci Rep 2023; 13:7563. [PMID: 37161040 PMCID: PMC10169783 DOI: 10.1038/s41598-023-34345-y]
Abstract
The effects of background music on learning and memory are inconsistent, partially due to the intrinsic complexity and diversity of music, as well as variability in music perception and preference. By stripping down musical harmony to its building blocks, namely discrete chords, we explored their effects on memory formation of unfamiliar word-image associations. Chords, defined as two or more simultaneously played notes, differ in the number of tones and inter-tone intervals, yielding varying degrees of harmonic complexity, which translate into a continuum of consonance to dissonance percepts. In the current study, participants heard four different types of musical chords (major, minor, medium complex, and high complex chords) while they learned new word-image pairs of a foreign language. One day later, their memory for the word-image pairs was tested, along with a chord rating session, in which they were required to assess the musical chords in terms of perceived valence, tension, and the extent to which the chords grabbed their attention. We found that musical chords containing dissonant elements were associated with higher memory performance for the word-image pairs compared with consonant chords. Moreover, tension positively mediated the relationship between roughness (a key feature of complexity) and memory, while valence negatively mediated this relationship. The reported findings are discussed in light of the effects that basic musical features have on tension and attention, in turn affecting cognitive processes of associative learning.
Affiliation(s)
- Nawras Kurzom
- Sagol Department of Neurobiology, University of Haifa, Haifa, Israel
- The Institute of Information Processing and Decision Making (IIPDM), University of Haifa, Haifa, Israel
- Ilaria Lorenzi
- Sagol Department of Neurobiology, University of Haifa, Haifa, Israel
- The Institute of Information Processing and Decision Making (IIPDM), University of Haifa, Haifa, Israel
- Department of Biology, University of Pisa, Pisa, Italy
- Avi Mendelsohn
- Sagol Department of Neurobiology, University of Haifa, Haifa, Israel
- The Institute of Information Processing and Decision Making (IIPDM), University of Haifa, Haifa, Israel

31
Plate RC, Jones C, Zhao S, Flum MW, Steinberg J, Daley G, Corbett N, Neumann C, Waller R. "But not the music": psychopathic traits and difficulties recognising and resonating with the emotion in music. Cogn Emot 2023; 37:748-762. [PMID: 37104122 DOI: 10.1080/02699931.2023.2205105]
Abstract
Recognising and responding appropriately to emotions is critical to adaptive psychological functioning. Psychopathic traits (e.g. callous, manipulative, impulsive, antisocial) are related to differences in recognition and response when emotion is conveyed through facial expressions and language. Use of emotional music stimuli represents a promising approach to improve our understanding of the specific emotion processing difficulties underlying psychopathic traits because it decouples recognition of emotion from cues directly conveyed by other people (e.g. facial signals). In Experiment 1, participants listened to clips of emotional music and identified the emotional content (Sample 1, N = 196) or reported on their feelings elicited by the music (Sample 2, N = 197). Participants accurately recognised (t(195) = 32.78, p < .001, d = 4.69) and reported feelings consistent with (t(196) = 7.84, p < .001, d = 1.12) the emotion conveyed in the music. However, psychopathic traits were associated with reduced emotion recognition accuracy (F(1, 191) = 19.39, p < .001) and reduced likelihood of feeling the emotion (F(1, 193) = 35.45, p < .001), particularly for fearful music. In Experiment 2, we replicated findings for broad difficulties with emotion recognition (Sample 3, N = 179) and emotional resonance (Sample 4, N = 199) associated with psychopathic traits. Results offer new insight into emotion recognition and response difficulties that are associated with psychopathic traits.
Affiliation(s)
- R C Plate
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- C Jones
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- S Zhao
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- M W Flum
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- J Steinberg
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- G Daley
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- N Corbett
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- C Neumann
- Department of Psychology, University of North Texas, Denton, TX, USA
- R Waller
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA

32
The EEG microstate representation of discrete emotions. Int J Psychophysiol 2023; 186:33-41. [PMID: 36773887 DOI: 10.1016/j.ijpsycho.2023.02.002]
Abstract
Understanding how human emotions are represented in our brain is a central question in the field of affective neuroscience. While previous studies have mainly adopted a modular and static perspective on the neural representation of emotions, emerging research suggests that emotions may rely on a distributed and dynamic representation. The present study aimed to explore the EEG microstate representations for nine discrete emotions (Anger, Disgust, Fear, Sadness, Neutral, Amusement, Inspiration, Joy and Tenderness). Seventy-eight participants were recruited to watch emotion eliciting videos with their EEGs recorded. Multivariate analysis revealed that different emotions had distinct EEG microstate features. By using the EEG microstate features in the Neutral condition as the reference, the coverage of C, duration of C and occurrence of B were found to be the top-contributing microstate features for the discrete positive and negative emotions. The emotions of Disgust, Fear and Joy were found to be most effectively represented by EEG microstate. The present study provided the first piece of evidence of EEG microstate representation for discrete emotions, highlighting a whole-brain, dynamical representation of human emotions.
33
Tervaniemi M. The neuroscience of music – towards ecological validity. Trends Neurosci 2023; 46:355-364. [PMID: 37012175 DOI: 10.1016/j.tins.2023.03.001]
Abstract
Studies in the neuroscience of music gained momentum in the 1990s as an integrated part of the well-controlled experimental research tradition. However, during the past two decades, these studies have moved toward more naturalistic, ecologically valid paradigms. Here, I introduce this move in three frameworks: (i) sound stimulation and empirical paradigms, (ii) study participants, and (iii) methods and contexts of data acquisition. I wish to provide a narrative historical overview of the development of the field and, in parallel, to stimulate innovative thinking to further advance the ecological validity of the studies without overlooking experimental rigor.
Affiliation(s)
- Mari Tervaniemi
- Centre of Excellence in Music, Mind, Body, and Brain, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland

34
Lévêque Y, Schellenberg EG, Fornoni L, Bouchet P, Caclin A, Tillmann B. Individuals with congenital amusia remember music they like. Cogn Affect Behav Neurosci 2023. [PMID: 36949277 DOI: 10.3758/s13415-023-01084-6]
Abstract
Music is better recognized when it is liked. Does this association remain evident when music perception and memory are severely impaired, as in congenital amusia? We tested 11 amusic and 11 matched control participants, asking whether liking of a musical excerpt influences subsequent recognition. In an initial exposure phase, participants-unaware that their recognition would be tested subsequently-listened to 24 musical excerpts and judged how much they liked each excerpt. In the test phase that followed, participants rated whether they recognized the previously heard excerpts, which were intermixed with an equal number of foils matched for mode, tempo, and musical genre. As expected, recognition was in general impaired for amusic participants compared with control participants. For both groups, however, recognition was better for excerpts that were liked, and the liking enhancement did not differ between groups. These results contribute to a growing body of research that examines the complex interplay between emotions and cognitive processes. More specifically, they extend previous findings related to amusics' impairments to a new memory paradigm and suggest that (1) amusic individuals are sensitive to an aesthetic and subjective dimension of the music-listening experience, and (2) emotions can support memory processes even in a population with impaired music perception and memory.
Affiliation(s)
- Yohana Lévêque
- Lyon Neuroscience Research Center, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, F-69000, Lyon, France
- E Glenn Schellenberg
- Centro de Investigação e Intervenção Social (CIS-IUL), Instituto Universitário de Lisboa (ISCTE-IUL), Lisboa, Portugal
- Department of Psychology, University of Toronto Mississauga, Mississauga, Canada
- Lesly Fornoni
- Lyon Neuroscience Research Center, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, F-69000, Lyon, France
- Patrick Bouchet
- Lyon Neuroscience Research Center, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, F-69000, Lyon, France
- Anne Caclin
- Lyon Neuroscience Research Center, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, F-69000, Lyon, France
- Barbara Tillmann
- Lyon Neuroscience Research Center, CNRS, UMR5292, INSERM, U1028, F-69000, Lyon, France
- University Lyon 1, F-69000, Lyon, France

35
Nummenmaa L, Hari R. Bodily feelings and aesthetic experience of art. Cogn Emot 2023:1-14. [PMID: 36912601 DOI: 10.1080/02699931.2023.2183180]
Abstract
Humans all around the world are drawn to creating and consuming art due to its capability to evoke emotions, but the mechanisms underlying art-evoked feelings remain poorly characterised. Here we show how embodiment contributes to emotions evoked by a large database of visual art pieces (n = 336). In four experiments, we mapped the subjective feeling space of art-evoked emotions (n = 244), quantified "bodily fingerprints" of these emotions (n = 615), and recorded the subjects' interest annotations (n = 306) and eye movements (n = 21) while viewing the art. We show that art evokes a wide spectrum of feelings, and that the bodily fingerprints triggered by art are central to these feelings, especially in artworks where human figures are salient. Altogether these results support the model that bodily sensations are central to the aesthetic experience.
Affiliation(s)
- Lauri Nummenmaa
- Turku PET Centre, University of Turku, Turku, Finland
- Department of Psychology, University of Turku, Turku, Finland
- Turku University Hospital, University of Turku, Turku, Finland
- Riitta Hari
- Department of Art and Media, Aalto University, Espoo, Finland

36
van Rijn P, Larrouy-Maestri P. Modelling individual and cross-cultural variation in the mapping of emotions to speech prosody. Nat Hum Behav 2023; 7:386-396. [PMID: 36646838 PMCID: PMC10038802 DOI: 10.1038/s41562-022-01505-5]
Abstract
The existence of a mapping between emotions and speech prosody is commonly assumed. We propose a Bayesian modelling framework to analyse this mapping. Our models are fitted to a large collection of intended emotional prosody, yielding more than 3,000 minutes of recordings. Our descriptive study reveals that the mapping within corpora is relatively constant, whereas the mapping varies across corpora. To account for this heterogeneity, we fit a series of increasingly complex models. Model comparison reveals that models taking into account mapping differences across countries, languages, sexes and individuals outperform models that only assume a global mapping. Further analysis shows that differences across individuals, cultures and sexes contribute more to the model prediction than a shared global mapping. Our models, which can be explored in an online interactive visualization, offer a description of the mapping between acoustic features and emotions in prosody.
Affiliation(s)
- Pol van Rijn
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Pauline Larrouy-Maestri
- Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Max Planck-NYU Center for Language, Music, and Emotion, New York, NY, USA

37
Abstract
How do experiences in nature or in spiritual contemplation or in being moved by music or with psychedelics promote mental and physical health? Our proposal in this article is awe. To make this argument, we first review recent advances in the scientific study of awe, an emotion often considered ineffable and beyond measurement. Awe engages five processes-shifts in neurophysiology, a diminished focus on the self, increased prosocial relationality, greater social integration, and a heightened sense of meaning-that benefit well-being. We then apply this model to illuminate how experiences of awe that arise in nature, spirituality, music, collective movement, and psychedelics strengthen the mind and body.
Affiliation(s)
- Maria Monroy
- Department of Psychology, University of California, Berkeley
- Dacher Keltner
- Department of Psychology, University of California, Berkeley

38
Stamkou E, Brummelman E, Dunham R, Nikolic M, Keltner D. Awe Sparks Prosociality in Children. Psychol Sci 2023; 34:455-467. [PMID: 36745740 DOI: 10.1177/09567976221150616]
Abstract
Rooted in the novel and the mysterious, awe is a common experience in childhood, but research is almost silent with respect to the import of this emotion for children. Awe makes individuals feel small, thereby shifting their attention to the social world. Here, we studied the effects of art-elicited awe on children's prosocial behavior toward an out-group and its unique physiological correlates. In two preregistered studies (Study 1: N = 159, Study 2: N = 353), children between 8 and 13 years old viewed movie clips that elicited awe, joy, or a neutral (control) response. Children who watched the awe-eliciting clip were more likely to spend their time on an effortful task (Study 1) and to donate their experimental earnings (Studies 1 and 2), all toward benefiting refugees. They also exhibited increased respiratory sinus arrhythmia, an index of parasympathetic nervous system activation associated with social engagement. We discuss implications for fostering prosociality by reimagining children's environments to inspire awe at a critical age.
Affiliation(s)
- Eddie Brummelman
- Research Institute of Child Development and Education, University of Amsterdam
- Rohan Dunham
- Department of Psychology, University of Amsterdam
- Milica Nikolic
- Research Institute of Child Development and Education, University of Amsterdam
- Dacher Keltner
- Department of Psychology, University of California, Berkeley

39
Brooks JA, Tzirakis P, Baird A, Kim L, Opara M, Fang X, Keltner D, Monroy M, Corona R, Metrick J, Cowen AS. Deep learning reveals what vocal bursts express in different cultures. Nat Hum Behav 2023; 7:240-250. [PMID: 36577898 DOI: 10.1038/s41562-022-01489-2]
Abstract
Human social life is rich with sighs, chuckles, shrieks and other emotional vocalizations, called 'vocal bursts'. Nevertheless, the meaning of vocal bursts across cultures is only beginning to be understood. Here, we combined large-scale experimental data collection with deep learning to reveal the shared and culture-specific meanings of vocal bursts. A total of n = 4,031 participants in China, India, South Africa, the USA and Venezuela mimicked vocal bursts drawn from 2,756 seed recordings. Participants also judged the emotional meaning of each vocal burst. A deep neural network tasked with predicting the culture-specific meanings people attributed to vocal bursts while disregarding context and speaker identity discovered 24 acoustic dimensions, or kinds, of vocal expression with distinct emotion-related meanings. The meanings attributed to these complex vocal modulations were 79% preserved across the five countries and three languages. These results reveal the underlying dimensions of human emotional vocalization in remarkable detail.
Affiliation(s)
- Jeffrey A Brooks
- Research Division, Hume AI, New York, NY, USA
- University of California, Berkeley, Berkeley, CA, USA
- Alice Baird
- Research Division, Hume AI, New York, NY, USA
- Lauren Kim
- Research Division, Hume AI, New York, NY, USA
- Xia Fang
- Zhejiang University, Hangzhou, China
- Dacher Keltner
- Research Division, Hume AI, New York, NY, USA
- University of California, Berkeley, Berkeley, CA, USA
- Maria Monroy
- University of California, Berkeley, Berkeley, CA, USA
- Alan S Cowen
- Research Division, Hume AI, New York, NY, USA
- University of California, Berkeley, Berkeley, CA, USA

40
Papatzikis E, Agapaki M, Selvan RN, Pandey V, Zeba F. Quality standards and recommendations for research in music and neuroplasticity. Ann N Y Acad Sci 2023; 1520:20-33. [PMID: 36478395 DOI: 10.1111/nyas.14944]
Abstract
Research on how music influences brain plasticity has gained momentum in recent years. Considering, however, the nonuniform methodological standards implemented, the findings end up being nonreplicable and less generalizable. To address the need for a standardized baseline of research quality, we gathered all the studies in the music and neuroplasticity field in 2019 and appraised their methodological rigor systematically and critically. The aim was to provide a preliminary and, at the minimum, acceptable quality threshold-and, ipso facto, suggested recommendations-whereupon further discussion and development may take place. Quality appraisal was performed on 89 articles by three independent raters, following a standardized scoring system. The raters' scoring was cross-referenced following an inter-rater reliability measure, and further studied by performing multiple ratings comparisons and matrix analyses. The results for methodological quality were at a quite good level (quantitative articles: mean = 0.737, SD = 0.084; qualitative articles: mean = 0.677, SD = 0.144), following a moderate but statistically significant level of agreement between the raters (W = 0.44, χ2 = 117.249, p = 0.020). We conclude that the standards for implementation and reporting are of high quality; however, certain improvements are needed to reach the stringent levels presumed for such an influential interdisciplinary scientific field.
Affiliation(s)
- Efthymios Papatzikis
- Department of Early Childhood Education and Care, Oslo Metropolitan University, Oslo, Norway
- Maria Agapaki
- Department of Early Childhood Education and Care, Oslo Metropolitan University, Oslo, Norway
- Rosari Naveena Selvan
- Institute for Physics 3 - Biophysics and Bernstein Center for Computational Neuroscience (BCCN), University of Göttingen, Göttingen, Germany; Department of Psychology, University of Münster, Münster, Germany
- Fathima Zeba
- School of Humanities and Social Sciences, Manipal Academy of Higher Education Dubai, Dubai, United Arab Emirates
41
Liew K, Uchida Y, Domae H, Koh AHQ. Energetic music is used for anger downregulation: A cross‐cultural differentiation of intensity from rhythmic arousal. JOURNAL OF APPLIED SOCIAL PSYCHOLOGY 2022. [DOI: 10.1111/jasp.12951] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
Affiliation(s)
- Kongmeng Liew
- Graduate School of Human and Environmental Studies, Kyoto University, Kyoto, Japan
- Graduate School of Science and Technology, Nara Institute of Science and Technology, Ikoma, Japan
- Yukiko Uchida
- Institute for the Future of Human Society, Kyoto University, Kyoto, Japan
- Hiina Domae
- Graduate School of Human and Environmental Studies, Kyoto University, Kyoto, Japan
- Alethea H. Q. Koh
- Institute for the Future of Human Society, Kyoto University, Kyoto, Japan
42
Gómez-Cañón JS, Gutiérrez-Páez N, Porcaro L, Porter A, Cano E, Herrera-Boyer P, Gkiokas A, Santos P, Hernández-Leo D, Karreman C, Gómez E. TROMPA-MER: an open dataset for personalized music emotion recognition. J Intell Inf Syst 2022. [DOI: 10.1007/s10844-022-00746-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Abstract
We present a platform and a dataset to help research on Music Emotion Recognition (MER). We developed the Music Enthusiasts platform aiming to improve the gathering and analysis of the so-called “ground truth” needed as input to MER systems. Firstly, our platform engages participants using citizen science strategies to generate music emotion annotations – the platform presents didactic information and musical recommendations as incentivization, and collects data regarding demographics, mood, and language from each participant. Participants annotated each music excerpt with single free-text emotion words (in their native language), distinct forced-choice emotion categories, preference, and familiarity. Additionally, participants stated the reasons for each annotation – including those distinctive of emotion perception and emotion induction. Secondly, our dataset was created for personalized MER and contains information from 181 participants, 4721 annotations, and 1161 music excerpts. To showcase the use of the dataset, we present a methodology for personalization of MER models based on active learning. The experiments show evidence that using the judgment of the crowd as prior knowledge for active learning allows for more effective personalization of MER systems for this particular dataset. Our dataset is publicly available and we invite researchers to use it for testing MER systems.
43
Martins MDJD, Baumard N. How to Develop Reliable Instruments to Measure the Cultural Evolution of Preferences and Feelings in History? Front Psychol 2022; 13:786229. [PMID: 35923745 PMCID: PMC9340072 DOI: 10.3389/fpsyg.2022.786229] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Accepted: 06/20/2022] [Indexed: 11/13/2022] Open
Abstract
While we cannot directly measure the psychological preferences of individuals, and the moral, emotional, and cognitive tendencies of people from the past, we can use cultural artifacts as a window to the zeitgeist of societies in particular historical periods. At present, an increasing number of digitized texts spanning several centuries is available for computerized analysis. In addition, developments from historical economics have enabled increasingly precise estimations of sociodemographic realities from the past. Crossing these datasets offers a powerful tool to test how the environment changes psychology and vice versa. However, designing appropriate proxies of relevant psychological constructs is not trivial. The gold standard for measuring psychological constructs in modern texts - Linguistic Inquiry and Word Count (LIWC) - has been validated by psychometric experimentation with modern participants. However, as a tool to investigate the psychology of the past, LIWC is limited in two main aspects: (1) it does not cover the entire range of relevant psychological dimensions and (2) the meaning, spelling, and pragmatic use of certain words depend on the historical period from which the fiction work is sampled. These limitations make the design of custom tools inevitable. However, without psychometric validation, there is uncertainty regarding what exactly is being measured. To overcome these pitfalls, we suggest several internal and external validation procedures, to be conducted prior to diachronic analyses. First, the semantic adequacy of search terms in bags-of-words approaches should be verified by training semantic vector spaces on the historical text corpus using tools like word2vec. Second, we propose factor analyses to evaluate the internal consistency between distinct bags-of-words proxying the same underlying psychological construct. Third, these proxies can be externally validated using prior knowledge of the differences between genres or other literary dimensions. Finally, while LIWC is limited in the analysis of historical documents, it can be used as a sanity check for external validation of custom measures. This procedure allows a robust estimation of psychological constructs and how they change throughout history. Together with historical economics, it also increases our power in testing the relationship between environmental change and the expression of psychological traits from the past.
44
Keltner D, Sauter D, Tracy JL, Wetchler E, Cowen AS. How emotions, relationships, and culture constitute each other: advances in social functionalist theory. Cogn Emot 2022; 36:388-401. [PMID: 35639090 DOI: 10.1080/02699931.2022.2047009] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
Social Functionalist Theory (SFT) emerged 20 years ago to orient emotion science to the social nature of emotion. Here we expand upon SFT and make the case for how emotions, relationships, and culture constitute one another. First, we posit that emotions enable the individual to meet six "relational needs" within social interactions: security, commitment, status, trust, fairness, and belongingness. Building upon this new theorising, we detail four principles concerning emotional experience, cognition, expression, and the cultural archiving of emotion. We conclude by considering the bidirectional influences between culture, relationships, and emotion, outlining areas of future inquiry.
Affiliation(s)
- Dacher Keltner
- Psychology Department, University of California at Berkeley, Berkeley, CA, USA
- Disa Sauter
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Everett Wetchler
- Psychology Department, University of California at Berkeley, Berkeley, CA, USA
- Alan S Cowen
- Psychology Department, University of California at Berkeley, Berkeley, CA, USA
45
Dieterich-Hartwell R, Gilman A, Hecker V. Music in the Practice of Dance/Movement Therapy. ARTS IN PSYCHOTHERAPY 2022. [DOI: 10.1016/j.aip.2022.101938] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
46
Vuust P, Heggli OA, Friston KJ, Kringelbach ML. Music in the brain. Nat Rev Neurosci 2022; 23:287-305. [PMID: 35352057 DOI: 10.1038/s41583-022-00578-5] [Citation(s) in RCA: 132] [Impact Index Per Article: 44.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/22/2022] [Indexed: 02/06/2023]
Abstract
Music is ubiquitous across human cultures - as a source of affective and pleasurable experience, moving us both physically and emotionally - and learning to play music shapes both brain structure and brain function. Music processing in the brain - namely, the perception of melody, harmony and rhythm - has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature of music perception. We show that music perception, action, emotion and learning all rest on the human brain's fundamental capacity for prediction - as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective.
Affiliation(s)
- Peter Vuust
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark.
- Ole A Heggli
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Morten L Kringelbach
- Center for Music in the Brain, Aarhus University and The Royal Academy of Music (Det Jyske Musikkonservatorium), Aarhus, Denmark; Department of Psychiatry, University of Oxford, Oxford, UK; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, Oxford, UK
47
From aesthetics to ethics: Testing the link between an emotional experience of awe and the motive of quixoteism on (un)ethical behavior. MOTIVATION AND EMOTION 2022; 46:508-520. [PMID: 35340283 PMCID: PMC8935891 DOI: 10.1007/s11031-022-09935-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/24/2022] [Indexed: 10/29/2022]
Abstract
According to the awe-quixoteism hypothesis, an experience of awe may lead to engagement in challenging actions aimed at increasing the welfare of the world. However, what if the action involves harming an individual? Across four experiments (N = 876), half of the participants were induced to feel awe and the other half a different (pleasant, activating, or neutral-control) emotion; they then decided whether to pursue a prosocial goal (local vs. global). In the first three experiments this decision was assessed through a dilemma that involved sacrificing one individual's life; in Experiments 2 and 3 we additionally varied the quality of the action (ordinary vs. challenging). In Experiment 4, participants decided whether to perform a real helping action. Overall, in line with the awe-quixoteism hypothesis, the results showed that inducing awe beforehand enhanced the willingness to sacrifice someone (Experiments 1, 2, and 3) or the acceptance to help (Experiment 4) when the decision involved engaging in challenges aimed at improving the welfare of the world.
48
Liu W, Zheng WL, Li Z, Wu SY, Gan L, Lu BL. Identifying similarities and differences in emotion recognition with EEG and eye movements among Chinese, German, and French people. J Neural Eng 2022; 19. [PMID: 35272271 DOI: 10.1088/1741-2552/ac5c8d] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2021] [Accepted: 03/10/2022] [Indexed: 11/12/2022]
Abstract
OBJECTIVE Cultures have essential influences on emotions. However, most studies on cultural influences on emotions are in the areas of psychology and neuroscience, while the existing affective models are mostly built with data from the same culture. In this paper, we identify the similarities and differences among Chinese, German, and French individuals in emotion recognition with electroencephalogram (EEG) and eye movements from an affective computing perspective. APPROACH Three experimental settings were designed: intraculture subject dependent, intraculture subject independent, and cross-culture subject independent. EEG and eye movements are acquired simultaneously from Chinese, German, and French subjects while watching positive, neutral, and negative movie clips. The affective models for Chinese, German, and French subjects are constructed by using machine learning algorithms. A systematic analysis is performed from four aspects: affective model performance, neural patterns, complementary information from different modalities, and cross-cultural emotion recognition. MAIN RESULTS From emotion recognition accuracies, we find that EEG and eye movements can adapt to Chinese, German, and French cultural diversities and that a cultural in-group advantage phenomenon does exist in emotion recognition with EEG. From the topomaps of EEG, we find that the gamma and beta bands exhibit decreasing activities for Chinese, while for German and French, theta and alpha bands exhibit increasing activities. From confusion matrices and attentional weights, we find that EEG and eye movements have complementary characteristics. From a cross-cultural emotion recognition perspective, we observe that German and French people share more similarities in topographical patterns and attentional weight distributions than Chinese people, while data from Chinese subjects are a good fit as test data but not suitable as training data for the other two cultures. SIGNIFICANCE Our experimental results provide concrete evidence of the in-group advantage phenomenon, cultural influences on emotion recognition, and different neural patterns among Chinese, German, and French individuals.
Affiliation(s)
- Wei Liu
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Wei-Long Zheng
- Massachusetts General Hospital, Boston, MA 02114, USA
- Ziyi Li
- Shanghai Jiao Tong University, Shanghai 200240, China
- Si-Yuan Wu
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Lu Gan
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Bao-Liang Lu
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
49
Margulis EH, Wong PCM, Turnbull C, Kubit BM, McAuley JD. Narratives imagined in response to instrumental music reveal culture-bounded intersubjectivity. Proc Natl Acad Sci U S A 2022; 119:e2110406119. [PMID: 35064081 PMCID: PMC8795501 DOI: 10.1073/pnas.2110406119] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Accepted: 12/13/2021] [Indexed: 11/18/2022] Open
Abstract
The scientific literature sometimes considers music an abstract stimulus, devoid of explicit meaning, and at other times considers it a universal language. Here, individuals in three geographically distinct locations spanning two cultures performed a highly unconstrained task: they provided free-response descriptions of stories they imagined while listening to instrumental music. Tools from natural language processing revealed that listeners provide highly similar stories to the same musical excerpts when they share an underlying culture, but when they do not, the generated stories show limited overlap. These results paint a more complex picture of music's power: music can generate remarkably similar stories in listeners' minds, but the degree to which these imagined narratives are shared depends on the degree to which culture is shared across listeners. Thus, music is neither an abstract stimulus nor a universal language but has semantic affordances shaped by culture, requiring more sustained attention from psychology.
Affiliation(s)
- Patrick C M Wong
- Department of Linguistics and Modern Languages, Chinese University of Hong Kong, Hong Kong SAR, China
- Cara Turnbull
- Department of Music, Princeton University, Princeton, NJ 08544
- Benjamin M Kubit
- Department of Psychology, Princeton University, Princeton, NJ 08544
- J Devin McAuley
- Department of Psychology, Michigan State University, East Lansing, MI 48824
50
Lange EB, Fünderich J, Grimm H. Multisensory integration of musical emotion perception in singing. PSYCHOLOGICAL RESEARCH 2022; 86:2099-2114. [PMID: 35001181 PMCID: PMC9470688 DOI: 10.1007/s00426-021-01637-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2021] [Accepted: 12/16/2021] [Indexed: 11/25/2022]
Abstract
We investigated how visual and auditory information contributes to emotion communication during singing. Classically trained singers applied two different facial expressions (expressive/suppressed) to pieces from their song and opera repertoire. Recordings of the singers were evaluated by laypersons or experts, presented to them in three different modes: auditory, visual, and audio–visual. A manipulation check confirmed that the singers succeeded in manipulating the face while keeping the sound highly expressive. Analyses focused on whether the visual difference or the auditory concordance between the two versions determined perception of the audio–visual stimuli. When evaluating expressive intensity or emotional content, a clear effect of visual dominance emerged. Experts made more use of the visual cues than laypersons. Consistency measures between uni-modal and multimodal presentations did not explain the visual dominance. The evaluation of seriousness was applied as a control. The uni-modal stimuli were rated as expected, but multisensory evaluations converged without visual dominance. Our study demonstrates that long-term knowledge and task context affect multisensory integration. Even though singers’ orofacial movements are dominated by sound production, their facial expressions can communicate emotions composed into the music, and observers do not rely on the audio information instead. Studies such as ours are important for understanding multisensory integration in applied settings.
Affiliation(s)
- Elke B Lange
- Department of Music, Max Planck Institute for Empirical Aesthetics (MPIEA), Grüneburgweg 14, 60322, Frankfurt/M., Germany.
- Jens Fünderich
- Department of Music, Max Planck Institute for Empirical Aesthetics (MPIEA), Grüneburgweg 14, 60322, Frankfurt/M., Germany; University of Erfurt, Erfurt, Germany
- Hartmut Grimm
- Department of Music, Max Planck Institute for Empirical Aesthetics (MPIEA), Grüneburgweg 14, 60322, Frankfurt/M., Germany