1
Armitage J, Eerola T, Halpern AR. Play it again, but more sadly: Influence of timbre, mode, and musical experience in melody processing. Mem Cognit 2025; 53:869-880. [PMID: 39095618 PMCID: PMC12053353 DOI: 10.3758/s13421-024-01614-8]
Abstract
The emotional properties of music are influenced by a host of factors, such as timbre, mode, harmony, and tempo. In this paper, we consider how two of these factors, mode (major vs. minor) and timbre, interact to influence ratings of perceived valence, reaction time, and recognition memory. More specifically, we considered the notion of congruence: we used a set of melodies that crossed the modes typically perceived as happy and sad in Western cultures (major and minor) with instruments typically perceived as happy and sad (marimba and viola). In a reaction-time experiment, participants were asked to classify melodies as happy or sad as quickly as possible. There was a clear congruency effect: when mode and timbre were congruent (major/marimba or minor/viola), reaction times were shorter than when they were incongruent (major/viola or minor/marimba). In Experiment 2, participants first rated the melodies for valence before completing a recognition task. Melodies initially presented in incongruent conditions during the rating task were subsequently recognized better in the recognition task. This recognition advantage for melodies presented in incongruent conditions is discussed in the context of desirable difficulty.
Affiliation(s)
- James Armitage
- Music Department, Durham University, Durham, DH1 3RL, UK.
- Tuomas Eerola
- Music Department, Durham University, Durham, DH1 3RL, UK
- Andrea R Halpern
- Psychology Department, Bucknell University, Lewisburg, PA, 17837, USA
2
O'Connell SR, Papadopoulos JM, Inouye D, van der Donk BJ, Gan H, Goldsworthy RL. Musically evoked emotions in cochlear implant users and those with no known hearing loss. Hear Res 2025; 458:109196. [PMID: 39914280 DOI: 10.1016/j.heares.2025.109196]
Abstract
BACKGROUND: Cochlear implants provide the profoundly deaf with excellent speech comprehension; however, perception and appreciation of music remain a challenge. Previous work suggests that cochlear implant users, compared to normal-hearing listeners, have diminished perception of certain musically evoked emotions due to deficits in hearing pitch-related musical elements. The purpose of this study was to investigate how well cochlear implant users use pitch-based information to identify the emotional intent of music. METHODS: Twenty-six cochlear implant users and 24 peers with no known hearing loss completed a set of online auditory measures. Participants were asked to rate the valence and arousal of 10 happy, 10 sad, 10 scary, and 10 peaceful melodies as categorized by Vieillard et al. (2008). Melodies previously categorized as peaceful and sad were then altered from major to minor mode (peaceful to peaceful-modified) and from minor to major (sad to sad-modified), respectively. Additionally, the tempo of these melodies was controlled at 60 beats per minute. Participants then rated the valence and arousal of these 10 sad, 10 sad-modified, 10 peaceful, and 10 peaceful-modified melodies. Participants also completed a series of pitch perception tasks, including major and minor melody and arpeggio discrimination and melodic contour identification. During data analysis, correlations among valence and arousal ratings, major and minor melody and arpeggio discrimination scores, and melodic contour identification scores were assessed. RESULTS: When listening to the unmodified melodies from Vieillard et al. (2008), both cochlear implant users and those with no known hearing loss rated happy melodies more distinctly in valence and arousal than sad, peaceful, and scary melodies. Cochlear implant users rated the valence of sad and peaceful melodies more similarly than did those with no known hearing loss. When listening to the modified melodies, cochlear implant users rated original and modified sad and peaceful melodies similarly on the dimensions of valence and arousal. This contrasts with those with no known hearing loss, who used the mode changes to rate melodies in major modes higher in valence than melodies in minor modes. For major and minor melody and arpeggio discrimination, those with no known hearing loss performed close to or at ceiling, while cochlear implant users mostly performed in the chance range. Finally, for melodic contour identification, many cochlear implant users performed significantly worse than those with no known hearing loss. CONCLUSION: This study further reveals the challenges that cochlear implant users face in using modal cues to perceive the emotional intent of music. While cochlear implant users are able to use tempo cues to derive the emotional intent of melodies to an extent, they struggle once these cues are taken away. The data presented here therefore provide a foundation for exploring how pitch-based training may improve cochlear implant users' perception of musical mode. Given the ubiquity of mode as a means of communicating musical emotional intent, pitch-based training may in turn lead to enhanced music appreciation among cochlear implant users.
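The mode manipulation described in the Methods can be made concrete with a short sketch: the defining change is shifting the third scale degree by a semitone. A minimal Python illustration, assuming melodies encoded as MIDI note numbers with a known tonic (the study's actual stimulus-editing procedure is not specified here):

```python
def swap_mode(midi_notes, tonic, to_minor=True):
    """Shift the modal third up or down a semitone; other degrees unchanged.
    (A full major-to-minor transformation may also lower the sixth and
    seventh degrees; the third is the defining one.)"""
    out = []
    for note in midi_notes:
        degree = (note - tonic) % 12            # pitch class relative to tonic
        if to_minor and degree == 4:            # major third -> minor third
            out.append(note - 1)
        elif not to_minor and degree == 3:      # minor third -> major third
            out.append(note + 1)
        else:
            out.append(note)
    return out

# Example: a C-major arpeggio (C4, E4, G4) becomes C-minor (C4, Eb4, G4).
print(swap_mode([60, 64, 67], tonic=60))        # [60, 63, 67]
```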
Affiliation(s)
- Samantha R O'Connell
- Keck School of Medicine of USC, The Caruso Department of Otolaryngology-Head and Neck Surgery, University of Southern California, Los Angeles, California 90033.
- Julianne M Papadopoulos
- Keck School of Medicine of USC, The Caruso Department of Otolaryngology-Head and Neck Surgery, University of Southern California, Los Angeles, California 90033.
- Daniel Inouye
- Keck School of Medicine of USC, The Caruso Department of Otolaryngology-Head and Neck Surgery, University of Southern California, Los Angeles, California 90033.
- Brandon J van der Donk
- Keck School of Medicine of USC, The Caruso Department of Otolaryngology-Head and Neck Surgery, University of Southern California, Los Angeles, California 90033.
- Helena Gan
- Keck School of Medicine of USC, The Caruso Department of Otolaryngology-Head and Neck Surgery, University of Southern California, Los Angeles, California 90033.
- Raymond L Goldsworthy
- Keck School of Medicine of USC, The Caruso Department of Otolaryngology-Head and Neck Surgery, University of Southern California, Los Angeles, California 90033.
3
Carraturo G, Pando-Naude V, Costa M, Vuust P, Bonetti L, Brattico E. The major-minor mode dichotomy in music perception. Phys Life Rev 2025; 52:80-106. [PMID: 39721138 DOI: 10.1016/j.plrev.2024.11.017]
Abstract
In Western tonal music, major and minor modes are recognized as the primary musical features in eliciting emotional responses. The underlying correlates of this dichotomy in music perception have been extensively investigated through decades of psychological and neuroscientific research, yielding plentiful yet often discordant results that highlight the complexity and individual differences in how these modes are perceived. This variability suggests that a deeper understanding of major-minor mode perception in music is still needed. We present the first comprehensive systematic review and meta-analysis, providing both qualitative and quantitative syntheses of major-minor mode perception and its behavioural and neural correlates. The qualitative synthesis includes 70 studies, revealing significant diversity in how the major-minor dichotomy has been empirically investigated. Most studies focused on adults, considered participants' expertise, used real-life musical stimuli, conducted behavioural evaluations, and were predominantly performed with Western listeners. Meta-analyses of behavioural, electroencephalography, and neuroimaging data (37 studies) consistently show that major and minor modes elicit distinct neural and emotional responses, though these differences are heavily influenced by subjective perception. Based on our findings, we propose a framework to describe a Major-Minor Mode(l) of music perception and its correlates, incorporating individual factors such as age, expertise, cultural background, and emotional disorders. Moreover, this work explores the cultural and historical implications of the major-minor dichotomy in music, examining its origins, universality, and emotional associations across both Western and non-Western contexts. By considering individual differences and acoustic characteristics, we contribute to a broader understanding of how musical frameworks develop across cultures. Limitations, implications, and suggestions for future research are discussed, including potential clinical applications for mood regulation and emotional disorders, alongside recommendations for experimental paradigms in investigating major-minor modes.
Affiliation(s)
- Giulio Carraturo
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, Italy; Department of Psychology, University of Bologna, Italy
- Victor Pando-Naude
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Marco Costa
- Department of Psychology, University of Bologna, Italy
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark
- Leonardo Bonetti
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark; Department of Psychology, University of Bologna, Italy; Centre for Eudaimonia and Human Flourishing, Linacre College, University of Oxford, United Kingdom; Department of Psychiatry, University of Oxford, United Kingdom
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus/Aalborg, Aarhus, Denmark; Department of Education, Psychology, Communication, University of Bari Aldo Moro, Italy.
4
Wu H, Wang D, Zhou L. Tunes that move us: the impact of music-induced emotions on prosocial decision-making. Front Psychol 2025; 15:1453808. [PMID: 39850967 PMCID: PMC11754231 DOI: 10.3389/fpsyg.2024.1453808]
Abstract
Introduction: The significance of music might be attributed to its role in social bonding, a function that has likely influenced the evolution of human musicality. Although there is substantial evidence for the relationship between prosocial songs and prosocial behavior, it remains unclear whether music alone, independent of lyrics, can influence prosocial behaviors. This study investigates whether music and the emotions it induces can influence prosocial decision-making, using the classical two-dimensional model of emotion (mood and arousal). Methods: In Experiment 1, 42 undergraduate students listened to happy music (positive, high arousal), sad music (negative, low arousal), and white noise while reading stories describing helping scenarios, and then assessed their willingness to help. Experiments 2 and 3 further explored mood and arousal effects by manipulating the mode (major vs. minor) and tempo (fast vs. slow) of the music. Results: Experiment 1 indicated that sad music increases willingness to help more than happy music or white noise, suggesting that music-induced emotions influence prosocial behavior through immediate prosocial emotions such as empathy. Experiments 2 and 3 demonstrated that only mood, influenced by musical mode, affects prosocial decision-making, while tempo-induced arousal does not. Additionally, Theory of Mind and memory strength do not mediate these effects. Discussion: These findings reveal the role of pure music listening and specific emotional dimensions in prosocial decision-making, providing evidence in support of the music-social bonding hypothesis.
Affiliation(s)
- Hongwei Wu
- School of Music and Dance, Communication University of Zhejiang, Hangzhou, China
- Danni Wang
- Music College, Shanghai Normal University, Shanghai, China
- Linshu Zhou
- Music College, Shanghai Normal University, Shanghai, China
5
Zapata Cardona J, Duque Arias S, David Jaramillo E, Surget A, Ibargüen-Vargas Y, Rodríguez BDJ. Effects of a veterinary functional music-based enrichment program on the psychophysiological responses of farm pigs. Sci Rep 2024; 14:18660. [PMID: 39134584 PMCID: PMC11319718 DOI: 10.1038/s41598-024-68407-6]
Abstract
Intensification of swine production can predispose pigs to chronic stress, with adverse effects on the neuroendocrine and immune systems that can lead to health problems, poor welfare, and reduced production performance. Consequently, there is an interest in developing tools to prevent or eliminate chronic stress. Music is widely used as a therapeutic strategy for stress management in humans and may have similar benefits in non-human animals. This study evaluated the effects of a music-based auditory enrichment program in pigs from a multidimensional perspective by assessing psychophysiological responses. Two experimental groups of 20 pigs each were selected for the study: one enriched, exposed to a program of functional veterinary music designed for pigs, and a control group without auditory stimulation. Qualitative behavior assessment (QBA) and skin lesions indicative of agonistic behavior were used to evaluate the psychological determinants underlying the observed behaviors. Physiological assessment included hemograms, with the determination of the neutrophil:lymphocyte ratio and daily measurements of cortisol and salivary alpha-amylase levels. The results demonstrated a positive effect of a music-based auditory program on psychophysiological responses. Therefore, this strategy developed for environmental enrichment may be beneficial in reducing stress and contributing to the welfare and health of pigs under production conditions.
Affiliation(s)
- Juliana Zapata Cardona
- Grupo de Investigación en Patobiología QUIRON, Escuela de Medicina Veterinaria, Universidad de Antioquia, Calle 70 No. 52-21, Medellín, Colombia.
- Santiago Duque Arias
- Grupo de Investigación en Patobiología QUIRON, Escuela de Medicina Veterinaria, Universidad de Antioquia, Calle 70 No. 52-21, Medellín, Colombia
- Edimer David Jaramillo
- Grupo de Investigación en Patobiología QUIRON, Escuela de Medicina Veterinaria, Universidad de Antioquia, Calle 70 No. 52-21, Medellín, Colombia
- Alexandre Surget
- iBraiN (Imaging Brain & Neuropsychiatry, UMR1253 - Team ExTraPsy), INSERM, Université de Tours, Tours, France
- Yadira Ibargüen-Vargas
- EUK-CVL, Université d'Orléans, Orléans, France
- CIAMS, Université Paris-Saclay, Orsay, France
- Berardo de Jesús Rodríguez
- Grupo de Investigación en Patobiología QUIRON, Escuela de Medicina Veterinaria, Universidad de Antioquia, Calle 70 No. 52-21, Medellín, Colombia
6
Childress A, Lou M. Illness Narratives in Popular Music: An Untapped Resource for Medical Education. The Journal of Medical Humanities 2023; 44:533-552. [PMID: 37566168 DOI: 10.1007/s10912-023-09813-1]
Abstract
Illness narratives convey a person's feelings, thoughts, beliefs, and descriptions of suffering and healing as a result of physical or mental breakdown. Recognized genres include fiction, nonfiction, poetry, plays, and films. Like poets and playwrights, musicians also use their life experiences as fodder for their art. However, illness narratives as expressed through popular music are an understudied and underutilized source of insights into the experience of suffering, healing, and coping with illness, disease, and death. Greater attention to the value of music within medical education is needed to improve students' perspective-taking and communication. Like reading a good book, songs that resonate with listeners speak to shared experiences or invite them into a universe of possibilities that they had not yet imagined. In this article, we show how uncovering these themes in popular music might be integrated into medical education, thus creating a space for reflection on the nature and meaning of illness and the fragility of the human condition. We describe three kinds of illness narratives that may be found in popular music (autobiographical, biographical, and metaphorical) and show how developing skills of close listening through exposure to these narrative forms can improve patient-physician communication and expand students' moral imaginations.
Affiliation(s)
- Andrew Childress
- Humanities Expression and Arts Lab, Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, USA.
- Monica Lou
- Department of Medicine, Baylor College of Medicine, Houston, TX, USA
7
Martins THS, Rodrigues RM, Araújo FCO, Cedro ÁM, Bortoloti R, Varella AAB, Huziwara EM. Transfer of functions based on equivalence class formation using musical stimuli. J Exp Anal Behav 2023; 120:394-405. [PMID: 37710382 DOI: 10.1002/jeab.881]
Abstract
Empirical evidence supports the claim that musical excerpts written in major and minor modes evoke happiness and sadness, respectively. In this study, we evaluated whether the emotional content evoked by musical stimuli would transfer to abstract figures when they became members of the same equivalence class. Participants assigned to the experimental group underwent a training procedure to form equivalence classes comprising musical excerpts (A) and meaningless abstract figures (B, C, and D). Afterward, transfer of function was evaluated using a semantic differential. Participants in the control group showed positive semantic differential scores for major-mode musical excerpts, negative scores for minor-mode musical excerpts, and neutral scores for the B, C, and D stimuli. Participants in the experimental group showed positive semantic differential scores for visual stimuli equivalent to the major-mode excerpts and negative scores for visual stimuli equivalent to the minor-mode excerpts. These results indicate a transfer of function of the emotional content of musical stimuli through equivalence class formation. These findings could provide a more comprehensive understanding of the effects of using emotional stimuli in equivalence class formation experiments and in transfer of function itself.
Affiliation(s)
- Raone M Rodrigues
- Universidade Federal de Minas Gerais, Brazil
- Instituto Nacional sobre Comportamento, Cognição e Ensino (INCT-ECCE), Brazil
- Átila M Cedro
- Universidade Federal de Minas Gerais, Brazil
- Instituto Nacional sobre Comportamento, Cognição e Ensino (INCT-ECCE), Brazil
- Renato Bortoloti
- Universidade Federal de Minas Gerais, Brazil
- Instituto Nacional sobre Comportamento, Cognição e Ensino (INCT-ECCE), Brazil
- André A B Varella
- Instituto Nacional sobre Comportamento, Cognição e Ensino (INCT-ECCE), Brazil
- iABA - Instituto de Análise do Comportamento Aplicada, Brazil
- Edson M Huziwara
- Universidade Federal de Minas Gerais, Brazil
- Instituto Nacional sobre Comportamento, Cognição e Ensino (INCT-ECCE), Brazil
8
Ma W, Zhou P, Liang X, Thompson WF. Children across cultures respond emotionally to the acoustic environment. Cogn Emot 2023; 37:1144-1152. [PMID: 37338002 DOI: 10.1080/02699931.2023.2225850]
Abstract
Among human and non-human animals, the ability to respond rapidly to biologically significant events in the environment is essential for survival and development. Research has confirmed that adult human listeners respond emotionally to environmental sounds by relying on the same acoustic cues that signal emotionality in speech prosody and music. However, it is unknown whether young children also respond emotionally to environmental sounds. Here, we report that changes in pitch, rate (i.e., playback speed), and intensity (i.e., amplitude) of environmental sounds trigger emotional responses in 3- to 6-year-old American and Chinese children across four sound types: sounds of human actions, animal calls, machinery, and natural phenomena such as wind and waves. Children's responses did not differ across the four types of sounds but developed with age, a finding observed in both American and Chinese children. Thus, the ability to respond emotionally to non-linguistic, non-musical environmental sounds is evident at three years of age, an age when the ability to decode emotional prosody in language and music emerges. We argue that general mechanisms supporting emotional prosody decoding are engaged by all sounds, as reflected in emotional responses to non-linguistic acoustic input such as music and environmental sounds.
Affiliation(s)
- Weiyi Ma
- School of Human Environmental Sciences, University of Arkansas, Fayetteville, AR, USA
- Peng Zhou
- School of International Studies, Zhejiang University, Hangzhou, People's Republic of China
- Xinya Liang
- Department of Counseling, Leadership, and Research Methods, University of Arkansas, Fayetteville, AR, USA
9
Benítez-Burraco A, Nikolsky A. The (Co)Evolution of Language and Music Under Human Self-Domestication. Human Nature 2023; 34:229-275. [PMID: 37097428 PMCID: PMC10354115 DOI: 10.1007/s12110-023-09447-1]
Abstract
Together with language, music is perhaps the most distinctive behavioral trait of the human species. Different hypotheses have been proposed to explain why only humans perform music and how this ability might have evolved in our species. In this paper, we advance a new model of music evolution that builds on the self-domestication view of human evolution, according to which the human phenotype is, at least in part, the outcome of a process similar to domestication in other mammals, triggered by the reduction in reactive aggression responses to environmental changes. We specifically argue that self-domestication can account for some of the cognitive changes, and particularly for the behaviors conducive to the complexification of music through a cultural mechanism. We hypothesize four stages in the evolution of music under self-domestication forces: (1) collective protomusic; (2) private, timbre-oriented music; (3) small-group, pitch-oriented music; and (4) collective, tonally organized music. This line of development encompasses the worldwide diversity of music types and genres and parallels what has been hypothesized for languages. Overall, music diversity might have emerged in a gradual fashion under the effects of the enhanced cultural niche construction as shaped by the progressive decrease in reactive (i.e., impulsive, triggered by fear or anger) aggression and the increase in proactive (i.e., premeditated, goal-directed) aggression.
Affiliation(s)
- Antonio Benítez-Burraco
- Department of Spanish Language, Linguistics and Literary Theory (Linguistics), Faculty of Philology, University of Seville, Seville, Spain.
10
Singh M, Mehr SA. Universality, domain-specificity, and development of psychological responses to music. Nature Reviews Psychology 2023; 2:333-346. [PMID: 38143935 PMCID: PMC10745197 DOI: 10.1038/s44159-023-00182-z]
Abstract
Humans can find music happy, sad, fearful, or spiritual. They can be soothed by it or urged to dance. Whether these psychological responses reflect cognitive adaptations that evolved expressly for responding to music is an ongoing topic of study. In this Review, we examine three features of music-related psychological responses that help to elucidate whether the underlying cognitive systems are specialized adaptations: universality, domain-specificity, and early expression. Focusing on emotional and behavioural responses, we find evidence that the relevant psychological mechanisms are universal and arise early in development. However, the existing evidence cannot establish that these mechanisms are domain-specific. To the contrary, many findings suggest that universal psychological responses to music reflect more general properties of emotion, auditory perception, and other human cognitive capacities that evolved for non-musical purposes. Cultural evolution, driven by the tinkering of musical performers, evidently crafts music to compellingly appeal to shared psychological mechanisms, resulting in both universal patterns (such as form-function associations) and culturally idiosyncratic styles.
Affiliation(s)
- Manvir Singh
- Institute for Advanced Study in Toulouse, University of Toulouse 1 Capitole, Toulouse, France
- Samuel A. Mehr
- Yale Child Study Center, Yale University, New Haven, CT, USA
- School of Psychology, University of Auckland, Auckland, New Zealand
11
Spectro-temporal acoustic elements of music interact in an integrated way to modulate emotional responses in pigs. Sci Rep 2023; 13:2994. [PMID: 36810549 PMCID: PMC9944864 DOI: 10.1038/s41598-023-30057-5]
Abstract
Music is a complex stimulus, with various spectro-temporal acoustic elements determining one of its most important attributes: the ability to elicit emotions. The effects of different musical acoustic elements on emotions in non-human animals have not been studied with an integrated approach, yet this knowledge is important for designing music that provides environmental enrichment for non-human species. Thirty-nine instrumental musical pieces were composed and used to determine the effects of various acoustic parameters on emotional responses in farm pigs. Video recordings (n = 50) of pigs in the nursery phase (7-9 weeks old) were gathered, and the emotional responses induced by the stimuli were evaluated with Qualitative Behavioral Assessment (QBA). Non-parametric statistical models (Generalized Additive Models, Decision Trees, Random Forests, and XGBoost) were applied and compared to evaluate relationships between acoustic parameters and the pigs' observed emotional responses. We concluded that musical structure affected the emotional responses of pigs. The valence of the modulated emotions depended on integrated, simultaneous interactions of various spectral and temporal structural components of music that can be readily modified. This new knowledge supports the design of musical stimuli for use as environmental enrichment for non-human animals.
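As a rough illustration of the model-comparison approach named in the abstract, the sketch below fits one of the listed models, a random forest, to relate acoustic parameters to an emotion score; the feature names and synthetic data are placeholders, not the study's variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Placeholder data: 50 musical pieces x 4 acoustic parameters, plus a
# QBA-style valence score per piece (random numbers stand in for real data).
rng = np.random.default_rng(0)
X = rng.random((50, 4))
y = rng.random(50)

model = RandomForestRegressor(n_estimators=500, random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean().round(2))

# Feature importances suggest which acoustic elements drive the predictions.
model.fit(X, y)
for name, imp in zip(["tempo", "brightness", "roughness", "register"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```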
12
Pathre T, Marozeau J. Temporal Cues in the Judgment of Music Emotion for Normal and Cochlear Implant Listeners. Trends Hear 2023; 27:23312165231170501. [PMID: 37097919 PMCID: PMC10134148 DOI: 10.1177/23312165231170501]
Abstract
Several studies have established that cochlear implant (CI) listeners rely on the tempo of music to judge its emotional content. However, a re-analysis of a study in which CI listeners judged the emotion conveyed by piano pieces on a scale from happy to sad revealed a weak correlation between tempo and emotion. The present study explored which temporal cues in music influence emotion judgments among normal-hearing (NH) listeners, which might provide insights into the cues utilized by CI listeners. Experiment 1 replicated the Vannson et al. study with NH listeners, using the rhythmic patterns of the piano pieces rendered on congas: the temporal cues were preserved while the tonal ones were removed. The results showed that (i) tempo was weakly correlated with emotion judgments, and (ii) NH listeners' judgments for congas were similar to CI listeners' judgments for piano. In Experiment 2, two tasks were administered with congas played at three different tempi: an emotion judgment task and a tapping task to record listeners' perceived tempo. Perceived tempo was a better predictor than nominal tempo, but its physical correlate, mean onset-to-onset difference (MOOD), a measure of the average time between notes, yielded higher correlations with NH listeners' emotion judgments. This result suggests that instead of the tempo, listeners rely on the average time between consecutive notes to judge the emotional content of music. CI listeners could utilize this cue to judge the emotional content of music.
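The MOOD measure as defined above reduces to a one-line computation, the mean of the differences between consecutive note onsets. A minimal sketch with made-up onset times:

```python
import numpy as np

def mean_onset_to_onset(onsets):
    """Average inter-onset interval (seconds) of a melody's note onsets."""
    onsets = np.sort(np.asarray(onsets, dtype=float))
    return float(np.mean(np.diff(onsets)))

onsets = [0.0, 0.4, 0.9, 1.1, 1.8, 2.2]   # hypothetical onset times, seconds
print(f"MOOD = {mean_onset_to_onset(onsets):.3f} s")
```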
Affiliation(s)
- Tanmayee Pathre
- Music and Cochlear Implants Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
- Building Acoustics Group, Department of Built Environment, Eindhoven University of Technology, Eindhoven, The Netherlands
- Jeremy Marozeau
- Music and Cochlear Implants Lab, Department of Health Technology, Technical University of Denmark, Kongens Lyngby, Denmark
13
Micallef Grimaud A, Eerola T. Emotional expression through musical cues: A comparison of production and perception approaches. PLoS One 2022; 17:e0279605. [PMID: 36584186 PMCID: PMC9803112 DOI: 10.1371/journal.pone.0279605]
Abstract
Multiple approaches have been used to investigate how musical cues are used to shape different emotions in music. The most prominent approach is a perception study, where musical stimuli varying in cue levels are assessed by participants in terms of their conveyed emotion. However, this approach limits the number of cues and combinations simultaneously investigated, since each variation produces another musical piece to be evaluated. Another less used approach is a production approach, where participants use cues to change the emotion conveyed in music, allowing participants to explore a larger number of cue combinations than the former approach. These approaches provide different levels of accuracy and economy for identifying how cues are used to convey different emotions in music. However, do these approaches provide converging results? This paper's aims are two-fold. The role of seven musical cues (tempo, pitch, dynamics, brightness, articulation, mode, and instrumentation) in communicating seven emotions (sadness, joy, calmness, anger, fear, power, and surprise) in music is investigated. Additionally, this paper explores whether the two approaches will yield similar findings on how the cues are used to shape different emotions in music. The first experiment utilises a production approach where participants adjust the cues in real-time to convey target emotions. The second experiment uses a perception approach where participants rate pre-rendered systematic variations of the stimuli for all emotions. Overall, the cues operated similarly in the majority (32/49) of cue-emotion combinations across both experiments, with the most variance produced by the dynamics and instrumentation cues. A comparison of the prediction accuracy rates of cue combinations representing the intended emotions found that prediction rates in Experiment 1 were higher than the ones obtained in Experiment 2, suggesting that a production approach may be a more efficient method to explore how cues are used to shape different emotions in music.
Affiliation(s)
- Tuomas Eerola
- Department of Music, Music and Science Lab, Durham University, Durham, United Kingdom
14
Slow tempo music preserves attentional efficiency in young children. Atten Percept Psychophys 2022; 85:978-984. [PMID: 36577915 DOI: 10.3758/s13414-022-02602-3]
Abstract
Past research has shown that listening to slow- or fast-tempo music can affect adults' executive attention (EA) performance. This study examined the immediate impact of brief exposure to slow- or fast-tempo music on EA performance in 4- to 6-year-old children. A within-subject design was used, where each child completed three blocks of the EA task after listening to fast-tempo music (fast-tempo block), slow-tempo music (slow-tempo block), and ocean waves (control block), with block-order counterbalanced. In each block, children were also asked to report their pre-task subjective emotional status (experienced arousal and valence) before listening to music and their post-task emotional status after the EA task. Three major results emerged. First, reaction time (RT) was significantly faster in the slow-tempo block than in the fast-tempo, suggesting that listening to slow-tempo music preserves processing efficiency, relative to fast-tempo music. Second, children's accuracy rate in the EA task did not differ across blocks. Third, children's subjective emotional status did not differ across blocks and did not change across the pre- and post-task phases in any block, suggesting the faster RT observed in the slow-tempo block cannot be explained by changes in arousal or mood.
15
Neonatal Musicality: Do Newborns Detect Emotions in Music? Psychological Studies 2022. [DOI: 10.1007/s12646-022-00688-1]
Abstract
This study aimed to explore healthy, term neonates' behavioural and physiological responses to music using frame-by-frame analysis of their movements (Experiment 1; N = 32, 0-3 days old) and heart rate measurements (Experiment 2; N = 66, 0-6 days old). A 'happy' and a 'sad' piece of music were first selected from a large pool of children's songs and lullabies and validated by independent raters for their emotional content, and the effects of these two pieces were compared with each other and with a control, no-music condition. The results of the frame-by-frame behavioural analysis showed that babies had emotion-specific responses across the three conditions. Happy music decreased their arousal levels, shifting them from drowsiness to sleep, and resulted in longer latencies in other forms of self-regulatory behaviour, such as sucking. The decrease in arousal was accompanied by heart rate deceleration. In the sad music condition, relative 'stillness' was observed, and longer leg-stretching latencies were measured. In both music conditions, longer latencies of fine motor finger and toe movements were found. Our findings suggest that the emotional response to music possibly emerges very early ontogenetically, as part of a generic, possibly inborn, human musicality.
16
Hine K, Abe K, Kinzuka Y, Shehata M, Hatano K, Matsui T, Nakauchi S. Spontaneous motor tempo contributes to preferred music tempo regardless of music familiarity. Front Psychol 2022; 13:952488. [PMID: 36467226 PMCID: PMC9713942 DOI: 10.3389/fpsyg.2022.952488]
Abstract
Music listening has occurred throughout human history, yet it remains unclear why people prefer some types of music over others. To understand why we listen to certain music, previous studies have focused on preferred tempo. These studies have reported that music components (external factors), as well as participants' spontaneous motor tempo (SMT; an internal factor), determine tempo preference. In addition, individual familiarity with a piece of music has been suggested to affect the impact of its components on tempo preference. However, the relationships among participants' SMT, music components, and music familiarity, and the influence of these variables on tempo preference, have not been investigated. Moreover, the music components that contribute to tempo preference, and their dependence on familiarity, remain unclear. Here, we investigate how SMT, music components, and music familiarity simultaneously regulate tempo preference, as well as which music components interact with familiarity to contribute to tempo preference. A total of 23 participants adjusted the tempo of music pieces according to their preferences and rated their familiarity with the music. In addition, they tapped a finger at their preferred tempo. Music components, such as the original tempo and the number of notes, were also analyzed. Analysis of the collected data with a linear mixed model showed that participants' preferred tapping tempo contributed to their preferred music tempo regardless of music familiarity. In contrast, the contributions of music components differed depending on familiarity. These results suggest that tempo preference is affected by both movement and memory.
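A linear mixed model of the general form described, with preferred music tempo predicted from SMT and familiarity-dependent music components plus a per-participant random intercept, might be specified as below (statsmodels syntax; all variable names and the synthetic data are assumptions, since the exact model terms are not given here).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: 23 participants x 20 music pieces.
rng = np.random.default_rng(1)
n_subj, n_piece = 23, 20
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_subj), n_piece),
    "smt": np.repeat(rng.normal(120, 20, n_subj), n_piece),   # tapping tempo (BPM)
    "original_tempo": np.tile(rng.normal(110, 25, n_piece), n_subj),
    "familiarity": rng.integers(1, 8, n_subj * n_piece),       # 1-7 rating
})
df["preferred_tempo"] = (0.5 * df["smt"] + 0.3 * df["original_tempo"]
                         + rng.normal(0, 10, len(df)))

# Random intercept per participant; familiarity moderates the music component
# but (per the finding above) not the SMT term.
model = smf.mixedlm("preferred_tempo ~ smt + original_tempo * familiarity",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```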
Affiliation(s)
- Kyoko Hine
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Japan
- Koki Abe
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Japan
- Yuya Kinzuka
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Japan
- Mohammad Shehata
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Japan
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, United States
- Katsunobu Hatano
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Japan
- Toshie Matsui
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Japan
- Shigeki Nakauchi
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi, Japan
17
Floreani ED, Orlandi S, Chau T. A pediatric near-infrared spectroscopy brain-computer interface based on the detection of emotional valence. Front Hum Neurosci 2022; 16:938708. [PMID: 36211121 PMCID: PMC9540519 DOI: 10.3389/fnhum.2022.938708]
Abstract
Brain-computer interfaces (BCIs) are being investigated as an access pathway to communication for individuals with physical disabilities, as the technology obviates the need for voluntary motor control. To date, however, minimal research has investigated the use of BCIs with children. Traditional BCI communication paradigms may be suboptimal given that children with physical disabilities may face delays in cognitive development and in the acquisition of literacy skills. Instead, in this study we explored emotional state as an alternative access pathway to communication. We developed a pediatric BCI to identify positive and negative emotional states from changes in hemodynamic activity of the prefrontal cortex (PFC). To train and test the BCI, 10 neurotypical children aged 8-14 underwent a series of emotion-induction trials over four experimental sessions (one offline, three online) while their brain activity was measured with functional near-infrared spectroscopy (fNIRS). Visual neurofeedback was used to assist participants in regulating their emotional states and modulating their hemodynamic activity in response to the affective stimuli. Child-specific linear discriminant classifiers were trained on cumulatively available data from previous sessions and adaptively updated throughout each session. Average online valence classification exceeded chance across participants by the last two online sessions (with 7 and 8 of the 10 participants performing better than chance in Sessions 3 and 4, respectively). There was a small but significant positive correlation between online BCI performance and age, suggesting that older participants were more successful at regulating their emotional state and/or brain activity. Variability was seen across participants with regard to BCI performance, hemodynamic response, and discriminatory features and channels. Retrospective offline analyses yielded accuracies comparable to those reported in adult affective BCI studies using fNIRS. Affective fNIRS-BCIs appear to be feasible for school-aged children, but to further gauge the practical potential of this type of BCI, replication with more training sessions, larger sample sizes, and end-users with disabilities is necessary.
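The cumulative, per-child training scheme described above can be sketched as follows: before each online session, a linear discriminant classifier is refit on all data collected so far and then evaluated on the new session. Feature extraction from the fNIRS signal is omitted; the arrays below are random placeholders.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def run_sessions(sessions):
    """sessions: list of (features, valence_labels) pairs, one per session."""
    X_train, y_train = sessions[0]                 # offline calibration session
    for i, (X_new, y_new) in enumerate(sessions[1:], start=2):
        clf = LinearDiscriminantAnalysis()
        clf.fit(X_train, y_train)                  # retrain on all past data
        print(f"session {i} accuracy: {clf.score(X_new, y_new):.2f}")
        X_train = np.vstack([X_train, X_new])      # accumulate for next session
        y_train = np.concatenate([y_train, y_new])

# Demo with random placeholder features (20 trials x 10 channels per session).
rng = np.random.default_rng(0)
fake = [(rng.normal(size=(20, 10)), rng.integers(0, 2, 20)) for _ in range(4)]
run_sessions(fake)
```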
Affiliation(s)
- Erica D. Floreani
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Silvia Orlandi
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Department of Biomedical Engineering, University of Bologna, Bologna, Italy
- Tom Chau
- Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
18
Leterme G, Guigou C, Guenser G, Bigand E, Bozorg Grayeli A. Effect of Sound Coding Strategies on Music Perception with a Cochlear Implant. J Clin Med 2022; 11:jcm11154425. [PMID: 35956042 PMCID: PMC9369156 DOI: 10.3390/jcm11154425]
Abstract
The goal of this study was to evaluate the music perception of cochlear implantees with two different sound processing strategies. Methods: Twenty-one patients with unilateral or bilateral cochlear implants (Oticon Medical®) were included. A music trial evaluated emotions (sad versus happy based on tempo and/or minor versus major modes) with three tests of increasing difficulty. This was followed by a test evaluating the perception of musical dissonances (marked out of 10). A novel sound processing strategy reducing spectral distortions (CrystalisXDP, Oticon Medical) was compared to the standard strategy (main peak interleaved sampling). Each strategy was used one week before the music trial. Results: Total music score was higher with CrystalisXDP than with the standard strategy. Nine patients (21%) categorized music above the random level (>5) on test 3 only based on mode with either of the strategies. In this group, CrystalisXDP improved the performances. For dissonance detection, 17 patients (40%) scored above random level with either of the strategies. In this group, CrystalisXDP did not improve the performances. Conclusions: CrystalisXDP, which enhances spectral cues, seemed to improve the categorization of happy versus sad music. Spectral cues could participate in musical emotions in cochlear implantees and improve the quality of musical perception.
Affiliation(s)
- Gaëlle Leterme
- Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France; (G.L.); (G.G.); (A.B.G.)
- ImVia Research Laboratory, Bourgogne-Franche-Comté University, 21000 Dijon, France
- Caroline Guigou
- Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France; (G.L.); (G.G.); (A.B.G.)
- ImVia Research Laboratory, Bourgogne-Franche-Comté University, 21000 Dijon, France
- Geoffrey Guenser
- Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France; (G.L.); (G.G.); (A.B.G.)
- Emmanuel Bigand
- LEAD Research Laboratory, CNRS UMR 5022, Bourgogne-Franche-Comté University, 21000 Dijon, France;
- Alexis Bozorg Grayeli
- Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France; (G.L.); (G.G.); (A.B.G.)
- ImVia Research Laboratory, Bourgogne-Franche-Comté University, 21000 Dijon, France
19
Ho J, Mann DS, Hickok G, Chubb C. Inadequate pitch-difference sensitivity prevents half of all listeners from discriminating major vs minor tone sequences. The Journal of the Acoustical Society of America 2022; 151:3152. [PMID: 35649937 PMCID: PMC9098252 DOI: 10.1121/10.0010161]
Abstract
Substantial evidence suggests that sensitivity to the difference between the major vs minor musical scales may be bimodally distributed. Much of this evidence comes from experiments using the "3-task." On each trial in the 3-task, the listener hears a rapid, random sequence of tones containing equal numbers of notes of either a G major or G minor triad and strives (with feedback) to judge which type of "tone-scramble" it was. This study asks whether the bimodal distribution in 3-task performance is due to variation (across listeners) in sensitivity to differences in pitch. On each trial in a "pitch-difference task," the listener hears two tones and judges whether the second tone is higher or lower than the first. When the first tone is roved (rather than fixed throughout the task), performance varies dramatically across listeners with median threshold approximately equal to a quarter-tone. Strikingly, nearly all listeners with thresholds higher than a quarter-tone performed near chance in the 3-task. Across listeners with thresholds below a quarter-tone, 3-task performance was uniformly distributed from chance to ceiling; thus, the large, lower mode of the distribution in 3-task performance is produced mainly by listeners with roved pitch-difference thresholds greater than a quarter-tone.
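For reference, the quarter-tone threshold that splits the two groups corresponds to a fixed frequency ratio, computed in this small example:

```python
# A semitone is 2**(1/12), so a quarter-tone is 2**(1/24), i.e. 50 cents;
# around 440 Hz that is a difference of roughly 13 Hz.
ratio = 2 ** (1 / 24)
print(f"quarter-tone ratio: {ratio:.4f}")          # ~1.0293
print(f"cents: {1200 * (1 / 24):.0f}")             # 50 cents
print(f"at 440 Hz: {440 * (ratio - 1):.1f} Hz")    # ~12.9 Hz
```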
Affiliation(s)
- Joselyn Ho
- Department of Cognitive Sciences, University of California Irvine, Irvine, California 92617, USA
- Daniel S Mann
- Department of Cognitive Sciences, University of California Irvine, Irvine, California 92617, USA
- Gregory Hickok
- Department of Cognitive Sciences, University of California Irvine, Irvine, California 92617, USA
- Charles Chubb
- Department of Cognitive Sciences, University of California Irvine, Irvine, California 92617, USA
20
Inguscio BMS, Mancini P, Greco A, Nicastri M, Giallini I, Leone CA, Grassia R, Di Nardo W, Di Cesare T, Rossi F, Canale A, Albera A, Giorgi A, Malerba P, Babiloni F, Cartocci G. ‘Musical effort’ and ‘musical pleasantness’: a pilot study on the neurophysiological correlates of classical music listening in adults normal hearing and unilateral cochlear implant users. Hearing, Balance and Communication 2022. [DOI: 10.1080/21695717.2022.2079325]
Affiliation(s)
- Patrizia Mancini
- Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Antonio Greco
- Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Maria Nicastri
- Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Ilaria Giallini
- Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Carlo Antonio Leone
- Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy
- Rosa Grassia
- Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy
- Walter Di Nardo
- Otorhinolaryngology and Physiology, Catholic University of Rome, Rome, Italy
- Tiziana Di Cesare
- Otorhinolaryngology and Physiology, Catholic University of Rome, Rome, Italy
- Federica Rossi
- Otorhinolaryngology and Physiology, Catholic University of Rome, Rome, Italy
- Andrea Canale
- Division of Otorhinolaryngology, Department of Surgical Sciences, University of Turin, Italy
- Andrea Albera
- Division of Otorhinolaryngology, Department of Surgical Sciences, University of Turin, Italy
- Fabio Babiloni
- BrainSigns Srl, Rome, Italy
- Department of Computer Science, Hangzhou Dianzi University, Hangzhou, China
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- Giulia Cartocci
- BrainSigns Srl, Rome, Italy
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
21
Israel A, Rosenboim M, Shavit T. “Let the music play” – experimental study on background music and time preference. Journal of Cognitive Psychology 2022. [DOI: 10.1080/20445911.2022.2029457]
Affiliation(s)
- Avi Israel
- Department of Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Mosi Rosenboim
- Department of Management, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Tal Shavit
- The Department of Economics and Business Administration, Ariel University, Ariel, Israel
22
Music and the Cerebellum. Advances in Experimental Medicine and Biology 2022; 1378:195-212. [DOI: 10.1007/978-3-030-99550-8_13]
23
Abstract
People tend to choose smaller, immediate rewards over larger, delayed rewards. This phenomenon is thought to be associated with emotional engagement. However, few studies have demonstrated the real-time impact of incidental emotions on intertemporal choices. This research investigated the effects of music-induced incidental emotions on intertemporal choices, during which happy or sad music was played simultaneously. We found that music-induced happiness made participants prefer smaller-but-sooner rewards (SS), whereas music-induced sadness made participants prefer larger-but-later rewards (LL). Time perception partially mediated this effect: the greater the perceived temporal difference, the more likely they were to prefer SS. Tempo and mode were then manipulated to disentangle the effects of arousal and mood on intertemporal choices. Only tempo-induced arousal, but not mode-induced mood, affected intertemporal choices. These results suggest the role of arousal in intertemporal decision making and provide evidence in support of equate-to-differentiate theory with regard to the non-compensatory mechanism in intertemporal choices.
Affiliation(s)
- Linshu Zhou
- Music College, Shanghai Normal University, Shanghai, People's Republic of China
- Yufang Yang
- CAS Key Laboratory of Behavioural Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, People's Republic of China
- Shu Li
- CAS Key Laboratory of Behavioural Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, People's Republic of China
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, People's Republic of China
24
Lahdelma I, Athanasopoulos G, Eerola T. Sweetness is in the ear of the beholder: chord preference across United Kingdom and Pakistani listeners. Ann N Y Acad Sci 2021; 1502:72-84. [PMID: 34240419 DOI: 10.1111/nyas.14655]
Abstract
The majority of research in the field of music perception has been conducted with Western participants, and it has remained unclear which aspects of music perception are culture dependent, and which are universal. The current study compared how participants unfamiliar with Western music (people from the Khowar and Kalash tribes native to Northwest Pakistan with minimal exposure to Western music) perceive affect (positive versus negative) in musical chords compared with United Kingdom (UK) listeners, as well as the overall preference for these chords. The stimuli consisted of four distinct chord types (major, minor, augmented, and chromatic) and were played as both vertical blocks (pitches presented concurrently) and arpeggios (pitches presented successively). The results suggest that the Western listener major-positive minor-negative affective distinction is opposite for Northwest Pakistani listeners, arguably because of the reversed prevalence of these chords in the two music cultures. The aversion to the harsh dissonance of the chromatic cluster is present cross-culturally, but the preference for the consonance of the major triad varies between UK and Northwest Pakistani listeners, depending on cultural familiarity. Our findings imply not only notable cultural variation but also commonalities in chord perception across Western and non-Western listeners.
Affiliation(s)
- Imre Lahdelma
- Department of Music, Durham University, Durham, United Kingdom
- Tuomas Eerola
- Department of Music, Durham University, Durham, United Kingdom
25
|
Battcock A, Schutz M. Emotion and expertise: how listeners with formal music training use cues to perceive emotion. Psychol Res 2021; 86:66-86. [PMID: 33511447 PMCID: PMC8821494 DOI: 10.1007/s00426-020-01467-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2020] [Accepted: 12/16/2020] [Indexed: 11/25/2022]
Abstract
Although studies of musical emotion often focus on the role of the composer and performer, the communicative process is also influenced by the listener's musical background and experience. Given the equivocal nature of evidence regarding the effects of musical training, the role of listener expertise in conveyed musical emotion remains opaque. Here we examine emotional responses of musically trained listeners across two experiments using (1) eight-measure excerpts and (2) musically resolved excerpts, and compare them to responses collected from untrained listeners in Battcock and Schutz (2019). In each experiment, 30 participants with six or more years of music training rated perceived emotion for 48 excerpts from Bach's Well-Tempered Clavier (WTC) using scales of valence and arousal. Models of listener ratings predict more variance in trained than in untrained listeners across both experiments. More importantly, however, we observe a shift in cue weights related to training. Using commonality analysis, Fisher Z score comparisons, and margin-of-error calculations, we show that timing and mode affect untrained listeners equally, whereas mode plays a significantly stronger role than timing for trained listeners. This is not to say the emotional messages are less well recognized by untrained listeners; rather, training appears to shift the relative weight of cues used in making evaluations. These results clarify music training's potential impact on the specific effects of cues in conveying musical emotion.
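With two predictors, the commonality analysis mentioned above reduces to comparing the R² of three nested regressions. A minimal sketch, assuming hypothetical per-excerpt arrays `mode`, `timing`, and `valence`; the paper's exact cue codings are not reproduced here.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with intercept; X is a list of 1-D predictor arrays."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

def commonality(mode, timing, valence):
    """Partition explained variance into unique and shared components."""
    r2_full = r_squared([mode, timing], valence)
    r2_mode = r_squared([mode], valence)
    r2_timing = r_squared([timing], valence)
    return {
        "unique_mode": r2_full - r2_timing,
        "unique_timing": r2_full - r2_mode,
        "common": r2_mode + r2_timing - r2_full,
    }
```

Comparing `unique_mode` against `unique_timing` for trained versus untrained samples mirrors the cue-weight contrast the abstract describes.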
Collapse
Affiliation(s)
- Aimee Battcock
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Psychology Building (PC), Room 102, 1280 Main Street West, Hamilton, ON, L8S 4K1, Canada.
| | - Michael Schutz
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Psychology Building (PC), Room 102, 1280 Main Street West, Hamilton, ON, L8S 4K1, Canada; School of the Arts, McMaster University, Hamilton, Canada
| |
Collapse
|
26
|
Athanasopoulos G, Eerola T, Lahdelma I, Kaliakatsos-Papakostas M. Harmonic organisation conveys both universal and culture-specific cues for emotional expression in music. PLoS One 2021; 16:e0244964. [PMID: 33439887 PMCID: PMC7806179 DOI: 10.1371/journal.pone.0244964] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2020] [Accepted: 12/19/2020] [Indexed: 11/23/2022] Open
Abstract
Previous research conducted on the cross-cultural perception of music and its emotional content has established that emotions can be communicated across cultures at least on a rudimentary level. Here, we report a cross-cultural study with participants originating from two tribes in northwest Pakistan (Khow and Kalash) and the United Kingdom, with both groups being naïve to the music of the other respective culture. We explored how participants assessed emotional connotations of various Western and non-Western harmonisation styles, and whether cultural familiarity with a harmonic idiom such as major and minor mode would consistently relate to emotion communication. The results indicate that Western concepts of harmony are not relevant for participants unexposed to Western music when other emotional cues (tempo, pitch height, articulation, timbre) are kept relatively constant. At the same time, harmonic style alone has the ability to colour the emotional expression in music if it taps the appropriate cultural connotations. The preference for one harmonisation style over another, including the major-happy/minor-sad distinction, is influenced by culture. Finally, our findings suggest that although differences emerge across different harmonisation styles, acoustic roughness influences the expression of emotion in similar ways across cultures; preference for consonance however seems to be dependent on cultural familiarity.
Collapse
Affiliation(s)
| | - Tuomas Eerola
- Dept of Music, Durham University, Durham, United Kingdom
| | - Imre Lahdelma
- Dept of Music, Durham University, Durham, United Kingdom
| | | |
Collapse
|
27
|
Pitch direction on the perception of major and minor modes. Atten Percept Psychophys 2020; 83:399-414. [PMID: 33230730 DOI: 10.3758/s13414-020-02198-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/30/2020] [Indexed: 11/08/2022]
Abstract
One factor affecting the qualia of music perception is the major/minor mode distinction. Major modes are perceived as more arousing, happier, more positive, brighter, and less awkward than minor modes. This difference in the emotionality of modes is also affected by pitch direction, with ascending pitch associated with positive affect and descending pitch with negative affect. The present study examined whether pitch direction influences the identification of major versus minor musical modes. In six experiments, participants were familiarized with ascending and descending major and minor modes. We then played ascending and descending scales or simple eight-note melodies and asked listeners to identify the mode (major or minor). Identification of mode was moderated by pitch direction: major modes were identified more accurately when played with ascending pitch, and minor modes were identified more accurately when played with descending pitch. Additionally, we replicated the difference in emotional affect between major and minor modes. The crossover pattern in mode identification may result from dual activation of positive and negative constructs under specific combinations of mode and pitch direction.
Collapse
|
28
|
Wagener GL, Berning M, Costa AP, Steffgen G, Melzer A. Effects of Emotional Music on Facial Emotion Recognition in Children with Autism Spectrum Disorder (ASD). J Autism Dev Disord 2020; 51:3256-3265. [PMID: 33201423 DOI: 10.1007/s10803-020-04781-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/04/2020] [Indexed: 01/02/2023]
Abstract
Impaired facial emotion recognition in children with Autism Spectrum Disorder (ASD) stands in contrast to their intact recognition of emotion in music. This study tested whether emotion-congruent music enhances facial emotion recognition. Accuracy and reaction times were assessed for 19 children with ASD and 31 controls in a recognition task with angry, happy, or sad faces. Stimuli were shown with emotionally congruent music, emotionally incongruent music, or no music. Although children with ASD had longer reaction times than controls, accuracy differed only when incongruent or no music was played, indicating that congruent emotional music can boost facial emotion recognition in children with ASD. Emotion-congruent music may thus support emotion recognition in children with ASD and improve their social skills.
Collapse
Affiliation(s)
- Gary L Wagener
- Department of Behavioural and Cognitive Sciences, University of Luxembourg, 11, Porte des Sciences, 4366, Esch-sur-Alzette, Luxembourg.
| | - Madeleine Berning
- Institute of Psychology, University of Trier, Universitätsring 15, 54286, Trier, Germany
| | - Andreia P Costa
- Department of Behavioural and Cognitive Sciences, University of Luxembourg, 11, Porte des Sciences, 4366, Esch-sur-Alzette, Luxembourg
| | - Georges Steffgen
- Department of Behavioural and Cognitive Sciences, University of Luxembourg, 11, Porte des Sciences, 4366, Esch-sur-Alzette, Luxembourg
| | - André Melzer
- Department of Behavioural and Cognitive Sciences, University of Luxembourg, 11, Porte des Sciences, 4366, Esch-sur-Alzette, Luxembourg
| |
Collapse
|
29
|
Adler SA, Comishen KJ, Wong-Kee-You AMB, Chubb C. Sensitivity to major versus minor musical modes is bimodally distributed in young infants. J Acoust Soc Am 2020; 147:3758. [PMID: 32611142 PMCID: PMC7274811 DOI: 10.1121/10.0001349] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/30/2019] [Revised: 04/21/2020] [Accepted: 05/18/2020] [Indexed: 05/19/2023]
Abstract
The difference between major and minor scales plays a central role in Western music. However, recent research using random tone sequences ("tone-scrambles") has revealed a dramatically bimodal distribution in sensitivity to this difference: 30% of listeners are near perfect in classifying major versus minor tone-scrambles, while the other 70% perform near chance. Here, we investigated whether infants show the same pattern. The anticipatory eye movements of thirty 6-month-old infants were monitored during trials in which the infants heard a tone-scramble whose quality (major versus minor) signalled the location (right versus left) where a subsequent visual stimulus (the target) would appear. For 33% of the infants, these anticipatory eye movements predicted the target location with near-perfect accuracy; for the other 67%, the anticipatory eye movements were unrelated to the target location. In conclusion, 6-month-old infants show the same distribution as adults in sensitivity to the difference between major and minor tone-scrambles.
Collapse
Affiliation(s)
- Scott A Adler
- Department of Psychology, York University, 4700 Keele Street, Toronto, Ontario M3J 1P3, Canada
| | - Kyle J Comishen
- Department of Psychology, York University, 4700 Keele Street, Toronto, Ontario M3J 1P3, Canada
| | - Audrey M B Wong-Kee-You
- Department of Psychology, York University, 4700 Keele Street, Toronto, Ontario M3J 1P3, Canada
| | - Charles Chubb
- Department of Cognitive Science, University of California, Irvine, Irvine, California 92697-5100, USA
| |
Collapse
|
30
|
Felt Emotion Elicited by Music: Are Sensitivities to Various Musical Features Different for Young Children and Young Adults? Span J Psychol 2020; 23:e8. [PMID: 32434622 DOI: 10.1017/sjp.2020.8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/16/2023]
Abstract
In the present study, we extended the question of how people access emotion through nonverbal information by testing the effects of simple (tempo) and complex (timbre) acoustic features of music on felt emotion. Three- to six-year-old children (n = 100; 48% female) and university students (n = 64; 37.5% female) took part in three experiments in which acoustic features of music were manipulated to determine whether there are links between perceived and felt emotion in processing musical segments. After exposure to segments of music, participants completed a felt-emotion judgment task. Chi-square tests showed significant tempo effects, ps < .001 (Exp. 1), and strong combined effects of mode and tempo on felt emotion, and the strength of these effects changed with age. These combined effects were significantly stronger under the tempo-and-mode consistent condition, ps < .001 (Exp. 2), than under the inconsistent condition (Exp. 3). In other words, simple acoustic features had stronger effects on felt emotion than complex ones, and sensitivity to these features, especially complex features, changed with age. These findings suggest that the felt emotion evoked by the acoustic features of a given piece of music may reflect both innate abilities and the strength of mappings between acoustic features and emotion.
Collapse
|
31
|
Giroux SV, Caparos S, Gosselin N, Rutembesa E, Blanchette I. Impact of Music on Working Memory in Rwanda. Front Psychol 2020; 11:774. [PMID: 32411054 PMCID: PMC7198829 DOI: 10.3389/fpsyg.2020.00774] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2019] [Accepted: 03/30/2020] [Indexed: 12/02/2022] Open
Abstract
Previous research shows that listening to pleasant, stimulating, and familiar music is likely to improve working-memory performance. The benefits of music on cognition have been widely studied in Western populations, but not in other cultures. The purpose of this study was to explore the impact of music on working memory in a non-Western sociocultural context: Rwanda. One hundred and nineteen participants were randomly assigned to a control group (short story) or to one of four musical conditions varying on two dimensions: arousal (relaxing, stimulating) and cultural origin (Western, Rwandan). Working memory was measured with a behavioral task, the n-back paradigm, before and after listening to the music (or to the short story in the control condition). Unlike previous studies with Western samples, our results with this Rwandan sample did not show any positive effect of familiar, pleasant, and stimulating music on working memory. Performance on the n-back task generally improved from pre- to post-test in all conditions, but this improvement was smaller in participants who listened to familiar Rwandan music than in those who listened to unfamiliar Western music or to a short story. The study highlights the importance of considering the sociocultural context in research examining the impact of music on cognition. Although various aspects of music are considered universal, there may be cultural differences that limit the generalization of certain effects of music on cognition or that modulate the characteristics that favor its beneficial impact.
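For readers unfamiliar with the paradigm, an n-back block is scored from hits and false alarms, commonly summarized as d'. A minimal sketch with hypothetical inputs; the study's exact scoring procedure is not specified in the abstract.

```python
import numpy as np
from scipy.stats import norm

def nback_dprime(stimuli, responses, n=2):
    """Score an n-back block: d' from hit and false-alarm rates.

    stimuli:   sequence of items (e.g. letters)
    responses: booleans, True where the participant signalled a match
    """
    targets = [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]
    hits = sum(r and t for r, t in zip(responses, targets))
    fas = sum(r and not t for r, t in zip(responses, targets))
    n_t = sum(targets)
    n_nt = len(stimuli) - n_t
    # log-linear correction keeps rates away from 0 and 1
    hit_rate = (hits + 0.5) / (n_t + 1)
    fa_rate = (fas + 0.5) / (n_nt + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)
```

Comparing d' before and after the listening phase, per condition, mirrors the pre/post design the abstract describes.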
Collapse
Affiliation(s)
- Sara-Valérie Giroux
- Groupe de Recherche CogNAC, Département de Psychologie, Université du Québec à Trois-Rivières, Trois-Rivières, Québec, QC, Canada
| | - Serge Caparos
- DysCo Laboratory, Département de Psychologie, Université Paris 8, Saint-Denis, France; Institut Universitaire de France, Paris, France
| | - Nathalie Gosselin
- Groupe de Recherche CogNAC, Département de Psychologie, Université du Québec à Trois-Rivières, Trois-Rivières, Québec, QC, Canada; International Laboratory for Brain, Music, and Sound Research, Université de Montréal, Montréal, Québec, QC, Canada; Center for Research on Brain, Language and Music, Université McGill, Montréal, Québec, QC, Canada
| | - Eugène Rutembesa
- College of Medicine and Health Sciences, University of Rwanda, Kigali, Rwanda
| | - Isabelle Blanchette
- Groupe de Recherche CogNAC, Département de Psychologie, Université du Québec à Trois-Rivières, Trois-Rivières, Québec, QC, Canada; École de Psychologie, Université Laval, Québec, Québec, QC, Canada
| |
Collapse
|
32
|
Kragness HE, Eitel MJ, Baksh AM, Trainor LJ. Evidence for early arousal-based differentiation of emotions in children's musical production. Dev Sci 2020; 24:e12982. [PMID: 32358988 DOI: 10.1111/desc.12982] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2019] [Revised: 04/15/2020] [Accepted: 04/17/2020] [Indexed: 11/28/2022]
Abstract
Accurate perception and production of emotional states is important for successful social interactions across the lifespan. Previous research has shown that when identifying emotion in faces, preschool children are more likely to confuse emotions that share valence but differ in arousal (e.g. sadness and anger) than emotions that share arousal but differ in valence (e.g. anger and joy). Here, we examined the influence of valence and arousal on children's production of emotion in music. Three-, 5- and 7-year-old children recruited from the greater Hamilton area (N = 74) 'performed' music to produce emotions using a self-pacing paradigm, in which participants controlled the onset and offset of each chord in a musical sequence by repeatedly pressing and lifting the same key on a MIDI piano; key-press velocity controlled the loudness of each chord. Results showed that (a) differentiation of emotions by 5-year-olds was mainly driven by the arousal of the target emotion, with differentiation based on both valence and arousal by 7 years, and (b) tempo and loudness were used to differentiate emotions earlier in development than articulation. The results indicate that the developmental trajectory of emotion understanding in music may differ from that in other domains.
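The self-pacing paradigm yields an onset, an offset, and a key-press velocity for each chord, from which the three cues analysed (tempo, loudness, articulation) follow directly. A minimal sketch under that reading; the `notes` structure and the one-chord-per-beat tempo convention are assumptions, not details from the paper.

```python
import numpy as np

def performance_cues(notes):
    """notes: list of (onset_s, offset_s, velocity) per chord, in played order."""
    onsets = np.array([n[0] for n in notes])
    offsets = np.array([n[1] for n in notes])
    velocities = np.array([n[2] for n in notes])

    ioi = np.diff(onsets)                      # inter-onset intervals
    tempo_bpm = 60.0 / ioi.mean()              # assuming one chord per beat
    loudness = velocities.mean()               # MIDI velocity, 0-127
    # articulation: fraction of each IOI during which the chord actually sounds
    articulation = ((offsets[:-1] - onsets[:-1]) / ioi).mean()
    return tempo_bpm, loudness, articulation
```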
Collapse
Affiliation(s)
- Haley E Kragness
- Department of Psychology, McMaster University, Hamilton, ON, Canada; Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada; Department of Psychology, University of Toronto Mississauga, Mississauga, ON, Canada
| | - Matthew J Eitel
- Department of Psychology, McMaster University, Hamilton, ON, Canada
| | - Ammaarah M Baksh
- Department of Psychology, McMaster University, Hamilton, ON, Canada
| | - Laurel J Trainor
- Department of Psychology, McMaster University, Hamilton, ON, Canada; McMaster Institute for Music and the Mind, McMaster University, Hamilton, ON, Canada; Rotman Research Institute, Baycrest Hospital, Toronto, ON, Canada
| |
Collapse
|
33
|
Proverbio AM, Benedetto F, Guazzone M. Shared neural mechanisms for processing emotions in music and vocalizations. Eur J Neurosci 2019; 51:1987-2007. [DOI: 10.1111/ejn.14650] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2019] [Revised: 11/21/2019] [Accepted: 12/05/2019] [Indexed: 12/21/2022]
Affiliation(s)
- Alice Mado Proverbio
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Milan Center for Neuroscience, Milan, Italy
| | - Francesco Benedetto
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Milan Center for Neuroscience, Milan, Italy
| | - Martina Guazzone
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Milan Center for Neuroscience, Milan, Italy
| |
Collapse
|
34
|
Vidas D, Calligeros R, Nelson NL, Dingle GA. Development of emotion recognition in popular music and vocal bursts. Cogn Emot 2019; 34:906-919. [PMID: 31805815 DOI: 10.1080/02699931.2019.1700482] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
Previous research on the development of emotion recognition in music has focused on classical rather than popular music. Such research does not consider the impact of lyrics on judgements of emotion in music, an impact that may change across development. We had 172 children, adolescents, and adults (7- to 20-year-olds) judge emotions in popular music. In song excerpts, the melody and the lyrics had either congruent valence (e.g. happy lyrics and melody) or incongruent valence (e.g. scared lyrics, happy melody). We also examined participants' judgements of vocal bursts, and whether emotion identification was linked to emotion lexicon. Recognition of emotions in congruent music increased with age. For incongruent music, age was positively associated with judging the emotion by the melody. For incongruent music with happy or sad lyrics, younger participants were more likely to answer with the emotion of the lyrics. For scared incongruent music, older adolescents were more likely to answer with the lyrics than either older or younger participants. Age groups did not differ in their emotion lexicons or in recognition of emotion in vocal bursts. Whether children use lyrics or melody to determine the emotion of popular music may depend on the emotion conveyed.
Collapse
Affiliation(s)
- Dianna Vidas
- School of Psychology, University of Queensland, St Lucia, Australia
| | - Renee Calligeros
- School of Psychology, University of Queensland, St Lucia, Australia
| | - Nicole L Nelson
- School of Psychology, University of Queensland, St Lucia, Australia
| | | |
Collapse
|
35
|
Manno FAM, Lau C, Fernandez-Ruiz J, Manno SHC, Cheng SH, Barrios FA. The human amygdala disconnecting from auditory cortex preferentially discriminates musical sound of uncertain emotion by altering hemispheric weighting. Sci Rep 2019; 9:14787. [PMID: 31615998 PMCID: PMC6794305 DOI: 10.1038/s41598-019-50042-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2019] [Accepted: 08/24/2019] [Indexed: 02/06/2023] Open
Abstract
How do humans discriminate emotion from non-emotion? The specific psychophysical cues and neural responses involved in resolving emotional information in sound are unknown. In this study we used a discrimination psychophysical-fMRI sparse-sampling paradigm to locate threshold responses to happy and sad acoustic stimuli. The fine structure and envelope of the auditory signals were covaried to manipulate emotional certainty. We report that emotion identification at threshold in music relies on fine-structure cues. The auditory cortex was activated, but its response did not vary with emotional uncertainty. Amygdala activation was modulated by emotion identification and was absent when emotional stimuli were identifiable only at chance, especially in the left hemisphere. The right amygdala was considerably more deactivated in response to uncertain emotion. The threshold of emotion was signified by right amygdala deactivation together with left amygdala activation exceeding right amygdala activation. Functional sex differences were noted during binaural presentations of uncertain emotional stimuli, where the right amygdala showed larger activation in females. Negative-control (silent stimuli) experiments investigated sparse sampling of silence to ensure that the modulation effects were inherent to emotional resolvability. No functional modulation of Heschl's gyrus occurred during silence; however, during rest the amygdala baseline state was asymmetrically lateralized. The evidence indicates that changing patterns of activation and deactivation between the left and right amygdala are a hallmark feature of discriminating emotion from non-emotion in music.
Collapse
Affiliation(s)
- Francis A M Manno
- School of Biomedical Engineering, Faculty of Engineering, The University of Sydney, Sydney, New South Wales, Australia.
- Department of Physics, City University of Hong Kong, HKSAR, China.
| | - Condon Lau
- Department of Physics, City University of Hong Kong, HKSAR, China.
| | - Juan Fernandez-Ruiz
- Departamento de Fisiología, Facultad de Medicina, Universidad Nacional Autónoma de México, Mexico City, 04510, Mexico
| | | | - Shuk Han Cheng
- Department of Biomedical Sciences, City University of Hong Kong, HKSAR, China
| | - Fernando A Barrios
- Instituto de Neurobiología, Universidad Nacional Autónoma de México, Juriquilla, Querétaro, Mexico.
| |
Collapse
|
36
|
Tay RYL, Ng BC. Effects of affective priming through music on the use of emotion words. PLoS One 2019; 14:e0214482. [PMID: 30990819 PMCID: PMC6467386 DOI: 10.1371/journal.pone.0214482] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2018] [Accepted: 03/13/2019] [Indexed: 11/23/2022] Open
Abstract
Understanding how music can evoke emotions and in turn affect language use has significant implications not only in clinical settings but also for the emotional development of children. The relationship between music and emotion is an intricate one that has been closely studied. However, how auditory priming influences the use of emotion words remains largely unknown. The main aim of this study was to examine how manipulating mode and tempo in music affects induced emotions and, subsequently, the use of emotion words. Fifty university students in Singapore were asked to select emotion words after exposure to various music excerpts. The results showed that major modes and faster tempos elicited greater responses for positive words and high-arousal words, respectively, while minor modes elicited more high-arousal words and original tempos resulted in more positive words being selected. In the Major-Fast, Major-Slow, and Minor-Slow conditions, positive correlations were found between the number of high-arousal words and their rated intensities. Upon further analysis, the categorization of emotion words differed from the circumplex model. Taken together, the findings highlight the prominence of affective auditory priming and allow us to better understand our emotive responses to music.
Collapse
Affiliation(s)
- Rosabel Yu Ling Tay
- Linguistics and Multilingual Studies, School of Humanities, Nanyang Technological University, Singapore, Singapore
| | - Bee Chin Ng
- Linguistics and Multilingual Studies, School of Humanities, Nanyang Technological University, Singapore, Singapore
| |
Collapse
|
37
|
Di Mauro M, Toffalini E, Grassi M, Petrini K. Effect of Long-Term Music Training on Emotion Perception From Drumming Improvisation. Front Psychol 2018; 9:2168. [PMID: 30473677 PMCID: PMC6237981 DOI: 10.3389/fpsyg.2018.02168] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2018] [Accepted: 10/22/2018] [Indexed: 11/13/2022] Open
Abstract
Long-term music training has been shown to affect a range of cognitive and perceptual abilities. It is less well known, however, whether it also affects the perception of emotion from music, especially purely rhythmic music. We therefore asked 16 non-musicians, 16 musicians with no drumming experience, and 16 drummers to judge the level of expressiveness, the valence (positive or negative), and the category of emotion perceived in 96 drumming-improvisation clips (audio-only, video-only, and audio-video) that varied in several musical features (e.g., genre, tempo, complexity, drummer's expressiveness, and drummer's style). Our results show that the level and type of music training influence perceived expressiveness, valence, and emotion in solo drumming improvisation. Non-musicians, non-drummer musicians, and drummers were affected differently by changes in certain characteristics of the performance; for example, musicians (with and without drumming experience) gave greater weight than non-musicians to the visual performance when making their emotional judgments. These findings suggest that besides influencing several cognitive and perceptual abilities, music training also affects how we perceive emotion from music.
Collapse
Affiliation(s)
- Martina Di Mauro
- Department of General Psychology, University of Padua, Padua, Italy
| | - Enrico Toffalini
- Department of General Psychology, University of Padua, Padua, Italy
| | - Massimo Grassi
- Department of General Psychology, University of Padua, Padua, Italy
| | - Karin Petrini
- Department of Psychology, University of Bath, Bath, United Kingdom
| |
Collapse
|
38
|
Akkermans J, Schapiro R, Müllensiefen D, Jakubowski K, Shanahan D, Baker D, Busch V, Lothwesen K, Elvers P, Fischinger T, Schlemmer K, Frieler K. Decoding emotions in expressive music performances: A multi-lab replication and extension study. Cogn Emot 2018; 33:1099-1118. [PMID: 30409082 DOI: 10.1080/02699931.2018.1541312] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
With over 560 citations reported on Google Scholar by April 2018, a publication by Juslin and Gabrielsson (1996) presented evidence supporting performers' abilities to communicate, with high accuracy, their intended emotional expressions in music to listeners. Though related studies have been published on this topic, there has yet to be a direct replication of this paper. A replication is warranted given the paper's influence in the field and the implications of its results. The present experiment joins the recent replication effort by producing a five-lab replication using the original methodology. Expressive performances of seven emotions (e.g. happy, sad, angry) by professional musicians were recorded using the same three melodies from the original study. Participants (N = 319) were presented with the recordings and rated how well each emotion matched the emotional quality using a 0-10 scale. The same instruments from the original study (violin, voice, and flute) were used, with the addition of piano. To increase the accessibility of the experiment and allow for a more ecologically valid environment, the recordings were presented using an internet-based survey platform. As an extension to the original study, this experiment investigated how musicality, emotional intelligence, and emotional contagion might explain individual differences in the decoding process. Results showed overall high decoding accuracy (57%) when emotion ratings were aggregated across the sample of participants, similar to the method of analysis in the original study. However, when decoding accuracy was scored for each participant individually, the average accuracy was much lower (31%). Unlike in the original study, the voice was found to be the most expressive instrument. Generalised Linear Mixed Effects Regression modelling revealed that musical training and emotional engagement with music positively influence emotion decoding accuracy.
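The gap between the two accuracy figures (57% vs. 31%) comes down to where the winning emotion is chosen: before or after averaging over participants. A minimal sketch, assuming a hypothetical `ratings` array of shape (participants, recordings, emotions) and an `intended` vector giving the target emotion index for each recording.

```python
import numpy as np

def decoding_accuracy(ratings, intended):
    """ratings: (n_participants, n_recordings, n_emotions); intended: (n_recordings,)."""
    # aggregated: average ratings over participants first, then pick the top emotion
    agg_choice = ratings.mean(axis=0).argmax(axis=1)
    acc_aggregated = (agg_choice == intended).mean()

    # individual: pick the top emotion per participant, then average accuracy
    ind_choice = ratings.argmax(axis=2)
    acc_individual = (ind_choice == intended).mean()

    return acc_aggregated, acc_individual
```

Averaging first smooths out idiosyncratic ratings, which is why the aggregated figure is systematically higher.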
Collapse
Affiliation(s)
- Jessica Akkermans
- Department of Psychology, Goldsmiths, University of London, London, UK
| | - Renee Schapiro
- Department of Psychology, Goldsmiths, University of London, London, UK
| | | | | | - Daniel Shanahan
- College of Humanities and Social Sciences, Louisiana State University, Baton Rouge, LA, USA
| | - David Baker
- College of Humanities and Social Sciences, Louisiana State University, Baton Rouge, LA, USA
| | - Veronika Busch
- Department of Musicology and Music Education, University of Bremen, Bremen, Germany
| | - Kai Lothwesen
- Department of Musicology and Music Education, University of Bremen, Bremen, Germany
| | - Paul Elvers
- Music Department, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
| | - Timo Fischinger
- Music Department, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
| | - Kathrin Schlemmer
- Music Department, Catholic University of Eichstätt-Ingolstadt, Eichstätt, Germany
| | - Klaus Frieler
- Institute for Musicology, University of Music "Franz Liszt" Weimar, Hamburg, Germany
| |
Collapse
|
39
|
Eerola T, Vuoskoski JK, Peltola HR, Putkinen V, Schäfer K. An integrative review of the enjoyment of sadness associated with music. Phys Life Rev 2018; 25:100-121. [DOI: 10.1016/j.plrev.2017.11.016] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2017] [Revised: 10/30/2017] [Accepted: 11/13/2017] [Indexed: 12/17/2022]
|
40
|
Zentner M. A sadness-independent account of the enjoyment of music-evoked sadness: Comment on “An integrative review of the enjoyment of sadness associated with music” by Tuomas Eerola et al. Phys Life Rev 2018. [DOI: 10.1016/j.plrev.2018.03.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
41
|
Abstract
From the beginning of therapeutic research with psychedelics, music listening has been consistently used as a method to guide or support therapeutic experiences during the acute effects of psychedelic drugs. Recent findings point to the potential of music to support meaning-making, emotionality, and mental imagery after the administration of psychedelics, and suggest that music plays an important role in facilitating positive clinical outcomes of psychedelic therapy. This review explores the history of, contemporary research on, and future directions regarding the use of music in psychedelic research and therapy, and argues for more detailed and rigorous investigation of the contribution of music to the treatment of psychiatric disorders within the novel framework of psychedelic therapy.
Collapse
Affiliation(s)
- Frederick S Barrett
- Department of Psychiatry and Behavioral Sciences, Behavioral Pharmacology Research Unit, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Katrin H Preller
- Neuropsychopharmacology and Brain Imaging, Department of Psychiatry, Psychotherapy and Psychosomatics, University Hospital for Psychiatry Zurich, Zurich, Switzerland; Department of Psychiatry, Yale University School of Medicine, New Haven, CT, USA
| | - Mendel Kaelen
- Psychedelic Research Group, Department of Medicine, Imperial College London, London, UK; Wavepaths Ltd, London, UK
| |
Collapse
|
42
|
Livingstone SR, Russo FA. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS One 2018; 13:e0196391. [PMID: 29768426 PMCID: PMC5955500 DOI: 10.1371/journal.pone.0196391] [Citation(s) in RCA: 196] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2017] [Accepted: 04/12/2018] [Indexed: 11/19/2022] Open
Abstract
The RAVDESS is a validated multimodal database of emotional speech and song. The database is gender-balanced, consisting of 24 professional actors vocalizing lexically matched statements in a neutral North American accent. Speech includes calm, happy, sad, angry, fearful, surprise, and disgust expressions, and song contains calm, happy, sad, angry, and fearful emotions. Each expression is produced at two levels of emotional intensity, with an additional neutral expression. All conditions are available in face-and-voice, face-only, and voice-only formats. Each of the 7356 recordings was rated 10 times on emotional validity, intensity, and genuineness. Ratings were provided by 247 individuals who were characteristic of untrained research participants from North America. A further 72 participants provided test-retest data. High levels of emotional validity and test-retest intra-rater reliability were reported. Corrected accuracy and composite "goodness" measures are presented to assist researchers in the selection of stimuli. All recordings are made freely available under a Creative Commons license and can be downloaded at https://doi.org/10.5281/zenodo.1188976.
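Working with the database typically starts from its structured filenames. The sketch below parses identifiers such as 03-01-06-01-02-01-12.wav, following the naming convention published with the database as we understand it; verify the field order against the documentation at the Zenodo link before relying on it.

```python
# Field codes per the RAVDESS naming convention (verify against the documentation).
EMOTIONS = {"01": "neutral", "02": "calm", "03": "happy", "04": "sad",
            "05": "angry", "06": "fearful", "07": "disgust", "08": "surprised"}

def parse_ravdess_name(filename):
    """Parse a RAVDESS filename into its seven labelled fields."""
    stem = filename.rsplit(".", 1)[0]
    modality, channel, emotion, intensity, statement, repetition, actor = stem.split("-")
    return {
        "modality": {"01": "full-AV", "02": "video-only", "03": "audio-only"}[modality],
        "channel": {"01": "speech", "02": "song"}[channel],
        "emotion": EMOTIONS[emotion],
        "intensity": {"01": "normal", "02": "strong"}[intensity],
        "statement": statement,
        "repetition": repetition,
        "actor": int(actor),  # odd = male, even = female in the convention
    }

# e.g. parse_ravdess_name("03-01-06-01-02-01-12.wav")
# -> audio-only speech, fearful, normal intensity, actor 12 (female)
```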
Collapse
Affiliation(s)
- Steven R. Livingstone
- Department of Psychology, Ryerson University, Toronto, Canada
- Department of Computer Science and Information Systems, University of Wisconsin-River Falls, River Falls, WI, United States of America
| | - Frank A. Russo
- Department of Psychology, Ryerson University, Toronto, Canada
| |
Collapse
|
43
|
Cespedes-Guevara J, Eerola T. Music Communicates Affects, Not Basic Emotions - A Constructionist Account of Attribution of Emotional Meanings to Music. Front Psychol 2018; 9:215. [PMID: 29541041 PMCID: PMC5836201 DOI: 10.3389/fpsyg.2018.00215] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2017] [Accepted: 02/08/2018] [Indexed: 12/24/2022] Open
Abstract
Basic Emotion theory has had a tremendous influence on the affective sciences, including music psychology, where most researchers have assumed that music expressivity is constrained to a limited set of basic emotions. Several scholars have suggested that these constraints on musical expressivity are explained by the existence of a shared acoustic code for the expression of emotions in music and speech prosody. In this article we advocate a shift from this focus on basic emotions to a constructionist account. This approach proposes that the perception of emotions in music arises from the interaction of music's ability to express core affects with the influence of top-down and contextual information in the listener's mind. We start by reviewing the problems with the concept of Basic Emotions and the inconsistent evidence that supports it. We also demonstrate how decades of developmental and cross-cultural research on music and emotional speech have failed to produce convincing findings that music expressivity is built upon a set of biologically pre-determined basic emotions. We then examine the cue-emotion consistencies between music and speech, and show how they support a parsimonious explanation in which musical expressivity is grounded in two dimensions of core affect (arousal and valence). Next, we explain how listeners' reliable identification of basic emotions in music does not arise from categorical boundaries in the stimuli, but from processes that facilitate categorical perception, such as the use of stereotyped stimuli and close-ended response formats, psychological processes of construction of mental prototypes, and contextual information. Finally, we outline our proposal for a constructionist account of the perception of emotions in music, and spell out the ways in which this approach can resolve past conflicting findings. We conclude by providing explicit pointers about the methodological choices that will be vital to move beyond the popular Basic Emotion paradigm and begin untangling the emergence of emotional experiences with music in the actual contexts in which they occur.
Collapse
Affiliation(s)
| | - Tuomas Eerola
- Department of Music, Durham University, Durham, United Kingdom
| |
Collapse
|
44
|
Abstract
"Do you know that our soul is composed of harmony?" (Leonardo da Vinci). Despite evidence for music-specific mechanisms at the level of pitch-pattern representations, the most fascinating aspect of music is its transmodality. Recent psychological and neuroscientific evidence suggests that music is unique in its coupling of perception, cognition, action, and emotion. This potentially explains why music has been almost inextricably linked to healing processes since time immemorial, and why it should continue to be.
Collapse
Affiliation(s)
- Paulo E Andrade
- Department of Psychology, Goldsmiths, University of London, London, UK
| | | |
Collapse
|
45
|
Schutz M. Acoustic Constraints and Musical Consequences: Exploring Composers' Use of Cues for Musical Emotion. Front Psychol 2017; 8:1402. [PMID: 29249997 PMCID: PMC5715399 DOI: 10.3389/fpsyg.2017.01402] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2016] [Accepted: 08/02/2017] [Indexed: 11/13/2022] Open
Abstract
Emotional communication in music is based in part on the use of pitch and timing, two cues also effective in emotional speech. Corpus analyses of natural speech illustrate that happy utterances tend to be higher and faster than sad ones. Although manipulations altering melodies show that passages changed to be higher and faster sound happier, corpus analyses of unaltered music paralleling those of natural speech have proven challenging. This partly reflects the importance of modality (i.e., major/minor), a powerful musical cue whose use is decidedly imbalanced in Western music. This imbalance poses challenges for creating musical corpora analogous to existing speech corpora for purposes of analyzing emotion. However, a novel examination of music by Bach and Chopin balanced in modality illustrates that, consistent with predictions from speech, their major-key (nominally "happy") pieces are approximately a major second higher and 29% faster than their minor-key pieces (Poon and Schutz, 2015). Although this provides useful evidence for parallels in the use of emotional cues between these domains, it raises questions about how composers "trade off" cue differentiation in music, suggesting promising new research directions. This Focused Review places those results in a broader context, highlighting their connections with previous work on the natural use of cues for musical emotion. Together, these observational findings based on unaltered music (widely recognized for its artistic significance) complement previous experimental work systematically manipulating specific parameters. In doing so, they also provide a useful musical counterpart to fruitful studies of the acoustic cues for emotion found in natural speech.
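The corpus comparison described above reduces to two descriptive statistics per mode. A minimal sketch, assuming a hypothetical list `pieces` of (mode, mean MIDI pitch, tempo in BPM) tuples extracted from encoded scores; the feature-extraction step itself is not shown.

```python
import numpy as np

def mode_cue_differences(pieces):
    """pieces: iterable of (mode, mean_midi_pitch, tempo_bpm), mode in {'major', 'minor'}."""
    major = np.array([(p, t) for m, p, t in pieces if m == "major"])
    minor = np.array([(p, t) for m, p, t in pieces if m == "minor"])

    pitch_diff_semitones = major[:, 0].mean() - minor[:, 0].mean()
    tempo_ratio = major[:, 1].mean() / minor[:, 1].mean()
    # values near 2 semitones (a major second) and 1.29 (29% faster) would
    # reproduce the Poon and Schutz (2015) pattern described in the abstract
    return pitch_diff_semitones, tempo_ratio
```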
Collapse
Affiliation(s)
- Michael Schutz
- Music, Acoustics, Perception, and Learning Lab, McMaster Institute for Music and the Mind, School of the Arts, McMaster University, Hamilton, ON, Canada
| |
Collapse
|
46
|
Dean T, Chubb C. Scale-sensitivity: A cognitive resource basic to music perception. J Acoust Soc Am 2017; 142:1432. [PMID: 28964076 DOI: 10.1121/1.4998572] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
A tone-scramble is a rapid, randomly ordered sequence of pure tones. Chubb, Dickson, Dean, Fagan, Mann, Wright, Guan, Silva, Gregersen, and Kowalski [(2013). J. Acoust. Soc. Am. 134(4), 3067-3078] showed that a task requiring listeners to classify major vs minor tone-scrambles yielded a strikingly bimodal distribution. The current study sought to clarify the nature of the skill required by this task. In each of the "semitone" tasks, all tone-scrambles contained eight each of the notes G5, D6, and G6 (to establish G as the tonic) and eight copies of a target note. The target note was either A♭ or A in the "2" task, B♭ or B in the "3" task, C or D♭ in the "4" task, E♭ or E in the "6" task, and F or G♭ in the "7" task. On each trial, the listener strove to classify each stimulus according to its target note. Performance was best (and nearly equal) in the 2, 3, and 6 tasks, intermediate in the 4 task, and worst in the 7 task. The results were well described by a model in which a single cognitive resource controls performance in all five semitone tasks. This resource is called "scale sensitivity" here because it seems to confer general sensitivity to variations in scale in the presence of a fixed tonic.
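A tone-scramble of the kind described is straightforward to synthesize. A minimal sketch assuming 65-ms pure tones with short onset/offset ramps (durations in the range used in this literature, not taken verbatim from this paper) and equal-tempered tuning; the MIDI note numbers encode the G5/D6/G6 frame plus a target note.

```python
import numpy as np

FS = 44100          # sample rate (Hz)
TONE_DUR = 0.065    # 65-ms tones: an assumption, not a detail from this paper

def midi_to_hz(m):
    return 440.0 * 2 ** ((m - 69) / 12)

def tone_scramble(target_midi, rng=None):
    """8 each of G5, D6, G6 plus 8 copies of the target note, in random order."""
    if rng is None:
        rng = np.random.default_rng()
    G5, D6, G6 = 79, 86, 91                     # MIDI note numbers
    notes = [G5, D6, G6] * 8 + [target_midi] * 8
    rng.shuffle(notes)
    t = np.arange(int(FS * TONE_DUR)) / FS
    ramp = np.minimum(1, np.minimum(t, t[::-1]) / 0.005)   # 5-ms on/off ramps
    tones = [np.sin(2 * np.pi * midi_to_hz(m) * t) * ramp for m in notes]
    return np.concatenate(tones)

# e.g. the "3" task: B (MIDI 83) vs. B-flat (MIDI 82) above the G tonic
major_stim = tone_scramble(83)
minor_stim = tone_scramble(82)
```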
Collapse
Affiliation(s)
- Tyler Dean
- Department of Cognitive Sciences, University of California at Irvine, Irvine, California 92697-5100, USA
| | - Charles Chubb
- Department of Cognitive Sciences, University of California at Irvine, Irvine, California 92697-5100, USA
| |
Collapse
|
47
|
Brattico P, Brattico E, Vuust P. Global Sensory Qualities and Aesthetic Experience in Music. Front Neurosci 2017; 11:159. [PMID: 28424573 PMCID: PMC5380758 DOI: 10.3389/fnins.2017.00159] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2016] [Accepted: 03/13/2017] [Indexed: 11/13/2022] Open
Abstract
A well-known tradition in the study of visual aesthetics holds that the experience of visual beauty is grounded in global computational or statistical properties of the stimulus, for example a scale-invariant Fourier spectrum or self-similarity. Some approaches rely on neural mechanisms, such as efficient computation, processing fluency, or the responsiveness of cells in the primary visual cortex. These proposals are united by the fact that the contributing factors are hypothesized to be global (i.e., they concern the percept as a whole), formal or non-conceptual (i.e., they concern form instead of content), computational and/or statistical, and based on relatively low-level sensory properties. Here we suggest that the study of aesthetic responses to music could benefit from the same approach. Thus, along with local features such as pitch, tuning, consonance/dissonance, harmony, timbre, or beat, global sonic properties too could be viewed as contributing to an aesthetic musical experience. Several such properties are discussed and their neural implementation is reviewed in the light of recent advances in neuroaesthetics.
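One such global statistic, the slope of the power spectrum on log-log axes (the "1/f" exponent familiar from visual aesthetics), is easy to estimate. A minimal sketch, assuming a mono `signal` array at sample rate `fs`; this illustrates the kind of statistic at issue rather than a method prescribed by the review.

```python
import numpy as np

def spectral_slope(signal, fs):
    """Slope of log-power vs. log-frequency; values near -1 are '1/f'-like."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    keep = freqs > 0                              # drop DC before taking logs
    slope, _ = np.polyfit(np.log(freqs[keep]),
                          np.log(spectrum[keep] + 1e-12), 1)
    return slope
```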
Collapse
Affiliation(s)
| | - Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
| | | |
Collapse
|
48
|
Aucouturier JJ, Canonne C. Musical friends and foes: The social cognition of affiliation and control in improvised interactions. Cognition 2017; 161:94-108. [PMID: 28167396 PMCID: PMC5348120 DOI: 10.1016/j.cognition.2017.01.019] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2016] [Revised: 01/18/2017] [Accepted: 01/25/2017] [Indexed: 01/09/2023]
Abstract
A recently emerging view in music cognition holds that music is not only social and participatory in its production, but also in its perception: music is in fact perceived as the sonic trace of social relations between a group of real or virtual agents. While this view appears compatible with a number of intriguing music-cognitive phenomena, such as the links between beat entrainment and prosocial behaviour or between strong musical emotions and empathy, direct evidence is lacking that listeners are able to use the acoustic features of a musical interaction to infer the affiliatory or controlling nature of an underlying social intention. We created a novel experimental situation in which we asked expert music improvisers to communicate five types of non-musical social intentions, such as being domineering, disdainful, or conciliatory, to one another using only musical interaction. Using a combination of decoding studies and computational and psychoacoustical analyses, we show that both musically trained and untrained listeners can recognize relational intentions encoded in music, and that this social-cognitive ability relies, to a sizeable extent, on the processing of acoustic cues of temporal and harmonic coordination that are not present in any one musician's channel but emerge from the dynamics of their interaction. By manipulating these cues in two-channel audio recordings and testing their impact on the social judgements of non-musician observers, we establish a causal relationship between the affiliation dimension of social behaviour and musical harmonic coordination on the one hand, and between the control dimension and musical temporal coordination on the other. These results provide novel mechanistic insights not only into the social cognition of musical interactions, but also into that of non-verbal interactions as a whole.
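Temporal coordination between two players can be quantified directly from their separate channels. A minimal sketch, assuming two mono numpy arrays at a common sample rate; the onset-envelope cross-correlation used here is our illustration of a coordination cue, not the paper's exact pipeline.

```python
import numpy as np

def onset_envelope(signal, fs, frame=0.01):
    """Half-wave-rectified frame-to-frame energy change (a crude onset curve)."""
    hop = int(fs * frame)
    n = len(signal) // hop
    energy = np.array([np.sum(signal[i * hop:(i + 1) * hop] ** 2) for i in range(n)])
    return np.maximum(np.diff(energy), 0)

def temporal_coordination(ch1, ch2, fs, max_lag_s=0.5, frame=0.01):
    """Peak normalized cross-correlation of the two onset curves within +/- max_lag."""
    a, b = onset_envelope(ch1, fs, frame), onset_envelope(ch2, fs, frame)
    n = min(len(a), len(b))
    a, b = a[:n] - a[:n].mean(), b[:n] - b[:n].mean()
    max_lag = int(max_lag_s / frame)
    denom = np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
    corrs = [np.sum(a[max(0, -k):n - max(0, k)] * b[max(0, k):n - max(0, -k)]) / denom
             for k in range(-max_lag, max_lag + 1)]
    return max(corrs)  # higher values indicate tighter temporal coordination
```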
Collapse
Affiliation(s)
- Jean-Julien Aucouturier
- CNRS Sound and Technology of Music and Sound (UMR9912, CNRS/IRCAM/UPMC), 1 Place Stravinsky, Paris, France.
| | - Clément Canonne
- CNRS Sound and Technology of Music and Sound (UMR9912, CNRS/IRCAM/UPMC), 1 Place Stravinsky, Paris, France
| |
Collapse
|
49
|
Xiao NG, Quinn PC, Liu S, Ge L, Pascalis O, Lee K. Older but not younger infants associate own-race faces with happy music and other-race faces with sad music. Dev Sci 2017; 21. [DOI: 10.1111/desc.12537] [Citation(s) in RCA: 39] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2016] [Accepted: 11/03/2016] [Indexed: 11/30/2022]
Affiliation(s)
- Naiqi G. Xiao
- Dr Eric Jackman Institute of Child Study, University of Toronto, Toronto, Canada
| | - Paul C. Quinn
- Department of Psychological and Brain Sciences, University of Delaware, Newark, USA
| | | | - Liezhong Ge
- Zhejiang Sci-Tech University, Hangzhou, China
- Center for Psychological Sciences, Zhejiang University, Hangzhou, China
| | | | - Kang Lee
- Dr Eric Jackman Institute of Child Study, University of Toronto, Toronto, Canada
| |
Collapse
|
50
|
Siu TSC, Cheung H. Infants' sensitivity to emotion in music and emotion-action understanding. PLoS One 2017; 12:e0171023. [PMID: 28152081 PMCID: PMC5289547 DOI: 10.1371/journal.pone.0171023] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2016] [Accepted: 01/14/2017] [Indexed: 12/03/2022] Open
Abstract
Emerging evidence indicates infants' early sensitivity to acoustic cues in music. Do they interpret these cues in emotional terms, as representing others' affective states? The present study examined infants' developing emotional understanding of music with a violation-of-expectation paradigm. Twelve- and 20-month-olds were presented with emotionally concordant and discordant music-face displays on alternate trials. The 20-month-olds, but not the 12-month-olds, were surprised by emotional incongruence between musical and facial expressions, suggesting sensitivity to musical emotion. In a separate non-music task, only the 20-month-olds were able to use an actress's affective facial displays to predict her subsequent action. Interestingly, for the 20-month-olds, such emotion-action understanding correlated with the sensitivity to musical expressions measured in the first task. These two abilities, however, did not correlate with family income, parental estimates of language and communicative skills, or quality of parent-child interaction. The findings suggest that sensitivity to musical emotion and emotion-action understanding may be supported by a generalised common capacity to represent emotion from social cues, which lays a foundation for later social-communicative development.
Collapse
Affiliation(s)
- Tik-Sze Carrey Siu
- Department of Early Childhood Education, The Education University of Hong Kong, Tai Po, Hong Kong
| | - Him Cheung
- Department of Psychology, The Chinese University of Hong Kong, Shatin, Hong Kong
| |
Collapse
|