1. Baumard N, Safra L, Martins M, Chevallier C. Cognitive fossils: using cultural artifacts to reconstruct psychological changes throughout history. Trends Cogn Sci 2024; 28:172-186. [PMID: 37949792] [DOI: 10.1016/j.tics.2023.10.001]
Abstract
Psychology is crucial for understanding human history. When aggregated, changes in the psychology of individuals - in the intensity of social trust, parental care, or intellectual curiosity - can lead to important changes in institutions, social norms, and cultures. However, studying the role of psychology in shaping human history has been hindered by the difficulty of documenting the psychological traits of people who are no longer alive. Recent developments in psychology suggest that cultural artifacts reflect in part the psychological traits of the individuals who produced or consumed them. Cultural artifacts can thus serve as 'cognitive fossils' - physical imprints of the psychological traits of long-dead people. We review the range of materials available to cognitive and behavioral scientists, and discuss the methods that can be used to recover and quantify changes in psychological traits throughout history.
Affiliation(s)
- Nicolas Baumard
- Institut Jean Nicod, Département d'études cognitives, École normale supérieure, Université PSL, EHESS, CNRS, Paris, France.
- Lou Safra
- Institut Jean Nicod, Département d'études cognitives, École normale supérieure, Université PSL, EHESS, CNRS, Paris, France; Centre de Recherches Politiques de Sciences Po (CEVIPOF), Institut d'Études Politiques de Paris (Sciences Po), Paris, France
- Mauricio Martins
- Institut Jean Nicod, Département d'études cognitives, École normale supérieure, Université PSL, EHESS, CNRS, Paris, France; SCAN-Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Coralie Chevallier
- Institut Jean Nicod, Département d'études cognitives, École normale supérieure, Université PSL, EHESS, CNRS, Paris, France
2. Bowling DL. Biological principles for music and mental health. Transl Psychiatry 2023; 13:374. [PMID: 38049408] [PMCID: PMC10695969] [DOI: 10.1038/s41398-023-02671-4]
Abstract
Efforts to integrate music into healthcare systems and wellness practices are accelerating but the biological foundations supporting these initiatives remain underappreciated. As a result, music-based interventions are often sidelined in medicine. Here, I bring together advances in music research from neuroscience, psychology, and psychiatry to bridge music's specific foundations in human biology with its specific therapeutic applications. The framework I propose organizes the neurophysiological effects of music around four core elements of human musicality: tonality, rhythm, reward, and sociality. For each, I review key concepts, biological bases, and evidence of clinical benefits. Within this framework, I outline a strategy to increase music's impact on health based on standardizing treatments and their alignment with individual differences in responsivity to these musical elements. I propose that an integrated biological understanding of human musicality-describing each element's functional origins, development, phylogeny, and neural bases-is critical to advancing rational applications of music in mental health and wellness.
Affiliation(s)
- Daniel L Bowling
- Department of Psychiatry and Behavioral Sciences, Stanford University, School of Medicine, Stanford, CA, USA.
- Center for Computer Research in Music and Acoustics (CCRMA), Stanford University, School of Humanities and Sciences, Stanford, CA, USA.
3. Bowling DL. Vocal similarity theory and the biology of musical tonality. Phys Life Rev 2023; 46:46-51. [PMID: 37244152] [PMCID: PMC10528872] [DOI: 10.1016/j.plrev.2023.05.006]
Affiliation(s)
- Daniel L Bowling
- Department of Psychiatry and Behavioral Sciences, Stanford School of Medicine, United States of America; Center for Computer Research in Music and Acoustics, Stanford School of Humanities and Sciences, United States of America.
4. Singh M, Mehr SA. Universality, domain-specificity, and development of psychological responses to music. Nat Rev Psychol 2023; 2:333-346. [PMID: 38143935] [PMCID: PMC10745197] [DOI: 10.1038/s44159-023-00182-z]
Abstract
Humans can find music happy, sad, fearful, or spiritual. They can be soothed by it or urged to dance. Whether these psychological responses reflect cognitive adaptations that evolved expressly for responding to music is an ongoing topic of study. In this Review, we examine three features of music-related psychological responses that help to elucidate whether the underlying cognitive systems are specialized adaptations: universality, domain-specificity, and early expression. Focusing on emotional and behavioural responses, we find evidence that the relevant psychological mechanisms are universal and arise early in development. However, the existing evidence cannot establish that these mechanisms are domain-specific. To the contrary, many findings suggest that universal psychological responses to music reflect more general properties of emotion, auditory perception, and other human cognitive capacities that evolved for non-musical purposes. Cultural evolution, driven by the tinkering of musical performers, evidently crafts music to compellingly appeal to shared psychological mechanisms, resulting in both universal patterns (such as form-function associations) and culturally idiosyncratic styles.
Affiliation(s)
- Manvir Singh
- Institute for Advanced Study in Toulouse, University of Toulouse 1 Capitole, Toulouse, France
- Samuel A. Mehr
- Yale Child Study Center, Yale University, New Haven, CT, USA
- School of Psychology, University of Auckland, Auckland, New Zealand
5. Anglada-Tort M, Harrison PMC, Lee H, Jacoby N. Large-scale iterated singing experiments reveal oral transmission mechanisms underlying music evolution. Curr Biol 2023; 33:1472-1486.e12. [PMID: 36958332] [DOI: 10.1016/j.cub.2023.02.070]
Abstract
Speech and song have been transmitted orally for countless human generations, changing over time under the influence of biological, cognitive, and cultural pressures. Cross-cultural regularities and diversities in human song are thought to emerge from this transmission process, but testing how underlying mechanisms contribute to musical structures remains a key challenge. Here, we introduce an automatic online pipeline that streamlines large-scale cultural transmission experiments using a sophisticated and naturalistic modality: singing. We quantify the evolution of 3,424 melodies orally transmitted across 1,797 participants in the United States and India. This approach produces a high-resolution characterization of how oral transmission shapes melody, revealing the emergence of structures that are consistent with widespread musical features observed cross-culturally (small pitch sets, small pitch intervals, and arch-shaped melodic contours). We show how the emergence of these structures is constrained by individual biases in our participants-vocal constraints, working memory, and cultural exposure-which determine the size, shape, and complexity of evolving melodies. However, their ultimate effect on population-level structures depends on social dynamics taking place during cultural transmission. When participants recursively imitate their own productions (individual transmission), musical structures evolve slowly and heterogeneously, reflecting idiosyncratic musical biases. When participants instead imitate others' productions (social transmission), melodies rapidly shift toward homogeneous structures, reflecting shared structural biases that may underpin cross-cultural variation. These results provide the first quantitative characterization of the rich collection of biases that oral transmission imposes on music evolution, giving us a new understanding of how human song structures emerge via cultural transmission.
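The transmission dynamic summarized above, in which melodies drift toward small pitch intervals as imitation errors accumulate across generations, can be illustrated with a toy simulation. This is a sketch, not the authors' experimental pipeline: the melody representation, noise level, interval bias, and seed melody below are all invented for illustration.

```python
import random

def imitate(melody, noise_sd=1.0, interval_bias=0.3):
    """Reproduce a melody with vocal/memory noise.

    interval_bias pulls each reproduced interval toward zero, mimicking a
    bias for small pitch movements (an illustrative stand-in, not a fitted value).
    """
    out = [melody[0] + random.gauss(0, noise_sd)]
    for prev, cur in zip(melody, melody[1:]):
        shrunk = (cur - prev) * (1 - interval_bias)   # compress the interval
        out.append(out[-1] + shrunk + random.gauss(0, noise_sd))
    return out

def mean_abs_interval(melody):
    return sum(abs(b - a) for a, b in zip(melody, melody[1:])) / (len(melody) - 1)

random.seed(0)                      # reproducible chain
melody = [60, 67, 62, 71, 58, 66]   # wide-interval seed melody (MIDI numbers)
history = [mean_abs_interval(melody)]
for _ in range(20):                 # 20 generations of imitation
    melody = imitate(melody)
    history.append(mean_abs_interval(melody))

print(f"mean |interval|: seed {history[0]:.2f} -> final {history[-1]:.2f}")
```

Running the chain shows the mean absolute interval shrinking from the wide seed toward a small, noise-limited equilibrium, a toy analogue of the population-level convergence toward small pitch intervals that the study reports.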
Affiliation(s)
- Manuel Anglada-Tort
- Computational Auditory Perception Group, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, Frankfurt am Main 60322, Germany; Faculty of Music, University of Oxford, St Aldate's, Oxford OX1 1DB, UK.
- Peter M C Harrison
- Computational Auditory Perception Group, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, Frankfurt am Main 60322, Germany; Faculty of Music, University of Cambridge, 11 West Road, Cambridge CB3 9DP, UK
- Harin Lee
- Computational Auditory Perception Group, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, Frankfurt am Main 60322, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, Leipzig 04103, Germany
- Nori Jacoby
- Computational Auditory Perception Group, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, Frankfurt am Main 60322, Germany
6. Effect of Indian Music as an Auditory Stimulus on Physiological Measures of Stress, Anxiety, Cardiovascular and Autonomic Responses in Humans-A Randomized Controlled Trial. Eur J Investig Health Psychol Educ 2022; 12:1535-1558. [PMID: 36286092] [PMCID: PMC9601678] [DOI: 10.3390/ejihpe12100108]
Abstract
Among the different anthropogenic stimuli humans are exposed to, the psychological and cardiovascular effects of auditory stimuli are less understood. This study aims to explore the possible range of change after a single session of auditory stimulation with three different ‘Modes’ of musical stimuli (MS) on anxiety, biomarkers of stress, and cardiovascular parameters among healthy young individuals. In this randomized control trial, 140 healthy young adults, aged 18−30 years, were randomly assigned to three MS groups (Mode/Raga Miyan ki Todi, Malkauns, and Puriya) and one control group (natural sounds). The outcome measurements of the State-Trait Anxiety Inventory, salivary alpha-amylase (sAA), salivary cortisol (sCort), blood pressure, and heart rate variability (HRV) were collected at three time points: before (M1), during (M2), and after the intervention (M3). State anxiety was reduced significantly with raga Puriya (p = 0.018), followed by raga Malkauns and raga Miyan Ki Todi. All the groups showed a significant reduction in sAA. Raga Miyan ki Todi and Puriya caused an arousal effect (as evidenced by HRV) during the intervention and significant relaxation after the intervention (both p < 0.005). Raga Malkauns and the control group had a sustained rise in parasympathetic activity over 30 min. Future studies should try to use other modes and features to develop a better scientific foundation for the use of Indian music in medicine.
7. Zeloni G, Pavani F. Minor second intervals: A shared signature for infant cries and sadness in music. Iperception 2022; 13:20416695221092471. [PMID: 35463914] [PMCID: PMC9019334] [DOI: 10.1177/20416695221092471]
Abstract
In Western music and in music of other cultures, minor chords, modes and intervals evoke sadness. It has been proposed that this emotional interpretation of melodic intervals (the distance between two pitches, expressed in semitones) is common to music and vocal expressions. Here, we asked expert musicians to transcribe into music scores spontaneous vocalizations of pre-verbal infants to test the hypothesis that melodic intervals that evoke sadness in music (i.e., minor 2nd) are more represented in cry compared to neutral utterances. Results showed that the unison, major 2nd, minor 2nd, major 3rd, minor 3rd, perfect 4th and perfect 5th are all represented in infant vocalizations. However, minor 2nd outnumbered all other intervals in cry vocalizations, but not in neutral babbling. These findings suggest that the association between minor intervals and sadness may develop in humans because a critically relevant social cue (infant cry) contains a statistical regularity: the association between minor 2nd and negative emotional valence.
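The counting exercise at the heart of this design, tallying which melodic interval classes appear in a transcribed vocalization, can be sketched as follows. The pitch sequences are hypothetical stand-ins, not the study's transcriptions; the semitone-to-name mapping is the standard Western one.

```python
from collections import Counter

# Semitone sizes mapped to interval names (tritone included to keep 0-7 contiguous).
INTERVAL_NAMES = {0: "unison", 1: "minor 2nd", 2: "major 2nd", 3: "minor 3rd",
                  4: "major 3rd", 5: "perfect 4th", 6: "tritone", 7: "perfect 5th"}

def interval_profile(pitches):
    """Count melodic interval classes (in semitones) in a pitch sequence."""
    sizes = (abs(b - a) for a, b in zip(pitches, pitches[1:]))
    return Counter(INTERVAL_NAMES.get(s, "other") for s in sizes)

# Hypothetical transcriptions (MIDI note numbers), not data from the study:
cry = [64, 63, 64, 63, 62, 63, 62, 61]       # dominated by semitone steps
babble = [60, 62, 64, 62, 67, 64, 60, 62]    # mostly whole tones and leaps

print("cry:", interval_profile(cry))
print("babble:", interval_profile(babble))
```

With these toy inputs the cry profile is dominated by minor 2nds while the babble profile is not, the same kind of asymmetry the study reports between cry and neutral babbling.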
Affiliation(s)
- Gabriele Zeloni
- Società Psicoanalitica Italiana, Roma, Italy
- International Psychoanalytical Association
- Azienda USL Toscana Centro, Firenze, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
8. Hyafil A, Baumard N. Evoked and transmitted culture models: Using Bayesian methods to infer the evolution of cultural traits in history. PLoS One 2022; 17:e0264509. [PMID: 35389995] [PMCID: PMC8989295] [DOI: 10.1371/journal.pone.0264509]
Abstract
A central question in behavioral and social sciences is understanding to what extent cultural traits are inherited from previous generations, transmitted from adjacent populations, or produced in response to changes in socioeconomic and ecological conditions. As quantitative diachronic databases recording the evolution of cultural artifacts over many generations become more common, there is a need for appropriate data-driven methods to approach this question. Here we present a new Bayesian method to infer the dynamics of cultural traits in a diachronic dataset. Our method, the Evoked-Transmitted Culture (ETC) model, relies on fitting a latent-state model in which a cultural trait is a latent variable that guides the production of the cultural artifacts observed in the database. The dynamics of this cultural trait may depend on the value of the cultural traits present in previous generations and in adjacent populations (transmitted culture) and/or on ecological factors (evoked culture). We show how ETC models can be fitted to quantitative diachronic or synchronic datasets using the Expectation-Maximization algorithm, enabling estimation of the relative contributions of vertical transmission, horizontal transmission, and the evoked component in shaping cultural traits. The method also allows reconstruction of the dynamics of cultural traits in different regions. We tested the performance of the method on synthetic data for two variants (for binary or continuous traits). We found that both variants allow reliable estimates of the parameters guiding cultural evolution, and that they outperform purely phylogenetic tools that ignore horizontal transmission and ecological factors. Overall, our method opens new possibilities for reconstructing how culture is shaped from quantitative data, with possible applications in cultural history, cultural anthropology, archaeology, historical linguistics, and behavioral ecology.
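The generative side of such a latent-state model can be sketched in a few lines: a latent trait that mixes vertical transmission (dependence on the previous generation) with an evoked response to an ecological covariate, observed only through noisy artifacts. This is an illustrative simulation, not the authors' ETC implementation; parameter names and values are invented, and the EM fitting step is omitted.

```python
import random

def simulate_trait(generations, ecology, a_vertical=0.7, b_evoked=0.5,
                   noise_sd=0.1, obs_sd=0.2):
    """Simulate a continuous latent cultural trait with two influences:
    vertical transmission (weight a_vertical on the previous generation's
    trait) and an evoked response (weight b_evoked on an ecological
    covariate). Artifacts are noisy observations of the latent trait.
    """
    trait, traits, artifacts = 0.0, [], []
    for t in range(generations):
        trait = (a_vertical * trait             # inherited component
                 + b_evoked * ecology[t]        # evoked component
                 + random.gauss(0, noise_sd))   # unexplained drift
        traits.append(trait)
        artifacts.append(trait + random.gauss(0, obs_sd))  # observed artifact
    return traits, artifacts

random.seed(1)
ecology = [0.0] * 10 + [1.0] * 10   # an ecological shift at generation 10
traits, artifacts = simulate_trait(20, ecology)
print(f"latent trait before shift: {traits[9]:.2f}, after: {traits[-1]:.2f}")
```

Fitting would then invert this process, for example with EM on the artifact series, to recover the transmission and evoked weights from data; the trait rises toward a new equilibrium after the ecological shift, which is the evoked-culture signature the method is designed to detect.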
Affiliation(s)
- Nicolas Baumard
- Institut d'Etudes Cognitives, Ecole Normale Supérieure, Paris, France
9. Vuoskoski JK, Zickfeld JH, Alluri V, Moorthigari V, Seibt B. Feeling moved by music: Investigating continuous ratings and acoustic correlates. PLoS One 2022; 17:e0261151. [PMID: 35020739] [PMCID: PMC8754323] [DOI: 10.1371/journal.pone.0261151]
Abstract
The experience often described as feeling moved, understood chiefly as a social-relational emotion with social bonding functions, has gained significant research interest in recent years. Although listening to music often evokes what people describe as feeling moved, very little is known about the appraisals or musical features contributing to the experience. In the present study, we investigated experiences of feeling moved in response to music using a continuous rating paradigm. A total of 415 US participants completed an online experiment where they listened to seven moving musical excerpts and rated their experience while listening. Each excerpt was randomly coupled with one of seven rating scales (perceived sadness, perceived joy, feeling moved or touched, sense of connection, perceived beauty, warmth [in the chest], or chills) for each participant. The results revealed that musically evoked experiences of feeling moved are associated with a similar pattern of appraisals, physiological sensations, and trait correlations as feeling moved by videos depicting social scenarios (found in previous studies). Feeling moved or touched by both sadly and joyfully moving music was associated with experiencing a sense of connection and perceiving joy in the music, while perceived sadness was associated with feeling moved or touched only in the case of sadly moving music. Acoustic features related to arousal contributed to feeling moved only in the case of joyfully moving music. Finally, trait empathic concern was positively associated with feeling moved or touched by music. These findings support the role of social cognitive and empathic processes in music listening, and highlight the social-relational aspects of feeling moved or touched by music.
Affiliation(s)
- Jonna K. Vuoskoski
- RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, University of Oslo, Oslo, Norway
- Department of Psychology, University of Oslo, Oslo, Norway
- Department of Musicology, University of Oslo, Oslo, Norway
- Vinoo Alluri
- Cognitive Science Lab, International Institute of Information Technology, Hyderabad, India
- Vishnu Moorthigari
- Cognitive Science Lab, International Institute of Information Technology, Hyderabad, India
- Beate Seibt
- Department of Psychology, University of Oslo, Oslo, Norway
10. Tripathy M, Chaudhari M. The impact of rock music on Indian young adults: a qualitative study on emotions and moods. Cardiometry 2021. [DOI: 10.18137/cardiometry.2021.20.110118]
Abstract
Music has proven to play a vital role in social and emotional development in teenagers and young adults. From contemplation, developing self-identity, understanding interpersonal relationships, and providing possibilities of experience mastery, agency, and self-control with the help of self-directed activities, music helps its audience develop in all aspects of life. Specifically, Rock music has, since its inception, been more than entertainment: artists expressed themselves and shared their opinions through their musical pieces. Infamous for promoting drugs and alcohol, Rock Music used its platform to enlighten the audience about taboo topics like racism, inequality, and other social issues. This research paper uses a qualitative methodology to understand Rock Music listeners' points of view. Data was collected through in-depth interviews of 15 participants hailing from different parts of the country. Rock Music has several positive effects on the listeners. Rock can elevate moods, induce emotions, help listeners be more productive and creative with their everyday work, and constantly motivate them to do better in every aspect of life. Rock provides a platform to express feelings and vent out all the angst, especially for those who otherwise do not voice their opinions because of their nature in general. Rock Music has been able to shape personalities, characteristics, and thought processes. Moreover, Rock Music helps people with anger management.
11. Lahdelma I, Athanasopoulos G, Eerola T. Sweetness is in the ear of the beholder: chord preference across United Kingdom and Pakistani listeners. Ann N Y Acad Sci 2021; 1502:72-84. [PMID: 34240419] [DOI: 10.1111/nyas.14655]
Abstract
The majority of research in the field of music perception has been conducted with Western participants, and it has remained unclear which aspects of music perception are culture dependent, and which are universal. The current study compared how participants unfamiliar with Western music (people from the Khowar and Kalash tribes native to Northwest Pakistan with minimal exposure to Western music) perceive affect (positive versus negative) in musical chords compared with United Kingdom (UK) listeners, as well as the overall preference for these chords. The stimuli consisted of four distinct chord types (major, minor, augmented, and chromatic) and were played as both vertical blocks (pitches presented concurrently) and arpeggios (pitches presented successively). The results suggest that the Western listener major-positive minor-negative affective distinction is opposite for Northwest Pakistani listeners, arguably because of the reversed prevalence of these chords in the two music cultures. The aversion to the harsh dissonance of the chromatic cluster is present cross-culturally, but the preference for the consonance of the major triad varies between UK and Northwest Pakistani listeners, depending on cultural familiarity. Our findings imply not only notable cultural variation but also commonalities in chord perception across Western and non-Western listeners.
Affiliation(s)
- Imre Lahdelma
- Department of Music, Durham University, Durham, United Kingdom
- Tuomas Eerola
- Department of Music, Durham University, Durham, United Kingdom
12.
Abstract
Evidence supporting a link between harmonicity and the attractiveness of simultaneous tone combinations has emerged from an experiment designed to mitigate effects of musical enculturation. I examine the analysis undertaken to produce this evidence and clarify its relation to an account of tonal aesthetics based on the biology of auditory-vocal communication.
13. Athanasopoulos G, Eerola T, Lahdelma I, Kaliakatsos-Papakostas M. Harmonic organisation conveys both universal and culture-specific cues for emotional expression in music. PLoS One 2021; 16:e0244964. [PMID: 33439887] [PMCID: PMC7806179] [DOI: 10.1371/journal.pone.0244964]
Abstract
Previous research conducted on the cross-cultural perception of music and its emotional content has established that emotions can be communicated across cultures at least on a rudimentary level. Here, we report a cross-cultural study with participants originating from two tribes in northwest Pakistan (Khow and Kalash) and the United Kingdom, with both groups being naïve to the music of the other respective culture. We explored how participants assessed emotional connotations of various Western and non-Western harmonisation styles, and whether cultural familiarity with a harmonic idiom such as major and minor mode would consistently relate to emotion communication. The results indicate that Western concepts of harmony are not relevant for participants unexposed to Western music when other emotional cues (tempo, pitch height, articulation, timbre) are kept relatively constant. At the same time, harmonic style alone has the ability to colour the emotional expression in music if it taps the appropriate cultural connotations. The preference for one harmonisation style over another, including the major-happy/minor-sad distinction, is influenced by culture. Finally, our findings suggest that although differences emerge across different harmonisation styles, acoustic roughness influences the expression of emotion in similar ways across cultures; preference for consonance however seems to be dependent on cultural familiarity.
Affiliation(s)
- Tuomas Eerola
- Department of Music, Durham University, Durham, United Kingdom
- Imre Lahdelma
- Department of Music, Durham University, Durham, United Kingdom
14. Chien SE, Chen YC, Matsumoto A, Yamashita W, Shih KT, Tsujimura SI, Yeh SL. The modulation of background color on perceiving audiovisual simultaneity. Vision Res 2020; 172:1-10. [PMID: 32388209] [DOI: 10.1016/j.visres.2020.04.009]
Abstract
Perceiving simultaneity is critical in integrating visual and auditory signals that give rise to a unified perception. We examined whether background color modulates people's perception of audiovisual simultaneity. Two hypotheses were proposed and examined: (1) the red-impairment hypothesis: visual processing speed deteriorates when viewing a red background because the magnocellular system is inhibited by red light; and (2) the blue-enhancement hypothesis: the detection of both visual and auditory signals is enhanced when viewing a blue background because it stimulates the blue-light sensitive intrinsically photosensitive retinal ganglion cells (ipRGCs), which trigger a higher alert state. Participants were exposed to different backgrounds while performing an audiovisual simultaneity judgment (SJ) task: a flash and a beep were presented at pre-designated stimulus onset asynchronies (SOAs) and participants judged whether or not the two stimuli were presented simultaneously. Experiment 1 demonstrated a shift of the point of subjective simultaneity (PSS) toward the visual-leading condition in the red compared to the blue background when the flash was presented in the periphery. In Experiment 2, the stimulation of ipRGCs was specifically manipulated to test the blue-enhancement hypothesis. The results showed no support for this hypothesis, perhaps due to top-down cortical modulations. Taken together, the shift of PSS toward the visual-leading condition in the red background was attributed to impaired visual processing speed with respect to auditory processing speed, caused by the inhibition of the magnocellular system under red light.
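For readers unfamiliar with the simultaneity-judgment paradigm, the point of subjective simultaneity (PSS) is the audiovisual asynchrony at which "simultaneous" responses peak. A crude estimate can be computed as the response-weighted mean SOA; the numbers below are made-up illustrative data, not results from this study, and the sign convention (positive SOA = flash leads beep) is assumed for the sketch.

```python
# PSS from simultaneity-judgment data, estimated crudely as the
# response-weighted mean SOA (a stand-in for psychometric curve fitting).
soas = [-300, -200, -100, 0, 100, 200, 300]                  # ms; positive = flash leads
p_simultaneous = [0.05, 0.20, 0.65, 0.90, 0.80, 0.35, 0.10]  # proportion "simultaneous"

pss = sum(s * p for s, p in zip(soas, p_simultaneous)) / sum(p_simultaneous)
print(f"estimated PSS: {pss:.1f} ms")   # positive = visual-leading shift
```

Under this convention, a positive estimate corresponds to the visual-leading shift of the PSS that the study reports for red backgrounds.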
Affiliation(s)
- Sung-En Chien
- Department of Psychology, National Taiwan University, Taipei, Taiwan
- Yi-Chuan Chen
- Department of Medicine, Mackay Medical College, New Taipei City, Taiwan
- Akiko Matsumoto
- Faculty of Science and Engineering, Kagoshima University, Kagoshima, Japan
- Wakayo Yamashita
- Faculty of Science and Engineering, Kagoshima University, Kagoshima, Japan
- Kuang-Tsu Shih
- Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan
- Sei-Ichi Tsujimura
- Faculty of Design and Architecture, Nagoya City University, Nagoya, Japan
- Su-Ling Yeh
- Department of Psychology, National Taiwan University, Taipei, Taiwan; Graduate Institute of Brain and Mind Sciences, National Taiwan University, Taipei, Taiwan; Neurobiology and Cognitive Science Center, National Taiwan University, Taipei, Taiwan; Center for Artificial Intelligence and Advanced Robotics, National Taiwan University, Taipei, Taiwan; Center for the Advanced Study in the Behavioral Sciences, Stanford University, USA.
15. Cowen AS, Fang X, Sauter D, Keltner D. What music makes us feel: At least 13 dimensions organize subjective experiences associated with music across different cultures. Proc Natl Acad Sci U S A 2020; 117:1924-1934. [PMID: 31907316] [PMCID: PMC6995018] [DOI: 10.1073/pnas.1910704117]
Abstract
What is the nature of the feelings evoked by music? We investigated how people represent the subjective experiences associated with Western and Chinese music, and the form in which these representational processes are preserved across different cultural groups. US (n = 1,591) and Chinese (n = 1,258) participants listened to 2,168 music samples and reported on the specific feelings (e.g., "angry," "dreamy") or broad affective features (e.g., valence, arousal) that the samples made them feel. Using large-scale statistical tools, we uncovered 13 distinct types of subjective experience associated with music in both cultures. Specific feelings such as "triumphant" were better preserved across the 2 cultures than levels of valence and arousal, contrasting with theoretical claims that valence and arousal are building blocks of subjective experience. This held true even for music selected on the basis of its valence and arousal levels and for traditional Chinese music. Furthermore, the feelings associated with music were found to occupy continuous gradients, contradicting discrete emotion theories. Our findings, visualized within an interactive map (https://www.ocf.berkeley.edu/~acowen/music.html), reveal a complex, high-dimensional space of subjective experience associated with music in multiple cultures. These findings can inform inquiries ranging from the etiology of affective disorders to the neurological basis of emotion.
Affiliation(s)
- Alan S Cowen
- Department of Psychology, University of California, Berkeley, CA 94720
- Xia Fang
- Department of Psychology, University of Amsterdam, 1001 NK Amsterdam, The Netherlands
- Department of Psychology, York University, Toronto, ON M3J 1P3, Canada
- Disa Sauter
- Department of Psychology, University of Amsterdam, 1001 NK Amsterdam, The Netherlands
- Dacher Keltner
- Department of Psychology, University of California, Berkeley, CA 94720
16
MacGregor C, Müllensiefen D. The Musical Emotion Discrimination Task: A New Measure for Assessing the Ability to Discriminate Emotions in Music. Front Psychol 2019; 10:1955. [PMID: 31551857] [PMCID: PMC6736617] [DOI: 10.3389/fpsyg.2019.01955]
Abstract
Previous research has shown that levels of musical training and emotional engagement with music are associated with an individual's ability to decode the intended emotional expression from a music performance. The present study aimed to assess traits and abilities that might influence emotion recognition, and to create a new test of emotion discrimination ability. The first experiment investigated musical features that influenced the difficulty of the stimulus items (length, type of melody, instrument, target-/comparison emotion) to inform the creation of a short test of emotion discrimination. The second experiment assessed the contribution of individual differences measures of emotional and musical abilities as well as psychoacoustic abilities. Finally, the third experiment established the validity of the new test against other measures currently used to assess similar abilities. Performance on the Musical Emotion Discrimination Task (MEDT) was significantly associated with high levels of self-reported emotional engagement with music as well as with performance on a facial emotion recognition task. Results are discussed in the context of a process model for emotion discrimination in music and psychometric properties of the MEDT are provided. The MEDT is freely available for research use.
Affiliation(s)
- Chloe MacGregor: Department of Psychology, Goldsmiths, University of London, London, United Kingdom
- Daniel Müllensiefen: Department of Psychology, Goldsmiths, University of London, London, United Kingdom
17
Filippi P, Hoeschele M, Spierings M, Bowling DL. Temporal modulation in speech, music, and animal vocal communication: evidence of conserved function. Ann N Y Acad Sci 2019; 1453:99-113. [DOI: 10.1111/nyas.14228]
Affiliation(s)
- Piera Filippi: Laboratoire Parole et Langage, LPL UMR 7309, Centre National de la Recherche Scientifique, Aix-Marseille Université, Aix-en-Provence, France; Institute of Language, Communication and the Brain, Centre National de la Recherche Scientifique, Aix-Marseille Université, Aix-en-Provence, France; Laboratoire de Psychologie Cognitive, LPC UMR 7290, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
- Marisa Hoeschele: Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria; Department of Cognitive Biology, University of Vienna, Vienna, Austria
- Daniel L. Bowling: Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, California
18
Useche J, Hurtado R. Melodies as Maximally Disordered Systems under Macroscopic Constraints with Musical Meaning. Entropy 2019; 21:532. [PMID: 33267246] [PMCID: PMC7515022] [DOI: 10.3390/e21050532]
Abstract
One of the most relevant features of musical pieces is the selection and utilization of musical elements by composers. To connect the musical properties of a melodic line as a whole with those of its constituent elements, we propose a representation for musical intervals based on physical quantities and a statistical model based on the minimization of relative entropy. The representation contains information about the size, location in the register, and level of tonal consonance of musical intervals. The statistical model involves expected values of relevant physical quantities that can be adopted as macroscopic constraints with musical meaning. We studied the occurrences of musical intervals in 20 melodic lines from seven masterpieces of Western tonal music. We found that all melodic lines are strictly ordered in terms of the physical quantities of the representation and that the formalism is suitable for approximately reproducing the final selection of musical intervals made by the composers, as well as for describing musical features such as the asymmetry in the use of ascending and descending intervals, transposition processes, and the mean dissonance of a melodic line.
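The constrained relative-entropy minimization this abstract describes can be sketched in a few lines. This is a toy illustration under assumed inputs (a uniform prior over interval sizes 0-12 semitones and a single mean-size constraint), not the authors' representation or code: with expected-value constraints, the minimizing distribution takes the exponential-family form p_i ∝ q_i·exp(-λ·f_i), and the multiplier λ is found by solving the constraint equation.

```python
import numpy as np
from scipy.optimize import brentq

def min_relative_entropy(prior, feature, target):
    """Distribution minimizing relative entropy to `prior` subject to
    the constraint E_p[feature] = target; exponential-family form
    p_i ∝ prior_i * exp(-lam * feature_i)."""
    def constraint_gap(lam):
        w = prior * np.exp(-lam * feature)
        return (w / w.sum()) @ feature - target
    lam = brentq(constraint_gap, -5.0, 5.0)  # solve for the multiplier
    w = prior * np.exp(-lam * feature)
    return w / w.sum()

sizes = np.arange(13, dtype=float)   # interval sizes 0..12 semitones (assumed)
prior = np.full(13, 1 / 13)          # uniform reference distribution (assumed)
p = min_relative_entropy(prior, sizes, target=3.0)
print(p.round(3))                    # mass concentrated on small intervals
print((p @ sizes).round(3))          # mean interval size ≈ 3.0
```

Additional constraints (register location, consonance, as in the paper) would each add a feature function and multiplier of the same form.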
19
Cedro ÁM, Borges J, Diniz MLN, Rodrigues RM, Rico VV, Leme AC, Huziwara EM. Evaluating Concept Formation in Multiple Exemplar Training with Musical Chords. Psychol Rec 2019. [DOI: 10.1007/s40732-019-00346-5]
20
Origins of 1/f noise in human music performance from short-range autocorrelations related to rhythmic structures. PLoS One 2019; 14:e0216088. [PMID: 31059519] [PMCID: PMC6502337] [DOI: 10.1371/journal.pone.0216088]
Abstract
1/f fluctuations have been described in numerous physical and biological processes. This noise structure describes an inverse relationship between the intensity and frequency of events in a time series (reflected, for example, in power spectra) and is believed to indicate long-range dependence, whereby events at one time point influence events many observations later. 1/f noise has been identified in rhythmic behaviors, such as music, and is typically attributed to long-range correlations. However, short-range dependence in musical performance is a well-established finding, and past research has suggested that 1/f can arise from multiple coexisting short-range processes. We tested this possibility using simulations and time-series modeling, complemented by traditional analyses using power spectra and detrended fluctuation analysis. Our results show that 1/f-type fluctuations in musical contexts may be explained by short-range models involving multiple time lags, and the temporal ranges in which rhythmic hierarchies are expressed are apt to create these fluctuations through such short-range autocorrelations. We also analyzed gait, heartbeat, and resting-state EEG data, demonstrating the coexistence of multiple short-range processes and 1/f fluctuation in a variety of phenomena. This suggests that 1/f fluctuation might not indicate long-range correlations, and points to its likely origins in musical rhythm and related structures.
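The central claim, that superposed short-range processes can mimic a 1/f spectrum, can be illustrated with a toy simulation. This is a sketch under assumptions of my choosing (AR(1) processes with log-spaced relaxation rates, slope estimated from a Welch periodogram), not the authors' analysis: each AR(1) contributes a Lorentzian spectrum, and summing processes whose rates are spread uniformly in log yields an approximately 1/f region between the slowest and fastest rates.

```python
import numpy as np
from scipy.signal import lfilter, welch

rng = np.random.default_rng(0)
n = 2**17

# Superpose unit-variance AR(1) processes whose relaxation rates lam are
# log-spaced; each contributes a Lorentzian spectrum ~ 2*lam/(lam^2 + w^2),
# and the sum is approximately 1/f between the slowest and fastest rates.
x = np.zeros(n)
for lam in np.logspace(-3, -0.5, 12):
    a = np.exp(-lam)                                  # AR(1) coefficient
    e = rng.standard_normal(n) * np.sqrt(1 - a**2)    # unit-variance scaling
    x += lfilter([1.0], [1.0, -a], e)                 # z[t] = a*z[t-1] + e[t]

f, pxx = welch(x, nperseg=2**14)
band = (f > 3e-4) & (f < 3e-2)                        # fit inside the 1/f region
slope = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)[0]
print(f"spectral slope ≈ {slope:.2f}")                # near -1, i.e. 1/f-like
```

No single process here has long-range memory, yet the log-log spectral slope of the sum sits near -1, which is the kind of ambiguity the paper exploits.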
21
Nordström H, Laukka P. The time course of emotion recognition in speech and music. J Acoust Soc Am 2019; 145:3058. [PMID: 31153307] [DOI: 10.1121/1.5108601]
Abstract
The auditory gating paradigm was adopted to study how much acoustic information is needed to recognize emotions from speech prosody and music performances. In Study 1, brief utterances conveying ten emotions were segmented into temporally fine-grained gates and presented to listeners, whereas Study 2 instead used musically expressed emotions. Emotion recognition accuracy increased with increasing gate duration and generally stabilized after a certain duration, with different trajectories for different emotions. Above-chance accuracy was observed for ≤100 ms stimuli for anger, happiness, neutral, and sadness, and for ≤250 ms stimuli for most other emotions, for both speech and music. This suggests that emotion recognition is a fast process that allows discrimination of several emotions based on low-level physical characteristics. The emotion identification points, which reflect the amount of information required for stable recognition, were shortest for anger and happiness for both speech and music, but recognition took longer to stabilize for music vs speech. This, in turn, suggests that acoustic cues that develop over time also play a role for emotion inferences (especially for music). Finally, acoustic cue patterns were positively correlated between speech and music, suggesting a shared acoustic code for expressing emotions.
Affiliation(s)
- Henrik Nordström: Department of Psychology, Stockholm University, 106 91 Stockholm, Sweden
- Petri Laukka: Department of Psychology, Stockholm University, 106 91 Stockholm, Sweden
22
Tay RYL, Ng BC. Effects of affective priming through music on the use of emotion words. PLoS One 2019; 14:e0214482. [PMID: 30990819] [PMCID: PMC6467386] [DOI: 10.1371/journal.pone.0214482]
Abstract
Understanding how music can evoke emotions and in turn affect language use has significant implications not only in clinical settings but also for the emotional development of children. The relationship between music and emotion is an intricate one that has been closely studied. However, how the use of emotion words can be influenced by auditory priming remains largely unexplored. The main interest in this study was to examine how manipulation of mode and tempo in music affects the emotions induced and the subsequent effects on the use of emotion words. Fifty university students in Singapore were asked to select emotion words after exposure to various music excerpts. The results showed that major modes and faster tempos elicited greater responses for positive words and high-arousal words respectively, while minor modes elicited more high-arousal words and original tempos resulted in more positive words being selected. In the Major-Fast, Major-Slow and Minor-Slow conditions, positive correlations were found between the number of high-arousal words and their rated intensities. Upon further analysis, categorization of emotion words differed from the circumplex model. Taken together, the findings highlight the prominence of affective auditory priming and allow us to better understand our emotive responses to music.
Affiliation(s)
- Rosabel Yu Ling Tay: Linguistics and Multilingual Studies, School of Humanities, Nanyang Technological University, Singapore
- Bee Chin Ng: Linguistics and Multilingual Studies, School of Humanities, Nanyang Technological University, Singapore
23
Hernandez-Ruiz E. How is music processed? Tentative answers from cognitive neuroscience. Nord J Music Ther 2019. [DOI: 10.1080/08098131.2019.1587785]
Affiliation(s)
- Eugenia Hernandez-Ruiz: Department of Music Education and Music Therapy, School of Music, Arizona State University, Tempe, AZ, USA
24
Dibben N, Coutinho E, Vilar JA, Estévez-Pérez G. Do Individual Differences Influence Moment-by-Moment Reports of Emotion Perceived in Music and Speech Prosody? Front Behav Neurosci 2018; 12:184. [PMID: 30210316] [PMCID: PMC6119718] [DOI: 10.3389/fnbeh.2018.00184]
Abstract
Comparison of emotion perception in music and prosody has the potential to contribute to an understanding of their speculated shared evolutionary origin. Previous research suggests shared sensitivity to and processing of music and speech, but less is known about how emotion perception in the auditory domain might be influenced by individual differences. Personality, emotional intelligence, gender, musical training and age exert some influence on discrete, summative judgments of perceived emotion in music and speech stimuli. However, music and speech are temporal phenomena, and little is known about whether individual differences influence moment-by-moment perception of emotion in these domains. A behavioral study collected two main types of data: continuous ratings of perceived emotion while listening to extracts of music and speech, using a computer interface which modeled emotion on two dimensions (arousal and valence), and demographic information including measures of personality (TIPI) and emotional intelligence (TEIQue-SF). Functional analysis of variance on the time series data revealed a small number of statistically significant differences associated with Emotional Stability, Agreeableness, musical training and age. The results indicate that individual differences exert limited influence on continuous judgments of dynamic, naturalistic expressions. We suggest that this reflects a reliance on acoustic cues to emotion in moment-by-moment judgments of perceived emotions and is further evidence of the shared sensitivity to and processing of music and speech.
Affiliation(s)
- Nicola Dibben: Department of Music, University of Sheffield, Sheffield, United Kingdom
- Eduardo Coutinho: Department of Music, University of Liverpool, Liverpool, United Kingdom
- José A. Vilar: Department of Mathematics, University of A Coruña, A Coruña, Spain
25
Paquette S, Takerkart S, Saget S, Peretz I, Belin P. Cross-classification of musical and vocal emotions in the auditory cortex. Ann N Y Acad Sci 2018; 1423:329-337. [PMID: 29741242] [DOI: 10.1111/nyas.13666]
Abstract
Whether emotions carried by voice and music are processed by the brain using similar mechanisms has long been investigated. Yet neuroimaging studies do not provide a clear picture, mainly due to lack of control over stimuli. Here, we report a functional magnetic resonance imaging (fMRI) study using comparable stimulus material in the voice and music domains-the Montreal Affective Voices and the Musical Emotional Bursts-which include nonverbal short bursts of happiness, fear, sadness, and neutral expressions. We use a multivariate emotion-classification fMRI analysis involving cross-timbre classification as a means of comparing the neural mechanisms involved in processing emotional information in the two domains. We find, for affective stimuli in the violin, clarinet, or voice timbres, that local fMRI patterns in the bilateral auditory cortex and upper premotor regions support above-chance emotion classification when training and testing sets are performed within the same timbre category. More importantly, classifier performance generalized well across timbre in cross-classifying schemes, albeit with a slight accuracy drop when crossing the voice-music boundary, providing evidence for a shared neural code for processing musical and vocal emotions, with possibly a cost for the voice due to its evolutionary significance.
Affiliation(s)
- Sébastien Paquette: Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada; Department of Neurology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts
- Sylvain Takerkart: Institut de Neurosciences de La Timone, CNRS & Aix-Marseille University, Marseille, France
- Shinji Saget: Institut de Neurosciences de La Timone, CNRS & Aix-Marseille University, Marseille, France
- Isabelle Peretz: Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada
- Pascal Belin: Department of Psychology, International Laboratory for Brain Music and Sound Research, Université de Montréal, Montreal, Canada; Institut de Neurosciences de La Timone, CNRS & Aix-Marseille University, Marseille, France; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
26
Cespedes-Guevara J, Eerola T. Music Communicates Affects, Not Basic Emotions - A Constructionist Account of Attribution of Emotional Meanings to Music. Front Psychol 2018; 9:215. [PMID: 29541041] [PMCID: PMC5836201] [DOI: 10.3389/fpsyg.2018.00215]
Abstract
Basic Emotion theory has had a tremendous influence on the affective sciences, including music psychology, where most researchers have assumed that music expressivity is constrained to a limited set of basic emotions. Several scholars have suggested that these constraints on musical expressivity are explained by the existence of a shared acoustic code for the expression of emotions in music and speech prosody. In this article we advocate for a shift from this focus on basic emotions to a constructionist account. This approach proposes that the phenomenon of perception of emotions in music arises from the interaction of music's ability to express core affects and the influence of top-down and contextual information in the listener's mind. We start by reviewing the problems with the concept of Basic Emotions and the inconsistent evidence that supports it. We also demonstrate how decades of developmental and cross-cultural research on music and emotional speech have failed to produce convincing findings that music expressivity is built upon a set of biologically pre-determined basic emotions. We then examine the cue-emotion consistencies between music and speech, and show how they support a parsimonious explanation in which musical expressivity is grounded in two dimensions of core affect (arousal and valence). Next, we explain how the fact that listeners reliably identify basic emotions in music arises not from the existence of categorical boundaries in the stimuli, but from processes that facilitate categorical perception, such as the use of stereotyped stimuli and close-ended response formats, psychological processes of construction of mental prototypes, and contextual information. Finally, we outline our proposal for a constructionist account of the perception of emotions in music, and spell out the ways in which this approach can resolve past conflicting findings. We conclude by providing explicit pointers about the methodological choices that will be vital to move beyond the popular Basic Emotion paradigm and start untangling the emergence of emotional experiences with music in the actual contexts in which they occur.
Affiliation(s)
- Tuomas Eerola: Department of Music, Durham University, Durham, United Kingdom
27
Ravignani A, Thompson B, Filippi P. The Evolution of Musicality: What Can Be Learned from Language Evolution Research? Front Neurosci 2018; 12:20. [PMID: 29467601] [PMCID: PMC5808206] [DOI: 10.3389/fnins.2018.00020]
Abstract
Language and music share many commonalities, both as natural phenomena and as subjects of intellectual inquiry. Rather than exhaustively reviewing these connections, we focus on potential cross-pollination of methodological inquiries and attitudes. We highlight areas in which scholarship on the evolution of language may inform the evolution of music. We focus on the value of coupled empirical and formal methodologies, and on the futility of mysterianism, the declining view that the nature, origins and evolution of language cannot be addressed empirically. We identify key areas in which the evolution of language as a discipline has flourished historically, and suggest ways in which these advances can be integrated into the study of the evolution of music.
Affiliation(s)
- Andrea Ravignani: Department of Language and Cognition, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium; Research Department, Sealcentre Pieterburen, Pieterburen, Netherlands
- Bill Thompson: Department of Language and Cognition, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium
- Piera Filippi: Department of Language and Cognition, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Institute of Language, Communication and the Brain, Aix-en-Provence, France; Laboratoire Parole et Langage, LPL UMR 7309, Centre National de la Recherche Scientifique, Aix-Marseille Université, Aix-en-Provence, France; Laboratoire de Psychologie Cognitive, LPC UMR 7290, Centre National de la Recherche Scientifique, Aix-Marseille Université, Marseille, France
28
Valla JM, Alappatt JA, Mathur A, Singh NC. Music and Emotion - A Case for North Indian Classical Music. Front Psychol 2018; 8:2115. [PMID: 29312024] [PMCID: PMC5742279] [DOI: 10.3389/fpsyg.2017.02115]
Abstract
The ragas of North Indian Classical Music (NICM) have been historically known to elicit emotions. Recently, Mathur et al. (2015) provided empirical support for these historical assumptions: distinct ragas elicit distinct emotional responses. In this review, we discuss the findings of Mathur et al. (2015) in the context of the structure of NICM. Using Mathur et al. (2015) as a demonstrative case in point, we argue that ragas of NICM can be viewed as uniquely designed stimulus tools for investigating the tonal and rhythmic influences on musical emotion.
Affiliation(s)
- Jeffrey M Valla: Language Literacy and Music Laboratory, National Brain Research Centre, Manesar, India
- Jacob A Alappatt: Language Literacy and Music Laboratory, National Brain Research Centre, Manesar, India
- Avantika Mathur: Language Literacy and Music Laboratory, National Brain Research Centre, Manesar, India
- Nandini C Singh: Language Literacy and Music Laboratory, National Brain Research Centre, Manesar, India
29
Abstract
The foundations of human music have long puzzled philosophers, mathematicians, psychologists, and neuroscientists. Although virtually all cultures use combinations of tones as a basis for musical expression, why humans favor some tone combinations over others has been debated for millennia. Here we show that our attraction to specific tone combinations played simultaneously (chords) is predicted by their spectral similarity to voiced speech sounds. This connection between auditory aesthetics and a primary characteristic of vocalization adds to other evidence that tonal preferences arise from the biological advantages of social communication mediated by speech and language. Musical chords are combinations of two or more tones played together. While many different chords are used in music, some are heard as more attractive (consonant) than others. We have previously suggested that, for reasons of biological advantage, human tonal preferences can be understood in terms of the spectral similarity of tone combinations to harmonic human vocalizations. Using the chromatic scale, we tested this theory further by assessing the perceived consonance of all possible dyads, triads, and tetrads within a single octave. Our results show that the consonance of chords is predicted by their relative similarity to voiced speech sounds. These observations support the hypothesis that the relative attraction of musical tone combinations is due, at least in part, to the biological advantages that accrue from recognizing and responding to conspecific vocal stimuli.
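The link between consonance and harmonic similarity can be illustrated with a related toy metric, the Gill-Purves "percentage similarity" for dyads. This is a simplified harmonicity measure assumed for illustration, not this paper's actual spectral comparison against voiced speech: for a frequency ratio x:y in lowest terms, the fraction of harmonics of the dyad's common fundamental that coincide with a harmonic of either tone is (x + y - 1)/(x·y).

```python
from fractions import Fraction

def dyad_similarity(ratio):
    """Fraction of the harmonics of a dyad's common fundamental that
    coincide with a harmonic of either tone: for a frequency ratio x:y
    in lowest terms this equals (x + y - 1) / (x * y)."""
    fr = Fraction(ratio).limit_denominator(100)
    x, y = fr.numerator, fr.denominator
    return (x + y - 1) / (x * y)

# Just-intonation ratios, assumed for illustration
intervals = {"octave": Fraction(2, 1), "fifth": Fraction(3, 2),
             "fourth": Fraction(4, 3), "major third": Fraction(5, 4),
             "minor second": Fraction(16, 15)}
for name, r in intervals.items():
    print(f"{name:12s} {dyad_similarity(r):.3f}")  # decreases with dissonance
```

The metric ranks the octave highest and the minor second lowest, reproducing the usual consonance ordering from harmonicity alone.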
30
Taruffi L, Pehrs C, Skouras S, Koelsch S. Effects of Sad and Happy Music on Mind-Wandering and the Default Mode Network. Sci Rep 2017; 7:14396. [PMID: 29089542] [PMCID: PMC5663956] [DOI: 10.1038/s41598-017-14849-0]
Abstract
Music is a ubiquitous phenomenon in human cultures, mostly due to its power to evoke and regulate emotions. However, effects of music evoking different emotional experiences such as sadness and happiness on cognition, and in particular on self-generated thought, are unknown. Here we use probe-caught thought sampling and functional magnetic resonance imaging (fMRI) to investigate the influence of sad and happy music on mind-wandering and its underlying neuronal mechanisms. In three experiments we found that sad music, compared with happy music, is associated with stronger mind-wandering (Experiments 1A and 1B) and greater centrality of the nodes of the Default Mode Network (DMN) (Experiment 2). Thus, our results demonstrate that, when listening to sad vs. happy music, people withdraw their attention inwards and engage in spontaneous, self-referential cognitive processes. Importantly, our results also underscore that DMN activity can be modulated as a function of sad and happy music. These findings call for a systematic investigation of the relation between music and thought, having broad implications for the use of music in education and clinical settings.
Affiliation(s)
- Liila Taruffi: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Corinna Pehrs: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Stavros Skouras: Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Stefan Koelsch: Department of Biological and Medical Psychology, University of Bergen, Bergen, Norway
31
Abstract
Vocal theories of the origin of language rarely make a case for the precursor functions that underlay the evolution of speech. The vocal expression of emotion is unquestionably the best candidate for such a precursor, although most evolutionary models of both language and speech ignore emotion and prosody altogether. I present here a model for a joint prosodic precursor of language and music in which ritualized group-level vocalizations served as the ancestral state. This precursor combined not only affective and intonational aspects of prosody, but also holistic and combinatorial mechanisms of phrase generation. From this common stage, there was a bifurcation to form language and music as separate, though homologous, specializations. This separation of language and music was accompanied by their (re)unification in songs with words.
Affiliation(s)
- Steven Brown: Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, ON, Canada
32
Filippi P. Emotional and Interactional Prosody across Animal Communication Systems: A Comparative Approach to the Emergence of Language. Front Psychol 2016; 7:1393. [PMID: 27733835] [PMCID: PMC5039945] [DOI: 10.3389/fpsyg.2016.01393]
Abstract
Across a wide range of animal taxa, prosodic modulation of the voice can express emotional information and is used to coordinate vocal interactions between multiple individuals. Within a comparative approach to animal communication systems, I hypothesize that the ability for emotional and interactional prosody (EIP) paved the way for the evolution of linguistic prosody - and perhaps also of music, continuing to play a vital role in the acquisition of language. In support of this hypothesis, I review three research fields: (i) empirical studies on the adaptive value of EIP in non-human primates, mammals, songbirds, anurans, and insects; (ii) the beneficial effects of EIP in scaffolding language learning and social development in human infants; (iii) the cognitive relationship between linguistic prosody and the ability for music, which has often been identified as the evolutionary precursor of language.
Affiliation(s)
- Piera Filippi: Department of Artificial Intelligence, Vrije Universiteit Brussel, Brussels, Belgium
33
Maes PJ. Sensorimotor Grounding of Musical Embodiment and the Role of Prediction: A Review. Front Psychol 2016; 7:308. [PMID: 26973587] [PMCID: PMC4778011] [DOI: 10.3389/fpsyg.2016.00308]
Abstract
In a previous article, we reviewed empirical evidence demonstrating action-based effects on music perception to substantiate the musical embodiment thesis (Maes et al., 2014). Evidence was largely based on studies demonstrating that music perception automatically engages motor processes, or that body states/movements influence music perception. Here, we argue that more rigorous evidence is needed before any decisive conclusion in favor of a "radical" musical embodiment thesis can be posited. In the current article, we provide a focused review of recent research to collect further evidence for the "radical" embodiment thesis that music perception is a dynamic process firmly rooted in the natural disposition of sounds and the human auditory and motor system. We emphasize, though, that on top of these natural dispositions, long-term processes operate, rooted in repeated sensorimotor experiences and leading to learning, prediction, and error minimization. This approach sheds new light on the development of musical repertoires, and may refine our understanding of action-based effects on music perception as discussed in our previous article (Maes et al., 2014). Additionally, we discuss two of our recent empirical studies demonstrating that music performance relies on similar principles of sensorimotor dynamics and predictive processing.
Affiliation(s)
- Pieter-Jan Maes
- Department of Art, Music, and Theatre Sciences, IPEM, Ghent University, Belgium
34
Abstract
Emotional responses to biologically significant events are essential for human survival. Do human emotions lawfully track changes in the acoustic environment? Here we report that changes in acoustic attributes that are well known to interact with human emotions in speech and music also trigger systematic emotional responses when they occur in environmental sounds, including sounds of human actions, animal calls, machinery, or natural phenomena such as wind and rain. Three changes in acoustic attributes known to signal emotional states in speech and music were imposed upon 24 environmental sounds. Evaluations of the stimuli indicated that human emotions track such changes in environmental sounds just as they do for speech and music. Such changes not only influenced evaluations of the sounds themselves but also affected the way accompanying facial expressions were interpreted emotionally. The findings illustrate that human emotions are highly attuned to changes in the acoustic environment, and reignite a discussion of Charles Darwin's hypothesis that speech and music originated from a common emotional signal system based on the imitation and modification of environmental sounds.
35
Sensitivity to musical emotions in congenital amusia. Cortex 2015; 71:171-82. [DOI: 10.1016/j.cortex.2015.06.022]
36
Abstract
Music has been called "the universal language of mankind." Although contemporary theories of music evolution often invoke various musical universals, the existence of such universals has been disputed for decades and has never been empirically demonstrated. Here we combine a music-classification scheme with statistical analyses, including phylogenetic comparative methods, to examine a well-sampled global set of 304 music recordings. Our analyses reveal no absolute universals but strong support for many statistical universals that are consistent across all nine geographic regions sampled. These universals include 18 musical features that are common individually as well as a network of 10 features that are commonly associated with one another. They span not only features related to pitch and rhythm that are often cited as putative universals but also rarely cited domains including performance style and social context. These cross-cultural structural regularities of human music may relate to roles in facilitating group coordination and cohesion, as exemplified by the universal tendency to sing, play percussion instruments, and dance to simple, repetitive music in groups. Our findings highlight the need for scientists studying music evolution to expand the range of musical cultures and musical features under consideration. The statistical universals we identified represent important candidates for future investigation.
37
Mathur A, Vijayakumar SH, Chakrabarti B, Singh NC. Emotional responses to Hindustani raga music: the role of musical structure. Front Psychol 2015; 6:513. [PMID: 25983702] [PMCID: PMC4415143] [DOI: 10.3389/fpsyg.2015.00513]
Abstract
In Indian classical music, ragas constitute specific combinations of tonic intervals potentially capable of evoking distinct emotions. A raga composition is typically presented in two modes, namely alaap and gat. Alaap is the note-by-note delineation of a raga, bound by a slow tempo but not by a rhythmic cycle. Gat, on the other hand, is rendered at a faster tempo and follows a rhythmic cycle. Our primary objectives were to (1) discriminate the emotions experienced across the alaap and gat of ragas and (2) investigate the association of tonic intervals, tempo, and rhythmic regularity with emotional response. 122 participants rated their experienced emotion across the alaap and gat of 12 ragas. Analysis of the emotional responses revealed that (1) ragas elicit distinct emotions across the two presentation modes, (2) specific tonic intervals are robust predictors of emotional response (in particular, the ‘minor second’ is a direct predictor of negative valence), and (3) tonality determines the emotion experienced for a raga, whereas rhythmic regularity and tempo modulate levels of arousal. Our findings provide new insights into the emotional response to Indian ragas and the impact of tempo, rhythmic regularity, and tonality on it.
Affiliation(s)
- Avantika Mathur
- Speech and Language Laboratory, Cognitive Neuroscience, National Brain Research Centre, Manesar, India
- Suhas H Vijayakumar
- Speech and Language Laboratory, Cognitive Neuroscience, National Brain Research Centre, Manesar, India
- Bhismadev Chakrabarti
- Centre for Integrative Neuroscience and Neurodynamics, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Nandini C Singh
- Speech and Language Laboratory, Cognitive Neuroscience, National Brain Research Centre, Manesar, India
38
Abstract
Music is universal at least partly because it expresses emotion and regulates affect. Associations between music and emotion have been examined regularly by music psychologists. Here, we review recent findings in three areas: (a) the communication and perception of emotion in music, (b) the emotional consequences of music listening, and (c) predictors of music preferences.
39
Clark CN, Downey LE, Warren JD. Brain disorders and the biological role of music. Soc Cogn Affect Neurosci 2015; 10:444-52. [PMID: 24847111] [PMCID: PMC4350491] [DOI: 10.1093/scan/nsu079]
Abstract
Despite its evident universality and high social value, the ultimate biological role of music and its connection to brain disorders remain poorly understood. Recent findings from basic neuroscience have shed fresh light on these old problems. New insights provided by clinical neuroscience concerning the effects of brain disorders promise to be particularly valuable in uncovering the underlying cognitive and neural architecture of music and in assessing candidate accounts of the biological role of music. Here we advance a new model of the biological role of music in human evolution and its link to brain disorders, drawing on diverse lines of evidence from comparative ethology, cognitive neuropsychology, and neuroimaging studies in the normal and the disordered brain. We propose that music evolved from the call signals of our hominid ancestors as a means of mentally rehearsing and predicting potentially costly, affectively laden social routines in surrogate, coded, low-cost form: essentially, a mechanism for transforming emotional mental states efficiently and adaptively into social signals. This biological role of music has its legacy today in the disordered processing of music and mental states that characterizes certain developmental and acquired clinical syndromes of brain network disintegration.
Affiliation(s)
- Camilla N Clark
- Dementia Research Centre, UCL Institute of Neurology, University College London, London WC1N 3BG, UK
- Laura E Downey
- Dementia Research Centre, UCL Institute of Neurology, University College London, London WC1N 3BG, UK
- Jason D Warren
- Dementia Research Centre, UCL Institute of Neurology, University College London, London WC1N 3BG, UK
40
Hopyan T, Manno FAM III, Papsin BC, Gordon KA. Sad and happy emotion discrimination in music by children with cochlear implants. Child Neuropsychol 2015; 22:366-80. [DOI: 10.1080/09297049.2014.992400]
41
Bowling DL. A vocal basis for the affective character of musical mode in melody. Front Psychol 2013; 4:464. [PMID: 23914179] [PMCID: PMC3728488] [DOI: 10.3389/fpsyg.2013.00464]
Abstract
Why does major music sound happy and minor music sound sad? The idea that different musical modes are best suited to the expression of different emotions has been prescribed by composers, music theorists, and natural philosophers for millennia. However, the reason we associate musical modes with emotions remains a matter of debate. On one side there is considerable evidence that mode-emotion associations arise through exposure to the conventions of a particular musical culture, suggesting a basis in lifetime learning. On the other, cross-cultural comparisons suggest that the particular associations we make are supported by musical similarities to the prosodic characteristics of the voice in different affective states, indicating a basis in the biology of emotional expression. Here, I review developmental and cross-cultural studies on the affective character of musical modes, concluding that while learning clearly plays a role, the emotional associations we make are (1) not arbitrary, and (2) best understood by also taking into account the physical characteristics and biological purposes of vocalization.
Affiliation(s)
- Daniel L Bowling
- Department of Cognitive Biology, University of Vienna, Vienna, Austria
42
Eerola T, Friberg A, Bresin R. Emotional expression in music: contribution, linearity, and additivity of primary musical cues. Front Psychol 2013; 4:487. [PMID: 23908642] [PMCID: PMC3726864] [DOI: 10.3389/fpsyg.2013.00487]
Abstract
The aim of this study was to manipulate musical cues systematically in order to determine which aspects of music contribute to emotional expression, whether these cues operate in an additive or an interactive fashion, and whether their effects are linear or non-linear. An optimized factorial design was used with six primary musical cues (mode, tempo, dynamics, articulation, timbre, and register) across four different music examples. Listeners rated 200 musical examples according to four perceived emotional characters (happy, sad, peaceful, and scary). The results exhibited robust effects for all cues, and their ranked importance was established by multiple regression. The most important cue was mode, followed by tempo, register, dynamics, articulation, and timbre, although the ranking varied across the emotions. The second main result suggested that most cue levels contributed to the emotions in a linear fashion, explaining 77–89% of the variance in ratings. Quadratic encoding of the cues led to minor but significant improvements in model fit (0–8%). Finally, interactions between the cues were non-existent, suggesting that the cues operate mostly in an additive fashion, corroborating recent findings on emotional expression in music (Juslin and Lindström, 2010).
Affiliation(s)
- Tuomas Eerola
- Department of Music, University of Jyväskylä, Jyväskylä, Finland
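The additive linear cue model tested by Eerola et al. can be sketched in a few lines. The following is an illustrative reconstruction on synthetic data, not the authors' code or stimuli: only the six cue names, the additive ordinary-least-squares fit, and the variance-explained comparison are taken from the abstract; the cue levels, weights, and noise are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

cues = ["mode", "tempo", "register", "dynamics", "articulation", "timbre"]
n_examples = 200  # the study collected ratings for 200 musical examples

# Synthetic design matrix: each cue varied across three discrete levels.
X = rng.integers(0, 3, size=(n_examples, len(cues))).astype(float)

# Hypothetical additive weights, mirroring the reported ranking
# (mode most important, timbre least).
true_w = np.array([1.0, 0.8, 0.6, 0.5, 0.4, 0.2])
ratings = X @ true_w + rng.normal(scale=0.3, size=n_examples)

# Fit the purely additive model (intercept + one linear term per cue,
# no interaction terms) by ordinary least squares.
A = np.column_stack([np.ones(n_examples), X])
coef, *_ = np.linalg.lstsq(A, ratings, rcond=None)
pred = A @ coef

# Variance explained (R^2), analogous to the 77-89% reported for linear cues.
r2 = 1 - np.sum((ratings - pred) ** 2) / np.sum((ratings - ratings.mean()) ** 2)
print({c: round(float(w), 2) for c, w in zip(cues, coef[1:])}, "R^2:", round(float(r2), 2))
```

Because the generating model here is itself additive, the linear fit recovers the weights and explains most of the rating variance; in the actual study, the high linear R² and absent interactions are empirical findings, not assumptions.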
43
Abstract
Experimental evidence demonstrates robust cross-modal matches between music and colors that are mediated by emotional associations. US and Mexican participants chose colors that were most/least consistent with 18 selections of classical orchestral music by Bach, Mozart, and Brahms. In both cultures, faster music in the major mode produced color choices that were more saturated, lighter, and yellower whereas slower, minor music produced the opposite pattern (choices that were desaturated, darker, and bluer). There were strong correlations (0.89 < r < 0.99) between the emotional associations of the music and those of the colors chosen to go with the music, supporting an emotional mediation hypothesis in both cultures. Additional experiments showed similarly robust cross-modal matches from emotionally expressive faces to colors and from music to emotionally expressive faces. These results provide further support that music-to-color associations are mediated by common emotional associations.
44
Abstract
Substantial advances in our understanding of the neural bases of emotional processing have been made over the past decades. Overall, studies in humans and other animals highlight the key role of the amygdala in the detection and evaluation of stimuli with affective value. Nonetheless, contradictory findings have been reported, especially in terms of the exact role of this structure in the processing of different emotions, giving rise to different neural models of emotion. For instance, although the amygdala has traditionally been considered as exclusively involved in fear (and possibly anger), more recent work suggests that it may be important for processing other types of emotions, and even nonemotional information. A review of the main findings in this field is presented here, together with some of the hypotheses that have been put forward to interpret this literature and explain its inconsistencies.
Affiliation(s)
- Jorge L. Armony
- Department of Psychiatry, McGill University, Canada; Douglas Mental Health University Institute, Canada; International Laboratory for Brain, Music, and Sound Research (BRAMS), Canada