1
Leung FYN, Stojanovik V, Jiang C, Liu F. Investigating implicit emotion processing in autism spectrum disorder across age groups: A cross-modal emotional priming study. Autism Res 2024;17:824-837. [PMID: 38488319] [DOI: 10.1002/aur.3124]
Abstract
Accumulating evidence suggests that atypical emotion processing in autism may generalize across different stimulus domains. However, this evidence comes from studies examining explicit emotion recognition. It remains unclear whether domain-general atypicality also applies to implicit emotion processing in autism, and what this implies for real-world social communication. To investigate this, we employed a novel cross-modal emotional priming task to assess implicit emotion processing of spoken/sung words (primes) through their influence on subsequent emotional judgment of faces/face-like objects (targets). We assessed whether implicit emotional priming differed between 38 autistic and 38 neurotypical individuals across age groups as a function of prime and target type. Results indicated no overall group differences across age groups, prime types, and target types. However, differential, domain-specific developmental patterns emerged for the autism and neurotypical groups. For neurotypical individuals, speech but not song primed the emotional judgment of faces across ages. This speech-orienting tendency was not observed across ages in the autism group, as priming of speech on faces was absent in autistic adults. These results highlight the importance of the delicate weighting between speech- versus song-orientation in implicit emotion processing throughout development, providing more nuanced insights into the emotion processing profile of autistic individuals.
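The paradigm summarized above is essentially a fully crossed design of prime modality, target type, and prime-target emotional congruence. As a rough illustration only, the Python sketch below assembles such a trial list; the factor levels, emotion labels, congruence logic, and repetition count are assumptions for illustration, not the authors' actual stimuli or design.

```python
import itertools
import random
from dataclasses import dataclass

# Illustrative factor levels; the study's actual stimuli and design may differ.
PRIME_TYPES = ["spoken_word", "sung_word"]       # auditory primes
TARGET_TYPES = ["face", "face_like_object"]      # visual targets
PRIME_EMOTIONS = ["happy", "sad"]
CONGRUENCE = ["congruent", "incongruent"]

@dataclass
class Trial:
    prime_type: str
    target_type: str
    prime_emotion: str
    congruence: str

    @property
    def target_emotion(self) -> str:
        # Congruent targets share the prime's emotion; incongruent targets flip it.
        if self.congruence == "congruent":
            return self.prime_emotion
        return "sad" if self.prime_emotion == "happy" else "happy"

def build_trials(n_repeats: int = 4, seed: int = 0) -> list[Trial]:
    """Fully cross the factors, repeat each cell, and shuffle into one trial list."""
    cells = itertools.product(PRIME_TYPES, TARGET_TYPES, PRIME_EMOTIONS, CONGRUENCE)
    trials = [Trial(*cell) for cell in cells for _ in range(n_repeats)]
    random.Random(seed).shuffle(trials)
    return trials

if __name__ == "__main__":
    for t in build_trials()[:3]:
        print(t, "-> target emotion:", t.target_emotion)
```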
Affiliation(s)
- Florence Y N Leung
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Department of Psychology, University of Bath, Bath, UK
- Vesna Stojanovik
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
2
Ross P, Williams E, Herbert G, Manning L, Lee B. Turn that music down! Affective musical bursts cause an auditory dominance in children recognizing bodily emotions. J Exp Child Psychol 2023;230:105632. [PMID: 36731279] [DOI: 10.1016/j.jecp.2023.105632]
Abstract
Previous work has shown that different sensory channels are prioritized across the life course, with children preferentially responding to auditory information. The aim of the current study was to investigate whether the mechanism driving this auditory dominance in children operates at the level of encoding (overshadowing) or when the information is integrated to form a response (response competition). Because response competition depends on an attempt to integrate modalities, we used a combination of stimuli that could not be integrated, so that any persisting auditory dominance in children would favor the overshadowing mechanism over response competition. Younger children (≤7 years), older children (8-11 years), and adults (18+ years) were asked to recognize the emotion (happy or fearful) in either nonvocal auditory musical emotional bursts or human visual bodily expressions of emotion in three conditions: unimodal, congruent bimodal, and incongruent bimodal. We found that children performed significantly worse at recognizing emotional bodies when they heard (and were told to ignore) musical emotional bursts. This provides the first evidence for auditory dominance in both younger and older children presented with modally incongruent emotional stimuli. The continued presence of auditory dominance, despite the lack of modality integration, supports the overshadowing explanation. These findings are discussed in relation to educational considerations, and directions for future investigations and models of sensory dominance are proposed.
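One simple way to express the auditory dominance effect described above is as the drop in body-emotion recognition accuracy from the unimodal condition to the incongruent bimodal condition. The sketch below shows that arithmetic with invented numbers; it is not the paper's analysis or data.

```python
def interference_cost(unimodal_acc: float, incongruent_acc: float) -> float:
    """Drop in visual (body) emotion recognition accuracy when an incongruent,
    to-be-ignored auditory stimulus is present; larger values indicate
    stronger auditory dominance."""
    return unimodal_acc - incongruent_acc

# Purely illustrative numbers, not the study's results.
print(interference_cost(unimodal_acc=0.90, incongruent_acc=0.72))  # 0.18
```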
Affiliation(s)
- Paddy Ross
- Department of Psychology, Durham University, Durham DH1 3LE, UK.
- Ella Williams
- Department of Psychology, Durham University, Durham DH1 3LE, UK; Oxford Neuroscience, University of Oxford, Oxford OX3 9DU, UK
- Gemma Herbert
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Laura Manning
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Becca Lee
- Department of Psychology, Durham University, Durham DH1 3LE, UK
3
Zheng K, Meng R, Zheng C, Li X, Sang J, Cai J, Wang J, Wang X. EmotionBox: A music-element-driven emotional music generation system based on music psychology. Front Psychol 2022;13:841926. [PMID: 36106044] [PMCID: PMC9465382] [DOI: 10.3389/fpsyg.2022.841926]
Abstract
With the development of deep neural networks, automatic music composition has made great progress. Although emotional music can evoke different auditory perceptions in listeners, few studies have focused on generating emotional music. This paper presents EmotionBox, a music-element-driven emotional music generator based on music psychology that is capable of composing music given a specific emotion; unlike previous methods, the model does not require a music dataset labeled with emotions. In this work, pitch histogram and note density are extracted as features that represent mode and tempo, respectively, to control music emotions. The specific emotions are mapped from these features through Russell's psychological model. Subjective listening tests show that EmotionBox achieves competitive performance in generating different emotional music and significantly better performance in generating music with low-arousal emotions, especially peacefulness, compared with the emotion-label-based method.
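As a rough illustration of the feature-to-emotion mapping described above, the sketch below infers mode from a pitch-class histogram, infers arousal from note density, and places the result in a quadrant of Russell's valence-arousal space. The scale profiles, density threshold, and quadrant labels are illustrative assumptions, not EmotionBox's actual parameters.

```python
from collections import Counter

MAJOR_PROFILE = {0, 2, 4, 5, 7, 9, 11}   # pitch classes of C major (a real system would first estimate the key)
MINOR_PROFILE = {0, 2, 3, 5, 7, 8, 10}   # C natural minor

def pitch_histogram(midi_pitches: list[int]) -> Counter:
    """Histogram over the 12 pitch classes."""
    return Counter(p % 12 for p in midi_pitches)

def infer_mode(midi_pitches: list[int]) -> str:
    """Crude major/minor decision: whichever scale profile covers more histogram mass."""
    hist = pitch_histogram(midi_pitches)
    major_mass = sum(c for pc, c in hist.items() if pc in MAJOR_PROFILE)
    minor_mass = sum(c for pc, c in hist.items() if pc in MINOR_PROFILE)
    return "major" if major_mass >= minor_mass else "minor"

def note_density(midi_pitches: list[int], duration_s: float) -> float:
    """Notes per second, used here as a stand-in for tempo/arousal."""
    return len(midi_pitches) / duration_s

def russell_quadrant(midi_pitches: list[int], duration_s: float,
                     density_threshold: float = 3.0) -> str:
    """Map (mode, note density) to a quadrant of Russell's valence-arousal model:
    major mode ~ positive valence, high note density ~ high arousal."""
    valence = "positive" if infer_mode(midi_pitches) == "major" else "negative"
    arousal = "high" if note_density(midi_pitches, duration_s) > density_threshold else "low"
    return {("positive", "high"): "happy", ("positive", "low"): "peaceful",
            ("negative", "high"): "tense", ("negative", "low"): "sad"}[(valence, arousal)]

print(russell_quadrant([60, 62, 64, 65, 67], duration_s=4.0))  # -> "peaceful" for these toy notes
```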
Affiliation(s)
- Kaitong Zheng
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Ruijie Meng
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Chengshi Zheng
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Xiaodong Li
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Jinqiu Sang
- Key Laboratory of Noise and Vibration Research, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Juanjuan Cai
- State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing, China
- Jie Wang
- School of Electronics and Communication Engineering, Guangzhou University, Guangzhou, China
- Xiao Wang
- School of Humanities and Management, Southwest Medical University, Luzhou, China
4
Chu Y. Recognition of musical beat and style and applications in interactive humanoid robot. Front Neurorobot 2022;16:875058. [PMID: 35990882] [PMCID: PMC9386054] [DOI: 10.3389/fnbot.2022.875058]
Abstract
Musical beat and style recognition have high application value in music information retrieval. However, traditional methods mostly use a convolutional neural network (CNN) as the backbone and perform poorly. Accordingly, the present work adopts a recurrent neural network (RNN) in deep learning (DL) to identify musical beats and styles, and applies the proposed model to an interactive humanoid robot. First, DL-based musical beat and style recognition technologies are studied. On this basis, a note beat recognition method combining an attention mechanism (AM) with an independently recurrent neural network (IndRNN), termed AM-IndRNN, is proposed; it effectively avoids vanishing and exploding gradients. Second, audio music files are divided into multiple styles using the music signal's temporal features, and a dancing humanoid robot with a multimodal drive is constructed. Finally, the proposed method is tested. The results show that the AM-IndRNN outperforms multiple parallel long short-term memory (LSTM) models and the plain IndRNN in recognition accuracy (88.9%) and loss (0.0748), indicating that attention optimization yields higher recognition accuracy. These results provide concrete ideas for applying DL technology to musical beat and style recognition.
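To make the AM-IndRNN idea concrete, here is a minimal NumPy forward-pass sketch: an IndRNN layer whose hidden units each carry their own scalar recurrent weight, followed by attention pooling over time and a linear classifier. The dimensions, random initialization, and single-layer structure are assumptions for illustration; the paper's actual architecture, features, and training procedure are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def ind_rnn(x, W, u, b):
    """Independently recurrent layer: h_t = relu(W @ x_t + u * h_{t-1} + b),
    where u is a per-unit (element-wise) recurrent weight vector."""
    T = x.shape[0]
    H = b.shape[0]
    h = np.zeros(H)
    states = np.empty((T, H))
    for t in range(T):
        h = np.maximum(0.0, W @ x[t] + u * h + b)
        states[t] = h
    return states                         # shape (T, H)

def attention_pool(states, v):
    """Score each time step, softmax the scores, and return the weighted sum."""
    scores = states @ v                   # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ states               # (H,) context vector

# Toy dimensions: 40 frames of 16 spectral features each, 4 output classes.
T, D, H, n_classes = 40, 16, 32, 4
x = rng.standard_normal((T, D))
W = rng.standard_normal((H, D)) * 0.1
u = rng.uniform(0.0, 0.9, H)              # per-unit recurrent weights, kept below 1 for stability
b = np.zeros(H)
v = rng.standard_normal(H) * 0.1          # attention scoring vector
W_out = rng.standard_normal((n_classes, H)) * 0.1

context = attention_pool(ind_rnn(x, W, u, b), v)
logits = W_out @ context
print("predicted beat/style class:", int(np.argmax(logits)))
```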
5
A Systematic Review of Scientific Studies on the Effects of Music in People with or at Risk for Autism Spectrum Disorder. Int J Environ Res Public Health 2022;19:5150. [PMID: 35564544] [PMCID: PMC9100336] [DOI: 10.3390/ijerph19095150]
Abstract
The prevalence of autism spectrum disorder (ASD) is increasing globally, and currently available interventions show variable success; thus, there is growing interest in additional interventions such as music therapy (MT). We therefore aimed to provide a comprehensive and systematic review of research on music and people with, or at risk of, ASD. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we searched PubMed, PsycINFO, and Web of Science, using “music”, “music therapy”, “autism spectrum disorder”, and “ASD” as search terms. Of the 621 articles identified and screened, 81 qualified as scientific studies, involving a total of 43,353 participants. These studies investigated the peculiarities of music perception in people with ASD, as well as the effects of music and MT in this group. Most of the music-based interventions were beneficial in improving social, emotional, and behavioural problems. However, studies using a rigorous randomized controlled trial (RCT) design were scarce, most studies had small sample sizes, and the therapeutic and scientific research methods applied were heterogeneous.
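The search strategy described above amounts to a boolean query combining the music terms with the autism terms. The sketch below assembles such a query string (exact field syntax differs by database and is not shown) and computes the screening yield reported in the abstract.

```python
# Assemble the review's search terms into a generic boolean query string.
music_terms = ['"music"', '"music therapy"']
asd_terms = ['"autism spectrum disorder"', '"ASD"']
query = f"({' OR '.join(music_terms)}) AND ({' OR '.join(asd_terms)})"
print(query)

# Screening yield reported in the abstract: 81 included studies out of 621 screened articles.
print(f"inclusion rate: {81 / 621:.1%}")  # ~13.0%
```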
6
Xu J, Zhou L, Liu F, Xue C, Jiang J, Jiang C. The autistic brain can process local but not global emotion regularities in facial and musical sequences. Autism Res 2021;15:222-240. [PMID: 34792299] [DOI: 10.1002/aur.2635]
Abstract
Whether autism spectrum disorder (ASD) is associated with a global processing deficit remains controversial. Global integration requires extraction of regularity across various timescales, yet little is known about how individuals with ASD process regularity at local (short timescale) versus global (long timescale) levels. To this end, we used event-related potentials to investigate whether individuals with ASD would show different neural responses to local (within trial) versus global (across trials) emotion regularities extracted from sequential facial expressions, and if so, whether this visual abnormality would generalize to the music (auditory) domain. Twenty individuals with ASD and 21 age- and IQ-matched individuals with typical development participated in this study. At an early processing stage, ASD participants exhibited preserved neural responses to violations of local emotion regularity for both faces and music. At a later stage, however, neural responses to violations of global emotion regularity were absent in ASD for both faces and music. These findings suggest that the autistic brain's responses to emotion regularity are modulated by the timescale of sequential stimuli, and they provide insight into the neural mechanisms underlying emotional processing in ASD.
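The local/global distinction in this design can be illustrated with a small sequence generator: local regularity is defined within a trial (the final item either repeats or violates the trial's emotion), whereas global regularity is defined across trials by which trial type is frequent in a block. The trial length, rare-trial proportion, and emotion labels below are illustrative assumptions, not the study's stimulus parameters.

```python
import random

def make_trial(local_deviant: bool, emotion: str = "happy", other: str = "sad") -> list[str]:
    """A trial is a short within-trial sequence; the final item either repeats the
    trial's emotion (local standard) or violates it (local deviant)."""
    return [emotion] * 4 + [other if local_deviant else emotion]

def make_block(frequent_is_deviant: bool, n_trials: int = 100,
               p_rare: float = 0.2, seed: int = 0) -> list[list[str]]:
    """Global regularity is set by the frequent trial type; rare trials (p_rare)
    violate that across-trial rule."""
    rng = random.Random(seed)
    block = []
    for _ in range(n_trials):
        rare = rng.random() < p_rare
        local_deviant = frequent_is_deviant != rare   # rare trials flip the frequent type
        block.append(make_trial(local_deviant))
    return block

# In this block the frequent trial type is locally regular, so every local deviant
# is also a global violation.
block = make_block(frequent_is_deviant=False)
n_global_violations = sum(trial[-1] != trial[0] for trial in block)
print(f"{n_global_violations} global violations in {len(block)} trials")
```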
Affiliation(s)
- Jie Xu
- Department of Psychology, Shanghai Normal University, Shanghai, China
- Linshu Zhou
- Music College, Shanghai Normal University, Shanghai, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Chao Xue
- Department of Psychology, Shanghai Normal University, Shanghai, China
- Jun Jiang
- Music College, Shanghai Normal University, Shanghai, China
- Cunmei Jiang
- Music College, Shanghai Normal University, Shanghai, China
7
The Benefits of Music Listening for Induced State Anxiety: Behavioral and Physiological Evidence. Brain Sci 2021;11:1332. [PMID: 34679397] [PMCID: PMC8533701] [DOI: 10.3390/brainsci11101332]
Abstract
BACKGROUND: Some clinical studies have indicated that neutral and happy music may relieve state anxiety. However, the brain mechanisms by which such music interventions affect state anxiety remain unknown.
METHODS: We selected music with clinical therapeutic effects, and 62 subjects were included using an anxiety-induction paradigm. After anxiety was evoked with a visual stimulus, all subjects were randomly divided into three groups (happy music, neutral music, and a blank stimulus), and EEG signals were acquired.
RESULTS: Different emotional types of music appeared to alleviate state anxiety through different mechanisms. Neutral music alleviated state anxiety, and this effect was associated with decreased power spectral density over the occipital lobe and increased functional connectivity between the occipital and frontal lobes. Happy music also alleviated state anxiety, and this effect was associated with enhanced functional connectivity between the occipital lobe and the right temporal lobe.
CONCLUSIONS: This study may be important for a deeper understanding of the mechanisms of music interventions for state anxiety and may further contribute to future clinical treatment using nonpharmaceutical interventions.
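For readers unfamiliar with the two EEG measures named in the results, the sketch below computes a Welch power spectral density for an occipital channel and uses an alpha-band Pearson correlation between an occipital and a frontal channel as a simple functional-connectivity proxy, on surrogate data. The sampling rate, frequency band, and connectivity metric are assumptions; the study's actual preprocessing and connectivity measure may differ.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 250.0                                    # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
# Surrogate signals standing in for two EEG channels; real data would come from
# the recording system (e.g. loaded via MNE-Python).
occipital = rng.standard_normal(int(30 * fs))
frontal = 0.4 * occipital + rng.standard_normal(int(30 * fs))

# Power spectral density of the occipital channel (Welch's method).
freqs, psd = welch(occipital, fs=fs, nperseg=int(2 * fs))
alpha = (freqs >= 8) & (freqs <= 13)
print("occipital alpha-band power:", np.trapz(psd[alpha], freqs[alpha]))

# Functional-connectivity proxy: correlation of the alpha-band-filtered channels.
b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
occ_f, fro_f = filtfilt(b, a, occipital), filtfilt(b, a, frontal)
print("occipital-frontal connectivity (r):", np.corrcoef(occ_f, fro_f)[0, 1])
```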
8
Webster PJ, Wang S, Li X. Review: Posed vs. Genuine Facial Emotion Recognition and Expression in Autism and Implications for Intervention. Front Psychol 2021;12:653112. [PMID: 34305720] [PMCID: PMC8300960] [DOI: 10.3389/fpsyg.2021.653112]
Abstract
Different styles of social interaction are one of the core characteristics of autism spectrum disorder (ASD). Social differences among individuals with ASD often include difficulty in discerning the emotions of neurotypical people based on their facial expressions. This review first covers the rich body of literature studying differences in facial emotion recognition (FER) in those with ASD, including behavioral studies and neurological findings. In particular, we highlight subtle emotion recognition and various factors related to inconsistent findings in behavioral studies of FER in ASD. Then, we discuss the dual problem of FER: facial emotion expression (FEE), the production of facial expressions of emotion. Although FEE is less studied, social interaction involves both the ability to recognize emotions and the ability to produce appropriate facial expressions, and how others perceive facial expressions of emotion in those with ASD remains an under-researched area. Finally, we propose a method for teaching FER, the FER teaching hierarchy (FERTH), based on recent research investigating FER in ASD and considering the use of posed vs. genuine emotions and static vs. dynamic stimuli. We also propose two possible teaching approaches: (1) a standard method of teaching progressively from simple drawings and cartoon characters to more complex audio-visual video clips of genuine human expressions of emotion with context clues, or (2) teaching in a field of images that includes posed and genuine emotions to improve generalizability before progressing to more complex audio-visual stimuli. Lastly, we advocate for autism interventionists to use FER stimuli developed primarily for research purposes, facilitating the incorporation of well-controlled stimuli to teach FER and bridging the gap between intervention and research in this area.
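The first proposed teaching approach is essentially an ordered progression gated by learner mastery. The sketch below encodes that progression as a simple data structure; the stage list paraphrases the abstract, and the mastery criterion is an assumed illustrative threshold rather than part of the authors' FERTH.

```python
# Stages paraphrased from the abstract's first teaching approach (illustrative only).
FER_TEACHING_STAGES = [
    {"stage": 1, "stimuli": "simple drawings"},
    {"stage": 2, "stimuli": "cartoon characters"},
    {"stage": 3, "stimuli": "audio-visual clips of genuine human expressions with context clues"},
]

def next_stage(current: int, mastery: float, criterion: float = 0.8) -> int:
    """Advance to the next stage only once the learner meets an assumed mastery criterion."""
    return min(current + 1, len(FER_TEACHING_STAGES)) if mastery >= criterion else current

print(next_stage(current=1, mastery=0.85))  # -> 2
```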
Affiliation(s)
- Paula J Webster
- Department of Chemical and Biomedical Engineering, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV, United States
- Shuo Wang
- Department of Chemical and Biomedical Engineering, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV, United States
- Xin Li
- Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, United States