1. Liang J, Zhang M, Yang L, Li Y, Li Y, Wang L, Li H, Chen J, Luo W. How Linguistic and Nonlinguistic Vocalizations Shape the Perception of Emotional Faces-An Electroencephalography Study. J Cogn Neurosci 2025; 37:970-987. PMID: 39620941. DOI: 10.1162/jocn_a_02284.
Abstract
Vocal emotions are crucial in guiding visual attention toward emotionally significant environmental events, such as recognizing emotional faces. This study employed continuous EEG recordings to examine the impact of linguistic and nonlinguistic vocalizations on facial emotion processing. Participants completed a facial emotion discrimination task while viewing fearful, happy, and neutral faces. The behavioral and ERP results indicated that fearful nonlinguistic vocalizations accelerated the recognition of fearful faces and elicited a larger P1 amplitude, whereas happy linguistic vocalizations accelerated the recognition of happy faces and similarly induced a greater P1 amplitude. During recognition of fearful faces, a greater N170 component was observed in the right hemisphere when the emotional category of the priming vocalization was consistent with the face stimulus; for happy faces, this congruence effect occurred in the left hemisphere instead. Representational similarity analysis revealed that the temporoparietal regions automatically differentiate between linguistic and nonlinguistic vocalizations early in face processing. In conclusion, these findings enhance our understanding of the interplay between vocalization types and facial emotion recognition, highlighting the importance of cross-modal processing in emotional perception.
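The abstract names representational similarity analysis (RSA) but not its implementation. Below is a minimal time-resolved RSA sketch under assumptions not taken from the paper: condition-by-channel ERP patterns, correlation distance for the neural RDMs, and Spearman correlation against a binary model RDM coding vocalization type.

```python
# Hedged sketch of time-resolved RSA; shapes, metrics, and the model RDM are
# illustrative assumptions, not the study's actual pipeline.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_timecourse(erp, model_rdm):
    """erp: (n_conditions, n_channels, n_times); model_rdm: (n_cond, n_cond)."""
    model_vec = model_rdm[np.triu_indices_from(model_rdm, k=1)]
    rho = []
    for t in range(erp.shape[2]):
        neural_rdm = pdist(erp[:, :, t], metric="correlation")  # 1 - Pearson r
        rho.append(spearmanr(neural_rdm, model_vec)[0])
    return np.array(rho)

# Toy usage: 6 conditions (2 vocalization types x 3 emotions), 64 channels, 200 samples.
rng = np.random.default_rng(0)
erp = rng.standard_normal((6, 64, 200))
vtype = np.arange(6) // 3                       # first 3 conditions = one vocalization type
model = (vtype[:, None] != vtype[None, :]).astype(float)
print(rsa_timecourse(erp, model).shape)         # (200,)
```

Time points where the correlation exceeds a permutation-based threshold would indicate when neural patterns distinguish linguistic from nonlinguistic vocalizations.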
Affiliation(s)
- Junyu Liang
- South China Normal University
- Liaoning Normal University
- Key Laboratory of Brain and Cognitive Neuroscience
- Mingming Zhang
- Liaoning Normal University
- Key Laboratory of Brain and Cognitive Neuroscience
- Lan Yang
- South China Normal University
- Liaoning Normal University
- Key Laboratory of Brain and Cognitive Neuroscience
- Yiwen Li
- Liaoning Normal University
- Key Laboratory of Brain and Cognitive Neuroscience
- Beijing Normal University
- Yuchen Li
- Liaoning Normal University
- Key Laboratory of Brain and Cognitive Neuroscience
- Li Wang
- South China Normal University
- Wenbo Luo
- Liaoning Normal University
- Key Laboratory of Brain and Cognitive Neuroscience
2. Mohapatra AN, Jabarin R, Ray N, Netser S, Wagner S. Impaired emotion recognition in Cntnap2-deficient mice is associated with hyper-synchronous prefrontal cortex neuronal activity. Mol Psychiatry 2025; 30:1440-1452. PMID: 39289476. PMCID: PMC11919685. DOI: 10.1038/s41380-024-02754-8.
Abstract
Individuals diagnosed with autism spectrum disorder (ASD) show difficulty in recognizing the emotions of others, a capacity termed emotion recognition. While human fMRI studies have linked multiple brain areas to emotion recognition, the specific mechanisms underlying impaired emotion recognition in ASD remain unclear. Here, we employed an emotional state preference (ESP) task to show that Cntnap2-knockout (KO) mice, an established ASD model, do not distinguish between conspecifics according to their emotional state. We assessed brain-wide local-field potential (LFP) signals during various social behavior tasks and found that Cntnap2-KO mice exhibited higher LFP theta and gamma rhythmicity than did C57BL/6J mice, even at rest. Specifically, Cntnap2-KO mice showed increased theta coherence, especially between the prelimbic cortex (PrL) and the hypothalamic paraventricular nucleus, during social behavior. Moreover, we observed significantly increased Granger causality of theta rhythmicity between these two brain areas, across several types of social behavior tasks. Finally, optogenetic stimulation of PrL pyramidal neurons in C57BL/6J mice impaired their social discrimination abilities, including in ESP. Together, these results suggest that increased rhythmicity of PrL pyramidal neuronal activity and its hyper-synchronization with specific brain regions are involved in the impaired emotion recognition exhibited by Cntnap2-KO mice.
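As a rough illustration of the kind of band-limited coherence measure reported here, a sketch using SciPy's magnitude-squared coherence; the 4-12 Hz theta band, the sampling rate, and the toy "PrL"/"PVN" signals are assumptions, not the study's parameters.

```python
# Hedged sketch: magnitude-squared coherence between two LFP channels in theta.
import numpy as np
from scipy.signal import coherence

fs = 1000.0                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
shared = rng.standard_normal(60 * int(fs))   # common drive creates coherence
prl = shared + 0.5 * rng.standard_normal(shared.size)  # toy "PrL" LFP
pvn = shared + 0.5 * rng.standard_normal(shared.size)  # toy "PVN" LFP

f, cxy = coherence(prl, pvn, fs=fs, nperseg=2048)
theta = (f >= 4) & (f <= 12)
print(f"mean theta coherence: {cxy[theta].mean():.2f}")
```

Directional measures such as Granger causality would typically be estimated on top of such signals with a vector-autoregressive model, which is omitted here.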
Affiliation(s)
- Alok Nath Mohapatra
- Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa, Israel.
- Renad Jabarin
- Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa, Israel
- Natali Ray
- Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa, Israel
- Shai Netser
- Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa, Israel
- Shlomo Wagner
- Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa, Israel
3. Jabarin R, Mohapatra AN, Ray N, Netser S, Wagner S. Distinct prelimbic cortex neuronal responses to emotional states of others drive emotion recognition in adult mice. Curr Biol 2025; 35:994-1011.e8. PMID: 39922187. DOI: 10.1016/j.cub.2025.01.014.
Abstract
The ability to perceive the emotional states of others, termed emotion recognition, allows individuals to adapt their conduct to the social environment. However, the brain mechanisms underlying this capacity, which is known to be impaired in individuals with autism spectrum disorder (ASD), remain elusive. Here, we show that adult mice can discern between emotional states of conspecifics. Fiber photometry recordings of calcium signals in the prelimbic (PrL) medial prefrontal cortex revealed inhibition of pyramidal neurons during investigation of emotionally aroused individuals, as opposed to transient excitation toward naive conspecifics. Chronic electrophysiological recordings at the single-cell level indicated social stimulus-specific responses in PrL neurons at the onset and conclusion of social investigation bouts, potentially regulating the initiation and termination of social interactions. Finally, optogenetic augmentation of the differential neuronal response enhanced emotion recognition, while its reduction eliminated such behavior. Thus, differential PrL neuronal response to individuals with distinct emotional states underlies murine emotion recognition.
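The fiber photometry analysis itself is not detailed in the abstract; below is a minimal sketch of the common baseline-normalization (ΔF/F) step. The sampling rate, baseline window, and toy trace are assumptions.

```python
# Hedged sketch of a standard fiber-photometry delta-F-over-F computation.
import numpy as np

def dff(trace, fs, baseline_s=(0.0, 2.0)):
    """Fractional change in fluorescence relative to a pre-event baseline mean."""
    i0, i1 = (int(s * fs) for s in baseline_s)
    f0 = trace[i0:i1].mean()
    return (trace - f0) / f0

fs = 30.0                                  # assumed photometry sampling rate (Hz)
rng = np.random.default_rng(2)
trace = 100.0 + rng.standard_normal(300)   # toy fluorescence, 10 s at 30 Hz
trace[90:150] -= 10.0                      # toy dip, e.g. inhibition during investigation
print(round(dff(trace, fs)[90:150].mean(), 3))   # about -0.1
```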
Affiliation(s)
- Renad Jabarin
- Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa 3478403, Israel
- Alok Nath Mohapatra
- Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa 3478403, Israel
- Natali Ray
- Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa 3478403, Israel
- Shai Netser
- Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa 3478403, Israel
- Shlomo Wagner
- Sagol Department of Neurobiology, Faculty of Natural Sciences, University of Haifa, Haifa 3478403, Israel.
4. Jiang Z, Long Y, Zhang X, Liu Y, Bai X. CNEV: A corpus of Chinese nonverbal emotional vocalizations with a database of emotion category, valence, arousal, and gender. Behav Res Methods 2025; 57:62. PMID: 39838181. DOI: 10.3758/s13428-024-02595-x.
Abstract
Nonverbal emotional vocalizations play a crucial role in conveying emotions during human interactions. Validated corpora of these vocalizations have facilitated emotion-related research and found wide-ranging applications. However, existing corpora have lacked representation from diverse cultural backgrounds, which may limit the generalizability of the resulting theories. The present paper introduces the Chinese Nonverbal Emotional Vocalization (CNEV) corpus, the first nonverbal emotional vocalization corpus recorded and validated entirely by Mandarin speakers from China. The CNEV corpus contains 2415 vocalizations across five emotion categories: happiness, sadness, fear, anger, and neutrality. It also includes a database containing subjective evaluation data on emotion category, valence, arousal, and speaker gender, as well as the acoustic features of the vocalizations. Key conclusions drawn from statistical analyses of perceptual evaluations and acoustic analysis include the following: (1) the CNEV corpus exhibits adequate reliability and high validity; (2) perceptual evaluations reveal a tendency for individuals to associate anger with male voices and fear with female voices; (3) acoustic analysis indicates that males are more effective at expressing anger, while females excel in expressing fear; and (4) the observed perceptual patterns align with the acoustic analysis results, suggesting that the perceptual differences may stem not only from the subjective factors of perceivers but also from objective expressive differences in the vocalizations themselves. For academic research purposes, the CNEV corpus and database are freely available for download at https://osf.io/6gy4v/.
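For a sense of how such a validation database can be queried, a small pandas sketch follows; the column names and rating scales are illustrative assumptions, not CNEV's actual schema.

```python
# Hedged sketch: summarizing a hypothetical validation table for a vocal corpus.
import pandas as pd

ratings = pd.DataFrame({
    "stimulus":  ["s1", "s1", "s2", "s2"],
    "intended":  ["anger", "anger", "fear", "fear"],
    "perceived": ["anger", "anger", "fear", "anger"],
    "valence":   [2.1, 1.8, 2.5, 3.0],    # e.g., 1-9 scale (assumed)
    "arousal":   [7.2, 6.9, 7.8, 6.5],
})

# Recognition accuracy per intended category (a basic validity check).
accuracy = (ratings["perceived"] == ratings["intended"]).groupby(ratings["intended"]).mean()
# Mean valence/arousal per intended category.
dimensions = ratings.groupby("intended")[["valence", "arousal"]].mean()
print(accuracy, dimensions, sep="\n")
```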
Affiliation(s)
- Zhongqing Jiang
- College of Psychology, Liaoning Normal University, No. 850 Huanghe Road, Dalian, 116029, Liaoning, China.
- Yanling Long
- College of Psychology, Liaoning Normal University, No. 850 Huanghe Road, Dalian, 116029, Liaoning, China
- Xi'e Zhang
- Xianyang Senior High School of Shaanxi Province, Xianyang, China
- Yangtao Liu
- College of Psychology, Liaoning Normal University, No. 850 Huanghe Road, Dalian, 116029, Liaoning, China
- Xue Bai
- College of Psychology, Liaoning Normal University, No. 850 Huanghe Road, Dalian, 116029, Liaoning, China
5. Jiang J, Johnson JCS, Requena-Komuro MC, Benhamou E, Sivasathiaseelan H, Chokesuwattanaskul A, Nelson A, Nortley R, Weil RS, Volkmer A, Marshall CR, Bamiou DE, Warren JD, Hardy CJD. Comprehension of acoustically degraded emotional prosody in Alzheimer's disease and primary progressive aphasia. Sci Rep 2024; 14:31332. PMID: 39732859. PMCID: PMC11682080. DOI: 10.1038/s41598-024-82694-z.
Abstract
Previous research suggests that emotional prosody perception is impaired in neurodegenerative diseases like Alzheimer's disease (AD) and primary progressive aphasia (PPA). However, no previous research has investigated emotional prosody perception in these diseases under non-ideal listening conditions. We recruited 18 patients with AD and 31 with PPA (nine logopenic [lvPPA], 11 nonfluent/agrammatic [nfvPPA], and 11 semantic [svPPA]), together with 24 healthy age-matched individuals. Participants listened to speech stimuli conveying three emotions in clear and noise-vocoded forms and had to identify the emotion being conveyed. We then conducted correlation analyses between task performance and measures of socio-emotional functioning. All patient groups showed significant impairments in identifying clear emotional prosody compared to healthy individuals. These deficits were exacerbated under noise-vocoded conditions, with all patient groups performing significantly worse than healthy individuals and patients with lvPPA performing significantly worse than those with svPPA. Significant correlations with social cognition measures were observed more consistently for noise-vocoded than clear emotional prosody comprehension. These findings open a window on a dimension of real-world emotional communication that has often been overlooked in dementia, with particular relevance to social cognition, and begin to suggest a novel candidate paradigm for investigating and quantifying this systematically.
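Noise-vocoding replaces spectral detail with band-limited noise shaped by each band's temporal envelope. A minimal sketch of this standard technique follows; the channel count, band edges, and filter order are assumptions, since the study's exact vocoder settings are not given in the abstract.

```python
# Hedged sketch of a noise vocoder: keep each band's envelope, replace its
# fine structure with band-limited noise.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, fs, n_channels=6, lo=100.0, hi=7000.0):
    edges = np.geomspace(lo, hi, n_channels + 1)   # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        envelope = np.abs(hilbert(sosfilt(sos, x)))            # band envelope
        carrier = sosfilt(sos, rng.standard_normal(x.size))    # band-limited noise
        out += envelope * carrier
    return out

fs = 16000
t = np.arange(fs) / fs                       # 1 s toy signal
speech_like = np.sin(2 * np.pi * 220 * t) * (1.0 + np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(speech_like, fs)
```

Fewer channels degrade spectral cues more severely, which is the manipulation that makes emotion identification harder under vocoded conditions.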
Affiliation(s)
- Jessica Jiang
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 1st Floor, 8-11 Queen Square, London, WC1N 3AR, UK
- Jeremy C S Johnson
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 1st Floor, 8-11 Queen Square, London, WC1N 3AR, UK
- Basic and Clinical Neuroscience, School of Neuroscience, King's College London, London, UK
- Maï-Carmen Requena-Komuro
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 1st Floor, 8-11 Queen Square, London, WC1N 3AR, UK
- Department of Psychology, Institute of Clinical Psychology and Psychotherapy Research, MSH Medical School Hamburg, Hamburg, Germany
- Elia Benhamou
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 1st Floor, 8-11 Queen Square, London, WC1N 3AR, UK
- Harri Sivasathiaseelan
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 1st Floor, 8-11 Queen Square, London, WC1N 3AR, UK
- Anthipa Chokesuwattanaskul
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 1st Floor, 8-11 Queen Square, London, WC1N 3AR, UK
- Division of Neurology, Department of Internal Medicine, King Chulalongkorn Memorial Hospital, Thai Red Cross Society, Bangkok, Thailand
- Cognitive Clinical and Computational Neuroscience Research Unit, Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand
- Annabel Nelson
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 1st Floor, 8-11 Queen Square, London, WC1N 3AR, UK
- Ross Nortley
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 1st Floor, 8-11 Queen Square, London, WC1N 3AR, UK
- Wexham Park Hospital, Frimley Health NHS Foundation Trust, Berkshire, UK
- Rimona S Weil
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 1st Floor, 8-11 Queen Square, London, WC1N 3AR, UK
- Anna Volkmer
- Division of Psychology and Language Sciences, University College London, London, UK
- Charles R Marshall
- Centre for Preventive Neurology, Wolfson Institute of Population Health, Queen Mary University of London, London, UK
- Doris-Eva Bamiou
- UCL Ear Institute and UCL/UCLH Biomedical Research Centre, National Institute of Health Research, University College London, London, UK
- Jason D Warren
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 1st Floor, 8-11 Queen Square, London, WC1N 3AR, UK
- Chris J D Hardy
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 1st Floor, 8-11 Queen Square, London, WC1N 3AR, UK
6. Aktar Ugurlu G, Ugurlu BN, Yalcinkaya M. Evaluating the Impact of BoNT-A Injections on Facial Expressions: A Deep Learning Analysis. Aesthet Surg J 2024; 45:NP1-NP7. PMID: 39365026. DOI: 10.1093/asj/sjae204.
Abstract
BACKGROUND: Botulinum toxin type A (BoNT-A) injections are widely administered for facial rejuvenation, but their effects on facial expressions remain unclear.
OBJECTIVES: In this study, we aimed to objectively measure the impact of BoNT-A injections on facial expressions with deep learning techniques.
METHODS: One hundred eighty patients aged 25 to 60 years who underwent BoNT-A application to the upper face were included. Patients were photographed with neutral, happy, surprised, and angry expressions before and 14 days after the procedure. A convolutional neural network (CNN)-based facial emotion recognition (FER) system analyzed 1440 photographs with a hybrid data set of clinical images and the Karolinska Directed Emotional Faces (KDEF) data set.
RESULTS: The CNN model accurately predicted 90.15% of the test images. Significant decreases in the recognition of angry and surprised expressions were observed postinjection (P < .05), with no significant changes in happy or neutral expressions (P > .05). Angry expressions were often misclassified as neutral or happy (P < .05), and surprised expressions were more likely to be perceived as neutral (P < .05).
CONCLUSIONS: Deep learning can effectively assess the impact of BoNT-A injections on facial expressions, providing more standardized data than traditional surveys. BoNT-A may reduce the expression of anger and surprise, potentially leading to a more positive facial appearance and emotional state. Further studies are needed to understand the broader implications of these changes.
LEVEL OF EVIDENCE: 4 (Therapeutic)
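The paper's CNN architecture is not specified in the abstract; below is a minimal PyTorch sketch of a small FER classifier of this general kind. The 48x48 grayscale input and layer sizes are assumptions; only the four output classes mirror the expressions studied.

```python
# Hedged sketch of a small CNN facial-emotion classifier (not the paper's model).
import torch
import torch.nn as nn

class SmallFER(nn.Module):
    def __init__(self, n_classes=4):   # neutral, happy, surprised, angry
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(128 * 6 * 6, 128), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(128, n_classes),
        )

    def forward(self, x):               # x: (batch, 1, 48, 48)
        return self.classifier(self.features(x))

model = SmallFER()
logits = model(torch.randn(8, 1, 48, 48))   # toy batch of face crops
print(logits.shape)                          # torch.Size([8, 4])
```

Pre/post comparisons of the kind reported would then tabulate the predicted class for each expression before versus 14 days after injection.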
7. Morningstar M, Hughes C, French RC, Grannis C, Mattson WI, Nelson EE. Functional connectivity during facial and vocal emotion recognition: Preliminary evidence for dissociations in developmental change by nonverbal modality. Neuropsychologia 2024; 202:108946. PMID: 38945440. DOI: 10.1016/j.neuropsychologia.2024.108946.
Abstract
The developmental trajectory of emotion recognition (ER) skills is thought to vary by nonverbal modality, with vocal ER becoming mature later than facial ER. To investigate potential neural mechanisms contributing to this dissociation at a behavioural level, the current study examined whether youth's neural functional connectivity during vocal and facial ER tasks showed differential developmental change across time. Youth ages 8-19 (n = 41) completed facial and vocal ER tasks while undergoing functional magnetic resonance imaging, at two timepoints (1 year apart; n = 36 for behavioural data, n = 28 for neural data). Partial least squares analyses revealed that functional connectivity during ER is both distinguishable by modality (with different patterns of connectivity for facial vs. vocal ER) and across time-with changes in connectivity being particularly pronounced for vocal ER. ER accuracy was greater for faces than voices, and positively associated with age; although task performance did not change appreciably across a 1-year period, changes in latent functional connectivity patterns across time predicted participants' ER accuracy at Time 2. Taken together, these results suggest that vocal and facial ER are supported by distinguishable neural correlates that may undergo different developmental trajectories. Our findings are also preliminary evidence that changes in network integration may support the development of ER skills in childhood and adolescence.
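The partial least squares analysis itself is involved, but the functional connectivity features it operates on are typically simple ROI-to-ROI correlations. A sketch of that step alone follows; the ROI count and time series are toy assumptions.

```python
# Hedged sketch: building a functional connectivity matrix from ROI time series.
import numpy as np

def fc_matrix(ts):
    """ts: (n_timepoints, n_rois) time series -> (n_rois, n_rois) Pearson matrix."""
    return np.corrcoef(ts, rowvar=False)

rng = np.random.default_rng(3)
ts = rng.standard_normal((240, 90))    # toy: 240 volumes, 90 ROIs
fc = fc_matrix(ts)
edges = fc[np.triu_indices(90, k=1)]   # vectorized upper triangle for group analyses
print(fc.shape, edges.shape)           # (90, 90) (4005,)
```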
Affiliation(s)
- M Morningstar
- Department of Psychology, Queen's University, Canada; Centre for Neuroscience Studies, Queen's University, Canada.
- C Hughes
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Canada
- R C French
- Center for Biobehavioral Health, Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA
- C Grannis
- Center for Biobehavioral Health, Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, USA
- W I Mattson
- Center for Biobehavioral Health, Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, USA
- E E Nelson
- Center for Biobehavioral Health, Abigail Wexner Research Institute, Nationwide Children's Hospital, Columbus, OH, USA; Department of Pediatrics, Ohio State University Wexner College of Medicine, Columbus, OH, USA
8. Morningstar M. A Louder Call for the Integration of Multiple Nonverbal Channels in the Study of Affect. Affect Sci 2024; 5:201-208. PMID: 39391348. PMCID: PMC11461435. DOI: 10.1007/s42761-024-00265-x.
Abstract
Affective science has increasingly sought to represent emotional experiences multimodally, measuring affect through a combination of self-report ratings, linguistic output, physiological measures, and/or nonverbal expressions. However, despite widespread recognition that non-facial nonverbal cues are an important facet of expressive behavior, measures of nonverbal expressions commonly focus solely on facial movements. This Commentary represents a call for affective scientists to integrate a larger range of nonverbal cues, including gestures, postures, and vocal cues, alongside facial cues in efforts to represent the experience of emotion and its communication. Using the measurement and analysis of vocal cues as an illustrative case, the Commentary considers challenges, potential solutions, and the theoretical and translational significance of working to integrate multiple nonverbal channels in the study of affect.
Affiliation(s)
- Michele Morningstar
- Department of Psychology, Queen’s University, Kingston, Canada
- Centre for Neuroscience Studies, Queen’s University, Kingston, Canada
9. Irino T, Hanatani Y, Kishida K, Naito S, Kawahara H. Effects of age and hearing loss on speech emotion discrimination. Sci Rep 2024; 14:18328. PMID: 39112612. PMCID: PMC11306396. DOI: 10.1038/s41598-024-69216-7.
Abstract
Better communication with older people requires not only improving speech intelligibility but also understanding how well emotions can be conveyed and the effect of age and hearing loss (HL) on emotion perception. In this paper, emotion discrimination experiments were conducted using a vocal morphing method and an HL simulator in young normal-hearing (YNH) and older participants. Speech sounds were morphed to represent intermediate emotions between all combinations of happiness, sadness, and anger. Discrimination performance was compared when the YNH listened to normal sounds, when the same YNH listened to HL-simulated sounds, and when older people listened to the same normal sounds. The results showed that there was no significant difference between discrimination with and without HL simulation, suggesting that peripheral HL may not affect emotion perception. The discrimination performance of the older participants was significantly worse only for the anger-happiness pair than for the other emotion pairs and for the YNH. It was also found that the difficulty increases with age, not just with hearing level.
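Vocal morphing interpolates between the acoustic parameters of two recordings to create intermediate emotions. A toy sketch of the idea follows; real morphing (e.g., STRAIGHT-based) aligns and interpolates spectral envelopes, F0, and timing, whereas only a toy F0 contour is interpolated here.

```python
# Hedged toy sketch of parameter-space morphing between two emotions.
import numpy as np

def morph(params_a, params_b, rate):
    """rate=0 -> emotion A, rate=1 -> emotion B, intermediate rates -> morphs."""
    return (1.0 - rate) * params_a + rate * params_b

f0_happy = np.array([220.0, 240.0, 260.0, 250.0])  # toy F0 contour (Hz)
f0_sad = np.array([180.0, 175.0, 170.0, 165.0])
for rate in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(rate, morph(f0_happy, f0_sad, rate))
```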
Affiliation(s)
- Toshio Irino
- Faculty of Systems Engineering, Wakayama University, Wakayama, 640-8510, Japan.
- Graduate School of Systems Engineering, Wakayama University, Wakayama, 640-8510, Japan.
- Yukiho Hanatani
- Graduate School of Systems Engineering, Wakayama University, Wakayama, 640-8510, Japan
- Kazuma Kishida
- Faculty of Systems Engineering, Wakayama University, Wakayama, 640-8510, Japan
- Shuri Naito
- Faculty of Systems Engineering, Wakayama University, Wakayama, 640-8510, Japan
- Hideki Kawahara
- Center for Innovative and Joint Research, Wakayama University, Wakayama, 640-8510, Japan
10. Ghasemahmad Z, Mrvelj A, Panditi R, Sharma B, Perumal KD, Wenstrup JJ. Emotional vocalizations alter behaviors and neurochemical release into the amygdala. eLife 2024; 12:RP88838. PMID: 39008352. PMCID: PMC11249735. DOI: 10.7554/elife.88838.
Abstract
The basolateral amygdala (BLA), a brain center of emotional expression, contributes to acoustic communication by first interpreting the meaning of social sounds in the context of the listener's internal state, then organizing the appropriate behavioral responses. We propose that modulatory neurochemicals such as acetylcholine (ACh) and dopamine (DA) provide internal-state signals to the BLA while an animal listens to social vocalizations. We tested this in a vocal playback experiment utilizing highly affective vocal sequences associated with either mating or restraint, then sampled and analyzed fluids within the BLA for a broad range of neurochemicals and observed behavioral responses of adult male and female mice. In male mice, playback of restraint vocalizations increased ACh release and usually decreased DA release, while playback of mating sequences evoked the opposite neurochemical release patterns. In non-estrus female mice, patterns of ACh and DA release with mating playback were similar to males. Estrus females, however, showed increased ACh, associated with vigilance, as well as increased DA, associated with reward-seeking. Experimental groups that showed increased ACh release also showed the largest increases in an aversive behavior. These neurochemical release patterns and several behavioral responses depended on a single prior experience with the mating and restraint behaviors. Our results support a model in which ACh and DA provide contextual information to sound-analyzing BLA neurons that modulate their output to downstream brain regions controlling behavioral responses to social vocalizations.
Affiliation(s)
- Zahra Ghasemahmad
- Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, United States
- School of Biomedical Sciences, Kent State University, Kent, United States
- Brain Health Research Institute, Kent State University, Kent, United States
- Aaron Mrvelj
- Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, United States
- Rishitha Panditi
- Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, United States
- Bhavya Sharma
- Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, United States
- Karthic Drishna Perumal
- Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, United States
- Jeffrey J Wenstrup
- Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, United States
- School of Biomedical Sciences, Kent State University, Kent, United States
- Brain Health Research Institute, Kent State University, Kent, United States
11. Löytömäki J, Laakso ML, Huttunen K. Social-Emotional and Behavioural Difficulties in Children with Neurodevelopmental Disorders: Emotion Perception in Daily Life and in a Formal Assessment Context. J Autism Dev Disord 2023; 53:4744-4758. PMID: 36184695. PMCID: PMC10627915. DOI: 10.1007/s10803-022-05768-9.
Abstract
Children with neurodevelopmental disorders often have social-emotional and behavioural difficulties. The present study explored these difficulties in children (n = 50, aged 6-10 years) with autism spectrum disorder, attention-deficit/hyperactivity disorder and developmental language disorder. Parents, teachers and therapists evaluated children's social-emotional and behavioural difficulties through a self-devised questionnaire and the Strengths and Difficulties Questionnaire. Additionally, the children, along with their typically developing age peers (n = 106), completed six emotion discrimination tasks. Analysis revealed some impaired emotion discrimination skills that were predictive of behavioural challenges in daily life and associated with the parent-reported existence of friends. Timely intervention in these children is needed, and it should also include emotion perception training.
Affiliation(s)
- Joanna Löytömäki
- Faculty of Humanities/Research Unit of Logopedics, University of Oulu, P.O. Box 1000, 90014, Oulun yliopisto, Finland.
- Marja-Leena Laakso
- Department of Education, University of Jyvaskyla, PL 35, 40014, Jyvaskylan yliopisto, Finland
- Kerttu Huttunen
- Faculty of Humanities/Research Unit of Logopedics, University of Oulu, P.O. Box 1000, 90014, Oulun yliopisto, Finland
- Department of Otorhinolaryngology, Head and Neck Surgery, Oulu University Hospital, Oulu, Finland
- Medical Research Center Oulu, Oulu, Finland
12. Conring F, Gangl N, Derome M, Wiest R, Federspiel A, Walther S, Stegmayer K. Associations of resting-state perfusion and auditory verbal hallucinations with and without emotional content in schizophrenia. Neuroimage Clin 2023; 40:103527. PMID: 37871539. PMCID: PMC10598456. DOI: 10.1016/j.nicl.2023.103527.
Abstract
Auditory Verbal Hallucinations (AVH) are highly prevalent in patients with schizophrenia. AVH with high emotional content lead to particularly poor functional outcomes. Increasing evidence shows that AVH are associated with alterations in structure and function in language and memory related brain regions. However, neural correlates of AVH with emotional content remain unclear. In our study (n = 91), we related resting-state cerebral perfusion to AVH and emotional content, comparing four groups: patients with AVH with emotional content (n = 13), without emotional content (n = 14), without hallucinations (n = 20) and healthy controls (n = 44). Patients with AVH and emotional content presented with increased perfusion within the amygdala and the ventromedial and dorsomedial prefrontal cortex (vmPFC/dmPFC) compared to patients with AVH without emotional content. In addition, patients with any AVH showed hyperperfusion within the anterior cingulate gyrus, the vmPFC/dmPFC, the right hippocampus, and the left pre- and postcentral gyrus compared to patients without AVH. Our results point to metabolic alterations in brain areas critical for the processing of emotions as key to the pathophysiology of AVH with emotional content. In particular, hyperperfusion of the amygdala may reflect and even trigger the emotional content of AVH, while hyperperfusion of the vmPFC/dmPFC cluster may indicate insufficient top-down amygdala regulation in patients with schizophrenia.
Affiliation(s)
- Frauke Conring
- Translational Research Center, University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland; Graduate School for Health Sciences, University of Bern, Bern, Switzerland.
- Nicole Gangl
- Translational Research Center, University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland; Graduate School for Health Sciences, University of Bern, Bern, Switzerland
- Melodie Derome
- Translational Research Center, University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Roland Wiest
- Support Center of Advanced Neuroimaging (SCAN), University Institute of Diagnostic and Interventional Neuroradiology, Inselspital, Bern, Switzerland
- Andrea Federspiel
- Translational Research Center, University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Sebastian Walther
- Translational Research Center, University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
- Katharina Stegmayer
- Translational Research Center, University Hospital of Psychiatry and Psychotherapy, University of Bern, Bern, Switzerland
13. Voytenko S, Shanbhag S, Wenstrup J, Galazyuk A. Intracellular recordings reveal integrative function of the basolateral amygdala in acoustic communication. J Neurophysiol 2023; 129:1334-1343. PMID: 37098994. PMCID: PMC10202475. DOI: 10.1152/jn.00103.2023.
Abstract
The amygdala, a brain center of emotional expression, contributes to appropriate behavior responses during acoustic communication. In support of that role, the basolateral amygdala (BLA) analyzes the meaning of vocalizations through the integration of multiple acoustic inputs with information from other senses and an animal's internal state. The mechanisms underlying this integration are poorly understood. This study focuses on the integration of vocalization-related inputs to the BLA from auditory centers during this processing. We used intracellular recordings of BLA neurons in unanesthetized big brown bats that rely heavily on a complex vocal repertoire during social interactions. Postsynaptic and spiking responses of BLA neurons were recorded to three vocal sequences that are closely related to distinct behaviors (appeasement, low-level aggression, and high-level aggression) and have different emotional valence. Our novel findings are that most BLA neurons showed postsynaptic responses to one or more vocalizations (31 of 46) but that many fewer neurons showed spiking responses (8 of 46). The spiking responses were more selective than postsynaptic potential (PSP) responses. Furthermore, vocal stimuli associated with either positive or negative valence were similarly effective in eliciting excitatory postsynaptic potentials (EPSPs), inhibitory postsynaptic potentials (IPSPs), and spiking responses. This indicates that BLA neurons process both positive- and negative-valence vocal stimuli. The greater selectivity of spiking responses than PSP responses suggests an integrative role for processing within the BLA to enhance response specificity in acoustic communication.
NEW & NOTEWORTHY: The amygdala plays an important role in social communication by sound, but little is known about how it integrates diverse auditory inputs to form selective responses to social vocalizations. We show that BLA neurons receive inputs that are responsive to both negative- and positive-affect vocalizations but their spiking outputs are fewer and highly selective for vocalization type. Our work demonstrates that BLA neurons perform an integrative function in shaping appropriate behavioral responses to social vocalizations.
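One common way to quantify the greater selectivity of spiking over PSP responses is a simple selectivity index. The formula below is illustrative, not necessarily the one used in the paper.

```python
# Hedged sketch of a response-selectivity index across stimuli.
import numpy as np

def selectivity(responses):
    """(r_max - mean of others) / (r_max + mean of others): 0 = unselective."""
    r = np.asarray(responses, dtype=float)
    r_max = r.max()
    others = (r.sum() - r_max) / (r.size - 1)
    return (r_max - others) / (r_max + others)

psp_amplitudes = [4.0, 3.5, 3.8]   # toy mean PSP amplitude per vocal sequence (mV)
spike_counts = [6.0, 0.5, 0.8]     # toy spike counts for the same sequences
print(selectivity(psp_amplitudes), selectivity(spike_counts))  # spiking is more selective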
Affiliation(s)
- Sergiy Voytenko
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States
- Sharad Shanbhag
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States
- Brain Health Research Institute, Kent State University, Kent, Ohio, United States
- Jeffrey Wenstrup
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States
- Brain Health Research Institute, Kent State University, Kent, Ohio, United States
- Alexander Galazyuk
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States
- Brain Health Research Institute, Kent State University, Kent, Ohio, United States
14. Kelly SD, Ngo Tran QA. Exploring the Emotional Functions of Co-Speech Hand Gesture in Language and Communication. Top Cogn Sci 2023. PMID: 37115518. DOI: 10.1111/tops.12657.
Abstract
Research over the past four decades has built a convincing case that co-speech hand gestures play a powerful role in human cognition. However, this recent focus on the cognitive function of gesture has, to a large extent, overlooked its emotional role, a role that was once central to research on bodily expression. In the present review, we first give a brief summary of the wealth of research demonstrating the cognitive function of co-speech gestures in language acquisition, learning, and thinking. Building on this foundation, we revisit the emotional function of gesture across a wide range of communicative contexts, from clinical to artistic to educational, and spanning diverse fields, from cognitive neuroscience to linguistics to affective science. Bridging the cognitive and emotional functions of gesture highlights promising avenues of research that have varied practical and theoretical implications for human-machine interactions, therapeutic interventions, language evolution, embodied cognition, and more.
Affiliation(s)
- Spencer D Kelly
- Department of Psychological and Brain Sciences, Center for Language and Brain, Colgate University, 13 Oak Dr., Hamilton, NY, 13346, United States
- Quang-Anh Ngo Tran
- Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th St., Bloomington, IN, 47405, United States
15. Ekberg M, Stavrinos G, Andin J, Stenfelt S, Dahlström Ö. Acoustic Features Distinguishing Emotions in Swedish Speech. J Voice 2023:S0892-1997(23)00103-0. PMID: 37045739. DOI: 10.1016/j.jvoice.2023.03.010.
Abstract
Few studies have examined which acoustic features of speech can be used to distinguish between different emotions, and how combinations of acoustic parameters contribute to identification of emotions. The aim of the present study was to investigate which acoustic parameters in Swedish speech are most important for differentiation between, and identification of, the emotions anger, fear, happiness, sadness, and surprise in Swedish sentences. One-way ANOVAs were used to compare acoustic parameters between the emotions, and both simple and multiple logistic regression models were used to examine the contribution of different acoustic parameters to differentiation between emotions. Results showed differences between emotions for several acoustic parameters in Swedish speech: surprise was the most distinct emotion, with significant differences compared to the other emotions across a range of acoustic parameters, while anger and happiness did not differ from each other on any parameter. The logistic regression models showed that fear was the best-predicted emotion while happiness was most difficult to predict. Frequency- and spectral-balance-related parameters were best at predicting fear. Amplitude- and temporal-related parameters were most important for surprise, while a combination of frequency-, amplitude- and spectral-balance-related parameters was important for sadness. Assuming that there are similarities between acoustic models and how listeners infer emotions in speech, the results suggest that individuals with hearing loss, who have reduced frequency-detection abilities, may have more difficulty than normal-hearing individuals in identifying fear in Swedish speech. Since happiness and fear relied primarily on amplitude- and spectral-balance-related parameters, detection of these emotions is probably facilitated more by hearing aid use.
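A sketch of the kind of multiple logistic regression used here, predicting one emotion versus the rest from acoustic parameters; the feature names and toy data are illustrative, since the study's exact parameter set is not listed in the abstract.

```python
# Hedged sketch: one-vs-rest logistic regression on acoustic features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
features = ["f0_mean", "f0_sd", "intensity_sd", "spectral_slope"]  # assumed names
X = rng.standard_normal((200, 4))                 # toy: 200 utterances
y = (X[:, 0] + 0.5 * X[:, 3] + rng.standard_normal(200) > 0).astype(int)  # "fear" vs rest

clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X, y)
coefs = clf.named_steps["logisticregression"].coef_[0]
print(dict(zip(features, coefs.round(2))))        # standardized log-odds per feature
```

Standardizing the features first makes the coefficient magnitudes comparable, which is what lets one say, for example, that frequency-related parameters contribute most to predicting fear.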
Affiliation(s)
- M Ekberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Östergötland, Sweden.
- G Stavrinos
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Östergötland, Sweden
- J Andin
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Östergötland, Sweden
- S Stenfelt
- Department of Biomedical and Clinical Sciences, Linköping University, Linköping, Östergötland, Sweden
- Ö Dahlström
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Östergötland, Sweden
16. Disentangling emotional signals in the brain: an ALE meta-analysis of vocal affect perception. Cogn Affect Behav Neurosci 2023; 23:17-29. PMID: 35945478. DOI: 10.3758/s13415-022-01030-y.
Abstract
Recent advances in neuroimaging research on vocal emotion perception have revealed voice-sensitive areas specialized in processing affect. Experimental data on this subject are varied, investigating a wide range of emotions through different vocal signals and task demands. The present meta-analysis was designed to disentangle this diversity of results by summarizing neuroimaging data in the vocal emotion perception literature. Data from 44 experiments contrasting emotional and neutral voices were analyzed to assess brain areas involved in vocal affect perception in general, as well as depending on the type of voice signal (speech prosody or vocalizations), the task demands (implicit or explicit attention to emotions), and the specific emotion perceived. Results reassessed a consistent bilateral network of Emotional Voices Areas consisting of the superior temporal cortex and primary auditory regions. Specific activations and lateralization of these regions, as well as additional areas (insula, middle temporal gyrus), were further modulated by signal type and task demands. Exploring the sparser data on single emotions also suggested the recruitment of other regions (insula, inferior frontal gyrus, frontal operculum) for specific aspects of each emotion. These novel meta-analytic results suggest that while the bulk of vocal affect processing is localized in the STC, the complexity and variety of such vocal signals entails functional specificities in complex and varied cortical (and potentially subcortical) response pathways.
17. de Beer C, Wartenburger I, Huttenlauch C, Hanne S. A systematic review on production and comprehension of linguistic prosody in people with acquired language and communication disorders resulting from unilateral brain lesions. J Commun Disord 2023; 101:106298. PMID: 36623377. DOI: 10.1016/j.jcomdis.2022.106298.
Abstract
BACKGROUND: Prosody serves central functions in language processing, including linguistic functions (linguistic prosody) such as structuring the speech signal. Impairments in production and comprehension of linguistic prosody have been described for persons with unilateral right (RHDP) or left hemisphere damage (LHDP). However, reported results differ with respect to the characteristics and severities of these impairments.
AIMS: We conducted a systematic literature review focusing on production and comprehension of linguistic prosody at the prosody-syntax interface (i.e., phrase or sentence level) in LHDP and RHDP.
METHODS & PROCEDURES: In a systematic literature search we included: (i) empirical studies with (ii) adult RHDP and/or LHDP (iii) investigating production and/or comprehension of linguistic prosody at the (iv) phrase or sentence level (v) reporting quantitative data on prosodic measures. We excluded overview papers; studies involving participants with dysarthria, apraxia of speech, foreign accent syndrome, psychiatric diseases, and/or neurodegenerative diseases; studies focusing primarily on emotional prosody or on lexical stress / the word level; and studies of which no full text was available and/or that were published in a language other than English. We searched the databases BIOSIS, MEDLINE, EMBASE, PubMed, Web of Science, CINAHL, Cochrane Library, PSYNDEX, PsycINFO and speechBITE, last searched on January 13th, 2022. We found 2,631 studies without duplicates and identified 43 studies for inclusion in our systematic review. For data extraction and synthesis of results, we grouped studies by (i) modality (production vs. comprehension), (ii) function (syntactic structure vs. information structure), and (iii) experiment task. For production studies, outcome measures were defined as the productive use of the different prosodic cues (lengthening, pause, f0, amplitude). For comprehension studies, performance measures (accuracy and reaction times) were defined as outcome measures. In accordance with the PRISMA 2020 statement (Page et al., 2021), we conducted a quality check to assess study risk of bias. Our review was pre-registered with PROSPERO (CRD42019120308).
OUTCOMES & RESULTS: Of the 43 studies reviewed, 30 involved RHDP (n = 309), assessing production in 15 studies and comprehension in 16 studies (one study investigated both). LHDP (n = 438) were included in 35 studies, of which 15 studied production and 21 evaluated comprehension (one study investigated both). Despite the heterogeneity of results in the studies reviewed, our synthesis suggests that both LHDP and RHDP show limitations, but no complete impairment, in their production and/or comprehension of linguistic prosody. Prosodic limitations are evident in different areas of processing linguistic prosody, such as syntactic disambiguation or the distinction between sentence types. There is a tendency towards more severe limitations in LHDP as compared to RHDP.
CONCLUSIONS: We only included published studies in our review and did not assess risk of reporting bias or conduct systematic certainty assessments of the outcomes. Despite these limitations, we conclude that both groups show deficits in production and comprehension of linguistic prosody, but neither LHDP nor RHDP are completely impaired in their prosodic processing. This suggests that prosody is a relevant communicative resource for LHDP and RHDP worth addressing in speech-language therapy.
Affiliation(s)
- Carola de Beer
- SFB1287, Cognitive Sciences, Department of Linguistics, University of Potsdam, Germany; Faculty of Linguistics and Literary Studies & Medical School OWL, University of Bielefeld, Germany
- Isabell Wartenburger
- SFB1287, Cognitive Sciences, Department of Linguistics, University of Potsdam, Germany
- Clara Huttenlauch
- SFB1287, Cognitive Sciences, Department of Linguistics, University of Potsdam, Germany
- Sandra Hanne
- SFB1287, Cognitive Sciences, Department of Linguistics, University of Potsdam, Germany
18. Lin Y, Fan X, Chen Y, Zhang H, Chen F, Zhang H, Ding H, Zhang Y. Neurocognitive Dynamics of Prosodic Salience over Semantics during Explicit and Implicit Processing of Basic Emotions in Spoken Words. Brain Sci 2022; 12:1706. PMID: 36552167. PMCID: PMC9776349. DOI: 10.3390/brainsci12121706.
Abstract
How language mediates emotional perception and experience is poorly understood. The present event-related potential (ERP) study examined the explicit and implicit processing of emotional speech to differentiate the relative influences of communication channel, emotion category and task type in the prosodic salience effect. Thirty participants (15 women) were presented with spoken words denoting happiness, sadness and neutrality in either the prosodic or semantic channel. They were asked to judge the emotional content (explicit task) and speakers' gender (implicit task) of the stimuli. Results indicated that emotional prosody (relative to semantics) triggered larger N100, P200 and N400 amplitudes with greater delta, theta and alpha inter-trial phase coherence (ITPC) and event-related spectral perturbation (ERSP) values in the corresponding early time windows, and continued to produce larger LPC amplitudes and faster responses during late stages of higher-order cognitive processing. The relative salience of prosody and semantics was modulated by emotion and task, though such modulatory effects varied across different processing stages. The prosodic salience effect was reduced for sadness processing and in the implicit task during early auditory processing and decision-making but reduced for happiness processing in the explicit task during conscious emotion processing. Additionally, across-trial synchronization of delta, theta and alpha bands predicted the ERP components, with higher ITPC and ERSP values significantly associated with stronger N100, P200, N400 and LPC enhancement. These findings reveal the neurocognitive dynamics of emotional speech processing, with prosodic salience tied to stage-dependent emotion- and task-specific effects, which can inform the understanding of language and emotion processing from cross-linguistic/cultural and clinical perspectives.
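Inter-trial phase coherence is the length of the mean unit phase vector across trials: ITPC(f, t) = |(1/K) * sum over trials k of exp(i * phi_k(f, t))|. A minimal single-band sketch follows; the filter settings and toy data are assumptions, and time-frequency decompositions such as wavelets are more typical than the band-pass/Hilbert shortcut used here.

```python
# Hedged sketch of inter-trial phase coherence for one frequency band.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def itpc(trials, fs, band):
    """trials: (n_trials, n_times) epochs -> ITPC over time for one band."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    phases = np.angle(hilbert(sosfiltfilt(sos, trials, axis=1), axis=1))
    return np.abs(np.exp(1j * phases).mean(axis=0))  # 0 = random, 1 = perfect locking

fs = 500
rng = np.random.default_rng(5)
t = np.arange(fs) / fs                                              # 1 s epochs
trials = np.sin(2 * np.pi * 6 * t) + rng.standard_normal((40, fs))  # theta-locked + noise
print(round(itpc(trials, fs, (4, 8)).mean(), 2))
```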
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Xinran Fan
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Yueqi Chen
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Hao Zhang
- School of Foreign Languages and Literature, Shandong University, Jinan 250100, China
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha 410012, China
- Hui Zhang
- School of International Education, Shandong University, Jinan 250100, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Correspondence: (H.D.); (Y.Z.); Tel.: +86-213-420-5664 (H.D.); +1-612-624-7818 (Y.Z.)
- Yang Zhang
- Department of Speech-Language-Hearing Science & Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN 55455, USA
- Correspondence: (H.D.); (Y.Z.); Tel.: +86-213-420-5664 (H.D.); +1-612-624-7818 (Y.Z.)
19. Rühlemann C. How is emotional resonance achieved in storytellings of sadness/distress? Front Psychol 2022; 13:952119. PMID: 36248512. PMCID: PMC9559217. DOI: 10.3389/fpsyg.2022.952119.
Abstract
Storytelling pivots around stance, seen as a window unto emotion: storytellers project a stance expressing their emotion toward the events, and recipients preferably mirror that stance by affiliating with the storyteller's stance. Whether the recipient's affiliative stance is at the same time expressive of his/her emotional resonance with the storyteller and of emotional contagion is a question that has recently attracted intriguing research in Physiological Interaction Research. Connecting to this line of inquiry, this paper concerns itself with storytellings of sadness/distress. Its aim is to identify factors that facilitate emotion contagion in storytellings of sadness/distress and factors that impede it. Given the complexity and novelty of this question, this study is designed as a pilot study to scour the terrain and sketch out an interim roadmap before a larger study is undertaken. The database is small, comprising two storytellings of sadness/distress. The methodology used to address the above research question is expansive: it includes CA methods to transcribe and analyze interactionally relevant aspects of the storytelling interaction; it draws on psychophysiological measures to establish whether and to what degree emotional resonance between co-participants is achieved. In discussing possible reasons why resonance is (not or not fully) achieved, the paper embarks on an extended analysis of the storytellers' multimodal storytelling performance (reenactments, prosody, gaze, gesture) and considers factors lying beyond the storyteller's control, including relevance, participation framework, personality, and susceptibility to emotion contagion.
20. Netser S, Nahardiya G, Weiss-Dicker G, Dadush R, Goussha Y, John SR, Taub M, Werber Y, Sapir N, Yovel Y, Harony-Nicolas H, Buxbaum JD, Cohen L, Crammer K, Wagner S. TrackUSF, a novel tool for automated ultrasonic vocalization analysis, reveals modified calls in a rat model of autism. BMC Biol 2022; 20:159. PMID: 35820848. PMCID: PMC9277954. DOI: 10.1186/s12915-022-01299-y.
Abstract
Background: Various mammalian species emit ultrasonic vocalizations (USVs), which reflect their emotional state and mediate social interactions. USVs are usually analyzed by manual or semi-automated methodologies that categorize discrete USVs according to their structure in the frequency-time domains. This laborious analysis hinders the effective use of USVs as a readout for high-throughput analysis of behavioral changes in animals.
Results: Here we present a novel automated open-source tool that utilizes a different approach towards USV analysis, termed TrackUSF. To validate TrackUSF, we analyzed calls from different animal species, namely mice, rats, and bats, recorded in various settings, and compared the results with a manual analysis by a trained observer. We found that TrackUSF detected the majority of USVs, with less than 1% of false-positive detections. We then employed TrackUSF to analyze social vocalizations in Shank3-deficient rats, a rat model of autism, and revealed that these vocalizations exhibit a spectrum of deviations from appetitive calls towards aversive calls.
Conclusions: TrackUSF is a simple and easy-to-use system that may be used for a high-throughput comparison of ultrasonic vocalizations between groups of animals of any kind in any setting, with no prior assumptions.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12915-022-01299-y.
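TrackUSF's pipeline differs from conventional call segmentation, but a minimal threshold-based detector conveys the baseline approach it improves on. The 30 kHz floor, spectrogram settings, and z-threshold below are assumptions, not TrackUSF's parameters.

```python
# Hedged sketch: flag spectrogram frames whose ultrasonic-band energy is high.
import numpy as np
from scipy.signal import spectrogram

def detect_usv_frames(audio, fs, f_lo=30000.0, z_thresh=3.0):
    """Return times of frames whose energy above f_lo exceeds a z-scored threshold."""
    f, t, sxx = spectrogram(audio, fs=fs, nperseg=512, noverlap=256)
    band_energy = sxx[f >= f_lo].sum(axis=0)     # summed power above f_lo, per frame
    z = (band_energy - band_energy.mean()) / band_energy.std()
    return t[z > z_thresh]                       # times of candidate USV frames

fs = 250_000                                     # typical USV recording rate (assumed)
t = np.arange(int(0.5 * fs)) / fs                # 0.5 s of toy audio
rng = np.random.default_rng(6)
audio = 0.01 * rng.standard_normal(t.size)
audio[10_000:30_000] += np.sin(2 * np.pi * 60_000 * t[10_000:30_000])  # toy 60 kHz call
print(detect_usv_frames(audio, fs))
```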
Collapse
Affiliation(s)
- Shai Netser
- Sagol Department of Neurobiology, University of Haifa, 3498838, Haifa, Israel
- The Integrated Brain and Behavior Research Center (IBBR), Faculty of Natural Sciences, University of Haifa, Mt. Carmel, 3498838, Haifa, Israel
| | - Guy Nahardiya
- Sagol Department of Neurobiology, University of Haifa, 3498838, Haifa, Israel
- The Integrated Brain and Behavior Research Center (IBBR), Faculty of Natural Sciences, University of Haifa, Mt. Carmel, 3498838, Haifa, Israel
| | - Gili Weiss-Dicker
- Department of Electrical Engineering, The Technion, 32000, Haifa, Israel
| | - Roei Dadush
- Department of Electrical Engineering, The Technion, 32000, Haifa, Israel
| | - Yizhaq Goussha
- Sagol Department of Neurobiology, University of Haifa, 3498838, Haifa, Israel
- The Integrated Brain and Behavior Research Center (IBBR), Faculty of Natural Sciences, University of Haifa, Mt. Carmel, 3498838, Haifa, Israel
| | - Shanah Rachel John
- Sagol Department of Neurobiology, University of Haifa, 3498838, Haifa, Israel
- The Integrated Brain and Behavior Research Center (IBBR), Faculty of Natural Sciences, University of Haifa, Mt. Carmel, 3498838, Haifa, Israel
| | - Mor Taub
- School of Zoology, Faculty of Life-Sciences, Tel-Aviv University, Tel Aviv, Israel
| | - Yuval Werber
- Department of Evolutionary and Environmental Biology and Institute of Evolution, University of Haifa, Haifa, Israel
| | - Nir Sapir
- Department of Evolutionary and Environmental Biology and Institute of Evolution, University of Haifa, Haifa, Israel
| | - Yossi Yovel
- School of Zoology, Faculty of Life-Sciences, Tel-Aviv University, Tel Aviv, Israel
| | - Hala Harony-Nicolas
- The Department of Psychiatry and The Seaver Autism Center for Research and Treatment, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
| | - Joseph D Buxbaum
- The Department of Psychiatry and The Seaver Autism Center for Research and Treatment, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
| | - Lior Cohen
- Sagol Department of Neurobiology, University of Haifa, 3498838, Haifa, Israel
| | - Koby Crammer
- Department of Electrical Engineering, The Technion, 32000, Haifa, Israel
| | - Shlomo Wagner
- Sagol Department of Neurobiology, University of Haifa, 3498838, Haifa, Israel
- The Integrated Brain and Behavior Research Center (IBBR), Faculty of Natural Sciences, University of Haifa, Mt. Carmel, 3498838, Haifa, Israel
| |
Collapse
|
21
|
Henderson RD, Kepp KP, Eisen A. ALS/FTD: Evolution, Aging, and Cellular Metabolic Exhaustion. Front Neurol 2022; 13:890203. [PMID: 35711269 PMCID: PMC9196861 DOI: 10.3389/fneur.2022.890203] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2022] [Accepted: 04/19/2022] [Indexed: 11/15/2022] Open
Abstract
Amyotrophic lateral sclerosis and frontotemporal dementia (ALS/FTD) are neurodegenerations with evolutionary underpinnings, expansive clinical presentations, and multiple genetic risk factors involving a complex network of pathways. This perspective considers the complex cellular pathology of aging motoneuronal and frontal/prefrontal cortical networks in the context of evolutionary, clinical, and biochemical features of the disease. We emphasize the importance of evolution in the development of higher cortical function under the influence of increasing lifespan. In particular, we consider the role of aging in the metabolic competence of delicately optimized neurons, age-related increases in proteostatic costs, and specific genetic risk factors that gradually reduce the energy available for neuronal function, leading to neuronal failure and disease.
Collapse
Affiliation(s)
| | - Kasper Planeta Kepp
- Department of Chemistry, Technical University of Denmark, Kongens Lyngby, Denmark
| | - Andrew Eisen
- Division of Neurology, Department of Medicine, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
| |
Collapse
|
22
|
Weisholtz DS, Kreiman G, Silbersweig DA, Stern E, Cha B, Butler T. Localized task-invariant emotional valence encoding revealed by intracranial recordings. Soc Cogn Affect Neurosci 2022; 17:549-558. [PMID: 34941992 PMCID: PMC9164208 DOI: 10.1093/scan/nsab134] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 09/05/2021] [Accepted: 12/22/2021] [Indexed: 11/13/2022] Open
Abstract
The ability to distinguish between negative, positive and neutral valence is a key part of emotion perception. Emotional valence has conceptual meaning that supersedes any particular type of stimulus, although it is typically captured experimentally in association with particular tasks. We sought to identify neural encoding for task-invariant emotional valence. We evaluated whether high-gamma responses (HGRs) to visually displayed words conveying emotions could be used to decode emotional valence from HGRs to facial expressions. Intracranial electroencephalography was recorded from 14 individuals while they participated in two tasks, one involving reading words with positive, negative, and neutral valence, and the other involving viewing faces with positive, negative, and neutral facial expressions. Quadratic discriminant analysis was used to identify information in the HGR that differentiates the three emotion conditions. A classifier was trained on the emotional valence labels from one task and was cross-validated on data from the same task (within-task classifier) as well as the other task (between-task classifier). Emotional valence could be decoded in the left medial orbitofrontal cortex and middle temporal gyrus, both using within-task classifiers and between-task classifiers. These observations suggest the presence of task-independent emotional valence information in the signals from these regions.
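A hedged sketch of the cross-task decoding logic described above, using scikit-learn's quadratic discriminant analysis. The feature matrices, labels, and array shapes are simulated stand-ins, not the authors' data or code.

```python
# Sketch of within-task and between-task valence decoding with QDA.
# Toy data: trials x features (e.g., mean high-gamma power per electrode).
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_words = rng.normal(size=(120, 8)); y_words = rng.integers(0, 3, 120)  # 0/1/2 = neg/neu/pos
X_faces = rng.normal(size=(120, 8)); y_faces = rng.integers(0, 3, 120)

qda = QuadraticDiscriminantAnalysis()
within = cross_val_score(qda, X_words, y_words, cv=5).mean()  # within-task cross-validation
qda.fit(X_words, y_words)                                     # train on the word task
between = qda.score(X_faces, y_faces)                         # test on the face task
print(f"within-task acc={within:.2f}, between-task acc={between:.2f}")
```

Above-chance between-task accuracy is what would indicate task-invariant valence information in a region's signal.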
Collapse
Affiliation(s)
- Daniel S Weisholtz
- Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
| | - Gabriel Kreiman
- Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
| | - David A Silbersweig
- Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
| | - Emily Stern
- Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Ceretype Neuromedicine, Inc
| | - Brannon Cha
- University of California San Diego School of Medicine
- Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
| | - Tracy Butler
- Department of Radiology, Weill Cornell Medical Center, New York 10065, USA
| |
Collapse
|
23
|
Liu L, Götz A, Lorette P, Tyler MD. How Tone, Intonation and Emotion Shape the Development of Infants’ Fundamental Frequency Perception. Front Psychol 2022; 13:906848. [PMID: 35719494 PMCID: PMC9204181 DOI: 10.3389/fpsyg.2022.906848] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Accepted: 05/10/2022] [Indexed: 12/02/2022] Open
Abstract
Fundamental frequency (ƒ0), perceived as pitch, is the first and arguably most salient auditory component humans are exposed to from the beginning of life. It carries multiple linguistic (e.g., word meaning) and paralinguistic (e.g., speaker emotion) functions in speech and communication. The mappings between these functions and ƒ0 features vary within a language and differ cross-linguistically. For instance, a rising pitch can be perceived as a question in English but as a lexical tone in Mandarin. Such variations mean that infants must learn the specific mappings based on their respective linguistic and social environments. To date, canonical theoretical frameworks and most empirical studies have typically focused on individual functions rather than on the multi-functionality of ƒ0. More importantly, despite infants' eventual mastery of ƒ0 in communication, it is unclear how they learn to decompose and recognize the overlapping functions carried by ƒ0. In this paper, we review the symbioses and synergies of the lexical, intonational, and emotional functions that can be carried by ƒ0 and that are acquired throughout infancy. On the basis of our review, we put forward the Learnability Hypothesis: that infants decompose and acquire multiple ƒ0 functions through native/environmental experiences. Under this hypothesis, we propose representative cases, such as the synergy scenario, in which infants use visual cues to disambiguate and decompose the different ƒ0 functions. Further, viable ways to test the scenarios derived from this hypothesis are suggested across auditory and visual modalities. Discovering how infants learn to master the diverse functions carried by ƒ0 can increase our understanding of linguistic systems, auditory processing, and communication functions.
Collapse
Affiliation(s)
- Liquan Liu
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, Australia
- Center for Multilingualism in Society Across the Lifespan, University of Oslo, Oslo, Norway
- Australian Research Council Centre of Excellence for the Dynamics of Language, Canberra, ACT, Australia
- *Correspondence: Liquan Liu
| | - Antonia Götz
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, Australia
- Department of Linguistics, University of Potsdam, Potsdam, Germany
| | - Pernelle Lorette
- Department of English Linguistics, University of Mannheim, Mannheim, Germany
| | - Michael D. Tyler
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, Australia
- Australian Research Council Centre of Excellence for the Dynamics of Language, Canberra, ACT, Australia
| |
Collapse
|
24
|
Morningstar M, Mattson WI, Nelson EE. Longitudinal Change in Neural Response to Vocal Emotion in Adolescence. Soc Cogn Affect Neurosci 2022; 17:890-903. [PMID: 35323933 PMCID: PMC9527472 DOI: 10.1093/scan/nsac021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2021] [Revised: 02/25/2022] [Accepted: 03/21/2022] [Indexed: 01/09/2023] Open
Abstract
Adolescence is associated with maturation of function within neural networks supporting the processing of social information. Previous longitudinal studies have established developmental influences on youth’s neural response to facial displays of emotion. Given the increasing recognition of the importance of non-facial cues to social communication, we build on existing work by examining longitudinal change in neural response to vocal expressions of emotion in 8- to 19-year-old youth. Participants completed a vocal emotion recognition task at two timepoints (1 year apart) while undergoing functional magnetic resonance imaging. The right inferior frontal gyrus, right dorsal striatum and right precentral gyrus showed decreases in activation to emotional voices across timepoints, which may reflect focalization of response in these areas. Activation in the dorsomedial prefrontal cortex was positively associated with age but was stable across timepoints. In addition, the slope of change across visits varied as a function of participants’ age in the right temporo-parietal junction (TPJ): this pattern of activation across timepoints and age may reflect ongoing specialization of function across childhood and adolescence. Decreased activation in the striatum and TPJ across timepoints was associated with better emotion recognition accuracy. Findings suggest that specialization of function in social cognitive networks may support the growth of vocal emotion recognition skills across adolescence.
Collapse
Affiliation(s)
- Michele Morningstar
- Correspondence should be addressed to Michele Morningstar, Department of Psychology, Queen's University, 62 Arch Street, Kingston, ON K7L 3L3, Canada
| | - Whitney I Mattson
- Center for Biobehavioral Health, Nationwide Children’s Hospital, Columbus, OH 43205, USA
| | - Eric E Nelson
- Center for Biobehavioral Health, Nationwide Children’s Hospital, Columbus, OH 43205, USA
- Department of Pediatrics, The Ohio State University, Columbus, OH 43205, USA
| |
Collapse
|
25
|
Effects of mild-to-moderate sensorineural hearing loss and signal amplification on vocal emotion recognition in middle-aged–older individuals. PLoS One 2022; 17:e0261354. [PMID: 34995305 PMCID: PMC8740977 DOI: 10.1371/journal.pone.0261354] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Accepted: 11/29/2021] [Indexed: 11/19/2022] Open
Abstract
Previous research has shown deficits in vocal emotion recognition in sub-populations of individuals with hearing loss, making this a high-priority research topic. However, previous research has only examined vocal emotion recognition using verbal material, in which emotions are expressed through emotional prosody. There is evidence that older individuals with hearing loss suffer from deficits in general prosody recognition, not specific to emotional prosody. No study has examined the recognition of non-verbal vocalizations, which constitute another important source for the vocal communication of emotions. It might be the case that individuals with hearing loss have specific difficulties in recognizing emotions expressed through prosody in speech, but not through non-verbal vocalizations. We aim to examine whether vocal emotion recognition difficulties in middle-aged to older individuals with sensorineural mild-to-moderate hearing loss are better explained by deficits in vocal emotion recognition specifically, or by deficits in prosody recognition generally, by including both sentences and non-verbal expressions. Furthermore, some of the studies that have concluded that individuals with mild-to-moderate hearing loss have deficits in vocal emotion recognition ability have also found that the use of hearing aids does not improve recognition accuracy in this group. We aim to examine the effects of linear amplification and audibility on the recognition of different emotions expressed both verbally and non-verbally. Besides examining accuracy for different emotions, we will also look at patterns of confusion (which specific emotions are mistaken for which other emotions, and at what rates) during both amplified and non-amplified listening, and we will analyze all material acoustically and relate the acoustic content to performance. Together, these analyses will provide clues to the effects of amplification on the perception of different emotions. For these purposes, a total of 70 middle-aged to older individuals, half with mild-to-moderate hearing loss and half with normal hearing, will perform a computerized forced-choice vocal emotion recognition task with and without amplification.
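The planned confusion analysis can be illustrated with a short, hypothetical sketch; the emotion labels and simulated responses below are assumptions for illustration, not study data.

```python
# Sketch of a forced-choice confusion analysis: which emotions are mistaken
# for which. True/response labels here are simulated.
import numpy as np
from sklearn.metrics import confusion_matrix

emotions = ["anger", "fear", "happiness", "sadness", "surprise", "neutral"]
rng = np.random.default_rng(1)
true = rng.choice(emotions, size=200)                       # stimulus emotion
resp = np.where(rng.random(200) < 0.7, true,                # 70% correct,
                rng.choice(emotions, size=200))             # otherwise a random choice

cm = confusion_matrix(true, resp, labels=emotions, normalize="true")
for emo, row in zip(emotions, cm):
    runner_up = emotions[int(np.argsort(row)[-2])]          # second-most chosen response
    print(f"{emo}: hit rate {row[emotions.index(emo)]:.2f}, most confused with {runner_up}")
```

Running the same analysis separately on amplified and non-amplified blocks would expose how amplification reshapes the confusion pattern.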
Collapse
|
26
|
Bogdanova OV, Bogdanov VB, Miller LE, Hadj-Bouziane F. Simulated proximity enhances perceptual and physiological responses to emotional facial expressions. Sci Rep 2022; 12:109. [PMID: 34996925 PMCID: PMC8741866 DOI: 10.1038/s41598-021-03587-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Accepted: 12/02/2021] [Indexed: 11/29/2022] Open
Abstract
Physical proximity is important in social interactions. Here, we assessed whether simulated physical proximity modulates the perceived intensity of facial emotional expressions and their associated physiological signatures during observation or imitation of these expressions. Forty-four healthy volunteers rated the intensities of dynamic angry or happy facial expressions presented at two simulated locations, proximal (0.5 m) and distant (3 m) from the participants. We tested whether simulated physical proximity affected the spontaneous (in the observation task) and voluntary (in the imitation task) physiological responses (activity of the corrugator supercilii face muscle and pupil diameter) as well as subsequent ratings of emotional intensity. Angry expressions provoked relative activation of the corrugator supercilii muscle and pupil dilation, whereas happy expressions induced a decrease in corrugator supercilii muscle activity. In the proximal condition, these responses were enhanced during both observation and imitation of the facial expressions and were accompanied by an increase in subsequent affective ratings. In addition, individual variations in condition-related EMG activation during imitation of angry expressions predicted increases in subsequent emotional ratings. In sum, our results reveal novel insights into the impact of physical proximity on the perception of emotional expressions, with early proximity-induced enhancements of physiological responses followed by increased intensity ratings of facial emotional expressions.
Collapse
Affiliation(s)
- Olena V Bogdanova
- IMPACT Team, Lyon Neuroscience Research Center, INSERM, U1028, CNRS, UMR5292, University of Lyon, Bron Cedex, France
- INCIA, CNRS UMR 5287, Université de Bordeaux, Bordeaux, France
| | - Volodymyr B Bogdanov
- IMPACT Team, Lyon Neuroscience Research Center, INSERM, U1028, CNRS, UMR5292, University of Lyon, Bron Cedex, France
- Université de Bordeaux, Collège Science de la Sante, Institut Universitaire des Sciences de la Réadaptation, Handicap Activité Cognition Santé EA 4136, Bordeaux, France
| | - Luke E Miller
- Donders Centre for Cognition of Radboud University in Nijmegen, Nijmegen, The Netherlands
| | - Fadila Hadj-Bouziane
- IMPACT Team, Lyon Neuroscience Research Center, INSERM, U1028, CNRS, UMR5292, University of Lyon, Bron Cedex, France
| |
Collapse
|
27
|
Lieck R, Rohrmeier M. Discretisation and continuity: The emergence of symbols in communication. Cognition 2021; 215:104787. [PMID: 34303183 PMCID: PMC8381766 DOI: 10.1016/j.cognition.2021.104787] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2019] [Revised: 05/11/2021] [Accepted: 05/19/2021] [Indexed: 11/30/2022]
Abstract
Vocal signalling systems, as used by humans and various non-human animals, exhibit discrete and continuous properties that can naturally be used to express discrete and continuous information, such as distinct words to denote objects in the world and prosodic features to convey the emotions of the speaker. However, continuous aspects are not always expressed with the continuous properties of an utterance but are frequently categorised into discrete symbols. While the existence of symbols in communication is self-evident, the emergence of discretisation from a continuous space is not well understood. In this paper, we investigate the emergence of discrete symbols in regions with a continuous semantics by simulating the learning process of two agents that acquire a shared signalling system. The task is formalised as a reinforcement learning problem with a continuous form and meaning space. We identify two causes for the emergence of discretisation that do not originate in discrete semantics: 1) premature convergence to sub-optimal signalling conventions and 2) topological mismatch between the continuous form space and the continuous semantic space. The insights presented in this paper shed light on the origins of discrete symbols, whose existence is assumed by a large body of research concerned with the emergence of syntactic structures and meaning in language.
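The simulation idea can be caricatured in a few lines. The following toy signalling game is a sketch under strong simplifying assumptions (tabular values, greedy choice, crude binning of the continuous spaces) and is not the authors' reinforcement-learning formalization.

```python
# Toy continuous signalling game: a sender maps meanings in [0,1] to forms
# in [0,1], a receiver maps forms back to meanings, and both are reinforced
# by graded communicative success. Binning is an illustrative simplification.
import numpy as np

rng = np.random.default_rng(0)
n_bins = 20
q_send = rng.random((n_bins, n_bins))   # Q[meaning_bin, form_bin]
q_recv = rng.random((n_bins, n_bins))   # Q[form_bin, meaning_bin]

for step in range(20_000):
    m = rng.integers(n_bins)                        # sampled meaning
    f = int(np.argmax(q_send[m]))                   # greedy form choice
    m_hat = int(np.argmax(q_recv[f]))               # receiver's interpretation
    reward = 1.0 - abs(m - m_hat) / n_bins          # graded communicative success
    q_send[m, f] += 0.1 * (reward - q_send[m, f])   # simple value updates
    q_recv[f, m_hat] += 0.1 * (reward - q_recv[f, m_hat])

# Fewer used forms than meaning bins indicates emergent discretisation.
used_forms = {int(np.argmax(q_send[m])) for m in range(n_bins)}
print(f"{len(used_forms)} distinct forms for {n_bins} meanings")
```

Because choice here is purely greedy, the agents tend to lock into early conventions, loosely echoing the first cause of discretisation identified above (premature convergence to sub-optimal signalling conventions).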
Collapse
Affiliation(s)
- Robert Lieck
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland.
| | - Martin Rohrmeier
- Digital and Cognitive Musicology Lab, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland
| |
Collapse
|
28
|
Piwowarski M, Gadomska-Lila K, Nermend K. Cognitive Neuroscience Methods in Enhancing Health Literacy. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2021; 18:ijerph18105331. [PMID: 34067790 PMCID: PMC8155837 DOI: 10.3390/ijerph18105331] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/12/2021] [Revised: 05/12/2021] [Accepted: 05/13/2021] [Indexed: 01/10/2023]
Abstract
The aim of the article is to identify the usefulness of cognitive neuroscience methods in assessing the effectiveness of social advertising and in constructing messages on health promotion broadly understood, which is to contribute to the development of health awareness and hence to health literacy. The presented research has also proven useful for managing the processes that improve communication between an organization and its environment. The researchers experimentally applied cognitive neuroscience methods, mainly EEG measurements, including frontal asymmetry, one of the metrics most frequently used to measure the reception of advertising messages. The purpose of the study was to test cognitive responses, as expressed by neural indices (memorization, interest), to an advertisement for the construction of a hospice for adults. For comparative purposes, a questionnaire survey was also conducted. The research findings confirm that there are significant differences in how different groups of recipients (women/men) remember the advertisement in question. They also indicate different levels of interest in the advertisement, which may result from recipients' differing preferences concerning the nature of ads. The obtained results contribute to a better understanding of how to design health-related advertising messages so that they increase recipients' awareness of responsibility for their own health and induce specific behavior patterns aimed at supporting health-related initiatives, e.g., donating funds for building hospices or undergoing preventive tests. In this respect, the study findings help improve organizations' communication with their environment, thus enhancing their performance. The study has also confirmed the potential and innovativeness of cognitive neuroscience methods as well as their considerable possibilities for application in this field.
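For context, frontal asymmetry in such EEG studies is commonly computed as the difference of log alpha power over homologous frontal sites. The sketch below is a generic illustration on simulated signals; the channel pair (F3/F4), band, and sampling rate are assumptions, not necessarily the authors' exact metric.

```python
# Generic frontal alpha asymmetry index: ln(right alpha) - ln(left alpha).
# Signals are simulated; channel names and parameters are assumptions.
import numpy as np
from scipy.signal import welch

def alpha_power(x, fs, band=(8.0, 13.0)):
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)    # PSD via Welch's method
    mask = (f >= band[0]) & (f <= band[1])
    return np.trapz(pxx[mask], f[mask])         # integrated alpha-band power

fs = 256
t = np.arange(0, 30 * fs) / fs                  # 30 s of toy EEG
f3 = np.sin(2 * np.pi * 10 * t) + np.random.default_rng(0).normal(size=t.size)
f4 = 0.8 * np.sin(2 * np.pi * 10 * t) + np.random.default_rng(1).normal(size=t.size)

fai = np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))
print(f"frontal asymmetry index: {fai:+.3f} (positive = relatively more right alpha)")
```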
Collapse
Affiliation(s)
- Mateusz Piwowarski
- Department of Decision Support Methods and Cognitive Neuroscience, University of Szczecin, 71-004 Szczecin, Poland
| | | | - Kesra Nermend
- Department of Decision Support Methods and Cognitive Neuroscience, University of Szczecin, 71-004 Szczecin, Poland
| |
Collapse
|
29
|
Pawełczyk A, Łojek E, Żurner N, Gawłowska-Sawosz M, Gębski P, Pawełczyk T. The correlation between white matter integrity and pragmatic language processing in first episode schizophrenia. Brain Imaging Behav 2021; 15:1068-1084. [PMID: 32710335 PMCID: PMC8032571 DOI: 10.1007/s11682-020-00314-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
Abstract
Objective: Higher-order language disturbances could be the result of white matter tract abnormalities. This study explores the relationship between white matter integrity and pragmatic skills in first-episode schizophrenia. Methods: Thirty-four first-episode patients with schizophrenia and 32 healthy subjects participated in a pragmatic language and diffusion tensor imaging study, in which fractional anisotropy of the arcuate fasciculus, corpus callosum, and cingulum was correlated with the Polish version of the Right Hemisphere Language Battery. Results: The patients showed reduced fractional anisotropy in the right arcuate fasciculus, left anterior cingulum bundle, and left forceps minor. Among the first-episode patients, reduced understanding of written metaphors correlated with reduced fractional anisotropy of the left forceps minor, and greater explanation of written and picture metaphors correlated with reduced fractional anisotropy of the left anterior cingulum. Conclusions: These white matter dysfunctions may underlie the pragmatic language impairment in schizophrenia. Our results shed further light on the functional neuroanatomical basis of pragmatic language use by patients with schizophrenia.
Collapse
Affiliation(s)
- Agnieszka Pawełczyk
- Department of Affective and Psychotic Disorders, Medical University of Łódź, Łódź, Poland.
| | | | - Natalia Żurner
- Adolescent Ward, Central Clinical Hospital of Medical University of Łódź, Łódź, Poland
| | | | - Piotr Gębski
- Scanlab Diagnostyka Medyczna Księży Młyn, Medical Examination Centre, Medical University of Łódź, Łódź, Poland
| | - Tomasz Pawełczyk
- Department of Affective and Psychotic Disorders, Medical University of Łódź, Łódź, Poland
| |
Collapse
|
30
|
Facial expressions can be categorized along the upper-lower facial axis, from a perceptual perspective. Atten Percept Psychophys 2021; 83:2159-2173. [PMID: 33759116 DOI: 10.3758/s13414-021-02281-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/09/2021] [Indexed: 11/08/2022]
Abstract
A critical question, fundamental for building models of emotion, is how to categorize emotions. Previous studies have typically taken one of two approaches: (a) they focused on pre-perceptual visual cues, i.e., how salient facial features or configurations are displayed; or (b) they focused on post-perceptual affective experiences, i.e., how emotions affect behavior. In this study, we attempted to group emotions at a peri-perceptual processing level: it is well known that humans perceive different facial expressions differently; can we therefore classify facial expressions into distinct categories in terms of their perceptual similarities? Here, using a novel non-lexical paradigm, we assessed the perceptual dissimilarities between 20 facial expressions using reaction times. Multidimensional-scaling analysis revealed that facial expressions were organized predominantly along the upper-lower face axis. Cluster analysis of the behavioral data delineated three superordinate categories, and eye-tracking measurements validated these clustering results. Interestingly, these superordinate categories can be conceptualized according to how facial displays interact with acoustic communication: one group comprises expressions with salient mouth features; they likely link to species-specific vocalization, for example, crying or laughing. The second group comprises visual displays with diagnostic features in both the mouth and the eye regions; they are not directly articulable but can be expressed prosodically, for example, sad or angry. Expressions in the third group are also whole-face expressions but are completely independent of vocalization, and are likely blends of two or more elementary expressions. We propose a theoretical framework to interpret this tripartite division, in which distinct expression subsets are interpreted as successive phases in an evolutionary chain.
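The analysis pipeline, pairwise reaction-time dissimilarities embedded by multidimensional scaling and then clustered, can be sketched as follows; the 6 x 6 toy matrix is fabricated for illustration (the study used 20 expressions).

```python
# Sketch: embed pairwise RT dissimilarities with MDS, then cluster them.
# The dissimilarity matrix is simulated, not the study's data.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
d = rng.random((6, 6))
d = (d + d.T) / 2                              # symmetric dissimilarities
np.fill_diagonal(d, 0.0)

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(d)  # 2-D perceptual map
labels = fcluster(linkage(squareform(d), method="average"),
                  t=3, criterion="maxclust")   # three superordinate clusters
print(coords.round(2), labels)
```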
Collapse
|
31
|
The relationship between vocal affect recognition and psychosocial functioning for people with moderate to severe traumatic brain injury: a systematic review. BRAIN IMPAIR 2021. [DOI: 10.1017/brimp.2020.24] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
The purpose of this review was to explore how vocal affect recognition deficits impact the psychosocial functioning of people with moderate to severe traumatic brain injury (TBI). A systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted, whereby six databases were searched, with additional hand searching of key journals also completed. The search identified 1847 records after duplicates were removed, and 1749 were excluded through title and abstract screening. After full-text screening of 65 peer-reviewed articles published between January 1999 and August 2019, only five met the inclusion criteria. The methodological quality of the selected studies was assessed using the Mixed Methods Appraisal Tool (MMAT) Version 2018, with a fair level of agreement reached. A narrative synthesis of the results was completed, exploring vocal affect recognition and psychosocial functioning of people with moderate to severe TBI, including aspects of social cognition (i.e., empathy; Theory of Mind) and social behaviour. The results of the review were limited by a paucity of research in this area, a lack of high-level evidence, and wide variation in the outcome measures used. More rigorous study designs are required to establish more conclusive evidence regarding the degree and direction of the association between vocal affect recognition and aspects of psychosocial functioning. This review is registered with PROSPERO.
Collapse
|
32
|
Liu J, Tsang T, Ponting C, Jackson L, Jeste SS, Bookheimer SY, Dapretto M. Lack of neural evidence for implicit language learning in 9-month-old infants at high risk for autism. Dev Sci 2020; 24:e13078. [PMID: 33368921 DOI: 10.1111/desc.13078] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Revised: 12/18/2020] [Accepted: 12/21/2020] [Indexed: 11/30/2022]
Abstract
Word segmentation is a fundamental aspect of language learning, since identification of word boundaries in continuous speech must occur before the acquisition of word meanings can take place. We previously used functional magnetic resonance imaging (fMRI) to show that youth with autism spectrum disorder (ASD) are less sensitive to statistical and speech cues that guide implicit word segmentation. However, little is known about the neural mechanisms underlying this process during infancy and how this may be associated with ASD risk. Here, we examined early neural signatures of language-related learning in 9-month-old infants at high (HR) and low familial risk (LR) for ASD. During natural sleep, infants underwent fMRI while passively listening to three speech streams containing strong statistical and prosodic cues, strong statistical cues only, or minimal statistical cues to word boundaries. Compared to HR infants, LR infants showed greater activity in the left amygdala for the speech stream containing statistical and prosodic cues. While listening to this same speech stream, LR infants also showed more learning-related signal increases in left temporal regions as well as increasing functional connectivity between bilateral primary auditory cortex and right anterior insula. Importantly, learning-related signal increases at 9 months positively correlated with expressive language outcome at 36 months in both groups. In the HR group, greater signal increases were additionally associated with less severe ASD symptomatology at 36 months. These findings suggest that early differences in the neural networks underlying language learning may predict subsequent language development and altered trajectories associated with ASD risk.
Collapse
Affiliation(s)
- Janelle Liu
- Interdepartmental Neuroscience Program, University of California, Los Angeles, Los Angeles, CA, USA
- Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, Los Angeles, CA, USA
- Ahmanson-Lovelace Brain Mapping Center, University of California, Los Angeles, Los Angeles, CA, USA
| | - Tawny Tsang
- Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, Los Angeles, CA, USA
- Ahmanson-Lovelace Brain Mapping Center, University of California, Los Angeles, Los Angeles, CA, USA
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA
| | - Carolyn Ponting
- Ahmanson-Lovelace Brain Mapping Center, University of California, Los Angeles, Los Angeles, CA, USA
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA
- Semel Institute of Neuroscience and Human Behavior, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
| | - Lisa Jackson
- Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, Los Angeles, CA, USA
- Ahmanson-Lovelace Brain Mapping Center, University of California, Los Angeles, Los Angeles, CA, USA
- Semel Institute of Neuroscience and Human Behavior, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
| | - Shafali S Jeste
- Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, Los Angeles, CA, USA
- Semel Institute of Neuroscience and Human Behavior, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
| | - Susan Y Bookheimer
- Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, Los Angeles, CA, USA
- Semel Institute of Neuroscience and Human Behavior, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
| | - Mirella Dapretto
- Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, Los Angeles, CA, USA
- Ahmanson-Lovelace Brain Mapping Center, University of California, Los Angeles, Los Angeles, CA, USA
| |
Collapse
|
33
|
Charpentier J, Latinus M, Andersson F, Saby A, Cottier JP, Bonnet-Brilhault F, Houy-Durand E, Gomot M. Brain correlates of emotional prosodic change detection in autism spectrum disorder. NEUROIMAGE-CLINICAL 2020; 28:102512. [PMID: 33395999 PMCID: PMC8481911 DOI: 10.1016/j.nicl.2020.102512] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 11/17/2020] [Accepted: 11/20/2020] [Indexed: 11/30/2022]
Abstract
Highlights
- We used an oddball paradigm with vocal stimuli to record hemodynamic responses.
- Brain processing of vocal change relies on the STG, insula, and lingual area.
- Activity of the change-processing network can be modulated by saliency and emotion.
- Brain processing of vocal deviancy/novelty appears typical in adults with autism.
Autism Spectrum Disorder (ASD) is currently diagnosed by the joint presence of social impairments and restrictive, repetitive patterns of behaviors. While the co-occurrence of these two categories of symptoms is at the core of the pathology, most studies have investigated only one dimension to understand the underlying physiopathology. In this study, we analyzed brain hemodynamic responses in neurotypical adults (CTRL) and adults with autism spectrum disorder during an oddball paradigm, allowing us to explore brain responses to vocal changes with different levels of saliency (deviancy or novelty) and different emotional content (neutral, angry). Change detection relies on activation of the supratemporal gyrus and insula and on deactivation of the lingual area. The activity of these brain areas involved in the processing of deviancy with vocal stimuli was modulated by saliency and emotion. No group difference between CTRL and ASD was found for vocal stimuli processing or for deviancy/novelty processing, regardless of emotional content. The findings highlight that brain processing of voices and of neutral/emotional vocal changes is typical in adults with ASD. Yet, at the behavioral level, persons with ASD still experience difficulties with these cues. This might indicate impairments at later processing stages, or simply show that alterations present in childhood have repercussions in adulthood.
Collapse
Affiliation(s)
| | | | | | - Agathe Saby
- Centre universitaire de pédopsychiatrie, CHRU de Tours, Tours, France
| | | | | | - Emmanuelle Houy-Durand
- UMR 1253 iBrain, Inserm, Université de Tours, Tours, France; Centre universitaire de pédopsychiatrie, CHRU de Tours, Tours, France
| | - Marie Gomot
- UMR 1253 iBrain, Inserm, Université de Tours, Tours, France.
| |
Collapse
|
34
|
Morningstar M, Mattson WI, Singer S, Venticinque JS, Nelson EE. Children and adolescents' neural response to emotional faces and voices: Age-related changes in common regions of activation. Soc Neurosci 2020; 15:613-629. [PMID: 33017278 DOI: 10.1080/17470919.2020.1832572] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
The perception of facial and vocal emotional expressions engages overlapping regions of the brain. However, at a behavioral level, the ability to recognize the intended emotion in both types of nonverbal cues follows a divergent developmental trajectory throughout childhood and adolescence. The current study a) identified regions of common neural activation to facial and vocal stimuli in 8- to 19-year-old typically-developing adolescents, and b) examined age-related changes in blood-oxygen-level dependent (BOLD) response within these areas. Both modalities elicited activation in an overlapping network of subcortical regions (insula, thalamus, dorsal striatum), visual-motor association areas, prefrontal regions (inferior frontal cortex, dorsomedial prefrontal cortex), and the right superior temporal gyrus. Within these regions, increased age was associated with greater frontal activation to voices, but not faces. Results suggest that processing facial and vocal stimuli elicits activation in common areas of the brain in adolescents, but that age-related changes in response within these regions may vary by modality.
Collapse
Affiliation(s)
- M Morningstar
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH, USA
- Department of Pediatrics, The Ohio State University, Columbus, OH, USA
- Department of Psychology, Queen's University, Kingston, ON, Canada
| | - W I Mattson
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH, USA
| | - S Singer
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH, USA
| | - J S Venticinque
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH, USA
| | - E E Nelson
- Center for Biobehavioral Health, Nationwide Children's Hospital, Columbus, OH, USA
- Department of Pediatrics, The Ohio State University, Columbus, OH, USA
| |
Collapse
|
35
|
Soma CS, Baucom BRW, Xiao B, Butner JE, Hilpert P, Narayanan S, Atkins DC, Imel ZE. Coregulation of therapist and client emotion during psychotherapy. Psychother Res 2020; 30:591-603. [PMID: 32400306 DOI: 10.1080/10503307.2019.1661541] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/26/2022] Open
Abstract
OBJECTIVE Close interpersonal relationships are fundamental to emotion regulation. Clinical theory suggests that one role of therapists in psychotherapy is to help clients regulate emotions; however, whether and how clients and therapists regulate each other's emotions has not been empirically tested. Emotion coregulation - the bidirectional emotional linkage of two people that promotes emotional stability - is a specific, temporal process that provides a framework for testing the way in which therapists' and clients' emotions may be related on a moment-to-moment basis in clinically relevant ways. METHOD Utilizing 227 audio recordings from a relationally oriented treatment (Motivational Interviewing), we estimated continuous values of vocally encoded emotional arousal via mean fundamental frequency. We used dynamic systems models to examine emotional coregulation and tested the hypothesis that each individual's emotional arousal would be significantly associated with fluctuations in the other's emotional state over the course of a psychotherapy session. RESULTS Results indicated that when clients became more emotionally labile over the course of the session, therapists became less so. When changes in therapist arousal increased, the client's tendency to become more aroused during the session slowed. Alternatively, when changes in client arousal increased, the therapist's tendency to become less aroused slowed.
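One minimal way to formalize coregulation in this spirit is a coupled linear change model, in which each person's moment-to-moment change in arousal is regressed on both partners' previous levels. The sketch below uses simulated arousal series and is far simpler than the authors' dynamic systems models; the coupling coefficients are only a stand-in for their linkage parameters.

```python
# Toy coupled change model: delta(arousal) ~ own level + partner's level.
# The f0-proxied arousal series are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 300
therapist = np.cumsum(rng.normal(0, 1, n))      # toy arousal time series
client = np.cumsum(rng.normal(0, 1, n))

dT, dC = np.diff(therapist), np.diff(client)
X = np.column_stack([therapist[:-1], client[:-1], np.ones(n - 1)])
b_t, *_ = np.linalg.lstsq(X, dT, rcond=None)    # therapist change ~ both levels
b_c, *_ = np.linalg.lstsq(X, dC, rcond=None)    # client change ~ both levels
print(f"therapist<-client coupling: {b_t[1]:+.3f}, "
      f"client<-therapist coupling: {b_c[1]:+.3f}")
```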
Collapse
Affiliation(s)
- Christina S Soma
- Department of Educational Psychology, University of Utah, Salt Lake City, UT, USA
| | - Brian R W Baucom
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
| | - Bo Xiao
- Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
| | - Jonathan E Butner
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
| | - Peter Hilpert
- School of Psychology, University of Surrey, Guilford, UK
| | - Shrikanth Narayanan
- Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
| | - David C Atkins
- Department of Psychiatry and Behavioral Sciences, University of Washington, Seattle, WA, USA
| | - Zac E Imel
- Department of Educational Psychology, University of Utah, Salt Lake City, UT, USA
| |
Collapse
|
36
|
Proverbio AM, Santoni S, Adorni R. ERP Markers of Valence Coding in Emotional Speech Processing. iScience 2020; 23:100933. [PMID: 32151976 PMCID: PMC7063241 DOI: 10.1016/j.isci.2020.100933] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2019] [Revised: 12/20/2019] [Accepted: 02/19/2020] [Indexed: 11/01/2022] Open
Abstract
How is auditory emotional information processed? The study's aim was to compare cerebral responses to emotionally positive and negative spoken phrases matched for structure and content. Twenty participants listened to 198 vocal stimuli while detecting filler phrases containing first names. EEG was recorded from 128 sites. Three event-related potential (ERP) components were quantified and found to be sensitive to emotional valence from 350 ms of latency onward. P450 and late positivity were enhanced by positive content, whereas anterior negativity was larger for negative content. A similar set of markers (P300, N400, LP) was found previously for the processing of positive versus negative affective vocalizations, prosody, and music, which suggests a common neural mechanism for extracting the emotional content of auditory information. swLORETA applied to potentials recorded between 350 and 550 ms showed that negative speech activated the right temporo/parietal areas (BA40, BA20/21), whereas positive speech activated the left homologous and inferior frontal areas.
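Quantifying an ERP component typically means averaging over trials and taking the mean amplitude in a latency window. A generic sketch follows, with simulated epochs; the 350-550 ms window is taken from the analysis above, while the array shapes and sampling rate are assumptions.

```python
# Generic ERP quantification: trial average, then mean amplitude in a window.
# Epochs are simulated noise; shapes and sampling rate are assumptions.
import numpy as np

fs, t0 = 512, -0.1                                   # sampling rate, epoch start (s)
epochs = np.random.default_rng(0).normal(size=(198, 128, 512))  # trials x channels x samples
times = t0 + np.arange(epochs.shape[-1]) / fs

erp = epochs.mean(axis=0)                            # average over trials
win = (times >= 0.35) & (times <= 0.55)              # 350-550 ms window
mean_amp = erp[:, win].mean(axis=1)                  # one value per channel
print(f"channel with largest mean amplitude: {int(mean_amp.argmax())} "
      f"({mean_amp.max():.2f})")
```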
Collapse
Affiliation(s)
- Alice Mado Proverbio
- Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy.
| | - Sacha Santoni
- Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy
| | - Roberta Adorni
- Milan Center for Neuroscience, Department of Psychology, University of Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy
| |
Collapse
|
37
|
Lin SY, Lee CC, Chen YS, Kuo LW. Investigation of functional brain network reconfiguration during vocal emotional processing using graph-theoretical analysis. Soc Cogn Affect Neurosci 2020; 14:529-538. [PMID: 31157395 PMCID: PMC6545541 DOI: 10.1093/scan/nsz025] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2018] [Revised: 03/11/2019] [Accepted: 04/02/2019] [Indexed: 12/12/2022] Open
Abstract
Vocal expression is essential for conveying emotion during social interaction. Although vocal emotion has been explored in previous studies, little is known about how the perception of different vocal emotional expressions modulates functional brain network topology. In this study, we aimed to investigate the functional brain networks under different attributes of vocal emotion by graph-theoretical network analysis. Functional magnetic resonance imaging (fMRI) experiments were performed on 36 healthy participants. We utilized the Power-264 functional brain atlas to calculate the interregional functional connectivity (FC) from fMRI data under resting state and under vocal stimuli at different arousal and valence levels. The orthogonal minimal spanning trees method was used for topological filtering. The paired-sample t-test with Bonferroni correction across all regions and arousal-valence levels was used for statistical comparisons. Our results show that the brain network exhibits significantly altered network attributes at the FC, nodal, and global levels, especially under high-arousal or negative-valence vocal emotional stimuli. The alterations within/between well-known large-scale functional networks were also investigated. Through the present study, we have gained more insight into how comprehending emotional speech modulates brain networks. These findings may shed light on how the human brain processes emotional speech and how it distinguishes different emotional conditions.
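Topological filtering by spanning trees can be illustrated with a single minimum spanning tree. Note that the orthogonal-MST method named above unions several mutually exclusive trees, so the sketch below is a simplified stand-in on a simulated connectivity matrix.

```python
# Sketch: filter a functional connectivity matrix down to its minimum
# spanning tree, then compute an example graph metric. FC is simulated.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
fc = rng.random((10, 10))
fc = (fc + fc.T) / 2                            # symmetric toy FC matrix

dist = 1.0 - fc                                 # distance = 1 - correlation
np.fill_diagonal(dist, 0.0)                     # no self-loops
g = nx.from_numpy_array(dist)
mst = nx.minimum_spanning_tree(g, weight="weight")
eff = nx.global_efficiency(mst)                 # example global network metric
print(f"MST edges: {mst.number_of_edges()}, global efficiency: {eff:.3f}")
```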
Collapse
Affiliation(s)
- Shih-Yen Lin
- Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, Miaoli, Taiwan
- Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan
| | - Chi-Chun Lee
- Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan
| | - Yong-Sheng Chen
- Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan
| | - Li-Wei Kuo
- Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, Miaoli, Taiwan
- Institute of Medical Device and Imaging, National Taiwan University College of Medicine, Taipei, Taiwan
| |
Collapse
|
38
|
Pralus A, Fornoni L, Bouet R, Gomot M, Bhatara A, Tillmann B, Caclin A. Emotional prosody in congenital amusia: Impaired and spared processes. Neuropsychologia 2019; 134:107234. [DOI: 10.1016/j.neuropsychologia.2019.107234] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Revised: 08/12/2019] [Accepted: 10/16/2019] [Indexed: 12/15/2022]
|
39
|
Age-related differences in neural activation and functional connectivity during the processing of vocal prosody in adolescence. COGNITIVE AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2019; 19:1418-1432. [PMID: 31515750 DOI: 10.3758/s13415-019-00742-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
The ability to recognize others' emotions based on vocal emotional prosody follows a protracted developmental trajectory during adolescence. However, little is known about the neural mechanisms supporting this maturation. The current study investigated age-related differences in neural activation during a vocal emotion recognition (ER) task. Listeners aged 8 to 19 years old completed the vocal ER task while undergoing functional magnetic resonance imaging. The task of categorizing vocal emotional prosody elicited activation primarily in temporal and frontal areas. Age was associated with a) greater activation in regions in the superior, middle, and inferior frontal gyri, b) greater functional connectivity between the left precentral and inferior frontal gyri and regions in the bilateral insula and temporo-parietal junction, and c) greater fractional anisotropy in the superior longitudinal fasciculus, which connects frontal areas to posterior temporo-parietal regions. Many of these age-related differences in brain activation and connectivity were associated with better performance on the ER task. Increased activation in, and connectivity between, areas typically involved in language processing and social cognition may facilitate the development of vocal ER skills in adolescence.
Collapse
|
40
|
Hesling I, Labache L, Joliot M, Tzourio-Mazoyer N. Large-scale plurimodal networks common to listening to, producing and reading word lists: an fMRI study combining task-induced activation and intrinsic connectivity in 144 right-handers. Brain Struct Funct 2019; 224:3075-3094. [PMID: 31494717 PMCID: PMC6875148 DOI: 10.1007/s00429-019-01951-4] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2019] [Accepted: 08/29/2019] [Indexed: 02/07/2023]
Abstract
We aimed to identify plurimodal large-scale networks for producing, listening to, and reading word lists, based on combined analyses of task-induced activation and resting-state intrinsic connectivity in 144 healthy right-handers. In the first step, we identified the regions in each hemisphere showing joint activation and joint asymmetry during the three tasks. Fourteen homotopic regions of interest (hROIs) in the left hemisphere, located in the left Rolandic sulcus, precentral gyrus, cingulate gyrus, cuneus, and inferior supramarginal gyrus (SMG), met this criterion, as did 7 hROIs in the right hemisphere, located in the preSMA, medial superior frontal gyrus, precuneus, and superior temporal sulcus (STS). In a second step, we calculated the BOLD temporal correlations across these 21 hROIs at rest and conducted a hierarchical clustering analysis to unravel their network organization. Two networks were identified, including the WORD-LIST_CORE network, which aggregated 14 motor, premotor, and phonemic areas in the left hemisphere plus the right STS, corresponding to the posterior human voice area (pHVA). The present results reveal that word-list processing is based on left articulatory and storage areas supporting the action-perception cycle common not only to production and listening but also to reading. The inclusion of the right pHVA, acting as a prosodic integrative area, highlights the importance of prosody in all three modalities and reveals an intertwining across hemispheres between prosodic (pHVA) and phonemic (left SMG) processing. These results are consistent with the motor theory of speech, which postulates that articulatory gestures are the central motor units on which word perception, production, and reading develop and act together.
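The second analysis step, clustering resting-state correlations across the 21 hROIs, can be sketched generically; the time series below are simulated, and the linkage choices are assumptions, not the authors' exact procedure.

```python
# Sketch: correlate resting-state BOLD series across hROIs, convert the
# correlation structure to distances, and cluster hierarchically.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
ts = rng.normal(size=(200, 21))                 # time points x hROIs (simulated)
r = np.corrcoef(ts.T)                           # 21 x 21 correlation matrix
dist = 1.0 - r                                  # correlation -> distance
dist = (dist + dist.T) / 2                      # enforce exact symmetry
np.fill_diagonal(dist, 0.0)
networks = fcluster(linkage(squareform(dist, checks=False), method="average"),
                    t=2, criterion="maxclust")  # two networks, as reported
print(networks)
```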
Collapse
Affiliation(s)
- Isabelle Hesling
- University of Bordeaux, IMN, UMR 5293, 33000, Bordeaux, France
- CNRS, IMN, UMR 5293, 33000, Bordeaux, France
- CEA, GIN, IMN, UMR 5293, 33000, Bordeaux, France
- IMN Institut des Maladies Neurodégénératives UMR 5293, Team 5: GIN Groupe d'imagerie Neurofonctionnelle, CEA-CNRS, Université de Bordeaux, Centre Broca Nouvelle-Aquitaine-3ème étage, 146 rue Léo-Saignat-CS 61292-Case 28, 33076, Bordeaux CEDEX, France
| | - L Labache
- University of Bordeaux, IMN, UMR 5293, 33000, Bordeaux, France
- CNRS, IMN, UMR 5293, 33000, Bordeaux, France
- CEA, GIN, IMN, UMR 5293, 33000, Bordeaux, France
- University of Bordeaux, IMB, UMR 5251, 33405, Talence, France
- INRIA Bordeaux Sud-Ouest, CQFD, INRIA, UMR 5251, 33405, Talence, France
| | - M Joliot
- University of Bordeaux, IMN, UMR 5293, 33000, Bordeaux, France
- CNRS, IMN, UMR 5293, 33000, Bordeaux, France
- CEA, GIN, IMN, UMR 5293, 33000, Bordeaux, France
| | - N Tzourio-Mazoyer
- University of Bordeaux, IMN, UMR 5293, 33000, Bordeaux, France
- CNRS, IMN, UMR 5293, 33000, Bordeaux, France
- CEA, GIN, IMN, UMR 5293, 33000, Bordeaux, France
| |
Collapse
|
41
|
Kosman KA, Levy-Carrick NC. Positioning Psychiatry as a Leader in Trauma-Informed Care (TIC): the Need for Psychiatry Resident Education. ACADEMIC PSYCHIATRY : THE JOURNAL OF THE AMERICAN ASSOCIATION OF DIRECTORS OF PSYCHIATRIC RESIDENCY TRAINING AND THE ASSOCIATION FOR ACADEMIC PSYCHIATRY 2019; 43:429-434. [PMID: 30693465 DOI: 10.1007/s40596-019-01020-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/30/2018] [Accepted: 01/03/2019] [Indexed: 06/09/2023]
Affiliation(s)
- Katherine A Kosman
- Brigham and Women's Hospital, Boston, MA, USA.
- Harvard Medical School, Boston, MA, USA.
| | - Nomi C Levy-Carrick
- Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
| |
Collapse
|
42
|
Abstract
We propose a novel feedforward neural network (FFNN)-based speech emotion recognition system built on three layers: A base layer where a set of speech features are evaluated and classified; a middle layer where a speech matrix is built based on the classification scores computed in the base layer; a top layer where an FFNN- and a rule-based classifier are used to analyze the speech matrix and output the predicted emotion. The system offers 80.75% accuracy for predicting the six basic emotions and surpasses other state-of-the-art methods when tested on emotion-stimulated utterances. The method is robust and the fastest in the literature, computing a stable prediction in less than 78 s and proving attractive for replacing questionnaire-based methods and for real-time use. A set of correlations between several speech features (intensity contour, speech rate, pause rate, and short-time energy) and the evaluated emotions is determined, which enhances previous similar studies that have not analyzed these speech features. Using these correlations to improve the system leads to a 6% increase in accuracy. The proposed system can be used to improve human–computer interfaces, in computer-mediated education systems, for accident prevention, and for predicting mental disorders and physical diseases.
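As a minimal illustration of the FFNN classification stage (not the paper's three-layer system), a small multilayer perceptron over utterance-level prosodic features might look as follows; the features and labels are simulated, and the network size is an arbitrary assumption.

```python
# Toy FFNN for six-way speech emotion classification over prosodic features
# (e.g., intensity contour summary, speech rate, pause rate, short-time energy).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4))           # toy features per utterance
y = rng.integers(0, 6, size=600)        # six basic emotions

mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
mlp.fit(X[:480], y[:480])               # train on the first 480 utterances
print(f"holdout accuracy: {mlp.score(X[480:], y[480:]):.2f}")
```

In a real system, the correlations reported above (e.g., between speech rate or pause rate and specific emotions) would motivate which features enter this stage.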
Collapse
|
43
|
Grisendi T, Reynaud O, Clarke S, Da Costa S. Processing pathways for emotional vocalizations. Brain Struct Funct 2019; 224:2487-2504. [DOI: 10.1007/s00429-019-01912-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2019] [Accepted: 06/12/2019] [Indexed: 01/06/2023]
|
44
|
De Stefani E, Nicolini Y, Belluardo M, Ferrari PF. Congenital facial palsy and emotion processing: The case of Moebius syndrome. GENES BRAIN AND BEHAVIOR 2019; 18:e12548. [PMID: 30604920 DOI: 10.1111/gbb.12548] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/17/2018] [Revised: 11/16/2018] [Accepted: 12/15/2018] [Indexed: 12/13/2022]
Abstract
According to the Darwinian perspective, facial expressions of emotion evolved to quickly communicate emotional states and serve adaptive functions that promote social interactions. Embodied cognition theories suggest that we understand others' emotions by reproducing the perceived expression in our own facial musculature (facial mimicry), and that the mere observation of a facial expression can evoke the corresponding emotion in the perceiver. Consequently, the inability to form facial expressions would affect the experience of emotional understanding. In this review, we aimed to provide an account of the link between the lack of emotion production and the mechanisms of emotion processing. We address this issue by considering Moebius syndrome, a rare neurological disorder that primarily affects the muscles controlling facial expressions. Individuals with Moebius syndrome are born with facial paralysis and an inability to form facial expressions. This makes them the ideal population for studying whether facial mimicry is necessary for emotion understanding. Here, we discuss the ambiguous/mixed behavioral results on emotion recognition deficits in Moebius syndrome, which suggest the need to investigate further aspects of emotional processing, such as the physiological responses associated with emotional experience during developmental age.
Collapse
Affiliation(s)
- Elisa De Stefani
- Department of Medicine and Surgery, University of Parma, Parma, Italy
| | - Ylenia Nicolini
- Department of Medicine and Surgery, University of Parma, Parma, Italy
| | - Mauro Belluardo
- Department of Medicine and Surgery, University of Parma, Parma, Italy
| | - Pier Francesco Ferrari
- Department of Medicine and Surgery, University of Parma, Parma, Italy; Institut des Sciences Cognitives Marc Jeannerod, CNRS, Université de Lyon, Lyon, France
| |
Collapse
|
45
|
Koch K, Stegmaier S, Schwarz L, Erb M, Thomas M, Scheffler K, Wildgruber D, Nieratschker V, Ethofer T. CACNA1C risk variant affects microstructural connectivity of the amygdala. Neuroimage Clin 2019; 22:101774. [PMID: 30909026 PMCID: PMC6434179 DOI: 10.1016/j.nicl.2019.101774] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2018] [Revised: 01/29/2019] [Accepted: 03/10/2019] [Indexed: 11/28/2022]
Abstract
Deficits in the perception of emotional prosody have been described in patients with affective disorders at both the behavioral and neural level. In the current study, we used an imaging genetics approach to examine the impact of CACNA1C, one of the most promising genetic risk factors for psychiatric disorders, on prosody processing at the behavioral, functional, and microstructural level. Using functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI), we examined key areas involved in prosody processing, i.e., the amygdala and the voice areas, in a healthy population. We found stronger activation to emotional than to neutral prosody in the voice areas and the amygdala, but CACNA1C rs1006737 genotype had no influence on fMRI activity. However, significant microstructural differences (i.e., mean diffusivity) between CACNA1C rs1006737 risk allele carriers and non-carriers were found in the amygdala, but not in the voice areas. These modifications in brain architecture associated with CACNA1C might reflect a neurobiological marker predisposing to affective disorders and concomitant alterations in emotion perception.
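As a rough illustration of the kind of microstructural comparison reported here, the sketch below runs an independent-samples t-test on simulated per-subject mean diffusivity values for hypothetical carrier and non-carrier groups; the values, group sizes, and choice of test are assumptions, not the study's pipeline.

```python
# Illustrative group comparison of amygdala-ROI mean diffusivity (MD),
# risk allele carriers vs. non-carriers. Simulated values only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# hypothetical per-subject MD (in 10^-3 mm^2/s) averaged over the ROI
md_carriers = rng.normal(0.82, 0.04, size=30)
md_noncarriers = rng.normal(0.79, 0.04, size=30)

t, p = stats.ttest_ind(md_carriers, md_noncarriers)
print(f"t = {t:.2f}, p = {p:.4f}")
```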
Collapse
Affiliation(s)
- Katharina Koch
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany.
| | - Sophia Stegmaier
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Lena Schwarz
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Michael Erb
- Department of Biomedical Magnetic Resonance, University of Tuebingen, Tuebingen, Germany
| | - Mara Thomas
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Klaus Scheffler
- Department of Biomedical Magnetic Resonance, University of Tuebingen, Tuebingen, Germany; Max-Planck-Institute for Biological Cybernetics, University of Tuebingen, Tuebingen, Germany
| | - Dirk Wildgruber
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Vanessa Nieratschker
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany; Werner Reichardt Center for Integrative Neuroscience, University of Tuebingen, Tuebingen, Germany
| | - Thomas Ethofer
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany; Department of Biomedical Magnetic Resonance, University of Tuebingen, Tuebingen, Germany
| |
Collapse
|
46
|
Whitehead JC, Armony JL. Multivariate fMRI pattern analysis of fear perception across modalities. Eur J Neurosci 2019; 49:1552-1563. [DOI: 10.1111/ejn.14322] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2018] [Revised: 11/23/2018] [Accepted: 12/17/2018] [Indexed: 01/04/2023]
Affiliation(s)
- Jocelyne C. Whitehead
- Douglas Mental Health University Institute, Verdun, Quebec, Canada
- BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada
- Integrated Program in Neuroscience, McGill University, Montreal, Quebec, Canada
| | - Jorge L. Armony
- Douglas Mental Health University Institute, Verdun, Quebec, Canada
- BRAMS Laboratory, Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada
- Department of Psychiatry, McGill University, Montreal, Quebec, Canada
| |
Collapse
|
47
|
Charpentier J, Kovarski K, Houy-Durand E, Malvy J, Saby A, Bonnet-Brilhault F, Latinus M, Gomot M. Emotional prosodic change detection in autism Spectrum disorder: an electrophysiological investigation in children and adults. J Neurodev Disord 2018; 10:28. [PMID: 30227832 PMCID: PMC6145332 DOI: 10.1186/s11689-018-9246-9] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/12/2017] [Accepted: 09/07/2018] [Indexed: 12/12/2022] Open
Abstract
Background Autism spectrum disorder (ASD) is characterized by atypical behaviors in social environments and in reaction to changing events. While this dyad of symptoms is at the core of the pathology, along with atypical sensory behaviors, most studies have investigated only one dimension. Work focusing on the sameness dimension has shown that intolerance to change is related to atypical pre-attentional detection of irregularity. In the present study, we addressed the same process in response to emotional change in order to evaluate the interplay between alterations of change detection and socio-emotional processing in children and adults with autism. Methods Brain responses to neutral and emotional prosodic deviancies (mismatch negativity (MMN) and P3a, reflecting change detection and orientation of attention toward change, respectively) were recorded in children and adults with autism and in controls. Comparing the neutral and emotional conditions allowed us to distinguish between general deviancy and emotional deviancy effects. Moreover, brain responses to the same neutral and emotional stimuli were recorded when they were not deviants, to evaluate the sensory processing of these vocal stimuli. Results In controls, change detection was modulated by prosody: in children, this was characterized by a lateralization of the emotional MMN to the right hemisphere, and in adults, by an earlier MMN for emotional than for neutral deviancy. In ASD, overall atypical change detection was observed, with an earlier MMN and a larger P3a compared with controls, suggesting an unusual pre-attentional orientation toward any change in the auditory environment. Moreover, in children with autism, deviancy detection elicited a reduced MMN amplitude. In addition, in children with autism, unlike in adults with autism, the MMN was not modulated by prosody, and the sensory processing of both neutral and emotional vocal stimuli appeared atypical. Conclusions Overall, change detection remains altered in people with autism. However, the differences between children and adults with ASD point to a trend toward normalization of vocal processing and of the automatic detection of emotional deviancy with age.
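To make the MMN/P3a logic concrete, here is a minimal sketch of the conventional difference-wave computation (deviant minus standard) on simulated ERPs; the component shapes and peak windows are common assumptions, not the paper's exact parameters.

```python
# Sketch of standard MMN/P3a quantification: average ERPs to standards
# and deviants, subtract to get the difference wave, then read off the
# MMN (negative, ~100-250 ms) and P3a (positive, ~250-350 ms) peaks.
# Data are simulated; illustrative only.
import numpy as np

sfreq = 500.0                              # sampling rate in Hz
times = np.arange(-0.1, 0.5, 1 / sfreq)    # epochs from -100 to 500 ms
rng = np.random.default_rng(2)

def erp(peak_amp, peak_lat, width=0.04, n_trials=100):
    # Gaussian-shaped component plus trial noise, averaged over trials
    comp = peak_amp * np.exp(-((times - peak_lat) ** 2) / (2 * width ** 2))
    trials = comp + rng.normal(0, 2.0, size=(n_trials, times.size))
    return trials.mean(axis=0)

standard = erp(peak_amp=1.0, peak_lat=0.10)            # early sensory response
deviant = standard + erp(-3.0, 0.15) + erp(2.5, 0.30)  # add MMN and P3a

diff = deviant - standard                  # the difference wave

def peak(window, sign):
    # most negative (sign=-1) or most positive (sign=+1) point in window
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.where(mask)[0][np.argmax(sign * diff[mask])]
    return times[idx] * 1000, diff[idx]

mmn_lat, mmn_amp = peak((0.10, 0.25), sign=-1)
p3a_lat, p3a_amp = peak((0.25, 0.35), sign=+1)
print(f"MMN: {mmn_amp:.1f} uV at {mmn_lat:.0f} ms; "
      f"P3a: {p3a_amp:.1f} uV at {p3a_lat:.0f} ms")
```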
Collapse
Affiliation(s)
| | - K Kovarski
- UMR1253, INSERM, Université de Tours, Tours, France
| | - E Houy-Durand
- UMR1253, INSERM, Université de Tours, Tours, France; Centre Universitaire de Pédopsychiatrie, CHRU de Tours, Tours, France
| | - J Malvy
- UMR1253, INSERM, Université de Tours, Tours, France; Centre Universitaire de Pédopsychiatrie, CHRU de Tours, Tours, France
| | - A Saby
- Centre Universitaire de Pédopsychiatrie, CHRU de Tours, Tours, France
| | - F Bonnet-Brilhault
- UMR1253, INSERM, Université de Tours, Tours, France; Centre Universitaire de Pédopsychiatrie, CHRU de Tours, Tours, France
| | - M Latinus
- UMR1253, INSERM, Université de Tours, Tours, France
| | - M Gomot
- UMR1253, INSERM, Université de Tours, Tours, France.
| |
Collapse
|
48
|
Morningstar M, Nelson EE, Dirks MA. Maturation of vocal emotion recognition: Insights from the developmental and neuroimaging literature. Neurosci Biobehav Rev 2018; 90:221-230. [DOI: 10.1016/j.neubiorev.2018.04.019] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2017] [Revised: 03/16/2018] [Accepted: 04/24/2018] [Indexed: 01/05/2023]
|
49
|
Brain mechanisms involved in angry prosody change detection in school-age children and adults, revealed by electrophysiology. Cogn Affect Behav Neurosci 2018; 18:748-763. [DOI: 10.3758/s13415-018-0602-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
|
50
|
Koch K, Stegmaier S, Schwarz L, Erb M, Reinl M, Scheffler K, Wildgruber D, Ethofer T. Neural correlates of processing emotional prosody in unipolar depression. Hum Brain Mapp 2018; 39:3419-3427. [PMID: 29682814 DOI: 10.1002/hbm.24185] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2017] [Revised: 03/15/2018] [Accepted: 04/09/2018] [Indexed: 12/11/2022] Open
Abstract
Major depressive disorder (MDD) is characterized by biased emotion perception. In the auditory domain, MDD patients have been shown to exhibit attenuated processing of positive emotions expressed by speech melody (prosody). So far, no neuroimaging studies examining the neural basis of this altered processing of emotional prosody in MDD have been available. In this study, we addressed the issue by examining the emotion bias in MDD during the evaluation of happy, neutral, and angry prosodic stimuli on a five-point Likert scale during functional magnetic resonance imaging (fMRI). As expected, MDD patients rated happy prosody as less intense than healthy controls (HC) did. At the neural level, stronger activation in the middle superior temporal gyrus (STG) and the amygdala was found in all participants when processing emotional as compared with neutral prosody. MDD patients exhibited increased activation of the amygdala during prosody processing irrespective of valence, whereas no significant group differences were found for the STG, indicating that the altered processing of prosodic emotions in MDD occurs within the amygdala rather than in auditory areas. Concurring with the valence-specific behavioral effect of attenuated evaluation of positive prosodic stimuli, activation within the left amygdala of MDD patients correlated with ratings of happy, but not neutral or angry, prosody. Our study provides first insights into the neural basis of the reduced experience of positive information and the abnormally increased amygdala activity during prosody processing in MDD.
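The brain-behavior analysis described here can be illustrated with a short sketch correlating simulated per-patient amygdala contrast estimates with simulated happiness-intensity ratings; the group size, effect size, and use of a Pearson correlation are illustrative assumptions, not the study's data or exact method.

```python
# Illustrative brain-behavior correlation: per-patient amygdala
# activation (hypothetical contrast estimates for happy prosody) vs.
# mean happiness-intensity rating on a five-point Likert scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_patients = 25
amygdala_beta = rng.normal(0.5, 0.3, size=n_patients)
# ratings loosely coupled to activation, clipped to the 1-5 scale
ratings = np.clip(2.5 + 1.2 * amygdala_beta
                  + rng.normal(0, 0.5, n_patients), 1, 5)

r, p = stats.pearsonr(amygdala_beta, ratings)
print(f"r = {r:.2f}, p = {p:.4f}")
```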
Collapse
Affiliation(s)
- Katharina Koch
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Sophia Stegmaier
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Lena Schwarz
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Michael Erb
- Department of Biomedical Magnetic Resonance, University of Tuebingen, Tuebingen, Germany
| | - Maren Reinl
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Klaus Scheffler
- Department of Biomedical Magnetic Resonance, University of Tuebingen, Tuebingen, Germany; Max-Planck-Institute for Biological Cybernetics, University of Tuebingen, Tuebingen, Germany
| | - Dirk Wildgruber
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany
| | - Thomas Ethofer
- Department of General Psychiatry, University of Tuebingen, Tuebingen, Germany; Department of Biomedical Magnetic Resonance, University of Tuebingen, Tuebingen, Germany
| |
Collapse
|