1. Ertürk A, Gürses E, Kulak Kayıkcı ME. Sex-related differences in the perception and production of emotional prosody in adults. Psychol Res 2024; 88:449-457. [PMID: 37542581] [DOI: 10.1007/s00426-023-01865-1]
Abstract
This study investigated sex-related patterns in the perception and production of emotional prosody in adult speakers. The study involved 42 native Turkish speakers (27 females and 15 males). Sex-related perception and production of the emotions "anger," "joy," and "sadness," plus a neutral state, were examined. Participants were first asked to identify the actor's emotional state by selecting one of the emotion alternatives provided. They were then instructed to produce the same stimuli with the different emotions. We analyzed the change in voice characteristics across emotions in terms of F0 (Hz), speaking rate (seconds), and intensity (dB) using pairwise emotion comparisons. The findings showed no sex difference in the perception of emotional prosody (p = 0.725). However, sex differences in the production of emotional prosody were documented in the pitch variation of speech. Within-group analyses revealed that women tended to use a higher pitch when expressing joy than when expressing sadness or a neutral state. Both men and women varied their loudness across emotional states in the speech loudness analysis. When expressing sadness, both men and women spoke more slowly than when expressing anger, joy, or a neutral state. Although Turkish speakers' ability to perceive emotional prosody is similar to that reported for other languages, they favor variation in speech loudness when producing emotional prosody.
Affiliation(s)
- Ayşe Ertürk, Department of Audiology, Hacettepe University, 06100, Sıhhiye, Ankara, Turkey
- Emre Gürses, Department of Audiology, Hacettepe University, 06100, Sıhhiye, Ankara, Turkey
2. Proverbio AM, Ornaghi L, Gabaro V. How face blurring affects body language processing of static gestures in women and men. Soc Cogn Affect Neurosci 2019; 13:590-603. [PMID: 29767792] [PMCID: PMC6022678] [DOI: 10.1093/scan/nsy033]
Abstract
The role of facial coding in body language comprehension was investigated with event-related potential recordings in 31 participants viewing 800 photographs of gestures (iconic, deictic, and emblematic) that could be congruent or incongruent with their caption. Facial information was obscured by blurring in half of the stimuli. The task consisted of evaluating picture/caption congruence. Response times to congruent stimuli were quicker in women than in men, and a cost for incongruent vs. congruent stimuli was found only in men. Face obscuration affected neither women's accuracy, as reflected by omission percentages, nor the amplitude of their cognitive potentials, suggesting a better comprehension of face-deprived pantomimes. The N170 response (modulated by congruity and face presence) peaked later in men than in women. Late positivity was much larger for congruent stimuli in the female brain, regardless of face blurring. According to source reconstruction, face presence specifically activated the right superior temporal and fusiform gyri, the cingulate cortex, and the insula; these regions have been reported to be insufficiently activated in face-avoiding individuals with social deficits. Overall, the results corroborate the hypothesis that females might be more resistant to the lack of facial information, or better at understanding body language, in face-deprived social situations.
Affiliation(s)
- Alice Mado Proverbio, Department of Psychology, Neuro-MI Center for Neuroscience, University of Milano-Bicocca, Milano, Italy
- Laura Ornaghi, Department of Psychology, Neuro-MI Center for Neuroscience, University of Milano-Bicocca, Milano, Italy
- Veronica Gabaro, Department of Psychology, Neuro-MI Center for Neuroscience, University of Milano-Bicocca, Milano, Italy
3. Cao J, Wang X, Liu H, Alexandrakis G. Directional changes in information flow between human brain cortical regions after application of anodal transcranial direct current stimulation (tDCS) over Broca's area. Biomed Opt Express 2018; 9:5296-5317. [PMID: 30460129] [PMCID: PMC6238934] [DOI: 10.1364/boe.9.005296]
Abstract
Little work has been done on information flow in functional brain imaging, and none so far in fNIRS. In this work, alterations in the directionality of net information flow induced by short-duration, low-current (2 min 40 s; 0.5 mA) and longer-duration, high-current (8 min; 1 mA) anodal tDCS applied over Broca's area of the dominant language hemisphere were studied by fNIRS. The tDCS-induced patterns of information flow, quantified by a novel directed phase transfer entropy (dPTE) analysis, were distinct for different hemodynamic frequency bands and qualitatively similar between low- and high-current tDCS. In the endothelial band (0.003-0.02 Hz), the stimulated Broca's area became the strongest hub of outgoing information flow, whereas in the neurogenic band (0.02-0.04 Hz) the contralateral homologous area became the strongest source of information outflow. In the myogenic band (0.04-0.15 Hz), only global patterns independent of tDCS stimulation were seen, which were interpreted as Mayer waves. These findings showcase dPTE analysis in fNIRS as a novel, complementary tool for studying cortical activity reorganization after an intervention.
4. Liang B, Du Y. The Functional Neuroanatomy of Lexical Tone Perception: An Activation Likelihood Estimation Meta-Analysis. Front Neurosci 2018; 12:495. [PMID: 30087589] [PMCID: PMC6066585] [DOI: 10.3389/fnins.2018.00495]
Abstract
In tonal languages such as Chinese, lexical tone serves as a phonemic feature in determining word meaning. At the same time, it is close to prosody in terms of suprasegmental pitch variations and larynx-based articulation. The important yet mixed nature of lexical tone has evoked considerable study, but no consensus has been reached on its functional neuroanatomy. This meta-analysis aimed to uncover the neural network of lexical tone perception in comparison with those of phoneme and prosody perception in a unified framework. Independent Activation Likelihood Estimation meta-analyses were conducted for different linguistic elements: lexical tone perceived by native tonal language speakers, lexical tone perceived by non-tonal language speakers, phoneme, word-level prosody, and sentence-level prosody. Results showed that lexical tone and prosody studies demonstrated more extensive activations in the right than the left auditory cortex, whereas the opposite pattern was found for phoneme studies. Only tonal language speakers consistently recruited the left anterior superior temporal gyrus (STG) for processing lexical tone, an area implicated in phoneme processing and word-form recognition. Moreover, an anterior-lateral to posterior-medial gradient of activation as a function of element timescale was revealed in the right STG, in which the activation for lexical tone lay between those for phoneme and prosody. Another topological pattern was shown in the left precentral gyrus (preCG), with the activation for lexical tone overlapping that for prosody but ventral to that for phoneme. These findings provide evidence that the neural network for lexical tone perception is a hybrid of those for phoneme and prosody. That is, resembling prosody, lexical tone perception, regardless of language experience, involved the right auditory cortex, with activation localized between sites engaged by phonemic and prosodic processing, suggesting a hierarchical organization of representations in the right auditory cortex. For tonal language speakers, lexical tone additionally engaged the left STG lexical mapping network, consistent with a phonemic representation. Similarly, only tonal language speakers engaged, when processing lexical tone, the left preCG site implicated in prosody perception, consistent with tonal language speakers having stronger articulatory representations for lexical tone in the laryngeal sensorimotor network. A dynamic dual-stream model of lexical tone perception is proposed and discussed.
Affiliation(s)
- Baishen Liang, CAS Key Laboratory of Behavioral Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yi Du, CAS Key Laboratory of Behavioral Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
5. Speech Prosodies of Different Emotional Categories Activate Different Brain Regions in Adult Cortex: an fNIRS Study. Sci Rep 2018; 8:218. [PMID: 29317758] [PMCID: PMC5760650] [DOI: 10.1038/s41598-017-18683-2]
Abstract
Emotional expressions of others embedded in speech prosody are important for social interactions. This study used functional near-infrared spectroscopy to investigate how speech prosodies of different emotional categories are processed in the cortex. The results identified several cerebral areas critical for emotional prosody processing. We confirmed that the superior temporal cortex, especially the right middle and posterior parts of the superior temporal gyrus (BA 22/42), primarily works to discriminate between emotional and neutral prosodies. Furthermore, the results suggested that categorization of emotions occurs within a high-level brain region, the frontal cortex, since the brain activation patterns were distinct when positive (happy) prosody was contrasted with negative (fearful and angry) prosody in the left middle part of the inferior frontal gyrus (BA 45) and the frontal eye field (BA 8), and when angry prosody was contrasted with neutral prosody in bilateral orbital frontal regions (BA 10/11). These findings verify and extend previous fMRI findings in the adult brain and also provide a "developed version" of brain activation for our subsequent neonatal study.
6. Exploring a method for evaluation of preschool and school children with autism spectrum disorder through checking their understanding of the speaker's emotions with the help of prosody of the voice. Brain Dev 2017; 39:836-845. [PMID: 28774670] [DOI: 10.1016/j.braindev.2017.07.001]
Abstract
PURPOSE We attempted to evaluate the ability of 125 preschool and school children with autism spectrum disorder (ASD) to understand the intentions of speakers from the prosody of the voice, by comparing them with 119 typically developing children (TDC) and 51 developmental-age-matched children with attention deficit hyperactivity disorder (ADHD), and to explore, based on the results, a method for objective evaluation of children with ASD in early and later childhood. METHODS Phrases routinely used by children were employed in the task administered to the children, with the prosody of the voice speaking these phrases changed to express four emotions (acceptance, rejection, bluff, and fooling). RESULTS The percentage of children with ASD who could correctly identify the emotion of "fooling" was significantly lower than that of TDC at each developmental age (corresponding to middle kindergarten class through the sixth year of elementary school). In the children with ADHD, the correct answer rate for "fooling" was significantly lower than in TDC and higher than in the ASD children at developmental ages corresponding to the early years of elementary school, whereas it did not differ significantly from that in TDC and remained higher than in the ASD children at developmental ages corresponding to the later years of elementary school. CONCLUSION These results indicate that children with ASD find it particularly difficult to understand the emotion of fooling when listening to speech in which the meaning of the phrase and the emotion expressed by the voice conflict, although the prosody of the voice may serve as a key to understanding the speaker's emotion. This finding also suggests that the prosody of the voice expressing this emotion (fooling) may be used for objective evaluation of children with ASD.
7. The sound of emotions: towards a unifying neural network perspective of affective sound processing. Neurosci Biobehav Rev 2016; 68:96-110. [PMID: 27189782] [DOI: 10.1016/j.neubiorev.2016.05.002]
Abstract
Affective sounds are an integral part of the natural and social environment that shape and influence behavior across a multitude of species. In human primates, these affective sounds span a repertoire of environmental and human sounds when we vocalize or produce music. In terms of neural processing, cortical and subcortical brain areas constitute a distributed network that supports our listening experience to these affective sounds. Taking an exhaustive cross-domain view, we accordingly suggest a common neural network that facilitates the decoding of the emotional meaning from a wide source of sounds rather than a traditional view that postulates distinct neural systems for specific affective sound types. This new integrative neural network view unifies the decoding of affective valence in sounds, and ascribes differential as well as complementary functional roles to specific nodes within a common neural network. It also highlights the importance of an extended brain network beyond the central limbic and auditory brain systems engaged in the processing of affective sounds.
8. Brazo P, Beaucousin V, Lecardeur L, Razafimandimby A, Dollfus S. Social cognition in schizophrenic patients: the effect of semantic content and emotional prosody in the comprehension of emotional discourse. Front Psychiatry 2014; 5:120. [PMID: 25309458] [PMCID: PMC4159994] [DOI: 10.3389/fpsyt.2014.00120]
Abstract
BACKGROUND Recognition of the emotion expressed during conversation relies on the integration of semantic processing and decoding of emotional prosody, and the integration of both types of element is necessary for social interaction. No study has investigated how these processes are impaired in patients with schizophrenia during the comprehension of emotional speech; since such patients have difficulty in daily interactions, this is of great interest. We tested the hypothesis that patients perform worse than healthy participants on both semantic and emotional prosodic processing during emotional speech comprehension. METHODS The paradigm was based on sentences built with emotional (anger, happiness, or sadness) semantic content uttered with or without congruent emotional prosody. Participants had to decide to which of the emotional categories each sentence corresponded. RESULTS Patients performed significantly worse than their matched controls, even in the presence of emotional prosody, showing that their ability to understand emotional semantic content was impaired. Although prosody improved performance in both groups, it benefited the patients more than the controls. CONCLUSION Patients exhibited impairments in both semantic and emotional prosodic comprehension. However, they took greater advantage of the addition of emotional prosody than healthy participants did. Consequently, focusing on emotional prosody during care may improve social communication.
Affiliation(s)
- Perrine Brazo, Service de Psychiatrie, Centre Hospitalier Universitaire de Caen, Caen, France; UMR6301 Imagerie et Stratégies Thérapeutiques des Pathologies Cérébrales et Tumorales (ISTCT), ISTS Team, Université de Caen Basse-Normandie, Caen, France
- Virginie Beaucousin, Laboratoire de Psychopathologie et Neuropsychologie, Université de Paris 8, Saint Denis, France
- Laurent Lecardeur, Service de Psychiatrie, Centre Hospitalier Universitaire de Caen, Caen, France; UMR6301 Imagerie et Stratégies Thérapeutiques des Pathologies Cérébrales et Tumorales (ISTCT), ISTS Team, Université de Caen Basse-Normandie, Caen, France
- Annick Razafimandimby, UMR6301 Imagerie et Stratégies Thérapeutiques des Pathologies Cérébrales et Tumorales (ISTCT), ISTS Team, Université de Caen Basse-Normandie, Caen, France
- Sonia Dollfus, Service de Psychiatrie, Centre Hospitalier Universitaire de Caen, Caen, France; UMR6301 Imagerie et Stratégies Thérapeutiques des Pathologies Cérébrales et Tumorales (ISTCT), ISTS Team, Université de Caen Basse-Normandie, Caen, France
9. Gautam P, Cherbuin N, Sachdev PS, Wen W, Anstey KJ. Sex differences in cortical thickness in middle-aged and early old-aged adults: Personality and Total Health Through Life study. Neuroradiology 2013; 55:697-707. [DOI: 10.1007/s00234-013-1144-y]