1. Lin Y, Ye X, Zhang H, Xu F, Zhang J, Ding H, Zhang Y. Category-Sensitive Age-Related Shifts Between Prosodic and Semantic Dominance in Emotion Perception Linked to Cognitive Capacities. Journal of Speech, Language, and Hearing Research 2024; 67:4829-4849. [PMID: 39496066] [DOI: 10.1044/2024_jslhr-23-00817]
Abstract
PURPOSE Prior research extensively documented challenges in recognizing verbal and nonverbal emotion among older individuals when compared with younger counterparts. However, the nature of these age-related changes remains unclear. The present study investigated how older and younger adults comprehend four basic emotions (i.e., anger, happiness, neutrality, and sadness) conveyed through verbal (semantic) and nonverbal (facial and prosodic) channels. METHOD A total of 73 older adults (43 women; mean age = 70.18 years) and 74 younger adults (37 women; mean age = 22.01 years) took part in a fixed-choice test for recognizing emotions presented visually via facial expressions or auditorily through prosody or semantics. RESULTS The results confirmed age-related decline in recognizing emotions across all channels except for identifying happy facial expressions. Furthermore, the two age groups demonstrated both commonalities and disparities in their inclinations toward specific channels. While both groups displayed a shared dominance of visual facial cues over auditory emotional signals, older adults indicated a preference for semantics, whereas younger adults displayed a preference for prosody in auditory emotion perception. Notably, the dominance effects observed in older adults for visual and semantic cues were less pronounced for sadness and anger compared to other emotions. These challenges in emotion recognition and the shifts in channel preferences among older adults were correlated with their general cognitive capabilities. CONCLUSION Together, the findings underscore that age-related obstacles in perceiving emotions and alterations in channel dominance, which vary by emotional category, are significantly intertwined with overall cognitive functioning. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.27307251.
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
- Xiaoqing Ye
- Shanghai Jiao Tong University School of Medicine, China
- Huaiyi Zhang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
- Fei Xu
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
- Jingyu Zhang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- National Research Centre for Language and Well-Being, Shanghai, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis
2. Ben-David BM, Chebat DR, Icht M. "Love looks not with the eyes": supranormal processing of emotional speech in individuals with late-blindness versus preserved processing in individuals with congenital-blindness. Cogn Emot 2024; 38:1354-1367. [PMID: 38785380] [DOI: 10.1080/02699931.2024.2357656]
Abstract
Processing of emotional speech in the absence of visual information relies on two auditory channels: semantics and prosody. No study to date has investigated how blindness impacts this process. Two theories, Perceptual Deficit and Sensory Compensation, yield different expectations about the role of visual experience (or its lack thereof) in processing emotional speech. To test the effect of vision and early visual experience on processing of emotional speech, we compared individuals with congenital blindness (CB, n = 17), individuals with late blindness (LB, n = 15), and sighted controls (SC, n = 21) on the identification of, and selective attention to, semantic and prosodic spoken-emotions. Results showed that individuals with blindness performed at least as well as SC, supporting Sensory Compensation and the role of cortical reorganisation. Individuals with LB outperformed individuals with CB, in accordance with Perceptual Deficit, supporting the role of early visual experience. The LB advantage was moderated by executive functions (working memory). Namely, the advantage was erased for individuals with CB who showed higher levels of executive functions. Results suggest that vision is not necessary for processing of emotional speech, but early visual experience could improve it. The findings support a combination of the two aforementioned theories and reject a dichotomous view of deficiencies/enhancements of blindness.
Affiliation(s)
- Boaz M Ben-David
- Communication, Aging, and Neuropsychology Lab (CANlab), Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, Canada
- KITE, Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, Canada
- Daniel-Robert Chebat
- Visual and Cognitive Neuroscience Laboratory (VCN Lab), The Department of Psychology, Ariel University, Ariel, Israel
- Navigation and Accessibility Research Center (NARCA), Ariel University, Ariel, Israel
- Michal Icht
- Department of Communication Disorders, Ariel University, Ariel, Israel
3. Taitelbaum-Swead R, Ben-David BM. The Role of Early Intact Auditory Experience on the Perception of Spoken Emotions, Comparing Prelingual to Postlingual Cochlear Implant Users. Ear Hear 2024; 45:1585-1599. [PMID: 39004788] [DOI: 10.1097/aud.0000000000001550]
Abstract
OBJECTIVES Cochlear implants (CI) are remarkably effective, but have limitations regarding the transformation of the spectro-temporal fine structures of speech. This may impair processing of spoken emotions, which involves the identification and integration of semantic and prosodic cues. Our previous study found spoken-emotions-processing differences between CI users with postlingual deafness (postlingual CI) and normal-hearing (NH) matched controls (age range, 19 to 65 years). Postlingual CI users over-relied on semantic information in incongruent trials (prosody and semantics present different emotions), but rated congruent trials (same emotion) similarly to controls. Postlingual CI users' intact early auditory experience may explain this pattern of results. The present study examined whether CI users without intact early auditory experience (prelingual CI) would generally perform worse on spoken-emotion processing than NH and postlingual CI users, and whether CI use would affect prosodic processing in both CI groups. First, we compared prelingual CI users with their NH controls. Second, we compared the results of the present study to our previous study (Taitelbaum-Swead et al., 2022; postlingual CI). DESIGN Fifteen prelingual CI users and 15 NH controls (age range, 18 to 31 years) listened to spoken sentences composed of different combinations (congruent and incongruent) of three discrete emotions (anger, happiness, sadness) and neutrality (performance baseline), presented in prosodic and semantic channels (Test for Rating of Emotions in Speech paradigm). Listeners were asked to rate (six-point scale) the extent to which each of the predefined emotions was conveyed by the sentence as a whole (integration of prosody and semantics), or to focus only on one channel (rating the target emotion [RTE]) and ignore the other (selective attention). In addition, all participants performed standard tests of speech perception. Performance on the Test for Rating of Emotions in Speech was compared with the previous study (postlingual CI). RESULTS When asked to focus on one channel, semantics or prosody, both CI groups showed a decrease in prosodic RTE (compared with controls), but only the prelingual CI group showed a decrease in semantic RTE. When the task called for channel integration, both groups of CI users used semantic emotional information to a greater extent than their NH controls. Both groups of CI users rated sentences that did not present the target emotion higher than their NH controls, indicating some degree of confusion. However, only the prelingual CI group rated congruent sentences lower than their NH controls, suggesting reduced accumulation of information across channels. For prelingual CI users, individual differences in identification of monosyllabic words were significantly related to semantic identification and semantic-prosodic integration. CONCLUSIONS Taken together with our previous study, we found that the degradation of acoustic information by the CI impairs the processing of prosodic emotions in both CI user groups. This distortion appears to lead CI users to over-rely on semantic information when asked to integrate across channels. Early intact auditory exposure among CI users was found to be necessary for the effective identification of semantic emotions, as well as for the accumulation of emotional information across the two channels. Results suggest that interventions for spoken-emotion processing should not ignore the onset of hearing loss.
Affiliation(s)
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the name of Prof. Mordechai Himelfarb, Ariel University, Israel
- Meuhedet Health Services, Tel Aviv, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- KITE Research Institute, Toronto Rehabilitation Institute-University Health Network, Toronto, Ontario, Canada
4. Tamati TN, Jebens A, Başkent D. Lexical effects on talker discrimination in adult cochlear implant users. The Journal of the Acoustical Society of America 2024; 155:1631-1640. [PMID: 38426835] [PMCID: PMC10908561] [DOI: 10.1121/10.0025011]
Abstract
The lexical and phonological content of an utterance impacts the processing of talker-specific details in normal-hearing (NH) listeners. Adult cochlear implant (CI) users demonstrate difficulties in talker discrimination, particularly for same-gender talker pairs, which may alter their reliance on lexical information in talker discrimination. The current study examined the effect of lexical content on talker discrimination in 24 adult CI users. In a remote AX talker discrimination task, word pairs, produced either by the same talker (ST) or by different talkers of the same gender (DT-SG) or mixed genders (DT-MG), were either lexically easy (high frequency, low neighborhood density) or lexically hard (low frequency, high neighborhood density). The task was completed in quiet and in multi-talker babble (MTB). Results showed an effect of lexical difficulty on talker discrimination for same-gender talker pairs in both quiet and MTB. CI users showed greater sensitivity in quiet, as well as less response bias in both quiet and MTB, for lexically easy words compared to lexically hard words. These results suggest that CI users make use of lexical content in same-gender talker discrimination, providing evidence for the contribution of linguistic information to the processing of degraded talker information by adult CI users.
Affiliation(s)
- Terrin N Tamati
- Department of Otolaryngology, Vanderbilt University Medical Center, 1215 21st Ave S, Nashville, Tennessee 37232, USA
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Almut Jebens
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, The Netherlands
5. Icht M, Zukerman G, Ben-Itzchak E, Ben-David BM. Response to McKenzie et al. 2021: Keep It Simple; Young Adults With Autism Spectrum Disorder Without Intellectual Disability Can Process Basic Emotions. J Autism Dev Disord 2023; 53:1269-1272. [PMID: 35507295] [PMCID: PMC9066386] [DOI: 10.1007/s10803-022-05574-3]
Abstract
We recently read the interesting and informative paper entitled "Empathic accuracy and cognitive and affective empathy in young adults with and without autism spectrum disorder" (McKenzie et al. in Journal of Autism and Developmental Disorders 52: 1-15, 2021). This paper expands recent findings from our lab (Ben-David in Journal of Autism and Developmental Disorders 50: 741-756, 2020a; International Journal of Audiology 60: 319-321, 2020b) and a recent theoretical framework (Icht et al. in Autism Research 14: 1948-1964, 2021) that may suggest a new purview for McKenzie et al.'s results. Namely, these papers suggest that young adults with autism spectrum disorder without intellectual disability can successfully recruit their cognitive abilities to distinguish between different simple spoken emotions, but may still face difficulties processing complex, subtle emotions. McKenzie et al. (Journal of Autism and Developmental Disorders 52: 1-15, 2021) extended these findings to the processing of emotions in video clips, with both visual and auditory information.
Affiliation(s)
- Michal Icht
- Department of Communication Disorders, Ariel University, Ariel, Israel
- Gil Zukerman
- Department of Communication Disorders, Ariel University, Ariel, Israel
- Esther Ben-Itzchak
- Department of Communication Disorders, Ariel University, Ariel, Israel
- Department of Communication Disorders, The Bruckner Center for Research in Autism, Ariel University, Ariel, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC, Herzliya), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, ON, Canada
6. Shakuf V, Ben-David B, Wegner TGG, Wesseling PBC, Mentzel M, Defren S, Allen SEM, Lachmann T. Processing emotional prosody in a foreign language: the case of German and Hebrew. Journal of Cultural Cognitive Science 2022; 6:251-268. [PMID: 35996660] [PMCID: PMC9386669] [DOI: 10.1007/s41809-022-00107-x]
Abstract
This study investigated the universality of emotional prosody in the perception of discrete emotions when semantics is not available. Two experiments examined the perception of emotional prosody in Hebrew and German by listeners who speak one of the languages but not the other. Having a parallel tool in both languages allowed for controlled comparisons. In Experiment 1, 39 native German speakers with no knowledge of Hebrew and 80 native Hebrew-speaking Israelis rated Hebrew sentences spoken with four different emotional prosodies (anger, fear, happiness, sadness) or neutral prosody. The Hebrew version of the Test for Rating of Emotions in Speech (T-RES) was used for this purpose. Ratings indicated participants’ agreement on how much the sentence conveyed each of four discrete emotions (anger, fear, happiness, and sadness). In Experiment 2, 30 native speakers of German and 24 Israeli native speakers of Hebrew with no knowledge of German rated sentences of the German version of the T-RES. Based only on the prosody, German-speaking participants were able to accurately identify the emotions in the Hebrew sentences, and Hebrew-speaking participants were able to identify the emotions in the German sentences. In both experiments, ratings were similar between the groups. These findings show that individuals are able to identify emotions in a foreign language even if they do not have access to semantics. This ability goes beyond identification of the target emotion; similarities between languages exist even for “wrong” perceptions. This adds to accumulating evidence in the literature on the universality of emotional prosody.
7. Dor YI, Algom D, Shakuf V, Ben-David BM. Age-Related Changes in the Perception of Emotions in Speech: Assessing Thresholds of Prosody and Semantics Recognition in Noise for Young and Older Adults. Front Neurosci 2022; 16:846117. [PMID: 35546888] [PMCID: PMC9082150] [DOI: 10.3389/fnins.2022.846117]
Abstract
Older adults process emotions in speech differently than do young adults. However, it is unclear whether these age-related changes impact all speech channels to the same extent, and whether they originate from a sensory or a cognitive source. The current study adopted a psychophysical approach to directly compare young and older adults’ sensory thresholds for emotion recognition in two channels of spoken-emotions: prosody (tone) and semantics (words). A total of 29 young adults and 26 older adults listened to 50 spoken sentences presenting different combinations of emotions across prosody and semantics. They were asked to recognize the prosodic or semantic emotion, in separate tasks. Sentences were presented against a background of speech-spectrum noise at SNRs ranging from −15 dB (difficult) to +5 dB (easy). Individual recognition thresholds were calculated (by fitting psychometric functions) separately for prosodic and semantic recognition. Results indicated that: (1) recognition thresholds were better for young adults than for older adults, suggesting an age-related general decrease across channels; (2) recognition thresholds were better for prosody than for semantics, suggesting a prosodic advantage; (3) importantly, the prosodic advantage in thresholds did not differ between age groups (thus, a sensory source for age-related differences in spoken-emotions processing was not supported); and (4) larger failures of selective attention were found for older adults than for young adults, indicating that older adults experienced larger difficulties in inhibiting irrelevant information. Taken together, results do not support a sole sensory source, but rather an interplay of cognitive and sensory sources for age-related differences in spoken-emotions processing.
Affiliation(s)
- Yehuda I Dor
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Communication, Aging and Neuropsychology Lab (CANlab), Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Daniel Algom
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Vered Shakuf
- Department of Communication Disorders, Achva Academic College, Arugot, Israel
- Boaz M Ben-David
- Communication, Aging and Neuropsychology Lab (CANlab), Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, ON, Canada
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
8. Nitsan G, Banai K, Ben-David BM. One Size Does Not Fit All: Examining the Effects of Working Memory Capacity on Spoken Word Recognition in Older Adults Using Eye Tracking. Front Psychol 2022; 13:841466. [PMID: 35478743] [PMCID: PMC9037998] [DOI: 10.3389/fpsyg.2022.841466]
Abstract
Difficulties understanding speech are among the most prevalent complaints of older adults. Successful speech perception depends on top-down linguistic and cognitive processes that interact with the bottom-up sensory processing of the incoming acoustic information. The relative roles of these processes in age-related difficulties in speech perception, especially when listening conditions are not ideal, are still unclear. In the current study, we asked whether older adults with a larger working memory capacity process speech more efficiently than peers with lower capacity when speech is presented in noise, with another task performed in tandem. Using the Eye-tracking of Word Identification in Noise Under Memory Increased Load (E-WINDMIL), an adapted version of the "visual world" paradigm, 36 older listeners were asked to follow spoken instructions presented in background noise, while retaining digits for later recall under low (single-digit) or high (four-digit) memory load. In critical trials, instructions (e.g., "point at the candle") directed listeners' gaze to pictures of objects whose names shared onset or offset sounds with the name of a competitor that was displayed on the screen at the same time (e.g., candy or sandal). We compared listeners with different memory capacities on the time course for spoken word recognition under the two memory loads by testing eye-fixations on a named object, relative to fixations on an object whose name shared phonology with the named object. Results indicated two trends. (1) For older adults with lower working memory capacity, increased memory load did not affect online speech processing; however, it impaired offline word recognition accuracy. (2) The reverse pattern was observed for older adults with higher working memory capacity: increased task difficulty significantly decreased online speech processing efficiency but had no effect on offline word recognition accuracy. Results suggest that in older adults, adaptation to adverse listening conditions is at least partially supported by cognitive reserve. Therefore, additional cognitive capacity may lead to greater resilience of older listeners to adverse listening conditions. The differential effects documented by eye movements and accuracy highlight the importance of using both online and offline measures of speech processing to explore age-related changes in speech perception.
Affiliation(s)
- Gal Nitsan
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Karen Banai
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Boaz M. Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Networks, Toronto, ON, Canada
9. Carl M, Icht M, Ben-David BM. A Cross-Linguistic Validation of the Test for Rating Emotions in Speech: Acoustic Analyses of Emotional Sentences in English, German, and Hebrew. Journal of Speech, Language, and Hearing Research 2022; 65:991-1000. [PMID: 35171689] [DOI: 10.1044/2021_jslhr-21-00205]
Abstract
PURPOSE The Test for Rating Emotions in Speech (T-RES) was developed to assess the processing of emotions in spoken language. In this tool, listeners rate spoken sentences that convey emotional content (anger, happiness, sadness, and neutral) in both semantics and prosody, in different combinations. To date, English, German, and Hebrew versions have been developed, as well as online versions (iT-RES) to accommodate COVID-19 social restrictions. Since the perception of spoken emotions may be affected by linguistic (and cultural) variables, it is important to compare the acoustic characteristics of the stimuli within and between languages. The goal of the current report was to provide cross-linguistic acoustic validation of the T-RES. METHOD T-RES sentences in the aforementioned languages were acoustically analyzed in terms of mean F0, F0 range, and speech rate to obtain profiles of acoustic parameters for different emotions. RESULTS Significant within-language discriminability of prosodic emotions was found for both mean F0 and speech rate. Similarly, these measures were associated with comparable patterns of prosodic emotions for each of the tested languages and emotional ratings. CONCLUSIONS The results demonstrate the independence of prosody and semantics within the T-RES stimuli. These findings illustrate listeners' ability to clearly distinguish between the different prosodic emotions in each language, providing a cross-linguistic validation of the T-RES and iT-RES.
Affiliation(s)
- Micalle Carl
- Department of Communication Disorders, Ariel University, Israel
- Michal Icht
- Department of Communication Disorders, Ariel University, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University Health Network (UHN), Ontario, Canada
10. Leshem R, Icht M, Ben-David BM. Processing of Spoken Emotions in Schizophrenia: Forensic and Non-forensic Patients Differ in Emotional Identification and Integration but Not in Selective Attention. Front Psychiatry 2022; 13:847455. [PMID: 35386523] [PMCID: PMC8977511] [DOI: 10.3389/fpsyt.2022.847455]
Abstract
Patients with schizophrenia (PwS) typically demonstrate deficits in visual processing of emotions. Less is known about auditory processing of spoken-emotions, as conveyed by the prosodic (tone) and semantic (words) channels. In a previous study, forensic PwS (who committed violent offenses) identified spoken-emotions and integrated the emotional information from both channels similarly to controls. However, their performance indicated larger failures of selective attention, and lower discrimination between spoken-emotions, than controls. Given that forensic schizophrenia represents a special subgroup, the current study compared forensic and non-forensic PwS. Forty-five non-forensic PwS listened to sentences conveying four basic emotions presented in the semantic or prosodic channels, in different combinations. They were asked to rate how much they agreed that the sentences conveyed a predefined emotion, focusing on one channel or on the sentence as a whole. Their performance was compared to that of the 21 forensic PwS from the previous study. The two groups did not differ in selective attention. However, better emotional identification and discrimination, as well as better channel integration, were found for the forensic PwS. Results have several clinical implications: difficulties in spoken-emotions processing might not necessarily relate to schizophrenia; attentional deficits might not be a risk factor for aggression in schizophrenia; and forensic schizophrenia might have unique characteristics related to spoken-emotions processing (motivation, stimulation).
Affiliation(s)
- Rotem Leshem
- Department of Criminology, Bar-Ilan University, Ramat Gan, Israel
- Michal Icht
- Department of Communication Disorders, Ariel University, Ariel, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Networks, Toronto, ON, Canada