1. Gao L, Yang Z, Zhou Y, Yang J, Luo Q, Feng R, Ou K, Feng R, Lu S. Natural rhythmic speech activates network reorganization with frontal community enhancing communication efficiency in patients with intrinsic brain tumor. Neuroimage 2025;310:121112. PMID: 40043784. DOI: 10.1016/j.neuroimage.2025.121112.
Abstract
Brain tumors provide unique insights into brain plasticity because they grow slowly compared with acute cerebrovascular diseases. Despite relying on sophisticated functional networks, patients with brain tumors exhibit minimal deficits in higher language functions and demonstrate positive post-injury plasticity; however, the underlying neural mechanisms remain unclear. We used high-density electroencephalography to investigate language network plasticity in brain tumor patients without evident language deficits. Natural rhythmic sentences and non-rhythmic sentences with contrasting speech prosodic harmony were employed to examine the impact of task integrativeness on functional network reorganization. Our study reveals that rhythmic speech perception, characterized by higher processing integrativeness, reduced task engagement in the frontal lobe but enhanced hubness and modularity, which supported the generation of new connections and promoted the efficiency of global connectivity. Furthermore, local invasion of the frontal lobe prompted adjacent hubs to generate enriched connections during the early processing phase, facilitating later functional reorganization. Our findings underscore the significant role of global hubs in language network plasticity and reveal the importance of highly integrated tasks for network reorganization in language rehabilitation.
Affiliation(s)
- Leyan Gao
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China; Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, Hong Kong S.A.R., China; School of Humanities, Shenzhen University, Shenzhen, China
- Zhirui Yang
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China; Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, Hong Kong S.A.R., China
- Yuyao Zhou
- Department of Neurosurgery, Neurosurgical Institute of Fudan University, National Center for Neurological Disorders, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China
- Jingwen Yang
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China
- Qinqin Luo
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China
- Ruiyan Feng
- Department of Chinese Language and Literature, Fudan University, Shanghai, China
- Keting Ou
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China
- Rui Feng
- Department of Neurosurgery, Neurosurgical Institute of Fudan University, National Center for Neurological Disorders, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China
- Shuo Lu
- Neurolinguistics Laboratory, College of International Studies, Shenzhen University, Shenzhen, China
2. Sugii N, Matsuda M, Ishikawa E. Prosody disorder and sing-song speech in a patient with recurrent glioblastoma: a case report. Cureus 2024;16:e76385. PMID: 39867057. PMCID: PMC11761159. DOI: 10.7759/cureus.76385.
Abstract
Dysprosody affects rhythm and intonation in speech, impairing the expression of emotion or attitude, and usually presents as a negative symptom with a monotonous tone. We herein report a rare case of recurrent glioblastoma (GBM) with dysprosody featuring sing-song speech. A 68-year-old man, formerly left-handed, with right temporal GBM underwent gross total resection. After chemoradiation therapy, he was discharged without any deficits. Nineteen months later, the patient exhibited recurrence and presented a peculiar way of speaking with excessive melodic intonation. Head magnetic resonance imaging revealed new enhancing lesions in the residual right temporal lobe and the splenium of the corpus callosum with a massive surrounding T2-hyperintense area. The case highlights the bilateral hemispheric network underlying prosody and the compensatory failure caused by tumor progression and connectivity disruption. This first account of sing-song dysprosody in a GBM patient underscores the complexity of the language network and the need for further case accumulation to elucidate the pathophysiology of such rare presentations.
Affiliation(s)
- Narushi Sugii
- Department of Neurosurgery, University of Tsukuba, Tsukuba, JPN
- Eiichi Ishikawa
- Department of Neurosurgery, University of Tsukuba Hospital, Tsukuba, JPN
3. O'Connell K, Marsh AA, Seydell-Greenwald A. Right hemisphere stroke is linked to reduced social connectedness in the UK Biobank cohort. Sci Rep 2024;14:27293. PMID: 39516519. PMCID: PMC11549225. DOI: 10.1038/s41598-024-78351-0.
Abstract
Social connectedness is fundamental to health and life satisfaction. Empathic capacities that support social connections are commonly impaired following damage to the brain's right hemisphere, but how these acquired socio-emotional deficits correspond to real-world social outcomes remains unclear. Using anatomical brain imaging and behavioral data from a large sample of stroke survivors included in the UK Biobank (n = 209), we link damage to regions of the right hemisphere involved in emotion recognition to lower social relationship satisfaction and higher loneliness. The effect was driven by lesions to the right anterior insula and not explained by stroke extent and motor function; it was further corroborated by an exploratory analysis of social decline in a few participants for whom data were available from before and after a stroke to the right anterior insula (n = 3; comparison n = 13). These correlational findings provide new insight into the role of the right hemisphere in maintaining social connections and bear important implications for treatment and rehabilitation post-stroke.
Affiliation(s)
- Katherine O'Connell
- Interdisciplinary Program in Neuroscience, Georgetown University, Washington, DC, 20057, USA
- Abigail A Marsh
- Department of Psychology, Georgetown University, Washington, DC, 20057, USA
- Anna Seydell-Greenwald
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, 20057, USA
4. Yıldırım C, Düzenli-Öztürk S, Parlak MM. Assessing the perception of emotional prosody in healthy ageing. Int J Lang Commun Disord 2024;59:2497-2515. PMID: 39137279. DOI: 10.1111/1460-6984.13097.
Abstract
BACKGROUND Emotional prosody is the reflection of emotion types such as happiness, sadness, fear and anger in the speaker's tone of voice. Accurately perceiving, interpreting and expressing emotional prosody is an inseparable part of successful communication and social interaction. There are few studies on emotional prosody, which is crucial for communication, and these studies report inconsistent findings regarding age and gender. AIMS The primary aim of this study is to assess the perception of emotional prosody in healthy ageing. The secondary aim is to examine the effects of variables such as age, gender, language and neurocognitive capacity on the prediction of emotional prosody recognition skills. METHODS AND PROCEDURES Sixty-nine participants between the ages of 18 and 75 were included in the study. Participants were grouped as a young group aged 18-35 (n = 26), a middle-aged group aged 36-55 (n = 24) and an elderly group aged 56-75 (n = 19). A perceptual emotional prosody test, a motor response time test and neuropsychological test batteries were administered to the participants. Participants were asked to recognise the emotion in sentences played on a computer. Natural (neutral, containing neither positive nor negative emotion), happy, angry, surprised and panic emotions were evaluated with sentences composed of pseudoword stimuli. RESULTS AND OUTCOMES The elderly group performed worse in recognising angry, panic, natural and happy emotions and in total recognition (overall correct recognition across all emotions). There was no age-related difference in recognition of the emotion of surprise. Women were more successful than men in recognising angry, panic and happy emotions and in total recognition. Age and Motor Reaction Time Test scores were significant predictors in the emotional response time regression model. Age, language, attention and gender had a significant effect on the regression model for the total recognition of emotions (p < 0.05). CONCLUSIONS AND IMPLICATIONS This was a novel study in which emotional prosody was assessed in the elderly by eliminating lexical-semantic cues and associating emotional prosody results with neuropsychiatric tests. All our findings revealed the importance of age for the perception of emotional prosody. In addition, the effects of cognitive functions such as attention, which decline with age, were found to be important. Many factors therefore contribute to the success of recognising emotional prosody correctly, and clinicians should consider variables such as cognitive health and education when assessing the perception of emotional prosody in elderly individuals. WHAT THIS PAPER ADDS What is already known on the subject Most studies compare young and old groups and evaluate the perception of emotional prosody using sentences formed according to the speech sounds, syllables, words and grammar rules of the language. The perception of emotional prosody has been reported to be lower mostly in the elderly group, but findings are inconsistent in terms of age and gender. What this paper adds to existing knowledge Perceptual prosody recognition was evaluated with an experimental design in which sentence structures consisting of pseudowords, constructed according to the phonological and syntactic rules of the language, were used as stimuli, and neurocognitive tests were included. This study was novel in comparing different age groups and determining the factors affecting multidimensional emotional prosody, including neuropsychiatric features. What are the clinical implications of this work? All our findings revealed the importance of age for the perception of emotional prosody, as well as the effects of cognitive functions such as attention, which decline with age.
Affiliation(s)
- Cansu Yıldırım
- Department of Speech and Language Therapy, Faculty of Health Sciences, İzmir Bakırçay University, Izmir, Turkey
- Seren Düzenli-Öztürk
- Department of Speech and Language Therapy, Faculty of Health Sciences, İzmir Bakırçay University, Izmir, Turkey
- Mümüne Merve Parlak
- Department of Speech and Language Therapy, Faculty of Health Sciences, Ankara Yıldırım Beyazıt University, Ankara, Turkey
5. Wu M, Wang Y, Zhao X, Xin T, Wu K, Liu H, Wu S, Liu M, Chai X, Li J, Wei C, Zhu C, Liu Y, Zhang YX. Anti-phasic oscillatory development for speech and noise processing in cochlear implanted toddlers. Child Dev 2024;95:1693-1708. PMID: 38742715. DOI: 10.1111/cdev.14105.
Abstract
The human brain demonstrates amazing readiness for speech and language learning at birth, but the auditory development preceding such readiness remains unknown. Cochlear-implanted (CI) children (n = 67; mean age 2.77 ± 1.31 years; 28 females) with prelingual deafness provide a unique opportunity to study this stage. Using functional near-infrared spectroscopy, it was revealed that the brains of CI children were unresponsive to sounds at CI hearing onset. With increasing CI experience up to 32 months, the brain demonstrated function-, region- and hemisphere-specific development. Most strikingly, the left anterior temporal lobe showed an oscillatory trajectory, changing in opposite phases for speech and noise. The study provides the first longitudinal brain imaging evidence for early auditory development preceding speech acquisition.
Affiliation(s)
- Meiyun Wu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yuyang Wang
- Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Department of Otolaryngology Head and Neck Surgery, Hunan Provincial People's Hospital (First Affiliated Hospital of Hunan Normal University), Changsha, China
- Xue Zhao
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Tianyu Xin
- Department of Otolaryngology Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Kun Wu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Haotian Liu
- Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Department of Otolaryngology Head and Neck Surgery, West China Hospital of Sichuan University, Chengdu, China
- Shinan Wu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Min Liu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xiaoke Chai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Jinhong Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chaogang Wei
- Department of Otolaryngology Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Chaozhe Zhu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Yuhe Liu
- Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Yu-Xuan Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
6. Laukka P, Månsson KNT, Cortes DS, Manzouri A, Frick A, Fredborg W, Fischer H. Neural correlates of individual differences in multimodal emotion recognition ability. Cortex 2024;175:1-11. PMID: 38691922. DOI: 10.1016/j.cortex.2024.03.009.
Abstract
Studies have reported substantial variability in emotion recognition ability (ERA), an important social skill, but the possible neural underpinnings of such individual differences are not well understood. This functional magnetic resonance imaging (fMRI) study investigated neural responses during emotion recognition in young adults (N = 49) who were selected for inclusion based on their performance (high or low) in previous testing of ERA. Participants judged brief video recordings in a forced-choice emotion recognition task, wherein stimuli were presented in visual, auditory and multimodal (audiovisual) blocks. Emotion recognition rates during brain scanning confirmed that individuals with high (vs low) ERA achieved higher accuracy in all presentation blocks. fMRI analyses focused on key regions of interest (ROIs) involved in the processing of multimodal emotion expressions, based on previous meta-analyses. In neural responses to emotional versus neutral stimuli, individuals with high (vs low) ERA showed greater activation in the following ROIs during the multimodal condition: right middle superior temporal gyrus (mSTG), right posterior superior temporal sulcus (PSTS) and right inferior frontal cortex (IFC). Overall, the results suggest that individual variability in ERA may be reflected across several stages of decisional processing, including extraction (mSTG), integration (PSTS) and evaluation (IFC) of emotional information.
Affiliation(s)
- Petri Laukka
- Department of Psychology, Stockholm University, Stockholm, Sweden; Department of Psychology, Uppsala University, Uppsala, Sweden
- Kristoffer N T Månsson
- Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden; Department of Clinical Psychology and Psychotherapy, Babeș-Bolyai University, Cluj-Napoca, Romania
- Diana S Cortes
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Amirhossein Manzouri
- Department of Psychology, Stockholm University, Stockholm, Sweden; Centre for Psychiatry Research, Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Andreas Frick
- Department of Medical Sciences, Psychiatry, Uppsala University, Uppsala, Sweden
- William Fredborg
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Håkan Fischer
- Department of Psychology, Stockholm University, Stockholm, Sweden; Stockholm University Brain Imaging Centre (SUBIC), Stockholm University, Stockholm, Sweden; Aging Research Center, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet and Stockholm University, Stockholm, Sweden
7. Zhao X, Yang X. Aging affects auditory contributions to focus perception in Jianghuai Mandarin. J Acoust Soc Am 2024;155:2990-3004. PMID: 38717206. DOI: 10.1121/10.0025928.
Abstract
Speakers can place their prosodic prominence on any locations within a sentence, generating focus prosody for listeners to perceive new information. This study aimed to investigate age-related changes in the bottom-up processing of focus perception in Jianghuai Mandarin by clarifying the perceptual cues and the auditory processing abilities involved in the identification of focus locations. Young, middle-aged, and older speakers of Jianghuai Mandarin completed a focus identification task and an auditory perception task. The results showed that increasing age led to a decrease in listeners' accuracy rate in identifying focus locations, with all participants performing the worst when dynamic pitch cues were inaccessible. Auditory processing abilities did not predict focus perception performance in young and middle-aged listeners but accounted significantly for the variance in older adults' performance. These findings suggest that age-related deteriorations in focus perception can be largely attributed to declined auditory processing of perceptual cues. Poor ability to extract frequency modulation cues may be the most important underlying psychoacoustic factor for older adults' difficulties in perceiving focus prosody in Jianghuai Mandarin. The results contribute to our understanding of the bottom-up mechanisms involved in linguistic prosody processing in aging adults, particularly in tonal languages.
Affiliation(s)
- Xinxian Zhao
- School of Foreign Studies, Tongji University, Shanghai 200092, China
- Xiaohu Yang
- School of Foreign Studies, Tongji University, Shanghai 200092, China
8. Harris I, Niven EC, Griffin A, Scott SK. Is song processing distinct and special in the auditory cortex? Nat Rev Neurosci 2023;24:711-722. PMID: 37783820. DOI: 10.1038/s41583-023-00743-4.
Abstract
Is the singing voice processed distinctively in the human brain? In this Perspective, we discuss what might distinguish song processing from speech processing in light of recent work suggesting that some cortical neuronal populations respond selectively to song, and we outline the implications for our understanding of auditory processing. We review the literature regarding the neural and physiological mechanisms of song production and perception and show that it provides evidence for key differences between song and speech processing. We conclude by discussing the significance of the notion that song processing is special in terms of how this might contribute to theories of the neurobiological origins of vocal communication and to our understanding of the neural circuitry underlying sound processing in the human cortex.
Affiliation(s)
- Ilana Harris
- Institute of Cognitive Neuroscience, University College London, London, UK
- Efe C Niven
- Institute of Cognitive Neuroscience, University College London, London, UK
- Alex Griffin
- Department of Psychology, University of Cambridge, Cambridge, UK
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK
9. Zhang M, Zhang H, Tang E, Ding H, Zhang Y. Evaluating the relative perceptual salience of linguistic and emotional prosody in quiet and noisy contexts. Behav Sci (Basel) 2023;13:800. PMID: 37887450. PMCID: PMC10603920. DOI: 10.3390/bs13100800.
Abstract
How people recognize linguistic and emotional prosody in different listening conditions is essential for understanding the complex interplay between social context, cognition, and communication. The perception of both lexical tones and emotional prosody depends on prosodic features including pitch, intensity, duration, and voice quality. However, it is unclear which aspect of prosody is perceptually more salient and resistant to noise. This study investigated the relative perceptual salience of emotional prosody and lexical tone recognition in quiet and in the presence of multi-talker babble noise. Forty young adults, randomly sampled from a pool of native Mandarin Chinese speakers with normal hearing, listened to monosyllables with or without background babble noise and completed two identification tasks, one for emotion recognition and the other for lexical tone recognition. Accuracy and speed were recorded and analyzed using generalized linear mixed-effects models. Compared with emotional prosody, lexical tones were more perceptually salient in multi-talker babble noise: native Mandarin Chinese participants identified lexical tones more accurately and quickly than vocal emotions at the same signal-to-noise ratio. Acoustic and cognitive dissimilarities between linguistic and emotional prosody may underlie this phenomenon, which calls for further exploration of the underlying psychobiological and neurophysiological mechanisms.
Affiliation(s)
- Minyue Zhang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Hui Zhang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Enze Tang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences and Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN 55455, USA
10. Dadario NB, Sughrue ME. The functional role of the precuneus. Brain 2023;146:3598-3607. PMID: 37254740. DOI: 10.1093/brain/awad181.
Abstract
Recent advancements in computational approaches and neuroimaging techniques have refined our understanding of the precuneus. Although previously believed to be largely a visual processing region, the precuneus's importance in complex cognitive functions was long underappreciated, owing both to the rarity of focal lesions in this deeply seated region and to a poor understanding of its true underlying anatomy. Fortunately, recent studies have revealed significant information on the structural and functional connectivity of this region, and these data have provided a more detailed mechanistic understanding of the importance of the precuneus in healthy and pathologic states. Through improved resting-state functional MRI analyses, it has become clear that the function of the precuneus can be better understood based on its functional association with large-scale brain networks. Dual default mode network systems have been well described in recent years as supporting episodic memory and theory of mind; however, a novel 'para-cingulate' network, a subnetwork of the larger central executive network with likely significant roles in self-referential processes and related psychiatric symptoms, is introduced here and requires further clarification. Importantly, detailed anatomic studies of precuneus structural connectivity inside and beyond the cingulate cortex have demonstrated the presence of large structural white matter connections, which provide an additional layer of meaning to the structural-functional significance of this region and its association with large-scale brain networks. Together, the structural-functional connectivity of the precuneus provides central elements for modelling various neurodegenerative diseases and psychiatric disorders, such as Alzheimer's disease and depression.
Affiliation(s)
- Nicholas B Dadario
- Robert Wood Johnson Medical School, Rutgers University, New Brunswick, NJ 07102, USA
11. Dondé C, Kantrowitz JT, Medalia A, Saperstein AM, Balla A, Sehatpour P, Martinez A, O'Connell MN, Javitt DC. Early auditory processing dysfunction in schizophrenia: mechanisms and implications. Neurosci Biobehav Rev 2023;148:105098. PMID: 36796472. PMCID: PMC10106448. DOI: 10.1016/j.neubiorev.2023.105098.
Abstract
Schizophrenia is a major mental disorder that affects approximately 1% of the population worldwide. Cognitive deficits are a key feature of the disorder and a primary cause of long-term disability. Over the past decades, a substantial literature has accumulated demonstrating impairments in early auditory perceptual processes in schizophrenia. In this review, we first describe early auditory dysfunction in schizophrenia from both behavioral and neurophysiological perspectives and examine its interrelationship with both higher-order cognitive constructs and social cognitive processes. We then provide insights into underlying pathological processes, especially in relation to glutamatergic and N-methyl-D-aspartate receptor (NMDAR) dysfunction models. Finally, we discuss the utility of early auditory measures both as treatment targets for precision intervention and as translational biomarkers for etiological investigation. Altogether, this review points to the crucial role of early auditory deficits in the pathophysiology of schizophrenia, with major implications for early intervention and auditory-targeted approaches.
Affiliation(s)
- Clément Dondé
- Univ. Grenoble Alpes, F-38000 Grenoble, France; INSERM, U1216, F-38000 Grenoble, France; Psychiatry Department, CHU Grenoble Alpes, F-38000 Grenoble, France; Psychiatry Department, CH Alpes-Isère, F-38000 Saint-Egrève, France
- Joshua T Kantrowitz
- Department of Psychiatry, Columbia University, 1051 Riverside Drive, New York, NY 10032, United States; Schizophrenia Research Center, Nathan Kline Institute, 140 Old Orangeburg Road, Orangeburg, NY 10962, United States
- Alice Medalia
- New York State Psychiatric Institute, Department of Psychiatry, Columbia University Vagelos College of Physicians and Surgeons and New York Presbyterian, New York, NY 10032, United States
- Alice M Saperstein
- New York State Psychiatric Institute, Department of Psychiatry, Columbia University Vagelos College of Physicians and Surgeons and New York Presbyterian, New York, NY 10032, United States
- Andrea Balla
- Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10962, United States
- Pejman Sehatpour
- Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10962, United States; Division of Experimental Therapeutics, College of Physicians and Surgeons, Columbia University, New York, NY, United States
- Antigona Martinez
- Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10962, United States; Division of Experimental Therapeutics, College of Physicians and Surgeons, Columbia University, New York, NY, United States
- Monica N O'Connell
- Translational Neuroscience Division, Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY 10962, United States
- Daniel C Javitt
- Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10962, United States; Division of Experimental Therapeutics, College of Physicians and Surgeons, Columbia University, New York, NY, United States
12
|
Osawa SI, Suzuki K, Asano E, Ukishiro K, Agari D, Kakinuma K, Kochi R, Jin K, Nakasato N, Tominaga T. Causal Involvement of Medial Inferior Frontal Gyrus of Non-dominant Hemisphere in Higher Order Auditory Perception: A Single Case Study. Cortex 2023; 163:57-65. [PMID: 37060887] [DOI: 10.1016/j.cortex.2023.02.007]
Abstract
The medial side of the operculum is invisible from the lateral surface of the cerebral cortex, and its functions remain largely unexplored by direct evidence. Non-invasive and invasive studies have established the role of the peri-sylvian area, including the inferior frontal gyrus (IFG) and superior temporal gyrus of the language-dominant hemisphere, in semantic processing during verbal communication. Within the non-dominant hemisphere, however, there is little evidence of function beyond pitch and prosody processing. Here we add direct evidence for a function of the non-dominant hemisphere: the causal involvement of the medial IFG in subjective auditory perception that is modulated by the context of the condition, and thus a contribution to higher order auditory perception. This phenomenon was clearly distinguished from absolute, invariant pitch perception, which is regarded as lower order auditory perception. Electrical stimulation of the medial surface of the pars triangularis of the IFG in the non-dominant hemisphere, via a depth electrode in an epilepsy patient, rapidly and reproducibly elicited the perception of pitch changes in auditory input. Pitches were perceived as either higher or lower than those heard without stimulation, with no selectivity for sound type. The patient perceived sounds as higher when she had greater control over the situation (eyes open, self-cued) and as lower when she had less control (eyes closed, investigator-cued). Time-frequency analysis of electrocorticography signals during auditory naming demonstrated medial IFG activation, characterized by low-gamma band augmentation during her own vocal response. The overall evidence provides a neural substrate for altered perception of others' vocal tones according to the context of the condition.
13
Karimi-Boroujeni M, Dajani HR, Giguère C. Perception of Prosody in Hearing-Impaired Individuals and Users of Hearing Assistive Devices: An Overview of Recent Advances. J Speech Lang Hear Res 2023; 66:775-789. [PMID: 36652704] [DOI: 10.1044/2022_jslhr-22-00125]
Abstract
PURPOSE: Prosody perception is an essential component of speech communication and social interaction through which both linguistic and emotional information are conveyed. Considering the importance of the auditory system in processing prosody-related acoustic features, the aim of this review article is to review the effects of hearing impairment on prosody perception in children and adults. It also assesses the performance of hearing assistive devices in restoring prosodic perception.
METHOD: Following a comprehensive online database search, two lines of inquiry were targeted. The first summarizes recent attempts toward determining the effects of hearing loss and interacting factors such as age and cognitive resources on prosody perception. The second analyzes studies reporting beneficial or detrimental impacts of hearing aids, cochlear implants, and bimodal stimulation on prosodic abilities in people with hearing loss.
RESULTS: The reviewed studies indicate that hearing-impaired individuals vary widely in perceiving affective and linguistic prosody, depending on factors such as hearing loss severity, chronological age, and cognitive status. In addition, most of the emerging information points to limitations of hearing assistive devices in processing and transmitting the acoustic features of prosody.
CONCLUSIONS: The existing literature is incomplete in several respects, including the lack of a consensus on how and to what extent hearing prostheses affect prosody perception, especially the linguistic function of prosody, and a gap in assessing prosody under challenging listening situations such as noise. This review article proposes directions that future research could follow to provide a better understanding of prosody processing in those with hearing impairment, which may help health care professionals and designers of assistive technology to develop innovative diagnostic and rehabilitation tools.
SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.21809772
Affiliation(s)
- Hilmi R Dajani: School of Electrical Engineering and Computer Science, University of Ottawa, Ontario, Canada
- Christian Giguère: School of Rehabilitation Sciences, University of Ottawa, Ontario, Canada

14
Benetti S, Ferrari A, Pavani F. Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience. Front Hum Neurosci 2023; 17:1108354. [PMID: 36816496] [PMCID: PMC9932987] [DOI: 10.3389/fnhum.2023.1108354]
Abstract
In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
Affiliation(s)
- Stefania Benetti: Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy
- Ambra Ferrari: Max Planck Institute for Psycholinguistics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Francesco Pavani: Centre for Mind/Brain Sciences, University of Trento, Trento, Italy; Interuniversity Research Centre “Cognition, Language, and Deafness”, CIRCLeS, Catania, Italy

15
Disentangling emotional signals in the brain: an ALE meta-analysis of vocal affect perception. Cogn Affect Behav Neurosci 2023; 23:17-29. [PMID: 35945478] [DOI: 10.3758/s13415-022-01030-y]
Abstract
Recent advances in neuroimaging research on vocal emotion perception have revealed voice-sensitive areas specialized in processing affect. Experimental data on this subject are varied, investigating a wide range of emotions through different vocal signals and task demands. The present meta-analysis was designed to disentangle this diversity of results by summarizing neuroimaging data in the vocal emotion perception literature. Data from 44 experiments contrasting emotional and neutral voices were analyzed to assess the brain areas involved in vocal affect perception in general, as well as depending on the type of voice signal (speech prosody or vocalizations), the task demands (implicit or explicit attention to emotions), and the specific emotion perceived. The results reassessed a consistent bilateral network of Emotional Voice Areas consisting of the superior temporal cortex and primary auditory regions. Specific activations and lateralization of these regions, as well as additional areas (insula, middle temporal gyrus), were further modulated by signal type and task demands. Exploring the sparser data on single emotions also suggested the recruitment of other regions (insula, inferior frontal gyrus, frontal operculum) for specific aspects of each emotion. These meta-analytic results suggest that while the bulk of vocal affect processing is localized in the superior temporal cortex, the complexity and variety of such vocal signals entail functional specificities in complex and varied cortical (and potentially subcortical) response pathways.
16
Baetens K, Ma N. Degree of abstraction rather than ambiguity is crucial for driving mentalizing involvement: commentary on "MA-EM: a neurocognitive model for understanding mixed and ambiguous emotions and morality". Cogn Neurosci 2023; 14:70-72. [PMID: 36803314] [DOI: 10.1080/17588928.2023.2181322]
Abstract
Willems (this issue) proposes a neurocognitive model that allots a central role to ambiguity in perceived morality and emotion in driving the involvement of reflective/mentalizing processes. We argue that abstractness of representation has more explanatory power in this respect. We illustrate this with examples from the verbal and non-verbal domains showing (a) concrete, ambiguous emotions processed through reflexive systems and (b) abstract, unambiguous emotions processed through the mentalizing system, counter to MA-EM model predictions. However, due to the natural correlation between ambiguity and abstractness, both accounts will typically make convergent predictions.
Affiliation(s)
- Kris Baetens: Brain, Body and Cognition and Center for Neurosciences, Vrije Universiteit Brussel, Brussel, Belgium
- Ning Ma: School of Psychology, South China Normal University, Guangzhou, China

17
Li T, Zhu X, Wu X, Gong Y, Jones JA, Liu P, Chang Y, Yan N, Chen X, Liu H. Continuous theta burst stimulation over left and right supramarginal gyri demonstrates their involvement in auditory feedback control of vocal production. Cereb Cortex 2022; 33:11-22. [PMID: 35174862] [DOI: 10.1093/cercor/bhac049]
Abstract
The supramarginal gyrus (SMG) has been implicated in auditory-motor integration for vocal production. However, whether the SMG is bilaterally or unilaterally involved in auditory feedback control of vocal production in a causal manner remains unclear. The present event-related potential (ERP) study investigated the causal roles of the left and right SMG to auditory-vocal integration using neuronavigated continuous theta burst stimulation (c-TBS). Twenty-four young adults produced sustained vowel phonations and heard their voice unexpectedly pitch-shifted by ±200 cents after receiving active or sham c-TBS over the left or right SMG. As compared to sham stimulation, c-TBS over the left or right SMG led to significantly smaller vocal compensations for pitch perturbations that were accompanied by smaller cortical P2 responses. Moreover, no significant differences were found in the vocal and ERP responses when comparing active c-TBS over the left vs. right SMG. These findings provide neurobehavioral evidence for a causal influence of both the left and right SMG on auditory feedback control of vocal production. Decreased vocal compensations paralleled by reduced P2 responses following c-TBS over the bilateral SMG support their roles for auditory-motor transformation in a bottom-up manner: receiving auditory feedback information and mediating vocal compensations for feedback errors.
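As an illustrative aside (not code from the study), the ±200-cent perturbations used here correspond to two-semitone shifts; cents map to frequency ratios through a simple exponential, 2^(cents/1200):

```python
def cents_to_ratio(cents: float) -> float:
    """Convert a pitch shift in cents to a frequency ratio (1200 cents = 1 octave)."""
    return 2.0 ** (cents / 1200.0)

# A +200-cent perturbation multiplies F0 by ~1.122 (about +12.2%),
# and -200 cents multiplies it by ~0.891; e.g. a 220 Hz voice is
# heard at roughly 247 Hz or 196 Hz, respectively.
up = 220.0 * cents_to_ratio(200)
down = 220.0 * cents_to_ratio(-200)
```

The logarithmic cent scale is why equal-magnitude upward and downward perturbations are perceptually symmetric even though their linear frequency offsets differ.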
Affiliation(s)
- Tingni Li: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Xiaoxia Zhu: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Xiuqin Wu: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Yulai Gong: Department of Neurological Rehabilitation, Affiliated Sichuan Provincial Rehabilitation Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu, 611135, China
- Jeffery A Jones: Psychology Department and Laurier Centre for Cognitive Neuroscience, Wilfrid Laurier University, Waterloo, Ontario, N2L 3C5, Canada
- Peng Liu: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Yichen Chang: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Nan Yan: CAS Key Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Xi Chen: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China
- Hanjun Liu: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Brain Function and Disease, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, 510080, China

18
van der Burght CL, Numssen O, Schlaak B, Goucha T, Hartwigsen G. Differential contributions of inferior frontal gyrus subregions to sentence processing guided by intonation. Hum Brain Mapp 2022; 44:585-598. [PMID: 36189774] [PMCID: PMC9842926] [DOI: 10.1002/hbm.26086]
Abstract
Auditory sentence comprehension involves processing content (semantics), grammar (syntax), and intonation (prosody). The left inferior frontal gyrus (IFG) is involved in sentence comprehension guided by these different cues, with neuroimaging studies preferentially locating syntactic and semantic processing in separate IFG subregions. However, this regional specialisation has not been confirmed with a neurostimulation method; consequently, the causal role of such a specialisation remains unclear. This study probed the role of the posterior IFG (pIFG) in syntactic processing and the anterior IFG (aIFG) in semantic processing with repetitive transcranial magnetic stimulation (rTMS) in a task that required the interpretation of the sentence's prosodic realisation. Healthy participants performed a sentence completion task with syntactic and semantic decisions while receiving 10 Hz rTMS over either left aIFG, pIFG, or vertex (control). Initial behavioural analyses showed an inhibitory effect on accuracy without task specificity. However, electric field simulations revealed differential effects for the two subregions. In the aIFG, stronger stimulation led to slower semantic processing, with no effect of pIFG stimulation. In contrast, we found a facilitatory effect on syntactic processing in both aIFG and pIFG, where higher stimulation strength was related to faster responses. Our results provide the first evidence for the functional relevance of the left aIFG in semantic processing guided by intonation. The stimulation effect on syntactic responses emphasises the importance of the IFG for syntax processing, without supporting the hypothesis of pIFG-specific involvement. Together, the results support the notion of functionally specialised IFG subregions for diverse but fundamental cues in language processing.
Affiliation(s)
- Constantijn L. van der Burght: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Psychology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Ole Numssen: Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Benito Schlaak: Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Tomás Goucha: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Gesa Hartwigsen: Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

19
Vachha BA, Middlebrooks EH. Brain Functional Imaging Anatomy. Neuroimaging Clin N Am 2022; 32:491-505. [PMID: 35843658] [DOI: 10.1016/j.nic.2022.04.001]
Abstract
Human brain function is an increasingly complex framework that has important implications in clinical medicine. In this review, the anatomy of the most commonly assessed brain functions in clinical neuroradiology, including motor, language, and vision, is discussed. The anatomy and function of the primary and secondary sensorimotor areas are discussed with clinical case examples. Next, the dual stream of language processing is reviewed, as well as its implications in clinical medicine and surgical planning. Last, the authors discuss the striate and extrastriate visual cortex and review the dual stream model of visual processing.
Affiliation(s)
- Behroze Adi Vachha: Department of Radiology, Neuroradiology Section, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065, USA; Brain Tumor Center, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065, USA
- Erik H Middlebrooks: Department of Radiology, Mayo Clinic, 4500 San Pablo Road, Jacksonville, FL 32224, USA; Department of Neurosurgery, Mayo Clinic, 4500 San Pablo Road, Jacksonville, FL 32224, USA

20
Liu L, Götz A, Lorette P, Tyler MD. How Tone, Intonation and Emotion Shape the Development of Infants’ Fundamental Frequency Perception. Front Psychol 2022; 13:906848. [PMID: 35719494] [PMCID: PMC9204181] [DOI: 10.3389/fpsyg.2022.906848]
Abstract
Fundamental frequency (ƒ0), perceived as pitch, is the first and arguably most salient auditory component humans are exposed to from the beginning of life. It carries multiple linguistic (e.g., word meaning) and paralinguistic (e.g., speakers’ emotion) functions in speech and communication. The mappings between these functions and ƒ0 features vary within a language and differ cross-linguistically. For instance, a rising pitch can be perceived as a question in English but as a lexical tone in Mandarin. Such variations mean that infants must learn the specific mappings based on their respective linguistic and social environments. To date, canonical theoretical frameworks and most empirical studies do not consider the multi-functionality of ƒ0 but typically focus on individual functions. More importantly, despite the eventual mastery of ƒ0 in communication, it is unclear how infants learn to decompose and recognize the overlapping functions carried by ƒ0. In this paper, we review the symbioses and synergies of the lexical, intonational, and emotional functions that can be carried by ƒ0 and that are acquired throughout infancy. On the basis of our review, we put forward the Learnability Hypothesis that infants decompose and acquire multiple ƒ0 functions through native/environmental experiences. Under this hypothesis, we propose representative cases such as the synergy scenario, in which infants use visual cues to disambiguate and decompose the different ƒ0 functions. Further, viable ways to test the scenarios derived from this hypothesis are suggested across auditory and visual modalities. Discovering how infants learn to master the diverse functions carried by ƒ0 can increase our understanding of linguistic systems, auditory processing, and communication functions.
Affiliation(s)
- Liquan Liu: MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, Australia; Center for Multilingualism in Society Across the Lifespan, University of Oslo, Oslo, Norway; Australian Research Council Centre of Excellence for the Dynamics of Language, Canberra, ACT, Australia
- Antonia Götz: MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, Australia; Department of Linguistics, University of Potsdam, Potsdam, Germany
- Pernelle Lorette: Department of English Linguistics, University of Mannheim, Mannheim, Germany
- Michael D. Tyler: MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, Australia; Australian Research Council Centre of Excellence for the Dynamics of Language, Canberra, ACT, Australia

21
Heyrani R, Nejati V, Abbasi S, Hartwigsen G. Laterality in Emotional Language Processing in First and Second Language. Front Psychol 2022; 12:736359. [PMID: 35185667] [PMCID: PMC8850280] [DOI: 10.3389/fpsyg.2021.736359]
Abstract
Language is a cognitive function that is asymmetrically distributed across the hemispheres, with left dominance for most linguistic operations. One key question in cognitive neuroscience concerns the contribution of the two hemispheres in bilingualism. Previous work shows hemispheric differences in the auditory processing of emotional and non-emotional words in bilinguals and monolinguals. In this study, we examined the differences between the hemispheres in the processing of emotional and non-emotional words in the mother tongue and a foreign language. Sixty university students with Persian as their mother tongue and English as their second language were included. Hemispheric differences were compared using the dichotic listening test. We tested the effects of hemisphere, language, and emotion and their interactions. The right ear (associated with the left hemisphere) showed an advantage for the processing of all words in the first language, and of positive words in the second language. Overall, our findings support previous studies reporting left-hemispheric dominance in late bilinguals for processing auditory stimuli.
Affiliation(s)
- Raheleh Heyrani: Department of Education and Psychology, Alzahra University, Tehran, Iran; Raftar Cognitive Neuroscience Research Center, Shahid Beheshti University, Tehran, Iran; Department of Education and Psychology, Shahid Beheshti University, Tehran, Iran
- Vahid Nejati: Raftar Cognitive Neuroscience Research Center, Shahid Beheshti University, Tehran, Iran; Department of Education and Psychology, Shahid Beheshti University, Tehran, Iran
- Sara Abbasi: Department of Education and Psychology, Shahid Beheshti University, Tehran, Iran; Institute for Cognitive Science Studies, Tehran, Iran
- Gesa Hartwigsen: Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany

22
Zora H, Csépe V. Perception of Prosodic Modulations of Linguistic and Paralinguistic Origin: Evidence From Early Auditory Event-Related Potentials. Front Neurosci 2022; 15:797487. [PMID: 35002610] [PMCID: PMC8733303] [DOI: 10.3389/fnins.2021.797487]
Abstract
How listeners handle prosodic cues of linguistic and paralinguistic origin is a central question for spoken communication. In the present EEG study, we addressed this question by examining neural responses to variations in pitch accent (linguistic) and affective (paralinguistic) prosody in Swedish words, using a passive auditory oddball paradigm. The results indicated that changes in pitch accent and affective prosody elicited mismatch negativity (MMN) responses at around 200 ms, confirming the brain’s pre-attentive response to any prosodic modulation. The MMN amplitude was, however, statistically larger to the deviation in affective prosody in comparison to the deviation in pitch accent and affective prosody combined, which is in line with previous research indicating not only a larger MMN response to affective prosody in comparison to neutral prosody but also a smaller MMN response to multidimensional deviants than unidimensional ones. The results, further, showed a significant P3a response to the affective prosody change in comparison to the pitch accent change at around 300 ms, in accordance with previous findings showing an enhanced positive response to emotional stimuli. The present findings provide evidence for distinct neural processing of different prosodic cues, and statistically confirm the intrinsic perceptual and motivational salience of paralinguistic information in spoken communication.
Affiliation(s)
- Hatice Zora: Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Valéria Csépe: Brain Imaging Centre, Research Centre for Natural Sciences, Budapest, Hungary

23
Asghari SZ, Farashi S, Bashirian S, Jenabi E. Distinctive prosodic features of people with autism spectrum disorder: a systematic review and meta-analysis study. Sci Rep 2021; 11:23093. [PMID: 34845298] [PMCID: PMC8630064] [DOI: 10.1038/s41598-021-02487-6]
Abstract
In this systematic review, we analyzed and evaluated the findings of studies on prosodic features of vocal productions of people with autism spectrum disorder (ASD) in order to recognize the statistically significant, most confirmed and reliable prosodic differences distinguishing people with ASD from typically developing individuals. Using suitable keywords, three major databases including Web of Science, PubMed and Scopus, were searched. The results for prosodic features such as mean pitch, pitch range and variability, speech rate, intensity and voice duration were extracted from eligible studies. The pooled standard mean difference between ASD and control groups was extracted or calculated. Using I2 statistic and Cochrane Q-test, between-study heterogeneity was evaluated. Furthermore, publication bias was assessed using funnel plot and its significance was evaluated using Egger's and Begg's tests. Thirty-nine eligible studies were retrieved (including 910 and 850 participants for ASD and control groups, respectively). This systematic review and meta-analysis showed that ASD group members had a significantly larger mean pitch (SMD = - 0.4, 95% CI [- 0.70, - 0.10]), larger pitch range (SMD = - 0.78, 95% CI [- 1.34, - 0.21]), longer voice duration (SMD = - 0.43, 95% CI [- 0.72, - 0.15]), and larger pitch variability (SMD = - 0.46, 95% CI [- 0.84, - 0.08]), compared with typically developing control group. However, no significant differences in pitch standard deviation, voice intensity and speech rate were found between groups. Chronological age of participants and voice elicitation tasks were two sources of between-study heterogeneity. Furthermore, no publication bias was observed during analyses (p > 0.05). Mean pitch, pitch range, pitch variability and voice duration were recognized as the prosodic features reliably distinguishing people with ASD from TD individuals.
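The pooled effect sizes and heterogeneity statistics reported above (SMD with 95% CI, Cochrane Q-test, I²) follow standard meta-analytic formulas. A minimal sketch of the two core computations, illustrative only and not the authors' code, with hypothetical study values:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference between two groups (Cohen's d)
    with Hedges' small-sample correction applied."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    j = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample correction factor
    return j * d

def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 statistic: the percentage of variability
    across studies due to heterogeneity rather than sampling error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2
```

Per-study SMDs computed with `hedges_g` are pooled with inverse-variance weights; a large I² (as when chronological age or elicitation task differ across studies) motivates a random-effects model and moderator analyses.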
Affiliation(s)
- Sajjad Farashi: Autism Spectrum Disorders Research Center, Hamadan University of Medical Sciences, Hamadan, Iran
- Saeid Bashirian: Department of Public Health, School of Health, Hamadan University of Medical Sciences, Hamadan, Iran
- Ensiyeh Jenabi: Autism Spectrum Disorders Research Center, Hamadan University of Medical Sciences, Hamadan, Iran

24
Sihvonen AJ, Sammler D, Ripollés P, Leo V, Rodríguez-Fornells A, Soinila S, Särkämö T. Right ventral stream damage underlies both poststroke aprosodia and amusia. Eur J Neurol 2021; 29:873-882. [PMID: 34661326] [DOI: 10.1111/ene.15148]
Abstract
BACKGROUND AND PURPOSE: This study was undertaken to determine and compare lesion patterns and structural dysconnectivity underlying poststroke aprosodia and amusia, using a data-driven multimodal neuroimaging approach.
METHODS: Thirty-nine patients with right or left hemisphere stroke were enrolled in a cohort study and tested for linguistic and affective prosody perception and musical pitch and rhythm perception at the subacute and 3-month poststroke stages. Participants listened to words spoken with different prosodic stress that changed their meaning, and to words spoken with six different emotions, and chose which meaning or emotion was expressed. In the music tasks, participants judged pairs of short melodies as the same or different in terms of pitch or rhythm. Structural magnetic resonance imaging data were acquired at both stages, and machine learning-based lesion-symptom mapping and deterministic tractography were used to identify lesion patterns and damaged white matter pathways giving rise to aprosodia and amusia.
RESULTS: Aprosodia and amusia were strongly correlated behaviorally and associated with similar lesion patterns in right frontoinsular and striatal areas. In multiple regression models, reduced fractional anisotropy and lower tract volume of the right inferior fronto-occipital fasciculus were the strongest predictors for both disorders over time.
CONCLUSIONS: These results highlight a common origin of aprosodia and amusia, both arising from damage and disconnection of the right ventral auditory stream, which integrates rhythmic-melodic acoustic information in prosody and music. Comorbidity of these disabilities may worsen the prognosis and affect rehabilitation success.
Affiliation(s)
- Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Centre for Clinical Research, University of Queensland, Brisbane, Queensland, Australia
- Daniela Sammler
- Research Group "Neurocognition of Music and Language", Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany
- Pablo Ripollés
- Department of Psychology, New York University, New York, New York, USA
- Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute, L'Hospitalet de Llobregat, Spain; Department of Cognition, Development, and Education Psychology, University of Barcelona, Barcelona, Spain; Catalan Institution for Research and Advanced Studies, Barcelona, Spain
- Seppo Soinila
- Neurocenter, Turku University Hospital and Division of Clinical Neurosciences, University of Turku, Turku, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
25
Sievers B, Parkinson C, Kohler PJ, Hughes JM, Fogelson SV, Wheatley T. Visual and auditory brain areas share a representational structure that supports emotion perception. Curr Biol 2021; 31:5192-5203.e4. [PMID: 34644547] [DOI: 10.1016/j.cub.2021.09.043]
Abstract
Emotionally expressive music and dance occur together across the world. This may be because features shared across the senses are represented the same way even in different sensory brain areas, putting music and movement in directly comparable terms. These shared representations may arise from a general need to identify environmentally relevant combinations of sensory features, particularly those that communicate emotion. To test the hypothesis that visual and auditory brain areas share a representational structure, we created music and animation stimuli with crossmodally matched features expressing a range of emotions. Participants confirmed that each emotion corresponded to a set of features shared across music and movement. A subset of participants viewed both music and animation during brain scanning, revealing that representations in auditory and visual brain areas were similar to one another. This shared representation captured not only simple stimulus features but also combinations of features associated with emotion judgments. The posterior superior temporal cortex represented both music and movement using this same structure, suggesting supramodal abstraction of sensory content. Further exploratory analysis revealed that early visual cortex used this shared representational structure even when stimuli were presented auditorily. We propose that crossmodally shared representations support mutually reinforcing dynamics across auditory and visual brain areas, facilitating crossmodal comparison. These shared representations may help explain why emotions are so readily perceived and why some dynamic emotional expressions can generalize across cultural contexts.
Affiliation(s)
- Beau Sievers
- Department of Psychology, Harvard University, Cambridge, MA 02138, USA; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Carolyn Parkinson
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA 90095, USA; Brain Research Institute, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Peter J Kohler
- Department of Psychology, York University, Toronto, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada
- Thalia Wheatley
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA; Santa Fe Institute, Santa Fe, NM 87501, USA
26
Durfee AZ, Sheppard SM, Blake ML, Hillis AE. Lesion loci of impaired affective prosody: A systematic review of evidence from stroke. Brain Cogn 2021; 152:105759. [PMID: 34118500] [PMCID: PMC8324538] [DOI: 10.1016/j.bandc.2021.105759]
Abstract
Affective prosody, or the changes in rate, rhythm, pitch, and loudness that convey emotion, has long been implicated as a function of the right hemisphere (RH), yet there is a dearth of literature identifying the specific neural regions associated with its processing. The current systematic review aimed to evaluate the evidence on affective prosody localization in the RH. One hundred and ninety articles from 1970 to February 2020 investigating affective prosody comprehension and production in patients with focal brain damage were identified via database searches. Eleven articles met inclusion criteria, passed quality reviews, and were analyzed for affective prosody localization. Acute, subacute, and chronic lesions demonstrated similar profile characteristics. Localized right antero-superior (i.e., dorsal stream) regions contributed to affective prosody production impairments, whereas damage to more postero-lateral (i.e., ventral stream) regions resulted in affective prosody comprehension deficits. This review provides support that distinct RH regions are vital for affective prosody comprehension and production, aligning with literature reporting RH activation for affective prosody processing in healthy adults as well. The impact of study design on resulting interpretations is discussed.
Affiliation(s)
- Alexandra Zezinka Durfee
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States
- Shannon M Sheppard
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States; Department of Communication Sciences and Disorders, Chapman University Crean College of Health and Behavioral Sciences, Irvine, CA 92618, United States
- Margaret L Blake
- Department of Communication Sciences and Disorders, University of Houston College of Liberal Arts and Social Sciences, Houston, TX 77204, United States
- Argye E Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD 21287, United States; Department of Cognitive Science, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD 21218, United States
27
Multiple prosodic meanings are conveyed through separate pitch ranges: Evidence from perception of focus and surprise in Mandarin Chinese. Cogn Affect Behav Neurosci 2021; 21:1164-1175. [PMID: 34331268] [DOI: 10.3758/s13415-021-00930-9]
Abstract
F0 variation is a crucial feature in speech prosody, which can convey linguistic information such as focus and paralinguistic meanings such as surprise. How can multiple layers of information be represented with F0 in speech: are they divided into discrete layers of pitch or overlapped without clear divisions? We investigated this question by assessing pitch perception of focus and surprise in Mandarin Chinese. Seventeen native Mandarin listeners rated the strength of focus and surprise conveyed by the same set of synthetically manipulated sentences. An fMRI experiment was conducted to assess neural correlates of the listeners' perceptual response to the stimuli. The results showed that behaviourally, the perceptual threshold for focus was 3 semitones and that for surprise was 5 semitones above the baseline. Moreover, the pitch range of 5-12 semitones above the baseline signalled both focus and surprise, suggesting a considerable overlap between the two types of prosodic information within this range. The neuroimaging data positively correlated with the variations in behavioural data. Also, a ceiling effect was found as no significant behavioural differences or neural activities were shown after reaching a certain pitch level for the perception of focus and surprise respectively. Together, the results suggest that different layers of prosodic information are represented in F0 through different pitch ranges: paralinguistic information is represented at a pitch range beyond that used by linguistic information. Meanwhile, the representation of paralinguistic information is achieved without obscuring linguistic prosody, thus allowing F0 to represent the two layers of information in parallel.
28
Sheppard SM, Meier EL, Zezinka Durfee A, Walker A, Shea J, Hillis AE. Characterizing subtypes and neural correlates of receptive aprosodia in acute right hemisphere stroke. Cortex 2021; 141:36-54. [PMID: 34029857] [PMCID: PMC8489691] [DOI: 10.1016/j.cortex.2021.04.003]
Abstract
INTRODUCTION Speakers naturally produce prosodic variations depending on their emotional state. Receptive prosody has several processing stages. We aimed to conduct lesion-symptom mapping to determine whether damage (core infarct or hypoperfusion) to specific brain areas was associated with receptive aprosodia or with impairment at different processing stages in individuals with acute right hemisphere stroke. We also aimed to determine whether different subtypes of receptive aprosodia exist that are characterized by distinctive behavioral performance patterns. METHODS Twenty patients with receptive aprosodia following right hemisphere ischemic stroke were enrolled within five days of stroke; clinical imaging was acquired. Participants completed tests of receptive emotional prosody, and tests of each stage of prosodic processing (Stage 1: acoustic analysis; Stage 2: analyzing abstract representations of acoustic characteristics that convey emotion; Stage 3: semantic processing). Emotional facial recognition was also assessed. LASSO regression was used to identify predictors of performance on each behavioral task. Predictors entered into each model included 14 right hemisphere regions, hypoperfusion in four vascular territories as measured using FLAIR hyperintense vessel ratings, lesion volume, age, and education. A k-medoid cluster analysis was used to identify different subtypes of receptive aprosodia based on performance on the behavioral tasks. RESULTS Impaired receptive emotional prosody and impaired emotional facial expression recognition were both predicted by greater percent damage to the caudate. The k-medoid cluster analysis identified three different subtypes of aprosodia. One group was primarily impaired on Stage 1 processing and primarily had frontotemporal lesions. The second group had a domain-general emotion recognition impairment and maximal lesion overlap in subcortical areas. Finally, the third group was characterized by a Stage 2 processing deficit and had lesion overlap in posterior regions. CONCLUSIONS Subcortical structures, particularly the caudate, play an important role in emotional prosody comprehension. Receptive aprosodia can result from impairments at different processing stages.
Affiliation(s)
- Shannon M Sheppard
- Department of Communication Sciences & Disorders, Chapman University, Irvine, CA, USA; Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Erin L Meier
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Alex Walker
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jennifer Shea
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Argye E Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Cognitive Science, Krieger School of Arts and Sciences, Johns Hopkins University, Baltimore, MD, USA
29
Hartwigsen G, Bengio Y, Bzdok D. How does hemispheric specialization contribute to human-defining cognition? Neuron 2021; 109:2075-2090. [PMID: 34004139] [PMCID: PMC8273110] [DOI: 10.1016/j.neuron.2021.04.024]
Abstract
Uniquely human cognitive faculties arise from flexible interplay between specific local neural modules, with hemispheric asymmetries in functional specialization. Here, we discuss how these computational design principles provide a scaffold that enables some of the most advanced cognitive operations, such as semantic understanding of world structure, logical reasoning, and communication via language. We draw parallels to dual-processing theories of cognition by placing a focus on Kahneman's System 1 and System 2. We propose integration of these ideas with the global workspace theory to explain dynamic relay of information products between both systems. Deepening the current understanding of how neurocognitive asymmetry makes humans special can ignite the next wave of neuroscience-inspired artificial intelligence.
Affiliation(s)
- Gesa Hartwigsen
- Max Planck Institute for Human Cognitive and Brain Sciences, Lise Meitner Research Group Cognition and Plasticity, Leipzig, Germany
- Yoshua Bengio
- Mila, Montreal, QC, Canada; University of Montreal, Montreal, QC, Canada
- Danilo Bzdok
- Mila, Montreal, QC, Canada; Montreal Neurological Institute, McConnell Brain Imaging Centre, Faculty of Medicine, McGill University, Montreal, QC, Canada; Department of Biomedical Engineering, Faculty of Medicine, and School of Computer Science, McGill University, Montreal, QC, Canada
30
Meta-Analysis on the Identification of Linguistic and Emotional Prosody in Cochlear Implant Users and Vocoder Simulations. Ear Hear 2021; 41:1092-1102. [PMID: 32251011] [DOI: 10.1097/aud.0000000000000863]
Abstract
OBJECTIVES This study quantitatively assesses how cochlear implants (CIs) and vocoder simulations of CIs influence the identification of linguistic and emotional prosody in nontonal languages. By means of meta-analysis, it was explored how accurately CI users and normal-hearing (NH) listeners of vocoder simulations (henceforth: simulation listeners) identify prosody compared with NH listeners of unprocessed speech (henceforth: NH listeners), whether this effect of electric hearing differs between CI users and simulation listeners, and whether the effect of electric hearing is influenced by the type of prosody that listeners identify or by the availability of specific cues in the speech signal. DESIGN Records were found by searching the PubMed Central, Web of Science, Scopus, Science Direct, and PsycINFO databases (January 2018) using the search terms "cochlear implant prosody" and "vocoder prosody." Records (published in English) were included that reported results of experimental studies comparing CI users' and/or simulation listeners' identification of linguistic and/or emotional prosody in nontonal languages to that of NH listeners (all ages included). Studies that met the inclusion criteria were subjected to a multilevel random-effects meta-analysis. RESULTS Sixty-four studies reported in 28 records were included in the meta-analysis. The analysis indicated that CI users and simulation listeners were less accurate in correctly identifying linguistic and emotional prosody compared with NH listeners, that the identification of emotional prosody was more strongly compromised by the electric hearing speech signal than linguistic prosody was, and that the low quality of transmission of fundamental frequency (f0) through the electric hearing speech signal was the main cause of compromised prosody identification in CI users and simulation listeners. Moreover, results indicated that the accuracy with which CI users and simulation listeners identified linguistic and emotional prosody was comparable, suggesting that vocoder simulations with carefully selected parameters can provide a good estimate of how prosody may be identified by CI users. CONCLUSIONS The meta-analysis revealed a robust negative effect of electric hearing, where CIs and vocoder simulations had a similar negative influence on the identification of linguistic and emotional prosody, which seemed mainly due to inadequate transmission of f0 cues through the degraded electric hearing speech signal of CIs and vocoder simulations.
31
Lin RZ, Marsh EB. Abnormal singing can identify patients with right hemisphere cortical strokes at risk for impaired prosody. Medicine (Baltimore) 2021; 100:e26280. [PMID: 34115027] [PMCID: PMC8202571] [DOI: 10.1097/md.0000000000026280]
Abstract
Despite lacking the aphasia seen with left hemisphere (LH) infarcts involving the middle cerebral artery territory, right hemisphere (RH) strokes can result in significant difficulties in affective prosody. These impairments may be more difficult to identify but lead to significant communication problems. We determine if evaluation of singing can accurately identify stroke patients with cortical RH infarcts at risk for prosodic impairment who may benefit from rehabilitation. A prospective cohort of 36 patients evaluated with acute ischemic stroke was recruited. Participants underwent an experimental battery evaluating their singing, prosody comprehension, and prosody production. Singing samples were rated by 2 independent reviewers as subjectively "normal" or "abnormal," and analyzed for properties of the fundamental frequency. Relationships between infarct location, singing, and prosody performance were evaluated using t tests and chi-squared analysis. Eighty percent of participants with LH cortical strokes were unable to successfully complete any of the tasks due to severe aphasia. For the remainder, singing ratings corresponded to stroke location for 68% of patients. RH cortical strokes demonstrated a lower mean fundamental frequency while singing than those with subcortical infarcts (176.8 vs 130.4, P = 0.02). They also made more errors on tasks of prosody comprehension (28.6 vs 16.0, P < 0.001) and production (40.4 vs 18.4, P < 0.001). Patients with RH cortical infarcts are more likely to exhibit impaired prosody comprehension and production and demonstrate poor variation of tone when singing compared to patients with subcortical infarcts. A simple singing screen is able to successfully identify patients with cortical lesions and potential prosodic deficits.
Affiliation(s)
- Rebecca Z. Lin
- Department of Cognitive Science, Johns Hopkins University
- Elisabeth B. Marsh
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD, USA
32
Chan HL, Low I, Chen LF, Chen YS, Chu IT, Hsieh JC. A novel beamformer-based imaging of phase-amplitude coupling (BIPAC) unveiling the inter-regional connectivity of emotional prosody processing in women with primary dysmenorrhea. J Neural Eng 2021; 18. [PMID: 33691295] [DOI: 10.1088/1741-2552/abed83]
Abstract
Objective. Neural communication or the interactions of brain regions play a key role in the formation of functional neural networks. A type of neural communication can be measured in the form of phase-amplitude coupling (PAC), which is the coupling between the phase of low-frequency oscillations and the amplitude of high-frequency oscillations. This paper presents a beamformer-based imaging method, beamformer-based imaging of PAC (BIPAC), to quantify the strength of PAC between a seed region and other brain regions. Approach. A dipole is used to model the ensemble of neural activity within a group of nearby neurons and represents a mixture of multiple source components of cortical activity. From ensemble activity at each brain location, the source component with the strongest coupling to the seed activity is extracted, while unrelated components are suppressed to enhance the sensitivity of coupled-source estimation. Main results. In evaluations using simulation data sets, BIPAC proved advantageous with regard to estimation accuracy in source localization, orientation, and coupling strength. BIPAC was also applied to the analysis of magnetoencephalographic signals recorded from women with primary dysmenorrhea in an implicit emotional prosody experiment. In response to negative emotional prosody, auditory areas revealed strong PAC with the ventral auditory stream and occipitoparietal areas in the theta-gamma and alpha-gamma bands, which may respectively indicate the recruitment of auditory sensory memory and attention reorientation. Moreover, patients with more severe pain experience appeared to have stronger coupling between auditory areas and temporoparietal regions. Significance. Our findings indicate that the implicit processing of emotional prosody is altered by menstrual pain experience. The proposed BIPAC is feasible and applicable to imaging inter-regional connectivity based on cross-frequency coupling estimates. The experimental results also demonstrate that BIPAC is capable of revealing autonomous brain processing and neurodynamics, which are more subtle than active and attended task-driven processing.
Affiliation(s)
- Hui-Ling Chan
- Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Intan Low
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
- Li-Fen Chen
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan; Institute of Biomedical Informatics, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yong-Sheng Chen
- Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Ian-Ting Chu
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Jen-Chuen Hsieh
- Institute of Brain Science, National Yang Ming Chiao Tung University, Taipei, Taiwan; Integrated Brain Research Unit, Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan
33
Durfee AZ, Sheppard SM, Meier EL, Bunker L, Cui E, Crainiceanu C, Hillis AE. Explicit Training to Improve Affective Prosody Recognition in Adults with Acute Right Hemisphere Stroke. Brain Sci 2021; 11:667. [PMID: 34065453] [PMCID: PMC8161405] [DOI: 10.3390/brainsci11050667]
Abstract
Difficulty recognizing affective prosody (receptive aprosodia) can occur following right hemisphere damage (RHD). Not all individuals spontaneously recover their ability to recognize affective prosody, warranting behavioral intervention. However, there is a dearth of evidence-based receptive aprosodia treatment research in this clinical population. The purpose of the current study was to investigate an explicit training protocol targeting affective prosody recognition in adults with RHD and receptive aprosodia. Eighteen adults with receptive aprosodia due to acute RHD completed affective prosody recognition before and after a short training session that targeted proposed underlying perceptual and conceptual processes. Behavioral impairment and lesion characteristics were investigated as possible influences on training effectiveness. Affective prosody recognition improved following training, and recognition accuracy was higher for pseudo- vs. real-word sentences. Perceptual deficits were associated with the most posterior infarcts, conceptual deficits were associated with frontal infarcts, and a combination of perceptual-conceptual deficits were related to temporoparietal and subcortical infarcts. Several right hemisphere ventral stream regions and pathways along with frontal and parietal hypoperfusion predicted training effectiveness. Explicit acoustic-prosodic-emotion training improves affective prosody recognition, but it may not be appropriate for everyone. Factors such as linguistic context and lesion location should be considered when planning prosody training.
Affiliation(s)
- Alexandra Zezinka Durfee
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Shannon M. Sheppard
- Department of Communication Sciences and Disorders, Chapman University, Irvine, CA 92618, USA
- Erin L. Meier
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA 02115, USA
- Lisa Bunker
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Erjia Cui
- Department of Biostatistics, Johns Hopkins University, Baltimore, MD 21205, USA
- Ciprian Crainiceanu
- Department of Biostatistics, Johns Hopkins University, Baltimore, MD 21205, USA
- Argye E. Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Department of Physical Medicine and Rehabilitation, Johns Hopkins University, Baltimore, MD 21287, USA
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA
34
Giacomo JD, Gongora M, Silva F, Nicoliche E, Bittencourt J, Marinho V, Gupta D, Orsini M, Teixeira S, Cagy M, Bastos V, Budde H, Basile LF, Velasques B, Ribeiro P. Repetitive transcranial magnetic stimulation changes cognitive/motor tasks performance: An absolute alpha and beta power study. Neurosci Lett 2021; 753:135866. [PMID: 33812932] [DOI: 10.1016/j.neulet.2021.135866]
Abstract
The voluntary movement demands integration between cognitive and motor functions. During the initial stages of motor learning until mastery of a new motor task, and during a demanding task that is not automatic, cognitive and motor functions can be perceived as independent from each other. Areas used for actually performing motor tasks are essentially the same used by Motor Imagery (MI). The main objective of this study was to investigate inhibition effects on cognitive functions of motor skills induced by low-frequency (1 Hz) Repetitive Transcranial Magnetic Stimulation (rTMS) at the sensory-motor integration site (Cz). In particular, the goal was to examine absolute alpha and beta power changes on frontal regions during Execution, Action observation, and Motor Imagery of finger movement tasks. Eleven healthy, right-handed volunteers of both sexes (5 males, 6 females; mean age 28 ± 5 years), with no history of psychiatric or neurological disorders, participated in the experiment. The execution task consisted of the subject flexing and extending the index finger. The action observation task involved watching a video of the same movement. The motor imagery task was imagining the flexion and extension of the index finger movement. After performing the tasks randomly, subjects were submitted to 15 min of low-frequency rTMS and performed the tasks again. All tasks were executed simultaneously with EEG signals recording. Our results demonstrated a significant interaction between rTMS and the three tasks in almost all analyzed regions showing that rTMS can affect the frontal region regarding Execution, Action observation, and Motor Imagery tasks.
Affiliation(s)
- Jessé Di Giacomo
- Brain Mapping and Sensory Motor Integration, Institute of Psychiatry of the Federal University of Rio de Janeiro (IPUB/UFRJ), Rio de Janeiro, Brazil; Federal Institute of Education, Science and Technology of Rio de Janeiro (IFRJ), Rio de Janeiro, Brazil
- Mariana Gongora
- Brain Mapping and Sensory Motor Integration, Institute of Psychiatry of the Federal University of Rio de Janeiro (IPUB/UFRJ), Rio de Janeiro, Brazil
- Farmy Silva
- Brain Mapping and Sensory Motor Integration, Institute of Psychiatry of the Federal University of Rio de Janeiro (IPUB/UFRJ), Rio de Janeiro, Brazil
- Eduardo Nicoliche
- Neurophysiology and Neuropsychology of Attention, Institute of Psychiatry of the Federal University of Rio de Janeiro (IPUB/UFRJ), Rio de Janeiro, Brazil
- Victor Marinho
- Brain Mapping and Functionality Laboratory, Federal University of Piauí, Piauí, Brazil
- Daya Gupta
- Department of Biology, Camden County College, Blackwood, NJ, USA
- Marco Orsini
- Antônio Pedro University Hospital, Fluminense Federal University, UFF, Niterói, Brazil; Centro Universitario Severino Sombra, Faculty of Medicine, Vassouras, Brazil
- Silmar Teixeira
- Brain Mapping and Functionality Laboratory, Federal University of Piauí, Piauí, Brazil
- Mauricio Cagy
- Biomedical Engineering Program, COPPE, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
- Victor Bastos
- Brain Mapping and Functionality Laboratory, Federal University of Piauí, Piauí, Brazil
- Henning Budde
- Faculty of Human Sciences, Medical School Hamburg, Hamburg, Germany; Sport Science, Reykjavik University, Reykjavik, Iceland
- Luis F Basile
- Laboratory of Psychophysiology, Faculdade da Saúde, UMESP, São Paulo, Brazil; Division of Neurosurgery, University of São Paulo Medical School, São Paulo, Brazil
- Bruna Velasques
- Bioscience Department, School of Physical Education of the Federal University of Rio de Janeiro (EEFD/UFRJ), Rio de Janeiro, Brazil; Institute of Applied Neuroscience (INA), Rio de Janeiro, Brazil; Neurophysiology and Neuropsychology of Attention, Institute of Psychiatry of the Federal University of Rio de Janeiro (IPUB/UFRJ), Rio de Janeiro, Brazil
- Pedro Ribeiro
- Brain Mapping and Sensory Motor Integration, Institute of Psychiatry of the Federal University of Rio de Janeiro (IPUB/UFRJ), Rio de Janeiro, Brazil; Brain Mapping and Functionality Laboratory, Federal University of Piauí, Piauí, Brazil; Institute of Applied Neuroscience (INA), Rio de Janeiro, Brazil; Neurophysiology and Neuropsychology of Attention, Institute of Psychiatry of the Federal University of Rio de Janeiro (IPUB/UFRJ), Rio de Janeiro, Brazil
35
Bourke JD, Todd J. Acoustics versus linguistics? Context is Part and Parcel to lateralized processing of the parts and parcels of speech. Laterality 2021; 26:725-765. [PMID: 33726624] [DOI: 10.1080/1357650x.2021.1898415] [Citation(s) in RCA: 3]
Abstract
The purpose of this review is to provide an accessible exploration of key considerations of lateralization in speech and non-speech perception using clear and defined language. From these considerations, the primary arguments for each side of the linguistics versus acoustics debate are outlined and explored in context of emerging integrative theories. This theoretical approach entails a perspective that linguistic and acoustic features differentially contribute to leftward bias, depending on the given context. Such contextual factors include stimulus parameters and variables of stimulus presentation (e.g., noise/silence and monaural/binaural) and variances in individuals (sex, handedness, age, and behavioural ability). Discussion of these factors and their interaction is also aimed towards providing an outline of variables that require consideration when developing and reviewing methodology of acoustic and linguistic processing laterality studies. Thus, there are three primary aims in the present paper: (1) to provide the reader with key theoretical perspectives from the acoustics/linguistics debate and a synthesis of the two viewpoints, (2) to highlight key caveats for generalizing findings regarding predominant models of speech laterality, and (3) to provide a practical guide for methodological control using predominant behavioural measures (i.e., gap detection and dichotic listening tasks) and/or neurophysiological measures (i.e., mismatch negativity) of speech laterality.
Affiliation(s)
- Jesse D Bourke
- School of Psychology, University Drive, Callaghan, NSW 2308, Australia
- Juanita Todd
- School of Psychology, University Drive, Callaghan, NSW 2308, Australia
36
The Neural Bases of Drawing. A Meta-analysis and a Systematic Literature Review of Neurofunctional Studies in Healthy Individuals. Neuropsychol Rev 2021; 31:689-702. [PMID: 33728526] [PMCID: PMC8593049] [DOI: 10.1007/s11065-021-09494-4] [Citation(s) in RCA: 10]
Abstract
Drawing is a multi-component process requiring a wide range of cognitive abilities. Several studies of patients with focal brain lesions and functional neuroimaging studies of healthy individuals have demonstrated that drawing engages a wide brain network. However, the neural structures specifically related to drawing remain to be fully characterized. We conducted a systematic review, complemented by a meta-analytic approach, to identify the core neural underpinnings of drawing in healthy individuals. In analysing the selected studies, we took into account the type of control task employed (i.e., motor or non-motor) and the type of drawn stimulus (i.e., geometric, figurative, or nonsense). The results showed that a fronto-parietal network, particularly in the left hemisphere, was involved in drawing when compared with other motor activities. Drawing figurative images additionally activated the inferior frontal gyrus and the inferior temporal cortex, brain areas involved in the selection of semantic features of objects and in visual semantic processing. Moreover, copying, more than drawing from memory, was associated with activation of the extrastriate cortex (BA 18, 19). The activation likelihood estimation coordinate-based meta-analysis revealed a core neural network specifically associated with drawing, which included the premotor area (BA 6) and the inferior parietal lobe (BA 40) bilaterally, and the left precuneus (BA 7). These results show that a fronto-parietal network is specifically involved in drawing and suggest that a crucial role is played by the (left) inferior parietal lobe, consistent with the classical literature on constructional apraxia.
37
The brain mechanism of explicit and implicit processing of emotional prosodies: An fNIRS study. Acta Psychologica Sinica 2021. [DOI: 10.3724/sp.j.1041.2021.00015] [Citation(s) in RCA: 1]
38
Stroganova TA, Komarov KS, Sysoeva OV, Goiaeva DE, Obukhova TS, Ovsiannikova TM, Prokofyev AO, Orekhova EV. Left hemispheric deficit in the sustained neuromagnetic response to periodic click trains in children with ASD. Mol Autism 2020; 11:100. [PMID: 33384021] [PMCID: PMC7775632] [DOI: 10.1186/s13229-020-00408-4] [Citation(s) in RCA: 18]
Abstract
BACKGROUND Deficits in perception and production of vocal pitch are often observed in people with autism spectrum disorder (ASD), but the neural basis of these deficits is unknown. In magnetoencephalography (MEG), spectrally complex periodic sounds trigger two continuous neural responses: the auditory steady-state response (ASSR) and the sustained field (SF). It has been shown that the SF in neurotypical individuals is associated with low-level analysis of pitch in the 'pitch processing center' of Heschl's gyrus. Alterations in this auditory response may therefore reflect atypical processing of vocal pitch. The SF, however, has never been studied in people with ASD. METHODS We used MEG and individual brain models to investigate the ASSR and SF evoked by monaural 40 Hz click trains in boys with ASD (N = 35) and neurotypical (NT) boys (N = 35) aged 7-12 years. RESULTS In agreement with previous research in adults, the cortical sources of the SF in children were located in the left and right Heschl's gyri, anterolateral to those of the ASSR. In both groups, the SF and ASSR dominated in the right hemisphere and were higher in the hemisphere contralateral to the stimulated ear. The ASSR increased with age in both NT and ASD children and did not differ between the groups. The SF amplitude did not change significantly between the ages of 7 and 12 years; in boys with ASD it was moderately attenuated in both hemispheres and markedly delayed and displaced in the left hemisphere. The SF delay in participants with ASD was present irrespective of their intelligence level and severity of autism symptoms. LIMITATIONS We did not test the language abilities of our participants; the link between the SF and processing of vocal pitch in children with ASD therefore remains speculative. CONCLUSION Children with ASD demonstrate atypical processing of spectrally complex periodic sound at the level of the core auditory cortex of the left hemisphere. The observed neural deficit may contribute to the speech perception difficulties experienced by children with ASD, including their poor perception and production of linguistic prosody.
Affiliation(s)
- T A Stroganova
- Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation
- K S Komarov
- Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation
- O V Sysoeva
- Institute of Higher Nervous Activity, Russian Academy of Science, Moscow, Russian Federation
- D E Goiaeva
- Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation
- T S Obukhova
- Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation
- T M Ovsiannikova
- Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation
- A O Prokofyev
- Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation
- E V Orekhova
- Center for Neurocognitive Research (MEG Center), Moscow State University of Psychology and Education, Moscow, Russian Federation; MedTech West and the Institute of Neuroscience and Physiology, Sahlgrenska Academy, The University of Gothenburg, Gothenburg, Sweden
39
Chien PJ, Friederici AD, Hartwigsen G, Sammler D. Intonation processing increases task-specific fronto-temporal connectivity in tonal language speakers. Hum Brain Mapp 2020; 42:161-174. [PMID: 32996647] [PMCID: PMC7721241] [DOI: 10.1002/hbm.25214] [Citation(s) in RCA: 6]
Abstract
Language comprehension depends on tight functional interactions between distributed brain regions. While these interactions are established for semantic and syntactic processes, the functional network of speech intonation – the linguistic variation of pitch – has been scarcely defined. Particularly little is known about intonation in tonal languages, in which pitch not only serves intonation but also expresses meaning via lexical tones. The present study used psychophysiological interaction analyses of functional magnetic resonance imaging data to characterise the neural networks underlying intonation and tone processing in native Mandarin Chinese speakers. Participants categorised either intonation or tone of monosyllabic Mandarin words that gradually varied between statement and question and between Tone 2 and Tone 4. Intonation processing induced bilateral fronto‐temporal activity and increased functional connectivity between left inferior frontal gyrus and bilateral temporal regions, likely linking auditory perception and labelling of intonation categories in a phonological network. Tone processing induced bilateral temporal activity, associated with the auditory representation of tonal (phonemic) categories. Together, the present data demonstrate the breadth of the functional intonation network in a tonal language including higher‐level phonological processes in addition to auditory representations common to both intonation and tone.
Affiliation(s)
- Pei-Ju Chien
- International Max Planck Research School NeuroCom, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Otto Hahn Group 'Neural Bases of Intonation in Speech and Music', Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Lise Meitner Research Group 'Cognition and Plasticity', Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Gesa Hartwigsen
- Lise Meitner Research Group 'Cognition and Plasticity', Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Daniela Sammler
- Otto Hahn Group 'Neural Bases of Intonation in Speech and Music', Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
40
Rivera Bonet CN, Hwang G, Hermann B, Struck AF, Cook CJ, Nair VA, Mathis J, Allen L, Almane DN, Arkush K, Birn R, Conant LL, DeYoe EA, Felton E, Maganti R, Nencka A, Raghavan M, Shah U, Sosa VN, Ustine C, Prabhakaran V, Binder JR, Meyerand ME. Neuroticism in temporal lobe epilepsy is associated with altered limbic-frontal lobe resting-state functional connectivity. Epilepsy Behav 2020; 110:107172. [PMID: 32554180] [PMCID: PMC7483612] [DOI: 10.1016/j.yebeh.2020.107172] [Citation(s) in RCA: 6]
Abstract
Neuroticism, a core personality trait characterized by a tendency towards experiencing negative affect, has been reported to be higher in people with temporal lobe epilepsy (TLE) than in healthy individuals. Neuroticism is a known predictor of depression and anxiety, which also occur more frequently in people with TLE. The purpose of this study was to identify abnormalities in whole-brain resting-state functional connectivity in relation to neuroticism in people with TLE and to determine the degree of unique versus shared patterns of abnormal connectivity in relation to elevated symptoms of depression and anxiety. Ninety-three individuals with TLE (55 females) and 40 healthy controls (18 females) from the Epilepsy Connectome Project (ECP) completed measures of neuroticism, depression, and anxiety, all of which were significantly higher in people with TLE than in controls. Resting-state functional connectivity was compared between controls and TLE subgroups with high and low neuroticism using analysis of variance (ANOVA) and t-tests. In secondary analyses, the same analyses were performed using measures of depression and anxiety, and the variance in resting-state connectivity uniquely associated with neuroticism, independent of symptoms of depression and anxiety, was identified. Increased neuroticism was significantly associated with hyposynchrony between the right hippocampus and Brodmann area (BA) 9, a region of prefrontal cortex (p < 0.005), representing a unique relationship independent of symptoms of depression and anxiety. Hyposynchrony between the right hippocampus and BA47 (anterior frontal operculum) was associated with high neuroticism as well as with higher depression and anxiety scores (p < 0.05), making it an abnormal connection shared across the three measures. In conclusion, increased neuroticism exhibits both unique and shared patterns of abnormal functional connectivity with depression and anxiety symptoms between regions of the mesial temporal and frontal lobes.
Affiliation(s)
- Gyujoon Hwang
- Department of Medical Physics, University of Wisconsin-Madison, United States of America
- Bruce Hermann
- Department of Neurology, University of Wisconsin-Madison, United States of America
- Aaron F Struck
- Department of Neurology, University of Wisconsin-Madison, United States of America
- Cole J Cook
- Department of Medical Physics, University of Wisconsin-Madison, United States of America
- Veena A Nair
- Department of Radiology, University of Wisconsin-Madison, United States of America
- Jedidiah Mathis
- Department of Radiology, Froedtert & Medical College of Wisconsin, United States of America
- Linda Allen
- Department of Neurology, Medical College of Wisconsin, United States of America
- Dace N Almane
- Department of Neurology, University of Wisconsin-Madison, United States of America
- Karina Arkush
- Neuroscience Innovation Institute, Aurora St. Luke's Medical Center, United States of America
- Rasmus Birn
- Neuroscience Training Program, University of Wisconsin-Madison, United States of America; Department of Medical Physics, University of Wisconsin-Madison, United States of America; Department of Psychiatry, University of Wisconsin-Madison, United States of America
- Lisa L Conant
- Department of Neurology, Medical College of Wisconsin, United States of America
- Edgar A DeYoe
- Department of Radiology, Froedtert & Medical College of Wisconsin, United States of America; Department of Biophysics, Medical College of Wisconsin, United States of America
- Elizabeth Felton
- Department of Neurology, University of Wisconsin-Madison, United States of America
- Rama Maganti
- Department of Neurology, University of Wisconsin-Madison, United States of America
- Andrew Nencka
- Department of Radiology, Froedtert & Medical College of Wisconsin, United States of America
- Manoj Raghavan
- Department of Neurology, Medical College of Wisconsin, United States of America
- Umang Shah
- Neuroscience Innovation Institute, Aurora St. Luke's Medical Center, United States of America
- Veronica N Sosa
- Neuroscience Innovation Institute, Aurora St. Luke's Medical Center, United States of America
- Candida Ustine
- Department of Neurology, Medical College of Wisconsin, United States of America
- Vivek Prabhakaran
- Neuroscience Training Program, University of Wisconsin-Madison, United States of America; Department of Neurology, University of Wisconsin-Madison, United States of America; Department of Radiology, University of Wisconsin-Madison, United States of America
- Jeffrey R Binder
- Department of Neurology, Medical College of Wisconsin, United States of America; Department of Biophysics, Medical College of Wisconsin, United States of America
- Mary E Meyerand
- Neuroscience Training Program, University of Wisconsin-Madison, United States of America; Department of Medical Physics, University of Wisconsin-Madison, United States of America; Department of Radiology, University of Wisconsin-Madison, United States of America
41
Ramchandran K, Tranel D, Duster K, Denburg NL. The Role of Emotional vs. Cognitive Intelligence in Economic Decision-Making Amongst Older Adults. Front Neurosci 2020; 14:497. [PMID: 32547361] [PMCID: PMC7274021] [DOI: 10.3389/fnins.2020.00497] [Citation(s) in RCA: 8]
Abstract
The links between emotions, bio-regulatory processes, and economic decision-making are well established in the context of age-related changes in fluid, real-time decision competency. The objective of the research reported here is to assess the relative contributions, interactions, and impacts of affective and cognitive intelligence on economic, value-based decision-making amongst older adults. Additionally, we explored this decision-making competency in the context of the neurobiology of aging by examining the neuroanatomical correlates of intelligence and decision-making in an aging cohort. Thirty-nine healthy, community-dwelling older adults were administered the Iowa Gambling Task (IGT), an ecologically valid laboratory measure of complex, economic decision-making, along with standardized, performance-based measures of cognitive and emotional intelligence (EI). A smaller subset of this group underwent structural brain scans, from which the thicknesses of the frontal, parietal, temporal, occipital, and cingulate cortices, and their sub-sections, were computed. Fluid ("online" processing) aspects of Perceptual Reasoning cognitive intelligence predicted superior choices on the IGT. However, older adults with higher overall EI and higher Experiential EI area/sub-scores learned faster to make better choices on the IGT, even after controlling for cognitive intelligence and its area scores. Thickness of the left rostral anterior cingulate (associated with fluid affective processing) mediated the relationship between age and Experiential EI. Thickness of the right transverse temporal gyrus moderated the rate of learning on the IGT. In conclusion, our data suggest that fluid processing, which involves "online," bottom-up cognitive processing, predicts value-based decision-making amongst older adults, while crystallized intelligence, which relies on "offline," previously acquired knowledge, does not. Only emotional intelligence, especially its fluid "online" aspects of affective processing, predicted the rate of learning in situations of complex choice, especially when there is a paucity of cues or information available to guide decision-making. Age-related effects on these cognitive, affective, and decision mechanisms may have neuroanatomical correlates, especially in regions that form a subset of the human mirror-neuron and mentalizing systems. While superior decision-making may be stereotypically associated with "smarter" people (i.e., higher cognitive intelligence), our data indicate that emotional intelligence has a significant role to play in the economic decisions of older adults.
Affiliation(s)
- Kanchna Ramchandran
- Department of Internal Medicine, Carver College of Medicine, Iowa City, IA, United States
- Daniel Tranel
- Department of Neurology, Carver College of Medicine, Iowa City, IA, United States
- Keagan Duster
- Department of Internal Medicine, Carver College of Medicine, Iowa City, IA, United States
- Natalie L. Denburg
- Department of Neurology, Carver College of Medicine, Iowa City, IA, United States
42
LaCroix AN, Blumenstein N, Tully M, Baxter LC, Rogalsky C. Effects of prosody on the cognitive and neural resources supporting sentence comprehension: A behavioral and lesion-symptom mapping study. Brain Lang 2020; 203:104756. [PMID: 32032865] [PMCID: PMC7064294] [DOI: 10.1016/j.bandl.2020.104756] [Citation(s) in RCA: 5]
Abstract
Non-canonical sentence comprehension impairments are well-documented in aphasia. Studies of neurotypical controls indicate that prosody can aid comprehension by facilitating attention towards critical pitch inflections and phrase boundaries. However, no studies have examined how prosody may engage specific cognitive and neural resources during non-canonical sentence comprehension in persons with left hemisphere damage. Experiment 1 examines the relationship between comprehension of non-canonical sentences spoken with typical and atypical prosody and several cognitive measures in 25 persons with chronic left hemisphere stroke and 20 matched controls. Experiment 2 explores the neural resources critical for non-canonical sentence comprehension with each prosody type using region-of-interest-based multiple regressions. Lower orienting attention abilities and greater inferior frontal and parietal damage predicted lower comprehension, but only for sentences with typical prosody. Our results suggest that typical sentence prosody may engage attention resources to support non-canonical sentence comprehension, and this relationship may be disrupted following left hemisphere stroke.
Affiliation(s)
- Arianna N LaCroix
- College of Health Solutions, Arizona State University, Tempe, AZ, USA; College of Health Sciences, Midwestern University, Glendale, AZ, USA
- McKayla Tully
- College of Health Solutions, Arizona State University, Tempe, AZ, USA
- Corianne Rogalsky
- College of Health Solutions, Arizona State University, Tempe, AZ, USA
43
Chien PJ, Friederici AD, Hartwigsen G, Sammler D. Neural correlates of intonation and lexical tone in tonal and non-tonal language speakers. Hum Brain Mapp 2020; 41:1842-1858. [PMID: 31957928] [PMCID: PMC7268089] [DOI: 10.1002/hbm.24916] [Citation(s) in RCA: 15]
Abstract
Intonation, the modulation of pitch in speech, is a crucial aspect of language that is processed in right‐hemispheric regions, beyond the classical left‐hemispheric language system. Whether or not this notion generalises across languages remains, however, unclear. Particularly, tonal languages are an interesting test case because of the dual linguistic function of pitch that conveys lexical meaning in form of tone, in addition to intonation. To date, only few studies have explored how intonation is processed in tonal languages, how this compares to tone and between tonal and non‐tonal language speakers. The present fMRI study addressed these questions by testing Mandarin and German speakers with Mandarin material. Both groups categorised mono‐syllabic Mandarin words in terms of intonation, tone, and voice gender. Systematic comparisons of brain activity of the two groups between the three tasks showed large cross‐linguistic commonalities in the neural processing of intonation in left fronto‐parietal, right frontal, and bilateral cingulo‐opercular regions. These areas are associated with general phonological, specific prosodic, and controlled categorical decision‐making processes, respectively. Tone processing overlapped with intonation processing in left fronto‐parietal areas, in both groups, but evoked additional activity in bilateral temporo‐parietal semantic regions and subcortical areas in Mandarin speakers only. Together, these findings confirm cross‐linguistic commonalities in the neural implementation of intonation processing but dissociations for semantic processing of tone only in tonal language speakers.
Affiliation(s)
- Pei-Ju Chien
- International Max Planck Research School NeuroCom, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Lise Meitner Research Group "Cognition and Plasticity", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Gesa Hartwigsen
- Lise Meitner Research Group "Cognition and Plasticity", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Daniela Sammler
- Otto Hahn Group "Neural Bases of Intonation in Speech and Music", Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
44
Gao Z, Guo X, Liu C, Mo Y, Wang J. Right inferior frontal gyrus: An integrative hub in tonal bilinguals. Hum Brain Mapp 2020; 41:2152-2159. [PMID: 31957933] [PMCID: PMC7268011] [DOI: 10.1002/hbm.24936] [Citation(s) in RCA: 21]
Abstract
Right hemispheric dominance in tonal bilingualism is still controversial. In this study, we investigated hemispheric dominance in 30 simultaneous Bai-Mandarin tonal bilinguals and 28 Mandarin monolinguals using multimodal neuroimaging. Resting-state functional connectivity (RSFC) analysis was first performed to reveal changes in functional connections within the language-related network. Voxel-based morphometry (VBM) and tract-based spatial statistics (TBSS) analyses were then used to identify bilinguals' alterations in gray matter volume (GMV) and fractional anisotropy (FA) of white matter, respectively. RSFC analyses revealed significantly increased functional connections of the pars orbitalis of the right inferior frontal gyrus (IFG) with the right caudate, the pars opercularis of the right IFG, and the left inferior temporal gyrus in Bai-Mandarin bilinguals compared to monolinguals. VBM and TBSS analyses further identified significantly greater GMV in the pars triangularis of the right IFG and increased FA in the right superior longitudinal fasciculus (SLF) in bilinguals than in monolinguals. Taken together, these results demonstrate the integrative role of the right IFG in tonal language processing of bilinguals. Our findings suggest that the intrinsic language network in simultaneous tonal bilinguals differs from that of monolinguals in terms of both function and structure.
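Resting-state functional connectivity of the kind analysed above is conventionally quantified as the Pearson correlation between two regional BOLD time series, Fisher z-transformed before group-level statistics. The sketch below illustrates only that standard convention, not the authors' actual pipeline; the ROI names and synthetic signals are hypothetical.

```python
import numpy as np

def functional_connectivity(ts_a: np.ndarray, ts_b: np.ndarray) -> float:
    """Pearson correlation between two regional time series,
    Fisher z-transformed (arctanh) for use in group statistics."""
    r = np.corrcoef(ts_a, ts_b)[0, 1]
    return float(np.arctanh(r))

# Toy signals: two "regions" sharing a common fluctuation plus noise
# (hypothetical stand-ins for, e.g., right IFG and right caudate).
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
roi_ifg = shared + 0.5 * rng.standard_normal(200)
roi_caudate = shared + 0.5 * rng.standard_normal(200)

z = functional_connectivity(roi_ifg, roi_caudate)  # positive z: coupled regions
```

Group comparisons such as bilinguals versus monolinguals are then typically run on these z values edge by edge across the network.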
Affiliation(s)
- Zhao Gao
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; School of Foreign Languages, University of Electronic Science and Technology of China, Chengdu, China
- Xin Guo
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Cirong Liu
- Kunming Institute of Zoology, Chinese Academy of Sciences, Kunming, China
- Yin Mo
- Department of Radiology, Kunming Medical University First Affiliated Hospital, Kunming, China
- Jiaojian Wang
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, 518057, China
45
What you say versus how you say it: Comparing sentence comprehension and emotional prosody processing using fMRI. Neuroimage 2019; 209:116509. [PMID: 31899288] [DOI: 10.1016/j.neuroimage.2019.116509] [Citation(s) in RCA: 22]
Abstract
While language processing is often described as lateralized to the left hemisphere (LH), the processing of emotion carried by vocal intonation is typically attributed to the right hemisphere (RH) and more specifically, to areas mirroring the LH language areas. However, the evidence base for this hypothesis is inconsistent, with some studies supporting right-lateralization but others favoring bilateral involvement in emotional prosody processing. Here we compared fMRI activations for an emotional prosody task with those for a sentence comprehension task in 20 neurologically healthy adults, quantifying lateralization using a lateralization index. We observed right-lateralized frontotemporal activations for emotional prosody that roughly mirrored the left-lateralized activations for sentence comprehension. In addition, emotional prosody also evoked bilateral activation in pars orbitalis (BA47), amygdala, and anterior insula. These findings are consistent with the idea that analysis of the auditory speech signal is split between the hemispheres, possibly according to their preferred temporal resolution, with the left preferentially encoding phonetic and the right encoding prosodic information. Once processed, emotional prosody information is fed to domain-general emotion processing areas and integrated with semantic information, resulting in additional bilateral activations.
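The lateralization index mentioned above is conventionally computed as (L − R)/(L + R) over an activation measure (e.g., suprathreshold voxel counts) from homologous left- and right-hemisphere regions. The sketch below shows only this standard formula; the study's exact thresholding and ROI definitions are not given here, so the input values are hypothetical.

```python
def lateralization_index(left_activation: float, right_activation: float) -> float:
    """Conventional LI = (L - R) / (L + R).

    Positive values indicate left-lateralization, negative values
    right-lateralization; |LI| > 0.2 is a commonly used cutoff.
    """
    total = left_activation + right_activation
    if total == 0:
        raise ValueError("no suprathreshold activation in either hemisphere")
    return (left_activation - right_activation) / total

# Hypothetical suprathreshold voxel counts for a sentence-comprehension contrast:
li = lateralization_index(left_activation=1200, right_activation=400)
# li = (1200 - 400) / 1600 = 0.5 -> left-lateralized
```

On this convention, a sentence task yielding mostly left-hemisphere voxels produces LI near +1, while a mirrored emotional-prosody pattern produces a negative LI.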
46
Papitto G, Friederici AD, Zaccarella E. The topographical organization of motor processing: An ALE meta-analysis on six action domains and the relevance of Broca's region. Neuroimage 2019; 206:116321. [PMID: 31678500 DOI: 10.1016/j.neuroimage.2019.116321] [Citation(s) in RCA: 59] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2019] [Revised: 09/24/2019] [Accepted: 10/28/2019] [Indexed: 12/24/2022] Open
Abstract
Action is a cover term used to refer to a large set of motor processes differing in domain specificities (e.g. execution or observation). Here we review neuroimaging evidence on action processing (N = 416; Subjects = 5912) using quantitative Activation Likelihood Estimation (ALE) and Meta-Analytic Connectivity Modeling (MACM) approaches to delineate the functional specificities of six domains: (1) Action Execution, (2) Action Imitation, (3) Motor Imagery, (4) Action Observation, (5) Motor Learning, (6) Motor Preparation. Our results show distinct functional patterns for the different domains with convergence in posterior BA44 (pBA44) for execution, imitation and imagery processing. The functional connectivity network seeding in the motor-based localized cluster of pBA44 differs from the connectivity network seeding in the (language-related) anterior BA44. The two networks implement distinct cognitive functions. We propose that the motor-related network encompassing pBA44 is recruited when processing movements requiring a mental representation of the action itself.
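Activation Likelihood Estimation combines peak coordinates across experiments by modeling each experiment's foci as Gaussian probability distributions and taking a per-voxel union across experiments. The following is a 1-D toy sketch of that union step only; real ALE operates on 3-D brain volumes with sample-size-dependent kernel widths and permutation-based thresholding.

```python
import numpy as np

def ale_map(foci_per_experiment, grid, sigma=1.0):
    """Toy 1-D sketch of the ALE union step.

    Each experiment contributes a 'modeled activation' (MA) map, the
    voxel-wise maximum over Gaussians centered on its foci; MA maps are
    then combined as ALE = 1 - prod(1 - MA_i), so convergence across
    experiments raises the score where their foci overlap.
    """
    one_minus = np.ones_like(grid, dtype=float)
    for foci in foci_per_experiment:
        ma = np.zeros_like(grid, dtype=float)
        for f in foci:
            ma = np.maximum(ma, np.exp(-((grid - f) ** 2) / (2 * sigma ** 2)))
        one_minus *= 1.0 - ma
    return 1.0 - one_minus
```

Two experiments with nearby foci push the ALE score between them above what either contributes alone, which is the convergence the meta-analysis tests for.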
Affiliation(s)
- Giorgio Papitto
- Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Stephanstraße 1a, 04103, Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication: Function, Structure, and Plasticity, Stephanstraße 1a, 04103, Leipzig, Germany.
- Angela D Friederici
- Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Stephanstraße 1a, 04103, Leipzig, Germany
- Emiliano Zaccarella
- Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Stephanstraße 1a, 04103, Leipzig, Germany
47
Age-related differences in neural activation and functional connectivity during the processing of vocal prosody in adolescence. Cogn Affect Behav Neurosci 2019; 19:1418-1432. [PMID: 31515750 DOI: 10.3758/s13415-019-00742-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
The ability to recognize others' emotions from vocal emotional prosody follows a protracted developmental trajectory during adolescence, yet little is known about the neural mechanisms supporting this maturation. The current study investigated age-related differences in neural activation during a vocal emotion recognition (ER) task. Listeners aged 8 to 19 years completed the vocal ER task while undergoing functional magnetic resonance imaging. Categorizing vocal emotional prosody elicited activation primarily in temporal and frontal areas. Age was associated with (a) greater activation in regions of the superior, middle, and inferior frontal gyri; (b) greater functional connectivity between the left precentral and inferior frontal gyri and regions in the bilateral insula and temporo-parietal junction; and (c) greater fractional anisotropy in the superior longitudinal fasciculus, which connects frontal areas to posterior temporo-parietal regions. Many of these age-related differences in brain activation and connectivity were associated with better performance on the ER task. Increased activation in, and connectivity between, areas typically involved in language processing and social cognition may facilitate the development of vocal ER skills in adolescence.
48
Intonation guides sentence processing in the left inferior frontal gyrus. Cortex 2019; 117:122-134. [DOI: 10.1016/j.cortex.2019.02.011] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2018] [Revised: 12/22/2018] [Accepted: 02/11/2019] [Indexed: 11/18/2022]
49
Kellmeyer P, Vry MS, Ball T. A transcallosal fibre system between homotopic inferior frontal regions supports complex linguistic processing. Eur J Neurosci 2019; 50:3544-3556. [PMID: 31209927 PMCID: PMC6899774 DOI: 10.1111/ejn.14486] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2019] [Revised: 05/20/2019] [Accepted: 05/30/2019] [Indexed: 12/31/2022]
Abstract
Inferior frontal regions in the left and right hemisphere support different aspects of language processing. In the canonical model, left inferior frontal regions mostly support processing based on phonological, syntactic and semantic features of language, whereas right inferior frontal regions process paralinguistic aspects such as affective prosody. Using diffusion tensor imaging (DTI)-based probabilistic fibre tracking in 20 healthy volunteers, we identify a callosal fibre system connecting left and right inferior frontal regions that are involved in linguistic processing of varying complexity. Anatomically, we show that the interhemispheric fibres are highly aligned and distributed along a rostral-to-caudal gradient in the body and genu of the corpus callosum to connect homotopic inferior frontal regions. In light of converging evidence from previous DTI-based tracking studies and clinical case studies, our findings suggest that the right inferior frontal cortex not only processes paralinguistic aspects of language (such as affective prosody), as purported by the canonical model, but also supports the computation of linguistic aspects of varying complexity. Our model may explain patterns of right-hemispheric contribution to stroke recovery as well as disorders of prosodic processing. Beyond language-related brain function, we discuss how inter-species differences in interhemispheric connectivity and fibre density, including those in the system described here, may also explain differences in transcallosal information transfer and cognitive abilities across mammalian species.
Affiliation(s)
- Philipp Kellmeyer
- Neuromedical Artificial Intelligence Lab, Department of Neurosurgery, Medical Center-University of Freiburg, Freiburg im Breisgau, Germany; Cluster of Excellence BrainLinks-BrainTools, University of Freiburg, Freiburg im Breisgau, Germany
- Magnus-Sebastian Vry
- Department of Psychiatry and Psychotherapy, Faculty of Medicine, Medical Center-University of Freiburg, Freiburg im Breisgau, Germany
- Tonio Ball
- Neuromedical Artificial Intelligence Lab, Department of Neurosurgery, Medical Center-University of Freiburg, Freiburg im Breisgau, Germany; Cluster of Excellence BrainLinks-BrainTools, University of Freiburg, Freiburg im Breisgau, Germany
50
Shekhar S, Maria A, Kotilahti K, Huotilainen M, Heiskala J, Tuulari JJ, Hirvi P, Karlsson L, Karlsson H, Nissilä I. Hemodynamic responses to emotional speech in two-month-old infants imaged using diffuse optical tomography. Sci Rep 2019; 9:4745. [PMID: 30894569 PMCID: PMC6426868 DOI: 10.1038/s41598-019-39993-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2018] [Accepted: 02/04/2019] [Indexed: 12/14/2022] Open
Abstract
Emotional speech is one of the principal forms of social communication in humans. In this study, we investigated neural processing of emotional speech (happy, angry, sad and neutral) in the left hemisphere of 21 two-month-old infants using diffuse optical tomography. Reconstructed total hemoglobin (HbT) images were analysed using adaptive voxel-based clustering and region-of-interest (ROI) analysis. We found a distributed happy > neutral response within the temporo-parietal cortex, peaking in the anterior temporal cortex; a negative HbT response to emotional speech (the average of the emotional speech conditions < baseline) in the temporo-parietal cortex; neutral > angry in the anterior superior temporal sulcus (STS); happy > angry in the superior temporal gyrus and posterior STS; angry < baseline in the insula, STS and superior temporal gyrus; and happy < baseline in the anterior insula. These results suggest that the left STS is more sensitive to happy than to angry speech, indicating that it might play an important role in processing positive emotions in two-month-old infants. Furthermore, happy speech (relative to neutral) seems to elicit more activation in the temporo-parietal cortex, suggesting enhanced sensitivity of the temporo-parietal cortex to positive emotional stimuli at this stage of infant development.
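Condition contrasts of the kind reported here (e.g. happy > neutral) reduce, at the ROI level, to a paired comparison of per-subject responses between two conditions. The sketch below is a generic paired contrast for illustration only; the study's actual pipeline used adaptive voxel-based clustering on reconstructed HbT images, which this does not reproduce.

```python
import numpy as np

def roi_contrast(cond_a, cond_b):
    """Paired contrast of per-subject ROI means for two conditions.

    Returns the mean difference (cond_a - cond_b) and the paired t
    statistic, t = mean(diff) / (sd(diff) / sqrt(n)).
    """
    a = np.asarray(cond_a, dtype=float)
    b = np.asarray(cond_b, dtype=float)
    diff = a - b                      # per-subject condition difference
    n = diff.size
    t = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))
    return diff.mean(), t

# Hypothetical per-subject ROI means for two conditions (illustrative values)
mean_diff, t_stat = roi_contrast([1.2, 1.1, 1.3, 1.4], [1.0, 0.9, 1.1, 1.0])
```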
Affiliation(s)
- Shashank Shekhar
- University of Turku, Institute of Clinical Medicine, Turku Brain and Mind Center, FinnBrain Birth Cohort Study, Turku, Finland; University of Mississippi Medical Center, Department of Neurology, Jackson, MS, USA
- Ambika Maria
- University of Turku, Institute of Clinical Medicine, Turku Brain and Mind Center, FinnBrain Birth Cohort Study, Turku, Finland
- Kalle Kotilahti
- Department of Neuroscience and Biomedical Engineering, Aalto University, Helsinki, Finland
- Minna Huotilainen
- University of Turku, Institute of Clinical Medicine, Turku Brain and Mind Center, FinnBrain Birth Cohort Study, Turku, Finland; CICERO Learning, Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland; Faculty of Educational Sciences, University of Helsinki, Helsinki, Finland
- Juha Heiskala
- Department of Clinical Neurophysiology, Helsinki University Central Hospital, Turku, Finland
- Jetro J Tuulari
- University of Turku, Institute of Clinical Medicine, Turku Brain and Mind Center, FinnBrain Birth Cohort Study, Turku, Finland
- Pauliina Hirvi
- Department of Neuroscience and Biomedical Engineering, Aalto University, Helsinki, Finland
- Linnea Karlsson
- University of Turku, Institute of Clinical Medicine, Turku Brain and Mind Center, FinnBrain Birth Cohort Study, Turku, Finland; University of Turku and Turku University Hospital, Department of Child Psychiatry, Turku, Finland
- Hasse Karlsson
- University of Turku, Institute of Clinical Medicine, Turku Brain and Mind Center, FinnBrain Birth Cohort Study, Turku, Finland; University of Turku and Turku University Hospital, Department of Psychiatry, Turku, Finland
- Ilkka Nissilä
- Department of Neuroscience and Biomedical Engineering, Aalto University, Helsinki, Finland.