1. Contreras-Ruston F, Duggirala SX, Wingbermühle J, Navarra J, Kotz SA. Sensory Feedback in Parkinson Disease Voice Production: A Systematic Review. J Voice 2025:S0892-1997(25)00088-8. PMID: 40113519. DOI: 10.1016/j.jvoice.2025.02.039.
Abstract
BACKGROUND Understanding voice and speech impairments in Parkinson's disease (PD) is essential for developing effective interventions and ensuring efficient social communication. OBJECTIVE This review reports findings on voice perception and production in PD, with a specific focus on sensory feedback (auditory and somatosensory) of the self-voice, neural correlates of the voice, and voice quality parameters such as pitch, loudness, and emotion modulation. METHODS A combined bibliometric analysis and systematic review were conducted to identify key trends and knowledge gaps in the neuroimaging (functional magnetic resonance imaging (fMRI)/EEG) literature on PD self-voice processing. RESULTS EEG studies focusing on pitch revealed significant differences in the P200 event-related potential, but not in the N100, between healthy controls and individuals with PD. fMRI studies showed reduced activation in the motor cortex and basal ganglia during speech production in PD, accompanied by increased activation in other brain regions, such as the auditory cortex, which was associated with pitch variability and loudness control. A decrease in right dorsal premotor cortex activation was linked to impaired voice control, particularly loudness modulation. Additionally, the review identified a lack of research on emotion modulation of the voice, despite its critical role in social communication. Altered sensory feedback plays a significant role in compensatory cortical responses during vocalization, underscoring the importance of sensory feedback in maintaining normal voice production in PD. CONCLUSIONS This review identified a lack of research on voice loudness perception and on the potential impact of emotion perception deficits on voice modulation in persons with PD.
Affiliation(s)
- Francisco Contreras-Ruston
- Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain; Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, 6229 ER Maastricht, The Netherlands; Speech-Language Pathology and Audiology Department, Universidad de Valparaíso, San Felipe, Chile
- Suvarnalata Xanthate Duggirala
- Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, 6229 ER Maastricht, The Netherlands
- Judith Wingbermühle
- Institute of Medical Psychology and Medical Sociology, University Hospital of the RWTH Aachen University, Aachen, Germany
- Jordi Navarra
- Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain
- Sonja A Kotz
- Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, 6229 ER Maastricht, The Netherlands
2. Berk E, Üzümcüoğlu R, İnceoğlu F, Aydın M, Arpacı MF, Sığırcı A, Pekmez H. Correlation of Neuroanatomical Structures Related to Speech in Cerebral Palsy Patients Aged 0-17: A Retrospective MRI Study. Children (Basel) 2025;12:249. PMID: 40003351. PMCID: PMC11853842. DOI: 10.3390/children12020249.
Abstract
Background/Objectives: Cerebral palsy (CP) is a non-progressive clinical condition characterized by secondary issues, including speech impairments. Our study aims to evaluate the volumes of speech-related brain areas in patients aged 0-17 diagnosed with CP. Methods: This study includes the images of 84 children: 42 patients with CP and 42 controls identified from patient records who presented to the hospital during the study period and whose MRI scans were reported as healthy. Results: In the CP group, the volumes of white and gray matter, cerebrum, cerebellum, thalamus, lobus frontalis, lobus temporalis, lobus parietalis, lobus insularis, gyrus cinguli, and nuclei basales were significantly lower than in the control group (p < 0.001). Conclusions: We found a significant decrease in the volumes of speech-related brain areas in CP patients, indicating that CP can substantially affect the brain's speech-related regions.
Affiliation(s)
- Erhan Berk
- Department of Pediatrics, Faculty of Medicine, Malatya Turgut Özal University, 44210 Malatya, Türkiye
- Rümeysa Üzümcüoğlu
- Department of Anatomy, Institute of Graduate Science, Malatya Turgut Özal University, 44210 Malatya, Türkiye
- Feyza İnceoğlu
- Department of Biostatistics, Faculty of Medicine, Malatya Turgut Özal University, 44210 Malatya, Türkiye
- Merve Aydın
- Department of Anatomy, Faculty of Medicine, Malatya Turgut Özal University, 44210 Malatya, Türkiye
- Muhammed Furkan Arpacı
- Department of Anatomy, Faculty of Medicine, Malatya Turgut Özal University, 44210 Malatya, Türkiye
- Ahmet Sığırcı
- Department of Radiology, Turgut Özal Medical Center, İnönü University, 44000 Malatya, Türkiye
- Hıdır Pekmez
- Department of Anatomy, Faculty of Medicine, Malatya Turgut Özal University, 44210 Malatya, Türkiye
3. Villar-Rodríguez E, Marin-Marin L, Baena-Pérez M, Cano-Melle C, Parcet MA, Ávila C. Musicianship and Prominence of Interhemispheric Connectivity Determine Two Different Pathways to Atypical Language Dominance. J Neurosci 2024;44:e2430232024. PMID: 39160067. PMCID: PMC11391498. DOI: 10.1523/jneurosci.2430-23.2024.
Abstract
During infancy and adolescence, language develops from predominantly interhemispheric control, through the corpus callosum (CC), to predominantly intrahemispheric control, mainly subserved by the left arcuate fasciculus (AF). Using multimodal neuroimaging, we demonstrate that human left-handers (both male and female) with atypical language lateralization show rightward participation of language areas, from the auditory cortex to the inferior frontal cortex, when contrasting speech with tone perception, as well as enhanced interhemispheric anatomical and functional connectivity. Crucially, musicianship determines two different structural pathways to this outcome. Nonmusicians present a relation between atypical lateralization and intrahemispheric underdevelopment across the anterior AF, hinting at a dysregulation of the ontogenetic shift from an interhemispheric to an intrahemispheric brain. Musicians reveal an alternative pathway related to interhemispheric overdevelopment across the posterior CC and the auditory cortex. We discuss the heterogeneity of routes to atypical language lateralization and the relevance of early musical training in altering the normal development of language cognitive functions.
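Language dominance of the kind discussed above is commonly quantified with a laterality index computed from hemispheric activation counts. The article does not give its formula, so the following is a minimal sketch of the standard (L - R)/(L + R) index; the function name and the example counts are illustrative assumptions:

```python
def laterality_index(left_count, right_count):
    """Standard laterality index: (L - R) / (L + R).
    +1 means fully left-lateralized, -1 fully right-lateralized."""
    total = left_count + right_count
    if total == 0:
        raise ValueError("no activated voxels in either hemisphere")
    return (left_count - right_count) / total

# Rightward participation (more right- than left-hemisphere voxels)
# yields a negative index:
li = laterality_index(120, 380)  # (120 - 380) / 500 = -0.52
```

In language fMRI studies, an absolute index below some threshold (often around 0.2) is typically labeled bilateral or atypical, though the cutoff varies between studies.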
Affiliation(s)
- Esteban Villar-Rodríguez
- Neuropsychology and Functional Neuroimaging, Universitat Jaume I, Castellón de la Plana 12071, Spain
- Lidón Marin-Marin
- Department of Psychology, University of York, York YO10 5DD, United Kingdom
- York Neuroimaging Centre, York YO10 5NY, United Kingdom
- María Baena-Pérez
- Neuropsychology and Functional Neuroimaging, Universitat Jaume I, Castellón de la Plana 12071, Spain
- Cristina Cano-Melle
- Neuropsychology and Functional Neuroimaging, Universitat Jaume I, Castellón de la Plana 12071, Spain
- Maria Antònia Parcet
- Neuropsychology and Functional Neuroimaging, Universitat Jaume I, Castellón de la Plana 12071, Spain
- César Ávila
- Neuropsychology and Functional Neuroimaging, Universitat Jaume I, Castellón de la Plana 12071, Spain
4. Hakonen M, Dahmani L, Lankinen K, Ren J, Barbaro J, Blazejewska A, Cui W, Kotlarz P, Li M, Polimeni JR, Turpin T, Uluç I, Wang D, Liu H, Ahveninen J. Individual connectivity-based parcellations reflect functional properties of human auditory cortex. bioRxiv [Preprint]. 2024:2024.01.20.576475. PMID: 38293021. PMCID: PMC10827228. DOI: 10.1101/2024.01.20.576475.
Abstract
Neuroimaging studies of the functional organization of human auditory cortex have focused on group-level analyses to identify tendencies that represent the typical brain. Here, we mapped auditory areas of the human superior temporal cortex (STC) in 30 participants by combining functional network analysis and 1-mm isotropic resolution 7T functional magnetic resonance imaging (fMRI). Two resting-state fMRI sessions and one or two auditory and audiovisual speech localizer sessions were collected on 3-4 separate days. We generated a set of functional network-based parcellations from these data. Solutions with 4, 6, and 11 networks were selected for closer examination based on local maxima of Dice and Silhouette values. The resulting parcellation of auditory cortices showed high intraindividual reproducibility, both between resting-state sessions (Dice coefficient: 69-78%) and between resting-state and task sessions (Dice coefficient: 62-73%), demonstrating that auditory areas in STC can be reliably segmented into functional subareas. Interindividual variability was significantly larger than intraindividual variability (Dice coefficient: 57-68%, p < 0.001), indicating that the parcellations also captured meaningful interindividual differences. The individual-specific parcellations yielded the highest alignment with task response topographies, suggesting that individual variability in parcellations reflects individual variability in auditory function. Connectional homogeneity within networks was also highest for the individual-specific parcellations. Furthermore, the similarity of the functional parcellations was not explainable by the similarity of macroanatomical properties of auditory cortex. Our findings suggest that individual-level parcellations capture meaningful idiosyncrasies in auditory cortex organization.
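The Dice coefficients cited above quantify the spatial overlap of a parcel between two sessions or two individuals. As an illustration (not the authors' code), a minimal sketch of the Dice overlap between two binary network masks, assuming NumPy arrays of matching shape:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks: treat as perfect overlap
    return 2.0 * np.logical_and(a, b).sum() / denom

# Example: one network labeled in two sessions of the same participant
session1 = np.array([1, 1, 1, 0, 0, 0])
session2 = np.array([1, 1, 0, 0, 0, 1])
overlap = dice_coefficient(session1, session2)  # 2*2 / (3+3) ≈ 0.67
```

High within-participant Dice with lower between-participant Dice, as reported above, is the signature of a reproducible but individual-specific parcellation.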
Affiliation(s)
- M Hakonen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- L Dahmani
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- K Lankinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- J Ren
- Division of Brain Sciences, Changping Laboratory, Beijing, China
- J Barbaro
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- A Blazejewska
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- W Cui
- Division of Brain Sciences, Changping Laboratory, Beijing, China
- P Kotlarz
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- M Li
- Division of Brain Sciences, Changping Laboratory, Beijing, China
- J R Polimeni
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Harvard-MIT Program in Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- T Turpin
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- I Uluç
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- D Wang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- H Liu
- Division of Brain Sciences, Changping Laboratory, Beijing, China
- Biomedical Pioneering Innovation Center (BIOPIC), Peking University, Beijing, China
- J Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
5. Liu J, Stohl J, Overath T. Hidden hearing loss: Fifteen years at a glance. Hear Res 2024;443:108967. PMID: 38335624. DOI: 10.1016/j.heares.2024.108967.
Abstract
Hearing loss affects approximately 18% of the population worldwide. Hearing difficulties in noisy environments without accompanying audiometric threshold shifts likely affect an even larger percentage of the global population. One potential cause of hidden hearing loss is cochlear synaptopathy: the loss of synapses between inner hair cells (IHCs) and auditory nerve fibers (ANFs). These synapses are the structures in the cochlea most vulnerable to noise exposure and aging. Their loss causes auditory deafferentation, i.e., a loss of afferent auditory input, which in turn reduces the information relayed to higher-order auditory processing stages. Understanding the physiological and perceptual effects of this early auditory deafferentation might inform interventions to prevent later, more severe hearing loss. In the past decade, a large body of work has been devoted to better understanding hidden hearing loss, including its causes, their corresponding impact on the auditory pathway, and the use of auditory physiological measures for clinical diagnosis of auditory deafferentation. This review synthesizes findings from studies in humans and animals to answer some of the key questions in the field, and it points to gaps in knowledge that warrant further investigation. Specifically, recent studies suggest that some electrophysiological measures have the potential to serve as indicators of hidden hearing loss in humans, but more research is needed before these measures can be included in a clinical test battery.
Affiliation(s)
- Jiayue Liu
- Department of Psychology and Neuroscience, Duke University, Durham, USA
- Joshua Stohl
- North American Research Laboratory, MED-EL Corporation, Durham, USA
- Tobias Overath
- Department of Psychology and Neuroscience, Duke University, Durham, USA
6. Bogetz JF, Natarajan N, Hauer J, Ramirez JM. Appreciating the Abilities of Children With Severe Neurologic Impairment. Hosp Pediatr 2023;13:e392-e394. PMID: 37946661. PMCID: PMC10656431. DOI: 10.1542/hpeds.2023-007463.
Affiliation(s)
- Jori F. Bogetz
- Divisions of Bioethics and Palliative Care, Department of Pediatrics
- Treuman Katz Center for Pediatric Bioethics and Palliative Care, Center for Clinical and Translational Research
- Niranjana Natarajan
- Pediatric Neurology, Department of Neurology, Department of Pediatrics, University of Washington School of Medicine, Seattle, Washington
- Julie Hauer
- Division of General Pediatrics, Department of Pediatrics, Harvard Medical School, Boston, Massachusetts
- Jan-Marino Ramirez
- Center for Integrative Brain Research, Seattle Children’s Hospital and Research Institute, Seattle, Washington
7. Jia G, Bai S, Lin Y, Wang X, Zhu L, Lyu C, Sun G, An K, Roe AW, Li X, Gao L. Representation of conspecific vocalizations in amygdala of awake marmosets. Natl Sci Rev 2023;10:nwad194. PMID: 37818111. PMCID: PMC10561708. DOI: 10.1093/nsr/nwad194.
Abstract
Human speech and animal vocalizations are important for social communication and animal survival. Neurons in the auditory pathway respond to a range of sounds, from elementary sound features to complex acoustic signals. In some species, responses to distinct vocalization patterns are highly specific to individual conspecific calls, reflecting both the specificity of sound patterns and the biological information embedded in them. We conducted single-unit recordings in the amygdala of awake marmosets and presented calls used in marmoset communication, calls of other species, and calls from specific marmoset individuals. We found that some neurons (47/262) in the amygdala distinguished 'Phee' calls from vocalizations of other animals and from other types of marmoset vocalizations. Interestingly, a subset of Phee-responsive neurons (22/47) was also selective for one of the three Phees from two different 'caller' marmosets. Our findings suggest that, while the amygdala has traditionally been considered the key structure in the limbic system, it also represents a critical stage of socially relevant auditory perceptual processing.
Affiliation(s)
- Guoqiang Jia
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Siyi Bai
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Yingxu Lin
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Xiaohui Wang
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Lin Zhu
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Chenfei Lyu
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Guanglong Sun
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- Kang An
- College of Information, Mechanical and Electrical Engineering, Shanghai Normal University, Shanghai 201418, China
- Anna Wang Roe
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- MOE Frontier Science Center for Brain Science and Brain-Machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310058, China
- Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
- Xinjian Li
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- MOE Frontier Science Center for Brain Science and Brain-Machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310058, China
- Key Laboratory of Medical Neurobiology of Zhejiang Province, Zhejiang University School of Medicine, Hangzhou 310020, China
- Lixia Gao
- Department of Neurology of the Second Affiliated Hospital and Interdisciplinary Institute of Neuroscience and Technology, Zhejiang University School of Medicine, Hangzhou 310029, China
- MOE Frontier Science Center for Brain Science and Brain-Machine Integration, School of Brain Science and Brain Medicine, Zhejiang University, Hangzhou 310058, China
- Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
8. You S, Lv T, Qin R, Hu Z, Ke Z, Yao W, Zhao H, Bai F. Neuro-Navigated rTMS Improves Sleep and Cognitive Impairment via Regulating Sleep-Related Networks' Spontaneous Activity in AD Spectrum Patients. Clin Interv Aging 2023;18:1333-1349. PMID: 37601952. PMCID: PMC10439779. DOI: 10.2147/cia.s416992.
Abstract
Study Objectives: By examining changes over time in the spontaneous activity of sleep-related networks in patients on the Alzheimer's disease (AD) spectrum with or without insomnia disorder (ID), we investigated the effect and mechanism of neuro-navigated repetitive transcranial magnetic stimulation (rTMS) targeting the left angular gyrus in improving the comorbid symptoms of the AD spectrum with ID. Methods: A total of 34 AD spectrum patients were recruited, 18 with ID and 16 without. Cognitive function and sleep were measured in all patients using the cognitive and sleep subscales of the neuropsychiatric inventory. Changes in the amplitude of low-frequency fluctuation (ALFF) within sleep-related networks were assessed before and after neuro-navigated rTMS treatment in both groups, and their behavioral significance was further explored. Results: At baseline, the AD spectrum with ID group showed hypo-spontaneous activity in affective auditory processing and sensory-motor collaborative sleep-related networks, with substantial increases in activity at follow-up. Longitudinally, hyper-spontaneous activity also emerged at follow-up in affective auditory processing, sensory-motor, and default mode collaborative sleep-related networks in this group. In particular, longitudinal changes in sleep-related networks were associated with improvements in sleep quality and episodic memory scores in AD spectrum with ID patients. Conclusion: We speculate that left angular gyrus-navigated rTMS therapy may enhance memory function in AD spectrum patients by regulating the spontaneous activity of sleep-related networks, consistent with memory consolidation in the hippocampus-cortical circuit during sleep. Clinical Trial Registration: The study was registered at the Chinese Clinical Trial Registry, registration ID: ChiCTR2100050496, China.
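The amplitude of low-frequency fluctuation used in the study above summarizes the strength of slow resting-state BOLD fluctuations, conventionally in the 0.01-0.08 Hz band. As an illustration only (not the study's pipeline), a minimal single-voxel sketch, assuming a 1-D time series and a known repetition time:

```python
import numpy as np

def alff(timeseries, tr, low=0.01, high=0.08):
    """Amplitude of low-frequency fluctuation: mean FFT amplitude
    of the demeaned signal within the [low, high] Hz band."""
    ts = np.asarray(timeseries, dtype=float)
    ts = ts - ts.mean()
    freqs = np.fft.rfftfreq(ts.size, d=tr)    # frequency bins in Hz
    amps = np.abs(np.fft.rfft(ts)) / ts.size  # normalized amplitudes
    band = (freqs >= low) & (freqs <= high)
    return amps[band].mean()

# A slow (0.05 Hz) oscillation falls inside the band; a faster
# (0.2 Hz) one falls outside it and yields a much lower ALFF.
t = np.arange(200) * 2.0                      # 200 volumes, TR = 2 s
slow = np.sin(2 * np.pi * 0.05 * t)
fast = np.sin(2 * np.pi * 0.20 * t)
```

In practice this is computed voxelwise and usually standardized (e.g., divided by the whole-brain mean) before group comparison.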
Affiliation(s)
- Shengqi You
- Department of Neurology, Nanjing Drum Tower Hospital Clinical College of Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, Nanjing, 210008, People’s Republic of China
- Tingyu Lv
- Department of Neurology, Nanjing Drum Tower Hospital Clinical College of Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, Nanjing, 210008, People’s Republic of China
- Ruomeng Qin
- Department of Neurology, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, 210008, People’s Republic of China
- Zheqi Hu
- Department of Neurology, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, 210008, People’s Republic of China
- Zhihong Ke
- Department of Neurology, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, 210008, People’s Republic of China
- Weina Yao
- Department of Neurology, Nanjing Drum Tower Hospital Clinical College of Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, Nanjing, 210008, People’s Republic of China
- Hui Zhao
- Department of Neurology, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, 210008, People’s Republic of China
- Feng Bai
- Department of Neurology, Nanjing Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, 210008, People’s Republic of China
- Geriatric Medicine Center, Taikang Xianlin Drum Tower Hospital, Affiliated Hospital of Medical School, Nanjing University, Nanjing, 210008, People’s Republic of China
9. Wilson KM, Arquilla AM, Saltzman W. The parental umwelt: Effects of parenthood on sensory processing in rodents. J Neuroendocrinol 2023;35:e13237. PMID: 36792373. DOI: 10.1111/jne.13237.
Abstract
An animal's umwelt, its inherently subjective perception of the sensory environment, can change across the lifespan in accordance with major life events. In mammals, the onset of motherhood in particular is associated with neural and sensory plasticity that alters a mother's detection and use of sensory information such as infant-related stimuli. Although the literature on mammalian mothers is well established, very few studies have addressed the effects of parenthood on sensory plasticity in mammalian fathers. In this review, we summarize the major findings on the effects of parenthood on behavioural and neural responses to sensory stimuli from pups in rodent mothers, focusing on the olfactory, auditory, and somatosensory systems as well as multisensory integration. We also review the available literature on sensory plasticity in rodent fathers. Finally, we discuss the importance of sensory plasticity for effective parental care, hormonal modulation of plasticity, and temporal, ecological, and life-history considerations of sensory plasticity associated with parenthood. The changes in processing and/or perception of sensory stimuli associated with the onset of parental care may have both transient and long-lasting effects on parental behaviour and cognition in mothers and fathers. As such, several promising areas, including the molecular/genetic, neurochemical, and experiential underpinnings of parenthood-related sensory plasticity, as well as the determinants of interspecific variation, remain avenues for further exploration.
Affiliation(s)
- Kerianne M Wilson
- Department of Evolution, Ecology, and Organismal Biology, University of California, Riverside, CA, USA
- Department of Biology, Pomona College, Claremont, CA, USA
- April M Arquilla
- Department of Evolution, Ecology, and Organismal Biology, University of California, Riverside, CA, USA
- Wendy Saltzman
- Department of Evolution, Ecology, and Organismal Biology, University of California, Riverside, CA, USA
- Neuroscience Graduate Program, University of California, Riverside, CA, USA
10. Jafari A, Dureux A, Zanini A, Menon RS, Gilbert KM, Everling S. A vocalization-processing network in marmosets. Cell Rep 2023;42:112526. PMID: 37195863. DOI: 10.1016/j.celrep.2023.112526.
Abstract
Vocalizations play an important role in the daily life of primates and likely form the basis of human language. Functional imaging studies have demonstrated that listening to voices activates a fronto-temporal voice perception network in human participants. Here, we acquired whole-brain ultrahigh-field (9.4 T) fMRI in awake marmosets (Callithrix jacchus) and demonstrate that these small, highly vocal New World primates possess a similar fronto-temporal network, including subcortical regions, that is activated by the presentation of conspecific vocalizations. The findings suggest that the human voice perception network has evolved from an ancestral vocalization-processing network that predates the separation of New and Old World primates.
Affiliation(s)
- Azadeh Jafari
- Centre for Functional and Metabolic Mapping, Robarts Research Institute, University of Western Ontario, London, ON, Canada
- Audrey Dureux
- Centre for Functional and Metabolic Mapping, Robarts Research Institute, University of Western Ontario, London, ON, Canada
- Alessandro Zanini
- Centre for Functional and Metabolic Mapping, Robarts Research Institute, University of Western Ontario, London, ON, Canada
- Ravi S Menon
- Centre for Functional and Metabolic Mapping, Robarts Research Institute, University of Western Ontario, London, ON, Canada
- Kyle M Gilbert
- Centre for Functional and Metabolic Mapping, Robarts Research Institute, University of Western Ontario, London, ON, Canada
- Stefan Everling
- Centre for Functional and Metabolic Mapping, Robarts Research Institute, University of Western Ontario, London, ON, Canada
- Department of Physiology and Pharmacology, University of Western Ontario, London, ON, Canada
11. Karimi-Boroujeni M, Dajani HR, Giguère C. Perception of Prosody in Hearing-Impaired Individuals and Users of Hearing Assistive Devices: An Overview of Recent Advances. J Speech Lang Hear Res 2023;66:775-789. PMID: 36652704. DOI: 10.1044/2022_jslhr-22-00125.
Abstract
PURPOSE Prosody perception is an essential component of speech communication and social interaction through which both linguistic and emotional information are conveyed. Given the importance of the auditory system in processing prosody-related acoustic features, the aim of this article is to review the effects of hearing impairment on prosody perception in children and adults and to assess the performance of hearing assistive devices in restoring prosody perception. METHOD Following a comprehensive online database search, two lines of inquiry were targeted. The first summarizes recent attempts toward determining the effects of hearing loss and interacting factors such as age and cognitive resources on prosody perception. The second analyzes studies reporting beneficial or detrimental impacts of hearing aids, cochlear implants, and bimodal stimulation on prosodic abilities in people with hearing loss. RESULTS The reviewed studies indicate that hearing-impaired individuals vary widely in perceiving affective and linguistic prosody, depending on factors such as hearing loss severity, chronological age, and cognitive status. In addition, most of the emerging information points to limitations of hearing assistive devices in processing and transmitting the acoustic features of prosody. CONCLUSIONS The existing literature is incomplete in several respects, including the lack of a consensus on how and to what extent hearing prostheses affect prosody perception, especially the linguistic function of prosody, and a gap in assessing prosody under challenging listening situations such as noise. This review article proposes directions that future research could follow to provide a better understanding of prosody processing in those with hearing impairment, which may help health care professionals and designers of assistive technology to develop innovative diagnostic and rehabilitation tools. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21809772.
Affiliation(s)
- Hilmi R Dajani
- School of Electrical Engineering and Computer Science, University of Ottawa, Ontario, Canada
- Christian Giguère
- School of Rehabilitation Sciences, University of Ottawa, Ontario, Canada
12
Schelinski S, von Kriegstein K. Responses in left inferior frontal gyrus are altered for speech-in-noise processing, but not for clear speech in autism. Brain Behav 2023; 13:e2848. [PMID: 36575611 PMCID: PMC9927852 DOI: 10.1002/brb3.2848] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/18/2022] [Revised: 11/10/2022] [Accepted: 11/28/2022] [Indexed: 12/29/2022] Open
Abstract
INTRODUCTION Autistic individuals often have difficulties with recognizing what another person is saying in noisy conditions such as in a crowded classroom or a restaurant. The underlying neural mechanisms of this speech perception difficulty are unclear. In typically developed individuals, three cerebral cortex regions are particularly related to speech-in-noise perception: the left inferior frontal gyrus (IFG), the right insula, and the left inferior parietal lobule (IPL). Here, we tested whether responses in these cerebral cortex regions are altered in speech-in-noise perception in autism. METHODS Seventeen autistic adults and 17 typically developed controls (matched pairwise on age, sex, and IQ) performed an auditory-only speech recognition task during functional magnetic resonance imaging (fMRI). Speech was presented either with noise (noise condition) or without noise (no noise condition, i.e., clear speech). RESULTS In the left IFG, blood-oxygenation-level-dependent (BOLD) responses were higher in the control compared to the autism group for recognizing speech-in-noise compared to clear speech. For this contrast, both groups had similar response magnitudes in the right insula and left IPL. Additionally, we replicated previous findings that BOLD responses in speech-related and auditory brain regions (including bilateral superior temporal sulcus and Heschl's gyrus) for clear speech were similar in both groups and that voice identity recognition was impaired for clear and noisy speech in autism. DISCUSSION Our findings show that in autism, the processing of speech is particularly reduced under noisy conditions in the left IFG-a dysfunction that might be important in explaining restricted speech comprehension in noisy environments.
Affiliation(s)
- Stefanie Schelinski
- Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany
- Max Planck Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany
- Max Planck Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
13
Krasewicz J, Yu WM. Eph and ephrin signaling in the development of the central auditory system. Dev Dyn 2023; 252:10-26. [PMID: 35705527 PMCID: PMC9751234 DOI: 10.1002/dvdy.506] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Revised: 06/10/2022] [Accepted: 06/12/2022] [Indexed: 01/17/2023] Open
Abstract
Acoustic communication relies crucially on accurate interpretation of information about the intensity, frequency, timing, and location of diverse sound stimuli in the environment. To meet this demand, neurons along different levels of the auditory system form precisely organized neural circuits. The assembly of these precise circuits requires tight regulation and coordination of multiple developmental processes. Several groups of axon guidance molecules have proven critical in controlling these processes. Among them, the family of Eph receptors and their ephrin ligands emerge as one group of key players. They mediate diverse functions at multiple levels of the auditory pathway, including axon guidance and targeting, topographic map formation, as well as cell migration and tissue pattern formation. Here, we review our current knowledge of how Eph and ephrin molecules regulate different processes in the development and maturation of central auditory circuits.
Affiliation(s)
- Wei-Ming Yu
- Correspondence: Wei-Ming Yu, Department of Biology, Loyola University of Chicago, 1032 W Sheridan Rd, LSB 226, Chicago, IL 60660, Tel: +1-773-508-3325, Fax: +1-773-508-3646
14
Ai M, Loui P, Morris TP, Chaddock-Heyman L, Hillman CH, McAuley E, Kramer AF. Musical Experience Relates to Insula-Based Functional Connectivity in Older Adults. Brain Sci 2022; 12:1577. [PMID: 36421901 PMCID: PMC9688373 DOI: 10.3390/brainsci12111577] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Revised: 11/11/2022] [Accepted: 11/14/2022] [Indexed: 11/22/2022] Open
Abstract
Engaging in musical activities throughout the lifespan may protect against age-related cognitive decline and modify structural and functional connectivity in the brain. Prior research suggests that musical experience modulates brain regions that integrate different modalities of sensory information, such as the insula. Most of this research has been performed in individuals classified as professional musicians; however, general musical experiences across the lifespan may also confer beneficial effects on brain health in older adults. The current study investigated whether general musical experience, characterized using the Goldsmith Music Sophistication Index (Gold-MSI), was associated with functional connectivity in older adults (age = 65.7 ± 4.4, n = 69). We tested whether Gold-MSI was associated with individual differences in the functional connectivity of three a priori hypothesis-defined seed regions in the insula (i.e., dorsal anterior, ventral anterior, and posterior insula). We found that older adults with more musical experience showed greater functional connectivity between the dorsal anterior insula and the precentral and postcentral gyrus, and between the ventral anterior insula and diverse brain regions, including the insula and prefrontal cortex, and decreased functional connectivity between the ventral anterior insula and thalamus (voxel p < 0.01, cluster FWE p < 0.05). Follow-up correlation analyses showed that the singing ability subscale score was key in driving the association between functional connectivity differences and musical experience. Overall, our findings suggest that musical experience, even among non-professional musicians, is related to functional brain reorganization in older adults.
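The seed-based analysis summarized above reduces, at its core, to correlating one seed region's BOLD time series with every other voxel's time series. The following is a minimal NumPy sketch of that core step only; the array names, shapes, and toy data are assumptions for illustration, not the study's actual pipeline (which also involved preprocessing and cluster-level inference):

```python
import numpy as np

def seed_connectivity(seed_ts, voxel_ts):
    """Pearson correlation between a seed time series of shape (T,)
    and each column of a (T, V) voxel time-series array -> (V,) map."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    vox = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
    # mean of products of z-scores equals the Pearson correlation
    return seed @ vox / len(seed)

# toy data: one voxel tracks the seed closely, one is pure noise
rng = np.random.default_rng(0)
seed = rng.standard_normal(200)
vox = np.column_stack([seed + 0.1 * rng.standard_normal(200),
                       rng.standard_normal(200)])
r = seed_connectivity(seed, vox)  # high for voxel 0, near zero for voxel 1
```

In a real analysis the resulting per-voxel correlation map would then be thresholded and correlated with the behavioral measure (here, the Gold-MSI score) across participants.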
Affiliation(s)
- Meishan Ai
- Department of Psychology, Northeastern University, Boston, MA 02115, USA
- Psyche Loui
- Department of Psychology, Northeastern University, Boston, MA 02115, USA
- Department of Music, Northeastern University, Boston, MA 02115, USA
- Timothy P. Morris
- Department of Physical Therapy, Movement & Rehabilitation Sciences, Northeastern University, Boston, MA 02115, USA
- Laura Chaddock-Heyman
- Department of Psychology, Northeastern University, Boston, MA 02115, USA
- Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Charles H. Hillman
- Department of Psychology, Northeastern University, Boston, MA 02115, USA
- Department of Physical Therapy, Movement & Rehabilitation Sciences, Northeastern University, Boston, MA 02115, USA
- Edward McAuley
- Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Department of Kinesiology and Community Health, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Arthur F. Kramer
- Department of Psychology, Northeastern University, Boston, MA 02115, USA
- Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
15
Schelinski S, Tabas A, von Kriegstein K. Altered processing of communication signals in the subcortical auditory sensory pathway in autism. Hum Brain Mapp 2022; 43:1955-1972. [PMID: 35037743 PMCID: PMC8933247 DOI: 10.1002/hbm.25766] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 11/24/2021] [Accepted: 12/19/2021] [Indexed: 12/17/2022] Open
Abstract
Autism spectrum disorder (ASD) is characterised by social communication difficulties. These difficulties have been mainly explained by cognitive, motivational, and emotional alterations in ASD. The communication difficulties could, however, also be associated with altered sensory processing of communication signals. Here, we assessed the functional integrity of auditory sensory pathway nuclei in ASD in three independent functional magnetic resonance imaging experiments. We focused on two aspects of auditory communication that are impaired in ASD: voice identity perception, and recognising speech-in-noise. We found reduced processing in adults with ASD as compared to typically developed control groups (pairwise matched on sex, age, and full-scale IQ) in the central midbrain structure of the auditory pathway (inferior colliculus [IC]). The right IC responded less in the ASD as compared to the control group for voice identity, in contrast to speech recognition. The right IC also responded less in the ASD as compared to the control group when passively listening to vocal in contrast to non-vocal sounds. Within the control group, the left and right IC responded more when recognising speech-in-noise as compared to when recognising speech without additional noise. In the ASD group, this was only the case in the left, but not the right IC. The results show that communication signal processing in ASD is associated with reduced subcortical sensory functioning in the midbrain. The results highlight the importance of considering sensory processing alterations in explaining communication difficulties, which are at the core of ASD.
Affiliation(s)
- Stefanie Schelinski
- Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Alejandro Tabas
- Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
16
Impaired Subcortical Processing of Amplitude-Modulated Tones in Mice Deficient for Cacna2d3, a Risk Gene for Autism Spectrum Disorders in Humans. eNeuro 2022; 9:ENEURO.0118-22.2022. [PMID: 35410870 PMCID: PMC9034753 DOI: 10.1523/eneuro.0118-22.2022] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2022] [Accepted: 03/21/2022] [Indexed: 12/18/2022] Open
Abstract
Temporal processing of complex sounds is a fundamental and complex task in hearing and a prerequisite for processing and understanding vocalization, speech, and prosody. Here, we studied response properties of neurons in the inferior colliculus (IC) in mice lacking Cacna2d3, a risk gene for autism spectrum disorders (ASDs). The α2δ3 auxiliary Ca2+ channel subunit encoded by Cacna2d3 is essential for proper function of glutamatergic synapses in the auditory brainstem. Recent evidence has shown that much of auditory feature extraction is performed in the auditory brainstem and IC, including processing of amplitude modulation (AM). We determined both spectral and temporal properties of single- and multi-unit responses in the IC of anesthetized mice. IC units of α2δ3−/− mice showed normal tuning properties yet increased spontaneous rates compared with α2δ3+/+ mice. When stimulated with AM tones, α2δ3−/− units exhibited less precise temporal coding and reduced evoked rates to higher modulation frequencies (fm). Whereas first spike latencies (FSLs) were increased for only a few modulation frequencies, population peak latencies were increased for fm ranging from 20 to 100 Hz in α2δ3−/− IC units. The loss of precision of temporal coding with increasing fm from 70 to 160 Hz was characterized using a normalized offset-corrected (Pearson-like) correlation coefficient, which appeared more appropriate than the vector strength metric. The processing deficits of AM sounds analyzed at the level of the IC indicate that α2δ3−/− mice exhibit a subcortical auditory processing disorder (APD). Similar deficits may be present in other mouse models for ASDs.
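The vector strength against which the abstract's Pearson-like coefficient is compared is the standard Goldberg-Brown measure of phase locking: each spike is mapped to its phase within the modulation cycle, and the resulting unit vectors are averaged. A minimal sketch of that standard metric (illustrative only, not the study's analysis code):

```python
import numpy as np

def vector_strength(spike_times, fm):
    """Goldberg-Brown vector strength of phase locking to an
    amplitude-modulation frequency fm (Hz): 1 = perfect locking
    to one phase of the cycle, near 0 = no phase preference."""
    phases = 2 * np.pi * fm * np.asarray(spike_times)  # phase of each spike
    return np.abs(np.mean(np.exp(1j * phases)))        # length of mean vector

# spikes locked to one phase of a 40 Hz modulation (one spike per cycle)
locked = np.arange(20) / 40.0
vs_locked = vector_strength(locked, 40.0)   # close to 1
```

One known weakness, which likely motivated the abstract's alternative metric, is that vector strength conflates a drop in phase locking with changes in firing rate and spike-count offsets across repetitions.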
17
Moyne M, Legendre G, Arnal L, Kumar S, Sterpenich V, Seeck M, Grandjean D, Schwartz S, Vuilleumier P, Domínguez-Borràs J. Brain reactivity to emotion persists in NREM sleep and is associated with individual dream recall. Cereb Cortex Commun 2022; 3:tgac003. [PMID: 35174329 PMCID: PMC8844542 DOI: 10.1093/texcom/tgac003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 01/07/2022] [Accepted: 01/07/2022] [Indexed: 12/02/2022] Open
Abstract
The waking brain efficiently detects emotional signals to promote survival. However, emotion detection during sleep is poorly understood and may be influenced by individual sleep characteristics or neural reactivity. Notably, dream recall frequency has been associated with stimulus reactivity during sleep, with enhanced stimulus-driven responses in high vs. low recallers. Using electroencephalography (EEG), we characterized the neural responses of healthy individuals to emotional and neutral voices, as well as to control stimuli, both during wakefulness and NREM sleep. Then, we tested how these responses varied with individual dream recall frequency. Event-related potentials (ERPs) differed for emotional vs. neutral voices, both in wakefulness and NREM. Likewise, EEG arousals (sleep perturbations) increased selectively after the emotional voices, indicating emotion reactivity. Interestingly, sleep ERP amplitude and arousals after emotional voices increased linearly with participants' dream recall frequency. Similar correlations with dream recall were observed for beta and sigma responses, but not for theta. In contrast, dream recall correlations were absent for neutral or control stimuli. Our results reveal that brain reactivity to affective salience is preserved during NREM and is selectively associated with individual memory for dreams. Our findings also suggest that emotion-specific reactivity during sleep, and not generalized alertness, may contribute to the encoding/retrieval of dreams.
Affiliation(s)
- Maëva Moyne
- Campus Biotech, chemin des mines 9, CH-1202 Geneva, Switzerland
- Department of Neuroscience, University of Geneva, Rue Michel Servet 1, CH-1211 Geneva, Switzerland
- Guillaume Legendre
- Campus Biotech, chemin des mines 9, CH-1202 Geneva, Switzerland
- Department of Neuroscience, University of Geneva, Rue Michel Servet 1, CH-1211 Geneva, Switzerland
- Luc Arnal
- Campus Biotech, chemin des mines 9, CH-1202 Geneva, Switzerland
- Department of Neuroscience, University of Geneva, Rue Michel Servet 1, CH-1211 Geneva, Switzerland
- Samika Kumar
- Department of Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK
- Virginie Sterpenich
- Campus Biotech, chemin des mines 9, CH-1202 Geneva, Switzerland
- Department of Neuroscience, University of Geneva, Rue Michel Servet 1, CH-1211 Geneva, Switzerland
- Margitta Seeck
- Department of Clinical Neuroscience, Geneva University Hospitals, rue Gabrielle-Perret-Gentil 4, CH-1211 Geneva, Switzerland
- Department of Clinical Neuroscience, University of Geneva, rue Gabrielle-Perret-Gentil 4, CH-1211 Geneva, Switzerland
- Didier Grandjean
- Campus Biotech, chemin des mines 9, CH-1202 Geneva, Switzerland
- Department of Psychology, University of Geneva, Uni Mail, bd du Pont-d’Arve 40, CH-1211 Geneva, Switzerland
- Sophie Schwartz
- Campus Biotech, chemin des mines 9, CH-1202 Geneva, Switzerland
- Department of Neuroscience, University of Geneva, Rue Michel Servet 1, CH-1211 Geneva, Switzerland
- Center for Affective Sciences, CISA, chemin des mines 9, CH-1202 Geneva, Switzerland
- Patrik Vuilleumier
- Campus Biotech, chemin des mines 9, CH-1202 Geneva, Switzerland
- Department of Neuroscience, University of Geneva, Rue Michel Servet 1, CH-1211 Geneva, Switzerland
- Center for Affective Sciences, CISA, chemin des mines 9, CH-1202 Geneva, Switzerland
- Judith Domínguez-Borràs
- Campus Biotech, chemin des mines 9, CH-1202 Geneva, Switzerland
- Department of Clinical Neuroscience, University of Geneva, rue Gabrielle-Perret-Gentil 4, CH-1211 Geneva, Switzerland
- Center for Affective Sciences, CISA, chemin des mines 9, CH-1202 Geneva, Switzerland
18
Domínguez-Borràs J, Vuilleumier P. Amygdala function in emotion, cognition, and behavior. HANDBOOK OF CLINICAL NEUROLOGY 2022; 187:359-380. [PMID: 35964983 DOI: 10.1016/b978-0-12-823493-8.00015-8] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The amygdala is a core structure in the anterior medial temporal lobe, with an important role in several brain functions involving memory, emotion, perception, social cognition, and even awareness. As a key brain structure for saliency detection, it triggers and controls widespread modulatory signals onto multiple areas of the brain, with a great impact on numerous aspects of adaptive behavior. Here we discuss the neural mechanisms underlying these functions, as established by animal and human research, including insights provided in both healthy and pathological conditions.
Affiliation(s)
- Judith Domínguez-Borràs
- Department of Clinical Psychology and Psychobiology & Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- Patrik Vuilleumier
- Department of Neuroscience and Center for Affective Sciences, University of Geneva, Geneva, Switzerland
19
Gábor A, Andics A, Miklósi Á, Czeibert K, Carreiro C, Gácsi M. Social relationship-dependent neural response to speech in dogs. Neuroimage 2021; 243:118480. [PMID: 34411741 DOI: 10.1016/j.neuroimage.2021.118480] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Revised: 07/13/2021] [Accepted: 08/15/2021] [Indexed: 11/16/2022] Open
Abstract
In humans, social relationship with the speaker affects the neural processing of speech, as exemplified by children's auditory and reward responses to their mother's utterances. Family dogs show human-analogue attachment behavior towards their owners, and neuroimaging has revealed auditory cortex and reward center sensitivity to verbal praise in dog brains. Combining behavioral and non-invasive fMRI data, we investigated the effect of dogs' social relationship with the speaker on speech processing. Dogs listened to praising and neutral speech from their owners and from a control person. We found a positive correlation between dogs' behaviorally measured attachment scores towards their owners and neural activity increases for the owner's voice in the caudate nucleus, as well as activity increases in the secondary auditory caudal ectosylvian gyrus and the caudate nucleus for the owner's praise. By identifying social relationship-dependent neural reward responses, our study reveals similarities in the neural mechanisms modulated by infant-mother and dog-owner attachment.
Affiliation(s)
- Anna Gábor
- MTA-ELTE 'Lendület' Neuroethology of Communication Research Group, Hungarian Academy of Sciences - Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; Department of Ethology, Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary
- Attila Andics
- MTA-ELTE 'Lendület' Neuroethology of Communication Research Group, Hungarian Academy of Sciences - Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; Department of Ethology, Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary
- Ádám Miklósi
- Department of Ethology, Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; MTA-ELTE Comparative Ethology Research Group, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary
- Kálmán Czeibert
- Department of Ethology, Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary
- Cecília Carreiro
- Department of Ethology, Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary
- Márta Gácsi
- Department of Ethology, Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; MTA-ELTE Comparative Ethology Research Group, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary
20
Peripheral Anomalies in USH2A Cause Central Auditory Anomalies in a Mouse Model of Usher Syndrome and CAPD. Genes (Basel) 2021; 12:genes12020151. [PMID: 33498833 PMCID: PMC7910880 DOI: 10.3390/genes12020151] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2020] [Revised: 01/13/2021] [Accepted: 01/21/2021] [Indexed: 11/16/2022] Open
Abstract
Central auditory processing disorder (CAPD) is associated with difficulties hearing and processing acoustic information, as well as subsequent impacts on the development of higher-order cognitive processes (i.e., attention and language). Yet CAPD also lacks clear and consistent diagnostic criteria, with widespread clinical disagreement on this matter. As such, identification of biological markers for CAPD would be useful. A recent genome association study identified a potential CAPD risk gene, USH2A. In a homozygous state, this gene is associated with Usher syndrome type 2 (USH2), a recessive disorder resulting in bilateral, high-frequency hearing loss due to atypical cochlear hair cell development. However, children with heterozygous USH2A mutations have also been found to show unexpected low-frequency hearing loss and reduced early vocabulary, contradicting assumptions that the heterozygous (carrier) state is "phenotype free". Parallel evidence has confirmed that heterozygous Ush2a mutations in a transgenic mouse model also cause low-frequency hearing loss (Perrino et al., 2020). Importantly, these auditory processing anomalies were still evident after covarying for hearing loss, suggesting a CAPD profile. Since usherin anomalies occur in the peripheral cochlea and not central auditory structures, these findings point to upstream developmental feedback effects of peripheral sensory loss on high-level processing characteristic of CAPD. In this study, we aimed to expand upon the mouse behavioral battery used in Perrino et al. (2020) by evaluating central auditory brain structures, including the superior olivary complex (SOC) and medial geniculate nucleus (MGN), in heterozygous and homozygous Ush2a mice. We found that heterozygous Ush2a mice had significantly larger SOC volumes, while homozygous Ush2a mice had significantly smaller SOC volumes. Heterozygous mutations did not affect the MGN; however, homozygous Ush2a mutations resulted in a significant shift toward smaller neurons. These findings suggest that alterations in cochlear development due to USH2A variation can secondarily impact the development of brain regions important for auditory processing ability.
21
Swanborough H, Staib M, Frühholz S. Neurocognitive dynamics of near-threshold voice signal detection and affective voice evaluation. Sci Adv 2020; 6(50):eabb3884. [PMID: 33310844 PMCID: PMC7732184 DOI: 10.1126/sciadv.abb3884] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/20/2020] [Accepted: 10/29/2020] [Indexed: 05/10/2023]
Abstract
Communication and voice signal detection in noisy environments are universal tasks for many species. The fundamental problem of detecting voice signals in noise (VIN) is underinvestigated especially in its temporal dynamic properties. We investigated VIN as a dynamic signal-to-noise ratio (SNR) problem to determine the neurocognitive dynamics of subthreshold evidence accrual and near-threshold voice signal detection. Experiment 1 showed that dynamic VIN, including a varying SNR and subthreshold sensory evidence accrual, is superior to similar conditions with nondynamic SNRs or with acoustically matched sounds. Furthermore, voice signals with affective meaning have a detection advantage during VIN. Experiment 2 demonstrated that VIN is driven by an effective neural integration in an auditory cortical-limbic network at and beyond the near-threshold detection point, which is preceded by activity in subcortical auditory nuclei. This demonstrates the superior recognition advantage of communication signals in dynamic noise contexts, especially when carrying socio-affective meaning.
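The varying SNR at the heart of the paradigm above is, in its simplest form, the voice-to-noise power ratio expressed in decibels, and stimuli at a target SNR are built by rescaling the noise relative to the voice. A minimal sketch of that generic computation (an assumed textbook formulation for illustration, not the authors' stimulus-generation code):

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from two waveforms of equal length."""
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

def mix_at_snr(signal, noise, target_db):
    """Scale the noise so the mixture signal + gain*noise has the target SNR.

    Scaling noise power by gain**2 shifts the SNR by -20*log10(gain) dB,
    so solving for gain gives the factor below."""
    gain = 10.0 ** ((snr_db(signal, noise) - target_db) / 20.0)
    return signal + gain * noise
```

A dynamic-SNR trial, as in the study's design, would then correspond to ramping `target_db` over time rather than holding it fixed for the whole stimulus.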
Affiliation(s)
- Huw Swanborough
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Matthias Staib
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Department of Psychology, University of Oslo, Oslo, Norway
22
Multilevel fMRI adaptation for spoken word processing in the awake dog brain. Sci Rep 2020; 10:11968. [PMID: 32747731 PMCID: PMC7398925 DOI: 10.1038/s41598-020-68821-6] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2020] [Accepted: 06/30/2020] [Indexed: 01/08/2023] Open
Abstract
Human brains process lexical meaning separately from emotional prosody of speech at higher levels of the processing hierarchy. Recently we demonstrated that dog brains can also dissociate lexical and emotional prosodic information in human spoken words. To better understand the neural dynamics of lexical processing in the dog brain, here we used an event-related design, optimized for fMRI adaptation analyses on multiple time scales. We investigated repetition effects in dogs’ neural (BOLD) responses to lexically marked (praise) words and to lexically unmarked (neutral) words, in praising and neutral prosody. We identified temporally and anatomically distinct adaptation patterns. In a subcortical auditory region, we found both short- and long-term fMRI adaptation for emotional prosody, but not for lexical markedness. In multiple cortical auditory regions, we found long-term fMRI adaptation for lexically marked compared to unmarked words. This lexical adaptation showed right-hemisphere bias and was age-modulated in a near-primary auditory region and was independent of prosody in a secondary auditory region. Word representations in dogs’ auditory cortex thus contain more than just the emotional prosody they are typically associated with. These findings demonstrate multilevel fMRI adaptation effects in the dog brain and are consistent with a hierarchical account of spoken word processing.
23
Logerot P, Smith PF, Wild M, Kubke MF. Auditory processing in the zebra finch midbrain: single unit responses and effect of rearing experience. PeerJ 2020; 8:e9363. [PMID: 32775046 PMCID: PMC7384439 DOI: 10.7717/peerj.9363] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2019] [Accepted: 05/26/2020] [Indexed: 11/26/2022] Open
Abstract
In birds the auditory system plays a key role in providing the sensory input used to discriminate between conspecific and heterospecific vocal signals. In those species that are known to learn their vocalizations, for example, songbirds, it is generally considered that this ability arises and is manifest in the forebrain, although there is no a priori reason why brainstem components of the auditory system could not also play an important part. To test this assumption, we used groups of normally reared and cross-fostered zebra finches that had previously been shown in behavioural experiments to reduce their preference for conspecific songs subsequent to cross-fostering experience with Bengalese finches, a related species with a distinctly different song. The question we asked, therefore, is whether this experiential change also changes the bias in favour of conspecific song displayed by auditory midbrain units of normally reared zebra finches. By recording the responses of single units in MLd to a variety of zebra finch and Bengalese finch songs in both normally reared and cross-fostered zebra finches, we provide a positive answer to this question. That is, the difference in response to conspecific and heterospecific songs seen in normally reared zebra finches is reduced following cross-fostering. In birds the virtual absence of mammalian-like cortical projections upon auditory brainstem nuclei argues against the interpretation that MLd units change, as observed in the present experiments, as a result of top-down influences on sensory processing. Instead, it appears that MLd units can be influenced significantly by sensory inputs arising directly from a change in auditory experience during development.
Affiliation(s)
- Priscilla Logerot
- Anatomy and Medical Imaging, University of Auckland, Auckland, New Zealand
- Paul F. Smith
- Dept. of Pharmacology and Toxicology, School of Biomedical Sciences, Brain Health Research Centre, Brain Research New Zealand, and Eisdell Moore Centre, University of Otago, Dunedin, New Zealand
- Martin Wild
- Anatomy and Medical Imaging and Eisdell Moore Centre, University of Auckland, Auckland, New Zealand
- M. Fabiana Kubke
- Anatomy and Medical Imaging, Centre for Brain Research and Eisdell Moore Centre, University of Auckland, Auckland, New Zealand
24. Gruber T, Debracque C, Ceravolo L, Igloi K, Marin Bosch B, Frühholz S, Grandjean D. Human Discrimination and Categorization of Emotions in Voices: A Functional Near-Infrared Spectroscopy (fNIRS) Study. Front Neurosci 2020; 14:570. [PMID: 32581695] [PMCID: PMC7290129] [DOI: 10.3389/fnins.2020.00570]
Abstract
Functional near-infrared spectroscopy (fNIRS) is a neuroimaging tool that has recently been used in a variety of cognitive paradigms. Yet, it remains unclear whether fNIRS is suitable for studying complex cognitive processes such as categorization or discrimination. Previously, functional imaging has suggested a role of both inferior frontal cortices in attentive decoding and cognitive evaluation of emotional cues in human vocalizations. Here, we extended paradigms used in functional magnetic resonance imaging (fMRI) to investigate the suitability of fNIRS for studying frontal lateralization of human emotion vocalization processing during explicit and implicit categorization and discrimination, using mini-blocks and event-related stimuli. Participants heard speech-like but semantically meaningless pseudowords spoken in various tones and evaluated them based on their emotional or linguistic content. Behaviorally, participants were faster to discriminate than to categorize, and processed the linguistic content of stimuli faster than the emotional content. Interactions between condition (emotion/word), task (discrimination/categorization), and emotion content (anger, fear, neutral) influenced accuracy and reaction time. At the brain level, we found a modulation of oxy-hemoglobin (Oxy-Hb) changes in the inferior frontal gyrus (IFG) depending on condition, task, emotion, and hemisphere (right or left), highlighting the involvement of the right hemisphere in processing fear stimuli and of both hemispheres in processing anger stimuli. Our results show that fNIRS is suitable for studying vocal emotion evaluation, fostering its application to complex cognitive paradigms.
Affiliation(s)
- Thibaud Gruber
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Cognitive Science Center, University of Neuchâtel, Neuchâtel, Switzerland
- Coralie Debracque
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Leonardo Ceravolo
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Kinga Igloi
- Department of Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland; Geneva Neuroscience Center, University of Geneva, Geneva, Switzerland
- Blanca Marin Bosch
- Department of Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland; Geneva Neuroscience Center, University of Geneva, Geneva, Switzerland
- Sascha Frühholz
- Department of Psychology, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zürich, Zurich, Switzerland; Center for Integrative Human Physiology, University of Zurich, Zurich, Switzerland
- Didier Grandjean
- Neuroscience of Emotion and Affective Dynamics Lab, Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
25. Dricu M, Frühholz S. A neurocognitive model of perceptual decision-making on emotional signals. Hum Brain Mapp 2020; 41:1532-1556. [PMID: 31868310] [PMCID: PMC7267943] [DOI: 10.1002/hbm.24893]
Abstract
Humans make various kinds of decisions about which emotions they perceive from others. Although it might seem like a split-second phenomenon, deliberating over which emotions we perceive unfolds across several stages of decisional processing. Neurocognitive models of general perception postulate that our brain first extracts sensory information about the world, then integrates these data into a percept, and lastly interprets it. The aim of the present study was to build an evidence-based neurocognitive model of perceptual decision-making on others' emotions. We conducted a series of meta-analyses of neuroimaging data spanning 30 years on the explicit evaluations of others' emotional expressions. We find that emotion perception is rather an umbrella term for various perception paradigms, each with distinct neural structures that underlie task-related cognitive demands. Furthermore, the left amygdala was responsive across all classes of decisional paradigms, regardless of task-related demands. Based on these observations, we propose a neurocognitive model that outlines the information flow in the brain needed for a successful evaluation of, and decisions on, other individuals' emotions.
HIGHLIGHTS:
- Emotion classification involves heterogeneous perception and decision-making tasks
- Decision-making processes on emotions are rarely covered by existing emotion theories
- We propose an evidence-based neurocognitive model of decision-making on emotions
- Bilateral brain processes for nonverbal decisions, left brain processes for verbal decisions
- Left amygdala involved in any kind of decision on emotions
Affiliation(s)
- Mihai Dricu
- Department of Psychology, University of Bern, Bern, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Zurich, Switzerland
- Center for Integrative Human Physiology (ZIHP), University of Zurich, Zurich, Switzerland
26
Abstract
The processing of emotional nonlinguistic information in speech is defined as emotional prosody. This auditory nonlinguistic information is essential in the decoding of social interactions and in our capacity to adapt and react adequately by taking into account contextual information. An integrated model is proposed at the functional and brain levels, encompassing 5 main systems that involve cortical and subcortical neural networks relevant for the processing of emotional prosody in its major dimensions, including perception and sound organization; related action tendencies; and associated values that integrate complex social contexts and ambiguous situations.
Affiliation(s)
- Didier Grandjean
- Department of Psychology and Educational Sciences and Swiss Center for Affective Sciences, University of Geneva, Switzerland
27. Koch SBJ, Galli A, Volman I, Kaldewaij R, Toni I, Roelofs K. Neural Control of Emotional Actions in Response to Affective Vocalizations. J Cogn Neurosci 2020; 32:977-988. [PMID: 31933433] [DOI: 10.1162/jocn_a_01523]
Abstract
Social-emotional cues, such as affective vocalizations and emotional faces, automatically elicit emotional action tendencies. Adaptive social-emotional behavior depends on the ability to control these automatic action tendencies. It remains unknown whether neural control over automatic action tendencies is supramodal or relies on parallel modality-specific neural circuits. Here, we address this largely unexplored issue in humans. We consider neural circuits supporting emotional action control in response to affective vocalizations, using an approach-avoidance task known to reliably index control over emotional action tendencies elicited by emotional faces. We isolate supramodal neural contributions to emotional action control through a conjunction analysis of control-related neural activity evoked by auditory and visual affective stimuli, the latter from a previously published data set obtained in an independent sample. We show that the anterior pFC (aPFC) supports control of automatic action tendencies in a supramodal manner, that is, triggered by either emotional faces or affective vocalizations. When affective vocalizations are heard and emotional control is required, the aPFC supports control through negative functional connectivity with the posterior insula. When emotional faces are seen and emotional control is required, control relies on the same aPFC territory downregulating the amygdala. The findings provide evidence for a novel mechanism of emotional action control with a hybrid hierarchical architecture, relying on a supramodal node (aPFC) implementing an abstract goal by modulating modality-specific nodes (posterior insula, amygdala) involved in signaling motivational significance of either affective vocalizations or faces.
Affiliation(s)
- Saskia B J Koch
- Donders Institute for Brain, Cognition and Behavior, Radboud University; Behavioral Science Institute, Radboud University
- Alessandra Galli
- Donders Institute for Brain, Cognition and Behavior, Radboud University
- Inge Volman
- Wellcome Centre for Integrative Neuroimaging, Oxford, UK
- Reinoud Kaldewaij
- Donders Institute for Brain, Cognition and Behavior, Radboud University; Behavioral Science Institute, Radboud University
- Ivan Toni
- Donders Institute for Brain, Cognition and Behavior, Radboud University
- Karin Roelofs
- Donders Institute for Brain, Cognition and Behavior, Radboud University; Behavioral Science Institute, Radboud University
28. Smit I, Szabo D, Kubinyi E. Age-related positivity effect on behavioural responses of dogs to human vocalisations. Sci Rep 2019; 9:20201. [PMID: 31882873] [PMCID: PMC6934484] [DOI: 10.1038/s41598-019-56636-z]
Abstract
Age-related changes in the brain can alter how emotions are processed. In humans, valence-specific changes in attention and memory have been reported with increasing age: older people attend less to, and experience fewer, negative emotions, while the processing of positive emotions remains intact. Little is yet known about this "positivity effect" in non-human animals. We tested young (n = 21, 1-5 years) and old (n = 19, >10 years) family dogs with positive (laugh), negative (cry), and neutral (hiccup, cough) human vocalisations and investigated age-related differences in their behavioural reactions. Only dogs with intact hearing were analysed, and the selected sound samples were balanced between valence categories with regard to mean and fundamental frequencies. Compared to young dogs, old individuals reacted more slowly only to the negative sounds, and there was no significant difference in the duration of the reactions between groups. The selective response of the aged dogs to the sound stimuli suggests that the results cannot be explained by general cognitive and/or perceptual decline, and supports the presence of an age-related positivity effect in dogs, too. Similarities in emotional processing between humans and dogs may imply analogous changes in subcortical emotional processing in the canine brain during ageing.
Affiliation(s)
- Iris Smit
- Department of Ethology, Eötvös Loránd University, Budapest, 1117, Hungary
- HAS University of Applied Sciences, 's-Hertogenbosch, 5223DE, The Netherlands
- Dora Szabo
- Department of Ethology, Eötvös Loránd University, Budapest, 1117, Hungary
- Enikő Kubinyi
- Department of Ethology, Eötvös Loránd University, Budapest, 1117, Hungary
29. Kuo PC, Tseng YL, Zilles K, Suen S, Eickhoff SB, Lee JD, Cheng PE, Liou M. Brain dynamics and connectivity networks under natural auditory stimulation. Neuroimage 2019; 202:116042. [PMID: 31344485] [DOI: 10.1016/j.neuroimage.2019.116042]
Abstract
The analysis of functional magnetic resonance imaging (fMRI) data is challenging when subjects are exposed to natural sensory stimulation. In this study, a two-stage approach was developed to enable the identification of connectivity networks involved in the processing of information in the brain under natural sensory stimulation. In the first stage, the degree of concordance between the results of inter-subject and intra-subject correlation analyses is assessed statistically. The microstructurally (i.e., cytoarchitectonically) defined brain areas are designated either as concordant, in which the results of both correlation analyses agree, or as discordant, in which one analysis method shows a higher proportion of supra-threshold voxels than the other. In the second stage, connectivity networks are identified using the time courses of supra-threshold voxels in brain areas, contingent upon the classifications derived in the first stage. In an empirical study, fMRI data were collected from 40 young adults (19 males, average age 22.76 ± 3.25 years), who underwent auditory stimulation involving sound clips of human voices and animal vocalizations under two operational conditions (eyes-closed and eyes-open). The operational conditions were designed to assess confounding effects due to auditory instructions or visual perception. The proposed two-stage analysis demonstrated that stress-modulation (affective) and language networks in the limbic and cortical structures, respectively, were engaged during sound stimulation and presented considerable variability among subjects. The network involved in regulating visuomotor control was sensitive to the eyes-open instruction and presented only small variations among subjects. A high degree of concordance was observed between the two analyses in the primary auditory cortex, which was highly sensitive to the pitch of the sound clips. Our results indicate that brain areas can be identified as concordant or discordant based on the two correlation analyses. This may further facilitate the search for connectivity networks involved in the processing of information under natural sensory stimulation.
Affiliation(s)
- Po-Chih Kuo
- Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
- Yi-Li Tseng
- Department of Electrical Engineering, Fu Jen Catholic University, New Taipei City, Taiwan
- Karl Zilles
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Summit Suen
- Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
- Simon B Eickhoff
- Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany; Institute of Neuroscience and Medicine (INM-7), Research Centre Jülich, Jülich, Germany
- Juin-Der Lee
- Graduate Institute of Business Administration, National Chengchi University, Taipei, Taiwan
- Philip E Cheng
- Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
- Michelle Liou
- Institute of Statistical Science, Academia Sinica, Taipei, Taiwan
30. Domínguez-Borràs J, Guex R, Méndez-Bértolo C, Legendre G, Spinelli L, Moratti S, Frühholz S, Mégevand P, Arnal L, Strange B, Seeck M, Vuilleumier P. Human amygdala response to unisensory and multisensory emotion input: No evidence for superadditivity from intracranial recordings. Neuropsychologia 2019; 131:9-24. [PMID: 31158367] [DOI: 10.1016/j.neuropsychologia.2019.05.027]
Abstract
The amygdala is crucially implicated in processing emotional information from various sensory modalities. However, there is a dearth of knowledge concerning the integration and relative time-course of its responses across different channels, i.e., for auditory, visual, and audiovisual input. Functional neuroimaging data in humans point to a possible role of this region in the multimodal integration of emotional signals, but direct evidence for anatomical and temporal overlap of unisensory and multisensory-evoked responses in the amygdala is still lacking. We recorded event-related potentials (ERPs) and oscillatory activity from 9 amygdalae using intracranial electroencephalography (iEEG) in patients prior to epilepsy surgery, and compared electrophysiological responses to fearful, happy, or neutral stimuli presented either in voices alone, faces alone, or voices and faces delivered simultaneously. Results showed differential amygdala responses to fearful stimuli, in comparison to neutral stimuli, reaching significance 100-200 ms post-onset for auditory, visual, and audiovisual stimuli. At later latencies, ∼400 ms post-onset, the amygdala response to audiovisual information was also amplified in comparison to auditory or visual stimuli alone. Importantly, however, we found no evidence for either super- or subadditivity effects in any of the bimodal responses. These results suggest, first, that emotion processing in the amygdala occurs at globally similar early stages of perceptual processing for auditory, visual, and audiovisual inputs; second, that overall larger responses to multisensory information occur only at later stages; and third, that the underlying mechanisms of this multisensory gain may reflect a purely additive response to concomitant visual and auditory inputs. Our findings provide novel insights into emotion processing across the sensory pathways, and their convergence within the limbic system.
Affiliation(s)
- Judith Domínguez-Borràs
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland
- Raphaël Guex
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland
- Guillaume Legendre
- Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
- Laurent Spinelli
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland
- Stephan Moratti
- Department of Experimental Psychology, Complutense University of Madrid, Spain; Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain
- Sascha Frühholz
- Department of Psychology, University of Zurich, Switzerland
- Pierre Mégevand
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
- Luc Arnal
- Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
- Bryan Strange
- Laboratory for Clinical Neuroscience, Centre for Biomedical Technology, Universidad Politécnica de Madrid, Spain; Department of Neuroimaging, Alzheimer's Disease Research Centre, Reina Sofia-CIEN Foundation, Madrid, Spain
- Margitta Seeck
- Department of Clinical Neuroscience, University Hospital of Geneva, Switzerland
- Patrik Vuilleumier
- Center for Affective Sciences, University of Geneva, Switzerland; Campus Biotech, Geneva, Switzerland; Department of Basic Neuroscience, Faculty of Medicine, University of Geneva, Switzerland
31. Profant O, Jilek M, Bures Z, Vencovsky V, Kucharova D, Svobodova V, Korynta J, Syka J. Functional Age-Related Changes Within the Human Auditory System Studied by Audiometric Examination. Front Aging Neurosci 2019; 11:26. [PMID: 30863300] [PMCID: PMC6399208] [DOI: 10.3389/fnagi.2019.00026]
Abstract
Age-related hearing loss (presbycusis) is one of the most common sensory deficits in the aging population. The main subjective complaint in the elderly is the deterioration of speech understanding, especially in a noisy environment, which cannot be explained solely by increased hearing thresholds. The examination methods used in presbycusis are primarily focused on peripheral pathologies (e.g., hearing sensitivity measured by hearing thresholds), with only a limited capacity to detect central lesions. In our study, auditory tests focused on central auditory abilities were used in addition to classical examination tests, with the aim of comparing auditory abilities between an elderly group (mean age 70.4 years) and young controls (mean age 24.4 years) with clinically normal auditory thresholds, and of clarifying the interactions between peripheral and central auditory impairments. Despite the fact that the elderly were selected to show natural age-related deterioration of hearing (auditory thresholds did not exceed 20 dB HL for the main speech frequencies) and had clinically normal speech reception thresholds (SRTs), the detailed examination of their auditory functions revealed deteriorated processing of temporal parameters [gap detection threshold (GDT), interaural time difference (ITD) detection], which was partially responsible for the altered perception of distorted speech (speech in babble noise, gated speech). An analysis of interactions between peripheral and central auditory abilities showed a stronger influence of peripheral function than of temporal processing ability on speech perception in silence in the elderly with normal cognitive function. However, in a more natural environment mimicked by the addition of background noise, the role of temporal processing increased rapidly.
Affiliation(s)
- Oliver Profant
- Department of Auditory Neuroscience, Institute of Experimental Medicine of the Czech Academy of Sciences, Prague, Czechia; Department of Otorhinolaryngology of Faculty Hospital Královské Vinohrady and 3rd Faculty of Medicine, Charles University, Prague, Czechia
- Milan Jilek
- Department of Auditory Neuroscience, Institute of Experimental Medicine of the Czech Academy of Sciences, Prague, Czechia
- Zbynek Bures
- Department of Auditory Neuroscience, Institute of Experimental Medicine of the Czech Academy of Sciences, Prague, Czechia; Department of Technical Studies, College of Polytechnics, Jihlava, Czechia
- Vaclav Vencovsky
- Department of Auditory Neuroscience, Institute of Experimental Medicine of the Czech Academy of Sciences, Prague, Czechia
- Diana Kucharova
- Department of Auditory Neuroscience, Institute of Experimental Medicine of the Czech Academy of Sciences, Prague, Czechia; Department of Otorhinolaryngology and Head and Neck Surgery, 1st Faculty of Medicine, Charles University in Prague, University Hospital Motol, Prague, Czechia
- Veronika Svobodova
- Department of Auditory Neuroscience, Institute of Experimental Medicine of the Czech Academy of Sciences, Prague, Czechia; Department of Otorhinolaryngology and Head and Neck Surgery, 1st Faculty of Medicine, Charles University in Prague, University Hospital Motol, Prague, Czechia
- Josef Syka
- Department of Auditory Neuroscience, Institute of Experimental Medicine of the Czech Academy of Sciences, Prague, Czechia
32. Hoogstraten AMRJV, Souza APRD, Moraes ABD. A complementaridade entre sinais PREAUT e IRDI na análise de risco psíquico aos nove meses e sua relação com idade gestacional [The complementarity between PREAUT signs and IRDI in the analysis of psychic risk at nine months and its relationship with gestational age]. Codas 2018; 30:e20170096. [DOI: 10.1590/2317-1782/20182017096]
Abstract
Objective: To compare the level of statistical agreement between the PREAUT signs and the Clinical Risk Indicators for Child Development (IRDI) in the identification of risk, and to analyze the frequency of psychic risk considering the gestational age variable. Methods: The total sample comprised 80 infants, 55 born at term and 25 born preterm, considering corrected age. All infants presenting any genetic syndrome, neurological lesions, or sensory deficits were excluded. The IRDI and the PREAUT signs, in addition to a semi-structured interview, were used as data collection instruments. The statistical analysis assessed the degree of agreement between the PREAUT signs and the IRDI using the kappa coefficient of agreement. Results: Perfect agreement between the two protocols was observed in the identification of subjects at nine months, although this identification occurs through distinct phenomenal signs. The frequency of psychic risk in preterm infants was higher (24%) than in infants born at term (20%). Psychic risk was considerable in the studied sample at nine months (21.25%). Conclusion: There was total agreement between the two protocols in the identification of psychic risk at nine months, whose frequency was substantial in the studied sample.
33. Keesom SM, Morningstar MD, Sandlain R, Wise BM, Hurley LM. Social isolation reduces serotonergic fiber density in the inferior colliculus of female, but not male, mice. Brain Res 2018; 1694:94-103. [DOI: 10.1016/j.brainres.2018.05.010]
34. Neurobiology of Hearing Loss and Ear Disease. Biomed Res Int 2018; 2018:2464251. [PMID: 29850490] [PMCID: PMC5933019] [DOI: 10.1155/2018/2464251]
35
36. Dricu M, Ceravolo L, Grandjean D, Frühholz S. Biased and unbiased perceptual decision-making on vocal emotions. Sci Rep 2017; 7:16274. [PMID: 29176612] [PMCID: PMC5701116] [DOI: 10.1038/s41598-017-16594-w]
Abstract
Perceptual decision-making on emotions involves gathering sensory information about the affective state of another person and forming a decision on the likelihood of a particular state. These perceptual decisions can be of varying complexity as determined by different contexts. We used functional magnetic resonance imaging and a region-of-interest approach to investigate the brain activation and functional connectivity behind two forms of perceptual decision-making. More complex unbiased decisions on affective voices recruited an extended bilateral network consisting of the posterior inferior frontal cortex, the orbitofrontal cortex, the amygdala, and voice-sensitive areas in the auditory cortex. Less complex biased decisions on affective voices distinctly recruited the right mid inferior frontal cortex, pointing to a functional distinction in this region depending on decisional requirements. Furthermore, task-induced neural connectivity revealed stronger connections between these frontal, auditory, and limbic regions during unbiased relative to biased decision-making on affective voices. Together, the data show that different types of perceptual decision-making on auditory emotions have distinct patterns of activation and functional coupling that follow the decisional strategies and cognitive mechanisms involved in these perceptual decisions.
Affiliation(s)
- Mihai Dricu
- Swiss Center for Affective Sciences, Campus Biotech, University of Geneva, 1202, Geneva, Switzerland; Department of Experimental Psychology and Neuropsychology, University of Bern, 3012, Bern, Switzerland
- Leonardo Ceravolo
- Swiss Center for Affective Sciences, Campus Biotech, University of Geneva, 1202, Geneva, Switzerland; Department of Psychology and Educational Sciences, University of Geneva, 1205, Geneva, Switzerland
- Didier Grandjean
- Swiss Center for Affective Sciences, Campus Biotech, University of Geneva, 1202, Geneva, Switzerland; Department of Psychology and Educational Sciences, University of Geneva, 1205, Geneva, Switzerland
- Sascha Frühholz
- Swiss Center for Affective Sciences, Campus Biotech, University of Geneva, 1202, Geneva, Switzerland; Department of Psychology, University of Zurich, 8050, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland; Center for Integrative Human Physiology (ZIHP), University of Zurich, Zurich, Switzerland
37. Profant O, Roth J, Bureš Z, Balogová Z, Lišková I, Betka J, Syka J. Auditory dysfunction in patients with Huntington's disease. Clin Neurophysiol 2017; 128:1946-1953. [DOI: 10.1016/j.clinph.2017.07.403]
38. Frühholz S, Staib M. Neurocircuitry of impaired affective sound processing: A clinical disorders perspective. Neurosci Biobehav Rev 2017; 83:516-524. [PMID: 28919431] [DOI: 10.1016/j.neubiorev.2017.09.009]
Abstract
Decoding affective meaning from sensory information is central to accurate and adaptive behavior in many natural and social contexts. Human vocalizations (speech and non-speech), environmental sounds (e.g., thunder, noise, or animal sounds), and human-produced sounds (e.g., technical sounds or music) can carry a wealth of important aversive, threatening, appealing, or pleasurable affective information that sometimes implicitly influences and guides our behavior. A deficit in processing such affective information is detrimental to adaptive environmental behavior, psychological well-being, and social interactive abilities. These deficits can originate from a diversity of psychiatric and neurological disorders, and are associated with neural dysfunctions across largely distributed brain networks. Recent neuroimaging studies in psychiatric and neurological patients outline the cortical and subcortical neurocircuitry of the complementary and differential functional roles involved in affective sound processing. This points to, and confirms, a recently proposed distributed network, rather than a single brain region, underlying affective sound processing, and highlights the notion of a multi-functional process that can be differentially impaired in clinical disorders.
Affiliation(s)
- Sascha Frühholz
- Department of Psychology, University of Zürich, Zürich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland; Center for Integrative Human Physiology (ZIHP), University of Zurich, Switzerland.
- Matthias Staib
- Department of Psychology, University of Zürich, Zürich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland.
39
Frühholz S, Schlegel K, Grandjean D. Amygdala structure and core dimensions of the affective personality. Brain Struct Funct 2017; 222:3915-3925. [PMID: 28512686 DOI: 10.1007/s00429-017-1444-9] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2017] [Accepted: 05/11/2017] [Indexed: 11/26/2022]
Abstract
While biological models of human personality propose that socio-affective traits and skills are rooted in the structure of the amygdala, empirical evidence remains sparse and inconsistent. Here, we used a comprehensive assessment of the affective personality and tested its association with global, local, and laterality measures of the amygdala structure. Results revealed three broad dimensions of the affective personality that were differentially related to bilateral amygdala structures. Dysfunctional and maladaptive affective traits were associated with a global size and local volume reduction of the amygdala, whereas adaptive emotional skills were linked to an increased size of the left amygdala. Furthermore, reduced asymmetry in the bilateral global amygdala volume was linked to higher affective instability and might be a potential precursor of psychiatric disorders. This study demonstrates that structural amygdala measures provide a neural basis for all major dimensions of the human personality related to adaptive and maladaptive socio-affective functioning.
Affiliation(s)
- Sascha Frühholz
- Department of Psychology, University of Zurich, Binzmühlestrasse 14/18, 8050, Zurich, Switzerland.
- Neuroscience Center Zurich, University of Zurich and ETH Zurich, 8057, Zurich, Switzerland.
- Center for Integrative Human Physiology (ZIHP), University of Zurich, 8057, Zurich, Switzerland.
- Swiss Center for Affective Sciences, University of Geneva, 1202, Geneva, Switzerland.
- Katja Schlegel
- Swiss Center for Affective Sciences, University of Geneva, 1202, Geneva, Switzerland.
- Institute for Psychology, University of Bern, 3012, Bern, Switzerland.
- Didier Grandjean
- Swiss Center for Affective Sciences, University of Geneva, 1202, Geneva, Switzerland.
40
Gruber T, Grandjean D. A comparative neurological approach to emotional expressions in primate vocalizations. Neurosci Biobehav Rev 2016; 73:182-190. [PMID: 27993605 DOI: 10.1016/j.neubiorev.2016.12.004] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2016] [Revised: 12/01/2016] [Accepted: 12/03/2016] [Indexed: 12/20/2022]
Abstract
Different approaches from different research domains have crystallized debate over primate emotional processing and vocalizations in recent decades. On one side, researchers disagree about whether emotional states or processes in animals truly compare to those in humans. On the other, a long-held assumption is that primate vocalizations are innate communicative signals over which nonhuman primates have limited control and a mirror of the emotional state of the individuals producing them, despite growing evidence of intentional production for some vocalizations. Our goal is to connect both sides of the discussion in deciphering how the emotional content of primate calls compares with emotional vocal signals in humans. We focus particularly on neural bases of primate emotions and vocalizations to identify cerebral structures underlying emotion, vocal production, and comprehension in primates, and discuss whether particular structures or neuronal networks solely evolved for specific functions in the human brain. Finally, we propose a model to classify emotional vocalizations in primates according to four dimensions (learning, control, emotional, meaning) to allow comparing calls across species.
Affiliation(s)
- Thibaud Gruber
- Swiss Center for Affective Sciences and Department of Psychology and Sciences of Education, University of Geneva, Geneva, Switzerland.
- Didier Grandjean
- Swiss Center for Affective Sciences and Department of Psychology and Sciences of Education, University of Geneva, Geneva, Switzerland.
41
Perceiving emotional expressions in others: Activation likelihood estimation meta-analyses of explicit evaluation, passive perception and incidental perception of emotions. Neurosci Biobehav Rev 2016; 71:810-828. [DOI: 10.1016/j.neubiorev.2016.10.020] [Citation(s) in RCA: 62] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2016] [Revised: 09/17/2016] [Accepted: 10/24/2016] [Indexed: 01/09/2023]
42
Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions. Cortex 2016; 85:116-125. [PMID: 27855282 DOI: 10.1016/j.cortex.2016.10.013] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2016] [Revised: 09/19/2016] [Accepted: 10/19/2016] [Indexed: 11/23/2022]
Abstract
Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. Together, these results provide evidence that, besides the auditory cortex, the amygdala also processes acoustic information when this is relevant to the discrimination of auditory emotions.
43
The sound of emotions-Towards a unifying neural network perspective of affective sound processing. Neurosci Biobehav Rev 2016; 68:96-110. [PMID: 27189782 DOI: 10.1016/j.neubiorev.2016.05.002] [Citation(s) in RCA: 117] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2016] [Revised: 05/01/2016] [Accepted: 05/04/2016] [Indexed: 12/15/2022]
Abstract
Affective sounds are an integral part of the natural and social environment that shape and influence behavior across a multitude of species. In human primates, these affective sounds span a repertoire of environmental and human sounds when we vocalize or produce music. In terms of neural processing, cortical and subcortical brain areas constitute a distributed network that supports our listening experience to these affective sounds. Taking an exhaustive cross-domain view, we accordingly suggest a common neural network that facilitates the decoding of the emotional meaning from a wide source of sounds rather than a traditional view that postulates distinct neural systems for specific affective sound types. This new integrative neural network view unifies the decoding of affective valence in sounds, and ascribes differential as well as complementary functional roles to specific nodes within a common neural network. It also highlights the importance of an extended brain network beyond the central limbic and auditory brain systems engaged in the processing of affective sounds.