1
Hao Y, Hu L. Lower Childhood Socioeconomic Status Is Associated with Greater Neural Responses to Ambient Auditory Changes in Adulthood. J Cogn Neurosci 2024; 36:979-996. [PMID: 38579240 DOI: 10.1162/jocn_a_02151]
Abstract
Humans' early life experience varies by socioeconomic status (SES), raising the question of how this difference is reflected in the adult brain. An important aspect of brain function is the ability to detect salient ambient changes while focusing on a task. Here, we ask whether subjective social status during childhood is reflected in how young adults' brains detect changes in task-irrelevant information. In two studies (total n = 58), we examined electrical brain responses in the frontocentral region to a series of auditory tones, consisting of standard stimuli (80%) and deviant stimuli (20%) interspersed randomly, while participants were engaged in various visual tasks. Both studies showed stronger automatic change detection, indexed by the mismatch negativity (MMN), in lower-SES individuals, regardless of the unattended sound feature, the attended emotional content, or the study type. The larger MMN in lower-SES participants was observed even though they showed no differences in brain or behavioral responses to the attended task, nor did they involuntarily orient more attention to sound changes (i.e., deviant stimuli), as indexed by the P3a. The study indicates that individuals with lower subjective childhood social status may have an increased ability to automatically detect changes in their environment, which may reflect adaptation to their childhood environments.
Affiliation(s)
- Yu Hao
- University of Pennsylvania
2
Li X, Vuoriainen E, Xu Q, Astikainen P. The effect of sad mood on early sensory event-related potentials to task-irrelevant faces. Biol Psychol 2023; 178:108531. [PMID: 36871812 DOI: 10.1016/j.biopsycho.2023.108531]
Abstract
It has been shown that the perceiver's mood affects the perception of emotional faces, but it is not known how mood affects preattentive brain responses to emotional facial expressions. To examine this question, we experimentally induced sad and neutral moods in healthy adults before presenting them with task-irrelevant pictures of faces while electroencephalography was recorded. Sad, happy, and neutral faces were presented to the participants in an ignore oddball condition. Differential responses (emotional - neutral) for the P1, N170, and P2 amplitudes were extracted and compared between the neutral and sad mood conditions. Emotional facial expressions modulated all three components, and an interaction of expression by mood was found for the P1: the emotional modulation to happy faces observed in the neutral mood condition disappeared in the sad mood condition. For the N170 and P2, we found larger response amplitudes for both emotional expressions, regardless of mood. The results extend previous behavioral findings by showing that mood already affects low-level cortical feature encoding of task-irrelevant faces.
Affiliation(s)
- Xueqiao Li
- Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyvaskyla, P.O. Box 35, 40014 Jyväskylä, Finland.
- Elisa Vuoriainen
- Human Information Processing Laboratory, Faculty of Social Sciences / Psychology, Tampere University, 33014 Tampere, Finland
- Qianru Xu
- Center for Machine Vision and Signal Analysis, University of Oulu, 90014 Oulu, Finland
- Piia Astikainen
- Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyvaskyla, P.O. Box 35, 40014 Jyväskylä, Finland
3
From communication dysfunction to treatment options in serious mental illness. Psychiatry Res 2023; 321:115062. [PMID: 36746033 DOI: 10.1016/j.psychres.2023.115062]
Abstract
This Commentary covers research on language dysfunction in schizophrenia, and more broadly on communication dysfunction in this disorder, which I have examined with a variety of behavioral and imaging methodologies. It briefly outlines how further progress can be made toward a comprehensive understanding of the underlying causes. Possible therapeutic approaches are also briefly discussed.
4
Zhang M, Siegle GJ. Linking Affective and Hearing Sciences-Affective Audiology. Trends Hear 2023; 27:23312165231208377. [PMID: 37904515 PMCID: PMC10619363 DOI: 10.1177/23312165231208377]
Abstract
A growing number of health-related sciences, including audiology, have recognized the importance of affective phenomena. However, in audiology, affective phenomena are mostly studied as a consequence of hearing status. This review first addresses the anatomical and functional bidirectional connections between auditory and affective systems that support a reciprocal affect-hearing relationship. We then postulate, by focusing on four practical examples (public campaigns about hearing, hearing intervention uptake, thorough hearing evaluation, and tinnitus), that some important challenges in audiology are likely affect-related and that potential solutions could be developed by drawing on advances in affective science. We continue by introducing useful resources from affective science that could help audiology professionals learn about the wide range of affective constructs and integrate them into hearing research and clinical practice in structured and applicable ways. Six important considerations for good-quality affective audiology research are summarized. We conclude that it is worthwhile and feasible to explore in depth the explanatory power of emotions, feelings, motivations, attitudes, moods, and other affective processes when trying to understand and predict how people with hearing difficulties perceive, react to, and adapt to their environment.
Affiliation(s)
- Min Zhang
- Shanghai Key Laboratory of Clinical Geriatric Medicine, Huadong Hospital, Fudan University, Shanghai, China
- Greg J. Siegle
- Department of Psychiatry, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
5
Lin Y, Fan X, Chen Y, Zhang H, Chen F, Zhang H, Ding H, Zhang Y. Neurocognitive Dynamics of Prosodic Salience over Semantics during Explicit and Implicit Processing of Basic Emotions in Spoken Words. Brain Sci 2022; 12:1706. [PMID: 36552167 PMCID: PMC9776349 DOI: 10.3390/brainsci12121706]
Abstract
How language mediates emotional perception and experience is poorly understood. The present event-related potential (ERP) study examined the explicit and implicit processing of emotional speech to differentiate the relative influences of communication channel, emotion category and task type in the prosodic salience effect. Thirty participants (15 women) were presented with spoken words denoting happiness, sadness and neutrality in either the prosodic or semantic channel. They were asked to judge the emotional content (explicit task) and the speakers' gender (implicit task) of the stimuli. Results indicated that emotional prosody (relative to semantics) triggered larger N100, P200 and N400 amplitudes, with greater delta, theta and alpha inter-trial phase coherence (ITPC) and event-related spectral perturbation (ERSP) values in the corresponding early time windows, and continued to produce larger LPC amplitudes and faster responses during late stages of higher-order cognitive processing. The relative salience of prosody and semantics was modulated by emotion and task, though such modulatory effects varied across processing stages. The prosodic salience effect was reduced for sadness processing and in the implicit task during early auditory processing and decision-making, but reduced for happiness processing in the explicit task during conscious emotion processing. Additionally, across-trial synchronization in the delta, theta and alpha bands predicted the ERP components, with higher ITPC and ERSP values significantly associated with stronger N100, P200, N400 and LPC enhancement. These findings reveal the neurocognitive dynamics of emotional speech processing, with prosodic salience tied to stage-dependent emotion- and task-specific effects, and offer insights into language and emotion processing from cross-linguistic/cultural and clinical perspectives.
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Xinran Fan
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Yueqi Chen
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Hao Zhang
- School of Foreign Languages and Literature, Shandong University, Jinan 250100, China
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha 410012, China
- Hui Zhang
- School of International Education, Shandong University, Jinan 250100, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China
- Correspondence: (H.D.); (Y.Z.); Tel.: +86-213-420-5664 (H.D.); +1-612-624-7818 (Y.Z.)
- Yang Zhang
- Department of Speech-Language-Hearing Science & Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN 55455, USA
- Correspondence: (H.D.); (Y.Z.); Tel.: +86-213-420-5664 (H.D.); +1-612-624-7818 (Y.Z.)
6
Chen J, Huang X, Wang X, Zhang X, Liu S, Ma J, Huang Y, Tang A, Wu W. Visually Perceived Negative Emotion Enhances Mismatch Negativity but Fails to Compensate for Age-Related Impairments. Front Hum Neurosci 2022; 16:903797. [PMID: 35832873 PMCID: PMC9271563 DOI: 10.3389/fnhum.2022.903797]
Abstract
Objective: Automatic detection of auditory stimuli, represented by the mismatch negativity (MMN), facilitates rapid processing of salient stimuli in the environment. The amplitude of the MMN declines with ageing. However, whether automatic detection of auditory stimuli is affected by visually perceived negative emotions during normal ageing remains unclear. We aimed to evaluate how fearful facial expressions affect the MMN amplitude with ageing. Methods: We used a modified oddball paradigm to analyze the amplitudes of the N100 (N1) and MMN in 22 young adults and 21 middle-aged adults. Results: We found that the N1 amplitude elicited by standard tones was smaller under fearful than under neutral facial expressions and was more negative for young adults than for middle-aged adults. The MMN amplitude was greater under fearful than under neutral facial expressions, but the amplitude in middle-aged adults was smaller than in young adults. Conclusion: Visually perceived negative emotion promotes the extraction of auditory features. It enhances auditory change detection in middle-aged adults but fails to compensate for its decline with normal ageing. Significance: The study may help to clarify how visually perceived emotion affects the early stages of auditory information processing from an event process perspective.
Affiliation(s)
- Jiali Chen
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Xiaomin Huang
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Xianglong Wang
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Xuefei Zhang
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Sishi Liu
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Junqin Ma
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Yuanqiu Huang
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Guangdong Province Work Injury Rehabilitation Hospital, Guangzhou, China
- Anli Tang
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Wen Wu
- Department of Rehabilitation Medicine, Zhujiang Hospital, Southern Medical University, Guangzhou, China
- Correspondence: Wen Wu
7
Nakakoga S, Shimizu K, Muramatsu J, Kitagawa T, Nakauchi S, Minami T. Pupillary response reflects attentional modulation to sound after emotional arousal. Sci Rep 2021; 11:17264. [PMID: 34446768 PMCID: PMC8390645 DOI: 10.1038/s41598-021-96643-7]
Abstract
There have been various studies on the effects of emotional visual processing on subsequent non-emotional auditory stimuli. A previous EEG study showed that deviant sounds presented after negative pictures recruited more attentional resources than those presented after neutral pictures. To investigate such a compelling situation between emotional and cognitive processing, the present study examined pupillary responses to an auditory stimulus after a positive, negative, or neutral emotional state was elicited by an emotional image. Each image was followed by a beep that was either repetitive or unexpected, and pupillary dilation was measured. We found that the early component of the pupillary response to the beep was larger for negative and positive emotional states than for the neutral state, whereas the late component was larger for the positive state than for the negative and neutral states. In addition, the peak latency of the pupillary response was earlier for negative than for neutral or positive images. Further, to compensate for the low temporal resolution of the pupillary data, the pupillary responses were deconvolved before analysis. The deconvolution analysis confirmed that the responses to the beep were modulated by the emotional state rather than by the short presentation interval between the images and sounds. These findings suggest that pupil size indexes modulations in the compelling situation between emotional and cognitive processing.
Affiliation(s)
- Satoshi Nakakoga
- Department of Computer Science and Engineering, Toyohashi University of Technology, 1-1, Hibarigaoka Tempaku, Toyohashi, Aichi, 441-8580, Japan
- Kengo Shimizu
- Department of Computer Science and Engineering, Toyohashi University of Technology, 1-1, Hibarigaoka Tempaku, Toyohashi, Aichi, 441-8580, Japan
- Junya Muramatsu
- System & Electronics Engineering Dept. II, TOYOTA Central R&D Labs., Inc., 41-1, Yokomichi, Nagakute, Aichi, 480-1192, Japan
- Takashi Kitagawa
- R&D and Engineering Management Div., TOYOTA MOTOR CORPORATION, 1, Toyota-cho, Toyota, Aichi, 471-8502, Japan
- Shigeki Nakauchi
- Department of Computer Science and Engineering, Toyohashi University of Technology, 1-1, Hibarigaoka Tempaku, Toyohashi, Aichi, 441-8580, Japan
- Tetsuto Minami
- Department of Computer Science and Engineering, Toyohashi University of Technology, 1-1, Hibarigaoka Tempaku, Toyohashi, Aichi, 441-8580, Japan.
- Electronics-Inspired Interdisciplinary Research Institute, Toyohashi University of Technology, 1-1, Hibarigaoka Tempaku, Toyohashi, Aichi, 441-8580, Japan.
8
Swyer A, Powers AR. Voluntary control of auditory hallucinations: phenomenology to therapeutic implications. NPJ Schizophr 2020; 6:19. [PMID: 32753641 PMCID: PMC7403299 DOI: 10.1038/s41537-020-0106-8]
Abstract
Auditory verbal hallucinations (AVH) have traditionally been thought to be outside the influence of conscious control. However, recent work with voice hearers makes clear that both treatment-seeking and non-treatment-seeking voice hearers may exert varying degrees of control over their voices. Evidence suggests that this ability may be a key factor in determining health status, but little systematic examination of control in AVH has been carried out. This review provides an overview of the research examining control over AVH in both treatment-seeking and non-treatment-seeking populations. We first examine the relationship between control over AVH and health status as well as the psychosocial factors that may influence control and functioning. We then link control to various cognitive constructs that appear to be important for voice hearing. Finally, we reconcile the possibility of control with the field’s current understanding of the proposed cognitive, computational, and neural underpinnings of hallucinations and perception more broadly. Established relationships between control, health status, and functioning suggest that the development of control over AVH could increase functioning and reduce distress. A more detailed understanding of the discrete types of control, their development, and their neural underpinnings is essential for translating this knowledge into new therapeutic approaches.
Affiliation(s)
- Ariel Swyer
- Department of Behavioral Sciences, York College/CUNY, Jamaica, NY, USA
- Albert R Powers
- Department of Psychiatry and the Connecticut Mental Health Center, Yale University, New Haven, CT, USA.
9
Pereira DR, Sampaio A, Pinheiro AP. Is internal source memory recognition modulated by emotional encoding contexts? Psychol Res 2020; 85:958-979. [PMID: 32060700 DOI: 10.1007/s00426-020-01294-4]
Abstract
The influence of emotion on memory has been mainly examined by manipulating the emotional valence and/or arousal of critical items. Few studies have probed how emotional information presented during the encoding of critical neutral items modulates memory recognition, particularly when considering source memory features. In this study, we specified the role of emotional encoding contexts in internal source memory performance (discrimination between encoding tasks) using a mixed (Experiment 1) and a blocked design (Experiment 2). During the study phase, participants were required to evaluate a set of neutral words using either a self-referential or a semantic (common judgment) encoding strategy. Prior to and concomitantly with each word, negative, neutral or positive pictures were presented in the background. The beneficial effect of self-referential encoding was observed for both item and internal source memory in both experiments. Remarkably, item and internal source memory recognition was not modulated by emotion, even though a secondary analysis indicated that consistent exposure to negative (vs. positive) information led to worse source memory performance. These findings suggest that internal source memory of neutral items is not always affected by changing or repetitive emotional encoding contexts.
Affiliation(s)
- Diana R Pereira
- Psychological Neuroscience Lab, CIPsi, School of Psychology, University of Minho, Campus de Gualtar, 4710-057, Braga, Portugal
- Adriana Sampaio
- Psychological Neuroscience Lab, CIPsi, School of Psychology, University of Minho, Campus de Gualtar, 4710-057, Braga, Portugal
- Ana P Pinheiro
- Psychological Neuroscience Lab, CIPsi, School of Psychology, University of Minho, Campus de Gualtar, 4710-057, Braga, Portugal
- Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisbon, Portugal
10
Rachman L, Dubal S, Aucouturier JJ. Happy you, happy me: expressive changes on a stranger's voice recruit faster implicit processes than self-produced expressions. Soc Cogn Affect Neurosci 2020; 14:559-568. [PMID: 31044241 PMCID: PMC6545538 DOI: 10.1093/scan/nsz030]
Abstract
In social interactions, people have to pay attention to both the ‘what’ and the ‘who’. In particular, expressive changes heard in speech signals have to be integrated with speaker identity, differentiating e.g. self- and other-produced signals. While previous research has shown that self-related visual information processing is facilitated compared to non-self stimuli, evidence in the auditory modality remains mixed. Here, we compared electroencephalography (EEG) responses to expressive changes in sequences of self- or other-produced speech sounds using a mismatch negativity (MMN) passive oddball paradigm. Critically, to control for speaker differences, we used programmable acoustic transformations to create voice deviants that differed from standards in exactly the same manner, making EEG responses to such deviations comparable between sequences. Our results indicate that expressive changes on a stranger’s voice are highly prioritized in auditory processing compared to identical changes on the self-voice. Other-voice deviants generate earlier MMN onset responses and involve stronger cortical activations in a left motor and somatosensory network, suggestive of an increased recruitment of resources for less internally predictable, and therefore perhaps more socially relevant, signals.
Affiliation(s)
- Laura Rachman
- Inserm U, CNRS UMR, Sorbonne Université UMR S, Institut du Cerveau et de la Moelle épinière, Social and Affective Neuroscience Lab, Paris, France
- Science & Technology of Music and Sound, UMR (CNRS/IRCAM/Sorbonne Université), Paris, France
- Stéphanie Dubal
- Inserm U, CNRS UMR, Sorbonne Université UMR S, Institut du Cerveau et de la Moelle épinière, Social and Affective Neuroscience Lab, Paris, France
- Jean-Julien Aucouturier
- Science & Technology of Music and Sound, UMR (CNRS/IRCAM/Sorbonne Université), Paris, France
11
Rosburg T, Weigl M, Deuring G. Enhanced processing of facial emotion for target stimuli. Int J Psychophysiol 2019; 146:190-200. [DOI: 10.1016/j.ijpsycho.2019.08.010]
12
Tavakoli P, Dale A, Boafo A, Campbell K. Evidence of P3a During Sleep, a Process Associated With Intrusions Into Consciousness in the Waking State. Front Neurosci 2019; 12:1028. [PMID: 30686989 PMCID: PMC6335993 DOI: 10.3389/fnins.2018.01028]
Abstract
The present study examines processes associated with intrusions into consciousness during an unconscious state, natural sleep. The definition of sleep is still much debated. Almost all researchers agree that sleep onset represents a gradual loss of consciousness of the external environment. For sleep to be beneficial, it needs to remain as undisturbed as possible. Nevertheless, unlike other unconscious states, sleep is reversible. For purposes of survival, it is critical that the sleeper be able to “detect” and perhaps become conscious of highly relevant biological or personal information. Therefore, even in sleep, the brain must decide whether a new incoming stimulus is relevant and, if so, may require an arousal to wakefulness, or whether it is irrelevant and can be gated to prevent disruption of sleep. Event-related potentials (ERPs) were used to measure the extent of processing of auditory stimuli, some of which elicited an ERP component, the P3a, in the waking state. The P3a is associated with processes resulting in the interruption of the frontal central executive, leading to conscious awareness. Very little research has focused on the occurrence of the P3a during sleep. A multi-feature paradigm was used to examine the processing of a frequently occurring “standard” stimulus and six different rarely occurring “deviant” stimuli during wakefulness, NREM, and REM sleep. A P3a was elicited by novel environmental sounds and white noise bursts in the waking state, replicating previous studies. Other deviant stimuli (changes in pitch, intensity, duration) failed to do so. The ERPs indicated that processing of the stimuli that did not elicit a P3a in wakefulness was much inhibited during both NREM and REM sleep. Surprisingly, those deviants that did elicit a P3a in wakefulness continued to do so in stage N2 and REM sleep. Participants did not, however, awaken. These results suggest that processes leading to consciousness in wakefulness may remain active during sleep, possibly allowing sleepers to act on potentially highly relevant input. This may also explain how sleep can be reversed if the stimulus input is sufficiently critical.
Affiliation(s)
- Paniz Tavakoli
- Children's Hospital of Eastern Ontario, Ottawa, ON, Canada
- Allyson Dale
- School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Addo Boafo
- Children's Hospital of Eastern Ontario, Ottawa, ON, Canada
- Department of Psychiatry, University of Ottawa, Ottawa, ON, Canada
13
Carminati M, Fiori-Duharcourt N, Isel F. Neurophysiological differentiation between preattentive and attentive processing of emotional expressions on French vowels. Biol Psychol 2017; 132:55-63. [PMID: 29102707 DOI: 10.1016/j.biopsycho.2017.10.013]
Abstract
The present electrophysiological study investigated the processing of emotional prosody while minimizing as much as possible the effect of emotional information conveyed by the lexical-semantic context. Emotionally colored French vowels (i.e., happiness, sadness, fear, and neutral) were presented in a mismatch negativity (MMN) oddball paradigm. Both the MMN, i.e., an event-related potential (ERP) component thought to reflect preattentive change detection, and the P3a, i.e., an ERP marker of involuntary orientation of attention toward deviant stimuli, were significantly modulated by the emotional deviants compared to the neutral ones. Critically, the largest amplitude (MMN, P3a) and the shortest peak latency (MMN) were observed for fear deviants, all other things being equal. Taken together, the present findings lend support to a sequential neurocognitive model of emotion processing (Scherer, 2001), which postulates, among other checks, a first stage of automatic emotion detection (MMN) followed by a second stage of subjective evaluation of the stimulus or event (P3a). Consistent with previous studies, our data suggest that among the six universal emotions, fear could have a special status, probably because of its adaptive role in the evolution of the human species.
Affiliation(s)
- Mathilde Carminati
- Laboratory Vision Action Cognition - EA 7326, Institute of Psychology, Paris Descartes University - Sorbonne Paris Cité, France.
- Nicole Fiori-Duharcourt
- Laboratory Vision Action Cognition - EA 7326, Institute of Psychology, Paris Descartes University - Sorbonne Paris Cité, France
- Frédéric Isel
- University Paris Nanterre - Paris Lumières, CNRS, UMR 7114 Models, Dynamics, Corpora, France