1. Rodríguez-San Esteban P, Gonzalez-Lopez JA, Chica AB. Neural representation of consciously seen and unseen information. Sci Rep 2025; 15:7888. [PMID: 40050698] [PMCID: PMC11885812] [DOI: 10.1038/s41598-025-92490-y]
Abstract
Machine learning (ML) techniques have steadily gained popularity in neuroscience research, particularly when applied to the analysis of neuroimaging data. One of the most discussed topics in this field, the neural correlates of conscious (and unconscious) information, has also benefited from these approaches. Nevertheless, further research is still necessary to better understand the minimal neural mechanisms that are necessary and sufficient for experiencing any conscious percept, and which mechanisms are comparable and discernible between conscious and unconscious events. The aim of this study was two-fold: first, to explore whether it was possible to decode task-relevant features from electroencephalography (EEG) signals, particularly those related to perceptual awareness; and second, to test whether this decoding could be improved by using time-frequency representations instead of voltage. We employed a perceptual task in which participants were presented with near-threshold Gabor stimuli. They were asked to discriminate the orientation of the grating, and report whether they had perceived it or not. Participants' EEG signal was recorded while performing the task and was then analysed using ML algorithms to decode distinctive task-related parameters. Results demonstrated the feasibility of decoding the presence/absence of the stimuli from EEG data, as well as participants' subjective perception, although the model failed to extract relevant information related to the orientation of the Gabor. Unconscious processing of unseen stimulation was observed both behaviourally and at the neural level. Moreover, contrary to conscious processing, unconscious representations were less stable across time, and only observed at early perceptual stages (~100 ms) and during response preparation. Furthermore, we conducted a comparative analysis of the performance of the classifier when employing either raw voltage signals or time-frequency representations, finding a substantial improvement when the latter was used to train the model, particularly in the theta and alpha bands. These findings underscore the significant potential of ML algorithms in decoding perceptual awareness from EEG data in consciousness research tasks.
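The contrast drawn above between voltage-based and time-frequency decoding can be illustrated with a minimal sketch. The example below uses synthetic data and generic tools (MNE-Python and scikit-learn), not the authors' actual pipeline; the labels, time window, and effect size are invented purely for illustration.

```python
# Illustrative sketch: decode a binary "seen vs. unseen" label from EEG voltage
# versus Morlet time-frequency power (synthetic data, arbitrary parameters).
import numpy as np
from mne.time_frequency import tfr_array_morlet
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, sfreq = 200, 32, 250
times = np.arange(0, 1.0, 1 / sfreq)                     # 1 s epochs, stimulus at t = 0
X = rng.standard_normal((n_trials, n_channels, times.size))
y = rng.integers(0, 2, n_trials)                         # hypothetical seen (1) / unseen (0) labels
X[y == 1, :, 25:75] += 0.3                               # toy awareness-related signal ~100-300 ms

# Voltage-based decoding at a single time point (200 ms post-stimulus)
t_idx = int(np.argmin(np.abs(times - 0.2)))
acc_voltage = cross_val_score(LogisticRegression(max_iter=1000),
                              X[:, :, t_idx], y, cv=5).mean()

# Time-frequency decoding: theta/alpha power at the same time point
freqs = np.arange(4.0, 13.0)                             # 4-12 Hz
power = tfr_array_morlet(X, sfreq=sfreq, freqs=freqs,
                         n_cycles=freqs / 2.0, output='power')
features = power[:, :, :, t_idx].reshape(n_trials, -1)   # trials x (channels * freqs)
acc_tf = cross_val_score(LogisticRegression(max_iter=1000),
                         features, y, cv=5).mean()
print(f"voltage accuracy: {acc_voltage:.2f}  time-frequency accuracy: {acc_tf:.2f}")
```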
Affiliation(s)
- Pablo Rodríguez-San Esteban
- Department of Experimental Psychology, University of Granada (UGR), Granada, Spain.
- Brain, Mind, and Behavior Research Center (CIMCYC), Campus of Cartuja, University of Granada (UGR), Granada, 18011, Spain.
- Jose A Gonzalez-Lopez
- Department of Signal Theory, Telematics and Communications, University of Granada (UGR), Granada, Spain
- Research Center for Information and Communication Technologies (CITIC-UGR), University of Granada (UGR), Granada, Spain
- Ana B Chica
- Department of Experimental Psychology, University of Granada (UGR), Granada, Spain
- Brain, Mind, and Behavior Research Center (CIMCYC), Campus of Cartuja, University of Granada (UGR), Granada, 18011, Spain
2. Gao Y, Chen S, Rahnev D. Dynamics of sensory and decisional biases in perceptual decision making: Insights from the face distortion illusion. Psychon Bull Rev 2025; 32:317-325. [PMID: 38980570] [DOI: 10.3758/s13423-024-02539-8]
Abstract
Bias in perceptual decision making can have both sensory and decisional origins. These distinct sources of bias are typically seen as static and stable over time. However, human behavior is dynamic and constantly adapting. Yet it remains unclear how sensory and decisional biases progress in distinct ways over time. We addressed this question by tracking the dynamics of sensory and decisional biases during a task that involves a visual illusion. Observers saw multiple pairs of peripherally presented faces that induce a strong illusion making the faces appear distorted and grotesque. The task was to judge whether one of the last two faces had true physical distortion (experimentally introduced in half of the trials). Initially, participants classified most faces as distorted as exemplified by a liberal response bias. However, over the course of the experiment, this response bias gradually disappeared even though the distortion illusion remained equally strong, as demonstrated by a separate subjective rating task without artificially distorted faces. The results suggest that the sensory bias was progressively countered by an opposite decisional bias. This transition was accompanied by an increase in reaction times and a decrease in confidence relative to a condition that does not induce the visual illusion. All results were replicated in a second experiment with inverted faces. These findings demonstrate that participants dynamically adjust their decisional bias to compensate for sensory biases, and that these two biases together determine how humans make perceptual decisions.
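The separation of sensitivity from response bias that this abstract relies on comes from signal detection theory. The following minimal sketch, using made-up trial counts rather than the study's data, shows how d′ and the criterion c are typically computed from hit and false-alarm rates.

```python
# Illustrative SDT computation (hypothetical counts, not the study's data).
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion) with a log-linear correction for extreme rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa             # sensitivity
    criterion = -0.5 * (z_hit + z_fa)  # negative values indicate a liberal ("distorted") bias
    return d_prime, criterion

# Hypothetical early block with a liberal bias, and a late block where it has faded
print(sdt_measures(hits=45, misses=5, false_alarms=30, correct_rejections=20))
print(sdt_measures(hits=38, misses=12, false_alarms=12, correct_rejections=38))
```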
Affiliation(s)
- Yi Gao
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA.
- Sixing Chen
- Department of Psychology, New York University, New York, NY, USA
- Dobromir Rahnev
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
3. Costa C, Scarpazza C, Filippini N. The Anterior Insula Engages in Feature- and Context-Level Predictive Coding Processes for Recognition Judgments. J Neurosci 2025; 45:e0872242024. [PMID: 39622647] [PMCID: PMC11780353] [DOI: 10.1523/jneurosci.0872-24.2024]
Abstract
Predictive coding mechanisms facilitate detection and perceptual recognition, thereby influencing recognition judgments, and, broadly, perceptual decision-making. The anterior insula (AI) has been shown to be involved in reaching a decision about discrimination and recognition, as well as to coordinate brain circuits related to reward-based learning. Yet, experimental studies in the context of recognition and decision-making, targeting this area and based on formal trial-by-trial predictive coding computational quantities, are sparse. The present study goes beyond previous investigations and provides a predictive coding computational account of the role of the AI in recognition-related decision-making, by leveraging the Zaragoza-Jimenez et al. (2023) open fMRI dataset (17 female, 10 male participants) and computational modeling, characterized by a combination of view-independent familiarity learning and contextual learning. Using model-based fMRI, we show that, in the context of a two-option forced-choice identity recognition task, the AI engages in feature-level (i.e., view-independent familiarity) updating and error signaling processes and context-level familiarity updating to reach a recognition judgment. Our findings highlight that an important functional property of the AI is to update the level of familiarity of a given stimulus while also adapting to task-relevant, contextual information. Ultimately, these expectations, combined with input visual signals through reciprocally interconnected feedback and feedforward processes, facilitate recognition judgments, thereby guiding perceptual decision-making.
Affiliation(s)
- Cristiano Costa
- Padova Neuroscience Center, Università degli Studi di Padova, Padua 35131, Italy
- Cristina Scarpazza
- Dipartimento di Psicologia Generale, Università degli Studi di Padova, Padua 35131, Italy
- IRCCS San Camillo Hospital, Venice 30126, Italy
4. Marsicano G, Bertini C, Ronconi L. Decoding cognition in neurodevelopmental, psychiatric and neurological conditions with multivariate pattern analysis of EEG data. Neurosci Biobehav Rev 2024; 164:105795. [PMID: 38977116] [DOI: 10.1016/j.neubiorev.2024.105795]
Abstract
Multivariate pattern analysis (MVPA) of electroencephalographic (EEG) data represents a revolutionary approach to investigate how the brain encodes information. By considering complex interactions among spatio-temporal features at the individual level, MVPA overcomes the limitations of univariate techniques, which often fail to account for the significant inter- and intra-individual neural variability. This is particularly relevant when studying clinical populations, and therefore MVPA of EEG data has recently started to be employed as a tool to study cognition in brain disorders. Here, we review the insights offered by this methodology in the study of anomalous patterns of neural activity in conditions such as autism, ADHD, schizophrenia, dyslexia, neurological and neurodegenerative disorders, within different cognitive domains (perception, attention, memory, consciousness). Despite potential drawbacks that should be attentively addressed, these studies reveal a peculiar sensitivity of MVPA in unveiling dysfunctional and compensatory neurocognitive dynamics of information processing, which often remain blind to traditional univariate approaches. Such higher sensitivity in characterizing individual neurocognitive profiles can provide unique opportunities to optimise assessment and promote personalised interventions.
Affiliation(s)
- Gianluca Marsicano
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, Bologna 40121, Italy; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, Cesena 47023, Italy.
- Caterina Bertini
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, Bologna 40121, Italy; Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, Cesena 47023, Italy.
- Luca Ronconi
- School of Psychology, Vita-Salute San Raffaele University, Milan, Italy; Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy.
5. Nuiten SA, de Gee JW, Zantvoord JB, Fahrenfort JJ, van Gaal S. Pharmacological Elevation of Catecholamine Levels Improves Perceptual Decisions, But Not Metacognitive Insight. eNeuro 2024; 11:ENEURO.0019-24.2024. [PMID: 39029953] [PMCID: PMC11287790] [DOI: 10.1523/eneuro.0019-24.2024]
Abstract
Perceptual decisions are often accompanied by a feeling of decision confidence. Where the parietal cortex is known for its crucial role in shaping such perceptual decisions, metacognitive evaluations are thought to additionally rely on the (pre)frontal cortex. Because of this supposed neural differentiation between these processes, perceptual and metacognitive decisions may be divergently affected by changes in internal (e.g., attention, arousal) and external (e.g., task and environmental demands) factors. Although intriguing, causal evidence for this hypothesis remains scarce. Here, we investigated the causal effect of two neuromodulatory systems on behavioral and neural measures of perceptual and metacognitive decision-making. Specifically, we pharmacologically elevated levels of catecholamines (with atomoxetine) and acetylcholine (with donepezil) in healthy adult human participants performing a visual discrimination task in which we gauged decision confidence, while electroencephalography was measured. Where cholinergic effects were not robust, catecholaminergic enhancement improved perceptual sensitivity, while at the same time leaving metacognitive sensitivity unaffected. Neurally, catecholaminergic elevation did not affect sensory representations of task-relevant visual stimuli but instead enhanced well-known decision signals measured over the centroparietal cortex, reflecting the accumulation of sensory evidence over time. Crucially, catecholaminergic enhancement concurrently impoverished neural markers measured over the frontal cortex linked to the formation of metacognitive evaluations. Enhanced catecholaminergic neuromodulation thus improves perceptual but not metacognitive decision-making.
Affiliation(s)
- Stijn A Nuiten
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain & Cognition, University of Amsterdam, Amsterdam, Netherlands
- Department of Psychiatry (UPK), University of Basel, Basel, Switzerland
- Jan Willem de Gee
- Amsterdam Brain & Cognition, University of Amsterdam, Amsterdam, Netherlands
- Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
- Jasper B Zantvoord
- Department of Psychiatry, Amsterdam UMC location AMC, Amsterdam, Netherlands
- Amsterdam Neuroscience, Amsterdam, Netherlands
- Johannes J Fahrenfort
- Institute for Brain and Behavior Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Simon van Gaal
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain & Cognition, University of Amsterdam, Amsterdam, Netherlands
6. Harrison WJ, Bays PM, Rideaux R. Neural tuning instantiates prior expectations in the human visual system. Nat Commun 2023; 14:5320. [PMID: 37658039] [PMCID: PMC10474129] [DOI: 10.1038/s41467-023-41027-w]
Abstract
Perception is often modelled as a process of active inference, whereby prior expectations are combined with noisy sensory measurements to estimate the structure of the world. This mathematical framework has proven critical to understanding perception, cognition, motor control, and social interaction. While theoretical work has shown how priors can be computed from environmental statistics, their neural instantiation could be realised through multiple competing encoding schemes. Using a data-driven approach, here we extract the brain's representation of visual orientation and compare this with simulations from different sensory coding schemes. We found that the tuning of the human visual system is highly conditional on stimulus-specific variations in a way that is not predicted by previous proposals. We further show that the adopted encoding scheme effectively embeds an environmental prior for natural image statistics within the sensory measurement, providing the functional architecture necessary for optimal inference in the earliest stages of cortical processing.
Affiliation(s)
- William J Harrison
- School of Psychology, The University of Queensland, St Lucia, Australia
- Queensland Brain Institute, The University of Queensland, St Lucia, Australia
- Paul M Bays
- Department of Psychology, The University of Cambridge, Cambridge, UK
- Reuben Rideaux
- Queensland Brain Institute, The University of Queensland, St Lucia, Australia.
- Department of Psychology, The University of Cambridge, Cambridge, UK.
- School of Psychology, The University of Sydney, Camperdown, Australia.
7. Xu Q, Hu J, Qin Y, Li G, Zhang X, Li P. Intention affects fairness processing: Evidence from behavior and representational similarity analysis of event-related potential signals. Hum Brain Mapp 2023; 44:2451-2464. [PMID: 36749642] [PMCID: PMC10028638] [DOI: 10.1002/hbm.26223]
Abstract
In an ultimatum game, the responder must decide between pursuing self-interest and insisting on fairness, and these choices are affected by the intentions of the proposer. However, the time course of this social decision-making process is unclear. Representational similarity analysis (RSA) is a useful technique for linking brain activity with rich behavioral data sets. In this study, electroencephalography (EEG) was used to measure the time course of neural responses to proposed allocation schemes with different intentions. Twenty-eight participants played an ultimatum game as responders. They had to choose between accepting and rejecting the fair or unfair money allocation schemes of proposers. The schemes were offered based on the proposer's selfish intention (monetary gain), altruistic intention (donation to charity), or ambiguous intention (unknown to the responder). We used a spatiotemporal RSA and inter-subject RSA (IS-RSA) to explore the connections between event-related potentials (ERPs) after offer presentation and intention presentation with four types of behavioral data (acceptance, response time, fairness ratings, and pleasantness ratings). The spatiotemporal RSA results revealed that only response time variation was linked with the difference in ERPs at 432-592 ms after offer presentation on the posterior parietal and prefrontal regions. Meanwhile, the IS-RSA results found a significant association between inter-individual differences in response time and differences in ERP activity at 596-812 ms after the presentation of ambiguous intention, particularly in the prefrontal region. This study expands the intention-based reciprocal model to the third-party context and demonstrates that brain activity can represent response time differences in social decision-making.
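Representational similarity analysis of the kind described here relates a neural dissimilarity structure to a behavioural one. The sketch below is a generic, hypothetical illustration with random data, not the study's spatiotemporal or inter-subject RSA pipeline; condition labels and numbers are invented.

```python
# Illustrative RSA sketch: correlate an ERP-based RDM with a behavioural RDM.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_conditions, n_channels = 6, 64                    # e.g., fair/unfair offers x three intention types
erp_patterns = rng.standard_normal((n_conditions, n_channels))  # mean ERP topography per condition
response_times = rng.normal(600.0, 50.0, n_conditions)          # behavioural measure (ms)

neural_rdm = pdist(erp_patterns, metric='correlation')          # condition-pair dissimilarities
behavioural_rdm = pdist(response_times[:, None], metric='euclidean')

rho, p = spearmanr(neural_rdm, behavioural_rdm)
print(f"neural-behavioural RDM correlation: rho = {rho:.2f}, p = {p:.3f}")
```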
Affiliation(s)
- Qiang Xu
- Brain Function and Psychological Science Research Center, Shenzhen University, Shenzhen, China
- Jiali Hu
- Brain Function and Psychological Science Research Center, Shenzhen University, Shenzhen, China
- Yi Qin
- Brain Function and Psychological Science Research Center, Shenzhen University, Shenzhen, China
- Guojie Li
- Brain Function and Psychological Science Research Center, Shenzhen University, Shenzhen, China
- Xukai Zhang
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Peng Li
- Brain Function and Psychological Science Research Center, Shenzhen University, Shenzhen, China
8. Dijkstra N, Fleming SM. Subjective signal strength distinguishes reality from imagination. Nat Commun 2023; 14:1627. [PMID: 36959279] [PMCID: PMC10036541] [DOI: 10.1038/s41467-023-37322-1]
Abstract
Humans are voracious imaginers, with internal simulations supporting memory, planning and decision-making. Because the neural mechanisms supporting imagery overlap with those supporting perception, a foundational question is how reality and imagination are kept apart. One possibility is that the intention to imagine is used to identify and discount self-generated signals during imagery. Alternatively, because internally generated signals are generally weaker, sensory strength is used to index reality. Traditional psychology experiments struggle to investigate this issue as subjects can rapidly learn that real stimuli are in play. Here, we combined one-trial-per-participant psychophysics with computational modelling and neuroimaging to show that imagined and perceived signals are in fact intermixed, with judgments of reality being determined by whether this intermixed signal is strong enough to cross a reality threshold. A consequence of this account is that when virtual or imagined signals are strong enough, they become subjectively indistinguishable from reality.
Affiliation(s)
- Nadine Dijkstra
- Wellcome Centre for Human Neuroimaging, University College London, London, UK.
- Stephen M Fleming
- Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Max Planck UCL Centre for Computational Psychiatry and Aging Research, University College London, London, UK
- Department of Experimental Psychology, University College London, London, UK
9. Alilović J, Lampers E, Slagter HA, van Gaal S. Illusory object recognition is either perceptual or cognitive in origin depending on decision confidence. PLoS Biol 2023; 21:e3002009. [PMID: 36862734] [PMCID: PMC10013920] [DOI: 10.1371/journal.pbio.3002009]
Abstract
We occasionally misinterpret ambiguous sensory input or report a stimulus when none is presented. It is unknown whether such errors have a sensory origin and reflect true perceptual illusions, or whether they have a more cognitive origin (e.g., are due to guessing), or both. When participants performed an error-prone and challenging face/house discrimination task, multivariate electroencephalography (EEG) analyses revealed that during decision errors (e.g., mistaking a face for a house), sensory stages of visual information processing initially represent the presented stimulus category. Crucially however, when participants were confident in their erroneous decision, so when the illusion was strongest, this neural representation flipped later in time and reflected the incorrectly reported percept. This flip in neural pattern was absent for decisions that were made with low confidence. This work demonstrates that decision confidence arbitrates between perceptual decision errors, which reflect true illusions of perception, and cognitive decision errors, which do not.
Affiliation(s)
- Josipa Alilović
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, the Netherlands
- Eline Lampers
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Heleen A. Slagter
- Department of Applied and Experimental Psychology, Vrije Universiteit Amsterdam, the Netherlands
- Institute for Brain and Behavior, Vrije Universiteit Amsterdam, the Netherlands
- Simon van Gaal
- Department of Psychology, University of Amsterdam, Amsterdam, the Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, the Netherlands
10. Han B, Zhang Y, Shen L, Mo L, Chen Q. Task demands modulate pre-stimulus alpha frequency and sensory template during bistable apparent motion perception. Cereb Cortex 2023; 33:1679-1692. [PMID: 35512283] [DOI: 10.1093/cercor/bhac165]
Abstract
Despite ambiguous environmental inputs, top-down attention biases our subjective perception toward the preferred percepts, by modulating prestimulus neural activity or inducing prestimulus sensory templates that carry concrete internal sensory representations of the preferred percepts. In contrast to frequent changes of behavioral goals in the typical cue-target paradigm, human beings are often engaged in a prolonged task state with only 1 specific behavioral goal. It remains unclear how prestimulus neural signals and sensory templates are modulated in the latter case. To answer this question in the present electroencephalogram study on human subjects, we manipulated sustained task demands toward one of the 2 possible percepts in the bistable Ternus display, emphasizing either temporal integration or segregation. First, the prestimulus peak alpha frequency, which gated the temporal window of temporal integration, was effectively modulated by task demands. Furthermore, time-resolved decoding analyses showed that task demands biased neural representations toward the preferred percepts after the full presentation of bottom-up stimuli. More importantly, sensory templates resembling the preferred percepts emerged even before the bottom-up sensory evidence was sufficient to induce explicit percepts. Taken together, task demands modulate both prestimulus alpha frequency and sensory templates, to eventually bias subjective perception toward the preferred percepts.
Affiliation(s)
- Biao Han
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, Guangzhou 510631, China; School of Psychology, South China Normal University, Guangzhou 510631, China; Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China
- Yanni Zhang
- School of Psychology, South China Normal University, Guangzhou 510631, China
- Lu Shen
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, Guangzhou 510631, China; School of Psychology, South China Normal University, Guangzhou 510631, China; Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China
- Lei Mo
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, Guangzhou 510631, China; School of Psychology, South China Normal University, Guangzhou 510631, China; Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China
- Qi Chen
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, Guangzhou 510631, China; School of Psychology, South China Normal University, Guangzhou 510631, China; Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China
11. Distinct early and late neural mechanisms regulate feature-specific sensory adaptation in the human visual system. Proc Natl Acad Sci U S A 2023; 120:e2216192120. [PMID: 36724257] [PMCID: PMC9963156] [DOI: 10.1073/pnas.2216192120]
Abstract
A canonical feature of sensory systems is that they adapt to prolonged or repeated inputs, suggesting the brain encodes the temporal context in which stimuli are embedded. Sensory adaptation has been observed in the central nervous systems of many animal species, using techniques sensitive to a broad range of spatiotemporal scales of neural activity. Two competing models have been proposed to account for the phenomenon. One assumes that adaptation reflects reduced neuronal sensitivity to sensory inputs over time (the "fatigue" account); the other posits that adaptation arises due to increased neuronal selectivity (the "sharpening" account). To adjudicate between these accounts, we exploited the well-known "tilt aftereffect", which reflects adaptation to orientation information in visual stimuli. We recorded whole-brain activity with millisecond precision from human observers as they viewed oriented gratings before and after adaptation, and used inverted encoding modeling to characterize feature-specific neural responses. We found that both fatigue and sharpening mechanisms contribute to the tilt aftereffect, but that they operate at different points in the sensory processing cascade to produce qualitatively distinct outcomes. Specifically, fatigue operates during the initial stages of processing, consistent with tonic inhibition of feedforward responses, whereas sharpening occurs ~200 ms later, consistent with feedback or local recurrent activity. Our findings reconcile two major accounts of sensory adaptation, and reveal how this canonical process optimizes the detection of change in sensory inputs through efficient neural coding.
12. Johnson PA, Blom T, van Gaal S, Feuerriegel D, Bode S, Hogendoorn H. Position representations of moving objects align with real-time position in the early visual response. eLife 2023; 12:e82424. [PMID: 36656268] [PMCID: PMC9851612] [DOI: 10.7554/elife.82424]
Abstract
When interacting with the dynamic world, the brain receives outdated sensory information, due to the time required for neural transmission and processing. In motion perception, the brain may overcome these fundamental delays through predictively encoding the position of moving objects using information from their past trajectories. In the present study, we evaluated this proposition using multivariate analysis of high temporal resolution electroencephalographic data. We tracked neural position representations of moving objects at different stages of visual processing, relative to the real-time position of the object. During early stimulus-evoked activity, position representations of moving objects were activated substantially earlier than the equivalent activity evoked by unpredictable flashes, aligning the earliest representations of moving stimuli with their real-time positions. These findings indicate that the predictability of straight trajectories enables full compensation for the neural delays accumulated early in stimulus processing, but that delays still accumulate across later stages of cortical processing.
13. Magnetoencephalography recordings reveal the neural mechanisms of auditory contributions to improved visual detection. Commun Biol 2023; 6:12. [PMID: 36604455] [PMCID: PMC9816120] [DOI: 10.1038/s42003-022-04335-3]
Abstract
Sounds enhance the detection of visual stimuli while concurrently biasing an observer's decisions. To investigate the neural mechanisms that underlie such multisensory interactions, we decoded time-resolved Signal Detection Theory sensitivity and criterion parameters from magneto-encephalographic recordings of participants that performed a visual detection task. We found that sounds improved visual detection sensitivity by enhancing the accumulation and maintenance of perceptual evidence over time. Meanwhile, criterion decoding analyses revealed that sounds induced brain activity patterns that resembled the patterns evoked by an actual visual stimulus. These two complementary mechanisms of audiovisual interplay differed in terms of their automaticity: Whereas the sound-induced enhancement in visual sensitivity depended on participants being actively engaged in a detection task, we found that sounds activated the visual cortex irrespective of task demands, potentially inducing visual illusory percepts. These results challenge the classical assumption that sound-induced increases in false alarms exclusively correspond to decision-level biases.
14. Anandakumar DB, Liu RC. More than the end: OFF response plasticity as a mnemonic signature of a sound's behavioral salience. Front Comput Neurosci 2022; 16:974264. [PMID: 36148326] [PMCID: PMC9485674] [DOI: 10.3389/fncom.2022.974264]
Abstract
In studying how neural populations in sensory cortex code dynamically varying stimuli to guide behavior, the role of spiking after stimuli have ended has been underappreciated. This is despite growing evidence that such activity can be tuned, experience- and context-dependent, and necessary for sensory decisions that play out on a slower timescale. Here we review recent studies, focusing on the auditory modality, demonstrating that this so-called OFF activity can have a more complex temporal structure than the purely phasic firing that has often been interpreted as just marking the end of stimuli. While diverse and still incompletely understood mechanisms are likely involved in generating phasic and tonic OFF firing, more studies point to the continuing post-stimulus activity serving a short-term, stimulus-specific mnemonic function that is enhanced when the stimuli are particularly salient. We summarize these results with a conceptual model highlighting how more neurons within the auditory cortical population fire for longer duration after a sound's termination during an active behavior and can continue to do so even while passively listening to behaviorally salient stimuli. Overall, these studies increasingly suggest that tonic auditory cortical OFF activity holds an echoic memory of specific, salient sounds to guide behavioral decisions.
Affiliation(s)
- Dakshitha B. Anandakumar
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, United States
- Department of Biology, Emory University, Atlanta, GA, United States
- Robert C. Liu
- Department of Biology, Emory University, Atlanta, GA, United States
- Center for Translational Social Neuroscience, Emory University, Atlanta, GA, United States
15. Bo K, Cui L, Yin S, Hu Z, Hong X, Kim S, Keil A, Ding M. Decoding the temporal dynamics of affective scene processing. Neuroimage 2022; 261:119532. [PMID: 35931307] [DOI: 10.1016/j.neuroimage.2022.119532]
Abstract
Natural images containing affective scenes are used extensively to investigate the neural mechanisms of visual emotion processing. fMRI studies have shown that these images activate a large-scale distributed brain network that encompasses areas in visual, temporal, and frontal cortices. The underlying spatial and temporal dynamics, however, remain to be better characterized. We recorded simultaneous EEG-fMRI data while participants passively viewed affective images from the International Affective Picture System (IAPS). Applying multivariate pattern analysis to decode EEG data, and representational similarity analysis to fuse EEG data with simultaneously recorded fMRI data, we found that: (1) ∼80 ms after picture onset, perceptual processing of complex visual scenes began in early visual cortex, proceeding to ventral visual cortex at ∼100 ms, (2) between ∼200 and ∼300 ms (pleasant pictures: ∼200 ms; unpleasant pictures: ∼260 ms), affect-specific neural representations began to form, supported mainly by areas in occipital and temporal cortices, and (3) affect-specific neural representations were stable, lasting up to ∼2 s, and exhibited temporally generalizable activity patterns. These results suggest that affective scene representations in the brain are formed temporally in a valence-dependent manner and may be sustained by recurrent neural interactions among distributed brain areas.
Affiliation(s)
- Ke Bo
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Lihan Cui
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Siyang Yin
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Zhenhong Hu
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Xiangfei Hong
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA; Shanghai Key Laboratory of Psychotic Disorders, Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, 200030, China
- Sungkean Kim
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA; Department of Human-Computer Interaction, Hanyang University, Ansan, Republic of Korea
- Andreas Keil
- Department of Psychology, University of Florida, Gainesville, FL 32611, USA.
- Mingzhou Ding
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA.
16. Harris A, Hutcherson CA. Temporal dynamics of decision making: A synthesis of computational and neurophysiological approaches. Wiley Interdiscip Rev Cogn Sci 2022; 13:e1586. [PMID: 34854573] [DOI: 10.1002/wcs.1586]
Abstract
As interest in the temporal dynamics of decision-making has grown, researchers have increasingly turned to computational approaches such as the drift diffusion model (DDM) to identify how cognitive processes unfold during choice. At the same time, technological advances in noninvasive neurophysiological methods such as electroencephalography and magnetoencephalography now allow researchers to map the neural time course of decision making with millisecond precision. Combining these approaches can potentially yield important new insights into how choices emerge over time. Here we review recent research on the computational and neurophysiological correlates of perceptual and value-based decision making, from DDM parameters to scalp potentials and oscillatory neural activity. Starting with motor response preparation, the most well-understood aspect of the decision process, we discuss evidence that urgency signals and shifts in baseline activation, rather than shifts in the physiological value of the choice-triggering response threshold, are responsible for adjusting response times under speeded choice scenarios. Research on the neural correlates of starting point bias suggests that prestimulus activity can predict biases in motor choice behavior. Finally, studies examining the time dynamics of evidence construction and evidence accumulation have identified signals at frontocentral and centroparietal electrodes associated respectively with these processes, emerging 300-500 ms after stimulus onset. These findings can inform psychological theories of decision-making, providing empirical support for attribute weighting in value-based choice while suggesting theoretical alternatives to dual-process accounts. Further research combining computational and neurophysiological approaches holds promise for providing greater insight into the moment-by-moment evolution of the decision process. This article is categorized under: Psychology > Reasoning and Decision Making; Neuroscience > Cognition; Economics > Individual Decision-Making.
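For readers unfamiliar with the drift diffusion model discussed in this review, the toy simulation below shows how choices and response times arise from noisy evidence accumulation toward a bound. Parameter values are arbitrary and do not correspond to any fitted model in the cited work.

```python
# Toy drift diffusion simulation (illustrative only, generic parameter names).
import numpy as np

def simulate_ddm(drift=0.3, bound=1.0, start=0.0, noise=1.0,
                 non_decision=0.3, dt=0.001, max_t=3.0, n_trials=1000, seed=0):
    """Simulate choices (1 = upper bound, 0 = lower bound) and response times (s)."""
    rng = np.random.default_rng(seed)
    choices, rts = [], []
    for _ in range(n_trials):
        x, t = start, 0.0
        while abs(x) < bound and t < max_t:        # accumulate noisy evidence
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices.append(1 if x >= bound else 0)
        rts.append(t + non_decision)               # add non-decision time (encoding + motor)
    return np.array(choices), np.array(rts)

choices, rts = simulate_ddm()
print(f"proportion of upper-bound choices: {choices.mean():.2f}, mean RT: {rts.mean():.3f} s")
```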
Affiliation(s)
- Alison Harris
- Claremont McKenna College, Claremont, California, USA
17. Kuc A, Korchagin S, Maksimenko VA, Shusharina N, Hramov AE. Combining Statistical Analysis and Machine Learning for EEG Scalp Topograms Classification. Front Syst Neurosci 2021; 15:716897. [PMID: 34867218] [PMCID: PMC8635058] [DOI: 10.3389/fnsys.2021.716897]
Abstract
Incorporating brain-computer interfaces (BCIs) into daily life requires reducing the reliance of decoding algorithms on the calibration or enabling calibration with the minimal burden on the user. A potential solution could be a pre-trained decoder demonstrating a reasonable accuracy on the naive operators. Addressing this issue, we considered ambiguous stimuli classification tasks and trained an artificial neural network to classify brain responses to the stimuli of low and high ambiguity. We built a pre-trained classifier utilizing time-frequency features corresponding to the fundamental neurophysiological processes shared between subjects. To extract these features, we statistically contrasted electroencephalographic (EEG) spectral power between the classes in the representative group of subjects. As a result, the pre-trained classifier achieved 74% accuracy on the data of newly recruited subjects. Analysis of the literature suggested that a pre-trained classifier could help naive users to start using BCI bypassing training and further increased accuracy during the feedback session. Thus, our results contribute to using BCI during paralysis or limb amputation when there is no explicit user-generated kinematic output to properly train a decoder. In machine learning, our approach may facilitate the development of transfer learning (TL) methods for addressing the cross-subject problem. It allows extracting the interpretable feature subspace from the source data (the representative group of subjects) related to the target data (a naive user), preventing the negative transfer in the cross-subject tasks.
Affiliation(s)
- Alexander Kuc
- Center for Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
- Sergey Korchagin
- Department of Data Analysis and Machine Learning, Financial University Under the Government of the Russian Federation, Moscow, Russia
- Vladimir A Maksimenko
- Center for Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia; Institute of Information Technologies, Mathematics and Mechanics, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Neuroscience and Cognitive Technology Laboratory, Innopolis University, Innopolis, Russia
- Natalia Shusharina
- Center for Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
- Alexander E Hramov
- Center for Neurotechnology and Machine Learning, Immanuel Kant Baltic Federal University, Kaliningrad, Russia; Institute of Information Technologies, Mathematics and Mechanics, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia; Neuroscience and Cognitive Technology Laboratory, Innopolis University, Innopolis, Russia
18. Dijkstra N, van Gaal S, Geerligs L, Bosch SE, van Gerven MAJ. No Evidence for Neural Overlap between Unconsciously Processed and Imagined Stimuli. eNeuro 2021; 8:ENEURO.0228-21.2021. [PMID: 34593516] [PMCID: PMC8577044] [DOI: 10.1523/eneuro.0228-21.2021]
Abstract
Visual representations can be generated via feedforward or feedback processes. The extent to which these processes result in overlapping representations remains unclear. Previous work has shown that imagined stimuli elicit similar representations as perceived stimuli throughout the visual cortex. However, while representations during imagery are indeed only caused by feedback processing, neural processing during perception is an interplay of both feedforward and feedback processing. This means that any representational overlap could be because of overlap in feedback processes. In the current study, we aimed to investigate this issue by characterizing the overlap between feedforward- and feedback-initiated category representations during imagined stimuli, conscious perception, and unconscious processing using fMRI in humans of either sex. While all three conditions elicited stimulus representations in left lateral occipital cortex (LOC), significant similarities were observed only between imagery and conscious perception in this area. Furthermore, connectivity analyses revealed stronger connectivity between frontal areas and left LOC during conscious perception and in imagery compared with unconscious processing. Together, these findings can be explained by the idea that long-range feedback modifies visual representations, thereby reducing representational overlap between purely feedforward- and feedback-initiated stimulus representations measured by fMRI. Neural representations influenced by feedback, either stimulus driven (perception) or purely internally driven (imagery), are, however, relatively similar.
Affiliation(s)
- Nadine Dijkstra
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, United Kingdom
- Simon van Gaal
- Department of Psychology, Brain & Cognition, University of Amsterdam, 1000 GG, Amsterdam, The Netherlands
- Linda Geerligs
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands
- Sander E Bosch
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands
- Marcel A J van Gerven
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 GL, Nijmegen, The Netherlands
19. Sensor-Level Wavelet Analysis Reveals EEG Biomarkers of Perceptual Decision-Making. Sensors (Basel) 2021; 21:2461. [PMID: 33918223] [PMCID: PMC8038130] [DOI: 10.3390/s21072461]
Abstract
Perceptual decision-making requires transforming sensory information into decisions. Ambiguity in the sensory input affects perceptual decisions, inducing specific time-frequency patterns in EEG (electroencephalogram) signals. This paper uses a wavelet-based method to analyze how ambiguity affects EEG features during a perceptual decision-making task. We observe that parietal and temporal beta-band wavelet power monotonically increases throughout the perceptual process. Ambiguity induces high frontal beta-band power at 0.3–0.6 s post-stimulus onset, which may reflect an increasing reliance on top-down mechanisms to facilitate accumulating decision-relevant sensory features. Finally, this study analyzes the perceptual process using a mixed within-trial and within-subject design: we found significant percept-related changes in each subject and then tested their significance at the group level. Thus, the observed beta-band biomarkers are pronounced in single EEG trials and may serve as control commands for a brain-computer interface (BCI).
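Time-resolved wavelet power of the sort used as a biomarker here can be computed by convolving the signal with a complex Morlet wavelet. The sketch below uses a synthetic single-channel signal and arbitrary wavelet settings, not the study's sensor-level analysis.

```python
# Illustrative beta-band (20 Hz) Morlet wavelet power on a synthetic EEG trace.
import numpy as np

sfreq = 250.0
t = np.arange(0.0, 2.0, 1.0 / sfreq)                       # one 2 s trial
rng = np.random.default_rng(2)
signal = rng.standard_normal(t.size)
burst = slice(int(0.8 * sfreq), int(1.4 * sfreq))           # inject a 20 Hz burst at 0.8-1.4 s
signal[burst] += 0.8 * np.sin(2.0 * np.pi * 20.0 * t[burst])

freq, n_cycles = 20.0, 7.0                                  # centre frequency and wavelet width
sigma_t = n_cycles / (2.0 * np.pi * freq)
wt = np.arange(-3.5 * sigma_t, 3.5 * sigma_t, 1.0 / sfreq)
wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2.0 * sigma_t**2))
wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))            # unit-energy normalisation

analytic = np.convolve(signal, wavelet, mode='same')        # complex wavelet transform at 20 Hz
beta_power = np.abs(analytic) ** 2                          # time-resolved beta-band power
print(f"beta power peaks at t = {t[np.argmax(beta_power)]:.2f} s")
```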
20. Mahmud MS, Yeasin M, Bidelman GM. Data-driven machine learning models for decoding speech categorization from evoked brain responses. J Neural Eng 2021; 18. [PMID: 33690177] [PMCID: PMC8738965] [DOI: 10.1088/1741-2552/abecf0]
Abstract
Objective. Categorical perception (CP) of audio is critical to understand how the human brain perceives speech sounds despite widespread variability in acoustic properties. Here, we investigated the spatiotemporal characteristics of auditory neural activity that reflects CP for speech (i.e. differentiates phonetic prototypes from ambiguous speech sounds). Approach. We recorded 64-channel electroencephalograms as listeners rapidly classified vowel sounds along an acoustic-phonetic continuum. We used support vector machine classifiers and stability selection to determine when and where in the brain CP was best decoded across space and time via source-level analysis of the event-related potentials. Main results. We found that early (120 ms) whole-brain data decoded speech categories (i.e. prototypical vs. ambiguous tokens) with 95.16% accuracy (area under the curve 95.14%; F1-score 95.00%). Separate analyses on left hemisphere (LH) and right hemisphere (RH) responses showed that LH decoding was more accurate and earlier than RH (89.03% vs. 86.45% accuracy; 140 ms vs. 200 ms). Stability (feature) selection identified 13 regions of interest (ROIs) out of 68 brain regions [including auditory cortex, supramarginal gyrus, and inferior frontal gyrus (IFG)] that showed categorical representation during stimulus encoding (0-260 ms). In contrast, 15 ROIs (including fronto-parietal regions, IFG, motor cortex) were necessary to describe later decision stages (300-800 ms) of categorization, but these areas were highly associated with the strength of listeners' categorical hearing (i.e. slope of behavioral identification functions). Significance. Our data-driven multivariate models demonstrate that abstract categories emerge surprisingly early (∼120 ms) in the time course of speech processing and are dominated by engagement of a relatively compact fronto-temporal-parietal brain network.
Affiliation(s)
- Md Sultan Mahmud
- Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, TN 38152, United States of America
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- Mohammed Yeasin
- Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, TN 38152, United States of America
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States of America
- University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, United States of America
21. van Driel J, Olivers CNL, Fahrenfort JJ. High-pass filtering artifacts in multivariate classification of neural time series data. J Neurosci Methods 2021; 352:109080. [PMID: 33508412] [DOI: 10.1016/j.jneumeth.2021.109080]
Abstract
BACKGROUND Traditionally, EEG/MEG data are high-pass filtered and baseline-corrected to remove slow drifts. Minor deleterious effects of high-pass filtering in traditional time-series analysis have been well-documented, including temporal displacements. However, its effects on time-resolved multivariate pattern classification analyses (MVPA) are largely unknown. NEW METHOD To prevent potential displacement effects, we extend an alternative method of removing slow drift noise - robust detrending - with a procedure in which we mask out all cortical events from each trial. We refer to this method as trial-masked robust detrending. RESULTS In both real and simulated EEG data of a working memory experiment, we show that both high-pass filtering and standard robust detrending create artifacts that result in the displacement of multivariate patterns into activity silent periods, particularly apparent in temporal generalization analyses, and especially in combination with baseline correction. We show that trial-masked robust detrending is free from such displacements. COMPARISON WITH EXISTING METHOD(S) Temporal displacement may emerge even with modest filter cut-off settings such as 0.05 Hz, and even in regular robust detrending. However, trial-masked robust detrending results in artifact-free decoding without displacements. Baseline correction may unwittingly obfuscate spurious decoding effects and displace them to the rest of the trial. CONCLUSIONS Decoding analyses benefit from trial-masked robust detrending, without the unwanted side effects introduced by filtering or regular robust detrending. However, for sufficiently clean data sets and sufficiently strong signals, no filtering or detrending at all may work adequately. Implications for other types of data are discussed, followed by a number of recommendations.
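The trade-off examined in this paper, between high-pass filtering and (trial-masked) detrending, can be sketched as follows. This is a simplified illustration on simulated data with an arbitrary drift and a boxcar event; it is not the authors' robust detrending implementation, and the filter settings are assumptions for the example only.

```python
# Illustrative comparison: remove slow drift by high-pass filtering vs. masked polynomial detrending.
import numpy as np
from scipy.signal import butter, sosfiltfilt

sfreq = 256.0
t = np.arange(0.0, 4.0, 1.0 / sfreq)
rng = np.random.default_rng(3)
drift = 5.0 * t ** 2 / t[-1] ** 2                  # slow quadratic drift (arbitrary units)
event = np.where((t > 1.5) & (t < 2.0), 2.0, 0.0)  # boxcar standing in for a cortical event
eeg = drift + event + 0.5 * rng.standard_normal(t.size)

# Option 1: zero-phase Butterworth high-pass at 0.1 Hz (can displace the event in time)
sos = butter(2, 0.1, btype='highpass', fs=sfreq, output='sos')
eeg_highpassed = sosfiltfilt(sos, eeg)

# Option 2: fit a low-order polynomial to drift-only samples (event interval masked out)
# and subtract it, in the spirit of trial-masked detrending
mask = ~((t > 1.5) & (t < 2.0))
coefs = np.polyfit(t[mask], eeg[mask], deg=3)
eeg_detrended = eeg - np.polyval(coefs, t)

event_window = (t > 1.6) & (t < 1.9)
print(f"event amplitude after high-pass filtering: {eeg_highpassed[event_window].mean():.2f}")
print(f"event amplitude after masked detrending:  {eeg_detrended[event_window].mean():.2f}")
```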
Collapse
Affiliation(s)
- Joram van Driel
- Institute for Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands; Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
| | - Christian N L Olivers
- Institute for Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands; Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands
| | - Johannes J Fahrenfort
- Institute for Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, the Netherlands; Department of Experimental and Applied Psychology - Cognitive Psychology, Vrije Universiteit Amsterdam, the Netherlands; Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, the Netherlands; Department of Psychology, University of Amsterdam, Amsterdam 1001 NK, the Netherlands; Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam 1001 NK, the Netherlands.
| |
Collapse
|
23
|
Knotts JD, Michel M, Odegaard B. Defending subjective inflation: an inference to the best explanation. Neurosci Conscious 2020; 2020:niaa025. [PMID: 33343930 PMCID: PMC7734437 DOI: 10.1093/nc/niaa025] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Revised: 09/28/2020] [Accepted: 10/12/2020] [Indexed: 12/25/2022] Open
Abstract
In a recent opinion piece, Abid (2019) criticizes the hypothesis that subjective inflation may partly account for apparent phenomenological richness across the visual field and outside the focus of attention. In response, we address three main issues. First, we maintain that inflation should be interpreted as an intraperceptual, and not post-perceptual, phenomenon. Second, we describe how inflation may differ from filling-in. Finally, we contend that, in general, there is sufficient evidence to tip the scales toward intraperceptual interpretations of visibility and confidence judgments.
Collapse
Affiliation(s)
- J D Knotts
- Department of Psychology, University of California, Los Angeles, 502 Portola Plaza Los Angeles, CA 90095, USA
| | - Matthias Michel
- Centre for Philosophy of Natural and Social Science, London School of Economics and Political Science, Houghton Street London WC2A 2AE, UK
- Consciousness, Cognition & Computation Group, Centre for Research in Cognition & Neurosciences, Université Libre de Bruxelles (ULB), 50 avenue F.D. Roosevelt CP191 B–1050, Bruxelles, Belgium
| | - Brian Odegaard
- Department of Psychology, University of Florida, 945 Center Dr. P.O. Box 112250 Gainesville, FL 32603, USA
| |
Collapse
|
24
|
Dijkstra N, Ambrogioni L, Vidaurre D, van Gerven M. Neural dynamics of perceptual inference and its reversal during imagery. eLife 2020; 9:e53588. [PMID: 32686645 PMCID: PMC7371419 DOI: 10.7554/elife.53588] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2019] [Accepted: 06/30/2020] [Indexed: 12/27/2022] Open
Abstract
After the presentation of a visual stimulus, neural processing cascades from low-level sensory areas to increasingly abstract representations in higher-level areas. It is often hypothesised that a reversal in neural processing underlies the generation of mental images as abstract representations are used to construct sensory representations in the absence of sensory input. According to predictive processing theories, such reversed processing also plays a central role in later stages of perception. Direct experimental evidence of reversals in neural information flow has been missing. Here, we used a combination of machine learning and magnetoencephalography to characterise neural dynamics in humans. We provide direct evidence for a reversal of the perceptual feed-forward cascade during imagery and show that, during perception, such reversals alternate with feed-forward processing in an 11 Hz oscillatory pattern. Together, these results show how common feedback processes support both veridical perception and mental imagery.
Collapse
Affiliation(s)
- Nadine Dijkstra
- Donders Centre for Cognition, Radboud University, Donders Institute for Brain, Cognition and BehaviourNijmegenNetherlands
- Wellcome Centre for Human Neuroimaging, University College LondonLondonUnited Kingdom
| | - Luca Ambrogioni
- Donders Centre for Cognition, Radboud University, Donders Institute for Brain, Cognition and BehaviourNijmegenNetherlands
| | - Diego Vidaurre
- Oxford Centre for Human Brain Activity, Oxford UniversityOxfordUnited Kingdom
- Department of Clinical Health, Aarhus UniversityAarhusDenmark
| | - Marcel van Gerven
- Donders Centre for Cognition, Radboud University, Donders Institute for Brain, Cognition and BehaviourNijmegenNetherlands
| |
Collapse
|
25
|
Maksimenko VA, Kuc A, Frolov NS, Khramova MV, Pisarchik AN, Hramov AE. Dissociating Cognitive Processes During Ambiguous Information Processing in Perceptual Decision-Making. Front Behav Neurosci 2020; 14:95. [PMID: 32754018 PMCID: PMC7370842 DOI: 10.3389/fnbeh.2020.00095] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2019] [Accepted: 05/20/2020] [Indexed: 12/25/2022] Open
Abstract
Decision-making requires the accumulation of sensory evidence. However, in everyday life, sensory information is often ambiguous and contains decision-irrelevant features. This means that the brain must disambiguate sensory input and extract decision-relevant features. Sensory information processing and decision-making represent two subsequent stages of the perceptual decision-making process. While sensory processing relies on occipito-parietal neuronal activity in an earlier time window, decision-making lasts for a prolonged time, involving parietal and frontal areas. Although perceptual decision-making is being actively studied, its neuronal mechanisms under ambiguous sensory evidence lack detailed consideration. Here, we analyzed the brain activity of subjects accomplishing a perceptual decision-making task involving the classification of ambiguous stimuli. We demonstrated that ambiguity induced high frontal θ-band power for 0.15 s post-stimulus onset, indicating increased reliance on top-down processes, such as expectations and memory. Processing of ambiguous stimuli also caused high occipito-parietal β-band power for 0.2 s and high fronto-parietal β-band power for 0.35–0.42 s post-stimulus onset. We supposed that the former component reflected the disambiguation process while the latter reflected the decision-making phase. Our findings complement existing knowledge about ambiguous perception by providing additional information regarding the temporal discrepancy between the different cognitive processes during perceptual decision-making.
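Band-limited power of the kind analyzed above can be estimated, for instance, by band-pass filtering and taking the squared Hilbert envelope. The sketch below uses SciPy on a hypothetical single epoch; the band limits, channel indices, and time window are assumptions for illustration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(x, fs, low, high):
    """Instantaneous power in a band: band-pass filter + squared Hilbert envelope."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="bandpass")
    return np.abs(hilbert(filtfilt(b, a, x, axis=-1), axis=-1)) ** 2

fs = 250.0
rng = np.random.default_rng(2)
epoch = rng.standard_normal((64, int(1.0 * fs)))   # channels x samples, 1-s post-stimulus epoch
theta = band_power(epoch, fs, 4.0, 8.0)            # θ band (4-8 Hz assumed)
beta = band_power(epoch, fs, 15.0, 30.0)           # β band (15-30 Hz assumed)

early = slice(0, int(0.15 * fs))                   # first 150 ms post-stimulus
frontal = [0, 1, 2, 3]                             # placeholder frontal-channel indices
print(theta[frontal, early].mean(), beta[:, early].mean())
```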
Collapse
Affiliation(s)
- Vladimir A Maksimenko
- Institute of Information Technologies, Mathematics and Mechanics, Lobachevsky State University of Nizhny Novgorod, Nizhny Novgorod, Russia.,Center for Technologies in Robotics and Mechatronics Component, Innopolis University, Innopolis, Russia
| | - Alexander Kuc
- Center for Technologies in Robotics and Mechatronics Component, Innopolis University, Innopolis, Russia
| | - Nikita S Frolov
- Center for Technologies in Robotics and Mechatronics Component, Innopolis University, Innopolis, Russia
| | - Marina V Khramova
- Faculty of Information Technologies, Saratov State University, Saratov, Russia
| | - Alexander N Pisarchik
- Center for Technologies in Robotics and Mechatronics Component, Innopolis University, Innopolis, Russia.,Center for Biomedical Technology, Technical University of Madrid, Madrid, Spain
| | - Alexander E Hramov
- Center for Technologies in Robotics and Mechatronics Component, Innopolis University, Innopolis, Russia
| |
Collapse
|
26
|
Battaglini L. Effect of Repetitive Transcranial Magnetic Stimulation on a Target Moving in Front of a Static or Random Dynamic Visual Noise. Perception 2020; 49:882-892. [PMID: 32646284 DOI: 10.1177/0301006620940222] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Observers report a target disk moving in front of a static visual noise (SVN) background as moving more slowly than the same object moving in front of a random dynamic visual noise (rDVN) background, even when the physical speed is identical. To investigate in which brain region (lower vs. higher visual areas) the background and target signals might be combined to elicit this misperception, transcranial magnetic stimulation (TMS) was delivered over the early visual cortex (V1/V2), the middle temporal area (MT), and Cz (control site) while participants performed a speed discrimination task with targets moving in front of an SVN or an rDVN background. Results showed that TMS over MT reduced the perceived speed of the target moving in front of an SVN background, but not when the target was moving in front of an rDVN background. Moreover, TMS did not seem to interfere with encoding processes but more likely affected decoding processes under conditions of high uncertainty (i.e., when targets had similar speeds).
Collapse
|
27
|
Prior Expectations of Motion Direction Modulate Early Sensory Processing. J Neurosci 2020; 40:6389-6397. [PMID: 32641404 PMCID: PMC7424874 DOI: 10.1523/jneurosci.0537-20.2020] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2020] [Revised: 05/28/2020] [Accepted: 06/16/2020] [Indexed: 12/03/2022] Open
Abstract
Perception is a process of inference, integrating sensory inputs with prior expectations. However, little is known regarding the temporal dynamics of this integration. It has been proposed that expectation plays a role early in the perceptual process, biasing sensory processing. Alternatively, others suggest that expectations are integrated only at later, postperceptual decision-making stages. The current study aimed to dissociate between these hypotheses. We exposed human participants (male and female) to auditory cues predicting the likely direction of upcoming moving dot patterns, while recording neural activity using magnetoencephalography (MEG). Participants' reports of the moving dot directions were biased toward the direction predicted by the cues. To investigate when expectations affected sensory representations, we used inverted encoding models to decode the direction represented in early sensory signals. Strikingly, the cues modulated the direction represented in the MEG signal as early as 150 ms after visual stimulus onset. While this may not reflect a modulation of the initial feedforward sweep, it does reveal a modulation of early sensory representations. Exploratory analyses showed that the neural modulation was related to perceptual expectation effects: participants with a stronger perceptual bias toward the predicted direction also revealed a stronger reflection of the predicted direction in the MEG signal. For participants with this perceptual bias, a correlation between decoded and perceived direction already emerged before visual stimulus onset, suggesting that the prestimulus state of the visual cortex influences sensory processing. Together, these results suggest that expectations play an integral role in the neural computations underlying perception. SIGNIFICANCE STATEMENT Perception can be thought of as an inferential process in which our brains integrate sensory inputs with prior expectations to make sense of the world. This study investigated whether this integration occurs early or late in the process of perception. We exposed human participants to auditory cues that predicted the likely direction of visual moving dots, while recording neural activity with millisecond resolution using magnetoencephalography. Participants' perceptual reports of the direction of the moving dots were biased toward the predicted direction. Additionally, the predicted direction modulated the neural representation of the moving dots just 150 ms after they appeared. This suggests that prior expectations affected sensory processing at early stages, playing an integral role in the perceptual process.
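The inverted encoding approach mentioned above can be summarized in two least-squares steps: estimate sensor weights for a set of hypothetical direction-tuned channels on training data, then invert those weights on test data to recover channel responses. The sketch below runs on simulated data with made-up basis functions and dimensions; it is a generic illustration of the method, not the authors' analysis code.

```python
import numpy as np

def direction_basis(directions_deg, n_channels=8):
    """Idealized direction channels: half-rectified, narrowly tuned cosine basis functions."""
    centers = np.arange(0, 360, 360 / n_channels)
    d = np.deg2rad(np.asarray(directions_deg)[:, None] - centers[None, :])
    resp = np.cos(d)
    return np.where(resp > 0, resp ** 7, 0.0)

rng = np.random.default_rng(3)
n_train, n_test, n_sensors = 300, 100, 272
C_train = direction_basis(rng.uniform(0, 360, n_train))      # trials x channels
C_test_true = direction_basis(rng.uniform(0, 360, n_test))
W_true = rng.standard_normal((C_train.shape[1], n_sensors))  # simulated channel-to-sensor mixing
B_train = C_train @ W_true + 0.5 * rng.standard_normal((n_train, n_sensors))
B_test = C_test_true @ W_true + 0.5 * rng.standard_normal((n_test, n_sensors))

# Step 1: estimate sensor weights from training data (B = C @ W)
W_hat = np.linalg.lstsq(C_train, B_train, rcond=None)[0]
# Step 2: invert the estimated model on test data to recover channel responses
C_test_hat = np.linalg.lstsq(W_hat.T, B_test.T, rcond=None)[0].T
```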
Collapse
|
28
|
Mazzi C, Mazzeo G, Savazzi S. Late Positivity Does Not Meet the Criteria to be Considered a Proper Neural Correlate of Perceptual Awareness. Front Syst Neurosci 2020; 14:36. [PMID: 32733211 PMCID: PMC7358964 DOI: 10.3389/fnsys.2020.00036] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2019] [Accepted: 05/18/2020] [Indexed: 11/19/2022] Open
Abstract
Contrastive analysis has been widely employed in the search for the electrophysiological neural correlates of consciousness. However, despite its clear logic, it has been argued that it may not succeed in isolating neural processes solely involved in the emergence of perceptual awareness. In fact, data from contrastive analysis may be contaminated by potential confounding factors reflecting distinct, though related, processes either preceding or following the conscious perception. At present, the ERP components representing the proper correlates of perceptual awareness still remain to be identified among those correlating with awareness (i.e., Visual Awareness Negativity, VAN, and Late Positivity, LP). In order to dissociate visual awareness from post-perceptual confounds specifically related to decision making, we manipulated the response criterion, which affects how a percept is translated into a decision. In particular, while performing an orientation discrimination task, participants were asked to shift their response criterion across sessions. As a consequence, the resulting modulation should concern the ERP component(s) not exclusively reflecting mechanisms regulating the subjective conscious experience itself, but rather the processes accompanying it. Electrophysiological results showed that N1 and P3 were sensitive to the response criterion adopted by participants. Additionally, the more participants shifted their response criterion, the larger the ERP modulation, indicating the critical role of these components in decision-making processes regardless of awareness level. When considering data independently of the response criterion, the aware vs. unaware contrast showed that both VAN and LP were significant. Crucially, the LP component was also modulated by the interaction of awareness and response criterion, while VAN remained unaffected. In agreement with previous literature, these findings provide evidence supporting the hypothesis that VAN tracks the emergence of visual awareness by encoding the conscious percept, whereas LP reflects the contribution of post-perceptual processes related to response requirements. This excludes a direct functional role of this later component in giving rise to perceptual awareness.
Collapse
Affiliation(s)
- Chiara Mazzi
- Perception and Awareness (PandA) Laboratory, Department of Neuroscience, Biomedicine and Movement Sciences, University of Verona, Verona, Italy
| | - Gaetano Mazzeo
- Perception and Awareness (PandA) Laboratory, Department of Neuroscience, Biomedicine and Movement Sciences, University of Verona, Verona, Italy
| | - Silvia Savazzi
- Perception and Awareness (PandA) Laboratory, Department of Neuroscience, Biomedicine and Movement Sciences, University of Verona, Verona, Italy
| |
Collapse
|
29
|
The interplay between multisensory integration and perceptual decision making. Neuroimage 2020; 222:116970. [PMID: 32454204 DOI: 10.1016/j.neuroimage.2020.116970] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2019] [Revised: 03/23/2020] [Accepted: 05/15/2020] [Indexed: 01/15/2023] Open
Abstract
Facing perceptual uncertainty, the brain combines information from different senses to make optimal perceptual decisions and to guide behavior. However, decision making has been investigated mostly in unimodal contexts. Thus, how the brain integrates multisensory information during decision making is still unclear. Two opposing, but not mutually exclusive, scenarios are plausible: either the brain thoroughly combines the signals from different modalities before starting to build a supramodal decision, or unimodal signals are integrated during decision formation. To answer this question, we devised a paradigm mimicking naturalistic situations where human participants were exposed to continuous cacophonous audiovisual inputs containing an unpredictable signal cue in one or two modalities and had to perform a signal detection task or a cue categorization task. First, model-based analyses of behavioral data indicated that multisensory integration takes place alongside perceptual decision making. Next, using supervised machine learning on concurrently recorded EEG, we identified neural signatures of two processing stages: sensory encoding and decision formation. Generalization analyses across experimental conditions and time revealed that multisensory cues were processed faster during both stages. We further established that acceleration of neural dynamics during sensory encoding and decision formation was directly linked to multisensory integration. Our results were consistent across both signal detection and categorization tasks. Taken together, the results revealed a continuous dynamic interplay between multisensory integration and decision making processes (mixed scenario), with integration of multimodal information taking place both during sensory encoding as well as decision formation.
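The temporal generalization analysis referred to above trains a classifier at one time point and tests it at all others, yielding a train-time-by-test-time accuracy matrix whose off-diagonal structure reveals whether neural codes are stable or shifted in time. A minimal sketch on hypothetical data (the array shapes and the choice of a linear discriminant classifier are assumptions, not the authors' settings):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.standard_normal((160, 64, 80))      # trials x channels x time samples (hypothetical)
y = rng.integers(0, 2, size=160)            # e.g. unisensory vs. multisensory cue trials
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

n_times = X.shape[2]
gen = np.zeros((n_times, n_times))          # rows: training time, columns: testing time
for t_train in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(X_tr[:, :, t_train], y_tr)
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X_te[:, :, t_test], y_te)
```

In such a matrix, a decodable band shifted toward earlier test times for multisensory trials would correspond to the acceleration of neural dynamics described above.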
Collapse
|
30
|
Decoding across sensory modalities reveals common supramodal signatures of conscious perception. Proc Natl Acad Sci U S A 2020; 117:7437-7446. [PMID: 32184331 PMCID: PMC7132110 DOI: 10.1073/pnas.1912584117] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
An increasing number of studies highlight common brain regions and processes in mediating conscious sensory experience. While most studies have been performed in the visual modality, it is implicitly assumed that similar processes are involved in other sensory modalities. However, the existence of supramodal neural processes related to conscious perception has not been convincingly shown so far. Here, we aim to directly address this issue by investigating whether neural correlates of conscious perception in one modality can predict conscious perception in a different modality. In two separate experiments, we presented participants with successive blocks of near-threshold tasks involving subjective reports of tactile, visual, or auditory stimuli during the same magnetoencephalography (MEG) acquisition. Using decoding analysis in the poststimulus period between sensory modalities, our first experiment uncovered supramodal spatiotemporal neural activity patterns predicting conscious perception of the feeble stimulation. Strikingly, these supramodal patterns included activity in primary sensory regions not directly relevant to the task (e.g., neural activity in visual cortex predicting conscious perception of auditory near-threshold stimulation). We carefully replicate our results in a control experiment, which furthermore shows that the relevant patterns are independent of the type of report (i.e., whether conscious perception was reported by pressing or withholding a button press). Using standard paradigms for probing neural correlates of conscious perception, our findings reveal a common signature of conscious access across sensory modalities and illustrate the temporally late and widespread broadcasting of neural representations, even into task-unrelated primary sensory processing regions.
Collapse
|
31
|
Bitzer S, Park H, Maess B, von Kriegstein K, Kiebel SJ. Representation of Perceptual Evidence in the Human Brain Assessed by Fast, Within-Trial Dynamic Stimuli. Front Hum Neurosci 2020; 14:9. [PMID: 32116600 PMCID: PMC7010639 DOI: 10.3389/fnhum.2020.00009] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2019] [Accepted: 01/13/2020] [Indexed: 01/07/2023] Open
Abstract
In perceptual decision making the brain extracts and accumulates decision evidence from a stimulus over time and eventually makes a decision based on the accumulated evidence. Several characteristics of this process have been observed in human electrophysiological experiments, especially an average build-up of motor-related signals supposedly reflecting accumulated evidence, when averaged across trials. Another recently established approach to investigate the representation of decision evidence in brain signals is to correlate the within-trial fluctuations of decision evidence with the measured signals. We here report results of this approach for a two-alternative forced choice reaction time experiment measured using magnetoencephalography (MEG) recordings. Our results show: (1) that decision evidence is most strongly represented in the MEG signals in three consecutive phases and (2) that posterior cingulate cortex is involved most consistently, among all brain areas, in all three of the identified phases. As most previous work on perceptual decision making in the brain has focused on parietal and motor areas, our findings therefore suggest that the role of the posterior cingulate cortex in perceptual decision making may be currently underestimated.
Collapse
Affiliation(s)
- Sebastian Bitzer
- Department of Psychology, Technische Universität Dresden, Dresden, Germany
| | - Hame Park
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Bielefeld, Germany.,Center of Excellence Cognitive Interaction Technology, Bielefeld University, Bielefeld, Germany
| | - Burkhard Maess
- Brain Networks Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Katharina von Kriegstein
- Department of Psychology, Technische Universität Dresden, Dresden, Germany.,Max Planck Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Stefan J Kiebel
- Department of Psychology, Technische Universität Dresden, Dresden, Germany
| |
Collapse
|
32
|
Smith CM, Federmeier KD. Neural Signatures of Learning Novel Object-Scene Associations. J Cogn Neurosci 2020; 32:783-803. [PMID: 31933437 DOI: 10.1162/jocn_a_01530] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Objects are perceived within rich visual contexts, and statistical associations may be exploited to facilitate their rapid recognition. Recent work using natural scene-object associations suggests that scenes can prime the visual form of associated objects, but it remains unknown whether this relies on an extended learning process. We asked participants to learn categorically structured associations between novel objects and scenes in a paired associate memory task while ERPs were recorded. In the test phase, scenes were first presented (2500 msec), followed by objects that matched or mismatched the scene; degree of contextual mismatch was manipulated along visual and categorical dimensions. Matching objects elicited a reduced N300 response, suggesting visuostructural priming based on recently formed associations. Amplitude of an extended positivity (onset ∼200 msec) was sensitive to visual distance between the presented object and the contextually associated target object, most likely indexing visual template matching. Results suggest recent associative memories may be rapidly recruited to facilitate object recognition in a top-down fashion, with clinical implications for populations with impairments in hippocampal-dependent memory and executive function.
Collapse
|
33
|
Maksimenko VA, Frolov NS, Hramov AE, Runnova AE, Grubov VV, Kurths J, Pisarchik AN. Neural Interactions in a Spatially-Distributed Cortical Network During Perceptual Decision-Making. Front Behav Neurosci 2019; 13:220. [PMID: 31607873 PMCID: PMC6769171 DOI: 10.3389/fnbeh.2019.00220] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2019] [Accepted: 09/05/2019] [Indexed: 01/11/2023] Open
Abstract
Behavioral experiments show that attention is not maintained at a constant level but fluctuates with time. Recent studies associate such fluctuations with the dynamics of attention-related cortical networks; however, the exact mechanism remains unclear. To address this issue, we consider functional neuronal interactions during the accomplishment of a reaction time (RT) task which requires sustained attention. Participants perform a binary classification of a large number of presented ambiguous visual stimuli with different degrees of ambiguity. Generally, high ambiguity causes high RT and vice versa. However, we demonstrate that RT fluctuates even when the stimulus ambiguity remains unchanged. The analysis of neuronal activity reveals that the subject's behavioral response is preceded by the formation of a distributed functional network in the β-frequency band. This network is characterized by high connectivity in the frontal cortex and is supposed to subserve a decision-making process. We show that neither the network structure nor the duration of its formation depends on RT or stimulus ambiguity. In turn, RT is related to the moment in time when the β-band functional network emerges. We hypothesize that RT is affected by the processes preceding the decision-making stage, e.g., encoding visual sensory information and extracting decision-relevant features from raw sensory information.
Collapse
Affiliation(s)
- Vladimir A Maksimenko
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
| | - Nikita S Frolov
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
| | - Alexander E Hramov
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
| | - Anastasia E Runnova
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
| | - Vadim V Grubov
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia
| | - Jürgen Kurths
- Research Domain IV "Complexity Science", Potsdam Institute for Climate Impact Research, Potsdam, Germany.,Department of Physics, Humboldt University, Berlin, Germany.,Faculty of Biology, Saratov State University, Saratov, Russia
| | - Alexander N Pisarchik
- Neuroscience and Cognitive Technology Laboratory, Center for Technologies in Robotics and Mechatronics Components, Innopolis University, Innopolis, Russia.,Center for Biomedical Technology, Technical University of Madrid, Madrid, Spain
| |
Collapse
|
34
|
Meijs EL, Mostert P, Slagter HA, de Lange FP, van Gaal S. Exploring the role of expectations and stimulus relevance on stimulus-specific neural representations and conscious report. Neurosci Conscious 2019; 2019:niz011. [PMID: 31456886 PMCID: PMC6704346 DOI: 10.1093/nc/niz011] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2018] [Revised: 07/05/2019] [Accepted: 07/08/2019] [Indexed: 12/15/2022] Open
Abstract
Subjective experience can be influenced by top-down factors, such as expectations and stimulus relevance. Recently, it has been shown that expectations can enhance the likelihood that a stimulus is consciously reported, but the neural mechanisms supporting this enhancement are still unclear. We manipulated stimulus expectations within the attentional blink (AB) paradigm using letters and combined visual psychophysics with magnetoencephalographic (MEG) recordings to investigate whether prior expectations may enhance conscious access by sharpening stimulus-specific neural representations. We further explored how stimulus-specific neural activity patterns are affected by the factors expectation, stimulus relevance and conscious report. First, we show that valid expectations about the identity of an upcoming stimulus increase the likelihood that it is consciously reported. Second, using a series of multivariate decoding analyses, we show that the identity of letters presented in and out of the AB can be reliably decoded from MEG data. Third, we show that early sensory stimulus-specific neural representations are similar for reported and missed target letters in the AB task (active report required) and an oddball task in which the letter was clearly presented but its identity was task-irrelevant. However, later sustained and stable stimulus-specific representations were uniquely observed when target letters were consciously reported (decision-dependent signal). Fourth, we show that global pre-stimulus neural activity biased perceptual decisions for a ‘seen’ response. Fifth and last, no evidence was obtained for the sharpening of sensory representations by top-down expectations. We discuss these findings in light of emerging models of perception and conscious report highlighting the role of expectations and stimulus relevance.
Collapse
Affiliation(s)
- Erik L Meijs
- Radboud University Medical Center, Donders Institute for Brain, Cognition and Behaviour, Nijmegen 6500 HB, the Netherlands.,Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6500 HB, the Netherlands
| | - Pim Mostert
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6500 HB, the Netherlands
| | - Heleen A Slagter
- Department of Psychology, University of Amsterdam, Amsterdam 1001 NK, the Netherlands.,Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam 1001 NK, the Netherlands
| | - Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6500 HB, the Netherlands
| | - Simon van Gaal
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen 6500 HB, the Netherlands.,Department of Psychology, University of Amsterdam, Amsterdam 1001 NK, the Netherlands.,Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam 1001 NK, the Netherlands
| |
Collapse
|
35
|
Validating a dimension of doubt in decision-making: A proposed endophenotype for obsessive-compulsive disorder. PLoS One 2019; 14:e0218182. [PMID: 31194808 PMCID: PMC6564001 DOI: 10.1371/journal.pone.0218182] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2018] [Accepted: 05/28/2019] [Indexed: 12/13/2022] Open
Abstract
Doubt is subjective uncertainty about one's perceptions and recall. It can impair decision-making and is a prominent feature of obsessive-compulsive disorder (OCD). We propose that evaluation of doubt during decision-making provides a useful endophenotype with which to study the underlying pathophysiology of OCD and potentially other psychopathologies. For the current study, we developed a new instrument, the Doubt Questionnaire, to clinically assess doubt. The random dot motion task was used to measure reaction time and subjective certainty, at varying levels of perceptual difficulty, in individuals who scored high and low on doubt, and in individuals with and without OCD. We found that doubt scores were significantly higher in OCD cases than controls. Drift diffusion modeling revealed that high doubt scores predicted slower evidence accumulation than low doubt scores, and that an OCD diagnosis predicted slower accumulation than in controls. At higher levels of dot coherence, OCD participants exhibited significantly slower drift rates than did controls (q<0.05 for 30% and 45% coherence; q<0.01 for 70% coherence). In addition, at higher levels of coherence, high-doubt subjects exhibited even slower drift rates and reaction times than low-doubt subjects (q<0.01 for 70% coherence). Moreover, under high coherence conditions, individuals with high doubt scores reported lower certainty in their decisions than did those with low doubt scores. We conclude that the Doubt Questionnaire is a useful instrument for measuring doubt. Compared to those with low doubt, those with high doubt accumulate evidence more slowly and report lower certainty when making decisions under conditions of low uncertainty. High doubt may affect the decision-making process in individuals with OCD. The dimensional doubt measure is a useful endophenotype for OCD research and could enable computationally rigorous and neurally valid understanding of decision-making and its pathological expression in OCD and other disorders.
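The drift diffusion account used above can be made concrete with a toy Euler-Maruyama simulation: lowering the drift rate (slower evidence accumulation, as reported for high-doubt and OCD participants) lengthens reaction times and reduces accuracy. All parameter values below are arbitrary illustrations, not fitted estimates.

```python
import numpy as np

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, non_decision=0.3,
                 n_trials=500, seed=0):
    """Simulate first-passage times of a symmetric drift diffusion process starting at 0."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + non_decision)
        correct.append(x > 0)            # upper boundary treated as the correct response
    return np.mean(rts), np.mean(correct)

print("low doubt  (drift 2.0):", simulate_ddm(drift=2.0))   # faster, more accurate
print("high doubt (drift 0.8):", simulate_ddm(drift=0.8))   # slower, less accurate
```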
Collapse
|
36
|
Internal noise in contrast discrimination propagates forwards from early visual cortex. Neuroimage 2019; 191:503-517. [PMID: 30822470 DOI: 10.1016/j.neuroimage.2019.02.049] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2018] [Revised: 02/03/2019] [Accepted: 02/19/2019] [Indexed: 11/22/2022] Open
Abstract
Human contrast discrimination performance is limited by transduction nonlinearities and variability of the neural representation (noise). Whereas the nonlinearities have been well-characterised, there is less agreement about the specifics of internal noise. Psychophysical models assume that it impacts late in sensory processing, whereas neuroimaging and intracranial electrophysiology studies suggest that the noise is much earlier. We investigated whether perceptually-relevant internal noise arises in early visual areas or later decision making areas. We recorded EEG and MEG during a two-interval-forced-choice contrast discrimination task and used multivariate pattern analysis to decode target/non-target and selected/non-selected intervals from evoked responses. We found that perceptual decisions could be decoded from both EEG and MEG signals, even when the stimuli in both intervals were physically identical. Above-chance decision classification started <100 ms after stimulus onset, suggesting that neural noise affects sensory signals early in the visual pathway. Classification accuracy increased over time, peaking at >500 ms. Applying multivariate analysis to separate anatomically-defined brain regions in MEG source space, we found that occipital regions were informative early on but then information spreads forwards across parietal and frontal regions. This is consistent with neural noise affecting sensory processing at multiple stages of perceptual decision making. We suggest how early sensory noise might be resolved with Birdsall's linearisation, in which a dominant noise source obscures subsequent nonlinearities, to allow the visual system to preserve the wide dynamic range of early areas whilst still benefitting from contrast-invariance at later stages. A preprint of this work is available at: https://doi.org/10.1101/364612.
Collapse
|
37
|
Attention promotes the neural encoding of prediction errors. PLoS Biol 2019; 17:e2006812. [PMID: 30811381 PMCID: PMC6411367 DOI: 10.1371/journal.pbio.2006812] [Citation(s) in RCA: 49] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2018] [Revised: 03/11/2019] [Accepted: 02/05/2019] [Indexed: 11/20/2022] Open
Abstract
The encoding of sensory information in the human brain is thought to be optimised by two principal processes: 'prediction' uses stored information to guide the interpretation of forthcoming sensory events, and 'attention' prioritizes these events according to their behavioural relevance. Despite the ubiquitous contributions of attention and prediction to various aspects of perception and cognition, it remains unknown how they interact to modulate information processing in the brain. A recent extension of predictive coding theory suggests that attention optimises the expected precision of predictions by modulating the synaptic gain of prediction error units. Because prediction errors code for the difference between predictions and sensory signals, this model would suggest that attention increases the selectivity for mismatch information in the neural response to a surprising stimulus. Alternative predictive coding models propose that attention increases the activity of prediction (or 'representation') neurons and would therefore suggest that attention and prediction synergistically modulate selectivity for 'feature information' in the brain. Here, we applied forward encoding models to neural activity recorded via electroencephalography (EEG) as human observers performed a simple visual task to test for the effect of attention on both mismatch and feature information in the neural response to surprising stimuli. Participants attended or ignored a periodic stream of gratings, the orientations of which could be either predictable, surprising, or unpredictable. We found that surprising stimuli evoked neural responses that were encoded according to the difference between predicted and observed stimulus features, and that attention facilitated the encoding of this type of information in the brain. These findings advance our understanding of how attention and prediction modulate information processing in the brain, as well as support the theory that attention optimises precision expectations during hierarchical inference by increasing the gain of prediction errors.
Collapse
|
38
|
Eye Movement-Related Confounds in Neural Decoding of Visual Working Memory Representations. eNeuro 2018; 5:eN-NWR-0401-17. [PMID: 30310862 PMCID: PMC6179574 DOI: 10.1523/eneuro.0401-17.2018] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2017] [Revised: 06/03/2018] [Accepted: 06/12/2018] [Indexed: 11/21/2022] Open
Abstract
A relatively new analysis technique, known as neural decoding or multivariate pattern analysis (MVPA), has become increasingly popular for cognitive neuroimaging studies over recent years. These techniques promise to uncover the representational contents of neural signals, as well as the underlying code and the dynamic profile thereof. One field in which these techniques have led to novel insights in particular is that of visual working memory (VWM). In the present study, we subjected human volunteers to a combined VWM/imagery task while recording their neural signals using magnetoencephalography (MEG). We applied multivariate decoding analyses to uncover the temporal profile underlying the neural representations of the memorized item. Analysis of gaze position, however, revealed that our results were contaminated by systematic eye movements, suggesting that the MEG decoding results from our originally planned analyses were confounded. In addition to the eye movement analyses, we also present the original analyses to highlight how these might have readily led to invalid conclusions. Finally, we demonstrate a potential remedy, whereby we train the decoders on a functional localizer that was specifically designed to target bottom-up sensory signals and as such avoids eye movements. We conclude by arguing for more awareness of the potentially pervasive and ubiquitous effects of eye movement-related confounds.
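A simple confound check in the spirit of the analyses above is to ask whether the condition label can be decoded from gaze position alone; if it can, systematic eye movements are a plausible driver of the neural decoding results. The sketch below uses hypothetical gaze data and a logistic regression classifier; all names and shapes are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(5)
gaze = rng.standard_normal((300, 2, 100))    # trials x (x, y) x time samples (hypothetical)
labels = rng.integers(0, 2, size=300)        # memorized item identity

# Decode the memory condition from gaze traces alone; above-chance accuracy
# would flag eye movements as a potential confound of the MEG decoding.
features = gaze.reshape(len(labels), -1)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc = cross_val_score(LogisticRegression(max_iter=1000), features, labels, cv=cv).mean()
print(f"gaze-only decoding accuracy: {acc:.2f} (chance = 0.50)")
```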
Collapse
|
39
|
Mihali A, Young AG, Adler LA, Halassa MM, Ma WJ. A Low-Level Perceptual Correlate of Behavioral and Clinical Deficits in ADHD. COMPUTATIONAL PSYCHIATRY (CAMBRIDGE, MASS.) 2018; 2:141-163. [PMID: 30381800 PMCID: PMC6184361 DOI: 10.1162/cpsy_a_00018] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/23/2017] [Accepted: 07/10/2018] [Indexed: 11/04/2022]
Abstract
In many studies of attention-deficit hyperactivity disorder (ADHD), stimulus encoding and processing (perceptual function) and response selection (executive function) have been intertwined. To dissociate deficits in these functions, we introduced a task that parametrically varied low-level stimulus features (orientation and color) for fine-grained analysis of perceptual function. It also required participants to switch their attention between feature dimensions on a trial-by-trial basis, thus taxing executive processes. Furthermore, we used a response paradigm that captured task-irrelevant motor output (TIMO), reflecting failures to use the correct stimulus-response rule. ADHD participants had substantially higher perceptual variability than controls, especially for orientation, as well as higher TIMO. In both ADHD and controls, TIMO was strongly affected by the switch manipulation. Across participants, the perceptual variability parameter was correlated with TIMO, suggesting that perceptual deficits are associated with executive function deficits. Based on perceptual variability alone, we were able to classify participants into ADHD and controls with a mean accuracy of about 77%. Participants' self-reported General Executive Composite score correlated not only with TIMO but also with the perceptual variability parameter. Our results highlight the role of perceptual deficits in ADHD and the usefulness of computational modeling of behavior in dissociating perceptual from executive processes.
Collapse
Affiliation(s)
- Andra Mihali
- Center for Neural Science, New York University, New York, New York, USA
- Department of Psychology, New York University, New York, New York, USA
| | - Allison G. Young
- Department of Psychiatry, NYU School of Medicine, New York, New York, USA
| | - Lenard A. Adler
- Department of Psychiatry, NYU School of Medicine, New York, New York, USA
| | - Michael M. Halassa
- Department of Brain and Cognitive Science, MIT, Boston, Massachusetts, USA
| | - Wei Ji Ma
- Center for Neural Science, New York University, New York, New York, USA
- Department of Psychology, New York University, New York, New York, USA
| |
Collapse
|
40
|
Delis I, Dmochowski JP, Sajda P, Wang Q. Correlation of neural activity with behavioral kinematics reveals distinct sensory encoding and evidence accumulation processes during active tactile sensing. Neuroimage 2018; 175:12-21. [PMID: 29580968 PMCID: PMC5960621 DOI: 10.1016/j.neuroimage.2018.03.035] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2017] [Revised: 02/21/2018] [Accepted: 03/17/2018] [Indexed: 12/16/2022] Open
Abstract
Many real-world decisions rely on active sensing, a dynamic process for directing our sensors (e.g. eyes or fingers) across a stimulus to maximize information gain. Though ecologically pervasive, limited work has focused on identifying neural correlates of the active sensing process. In tactile perception, we often make decisions about an object/surface by actively exploring its shape/texture. Here we investigate the neural correlates of active tactile decision-making by simultaneously measuring electroencephalography (EEG) and finger kinematics while subjects interrogated a haptic surface to make perceptual judgments. Since sensorimotor behavior underlies decision formation in active sensing tasks, we hypothesized that the neural correlates of decision-related processes would be detectable by relating active sensing to neural activity. Novel brain-behavior correlation analysis revealed that three distinct EEG components, localizing to right-lateralized occipital cortex (LOC), middle frontal gyrus (MFG), and supplementary motor area (SMA), respectively, were coupled with active sensing as their activity significantly correlated with finger kinematics. To probe the functional role of these components, we fit their single-trial couplings to decision-making performance using a hierarchical drift diffusion model (HDDM), revealing that the LOC modulated the encoding of the tactile stimulus whereas the MFG predicted the rate of information integration towards a choice. Interestingly, the MFG disappeared from components uncovered from control subjects performing active sensing but not required to make perceptual decisions. By uncovering the neural correlates of distinct stimulus encoding and evidence accumulation processes, this study delineated, for the first time, the functional role of cortical areas in active tactile decision-making.
Collapse
Affiliation(s)
- Ioannis Delis
- Department of Biomedical Engineering, Columbia University, New York, NY, 10027, USA
| | - Jacek P Dmochowski
- Department of Biomedical Engineering, City College of New York, New York, NY, 10031, USA
| | - Paul Sajda
- Department of Biomedical Engineering, Columbia University, New York, NY, 10027, USA; Data Science Institute, Columbia University, New York, NY, 10027, USA.
| | - Qi Wang
- Department of Biomedical Engineering, Columbia University, New York, NY, 10027, USA.
| |
Collapse
|
41
|
Fahrenfort JJ, van Driel J, van Gaal S, Olivers CNL. From ERPs to MVPA Using the Amsterdam Decoding and Modeling Toolbox (ADAM). Front Neurosci 2018; 12:368. [PMID: 30018529 PMCID: PMC6038716 DOI: 10.3389/fnins.2018.00368] [Citation(s) in RCA: 82] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2018] [Accepted: 05/11/2018] [Indexed: 12/31/2022] Open
Abstract
In recent years, time-resolved multivariate pattern analysis (MVPA) has gained much popularity in the analysis of electroencephalography (EEG) and magnetoencephalography (MEG) data. However, MVPA may appear daunting to those who have been applying traditional analyses using event-related potentials (ERPs) or event-related fields (ERFs). To ease this transition, we recently developed the Amsterdam Decoding and Modeling (ADAM) toolbox in MATLAB. ADAM is an entry-level toolbox that allows a direct comparison of ERP/ERF results to MVPA results using any dataset in standard EEGLAB or Fieldtrip format. The toolbox performs and visualizes multiple-comparison corrected group decoding and forward encoding results in a variety of ways, such as classifier performance across time, temporal generalization (time-by-time) matrices of classifier performance, channel tuning functions (CTFs) and topographical maps of (forward-transformed) classifier weights. All analyses can be performed directly on raw data or can be preceded by a time-frequency decomposition of the data in which case the analyses are performed separately on different frequency bands. The figures ADAM produces are publication-ready. In the current manuscript, we provide a cookbook in which we apply a decoding analysis to a publicly available MEG/EEG dataset involving the perception of famous, non-famous and scrambled faces. The manuscript covers the steps involved in single subject analysis and shows how to perform and visualize a subsequent group-level statistical analysis. The processing pipeline covers computation and visualization of group ERPs, ERP difference waves, as well as MVPA decoding results. It ends with a comparison of the differences and similarities between EEG and MEG decoding results. The manuscript has a level of description that allows application of these analyses to any dataset in EEGLAB or Fieldtrip format.
Collapse
Affiliation(s)
- Johannes J. Fahrenfort
- Department of Experimental and Applied Psychology, Institute Brain and Behavior Amsterdam (iBBA), VU University Amsterdam, Amsterdam, Netherlands
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam, Netherlands
| | - Joram van Driel
- Department of Experimental and Applied Psychology, Institute Brain and Behavior Amsterdam (iBBA), VU University Amsterdam, Amsterdam, Netherlands
| | - Simon van Gaal
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam, Netherlands
| | - Christian N. L. Olivers
- Department of Experimental and Applied Psychology, Institute Brain and Behavior Amsterdam (iBBA), VU University Amsterdam, Amsterdam, Netherlands
| |
Collapse
|
42
|
Dijkstra N, Mostert P, Lange FPD, Bosch S, van Gerven MA. Differential temporal dynamics during visual imagery and perception. eLife 2018; 7:33904. [PMID: 29807570 PMCID: PMC5973830 DOI: 10.7554/elife.33904] [Citation(s) in RCA: 53] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2017] [Accepted: 04/30/2018] [Indexed: 11/13/2022] Open
Abstract
Visual perception and imagery rely on similar representations in the visual cortex. During perception, visual activity is characterized by distinct processing stages, but the temporal dynamics underlying imagery remain unclear. Here, we investigated the dynamics of visual imagery in human participants using magnetoencephalography. Firstly, we show that, compared to perception, imagery decoding becomes significant later and representations at the start of imagery already overlap with later time points. This suggests that during imagery, the entire visual representation is activated at once or that there are large differences in the timing of imagery between trials. Secondly, we found consistent overlap between imagery and perceptual processing around 160 ms and from 300 ms after stimulus onset. This indicates that the N170 gets reactivated during imagery and that imagery does not rely on early perceptual representations. Together, these results provide important insights for our understanding of the neural mechanisms of visual imagery.
Collapse
Affiliation(s)
- Nadine Dijkstra
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
| | - Pim Mostert
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
| | - Floris P de Lange
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
| | - Sander Bosch
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
| | - Marcel Aj van Gerven
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
| |
Collapse
|
43
|
Stimulus expectation alters decision criterion but not sensory signal in perceptual decision making. Sci Rep 2017; 7:17072. [PMID: 29213117 PMCID: PMC5719011 DOI: 10.1038/s41598-017-16885-2] [Citation(s) in RCA: 58] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2017] [Accepted: 11/20/2017] [Indexed: 12/03/2022] Open
Abstract
Humans are more likely to report perceiving an expected than an unexpected stimulus. Influential theories have proposed that this bias arises from expectation altering the sensory signal. However, the effects of expectation can also be due to decisional criterion shifts independent of any sensory changes. In order to adjudicate between these two possibilities, we compared the behavioral effects of pre-stimulus cues (pre cues; can influence both sensory signal and decision processes) and post-stimulus cues (post cues; can only influence decision processes). Subjects judged the average orientation of a series of Gabor patches. Surprisingly, we found that post cues had a larger effect on response bias (criterion c) than pre cues. Further, pre and post cues did not differ in their effects on stimulus sensitivity (d’) or the pattern of temporal or feature processing. Indeed, reverse correlation analyses showed no difference in the temporal or feature-based use of information between pre and post cues. Overall, post cues produced all of the behavioral modulations observed as a result of pre cues. These findings show that pre and post cues affect the decision through the same mechanisms and suggest that stimulus expectation alters the decision criterion but not the sensory signal itself.
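The separation of sensitivity (d') from response bias (criterion c) reported above follows standard equal-variance signal detection theory, computed from hit and false-alarm rates. A minimal worked example follows; the rates below are invented for illustration.

```python
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance SDT: sensitivity d' = z(H) - z(FA), criterion c = -(z(H) + z(FA)) / 2."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# A cue that only shifts the decision criterion changes c but leaves d' roughly unchanged
print(sdt_measures(0.80, 0.20))   # neutral cue:  d' ~ 1.68, c ~ 0.00
print(sdt_measures(0.90, 0.35))   # liberal cue:  d' ~ 1.67, c ~ -0.45
```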
Collapse
|
44
|
Boyle SC, Kayser SJ, Kayser C. Neural correlates of multisensory reliability and perceptual weights emerge at early latencies during audio-visual integration. Eur J Neurosci 2017; 46:2565-2577. [PMID: 28940728 PMCID: PMC5725738 DOI: 10.1111/ejn.13724] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2017] [Revised: 09/11/2017] [Accepted: 09/18/2017] [Indexed: 12/24/2022]
Abstract
To make accurate perceptual estimates, observers must take the reliability of sensory information into account. Despite many behavioural studies showing that subjects weight individual sensory cues in proportion to their reliabilities, it is still unclear when during a trial neuronal responses are modulated by the reliability of sensory information or when they reflect the perceptual weights attributed to each sensory input. We investigated these questions using a combination of psychophysics, EEG‐based neuroimaging and single‐trial decoding. Our results show that the weighted integration of sensory information in the brain is a dynamic process; effects of sensory reliability on task‐relevant EEG components were evident 84 ms after stimulus onset, while neural correlates of perceptual weights emerged 120 ms after stimulus onset. These neural processes had different underlying sources, arising from sensory and parietal regions, respectively. Together these results reveal the temporal dynamics of perceptual and neural audio‐visual integration and support the notion of temporally early and functionally specific multisensory processes in the brain.
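The reliability weighting referred to above is usually formalized as maximum-likelihood cue combination, in which each cue's weight is proportional to its inverse variance. A small numerical sketch with made-up estimates and variances:

```python
import numpy as np

def combine_cues(estimates, variances):
    """Maximum-likelihood cue combination: weights proportional to inverse variance (reliability)."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    combined = np.sum(weights * estimates)
    combined_var = 1.0 / np.sum(1.0 / variances)   # never larger than the most reliable cue alone
    return combined, combined_var, weights

# Example: a reliable auditory estimate (variance 1.0) and a noisier visual one (variance 4.0)
print(combine_cues(estimates=[10.0, 12.0], variances=[1.0, 4.0]))
```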
Collapse
Affiliation(s)
- Stephanie C Boyle
- Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
| | - Stephanie J Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
| | - Christoph Kayser
- Institute of Neuroscience and Psychology, University of Glasgow, Hillhead Street 58, Glasgow, G12 8QB, UK
| |
Collapse
|
45
|
Abstract
Perception can be described as a process of inference, integrating bottom-up sensory inputs and top-down expectations. However, it is unclear how this process is neurally implemented. It has been proposed that expectations lead to prestimulus baseline increases in sensory neurons tuned to the expected stimulus, which in turn, affect the processing of subsequent stimuli. Recent fMRI studies have revealed stimulus-specific patterns of activation in sensory cortex as a result of expectation, but this method lacks the temporal resolution necessary to distinguish pre- from poststimulus processes. Here, we combined human magnetoencephalography (MEG) with multivariate decoding techniques to probe the representational content of neural signals in a time-resolved manner. We observed a representation of expected stimuli in the neural signal shortly before they were presented, showing that expectations indeed induce a preactivation of stimulus templates. The strength of these prestimulus expectation templates correlated with participants' behavioral improvement when the expected feature was task-relevant. These results suggest a mechanism for how predictive perception can be neurally implemented.
Collapse
|
46
|
King JR, Pescetelli N, Dehaene S. Brain Mechanisms Underlying the Brief Maintenance of Seen and Unseen Sensory Information. Neuron 2017; 92:1122-1134. [PMID: 27930903 DOI: 10.1016/j.neuron.2016.10.051] [Citation(s) in RCA: 116] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2016] [Revised: 07/05/2016] [Accepted: 10/21/2016] [Indexed: 01/24/2023]
Abstract
Recent evidence of unconscious working memory challenges the notion that only visible stimuli can be actively maintained over time. In the present study, we investigated the neural dynamics underlying the maintenance of variably visible stimuli using magnetoencephalography. Subjects had to detect and mentally maintain the orientation of a masked grating. We show that the stimulus is fully encoded in early brain activity independently of visibility reports. However, the presence and orientation of the target are actively maintained throughout the brief retention period, even when the stimulus is reported as unseen. Source and decoding analyses revealed that perceptual maintenance recruits a hierarchical network spanning the early visual, temporal, parietal, and frontal cortices. Importantly, the representations coded in the late processing stages of this network specifically predicted visibility reports. These unexpected results challenge several theories of consciousness and suggest that invisible information can be briefly maintained within the higher processing stages of visual perception.
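A common way to probe whether a stimulus representation is actively maintained over time is temporal generalization: train a decoder at one time point and test it at all others. A sketch on simulated data (not the authors' code); a square block of above-chance scores indicates a stable, maintained code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 150, 32, 60
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)
X[y == 1, :5, 20:] += 0.4                        # a sustained class signal from sample 20 onwards

train, test = np.arange(0, n_trials, 2), np.arange(1, n_trials, 2)   # split trials in half
gat = np.zeros((n_times, n_times))               # temporal generalization (train-time x test-time) matrix
for t_train in range(n_times):
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X[train, :, t_train], y[train])
    for t_test in range(n_times):
        gat[t_train, t_test] = clf.score(X[test, :, t_test], y[test])

print(gat[:20, :20].mean(), gat[20:, 20:].mean())   # the late block should sit above chance (0.5)
```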
Collapse
Affiliation(s)
- Jean-Rémi King
- Department of Psychology, New York University, New York, NY 10003, USA; Neuroscience Department, Frankfurt Institute for Advanced Studies, 60438 Frankfurt, Germany.
| | - Niccolo Pescetelli
- Department of Experimental Psychology, University of Oxford, OX1 3UD Oxford, UK
| | - Stanislas Dehaene
- Cognitive Neuroimaging Unit, CEA DSV/I2BM, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France; Collège de France, 11 Place Marcelin Berthelot, 75005 Paris, France
| |
Collapse
|
47
|
Pisauro MA, Fouragnan E, Retzler C, Philiastides MG. Neural correlates of evidence accumulation during value-based decisions revealed via simultaneous EEG-fMRI. Nat Commun 2017; 8:15808. [PMID: 28598432 PMCID: PMC5472767 DOI: 10.1038/ncomms15808] [Citation(s) in RCA: 103] [Impact Index Per Article: 12.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2016] [Accepted: 05/04/2017] [Indexed: 01/18/2023] Open
Abstract
Current computational accounts posit that, in simple binary choices, humans accumulate evidence in favour of the different alternatives before committing to a decision. Neural correlates of this accumulating activity have been found during perceptual decisions in parietal and prefrontal cortex; however, the source of such activity in value-based choices remains unknown. Here we use simultaneous EEG–fMRI and computational modelling to identify EEG signals reflecting an accumulation process and demonstrate that the within- and across-trial variability in these signals explains fMRI responses in posterior-medial frontal cortex. Consistent with its role in integrating the evidence prior to reaching a decision, this region also exhibits task-dependent coupling with the ventromedial prefrontal cortex and the striatum, brain areas known to encode the subjective value of the decision alternatives. These results further endorse the proposition of an evidence accumulation process during value-based decisions in humans and implicate the posterior-medial frontal cortex in this process. Parietal and prefrontal cortices gather information to make perceptual decisions, but it is not known if the same is true for value-based choices. Here, the authors use simultaneous EEG-fMRI and modelling to show that during value- and reward-based decisions this evidence is accumulated in the posterior medial frontal cortex.
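The accumulation-to-bound process assumed by such computational accounts can be illustrated with a basic drift-diffusion simulation; the parameters below are arbitrary and purely for illustration:

```python
import numpy as np

def simulate_ddm(drift=0.3, bound=1.0, noise=1.0, dt=0.001, max_t=3.0, n_trials=200, seed=0):
    """Drift-diffusion model: noisy evidence accumulates until it reaches +bound or -bound."""
    rng = np.random.default_rng(seed)
    choices, rts = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound and t < max_t:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices.append(1 if x > 0 else 0)   # choice read out from the sign of the accumulated evidence
        rts.append(t)
    return np.array(choices), np.array(rts)

choices, rts = simulate_ddm()
print(choices.mean(), rts.mean())           # a larger drift rate yields more 'correct' choices and faster RTs
```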
Collapse
Affiliation(s)
- M Andrea Pisauro
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK
| | - Elsa Fouragnan
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK
| | - Chris Retzler
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK; Department of Behavioural & Social Sciences, University of Huddersfield, Huddersfield, UK
| | | |
Collapse
|
48
|
Alavash M, Daube C, Wöstmann M, Brandmeyer A, Obleser J. Large-scale network dynamics of beta-band oscillations underlie auditory perceptual decision-making. Netw Neurosci 2017; 1:166-191. [PMID: 29911668 PMCID: PMC5988391 DOI: 10.1162/netn_a_00009] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2016] [Accepted: 03/01/2017] [Indexed: 11/24/2022] Open
Abstract
Perceptual decisions vary in the speed at which we make them. Evidence suggests that translating sensory information into perceptual decisions relies on distributed interacting neural populations, with decision speed hinging on power modulations of the neural oscillations. Yet the dependence of perceptual decisions on the large-scale network organization of coupled neural oscillations has remained elusive. We measured magnetoencephalographic signals in human listeners who judged acoustic stimuli composed of carefully titrated clouds of tone sweeps. These stimuli were used in two task contexts, in which the participants judged the overall pitch or direction of the tone sweeps. We traced the large-scale network dynamics of the source-projected neural oscillations on a trial-by-trial basis using power-envelope correlations and graph-theoretical network discovery. In both tasks, faster decisions were predicted by higher segregation and lower integration of coupled beta-band (∼16-28 Hz) oscillations. We also uncovered the brain network states that promoted faster decisions in either lower-order auditory or higher-order control brain areas. Specifically, decision speed in judging the tone sweep direction critically relied on the nodal network configurations of anterior temporal, cingulate, and middle frontal cortices. Our findings suggest that global network communication during perceptual decision-making is implemented in the human brain by large-scale couplings between beta-band neural oscillations.
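The coupling and network measures used in this line of work can be sketched by correlating band-limited power envelopes between regions and summarizing the resulting graph. A toy example with simulated signals; the filter settings, threshold, and graph metrics are illustrative choices, not the authors' exact pipeline:

```python
import numpy as np
import networkx as nx
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(2)
fs, n_regions, n_samples = 250, 20, 5000
signals = rng.standard_normal((n_regions, n_samples))      # simulated source time courses

b, a = butter(4, [16, 28], btype="bandpass", fs=fs)        # beta band (~16-28 Hz)
beta = filtfilt(b, a, signals, axis=1)
envelopes = np.abs(hilbert(beta, axis=1))                  # band-limited power envelopes

conn = np.corrcoef(envelopes)                              # power-envelope correlation matrix
np.fill_diagonal(conn, 0)
adj = (conn > np.percentile(conn, 90)).astype(int)         # keep only the strongest links
G = nx.from_numpy_array(adj)

print(nx.global_efficiency(G))                             # higher = more network integration
print(nx.average_clustering(G))                            # higher = more network segregation
```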
Collapse
Affiliation(s)
- Mohsen Alavash
- Department of Psychology, University of Lübeck, Germany
- Max Planck Research Group “Auditory Cognition,” Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Christoph Daube
- Max Planck Research Group “Auditory Cognition,” Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Malte Wöstmann
- Department of Psychology, University of Lübeck, Germany
- Max Planck Research Group “Auditory Cognition,” Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Alex Brandmeyer
- Max Planck Research Group “Auditory Cognition,” Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Jonas Obleser
- Department of Psychology, University of Lübeck, Germany
- Max Planck Research Group “Auditory Cognition,” Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| |
Collapse
|
49
|
Schuerman WL, Meyer AS, McQueen JM. Mapping the Speech Code: Cortical Responses Linking the Perception and Production of Vowels. Front Hum Neurosci 2017; 11:161. [PMID: 28439232 PMCID: PMC5383703 DOI: 10.3389/fnhum.2017.00161] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2016] [Accepted: 03/17/2017] [Indexed: 11/13/2022] Open
Abstract
The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded. In a subsequent speech production task, half the participants received shifted auditory feedback, leading most to alter their articulations. This was followed by a second, post-training vowel categorization task. We compared changes in vowel production to both behavioral and electrophysiological changes in vowel perception. No differences in phonetic categorization were observed between groups receiving altered or unaltered feedback. However, exploratory analyses revealed correlations between vocal motor behavior and phonetic categorization. EEG analyses revealed correlations between vocal motor behavior and cortical responses in both early and late time windows. These results suggest that participants' recent production behavior influenced subsequent vowel perception. We suggest that the change in perception can be best characterized as a mapping of acoustics onto articulation.
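The perception side of such a design is often summarized by fitting a psychometric function to the categorization data and relating the fitted boundary to each participant's production behaviour. A hypothetical sketch with simulated values, not data from the study:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def logistic(x, x0, k):
    """Psychometric function: probability of one vowel category along an acoustic continuum."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

rng = np.random.default_rng(4)
continuum = np.linspace(0, 1, 9)                              # hypothetical morph steps between two vowels
p_choice = logistic(continuum, 0.55, 12.0) + rng.normal(0, 0.03, continuum.size)
(x0, k), _ = curve_fit(logistic, continuum, p_choice, p0=[0.5, 10.0])
print(x0)                                                     # estimated category boundary for one listener

# Hypothetical per-participant values relating production change to the perceptual boundary shift
production_shift = rng.normal(0.2, 0.1, 20)
boundary_shift = 0.5 * production_shift + rng.normal(0, 0.05, 20)
print(pearsonr(production_shift, boundary_shift))             # production-perception correlation
```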
Collapse
Affiliation(s)
- William L. Schuerman
- Psychology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
| | - Antje S. Meyer
- Psychology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
| | - James M. McQueen
- Psychology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
| |
Collapse
|
50
|
Gamma-Band Activities in Mouse Frontal and Visual Cortex Induced by Coherent Dot Motion. Sci Rep 2017; 7:43780. [PMID: 28252109 PMCID: PMC5333145 DOI: 10.1038/srep43780] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2016] [Accepted: 01/30/2017] [Indexed: 02/01/2023] Open
Abstract
A key question in systems neuroscience is how the brain encodes spatially and temporally distributed local features and binds them into one perceptual representation. Previous work in animals and humans has shown that changes in neural synchrony occur during perceptual processing and that these changes are marked by the emergence of gamma-band oscillations (GBO, 30-80 Hz, centered at 40 Hz). Here, we used the mouse electroencephalogram to investigate how different cortical areas contribute to perceptual processing by assessing their GBO patterns during the visual presentation of coherently or incoherently moving random-dot kinematograms and static dot displays. Our results revealed that GBO in visual cortex were strongly modulated by moving dots regardless of whether the motion was globally coherent, whereas GBO in frontal cortex were modulated by the coherence of the motion. Moreover, concurrent GBO across multiple cortical areas occurred more frequently for coherently moving dots. Taken together, these findings relate GBO in mouse frontal and visual cortex to the perceptual binding of local features into a globally coherent representation, suggesting a dynamic interplay across local and distributed GBO networks in the global processing of optic flow.
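Gamma-band activity of the kind analysed here is commonly quantified by band-pass filtering the signal around 30-80 Hz and taking instantaneous power from the Hilbert transform. A minimal sketch on a simulated trace; the sampling rate, filter order, and injected 40 Hz burst are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                             # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(3)
eeg = rng.standard_normal(t.size)
eeg[t > 1] += 2 * np.sin(2 * np.pi * 40 * t[t > 1])   # inject a 40 Hz burst in the second half

b, a = butter(4, [30, 80], btype="bandpass", fs=fs)   # gamma band (30-80 Hz)
gamma = filtfilt(b, a, eeg)
power = np.abs(hilbert(gamma)) ** 2                   # instantaneous gamma power via the Hilbert envelope

print(power[t <= 1].mean(), power[t > 1].mean())      # power should rise during the injected burst
```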
Collapse
|