1. Dreyer AM, Michalke L, Perry A, Chang EF, Lin JJ, Knight RT, Rieger JW. Grasp-specific high-frequency broadband mirror neuron activity during reach-and-grasp movements in humans. Cereb Cortex 2023; 33:6291-6298. PMID: 36562997; PMCID: PMC10183732; DOI: 10.1093/cercor/bhac504.
Abstract
Broadly congruent mirror neurons, responding to any grasp movement, and strictly congruent mirror neurons, responding only to specific grasp movements, have been reported in single-cell studies with primates. Delineating grasp properties in humans is essential to understand the human mirror neuron system with implications for behavior and social cognition. We analyzed electrocorticography data from a natural reach-and-grasp movement observation and delayed imitation task with 3 different natural grasp types of everyday objects. We focused on the classification of grasp types from high-frequency broadband mirror activation patterns found in classic mirror system areas, including sensorimotor, supplementary motor, inferior frontal, and parietal cortices. Classification of grasp types was successful during movement observation and execution intervals but not during movement retention. Our grasp type classification from combined and single mirror electrodes provides evidence for grasp-congruent activity in the human mirror neuron system potentially arising from strictly congruent mirror neurons.
Affiliation(s)
- Alexander M Dreyer
  - Department of Psychology, Carl von Ossietzky University Oldenburg, Oldenburg 26129, Germany
- Leo Michalke
  - Department of Psychology, Carl von Ossietzky University Oldenburg, Oldenburg 26129, Germany
- Anat Perry
  - Department of Psychology, Hebrew University of Jerusalem, Jerusalem 91905, Israel
- Edward F Chang
  - Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143, United States
- Jack J Lin
  - Department of Biomedical Engineering and the Comprehensive Epilepsy Program, Department of Neurology, University of California, Irvine, CA 92868, United States
- Robert T Knight
  - Department of Psychology and the Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94720, United States
- Jochem W Rieger
  - Department of Psychology, Carl von Ossietzky University Oldenburg, Oldenburg 26129, Germany
2. Unni A, Trende A, Pauley C, Weber L, Biebl B, Kacianka S, Lüdtke A, Bengler K, Pretschner A, Fränzle M, Rieger JW. Investigating Differences in Behavior and Brain in Human-Human and Human-Autonomous Vehicle Interactions in Time-Critical Situations. Frontiers in Neuroergonomics 2022; 3:836518. PMID: 38235443; PMCID: PMC10790869; DOI: 10.3389/fnrgo.2022.836518.
Abstract
Some studies provide evidence that humans could actively exploit the alleged technological advantages of autonomous vehicles (AVs). This implies that humans may tend to interact differently with AVs than with human-driven vehicles (HVs), knowing that AVs are programmed to be risk-averse. Hence, it is important to investigate how humans interact with AVs in complex traffic situations. Here, we investigated whether participants would value interactions with AVs differently compared to HVs, and whether these differences can be characterized at the behavioral and brain level. While recording whole-head brain activity using functional near-infrared spectroscopy (fNIRS), we presented participants with a cover story that they were driving under time pressure through urban traffic in the presence of other HVs and AVs. The AVs were programmed defensively to avoid collisions and had faster braking reaction times than the HVs. Participants would receive a monetary reward if they managed to finish the driving block within a given time limit without risky driving maneuvers. During the drive, participants were repeatedly confronted with left-turn situations at unsignalized intersections. They had to stop and find a gap to turn in front of an oncoming stream of vehicles consisting of HVs and AVs. While the behavioral results did not show any significant difference in the safety margins used during turning maneuvers in front of AVs versus HVs, participants tended to be more certain in their decision-making when turning in front of AVs, as reflected by the smaller variance in accepted gap sizes compared to HVs. Importantly, using a multivariate logistic regression approach, we were able to predict whether participants decided to turn in front of HVs or AVs from whole-head fNIRS recordings in the decision-making phase for every participant (mean accuracy = 67.2%, SD = 5%).
Channel-wise univariate fNIRS analysis revealed increased brain activation differences for turning in front of AVs compared to HVs in brain areas that represent the valuation of actions taken during decision-making. The insights provided here may be useful for the development of control systems to assess interactions in future mixed traffic environments involving AVs and HVs.
Affiliation(s)
- Anirudh Unni
  - Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Alexander Trende
  - OFFIS Institute for Information Technology, Division of Transportation Research, Oldenburg, Germany
- Claire Pauley
  - Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Lars Weber
  - OFFIS Institute for Information Technology, Division of Transportation Research, Oldenburg, Germany
- Bianca Biebl
  - Chair of Ergonomics, Technical University of Munich, Garching, Germany
- Severin Kacianka
  - Chair of Software and Systems Engineering, Technical University of Munich, Garching, Germany
- Andreas Lüdtke
  - OFFIS Institute for Information Technology, Division of Transportation Research, Oldenburg, Germany
- Klaus Bengler
  - Chair of Ergonomics, Technical University of Munich, Garching, Germany
- Alexander Pretschner
  - Chair of Software and Systems Engineering, Technical University of Munich, Garching, Germany
- Martin Fränzle
  - OFFIS Institute for Information Technology, Division of Transportation Research, Oldenburg, Germany
  - Department of Computer Science, University of Oldenburg, Oldenburg, Germany
- Jochem W. Rieger
  - Department of Psychology, University of Oldenburg, Oldenburg, Germany
3. Al-Zubaidi A, Bräuer S, Holdgraf CR, Schepers IM, Rieger JW. OUP accepted manuscript. Cereb Cortex Commun 2022; 3:tgac007. PMID: 35281216; PMCID: PMC8914075; DOI: 10.1093/texcom/tgac007.
Affiliation(s)
- Arkan Al-Zubaidi
  - Applied Neurocognitive Psychology Lab and Cluster of Excellence Hearing4all, Oldenburg University, Oldenburg, Germany
  - Research Center Neurosensory Science, Oldenburg University, 26129 Oldenburg, Germany
- Susann Bräuer
  - Applied Neurocognitive Psychology Lab and Cluster of Excellence Hearing4all, Oldenburg University, Oldenburg, Germany
- Chris R Holdgraf
  - Department of Statistics, UC Berkeley, Berkeley, CA 94720, USA
  - International Interactive Computing Collaboration
- Inga M Schepers
  - Applied Neurocognitive Psychology Lab and Cluster of Excellence Hearing4all, Oldenburg University, Oldenburg, Germany
- Jochem W Rieger
  - Corresponding author: Department of Psychology, Faculty VI, Oldenburg University, 26129 Oldenburg, Germany
4. Rampone G, Makin ADJ, Tyson-Carr J, Bertamini M. Spinning objects and partial occlusion: Smart neural responses to symmetry. Vision Res 2021; 188:1-9. PMID: 34271291; DOI: 10.1016/j.visres.2021.06.009.
Abstract
In humans, extrastriate visual areas are strongly activated by symmetry. However, perfect symmetry is rare in natural visual images. Recent findings showed that when parts of a symmetric shape are presented at different points in time, symmetry processing relies on a perceptual memory buffer. Does this temporal integration need a retinotopic reference frame? For the first time, we tested integration of parts in both the temporal and spatial domains, using a non-retinotopic frame of reference. In Experiment 1, an irregular polygonal shape (either symmetric or asymmetric) was partly occluded by a rectangle for 500 ms (T1). The rectangle then moved to the opposite side to reveal the other half of the shape, whilst occluding the previously visible half (T2). The reference frame for the object was static: the two parts stimulated retinotopically corresponding receptive fields (revealed over time). A symmetry-specific ERP response from ~300 ms after T2 was observed. In Experiment 2, dynamic occlusion was combined with an additional step at T2: the new half-shape and occluder were rotated by 90°. Therefore, there was a moving frame of reference, and the retinal correspondence between the two parts was disrupted. A weaker but significant symmetry-specific response was recorded. This result extends previous findings: global symmetry representation can be achieved in extrastriate areas non-retinotopically, through integration in both the temporal and spatial domains.
Affiliation(s)
- Giulia Rampone
  - Department of Psychology, University of Liverpool, Eleanor Rathbone Building, L69 7ZA Liverpool, UK
- Alexis D J Makin
  - Department of Psychology, University of Liverpool, Eleanor Rathbone Building, L69 7ZA Liverpool, UK
- John Tyson-Carr
  - Department of Psychology, University of Liverpool, Eleanor Rathbone Building, L69 7ZA Liverpool, UK
- Marco Bertamini
  - Department of Psychology, University of Liverpool, Eleanor Rathbone Building, L69 7ZA Liverpool, UK
  - Department of General Psychology, University of Padova, Via Venezia 8, 35131 Padova, Italy
5. Moving a Shape behind a Slit: Partial Shape Representations in Inferior Temporal Cortex. J Neurosci 2021; 41:6484-6501. PMID: 34131035; DOI: 10.1523/jneurosci.0348-21.2021.
Abstract
Current models of object recognition are based on spatial representations built from object features that are simultaneously present in the retinal image. However, one can recognize an object when it moves behind a static occluder and only a small fragment of its shape is visible through a slit at a given moment in time. Such anorthoscopic perception requires spatiotemporal integration of the successively presented shape parts during slit-viewing. Human fMRI studies suggested that ventral visual stream areas represent whole shapes formed through temporal integration during anorthoscopic perception. To examine the time course of shape-selective responses during slit-viewing, we recorded the responses of single inferior temporal (IT) neurons of rhesus monkeys to moving shapes that were only partially visible through a static narrow slit. The IT neurons signaled shape identity when their response was accumulated across the duration of the shape presentation. Their shape preference during slit-viewing equaled that for static, whole-shape presentations. However, when analyzing their responses at a finer time scale, we showed that the IT neurons responded to particular shape fragments that were revealed by the slit. We found no evidence for temporal integration of slit-views resulting in a whole-shape representation, even when the monkey was matching slit-views of a shape to static whole-shape presentations. These data suggest that, although the temporally integrated response of macaque IT neurons can signal shape identity in slit-viewing conditions, the spatiotemporal integration needed for the formation of a whole-shape percept occurs in other areas, perhaps downstream of IT.

Significance Statement: One recognizes an object when it moves behind a static occluder and only a small fragment of its shape is visible through a static slit at a given moment in time. Such anorthoscopic perception requires spatiotemporal integration of the successively presented shape parts. Human fMRI studies suggested that ventral visual stream areas represent shapes formed through temporal integration. We recorded the responses of inferior temporal (IT) cortical neurons of macaques during slit-viewing conditions. Although the temporally summated response of macaque IT neurons could signal shape identity under slit-viewing conditions, we found no evidence for a whole-shape representation using analyses at a finer time scale. Thus, the spatiotemporal integration needed for anorthoscopic perception does not occur within IT.
6. Scheunemann J, Unni A, Ihme K, Jipp M, Rieger JW. Demonstrating Brain-Level Interactions Between Visuospatial Attentional Demands and Working Memory Load While Driving Using Functional Near-Infrared Spectroscopy. Front Hum Neurosci 2019; 12:542. PMID: 30728773; PMCID: PMC6351455; DOI: 10.3389/fnhum.2018.00542.
Abstract
Driving is a complex task concurrently drawing on multiple cognitive resources. Yet, there is a lack of studies investigating brain-level interactions among different driving subtasks in dual-tasking. This study investigates how visuospatial attentional demands related to increased driving difficulty interact with different working memory load (WML) levels at the brain level. Using multichannel whole-head high-density functional near-infrared spectroscopy (fNIRS) brain activation measurements, we aimed to predict driving difficulty level, both separately for each WML level and with a combined model. Participants drove for approximately 60 min on a highway with concurrent traffic in a virtual reality driving simulator. For half of that time, the course led through a construction site with reduced lane width, increasing visuospatial attentional demands. Concurrently, participants performed a modified version of the n-back task with five different WML levels (from 0-back up to 4-back), forcing them to continuously update, memorize, and recall the sequence of the previous 'n' speed signs and adjust their speed accordingly. Using multivariate logistic ridge regression, we were able to correctly predict driving difficulty in 75.0% of the signal samples (1.955 Hz sampling rate) across 15 participants in an out-of-sample cross-validation of classifiers trained on fNIRS data separately for each WML level. There was a significant effect of WML level on the driving difficulty prediction accuracies [range 62.2-87.1%; χ2(4) = 19.9, p < 0.001, Kruskal-Wallis H test], with the highest prediction rates at intermediate WML levels. In contrast, training one classifier on fNIRS data across all WML levels severely degraded prediction performance (mean accuracy of 46.8%).
Activation changes in the bilateral dorsal frontal (putative BA46), bilateral inferior parietal (putative BA39), and left superior parietal (putative BA7) areas were most predictive of increased driving difficulty. These discriminative patterns diminished at higher WML levels, indicating that visuospatial attentional demands and WML involve interacting underlying brain processes. The changing pattern of driving-difficulty-related brain areas across WML levels could indicate potential changes in multitasking strategy with the level of WML demand, in line with multiple resource theory.
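The decoding pipeline described in this abstract, an L2-penalized (ridge) logistic regression evaluated with out-of-sample cross-validation, can be sketched on synthetic data. Everything below (channel count, effect size, fold count, the `fit_ridge_logistic` helper) is an illustrative assumption, not the authors' actual code or parameters:

```python
import numpy as np

def fit_ridge_logistic(X, y, lam=1.0, lr=0.1, n_iter=500):
    """Fit L2-penalized (ridge) logistic regression by gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_iter):
        z = np.clip(X @ w + b, -30, 30)          # clip logits for stability
        p = 1.0 / (1.0 + np.exp(-z))             # predicted probabilities
        grad_w = X.T @ (p - y) / len(y) + lam * w  # ridge penalty on weights
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)

# Synthetic "fNIRS" data: 200 samples x 20 channels, two difficulty classes
rng = np.random.default_rng(0)
n, d = 200, 20
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, d)) + y[:, None] * 0.8  # class-dependent mean shift

# 5-fold out-of-sample cross-validation
folds = np.array_split(rng.permutation(n), 5)
accs = []
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    w, b = fit_ridge_logistic(X[train_idx], y[train_idx], lam=0.1)
    accs.append(np.mean(predict(X[test_idx], w, b) == y[test_idx]))
print(round(float(np.mean(accs)), 3))  # well above the 0.5 chance level
```

In the study itself the features would be preprocessed oxy/deoxyhemoglobin channel samples rather than Gaussian noise, and the regularization strength would be tuned within the training folds.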
Affiliation(s)
- Jakob Scheunemann
  - Department of Psychology, University of Oldenburg, Oldenburg, Germany
  - Department of Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Anirudh Unni
  - Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Klas Ihme
  - Institute of Transportation Systems, German Aerospace Center (DLR), Braunschweig, Germany
- Meike Jipp
  - Institute of Transportation Systems, German Aerospace Center (DLR), Braunschweig, Germany
- Jochem W. Rieger
  - Department of Psychology, University of Oldenburg, Oldenburg, Germany
7. Ihme K, Unni A, Zhang M, Rieger JW, Jipp M. Recognizing Frustration of Drivers From Face Video Recordings and Brain Activation Measurements With Functional Near-Infrared Spectroscopy. Front Hum Neurosci 2018; 12:327. PMID: 30177876; PMCID: PMC6109683; DOI: 10.3389/fnhum.2018.00327.
Abstract
Experiencing frustration while driving can harm cognitive processing, result in aggressive behavior, and hence negatively influence driving performance and traffic safety. Being able to automatically detect frustration would allow adaptive driver assistance and automation systems to adequately react to a driver's frustration and mitigate potential negative consequences. To identify reliable and valid indicators of driver frustration, we conducted two driving simulator experiments. In the first experiment, we aimed to reveal facial expressions that indicate frustration in continuous video recordings of the driver's face, taken while driving highly realistic simulator scenarios in which frustrated or non-frustrated emotional states were experienced. An automated analysis of facial expressions combined with multivariate logistic regression classification revealed that frustrated time intervals can be discriminated from non-frustrated ones with an accuracy of 62.0% (mean over 30 participants). A further analysis of the facial expressions revealed that frustrated drivers tend to activate muscles in the mouth region (chin raiser, lip pucker, lip pressor). In the second experiment, we measured cortical activation with almost whole-head functional near-infrared spectroscopy (fNIRS) while participants experienced frustrating and non-frustrating driving simulator scenarios. Multivariate logistic regression applied to the fNIRS measurements allowed us to discriminate between frustrated and non-frustrated driving intervals with a higher accuracy of 78.1% (mean over 12 participants). Frustrated driving intervals were indicated by increased activation in the inferior frontal, putative premotor, and occipito-temporal cortices. Our results show that facial and cortical markers of frustration can be informative for time-resolved driver state identification in complex realistic driving situations.
The markers derived here can potentially be used as an input for future adaptive driver assistance and automation systems that detect driver frustration and adaptively react to mitigate it.
Affiliation(s)
- Klas Ihme
  - Department of Human Factors, Institute of Transportation Systems, German Aerospace Center (DLR), Braunschweig, Germany
- Anirudh Unni
  - Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Meng Zhang
  - Department of Human Factors, Institute of Transportation Systems, German Aerospace Center (DLR), Braunschweig, Germany
- Jochem W. Rieger
  - Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Meike Jipp
  - Department of Human Factors, Institute of Transportation Systems, German Aerospace Center (DLR), Braunschweig, Germany
8. A Generic Mechanism for Perceptual Organization in the Parietal Cortex. J Neurosci 2018; 38:7158-7169. PMID: 30006362; DOI: 10.1523/jneurosci.0436-18.2018.
Abstract
Our visual system's ability to group visual elements into meaningful entities and to separate them from others is referred to as scene segmentation. Visual motion often provides a powerful cue for this process, as parallax or coherence can inform the visual system about scene or object structure. Here we tested the hypothesis that scene segmentation by motion cues relies on a common neural substrate in the parietal cortex. We used fMRI and a set of three entirely distinct motion stimuli to examine scene segmentation in the human brain. The stimuli covered a wide range of high-level processes, including perceptual grouping, transparent motion, and depth perception. All stimuli were perceptually bistable such that percepts alternated every few seconds while the physical stimulation remained constant. The perceptual states were asymmetric, in that one reflected the default (nonsegmented) interpretation, and the other the non-default (segmented) interpretation. We confirmed behaviorally that upon stimulus presentation, the default percept was always perceived first, before perceptual alternations ensued. Imaging results showed that across all stimulus classes, perceptual scene segmentation was associated with an increase of activity in the posterior parietal cortex together with a decrease of neural signal in the early visual cortex. This pattern of activation is compatible with predictive coding models of visual perception, and suggests that parietal cortex hosts a generic mechanism for scene segmentation.

Significance Statement: Making sense of cluttered visual scenes is crucial for everyday perception. An important cue to scene segmentation is visual motion: slight movements of scene elements give away which elements belong to the foreground or background or to the same object. We used three distinct stimuli that engage visual scene segmentation mechanisms based on motion. They involved perceptual grouping, transparent motion, and depth perception.
Brain activity associated with all three mechanisms converged in the same parietal region with concurrent deactivation of early visual areas. The results suggest that posterior parietal cortex is a hub involved in structuring visual scenes based on different motion cues, and that feedback modulates early cortical processing in accord with predictive coding theory.
9. Erlikhman G, Caplovitz GP, Gurariy G, Medina J, Snow JC. Towards a unified perspective of object shape and motion processing in human dorsal cortex. Conscious Cogn 2018; 64:106-120. PMID: 29779844; DOI: 10.1016/j.concog.2018.04.016.
Abstract
Although object-related areas were discovered in human parietal cortex a decade ago, surprisingly little is known about the nature and purpose of these representations, and how they differ from those in the ventral processing stream. In this article, we review evidence for the unique contribution of object areas of dorsal cortex to three-dimensional (3-D) shape representation, the localization of objects in space, and in guiding reaching and grasping actions. We also highlight the role of dorsal cortex in form-motion interaction and spatiotemporal integration, possible functional relationships between 3-D shape and motion processing, and how these processes operate together in the service of supporting goal-directed actions with objects. Fundamental differences between the nature of object representations in the dorsal versus ventral processing streams are considered, with an emphasis on how and why dorsal cortex supports veridical (rather than invariant) representations of objects to guide goal-directed hand actions in dynamic visual environments.
Affiliation(s)
- Gennadiy Gurariy
  - Department of Psychology, University of Nevada, Reno, USA
  - Department of Psychology, University of Wisconsin, Milwaukee, USA
- Jared Medina
  - Department of Psychological and Brain Sciences, University of Delaware, USA
10. Neural Decoding of Bistable Sounds Reveals an Effect of Intention on Perceptual Organization. J Neurosci 2018; 38:2844-2853. PMID: 29440556; PMCID: PMC5852662; DOI: 10.1523/jneurosci.3022-17.2018.
Abstract
Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources, based to a large extent on the acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure high (H) and low (L) tones presented in the repeating pattern HLH-HLH-, which can form a bistable percept heard either as an integrated whole (HLH-) or as segregated into high (H-H-) and low (-L-) sequences. Although instructing listeners to try to integrate or segregate sounds affects reports of what they hear, this could reflect a response bias rather than a perceptual effect. We had human listeners (15 males, 12 females) continuously report their perception of such sequences and recorded neural activity using MEG. During neutral listening, a classifier trained on patterns of neural activity distinguished between periods of integrated and segregated perception. In other conditions, participants tried to influence their perception by allocating attention either to the whole sequence or to a subset of the sounds. They reported hearing the desired percept for a greater proportion of time than when listening neutrally. Critically, neural activity supported these reports; stimulus-locked brain responses in auditory cortex were more likely to resemble the signature of segregation when participants tried to hear segregation than when attempting to perceive integration. These results indicate that listeners can influence how many sound sources they perceive, as reflected in neural responses that track both the input and its perceptual organization.

Significance Statement: Can we consciously influence our perception of the external world? We address this question using sound sequences that can be heard either as coming from a single source or as two distinct auditory streams.
Listeners reported spontaneous changes in their perception between these two interpretations while we recorded neural activity to identify signatures of such integration and segregation. They also indicated that they could, to some extent, choose between these alternatives. This claim was supported by corresponding changes in responses in auditory cortex. By linking neural and behavioral correlates of perception, we demonstrate that the number of objects that we perceive can depend not only on the physical attributes of our environment, but also on how we intend to experience it.
11. Kozunov V, Nikolaeva A, Stroganova TA. Categorization for Faces and Tools-Two Classes of Objects Shaped by Different Experience-Differs in Processing Timing, Brain Areas Involved, and Repetition Effects. Front Hum Neurosci 2018; 11:650. PMID: 29379426; PMCID: PMC5770807; DOI: 10.3389/fnhum.2017.00650.
Abstract
The brain mechanisms that integrate the separate features of sensory input into a meaningful percept depend upon the prior experience of interaction with the object and differ between categories of objects. Recent studies using representational similarity analysis (RSA) have characterized either the spatial patterns of brain activity for different categories of objects or described how category structure in neuronal representations emerges in time, but never simultaneously. Here we applied a novel, region-based, multivariate pattern classification approach in combination with RSA to magnetoencephalography data to extract activity associated with qualitatively distinct processing stages of visual perception. We asked participants to name what they saw whilst viewing bitonal visual stimuli of two categories predominantly shaped by either value-dependent or sensorimotor experience, namely faces and tools, and meaningless images. We aimed to disambiguate the spatiotemporal patterns of brain activity between the meaningful categories and determine which differences in their processing were attributable to either perceptual categorization per se, or later-stage mentalizing-related processes. We extracted three stages of cortical activity corresponding to low-level processing, category-specific feature binding, and supra-categorical processing. All face-specific spatiotemporal patterns were associated with bilateral activation of ventral occipito-temporal areas during the feature binding stage at 140–170 ms. The tool-specific activity was found both within the categorization stage and in a later period not thought to be associated with binding processes. The tool-specific binding-related activity was detected within a 210–220 ms window and was localized to the intraparietal sulcus of the left hemisphere. Brain activity common to both meaningful categories started at 250 ms and included widely distributed assemblies within parietal, temporal, and prefrontal regions.
Furthermore, we hypothesized and tested whether activity within face- and tool-specific binding-related patterns would demonstrate oppositely acting effects following procedural perceptual learning. We found that activity in the ventral, face-specific network increased following stimulus repetition. In contrast, tool processing in the dorsal network adapted by reducing its activity over the repetition period. Altogether, we have demonstrated that activity associated with visual processing of faces and tools during the categorization stage differs in processing timing, brain areas involved, and in its dynamics underlying stimulus learning.
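The core RSA logic this study builds on, comparing a neural representational dissimilarity matrix (RDM) against a categorical model RDM, can be sketched as follows. The prototypes, noise level, stimulus counts, and the `rdm`/`spearman` helpers are illustrative assumptions, not the authors' code:

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activity patterns of every pair of stimuli."""
    return 1.0 - np.corrcoef(patterns)

def upper_tri(m):
    """Off-diagonal upper-triangle entries, the usual RSA comparison vector."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

def spearman(a, b):
    """Spearman rank correlation (ties broken arbitrarily; fine for a demo)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

# Simulated sensor patterns for 8 stimuli: 4 "faces", 4 "tools",
# each a noisy copy of its category prototype.
rng = np.random.default_rng(1)
face_proto, tool_proto = rng.normal(size=50), rng.normal(size=50)
patterns = np.stack(
    [face_proto + 0.5 * rng.normal(size=50) for _ in range(4)]
    + [tool_proto + 0.5 * rng.normal(size=50) for _ in range(4)])

# Model RDM: 0 within a category, 1 between categories
labels = np.array([0] * 4 + [1] * 4)
model = (labels[:, None] != labels[None, :]).astype(float)

r = spearman(upper_tri(rdm(patterns)), upper_tri(model))
print(r > 0.5)  # category structure is recovered from the neural RDM
```

In a real MEG analysis the patterns would be sensor or source amplitudes per stimulus and time window, and the model comparison would be repeated across time to trace when category structure emerges.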
Affiliation(s)
- Vladimir Kozunov
  - MEG Centre, Moscow State University of Psychology and Education, Moscow, Russia
- Anastasia Nikolaeva
  - MEG Centre, Moscow State University of Psychology and Education, Moscow, Russia
12. Object Representations in Human Visual Cortex Formed Through Temporal Integration of Dynamic Partial Shape Views. J Neurosci 2017; 38:659-678. PMID: 29196319; DOI: 10.1523/jneurosci.1318-17.2017.
Abstract
We typically recognize visual objects using the spatial layout of their parts, which are present simultaneously on the retina. Therefore, shape extraction is based on integration of the relevant retinal information over space. The lateral occipital complex (LOC) can represent shape faithfully in such conditions. However, integration over time is sometimes required to determine object shape. To study shape extraction through temporal integration of successive partial shape views, we presented human participants (both men and women) with artificial shapes that moved behind a narrow vertical or horizontal slit. Only a tiny fraction of the shape was visible at any instant at the same retinal location. However, observers perceived a coherent whole shape instead of a jumbled pattern. Using fMRI and multivoxel pattern analysis, we searched for brain regions that encode temporally integrated shape identity. We further required that the representation of shape should be invariant to changes in the slit orientation. We show that slit-invariant shape information is most accurate in the LOC. Importantly, the slit-invariant shape representations matched the conventional whole-shape representations assessed during full-image runs. Moreover, when the same slit-dependent shape slivers were shuffled, thereby preventing their spatiotemporal integration, slit-invariant shape information was reduced dramatically. The slit-invariant representation of the various shapes also mirrored the structure of shape perceptual space as assessed by perceptual similarity judgment tests. Therefore, the LOC is likely to mediate temporal integration of slit-dependent shape views, generating a slit-invariant whole-shape percept. 
These findings provide strong evidence for a global encoding of shape in the LOC, regardless of the integration processes required to generate the shape percept. SIGNIFICANCE STATEMENT: Visual objects are recognized through spatial integration of features available simultaneously on the retina. The lateral occipital complex (LOC) represents shape faithfully in such conditions, even if the object is partially occluded. However, shape must sometimes be reconstructed over both space and time. Such is the case in anorthoscopic perception, when an object moves behind a narrow slit. In this scenario, spatial information is limited at any moment, so the whole-shape percept can only be inferred by integrating successive shape views over time. We find that the LOC carries shape-specific information recovered through such temporal integration. The shape representation is invariant to slit orientation and is similar to that evoked by a fully viewed image. Existing models of object recognition lack such capabilities.
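The cross-condition decoding logic behind this result (a representation learned from full-image runs should generalize to slit-viewing runs if it is slit-invariant) can be sketched with a simple nearest-centroid classifier on simulated voxel patterns. All data, noise levels, and the classifier choice here are illustrative assumptions, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

n_shapes, n_voxels, n_reps = 4, 80, 10
# Hypothetical per-shape voxel "prototypes" shared across viewing conditions.
prototypes = rng.normal(size=(n_shapes, n_voxels))

def simulate_runs(noise):
    """Noisy repetitions of each shape's pattern, with labels."""
    X = np.repeat(prototypes, n_reps, axis=0) + noise * rng.normal(
        size=(n_shapes * n_reps, n_voxels))
    y = np.repeat(np.arange(n_shapes), n_reps)
    return X, y

X_full, y_full = simulate_runs(noise=1.0)   # full-image runs
X_slit, y_slit = simulate_runs(noise=1.0)   # slit-viewing runs

# Cross-decoding: centroids estimated from full-image runs,
# evaluated on slit runs. Above-chance accuracy indicates a
# shared, slit-invariant shape representation.
centroids = np.stack([X_full[y_full == k].mean(0) for k in range(n_shapes)])
dists = ((X_slit[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
accuracy = (dists.argmin(1) == y_slit).mean()
```

Shuffling the slit slivers in the simulation (destroying the shared prototype) would drive this accuracy back to chance, mirroring the paper's control condition.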
Collapse
|
13
|
Holdgraf CR, Rieger JW, Micheli C, Martin S, Knight RT, Theunissen FE. Encoding and Decoding Models in Cognitive Electrophysiology. Front Syst Neurosci 2017; 11:61. [PMID: 29018336 PMCID: PMC5623038 DOI: 10.3389/fnsys.2017.00061] [Citation(s) in RCA: 64] [Impact Index Per Article: 9.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2017] [Accepted: 08/07/2017] [Indexed: 11/13/2022] Open
Abstract
Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze these data. This data explosion has resulted in increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of "encoding" models, in which stimulus features are used to model brain activity, and "decoding" models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial on these approaches and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in Python. The aim is to provide a practical understanding of predictive modeling of human brain data and to propose best practices in conducting these analyses.
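The linear encoding workflow the tutorial describes (fit stimulus-to-brain weights on training data, evaluate prediction accuracy on held-out data) can be sketched in a few lines. The closed-form ridge fit and the synthetic stimulus features below are illustrative stand-ins, not the paper's actual notebooks.

```python
import numpy as np

rng = np.random.default_rng(2)

# Encoding model: predict one channel's response from stimulus
# features (e.g. a spectrogram unrolled over time lags).
n_times, n_features = 500, 40
X = rng.normal(size=(n_times, n_features))        # stimulus features
w_true = rng.normal(size=n_features)              # "true" receptive field
y = X @ w_true + 0.5 * rng.normal(size=n_times)   # simulated neural response

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Fit on the first half, evaluate prediction on the held-out half.
half = n_times // 2
w = ridge_fit(X[:half], y[:half], lam=1.0)
pred = X[half:] @ w
r = np.corrcoef(pred, y[half:])[0, 1]   # encoding-model accuracy
```

A decoding model simply swaps the roles of `X` and `y`, predicting stimulus features from neural activity with the same machinery.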
Collapse
Affiliation(s)
- Christopher R. Holdgraf
- Department of Psychology, Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States
- Office of the Vice Chancellor for Research, Berkeley Institute for Data Science, University of California, Berkeley, Berkeley, CA, United States
| | - Jochem W. Rieger
- Department of Psychology, Carl-von-Ossietzky University, Oldenburg, Germany
| | - Cristiano Micheli
- Department of Psychology, Carl-von-Ossietzky University, Oldenburg, Germany
- Institut des Sciences Cognitives Marc Jeannerod, Lyon, France
| | - Stephanie Martin
- Department of Psychology, Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States
- Defitech Chair in Brain-Machine Interface, Center for Neuroprosthetics, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Robert T. Knight
- Department of Psychology, Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States
| | - Frederic E. Theunissen
- Department of Psychology, Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, United States
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
| |
Collapse
|
14
|
Liu Y, Ayaz H, Shewokis PA. Multisubject "Learning" for Mental Workload Classification Using Concurrent EEG, fNIRS, and Physiological Measures. Front Hum Neurosci 2017; 11:389. [PMID: 28798675 PMCID: PMC5529418 DOI: 10.3389/fnhum.2017.00389] [Citation(s) in RCA: 46] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2017] [Accepted: 07/12/2017] [Indexed: 11/13/2022] Open
Abstract
An accurate measure of mental workload level has diverse neuroergonomic applications, ranging from brain-computer interfacing to improving the efficiency of human operators. In this study, we integrated electroencephalogram (EEG), functional near-infrared spectroscopy (fNIRS), and physiological measures to classify three workload levels in an n-back working memory task. Classification significantly better than chance was achieved with EEG-alone, fNIRS-alone, physiology-alone, and EEG+fNIRS approaches. The results confirmed our previous finding that integrating EEG and fNIRS significantly improves workload classification compared with using EEG or fNIRS alone. The inclusion of physiological measures, however, did not significantly improve EEG-based or fNIRS-based workload classification. A major limitation of currently available mental workload assessment approaches is the requirement to record lengthy calibration data from the target subject to train workload classifiers. We show that, by learning from the data of other subjects, workload classification accuracy can be improved, especially when the amount of data from the target subject is small.
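The feature-level fusion idea (concatenate the modalities' features and classify the combined vector) can be sketched as follows. The synthetic EEG/fNIRS features and the leave-one-out nearest-centroid classifier are illustrative stand-ins for the study's actual features and classifier.

```python
import numpy as np

rng = np.random.default_rng(3)

n_classes, n_trials = 3, 60          # three workload levels
y = np.repeat(np.arange(n_classes), n_trials)

def modality(n_feat, sep):
    """Synthetic trial features with class-dependent means."""
    means = sep * rng.normal(size=(n_classes, n_feat))
    return means[y] + rng.normal(size=(len(y), n_feat))

eeg = modality(n_feat=30, sep=0.4)    # e.g. band-power features
fnirs = modality(n_feat=20, sep=0.4)  # e.g. HbO/HbR features

def centroid_cv_accuracy(X):
    """Leave-one-trial-out nearest-centroid classification accuracy."""
    hits = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        cents = np.stack([X[mask & (y == k)].mean(0)
                          for k in range(n_classes)])
        hits += ((X[i] - cents) ** 2).sum(1).argmin() == y[i]
    return hits / len(y)

# Fusion: classify the concatenated EEG+fNIRS feature vector.
acc_fused = centroid_cv_accuracy(np.hstack([eeg, fnirs]))
```

The same routine applied to `eeg` or `fnirs` alone gives the single-modality baselines against which fusion is compared; the multisubject "learning" in the paper additionally pools other subjects' trials into the training folds.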
Collapse
Affiliation(s)
- Yichuan Liu
- School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA, United States; Cognitive Neuroengineering and Quantitative Experimental Research Collaborative, Drexel University, Philadelphia, PA, United States
| | - Hasan Ayaz
- School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA, United States; Cognitive Neuroengineering and Quantitative Experimental Research Collaborative, Drexel University, Philadelphia, PA, United States; Department of Family and Community Health, University of Pennsylvania, Philadelphia, PA, United States; Division of General Pediatrics, Children's Hospital of Philadelphia, Philadelphia, PA, United States
| | - Patricia A Shewokis
- School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA, United States; Cognitive Neuroengineering and Quantitative Experimental Research Collaborative, Drexel University, Philadelphia, PA, United States; Nutrition Sciences Department, College of Nursing and Health Professions, Drexel University, Philadelphia, PA, United States
| |
Collapse
|
15
|
Unni A, Ihme K, Jipp M, Rieger JW. Assessing the Driver's Current Level of Working Memory Load with High Density Functional Near-infrared Spectroscopy: A Realistic Driving Simulator Study. Front Hum Neurosci 2017; 11:167. [PMID: 28424602 PMCID: PMC5380755 DOI: 10.3389/fnhum.2017.00167] [Citation(s) in RCA: 36] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2016] [Accepted: 03/21/2017] [Indexed: 11/13/2022] Open
Abstract
Cognitive overload or underload degrades human performance and may lead to fatal incidents while driving. We envision that driver assistance systems that adapt their functionality to the driver's cognitive state could be a promising approach to reducing road accidents due to human error. This research attempts to predict variations in cognitive working memory load in a natural driving scenario with multiple parallel tasks and to reveal the predictive brain areas. We used a modified version of the n-back task to induce five different working memory load levels (from 0-back up to 4-back), forcing the participants to continuously update, memorize, and recall the previous 'n' speed sequences and adjust their speed accordingly while they drove for approximately 60 min on a highway with concurrent traffic in a virtual reality driving simulator. We measured brain activation using multichannel whole-head, high-density functional near-infrared spectroscopy (fNIRS) and predicted working memory load from the fNIRS data by combining multivariate lasso regression and cross-validation. This allowed us to predict variations in working memory load in a continuous, time-resolved manner, with mean Pearson correlations between induced and predicted working memory load across 15 participants of 0.61 [standard error (SE) 0.04] and a maximum of 0.8. Restricting the analysis to prefrontal sensors placed over the forehead reduced the mean correlation to 0.38 (SE 0.04), indicating additional information gained through whole-head coverage. Moreover, working memory load predictions derived from peripheral heart rate parameters achieved much lower correlations (mean 0.21, SE 0.1). Importantly, whole-head fNIRS sampling revealed increasing brain activation in bilateral inferior frontal and bilateral temporo-occipital areas with increasing working memory load, suggesting that these areas are specifically involved in workload-related processing.
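The lasso-plus-cross-validation pipeline can be sketched with a minimal proximal-gradient (ISTA) lasso on synthetic data: fit a sparse channel weighting on one half of the recording, then correlate predicted with induced load on the held-out half. Channel counts, the load coding, and the split-half evaluation below are illustrative assumptions, not the study's pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic setup: many fNIRS channels, a sparse subset of which
# tracks the induced working-memory load (0-back ... 4-back).
n_samples, n_channels = 600, 60
load = np.tile(np.repeat(np.arange(5.0), 24), 5)  # blocks of load levels
X = rng.normal(size=(n_samples, n_channels))
X[:, :5] += 0.8 * load[:, None]                   # 5 load-sensitive channels

def lasso_ista(X, y, lam, n_iter=1000):
    """Minimal lasso via proximal gradient descent (ISTA)."""
    n, p = X.shape
    w = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        w -= step * (X.T @ (X @ w - y)) / n        # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lam * step, 0.0)  # soft-threshold
    return w

# Split-half "cross-validation": fit on the first half, correlate
# predicted with induced load on the held-out half.
half = n_samples // 2
Xc, yc = X - X[:half].mean(0), load - load[:half].mean()
w = lasso_ista(Xc[:half], yc[:half], lam=0.1)
r = np.corrcoef(Xc[half:] @ w, yc[half:])[0, 1]
```

The L1 penalty drives most uninformative channel weights to exactly zero, which is what makes whole-head sensor arrays tractable and lets the surviving weights point to workload-related brain areas.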
Collapse
Affiliation(s)
- Anirudh Unni
- Department of Psychology, University of Oldenburg, Oldenburg, Germany
| | - Klas Ihme
- Institute of Transportation Systems, German Aerospace Research Center, Braunschweig, Germany
| | - Meike Jipp
- Institute of Transportation Systems, German Aerospace Research Center, Braunschweig, Germany
| | - Jochem W Rieger
- Department of Psychology, University of Oldenburg, Oldenburg, Germany
| |
Collapse
|
16
|
Parietal cortex mediates perceptual Gestalt grouping independent of stimulus size. Neuroimage 2016; 133:367-377. [DOI: 10.1016/j.neuroimage.2016.03.008] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2015] [Revised: 01/22/2016] [Accepted: 03/04/2016] [Indexed: 11/19/2022] Open
|
17
|
Schicktanz S, Amelung T, Rieger JW. Qualitative assessment of patients' attitudes and expectations toward BCIs and implications for future technology development. Front Syst Neurosci 2015; 9:64. [PMID: 25964745 PMCID: PMC4410612 DOI: 10.3389/fnsys.2015.00064] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2015] [Accepted: 04/03/2015] [Indexed: 11/13/2022] Open
Abstract
Brain-computer interfaces (BCIs) are important for the next generation of neuro-prosthesis innovations. Only a few pilot projects have tested patients' ability to control BCIs or their satisfaction with the offered technologies. On the one hand, little is known about patients' moral attitudes toward the benefit-risk ratio of BCIs or about their needs, priorities, and expectations. On the other hand, ethics experts intensively discuss the general risks of BCIs and the limits of neuro-enhancement. To our knowledge, we present here the first qualitative interview study with ten chronic patients matching the potential user categories for motor and communication BCIs, assessing their practical and moral attitudes toward this technology. The interviews reveal practical and moral attitudes toward motor BCIs that can inform future technology development. We discuss our empirical findings on patients' perspectives and compare them with neuroscientists' and ethicists' perspectives. Our analysis indicates only partial overlap between the potential users' and the experts' assessments of BCI technology. It points out the importance of considering the needs and desires of the targeted patient group. Based on our findings, we suggest a multi-fold approach to the development of clinical BCIs, rooted in participatory technology development. We conclude that clinical BCI development needs to be explored in a disease-related and culturally sensitive way.
Collapse
Affiliation(s)
- Silke Schicktanz
- Department of Medical Ethics and History of Medicine, University Medical Center Göttingen, Göttingen, Germany
| | - Till Amelung
- Department of Medical Ethics and History of Medicine, University Medical Center Göttingen, Göttingen, Germany
| | - Jochem W. Rieger
- Department of Psychology, University of Oldenburg, Germany
- Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
| |
Collapse
|
18
|
Baecke S, Lützkendorf R, Mallow J, Luchtmann M, Tempelmann C, Stadler J, Bernarding J. A proof-of-principle study of multi-site real-time functional imaging at 3T and 7T: Implementation and validation. Sci Rep 2015; 5:8413. [PMID: 25672521 PMCID: PMC4325335 DOI: 10.1038/srep08413] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2014] [Accepted: 01/19/2015] [Indexed: 11/09/2022] Open
Abstract
Real-time functional magnetic resonance imaging (rtfMRI) is used mainly for neurofeedback or for brain-computer interfaces (BCIs). Multi-site rtfMRI, however, could enable new interactive paradigms such as monitoring mutual information flow or controlling objects in shared virtual environments. For that reason, a previously developed framework providing integrated control and data analysis of rtfMRI experiments was extended to enable multi-site rtfMRI. Important new components include a data exchange platform for analyzing the data of both MR scanners independently and/or jointly. Information related to brain activation can be displayed separately or in a shared view. A signal calibration procedure had to be developed and integrated to permit connecting sites with different hardware and to account for differing inter-individual brain activation levels. The framework was successfully validated in a proof-of-principle study with twelve volunteers: the overall concept, the calibration of grossly differing signals, and the BCI functionality at each site proved to work as required. To model interactions between brains in real time, more complex rules utilizing mutual activation patterns could easily be implemented to allow for new kinds of social fMRI experiments.
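One simple way to reconcile grossly different signal scales across scanners, as the calibration step requires, is to z-score each site's ROI signal against its own resting baseline. The sketch below is illustrative only and is not the framework's actual calibration procedure; the scales and baseline window are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two sites report ROI signals on very different scales
# (different scanners, field strengths, and subjects).
site_a = 1200.0 + 15.0 * rng.normal(size=200)   # e.g. 3T raw units
site_b = 40.0 + 0.5 * rng.normal(size=200)      # e.g. 7T-derived units

def calibrate(signal, baseline):
    """Z-score a signal against its own resting baseline."""
    return (signal - baseline.mean()) / baseline.std()

# Use the first 50 samples of each run as the resting baseline.
z_a = calibrate(site_a, site_a[:50])
z_b = calibrate(site_b, site_b[:50])
# z_a and z_b are now on a comparable scale and can enter a joint
# rule, e.g. a mutual-activation criterion for shared feedback.
```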
Collapse
Affiliation(s)
- Sebastian Baecke
- Institute for Biometry and Medical Informatics, Otto-von-Guericke-University Magdeburg
| | - Ralf Lützkendorf
- Institute for Biometry and Medical Informatics, Otto-von-Guericke-University Magdeburg
| | - Johannes Mallow
- Institute for Biometry and Medical Informatics, Otto-von-Guericke-University Magdeburg
| | | | | | | | - Johannes Bernarding
- Institute for Biometry and Medical Informatics, Otto-von-Guericke-University Magdeburg
| |
Collapse
|
19
|
Deike S, Denham SL, Sussman E. Probing auditory scene analysis. Front Neurosci 2014; 8:293. [PMID: 25309314 PMCID: PMC4162357 DOI: 10.3389/fnins.2014.00293] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2014] [Accepted: 08/27/2014] [Indexed: 11/13/2022] Open
Affiliation(s)
- Susann Deike
- Special Lab Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
| | - Susan L Denham
- Cognition Institute, University of Plymouth, Plymouth, UK; School of Psychology, University of Plymouth, Plymouth, UK
| | - Elyse Sussman
- Department of Neuroscience, Albert Einstein College of Medicine of Yeshiva University, Bronx, NY, USA; Department of Otorhinolaryngology-Head and Neck Surgery, Albert Einstein College of Medicine of Yeshiva University, Bronx, NY, USA
| |
Collapse
|