1. Xu Z, Chen H, Wang Y. Invisible social grouping facilitates the recognition of individual faces. Conscious Cogn 2023;113:103556. PMID: 37541010. DOI: 10.1016/j.concog.2023.103556.
Abstract
Emerging evidence suggests a specialized mechanism supporting perceptual grouping of social entities. However, the stage at which social grouping is processed is unclear. Through four experiments, here we showed that participants' recognition of a visible face was facilitated by the presence of a second, facing face (thus forming a social grouping) relative to a nonfacing face, even when the second face was invisible. Using a monocular/dichoptic paradigm, we further found that the social grouping facilitation effect occurred when the two faces were presented dichoptically to different eyes rather than monocularly to the same eye, suggesting that social grouping relies on binocular rather than monocular neural channels. The above effects were not found for inverted face dyads, thereby ruling out the contribution of nonsocial factors. Taken together, these findings support the unconscious influence of social grouping on visual perception and suggest an early origin of social grouping processing in the visual pathway.
Affiliation(s)
- Zhenjie Xu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China
- Hui Chen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China
- Yingying Wang
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China
2. Egurtzegi A, Blasi DE, Bornkessel-Schlesewsky I, Laka I, Meyer M, Bickel B, Sauppe S. Cross-linguistic differences in case marking shape neural power dynamics and gaze behavior during sentence planning. Brain Lang 2022;230:105127. PMID: 35605312. DOI: 10.1016/j.bandl.2022.105127.
Abstract
Languages differ in how they mark the dependencies between verbs and arguments, e.g., by case. An eye tracking and EEG picture description study examined the influence of case marking on the time course of sentence planning in Basque and Swiss German. While German assigns an unmarked (nominative) case to subjects, Basque specifically marks agent arguments through ergative case. Fixations to agents and event-related synchronization (ERS) in the theta and alpha frequency bands, as well as desynchronization (ERD) in the alpha and beta bands, revealed multiple effects of case marking on the time course of early sentence planning. Speakers committed to case marking early in planning when preparing sentences with ergative-marked agents in Basque, whereas sentences with unmarked agents allowed structural commitment to be delayed in both languages. These findings support hierarchically incremental accounts of sentence planning and highlight how cross-linguistic differences shape the neural dynamics underpinning language use.
Affiliation(s)
- Aitor Egurtzegi
- Department of Comparative Language Science, University of Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Switzerland; English Department, University of Zurich, Switzerland
- Damián E Blasi
- Department of Human Evolutionary Biology, Harvard University, United States; Department of Linguistic and Cultural Evolution, Max Planck Institute for Evolutionary Anthropology, Germany
- Ina Bornkessel-Schlesewsky
- School of Psychology, Social Work and Social Policy, University of South Australia, Australia; Cognitive and Systems Neuroscience Research Hub, University of South Australia, Australia
- Itziar Laka
- Department of Linguistics and Basque Studies, University of the Basque Country (UPV/EHU), Spain
- Martin Meyer
- Department of Comparative Language Science, University of Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Switzerland; Cognitive Psychology Unit, Psychological Institute, University of Klagenfurt, Austria
- Balthasar Bickel
- Department of Comparative Language Science, University of Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Switzerland
- Sebastian Sauppe
- Department of Comparative Language Science, University of Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Switzerland
3. Paparella I, Papeo L.
Affiliation(s)
- Ilenia Paparella
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Lyon, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, Lyon, France
4. Bellot E, Abassi E, Papeo L. Moving Toward versus Away from Another: How Body Motion Direction Changes the Representation of Bodies and Actions in the Visual Cortex. Cereb Cortex 2021;31:2670-2685. PMID: 33401307. DOI: 10.1093/cercor/bhaa382.
Abstract
Representing multiple agents and their mutual relations is a prerequisite to understand social events such as interactions. Using functional magnetic resonance imaging on human adults, we show that visual areas dedicated to body form and body motion perception contribute to processing social events, by holding the representation of multiple moving bodies and encoding the spatial relations between them. In particular, seeing animations of human bodies facing and moving toward (vs. away from) each other increased neural activity in the body-selective cortex [extrastriate body area (EBA)] and posterior superior temporal sulcus (pSTS) for biological motion perception. In those areas, representation of body postures and movements, as well as of the overall scene, was more accurate for facing body (vs. nonfacing body) stimuli. Effective connectivity analysis with dynamic causal modeling revealed increased coupling between EBA and pSTS during perception of facing body stimuli. The perceptual enhancement of multiple-body scenes featuring cues of interaction (i.e., face-to-face positioning, spatial proximity, and approaching signals) was supported by the participants' better performance in a recognition task with facing body versus nonfacing body stimuli. Thus, visuospatial cues of interaction in multiple-person scenarios affect the perceptual representation of body and body motion and, by promoting functional integration, streamline the process from body perception to action representation.
Affiliation(s)
- Emmanuelle Bellot
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France
- Etienne Abassi
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France
5. Hafri A, Firestone C. The Perception of Relations. Trends Cogn Sci 2021;25:475-492. PMID: 33812770. DOI: 10.1016/j.tics.2021.01.006.
Abstract
The world contains not only objects and features (red apples, glass bowls, wooden tables), but also relations holding between them (apples contained in bowls, bowls supported by tables). Representations of these relations are often developmentally precocious and linguistically privileged; but how does the mind extract them in the first place? Although relations themselves cast no light onto our eyes, a growing body of work suggests that even very sophisticated relations display key signatures of automatic visual processing. Across physical, eventive, and social domains, relations such as support, fit, cause, chase, and even socially interact are extracted rapidly, are impossible to ignore, and influence other perceptual processes. Sophisticated and structured relations are not only judged and understood, but also seen, revealing surprisingly rich content in visual perception itself.
Affiliation(s)
- Alon Hafri
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Chaz Firestone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Philosophy, Johns Hopkins University, Baltimore, MD 21218, USA
6. You won't believe what this guy is doing with the potato: The ObjAct stimulus-set depicting human actions on congruent and incongruent objects. Behav Res Methods 2021;53:1895-1909. PMID: 33634424. PMCID: PMC8516756. DOI: 10.3758/s13428-021-01540-6.
Abstract
Perception famously involves both bottom-up and top-down processes. The latter are influenced by our previous knowledge and expectations about the world. In recent years, many studies have focused on the role of expectations in perception in general, and in object processing in particular. Yet studying this question is not an easy feat, requiring, among other things, the creation and validation of appropriate stimuli. Here, we introduce the ObjAct stimulus-set of free-to-use, highly controlled real-life scenes, on which critical objects are pasted. All scenes depict human agents performing an action with an object that is either congruent or incongruent with the action. The focus on human actions yields highly constraining contexts, strengthening congruency effects. The stimuli were analyzed for low-level properties, using the SHINE toolbox to control for luminance and contrast, and using a deep convolutional neural network to mimic V1 processing and potentially discover other low-level factors that might differ between congruent and incongruent scenes. Two online validation studies (N = 500) were also conducted to assess the congruency manipulation and collect additional ratings of our images (e.g., arousal, likeability, visual complexity). We also provide full descriptions of the online sources from which all images were taken, as well as verbal descriptions of their content. Taken together, this extensive validation and characterization procedure makes the ObjAct stimulus-set highly informative and easy to use for future researchers in multiple fields, from object and scene processing, through top-down contextual effects, to the study of actions.
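The kind of low-level control described in this abstract (early CNN activations as a rough stand-in for V1 responses) can be approximated in a few lines. Below is a minimal sketch, not the authors' pipeline: it assumes PyTorch/torchvision, and the image file names are placeholders rather than files from the ObjAct set.

```python
# Minimal sketch: compare early-CNN activations (a rough V1 proxy) for a
# congruent/incongruent scene pair. File names are hypothetical.
import torch
from torchvision import models, transforms
from PIL import Image

# Early layers of a pretrained AlexNet (conv1 + ReLU + pool) respond to
# oriented edges and local contrast, broadly like primary visual cortex.
net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
early = net.features[:3]

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def early_features(path: str) -> torch.Tensor:
    """Return flattened early-layer activations for one image."""
    img = prep(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return early(img).flatten()

# If the congruent and incongruent versions of a scene differ only in the
# pasted object, their early-layer activations should be highly similar.
sim = torch.nn.functional.cosine_similarity(
    early_features("scene_congruent.jpg"),
    early_features("scene_incongruent.jpg"),
    dim=0,
)
print(f"early-layer cosine similarity: {sim.item():.3f}")
```

A near-ceiling similarity under such a check would suggest that any congruency effect is not driven by low-level image differences, which is the logic the abstract describes.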
7. Speaking for seeing: Sentence structure guides visual event apprehension. Cognition 2020;206:104516. PMID: 33228969. DOI: 10.1016/j.cognition.2020.104516.
Abstract
Human experience and communication are centred on events, and event apprehension is a rapid process that draws on the visual perception and immediate categorization of event roles ("who does what to whom"). We demonstrate a role for syntactic structure in visual information uptake for event apprehension. An event structure foregrounding either the agent or patient was activated during speaking, transiently modulating the apprehension of subsequently viewed unrelated events. Speakers of Dutch described pictures with actives and passives (agent and patient foregrounding, respectively). First fixations on subsequently presented pictures of unrelated events, shown briefly (for 300 ms), were influenced by the active or passive structure of the previously produced sentence. Going beyond the study of how single words cue object perception, we show that sentence structure guides the viewpoint taken during rapid event apprehension.
8. Schweinberger SR, Dobel C. Why twos in human visual perception? A possible role of prediction from dynamic synchronization in interaction. Cortex 2020;135:355-357. PMID: 33234236. DOI: 10.1016/j.cortex.2020.09.015.
Affiliation(s)
- Stefan R Schweinberger
- Department of General Psychology and Cognitive Neuroscience, Friedrich Schiller University of Jena, Germany; Swiss Center for Affective Sciences, University of Geneva, Switzerland. http://www.allgpsy.uni-jena.de
- Christian Dobel
- Department of Otorhinolaryngology, Institute of Phoniatry and Pedaudiology, Jena University Hospital, Friedrich Schiller University of Jena, Germany
9. Redies C, Grebenkina M, Mohseni M, Kaduhm A, Dobel C. Global Image Properties Predict Ratings of Affective Pictures. Front Psychol 2020;11:953. PMID: 32477228. PMCID: PMC7235378. DOI: 10.3389/fpsyg.2020.00953.
Abstract
Affective pictures are widely used in studies of human emotions. The objects or scenes shown in affective pictures play a pivotal role in eliciting particular emotions. However, affective processing can also be mediated by low-level perceptual features, such as local brightness contrast, color or the spatial frequency profile. In the present study, we asked whether image properties that reflect global image structure and image composition affect the rating of affective pictures. We focused on 13 global image properties that were previously associated with the esthetic evaluation of visual stimuli, and determined their predictive power for the ratings of five affective picture datasets (IAPS, GAPED, NAPS, DIRTI, and OASIS). First, we used an SVM-RBF classifier to predict high and low ratings for valence and arousal, respectively, and achieved a classification accuracy of 58–76% in this binary decision task. Second, a multiple linear regression analysis revealed that the individual image properties account for between 6 and 20% of the variance in the subjective ratings for valence and arousal. The predictive power of the image properties varies across the datasets and the type of rating. Ratings tend to share similar sets of predictors if they correlate positively with each other. In conclusion, we obtained evidence from non-linear and linear analyses that affective pictures evoke emotions not only by what they show, but also by how they show it. Whether the human visual system actually uses these perceptual cues for emotional processing remains to be investigated.
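As a rough illustration of the two analyses this abstract reports, the sketch below runs an RBF-kernel SVM on a median split of the ratings and a cross-validated linear regression. It assumes scikit-learn and NumPy, and uses synthetic data in place of the 13 global image properties and the actual ratings; it is not the authors' code.

```python
# Minimal sketch of the two analyses described above, with synthetic data
# standing in for the 13 global image properties and affective ratings.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 13))                      # 13 image properties
y = X @ rng.normal(size=13) + rng.normal(size=200)  # ratings (synthetic)

# Binary task: classify high vs. low ratings with an SVM-RBF classifier.
high_low = (y > np.median(y)).astype(int)
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
acc = cross_val_score(svm, X, high_low, cv=5).mean()
print(f"classification accuracy: {acc:.2f}")

# Continuous task: variance in ratings explained by the properties (R^2).
r2 = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()
print(f"variance explained (R^2): {r2:.2f}")
```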
Affiliation(s)
- Christoph Redies
- Experimental Aesthetics Group, Institute of Anatomy I, Jena University Hospital, Friedrich Schiller University, Jena, Germany
- Maria Grebenkina
- Experimental Aesthetics Group, Institute of Anatomy I, Jena University Hospital, Friedrich Schiller University, Jena, Germany
- Mahdi Mohseni
- Experimental Aesthetics Group, Institute of Anatomy I, Jena University Hospital, Friedrich Schiller University, Jena, Germany
- Ali Kaduhm
- Experimental Aesthetics Group, Institute of Anatomy I, Jena University Hospital, Friedrich Schiller University, Jena, Germany
- Christian Dobel
- Department of Otolaryngology and Institute of Phoniatry and Pedaudiology, Jena University Hospital, Friedrich Schiller University, Jena, Germany
10. The Representation of Two-Body Shapes in the Human Visual Cortex. J Neurosci 2019;40:852-863. PMID: 31801812. DOI: 10.1523/jneurosci.1378-19.2019.
Abstract
Human social nature has shaped visual perception. A signature of the relationship between vision and sociality is a particular visual sensitivity to social entities such as faces and bodies. We asked whether human vision also exhibits a special sensitivity to spatial relations that reliably correlate with social relations. In general, interacting people are more often situated face-to-face than back-to-back. Using functional MRI and behavioral measures in female and male human participants, we show that visual sensitivity to social stimuli extends to images including two bodies facing toward (vs away from) each other. In particular, the inferior lateral occipital cortex, which is involved in visual-object perception, is organized such that the inferior portion encodes the number of bodies (one vs two) and the superior portion is selectively sensitive to the spatial relation between bodies (facing vs nonfacing). Moreover, functionally localized, body-selective visual cortex responded to facing bodies more strongly than identical, but nonfacing, bodies. In this area, multivariate pattern analysis revealed an accurate representation of body dyads with sharpening of the representation of single-body postures in facing dyads, which demonstrates an effect of visual context on the perceptual analysis of a body. Finally, the cost of body inversion (upside-down rotation) on body recognition, a behavioral signature of a specialized mechanism for body perception, was larger for facing versus nonfacing dyads. Thus, spatial relations between multiple bodies are encoded in regions for body perception and affect the way in which bodies are processed.
SIGNIFICANCE STATEMENT: Human social nature has shaped visual perception. Here, we show that human vision is not only attuned to socially relevant entities, such as bodies, but also to socially relevant spatial relations between those entities. Body-selective regions of visual cortex respond more strongly to multiple bodies that appear to be interacting (i.e., face-to-face), relative to unrelated bodies, and more accurately represent single body postures in interacting scenarios. Moreover, recognition of facing bodies is particularly susceptible to perturbation by upside-down rotation, indicative of a particular visual sensitivity to the canonical appearance of facing bodies. This encoding of relations between multiple bodies in areas for body-shape recognition suggests that the visual context in which a body is encountered deeply affects its perceptual analysis.
11. Quadflieg S, Westmoreland K. Making Sense of Other People's Encounters: Towards an Integrative Model of Relational Impression Formation. J Nonverbal Behav 2019. DOI: 10.1007/s10919-019-00295-1.
12. Zwitserlood P, Bölte J, Hofmann R, Meier CC, Dobel C. Seeing for speaking: Semantic and lexical information provided by briefly presented, naturalistic action scenes. PLoS One 2018;13:e0194762. PMID: 29652939. PMCID: PMC5898714. DOI: 10.1371/journal.pone.0194762.
Abstract
At the interface between scene perception and speech production, we investigated how rapidly action scenes can activate semantic and lexical information. Experiment 1 examined how complex action-scene primes, presented for 150 ms, 100 ms, or 50 ms and subsequently masked, influenced the speed with which immediately following action-picture targets were named. Prime and target actions were either identical, showed the same action with different actors and environments, or were unrelated. Relative to unrelated primes, identical and same-action primes facilitated naming the target action, even when presented for 50 ms. In Experiment 2, neutral primes assessed the direction of effects. Identical and same-action scenes induced facilitation but unrelated actions induced interference. In Experiment 3, written verbs were used as targets for naming, preceded by action primes. When target verbs denoted the prime action, clear facilitation was obtained. In contrast, interference was observed when target verbs were phonologically similar, but otherwise unrelated, to the names of prime actions. This is clear evidence for word-form activation by masked action scenes. Masked action pictures thus provide conceptual information that is detailed enough to facilitate apprehension and naming of immediately following scenes. Masked actions even activate their word-form information, as is evident when targets are words. We thus show how language production can be primed with briefly flashed, masked action scenes, in answer to long-standing questions in scene processing.
Affiliation(s)
- Pienie Zwitserlood
- Institute for Psychology, University of Münster, Münster, Germany
- Otto-Creutzfeldt Center for Cognitive Neuroscience, University of Münster, Münster, Germany
- Jens Bölte
- Institute for Psychology, University of Münster, Münster, Germany
- Otto-Creutzfeldt Center for Cognitive Neuroscience, University of Münster, Münster, Germany
- Reinhild Hofmann
- Clinic for Phoniatrics and Pediatric Audiology, University of Münster, Münster, Germany
- Christian Dobel
- Department of Otorhinolaryngology, Medical Faculty, University of Jena, Jena, Germany
13. Hafri A, Trueswell JC, Strickland B. Encoding of event roles from visual scenes is rapid, spontaneous, and interacts with higher-level visual processing. Cognition 2018;175:36-52. PMID: 29459238. DOI: 10.1016/j.cognition.2018.02.011.
Abstract
A crucial component of event recognition is understanding event roles, i.e. who acted on whom: boy hitting girl is different from girl hitting boy. We often categorize Agents (i.e. the actor) and Patients (i.e. the one acted upon) from visual input, but do we rapidly and spontaneously encode such roles even when our attention is otherwise occupied? In three experiments, participants observed a continuous sequence of two-person scenes and had to search for a target actor in each (the male/female or red/blue-shirted actor) by indicating with a button press whether the target appeared on the left or the right. Critically, although role was orthogonal to gender and shirt color, and was never explicitly mentioned, participants responded more slowly when the target's role switched from trial to trial (e.g., the male went from being the Patient to the Agent). In a final experiment, we demonstrated that this effect cannot be fully explained by differences in posture associated with Agents and Patients. Our results suggest that extraction of event structure from visual scenes is rapid and spontaneous.
Collapse
Affiliation(s)
- Alon Hafri
- Department of Psychology, University of Pennsylvania, 425 S. University Avenue, Philadelphia, PA 19104, USA
- John C Trueswell
- Department of Psychology, University of Pennsylvania, 425 S. University Avenue, Philadelphia, PA 19104, USA
- Brent Strickland
- Département d'Etudes Cognitives, Ecole Normale Supérieure, PSL Research University, Institut Jean Nicod (ENS, EHESS, CNRS), 75005 Paris, France
14. Konopka AE, Meyer A, Forest TA. Planning to speak in L1 and L2. Cogn Psychol 2018;102:72-104. PMID: 29407637. DOI: 10.1016/j.cogpsych.2017.12.003.
Abstract
The leading theories of sentence planning, Hierarchical Incrementality and Linear Incrementality, differ in their assumptions about the coordination of processes that map preverbal information onto language. Previous studies showed that, in native (L1) speakers, this coordination can vary with the ease of executing the message-level and sentence-level processes necessary to plan and produce an utterance. We report the first series of experiments to systematically examine how linguistic experience influences sentence planning in native (L1) speakers (i.e., speakers with life-long experience using the target language) and non-native (L2) speakers (i.e., speakers with less experience using the target language). In all experiments, speakers spontaneously generated one-sentence descriptions of simple events in Dutch (L1) and English (L2). Analyses of eye movements across early and late time windows (pre- and post-400 ms) compared the extent of early message-level encoding and the onset of linguistic encoding. In Experiment 1, speakers were more likely to engage in extensive message-level encoding and to delay sentence-level encoding when using their L2. Experiments 2-4 selectively facilitated encoding of the preverbal message, encoding of the agent character (i.e., the first content word in active sentences), and encoding of the sentence verb (i.e., the second content word in active sentences), respectively. Experiment 2 showed that there is no delay in the onset of L2 linguistic encoding when speakers are familiar with the events. Experiments 3 and 4 showed that the delay in the onset of L2 linguistic encoding is not due to speakers delaying encoding of the agent, but due to a preference to encode information needed to select a suitable verb early in the formulation process. Overall, speakers prefer to temporally separate message-level from sentence-level encoding and to prioritize encoding of relational information when planning L2 sentences, consistent with Hierarchical Incrementality.
Affiliation(s)
- Agnieszka E Konopka
- University of Aberdeen, Scotland, UK; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Antje Meyer
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Tess A Forest
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; University of Toronto, Canada
15. Biderman N, Mudrik L. Evidence for Implicit—But Not Unconscious—Processing of Object-Scene Relations. Psychol Sci 2017;29:266-277. DOI: 10.1177/0956797617735745.
Abstract
Is consciousness necessary for integration? Findings of seemingly high-level object-scene integration in the absence of awareness have challenged major theories in the field and attracted considerable scientific interest. Lately, one of these findings has been questioned because of a failure to replicate, yet the other finding was still uncontested. Here, we show that this latter finding—slowed-down performance on a visible target following a masked prime scene that includes an incongruent object—is also not reproducible. Using Bayesian statistics, we found evidence against unconscious integration of objects and scenes. Put differently, at the moment, there is no compelling evidence for object-scene congruency processing in the absence of awareness. Intriguingly, however, our results do suggest that consciously experienced yet briefly presented incongruent scenes take longer to process, even when subjects do not explicitly detect their incongruency.
Affiliation(s)
- Liad Mudrik
- School of Psychological Sciences, Tel Aviv University
- Sagol School for Neuroscience, Tel Aviv University
16. Making Sense of Real-World Scenes. Trends Cogn Sci 2016;20:843-856. PMID: 27769727. DOI: 10.1016/j.tics.2016.09.003.
Abstract
To interact with the world, we have to make sense of the continuous sensory input conveying information about our environment. A recent surge of studies has investigated the processes enabling scene understanding, using increasingly complex stimuli and sophisticated analyses to highlight the visual features and brain regions involved. However, there are two major challenges to producing a comprehensive framework for scene understanding. First, scene perception is highly dynamic, subserving multiple behavioral goals. Second, a multitude of different visual properties co-occur across scenes and may be correlated or independent. We synthesize the recent literature and argue that for a complete view of scene understanding, it is necessary to account for both differing observer goals and the contribution of diverse scene properties.