1. Boger T, Strickland B. Object persistence explains event completion. Cognition 2025;259:106110. PMID: 40054394; DOI: 10.1016/j.cognition.2025.106110.
Abstract
Our minds consistently distort memories of objects and events. Oftentimes, these distortions serve to transform incoherent memories into coherent ones, as when we misremember partial events as whole ("event completion"). What mechanisms drive these distortions? Whereas extant work shows that representations of causality, continuity, familiarity, physical coherence, or event coherence create memory distortions, we suggest that a simpler and more fundamental mechanism may be at play: object persistence. Merely seeing an object take part in an event can create a persisting memory of its presence throughout that event. In 8 pre-registered experiments (N = 317 adults), participants performed a simple task where they watched an animation, then chose whether or not a frame from the animation contained an object. Participants falsely remembered seeing an object when it was not there (E1). These effects persisted in the absence of causality (E2), continuity (E3), event familiarity (E4), object familiarity (E5), even when the events violated physical laws (E6), and when the events themselves were not coherent (E7). However, the effect disappeared when we abolished object persistence (E8). Thus, object persistence alone creates rich, enduring, and coherent representations of objects and events.
Affiliation(s)
- Tal Boger
- Johns Hopkins University, Baltimore, MD 21218, United States of America
- Brent Strickland
- Institut Jean Nicod, France; UM6P Africa Business School and School of Collective Intelligence, Morocco
2. Kim S, Kang J, Chong SC. Visual statistical learning initiates in/out-group judgments. Acta Psychol (Amst) 2025;254:104873. PMID: 40058127; DOI: 10.1016/j.actpsy.2025.104873.
Abstract
Our research examines how spatial proximity, shaped by visual statistical learning (VSL), initiates the categorization of individuals into in-/out-groups. We hypothesized that individuals positioned continuously closer in a visual array would be more frequently chosen as part of the in-group, while those farther away would be categorized as out-group members. In Experiment 1, participants selected individuals spatially associated as in-group members, while those associated differently and farther away were more often assigned to the out-group. Experiment 2 replicated these findings and refined the methodology by incorporating two types of visual representation: facial images and initials. These findings enhance our understanding of how VSL not only shapes perceptions of spatial proximity but also initiates the process of group judgments. Specifically, participants indirectly learned and recognized spatial regularities, which influenced their in-group and out-group decisions, underscoring the critical role of VSL in driving early-stage social categorization within virtual environments.
Affiliation(s)
- Shinjung Kim
- Department of Psychology, Yonsei University, Seoul, South Korea
- Jisu Kang
- Department of Psychology, Yonsei University, Seoul, South Korea
- Sang Chul Chong
- Department of Psychology, Yonsei University, Seoul, South Korea; Graduate Program in Cognitive Science, Yonsei University, Seoul, South Korea
3. Reger M, Vrabie O, Volberg G, Lingnau A. Actions at a glance: the time course of action, object, and scene recognition in a free recall paradigm. Cogn Affect Behav Neurosci 2025. PMID: 40011402; DOI: 10.3758/s13415-025-01272-6.
Abstract
Being able to quickly recognize other people's actions lies at the heart of our ability to efficiently interact with our environment. Action recognition has been suggested to rely on the analysis and integration of information from different perceptual subsystems, e.g., for the processing of objects and scenes. However, to our knowledge, the stimulus presentation times required to extract information about actions, objects, and scenes have not yet been directly compared. To address this gap in the literature, we compared the recognition thresholds for actions, objects, and scenes. First, 30 participants were presented with grayscale images depicting different actions at variable presentation times (33-500 ms) and provided written descriptions of each image. Next, ten naïve raters evaluated these descriptions with respect to the presence and accuracy of information related to actions, objects, scenes, and sensory information. Comparing thresholds across presentation times, we found that recognizing actions required shorter presentation times (from 60 ms onwards) than objects (68 ms) and scenes (84 ms). More specific actions required presentation times of approximately 100 ms. Moreover, thresholds were modulated by action category, with the lowest thresholds for locomotion and the highest thresholds for food-related actions. Together, our data suggest that perceptual evidence for actions, objects, and scenes is gathered in parallel when these are presented in the same scene but accumulates faster for actions that reflect static body posture recognition than for objects and scenes.
Affiliation(s)
- Maximilian Reger
- Faculty of Human Sciences, University of Regensburg, Universitätsstraße 31, 93053 Regensburg, Germany
- Oleg Vrabie
- Faculty of Human Sciences, University of Regensburg, Universitätsstraße 31, 93053 Regensburg, Germany
- Gregor Volberg
- Faculty of Human Sciences, University of Regensburg, Universitätsstraße 31, 93053 Regensburg, Germany
- Angelika Lingnau
- Faculty of Human Sciences, University of Regensburg, Universitätsstraße 31, 93053 Regensburg, Germany
4. Giglio L, Hagoort P, Ostarek M. Neural encoding of semantic structures during sentence production. Cereb Cortex 2024;34:bhae482. PMID: 39716739; PMCID: PMC11666472; DOI: 10.1093/cercor/bhae482.
Abstract
The neural representations for compositional processing have so far been mostly studied during sentence comprehension. In an fMRI study of sentence production, we investigated the brain representations for compositional processing during speaking. We used a rapid serial visual presentation sentence recall paradigm to elicit sentence production from the conceptual memory of an event. With voxel-wise encoding models, we probed the specificity of the compositional structure built during the production of each sentence, comparing an unstructured model of word meaning without relational information with a model that encodes abstract thematic relations and a model encoding event-specific relational structure. Whole-brain analyses revealed that sentence meaning at different levels of specificity was encoded in a large left frontal-parietal-temporal network. A comparison with semantic structures composed during the comprehension of the same sentences showed similarly distributed brain activity patterns. An ROI analysis over left fronto-temporal language parcels showed that event-specific relational structure above word-specific information was encoded in the left inferior frontal gyrus. Overall, we found evidence for the encoding of sentence meaning during sentence production in a distributed brain network and for the encoding of event-specific semantic structures in the left inferior frontal gyrus.
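As background to the voxel-wise encoding approach this abstract relies on: in general outline (not the study's specific pipeline), an encoding model is a regularized linear regression from stimulus features to each voxel's response, scored by how well it predicts held-out data. A minimal sketch with synthetic data; all dimensions, feature values, and the ridge penalty here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 100 training + 20 held-out sentences, 50 semantic
# features per sentence, 200 voxels (all sizes are arbitrary).
X_train, X_test = rng.normal(size=(100, 50)), rng.normal(size=(20, 50))
true_w = rng.normal(size=(50, 200))
Y_train = X_train @ true_w + rng.normal(scale=0.5, size=(100, 200))
Y_test = X_test @ true_w + rng.normal(scale=0.5, size=(20, 200))

# Ridge regression, closed form; one weight vector per voxel.
alpha = 1.0
w_hat = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(50),
                        X_train.T @ Y_train)

# Encoding score: correlation between predicted and observed
# held-out responses, computed voxel by voxel.
Y_pred = X_test @ w_hat
scores = np.array([np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1]
                   for v in range(Y_test.shape[1])])
print(round(float(scores.mean()), 2))
```

Comparing such scores across feature spaces (word meaning alone vs. models that add relational structure) is what lets studies of this kind ask which brain regions encode which level of semantic specificity.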
Affiliation(s)
- Laura Giglio
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, Nijmegen 6525 XD, The Netherlands
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Kapittelweg 29, Nijmegen 6525 EN, The Netherlands
- Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 915 Greene Street, Columbia, SC 29208, USA
- Peter Hagoort
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, Nijmegen 6525 XD, The Netherlands
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Kapittelweg 29, Nijmegen 6525 EN, The Netherlands
- Markus Ostarek
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, Nijmegen 6525 XD, The Netherlands
5. Friedrich J, Fischer MH, Raab M. Invariant representations in abstract concept grounding: the physical world in grounded cognition. Psychon Bull Rev 2024;31:2558-2580. PMID: 38806790; PMCID: PMC11680661; DOI: 10.3758/s13423-024-02522-3.
Abstract
Grounded cognition states that mental representations of concepts consist of experiential aspects. For example, the concept "cup" consists of the sensorimotor experiences from interactions with cups. Typical modalities in which concepts are grounded are the sensorimotor system (including interoception), emotion, action, language, and social aspects. Here, we argue that this list should be expanded to include physical invariants (unchanging features of physical motion; e.g., gravity, momentum, friction). Research on physical reasoning consistently demonstrates that physical invariants are represented as fundamentally as other grounding substrates, and therefore should qualify. We assess several theories of concept representation (simulation, conceptual metaphor, conceptual spaces, predictive processing) and their positions on physical invariants. We find that the classic grounded cognition theories, simulation and conceptual metaphor theory, have not considered physical invariants, while conceptual spaces and predictive processing have. We conclude that physical invariants should be included in grounded cognition theories, and that the core mechanisms of simulation and conceptual metaphor theory are well suited to do this. Furthermore, conceptual spaces and predictive processing are very promising and should also be integrated with grounded cognition in the future.
Affiliation(s)
- Jannis Friedrich
- German Sport University Cologne, Am Sportpark Müngersdorf 6, 50933 Cologne, Germany
- Martin H Fischer
- Psychology Department, University of Potsdam, Karl-Liebknecht-Strasse 24-25, House 14, 14476 Potsdam-Golm, Germany
- Markus Raab
- German Sport University Cologne, Am Sportpark Müngersdorf 6, 50933 Cologne, Germany
6. Papeo L, Vettori S, Serraille E, Odin C, Rostami F, Hochmann JR. Abstract thematic roles in infants' representation of social events. Curr Biol 2024;34:4294-4300.e4. PMID: 39168122; DOI: 10.1016/j.cub.2024.07.081.
Abstract
Infants' thoughts are classically characterized as iconic, perceptual-like representations [1-3]. Less clear is whether preverbal infants also possess a propositional language of thought, where mental symbols are combined according to syntactic rules, very much like words in sentences [4-17]. Because it is rich, productive, and abstract, a language of thought would provide a key to explaining impressive achievements in early infancy, from logical inference to representation of false beliefs [18-31]. A propositional language - including a language of thought [5] - implies thematic roles that, in a sentence, indicate the relation between noun and verb phrases, defining who acts on whom; i.e., who is the agent and who is the patient [32-39]. Agent and patient roles are abstract in that they generally apply to different situations: whether A kicks, helps, or kisses B, A is the agent and B is the patient. Do preverbal infants represent abstract agent and patient roles? We presented 7-month-olds (n = 143) with sequences of scenes where the posture or relative positioning of two individuals indicated that, across different interactions, A acted on B. Results from habituation (experiment 1) and pupillometry paradigms (experiments 2 and 3) demonstrated that infants showed surprise when roles eventually switched (B acted on A). Thus, while encoding social interactions, infants fill in an abstract relational structure that marks the roles of agent and patient and that can be accessed via different event scenes and properties of the event participants (body postures or positioning). This mental process implies a combinatorial capacity that lays the foundations for productivity and compositionality in language and cognition.
Affiliation(s)
- Liuba Papeo
- Institut des Sciences Cognitives Marc Jeannerod - UMR5229, CNRS & Université Claude Bernard Lyon 1, 67 Boulevard Pinel, 69675 Bron, France
- Sofie Vettori
- Institut des Sciences Cognitives Marc Jeannerod - UMR5229, CNRS & Université Claude Bernard Lyon 1, 67 Boulevard Pinel, 69675 Bron, France
- Emilie Serraille
- Institut des Sciences Cognitives Marc Jeannerod - UMR5229, CNRS & Université Claude Bernard Lyon 1, 67 Boulevard Pinel, 69675 Bron, France
- Catherine Odin
- Institut des Sciences Cognitives Marc Jeannerod - UMR5229, CNRS & Université Claude Bernard Lyon 1, 67 Boulevard Pinel, 69675 Bron, France
- Farzad Rostami
- Institut des Sciences Cognitives Marc Jeannerod - UMR5229, CNRS & Université Claude Bernard Lyon 1, 67 Boulevard Pinel, 69675 Bron, France
- Jean-Rémy Hochmann
- Institut des Sciences Cognitives Marc Jeannerod - UMR5229, CNRS & Université Claude Bernard Lyon 1, 67 Boulevard Pinel, 69675 Bron, France
7. Hafri A. Cognitive development: the origins of structured thought in the mind. Curr Biol 2024;34:R856-R859. PMID: 39317155; DOI: 10.1016/j.cub.2024.07.096.
Abstract
Linguistic syntax lets us communicate complex, structured thoughts, like whether a dog chased a man or vice versa. New work shows that seven-month-olds can entertain such structured thoughts even before acquiring their native language, revealing the origins of this sophisticated ability.
Affiliation(s)
- Alon Hafri
- Department of Linguistics and Cognitive Science, University of Delaware, Newark, DE 19716, USA
8. Ünal E, Wilson F, Trueswell J, Papafragou A. Asymmetries in encoding event roles: evidence from language and cognition. Cognition 2024;250:105868. PMID: 38959638; PMCID: PMC11358469; DOI: 10.1016/j.cognition.2024.105868.
Abstract
It has long been hypothesized that the linguistic structure of events, including event participants and their relative prominence, draws on the non-linguistic nature of events and the roles that these events license. However, the precise relation between the prominence of event participants in language and cognition has not been tested experimentally in a systematic way. Here we address this gap. In four experiments, we investigate the relative prominence of (animate) Agents, Patients, Goals and Instruments in the linguistic encoding of complex events and the prominence of these event roles in cognition as measured by visual search and change blindness tasks. The relative prominence of these event roles was largely similar-though not identical-across linguistic and non-linguistic measures. Across linguistic and non-linguistic tasks, Patients were more salient than Goals, which were more salient than Instruments. (Animate) Agents were more salient than Patients in linguistic descriptions and visual search; however, this asymmetrical pattern did not emerge in change detection. Overall, our results reveal homologies between the linguistic and non-linguistic prominence of individual event participants, thereby lending support to the claim that the linguistic structure of events builds on underlying conceptual event representations. We discuss implications of these findings for linguistic theory and theories of event cognition.
Affiliation(s)
- Ercenur Ünal
- Department of Psychology, Ozyegin University, Istanbul, Turkey
- Frances Wilson
- Department of Psychological and Brain Sciences, University of Delaware, Newark, DE, USA
- John Trueswell
- Department of Psychology, University of Pennsylvania, Philadelphia, PA, USA
- Anna Papafragou
- Department of Linguistics, University of Pennsylvania, Philadelphia, PA, USA
9. Su X, Swallow KM. People can reliably detect action changes and goal changes during naturalistic perception. Mem Cognit 2024;52:1093-1111. PMID: 38315292; DOI: 10.3758/s13421-024-01525-8.
Abstract
As a part of ongoing perception, the human cognitive system segments others' activities into discrete episodes (event segmentation). Although prior research has shown that this process is likely related to changes in an actor's actions and goals, it has not yet been determined whether untrained observers can reliably identify action and goal changes as naturalistic activities unfold, or whether the changes they identify are tied to visual features of the activity (e.g., the beginnings and ends of object interactions). This study addressed these questions by examining untrained participants' identification of action changes, goal changes, and event boundaries while watching videos of everyday activities that were presented in both first-person and third-person perspectives. We found that untrained observers can identify goal changes and action changes consistently, and these changes are not explained by visual change and the onsets or offsets of contact with objects. Moreover, the action and goal changes identified by untrained observers were associated with event boundaries, even after accounting for objective visual features of the videos. These findings suggest that people can identify action and goal changes consistently and with high agreement, that they do so by using sensory information flexibly, and that the action and goal changes they identify may contribute to event segmentation.
Affiliation(s)
- Xing Su
- Department of Psychological and Brain Sciences, Washington University in Saint Louis, Saint Louis, MO, USA
- Khena M Swallow
- Department of Psychology and Cognitive Science Program, Cornell University, 211 Uris Hall, Ithaca, NY 14853, USA
10. Hafri A, Bonner MF, Landau B, Firestone C. A phone in a basket looks like a knife in a cup: role-filler independence in visual processing. Open Mind (Camb) 2024;8:766-794. PMID: 38957507; PMCID: PMC11219067; DOI: 10.1162/opmi_a_00146.
Abstract
When a piece of fruit is in a bowl, and the bowl is on a table, we appreciate not only the individual objects and their features, but also the relations containment and support, which abstract away from the particular objects involved. Independent representation of roles (e.g., containers vs. supporters) and "fillers" of those roles (e.g., bowls vs. cups, tables vs. chairs) is a core principle of language and higher-level reasoning. But does such role-filler independence also arise in automatic visual processing? Here, we show that it does, by exploring a surprising error that such independence can produce. In four experiments, participants saw a stream of images containing different objects arranged in force-dynamic relations-e.g., a phone contained in a basket, a marker resting on a garbage can, or a knife sitting in a cup. Participants had to respond to a single target image (e.g., a phone in a basket) within a stream of distractors presented under time constraints. Surprisingly, even though participants completed this task quickly and accurately, they false-alarmed more often to images matching the target's relational category than to those that did not-even when those images involved completely different objects. In other words, participants searching for a phone in a basket were more likely to mistakenly respond to a knife in a cup than to a marker on a garbage can. Follow-up experiments ruled out strategic responses and also controlled for various confounding image features. We suggest that visual processing represents relations abstractly, in ways that separate roles from fillers.
Affiliation(s)
- Alon Hafri
- Department of Linguistics and Cognitive Science, University of Delaware
- Department of Cognitive Science, Johns Hopkins University
- Department of Psychological and Brain Sciences, Johns Hopkins University
- Barbara Landau
- Department of Cognitive Science, Johns Hopkins University
- Chaz Firestone
- Department of Cognitive Science, Johns Hopkins University
- Department of Psychological and Brain Sciences, Johns Hopkins University
11. Huber E, Sauppe S, Isasi-Isasmendi A, Bornkessel-Schlesewsky I, Merlo P, Bickel B. Surprisal from language models can predict ERPs in processing predicate-argument structures only if enriched by an agent preference principle. Neurobiol Lang (Camb) 2024;5:167-200. PMID: 38645615; PMCID: PMC11025647; DOI: 10.1162/nol_a_00121.
Abstract
Language models based on artificial neural networks increasingly capture key aspects of how humans process sentences. Most notably, model-based surprisals predict event-related potentials such as N400 amplitudes during parsing. Assuming that these models represent realistic estimates of human linguistic experience, their success in modeling language processing raises the possibility that the human processing system relies on no other principles than the general architecture of language models and on sufficient linguistic input. Here, we test this hypothesis on N400 effects observed during the processing of verb-final sentences in German, Basque, and Hindi. By stacking Bayesian generalised additive models, we show that, in each language, N400 amplitudes and topographies in the region of the verb are best predicted when model-based surprisals are complemented by an Agent Preference principle that transiently interprets initial role-ambiguous noun phrases as agents, leading to reanalysis when this interpretation fails. Our findings demonstrate the need for this principle independently of usage frequencies and structural differences between languages. The principle has an unequal force, however. Compared to surprisal, its effect is weakest in German, stronger in Hindi, and still stronger in Basque. This gradient is correlated with the extent to which grammars allow unmarked NPs to be patients, a structural feature that boosts reanalysis effects. We conclude that language models gain more neurobiological plausibility by incorporating an Agent Preference. Conversely, theories of human processing profit from incorporating surprisal estimates in addition to principles like the Agent Preference, which arguably have distinct evolutionary roots.
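As background to the surprisal measure this abstract builds on: a word's surprisal is its negative log probability given the preceding context, surprisal(w) = -log2 P(w | context), so less expected continuations yield larger values. A minimal sketch using a hand-built lookup table in place of a real language model; the words and probabilities are invented for illustration:

```python
import math

# Toy next-word distributions (invented; a study like this one would
# obtain these probabilities from a trained neural language model).
next_word_probs = {
    ("the",): {"dog": 0.4, "idea": 0.1},
    ("dog",): {"ate": 0.3, "slept": 0.2},
}

def surprisal(context, word, probs):
    """Surprisal in bits: -log2 P(word | context)."""
    p = probs.get(context, {}).get(word, 1e-10)  # small floor for unseen words
    return -math.log2(p)

# The higher-probability continuation carries lower surprisal.
print(surprisal(("the",), "dog", next_word_probs))   # -log2(0.4), about 1.32 bits
print(surprisal(("the",), "idea", next_word_probs))  # -log2(0.1), about 3.32 bits
```

The study's point is that surprisal values of this kind, on their own, underpredict N400 effects at the verb unless the model is supplemented with an agent-first interpretation bias.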
Affiliation(s)
- Eva Huber
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland
- Sebastian Sauppe
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Arrate Isasi-Isasmendi
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland
- Ina Bornkessel-Schlesewsky
- Cognitive Neuroscience Laboratory, Australian Research Centre for Interactive and Virtual Environments, University of South Australia, Adelaide, Australia
- Paola Merlo
- Department of Linguistics, University of Geneva, Geneva, Switzerland
- University Center for Computer Science, University of Geneva, Geneva, Switzerland
- Balthasar Bickel
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland
12. Yu X, Li J, Zhu H, Tian X, Lau E. Electrophysiological hallmarks for event relations and event roles in working memory. Front Neurosci 2024;17:1282869. PMID: 38328555; PMCID: PMC10847304; DOI: 10.3389/fnins.2023.1282869.
Abstract
The ability to maintain events (i.e., interactions between/among objects) in working memory is crucial for our everyday cognition, yet the format of this representation is poorly understood. The current ERP study was designed to answer two questions: How is maintaining events (e.g., the tiger hit the lion) neurally different from maintaining item coordinations (e.g., the tiger and the lion)? That is, how is the event relation (present in events but not coordinations) represented? And how is the agent, or initiator of the event encoded differently from the patient, or receiver of the event during maintenance? We used a novel picture-sentence match-across-delay approach in which the working memory representation was "pinged" during the delay, replicated across two ERP experiments with Chinese and English materials. We found that maintenance of events elicited a long-lasting late sustained difference in posterior-occipital electrodes relative to non-events. This effect resembled the negative slow wave reported in previous studies of working memory, suggesting that the maintenance of events in working memory may impose a higher cost compared to coordinations. Although we did not observe significant ERP differences associated with pinging the agent vs. the patient during the delay, we did find that the ping appeared to dampen the ongoing sustained difference, suggesting a shift from sustained activity to activity silent mechanisms. These results suggest a new method by which ERPs can be used to elucidate the format of neural representation for events in working memory.
Affiliation(s)
- Xinchi Yu
- Program of Neuroscience and Cognitive Science, University of Maryland, College Park, MD, United States
- Department of Linguistics, University of Maryland, College Park, MD, United States
- Jialu Li
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- Hao Zhu
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- Xing Tian
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- Ellen Lau
- Program of Neuroscience and Cognitive Science, University of Maryland, College Park, MD, United States
- Department of Linguistics, University of Maryland, College Park, MD, United States
13. Brady N, Gough P, Leonard S, Allan P, McManus C, Foley T, O'Leary A, McGovern DP. Actions are characterized by 'canonical moments' in a sequence of movements. Cognition 2024;242:105652. PMID: 37866178; DOI: 10.1016/j.cognition.2023.105652.
Abstract
Understanding what others are doing is an essential aspect of social cognition that depends on our ability to quickly recognize and categorize their actions. To effectively study action recognition we need to understand how actions are bounded, where they start and where they end. Here we borrow a conceptual approach - the notion of 'canonicality' - introduced by Palmer and colleagues in their study of object recognition and apply it to the study of action recognition. Using a set of 50 video clips sourced from stock photography sites, we show that many everyday actions - transitive and intransitive, social and non-social, communicative - are characterized by 'canonical moments' in a sequence of movements that are agreed by participants to 'best represent' a named action, as indicated in a forced choice (Exp 1, n = 142) and a free choice (Exp 2, n = 125) paradigm. In Exp 3 (n = 102) we confirm that canonical moments from action sequences are more readily named as depicting specific actions and, mirroring research in object recognition, that such canonical moments are privileged in memory (Exp 4, n = 95). We suggest that 'canonical moments', being those that convey maximal information about human actions, are integral to the representation of human action.
Affiliation(s)
- Nuala Brady
- School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Patricia Gough
- School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Sophie Leonard
- School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Paul Allan
- School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Caoimhe McManus
- School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Tomas Foley
- School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- Aoife O'Leary
- School of Psychology, University College Dublin, Belfield, Dublin 4, Ireland
- David P McGovern
- School of Psychology, Dublin City University, Glasnevin Campus, Dublin 9, Ireland
14
Croom S, Zhou H, Firestone C. Seeing and understanding epistemic actions. Proc Natl Acad Sci U S A 2023;120:e2303162120. PMID: 37983484. DOI: 10.1073/pnas.2303162120.
Abstract
Many actions have instrumental aims, in which we move our bodies to achieve a physical outcome in the environment. However, we also perform actions with epistemic aims, in which we move our bodies to acquire information and learn about the world. A large literature on action recognition investigates how observers represent and understand the former class of actions; but what about the latter class? Can one person tell, just by observing another person's movements, what they are trying to learn? Here, five experiments explore epistemic action understanding. We filmed volunteers playing a "physics game" consisting of two rounds: Players shook an opaque box and attempted to determine i) the number of objects hidden inside, or ii) the shape of the objects inside. Then, independent subjects watched these videos and were asked to determine which videos came from which round: Who was shaking for number and who was shaking for shape? Across several variations, observers successfully determined what an actor was trying to learn, based only on their actions (i.e., how they shook the box), even when the box's contents were identical across rounds. These results demonstrate that humans can infer epistemic intent from physical behaviors, adding a new dimension to research on action understanding.
Affiliation(s)
- Sholei Croom
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218
- Hanbei Zhou
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218
- Chaz Firestone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218
15
McMahon E, Bonner MF, Isik L. Hierarchical organization of social action features along the lateral visual pathway. Curr Biol 2023;33:5035-5047.e8. PMID: 37918399. PMCID: PMC10841461. DOI: 10.1016/j.cub.2023.10.015.
Abstract
Recent theoretical work has argued that in addition to the classical ventral (what) and dorsal (where/how) visual streams, there is a third visual stream on the lateral surface of the brain specialized for processing social information. Like visual representations in the ventral and dorsal streams, representations in the lateral stream are thought to be hierarchically organized. However, no prior studies have comprehensively investigated the organization of naturalistic, social visual content in the lateral stream. To address this question, we curated a naturalistic stimulus set of 250 3-s videos of two people engaged in everyday actions. Each clip was richly annotated for its low-level visual features, mid-level scene and object properties, visual social primitives (including the distance between people and the extent to which they were facing), and high-level information about social interactions and affective content. Using a condition-rich fMRI experiment and a within-subject encoding model approach, we found that low-level visual features are represented in early visual cortex (EVC) and middle temporal (MT) area, mid-level visual social features in extrastriate body area (EBA) and lateral occipital complex (LOC), and high-level social interaction information along the superior temporal sulcus (STS). Communicative interactions, in particular, explained unique variance in regions of the STS after accounting for variance explained by all other labeled features. Taken together, these results provide support for representation of increasingly abstract social visual content, consistent with hierarchical organization, along the lateral visual stream and suggest that recognizing communicative actions may be a key computational goal of the lateral visual pathway.
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Michael F Bonner
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Leyla Isik
- Department of Cognitive Science, Zanvyl Krieger School of Arts & Sciences, Johns Hopkins University, 237 Krieger Hall, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Department of Biomedical Engineering, Whiting School of Engineering, Johns Hopkins University, Suite 400 West, Wyman Park Building, 3400 N. Charles Street, Baltimore, MD 21218, USA
16
McMahon E, Isik L. Seeing social interactions. Trends Cogn Sci 2023;27:1165-1179. PMID: 37805385. PMCID: PMC10841760. DOI: 10.1016/j.tics.2023.09.001.
Abstract
Seeing the interactions between other people is a critical part of our everyday visual experience, but recognizing the social interactions of others is often considered outside the scope of vision and grouped with higher-level social cognition like theory of mind. Recent work, however, has revealed that recognition of social interactions is efficient and automatic, is well modeled by bottom-up computational algorithms, and occurs in visually-selective regions of the brain. We review recent evidence from these three methodologies (behavioral, computational, and neural) that converge to suggest the core of social interaction perception is visual. We propose a computational framework for how this process is carried out in the brain and offer directions for future interdisciplinary investigations of social perception.
Affiliation(s)
- Emalie McMahon
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
17
Prystauka Y, DeLuca V, Luque A, Voits T, Rothman J. Cognitive Neuroscience Perspectives on Language Acquisition and Processing. Brain Sci 2023;13:1613. PMID: 38137061. PMCID: PMC10741862. DOI: 10.3390/brainsci13121613.
Abstract
The earliest investigations of the neural implementation of language started with examining patients with various types of disorders and underlying brain damage [...].
Affiliation(s)
- Yanina Prystauka
- Department of Language and Culture, UiT the Arctic University of Norway, 9019 Tromsø, Norway
- Department of Foreign Languages and Translation, University of Agder, 4604 Kristiansand, Norway
- Vincent DeLuca
- Department of Language and Culture, UiT the Arctic University of Norway, 9019 Tromsø, Norway
- Alicia Luque
- Nebrija Research Center in Cognition, Nebrija University, 28043 Madrid, Spain
- Department of Applied Language Studies, Nebrija University, 28043 Madrid, Spain
- Toms Voits
- Department of Language and Culture, UiT the Arctic University of Norway, 9019 Tromsø, Norway
- Department of Psychology, University of Gothenburg, SE-40530 Gothenburg, Sweden
- Jason Rothman
- Department of Language and Culture, UiT the Arctic University of Norway, 9019 Tromsø, Norway
- Nebrija Research Center in Cognition, Nebrija University, 28043 Madrid, Spain
18
Xu Z, Chen H, Wang Y. Invisible social grouping facilitates the recognition of individual faces. Conscious Cogn 2023;113:103556. PMID: 37541010. DOI: 10.1016/j.concog.2023.103556.
Abstract
Emerging evidence suggests a specialized mechanism supporting perceptual grouping of social entities. However, the stage at which social grouping is processed is unclear. Through four experiments, here we showed that participants' recognition of a visible face was facilitated by the presence of a second facing (thus forming a social grouping) relative to a nonfacing face, even when the second face was invisible. Using a monocular/dichoptic paradigm, we further found that the social grouping facilitation effect occurred when the two faces were presented dichoptically to different eyes rather than monocularly to the same eye, suggesting that social grouping relies on binocular rather than monocular neural channels. The above effects were not found for inverted face dyads, thereby ruling out the contribution of nonsocial factors. Taken together, these findings support the unconscious influence of social grouping on visual perception and suggest an early origin of social grouping processing in the visual pathway.
Affiliation(s)
- Zhenjie Xu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China
- Hui Chen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China
- Yingying Wang
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou 310028, Zhejiang, China
19
Isasi-Isasmendi A, Andrews C, Flecken M, Laka I, Daum MM, Meyer M, Bickel B, Sauppe S. The Agent Preference in Visual Event Apprehension. Open Mind (Camb) 2023;7:240-282. PMID: 37416075. PMCID: PMC10320828. DOI: 10.1162/opmi_a_00083.
Abstract
A central aspect of human experience and communication is understanding events in terms of agent ("doer") and patient ("undergoer" of action) roles. These event roles are rooted in general cognition and prominently encoded in language, with agents appearing as more salient and preferred over patients. An unresolved question is whether this preference for agents already operates during apprehension, that is, the earliest stage of event processing, and if so, whether the effect persists across different animacy configurations and task demands. Here we contrast event apprehension in two tasks and two languages that encode agents differently: Basque, a language that explicitly case-marks agents ('ergative'), and Spanish, which does not mark agents. In two brief exposure experiments, native Basque and Spanish speakers saw pictures for only 300 ms, and subsequently described them or answered probe questions about them. We compared eye fixations and behavioral correlates of event role extraction with Bayesian regression. Agents received more attention and were recognized better across languages and tasks. At the same time, language and task demands affected the attention to agents. Our findings show that a general preference for agents exists in event apprehension, but it can be modulated by task and language demands.
Affiliation(s)
- Arrate Isasi-Isasmendi
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Caroline Andrews
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Monique Flecken
- Department of Linguistics, Amsterdam Centre for Language and Communication, University of Amsterdam, Amsterdam, The Netherlands
- Itziar Laka
- Department of Linguistics and Basque Studies, University of the Basque Country (UPV/EHU), Leioa, Spain
- Moritz M. Daum
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Jacobs Center for Productive Youth Development, University of Zurich, Zurich, Switzerland
- Martin Meyer
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Cognitive Psychology Unit, University of Klagenfurt, Klagenfurt, Austria
- Balthasar Bickel
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Sebastian Sauppe
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
20
Kueser JB, Horvath S, Borovsky A. Two pathways in vocabulary development: Large-scale differences in noun and verb semantic structure. Cogn Psychol 2023;143:101574. PMID: 37209501. PMCID: PMC10832511. DOI: 10.1016/j.cogpsych.2023.101574.
Abstract
In adults, nouns and verbs have varied and multilevel semantic interrelationships. In children, evidence suggests that nouns and verbs also have semantic interrelationships, though the timing of the emergence of these relationships and their precise impact on later noun and verb learning are not clear. In this work, we ask whether noun and verb semantic knowledge in 16-30-month-old children tends to be semantically isolated from one another or semantically interacting from the onset of vocabulary development. Early word learning patterns were quantified using network science. We measured the semantic network structure for nouns and verbs in 3,804 16-30-month-old children at several levels of granularity using a large, open dataset of vocabulary checklist data. In a cross-sectional approach in Experiment 1, early nouns and verbs exhibited stronger network relationships with other nouns and verbs than expected across multiple network levels. Using a longitudinal approach in Experiment 2, we examined patterns of normative vocabulary development over time. Initial noun and verb learning was supported by strong semantic connections to other nouns, whereas later-learned words exhibited strong connections to verbs. Overall, these two experiments suggest that nouns and verbs demonstrate early semantic interactions and that these interactions impact later word learning. Early verb and noun learning is affected by the emergence of noun and verb semantic networks during early lexical development.
21
Gómez-Vidal B, Arantzeta M, Laka JP, Laka I. Subjects are not all alike: Eye-tracking the agent preference in Spanish. PLoS One 2022;17:e0272211. PMID: 35921377. PMCID: PMC9348668. DOI: 10.1371/journal.pone.0272211.
Abstract
Experimental research on argument structure has reported mixed results regarding the processing of unaccusative and unergative predicates. Using eye tracking in the visual world paradigm, this study seeks to fill a gap in the literature by presenting new evidence of the processing distinction between agent and theme subjects. We considered two hypotheses. First, the Unaccusative Hypothesis states that unaccusative (theme) subjects involve a more complex syntactic representation than unergative (agent) subjects. It predicts a delayed reactivation of unaccusative subjects compared to unergatives after the presentation of the verb. Second, the Agent First Hypothesis states that the first ambiguous NP of a sentence will preferably be interpreted as an agent due to an attentional preference to agents over themes. It predicts a larger reactivation of agent subjects than themes. We monitored the time course of gaze fixations of 44 native speakers across a visual display while processing sentences with unaccusative, unergative and transitive verbs. One of the pictures in the visual display was semantically related to the sentential subject. We analyzed fixation patterns in three different time frames: the verb frame, the post-verb frame, and the global post-verbal frame. Results indicated that sentential subjects across the three conditions were significantly activated when participants heard the verb; this is compatible with observing a post-verbal reactivation effect. Time course and magnitude of the gaze-fixation patterns are fully compatible with the predictions made by the Agent First Hypothesis. Thus, we report new evidence for (a) a processing distinction between unaccusative and unergative predicates in sentence comprehension, and (b) an attentional preference towards agents over themes, reflected by a larger reactivation effect in agent subjects.
Affiliation(s)
- Beatriz Gómez-Vidal
- The Bilingual Mind Research Group, Department of Linguistics and Basque Studies, University of the Basque Country (UPV/EHU), Vitoria-Gasteiz, Basque Country, Spain
- Miren Arantzeta
- The Bilingual Mind Research Group, Department of Linguistics and Basque Studies, University of the Basque Country (UPV/EHU), Vitoria-Gasteiz, Basque Country, Spain
- Jon Paul Laka
- University of Deusto (DBS), Bilbao, Basque Country, Spain
- Itziar Laka
- The Bilingual Mind Research Group, Department of Linguistics and Basque Studies, University of the Basque Country (UPV/EHU), Vitoria-Gasteiz, Basque Country, Spain
22
Wilson VAD, Zuberbühler K, Bickel B. The evolutionary origins of syntax: Event cognition in nonhuman primates. Sci Adv 2022;8:eabn8464. PMID: 35731868. PMCID: PMC9216513. DOI: 10.1126/sciadv.abn8464.
Abstract
Languages tend to encode events from the perspective of agents, placing them first and in simpler forms than patients. This agent bias is mirrored by cognition: Agents are more quickly recognized than patients and generally attract more attention. This leads to the hypothesis that key aspects of language structure are fundamentally rooted in a cognition that decomposes events into agents, actions, and patients, privileging agents. Although this type of event representation is almost certainly universal across languages, it remains unclear whether the underlying cognition is uniquely human or more widespread in animals. Here, we review a range of evidence from primates and other animals, which suggests that agent-based event decomposition is phylogenetically older than humans. We propose a research program to test this hypothesis in great apes and human infants, with the goal to resolve one of the major questions in the evolution of language, the origins of syntax.
Affiliation(s)
- Vanessa A. D. Wilson
- Department of Comparative Cognition, Institute of Biology, University of Neuchâtel, Neuchâtel, Switzerland
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Klaus Zuberbühler
- Department of Comparative Cognition, Institute of Biology, University of Neuchâtel, Neuchâtel, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- School of Psychology and Neuroscience, University of St Andrews, St. Andrews, Scotland
- Balthasar Bickel
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
23
Rissman L, van Putten S, Majid A. Evidence for a Shared Instrument Prototype from English, Dutch, and German. Cogn Sci 2022;46:e13140. PMID: 35523145. PMCID: PMC9285710. DOI: 10.1111/cogs.13140.
Abstract
At conceptual and linguistic levels of cognition, events are said to be represented in terms of abstract categories, for example, the sentence Jackie cut the bagel with a knife encodes the categories Agent (i.e., Jackie) and Patient (i.e., the bagel). In this paper, we ask whether entities such as the knife are also represented in terms of such a category (often labeled "Instrument") and, if so, whether this category has a prototype structure. We hypothesized the Proto-instrument is a tool: a physical object manipulated by an intentional agent to effect a change in another individual or object. To test this, we asked speakers of English, Dutch, and German to complete an event description task and a sentence acceptability judgment task in which events were viewed with more or less prototypical instruments. We found broad similarities in how English, Dutch, and German partition the semantic space of instrumental events, suggesting there is a shared concept of the Instrument category. However, there was no evidence to support the specific hypothesis that tools are the core of the Instrument category. Instead, our results suggest the most prototypical Instrument is the direct extension of an intentional agent. This paper supports theoretical frameworks where thematic roles are analyzed in terms of prototypes and suggests new avenues of research on how instrumental category structure differs across linguistic and conceptual domains.
Affiliation(s)
- Lilia Rissman
- Department of Psychology, University of Wisconsin-Madison
- Asifa Majid
- Department of Experimental Psychology, University of Oxford
24
Affiliation(s)
- Ilenia Paparella
- Institut des Sciences Cognitives - Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon1, Lyon, France
- Liuba Papeo
- Institut des Sciences Cognitives - Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon1, Lyon, France
25
Zhuang T, Lingnau A. The characterization of actions at the superordinate, basic and subordinate level. Psychol Res 2021;86:1871-1891. PMID: 34907466. PMCID: PMC9363348. DOI: 10.1007/s00426-021-01624-0.
Abstract
Objects can be categorized at different levels of abstraction, ranging from the superordinate (e.g., fruit) and the basic (e.g., apple) to the subordinate level (e.g., golden delicious). The basic level is assumed to play a key role in categorization, e.g., in terms of the number of features used to describe these actions and the speed of processing. To which degree do these principles also apply to the categorization of observed actions? To address this question, we first selected a range of actions at the superordinate (e.g., locomotion), basic (e.g., to swim) and subordinate level (e.g., to swim breaststroke), using verbal material (Experiments 1-3). Experiments 4-6 aimed to determine the characteristics of these actions across the three taxonomic levels. Using a feature listing paradigm (Experiment 4), we determined the number of features that were provided by at least six out of twenty participants (common features), separately for the three different levels. In addition, we examined the number of shared (i.e., provided for more than one category) and distinct (i.e., provided for one category only) features. Participants produced the highest number of common features for actions at the basic level. Actions at the subordinate level shared more features with other actions at the same level than those at the superordinate level. Actions at the superordinate and basic level were described with more distinct features compared to those provided at the subordinate level. Using an auditory priming paradigm (Experiment 5), we observed that participants responded faster to action images preceded by a matching auditory cue corresponding to the basic and subordinate level, but not for superordinate level cues, suggesting that the basic level is the most abstract level at which verbal cues facilitate the processing of an upcoming action. Using a category verification task (Experiment 6), we found that participants were faster and more accurate to verify action categories (depicted as images) at the basic and subordinate level in comparison to the superordinate level. Together, in line with the object categorization literature, our results suggest that information about action categories is maximized at the basic level.
Affiliation(s)
- Tonghe Zhuang
- Chair of Cognitive Neuroscience, Faculty of Human Sciences, Institute of Psychology, University of Regensburg, Universitätsstrasse 31, 93053, Regensburg, Germany
- Angelika Lingnau
- Chair of Cognitive Neuroscience, Faculty of Human Sciences, Institute of Psychology, University of Regensburg, Universitätsstrasse 31, 93053, Regensburg, Germany
26
Kislinger L. Photographs of Actions: What Makes Them Special Cues to Social Perception. Brain Sci 2021;11:1382. PMID: 34827381. PMCID: PMC8615998. DOI: 10.3390/brainsci11111382.
Abstract
I have reviewed studies on neural responses to pictured actions in the action observation network (AON) and the cognitive functions of these responses. Based on this review, I have analyzed the specific representational characteristics of action photographs. There has been consensus that AON responses provide viewers with knowledge of observed or pictured actions, but there has been controversy about the properties of this knowledge. Is this knowledge causally provided by AON activities or is it dependent on conceptual processing? What elements of actions does it refer to, and how generalized or specific is it? The answers to these questions have come from studies that used transcranial magnetic stimulation (TMS) to stimulate motor or somatosensory cortices. In conjunction with electromyography (EMG), TMS allows researchers to examine changes of the excitability in the corticospinal tract and muscles of people viewing pictured actions. The timing of these changes and muscle specificity enable inferences to be drawn about the cognitive products of processing pictured actions in the AON. Based on a review of studies using TMS and other neuroscience methods, I have proposed a novel hypothetical account that describes the characteristics of action photographs that make them effective cues to social perception. This account includes predictions that can be tested experimentally.
27
Stern MC, Stover L, Guerra E, Martohardjono G. Syntactic and Semantic Influences on the Time Course of Relative Clause Processing: The Role of Language Dominance. Brain Sci 2021;11:989. PMID: 34439608. PMCID: PMC8391599. DOI: 10.3390/brainsci11080989.
Abstract
We conducted a visual world eye-tracking experiment with highly proficient Spanish-English bilingual adults to investigate the effects of relative language dominance, operationalized as a continuous, multidimensional variable, on the time course of relative clause processing in the first-learned language, Spanish. We found that participants exhibited two distinct processing preferences: a semantically driven preference to assign agency to referents of lexically animate noun phrases and a syntactically driven preference to interpret relative clauses as subject-extracted. Spanish dominance was found to exert a distinct influence on each of these preferences, gradiently attenuating the semantic preference while gradiently exaggerating the syntactic preference. While these results might be attributable to particular properties of Spanish and English, they also suggest a possible generalization that greater dominance in a language increases reliance on language-specific syntactic processing strategies while correspondingly decreasing reliance on more domain-general semantic processing strategies.
Collapse
Affiliation(s)
- Michael C. Stern
- Linguistics Department, Yale University, 370 Temple St, New Haven, CT 06511, USA
- LeeAnn Stover
- Linguistics Program, The Graduate Center, City University of New York, 365 Fifth Ave, New York, NY 10016, USA
- Ernesto Guerra
- Center for Advanced Research in Education, Institute of Education, Universidad de Chile, Periodista José Carrasco Tapia 75, Santiago de Chile 7550000, Chile
- Gita Martohardjono
- Linguistics Program, The Graduate Center, City University of New York, 365 Fifth Ave, New York, NY 10016, USA
28
Abstract
In order to understand ecologically meaningful social behaviors and their neural substrates in humans and other animals, researchers have been using a variety of social stimuli in the laboratory with a goal of extracting specific processes in real-life scenarios. However, certain stimuli may not be sufficiently effective at evoking typical social behaviors and neural responses. Here, we review empirical research employing different types of social stimuli by classifying them into five levels of naturalism. We describe the advantages and limitations while providing selected example studies for each level. We emphasize the important trade-off between experimental control and ecological validity across the five levels of naturalism. Taking advantage of newly emerging tools, such as real-time videos, virtual avatars, and wireless neural sampling techniques, researchers are now more than ever able to adopt social stimuli at a higher level of naturalism to better capture the dynamics and contingency of real-life social interaction.
Affiliation(s)
- Siqi Fan
- Department of Psychology, Yale University, New Haven, CT 06520, USA
- Olga Dal Monte
- Department of Psychology, Yale University, New Haven, CT 06520, USA
- Department of Psychology, University of Turin, Torino, Italy
- Steve W.C. Chang
- Department of Psychology, Yale University, New Haven, CT 06520, USA
- Department of Neuroscience, Yale University School of Medicine, New Haven, CT 06510, USA
- Kavli Institute for Neuroscience, Yale University School of Medicine, New Haven, CT 06510, USA
- Wu Tsai Institute, Yale University, New Haven, CT 06510, USA
29
Kislinger L, Kotrschal K. Hunters and Gatherers of Pictures: Why Photography Has Become a Human Universal. Front Psychol 2021; 12:654474. [PMID: 34168589] [PMCID: PMC8217823] [DOI: 10.3389/fpsyg.2021.654474]
Abstract
Photography is ubiquitous worldwide. We analyzed why people take, share, and use personal photographs, independent of their specific cultural background. These behaviors are still poorly understood, and experimental research on them is scarce. Smartphone technology and social media have accelerated the spread of photography but cannot explain it, as not all smartphone features are widely used just because they are available. We analyzed the properties of human nature that have made taking and using photographs functional behaviors, drawing on the four levels of analysis that Nikolaas Tinbergen proposed for explaining why animals behave in a particular way. Integrating findings from multiple disciplines, we developed a novel conceptual framework, the "Mental Utilization Hypothesis of Photography." It proposes that people adopt photography because it matches core human mental mechanisms, mainly from the social domain, and that people use photography as a cognitive, primarily social, coping strategy. Our framework comprises a range of testable predictions, provides a new theoretical basis for future empirical investigations into photography, and has practical implications. We conclude that photography has become a human universal, one grounded in context-sensitive mental predispositions and differentiated within its social and societal environment.
Affiliation(s)
- Kurt Kotrschal
- Department of Behavioral Biology and Konrad Lorenz Forschungsstelle, University of Vienna, Vienna, Austria
- Domestication Lab at the Konrad-Lorenz Institute of Ethology, Wolf Science Center, University of Veterinary Medicine, Ernstbrunn, Austria
30
Ünal E, Richards C, Trueswell JC, Papafragou A. Representing agents, patients, goals and instruments in causative events: A cross-linguistic investigation of early language and cognition. Dev Sci 2021; 24:e13116. [PMID: 33955664] [DOI: 10.1111/desc.13116]
Abstract
Although it is widely assumed that the linguistic description of events is based on a structured representation of event components at the perceptual/conceptual level, little empirical work has tested this assumption directly. Here, we test the connection between language and perception/cognition cross-linguistically, focusing on the relative salience of causative event components in language and cognition. We draw on evidence from preschoolers speaking English or Turkish. In a picture description task, Turkish-speaking 3-5-year-olds mentioned Agents less than their English-speaking peers (Turkish allows subject drop); furthermore, both language groups mentioned Patients more frequently than Goals, and Instruments less frequently than either Patients or Goals. In a change blindness task, both language groups were equally accurate at detecting changes to Agents (despite surface differences in Agent mentions). The remaining components also behaved similarly: both language groups were less accurate in detecting changes to Instruments than either Patients or Goals (even though Turkish-speaking preschoolers were less accurate overall than their English-speaking peers). To our knowledge, this is the first study offering evidence for a strong (even though not strict) homology between linguistic and conceptual event roles in young learners cross-linguistically.
Affiliation(s)
- Ercenur Ünal
- Department of Psychology, Ozyegin University, Istanbul, Turkey
- Department of Psychological and Brain Sciences, University of Delaware, Newark, Delaware, USA
- Catherine Richards
- Department of Psychological and Brain Sciences, University of Delaware, Newark, Delaware, USA
- John C Trueswell
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Anna Papafragou
- Department of Psychological and Brain Sciences, University of Delaware, Newark, Delaware, USA
- Department of Linguistics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
31
Bellot E, Abassi E, Papeo L. Moving Toward versus Away from Another: How Body Motion Direction Changes the Representation of Bodies and Actions in the Visual Cortex. Cereb Cortex 2021; 31:2670-2685. [PMID: 33401307] [DOI: 10.1093/cercor/bhaa382]
Abstract
Representing multiple agents and their mutual relations is a prerequisite to understand social events such as interactions. Using functional magnetic resonance imaging on human adults, we show that visual areas dedicated to body form and body motion perception contribute to processing social events, by holding the representation of multiple moving bodies and encoding the spatial relations between them. In particular, seeing animations of human bodies facing and moving toward (vs. away from) each other increased neural activity in the body-selective cortex [extrastriate body area (EBA)] and posterior superior temporal sulcus (pSTS) for biological motion perception. In those areas, representation of body postures and movements, as well as of the overall scene, was more accurate for facing body (vs. nonfacing body) stimuli. Effective connectivity analysis with dynamic causal modeling revealed increased coupling between EBA and pSTS during perception of facing body stimuli. The perceptual enhancement of multiple-body scenes featuring cues of interaction (i.e., face-to-face positioning, spatial proximity, and approaching signals) was supported by the participants' better performance in a recognition task with facing body versus nonfacing body stimuli. Thus, visuospatial cues of interaction in multiple-person scenarios affect the perceptual representation of body and body motion and, by promoting functional integration, streamline the process from body perception to action representation.
Affiliation(s)
- Emmanuelle Bellot
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France
- Etienne Abassi
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France
- Liuba Papeo
- Institut des Sciences Cognitives-Marc Jeannerod, UMR5229, Centre National de la Recherche Scientifique (CNRS) & Université Claude Bernard Lyon 1, 69675 Bron, France
32
Hafri A, Firestone C. The Perception of Relations. Trends Cogn Sci 2021; 25:475-492. [PMID: 33812770] [DOI: 10.1016/j.tics.2021.01.006]
Abstract
The world contains not only objects and features (red apples, glass bowls, wooden tables), but also relations holding between them (apples contained in bowls, bowls supported by tables). Representations of these relations are often developmentally precocious and linguistically privileged, but how does the mind extract them in the first place? Although relations themselves cast no light onto our eyes, a growing body of work suggests that even very sophisticated relations display key signatures of automatic visual processing. Across physical, eventive, and social domains, relations such as support, fit, cause, chase, and even socially interact are extracted rapidly, are impossible to ignore, and influence other perceptual processes. Sophisticated and structured relations are not only judged and understood, but also seen, revealing surprisingly rich content in visual perception itself.
Affiliation(s)
- Alon Hafri
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Chaz Firestone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA; Department of Philosophy, Johns Hopkins University, Baltimore, MD 21218, USA
33
Kuperberg GR. Tea With Milk? A Hierarchical Generative Framework of Sequential Event Comprehension. Top Cogn Sci 2021; 13:256-298. [PMID: 33025701] [PMCID: PMC7897219] [DOI: 10.1111/tops.12518]
Abstract
To make sense of the world around us, we must be able to segment a continual stream of sensory inputs into discrete events. In this review, I propose that in order to comprehend events, we engage hierarchical generative models that "reverse engineer" the intentions of other agents as they produce sequential action in real time. By generating probabilistic predictions for upcoming events, generative models ensure that we are able to keep up with the rapid pace at which perceptual inputs unfold. By tracking our certainty about other agents' goals and the magnitude of prediction errors at multiple temporal scales, generative models enable us to detect event boundaries by inferring when a goal has changed. Moreover, by adapting flexibly to the broader dynamics of the environment and our own comprehension goals, generative models allow us to optimally allocate limited resources. Finally, I argue that we use generative models not only to comprehend events but also to produce events (carry out goal-relevant sequential action) and to continually learn about new events from our surroundings. Taken together, this hierarchical generative framework provides new insights into how the human brain processes events so effortlessly while highlighting the fundamental links between event comprehension, production, and learning.
Affiliation(s)
- Gina R. Kuperberg
- Department of Psychology and Center for Cognitive Science, Tufts University
- Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
34
Do ML, Papafragou A, Trueswell J. Cognitive and pragmatic factors in language production: Evidence from source-goal motion events. Cognition 2020; 205:104447. [DOI: 10.1016/j.cognition.2020.104447]
35
Speaking for seeing: Sentence structure guides visual event apprehension. Cognition 2020; 206:104516. [PMID: 33228969] [DOI: 10.1016/j.cognition.2020.104516]
Abstract
Human experience and communication are centred on events, and event apprehension is a rapid process that draws on visual perception and the immediate categorization of event roles ("who does what to whom"). We demonstrate a role for syntactic structure in visual information uptake for event apprehension. An event structure foregrounding either the agent or the patient was activated during speaking, transiently modulating the apprehension of subsequently viewed, unrelated events. Speakers of Dutch described pictures with actives and passives (agent and patient foregrounding, respectively). First fixations on pictures of unrelated events, briefly presented (for 300 ms) immediately afterwards, were influenced by the active or passive structure of the previously produced sentence. Going beyond the study of how single words cue object perception, we show that sentence structure guides the viewpoint taken during rapid event apprehension.
37
Ji Y, Papafragou A. Is there an end in sight? Viewers' sensitivity to abstract event structure. Cognition 2020; 197:104197. [DOI: 10.1016/j.cognition.2020.104197]
38
Decroix J, Roger C, Kalénine S. Neural dynamics of grip and goal integration during the processing of others' actions with objects: An ERP study. Sci Rep 2020; 10:5065. [PMID: 32193497] [PMCID: PMC7081278] [DOI: 10.1038/s41598-020-61963-7]
Abstract
Recent behavioural evidence suggests that when processing others' actions, motor acts and goal-related information both contribute to action recognition. Yet the neuronal mechanisms underlying the dynamic integration of the two action dimensions remain unclear. This study aimed to elucidate the ERP components underlying the processing and integration of grip and goal-related information. The electrophysiological activity of 28 adults was recorded during the processing of object-directed action photographs (e.g., writing with a pencil) containing either grip violations (e.g., upright pencil grasped with atypical grip), goal violations (e.g., upside-down pencil grasped with typical grip), both grip and goal violations (e.g., upside-down pencil grasped with atypical grip), or no violations. Participants judged whether actions were overall typical or not according to the object's typical use. Brain activity was sensitive to the congruency between grip and goal information on the N400, reflecting the semantic integration of the two dimensions. On earlier components, brain activity was affected by grip and goal typicality independently. Critically, goal typicality but not grip typicality affected brain activity on the N300, supporting an earlier role of goal-related representations in action recognition. These findings provide new insights into the neural temporal dynamics of the integration of motor acts and goal-related information during the processing of others' actions.
Affiliation(s)
- Jérémy Decroix
- Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000, Lille, France
- Clémence Roger
- Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000, Lille, France
- Solène Kalénine
- Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, F-59000, Lille, France
39
Knott A, Takac M. Roles for Event Representations in Sensorimotor Experience, Memory Formation, and Language Processing. Top Cogn Sci 2020; 13:187-205. [DOI: 10.1111/tops.12497]
Affiliation(s)
- Martin Takac
- Centre for Cognitive Science, Comenius University
40
Pigeons process actor-action configurations more readily than bystander-action configurations. Learn Behav 2020; 48:41-52. [PMID: 32043271] [DOI: 10.3758/s13420-020-00416-7]
Abstract
Behavior requires an actor. Two experiments using complex conditional action discriminations examined whether pigeons privilege information related to the digital actor who is engaged in behavior. In Experiment 1, each of two video displays contained a digital model, one an actor engaged in one of two behaviors (Indian dance or martial arts) and one a neutrally posed bystander. To correctly classify the display, the pigeons needed to conditionally process the action in conjunction with distinctive physical features of the actor or the bystander. Four actor-conditional pigeons learned to correctly discriminate the actions based on the identity of the actors, whereas four bystander-conditional birds failed to learn. Experiment 2 established that this failure was not due to the latter group's inability to spatially integrate information across the distance between the two models. Potentially, the colocalization of the relevant model identity and the action was critical due to a fundamental configural or integral representation of these properties. These findings contribute to our understanding of the evolution of action recognition, the recognition of social behavior, and forms of observational learning by animals.
41
Abstract
Events make up much of our lived experience, and the perceptual mechanisms that represent events in experience have pervasive effects on action control, language use, and remembering. Event representations in both perception and memory have rich internal structure and connections to one another, and both are heavily informed by knowledge accumulated from previous experiences. Event perception and memory have been identified with specific computational and neural mechanisms, which show protracted development in childhood and are affected by language use, expertise, and brain disorders and injuries. Current theoretical approaches focus on the mechanisms by which events are segmented from ongoing experience, and emphasize the common coding of events for perception, action, and memory. Abetted by developments in eye-tracking, neuroimaging, and computer science, research on event perception and memory is moving from small-scale laboratory analogs to the complexity of events in the wild.
Affiliation(s)
- Jeffrey M Zacks
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, Missouri 63130, USA
42
Davis CP, Altmann GTM, Yee E. Situational systematicity: A role for schema in understanding the differences between abstract and concrete concepts. Cogn Neuropsychol 2020; 37:142-153. [DOI: 10.1080/02643294.2019.1710124]
Affiliation(s)
- Charles P. Davis
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT, USA
- Brain Imaging Research Center, University of Connecticut, Storrs, CT, USA
- Gerry T. M. Altmann
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT, USA
- Eiling Yee
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, CT, USA
43
The Representation of Two-Body Shapes in the Human Visual Cortex. J Neurosci 2019; 40:852-863. [PMID: 31801812] [DOI: 10.1523/jneurosci.1378-19.2019]
Abstract
Human social nature has shaped visual perception. A signature of the relationship between vision and sociality is a particular visual sensitivity to social entities such as faces and bodies. We asked whether human vision also exhibits a special sensitivity to spatial relations that reliably correlate with social relations. In general, interacting people are more often situated face-to-face than back-to-back. Using functional MRI and behavioral measures in female and male human participants, we show that visual sensitivity to social stimuli extends to images including two bodies facing toward (vs away from) each other. In particular, the inferior lateral occipital cortex, which is involved in visual-object perception, is organized such that the inferior portion encodes the number of bodies (one vs two) and the superior portion is selectively sensitive to the spatial relation between bodies (facing vs nonfacing). Moreover, functionally localized, body-selective visual cortex responded to facing bodies more strongly than identical, but nonfacing, bodies. In this area, multivariate pattern analysis revealed an accurate representation of body dyads with sharpening of the representation of single-body postures in facing dyads, which demonstrates an effect of visual context on the perceptual analysis of a body. Finally, the cost of body inversion (upside-down rotation) on body recognition, a behavioral signature of a specialized mechanism for body perception, was larger for facing versus nonfacing dyads. Thus, spatial relations between multiple bodies are encoded in regions for body perception and affect the way in which bodies are processed. SIGNIFICANCE STATEMENT: Human social nature has shaped visual perception. Here, we show that human vision is not only attuned to socially relevant entities, such as bodies, but also to socially relevant spatial relations between those entities. Body-selective regions of visual cortex respond more strongly to multiple bodies that appear to be interacting (i.e., face-to-face), relative to unrelated bodies, and more accurately represent single body postures in interacting scenarios. Moreover, recognition of facing bodies is particularly susceptible to perturbation by upside-down rotation, indicative of a particular visual sensitivity to the canonical appearance of facing bodies. This encoding of relations between multiple bodies in areas for body-shape recognition suggests that the visual context in which a body is encountered deeply affects its perceptual analysis.
44
Abstract
The status of thematic roles such as Agent and Patient in cognitive science is highly controversial: To some they are universal components of core knowledge, to others they are scholarly fictions without psychological reality. We address this debate by posing two critical questions: to what extent do humans represent events in terms of abstract role categories, and to what extent are these categories shaped by universal cognitive biases? We review a range of literature that contributes answers to these questions: psycholinguistic and event cognition experiments with adults, children, and infants; typological studies grounded in cross-linguistic data; and studies of emerging sign languages. We pose these questions for a variety of roles and find that the answers depend on the role. For Agents and Patients, there is strong evidence for abstract role categories and a universal bias to distinguish the two roles. For Goals and Recipients, we find clear evidence for abstraction but mixed evidence as to whether there is a bias to encode Goals and Recipients as part of one or two distinct categories. Finally, we discuss the Instrumental role and do not find clear evidence for either abstraction or universal biases to structure instrumental categories.
Affiliation(s)
- Lilia Rissman
- Center for Language Studies, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Asifa Majid
- Department of Psychology, University of York, York, UK
45
Ünal E, Ji Y, Papafragou A. From Event Representation to Linguistic Meaning. Top Cogn Sci 2019; 13:224-242. [DOI: 10.1111/tops.12475]
Affiliation(s)
- Yue Ji
- Department of Linguistics, University of Delaware
46
Loschky LC, Larson AM, Smith TJ, Magliano JP. The Scene Perception & Event Comprehension Theory (SPECT) Applied to Visual Narratives. Top Cogn Sci 2019; 12:311-351. [PMID: 31486277] [PMCID: PMC9328418] [DOI: 10.1111/tops.12455]
Abstract
Understanding how people comprehend visual narratives (including picture stories, comics, and film) requires the combination of traditionally separate theories that span the initial sensory and perceptual processing of complex visual scenes, the perception of events over time, and comprehension of narratives. Existing piecemeal approaches fail to capture the interplay between these levels of processing. Here, we propose the Scene Perception & Event Comprehension Theory (SPECT), as applied to visual narratives, which distinguishes between front‐end and back‐end cognitive processes. Front‐end processes occur during single eye fixations and are comprised of attentional selection and information extraction. Back‐end processes occur across multiple fixations and support the construction of event models, which reflect understanding of what is happening now in a narrative (stored in working memory) and over the course of the entire narrative (stored in long‐term episodic memory). We describe relationships between front‐ and back‐end processes, and medium‐specific differences that likely produce variation in front‐end and back‐end processes across media (e.g., picture stories vs. film). We describe several novel research questions derived from SPECT that we have explored. By addressing these questions, we provide greater insight into how attention, information extraction, and event model processes are dynamically coordinated to perceive and understand complex naturalistic visual events in narratives and the real world. Comprehension of visual narratives like comics, picture stories, and films involves both decoding the visual content and construing the meaningful events they represent. The Scene Perception & Event Comprehension Theory (SPECT) proposes a framework for understanding how a comprehender perceptually negotiates the surface of a visual representation and integrates its meaning into a growing mental model.
Affiliation(s)
- Tim J Smith
- Department of Psychological Sciences, Birkbeck, University of London
47
Cohn N, Engelen J, Schilperoord J. The grammar of emoji? Constraints on communicative pictorial sequencing. Cogn Res Princ Implic 2019; 4:33. [PMID: 31471857] [PMCID: PMC6717234] [DOI: 10.1186/s41235-019-0177-0]
Abstract
Emoji have become a prominent part of interactive digital communication. Here, we ask the questions: does a grammatical system govern the way people use emoji; and how do emoji interact with the grammar of written text? We conducted two experiments that asked participants to have a digital conversation with each other using only emoji (Experiment 1) or to substitute at least one emoji for a word in the sentences (Experiment 2). First, we found that the emoji-only utterances of participants remained at simplistic levels of patterning, primarily appearing as one-unit utterances (as formulaic expressions or responsive emotions) or as linear sequencing (for example, repeating the same emoji or providing an unordered list of semantically related emoji). Emoji playing grammatical roles (i.e., 'parts-of-speech') were minimal, and showed little consistency in 'word order'. Second, emoji were substituted more for nouns and adjectives than verbs, while also typically conveying nonredundant information to the sentences. These findings suggest that, while emoji may follow tendencies in their interactions with grammatical structure in multimodal text-emoji productions, they lack grammatical structure on their own.
Affiliation(s)
- Neil Cohn
- Department of Communication and Cognition, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands
- Jan Engelen
- Department of Communication and Cognition, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands
- Joost Schilperoord
- Department of Communication and Cognition, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands
|
48
|
Quadflieg S, Westmoreland K. Making Sense of Other People’s Encounters: Towards an Integrative Model of Relational Impression Formation. Journal of Nonverbal Behavior 2019. [DOI: 10.1007/s10919-019-00295-1] [Indexed: 12/14/2022]
|
49
|
Lupyan G, Winter B. Language is more abstract than you think, or, why aren't languages more iconic? Philos Trans R Soc Lond B Biol Sci 2018; 373:20170137. [PMID: 29915005] [PMCID: PMC6015821] [DOI: 10.1098/rstb.2017.0137] [Accepted: 03/09/2018] [Indexed: 01/29/2023]
Abstract
How abstract is language? We show that abstractness pervades every corner of language, going far beyond the usual examples of freedom and justice. In light of the ubiquity of abstract words, the need to understand where abstract meanings come from becomes ever more acute. We argue that the best source of knowledge about abstract meanings may be language itself. We then consider a seemingly unrelated question: why isn't language more iconic? Iconicity, a resemblance between the form of words and their meanings, can be immensely useful in language learning and communication. Languages could be much more iconic than they currently are. So why aren't they? We suggest that one reason is that iconicity is inimical to abstraction, because iconic forms are too connected to specific contexts and sensory depictions. Form-meaning arbitrariness may allow language to better convey abstract meanings. This article is part of the theme issue 'Varieties of abstract concepts: development, use and representation in the brain'.
Affiliation(s)
- Gary Lupyan, Department of Psychology, University of Wisconsin, Madison, WI 53706, USA
- Bodo Winter, Department of English Language and Applied Linguistics, University of Birmingham, Birmingham, UK
|
50
|
Zwitserlood P, Bölte J, Hofmann R, Meier CC, Dobel C. Seeing for speaking: Semantic and lexical information provided by briefly presented, naturalistic action scenes. PLoS One 2018; 13:e0194762. [PMID: 29652939] [PMCID: PMC5898714] [DOI: 10.1371/journal.pone.0194762] [Received: 03/24/2017] [Accepted: 03/09/2018] [Indexed: 11/19/2022]
Abstract
At the interface between scene perception and speech production, we investigated how rapidly action scenes can activate semantic and lexical information. Experiment 1 examined how complex action-scene primes, presented for 150 ms, 100 ms, or 50 ms and subsequently masked, influenced the speed with which immediately following action-picture targets were named. Prime and target actions were either identical, showed the same action with different actors and environments, or were unrelated. Relative to unrelated primes, identical and same-action primes facilitated naming of the target action, even when presented for only 50 ms. In Experiment 2, neutral primes were used to assess the direction of these effects: identical and same-action scenes induced facilitation, whereas unrelated actions induced interference. In Experiment 3, written verbs preceded by action primes served as naming targets. When target verbs denoted the prime action, clear facilitation was obtained. In contrast, interference was observed when target verbs were phonologically similar, but otherwise unrelated, to the names of the prime actions. This is clear evidence for word-form activation by masked action scenes. Masked action pictures thus provide conceptual information that is detailed enough to facilitate apprehension and naming of immediately following scenes; masked actions even activate their word-form information, as is evident when the targets are words. We thus show how language production can be primed with briefly flashed, masked action scenes, answering long-standing questions in scene processing.
Affiliation(s)
- Pienie Zwitserlood, Institute for Psychology and Otto-Creutzfeldt Center for Cognitive Neuroscience, University of Münster, Münster, Germany
- Jens Bölte, Institute for Psychology and Otto-Creutzfeldt Center for Cognitive Neuroscience, University of Münster, Münster, Germany
- Reinhild Hofmann, Clinic for Phoniatrics and Pediatric Audiology, University of Münster, Münster, Germany
- Christian Dobel, Department of Otorhinolaryngology, Medical Faculty, University of Jena, Jena, Germany
|