1
Adam M, Elsner B, Zmyj N. Perspective matters in goal-predictive gaze shifts during action observation: Results from 6-, 9-, and 12-month-olds and adults. J Exp Child Psychol 2025;249:106075. doi: 10.1016/j.jecp.2024.106075. PMID: 39305583.
Abstract
Research on goal-predictive gaze shifts in infancy has so far mostly focused on the effect of infants' experience with observed actions or the effect of agency cues displayed by the observed agent. However, the perspective from which an action is presented to the infants (egocentric vs. allocentric) has received little attention from researchers, despite the fact that the natural observation of one's own actions is always linked to an egocentric perspective, whereas the observation of others' actions is often linked to an allocentric perspective. The current study investigated the timing of 6-, 9-, and 12-month-olds' goal-predictive gaze behavior, as well as that of adults, during the observation of simple human grasping actions presented from either an egocentric or an allocentric perspective (within-participants design). The results showed that at 6 and 9 months of age, the infants predicted the action goal only when observing the action from the egocentric perspective. The 12-month-olds and adults, in contrast, predicted the action goal in both perspectives. The results are therefore in line with accounts proposing an advantage of egocentric over allocentric processing of social stimuli, at least early in development. This study is among the first to show such an egocentric bias already during the first year of life.
Affiliation(s)
- Maurits Adam
- Department of Psychology, University of Potsdam, 14476 Potsdam, Germany
- Birgit Elsner
- Department of Psychology, University of Potsdam, 14476 Potsdam, Germany
- Norbert Zmyj
- Institute of Psychology, TU Dortmund University, 44227 Dortmund, Germany
2
Theuer JK, Koch NN, Gumbsch C, Elsner B, Butz MV. Infants infer and predict coherent event interactions: Modeling cognitive development. PLoS One 2024;19:e0312532. doi: 10.1371/journal.pone.0312532. PMID: 39446862; PMCID: PMC11500850.
Abstract
Mental representations of the environment in infants are sparse and grow richer during their development. Anticipatory eye-fixation studies show that infants aged around 7 months start to predict the goal of an observed action, e.g., an object targeted by a reaching hand. Interestingly, goal-predictive gaze shifts occur at an earlier age when the hand subsequently manipulates an object and at a later age when an action is performed by an inanimate actor, e.g., a mechanical claw. We introduce CAPRI2 (Cognitive Action PRediction and Inference in Infants), a computational model that explains this development from a functional, algorithmic perspective. It is based on the theory that infants learn object files and events as they develop a physical reasoning system. In particular, CAPRI2 learns a generative event-predictive model, which it uses both to interpret sensory information and to infer goal-directed behavior. When observing object interactions, CAPRI2 (i) interprets the unfolding interactions in terms of event-segmented dynamics, (ii) maximizes the coherence of its event interpretations, updating its internal estimates, and (iii) chooses gaze behavior to minimize expected uncertainty. As a result, CAPRI2 mimics the developmental pathway of infants' goal-predictive gaze behavior. Our modeling work suggests that the involved event-predictive representations, longer-term generative model learning, and shorter-term retrospective and active inference principles constitute fundamental building blocks for the effective development of goal-predictive capacities.
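The gaze-selection principle summarized in this abstract (fixate wherever the expected remaining uncertainty about the action goal is smallest) can be illustrated with a toy sketch. This is not CAPRI2 itself; the candidate fixation locations and the posteriors assigned to them below are hypothetical, chosen only to show the entropy-minimization step:

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

def pick_fixation(goal_posteriors_by_fixation):
    """Choose the fixation target whose anticipated observation is expected
    to leave the least uncertainty about the action goal. The argument maps
    each candidate fixation location to the posterior over goals the observer
    expects to hold after looking there."""
    return min(goal_posteriors_by_fixation,
               key=lambda loc: entropy(goal_posteriors_by_fixation[loc]))

# Toy example: looking at the goal object is expected to disambiguate the
# action (low-entropy posterior); looking at the hand or background less so.
candidates = {
    "goal_object": [0.90, 0.05, 0.05],  # nearly certain after fixation
    "hand":        [0.50, 0.30, 0.20],  # still uncertain
    "background":  [0.34, 0.33, 0.33],  # uninformative
}
print(pick_fixation(candidates))  # prints: goal_object
```

Under this reading, anticipatory looks to the goal emerge simply because, once the generative model is good enough, the goal location is the most uncertainty-reducing place to look.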
Affiliation(s)
- Johanna K. Theuer
- Neuro-Cognitive Modeling, Department of Computer Science and Department of Psychology, University of Tübingen, Tübingen, Germany
- Nadine N. Koch
- Neuro-Cognitive Modeling, Department of Computer Science and Department of Psychology, University of Tübingen, Tübingen, Germany
- Christian Gumbsch
- Neuro-Cognitive Modeling, Department of Computer Science and Department of Psychology, University of Tübingen, Tübingen, Germany
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technical University Dresden, Dresden, Germany
- Birgit Elsner
- Developmental Psychology, Faculty of Humanities, University of Potsdam, Potsdam, Germany
- Martin V. Butz
- Neuro-Cognitive Modeling, Department of Computer Science and Department of Psychology, University of Tübingen, Tübingen, Germany
3
Lonardo L, Völter CJ, Lamm C, Huber L. Dogs rely on visual cues rather than on effector-specific movement representations to predict human action targets. Open Mind (Camb) 2023;7:588-607. doi: 10.1162/opmi_a_00096. PMID: 37840756; PMCID: PMC10575556.
Abstract
The ability to predict others' actions is one of the main pillars of social cognition. We investigated the processes underlying this ability by pitting motor representations of the observed movements against visual familiarity. In two pre-registered eye-tracking experiments, we measured the gaze arrival times of 16 dogs (Canis familiaris) who observed videos of a human or a conspecific executing the same goal-directed actions. On the first trial, when the human agent performed human-typical movements outside dogs' specific motor repertoire, dogs' gaze arrived at the target object anticipatorily (i.e., before the human touched the target object). When the agent was a conspecific, dogs' gaze arrived at the target object reactively (i.e., upon or after touch). When the human agent performed unusual movements more closely related to the dogs' motor possibilities (e.g., crawling instead of walking), dogs' gaze arrival times were intermediate between the other two conditions. In a replication experiment with slightly different stimuli, dogs' looks to the target object were neither significantly predictive nor reactive, irrespective of the agent. However, when including looks at the target object that were not preceded by looks to the agents, on average dogs looked anticipatorily, and sooner, at the human agent's action target than at the conspecific's. Looking-time and pupil-size analyses suggest that the dogs' attention was captured more by the dog agent. These results suggest that visual familiarity with the observed action and saliency of the agent had a stronger influence on the dogs' looking behaviour than effector-specific movement representations in anticipating action targets.
Affiliation(s)
- Lucrezia Lonardo
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine of Vienna, Medical University of Vienna and University of Vienna, Vienna, Austria
- Christoph J. Völter
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine of Vienna, Medical University of Vienna and University of Vienna, Vienna, Austria
- Claus Lamm
- Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Ludwig Huber
- Comparative Cognition, Messerli Research Institute, University of Veterinary Medicine of Vienna, Medical University of Vienna and University of Vienna, Vienna, Austria
4
Monroy C, Yu C, Houston D. Visual statistical learning in deaf and hearing infants and toddlers. Infancy 2022;27:720-735. doi: 10.1111/infa.12474. PMID: 35524478; PMCID: PMC9320792.
Abstract
Congenital hearing loss offers a unique opportunity to examine the role of sound in cognitive, social, and linguistic development. Children with hearing loss demonstrate atypical performance across a range of general cognitive skills. For instance, research has shown that deaf school-age children underperform on visual statistical learning (VSL) tasks. However, the evidence for these deficits has been challenged, with mixed findings emerging in recent years. Here, we used a novel approach to examine VSL in the action domain early in development. We compared learning between deaf and hearing infants prior to cochlear implantation (pre-CI), and between a group of deaf toddlers post-implantation (post-CI) and hearing toddlers. Findings revealed a significant difference between deaf and hearing infants pre-CI, with evidence for learning only in the hearing infants. However, there were no significant group differences between deaf and hearing toddlers post-CI, with both groups demonstrating learning. Further, VSL performance was positively correlated with language scores for the deaf toddlers, adding to the body of evidence suggesting that statistical learning is associated with language abilities. We discuss these findings in the context of previous evidence for group differences in VSL skills, and the role that auditory experiences play in infant cognitive development.
Affiliation(s)
- Claire Monroy
- School of Psychology, Keele University, Keele, Staffordshire, UK
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Chen Yu
- Department of Psychological and Brain Sciences, University of Texas at Austin, Austin, Texas, USA
- Derek Houston
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Nationwide Children's Hospital, Columbus, Ohio, USA
5
Tan E, Hamlin JK. Mechanisms of social evaluation in infancy: A preregistered exploration of infants' eye-movement and pupillary responses to prosocial and antisocial events. Infancy 2021;27:255-276. doi: 10.1111/infa.12447. PMID: 34873821.
Abstract
Past research shows infants selectively touch and look longer at characters who help versus hinder others (Social evaluation by preverbal infants. Nature, 2007, 450, 557; Three-month-olds show a negativity bias in their social evaluations. Developmental Science, 2010, 13, 923); however, the mechanisms underlying this tendency remain underspecified. The current preregistered experiment approaches this question by examining infants' real-time looking behaviors during prosocial and antisocial events, and exploring how individual infants' looking behaviors correlate with helper preferences. Using eye-tracking, 34 five-month-olds were familiarized with two blocks of the "hill" scenario originally developed by Kuhlmeier et al. (Attribution of dispositional states by 12-month-olds. Psychological Science, 2003, 14, 402), in which a climber tries unsuccessfully to reach the top of a hill and is alternately helped or hindered. Infants' visual preferences were assessed after each block of 6 helping and hindering events by proportional looking time to the helper versus hinderer in an image of the characters side by side. Results showed that, at the group level, infants looked longer at the helper after viewing 12 (but not after viewing 6) helping and hindering videos. Moreover, individual infants' average preference for the helper was predicted by their looking behaviors, particularly those suggestive of an understanding of the climber's unfulfilled goal. These results shed light on how infants process helping/hindering scenarios, and suggest that goal understanding is important for infants' helper preferences.
Affiliation(s)
- Enda Tan
- University of British Columbia, Vancouver, British Columbia, Canada
6
Adam M, Gumbsch C, Butz MV, Elsner B. The impact of action effects on infants' predictive gaze shifts for a non-human grasping action at 7, 11, and 18 months. Front Psychol 2021;12:695550. doi: 10.3389/fpsyg.2021.695550. PMID: 34447336; PMCID: PMC8382717.
Abstract
During the observation of goal-directed actions, infants usually predict the goal at an earlier age when the agent is familiar (e.g., human hand) compared to unfamiliar (e.g., mechanical claw). These findings implicate a crucial role of the developing agentive self for infants’ processing of others’ action goals. Recent theoretical accounts suggest that predictive gaze behavior relies on an interplay between infants’ agentive experience (top-down processes) and perceptual information about the agent and the action-event (bottom-up information; e.g., agency cues). The present study examined 7-, 11-, and 18-month-old infants’ predictive gaze behavior for a grasping action performed by an unfamiliar tool, depending on infants’ age-related action knowledge about tool-use and the display of the agency cue of producing a salient action effect. The results are in line with the notion of a systematic interplay between experience-based top-down processes and cue-based bottom-up information: Regardless of the salient action effect, predictive gaze shifts did not occur in the 7-month-olds (least experienced age group), but did occur in the 18-month-olds (most experienced age group). In the 11-month-olds, however, predictive gaze shifts occurred only when a salient action effect was presented. This sheds new light on how the developing agentive self, in interplay with available agency cues, supports infants’ action-goal prediction also for observed tool-use actions.
Affiliation(s)
- Maurits Adam
- Developmental Psychology, Department of Psychology, University of Potsdam, Potsdam, Germany
- Christian Gumbsch
- Neuro-Cognitive Modeling, Department of Computer Science and Department of Psychology, University of Tübingen, Tübingen, Germany
- Autonomous Learning Group, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Martin V Butz
- Neuro-Cognitive Modeling, Department of Computer Science and Department of Psychology, University of Tübingen, Tübingen, Germany
- Birgit Elsner
- Developmental Psychology, Department of Psychology, University of Potsdam, Potsdam, Germany
7
Gumbsch C, Adam M, Elsner B, Butz MV. Emergent goal-anticipatory gaze in infants via event-predictive learning and inference. Cogn Sci 2021;45:e13016. doi: 10.1111/cogs.13016. PMID: 34379329.
Abstract
From about 7 months of age onward, infants start to reliably fixate the goal of an observed action, such as a grasp, before the action is complete. The available research has identified a variety of factors that influence such goal-anticipatory gaze shifts, including experience with the shown action events and familiarity with the observed agents. However, the underlying cognitive processes are still heavily debated. We propose that our minds (i) tend to structure sensorimotor dynamics into probabilistic, generative event-predictive and event-boundary-predictive models and, meanwhile, (ii) choose actions with the objective of minimizing predicted uncertainty. We implement this proposition by means of event-predictive learning and active inference. The implemented learning mechanism induces an inductive, event-predictive bias, thus developing schematic encodings of experienced events and event boundaries. The implemented active inference principle chooses actions by aiming at minimizing expected future uncertainty. We train our system on multiple object-manipulation events. As a result, the generation of goal-anticipatory gaze shifts emerges while learning about object manipulations: the model starts fixating the inferred goal already at the start of an observed event after having sampled some experience with possible events and when a familiar agent (i.e., a hand) is involved. Meanwhile, the model keeps reactively tracking an unfamiliar agent (i.e., a mechanical claw) that is performing the same movement. We qualitatively compare these modeling results to behavioral data of infants and conclude that event-predictive learning combined with active inference may be critical for eliciting goal-anticipatory gaze behavior in infants.
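The event-boundary side of this account can be illustrated with a minimal, hypothetical sketch: declare an event boundary wherever the model's one-step prediction error exceeds a threshold. This surprise-based segmentation scheme is a common simplification, not the paper's implementation; the stream, the identity predictor, and the threshold below are made-up stand-ins:

```python
def segment_events(signal, predict, threshold=1.0):
    """Split a 1-D sensory stream into events: a new event boundary is
    declared at every time step where the one-step prediction error
    exceeds `threshold`. `predict` maps the previous observation to the
    model's prediction for the current one."""
    boundaries = []
    for t in range(1, len(signal)):
        error = abs(signal[t] - predict(signal[t - 1]))
        if error > threshold:
            boundaries.append(t)
    return boundaries

# Toy stream: a smooth reach (small steps) followed by a sudden contact
# event (large jump); the identity predictor expects no change, so only
# the jump produces a large prediction error.
stream = [0.0, 0.1, 0.2, 0.3, 5.0, 5.1, 5.2]
print(segment_events(stream, predict=lambda x: x))  # prints: [4]
```

In the full account, such learned boundaries delimit the event schemata over which the model then minimizes expected future uncertainty when choosing where to look.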
Affiliation(s)
- Christian Gumbsch
- Neuro-Cognitive Modeling Group, Department of Computer Science, University of Tübingen
- Autonomous Learning Group, Max Planck Institute for Intelligent Systems
- Martin V Butz
- Neuro-Cognitive Modeling Group, Department of Computer Science, University of Tübingen