1. Kliesch C. Postnatal dependency as the foundation of social learning in humans. Proc Biol Sci 2025; 292:20242818. PMID: 40237509; PMCID: PMC12001984; DOI: 10.1098/rspb.2024.2818.
Abstract
Humans have developed a sophisticated system of cultural transmission that allows for complex, non-genetically specified behaviours to be passed on from one generation to the next. This system relies on understanding others as social and communicative partners. Some theoretical accounts argue for the existence of domain-specific cognitive adaptations that prioritize social information, while others suggest that social learning is itself a product of cumulative cultural evolution based on domain-general learning mechanisms. The current paper explores the contribution of humans' unique ontogenetic environment to the emergence of social learning in infancy. It suggests that the prolonged period of post-natal dependency experienced by human infants contributes to the development of social learning. Because of motor limitations, infants learn to interact with and act through caregivers, establishing social learning abilities and skills that continue to develop as children become less dependent. According to this perspective, at least some key aspects of social development can be attributed to a developmental trajectory guided by infants' early motor development that radically alters how they experience the world.
2. Girault JB. The developing visual system: A building block on the path to autism. Dev Cogn Neurosci 2025; 73:101547. PMID: 40096794; PMCID: PMC11964655; DOI: 10.1016/j.dcn.2025.101547.
Abstract
Longitudinal neuroimaging studies conducted over the past decade provide evidence of atypical visual system development in the first years of life in autism spectrum disorder (ASD). Findings from genomic analyses, family studies, and postmortem investigations suggest that changes in the visual system in ASD are linked to genetic factors, making the visual system an important neural phenotype along the path from genes to behavior that deserves further study. This article reviews what is known about the developing visual system in ASD in the first years of life; it also explores the potential canalizing role that atypical visual system maturation may have in the emergence of ASD by placing findings in the context of developmental cascades involving brain development, attention, and social and cognitive development. Critical gaps in our understanding of human visual system development are discussed, and future research directions are proposed to improve our understanding of ASD as a complex neurodevelopmental disorder with origins in early brain development.
Affiliation(s)
- Jessica B Girault
- Carolina Institute for Developmental Disabilities, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
3. Leitzke BT, Cochrane A, Stein AG, DeLap GA, Green CS, Pollak SD. Children's and Adolescents' Use of Context in Judgments of Emotion Intensity. Affect Sci 2025; 6:117-127. PMID: 40094045; PMCID: PMC11904079; DOI: 10.1007/s42761-024-00279-5.
Abstract
The ability to infer others' emotions is important for social communication. This study examines three key aspects of emotion perception about which relatively little is currently known: (1) the evaluation of the intensity of portrayed emotion, (2) the role of contextual information in the perception of facial configurations, and (3) developmental differences in how children perceive co-occurring facial and contextual information. Two experiments examined developmental effects on the influence of congruent, incongruent, and neutral situational contexts on participants' reasoning about others' emotions, both with and without emotion labels. Experiment 1 revealed that participants interpreted others' emotions as being of higher intensity when facial movements were congruent with contextual information; this effect was greater for children than for adolescents and adults. Experiment 2 showed that without verbal emotion category labels, adults relied less on context to scale their intensity judgments, whereas children showed the opposite pattern: in the absence of labels, children relied more on contextual information than on facial information. Making accurate inferences about others' internal states is a complex learning task given high variability within and across individuals and contexts. These data suggest changes in attention to perceptual information as such learning occurs. Supplementary information: the online version contains supplementary material available at 10.1007/s42761-024-00279-5.
Affiliation(s)
- Brian T. Leitzke
- Department of Psychology, University of Wisconsin–Madison, Waisman Center, 1500 Highland Avenue, Room 399, Madison, WI 53705, USA
- Aaron Cochrane
- Department of Cognitive and Psychological Sciences, Brown University, 190 Thayer St, Providence, RI 02912, USA
- Andrea G. Stein
- Department of Psychology, University of Wisconsin–Madison, Waisman Center, 1500 Highland Avenue, Room 399, Madison, WI 53705, USA
- Gwyneth A. DeLap
- University of Rochester, 494 Meliora Hall, Rochester, NY 14627, USA
- C. Shawn Green
- Department of Psychology, University of Wisconsin–Madison, Waisman Center, 1500 Highland Avenue, Room 399, Madison, WI 53705, USA
- Seth D. Pollak
- Department of Psychology, University of Wisconsin–Madison, Waisman Center, 1500 Highland Avenue, Room 399, Madison, WI 53705, USA
4. O’Connell TP, Bonnen T, Friedman Y, Tewari A, Sitzmann V, Tenenbaum JB, Kanwisher N. Approximating Human-Level 3D Visual Inferences With Deep Neural Networks. Open Mind (Camb) 2025; 9:305-324. PMID: 40013087; PMCID: PMC11864798; DOI: 10.1162/opmi_a_00189.
Abstract
Humans make rich inferences about the geometry of the visual world. While deep neural networks (DNNs) achieve human-level performance on some psychophysical tasks (e.g., rapid classification of object or scene categories), they often fail in tasks requiring inferences about the underlying shape of objects or scenes. Here, we ask whether and how this gap in 3D shape representation between DNNs and humans can be closed. First, we define the problem space: after generating a stimulus set to evaluate 3D shape inferences using a match-to-sample task, we confirm that standard DNNs are unable to reach human performance. Next, we construct a set of candidate 3D-aware DNNs including 3D neural field (Light Field Network), autoencoder, and convolutional architectures. We investigate the role of the learning objective and dataset by training single-view (the model only sees one viewpoint of an object per training trial) and multi-view (the model is trained to associate multiple viewpoints of each object per training trial) versions of each architecture. When the same object categories appear in the model training and match-to-sample test sets, multi-view DNNs approach human-level performance for 3D shape matching, highlighting the importance of a learning objective that enforces a common representation across viewpoints of the same object. Furthermore, the 3D Light Field Network was the model most similar to humans across all tests, suggesting that building in 3D inductive biases increases human-model alignment. Finally, we explore the generalization performance of multi-view DNNs to out-of-distribution object categories not seen during training. Overall, our work shows that multi-view learning objectives for DNNs are necessary but not sufficient to make similar 3D shape inferences as humans and reveals limitations in capturing human-like shape inferences that may be inherent to DNN modeling approaches. We provide a methodology for understanding human 3D shape perception within a deep learning framework and highlight out-of-domain generalization as the next challenge for learning human-like 3D representations with DNNs.
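For readers who want the match-to-sample logic in concrete form, here is a minimal sketch of how such a test can be scored from model embeddings. It is an illustration under assumptions, not the authors' code: `embed` stands in for any image-to-feature-vector function (e.g., a DNN's penultimate layer), and the trial arrays are placeholders.

```python
# Minimal sketch of scoring a match-to-sample test from model embeddings.
# Assumptions: `embed` maps an image array to a 1-D feature vector; each trial
# supplies a sample view of an object plus a matching and a distractor view.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_to_sample_accuracy(trials, embed):
    """trials: iterable of (sample_img, match_img, distractor_img) arrays."""
    correct = 0
    for sample, match, distractor in trials:
        z_s, z_m, z_d = embed(sample), embed(match), embed(distractor)
        # The model 'chooses' whichever candidate is closer to the sample.
        correct += cosine(z_s, z_m) > cosine(z_s, z_d)
    return correct / len(trials)
```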
Affiliation(s)
- Tyler Bonnen
- EECS, University of California, Berkeley, Berkeley, CA, USA
5. Throm E, Gui A, Haartsen R, da Costa PF, Leech R, Mason L, Jones EJH. Combining Real-Time Neuroimaging With Machine Learning to Study Attention to Familiar Faces During Infancy: A Proof of Principle Study. Dev Sci 2025; 28:e13592. PMID: 39600130; PMCID: PMC11599787; DOI: 10.1111/desc.13592.
Abstract
Looking at caregivers' faces is important for early social development, and there is a concomitant increase in neural correlates of attention to familiar versus novel faces in the first 6 months. However, by 12 months of age brain responses may not differentiate between familiar and unfamiliar faces. Traditional group-based analyses do not examine whether these 'null' findings stem from a true lack of preference within individual infants, or whether groups of infants show individually strong but heterogeneous preferences for familiar versus unfamiliar faces. In a preregistered proof-of-principle study, we applied Neuroadaptive Bayesian Optimisation (NBO) to test how individual infants' neural responses vary across faces differing in familiarity. Sixty-one 5- to 12-month-olds viewed faces created by gradually morphing a familiar face (the primary caregiver) into an unfamiliar face. Electroencephalography (EEG) data from fronto-central channels were analysed in real time. After the presentation of each face, the Negative central (Nc) event-related potential (ERP) amplitude was calculated. A Bayesian Optimisation algorithm iteratively selected the next stimulus until it identified the stimulus eliciting the strongest Nc for that infant. Attrition (15%) was lower than in traditional studies (22%). Although there was no group-level Nc difference between familiar and unfamiliar faces, an optimum was predicted in 85% of the children, indicating individual-level attentional preferences. Traditional analyses based on infants' predicted optimum confirmed that NBO can identify subgroups based on brain activation. Optima were not related to age or social behaviour. NBO suggests that the lack of an overall familiar/unfamiliar-face attentional preference in middle infancy is explained by heterogeneous preferences, rather than by a lack of preference within individual infants.
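The closed-loop stimulus selection described above can be illustrated, in highly simplified form, as a generic Bayesian optimisation loop over a one-dimensional morph continuum. This is a sketch under assumptions rather than the published pipeline: it uses scikit-learn's Gaussian process regressor with an upper-confidence-bound rule, and `present_and_measure_nc` is a hypothetical placeholder for stimulus presentation plus real-time ERP extraction (assumed here to return Nc magnitude, so larger values mean a stronger response).

```python
# Illustrative sketch of a neuroadaptive Bayesian optimisation loop (not the
# authors' pipeline). A Gaussian process maps morph level (0 = caregiver,
# 1 = unfamiliar face) to Nc magnitude; an upper-confidence-bound rule picks
# the next stimulus to present.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def run_nbo(present_and_measure_nc, n_trials=20, grid=np.linspace(0, 1, 21)):
    X, y = [], []
    next_x = 0.5                                   # start mid-continuum
    gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(1.0))
    for _ in range(n_trials):
        nc = present_and_measure_nc(next_x)        # placeholder: show face, get Nc
        X.append([next_x]); y.append(nc)
        gp.fit(np.array(X), np.array(y))
        mu, sd = gp.predict(grid[:, None], return_std=True)
        next_x = grid[np.argmax(mu + 1.0 * sd)]    # explore/exploit trade-off
    mu, _ = gp.predict(grid[:, None], return_std=True)
    return grid[np.argmax(mu)]                     # predicted individual optimum
```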
Affiliation(s)
- Elena Throm
- Department of Psychological Science, Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
- Anna Gui
- Department of Psychological Science, Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
- Department of Psychology, University of Essex, Colchester, UK
- Rianne Haartsen
- Department of Psychological Science, Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
- Pedro F. da Costa
- Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Robert Leech
- Department of Neuroimaging, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Luke Mason
- Department of Psychological Science, Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
- Emily J. H. Jones
- Department of Psychological Science, Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
6. Yurkovic-Harding J, Bradshaw J. The Dynamics of Looking and Smiling Differ for Young Infants at Elevated Likelihood for ASD. Infancy 2025; 30:e12646. PMID: 39716809; PMCID: PMC12047390; DOI: 10.1111/infa.12646.
Abstract
Social smiling is the earliest-emerging social communication skill, appearing around 2 months of age. From 2 to 6 months, infants primarily smile in response to caregivers. After 6 months, infants coordinate social smiles with other social cues to initiate interactions with the caregiver. Social smiling is reduced in older infants with autism spectrum disorder (ASD) but has rarely been studied before 6 months of life. The current study therefore aimed to understand the component parts of infant social smiles, namely looks to the caregiver and smiles, during face-to-face interactions in 3- and 4-month-old infants at elevated likelihood (EL) and low likelihood (LL) for ASD. We found that EL and LL infants looked to their caregiver and smiled for similar amounts of time and at similar rates, suggesting that social smiling manifests similarly in both groups. A nuanced difference between groups emerged when considering the temporal dynamics of looking and smiling. Specifically, 3-month-old EL infants demonstrated extended looking to the caregiver after smile offset. These findings suggest that social smiling is largely typical in EL infants in early infancy, with subtle differences in temporal coupling. Future research is needed to understand the full magnitude of these differences and their implications for social development.
Affiliation(s)
- Julia Yurkovic-Harding
- Department of Psychology, University of South Carolina, Columbia, South Carolina, USA
- Carolina Autism and Neurodevelopment Research Center, University of South Carolina, Columbia, South Carolina, USA
- Jessica Bradshaw
- Department of Psychology, University of South Carolina, Columbia, South Carolina, USA
- Carolina Autism and Neurodevelopment Research Center, University of South Carolina, Columbia, South Carolina, USA
7. Gupta P, Dobs K. Human-like face pareidolia emerges in deep neural networks optimized for face and object recognition. PLoS Comput Biol 2025; 21:e1012751. PMID: 39869654; PMCID: PMC11790231; DOI: 10.1371/journal.pcbi.1012751.
Abstract
The human visual system possesses a remarkable ability to detect and process faces across diverse contexts, including the phenomenon of face pareidolia: seeing faces in inanimate objects. Despite extensive research, it remains unclear why the visual system employs such broadly tuned face detection capabilities. We hypothesized that face pareidolia results from the visual system's optimization for recognizing both faces and objects. To test this hypothesis, we used task-optimized deep convolutional neural networks (CNNs) and evaluated their alignment with human behavioral signatures and neural responses, measured via magnetoencephalography (MEG), related to pareidolia processing. Specifically, we trained CNNs on tasks involving combinations of face identification, face detection, object categorization, and object detection. Using representational similarity analysis, we found that CNNs that included object categorization in their training tasks represented pareidolia faces, real faces, and matched objects more similarly to neural responses than those that did not. Although these CNNs showed similar overall alignment with neural data, a closer examination of their internal representations revealed that specific training tasks had distinct effects on how pareidolia faces were represented across layers. Finally, interpretability methods revealed that only a CNN trained for both face identification and object categorization relied on face-like features, such as 'eyes', to classify pareidolia stimuli as faces, mirroring findings in human perception. Our results suggest that human-like face pareidolia may emerge from the visual system's optimization for face identification within the context of generalized object categorization.
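As a rough illustration of the representational similarity analysis logic mentioned above, the sketch below builds representational dissimilarity matrices (RDMs) for model and neural data over the same stimuli and correlates them. The input arrays and function names are illustrative assumptions, not the authors' data or code.

```python
# Rough sketch of representational similarity analysis (RSA): build condensed
# RDMs for model activations and neural (e.g., MEG) patterns over the same
# stimuli, then rank-correlate them. Inputs are placeholder
# (n_stimuli x n_features) arrays.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed RDM: 1 - Pearson correlation between all stimulus pairs."""
    return pdist(patterns, metric="correlation")

def rsa_score(model_acts, neural_patterns):
    """Spearman correlation between model and neural RDMs (higher = more similar)."""
    rho, _ = spearmanr(rdm(model_acts), rdm(neural_patterns))
    return rho
```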
Affiliation(s)
- Pranjul Gupta
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Katharina Dobs
- Department of Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Center for Mind, Brain, and Behavior, Universities of Marburg, Giessen and Darmstadt, Marburg, Germany
8. Humphreys KL, Garon-Bissonnette J, Hill KE, Bailes LG, Barnett W, Hare MM. Caregiving relationships are a cornerstone of developmental psychopathology. Dev Psychopathol 2024; 36:2218-2231. PMID: 38389283; PMCID: PMC11341779; DOI: 10.1017/s0954579424000300.
Abstract
The interdisciplinary field of developmental psychopathology has made great strides by including context into theoretical and empirical approaches to studying risk and resilience. Perhaps no context is more important to the developing child than their relationships with their caregivers (typically a child's parents), as caregivers are a key source of stimulation and nurturance to young children. Coupled with the high degree of brain plasticity in the earliest years of life, these caregiving relationships have an immense influence on shaping behavioral outcomes relevant to developmental psychopathology. In this article, we discuss three areas within caregiving relationships: (1) caregiver-child interactions in everyday, naturalistic settings; (2) caregivers' social cognitions about their child; and (3) caregivers' broader social and cultural context. For each area, we provide an overview of its significance to the field, identify existing knowledge gaps, and offer potential approaches for bridging these gaps to foster growth in the field. Lastly, given that one value of a scientific discipline is its ability to produce research useful in guiding real-world decisions related to policy and practice, we encourage developmental psychopathology to consider that a focus on caregiving, a modifiable target, supports this mission.
Affiliation(s)
- Kaylin E. Hill
- Vanderbilt University, Department of Psychology and Human Development
- Lauren G. Bailes
- Vanderbilt University, Department of Psychology and Human Development
- Whitney Barnett
- Vanderbilt University, Department of Psychology and Human Development
- Megan M. Hare
- Vanderbilt University, Department of Psychology and Human Development
9. Chow HM, Ma YK, Tseng CH. Social and communicative not a prerequisite: Preverbal infants learn an abstract rule only from congruent audiovisual dynamic pitch-height patterns. J Exp Child Psychol 2024; 248:106046. PMID: 39241321; DOI: 10.1016/j.jecp.2024.106046.
Abstract
Learning in the everyday environment often requires the flexible integration of relevant multisensory information. Previous research has demonstrated preverbal infants' capacity to extract an abstract rule from audiovisual temporal sequences matched in temporal synchrony. Interestingly, this capacity was recently reported to be modulated by crossmodal correspondence beyond spatiotemporal matching (e.g., consistent facial emotional expressions or articulatory mouth movements matched with sound). To investigate whether such modulatory influence applies to non-social and non-communicative stimuli, we conducted a critical test using audiovisual stimuli free of social information: visually upward (or downward) moving objects paired with a tone of congruent (ascending) or incongruent (descending) pitch. East Asian infants (8-10 months old) from a metropolitan area in Asia demonstrated successful abstract rule learning in the congruent audiovisual condition and weaker learning in the incongruent condition. This implies that preverbal infants use crossmodal dynamic pitch-height correspondence to integrate multisensory information before rule extraction. This result confirms that preverbal infants are ready to use non-social, non-communicative information to serve cognitive functions such as rule extraction in a multisensory context.
Affiliation(s)
- Hiu Mei Chow
- Department of Psychology, St. Thomas University, Fredericton, New Brunswick E3B 5G3, Canada
- Yuen Ki Ma
- Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong
- Chia-Huei Tseng
- Research Institute of Electrical Communication, Tohoku University, Sendai, Miyagi 980-0812, Japan
10. Kamensek T, Iarocci G, Oruc I. Atypical daily visual exposure to faces in adults with autism spectrum disorder. Curr Biol 2024; 34:4197-4208.e4. PMID: 39181127; DOI: 10.1016/j.cub.2024.07.094.
Abstract
Expert face processes are refined and tuned through a protracted development. Exposure statistics of the daily visual experience of neurotypical adults (the face diet) show substantial exposure to familiar faces. People with autism spectrum disorder (ASD) do not show the same expertise with faces as their non-autistic counterparts. This may be due to an impoverished visual experience with faces, according to experiential models of autism. Here, we present the first empirical report on the day-to-day visual experience of the faces of adults with ASD. Our results, based on over 360 h of first-person perspective footage of daily exposure, show striking qualitative and quantitative differences in the ASD face diet compared with those of neurotypical observers, which is best characterized by a pattern of reduced and atypical exposure to familiar faces in ASD. Specifically, duration of exposure to familiar faces was lower in ASD, and faces were viewed from farther distances and from viewpoints that were biased toward profile pose. Our results provide strong evidence that individuals with ASD may not be getting the experience needed for the typical development of expert face processes.
Affiliation(s)
- Todd Kamensek
- Department of Ophthalmology and Visual Sciences, University of British Columbia, 818 W 10th Avenue, Vancouver, BC V5Z 1M9, Canada
- Grace Iarocci
- Department of Psychology, Simon Fraser University, 8888 University Drive, Burnaby, BC V5A 1S6, Canada
- Ipek Oruc
- Department of Ophthalmology and Visual Sciences, University of British Columbia, 818 W 10th Avenue, Vancouver, BC V5Z 1M9, Canada
11. Potter CE, Lew-Williams C. Language development in children's natural environments: People, places, and things. Adv Child Dev Behav 2024; 67:200-235. PMID: 39260904; DOI: 10.1016/bs.acdb.2024.07.004.
Abstract
Our goal in this chapter is to describe young children's experiences with language by examining three domains-people, places, and things-that define and influence their language input. We highlight how features of each of these three domains could provide useful learning opportunities, as well as how differences in infants' and toddlers' experiences may affect their long-term language skills. However, we ultimately suggest that a full understanding of early environments must move beyond a focus on individual experiences and include the broader systems that shape young children's lives, including both tangible aspects of the environment, such as physical resources or locations, and more hidden factors, such as cultural considerations, community health, or economic constraints.
Affiliation(s)
- Christine E Potter
- Department of Psychology, University of Texas at El Paso, El Paso, TX, United States
- Casey Lew-Williams
- Department of Psychology, Princeton University, Princeton, NJ, United States
12. Tamis-LeMonda CS, Swirbul MS, Lai KH. Natural behavior in everyday settings. Adv Child Dev Behav 2024; 66:1-27. PMID: 39074918; DOI: 10.1016/bs.acdb.2024.04.001.
Abstract
Infant behaviors-walking, vocalizing, playing, interacting with others, and so on-offer an unparalleled window into learning and development. The study of infants requires strategic choices about what to observe, where, when, and how. We argue that loosening study constraints-by allowing infants and caregivers to do whatever they choose, wherever they choose, and with whatever materials they choose-promises to reveal a deep understanding of the everyday data on which learning builds. We show that observations of infants' natural behavior yield unique insights into the nature of visual exploration, object play, posture and locomotion, proximity to caregiver, and communication. Furthermore, we show that by situating the study of behavior in ecologically-valid settings, researchers can gain purchase on the contextual regularities that frame learning. We close by underscoring the value of studies at every point on the research continuum-from cleverly controlled lab-based tasks to fully natural observations in everyday environments. Acceleration in the science of behavior rests on leveraging expertise across disciplines, theoretical positions, and methodological approaches.
Affiliation(s)
- Mackenzie S Swirbul
- Department of Applied Psychology, New York University, New York, NY, United States
- Kristy H Lai
- Department of Applied Psychology, New York University, New York, NY, United States
13. Zhu L, Wang JZ, Lee W, Wyble B. Incorporating simulated spatial context information improves the effectiveness of contrastive learning models. Patterns (N Y) 2024; 5:100964. PMID: 38800363; PMCID: PMC11117056; DOI: 10.1016/j.patter.2024.100964.
Abstract
Visual learning often occurs in a specific context, where an agent acquires skills through exploration and tracking of its location in a consistent environment. The historical spatial context of the agent provides a similarity signal for self-supervised contrastive learning. We present a unique approach, termed environmental spatial similarity (ESS), that complements existing contrastive learning methods. Using images from simulated, photorealistic environments as an experimental setting, we demonstrate that ESS outperforms traditional instance discrimination approaches. Moreover, sampling additional data from the same environment substantially improves accuracy and provides new augmentations. ESS allows remarkable proficiency in room classification and spatial prediction tasks, especially in unfamiliar environments. This learning paradigm has the potential to enable rapid visual learning in agents operating in new environments with unique visual characteristics. Potentially transformative applications span from robotics to space exploration. Our proof of concept demonstrates improved efficiency over methods that rely on extensive, disconnected datasets.
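The general idea of treating views taken from nearby locations in the same environment as positive pairs can be sketched with a simplified contrastive loss. This is an illustrative approximation of the spatial-similarity signal, not the ESS implementation; the distance threshold, temperature, and input tensors are assumptions.

```python
# Simplified sketch of location-based contrastive learning: views captured at
# nearby positions in the same environment are treated as positives (a rough
# analogue of the environmental spatial similarity idea; not the ESS code).
# `z`: embeddings (N, D); `xy`: agent positions (N, 2).
import torch
import torch.nn.functional as F

def spatial_contrastive_loss(z, xy, radius=1.0, temperature=0.1):
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                    # (N, N) similarity logits
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    dist = torch.cdist(xy, xy)                       # pairwise spatial distances
    pos = (dist < radius) & ~eye                     # nearby views are positives
    logits = sim.masked_fill(eye, float('-inf'))     # drop self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    has_pos = pos.any(dim=1)                         # skip anchors with no positive
    pos_f = pos[has_pos].float()
    loss = -(log_prob[has_pos] * pos_f).sum(1) / pos_f.sum(1)
    return loss.mean()
```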
Affiliation(s)
- Lizhen Zhu
- Data Science and Artificial Intelligence Area, College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA, USA
- James Z. Wang
- Data Science and Artificial Intelligence Area, College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA, USA
- Human-Computer Interaction Area, College of Information Sciences and Technology, The Pennsylvania State University, University Park, PA, USA
- Department of Communication and Media, School of Social Sciences and Humanities, Loughborough University, Loughborough, Leicestershire, UK
- Wonseuk Lee
- Department of Computer Science and Engineering, The Pennsylvania State University, University Park, PA, USA
- Brad Wyble
- Department of Psychology, The Pennsylvania State University, University Park, PA, USA
14. Casey K, Potter CE, Lew-Williams C, Wojcik EH. Moving beyond "nouns in the lab": Using naturalistic data to understand why infants' first words include uh-oh and hi. Dev Psychol 2023; 59:2162-2173. PMID: 37824228; PMCID: PMC10872816; DOI: 10.1037/dev0001630.
Abstract
Why do infants learn some words earlier than others? Many theories of early word learning focus on explaining how infants map labels onto concrete objects. However, words that are more abstract than object nouns, such as uh-oh, hi, more, up, and all-gone, are typically among the first to appear in infants' vocabularies. We combined a behavioral experiment with naturalistic observational research to explore how infants learn and represent this understudied category of high-frequency, routine-based non-nouns, which we term "everyday words." In Study 1, we found that a conventional eye-tracking measure of comprehension was insufficient to capture U.S.-based English-learning 10- to 16-month-old infants' emerging understanding of everyday words. In Study 2, we analyzed the visual and social scenes surrounding caregivers' and infants' use of everyday words in a naturalistic video corpus. This ecologically motivated research revealed that everyday words rarely co-occurred with consistent visual referents, making their early learnability difficult to reconcile with dominant word learning theories. Our findings instead point to complex patterns in the types of situations associated with everyday words that could contribute to their early representation in infants' vocabularies. By leveraging both experimental and observational methods, this investigation underscores the value of using naturalistic data to broaden theories of early learning.
15. Kamensek T, Susilo T, Iarocci G, Oruc I. Are people with autism prosopagnosic? Autism Res 2023; 16:2100-2109. PMID: 37740564; DOI: 10.1002/aur.3030.
Abstract
Difficulties in various face processing tasks have been well documented in autism spectrum disorder (ASD). Several meta-analyses and numerous case-control studies have indicated that this population experiences a moderate degree of impairment, with a small percentage of studies failing to detect any impairment. One possible account of this mixed pattern of findings is heterogeneity in face processing abilities stemming from the presence of a subpopulation of prosopagnosic individuals with ASD alongside those with normal face processing skills. Samples randomly drawn from such a population, especially relatively small ones, would vary in the proportion of participants with prosopagnosia, resulting in a wide range of group-level deficits, from mild (or none) to severe, across studies. We test this prosopagnosic subpopulation hypothesis by examining three groups of participants: adults with ASD, adults with developmental prosopagnosia (DP), and a comparison group. Our results show that the prosopagnosic subpopulation hypothesis does not account for the face impairments in the broader autism spectrum. ASD observers show continuous and graded, rather than categorical, heterogeneity that spans a range of face processing skills, including many individuals with mild to moderate deficits, which is inconsistent with a prosopagnosic subtype account. We suggest that the pathogenic origins of face deficits for at least some individuals with ASD differ from those of DP.
Affiliation(s)
- Todd Kamensek
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
- Tirta Susilo
- School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Grace Iarocci
- Department of Psychology, Simon Fraser University, Burnaby, British Columbia, Canada
- Ipek Oruc
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
16. Birulés J, Goupil L, Josse J, Fort M. The Role of Talking Faces in Infant Language Learning: Mind the Gap between Screen-Based Settings and Real-Life Communicative Interactions. Brain Sci 2023; 13:1167. PMID: 37626523; PMCID: PMC10452843; DOI: 10.3390/brainsci13081167.
Abstract
Over the last few decades, developmental (psycho)linguists have demonstrated that perceiving talking faces audio-visually is important for early language acquisition. Using mostly well-controlled, screen-based laboratory approaches, this line of research has shown that paying attention to talking faces is likely to be one of the powerful strategies infants use to learn their native language(s). In this review, we combine evidence from these screen-based studies with another line of research that has studied how infants learn novel words and deploy their visual attention during naturalistic play. In our view, this is an important step toward developing an integrated account of how infants effectively extract audiovisual information from talkers' faces during early language learning. We identify three factors that have been understudied so far, despite the fact that they are likely to have an important impact on how infants deploy their attention (or not) toward talking faces during social interactions: social contingency, speaker characteristics, and task dependencies. Last, we propose ideas to address these issues in future research, with the aim of reducing the existing knowledge gap between current experimental studies and the many ways infants can and do effectively rely upon the audiovisual information extracted from talking faces in their real-life language environment.
Affiliation(s)
- Joan Birulés
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France
- Louise Goupil
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France
- Jérémie Josse
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France
- Mathilde Fort
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France
- Centre de Recherche en Neurosciences de Lyon, INSERM U1028-CNRS UMR 5292, Université Lyon 1, 69500 Bron, France
17. Huber LS, Geirhos R, Wichmann FA. The developmental trajectory of object recognition robustness: Children are like small adults but unlike big deep neural networks. J Vis 2023; 23(7):4. PMID: 37410494; PMCID: PMC10337805; DOI: 10.1167/jov.23.7.4.
Abstract
In laboratory object recognition tasks based on undistorted photographs, both adult humans and deep neural networks (DNNs) perform close to ceiling. Unlike adults, whose object recognition performance is robust against a wide range of image distortions, DNNs trained on standard ImageNet (1.3M images) perform poorly on distorted images. However, the last two years have seen impressive gains in DNN distortion robustness, predominantly achieved through ever-larger training datasets, orders of magnitude larger than ImageNet. Although this simple brute-force approach is very effective in achieving human-level robustness in DNNs, it raises the question of whether human robustness, too, is simply due to extensive experience with (distorted) visual input during childhood and beyond. Here we investigate this question by comparing the core object recognition performance of 146 children (aged 4-15 years) against adults and against DNNs. We find, first, that already 4- to 6-year-olds show remarkable robustness to image distortions and outperform DNNs trained on ImageNet. Second, we estimated the number of images children had been exposed to during their lifetime: compared with various DNNs, children's high robustness requires relatively little data. Third, when recognizing objects, children, like adults but unlike DNNs, rely heavily on shape rather than texture cues. Together, our results suggest that the remarkable robustness to distortions emerges early in the developmental trajectory of human object recognition and is unlikely to be the result of a mere accumulation of experience with distorted visual input. Even though current DNNs match human performance regarding robustness, they seem to rely on different and more data-hungry strategies to do so.
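The kind of robustness evaluation described above can be sketched as an accuracy sweep across increasing levels of a parametric distortion (here Gaussian pixel noise). The `model` and `loader` objects are assumed placeholders; this is not the authors' evaluation code.

```python
# Minimal sketch of a distortion-robustness sweep: measure top-1 accuracy of a
# classifier as Gaussian pixel noise of increasing strength is added to the
# images. `model` (image batch -> logits) and `loader` (yielding image/label
# tensor batches with pixel values in [0, 1]) are assumed placeholders.
import torch

@torch.no_grad()
def accuracy_under_noise(model, loader, noise_levels=(0.0, 0.05, 0.1, 0.2)):
    model.eval()
    results = {}
    for sigma in noise_levels:
        correct, total = 0, 0
        for images, labels in loader:
            noisy = (images + sigma * torch.randn_like(images)).clamp(0, 1)
            preds = model(noisy).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
        results[sigma] = correct / total
    return results   # accuracy per noise level, e.g. {0.0: 0.92, 0.2: 0.41}
```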
Affiliation(s)
- Lukas S Huber
- Department of Psychology, University of Bern, Bern, Switzerland
- Neural Information Processing Group, University of Tübingen, Tübingen, Germany
- https://orcid.org/0000-0002-7755-6926
- Robert Geirhos
- Neural Information Processing Group, University of Tübingen, Tübingen, Germany
- https://orcid.org/0000-0001-7698-3187
- Felix A Wichmann
- Neural Information Processing Group, University of Tübingen, Tübingen, Germany
- https://orcid.org/0000-0002-2592-634X
18. Yovel G, Grosbard I, Abudarham N. Deep learning models challenge the prevailing assumption that face-like effects for objects of expertise support domain-general mechanisms. Proc Biol Sci 2023; 290:20230093. PMID: 37161322; PMCID: PMC10170201; DOI: 10.1098/rspb.2023.0093.
Abstract
The question of whether task performance is best achieved by domain-specific or domain-general processing mechanisms is fundamental for both artificial and biological systems. This question has generated a fierce debate in the study of expert object recognition. Because humans are experts in face recognition, face-like neural and cognitive effects for objects of expertise have been considered support for domain-general mechanisms. However, the effects of domain, experience, and level of categorization are confounded in human studies, which may lead to erroneous inferences. To overcome these limitations, we trained deep learning algorithms on different domains (objects, faces, birds) and levels of categorization (basic, subordinate, individual), matched for amount of experience. Like humans, the models generated a larger inversion effect for faces than for objects. Importantly, a face-like inversion effect was found for individual-based categorization of non-faces (birds), but only in a network specialized for that domain. Thus, contrary to prevalent assumptions, face-like effects for objects of expertise do not support domain-general mechanisms but may originate from domain-specific mechanisms. More generally, we show how deep learning algorithms can be used to dissociate factors that are inherently confounded in the natural environment of biological organisms to test hypotheses about their isolated contributions to cognition and behaviour.
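The inversion effect reported above is typically quantified as the drop in recognition accuracy for inverted relative to upright stimuli, computed separately per domain. A minimal sketch follows; the `evaluate` function is a hypothetical placeholder, not part of the authors' code.

```python
# Sketch of quantifying a face-like inversion effect: the drop in recognition
# accuracy for inverted relative to upright stimuli, computed per domain
# (e.g., faces vs. birds vs. objects). `evaluate` is an assumed placeholder
# returning accuracy for a given model, domain, and orientation.
def inversion_effects(evaluate, model, domains=("faces", "birds", "objects")):
    effects = {}
    for domain in domains:
        upright = evaluate(model, domain, inverted=False)
        inverted = evaluate(model, domain, inverted=True)
        effects[domain] = upright - inverted   # larger value = stronger effect
    return effects
```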
Affiliation(s)
- Galit Yovel
- School of Psychological Sciences, Tel Aviv University, Tel Aviv 69987, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 69987, Israel
- Idan Grosbard
- School of Psychological Sciences, Tel Aviv University, Tel Aviv 69987, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 69987, Israel
- Naphtali Abudarham
- School of Psychological Sciences, Tel Aviv University, Tel Aviv 69987, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 69987, Israel
19. Knabe ML, Vlach HA. Not all is forgotten: Children's associative matrices for features of a word learning episode. Dev Sci 2023; 26:e13291. PMID: 35622834; DOI: 10.1111/desc.13291.
Abstract
Word learning studies traditionally examine the narrow link between words and objects, indifferent to the rich contextual information surrounding objects. This research examined whether children attend to this contextual information and construct an associative matrix of the words, objects, people, and environmental context during word learning. In Experiment 1, preschool-aged children (age: 3;2-5;11 years) were presented with novel words and objects in an animated storybook. Results revealed that children constructed associations beyond words and objects. Specifically, children attended to and had the strongest associations for features of the environmental context but failed to learn word-object associations. Experiment 2 demonstrated that children (age: 3;0-5;8 years) leveraged strong associations for the person and environmental context to support word-object mapping. This work demonstrates that children are especially sensitive to the word learning context and use associative matrices to support word mapping. Indeed, this research suggests associative matrices of the environment may be foundational for children's vocabulary development.
Affiliation(s)
- Melina L Knabe
- Department of Educational Psychology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Haley A Vlach
- Department of Educational Psychology, University of Wisconsin-Madison, Madison, Wisconsin, USA
20. Looking at faces in the wild. Sci Rep 2023; 13:783. PMID: 36646709; PMCID: PMC9842722; DOI: 10.1038/s41598-022-25268-1.
Abstract
Faces are key to everyday social interactions, but our understanding of social attention is based on experiments that present images of faces on computer screens. Advances in wearable eye-tracking devices now enable studies in unconstrained natural settings, but this approach has been limited by manual coding of fixations. Here we introduce an automatic 'dynamic region of interest' approach that registers eye fixations to the bodies and faces seen while a participant moves through the environment. We show that just 14% of fixations are to the faces of passersby, contrasting with prior screen-based studies that suggest faces automatically capture visual attention. We also demonstrate the potential for this new tool to help understand differences in individuals' social attention and the content of their perceptual exposure to other people. Together, this can form the basis of a new paradigm for studying social attention 'in the wild' that opens new avenues for theoretical, applied, and clinical research.
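The core bookkeeping of a dynamic region-of-interest analysis can be sketched as assigning each fixation to whichever detected face box contains it in the matching scene-camera frame. The data formats below are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a dynamic region-of-interest (ROI) analysis: each fixation is
# assigned to a face box detected in the matching scene-camera frame, and the
# proportion of face fixations is reported. `fixations` is assumed to be a list
# of (frame_index, x, y); `face_boxes[frame_index]` a list of (x1, y1, x2, y2).
def proportion_fixations_on_faces(fixations, face_boxes):
    def inside(x, y, box):
        x1, y1, x2, y2 = box
        return x1 <= x <= x2 and y1 <= y <= y2

    on_face = sum(
        any(inside(x, y, box) for box in face_boxes.get(frame, []))
        for frame, x, y in fixations
    )
    return on_face / len(fixations) if fixations else 0.0
```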
21. Tsurumi S, Kanazawa S, Yamaguchi MK, Kawahara JI. Development of upper visual field bias for faces in infants. Dev Sci 2023; 26:e13262. PMID: 35340093; PMCID: PMC10078383; DOI: 10.1111/desc.13262.
Abstract
The spatial location of the face and body seen in daily life influences human perception and recognition. This contextual effect of spatial location suggests that daily experience affects how humans visually process the face and body. However, it remains unclear whether this effect is caused by experience or by innate neural pathways. To address this issue, we examined the development of visual field asymmetry for face processing, in which faces in the upper visual field are processed preferentially compared with those in the lower visual field. We found that a developmental change occurred between 6 and 7 months. Older infants aged 7-8 months showed a bias toward faces in the upper visual field, similar to adults, but younger infants aged 5-6 months showed no such visual field bias. Furthermore, older infants preferentially memorized faces in the upper visual field, rather than in the lower visual field. These results suggest that visual field asymmetry is acquired through development and might be caused by the learning of spatial location in daily experience.
Affiliation(s)
- Shuma Tsurumi
- Department of Psychology, Chuo University, Hachioji, Tokyo, Japan; Japan Society for the Promotion of Science, Chiyoda-ku, Tokyo, Japan
- So Kanazawa
- Department of Psychology, Japan Women's University, Bunkyo-ku, Tokyo, Japan
22. Holden E, Buryn-Weitzel JC, Atim S, Biroch H, Donnellan E, Graham KE, Hoffman M, Jurua M, Knapper CV, Lahiff NJ, Marshall S, Paricia J, Tusiime F, Wilke C, Majid A, Slocombe KE. Maternal attitudes and behaviours differentially shape infant early life experience: A cross-cultural study. PLoS One 2022; 17:e0278378. PMID: 36542635; PMCID: PMC9770339; DOI: 10.1371/journal.pone.0278378.
Abstract
Early life environments afford infants a variety of learning opportunities, and caregivers play a fundamental role in shaping infant early life experience. Variation in maternal attitudes and parenting practices is likely to be greater between than within cultures. However, there is limited cross-cultural work characterising how early life environment differs across populations. We examined the early life environment of infants from two cultural contexts where attitudes towards parenting and infant development were expected to differ: in a group of 53 mother-infant dyads in the UK and 44 mother-infant dyads in Uganda. Participants were studied longitudinally from when infants were 3- to 15-months-old. Questionnaire data revealed the Ugandan mothers had more relational attitudes towards parenting than the mothers from the UK, who had more autonomous parenting attitudes. Using questionnaires and observational methods, we examined whether infant development and experience aligned with maternal attitudes. We found the Ugandan infants experienced a more relational upbringing than the UK infants, with Ugandan infants receiving more distributed caregiving, more body contact with their mothers, and more proximity to mothers at night. Ugandan infants also showed earlier physical development compared to UK infants. Contrary to our expectations, however, Ugandan infants were not in closer proximity to their mothers during the day, did not have more people in proximity or more partners for social interaction compared to UK infants. In addition, when we examined attitudes towards specific behaviours, mothers' attitudes rarely predicted infant experience in related contexts. Taken together our findings highlight the importance of measuring behaviour, rather than extrapolating expected behaviour based on attitudes alone. We found infants' early life environment varies cross-culturally in many important ways and future research should investigate the consequences of these differences for later development.
Affiliation(s)
- Eve Holden
- Department of Psychology, University of York, York, United Kingdom
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, United Kingdom
- Santa Atim
- Budongo Conservation Field Station, Nyabyeya, Uganda
- Hellen Biroch
- Budongo Conservation Field Station, Nyabyeya, Uganda
- Ed Donnellan
- Department of Psychology, University of York, York, United Kingdom
- Department of Psychology, University College London, London, United Kingdom
- Kirsty E. Graham
- Department of Psychology, University of York, York, United Kingdom
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, United Kingdom
- Maggie Hoffman
- School of Human Evolution and Social Change and Institute of Human Origins, Arizona State University, Tempe, Arizona, United States of America
- Michael Jurua
- Budongo Conservation Field Station, Nyabyeya, Uganda
- Nicole J. Lahiff
- Department of Psychology, University of York, York, United Kingdom
- Sophie Marshall
- Department of Psychology, University of York, York, United Kingdom
- Claudia Wilke
- Department of Psychology, University of York, York, United Kingdom
- Asifa Majid
- Department of Psychology, University of York, York, United Kingdom
23. Cabral L, Zubiaurre-Elorza L, Wild CJ, Linke A, Cusack R. Anatomical correlates of category-selective visual regions have distinctive signatures of connectivity in neonates. Dev Cogn Neurosci 2022; 58:101179. PMID: 36521345; PMCID: PMC9768242; DOI: 10.1016/j.dcn.2022.101179.
Abstract
The ventral visual stream is shaped during development by innate proto-organization within the visual system, such as the strong input from the fovea to the fusiform face area. In adults, category-selective regions have distinct signatures of connectivity to brain regions beyond the visual system, likely reflecting cross-modal and motoric associations. We tested whether this long-range connectivity is part of the innate proto-organization, or whether it develops with postnatal experience, by using diffusion-weighted imaging to characterize the connectivity of anatomical correlates of category-selective regions in neonates (N = 445), 1- to 9-month-old infants (N = 11), and adults (N = 14). Using the HCP data, we identified face- and place-selective regions and a third intermediate region with a distinct profile of selectivity. Using linear classifiers, these regions were found to have distinctive connectivity at birth, both to other regions in the visual system and to those outside of it. The results support an extended proto-organization that includes long-range connectivity that shapes, and is shaped by, experience-dependent development.
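The linear-classification step described above can be sketched as cross-validated decoding of region identity from connectivity fingerprints. The arrays and scikit-learn pipeline below are illustrative assumptions, not the authors' analysis code.

```python
# Illustrative sketch of testing whether anatomical correlates of category-
# selective regions have distinctive connectivity: classify region identity
# (e.g., face- vs. place-correlate) from each subject's connectivity
# fingerprint with a cross-validated linear classifier. `X` (samples x
# connection features) and `y` (region labels) are assumed placeholder arrays.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def connectivity_decoding_accuracy(X, y, folds=5):
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=folds)   # chance is 1 / n_classes
    return scores.mean()
```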
Affiliation(s)
- Laura Cabral
- Department of Radiology, University of Pittsburgh, Pittsburgh 15224, PA, USA
- Leire Zubiaurre-Elorza
- Department of Psychology, Faculty of Health Sciences, University of Deusto, Bilbao 48007, Spain
- Conor J Wild
- Western Institute for Neuroscience, Western University, London, ON N6A 3K7, Canada; Department of Physiology and Pharmacology, Western University, London, ON N6A 3K7, Canada
- Annika Linke
- Brain Development Imaging Laboratories, San Diego State University, San Diego 92120, CA, USA
- Rhodri Cusack
- Trinity College Institute of Neuroscience, Trinity College Dublin, College Green, Dublin 2, Ireland
24. Belteki Z, van den Boomen C, Junge C. Face-to-face contact during infancy: How the development of gaze to faces feeds into infants' vocabulary outcomes. Front Psychol 2022; 13:997186. PMID: 36389540; PMCID: PMC9650530; DOI: 10.3389/fpsyg.2022.997186.
Abstract
Infants acquire their first words through interactions with social partners. In the first year of life, infants receive a high frequency of visual and auditory input from faces, making faces a potential strong social cue in facilitating word-to-world mappings. In this position paper, we review how and when infant gaze to faces is likely to support their subsequent vocabulary outcomes. We assess the relevance of infant gaze to faces selectively, in three domains: infant gaze to different features within a face (that is, eyes and mouth); then to faces (compared to objects); and finally to more socially relevant types of faces. We argue that infant gaze to faces could scaffold vocabulary construction, but its relevance may be impacted by the developmental level of the infant and the type of task with which they are presented. Gaze to faces proves relevant to vocabulary, as gazes to eyes could inform about the communicative nature of the situation or about the labeled object, while gazes to the mouth could improve word processing, all of which are key cues to highlighting word-to-world pairings. We also discover gaps in the literature regarding how infants' gazes to faces (versus objects) or to different types of faces relate to vocabulary outcomes. An important direction for future research will be to fill these gaps to better understand the social factors that influence infant vocabulary outcomes.
25.
Abstract
Eye contact is essential for human interactions. We investigated whether humans are able to avoid eye contact while navigating crowds. At a science festival, we fitted 62 participants with a wearable eye tracker and instructed them to walk a route. Half of the participants were further instructed to avoid eye contact. We report that humans can flexibly allocate their gaze while navigating crowds and avoid eye contact primarily by orienting their head and eyes towards the floor. We discuss implications for crowd navigation and gaze behavior. In addition, we address a number of issues encountered in such field studies with regard to data quality, control of the environment, and participant adherence to instructions. We stress that methodological innovation and scientific progress are strongly interrelated.
26
Mendoza JK, Fausey CM. Everyday Parameters for Episode-to-Episode Dynamics in the Daily Music of Infancy. Cogn Sci 2022; 46:e13178. [PMID: 35938844 PMCID: PMC9542518 DOI: 10.1111/cogs.13178]
Abstract
Experience-dependent change pervades early human development. Though trajectories of developmental change have been well charted in many domains, the episode-to-episode schedules of experiences on which they are hypothesized to depend have not. Here, we took up this issue in a domain known to be governed in part by early experiences: music. Using a corpus of longform audio recordings, we parameterized the daily schedules of music encountered by 35 infants ages 6-12 months. We discovered that everyday music episodes, as well as the interstices between episodes, typically persisted less than a minute, with most daily schedules also including some very extended episodes and interstices. We also discovered that infants encountered music episodes in a bursty rhythm, rather than a periodic or random rhythm, over the day. These findings join a suite of recent discoveries from everyday vision, motor, and language that expand our imaginations beyond artificial learning schedules and enable theorists to model the history-dependence of developmental process in ways that respect everyday sensory histories. Future theories about how infants build knowledge across multiple episodes can now be parameterized using these insights from infants' everyday lives.
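The "bursty rather than periodic or random" claim can be illustrated with a simple interval statistic. The sketch below computes a generic burstiness coefficient (Goh and Barabasi's B, equal to (sd - mean)/(sd + mean) of inter-event intervals) on three hypothetical onset schedules; it is an illustration of the concept, not necessarily the metric used in the cited paper.

```python
# Hedged illustration: one common way to quantify whether events (here, music
# episodes in a daylong recording) occur in a bursty vs. random vs. periodic
# rhythm. This is a generic metric, not necessarily the paper's analysis.
import numpy as np

def burstiness(onsets):
    """B = (sd - mean) / (sd + mean) of inter-event intervals.
    ~ +1 bursty, ~ 0 Poisson-like (random), ~ -1 periodic."""
    intervals = np.diff(np.sort(np.asarray(onsets, dtype=float)))
    mu, sigma = intervals.mean(), intervals.std()
    return (sigma - mu) / (sigma + mu)

rng = np.random.default_rng(1)
periodic = np.arange(0, 3600, 120)                       # an onset every 2 minutes
random_like = np.cumsum(rng.exponential(120, size=30))   # Poisson-like onsets
bursty = np.concatenate([start + rng.uniform(0, 60, 10)  # clustered bursts with long gaps
                         for start in (0, 1500, 3200)])

for name, onsets in [("periodic", periodic), ("random", random_like), ("bursty", bursty)]:
    print(f"{name:>8}: B = {burstiness(onsets):+.2f}")
```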
27
Infants' sensitivity to emotional expressions in actions: The contributions of parental expressivity and motor experience. Infant Behav Dev 2022; 68:101751. [PMID: 35914367 DOI: 10.1016/j.infbeh.2022.101751]
Abstract
Actions can convey information about the affective state of an actor. By the end of the first year, infants show sensitivity to such emotional information in actions. Here, we examined the mechanisms contributing to infants' developing sensitivity to emotional action kinematics. We hypothesized that this sensitivity might rely on two factors: a stable motor representation of the observed action to be able to detect deviations from how it would typically be performed and experience with emotional expressions. The sensitivity of 12- to 13-month-old infants to happy and angry emotional cues in a manual transport action was examined using facial EMG. Infants' own movements when performing an object transport task were assessed using optical motion capture. The infants' caregivers' emotional expressivity was measured using a questionnaire. Negative emotional expressivity of the primary caregiver was significantly related to infants' sensitivity to observed angry actions. There was no evidence for such an association with infants' own motor skill. Overall, our results show that infants' experience with emotions, measured as caregivers' emotional expressivity, may aid infants' discrimination of others' emotions expressed in action kinematics.
28
Damon F, Quinn PC, Méary D, Pascalis O. Asymmetrical responding to male versus female other-race categories in 9- to 12-month-old infants. Br J Psychol 2022; 114 Suppl 1:71-93. [PMID: 35808935 DOI: 10.1111/bjop.12582]
Abstract
Faces can be categorized along various dimensions including gender or race, an ability developing in infancy. Infant categorization studies have focused on facial attributes in isolation, but the interaction between these attributes remains poorly understood. Experiment 1 examined gender categorization of other-race faces in 9- and 12-month-old White infants. Nine- and 12-month-olds were familiarized with Asian male or female faces, and tested with a novel exemplar from the familiarized category paired with a novel exemplar from a novel category. Both age groups showed novel category preferences for novel Asian female faces after familiarization with Asian male faces, but showed no novel category preference for novel Asian male faces after familiarization with Asian female faces. This categorization asymmetry was not due to a spontaneous preference hindering novel category reaction (Experiment 2), and both age groups displayed difficulty discriminating among male, but not female, other-race faces (Experiment 3). These results indicate that category formation for male other-race faces is mediated by categorical perception. Overall, the findings suggest that even by 12 months of age, infants are not fully able to form gender category representations of other-race faces, responding categorically to male, but not female, other-race faces.
Affiliation(s)
- Fabrice Damon: Center for Taste, Smell & Feeding Behavior, Development of Olfactory Communication & Cognition Laboratory, Université de Bourgogne, CNRS, Inrae, Institut Agro Dijon, Université Bourgogne Franche-Comté, Dijon, France
- Paul C Quinn: Department of Psychological and Brain Sciences, University of Delaware, Newark, Delaware, USA
- David Méary: Univ. Grenoble Alpes, LPNC, Grenoble, France; LPNC, CNRS, Grenoble, France
- Olivier Pascalis: Univ. Grenoble Alpes, LPNC, Grenoble, France; LPNC, CNRS, Grenoble, France

29
Karmazyn-Raz H, Smith LB. Discourse with Few Words: Coherence Statistics, Parent-Infant Actions on Objects, and Object Names. Language Acquisition 2022; 30:211-229. [PMID: 37736139 PMCID: PMC10513098 DOI: 10.1080/10489223.2022.2054342]
Abstract
The data for early object name learning are often conceptualized as a problem of mapping heard names to referents. However, infants do not hear object names as discrete events but rather in extended interactions organized around goal-directed actions on objects. The present study examined the statistical structure of the nonlinguistic events that surround parent naming of objects. Parents and 12-month-old infants were left alone in a room for 10 minutes with 32 objects available for exploration. Parent and infant handling of objects and parent naming of objects were coded. The four measured statistics were from measures used in the study of coherent discourse: (1) a frequency distribution in which actions were frequently directed to a few objects and more rarely to other objects; (2) repeated returns to the high-frequency objects over the 10-minute play period; (3) clustered repetitions (continuity) of actions on objects; and (4) structured networks of transitions among objects in play that connected all the played-with objects. Parent naming was infrequent but related to the statistics of object-directed actions. The implications of the discourse-like stream of actions are discussed in terms of learning mechanisms that could support rapid learning of object names from relatively few name-object co-occurrences.
Affiliation(s)
- Linda B Smith: Indiana University, Bloomington, US; University of East Anglia, Norfolk, UK

30
Matthews CM, Mondloch CJ, Lewis-Dennis F, Laurence S. Children's ability to recognize their parent's face improves with age. J Exp Child Psychol 2022; 223:105480. [PMID: 35753197 DOI: 10.1016/j.jecp.2022.105480]
Abstract
Adults are experts at recognizing familiar faces across images that incorporate natural within-person variability in appearance (i.e., ambient images). Little is known about children's ability to do so. In the current study, we investigated whether 4- to 7-year-olds (n = 56) could recognize images of their own parent, a person to whom children have had abundant exposure in a variety of different contexts. Children were asked to identify images of their parent that were intermixed with images of other people. We included images of each parent taken both before and after their child was born to manipulate how close the images were to the child's own experience. When viewing before-birth images, 4- and 5-year-olds were less sensitive to identity than were older children; sensitivity did not differ when viewing images taken after the child was born. These findings suggest that with even the most familiar face, 4- and 5-year-olds have difficulty recognizing instances that go beyond their direct experience. We discuss two factors that may contribute to the prolonged development of familiar face recognition.
Affiliation(s)
- Sarah Laurence: Keele University, Keele, Staffordshire ST5 5BG, UK; The Open University, Milton Keynes MK7 6AA, UK

31
Infant sensitivity to age-based social categories in full-body displays. Infant Behav Dev 2022; 68:101726. [PMID: 35671651 DOI: 10.1016/j.infbeh.2022.101726]
Abstract
This study examined 3.5- and 6-month-old infants' visual preferences for individuals from different age groups: adults versus infants. Unlike previous studies that only studied faces, here we included bodies, which are as frequent as faces in our environment, and highly salient, and in consequence, may play a role in identifying social categories and driving social preferences. In particular, we studied three salient dimensions along which individuals of different ages differ: body length, body typology, and face typology. In Experiment 1, adult and infant stimuli were presented in real proportions, differing both in body length and face typology, and infants preferred the adult stimuli. Experiment 2 demonstrated that given identical adult stimuli, which differ only in body length, infants attended more to the longer stimuli. In Experiment 3, infant and adult stimuli were matched on body length with the infant stimuli having larger heads, and infants preferred the infant stimuli. Experiment 4 measured infant visual preference for infant or adult bodies in the absence of face information, and found that 4-month-olds attended more to the infant bodies. Experiment 5 measured infants' sensitivity to matching or mismatching faces and bodies based on age, and infants demonstrated a preference for the incongruent stimuli (i.e., adult head with an infant body). Altogether these studies show that while face typology and body size are main drivers of infant visual preference for adults, when body typology information is provided for bodies matched in size, infant preference shifts towards their peers. Thus, our results suggest that infants have early developing age-based body representations, and that body information shifts their pattern of visual behavior from a visual preference for adult faces, to a visual preference for full-body peers.
32
Dobs K, Martinez J, Kell AJE, Kanwisher N. Brain-like functional specialization emerges spontaneously in deep neural networks. Sci Adv 2022; 8:eabl8913. [PMID: 35294241 PMCID: PMC8926347 DOI: 10.1126/sciadv.abl8913]
Abstract
The human brain contains multiple regions with distinct, often highly specialized functions, from recognizing faces to understanding language to thinking about what others are thinking. However, it remains unclear why the cortex exhibits this high degree of functional specialization in the first place. Here, we consider the case of face perception using artificial neural networks to test the hypothesis that functional segregation of face recognition in the brain reflects a computational optimization for the broader problem of visual recognition of faces and other visual categories. We find that networks trained on object recognition perform poorly on face recognition and vice versa and that networks optimized for both tasks spontaneously segregate themselves into separate systems for faces and objects. We then show functional segregation to varying degrees for other visual categories, revealing a widespread tendency for optimization (without built-in task-specific inductive biases) to lead to functional specialization in machines and, we conjecture, also brains.
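One way to make the notion of "functional segregation" concrete is a unit-level selectivity index. The sketch below is a simplified stand-in for the study's analysis (which trained dual-task networks and lesioned task-preferring units): it computes, on simulated layer activations, how strongly each unit prefers faces over objects and summarizes what fraction of units are strongly category-preferring. All sizes and distributions are hypothetical.

```python
# Hedged sketch: summarizing unit-level functional segregation from a network
# layer's responses to face vs. object images. Activations are simulated; the
# cited study used trained dual-task networks and lesioning experiments.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_face_imgs, n_obj_imgs = 512, 200, 200

# Simulated layer activations (rows = images, columns = units); a subset of
# units is biased toward faces, another subset toward objects.
face_pref = rng.random(n_units) < 0.3
obj_pref = (~face_pref) & (rng.random(n_units) < 0.3)
face_acts = rng.gamma(2.0, 1.0, (n_face_imgs, n_units)) + 2.0 * face_pref
obj_acts = rng.gamma(2.0, 1.0, (n_obj_imgs, n_units)) + 2.0 * obj_pref

# Per-unit selectivity index in [-1, 1]: +1 = responds only to faces.
mu_f, mu_o = face_acts.mean(axis=0), obj_acts.mean(axis=0)
selectivity = (mu_f - mu_o) / (mu_f + mu_o + 1e-9)

# Crude segregation summary: fraction of units strongly preferring one category.
segregated = np.mean(np.abs(selectivity) > 0.2)
print(f"Units with |selectivity| > 0.2: {segregated:.1%}")
```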
Affiliation(s)
- Katharina Dobs: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Psychology, Justus Liebig University Giessen, Giessen, Germany
- Julio Martinez: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA; Department of Psychology, Stanford University, Stanford, CA, USA
- Nancy Kanwisher: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA

33

34
Carnevali L, Gui A, Jones EJH, Farroni T. Face Processing in Early Development: A Systematic Review of Behavioral Studies and Considerations in Times of COVID-19 Pandemic. Front Psychol 2022; 13:778247. [PMID: 35250718 PMCID: PMC8894249 DOI: 10.3389/fpsyg.2022.778247]
Abstract
Human faces are one of the most prominent stimuli in the visual environment of young infants and convey critical information for the development of social cognition. During the COVID-19 pandemic, mask wearing has become a common practice outside the home environment. With masks covering nose and mouth regions, the facial cues available to the infant are impoverished. The impact of these changes on development is unknown but is critical to debates around mask mandates in early childhood settings. As infants grow, they increasingly interact with a broader range of familiar and unfamiliar people outside the home; in these settings, mask wearing could possibly influence social development. In order to generate hypotheses about the effects of mask wearing on infant social development, in the present work, we systematically review N = 129 studies selected based on the most recent PRISMA guidelines providing a state-of-the-art framework of behavioral studies investigating face processing in early infancy. We focused on identifying sensitive periods during which being exposed to specific facial features or to the entire face configuration has been found to be important for the development of perceptive and socio-communicative skills. For perceptive skills, infants gradually learn to analyze the eyes or the gaze direction within the context of the entire face configuration. This contributes to identity recognition as well as emotional expression discrimination. For socio-communicative skills, direct gaze and emotional facial expressions are crucial for attention engagement while eye-gaze cuing is important for joint attention. Moreover, attention to the mouth is particularly relevant for speech learning. We discuss possible implications of the exposure to masked faces for developmental needs and functions. Providing groundwork for further research, we encourage the investigation of the consequences of mask wearing for infants' perceptive and socio-communicative development, suggesting new directions within the research field.
Affiliation(s)
- Laura Carnevali: Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy
- Anna Gui: Centre for Brain and Cognitive Development, Birkbeck, University of London, London, United Kingdom
- Emily J. H. Jones: Centre for Brain and Cognitive Development, Birkbeck, University of London, London, United Kingdom
- Teresa Farroni: Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy

35
de Barbaro K, Fausey CM. Ten lessons about infants' everyday experiences. Curr Dir Psychol Sci 2022; 31:28-33. [PMID: 36159505 PMCID: PMC9499013 DOI: 10.1177/09637214211059536]
Abstract
Audio recorders, accelerometers, and cameras that infants wear throughout their everyday lives capture the experiences that are available to shape development. Everyday sensing in infancy reveals patterns within the everyday hubbub that are unknowable using methods that capture shorter, more isolated, or more planned slices of behavior. Here, we review ten lessons learned from recent endeavors that removed researchers from designing or participating in infants' experiences and instead quantified patterns that arose within infants' own spontaneously arising everyday experiences. The striking heterogeneity of experiences - there is no meaningfully "representative" hour of a day, instance of a category, interaction context, or infant - inspires next steps in theory and practice that embrace the complex, dynamic, and multiple pathways of human development.
Affiliation(s)
- Kaya de Barbaro: Department of Psychology, The University of Texas at Austin, SEA 4.208, 108 E. Dean Keaton Stop A8000, Austin, TX 78712-1043
- Caitlin M. Fausey: Department of Psychology, University of Oregon, 1227 University of Oregon, Eugene, OR 97403

36
Conte S, Baccolo E, Bulf H, Proietti V, Macchi Cassia V. Infants' visual exploration strategies for adult and child faces. Infancy 2022; 27:492-514. [PMID: 35075767 DOI: 10.1111/infa.12458]
Abstract
By the end of the first year of life, infants' discrimination abilities tune to frequently experienced face groups. Little is known about the exploration strategies adopted to efficiently discriminate frequent, familiar face types. The present eye-tracking study examined the distribution of visual fixations produced by 10-month-old and 4-month-old singletons while learning adult (i.e., familiar) and child (i.e., unfamiliar) White faces. Infants were tested in an infant-controlled visual habituation task, in which post-habituation preference measured successful discrimination. Results confirmed earlier evidence that, without sibling experience, 10-month-olds discriminate only among adult faces. Analyses of gaze movements during habituation showed that infants' fixations were centered in the upper part of the stimuli. The mouth was sampled longer in adult faces than in child faces, while the child eyes were sampled longer and more frequently than the adult eyes. At 10 months, but not at 4 months, global measures of scanning behavior on the whole face also varied according to face age, as the spatiotemporal distribution of scan paths showed larger within- and between-participants similarity for adult faces than for child faces. Results are discussed with reference to the perceptual narrowing literature, and the influence of age-appropriate developmental tasks on infants' face processing abilities.
Affiliation(s)
- Stefania Conte: Department of Psychology, University of South Carolina, Columbia, South Carolina, USA
- Elisa Baccolo: Department of Psychology, University of Milano-Bicocca, Milano, Italy
- Hermann Bulf: Department of Psychology, University of Milano-Bicocca, Milano, Italy
- Valentina Proietti: Department of Psychology, Trinity Western University, Langley, British Columbia, Canada

37
Franchak JM, Yu C. Beyond screen time: Using head-mounted eye tracking to study natural behavior. Adv Child Dev Behav 2022; 62:61-91. [PMID: 35249686 DOI: 10.1016/bs.acdb.2021.11.001]
Abstract
Head-mounted eye tracking is a new method that allows researchers to catch a glimpse of what infants and children see during naturalistic activities. In this chapter, we review how mobile, wearable eye trackers improve the construct validity of important developmental constructs, such as visual object experiences and social attention, in ways that would be impossible using screen-based eye tracking. Head-mounted eye tracking improves ecological validity by allowing researchers to present more realistic and complex visual scenes, create more interactive experimental situations, and examine how the body influences what infants and children see. As with any new method, there are difficulties to overcome. Accordingly, we identify what aspects of head-mounted eye-tracking study design affect the measurement quality, interpretability of the results, and efficiency of gathering data. Moreover, we provide a summary of best practices aimed at allowing researchers to make well-informed decisions about whether and how to apply head-mounted eye tracking to their own research questions.
Affiliation(s)
- John M Franchak: Department of Psychology, University of California, Riverside, CA, United States
- Chen Yu: Department of Psychology, University of Texas at Austin, Austin, TX, United States

38
Oakes LM. The development of visual attention in infancy: A cascade approach. Adv Child Dev Behav 2022; 64:1-37. [PMID: 37080665 DOI: 10.1016/bs.acdb.2022.10.004]
Abstract
Visual attention develops rapidly and significantly during the first postnatal years. At birth, infants have poor visual acuity, poor head and neck control, and as a result have little autonomy over where and how long they look. Across the first year, the neural systems that support alerting, orienting, and endogenous attention develop, allowing infants to more effectively focus their attention on information in the environment important for processing. However, visual attention is a system that develops in the context of the whole child, and fully understanding this development requires understanding how attentional systems interact and how these systems interact with other systems across wide domains. By adopting a cascades framework we can better position the development of visual attention in the context of the whole developing child. Specifically, development builds, with previous achievements setting the stage for current development, and current development having cascading consequences on future development. In addition, development reflects changes in multiple domains, and those domains influence each other across development. Finally, development reflects and produces changes in the input that the visual system receives; understanding the changing input is key to fully understand the development of visual attention. The development of visual attention is described in this context.
Affiliation(s)
- Lisa M Oakes: Department of Psychology and Center for Mind and Brain, University of California, Davis, Davis, CA, United States

39
Kobayashi M, Kanazawa S, Yamaguchi MK, O'Toole AJ. Cortical processing of dynamic bodies in the superior occipito-temporal regions of the infants' brain: Difference from dynamic faces and inversion effect. Neuroimage 2021; 244:118598. [PMID: 34587515 DOI: 10.1016/j.neuroimage.2021.118598]
Abstract
Previous functional neuroimaging studies imply a crucial role of the superior temporal regions (e.g., superior temporal sulcus: STS) for processing of dynamic faces and bodies. However, little is known about the cortical processing of moving faces and bodies in infancy. The current study used functional near-infrared spectroscopy (fNIRS) to directly compare cortical hemodynamic responses to dynamic faces (videos of approaching people with blurred bodies) and dynamic bodies (videos of approaching people with blurred faces) in infants' brain. We also examined the body-inversion effect in 5- to 8-month-old infants using hemodynamic responses as a measure. We found significant brain activity for the dynamic faces and bodies in the superior area of bilateral temporal cortices in both 5- to 6-month-old and 7- to 8-month-old infants. The hemodynamic responses to dynamic faces occurred across a broader area of cortex in 7- to 8-month-olds than in 5- to 6-month-olds, but we did not find a developmental change for dynamic bodies. There was no significant activation when the stimuli were presented upside down, indicating that these activation patterns did not result from the low-level visual properties of dynamic faces and bodies. Additionally, we found that the superior temporal regions showed a body inversion effect in infants aged over 5 months: the upright dynamic body stimuli induced stronger activation compared to the inverted stimuli. The most important contribution of the present study is that we identified cortical areas responsive to dynamic bodies and faces in two groups of infants (5-6-months and 7-8-months of age) and we found different developmental trends for the processing of bodies and faces.
Affiliation(s)
- Megumi Kobayashi: Department of Functioning and Disability, Institute for Developmental Research, Aichi Developmental Disability Center, Japan
- So Kanazawa: Department of Psychology, Japan Women's University, Japan
- Alice J O'Toole: School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA

40
Mendoza JK, Fausey CM. Quantifying Everyday Ecologies: Principles for Manual Annotation of Many Hours of Infants' Lives. Front Psychol 2021; 12:710636. [PMID: 34552533 PMCID: PMC8450442 DOI: 10.3389/fpsyg.2021.710636]
Abstract
Everyday experiences are the experiences available to shape developmental change. Remarkable advances in devices used to record infants' and toddlers' everyday experiences, as well as in repositories to aggregate and share such recordings across teams of theorists, have yielded a potential gold mine of insights to spur next-generation theories of experience-dependent change. Making full use of these advances, however, currently requires manual annotation. Manually annotating many hours of everyday life is a dedicated pursuit requiring significant time and resources, and in many domains is an endeavor currently lacking foundational facts to guide potentially consequential implementation decisions. These realities make manual annotation a frequent barrier to discoveries, as theorists instead opt for narrower scoped activities. Here, we provide theorists with a framework for manually annotating many hours of everyday life designed to reduce both theoretical and practical overwhelm. We share insights based on our team's recent adventures in the previously uncharted territory of everyday music. We identify principles, and share implementation examples and tools, to help theorists achieve scalable solutions to challenges that are especially fierce when annotating extended timescales. These principles for quantifying everyday ecologies will help theorists collectively maximize return on investment in databases of everyday recordings and will enable a broad community of scholars—across institutions, skillsets, experiences, and working environments—to make discoveries about the experiences upon which development may depend.
Affiliation(s)
- Jennifer K Mendoza: Department of Psychology, University of Oregon, Eugene, OR, United States
- Caitlin M Fausey: Department of Psychology, University of Oregon, Eugene, OR, United States

41
Abstract
Deep learning models currently achieve human levels of performance on real-world face recognition tasks. We review scientific progress in understanding human face processing using computational approaches based on deep learning. This review is organized around three fundamental advances. First, deep networks trained for face identification generate a representation that retains structured information about the face (e.g., identity, demographics, appearance, social traits, expression) and the input image (e.g., viewpoint, illumination). This forces us to rethink the universe of possible solutions to the problem of inverse optics in vision. Second, deep learning models indicate that high-level visual representations of faces cannot be understood in terms of interpretable features. This has implications for understanding neural tuning and population coding in the high-level visual cortex. Third, learning in deep networks is a multistep process that forces theoretical consideration of diverse categories of learning that can overlap, accumulate over time, and interact. Diverse learning types are needed to model the development of human face processing skills, cross-race effects, and familiarity with individual faces.
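The claim that face-identification embeddings retain structured information about the input image is often tested with a linear probe. The sketch below illustrates the idea on simulated embeddings (a hypothetical 128-dimensional code with a weak linear trace of viewpoint mixed in); in practice the embeddings would come from the penultimate layer of a trained face network.

```python
# Hedged sketch of a "linear probe": testing whether a face embedding retains
# information about the input image (here, a hypothetical viewpoint angle).
# Embeddings are simulated stand-ins for features from a trained face network.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_images, dim = 1000, 128
viewpoint = rng.uniform(-90, 90, n_images)           # yaw angle in degrees

# Simulated embeddings: identity-unrelated noise plus a weak linear trace of
# viewpoint, mimicking "residual" image information in the code.
direction = rng.normal(size=dim)
embeddings = rng.normal(size=(n_images, dim)) + 0.02 * np.outer(viewpoint, direction)

probe = Ridge(alpha=1.0)
r2 = cross_val_score(probe, embeddings, viewpoint, cv=5, scoring="r2")
print(f"Cross-validated R^2 for decoding viewpoint: {r2.mean():.2f}")
```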
Affiliation(s)
- Alice J O'Toole: School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas 75080, USA
- Carlos D Castillo: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland 21218, USA

42
Arcaro MJ, Livingstone MS. On the relationship between maps and domains in inferotemporal cortex. Nat Rev Neurosci 2021; 22:573-583. [PMID: 34345018 PMCID: PMC8865285 DOI: 10.1038/s41583-021-00490-4]
Abstract
How does the brain encode information about the environment? Decades of research have led to the pervasive notion that the object-processing pathway in primate cortex consists of multiple areas that are each specialized to process different object categories (such as faces, bodies, hands, non-face objects and scenes). The anatomical consistency and modularity of these regions have been interpreted as evidence that these regions are innately specialized. Here, we propose that ventral-stream modules do not represent clusters of circuits that each evolved to process some specific object category particularly important for survival, but instead reflect the effects of experience on a domain-general architecture that evolved to be able to adapt, within a lifetime, to its particular environment. Furthermore, we propose that the mechanisms underlying the development of domains are both evolutionarily old and universal across cortex. Topographic maps are fundamental, governing the development of specializations across systems, providing a framework for brain organization.
43
van den Boomen C, Munsters NM, Deković M, Kemner C. Exploring emotional face processing in 5-month-olds: The relation with quality of parent-child interaction and spatial frequencies. Infancy 2021; 26:811-830. [PMID: 34237191 DOI: 10.1111/infa.12420]
Abstract
It is unclear whether infants differentially process emotional faces in the brain at 5 months of age. Contradictory findings of previous research indicate that additional factors play a role in this process. The current study investigated whether five-month-old infants show differential brain activity in response to different emotional faces. Furthermore, we explored the relation between emotional face processing and (I) stimulus characteristics, specifically the spatial frequency content, and (II) parent, child, and dyadic qualities of interaction characteristics. Face-sensitive components (i.e., N290, P400, Nc) in response to neutral and fearful faces that contained only lower or higher spatial frequencies were assessed. Quality of parent-child interaction was assessed with the Manchester Assessment of Caregiver Infant Interaction (MACI). The results show that, as a full group, none of the components differed between emotional expressions. However, when splitting the group based on median MACI scores, infants who showed high quality of interaction (i.e., more attentiveness to caregiver, positive and negative affect, and liveliness) processed emotions differently, whereas infants who showed low quality did not. These results indicate that a sub-group of infants show differential emotional face processing at 5 months of age, which seems to relate to the quality of their behavior during the parent-child interaction.
Affiliation(s)
- Carlijn van den Boomen: Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Nicolette M Munsters: Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands; Department of Psychiatry, Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands; Karakter Child and Adolescent Psychiatry, Ede, The Netherlands
- Maja Deković: Department of Clinical Child and Family Studies, Utrecht University, Utrecht, The Netherlands
- Chantal Kemner: Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands; Department of Psychiatry, Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands

44
Mendoza JK, Fausey CM. Everyday music in infancy. Dev Sci 2021; 24:e13122. [PMID: 34170059 PMCID: PMC8596421 DOI: 10.1111/desc.13122]
Abstract
Infants enculturate to their soundscape over the first year of life, yet theories of how they do so rarely make contact with details about the sounds available in everyday life. Here, we report on properties of a ubiquitous early ecology in which foundational skills get built: music. We captured daylong recordings from 35 infants ages 6–12 months at home and fully double‐coded 467 h of everyday sounds for music and its features, tunes, and voices. Analyses of this first‐of‐its‐kind corpus revealed two distributional properties of infants’ everyday musical ecology. First, infants encountered vocal music in over half, and instrumental in over three‐quarters, of everyday music. Live sources generated one‐third, and recorded sources three‐quarters, of everyday music. Second, infants did not encounter each individual tune and voice in their day equally often. Instead, the most available identity cumulated to many more seconds of the day than would be expected under a uniform distribution. These properties of everyday music in human infancy are different from what is discoverable in environments highly constrained by context (e.g., laboratories) and time (e.g., minutes rather than hours). Together with recent insights about the everyday motor, language, and visual ecologies of infancy, these findings reinforce an emerging priority to build theories of development that address the opportunities and challenges of real input encountered by real learners.
Affiliation(s)
- Caitlin M Fausey: Department of Psychology, University of Oregon, Eugene, Oregon, USA

45
Damon F, Quinn PC, Pascalis O. When novelty prevails on familiarity: Visual biases for child versus infant faces in 3.5- to 12-month-olds. J Exp Child Psychol 2021; 210:105174. [PMID: 34144347 DOI: 10.1016/j.jecp.2021.105174]
Abstract
The current study examined the influence of everyday perceptual experience with infant and child faces on the shaping of visual biases for faces in 3.5-, 6-, 9-, and 12-month-old infants. In Experiment 1, infants were presented with pairs of photographs of unfamiliar child and infant faces. Four groups with differential experience with infant and child faces were composed from parents' reports of daily exposure to infants and children (no experience, infant face experience, child face experience, and both infant and child face experience) to assess the influence of experience on face preferences. Results showed that infants from all age groups displayed a bias for the novel category of faces in relation to their previous exposure to infant and child faces. In Experiment 2, this pattern of visual attention was reversed in infants presented with pictures of personally familiar child faces (i.e., older siblings) compared with unfamiliar infant faces, especially in older infants. These results suggest that allocation of attention to novelty can supersede familiarity biases for faces depending on experience and highlight that multiple factors drive infant visual behavior in responding to the social world.
Affiliation(s)
- Fabrice Damon: Centre des Sciences du Goût et de l'Alimentation, AgroSup Dijon, CNRS, INRAE, Université Bourgogne Franche-Comté, F-21000 Dijon, France
- Paul C Quinn: Department of Psychological and Brain Sciences, University of Delaware, Newark, DE 19716, USA
- Olivier Pascalis: University of Grenoble Alpes, Laboratoire de Psychologie et Neurocognition, F-38000 Grenoble, France; Laboratoire de Psychologie et Neurocognition, CNRS, F-38000 Grenoble, France

46
Little Z, Jenkins D, Susilo T. Fast saccades towards faces are robust to orientation inversion and contrast negation. Vision Res 2021; 185:9-16. [PMID: 33866144 DOI: 10.1016/j.visres.2021.03.009]
Abstract
Eye movement studies show that humans can make very fast saccades towards faces in natural scenes, but the visual mechanisms behind this process remain unclear. Here we investigate whether fast saccades towards faces rely on mechanisms that are sensitive to the orientation or contrast of the face image. We present participants with pairs of images, each containing a face and a car in the left and right visual fields or the reverse, and we ask them to saccade to faces or cars as targets in different blocks. We assign participants to one of three image conditions: normal images, orientation-inverted images, or contrast-negated images. We report three main results that hold regardless of image condition. First, reliable saccades towards faces are fast - they can occur at 120-130 ms. Second, fast saccades towards faces are selective - they are more accurate and faster by about 60-70 ms than saccades towards cars. Third, saccades towards faces are reflexive - early saccades in the interval of 120-160 ms tend to go to faces, even when cars are the target. These findings suggest that the speed, selectivity, and reflexivity of saccades towards faces do not depend on the orientation or contrast of the face image. Our results accord with studies suggesting that fast saccades towards faces are mainly driven by low-level image properties, such as amplitude spectrum and spatial frequency.
Affiliation(s)
- Zoë Little: School of Psychology, Victoria University of Wellington, New Zealand
- Daniel Jenkins: School of Psychology, Victoria University of Wellington, New Zealand
- Tirta Susilo: School of Psychology, Victoria University of Wellington, New Zealand

47
Levin HI, Egger D, Andres L, Johnson M, Bearman SK, de Barbaro K. Sensing everyday activity: Parent perceptions and feasibility. Infant Behav Dev 2021; 62:101511. [PMID: 33465730 PMCID: PMC9128842 DOI: 10.1016/j.infbeh.2020.101511]
Abstract
Mobile and wearable sensors provide a unique opportunity to capture the daily activities and interactions that shape developmental trajectories, with potential to revolutionize the study of development (de Barbaro, 2019). However, developmental research employing sensors is still in its infancy, and parents' comfort using these devices is uncertain. This exploratory report assesses parent willingness to participate in sensor studies via a nationally representative survey (N = 210) and live recruitment of a low-income, minority population for an ongoing study (N = 359). The survey allowed us to assess how protocol design influences acceptability, including various options for devices and datastream resolution, conditions of data sharing, and feedback. By contrast, our recruitment data provided insight into parents' true willingness to participate in a sensor study, with a protocol including 72 h of continuous audio, motion, and physiological data. Our results indicate that parents are relatively conservative when considering participation in sensing studies. However, nearly 41 % of surveyed parents reported that they would be at least somewhat willing to participate in studies with audio or video recordings, 26 % were willing or extremely willing, and 14 % reported being extremely willing. These results roughly paralleled our recruitment results, where 58 % of parents indicated interest, 29 % of parents scheduled to participate, and 10 % ultimately participated. Additionally, 70 % of caregivers stated their reason for not participating in the study was due to barriers unrelated to sensing while about 25 % noted barriers due to either privacy concerns or the physical sensors themselves. Parents' willingness to collect sensitive datastreams increased if data stayed within the household for individual use only, are shared anonymously with researchers, or if parents receive feedback from devices. Overall, our findings suggest that given the correct circumstances, mobile sensors are a feasible and promising tool for characterizing children's daily interactions and their role in development.
Affiliation(s)
- Hannah I Levin: School of Communication, Northwestern University, United States
- Dominique Egger: Department of Educational Psychology, University of Texas at Austin, United States
- Lara Andres: Department of Educational Psychology, University of Texas at Austin, United States
- Mckensey Johnson: Department of Psychology, University of Texas at Austin, United States
- Sarah Kate Bearman: Department of Educational Psychology, University of Texas at Austin, United States
- Kaya de Barbaro: Department of Psychology, University of Texas at Austin, United States

48
Quinn PC, Balas BJ, Pascalis O. Reorganization in the representation of face-race categories from 6 to 9 months of age: Behavioral and computational evidence. Vision Res 2020; 179:34-41. [PMID: 33285348 DOI: 10.1016/j.visres.2020.11.006]
Abstract
Prior research has reported developmental change in how infants represent categories of other-race faces (Developmental Science 19 (2016) 362-371). In particular, Caucasian 6-month-olds were shown to represent African versus Asian face categories, whereas Caucasian 9-month-olds represented different classes of other-race faces in one category, inclusive of African and Asian faces but exclusive of Caucasian faces. The current investigation sought to provide stronger evidence that is convergent with these findings by asking whether infants will generalize looking-time responsiveness from one other-race category to another. In Experiment 1, an experimental group of Caucasian 6-month-olds was familiarized with African (or Asian) faces and then given a novel category preference test with an Asian (or African) face versus a Caucasian face, while a control group of Caucasian 6-month-olds viewed the test faces without prior familiarization. Infants in the experimental group divided attention between the test faces, and infants in the control group did not manifest a spontaneous preference. Experiment 2 used the same procedure, but was conducted with Caucasian 9-month-olds. Infants in the experimental group displayed a robust preference for Caucasian faces when considered against the finding that infants in the control group displayed a spontaneous preference for other-race faces. The results offer confirmation that between 6 and 9 months, infants transition to representing own-race versus other-race face categories, with the latter inclusive of multiple other-race face classes with clear perceptual differences. Computational modeling of infant responding suggests that the developmental change is rooted in the statistics of experience with majority versus minority group faces.
Affiliation(s)
- Paul C Quinn: Department of Psychological and Brain Sciences, University of Delaware, United States
- Benjamin J Balas: Department of Psychology, North Dakota State University, United States
- Olivier Pascalis: Laboratoire de Psychologie et Neurocognition, Université Grenoble Alpes, France

49
Mutual Gaze: An Active Ingredient for Social Development in Toddlers with ASD: A Randomized Control Trial. J Autism Dev Disord 2020; 51:1921-1938. [PMID: 32894382 PMCID: PMC8124047 DOI: 10.1007/s10803-020-04672-4]
Abstract
We examined the efficacy of an early autism intervention for use in early childhood intervention (ECI) and mutual gaze as a contributor to social development. Seventy-eight families were randomly assigned to one of three 12-week interventions: Pathways (with a mutual gaze component), communication, or services-as-usual (SAU). The Pathways/SAU comparison concerned the efficacy of Pathways for ECI, and the Pathways/communication comparison, mutual gaze. The Pathways group made significantly more change on social measures, communicative synchrony, and adaptive functioning compared with the SAU group and on social measures compared with the communication group. There were no group differences for communicative acts. The results support Pathways as a potential ECI program and mutual gaze as an active ingredient for social and communication development.
50
Ruan M, Webster PJ, Li X, Wang S. Deep Neural Network Reveals the World of Autism From a First-Person Perspective. Autism Res 2020; 14:333-342. [PMID: 32869953 DOI: 10.1002/aur.2376]
Abstract
People with autism spectrum disorder (ASD) show atypical attention to social stimuli and aberrant gaze when viewing images of the physical world. However, it is unknown how they perceive the world from a first-person perspective. In this study, we used machine learning to classify photos taken in three different categories (people, indoors, and outdoors) as either having been taken by individuals with ASD or by peers without ASD. Our classifier effectively discriminated photos from all three categories, but was particularly successful at classifying photos of people with >80% accuracy. Importantly, visualization of our model revealed critical features that led to successful discrimination and showed that our model adopted a strategy similar to that of ASD experts. Furthermore, for the first time we showed that photos taken by individuals with ASD contained less salient objects, especially in the central visual field. Notably, our model outperformed classification of these photos by ASD experts. Together, we demonstrate an effective and novel method that is capable of discerning photos taken by individuals with ASD and revealing aberrant visual attention in ASD from a unique first-person perspective. Our method may in turn provide an objective measure for evaluations of individuals with ASD. LAY SUMMARY: People with autism spectrum disorder (ASD) demonstrate atypical visual attention to social stimuli. However, it remains largely unclear how they perceive the world from a first-person perspective. In this study, we employed a deep learning approach to analyze a unique dataset of photos taken by people with and without ASD. Our computer modeling was not only able to discern which photos were taken by individuals with ASD, outperforming ASD experts, but importantly, it revealed critical features that led to successful discrimination, revealing aspects of atypical visual attention in ASD from their first-person perspective.
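A minimal sketch of the general approach, not the authors' implementation: fine-tune a pretrained image classifier to predict who took each photo. The folder layout (photos/asd, photos/non_asd), hyperparameters, and epoch count are hypothetical placeholders; the paper additionally visualized the trained model (e.g., saliency-style maps), which is omitted here.

```python
# Hedged sketch (not the authors' code): fine-tuning a pretrained CNN to
# classify who took a photo (ASD vs. non-ASD photographer). Folder layout is
# hypothetical: photos/asd/*.jpg and photos/non_asd/*.jpg.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("photos", transform=preprocess)  # hypothetical path
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Pretrained backbone with a new 2-way head for the photographer label.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.numel()
    print(f"epoch {epoch}: training accuracy {correct / total:.2%}")
```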
Affiliation(s)
- Mindi Ruan: Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, West Virginia, USA
- Paula J Webster: Department of Chemical and Biomedical Engineering and Rockefeller Neuroscience Institute, West Virginia University, Morgantown, West Virginia, USA
- Xin Li: Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, West Virginia, USA
- Shuo Wang: Department of Chemical and Biomedical Engineering and Rockefeller Neuroscience Institute, West Virginia University, Morgantown, West Virginia, USA