1
Petroff ZJ, Jayaraman S, Smith LB, Candy TR, Bonnen K. The world through infant eyes: Evidence for the early emergence of the cardinal orientation bias. Proc Natl Acad Sci U S A 2025; 122:e2421277122. PMID: 40228134; PMCID: PMC12037014; DOI: 10.1073/pnas.2421277122.
Abstract
The structure of the environment includes more horizontal and vertical (i.e., cardinal) orientations than oblique orientations, meaning that edges tend to be aligned with or perpendicular to the direction of gravity. This bias in the visual scene is associated with a bias in visual sensitivity in adults. Although infants must learn to function in this biased environment, their immature motor control prevents them from consistently orienting themselves relative to gravity. This study therefore asked whether cardinal orientations dominate human visual experience from early infancy or only from later in development, as motor control improves. We analyzed video clips from head-mounted cameras, showing the egocentric perspective of 75 infants (1 to 12 mo) in their home environments in two communities (Indiana, USA vs. Tamil Nadu, India). We measured the distribution of orientations in each frame of these videos and found that horizontal and vertical orientations were overrepresented in infants from both countries. A cardinal orientation bias was evident even in the egocentric view of the youngest infants (3 wk) and became more prominent during the subsequent weeks of development. The early presence of a cardinal orientation bias in infants' visual input may serve as a consistent cue to gravity and ground planes, potentially influencing motor development and contributing to the formation of sensory, perceptual, and cognitive biases.
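A minimal sketch of this kind of frame-level analysis, assuming grayscale frames stored as NumPy arrays; the function names and the +/-10 degree cardinal window are illustrative assumptions, not the authors' published pipeline.

```python
import numpy as np

def orientation_histogram(frame: np.ndarray, n_bins: int = 36) -> np.ndarray:
    """Histogram of edge orientations (0-180 deg), weighted by edge strength."""
    gy, gx = np.gradient(frame.astype(float))
    magnitude = np.hypot(gx, gy)
    # Edge orientation is perpendicular to the gradient direction.
    orientation = (np.degrees(np.arctan2(gy, gx)) + 90.0) % 180.0
    hist, _ = np.histogram(orientation, bins=n_bins, range=(0, 180),
                           weights=magnitude)
    return hist / hist.sum()

def cardinal_bias(hist: np.ndarray, window_deg: float = 10.0) -> float:
    """Fraction of edge energy within window_deg of horizontal or vertical."""
    n_bins = hist.size
    centers = (np.arange(n_bins) + 0.5) * (180.0 / n_bins)
    dist_to_cardinal = np.minimum(centers % 90.0, 90.0 - centers % 90.0)
    return float(hist[dist_to_cardinal <= window_deg].sum())

# Example on a synthetic frame dominated by horizontal structure.
frame = np.tile(np.sin(np.linspace(0, 20, 128))[:, None], (1, 128))
print(f"cardinal bias: {cardinal_bias(orientation_histogram(frame)):.2f}")
```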
Affiliation(s)
- Zachary J. Petroff
- School of Optometry, Indiana University, Bloomington, IN 47405
- Department of Computer Science, Indiana University, Bloomington, IN 47405
- Program in Cognitive Science, Indiana University, Bloomington, IN 47405
- Swapnaa Jayaraman
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405
- Pudiyador Association for Community Empowerment, Chennai 600020, India
- Linda B. Smith
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405
- T. Rowan Candy
- School of Optometry, Indiana University, Bloomington, IN 47405
- Kathryn Bonnen
- School of Optometry, Indiana University, Bloomington, IN 47405
2
Kliesch C. Postnatal dependency as the foundation of social learning in humans. Proc Biol Sci 2025; 292:20242818. PMID: 40237509; PMCID: PMC12001984; DOI: 10.1098/rspb.2024.2818.
Abstract
Humans have developed a sophisticated system of cultural transmission that allows for complex, non-genetically specified behaviours to be passed on from one generation to the next. This system relies on understanding others as social and communicative partners. Some theoretical accounts argue for the existence of domain-specific cognitive adaptations that prioritize social information, while others suggest that social learning is itself a product of cumulative cultural evolution based on domain-general learning mechanisms. The current paper explores the contribution of humans' unique ontogenetic environment to the emergence of social learning in infancy. It suggests that the prolonged period of post-natal dependency experienced by human infants contributes to the development of social learning. Because of motor limitations, infants learn to interact with and act through caregivers, establishing social learning abilities and skills that continue to develop as children become less dependent. According to this perspective, at least some key aspects of social development can be attributed to a developmental trajectory guided by infants' early motor development that radically alters how they experience the world.
3
O’Connell TP, Bonnen T, Friedman Y, Tewari A, Sitzmann V, Tenenbaum JB, Kanwisher N. Approximating Human-Level 3D Visual Inferences With Deep Neural Networks. Open Mind (Camb) 2025; 9:305-324. PMID: 40013087; PMCID: PMC11864798; DOI: 10.1162/opmi_a_00189.
Abstract
Humans make rich inferences about the geometry of the visual world. While deep neural networks (DNNs) achieve human-level performance on some psychophysical tasks (e.g., rapid classification of object or scene categories), they often fail in tasks requiring inferences about the underlying shape of objects or scenes. Here, we ask whether and how this gap in 3D shape representation between DNNs and humans can be closed. First, we define the problem space: after generating a stimulus set to evaluate 3D shape inferences using a match-to-sample task, we confirm that standard DNNs are unable to reach human performance. Next, we construct a set of candidate 3D-aware DNNs including 3D neural field (Light Field Network), autoencoder, and convolutional architectures. We investigate the role of the learning objective and dataset by training single-view (the model only sees one viewpoint of an object per training trial) and multi-view (the model is trained to associate multiple viewpoints of each object per training trial) versions of each architecture. When the same object categories appear in the model training and match-to-sample test sets, multi-view DNNs approach human-level performance for 3D shape matching, highlighting the importance of a learning objective that enforces a common representation across viewpoints of the same object. Furthermore, the 3D Light Field Network was the model most similar to humans across all tests, suggesting that building in 3D inductive biases increases human-model alignment. Finally, we explore the generalization performance of multi-view DNNs to out-of-distribution object categories not seen during training. Overall, our work shows that multi-view learning objectives for DNNs are necessary but not sufficient to make similar 3D shape inferences as humans and reveals limitations in capturing human-like shape inferences that may be inherent to DNN modeling approaches. We provide a methodology for understanding human 3D shape perception within a deep learning framework and highlight out-of-domain generalization as the next challenge for learning human-like 3D representations with DNNs.
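A hedged sketch of the core multi-view idea: embeddings of two viewpoints of the same object are pulled together while the other objects in the batch serve as negatives. This is a generic InfoNCE-style reconstruction of the objective, not the paper's training code, and the temperature value is an arbitrary assumption.

```python
import torch
import torch.nn.functional as F

def multiview_contrastive_loss(view_a: torch.Tensor,
                               view_b: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """view_a, view_b: (batch, dim) embeddings of two views of the same objects."""
    a = F.normalize(view_a, dim=1)
    b = F.normalize(view_b, dim=1)
    logits = a @ b.t() / temperature  # (batch, batch) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    # Matching viewpoints sit on the diagonal; other objects are negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example with random stand-in embeddings for a batch of 8 objects.
loss = multiview_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```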
Affiliation(s)
- Tyler Bonnen
- EECS, University of California, Berkeley, Berkeley, CA, USA
4
Daniel-Hertz E, Yao JK, Gregorek S, Hoyos PM, Gomez J. An Eccentricity Gradient Reversal across High-Level Visual Cortex. J Neurosci 2025; 45:e0809242024. PMID: 39516043; PMCID: PMC11713851; DOI: 10.1523/jneurosci.0809-24.2024.
Abstract
Human visual cortex contains regions selectively involved in perceiving and recognizing ecologically important visual stimuli such as people and places. Located in the ventral temporal lobe, these regions are organized consistently relative to cortical folding, a phenomenon thought to be inherited from how centrally or peripherally these stimuli are viewed with the retina. While this eccentricity theory of visual cortex has been one of the best descriptions of its functional organization, whether or not it accurately describes visual processing in all category-selective regions is not yet clear. Through a combination of behavioral and functional MRI measurements in 27 participants (17 females), we demonstrate that a limb-selective region neighboring well-studied face-selective regions shows tuning for the visual periphery in a cortical region originally thought to be centrally biased. We demonstrate that the spatial computations performed by the limb-selective region are consistent with visual experience and, in doing so, make the novel observation that there may in fact be two eccentricity gradients, forming an eccentricity reversal across high-level visual cortex. These data expand the current theory of cortical organization to provide a unifying principle that explains the broad functional features of many visual regions, showing that viewing experience interacts with innate wiring principles to drive the location of cortical specialization.
Affiliation(s)
- Edan Daniel-Hertz
- Princeton University, Princeton Neuroscience Institute, Princeton, New Jersey 08544
- Jewelia K Yao
- Princeton University, Princeton Neuroscience Institute, Princeton, New Jersey 08544
- Sidney Gregorek
- Princeton University, Princeton Neuroscience Institute, Princeton, New Jersey 08544
- Patricia M Hoyos
- Princeton University, Princeton Neuroscience Institute, Princeton, New Jersey 08544
- Jesse Gomez
- Princeton University, Princeton Neuroscience Institute, Princeton, New Jersey 08544
5
Yurkovic-Harding J, Bradshaw J. The Dynamics of Looking and Smiling Differ for Young Infants at Elevated Likelihood for ASD. Infancy 2025; 30:e12646. PMID: 39716809; PMCID: PMC12047390; DOI: 10.1111/infa.12646.
Abstract
Social smiling is the earliest-acquired social communication skill, emerging around 2 months of age. From 2 to 6 months, infants primarily smile in response to caregivers. After 6 months, infants coordinate social smiles with other social cues to initiate interactions with the caregiver. Social smiling is reduced in older infants with autism spectrum disorder (ASD) but has rarely been studied before 6 months of life. The current study therefore aimed to understand the component parts of infant social smiles, namely looking to the caregiver and smiling, during face-to-face interactions in 3- and 4-month-old infants at elevated likelihood (EL) and low likelihood (LL) for ASD. We found that EL and LL infants looked to their caregiver and smiled for similar amounts of time and at similar rates, suggesting that social smiling manifests similarly in both groups. A nuanced difference between groups emerged when considering the temporal dynamics of looking and smiling. Specifically, 3-month-old EL infants demonstrated extended looking to the caregiver after smile offset. These findings suggest that social smiling is largely typical in EL infants in early infancy, with subtle differences in temporal coupling. Future research is needed to understand the full magnitude of these differences and their implications for social development.
Affiliation(s)
- Julia Yurkovic-Harding
- Department of Psychology, University of South Carolina, Columbia, South Carolina, USA
- Carolina Autism and Neurodevelopment Research Center, University of South Carolina, Columbia, South Carolina, USA
- Jessica Bradshaw
- Department of Psychology, University of South Carolina, Columbia, South Carolina, USA
- Carolina Autism and Neurodevelopment Research Center, University of South Carolina, Columbia, South Carolina, USA
6
Rutkowska JM, Mermier J, Meyer M, Bulf H, Turati C, Hunnius S. Emotional Movement Kinematics Guide Twelve-Month-Olds' Visual, but Not Manual, Exploration. Infancy 2025; 30:e70000. PMID: 39841055; PMCID: PMC11753196; DOI: 10.1111/infa.70000.
Abstract
The ability to recognize and act on others' emotions is crucial for navigating social interactions successfully and learning about the world. One way in which others' emotions are observable is through their movement kinematics. Movement information is available even at a distance or when an individual's face is not visible. Infants have been shown to be sensitive to emotions in the movement kinematics of transporting actions, like moving an object from one place to another. However, it is still unknown whether they associate the manipulated object with the emotions conveyed in moving it, and whether they use this information to guide their own exploration of that object. In this study, 12-month-old infants watched actors transporting two toys with positive or negative emotional valence. Then, infants were given the possibility to interact with the same toys. We expected the infants to look at and touch the toy handled in a positive manner more, compared to the toy handled in a negative manner. Our results showed that infants looked at the positive toys more than at the negative toys, but that infants touched both toys for the same amount of time. There was also no difference in which toy they manually explored first.
Affiliation(s)
- Joanna M. Rutkowska
- Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, The Netherlands
- Department of Psychology, Jacobs Center for Productive Youth Development, University of Zurich, Zurich, Switzerland
- Julia Mermier
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Marlene Meyer
- Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, The Netherlands
- Hermann Bulf
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- NeuroMI, Milan Center for Neuroscience, Milan, Italy
- Chiara Turati
- Department of Psychology, University of Milano-Bicocca, Milan, Italy
- NeuroMI, Milan Center for Neuroscience, Milan, Italy
- Sabine Hunnius
- Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, The Netherlands
7
Yan X, Tung SS, Fascendini B, Chen YD, Norcia AM, Grill-Spector K. The emergence of visual category representations in infants' brains. eLife 2024; 13:RP100260. PMID: 39714017; DOI: 10.7554/elife.100260.
Abstract
Organizing the continuous stream of visual input into categories like places or faces is important for everyday function and social interactions. However, it is unknown when neural representations of these and other visual categories emerge. Here, we used steady-state evoked potential electroencephalography to measure cortical responses in infants at 3-4 months, 4-6 months, 6-8 months, and 12-15 months, when they viewed controlled, gray-level images of faces, limbs, corridors, characters, and cars. We found that distinct responses to these categories emerge at different ages. Reliable brain responses to faces emerge first, at 4-6 months, followed by limbs and places around 6-8 months. Between 6 and 15 months response patterns become more distinct, such that a classifier can decode what an infant is looking at from their brain responses. These findings have important implications for assessing typical and atypical cortical development as they not only suggest that category representations are learned, but also that representations of categories that may have innate substrates emerge at different times during infancy.
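A toy sketch of the decoding analysis mentioned above, with random stand-in data in place of infant EEG responses; the classifier choice and array shapes are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels = 100, 64
X = rng.normal(size=(n_trials, n_channels))   # stand-in response patterns
y = rng.integers(0, 5, size=n_trials)         # 5 categories (faces, limbs, ...)

# Cross-validated classification: above-chance accuracy would indicate that
# the response patterns carry category information.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance ~ 0.20)")
```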
Affiliation(s)
- Xiaoqian Yan
- Department of Psychology, Stanford University, Stanford, United States
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, United States
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
- Sarah Shi Tung
- Department of Psychology, Stanford University, Stanford, United States
- Bella Fascendini
- Department of Psychology, Stanford University, Stanford, United States
- Yulan Diana Chen
- Department of Psychology, Stanford University, Stanford, United States
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, United States
- Anthony M Norcia
- Department of Psychology, Stanford University, Stanford, United States
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, United States
- Kalanit Grill-Spector
- Department of Psychology, Stanford University, Stanford, United States
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, United States
- Neurosciences Program, Stanford University, Stanford, United States
8
Chow HM, Ma YK, Tseng CH. Social and communicative not a prerequisite: Preverbal infants learn an abstract rule only from congruent audiovisual dynamic pitch-height patterns. J Exp Child Psychol 2024; 248:106046. PMID: 39241321; DOI: 10.1016/j.jecp.2024.106046.
Abstract
Learning in the everyday environment often requires the flexible integration of relevant multisensory information. Previous research has demonstrated preverbal infants' capacity to extract an abstract rule from audiovisual temporal sequences matched in temporal synchrony. Interestingly, this capacity was recently reported to be modulated by crossmodal correspondence beyond spatiotemporal matching (e.g., consistent facial emotional expressions or articulatory mouth movements matched with sound). To investigate whether such modulatory influence applies to non-social and non-communicative stimuli, we conducted a critical test using audiovisual stimuli free of social information: visually upward (and downward) moving objects paired with a congruent tone of ascending or incongruent (descending) pitch. East Asian infants (8-10 months old) from a metropolitan area in Asia successfully learned the abstract rule in the congruent audiovisual condition but showed weaker learning in the incongruent condition. This implies that preverbal infants use crossmodal dynamic pitch-height correspondence to integrate multisensory information before rule extraction. The result confirms that preverbal infants are ready to use non-social, non-communicative information for cognitive functions such as rule extraction in a multisensory context.
Affiliation(s)
- Hiu Mei Chow
- Department of Psychology, St. Thomas University, Fredericton, New Brunswick E3B 5G3, Canada
- Yuen Ki Ma
- Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong
- Chia-Huei Tseng
- Research Institute of Electrical Communication, Tohoku University, Sendai, Miyagi 980-0812, Japan
9
Rekow D, Baudouin J, Kiseleva A, Rossion B, Durand K, Schaal B, Leleu A. Olfactory-to-visual facilitation in the infant brain declines gradually from 4 to 12 months. Child Dev 2024; 95:1967-1981. PMID: 39022837; PMCID: PMC11579641; DOI: 10.1111/cdev.14124.
Abstract
During infancy, intersensory facilitation declines gradually as unisensory perception develops. However, this trade-off has mainly been investigated using audiovisual stimulation. Here, fifty 4- to 12-month-old infants (26 females, predominantly White) were tested in 2017-2020 to determine whether the facilitating effect of their mother's body odor on neural face categorization, as previously observed at 4 months, decreases with age. In a baseline odor context, the results revealed a face-selective electroencephalographic response that increases and changes qualitatively between 4 and 12 months, marking improved face categorization. At the same time, the benefit of adding maternal odor fades with age (R² = .31), indicating an inverse relation with the amplitude of the visual response and extending previous evidence from the audiovisual domain to olfactory-visual interactions.
Affiliation(s)
- Diane Rekow
- Development of Olfactory Communication & Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, Université de Bourgogne, Université Bourgogne Franche-Comté, CNRS, INRAe, Institut Agro Dijon, Dijon, France
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Jean-Yves Baudouin
- Laboratoire “Développement, Individu, Processus, Handicap, Éducation” (DIPHE), Département Psychologie du Développement, de l'Éducation et des Vulnérabilités (PsyDÉV), Institut de Psychologie, Université de Lyon (Lumière Lyon 2), Bron, France
- Institut Universitaire de France, Paris, France
- Anna Kiseleva
- Development of Olfactory Communication & Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, Université de Bourgogne, Université Bourgogne Franche-Comté, CNRS, INRAe, Institut Agro Dijon, Dijon, France
- Bruno Rossion
- Université de Lorraine, CNRS, IMoPA, Nancy, France
- Université de Lorraine, CHRU-Nancy, Service de Neurologie, Nancy, France
- Karine Durand
- Development of Olfactory Communication & Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, Université de Bourgogne, Université Bourgogne Franche-Comté, CNRS, INRAe, Institut Agro Dijon, Dijon, France
- Benoist Schaal
- Development of Olfactory Communication & Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, Université de Bourgogne, Université Bourgogne Franche-Comté, CNRS, INRAe, Institut Agro Dijon, Dijon, France
- Arnaud Leleu
- Development of Olfactory Communication & Cognition Lab, Centre des Sciences du Goût et de l'Alimentation, Université de Bourgogne, Université Bourgogne Franche-Comté, CNRS, INRAe, Institut Agro Dijon, Dijon, France
10
Franchak JM, Adolph KE. An update of the development of motor behavior. Wiley Interdiscip Rev Cogn Sci 2024; 15:e1682. PMID: 38831670; PMCID: PMC11534565; DOI: 10.1002/wcs.1682.
Abstract
This primer describes research on the development of motor behavior. We focus on infancy, when the basic action systems are acquired (posture, locomotion, manual actions, and facial actions), and we adopt a developmental systems perspective to understand the causes and consequences of developmental change. Experience facilitates improvements in motor behavior, and infants accumulate immense amounts of varied everyday experience with all the basic action systems. At every point in development, perception guides behavior by providing feedback about the results of just-prior movements and information about what to do next. Across development, new motor behaviors provide new inputs for perception. Thus, motor development opens up new opportunities for acquiring knowledge and acting on the world, instigating cascades of developmental changes in perceptual, cognitive, and social domains. This article is categorized under: Cognitive Biology > Cognitive Development; Psychology > Motor Skill and Performance; Neuroscience > Development.
Affiliation(s)
- John M Franchak
- Department of Psychology, University of California, Riverside, California, USA
- Karen E Adolph
- Department of Psychology, Center for Neural Science, New York University, New York, USA
11
Greene MR, Balas BJ, Lescroart MD, MacNeilage PR, Hart JA, Binaee K, Hausamann PA, Mezile R, Shankar B, Sinnott CB, Capurro K, Halow S, Howe H, Josyula M, Li A, Mieses A, Mohamed A, Nudnou I, Parkhill E, Riley P, Schmidt B, Shinkle MW, Si W, Szekely B, Torres JM, Weissmann E. The visual experience dataset: Over 200 recorded hours of integrated eye movement, odometry, and egocentric video. J Vis 2024; 24:6. PMID: 39377740; PMCID: PMC11466363; DOI: 10.1167/jov.24.11.6.
Abstract
We introduce the Visual Experience Dataset (VEDB), a compilation of more than 240 hours of egocentric video combined with gaze- and head-tracking data that offer an unprecedented view of the visual world as experienced by human observers. The dataset consists of 717 sessions, recorded by 56 observers ranging from 7 to 46 years of age. This article outlines the data collection, processing, and labeling protocols undertaken to ensure a representative sample and discusses the potential sources of error or bias within the dataset. The VEDB's potential applications are vast, including improving gaze-tracking methodologies, assessing spatiotemporal image statistics, and refining deep neural networks for scene and activity recognition. The VEDB is accessible through established open science platforms and is intended to be a living dataset with plans for expansion and community contributions. It is released with an emphasis on ethical considerations, such as participant privacy and the mitigation of potential biases. By providing a dataset grounded in real-world experiences and accompanied by extensive metadata and supporting code, the authors invite the research community to use and contribute to the VEDB, facilitating a richer understanding of visual perception and behavior in naturalistic settings.
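A minimal sketch of a typical first step with data like these: aligning gaze samples to video frames by timestamp. The column names, sample values, and 20 ms tolerance are assumptions for illustration, not the VEDB's actual schema.

```python
import pandas as pd

# Stand-in tables: video frame timestamps and gaze samples (seconds).
frames = pd.DataFrame({"t": [0.000, 0.033, 0.066], "frame": [0, 1, 2]})
gaze = pd.DataFrame({"t": [0.001, 0.030, 0.064],
                     "gaze_x": [0.50, 0.52, 0.49],
                     "gaze_y": [0.40, 0.41, 0.43]})

# merge_asof pairs each frame with the nearest gaze sample in time,
# dropping matches farther than 20 ms away.
aligned = pd.merge_asof(frames.sort_values("t"), gaze.sort_values("t"),
                        on="t", direction="nearest", tolerance=0.02)
print(aligned)
```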
Affiliation(s)
- Michelle R Greene
- Barnard College, Columbia University, New York, NY, USA
- Bates College, Lewiston, ME, USA
- Kamran Binaee
- University of Nevada, Reno, NV, USA
- Magic Leap, Plantation, FL, USA
- Bharath Shankar
- University of Nevada, Reno, NV, USA
- Unmanned Ground Systems, Chelmsford, MA, USA
- Christian B Sinnott
- University of Nevada, Reno, NV, USA
- Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
- Annie Li
- Bates College, Lewiston, ME, USA
- Ilya Nudnou
- North Dakota State University, Fargo, ND, USA
12
Schroer SE, Peters RE, Yu C. Consistency and variability in multimodal parent-child social interaction: An at-home study using head-mounted eye trackers. Dev Psychol 2024; 60:1432-1446. PMID: 38976434; PMCID: PMC11916910; DOI: 10.1037/dev0001756.
Abstract
Real-time attention coordination in parent-toddler dyads is often studied in tightly controlled laboratory settings. These studies have demonstrated the importance of joint attention in scaffolding the development of attention and the types of dyadic behaviors that support early language learning. Little is known about how often these behaviors occur in toddlers' everyday lives. We brought wireless head-mounted eye trackers to families' homes to study the moment-to-moment patterns of toddlers' and parents' visual attention and manual activity in daily routines. Our sample consisted of English- and Spanish-speaking families who all reported being middle- or upper middle-class. Toddlers were 2 to 3 years old. Consistent with the findings from previous laboratory studies, we found variability in how frequently toddlers attended to named objects in two everyday activities: Object Play and Mealtime. We then tested whether parent-toddler joint attention in the seconds before a naming utterance increased toddlers' attention to the named object. We found that joint attention accompanied by the attended object being held increased the child's attention to the labeled object during naming. We posit that in the rich, noisy world of toddlers' everyday lives, embodied attention plays a critical role in coordinating dyadic behaviors and creating informative naming moments. Our findings highlight the importance of studying toddlers' natural behavior in the real world.
Affiliation(s)
- Chen Yu
- Department of Psychology, University of Texas at Austin
13
Cusack R, Ranzato M, Charvet CJ. Helpless infants are learning a foundation model. Trends Cogn Sci 2024; 28:726-738. PMID: 38839537; PMCID: PMC11310914; DOI: 10.1016/j.tics.2024.05.001.
Abstract
Humans have a protracted postnatal helplessness period, typically attributed to human-specific maternal constraints causing an early birth when the brain is highly immature. By aligning neurodevelopmental events across species, however, it has been found that humans are not born with especially immature brains compared with animal species with a shorter helpless period. Consistent with this, the rapidly growing field of infant neuroimaging has found that brain connectivity and functional activation at birth share many similarities with the mature brain. Inspired by machine learning, where deep neural networks also benefit from a 'helpless period' of pre-training, we propose that human infants are learning a foundation model: a set of fundamental representations that underpin later cognition with high performance and rapid generalisation.
14
Bradshaw J, Fu X, Richards JE. Infant sustained attention differs by context and social content in the first 2 years of life. Dev Sci 2024; 27:e13500. PMID: 38499474; PMCID: PMC11608077; DOI: 10.1111/desc.13500.
Abstract
Sustained attention (SA) is an endogenous form of attention that emerges in infancy and reflects cognitive engagement and processing. SA is critical for learning and has been measured using different methods during screen-based and interactive contexts involving social and nonsocial stimuli. How SA differs by measurement method, context, and stimuli across development in infancy is not fully understood. This 2-year longitudinal study examines attention using one measure of overall looking behavior and three measures of SA (mean look duration, percent time in heart rate-defined SA, and heart rate change during SA) in N = 53 infants from 1 to 24 months across four unique task conditions: social videos, nonsocial videos, social interactions (face-to-face play), and nonsocial interactions (toy engagement). Results suggest that developmental changes in attention differ by measurement method, task context (screen or interaction), and task stimulus (social or nonsocial). During social interactions, overall looking and look durations declined after age 3-4 months, whereas heart rate-defined attention measures remained stable. All SA measures were greater for videos than for live interaction conditions throughout the first 6 months, but SA to social and nonsocial stimuli within each task context was equivalent. In the second year of life, SA measured with look durations was greater for social videos compared to other conditions, heart rate-defined SA was greater for social videos compared to nonsocial interactions, and heart rate change during SA was similar across conditions. Together, these results suggest that different measures of attention to social and nonsocial stimuli may reflect unique developmental processes and are important to compare and consider together, particularly when using infant attention as a marker of typical or atypical development.
Research highlights:
- Attention measure, context, and social content uniquely differentiate developmental trajectories of attention in the first 2 years of life.
- Overall looking to caregivers during dyadic social interactions declines significantly from 4 to 6 months of age, while sustained attention (SA) to caregivers remains stable.
- Heart rate-defined SA generally differentiates stimulus context: infants show greater SA while watching videos than while engaging with toys.
Affiliation(s)
- Jessica Bradshaw
- Department of Psychology, University of South Carolina, Columbia, South Carolina, USA
- Institute for Mind and Brain, University of South Carolina, Columbia, South Carolina, USA
- Carolina Autism and Neurodevelopment Research Center, University of South Carolina, Columbia, South Carolina, USA
- Xiaoxue Fu
- Department of Psychology, University of South Carolina, Columbia, South Carolina, USA
- Institute for Mind and Brain, University of South Carolina, Columbia, South Carolina, USA
- John E. Richards
- Department of Psychology, University of South Carolina, Columbia, South Carolina, USA
- Institute for Mind and Brain, University of South Carolina, Columbia, South Carolina, USA
15
D'Souza H, D'Souza D. Stop trying to carve Nature at its joints! The importance of a process-based developmental science for understanding neurodiversity. Adv Child Dev Behav 2024; 66:233-268. PMID: 39074923; DOI: 10.1016/bs.acdb.2024.06.004.
Abstract
Nature is dynamic and interdependent. Yet we typically study and understand it as a hierarchy of independent static things (objects, factors, capacities, traits, attributes) with well-defined boundaries. Hence, since Plato, the dominant research practice has been to 'carve Nature at its joints' (Phaedrus 265e), rooted in the view that the world comes to us pre-divided - into static forms or essences - and that the goal of science is to simply discover (identify and classify) them. This things-based approach dominates developmental science, and especially the study of neurodevelopmental conditions. The goal of this paper is to amplify the marginalised process-based approach: that Nature has no joints. It is a hierarchy of interacting processes from which emerging functions (with fuzzy boundaries) softly assemble, become actively maintained, and dissipate over various timescales. We further argue (with a specific focus on children with Down syndrome) that the prevailing focus on identifying, isolating, and analysing things rather than understanding dynamic interdependent processes is obstructing progress in developmental science and particularly our understanding of neurodiversity. We explain how re-examining the very foundation of traditional Western thought is necessary to progress our research on neurodiversity, and we provide specific recommendations on how to steer developmental science towards the process-based approach.
Affiliation(s)
- Hana D'Souza
- Centre for Human Developmental Science, School of Psychology, Cardiff University, Cardiff, United Kingdom
- Dean D'Souza
- Centre for Human Developmental Science, School of Psychology, Cardiff University, Cardiff, United Kingdom
16
Donaghy R, Shinskey J, Tsakiris M. Maternal interoceptive focus is associated with greater reported engagement in mother-infant stroking and rocking. PLoS One 2024; 19:e0302791. PMID: 38900756; PMCID: PMC11189230; DOI: 10.1371/journal.pone.0302791.
Abstract
Parental caregiving during infancy is primarily aimed at the regulation of infants' physiological and emotional states. Recent models of embodied cognition propose that interoception, i.e., the perception of internal bodily states, may influence the quality and quantity of parent-infant caregiving. Yet, empirical investigations into this relationship remain scarce. Across two online studies of mothers with 6- to 18-month-old infants during COVID-19 lockdowns, we examined whether mothers' self-reported engagement in stroking and rocking their infant was related to self-reported interoceptive abilities. Additional measures included retrospective accounts of pregnancy and postnatal body satisfaction, and mothers' reports of their infant's understanding of vocabulary relating to body parts. In Study 1 (N = 151) and Study 2 (N = 111), mothers reported their engagement in caregiving behaviours and their tendency to focus on and regulate bodily states. In a subsample from Study 2 (N = 49), we also obtained an objective measure of cardiac interoceptive accuracy using an online heartbeat counting task. Across both studies, the tendency to focus on and regulate interoceptive states was associated with greater mother-infant stroking and rocking. Conversely, we found no evidence for a relationship between objective interoceptive accuracy and caregiving. The findings suggest that interoception may play a role in parental engagement in stroking and rocking; however, in-person dyadic studies are warranted to further investigate this relationship.
Affiliation(s)
- Rosie Donaghy
- Department of Psychology, Royal Holloway, University of London, Egham, United Kingdom
- Jeanne Shinskey
- Department of Psychology, Royal Holloway, University of London, Egham, United Kingdom
- Manos Tsakiris
- Department of Psychology, Royal Holloway, University of London, Egham, United Kingdom
- Centre for the Politics of Feelings, Senate House, School of Advanced Study, University of London, London, United Kingdom
17
Casillas M, Casey K. Daylong egocentric recordings in small- and large-scale language communities: A practical introduction. Adv Child Dev Behav 2024; 66:29-53. PMID: 39074924; DOI: 10.1016/bs.acdb.2024.05.002.
Abstract
Daylong egocentric (i.e., participant-centered) recordings promise an unprecedented view into the experiences that drive early language learning, impacting both assumptions and theories about how learning happens. Thanks to recent advances in technology, collecting long-form audio, photo, and video recordings with child-worn devices is cheaper and more convenient than ever. These recording methods can be similarly deployed across small- and large-scale language communities around the world, opening up enormous possibilities for comparative research on early language development. However, building new high-quality naturalistic corpora is a massive investment of time and money. In this chapter, we provide a practical look into considerations relevant for developing and managing daylong egocentric recording projects: Is it possible to re-use existing data? How much time will manual annotation take? Can automated tools sufficiently tackle the questions at hand? We conclude by outlining two exciting directions for future naturalistic child language research.
Affiliation(s)
- Marisa Casillas
- Comparative Human Development Department, University of Chicago
18
Tamis-LeMonda CS, Swirbul MS, Lai KH. Natural behavior in everyday settings. Adv Child Dev Behav 2024; 66:1-27. PMID: 39074918; DOI: 10.1016/bs.acdb.2024.04.001.
Abstract
Infant behaviors (walking, vocalizing, playing, interacting with others, and so on) offer an unparalleled window into learning and development. The study of infants requires strategic choices about what to observe, where, when, and how. We argue that loosening study constraints, by allowing infants and caregivers to do whatever they choose, wherever they choose, and with whatever materials they choose, promises to reveal a deep understanding of the everyday data on which learning builds. We show that observations of infants' natural behavior yield unique insights into the nature of visual exploration, object play, posture and locomotion, proximity to caregiver, and communication. Furthermore, we show that by situating the study of behavior in ecologically valid settings, researchers can gain purchase on the contextual regularities that frame learning. We close by underscoring the value of studies at every point on the research continuum, from cleverly controlled lab-based tasks to fully natural observations in everyday environments. Acceleration in the science of behavior rests on leveraging expertise across disciplines, theoretical positions, and methodological approaches.
Affiliation(s)
- Mackenzie S Swirbul
- Department of Applied Psychology, New York University, New York, NY, United States
- Kristy H Lai
- Department of Applied Psychology, New York University, New York, NY, United States
19
Aneja P, Kinna T, Newman J, Sami S, Cassidy J, McCarthy J, Tiwari M, Kumar A, Spencer JP. Leveraging technological advances to assess dyadic visual cognition during infancy in high- and low-resource settings. Front Psychol 2024; 15:1376552. PMID: 38873529; PMCID: PMC11169819; DOI: 10.3389/fpsyg.2024.1376552.
Abstract
Caregiver-infant interactions shape infants' early visual experience; however, there is limited work from low- and middle-income countries (LMICs) characterizing the visual cognitive dynamics of these interactions. Here, we present an innovative dyadic visual cognition pipeline, built on machine learning methods, that captures, processes, and analyses the visual dynamics of caregiver-infant interactions across cultures. We undertake two studies to examine its application in both low-resource (rural India) and high-resource (urban UK) settings. Study 1 develops and validates the pipeline to process caregiver-infant interaction data captured using head-mounted cameras and eye-trackers. We use face detection and object recognition networks and validate these tools using 12 caregiver-infant dyads (4 dyads from a 6-month-old UK cohort, 4 dyads from a 6-month-old India cohort, and 4 dyads from a 9-month-old India cohort). Results show robust and accurate face and toy detection, as well as high percent agreement between processed and manually coded dyadic interactions. Study 2 applied the pipeline to a larger data set (25 6-month-olds from the UK, 31 6-month-olds from India, and 37 9-month-olds from India) with the aim of comparing the visual dynamics of caregiver-infant interaction across the two cultural settings. Results show remarkable correspondence between key measures of visual exploration across cultures, including longer mean look durations during infant-led joint attention episodes. In addition, we found several differences across cultures. Most notably, infants in the UK had a higher proportion of infant-led joint attention episodes, consistent with the child-centered view of parenting common in Western middle-class families. In summary, the pipeline we report provides an objective assessment tool to quantify the visual dynamics of caregiver-infant interaction across high- and low-resource settings.
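A small sketch of the validation logic described in Study 1, reducing the comparison between pipeline output and manual codes to frame-level agreement. The labels are made-up stand-ins, and Cohen's kappa is included as a common chance-corrected complement, an assumption rather than the authors' reported metric.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Stand-in frame-by-frame codes from a manual coder and the automated pipeline.
manual = np.array(["face", "toy", "toy", "none", "face", "toy"])
automated = np.array(["face", "toy", "none", "none", "face", "toy"])

percent_agreement = 100 * float(np.mean(manual == automated))
kappa = cohen_kappa_score(manual, automated)  # chance-corrected agreement
print(f"agreement: {percent_agreement:.1f}%, kappa: {kappa:.2f}")
```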
Affiliation(s)
- Prerna Aneja
- School of Psychology, University of East Anglia, Norwich, United Kingdom
- Thomas Kinna
- School of Medicine, University of East Anglia, Norwich, United Kingdom
- School of Pharmacy, University of East Anglia, Norwich, United Kingdom
- Jacob Newman
- IT and Computing, University of East Anglia, Norwich, United Kingdom
- Saber Sami
- School of Medicine, University of East Anglia, Norwich, United Kingdom
- Joe Cassidy
- School of Psychology, University of East Anglia, Norwich, United Kingdom
- Jordan McCarthy
- School of Psychology, University of East Anglia, Norwich, United Kingdom
- John P. Spencer
- School of Psychology, University of East Anglia, Norwich, United Kingdom
20
Breitfeld E, Saffran JR. Early word learning is influenced by physical environments. Child Dev 2024; 95:962-971. PMID: 38018684; PMCID: PMC11023760; DOI: 10.1111/cdev.14046.
Abstract
During word learning moments, toddlers experience labels and objects in particular environments. Do toddlers learn words better when the physical environment creates contrasts between objects with different labels? Thirty-six 21- to 24-month-olds (92% White, 22 female, data collected 8/21-4/22) learned novel words for novel objects presented using an apparatus that mimicked a shape-sorter toy. The manipulation concerned whether or not the physical features of the environments in which objects occurred heightened the contrasts between the objects. Toddlers only learned labels for objects presented in environments where the apparatus heightened the contrast between the objects (b = .068). These results emphasize the importance of investigating word learning in physical environments that more closely approximate young children's everyday experiences with objects.
Affiliation(s)
- Elise Breitfeld
- Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Jenny R Saffran
- Department of Psychology, University of Wisconsin-Madison, Madison, Wisconsin, USA
21
Long B, Goodin S, Kachergis G, Marchman VA, Radwan SF, Sparks RZ, Xiang V, Zhuang C, Hsu O, Newman B, Yamins DLK, Frank MC. The BabyView camera: Designing a new head-mounted camera to capture children's early social and visual environments. Behav Res Methods 2024; 56:3523-3534. PMID: 37656342; DOI: 10.3758/s13428-023-02206-1.
Abstract
Head-mounted cameras have been used in developmental psychology research for more than a decade to provide a rich and comprehensive view of what infants see during their everyday experiences. However, variation between these devices has limited the field's ability to compare results across studies and across labs. Further, the video data captured by these cameras to date have been relatively low-resolution, limiting how well machine learning algorithms can operate over these rich video data. Here, we provide a well-tested and easily constructed design for a head-mounted camera assembly, the BabyView, developed in collaboration with Daylight Design, LLC, a professional product design firm. The BabyView collects high-resolution video, accelerometer, and gyroscope data from children approximately 6-30 months of age via a GoPro camera custom mounted on a soft child-safety helmet. The BabyView also captures a large, portrait-oriented vertical field of view that encompasses both children's interactions with objects and with their social partners. We detail our protocols for video data management and for handling sensitive data from home environments. We also provide customizable materials for onboarding families with the BabyView. We hope that these materials will encourage wide adoption of the BabyView, allowing the field to collect high-resolution data that can link children's everyday environments with their learning outcomes.
Affiliation(s)
- Bria Long
- Department of Psychology, Stanford University, Stanford, CA, USA
- George Kachergis
- Department of Psychology, Stanford University, Stanford, CA, USA
- Samaher F Radwan
- Department of Psychology, Stanford University, Stanford, CA, USA
- Graduate School of Education, Stanford University, Stanford, CA, USA
- Robert Z Sparks
- Department of Psychology, Stanford University, Stanford, CA, USA
- Violet Xiang
- Department of Psychology, Stanford University, Stanford, CA, USA
- Chengxu Zhuang
- Department of Psychology, Stanford University, Stanford, CA, USA
- Oliver Hsu
- Daylight Design, LLC, San Francisco, CA, USA
- Daniel L K Yamins
- Department of Psychology, Stanford University, Stanford, CA, USA
- Department of Computer Science, Stanford University, Stanford, CA, USA
- Michael C Frank
- Department of Psychology, Stanford University, Stanford, CA, USA
22
Vong WK, Wang W, Orhan AE, Lake BM. Grounded language acquisition through the eyes and ears of a single child. Science 2024; 383:504-511. PMID: 38300999; DOI: 10.1126/science.adi1374.
Abstract
Starting around 6 to 9 months of age, children begin acquiring their first words, linking spoken words to their visual counterparts. How much of this knowledge is learnable from sensory input with relatively generic learning mechanisms, and how much requires stronger inductive biases? Using longitudinal head-mounted camera recordings from one child aged 6 to 25 months, we trained a relatively generic neural network on 61 hours of correlated visual-linguistic data streams, learning feature-based representations and cross-modal associations. Our model acquires many word-referent mappings present in the child's everyday experience, enables zero-shot generalization to new visual referents, and aligns its visual and linguistic conceptual systems. These results show how critical aspects of grounded word meaning are learnable through joint representation and associative learning from one child's input.
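A hedged sketch of the zero-shot evaluation idea: after joint training, an image embedding is labeled with the most similar word embedding. The embeddings below are random stand-ins; in the actual model they are learned from the child's correlated video and speech streams.

```python
import torch
import torch.nn.functional as F

vocab = ["ball", "car", "cat"]
word_emb = F.normalize(torch.randn(len(vocab), 128), dim=1)  # stand-in word vectors
image_emb = F.normalize(torch.randn(1, 128), dim=1)          # stand-in image vector

# Cosine similarity between the image and each word; the most similar word
# is taken as the predicted referent.
similarity = image_emb @ word_emb.t()
prediction = vocab[similarity.argmax().item()]
print(f"predicted referent: {prediction}")
```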
Affiliation(s)
- Wai Keen Vong
- Center for Data Science, New York University, New York, NY, USA
- Wentao Wang
- Center for Data Science, New York University, New York, NY, USA
- A Emin Orhan
- Center for Data Science, New York University, New York, NY, USA
- Brenden M Lake
- Center for Data Science, New York University, New York, NY, USA
- Department of Psychology, New York University, New York, NY, USA
23
Kosie JE, Lew-Williams C. Open Science Considerations for Descriptive Research in Developmental Science. Infant Child Dev 2024; 33:e2377. PMID: 38389731; PMCID: PMC10881201; DOI: 10.1002/icd.2377.
Abstract
Descriptive developmental research seeks to document, describe, and analyze the conditions under which infants and children live and learn. Here, we articulate how open-science practices can be incorporated into descriptive research to increase its transparency, reliability, and replicability. To date, most open-science practices have been oriented toward experimental rather than descriptive studies, and it can be confusing to figure out how to translate open-science practices (e.g., preregistration) for research that is more descriptive in nature. We discuss a number of unique considerations for descriptive developmental research, taking inspiration from existing open-science practices and providing examples from recent and ongoing studies. By embracing a scientific culture where descriptive research and open science coexist productively, developmental science will be better positioned to generate comprehensive theories of development and understand variability in development across communities and cultures.
24
Kubota E, Grill-Spector K, Nordt M. Rethinking cortical recycling in ventral temporal cortex. Trends Cogn Sci 2024; 28:8-17. PMID: 37858388; PMCID: PMC10841108; DOI: 10.1016/j.tics.2023.09.006.
Abstract
High-level visual areas in ventral temporal cortex (VTC) support recognition of important categories, such as faces and words. Word-selective regions are left lateralized and emerge at the onset of reading instruction. Face-selective regions are right lateralized and have been documented in infancy. Prevailing theories suggest that face-selective regions become right lateralized due to competition with word-selective regions in the left hemisphere. However, recent longitudinal studies examining face- and word-selective responses in childhood do not provide support for this theory. Instead, there is evidence that word representations recycle cortex previously involved in processing other stimuli, such as limbs. These findings call for more longitudinal investigations of cortical recycling and a new era of work that links visual experience and behavior with neural responses.
Affiliation(s)
- Emily Kubota
- Department of Psychology, Stanford University, Stanford, CA, USA
- Kalanit Grill-Spector
- Department of Psychology, Stanford University, Stanford, CA, USA; Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Marisa Nordt
- Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics, and Psychotherapy, Medical Faculty, RWTH Aachen University, Aachen, Germany; JARA-Brain Institute II, Molecular Neuroscience and Neuroimaging, RWTH Aachen and Research Centre Juelich, Juelich, Germany
25
Yovel G, Abudarham N. Why psychologists should embrace rather than abandon DNNs. Behav Brain Sci 2023; 46:e414. PMID: 38054326; DOI: 10.1017/s0140525x2300167x.
Abstract
Deep neural networks (DNNs) are powerful computational models, which generate complex, high-level representations that were missing in previous models of human cognition. By studying these high-level representations, psychologists can now gain new insights into the nature and origin of human high-level vision, which was not possible with traditional handcrafted models. Abandoning DNNs would be a huge oversight for psychological sciences.
Affiliation(s)
- Galit Yovel
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel ; https://people.socsci.tau.ac.il/mu/galityovel/
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Naphtali Abudarham
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel; https://people.socsci.tau.ac.il/mu/galityovel/
26
Nordt M, Gomez J, Natu VS, Rezai AA, Finzi D, Kular H, Grill-Spector K. Longitudinal development of category representations in ventral temporal cortex predicts word and face recognition. Nat Commun 2023; 14:8010. [PMID: 38049393 PMCID: PMC10696026 DOI: 10.1038/s41467-023-43146-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2022] [Accepted: 11/01/2023] [Indexed: 12/06/2023] Open
Abstract
Regions in ventral temporal cortex that are involved in visual recognition of categories like words and faces undergo differential development during childhood. However, categories are also represented in distributed responses across high-level visual cortex. How distributed category representations develop, and whether this development relates to behavioral changes in recognition, remains largely unknown. Here, we used functional magnetic resonance imaging to longitudinally measure the development of distributed responses across ventral temporal cortex to 10 categories in school-age children over several years. Our results reveal both strengthening and weakening of category representations with age, which was mainly driven by changes across category-selective voxels. Representations became particularly more distinct for words in the left hemisphere and for faces bilaterally. Critically, distinctiveness for words and faces across category-selective voxels in left and right lateral ventral temporal cortex, respectively, predicted individual children's word and face recognition performance. These results suggest that the development of distributed representations in ventral temporal cortex has behavioral ramifications and advance our understanding of prolonged cortical development during childhood.
Affiliation(s)
- Marisa Nordt
- Department of Psychology, Stanford University, Stanford, CA, USA.
- Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, Medical Faculty, RWTH Aachen, Aachen, Germany.
- JARA-Brain Institute II, Molecular Neuroscience and Neuroimaging, RWTH Aachen & Research Centre Juelich, Juelich, Germany.
- Jesse Gomez
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Vaidehi S Natu
- Department of Psychology, Stanford University, Stanford, CA, USA
- Alex A Rezai
- Department of Psychology, Stanford University, Stanford, CA, USA
- Dawn Finzi
- Department of Psychology, Stanford University, Stanford, CA, USA
- Department of Computer Science, Stanford University, Stanford, CA, USA
- Holly Kular
- Department of Psychology, Stanford University, Stanford, CA, USA
- Kalanit Grill-Spector
- Department of Psychology, Stanford University, Stanford, CA, USA
- Neurosciences Program, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
27
Bradshaw J, Fu X, Yurkovic-Harding J, Abney D. Infant embodied attention in context: Feasibility of home-based head-mounted eye tracking in early infancy. Dev Cogn Neurosci 2023; 64:101299. [PMID: 37748360 PMCID: PMC10522938 DOI: 10.1016/j.dcn.2023.101299] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2023] [Revised: 08/24/2023] [Accepted: 09/08/2023] [Indexed: 09/27/2023] Open
Abstract
Social communication emerges from dynamic, embodied social interactions during which infants coordinate attention to caregivers and objects. Yet many studies of infant attention are constrained to a laboratory setting, neglecting how attention is nested within social contexts where caregivers dynamically scaffold infant behavior in real time. This study evaluates the feasibility and acceptability of the novel use of head-mounted eye tracking (HMET) in the home with N = 40 infants aged 4 and 8 months who are either typically developing or at an elevated genetic liability for autism spectrum disorder (ASD). Results suggest that HMET with young infants with limited independent motor abilities and at an elevated likelihood for atypical development is highly feasible and is deemed acceptable by caregivers. Feasibility and acceptability did not differ by age or ASD likelihood. Data quality was also acceptable, albeit with younger infants showing slightly lower accuracy, allowing for preliminary analysis of developmental trends in infant gaze behavior. This study provides new evidence for the feasibility of using in-home HMET with young infants during a critical developmental period when more complex interactions with the environment and social partners are emerging. Future research can apply this technology to illuminate atypical developmental trajectories of embodied social attention in infancy.
Affiliation(s)
- Jessica Bradshaw
- University of South Carolina, 1800 Gervais St., Columbia, SC 29201, USA; Carolina Autism and Neurodevelopment Research Center, University of South Carolina, USA.
- Xiaoxue Fu
- University of South Carolina, 1800 Gervais St., Columbia, SC 29201, USA; Carolina Autism and Neurodevelopment Research Center, University of South Carolina, USA
- Julia Yurkovic-Harding
- University of South Carolina, 1800 Gervais St., Columbia, SC 29201, USA; Carolina Autism and Neurodevelopment Research Center, University of South Carolina, USA
- Drew Abney
- University of Georgia, 125 Baldwin St., Athens, GA 30602, USA
28
Kamensek T, Susilo T, Iarocci G, Oruc I. Are people with autism prosopagnosic? Autism Res 2023; 16:2100-2109. [PMID: 37740564 DOI: 10.1002/aur.3030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Accepted: 08/30/2023] [Indexed: 09/24/2023]
Abstract
Difficulties in various face processing tasks have been well documented in autism spectrum disorder (ASD). Several meta-analyses and numerous case-control studies have indicated that this population experiences a moderate degree of impairment, with a small percentage of studies failing to detect any impairment. One possible account of this mixed pattern of findings is heterogeneity in face processing abilities stemming from the presence of a subpopulation of prosopagnosic individuals with ASD alongside those with normal face processing skills. Samples randomly drawn from such a population, especially relatively small ones, would vary in the proportion of participants with prosopagnosia, resulting in a wide range of group-level deficits from mild (or none) to severe across studies. We test this prosopagnosic subpopulation hypothesis by examining three groups of participants: adults with ASD, adults with developmental prosopagnosia (DP), and a comparison group. Our results show that the prosopagnosic subpopulation hypothesis does not account for the face impairments in the broader autism spectrum. ASD observers show a continuous and graded, rather than categorical, heterogeneity that spans a range of face processing skills, including many individuals with mild to moderate deficits, which is inconsistent with a prosopagnosic subtype account. We suggest that the pathogenic origins of face deficits for at least some individuals with ASD differ from those of DP.
Affiliation(s)
- Todd Kamensek
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
- Tirta Susilo
- School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Grace Iarocci
- Department of Psychology, Simon Fraser University, Burnaby, British Columbia, Canada
- Ipek Oruc
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
29
Duchaine B, Rezlescu C, Garrido L, Zhang Y, Braga MV, Susilo T. The development of upright face perception depends on evolved orientation-specific mechanisms and experience. iScience 2023; 26:107763. [PMID: 37954143 PMCID: PMC10638473 DOI: 10.1016/j.isci.2023.107763] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 08/03/2023] [Accepted: 08/25/2023] [Indexed: 11/14/2023] Open
Abstract
Here we examine whether our impressive ability to perceive upright faces arises from evolved orientation-specific mechanisms, our extensive experience with upright faces, or both factors. To do so, we tested Claudio, a man with a congenital joint disorder causing his head to be rotated back so that it is positioned between his shoulder blades. As a result, Claudio has seen more faces reversed in orientation to his own face than matched to it. Controls exhibited large inversion effects on all tasks, but Claudio performed similarly with upright and inverted faces in both detection and identity-matching tasks, indicating these abilities are the product of evolved mechanisms and experience. In contrast, he showed clear upright superiority when detecting "Thatcherized" faces (faces with vertically flipped features), suggesting experience plays a greater role in this judgment. Together, these findings indicate that both evolved orientation-specific mechanisms and experience contribute to our proficiency with upright faces.
Affiliation(s)
- Brad Duchaine
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Constantin Rezlescu
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Lúcia Garrido
- Department of Psychology, City, University of London, London EC1V 0HB, UK
- Yiyuan Zhang
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Maira V. Braga
- School of Psychological Science, University of Western Australia, Crawley, WA 6009, Australia
- Tirta Susilo
- School of Psychology, Victoria University of Wellington, Wellington 6140, New Zealand
30
Oakes LM. The cascading development of visual attention in infancy: Learning to look and looking to learn. Curr Dir Psychol Sci 2023; 32:410-417. [PMID: 38107783 PMCID: PMC10723638 DOI: 10.1177/09637214231178744] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2023]
Abstract
The development of visual attention in infancy is typically indexed by where and how long infants look, focusing on changes in alerting, orienting, or attentional control. However, visual attention and looking are both complex systems that are multiply determined. Moreover, infants' visual attention, looking, and learning are intimately connected. Infants learn to look, reflecting cascading effects of changes in attention, the visual system, and motor control, as well as the information infants learn about the world around them. Furthermore, infants' looking behavior provides the input infants use to perceive and learn about the world. Thus, infants look to learn about the world around them. A deeper understanding of development will be gained by appreciating the cascading effects of changes across these intertwined domains.
Affiliation(s)
- Lisa M Oakes
- Department of Psychology and Center for Mind and Brain, UC Davis
31
Ambroziak KB, Bofill MA, Azañón E, Longo MR. Perceptual aftereffects of adiposity transfer from hands to whole bodies. Exp Brain Res 2023; 241:2371-2379. [PMID: 37620437 DOI: 10.1007/s00221-023-06686-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Accepted: 08/08/2023] [Indexed: 08/26/2023]
Abstract
Adaptation aftereffects for features such as identity and gender have been shown to transfer between faces and bodies, and between faces and body parts (i.e., hands). However, no studies have investigated the transfer of adaptation aftereffects between whole bodies and body parts. The present study investigated whether visual adaptation aftereffects transfer between hands and whole bodies in the context of adiposity judgements (i.e., how thin or fat a body is). On each trial, participants had to decide whether the body they saw was thinner or fatter than average. Participants performed the task before and after exposure to a thin/fat hand. Consistent with body adaptation studies, after exposure to a slim hand participants judged subsequently presented bodies to be fatter than after adaptation to a fat hand. These results suggest that there may be links between visual representations of body adiposity for whole bodies and body parts.
Affiliation(s)
- Klaudia B Ambroziak
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK.
- Marina Araujo Bofill
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
- Elena Azañón
- Institute of Psychology, Otto-Von-Guericke University, Universitätsplatz 2, 39016, Magdeburg, Germany
- Center for Behavioral Brain Sciences, Universitätsplatz 2, 39106, Magdeburg, Germany
- Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118, Magdeburg, Germany
- Matthew R Longo
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK.
32
Rigato S, Filippetti ML, de Klerk C. Infants' representations of the infant body in the first year of life: a preferential looking time study. Sci Rep 2023; 13:14091. [PMID: 37640931 PMCID: PMC10462757 DOI: 10.1038/s41598-023-41235-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Accepted: 08/23/2023] [Indexed: 08/31/2023] Open
Abstract
Representing others' bodies is of fundamental importance for interacting with our environment, yet little is known about how body representations develop. Previous research suggests that infants have expectations about the typical structure of human bodies from relatively early in life, but that these expectations are dependent on how closely the stimuli resemble the bodies infants are exposed to in daily life. Yet, all previous studies used images of adult human bodies, and therefore it is unknown whether infants' representations of infant bodies follow a similar developmental trajectory. In this study we investigated whether infants have expectations about the relative size of infant body parts in a preferential looking study using typical and proportionally distorted infant bodies. We recorded the looking behaviour of three groups of infants between 5 and 14 months of age while they watched images of upright and inverted infant bodies, typical and proportionally distorted, and also collected data on participants' locomotor abilities. Our results showed that infants of all ages looked equally at the typical and proportionally distorted infant body stimuli in both the upright and inverted conditions, and that their looking behaviour was unrelated to their locomotor skills. These findings suggest that infants may need additional visual experience with infant bodies to develop expectations about their typical proportions.
Affiliation(s)
- Silvia Rigato
- Centre for Brain Science, Department of Psychology, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ, UK.
- Maria Laura Filippetti
- Centre for Brain Science, Department of Psychology, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ, UK
- Carina de Klerk
- Centre for Brain Science, Department of Psychology, University of Essex, Wivenhoe Park, Colchester, CO4 3SQ, UK
33
Birulés J, Goupil L, Josse J, Fort M. The Role of Talking Faces in Infant Language Learning: Mind the Gap between Screen-Based Settings and Real-Life Communicative Interactions. Brain Sci 2023; 13:1167. [PMID: 37626523 PMCID: PMC10452843 DOI: 10.3390/brainsci13081167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 07/28/2023] [Accepted: 08/01/2023] [Indexed: 08/27/2023] Open
Abstract
Over the last few decades, developmental (psycho)linguists have demonstrated that perceiving talking faces audio-visually is important for early language acquisition. Using mostly well-controlled and screen-based laboratory approaches, this line of research has shown that paying attention to talking faces is likely to be one of the powerful strategies infants use to learn their native language(s). In this review, we combine evidence from these screen-based studies with another line of research that has studied how infants learn novel words and deploy their visual attention during naturalistic play. In our view, this is an important step toward developing an integrated account of how infants effectively extract audiovisual information from talkers' faces during early language learning. We identify three factors that have been understudied so far, despite the fact that they are likely to have an important impact on how infants deploy their attention (or not) toward talking faces during social interactions: social contingency, speaker characteristics, and task-dependencies. Last, we propose ideas to address these issues in future research, with the aim of reducing the existing knowledge gap between current experimental studies and the many ways infants can and do effectively rely upon the audiovisual information extracted from talking faces in their real-life language environment.
Affiliation(s)
- Joan Birulés
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France; (L.G.); (J.J.); (M.F.)
- Louise Goupil
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France; (L.G.); (J.J.); (M.F.)
- Jérémie Josse
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France; (L.G.); (J.J.); (M.F.)
- Mathilde Fort
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France; (L.G.); (J.J.); (M.F.)
- Centre de Recherche en Neurosciences de Lyon, INSERM U1028-CNRS UMR 5292, Université Lyon 1, 69500 Bron, France
34
Huber LS, Geirhos R, Wichmann FA. The developmental trajectory of object recognition robustness: Children are like small adults but unlike big deep neural networks. J Vis 2023; 23:4. [PMID: 37410494 PMCID: PMC10337805 DOI: 10.1167/jov.23.7.4] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Accepted: 05/10/2023] [Indexed: 07/07/2023] Open
Abstract
In laboratory object recognition tasks based on undistorted photographs, both adult humans and deep neural networks (DNNs) perform close to ceiling. Unlike adults, whose object recognition performance is robust against a wide range of image distortions, DNNs trained on standard ImageNet (1.3M images) perform poorly on distorted images. However, the last 2 years have seen impressive gains in DNN distortion robustness, predominantly achieved through ever-increasing large-scale datasets, orders of magnitude larger than ImageNet. Although this simple brute-force approach is very effective in achieving human-level robustness in DNNs, it raises the question of whether human robustness, too, is simply due to extensive experience with (distorted) visual input during childhood and beyond. Here we investigate this question by comparing the core object recognition performance of 146 children (aged 4-15 years) against adults and against DNNs. We find, first, that already 4- to 6-year-olds show remarkable robustness to image distortions and outperform DNNs trained on ImageNet. Second, we estimated the number of images children had been exposed to during their lifetime. Compared with various DNNs, children's high robustness requires relatively little data. Third, when recognizing objects, children (like adults but unlike DNNs) rely heavily on shape but not on texture cues. Together our results suggest that the remarkable robustness to distortions emerges early in the developmental trajectory of human object recognition and is unlikely the result of a mere accumulation of experience with distorted visual input. Even though current DNNs match human performance regarding robustness, they seem to rely on different and more data-hungry strategies to do so.
Affiliation(s)
- Lukas S Huber
- Department of Psychology, University of Bern, Bern, Switzerland
- Neural Information Processing Group, University of Tübingen, Tübingen, Germany
- https://orcid.org/0000-0002-7755-6926
- Robert Geirhos
- Neural Information Processing Group, University of Tübingen, Tübingen, Germany
- https://orcid.org/0000-0001-7698-3187
- Felix A Wichmann
- Neural Information Processing Group, University of Tübingen, Tübingen, Germany
- https://orcid.org/0000-0002-2592-634X
35
Kay K, Bonnen K, Denison RN, Arcaro MJ, Barack DL. Tasks and their role in visual neuroscience. Neuron 2023; 111:1697-1713. [PMID: 37040765 DOI: 10.1016/j.neuron.2023.03.022] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 03/13/2023] [Accepted: 03/15/2023] [Indexed: 04/13/2023]
Abstract
Vision is widely used as a model system to gain insights into how sensory inputs are processed and interpreted by the brain. Historically, careful quantification and control of visual stimuli have served as the backbone of visual neuroscience. There has been less emphasis, however, on how an observer's task influences the processing of sensory inputs. Motivated by diverse observations of task-dependent activity in the visual system, we propose a framework for thinking about tasks, their role in sensory processing, and how we might formally incorporate tasks into our models of vision.
Affiliation(s)
- Kendrick Kay
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA.
- Kathryn Bonnen
- School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Rachel N Denison
- Department of Psychological and Brain Sciences, Boston University, Boston, MA 02215, USA
- Mike J Arcaro
- Department of Psychology, University of Pennsylvania, Philadelphia, PA 19146, USA
- David L Barack
- Departments of Neuroscience and Philosophy, University of Pennsylvania, Philadelphia, PA 19146, USA
36
Yovel G, Grosbard I, Abudarham N. Deep learning models challenge the prevailing assumption that face-like effects for objects of expertise support domain-general mechanisms. Proc Biol Sci 2023; 290:20230093. [PMID: 37161322 PMCID: PMC10170201 DOI: 10.1098/rspb.2023.0093] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Accepted: 04/04/2023] [Indexed: 05/11/2023] Open
Abstract
The question of whether task performance is best achieved by domain-specific or domain-general processing mechanisms is fundamental for both artificial and biological systems. This question has generated a fierce debate in the study of expert object recognition. Because humans are experts in face recognition, face-like neural and cognitive effects for objects of expertise were considered support for domain-general mechanisms. However, effects of domain, experience, and level of categorization are confounded in human studies, which may lead to erroneous inferences. To overcome these limitations, we trained deep learning algorithms on different domains (objects, faces, birds) and levels of categorization (basic, sub-ordinate, individual), matched for amount of experience. Like humans, the models generated a larger inversion effect for faces than for objects. Importantly, a face-like inversion effect was found for individual-based categorization of non-faces (birds) but only in a network specialized for that domain. Thus, contrary to prevalent assumptions, face-like effects for objects of expertise do not support domain-general mechanisms but may originate from domain-specific mechanisms. More generally, we show how deep learning algorithms can be used to dissociate factors that are inherently confounded in the natural environment of biological organisms to test hypotheses about their isolated contributions to cognition and behaviour.
Affiliation(s)
- Galit Yovel
- School of Psychological Sciences, Tel Aviv University, Tel Aviv 69987, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 69987, Israel
- Idan Grosbard
- School of Psychological Sciences, Tel Aviv University, Tel Aviv 69987, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 69987, Israel
- Naphtali Abudarham
- School of Psychological Sciences, Tel Aviv University, Tel Aviv 69987, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 69987, Israel
37
Kretch KS, Marcinowski EC, Hsu LY, Koziol NA, Harbourne RT, Lobo MA, Dusing SC. Opportunities for learning and social interaction in infant sitting: Effects of sitting support, sitting skill, and gross motor delay. Dev Sci 2023; 26:e13318. [PMID: 36047385 PMCID: PMC10544757 DOI: 10.1111/desc.13318] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 07/21/2022] [Accepted: 08/15/2022] [Indexed: 12/15/2022]
Abstract
The development of independent sitting changes everyday opportunities for learning and has cascading effects on cognitive and language development. Prior to independent sitting, infants experience the sitting position with physical support from caregivers. Why does supported sitting not provide the same input for learning that is experienced in independent sitting? This question is especially relevant for infants with gross motor delay, who require support in sitting for many months after typically developing infants sit independently. We observed infants with typical development (n = 34, ages 4-7 months) and infants with gross motor delay (n = 128, ages 7-16 months) in early stages of sitting development, and their caregivers, in a dyadic play observation. We predicted that infants who required caregiver support for sitting would spend more time facing away from the caregiver and less time contacting objects than infants who could sit independently. We also predicted that caregivers of supported sitters would spend less time contacting objects because their hands would be full supporting their infants. Our first two hypotheses were confirmed; however, caregivers spent surprisingly little time using both hands to provide support, and caregivers of supported sitters spent more time contacting objects than caregivers of independent sitters. Similar patterns were seen in the group of typically developing infants and the infants with motor delay. Our findings suggest that independent sitting and supported sitting provide qualitatively distinct experiences with different implications for social interaction and learning opportunities.
HIGHLIGHTS:
- During seated free play, supported sitters spent more time facing away from their caregivers and less time handling objects than independent sitters.
- Caregivers who spent more time supporting infants with both hands spent less time handling objects; however, caregivers mostly supported infants with one or no hands.
- A continuous measure of sitting skill did not uniquely contribute to these behaviors beyond the effect of binary sitting support (supported vs. independent sitter).
- The pattern of results was similar for typically developing infants and infants with gross motor delay, despite differences in age.
Affiliation(s)
- Kari S. Kretch
- Division of Biokinesiology and Physical Therapy, University of Southern California
- Lin-Ya Hsu
- Division of Physical Therapy, University of Washington
- Natalie A. Koziol
- Nebraska Center for Research on Children, Youth, Families and Schools, University of Nebraska-Lincoln
- Regina T. Harbourne
- Physical Therapy Department, Rangos School of Health Sciences, Duquesne University
- Stacey C. Dusing
- Division of Biokinesiology and Physical Therapy, University of Southern California
38
Ko ES, Abu-Zhaya R, Kim ES, Kim T, On KW, Kim H, Zhang BT, Seidl A. Mothers' use of touch across infants' development and its implications for word learning: Evidence from Korean dyadic interactions. Infancy 2023; 28:597-618. [PMID: 36757022 PMCID: PMC10085827 DOI: 10.1111/infa.12532] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Revised: 01/05/2023] [Accepted: 01/15/2023] [Indexed: 02/10/2023]
Abstract
Caregivers' touches that occur alongside words and utterances could aid in the detection of word/utterance boundaries and the mapping of word forms to word meanings. We examined changes in caregivers' use of touches with their speech directed to infants using a multimodal cross-sectional corpus of 35 Korean mother-child dyads across three age groups of infants (8, 14, and 27 months). We tested the hypothesis that caregivers' frequency and use of touches with speech change with infants' development. Results revealed that the frequency of word/utterance-touch alignment as well as word + touch co-occurrence is highest in speech addressed to the youngest group of infants. Thus, this study provides support for the hypothesis that caregivers' use of touch during dyadic interactions is sensitive to infants' age in a way similar to caregivers' use of speech alone and could provide cues useful to infants' language learning at critical points in early development.
Affiliation(s)
- Eon-Suk Ko
- Department of English Language and Literature, Chosun University
- Eun-Sol Kim
- Department of Computer Science, Hanyang University
- Hyunji Kim
- Department of English Language and Literature, Chosun University
- Byoung-Tak Zhang
- Department of Computer Science and Engineering & SNU Artificial Intelligence Institute, Seoul National University
- Amanda Seidl
- Department of Speech, Language, and Hearing Sciences, Purdue University
39
Geangu E, Vuong QC. Seven-months-old infants show increased arousal to static emotion body expressions: Evidence from pupil dilation. Infancy 2023. [PMID: 36917082 DOI: 10.1111/infa.12535] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Revised: 12/23/2022] [Accepted: 02/10/2023] [Indexed: 03/16/2023]
Abstract
Human body postures provide perceptual cues that can be used to discriminate and recognize emotions. It was previously found that 7-month-olds' fixation patterns discriminated fear from other emotion body expressions, but it is not clear whether they also process the emotional content of those expressions. The emotional content of visual stimuli can increase arousal level, resulting in pupil dilations. To provide evidence that infants also process the emotional content of expressions, we analyzed variations in pupil size in response to emotion stimuli. Forty-eight 7-month-old infants viewed adult body postures expressing anger, fear, happiness, and neutral expressions, while their pupil size was measured. There was a significant emotion effect between 1040 and 1640 ms after image onset, when fear elicited larger pupil dilations than neutral expressions. A similar trend was found for anger expressions. Our results suggest that infants have increased arousal to negative-valence body expressions. Thus, in combination with previous fixation results, the pupil data show that infants as young as 7 months can perceptually discriminate static body expressions and process the emotional content of those expressions. The results extend information about infant processing of emotion expressions conveyed through other means (e.g., faces).
Affiliation(s)
- Elena Geangu
- Department of Psychology, University of York, York, UK
- Quoc C Vuong
- Biosciences Institute and School of Psychology, Newcastle University, Newcastle upon Tyne, UK
40
Looking at faces in the wild. Sci Rep 2023; 13:783. [PMID: 36646709 PMCID: PMC9842722 DOI: 10.1038/s41598-022-25268-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2022] [Accepted: 11/28/2022] [Indexed: 01/18/2023] Open
Abstract
Faces are key to everyday social interactions, but our understanding of social attention is based on experiments that present images of faces on computer screens. Advances in wearable eye-tracking devices now enable studies in unconstrained natural settings, but this approach has been limited by manual coding of fixations. Here we introduce an automatic 'dynamic region of interest' approach that registers eye-fixations to bodies and faces seen while a participant moves through the environment. We show that just 14% of fixations are to faces of passersby, contrasting with prior screen-based studies that suggest faces automatically capture visual attention. We also demonstrate the potential for this new tool to help understand differences in individuals' social attention, and the content of their perceptual exposure to other people. Together, this can form the basis of a new paradigm for studying social attention 'in the wild' that opens new avenues for theoretical, applied and clinical research.
41
Yates TS, Ellis CT, Turk‐Browne NB. Face processing in the infant brain after pandemic lockdown. Dev Psychobiol 2023; 65:e22346. [PMID: 36567649 PMCID: PMC9877889 DOI: 10.1002/dev.22346] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Revised: 09/13/2022] [Accepted: 10/10/2022] [Indexed: 12/14/2022]
Abstract
The role of visual experience in the development of face processing has long been debated. We present a new angle on this question through a serendipitous study that cannot easily be repeated. Infants viewed short blocks of faces during fMRI in a repetition suppression task. The same identity was presented multiple times in half of the blocks (repeat condition) and different identities were presented once each in the other half (novel condition). In adults, the fusiform face area (FFA) tends to show greater neural activity for novel versus repeat blocks in such designs, suggesting that it can distinguish same versus different face identities. As part of an ongoing study, we collected data before the COVID-19 pandemic and after an initial local lockdown was lifted. The resulting sample of 12 infants (9-24 months) was divided equally into pre- and post-lockdown groups with matching ages and data quantity/quality. The groups had strikingly different FFA responses: pre-lockdown infants showed repetition suppression (novel > repeat), whereas post-lockdown infants showed the opposite (repeat > novel), often referred to as repetition enhancement. These findings provide speculative evidence that altered visual experience during the lockdown, or other correlated environmental changes, may have affected face processing in the infant brain.
Affiliation(s)
- Cameron T. Ellis
- Department of Psychology, Stanford University, Stanford, California, USA
- Nicholas B. Turk‐Browne
- Department of Psychology, Yale University, New Haven, Connecticut, USA; Wu Tsai Institute, Yale University, New Haven, Connecticut, USA
42
Emotion is perceived accurately from isolated body parts, especially hands. Cognition 2023; 230:105260. [PMID: 36058103 DOI: 10.1016/j.cognition.2022.105260] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 08/16/2022] [Accepted: 08/17/2022] [Indexed: 11/21/2022]
Abstract
Body posture and configuration provide important visual cues about the emotion states of other people. We know that bodily form is processed holistically; however, emotion recognition may depend on different mechanisms; certain body parts, such as the hands, may be especially important for perceiving emotion. This study therefore compared participants' emotion recognition performance when shown images of full bodies, or of isolated hands, arms, heads and torsos. Across three experiments, emotion recognition accuracy was above chance for all body parts. While emotions were recognized most accurately from full bodies, recognition performance from the hands was more accurate than for other body parts. Representational similarity analysis further showed that the pattern of errors for the hands was related to that for full bodies. Performance was reduced when stimuli were inverted, showing a clear body inversion effect. The high performance for hands was not due only to the fact that there are two hands, as performance remained well above chance even when just one hand was shown. These results demonstrate that emotions can be decoded from body parts. Furthermore, certain features, such as the hands, are more important to emotion perception than others.
STATEMENT OF RELEVANCE: Successful social interaction relies on accurately perceiving emotional information from others. Bodies provide an abundance of emotion cues; however, the way in which emotional bodies and body parts are perceived is unclear. We investigated this perceptual process by comparing emotion recognition for body parts with that for full bodies. Crucially, we found that while emotions were most accurately recognized from full bodies, emotions were also classified accurately when images of isolated hands, arms, heads and torsos were seen. Of the body parts shown, emotion recognition from the hands was most accurate. Furthermore, shared patterns of emotion classification for hands and full bodies suggested that emotion recognition mechanisms are shared for full bodies and body parts. That the hands are key to emotion perception is important evidence in its own right. It could also be applied to interventions for individuals who find it difficult to read emotions from faces and bodies.
43
Tsurumi S, Kanazawa S, Yamaguchi MK, Kawahara JI. Development of upper visual field bias for faces in infants. Dev Sci 2023; 26:e13262. [PMID: 35340093 PMCID: PMC10078383 DOI: 10.1111/desc.13262] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Revised: 03/11/2022] [Accepted: 03/22/2022] [Indexed: 12/15/2022]
Abstract
The spatial location of the face and body seen in daily life influences human perception and recognition. This contextual effect of spatial locations suggests that daily experience affects how humans visually process the face and body. However, it remains unclear whether this effect is caused by experience or by innate neural pathways. To address this issue, we examined the development of visual field asymmetry for face processing, in which faces in the upper visual field are processed preferentially compared to those in the lower visual field. We found that a developmental change occurred between 6 and 7 months. Older infants aged 7-8 months showed a bias toward faces in the upper visual field, similar to adults, but younger infants aged 5-6 months showed no such visual field bias. Furthermore, older infants preferentially memorized faces in the upper visual field, rather than in the lower visual field. These results suggest that visual field asymmetry is acquired through development, and might be caused by the learning of spatial location in daily experience.
Affiliation(s)
- Shuma Tsurumi
- Department of Psychology, Chuo University, Hachioji, Tokyo, Japan; Japan Society for the Promotion of Science, Chiyoda-ku, Tokyo, Japan
- So Kanazawa
- Department of Psychology, Japan Women's University, Bunkyo-ku, Tokyo, Japan
44
Holden E, Buryn-Weitzel JC, Atim S, Biroch H, Donnellan E, Graham KE, Hoffman M, Jurua M, Knapper CV, Lahiff NJ, Marshall S, Paricia J, Tusiime F, Wilke C, Majid A, Slocombe KE. Maternal attitudes and behaviours differentially shape infant early life experience: A cross cultural study. PLoS One 2022; 17:e0278378. [PMID: 36542635 PMCID: PMC9770339 DOI: 10.1371/journal.pone.0278378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Accepted: 11/15/2022] [Indexed: 12/24/2022] Open
Abstract
Early life environments afford infants a variety of learning opportunities, and caregivers play a fundamental role in shaping infant early life experience. Variation in maternal attitudes and parenting practices is likely to be greater between than within cultures. However, there is limited cross-cultural work characterising how early life environment differs across populations. We examined the early life environment of infants from two cultural contexts where attitudes towards parenting and infant development were expected to differ: in a group of 53 mother-infant dyads in the UK and 44 mother-infant dyads in Uganda. Participants were studied longitudinally from when infants were 3 to 15 months old. Questionnaire data revealed that the Ugandan mothers had more relational attitudes towards parenting than the mothers from the UK, who had more autonomous parenting attitudes. Using questionnaires and observational methods, we examined whether infant development and experience aligned with maternal attitudes. We found the Ugandan infants experienced a more relational upbringing than the UK infants, with Ugandan infants receiving more distributed caregiving, more body contact with their mothers, and more proximity to mothers at night. Ugandan infants also showed earlier physical development compared to UK infants. Contrary to our expectations, however, Ugandan infants were not in closer proximity to their mothers during the day, and did not have more people in proximity or more partners for social interaction compared to UK infants. In addition, when we examined attitudes towards specific behaviours, mothers' attitudes rarely predicted infant experience in related contexts. Taken together, our findings highlight the importance of measuring behaviour, rather than extrapolating expected behaviour based on attitudes alone. We found infants' early life environment varies cross-culturally in many important ways and future research should investigate the consequences of these differences for later development.
Affiliation(s)
- Eve Holden
- Department of Psychology, University of York, York, United Kingdom
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, United Kingdom
- Santa Atim
- Budongo Conservation Field Station, Nyabyeya, Uganda
- Hellen Biroch
- Budongo Conservation Field Station, Nyabyeya, Uganda
- Ed Donnellan
- Department of Psychology, University of York, York, United Kingdom
- Department of Psychology, University College London, London, United Kingdom
- Kirsty E. Graham
- Department of Psychology, University of York, York, United Kingdom
- School of Psychology and Neuroscience, University of St Andrews, St Andrews, United Kingdom
- Maggie Hoffman
- School of Human Evolution and Social Change and Institute of Human Origins, Arizona State University, Tempe, Arizona, United States of America
- Michael Jurua
- Budongo Conservation Field Station, Nyabyeya, Uganda
- Nicole J. Lahiff
- Department of Psychology, University of York, York, United Kingdom
- Sophie Marshall
- Department of Psychology, University of York, York, United Kingdom
- Claudia Wilke
- Department of Psychology, University of York, York, United Kingdom
- Asifa Majid
- Department of Psychology, University of York, York, United Kingdom
45
Keenaghan S, Polaskova M, Thurlbeck S, Kentridge RW, Cowie D. Alice in Wonderland: The effects of body size and movement on children's size perception and body representation in virtual reality. J Exp Child Psychol 2022; 224:105518. [PMID: 35964343 DOI: 10.1016/j.jecp.2022.105518] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Revised: 06/11/2022] [Accepted: 07/07/2022] [Indexed: 11/26/2022]
Abstract
Previous work shows that in adults, illusory embodiment of a virtual avatar can be induced using congruent visuomotor cues. Furthermore, embodying different-sized avatars influences adults' perception of their environment's size. This study (N = 92) investigated whether children are also susceptible to such embodiment and size illusions. Adults and 5-year-old children viewed a first-person perspective of different-sized avatars moving either congruently or incongruently with their own body. Participants rated their feelings of embodiment over the avatar and also estimated the sizes of their body and objects in the environment. Unlike adults, children embodied the avatar regardless of visuomotor congruency. Both adults and children freely embodied different-sized avatars, and this affected their size perception in the surrounding virtual environment; they felt that objects were larger when embodying a small body and smaller when embodying a large body. In addition, children felt that their body had grown in the large-body condition. These findings have important implications for both our theoretical understanding of own-body representation and our knowledge of perception in virtual environments.
Affiliation(s)
- Marie Polaskova
- Department of Psychology, University of Durham, Durham DH1 3LE, UK
- Simon Thurlbeck
- Department of Psychology, University of Durham, Durham DH1 3LE, UK
- Robert W Kentridge
- Department of Psychology, University of Durham, Durham DH1 3LE, UK; Azrieli Program in Mind, Brain & Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, Ontario M5G 1M1, Canada
- Dorothy Cowie
- Department of Psychology, University of Durham, Durham DH1 3LE, UK.
46
Cabral L, Zubiaurre-Elorza L, Wild CJ, Linke A, Cusack R. Anatomical correlates of category-selective visual regions have distinctive signatures of connectivity in neonates. Dev Cogn Neurosci 2022; 58:101179. [PMID: 36521345 PMCID: PMC9768242 DOI: 10.1016/j.dcn.2022.101179] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 11/15/2022] [Accepted: 11/21/2022] [Indexed: 11/25/2022] Open
Abstract
The ventral visual stream is shaped during development by innate proto-organization within the visual system, such as the strong input from the fovea to the fusiform face area. In adults, category-selective regions have distinct signatures of connectivity to brain regions beyond the visual system, likely reflecting cross-modal and motoric associations. We tested whether this long-range connectivity is part of the innate proto-organization, or whether it develops with postnatal experience, by using diffusion-weighted imaging to characterize the connectivity of anatomical correlates of category-selective regions in neonates (N = 445), 1- to 9-month-old infants (N = 11), and adults (N = 14). Using the HCP data, we identified face- and place-selective regions and a third intermediate region with a distinct profile of selectivity. Using linear classifiers, these regions were found to have distinctive connectivity at birth, to other regions in the visual system and to those outside of it. The results support an extended proto-organization that includes long-range connectivity that shapes, and is shaped by, experience-dependent development.
Affiliation(s)
- Laura Cabral
- Department of Radiology, University of Pittsburgh, Pittsburgh 15224, PA, USA.
- Leire Zubiaurre-Elorza
- Department of Psychology, Faculty of Health Sciences, University of Deusto, Bilbao 48007, Spain
- Conor J Wild
- Western Institute for Neuroscience, Western University, London, ON N6A 3K7, Canada; Department of Physiology and Pharmacology, Western University, London, ON N6A 3K7, Canada
- Annika Linke
- Brain Development Imaging Laboratories, San Diego State University, San Diego 92120, CA, USA
- Rhodri Cusack
- Trinity College Institute of Neuroscience, Trinity College Dublin, College Green, Dublin 2, Ireland
47
Manzi F, Ishikawa M, Di Dio C, Itakura S, Kanda T, Ishiguro H, Massaro D, Marchetti A. Infants’ Prediction of Humanoid Robot’s Goal-Directed Action. Int J Soc Robot 2022. [DOI: 10.1007/s12369-022-00941-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Several studies have shown that infants anticipate human goal-directed actions, but not those of robots. However, studies focusing on robots' goal-directed actions have mainly analyzed the effect of mechanical arms on infants' attention. To date, infants' prediction of goal-directed actions has not been studied when the agent is a humanoid robot. Given this lack of evidence in infancy research, the present study analyzes infants' action anticipation of both a human's and a humanoid robot's goal-directed action. Data were acquired from thirty 17-month-old infants, who watched four video clips in which either a human or a humanoid robot performed a goal-directed action, i.e., reaching a target. Infants' looking behavior was measured through the eye-tracking technique. The results showed that infants anticipated the goal-directed action of both the human and the robot, and there were no differences in anticipatory gaze behavior between the two agents. Furthermore, the findings indicated different attentional patterns for the human and the robot, showing greater attention paid to the robot's face than to the human's face. Overall, the results suggest that 17-month-old infants may also infer a humanoid robot's underlying action goals.
48
Belteki Z, van den Boomen C, Junge C. Face-to-face contact during infancy: How the development of gaze to faces feeds into infants' vocabulary outcomes. Front Psychol 2022; 13:997186. [PMID: 36389540 PMCID: PMC9650530 DOI: 10.3389/fpsyg.2022.997186] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Accepted: 10/03/2022] [Indexed: 08/10/2023] Open
Abstract
Infants acquire their first words through interactions with social partners. In the first year of life, infants receive a high frequency of visual and auditory input from faces, making faces a potentially strong social cue in facilitating word-to-world mappings. In this position paper, we review how and when infant gaze to faces is likely to support their subsequent vocabulary outcomes. We assess the relevance of infant gaze to faces selectively, in three domains: infant gaze to different features within a face (that is, eyes and mouth); then to faces (compared to objects); and finally to more socially relevant types of faces. We argue that infant gaze to faces could scaffold vocabulary construction, but its relevance may be impacted by the developmental level of the infant and the type of task with which they are presented. Gaze to faces proves relevant to vocabulary, as gazes to eyes could inform about the communicative nature of the situation or about the labeled object, while gazes to the mouth could improve word processing, all of which are key cues to highlighting word-to-world pairings. We also identify gaps in the literature regarding how infants' gazes to faces (versus objects) or to different types of faces relate to vocabulary outcomes. An important direction for future research will be to fill these gaps to better understand the social factors that influence infant vocabulary outcomes.
49
Franchak JM, Kadooka K. Age differences in orienting to faces in dynamic scenes depend on face centering, not visual saliency. Infancy 2022; 27:1032-1051. [PMID: 35932474 DOI: 10.1111/infa.12492] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
The current study investigated how infants (6-24 months), children (2-12 years), and adults differ in how visual cues (visual saliency and centering) guide their attention to faces in videos. We report a secondary analysis of Kadooka and Franchak (2020), in which observers' eye movements were recorded during viewing of television clips containing a variety of faces. For every face on every video frame, we calculated its visual saliency (based on both static and dynamic image features) and calculated how close the face was to the center of the image. Results revealed that participants of every age looked more often at each face when it was more salient compared to less salient. In contrast, centering did not increase the likelihood that infants looked at a given face, but in later childhood and adulthood, centering became a stronger cue for face looking. A control analysis determined that the age-related change in centering was specific to face looking; participants of all ages were more likely to look at the center of the image, and this center bias did not change with age. The implications for using videos in educational and diagnostic contexts are discussed.
50
Mendoza JK, Fausey CM. Everyday Parameters for Episode-to-Episode Dynamics in the Daily Music of Infancy. Cogn Sci 2022; 46:e13178. [PMID: 35938844 PMCID: PMC9542518 DOI: 10.1111/cogs.13178] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Revised: 04/12/2022] [Accepted: 05/23/2022] [Indexed: 11/26/2022]
Abstract
Experience-dependent change pervades early human development. Though trajectories of developmental change have been well charted in many domains, the episode-to-episode schedules of experiences on which they are hypothesized to depend have not. Here, we took up this issue in a domain known to be governed in part by early experiences: music. Using a corpus of longform audio recordings, we parameterized the daily schedules of music encountered by 35 infants ages 6-12 months. We discovered that everyday music episodes, as well as the interstices between episodes, typically persisted less than a minute, with most daily schedules also including some very extended episodes and interstices. We also discovered that infants encountered music episodes in a bursty rhythm, rather than a periodic or random rhythm, over the day. These findings join a suite of recent discoveries from everyday vision, motor, and language that expand our imaginations beyond artificial learning schedules and enable theorists to model the history-dependence of developmental process in ways that respect everyday sensory histories. Future theories about how infants build knowledge across multiple episodes can now be parameterized using these insights from infants' everyday lives.