1.
Zhang Y, Martinez-Cedillo AP, Mason HT, Vuong QC, Garcia-de-Soria MC, Mullineaux D, Knight MI, Geangu E. An automatic sustained attention prediction (ASAP) method for infants and toddlers using wearable device signals. Sci Rep 2025; 15:13298. PMID: 40247023; PMCID: PMC12006380; DOI: 10.1038/s41598-025-96794-x.
Abstract
Sustained attention (SA) is a critical cognitive ability that emerges in infancy and affects various aspects of development. Research on SA typically occurs in lab settings, which may not reflect infants' real-world experiences. Infant wearable technology can collect multimodal data in natural environments, including physiological signals for measuring SA. Here we introduce an automatic sustained attention prediction (ASAP) method that harnesses electrocardiogram (ECG) and accelerometer (Acc) signals. Data from 75 infants (6 to 36 months) were recorded during different activities, with some activities emulating those occurring in the natural environment (i.e., free play). Human coders annotated the ECG data for SA periods validated by fixation data. ASAP was trained on temporal and spectral features from the ECG and Acc signals to detect SA, performing consistently across age groups. To demonstrate ASAP's applicability, we investigated the relationship between SA and two perceptual features, saliency and clutter, measured from egocentric free-play videos. Results showed that saliency in infants' and toddlers' views increased during attention periods and decreased with age for attention but not inattention. We observed no differences between ASAP attention detection and human-coded SA periods, demonstrating that ASAP effectively detects SA in infants during free play. Coupled with wearable sensors, ASAP provides unprecedented opportunities for studying infant development in real-world settings.
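The "temporal and spectral features" the abstract mentions can be illustrated with a minimal sketch. Everything below (function name, the particular features, the 0.5-5 Hz band) is an illustrative assumption, not the paper's actual pipeline; it only shows the kind of heart-rate-variability and movement features such a classifier might consume.

```python
import numpy as np

def ecg_acc_features(ibi_s, acc, fs_acc=50.0):
    """Toy temporal/spectral features from wearable signals.

    ibi_s : 1-D array of inter-beat intervals (seconds), derived from ECG.
    acc   : 1-D array of accelerometer magnitude samples at fs_acc Hz.
    Feature names are illustrative, not the published feature set.
    """
    ibi = np.asarray(ibi_s, dtype=float)
    hr = 60.0 / ibi                           # instantaneous heart rate (bpm)
    diffs = np.diff(ibi)
    feats = {
        "hr_mean": hr.mean(),                 # temporal: mean heart rate
        "sdnn": ibi.std(ddof=1),              # temporal: overall IBI variability
        "rmssd": np.sqrt(np.mean(diffs ** 2)) # temporal: beat-to-beat change
        if diffs.size else 0.0,
    }
    # Spectral: accelerometer power in a low-frequency band, a rough
    # proxy for gross body movement during an epoch.
    acc = np.asarray(acc, dtype=float)
    spec = np.abs(np.fft.rfft(acc - acc.mean())) ** 2
    freqs = np.fft.rfftfreq(acc.size, d=1.0 / fs_acc)
    band = (freqs >= 0.5) & (freqs < 5.0)
    feats["acc_band_power"] = spec[band].sum() / acc.size
    return feats
```

Feature vectors like this, computed per epoch, would then be fed to any standard classifier trained against the human-coded SA labels.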
Affiliation(s)
- Yisi Zhang
- Department of Psychological and Cognitive Sciences, Tsinghua University, Beijing, 100084, People's Republic of China
- A Priscilla Martinez-Cedillo
- Department of Psychology, University of York, York, YO10 5DD, England
- Department of Psychology, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, England
- Harry T Mason
- School of Physics, Engineering and Technology, University of York, York, YO10 5DD, England
- Bristol Medical School, University of Bristol, Oakfield House, Bristol, BS8 2BN, England
- Quoc C Vuong
- Bioscience Institute, Newcastle University, Newcastle Upon Tyne, NE1 7RU, England
- School of Psychology, Newcastle University, Newcastle Upon Tyne, NE1 7RU, England
- M Carmen Garcia-de-Soria
- Department of Psychology, University of York, York, YO10 5DD, England
- Department of Psychology, University of Aberdeen, Aberdeen, UK
- David Mullineaux
- Department of Mathematics, University of York, York, YO10 5DD, England
- Marina I Knight
- Department of Mathematics, University of York, York, YO10 5DD, England
- Elena Geangu
- Department of Psychology, University of York, York, YO10 5DD, England
2.
Fu X, Franchak JM, MacNeill LA, Gunther KE, Borjon JI, Yurkovic-Harding J, Harding S, Bradshaw J, Pérez-Edgar KE. Implementing mobile eye tracking in psychological research: A practical guide. Behav Res Methods 2024; 56:8269-8288. PMID: 39147949; PMCID: PMC11525247; DOI: 10.3758/s13428-024-02473-6.
Abstract
Eye tracking provides direct, temporally and spatially sensitive measures of eye gaze. It can capture visual attention patterns from infancy through adulthood. However, commonly used screen-based eye tracking (SET) paradigms are limited in their depiction of how individuals process information as they interact with the environment in "real life". Mobile eye tracking (MET) records participant-perspective gaze in the context of active behavior. Recent technological developments in MET hardware enable researchers to capture egocentric vision as early as infancy and across the lifespan. However, challenges remain in MET data collection, processing, and analysis. The present paper aims to provide an introduction and practical guide for researchers starting out in the field, to facilitate the use of MET in psychological research with a wide range of age groups. First, we provide a general introduction to MET. Next, we briefly review MET studies in adults and children that provide new insights into attention and its roles in cognitive and socioemotional functioning. We then discuss technical issues relating to MET data collection and provide guidelines for data quality inspection, gaze annotations, data visualization, and statistical analyses. Lastly, we conclude by discussing future directions for MET implementation. Open-source programs for MET data quality inspection, data visualization, and analysis are shared publicly.
Affiliation(s)
- Xiaoxue Fu
- Department of Psychology, University of South Carolina, Columbia, SC, USA
- John M Franchak
- Department of Psychology, University of California Riverside, Riverside, CA, USA
- Leigha A MacNeill
- Department of Medical Social Sciences, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Institute for Innovations in Developmental Sciences, Northwestern University, Evanston, IL, USA
- Kelley E Gunther
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, USA
- Jeremy I Borjon
- Department of Psychology, University of Houston, Houston, TX, USA
- Texas Institute for Measurement, Evaluation, and Statistics, University of Houston, Houston, TX, USA
- Texas Center for Learning Disorders, University of Houston, Houston, TX, USA
- Samuel Harding
- Department of Psychology, University of South Carolina, Columbia, SC, USA
- Jessica Bradshaw
- Department of Psychology, University of South Carolina, Columbia, SC, USA
- Koraly E Pérez-Edgar
- Department of Psychology, The Pennsylvania State University, University Park, PA, USA
3.
Franchak JM, Adolph KE. An update of the development of motor behavior. Wiley Interdiscip Rev Cogn Sci 2024; 15:e1682. PMID: 38831670; PMCID: PMC11534565; DOI: 10.1002/wcs.1682.
Abstract
This primer describes research on the development of motor behavior. We focus on infancy, when basic action systems are acquired (posture, locomotion, manual actions, and facial actions), and we adopt a developmental systems perspective to understand the causes and consequences of developmental change. Experience facilitates improvements in motor behavior, and infants accumulate immense amounts of varied everyday experience with all the basic action systems. At every point in development, perception guides behavior by providing feedback about the results of just prior movements and information about what to do next. Across development, new motor behaviors provide new inputs for perception. Thus, motor development opens up new opportunities for acquiring knowledge and acting on the world, instigating cascades of developmental changes in perceptual, cognitive, and social domains. This article is categorized under: Cognitive Biology > Cognitive Development; Psychology > Motor Skill and Performance; Neuroscience > Development.
Affiliation(s)
- John M Franchak
- Department of Psychology, University of California, Riverside, California, USA
- Karen E Adolph
- Department of Psychology, Center for Neural Science, New York University, New York, USA
4.
Franchak JM, Smith L, Yu C. Developmental Changes in How Head Orientation Structures Infants' Visual Attention. Dev Psychobiol 2024; 66:e22538. PMID: 39192662; PMCID: PMC11481040; DOI: 10.1002/dev.22538.
Abstract
Most studies of developing visual attention are conducted using screen-based tasks in which infants move their eyes to select where to look. However, real-world visual exploration entails active movements of both eyes and head to bring relevant areas into view. Thus, relatively little is known about how infants coordinate their eyes and heads to structure their visual experiences. Infants were tested every 3 months from 9 to 24 months as they played with their caregiver and three toys while sitting in a highchair at a table. Infants wore a head-mounted eye tracker that measured eye movement toward each of the visual targets (caregiver's face and toys) and how targets were oriented within the head-centered field of view (FOV). With age, infants increasingly aligned novel toys in the center of their head-centered FOV at the expense of their caregiver's face. Both faces and toys were better centered in view during longer looking events, suggesting that infants of all ages aligned their eyes and head to sustain attention. The bias in infants' head-centered FOV could not be accounted for by manual action: Held toys were more poorly centered compared with non-held toys. We discuss developmental factors (attentional, motoric, cognitive, and social) that may explain why infants increasingly adopted biased viewpoints with age.
Affiliation(s)
- Linda Smith
- Department of Psychological and Brain Sciences, Indiana University
- Chen Yu
- Department of Psychology, University of Texas at Austin
5.
Long B, Goodin S, Kachergis G, Marchman VA, Radwan SF, Sparks RZ, Xiang V, Zhuang C, Hsu O, Newman B, Yamins DLK, Frank MC. The BabyView camera: Designing a new head-mounted camera to capture children's early social and visual environments. Behav Res Methods 2024; 56:3523-3534. PMID: 37656342; DOI: 10.3758/s13428-023-02206-1.
Abstract
Head-mounted cameras have been used in developmental psychology research for more than a decade to provide a rich and comprehensive view of what infants see during their everyday experiences. However, variation between these devices has limited the field's ability to compare results across studies and across labs. Further, the video data captured by these cameras to date have been relatively low-resolution, limiting how well machine learning algorithms can operate over these rich video data. Here, we provide a well-tested and easily constructed design for a head-mounted camera assembly, the BabyView, developed in collaboration with Daylight Design, LLC, a professional product design firm. The BabyView collects high-resolution video, accelerometer, and gyroscope data from children approximately 6-30 months of age via a GoPro camera custom mounted on a soft child-safety helmet. The BabyView also captures a large, portrait-oriented vertical field-of-view that encompasses both children's interactions with objects and with their social partners. We detail our protocols for video data management and for handling sensitive data from home environments. We also provide customizable materials for onboarding families with the BabyView. We hope that these materials will encourage the wide adoption of the BabyView, allowing the field to collect high-resolution data that can link children's everyday environments with their learning outcomes.
Affiliation(s)
- Bria Long
- Department of Psychology, Stanford University, Stanford, CA, USA
- George Kachergis
- Department of Psychology, Stanford University, Stanford, CA, USA
- Samaher F Radwan
- Department of Psychology, Stanford University, Stanford, CA, USA
- Graduate School of Education, Stanford University, Stanford, CA, USA
- Robert Z Sparks
- Department of Psychology, Stanford University, Stanford, CA, USA
- Violet Xiang
- Department of Psychology, Stanford University, Stanford, CA, USA
- Chengxu Zhuang
- Department of Psychology, Stanford University, Stanford, CA, USA
- Oliver Hsu
- Daylight Design, LLC, San Francisco, CA, USA
- Daniel L K Yamins
- Department of Psychology, Stanford University, Stanford, CA, USA
- Department of Computer Science, Stanford University, Stanford, CA, USA
- Michael C Frank
- Department of Psychology, Stanford University, Stanford, CA, USA
6.
Sun L, Francis DJ, Nagai Y, Yoshida H. Early development of saliency-driven attention through object manipulation. Acta Psychol (Amst) 2024; 243:104124. PMID: 38232506; DOI: 10.1016/j.actpsy.2024.104124.
Abstract
In the first years of life, infants progressively develop attention selection skills to gather information from visually cluttered environments. As young as newborns, infants are sensitive to distinct differences in color, orientation, and luminance, which are the components of visual saliency. However, we know little about how saliency-driven attention emerges and develops socially through everyday free-viewing experiences. The present work assessed the saliency change in infants' egocentric scenes and investigated the impact of manual engagement on infant object looking in the interactive context of object play. Thirty parent-infant dyads, including infants in two age groups (younger: 3- to 6-month-olds; older: 9- to 12-month-olds), completed a brief session of object play. Infants' looking behaviors were recorded by head-mounted eye-tracking gear, and both parents' and infants' manual actions on objects were annotated separately for analyses. The present findings revealed distinct attention mechanisms that underlie the hand-eye coordination between parents and infants and within infants during object play: younger infants are predominantly biased toward the visual saliency accompanying the parent's handling actions on the objects; older infants, in contrast, gradually directed more attention to the object itself, regardless of its saliency in view, as they gained more self-generated manual actions. Taken together, the present work highlights the tight coordination between visual experiences and sensorimotor competence and proposes a novel dyadic pathway to sustained attention, in which social sensitivity to parents' hands emerges through saliency-driven attention, preparing infants to focus on, follow, and steadily track moving targets in free-flowing viewing activities.
Affiliation(s)
- Lichao Sun
- Department of Psychology, University of Houston, TX, United States
- David J Francis
- Texas Institute for Measurement, Evaluation, and Statistics, University of Houston, TX, United States
- Yukie Nagai
- International Research Center for Neurointelligence, University of Tokyo, Tokyo, Japan
- Hanako Yoshida
- Department of Psychology, University of Houston, TX, United States
7.
Mendez AH, Yu C, Smith LB. Controlling the input: How one-year-old infants sustain visual attention. Dev Sci 2024; 27:e13445. PMID: 37665124; PMCID: PMC11384333; DOI: 10.1111/desc.13445.
Abstract
Traditionally, the exogenous control of gaze by external saliencies and the endogenous control of gaze by knowledge and context have been viewed as competing systems, with late infancy seen as a period of strengthening top-down control over the vagaries of the input. Here we found that one-year-old infants control sustained attention through head movements that increase the visibility of the attended object. Freely moving one-year-old infants (n = 45) wore head-mounted eye trackers and head motion sensors while exploring sets of toys of the same physical size. The visual size of the objects, a well-documented salience, varied naturally with the infant's moment-to-moment posture and head movements. Sustained attention to an object was characterized by the tight control of head movements that created and then stabilized a visual size advantage for the attended object. The findings show collaboration between exogenous and endogenous attentional systems and suggest new hypotheses about the development of sustained visual attention.
Affiliation(s)
- Andres H Mendez
- CICEA, Universidad de la República, Montevideo, Uruguay
- Institut de Neurociencies, Universitat de Barcelona, Barcelona, Spain
- Chen Yu
- Department of Psychology, University of Texas, Austin, Texas, USA
- Linda B Smith
- Psychological and Brain Sciences, Indiana University, Bloomington, Indiana, USA
8.
Wedasingha N, Samarasinghe P, Senevirathna L, Papandrea M, Puiatti A, Rankin D. Automated anomalous child repetitive head movement identification through transformer networks. Phys Eng Sci Med 2023; 46:1427-1445. PMID: 37814077; DOI: 10.1007/s13246-023-01309-5.
Abstract
The increasing prevalence of behavioral disorders in children is of growing concern within the medical community. There is a consensus that early identification of and intervention for atypical behaviors play a pivotal role in improving outcomes. Due to inadequate facilities and a shortage of medical professionals with specialized expertise, traditional diagnostic methods have been unable to effectively address the rising incidence of behavioral disorders. Hence, there is a need to develop automated approaches for the diagnosis of behavioral disorders in children, to overcome the challenges with traditional methods. The purpose of this study is to develop an automated model capable of analyzing videos to differentiate between typical and atypical repetitive head movements in children. To address problems resulting from the limited availability of child datasets, various learning methods are employed to mitigate these issues. In this work, we present a fusion of transformer networks and non-deterministic finite automata (NFA) techniques that classifies repetitive head movements of a child as typical or atypical based on an analysis of gender, age, and type of repetitive head movement, along with the count, duration, and frequency of each repetitive head movement. Experimentation was carried out with different transfer learning methods to enhance the performance of the model. The experimental results on five datasets (the NIR face dataset, Bosphorus 3D face dataset, ASD dataset, SSBD dataset, and the Head Movements in the Wild dataset) indicate that our proposed model outperformed many state-of-the-art frameworks when distinguishing typical and atypical repetitive head movements in children.
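As a toy illustration of one cue such systems use (the frequency of a repetitive head movement), the sketch below estimates the dominant repetition rate of a head-yaw trace with an FFT. The function name, sampling rate, and single-peak approach are assumptions for illustration only, not the paper's transformer/NFA pipeline.

```python
import numpy as np

def dominant_repetition_rate(yaw_deg, fs=30.0):
    """Estimate the dominant repetition frequency (Hz) of a head-yaw trace.

    yaw_deg : 1-D array of head-yaw angles sampled at fs Hz (e.g., from
    video-based pose estimation). Returns the strongest non-DC frequency.
    """
    x = np.asarray(yaw_deg, dtype=float)
    x = x - x.mean()                     # remove DC offset
    spec = np.abs(np.fft.rfft(x))        # magnitude spectrum
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    k = 1 + np.argmax(spec[1:])          # skip the DC bin, take the peak
    return freqs[k]
```

A downstream rule (or, in the paper, an NFA over movement events) could then flag rates and durations outside a typical range.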
Affiliation(s)
- Nushara Wedasingha
- Faculty of Computing, Sri Lanka Institute of Information Technology, New Kandy Rd, Malabe, 10115, Colombo, Sri Lanka
- Pradeepa Samarasinghe
- Faculty of Computing, Sri Lanka Institute of Information Technology, New Kandy Rd, Malabe, 10115, Colombo, Sri Lanka
- Lasantha Senevirathna
- Faculty of Computing, Sri Lanka Institute of Information Technology, New Kandy Rd, Malabe, 10115, Colombo, Sri Lanka
- Michela Papandrea
- Information Systems and Networking Institute (ISIN), University of Applied Sciences and Arts of Southern Switzerland, Via Pobiette, Manno, 6928, Switzerland
- Alessandro Puiatti
- Institute of Digital Technologies for Personalized Healthcare (MeDiTech), University of Applied Sciences and Arts of Southern Switzerland, Via Pobiette, Manno, 6928, Switzerland
- Debbie Rankin
- School of Computing, Engineering and Intelligent Systems, Ulster University, Northland Road, Derry-Londonderry, BT48 7JL, Northern Ireland, UK
9.
Clerkin EM, Smith LB. Real-world statistics at two timescales and a mechanism for infant learning of object names. Proc Natl Acad Sci U S A 2022; 119:e2123239119. PMID: 35482916; PMCID: PMC9170168; DOI: 10.1073/pnas.2123239119.
Abstract
Infants begin learning the visual referents of nouns before their first birthday. Despite considerable empirical and theoretical effort, little is known about the statistics of the experiences that enable infants to break into object–name learning. We used wearable sensors to collect infant experiences of visual objects and their heard names for 40 early-learned categories. The analyzed data were from one context that occurs multiple times a day and includes objects with early-learned names: mealtime. The statistics reveal two distinct timescales of experience. At the timescale of many mealtime episodes (n = 87), the visual categories were pervasively present, but naming of the objects in each of those categories was very rare. At the timescale of single mealtime episodes, names and referents did cooccur, but each name–referent pair appeared in very few of the mealtime episodes. The statistics are consistent with incremental learning of visual categories across many episodes and the rapid learning of name–object mappings within individual episodes. The two timescales are also consistent with a known cortical learning mechanism for one-episode learning of associations: new information, the heard name, is incorporated into well-established memories, the seen object category, when the new information cooccurs with the reactivation of that slowly established memory.
Affiliation(s)
- Elizabeth M. Clerkin
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405-7007
- Linda B. Smith
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405-7007
- Cognitive Science Program, Indiana University, Bloomington, IN 47405-7007
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
10.
Perkovich E, Sun L, Mire S, Laakman A, Sakhuja U, Yoshida H. What children with and without ASD see: Similar visual experiences with different pathways through parental attention strategies. Autism Dev Lang Impair 2022; 7:23969415221137293. PMID: 36518657; PMCID: PMC9742584; DOI: 10.1177/23969415221137293.
Abstract
Background and aims: Although young children's gaze behaviors in experimental task contexts have been shown to be potential biobehavioral markers relevant to autism spectrum disorder (ASD), we know little about their everyday gaze behaviors. The present study aims (1) to document early gaze behaviors that occur within a live, social interactive context among children with and without ASD and their parents, and (2) to examine how children's and parents' gaze behaviors are related for ASD and typically developing (TD) groups. A head-mounted eye-tracking system was used to record the frequency and duration of a set of gaze behaviors (such as sustained attention [SA] and joint attention [JA]) that are relevant to early cognitive and language development. Methods: Twenty-six parent-child dyads (ASD group = 13, TD group = 13) participated. Children were between the ages of 3 and 8 years old. We placed head-mounted eye trackers on parents and children to record their parent- and child-centered views, and we also recorded their interactive parent-child object play scene from both a wall- and a ceiling-mounted camera. We then annotated the frequency and duration of gaze behaviors (saccades, fixation, SA, and JA) for different regions of interest (object, face, and hands), and attention shifting. Independent group t-tests and ANOVAs were used for group comparisons, and linear regression was used to test the predictiveness of parent gaze behaviors for JA. Results: The present study found no differences in visual experiences between children with and without ASD. Interestingly, however, significant group differences were found for parent gaze behaviors. Compared to parents of ASD children, parents of TD children focused on objects and shifted their attention between objects and their children's faces more. In contrast, parents of ASD children were more likely to shift their attention between their own hands and their children. JA experiences were also predicted differently, depending on the group: among parents of TD children, attention to objects predicted JA, but among parents of ASD children, attention to their children predicted JA. Conclusion: Although no differences were found between the gaze behaviors of autistic and TD children in this study, there were significant group differences in parents' looking behaviors. This suggests potentially differential pathways for the scaffolding effect of parental gaze for ASD children compared with TD children. Implications: The present study revealed the impact of an everyday, socially interactive context on early visual experiences, and points to potentially different pathways by which parental looking behaviors guide the looking behaviors of children with and without ASD. Identifying parental social input relevant to early attention development (e.g., JA) among autistic children has implications for mechanisms that could support the socially mediated attention behaviors documented to facilitate early cognitive and language development, and for the development of parent-mediated interventions for young children with or at risk for ASD. Note: This paper uses a combination of person-first and identity-first language, an intentional decision aligning with comments put forth by Vivanti (2020), recognizing the complexities of known and unknown preferences of those in the larger autism community.
Affiliation(s)
- Lichao Sun
- Department of Psychology, University of Houston, Houston, TX, USA
- Sarah Mire
- Educational Psychology Department, Baylor University, Waco, TX, USA
- Anna Laakman
- Department of Psychological Health and Learning Sciences, University of Houston, Houston, TX, USA
- Urvi Sakhuja
- Department of Psychology, University of Houston, Houston, TX, USA
- Hanako Yoshida
- Department of Psychology, University of Houston, Houston, TX, USA
11.
The infant's view redefines the problem of referential uncertainty in early word learning. Proc Natl Acad Sci U S A 2021; 118:e2107019118. PMID: 34933998; PMCID: PMC8719889; DOI: 10.1073/pnas.2107019118.
Abstract
The learning of first object names is deemed a hard problem due to the uncertainty inherent in mapping a heard name to the intended referent in a cluttered and variable world. However, human infants readily solve this problem. Despite considerable theoretical discussion, relatively little is known about the uncertainty infants face in the real world. We used head-mounted eye tracking during parent-infant toy play and quantified the uncertainty by measuring the distribution of infant attention to the potential referents when a parent named both familiar and unfamiliar toy objects. The results show that infant gaze upon hearing an object name is often directed to a single referent which is equally likely to be a wrong competitor or the intended target. This bimodal gaze distribution clarifies and redefines the uncertainty problem and constrains possible solutions.