1. Engbert R, Funken J, Boll‐Avetisyan N. Toward Dynamical Modeling of Infants' Looking Times. Wiley Interdisciplinary Reviews: Cognitive Science 2025; 16:e70006. PMID: 40411359; PMCID: PMC12102751; DOI: 10.1002/wcs.70006.
Abstract
Analyzing looking times is among the most important behavioral approaches to studying problems such as infant cognition, perception, or language development. However, process-based approaches to the dynamics of infants' looking times are lacking. Here, we propose a new dynamical framework for modeling infant gaze behavior with full account of the microstructure (i.e., saccades and fixations). Our process-based model is illustrated by reproducing inter-individual differences in a developmental study of noun comprehension (Garrison et al. 2020). In our modeling framework, numerical values of model parameters map onto specific cognitive processes (e.g., attention or working memory) involved in gaze control. Because of the general architecture of the mathematical model and our robust procedures in model inference via Bayesian data assimilation, our framework may find applications in other fields of developmental and cognitive sciences.
Affiliation(s)
- Ralf Engbert: Psychology, University of Potsdam, Potsdam, Germany; Research Focus Cognitive Science, University of Potsdam, Potsdam, Germany
- Natalie Boll‐Avetisyan: Research Focus Cognitive Science, University of Potsdam, Potsdam, Germany; Linguistics, University of Potsdam, Potsdam, Germany
2. Zhang Y, Martinez-Cedillo AP, Mason HT, Vuong QC, Garcia-de-Soria MC, Mullineaux D, Knight MI, Geangu E. An automatic sustained attention prediction (ASAP) method for infants and toddlers using wearable device signals. Sci Rep 2025; 15:13298. PMID: 40247023; PMCID: PMC12006380; DOI: 10.1038/s41598-025-96794-x.
Abstract
Sustained attention (SA) is a critical cognitive ability that emerges in infancy and affects various aspects of development. Research on SA typically occurs in lab settings, which may not reflect infants' real-world experiences. Infant wearable technology can collect multimodal data in natural environments, including physiological signals for measuring SA. Here we introduce an automatic sustained attention prediction (ASAP) method that harnesses electrocardiogram (ECG) and accelerometer (Acc) signals. Data from 75 infants (6 to 36 months of age) were recorded during different activities, with some activities emulating those occurring in the natural environment (i.e., free play). Human coders annotated the ECG data for SA periods, which were validated against fixation data. ASAP was trained on temporal and spectral features from the ECG and Acc signals to detect SA, performing consistently across age groups. To demonstrate ASAP's applicability, we investigated the relationship between SA and perceptual features (saliency and clutter) measured from egocentric free-play videos. Results showed that saliency in infants' and toddlers' views increased during attention periods and decreased with age for attention but not inattention. We observed no differences between ASAP attention detection and human-coded SA periods, demonstrating that ASAP effectively detects SA in infants during free play. Coupled with wearable sensors, ASAP provides unprecedented opportunities for studying infant development in real-world settings.
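As a rough illustration of the kind of pipeline the abstract describes, the sketch below computes temporal and spectral features from windowed ECG-derived inter-beat intervals and accelerometer magnitude and feeds them to a classifier. The specific features, window sizes, and random-forest model are illustrative assumptions, not the published ASAP implementation.

```python
# Illustrative sketch only: windowed temporal/spectral features from ECG-derived
# inter-beat intervals (IBI) and accelerometer magnitude, fed to a classifier.
# Feature set, window length, and model are assumptions, not the published ASAP pipeline.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

def window_features(ibi, acc_mag, fs_acc=50.0):
    """Temporal and spectral features for one analysis window."""
    f, pxx = welch(acc_mag, fs=fs_acc, nperseg=min(256, len(acc_mag)))
    return np.array([
        np.mean(ibi), np.std(ibi),            # heart-rate level and variability
        np.sqrt(np.mean(np.diff(ibi) ** 2)),  # RMSSD, short-term HRV
        np.mean(acc_mag), np.std(acc_mag),    # overall movement level
        pxx[(f >= 0.5) & (f <= 5.0)].sum(),   # band-limited motion power
    ])

# Toy data: 200 windows of simulated IBI (s) and accelerometer magnitude samples.
rng = np.random.default_rng(0)
X = np.vstack([window_features(rng.normal(0.45, 0.05, 60),
                               rng.normal(1.0, 0.2, 500)) for _ in range(200)])
y = rng.integers(0, 2, 200)  # placeholder for human-coded sustained-attention labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```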
Affiliation(s)
- Yisi Zhang: Department of Psychological and Cognitive Sciences, Tsinghua University, Beijing, 100084, People's Republic of China
- A Priscilla Martinez-Cedillo: Department of Psychology, University of York, York, YO10 5DD, England; Department of Psychology, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, England
- Harry T Mason: School of Physics, Engineering and Technology, University of York, York, YO10 5DD, England; Bristol Medical School, University of Bristol, Oakfield House, Bristol, BS8 2BN, England
- Quoc C Vuong: Bioscience Institute, Newcastle University, Newcastle Upon Tyne, NE1 7RU, England; School of Psychology, Newcastle University, Newcastle Upon Tyne, NE1 7RU, England
- M Carmen Garcia-de-Soria: Department of Psychology, University of York, York, YO10 5DD, England; Department of Psychology, University of Aberdeen, Aberdeen, UK
- David Mullineaux: Department of Mathematics, University of York, York, YO10 5DD, England
- Marina I Knight: Department of Mathematics, University of York, York, YO10 5DD, England
- Elena Geangu: Department of Psychology, University of York, York, YO10 5DD, England
3. Wilson VAD, Sauppe S, Brocard S, Ringen E, Daum MM, Wermelinger S, Gu N, Andrews C, Isasi-Isasmendi A, Bickel B, Zuberbühler K. Humans and great apes visually track event roles in similar ways. PLoS Biol 2024; 22:e3002857. PMID: 39591401; PMCID: PMC11593759; DOI: 10.1371/journal.pbio.3002857.
Abstract
Human language relies on a rich cognitive machinery, partially shared with other animals. One key mechanism, however, the decomposition of events into causally linked agent-patient roles, has remained elusive, with no known animal equivalent. In humans, agent-patient relations in event cognition drive how languages are processed neurally and how expressions are structured syntactically. We compared visual event tracking between humans and great apes, using stimuli that would elicit causal processing in humans. After accounting for attention to background information, we found similar gaze patterns to agent-patient relations in all species: attention mostly alternated between agents and patients, presumably in order to learn the nature of the event, and occasionally privileged agents under specific conditions. Six-month-old infants, in contrast, did not follow agent-patient relations and attended mostly to background information. These findings raise the possibility that event role tracking, a cognitive foundation of syntax, evolved long before language but requires time and experience to become ontogenetically available.
Affiliation(s)
- Vanessa A. D. Wilson: Department of Comparative Cognition, Institute of Biology, University of Neuchatel, Neuchatel, Switzerland; Department of Comparative Language Science, University of Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland
- Sebastian Sauppe: Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland; Jacobs Center for Productive Youth Development, University of Zurich, Zurich, Switzerland
- Sarah Brocard: Department of Comparative Cognition, Institute of Biology, University of Neuchatel, Neuchatel, Switzerland
- Erik Ringen: Department of Comparative Language Science, University of Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland; NCCR@LiRI Group, Linguistic Research Infrastructure, University of Zurich, Zurich, Switzerland
- Moritz M. Daum: Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland; Jacobs Center for Productive Youth Development, University of Zurich, Zurich, Switzerland
- Stephanie Wermelinger: Department of Psychology, University of Zurich, Zurich, Switzerland; Jacobs Center for Productive Youth Development, University of Zurich, Zurich, Switzerland
- Nianlong Gu: Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland; NCCR@LiRI Group, Linguistic Research Infrastructure, University of Zurich, Zurich, Switzerland
- Caroline Andrews: Department of Comparative Language Science, University of Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland
- Arrate Isasi-Isasmendi: Department of Comparative Language Science, University of Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland
- Balthasar Bickel: Department of Comparative Language Science, University of Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland
- Klaus Zuberbühler: Department of Comparative Cognition, Institute of Biology, University of Neuchatel, Neuchatel, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland
4. Fu X, Platt E, Shic F, Bradshaw J. Infant Social Attention Associated with Elevated Likelihood for Autism Spectrum Disorder: A Multi-Method Comparison. J Autism Dev Disord 2024. PMID: 38678515; PMCID: PMC11911713; DOI: 10.1007/s10803-024-06360-z.
Abstract
PURPOSE The study aimed to compare eye tracking (ET) and manual coding (MC) measures of attention to social and nonsocial information in infants with elevated familial likelihood (EL) of autism spectrum disorder (ASD) and low likelihood of ASD (LL). ET provides a temporally and spatially sensitive tool for measuring gaze allocation. Existing evidence suggests that ET is a promising tool for detecting distinct social attention patterns that may serve as a biomarker for ASD. However, ET is prone to data loss, especially in young EL infants. METHODS To increase evidence for ET as a viable tool for capturing atypical social attention in EL infants, the current prospective, longitudinal study obtained ET and MC measures of social and nonsocial attention in 25 EL and 47 LL infants at several time points between 3 and 24 months of age. RESULTS ET data were obtained with a satisfactory success rate of 95.83%, albeit with a higher degree of data loss compared to MC. Infant age and ASD likelihood status did not impact the extent of ET or MC data loss. There was a significant positive association between the ET and MC measures of attention, and separate analyses of attention using ET and MC measures yielded comparable findings. These analyses indicated group differences (EL vs. LL) in age-related change in attention to social vs. nonsocial information. CONCLUSION Together, the findings support infant ET as a promising approach for identifying very early markers associated with ASD likelihood.
Affiliation(s)
- Xiaoxue Fu: Department of Psychology, University of South Carolina, Columbia, SC, USA
- Emma Platt: Department of Psychology, University of South Carolina, Columbia, SC, USA
- Frederick Shic: Center for Child Health, Behavior and Development, Seattle Children's Research Institute, Seattle, WA, USA; Department of Pediatrics, University of Washington School of Medicine, Seattle, WA, USA
- Jessica Bradshaw: Department of Psychology, University of South Carolina, Columbia, SC, USA
5. Oakes LM, Hayes TR, Klotz SM, Pomaranski KI, Henderson JM. The role of local meaning in infants' fixations of natural scenes. Infancy 2024; 29:284-298. PMID: 38183667; PMCID: PMC10872336; DOI: 10.1111/infa.12582.
Abstract
As infants view visual scenes every day, they must shift their eye gaze and visual attention from location to location, sampling information to process and learn. Like adults, infants' gaze when viewing natural scenes (i.e., photographs of everyday scenes) is influenced by the physical features of the scene image and a general bias to look more centrally in a scene. However, it is unknown how infants' gaze while viewing such scenes is influenced by the semantic content of the scenes. Here, we tested the relative influence of local meaning, controlling for physical salience and center bias, on the eye gaze of 4- to 12-month-old infants (N = 92) as they viewed natural scenes. Overall, infants were more likely to fixate scene regions rated as higher in meaning, indicating that, like adults, the semantic content, or local meaning, of scenes influences where they look. More importantly, the effect of meaning on infant attention increased with age, providing the first evidence for an age-related increase in the impact of local meaning on infants' eye movements while viewing natural scenes.
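One way to formalize the question posed above is to model, for each scene region, whether it was fixated as a function of its rated local meaning while controlling for physical salience and center bias. The sketch below uses a simple logistic regression on simulated region-level data; the grid-based setup, the simulated predictors, and the model are illustrative assumptions rather than the paper's actual analysis.

```python
# Illustrative sketch: does local meaning predict which scene regions an infant
# fixates, once physical salience and center bias are controlled? The region grid,
# simulated predictors, and logistic model are assumptions, not the published analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_regions = 1200                              # e.g., grid cells pooled over scenes
meaning = rng.uniform(0, 1, n_regions)        # rated local meaning of each region
salience = rng.uniform(0, 1, n_regions)       # physical salience of each region
center_dist = rng.uniform(0, 1, n_regions)    # normalized distance from screen center
fixated = rng.integers(0, 2, n_regions)       # 1 if the infant fixated the region

X = StandardScaler().fit_transform(np.column_stack([meaning, salience, center_dist]))
model = LogisticRegression().fit(X, fixated)

# With salience and center distance in the model, the coefficient on meaning indexes
# the unique contribution of local meaning to where the infant looked.
print(dict(zip(["meaning", "salience", "center_dist"], model.coef_[0])))
```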
Affiliation(s)
- Lisa M. Oakes: Department of Psychology, University of California, Davis; Center for Mind and Brain, University of California, Davis
- Shannon M. Klotz: Department of Psychology, University of California, Davis; Center for Mind and Brain, University of California, Davis
- Katherine I. Pomaranski: Department of Psychology, University of California, Davis; Center for Mind and Brain, University of California, Davis
- John M. Henderson: Department of Psychology, University of California, Davis; Center for Mind and Brain, University of California, Davis
6. Zeng G, Simpson EA, Paukner A. Maximizing valid eye-tracking data in human and macaque infants by optimizing calibration and adjusting areas of interest. Behav Res Methods 2024; 56:881-907. PMID: 36890330; DOI: 10.3758/s13428-022-02056-3.
Abstract
Remote eye tracking with automated corneal reflection provides insights into the emergence and development of cognitive, social, and emotional functions in human infants and non-human primates. However, because most eye-tracking systems were designed for use in human adults, the accuracy of eye-tracking data collected in other populations is unclear, as are potential approaches to minimize measurement error. For instance, data quality may differ across species or ages, which are necessary considerations for comparative and developmental studies. Here we examined how the calibration method and adjustments to areas of interest (AOIs) of the Tobii TX300 changed the mapping of fixations to AOIs in a cross-species longitudinal study. We tested humans (N = 119) at 2, 4, 6, 8, and 14 months of age and macaques (Macaca mulatta; N = 21) at 2 weeks, 3 weeks, and 6 months of age. In all groups, we found improvement in the proportion of AOI hits detected as the number of successful calibration points increased, suggesting calibration approaches with more points may be advantageous. Spatially enlarging and temporally prolonging AOIs increased the number of fixation-AOI mappings, suggesting improvements in capturing infants' gaze behaviors; however, these benefits varied across age groups and species, suggesting different parameters may be ideal, depending on the population studied. In sum, to maximize usable sessions and minimize measurement error, eye-tracking data collection and extraction approaches may need adjustments for the age groups and species studied. Doing so may make it easier to standardize and replicate eye-tracking research findings.
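The AOI adjustments evaluated above come down to a small amount of bookkeeping: pad the AOI rectangle spatially and extend its active time window, then recount fixation hits. The sketch below shows that logic with placeholder padding values and a placeholder fixation format, not the study's actual parameters.

```python
# Illustrative sketch: count fixation-AOI hits with optional spatial enlargement
# (pad the rectangle) and temporal prolongation (extend the AOI's active window).
# Padding sizes and the fixation format are placeholder assumptions.
from dataclasses import dataclass

@dataclass
class AOI:
    x0: float
    y0: float
    x1: float
    y1: float
    t_on: float    # ms at which the AOI becomes relevant
    t_off: float   # ms at which it stops being relevant

def aoi_hits(fixations, aoi, pad_px=0.0, pad_ms=0.0):
    """fixations: iterable of (x, y, onset_ms); returns the number of AOI hits."""
    hits = 0
    for x, y, t in fixations:
        in_space = (aoi.x0 - pad_px <= x <= aoi.x1 + pad_px and
                    aoi.y0 - pad_px <= y <= aoi.y1 + pad_px)
        in_time = aoi.t_on - pad_ms <= t <= aoi.t_off + pad_ms
        hits += in_space and in_time
    return hits

face = AOI(600, 200, 900, 500, t_on=0, t_off=3000)
fixations = [(590, 210, 120), (910, 480, 3100), (300, 300, 500)]
print(aoi_hits(fixations, face))                          # strict AOI: 0 hits
print(aoi_hits(fixations, face, pad_px=20, pad_ms=200))   # enlarged + prolonged: 2 hits
```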
Affiliation(s)
- Guangyu Zeng: Department of Psychology, University of Miami, Coral Gables, FL, USA
- Annika Paukner: Department of Psychology, Nottingham Trent University, Nottingham, UK
7. Oakes LM. The cascading development of visual attention in infancy: Learning to look and looking to learn. Current Directions in Psychological Science 2023; 32:410-417. PMID: 38107783; PMCID: PMC10723638; DOI: 10.1177/09637214231178744.
Abstract
The development of visual attention in infancy is typically indexed by where and how long infants look, focusing on changes in alerting, orienting, or attentional control. However, visual attention and looking are both complex systems that are multiply determined. Moreover, infants' visual attention, looking, and learning are intimately connected. Infants learn to look, reflecting cascading effects of changes in attention, the visual system, and motor control, as well as the information infants learn about the world around them. Furthermore, infants' looking behavior provides the input infants use to perceive and learn about the world. Thus, infants look to learn about the world around them. A deeper understanding of development will be gained by appreciating the cascading effects of changes across these intertwined domains.
Affiliation(s)
- Lisa M Oakes: Department of Psychology and Center for Mind and Brain, UC Davis
8. Jing M, Kadooka K, Franchak J, Kirkorian HL. The effect of narrative coherence and visual salience on children's and adults' gaze while watching video. J Exp Child Psychol 2023; 226:105562. PMID: 36257254; DOI: 10.1016/j.jecp.2022.105562.
Abstract
Low-level visual features (e.g., motion, contrast) predict eye gaze during video viewing. The current study investigated the effect of narrative coherence on the extent to which low-level visual salience predicts eye gaze. Eye movements were recorded as 4-year-olds (n = 20) and adults (n = 20) watched a cohesive versus random sequence of video shots from a 4.5-min full vignette from Sesame Street. Overall, visual salience was a stronger predictor of gaze in adults than in children, especially when viewing a random shot sequence. The impact of narrative coherence on children's gaze was limited to the short period of time surrounding cuts to new video shots. The discussion considers potential direct effects of visual salience as well as incidental effects due to overlap between salient features and semantic content. The findings are also discussed in the context of developing video comprehension.
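A common way to quantify how well a salience map predicts gaze, in the spirit of the analysis described above, is normalized scanpath saliency: z-score the map and average its values at fixated locations. The sketch below assumes a frame-level salience map is already computed from motion or contrast features; the NSS scoring choice is an assumption and not necessarily the measure used in the study.

```python
# Illustrative sketch: normalized scanpath saliency (NSS) -- how salient, in
# z-score units, are the pixels viewers actually fixated in a given frame?
# Treating NSS as the predictiveness measure is an assumption of this sketch.
import numpy as np

def nss(salience_map, fixations):
    """salience_map: 2D array; fixations: list of (row, col) pixel indices."""
    z = (salience_map - salience_map.mean()) / (salience_map.std() + 1e-8)
    return float(np.mean([z[r, c] for r, c in fixations]))

rng = np.random.default_rng(2)
frame_salience = rng.random((480, 640))       # stand-in for a motion/contrast map
child_fix = [(240, 320), (250, 310)]          # fixations from a child viewer
adult_fix = [(100, 500), (240, 322)]          # fixations from an adult viewer
print("child NSS:", nss(frame_salience, child_fix))
print("adult NSS:", nss(frame_salience, adult_fix))
```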
Affiliation(s)
- Mengguo Jing: Department of Human Development and Family Studies, University of Wisconsin-Madison, Madison, WI 53705, USA
- Kellan Kadooka: Department of Psychology, University of California, Riverside, Riverside, CA 92521, USA
- John Franchak: Department of Psychology, University of California, Riverside, Riverside, CA 92521, USA
- Heather L Kirkorian: Department of Human Development and Family Studies, University of Wisconsin-Madison, Madison, WI 53705, USA
9. DeBolt MC, Mitsven SG, Pomaranski KI, Cantrell LM, Luck SJ, Oakes LM. A new perspective on the role of physical salience in visual search: Graded effect of salience on infants' attention. Dev Psychol 2023; 59:326-343. PMID: 36355689; PMCID: PMC9905344; DOI: 10.1037/dev0001460.
Abstract
We tested 6- and 8-month-old White and non-White infants (N = 53 total, 28 girls) from Northern California in a visual search task to determine whether a unique item in an otherwise homogeneous display (a singleton) attracts attention because it is a unique singleton and "pops out" in a categorical manner, or whether attention instead varies in a graded manner on the basis of quantitative differences in physical salience. Infants viewed arrays of four or six items; one item was a singleton and the other items were identical distractors (e.g., a single cookie and three identical toy cars). At both ages, infants looked to the singletons first more often, were faster to look at singletons, and looked longer at singletons. However, when a computational model was used to quantify the relative salience of the singleton in each display (which varied widely across the different singleton-distractor combinations), we found a strong, graded effect of physical salience on attention and no evidence that singleton status per se influenced attention. In addition, consistent with other research on attention in infancy, the effect of salience was stronger for 6-month-old infants than for 8-month-old infants. Taken together, these results show that attention-getting and attention-holding in infancy vary continuously with quantitative variations in physical salience rather than depending in a categorical manner on whether an item is unique.
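The graded-salience analysis described above can be illustrated by expressing the singleton's salience relative to the summed salience of all items in the array, which yields a continuous predictor of looking rather than a categorical singleton/non-singleton label. In the sketch below, the salience map, item regions, and ratio definition are illustrative assumptions, not the authors' specific computational model.

```python
# Illustrative sketch: graded (relative) salience of a singleton in a search array,
# computed from a salience map over item regions. The map and the ratio definition
# stand in for the paper's computational salience model; both are assumptions.
import numpy as np

def mean_salience(sal_map, region):
    r0, c0, r1, c1 = region                       # item bounding box in pixels
    return float(sal_map[r0:r1, c0:c1].mean())

def relative_salience(sal_map, singleton, distractors):
    s = mean_salience(sal_map, singleton)
    total = s + sum(mean_salience(sal_map, d) for d in distractors)
    return s / total      # approx. 1 / (number of items) when there is no advantage

rng = np.random.default_rng(3)
sal_map = rng.random((600, 800))                  # stand-in for a model-derived salience map
singleton = (100, 100, 200, 200)                  # e.g., the cookie
distractors = [(100, 400, 200, 500), (350, 100, 450, 200), (350, 400, 450, 500)]
print(f"relative salience of singleton: {relative_salience(sal_map, singleton, distractors):.3f}")
# This continuous value, rather than singleton status per se, is the kind of predictor
# that can be related to infants' first looks, latencies, and look durations.
```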
Affiliation(s)
- Michaela C. DeBolt: Department of Psychology, University of California, Davis; Center for Mind and Brain, University of California, Davis
- Katherine I. Pomaranski: Department of Psychology, University of California, Davis; Center for Mind and Brain, University of California, Davis
- Lisa M. Cantrell: Department of Child Development, California State University, Sacramento
- Steven J. Luck: Department of Psychology, University of California, Davis; Center for Mind and Brain, University of California, Davis
- Lisa M. Oakes: Department of Psychology, University of California, Davis; Center for Mind and Brain, University of California, Davis
10. Anderson EM, Seemiller ES, Smith LB. Scene saliencies in egocentric vision and their creation by parents and infants. Cognition 2022; 229:105256. PMID: 35988453; DOI: 10.1016/j.cognition.2022.105256.
Abstract
Across the lifespan, humans are biased to look first at what is easy to see, with a handful of well-documented visual saliences shaping our attention (e.g., Itti & Koch, 2001). These attentional biases may emerge from the contexts in which moment-to-moment attention occurs, where perceivers and their social partners actively shape bottom-up saliences, moving their bodies and objects to make targets of interest more salient. The goal of the present study was to determine the bottom-up saliences present in infant egocentric images and to provide evidence on the role that infants and their mature social partners play in highlighting targets of interest via these saliences. We examined 968 unique scenes in which an object had purposefully been placed in the infant's egocentric view, drawn from videos created by one-year-old infants wearing a head camera during toy-play with a parent. To understand which saliences mattered in these scenes, we conducted a visual search task, asking participants (n = 156) to find objects in the egocentric images. To connect this to the behaviors of perceivers, we then characterized the saliences of objects placed by infants or parents compared to objects that were otherwise present in the scenes. Our results show that body-centric properties, such as increases in the centering and visual size of the object, as well as decreases in the number of competing objects immediately surrounding it, both predicted faster search time and distinguished placed from unplaced objects. The present results suggest that the bottom-up saliences that can be readily controlled by perceivers and their social partners may most strongly impact our attention. This finding has implications for the functional role of saliences in human vision, their origin, the social structure of perceptual environments, and how the relation between bottom-up and top-down control of attention in these environments may support infant learning.
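The body-centric properties reported above (centering, visual size, and the number of competing objects around a target) are straightforward to compute once each object in an egocentric frame has a bounding box, as in the sketch below; the box format, crowding radius, and frame size here are placeholder assumptions rather than the study's actual measures.

```python
# Illustrative sketch: body-centric saliences of an object in an egocentric frame
# (centering, visual size, nearby competing objects) as predictors of search time.
# Bounding-box annotations, frame size, and the crowding radius are placeholder assumptions.
import numpy as np

FRAME_W, FRAME_H = 640, 480

def centering(box):
    """1 = at image center, 0 = at a corner. box = (x0, y0, x1, y1) in pixels."""
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    d = np.hypot(cx - FRAME_W / 2, cy - FRAME_H / 2)
    return 1 - d / np.hypot(FRAME_W / 2, FRAME_H / 2)

def visual_size(box):
    """Fraction of the frame covered by the object's bounding box."""
    return (box[2] - box[0]) * (box[3] - box[1]) / (FRAME_W * FRAME_H)

def n_competitors(box, other_boxes, radius=120):
    """Number of other objects whose centers fall within `radius` pixels."""
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    return sum(np.hypot((b[0] + b[2]) / 2 - cx, (b[1] + b[3]) / 2 - cy) < radius
               for b in other_boxes)

target = (280, 180, 400, 300)
others = [(50, 50, 120, 120), (300, 320, 380, 400)]
features = [centering(target), visual_size(target), n_competitors(target, others)]
print(features)  # these features would then be regressed on search time and
                 # compared for placed versus unplaced objects
```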
Affiliation(s)
- Linda B Smith: Psychological and Brain Sciences, Indiana University, USA
11. Schlegelmilch K, Wertz AE. Visual segmentation of complex naturalistic structures in an infant eye-tracking search task. PLoS One 2022; 17:e0266158. PMID: 35363809; PMCID: PMC8975119; DOI: 10.1371/journal.pone.0266158.
Abstract
An infant’s everyday visual environment is composed of a complex array of entities, some of which are well integrated into their surroundings. Although infants are already sensitive to some categories in their first year of life, it is not clear which visual information supports their detection of meaningful elements within naturalistic scenes. Here we investigated the impact of image characteristics on 8-month-olds’ search performance using a gaze contingent eye-tracking search task. Infants had to detect a target patch on a background image. The stimuli consisted of images taken from three categories: vegetation, non-living natural elements (e.g., stones), and manmade artifacts, for which we also assessed target background differences in lower- and higher-level visual properties. Our results showed that larger target-background differences in the statistical properties scaling invariance and entropy, and also stimulus backgrounds including low pictorial depth, predicted better detection performance. Furthermore, category membership only affected search performance if supported by luminance contrast. Data from an adult comparison group also indicated that infants’ search performance relied more on lower-order visual properties than adults. Taken together, these results suggest that infants use a combination of property- and category-related information to parse complex visual stimuli.
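Two of the image statistics mentioned above can be approximated as follows: Shannon entropy of the intensity histogram, and a scale-invariance proxy taken as the slope of the log-log radially averaged amplitude spectrum, each computed for the target patch and the background so that their difference can serve as a predictor of detection. Both operationalizations in the sketch below are assumptions and may differ from the study's exact definitions.

```python
# Illustrative sketch: target-background differences in entropy and a
# scale-invariance proxy (slope of the log-log amplitude spectrum).
# These operationalizations are assumptions, not necessarily the paper's.
import numpy as np

def shannon_entropy(patch, bins=64):
    """Entropy (bits) of the intensity histogram of a patch with values in [0, 1]."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def spectral_slope(patch):
    """Slope of log amplitude vs. log spatial frequency (rotational average)."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean())))
    h, w = patch.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), weights=amp.ravel()) / np.maximum(counts, 1)
    f = np.arange(1, min(h, w) // 2)          # skip DC, stay below Nyquist
    slope, _ = np.polyfit(np.log(f), np.log(radial[f] + 1e-12), 1)
    return float(slope)

rng = np.random.default_rng(4)
target, background = rng.random((128, 128)), rng.random((128, 128))
print("entropy difference:", shannon_entropy(target) - shannon_entropy(background))
print("slope difference:  ", spectral_slope(target) - spectral_slope(background))
```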
Affiliation(s)
- Karola Schlegelmilch: Max Planck Research Group Naturalistic Social Cognition, Max Planck Institute for Human Development, Berlin, Germany
- Annie E. Wertz: Max Planck Research Group Naturalistic Social Cognition, Max Planck Institute for Human Development, Berlin, Germany
12. Oakes LM. The development of visual attention in infancy: A cascade approach. Advances in Child Development and Behavior 2022; 64:1-37. PMID: 37080665; DOI: 10.1016/bs.acdb.2022.10.004.
Abstract
Visual attention develops rapidly and significantly during the first postnatal years. At birth, infants have poor visual acuity, poor head and neck control, and as a result have little autonomy over where and how long they look. Across the first year, the neural systems that support alerting, orienting, and endogenous attention develop, allowing infants to more effectively focus their attention on information in the environment important for processing. However, visual attention is a system that develops in the context of the whole child, and fully understanding this development requires understanding how attentional systems interact and how these systems interact with other systems across wide domains. By adopting a cascades framework we can better position the development of visual attention in the context of the whole developing child. Specifically, development builds, with previous achievements setting the stage for current development, and current development having cascading consequences on future development. In addition, development reflects changes in multiple domains, and those domains influence each other across development. Finally, development reflects and produces changes in the input that the visual system receives; understanding the changing input is key to fully understand the development of visual attention. The development of visual attention is described in this context.
Affiliation(s)
- Lisa M Oakes: Department of Psychology and Center for Mind and Brain, University of California, Davis, Davis, CA, United States
13. Kiat JE, Luck SJ, Beckner AG, Hayes TR, Pomaranski KI, Henderson JM, Oakes LM. Linking patterns of infant eye movements to a neural network model of the ventral stream using representational similarity analysis. Dev Sci 2022; 25:e13155. PMID: 34240787; PMCID: PMC8639751; DOI: 10.1111/desc.13155.
Abstract
Little is known about the development of higher-level areas of visual cortex during infancy, and even less is known about how the development of visually guided behavior is related to the different levels of the cortical processing hierarchy. As a first step toward filling these gaps, we used representational similarity analysis (RSA) to assess links between gaze patterns and a neural network model that captures key properties of the ventral visual processing stream. We recorded the eye movements of 4- to 12-month-old infants (N = 54) as they viewed photographs of scenes. For each infant, we calculated the similarity of the gaze patterns for each pair of photographs. We also analyzed the images using a convolutional neural network model in which the successive layers correspond approximately to the sequence of areas along the ventral stream. For each layer of the network, we calculated the similarity of the activation patterns for each pair of photographs, which was then compared with the infant gaze data. We found that the network layers corresponding to lower-level areas of visual cortex accounted for gaze patterns better in younger infants than in older infants, whereas the network layers corresponding to higher-level areas of visual cortex accounted for gaze patterns better in older infants than in younger infants. Thus, between 4 and 12 months, gaze becomes increasingly controlled by more abstract, higher-level representations. These results also demonstrate the feasibility of using RSA to link infant gaze behavior to neural network models. A video abstract of this article can be viewed at https://youtu.be/K5mF2Rw98Is.
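The representational similarity analysis described above reduces to comparing two kinds of dissimilarity matrices over the same photographs: one built from infants' gaze patterns and one per network layer built from activation patterns. The sketch below shows that comparison with simulated gaze vectors and simulated layer activations; the fixation-density representation and the Spearman correlation are assumptions standing in for the study's actual data and network.

```python
# Illustrative RSA sketch: correlate a gaze-pattern RDM with per-layer activation
# RDMs. Fixation-density gaze representations and simulated "layer activations"
# are assumptions standing in for the study's actual data and CNN.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_images = 20

# One vector per photograph describing where an infant looked (e.g., a flattened,
# smoothed fixation-density map); random here for illustration.
gaze_vectors = rng.random((n_images, 16 * 16))
gaze_rdm = pdist(gaze_vectors, metric="correlation")   # condensed upper triangle

# Simulated activation patterns for each "layer" of a ventral-stream-like model;
# in practice these would come from a convolutional network's responses to the
# same photographs.
layer_rdms = {f"layer_{i}": pdist(rng.random((n_images, 512)), metric="correlation")
              for i in range(1, 6)}

for name, rdm in layer_rdms.items():
    rho, _ = spearmanr(gaze_rdm, rdm)
    print(f"{name}: gaze-model similarity rho = {rho:.3f}")
```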
14. Nelson CM, Oakes LM. "May I Grab Your Attention?": An Investigation Into Infants' Visual Preferences for Handled Objects Using Lookit as an Online Platform for Data Collection. Front Psychol 2021; 12:733218. PMID: 34566820; PMCID: PMC8460868; DOI: 10.3389/fpsyg.2021.733218.
Abstract
We examined the relation between 4- to 12-month-old infants' (N = 107) motor development and visual preference for handled or non-handled objects, using Lookit (lookit.mit.edu) as an online tool for data collection. Infants viewed eight pairs of objects, and their looking was recorded using their own webcam. Each pair contained one item with an easily graspable “handle-like” region and one without. Infants' duration of looking at each item was coded from the recordings, allowing us to evaluate their preference for the handled item. In addition, parents reported on their infants' motor behavior in the previous week. Overall, infants looked longer to handled items than non-handled items. Additionally, by examining the duration of infants' individual looks, we show that differences in infants' interest in the handled items varied both by infants' motor level and across the course of the 8-s trials. These findings confirm infant visual preferences can be successfully measured using Lookit and that motor development is related to infants' visual preferences for items with a graspable, handle-like region. The relative roles of age and motor development are discussed.
Affiliation(s)
- Christian M Nelson: Department of Psychology and the Center for Mind and Brain, University of California, Davis, Davis, CA, United States
- Lisa M Oakes: Department of Psychology and the Center for Mind and Brain, University of California, Davis, Davis, CA, United States