1. O'Connell TP, Bonnen T, Friedman Y, Tewari A, Sitzmann V, Tenenbaum JB, Kanwisher N. Approximating Human-Level 3D Visual Inferences With Deep Neural Networks. Open Mind (Camb) 2025; 9:305-324. [PMID: 40013087; PMCID: PMC11864798; DOI: 10.1162/opmi_a_00189]
Abstract
Humans make rich inferences about the geometry of the visual world. While deep neural networks (DNNs) achieve human-level performance on some psychophysical tasks (e.g., rapid classification of object or scene categories), they often fail in tasks requiring inferences about the underlying shape of objects or scenes. Here, we ask whether and how this gap in 3D shape representation between DNNs and humans can be closed. First, we define the problem space: after generating a stimulus set to evaluate 3D shape inferences using a match-to-sample task, we confirm that standard DNNs are unable to reach human performance. Next, we construct a set of candidate 3D-aware DNNs including 3D neural field (Light Field Network), autoencoder, and convolutional architectures. We investigate the role of the learning objective and dataset by training single-view (the model only sees one viewpoint of an object per training trial) and multi-view (the model is trained to associate multiple viewpoints of each object per training trial) versions of each architecture. When the same object categories appear in the model training and match-to-sample test sets, multi-view DNNs approach human-level performance for 3D shape matching, highlighting the importance of a learning objective that enforces a common representation across viewpoints of the same object. Furthermore, the 3D Light Field Network was the model most similar to humans across all tests, suggesting that building in 3D inductive biases increases human-model alignment. Finally, we explore the generalization performance of multi-view DNNs to out-of-distribution object categories not seen during training. Overall, our work shows that multi-view learning objectives for DNNs are necessary but not sufficient to make similar 3D shape inferences as humans and reveals limitations in capturing human-like shape inferences that may be inherent to DNN modeling approaches. We provide a methodology for understanding human 3D shape perception within a deep learning framework and highlight out-of-domain generalization as the next challenge for learning human-like 3D representations with DNNs.
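As an illustration of the multi-view learning objective highlighted in this abstract, the short Python sketch below (not the authors' code) trains a toy convolutional encoder so that two viewpoints of the same object map to a common embedding; the encoder, the random tensors standing in for rendered viewpoints, and the InfoNCE-style loss are illustrative assumptions rather than the paper's architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEncoder(nn.Module):
    """Small CNN mapping an image to a unit-norm embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return F.normalize(self.fc(h), dim=-1)

def multiview_loss(z1, z2, temperature=0.1):
    # Embeddings of two viewpoints of the same object are positives;
    # every other object in the batch serves as a negative.
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

encoder = ToyEncoder()
view_a = torch.randn(16, 3, 64, 64)  # viewpoint 1 of 16 objects (dummy images)
view_b = torch.randn(16, 3, 64, 64)  # viewpoint 2 of the same 16 objects
loss = multiview_loss(encoder(view_a), encoder(view_b))
loss.backward()
print(float(loss))

In this multi-view setting each training step pairs viewpoints of one object, whereas a single-view model sees only one image per object and lacks the cross-view constraint emphasized in the abstract.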
Affiliation(s)
- Tyler Bonnen
- EECS, University of California, Berkeley, Berkeley, CA, USA
2. Daniel-Hertz E, Yao JK, Gregorek S, Hoyos PM, Gomez J. An Eccentricity Gradient Reversal across High-Level Visual Cortex. J Neurosci 2025; 45:e0809242024. [PMID: 39516043; PMCID: PMC11713851; DOI: 10.1523/jneurosci.0809-24.2024]
Abstract
Human visual cortex contains regions selectively involved in perceiving and recognizing ecologically important visual stimuli such as people and places. Located in the ventral temporal lobe, these regions are organized consistently relative to cortical folding, a phenomenon thought to be inherited from how centrally or peripherally these stimuli are viewed with the retina. While this eccentricity theory of visual cortex has been one of the best descriptions of its functional organization, whether or not it accurately describes visual processing in all category-selective regions is not yet clear. Through a combination of behavioral and functional MRI measurements in 27 participants (17 females), we demonstrate that a limb-selective region neighboring well-studied face-selective regions shows tuning for the visual periphery in a cortical region originally thought to be centrally biased. We demonstrate that the spatial computations performed by the limb-selective region are consistent with visual experience and in doing so, make the novel observation that there may in fact be two eccentricity gradients, forming an eccentricity reversal across high-level visual cortex. These data expand the current theory of cortical organization to provide a unifying principle that explains the broad functional features of many visual regions, showing that viewing experience interacts with innate wiring principles to drive the location of cortical specialization.
Affiliation(s)
- Edan Daniel-Hertz
- Princeton University, Princeton Neuroscience Institute, Princeton, New Jersey 08544
- Jewelia K Yao
- Princeton University, Princeton Neuroscience Institute, Princeton, New Jersey 08544
- Sidney Gregorek
- Princeton University, Princeton Neuroscience Institute, Princeton, New Jersey 08544
- Patricia M Hoyos
- Princeton University, Princeton Neuroscience Institute, Princeton, New Jersey 08544
- Jesse Gomez
- Princeton University, Princeton Neuroscience Institute, Princeton, New Jersey 08544
3. Yurkovic-Harding J, Bradshaw J. The Dynamics of Looking and Smiling Differ for Young Infants at Elevated Likelihood for ASD. Infancy 2025; 30:e12646. [PMID: 39716809; PMCID: PMC12047390; DOI: 10.1111/infa.12646]
Abstract
Social smiling is the earliest-acquired social communication skill, emerging around 2 months of age. From 2 to 6 months, infants primarily smile in response to caregivers. After 6 months, infants coordinate social smiles with other social cues to initiate interactions with the caregiver. Social smiling is reduced in older infants with autism spectrum disorder (ASD) but has rarely been studied before 6 months of life. The current study therefore aimed to understand the component parts of infant social smiles, namely looking to the caregiver and smiling, during face-to-face interactions in 3- and 4-month-old infants at elevated (EL) and low likelihood (LL) for ASD. We found that EL and LL infants looked to their caregiver and smiled for similar amounts of time and at similar rates, suggesting that social smiling manifests similarly in both groups. A nuanced difference between groups emerged when considering the temporal dynamics of looking and smiling. Specifically, 3-month-old EL infants demonstrated extended looking to the caregiver after smile offset. These findings suggest that social smiling is largely typical in EL infants in early infancy, with subtle differences in temporal coupling. Future research is needed to understand the full magnitude of these differences and their implications for social development.
Affiliation(s)
- Julia Yurkovic-Harding
- Department of Psychology, University of South Carolina, Columbia, South Carolina, USA
- Carolina Autism and Neurodevelopment Research Center, University of South Carolina, Columbia, South Carolina, USA
- Jessica Bradshaw
- Department of Psychology, University of South Carolina, Columbia, South Carolina, USA
- Carolina Autism and Neurodevelopment Research Center, University of South Carolina, Columbia, South Carolina, USA
4. Yan X, Tung SS, Fascendini B, Chen YD, Norcia AM, Grill-Spector K. The emergence of visual category representations in infants' brains. eLife 2024; 13:RP100260. [PMID: 39714017; DOI: 10.7554/elife.100260]
Abstract
Organizing the continuous stream of visual input into categories like places or faces is important for everyday function and social interactions. However, it is unknown when neural representations of these and other visual categories emerge. Here, we used steady-state evoked potential electroencephalography to measure cortical responses in infants at 3-4 months, 4-6 months, 6-8 months, and 12-15 months, when they viewed controlled, gray-level images of faces, limbs, corridors, characters, and cars. We found that distinct responses to these categories emerge at different ages. Reliable brain responses to faces emerge first, at 4-6 months, followed by limbs and places around 6-8 months. Between 6 and 15 months response patterns become more distinct, such that a classifier can decode what an infant is looking at from their brain responses. These findings have important implications for assessing typical and atypical cortical development as they not only suggest that category representations are learned, but also that representations of categories that may have innate substrates emerge at different times during infancy.
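The decoding result mentioned at the end of this abstract can be illustrated with a generic cross-validated classifier. The sketch below uses simulated multi-channel response patterns rather than the study's infant EEG, and the linear model, channel count, and noise level are placeholder assumptions, not the authors' pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_categories = 200, 32, 5  # e.g. faces, limbs, corridors, characters, cars
labels = rng.integers(0, n_categories, size=n_trials)
category_templates = rng.normal(size=(n_categories, n_channels))
# Each trial's pattern = its category template plus measurement noise.
patterns = category_templates[labels] + rng.normal(scale=2.0, size=(n_trials, n_channels))

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1 / n_categories:.2f})")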
Affiliation(s)
- Xiaoqian Yan
- Department of Psychology, Stanford University, Stanford, United States
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, United States
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
- Sarah Shi Tung
- Department of Psychology, Stanford University, Stanford, United States
- Bella Fascendini
- Department of Psychology, Stanford University, Stanford, United States
- Yulan Diana Chen
- Department of Psychology, Stanford University, Stanford, United States
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, United States
- Anthony M Norcia
- Department of Psychology, Stanford University, Stanford, United States
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, United States
- Kalanit Grill-Spector
- Department of Psychology, Stanford University, Stanford, United States
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, United States
- Neurosciences Program, Stanford University, Stanford, United States
5. Kamensek T, Iarocci G, Oruc I. Atypical daily visual exposure to faces in adults with autism spectrum disorder. Curr Biol 2024; 34:4197-4208.e4. [PMID: 39181127; DOI: 10.1016/j.cub.2024.07.094]
Abstract
Expert face processes are refined and tuned through a protracted development. Exposure statistics of the daily visual experience of neurotypical adults (the face diet) show substantial exposure to familiar faces. People with autism spectrum disorder (ASD) do not show the same expertise with faces as their non-autistic counterparts. This may be due to an impoverished visual experience with faces, according to experiential models of autism. Here, we present the first empirical report on the day-to-day visual experience of the faces of adults with ASD. Our results, based on over 360 h of first-person perspective footage of daily exposure, show striking qualitative and quantitative differences in the ASD face diet compared with those of neurotypical observers, which is best characterized by a pattern of reduced and atypical exposure to familiar faces in ASD. Specifically, duration of exposure to familiar faces was lower in ASD, and faces were viewed from farther distances and from viewpoints that were biased toward profile pose. Our results provide strong evidence that individuals with ASD may not be getting the experience needed for the typical development of expert face processes.
Affiliation(s)
- Todd Kamensek
- Department of Ophthalmology and Visual Sciences, University of British Columbia, 818 W 10th Avenue, Vancouver, BC V5Z 1M9, Canada
- Grace Iarocci
- Department of Psychology, Simon Fraser University, 8888 University Drive, Burnaby, BC V5A 1S6, Canada
- Ipek Oruc
- Department of Ophthalmology and Visual Sciences, University of British Columbia, 818 W 10th Avenue, Vancouver, BC V5Z 1M9, Canada.
6. Arcaro M, Livingstone M. A Whole-Brain Topographic Ontology. Annu Rev Neurosci 2024; 47:21-40. [PMID: 38360565; DOI: 10.1146/annurev-neuro-082823-073701]
Abstract
It is a common view that the intricate array of specialized domains in the ventral visual pathway is innately prespecified. What this review postulates is that it is not. We explore the origins of domain specificity, hypothesizing that the adult brain emerges from an interplay between a domain-general map-based architecture, shaped by intrinsic mechanisms, and experience. We argue that the most fundamental innate organization of cortex in general, and not just the visual pathway, is a map-based topography that governs how the environment maps onto the brain, how brain areas interconnect, and ultimately, how the brain processes information.
Affiliation(s)
- Michael Arcaro
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
7. Gui A, Throm E, da Costa PF, Penza F, Aguiló Mayans M, Jordan-Barros A, Haartsen R, Leech R, Jones EJH. Neuroadaptive Bayesian optimisation to study individual differences in infants' engagement with social cues. Dev Cogn Neurosci 2024; 68:101401. [PMID: 38870603; PMCID: PMC11225696; DOI: 10.1016/j.dcn.2024.101401]
Abstract
Infants' motivation to engage with the social world depends on the interplay between the individual brain's characteristics and previous exposure to social cues such as the parent's smile or eye contact. Different hypotheses about why specific combinations of emotional expressions and gaze direction engage children have been tested with group-level approaches rather than focusing on individual differences in social brain development. Here, a novel Artificial Intelligence-enhanced brain-imaging approach, Neuroadaptive Bayesian Optimisation (NBO), was applied to infant electroencephalography (EEG) to understand how selected neural signals encode social cues in individual infants. EEG data from 42 6- to 9-month-old infants looking at images of their parent's face were analysed in real time and used by a Bayesian Optimisation algorithm to identify which combination of the parent's gaze/head direction and emotional expression produces the strongest brain activation in the child. This individualised approach supported the theory that the infant's brain is maximally engaged by communicative cues with a negative valence (angry faces with direct gaze). Infants attending preferentially to faces with direct gaze had increased positive affectivity and decreased negative affectivity. This work confirmed that infants' attentional preferences for social cues are heterogeneous and shows the NBO's potential to study diversity in neurodevelopmental trajectories.
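The closed-loop search described here can be sketched as a small Bayesian optimisation over a discrete stimulus grid. The Python example below is not the authors' NBO implementation; it assumes a simulated single-trial brain response in place of real-time EEG, a numeric coding of gaze and emotion, a Matern-kernel Gaussian-process surrogate, and an upper-confidence-bound acquisition rule.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Discrete stimulus space: gaze/head direction x emotional expression (coded numerically).
gaze = [0, 1, 2]          # averted-left, direct, averted-right
emotion = [0, 1, 2, 3]    # neutral, happy, fearful, angry
grid = np.array([[g, e] for g in gaze for e in emotion], dtype=float)

rng = np.random.default_rng(1)
def measure_response(x):
    # Stand-in for a single-trial brain response; peaks at direct gaze + angry face.
    return -((x[0] - 1) ** 2 + (x[1] - 3) ** 2) + rng.normal(scale=0.5)

# Seed the loop with a few random stimuli, then iterate: fit surrogate,
# pick the most promising condition, present it, record the response.
init_idx = rng.choice(len(grid), size=3, replace=False)
X = [grid[i] for i in init_idx]
y = [measure_response(x) for x in X]

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for trial in range(20):
    gp.fit(np.array(X), np.array(y))
    mu, sd = gp.predict(grid, return_std=True)
    nxt = grid[np.argmax(mu + sd)]        # upper-confidence-bound acquisition
    X.append(nxt)
    y.append(measure_response(nxt))

gp.fit(np.array(X), np.array(y))
best = grid[np.argmax(gp.predict(grid))]
print("estimated most engaging (gaze, emotion) combination:", best)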
Affiliation(s)
- A Gui
- Centre for Brain and Cognitive Development, Department of Psychological Science, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom; Department of Psychology, University of Essex, Wivenhoe Park, Colchester CO4 3SQ, United Kingdom.
- E Throm
- Centre for Brain and Cognitive Development, Department of Psychological Science, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom
- P F da Costa
- Department of Neuroimaging, Institute of Psychiatry, Psychology and, Neuroscience, King's College London, de Crespigny Road, London SE5 8AB, United Kingdom
- F Penza
- Centre for Brain and Cognitive Development, Department of Psychological Science, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom
- M Aguiló Mayans
- Centre for Brain and Cognitive Development, Department of Psychological Science, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom
- A Jordan-Barros
- Centre for Brain and Cognitive Development, Department of Psychological Science, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom
- R Haartsen
- Centre for Brain and Cognitive Development, Department of Psychological Science, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom
- R Leech
- Department of Neuroimaging, Institute of Psychiatry, Psychology and, Neuroscience, King's College London, de Crespigny Road, London SE5 8AB, United Kingdom
- E J H Jones
- Centre for Brain and Cognitive Development, Department of Psychological Science, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom
8. Aneja P, Kinna T, Newman J, Sami S, Cassidy J, McCarthy J, Tiwari M, Kumar A, Spencer JP. Leveraging technological advances to assess dyadic visual cognition during infancy in high- and low-resource settings. Front Psychol 2024; 15:1376552. [PMID: 38873529; PMCID: PMC11169819; DOI: 10.3389/fpsyg.2024.1376552]
Abstract
Caregiver-infant interactions shape infants' early visual experience; however, there is limited work from low- and middle-income countries (LMIC) characterizing the visual cognitive dynamics of these interactions. Here, we present an innovative dyadic visual cognition pipeline, based on machine learning methods, that captures, processes, and analyses the visual dynamics of caregiver-infant interactions across cultures. We undertake two studies to examine its application in both low-resource (rural India) and high-resource (urban UK) settings. Study 1 develops and validates the pipeline to process caregiver-infant interaction data captured using head-mounted cameras and eye-trackers. We use face detection and object recognition networks and validate these tools using 12 caregiver-infant dyads (4 dyads from a 6-month-old UK cohort, 4 dyads from a 6-month-old India cohort, and 4 dyads from a 9-month-old India cohort). Results show robust and accurate face and toy detection, as well as a high percent agreement between processed and manually coded dyadic interactions. Study 2 applied the pipeline to a larger data set (25 6-month-olds from the UK, 31 6-month-olds from India, and 37 9-month-olds from India) with the aim of comparing the visual dynamics of caregiver-infant interaction across the two cultural settings. Results show remarkable correspondence between key measures of visual exploration across cultures, including longer mean look durations during infant-led joint attention episodes. In addition, we found several differences across cultures. Most notably, infants in the UK had a higher proportion of infant-led joint attention episodes, consistent with a child-centered view of parenting common in western middle-class families. In summary, the pipeline we report provides an objective assessment tool to quantify the visual dynamics of caregiver-infant interaction across high- and low-resource settings.
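A rough sketch of one stage of such a pipeline is shown below: per-frame face detection on head-camera footage turned into a binary "face in view" series and compared against manual codes by percent agreement. This is not the authors' pipeline (which uses face detection and object recognition networks); the OpenCV Haar-cascade detector is a stand-in, and the video path and manual-coding file are hypothetical placeholders.

import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def face_in_view_series(video_path):
    # Walk through the video frame by frame and record whether a face is visible.
    cap = cv2.VideoCapture(video_path)
    series = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        series.append(1 if len(faces) > 0 else 0)   # 1 = at least one face detected
    cap.release()
    return np.array(series)

def percent_agreement(auto_codes, manual_codes):
    n = min(len(auto_codes), len(manual_codes))
    return 100.0 * np.mean(auto_codes[:n] == manual_codes[:n])

# Example usage (file names are placeholders):
# auto = face_in_view_series("dyad01_headcam.mp4")
# manual = np.loadtxt("dyad01_manual_codes.csv", dtype=int)
# print(f"agreement with manual coding: {percent_agreement(auto, manual):.1f}%")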
Affiliation(s)
- Prerna Aneja
- School of Psychology, University of East Anglia, Norwich, United Kingdom
- Thomas Kinna
- School of Medicine, University of East Anglia, Norwich, United Kingdom
- School of Pharmacy, University of East Anglia, Norwich, United Kingdom
- Jacob Newman
- IT and Computing, University of East Anglia, Norwich, United Kingdom
- Saber Sami
- School of Medicine, University of East Anglia, Norwich, United Kingdom
- Joe Cassidy
- School of Psychology, University of East Anglia, Norwich, United Kingdom
- Jordan McCarthy
- School of Psychology, University of East Anglia, Norwich, United Kingdom
- John P. Spencer
- School of Psychology, University of East Anglia, Norwich, United Kingdom
9. Skalaban LJ, Chan I, Rapuano KM, Lin Q, Conley MI, Watts RR, Busch EL, Murty VP, Casey BJ. Representational Dissimilarity of Faces and Places during a Working Memory Task is Associated with Subsequent Recognition Memory during Development. J Cogn Neurosci 2024; 36:415-434. [PMID: 38060253; DOI: 10.1162/jocn_a_02094]
Abstract
Nearly 50 years of research has focused on faces as a special visual category, especially during development. Yet it remains unclear how spatial patterns of neural similarity of faces and places relate to how information processing supports subsequent recognition of items from these categories. The current study uses representational similarity analysis and functional imaging data from 9- and 10-year-old youth during an emotional n-back task from the Adolescent Brain and Cognitive Development Study 3.0 data release to relate spatial patterns of neural similarity during working memory to subsequent out-of-scanner performance on a recognition memory task. Specifically, we examine how similarities in representations within face categories (neutral, happy, and fearful faces) and representations between visual categories (faces and places) relate to subsequent recognition memory of these visual categories. Although working memory performance was higher for faces than places, subsequent recognition memory was greater for places than faces. Representational similarity analysis revealed category-specific patterns in face- and place-sensitive brain regions (fusiform gyrus, parahippocampal gyrus) compared with a nonsensitive visual region (pericalcarine cortex). Similarity within face categories and dissimilarity between face and place categories in the parahippocampus were related to better recognition of places from the n-back task. Conversely, in the fusiform, similarity within face categories and their relative dissimilarity from places were associated with better recognition of new faces, but not old faces. These findings highlight how the representational distinctiveness of visual categories influences what information is subsequently prioritized in recognition memory during development.
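The within- versus between-category comparison at the heart of this representational similarity analysis can be illustrated with a few lines of Python. The voxel patterns below are simulated, and the simple correlation-based similarity is an illustrative choice rather than the study's exact pipeline.

import numpy as np

rng = np.random.default_rng(2)
n_voxels = 100
face_proto, place_proto = rng.normal(size=n_voxels), rng.normal(size=n_voxels)
faces = face_proto + rng.normal(scale=1.0, size=(20, n_voxels))    # 20 simulated face trials
places = place_proto + rng.normal(scale=1.0, size=(20, n_voxels))  # 20 simulated place trials

patterns = np.vstack([faces, places])
labels = np.array([0] * 20 + [1] * 20)          # 0 = face, 1 = place
sim = np.corrcoef(patterns)                     # trial-by-trial similarity matrix

same = labels[:, None] == labels[None, :]
off_diag = ~np.eye(len(labels), dtype=bool)
within = sim[same & off_diag].mean()            # within-category similarity
between = sim[~same].mean()                     # between-category similarity
print(f"within-category r = {within:.2f}, between-category r = {between:.2f}")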
Affiliation(s)
- Lena J Skalaban
- Yale University, New Haven, CT
- Temple University, Philadelphia, PA
- Qi Lin
- Yale University, New Haven, CT
- B J Casey
- Yale University, New Haven, CT
- Barnard College, Columbia University, New York, NY
10. Kamensek T, Susilo T, Iarocci G, Oruc I. Are people with autism prosopagnosic? Autism Res 2023; 16:2100-2109. [PMID: 37740564; DOI: 10.1002/aur.3030]
Abstract
Difficulties in various face processing tasks have been well documented in autism spectrum disorder (ASD). Several meta-analyses and numerous case-control studies have indicated that this population experiences a moderate degree of impairment, with a small percentage of studies failing to detect any impairment. One possible account of this mixed pattern of findings is heterogeneity in face processing abilities stemming from the presence of a subpopulation of prosopagnosic individuals with ASD alongside those with normal face processing skills. Samples randomly drawn from such a population, especially relatively small ones, would vary in the proportion of participants with prosopagnosia, resulting in a wide range of group-level deficits from mild (or none) to severe across studies. We test this prosopagnosic subpopulation hypothesis by examining three groups of participants: adults with ASD, adults with developmental prosopagnosia (DP), and a comparison group. Our results show that the prosopagnosic subpopulation hypothesis does not account for the face impairments in the broader autism spectrum. ASD observers show a continuous and graded, rather than categorical, heterogeneity that spans a range of face processing skills, including many individuals with mild to moderate deficits, a pattern inconsistent with a prosopagnosic subtype account. We suggest that the pathogenic origins of face deficits for at least some individuals with ASD differ from those of DP.
Affiliation(s)
- Todd Kamensek
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
- Tirta Susilo
- School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Grace Iarocci
- Department of Psychology, Simon Fraser University, Burnaby, British Columbia, Canada
- Ipek Oruc
- Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
11. Geangu E, Smith WAP, Mason HT, Martinez-Cedillo AP, Hunter D, Knight MI, Liang H, del Carmen Garcia de Soria Bazan M, Tse ZTH, Rowland T, Corpuz D, Hunter J, Singh N, Vuong QC, Abdelgayed MRS, Mullineaux DR, Smith S, Muller BR. EgoActive: Integrated Wireless Wearable Sensors for Capturing Infant Egocentric Auditory-Visual Statistics and Autonomic Nervous System Function 'in the Wild'. Sensors (Basel) 2023; 23:7930. [PMID: 37765987; PMCID: PMC10534696; DOI: 10.3390/s23187930]
Abstract
There have been sustained efforts toward using naturalistic methods in developmental science to measure infant behaviors in the real world from an egocentric perspective because statistical regularities in the environment can shape and be shaped by the developing infant. However, there is no user-friendly and unobtrusive technology to densely and reliably sample life in the wild. To address this gap, we present the design, implementation and validation of the EgoActive platform, which addresses limitations of existing wearable technologies for developmental research. EgoActive records the active infants' egocentric perspective of the world via a miniature wireless head-mounted camera concurrently with their physiological responses to this input via a lightweight, wireless ECG/acceleration sensor. We also provide software tools to facilitate data analyses. Our validation studies showed that the cameras and body sensors performed well. Families also reported that the platform was comfortable, easy to use and operate, and did not interfere with daily activities. The synchronized multimodal data from the EgoActive platform can help tease apart complex processes that are important for child development to further our understanding of areas ranging from executive function to emotion processing and social learning.
Affiliation(s)
- Elena Geangu
- Psychology Department, University of York, York YO10 5DD, UK; (A.P.M.-C.); (M.d.C.G.d.S.B.)
- William A. P. Smith
- Department of Computer Science, University of York, York YO10 5DD, UK; (W.A.P.S.); (J.H.); (M.R.S.A.); (B.R.M.)
- Harry T. Mason
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK; (H.T.M.); (D.H.); (N.S.); (S.S.)
- David Hunter
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK; (H.T.M.); (D.H.); (N.S.); (S.S.)
- Marina I. Knight
- Department of Mathematics, University of York, York YO10 5DD, UK; (M.I.K.); (D.R.M.)
- Haipeng Liang
- School of Engineering and Materials Science, Queen Mary University of London, London E1 2AT, UK; (H.L.); (Z.T.H.T.)
- Zion Tsz Ho Tse
- School of Engineering and Materials Science, Queen Mary University of London, London E1 2AT, UK; (H.L.); (Z.T.H.T.)
- Thomas Rowland
- Protolabs, Halesfield 8, Telford TF7 4QN, UK; (T.R.); (D.C.)
- Dom Corpuz
- Protolabs, Halesfield 8, Telford TF7 4QN, UK; (T.R.); (D.C.)
- Josh Hunter
- Department of Computer Science, University of York, York YO10 5DD, UK; (W.A.P.S.); (J.H.); (M.R.S.A.); (B.R.M.)
- Nishant Singh
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK; (H.T.M.); (D.H.); (N.S.); (S.S.)
- Quoc C. Vuong
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE1 7RU, UK;
- Mona Ragab Sayed Abdelgayed
- Department of Computer Science, University of York, York YO10 5DD, UK; (W.A.P.S.); (J.H.); (M.R.S.A.); (B.R.M.)
- David R. Mullineaux
- Department of Mathematics, University of York, York YO10 5DD, UK; (M.I.K.); (D.R.M.)
- Stephen Smith
- School of Physics, Engineering and Technology, University of York, York YO10 5DD, UK; (H.T.M.); (D.H.); (N.S.); (S.S.)
- Bruce R. Muller
- Department of Computer Science, University of York, York YO10 5DD, UK; (W.A.P.S.); (J.H.); (M.R.S.A.); (B.R.M.)
12. Geangu E, Vuong QC. Seven-months-old infants show increased arousal to static emotion body expressions: Evidence from pupil dilation. Infancy 2023. [PMID: 36917082; DOI: 10.1111/infa.12535]
Abstract
Human body postures provide perceptual cues that can be used to discriminate and recognize emotions. It was previously found that 7-month-olds' fixation patterns discriminated fear from other emotion body expressions, but it is not clear whether they also process the emotional content of those expressions. The emotional content of visual stimuli can increase arousal level, resulting in pupil dilations. To provide evidence that infants also process the emotional content of expressions, we analyzed variations in pupil size in response to emotion stimuli. Forty-eight 7-month-old infants viewed adult body postures expressing anger, fear, and happiness, as well as neutral expressions, while their pupil size was measured. There was a significant emotion effect between 1040 and 1640 ms after image onset, when fear elicited larger pupil dilations than neutral expressions. A similar trend was found for anger expressions. Our results suggest that infants have increased arousal to negative-valence body expressions. Thus, in combination with previous fixation results, the pupil data show that infants as young as 7 months can perceptually discriminate static body expressions and process the emotional content of those expressions. The results extend information about infant processing of emotion expressions conveyed through other means (e.g., faces).
Affiliation(s)
- Elena Geangu
- Department of Psychology, University of York, York, UK
- Quoc C Vuong
- Biosciences Institute and School of Psychology, Newcastle University, Newcastle upon Tyne, UK
13. Ruba AL, Pollak SD, Saffran JR. Acquiring Complex Communicative Systems: Statistical Learning of Language and Emotion. Top Cogn Sci 2022; 14:432-450. [PMID: 35398974; PMCID: PMC9465951; DOI: 10.1111/tops.12612]
Abstract
During the early postnatal years, most infants rapidly learn to understand two naturally evolved communication systems: language and emotion. While these two domains include different types of content knowledge, it is possible that similar learning processes subserve their acquisition. In this review, we compare the learnable statistical regularities in language and emotion input. We then consider how domain-general learning abilities may underlie the acquisition of language and emotion, and how this process may be constrained in each domain. This comparative developmental approach can advance our understanding of how humans learn to communicate with others.
Affiliation(s)
- Ashley L. Ruba
- Department of Psychology, University of Wisconsin – Madison
- Seth D. Pollak
- Department of Psychology, University of Wisconsin – Madison
14. Adolph KE, West KL. Autism: The face value of eye contact. Curr Biol 2022; 32:R577-R580. [PMID: 35728531; PMCID: PMC9527854; DOI: 10.1016/j.cub.2022.05.016]
Abstract
Inattention to faces in clinical assessments is a robust marker for autism. However, a new study distinguishes diagnostic marker from behavioral mechanism, showing that face looking in everyday activity is equally rare in autistic and neurotypical children and not required for joint attention in either group.
Affiliation(s)
- Karen E Adolph
- Department of Psychology, New York University, New York, NY 10003, USA.
- Kelsey L West
- Department of Psychology, New York University, New York, NY 10003, USA
15. Bastianello T, Keren-Portnoy T, Majorano M, Vihman M. Infant looking preferences towards dynamic faces: A systematic review. Infant Behav Dev 2022; 67:101709. [PMID: 35338995; DOI: 10.1016/j.infbeh.2022.101709]
Abstract
Although the pattern of visual attention towards the region of the eyes is now well-established for infants at an early stage of development, less is known about the extent to which the mouth attracts an infant's attention. Even less is known about the extent to which these specific looking behaviours towards different regions of the talking face (i.e., the eyes or the mouth) may impact on or account for aspects of language development. The aim of the present systematic review is to synthesize and analyse (i) which factors might determine different looking patterns in infants during audio-visual tasks using dynamic faces and (ii) how these patterns have been studied in relation to aspects of the baby's development. Four bibliographic databases were explored, and the records were selected following specified inclusion criteria. The search led to the identification of 19 papers (October 2021). Some studies have tried to clarify the role played by audio-visual support in speech perception and early production based on directly related factors such as the age or language background of the participants, while others have tested the child's competence in terms of linguistic or social skills. Several hypotheses have been advanced to explain the selective attention phenomenon. The results of the selected studies have led to different lines of interpretation. Some suggestions for future research are outlined.
Affiliation(s)
- Marilyn Vihman
- Department of Language and Linguistic Science, University of York, UK
16. Carnevali L, Gui A, Jones EJH, Farroni T. Face Processing in Early Development: A Systematic Review of Behavioral Studies and Considerations in Times of COVID-19 Pandemic. Front Psychol 2022; 13:778247. [PMID: 35250718; PMCID: PMC8894249; DOI: 10.3389/fpsyg.2022.778247]
Abstract
Human faces are one of the most prominent stimuli in the visual environment of young infants and convey critical information for the development of social cognition. During the COVID-19 pandemic, mask wearing has become a common practice outside the home environment. With masks covering nose and mouth regions, the facial cues available to the infant are impoverished. The impact of these changes on development is unknown but is critical to debates around mask mandates in early childhood settings. As infants grow, they increasingly interact with a broader range of familiar and unfamiliar people outside the home; in these settings, mask wearing could possibly influence social development. In order to generate hypotheses about the effects of mask wearing on infant social development, in the present work, we systematically review N = 129 studies selected based on the most recent PRISMA guidelines providing a state-of-the-art framework of behavioral studies investigating face processing in early infancy. We focused on identifying sensitive periods during which being exposed to specific facial features or to the entire face configuration has been found to be important for the development of perceptive and socio-communicative skills. For perceptive skills, infants gradually learn to analyze the eyes or the gaze direction within the context of the entire face configuration. This contributes to identity recognition as well as emotional expression discrimination. For socio-communicative skills, direct gaze and emotional facial expressions are crucial for attention engagement while eye-gaze cuing is important for joint attention. Moreover, attention to the mouth is particularly relevant for speech learning. We discuss possible implications of the exposure to masked faces for developmental needs and functions. Providing groundwork for further research, we encourage the investigation of the consequences of mask wearing for infants' perceptive and socio-communicative development, suggesting new directions within the research field.
Affiliation(s)
- Laura Carnevali
- Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy
- Anna Gui
- Centre for Brain and Cognitive Development, Birkbeck, University of London, London, United Kingdom
- Emily J. H. Jones
- Centre for Brain and Cognitive Development, Birkbeck, University of London, London, United Kingdom
- Teresa Farroni
- Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy
17. Conte S, Baccolo E, Bulf H, Proietti V, Macchi Cassia V. Infants' visual exploration strategies for adult and child faces. Infancy 2022; 27:492-514. [PMID: 35075767; DOI: 10.1111/infa.12458]
Abstract
By the end of the first year of life, infants' discrimination abilities tune to frequently experienced face groups. Little is known about the exploration strategies adopted to efficiently discriminate frequent, familiar face types. The present eye-tracking study examined the distribution of visual fixations produced by 10-month-old and 4-month-old singletons while learning adult (i.e., familiar) and child (i.e., unfamiliar) White faces. Infants were tested in an infant-controlled visual habituation task, in which post-habituation preference measured successful discrimination. Results confirmed earlier evidence that, without sibling experience, 10-month-olds discriminate only among adult faces. Analyses of gaze movements during habituation showed that infants' fixations were centered in the upper part of the stimuli. The mouth was sampled longer in adult faces than in child faces, while the child eyes were sampled longer and more frequently than the adult eyes. At 10 months, but not at 4 months, global measures of scanning behavior on the whole face also varied according to face age, as the spatiotemporal distribution of scan paths showed larger within- and between-participants similarity for adult faces than for child faces. Results are discussed with reference to the perceptual narrowing literature, and the influence of age-appropriate developmental tasks on infants' face processing abilities.
Affiliation(s)
- Stefania Conte
- Department of Psychology, University of South Carolina, Columbia, South Carolina, USA
- Elisa Baccolo
- Department of Psychology, University of Milano-Bicocca, Milano, Italy
- Hermann Bulf
- Department of Psychology, University of Milano-Bicocca, Milano, Italy
- Valentina Proietti
- Department of Psychology, Trinity Western University, Langley, British Columbia, Canada
18. Kobayashi M, Kanazawa S, Yamaguchi MK, O'Toole AJ. Cortical processing of dynamic bodies in the superior occipito-temporal regions of the infants' brain: Difference from dynamic faces and inversion effect. Neuroimage 2021; 244:118598. [PMID: 34587515; DOI: 10.1016/j.neuroimage.2021.118598]
Abstract
Previous functional neuroimaging studies imply a crucial role for the superior temporal regions (e.g., the superior temporal sulcus, STS) in the processing of dynamic faces and bodies. However, little is known about the cortical processing of moving faces and bodies in infancy. The current study used functional near-infrared spectroscopy (fNIRS) to directly compare cortical hemodynamic responses to dynamic faces (videos of approaching people with blurred bodies) and dynamic bodies (videos of approaching people with blurred faces) in infants' brains. We also examined the body-inversion effect in 5- to 8-month-old infants using hemodynamic responses as a measure. We found significant brain activity for the dynamic faces and bodies in the superior area of bilateral temporal cortices in both 5- to 6-month-old and 7- to 8-month-old infants. The hemodynamic responses to dynamic faces occurred across a broader area of cortex in 7- to 8-month-olds than in 5- to 6-month-olds, but we did not find a developmental change for dynamic bodies. There was no significant activation when the stimuli were presented upside down, indicating that these activation patterns did not result from the low-level visual properties of dynamic faces and bodies. Additionally, we found that the superior temporal regions showed a body inversion effect in infants aged over 5 months: the upright dynamic body stimuli induced stronger activation compared to the inverted stimuli. The most important contribution of the present study is that we identified cortical areas responsive to dynamic bodies and faces in two groups of infants (5-6 and 7-8 months of age), and we found different developmental trends for the processing of bodies and faces.
Affiliation(s)
- Megumi Kobayashi
- Department of Functioning and Disability, Institute for Developmental Research, Aichi Developmental Disability Center, Japan.
- So Kanazawa
- Department of Psychology, Japan Women's University, Japan
- Alice J O'Toole
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, USA
19. Long BL, Sanchez A, Kraus AM, Agrawal K, Frank MC. Automated detections reveal the social information in the changing infant view. Child Dev 2021; 93:101-116. [PMID: 34787894; DOI: 10.1111/cdev.13648]
Abstract
How do postural developments affect infants' access to social information? We recorded egocentric and third-person video while infants and their caregivers (N = 36, 8- to 16-month-olds, N = 19 females) participated in naturalistic play sessions. We then validated the use of a neural network pose detection model to detect faces and hands in the infant view. We used this automated method to analyze our data and a prior egocentric video dataset (N = 17, 12-month-olds). Infants' average posture and orientation with respect to their caregiver changed dramatically across this age range; both posture and orientation modulated access to social information. Together, these results confirm that infant's ability to move and act on the world plays a significant role in shaping the social information in their view.
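A minimal sketch of how such automated detections might be summarized by posture is given below; the frame table, column names, and posture labels are hypothetical placeholders, and the binary face/hand columns stand in for the output of a pose- or face-detection model run beforehand.

import pandas as pd

# One row per video frame: posture label plus binary detector outputs.
frames = pd.DataFrame({
    "posture": ["sitting", "sitting", "prone", "upright", "upright", "prone"],
    "face_in_view": [1, 0, 0, 1, 1, 0],
    "hand_in_view": [1, 1, 1, 0, 1, 1],
})

# Proportion of frames with a face or hand in view, per posture.
summary = frames.groupby("posture")[["face_in_view", "hand_in_view"]].mean()
print(summary)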
Affiliation(s)
- Bria L Long
- Department of Psychology, Stanford University, Stanford, California, USA
- Alessandro Sanchez
- Department of Psychology, Stanford University, Stanford, California, USA
- Allison M Kraus
- Department of Psychology, Stanford University, Stanford, California, USA
- Ketan Agrawal
- Department of Psychology, Stanford University, Stanford, California, USA
- Michael C Frank
- Department of Psychology, Stanford University, Stanford, California, USA
20. Huebner PA, Willits JA. Using lexical context to discover the noun category: Younger children have it easier. Psychology of Learning and Motivation 2021. [DOI: 10.1016/bs.plm.2021.08.002]
21. Raz G, Saxe R. Learning in Infancy Is Active, Endogenously Motivated, and Depends on the Prefrontal Cortices. Annu Rev Dev Psychol 2020. [DOI: 10.1146/annurev-devpsych-121318-084841]
Abstract
A common view of learning in infancy emphasizes the role of incidental sensory experiences from which increasingly abstract statistical regularities are extracted. In this view, infant brains initially support basic sensory and motor functions, followed by maturation of higher-level association cortex. Here, we critique this view and posit that, by contrast and more like adults, infants are active, endogenously motivated learners who structure their own learning through flexible selection of attentional targets and active interventions on their environment. We further argue that the infant brain, and particularly the prefrontal cortex (PFC), is well equipped to support these learning behaviors. We review recent progress in characterizing the function of the infant PFC, which suggests that, as in adults, the PFC is functionally specialized and highly connected. Together, we present an integrative account of infant minds and brains, in which the infant PFC represents multiple intrinsic motivations, which are leveraged for active learning.
Affiliation(s)
- Gal Raz
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Rebecca Saxe
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
22. Geangu E, Vuong QC. Look up to the body: An eye-tracking investigation of 7-months-old infants' visual exploration of emotional body expressions. Infant Behav Dev 2020; 60:101473. [PMID: 32739668; DOI: 10.1016/j.infbeh.2020.101473]
Abstract
The human body is an important source of information to infer a person's emotional state. Research with adult observers indicates that the posture of the torso, arms and hands provides important perceptual cues for recognising anger, fear and happy expressions. Much less is known about whether infants process body regions differently for different body expressions. To address this issue, we used eye tracking to investigate whether infants' visual exploration patterns differed when viewing body expressions. Forty-eight 7-month-old infants were randomly presented with static images of adult female bodies expressing anger, fear and happiness, as well as an emotionally neutral posture. Facial cues to emotional state were removed by masking the faces. We measured the proportion of looking time, proportion and number of fixations, and duration of fixations on the head, upper body and lower body regions for the different expressions. We showed that infants explored the upper body more than the lower body. Importantly, infants at this age fixated differently on different body regions depending on the expression of the body posture. In particular, infants spent a larger proportion of their looking times and had longer fixation durations on the upper body for fear relative to the other expressions. These results extend and replicate the information about infant processing of emotional expressions displayed by human bodies, and they support the hypothesis that infants' visual exploration of human bodies is driven by the upper body.
23. Xu TL, de Barbaro K, Abney DH, Cox RFA. Finding Structure in Time: Visualizing and Analyzing Behavioral Time Series. Front Psychol 2020; 11:1457. [PMID: 32793025; PMCID: PMC7393268; DOI: 10.3389/fpsyg.2020.01457]
Abstract
The temporal structure of behavior contains a rich source of information about its dynamic organization, origins, and development. Today, advances in sensing and data storage allow researchers to collect multiple dimensions of behavioral data at a fine temporal scale both in and out of the laboratory, leading to the curation of massive multimodal corpora of behavior. However, along with these new opportunities come new challenges. Theories are often underspecified as to the exact nature of these unfolding interactions, and psychologists have limited ready-to-use methods and training for quantifying structures and patterns in behavioral time series. In this paper, we will introduce four techniques to interpret and analyze high-density multi-modal behavior data, namely, to: (1) visualize the raw time series, (2) describe the overall distributional structure of temporal events (Burstiness calculation), (3) characterize the non-linear dynamics over multiple timescales with Chromatic and Anisotropic Cross-Recurrence Quantification Analysis (CRQA), (4) and quantify the directional relations among a set of interdependent multimodal behavioral variables with Granger Causality. Each technique is introduced in a module with conceptual background, sample data drawn from empirical studies and ready-to-use Matlab scripts. The code modules showcase each technique's application with detailed documentation to allow more advanced users to adapt them to their own datasets. Additionally, to make our modules more accessible to beginner programmers, we provide a "Programming Basics" module that introduces common functions for working with behavioral timeseries data in Matlab. Together, the materials provide a practical introduction to a range of analyses that psychologists can use to discover temporal structure in high-density behavioral data.
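The paper's modules are provided in Matlab; as a language-agnostic illustration of one of the listed techniques, the Python sketch below computes the burstiness of an event series from its inter-event intervals as B = (sigma - mu) / (sigma + mu), which is -1 for a perfectly periodic event train and approaches 1 for a highly bursty one. The simulated event times are placeholders, not data from the paper.

import numpy as np

def burstiness(event_times):
    # Burstiness of the inter-event interval distribution: (std - mean) / (std + mean).
    intervals = np.diff(np.sort(np.asarray(event_times, dtype=float)))
    mu, sigma = intervals.mean(), intervals.std()
    return (sigma - mu) / (sigma + mu)

rng = np.random.default_rng(3)
periodic = np.arange(0, 60, 2.0)                               # one event every 2 s
bursty = np.cumsum(rng.exponential(scale=2.0, size=30) ** 2)   # clustered, heavy-tailed gaps
print(f"periodic series: B = {burstiness(periodic):.2f}")
print(f"bursty series:   B = {burstiness(bursty):.2f}")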
Affiliation(s)
- Tian Linger Xu
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, United States
- Kaya de Barbaro
- Department of Psychology, The University of Texas at Austin, Austin, TX, United States
- Drew H. Abney
- Department of Psychology, Center for Cognition, Action & Perception, University of Cincinnati, Cincinnati, OH, United States
- Ralf F. A. Cox
- Department of Psychology, University of Groningen, Groningen, Netherlands
24. Kobayashi M, Kakigi R, Kanazawa S, Yamaguchi MK. Infants' recognition of their mothers' faces in facial drawings. Dev Psychobiol 2020; 62:1011-1020. [PMID: 32227340; DOI: 10.1002/dev.21972]
Abstract
This study examined the development of the ability to recognize familiar faces in drawings in infants aged 6-8 months. In Experiment 1, we investigated infants' recognition of their mothers' faces by testing their visual preference for their mother's face over a stranger's face under three conditions: photographs, cartoons produced by online software that simplifies and enhances the contours of facial features of line drawings, and veridical line drawings. We found that 7- and 8-month-old infants showed a significant preference for their mother's face in photographs and cartoons, but not in veridical line drawings. In contrast, 6-month-old infants preferred their mother's face only in photographs. In Experiment 2, we investigated a visual preference for an upright face over an inverted face for cartoons and veridical line drawings in 6- to 8-month-old infants, finding that infants older than 6 months showed the inversion effect in face preference in both cartoons and veridical line drawings. Our results imply that the ability to utilize the enhanced information of a face to recognize familiar faces may develop at around 7 months of age.
Affiliation(s)
- Megumi Kobayashi
- Department of Functioning and Disability, Institute for Developmental Research, Aichi Developmental Disability Center, Kasugai, Japan
- Ryusuke Kakigi
- Department of Integrative Physiology, National Institute for Physiological Sciences, Okazaki, Japan
- So Kanazawa
- Department of Psychology, Japan Women's University, Kawasaki, Japan
25. Connectivity at the origins of domain specificity in the cortical face and place networks. Proc Natl Acad Sci U S A 2020; 117:6163-6169. [PMID: 32123077; DOI: 10.1073/pnas.1911359117]
Abstract
It is well established that the adult brain contains a mosaic of domain-specific networks. But how do these domain-specific networks develop? Here we tested the hypothesis that the brain comes prewired with connections that precede the development of domain-specific function. Using resting-state fMRI in the youngest sample of newborn humans tested to date, we indeed found that cortical networks that will later develop strong face selectivity (including the "proto" occipital face area and fusiform face area) and scene selectivity (including the "proto" parahippocampal place area and retrosplenial complex) by adulthood, already show domain-specific patterns of functional connectivity as early as 27 d of age (beginning as early as 6 d of age). Furthermore, we asked how these networks are functionally connected to early visual cortex and found that the proto face network shows biased functional connectivity with foveal V1, while the proto scene network shows biased functional connectivity with peripheral V1. Given that faces are almost always experienced at the fovea, while scenes always extend across the entire periphery, these differential inputs may serve to facilitate domain-specific processing in each network after that function develops, or even guide the development of domain-specific function in each network in the first place. Taken together, these findings reveal domain-specific and eccentricity-biased connectivity in the earliest days of life, placing new constraints on our understanding of the origins of domain-specific cortical networks.
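The eccentricity-biased connectivity analysis described here boils down to correlating region time courses. The sketch below illustrates that logic with simulated resting-state signals, so the coupling structure and signal model are illustrative assumptions rather than the study's infant fMRI data or code.

import numpy as np

rng = np.random.default_rng(4)
n_timepoints = 300
shared = rng.normal(size=n_timepoints)                         # common fluctuation
proto_face = shared + rng.normal(scale=1.0, size=n_timepoints)
foveal_v1 = shared + rng.normal(scale=1.0, size=n_timepoints)  # coupled to the face network
peripheral_v1 = rng.normal(size=n_timepoints)                  # not coupled in this toy model

r_fov = np.corrcoef(proto_face, foveal_v1)[0, 1]
r_per = np.corrcoef(proto_face, peripheral_v1)[0, 1]
print(f"face network - foveal V1:     r = {r_fov:.2f}")
print(f"face network - peripheral V1: r = {r_per:.2f}")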
|
26
|
Simpson EA, Maylott SE, Mitsven SG, Zeng G, Jakobsen KV. Face detection in 2- to 6-month-old infants is influenced by gaze direction and species. Dev Sci 2019; 23:e12902. [PMID: 31505079 DOI: 10.1111/desc.12902] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2018] [Revised: 07/10/2019] [Accepted: 08/30/2019] [Indexed: 11/29/2022]
Abstract
Humans detect faces efficiently from a young age. Face detection is critical for infants to identify and learn from relevant social stimuli in their environments. Faces with eye contact are an especially salient stimulus, and attention to the eyes in infancy is linked to the emergence of later sociality. Despite the importance of both of these early social skills-attending to faces and attending to the eyes-surprisingly little is known about how they interact. We used eye tracking to explore whether eye contact influences infants' face detection. Longitudinally, we examined 2-, 4-, and 6-month-olds' (N = 65) visual scanning of complex image arrays with human and animal faces varying in eye contact and head orientation. Across all ages, infants displayed superior detection of faces with eye contact; however, this effect varied as a function of species and head orientation. Infants were more attentive to human than animal faces and were more sensitive to eye and head orientation for human faces than for animal faces. Unexpectedly, human faces with both averted heads and averted eyes received the most attention. This pattern may reflect the early emergence of gaze following-the ability to look where another individual looks-which begins to develop around this age. Infants may be especially interested in averted-gaze faces, providing early scaffolding for joint attention. This is the first study to document infants' attention patterns to faces that systematically vary in their attentional states. Together, these findings suggest that infants develop specialized, functional detection of conspecific faces early in life.
Affiliation(s)
- Sarah E Maylott: Department of Psychology, University of Miami, Coral Gables, FL, USA
- Guangyu Zeng: Department of Psychology, University of Miami, Coral Gables, FL, USA
|
27
|
Yamamoto H, Sato A, Itakura S. Eye tracking in an everyday environment reveals the interpersonal distance that affords infant-parent gaze communication. Sci Rep 2019; 9:10352. [PMID: 31316101 PMCID: PMC6637119 DOI: 10.1038/s41598-019-46650-6] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2019] [Accepted: 06/29/2019] [Indexed: 11/09/2022] Open
Abstract
The unique morphology of human eyes enables gaze communication at various ranges of interpersonal distance. Although gaze communication contributes to infants' social development, little is known about how infant-parent distance affects infants' visual experience in daily gaze communication. The present study conducted longitudinal observations of infant-parent face-to-face interactions in the home environment as 5 infants aged from 10 to 15.5 months. Using head-mounted eye trackers worn by parents, we evaluated infants' daily visual experience of 3138 eye contact scenes recorded from the infants' second-person perspective. The results of a hierarchical Bayesian statistical analysis suggest that certain levels of interpersonal distance afforded smooth interaction with eye contact. Eye contacts were not likely to be exchanged when the infant and parent were too close or too far apart. The number of continuing eye contacts showed an inverse U-shaped pattern with interpersonal distance, regardless of whether the eye contact was initiated by the infant or the parent. However, the interpersonal distance was larger when the infant initiated the eye contact than when the parent initiated it, suggesting that interpersonal distance affects infants' and parents' social looking differently. Overall, the present study indicates that interpersonal distance modulates infant-parent gaze communication.
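The inverse U-shaped relation reported here can be pictured with a simple quadratic fit of eye-contact counts against interpersonal distance. The published analysis used a hierarchical Bayesian model; the sketch below is a deliberately simplified stand-in with made-up values.

```python
# Simplified illustration of an inverted-U relation between interpersonal
# distance and eye-contact counts (the paper used a hierarchical Bayesian model).
import numpy as np

distance_m = np.array([0.3, 0.6, 0.9, 1.2, 1.5, 1.8, 2.1])  # hypothetical distances (m)
eye_contacts = np.array([4, 9, 14, 15, 11, 6, 3])            # hypothetical counts per distance bin

# Quadratic fit; a negative leading coefficient indicates an inverted U.
coeffs = np.polyfit(distance_m, eye_contacts, deg=2)
peak_distance = -coeffs[1] / (2 * coeffs[0])  # vertex of the fitted parabola
print(f"quadratic coefficient = {coeffs[0]:.2f}, peak near {peak_distance:.2f} m")
```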
Affiliation(s)
- Hiroki Yamamoto: Graduate School of Letters, Kyoto University, Yoshida Honmachi, Sakyo-ku, Kyoto, 606-8501, Japan
- Atsushi Sato: Faculty of Human Development, University of Toyama, 3190 Gofuku, Toyama, 930-8555, Japan
- Shoji Itakura: Graduate School of Letters, Kyoto University, Yoshida Honmachi, Sakyo-ku, Kyoto, 606-8501, Japan; Center for Baby Science, Doshisha University, 4-1-1 Kizugawadai, Kizugawa, 619-0225, Japan
|
28
|
Kelly DJ, Duarte S, Meary D, Bindemann M, Pascalis O. Infants rapidly detect human faces in complex naturalistic visual scenes. Dev Sci 2019; 22:e12829. [PMID: 30896078 DOI: 10.1111/desc.12829] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2018] [Revised: 03/13/2019] [Accepted: 03/16/2019] [Indexed: 01/23/2023]
Abstract
Infants respond preferentially to faces and face-like stimuli from birth, but past research has typically presented faces in isolation or amongst an artificial array of competing objects. In the current study, infants aged 3 to 12 months viewed a series of complex visual scenes; half of the scenes contained a person, the other half did not. Infants rapidly detected and oriented to faces in scenes even when the faces were not visually salient. Although a clear developmental improvement was observed in face detection and interest, all infants were sensitive to the presence of a person in a scene, displaying eye movements that differed quantifiably across a range of measures when viewing scenes that either did or did not contain a person. We argue that infants' face detection capabilities are ostensibly "better" with naturalistic stimuli, and that the artificial array presentations used in previous studies have underestimated performance.
Affiliation(s)
- David J Kelly: School of Psychology, Keynes College, University of Kent, Canterbury, Kent, UK
- Sofia Duarte: School of Psychology, Keynes College, University of Kent, Canterbury, Kent, UK
- Markus Bindemann: School of Psychology, Keynes College, University of Kent, Canterbury, Kent, UK
|
29
|
Oruc I, Shafai F, Murthy S, Lages P, Ton T. The adult face-diet: A naturalistic observation study. Vision Res 2019; 157:222-229. [DOI: 10.1016/j.visres.2018.01.001] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2017] [Revised: 01/03/2018] [Accepted: 01/12/2018] [Indexed: 10/18/2022]
|
30
|
Sibling experience prevents neural tuning to adult faces in 10-month-old infants. Neuropsychologia 2019; 129:72-82. [PMID: 30922829 DOI: 10.1016/j.neuropsychologia.2019.03.010] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Revised: 03/07/2019] [Accepted: 03/18/2019] [Indexed: 11/21/2022]
Abstract
Early facial experience provided by the infant's social environment is known to shape face processing abilities, which narrow during the first year of life towards adult human faces of the most frequently encountered ethnic groups. Here we explored the hypothesis that natural variability in facial input may delay neural commitment to face processing by testing the impact of early natural experience with siblings on infants' brain responses. Event-related potentials (ERPs) evoked by upright and inverted adult and child faces were compared in two groups of 10-month-old infants, with (N = 21) and without (N = 22) a child sibling. In first-born infants, the P1 ERP component showed specificity to upright adult faces that carried over to the subsequent N290 and P400 components. In infants with siblings, no inversion effects were observed. Results are discussed in the context of evidence from the language domain, showing that neural commitment to phonetic contrasts emerges later in bilinguals than in monolinguals, and that this delay facilitates subsequent learning of previously unencountered sounds of new languages.
|
31
|
Abstract
Our social environment, from the microscopic to the macro-social, affects us for the entirety of our lives. One integral line of research examining how interpersonal and societal environments can get "under the skin" is epigenetics. Epigenetic mechanisms are adaptations made to our genome in response to our environment, ranging from tags placed on and removed from the DNA itself to changes in how our DNA is packaged, which affect how our genes are read, transcribed, and interact. These tags are affected by social environments and can persist over time; this may aid us in responding to experiences and exposures, both the enriched and the disadvantageous. From memory formation to immune function, the experience-dependent plasticity of epigenetic modifications in response to micro- and macro-social environments may contribute to the process of learning from comfort, pain, and stress to better survive in whatever circumstances life has in store.
Affiliation(s)
- Sarah M Merrill: Centre for Molecular Medicine and Therapeutics, British Columbia Children's Hospital, Vancouver, BC, Canada; Department of Medical Genetics, University of British Columbia, Vancouver, BC, Canada
- Nicole Gladish: Centre for Molecular Medicine and Therapeutics, British Columbia Children's Hospital, Vancouver, BC, Canada; Department of Medical Genetics, University of British Columbia, Vancouver, BC, Canada
- Michael S Kobor: Centre for Molecular Medicine and Therapeutics, British Columbia Children's Hospital, Vancouver, BC, Canada; Department of Medical Genetics, University of British Columbia, Vancouver, BC, Canada; Human Early Learning Partnership, University of British Columbia, Vancouver, BC, Canada
|
32
|
Borjon JI, Schroer SE, Bambach S, Slone LK, Abney DH, Crandall DJ, Smith LB. A View of Their Own: Capturing the Egocentric View of Infants and Toddlers with Head-Mounted Cameras. J Vis Exp 2018. [PMID: 30346402 DOI: 10.3791/58445] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022] Open
Abstract
Infants and toddlers view the world, at a basic sensory level, in a fundamentally different way from their parents. This is largely due to biological constraints: infants possess different body proportions than their parents and the ability to control their own head movements is less developed. Such constraints limit the visual input available. This protocol aims to provide guiding principles for researchers using head-mounted cameras to understand the changing visual input experienced by the developing infant. Successful use of this protocol will allow researchers to design and execute studies of the developing child's visual environment set in the home or laboratory. From this method, researchers can compile an aggregate view of all the possible items in a child's field of view. This method does not directly measure exactly what the child is looking at. By combining this approach with machine learning, computer vision algorithms, and hand-coding, researchers can produce a high-density dataset to illustrate the changing visual ecology of the developing infant.
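As a concrete illustration of the kind of post-processing this protocol enables, the sketch below samples frames from a head-camera video at a fixed rate and runs an off-the-shelf face detector to estimate how often a face is in view. It is a hypothetical example using OpenCV's bundled Haar cascade detector, not part of the published protocol; the file path and sampling rate are placeholders.

```python
# Hypothetical post-processing of head-camera video: sample frames at a fixed
# rate and estimate the proportion that contain at least one detectable face.
# Uses OpenCV's bundled Haar cascade; not part of the published protocol.
import cv2

VIDEO_PATH = "headcam_session.mp4"   # placeholder path
SAMPLE_HZ = 0.2                      # sample one frame every 5 seconds

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(VIDEO_PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
step = int(round(fps / SAMPLE_HZ))   # number of video frames between samples

sampled, with_face, frame_idx = 0, 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % step == 0:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        sampled += 1
        with_face += int(len(faces) > 0)
    frame_idx += 1
cap.release()

if sampled:
    print(f"{with_face}/{sampled} sampled frames contained a detectable face")
```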
Affiliation(s)
- Jeremy I Borjon: Department of Psychological and Brain Sciences, Indiana University
- Sara E Schroer: Department of Psychological and Brain Sciences, Indiana University
- Sven Bambach: School of Informatics, Computing, and Engineering, Indiana University
- Lauren K Slone: Department of Psychological and Brain Sciences, Indiana University
- Drew H Abney: Department of Psychological and Brain Sciences, Indiana University
- David J Crandall: School of Informatics, Computing, and Engineering, Indiana University
- Linda B Smith: Department of Psychological and Brain Sciences, Indiana University
|
33
|
Powell LJ, Kosakowski HL, Saxe R. Social Origins of Cortical Face Areas. Trends Cogn Sci 2018; 22:752-763. [PMID: 30041864 PMCID: PMC6098735 DOI: 10.1016/j.tics.2018.06.009] [Citation(s) in RCA: 45] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2018] [Revised: 05/08/2018] [Accepted: 06/28/2018] [Indexed: 01/10/2023]
Abstract
Recently acquired fMRI data from human and macaque infants provide novel insights into the origins of cortical networks specialized for perceiving faces. Data from both species converge: cortical regions responding preferentially to faces are present and spatially organized early in infancy, although fully selective face areas emerge much later. What explains the earliest cortical responses to faces? We review two proposed mechanisms: proto-organization for simple shapes in visual cortex, and an innate subcortical schematic face template. In addition, we propose a third mechanism: infants choose to look at faces to engage in positively valenced, contingent social interactions. Activity in medial prefrontal cortex during social interactions may, directly or indirectly, guide the organization of cortical face areas.
Affiliation(s)
- Lindsey J Powell: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Heather L Kosakowski: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Rebecca Saxe: Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
|
34
|
Hawkes K, Finlay BL. Mammalian brain development and our grandmothering life history. Physiol Behav 2018; 193:55-68. [DOI: 10.1016/j.physbeh.2018.01.013] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2017] [Revised: 01/15/2018] [Accepted: 01/16/2018] [Indexed: 11/28/2022]
|
35
|
Farmer H, Ciaunica A, Hamilton AFDC. The functions of imitative behaviour in humans. MIND & LANGUAGE 2018; 33:378-396. [PMID: 30333677 PMCID: PMC6175014 DOI: 10.1111/mila.12189] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
This article focuses on the question of the function of imitation and whether current accounts of imitative function are consistent with our knowledge about imitation's origins. We first review theories of imitative origin, concluding that empirical evidence suggests that imitation arises from domain-general learning mechanisms. Next, we lay out a selective account of function that allows normative functions to be ascribed to learned behaviours. We then describe and review four accounts of the function of imitation before evaluating the relationship between the claim that imitation arises out of domain-general learning mechanisms and theories of the function of imitation.
Affiliation(s)
- Harry Farmer: Institute of Cognitive Neuroscience, University College London, London, UK; Department of Psychology, University of Bath, Bath, UK
- Anna Ciaunica: Institute of Cognitive Neuroscience, University College London, London, UK; Institute of Philosophy, University of Porto, Porto, Portugal
|
36
|
Jayaraman S, Smith LB. Faces in early visual environments are persistent not just frequent. Vision Res 2018; 157:213-221. [PMID: 29852210 DOI: 10.1016/j.visres.2018.05.005] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2017] [Revised: 05/19/2018] [Accepted: 05/23/2018] [Indexed: 11/25/2022]
Abstract
The regularities in very young infants' visual worlds likely have outsized effects on the development of the visual system because they comprise the first-in experience that tunes, maintains, and specifies the neural substrate from low-level to higher-level representations, and therefore constitute the starting point for all other visual learning. Recent evidence from studies using head cameras suggests that the frequency of faces available in early infant visual environments declines over the first year and a half of life. The primary question for the present paper concerns the temporal structure of face experiences: Is frequency the key exposure dimension distinguishing younger and older infants' face experiences, or is it the duration for which faces remain in view? Our corpus of head-camera images, collected as infants went about their daily activities, consisted of over a million individually coded frames sampled at 0.2 Hz from 232 h of infant-perspective scenes recorded from 51 infants aged 1 month to 15 months. The major finding from this corpus is that very young infants (1-3 months) not only have more frequent face experiences but also more temporally persistent ones. Repetitions of the same very few face identities, presented up close and in frontal view, are concentrated in persistent runs of the same face, and these persistent runs are more frequent for the youngest infants. The implications for visual learning of early experiences consisting of extended repeated exposures to up-close frontal views are discussed.
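Persistence in this sense can be measured as the length of consecutive runs of sampled frames containing the same face identity, as opposed to the overall frequency of face frames. A minimal sketch of that run-length computation is below, operating on a hypothetical per-frame coding of face identity (None marks frames with no face).

```python
# Minimal sketch: frequency vs. persistence of faces in per-frame codings.
# 'frames' is a hypothetical sequence of face identities sampled at 0.2 Hz;
# None marks frames with no face in view.
from itertools import groupby

frames = ["mom", "mom", "mom", None, "mom", "mom", None, None, "sister", None]

# Frequency: proportion of sampled frames containing any face.
frequency = sum(f is not None for f in frames) / len(frames)

# Persistence: lengths of consecutive runs of the same face identity.
runs = [len(list(group)) for identity, group in groupby(frames) if identity is not None]
mean_run = sum(runs) / len(runs)

print(f"face frequency = {frequency:.2f}, face runs = {runs}, mean run length = {mean_run:.2f}")
```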
Affiliation(s)
- Swapnaa Jayaraman: Indiana University, 1101 E. 10th St., Bloomington, IN 47404, United States
- Linda B Smith: Indiana University, 1101 E. 10th St., Bloomington, IN 47404, United States
|
37
|
Smith LB, Jayaraman S, Clerkin E, Yu C. The Developing Infant Creates a Curriculum for Statistical Learning. Trends Cogn Sci 2018. [PMID: 29519675 DOI: 10.1016/j.tics.2018.02.004] [Citation(s) in RCA: 115] [Impact Index Per Article: 16.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
New efforts are using head cameras and eye-trackers worn by infants to capture everyday visual environments from the point of view of the infant learner. From this vantage point, the training sets for statistical learning develop as the sensorimotor abilities of the infant develop, yielding a series of ordered datasets for visual learning that differ in content and structure between timepoints but are highly selective at each timepoint. These changing environments may constitute a developmentally ordered curriculum that optimizes learning across many domains. Future advances in computational models will be necessary to connect the developmentally changing content and statistics of infant experience to the internal machinery that does the learning.
Affiliation(s)
- Linda B Smith: Psychological and Brain Sciences, Indiana University, 1101 East 10th Street, Bloomington, IN 47405, USA
- Swapnaa Jayaraman: Psychological and Brain Sciences, Indiana University, 1101 East 10th Street, Bloomington, IN 47405, USA
- Elizabeth Clerkin: Psychological and Brain Sciences, Indiana University, 1101 East 10th Street, Bloomington, IN 47405, USA
- Chen Yu: Psychological and Brain Sciences, Indiana University, 1101 East 10th Street, Bloomington, IN 47405, USA
|
38
|
Abstract
The fact that the face is a source of diverse social signals allows us to use face and person perception as a model system for asking important psychological questions about how our brains are organised. A key issue concerns whether we rely primarily on some form of generic representation of the common physical source of these social signals (the face) to interpret them, or instead create multiple representations by assigning different aspects of the task to different specialist components. Variants of the specialist components hypothesis have formed the dominant theoretical perspective on face perception for more than three decades, but despite this dominance of formally and informally expressed theories, the underlying principles and extent of any division of labour remain uncertain. Here, I discuss three important sources of constraint: first, the evolved structure of the brain; second, the need to optimise responses to different everyday tasks; and third, the statistical structure of faces in the perceiver's environment. I show how these constraints interact to determine the underlying functional organisation of face and person perception.
|
39
|
Smith LB, Slone LK. A Developmental Approach to Machine Learning? Front Psychol 2017; 8:2124. [PMID: 29259573 PMCID: PMC5723343 DOI: 10.3389/fpsyg.2017.02124] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2017] [Accepted: 11/21/2017] [Indexed: 11/13/2022] Open
Abstract
Visual learning depends on both the algorithms and the training material. This essay considers the natural statistics of infant- and toddler-egocentric vision. These natural training sets for human visual object recognition are very different from the training data fed into machine vision systems. Rather than equal experiences with all kinds of things, toddlers experience extremely skewed distributions with many repeated occurrences of a very few things. And though highly variable when considered as a whole, individual views of things are experienced in a specific order - with slow, smooth visual changes moment-to-moment, and developmentally ordered transitions in scene content. We propose that the skewed, ordered, biased visual experiences of infants and toddlers are the training data that allow human learners to develop a way to recognize everything, both the pervasively present entities and the rarely encountered ones. The joint consideration of real-world statistics for learning by researchers of human and machine learning seems likely to bring advances in both disciplines.
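The contrast drawn here, between the roughly uniform sampling typical of machine-vision training sets and the heavily skewed object distributions of toddler experience, can be made concrete with a small simulation. The sketch below compares a uniform category sampler with a Zipf-like one; the category counts and exponent are illustrative assumptions, not estimates from the paper.

```python
# Illustrative comparison of uniform vs. Zipf-like (skewed) training distributions,
# in the spirit of the contrast drawn in the abstract. Parameters are made up.
import numpy as np

rng = np.random.default_rng(42)
n_categories = 50
n_samples = 10_000

# Uniform sampling: every category is equally likely (typical machine-vision setup).
uniform_draws = rng.integers(0, n_categories, size=n_samples)

# Zipf-like sampling: a few categories dominate (toddler-like experience).
ranks = np.arange(1, n_categories + 1)
zipf_probs = 1.0 / ranks**1.5
zipf_probs /= zipf_probs.sum()
skewed_draws = rng.choice(n_categories, size=n_samples, p=zipf_probs)

def top_share(draws, k=5):
    """Fraction of all samples taken up by the k most frequent categories."""
    counts = np.bincount(draws, minlength=n_categories)
    return np.sort(counts)[-k:].sum() / counts.sum()

print(f"top-5 share, uniform: {top_share(uniform_draws):.2f}")
print(f"top-5 share, skewed:  {top_share(skewed_draws):.2f}")
```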
Affiliation(s)
- Linda B. Smith: Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, IN, United States
|
40
|
Pérez-Edgar K, Morales S, LoBue V, Taber-Thomas BC, Allen EK, Brown KM, Buss KA. The impact of negative affect on attention patterns to threat across the first 2 years of life. Dev Psychol 2017; 53:2219-2232. [PMID: 29022722 PMCID: PMC5705474 DOI: 10.1037/dev0000408] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
The current study examined the relations between individual differences in attention to emotion faces and temperamental negative affect across the first 2 years of life. Infant studies have noted a normative pattern of preferential attention to salient cues, particularly angry faces. A parallel literature suggests that elevated attention bias to threat is associated with anxiety, particularly if coupled with temperamental risk. Examining the emerging relations between attention to threat and temperamental negative affect may help distinguish normative from at-risk patterns of attention. Infants (N = 145) ages 4 to 24 months (M = 12.93 months, SD = 5.57) completed an eye-tracking task modeled on the attention-bias "dot-probe" task used with older children and adults. With age, infants spent more time attending to emotion faces, particularly threat faces. All infants displayed slower latencies to fixate incongruent versus congruent probes. Neither relation was moderated by temperament. Trial-by-trial analyses found that dwell time to the face was associated with latency to orient to subsequent probes, moderated by the infant's age and temperament. In young infants low in negative affect, longer processing of angry faces was associated with faster subsequent fixation to probes; young infants high in negative affect displayed the opposite pattern at a trend level. Findings suggest that although age was directly associated with an emerging bias to threat, the impact of processing threat on subsequent orienting was associated with both age and temperament. Early patterns of attention may shape how children respond to their environments, potentially via attention's gate-keeping role in framing a child's social world for processing.
Affiliation(s)
- Elizabeth K Allen: Department of Human Development and Family Studies, The Pennsylvania State University
- Kayla M Brown: Department of Psychology, The Pennsylvania State University
- Kristin A Buss: Department of Psychology, The Pennsylvania State University
|
41
|
Fausey CM, Jayaraman S, Smith LB. From faces to hands: Changing visual input in the first two years. Cognition 2016; 152:101-107. [PMID: 27043744 PMCID: PMC4856551 DOI: 10.1016/j.cognition.2016.03.005] [Citation(s) in RCA: 135] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2015] [Revised: 03/07/2016] [Accepted: 03/08/2016] [Indexed: 11/25/2022]
Abstract
Human development takes place in a social context. Two pervasive sources of social information are faces and hands. Here, we provide the first report of the visual frequency of faces and hands in the everyday scenes available to infants. These scenes were collected by having infants wear head cameras during unconstrained everyday activities. Our corpus of 143 hours of infant-perspective scenes, collected from 34 infants aged 1 month to 2 years, was sampled for analysis at 1/5 Hz. The major finding from this corpus is that the faces and hands of social partners are not equally available throughout the first two years of life. Instead, there is an earlier period of dense face input and a later period of dense hand input. At all ages, hands in these scenes were primarily in contact with objects and the spatio-temporal co-occurrence of hands and faces was greater than expected by chance. The orderliness of the shift from faces to hands suggests a principled transition in the contents of visual experiences and is discussed in terms of the role of developmental gates on the timing and statistics of visual experiences.
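The claim that hands and faces co-occur more often than expected by chance corresponds to comparing the observed joint frequency of face-and-hand frames against the product of their marginal frequencies. The sketch below shows that comparison on hypothetical per-frame codings; it is not the authors' analysis.

```python
# Illustrative check of face-hand co-occurrence against chance, using
# hypothetical boolean per-frame codings (True = present in the frame).
import numpy as np

rng = np.random.default_rng(7)
n_frames = 1000
face_present = rng.random(n_frames) < 0.4                    # hypothetical marginal rate
hand_present = face_present & (rng.random(n_frames) < 0.8)   # built to co-occur with faces
hand_present |= rng.random(n_frames) < 0.1                   # plus some hand-only frames

p_face = face_present.mean()
p_hand = hand_present.mean()
observed_joint = (face_present & hand_present).mean()
expected_by_chance = p_face * p_hand   # joint rate if presence were independent

print(f"observed co-occurrence = {observed_joint:.3f}, "
      f"chance level = {expected_by_chance:.3f}")
```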
Affiliation(s)
- Caitlin M Fausey: Department of Psychology, University of Oregon, Eugene, OR 97403, United States
- Swapnaa Jayaraman: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, United States
- Linda B Smith: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, United States
|