101
MacDonald K, Marchman VA, Fernald A, Frank MC. Children flexibly seek visual information to support signed and spoken language comprehension. J Exp Psychol Gen 2019; 149:1078-1096. [PMID: 31750713 DOI: 10.1037/xge0000702]
Abstract
During grounded language comprehension, listeners must link the incoming linguistic signal to the visual world despite uncertainty in the input. Information gathered through visual fixations can facilitate understanding. But do listeners flexibly seek supportive visual information? Here, we propose that even young children can adapt their gaze and actively gather information for the goal of language comprehension. We present 2 studies of eye movements during real-time language processing, where the value of fixating on a social partner varies across different contexts. First, compared with children learning spoken English (n = 80), young American Sign Language (ASL) learners (n = 30) delayed gaze shifts away from a language source and produced a higher proportion of language-consistent eye movements. This result provides evidence that ASL learners adapt their gaze to effectively divide attention between language and referents, which both compete for processing via the visual channel. Second, English-speaking preschoolers (n = 39) and adults (n = 31) fixated longer on a speaker's face while processing language in a noisy auditory environment. Critically, like the ASL learners in Experiment 1, this delay resulted in gathering more visual information and a higher proportion of language-consistent gaze shifts. Taken together, these studies suggest that young listeners can adapt their gaze to seek visual information from social partners to support real-time language comprehension. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
102
Infant-adult vocal interaction dynamics depend on infant vocal type, child-directedness of adult speech, and timeframe. Infant Behav Dev 2019; 57:101325. [DOI: 10.1016/j.infbeh.2019.04.007]
103
Arcaro MJ, Schade PF, Livingstone MS. Universal Mechanisms and the Development of the Face Network: What You See Is What You Get. Annu Rev Vis Sci 2019; 5:341-372. [PMID: 31226011 PMCID: PMC7568401 DOI: 10.1146/annurev-vision-091718-014917]
Abstract
Our assignment was to review the development of the face-processing network, an assignment that carries the presupposition that a face-specific developmental program exists. We hope to cast some doubt on this assumption and instead argue that the development of face processing is guided by the same ubiquitous rules that guide the development of cortex in general.
Affiliation(s)
- Michael J Arcaro
- Department of Neurobiology, Harvard Medical School, Boston, Massachusetts 02115, USA
- Peter F Schade
- Department of Neurobiology, Harvard Medical School, Boston, Massachusetts 02115, USA
104
Simpson EA, Maylott SE, Mitsven SG, Zeng G, Jakobsen KV. Face detection in 2- to 6-month-old infants is influenced by gaze direction and species. Dev Sci 2019; 23:e12902. [PMID: 31505079 DOI: 10.1111/desc.12902]
Abstract
Humans detect faces efficiently from a young age. Face detection is critical for infants to identify and learn from relevant social stimuli in their environments. Faces with eye contact are an especially salient stimulus, and attention to the eyes in infancy is linked to the emergence of later sociality. Despite the importance of both of these early social skills-attending to faces and attending to the eyes-surprisingly little is known about how they interact. We used eye tracking to explore whether eye contact influences infants' face detection. Longitudinally, we examined 2-, 4-, and 6-month-olds' (N = 65) visual scanning of complex image arrays with human and animal faces varying in eye contact and head orientation. Across all ages, infants displayed superior detection of faces with eye contact; however, this effect varied as a function of species and head orientation. Infants were more attentive to human than animal faces and were more sensitive to eye and head orientation for human faces compared to animal faces. Unexpectedly, human faces with both averted heads and eyes received the most attention. This pattern may reflect the early emergence of gaze following-the ability to look where another individual looks-which begins to develop around this age. Infants may be especially interested in averted gaze faces, providing early scaffolding for joint attention. This study represents the first investigation to document infants' attention patterns to faces systematically varying in their attentional states. Together, these findings suggest that infants develop early, specialized functional conspecific face detection.
Affiliation(s)
- Sarah E Maylott
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Guangyu Zeng
- Department of Psychology, University of Miami, Coral Gables, FL, USA
105
Yamamoto H, Sato A, Itakura S. Eye tracking in an everyday environment reveals the interpersonal distance that affords infant-parent gaze communication. Sci Rep 2019; 9:10352. [PMID: 31316101 PMCID: PMC6637119 DOI: 10.1038/s41598-019-46650-6]
Abstract
The unique morphology of human eyes enables gaze communication at various ranges of interpersonal distance. Although gaze communication contributes to infants' social development, little is known about how infant-parent distance affects infants' visual experience in daily gaze communication. The present study conducted longitudinal observations of infant-parent face-to-face interactions in the home environment as 5 infants aged from 10 to 15.5 months. Using head-mounted eye trackers worn by parents, we evaluated infants' daily visual experience of 3138 eye contact scenes recorded from the infants' second-person perspective. The results of a hierarchical Bayesian statistical analysis suggest that certain levels of interpersonal distance afforded smooth interaction with eye contact. Eye contacts were not likely to be exchanged when the infant and parent were too close or too far apart. The number of continuing eye contacts showed an inverse U-shaped pattern with interpersonal distance, regardless of whether the eye contact was initiated by the infant or the parent. However, the interpersonal distance was larger when the infant initiated the eye contact than when the parent initiated it, suggesting that interpersonal distance affects the infant's and parent's social look differently. Overall, the present study indicates that interpersonal distance modulates infant-parent gaze communication.
Affiliation(s)
- Hiroki Yamamoto
- Graduate School of Letters, Kyoto University, Yoshida Honmachi, Sakyo-ku, Kyoto, 606-8501, Japan
- Atsushi Sato
- Faculty of Human Development, University of Toyama, 3190 Gofuku, Toyama, 930-8555, Japan
- Shoji Itakura
- Graduate School of Letters, Kyoto University, Yoshida Honmachi, Sakyo-ku, Kyoto, 606-8501, Japan; Center for Baby Science, Doshisha University, 4-1-1 Kizugawadai, Kizugawa, 619-0225, Japan
106
Addabbo M, Vacaru SV, Meyer M, Hunnius S. 'Something in the way you move': Infants are sensitive to emotions conveyed in action kinematics. Dev Sci 2019; 23:e12873. [PMID: 31144771 DOI: 10.1111/desc.12873]
Abstract
Body movements, as well as faces, communicate emotions. Research in adults has shown that the perception of action kinematics has a crucial role in understanding others' emotional experiences. Still, little is known about infants' sensitivity to body emotional expressions, since most of the research in infancy focused on faces. While there is some first evidence that infants can recognize emotions conveyed in whole-body postures, it is still an open question whether they can extract emotional information from action kinematics. We measured electromyographic (EMG) activity over the muscles involved in happy (zygomaticus major, ZM), angry (corrugator supercilii, CS) and fearful (frontalis, F) facial expressions, while 11-month-old infants observed the same action performed with either happy or angry kinematics. Results demonstrate that infants responded to angry and happy kinematics with matching facial reactions. In particular, ZM activity increased while CS activity decreased in response to happy kinematics and vice versa for angry kinematics. Our results show for the first time that infants can rely on kinematic information to pick up on the emotional content of an action. Thus, from very early in life, action kinematics represent a fundamental and powerful source of information in revealing others' emotional state.
Affiliation(s)
- Margaret Addabbo
- Department of Psychology, University of Milano-Bicocca, Milano, Italy
- Stefania V Vacaru
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
- Marlene Meyer
- Department of Psychology, University of Chicago, Chicago, Illinois
- Sabine Hunnius
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, The Netherlands
107
Suanda SH, Barnhart M, Smith LB, Yu C. The Signal in the Noise: The Visual Ecology of Parents' Object Naming. Infancy 2019; 24:455-476. [PMID: 31551663 PMCID: PMC6759226 DOI: 10.1111/infa.12278]
Abstract
The uncertainty of reference has long been considered a key challenge for young word learners. Recent studies of head-camera wearing toddlers and their parents during object play have revealed that from toddlers' views the referents of parents' object naming are often visually quite clear. Although these studies have promising theoretical implications, they were all conducted in stripped-down laboratory contexts. The current study examines the visual referential clarity of parents' object naming during play in the home. Results revealed patterns of visual referential clarity that resembled previous laboratory studies. Furthermore, context analyses show that such clarity is largely a product of manual activity rather than the object-naming context. Implications for the mechanisms of early word learning are discussed.
108
Gordon G. Social behaviour as an emergent property of embodied curiosity: a robotics perspective. Philos Trans R Soc Lond B Biol Sci 2019; 374:20180029. [PMID: 30853006 PMCID: PMC6452242 DOI: 10.1098/rstb.2018.0029]
Abstract
Social interaction is an extremely complex yet vital component in daily life. We present a bottom-up approach for the emergence of social behaviours from the interaction of the curiosity drive, i.e. the intrinsic motivation to learn as much as possible, and the embedding environment of an agent. Implementing artificial curiosity algorithms in robots that explore human-like environments results in the emergence of a hierarchical structure of learning and behaviour. This structure resembles the sequential emergence of behavioural patterns in human babies, culminating in social behaviours, such as face detection, tracking and attention-grabbing facial expressions. These results suggest that an embodied curiosity drive may be the progenitor of many social behaviours if satiated by a social environment. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
Affiliation(s)
- Goren Gordon
- Curiosity Lab, Department of Industrial Engineering, Tel-Aviv University, Tel-Aviv, Israel
109
de Barbaro K. Automated sensing of daily activity: A new lens into development. Dev Psychobiol 2019; 61:444-464. [PMID: 30883745 PMCID: PMC7343175 DOI: 10.1002/dev.21831]
Abstract
Rapidly maturing technologies for sensing and activity recognition can provide unprecedented access to the complex structure of daily activity and interaction, promising new insight into the mechanisms by which experience shapes developmental outcomes. Motion data, autonomic activity, and "snippets" of audio and video recordings can be conveniently logged by wearable sensors (Lazer et al., 2009). Machine learning algorithms can process these signals into meaningful markers, from child and parent behavior to outcomes such as depression or teenage drinking. Theoretically motivated aspects of daily activity can be combined and synchronized to examine reciprocal effects between children's behaviors and their environments or internal processes. Captured over longitudinal time, such data provide a new opportunity to study the processes by which individual differences emerge and stabilize. This paper introduces the reader to developments in sensing and activity recognition with implications for developmental phenomena across the lifespan, sketching a framework for leveraging mobile sensors for transactional analyses that bridge micro- and longitudinal timescales of development. It finishes by detailing resources and best practices to help the next generation of developmentalists contribute to this emerging area.
Affiliation(s)
- Kaya de Barbaro
- Department of Psychology, The University of Texas at Austin, Austin, Texas
110
Oruc I, Shafai F, Murthy S, Lages P, Ton T. The adult face-diet: A naturalistic observation study. Vision Res 2019; 157:222-229. [DOI: 10.1016/j.visres.2018.01.001]
111
Mason GM, Goldstein MH, Schwade JA. The role of multisensory development in early language learning. J Exp Child Psychol 2019; 183:48-64. [PMID: 30856417 DOI: 10.1016/j.jecp.2018.12.011]
Abstract
In typical development, communicative skills such as language emerge from infants' ability to combine multisensory information into cohesive percepts. For example, the act of associating the visual or tactile experience of an object with its spoken name is commonly used as a measure of early word learning, and social attention and speech perception frequently involve integrating both visual and auditory attributes. Early perspectives once regarded perceptual integration as one of infants' primary challenges, whereas recent work suggests that caregivers' social responses contain structured patterns that may facilitate infants' perception of multisensory social cues. In the current review, we discuss the regularities within caregiver feedback that may allow infants to more easily discriminate and learn from social signals. We focus on the statistical regularities that emerge in the moment-by-moment behaviors observed in studies of naturalistic caregiver-infant play. We propose that the spatial form and contingencies of caregivers' responses to infants' looks and prelinguistic vocalizations facilitate communicative and cognitive development. We also explore how individual differences in infants' sensory and motor abilities may reciprocally influence caregivers' response patterns, in turn regulating and constraining the types of social learning opportunities that infants experience across early development. We end by discussing implications for neurodevelopmental conditions affecting both multisensory integration and communication (i.e., autism) and suggest avenues for further research and intervention.
Affiliation(s)
- Gina M Mason
- Department of Psychology, Cornell University, Ithaca, NY 14853, USA
112
Abstract
Humans are endowed with an exceptional ability for detecting faces, a competence that, in adults, is supported by a set of face-specific cortical patches. Human newborns, already shortly after birth, preferentially orient to faces, even when they are presented in the form of highly schematic geometrical patterns vs. perceptually equivalent nonfacelike stimuli. The neural substrates underlying this early preference are still largely unexplored. Is the adult face-specific cortical circuit already active at birth, or does its specialization develop slowly as a function of experience and/or maturation? We measured EEG responses in 1- to 4-day-old awake, attentive human newborns to schematic facelike patterns and nonfacelike control stimuli, visually presented with slow oscillatory "peekaboo" dynamics (0.8 Hz) in a frequency-tagging design. Despite the limited duration of newborns' attention, reliable frequency-tagged responses could be estimated for each stimulus from the peak of the EEG power spectrum at the stimulation frequency. Upright facelike stimuli elicited a significantly stronger frequency-tagged response than inverted facelike controls in a large set of electrodes. Source reconstruction of the underlying cortical activity revealed the recruitment of a partially right-lateralized network comprising lateral occipitotemporal and medial parietal areas overlapping with the adult face-processing circuit. This result suggests that the cortical route specialized in face processing is already functional at birth.
113
Rigato S, Banissy MJ, Romanska A, Thomas R, van Velzen J, Bremner AJ. Cortical signatures of vicarious tactile experience in four-month-old infants. Dev Cogn Neurosci 2019; 35:75-80. [PMID: 28942240 PMCID: PMC6968956 DOI: 10.1016/j.dcn.2017.09.003]
Abstract
The human brain recruits similar brain regions when a state is experienced (e.g., touch, pain, actions) and when that state is passively observed in other individuals. In adults, seeing other people being touched activates similar brain areas as when we experience touch ourselves. Here we show that already by four months of age, cortical responses to tactile stimulation are modulated by visual information specifying another person being touched. We recorded somatosensory evoked potentials (SEPs) in 4-month-old infants while they were presented with brief vibrotactile stimuli to the hands. At the same time that the tactile stimuli were presented the infants observed another person's hand being touched by a soft paintbrush or approached by the paintbrush which then touched the surface next to their hand. A prominent positive peak in SEPs contralateral to the site of tactile stimulation around 130 ms after the tactile stimulus onset was of a significantly larger amplitude for the "Surface" trials than for the "Hand" trials. These findings indicate that, even at four months of age, somatosensory cortex is not only involved in the personal experience of touch but can also be vicariously recruited by seeing other people being touched.
Affiliation(s)
- Silvia Rigato
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, CO4 3SQ, UK
- Michael J Banissy
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, UK
- Aleksandra Romanska
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, UK
- Rhiannon Thomas
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, UK
- José van Velzen
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, UK
- Andrew J Bremner
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, London, SE14 6NW, UK
114
childes-db: A flexible and reproducible interface to the child language data exchange system. Behav Res Methods 2019; 51:1928-1941. [PMID: 30623390 DOI: 10.3758/s13428-018-1176-7]
Abstract
The Child Language Data Exchange System (CHILDES) has played a critical role in research on child language development, particularly in characterizing the early language learning environment. Access to these data can be both complex for novices and difficult to automate for advanced users, however. To address these issues, we introduce childes-db, a database-formatted mirror of CHILDES that improves data accessibility and usability by offering novel interfaces, including browsable web applications and an R application programming interface (API). Along with versioned infrastructure that facilitates reproducibility of past analyses, these interfaces lower barriers to analyzing naturalistic parent-child language, allowing for a wider range of researchers in language and cognitive development to easily leverage CHILDES in their work.
115
Emberson LL. How does learning and memory shape perceptual development in infancy? Psychology of Learning and Motivation 2019. [DOI: 10.1016/bs.plm.2019.03.003]
116
Scott H, Batten JP, Kuhn G. Why are you looking at me? It's because I'm talking, but mostly because I'm staring or not doing much. Atten Percept Psychophys 2019; 81:109-118. [PMID: 30353500 PMCID: PMC6315010 DOI: 10.3758/s13414-018-1588-6]
Abstract
Our attention is particularly driven toward faces, especially the eyes, and there is much debate over the factors that modulate this social attentional orienting. Most of the previous research has presented faces in isolation, and we tried to address this shortcoming by measuring people's eye movements whilst they observe more naturalistic and varied social interactions. Participants' eye movements were monitored whilst they watched three different types of social interactions (monologue, manual activity, active attentional misdirection), which were either accompanied by the corresponding audio as speech or by silence. Our results showed that (1) participants spent more time looking at the face when the person was giving a monologue, than when he/she was carrying out manual activities, and in the latter case they spent more time fixating on the person's hands. (2) Hearing speech significantly increases the amount of time participants spent looking at the face (this effect was relatively small), although this was not accounted for by any increase in mouth-oriented gaze. (3) Participants spent significantly more time fixating on the face when direct eye contact was established, and this drive to establish eye contact was significantly stronger in the manual activities than during the monologue. These results highlight people's strategic top-down control over when they attend to faces and the eyes, and support the view that we use our eyes to signal non-verbal information.
Affiliation(s)
- Hannah Scott
- Department of Psychology, Goldsmiths, University of London, New Cross, London, SE14 6NW, UK
- Jonathan P Batten
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
- Gustav Kuhn
- Department of Psychology, Goldsmiths, University of London, New Cross, London, SE14 6NW, UK
117
Dahl A. The Science of Early Moral Development: On Defining, Constructing, and Studying Morality from Birth. Adv Child Dev Behav 2018; 56:1-35. [PMID: 30846044 DOI: 10.1016/bs.acdb.2018.11.001]
Abstract
The first 4 years of moral development are perhaps the most transformative. Helpless neonates become infants who routinely help and harm others; infants develop into preschoolers who make moral judgments based on moral concerns with welfare. Over the past two decades, there has been tremendous empirical progress, but also theoretical stalemates, in research on early moral development. To advance the field, this chapter argues for providing definitions of key terms, adopting an interactionist and constructivist approach (eschewing the dichotomy between innate and learned characteristics), and combining naturalistic and experimental methods. On this basis, the chapter reviews research on how children's orientations toward helping and harming others develop gradually through everyday social interactions in the early years. In these interactions, children play active roles through initiation, negotiation, protest, and construction. The chapter concludes with key questions for future research on early moral development.
Affiliation(s)
- Audun Dahl
- University of California, Santa Cruz, CA, United States
118
Karasik LB, Tamis-LeMonda CS, Ossmy O, Adolph KE. The ties that bind: Cradling in Tajikistan. PLoS One 2018; 13:e0204428. [PMID: 30379916 PMCID: PMC6209138 DOI: 10.1371/journal.pone.0204428]
Abstract
A traditional childrearing practice—“gahvora” cradling—in Tajikistan and other parts of Central Asia purportedly restricts movement of infants’ body and limbs. However, the practice has been documented only informally in anecdotal reports. Thus, this study had two research questions: (1) To what extent are infants’ movements restricted in the gahvora? (2) How is time in the gahvora distributed over a 24-hour day in infants from 1–24 months of age? To answer these questions, we video-recorded 146 mothers cradling their infants and interviewed them using 24-hour time diaries to determine the distribution of time infants spent in the gahvora within a day and across age. Infants’ movements were indeed severely restricted. Although mothers showed striking uniformity in how they restricted infants’ movements, they showed large individual differences in amount and distribution of daily use. Machine learning algorithms yielded three patterns of use: day and nighttime cradling, mostly nighttime cradling, and mostly daytime cradling, suggesting multiple functions of the cradling practice. Across age, time in the gahvora decreased, yet 20% of 12- to 24-month-olds spent more than 15 hours bound in the gahvora. We discuss the challenges and benefits of cultural research, and how the discovery of new phenomena may defy Western assumptions about childrearing and development. Future work will determine whether the extent and timing of restriction impacts infants’ physical and psychological development.
Affiliation(s)
- Lana B. Karasik
- Department of Psychology, College of Staten Island & Graduate Center, CUNY, Staten Island, New York, United States of America
- Ori Ossmy
- Department of Psychology, New York University, New York, New York, United States of America
- Karen E. Adolph
- Department of Psychology, New York University, New York, New York, United States of America
119
Franchak JM. Changing Opportunities for Learning in Everyday Life: Infant Body Position Over the First Year. Infancy 2018; 24:187-209. [DOI: 10.1111/infa.12272]
120
Borjon JI, Schroer SE, Bambach S, Slone LK, Abney DH, Crandall DJ, Smith LB. A View of Their Own: Capturing the Egocentric View of Infants and Toddlers with Head-Mounted Cameras. J Vis Exp 2018. [PMID: 30346402 DOI: 10.3791/58445]
Abstract
Infants and toddlers view the world, at a basic sensory level, in a fundamentally different way from their parents. This is largely due to biological constraints: infants possess different body proportions than their parents and the ability to control their own head movements is less developed. Such constraints limit the visual input available. This protocol aims to provide guiding principles for researchers using head-mounted cameras to understand the changing visual input experienced by the developing infant. Successful use of this protocol will allow researchers to design and execute studies of the developing child's visual environment set in the home or laboratory. From this method, researchers can compile an aggregate view of all the possible items in a child's field of view. This method does not directly measure exactly what the child is looking at. By combining this approach with machine learning, computer vision algorithms, and hand-coding, researchers can produce a high-density dataset to illustrate the changing visual ecology of the developing infant.
Affiliation(s)
- Jeremy I Borjon
- Department of Psychological and Brain Sciences, Indiana University
- Sara E Schroer
- Department of Psychological and Brain Sciences, Indiana University
- Sven Bambach
- School of Informatics, Computing, and Engineering, Indiana University
- Lauren K Slone
- Department of Psychological and Brain Sciences, Indiana University
- Drew H Abney
- Department of Psychological and Brain Sciences, Indiana University
- David J Crandall
- School of Informatics, Computing, and Engineering, Indiana University
- Linda B Smith
- Department of Psychological and Brain Sciences, Indiana University
121
Tsang T, Ogren M, Peng Y, Nguyen B, Johnson KL, Johnson SP. Infant perception of sex differences in biological motion displays. J Exp Child Psychol 2018; 173:338-350. [PMID: 29807312 PMCID: PMC5986598 DOI: 10.1016/j.jecp.2018.04.006]
Abstract
We examined mechanisms underlying infants' ability to categorize human biological motion stimuli from sex-typed walk motions, focusing on how visual attention to dynamic information in point-light displays (PLDs) contributes to infants' social category formation. We tested for categorization of PLDs produced by women and men by habituating infants to a series of female or male walk motions and then recording posthabituation preferences for new PLDs from the familiar or novel category (Experiment 1). We also tested for intrinsic preferences for female or male walk motions (Experiment 2). We found that infant boys were better able to categorize PLDs than were girls and that male PLDs were preferred overall. Neither of these effects was found to change with development across the observed age range (∼4-18 months). We conclude that infants' categorization of walk motions in PLDs is constrained by intrinsic preferences for higher motion speeds and higher spans of motion and, relatedly, by differences in walk motions produced by men and women.
Affiliation(s)
- Tawny Tsang
- University of California, Los Angeles, Los Angeles, CA 90095, USA
- Marissa Ogren
- University of California, Los Angeles, Los Angeles, CA 90095, USA
- Yujia Peng
- University of California, Los Angeles, Los Angeles, CA 90095, USA
- Bryan Nguyen
- University of California, Los Angeles, Los Angeles, CA 90095, USA
- Kerri L Johnson
- University of California, Los Angeles, Los Angeles, CA 90095, USA
- Scott P Johnson
- University of California, Los Angeles, Los Angeles, CA 90095, USA
122
Mason GM, Kirkpatrick F, Schwade JA, Goldstein MH. The Role of Dyadic Coordination in Organizing Visual Attention in 5-Month-Old Infants. Infancy 2018; 24:162-186. [PMID: 32677200 DOI: 10.1111/infa.12255]
123
Sheya A, Smith L. Development weaves brains, bodies and environments into cognition. Lang Cogn Neurosci 2018; 34:1266-1273. [PMID: 31886316 PMCID: PMC6934375 DOI: 10.1080/23273798.2018.1489065]
Abstract
Understanding how and why human cognition has the properties it does is one of science's fundamental questions. Current thinking in Cognitive Science has delineated two candidate approaches that differ in how they address the question of the relationship between sensory-motor and cognitive processes. In this paper, we add to this discussion by arguing that this question is properly phrased as a developmental question and that ultimately, to understand the properties of human cognition, we must ask how human cognition comes to have these properties. We conclude that because development weaves brains, bodies and environments into cognition, cognition is inexorably linked to processes of perceiving and acting and inseparable from them.
Affiliation(s)
- Adam Sheya
- Department of Psychological Sciences, University of Connecticut, Storrs, CT, USA
- Linda Smith
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
124
Jayaraman S, Smith LB. Faces in early visual environments are persistent not just frequent. Vision Res 2018; 157:213-221. [PMID: 29852210 DOI: 10.1016/j.visres.2018.05.005]
Abstract
The regularities in very young infants' visual worlds likely have out-sized effects on the development of the visual system because they comprise the first-in experience that tunes, maintains, and specifies the neural substrate from low-level to higher-level representations and therefore constitute the starting point for all other visual learning. Recent evidence from studies using head cameras suggests that the frequency of faces available in early infant visual environments declines over the first year and a half of life. The primary question for the present paper concerns the temporal structure of face experiences: Is frequency the key exposure dimension distinguishing younger and older infants' face experiences, or is it the duration for which faces remain in view? Our corpus of head-camera images collected as infants went about their daily activities consisted of over a million individually coded frames sampled at 0.2 Hz from 232 h of infant-perspective scenes, recorded from 51 infants aged 1 month to 15 months. The major finding from this corpus is that very young infants (1-3 months) not only have more frequent face experiences but also more temporally persistent ones. The repetitions of the same very few face identities presenting up-close and frontal views are exaggerated in more persistent runs of the same face, and these persistent runs are more frequent for the youngest infants. The implications of early experiences consisting of extended repeated exposures of up-close frontal views for visual learning are discussed.
Affiliation(s)
- Swapnaa Jayaraman
- Indiana University, 1101 E. 10th St., Bloomington, IN 47404, United States
- Linda B Smith
- Indiana University, 1101 E. 10th St., Bloomington, IN 47404, United States
125
Twomey KE, Ma L, Westermann G. All the Right Noises: Background Variability Helps Early Word Learning. Cogn Sci 2018; 42 Suppl 2:413-438. [PMID: 28940612 PMCID: PMC6001535 DOI: 10.1111/cogs.12539]
Abstract
Variability is prevalent in early language acquisition, but whether it supports or hinders learning is unclear; while target variability has been shown to facilitate word learning, variability in competitor items has been shown to make the task harder. Here, we tested whether background variability could boost learning in a referent selection task. Two groups of 2-year-old children saw arrays of one novel and two known objects on a screen, and they heard a novel or known label. Stimuli were identical across conditions, with the exception that in the constant color condition objects appeared on a uniform white background, and in the variable color condition backgrounds were different, uniform colors. At test, only children in the variable condition showed evidence of retaining label-object associations. These data support findings from the adult memory literature, which suggest that variability supports learning by decontextualizing representations. We argue that these data are consistent with dynamic systems accounts of learning in which low-level entropy adds sufficient noise to the developmental system to precipitate a change in behavior.
Affiliation(s)
- Lizhi Ma
- Department of Psychology, Lancaster University
126
Conceptual distortions of hand structure are robust to changes in stimulus information. Conscious Cogn 2018; 61:107-116. [DOI: 10.1016/j.concog.2018.01.002]
127
Smith LB, Jayaraman S, Clerkin E, Yu C. The Developing Infant Creates a Curriculum for Statistical Learning. Trends Cogn Sci 2018. [PMID: 29519675 DOI: 10.1016/j.tics.2018.02.004]
Abstract
New efforts are using head cameras and eye-trackers worn by infants to capture everyday visual environments from the point of view of the infant learner. From this vantage point, the training sets for statistical learning develop as the sensorimotor abilities of the infant develop, yielding a series of ordered datasets for visual learning that differ in content and structure between timepoints but are highly selective at each timepoint. These changing environments may constitute a developmentally ordered curriculum that optimizes learning across many domains. Future advances in computational models will be necessary to connect the developmentally changing content and statistics of infant experience to the internal machinery that does the learning.
Affiliation(s)
- Linda B Smith
- Psychological and Brain Sciences, Indiana University, 1101 East 10th Street, Bloomington, IN 47405, USA
- Swapnaa Jayaraman
- Psychological and Brain Sciences, Indiana University, 1101 East 10th Street, Bloomington, IN 47405, USA
- Elizabeth Clerkin
- Psychological and Brain Sciences, Indiana University, 1101 East 10th Street, Bloomington, IN 47405, USA
- Chen Yu
- Psychological and Brain Sciences, Indiana University, 1101 East 10th Street, Bloomington, IN 47405, USA
128
Abstract
The fact that the face is a source of diverse social signals allows us to use face and person perception as a model system for asking important psychological questions about how our brains are organised. A key issue concerns whether we rely primarily on some form of generic representation of the common physical source of these social signals (the face) to interpret them, or instead create multiple representations by assigning different aspects of the task to different specialist components. Variants of the specialist components hypothesis have formed the dominant theoretical perspective on face perception for more than three decades, but despite this dominance of formally and informally expressed theories, the underlying principles and extent of any division of labour remain uncertain. Here, I discuss three important sources of constraint: first, the evolved structure of the brain; second, the need to optimise responses to different everyday tasks; and third, the statistical structure of faces in the perceiver's environment. I show how these constraints interact to determine the underlying functional organisation of face and person perception.
129
Leong V, Byrne E, Clackson K, Georgieva S, Lam S, Wass S. Speaker gaze increases information coupling between infant and adult brains. Proc Natl Acad Sci U S A 2017; 114:13290-13295. [PMID: 29183980 PMCID: PMC5740679 DOI: 10.1073/pnas.1702493114]
Abstract
When infants and adults communicate, they exchange social signals of availability and communicative intention such as eye gaze. Previous research indicates that when communication is successful, close temporal dependencies arise between adult speakers' and listeners' neural activity. However, it is not known whether similar neural contingencies exist within adult-infant dyads. Here, we used dual-electroencephalography to assess whether direct gaze increases neural coupling between adults and infants during screen-based and live interactions. In experiment 1 (n = 17), infants viewed videos of an adult who was singing nursery rhymes with (i) direct gaze (looking forward), (ii) indirect gaze (head and eyes averted by 20°), or (iii) direct-oblique gaze (head averted but eyes orientated forward). In experiment 2 (n = 19), infants viewed the same adult in a live context, singing with direct or indirect gaze. Gaze-related changes in adult-infant neural network connectivity were measured using partial directed coherence. Across both experiments, the adult had a significant (Granger) causal influence on infants' neural activity, which was stronger during direct and direct-oblique gaze relative to indirect gaze. During live interactions, infants also influenced the adult more during direct than indirect gaze. Further, infants vocalized more frequently during live direct gaze, and individual infants who vocalized longer also elicited stronger synchronization from the adult. These results demonstrate that direct gaze strengthens bidirectional adult-infant neural connectivity during communication. Thus, ostensive social signals could act to bring brains into mutual temporal alignment, creating a joint-networked state that is structured to facilitate information transfer during early communication and learning.
Affiliation(s)
- Victoria Leong
- Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Division of Psychology, Nanyang Technological University, Singapore 637332, Republic of Singapore
- Elizabeth Byrne
- MRC Cognition & Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, United Kingdom
- Kaili Clackson
- Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Stanimira Georgieva
- Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Sarah Lam
- Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
- Sam Wass
- Division of Psychology, University of East London, London E16 2RD, United Kingdom
130
Smith LB, Slone LK. A Developmental Approach to Machine Learning? Front Psychol 2017; 8:2124. [PMID: 29259573 PMCID: PMC5723343 DOI: 10.3389/fpsyg.2017.02124]
Abstract
Visual learning depends on both the algorithms and the training material. This essay considers the natural statistics of infant- and toddler-egocentric vision. These natural training sets for human visual object recognition are very different from the training data fed into machine vision systems. Rather than equal experiences with all kinds of things, toddlers experience extremely skewed distributions with many repeated occurrences of a very few things. And though highly variable when considered as a whole, individual views of things are experienced in a specific order - with slow, smooth visual changes moment-to-moment, and developmentally ordered transitions in scene content. We propose that the skewed, ordered, biased visual experiences of infants and toddlers are the training data that allow human learners to develop a way to recognize everything, both the pervasively present entities and the rarely encountered ones. The joint consideration of real-world statistics for learning by researchers of human and machine learning seems likely to bring advances in both disciplines.
Affiliation(s)
- Linda B. Smith
- Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, IN, United States
131
Franchak JM, Kretch KS, Adolph KE. See and be seen: Infant-caregiver social looking during locomotor free play. Dev Sci 2017; 21:e12626. [PMID: 29071760 DOI: 10.1111/desc.12626]
Abstract
Face-to-face interaction between infants and their caregivers is a mainstay of developmental research. However, common laboratory paradigms for studying dyadic interaction oversimplify the act of looking at the partner's face by seating infants and caregivers face to face in stationary positions. In less constrained conditions when both partners are freely mobile, infants and caregivers must move their heads and bodies to look at each other. We hypothesized that face looking and mutual gaze for each member of the dyad would decrease with increased motor costs of looking. To test this hypothesis, 12-month-old crawling and walking infants and their parents wore head-mounted eye trackers to record eye movements of each member of the dyad during locomotor free play in a large toy-filled playroom. Findings revealed that increased motor costs decreased face looking and mutual gaze: Each partner looked less at the other's face when their own posture or the other's posture required more motor effort to gain visual access to the other's face. Caregivers mirrored infants' posture by spending more time down on the ground when infants were prone, perhaps to facilitate face looking. Infants looked more at toys than at their caregiver's face, but caregivers looked at their infant's face and at toys in equal amounts. Furthermore, infants looked less at toys and faces compared to studies that used stationary tasks, suggesting that the attentional demands differ in an unconstrained locomotor task. Taken together, findings indicate that ever-changing motor constraints affect real-life social looking.
Affiliation(s)
- John M Franchak
- Department of Psychology, University of California, Riverside, California, USA
- Kari S Kretch
- Department of Psychology, New York University, New York, USA
- Karen E Adolph
- Department of Psychology, New York University, New York, USA
132
Minar NJ, Lewkowicz DJ. Overcoming the other-race effect in infancy with multisensory redundancy: 10-12-month-olds discriminate dynamic other-race faces producing speech. Dev Sci 2017; 21:e12604. [PMID: 28944541 DOI: 10.1111/desc.12604]
Abstract
We tested 4-6- and 10-12-month-old infants to investigate whether the often-reported decline in infant sensitivity to other-race faces may reflect responsiveness to static or dynamic/silent faces rather than a general process of perceptual narrowing. Across three experiments, we tested discrimination of either dynamic own-race or other-race faces which were either accompanied by a speech syllable, no sound, or a non-speech sound. Results indicated that 4-6- and 10-12-month-old infants discriminated own-race as well as other-race faces accompanied by a speech syllable, that only the 10-12-month-olds discriminated silent own-race faces, and that 4-6-month-old infants discriminated own-race and other-race faces accompanied by a non-speech sound but that 10-12-month-old infants only discriminated own-race faces accompanied by a non-speech sound. Overall, the results suggest that the ORE reported to date reflects infant responsiveness to static or dynamic/silent faces rather than a general process of perceptual narrowing.
Affiliation(s)
- Nicholas J Minar
- Institute for the Study of Child Development, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
133
Synchronized practice helps bearded capuchin monkeys learn to extend attention while learning a tradition. Proc Natl Acad Sci U S A 2017; 114:7798-7805. [PMID: 28739944 DOI: 10.1073/pnas.1621071114]
Abstract
Culture extends biology in that the setting of development shapes the traditions that individuals learn, and over time, traditions evolve as occasional variations are learned by others. In humans, interactions with others impact the development of cognitive processes, such as sustained attention, that shape how individuals learn as well as what they learn. Thus, learning itself is impacted by culture. Here, we explore how social partners might shape the development of psychological processes impacting learning a tradition. We studied bearded capuchin monkeys learning a traditional tool-using skill, cracking nuts using stone hammers. Young monkeys practice components of cracking nuts with stones for years before achieving proficiency. We examined the time course of young monkeys' activity with nuts before, during, and following others' cracking nuts. Results demonstrate that the onset of others' cracking nuts immediately prompts young monkeys to start handling and percussing nuts, and they continue these activities while others are cracking. When others stop cracking nuts, young monkeys sustain the uncommon actions of percussing and striking nuts for shorter periods than the more common actions of handling nuts. We conclude that nut-cracking by adults can promote the development of sustained attention for the critical but less common actions that young monkeys must practice to learn this traditional skill. This work suggests that in nonhuman species, as in humans, socially specified settings of development impact learning processes as well as learning outcomes. Nonhumans, like humans, may be culturally variable learners.
134
135
Dahl A. Ecological Commitments: Why Developmental Science Needs Naturalistic Methods. Child Dev Perspect 2017; 11:79-84. [PMID: 28584562 PMCID: PMC5455774 DOI: 10.1111/cdep.12217]
Abstract
Much of developmental science aims to explain how or whether children's experiences influence their thoughts and actions. Developmental theories make assumptions and claims-what I call ecological commitments-about events outside research contexts. In this article, I argue that most developmental theories make ecological commitments about children's thoughts, actions, and experiences outside research contexts, and that these commitments sometimes go unstated and untested. I also argue that naturalistic methods can provide evidence for or against ecological commitments, and that naturalistic and experimental studies address unique yet complementary questions. Rather than argue for increasing the ecological validity of experiments or abandoning laboratory research, I propose reconsidering the relations among developmental theories, naturalistic methods, and laboratory experiments.
136
Azañón E, Camacho K, Morales M, Longo MR. The Sensitive Period for Tactile Remapping Does Not Include Early Infancy. Child Dev 2017; 89:1394-1404. [PMID: 28452406 DOI: 10.1111/cdev.12813]
Abstract
Visual input during development seems crucial for tactile spatial perception, given that late-blind, but not congenitally blind, people are impaired when skin-based and external tactile representations are in conflict (when crossing the limbs). To test whether there is a sensitive period during which visual input is necessary, 14 children (age = 7.95) and a teenager (LM; age = 17.38) deprived of early vision by cataracts, whose sight was restored during the first 5 months of life and at age 7, respectively, were tested. Tactile localization with arms crossed and uncrossed was measured. The children showed a crossing effect indistinguishable from that of a control group (Ns = 28, age = 8.24), whereas LM showed no crossing effect (Ns controls = 14, age = 20.78). This demonstrates a sensitive period which, critically, does not include early infancy.
Collapse
|
137
|
Bremner AJ. Multisensory Development: Calibrating a Coherent Sensory Milieu in Early Life. Curr Biol 2017; 27:R305-R307. [DOI: 10.1016/j.cub.2017.02.055] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
138
|
Clerkin EM, Hart E, Rehg JM, Yu C, Smith LB. Real-world visual statistics and infants' first-learned object names. Philos Trans R Soc Lond B Biol Sci 2017; 372:20160055. [PMID: 27872373 PMCID: PMC5124080 DOI: 10.1098/rstb.2016.0055] [Citation(s) in RCA: 99] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/29/2016] [Indexed: 11/12/2022] Open
Abstract
We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2 to 10 1/2 month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present, a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'.
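As a concrete illustration of the frequency analysis described above, the following minimal sketch (with invented frame annotations; it is not the authors' data or code) counts object-category appearances across sampled frames and reports how much of the total the most frequent categories account for, which is the signature of the right-skewed distribution the abstract reports.

```python
# Minimal sketch: tally object categories across sampled infant-perspective
# frames and check whether a small set of categories dominates the counts.
# The frame annotations below are invented placeholders.
from collections import Counter

# Each frame is annotated with the set of object categories in view (hypothetical data).
frames = [
    {"cup", "spoon", "bowl", "chair"},
    {"cup", "bowl", "table"},
    {"cup", "spoon", "bottle"},
    {"cup", "chair"},
    {"plate"},
]

counts = Counter(category for frame in frames for category in frame)
total = sum(counts.values())

# Rank categories by frequency; right skew means the top few categories
# account for most appearances.
ranked = counts.most_common()
top_share = sum(n for _, n in ranked[:3]) / total
print(f"{len(ranked)} categories; top 3 account for {top_share:.0%} of appearances")
for category, n in ranked:
    print(f"{category:8s} {n}")
```

On a real corpus the same tally, applied to thousands of frames, would show whether a handful of categories are pervasively in view.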
Collapse
Affiliation(s)
- Elizabeth M Clerkin
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
| | - Elizabeth Hart
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
| | - James M Rehg
- Interactive Computing, Georgia Institute of Technology, Atlanta, GA 30332, USA
| | - Chen Yu
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
| | - Linda B Smith
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
| |
Collapse
|
139
|
Jayaraman S, Fausey CM, Smith LB. Why are faces denser in the visual experiences of younger than older infants? Dev Psychol 2017; 53:38-49. [PMID: 28026190 PMCID: PMC5271576 DOI: 10.1037/dev0000230] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Recent evidence from studies using head cameras suggests that the frequency of faces directly in front of infants declines over the first year and a half of life, a result that has implications for the development of and evolutionary constraints on face processing. Two experiments tested 2 opposing hypotheses about this observed age-related decline in the frequency of faces in infant views. By the people-input hypothesis, there are more faces in view for younger infants because people are more often physically in front of younger than older infants. This hypothesis predicts that not just faces but views of other body parts will decline with age. By the face-input hypothesis, the decline is strictly about faces, not people or other body parts in general. Two experiments, one using a time-sampling method (84 infants, 3 to 24 months of age) and the other analyzing head camera images (36 infants, 1 to 24 months), provide strong support for the face-input hypothesis. The results suggest developmental constraints on the environment that ensure faces are prevalent early in development.
Collapse
Affiliation(s)
| | | | - Linda B Smith
- Department of Psychological and Brain Sciences, Indiana University
| |
Collapse
|
140
|
Adolph KE, Franchak JM. The development of motor behavior. WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2017; 8:10.1002/wcs.1430. [PMID: 27906517 PMCID: PMC5182199 DOI: 10.1002/wcs.1430] [Citation(s) in RCA: 136] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/07/2016] [Revised: 10/10/2016] [Accepted: 10/14/2016] [Indexed: 12/27/2022]
Abstract
This article reviews research on the development of motor behavior from a developmental systems perspective. We focus on infancy, when basic action systems are acquired. Posture provides a stable base for locomotion, manual actions, and facial actions. Experience facilitates improvements in motor behavior and infants accumulate immense amounts of experience with all of their basic action systems. At every point in development, perception guides motor behavior by providing feedback about the results of just prior movements and information about what to do next. Reciprocally, the development of motor behavior provides fodder for perception. More generally, motor development brings about new opportunities for acquiring knowledge about the world, and burgeoning motor skills can instigate cascades of developmental changes in perceptual, cognitive, and social domains.
Collapse
Affiliation(s)
- Karen E Adolph
- Department of Psychology, New York University, New York, NY, USA
| | - John M Franchak
- Department of Psychology, University of California, Riverside, CA, USA
| |
Collapse
|
141
|
Marblestone AH, Wayne G, Kording KP. Toward an Integration of Deep Learning and Neuroscience. Front Comput Neurosci 2016; 10:94. [PMID: 27683554 PMCID: PMC5021692 DOI: 10.3389/fncom.2016.00094] [Citation(s) in RCA: 251] [Impact Index Per Article: 27.9] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2016] [Accepted: 08/24/2016] [Indexed: 01/22/2023] Open
Abstract
Neuroscience has focused on the detailed implementation of computation, studying neural codes, dynamics and circuits. In machine learning, however, artificial neural networks tend to eschew precisely designed codes, dynamics or circuits in favor of brute force optimization of a cost function, often using simple and relatively uniform initial architectures. Two recent developments have emerged within machine learning that create an opportunity to connect these seemingly divergent perspectives. First, structured architectures are used, including dedicated systems for attention, recursion and various forms of short- and long-term memory storage. Second, cost functions and training procedures have become more complex and are varied across layers and over time. Here we think about the brain in terms of these ideas. We hypothesize that (1) the brain optimizes cost functions, (2) the cost functions are diverse and differ across brain locations and over development, and (3) optimization operates within a pre-structured architecture matched to the computational problems posed by behavior. In support of these hypotheses, we argue that a range of implementations of credit assignment through multiple layers of neurons are compatible with our current knowledge of neural circuitry, and that the brain's specialized systems can be interpreted as enabling efficient optimization for specific problem classes. Such a heterogeneously optimized system, enabled by a series of interacting cost functions, serves to make learning data-efficient and precisely targeted to the needs of the organism. We suggest directions by which neuroscience could seek to refine and test these hypotheses.
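The hypotheses above are verbal, but the idea of diverse cost functions that change over development can be made concrete in a few lines. The toy sketch below (my illustration, with invented data and architecture, not the authors' model) trains one small network on a weighted mixture of an unsupervised reconstruction cost and a supervised task cost, with the mixture weight scheduled over training so the unsupervised term dominates early and the task term dominates later.

```python
# Toy sketch: one encoder trained on a mixture of two cost functions whose
# weights shift over "development" (training time). All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # inputs
y = X @ rng.normal(size=5)                     # targets for the supervised cost

W_enc = rng.normal(scale=0.1, size=(5, 3))     # shared encoder
W_dec = rng.normal(scale=0.1, size=(3, 5))     # decoder used by the reconstruction cost
w_out = rng.normal(scale=0.1, size=3)          # readout used by the task cost

lr = 0.05
for step in range(500):
    alpha = max(0.0, 1.0 - step / 250)         # reconstruction weight decays over development
    h = X @ W_enc
    recon_err = h @ W_dec - X                  # unsupervised term
    task_err = h @ w_out - y                   # supervised term

    # Backprop by hand for the combined cost
    # alpha * mean(recon_err**2) + (1 - alpha) * mean(task_err**2),
    # with constant factors folded into the learning rate.
    g_h = alpha * recon_err @ W_dec.T + (1 - alpha) * np.outer(task_err, w_out)
    W_enc -= lr * X.T @ g_h / len(X)
    W_dec -= lr * alpha * h.T @ recon_err / len(X)
    w_out -= lr * (1 - alpha) * h.T @ task_err / len(X)

    if step % 100 == 0:
        print(f"step {step:3d}  alpha={alpha:.2f}  "
              f"recon={float((recon_err ** 2).mean()):.3f}  "
              f"task={float((task_err ** 2).mean()):.3f}")
```

The scheduling of alpha is the only point of the sketch: the same circuitry is optimized throughout, but which cost function shapes it differs over time, loosely mirroring the claim that cost functions are diverse across locations and over development.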
Collapse
Affiliation(s)
- Adam H. Marblestone
- Synthetic Neurobiology Group, Massachusetts Institute of Technology, Media Lab, Cambridge, MA, USA
| | | | - Konrad P. Kording
- Rehabilitation Institute of Chicago, Northwestern University, Chicago, IL, USA
| |
Collapse
|
142
|
Fausey CM, Jayaraman S, Smith LB. From faces to hands: Changing visual input in the first two years. Cognition 2016; 152:101-107. [PMID: 27043744 PMCID: PMC4856551 DOI: 10.1016/j.cognition.2016.03.005] [Citation(s) in RCA: 135] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2015] [Revised: 03/07/2016] [Accepted: 03/08/2016] [Indexed: 11/25/2022]
Abstract
Human development takes place in a social context. Two pervasive sources of social information are faces and hands. Here, we provide the first report of the visual frequency of faces and hands in the everyday scenes available to infants. These scenes were collected by having infants wear head cameras during unconstrained everyday activities. Our corpus of 143 hours of infant-perspective scenes, collected from 34 infants aged 1 month to 2 years, was sampled for analysis at 1/5 Hz. The major finding from this corpus is that the faces and hands of social partners are not equally available throughout the first two years of life. Instead, there is an earlier period of dense face input and a later period of dense hand input. At all ages, hands in these scenes were primarily in contact with objects and the spatio-temporal co-occurrence of hands and faces was greater than expected by chance. The orderliness of the shift from faces to hands suggests a principled transition in the contents of visual experiences and is discussed in terms of the role of developmental gates on the timing and statistics of visual experiences.
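To make the sampling rate and the co-occurrence claim concrete, here is a minimal sketch (random placeholder annotations, not the authors' corpus or pipeline): 143 hours sampled at one frame every 5 s (1/5 Hz) is roughly 103,000 frames, and whether faces and hands co-occur more often than chance can be checked against a shuffled baseline. Because the data below are random, no effect is expected; the sketch only shows the calculation.

```python
# Minimal sketch: per-frame face/hand annotations sampled at 1/5 Hz and a
# shuffle-based chance baseline for their co-occurrence. Data are invented.
import random

random.seed(1)

# 143 hours at one frame per 5 seconds would give about 103,000 frames;
# a small invented subset is used here.
n_frames = 2000
face_in_view = [random.random() < 0.3 for _ in range(n_frames)]
hand_in_view = [random.random() < 0.4 for _ in range(n_frames)]

observed = sum(f and h for f, h in zip(face_in_view, hand_in_view)) / n_frames
independent = (sum(face_in_view) / n_frames) * (sum(hand_in_view) / n_frames)

# Chance baseline: shuffle one stream many times and recompute co-occurrence.
null_rates = []
for _ in range(1000):
    shuffled = random.sample(hand_in_view, n_frames)
    null_rates.append(sum(f and h for f, h in zip(face_in_view, shuffled)) / n_frames)
p_value = sum(rate >= observed for rate in null_rates) / len(null_rates)

print(f"observed co-occurrence {observed:.3f}, "
      f"independence baseline {independent:.3f}, p = {p_value:.3f}")
```

In a real analysis the annotations would come from coded head-camera frames, and a co-occurrence rate well above the shuffled baseline would support the greater-than-chance claim in the abstract.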
Collapse
Affiliation(s)
- Caitlin M Fausey
- Department of Psychology, University of Oregon, Eugene, OR 97403, United States.
| | - Swapnaa Jayaraman
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, United States
| | - Linda B Smith
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, United States
| |
Collapse
|