1. Comparing apples to manzanas and oranges to naranjas: A new measure of English-Spanish vocabulary for dual language learners. Infancy 2024; 29:302-326. PMID: 38217508; PMCID: PMC11019594; DOI: 10.1111/infa.12571.
Abstract
The valid assessment of vocabulary development in dual-language-learning infants is critical to developmental science. We developed the Dual Language Learners English-Spanish (DLL-ES) Inventories to measure vocabularies of U.S. English-Spanish DLLs. The inventories provide translation equivalents for all Spanish and English items on Communicative Development Inventory (CDI) short forms; extended inventories based on CDI long forms; and Spanish language-variety options. Item-Response Theory analyses applied to Wordbank and Web-CDI data (n = 2603, 12-18 months; n = 6722, 16-36 months; half female; 1% Asian, 3% Black, 2% Hispanic, 30% White, 64% unknown) showed near-perfect associations between DLL-ES and CDI long-form scores. Interviews with 10 Hispanic mothers of 18- to 24-month-olds (2 White, 1 Black, 7 multi-racial; 6 female) provide a proof of concept for the value of the DLL-ES for assessing the vocabularies of DLLs.
2. It's not just what we don't know: The mapping problem in the acquisition of negation. Cogn Psychol 2023; 145:101592. PMID: 37567048; DOI: 10.1016/j.cogpsych.2023.101592.
Abstract
How do learners learn what no and not mean when they are only presented with what is? Given its complexity, abstractness, and roles in logic, truth-functional negation might be a conceptual accomplishment. As a result, young children's gradual acquisition of negation words might be due to their undergoing a gradual conceptual change that is necessary to represent those words' logical meaning. However, it's also possible that linguistic expressions of negation take time to learn because of children's gradually increasing grasp of their language. To understand what no and not mean, children might first need to understand the rest of the sentences in which those words are used. We provide experimental evidence that conceptually equipped learners (adults) face the same acquisition challenges that children do when their access to linguistic information is restricted, which simulates how much language children understand at different points in acquisition. When watching a silenced video of naturalistic uses of negators by parents speaking to their children, adults could tell when the parent was prohibiting the child but struggled to infer that negators were used to express logical negation. However, when provided with additional information about what else the parent said, adults found it easy to guess that the parent had expressed logical negation. Though our findings do not rule out that young learners also undergo conceptual change, they show that increasing understanding of language alone, with no accompanying conceptual change, can account for the gradual acquisition of negation words.
3. Children's Early Spontaneous Comparisons Predict Later Analogical Reasoning Skills: An Investigation of Parental Influence. Open Mind (Camb) 2023; 7:483-509. PMID: 37637299; PMCID: PMC10449400; DOI: 10.1162/opmi_a_00093.
Abstract
Laboratory studies have demonstrated beneficial effects of making comparisons on children's analogical reasoning skills. We extend this finding to an observational dataset comprising 42 children. The prevalence of specific comparisons, which identify a feature of similarity or difference, in children's spontaneous speech from 14-58 months is associated with higher scores in tests of verbal and non-verbal analogy in 6th grade. We test two pre-registered hypotheses about how parents influence children's production of specific comparisons: 1) via modelling, where parents produce specific comparisons during the sessions prior to child onset of this behaviour; 2) via responsiveness, where parents respond to their children's earliest specific comparisons in variably engaged ways. We do not find that parent modelling or responsiveness predicts children's production of specific comparisons. However, one of our pre-registered control analyses suggests that parents' global comparisons-comparisons that do not identify a specific feature of similarity or difference-may bootstrap children's later production of specific comparisons, controlling for parent IQ. We present exploratory analyses following up on this finding and suggest avenues for future confirmatory research. The results illuminate a potential route by which parents' behaviour may influence children's early spontaneous comparisons and potentially their later analogical reasoning skills.
4. Universal Constraints on Linguistic Event Categories: A Cross-Cultural Study of Child Homesign. Psychol Sci 2023; 34:298-312. PMID: 36608154; DOI: 10.1177/09567976221140328.
Abstract
Languages carve up conceptual space in varying ways; for example, English uses the verb cut both for cutting with a knife and for cutting with scissors, but other languages use distinct verbs for these events. We asked whether, despite this variability, there are universal constraints on how languages categorize events involving tools (e.g., knife-cutting). We analyzed descriptions of tool events from two groups: (a) 43 hearing adult speakers of English, Spanish, and Chinese and (b) 10 deaf child homesigners ages 3 to 11 (each of whom has created a gestural language without input from a conventional language model) in five different countries (Guatemala, Nicaragua, United States, Taiwan, Turkey). We found alignment across these two groups: events that elicited tool-prominent language among the spoken-language users also elicited tool-prominent language among the homesigners. These results suggest ways of conceptualizing tool events that are so prominent as to constitute a universal constraint on how events are categorized in language.
5. Young children interpret number gestures differently than nonsymbolic sets. Dev Sci 2022; 26:e13335. PMID: 36268613; DOI: 10.1111/desc.13335.
Abstract
Researchers have long been interested in the origins of humans' understanding of symbolic number, focusing primarily on how children learn the meanings of number words (e.g., "one", "two", etc.). However, recent evidence indicates that children learn the meanings of number gestures before learning number words. In the present set of experiments, we ask whether children's early knowledge of number gestures resembles their knowledge of nonsymbolic number. In four experiments, we show that preschool children (n = 139 in total; age M = 4.14 years, SD = 0.71, range = 2.75-6.20) do not view number gestures in the same way that they view nonsymbolic representations of quantity (i.e., arrays of shapes), which opens the door to the possibility that young children view number gestures as symbolic, as adults and older children do.
6. Gesture is the primary modality for language creation. Proc Biol Sci 2022; 289:20220066. PMID: 35259991; PMCID: PMC8905156; DOI: 10.1098/rspb.2022.0066.
Abstract
How language began is one of the oldest questions in science, but theories remain speculative due to a lack of direct evidence. Here, we report two experiments that generate empirical evidence to inform gesture-first and vocal-first theories of language origin; in each, we tested modern humans' ability to communicate a range of meanings (995 distinct words) using either gesture or non-linguistic vocalization. Experiment 1 is a cross-cultural study, with signal Producers sampled from Australia (n = 30, mean age = 32.63 years, s.d. = 12.42) and Vanuatu (n = 30, mean age = 32.40 years, s.d. = 11.76). Experiment 2 is a cross-experiential study in which Producers were either sighted (n = 10, mean age = 39.60 years, s.d. = 11.18) or severely vision-impaired (n = 10, mean age = 39.40 years, s.d. = 10.37). A group of undergraduate student Interpreters (n = 140) guessed the meaning of the signals created by the Producers. Communication success was substantially higher in the gesture modality than in the vocal modality (twice as high overall; 61.17% versus 29.04% success). This was true within cultures, across cultures, and even for the signals produced by severely vision-impaired participants. The success of gesture is attributed in part to its greater universality (i.e. similarity in form across different Producers). Our results support the hypothesis that gesture is the primary modality for language creation.
7.
Abstract
Why do people gesture when they speak? According to one influential proposal, the Lexical Retrieval Hypothesis (LRH), gestures serve a cognitive function in speakers' minds by helping them find the right spatial words. Do gestures also help speakers find the right words when they talk about abstract concepts that are spatialized metaphorically? If so, then preventing people from gesturing should increase the rate of disfluencies during speech about both literal and metaphorical space. Here, we sought to conceptually replicate the finding that preventing speakers from gesturing increases disfluencies in speech with literal spatial content (e.g., the rocket went up), which has been interpreted as evidence for the LRH, and to extend this pattern to speech with metaphorical spatial content (e.g., my grades went up). Across three measures of speech disfluency (disfluency rate, speech rate, and rate of nonjuncture filled pauses), we found no difference in disfluency between speakers who were allowed to gesture freely and speakers who were not allowed to gesture, for any category of speech (literal spatial content, metaphorical spatial content, and no spatial content). This large dataset (7,969 phrases containing 2,075 disfluencies) provided no support for the idea that gestures help speakers find the right words, even for speech with literal spatial content. Upon reexamining studies cited as evidence for the LRH and related proposals over the past 5 decades, we conclude that there is, in fact, no reliable evidence that preventing gestures impairs speaking. Together, these findings challenge long-held beliefs about why people gesture when they speak.
8. Personal narrative as a 'breeding ground' for higher-order thinking talk in early parent-child interactions. Dev Psychol 2021; 57:519-534. PMID: 34483346; DOI: 10.1037/dev0001166.
Abstract
Personal narrative is decontextualized talk in which individuals recount stories of personal experiences about past or future events. As an everyday discursive speech type, narrative potentially invites parents and children to explicitly link together, generalize from, and make inferences about representations; that is, to engage in higher-order thinking talk (HOTT). Here we ask whether narratives in early parent-child interactions include proportionally more HOTT than other forms of everyday home language. Sixty-four children (31 girls; 36 White, 14 Black, 8 Hispanic, 6 mixed/other race) and their primary caregiver(s) (mean income = $61,000) were recorded in 90-minute spontaneous home interactions every 4 months from 14 to 58 months. Speech was transcribed and coded for narrative and HOTT. We found that parents at all visits, and children after 38 months, used more HOTT in narrative than in non-narrative talk, and more HOTT than expected by chance. At 38 and 50 months, we examined HOTT in a related but distinct form of decontextualized talk, pretend (talk during imaginary episodes of interaction), as a control to test whether other forms of decontextualized talk also relate to HOTT. While pretend contained more HOTT than other (non-narrative/non-pretend) talk, it generally contained less HOTT than narrative. Additionally, unlike HOTT during narrative, the amount of HOTT during pretend did not exceed the amount expected by chance, suggesting that narrative serves as a particularly rich 'breeding ground' for HOTT in parent-child interactions. These findings provide insight into the nature of narrative discourse and suggest that narrative may potentially be used as a lever to increase children's higher-order thinking.
9. Parent Language Input Prior to School Forecasts Change in Children's Language-Related Cortical Structures During Mid-Adolescence. Front Hum Neurosci 2021; 15:650152. PMID: 34408634; PMCID: PMC8366586; DOI: 10.3389/fnhum.2021.650152.
Abstract
Children differ widely in their early language development, and this variability has important implications for later life outcomes. Parent language input is a strong experiential factor predicting the variability in children's early language skills. However, little is known about the brain or cognitive mechanisms that underlie this relationship. To address this gap, we used longitudinal data spanning 15 years to examine how the parental language input that children receive during the preschool years relates to the development of brain structures that support language processing during the school years. Using naturalistic parent-child interactions, we measured parental language input (amount and complexity) to children between the ages of 18 and 42 months (n = 23). We then assessed longitudinal changes in children's cortical thickness measured at five time points between 9 and 16 years of age. We focused on specific regions of interest (ROIs) that have been shown to play a role in language processing. Our results support the view that, even after accounting for important covariates such as parental intelligence quotient (IQ) and education, the amount and complexity of language input to a young child prior to school forecast the rate of change in cortical thickness during the 7-year period from 5½ to 12½ years later. Examining the proximal correlates of change in brain and cognitive differences has the potential to inform targets for effective prevention and intervention strategies.
10. Sign language, like spoken language, promotes object categorization in young hearing infants. Cognition 2021; 215:104845. PMID: 34273677; DOI: 10.1016/j.cognition.2021.104845.
Abstract
The link between language and cognition is unique to our species and emerges early in infancy. Here, we provide the first evidence that this precocious language-cognition link is not limited to spoken language, but is instead sufficiently broad to include sign language, a language presented in the visual modality. Four- to six-month-old hearing infants, never before exposed to sign language, were familiarized to a series of category exemplars, each presented by a woman who either signed in American Sign Language (ASL) while pointing and gazing toward the objects, or pointed and gazed without language (control). At test, infants viewed two images: one, a new member of the now-familiar category; and the other, a member of an entirely new category. Four-month-old infants who observed ASL distinguished between the two test objects, indicating that they had successfully formed the object category; they were as successful as age-mates who listened to their native (spoken) language. Moreover, it was specifically the linguistic elements of sign language that drove this facilitative effect: infants in the control condition, who observed the woman only pointing and gazing, failed to form object categories. Finally, the cognitive advantages of observing ASL quickly narrow in hearing infants: by 5 to 6 months, watching ASL no longer supports categorization, although listening to their native spoken language continues to do so. Together, these findings illuminate the breadth of infants' early link between language and cognition and offer insight into how it unfolds.
11. People Are Less Susceptible to Illusion When They Use Their Hands to Communicate Rather Than Estimate. Psychol Sci 2021; 32:1227-1237. PMID: 34240647; DOI: 10.1177/0956797621991552.
Abstract
When we use our hands to estimate the length of a stick in the Müller-Lyer illusion, we are highly susceptible to the illusion. But when we prepare to act on sticks under the same conditions, we are significantly less susceptible. Here, we asked whether people are susceptible to illusion when they use their hands not to act on objects but to describe them in spontaneous co-speech gestures or conventional sign languages of the deaf. Thirty-two English speakers and 13 American Sign Language signers used their hands to act on, estimate the length of, and describe sticks eliciting the Müller-Lyer illusion. For both gesture and sign, the magnitude of illusion in the description task was smaller than the magnitude of illusion in the estimation task and not different from the magnitude of illusion in the action task. The mechanisms responsible for producing gesture in speech and sign thus appear to operate not on percepts involved in estimation but on percepts derived from the way we act on objects.
12. Changing language input following market integration in a Yucatec Mayan community. PLoS One 2021; 16:e0252926. PMID: 34153044; PMCID: PMC8216532; DOI: 10.1371/journal.pone.0252926.
Abstract
Like many indigenous populations worldwide, Yucatec Maya communities are rapidly undergoing change as they become more connected with urban centers and as formal education, wage labour, and market goods become more accessible to their inhabitants. However, little is known about how these changes affect children's language input. Here, we provide the first systematic assessment of the quantity, type, source, and language of the input received by 29 Yucatec Maya infants born six years apart in communities where increased contact with urban centres has resulted in greater exposure to the dominant surrounding language, Spanish. Results show that infants from the second cohort received less directed input than infants in the first and that, when directly addressed, most of their input was in Spanish. To investigate the mechanisms driving the observed patterns, we interviewed 126 adults from the communities. Against common assumptions, we showed that reductions in Mayan input did not simply result from speakers devaluing the Maya language. Instead, changes in input could be attributed to changes in childcare practices, as well as to caregiver ethnotheories regarding the relative acquisition difficulty of each of the languages. Our study highlights the need to understand the drivers of individual behaviour in the face of socio-demographic and economic change, as such understanding is key for determining the fate of linguistic diversity.
13. Emergent Morphology in Child Homesign: Evidence from Number Language. Lang Learn Dev 2021; 18:16-40. PMID: 35603228; PMCID: PMC9122328; DOI: 10.1080/15475441.2021.1922281.
Abstract
Human languages, signed and spoken, can be characterized by the structural patterns they use to associate communicative forms with meanings. One such pattern is paradigmatic morphology, where complex words are built from the systematic use and re-use of sub-lexical units. Here, we provide evidence of emergent paradigmatic morphology akin to number inflection in homesign, a communication system developed without input from a conventional language. We study the communication systems of four deaf child homesigners (mean age 8;02). Although these idiosyncratic systems vary from one another, we nevertheless find that all four children use handshape and movement devices productively to express cardinal and non-cardinal number information, and that their number expressions are consistent in both form and meaning. Our study shows, for the first time, that all four homesigners incorporate number devices not only into representational devices used as predicates, but also into gestures functioning as nominals, including deictic gestures. In other words, the homesigners express number by systematically combining and re-combining additive markers for number (qua inflectional morphemes) with representational and deictic gestures (qua bases). The creation of new, complex forms with predictable meanings across gesture types and linguistic functions constitutes evidence for an inflectional morphological paradigm in homesign and expands our understanding of the structural patterns of language that are, and are not, dependent on linguistic input.
14. Unpacking the gestures of chemistry learners: What the hands tell us about correct and incorrect conceptions of stereochemistry. Discourse Process 2021; 58:213-232. PMID: 34024962; DOI: 10.1080/0163853x.2020.1839343.
Abstract
In this study, adults who were naïve to organic chemistry drew stereoisomers of molecules and explained their drawings. We identified nine strategies that participants expressed during these explanations. Five of the nine strategies referred to properties of the molecule that were explanatorily irrelevant to solving the problem; the remaining four referred to properties that were explanatorily relevant to the solution. For each problem, we tallied which of the nine strategies were expressed within the explanation for that problem, and determined whether each strategy was expressed in speech only, in gesture only, or in both speech and gesture. After these explanations, all participants watched the experimenter deliver a two-minute training module on stereoisomers. Following the training, participants repeated the drawing+explanation task on six new problems. The number of relevant strategies that participants expressed in speech (alone or with gesture) before training did not predict their post-training scores. However, the number of relevant strategies participants expressed in gesture only before training did predict their post-training scores. Conveying relevant information about stereoisomers uniquely in gesture prior to a brief training is thus a good index of who is most likely to learn from the training. We suggest that gesture reveals explanatorily relevant implicit knowledge that reflects (and perhaps even promotes) acquisition of new understanding.
15. The Predictive Value of Non-Referential Beat Gestures: Early Use in Parent-Child Interactions Predicts Narrative Abilities at 5 Years of Age. Child Dev 2021; 92:2335-2355. PMID: 34018614; DOI: 10.1111/cdev.13583.
Abstract
A longitudinal study with 45 children (Hispanic, 13%; non-Hispanic, 87%) investigated whether the early production of non-referential beat and flip gestures, as opposed to referential iconic gestures, in parent-child naturalistic interactions from 14 to 58 months old predicts narrative abilities at age 5. Results revealed that only non-referential beats significantly (p < .01) predicted later narrative productions. The pragmatic functions of the children's speech that accompany these gestures were also analyzed in a representative sample of 18 parent-child dyads, revealing that beats were typically associated with biased assertions or questions. These findings show that the early use of beats predicts narrative abilities later in development, and suggest that this relation is likely due to the pragmatic-structuring function that beats reflect in early discourse.
16.
Abstract
Early linguistic input is a powerful predictor of children’s language outcomes. We investigated two novel questions about this relationship: Does the impact of language input vary over time, and does the impact of time-varying language input on child outcomes differ for vocabulary and for syntax? Using methods from epidemiology to account for baseline and time-varying confounding, we predicted 64 children’s outcomes on standardized tests of vocabulary and syntax in kindergarten from their parents’ vocabulary and syntax input when the children were 14 and 30 months old. For vocabulary, children whose parents provided diverse input earlier as well as later in development were predicted to have the highest outcomes. For syntax, children whose parents’ input substantially increased in syntactic complexity over time were predicted to have the highest outcomes. The optimal sequence of parents’ linguistic input for supporting children’s language acquisition thus varies for vocabulary and for syntax.
17. Structural biases that children bring to language learning: A cross-cultural look at gestural input to homesign. Cognition 2021; 211:104608. PMID: 33581667; DOI: 10.1016/j.cognition.2021.104608.
Abstract
Linguistic input has an immediate effect on child language, making it difficult to discern whatever biases children may bring to language learning. To discover these biases, we turn to deaf children who cannot acquire spoken language and are not exposed to sign language. These children nevertheless produce gestures, called homesigns, which have structural properties found in natural language. We ask whether these properties can be traced to gestures produced by hearing speakers in Nicaragua, a gesture-rich culture, and in the USA, a culture where speakers rarely gesture without speech. We studied 7 homesigning children and their hearing family members in Nicaragua, and 4 in the USA. As expected, family members produced more gestures without speech, and longer gesture strings, in Nicaragua than in the USA. However, in both cultures, homesigners displayed more structural complexity than family members, and there was no correlation between individual homesigners and family members with respect to structural complexity. The findings replicate previous work showing that the gestures hearing speakers produce do not offer a model for the structural aspects of homesign, suggesting that children themselves bring to language learning the biases needed to construct, or learn, these properties. The study also goes beyond the current literature in three ways. First, it extends homesign findings to Nicaragua, where homesigners received a richer gestural model than USA homesigners. Moreover, the relatively large number of gestures in Nicaragua made it possible to take advantage of more sophisticated statistical techniques than were used in the original homesign studies. Second, the study extends the discovery of complex noun phrases to Nicaraguan homesign. The almost complete absence of complex noun phrases in the hearing family members of both cultures provides the most convincing evidence to date that homesigners, and not their hearing family members, are the ones who introduce structural properties into homesign. Finally, by extending the homesign phenomenon to Nicaragua, the study offers insight into the gestural precursors of an emerging sign language. The findings shed light on the types of structures that an individual can introduce into communication before that communication is shared within a community of users, and thus on the roots of linguistic structure.
18. Children integrate speech and gesture across a wider temporal window than speech and action when learning a math concept. Cognition 2021; 210:104604. PMID: 33548851; DOI: 10.1016/j.cognition.2021.104604.
Abstract
It is well established that gesture facilitates learning, but understanding the best way to harness gesture and how gesture helps learners are still open questions. Here, we consider one of the properties that may make gesture a powerful teaching tool: its temporal alignment with spoken language. Previous work shows that the simultaneity of speech and gesture matters when children receive instruction from a teacher (Congdon et al., 2017). In Study 1, we ask whether simultaneity also matters when children themselves are the ones who produce speech and gesture strategies. Third-graders (N = 75) were taught to produce one strategy in speech and one strategy in gesture for correctly solving mathematical equivalence problems; they were told to produce these strategies either simultaneously (S + G) or sequentially (S➔G; G➔S) during a training session. Learning was assessed immediately after training, at a 24-h follow-up, and at a 4-week follow-up. Children showed evidence of learning and retention across all three conditions. Study 2 was conducted to explore whether it was the special relationship between speech and gesture that helped children learn. Third-graders (N = 87) were taught an action strategy instead of a gesture strategy; all other aspects of the design were the same. Children again learned across all three conditions. But only children who produced simultaneous speech and action retained what they had learned at the follow-up sessions. Results have implications for why gesture is beneficial to learners and, taken in relation to previous literature, reveal differences in the mechanisms by which doing versus seeing gesture facilitates learning.
19. Talking with Your (Artificial) Hands: Communicative Hand Gestures as an Implicit Measure of Embodiment. iScience 2020; 23:101650. PMID: 33103087; PMCID: PMC7578755; DOI: 10.1016/j.isci.2020.101650.
Abstract
When people talk, they move their hands to enhance meaning. Using accelerometry, we measured whether people spontaneously use their artificial limbs (prostheses) to gesture, and whether this behavior relates to everyday prosthesis use and perceived embodiment. Perhaps surprisingly, one- and two-handed participants did not differ in the number of gestures they produced in gesture-facilitating tasks. However, they did differ in their gesture profile. One-handers performed more, and bigger, gesture movements with their intact hand relative to their prosthesis. Importantly, one-handers who gestured more similarly to their two-handed counterparts also used their prosthesis more in everyday life. Although one-handers collectively only marginally agreed that their prosthesis feels like a body part, one-handers who reported embodying their prosthesis also showed greater prosthesis use for communication and daily function. Our findings provide the first empirical link between everyday prosthesis use habits and perceived embodiment, as well as a novel means of implicitly indexing embodiment.
20. Longitudinally adaptive assessment and instruction increase numerical skills of preschool children. Proc Natl Acad Sci U S A 2020; 117:27945-27953. PMID: 33106414; PMCID: PMC7668039; DOI: 10.1073/pnas.2002883117.
Abstract
Social inequality in mathematical skill is apparent at kindergarten entry and persists during elementary school. To level the playing field, we trained teachers to assess children's numerical and spatial skills every 10 wk. Each assessment provided teachers with information about a child's growth trajectory on each skill, information designed to help them evaluate their students' progress, reflect on past instruction, and strategize for the next phase of instruction. A key constraint is that teachers have limited time to assess individual students. To maximize the information provided by an assessment, we adapted the difficulty of each assessment based on each child's age and accumulated evidence about the child's skills. Children in classrooms of 24 trained teachers scored 0.29 SD higher on numerical skills at posttest than children in 25 randomly assigned control classrooms (P = 0.005). We observed no effect on spatial skills. The intervention also positively influenced children's verbal comprehension skills (0.28 SD higher at posttest, P < 0.001), but did not affect their print-literacy skills. We consider the potential contribution of this approach, in combination with similar regimes of assessment and instruction in elementary schools, to the reduction of social inequality in numerical skill and discuss possible explanations for the absence of an effect on spatial skills.
21. Current Research in Pragmatic Language Use Among Deaf and Hard of Hearing Children. Pediatrics 2020; 146:S237-S245. PMID: 33139437; DOI: 10.1542/peds.2020-0242c.
Abstract
In this article, we provide a narrative review of the research literature on the development of pragmatic skills and the social uses of language in children and adolescents, with a focus on those who are deaf and hard of hearing (DHH). In the review, we consider how pragmatic skills may develop over time for DHH children and adolescents depending on age, language context, amplification devices, and languages and communication modalities. We also consider the implications of these findings for enhancing intervention programs for DHH children and adolescents and for identifying contexts that optimize their pragmatic development.
22. Using Gesture to Identify and Address Early Concerns About Language and Pragmatics. Pediatrics 2020; 146:S278-S283. PMID: 33139441; DOI: 10.1542/peds.2020-0242g.
Abstract
Speakers and signers naturally and spontaneously gesture when they use language to communicate. These gestures not only play a central role in how language is used in social situations but also offer insight into speakers' and signers' cognitive processes. The goals of this article are twofold: (1) to document how gesture can be used to identify concerns in language development and (2) to illustrate how gesture can be used to address those concerns, particularly with respect to pragmatic development. These goals are explored with a focus on deaf and/or hard of hearing (DHH) children. Medical providers and allied health professionals, as well as educators and parents, can use the information gleaned from the gestures of DHH children to determine whether intervention is needed. Gesture can also be used to design interventions, including interventions in which children who are DHH are presented gestures in combination with speech or signs and interventions in which they are encouraged to gesture themselves. Children's gestures not only increase their learning potential but also give medical and health professionals, as well as educators and parents, access to a DHH child's unspoken and unsigned ideas, creating opportunities to provide intervention when it is likely to be effective.
23. The communicative importance of agent-backgrounding: Evidence from homesign and Nicaraguan Sign Language. Cognition 2020; 203:104332. PMID: 32559513; DOI: 10.1016/j.cognition.2020.104332.
Abstract
Some concepts are more essential for human communication than others. In this paper, we investigate whether the concept of agent-backgrounding is sufficiently important for communication that linguistic structures for encoding this concept are present in young sign languages. Agent-backgrounding constructions serve to reduce the prominence of the agent; the English passive sentence "a book was knocked over" is an example. Although these constructions are widely attested cross-linguistically, there is little prior research on the emergence of such devices in new languages. Here we studied how agent-backgrounding constructions emerge in Nicaraguan Sign Language (NSL) and adult homesign systems. We found that NSL signers have innovated both lexical and morphological devices for expressing agent-backgrounding, indicating that conveying a flexible perspective on events has deep communicative value. At the same time, agent-backgrounding devices did not emerge at the same time as agentive devices. This result suggests that agent-backgrounding does not have the same core cognitive status as agency. The emergence of agent-backgrounding morphology appears to depend on receiving as input a linguistic system in which devices for expressing agency are already well-established.
24. Unconscious Number Discrimination in the Human Visual System. Cereb Cortex 2020; 30:5821-5829. PMID: 32537630; DOI: 10.1093/cercor/bhaa155.
Abstract
How do humans compute approximate number? According to one influential theory, approximate number representations arise in the intraparietal sulcus and are amodal, meaning that they arise independently of any sensory modality. Alternatively, approximate number may be computed initially within sensory systems. Here we tested for sensitivity to approximate number in the visual system using steady-state visual evoked potentials. We recorded electroencephalography from humans while they viewed dotclouds presented at 30 Hz, which alternated in numerosity (ranging from 10 to 20 dots) at 15 Hz. At this rate, each dotcloud backward masked the previous dotcloud, disrupting top-down feedback to visual cortex and preventing conscious awareness of the dotclouds' numerosities. Spectral amplitude at 15 Hz measured over the occipital lobe (Oz) correlated positively with the numerical ratio of the stimuli, even when nonnumerical stimulus attributes were controlled, indicating that subjects' visual systems were differentiating dotclouds on the basis of their numerical ratios. Crucially, subjects were unable to discriminate the numerosities of the dotclouds consciously, indicating that the backward masking of the stimuli disrupted reentrant feedback to visual cortex. Approximate number thus appears to be computed within the visual system, independently of higher-order areas such as the intraparietal sulcus.
25. The origins of higher-order thinking lie in children's spontaneous talk across the pre-school years. Cognition 2020; 200:104274. PMID: 32388140; DOI: 10.1016/j.cognition.2020.104274.
Abstract
Higher-order thinking is relational reasoning in which multiple representations are linked together through inferences, comparisons, abstractions, and hierarchies. We examine the development of higher-order thinking in 64 preschool-aged children, observed from 14 to 58 months in naturalistic situations at home. We used children's spontaneous talk about and with relations (i.e., higher-order thinking talk, or HOTT) as a window onto their higher-order thinking skills. We find that surface HOTT, in which relations between representations are more immediate and easily perceptible, appears before, and is far more frequent than, structure HOTT, in which relations between representations are more abstract and less easy to perceive. Child-specific factors (including early vocabulary and gesture use, first-born status, and family income) predict differences in children's onset (i.e., age of acquisition) of HOTT and its trajectory of use across development. Although HOTT utterances tend to be longer and more syntactically complex than non-HOTT utterances, HOTT frequently appears in non-complex utterances, and a substantial proportion of children achieve complex utterance onset prior to the onset of HOTT. This finding suggests that complex language is neither necessary nor sufficient for HOTT to occur; other factors above and beyond complex linguistic skills are involved in the onset and use of higher-order thinking. Finally, we found that the trajectory of HOTT during the preschool period, particularly structure HOTT (but not complex utterances), predicts standardized outcome measures of inference and analogy skills in grade school, which underscores the crucial role that this kind of early talk plays for later outcomes.
26. Language development and brain reorganization in a child born without the left hemisphere. Cortex 2020; 127:290-312. PMID: 32259667; DOI: 10.1016/j.cortex.2020.02.006.
Abstract
We present the case of a 14-year-old girl born without the left hemisphere due to prenatal left internal carotid occlusion. We combined longitudinal language and cognitive assessments with functional and structural neuroimaging data to situate the case within age-matched, typically developing children. Despite a delay in getting language off the ground during the preschool years, our case performed within the normal range on a variety of standardized language tests, and exceptionally well on phonology and word reading, during the elementary and middle school years. Moreover, her spatial, number, and reasoning skills also fell in the average to above-average range based on assessments during these time periods. Functional MRI data revealed activation in right fronto-temporal areas while she listened to short stories, resembling the bilateral activation patterns in age-matched typically developing children. Diffusion MRI data showed significantly larger dorsal white matter association tracts (the direct and anterior segments of the arcuate fasciculus) connecting areas active during language processing in her remaining right hemisphere, compared to either hemisphere in control children. We hypothesize that these changes in functional and structural brain organization are the result of compensatory brain plasticity, manifesting in unusually large right dorsal tracts and exceptional performance in phonology, speech repetition, and decoding. More specifically, we posit that our case's large white matter connections might have played a compensatory role by providing fast and reliable transfer of information between cortical areas for language in the right hemisphere.
27. Speech-accompanying gestures are not processed by the language-processing mechanisms. Neuropsychologia 2019; 132:107132. PMID: 31276684; PMCID: PMC6708375; DOI: 10.1016/j.neuropsychologia.2019.107132.
Abstract
Speech-accompanying gestures constitute one information channel during communication. Some have argued that processing gestures engages the brain regions that support language comprehension. However, studies that have been used as evidence for shared mechanisms suffer from one or more of the following limitations: they (a) have not directly compared activations for gesture and language processing in the same study and relied on the fallacious reverse inference (Poldrack, 2006) for interpretation, (b) relied on traditional group analyses, which are bound to overestimate overlap (e.g., Nieto-Castañon and Fedorenko, 2012), (c) failed to directly compare the magnitudes of response (e.g., Chen et al., 2017), and (d) focused on gestures that may have activated the corresponding linguistic representations (e.g., "emblems"). To circumvent these limitations, we used fMRI to examine responses to gesture processing in language regions defined functionally in individual participants (e.g., Fedorenko et al., 2010), including directly comparing effect sizes, and covering a broad range of spontaneously generated co-speech gestures. Whenever speech was present, language regions responded robustly (and to a similar degree regardless of whether the video contained gestures or grooming movements). In contrast, and critically, responses in the language regions were low (at or slightly above the fixation baseline) when silent videos were processed, again regardless of whether they contained gestures or grooming movements. Brain regions outside of the language network, including some in close proximity to its regions, differentiated between gestures and grooming movements, ruling out the possibility that the gesture/grooming manipulation was too subtle. Behavioral studies on the critical video materials further showed robust differentiation between the gesture and grooming conditions. In summary, contra prior claims, language-processing regions do not respond to co-speech gestures in the absence of speech, suggesting that these regions are selectively driven by linguistic input (e.g., Fedorenko et al., 2011). Although co-speech gestures are uncontroversially important in communication, they appear to be processed in brain regions distinct from those that support language comprehension, similar to other extra-linguistic communicative signals, like facial expressions and prosody.
28. Number gestures predict learning of number words. Dev Sci 2019; 22:e12791. PMID: 30566755; PMCID: PMC6470030; DOI: 10.1111/desc.12791.
Abstract
When asked to explain their solutions to a problem, children often gesture and, at times, these gestures convey information that is different from the information conveyed in speech. Children who produce these gesture-speech "mismatches" on a particular task have been found to profit from instruction on that task. We have recently found that some children produce gesture-speech mismatches when identifying numbers at the cusp of their knowledge, for example, a child incorrectly labels a set of two objects with the word "three" and simultaneously holds up two fingers. These mismatches differ from previously studied mismatches (where the information conveyed in gesture has the potential to be integrated with the information conveyed in speech) in that the gestured response contradicts the spoken response. Here, we ask whether these contradictory number mismatches predict which learners will profit from number-word instruction. We used the Give-a-Number task to measure number knowledge in 47 children (mean age = 4.1 years, SD = 0.58), and used the What's on this Card task to assess whether children produced gesture-speech mismatches above their knower level. Children who were early in their number learning trajectories ("one-knowers" and "two-knowers") were then randomly assigned, within knower level, to one of two training conditions: a Counting condition in which children practiced counting objects; or an Enriched Number Talk condition containing counting, labeling set sizes, spatial alignment of neighboring sets, and comparison of these sets. Controlling for counting ability, we found that children were more likely to learn the meaning of new number words in the Enriched Number Talk condition than in the Counting condition, but only if they had produced gesture-speech mismatches at pretest. The findings suggest that numerical gesture-speech mismatches are a reliable signal that a child is ready to profit from rich number instruction and provide evidence, for the first time, that cardinal number gestures have a role to play in number learning.
29. Manual directional gestures facilitate cross-modal perceptual learning. Cognition 2019; 187:178-187. PMID: 30877849; DOI: 10.1016/j.cognition.2019.03.004.
Abstract
Action and perception interact in complex ways to shape how we learn. In the context of language acquisition, for example, hand gestures can facilitate learning novel sound-to-meaning mappings that are critical to successfully understanding a second language. However, the mechanisms by which motor and visual information influence auditory learning are still unclear. We hypothesize that the extent to which cross-modal learning occurs is directly related to the common representational format of perceptual features across motor, visual, and auditory domains (i.e., the extent to which changes in one domain trigger similar changes in another). Furthermore, to the extent that information across modalities can be mapped onto a common representation, training in one domain may lead to learning in another domain. To test this hypothesis, we taught native English speakers Mandarin tones using directional pitch gestures. Watching or performing gestures that were congruent with pitch direction (e.g., an up gesture moving up, and a down gesture moving down, in the vertical plane) significantly enhanced tone category learning, compared to auditory-only training. Moreover, when gestures were rotated (e.g., an up gesture moving away from the body, and a down gesture moving toward the body, in the horizontal plane), performing the gestures resulted in significantly better learning, compared to watching the rotated gestures. Our results suggest that when a common representational mapping can be established between motor and sensory modalities, auditory perceptual learning is likely to be enhanced.
30. Breaking down gesture and action in mental rotation: Understanding the components of movement that promote learning. Dev Psychol 2019; 55:981-993. PMID: 30777770; DOI: 10.1037/dev0000697.
Abstract
Past research has shown that children's mental rotation skills are malleable and can be improved through action experience (physically rotating objects) or gesture experience (showing how objects could rotate; e.g., Frick, Ferrara, & Newcombe, 2013; Goldin-Meadow et al., 2012; Levine, Goldin-Meadow, Carlson, & Hemani-Lopez, 2018). These two types of movements both involve rotation, but differ on a number of components. Here, we break down action and gesture into components (feeling an object during rotation, using a grasping handshape during rotation, tracing the trajectory of rotation, and seeing the outcome of rotation) and ask, in two studies, how training children on a mental rotation task through different combinations of these components impacts learning gains across a delay. Our results extend the literature by showing that, although all children benefit from training experiences, some training experiences are more beneficial than others, and the pattern differs by sex. Not seeing the outcome of rotation emerged as a crucial training component for both males and females. However, not seeing the outcome turned out to be the only necessary component for males (who showed equivalent gains when imagining or gesturing object rotation). Females, in contrast, only benefitted from not seeing the outcome when it involved producing a relevant motor movement (i.e., when gesturing the rotation of the object and not simply imagining the rotation of the object). Results are discussed in relation to potential mechanisms driving these effects and practical implications.
31. Parents' early book reading to children: Relation to children's later language and literacy outcomes controlling for other parent language input. Dev Sci 2019; 22:e12764. PMID: 30325107; DOI: 10.1111/desc.12764.
Abstract
It is widely believed that reading to preschool children promotes their language and literacy skills. Yet, whether early parent-child book reading is an index of generally rich linguistic input or a unique predictor of later outcomes remains unclear. To address this question, we asked whether naturally occurring parent-child book reading interactions between 1 and 2.5 years of age predict elementary school language and literacy outcomes, controlling for the quantity of other talk parents provide their children, family socioeconomic status, and children's own early language skill. We find that the quantity of parent-child book reading interactions predicts children's later receptive vocabulary, reading comprehension, and internal motivation to read (but not decoding, external motivation to read, or math skill), controlling for these other factors. Importantly, we also find that parent language that occurs during book reading interactions is more sophisticated than parent language outside book reading interactions in terms of vocabulary diversity and syntactic complexity.
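To make the analytic design concrete, here is a minimal sketch of the kind of regression the abstract describes: early book-reading quantity predicting a later outcome over and above other parent talk, SES, and the child's own early language. All file, column, and variable names are hypothetical illustrations, not the authors' data or code.

```python
# Hedged sketch: testing unique prediction from book reading while
# controlling for overall parent talk, family SES, and early child
# language skill. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("book_reading_longitudinal.csv")  # hypothetical dataset

model = smf.ols(
    "later_receptive_vocab ~ early_book_reading"
    " + other_parent_talk + family_ses + early_child_language",
    data=df,
).fit()

# If book reading is a unique predictor (as the abstract reports), its
# coefficient remains reliable with the control variables in the model.
print(model.summary())
```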
|
32
|
Occluding the face diminishes the conceptual accessibility of an animate agent. LANGUAGE, COGNITION AND NEUROSCIENCE 2018; 34:273-288. [PMID: 33015215 PMCID: PMC7531273 DOI: 10.1080/23273798.2018.1525495] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/21/2017] [Accepted: 09/12/2018] [Indexed: 06/11/2023]
Abstract
The language that people use to describe events reflects their perspective on the event. This linguistic encoding is influenced by conceptual accessibility, particularly whether individuals in the event are animate or agentive--animates are more likely than inanimates to appear as Subject of a sentence, and agents are more likely than patients to appear as Subject. We tested whether perceptual aspects of a scene can override these two conceptual biases when they are aligned: whether a visually prominent inanimate patient will be selected as Subject when pitted against a visually backgrounded animate agent. We manipulated visual prominence by contrasting scenes in which the face/torso/hand of the agent were visible vs. scenes in which only the hand was visible. Events with only a hand were more often associated with passive descriptions, in both production and comprehension tasks. These results highlight the power of visual prominence to guide how people conceptualize events.
|
33
|
Abstract
Interpreting iconic gestures can be challenging for children. Here, we explore the features and functions of iconic gestures that make them more challenging for young children to interpret than instrumental actions. In Study 1, we show that 2.5-year-olds are able to glean size information from handshape in a simple gesture, although their performance is significantly worse than 4-year-olds'. Studies 2 to 4 explore the boundary conditions of 2.5-year-olds' gesture understanding. In Study 2, 2.5-year-old children have an easier time interpreting size information in hands that reach than in hands that gesture. In Study 3, we tease apart the perceptual features and functional objectives of reaches and gestures. We created a context in which an action has the perceptual features of a reach (extending the hand toward an object) but serves the function of a gesture (the object is behind a barrier and not obtainable; the hand thus functions to represent, rather than reach for, the object). In this context, children struggle to interpret size information in the hand, suggesting that gesture's representational function (rather than its perceptual features) is what makes it hard for young children to interpret. A distance control (Study 4) in which a person holds a box in gesture space (close to the body) demonstrates that children's difficulty interpreting static gesture cannot be attributed to the physical distance between a gesture and its referent. Together, these studies provide evidence that children's struggle to interpret iconic gesture may stem from its status as representational action.
|
34
|
Creating Images With the Stroke of a Hand: Depiction of Size and Shape in Sign Language. Front Psychol 2018; 9:1276. [PMID: 30108532 PMCID: PMC6079389 DOI: 10.3389/fpsyg.2018.01276] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2018] [Accepted: 07/03/2018] [Indexed: 11/13/2022] Open
Abstract
In everyday communication, not only do speakers describe, but they also depict. When depicting, speakers take on the role of other people and quote their speech or imitate their actions. In previous work, we developed a paradigm to elicit depictions in speakers. Here we apply this paradigm to signers to explore depiction in the manual modality, with a focus on depiction of the size and shape of objects. We asked signers to describe two objects that could easily be characterized using lexical signs (Descriptive Elicitation), and objects that were more difficult to distinguish using lexical signs, thus encouraging the signers to depict (Depictive Elicitation). We found that signers used two types of depicting constructions (DCs), conventional DCs and embellished DCs. Both conventional and embellished DCs make use of categorical handshapes to identify objects. But embellished DCs also capture imagistic aspects of the objects, either by adding a tracing movement to gradiently depict the contours of the object, or by adding a second handshape to depict the configuration of the object. Embellished DCs were more frequent in the Depictive Elicitation context than in the Descriptive Elicitation context; lexical signs showed the reverse pattern; and conventional DCs were equally likely in the two contexts. In addition, signers produced iconic mouth movements, which are temporally and semantically integrated with the signs they accompany and depict the size and shape of objects, more often with embellished DCs than with either lexical signs or conventional DCs. Embellished DCs share a number of properties with embedded depictions, constructed action, and constructed dialog in signed and spoken languages. We discuss linguistic constraints on these gradient depictions, focusing on how handshape constrains the type of depictions that can be formed, and the function of depiction in everyday discourse.
|
35
|
Meaning before order: Cardinal principle knowledge predicts improvement in understanding the successor principle and exact ordering. Cognition 2018; 180:59-81. [PMID: 30007878 DOI: 10.1016/j.cognition.2018.06.012] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2016] [Revised: 06/07/2018] [Accepted: 06/18/2018] [Indexed: 11/19/2022]
Abstract
Learning the cardinal principle (the last word reached when counting a set represents the size of the whole set) is a major milestone in early mathematics. But researchers disagree about the relationship between cardinal principle knowledge and other concepts, including how counting implements the successor function (for each number word N representing a cardinal value, the next word in the count list represents the cardinal value N + 1) and exact ordering (cardinal values can be ordered such that each is one more than the value before it and one less than the value after it). No studies have investigated acquisition of the successor principle and exact ordering over time, and in relation to cardinal principle knowledge. An open question thus remains: Is the cardinal principle a "gatekeeper" concept children must acquire before learning about succession and exact ordering, or can these concepts develop separately? Preschoolers (N = 127) who knew the cardinal principle (CP-knowers) or who knew the cardinal meanings of number words up to "three" or "four" (3-4-knowers) completed succession and exact ordering tasks at pretest and posttest. In between, children completed one of two trainings: counting only versus counting, cardinal labeling, and comparison. CP-knowers started out better than 3-4-knowers on succession and exact ordering. Controlling for this disparity, we found that CP-knowers improved over time on succession and exact ordering; 3-4-knowers did not. Improvement did not differ between the two training conditions. We conclude that children can learn the cardinal principle without understanding succession or exact ordering and hypothesize that children must understand the cardinal principle before learning these concepts.
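In standard notation (an editorial paraphrase of the definitions given in the abstract, not the authors' formalism), the two target concepts are:

```latex
% Successor principle: if the count-list word w_k names the cardinality n,
% then the next word in the count list names n + 1.
\[ \mathrm{card}(w_{k+1}) = \mathrm{card}(w_k) + 1 \]
% Exact ordering: cardinal values form a sequence in which each value is
% exactly one more than its predecessor and one less than its successor.
\[ \cdots < n-1 < n < n+1 < \cdots , \qquad n - (n-1) = 1 \]
```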
|
36
|
Gesture helps learners learn, but not merely by guiding their visual attention. Dev Sci 2018; 21:e12664. [PMID: 29663574 DOI: 10.1111/desc.12664] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2017] [Accepted: 02/13/2018] [Indexed: 11/30/2022]
Abstract
Teaching a new concept through gestures-hand movements that accompany speech-facilitates learning above and beyond instruction through speech alone (e.g., Singer & Goldin-Meadow, 2005). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often proposed mechanism-gesture's ability to direct visual attention. Behaviorally, we replicate previous findings: Children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture-they look more to the problem being explained, less to the instructor, and are more likely to synchronize their visual attention with information presented in the instructor's speech (i.e., follow along with speech) than children who watch the no-gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effects of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of visual looking patterns on learning-following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture's beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech.
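The mediation-versus-moderation contrast at the heart of this abstract can be sketched as two regression models. This is an illustration of the statistical distinction only, with hypothetical variable names, not the authors' analysis.

```python
# Hedged sketch of mediation vs. moderation. Hypothetical columns:
# condition      0 = Speech Alone, 1 = Speech+Gesture
# follow_speech  degree to which gaze synchronized with the instructor's speech
# posttest       posttest score
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gesture_eye_tracking.csv")  # hypothetical dataset

# Mediation would route the condition effect through looking behavior:
# condition -> follow_speech -> posttest. The abstract reports that this
# path does not explain the training effect.
a_path = smf.ols("follow_speech ~ condition", data=df).fit()
b_path = smf.ols("posttest ~ follow_speech + condition", data=df).fit()

# Moderation is an interaction term: following along with speech predicts
# learning in the Speech+Gesture condition but not in Speech Alone.
moderation = smf.ols("posttest ~ condition * follow_speech", data=df).fit()
print(moderation.params["condition:follow_speech"])
```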
|
37
|
Blind Speakers Show Language-Specific Patterns in Co-Speech Gesture but Not Silent Gesture. Cogn Sci 2018; 42:1001-1014. [PMID: 28481418 DOI: 10.1111/cogs.12502] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2016] [Revised: 03/13/2017] [Accepted: 03/20/2017] [Indexed: 11/29/2022]
Abstract
Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co-speech gesture), not without speech (silent gesture). We ask whether the cross-linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three-dimensional motion scenes. We found an effect of language on co-speech gesture, not on silent gesture-blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language-an organization that relies on neither visuospatial cues nor language structure.
|
38
|
Gesture for generalization: gesture facilitates flexible learning of words for actions on objects. Dev Sci 2018. [PMID: 29542238 DOI: 10.1111/desc.12656] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
Verb learning is difficult for children (Gentner), partially because children have a bias to associate a novel verb not only with the action it represents, but also with the object on which it is learned (Kersten & Smith). Here we investigate how well 4- and 5-year-old children (N = 48) generalize novel verbs for actions on objects after doing or seeing the action (e.g., twisting a knob on an object) or after doing or seeing a gesture for the action (e.g., twisting in the air near an object). We find not only that children generalize more effectively through gesture experience, but also that this ability to generalize persists after a 24-hour delay.
|
39
|
Abstract
We examined the effects of three different training conditions, all of which involve the motor system, on kindergarteners' mental transformation skill. We focused on three main questions. First, we asked whether training that involves making a motor movement that is relevant to the mental transformation-either concretely through action (action training) or more abstractly through gestural movements that represent the action (move-gesture training)-resulted in greater gains than training using motor movements irrelevant to the mental transformation (point-gesture training). We tested children prior to training, immediately after training (posttest), and 1 week after training (retest), and we found greater improvement in mental transformation skill in both the action and move-gesture training conditions than in the point-gesture condition, at both posttest and retest. Second, we asked whether the total gain made by retest differed depending on the abstractness of the movement-relevant training (action vs. move-gesture), and we found that it did not. Finally, we asked whether the time course of improvement differed for the two movement-relevant conditions, and we found that it did-gains in the action condition were realized immediately at posttest, with no further gains at retest; gains in the move-gesture condition were realized throughout, with comparable gains from pretest-to-posttest and from posttest-to-retest. Training that involves movement, whether concrete or abstract, can thus benefit children's mental transformation skill. However, the benefits unfold differently over time-the benefits of concrete training unfold immediately after training (online learning); the benefits of more abstract training unfold in equal steps immediately after training (online learning) and during the intervening week with no additional training (offline learning). These findings have implications for the kinds of instruction that can best support spatial learning.
|
40
|
Functional neuroanatomy of gesture-speech integration in children varies with individual differences in gesture processing. Dev Sci 2018. [PMID: 29516653 DOI: 10.1111/desc.12648] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
Gesture is an integral part of children's communicative repertoire. However, little is known about the neurobiology of speech and gesture integration in the developing brain. We investigated how 8- to 10-year-old children processed gesture that was essential to understanding a set of narratives. We asked whether the functional neuroanatomy of gesture-speech integration varies as a function of (1) the content of speech, and/or (2) individual differences in how gesture is processed. When gestures provided missing information not present in the speech (i.e., disambiguating gesture; e.g., "pet" + flapping palms = bird), the presence of gesture led to increased activity in inferior frontal gyri, the right middle temporal gyrus, and the left superior temporal gyrus, compared to when gesture provided redundant information (i.e., reinforcing gesture; e.g., "bird" + flapping palms = bird). This pattern of activation was found only in children who were able to successfully integrate gesture and speech behaviorally, as indicated by their performance on post-test story comprehension questions. Children who did not glean meaning from gesture did not show differential activation across the two conditions. Our results suggest that the brain activation pattern for gesture-speech integration in children overlaps with-but is broader than-the pattern in adults performing the same task. Overall, our results provide a possible neurobiological mechanism that could underlie children's increasing ability to integrate gesture and speech over childhood, and account for individual differences in that integration.
|
41
|
A Helping Hand in Assessing Children's Knowledge: Instructing Adults to Attend to Gesture. COGNITION AND INSTRUCTION 2018. [DOI: 10.1207/s1532690xci2001_1] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
|
42
|
Children's Early Decontextualized Talk Predicts Academic Language Proficiency in Midadolescence. Child Dev 2018; 90:1650-1663. [PMID: 29359315 DOI: 10.1111/cdev.13034] [Citation(s) in RCA: 71] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
This study examines whether children's decontextualized talk-talk about nonpresent events, explanations, or pretend-at 30 months predicts seventh-grade academic language proficiency (age 12). Academic language (AL) refers to the language of school texts. AL proficiency has been identified as an important predictor of adolescent text comprehension. Yet research on precursors to AL proficiency is scarce. Child decontextualized talk is known to be a predictor of early discourse development, but its relation to later language outcomes remains unclear. Forty-two children and their caregivers participated in this study. The proportion of child talk that was decontextualized emerged as a significant predictor of seventh-grade AL proficiency, even after controlling for socioeconomic status, parent decontextualized talk, child total words, child vocabulary, and child syntactic comprehension.
|
43
|
Parent praise to toddlers predicts fourth grade academic achievement via children's incremental mindsets. Dev Psychol 2017; 54:397-409. [PMID: 29172567 DOI: 10.1037/dev0000444] [Citation(s) in RCA: 39] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In a previous study, parent praise to children was observed in natural interactions at home when children were 1, 2, and 3 years of age. Children who received a relatively high proportion of process praise (e.g., praise for effort and strategies) showed stronger incremental motivational frameworks, including a belief that intelligence can be developed and a greater desire for challenge, when they were in 2nd or 3rd grade (Gunderson et al., 2013). The current study examines these same children's (n = 53) academic achievement 1 to 2 years later, in 4th grade. Results provide the first evidence that process praise to toddlers predicts children's academic achievement (in math and reading comprehension) 7 years later, in elementary school, via their incremental motivational frameworks. Further analysis of these motivational frameworks shows that process praise had its effect on fourth grade achievement through children's trait beliefs (e.g., believing that intelligence is fixed vs. malleable), rather than through their learning goals (e.g., preference for easy vs. challenging tasks). Implications for the socialization of motivation are discussed.
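The reported pathway (praise to mindset to achievement) is a classic indirect effect. A product-of-coefficients sketch of that logic, with hypothetical names and not the authors' actual models, might look like this:

```python
# Hedged mediation sketch (hypothetical variable names):
# process_praise (ages 1-3) -> incremental_mindset (grades 2-3)
#   -> achievement (grade 4)
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("praise_longitudinal.csv")  # hypothetical dataset

a_path = smf.ols("incremental_mindset ~ process_praise", data=df).fit()
b_path = smf.ols("achievement ~ incremental_mindset + process_praise",
                 data=df).fit()

a = a_path.params["process_praise"]
b = b_path.params["incremental_mindset"]
print(f"indirect effect via mindset: {a * b:.3f}")  # the mediated portion
```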
|
44
|
Abstract
Gesture can illustrate objects and events in the world by iconically reproducing elements of those objects and events. Children do not begin to express ideas iconically, however, until after they have begun to use conventional forms. In this paper, we investigate how children's use of iconic resources in gesture relates to the developing structure of their communicative systems. Using longitudinal video corpora, we compare the emergence of manual iconicity in hearing children who are learning a spoken language (co-speech gesture) to the emergence of manual iconicity in a deaf child who is creating a manual system of communication (homesign). We focus on one particular element of iconic gesture - the shape of the hand (handshape). We ask how handshape is used as an iconic resource in 1-5-year-olds, and how it relates to the semantic content of children's communicative acts. We find that patterns of handshape development are broadly similar between co-speech gesture and homesign, suggesting that the building blocks underlying children's ability to iconically map manual forms to meaning are shared across different communicative systems: those where gesture is produced alongside speech, and those where gesture is the primary mode of communication.
|
45
|
Better together: Simultaneous presentation of speech and gesture in math instruction supports generalization and retention. LEARNING AND INSTRUCTION 2017; 50:65-74. [PMID: 29051690 PMCID: PMC5642925 DOI: 10.1016/j.learninstruc.2017.03.005] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
When teachers gesture during instruction, children retain and generalize what they are taught (Goldin-Meadow, 2014). But why does gesture have such a powerful effect on learning? Previous research shows that children learn most from a math lesson when teachers present one problem-solving strategy in speech while simultaneously presenting a different, but complementary, strategy in gesture (Singer & Goldin-Meadow, 2005). One possibility is that gesture is powerful in this context because it presents information simultaneously with speech. Alternatively, gesture may be effective simply because it involves the body, in which case the timing of information presented in speech and gesture may be less important for learning. Here we find evidence for the importance of simultaneity: 3rd grade children retain and generalize what they learn from a math lesson better when given instruction containing simultaneous speech and gesture than when given instruction containing sequential speech and gesture. Interpreting these results in the context of theories of multimodal learning, we find that gesture capitalizes on its synchrony with speech to promote learning that lasts and can be generalized.
|
46
|
Abstract
Analogy researchers do not often examine gesture, and gesture researchers do not often borrow ideas from the study of analogy. One borrowable idea from the world of analogy is the importance of distinguishing between attributes and relations. Gentner observed that some metaphors highlight attributes and others highlight relations, and called the latter analogies. Mirroring this logic, we observe that some metaphoric gestures represent attributes and others represent relations, and propose to call the latter analogical gestures. We provide examples of such analogical gestures and show how they relate to the categories of iconic and metaphoric gestures described previously. Analogical gestures represent different types of relations and different degrees of relational complexity, and sometimes cohere into larger analogical models. Treating analogical gestures as a distinct phenomenon prompts new questions and predictions, and illustrates one way that the study of gesture and the study of analogy can be mutually informative.
|
47
|
Abstract
A great deal of attention has recently been paid to gesture and its effects on thinking and learning. It is well established that the hand movements that accompany speech are an integral part of communication, ubiquitous across cultures, and a unique feature of human behavior. In an attempt to understand this intriguing phenomenon, researchers have focused on pinpointing the mechanisms that underlie gesture production. One proposal--that gesture arises from simulated action (Hostetter & Alibali, Psychonomic Bulletin & Review, 15, 495-514, 2008)--has opened up discussions about action, gesture, and the relation between the two. However, there is another side to understanding a phenomenon and that is to understand its function. A phenomenon's function is its purpose rather than its precipitating cause--the why rather than the how. This paper sets forth a theoretical framework for exploring why gesture serves the functions that it does, and reviews where the current literature fits, and fails to fit, this proposal. Our framework proposes that, whether or not gesture is simulated action in terms of its mechanism, it is clearly not reducible to action in terms of its function. Most notably, because gestures are abstracted representations and are not actions tied to particular events and objects, they can play a powerful role in thinking and learning beyond the particular, specifically, in supporting generalization and transfer of knowledge.
|
48
|
Unpacking the Ontogeny of Gesture Understanding: How Movement Becomes Meaningful Across Development. Child Dev 2017; 89:e245-e260. [PMID: 28504410 DOI: 10.1111/cdev.12817] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Gestures, hand movements that accompany speech, affect children's learning, memory, and thinking (e.g., Goldin-Meadow, 2003). However, it remains unknown how children distinguish gestures from other kinds of actions. In this study, 4- to 9-year-olds (n = 339) and adults (n = 50) described one of three scenes: (a) an actor moving objects, (b) an actor moving her hands in the presence of objects (but not touching them), or (c) an actor moving her hands in the absence of objects. Participants across all ages were equally able to identify actions on objects as goal directed, but the ability to identify empty-handed movements as representational actions (i.e., as gestures) increased with age and was influenced by the presence of objects, especially in older children.
|
49
|
The Development of Causal Structure without a Language Model. LANGUAGE LEARNING AND DEVELOPMENT : THE OFFICIAL JOURNAL OF THE SOCIETY FOR LANGUAGE DEVELOPMENT 2017; 13:286-299. [PMID: 28983210 PMCID: PMC5624539 DOI: 10.1080/15475441.2016.1254633] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Across a diverse range of languages, children proceed through similar stages in their production of causal language: their initial verbs lack internal causal structure, followed by a period during which they produce causative overgeneralizations, indicating knowledge of a productive causative rule. We asked in this study whether a child not exposed to structured linguistic input could create linguistic devices for encoding causation and, if so, whether the emergence of this causal language would follow a trajectory similar to the one observed for children learning language from linguistic input. We show that the child in our study did develop causation-encoding morphology, but only after initially using verbs that lacked internal causal structure. These results suggest that the ability to encode causation linguistically can emerge in the absence of a language model, and that exposure to linguistic input is not the only factor guiding children from one stage to the next in their production of causal language.
|
50
|
Abstract
Language emergence describes moments in historical time when nonlinguistic systems become linguistic. Because language can be invented de novo in the manual modality, this offers insight into the emergence of language in ways that the oral modality cannot. Here we focus on homesign, gestures developed by deaf individuals who cannot acquire spoken language and have not been exposed to sign language. We contrast homesign with (a) gestures that hearing individuals produce when they speak, as these cospeech gestures are a potential source of input to homesigners, and (b) established sign languages, as these codified systems display the linguistic structure that homesign has the potential to assume. We find that the manual modality takes on linguistic properties, even in the hands of a child not exposed to a language model. But it grows into full-blown language only with the support of a community that transmits the system to the next generation.
|