1.
Seidl AH, Indarjit M, Borovsky A. Touch to learn: Multisensory input supports word learning and processing. Dev Sci 2024; 27:e13419. [PMID: 37291692 PMCID: PMC10704002 DOI: 10.1111/desc.13419]
Abstract
Infants experience language in rich multisensory environments. For example, they may first be exposed to the word applesauce while touching, tasting, smelling, and seeing applesauce. In three experiments using different methods, we asked whether the number of distinct senses linked with the semantic features of objects would impact word recognition and learning. Specifically, in Experiment 1 we asked whether words linked with more multisensory experiences were learned earlier than words linked with fewer multisensory experiences. In Experiment 2, we asked whether 2-year-olds' known words linked with more multisensory experiences were better recognized than those linked with fewer. Finally, in Experiment 3, we taught 2-year-olds labels for novel objects that were linked with either just visual or visual and tactile experiences and asked whether this impacted their ability to learn the new label-to-object mappings. Results converge to support an account in which richer multisensory experiences better support word learning. We discuss two pathways through which rich multisensory experiences might support word learning.
Affiliation(s)
- Amanda H Seidl
- Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Michelle Indarjit
- Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Arielle Borovsky
- Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
2.
Krupa M, Boominathan P, Sebastian S, Raman PV. Joint Engagement in Mother-Child Dyads of Autistic and Non-Autistic Children Among Asian Indian Tamil Speaking Families. J Autism Dev Disord 2023. [PMID: 37642866 DOI: 10.1007/s10803-023-06062-y]
Abstract
This study profiled levels of engagement and related communication behaviours among 50 Asian Indian Tamil autistic children (AUT) and their mothers. The interaction was compared with two groups of mother-child dyads of non-autistic (NA) children, 50 in each group, matched for chronological age (CA) and for language level (LL). Results indicated that, despite mothers' efforts to engage with their children, autistic children were often 'engaged with objects' or remained 'unengaged' due to a preference for solitary play, while NA children were often engaged in 'co-ordinated' and 'people engagement'. Across the three groups, mothers predominantly took the lead and dominated the interaction, irrespective of children's language levels. Mothers' initiations were often to provide instructions and to ask 'what' questions. Autistic children initiated communication predominantly to ask for an object and often responded in the form of negations and protests, with limited verbal output or non-verbally. Most communication behaviours of both children and mothers in the AUT group were quantitatively and qualitatively different from those in both NA groups, indicating a unique pattern of interaction despite matching for CA or LL. These observations highlight the need to also consider adults' contingent behaviours when assessing autistic children's communication skills in order to provide effective intervention.
Affiliation(s)
- Murugesan Krupa
- Department of Speech Language Pathology, Sri Ramachandra Faculty of Audiology and Speech Language Pathology, Sri Ramachandra Institute of Higher Education & Research (Deemed University), Porur, Chennai, 600 116, Tamil Nadu, India
- Prakash Boominathan
- Department of Speech Language Pathology, Sri Ramachandra Faculty of Audiology and Speech Language Pathology, Sri Ramachandra Institute of Higher Education & Research (Deemed University), Porur, Chennai, 600 116, Tamil Nadu, India
- Swapna Sebastian
- Audiology & Speech Language Pathology Services, Department of ENT, Christian Medical College, Vellore, Tamil Nadu, India
- Padmasani Venkat Raman
- Department of Paediatric Medicine, Sri Ramachandra Institute of Higher Education & Research (Deemed University), Porur, Chennai, Tamil Nadu, India
3.
Edgar EV, Todd JT, Eschman B, Hayes T, Bahrick LE. Effects of English versus Spanish language exposure on basic multisensory attention skills across 3 to 36 months of age. Dev Psychol 2023; 59:1359-1376. [PMID: 37199930 PMCID: PMC10523924 DOI: 10.1037/dev0001549]
Abstract
Recent research has demonstrated that individual differences in infant attention to faces and voices of women speaking predict language outcomes in childhood. These findings have been generated using two new audiovisual attention assessments appropriate for infants and young children, the Multisensory Attention Assessment Protocol (MAAP) and the Intersensory Processing Efficiency Protocol (IPEP). The MAAP and IPEP assess three basic attention skills (sustaining attention, shifting/disengaging, intersensory matching), as well as distractibility, deployed in the context of naturalistic audiovisual social (women speaking English) and nonsocial events (objects impacting a surface). Might children with differential exposure to Spanish versus English show different patterns of attention to social events on these protocols as a function of language familiarity? We addressed this question in several ways using children (n = 81 dual-language learners; n = 23 monolingual-language learners) from South Florida, tested longitudinally across 3-36 months. First, and surprisingly, results indicated no significant English-language advantage on any attention measure for children from monolingual English versus dual English-Spanish language environments. Second, for dual-language learners, exposure to English changed across age, decreasing slightly from 3-12 months and then increasing considerably by 36 months. Third, for dual-language learners, structural equation modeling analyses revealed no English-language advantage on the MAAP or IPEP as a function of degree of English language exposure. The few relations found were in the direction of greater performance for children with greater Spanish exposure. Together, findings indicate no English-language advantage for basic multisensory attention skills assessed by the MAAP or IPEP between the ages of 3 and 36 months.
4.
Ko ES, Abu-Zhaya R, Kim ES, Kim T, On KW, Kim H, Zhang BT, Seidl A. Mothers' use of touch across infants' development and its implications for word learning: Evidence from Korean dyadic interactions. Infancy 2023; 28:597-618. [PMID: 36757022 PMCID: PMC10085827 DOI: 10.1111/infa.12532]
Abstract
Caregivers' touches that occur alongside words and utterances could aid in the detection of word/utterance boundaries and the mapping of word forms to word meanings. We examined changes in caregivers' use of touches with their speech directed to infants using a multimodal cross-sectional corpus of 35 Korean mother-child dyads across three age groups of infants (8, 14, and 27 months). We tested the hypothesis that caregivers' frequency and use of touches with speech change with infants' development. Results revealed that the frequency of word/utterance-touch alignment as well as word + touch co-occurrence is highest in speech addressed to the youngest group of infants. Thus, this study provides support for the hypothesis that caregivers' use of touch during dyadic interactions is sensitive to infants' age in a way similar to caregivers' use of speech alone and could provide cues useful to infants' language learning at critical points in early development.
Affiliation(s)
- Eon-Suk Ko
- Department of English Language and Literature, Chosun University
- Eun-Sol Kim
- Department of Computer Science, Hanyang University
- Hyunji Kim
- Department of English Language and Literature, Chosun University
- Byoung-Tak Zhang
- Department of Computer Science and Engineering & SNU Artificial Intelligence Institute, Seoul National University
- Amanda Seidl
- Department of Speech, Language, and Hearing Sciences, Purdue University
5.
The temporal dynamics of labelling shape infant object recognition. Infant Behav Dev 2022; 67:101698. [DOI: 10.1016/j.infbeh.2022.101698]
6.
Foran LG, Beverly BL, Shelley-Tremblay J, Estis JM. Can gesture input support toddlers' fast mapping? J Child Lang 2022; 50:1-23. [PMID: 35388778 DOI: 10.1017/s0305000922000149]
Abstract
Forty-eight toddlers participated in a word-learning task assessing the effect of gesture input on mapping nonce words to unfamiliar objects. Receptive fast mapping and expressive naming for target object-word pairs were tested in three conditions: with a point, with a shape gesture, and in a no-gesture, word-only condition. No statistically significant effect of gesture on receptive fast mapping was found, but age was a factor: two-year-olds outperformed one-year-olds on both measures. Only one girl in the one-year-old group correctly named any items. There was a significant interaction between gesture and gender for expressive naming: two-year-old girls were six times more likely than two-year-old boys to correctly name items taught with point and shape gestures, whereas boys named more items taught with the word only than with a point or shape gesture. The role of gesture input remains unclear, particularly for children under two years and for toddler boys.
7.
Chen CH, Houston DM, Yu C. Parent-Child Joint Behaviors in Novel Object Play Create High-Quality Data for Word Learning. Child Dev 2021; 92:1889-1905. [PMID: 34463350 DOI: 10.1111/cdev.13620]
Abstract
This research takes a dyadic approach to study early word learning and focuses on toddlers' (N = 20, age: 17-23 months) information seeking and parents' information providing behaviors and the ways the two are coupled in real-time parent-child interactions. Using head-mounted eye tracking, this study provides the first detailed comparison of children's and their parents' behavioral and attentional patterns in two free-play contexts: one with novel objects with to-be-learned names (Learning condition) and the other with familiar objects with known names (Play condition). Children and parents in the Learning condition modified their individual and joint behaviors when encountering novel objects with to-be-learned names, which created clearer signals that reduced referential ambiguity and potentially facilitated word learning.
Affiliation(s)
- Chen Yu
- The University of Texas at Austin
8.
Krenn B, Sadeghi S, Neubarth F, Gross S, Trapp M, Scheutz M. Models of Cross-Situational and Crossmodal Word Learning in Task-Oriented Scenarios. IEEE Trans Cogn Dev Syst 2020. [DOI: 10.1109/tcds.2020.2995045]
9.
Chen CH, Castellanos I, Yu C, Houston DM. What leads to coordinated attention in parent-toddler interactions? Children's hearing status matters. Dev Sci 2020; 23:e12919. [PMID: 31680414 PMCID: PMC7160036 DOI: 10.1111/desc.12919]
Abstract
Coordinated attention between children and their parents plays an important role in their social, language, and cognitive development. The current study used head-mounted eye-trackers to investigate the effects of children's prelingual hearing loss on how they achieve coordinated attention with their hearing parents during free-flowing object play. We found that toddlers with hearing loss (age: 24-37 months) had similar overall gaze patterns (e.g., gaze length and proportion of face looking) as their normal-hearing peers. In addition, children's hearing status did not affect how likely parents and children attended to the same object at the same time during play. However, when following parents' attention, children with hearing loss used both parents' gaze directions and hand actions as cues, whereas children with normal hearing mainly relied on parents' hand actions. The diversity of pathways leading to coordinated attention suggests the flexibility and robustness of developing systems in using multiple pathways to achieve the same functional end.
Affiliation(s)
- Chi-hsin Chen
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, Ohio 43212
- Irina Castellanos
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, Ohio 43212
- Nationwide Children’s Hospital, 700 Children’s Dr, Columbus, Ohio 43205
- Chen Yu
- Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10 Street, Bloomington, Indiana 47405
- Derek M. Houston
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, Ohio 43212
- Nationwide Children’s Hospital, 700 Children’s Dr, Columbus, Ohio 43205
10.
Gogate L. Maternal object naming is less adapted to preterm infants' than to term infants' word mapping. J Child Psychol Psychiatry 2020; 61:447-458. [PMID: 31710089 DOI: 10.1111/jcpp.13128]
Abstract
BACKGROUND Term infants learn word-object relations in their first year during multisensory interactions with caregivers. Although preterm infants often experience language delays, little is known about how caregivers contribute to their early word-object learning. The present longitudinal study compared maternal naming and word learning in these infant groups. METHODS Forty moderately preterm and 40 term infants participated at 6-9 and 12 months with their mothers. At each visit, mothers named two novel objects during play, and infants' learning was assessed using dynamic displays of the familiar and novel (mismatched) word-object relations. Infants' general cognitive, language, and motoric abilities were evaluated. Maternal multisensory naming was coded for synchrony between the target words and object motions and for other naming styles. RESULTS During play, although overall maternal naming style was similar across infant groups within visits, naming frequency increased from visit 1 to 2 for term but not preterm infants. On the test at visit 1, although the term infants looked equally to novel and familiar word-object relations, their looking to the novel relations correlated positively with maternal synchrony use but inversely with naming frequency. At visit 2, term infants looked longer to the novel relations. In contrast, preterm infants showed no looking preference at either visit, nor was their word-object learning correlated with maternal naming. Preterm infants' cognitive, language, and motor scores were lower than term infants' on the Bayley-III, but their MCDI vocabularies did not differ. CONCLUSIONS Less adaptive maternal naming and delayed word mapping in moderately preterm infants underscore a critical need for multisensory language intervention prior to first-word onset to alleviate cascading effects on later language.
Affiliation(s)
- Lakshmi Gogate
- Department of Speech, Language and Hearing Sciences, University of Missouri, Columbia, MO, USA
11.
Yoshida H, Cirino P, Mire SS, Burling JM, Lee S. Parents' gesture adaptations to children with autism spectrum disorder. J Child Lang 2020; 47:205-224. [PMID: 31588888 DOI: 10.1017/s0305000919000497]
Abstract
The present study focused on parents' social cue use in relation to young children's attention. Participants were ten parent-child dyads; all children were 36 to 60 months old and were either typically developing (TD) or were diagnosed with autism spectrum disorder (ASD). Children wore a head-mounted camera that recorded the proximate child view while their parent played with them. The study compared the following between the TD and ASD groups: (a) frequency of parent's gesture use; (b) parents' monitoring of their child's face; and (c) how children looked at parents' gestures. Results from Bayesian estimation indicated that, compared to the TD group, parents of children with ASD produced more gestures, more closely monitored their children's faces, and provided more scaffolding for their children's visual experiences. Our findings suggest the importance of further investigating parents' visual and gestural scaffolding as a potential developmental mechanism for children's early learning, including for children with ASD.
Affiliation(s)
- Paul Cirino
- Department of Psychology, University of Houston, USA
- Sarah S Mire
- Department of Psychological, Health, and Learning Sciences, University of Houston, USA
- Sunbok Lee
- Department of Psychology, University of Houston, USA
12.
Kadlaskar G, Seidl A, Tager-Flusberg H, Nelson CA, Keehn B. Caregiver Touch-Speech Communication and Infant Responses in 12-Month-Olds at High Risk for Autism Spectrum Disorder. J Autism Dev Disord 2019; 50:1064-1072. [PMID: 31754946 DOI: 10.1007/s10803-019-04310-8]
Abstract
Multimodal communication may facilitate attention in infants. This study examined the presentation of caregiver touch-only and touch + speech input to 12-month-olds at high risk (HRA) and low risk for ASD. Findings indicated that, although both groups received a greater number of touch + speech bouts than touch-only bouts, the duration of overall touch that overlapped with speech was significantly greater in the HRA group. Additionally, HRA infants were less responsive to touch-only bouts than to touch + speech bouts, suggesting that their mothers may use more touch + speech communication to elicit infant responses. Nonetheless, the exact role of touch in multimodal communication directed towards infants at high risk for ASD warrants further exploration.
Affiliation(s)
- Girija Kadlaskar
- Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, IN, USA
- Amanda Seidl
- Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, IN, USA
- Helen Tager-Flusberg
- Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
- Charles A Nelson
- Boston Children's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Harvard Graduate School of Education, Cambridge, MA, USA
- Brandon Keehn
- Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, IN, USA
- Department of Psychological Sciences, Purdue University, West Lafayette, IN, USA
13.
George NR, Bulgarelli F, Roe M, Weiss DJ. Stacking the evidence: Parents' use of acoustic packaging with preschoolers. Cognition 2019; 191:103956. [PMID: 31276946 PMCID: PMC6814401 DOI: 10.1016/j.cognition.2019.04.025]
Abstract
Segmenting continuous events into discrete actions is critical for understanding the world. As infants may lack top-down knowledge of event structure, caregivers provide audiovisual cues to guide the process, aligning action descriptions with event boundaries to increase their salience. This acoustic packaging may be specific to infant-directed speech, but little is known about when and why the use of this cue wanes. We explore whether acoustic packaging persists in parents' teaching of 2.5-5.5-year-old children about various toys. Parents produced a smaller percentage of action speech relative to studies with infants. However, action speech largely remained more aligned to action boundaries relative to non-action speech. Further, for the more challenging novel toys, parents modulated their use of acoustic packaging, providing it more for those children with lower vocabularies. Our findings suggest that acoustic packaging persists beyond interactions with infants, underscoring the utility of multimodal cues for learning, particularly for less knowledgeable learners in challenging learning environments.
Affiliation(s)
- Federica Bulgarelli
- Duke University, United States; Pennsylvania State University, United States
- Mary Roe
- Pennsylvania State University, United States
14.
Eiteljoerge SFV, Adam M, Elsner B, Mani N. Word-object and action-object association learning across early development. PLoS One 2019; 14:e0220317. [PMID: 31393901 PMCID: PMC6687139 DOI: 10.1371/journal.pone.0220317]
Abstract
Successful communication often involves comprehension of both spoken language and observed actions, with and without objects. Even very young infants can learn associations between actions and objects as well as between words and objects. In daily life, however, children are usually confronted with both kinds of input simultaneously. Choosing the critical information to attend to in such situations might help children structure the input and thereby allow for successful learning. In the current study, we therefore investigated the developmental time course of children's and adults' word and action learning when given the opportunity to learn both word-object and action-object associations for the same object. All participants went through a learning phase and a test phase. In the learning phase, they were presented with two novel objects, each associated with a distinct novel name (e.g., "Look, a Tanu") and a distinct novel action (e.g., moving up and down while tilting sideways). In the test phase, participants saw both objects on screen in a baseline phase, then either heard one of the two labels or saw one of the two actions in a prime phase, and then saw the two objects again on screen in a recognition phase. Throughout the trial, participants' target looking was recorded to investigate whether they looked at the target object upon hearing its label or seeing its action, which would indicate learning of the word-object and action-object associations. Growth curve analyses revealed that 12-month-olds showed modest learning of action-object associations, 36-month-olds learned word-object associations, and adults learned both word-object and action-object associations. These results highlight how children attend to the different types of information from the two modalities through which communication is addressed to them. Over time, with increased exposure to systematic word-object mappings, children attend less to action-object mappings, with the latter potentially being mediated by word-object learning even in adulthood. Thus, choosing between the different kinds of input in their rich, multimodal environment might help learning at different points in development.
Affiliation(s)
- Sarah F. V. Eiteljoerge
- Psychology of Language, University of Goettingen, Goettingen, Germany
- Leibniz ScienceCampus “Primate Cognition”, Goettingen, Germany
- Maurits Adam
- Developmental Psychology, University of Potsdam, Potsdam, Germany
- Birgit Elsner
- Developmental Psychology, University of Potsdam, Potsdam, Germany
- Nivedita Mani
- Psychology of Language, University of Goettingen, Goettingen, Germany
- Leibniz ScienceCampus “Primate Cognition”, Goettingen, Germany
15.
Abu-Zhaya R, Kondaurova MV, Houston D, Seidl A. Vocal and Tactile Input to Children Who Are Deaf or Hard of Hearing. J Speech Lang Hear Res 2019; 62:2372-2385. [PMID: 31251677 PMCID: PMC7251336 DOI: 10.1044/2019_jslhr-l-18-0185]
Abstract
Purpose Caregivers may show greater use of nonauditory signals in interactions with children who are deaf or hard of hearing (DHH). This study explored the frequency of maternal touch and the temporal alignment of touch with speech in the input to children who are DHH and age-matched peers with normal hearing. Method We gathered audio and video recordings of mother-child free-play interactions. Maternal speech units were annotated from audio recordings, and touch events were annotated from video recordings. Analyses explored the frequency and duration of touch events and the temporal alignment of touch with speech. Results Greater variance was observed in the frequency of touch and its total duration in the input to children who are DHH. Furthermore, touches produced by mothers of children who are DHH were significantly more likely to be aligned with speech than touches produced by mothers of children with normal hearing. Conclusion Caregivers' modifications in the input to children who are DHH are observed in the combination of speech with touch. The implications for such patterns and how they may impact children's attention and access to the speech signal are discussed.
Affiliation(s)
- Rana Abu-Zhaya
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN
- Derek Houston
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University, Columbus
- Nationwide Children's Hospital, Columbus, OH
- Amanda Seidl
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN
16.
Tseng CH, Chow HM, Ma YK, Ding J. Preverbal infants utilize cross-modal semantic congruency in artificial grammar acquisition. Sci Rep 2018; 8:12707. [PMID: 30139964 PMCID: PMC6107625 DOI: 10.1038/s41598-018-30927-3]
Abstract
Learning in a multisensory world is challenging, as information from different sensory dimensions may be inconsistent and confusing. By adulthood, learners optimally integrate bimodal (e.g., audio-visual, AV) stimulation using both low-level (e.g., temporal synchrony) and high-level (e.g., semantic congruency) properties of the stimuli to boost learning outcomes. However, it is unclear how this capacity emerges and develops. To approach this question, we examined whether preverbal infants were capable of utilizing high-level properties in grammar-like rule acquisition. In three experiments, we habituated pre-linguistic infants with an audio-visual (AV) temporal sequence that resembled a grammar-like rule (A-A-B). We varied the cross-modal semantic congruence of the AV stimuli (Exp 1: congruent syllables/faces; Exp 2: incongruent syllables/shapes; Exp 3: incongruent beeps/faces) while all other low-level properties (e.g., temporal synchrony, sensory energy) were held constant. Eight- to ten-month-old infants learned the grammar-like rule only from congruent AV stimulus pairs (Exp 1), not from incongruent AV pairs (Exps 2, 3). Our results show that, like adults', preverbal infants' learning is influenced by a high-level multisensory integration gating system, pointing to a perceptual origin of the bimodal learning advantage that was not previously acknowledged.
Affiliation(s)
- Chia-Huei Tseng
- Research Institute of Electrical Communication, Tohoku University, Sendai, Japan
- Hiu Mei Chow
- Department of Psychology, University of Massachusetts Boston, Boston, USA
- Yuen Ki Ma
- Department of Psychology, The University of Hong Kong, Hong Kong, SAR, China
- Jie Ding
- Department of Psychology, The University of Hong Kong, Hong Kong, SAR, China
17.
Ben-Aderet T, Gallego-Abenza M, Reby D, Mathevon N. Dog-directed speech: why do we use it and do dogs pay attention to it? Proc Biol Sci 2018; 284:20162429. [PMID: 28077769 DOI: 10.1098/rspb.2016.2429]
Abstract
Pet-directed speech is strikingly similar to infant-directed speech, a peculiar speaking pattern with higher pitch and slower tempo known to engage infants' attention and promote language learning. Here, we report the first investigation of potential factors modulating the use of dog-directed speech, as well as its immediate impact on dogs' behaviour. We recorded adult participants speaking in front of pictures of puppies, adult dogs, and old dogs, and analysed the quality of their speech. We then performed playback experiments to assess dogs' reactions to dog-directed speech compared with normal speech. We found that human speakers used dog-directed speech with dogs of all ages and that the acoustic structure of dog-directed speech was mostly independent of dog age, except for pitch, which was relatively higher when communicating with puppies. Playback demonstrated that, in the absence of other non-auditory cues, puppies were highly reactive to dog-directed speech and that pitch was a key factor modulating their behaviour, suggesting that this specific speech register has a functional value in young dogs. Conversely, older dogs did not react differentially to dog-directed speech compared with normal speech. The fact that speakers continue to use dog-directed speech with older dogs therefore suggests that this speech pattern may mainly be a spontaneous attempt to facilitate interactions with non-verbal listeners.
Collapse
Affiliation(s)
- Tobey Ben-Aderet
- Department of Psychology, City University of New York, Hunter College, New York, NY, USA
| | - Mario Gallego-Abenza
- Equipe Neuro-Ethologie Sensorielle, ENES/Neuro-PSI CNRS UMR9197, University of Lyon/Saint-Etienne, Saint-Etienne, France
| | - David Reby
- School of Psychology, University of Sussex, Brighton BN1 9QH, UK
| | - Nicolas Mathevon
- Department of Psychology, City University of New York, Hunter College, New York, NY, USA; Equipe Neuro-Ethologie Sensorielle, ENES/Neuro-PSI CNRS UMR9197, University of Lyon/Saint-Etienne, Saint-Etienne, France
| |
Collapse
|
18
|
Gogate L, Maganti M. The Origins of Verb Learning: Preverbal and Postverbal Infants' Learning of Word-Action Relations. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2017; 60:3538-3550. [PMID: 29143061 DOI: 10.1044/2017_jslhr-l-17-0085] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/06/2017] [Accepted: 08/04/2017] [Indexed: 06/07/2023]
Abstract
PURPOSE This experiment examined English- or Spanish-learning preverbal (8-9 months, n = 32) and postverbal (12-14 months, n = 40) infants' learning of word-action pairings prior to and after the transition to verb comprehension, and its relation to naturally learned vocabulary. METHOD Infants of both verbal levels were first habituated to 2 dynamic video displays of novel word-action pairings, the words /wem/ or /bæf/, spoken synchronously with an adult shaking or looming an object, and tested with interchanged (switched) versus same word-action pairings. Mothers of the postverbal infants were asked to report on their infants' vocabulary on the MacArthur-Bates Communicative Development Inventories (Fenson et al., 1994). RESULTS The preverbal infants, but not the postverbal infants, looked longer at the switched relative to the same pairings, suggesting word-action mapping. Mothers of the postverbal infants reported a noun bias on the MacArthur-Bates Communicative Development Inventories; infants learned more nouns than verbs in the natural environment. Further analyses revealed marginal word-action mapping in postverbal infants who learned fewer nouns and only comprehended verbs (post-verb comprehension), but not in those who learned more nouns and also produced verbs (post-verb production). CONCLUSIONS These findings on verb learning from inside and outside the laboratory suggest a developmental shift from domain-general to language-specific mechanisms. Long before they talk, infants learning a noun-dominant language learn synchronous word-action relations. As a postverbal language-specific noun bias develops, this learning temporarily diminishes. SUPPLEMENTAL MATERIALS https://doi.org/10.23641/asha.5592637.
Collapse
|
19
|
Gogate L. Development of Early Multisensory Perception and Communication: From Environmental and Behavioral to Neural Signatures. Dev Neuropsychol 2017; 41:269-272. [PMID: 28253037 DOI: 10.1080/87565641.2017.1279429] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Affiliation(s)
- Lakshmi Gogate
- Department of Communication Sciences and Disorders, University of Missouri, Columbia, Missouri
| |
Collapse
|
20
|
Chang L, de Barbaro K, Deák G. Contingencies Between Infants’ Gaze, Vocal, and Manual Actions and Mothers’ Object-Naming: Longitudinal Changes From 4 to 9 Months. Dev Neuropsychol 2017; 41:342-361. [DOI: 10.1080/87565641.2016.1274313] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Affiliation(s)
- Lucas Chang
- Department of Cognitive Science, University of California San Diego, San Diego, California
| | - Kaya de Barbaro
- School of Interactive Computing, Georgia Institute of Technology, Atlanta, Georgia
| | - Gedeon Deák
- Department of Cognitive Science, University of California San Diego, San Diego, California
| |
Collapse
|
21
|
Gogate L, Hollich G. Early Verb-Action and Noun-Object Mapping Across Sensory Modalities: A Neuro-Developmental View. Dev Neuropsychol 2017; 41:293-307. [PMID: 28059566 DOI: 10.1080/87565641.2016.1243112] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
The authors provide an alternative to the traditional view that verbs are harder to learn than nouns by reviewing three lines of behavioral and neurophysiological evidence in word-mapping development across cultures. First, preverbal infants tune into word-action and word-object pairings using domain-general mechanisms. Second, while post-verbal infants from noun-friendly language environments experience verb-action mapping difficulty, infants from verb-friendly language environments do not. Third, children use language-specific conventions to learn all types of words, although still strongly influenced by their language environment. Additionally, the authors suggest neurophysiological research to advance these lines of evidence beyond traditional views of word learning.
Collapse
Affiliation(s)
- Lakshmi Gogate
- Communication Sciences and Disorders, University of Missouri-Columbia, Columbia, Missouri
| | - George Hollich
- Psychological Sciences, Purdue University, West Lafayette, Indiana
| |
Collapse
|
22
|
Bahrick LE, Todd JT, Castellanos I, Sorondo BM. Enhanced attention to speaking faces versus other event types emerges gradually across infancy. Dev Psychol 2016; 52:1705-1720. [PMID: 27786526 PMCID: PMC5291072 DOI: 10.1037/dev0000157] [Citation(s) in RCA: 44] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The development of attention to dynamic faces versus objects providing synchronous audiovisual versus silent visual stimulation was assessed in a large sample of infants. Maintaining attention to the faces and voices of people speaking is critical for perceptual, cognitive, social, and language development. However, no studies have systematically assessed when, if, or how attention to speaking faces emerges and changes across infancy. Two measures of attention maintenance, habituation time (HT) and look-away rate (LAR), were derived from cross-sectional data of 2- to 8-month-old infants (N = 801). Results indicated that attention to audiovisual faces and voices was maintained across age, whereas attention to each of the other event types (audiovisual objects, silent dynamic faces, silent dynamic objects) declined across age. This reveals a gradually emerging advantage in attention maintenance (longer HTs, lower LARs) for audiovisual speaking faces compared with the other 3 event types. At 2 months, infants showed no attentional advantage for faces (with greater attention to audiovisual than to visual events); at 3 months, they attended more to dynamic faces than objects (in the presence or absence of voices); and by 4 to 5 and 6 to 8 months, significantly greater attention emerged to temporally coordinated faces and voices of people speaking compared with all other event types. Our results indicate that selective attention to coordinated faces and voices over other event types emerges gradually across infancy, likely as a function of experience with multimodal, redundant stimulation from person and object events.
Collapse
Affiliation(s)
| | | | - Irina Castellanos
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, Columbus, OH
| | - Barbara M. Sorondo
- Florida International University Libraries, Florida International University, Miami, FL
| |
Collapse
|
23
|
Suanda SH, Smith LB, Yu C. The Multisensory Nature of Verbal Discourse in Parent-Toddler Interactions. Dev Neuropsychol 2016; 41:324-341. [PMID: 28128992 PMCID: PMC7263485 DOI: 10.1080/87565641.2016.1256403] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Toddlers learn object names in sensory rich contexts. Many argue that this multisensory experience facilitates learning. Here, we examine how toddlers' multisensory experience is linked to another aspect of their experience associated with better learning: the temporally extended nature of verbal discourse. We observed parent-toddler dyads as they played with, and as parents talked about, a set of objects. Analyses revealed links between the multisensory and extended nature of speech, highlighting inter-connections and redundancies in the environment. We discuss the implications of these results for our understanding of early discourse, multisensory communication, and how the learning environment shapes language development.
Collapse
Affiliation(s)
- Sumarga H Suanda
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana
| | - Linda B Smith
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana
| | - Chen Yu
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana
| |
Collapse
|
24
|
Audiovisual alignment of co-speech gestures to speech supports word learning in 2-year-olds. J Exp Child Psychol 2016; 145:1-10. [DOI: 10.1016/j.jecp.2015.12.002] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2015] [Revised: 12/08/2015] [Accepted: 12/11/2015] [Indexed: 11/23/2022]
|
25
|
Gogate L, Maganti M. The Dynamics of Infant Attention: Implications for Crossmodal Perception and Word-Mapping Research. Child Dev 2016; 87:345-64. [PMID: 27015082 DOI: 10.1111/cdev.12509] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
The present review is a novel synthesis of research on infants' attention in two related domains: crossmodal perception and word mapping. The authors hypothesize that infant attention is malleable and shifts in real time. They review dynamic models of infant attention and provide empirical evidence for parallel trends in attention shifts from the two domains that support their hypothesis. When infants are exposed to competing auditory-visual stimuli in experiments, multiple factors cause attention to shift during infant-environment interactions. Additionally, attention shifts across nested timescales, and individual variations in attention systematically explain development. The authors suggest future research to further elucidate the causal mechanisms that influence infants' attention dynamics, emphasizing the need to examine individual variations that index shifts over time.
Collapse
Affiliation(s)
- Lakshmi Gogate
- Florida Gulf Coast University; University of Missouri, Columbia
| | | |
Collapse
|
26
|
Trehub SE, Plantinga J, Russo FA. Maternal Vocal Interactions with Infants: Reciprocal Visual Influences. SOCIAL DEVELOPMENT 2015. [DOI: 10.1111/sode.12164] [Citation(s) in RCA: 65] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|