1
Nakashima Y, Kanazawa S, Yamaguchi MK. Recognition of humans from biological motion in infants. Atten Percept Psychophys 2023; 85:2567-2576. [PMID: 36859538] [DOI: 10.3758/s13414-023-02675-8]
Abstract
Infant studies have suggested that the detection of biological motion (BM) might be an innate capacity, based on newborns' spontaneous preference for BM. However, it is unclear if, like adults, infants recognize humans from BM and are able to build the representation of bodies and faces. To address this issue, we tested whether exposure to BM influences subsequent face recognition in 3- to 8-month-old infants. After familiarization with a point-light walker (PLW) of either a female or a male, the infant's preference for female and male faces was measured. If infants can build the representation of not only the body but also the face from PLWs, the familiarization effect of gender induced by the PLW might be generalized to faces. We found that infants at 7 to 8 months looked for longer at the face whose gender was opposite to that of the PLW, whereas 3- to 4- and 5- to 6-month-old infants did not. These results suggest that infants can access the representation of humans from BM and extract gender, which is shared across bodies and faces, from at least 7 to 8 months of age.
Affiliation(s)
- Yusuke Nakashima
- Research and Development Initiative, Chuo University, 742-1 Higashinakano, Hachioji-shi, Tokyo, 192-0393, Japan.
- So Kanazawa
- Department of Psychology, Japan Women's University, Tokyo, Japan
2
Wen F, Gao J, Ke W, Zuo B, Dai Y, Ju Y, Long J. The Effect of Face-Voice Gender Consistency on Impression Evaluation. Arch Sex Behav 2023; 52:1123-1139. [PMID: 36719490] [DOI: 10.1007/s10508-022-02524-z]
Abstract
Face and voice are important information cues of interpersonal interaction. Most previous studies have investigated the cross-modal perception of face and voice from the perspective of cognitive psychology, but few empirical studies have focused on the effect of gender consistency of face and voice on the impression evaluation of the target from the perspective of social cognition. Based on the two-stage model of stereotype activation and the stereotype content model, this research examined the effects of face-voice gender consistency on impression evaluation (gender categorization and warmth competence evaluation) by using a cross-modal priming paradigm (Study 1, 20 males and 23 females, Mage = 21.00, SDage = 2.59), a sequential presentation task (Study 2a, 57 males and 70 females, Mage = 18.54, SDage = 1.54; Study 2b, 52 males and 51 females, Mage = 18.54, SDage = 1.36), and a simultaneous presentation task (Study 3, 51 males and 55 females, Mage = 23.58, SDage = 3.20), respectively. The results showed that: (1) there was a face-voice gender consistency preference in gender categorization, and the response of face-voice consistent condition was faster than that of inconsistent condition; (2) compared with the face-voice gender-inconsistent individuals, the participants showed a higher and more stable evaluation of the warmth and competence of the gender-consistent individuals, indicating the effect of matching preference of the face-voice gender consistency on the impression evaluation; (3) people paid more attention to the gender information of faces in the impression evaluation, and the female face could improve people's evaluation on the target's warmth and competence; (4) males were more intolerant of face-voice gender inconsistency when presented sequentially; the "voice needs to match face" effect was stronger for females when presented simultaneously. 
These findings, on the one hand, enrich and expand previous theories and research on cross-modal processing of face and voice from the perspective of social cognitive impression evaluation; on the other hand, these findings have important practical implications for impression management and decision-making in social interaction.
Affiliation(s)
- Fangfang Wen
- School of Psychology, Center for Studies of Social Psychology, Central China Normal University, Wuhan, 430079, China
- Jia Gao
- School of Psychology, Center for Studies of Social Psychology, Central China Normal University, Wuhan, 430079, China
- Wenlin Ke
- School of Psychology, Center for Studies of Social Psychology, Central China Normal University, Wuhan, 430079, China
- Bin Zuo
- School of Psychology, Center for Studies of Social Psychology, Central China Normal University, Wuhan, 430079, China
- Yu Dai
- School of Psychology, Center for Studies of Social Psychology, Central China Normal University, Wuhan, 430079, China
- Yiyan Ju
- School of Psychology, Center for Studies of Social Psychology, Central China Normal University, Wuhan, 430079, China
- Jiahui Long
- School of Psychology, Center for Studies of Social Psychology, Central China Normal University, Wuhan, 430079, China
3
Schott E, Tamayo MP, Byers-Heinlein K. Keeping track of language: Can monolingual and bilingual infants associate a speaker with the language they speak? Infant and Child Development 2023. [DOI: 10.1002/icd.2403]
Affiliation(s)
- Esther Schott
- Department of Psychology, Concordia University, Montreal, Quebec, Canada
4
Cox CMM, Keren-Portnoy T, Roepstorff A, Fusaroli R. A Bayesian meta-analysis of infants' ability to perceive audio-visual congruence for speech. Infancy 2021; 27:67-96. [PMID: 34542230] [DOI: 10.1111/infa.12436]
Abstract
This paper quantifies the extent to which infants can perceive audio-visual congruence for speech information and assesses whether this ability changes with native language exposure over time. A hierarchical Bayesian robust regression model of 92 separate effect sizes extracted from 24 studies indicates a moderate effect size in a positive direction (0.35, CI [0.21: 0.50]). This result suggests that infants possess a robust ability to detect audio-visual congruence for speech. Moderator analyses, moreover, suggest that infants' audio-visual matching ability for speech emerges at an early point in the process of language acquisition and remains stable for both native and non-native speech throughout early development. A sensitivity analysis of the meta-analytic data, however, indicates that a moderate publication bias for significant results could shift the lower credible interval to include null effects. Based on these findings, we outline recommendations for new lines of enquiry and suggest ways to improve the replicability of results in future investigations.
Affiliation(s)
- Christopher Martin Mikkelsen Cox
- School of Communication and Culture, Aarhus University, Aarhus, Denmark; Interacting Minds Centre, Aarhus University, Aarhus, Denmark; Department of Language and Linguistic Science, University of York, Heslington, UK
- Tamar Keren-Portnoy
- Department of Language and Linguistic Science, University of York, Heslington, UK
- Andreas Roepstorff
- School of Communication and Culture, Aarhus University, Aarhus, Denmark; Interacting Minds Centre, Aarhus University, Aarhus, Denmark
- Riccardo Fusaroli
- School of Communication and Culture, Aarhus University, Aarhus, Denmark; Interacting Minds Centre, Aarhus University, Aarhus, Denmark
5
Ujiie Y, Kanazawa S, Yamaguchi MK. The other-race effect on the McGurk effect in infancy. Atten Percept Psychophys 2021. [PMID: 34386882] [DOI: 10.3758/s13414-021-02342-w]
Abstract
This study investigated the difference in the McGurk effect between own-race-face and other-race-face stimuli among Japanese infants from 5 to 9 months of age. The McGurk effect results from infants using information from a speaker's face in audiovisual speech integration. We hypothesized that the McGurk effect varies with the speaker's race because of the other-race effect, which indicates an advantage for own-race faces in our face processing system. Experiment 1 demonstrated the other-race effect on audiovisual speech integration: infants aged 5-6 months and 8-9 months were likely to perceive the McGurk effect when observing an own-race-face speaker, but not when observing an other-race-face speaker. Experiment 2 found the other-race effect on audiovisual speech integration regardless of irrelevant speech identity cues. Experiment 3 confirmed the infants' ability to differentiate two auditory syllables. These results showed that infants are likely to integrate voice with an own-race face, but not with an other-race face. This implies a role for experience with own-race faces in the development of audiovisual speech integration. Our findings also contribute to the discussion of whether perceptual narrowing is a modality-general, pan-sensory process.
6
Johnson SP, Dong M, Ogren M, Senturk D. Infants' identification of gender in biological motion displays. Infancy 2021; 26:798-810. [PMID: 34043273] [DOI: 10.1111/infa.12406]
Abstract
Infants' knowledge of social categories, including gender-typed characteristics, is a vital aspect of social cognitive development. In the current study, we examined 9- to 12-month-old infants' understanding of the categories "male" and "female" by testing for gender matching in voices or faces with biological motion depicted in point light displays (PLDs). Infants did not show voice-PLD gender matching spontaneously (Experiment 1) or after "training" with gender-matching voice-PLD pairs (Experiment 2). In Experiment 3, however, infants were trained with gender-matching face-PLD pairs and we found that patterns of visual attention to top regions of PLD stimuli during training predicted gender matching of female faces and PLDs. Prior to the end of the first postnatal year, therefore, infants may begin to identify gender in human walk motions, and perhaps form social categories from biological motion.
Affiliation(s)
- Scott P Johnson
- Department of Psychology, University of California, Los Angeles, CA, USA
- Mingfei Dong
- Department of Biostatistics, University of California, Los Angeles, CA, USA
- Marissa Ogren
- Department of Psychology, University of California, Los Angeles, CA, USA
- Damla Senturk
- Department of Biostatistics, University of California, Los Angeles, CA, USA
7
Rennels JL, Verba SA. Gender Typicality of Faces Affects Children's Categorization and Judgments of Women More than of Men. Sex Roles 2019; 81:355-369. [DOI: 10.1007/s11199-018-0997-2]
8
Abstract
By 3 months of age, infants can perceptually distinguish faces based upon differences in gender. However, it is still unknown when infants begin using these perceptual differences to represent faces in a conceptual, kind-based manner. The current study examined this issue by using a violation-of-expectation manual search individuation paradigm to assess 12- and 24-month-old infants’ kind-based representations of faces varying by gender. While infants of both ages successfully individuated human faces from non-face shapes in a control condition, only the 24-month-old infants’ reaching behaviors provided evidence of their individuating male from female faces. The current findings help specify when infants begin to represent male and female faces as being conceptually distinct and may serve as a starting point for socio-cognitive biases observed later in development.
9
May L, Baron AS, Werker JF. Who can speak that language? Eleven-month-old infants have language-dependent expectations regarding speaker ethnicity. Dev Psychobiol 2019; 61:859-873. [DOI: 10.1002/dev.21851]
Affiliation(s)
- Lillian May
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Andrew S. Baron
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Janet F. Werker
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
10
Bergelson E, Casillas M, Soderstrom M, Seidl A, Warlaumont AS, Amatuni A. What Do North American Babies Hear? A large-scale cross-corpus analysis. Dev Sci 2019; 22:e12724. [PMID: 30369005] [PMCID: PMC6294666] [DOI: 10.1111/desc.12724]
Abstract
A range of demographic variables influences how much speech young children hear. However, because studies have used vastly different sampling methods, quantitative comparison of interlocking demographic effects has been nearly impossible, across or within studies. We harnessed a unique collection of existing naturalistic, day-long recordings from 61 homes across four North American cities to examine language input as a function of age, gender, and maternal education. We analyzed adult speech heard by 3- to 20-month-olds who wore audio recorders for an entire day. We annotated speaker gender and speech register (child-directed or adult-directed) for 10,861 utterances from female and male adults in these recordings. Examining age, gender, and maternal education collectively in this ecologically valid dataset, we find several key results. First, the speaker gender imbalance in the input is striking: children heard 2-3× more speech from females than males. Second, children in higher-maternal education homes heard more child-directed speech than those in lower-maternal education homes. Finally, our analyses revealed a previously unreported effect: the proportion of child-directed speech in the input increases with age, due to a decrease in adult-directed speech with age. This large-scale analysis is an important step forward in collectively examining demographic variables that influence early development, made possible by pooled, comparable, day-long recordings of children's language environments. The audio recordings, annotations, and annotation software are readily available for reuse and reanalysis by other researchers.
Affiliation(s)
- Elika Bergelson
- 417 Chapel Dr., Campus Box 90086, Psychology & Neuroscience, Duke University, Durham, NC, 27708, USA
- Marisa Casillas
- Language Development Department, Max Planck Institute for Psycholinguistics, PO Box 310, 6500 AH, Nijmegen, The Netherlands
- Amanda Seidl
- Speech, Language, and Hearing Sciences, Purdue University, USA
- Andrei Amatuni
- 417 Chapel Dr., Campus Box 90086, Psychology & Neuroscience, Duke University, Durham, NC, 27708, USA
11
White H, Hock A, Jubran R, Heck A, Bhatt RS. Visual scanning of male and female bodies in infancy. J Exp Child Psychol 2018; 166:79-95. [PMID: 28888194] [PMCID: PMC5724933] [DOI: 10.1016/j.jecp.2017.08.004]
Abstract
This study addressed the development of attention to information that is socially relevant to adults by examining infants' (N=64) scanning patterns of male and female bodies. Infants exhibited systematic attention to regions associated with sex-related scanning by adults, with 3.5- and 6.5-month-olds looking longer at the torsos of females than of males and looking longer at the legs of males than of females. However, this pattern of looking was not found when infants were tested on headless bodies in Experiment 2, suggesting that infants' differential gaze pattern in Experiment 1 was not due to low-level stimulus features, such as clothing, and also indicating that facial/head information is necessary for infants to exhibit sex-specific scanning. We discuss implications for models of face and body knowledge development.
Affiliation(s)
- Hannah White
- University of Kentucky, Lexington, KY 40506, USA
- Alyson Hock
- University of Kentucky, Lexington, KY 40506, USA
- Alison Heck
- University of Kentucky, Lexington, KY 40506, USA
12
Minar NJ, Lewkowicz DJ. Overcoming the other-race effect in infancy with multisensory redundancy: 10-12-month-olds discriminate dynamic other-race faces producing speech. Dev Sci 2017; 21:e12604. [PMID: 28944541] [DOI: 10.1111/desc.12604]
Abstract
We tested 4-6- and 10-12-month-old infants to investigate whether the often-reported decline in infant sensitivity to other-race faces may reflect responsiveness to static or dynamic/silent faces rather than a general process of perceptual narrowing. Across three experiments, we tested discrimination of either dynamic own-race or other-race faces which were either accompanied by a speech syllable, no sound, or a non-speech sound. Results indicated that 4-6- and 10-12-month-old infants discriminated own-race as well as other-race faces accompanied by a speech syllable, that only the 10-12-month-olds discriminated silent own-race faces, and that 4-6-month-old infants discriminated own-race and other-race faces accompanied by a non-speech sound but that 10-12-month-old infants only discriminated own-race faces accompanied by a non-speech sound. Overall, the results suggest that the ORE reported to date reflects infant responsiveness to static or dynamic/silent faces rather than a general process of perceptual narrowing.
Affiliation(s)
- Nicholas J Minar
- Institute for the Study of Child Development, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
13
Oakes LM. Sample size, statistical power, and false conclusions in infant looking-time research. Infancy 2017; 22:436-469. [PMID: 28966558] [PMCID: PMC5618719] [DOI: 10.1111/infa.12186]
Abstract
Infant research is hard. It is difficult, expensive, and time consuming to identify, recruit and test infants. As a result, ours is a field of small sample sizes. Many studies using infant looking time as a measure have samples of 8 to 12 infants per cell, and studies with more than 24 infants per cell are uncommon. This paper examines the effect of such sample sizes on statistical power and the conclusions drawn from infant looking time research. An examination of the state of the current literature suggests that most published looking time studies have low power, which leads in the long run to an increase in both false positive and false negative results. Three data sets with large samples (>30 infants) were used to simulate experiments with smaller sample sizes; 1000 random subsamples of 8, 12, 16, 20, and 24 infants from the overall samples were selected, making it possible to examine the systematic effect of sample size on the results. This approach revealed that despite clear results with the original large samples, the results with smaller subsamples were highly variable, yielding both false positive and false negative outcomes. Finally, a number of emerging possible solutions are discussed.
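The subsampling procedure this abstract describes (drawing many small random subsamples from a large sample and checking how often the result reaches significance) can be sketched in a few lines. The following is a hypothetical illustration only, not the author's code: the effect size, standard deviation, and sample sizes are invented assumptions chosen to mimic a looking-time difference score.

```python
# Hypothetical sketch of the subsampling approach described in the abstract:
# repeatedly draw small subsamples from a large "full" sample and record how
# often a one-sample t-test on looking-time difference scores is significant.
# All numbers (true effect, SD, subsample sizes) are illustrative assumptions.
import math
import random
import statistics

random.seed(1)

# Simulated full sample: per-infant looking-time difference scores (seconds),
# drawn from a population with a true positive effect (d = 1.5/2.5 = 0.6).
full_sample = [random.gauss(1.5, 2.5) for _ in range(60)]

def one_sample_t(xs):
    """Return the t statistic for H0: mean == 0."""
    n = len(xs)
    return statistics.mean(xs) / (statistics.stdev(xs) / math.sqrt(n))

def significance_rate(sample, n, n_draws=1000, t_crit=2.0):
    """Fraction of n_draws random subsamples of size n whose |t| exceeds
    t_crit (t_crit ~ 2.0 roughly approximates the two-tailed .05 cutoff)."""
    hits = sum(
        1 for _ in range(n_draws)
        if abs(one_sample_t(random.sample(sample, n))) > t_crit
    )
    return hits / n_draws

for n in (8, 12, 16, 20, 24):
    print(n, round(significance_rate(full_sample, n), 3))
```

Even with a clear effect in the full sample, the smaller subsamples reach significance far less often, which is the variability (and the false-negative risk) the paper documents.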
14
Abstract
Experimental research has shown that pairs of stimuli which are congruent and assumed to ‘go together’ are recalled more effectively than an item presented in isolation. Will this multisensory memory benefit occur when stimuli are richer and longer, in an ecological setting? In the present study, we focused on an everyday situation of audio-visual learning and manipulated the relationship between audio guide tracks and viewed portraits in the galleries of the Tate Britain. By varying the gender and narrative style of the voice-over, we examined how the perceived congruency and assumed unity of the audio guide track with painted portraits affected subsequent recall. We show that tracks perceived as best matching the viewed portraits led to greater recall of both sensory and linguistic content. We provide the first evidence that manipulating crossmodal congruence and unity assumptions can effectively impact memory in a multisensory ecological setting, even in the absence of precise temporal alignment between sensory cues.
Affiliation(s)
- Merle T. Fairhurst
- Centre for the Study of the Senses, School of Advanced Study, University of London, London, United Kingdom
- Munich Centre for Neuroscience, Ludwig Maximilian University, Munich, Germany
- Minnie Scott
- Tate Learning, Tate Britain, London, United Kingdom
- Ophelia Deroy
- Centre for the Study of the Senses, School of Advanced Study, University of London, London, United Kingdom
- Munich Centre for Neuroscience, Ludwig Maximilian University, Munich, Germany
15
Pickron CB, Fava E, Scott LS. Follow My Gaze: Face Race and Sex Influence Gaze-Cued Attention in Infancy. Infancy 2017; 22:626-644. [DOI: 10.1111/infa.12180]
Affiliation(s)
- Eswen Fava
- Psychological and Brain Sciences, University of Massachusetts Amherst
16
Xiao NG, Quinn PC, Liu S, Ge L, Pascalis O, Lee K. Older but not younger infants associate own-race faces with happy music and other-race faces with sad music. Dev Sci 2017; 21. [DOI: 10.1111/desc.12537]
Affiliation(s)
- Naiqi G. Xiao
- Dr Eric Jackman Institute of Child Study, University of Toronto, Toronto, Canada
- Paul C. Quinn
- Department of Psychological and Brain Sciences, University of Delaware, Newark, USA
- Liezhong Ge
- Zhejiang Sci-Tech University, Hangzhou, China
- Center for Psychological Sciences, Zhejiang University, Hangzhou, China
- Kang Lee
- Dr Eric Jackman Institute of Child Study, University of Toronto, Toronto, Canada
17
Richoz AR, Quinn PC, Hillairet de Boisferon A, Berger C, Loevenbruck H, Lewkowicz DJ, Lee K, Dole M, Caldara R, Pascalis O. Audio-Visual Perception of Gender by Infants Emerges Earlier for Adult-Directed Speech. PLoS One 2017; 12:e0169325. [PMID: 28060872] [PMCID: PMC5218491] [DOI: 10.1371/journal.pone.0169325]
Abstract
Early multisensory perceptual experiences shape the abilities of infants to perform socially-relevant visual categorization, such as the extraction of gender, age, and emotion from faces. Here, we investigated whether multisensory perception of gender is influenced by infant-directed (IDS) or adult-directed (ADS) speech. Six-, 9-, and 12-month-old infants saw side-by-side silent video-clips of talking faces (a male and a female) and heard either a soundtrack of a female or a male voice telling a story in IDS or ADS. Infants participated in only one condition, either IDS or ADS. Consistent with earlier work, infants displayed advantages in matching female relative to male faces and voices. Moreover, the new finding that emerged in the current study was that extraction of gender from face and voice was stronger at 6 months with ADS than with IDS, whereas at 9 and 12 months, matching did not differ for IDS versus ADS. The results indicate that the ability to perceive gender in audiovisual speech is influenced by speech manner. Our data suggest that infants may extract multisensory gender information developmentally earlier when looking at adults engaged in conversation with other adults (i.e., ADS) than when adults are directly talking to them (i.e., IDS). Overall, our findings imply that the circumstances of social interaction may shape early multisensory abilities to perceive gender.
Affiliation(s)
- Anne-Raphaëlle Richoz
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- LPNC, University of Grenoble Alpes, Grenoble, France
- LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
- Paul C. Quinn
- Department of Psychological and Brain Sciences, University of Delaware, Newark, Delaware, United States of America
- Anne Hillairet de Boisferon
- LPNC, University of Grenoble Alpes, Grenoble, France
- LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
- Carole Berger
- LPNC, University of Grenoble Alpes, Grenoble, France
- LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
- Hélène Loevenbruck
- LPNC, University of Grenoble Alpes, Grenoble, France
- LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
- David J. Lewkowicz
- Department of Communication Sciences & Disorders, Northeastern University, Boston, Massachusetts, United States of America
- Kang Lee
- Institute of Child Study, University of Toronto, Toronto, Ontario, Canada
- Marjorie Dole
- LPNC, University of Grenoble Alpes, Grenoble, France
- LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
- Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Olivier Pascalis
- LPNC, University of Grenoble Alpes, Grenoble, France
- LPNC, CNRS-UMR 5105, University of Grenoble Alpes, Grenoble, France
18
Shinskey JL. Sound effects: Multimodal input helps infants find displaced objects. Br J Dev Psychol 2016; 35:317-333. [PMID: 27868211] [DOI: 10.1111/bjdp.12165]
Abstract
Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting auditory input is more salient in the absence of visual input. This article addresses how audiovisual input affects 10-month-olds' search for displaced objects. In AB tasks, infants who previously retrieved an object at A subsequently fail to find it after it is displaced to B, especially following a delay between hiding and retrieval. Experiment 1 manipulated auditory input by keeping the hidden object audible versus silent, and visual input by presenting the delay in the light versus dark. Infants succeeded more at B with audible than silent objects and, unexpectedly, more after delays in the light than dark. Experiment 2 presented both the delay and search phases in darkness. The unexpected light-dark difference disappeared. Across experiments, the presence of auditory input helped infants find displaced objects, whereas the absence of visual input did not. Sound might help by strengthening object representation, reducing memory load, or focusing attention. This work provides new evidence on when bimodal input aids object processing, corroborates claims that audiovisual processing improves over the first year of life, and contributes to multisensory approaches to studying cognition.
Statement of contribution. What is already known on this subject: Before 9 months, infants use sound to retrieve a stationary object hidden by darkness but not one hidden by occlusion, suggesting they find auditory input more salient in the absence of visual input in simple search tasks. After 9 months, infants' object processing appears more sensitive to multimodal (e.g., audiovisual) input. What does this study add? This study tested how audiovisual input affects 10-month-olds' search for an object displaced in an AB task. Sound helped infants find displaced objects in both the presence and absence of visual input. Object processing becomes more sensitive to bimodal input as multisensory functions develop across the first year.
Affiliation(s)
- Jeanne L Shinskey
- Royal Holloway, University of London, UK; University of South Carolina, Columbia, South Carolina, USA
19
Murray MM, Lewkowicz DJ, Amedi A, Wallace MT. Multisensory Processes: A Balancing Act across the Lifespan. Trends Neurosci 2016; 39:567-579. [PMID: 27282408] [PMCID: PMC4967384] [DOI: 10.1016/j.tins.2016.05.003]
Abstract
Multisensory processes are fundamental in scaffolding perception, cognition, learning, and behavior. How and when stimuli from different sensory modalities are integrated rather than treated as separate entities is poorly understood. We review how the relative reliance on stimulus characteristics versus learned associations dynamically shapes multisensory processes. We illustrate the dynamism in multisensory function across two timescales: one long term that operates across the lifespan and one short term that operates during the learning of new multisensory relations. In addition, we highlight the importance of task contingencies. We conclude that these highly dynamic multisensory processes, based on the relative weighting of stimulus characteristics and learned associations, provide both stability and flexibility to brain functions over a wide range of temporal scales.
Affiliation(s)
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Clinical Neurosciences and Department of Radiology, University Hospital Centre and University of Lausanne, Lausanne, Switzerland; Electroencephalography Brain Mapping Core, Centre for Biomedical Imaging (CIBM), Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne, Jules Gonin Eye Hospital, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
- David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
- Amir Amedi
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada (IMRIC), Hadassah Medical School, Hebrew University of Jerusalem, Jerusalem, Israel; Interdisciplinary and Cognitive Science Program, The Edmond & Lily Safra Center for Brain Sciences (ELSC), Hebrew University of Jerusalem, Jerusalem, Israel
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA.