1
Vannasing P, Dionne-Dostie E, Tremblay J, Paquette N, Collignon O, Gallagher A. Electrophysiological responses of audiovisual integration from infancy to adulthood. Brain Cogn 2024;178:106180. [PMID: 38815526] [DOI: 10.1016/j.bandc.2024.106180]
Abstract
Our ability to merge information from different senses into a unified percept is a crucial perceptual process for efficient interaction with our multisensory environment. Yet, the developmental process by which the brain implements multisensory integration (MSI) remains poorly understood. This cross-sectional study characterizes the developmental patterns of responses to audiovisual events in 131 individuals aged from 3 months to 30 years. Electroencephalography (EEG) was recorded during a passive task including simple auditory, visual, and audiovisual stimuli. In addition to examining age-related variations in MSI responses, we investigated event-related potentials (ERPs) linked with auditory and visual stimulation alone. This was done to depict the typical developmental trajectory of unisensory processing from infancy to adulthood within our sample and to contextualize the maturation effects of MSI in relation to unisensory development. Comparing the neural response to audiovisual stimuli with the sum of the unisensory responses revealed signs of MSI in the ERPs, specifically between the P2 and N2 components (P2 effect). Furthermore, adult-like MSI responses emerged relatively late in development, around 8 years of age. The automatic integration of simple audiovisual stimuli is a long developmental process that emerges during childhood and continues to mature through adolescence, with ERP latencies decreasing with age.
Affiliation(s)
- Phetsamone Vannasing
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Emmanuelle Dionne-Dostie
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Julie Tremblay
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Natacha Paquette
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada.
- Olivier Collignon
- Institute of Psychology (IPSY) and Institute of Neuroscience (IoNS), Université Catholique de Louvain, Louvain-La-Neuve, Belgium; School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne and Sion, Switzerland.
- Anne Gallagher
- Neurodevelopmental Optical Imaging Laboratory (LION Lab), Sainte-Justine University Hospital Research Centre, Montreal, QC, Canada; Cerebrum, Department of Psychology, University of Montreal, Montreal, QC, Canada.
2
Weng Y, Rong Y, Peng G. The development of audiovisual speech perception in Mandarin-speaking children: Evidence from the McGurk paradigm. Child Dev 2024;95:750-765. [PMID: 37843038] [DOI: 10.1111/cdev.14022]
Abstract
The developmental trajectory of audiovisual speech perception in Mandarin-speaking children remains understudied. This cross-sectional study of Mandarin-speaking 3- to 4-year-olds, 5- to 6-year-olds, 7- to 8-year-olds, and adults from Xiamen, China (n = 87, 44 males) investigated this issue using the McGurk paradigm with three levels of auditory noise. In identifying congruent stimuli, 3- to 4-year-olds underperformed the older groups, whose performances were comparable. For incongruent stimuli, a developmental shift was observed: 3- to 4-year-olds made significantly more audio-dominant but fewer audiovisual-integrated responses than the older groups. With increasing auditory noise, the difference between children and adults widened in identifying congruent stimuli but narrowed in perceiving incongruent ones. The findings regarding noise effects agree with the statistically optimal hypothesis.
Affiliation(s)
- Yi Weng
- Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China.
- Yicheng Rong
- Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China.
- Gang Peng
- Department of Chinese and Bilingual Studies, Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong SAR, China.
3
Nematova S, Zinszer B, Jasinska KK. Exploring audiovisual speech perception in monolingual and bilingual children in Uzbekistan. J Exp Child Psychol 2024;239:105808. [PMID: 37972516] [DOI: 10.1016/j.jecp.2023.105808]
Abstract
This study aimed to investigate the development of audiovisual speech perception in monolingual Uzbek-speaking and bilingual Uzbek-Russian-speaking children, focusing on the impact of language experience on audiovisual speech perception and the role of visual phonetic (i.e., mouth movements corresponding to phonetic/lexical information) and temporal (i.e., timing of speech signals) cues. A total of 321 children aged 4 to 10 years in Tashkent, Uzbekistan, discriminated /ba/ and /da/ syllables across three conditions: auditory-only, audiovisual phonetic (i.e., sound accompanied by mouth movements), and audiovisual temporal (i.e., sound onset/offset accompanied by mouth opening/closing). Effects of modality (audiovisual phonetic, audiovisual temporal, or audio-only cues), age, group (monolingual or bilingual), and their interactions were tested using a Bayesian regression model. Overall, older participants performed better than younger participants. Participants performed better in the audiovisual phonetic modality compared with the auditory modality. However, no significant difference between monolingual and bilingual children was observed across all modalities. This finding stands in contrast to earlier studies. We attribute the contrasting findings of our study and the existing literature to the cross-linguistic similarity of the language pairs involved. When the languages spoken by bilinguals exhibit substantial linguistic similarity, there may be an increased necessity to disambiguate speech signals, leading to a greater reliance on audiovisual cues. The limited phonological similarity between Uzbek and Russian might have minimized bilinguals' need to rely on visual speech cues, contributing to the lack of group differences in our study.
Affiliation(s)
- Shakhlo Nematova
- Department of Linguistics and Cognitive Science, University of Delaware, Newark, DE 19716, USA.
- Benjamin Zinszer
- Department of Psychology, Swarthmore College, Swarthmore, PA 19081, USA.
- Kaja K Jasinska
- Department of Applied Psychology and Human Development, University of Toronto, Toronto, ON M5S 1A1, Canada.
4
Moro SS, Qureshi FA, Steeves JKE. Perception of the McGurk effect in people with one eye depends on whether the eye is removed during infancy or adulthood. Front Neurosci 2023;17:1217831. [PMID: 37901426] [PMCID: PMC10603249] [DOI: 10.3389/fnins.2023.1217831]
Abstract
Background: The visual system is not fully mature at birth and continues to develop throughout infancy until it reaches adult levels through late childhood and adolescence. Disruption of vision during this postnatal period, prior to visual maturation, results in deficits of visual processing and in turn may affect the development of complementary senses. Studying people who have had one eye surgically removed during early postnatal development is a useful model for understanding timelines of sensory development and the role of binocularity in visual system maturation. Adaptive auditory and audiovisual plasticity following the loss of one eye early in life has been observed for both low- and high-level visual stimuli. Notably, people who have had one eye removed early in life perceive the McGurk effect much less than binocular controls.
Methods: The current study investigates whether multisensory compensatory mechanisms are also present in people who had one eye removed late in life, after postnatal visual system maturation, by measuring whether they perceive the McGurk effect compared to binocular controls and people who had one eye removed early in life.
Results: People who had one eye removed late in life perceived the McGurk effect similarly to binocular viewing controls, unlike those who had one eye removed early in life.
Conclusion: This suggests differences in multisensory compensatory mechanisms based on age at surgical eye removal. These results indicate that cross-modal adaptations for the loss of binocularity may depend on plasticity levels during cortical development.
Affiliation(s)
- Stefania S. Moro
- Department of Psychology and Centre for Vision Research, York University, Toronto, ON, Canada
- The Hospital for Sick Children, Toronto, ON, Canada
- Faizaan A. Qureshi
- Department of Psychology and Centre for Vision Research, York University, Toronto, ON, Canada
- Jennifer K. E. Steeves
- Department of Psychology and Centre for Vision Research, York University, Toronto, ON, Canada
- The Hospital for Sick Children, Toronto, ON, Canada
5
Iqbal ZJ, Shahin AJ, Bortfeld H, Backer KC. The McGurk Illusion: A Default Mechanism of the Auditory System. Brain Sci 2023;13:510. [PMID: 36979322] [PMCID: PMC10046462] [DOI: 10.3390/brainsci13030510]
Abstract
Recent studies have questioned past conclusions regarding the mechanisms of the McGurk illusion, especially how McGurk susceptibility might inform our understanding of audiovisual (AV) integration. We previously proposed that the McGurk illusion is likely attributable to a default mechanism, whereby either the visual system, auditory system, or both default to specific phonemes—those implicated in the McGurk illusion. We hypothesized that the default mechanism occurs because visual stimuli with an indiscernible place of articulation (like those traditionally used in the McGurk illusion) lead to an ambiguous perceptual environment and thus a failure in AV integration. In the current study, we tested the default hypothesis as it pertains to the auditory system. Participants performed two tasks. One task was a typical McGurk illusion task, in which individuals listened to auditory-/ba/ paired with visual-/ga/ and judged what they heard. The second task was an auditory-only task, in which individuals transcribed trisyllabic words with a phoneme replaced by silence. We found that individuals’ transcription of missing phonemes often defaulted to ‘/d/t/th/’, the same phonemes often experienced during the McGurk illusion. Importantly, individuals’ default rate was positively correlated with their McGurk rate. We conclude that the McGurk illusion arises when people fail to integrate visual percepts with auditory percepts, due to visual ambiguity, thus leading the auditory system to default to phonemes often implicated in the McGurk illusion.
Affiliation(s)
- Zunaira J. Iqbal
- Department of Cognitive and Information Sciences, University of California, Merced, CA 95343, USA
- Antoine J. Shahin
- Department of Cognitive and Information Sciences, University of California, Merced, CA 95343, USA
- Health Sciences Research Institute, University of California, Merced, CA 95343, USA
- Heather Bortfeld
- Department of Cognitive and Information Sciences, University of California, Merced, CA 95343, USA
- Health Sciences Research Institute, University of California, Merced, CA 95343, USA
- Department of Psychological Sciences, University of California, Merced, CA 95353, USA
- Kristina C. Backer
- Department of Cognitive and Information Sciences, University of California, Merced, CA 95343, USA
- Health Sciences Research Institute, University of California, Merced, CA 95343, USA
6
Butera IM, Stevenson RA, Gifford RH, Wallace MT. Visually biased perception in cochlear implant users: A study of the McGurk and sound-induced flash illusions. Trends Hear 2023;27:23312165221076681. [PMID: 37377212] [PMCID: PMC10334005] [DOI: 10.1177/23312165221076681]
Abstract
The reduction in spectral resolution by cochlear implants often requires complementary visual speech cues to facilitate understanding. Despite substantial clinical characterization of auditory-only speech measures, relatively little is known about the audiovisual (AV) integrative abilities that most cochlear implant (CI) users rely on for daily speech comprehension. In this study, we tested AV integration in 63 CI users and 69 normal-hearing (NH) controls using the McGurk and sound-induced flash illusions. To our knowledge, this study is the largest to date to measure the McGurk effect in this population and the first to test the sound-induced flash illusion (SIFI). When presented with conflicting AV speech stimuli (i.e., the phoneme "ba" dubbed onto the viseme "ga"), 55 CI users (87%) reported a fused percept of "da" or "tha" on at least one trial. After applying an error correction based on unisensory responses, we found that among those susceptible to the illusion, CI users experienced lower fusion than controls, a result concordant with the SIFI, where pairing a single circle flashing on the screen with multiple beeps resulted in fewer illusory flashes for CI users. While illusion perception in these two tasks appears to be uncorrelated among CI users, we identified a negative correlation in the NH group. Because neither illusion appears to further explain variability in CI outcome measures, further research is needed to determine how these findings relate to CI users' speech understanding, particularly in ecological listening conditions that are naturally multisensory.
Affiliation(s)
- Iliza M. Butera
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Ryan A. Stevenson
- Department of Psychology, University of Western Ontario, London, ON, Canada
- Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- René H. Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
7
Chang C, Wang E, Yang J, Luan X, Wang A, Zhang M. Differences in eccentricity for sound-induced flash illusion in four visual fields. Perception 2023;52:56-73. [DOI: 10.1177/03010066221136670]
Abstract
The sound-induced flash illusion (SiFI) is a multisensory illusion dominated by auditory stimuli, in which the individual perceives the number of visual flashes as equal to the number of auditory stimuli when flashes are presented along with an unequal number of auditory stimuli. Although the mechanisms underlying fission and fusion illusions have been documented, there is not yet a consensus on how they vary with eccentricity. In the present study, by incorporating the classic SiFI paradigm into four different eccentricities, we investigated whether the SiFI varies across them. The results showed that the fission illusion varied significantly across the four eccentricities, with illusions at the perifovea (7°) and periphery (11°) being greater than at the fovea and parafovea (3°). In contrast, the fusion illusion did not vary significantly across the four eccentricities. Our findings revealed that the SiFI was affected by different visual fields and that the fission and fusion illusions differed. Furthermore, by examining the SiFI across eccentricities in different visual fields, this study also suggests that bottom-up factors affect the SiFI.
Affiliation(s)
- Erlei Wang
- The Second Affiliated Hospital of Soochow University, China
- Ming Zhang
- Soochow University, China; Okayama University, Japan
8
Zhang F, Lei J, Gong H, Wu H, Chen L. The development of speechreading skills in Chinese students with hearing impairment. Front Psychol 2022;13:1020211. [PMID: 36405128] [PMCID: PMC9674306] [DOI: 10.3389/fpsyg.2022.1020211]
Abstract
The developmental trajectory of speechreading skills is poorly understood, and existing research has revealed rather inconsistent results. In this study, 209 Chinese students with hearing impairment between 7 and 20 years old completed the Chinese Speechreading Test, which targets three linguistic levels (words, phrases, and sentences). Both response time and accuracy data were collected and analyzed. Results revealed (i) no developmental change in speechreading accuracy between ages 7 and 14, after which the accuracy rate either stagnated or dropped, and (ii) no significant developmental pattern in speechreading speed across all ages. Results also showed that across all age groups, speechreading accuracy was higher for phrases than for words and sentences, and overall speechreading speed decreased in the order of phrases, words, and sentences. These findings suggest that the development of speechreading in Chinese is not a continuous, linear process.
Affiliation(s)
- Fen Zhang
- Central China Normal University, Wuhan, China
- Huina Gong
- Central China Normal University, Wuhan, China
- Hui Wu
- Shandong University, Jinan, China
- Liang Chen
- University of Georgia, Athens, GA, United States
9
Development of visual dominance in face-voice integration: Evidence from cross-modal compatibility effects in a gender categorization task. Cogn Dev 2022. [DOI: 10.1016/j.cogdev.2022.101263]
10
The magnitude of the sound-induced flash illusion does not increase monotonically as a function of visual stimulus eccentricity. Atten Percept Psychophys 2022;84:1689-1698. [PMID: 35562629] [PMCID: PMC9106326] [DOI: 10.3758/s13414-022-02493-4]
Abstract
The sound-induced flash illusion (SIFI) occurs when a rapidly presented visual stimulus is accompanied by two auditory stimuli, creating the illusory percept of two visual stimuli. While much research has focused on how the temporal proximity of the audiovisual stimuli impacts susceptibility to the illusion, comparatively less research has focused on the impact of spatial manipulations. Here, we aimed to assess whether manipulating the eccentricity of visual flash stimuli altered the properties of the temporal binding window associated with the SIFI. Twenty participants were required to report whether they perceived one or two flashes that were concurrently presented with one or two beeps. Visual stimuli were presented at one of four different retinal eccentricities (2.5, 5, 7.5, or 10 degrees below fixation), and audiovisual stimuli were separated by one of eight stimulus-onset asynchronies. In keeping with previous findings, increasing the stimulus-onset asynchrony between the auditory and visual stimuli led to a marked decrease in susceptibility to the illusion, allowing us to estimate the width and amplitude of the temporal binding window. However, varying the eccentricity of the visual stimulus had no effect on either the width or the peak amplitude of the temporal binding window, with a similar pattern of results observed for both the "fission" and "fusion" variants of the illusion. Thus, spatial manipulations of the audiovisual stimuli used to elicit the SIFI appear to have a weaker effect on the integration of sensory signals than temporal manipulations, a finding which has implications for neuroanatomical models of multisensory integration.
11
Trudeau-Fisette P, Arnaud L, Ménard L. Visual Influence on Auditory Perception of Vowels by French-Speaking Children and Adults. Front Psychol 2022;13:740271. [PMID: 35282186] [PMCID: PMC8913716] [DOI: 10.3389/fpsyg.2022.740271]
Abstract
Audiovisual interaction in speech perception is well documented in adults. Despite the large body of evidence suggesting that children are also sensitive to visual input, very few empirical studies have been conducted. To further investigate whether visual inputs influence auditory perception of phonemes in preschoolers in the same way as in adults, we conducted an audiovisual identification test. The auditory stimuli (an /e/-/ø/ continuum) were presented either alone or simultaneously with a visual presentation of the articulation of the vowel /e/ or /ø/. The results suggest that, although all participants experienced visual influence on auditory perception, substantial individual differences exist in the 5- to 6-year-old group. While additional work is required to confirm this hypothesis, we suggest that the auditory and visual systems are still developing at that age and that multisensory phonological categorization of the rounding contrast took place only in children whose sensory systems and sensorimotor representations were mature.
Affiliation(s)
- Paméla Trudeau-Fisette
- Laboratoire de Phonétique, Université du Québec à Montréal, Montreal, QC, Canada
- Centre for Research on Brain, Language and Music, Montreal, QC, Canada
- Laureline Arnaud
- Centre for Research on Brain, Language and Music, Montreal, QC, Canada
- Integrated Program in Neuroscience, McGill University, Montreal, QC, Canada
- Lucie Ménard
- Laboratoire de Phonétique, Université du Québec à Montréal, Montreal, QC, Canada
- Centre for Research on Brain, Language and Music, Montreal, QC, Canada
12
Butera IM, Larson ED, DeFreese AJ, Lee AKC, Gifford RH, Wallace MT. Functional localization of audiovisual speech using near infrared spectroscopy. Brain Topogr 2022;35:416-430. [PMID: 35821542] [PMCID: PMC9334437] [DOI: 10.1007/s10548-022-00904-1]
Abstract
Visual cues are especially vital for hearing impaired individuals such as cochlear implant (CI) users to understand speech in noise. Functional near infrared spectroscopy (fNIRS) is a light-based imaging technology that is ideally suited for measuring the brain activity of CI users due to its compatibility with both the ferromagnetic and electrical components of these implants. In a preliminary step toward better elucidating the behavioral and neural correlates of audiovisual (AV) speech integration in CI users, we designed a speech-in-noise task and measured the extent to which 24 normal hearing individuals could integrate the audio of spoken monosyllabic words with the corresponding visual signals of a female speaker. In our behavioral task, we found that audiovisual pairings provided average improvements of 103% and 197% over auditory-alone listening conditions at -6 and -9 dB signal-to-noise ratios consisting of multi-talker background noise. In an fNIRS task using similar stimuli, we measured activity during auditory-only listening, visual-only lipreading, and AV listening conditions. We identified cortical activity in all three conditions over regions of middle and superior temporal cortex typically associated with speech processing and audiovisual integration. In addition, three channels active during the lipreading condition showed correlations (uncorrected) with behavioral measures of audiovisual gain as well as with the McGurk effect. Further work focusing primarily on the regions of interest identified in this study could test how AV speech integration may differ for CI users who rely on this mechanism for daily communication.
Affiliation(s)
- Iliza M. Butera
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Eric D. Larson
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA
- Andrea J. DeFreese
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Adrian KC Lee
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA, USA; Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- René H. Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
13
De Nil L, Isabella S, Jobst C, Kwon S, Mollaei F, Cheyne D. Complexity-Dependent Modulations of Beta Oscillations for Verbal and Nonverbal Movements. J Speech Lang Hear Res 2021;64:2248-2260. [PMID: 33900804] [DOI: 10.1044/2021_jslhr-20-00275]
Abstract
Purpose: The planning and execution of motor behaviors require coordination of neurons that is established through synchronization of neural activity. Movements are typically preceded by event-related desynchronization (ERD) in the beta range (15-30 Hz), primarily localized in the motor cortex, while movement onset is associated with event-related synchronization (ERS). It is hypothesized that ERD is important for movement preparation and execution, while ERS serves to inhibit movement and update the motor plan. The primary objective of this study was to determine to what extent movement-related oscillatory brain patterns (ERD and ERS) during verbal and nonverbal tasks may be affected differentially by variations in task complexity.
Method: Seventeen right-handed adult participants (nine women, eight men; M age = 25.8 years, SD = 5.13) completed a sequential button press task and a verbal task. The final analyses included data for 15 participants for the nonverbal task and 13 for the verbal task. Both tasks consisted of two complexity levels: simple and complex sequences. Magnetoencephalography was used to record modulations in beta band brain oscillations during task performance.
Results: Both the verbal and button press tasks were characterized by significant premovement ERD and postmovement ERS. However, only simple sequences showed a distinct transient synchronization during the premovement phase of the task. Differences between the two tasks were reflected in both latency and peak amplitude of ERD and ERS, as well as in lateralization of oscillations.
Conclusions: Both verbal and nonverbal movements showed a significant desynchronization of beta oscillations during the movement preparation and holding phase and a resynchronization upon movement termination. Importantly, the premovement phase for simple but not complex tasks was characterized by a transient partial synchronization. In addition, the data revealed significant differences between the two tasks in terms of lateralization of oscillatory modulations. Our findings suggest that, while data from general motor control research can inform our understanding of speech motor control, significant differences exist between the two motor systems that caution against overgeneralization of underlying neural control processes.
Affiliation(s)
- Luc De Nil
- Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- Rehabilitation Sciences Institute, University of Toronto, Ontario, Canada
- Silvia Isabella
- Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- The Hospital for Sick Children Research Institute, Toronto, Ontario, Canada
- Cecilia Jobst
- The Hospital for Sick Children Research Institute, Toronto, Ontario, Canada
- Soonji Kwon
- The Hospital for Sick Children Research Institute, Toronto, Ontario, Canada
- Fatemeh Mollaei
- Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- The Hospital for Sick Children Research Institute, Toronto, Ontario, Canada
- Douglas Cheyne
- Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
- The Hospital for Sick Children Research Institute, Toronto, Ontario, Canada
14
Abstract
Visual speech cues play an important role in speech recognition, and the McGurk effect is a classic demonstration of this. In the original McGurk & Macdonald (Nature 264, 746-748, 1976) experiment, 98% of participants reported an illusory "fusion" percept of /d/ when listening to the spoken syllable /b/ and watching the visual speech movements for /g/. However, more recent work shows that subject and task differences influence the proportion of fusion responses. In the current study, we varied task (forced-choice vs. open-ended), stimulus set (including /d/ exemplars vs. not), and data collection environment (lab vs. Mechanical Turk) to investigate the robustness of the McGurk effect. Across experiments, using the same stimuli to elicit the McGurk effect, we found fusion responses ranging from 10% to 60%, thus showing large variability in the likelihood of experiencing the McGurk effect across factors that are unrelated to the perceptual information provided by the stimuli. Rather than a robust perceptual illusion, we therefore argue that the McGurk effect exists only for some individuals under specific task situations. Significance: This series of studies re-evaluates the classic McGurk effect, which shows the relevance of visual cues on speech perception. We highlight the importance of taking into account subject variables and task differences, and challenge future researchers to think carefully about the perceptual basis of the McGurk effect, how it is defined, and what it can tell us about audiovisual integration in speech.
15
Hirst RJ, McGovern DP, Setti A, Shams L, Newell FN. What you see is what you hear: Twenty years of research using the Sound-Induced Flash Illusion. Neurosci Biobehav Rev 2020; 118:759-774. [DOI: 10.1016/j.neubiorev.2020.09.006] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Revised: 07/06/2020] [Accepted: 09/03/2020] [Indexed: 01/17/2023]
16
Multisensory Perception, Verbal, Visuo-spatial and Motor Working Memory Modulation After a Single Open- or Closed-Skill Exercise Session in Children. Journal of Cognitive Enhancement 2020. [DOI: 10.1007/s41465-020-00189-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
17
Masini A, Lanari M, Marini S, Tessari A, Toselli S, Stagni R, Bisi MC, Bragonzoni L, Gori D, Sansavini A, Ceciliani A, Dallolio L. A Multiple Targeted Research Protocol for a Quasi-Experimental Trial in Primary School Children Based on an Active Break Intervention: The Imola Active Breaks (I-MOVE) Study. International Journal of Environmental Research and Public Health 2020; 17:ijerph17176123. [PMID: 32842483 PMCID: PMC7503895 DOI: 10.3390/ijerph17176123] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Revised: 08/19/2020] [Accepted: 08/21/2020] [Indexed: 12/20/2022]
Abstract
BACKGROUND According to World Health Organization guidelines, children and adolescents should perform at least 60 min of moderate-to-vigorous physical activity per day to reduce the risk of metabolic and cardiovascular diseases. The school represents a fundamental setting for interventions that promote physical activity (PA) and counteract sedentary behaviors. Active breaks (ABs), bouts of 10 min of PA conducted inside the classroom, appear to be a good strategy to promote PA and improve classroom behavior. The aim of this study protocol is to describe the design and assessment of the Imola Active Breaks (I-MOVE) study. METHODS The I-MOVE study is a school-based intervention trial with a quasi-experimental design, performed in a primary school. It involves one experimental group performing the AB-focused intervention and one control group. Nine main outcomes are evaluated: PA and sedentary behaviors; health-related fitness; motor control development; dietary patterns; anthropometric evaluation; sociodemographic determinants; cognitive function; time-on-task behavior; and quality of life. CONCLUSIONS Results from the I-MOVE study will help clarify the effects of incorporating ABs into the Italian school curriculum as a new public health strategy and an innovative school model oriented toward the well-being of children and teachers for the best quality of school life.
Affiliation(s)
- Alice Masini
- Department of Biomedical and Neuromotor Science, University of Bologna, 40126 Bologna, Italy
- Marcello Lanari
- Department of Medical and Surgical Sciences, University of Bologna, 40138 Bologna, Italy
- Sofia Marini
- Department of Life Quality Studies, University of Bologna, Campus of Rimini, 47921 Rimini, Italy
- Correspondence: Tel.: +39-051-209-4812
- Alessia Tessari
- Department of Psychology, University of Bologna, 40126 Bologna, Italy
- Stefania Toselli
- Department of Biomedical and Neuromotor Science, University of Bologna, 40126 Bologna, Italy
- Rita Stagni
- Department of Electrical, Electronic, and Information Engineering “Guglielmo Marconi”, University of Bologna, 40136 Bologna, Italy
- Maria Cristina Bisi
- Department of Electrical, Electronic, and Information Engineering “Guglielmo Marconi”, University of Bologna, 40136 Bologna, Italy
- Laura Bragonzoni
- Department of Life Quality Studies, University of Bologna, Campus of Rimini, 47921 Rimini, Italy
- Davide Gori
- Department of Biomedical and Neuromotor Science, University of Bologna, 40126 Bologna, Italy
- Alessandra Sansavini
- Department of Psychology, University of Bologna, 40126 Bologna, Italy
- Andrea Ceciliani
- Department of Life Quality Studies, University of Bologna, Campus of Rimini, 47921 Rimini, Italy
- Laura Dallolio
- Department of Biomedical and Neuromotor Science, University of Bologna, 40126 Bologna, Italy
18
Deploying attention to the target location of a pointing action modulates audiovisual processes at nontarget locations. Atten Percept Psychophys 2020; 82:3507-3520. [PMID: 32676805 DOI: 10.3758/s13414-020-02065-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The current study examined how the deployment of spatial attention at the onset of a pointing movement influenced audiovisual crossmodal interactions at the target of the pointing action and at nontarget locations. These interactions were quantified by measuring the susceptibility to the fission (i.e., reporting two visual flashes under one flash and two auditory beep pairings) and fusion (i.e., reporting one flash under two flashes and one beep pairing) audiovisual illusions. At movement onset, unimodal stimuli or bimodal (auditory and visual) stimuli were presented either at the target of the pointing action or at an adjacent, nontarget location. In Experiment 1, perceptual accuracy within the unimodal and bimodal conditions was lower in the nontarget relative to the target condition. The fission illusion was uninfluenced by target condition. However, the fusion illusion was more likely to be reported at the target relative to the nontarget location. In Experiment 2, the stimuli from Experiment 1 were further presented at a location near where the eyes were fixated (i.e., congruent condition), where the hand was aiming (i.e., target), or in a location where neither the eyes were fixated nor the hand was aiming. The results yielded the greatest susceptibility to the fusion illusion when the visual location and movement end points were congruent relative to when either movement or fixation was incongruent. Although attention may facilitate the processing of unisensory and multisensory cues in general, attention might have the strongest influence on the audiovisual integration mechanisms that underlie the sound-induced fusion illusion.
19
Kenwright B. There's More to Sound Than Meets the Ear: Sound in Interactive Environments. IEEE Computer Graphics and Applications 2020; 40:62-70. [PMID: 32540788 DOI: 10.1109/mcg.2020.2996371] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
How important is sound in an interactive environment? For example, what happens when we play a video game without sound? Does the game still have the same impact? Even if sight is the primary sense in interactive environments, sound is also important and should not be overlooked during the development process. The value of sound for enriching perceptual quality in virtual environments cannot be overstated. However, how designers should integrate and leverage the benefits of sound design effectively in an interactive environment can be challenging. This short article discusses a variety of important and intriguing psychological concepts and immersive sound techniques used in interactive environments, such as video games, to improve engagement and enhance the experience (from passive background music to active and procedural sounds). Computer graphics has proven itself in many fields of entertainment and computing as a means for communicating and engaging users (visually). This article discusses the hidden abilities of sound in interactive environments (e.g., the emotional, subconscious, and subliminal impact). We explain how different sounds can be combined with visual information to help improve interactive conditions and stimulate the imagination, not to mention control (or steer) the user's emotions and attention.
20
Zhou HY, Shi LJ, Yang HX, Cheung EFC, Chan RCK. Audiovisual temporal integration and rapid temporal recalibration in adolescents and adults: Age-related changes and its correlation with autistic traits. Autism Res 2019; 13:615-626. [PMID: 31808321 DOI: 10.1002/aur.2249] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Accepted: 11/19/2019] [Indexed: 12/26/2022]
Abstract
Temporal structure is a key factor in determining the relatedness of multisensory stimuli. Stimuli that are close in time are more likely to be integrated into a unified perceptual representation. To investigate the age-related developmental differences in audiovisual temporal integration and rapid temporal recalibration, we administered simultaneity judgment (SJ) tasks to a group of adolescents (11-14 years) and young adults (18-28 years). No age-related changes were found in the width of the temporal binding window within which participants are highly likely to combine multisensory stimuli. The main distinction between adolescents and adults was audiovisual temporal recalibration. Although participants of both age groups could rapidly recalibrate based on the previous trial for speech stimuli (i.e., syllable utterances), only adults but not adolescents showed short-term recalibration for simple and non-speech stimuli. In both adolescents and adults, no significant correlation was found between audiovisual temporal integration ability and autistic or schizotypal traits. These findings provide new information on the developmental trajectory of basic multisensory function and may have implications for neurodevelopmental disorders (e.g., autism) with altered audiovisual temporal integration. Autism Res 2020, 13: 615-626. © 2019 International Society for Autism Research, Wiley Periodicals, Inc. LAY SUMMARY: Utilizing temporal cues to integrate and separate audiovisual information is a fundamental ability underlying higher order social communicative functions. This study examines the developmental changes of the ability to detect audiovisual asynchrony and rapidly adjust sensory decisions based on previous sensory input. In healthy adolescents and young adults, the correlation between autistic traits and audiovisual integration ability failed to reach a significant level. 
Therefore, more research is needed to examine whether impairment in basic sensory functions is correlated with broader autism phenotype in nonclinical populations. These results may help us understand altered multisensory integration in people with autism.
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Li-Juan Shi
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; School of Education, Hunan University of Science and Technology, Xiangtan, China
- Han-Xue Yang
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Eric F C Cheung
- Castle Peak Hospital, Hong Kong Special Administrative Region, China
- Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
21
Derrick D, Hansmann D, Theys C. Tri-modal speech: Audio-visual-tactile integration in speech perception. The Journal of the Acoustical Society of America 2019; 146:3495. [PMID: 31795693 DOI: 10.1121/1.5134064] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/20/2019] [Accepted: 10/25/2019] [Indexed: 06/10/2023]
Abstract
Speech perception is a multi-sensory experience. Visual information enhances [Sumby and Pollack (1954). J. Acoust. Soc. Am. 25, 212-215] and interferes [McGurk and MacDonald (1976). Nature 264, 746-748] with speech perception. Similarly, tactile information, transmitted by puffs of air arriving at the skin and aligned with speech audio, alters [Gick and Derrick (2009). Nature 462, 502-504] auditory speech perception in noise. It has also been shown that aero-tactile information influences visual speech perception when an auditory signal is absent [Derrick, Bicevskis, and Gick (2019a). Front. Commun. Lang. Sci. 3(61), 1-11]. However, researchers have not yet identified the combined influence of aero-tactile, visual, and auditory information on speech perception. The effects of matching and mismatching visual and tactile speech on two-way forced-choice auditory syllable-in-noise classification tasks were tested. The results showed that both visual and tactile information altered the signal-to-noise threshold for accurate identification of auditory signals. Similar to previous studies, the visual component had a strong influence on auditory syllable-in-noise identification, as evidenced by a 28.04 dB improvement in SNR between matching and mismatching visual stimulus presentations. In comparison, the tactile component had a small influence resulting in a 1.58 dB SNR match-mismatch range. The effects of both the audio and tactile information were shown to be additive.
Affiliation(s)
- Donald Derrick
- New Zealand Institute of Language, Brain, and Behaviour, University of Canterbury, 20 Kirkwood Avenue, Upper Riccarton, Christchurch 8041, New Zealand
- Doreen Hansmann
- School of Psychology, Speech and Hearing, University of Canterbury, 20 Kirkwood Avenue, Upper Riccarton, Christchurch 8041, New Zealand
- Catherine Theys
- School of Psychology, Speech and Hearing, University of Canterbury, 20 Kirkwood Avenue, Upper Riccarton, Christchurch 8041, New Zealand
22
Feng G, Zhou B, Zhou W, Beauchamp MS, Magnotti JF. A Laboratory Study of the McGurk Effect in 324 Monozygotic and Dizygotic Twins. Front Neurosci 2019; 13:1029. [PMID: 31636529 PMCID: PMC6787151 DOI: 10.3389/fnins.2019.01029] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2019] [Accepted: 09/10/2019] [Indexed: 11/13/2022] Open
Abstract
Multisensory integration of information from the talker's voice and the talker's mouth facilitates human speech perception. A popular assay of audiovisual integration is the McGurk effect, an illusion in which incongruent visual speech information categorically changes the percept of auditory speech. There is substantial interindividual variability in susceptibility to the McGurk effect. To better understand possible sources of this variability, we examined the McGurk effect in 324 native Mandarin speakers, consisting of 73 monozygotic (MZ) and 89 dizygotic (DZ) twin pairs. When tested with 9 different McGurk stimuli, some participants never perceived the illusion and others always perceived it. Within participants, perception was similar across time (r = 0.55 at a 2-year retest in 150 participants) suggesting that McGurk susceptibility reflects a stable trait rather than short-term perceptual fluctuations. To examine the effects of shared genetics and prenatal environment, we compared McGurk susceptibility between MZ and DZ twins. Both twin types had significantly greater correlation than unrelated pairs (r = 0.28 for MZ twins and r = 0.21 for DZ twins) suggesting that the genes and environmental factors shared by twins contribute to individual differences in multisensory speech perception. Conversely, the existence of substantial differences within twin pairs (even MZ co-twins) and the overall low percentage of explained variance (5.5%) argues against a deterministic view of individual differences in multisensory integration.
Affiliation(s)
- Guo Feng
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Psychological Research and Counseling Center, Southwest Jiaotong University, Chengdu, China
- Bin Zhou
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Wen Zhou
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Michael S. Beauchamp
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, TX, United States
- John F. Magnotti
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, TX, United States
23
Kaganovich N, Ancel E. Different neural processes underlie visual speech perception in school-age children and adults: An event-related potentials study. J Exp Child Psychol 2019; 184:98-122. [PMID: 31015101 PMCID: PMC6857813 DOI: 10.1016/j.jecp.2019.03.009] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2018] [Revised: 03/15/2019] [Accepted: 03/26/2019] [Indexed: 11/18/2022]
Abstract
The ability to use visual speech cues does not fully develop until late adolescence. The cognitive and neural processes underlying this slow maturation are not yet understood. We examined electrophysiological responses of younger (8-9 years) and older (11-12 years) children as well as adults elicited by visually perceived articulations in an audiovisual word matching task and related them to the amount of benefit gained during a speech-in-noise (SIN) perception task when seeing the talker's face. On each trial, participants first heard a word and, after a short pause, saw a speaker silently articulate a word. In half of the trials the articulated word matched the auditory word (congruent trials), whereas in the other half it did not (incongruent trials). In all three age groups, incongruent articulations elicited the N400 component and congruent articulations elicited the late positive complex (LPC). Groups did not differ in the mean amplitude of N400. The mean amplitude of LPC was larger in younger children compared with older children and adults. Importantly, the relationship between event-related potential measures and SIN performance varied by group. In 8- and 9-year-olds, neither component was predictive of SIN gain. The LPC amplitude predicted the SIN gain in older children but not in adults. Conversely, the N400 amplitude predicted the SIN gain in adults. We argue that although all groups were able to detect correspondences between auditory and visual word onsets at the phonemic/syllabic level, only adults could use this information for lexical access.
Affiliation(s)
- Natalya Kaganovich
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907, USA; Department of Psychological Sciences, Purdue University, West Lafayette, IN 47907, USA.
- Elizabeth Ancel
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907, USA
24
Psychobiological Responses Reveal Audiovisual Noise Differentially Challenges Speech Recognition. Ear Hear 2019; 41:268-277. [PMID: 31283529 DOI: 10.1097/aud.0000000000000755] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES In noisy environments, listeners benefit from both hearing and seeing a talker, demonstrating audiovisual (AV) cues enhance speech-in-noise (SIN) recognition. Here, we examined the relative contribution of auditory and visual cues to SIN perception and the strategies used by listeners to decipher speech in noise interference(s). DESIGN Normal-hearing listeners (n = 22) performed an open-set speech recognition task while viewing audiovisual TIMIT sentences presented under different combinations of signal degradation including visual (AVn), audio (AnV), or multimodal (AnVn) noise. Acoustic and visual noises were matched in physical signal-to-noise ratio. Eyetracking monitored participants' gaze to different parts of a talker's face during SIN perception. RESULTS As expected, behavioral performance for clean sentence recognition was better for A-only and AV compared to V-only speech. Similarly, with noise in the auditory channel (AnV and AnVn speech), performance was aided by the addition of visual cues of the talker regardless of whether the visual channel contained noise, confirming a multimodal benefit to SIN recognition. The addition of visual noise (AVn) obscuring the talker's face had little effect on speech recognition by itself. Listeners' eye gaze fixations were biased toward the eyes (decreased at the mouth) whenever the auditory channel was compromised. Fixating on the eyes was negatively associated with SIN recognition performance. Eye gazes on the mouth versus eyes of the face also depended on the gender of the talker. CONCLUSIONS Collectively, results suggest listeners (1) depend heavily on the auditory over visual channel when seeing and hearing speech and (2) alter their visual strategy from viewing the mouth to viewing the eyes of a talker with signal degradations, which negatively affects speech perception.
25
Hirst RJ, Kicks EC, Allen HA, Cragg L. Cross-modal interference-control is reduced in childhood but maintained in aging: A cohort study of stimulus- and response-interference in cross-modal and unimodal Stroop tasks. J Exp Psychol Hum Percept Perform 2019; 45:553-572. [PMID: 30945905 PMCID: PMC6484713 DOI: 10.1037/xhp0000608] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Interference-control is the ability to exclude distractions and focus on a specific task or stimulus. However, it is currently unclear whether the same interference-control mechanisms underlie the ability to ignore unimodal and cross-modal distractions. In 2 experiments we assessed whether unimodal and cross-modal interference follow similar trajectories in development and aging and occur at similar processing levels. In Experiment 1, 42 children (6-11 years), 31 younger adults (18-25 years) and 32 older adults (60-84 years) identified color rectangles with either written (unimodal) or spoken (cross-modal) distractor-words. Stimuli could be congruent, incongruent but mapped to the same response (stimulus-incongruent), or incongruent and mapped to different responses (response-incongruent), thus separating interference occurring at early (sensory) and late (response) processing levels. Unimodal interference was worst in childhood and old age; however, older adults maintained the ability to ignore cross-modal distraction. Unimodal but not cross-modal response-interference also reduced accuracy. In Experiment 2 we compared the effect of audition on vision and vice versa in 52 children (6-11 years), 30 young adults (22-33 years) and 30 older adults (60-84 years). As in Experiment 1, older adults maintained the ability to ignore cross-modal distraction arising from either modality, and neither type of cross-modal distraction limited accuracy in adults. However, cross-modal distraction still reduced accuracy in children, and children were more slowed by stimulus-interference compared with adults. We conclude that unimodal and cross-modal interference follow different life span trajectories, and differences in stimulus- and response-interference may increase cross-modal distractibility in childhood. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
Affiliation(s)
- Ella C Kicks
- School of Psychology and Neuroscience, University of St. Andrews
- Lucy Cragg
- School of Psychology, University of Nottingham
26
Barutchu A, Toohey S, Shivdasani MN, Fifer JM, Crewther SG, Grayden DB, Paolini AG. Multisensory perception and attention in school-age children. J Exp Child Psychol 2019; 180:141-155. [DOI: 10.1016/j.jecp.2018.11.021] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2018] [Revised: 11/26/2018] [Accepted: 11/26/2018] [Indexed: 10/27/2022]
27
McGurk Effect by Individuals with Autism Spectrum Disorder and Typically Developing Controls: A Systematic Review and Meta-analysis. J Autism Dev Disord 2019; 49:34-43. [PMID: 30019277 DOI: 10.1007/s10803-018-3680-0] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
By synthesizing existing behavioural studies through a meta-analytic approach, the current study compared the performance of autism spectrum disorder (ASD) and typically developing groups in audiovisual speech integration and investigated potential moderators that might contribute to the heterogeneity of the existing findings. In total, nine studies were included, and the pooled overall difference between the two groups was significant, g = -0.835 (p < 0.001; 95% CI -1.155 to -0.516). Age and task scoring method were found to be associated with the inconsistencies of the findings reported by previous studies. These findings indicate that individuals with ASD show a weaker McGurk effect than typically developing controls.
28
Barutchu A, Fifer JM, Shivdasani MN, Crewther SG, Paolini AG. The Interplay Between Multisensory Associative Learning and IQ in Children. Child Dev 2019; 91:620-637. [PMID: 30620403 DOI: 10.1111/cdev.13210] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
This study assessed the developmental profile of unisensory and multisensory processes, and their contribution to children's intellectual abilities (8- and 11-year-olds, N = 38, compared to adults, N = 19), using a simple audiovisual detection task and three incidental associative learning tasks with different sensory signals: visual-verbal with pseudowords, novel audiovisual, and visual-visual. The level of immaturity throughout childhood depended on both the sensory signal type and the task. Associative learning was significantly enhanced with verbal sounds compared with novel audiovisual and unisensory visual learning. Visual-verbal learning was also the best predictor of children's general intellectual abilities. The results demonstrate a separate developmental trajectory for visual and verbal multisensory processes and independent contributions to the development of cognitive abilities throughout childhood.
29
Published estimates of group differences in multisensory integration are inflated. PLoS One 2018; 13:e0202908. [PMID: 30231054 PMCID: PMC6145544 DOI: 10.1371/journal.pone.0202908] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2018] [Accepted: 08/10/2018] [Indexed: 11/19/2022] Open
Abstract
A common measure of multisensory integration is the McGurk effect, an illusion in which incongruent auditory and visual speech are integrated to produce an entirely different percept. Published studies report that participants who differ in age, gender, culture, native language, or traits related to neurological or psychiatric disorders also differ in their susceptibility to the McGurk effect. These group-level differences are used as evidence for fundamental alterations in sensory processing between populations. Using empirical data and statistical simulations tested under a range of conditions, we show that published estimates of group differences in the McGurk effect are inflated when only statistically significant (p < 0.05) results are published. With a sample size typical of published studies, a group difference of 10% would be reported as 31%. As a consequence of this inflation, follow-up studies often fail to replicate published reports of large between-group differences. Inaccurate estimates of effect sizes and replication failures are especially problematic in studies of clinical populations involving expensive and time-consuming interventions, such as training paradigms to improve sensory processing. Reducing effect size inflation and increasing replicability requires increasing the number of participants by an order of magnitude compared with current practice.
30
Hirst RJ, Stacey JE, Cragg L, Stacey PC, Allen HA. The threshold for the McGurk effect in audio-visual noise decreases with development. Sci Rep 2018; 8:12372. PMID: 30120399; PMCID: PMC6098036; DOI: 10.1038/s41598-018-30798-8.
Abstract
Across development, vision increasingly influences audio-visual perception. This is evidenced in illusions such as the McGurk effect, in which a seen mouth movement changes the perceived sound. The current paper assessed the effects of manipulating the clarity of the heard and seen signals upon the McGurk effect in children aged 3-6 (n = 29), 7-9 (n = 32), and 10-12 (n = 29) years, and in adults aged 20-35 years (n = 32). Auditory noise increased, and visual blur decreased, the likelihood of vision changing auditory perception. Based upon a proposed developmental shift from auditory to visual dominance, we predicted that younger children would be less susceptible to McGurk responses and that adults would continue to be influenced by vision at higher levels of visual noise and with less auditory noise. Susceptibility to the McGurk effect was higher in adults compared with 3-6-year-olds and 7-9-year-olds, but not 10-12-year-olds. Younger children required more auditory noise, and less visual noise, than adults to induce McGurk responses (i.e. adults and older children were more easily influenced by vision). Reduced susceptibility in childhood supports the theory that sensory dominance shifts across development and reaches adult-like levels by 10 years of age.
Affiliation(s)
- Lucy Cragg, University of Nottingham, Nottingham, UK

31
Xu W, Kolozsvari OB, Monto SP, Hämäläinen JA. Brain Responses to Letters and Speech Sounds and Their Correlations With Cognitive Skills Related to Reading in Children. Front Hum Neurosci 2018; 12:304. PMID: 30127729; PMCID: PMC6088176; DOI: 10.3389/fnhum.2018.00304.
Abstract
Letter-speech sound (LSS) integration is crucial for the initial stages of reading acquisition. However, the relationship between the cortical organization supporting LSS integration, including unimodal and multimodal processes, and reading skills in early readers remains unclear. In the present study, we measured brain responses to Finnish letters and speech sounds in 29 typically developing Finnish children in a child-friendly audiovisual integration experiment using magnetoencephalography. Brain source activations in response to auditory, visual, and audiovisual stimuli, as well as the audiovisual integration response, were correlated with reading skills and with cognitive skills predictive of reading development, after controlling for the effect of age. Regression analysis showed that, among the brain measures, the auditory late response around 400 ms showed the largest association with phonological processing and rapid automatized naming abilities. In addition, the audiovisual integration effect was most pronounced in the left and right temporoparietal regions, and activity in several of these regions correlated with reading and writing skills. Our findings indicate the important role of temporoparietal regions in the early phase of learning to read and their unique contribution to reading skills.
Affiliation(s)
- Weiyong Xu, Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research, University of Jyväskylä, Jyväskylä, Finland
- Orsolya B. Kolozsvari, Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research, University of Jyväskylä, Jyväskylä, Finland
- Simo P. Monto, Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research, University of Jyväskylä, Jyväskylä, Finland
- Jarmo A. Hämäläinen, Department of Psychology, University of Jyväskylä, Jyväskylä, Finland; Jyväskylä Centre for Interdisciplinary Brain Research, University of Jyväskylä, Jyväskylä, Finland

32
Jerger S, Damian MF, McAlpine RP, Abdi H. Visual speech fills in both discrimination and identification of non-intact auditory speech in children. J Child Lang 2018; 45:392-414. PMID: 28724465; PMCID: PMC5775942; DOI: 10.1017/s0305000917000265.
Abstract
To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. bæz) coupled to non-intact (excised onsets) auditory speech (signified by /-b/æz). Children discriminated syllable pairs that differed in intactness (i.e. bæz:/-b/æz) and identified non-intact nonwords (/-b/æz). We predicted that visual speech would cause children to perceive the non-intact onsets as intact, resulting in more same responses for discrimination and more intact (i.e. bæz) responses for identification in the audiovisual than auditory mode. Visual speech for the easy-to-speechread /b/ but not for the difficult-to-speechread /g/ boosted discrimination and identification (about 35-45%) in children from four to fourteen years. The influence of visual speech on discrimination was uniquely associated with the influence of visual speech on identification and receptive vocabulary skills.
Affiliation(s)
- Susan Jerger, School of Behavioral and Brain Sciences, GR4.1, University of Texas at Dallas, 800 W. Campbell Rd, Richardson, TX 75080; Callier Center for Communication Disorders, 811 Synergy Park Blvd., Richardson, TX 75080
- Markus F. Damian, University of Bristol, School of Experimental Psychology, 12a Priory Road, Room 1D20, Bristol BS8 1TU, United Kingdom
- Rachel P. McAlpine, School of Behavioral and Brain Sciences, GR4.1, University of Texas at Dallas, 800 W. Campbell Rd, Richardson, TX 75080; Callier Center for Communication Disorders, 811 Synergy Park Blvd., Richardson, TX 75080
- Hervé Abdi, School of Behavioral and Brain Sciences, GR4.1, University of Texas at Dallas, 800 W. Campbell Rd, Richardson, TX 75080

33
Beker S, Foxe JJ, Molholm S. Ripe for solution: Delayed development of multisensory processing in autism and its remediation. Neurosci Biobehav Rev 2018; 84:182-192. PMID: 29162518; PMCID: PMC6389331; DOI: 10.1016/j.neubiorev.2017.11.008.
Abstract
Difficulty integrating inputs from different sensory sources is commonly reported in individuals with Autism Spectrum Disorder (ASD). Accumulating evidence consistently points to altered patterns of behavioral reactions and neural activity when individuals with ASD observe or act upon information arriving through multiple sensory systems. For example, impairments in the integration of seen and heard speech appear to be particularly acute, with obvious implications for interpersonal communication. Here, we explore the literature on multisensory processing in autism with a focus on developmental trajectories. While much remains to be understood, some consistent observations emerge. Broadly, sensory integration deficits are found in children with an ASD whereas these appear to be much ameliorated, or even fully recovered, in older teenagers and adults on the spectrum. This protracted delay in the development of multisensory processing raises the possibility of applying early intervention strategies focused on multisensory integration, to accelerate resolution of these functions. We also consider how dysfunctional cross-sensory oscillatory neural communication may be one key pathway to impaired multisensory processing in ASD.
Affiliation(s)
- Shlomit Beker, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx, NY, United States; Rose F. Kennedy Intellectual and Developmental Disabilities Research Center (IDDRC), Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
- John J Foxe, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx, NY, United States; Rose F. Kennedy Intellectual and Developmental Disabilities Research Center (IDDRC), Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States; The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, United States
- Sophie Molholm, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx, NY, United States; Rose F. Kennedy Intellectual and Developmental Disabilities Research Center (IDDRC), Department of Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States; The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, United States

34
Alsius A, Paré M, Munhall KG. Forty Years After Hearing Lips and Seeing Voices: the McGurk Effect Revisited. Multisens Res 2018; 31:111-144. PMID: 31264597; DOI: 10.1163/22134808-00002565.
Abstract
Since its discovery 40 years ago, the McGurk illusion has usually been cited as a paradigmatic case of multisensory binding in humans, and has been extensively used in speech perception studies as a proxy measure for audiovisual integration mechanisms. Despite the well-established practice of using the McGurk illusion as a tool for studying the mechanisms underlying audiovisual speech integration, the magnitude of the illusion varies enormously across studies. Furthermore, the processing of McGurk stimuli differs from congruent audiovisual processing at both phenomenological and neural levels. This calls into question the suitability of the illusion as a tool to quantify the necessary and sufficient conditions under which audiovisual integration occurs in natural conditions. In this paper, we review some of the practical and theoretical issues related to the use of the McGurk illusion as an experimental paradigm. We believe that, without a richer understanding of the mechanisms involved in processing the McGurk effect, experimenters should be particularly cautious when generalizing data generated with McGurk stimuli to matching audiovisual speech events.
Affiliation(s)
- Agnès Alsius, Psychology Department, Queen's University, Humphrey Hall, 62 Arch St., Kingston, Ontario, K7L 3N6 Canada
- Martin Paré, Psychology Department, Queen's University, Humphrey Hall, 62 Arch St., Kingston, Ontario, K7L 3N6 Canada
- Kevin G Munhall, Psychology Department, Queen's University, Humphrey Hall, 62 Arch St., Kingston, Ontario, K7L 3N6 Canada

35
Ross LA, Del Bene VA, Molholm S, Woo YJ, Andrade GN, Abrahams BS, Foxe JJ. Common variation in the autism risk gene CNTNAP2, brain structural connectivity and multisensory speech integration. Brain Lang 2017; 174:50-60. PMID: 28738218; DOI: 10.1016/j.bandl.2017.07.005.
Abstract
Three lines of evidence motivated this study. 1) CNTNAP2 variation is associated with autism risk and speech-language development. 2) CNTNAP2 variations are associated with differences in white matter (WM) tracts comprising the speech-language circuitry. 3) Children with autism show impairment in multisensory speech perception. Here, we asked whether an autism risk-associated CNTNAP2 single nucleotide polymorphism in neurotypical adults was associated with multisensory speech perception performance, and whether such a genotype-phenotype association was mediated through white matter tract integrity in speech-language circuitry. Risk genotype at rs7794745 was associated with decreased benefit from visual speech and lower fractional anisotropy (FA) in several WM tracts (right precentral gyrus, left anterior corona radiata, right retrolenticular internal capsule). These structural connectivity differences were found to mediate the effect of genotype on audiovisual speech perception, shedding light on possible pathogenic pathways in autism and biological sources of inter-individual variation in audiovisual speech processing in neurotypicals.
Affiliation(s)
- Lars A Ross, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA
- Victor A Del Bene, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA; Ferkauf Graduate School of Psychology, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Sophie Molholm, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA; Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Young Jae Woo, Department of Genetics, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Gizely N Andrade, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA
- Brett S Abrahams, Department of Genetics, Albert Einstein College of Medicine, Bronx, NY 10461, USA; Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- John J Foxe, The Sheryl and Daniel R. Tishman Cognitive Neurophysiology Laboratory, Children's Evaluation and Rehabilitation Center (CERC), Department of Pediatrics, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA; Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY 10461, USA; Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA

36
Central–peripheral differences in audiovisual and visuotactile event perception. Atten Percept Psychophys 2017; 79:2552-2563. DOI: 10.3758/s13414-017-1396-4.
37
Irwin J, Brancazio L, Volpe N. The development of gaze to a speaking face. J Acoust Soc Am 2017; 141:3145. PMID: 28599552; PMCID: PMC5422207; DOI: 10.1121/1.4982727.
Abstract
When a speaker talks, the visible consequences of what they are saying can be seen. Listeners are influenced by this visible speech both in noisy listening environments and even when auditory speech can easily be heard. While the influence of visible speech on heard speech has been reported to increase from early to late childhood, little is known about the mechanism underlying this developmental trend. One possible account of developmental differences is that looking behavior toward the face of a speaker changes with age. To examine this possibility, gaze to a speaking face was examined in children from 5 to 10 years of age and in adults. Participants viewed a speaker's face in a range of conditions that elicit looking: a visual-only (speech reading) condition, a speech-in-noise condition with auditory noise present, and an audiovisual mismatch (McGurk) condition. Results indicate an increase in gaze to the face, and specifically to the mouth, of a speaker between the ages of 5 and 10 in all conditions. This change in looking behavior may help account for previous findings in the literature showing that visual influence on heard speech increases with development.
Affiliation(s)
- Julia Irwin, Haskins Laboratories, 300 George Street, New Haven, Connecticut 06511, USA
- Lawrence Brancazio, Southern Connecticut State University, 501 Crescent Street, New Haven, Connecticut 06515, USA
- Nicole Volpe, Southern Connecticut State University, 501 Crescent Street, New Haven, Connecticut 06515, USA

38
Stevenson RA, Baum SH, Krueger J, Newhouse PA, Wallace MT. Links between temporal acuity and multisensory integration across life span. J Exp Psychol Hum Percept Perform 2017; 44:106-116. PMID: 28447850; DOI: 10.1037/xhp0000424.
Abstract
The temporal relationship between individual pieces of information from the different sensory modalities is one of the strongest cues for integrating such information into a unified perceptual gestalt, conveying numerous perceptual and behavioral advantages. Temporal acuity, however, varies greatly over the life span. It has previously been hypothesized that changes in temporal acuity in both development and healthy aging may thus play a key role in integrative abilities. This study tested the temporal acuity of 138 individuals ranging in age from 5 to 80. Temporal acuity and multisensory integration abilities were tested both within and across modalities (audition and vision) with simultaneity judgment and temporal order judgment tasks. We observed that temporal acuity, both within and across modalities, improved throughout development into adulthood and subsequently declined with healthy aging, as did the ability to integrate multisensory speech information. Importantly, throughout development, temporal acuity for simple stimuli (i.e., flashes and beeps) predicted individuals' abilities to integrate more complex speech information. In the aging population, however, although temporal acuity declined with healthy aging and was accompanied by declines in integrative abilities, it did not predict integration at the individual level. Together, these results suggest that the impact of temporal acuity on multisensory integration varies throughout the life span. Although the maturation of temporal acuity drives the rise of multisensory integrative abilities during development, it cannot account for changes in integrative abilities in healthy aging. The differential relationships between age, temporal acuity, and multisensory integration suggest an important role for experience in these processes.
Affiliation(s)
- Ryan A Stevenson, Department of Psychology, Brain and Mind Institute, University of Western Ontario
- Sarah H Baum, Department of Psychology, University of Washington
- Paul A Newhouse, Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center

39
Odegaard B, Wozny DR, Shams L. A simple and efficient method to enhance audiovisual binding tendencies. PeerJ 2017; 5:e3143. PMID: 28462016; PMCID: PMC5407282; DOI: 10.7717/peerj.3143.
Abstract
Individuals vary in their tendency to bind signals from multiple senses. For the same set of sights and sounds, one individual may frequently integrate multisensory signals and experience a unified percept, whereas another individual may rarely bind them and often experience two distinct sensations. Thus, while this binding/integration tendency is specific to each individual, it is not clear how plastic this tendency is in adulthood, and how sensory experiences may cause it to change. Here, we conducted an exploratory investigation which provides evidence that (1) the brain’s tendency to bind in spatial perception is plastic, (2) that it can change following brief exposure to simple audiovisual stimuli, and (3) that exposure to temporally synchronous, spatially discrepant stimuli provides the most effective method to modify it. These results can inform current theories about how the brain updates its internal model of the surrounding sensory world, as well as future investigations seeking to increase integration tendencies.
Affiliation(s)
- Brian Odegaard, Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
- David R Wozny, Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
- Ladan Shams, Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States; Department of Bioengineering, University of California, Los Angeles, Los Angeles, CA, United States; Neuroscience Interdepartmental Program, University of California-Los Angeles, Los Angeles, CA, United States

40
Modeling the Development of Audiovisual Cue Integration in Speech Perception. Brain Sci 2017; 7:brainsci7030032. PMID: 28335558; PMCID: PMC5366831; DOI: 10.3390/brainsci7030032.
Abstract
Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.
41
Hannon EE, Schachner A, Nave-Blodgett JE. Babies know bad dancing when they see it: Older but not younger infants discriminate between synchronous and asynchronous audiovisual musical displays. J Exp Child Psychol 2017; 159:159-174. PMID: 28288412; DOI: 10.1016/j.jecp.2017.01.006.
Abstract
Movement to music is a universal human behavior, yet little is known about how observers perceive audiovisual synchrony in complex musical displays such as a person dancing to music, particularly during infancy and childhood. In the current study, we investigated how perception of musical audiovisual synchrony develops over the first year of life. We habituated infants to a video of a person dancing to music and subsequently presented videos in which the visual track was matched (synchronous) or mismatched (asynchronous) with the audio track. In a visual-only control condition, we presented the same visual stimuli with no sound. In Experiment 1, we found that older infants (8-12 months) exhibited a novelty preference for the mismatched movie when both auditory and visual information were available, and showed no preference when only visual information was available. By contrast, younger infants (5-8 months) in Experiment 2 did not discriminate matching from mismatching stimuli. This suggests that the ability to perceive musical audiovisual synchrony may develop during the second half of the first year of infancy.
Affiliation(s)
- Erin E Hannon, Department of Psychology, University of Nevada, Las Vegas, Las Vegas, NV 89154, USA
- Adena Schachner, Department of Psychology, University of California, San Diego, La Jolla, CA 92093, USA

42
Irwin J, DiBlasi L. Audiovisual speech perception: A new approach and implications for clinical populations. Lang Linguist Compass 2017; 11:77-91. PMID: 29520300; PMCID: PMC5839512; DOI: 10.1111/lnc3.12237.
Abstract
This selected overview of audiovisual (AV) speech perception examines the influence of visible articulatory information on what is heard. Thought to be a cross-cultural phenomenon that emerges early in typical language development, AV speech perception is influenced by properties of the visual and the auditory signal, attentional demands, and individual differences. A brief review of the existing neurobiological evidence on how visual information influences heard speech indicates potential loci, timing, and facilitatory effects of AV over auditory-only speech. The current literature on AV speech in certain clinical populations (individuals with an autism spectrum disorder, developmental language disorder, or hearing loss) reveals differences in processing that may inform interventions. Finally, a new method of assessing AV speech that does not require obvious cross-category mismatch or auditory noise is presented as a novel approach for investigators.
Affiliation(s)
- Julia Irwin, LEARN Center, Haskins Laboratories Inc., USA

43
Jerger S, Damian MF, McAlpine RP, Abdi H. Visual speech alters the discrimination and identification of non-intact auditory speech in children with hearing loss. Int J Pediatr Otorhinolaryngol 2017; 94:127-137. PMID: 28167003; PMCID: PMC5308867; DOI: 10.1016/j.ijporl.2017.01.009.
Abstract
OBJECTIVES Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes, yet we have little evidence about the role of early auditory experience and visual speech in the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development.
METHODS Participants were 58 children with early-onset sensorineural hearing loss (CHL; 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH; 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets), for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to a non-intact onset/rhyme in the auditory track (/-B/aa or /-B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/-B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same- as opposed to different- responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /-B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz- as opposed to az- responses in the audiovisual than auditory mode.
RESULTS Performance in the audiovisual mode showed more same responses for the intact vs. non-intact different pairs (e.g., Baa:/-B/aa) and more intact onset responses for nonword repetition (Baz for /-B/az). Thus visual speech altered both discrimination and identification in the CHL, to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children's discrimination skills (i.e., d' analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of hearing loss worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in the CHL significantly predicted their identification of the onsets, even after variation due to the other variables was controlled.
CONCLUSIONS These results clearly establish that visual speech can fill in non-intact auditory speech; this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL.
Collapse
Affiliation(s)
- Susan Jerger
- School of Behavioral and Brain Sciences, GR4.1, University of Texas at Dallas, 800 W. Campbell Rd, Richardson, TX, 75080, USA; Callier Center for Communication Disorders, 811 Synergy Park Blvd., Richardson, TX, 75080, USA.
- Markus F Damian
- University of Bristol, School of Experimental Psychology, 12a Priory Road, Room 1D20, Bristol, BS8 1TU, United Kingdom.
- Rachel P McAlpine
- School of Behavioral and Brain Sciences, GR4.1, University of Texas at Dallas, 800 W. Campbell Rd, Richardson, TX, 75080, USA; Callier Center for Communication Disorders, 811 Synergy Park Blvd., Richardson, TX, 75080, USA.
- Hervé Abdi
- School of Behavioral and Brain Sciences, GR4.1, University of Texas at Dallas, 800 W. Campbell Rd, Richardson, TX, 75080, USA.
Collapse
44
Jerger S, Damian MF, Tye-Murray N, Abdi H. Children perceive speech onsets by ear and eye. JOURNAL OF CHILD LANGUAGE 2017; 44:185-215. [PMID: 26752548 PMCID: PMC4940343 DOI: 10.1017/s030500091500077x] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Adults use vision to perceive low-fidelity speech; yet how children acquire this ability is not well understood. The literature indicates that children show reduced sensitivity to visual speech from kindergarten to adolescence. We hypothesized that this pattern reflects the effects of complex tasks and a growth period with harder-to-utilize cognitive resources, not lack of sensitivity. We investigated sensitivity to visual speech in children via the phonological priming produced by low-fidelity (non-intact onset) auditory speech presented audiovisually (see a dynamic face articulate the consonant/rhyme b/ag; hear the non-intact onset/rhyme -b/ag) vs. auditorily (see a still face; hear exactly the same auditory input). Audiovisual speech produced greater priming from four to fourteen years, indicating that visual speech filled in the non-intact auditory onsets. The influence of visual speech depended uniquely on phonology and speechreading. Children - like adults - perceive speech onsets multimodally. Findings are critical for incorporating visual speech into developmental theories of speech perception.
Collapse
Affiliation(s)
- Susan Jerger
- School of Behavioral and Brain Sciences, GR4.1, University of Texas at Dallas, and Callier Center for Communication Disorders, Richardson, Texas
- Nancy Tye-Murray
- Department of Otolaryngology-Head and Neck Surgery, Washington University School of Medicine
- Hervé Abdi
- School of Behavioral and Brain Sciences, GR4.1, University of Texas at Dallas
Collapse
45
van Laarhoven T, Keetels M, Schakel L, Vroomen J. Audio-visual speech in noise perception in dyslexia. Dev Sci 2016; 21. [DOI: 10.1111/desc.12504] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2016] [Accepted: 08/09/2016] [Indexed: 11/30/2022]
Affiliation(s)
- Thijs van Laarhoven
- Department of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
- Mirjam Keetels
- Department of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
- Lemmy Schakel
- Department of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
- Jean Vroomen
- Department of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
Collapse
46
Hillock-Dunn A, Grantham DW, Wallace MT. The temporal binding window for audiovisual speech: Children are like little adults. Neuropsychologia 2016; 88:74-82. [DOI: 10.1016/j.neuropsychologia.2016.02.017] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2015] [Revised: 12/23/2015] [Accepted: 02/22/2016] [Indexed: 10/22/2022]
47
Chen YC, Shore DI, Lewis TL, Maurer D. The development of the perception of audiovisual simultaneity. J Exp Child Psychol 2016; 146:17-33. [DOI: 10.1016/j.jecp.2016.01.010] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2015] [Revised: 01/09/2016] [Accepted: 01/12/2016] [Indexed: 10/22/2022]
48
Abstract
In the McGurk effect, incongruent auditory and visual syllables are perceived as a third, completely different syllable. This striking illusion has become a popular assay of multisensory integration for individuals and clinical populations. However, there is enormous variability in how often the illusion is evoked by different stimuli and how often the illusion is perceived by different individuals. Most studies of the McGurk effect have used only one stimulus, making it impossible to separate stimulus and individual differences. We created a probabilistic model to separately estimate stimulus and individual differences in behavioral data from 165 individuals viewing up to 14 different McGurk stimuli. The noisy encoding of disparity (NED) model characterizes stimuli by their audiovisual disparity and characterizes individuals by how noisily they encode the stimulus disparity and by their disparity threshold for perceiving the illusion. The model accurately described perception of the McGurk effect in our sample, suggesting that differences between individuals are stable across stimulus differences. The most important benefit of the NED model is that it provides a method to compare multisensory integration across individuals and groups without the confound of stimulus differences. An added benefit is the ability to predict frequency of the McGurk effect for stimuli never before seen by an individual.
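The NED model's core computation can be sketched from the description above: an individual's encoded disparity is Gaussian noise around the stimulus's true audiovisual disparity, and the illusion is perceived when the encoded value exceeds the individual's threshold. A minimal sketch under that assumed parameterization (function and parameter names are illustrative, not from the paper's code):

```python
from statistics import NormalDist

def p_mcgurk(stimulus_disparity, disparity_threshold, sensory_noise):
    """Probability an individual perceives the McGurk illusion.

    The encoded disparity is modeled as Normal(stimulus_disparity,
    sensory_noise); the illusion is reported when the encoded
    disparity exceeds the individual's disparity_threshold.
    """
    encoded = NormalDist(mu=stimulus_disparity, sigma=sensory_noise)
    return 1.0 - encoded.cdf(disparity_threshold)
```

Under this scheme, a stimulus whose disparity sits exactly at an individual's threshold is perceived as illusory on half of the trials, and higher-disparity stimuli (or noisier encoders with sub-threshold stimuli) shift that probability in the directions the abstract describes.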
Collapse
49
Variability and stability in the McGurk effect: contributions of participants, stimuli, time, and response type. Psychon Bull Rev 2016; 22:1299-307. [PMID: 25802068 DOI: 10.3758/s13423-015-0817-4] [Citation(s) in RCA: 83] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In the McGurk effect, pairing incongruent auditory and visual syllables produces a percept different from the component syllables. Although it is a popular assay of audiovisual speech integration, little is known about the distribution of responses to the McGurk effect in the population. In our first experiment, we measured McGurk perception using 12 different McGurk stimuli in a sample of 165 English-speaking adults, 40 of whom were retested following a one-year interval. We observed dramatic differences both in how frequently different individuals perceived the illusion (from 0% to 100%) and in how frequently the illusion was perceived across different stimuli (17% to 58%). For individual stimuli, the distributions of response frequencies deviated strongly from normality, with 77% of participants almost never or almost always perceiving the effect (≤10% or ≥90%). This deviation suggests that the mean response frequency, the most commonly reported measure of the McGurk effect, is a poor measure of individual participants' responses, and that the assumptions made by parametric statistical tests are invalid. Despite the substantial variability across individuals and stimuli, there was little change in the frequency of the effect between initial testing and a one-year retest (mean change in frequency = 2%; test-retest correlation, r = 0.91). In a second experiment, we replicated our findings of high variability using eight new McGurk stimuli and tested the effects of open-choice versus forced-choice responding. Forced-choice responding resulted in an estimated 18% greater frequency of the McGurk effect but similar levels of interindividual variability. Our results highlight the importance of examining individual differences in McGurk perception instead of relying on summary statistics averaged across a population. However, individual variability in the McGurk effect does not preclude its use as a stable measure of audiovisual integration.
Collapse
50
Lalonde K, Holt RF. Audiovisual speech perception development at varying levels of perceptual processing. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2016; 139:1713. [PMID: 27106318 PMCID: PMC4826374 DOI: 10.1121/1.4945590] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/17/2015] [Revised: 01/04/2016] [Accepted: 03/25/2016] [Indexed: 06/05/2023]
Abstract
This study used the auditory evaluation framework [Erber (1982). Auditory Training (Alexander Graham Bell Association, Washington, DC)] to characterize the influence of visual speech on audiovisual (AV) speech perception in adults and children at multiple levels of perceptual processing. Six- to eight-year-old children and adults completed auditory and AV speech perception tasks at three levels of perceptual processing (detection, discrimination, and recognition). The tasks differed in the level of perceptual processing required to complete them. Adults and children demonstrated visual speech influence at all levels of perceptual processing. Whereas children demonstrated the same visual speech influence at each level of perceptual processing, adults demonstrated greater visual speech influence on tasks requiring higher levels of perceptual processing. These results support previous research demonstrating multiple mechanisms of AV speech processing (general perceptual and speech-specific mechanisms) with independent maturational time courses. The results suggest that adults rely on both general perceptual mechanisms that apply to all levels of perceptual processing and speech-specific mechanisms that apply when making phonetic decisions and/or accessing the lexicon. Six- to eight-year-old children seem to rely only on general perceptual mechanisms across levels. As expected, developmental differences in AV benefit on this and other recognition tasks likely reflect immature speech-specific mechanisms and phonetic processing in children.
Collapse
Affiliation(s)
- Kaylah Lalonde
- Department of Speech and Hearing Sciences, Indiana University, 200 South Jordan Avenue, Bloomington, Indiana 47405, USA
- Rachael Frush Holt
- Department of Speech and Hearing Science, Ohio State University, 110 Pressey Hall, 1070 Carmack Road, Columbus, Ohio 43210, USA
Collapse