1. Vassall SG, Wallace MT. Sensory and Multisensory Processing Changes and Their Contributions to Autism and Schizophrenia. Curr Top Behav Neurosci 2025. PMID: 40346436. DOI: 10.1007/7854_2025_589.
Abstract
Natural environments are typically multisensory, comprising information from multiple sensory modalities. It is in the integration of these incoming sensory signals that we form the perceptual gestalt that allows us to navigate the world with relative ease. However, differences in multisensory integration (MSI) ability are found in a number of clinical conditions. Throughout this chapter, we discuss how MSI differences contribute to the phenotypic characterization of autism and schizophrenia. Although these clinical populations are often described as lying at opposite ends of a number of spectra, we describe similarities in behavioral performance and neural function between the two conditions. Understanding the shared features of autism and schizophrenia through the lens of MSI research allows us to better understand the neural and behavioral underpinnings of both disorders. We also provide potential avenues for remediation of MSI function in these populations.
Affiliation(s)
- Sarah G Vassall
  - Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Mark T Wallace
  - Department of Psychology, Vanderbilt University, Nashville, TN, USA
  - Department of Hearing and Speech, Vanderbilt University Medical Center, Nashville, TN, USA
  - Vanderbilt Vision Research Center, Nashville, TN, USA
  - Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
  - Department of Pharmacology, Vanderbilt University, Nashville, TN, USA
  - Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
2. Gijbels L, Lee AKC, Lalonde K. Integration of audiovisual speech perception: From infancy to older adults. J Acoust Soc Am 2025; 157:1981-2000. PMID: 40126041. DOI: 10.1121/10.0036137.
Abstract
One of the most prevalent and relevant social experiences for humans, engaging in face-to-face conversation, is inherently multimodal. In audiovisual (AV) speech perception, visual cues from the speaker's face play a crucial role in language acquisition and in enhancing comprehension of the incoming auditory speech signal. Nonetheless, AV integration shows substantial individual differences that cannot be entirely accounted for by the information conveyed in the speech signal or by the perceptual abilities of the individual. These differences reflect experience-dependent changes in auditory and visual sensory processing, both across the lifespan and within a given phase of life. To improve our understanding of AV speech integration, the current work offers a perspective on AV speech processing in relation to AV perception in general, from both a prelinguistic and a linguistic viewpoint, and through the lens of humans as Bayesian observers implementing a causal inference model. This perspective supports a cohesive account of the differences and similarities in AV integration from infancy to older adulthood. Behavioral and neurophysiological evidence suggests that prelinguistic and linguistic mechanisms exert distinct, yet mutually influential, effects across the lifespan, both within and between individuals.
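The Bayesian causal-inference observer this perspective invokes can be made concrete with a small numerical sketch. The snippet below is an illustrative implementation of the standard common-cause posterior (in the style of Körding et al.'s model), not code from the paper; all parameter values (cue noise, prior width, prior probability of a common cause) are assumptions chosen for demonstration.

```python
import math

def p_common(x_a, x_v, sigma_a=2.0, sigma_v=1.0, sigma_p=10.0, prior_c=0.5):
    """Posterior probability that an auditory estimate x_a and a visual
    estimate x_v arose from a common cause, assuming Gaussian cue noise
    and a zero-mean Gaussian prior over source locations.
    All numeric defaults are illustrative assumptions."""
    va, vv, vp = sigma_a ** 2, sigma_v ** 2, sigma_p ** 2
    # Likelihood of the cue pair under one common source (C = 1)
    var1 = va * vv + va * vp + vv * vp
    like1 = math.exp(-((x_a - x_v) ** 2 * vp + x_a ** 2 * vv + x_v ** 2 * va)
                     / (2 * var1)) / (2 * math.pi * math.sqrt(var1))
    # Likelihood under two independent sources (C = 2)
    like2 = (math.exp(-x_a ** 2 / (2 * (va + vp))) / math.sqrt(2 * math.pi * (va + vp))
             * math.exp(-x_v ** 2 / (2 * (vv + vp))) / math.sqrt(2 * math.pi * (vv + vp)))
    return prior_c * like1 / (prior_c * like1 + (1 - prior_c) * like2)

# Nearby cues support a common cause; widely discrepant cues do not.
print(p_common(0.0, 0.0), p_common(0.0, 8.0))
```

In such a model, developmental change across the lifespan can be framed as shifts in the sensory noise terms and in the prior probability of a common cause.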
Affiliation(s)
- Liesbeth Gijbels
  - University of Washington, Department of Speech and Hearing Sciences, Seattle, Washington 98195, USA
  - University of Washington, Institute for Learning and Brain Sciences, Seattle, Washington 98195, USA
- Adrian K C Lee
  - University of Washington, Department of Speech and Hearing Sciences, Seattle, Washington 98195, USA
  - University of Washington, Institute for Learning and Brain Sciences, Seattle, Washington 98195, USA
- Kaylah Lalonde
  - Boys Town National Research Hospital, Center for Hearing Research, Omaha, Nebraska 68131, USA
3. Chow HM, Ma YK, Tseng CH. Social and communicative not a prerequisite: Preverbal infants learn an abstract rule only from congruent audiovisual dynamic pitch-height patterns. J Exp Child Psychol 2024; 248:106046. PMID: 39241321. DOI: 10.1016/j.jecp.2024.106046.
Abstract
Learning in the everyday environment often requires the flexible integration of relevant multisensory information. Previous research has demonstrated preverbal infants' capacity to extract an abstract rule from audiovisual temporal sequences matched in temporal synchrony. Interestingly, this capacity was recently reported to be modulated by crossmodal correspondence beyond spatiotemporal matching (e.g., consistent facial emotional expressions or articulatory mouth movements matched with sound). To investigate whether such modulatory influence extends to non-social, non-communicative stimuli, we conducted a critical test using audiovisual stimuli free of social information: objects moving visually upward or downward, paired with a tone of congruent or incongruent pitch (ascending or descending). East Asian infants (8-10 months old) from a metropolitan area in Asia demonstrated successful abstract rule learning in the congruent audiovisual condition and weaker learning in the incongruent condition. This implies that preverbal infants use crossmodal dynamic pitch-height correspondence to integrate multisensory information before rule extraction. This result confirms that preverbal infants are ready to use non-social, non-communicative information to serve cognitive functions such as rule extraction in a multisensory context.
Affiliation(s)
- Hiu Mei Chow
  - Department of Psychology, St. Thomas University, Fredericton, New Brunswick E3B 5G3, Canada
- Yuen Ki Ma
  - Department of Psychology, The University of Hong Kong, Pokfulam, Hong Kong
- Chia-Huei Tseng
  - Research Institute of Electrical Communication, Tohoku University, Sendai, Miyagi 980-0812, Japan
4. Décaillet M, Denervaud S, Huguenin-Virchaux C, Besuchet L, Fischer Fumeaux CJ, Murray MM, Schneider J. The impact of premature birth on auditory-visual processes in very preterm schoolchildren. NPJ Sci Learn 2024; 9:42. PMID: 38971881. PMCID: PMC11227572. DOI: 10.1038/s41539-024-00257-3.
Abstract
Interactions between stimuli from different sensory modalities and their integration are central to daily life, contributing to improved perception. Being born prematurely, and the subsequent hospitalization, can have an impact not only on sensory processes but also on the manner in which information from different senses is combined, i.e., on multisensory processes. Very preterm (VPT) children (<32 weeks gestational age) present impaired multisensory processes in early childhood, persisting at least through the age of five. However, it remains largely unknown whether and how these consequences persist into later childhood. Here, we evaluated the integrity of auditory-visual multisensory processes in VPT schoolchildren. VPT children (N = 28; aged 8-10 years) received a standardized cognitive assessment and performed a simple detection task at their routine follow-up appointment. The task involved pressing a button as quickly as possible upon presentation of an auditory, visual, or simultaneous audio-visual stimulus. Compared to full-term (FT) children (N = 23; aged 6-11 years), reaction times of VPT children were generally slower and more variable, regardless of sensory modality. Nonetheless, both groups exhibited multisensory facilitation of mean reaction times and inter-quartile ranges. There was no evidence that standardized cognitive or clinical measures correlated with the multisensory gains of VPT children. However, while the gains of FT children exceeded predictions based on probability summation, and thus necessarily implicated integrative processes, this was not the case for VPT children. Our findings provide evidence of atypical multisensory profiles in VPT children persisting into school age. These results could help in targeting supportive interventions for this vulnerable population.
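The probability-summation benchmark referred to in this abstract is conventionally tested with Miller's race-model inequality: multisensory facilitation exceeds statistical facilitation only if the audiovisual reaction-time CDF beats the sum of the unisensory CDFs at some latency. The sketch below is a generic illustration with made-up data and an assumed grid resolution, not the authors' analysis pipeline.

```python
import numpy as np

def ecdf(sample, t):
    """Empirical CDF of `sample` evaluated at the points in `t`."""
    sample = np.sort(np.asarray(sample, float))
    return np.searchsorted(sample, t, side="right") / sample.size

def race_model_test(rt_a, rt_v, rt_av, n_grid=200):
    """Compare audiovisual RTs against the race-model (probability-summation)
    bound F_A(t) + F_V(t) (Miller, 1982). Returns the time grid and
    F_AV(t) minus the bound; a positive value at any t indicates
    facilitation beyond probability summation, i.e. evidence for
    genuine multisensory integration."""
    lo = min(np.min(rt_a), np.min(rt_v), np.min(rt_av))
    hi = max(np.max(rt_a), np.max(rt_v), np.max(rt_av))
    grid = np.linspace(lo, hi, n_grid)
    bound = np.minimum(1.0, ecdf(rt_a, grid) + ecdf(rt_v, grid))
    return grid, ecdf(rt_av, grid) - bound

# Illustrative data (ms): audiovisual responses faster than either
# unisensory distribution predicts, so the bound is violated somewhere.
rt_a = [300, 320, 340, 360]
rt_v = [310, 330, 350, 370]
rt_av = [220, 230, 240, 250]
grid, diff = race_model_test(rt_a, rt_v, rt_av)
print(bool(np.any(diff > 0)))
```

On this benchmark, the FT children's gains violated the bound while the VPT children's gains could be explained by the race between independent channels.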
Affiliation(s)
- Marion Décaillet
  - Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
  - Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Solange Denervaud
  - Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Cléo Huguenin-Virchaux
  - The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
  - Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Laureline Besuchet
  - The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
  - Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Céline J Fischer Fumeaux
  - Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Micah M Murray
  - Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
  - The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Juliane Schneider
  - The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
  - Clinic of Neonatology, Department of Mother-Woman-Child, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
5. Ampollini S, Ardizzi M, Ferroni F, Cigala A. Synchrony perception across senses: A systematic review of temporal binding window changes from infancy to adolescence in typical and atypical development. Neurosci Biobehav Rev 2024; 162:105711. PMID: 38729280. DOI: 10.1016/j.neubiorev.2024.105711.
Abstract
Sensory integration is increasingly acknowledged as crucial for the development of cognitive and social abilities, yet its developmental trajectory remains poorly understood. This systematic review investigates the literature on developmental changes, from infancy through adolescence, in the temporal binding window (TBW): the epoch of time within which sensory inputs are perceived as simultaneous and are therefore integrated. Following comprehensive searches of the PubMed, Elsevier, and PsycInfo databases, only experimental, behavioral, English-language, peer-reviewed studies on multisensory temporal processing in 0- to 17-year-olds were included. Non-behavioral, non-multisensory, and non-human studies were excluded, as were those that did not directly focus on the TBW. The selection process was performed independently by two authors. The 39 selected studies involved 2859 participants in total. Findings indicate a predisposition towards sensitivity to cross-modal asynchrony and a composite, still unclear, developmental trajectory, with atypical development associated with increased asynchrony tolerance. These results highlight the need for consistent and thorough research into TBW development to inform potential interventions.
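As a concrete illustration of the review's central construct, a TBW can be quantified from a simultaneity-judgment task as the span of stimulus-onset asynchronies (SOAs) over which "simultaneous" responses stay above a criterion. This sketch is generic and hypothetical: the 75% criterion and the example data are assumptions, and the reviewed studies differ in exactly how they operationalize the window.

```python
import numpy as np

def tbw_width(soas_ms, p_simultaneous, criterion=0.75):
    """Width (ms) of the temporal binding window: the span of SOAs
    (negative = audio leading) at which the linearly interpolated
    proportion of 'simultaneous' responses is at or above `criterion`."""
    soas = np.asarray(soas_ms, float)
    p = np.asarray(p_simultaneous, float)
    fine = np.linspace(soas.min(), soas.max(), 2001)  # sub-ms grid
    p_fine = np.interp(fine, soas, p)
    inside = fine[p_fine >= criterion]
    return float(inside.max() - inside.min()) if inside.size else 0.0

# An observer tolerant of larger asynchronies yields a wider window.
narrow = tbw_width([-400, -200, -100, 0, 100, 200, 400],
                   [0.05, 0.30, 0.80, 0.95, 0.85, 0.40, 0.10])
wide = tbw_width([-400, -200, -100, 0, 100, 200, 400],
                 [0.30, 0.80, 0.90, 0.95, 0.90, 0.80, 0.30])
print(narrow < wide)
```

In the review's terms, the increased asynchrony tolerance reported for atypical development corresponds to a wider window on this measure.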
Affiliation(s)
- Silvia Ampollini
  - Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy
- Martina Ardizzi
  - Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Francesca Ferroni
  - Department of Medicine and Surgery, Unit of Neuroscience, University of Parma, Via Volturno 39E, Parma 43121, Italy
- Ada Cigala
  - Department of Humanities, Social Sciences and Cultural Industries, University of Parma, Borgo Carissimi, 10, Parma 43121, Italy
6. Bruns P, Röder B. Development and experience-dependence of multisensory spatial processing. Trends Cogn Sci 2023; 27:961-973. PMID: 37208286. DOI: 10.1016/j.tics.2023.04.012.
Abstract
Multisensory spatial processes are fundamental for efficient interaction with the world. They include not only the integration of spatial cues across sensory modalities, but also the adjustment or recalibration of spatial representations to changing cue reliabilities, crossmodal correspondences, and causal structures. Yet how multisensory spatial functions emerge during ontogeny is poorly understood. New results suggest that temporal synchrony and enhanced multisensory associative learning capabilities first guide causal inference and initiate early coarse multisensory integration capabilities. These multisensory percepts are crucial for the alignment of spatial maps across sensory systems, and are used to derive more stable biases for adult crossmodal recalibration. The refinement of multisensory spatial integration with increasing age is further promoted by the inclusion of higher-order knowledge.
Affiliation(s)
- Patrick Bruns
  - Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Brigitte Röder
  - Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
7. Purpura G, Fumagalli S, Nacinovich R, Riva A, Ornaghi S, Serafini M, Nespoli A. Effects of social and sensory deprivation in newborns: A lesson from the Covid-19 experience. Early Hum Dev 2023; 185:105853. PMID: 37666054. DOI: 10.1016/j.earlhumdev.2023.105853.
Abstract
BACKGROUND: Infancy is a complex period of human life, in which environmental experiences play a fundamental role in neurodevelopment. Although conditions of social and sensory deprivation are uncommon in high-income countries, the Covid-19 pandemic abruptly changed this, depriving people of the social stimuli of daily life.
AIM: To understand the impact of this deprivation on infants' behaviour, we investigated the short-term effects of isolation and of mothers' use of individual protective equipment during the first two weeks of life.
METHODS: The study included 11 mother-infant dyads in which the mother tested positive for SARS-CoV-2 at the time of delivery (Covid group) and 11 dyads with a SARS-CoV-2-negative mother as controls. Neurobehavioral, visual, and sensory processing assessments were performed from birth to 3 months of age.
RESULTS: Findings showed an effect of deprivation on some neurobehavioral abilities of infants in the Covid group; in addition, differences in sensory maturation trends were observed, although they tended to decrease gradually until disappearing at 3 months of age.
CONCLUSION: These findings point to significant effects of early sensory and social deprivation during the first two weeks of life, but also provide several insights into the brain's ability to restore its aptitudes by deleting or reducing the effects of early deprivation before the closure of critical periods.
Affiliation(s)
- Giulia Purpura
  - University of Milano Bicocca, School of Medicine and Surgery, Monza, Italy
- Simona Fumagalli
  - University of Milano Bicocca, School of Medicine and Surgery, Monza, Italy
  - Department of Obstetrics and Gynecology, Unit of Obstetrics, Fondazione IRCCS San Gerardo dei Tintori, Monza, Italy
- Renata Nacinovich
  - University of Milano Bicocca, School of Medicine and Surgery, Monza, Italy
  - Child and Adolescent Health Department, Fondazione IRCCS San Gerardo dei Tintori, Monza, Italy
- Anna Riva
  - University of Milano Bicocca, School of Medicine and Surgery, Monza, Italy
  - Child and Adolescent Health Department, Fondazione IRCCS San Gerardo dei Tintori, Monza, Italy
- Sara Ornaghi
  - University of Milano Bicocca, School of Medicine and Surgery, Monza, Italy
  - Department of Obstetrics and Gynecology, Unit of Obstetrics, Fondazione IRCCS San Gerardo dei Tintori, Monza, Italy
- Marzia Serafini
  - University of Milano Bicocca, School of Medicine and Surgery, Monza, Italy
  - Department of Obstetrics and Gynecology, Unit of Obstetrics, Fondazione IRCCS San Gerardo dei Tintori, Monza, Italy
- Antonella Nespoli
  - University of Milano Bicocca, School of Medicine and Surgery, Monza, Italy
  - Department of Obstetrics and Gynecology, Unit of Obstetrics, Fondazione IRCCS San Gerardo dei Tintori, Monza, Italy
8. Morelli F, Schiatti L, Cappagli G, Martolini C, Gori M, Signorini S. Clinical assessment of the TechArm system on visually impaired and blind children during uni- and multi-sensory perception tasks. Front Neurosci 2023; 17:1158438. PMID: 37332868. PMCID: PMC10272406. DOI: 10.3389/fnins.2023.1158438.
Abstract
We developed the TechArm system as a novel technological tool for visual rehabilitation settings. The system is designed to provide a quantitative assessment of the developmental stage of perceptual and functional skills that are normally vision-dependent, and to be integrated into customized training protocols. Indeed, the system can provide uni- and multisensory stimulation, allowing visually impaired people to train their ability to correctly interpret non-visual cues from the environment. Importantly, the TechArm is suitable for use by very young children, when the rehabilitative potential is maximal. In the present work, we validated the TechArm system in a pediatric population of low-vision, blind, and sighted children. In particular, four TechArm units delivered unisensory (audio or tactile) or multisensory (audio-tactile) stimulation to the participant's arm, and the participant was asked to report the number of active units. Results showed no significant difference among groups (normal or impaired vision). Overall, we observed the best performance in the tactile condition, whereas auditory accuracy was around chance level. We also found that performance in the audio-tactile condition was better than in the audio condition alone, suggesting that multisensory stimulation is beneficial when perceptual accuracy and precision are low. Interestingly, for low-vision children, accuracy in the audio condition improved in proportion to the severity of the visual impairment. Our findings confirm the TechArm system's effectiveness in assessing perceptual competencies in sighted and visually impaired children, and its potential for developing personalized rehabilitation programs for people with visual and sensory impairments.
Affiliation(s)
- Federica Morelli
  - Developmental Neuro-Ophthalmology Unit, IRCCS Mondino Foundation, Pavia, Italy
  - Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Lucia Schiatti
  - Computer Science and Artificial Intelligence Lab and Center for Brains, Minds and Machines, Massachusetts Institute of Technology, Boston, MA, United States
  - Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Giulia Cappagli
  - Developmental Neuro-Ophthalmology Unit, IRCCS Mondino Foundation, Pavia, Italy
  - Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Chiara Martolini
  - Developmental Neuro-Ophthalmology Unit, IRCCS Mondino Foundation, Pavia, Italy
- Monica Gori
  - Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Sabrina Signorini
  - Developmental Neuro-Ophthalmology Unit, IRCCS Mondino Foundation, Pavia, Italy
9. The multisensory cocktail party problem in children: Synchrony-based segregation of multiple talking faces improves in early childhood. Cognition 2022; 228:105226. PMID: 35882100. DOI: 10.1016/j.cognition.2022.105226.
Abstract
Extraction of meaningful information from multiple talkers relies on perceptual segregation. The temporal synchrony statistics inherent in everyday audiovisual (AV) speech offer a powerful basis for perceptual segregation. We investigated the developmental emergence of synchrony-based perceptual segregation of multiple talkers in 3- to 7-year-old children. Children saw either four identical or four different faces articulating temporally jittered versions of the same utterance, and heard the audible version of that utterance either synchronized with one of the talkers or desynchronized from all of them. Eye tracking revealed that selective attention to the temporally synchronized talking face increased with age while attention to the desynchronized faces decreased, and that attention to the talkers' mouths primarily drove responsiveness. These findings demonstrate that the temporal synchrony statistics inherent in fluent AV speech play an increasingly important role in the perceptual segregation of the multisensory clutter created by multiple talking faces in early childhood.
10. Gori M, Campus C, Signorini S, Rivara E, Bremner AJ. Multisensory spatial perception in visually impaired infants. Curr Biol 2021; 31:5093-5101.e5. PMID: 34555348. PMCID: PMC8612739. DOI: 10.1016/j.cub.2021.09.011.
Abstract
Congenitally blind infants are not only deprived of visual input but also of visual influences on the intact senses. The important role that vision plays in the early development of multisensory spatial perception [1-7] (e.g., in crossmodal calibration [8-10] and in the formation of multisensory spatial representations of the body and the world [1,2]) raises the possibility that impairments in spatial perception are at the heart of the wide range of difficulties that visually impaired infants show across spatial [8-12], motor [13-17], and social domains [8,18,19]. But investigations of early development are needed to clarify how visually impaired infants' spatial hearing and touch support their emerging ability to make sense of their body and the outside world. We compared sighted (S) and severely visually impaired (SVI) infants' responses to auditory and tactile stimuli presented on their hands. No statistically reliable differences in the direction or latency of responses to auditory stimuli emerged, but significant group differences emerged in responses to tactile and audiotactile stimuli. The visually impaired infants showed attenuated audiotactile spatial integration and interference, weighted tactile cues more heavily than auditory cues when the two were presented in conflict, and showed a more limited influence of representations of the external layout of the body on tactile spatial perception [20]. These findings uncover a distinct phenotype of multisensory spatial perception in early postnatal visual deprivation. Importantly, evidence of audiotactile spatial integration in visually impaired infants, albeit to a lesser degree than in sighted infants, signals the potential of multisensory rehabilitation methods in early development.
Highlights:
- Visually impaired infants have a distinct phenotype of audiotactile perception
- Infants with severe visual impairment (SVI) place more weight on tactile locations
- SVI infants show attenuated audiotactile spatial integration and interference
- SVI infants do not show an influence of body representations on tactile space
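Cue-weighting results like those described above are conventionally analyzed against the maximum-likelihood (reliability-weighted) fusion benchmark, in which each modality's estimate is weighted by its inverse variance. The snippet below is a generic sketch of that benchmark with made-up numbers; it is not the authors' model or data.

```python
def mle_fusion(x_tactile, sigma_tactile, x_audio, sigma_audio):
    """Reliability-weighted combination of a tactile and an auditory
    location estimate: weights are inverse variances, and the fused
    estimate is more precise than either cue alone."""
    w_t = 1.0 / sigma_tactile ** 2
    w_a = 1.0 / sigma_audio ** 2
    fused = (w_t * x_tactile + w_a * x_audio) / (w_t + w_a)
    fused_sigma = (1.0 / (w_t + w_a)) ** 0.5
    return fused, fused_sigma

# With a reliable tactile cue (sigma 1) at position 0 and a noisier
# auditory cue (sigma 2) at position 10, the fused estimate lands at 2.0,
# nearer the tactile location.
est, sd = mle_fusion(0.0, 1.0, 10.0, 2.0)
print(est, sd)
```

Against this benchmark, overweighting of touch in SVI infants would correspond to a tactile weight larger than its inverse-variance prediction.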
Affiliation(s)
- Monica Gori
  - Unit for Visually Impaired People, Istituto Italiano di Tecnologia, 16152 Genova, Italy
- Claudio Campus
  - Unit for Visually Impaired People, Istituto Italiano di Tecnologia, 16152 Genova, Italy
- Sabrina Signorini
  - Centre of Child Neurophthalmology, IRCCS Mondino Foundation, 27100 Pavia, Italy
- Andrew J Bremner
  - School of Psychology, University of Birmingham, Birmingham B15 2SB, UK
11. Late development of audio-visual integration in the vertical plane. Curr Res Behav Sci 2021. DOI: 10.1016/j.crbeha.2021.100043.
12. Chow HM, Harris DA, Eid S, Ciaramitaro VM. The feeling of "kiki": Comparing developmental changes in sound-shape correspondence for audio-visual and audio-tactile stimuli. J Exp Child Psychol 2021; 209:105167. PMID: 33915481. DOI: 10.1016/j.jecp.2021.105167.
Abstract
Sound-shape crossmodal correspondence, the naturally occurring associations between abstract visual shapes and nonsense sounds, is one aspect of multisensory processing that strengthens across early childhood. Little is known regarding whether school-aged children exhibit other variants of sound-shape correspondences such as audio-tactile (AT) associations between tactile shapes and nonsense sounds. Based on previous research in blind individuals suggesting the role of visual experience in establishing sound-shape correspondence, we hypothesized that children would show weaker AT association than adults and that children's AT association would be enhanced with visual experience of the shapes. In Experiment 1, we showed that, when asked to match shapes explored haptically via touch to nonsense words, 6- to 8-year-olds exhibited inconsistent AT associations, whereas older children and adults exhibited the expected AT associations, despite robust audio-visual (AV) associations found across all age groups in a related study. In Experiment 2, we confirmed the role of visual experience in enhancing AT association; here, 6- to 8-year-olds could exhibit the expected AT association if first exposed to the AV condition, whereas adults showed the expected AT association irrespective of whether the AV condition was tested first or second. Our finding suggests that AT sound-shape correspondence is weak early in development relative to AV sound-shape correspondence, paralleling previous findings on the development of other types of multisensory associations. The potential role of visual experience in the development of sound-shape correspondences in other senses is discussed.
Affiliation(s)
- Hiu Mei Chow
  - Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA
  - Department of Ophthalmology and Visual Sciences, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
- Daniel A Harris
  - Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA
  - Division of Epidemiology, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario M5T 3M7, Canada
- Sandy Eid
  - Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA
- Vivian M Ciaramitaro
  - Department of Psychology, Developmental and Brain Sciences, University of Massachusetts Boston, Boston, MA 02125, USA
13. Gharaibeh M, Dukmak S. Effect of computer-based multisensory program on English reading skills of students with Dyslexia and reading difficulties. Appl Neuropsychol Child 2021; 11:504-517. PMID: 33813982. DOI: 10.1080/21622965.2021.1898395.
Abstract
The study evaluated the effectiveness of a computer-based multisensory program (MSP) on the English reading skills, as a second language, of fourth-grade students with reading difficulties and dyslexia in the United Arab Emirates (UAE). A pretest-posttest experimental design was used. The analysis showed that the average pretest scores of the experimental and control groups were almost the same, whereas the average posttest score was much higher in the experimental group than in the control group. Moreover, the results reveal a statistically significant difference in the students' mean scores between the experimental and control groups after the MSP intervention. The study has implications in the UAE, in Arab countries, and worldwide for students who are learning English as a second language and facing reading difficulties. In addition, the modified MSP used in this study can be adapted to create a local version in the Arabic language, in the Middle East and in other countries that teach Arabic as a second language.
14. Turoman N, Tivadar RI, Retsa C, Maillard AM, Scerif G, Matusz PJ. The development of attentional control mechanisms in multisensory environments. Dev Cogn Neurosci 2021; 48:100930. PMID: 33561691. PMCID: PMC7873372. DOI: 10.1016/j.dcn.2021.100930.
Abstract
Outside the laboratory, people need to pay attention to relevant objects that are typically multisensory, but how the underlying neurocognitive mechanisms develop remains poorly understood. We investigated when adult-like mechanisms controlling attentional selection of visual and multisensory objects emerge across childhood. Five-, 7-, and 9-year-olds were compared with adults on their performance in a computer-game-like multisensory spatial cueing task, while 129-channel EEG was recorded simultaneously. Markers of attentional control were behavioural spatial cueing effects and the N2pc ERP component (analysed both traditionally and within a multivariate electrical neuroimaging framework). In behaviour, adult-like visual attentional control was present from age 7 onwards, whereas multisensory control was absent in all child groups. In the EEG, multivariate analyses of activity over the N2pc time window revealed stable brain activity patterns in children. Adult-like visual attentional control EEG patterns were present from age 7 onwards, while multisensory control activity patterns were found in 9-year-olds (albeit with no corresponding behavioural effects). By combining rigorous yet naturalistic paradigms with multivariate signal analyses, we demonstrated that visual attentional control seems to reach an adult-like state at ∼7 years, before adult-like multisensory control, which emerges at ∼9 years. These results enrich our understanding of how attention in naturalistic settings develops.
Collapse
Affiliation(s)
- Nora Turoman
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, 3960, Switzerland; Working Memory, Cognition and Development lab, Department of Psychology and Educational Sciences, University of Geneva, Geneva, Switzerland
| | - Ruxandra I Tivadar
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Cognitive Computational Neuroscience group, Institute of Computer Science, Faculty of Science, University of Bern, Bern, Switzerland
| | - Chrysa Retsa
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
| | - Anne M Maillard
- Service des Troubles du Spectre de l'Autisme et apparentés, Department of Psychiatry, University Hospital Center and University of Lausanne, Lausanne, Switzerland
| | - Gaia Scerif
- Department of Experimental Psychology, University of Oxford, Oxfordshire, UK
| | - Pawel J Matusz
- The LINE (Laboratory for Investigative Neurophysiology), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, 3960, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA.
| |
Collapse
|
15
|
Do infants represent human actions cross-modally? An ERP visual-auditory priming study. Biol Psychol 2021; 160:108047. [PMID: 33596461 DOI: 10.1016/j.biopsycho.2021.108047] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Revised: 01/15/2021] [Accepted: 02/08/2021] [Indexed: 12/27/2022]
Abstract
Recent findings indicate that 7-month-old infants perceive and represent the sounds inherent to moving human bodies. However, it is not known whether infants integrate auditory and visual information in representations of specific human actions. To address this issue, we used ERPs to investigate infants' neural sensitivity to the correspondence between sounds and images of human actions. In a cross-modal priming paradigm, 7-month-olds were presented with the sounds generated by two types of human body movement, walking and handclapping, after watching the kinematics of those actions in either a congruent or incongruent manner. ERPs recorded from frontal, central and parietal electrodes in response to action sounds indicate that 7-month-old infants perceptually link the visual and auditory cues of human actions. However, at this age these percepts do not seem to be integrated into cognitive multimodal representations of human actions.
Collapse
|
16
|
Camarata S, Miller LJ, Wallace MT. Evaluating Sensory Integration/Sensory Processing Treatment: Issues and Analysis. Front Integr Neurosci 2020; 14:556660. [PMID: 33324180 PMCID: PMC7726187 DOI: 10.3389/fnint.2020.556660] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Accepted: 10/09/2020] [Indexed: 11/13/2022] Open
Abstract
For more than 50 years, "Sensory Integration" has been a theoretical framework for diagnosing and treating disabilities in children under the umbrella of "sensory integration dysfunction" (SID). More recently, the approach has been reframed as "the dimensions of sensory processing" (SPD) in place of SID, so the review herein describes this collective framework as sensory integration/sensory processing treatment (SI/SP-T) for ASD. This review is not focused on the diagnosis of SI/SPD. Broadly, the SI/SPD intervention approach views a plethora of disabilities, such as ADHD, ASD, and disruptive behavior, as being exacerbated by difficulties in modulating and integrating sensory input, with a primary focus on contributions from the tactile, proprioceptive, and vestibular systems that are hypothesized to contribute to core symptoms of the conditions (e.g., ASD). SI/SP-T intervention procedures include sensory protocols designed to enhance tactile, proprioceptive, and vestibular experiences. These procedures utilize equipment (e.g., lycra swings, balance beams, climbing walls, and trampolines), specific devices (e.g., weighted vests, sensory brushes), and activities (e.g., placing hands in messy substances such as shaving cream, sequenced movements) hypothesized to enhance sensory integration and sensory processing. The approach is reviewed herein to provide a framework for testing SI/SP-T using widely accepted clinical trials and event coding methods used in applied behavior analysis (ABA) and other behavioral interventions. Also, a related but distinct neuroscientific paradigm, multisensory integration, is presented as an independent test of whether SI/SP-T differentially impacts sensory integration and/or multisensory integration.
Finally, because SI/SP-T activities include many incidental behavioral events that are known as developmental facilitators (e.g., contingent verbal models/recasts during verbal interactions), there is a compelling need to control for confounds to study the unique impact of sensory-based interventions. Note that SI/SP-T includes very specific and identifiable procedures and materials, so it is reasonable to expect high treatment fidelity when testing the approach. A patient case is presented that illustrates this confound with a known facilitator (recast intervention) and a method for controlling potential confounds in order to conduct unbiased studies of the effects of SI/SP-T approaches that accurately represent SI/SP-T theories of change.
Collapse
Affiliation(s)
- Stephen Camarata
- Department of Speech and Hearing Sciences, Bill Wilkerson Center, Vanderbilt University School of Medicine, Nashville, TN, United States
| | - Lucy Jane Miller
- STAR Institute for Sensory Processing, Greenwood Village, Centennial, CO, United States
- School of Medicine, University of Colorado, Denver, CO, United States
| | - Mark T. Wallace
- Department of Speech and Hearing Sciences, Bill Wilkerson Center, Vanderbilt University School of Medicine, Nashville, TN, United States
- Graduate School, Vanderbilt University, Nashville, TN, United States
| |
Collapse
|
17
|
Begum Ali J, Thomas RL, Mullen Raymond S, Bremner AJ. Sensitivity to Visual-Tactile Colocation on the Body Prior to Skilled Reaching in Early Infancy. Child Dev 2020; 92:21-34. [PMID: 32920852 DOI: 10.1111/cdev.13428] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
Two experiments examined perceptual colocation of visual and tactile stimuli in young infants. Experiment 1 compared 4- (n = 15) and 6-month-old (n = 12) infants' visual preferences for visual-tactile stimulus pairs presented across the same or different feet. The 4- and 6-month-olds showed, respectively, preferences for colocated and noncolocated conditions, demonstrating sensitivity to visual-tactile colocation on their feet. This extends previous findings of visual-tactile perceptual colocation on the hands in older infants. Control conditions excluded the possibility that both 6- (Experiment 1), and 4-month-olds (Experiment 2, n = 12) perceived colocation on the basis of an undifferentiated supramodal coding of spatial distance between stimuli. Bimodal perception of visual-tactile colocation is available by 4 months of age, that is, prior to the development of skilled reaching.
Collapse
|
18
|
Yang W, Li S, Xu J, Li Z, Yang X, Ren Y. Selective and divided attention modulates audiovisual integration in adolescents. COGNITIVE DEVELOPMENT 2020. [DOI: 10.1016/j.cogdev.2020.100922] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
|
19
|
Scheller M, Proulx MJ, de Haan M, Dahlmann‐Noor A, Petrini K. Late‐ but not early‐onset blindness impairs the development of audio‐haptic multisensory integration. Dev Sci 2020; 24:e13001. [DOI: 10.1111/desc.13001] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Revised: 04/04/2020] [Accepted: 05/26/2020] [Indexed: 11/29/2022]
Affiliation(s)
| | | | - Michelle de Haan
- Developmental Neurosciences Programme, University College London, London, UK
| | - Annegret Dahlmann‐Noor
- NIHR Biomedical Research Centre, Moorfields, London, UK
- Paediatric Service, Moorfields Eye Hospital, London, UK
| | - Karin Petrini
- Department of Psychology, University of Bath, Bath, UK
| |
Collapse
|
20
|
Abstract
We agree with the authors regarding the utility of viewing cognition as resulting from an optimal use of limited resources. Here, we advocate for extending this approach to the study of cognitive development, which we feel provides particularly powerful insight into the debate between bounded optimality and true sub-optimality, precisely because young children have limited computational and cognitive resources.
Collapse
|
21
|
Scheller M, Garcia S, Bathelt J, de Haan M, Petrini K. Active touch facilitates object size perception in children but not adults: A multisensory event related potential study. Brain Res 2019; 1723:146381. [PMID: 31419429 DOI: 10.1016/j.brainres.2019.146381] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2019] [Revised: 07/19/2019] [Accepted: 08/12/2019] [Indexed: 11/28/2022]
Abstract
In order to increase perceptual precision, the adult brain dynamically combines redundant information from different senses depending on their reliability. During object size estimation, for example, visual, auditory and haptic information can be integrated to increase the precision of the final size estimate. Young children, however, do not integrate sensory information optimally and instead rely on active touch. Whether this early haptic dominance is reflected in age-related differences in neural mechanisms, and whether it is driven by changes in bottom-up perceptual or top-down attentional processes, has not yet been investigated. Here, we recorded event-related potentials from a group of adults and children aged 5-7 years during an object size perception task using auditory, visual and haptic information. Multisensory information was presented either congruently (conveying the same information) or incongruently (conflicting information). No behavioral responses were required from participants. When haptic size information was available via actively tapping the objects, response amplitudes in the mid-parietal area were significantly reduced by information congruency in children but not in adults, between 190-250 ms and 310-370 ms. These findings indicate that, during object size perception, only children's brain activity is modulated by active touch, supporting a neural maturational shift from sensory dominance in early childhood to optimal multisensory benefit in adulthood.
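The reliability-dependent combination described in this abstract is commonly formalized as maximum-likelihood cue integration under independent Gaussian noise: each cue is weighted by its reliability (inverse variance), and the fused estimate is always at least as precise as the best single cue. The sketch below is illustrative only; the estimates and variances are invented values, not data from the study.

```python
# Minimal sketch of reliability-weighted cue combination (maximum-likelihood
# integration under independent Gaussian noise). Each cue's weight is its
# inverse variance; the fused variance is smaller than either cue's alone.

def combine_cues(estimates, variances):
    """Fuse unimodal estimates into one multisensory estimate."""
    weights = [1.0 / v for v in variances]                 # reliability = 1 / variance
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_var = 1.0 / total                                # <= min(variances)
    return fused, fused_var

# Hypothetical visual and haptic size estimates (arbitrary units)
size, var = combine_cues([10.0, 12.0], [1.0, 4.0])
# The fused estimate lies closer to the more reliable (lower-variance) cue.
```

On this account, children's reliance on active touch corresponds to overweighting the haptic cue relative to its reliability, rather than applying the weights above.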
Collapse
Affiliation(s)
| | | | - Joe Bathelt
- Brain & Cognition, University of Amsterdam, Netherlands; UCL Great Ormond Street Institute of Child Health, UK
| | | | | |
Collapse
|
22
|
Bejjanki VR, Randrup ER, Aslin RN. Young children combine sensory cues with learned information in a statistically efficient manner: But task complexity matters. Dev Sci 2019; 23:e12912. [PMID: 31608526 DOI: 10.1111/desc.12912] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2018] [Revised: 07/31/2019] [Accepted: 10/08/2019] [Indexed: 11/29/2022]
Abstract
Human adults are adept at mitigating the influence of sensory uncertainty on task performance by integrating sensory cues with learned prior information in a Bayes-optimal fashion. Previous research has shown that young children and infants are sensitive to environmental regularities, and that the ability to learn and use such regularities is involved in the development of several cognitive abilities. However, it has also been reported that children younger than 8 years do not combine simultaneously available sensory cues in a Bayes-optimal fashion. Thus, it remains unclear whether, and by what age, children can combine sensory cues with learned regularities in an adult-like manner. Here, we examine the performance of 6- to 7-year-old children when tasked with localizing a 'hidden' target by combining uncertain sensory information with prior information learned over repeated exposure to the task. We demonstrate that 6- to 7-year-olds learn task-relevant statistics at a rate on par with adults and, like adults, are capable of integrating learned regularities with sensory information in a statistically efficient manner. We also show that variables such as task complexity can influence young children's behavior to a greater extent than that of adults, leading their behavior to look sub-optimal. Our findings have important implications for how we should interpret failures in young children's ability to carry out sophisticated computations. These 'failures' need not be attributed to deficits in the fundamental computational capacity available to children early in development, but rather to ancillary immaturities in general cognitive abilities that mask the operation of these computations in specific situations.
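The Bayes-optimal combination of a noisy sensory cue with a learned prior that this abstract describes has a standard closed form when both are Gaussian: the posterior mean is a precision-weighted average, so noisier observations are pulled more strongly toward the prior. The sketch below uses invented numbers purely to illustrate that computation; it is not the paper's model or data.

```python
# Minimal sketch of Bayes-optimal integration of a noisy sensory observation
# with a learned Gaussian prior (conjugate Gaussian-Gaussian case).
# Posterior mean = precision-weighted average of observation and prior mean.

def posterior(sensed, sensory_var, prior_mean, prior_var):
    w_sense = 1.0 / sensory_var   # precision of the sensory likelihood
    w_prior = 1.0 / prior_var     # precision of the learned prior
    mean = (w_sense * sensed + w_prior * prior_mean) / (w_sense + w_prior)
    var = 1.0 / (w_sense + w_prior)
    return mean, var

# A 'hidden' target sensed at 8.0 under high noise, with a prior learned at 5.0:
mean, var = posterior(8.0, 4.0, 5.0, 1.0)
# The posterior mean falls between the prior mean and the sensed location,
# closer to the prior because the sensory cue is less precise here.
```

Sub-optimal behavior in this framework shows up as posterior means that deviate from this weighting, which is what makes it a useful benchmark for children's performance.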
Collapse
Affiliation(s)
- Vikranth R Bejjanki
- Department of Psychology, Hamilton College, Clinton, NY, USA; Program in Neuroscience, Hamilton College, Clinton, NY, USA
| | - Emily R Randrup
- Department of Psychology, Hamilton College, Clinton, NY, USA
| | | |
Collapse
|
23
|
Trudeau-Fisette P, Ito T, Ménard L. Auditory and Somatosensory Interaction in Speech Perception in Children and Adults. Front Hum Neurosci 2019; 13:344. [PMID: 31636554 PMCID: PMC6788346 DOI: 10.3389/fnhum.2019.00344] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2019] [Accepted: 09/18/2019] [Indexed: 11/28/2022] Open
Abstract
Multisensory integration (MSI) allows us to link sensory cues from multiple sources and plays a crucial role in speech development. However, it is not clear whether humans have an innate ability to integrate sensory information in speech efficiently or whether this ability emerges through repeated sensory input while the brain is maturing. We investigated the integration of auditory and somatosensory information in speech processing in a bimodal perceptual task in 15 young adults (age 19–30) and 14 children (age 5–6). The participants were asked to identify whether the perceived target was the sound /e/ or /ø/. Half of the stimuli were presented under a unimodal condition with only auditory input. The other stimuli were presented under a bimodal condition with both auditory input and somatosensory input, consisting of facial skin stretches provided by a robotic device that mimics the articulation of the vowel /e/. The results indicate that the effect of somatosensory information on sound categorization was larger in adults than in children. This suggests that integration of auditory and somatosensory information evolves throughout the course of development.
Collapse
Affiliation(s)
- Paméla Trudeau-Fisette
- Laboratoire de Phonétique, Université du Québec à Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music, Montreal, QC, Canada
| | - Takayuki Ito
- GIPSA-Lab, CNRS, Grenoble INP, Université Grenoble Alpes, Grenoble, France; Haskins Laboratories, Yale University, New Haven, CT, United States
| | - Lucie Ménard
- Laboratoire de Phonétique, Université du Québec à Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music, Montreal, QC, Canada
| |
Collapse
|
24
|
Kim S, Kim J. Effects of Multimodal Association on Ambiguous Perception in Binocular Rivalry. Perception 2019; 48:796-819. [DOI: 10.1177/0301006619867023] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
When the two eyes view dissimilar images, an observer typically reports ambiguous perception, called binocular rivalry, in which subjective perception fluctuates between the two inputs. This perceptual instability often comprises exclusive dominance of each image and a transition state, called the piecemeal state, in which the two images are intermingled in a patchwork manner. Herein, we investigated the effects of multimodal association of sensory-congruent, arbitrary, and reverse pairs on the piecemeal state, in order to see how each level of association affects ambiguous perception during binocular rivalry. To induce the multisensory associations, we designed a matching task with audiovisual feedback in which subjects were required to respond according to given pairing rules. We found that explicit audiovisual associations can substantially affect the piecemeal state during binocular rivalry, and that this congruency effect, which reduces the amount of visual ambiguity, originates primarily from explicit audiovisual association training rather than from common sensory features. Furthermore, when one piece of information is associated with multiple others, recent and preexisting associations work collectively to influence perceptual ambiguity during rivalry. Our findings show that learned multimodal association directly affects the temporal dynamics of ambiguous perception during binocular rivalry by modulating not only exclusive dominance but also the piecemeal state in a systematic manner.
Collapse
Affiliation(s)
- Sungyong Kim
- Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
| | - Jeounghoon Kim
- Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea; School of Humanities and Social Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
| |
Collapse
|
25
|
Volpe G, Gori M. Multisensory Interactive Technologies for Primary Education: From Science to Technology. Front Psychol 2019; 10:1076. [PMID: 31316410 PMCID: PMC6611336 DOI: 10.3389/fpsyg.2019.01076] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2018] [Accepted: 04/25/2019] [Indexed: 12/02/2022] Open
Abstract
While technology is increasingly used in the classroom, we observe at the same time that getting teachers and students to accept it is more difficult than expected. In this work, we focus on multisensory technologies, and we argue that the intersection between current challenges in pedagogical practices and recent scientific evidence opens novel opportunities for these technologies to bring a significant benefit to the learning process. In our view, multisensory technologies are ideal for effectively supporting an embodied and enactive pedagogical approach that exploits the sensory modality best suited to teaching a given concept at school. This represents a great opportunity for designing technologies that are both grounded in robust scientific evidence and tailored to the actual needs of teachers and students. Based on our experience in technology-enhanced learning projects, we propose six golden rules we deem important for seizing and fully exploiting this opportunity.
Collapse
Affiliation(s)
- Gualtiero Volpe
- Casa Paganini-InfoMus, DIBRIS, University of Genoa, Genoa, Italy
| | - Monica Gori
- U-Vip Unit, Istituto Italiano di Tecnologia, Genoa, Italy
| |
Collapse
|
26
|
van Laarhoven T, Stekelenburg JJ, Vroomen J. Increased sub-clinical levels of autistic traits are associated with reduced multisensory integration of audiovisual speech. Sci Rep 2019; 9:9535. [PMID: 31267024 PMCID: PMC6606565 DOI: 10.1038/s41598-019-46084-0] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2019] [Accepted: 06/20/2019] [Indexed: 12/21/2022] Open
Abstract
Recent studies suggest that sub-clinical levels of autistic symptoms may be related to reduced processing of artificial audiovisual stimuli. It is unclear whether these findings extend to more natural stimuli such as audiovisual speech. The current study examined the relationship between autistic traits, measured by the Autism-Spectrum Quotient, and audiovisual speech processing in a large non-clinical population using a battery of experimental tasks assessing audiovisual perceptual binding, visual enhancement of speech embedded in noise, and audiovisual temporal processing. Several associations were found between autistic traits and audiovisual speech processing. Increased autistic-like imagination was related to reduced perceptual binding measured by the McGurk illusion. Increased overall autistic symptomatology was associated with reduced visual enhancement of speech intelligibility in noise. Participants reporting increased levels of rigid and restricted behaviour were more likely to bind audiovisual speech stimuli over longer temporal intervals, while an increased tendency to focus on local aspects of sensory inputs was related to a narrower temporal binding window. These findings demonstrate that increased levels of autistic traits may be related to alterations in audiovisual speech processing, and are consistent with the notion of a spectrum of autistic traits that extends to the general population.
Collapse
Affiliation(s)
- Thijs van Laarhoven
- Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands.
| | - Jeroen J Stekelenburg
- Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands
| | - Jean Vroomen
- Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands
| |
Collapse
|
27
|
Matusz PJ, Merkley R, Faure M, Scerif G. Expert attention: Attentional allocation depends on the differential development of multisensory number representations. Cognition 2019; 186:171-177. [DOI: 10.1016/j.cognition.2019.01.013] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2018] [Revised: 01/14/2019] [Accepted: 01/23/2019] [Indexed: 01/01/2023]
|
28
|
Schaadt G, van der Meer E, Pannekamp A, Oberecker R, Männel C. Children with dyslexia show a reduced processing benefit from bimodal speech information compared to their typically developing peers. Neuropsychologia 2019; 126:147-158. [DOI: 10.1016/j.neuropsychologia.2018.01.013] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2017] [Revised: 11/24/2017] [Accepted: 01/12/2018] [Indexed: 01/20/2023]
|
29
|
Barutchu A, Fifer JM, Shivdasani MN, Crewther SG, Paolini AG. The Interplay Between Multisensory Associative Learning and IQ in Children. Child Dev 2019; 91:620-637. [PMID: 30620403 DOI: 10.1111/cdev.13210] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
This study assessed the developmental profile of unisensory and multisensory processes, and their contribution to children's intellectual abilities (8- and 11-year-olds, N = 38, compared to adults, N = 19), using a simple audiovisual detection task and three incidental associative learning tasks with different sensory signals: visual-verbal with pseudowords, novel audiovisual, and visual-visual. The level of immaturity throughout childhood depended on both the sensory signal type and the task. Associative learning was significantly enhanced with verbal sounds compared to novel audiovisual and unisensory visual learning. Visual-verbal learning was also the best predictor of children's general intellectual abilities. The results demonstrate a separate developmental trajectory for visual and verbal multisensory processes, and independent contributions to the development of cognitive abilities throughout childhood.
Collapse
|
30
|
Birulés J, Bosch L, Brieke R, Pons F, Lewkowicz DJ. Inside bilingualism: Language background modulates selective attention to a talker's mouth. Dev Sci 2018; 22:e12755. [PMID: 30251757 DOI: 10.1111/desc.12755] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2018] [Revised: 08/08/2018] [Accepted: 09/21/2018] [Indexed: 11/28/2022]
Abstract
Previous findings indicate that bilingual Catalan/Spanish-learning infants attend more to the highly salient audiovisual redundancy cues normally available in a talker's mouth than do monolingual infants. Presumably, greater attention to such cues renders the challenge of learning two languages easier. Spanish and Catalan are, however, rhythmically and phonologically close languages. This raises the possibility that bilinguals only rely on redundant audiovisual cues when their languages are close. To test this possibility, we exposed 15-month-old and 4- to 6-year-old close-language bilinguals (Spanish/Catalan) and distant-language bilinguals (Spanish/"other") to videos of a talker uttering Spanish or Catalan (native) and English (non-native) monologues and recorded eye-gaze to the talker's eyes and mouth. At both ages, the close-language bilinguals attended more to the talker's mouth than the distant-language bilinguals. This indicates that language proximity modulates selective attention to a talker's mouth during early childhood and suggests that reliance on the greater salience of audiovisual speech cues depends on the difficulty of the speech-processing task.
Collapse
Affiliation(s)
- Joan Birulés
- Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Barcelona, Spain
| | - Laura Bosch
- Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Barcelona, Spain
| | - Ricarda Brieke
- Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Barcelona, Spain
| | - Ferran Pons
- Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Barcelona, Spain
| | - David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University, Boston, Massachusetts
| |
Collapse
|
31
|
Stroud LR, McCallum M, Salisbury AL. Impact of maternal prenatal smoking on fetal to infant neurobehavioral development. Dev Psychopathol 2018; 30:1087-1105. [PMID: 30068428 PMCID: PMC6541397 DOI: 10.1017/s0954579418000676] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
Despite recent emphasis on the profound importance of the fetal environment in "programming" postnatal development, measurement of offspring development typically begins after birth. Using a novel coding strategy combining direct fetal observation via ultrasound and actocardiography, we investigated the impact of maternal smoking during pregnancy (MSDP) on fetal neurobehavior; we also investigated links between fetal and infant neurobehavior. Participants were 90 pregnant mothers and their infants (52 MSDP-exposed; 51% minorities; ages 18-40). Fetal neurobehavior at baseline and in response to vibro-acoustic stimulus was assessed via ultrasound and actocardiography at M = 35 weeks gestation and coded via the Fetal Neurobehavioral Assessment System (FENS). After delivery, the NICU Network Neurobehavioral Scale was administered up to seven times over the first postnatal month. MSDP was associated with increased fetal activity and fetal limb movements. Fetal activity, complex body movements, and cardiac-somatic coupling were associated with infants' ability to attend to stimuli and to self-regulate over the first postnatal month. Furthermore, differential associations emerged by MSDP group between fetal activity, complex body movements, quality of movement, and coupling, and infant attention and self-regulation. The present study adds to a growing literature establishing the validity of fetal neurobehavioral measures in elucidating fetal programming pathways.
Collapse
|
32
|
Stevenson RA, Sheffield SW, Butera IM, Gifford RH, Wallace MT. Multisensory Integration in Cochlear Implant Recipients. Ear Hear 2018; 38:521-538. [PMID: 28399064 DOI: 10.1097/aud.0000000000000435] [Citation(s) in RCA: 53] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception, in general, and for speech intelligibility, specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research. The overall pattern of results indicates that many CI recipients show at least some perceptual gain that can be attributable to multisensory integration. The extent of this gain, however, varies based on a number of factors, including age of implantation and specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest performing adults obtain audiovisual integration benefits similar to individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains resultant from audiovisual integration, suggesting a sensitive period in development for the brain networks that subserve these integrative functions, as well as length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception, but also in terms of the behavioral and perceptual benefits of audiovisual processing. 
Importantly, patterns of auditory, visual, and audiovisual responses suggest that the underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly on low-level processing tasks such as signal detection, will help to further assess mechanisms of multisensory integration for individuals with hearing loss, both with and without CIs.
Collapse
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, University of Western Ontario, London, Ontario, Canada; Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; Walter Reed National Military Medical Center, Audiology and Speech Pathology Center, Bethesda, Maryland; Vanderbilt Brain Institute, Nashville, Tennessee; Vanderbilt Kennedy Center, Nashville, Tennessee; Department of Psychology, Vanderbilt University, Nashville, Tennessee; Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee; and Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
| | | | | | | | | |
Collapse
|
33
|
Hornix BE, Havekes R, Kas MJH. Multisensory cortical processing and dysfunction across the neuropsychiatric spectrum. Neurosci Biobehav Rev 2018; 97:138-151. [PMID: 29496479 DOI: 10.1016/j.neubiorev.2018.02.010] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2017] [Revised: 02/12/2018] [Accepted: 02/13/2018] [Indexed: 11/25/2022]
Abstract
Sensory processing is affected in multiple neuropsychiatric disorders such as schizophrenia and autism spectrum disorders. Genetic and environmental factors guide the formation and fine-tuning of the brain circuitry necessary to receive, organize, and respond to sensory input in order to behave in a meaningful and consistent manner. During certain developmental stages the brain is sensitive to intrinsic and external factors. For example, disturbed expression levels of certain risk genes during critical neurodevelopmental periods may lead to exaggerated brain plasticity processes within the sensory circuits, and sensory stimulation immediately after birth contributes to fine-tuning of these circuits. Here, the neurodevelopmental trajectory of sensory circuit development will be described and related to example risk gene mutations found in neuropsychiatric disorders. Subsequently, the flow of sensory information through these circuits and its relationship to synaptic plasticity will be described. Research focusing on combined analyses of neural circuit development and functioning is necessary to expand our understanding of sensory processing and behavioral deficits that are relevant across the neuropsychiatric spectrum.
Collapse
Affiliation(s)
- Betty E Hornix
- Groningen Institute for Evolutionary Life Sciences, University of Groningen, Nijenborgh 7, 9747 AG, Groningen, The Netherlands
| | - Robbert Havekes
- Groningen Institute for Evolutionary Life Sciences, University of Groningen, Nijenborgh 7, 9747 AG, Groningen, The Netherlands
| | - Martien J H Kas
- Groningen Institute for Evolutionary Life Sciences, University of Groningen, Nijenborgh 7, 9747 AG, Groningen, The Netherlands.
| |
Collapse
|
34
|
Rohlf S, Habets B, von Frieling M, Röder B. Infants are superior in implicit crossmodal learning and use other learning mechanisms than adults. eLife 2017; 6:e28166. [PMID: 28949291 PMCID: PMC5662286 DOI: 10.7554/elife.28166] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2017] [Accepted: 09/26/2017] [Indexed: 11/13/2022] Open
Abstract
During development, internal models of the sensory world must be acquired and then continuously adapted. We used event-related potentials (ERPs) to test the hypothesis that infants extract crossmodal statistics implicitly, whereas adults learn them only when they are task relevant. Participants were passively exposed to frequent standard audio-visual combinations (A1V1, A2V2, p=0.35 each), rare recombinations of these standard stimuli (A1V2, A2V1, p=0.10 each), and a rare audio-visual deviant with infrequent auditory and visual elements (A3V3, p=0.10). While both six-month-old infants and adults differentiated between rare deviants and standards at early neural processing stages, only infants were sensitive to crossmodal statistics, as indicated by a late ERP difference between standard and recombined stimuli. A second experiment revealed that adults differentiated recombined and standard combinations when crossmodal combinations were task relevant. These results demonstrate a heightened sensitivity to crossmodal statistics in infants and a change in learning mode from infancy to adulthood.
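The exposure statistics described above lend themselves to a simple trial-sequence generator; the sketch below assumes only the stated probabilities, and the function and variable names are illustrative, not from the original study.

```python
import random

# Stimulus probabilities as stated in the design: two frequent standards,
# two rare recombinations of their elements, and one rare double-deviant.
STIMULI = {
    "A1V1": 0.35, "A2V2": 0.35,  # standard audio-visual combinations
    "A1V2": 0.10, "A2V1": 0.10,  # recombined standards
    "A3V3": 0.10,                # deviant with novel auditory and visual elements
}

def make_sequence(n_trials, seed=0):
    """Draw a passive-exposure trial sequence with the stated probabilities."""
    rng = random.Random(seed)
    labels = list(STIMULI)
    weights = [STIMULI[label] for label in labels]
    return rng.choices(labels, weights=weights, k=n_trials)

sequence = make_sequence(1000)
```

Over a long sequence, roughly 70% of trials are standards, so a late ERP difference between standards and recombined stimuli isolates sensitivity to the crossmodal pairings rather than to the individual auditory or visual elements, which are equally frequent in both.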
Collapse
Affiliation(s)
- Sophie Rohlf
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
| | - Boukje Habets
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Biological Psychology and Cognitive Neuroscience, University of Bielefeld, Bielefeld, Germany
| | - Marco von Frieling
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
| | - Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
| |
Collapse
|
35
|
Thomas RL, Misra R, Akkunt E, Ho C, Spence C, Bremner AJ. Sensitivity to auditory-tactile colocation in early infancy. Dev Sci 2017; 21:e12597. [PMID: 28880496 DOI: 10.1111/desc.12597] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2016] [Accepted: 06/08/2017] [Indexed: 11/27/2022]
Abstract
An ability to detect the common location of multisensory stimulation is essential for us to perceive a coherent environment, to represent the interface between the body and the external world, and to act on sensory information. Regarding the tactile environment "at hand", we need to represent somatosensory stimuli impinging on the skin surface in the same spatial reference frame as distal stimuli, such as those transduced by vision and audition. Across two experiments we investigated whether 6- (n = 14; Experiment 1) and 4-month-old (n = 14; Experiment 2) infants were sensitive to the colocation of tactile and auditory signals delivered to the hands. We recorded infants' visual preferences for spatially congruent and incongruent auditory-tactile events delivered to their hands. At 6 months, infants looked longer toward incongruent stimuli, whilst at 4 months infants looked longer toward congruent stimuli. Thus, even from 4 months of age, infants are sensitive to the colocation of simultaneously presented auditory and tactile stimuli. We conclude that 4- and 6-month-old infants can represent auditory and tactile stimuli in a common spatial frame of reference. We explain the age-wise shift in infants' preferences from congruent to incongruent in terms of an increased preference for novel crossmodal spatial relations based on the accumulation of experience. A comparison of looking preferences across the congruent and incongruent conditions with a unisensory control condition indicates that the ability to perceive auditory-tactile colocation is based on a crossmodal rather than a supramodal spatial code by 6 months of age at least.
Collapse
Affiliation(s)
- Rhiannon L Thomas
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, UK
| | - Reeva Misra
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, UK
| | - Emine Akkunt
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, UK
| | - Cristy Ho
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, UK
| | - Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, UK
| | - Andrew J Bremner
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths, University of London, UK
| |
Collapse
|
36
|
Purpura G, Cioni G, Tinelli F. Multisensory-Based Rehabilitation Approach: Translational Insights from Animal Models to Early Intervention. Front Neurosci 2017; 11:430. [PMID: 28798661 PMCID: PMC5526840 DOI: 10.3389/fnins.2017.00430] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2017] [Accepted: 07/12/2017] [Indexed: 11/18/2022] Open
Abstract
Multisensory processes permit the combination of inputs from different sensory systems, allowing a coherent representation of biological events and facilitating adaptation to the environment. For these reasons, their application in neurological and neuropsychological rehabilitation has expanded in recent decades. Recent studies in animal and human models have indicated that, on the one hand, multisensory integration matures gradually during postnatal life and its development is closely linked to environment and experience and, on the other hand, that modality-specific information seems not to benefit from redundancy across multiple sense modalities and is more readily perceived under unimodal than multimodal stimulation. In this review, the development of multisensory processes is analyzed, highlighting the clinical effects of its manipulation for the rehabilitation of sensory disorders in animal and human models. In addition, new methods of early intervention based on a multisensory-based rehabilitation approach and their applications to different infant populations at risk of neurodevelopmental disabilities are discussed.
Collapse
Affiliation(s)
- Giulia Purpura
- Department of Developmental Neuroscience, Fondazione Stella Maris (IRCCS), Pisa, Italy
| | - Giovanni Cioni
- Department of Developmental Neuroscience, Fondazione Stella Maris (IRCCS), Pisa, Italy; Department of Clinical and Experimental Medicine, University of Pisa, Pisa, Italy
| | - Francesca Tinelli
- Department of Developmental Neuroscience, Fondazione Stella Maris (IRCCS), Pisa, Italy
| |
Collapse
|
37
|
Thomas RL, Nardini M, Mareschal D. The impact of semantically congruent and incongruent visual information on auditory object recognition across development. J Exp Child Psychol 2017; 162:72-88. [PMID: 28595113 DOI: 10.1016/j.jecp.2017.04.020] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2016] [Revised: 04/21/2017] [Accepted: 04/21/2017] [Indexed: 10/19/2022]
Abstract
The ability to use different sensory signals in conjunction confers numerous advantages on perception. Multisensory perception in adults is influenced by factors beyond low-level stimulus properties such as semantic congruency. Sensitivity to semantic relations has been shown to emerge early in development; however, less is known about whether implementation of these associations changes with development or whether development in the representations themselves might modulate their influence. Here, we used a Stroop-like paradigm that requires participants to identify an auditory stimulus while ignoring a visual stimulus. Prior research shows that in adults visual distractors have more impact on processing of auditory objects than vice versa; however, this pattern appears to be inverted early in development. We found that children from 8 years of age (and adults) gain a speed advantage from semantically congruent visual information and are disadvantaged by semantically incongruent visual information. At 6 years of age, children gain a speed advantage for semantically congruent visual information but are not disadvantaged by semantically incongruent visual information (as compared with semantically unrelated visual information). Both children and adults were influenced by associations between auditory and visual stimuli, which they had been exposed to on only 12 occasions during the learning phase of the study. Adults showed a significant speed advantage over children for well-established associations but showed no such advantage for newly acquired pairings. This suggests that the influence of semantic associations on multisensory processing does not change with age but rather these associations become more robust and, in turn, more influential.
Collapse
Affiliation(s)
- Rhiannon L Thomas
- Sensorimotor Development Research Unit, Department of Psychology, Goldsmiths College, University of London, London SE14 6NW, UK; Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck College, University of London, London WC1E 7HX, UK
| | - Marko Nardini
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck College, University of London, London WC1E 7HX, UK; Department of Psychology, University of Durham, Durham DH1 3LE, UK
| | - Denis Mareschal
- Centre for Brain and Cognitive Development, Department of Psychological Sciences, Birkbeck College, University of London, London WC1E 7HX, UK.
| |
Collapse
|
38
|
Odegaard B, Wozny DR, Shams L. A simple and efficient method to enhance audiovisual binding tendencies. PeerJ 2017; 5:e3143. [PMID: 28462016 PMCID: PMC5407282 DOI: 10.7717/peerj.3143] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2016] [Accepted: 03/04/2017] [Indexed: 11/20/2022] Open
Abstract
Individuals vary in their tendency to bind signals from multiple senses. For the same set of sights and sounds, one individual may frequently integrate multisensory signals and experience a unified percept, whereas another individual may rarely bind them and often experience two distinct sensations. Thus, while this binding/integration tendency is specific to each individual, it is not clear how plastic this tendency is in adulthood, and how sensory experiences may cause it to change. Here, we conducted an exploratory investigation which provides evidence that (1) the brain’s tendency to bind in spatial perception is plastic, (2) that it can change following brief exposure to simple audiovisual stimuli, and (3) that exposure to temporally synchronous, spatially discrepant stimuli provides the most effective method to modify it. These results can inform current theories about how the brain updates its internal model of the surrounding sensory world, as well as future investigations seeking to increase integration tendencies.
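The "binding tendency" studied above is commonly formalized, in work from this group and others, as the prior probability of a common cause in a Bayesian causal-inference model (after Körding et al., 2007). The sketch below is that textbook formalization, not code from the paper, and every parameter value is illustrative.

```python
import math

def p_common(x_a, x_v, sigma_a=2.0, sigma_v=1.0, sigma_p=10.0, prior=0.5):
    """Posterior probability that auditory (x_a) and visual (x_v) spatial
    measurements arose from one common source, under Gaussian likelihoods
    and a zero-mean Gaussian spatial prior. `prior` is the binding
    tendency p(C = 1)."""
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2
    # Likelihood of both measurements given a single common cause
    v1 = va * vv + va * vp + vv * vp
    l1 = math.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va) / v1)
    l1 /= 2 * math.pi * math.sqrt(v1)
    # Likelihood given two independent causes
    v2a, v2v = va + vp, vv + vp
    l2 = math.exp(-0.5 * (x_a**2 / v2a + x_v**2 / v2v))
    l2 /= 2 * math.pi * math.sqrt(v2a * v2v)
    return l1 * prior / (l1 * prior + l2 * (1.0 - prior))
```

In this framing, the plasticity reported above corresponds to exposure shifting the `prior` term: raising it increases the posterior probability of binding at any given spatial discrepancy.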
Collapse
Affiliation(s)
- Brian Odegaard
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
| | - David R Wozny
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
| | - Ladan Shams
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States; Department of Bioengineering, University of California, Los Angeles, Los Angeles, CA, United States; Neuroscience Interdepartmental Program, University of California, Los Angeles, Los Angeles, CA, United States
| |
Collapse
|
39
|
Bremner AJ, Spence C. The Development of Tactile Perception. ADVANCES IN CHILD DEVELOPMENT AND BEHAVIOR 2017; 52:227-268. [PMID: 28215286 DOI: 10.1016/bs.acdb.2016.12.002] [Citation(s) in RCA: 52] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Touch is the first of our senses to develop, providing us with the sensory scaffold on which we come to perceive our own bodies and our sense of self. Touch also provides us with direct access to the external world of physical objects, via haptic exploration. Furthermore, a recent area of interest in tactile research across studies of developing children and adults is its social function, mediating interpersonal bonding. Although there are a range of demonstrations of early competence with touch, particularly in the domain of haptics, the review presented here indicates that many of the tactile perceptual skills that we take for granted as adults (e.g., perceiving touches in the external world as well as on the body) take some time to develop in the first months of postnatal life, likely as a result of an extended process of connection with other sense modalities which provide new kinds of information from birth (e.g., vision and audition). Here, we argue that because touch is of such fundamental importance across a wide range of social and cognitive domains, it should be placed much more centrally in the study of early perceptual development than it currently is.
Collapse
Affiliation(s)
- A J Bremner
- Goldsmiths, University of London, London, United Kingdom.
| | - C Spence
- University of Oxford, Oxford, United Kingdom
| |
Collapse
|
40
|
Sounds can boost the awareness of visual events through attention without cross-modal integration. Sci Rep 2017; 7:41684. [PMID: 28139712 PMCID: PMC5282564 DOI: 10.1038/srep41684] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2016] [Accepted: 12/21/2016] [Indexed: 11/09/2022] Open
Abstract
Cross-modal interactions can lead to enhancement of visual perception, even for visual events below awareness. However, the underlying mechanism is still unclear. Can purely bottom-up cross-modal integration break through the threshold of awareness? We used a binocular rivalry paradigm to measure perceptual switches after brief flashes or sounds which sometimes co-occurred. When flashes at the suppressed eye coincided with sounds, perceptual switches occurred the earliest. Yet, contrary to the hypothesis of cross-modal integration, this facilitation never surpassed the assumption of probability summation of independent sensory signals. A follow-up experiment replicated the same pattern of results using silent gaps embedded in continuous noise, instead of sounds. This manipulation should weaken putative sound-flash integration while keeping the gaps salient as bottom-up attention cues. Additional results showed that spatial congruency between flashes and sounds did not determine the effectiveness of cross-modal facilitation, which was again no better than probability summation. Thus, the present findings fail to fully support the hypothesis of bottom-up cross-modal integration, above and beyond the independent contribution of two transient signals, as an account for cross-modal enhancement of visual events below the level of awareness.
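The probability-summation benchmark invoked above has a simple closed form for independent signals: the event goes undetected only if both unisensory detections fail. A minimal sketch (function name and rates are ours, for illustration):

```python
def probability_summation(p_a, p_v):
    """Detection probability expected if auditory and visual signals are
    processed independently: the event is missed only when both
    unisensory detections fail."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_v)

# A bimodal detection rate exceeding this bound would argue for genuine
# cross-modal integration; performance at or below it, as reported above,
# is explained by two independent transient signals.
benchmark = probability_summation(0.4, 0.5)  # ≈ 0.7
```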
Collapse
|
41
|
Hamburg S, Startin CM, Strydom A. The relationship between sound-shape matching and cognitive ability in adults with Down syndrome. Multisens Res 2017; 30:537-547. [PMID: 30984556 DOI: 10.1163/22134808-00002579] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
Down syndrome (DS), the most common genetic cause of intellectual disability, is characterised by a pattern of cognitive deficits hypothesised as relating to later-developing neural systems. Multisensory integration (MSI) has been shown to benefit cognitive performance on numerous tasks in the typically developing population and is implicated in the early development of various cognitive processes. Given these developmental links of both MSI and DS, it is important to determine the relationship between them. This study aimed to characterise sound-shape matching performance in young adults with DS as an indicator of MSI (correct response rate around 90% in typically developing individuals). We further investigated the relationship between task performance and estimated cognitive ability (verbal and non-verbal) in addition to everyday adaptive behavior skills. Those answering correctly (72.5%) scored significantly higher across cognitive and adaptive behavior measures compared to those answering incorrectly. Furthermore, 57.1% of individuals with estimated cognitive ability scores below the median value answered correctly compared to 89.5% of individuals scoring above the median, with similar values found for adaptive behavior skills (57.9% vs 94.4%). This preliminary finding suggests sound-shape matching deficits are relatively common in DS but may be restricted to individuals of lower ability as opposed to being a general characteristic of DS. Further studies investigating aspects of MSI across a range of modalities are necessary to fully characterise the nature of MSI in DS and to explore underlying neural correlates and mechanisms.
Collapse
Affiliation(s)
- S Hamburg
- UCL Division of Psychiatry, University College London, London, United Kingdom; LonDownS Consortium
| | - C M Startin
- UCL Division of Psychiatry, University College London, London, United Kingdom; LonDownS Consortium
| | - A Strydom
- UCL Division of Psychiatry, University College London, London, United Kingdom; LonDownS Consortium
| |
Collapse
|
42
|
A Brief Period of Postnatal Visual Deprivation Alters the Balance between Auditory and Visual Attention. Curr Biol 2016; 26:3101-3105. [DOI: 10.1016/j.cub.2016.10.014] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2016] [Revised: 09/14/2016] [Accepted: 10/10/2016] [Indexed: 11/18/2022]
|
43
|
Abstract
More than 35 years ago, Meltzoff and Moore (1977) published their famous article, “Imitation of facial and manual gestures by human neonates.” Their central conclusion, that neonates can imitate, was and continues to be controversial. Here, we focus on an often-neglected aspect of this debate, namely, neonatal spontaneous behaviors themselves. We present a case study of a paradigmatic orofacial “gesture,” namely tongue protrusion and retraction (TP/R). Against the background of new research on mammalian aerodigestive development, we ask: How does the human aerodigestive system develop, and what role does TP/R play in the neonate's emerging system of aerodigestion? We show that mammalian aerodigestion develops in two phases: (1) from the onset of isolated orofacial movements in utero to the postnatal mastery of suckling at 4 months after birth; and (2) thereafter, from preparation to the mastery of mastication and deglutition of solid foods. Like other orofacial stereotypies, TP/R emerges in the first phase and vanishes prior to the second. Based upon recent advances in activity-driven early neural development, we suggest a sequence of three developmental events in which TP/R might participate: the acquisition of tongue control, the integration of the central pattern generator (CPG) for TP/R with other aerodigestive CPGs, and the formation of connections within the cortical maps of S1 and M1. If correct, orofacial stereotypies are crucial to the maturation of aerodigestion in the neonatal period but also unlikely to co-occur with imitative behavior.
Collapse
|
44
|
ten Oever S, Romei V, van Atteveldt N, Soto-Faraco S, Murray MM, Matusz PJ. The COGs (context, object, and goals) in multisensory processing. Exp Brain Res 2016; 234:1307-23. [PMID: 26931340 DOI: 10.1007/s00221-016-4590-z] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2015] [Accepted: 01/30/2016] [Indexed: 12/20/2022]
Abstract
Our understanding of how perception operates in real-world environments has been substantially advanced by studying both multisensory processes and "top-down" control processes influencing sensory processing via activity from higher-order brain areas, such as attention, memory, and expectations. As the two topics have been traditionally studied separately, the mechanisms orchestrating real-world multisensory processing remain unclear. Past work has revealed that the observer's goals gate the influence of many multisensory processes on brain and behavioural responses, whereas some other multisensory processes might occur independently of these goals. Consequently, other forms of top-down control beyond goal dependence are necessary to explain the full range of multisensory effects currently reported at the brain and the cognitive level. These forms of control include sensitivity to stimulus context as well as the detection of matches (or lack thereof) between a multisensory stimulus and categorical attributes of naturalistic objects (e.g. tools, animals). In this review we discuss and integrate the existing findings that demonstrate the importance of such goal-, object- and context-based top-down control over multisensory processing. We then put forward a few principles emerging from this literature review with respect to the mechanisms underlying multisensory processing and discuss their possible broader implications.
Collapse
Affiliation(s)
- Sanne ten Oever
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands
| | - Vincenzo Romei
- Department of Psychology, Centre for Brain Science, University of Essex, Colchester, UK
| | - Nienke van Atteveldt
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, The Netherlands; Department of Educational Neuroscience, Faculty of Psychology and Education and Institute LEARN!, VU University Amsterdam, Amsterdam, The Netherlands
| | - Salvador Soto-Faraco
- Multisensory Research Group, Center for Brain and Cognition, Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
| | - Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, Centre Hospitalier Universitaire Vaudois (CHUV), University Hospital Center and University of Lausanne, BH7.081, rue du Bugnon 46, 1011, Lausanne, Switzerland; EEG Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland; Department of Ophthalmology, Jules-Gonin Eye Hospital, University of Lausanne, Lausanne, Switzerland
| | - Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Neuropsychology and Neurorehabilitation Service and Department of Radiology, Centre Hospitalier Universitaire Vaudois (CHUV), University Hospital Center and University of Lausanne, BH7.081, rue du Bugnon 46, 1011, Lausanne, Switzerland; Attention, Brain, and Cognitive Development Group, Department of Experimental Psychology, University of Oxford, Oxford, UK.
| |
Collapse
|
45
|
Bedford R, Pellicano E, Mareschal D, Nardini M. Flexible integration of visual cues in adolescents with autism spectrum disorder. Autism Res 2016; 9:272-81. [PMID: 26097109 PMCID: PMC4864758 DOI: 10.1002/aur.1509] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2015] [Accepted: 05/20/2015] [Indexed: 11/18/2022]
Abstract
Although children with autism spectrum disorder (ASD) show atypical sensory processing, evidence for impaired integration of multisensory information has been mixed. In this study, we took a Bayesian model-based approach to assess within-modality integration of congruent and incongruent texture and disparity cues to judge slant in typical and autistic adolescents. Human adults optimally combine multiple sources of sensory information to reduce perceptual variance but in typical development this ability to integrate cues does not develop until late childhood. While adults cannot help but integrate cues, even when they are incongruent, young children's ability to keep cues separate gives them an advantage in discriminating incongruent stimuli. Given that mature cue integration emerges in later childhood, we hypothesized that typical adolescents would show adult-like integration, combining both congruent and incongruent cues. For the ASD group there were three possible predictions (1) "no fusion": no integration of congruent or incongruent cues, like 6-year-old typical children; (2) "mandatory fusion": integration of congruent and incongruent cues, like typical adults; (3) "selective fusion": cues are combined when congruent but not incongruent, consistent with predictions of Enhanced Perceptual Functioning (EPF) theory. As hypothesized, typical adolescents showed significant integration of both congruent and incongruent cues. The ASD group showed results consistent with "selective fusion," integrating congruent but not incongruent cues. This allowed adolescents with ASD to make perceptual judgments which typical adolescents could not. In line with EPF, results suggest that perception in ASD may be more flexible and less governed by mandatory top-down feedback.
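The "optimal combination" referred to above is standard reliability-weighted averaging: each cue is weighted by its inverse variance, so the fused estimate has lower variance than either cue alone. A minimal sketch, with textbook symbols and illustrative numbers rather than values from the study:

```python
def fuse_cues(mu_t, sigma_t, mu_d, sigma_d):
    """Reliability-weighted fusion of texture and disparity slant cues.
    Each cue's weight is proportional to its reliability 1/sigma**2."""
    r_t, r_d = 1.0 / sigma_t**2, 1.0 / sigma_d**2
    mu = (r_t * mu_t + r_d * mu_d) / (r_t + r_d)
    sigma = (1.0 / (r_t + r_d)) ** 0.5
    return mu, sigma

# Congruent-case example: the fused slant estimate lies closer to the
# more reliable disparity cue, and its uncertainty is below either cue's.
mu, sigma = fuse_cues(30.0, 4.0, 36.0, 2.0)
```

In these terms, "selective fusion" in the ASD group corresponds to applying this rule only when the two cue estimates are close, whereas "mandatory fusion" applies it regardless of their discrepancy.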
Collapse
Affiliation(s)
- Rachael Bedford
- Biostatistics Department, Institute of Psychiatry, King's College London, United Kingdom
| | - Elizabeth Pellicano
- Centre for Research in Autism and Education (CRAE), Institute of Education, University of London, United Kingdom
- School of Psychology, University of Western Australia, Perth, Australia
| | - Denis Mareschal
- Centre for Brain and Cognitive Development, Birkbeck, University of London, United Kingdom
| | - Marko Nardini
- Department of Psychology, Durham University, Durham, United Kingdom
| |
Collapse
|
46
|
Yang W, Ren Y, Yang DO, Yuan X, Wu J. The Influence of Selective and Divided Attention on Audiovisual Integration in Children. Perception 2016; 45:515-526. [PMID: 26811419 DOI: 10.1177/0301006616629025] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
This article investigates whether audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) differs between selective attention and divided attention conditions. We designed a visual and/or auditory detection task comprising three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that responses to bimodal audiovisual stimuli were faster than to unimodal auditory or visual stimuli under both the divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference in response speed was found between unimodal visual and bimodal audiovisual stimuli. Moreover, comparing audiovisual behavioral facilitation effects between divided attention and selective attention (auditory or visual), we found that facilitation differed significantly between the two, indicating that audiovisual integration was stronger under divided attention than under selective attention in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children and might offer a new perspective for identifying children with conditions associated with sustained attention deficits, such as attention-deficit hyperactivity disorder.
Affiliation(s)
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Hubei, China; Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
| | - Yanna Ren
- Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
| | - Dan Ou Yang
- Department of Psychology, Faculty of Education, Hubei University, Hubei, China
| | - Xue Yuan
- Department of Psychology, Faculty of Education, Hubei University, Hubei, China
| | - Jinglong Wu
- Bio-robotics and System Laboratory, Beijing Institute of Technology, Beijing, China; Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
| |
|
47
|
Interactions between space and effectiveness in human multisensory performance. Neuropsychologia 2016; 88:83-91. [PMID: 26826522 DOI: 10.1016/j.neuropsychologia.2016.01.031] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2015] [Revised: 12/30/2015] [Accepted: 01/26/2016] [Indexed: 11/23/2022]
Abstract
Several stimulus factors are important in multisensory integration, including the spatial and temporal relationships of the paired stimuli as well as their effectiveness. Changes in these factors have been shown to dramatically change the nature and magnitude of multisensory interactions. Typically, these factors are considered in isolation, although there is a growing appreciation for the fact that they are likely to be strongly interrelated. Here, we examined interactions between two of these factors - spatial location and effectiveness - in dictating performance in the localization of an audiovisual target. A psychophysical experiment was conducted in which participants reported the perceived location of visual flashes and auditory noise bursts presented alone and in combination. Stimuli were presented at four spatial locations relative to fixation (0°, 30°, 60°, 90°) and at two intensity levels (high, low). Multisensory combinations were always spatially coincident and of the matching intensity (high-high or low-low). In responding to visual stimuli alone, localization accuracy decreased and response times (RTs) increased as stimuli were presented at more eccentric locations. In responding to auditory stimuli, performance was poorest at the 30° and 60° locations. For both visual and auditory stimuli, accuracy was greater and RTs were faster for more intense stimuli. For responses to visual-auditory stimulus combinations, performance enhancements were found at locations in which the unisensory performance was lowest, results concordant with the concept of inverse effectiveness. RTs for these multisensory presentations frequently violated race-model predictions, implying integration of these inputs, and a significant location-by-intensity interaction was observed. Performance gains under multisensory conditions were larger as stimuli were positioned at more peripheral locations, and this increase was most pronounced for the low-intensity conditions. These results provide strong support that the effects of stimulus location and effectiveness on multisensory integration are interdependent, with both contributing to the overall effectiveness of the stimuli in driving the resultant multisensory response.
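The race-model test referred to in this abstract is conventionally Miller's (1982) race-model inequality, which bounds the multisensory RT distribution by the sum of the unisensory distributions: F_AV(t) ≤ F_A(t) + F_V(t). The sketch below is a minimal, illustrative implementation of that standard test (function names and quantile grid are assumptions, not details from the paper):

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, quantiles=np.arange(0.05, 1.0, 0.05)):
    """Evaluate Miller's (1982) race-model inequality F_AV(t) <= F_A(t) + F_V(t)
    at a grid of quantiles of the audiovisual RT distribution.
    Returns F_AV(t) minus the (capped) race-model bound; positive values
    indicate a violation, commonly taken as evidence of integration."""
    t = np.quantile(rt_av, quantiles)  # time points spanning the AV distribution

    def ecdf(sample, times):
        # Empirical CDF: proportion of RTs at or below each time point.
        return np.searchsorted(np.sort(sample), times, side="right") / len(sample)

    f_av = ecdf(rt_av, t)
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)  # probabilities cap at 1
    return f_av - bound
```

With simulated data in which bimodal responses are faster than either unisensory distribution could jointly predict, the returned differences are positive at the early quantiles, mirroring the violations the study reports.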
|
48
|
Il était une fois… l’histoire d’une collaboration scientifique avec André Bullinger autour de l’olfaction et de la douleur chez le bébé [Once upon a time… the story of a scientific collaboration with André Bullinger on olfaction and pain in infants]. ENFANCE 2015. [DOI: 10.4074/s0013754515004103] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
49
|
Petrazzini MEM, Lucon-Xiccato T, Agrillo C, Bisazza A. Use of ordinal information by fish. Sci Rep 2015; 5:15497. [PMID: 26499450 PMCID: PMC4620454 DOI: 10.1038/srep15497] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2015] [Accepted: 09/21/2015] [Indexed: 11/25/2022] Open
Abstract
Mammals and birds can process ordinal numerical information which can be used, for instance, for recognising an object on the basis of its position in a sequence of similar objects. Recent studies have shown that teleost fish possess numerical abilities comparable to those of other vertebrates, but it is unknown if they can also learn ordinal numerical relations. Guppies (Poecilia reticulata) learned to recognise the 3rd feeder in a row of 8 identical ones even when inter-feeder distance and feeder positions were varied among trials to prevent the use of any spatial information. To assess whether guppies spontaneously use ordinal or spatial information when both are simultaneously available, fish were then trained with constant feeder positions and inter-feeder distance. In probe trials where these two sources of information were contrasted, the subjects selected the correct ordinal position significantly more often than the original spatial position, indicating that the former was preferentially encoded during training. Finally, a comparison between subjects trained on the 3rd and the 5th position revealed that guppies can also learn the latter discrimination, but the larger error rate observed in this case suggests that 5 is close to the upper limit of discrimination in guppies.
Affiliation(s)
| | | | - Christian Agrillo
- Dipartimento di Psicologia Generale, Università di Padova, Italy; Centro di Neuroscienze Cognitive, Università di Padova, Italy
| | - Angelo Bisazza
- Dipartimento di Psicologia Generale, Università di Padova, Italy; Centro di Neuroscienze Cognitive, Università di Padova, Italy
| |
|
50
|
Biagi L, Crespi SA, Tosetti M, Morrone MC. BOLD Response Selective to Flow-Motion in Very Young Infants. PLoS Biol 2015; 13:e1002260. [PMID: 26418729 PMCID: PMC4587790 DOI: 10.1371/journal.pbio.1002260] [Citation(s) in RCA: 52] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2015] [Accepted: 08/21/2015] [Indexed: 11/20/2022] Open
Abstract
In adults, motion perception is mediated by an extensive network of occipital, parietal, temporal, and insular cortical areas. Little is known about the neural substrate of visual motion in infants, although behavioural studies suggest that motion perception is rudimentary at birth and matures steadily over the first few years. Here, by measuring Blood Oxygenation Level-Dependent (BOLD) responses to flow versus random-motion stimuli, we demonstrate that the major cortical areas serving motion processing in adults are operative by 7 wk of age. Resting-state correlations demonstrate adult-like functional connectivity between the motion-selective associative areas, but not between primary cortex and temporo-occipital and posterior-insular cortices. Taken together, the results suggest that the development of motion perception may be limited by slow maturation of the subcortical input and of the cortico-cortical connections. In addition, they support the existence of independent input to primary (V1) and temporo-occipital (V5/MT+) cortices very early in life. Although 7-wk-old infants do not perceive motion with fine sensitivity, this study shows that their brains have a well-established network of associative cortical areas selective to visual flow-motion. While it is known that the visual brain is immature at birth, there is little firm information about the developmental timeline of the visual system in humans. Despite this, it is commonly assumed that the cortex matures slowly, with primary visual areas developing first, followed by higher associative regions. Here we use fMRI in very young infants to show that this is not the case. Adults are highly sensitive to moving objects, and to the spurious flow projected on their retinas while they move in the environment. Flow perception is mediated by an extensive network of areas involving primary and associative visual areas, but also vestibular associative cortices that mediate the perception of body motion (vection). Our data demonstrate that this complex network of higher associative areas is established and well developed by 7 wk of age, including the vestibular associative cortex. Interestingly, the maturation of the primary visual cortex lags behind the higher associative cortex; this suggests the existence of independent cortical inputs to the primary and the associative cortex at this stage of development, explaining why infants do not yet perceive motion with the same sensitivity as adults.
Affiliation(s)
- Laura Biagi
- IRCCS Stella Maris Foundation, Calambrone, Pisa, Italy
| | - Sofia Allegra Crespi
- Department of Psychology, Vita-Salute San Raffaele University, Milan, Italy; CERMAC and Neuroradiology Unit, San Raffaele Hospital, Milan, Italy
| | | | - Maria Concetta Morrone
- IRCCS Stella Maris Foundation, Calambrone, Pisa, Italy; Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
| |
|