1
Früh D, Mendl‐Heinisch C, Bittner N, Weis S, Caspers S. Prediction of Verbal Abilities From Brain Connectivity Data Across the Lifespan Using a Machine Learning Approach. Hum Brain Mapp 2025; 46:e70191. PMID: 40130301; PMCID: PMC11933761; DOI: 10.1002/hbm.70191.
Abstract
Compared to nonverbal cognition such as executive or memory functions, language-related cognition generally appears to remain more stable until later in life. Nevertheless, different language-related processes, for example, verbal fluency versus vocabulary knowledge, appear to show different trajectories across the life span. One potential explanation for differences in verbal functions may be alterations in the functional and structural network architecture of different large-scale brain networks. For example, differences in verbal abilities have been linked to the communication within and between the frontoparietal (FPN) and default mode network (DMN). It, however, remains open whether brain connectivity within these networks may be informative for language performance at the individual level across the life span. Further information in this regard may be highly desirable as verbal abilities allow us to participate in daily activities, are associated with quality of life, and may be considered in preventive and interventional setups to foster cognitive health across the life span. So far, mixed prediction results based on resting-state functional connectivity (FC) and structural connectivity (SC) data have been reported for language abilities across different samples, age groups, and machine-learning (ML) approaches. Therefore, the current study set out to investigate the predictability of verbal fluency and vocabulary knowledge based on brain connectivity data in the DMN, FPN, and the whole brain using an ML approach in a lifespan sample (N = 717; age range: 18-85) from the 1000BRAINS study. Prediction performance was, thereby, systematically compared across (i) verbal [verbal fluency and vocabulary knowledge] and nonverbal abilities [processing speed and visual working memory], (ii) modalities [FC and SC data], (iii) feature sets [DMN, FPN, DMN-FPN, and whole brain], and (iv) samples [total, younger, and older aged group]. 
Results showed that verbal abilities could not be reliably predicted from FC and SC data across feature sets and samples, and no predictability differences emerged between verbal fluency and vocabulary knowledge across input modalities, feature sets, and samples. In contrast, nonverbal abilities could be moderately predicted from connectivity data, particularly SC, in the total and younger age groups, whereas satisfactory prediction performance for nonverbal cognitive functions was not achieved in the older age group. These results emphasize that verbal functions may be more difficult to predict from brain connectivity data in domain-general cognitive networks and the whole brain than nonverbal abilities, particularly executive functions, across the life span. It therefore appears warranted to investigate more closely the differences in predictability between cognitive functions and age groups.
Affiliation(s)
- Deborah Früh
- Institute of Neuroscience and Medicine (INM‐1), Research Centre Jülich, Jülich, Germany
- Institute for Anatomy I, Medical Faculty & University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Camilla Mendl‐Heinisch
- Institute of Neuroscience and Medicine (INM‐1), Research Centre Jülich, Jülich, Germany
- Institute for Anatomy I, Medical Faculty & University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Nora Bittner
- Institute of Neuroscience and Medicine (INM‐1), Research Centre Jülich, Jülich, Germany
- Institute for Anatomy I, Medical Faculty & University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Susanne Weis
- Institute of Neuroscience and Medicine, Brain and Behaviour (INM‐7), Research Centre Jülich, Jülich, Germany
- Institute of Systems Neuroscience, Medical Faculty & University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Svenja Caspers
- Institute of Neuroscience and Medicine (INM‐1), Research Centre Jülich, Jülich, Germany
- Institute for Anatomy I, Medical Faculty & University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
2
Hauw F, Béranger B, Cohen L. Subtitled speech: the neural mechanisms of ticker-tape synaesthesia. Brain 2024; 147:2530-2541. PMID: 38620012; PMCID: PMC11224615; DOI: 10.1093/brain/awae114.
Abstract
The acquisition of reading modifies areas of the brain associated with vision and with language, in addition to their connections. These changes enable reciprocal translation between orthography and the sounds and meaning of words. Individual variability in the pre-existing cerebral substrate contributes to the range of eventual reading abilities, extending to atypical developmental patterns, including dyslexia and reading-related synaesthesias. The present study is devoted to the little-studied but highly informative ticker-tape synaesthesia, in which speech perception triggers the vivid and irrepressible perception of words in their written form in the mind's eye. We scanned a group of 17 synaesthetes and 17 matched controls with functional MRI, while they listened to spoken sentences, words, numbers or pseudowords (Experiment 1), viewed images and written words (Experiment 2) or were at rest (Experiment 3). First, we found direct correlates of the ticker-tape synaesthesia phenomenon: during speech perception, as ticker-tape synaesthesia was active, synaesthetes showed over-activation of left perisylvian regions supporting phonology and of the occipitotemporal visual word form area, where orthography is represented. Second, we provided support to the hypothesis that ticker-tape synaesthesia results from atypical relationships between spoken and written language processing: the ticker-tape synaesthesia-related regions overlap closely with cortices activated during reading, and the overlap of speech-related and reading-related areas is larger in synaesthetes than in controls. Furthermore, the regions over-activated in ticker-tape synaesthesia overlap with regions under-activated in dyslexia. Third, during the resting state (i.e. in the absence of current ticker-tape synaesthesia), synaesthetes showed increased functional connectivity between left prefrontal and bilateral occipital regions. 
This pattern might reflect a lowered threshold for conscious access to visual mental contents and might imply a non-specific predisposition to all synaesthesias with a visual content. These data provide a rich and coherent account of ticker-tape synaesthesia as a non-detrimental developmental condition created by the interaction of reading acquisition with an atypical cerebral substrate.
Affiliation(s)
- Fabien Hauw
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris 75013, France
- AP-HP, Hôpital de La Pitié Salpêtrière, Fédération de Neurologie, Paris 75013, France
- Benoît Béranger
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris 75013, France
- Laurent Cohen
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris 75013, France
- AP-HP, Hôpital de La Pitié Salpêtrière, Fédération de Neurologie, Paris 75013, France
3
Fatić S, Stanojević N, Jeličić L, Bilibajkić R, Marisavljević M, Maksimović S, Gavrilović A, Subotić M. Beta Spectral Power during Passive Listening in Preschool Children with Specific Language Impairment. Dev Neurosci 2024; 47:98-111. PMID: 38723615; PMCID: PMC11965842; DOI: 10.1159/000539135.
Abstract
INTRODUCTION Children with specific language impairment (SLI) have difficulties in different speech and language domains. Electrophysiological studies have documented that auditory processing in children with SLI is atypical and probably caused by delayed and abnormal auditory maturation. During the resting state, or different auditory tasks, children with SLI show low or high beta spectral power, which could be a clinical correlate for investigating brain rhythms. METHODS The aim of this study was to examine the electrophysiological cortical activity of the beta rhythm while listening to words and nonwords in children with SLI in comparison to typical development (TD) children. The participants were 50 children with SLI, aged 4 and 5 years, and 50 age-matched TD children. The children were divided into two subgroups according to age: (1) children 4 years of age; (2) children 5 years of age. RESULTS The older group differed from the younger group in beta auditory processing, with increased values of beta spectral power in the right frontal, temporal, and parietal regions. In addition, children with SLI had higher beta spectral power than TD children in the bilateral temporal regions. CONCLUSION Complex beta auditory activation in TD and SLI children indicates the presence of early changes in functional brain connectivity.
Affiliation(s)
- Saška Fatić
- Cognitive Neuroscience Department, Research and Development Institute “Life Activities Advancement Institute,” Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology, Belgrade, Serbia
- Nina Stanojević
- Cognitive Neuroscience Department, Research and Development Institute “Life Activities Advancement Institute,” Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology, Belgrade, Serbia
- Ljiljana Jeličić
- Cognitive Neuroscience Department, Research and Development Institute “Life Activities Advancement Institute,” Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology, Belgrade, Serbia
- Ružica Bilibajkić
- Cognitive Neuroscience Department, Research and Development Institute “Life Activities Advancement Institute,” Belgrade, Serbia
- Maša Marisavljević
- Cognitive Neuroscience Department, Research and Development Institute “Life Activities Advancement Institute,” Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology, Belgrade, Serbia
- Slavica Maksimović
- Cognitive Neuroscience Department, Research and Development Institute “Life Activities Advancement Institute,” Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology, Belgrade, Serbia
- Aleksandar Gavrilović
- Faculty of Medical Sciences, Department of Neurology, University of Kragujevac, Kragujevac, Serbia
- Clinic of Neurology, Clinical Center Kragujevac, Kragujevac, Serbia
- Miško Subotić
- Cognitive Neuroscience Department, Research and Development Institute “Life Activities Advancement Institute,” Belgrade, Serbia
4
Dopierała AAW, López Pérez D, Mercure E, Pluta A, Malinowska-Korczak A, Evans S, Wolak T, Tomalski P. Watching talking faces: The development of cortical representation of visual syllables in infancy. Brain Lang 2023; 244:105304. PMID: 37481794; DOI: 10.1016/j.bandl.2023.105304.
Abstract
From birth, we perceive speech by hearing and seeing people talk. In adults, cortical representations of visual speech are processed in the putative temporal visual speech area (TVSA), but it remains unknown how these representations develop. We measured infants' cortical responses to silent visual syllables and non-communicative mouth movements using functional Near-Infrared Spectroscopy. Our results indicate that cortical specialisation for visual speech may emerge during infancy. The putative TVSA responded to both visual syllables and gurning around 5 months of age, and responded more strongly to gurning than to visual syllables around 10 months of age. Multivariate pattern analysis classification of distinct cortical responses to visual speech and gurning was successful at 10, but not at 5 months of age. These findings imply that cortical representations of visual speech change between 5 and 10 months of age, showing that the putative TVSA is initially broadly tuned and becomes selective with age.
Affiliation(s)
- Aleksandra A W Dopierała
- Faculty of Psychology, University of Warsaw, Warsaw, Poland; Department of Psychology, University of British Columbia, Vancouver, Canada
- David López Pérez
- Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Agnieszka Pluta
- Faculty of Psychology, University of Warsaw, Warsaw, Poland; Institute of Physiology and Pathology of Hearing, Bioimaging Research Center, World Hearing Centre, Warsaw, Poland
- Samuel Evans
- University of Westminster, London, UK; King's College London, London, UK
- Tomasz Wolak
- Institute of Physiology and Pathology of Hearing, Bioimaging Research Center, World Hearing Centre, Warsaw, Poland
- Przemysław Tomalski
- Faculty of Psychology, University of Warsaw, Warsaw, Poland; Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
5
Krason A, Vigliocco G, Mailend ML, Stoll H, Varley R, Buxbaum LJ. Benefit of visual speech information for word comprehension in post-stroke aphasia. Cortex 2023; 165:86-100. PMID: 37271014; PMCID: PMC10850036; DOI: 10.1016/j.cortex.2023.04.011.
Abstract
Aphasia is a language disorder that often involves speech comprehension impairments affecting communication. In face-to-face settings, speech is accompanied by mouth and facial movements, but little is known about the extent to which they benefit aphasic comprehension. This study investigated the benefit of visual information accompanying speech for word comprehension in people with aphasia (PWA) and the neuroanatomic substrates of any benefit. Thirty-six PWA and 13 neurotypical matched control participants performed a picture-word verification task in which they indicated whether a picture of an animate/inanimate object matched a subsequent word produced by an actress in a video. Stimuli were either audiovisual (with visible mouth and facial movements) or auditory-only (still picture of a silhouette) with audio being clear (unedited) or degraded (6-band noise-vocoding). We found that visual speech information was more beneficial for neurotypical participants than PWA, and more beneficial for both groups when speech was degraded. A multivariate lesion-symptom mapping analysis for the degraded speech condition showed that lesions to superior temporal gyrus, underlying insula, primary and secondary somatosensory cortices, and inferior frontal gyrus were associated with reduced benefit of audiovisual compared to auditory-only speech, suggesting that the integrity of these fronto-temporo-parietal regions may facilitate cross-modal mapping. These findings provide initial insights into our understanding of the impact of audiovisual information on comprehension in aphasia and the brain regions mediating any benefit.
Affiliation(s)
- Anna Krason
- Experimental Psychology, University College London, UK; Moss Rehabilitation Research Institute, Elkins Park, PA, USA
- Gabriella Vigliocco
- Experimental Psychology, University College London, UK; Moss Rehabilitation Research Institute, Elkins Park, PA, USA
- Marja-Liisa Mailend
- Moss Rehabilitation Research Institute, Elkins Park, PA, USA; Department of Special Education, University of Tartu, Tartu Linn, Estonia
- Harrison Stoll
- Moss Rehabilitation Research Institute, Elkins Park, PA, USA; Applied Cognitive and Brain Science, Drexel University, Philadelphia, PA, USA
- Laurel J Buxbaum
- Moss Rehabilitation Research Institute, Elkins Park, PA, USA; Department of Rehabilitation Medicine, Thomas Jefferson University, Philadelphia, PA, USA
6
Hauw F, El Soudany M, Rosso C, Daunizeau J, Cohen L. A single case neuroimaging study of tickertape synesthesia. Sci Rep 2023; 13:12185. PMID: 37500762; PMCID: PMC10374523; DOI: 10.1038/s41598-023-39276-2.
Abstract
Reading acquisition is enabled by deep changes in the brain's visual system and language areas, and in the links subtending their collaboration. Disruption of those plastic processes commonly results in developmental dyslexia. However, atypical development of reading mechanisms may occasionally result in ticker-tape synesthesia (TTS), a condition described by Francis Galton in 1883 wherein individuals "see mentally in print every word that is uttered (…) as from a long imaginary strip of paper". While reading is the bottom-up translation of letters into speech, TTS may be viewed as its opposite, the top-down translation of speech into internally visualized letters. In a series of functional MRI experiments, we studied MK, a man with TTS. We showed that a set of left-hemispheric areas were more active in MK than in controls during the perception of normal compared to reversed speech, including frontoparietal areas involved in speech processing, and the Visual Word Form Area, an occipitotemporal region subtending orthography. Those areas were identical to those involved in reading, supporting the construal of TTS as upended reading. Using dynamic causal modeling, we further showed that, parallel to reading, TTS induced by spoken words and pseudowords relied on top-down flow of information along distinct lexical and phonological routes, involving the middle temporal and supramarginal gyri, respectively. Future studies of TTS should shed new light on the neurodevelopmental mechanisms of reading acquisition, their variability and their disorders.
Affiliation(s)
- Fabien Hauw
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris, France
- AP-HP, Hôpital de la Pitié Salpêtrière, Fédération de Neurologie, Paris, France
- Mohamed El Soudany
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris, France
- Charlotte Rosso
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris, France
- AP-HP, Urgences Cérébro-Vasculaires, Hôpital Pitié-Salpêtrière, Paris, France
- Jean Daunizeau
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris, France
- Laurent Cohen
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris, France
- AP-HP, Hôpital de la Pitié Salpêtrière, Fédération de Neurologie, Paris, France
7
Li J, Yang Y, Viñas-Guasch N, Yang Y, Bi HY. Differences in brain functional networks for audiovisual integration during reading between children and adults. Ann N Y Acad Sci 2023; 1520:127-139. PMID: 36478220; DOI: 10.1111/nyas.14943.
Abstract
Building robust letter-to-sound correspondences is a prerequisite for developing reading capacity. However, the neural mechanisms underlying the development of audiovisual integration for reading are largely unknown. This study used functional magnetic resonance imaging in a lexical decision task to investigate functional brain networks that support audiovisual integration during reading in developing child readers (10-12 years old) and skilled adult readers (20-28 years old). The results revealed enhanced connectivity in a prefrontal-superior temporal network (including the right medial frontal gyrus, right superior frontal gyrus, and left superior temporal gyrus) in adults relative to children, reflecting the development of attentional modulation of audiovisual integration involved in reading processing. Furthermore, the connectivity strength of this brain network was correlated with reading accuracy. Collectively, this study, for the first time, elucidates the differences in brain networks of audiovisual integration for reading between children and adults, promoting the understanding of the neurodevelopment of multisensory integration in high-level human cognition.
Affiliation(s)
- Junjun Li
- CAS Key Laboratory of Behavioral Science, Center for Brain Science and Learning Difficulties, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yang Yang
- CAS Key Laboratory of Behavioral Science, Center for Brain Science and Learning Difficulties, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yinghui Yang
- CAS Key Laboratory of Behavioral Science, Center for Brain Science and Learning Difficulties, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; China Welfare Institute Information and Research Center, Soong Ching Ling Children Development Center, Shanghai, China
- Hong-Yan Bi
- CAS Key Laboratory of Behavioral Science, Center for Brain Science and Learning Difficulties, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
8
Yang W, Guo A, Yao H, Yang X, Li Z, Li S, Chen J, Ren Y, Yang J, Wu J, Zhang Z. Effect of aging on audiovisual integration: Comparison of high- and low-intensity conditions in a speech discrimination task. Front Aging Neurosci 2022; 14:1010060. DOI: 10.3389/fnagi.2022.1010060.
Abstract
Audiovisual integration is an essential process that influences speech perception in conversation. However, it is still debated whether older individuals benefit more from audiovisual integration than younger individuals. This ambiguity is likely due to stimulus features, such as stimulus intensity. The purpose of the current study was to explore the effect of aging on audiovisual integration, using event-related potentials (ERPs) at different stimulus intensities. The results showed greater audiovisual integration in older adults at 320–360 ms. Conversely, at 460–500 ms, older adults displayed attenuated audiovisual integration in the frontal, fronto-central, central, and centro-parietal regions compared to younger adults. In addition, we found older adults had greater audiovisual integration at 200–230 ms under the low-intensity condition compared to the high-intensity condition, suggesting inverse effectiveness occurred. However, inverse effectiveness was not found in younger adults. Taken together, the results suggested that there was age-related dissociation in audiovisual integration and inverse effectiveness, indicating that the neural mechanisms underlying audiovisual integration differed between older adults and younger adults.
9
Fiber tracing and microstructural characterization among audiovisual integration brain regions in neonates compared with young adults. Neuroimage 2022; 254:119141. PMID: 35342006; DOI: 10.1016/j.neuroimage.2022.119141.
Abstract
Audiovisual integration (AVI) has been associated with cognitive-processing and behavioral advantages, as well as with various socio-cognitive disorders. While some studies have identified brain regions instantiating this ability shortly after birth, little is known about the structural pathways connecting them. The goal of the present study was to reconstruct fiber tracts linking AVI regions in the newborn in-vivo brain and assess their adult-likeness by comparing them with analogous fiber tracts of young adults. We performed probabilistic tractography and compared connective probabilities between a sample of term-born neonates (N = 311; the Developing Human Connectome Project, dHCP, http://www.developingconnectome.org) and young adults (N = 311; the Human Connectome Project, https://www.humanconnectome.org/) by means of a classification algorithm. Furthermore, we computed Dice coefficients to assess between-group spatial similarity of the reconstructed fibers and used diffusion metrics to characterize neonates' AVI brain network in terms of microstructural properties, interhemispheric differences and the association with perinatal covariates and biological sex. Overall, our results indicate that the AVI fiber bundles were successfully reconstructed in a vast majority of neonates, similarly to adults. Connective probability distributional similarities and spatial overlaps of AVI fibers between the two groups differed across the reconstructed fibers. There was a rank-order correspondence of the fibers' connective strengths across the groups. Additionally, the study revealed patterns of diffusion metrics in line with early white matter developmental trajectories and a developmental advantage for females. Altogether, these findings deliver evidence of meaningful structural connections among AVI regions in the newborn in-vivo brain.
10
Cox CMM, Keren-Portnoy T, Roepstorff A, Fusaroli R. A Bayesian meta-analysis of infants' ability to perceive audio-visual congruence for speech. Infancy 2021; 27:67-96. PMID: 34542230; DOI: 10.1111/infa.12436.
Abstract
This paper quantifies the extent to which infants can perceive audio-visual congruence for speech information and assesses whether this ability changes with native language exposure over time. A hierarchical Bayesian robust regression model of 92 separate effect sizes extracted from 24 studies indicates a moderate effect size in a positive direction (0.35, CI [0.21: 0.50]). This result suggests that infants possess a robust ability to detect audio-visual congruence for speech. Moderator analyses, moreover, suggest that infants' audio-visual matching ability for speech emerges at an early point in the process of language acquisition and remains stable for both native and non-native speech throughout early development. A sensitivity analysis of the meta-analytic data, however, indicates that a moderate publication bias for significant results could shift the lower credible interval to include null effects. Based on these findings, we outline recommendations for new lines of enquiry and suggest ways to improve the replicability of results in future investigations.
Affiliation(s)
- Christopher Martin Mikkelsen Cox
- School of Communication and Culture, Aarhus University, Aarhus, Denmark; Interacting Minds Centre, Aarhus University, Aarhus, Denmark; Department of Language and Linguistic Science, University of York, Heslington, UK
- Tamar Keren-Portnoy
- Department of Language and Linguistic Science, University of York, Heslington, UK
- Andreas Roepstorff
- School of Communication and Culture, Aarhus University, Aarhus, Denmark; Interacting Minds Centre, Aarhus University, Aarhus, Denmark
- Riccardo Fusaroli
- School of Communication and Culture, Aarhus University, Aarhus, Denmark; Interacting Minds Centre, Aarhus University, Aarhus, Denmark
11
Finkl T, Hahne A, Friederici AD, Gerber J, Mürbe D, Anwander A. Language Without Speech: Segregating Distinct Circuits in the Human Brain. Cereb Cortex 2021; 30:812-823. PMID: 31373629; DOI: 10.1093/cercor/bhz128.
Abstract
Language is a fundamental part of human cognition. The question of whether language is processed independently of speech, however, is still heavily discussed. The absence of speech in deaf signers offers the opportunity to disentangle language from speech in the human brain. Using probabilistic tractography, we compared brain structural connectivity of adult deaf signers who had learned sign language early in life to that of matched hearing controls. Quantitative comparison of the connectivity profiles revealed that the core language tracts did not differ between signers and controls, confirming that language is independent of speech. In contrast, pathways involved in the production and perception of speech displayed lower connectivity in deaf signers compared to hearing controls. These differences were located in tracts towards the left pre-supplementary motor area and the thalamus when seeding in Broca's area, and in ipsilateral parietal areas and the precuneus with seeds in left posterior temporal regions. Furthermore, the interhemispheric connectivity between the auditory cortices was lower in the deaf than in the hearing group, underlining the importance of the transcallosal connection for early auditory processes. The present results provide evidence for a functional segregation of the neural pathways for language and speech.
Affiliation(s)
- Theresa Finkl
- Saxonian Cochlear Implant Centre, Phoniatrics and Audiology, Faculty of Medicine, Technische Universität Dresden, Fetscherstraße 74, Dresden, Germany
- Anja Hahne
- Saxonian Cochlear Implant Centre, Phoniatrics and Audiology, Faculty of Medicine, Technische Universität Dresden, Fetscherstraße 74, Dresden, Germany
- Angela D Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Johannes Gerber
- Neuroradiology, Faculty of Medicine, Technische Universität Dresden, Dresden, Germany
- Dirk Mürbe
- Department of Audiology and Phoniatrics, Charité-Universitätsmedizin, Berlin, Germany
- Alfred Anwander
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
12
Lalonde K, Werner LA. Development of the Mechanisms Underlying Audiovisual Speech Perception Benefit. Brain Sci 2021; 11:49. [PMID: 33466253 PMCID: PMC7824772 DOI: 10.3390/brainsci11010049] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2020] [Revised: 12/30/2020] [Accepted: 12/30/2020] [Indexed: 02/07/2023] Open
Abstract
The natural environments in which infants and children learn speech and language are noisy and multimodal. Adults rely on the multimodal nature of speech to compensate for noisy environments during speech communication. Multiple mechanisms underlie mature audiovisual benefit to speech perception, including reduced uncertainty as to when auditory speech will occur, use of correlations between the amplitude envelope of auditory and visual signals in fluent speech, and use of visual phonetic knowledge for lexical access. This paper reviews evidence regarding infants' and children's use of temporal and phonetic mechanisms in audiovisual speech perception benefit. The ability to use temporal cues for audiovisual speech perception benefit emerges in infancy. Although infants are sensitive to the correspondence between auditory and visual phonetic cues, the ability to use this correspondence for audiovisual benefit may not emerge until age four. A more cohesive account of the development of audiovisual speech perception may follow from a more thorough understanding of the development of sensitivity to and use of various temporal and phonetic cues.
Affiliation(s)
- Kaylah Lalonde
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE 68131, USA
- Lynne A. Werner
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, USA
13
Ullas S, Hausfeld L, Cutler A, Eisner F, Formisano E. Neural Correlates of Phonetic Adaptation as Induced by Lexical and Audiovisual Context. J Cogn Neurosci 2020; 32:2145-2158. [PMID: 32662723 DOI: 10.1162/jocn_a_01608] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
When speech perception is difficult, one way listeners adjust is by reconfiguring phoneme category boundaries, drawing on contextual information. Both lexical knowledge and lipreading cues are used in this way, but it remains unknown whether these two differing forms of perceptual learning are similar at a neural level. This study compared phoneme boundary adjustments driven by lexical or audiovisual cues, using ultra-high-field 7-T fMRI. During imaging, participants heard exposure stimuli and test stimuli. Exposure stimuli for lexical retuning were audio recordings of words, and those for audiovisual recalibration were audio-video recordings of lip movements during utterances of pseudowords. Test stimuli were ambiguous phonetic strings presented without context, and listeners reported what phoneme they heard. Reports reflected phoneme biases in preceding exposure blocks (e.g., more reported /p/ after /p/-biased exposure). Analysis of corresponding brain responses indicated that both forms of cue use were associated with a network of activity across the temporal cortex, plus parietal, insula, and motor areas. Audiovisual recalibration also elicited significant occipital cortex activity despite the lack of visual stimuli. Activity levels in several ROIs also covaried with strength of audiovisual recalibration, with greater activity accompanying larger recalibration shifts. Similar activation patterns appeared for lexical retuning, but here, no significant ROIs were identified. Audiovisual and lexical forms of perceptual learning thus induce largely similar brain response patterns. However, audiovisual recalibration involves additional visual cortex contributions, suggesting that previously acquired visual information (on lip movements) is retrieved and deployed to disambiguate auditory perception.
Affiliation(s)
- Shruti Ullas
- Maastricht University; Maastricht Brain Imaging Centre
- Lars Hausfeld
- Maastricht University; Maastricht Brain Imaging Centre
- Elia Formisano
- Maastricht University; Maastricht Brain Imaging Centre; Maastricht Centre for Systems Biology
14
Rajasilta O, Tuulari JJ, Björnsdotter M, Scheinin NM, Lehtola SJ, Saunavaara J, Häkkinen S, Merisaari H, Parkkola R, Lähdesmäki T, Karlsson L, Karlsson H. Resting-state networks of the neonate brain identified using independent component analysis. Dev Neurobiol 2020; 80:111-125. [PMID: 32267069 DOI: 10.1002/dneu.22742] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2019] [Revised: 03/10/2020] [Accepted: 03/31/2020] [Indexed: 12/12/2022]
Abstract
Resting-state functional magnetic resonance imaging (rs-fMRI) has been successfully used to probe the intrinsic functional organization of the brain and to study brain development. Here, we implemented a combination of individual- and group-level independent component analysis (ICA) in FSL on a 6-min resting-state data set acquired from 21 naturally sleeping, term-born (age 26 ± 6.7 d), healthy neonates to investigate the emerging functional resting-state networks (RSNs). In line with the previous literature, we found evidence of sensorimotor, auditory/language, visual, cerebellar, thalamic, parietal, prefrontal, and anterior cingulate networks, as well as dorsal and ventral aspects of the default-mode network. Additionally, we identified RSNs in frontal, parietal, and temporal regions that have not been previously described in this age group and correspond to the canonical RSNs established in adults. Importantly, we found that careful ICA-based denoising of fMRI data increased the number of networks identified with group ICA, whereas the degree of spatial smoothing did not change the number of identified networks. Our results show that the infant brain has an established set of RSNs soon after birth.
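As a rough illustration of the ICA decomposition this abstract describes, independent components can be recovered from linearly mixed signals with scikit-learn's FastICA. This is a toy stand-in with simulated data (the study itself used FSL's ICA tooling on real rs-fMRI), so all signal and mixing choices below are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Simulate three latent "network" time courses (stand-ins for resting-state sources).
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t)), rng.laplace(size=t.size)]
sources /= sources.std(axis=0)

# Mix them into six observed "voxel" signals, as a scanner would record superpositions.
mixing = rng.normal(size=(6, 3))
observed = sources @ mixing.T

# FastICA estimates the unmixing that recovers the latent components.
ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(observed)  # shape: (2000, 3)

print(recovered.shape)
```

Group-level ICA as used in infant rs-fMRI additionally concatenates subjects before decomposition; the core unmixing step is the same idea.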
Affiliation(s)
- Olli Rajasilta
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Institute of Clinical Medicine, University of Turku, Turku, Finland
- Jetro J Tuulari
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Institute of Clinical Medicine, University of Turku, Turku, Finland; Department of Psychiatry, University of Turku and Turku University Hospital, Turku, Finland; Department of Psychiatry, University of Oxford, Oxford, UK; Turku Collegium for Science and Medicine, University of Turku, Turku, Finland
- Malin Björnsdotter
- The Sahlgrenska University Hospital, Gothenburg, Sweden; Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Noora M Scheinin
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Institute of Clinical Medicine, University of Turku, Turku, Finland; Department of Psychiatry, University of Turku and Turku University Hospital, Turku, Finland
- Satu J Lehtola
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Institute of Clinical Medicine, University of Turku, Turku, Finland
- Jani Saunavaara
- Department of Medical Physics, Turku University Hospital, Turku, Finland
- Suvi Häkkinen
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Institute of Clinical Medicine, University of Turku, Turku, Finland
- Harri Merisaari
- Department of Medical Physics, Turku University Hospital, Turku, Finland
- Riitta Parkkola
- Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland
- Tuire Lähdesmäki
- Department of Pediatric Neurology, University of Turku and Turku University Hospital, Turku, Finland
- Linnea Karlsson
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Institute of Clinical Medicine, University of Turku, Turku, Finland; Department of Child Psychiatry, University of Turku and Turku University Hospital, Turku, Finland
- Hasse Karlsson
- FinnBrain Birth Cohort Study, Turku Brain and Mind Center, Institute of Clinical Medicine, University of Turku, Turku, Finland; Department of Psychiatry, University of Turku and Turku University Hospital, Turku, Finland
15
Tholen MG, Trautwein FM, Böckler A, Singer T, Kanske P. Functional magnetic resonance imaging (fMRI) item analysis of empathy and theory of mind. Hum Brain Mapp 2020; 41:2611-2628. [PMID: 32115820 PMCID: PMC7294056 DOI: 10.1002/hbm.24966] [Citation(s) in RCA: 47] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2019] [Revised: 02/05/2020] [Accepted: 02/11/2020] [Indexed: 12/21/2022] Open
Abstract
In contrast to conventional functional magnetic resonance imaging (fMRI) analysis across participants, item analysis allows generalizing the observed neural response patterns from a specific stimulus set to the entire population of stimuli. In the present study, we perform an item analysis on an fMRI paradigm (EmpaToM) that measures the neural correlates of empathy and Theory of Mind (ToM). The task includes a large stimulus set (240 emotional vs. neutral videos to probe empathic responding and 240 ToM or factual reasoning questions to probe ToM), which we tested in two large participant samples (N = 178, N = 130). Both the empathy-related network, comprising anterior insula, anterior cingulate/dorsomedial prefrontal cortex, inferior frontal gyrus, and dorsal temporoparietal junction/supramarginal gyrus (TPJ), and the ToM-related network, including ventral TPJ, superior temporal gyrus, temporal poles, and anterior and posterior midline regions, were observed across participants and items. Regression analyses confirmed that these activations are predicted by the empathy or ToM condition of the stimuli, but not by low-level features such as video length, number of words or syllables, or syntactic complexity. The item analysis also allowed for the selection of the most effective items to create optimized stimulus sets that provide the most stable and reproducible results. Finally, reproducibility was shown in the replication of all analyses in the second participant sample. The data demonstrate (a) the generalizability of empathy- and ToM-related neural activity and (b) the reproducibility of the EmpaToM task and its applicability in intervention and clinical imaging studies.
Affiliation(s)
- Matthias G Tholen
- Centre for Cognitive Neuroscience, Department of Psychology, University of Salzburg, Austria
- Anne Böckler
- Department of Psychology, Leibniz University Hannover, Hannover, Germany
- Tania Singer
- Max Planck Society, Social Neuroscience Lab, Berlin, Germany
- Philipp Kanske
- Clinical Psychology and Behavioral Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Research Group Social Stress and Family Health, Leipzig, Germany
16
The facilitative effect of gestures on the neural processing of semantic complexity in a continuous narrative. Neuroimage 2019; 195:38-47. [DOI: 10.1016/j.neuroimage.2019.03.054] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Accepted: 03/25/2019] [Indexed: 11/19/2022] Open
17
McGurk Effect by Individuals with Autism Spectrum Disorder and Typically Developing Controls: A Systematic Review and Meta-analysis. J Autism Dev Disord 2019; 49:34-43. [PMID: 30019277 DOI: 10.1007/s10803-018-3680-0] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
By synthesizing existing behavioural studies through a meta-analytic approach, the current study compared the performance of autism spectrum disorder (ASD) and typically developing groups in audiovisual speech integration and investigated potential moderators that might contribute to the heterogeneity of the existing findings. In total, nine studies were included in the current study, and the pooled overall difference between the two groups was significant, g = -0.835 (p < 0.001; 95% CI -1.155 to -0.516). Age and task scoring method were found to be associated with the inconsistencies of the findings reported by previous studies. These findings indicate that individuals with ASD show a weaker McGurk effect than typically developing controls.
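A pooled effect of this kind is typically computed with a random-effects model. A minimal DerSimonian-Laird pooling of per-study Hedges' g values might look like the sketch below; the effect sizes and variances are invented for illustration, not the nine studies actually synthesized in this meta-analysis:

```python
import numpy as np

def pool_random_effects(g, var):
    """DerSimonian-Laird random-effects pooled effect with a 95% CI."""
    g, var = np.asarray(g, float), np.asarray(var, float)
    w = 1.0 / var                              # fixed-effect (inverse-variance) weights
    g_fixed = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - g_fixed) ** 2)         # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)    # between-study variance estimate
    w_re = 1.0 / (var + tau2)                  # random-effects weights
    pooled = np.sum(w_re * g) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study Hedges' g values and sampling variances.
g_values = [-1.1, -0.6, -0.9, -0.4]
variances = [0.10, 0.08, 0.15, 0.12]
pooled, ci = pool_random_effects(g_values, variances)
print(round(pooled, 3), tuple(round(x, 3) for x in ci))
```

Moderator effects such as age or scoring method would then be probed by subgroup pooling or meta-regression on top of this estimator.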
18
Altvater-Mackensen N, Grossmann T. Modality-independent recruitment of inferior frontal cortex during speech processing in human infants. Dev Cogn Neurosci 2018; 34:130-138. [PMID: 30391756 PMCID: PMC6969291 DOI: 10.1016/j.dcn.2018.10.002] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2017] [Revised: 08/25/2018] [Accepted: 10/25/2018] [Indexed: 11/22/2022] Open
Abstract
Despite increasing interest in the development of audiovisual speech perception in infancy, the underlying mechanisms and neural processes are still only poorly understood. In addition to regions in temporal cortex associated with speech processing and multimodal integration, such as superior temporal sulcus, left inferior frontal cortex (IFC) has been suggested to be critically involved in mapping information from different modalities during speech perception. To further illuminate the role of IFC during infant language learning and speech perception, the current study examined the processing of auditory, visual and audiovisual speech in 6-month-old infants using functional near-infrared spectroscopy (fNIRS). Our results revealed that infants recruit speech-sensitive regions in frontal cortex including IFC regardless of whether they processed unimodal or multimodal speech. We argue that IFC may play an important role in associating multimodal speech information during the early steps of language learning.
Affiliation(s)
- Nicole Altvater-Mackensen
- Department of Psychology, Johannes-Gutenberg-University Mainz, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Tobias Grossmann
- Department of Psychology, University of Virginia, USA; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
19
Riley JD, Chen EE, Winsell J, Davis EP, Glynn LM, Baram TZ, Sandman CA, Small SL, Solodkin A. Network specialization during adolescence: Hippocampal effective connectivity in boys and girls. Neuroimage 2018; 175:402-412. [PMID: 29649560 PMCID: PMC5978413 DOI: 10.1016/j.neuroimage.2018.04.013] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2017] [Revised: 04/04/2018] [Accepted: 04/08/2018] [Indexed: 12/15/2022] Open
Abstract
Adolescence is a complex period of concurrent mental and physical development that facilitates adult functioning at multiple levels. Despite the growing number of neuroimaging studies of cognitive development in adolescence focusing on regional activation patterns, there remains a paucity of information about the functional interactions across these participating regions that are critical for cognitive functioning, including memory. The current study used structural equation modeling (SEM) to determine how interactions among brain regions critical for memory change over the course of adolescence. We obtained functional MRI in 77 individuals aged 8-16 years, divided into younger (ages 8-10) and older (ages > 11) cohorts, using an incidental encoding memory task to activate the hippocampal formation and associated brain networks, as well as behavioral data on memory function. SEM was performed on the imaging data for four groups (younger girls, younger boys, older girls, and older boys) that were subsequently compared using a stacked model approach. Significant differences were seen between the models for these groups. Younger boys had a predominantly posterior distribution of connections originating in primary visual regions and terminating on multi-modal processing regions. In older boys, there was a relatively greater anterior connection distribution, with increased effective connectivity within association and multi-modal processing regions. Connection patterns in younger girls were similar to those of older boys, with a generally anterior-posterior distributed network among sensory, multi-modal, and limbic regions. In contrast, connections in older girls were widely distributed but relatively weaker. Memory performance increased with age, without a significant difference between the sexes.
These findings suggest a progressive reorganization among brain regions, with a commensurate increase in efficiency of cognitive functioning, from younger to older individuals in both girls and boys, providing insight into the age- and gender-specific processes at play during this critical transition period.
Affiliation(s)
- Jeffrey D Riley
- Department of Neurology, University of California Irvine, USA
- E Elinor Chen
- Department of Anatomy & Neurobiology, University of California Irvine, USA
- Jessica Winsell
- Department of Anatomy & Neurobiology, University of California Irvine, USA
- Laura M Glynn
- Department of Psychology, Chapman University, USA; Department of Psychiatry & Human Behavior, University of California Irvine, USA
- Tallie Z Baram
- Department of Neurology, University of California Irvine, USA; Department of Anatomy & Neurobiology, University of California Irvine, USA; Department of Pediatrics, University of California Irvine, USA
- Curt A Sandman
- Department of Psychiatry & Human Behavior, University of California Irvine, USA
- Steven L Small
- Department of Neurology, University of California Irvine, USA
- Ana Solodkin
- Department of Neurology, University of California Irvine, USA; Department of Anatomy & Neurobiology, University of California Irvine, USA
20
Echeverría-Palacio CM, Uscátegui-Daccarett A, Talero-Gutiérrez C. Auditory, visual, and proprioceptive integration as a substrate of language development. REVISTA DE LA FACULTAD DE MEDICINA 2018. [DOI: 10.15446/revfacmed.v66n3.60490] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Introduction. Language development is a complex process considered an evolutionary marker of the human being, and it can be understood through the contribution of the sensory systems and of the events that occur during critical periods of development. Objective. To review how the integration of auditory, visual, and proprioceptive information takes place and how it is reflected in language development, highlighting the role of social interaction as a context that favors this process. Materials and methods. The MeSH terms "Language Development", "Visual Perception", "Hearing", and "Proprioception" were used in the MEDLINE and Embase databases, limiting the main search to articles written in English, Spanish, and Portuguese. Results. The starting point is auditory information, which, during the first year of life, allows discrimination of the elements of the environment that correspond to language; this is followed by a peak in acquisition and, subsequently, a stage of maximum linguistic discrimination. Visual information provides the correspondence of language in images, the substrate for naming and word comprehension, as well as the interpretation and imitation of the emotional component of gesturing. Proprioceptive information provides feedback on the motor execution patterns used in language production. Conclusion. Studying language development from the standpoint of sensory integration offers new perspectives for addressing and intervening in its deviations.
21
Stevenson RA, Sheffield SW, Butera IM, Gifford RH, Wallace MT. Multisensory Integration in Cochlear Implant Recipients. Ear Hear 2018; 38:521-538. [PMID: 28399064 DOI: 10.1097/aud.0000000000000435] [Citation(s) in RCA: 53] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception, in general, and for speech intelligibility, specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research. The overall pattern of results indicates that many CI recipients show at least some perceptual gain that can be attributable to multisensory integration. The extent of this gain, however, varies based on a number of factors, including age of implantation and specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest performing adults obtain audiovisual integration benefits similar to individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains resultant from audiovisual integration, suggesting a sensitive period in development for the brain networks that subserve these integrative functions, as well as length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception, but also in terms of the behavioral and perceptual benefits of audiovisual processing. 
Importantly, patterns of auditory, visual, and audiovisual responses suggest that underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly in low-level processing tasks such as signal detection, will help to further assess mechanisms of multisensory integration for individuals with hearing loss, both with and without CIs.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, University of Western Ontario, London, Ontario, Canada; Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; Walter Reed National Military Medical Center, Audiology and Speech Pathology Center, Bethesda, Maryland; Vanderbilt Brain Institute, Nashville, Tennessee; Vanderbilt Kennedy Center, Nashville, Tennessee; Department of Psychology, Vanderbilt University, Nashville, Tennessee; Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
22
Santhanam P, Duncan ES, Small SL. Therapy-Induced Plasticity in Chronic Aphasia Is Associated with Behavioral Improvement and Time Since Stroke. Brain Connect 2018; 8:179-188. [PMID: 29338310 PMCID: PMC5899281 DOI: 10.1089/brain.2017.0508] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023] Open
Abstract
Cortical reorganization after stroke is thought to underlie functional improvement. Patterns of reorganization may differ depending on the amount of time since the stroke or the degree of improvement. We investigated these issues in a study of brain connectivity changes with aphasia therapy. Twelve individuals with chronic aphasia participated in a 6-week trial of imitation-based speech therapy. We assessed improvement on a repetition test and analyzed effective connectivity during functional magnetic resonance imaging of a speech observation task before and after therapy. Using structural equation modeling, patient networks were compared with a model derived from healthy controls performing the same task. Independent of the amount of time since the stroke, patients demonstrating behavioral improvement had networks that reorganized to be more similar to controls in two functional pathways in the left hemisphere. Independent of behavioral improvement, patients with remote infarcts (2-7 years poststroke; n = 5) also reorganized to more closely resemble controls in one of these pathways. Patients with far removed injury (>10 years poststroke; n = 3) did not show behavioral improvement and, despite similarities to the normative model and overall network heterogeneity, reorganized to be less similar to controls following therapy in a distinct right-lateralized pathway. Behavioral improvement following aphasia therapy was associated with connectivity more closely approximating that of healthy controls. Individuals who had a stroke more than a decade before testing also showed plasticity, with a few pathways becoming less like controls, possibly representing compensation. Better understanding of these mechanisms may help direct targeted brain stimulation.
Affiliation(s)
- Priya Santhanam
- Department of Neurology, The University of Chicago, Chicago, Illinois
- E. Susan Duncan
- Department of Neurology, University of California, Irvine, Orange, California
- Department of Communication Sciences & Disorders, Louisiana State University, Baton Rouge, Louisiana
- Steven L. Small
- Department of Neurology, The University of Chicago, Chicago, Illinois
- Department of Neurology, University of California, Irvine, Orange, California
- Department of Neurobiology and Behavior, University of California, Irvine, California
23
Li Y, Li P, Yang QX, Eslinger PJ, Sica CT, Karunanayaka P. Lexical-Semantic Search Under Different Covert Verbal Fluency Tasks: An fMRI Study. Front Behav Neurosci 2017; 11:131. [PMID: 28848407 PMCID: PMC5550713 DOI: 10.3389/fnbeh.2017.00131] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2017] [Accepted: 06/30/2017] [Indexed: 11/13/2022] Open
Abstract
Background: Verbal fluency is a measure of cognitive flexibility and word search strategies that is widely used to characterize impaired cognitive function. Despite the wealth of research on identifying and characterizing distinct aspects of verbal fluency, the anatomic and functional substrates of retrieval-related search and post-retrieval control processes still have not been fully elucidated. Methods: Twenty-one native English-speaking, healthy, right-handed, adult volunteers (mean age = 31 years; range = 21-45 years; 9 F) took part in a block-design functional magnetic resonance imaging (fMRI) study of free-recall, covert word generation tasks guided by phonemic (P), semantic-category (C), and context-based fill-in-the-blank sentence completion (S) cues. General linear model (GLM), independent component analysis (ICA), and psychophysiological interaction (PPI) analyses were used to further characterize the neural substrate of verbal fluency as a function of retrieval cue type. Results: Common localized activations across P, C, and S tasks occurred in the bilateral superior and left inferior frontal gyrus, left anterior cingulate cortex, bilateral supplementary motor area (SMA), and left insula. Differential task activations were centered in the occipital, temporal, and parietal regions as well as the thalamus and cerebellum. The context-based fluency task, i.e., the S task, elicited higher differential brain activity in a lateralized frontal-temporal network typically engaged in complex language processing. P and C tasks elicited activation in limited pathways mainly within the left frontal regions. ICA and PPI results of the S task suggested that brain regions distributed across both hemispheres, extending beyond classical language areas, are recruited for lexical-semantic access and retrieval during sentence completion.
Conclusion: Study results support the hypothesis of overlapping, as well as distinct, neural networks for covert word generation when guided by different linguistic cues. The increased demand on word retrieval is met by the concurrent recruitment of classical as well as non-classical language-related brain regions forming a large cognitive neural network. The retrieval-related search and post-retrieval control processes that subserve verbal fluency, therefore, reverberate across distinct functional networks as determined by respective task demands.
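The block-design GLM underlying this kind of fMRI analysis reduces, per voxel, to regressing the measured time course on condition regressors. The toy least-squares sketch below uses simulated data and ignores HRF convolution and nuisance regressors; the block lengths, noise level, and "S-responsive voxel" are illustrative assumptions, not parameters from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_scans = 120

# Boxcar regressors for three covert fluency conditions (P, C, S),
# cycling through 10-scan blocks in a fixed order: P, C, S, rest.
design = np.zeros((n_scans, 3))
for scan in range(n_scans):
    block = (scan // 10) % 4
    if block < 3:
        design[scan, block] = 1.0
X = np.column_stack([design, np.ones(n_scans)])  # add intercept column

# Simulate one voxel that responds mainly to the S (sentence-completion) condition.
true_betas = np.array([0.2, 0.1, 1.5, 0.0])
y = X @ true_betas + rng.normal(scale=0.3, size=n_scans)

# Ordinary least squares estimate of the condition effects for this voxel.
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(betas, 2))
```

In a full analysis this fit is repeated at every voxel, the regressors are convolved with a hemodynamic response function, and contrasts of the estimated betas (e.g., S > P) yield the differential activation maps the abstract reports.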
Affiliation(s)
- Yunqing Li
- Department of Radiology, Pennsylvania State University College of Medicine, Hershey, PA, United States
- Ping Li
- Department of Psychology and Center for Brain, Behavior, and Cognition, Pennsylvania State University, University Park, PA, United States
- Qing X Yang
- Department of Radiology, Pennsylvania State University College of Medicine, Hershey, PA, United States; Department of Neurosurgery, Pennsylvania State University College of Medicine, Hershey, PA, United States
- Paul J Eslinger
- Department of Radiology, Pennsylvania State University College of Medicine, Hershey, PA, United States; Department of Neurology, Pennsylvania State University College of Medicine, Hershey, PA, United States; Department of Neural and Behavioral Sciences, Pennsylvania State University College of Medicine, Hershey, PA, United States
- Chris T Sica
- Department of Radiology, Pennsylvania State University College of Medicine, Hershey, PA, United States
- Prasanna Karunanayaka
- Department of Radiology, Pennsylvania State University College of Medicine, Hershey, PA, United States
24
Brain regions and functional interactions supporting early word recognition in the face of input variability. Proc Natl Acad Sci U S A 2017; 114:7588-7593. [PMID: 28674020 DOI: 10.1073/pnas.1617589114] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Perception and cognition in infants have traditionally been investigated using habituation paradigms, on the assumption that babies' memories in laboratory contexts are best constructed after numerous repetitions of the very same stimulus in the absence of interference. A crucial, yet open, question is how babies deal with stimuli experienced in a fashion similar to everyday learning situations, namely, in the presence of interfering stimuli. To address this question, we used functional near-infrared spectroscopy to test 40 healthy newborns on their ability to encode words presented concurrently with other words. The results revealed a habituation-like hemodynamic response during encoding in the left-frontal region, which was associated with a progressive decrement of the functional connections between this region and the left-temporal, right-temporal, and right-parietal regions. In a recognition test phase, a characteristic neural signature of recognition recruited first the right-frontal region and subsequently the right-parietal regions. Connections originating from the right-temporal regions to these areas emerged when newborns listened to the familiar word in the test phase. These findings suggest a neural specialization at birth characterized by the lateralization of memory functions: the interplay between temporal and left-frontal regions during encoding and between temporo-parietal and right-frontal regions during recognition of speech sounds. Most critically, the results show that newborns are capable of retaining the sound of specific words despite hearing other stimuli during encoding. Thus, habituation designs that include various items may be as effective for studying early memory as repeated presentation of a single word.
25
Smith E, Zhang S, Bennetto L. Temporal synchrony and audiovisual integration of speech and object stimuli in autism. Res Autism Spectr Disord 2017; 39:11-19. [PMID: 30220908 PMCID: PMC6135104 DOI: 10.1016/j.rasd.2017.04.001] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Indexed: 05/08/2023]
Abstract
BACKGROUND Individuals with Autism Spectrum Disorders (ASD) have been shown to have multisensory integration deficits, which may lead to problems perceiving complex, multisensory environments. For example, understanding audiovisual speech requires integration of visual information from the lips and face with auditory information from the voice, and audiovisual speech integration deficits can lead to impaired understanding and comprehension. While there is strong evidence for an audiovisual speech integration impairment in ASD, it is unclear whether this impairment is due to low level perceptual processes that affect all types of audiovisual integration or if it is specific to speech processing. METHOD Here, we measure audiovisual integration of basic speech (i.e., consonant-vowel utterances) and object stimuli (i.e., a bouncing ball) in adolescents with ASD and well-matched controls. We calculate a temporal window of integration (TWI) using each individual's ability to identify which of two videos (one temporally aligned and one misaligned) matches auditory stimuli. The TWI measures tolerance for temporal asynchrony between the auditory and visual streams, and is an important feature of audiovisual perception. RESULTS While controls showed similar tolerance of asynchrony for the simple speech and object stimuli, individuals with ASD did not. Specifically, individuals with ASD showed less tolerance of asynchrony for speech stimuli compared to object stimuli. In individuals with ASD, decreased tolerance for asynchrony in speech stimuli was associated with higher ratings of autism symptom severity. CONCLUSIONS These results suggest that audiovisual perception in ASD may vary for speech and object stimuli beyond what can be accounted for by stimulus complexity.
Affiliation(s)
- Elizabeth Smith
- Department of Clinical and Social Sciences in Psychology, University of Rochester, Rochester, NY, USA
- Shouling Zhang
- Department of Clinical and Social Sciences in Psychology, University of Rochester, Rochester, NY, USA
- Loisa Bennetto
- Department of Clinical and Social Sciences in Psychology, University of Rochester, Rochester, NY, USA
26
Atypical audiovisual word processing in school-age children with a history of specific language impairment: an event-related potential study. J Neurodev Disord 2016; 8:33. [PMID: 27597881 PMCID: PMC5011345 DOI: 10.1186/s11689-016-9168-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Received: 02/24/2016] [Accepted: 08/17/2016] [Indexed: 11/12/2022]
Abstract
Background Visual speech cues influence different aspects of language acquisition. However, whether developmental language disorders may be associated with atypical processing of visual speech is unknown. In this study, we used behavioral and ERP measures to determine whether children with a history of SLI (H-SLI) differ from their age-matched typically developing (TD) peers in the ability to match auditory words with corresponding silent visual articulations. Methods Nineteen 7–13-year-old H-SLI children and 19 age-matched TD children participated in the study. Children first heard a word and then saw a speaker silently articulating a word. In half of trials, the articulated word matched the auditory word (congruent trials), while in another half, it did not (incongruent trials). Children specified whether the auditory and the articulated words matched. We examined ERPs elicited by the onset of visual stimuli (visual P1, N1, and P2) as well as ERPs elicited by the articulatory movements themselves—namely, N400 to incongruent articulations and late positive complex (LPC) to congruent articulations. We also examined whether ERP measures of visual speech processing could predict (1) children’s linguistic skills and (2) the use of visual speech cues when listening to speech-in-noise (SIN). Results H-SLI children were less accurate in matching auditory words with visual articulations. They had a significantly reduced P1 to the talker’s face and a smaller N400 to incongruent articulations. In contrast, congruent articulations elicited LPCs of similar amplitude in both groups of children. The P1 and N400 amplitude was significantly correlated with accuracy enhancement on the SIN task when seeing the talker’s face. Conclusions H-SLI children have poorly defined correspondences between speech sounds and visually observed articulatory movements that produce them.
27
Functional organization of the language network in three- and six-year-old children. Neuropsychologia 2016; 98:24-33. [PMID: 27542319 PMCID: PMC5407357 DOI: 10.1016/j.neuropsychologia.2016.08.014] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Received: 01/29/2016] [Revised: 08/02/2016] [Accepted: 08/14/2016] [Indexed: 11/08/2022]
Abstract
The organization of the language network undergoes continuous changes during development as children learn to understand sentences. In the present study, functional magnetic resonance imaging and behavioral measures were utilized to investigate functional activation and functional connectivity (FC) in three-year-old (3yo) and six-year-old (6yo) children during sentence comprehension. Transitive German sentences varying in word order (subject-initial and object-initial) with case marking were presented auditorily. We selected children from each age group who were capable of processing the subject-initial sentences with above-chance accuracy to ensure that we were tapping real comprehension. Both age groups showed a main effect of word order in the left posterior superior temporal gyrus (pSTG), with greater activation for object-initial compared to subject-initial sentences. However, age differences were observed in the FC between the left pSTG and the left inferior frontal gyrus (IFG). The 6yo group showed stronger FC between the left pSTG and Brodmann area (BA) 44 of the left IFG compared to the 3yo group. For the 3yo group, in turn, the FC between the left pSTG and left BA 45 was stronger than that with left BA 44. Our study demonstrates that while task-related activation was comparable, the small behavioral differences between age groups were reflected in the underlying functional organization, revealing the ongoing development of the neural language network.
Highlights
- We examined functional connectivity of sentence processing in 3- and 6-year-olds.
- Performance-matched age groups activated left pSTG for processing complex syntax.
- 6-year-olds had stronger connectivity between left BA 44 and pSTG than 3-year-olds.
- 3-year-olds had greater connectivity between left BA 45 and pSTG than between BA 44 and pSTG.
- Functional connectivity results could be related to behavioral performance.
28
Tune S, Schlesewsky M, Nagels A, Small SL, Bornkessel-Schlesewsky I. Sentence understanding depends on contextual use of semantic and real world knowledge. Neuroimage 2016; 136:10-25. [PMID: 27177762 PMCID: PMC5120675 DOI: 10.1016/j.neuroimage.2016.05.020] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Received: 08/05/2015] [Revised: 04/05/2016] [Accepted: 05/06/2016] [Indexed: 11/28/2022]
Abstract
Human language allows us to express our thoughts and ideas by combining entities, concepts and actions into multi-event episodes. Yet, the functional neuroanatomy engaged in interpretation of such high-level linguistic input remains poorly understood. Here, we used easy to detect and more subtle "borderline" anomalies to investigate the brain regions and mechanistic principles involved in the use of real-world event knowledge in language comprehension. Overall, the results showed that the processing of sentences in context engages a complex set of bilateral brain regions in the frontal, temporal and inferior parietal lobes. Easy anomalies preferentially engaged lower-order cortical areas adjacent to the primary auditory cortex. In addition, the left supramarginal gyrus and anterior temporal sulcus as well as the right posterior middle temporal gyrus contributed to the processing of easy and borderline anomalies. The observed pattern of results is explained in terms of (i) hierarchical processing along a dorsal-ventral axis and (ii) the assumption of high-order association areas serving as cortical hubs in the convergence of information in a distributed network. Finally, the observed modulation of BOLD signal in prefrontal areas provides support for their role in the implementation of executive control processes.
Affiliation(s)
- Sarah Tune
- Department of Neurology, University of California, Irvine, CA, USA
- Matthias Schlesewsky
- School of Psychology, Social Work and Social Policy, University of South Australia, Adelaide, Australia
- Arne Nagels
- Department of Psychiatry and Psychotherapy, University of Marburg, Marburg, Germany
- Steven L Small
- Department of Neurology, University of California, Irvine, CA, USA
29
Schindler S, Kissler J. People matter: Perceived sender identity modulates cerebral processing of socio-emotional language feedback. Neuroimage 2016; 134:160-169. [PMID: 27039140 DOI: 10.1016/j.neuroimage.2016.03.052] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.9] [Received: 11/23/2015] [Revised: 02/17/2016] [Accepted: 03/21/2016] [Indexed: 11/17/2022]
Affiliation(s)
- Sebastian Schindler
- Department of Psychology, University of Bielefeld, Germany; Center of Excellence Cognitive Interaction Technology (CITEC), University of Bielefeld, Germany
- Johanna Kissler
- Department of Psychology, University of Bielefeld, Germany; Center of Excellence Cognitive Interaction Technology (CITEC), University of Bielefeld, Germany
30
Giannopulu I, Montreynaud V, Watanabe T. Minimalistic toy robot to analyze a scenery of speaker–listener condition in autism. Cogn Process 2016; 17:195-203. [DOI: 10.1007/s10339-016-0752-y] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Received: 03/16/2015] [Accepted: 01/19/2016] [Indexed: 11/30/2022]
31
Knowland VCP, Evans S, Snell C, Rosen S. Visual Speech Perception in Children With Language Learning Impairments. J Speech Lang Hear Res 2016; 59:1-14. [PMID: 26895558 DOI: 10.1044/2015_jslhr-s-14-0269] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Received: 09/29/2014] [Accepted: 07/30/2015] [Indexed: 06/05/2023]
Abstract
PURPOSE The purpose of the study was to assess the ability of children with developmental language learning impairments (LLIs) to use visual speech cues from the talking face. METHOD In this cross-sectional study, 41 typically developing children (mean age: 8 years 0 months, range: 4 years 5 months to 11 years 10 months) and 27 children with diagnosed LLI (mean age: 8 years 10 months, range: 5 years 2 months to 11 years 6 months) completed a silent speechreading task and a speech-in-noise task with and without visual support from the talking face. The speech-in-noise task involved the identification of a target word in a carrier sentence with a single competing speaker as a masker. RESULTS Children in the LLI group showed a deficit in speechreading when compared with their typically developing peers. Beyond the single-word level, this deficit became more apparent in older children. On the speech-in-noise task, a substantial benefit of visual cues was found regardless of age or group membership, although the LLI group showed an overall developmental delay in speech perception. CONCLUSION Although children with LLI were less accurate than their peers on the speechreading and speech-in-noise tasks, both groups were able to make equivalent use of visual cues to boost performance accuracy when listening in noise.
32
Shaw KE, Bortfeld H. Sources of Confusion in Infant Audiovisual Speech Perception Research. Front Psychol 2015; 6:1844. [PMID: 26696919 PMCID: PMC4678229 DOI: 10.3389/fpsyg.2015.01844] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Received: 02/15/2015] [Accepted: 11/13/2015] [Indexed: 12/01/2022]
Abstract
Speech is a multimodal stimulus, with information provided in both the auditory and visual modalities. The resulting audiovisual signal provides relatively stable, tightly correlated cues that support speech perception and processing in a range of contexts. Despite the clear relationship between spoken language and the moving mouth that produces it, there remains considerable disagreement over how sensitive early language learners (infants) are to whether and how sight and sound co-occur. Here we examine sources of this disagreement, with a focus on how comparisons of data obtained using different paradigms and different stimuli may serve to exacerbate misunderstanding.
Affiliation(s)
- Kathleen E. Shaw
- Department of Psychology, University of Connecticut, Storrs, CT, USA
- Heather Bortfeld
- Psychological Sciences, University of California, Merced, Merced, CA, USA
- Haskins Laboratories, New Haven, CT, USA
33
The neural basis of hand gesture comprehension: A meta-analysis of functional magnetic resonance imaging studies. Neurosci Biobehav Rev 2015; 57:88-104. [DOI: 10.1016/j.neubiorev.2015.08.006] [Citation(s) in RCA: 65] [Impact Index Per Article: 6.5] [Received: 04/03/2015] [Revised: 07/13/2015] [Accepted: 08/06/2015] [Indexed: 11/18/2022]
34
Streri A, Coulon M, Marie J, Yeung HH. Developmental Change in Infants' Detection of Visual Faces that Match Auditory Vowels. Infancy 2015. [DOI: 10.1111/infa.12104] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Indexed: 11/30/2022]
Affiliation(s)
- Arlette Streri
- Laboratoire Psychologie de la Perception (UMR 8242), Université Paris Descartes
- Marion Coulon
- Laboratoire Psychologie de la Perception (UMR 8242), Université Paris Descartes
- Julien Marie
- Laboratoire Psychologie de la Perception (UMR 8242), Université Paris Descartes
- H. Henny Yeung
- Laboratoire Psychologie de la Perception (UMR 8242), Université Paris Descartes
- Centre National de la Recherche Scientifique
35
Kaganovich N, Schumaker J, Macias D, Gustafson D. Processing of audiovisually congruent and incongruent speech in school-age children with a history of specific language impairment: a behavioral and event-related potentials study. Dev Sci 2015; 18:751-70. [PMID: 25440407 PMCID: PMC4449323 DOI: 10.1111/desc.12263] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Received: 10/15/2013] [Accepted: 09/07/2014] [Indexed: 11/30/2022]
Abstract
Previous studies indicate that at least some aspects of audiovisual speech perception are impaired in children with specific language impairment (SLI). However, whether audiovisual processing difficulties are also present in older children with a history of this disorder is unknown. By combining electrophysiological and behavioral measures, we examined perception of both audiovisually congruent and audiovisually incongruent speech in school-age children with a history of SLI (H-SLI), their typically developing (TD) peers, and adults. In the first experiment, all participants watched videos of a talker articulating the syllables 'ba', 'da', and 'ga' under three conditions - audiovisual (AV), auditory only (A), and visual only (V). The amplitude of the N1 (but not of the P2) event-related component elicited in the AV condition was significantly reduced compared to the N1 amplitude measured from the sum of the A and V conditions in all groups of participants. Because N1 attenuation to AV speech is thought to index the degree to which facial movements predict the onset of the auditory signal, our findings suggest that this aspect of audiovisual speech perception is mature by mid-childhood and is normal in the H-SLI children. In the second experiment, participants watched videos of audiovisually incongruent syllables created to elicit the so-called McGurk illusion (with an auditory 'pa' dubbed onto a visual articulation of 'ka', the expected percept being 'ta' if audiovisual integration took place). As a group, H-SLI children were significantly more likely than either TD children or adults to hear the McGurk syllable as 'pa' (in agreement with its auditory component) than as 'ka' (in agreement with its visual component), suggesting that susceptibility to the McGurk illusion is reduced in at least some children with a history of SLI. Taken together, the results of the two experiments argue against a global audiovisual integration impairment in children with a history of SLI and suggest that, when present, audiovisual integration difficulties in this population likely stem from a later (non-sensory) stage of processing.
Affiliation(s)
- Natalya Kaganovich
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2038
- Department of Psychological Sciences, Purdue University, 703 Third Street, West Lafayette, IN 47907-2038
- Jennifer Schumaker
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2038
- Danielle Macias
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2038
- Dana Gustafson
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2038
36
Weiss-Croft LJ, Baldeweg T. Maturation of language networks in children: A systematic review of 22 years of functional MRI. Neuroimage 2015. [PMID: 26213350 DOI: 10.1016/j.neuroimage.2015.07.046] [Citation(s) in RCA: 85] [Impact Index Per Article: 8.5] [Indexed: 01/11/2023]
Abstract
Understanding how language networks change during childhood is important for theories of cognitive development and for identifying the neural causes of language impairment. Despite this, there is currently little systematic evidence regarding the typical developmental trajectory for language from the field of neuroimaging. We reviewed functional MRI (fMRI) studies published between 1992 and 2014, and quantified the evidence for age-related changes in localisation and lateralisation of fMRI activation in the language network (excluding the cerebellum and subcortical regions). Although age-related changes differed according to task type and input modality, we identified four consistent findings concerning the typical maturation of the language system. First, activation in core semantic processing regions increases with age. Second, activation in lower-level sensory and motor regions increases with age as activation in higher-level control regions reduces. We suggest that this reflects increased automaticity of language processing as children become more proficient. Third, the posterior cingulate cortex and precuneus (regions associated with the default mode network) show increasing attenuation across childhood and adolescence. Finally, language lateralisation is established by approximately 5 years of age. Small increases in leftward lateralisation are observed in frontal regions, but these are tightly linked to performance.
Affiliation(s)
- Louise J Weiss-Croft
- Cognitive Neuroscience and Neuropsychiatry Section, Developmental Neurosciences Programme, UCL Institute of Child Health, 30 Guilford Street, London WC1N 1EH, UK
- Torsten Baldeweg
- Cognitive Neuroscience and Neuropsychiatry Section, Developmental Neurosciences Programme, UCL Institute of Child Health, 30 Guilford Street, London WC1N 1EH, UK
37
Turner AC, McIntosh DN, Moody EJ. Don't Listen With Your Mouth Full: The Role of Facial Motor Action in Visual Speech Perception. Lang Speech 2015; 58:267-278. [PMID: 26677646 DOI: 10.1177/0023830914542305] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Indexed: 06/05/2023]
Abstract
Theories of speech perception agree that visual input enhances the understanding of speech but disagree on whether physically mimicking the speaker improves understanding. This study investigated whether facial motor mimicry facilitates visual speech perception by testing whether blocking facial motor action impairs speechreading performance. Thirty-five typically developing children (19 boys; 16 girls; M age = 7 years) completed the Revised Craig Lipreading Inventory under two conditions. While observing silent videos of 15 words being spoken, participants either held a tongue depressor horizontally with their teeth (blocking facial motor action) or squeezed a ball with one hand (allowing facial motor action). As hypothesized, blocking motor action resulted in fewer correctly understood words than the control condition. The results suggest that facial mimicry or other forms of facial action support visual speech perception in children. Future studies on the impact of motor action on the typical and atypical development of speech perception are warranted.
38
Guàrdia-Olmos J, Peró-Cebollero M, Zarabozo-Hurtado D, González-Garrido AA, Gudayol-Ferré E. Effective connectivity of visual word recognition and homophone orthographic errors. Front Psychol 2015; 6:640. [PMID: 26042070 PMCID: PMC4438596 DOI: 10.3389/fpsyg.2015.00640] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Received: 01/23/2015] [Accepted: 05/01/2015] [Indexed: 11/13/2022]
Abstract
The study of orthographic errors in a transparent language like Spanish is an important topic in relation to writing acquisition. The development of neuroimaging techniques, particularly functional magnetic resonance imaging (fMRI), has enabled the study of such relationships between brain areas. The main objective of the present study was to explore the patterns of effective connectivity involved in processing pseudohomophone orthographic errors among subjects with high and low spelling skills. Two groups of 12 Mexican subjects each, matched by age, were formed based on their results in a series of ad hoc out-of-scanner spelling tests: a high spelling skills (HSS) group and a low spelling skills (LSS) group. During the fMRI session, two experimental tasks were applied (a spelling recognition task and a visuoperceptual recognition task). Regions of interest and their signal values were obtained for both tasks. Based on these values, structural equation models (SEMs) were obtained for each spelling-competence group (HSS and LSS) and task through maximum likelihood estimation, and the model with the best fit was chosen in each case. Likewise, dynamic causal models (DCMs) were estimated for all conditions across tasks and groups. The HSS group's SEM results suggest that, in the spelling recognition task, the right middle temporal gyrus and, to a lesser extent, the left parahippocampal gyrus receive most of the significant effects, whereas the DCM results in the visuoperceptual recognition task show less complex effects, still congruent with the previous results, with an important role for several areas. In general, these results are consistent with the major findings of partial studies of linguistic activities, but they are the first analyses of statistical effective brain connectivity in transparent languages.
Affiliation(s)
- Joan Guàrdia-Olmos
- Facultat de Psicologia, Institut de Recerca en Cognició, Cervell i Conducta, Universitat de Barcelona, Barcelona, Spain
- Department of Methodology of Behavioral Sciences, School of Psychology, University of Barcelona, Barcelona, Spain
- Maribel Peró-Cebollero
- Facultat de Psicologia, Institut de Recerca en Cognició, Cervell i Conducta, Universitat de Barcelona, Barcelona, Spain
39
Kaganovich N, Schumaker J. Audiovisual integration for speech during mid-childhood: electrophysiological evidence. Brain Lang 2014; 139:36-48. [PMID: 25463815 PMCID: PMC4363284 DOI: 10.1016/j.bandl.2014.09.011] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Received: 06/22/2014] [Revised: 09/28/2014] [Accepted: 09/30/2014] [Indexed: 05/05/2023]
Abstract
Previous studies have demonstrated that the presence of visual speech cues reduces the amplitude and latency of the N1 and P2 event-related potential (ERP) components elicited by speech stimuli. However, the developmental trajectory of this effect is not yet fully mapped. We examined ERP responses to auditory, visual, and audiovisual speech in two groups of school-age children (7-8-year-olds and 10-11-year-olds) and in adults. Audiovisual speech led to the attenuation of the N1 and P2 components in all groups of participants, suggesting that the neural mechanisms underlying these effects are functional by early school years. Additionally, while the reduction in N1 was largest over the right scalp, the P2 attenuation was largest over the left and midline scalp. The difference in the hemispheric distribution of the N1 and P2 attenuation supports the idea that these components index at least somewhat disparate neural processes within the context of audiovisual speech perception.
Affiliation(s)
- Natalya Kaganovich
- Department of Speech, Language, and Hearing Sciences, Purdue University, Lyles Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907-2038, United States; Department of Psychological Sciences, Purdue University, 703 Third Street, West Lafayette, IN 47907-2038, United States
- Jennifer Schumaker
- Department of Speech, Language, and Hearing Sciences, Purdue University, Lyles Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907-2038, United States
| |
40
Bernstein LE, Liebenthal E. Neural pathways for visual speech perception. Front Neurosci 2014; 8:386. [PMID: 25520611 PMCID: PMC4248808 DOI: 10.3389/fnins.2014.00386] [Citation(s) in RCA: 88] [Impact Index Per Article: 8.0] [Received: 07/25/2014] [Accepted: 11/10/2014] [Indexed: 12/03/2022]
Abstract
This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.
Affiliation(s)
- Lynne E Bernstein
- Department of Speech and Hearing Sciences, George Washington University, Washington, DC, USA
- Einat Liebenthal
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA; Department of Psychiatry, Brigham and Women's Hospital, Boston, MA, USA
| |
41
Dissociating Cortical Activity during Processing of Native and Non-Native Audiovisual Speech from Early to Late Infancy. Brain Sci 2014; 4:471-87. [PMID: 25116572] [PMCID: PMC4194034] [DOI: 10.3390/brainsci4030471]
Abstract
Initially, infants are capable of discriminating phonetic contrasts across the world's languages. Starting between seven and ten months of age, they gradually lose this ability through a process of perceptual narrowing. Although traditionally investigated with isolated speech sounds, such narrowing occurs in a variety of perceptual domains (e.g., faces, visual speech). Thus far, tracking the developmental trajectory of this tuning process has been focused primarily on auditory speech alone, and generally using isolated sounds. But infants learn from speech produced by people talking to them, meaning they learn from a complex audiovisual signal. Here, we use near-infrared spectroscopy to measure blood concentration changes in the bilateral temporal cortices of infants in three different age groups: 3-to-6 months, 7-to-10 months, and 11-to-14 months. Critically, all three groups of infants were tested with continuous audiovisual speech in both their native and another, unfamiliar language. We found that at each age range, infants showed different patterns of cortical activity in response to the native and non-native stimuli. Infants in the youngest group showed bilateral cortical activity that was greater overall in response to non-native relative to native speech; the oldest group showed left-lateralized activity in response to native relative to non-native speech. These results highlight perceptual tuning as a dynamic process that happens across modalities and at different levels of stimulus complexity.
42
Guellaï B, Streri A, Yeung HH. The development of sensorimotor influences in the audiovisual speech domain: some critical questions. Front Psychol 2014; 5:812. [PMID: 25147528] [PMCID: PMC4123602] [DOI: 10.3389/fpsyg.2014.00812]
Abstract
Speech researchers have long been interested in how auditory and visual speech signals are integrated, and recent work has revived interest in the role of speech production with respect to this process. Here, we discuss these issues from a developmental perspective. Because speech perception abilities typically outstrip speech production abilities in infancy and childhood, it is unclear how speech-like movements could influence audiovisual speech perception in development. While work on this question is still in its preliminary stages, there is nevertheless increasing evidence that sensorimotor processes (defined here as any motor or proprioceptive process related to orofacial movements) affect developmental audiovisual speech processing. We suggest three areas on which to focus in future research: (i) the relation between audiovisual speech perception and sensorimotor processes at birth, (ii) the pathways through which sensorimotor processes interact with audiovisual speech processing in infancy, and (iii) developmental change in sensorimotor pathways as speech production emerges in childhood.
Affiliation(s)
- Bahia Guellaï
- Laboratoire Ethologie, Cognition, Développement, Université Paris Ouest Nanterre La Défense, Nanterre, France
- Arlette Streri
- CNRS, Laboratoire Psychologie de la Perception, UMR 8242, Paris, France
- H. Henny Yeung
- CNRS, Laboratoire Psychologie de la Perception, UMR 8242, Paris, France
- Université Paris Descartes, Paris Sorbonne Cité, Paris, France
43
Kaganovich N, Schumaker J, Leonard LB, Gustafson D, Macias D. Children with a history of SLI show reduced sensitivity to audiovisual temporal asynchrony: an ERP study. J Speech Lang Hear Res 2014; 57:1480-502. [PMID: 24686922] [PMCID: PMC4266431] [DOI: 10.1044/2014_jslhr-l-13-0192]
Abstract
PURPOSE The authors examined whether school-age children with a history of specific language impairment (H-SLI), their peers with typical development (TD), and adults differ in sensitivity to audiovisual temporal asynchrony and whether such difference stems from the sensory encoding of audiovisual information. METHOD Fifteen H-SLI children, 15 TD children, and 15 adults judged whether a flashed explosion-shaped figure and a 2-kHz pure tone occurred simultaneously. The stimuli were presented at 0-, 100-, 200-, 300-, 400-, and 500-ms temporal offsets. This task was combined with EEG recordings. RESULTS H-SLI children were profoundly less sensitive to temporal separations between auditory and visual modalities compared with their TD peers. Those H-SLI children who performed better at simultaneity judgment also had higher language aptitude. TD children were less accurate than adults, revealing a remarkably prolonged developmental course of the audiovisual temporal discrimination. Analysis of early event-related potential components suggested that poor sensory encoding was not a key factor in H-SLI children's reduced sensitivity to audiovisual asynchrony. CONCLUSIONS Audiovisual temporal discrimination is impaired in H-SLI children and is still immature during mid-childhood in TD children. The present findings highlight the need for further evaluation of the role of atypical audiovisual processing in the development of SLI.
Affiliation(s)
- Natalya Kaganovich
- Department of Speech, Language, and Hearing Sciences, Purdue University, 500 Oval Drive, West Lafayette, IN 47907-2038
- Department of Psychological Sciences, Purdue University, 703 Third Street, West Lafayette, IN 47907-2038
- Jennifer Schumaker
- Department of Speech, Language, and Hearing Sciences, Purdue University, 500 Oval Drive, West Lafayette, IN 47907-2038
- Laurence B. Leonard
- Department of Speech, Language, and Hearing Sciences, Purdue University, 500 Oval Drive, West Lafayette, IN 47907-2038
- Dana Gustafson
- Department of Speech, Language, and Hearing Sciences, Purdue University, 500 Oval Drive, West Lafayette, IN 47907-2038
- Danielle Macias
- Department of Speech, Language, and Hearing Sciences, Purdue University, 500 Oval Drive, West Lafayette, IN 47907-2038
44
van Geemen K, Herbet G, Moritz-Gasser S, Duffau H. Limited plastic potential of the left ventral premotor cortex in speech articulation: evidence from intraoperative awake mapping in glioma patients. Hum Brain Mapp 2014; 35:1587-96. [PMID: 23616288] [PMCID: PMC6869841] [DOI: 10.1002/hbm.22275]
Abstract
OBJECTIVES Despite previous lesional and functional neuroimaging studies, the actual role of the left ventral premotor cortex (vPMC), i.e., the lateral part of the precentral gyrus, is still poorly known. EXPERIMENTAL DESIGN We report a series of eight patients with a glioma involving the left vPMC, who underwent awake surgery with intraoperative cortical and subcortical language mapping. The function of the vPMC, its subcortical connections, and its reorganization potential are investigated in the light of surgical findings and language outcome after resection. PRINCIPAL OBSERVATIONS Electrostimulation of both the vPMC and subcortical white matter tract underneath the vPMC, that is, the anterior segment of the lateral part of the superior longitudinal fascicle (SLF), induced speech production disturbances with anarthria in all cases. Moreover, although some degrees of redistribution of the vPMC have been found in four patients, allowing its partial resection with no permanent speech disorders, this area was nonetheless still detected more medially in the precentral gyrus in the eight patients, despite its invasion by the glioma. Moreover, a direct connection of the vPMC with the SLF was preserved in all cases. CONCLUSIONS Our original data suggest that the vPMC plays a crucial role in the speech production network and that its plastic potential is limited. We propose that this limitation is due to an anatomical constraint, namely the necessity for the left vPMC to remain connected to the lateral SLF. Beyond fundamental implications, such knowledge may have clinical applications, especially in surgery for tumors involving this cortico-subcortical circuit.
Affiliation(s)
- Kim van Geemen
- Department of Neurosurgery, Gui de Chauliac Hospital, Montpellier University Medical Centre, Montpellier, France
45
Knowland VCP, Mercure E, Karmiloff-Smith A, Dick F, Thomas MSC. Audio-visual speech perception: a developmental ERP investigation. Dev Sci 2014; 17:110-24. [PMID: 24176002] [PMCID: PMC3995015] [DOI: 10.1111/desc.12098]
Abstract
Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development.
Affiliation(s)
- Victoria CP Knowland
- School of Health Sciences, City University, London, UK
- Department of Psychological Sciences, Birkbeck College, London, UK
- Fred Dick
- Department of Psychological Sciences, Birkbeck College, London, UK
46
Abstract
Following stroke, patients are commonly left with debilitating motor and speech impairments. This article reviews the state of the art in neurological repair for stroke and proposes a new model for the future. We suggest that stroke treatment, from the time of the ictus itself to living with the consequences, must be fundamentally neurological, from limiting the extent of injury at the outset to repairing the consequent damage. Our model links brain and behaviour by targeting brain circuits, and we illustrate the model through action observation treatment, which aims to enhance brain network connectivity. The model is based on the assumptions that the mechanisms of neural repair inherently involve cellular and circuit plasticity, that brain plasticity is a synaptic phenomenon that is largely stimulus-dependent, and that brain repair requires both physical and behavioural interventions that are tailored to reorganize specific brain circuits. We review current approaches to brain repair after stroke, present our new model, and discuss the biological foundations, rationales, and data that support our novel approach to upper-extremity and language rehabilitation. We believe that by enhancing plasticity at the level of brain network interactions, this neurological model for brain repair could ultimately lead to a cure for stroke.
Affiliation(s)
- Steven L Small
- Department of Neurology, University of California, Irvine, 200 Manchester Avenue, Suite 206, Orange, CA 92697, USA
47
Li Y, Yang J, Suzanne Scherf K, Li P. Two faces, two languages: an fMRI study of bilingual picture naming. Brain Lang 2013; 127:452-462. [PMID: 24129199] [DOI: 10.1016/j.bandl.2013.09.005]
Abstract
This fMRI study explores how nonlinguistic cues modulate lexical activation in the bilingual brain. We examined the influence of face race on bilingual language production in a picture-naming paradigm. Chinese-English bilinguals were presented with pictures of objects and images of faces (Asian or Caucasian). Participants named the picture in their first or second language (Chinese or English) in separate blocks. Face race and naming language were either congruent (e.g., naming in Chinese when seeing an Asian face) or incongruent (e.g., naming in English when seeing an Asian face). Our results revealed that face cues facilitate naming when the socio-cultural identity of the face is congruent with the naming language. The congruence effects are reflected as effective integration of lexical and facial cues in key brain regions including IFG, MFG, ACC, and caudate. Implications of the findings in light of theories of language processing and cultural priming are discussed.
Affiliation(s)
- Yunqing Li
- Department of Psychology and Center for Brain, Behavior, and Cognition, Pennsylvania State University, University Park, PA 16802, USA
48
Skrandies W. Electrophysiological correlates of connotative meaning in healthy children. Brain Topogr 2013; 27:271-8. [PMID: 23974725] [DOI: 10.1007/s10548-013-0309-7]
Abstract
The affective, connotative meaning of words can be statistically quantified by the semantic differential technique. Words that are located clearly on one of the three dimensions called "Evaluation", "Potency", and "Activity" were used as visual stimuli in a topographic event-related potential (ERP) study. Stimuli had been statistically defined in a group of 249 children (Skrandies, Jpn Psychol Res 53: 65-76, 2011). We investigated electrical brain activity in 19 healthy children with normal intelligence and reading skills between 11 and 15 years of age. Words that belonged to different semantic classes were presented at random on a monitor, and EEG was measured from 30 channels. Evoked potentials were computed offline for each semantic class. In the ERP data we observed significant effects of word class on component latency, field strength, and topography. As with adult subjects, such effects occurred at a short latency of about 115 ms after word presentation. The language-evoked components in children were similar but not identical to those reported previously for various groups of adults. Our data show that visually evoked brain activity is modulated by the connotative meaning of the stimuli at early processing stages, not only in adults but also in children.
Affiliation(s)
- Wolfgang Skrandies
- Institute of Physiology, Justus-Liebig University, Aulweg 129, 35392 Giessen, Germany
49
Abstract
With a focus on receptive language, we examine the neurobiological evidence for the interdependence of receptive and expressive language processes. While we agree that there is compelling evidence for such interdependence, we suggest that Pickering & Garrod's (P&G's) account would be enhanced by considering more-specific situations in which their model does, and does not, apply.
50
Fava E, Hull R, Baumbauer K, Bortfeld H. Hemodynamic responses to speech and music in preverbal infants. Child Neuropsychol 2013; 20:430-48. [PMID: 23777481] [DOI: 10.1080/09297049.2013.803524]
Abstract
Numerous studies have provided clues about the ontogeny of lateralization of auditory processing in humans, but most have employed specific subtypes of stimuli and/or have assessed responses in discrete temporal windows. The present study used near-infrared spectroscopy (NIRS) to establish changes in hemodynamic activity in the neocortex of preverbal infants (aged 4-11 months) while they were exposed to two distinct types of complex auditory stimuli (full sentences and musical phrases). Measurements were taken from bilateral temporal regions, including both anterior and posterior superior temporal gyri. When the infant sample was treated as a homogenous group, no significant effects emerged for stimulus type. However, when infants' hemodynamic responses were categorized according to their overall changes in volume, two very clear neurophysiological patterns emerged. A high-responder group showed a pattern of early and increasing activation, primarily in the left hemisphere, similar to that observed in comparable studies with adults. In contrast, a low-responder group showed a pattern of gradual decreases in activation over time. Although age did track with responder type, no significant differences between these groups emerged for stimulus type, suggesting that the high- versus low-responder characterization generalizes across classes of auditory stimuli. These results highlight a new way to conceptualize the variable cortical blood flow patterns that are frequently observed across infants and stimuli, with hemodynamic response volumes potentially serving as an early indicator of developmental changes in auditory-processing sensitivity.
Affiliation(s)
- Eswen Fava
- Department of Psychology, University of Massachusetts Amherst, Amherst, MA, USA