1. Deroche MLD, Wolfe J, Neumann S, Manning J, Hanna L, Towler W, Wilson C, Bien AG, Miller S, Schafer E, Gemignani J, Alemi R, Muthuraman M, Koirala N, Gracco VL. Cross-modal plasticity in children with cochlear implant: converging evidence from EEG and functional near-infrared spectroscopy. Brain Commun 2024; 6:fcae175. PMID: 38846536; PMCID: PMC11154148; DOI: 10.1093/braincomms/fcae175.
Abstract
Over the first years of life, the brain undergoes substantial organization in response to environmental stimulation. In a silent world, it may promote vision by (i) recruiting resources from the auditory cortex and (ii) making the visual cortex more efficient. It is unclear when such changes occur and how adaptive they are, questions that children with cochlear implants can help address. Here, we examined children aged 7-18 years: 50 had cochlear implants, with delayed or age-appropriate language abilities, and 25 had typical hearing and language. High-density electroencephalography and functional near-infrared spectroscopy were used to evaluate cortical responses to a low-level visual task. Evidence for a 'weaker visual cortex response' and 'less synchronized or less inhibitory activity of auditory association areas' in the implanted children with language delays suggests that cross-modal reorganization can be maladaptive and does not necessarily strengthen the dominant visual sense.
Affiliation(s)
- Mickael L D Deroche: Department of Psychology, Concordia University, Montreal, Quebec, Canada, H4B 1R6
- Jace Wolfe: Hearts for Hearing Foundation, Oklahoma City, OK 73120, USA
- Sara Neumann: Hearts for Hearing Foundation, Oklahoma City, OK 73120, USA
- Jacy Manning: Hearts for Hearing Foundation, Oklahoma City, OK 73120, USA
- Lindsay Hanna: Hearts for Hearing Foundation, Oklahoma City, OK 73120, USA
- Will Towler: Hearts for Hearing Foundation, Oklahoma City, OK 73120, USA
- Caleb Wilson: Department of Otolaryngology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Alexander G Bien: Department of Otolaryngology, University of Oklahoma Health Sciences Center, Oklahoma City, OK 73104, USA
- Sharon Miller: Department of Audiology & Speech-Language Pathology, University of North Texas, Denton, TX 76201, USA
- Erin Schafer: Department of Audiology & Speech-Language Pathology, University of North Texas, Denton, TX 76201, USA
- Jessica Gemignani: Department of Developmental and Social Psychology, University of Padova, 35131 Padua, Italy
- Razieh Alemi: Department of Psychology, Concordia University, Montreal, Quebec, Canada, H4B 1R6
- Muthuraman Muthuraman: Section of Neural Engineering with Signal Analytics and Artificial Intelligence, Department of Neurology, University Hospital Würzburg, 97080 Würzburg, Germany
2. Jafari Z, Kolb BE, Mohajerani MH. A systematic review of altered resting-state networks in early deafness and implications for cochlear implantation outcomes. Eur J Neurosci 2024; 59:2596-2615. PMID: 38441248; DOI: 10.1111/ejn.16295.
Abstract
Auditory deprivation following congenital/pre-lingual deafness (C/PD) can drastically affect brain development and its functional organisation. This systematic review intends to extend current knowledge of the impact of C/PD and deafness duration on brain resting-state networks (RSNs), review changes in RSNs and spoken language outcomes post-cochlear implant (CI) and draw conclusions for future research. The systematic literature search followed the PRISMA guideline. Two independent reviewers searched four electronic databases using combined keywords: 'auditory deprivation', 'congenital/prelingual deafness', 'resting-state functional connectivity' (RSFC), 'resting-state fMRI' and 'cochlear implant'. Seventeen studies (16 cross-sectional and one longitudinal) met the inclusion criteria. Using the Crowe Critical Appraisal Tool, the publications' quality was rated between 65.0% and 92.5% (mean: 84.10%), with ≥80% in 13 out of 17 studies. A few studies were deficient in sampling and/or ethical considerations. According to the findings, early auditory deprivation results in enhanced RSFC between the auditory network and brain networks involved in non-verbal communication, and high levels of spontaneous neural activity in the auditory cortex before CI indicate that auditory cortical areas have been occupied by other sensory modalities (cross-modal plasticity), which predicts sub-optimal CI outcomes. Overall, current evidence supports the idea that, beyond intramodal and cross-modal plasticity, adaptation across the entire brain following auditory deprivation contributes to spoken language development and compensatory behaviours.
Affiliation(s)
- Zahra Jafari: School of Communication Sciences and Disorders (SCSD), Dalhousie University, Halifax, Nova Scotia, Canada; Department of Psychology and Neuroscience, Dalhousie University, Halifax, Nova Scotia, Canada
- Bryan E Kolb: Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Majid H Mohajerani: Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada; Douglas Research Centre, Department of Psychiatry, McGill University, Montreal, Québec, Canada
3. Alemi R, Wolfe J, Neumann S, Manning J, Hanna L, Towler W, Wilson C, Bien A, Miller S, Schafer E, Gemignani J, Koirala N, Gracco VL, Deroche M. Motor processing in children with cochlear implants as assessed by functional near-infrared spectroscopy. Percept Mot Skills 2024; 131:74-105. PMID: 37977135; PMCID: PMC10863375; DOI: 10.1177/00315125231213167.
Abstract
Auditory-motor and visual-motor networks are often coupled in daily activities, such as when listening to music and dancing, but these networks are known to be highly malleable as a function of sensory input. Thus, congenital deafness may modify neural activities within the connections between the motor, auditory, and visual cortices. Here, we investigated whether the cortical responses of children with cochlear implants (CI) to a simple and repetitive motor task would differ from those of children with typical hearing (TH), and we sought to understand whether this response related to their language development. Participants were 75 school-aged children, including 50 with CI (with varying language abilities) and 25 controls with TH. We used functional near-infrared spectroscopy (fNIRS) to record cortical responses over the whole brain as children squeezed the back triggers of a joystick that vibrated or not with the squeeze. Motor cortex activity was reflected by an increase in oxygenated hemoglobin concentration (HbO) and a decrease in deoxygenated hemoglobin concentration (HbR) in all children, irrespective of their hearing status. Unexpectedly, the visual cortex (a supposedly task-irrelevant region) was deactivated in this task, particularly for children with CI who had good language skills compared to those with CI who had language delays. The presence or absence of vibrotactile feedback made no difference in cortical activation. These findings support the potential of fNIRS to examine cognitive functions related to language in children with CI.
Affiliation(s)
- Razieh Alemi: Department of Psychology, Concordia University, Montreal, QC, Canada
- Jace Wolfe: Oberkotter Foundation, Oklahoma City, OK, USA
- Sara Neumann: Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Jacy Manning: Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Lindsay Hanna: Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Will Towler: Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Caleb Wilson: Department of Otolaryngology-Head & Neck Surgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
- Alexander Bien: Department of Otolaryngology-Head & Neck Surgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
- Sharon Miller: Department of Audiology & Speech-Language Pathology, University of North Texas, Denton, TX, USA
- Erin Schafer: Department of Audiology & Speech-Language Pathology, University of North Texas, Denton, TX, USA
- Jessica Gemignani: Department of Developmental and Social Psychology, University of Padua, Padova, Italy
- Mickael Deroche: Department of Psychology, Concordia University, Montreal, QC, Canada
4. McMurray B. The acquisition of speech categories: beyond perceptual narrowing, beyond unsupervised learning and beyond infancy. Lang Cogn Neurosci 2022; 38:419-445. PMID: 38425732; PMCID: PMC10904032; DOI: 10.1080/23273798.2022.2105367.
Abstract
An early achievement in language is carving a variable acoustic space into categories. The canonical story is that infants accomplish this by the second year, when only unsupervised learning is plausible. I challenge this view, synthesizing five lines of developmental, phonetic and computational work. First, unsupervised learning may be insufficient given the statistics of speech (including infant-directed speech). Second, evidence that infants "have" speech categories rests on tenuous methodological assumptions. Third, the fact that the ecology of the learning environment is unsupervised does not rule out more powerful error-driven learning mechanisms. Fourth, several implicit supervisory signals are available to older infants. Finally, development is protracted through adolescence, enabling richer avenues for learning. Infancy may be a time of organizing the auditory space, but true categorization only arises via complex developmental cascades later in life. This has implications for critical periods, second language acquisition, and our basic framing of speech perception.
Affiliation(s)
- Bob McMurray: Dept. of Psychological and Brain Sciences, Dept. of Communication Sciences and Disorders, Dept. of Linguistics, University of Iowa and Haskins Laboratories
5. Weerathunge HR, Alzamendi GA, Cler GJ, Guenther FH, Stepp CE, Zañartu M. LaDIVA: a neurocomputational model providing laryngeal motor control for speech acquisition and production. PLoS Comput Biol 2022; 18:e1010159. PMID: 35737706; PMCID: PMC9258861; DOI: 10.1371/journal.pcbi.1010159.
Abstract
Many voice disorders are the result of intricate neural and/or biomechanical impairments that are poorly understood. The limited knowledge of their etiological and pathophysiological mechanisms hampers effective clinical management. Behavioral studies have been used concurrently with computational models to better understand typical and pathological laryngeal motor control. Thus far, however, a unified computational framework that quantitatively integrates physiologically relevant models of phonation with the neural control of speech has not been developed. Here, we introduce LaDIVA, a novel neurocomputational model with physiologically based laryngeal motor control. We combined the DIVA model (an established neural network model of speech motor control) with the extended body-cover model (a physics-based vocal fold model). The resulting integrated model, LaDIVA, was validated by comparing its simulations with behavioral responses to perturbations of auditory vocal fundamental frequency (fo) feedback in adults with typical speech. LaDIVA demonstrated the capability to simulate different modes of laryngeal motor control, ranging from short-term (i.e., reflexive) to long-term (i.e., adaptive) auditory feedback paradigms, to generating prosodic contours in speech. Simulations showed that LaDIVA's laryngeal motor control displays properties of motor equivalence, i.e., LaDIVA could robustly generate compensatory responses to reflexive vocal fo perturbations with varying initial laryngeal muscle activation levels leading to the same output. The model can also generate prosodic contours for studying laryngeal motor control in running speech. LaDIVA can expand the understanding of the physiology of human phonation and enable, for the first time, investigation of the causal effects of neural motor control on the fine structure of the vocal signal.
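The reflexive paradigm this abstract refers to can be caricatured in a few lines. The sketch below is not LaDIVA itself (no vocal fold physics, no DIVA architecture): it is a toy integral controller that hears its own fo shifted upward and opposes the perceived error step by step. Unlike this toy, which cancels the shift completely at steady state, physiological compensation is typically only partial.

```python
# Toy reflexive pitch-perturbation loop (illustrative, not the LaDIVA model).
target_fo = 120.0      # intended fo of the motor plan (Hz)
perturbation = 10.0    # upward shift applied to auditory feedback (Hz)
gain = 0.3             # fraction of the heard error corrected per step

produced = [target_fo]
for _ in range(30):
    heard = produced[-1] + perturbation           # shifted auditory feedback
    error = heard - target_fo                     # perceived mismatch
    produced.append(produced[-1] - gain * error)  # opposing (compensatory) response

print(round(produced[1], 1), round(produced[-1], 1))  # prints: 117.0 110.0
```

The produced fo drifts downward, opposite to the upward feedback shift, which is the qualitative signature of a reflexive compensatory response.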
Affiliation(s)
- Hasini R. Weerathunge: Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America; Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts, United States of America
- Gabriel A. Alzamendi: Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso, Chile; Institute for Research and Development on Bioengineering and Bioinformatics (IBB), CONICET-UNER, Oro Verde, Argentina
- Gabriel J. Cler: Department of Speech & Hearing Sciences, University of Washington, Seattle, Washington, United States of America
- Frank H. Guenther: Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America; Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts, United States of America
- Cara E. Stepp: Department of Biomedical Engineering, Boston University, Boston, Massachusetts, United States of America; Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts, United States of America; Department of Otolaryngology-Head and Neck Surgery, Boston University School of Medicine, Boston, Massachusetts, United States of America
- Matías Zañartu: Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso, Chile
6. Feldman NH, Goldwater S, Dupoux E, Schatz T. Do infants really learn phonetic categories? Open Mind 2022; 5:113-131. PMID: 35024527; PMCID: PMC8746127; DOI: 10.1162/opmi_a_00046.
Abstract
Early changes in infants’ ability to perceive native and nonnative speech sound contrasts are typically attributed to their developing knowledge of phonetic categories. We critically examine this hypothesis and argue that there is little direct evidence of category knowledge in infancy. We then propose an alternative account in which infants’ perception changes because they are learning a perceptual space that is appropriate to represent speech, without yet carving up that space into phonetic categories. If correct, this new account has substantial implications for understanding early language development.
Affiliation(s)
- Naomi H Feldman: Department of Linguistics and UMIACS, University of Maryland, College Park, MD, USA
- Emmanuel Dupoux: Cognitive Machine Learning (ENS - EHESS - PSL Research University - CNRS - INRIA), Paris, France
- Thomas Schatz: Department of Linguistics and UMIACS, University of Maryland, College Park, MD, USA
7. Guo P, Lang S, Jiang M, Wang Y, Zeng Z, Wen Z, Liu Y, Chen BT. Alterations of regional homogeneity in children with congenital sensorineural hearing loss: a resting-state fMRI study. Front Neurosci 2021; 15:678910. PMID: 34690668; PMCID: PMC8526795; DOI: 10.3389/fnins.2021.678910.
Abstract
Background: Brain functional alterations have been observed in children with congenital sensorineural hearing loss (CSNHL). The purpose of this study was to assess the alterations of regional homogeneity in children with CSNHL. Methods: Forty-five children with CSNHL and 20 healthy controls were enrolled into this study. Brain resting-state functional MRI (rs-fMRI) for regional homogeneity, including the Kendall coefficient of concordance (KCC-ReHo) and the coherence-based parameter (Cohe-ReHo), was analyzed and compared between the two groups, i.e., the CSNHL group and the healthy control group. Results: Compared to the healthy controls, children with CSNHL showed increased Cohe-ReHo values in the left calcarine cortex and decreased values in the bilateral ventrolateral prefrontal cortex (VLPFC) and right dorsolateral prefrontal cortex (DLPFC). Children with CSNHL also had increased KCC-ReHo values in the left calcarine cortex, cuneus, precentral gyrus, and right superior parietal lobule (SPL) and decreased values in the left VLPFC and right DLPFC. Correlations were detected between the ReHo values and age of the children with CSNHL. There were positive correlations between ReHo values in the precuneus/prefrontal cortex and age (p < 0.05). There were negative correlations between ReHo values in the bilateral temporal lobes, fusiform gyrus, parahippocampal gyrus and precentral gyrus, and age (p < 0.05). Conclusion: Children with CSNHL had ReHo alterations in the auditory, visual, motor, and other related brain cortices as compared to the healthy controls with normal hearing. There were significant correlations between ReHo values and age in brain regions involved in information integration and processing. Our study showed promising data using rs-fMRI ReHo parameters to assess brain functional alterations in children with CSNHL.
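For readers unfamiliar with the metric, the KCC variant of ReHo is simply Kendall's coefficient of concordance (W) computed over the time series of a voxel and its neighbours. A minimal NumPy sketch (the 27-series neighbourhood below is illustrative; real pipelines add preprocessing and tie handling):

```python
import numpy as np

def kcc_reho(ts):
    """Kendall's coefficient of concordance (W) across K time series.

    ts : array of shape (K, n) -- e.g. one voxel plus its 26 neighbours
         over n timepoints. Returns W in [0, 1]; 1 means the series rise
         and fall in perfect rank synchrony.
    Note: simple argsort ranking, no tie correction (ties are essentially
    impossible for continuous BOLD data).
    """
    K, n = ts.shape
    ranks = np.argsort(np.argsort(ts, axis=1), axis=1) + 1  # rank over time
    R = ranks.sum(axis=0)                   # summed ranks per timepoint
    S = ((R - R.mean()) ** 2).sum()         # spread of the summed ranks
    return 12.0 * S / (K ** 2 * (n ** 3 - n))
```

Identical time series yield W = 1; independent noise yields W near 1/K. A ReHo map assigns each voxel the W of its own neighbourhood.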
Affiliation(s)
- Pingping Guo: Department of Medical Ultrasound, Affiliated Tumor Hospital of Guangxi Medical University, Nanning, China
- Siyuan Lang: Department of Radiology, First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Muliang Jiang: Department of Radiology, First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Yifeng Wang: Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu, China
- Zisan Zeng: Department of Radiology, First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Zuguang Wen: Department of Radiology, Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen, China
- Yikang Liu: Department of Otorhinolaryngology Head and Neck Surgery, First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Bihong T Chen: Department of Diagnostic Radiology, City of Hope National Medical Center, Duarte, CA, United States
8.
Abstract
Creating invariant representations from an ever-changing speech signal is a major challenge for the human brain. Such an ability is particularly crucial for preverbal infants, who must discover the phonological, lexical, and syntactic regularities of an extremely inconsistent signal in order to acquire language. Within the visual domain, an efficient neural solution to overcome variability consists in factorizing the input into a reduced set of orthogonal components. Here, we asked whether a similar decomposition strategy is used in early speech perception. Using a 256-channel electroencephalographic system, we recorded the neural responses of 3-month-old infants to 120 natural consonant-vowel syllables with varying acoustic and phonetic profiles. Using multivariate pattern analyses, we show that syllables are factorized into distinct and orthogonal neural codes for consonants and vowels. Concerning consonants, we further demonstrate the existence of two stages of processing. The first stage is characterized by orthogonal and context-invariant neural codes for the dimensions of manner and place of articulation. Within the second stage, manner and place codes are integrated to recover the identity of the phoneme. We conclude that, despite the paucity of articulatory motor plans and speech production skills, pre-babbling infants are already equipped with a structured combinatorial code for speech analysis, which might account for the rapid pace of language acquisition during the first year.
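The factorization logic can be illustrated with simulated data. This sketch is not the infants' EEG analysis (the dimensionality, the additive code model, and the nearest-centroid decoder are all illustrative assumptions): if responses combine orthogonal consonant and vowel codes additively, a consonant decoder trained in one vowel context transfers to an unseen one, i.e., the consonant code is context-invariant.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 100
c_codes = {c: rng.standard_normal(dim) for c in "bdg"}  # consonant codes
v_codes = {v: rng.standard_normal(dim) for v in "ai"}   # vowel codes

def response(c, v, noise=0.5):
    """Simulated trial: additive consonant + vowel codes plus noise."""
    return c_codes[c] + v_codes[v] + noise * rng.standard_normal(dim)

# Train a nearest-centroid consonant decoder on the /a/ context only
centroids = {c: np.mean([response(c, "a") for _ in range(20)], axis=0)
             for c in "bdg"}

def decode(r):
    return min(centroids, key=lambda c: np.linalg.norm(r - centroids[c]))

# Test on the unseen /i/ context: accuracy stays high because the vowel
# code shifts all consonant centroids by the same orthogonal offset.
acc = np.mean([decode(response(c, "i")) == c for c in "bdg" for _ in range(50)])
```

Cross-context generalization of this kind is the operational signature of the orthogonal, factorized codes the study reports.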
9. Zanaboni MP, Pasca L, Villa BV, Faggio A, Grumi S, Provenzi L, Varesio C, De Giorgis V. Characterization of speech and language phenotype in GLUT1DS. Children (Basel) 2021; 8:344. PMID: 33925679; PMCID: PMC8146076; DOI: 10.3390/children8050344.
Abstract
Background: To analyze the oral motor, speech and language phenotype in a sample of pediatric patients with GLUT1 transporter deficiency syndrome (GLUT1DS). Methods: Eight Italian-speaking children with GLUT1DS (aged 4.6-15.4 years), on stable ketogenic diet treatment of varying duration, underwent a specific and standardized speech and language assessment battery. Results: All patients showed deficits, with different degrees of impairment, in multiple speech and language areas. In particular, orofacial praxis and parallel and total movements were the most impaired in the oromotor domain; in the speech domain, patients performed poorly in diadochokinesis rate and in word repetition, which was severely deficient in seven of eight patients; in the language domain, the most affected abilities were semantic/phonological fluency and receptive grammar. Conclusions: GLUT1DS is associated with different levels of speech and language impairment, which should guide diagnostic and therapeutic intervention. Larger population data are needed to identify a more precise speech and language profile in GLUT1DS patients.
Affiliation(s)
- Martina Paola Zanaboni: Department of Child Neurology and Psychiatry, IRCCS Mondino Foundation, 27100 Pavia, Italy
- Ludovica Pasca: Department of Child Neurology and Psychiatry, IRCCS Mondino Foundation, 27100 Pavia, Italy; Department of Brain and Behaviour Neuroscience, University of Pavia, 27100 Pavia, Italy (corresponding author; Tel.: +39-0382-380289)
- Barbara Valeria Villa: Department of Child Neurology and Psychiatry, IRCCS Mondino Foundation, 27100 Pavia, Italy
- Antonella Faggio: Department of Child Neurology and Psychiatry, IRCCS Mondino Foundation, 27100 Pavia, Italy
- Serena Grumi: Department of Child Neurology and Psychiatry, IRCCS Mondino Foundation, 27100 Pavia, Italy
- Livio Provenzi: Department of Child Neurology and Psychiatry, IRCCS Mondino Foundation, 27100 Pavia, Italy
- Costanza Varesio: Department of Child Neurology and Psychiatry, IRCCS Mondino Foundation, 27100 Pavia, Italy; Department of Brain and Behaviour Neuroscience, University of Pavia, 27100 Pavia, Italy
- Valentina De Giorgis: Department of Child Neurology and Psychiatry, IRCCS Mondino Foundation, 27100 Pavia, Italy
10. Goal-directed exploration for learning vowels and syllables: a computational model of speech acquisition. Künstliche Intelligenz 2021. DOI: 10.1007/s13218-021-00704-y.
Abstract
Infants learn to speak rapidly during their first years of life, gradually improving from simple vowel-like sounds to larger consonant-vowel complexes. Learning to control their vocal tract in order to produce meaningful speech sounds is a complex process which requires learning the relationship between motor and sensory processes. In this paper, a computational framework is proposed that models the problem of learning articulatory control for a physiologically plausible 3-D vocal tract model using a developmentally inspired approach. The system babbles and explores efficiently in a low-dimensional space of goals that are relevant to the learner in its synthetic environment. The learning process is goal-directed and self-organized, and yields an inverse model of the mapping between sensory space and motor commands. This study provides a unified framework that can be used for learning static as well as dynamic motor representations. The successful learning of vowel and syllable sounds, as well as the benefit of active and adaptive learning strategies, is demonstrated. Categorical perception is found in the acquired models, suggesting that the framework has the potential to replicate phenomena of human speech acquisition.
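The goal-directed exploration loop can be sketched in a few lines. This is a stand-in sketch under stated assumptions: the quadratic `vocal_tract` function replaces the paper's physiologically plausible 3-D articulatory synthesizer, and a nearest-neighbour memory stands in for its learned inverse model.

```python
import numpy as np

rng = np.random.default_rng(1)

def vocal_tract(m):
    """Stand-in forward model: 2-D motor command -> 2-D sensory outcome."""
    return np.array([m[0] + 0.3 * m[1] ** 2,
                     m[1] * (1.0 + 0.5 * m[0])])

# Memory of (sensory outcome, motor command) pairs, seeded by one babble
m = rng.random(2)
outcomes, commands = [vocal_tract(m)], [m]

for _ in range(1000):
    goal = rng.random(2) * np.array([1.3, 1.5])   # sample a sensory goal
    # inverse estimate: reuse the command whose stored outcome is closest
    i = int(np.argmin([np.linalg.norm(s - goal) for s in outcomes]))
    m = np.clip(commands[i] + 0.1 * rng.standard_normal(2), 0.0, 1.0)  # explore
    outcomes.append(vocal_tract(m))               # observe, store, repeat
    commands.append(m)

# After exploration, a reachable goal can be matched closely from memory
goal = vocal_tract(np.array([0.5, 0.5]))
best = min(outcomes, key=lambda s: np.linalg.norm(s - goal))
```

Sampling goals in sensory space (rather than babbling random motor commands) concentrates exploration on outcomes the learner cares about, which is the core idea of goal babbling.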
11. Majorano M, Brondino M, Morelli M, Ferrari R, Lavelli M, Guerzoni L, Cuda D, Persici V. Preverbal production and early lexical development in children with cochlear implants: a longitudinal study following pre-implanted children until 12 months after cochlear implant activation. Front Psychol 2020; 11:591584. PMID: 33329253; PMCID: PMC7713996; DOI: 10.3389/fpsyg.2020.591584.
Abstract
Studies have shown that children vary in the trajectories of their language development after cochlear implant (CI) activation. The aim of the present study is to assess the preverbal and lexical development of a group of 20 Italian-speaking children observed longitudinally before CI activation and at 3, 6, and 12 months after CI surgery (mean age at the first session: 17.5 months; SD: 8.3; range: 10-35). The group of children with CIs (G-CI) was compared with two groups of normally hearing (NH) children, one age-matched (G-NHA; mean age at the first session: 17.4 months; SD: 8.0; range: 10-34) and one language-matched (G-NHL; n = 20; mean age at the first session: 11.2 months; SD: 0.4; range: 11-12). The spontaneous interactions between children and their mothers during free play were transcribed. Preverbal babbling production and first words were considered for each child. Data analysis showed significant differences in babbling and word production between groups, with a lower production of words in children with CIs compared to the G-NHA group and a higher production of babbling compared to the G-NHL children. Word production 1 year after activation was significantly lower for the children with CIs than for language-matched children only when maternal education was controlled for. Furthermore, latent class growth analysis showed that children with CIs belonged mainly to classes that exhibited a low level of initial production but also progressive increases over time. Babbling production had a statistically significant effect on lexical growth but not on class membership, and only for groups showing slower and constant increases. Results highlight the importance of preverbal vocal patterns for later lexical development and may support families and speech therapists in the early identification of risk and protective factors for language delay in children with CIs.
Affiliation(s)
- Marika Morelli: Department of Human Sciences, University of Verona, Verona, Italy
- Rachele Ferrari: Department of Human Sciences, University of Verona, Verona, Italy
- Manuela Lavelli: Department of Human Sciences, University of Verona, Verona, Italy
- Letizia Guerzoni: U.O. Otorhinolaryngology, Guglielmo da Saliceto Hospital, Piacenza, Italy
- Domenico Cuda: U.O. Otorhinolaryngology, Guglielmo da Saliceto Hospital, Piacenza, Italy
13. Tomasello R, Garagnani M, Wennekers T, Pulvermüller F. A neurobiologically constrained cortex model of semantic grounding with spiking neurons and brain-like connectivity. Front Comput Neurosci 2018; 12:88. PMID: 30459584; PMCID: PMC6232424; DOI: 10.3389/fncom.2018.00088.
Abstract
One of the most controversial debates in cognitive neuroscience concerns the cortical locus of semantic knowledge and processing in the human brain. Experimental data revealed the existence of various cortical regions relevant for meaning processing, ranging from semantic hubs generally involved in semantic processing to modality-preferential sensorimotor areas involved in the processing of specific conceptual categories. Why and how the brain uses such complex organization for conceptualization can be investigated using biologically constrained neurocomputational models. Here, we improve pre-existing neurocomputational models of semantics by incorporating spiking neurons and a rich connectivity structure between the model 'areas' to mimic important features of the underlying neural substrate. Semantic learning and symbol grounding in action and perception were simulated by associative learning between co-activated neuron populations in frontal, temporal and occipital areas. As a result of Hebbian learning of the correlation structure of symbol, perception and action information, distributed cell assembly circuits emerged across various cortices of the network. These semantic circuits showed category-specific topographical distributions, reaching into motor and visual areas for action- and visually-related words, respectively. All types of semantic circuits included large numbers of neurons in multimodal connector hub areas, which is explained by cortical connectivity structure and the resultant convergence of phonological and semantic information on these zones. Importantly, these semantic hub areas exhibited some category-specificity, which was less pronounced than that observed in primary and secondary modality-preferential cortices. The present neurocomputational model integrates seemingly divergent experimental results about conceptualization and explains both semantic hubs and category-specific areas as an emergent process causally determined by two major factors: neuroanatomical connectivity structure and correlated neuronal activation during language learning.
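The Hebbian association at the core of this account can be reduced to a toy sketch (illustrative only: the paper's model uses spiking neurons and brain-like between-area connectivity, not a single weight matrix; the pattern sizes and threshold below are arbitrary):

```python
import numpy as np

n = 60  # total units across two model "areas" (e.g. phonological + semantic)

def pattern(idxs, size=n):
    """Sparse binary activity pattern with the given active units."""
    p = np.zeros(size)
    p[list(idxs)] = 1.0
    return p

# Three co-activated pairs: a word-form pattern x and a referent pattern y
xs = [pattern(range(0, 5)), pattern(range(10, 15)), pattern(range(20, 25))]
ys = [pattern(range(30, 35)), pattern(range(40, 45)), pattern(range(50, 55))]

W = np.zeros((n, n))
for x, y in zip(xs, ys):
    W += np.outer(y, x)        # Hebbian rule: strengthen co-active links

def recall(x, theta=0.5):
    """Pattern completion: activate the associated pattern from a cue."""
    return (W @ x > theta * x.sum()).astype(float)
```

Because co-activation is all the rule sees, the same mechanism can bind word forms to visual patterns for visually related words and to motor patterns for action words, which is how category-specific circuit topographies emerge in the full model.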
Affiliation(s)
- Rosario Tomasello: Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, Berlin, Germany; Centre for Robotics and Neural Systems, University of Plymouth, Plymouth, United Kingdom; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, Berlin, Germany
- Max Garagnani: Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, Berlin, Germany; Department of Computing, Goldsmiths, University of London, London, United Kingdom
- Thomas Wennekers: Centre for Robotics and Neural Systems, University of Plymouth, Plymouth, United Kingdom
- Friedemann Pulvermüller: Brain Language Laboratory, Department of Philosophy and Humanities, WE4, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, Berlin, Germany; Einstein Center for Neurosciences, Berlin, Germany
14. Modality-independent recruitment of inferior frontal cortex during speech processing in human infants. Dev Cogn Neurosci 2018; 34:130-138. [PMID: 30391756 PMCID: PMC6969291 DOI: 10.1016/j.dcn.2018.10.002]
Abstract
Despite increasing interest in the development of audiovisual speech perception in infancy, the underlying mechanisms and neural processes are still only poorly understood. In addition to regions in temporal cortex associated with speech processing and multimodal integration, such as superior temporal sulcus, left inferior frontal cortex (IFC) has been suggested to be critically involved in mapping information from different modalities during speech perception. To further illuminate the role of IFC during infant language learning and speech perception, the current study examined the processing of auditory, visual and audiovisual speech in 6-month-old infants using functional near-infrared spectroscopy (fNIRS). Our results revealed that infants recruit speech-sensitive regions in frontal cortex including IFC regardless of whether they processed unimodal or multimodal speech. We argue that IFC may play an important role in associating multimodal speech information during the early steps of language learning.
15. Echeverría-Palacio CM, Uscátegui-Daccarett A, Talero-Gutiérrez C. Integración auditiva, visual y propioceptiva como sustrato del desarrollo del lenguaje [Auditory, visual and proprioceptive integration as a substrate of language development]. Revista de la Facultad de Medicina 2018. [DOI: 10.15446/revfacmed.v66n3.60490]
Abstract
Introduction. Language development is a complex process considered an evolutionary marker of the human being, and it can be understood through the contribution of the sensory systems and the events that occur during critical periods of development. Objective. To review how auditory, visual and proprioceptive information is integrated and how this is reflected in language development, highlighting the role of social interaction as a context that favours this process. Materials and methods. The MeSH terms "Language Development", "Visual Perception", "Hearing" and "Proprioception" were used in the MEDLINE and Embase databases, limiting the main search to articles written in English, Spanish and Portuguese. Results. The starting point is auditory information, which, in the first year of life, allows discrimination of the elements of the environment that correspond to language; this is followed by a peak in acquisition and then a stage of maximal linguistic discrimination. Visual information provides the correspondence of language in images, the substrate for naming and word comprehension, as well as the interpretation and imitation of the emotional component of gesture. Proprioceptive information provides feedback on the motor execution patterns used in language production. Conclusion. Studying language development from the perspective of sensory integration offers new approaches for assessing and treating its deviations.
16. Moseley RL, Pulvermüller F. What can autism teach us about the role of sensorimotor systems in higher cognition? New clues from studies on language, action semantics, and abstract emotional concept processing. Cortex 2018; 100:149-190. [DOI: 10.1016/j.cortex.2017.11.019]
17. Pulvermüller F. Neural reuse of action perception circuits for language, concepts and communication. Prog Neurobiol 2017; 160:1-44. [PMID: 28734837 DOI: 10.1016/j.pneurobio.2017.07.001]
Abstract
Neurocognitive and neurolinguistics theories make explicit statements relating specialized cognitive and linguistic processes to specific brain loci. These linking hypotheses are in need of neurobiological justification and explanation. Recent mathematical models of human language mechanisms constrained by fundamental neuroscience principles and established knowledge about comparative neuroanatomy offer explanations for where, when and how language is processed in the human brain. In these models, network structure and connectivity along with action- and perception-induced correlation of neuronal activity co-determine neurocognitive mechanisms. Language learning leads to the formation of action perception circuits (APCs) with specific distributions across cortical areas. Cognitive and linguistic processes such as speech production, comprehension, verbal working memory and prediction are modelled by activity dynamics in these APCs, and combinatorial and communicative-interactive knowledge is organized in the dynamics within, and connections between APCs. The network models and, in particular, the concept of distributionally-specific circuits, can account for some previously not well understood facts about the cortical 'hubs' for semantic processing and the motor system's role in language understanding and speech sound recognition. A review of experimental data evaluates predictions of the APC model and alternative theories, also providing detailed discussion of some seemingly contradictory findings. Throughout, recent disputes about the role of mirror neurons and grounded cognition in language and communication are assessed critically.
Affiliation(s)
- Friedemann Pulvermüller: Brain Language Laboratory, Department of Philosophy & Humanities, WE4, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10099 Berlin, Germany; Einstein Center for Neurosciences, 10117 Berlin, Germany
18. Murphy K, James LS, Sakata JT, Prather JF. Advantages of comparative studies in songbirds to understand the neural basis of sensorimotor integration. J Neurophysiol 2017; 118:800-816. [PMID: 28331007 DOI: 10.1152/jn.00623.2016]
Abstract
Sensorimotor integration is the process through which the nervous system creates a link between motor commands and associated sensory feedback. This process allows for the acquisition and refinement of many behaviors, including learned communication behaviors such as speech and birdsong. Consequently, it is important to understand fundamental mechanisms of sensorimotor integration, and comparative analyses of this process can provide vital insight. Songbirds offer a powerful comparative model system to study how the nervous system links motor and sensory information for learning and control. This is because the acquisition, maintenance, and control of birdsong critically depend on sensory feedback. Furthermore, there is an incredible diversity of song organizations across songbird species, ranging from songs with simple, stereotyped sequences to songs with complex sequencing of vocal gestures, as well as a wide diversity of song repertoire sizes. Despite this diversity, the neural circuitry for song learning, control, and maintenance remains highly similar across species. Here, we highlight the utility of songbirds for the analysis of sensorimotor integration and the insights about mechanisms of sensorimotor integration gained by comparing different songbird species. Key conclusions from this comparative analysis are that variation in song sequence complexity seems to covary with the strength of feedback signals in sensorimotor circuits and that sensorimotor circuits contain distinct representations of elements in the vocal repertoire, possibly enabling evolutionary variation in repertoire sizes. We conclude our review by highlighting important areas of research that could benefit from increased comparative focus, with particular emphasis on the integration of new technologies.
Affiliation(s)
- Karagh Murphy: Program in Neuroscience, Department of Zoology and Physiology, University of Wyoming, Laramie, Wyoming
- Logan S James: Department of Biology, McGill University, Montreal, Quebec, Canada
- Jon T Sakata: Department of Biology, McGill University, Montreal, Quebec, Canada
- Jonathan F Prather: Program in Neuroscience, Department of Zoology and Physiology, University of Wyoming, Laramie, Wyoming
19. Schomers MR, Garagnani M, Pulvermüller F. Neurocomputational Consequences of Evolutionary Connectivity Changes in Perisylvian Language Cortex. J Neurosci 2017; 37:3045-3055. [PMID: 28193685 PMCID: PMC5354338 DOI: 10.1523/jneurosci.2693-16.2017]
Abstract
The human brain sets itself apart from that of its primate relatives by specific neuroanatomical features, especially the strong linkage of left perisylvian language areas (frontal and temporal cortex) by way of the arcuate fasciculus (AF). AF connectivity has been shown to correlate with verbal working memory, a specifically human trait providing the foundation for language abilities, but a mechanistic explanation of any related causal link between anatomical structure and cognitive function is still missing. Here, we provide a possible explanation and link, by using neurocomputational simulations in neuroanatomically structured models of the perisylvian language cortex. We compare networks mimicking key features of cortical connectivity in monkeys and humans, specifically the presence of relatively stronger higher-order "jumping links" between nonadjacent perisylvian cortical areas in the latter, and demonstrate that the emergence of working memory for syllables and word forms is a functional consequence of this structural evolutionary change. We also show that a mere increase of learning time is not sufficient, but that this specific structural feature, which entails higher connectivity degree of relevant areas and shorter sensorimotor path length, is crucial. These results offer a better understanding of specifically human anatomical features underlying the language faculty and their evolutionary selection advantage.
Significance Statement: Why do humans have superior language abilities compared to primates? Recently, a uniquely human neuroanatomical feature has been demonstrated in the strength of the arcuate fasciculus (AF), a fiber pathway interlinking the left-hemispheric language areas. Although AF anatomy has been related to linguistic skills, an explanation of how this fiber bundle may support language abilities is still missing.
We use neuroanatomically structured computational models to investigate the consequences of evolutionary changes in language area connectivity and demonstrate that the human-specific higher connectivity degree and comparatively shorter sensorimotor path length implicated by the AF entail emergence of verbal working memory, a prerequisite for language learning. These results offer a better understanding of specifically human anatomical features for language and their evolutionary selection advantage.
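The path-length argument in this abstract can be made concrete with a toy graph (the six area labels and the particular extra link are assumptions for illustration, not the published architecture): adding an arcuate-like "jumping link" between nonadjacent areas shortens the sensorimotor route between the motor and auditory ends of the chain.

```python
# Toy connectivity sketch (area labels and the extra link are assumed,
# not the published network): "jumping links" shorten sensorimotor paths.
from collections import deque

areas = ["M1", "PM", "PF", "PB", "AB", "A1"]     # motor ... auditory chain

def shortest_path_len(edges, start, goal):
    """Breadth-first search over an undirected area graph."""
    adj = {a: set() for a in areas}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, d = queue.popleft()
        if node == goal:
            return d
        for nxt in adj[node] - seen:
            seen.add(nxt)
            queue.append((nxt, d + 1))
    return None

chain = list(zip(areas, areas[1:]))              # next-neighbour links only
jumping = chain + [("PM", "AB")]                 # one arcuate-like long link

print(shortest_path_len(chain, "M1", "A1"))      # 5 steps without the link
print(shortest_path_len(jumping, "M1", "A1"))    # 3 steps with it
```

The single long-range edge cuts the motor-to-auditory distance from five hops to three, the kind of "shorter sensorimotor path length" the simulations identify as crucial for verbal working memory.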
Affiliation(s)
- Malte R Schomers: Brain Language Laboratory, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
- Max Garagnani: Brain Language Laboratory, Freie Universität Berlin, 14195 Berlin, Germany; Centre for Robotics and Neural Systems, University of Plymouth, Plymouth PL4 8AA, United Kingdom; Department of Computing, Goldsmiths, University of London, London SE14 6NW, United Kingdom
- Friedemann Pulvermüller: Brain Language Laboratory, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
20. Vihman MM. In search of a learning model. Br J Psychol 2017; 108:40-42. [PMID: 28059463 DOI: 10.1111/bjop.12229]
Abstract
While the four commentaries reflect a range of different perspectives on my target paper (Vihman, 2017), all basically accept the overall approach, which has been central to my research for 30 years. Each commentary proposes ways of deepening aspects of the ideas expressed or points out limitations and potential areas in which elaboration would be useful. This response takes up each commentary in turn.
21. Skipper JI, Devlin JT, Lametti DR. The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception. Brain Lang 2017; 164:77-105. [PMID: 27821280 DOI: 10.1016/j.bandl.2016.10.004]
Abstract
Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region and network based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions are ubiquitously active and form multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor and acoustic only models of speech perception and classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminant acoustic patterns as listening context requires.
Affiliation(s)
- Jeremy I Skipper: Experimental Psychology, University College London, United Kingdom
- Joseph T Devlin: Experimental Psychology, University College London, United Kingdom
- Daniel R Lametti: Experimental Psychology, University College London, United Kingdom; Department of Experimental Psychology, University of Oxford, United Kingdom
22. Asada M. Modeling Early Vocal Development Through Infant–Caregiver Interaction: A Review. IEEE Trans Cogn Dev Syst 2016. [DOI: 10.1109/tcds.2016.2552493]
23. The role of left inferior frontal cortex during audiovisual speech perception in infants. Neuroimage 2016; 133:14-20. [PMID: 26946090 DOI: 10.1016/j.neuroimage.2016.02.061]
Abstract
In the first year of life, infants' speech perception attunes to their native language. While the behavioral changes associated with native language attunement are fairly well mapped, the underlying mechanisms and neural processes are still only poorly understood. Using fNIRS and eye tracking, the current study investigated 6-month-old infants' processing of audiovisual speech that contained matching or mismatching auditory and visual speech cues. Our results revealed that infants' speech-sensitive brain responses in inferior frontal brain regions were lateralized to the left hemisphere. Critically, our results further revealed that speech-sensitive left inferior frontal regions showed enhanced responses to matching when compared to mismatching audiovisual speech, and that infants with a preference to look at the speaker's mouth showed an enhanced left inferior frontal response to speech compared to infants with a preference to look at the speaker's eyes. These results suggest that left inferior frontal regions play a crucial role in associating information from different modalities during native language attunement, fostering the formation of multimodal phonological categories.
24. Early-onset hearing loss reorganizes the visual and auditory network in children without cochlear implantation. Neuroreport 2016; 27:197-202. [DOI: 10.1097/wnr.0000000000000524]
25. Venezia JH, Fillmore P, Matchin W, Isenberg AL, Hickok G, Fridriksson J. Perception drives production across sensory modalities: A network for sensorimotor integration of visual speech. Neuroimage 2016; 126:196-207. [PMID: 26608242 PMCID: PMC4733636 DOI: 10.1016/j.neuroimage.2015.11.038]
Abstract
Sensory information is critical for movement control, both for defining the targets of actions and providing feedback during planning or ongoing movements. This holds for speech motor control as well, where both auditory and somatosensory information have been shown to play a key role. Recent clinical research demonstrates that individuals with severe speech production deficits can show a dramatic improvement in fluency during online mimicking of an audiovisual speech signal suggesting the existence of a visuomotor pathway for speech motor control. Here we used fMRI in healthy individuals to identify this new visuomotor circuit for speech production. Participants were asked to perceive and covertly rehearse nonsense syllable sequences presented auditorily, visually, or audiovisually. The motor act of rehearsal, which is prima facie the same whether or not it is cued with a visible talker, produced different patterns of sensorimotor activation when cued by visual or audiovisual speech (relative to auditory speech). In particular, a network of brain regions including the left posterior middle temporal gyrus and several frontoparietal sensorimotor areas activated more strongly during rehearsal cued by a visible talker versus rehearsal cued by auditory speech alone. Some of these brain regions responded exclusively to rehearsal cued by visual or audiovisual speech. This result has significant implications for models of speech motor control, for the treatment of speech output disorders, and for models of the role of speech gesture imitation in development.
Affiliation(s)
- Jonathan H Venezia: Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697, United States
- Paul Fillmore: Department of Communication Sciences and Disorders, Baylor University, Waco, TX 76798, United States
- William Matchin: Department of Linguistics, University of Maryland, College Park, MD 20742, United States
- A Lisette Isenberg: Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697, United States
- Gregory Hickok: Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697, United States
- Julius Fridriksson: Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, United States
26. Warlaumont AS, Finnegan MK. Learning to Produce Syllabic Speech Sounds via Reward-Modulated Neural Plasticity. PLoS One 2016; 11:e0145096. [PMID: 26808148 PMCID: PMC4726623 DOI: 10.1371/journal.pone.0145096]
Abstract
At around 7 months of age, human infants begin to reliably produce well-formed syllables containing both consonants and vowels, a behavior called canonical babbling. Over subsequent months, the frequency of canonical babbling continues to increase. How the infant's nervous system supports the acquisition of this ability is unknown. Here we present a computational model that combines a spiking neural network, reinforcement-modulated spike-timing-dependent plasticity, and a human-like vocal tract to simulate the acquisition of canonical babbling. Like human infants, the model's frequency of canonical babbling gradually increases. The model is rewarded when it produces a sound that is more auditorily salient than sounds it has previously produced. This is consistent with data from human infants indicating that contingent adult responses shape infant behavior and with data from deaf and tracheostomized infants indicating that hearing, including hearing one's own vocalizations, is critical for canonical babbling development. Reward receipt increases the level of dopamine in the neural network. The neural network contains a reservoir with recurrent connections and two motor neuron groups, one agonist and one antagonist, which control the masseter and orbicularis oris muscles, promoting or inhibiting mouth closure. The model learns to increase the number of salient, syllabic sounds it produces by adjusting the base level of muscle activation and increasing their range of activity. Our results support the possibility that through dopamine-modulated spike-timing-dependent plasticity, the motor cortex learns to harness its natural oscillations in activity in order to produce syllabic sounds. It thus suggests that learning to produce rhythmic mouth movements for speech production may be supported by general cortical learning mechanisms. 
The model makes several testable predictions and has implications for our understanding not only of how syllabic vocalizations develop in infancy but also for our understanding of how they may have evolved.
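The reward-gating principle described in this abstract can be sketched in a few lines (a toy scalar model, not the paper's spiking network or vocal-tract synthesizer, and the salience function is an invented stand-in): exploratory motor noise is retained only when it produces a vocalization more salient than any before, so the base activation ratchets toward salient output.

```python
# Toy scalar sketch (assumed setup, not the paper's spiking network):
# exploratory changes are kept only when salience exceeds all past trials.
import random

random.seed(1)
base = 0.1                    # base muscle-activation parameter (assumed)
best_salience = 0.0

def salience(activation):
    # invented stand-in for auditory salience of the resulting vocalization
    return max(0.0, min(activation, 1.0))

for trial in range(200):
    noise = random.uniform(-0.05, 0.05)   # exploratory motor variability
    s = salience(base + noise)
    if s > best_salience:                 # reward: most salient sound so far
        best_salience = s
        base += 0.5 * noise               # "dopamine"-gated retention
# unrewarded noise leaves the parameters untouched; rewarded noise ratchets
# the base activation toward increasingly salient (syllabic) output
print(base > 0.1)
```

The design choice mirrors the abstract's key contingency: plasticity is modulated by a reward signal rather than applied on every trial, so only sound-improving motor variants are consolidated.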
Affiliation(s)
- Anne S. Warlaumont: Cognitive and Information Sciences, University of California, Merced, Merced, CA, United States of America
- Megan K. Finnegan: Speech & Hearing Sciences, University of Illinois at Urbana-Champaign, Champaign, IL, United States of America
27. Asada M, Endo N. Infant-caregiver interactions affect the early development of vocalization. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2015:5351-4. [PMID: 26737500 DOI: 10.1109/embc.2015.7319600]
Abstract
Vocal communication is a unique means to bilaterally exchange messages in real-time. The developmental origin of such communication is the vocal interactions between an infant and a caregiver, and one of the big mysteries is how the infant learns to vocalize the mother tongue of the caregiver. Many theories claim to explain an infant's capability to imitate a caregiver based on acoustic matching. However, the acoustic qualities of the infant and the caregiver are quite different, and, therefore, cannot fully explain the imitation. Instead, the interaction itself may have an important role, but the mechanism is still unclear. In this article, we review studies addressing this problem using constructive approaches based on cognitive developmental robotics.
28. Shaobai Z, Yanchun J, Liwen H. Research on the mechanism for phonating stressed English syllables based on DIVA model. Neurocomputing 2015. [DOI: 10.1016/j.neucom.2014.11.032]
29. Altvater-Mackensen N, Grossmann T. Learning to Match Auditory and Visual Speech Cues: Social Influences on Acquisition of Phonological Categories. Child Dev 2014; 86:362-78. [DOI: 10.1111/cdev.12320]
30. Howard IS, Messum P. Learning to pronounce first words in three languages: an investigation of caregiver and infant behavior using a computational model of an infant. PLoS One 2014; 9:e110334. [PMID: 25333740 PMCID: PMC4204867 DOI: 10.1371/journal.pone.0110334]
Abstract
Words are made up of speech sounds. Almost all accounts of child speech development assume that children learn the pronunciation of first language (L1) speech sounds by imitation, most claiming that the child performs some kind of auditory matching to the elements of ambient speech. However, there is evidence to support an alternative account and we investigate the non-imitative child behavior and well-attested caregiver behavior that this account posits using Elija, a computational model of an infant. Through unsupervised active learning, Elija began by discovering motor patterns, which produced sounds. In separate interaction experiments, native speakers of English, French and German then played the role of his caregiver. In their first interactions with Elija, they were allowed to respond to his sounds if they felt this was natural. We analyzed the interactions through phonemic transcriptions of the caregivers' utterances and found that they interpreted his output within the framework of their native languages. Their form of response was almost always a reformulation of Elija's utterance into well-formed sounds of L1. Elija retained those motor patterns to which a caregiver responded and formed associations between his motor pattern and the response it provoked. Thus in a second phase of interaction, he was able to parse input utterances in terms of the caregiver responses he had heard previously, and respond using his associated motor patterns. This capacity enabled the caregivers to teach Elija to pronounce some simple words in their native languages, by his serial imitation of the words' component speech sounds. Overall, our results demonstrate that the natural responses and behaviors of human subjects to infant-like vocalizations can take a computational model from a biologically plausible initial state through to word pronunciation. This provides support for an alternative to current auditory matching hypotheses for how children learn to pronounce.
Affiliation(s)
- Ian S. Howard: Centre for Robotics and Neural Systems, School of Computing and Mathematics, Plymouth University, Plymouth, United Kingdom; Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, Cambridge, United Kingdom
- Piers Messum: Pronunciation Science Ltd, London, United Kingdom
31. The emergence of mirror-like response properties from domain-general principles in vision and audition. Behav Brain Sci 2014; 37:219. [PMID: 24775176 DOI: 10.1017/s0140525x13002483]
Abstract
Like Cook et al., we suggest that mirror neurons are a fascinating product of cross-modal learning. As predicted by an associative account, responses in motor regions are observed for novel and/or abstract visual stimuli such as point-light and android movements. Domain-specific mirror responses also emerge as a function of audiomotor expertise that is slowly acquired over years of intensive training.
32. Pulvermüller F. Semantic embodiment, disembodiment or misembodiment? In search of meaning in modules and neuron circuits. Brain Lang 2013; 127:86-103. [PMID: 23932167 DOI: 10.1016/j.bandl.2013.05.015]
Abstract
"Embodied" proposals claim that the meaning of at least some words, concepts and constructions is grounded in knowledge about actions and objects. An alternative "disembodied" position locates semantics in a symbolic system functionally detached from sensorimotor modules. This latter view is not tenable theoretically and has been empirically falsified by neuroscience research. A minimally-embodied approach now claims that action-perception systems may "color", but not represent, meaning; however, such minimal embodiment (misembodiment?) still fails to explain why action and perception systems exert causal effects on the processing of symbols from specific semantic classes. Action perception theory (APT) offers neurobiological mechanisms for "embodied" referential, affective and action semantics along with "disembodied" mechanisms of semantic abstraction, generalization and symbol combination, which draw upon multimodal brain systems. In this sense, APT suggests integrative-neuromechanistic explanations of why both sensorimotor and multimodal areas of the human brain differentially contribute to specific facets of meaning and concepts.
Affiliation(s)
- Friedemann Pulvermüller
- Brain Language Laboratory, Freie Universität Berlin, 14195 Berlin, Germany; Medical Research Council, Cognition and Brain Sciences Unit, Cambridge CB2 7EF, UK.
33
Warlaumont AS, Westermann G, Buder EH, Oller DK. Prespeech motor learning in a neural network using reinforcement. Neural Netw 2012; 38:64-75. [PMID: 23275137] [DOI: 10.1016/j.neunet.2012.11.012]
Abstract
Vocal motor development in infancy provides a crucial foundation for language development. Some significant early accomplishments include learning to control the process of phonation (the production of sound at the larynx) and learning to produce the sounds of one's language. Previous work has shown that social reinforcement shapes the kinds of vocalizations infants produce. We present a neural network model that provides an account of how vocal learning may be guided by reinforcement. The model consists of a self-organizing map that outputs to muscles of a realistic vocalization synthesizer. Vocalizations are spontaneously produced by the network. If a vocalization meets certain acoustic criteria, it is reinforced, and the weights are updated to make similar muscle activations increasingly likely to recur. We ran simulations of the model under various reinforcement criteria and tested the types of vocalizations it produced after learning in the different conditions. When reinforcement was contingent on the production of phonated (i.e. voiced) sounds, the network's post-learning productions were almost always phonated, whereas when reinforcement was not contingent on phonation, the network's post-learning productions were almost always not phonated. When reinforcement was contingent on both phonation and proximity to English vowels as opposed to Korean vowels, the model's post-learning productions were more likely to resemble the English vowels and vice versa.
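The reinforcement-gated babbling loop described in this abstract can be sketched in a few lines. This is a toy illustration, not the authors' model: the actual system couples a self-organizing map to a realistic vocalization synthesizer, whereas the "phonation" criterion, node count, noise level, and learning rate below are all invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_NODES, N_MUSCLES = 25, 3                    # toy sizes, not the paper's
W = rng.uniform(0, 1, (N_NODES, N_MUSCLES))   # node -> muscle activations
baseline = W[:, 0].mean()                     # pre-learning "phonation" weight

def phonated(muscles):
    # Stand-in for the paper's acoustic criterion: call the vocalization
    # phonated when the "laryngeal" muscle (index 0) is tense enough.
    return muscles[0] > 0.5

def babble_step(lr=0.1, noise=0.2):
    node = rng.integers(N_NODES)               # spontaneous activation
    # Exploratory noise around the node's stored muscle pattern.
    muscles = np.clip(W[node] + rng.normal(0, noise, N_MUSCLES), 0, 1)
    if phonated(muscles):
        # Reinforcement: pull the active node's weights toward the
        # rewarded pattern, making similar activations likely to recur.
        W[node] += lr * (muscles - W[node])
    return muscles

for _ in range(20_000):
    babble_step()
# After learning, the phonation-linked weights sit above their baseline,
# so post-learning productions are phonated far more often.
```

Making reinforcement contingent on a different criterion (e.g. proximity to a target vowel space instead of phonation) only changes `phonated`, which mirrors the contingency manipulation reported in the abstract.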
Affiliation(s)
- Anne S Warlaumont
- Cognitive and Information Sciences, University of California, Merced, 5200 North Lake Rd., Merced, CA 95343, USA.
34
Catmur C. Sensorimotor learning and the ontogeny of the mirror neuron system. Neurosci Lett 2012; 540:21-7. [PMID: 23063950] [DOI: 10.1016/j.neulet.2012.10.001]
Abstract
Mirror neurons, which have now been found in the human and songbird as well as the macaque, respond to both the observation and the performance of the same action. It has been suggested that their matching response properties have evolved as an adaptation for action understanding; alternatively, these properties may arise through sensorimotor experience. Here I review mirror neuron response characteristics from the perspective of ontogeny; I discuss the limited evidence for mirror neurons in early development; and I describe the growing body of evidence suggesting that mirror neuron responses can be modified through experience, and that sensorimotor experience is the critical type of experience for producing mirror neuron responses.
Affiliation(s)
- Caroline Catmur
- Department of Psychology, University of Surrey, Guildford GU2 7XH, UK.
35
36
Törölä H, Lehtihalmes M, Heikkinen H, Olsén P, Yliherva A. Early vocalization of preterm infants with extremely low birth weight (ELBW), Part II: From canonical babbling up to the appearance of the first word. Clin Linguist Phon 2012; 26:345-356. [PMID: 22404864] [DOI: 10.3109/02699206.2011.636500]
Abstract
The aim of this study was to systematically describe the preverbal development of preterm infants from canonical babbling up to the first word and to compare it with that of healthy full-term infants. In addition, the amount of vocalization between the preterm and full-term groups was compared. The sample consisted of 18 preterm infants with extremely low birth weight and 11 full-term infants. The development of preverbal vocalization before variegated babbling did not differ between the groups. Instead, the preterm infants produced fewer different canonical syllable types than the full-term infants. However, they showed a larger variance in variegated babbling skills and remained in the babbling phase longer before reaching the first meaningful word compared with the full-term infants. Following the onset of canonical babbling, the preterm infants produced fewer vocalizations than the full-term infants and they reached the first word later than the full-term infants.
Affiliation(s)
- Helena Törölä
- Faculty of Humanities, Logopedics, Department of Mathematical Sciences/IT Administration Services, University of Oulu, and Department of Paediatrics and Adolescence, Oulu University Hospital, Oulu, Finland.
37
Do production patterns influence the processing of speech in prelinguistic infants? Infant Behav Dev 2011; 34:590-601. [PMID: 21774986] [DOI: 10.1016/j.infbeh.2011.06.005]
Abstract
The headturn preference procedure was used to test 18 infants on their response to three different passages chosen to reflect their individual production patterns. The passages contained nonwords with consonants in one of three categories: (a) often produced by that infant ('own'), (b) rarely produced by that infant but common at that age ('other'), and (c) not generally produced by infants. Infants who had a single 'own' consonant showed no significant preference for either 'own' (a) or 'other' (b) passages. In contrast, infants with two 'own' consonants exhibited greater attention to 'other' passages (b). Both groups attended equally to the passage featuring consonants rarely produced by infants of that age (c). An analysis of a sample of the infant-directed speech ruled out the mothers' speech as a source of the infant preferences. The production-based shift to a focus on the 'other' passage suggests that nascent production abilities combine with emergent perceptual experience to facilitate word learning.
38
Listening Preference for Child-Directed Speech Versus Nonspeech Stimuli in Normal-Hearing and Hearing-Impaired Infants After Cochlear Implantation. Ear Hear 2011; 32:358-72. [DOI: 10.1097/aud.0b013e3182008afc]
39
Keren-Portnoy T, Vihman MM, DePaolis RA, Whitaker CJ, Williams NM. The role of vocal practice in constructing phonological working memory. J Speech Lang Hear Res 2010; 53:1280-93. [PMID: 20631231] [DOI: 10.1044/1092-4388(2009/09-0003)]
Abstract
PURPOSE: In this study, the authors looked for effects of vocal practice on phonological working memory. METHOD: A longitudinal design was used, combining both naturalistic observations and a nonword repetition test. Fifteen 26-month-olds (12 of whom were followed from age 11 months) were administered a nonword test including real words, "standard" nonwords (identical for all children), and nonwords based on individual children's production inventory (in and out words). RESULTS: A strong relationship was found between (a) length of experience with consonant production and (b) nonword repetition, and between (a) differential experience with specific consonants through production and (b) performance on the in versus out words. CONCLUSIONS: Performance depended on familiarity with words or their subunits and was strongest for real words, weaker for in words, and weakest for out words. The results demonstrate the important role of speech production in the construction of phonological working memory.
Affiliation(s)
- Tamar Keren-Portnoy
- Department of Language and Linguistic Science, University of York, Heslington, York YO10 5DD, United Kingdom.
40
Dick AS, Solodkin A, Small SL. Neural development of networks for audiovisual speech comprehension. Brain Lang 2010; 114:101-14. [PMID: 19781755] [PMCID: PMC2891225] [DOI: 10.1016/j.bandl.2009.08.005]
Abstract
Everyday conversation is both an auditory and a visual phenomenon. While visual speech information enhances comprehension for the listener, evidence suggests that the ability to benefit from this information improves with development. A number of brain regions have been implicated in audiovisual speech comprehension, but the extent to which the neurobiological substrate in the child compares to the adult is unknown. In particular, developmental differences in the network for audiovisual speech comprehension could manifest through the incorporation of additional brain regions, or through different patterns of effective connectivity. In the present study we used functional magnetic resonance imaging and structural equation modeling (SEM) to characterize the developmental changes in network interactions for audiovisual speech comprehension. The brain response was recorded while 8- to 11-year-old children and adults passively listened to stories under audiovisual (AV) and auditory-only (A) conditions. Results showed that in children and adults, AV comprehension activated the same fronto-temporo-parietal network of regions known for their contribution to speech production and perception. However, the SEM network analysis revealed age-related differences in the functional interactions among these regions. In particular, the influence of the posterior inferior frontal gyrus/ventral premotor cortex on supramarginal gyrus differed across age groups during AV, but not A speech. This functional pathway might be important for relating motor and sensory information used by the listener to identify speech sounds. Further, its development might reflect changes in the mechanisms that relate visual speech information to articulatory speech representations through experience producing and perceiving speech.
41
Pulvermüller F, Fadiga L. Active perception: sensorimotor circuits as a cortical basis for language. Nat Rev Neurosci 2010; 11:351-60. [PMID: 20383203] [DOI: 10.1038/nrn2811]
Abstract
Action and perception are functionally linked in the brain, but a hotly debated question is whether perception and comprehension of stimuli depend on motor circuits. Brain language mechanisms are ideal for addressing this question. Neuroimaging investigations have found specific motor activations when subjects understand speech sounds, word meanings and sentence structures. Moreover, studies involving transcranial magnetic stimulation and patients with lesions affecting inferior frontal regions of the brain have shown contributions of motor circuits to the comprehension of phonemes, semantic categories and grammar. These data show that language comprehension benefits from frontocentral action systems, indicating that action and perception circuits are interdependent.
Affiliation(s)
- Friedemann Pulvermüller
- Medical Research Council, Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge, CB2 2EF, UK.
42
Garagnani M, Wennekers T, Pulvermüller F. Recruitment and Consolidation of Cell Assemblies for Words by Way of Hebbian Learning and Competition in a Multi-Layer Neural Network. Cognit Comput 2009; 1:160-176. [PMID: 20396612] [PMCID: PMC2854812] [DOI: 10.1007/s12559-009-9011-1]
Abstract
Current cognitive theories postulate either localist representations of knowledge or fully overlapping, distributed ones. We use a connectionist model that closely replicates known anatomical properties of the cerebral cortex and neurophysiological principles to show that Hebbian learning in a multi-layer neural network leads to memory traces (cell assemblies) that are both distributed and anatomically distinct. Taking the example of word learning based on action-perception correlation, we document mechanisms underlying the emergence of these assemblies, especially (i) the recruitment of neurons and consolidation of connections defining the kernel of the assembly along with (ii) the pruning of the cell assembly's halo (consisting of very weakly connected cells). We found that, whereas a covariance-based learning rule led to significant overlap and merging of assemblies, a neurobiologically grounded synaptic plasticity rule with fixed LTP/LTD thresholds produced minimal overlap and prevented merging, exhibiting competitive learning behaviour. Our results are discussed in light of current theories of language and memory. As these simulations with neurobiologically realistic neural networks demonstrate the spontaneous emergence of lexical representations that are both cortically dispersed and anatomically distinct, both localist and distributed cognitive accounts receive partial support.
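The contrast this abstract draws between covariance learning and fixed-threshold plasticity can be illustrated with two toy update rules. This is a schematic sketch under assumed thresholds, not the paper's implementation; the function names and parameter values are illustrative only.

```python
import numpy as np

def covariance_update(w, pre, post, lr=0.05):
    # Covariance-style Hebbian rule: weights grow whenever pre- and
    # postsynaptic activity deviate from their means in the same
    # direction, so moderately co-active units keep strengthening --
    # which can let cell assemblies overlap and eventually merge.
    return w + lr * (pre - pre.mean()) * (post - post.mean())

def fixed_threshold_update(w, pre, post, lr=0.05,
                           theta_ltp=0.6, theta_ltd=0.2):
    # Fixed-threshold rule: LTP only when presynaptic and postsynaptic
    # activity are both high; LTD when the presynaptic unit is active
    # but the postsynaptic unit responds only weakly. Weak co-activation
    # is actively punished, producing competition between assemblies
    # and minimal overlap.
    ltp = (pre > theta_ltp) & (post > theta_ltp)
    ltd = (pre > theta_ltp) & (post < theta_ltd)
    return w + lr * (ltp.astype(float) - ltd.astype(float))

# One presynaptic pattern projecting to three postsynaptic units:
# strongly driven, weakly driven, and intermediate.
pre = np.array([0.9, 0.9, 0.9])
post = np.array([0.9, 0.1, 0.4])
w = fixed_threshold_update(np.zeros(3), pre, post)
# Strong pairing is potentiated, weak pairing is depressed, and the
# intermediate case (between the two thresholds) is left unchanged.
```

The depression of weakly co-active synapses is what prunes the "halo" of a forming assembly, whereas the covariance rule keeps strengthening such connections and lets neighbouring assemblies share cells.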
Affiliation(s)
- Max Garagnani
- Medical Research Council, Cognition and Brain Sciences Unit, 15, Chaucer Road, Cambridge CB2 7EF, UK
43
Asada M, Hosoda K, Kuniyoshi Y, Ishiguro H, Inui T, Yoshikawa Y, Ogino M, Yoshida C. Cognitive Developmental Robotics: A Survey. IEEE Trans Auton Ment Dev 2009. [DOI: 10.1109/tamd.2009.2021702]
44
Affiliation(s)
- Marco Del Giudice
- Center for Cognitive Science, Department of Psychology, University of Turin, Torino, Italy.
45
Affiliation(s)
- Dorothy V.M. Bishop
- Department of Experimental Psychology, University of Oxford, OX1 3UD, United Kingdom.
46
Acheson DJ, MacDonald MC. Verbal working memory and language production: Common approaches to the serial ordering of verbal information. Psychol Bull 2009; 135:50-68. [PMID: 19210053] [PMCID: PMC3000524] [DOI: 10.1037/a0014411]
Abstract
Verbal working memory (WM) tasks typically involve the language production architecture for recall; however, language production processes have had a minimal role in theorizing about WM. A framework for understanding verbal WM results is presented here. In this framework, domain-specific mechanisms for serial ordering in verbal WM are provided by the language production architecture, in which positional, lexical, and phonological similarity constraints are highly similar to those identified in the WM literature. These behavioral similarities are paralleled in computational modeling of serial ordering in both fields. The role of long-term learning in serial ordering performance is emphasized, in contrast to some models of verbal WM. Classic WM findings are discussed in terms of the language production architecture. The integration of principles from both fields illuminates the maintenance and ordering mechanisms for verbal information.
Affiliation(s)
- Daniel J Acheson
- Department of Psychology, University of Wisconsin, Madison, WI 53706, USA
47
Wilson EM, Green JR, Yunusova Y, Moore CA. Task specificity in early oral motor development. Semin Speech Lang 2008; 29:257-66. [PMID: 19058112] [DOI: 10.1055/s-0028-1103389]
Abstract
This article addresses a long-standing clinical and theoretical debate regarding the potential relationship between speech and nonspeech behaviors in the developing system. The review is motivated by the high popularity of nonspeech oral motor exercises (NSOMEs), including alimentary behaviors such as chewing, in the treatment of speech disorders in young children. The similarities and differences in the behavioral characteristics, sensory requirements, and task goals for speech and nonspeech oromotor behaviors are compared. Integrated theoretical paradigms and empirical data on the development of early oromotor behaviors are discussed. Although the efficacy of NSOMEs remains empirically untested at this time, studies of typical developmental speech physiology fail to support a theoretical framework promoting the use of NSOMEs. Well-designed empirical studies are necessary, however, to establish the efficacy of NSOMEs for specific clinical populations and treatment targets.
Affiliation(s)
- Erin M Wilson
- Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA.
48
Précis of neuroconstructivism: how the brain constructs cognition. Behav Brain Sci 2008; 31:321-31; discussion 331-56. [PMID: 18578929] [DOI: 10.1017/s0140525x0800407x]
Abstract
Neuroconstructivism: How the Brain Constructs Cognition proposes a unifying framework for the study of cognitive development that brings together (1) constructivism (which views development as the progressive elaboration of increasingly complex structures), (2) cognitive neuroscience (which aims to understand the neural mechanisms underlying behavior), and (3) computational modeling (which proposes formal and explicit specifications of information processing). The guiding principle of our approach is context dependence, within and (in contrast to Marr [1982]) between levels of organization. We propose that three mechanisms guide the emergence of representations: competition, cooperation, and chronotopy, which themselves allow for two central processes: proactivity and progressive specialization. We suggest that the main outcome of development is partial representations, distributed across distinct functional circuits. This framework is derived by examining development at the level of single neurons, brain systems, and whole organisms. We use the terms encellment, embrainment, and embodiment to describe the higher-level contextual influences that act at each of these levels of organization. To illustrate these mechanisms in operation we provide case studies in early visual perception, infant habituation, phonological development, and object representations in infancy. Three further case studies are concerned with interactions between levels of explanation: social development, atypical development and, within that, developmental dyslexia. We conclude that cognitive development arises from a dynamic, contextual change in embodied neural structures leading to partial representations across multiple brain regions and timescales, in response to a proactively specified physical and social environment.
49
Richardson FM, Thomas MS. Critical periods and catastrophic interference effects in the development of self-organizing feature maps. Dev Sci 2008; 11:371-89. [DOI: 10.1111/j.1467-7687.2008.00682.x]
50
Garagnani M, Wennekers T, Pulvermüller F. A neuroanatomically grounded Hebbian-learning model of attention-language interactions in the human brain. Eur J Neurosci 2008; 27:492-513. [PMID: 18215243] [PMCID: PMC2258460] [DOI: 10.1111/j.1460-9568.2008.06015.x]
Abstract
Meaningful familiar stimuli and senseless unknown materials lead to different patterns of brain activation. A late major neurophysiological response indexing ‘sense’ is the negative component of event-related potential peaking at around 400 ms (N400), an event-related potential that emerges in attention-demanding tasks and is larger for senseless materials (e.g. meaningless pseudowords) than for matched meaningful stimuli (words). However, the mismatch negativity (latency 100–250 ms), an early automatic brain response elicited under distraction, is larger to words than to pseudowords, thus exhibiting the opposite pattern to that seen for the N400. So far, no theoretical account has been able to reconcile and explain these findings by means of a single, mechanistic neural model. We implemented a neuroanatomically grounded neural network model of the left perisylvian language cortex and simulated: (i) brain processes of early language acquisition and (ii) cortical responses to familiar word and senseless pseudoword stimuli. We found that variation of the area-specific inhibition (the model correlate of attention) modulated the simulated brain response to words and pseudowords, producing either an N400- or a mismatch negativity-like response depending on the amount of inhibition (i.e. available attentional resources). Our model: (i) provides a unifying explanatory account, at cortical level, of experimental observations that, so far, had not been given a coherent interpretation within a single framework; (ii) demonstrates the viability of purely Hebbian, associative learning in a multilayered neural network architecture; and (iii) makes clear predictions on the effects of attention on latency and magnitude of event-related potentials to lexical items. Such predictions have been confirmed by recent experimental evidence.
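The qualitative mechanism this abstract describes, in which area-specific inhibition determines whether only a strongly connected word assembly ignites (MMN-like pattern) or many weakly matching circuits also become active (N400-like pattern), can be illustrated with a toy calculation. All numbers and names below are invented for the demonstration and are not taken from the model.

```python
def response(ff_inputs, rec_gains, inhibition, ignite_thresh=0.2):
    # Summed activity across assemblies. An assembly "ignites" (adds its
    # reverberant, recurrent activity) only if its net feedforward drive
    # survives the area-wide inhibition -- the model correlate of
    # available attentional resources.
    total = 0.0
    for ff, rec in zip(ff_inputs, rec_gains):
        net = max(ff - inhibition, 0.0)
        total += net + (rec if net > ignite_thresh else 0.0)
    return total

# A word fully activates one strongly connected assembly; a pseudoword
# partially activates several assemblies, each with weaker recurrence.
WORD = ([1.0], [1.0])
PSEUDO = ([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])

low, high = 0.1, 0.4   # low inhibition ~ attended task; high ~ distraction
n400_word, n400_pseudo = response(*WORD, low), response(*PSEUDO, low)
mmn_word, mmn_pseudo = response(*WORD, high), response(*PSEUDO, high)
```

With low inhibition the partially matching assemblies all ignite, so the pseudoword evokes more total activity than the word (N400-like); with high inhibition only the fully driven word assembly ignites, reversing the pattern (MMN-like), which is the dissociation the abstract reconciles.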
Affiliation(s)
- Max Garagnani
- MRC Cognition & Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 7EF, UK.