1. Maxfield ND. Exploring the Activation of Target Words in Picture Naming in Children Who Stutter: Evidence From Event-Related Potentials. J Speech Lang Hear Res 2024;67:2903-2919. [PMID: 39058928; PMCID: PMC11427420; DOI: 10.1044/2024_jslhr-23-00570]
Abstract
PURPOSE: Target word activation in picture naming was explored in children who stutter (CWS) and typically fluent children (TFC) using event-related potentials (ERPs). METHOD: A total of 18 CWS and 16 TFC completed a task combining picture naming and probe word identification. On each trial, a picture-to-be-named was followed by an auditory probe word-to-be-identified; the probe was either identical (Identity condition) or unrelated (Unrelated condition) to the picture name. ERPs were recorded from probe onset. Attenuation of the N400 ERP component was predicted for Identity versus Unrelated trials (N400 priming). Between-groups differences in the amplitude, timing, and topography of N400 priming were explored. RESULTS: Naming was more accurate on Identity versus Unrelated trials. Probe word identification accuracy was not affected by condition. N400 priming was detected, indicating that self-generated picture names facilitated semantic processing of identical probes. This effect was larger in amplitude in CWS than in TFC. Unexpectedly, an N400-preceding, frontally maximal, positive-going ERP component, associated with expectancy processing, was larger in amplitude on Unrelated versus Identity trials. This effect was smaller in CWS than in TFC. CONCLUSIONS: The larger N400 priming effect in CWS reflects a tendency toward more extensive semantic processing in picture naming. The smaller condition effect on frontally maximal, positive-going, N400-preceding ERP activity in CWS indicates a reduced ability to form expectancies about the lexical and/or phonological identity of probe words. Both effects may point to inefficient activation of target words in picture naming in CWS.
Affiliation(s)
- Nathan D Maxfield
- Department of Communication Sciences & Disorders, University of South Florida, Tampa
2. Siemons-Lühring DI, Euler HA, Mathmann P, Suchan B, Neumann K. The Effectiveness of an Integrated Treatment for Functional Speech Sound Disorders: A Randomized Controlled Trial. Children (Basel) 2021;8:1190. [PMID: 34943386; PMCID: PMC8700312; DOI: 10.3390/children8121190]
Abstract
BACKGROUND: The treatment of functional speech sound disorders (SSDs) in children is often lengthy, ill-defined, and without satisfactory evidence of success; effectiveness studies on SSDs are rare. This randomized controlled trial evaluated the effectiveness of the integrated SSD treatment program PhonoSens, which focuses on integrating phonological and phonetic processing in accordance with the Integrated Psycholinguistic Model of Speech Processing (IPMSP). METHODS: Thirty-two German-speaking children aged 3.5 to 5.5 years (median 4.6) with functional SSD were randomly assigned to a treatment group or a wait-list control group of 16 children each. All children in the treatment group and, after an average waiting period of 6 months, 12 children in the control group underwent PhonoSens treatment. RESULTS: After 15 therapy sessions, the treatment group showed a higher percentage of consonants correct (PCC) and a greater reduction in phonological processes than the wait-list control group, both with large effect sizes (Cohen's d = 0.89 and 1.04). All 28 children treated achieved normal phonological abilities: 21 before entering school and 7 during first grade. The average number of treatment sessions was 28; the average treatment duration was 11.5 months. CONCLUSION: IPMSP-aligned therapy is effective in the treatment of SSD and is well adaptable to languages other than German.
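For orientation, the reported effect sizes follow Cohen's d, the standardized mean difference between the two groups. The pooled-standard-deviation form below is the conventional textbook definition, not a formula taken from the trial's own analysis:

```latex
d = \frac{\bar{x}_{\mathrm{treat}} - \bar{x}_{\mathrm{control}}}{s_{\mathrm{pooled}}},
\qquad
s_{\mathrm{pooled}} = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}}
```

By the usual convention, d of about 0.2 is small, 0.5 medium, and 0.8 large, so the reported values of 0.89 and 1.04 both qualify as large effects.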
Affiliation(s)
- Denise I. Siemons-Lühring
- Department of Phoniatrics and Pedaudiology, University Hospital Münster, University of Münster, Malmedyweg 13, 48149 Münster, Germany
- Harald A. Euler
- Department of Phoniatrics and Pedaudiology, University Hospital Münster, University of Münster, Malmedyweg 13, 48149 Münster, Germany
- Philipp Mathmann
- Department of Phoniatrics and Pedaudiology, University Hospital Münster, University of Münster, Malmedyweg 13, 48149 Münster, Germany
- Boris Suchan
- Department of Clinical Neuropsychology, Ruhr-University of Bochum, Universitätsstraße 150, 44801 Bochum, Germany
- Katrin Neumann
- Department of Phoniatrics and Pedaudiology, University Hospital Münster, University of Münster, Malmedyweg 13, 48149 Münster, Germany
3.
Abstract
Like all human activities, verbal communication is fraught with errors. Humans are estimated to produce around 16,000 words per day, but the word selected for production is not always the correct one, nor is articulation always flawless. To facilitate communication, however, it is important to limit the number of errors. This is accomplished by the verbal monitoring mechanism. A century of research has uncovered a number of properties of the mechanisms at work during verbal monitoring, and over a dozen routes for verbal monitoring have been postulated. To date, however, a complete account of verbal monitoring does not exist. In the current paper we first outline the properties of verbal monitoring that have been empirically demonstrated. This is followed by a discussion of current verbal monitoring models: the perceptual loop theory, conflict monitoring, the hierarchical state feedback control model, and the forward model theory. Each of these models is evaluated against empirical findings and theoretical considerations. We then outline the lacunae of current theories, which we address with a proposal for a new model of verbal monitoring for production and perception, based on conflict monitoring models. Additionally, this novel model suggests a mechanism by which a detected error leads to a correction. The error resolution mechanism proposed in our new model is then tested in a computational model. Finally, we outline the advances and predictions of the model.
4.
Abstract
Speakers occasionally make speech errors, which may be detected and corrected. According to the comprehension-based account proposed by Levelt, Roelofs, and Meyer (1999) and Roelofs (2004), speakers detect errors by using their speech comprehension system for the monitoring of overt as well as inner speech. According to the production-based account of Nozari, Dell, and Schwartz (2011), speakers may use their comprehension system for external monitoring but error detection in internal monitoring is based on the amount of conflict within the speech production system, assessed by the anterior cingulate cortex (ACC). Here, I address three main arguments of Nozari et al. and Nozari and Novick (2017) against a comprehension-based account of internal monitoring, which concern cross-talk interference between inner and overt speech, a double dissociation between comprehension and self-monitoring ability in patients with aphasia, and a domain-general error-related negativity in the ACC that is allegedly independent of conscious awareness. I argue that none of the arguments are conclusive, and conclude that comprehension-based monitoring remains a viable account of self-monitoring in speaking.
5. Roelofs A. On (Correctly Representing) Comprehension-Based Monitoring in Speaking: Rejoinder to Nozari (2020). J Cogn 2020;3:20. [PMID: 32944683; PMCID: PMC7473236; DOI: 10.5334/joc.112]
Abstract
Misunderstanding exists about what constitutes comprehension-based monitoring in speaking and what it empirically implies. Here, I make clear that the use of the speech comprehension system is the defining property of comprehension-based monitoring rather than conscious and deliberate processing, as maintained by Nozari (2020). Therefore, contrary to what Nozari claims, my arguments in Roelofs (2020) are suitable for addressing her criticisms raised against comprehension-based monitoring. Also, I indicate that Nozari does not correctly describe my view in a review of her paper. Finally, I further clarify what comprehension-based monitoring entails empirically, thereby dealing with Nozari's new criticisms and inaccurate descriptions of empirical findings. I conclude that comprehension-based monitoring remains a viable account of self-monitoring in speaking.
Affiliation(s)
- Ardi Roelofs
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Centre for Cognition, Nijmegen, NL
6. Nozari N. A Comprehension- or a Production-Based Monitor? Response to Roelofs (2020). J Cogn 2020;3:19. [PMID: 32944682; PMCID: PMC7473204; DOI: 10.5334/joc.102]
Abstract
Roelofs (2020) has put forth a rebuttal of the criticisms raised against comprehension-based monitoring and has also raised a number of objections against production-based monitors. In this response, I clarify that the model defended by Roelofs is not a comprehension-based monitor, but belongs to a class of monitoring models which I refer to as production-perception models. I review comprehension-based and production-perception models, highlight the strength of each, and point out the differences between them. I then discuss the limitations of both for monitoring production at higher levels, which has been the motivation for production-based monitors. Next, I address the specific criticisms raised by Roelofs (2020) in light of the current evidence. I end by presenting several lines of arguments that preclude a single monitoring mechanism as meeting all the demands of monitoring in a task as complex as communication. A more fruitful avenue is perhaps to focus on what theories are compatible with the nature of representations at specific levels of the production system and with specific aims of monitoring in language production.
Affiliation(s)
- Nazbanou Nozari
- Department of Psychology, Carnegie Mellon University, US
- Center for the Neural Basis of Cognition (CNBC), US
7. Geva S, Fernyhough C. A Penny for Your Thoughts: Children's Inner Speech and Its Neuro-Development. Front Psychol 2019;10:1708. [PMID: 31474897; PMCID: PMC6702515; DOI: 10.3389/fpsyg.2019.01708]
Abstract
Inner speech emerges in early childhood, in parallel with the maturation of the dorsal language stream. To date, the developmental relations between these two processes have not been examined. We review evidence that the dorsal language stream has a role in supporting the psychological phenomenon of inner speech, before considering pediatric studies of the dorsal stream's anatomical development and evidence for its emerging functional roles. We then examine possible causal accounts of the relations between these two developmental processes and consider their implications for phylogenetic theories about the evolution of inner speech and for accounts of the ontogenetic relations between language and cognition.
Affiliation(s)
- Sharon Geva
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
8.
Abstract
Objective: Inner speech, or the ability to talk to yourself in your head, is one of the most ubiquitous phenomena of everyday experience. Recent years have seen growing interest in the role and function of inner speech in various typical and cognitively impaired populations. Although people vary in their ability to produce inner speech, there is currently no test battery that can be used to evaluate this ability. Here we developed a test battery for evaluating individual differences in the ability to access the auditory word form internally. Methods: We developed and standardized five tests: rhyme judgment of pictures and of written words, homophone judgment of written words and of non-words, and judgment of lexical stress of written words. The tasks were administered to healthy adult native British English speakers (age range 20-72; n = 28-97, varying between tests). Results: In all tests, some items were excluded based on low success rates among participants or on documented regional variability in accent. Level of education, but not age, correlated with performance on some of the tasks, and there were no gender differences in performance. Conclusion: The standardization process resulted in a battery of tests that can be used to assess the natural variability of inner speech abilities among English-speaking adults.
Affiliation(s)
- Sharon Geva
- Department of Clinical Neurosciences, University of Cambridge, R3 Neurosciences - Box 83, Addenbrooke's Hospital, Cambridge, UK
- Elizabeth A Warburton
- Department of Clinical Neurosciences, University of Cambridge, R3 Neurosciences - Box 83, Addenbrooke's Hospital, Cambridge, UK
9. Broos WPJ, Duyck W, Hartsuiker RJ. Monitoring speech production and comprehension: Where is the second-language delay? Q J Exp Psychol (Hove) 2018;72:1601-1619. [PMID: 30270750; DOI: 10.1177/1747021818807447]
Abstract
Research on error monitoring suggests that bilingual Dutch-English speakers are slower to correct some speech errors in their second language (L2) than in their first language (L1). But which component of self-monitoring is slowed down in L2: error detection, or interruption and repair of the error? This study charted the time course of monitoring in monolingual English speakers and bilingual Dutch-English speakers in language production and language comprehension, with the aim of pinpointing the component(s) of monitoring that cause an L2 disadvantage. First, we asked whether phonological errors are interrupted more slowly in L2. An analysis of data from three speech error elicitation experiments indeed showed that Dutch-English bilinguals were slower to stop speaking after an error had been detected in their L2 (English) than in their L1 (Dutch), at least for interrupted errors. A similar L2 disadvantage was found when comparing the L2 of Dutch-English bilinguals to the L1 of English monolinguals. Second, monolingual English speakers and bilingual Dutch-English speakers performed a picture naming task, a production monitoring task, and a comprehension monitoring task. Bilinguals were slower to name pictures in their L2 than monolinguals were in their L1. However, the production and comprehension monitoring tasks yielded comparable response latencies between monolinguals in their L1 and bilinguals in their L2, indicating that monitoring processes in L2 are not generally slower. We suggest that interruption and repair are planned concurrently and that the difficulty of repairing in L2 triggers a slow-down in L2 interruption.
Affiliation(s)
- Wouter P. J. Broos
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Wouter Duyck
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
10. Coalson GA, Byrd CT, Kuylen A. Uniqueness Point Effects during Speech Planning in Adults Who Do and Do Not Stutter. Folia Phoniatr Logop 2018. [PMID: 29533938; DOI: 10.1159/000485657]
Abstract
BACKGROUND/AIMS: Previous studies employing a variety of tasks have demonstrated that adults who stutter (AWS) present with phonological encoding differences compared to adults who do not stutter (AWNS). The present study examined whether atypical preverbal monitoring also influences AWS performance during one such paradigm, the silent phoneme monitoring task. Specifically, we investigated whether monitoring latencies for AWS were accelerated after the word's uniqueness point (the phoneme that isolates the word from all lexical competitors), as observed for AWNS when monitoring internal and external speech. METHODS: Twenty adults (10 AWS, 10 AWNS) completed a silent phoneme monitoring task using stimuli that contained either (a) early uniqueness points (EUP), (b) late uniqueness points, or (c) no uniqueness point (NUP). Response latency when identifying word-final phonemes was measured. RESULTS: AWNS exhibited the expected uniqueness point effect when monitoring internal speech: word-final phonemes were accessed more rapidly for words with EUP than with NUP. In contrast, phoneme monitoring speed for AWS did not differ across conditions; that is, AWS did not exhibit the expected uniqueness point effects. CONCLUSION: Findings suggest that inefficient or atypical preverbal monitoring may be present in AWS and support theories that implicate the internal speech monitor as an area of deficit.
Affiliation(s)
- Geoffrey A Coalson
- Department of Communication Sciences and Disorders, Louisiana State University, Baton Rouge, Louisiana, USA
- Courtney T Byrd
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, Texas, USA
- Amanda Kuylen
- Department of Communication Sciences and Disorders, Louisiana State University, Baton Rouge, Louisiana, USA
11. Coalson GA, Byrd CT. Nonword repetition in adults who stutter: The effects of stimuli stress and auditory-orthographic cues. PLoS One 2017;12:e0188111. [PMID: 29186179; PMCID: PMC5706734; DOI: 10.1371/journal.pone.0188111]
Abstract
Purpose: Adults who stutter (AWS) are less accurate in their immediate repetition of novel phonological sequences than adults who do not stutter (AWNS). The present study examined whether manipulating two aspects of traditional nonword repetition tasks unmasks distinct weaknesses in phonological working memory in AWS: (1) presentation of stimuli with less-frequent stress patterns, and (2) removal of auditory-orthographic cues immediately prior to response. Method: Fifty-two participants (26 AWS, 26 AWNS) produced 12 bisyllabic nonwords in the presence of corresponding auditory-orthographic cues (immediate repetition task) and in their absence (short-term recall task). Half of each cohort (13 AWS, 13 AWNS) was exposed to the stimuli with high-frequency trochaic stress, and half (13 AWS, 13 AWNS) to identical stimuli with lower-frequency iambic stress. Results: No group differences in immediate repetition accuracy were observed for trochaic or iambic nonwords. However, AWS were less accurate when recalling iambic nonwords than trochaic nonwords in the absence of auditory-orthographic cues. Conclusions: Manipulating two factors that may minimize phonological demand during standard nonword repetition tasks increased the number of errors in AWS compared to AWNS. These findings suggest greater vulnerability of phonological working memory in AWS, even when producing nonwords as short as two syllables.
Affiliation(s)
- Geoffrey A. Coalson
- Department of Communication Sciences and Disorders, Louisiana State University, Baton Rouge, Louisiana, United States of America
- Courtney T. Byrd
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, Texas, United States of America
12. Gauvin HS, Mertens J, Mariën P, Santens P, Pickut BA, Hartsuiker RJ. Verbal monitoring in Parkinson's disease: A comparison between internal and external monitoring. PLoS One 2017;12:e0182159. [PMID: 28832595; PMCID: PMC5568285; DOI: 10.1371/journal.pone.0182159]
Abstract
Patients with Parkinson's disease (PD) display a variety of impairments in motor and non-motor language processes; speech is degraded on motor aspects such as amplitude, prosody, and speed, and on linguistic aspects including grammar and fluency. Here we investigated whether verbal monitoring is impaired in patients with PD relative to controls, and what the relative contributions of the internal and external monitoring routes to verbal monitoring are. Furthermore, the data were used to investigate whether internal monitoring performance could be predicted by internal speech perception tasks, as perception-based monitoring theories assume. Performance of 18 patients with Parkinson's disease was measured on two cognitive performance tasks and a battery of 11 linguistic tasks, including tasks that measured performance on internal and external monitoring. Results were compared with those of 16 age-matched healthy controls. PD patients and controls generally performed similarly on the linguistic and monitoring measures. However, we observed qualitative differences in the effects of noise masking on monitoring and disfluencies, and in the extent to which the linguistic tasks predicted monitoring behavior. We suggest that the patients differ from healthy subjects in their recruitment of monitoring channels.
Affiliation(s)
- Hanna S. Gauvin
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- School of Psychology and Counselling, Queensland University of Technology, Brisbane, Australia
- Jolien Mertens
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Peter Mariën
- Clinical and Experimental Neurolinguistics, Vrije Universiteit Brussel, Brussels, Belgium
- Department of Neurology, ZNA-Middelheim Hospital, Antwerp, Belgium
- Patrick Santens
- Department of Neurology, Ghent University Hospital, Ghent University, Ghent, Belgium
- Barbara A. Pickut
- University of Antwerp, Wilrijk, Belgium
- Department of Neurology, Antwerp University Hospital, Edegem, Belgium
- Mercy Health Saint Mary's Hauenstein Neurosciences, Grand Rapids, Michigan, United States of America
- Michigan State University, College of Human Medicine, Department of Translational Science and Molecular Medicine, Grand Rapids, Michigan, United States of America
13. Ivanova I, Ferreira VS, Gollan TH. Form Overrides Meaning When Bilinguals Monitor for Errors. J Mem Lang 2017;94:75-102. [PMID: 28649169; PMCID: PMC5478198; DOI: 10.1016/j.jml.2016.11.004]
Abstract
Bilinguals rarely produce unintended language switches, which may in part be because switches are detected and corrected by an internal monitor. But are language switches easier or harder to detect than within-language semantic errors? To approximate internal monitoring, bilinguals listened to (Experiment 1) or read aloud (Experiment 2) stories, and detected language switches (translation equivalents or semantically unrelated to expected words) and within-language errors (semantically related or unrelated to expected words). Bilinguals detected semantically related within-language errors most slowly and least accurately, detected language switches more quickly and accurately than within-language errors, and, in Experiment 2, detected translation equivalents as quickly and accurately as unrelated language switches. These results suggest that internal monitoring of form (which can detect mismatches in language membership) completes earlier than, and is independent of, monitoring of meaning. However, analysis of reading times prior to error detection revealed meaning violations to be more disruptive for processing than language violations.
Affiliation(s)
- Iva Ivanova
- Department of Psychology, University of California, San Diego
- Department of Psychiatry, University of California, San Diego
- Tamar H. Gollan
- Department of Psychiatry, University of California, San Diego
14. Coalson GA, Byrd CT. Metrical Encoding in Adults Who Do and Do Not Stutter. J Speech Lang Hear Res 2015;58:601-621. [PMID: 25679444; DOI: 10.1044/2015_jslhr-s-14-0111]
Abstract
PURPOSE: The purpose of this study was to explore metrical aspects of phonological encoding (i.e., stress and syllable boundary assignment) in adults who do and do not stutter (AWS and AWNS, respectively). METHOD: Participants monitored nonwords for target sounds during silent phoneme monitoring tasks across two distinct experiments. In Experiment 1, 22 participants (11 AWNS, 11 AWS) silently monitored target phonemes in nonwords with initial stress. In Experiment 2, an additional cohort of 22 participants (11 AWNS, 11 AWS) silently monitored phonemes in nonwords with noninitial stress. RESULTS: In Experiment 1, AWNS and AWS silently monitored target phonemes in initial-stress stimuli with similar speed and accuracy. In Experiment 2, AWS demonstrated a within-group effect that was not present for AWNS: they required additional time when monitoring phonemes immediately following syllable boundary assignment in stimuli with noninitial stress. There was also a between-groups effect, with AWS making significantly more errors than AWNS when identifying phonemes in nonwords with noninitial stress. CONCLUSIONS: Findings suggest metrical properties may affect the time course of phonological encoding in AWS in a manner distinct from AWNS. Specifically, in the absence of initial stress, metrical encoding of the syllable boundary may delay speech planning in AWS and contribute to breakdowns in fluent speech production.
15. Manoiloff L, Segui J, Hallé P. Subliminal repetition primes help detection of phonemes in a picture: Evidence for a phonological level of the priming effects. Q J Exp Psychol (Hove) 2015;69:24-36. [PMID: 25679503; DOI: 10.1080/17470218.2015.1018836]
Abstract
In this research, we combined a cross-form word-picture visual masked priming procedure with an internal phoneme monitoring task to examine repetition priming effects. In this paradigm, participants have to respond to pictures whose names begin with a prespecified target phoneme. This task unambiguously requires retrieving the word-form of the target picture's name and implicitly orients participants' attention towards a phonological level of representation. The experiments were conducted in Spanish, whose highly transparent orthography presumably promotes fast and automatic phonological recoding of subliminal, masked visual word primes. Experiments 1 and 2 show that repetition primes speed up internal phoneme monitoring in the target, compared to primes beginning with a different phoneme from the target or sharing only their first phoneme with the target. This suggests that repetition primes preactivate the phonological code of the entire target picture's name, thereby speeding up internal monitoring, which is necessarily based on such a code. To further qualify the nature of the phonological code underlying internal phoneme monitoring, a concurrent articulation task was used in Experiment 3. This task did not affect the repetition priming effect. We propose that internal phoneme monitoring is based on an abstract phonological code, prior to its translation into articulation.
Affiliation(s)
- Laura Manoiloff
- Equipo de Investigación de Psicología Cognitiva del Lenguaje y Psicolingüística, Laboratorio de Psicología Cognitiva, Universidad Nacional de Córdoba, Córdoba, Argentina
- Juan Segui
- Laboratoire Mémoire et Cognition (INSERM - Paris 5) and CNRS, Paris, France
- Labex EFL, Paris, France
- Pierre Hallé
- Laboratoire Mémoire et Cognition (INSERM - Paris 5) and CNRS, Paris, France
- Laboratoire de Phonétique et Phonologie (CNRS - Paris 3), Paris, France
- Labex EFL, Paris, France
16. Lind A, Hall L, Breidegard B, Balkenius C, Johansson P. Auditory feedback of one's own voice is used for high-level semantic monitoring: the "self-comprehension" hypothesis. Front Hum Neurosci 2014;8:166. [PMID: 24734014; PMCID: PMC3975125; DOI: 10.3389/fnhum.2014.00166]
Abstract
What would it be like if we said one thing, and heard ourselves saying something else? Would we notice something was wrong? Or would we believe we said the thing we heard? Is feedback of our own speech only used to detect errors, or does it also help to specify the meaning of what we say? Comparator models of self-monitoring favor the first alternative, and hold that our sense of agency is given by the comparison between intentions and outcomes, while inferential models argue that agency is a more fluent construct, dependent on contextual inferences about the most likely cause of an action. In this paper, we present a theory about the use of feedback during speech. Specifically, we discuss inferential models of speech production that question the standard comparator assumption that the meaning of our utterances is fully specified before articulation. We then argue that auditory feedback provides speakers with a channel for high-level, semantic “self-comprehension”. In support of this we discuss results using a method we recently developed called Real-time Speech Exchange (RSE). In our first study using RSE (Lind et al., in press) participants were fitted with headsets and performed a computerized Stroop task. We surreptitiously recorded words they said, and later in the test we played them back at the exact same time that the participants uttered something else, while blocking the actual feedback of their voice. Thus, participants said one thing, but heard themselves saying something else. The results showed that when timing conditions were ideal, more than two thirds of the manipulations went undetected. Crucially, in a large proportion of the non-detected manipulated trials, the inserted words were experienced as self-produced by the participants. This indicates that our sense of agency for speech has a strong inferential component, and that auditory feedback of our own voice acts as a pathway for semantic monitoring. We believe RSE holds great promise as a tool for investigating the role of auditory feedback during speech, and we suggest a number of future studies to serve this purpose.
Affiliation(s)
- Andreas Lind
- Department of Philosophy, Lund University Cognitive Science, Lund University, Lund, Sweden
- Lars Hall
- Department of Philosophy, Lund University Cognitive Science, Lund University, Lund, Sweden
- Björn Breidegard
- Certec - Division of Rehabilitation Engineering Research, Department of Design Sciences, Lund University, Lund, Sweden
- Christian Balkenius
- Department of Philosophy, Lund University Cognitive Science, Lund University, Lund, Sweden
- Petter Johansson
- Department of Philosophy, Lund University Cognitive Science, Lund University, Lund, Sweden; Swedish Collegium for Advanced Study, Linneanum, Uppsala University, Uppsala, Sweden
17. Hickok G. The architecture of speech production and the role of the phoneme in speech processing. Lang Cogn Process 2014;29:2-20. [PMID: 24489420; PMCID: PMC3904400; DOI: 10.1080/01690965.2013.834370]
Abstract
Speech production has been studied within a number of traditions, including linguistics, psycholinguistics, motor control, neuropsychology, and neuroscience. These traditions have had limited interaction, ostensibly because they target different levels of speech production or different dimensions such as representation, processing, or implementation. However, closer examination reveals a substantial convergence of ideas across the traditions, and recent proposals have suggested that an integrated approach may help move the field forward. The present article reviews one such attempt at integration, the state feedback control model and its descendant, the hierarchical state feedback control model. Also considered is how phoneme-level representations might fit in the context of the model.
Affiliation(s)
- Gregory Hickok
- Department of Cognitive Sciences, University of California, Irvine, California, 92697, USA
18. Gauvin HS, Hartsuiker RJ, Huettig F. Speech monitoring and phonologically-mediated eye gaze in language perception and production: a comparison using printed word eye-tracking. Front Hum Neurosci 2013;7:818. [PMID: 24339809; PMCID: PMC3857580; DOI: 10.3389/fnhum.2013.00818]
Abstract
The Perceptual Loop Theory of speech monitoring assumes that speakers routinely inspect their inner speech. In contrast, Huettig and Hartsuiker (2010) observed that listening to one's own speech during language production drives eye movements to phonologically related printed words with a similar time course as listening to someone else's speech does in speech perception experiments. This suggests that speakers use their speech perception system to listen to their own overt speech, but not to their inner speech. However, a direct comparison between production and perception with the same stimuli and participants has so far been lacking. The current printed-word eye-tracking experiment therefore used a within-subjects design combining production and perception. Displays showed four words, of which one, the target, either had to be named or was presented auditorily. Accompanying words were phonologically related, semantically related, or unrelated to the target. There were small increases in looks to phonological competitors, with a similar time course, in both production and perception. Phonological effects in perception, however, lasted longer and had a much larger magnitude. We conjecture that this difference is related to a difference in the predictability of one's own and someone else's speech, which in turn has consequences for lexical competition in other-perception and possibly suppression of activation in self-perception.
Affiliation(s)
- Hanna S Gauvin
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
20. Hutson J, Damian MF, Spalek K. Distractor frequency effects in picture–word interference tasks with vocal and manual responses. Lang Cogn Process 2013. [DOI: 10.1080/01690965.2011.605599]
21. Piai V, Roelofs A, van der Meij R. Event-related potentials and oscillatory brain responses associated with semantic and Stroop-like interference effects in overt naming. Brain Res 2012;1450:87-101. [DOI: 10.1016/j.brainres.2012.02.050]
22.
Abstract
Speech production has been studied predominantly from within two traditions, psycholinguistics and motor control. These traditions have rarely interacted, and the resulting chasm between these approaches seems to reflect a level of analysis difference: whereas motor control is concerned with lower-level articulatory control, psycholinguistics focuses on higher-level linguistic processing. However, closer examination of both approaches reveals a substantial convergence of ideas. The goal of this article is to integrate psycholinguistic and motor control approaches to speech production. The result of this synthesis is a neuroanatomically grounded, hierarchical state feedback control model of speech production.
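To make the control-theoretic vocabulary concrete, here is a minimal, hypothetical sketch of a state feedback control loop for a single articulatory dimension. It illustrates the general idea only, not the model's actual implementation: the controller issues corrections based on an internal state estimate that a forward model updates immediately, with slower sensory feedback folded in to keep the estimate calibrated.

```python
import random

def state_feedback_loop(target: float, steps: int = 12, gain: float = 0.5,
                        motor_noise: float = 0.05, k_sense: float = 0.3) -> None:
    """Toy one-dimensional state feedback controller (illustrative only)."""
    state = 0.0      # actual articulatory/acoustic state of the "plant"
    estimate = 0.0   # internal state estimate maintained by the forward model
    for t in range(steps):
        # Control acts on the internal estimate, not on slow overt feedback.
        command = gain * (target - estimate)
        state += command + random.gauss(0.0, motor_noise)  # plant + motor noise
        estimate += command                  # forward model predicts the effect
        sensed = state                       # sensory feedback (idealized here)
        estimate += k_sense * (sensed - estimate)  # correct the estimate
        print(f"t={t:2d}  state={state:+.3f}  estimate={estimate:+.3f}")

random.seed(1)
state_feedback_loop(target=1.0)
```

The hierarchical version described in the abstract can be thought of, roughly, as stacking such loops, with higher-level (e.g., syllable-level) targets setting the reference signals for lower-level loops.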
23. Geva S, Jones PS, Crinion JT, Price CJ, Baron JC, Warburton EA. The neural correlates of inner speech defined by voxel-based lesion-symptom mapping. Brain 2011;134:3071-82. [PMID: 21975590; PMCID: PMC3187541; DOI: 10.1093/brain/awr232]
Abstract
The neural correlates of inner speech have been investigated previously using functional imaging. However, methodological and other limitations have so far precluded a clear description of the neural anatomy of inner speech and its relation to overt speech. Specifically, studies that examine only inner speech often fail to control for subjects' behaviour in the scanner and therefore cannot determine the relation between inner and overt speech. Functional imaging studies comparing inner and overt speech have not produced replicable results, and some share the methodological caveats of studies examining only inner speech. Lesion analysis can avoid the methodological pitfalls associated with using inner and overt speech in functional imaging studies, while at the same time providing important data about the neural correlates essential for the specific function. Despite these advantages, a lesion-analysis study of the neural correlates of inner speech had not been carried out before. In this study, 17 patients with chronic post-stroke aphasia performed inner speech tasks (rhyme and homophone judgements) and overt speech tasks (reading aloud). The relationship between brain structure and language ability was studied using voxel-based lesion-symptom mapping. This showed that inner speech abilities were affected by lesions to the left pars opercularis in the inferior frontal gyrus and to the white matter adjacent to the left supramarginal gyrus, over and above overt speech production and working memory. These results suggest that inner speech cannot be assumed to be simply overt speech without a motor component. They also suggest that using overt speech to draw conclusions about inner speech, and vice versa, might be misleading, both in imaging studies and in clinical practice.
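For readers unfamiliar with the technique, voxel-based lesion-symptom mapping is, at its core, a mass-univariate comparison: at every voxel, patients with and without a lesion there are compared on the behavioral score. The sketch below is a bare-bones illustration with made-up data; the published analysis additionally controls for covariates (overt speech, working memory) and corrects for multiple comparisons.

```python
import numpy as np
from scipy.stats import ttest_ind

def vlsm(lesions: np.ndarray, scores: np.ndarray, min_n: int = 3) -> np.ndarray:
    """Mass-univariate VLSM sketch.

    lesions: (n_patients, n_voxels) binary masks (1 = voxel lesioned)
    scores:  (n_patients,) behavioral scores (e.g., rhyme judgement accuracy)
    Returns per-voxel t-values (spared minus lesioned); NaN where a group
    is too small to test.
    """
    t_map = np.full(lesions.shape[1], np.nan)
    for v in range(lesions.shape[1]):
        lesioned = scores[lesions[:, v] == 1]
        spared = scores[lesions[:, v] == 0]
        if len(lesioned) >= min_n and len(spared) >= min_n:
            t_map[v] = ttest_ind(spared, lesioned, equal_var=False).statistic
    return t_map

# Hypothetical demo: 17 patients, 500 voxels, random data.
rng = np.random.default_rng(42)
t_map = vlsm(rng.integers(0, 2, size=(17, 500)),
             rng.normal(loc=0.8, scale=0.1, size=17))
print(f"largest deficit-associated t-value: {np.nanmax(t_map):.2f}")
```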
Affiliation(s)
- Sharon Geva
- Department of Clinical Neurosciences, University of Cambridge, R3 Neurosciences, Addenbrooke's Hospital, Cambridge CB2 0QQ, UK.
24. Nozari N, Dell GS, Schwartz MF. Is comprehension necessary for error detection? A conflict-based account of monitoring in speech production. Cogn Psychol 2011;63:1-33. [PMID: 21652015; PMCID: PMC3135428; DOI: 10.1016/j.cogpsych.2011.05.001]
Abstract
Despite the existence of speech errors, verbal communication is successful because speakers can detect (and correct) their errors. The standard theory of speech-error detection, the perceptual-loop account, posits that the comprehension system monitors production output for errors. Such a comprehension-based monitor, however, cannot explain the double dissociation between comprehension and error-detection ability observed in aphasic patients. We propose a new theory of speech-error detection that is instead based on the production process itself. The theory borrows from studies of forced-choice-response tasks the notion that error detection is accomplished by monitoring response conflict via a frontal brain structure, such as the anterior cingulate cortex. We adapt this idea to the two-step model of word production and test the model-derived predictions on a sample of aphasic patients. Our results show a strong correlation between patients' error-detection ability and the model's characterization of their production skills, and no significant correlation between error detection and comprehension measures, thus supporting a production-based monitor in general and the implemented conflict-based monitor in particular. The successful application of the conflict-based theory to error detection in linguistic as well as non-linguistic domains points to a domain-general monitoring system.
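As a concrete illustration of the core quantity in this account, the sketch below computes a conflict signal over co-active word candidates, using the Hopfield-energy style measure familiar from conflict-monitoring work (e.g., Botvinick and colleagues). The layer structure, activation values, and detection threshold are all hypothetical, not Nozari et al.'s actual implementation.

```python
import numpy as np

def conflict(activations: np.ndarray) -> float:
    """Energy-style conflict: sum of pairwise products of co-active,
    mutually incompatible response candidates (each pair counted once)."""
    a = np.asarray(activations, dtype=float)
    return float((a.sum() ** 2 - (a ** 2).sum()) / 2.0)

# Word-layer activations after lemma selection (hypothetical values).
trials = {
    "clean":       np.array([0.90, 0.10, 0.05]),  # "cat" clearly wins
    "error-prone": np.array([0.55, 0.50, 0.40]),  # several candidates compete
}

THRESHOLD = 0.5  # hypothetical detection criterion standing in for the ACC
for label, acts in trials.items():
    c = conflict(acts)
    verdict = "flag likely error" if c > THRESHOLD else "ok"
    print(f"{label}: conflict = {c:.3f} -> {verdict}")
```

On these toy numbers the clean trial yields a conflict of about 0.14 and the error-prone trial about 0.70, so only the latter crosses the criterion: high conflict at a production layer is what signals a likely error, with no appeal to comprehension.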
Affiliation(s)
- Nazbanou Nozari
- Beckman Institute, University of Illinois at Urbana-Champaign, 405 N. Matthews Ave., Urbana, IL 61801, USA.
25. Roux S, Bonin P. Comment l'information circule d'un niveau de traitement à l'autre lors de l'accès lexical en production verbale de mots ? Éléments de synthèse. [How does information flow from one processing level to the next during lexical access in spoken word production? A synthesis.] L'Année Psychologique 2011. [DOI: 10.3917/anpsy.111.0145]
26.
Abstract
Inner speech is typically characterized as either the activation of abstract linguistic representations or a detailed articulatory simulation that lacks only the production of sound. We present a study of the speech errors that occur during the inner recitation of tongue-twister-like phrases. Two forms of inner speech were tested: inner speech without articulatory movements and articulated (mouthed) inner speech. Although mouthing one's inner speech could reasonably be assumed to require more articulatory planning, prominent theories assume that such planning should not affect the experience of inner speech and, consequently, the errors that are "heard" during its production. The errors occurring in articulated inner speech exhibited the phonemic similarity effect and the lexical bias effect, two speech-error phenomena that, in overt speech, have been localized to an articulatory-feature-processing level and a lexical-phonological level, respectively. In contrast, errors in unarticulated inner speech did not exhibit the phonemic similarity effect, only the lexical bias effect. The results are interpreted as support for a flexible abstraction account of inner speech. This conclusion has ramifications for the embodiment of language and speech and for theories of speech production.
Affiliation(s)
- Gary M Oppenheim
- Beckman Institute, University of Illinois, 405 North Mathews Avenue, Urbana, IL 61801, USA.
27. Huettig F, Hartsuiker RJ. Listening to yourself is like listening to others: External, but not internal, verbal self-monitoring is based on speech perception. Lang Cogn Process 2010. [DOI: 10.1080/01690960903046926]
28. Hinojosa JA, Méndez-Bértolo C, Carretié L, Pozo MA. Emotion modulates language production during covert picture naming. Neuropsychologia 2010;48:1725-34. [PMID: 20188114; DOI: 10.1016/j.neuropsychologia.2010.02.020]
Abstract
Previous studies have shown that emotional content modulates the activity of several components of the event-related potentials during word comprehension. However, little is known about the impact of affective information on the different processing stages involved in word production. In the present study we aimed to investigate the influence of positive and negative emotions on phonological encoding, a process that has been shown to take place between 300 and 450 ms in previous studies. Participants performed letter searching in a picture naming task. Grapheme monitoring in positive and negative picture names was associated with slower reaction times and enhanced amplitudes of a positive component around 400 ms, as compared to monitoring letters in neutral picture names. We propose that this modulation reflects a disruption of phonological encoding processes as a consequence of the capture of attention by affective content. Grapheme monitoring in positive picture names also elicited higher amplitudes than letter searching in neutral image names in a positive component around 100 ms. This amplitude enhancement might be interpreted as a manifestation of the 'positive offset' during conceptual preparation processes. The results of a control experiment with a passive viewing task showed that neither effect can be attributed simply to the processing of the emotional images per se. Overall, it appears that emotion modulates word production at several processing stages.
Affiliation(s)
- José A Hinojosa
- Instituto Pluridisciplinar, Universidad Complutense de Madrid, 28040 Madrid, Spain.
29. Severens E, Hartsuiker RJ. Is there a lexical bias effect in comprehension monitoring? Lang Cogn Process 2009. [DOI: 10.1080/01690960902775517]
30. Zhang Q, Damian MF. The time course of segment and tone encoding in Chinese spoken production: an event-related potential study. Neuroscience 2009;163:252-65. [PMID: 19524018; DOI: 10.1016/j.neuroscience.2009.06.015]
Abstract
The present study investigated the time course of segment and tone encoding in Chinese spoken production in an event-related brain potential (ERP) experiment. Native Chinese speakers viewed a series of pictures and made Go/noGo decisions based on the segmental onset or tone information of the picture names. Behavioral data and the onset latency of the N200 effect indicated that segmental information became available prior to tonal information. Moreover, the scalp distributions and onset latency patterns of the N200 effect for segmental and tonal decisions suggest that segmental and metrical encoding are relatively dissociated in Chinese spoken production. Our findings provide additional evidence from Chinese, a non-alphabetic language, bearing on theories of phonological encoding that are based on alphabetic languages.
Affiliation(s)
- Q Zhang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Datun Road 10A, Beijing, 100101, China.
31. The time course of semantic and orthographic encoding in Chinese word production: an event-related potential study. Brain Res 2009;1273:92-105. [PMID: 19344700; DOI: 10.1016/j.brainres.2009.03.049]
Abstract
Previous studies have shown that access to conceptual/semantic information precedes phonological access in the production of alphabetic languages such as English or Dutch. The present study investigated the time course of semantic and orthographic encoding in Chinese (a non-alphabetic language) spoken word production. Participants were shown pictures and carried out a dual-choice go/nogo task based on semantic and orthographic information. The results for the N200 (related to response inhibition) and the LRP (related to response preparation) indicated that semantic access preceded orthographic encoding by 176-202 ms. The different patterns of the two N200 effects suggest that they may tap into different processes. The N200 and LRP analyses also indicate that accessing the orthographic representation in speaking is likely optional and depends on specific task requirements.
32. Stadthagen-Gonzalez H, Damian MF, Pérez MA, Bowers JS, Marín J. Name-picture verification as a control measure for object naming: a task analysis and norms for a large set of pictures. Q J Exp Psychol (Hove) 2008;62:1581-97. [PMID: 19123116; DOI: 10.1080/17470210802511139]
Abstract
The name-picture verification task is widely used in spoken production studies to control for nonlexical differences between picture sets. In this task, a word is presented first and followed, after a pause, by a picture. Participants must then make a speeded decision on whether the word and picture refer to the same object. Using regression analyses, we systematically explored the characteristics of this task by assessing the independent contributions of a series of factors that have been found relevant for picture naming in previous studies. We found that, for "match" responses, both visual and conceptual factors played a role, but lexical variables were not significant contributors. No clear pattern emerged from the analysis of "no-match" responses. We interpret these results as validating the use of "match" latencies as control variables in studies of spoken production using picture naming. Norms for match and no-match responses for 396 line drawings taken from Cycowicz, Friedman, Rothstein, and Snodgrass (1997) can be downloaded at: http://language.psy.bris.ac.uk/name-picture_verification.html.