1. Special issue in honor of Jacques Mehler, Cognition's founding editor. Cognition 2021; 213:104786. [PMID: 34116795] [DOI: 10.1016/j.cognition.2021.104786]

2. Twelve to 24-month-olds can understand the meaning of morphological regularities in their language. Dev Psychol 2019; 56:40-52. [PMID: 31789528] [DOI: 10.1037/dev0000845]
Abstract
To learn a language, infants must learn to link arbitrary sounds to their meaning. While words are the clearest example of this link, they are not the only component of language; morphological regularities (e.g., the plural -s suffix in English) carry meaning as well. Comprehensive theories of language acquisition must account for how infants build links between these other parts of language and their meaning. Here, we investigated the acquisition of morphology in infants learning Italian, a language with a rich inflectional morphology that marks both gender and number on both the article and the final vowel of nouns. We demonstrate that infants can build these links between concepts and morphological regularities much earlier than previously thought. Italian-learning 12-, 18-, and 24-month-olds were shown pairs of images of faces that differed either in number (1 female vs. 2 females; 1 male vs. 2 males) or gender (1 female vs. 1 male; 2 females vs. 2 males). On each trial, infants were directed to look at one of the images, with the morphological regularities as the only distinguishing cue. Overall, across all ages, the infants looked to the labeled image, indicating that they had at least some understanding of the morphology. While infants succeeded on both gender comparisons, they only showed evidence of understanding the feminine number distinction. These results indicate that in the early stages of language acquisition, infants are able to identify recurring morphemes and to map those morphological regularities onto the concepts that they mark in the language.

3. Newborns are sensitive to multiple cues for word segmentation in continuous speech. Dev Sci 2019; 22:e12802. [PMID: 30681763] [DOI: 10.1111/desc.12802]
Abstract
Before infants can learn words, they must identify those words in continuous speech. Yet the speech signal lacks obvious boundary markers, which poses a potential problem for language acquisition (Swingley, Philos Trans R Soc Lond B Biol Sci 364(1536), 3617-3632, 2009). By the middle of the first year, infants seem to have solved this problem (Bergelson & Swingley, Proc Natl Acad Sci 109(9), 3253-3258, 2012; Jusczyk & Aslin, Cogn Psychol 29, 1-23, 1995), but it is unknown whether segmentation abilities are present from birth, or whether they only emerge after sufficient language exposure and/or brain maturation. Here, in two independent experiments, we looked at two cues known to be crucial for the segmentation of human speech: the computation of statistical co-occurrences between syllables and the use of the language's prosody. After a brief familiarization of about 3 min with continuous speech, neonates tested with functional near-infrared spectroscopy showed differential brain responses on a recognition test to words that violated either the statistical (Experiment 1) or prosodic (Experiment 2) boundaries of the familiarization, compared to words that conformed to those boundaries. Importantly, word recognition in Experiment 2 occurred even in the absence of prosodic information at test, meaning that newborns encoded the phonological content independently of its prosody. These data indicate that humans are born with operational language-processing and memory capacities and can use at least two types of cues to segment otherwise continuous speech, a key first step in language acquisition.

4.
Abstract
Research has demonstrated distinct roles for consonants and vowels in speech processing. For example, consonants have been shown to support lexical processes, such as the segmentation of speech based on transitional probabilities (TPs), more effectively than vowels. Theory and data so far, however, have considered only non-tone languages, that is, languages that lack contrastive lexical tones. In the present work, we provide a first investigation of the role of consonants and vowels in statistical speech segmentation by native speakers of Cantonese, and assess how tones modulate the processing of vowels. Results show that Cantonese speakers are unable to use statistical cues carried by consonants for segmentation, but they can use cues carried by vowels. This difference becomes more evident when considering tone-bearing vowels. Additional data from speakers of Russian and Mandarin suggest that the ability of Cantonese speakers to segment streams with statistical cues carried by tone-bearing vowels extends to other tone languages, but is much reduced in speakers of non-tone languages.

5. Rhythm in language acquisition. Neurosci Biobehav Rev 2017; 81:158-166. [DOI: 10.1016/j.neubiorev.2016.12.012]

6.

7.
Abstract
The Iambic-Trochaic Law (ITL) accounts for speech rhythm, grouping sounds as iambs when they alternate in duration and as trochees when they alternate in pitch and/or intensity. The two different rhythms signal word order, one of the basic syntactic properties of language. We investigated the extent to which iambic and trochaic phrases could be recognized auditorily and visually, when visual stimuli engage lip reading. Our results show that both rhythmic patterns were recognized from both auditory and visual stimuli, suggesting that speech rhythm has a multimodal representation. We further explored whether participants could match iambic and trochaic phrases across the two modalities. We found that participants auditorily familiarized with trochees, but not with iambs, were more accurate in recognizing visual targets, while participants visually familiarized with iambs, but not with trochees, were more accurate in recognizing auditory targets. The latter results suggest an asymmetric processing of speech rhythm: in the auditory domain, changes in pitch or intensity are better perceived and represented than changes in duration, while in the visual domain changes in duration are better processed and represented than changes in pitch, raising important questions about domain-general and specialized mechanisms for speech rhythm processing.

8. Infants Selectively Pay Attention to the Information They Receive from a Native Speaker of Their Language. Front Psychol 2016; 7:1150. [PMID: 27536263] [PMCID: PMC4971095] [DOI: 10.3389/fpsyg.2016.01150]
Abstract
From the first moments of their life, infants show a preference for their native language, as well as for speakers with whom they share the same language. This preference appears to have broad consequences in various domains later on, supporting group affiliations and collaborative actions in children. Here, we propose that infants' preference for native speakers of their language also serves a further purpose, specifically allowing them to efficiently acquire culture-specific knowledge via social learning. By selectively attending to informants who are native speakers of their language, and who probably also share the same cultural background with the infant, young learners can maximize their chances of acquiring cultural knowledge. To test whether infants preferentially attend to the information they receive from a speaker of their native language, we familiarized 12-month-old infants with a native and a foreign speaker, and then presented them with movies in which each of the speakers silently gazed toward unfamiliar objects. At test, infants' looking behavior toward the two objects alone was measured. Results revealed that infants preferred to look longer at the object presented by the native speaker. Strikingly, the effect was replicated with 5-month-old infants, indicating an early development of such a preference. These findings provide evidence that young infants pay more attention to information presented by a person with whom they share the same language. This selectivity can serve as a basis for efficient social learning by influencing how infants allocate attention between potential sources of information in their environment.

9.
Abstract
Humans share with non-human animals perceptual biases that might form the basis of complex cognitive abilities. One example comes from the principles described by the iambic-trochaic law (ITL). According to the ITL, sequences of sounds varying in duration are grouped as iambs, whereas sequences varying in intensity are grouped as trochees. These grouping biases have gained much attention because they might help pre-lexical infants bootstrap syntactic parameters (such as word order) in their language. Here, we explore how experience triggers the emergence of perceptual grouping biases in a non-human species. We familiarized rats with either long-short or short-long tone pairs. We then trained the animals to discriminate between sequences of alternating and randomly ordered tones. Results showed that the animals developed a grouping bias consistent with the exposure they had received. Together with results observed in human adults and infants, these findings suggest that experience modulates perceptual organizing principles that are present across species.

10.

11. Co-occurrence statistics as a language-dependent cue for speech segmentation. Dev Sci 2016; 20. [PMID: 27146310] [DOI: 10.1111/desc.12390]
Abstract
To what extent can language acquisition be explained in terms of different associative learning mechanisms? It has been hypothesized that distributional regularities in spoken languages are strong enough to elicit statistical learning about dependencies among speech units. Distributional regularities could be a useful cue for word learning even without rich language-specific knowledge. However, it is not clear how strong and reliable the distributional cues are that humans might use to segment speech. We investigate the cross-linguistic viability of different statistical learning strategies by analyzing child-directed speech corpora from nine languages and by modeling possible statistics-based speech segmentations. We show that languages vary as to which statistical segmentation strategies are most successful. The variability of the results can be partially explained by systematic differences between languages, such as rhythmical differences. The results confirm previous findings that different statistical learning strategies are successful in different languages and suggest that infants may have to rely primarily on non-statistical cues when they begin their process of speech segmentation.

12.
Abstract
Our native tongue influences the way we perceive other languages. But does it also determine the way we perceive nonlinguistic sounds? We investigated how speakers of Italian, Turkish, and Persian group sequences of syllables, tones, or visual shapes alternating in either frequency or duration. We found strong native-listening effects with linguistic stimuli: speakers of Italian grouped the linguistic stimuli differently from speakers of Turkish and Persian. However, speakers of all languages showed the same perceptual biases when grouping the nonlinguistic auditory and the visual stimuli. The shared perceptual biases appear to be determined by universal grouping principles, and the linguistic differences by prosodic differences between the languages. Although previous findings suggest that acquired linguistic knowledge can either enhance or diminish the perception of both linguistic and nonlinguistic auditory stimuli, we found no transfer of native-listening effects across auditory domains or perceptual modalities.

13. A new perspective on word order preferences: the availability of a lexicon triggers the use of SVO word order. Front Psychol 2015; 6:1183. [PMID: 26321994] [PMCID: PMC4534792] [DOI: 10.3389/fpsyg.2015.01183]
Abstract
Word orders are not distributed equally: SOV and SVO are the most prevalent among the world's languages. While there is a consensus that SOV might be the “default” order in human languages, the factors that trigger the preference for SVO are still a matter of debate. Here we provide a new perspective on word order preferences that emphasizes the role of the lexicon. We propose that while there is a tendency to favor SOV in improvised communication, exposure to a shared lexicon liberates sufficient cognitive resources to use syntax. Consequently, SVO, the word order more efficient at expressing syntactic relations, emerges. To test this hypothesis, we taught Italian (SVO) and Persian (SOV) speakers a set of gestures and later asked them to describe simple events. Confirming our prediction, results showed that in both groups a consistent use of SVO emerged after the acquisition of a stable gesture repertoire.

14. Can you see what I am talking about? Human speech triggers referential expectation in four-month-old infants. Sci Rep 2015; 5:13594. [PMID: 26323990] [PMCID: PMC4555167] [DOI: 10.1038/srep13594]
Abstract
Infants' ability to selectively attend to human speech and to process it in a unique way has been widely reported in the past. However, in order to successfully acquire language, one should also understand that speech is referential, and that words can stand for other entities in the world. While there has been some evidence showing that young infants can make inferences about the communicative intentions of a speaker, it is still unknown whether they also appreciate the direct relationship between a specific word and its referent. In the present study we tested four-month-old infants to see whether they would expect to find a referent when they hear human speech. Our results showed that, compared to other auditory stimuli or to silence, when infants were listening to speech they were more prepared to find visual referents of the words, as signalled by their faster orienting towards the visual objects. Hence, our study is the first to report evidence that infants at a very young age already understand the referential relationship between auditory words and physical objects, thus showing a precursor of the ability to appreciate the symbolic nature of language, even before they understand the meanings of words.

15. On the edge of language acquisition: inherent constraints on encoding multisyllabic sequences in the neonate brain. Dev Sci 2015; 19:488-503. [DOI: 10.1111/desc.12323]

16. Spontaneous object and movement representations in 4-month-old human infants and albino Swiss mice. Cognition 2015; 137:63-71. [PMID: 25615902] [DOI: 10.1016/j.cognition.2014.12.010]
Abstract
Can young infants decompose visual events into independent representations of objects and movements? Previous studies suggest that human infants may be born with the notion of objects, but there is little evidence for movement representations during the first months of life. We devised a novel Rapid Visual Recognition Procedure to test whether the nervous system is innately disposed toward the conceptual decomposition of visual events. We show that 4-month-old infants can spontaneously build object and movement representations and recognize these in partially matching test events. Albino Swiss mice tested on a comparable procedure could also spontaneously build detailed mental representations of moving objects. Our results dissociate the ability to conceptually decompose physical events into objects and spatio-temporal relations from various types of human- and non-human-specific experience, and suggest that the nervous system is genetically predisposed to anticipate the representation of objects and movements in both humans and non-human species.

17.
Abstract
In everyday life, speech is accompanied by gestures. In the present study, two experiments tested the possibility that spontaneous gestures accompanying speech carry prosodic information. Experiment 1 showed that gestures provide prosodic information: adults were able to perceive the congruency between low-pass filtered (thus unintelligible) speech and the gestures of the speaker. Experiment 2 showed that in the case of ambiguous sentences (i.e., sentences with two alternative meanings depending on their prosody), mismatched prosody and gestures led participants to choose more often the meaning signaled by the gestures. Our results demonstrate that the prosody that characterizes speech is not a modality-specific phenomenon: it is also perceived in the spontaneous gestures that accompany speech. We draw the conclusion that spontaneous gestures and speech form a single communication system in which the suprasegmental aspects of spoken language are mapped onto the motor programs responsible for the production of both speech sounds and hand gestures.

18. Word frequency cues word order in adults: cross-linguistic evidence. Front Psychol 2013; 4:689. [PMID: 24106483] [PMCID: PMC3788341] [DOI: 10.3389/fpsyg.2013.00689]
Abstract
One universal feature of human languages is the division between grammatical functors and content words. From a learnability point of view, functors might provide entry points or anchors into the syntactic structure of utterances due to their high frequency. Despite its potentially universal scope, this hypothesis has not yet been tested on typologically different languages and on populations of different ages. Here we report a corpus study and an artificial grammar learning experiment testing the anchoring hypothesis in Basque, Japanese, French, and Italian adults. We show that adults are sensitive to the distribution of functors in their native language and use them when learning new linguistic material. However, compared to infants' performance on a similar task, adults exhibit a slightly different behavior, matching the frequency distributions of their native language more closely than infants do. This finding bears on the issue of the continuity of language learning mechanisms.

19.
Abstract
Recent research has shown that specific areas of the human brain are activated by speech from the time of birth. However, it is currently unknown whether newborns' brains also encode and remember the sounds of words when processing speech. The present study investigates the type of information that newborns retain when they hear words, and the brain structures that support word-sound recognition. Forty-four healthy newborns were tested with functional near-infrared spectroscopy to establish their ability to memorize the sound of a word and distinguish it from a phonetically similar one, 2 min after encoding. Right frontal regions, comparable to those activated in adults during retrieval of verbal material, showed a characteristic neural signature of recognition when newborns listened to a test word that had the same vowels as a previously heard word. In contrast, a characteristic novelty response was found when a test word had different vowels than the familiar word, despite having the same consonants. These results indicate that the information carried by vowels is better recognized by newborns than the information carried by consonants. Moreover, these data suggest that right frontal areas may support the recognition of speech sequences from the very first stages of language acquisition.

20.

21.
Abstract
Language acquisition involves both acquiring a set of words (i.e., the lexicon) and learning the rules that combine them to form sentences (i.e., syntax). Here, we show that consonants are mainly involved in word processing, whereas vowels are favored for extracting and generalizing structural relations. We demonstrate that such a division of labor between consonants and vowels plays a role in language acquisition. In two very similar experimental paradigms, we show that 12-month-old infants rely more on the consonantal tier when identifying words (Experiment 1), but are better at extracting and generalizing repetition-based structures over the vocalic tier (Experiment 2). These results indicate that infants are able to exploit the functional differences between consonants and vowels at an age when they start acquiring the lexicon, and suggest that basic speech categories are assigned to different learning mechanisms that sustain early language acquisition.

22. Corrigendum to “Cognitive systems struggling for word order” [Cognitive Psychology 60 (2010) 291-318]. Cogn Psychol 2011. [DOI: 10.1016/j.cogpsych.2010.12.001]

23. Acoustic markers of prominence influence infants' and adults' segmentation of speech sequences. Lang Speech 2011; 54:123-140. [PMID: 21524015] [DOI: 10.1177/0023830910388018]
Abstract
Two experiments investigated the way acoustic markers of prominence influence the grouping of speech sequences by adults and 7-month-old infants. In the first experiment, adults were familiarized with and asked to memorize sequences of adjacent syllables that alternated in either pitch or duration. During the test phase, participants heard pairs of syllables with constant pitch and duration and were asked whether the syllables had appeared adjacently during familiarization. Adults were better at remembering pairs of syllables that during familiarization had short syllables preceding long syllables, or high-pitched syllables preceding low-pitched syllables. In the second experiment, infants were familiarized and tested with similar stimuli as in the first experiment, and their preference for pairs of syllables was assessed using the head-turn preference paradigm. When familiarized with syllables alternating in pitch, infants showed a preference to listen to pairs of syllables that had high pitch in the first syllable. However, no preference was found when the familiarization stream alternated in duration. It is proposed that these perceptual biases help infants and adults find linguistic units in the continuous speech stream. While the bias for grouping based on pitch appears early in development, biases for durational grouping might rely on more extensive linguistic experience.

24. How modality specific is the iambic–trochaic law? Evidence from vision. 2011; 37:1199-1208. [DOI: 10.1037/a0023944]

25. Cognitive systems struggling for word order. Cogn Psychol 2010; 60:291-318. [PMID: 20189553] [DOI: 10.1016/j.cogpsych.2010.01.004]
Abstract
We argue that the grammatical diversity observed among the world's languages emerges from the struggle between individual cognitive systems trying to impose their preferred structure on human language. We investigate the cognitive bases of the two most common word orders in the world's languages: SOV (Subject-Object-Verb) and SVO. Evidence from language change, grammaticalization, stability of order, and theoretical arguments indicates a syntactic preference for SVO. The reason for the prominence of SOV languages is not as clear. In two gesture-production experiments and one gesture-comprehension experiment, we show that SOV emerges as the preferred constituent configuration in participants whose native languages (Italian and Turkish) have different word orders. We propose that improvised communication does not rely on the computational system of grammar. The results of a fourth experiment, in which participants comprehended strings of prosodically flat words in their native language, show that the computational system of grammar prefers the orthogonal Verb-Object orders.

26.
Abstract
We have proposed that consonants give cues primarily about the lexicon, whereas vowels carry cues about syntax. In a study supporting this hypothesis, we showed that when segmenting words from an artificial continuous stream, participants compute statistical relations over consonants, but not over vowels. In the study reported here, we tested the symmetrical hypothesis that when participants listen to words in a speech stream, they tend to exploit relations among vowels to extract generalizations, but tend to disregard the same relations among consonants. In our streams, participants could segment words on the basis of transitional probabilities in one tier and could extract a structural regularity in the other tier. Participants used consonants to extract words, but vowels to extract a structural generalization. They were unable to extract the same generalization using consonants, even when word segmentation was facilitated and the generalization made simpler. Our results suggest that different signal-driven computations prime lexical and grammatical processing.

27.
Abstract
This paper reviews studies of language processing with the aim of establishing whether any type of statistical information embedded in linguistic signals can be exploited by the language learner. The constraints on the information that can be so used, we argue, should inform theories of language acquisition. We present two experiments with their respective controls. Both show that consonants (Cs) are much more suitable than vowels (Vs) for parsing speech streams using statistical dependencies. These experiments use streams composed of items in which statistical information is carried either by the sequence of consonants or by the sequence of vowels. Both kinds of items are simultaneously present in the speech stream but, crucially, their overlap is only partial. Since the locations of dips in transitional probabilities (TPs) between adjacent syllables differ for the first and the second type of sequences, we can explore whether consonants and vowels are equally efficient segments for parsing signals. Our results show that "consonant words" (CWs) are significantly preferred over "vowel words" (VWs). We discuss the implications of our results for models of language acquisition.

28. How to hit Scylla without avoiding Charybdis: comment on Perruchet, Tyler, Galland, and Peereman (2004). J Exp Psychol Gen 2006; 135:314-321; discussion 322-326. [PMID: 16719656] [DOI: 10.1037/0096-3445.135.2.314]
Abstract
M. Peña, L. L. Bonatti, M. Nespor, and J. Mehler argued that humans compute nonadjacent statistical relations among syllables in a continuous artificial speech stream to extract words, but they use other computations to determine the structural properties of words. Instead, when participants are familiarized with a segmented stream, structural generalizations about words are quickly established. P. Perruchet, M. D. Tyler, N. Galland, and R. Peereman criticized M. Peña et al.'s work and dismissed their results. In this article, the authors show that P. Perruchet et al.'s criticisms are groundless.

29. An interaction between prosody and statistics in the segmentation of fluent speech. Cogn Psychol 2006; 54:1-32. [PMID: 16782083] [DOI: 10.1016/j.cogpsych.2006.04.002]
Abstract
Sensitivity to prosodic cues might be used to constrain lexical search. Indeed, the prosodic organization of speech is such that words are invariably aligned with phrasal prosodic edges, providing a cue to segmentation. In this paper we devise an experimental paradigm that allows us to investigate the interaction between statistical and prosodic cues to extract words from a speech stream. We provide evidence that statistics over the syllables are computed independently of prosody. However, we also show that trisyllabic sequences with high transition probabilities that straddle two prosodic constituents appear not to be recognized. Taken together, our findings suggest that prosody acts as a filter, suppressing possible word-like sequences that span prosodic constituents.

30. Why is language unique to humans? Novartis Found Symp 2006; 270:251-280; discussion 280-292. [PMID: 16649719]
Abstract
Cognitive neuroscience has focused on language acquisition as one of the main domains in which to test the respective roles of statistical vs. rule-like computation. Recent studies have shown that the brain of human neonates displays a characteristic signature in response to speech sounds even a few hours after birth. This suggests that neuroscience and linguistics converge on the view that, to a large extent, language acquisition arises from our genetic endowment. Our research has also shown how statistical dependencies and the ability to draw structural generalizations are basic processes that interact intimately. First, we explore how the rhythmic properties of language bias word segmentation. Second, we demonstrate that natural speech categories play specific roles during language acquisition: some categories are optimally suited for computing statistical dependencies, while others are optimally suited for the extraction of structural generalizations.
|
31
|
Linguistic constraints on statistical computations: the role of consonants and vowels in continuous speech processing. Psychol Sci 2005; 16:451-9. [PMID: 15943671 DOI: 10.1111/j.0956-7976.2005.01556.x]
Abstract
Speech is produced mainly in continuous streams containing several words. Listeners can use the transitional probability (TP) between adjacent and nonadjacent syllables to segment "words" from a continuous stream of artificial speech, much as they use TPs to organize a variety of perceptual continua. It is thus possible that a general-purpose statistical device exploits any speech unit to achieve segmentation of speech streams. Alternatively, language may limit which representations are open to statistical investigation according to their specific linguistic role. In this article, we focus on vowels and consonants in continuous speech. We hypothesized that vowels and consonants carry different kinds of information, consonants being more tied to word identification and vowels to grammar. We thus predicted that in a word identification task involving continuous speech, learners would track TPs among consonants but not among vowels. Our results show a preferential role for consonants in word identification.
|
32
|
|
33
|
Abstract
Learning a language requires both statistical computations to identify words in speech and algebraic-like computations to discover higher level (grammatical) structure. Here we show that these computations can be influenced by subtle cues in the speech signal. After a short familiarization to a continuous speech stream, adult listeners are able to segment it using powerful statistics, but they fail to extract the structural regularities included in the stream even when the familiarization is greatly extended. With the introduction of subliminal segmentation cues, however, these regularities can be rapidly captured.
|
34
|
Abstract
Spoken languages have been classified by linguists according to their rhythmic properties, and psycholinguists have relied on this classification to account for infants' capacity to discriminate languages. Although researchers have measured many speech signal properties, they have failed to identify reliable acoustic characteristics for language classes. This paper presents instrumental measurements based on a consonant/vowel segmentation for eight languages. The measurements suggest that intuitive rhythm types reflect specific phonological properties, which in turn are signaled by the acoustic/phonetic properties of speech. The data support the notion of rhythm classes and also allow the simulation of infant language discrimination, consistent with the hypothesis that newborns rely on a coarse segmentation of speech. A hypothesis is proposed regarding the role of rhythm perception in language acquisition.
|
35
|
Abstract
We report the case of an aphasic patient who, following an acquired lesion involving the left temporo-parietal cortex, produced many word stress errors in spontaneous speech, naming of objects and reading aloud. The stress impairment concerned exclusively words in which stress was unpredictable on the basis of syllabic structure, and was equally severe in naming and reading aloud. Errors were significantly more frequent in the cases of words with stress on the antepenultimate syllable, and of low frequency words. There was a high consistency between errors in naming and reading aloud. These findings suggest that stress representation can be selectively impaired after brain damage; we hypothesise that a partial disorder at the level of the form lexicon, involving the representation of lexical stress, can account for most of the features of the patient's performance.
|
36
|
|