1
Verhoef T, Marghetis T, Walker E, Coulson S. Brain responses to a lab-evolved artificial language with space-time metaphors. Cognition 2024; 246:105763. [PMID: 38442586] [DOI: 10.1016/j.cognition.2024.105763]
Abstract
What is the connection between the cultural evolution of a language and the rapid processing response to that language in the brains of individual learners? In a previously conducted iterated communication study, participants were asked to communicate temporal concepts such as "tomorrow," "day after," "year," and "past" using vertical movements recorded on a touch screen. Over time, participants developed simple artificial 'languages' that used space metaphorically to communicate in nuanced ways about time. Some conventions appeared rapidly and universally (e.g., using larger vertical movements to convey greater temporal durations). Other conventions required extensive social interaction and exhibited idiosyncratic variation (e.g., using vertical location to convey past or future). Here we investigate whether the brain's response during acquisition of such a language reflects the process by which the language's conventions originally evolved. We recorded participants' EEG as they learned one of these artificial space-time languages. Overall, the brain response to this artificial communication system was language-like, with, for instance, violations of the system's conventions eliciting an N400-like component. Over the course of learning, participants' brain responses developed in ways that paralleled the process by which the language had originally evolved, with early neural sensitivity to violations of a rapidly-evolving universal convention, and slowly developing neural sensitivity to an idiosyncratic convention that required slow social negotiation to emerge. This study opens up exciting avenues of future work to disentangle how neural biases influence learning and transmission in the emergence of structure in language.
Affiliation(s)
- Tessa Verhoef
- Leiden Institute of Advanced Computer Science, Leiden University, Gorlaeus Building, Einsteinweg 55, 2333 CC Leiden, the Netherlands; Department of Cognitive Science, University of California, San Diego, 9500 Gilman Drive, Mail Code 0515, La Jolla, CA 92093-0515, USA
- Tyler Marghetis
- Department of Cognitive and Information Sciences, University of California, Merced, 5200 North Lake Rd., Merced, CA 95343, USA
- Esther Walker
- Department of Cognitive Science, University of California, San Diego, 9500 Gilman Drive, Mail Code 0515, La Jolla, CA 92093-0515, USA
- Seana Coulson
- Department of Cognitive Science, University of California, San Diego, 9500 Gilman Drive, Mail Code 0515, La Jolla, CA 92093-0515, USA
2
Winter B, Lupyan G, Perry LK, Dingemanse M, Perlman M. Iconicity ratings for 14,000+ English words. Behav Res Methods 2024; 56:1640-1655. [PMID: 37081237] [DOI: 10.3758/s13428-023-02112-6]
Abstract
Iconic words and signs are characterized by a perceived resemblance between aspects of their form and aspects of their meaning. For example, in English, iconic words include peep and crash, which mimic the sounds they denote, and wiggle and zigzag, which mimic motion. As a semiotic property of words and signs, iconicity has been demonstrated to play a role in word learning, language processing, and language evolution. This paper presents the results of a large-scale norming study for more than 14,000 English words conducted with over 1400 American English speakers. We demonstrate the utility of these ratings by replicating a number of existing findings showing that iconicity ratings are related to age of acquisition, sensory modality, semantic neighborhood density, structural markedness, and playfulness. We discuss possible use cases and limitations of the rating dataset, which is made publicly available.
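For readers who want to work with the released ratings, a minimal sketch of one of the replications described above (relating iconicity to age of acquisition) might look like the following; the file names and column labels are assumptions for illustration, not the authors' actual release format.

```python
# Sketch: relate word-level iconicity ratings to age-of-acquisition norms.
# File names and column labels ('word', 'iconicity', 'aoa') are assumed here
# for illustration; check the published dataset for its actual format.
import pandas as pd
from scipy.stats import spearmanr

iconicity = pd.read_csv("iconicity_ratings.csv")   # hypothetical export of the norms
aoa = pd.read_csv("aoa_norms.csv")                  # hypothetical AoA norm set

merged = iconicity.merge(aoa, on="word", how="inner")

# Rank correlation is robust to skewed rating distributions.
rho, p = spearmanr(merged["iconicity"], merged["aoa"])
print(f"iconicity vs. AoA: rho = {rho:.2f}, p = {p:.3g}, n = {len(merged)}")
```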
Affiliation(s)
- Bodo Winter
- Department of English Language & Linguistics, University of Birmingham, Birmingham, UK.
- Gary Lupyan
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
- Lynn K Perry
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Mark Dingemanse
- Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
- Marcus Perlman
- Department of English Language & Linguistics, University of Birmingham, Birmingham, UK
3
Pyers JE, Emmorey K. The iconic motivation for the morphophonological distinction between noun-verb pairs in American Sign Language does not reflect common human construals of objects and actions. Language and Cognition 2022; 14:622-644. [PMID: 36426211] [PMCID: PMC9681175] [DOI: 10.1017/langcog.2022.20]
Abstract
Across sign languages, nouns can be derived from verbs through morphophonological changes in movement by (1) movement reduplication and size reduction or (2) size reduction alone. We asked whether these cross-linguistic similarities arise from cognitive biases in how humans construe objects and actions. We tested nonsigners' sensitivity to differences in noun-verb pairs in American Sign Language (ASL) by asking MTurk workers to match images of actions and objects to videos of ASL noun-verb pairs. Experiment 1a's match-to-sample paradigm revealed that nonsigners interpreted all signs, regardless of lexical class, as actions. The remaining experiments used a forced-matching procedure to avoid this bias. Counter to our predictions, nonsigners associated reduplicated movement with actions, not objects (inverting the sign language pattern), and exhibited a minimal bias to associate large movements with actions (as found in sign languages). Whether signs had pantomimic iconicity did not alter nonsigners' judgments. We speculate that the morphophonological distinctions in noun-verb pairs observed in sign languages did not emerge as a result of cognitive biases, but rather as a result of the linguistic pressures of a growing lexicon and the use of space for verbal morphology. Such pressures may override an initial bias to map reduplicated movement to actions, but nevertheless reflect new iconic mappings shaped by linguistic and cognitive experiences.
Affiliation(s)
- Jennie E. Pyers
- Wellesley College, Psychology Department, Wellesley, MA, USA
- Karen Emmorey
- San Diego State University, School of Speech, Language and Hearing Sciences, San Diego, CA, USA
4
Gappmayr P, Lieberman AM, Pyers J, Caselli NK. Do parents modify child-directed signing to emphasize iconicity? Front Psychol 2022; 13:920729. [PMID: 36092032] [PMCID: PMC9453873] [DOI: 10.3389/fpsyg.2022.920729]
Abstract
Iconic signs are overrepresented in the vocabularies of young deaf children, but it is unclear why. It is possible that iconic signs are easier for children to learn, but it is also possible that adults use iconic signs in child-directed signing in ways that make them more learnable, either by using them more often than less iconic signs or by lengthening them. We analyzed videos of naturalistic play sessions between parents and deaf children (n = 24 dyads) aged 9-60 months. To determine whether iconic signs are overrepresented during child-directed signing, we compared the iconicity of actual parent productions to the iconicity of simulated vocabularies designed to estimate chance levels of iconicity. For almost all dyads, parent sign types and tokens were not more iconic than the simulated vocabularies, suggesting that parents do not select more iconic signs during child-directed signing. To determine whether iconic signs are more likely to be lengthened, we ran a linear regression predicting sign duration, and found an interaction between age and iconicity: while parents of younger children produced non-iconic and iconic signs with similar durations, parents of older children produced non-iconic signs with shorter durations than iconic signs. Thus, parents sign more quickly with older children than younger children, and iconic signs appear to resist that reduction in sign length. It is possible that iconic signs are perceptually available longer, and their availability is a candidate hypothesis as to why iconic signs are overrepresented in children's vocabularies.
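The duration analysis described above can be pictured with a short regression sketch; the data file and column names (parent_sign_tokens.csv, duration_ms, child_age_months, iconic) are illustrative assumptions, not the authors' materials.

```python
# Sketch: linear regression of parent sign duration on child age, sign iconicity,
# and their interaction, mirroring the analysis described in the abstract.
# The data file and column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

tokens = pd.read_csv("parent_sign_tokens.csv")  # one row per parent sign token

# A significant age-by-iconicity interaction would indicate that the
# age-related shortening of signs is weaker for iconic signs.
model = smf.ols("duration_ms ~ child_age_months * iconic", data=tokens).fit()
print(model.summary())
```

Given the repeated measures within families, a per-dyad random intercept (e.g., smf.mixedlm with groups set to a dyad identifier) would be a natural extension of this sketch.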
Affiliation(s)
- Paris Gappmayr
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States
- Amy M. Lieberman
- Wheelock College of Education and Human Development, Boston University, Boston, MA, United States
- Jennie Pyers
- Department of Psychology, Wellesley College, Wellesley, MA, United States
- Naomi K. Caselli
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, United States
5
Hofweber JE, Aumonier L, Janke V, Gullberg M, Marshall C. Breaking Into Language in a New Modality: The Role of Input and Individual Differences in Recognising Signs. Front Psychol 2022; 13:895880. [PMID: 35664149] [PMCID: PMC9158439] [DOI: 10.3389/fpsyg.2022.895880]
Abstract
A key challenge when learning language in naturalistic circumstances is to extract linguistic information from a continuous stream of speech. This study investigates the predictors of such implicit learning among adults exposed to a new language in a new modality (a sign language). Sign-naïve participants (N = 93; British English speakers) were shown a 4-min weather forecast in Swedish Sign Language. Subsequently, we tested their ability to recognise 22 target sign forms that had been viewed in the forecast, amongst 44 distractor signs that had not been viewed. The target items differed in their occurrence frequency in the forecast and in their degree of iconicity. The results revealed that both frequency and iconicity facilitated recognition of target signs cumulatively. The adult mechanism for language learning thus operates similarly on sign and spoken languages as regards frequency, but also exploits modality-salient properties, for example iconicity for sign languages. Individual differences in cognitive skills and language learning background did not predict recognition. The properties of the input thus influenced adults’ language learning abilities at first exposure more than individual differences.
Affiliation(s)
- Julia Elisabeth Hofweber
- Department of Psychology and Human Development, Institute of Education, University College London, London, United Kingdom
- Lizzy Aumonier
- School of Cultures and Languages, University of Kent, Canterbury, United Kingdom
- Vikki Janke
- School of Cultures and Languages, University of Kent, Canterbury, United Kingdom
- Chloe Marshall
- Department of Psychology and Human Development, Institute of Education, University College London, London, United Kingdom
6
Affiliation(s)
- E. Mara Green
- Department of Anthropology, Barnard College, Columbia University
7
Bradley C, Malaia EA, Siskind JM, Wilbur RB. Visual form of ASL verb signs predicts non-signer judgment of transitivity. PLoS One 2022; 17:e0262098. [PMID: 35213558] [PMCID: PMC8880903] [DOI: 10.1371/journal.pone.0262098]
Abstract
Longstanding cross-linguistic work on event representations in spoken languages has argued for a robust mapping between an event's underlying representation and its syntactic encoding, such that, for example, the agent of an event is most frequently mapped to subject position. In the same vein, sign languages have long been claimed to construct signs that visually represent their meaning, i.e., signs that are iconic. Experimental research on linguistic parameters such as plurality and aspect has recently shown some of them to be visually universal in sign, i.e., recognized by non-signers as well as signers, and has identified specific visual cues that achieve this mapping. However, little is known about what makes action representations in sign language iconic, or whether and how the mapping of underlying event representations to syntactic encoding is visually apparent in the form of a verb sign. To this end, we asked what visual cues non-signers may use in evaluating transitivity (i.e., the number of entities involved in an action). To do this, we correlated non-signer judgments about transitivity of verb signs from American Sign Language (ASL) with phonological characteristics of these signs. We found that non-signers did not accurately guess the transitivity of the signs, but that non-signer transitivity judgments can nevertheless be predicted from the signs' visual characteristics. Further, non-signers cue in on just those features that code event representations across sign languages, despite interpreting them differently. This suggests the existence of visual biases that underlie detection of linguistic categories, such as transitivity, which may uncouple from underlying conceptual representations over time in mature sign languages due to lexicalization processes.
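The prediction step described above (non-signer transitivity judgments from phonological characteristics) can be sketched as a cross-validated classifier; the feature and column names here are placeholders, not the coding scheme used in the study.

```python
# Sketch: predict per-sign non-signer transitivity judgments from binary
# phonological/visual features. Feature names are placeholders, not the
# actual coding scheme used in the study.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

signs = pd.read_csv("asl_verb_features.csv")      # hypothetical per-sign table
features = ["two_handed", "path_movement", "repeated_movement"]

X = signs[features]
y = signs["judged_transitive"]                     # majority non-signer judgment, 0/1

clf = LogisticRegression(max_iter=1000)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```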
Affiliation(s)
- Chuck Bradley
- Department of Linguistics, Purdue University, West Lafayette, Indiana, United States of America
- Evie A. Malaia
- Department of Communicative Disorders, University of Alabama, Tuscaloosa, Alabama, United States of America
- Jeffrey Mark Siskind
- Department of Linguistics, Purdue University, West Lafayette, Indiana, United States of America
- Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana, United States of America
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, United States of America
- Ronnie B. Wilbur
- Department of Linguistics, Purdue University, West Lafayette, Indiana, United States of America
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, United States of America
8
Fitch A, Arunachalam S, Lieberman AM. Mapping Word to World in ASL: Evidence from a Human Simulation Paradigm. Cogn Sci 2021; 45:e13061. [PMID: 34861057] [PMCID: PMC9365062] [DOI: 10.1111/cogs.13061]
Abstract
Across languages, children map words to meaning with great efficiency, despite a seemingly unconstrained space of potential mappings. The literature on how children do this is primarily limited to spoken language. This leaves a gap in our understanding of sign language acquisition, because several of the hypothesized mechanisms that children use are visual (e.g., visual attention to the referent), and sign languages are perceived in the visual modality. Here, we used the Human Simulation Paradigm in American Sign Language (ASL) to determine potential cues to word learning. Sign-naïve adult participants viewed video clips of parent-child interactions in ASL, and at a designated point, had to guess what ASL sign the parent produced. Across two studies, we demonstrate that referential clarity in ASL interactions is characterized by access to information about word class and referent presence (for verbs), similarly to spoken language. Unlike spoken language, iconicity is a cue to word meaning in ASL, although this is not always a fruitful cue. We also present evidence that verbs are highlighted well in the input, relative to spoken English. The results shed light on both similarities and differences in the information that learners may have access to in acquiring signed versus spoken languages.
Affiliation(s)
- Allison Fitch
- Deaf Education and Deaf Studies, Boston University
- Psychology, Rochester Institute of Technology
9
Abstract
The first 40 years of research on the neurobiology of sign languages (1960-2000) established that the same key left hemisphere brain regions support both signed and spoken languages, based primarily on evidence from signers with brain injury and, at the end of the 20th century, on evidence from emerging functional neuroimaging technologies (positron emission tomography and fMRI). Building on this earlier work, this review focuses on what we have learned about the neurobiology of sign languages in the last 15-20 years, what controversies remain unresolved, and directions for future research. Production and comprehension processes are addressed separately in order to capture whether and how output and input differences between sign and speech impact the neural substrates supporting language. In addition, the review includes aspects of language that are unique to sign languages, such as pervasive lexical iconicity, fingerspelling, linguistic facial expressions, and depictive classifier constructions. Summary sketches of the neural networks supporting sign language production and comprehension are provided with the hope that these will inspire future research as we begin to develop a more complete neurobiological model of sign language processing.
10
Sehyr ZS, Caselli N, Cohen-Goldberg AM, Emmorey K. The ASL-LEX 2.0 Project: A Database of Lexical and Phonological Properties for 2,723 Signs in American Sign Language. Journal of Deaf Studies and Deaf Education 2021; 26:263-277. [PMID: 33598676] [PMCID: PMC7977685] [DOI: 10.1093/deafed/enaa038]
Abstract
ASL-LEX is a publicly available, large-scale lexical database for American Sign Language (ASL). We report on the expanded database (ASL-LEX 2.0) that contains 2,723 ASL signs. For each sign, ASL-LEX now includes a more detailed phonological description, phonological density and complexity measures, frequency ratings (from deaf signers), iconicity ratings (from hearing non-signers and deaf signers), transparency ("guessability") ratings (from non-signers), sign and videoclip durations, lexical class, and more. We document the steps used to create ASL-LEX 2.0, describe the distributional characteristics of sign properties across the lexicon, and examine the relationships among lexical and phonological properties of signs. Correlation analyses revealed that frequent signs were less iconic and phonologically simpler than infrequent signs, and iconic signs tended to be phonologically simpler than less iconic signs. The complete ASL-LEX dataset and supplementary materials are available at https://osf.io/zpha4/ and an interactive visualization of the entire lexicon can be accessed on the ASL-LEX page: http://asl-lex.org/.
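Because the dataset is public, the reported correlations are straightforward to check against a local export; the file and column names below (asl_lex_2.csv, SignFrequency, Iconicity, PhonologicalComplexity) are guesses at the release format rather than verified field names.

```python
# Sketch: check the reported relationships among sign frequency, iconicity, and
# phonological complexity in a local export of ASL-LEX 2.0 (https://osf.io/zpha4/).
# Column names are guesses at the release format, not verified field names.
import pandas as pd
from scipy.stats import spearmanr

lex = pd.read_csv("asl_lex_2.csv")

pairs = [("SignFrequency", "Iconicity"),
         ("SignFrequency", "PhonologicalComplexity"),
         ("Iconicity", "PhonologicalComplexity")]

for a, b in pairs:
    sub = lex[[a, b]].dropna()
    rho, p = spearmanr(sub[a], sub[b])
    print(f"{a} vs. {b}: rho = {rho:.2f}, p = {p:.3g}, n = {len(sub)}")
```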
11
Mott M, Midgley KJ, Holcomb PJ, Emmorey K. Cross-modal translation priming and iconicity effects in deaf signers and hearing learners of American Sign Language. Bilingualism (Cambridge, England) 2020; 23:1032-1044. [PMID: 33897272] [PMCID: PMC8061897] [DOI: 10.1017/s1366728919000889]
Abstract
This study used ERPs to (a) assess the neural correlates of cross-linguistic, cross-modal translation priming in hearing beginning learners of American Sign Language (ASL) and highly proficient deaf signers and (b) examine whether sign iconicity modulates these priming effects. Hearing learners exhibited translation priming for ASL signs preceded by English words (greater negativity for unrelated than translation primes) later in the ERP waveform than deaf signers and exhibited earlier and greater priming for iconic than non-iconic signs. Iconicity did not modulate translation priming effects either behaviorally or in the ERPs for deaf signers (except in an 800-1000 ms time window). Because deaf signers showed early translation priming effects (beginning at 400-600 ms), we suggest that iconicity did not facilitate lexical access, but deaf signers may have recognized sign iconicity later in processing. Overall, the results indicate that iconicity speeds lexical access for L2 sign language learners, but not for proficient signers.
Affiliation(s)
- Megan Mott
- Department of Psychology, San Diego State University
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University
12
Pyers J, Senghas A. Lexical Iconicity is differentially favored under transmission in a new sign language: The effect of type of iconicity. Sign Language and Linguistics 2020; 23:73-95. [PMID: 33613090] [PMCID: PMC7894619] [DOI: 10.1075/sll.00044.pye]
Abstract
Observations that iconicity diminishes over time in sign languages pose a puzzle: why should something so evidently useful and functional decrease? Using an archival dataset of signs elicited over 15 years from 4 first-cohort and 4 third-cohort signers of an emerging sign language (Nicaraguan Sign Language), we investigated changes in pantomimic (body-to-body) and perceptual (body-to-object) iconicity. We make three key observations: (1) there is greater variability in the signs produced by the first cohort compared to the third; (2) while both types of iconicity are evident, pantomimic iconicity is more prevalent than perceptual iconicity for both groups; and (3) across cohorts, pantomimic elements are dropped to a greater proportion than perceptual elements. The higher rate of pantomimic iconicity in the first-cohort lexicon reflects the usefulness of body-as-body mapping in language creation. Yet, its greater vulnerability to change over transmission suggests that it is less favored by children's language acquisition processes.
13
Emmorey K, Winsler K, Midgley KJ, Grainger J, Holcomb PJ. Neurophysiological Correlates of Frequency, Concreteness, and Iconicity in American Sign Language. Neurobiology of Language (Cambridge, Mass.) 2020; 1:249-267. [PMID: 33043298] [PMCID: PMC7544239] [DOI: 10.1162/nol_a_00012]
Abstract
To investigate possible universal and modality-specific factors that influence the neurophysiological response during lexical processing, we recorded event-related potentials while a large group of deaf adults (n = 40) viewed 404 signs in American Sign Language (ASL) that varied in ASL frequency, concreteness, and iconicity. Participants performed a go/no-go semantic categorization task (does the sign refer to people?) to videoclips of ASL signs (clips began with the signer's hands at rest). Linear mixed-effects regression models were fit with per-participant, per-trial, and per-electrode data, allowing us to identify unique effects of each lexical variable. We observed an early effect of frequency (greater negativity for less frequent signs) beginning at 400 ms postvideo onset at anterior sites, which we interpreted as reflecting form-based lexical processing. This effect was followed by a more widely distributed posterior response that we interpreted as reflecting lexical-semantic processing. Paralleling spoken language, more concrete signs elicited greater negativities, beginning 600 ms postvideo onset with a wide scalp distribution. Finally, there were no effects of iconicity (except for a weak effect in the latest epochs; 1,000-1,200 ms), suggesting that iconicity does not modulate the neural response during sign recognition. Despite the perceptual and sensorimotoric differences between signed and spoken languages, the overall results indicate very similar neurophysiological processes underlie lexical access for both signs and words.
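A minimal sketch of the kind of per-trial model described above follows, assuming a long-format table of single-trial mean amplitudes; the column names are illustrative, and statsmodels' MixedLM supports only one grouping factor, so this simplifies the fuller per-participant, per-trial, per-electrode structure reported in the paper.

```python
# Sketch: single-trial ERP amplitude (e.g., mean voltage in a chosen time window)
# modeled as a function of ASL frequency, concreteness, and iconicity, with a
# random intercept per participant. Column names are illustrative; the paper's
# fuller crossed random-effects structure would typically be fit with lme4 in R.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("erp_single_trials.csv")   # hypothetical long-format table

model = smf.mixedlm(
    "amplitude ~ frequency + concreteness + iconicity",
    data=trials,
    groups=trials["participant"],
).fit()
print(model.summary())
```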
Affiliation(s)
| | - Kurt Winsler
- Department of Psychology, University of California, Davis
| | | | - Jonathan Grainger
- Laboratoire de Psychologie Cognitive, Aix-Marseille University, Centre National de la Recherche Scientifique
| | | |