1. Motamedi Y, Murgiano M, Grzyb B, Gu Y, Kewenig V, Brieke R, Donnellan E, Marshall C, Wonnacott E, Perniss P, Vigliocco G. Language development beyond the here-and-now: Iconicity and displacement in child-directed communication. Child Dev 2024. PMID: 38563146. DOI: 10.1111/cdev.14099.
Abstract
Most language use is displaced, referring to past, future, or hypothetical events, posing the challenge of how children learn what words refer to when the referent is not physically available. One possibility is that iconic cues that imagistically evoke properties of absent referents support learning when referents are displaced. In an audio-visual corpus of caregiver-child dyads, English-speaking caregivers interacted with their children (N = 71, 24-58 months) in contexts in which the objects talked about were either familiar or unfamiliar to the child, and either physically present or displaced. The analysis of the range of vocal, manual, and looking behaviors caregivers produced suggests that caregivers used iconic cues especially in displaced contexts and for unfamiliar objects, using other cues when objects were present.
Affiliation(s)
- Yasamin Motamedi: Department of Experimental Psychology, University College London, London, UK
- Margherita Murgiano: Department of Experimental Psychology, University College London, London, UK
- Beata Grzyb: Department of Experimental Psychology, University College London, London, UK
- Yan Gu: Department of Experimental Psychology, University College London, London, UK; Department of Psychology, University of Essex, Colchester, UK
- Viktor Kewenig: Department of Experimental Psychology, University College London, London, UK
- Ricarda Brieke: Department of Experimental Psychology, University College London, London, UK
- Ed Donnellan: Department of Experimental Psychology, University College London, London, UK
- Chloe Marshall: Institute of Education, University College London, London, UK
- Elizabeth Wonnacott: Department of Language and Cognition, University College London, London, UK; Department of Education, University of Oxford, Oxford, UK
- Gabriella Vigliocco: Department of Experimental Psychology, University College London, London, UK
2. Hollander J, Olney A. Raising the Roof: Situating Verbs in Symbolic and Embodied Language Processing. Cogn Sci 2024; 48:e13442. PMID: 38655894. DOI: 10.1111/cogs.13442.
Abstract
Recent investigations on how people derive meaning from language have focused on task-dependent shifts between two cognitive systems. The symbolic (amodal) system represents meaning as the statistical relationships between words. The embodied (modal) system represents meaning through neurocognitive simulation of perceptual or sensorimotor systems associated with a word's referent. A primary finding of literature in this field is that the embodied system is only dominant when a task necessitates it, but in certain paradigms, this has only been demonstrated using nouns and adjectives. The purpose of this paper is to study whether similar effects hold with verbs. Experiment 1 evaluated a novel task in which participants rated a selection of verbs on their implied vertical movement. Ratings correlated well with distributional semantic models, establishing convergent validity, though some variance was unexplained by language statistics alone. Experiment 2 replicated previous noun-based location-cue congruency experimental paradigms with verbs and showed that the ratings obtained in Experiment 1 predicted reaction times more strongly than language statistics. Experiment 3 modified the location-cue paradigm by adding movement to create an animated, temporally decoupled, movement-verb judgment task designed to examine the relative influence of symbolic and embodied processing for verbs. Results were generally consistent with linguistic shortcut hypotheses of symbolic-embodied integrated language processing; location-cue congruence elicited processing facilitation in some conditions, and perceptual information accounted for reaction times and accuracy better than language statistics alone. These studies demonstrate novel ways in which embodied and linguistic information can be examined while using verbs as stimuli.
Affiliation(s)
- John Hollander: Department of Psychology, Institute for Intelligent Systems, University of Memphis
- Andrew Olney: Department of Psychology, Institute for Intelligent Systems, University of Memphis
3. Winter B, Lupyan G, Perry LK, Dingemanse M, Perlman M. Iconicity ratings for 14,000+ English words. Behav Res Methods 2024; 56:1640-1655. PMID: 37081237. DOI: 10.3758/s13428-023-02112-6.
Abstract
Iconic words and signs are characterized by a perceived resemblance between aspects of their form and aspects of their meaning. For example, in English, iconic words include peep and crash, which mimic the sounds they denote, and wiggle and zigzag, which mimic motion. As a semiotic property of words and signs, iconicity has been demonstrated to play a role in word learning, language processing, and language evolution. This paper presents the results of a large-scale norming study for more than 14,000 English words conducted with over 1400 American English speakers. We demonstrate the utility of these ratings by replicating a number of existing findings showing that iconicity ratings are related to age of acquisition, sensory modality, semantic neighborhood density, structural markedness, and playfulness. We discuss possible use cases and limitations of the rating dataset, which is made publicly available.
Affiliation(s)
- Bodo Winter: Department of English Language & Linguistics, University of Birmingham, Birmingham, UK
- Gary Lupyan: Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
- Lynn K Perry: Department of Psychology, University of Miami, Coral Gables, FL, USA
- Mark Dingemanse: Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
- Marcus Perlman: Department of English Language & Linguistics, University of Birmingham, Birmingham, UK
4. Watkins F, Abdlkarim D, Winter B, Thompson RL. Viewing angle matters in British Sign Language processing. Sci Rep 2024; 14:1043. PMID: 38200108. PMCID: PMC10781993. DOI: 10.1038/s41598-024-51330-1.
Abstract
The impact of adverse listening conditions on spoken language perception is well established, but the role of suboptimal viewing conditions on signed language processing is less clear. Viewing angle, i.e. the physical orientation of a perceiver relative to a signer, varies in many everyday deaf community settings for L1 signers and may impact comprehension. Further, processing from various viewing angles may be more difficult for late L2 learners of a signed language, with less variation in sign input while learning. Using a semantic decision task in a distance priming paradigm, we show that British Sign Language signers are slower and less accurate to comprehend signs shown from side viewing angles, with L2 learners in particular making disproportionately more errors when viewing signs from side angles. We also investigated how individual differences in mental rotation ability modulate processing signs from different angles. Speed and accuracy on the BSL task correlated with mental rotation ability, suggesting that signers may mentally represent signs from a frontal view, and use mental rotation to process signs from other viewing angles. Our results extend the literature on viewpoint specificity in visual recognition to linguistic stimuli. The data suggests that L2 signed language learners should maximise their exposure to diverse signed language input, both in terms of viewing angle and other difficult viewing conditions to maximise comprehension.
Affiliation(s)
- Freya Watkins: School of Psychology, University of Birmingham, Edgbaston, Birmingham, UK
- Diar Abdlkarim: School of Psychology, University of Birmingham, Edgbaston, Birmingham, UK
- Bodo Winter: Department of English Language and Linguistics, University of Birmingham, Edgbaston, Birmingham, UK
- Robin L Thompson: School of Psychology, University of Birmingham, Edgbaston, Birmingham, UK
5. Goppelt-Kunkel M, Stroh AL, Hänel-Faulhaber B. Sign learning of hearing children in inclusive day care centers-does iconicity matter? Front Psychol 2023; 14:1196114. PMID: 37655202. PMCID: PMC10467423. DOI: 10.3389/fpsyg.2023.1196114.
Abstract
An increasing number of experimental studies suggest that signs and gestures can scaffold vocabulary learning for children with and without special educational needs and/or disabilities (SEND). However, little research has been done on the extent to which iconicity plays a role in sign learning, particularly in inclusive day care centers. This current study investigated the role of iconicity in the sign learning of 145 hearing children (2;1 to 6;3 years) from inclusive day care centers with educators who started using sign-supported speech after a training module. Children's sign use was assessed via a questionnaire completed by their educators. We found that older children were more likely to learn signs with a higher degree of iconicity, whereas the learning of signs by younger children was less affected by iconicity. Children with SEND did not benefit more from iconicity than children without SEND. These results suggest that whether iconicity plays a role in sign learning depends on the age of the children.
Affiliation(s)
- Madlen Goppelt-Kunkel: Department of Special Education, Faculty of Education, Universität Hamburg, Hamburg, Germany
- Anna-Lena Stroh: Department of Special Education, Faculty of Education, Universität Hamburg, Hamburg, Germany; Faculty of Psychology, Institute of Psychology, Jagiellonian University, Kraków, Poland
- Barbara Hänel-Faulhaber: Department of Special Education, Faculty of Education, Universität Hamburg, Hamburg, Germany
6. McGarry ME, Midgley KJ, Holcomb PJ, Emmorey K. How (and why) does iconicity effect lexical access: An electrophysiological study of American sign language. Neuropsychologia 2023; 183:108516. PMID: 36796720. PMCID: PMC10576952. DOI: 10.1016/j.neuropsychologia.2023.108516.
Abstract
Prior research has found that iconicity facilitates sign production in picture-naming paradigms and has effects on ERP components. These findings may be explained by two separate hypotheses: (1) a task-specific hypothesis that suggests these effects occur because visual features of the iconic sign form can map onto the visual features of the pictures, and (2) a semantic feature hypothesis that suggests that the retrieval of iconic signs results in greater semantic activation due to the robust representation of sensory-motor semantic features compared to non-iconic signs. To test these two hypotheses, iconic and non-iconic American Sign Language (ASL) signs were elicited from deaf native/early signers using a picture-naming task and an English-to-ASL translation task, while electrophysiological recordings were made. Behavioral facilitation (faster response times) and reduced negativities were observed for iconic signs (both prior to and within the N400 time window), but only in the picture-naming task. No ERP or behavioral differences were found between iconic and non-iconic signs in the translation task. This pattern of results supports the task-specific hypothesis and provides evidence that iconicity only facilitates sign production when the eliciting stimulus and the form of the sign can visually overlap (a picture-sign alignment effect).
Affiliation(s)
- Meghan E McGarry: Joint Doctoral Program in Language and Communication Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Phillip J Holcomb: Department of Psychology, San Diego State University, San Diego, CA, USA
- Karen Emmorey: School of Speech, Language and Hearing Sciences, San Diego State University, San Diego, CA, USA
7. Rodríguez-Moreno I, Martínez-Otzeta JM, Goienetxea I, Sierra B. Sign language recognition by means of common spatial patterns: An analysis. PLoS One 2022; 17:e0276941. PMID: 36315481. PMCID: PMC9621452. DOI: 10.1371/journal.pone.0276941.
Abstract
Currently there are around 466 million hard of hearing people and this amount is expected to grow in the coming years. Despite the efforts that have been made, there is a communication barrier between deaf and hard of hearing signers and non-signers in environments without an interpreter. Different approaches have been developed lately to try to deal with this issue. In this work, we present an Argentinian Sign Language (LSA) recognition system which uses hand landmarks extracted from videos of the LSA64 dataset in order to distinguish between different signs. Different features are extracted from the signals created with the hand landmarks values, which are first transformed by the Common Spatial Patterns (CSP) algorithm. CSP is a dimensionality reduction algorithm and it has been widely used for EEG systems. The features extracted from the transformed signals have been then used to feed different classifiers, such as Random Forest (RF), K-Nearest Neighbors (KNN) or Multilayer Perceptron (MLP). Several experiments have been performed from which promising results have been obtained, achieving accuracy values between 0.90 and 0.95 on a set of 42 signs.
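The pipeline this abstract describes (a CSP transform of multichannel landmark signals, followed by variance-based features and a standard classifier) can be illustrated with a short sketch. This is a simplified reconstruction on synthetic two-class signals, not the authors' code; the function names are ours, and CSP is implemented directly via the generalized eigendecomposition of the class covariance matrices.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_components=2):
    """Compute CSP spatial filters from two classes of trials.

    X1, X2: arrays of shape (n_trials, n_channels, n_samples).
    Returns an (n_components, n_channels) filter matrix whose extreme
    filters maximize variance for one class while minimizing it for
    the other.
    """
    def mean_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Generalized eigenproblem: C1 w = lambda (C1 + C2) w
    eigvals, eigvecs = eigh(C1, C1 + C2)
    order = np.argsort(eigvals)
    # Take filters from both ends of the eigenvalue spectrum
    picks = np.concatenate([order[:n_components // 2],
                            order[-(n_components - n_components // 2):]])
    return eigvecs[:, picks].T

def csp_features(W, X):
    """Normalized log-variance features of trials projected through W."""
    projected = np.einsum('fc,ncs->nfs', W, X)
    var = projected.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))
```

The resulting feature vectors can then be fed to any classifier (the paper compares Random Forest, KNN, and MLP, among others); a nearest-centroid rule already separates synthetic classes that differ in per-channel variance.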
Affiliation(s)
- Itsaso Rodríguez-Moreno: Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Donostia-San Sebastián, Spain
- José María Martínez-Otzeta: Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Donostia-San Sebastián, Spain
- Izaro Goienetxea: Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Donostia-San Sebastián, Spain
- Basilio Sierra: Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Donostia-San Sebastián, Spain
8. Shen S, Xiao X, Yin J, Xiao X, Chen J. Self-Powered Smart Gloves Based on Triboelectric Nanogenerators. Small Methods 2022; 6:e2200830. PMID: 36068171. DOI: 10.1002/smtd.202200830.
Abstract
The hands are used in all facets of daily life, from simple tasks such as grasping and holding to complex tasks such as communication and using technology. Finding a way not only to monitor hand movements and gestures but also to integrate that data with technology is thus a worthwhile task. Gesture recognition is particularly important for those who rely on sign language to communicate, but the limitations of current vision-based and sensor-based methods, including lack of portability, bulkiness, low sensitivity, high expense, and the need for external power sources, among many others, make them impractical for daily use. To resolve these issues, smart gloves can be created using a triboelectric nanogenerator (TENG), a self-powered technology that functions based on the triboelectric effect and electrostatic induction and is also cheap to manufacture, small in size, lightweight, and highly flexible in terms of materials and design. This review provides an overview of existing TENG-based self-powered smart gloves, both for gesture recognition and human-machine interfaces, and concludes with a discussion of the future outlook for these devices.
Affiliation(s)
- Sophia Shen: Department of Bioengineering, University of California, Los Angeles, CA, 90095, USA
- Xiao Xiao: Department of Bioengineering, University of California, Los Angeles, CA, 90095, USA
- Junyi Yin: Department of Bioengineering, University of California, Los Angeles, CA, 90095, USA
- Xiao Xiao: Department of Bioengineering, University of California, Los Angeles, CA, 90095, USA
- Jun Chen: Department of Bioengineering, University of California, Los Angeles, CA, 90095, USA
9. Royka A, Chen A, Aboody R, Huanca T, Jara-Ettinger J. People infer communicative action through an expectation for efficient communication. Nat Commun 2022; 13:4160. PMID: 35851397. PMCID: PMC9293910. DOI: 10.1038/s41467-022-31716-3.
Abstract
Humans often communicate using body movements like winks, waves, and nods. However, it is unclear how we identify when someone’s physical actions are communicative. Given people’s propensity to interpret each other’s behavior as aimed at producing changes in the world, we hypothesize that people expect communicative actions to efficiently reveal that they lack an external goal. Using computational models of goal inference, we predict that movements that are unlikely to be produced when acting towards the world (and, in particular, repetitive movements) ought to be seen as communicative. We find support for our account across a variety of paradigms, including graded acceptability tasks, forced-choice tasks, indirect prompts, and open-ended explanation tasks, in both market-integrated and non-market-integrated communities. Our work shows that the recognition of communicative action is grounded in an inferential process that stems from fundamental computations shared across different forms of action interpretation. Humans can quickly infer when someone’s body movements are meant to be communicative. Here, the authors show that this capacity is underpinned by an expectation that communicative actions will efficiently reveal that they lack an external goal.
Affiliation(s)
- Amanda Royka: Department of Psychology, Yale University, New Haven, CT, USA
- Annie Chen: Department of Computer Science, Yale University, New Haven, CT, USA
- Rosie Aboody: Department of Psychology, Yale University, New Haven, CT, USA
- Tomas Huanca: Centro Boliviano de Desarrollo Socio-Integral, La Paz, Bolivia
- Julian Jara-Ettinger: Department of Psychology, Yale University, New Haven, CT, USA; Department of Computer Science, Yale University, New Haven, CT, USA; Wu Tsai Institute, Yale University, New Haven, CT, USA
10. Campbell EE, Bergelson E. Making sense of sensory language: Acquisition of sensory knowledge by individuals with congenital sensory impairments. Neuropsychologia 2022; 174:108320. PMID: 35842021. DOI: 10.1016/j.neuropsychologia.2022.108320.
Abstract
The present article provides a narrative review on how language communicates sensory information and how knowledge of sight and sound develops in individuals born deaf or blind. Studying knowledge of the perceptually inaccessible sensory domain for these populations offers a lens into how humans learn about that which they cannot perceive. We first review the linguistic strategies within language that communicate sensory information. Highlighting the power of language to shape knowledge, we next review the detailed knowledge of sensory information by individuals with congenital sensory impairments, limitations therein, and neural representations of imperceptible phenomena. We suggest that the acquisition of sensory knowledge is supported by language, experience with multiple perceptual domains, and cognitive and social abilities which mature over the first years of life, both in individuals with and without sensory impairment. We conclude by proposing a developmental trajectory for acquiring sensory knowledge in the absence of sensory perception.
Affiliation(s)
- Erin E Campbell: Duke University, Department of Psychology and Neuroscience, USA
- Elika Bergelson: Duke University, Department of Psychology and Neuroscience, USA
11. Hodge G, Ferrara L. Iconicity as Multimodal, Polysemiotic, and Plurifunctional. Front Psychol 2022; 13:808896. PMID: 35769755. PMCID: PMC9234520. DOI: 10.3389/fpsyg.2022.808896.
Abstract
Investigations of iconicity in language, whereby interactants coordinate meaningful bodily actions to create resemblances, are prevalent across the human communication sciences. However, when it comes to analysing and comparing iconicity across different interactions (e.g., deaf, deafblind, hearing) and modes of communication (e.g., manual signs, speech, writing), it is not always clear we are looking at the same thing. For example, tokens of spoken ideophones and manual depicting actions may both be analysed as iconic forms. Yet spoken ideophones may signal depictive and descriptive qualities via speech, while manual actions may signal depictive, descriptive, and indexical qualities via the shape, movement, and placement of the hands in space. Furthermore, each may co-occur with other semiotics articulated with the face, hands, and body within composite utterances. The paradigm of iconicity as a single property is too broad and coarse for comparative semiotics, as important details necessary for understanding the range of human communicative potentialities may be masked. Here, we draw on semiotic approaches to language and communication, including the model of language as signalled via describing, indicating and/or depicting and the notion of non-referential indexicality, to illustrate the multidimensionality of iconicity in co-present interactions. This builds on our earlier proposal for analysing how different methods of semiotic signalling are combined in multimodal language use. We discuss some implications for the language and communication sciences and explain how this approach may inform a theory of biosemiotics.
Affiliation(s)
- Gabrielle Hodge: Deafness Cognition and Language Research Centre, University College London, London, United Kingdom; College of Asia and the Pacific, Australian National University, Canberra, ACT, Australia
- Lindsay Ferrara: Department of Language and Literature, Norwegian University of Science and Technology, Trondheim, Norway
12. Martinez Del Rio A, Ferrara C, Kim SJ, Hakgüder E, Brentari D. Identifying the Correlations Between the Semantics and the Phonology of American Sign Language and British Sign Language: A Vector Space Approach. Front Psychol 2022; 13:806471. PMID: 35369213. PMCID: PMC8966728. DOI: 10.3389/fpsyg.2022.806471.
Abstract
Over the history of research on sign languages, much scholarship has highlighted the pervasive presence of signs whose forms relate to their meaning in a non-arbitrary way. The presence of these forms suggests that sign language vocabularies are shaped, at least in part, by a pressure toward maintaining a link between form and meaning in wordforms. We use a vector space approach to test the ways this pressure might shape sign language vocabularies, examining how non-arbitrary forms are distributed within the lexicons of two unrelated sign languages. Vector space models situate the representations of words in a multi-dimensional space where the distance between words indexes their relatedness in meaning. Using phonological information from the vocabularies of American Sign Language (ASL) and British Sign Language (BSL), we tested whether increased similarity between the semantic representations of signs corresponds to increased phonological similarity. The results of the computational analysis showed a significant positive relationship between phonological form and semantic meaning for both sign languages, which was strongest when the sign language lexicons were organized into clusters of semantically related signs. The analysis also revealed variation in the strength of patterns across the form-meaning relationships seen between phonological parameters within each sign language, as well as between the two languages. This shows that while the connection between form and meaning is not entirely language specific, there are cross-linguistic differences in how these mappings are realized for signs in each language, suggesting that arbitrariness as well as cognitive or cultural influences may play a role in how these patterns are realized. The results of this analysis not only contribute to our understanding of the distribution of non-arbitrariness in sign language lexicons, but also demonstrate a new way that computational modeling can be harnessed in lexicon-wide investigations of sign languages.
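The core computation described in this abstract, testing whether semantic similarity between signs predicts phonological similarity, can be sketched as follows: build pairwise similarity matrices from two sets of vector representations and correlate their upper triangles with a permutation (Mantel-style) test. This is an illustrative reconstruction on synthetic vectors, not the authors' code; the function names are ours.

```python
import numpy as np

def pairwise_cosine(M):
    """Cosine similarity between all row pairs of matrix M."""
    normed = M / np.linalg.norm(M, axis=1, keepdims=True)
    return normed @ normed.T

def mantel_correlation(A, B, n_perm=1000, seed=0):
    """Correlate two similarity matrices with a permutation (Mantel) test.

    A, B: square similarity matrices over the same items.
    Returns (r, p): Pearson r between the upper triangles, and the
    proportion of joint row/column permutations of B whose correlation
    is at least as large as the observed one.
    """
    iu = np.triu_indices_from(A, k=1)
    r_obs = np.corrcoef(A[iu], B[iu])[0, 1]
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(len(B))
        r = np.corrcoef(A[iu], B[np.ix_(p, p)][iu])[0, 1]
        count += r >= r_obs
    return r_obs, (count + 1) / (n_perm + 1)
```

With semantic vectors and phonological feature vectors for the same lexicon, `mantel_correlation(pairwise_cosine(semantic), pairwise_cosine(phonological))` gives a lexicon-wide form-meaning association and a permutation p-value; the permutation step is needed because the entries of a similarity matrix are not independent observations.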
Affiliation(s)
- Casey Ferrara: Department of Psychology, University of Chicago, Chicago, IL, United States
- Sanghee J Kim: Department of Linguistics, University of Chicago, Chicago, IL, United States
- Emre Hakgüder: Department of Linguistics, University of Chicago, Chicago, IL, United States
- Diane Brentari: Department of Linguistics, University of Chicago, Chicago, IL, United States
13. Sidhu DM, Williamson J, Slavova V, Pexman PM. An investigation of iconic language development in four datasets. J Child Lang 2022; 49:382-396. PMID: 34176538. DOI: 10.1017/s0305000921000040.
Abstract
Iconic words imitate their meanings. Previous work has demonstrated that iconic words are more common in infants' early speech, and in adults' child-directed speech (e.g., Perry et al., 2015; 2018). This is consistent with the proposal that iconicity provides a benefit to word learning. Here we explored iconicity in four diverse language development datasets: a production corpus for infants and preschoolers (MacWhinney, 2000), comprehension data for school-aged children to young adults (Dale & O'Rourke, 1981), word frequency norms from educational texts for school aged children to young adults (Zeno et al., 1995), and a database of parent-reported infant word production (Frank et al., 2017). In all four analyses, we found that iconic words were more common at younger ages. We also explored how this relationship differed by syntactic class, finding only modest evidence for differences. Overall, the results suggest that, beyond infancy, iconicity is an important factor in language acquisition.
14. Winter B, Sóskuthy M, Perlman M, Dingemanse M. Trilled /r/ is associated with roughness, linking sound and touch across spoken languages. Sci Rep 2022; 12:1035. PMID: 35058475. PMCID: PMC8776840. DOI: 10.1038/s41598-021-04311-7.
Abstract
Cross-modal integration between sound and texture is important to perception and action. Here we show this has repercussions for the structure of spoken languages. We present a new statistical universal linking speech with the evolutionarily ancient sense of touch. Words that express roughness—the primary perceptual dimension of texture—are highly likely to feature a trilled /r/, the most commonly occurring rhotic consonant. In four studies, we show the pattern to be extremely robust, being the first widespread pattern of iconicity documented not just across a large, diverse sample of the world’s spoken languages, but also across numerous sensory words within languages. Our deep analysis of Indo-European languages and Proto-Indo-European roots indicates remarkable historical stability of the pattern, which appears to date back at least 6000 years.
Affiliation(s)
- Bodo Winter: Department of English Language and Linguistics, University of Birmingham, Birmingham, UK
- Márton Sóskuthy: Department of Linguistics, University of British Columbia, Vancouver, Canada
- Marcus Perlman: Department of English Language and Linguistics, University of Birmingham, Birmingham, UK
- Mark Dingemanse: Centre for Language Studies, Radboud University, Nijmegen, The Netherlands
15. Ananthanarayana T, Srivastava P, Chintha A, Santha A, Landy B, Panaro J, Webster A, Kotecha N, Sah S, Sarchet T, Ptucha R, Nwogu I. Deep Learning Methods for Sign Language Translation. ACM Trans Access Comput 2021. DOI: 10.1145/3477498.
Abstract
Many sign languages are bona fide natural languages with grammatical rules and lexicons, and hence can benefit from machine translation methods. Similarly, since sign language is a visual-spatial language, it can also benefit from computer vision methods for encoding it. With the advent of deep learning methods in recent years, significant advances have been made in natural language processing (specifically neural machine translation) and in computer vision methods (specifically image and video captioning). Researchers have therefore begun expanding these learning methods to sign language understanding. Sign language interpretation is especially challenging, because it involves a continuous visual-spatial modality where meaning is often derived based on context.
The focus of this article, therefore, is to examine various deep learning–based methods for encoding sign language as inputs, and to analyze the efficacy of several machine translation methods over three different sign language datasets. The goal is to determine which combinations are sufficiently robust for sign language translation without any gloss-based information.
To understand the role of the different input features, we perform ablation studies over the model architectures (input features + neural translation models) for improved continuous sign language translation. These input features include body and finger joints, facial points, as well as vector representations/embeddings from convolutional neural networks. The machine translation models explored include several baseline sequence-to-sequence approaches, more complex and challenging networks using attention, reinforcement learning, and the transformer model. We implement the translation methods over multiple sign languages—German (GSL), American (ASL), and Chinese sign languages (CSL). From our analysis, the transformer model combined with input embeddings from ResNet50 or pose-based landmark features outperformed all the other sequence-to-sequence models by achieving higher BLEU2-BLEU4 scores when applied to the controlled and constrained GSL benchmark dataset. These combinations also showed significant promise on the other less controlled ASL and CSL datasets.
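The BLEU2-BLEU4 scores mentioned above can be sketched in a few lines. This is a minimal pure-Python version of smoothed sentence-level BLEU-n; the example sentences are invented, and published results would normally come from a standard toolkit implementation rather than this sketch.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(ref, hyp, max_n):
    """Sentence BLEU with uniform weights over 1..max_n-gram precisions,
    add-one smoothing, and a brevity penalty."""
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hyp, n))
        ref_counts = Counter(ngrams(ref, n))
        # clipped counts: an n-gram is credited at most as often as it occurs in the reference
        clipped = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        log_precisions.append(math.log((clipped + 1) / (total + 1)))
    brevity = min(1.0, math.exp(1 - len(ref) / len(hyp)))  # penalize short output
    return brevity * math.exp(sum(log_precisions) / max_n)

reference = ["the", "weather", "will", "be", "sunny", "tomorrow"]
hypothesis = ["the", "weather", "is", "sunny", "tomorrow"]
for n in (2, 3, 4):
    print(f"BLEU-{n}: {bleu(reference, hypothesis, n):.3f}")
```

Higher-order scores are stricter: BLEU-4 requires matching 4-grams, so it is lower than BLEU-2 for the same partially correct hypothesis.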
Affiliation(s)
- Akash Chintha
- Rochester Institute of Technology, Rochester, New York
- Akhil Santha
- Rochester Institute of Technology, Rochester, New York
- Brian Landy
- Rochester Institute of Technology, Rochester, New York
- Joseph Panaro
- Rochester Institute of Technology, Rochester, New York
- Andre Webster
- Rochester Institute of Technology, Rochester, New York
- Shagan Sah
- Rochester Institute of Technology, Rochester, New York
- Ifeoma Nwogu
- Rochester Institute of Technology, Rochester, New York
16.
Abstract
Interest in iconicity (the resemblance-based mapping between aspects of form and meaning) is in the midst of a resurgence, and a prominent focus in the field has been the possible role of iconicity in language learning. Here we critically review theory and empirical findings in this domain. We distinguish local learning enhancement (where the iconicity of certain lexical items influences the learning of those items) and general learning enhancement (where the iconicity of certain lexical items influences the later learning of non-iconic items or systems). We find that evidence for local learning enhancement is quite strong, though not as clear-cut as it is often described, and it rests on a limited sample of languages. Despite common claims about broader facilitatory effects of iconicity on learning, we find that current evidence for general learning enhancement is lacking. We suggest a number of productive avenues for future research and specify what types of evidence would be required to show a role for iconicity in general learning enhancement. We also review evidence for functions of iconicity beyond word learning: iconicity enhances comprehension by providing complementary representations, supports communication about sensory imagery, and expresses affective meanings. Even if learning benefits may be modest or cross-linguistically varied, on balance, iconicity emerges as a vital aspect of language.
Affiliation(s)
- Mark Dingemanse
- Centre for Language Studies, Radboud University, Houtlaan 4, Nijmegen, 6500 HD, Netherlands
17. Raviv L, de Heer Kloots M, Meyer A. What makes a language easy to learn? A preregistered study on how systematic structure and community size affect language learnability. Cognition 2021; 210:104620. [PMID: 33571814 DOI: 10.1016/j.cognition.2021.104620] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2020] [Revised: 01/14/2021] [Accepted: 01/27/2021] [Indexed: 11/15/2022]
Abstract
Cross-linguistic differences in morphological complexity could have important consequences for language learning. Specifically, it is often assumed that languages with more regular, compositional, and transparent grammars are easier to learn by both children and adults. Moreover, it has been shown that such grammars are more likely to evolve in bigger communities. Together, this suggests that some languages are acquired faster than others, and that this advantage can be traced back to community size and to the degree of systematicity in the language. However, the causal relationship between systematic linguistic structure and language learnability has not been formally tested, despite its potential importance for theories on language evolution, second language learning, and the origin of linguistic diversity. In this pre-registered study, we experimentally tested the effects of community size and systematic structure on adult language learning. We compared the acquisition of different yet comparable artificial languages that were created by big or small groups in a previous communication experiment, which varied in their degree of systematic linguistic structure. We asked (a) whether more structured languages were easier to learn; and (b) whether languages created by the bigger groups were easier to learn. We found that highly systematic languages were learned faster and more accurately by adults, but that the relationship between language learnability and linguistic structure was typically non-linear: high systematicity was advantageous for learning, but learners did not benefit from partly or semi-structured languages. Community size did not affect learnability: languages that evolved in big and small groups were equally learnable, and there was no additional advantage for languages created by bigger groups beyond their degree of systematic structure. 
Furthermore, our results suggested that predictability is an important advantage of systematic structure: participants who learned more structured languages were better at generalizing these languages to new, unfamiliar meanings, and different participants who learned the same more structured languages were more likely to produce similar labels. That is, systematic structure may allow speakers to converge effortlessly, such that strangers can immediately understand each other.
Affiliation(s)
- Limor Raviv
- Vrije Universiteit Brussels, Belgium; Max Planck Institute for Psycholinguistics, the Netherlands
- Antje Meyer
- Max Planck Institute for Psycholinguistics, the Netherlands; Radboud University Nijmegen, the Netherlands
18. The Lancaster Sensorimotor Norms: multidimensional measures of perceptual and action strength for 40,000 English words. Behav Res Methods 2020; 52:1271-1291. [PMID: 31832879 PMCID: PMC7280349 DOI: 10.3758/s13428-019-01316-z] [Citation(s) in RCA: 80] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Sensorimotor information plays a fundamental role in cognition. However, the existing materials that measure the sensorimotor basis of word meanings and concepts have been restricted in terms of their sample size and breadth of sensorimotor experience. Here we present norms of sensorimotor strength for 39,707 concepts across six perceptual modalities (touch, hearing, smell, taste, vision, and interoception) and five action effectors (mouth/throat, hand/arm, foot/leg, head excluding mouth/throat, and torso), gathered from a total of 3,500 individual participants using Amazon’s Mechanical Turk platform. The Lancaster Sensorimotor Norms are unique and innovative in a number of respects: They represent the largest-ever set of semantic norms for English, at 40,000 words × 11 dimensions (plus several informative cross-dimensional variables), they extend perceptual strength norming to the new modality of interoception, and they include the first norming of action strength across separate bodily effectors. In the first study, we describe the data collection procedures, provide summary descriptives of the dataset, and interpret the relations observed between sensorimotor dimensions. We then report two further studies, in which we (1) extracted an optimal single-variable composite of the 11-dimension sensorimotor profile (Minkowski 3 strength) and (2) demonstrated the utility of both perceptual and action strength in facilitating lexical decision times and accuracy in two separate datasets. These norms provide a valuable resource to researchers in diverse areas, including psycholinguistics, grounded cognition, cognitive semantics, knowledge representation, machine learning, and big-data approaches to the analysis of language and conceptual representations. The data are accessible via the Open Science Framework (http://osf.io/7emr6/) and an interactive web application (https://www.lancaster.ac.uk/psychology/lsnorms/).
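The single-variable composite described above, "Minkowski 3 strength," can be sketched as a Minkowski norm of order 3 over the 11 rating dimensions. The dimension ordering and example ratings below are invented for illustration, and the norms' exact computation may differ in detail.

```python
def minkowski_strength(ratings, p=3):
    """Minkowski norm of order p over a word's sensorimotor rating profile."""
    return sum(r ** p for r in ratings) ** (1 / p)

# Hypothetical 0-5 ratings for one word across the 11 dimensions:
# 6 perceptual modalities followed by 5 action effectors
profile = [4.2, 1.0, 0.3, 0.1, 4.8, 0.6,   # touch, hearing, smell, taste, vision, interoception
           0.9, 3.5, 0.4, 1.2, 0.7]        # mouth/throat, hand/arm, foot/leg, head, torso
print(round(minkowski_strength(profile), 2))
```

Relative to a plain mean, the cubic exponent lets a word's strongest dimensions dominate the composite, so a word that is vivid in even one modality scores high.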
19. Iconicity ratings for 10,995 Spanish words and their relationship with psycholinguistic variables. Behav Res Methods 2020; 53:1262-1275. [PMID: 33037603 DOI: 10.3758/s13428-020-01496-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/30/2020] [Indexed: 11/08/2022]
Abstract
The study of iconicity, or the resemblance between word forms and their meanings, has been the focus of increasing attention in recent years. Nevertheless, there is a lack of large-scale normative studies on the iconic properties of words, which could prove crucial to expanding our understanding of form-meaning associations. In this work, we report subjective iconicity ratings for 10,995 visually presented Spanish words from 1350 participants who were asked to repeat each of the words aloud before rating them. The response reliability and the consistency between the present and previous ratings were good. The relationships between iconicity and several psycholinguistic variables were examined through multiple regression analyses. We found that sensory experience ratings were the main predictor of iconicity, and that early-acquired and more abstract words received higher iconicity scores. We also found that onomatopoeias and interjections were the most iconic words, followed by adjectives. Finally, a follow-up study was conducted in which a subsample of 360 words with different levels of iconicity from the visual presentation study was auditorily presented to the participants. A high correlation was observed between the iconicity scores in the visual and auditory presentations. The normative data provided in this database might prove useful in expanding the body of knowledge on issues such as the processing of the iconic properties of words and the role of word-form associations in the acquisition of vocabularies. The database can be downloaded from https://osf.io/v5er3/.
20. Massaro D. My Words Fly Up, My Thoughts Remain Below. Words without Thoughts Never to Heaven Go. American Journal of Psychology 2020. [DOI: 10.5406/amerjpsyc.133.3.0378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Affiliation(s)
- Dom Massaro
- Department of Psychology, University of California, Santa Cruz, Santa Cruz, CA 95062
21. Perceptual modality norms for 1,121 Italian words: A comparison with concreteness and imageability scores and an analysis of their impact in word processing tasks. Behav Res Methods 2020; 52:1599-1616. [DOI: 10.3758/s13428-019-01337-8] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
22. Caselli NK, Pyers JE. Degree and not type of iconicity affects sign language vocabulary acquisition. J Exp Psychol Learn Mem Cogn 2020; 46:127-139. [PMID: 31094562 PMCID: PMC6858483 DOI: 10.1037/xlm0000713] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Lexical iconicity (signs or words that resemble their meaning) is overrepresented in children's early vocabularies. Embodied theories of language acquisition predict that symbols are more learnable when they are grounded in a child's firsthand experiences. As such, pantomimic iconic signs, which use the signer's body to represent a body, might be more readily learned than other types of iconic signs. Alternatively, the structure mapping theory of iconicity predicts that learners are sensitive to the amount of overlap between form and meaning. In this exploratory study of early vocabulary development in American Sign Language (ASL), we asked whether type of iconicity predicts sign acquisition above and beyond degree of iconicity. We also controlled for concreteness and relevance to babies, two possible confounding factors. Highly concrete referents and concepts that are germane to babies may be amenable to iconic mappings. We reanalyzed a previously published set of ASL Communicative Development Inventory (CDI) reports from 58 deaf children learning ASL from their deaf parents (Anderson & Reilly, 2002). Pantomimic signs were more iconic than other types of iconic signs (perceptual, both pantomimic and perceptual, or arbitrary), but type of iconicity had no effect on acquisition. Children may not make use of the special status of pantomimic elements of signs. Their vocabularies are, however, shaped by degree of iconicity, which aligns with a structure mapping theory of iconicity, though other explanations are also compatible (e.g., iconicity in child-directed signing). Previously demonstrated effects of type of iconicity may be an artifact of the increased degree of iconicity among pantomimic signs.
23. Pyers J, Senghas A. Lexical iconicity is differentially favored under transmission in a new sign language: The effect of type of iconicity. Sign Language & Linguistics 2020; 23:73-95. [PMID: 33613090 PMCID: PMC7894619 DOI: 10.1075/sll.00044.pye] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Observations that iconicity diminishes over time in sign languages pose a puzzle: why should something so evidently useful and functional decrease? Using an archival dataset of signs elicited over 15 years from 4 first-cohort and 4 third-cohort signers of an emerging sign language (Nicaraguan Sign Language), we investigated changes in pantomimic (body-to-body) and perceptual (body-to-object) iconicity. We make three key observations: (1) there is greater variability in the signs produced by the first cohort compared to the third; (2) while both types of iconicity are evident, pantomimic iconicity is more prevalent than perceptual iconicity for both groups; and (3) across cohorts, pantomimic elements are dropped at a greater rate than perceptual elements. The higher rate of pantomimic iconicity in the first-cohort lexicon reflects the usefulness of body-as-body mapping in language creation. Yet, its greater vulnerability to change over transmission suggests that it is less favored by children's language acquisition processes.
24. Macuch Silva V, Holler J, Ozyurek A, Roberts SG. Multimodality and the origin of a novel communication system in face-to-face interaction. Royal Society Open Science 2020; 7:182056. [PMID: 32218922 PMCID: PMC7029942 DOI: 10.1098/rsos.182056] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/22/2019] [Accepted: 11/27/2019] [Indexed: 05/05/2023]
Abstract
Face-to-face communication is multimodal at its core: it consists of a combination of vocal and visual signalling. However, current evidence suggests that, in the absence of an established communication system, visual signalling, especially in the form of visible gesture, is a more powerful form of communication than vocalization and therefore likely to have played a primary role in the emergence of human language. This argument is based on experimental evidence of how vocal and visual modalities (i.e. gesture) are employed to communicate about familiar concepts when participants cannot use their existing languages. To investigate this further, we introduce an experiment where pairs of participants performed a referential communication task in which they described unfamiliar stimuli in order to reduce reliance on conventional signals. Visual and auditory stimuli were described in three conditions: using visible gestures only, using non-linguistic vocalizations only and given the option to use both (multimodal communication). The results suggest that even in the absence of conventional signals, gesture is a more powerful mode of communication compared with vocalization, but that there are also advantages to multimodality compared to using gesture alone. Participants with an option to produce multimodal signals had comparable accuracy to those using only gesture, but gained an efficiency advantage. The analysis of the interactions between participants showed that interactants developed novel communication systems for unfamiliar stimuli by deploying different modalities flexibly to suit their needs and by taking advantage of multimodality when required.
Affiliation(s)
- Judith Holler
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Asli Ozyurek
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, The Netherlands
- Center for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
- Seán G. Roberts
- Department of Archaeology and Anthropology (excd.lab), University of Bristol, Bristol, UK
25. Winter B, Pérez-Sobrino P, Brown L. The sound of soft alcohol: Crossmodal associations between interjections and liquor. PLoS One 2019; 14:e0220449. [PMID: 31393912 PMCID: PMC6687133 DOI: 10.1371/journal.pone.0220449] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2019] [Accepted: 07/16/2019] [Indexed: 11/19/2022] Open
Abstract
An increasing number of studies reveal crossmodal correspondences between speech sounds and perceptual features such as shape and size. In this study, we show that an interjection Koreans produce when downing a shot of liquor reliably triggers crossmodal associations in American English, German, Spanish, and Chinese listeners who do not speak Korean. Based on how this sound is used in advertising campaigns for the Korean liquor soju, we derive predictions for different crossmodal associations. Our experiments show that the same speech sound is reliably associated with various perceptual, affective, and social meanings. This demonstrates what we call the 'pluripotentiality' of iconicity, that is, the same speech sound is able to trigger a web of interrelated mental associations across different dimensions. We argue that the specific semantic associations evoked by iconic stimuli depend on the task, with iconic meanings having a 'latent' quality that becomes 'actual' in specific semantic contexts. We outline implications for theories of iconicity and advertising.
Affiliation(s)
- Bodo Winter
- Department of English Language & Linguistics, University of Birmingham, Birmingham, United Kingdom
- Paula Pérez-Sobrino
- Applied Linguistics Department, Universidad Politécnica de Madrid, Madrid, Spain
- Lucien Brown
- Korean Studies Program, Monash University, Melbourne, Australia
26. Sehyr ZS, Emmorey K. The perceived mapping between form and meaning in American Sign Language depends on linguistic knowledge and task: evidence from iconicity and transparency judgments. Language and Cognition 2019; 11:208-234. [PMID: 31798755 PMCID: PMC6886719 DOI: 10.1017/langcog.2019.18] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
Iconicity is often defined as the resemblance between a form and a given meaning, while transparency is defined as the ability to infer a given meaning based on the form. This study examined the influence of knowledge of American Sign Language (ASL) on the perceived iconicity of signs and the relationship between iconicity, transparency (correctly guessed signs), 'perceived transparency' (transparency ratings of the guesses), and 'semantic potential' (the diversity (H index) of guesses). Experiment 1 compared iconicity ratings by deaf ASL signers and hearing non-signers for 991 signs from the ASL-LEX database. Signers and non-signers' ratings were highly correlated; however, the groups provided different iconicity ratings for subclasses of signs: nouns vs. verbs, handling vs. entity, and one- vs. two-handed signs. In Experiment 2, non-signers guessed the meaning of 430 signs and rated them for how transparent their guessed meaning would be for others. Only 10% of guesses were correct. Iconicity ratings correlated with transparency (correct guesses), perceived transparency ratings, and semantic potential (H index). Further, some iconic signs were perceived as non-transparent and vice versa. The study demonstrates that linguistic knowledge mediates perceived iconicity distinctly from gesture and highlights critical distinctions between iconicity, transparency (perceived and objective), and semantic potential.
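The "semantic potential" measure above (the H index, the diversity of guesses a sign elicits) can be sketched as Shannon entropy over the distribution of distinct guesses. The guess lists below are invented for illustration, and the study's exact computation may differ in detail.

```python
import math
from collections import Counter

def h_index(guesses):
    """Shannon entropy (in bits) of the distribution of distinct guesses."""
    counts = Counter(guesses)
    total = len(guesses)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A sign whose meaning everyone guesses identically has H = 0;
# more diverse guesses yield higher H.
print(h_index(["bird"] * 10))                    # uniform agreement
print(h_index(["bird", "fly", "wing", "hand"]))  # four distinct guesses
```

On this measure, a sign can be highly iconic yet have high semantic potential: its form evokes many plausible meanings even if few guessers land on the conventional one.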