1. Hollander J, Olney A. Raising the Roof: Situating Verbs in Symbolic and Embodied Language Processing. Cogn Sci 2024; 48:e13442. PMID: 38655894. DOI: 10.1111/cogs.13442. Received 03/21/2022; revised 02/05/2024; accepted 03/29/2024.
Abstract
Recent investigations on how people derive meaning from language have focused on task-dependent shifts between two cognitive systems. The symbolic (amodal) system represents meaning as the statistical relationships between words. The embodied (modal) system represents meaning through neurocognitive simulation of perceptual or sensorimotor systems associated with a word's referent. A primary finding of literature in this field is that the embodied system is only dominant when a task necessitates it, but in certain paradigms, this has only been demonstrated using nouns and adjectives. The purpose of this paper is to study whether similar effects hold with verbs. Experiment 1 evaluated a novel task in which participants rated a selection of verbs on their implied vertical movement. Ratings correlated well with distributional semantic models, establishing convergent validity, though some variance was unexplained by language statistics alone. Experiment 2 replicated previous noun-based location-cue congruency experimental paradigms with verbs and showed that the ratings obtained in Experiment 1 predicted reaction times more strongly than language statistics. Experiment 3 modified the location-cue paradigm by adding movement to create an animated, temporally decoupled, movement-verb judgment task designed to examine the relative influence of symbolic and embodied processing for verbs. Results were generally consistent with linguistic shortcut hypotheses of symbolic-embodied integrated language processing; location-cue congruence elicited processing facilitation in some conditions, and perceptual information accounted for reaction times and accuracy better than language statistics alone. These studies demonstrate novel ways in which embodied and linguistic information can be examined while using verbs as stimuli.
Affiliation(s)
- John Hollander
- Department of Psychology, Institute for Intelligent Systems, University of Memphis
- Andrew Olney
- Department of Psychology, Institute for Intelligent Systems, University of Memphis
2. Shahmohammadi H, Heitmeier M, Shafaei-Bajestan E, Lensch HPA, Baayen RH. Language with vision: A study on grounded word and sentence embeddings. Behav Res Methods 2023. PMID: 38114881. DOI: 10.3758/s13428-023-02294-z. Accepted 11/09/2023.
Abstract
Grounding language in vision is an active field of research seeking to construct cognitively plausible word and sentence representations by incorporating perceptual knowledge from vision into text-based representations. Despite many attempts at language grounding, achieving an optimal equilibrium between textual representations of language and our embodied experiences remains an open problem. Some common concerns are the following. Is visual grounding advantageous for abstract words, or is its effectiveness restricted to concrete words? What is the optimal way of bridging the gap between text and vision? To what extent is perceptual knowledge from images advantageous for acquiring high-quality embeddings? Leveraging the current advances in machine learning and natural language processing, the present study addresses these questions by proposing a simple yet very effective computational grounding model for pre-trained word embeddings. Our model effectively balances the interplay between language and vision by aligning textual embeddings with visual information while simultaneously preserving the distributional statistics that characterize word usage in text corpora. By applying a learned alignment, we are able to indirectly ground unseen words, including abstract words. A series of evaluations on a range of behavioral datasets shows that visual grounding is beneficial not only for concrete words but also for abstract words, lending support to the indirect theory of abstract concepts. Moreover, our approach offers advantages for contextualized embeddings, such as those generated by BERT (Devlin et al., 2018), but only when trained on corpora of modest, cognitively plausible sizes. Code and grounded embeddings for English are available at https://github.com/Hazel1994/Visually_Grounded_Word_Embeddings_2.
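The trade-off the abstract describes, pulling a text embedding toward paired visual information while preserving its distributional statistics, can be illustrated with a deliberately simplified sketch. This is not the paper's learned model: the per-word closed form, the variable names, and the lambda weighting below are illustrative assumptions.

```python
# Simplified per-word grounding sketch (illustrative, not the paper's model):
# the grounded vector g minimizes ||g - v||^2 + lam * ||g - t||^2, balancing
# closeness to the visual vector v against preservation of the text vector t.
# The closed-form minimizer is the weighted average g = (v + lam * t) / (1 + lam).

def ground(text_vec, visual_vec, lam=1.0):
    """Blend a text embedding with a paired visual vector.

    lam > 1 favors preserving distributional statistics;
    lam < 1 favors the visual signal.
    """
    return [(v + lam * t) / (1 + lam) for t, v in zip(text_vec, visual_vec)]

# Toy 3-d example: with lam = 1 the grounded vector is the midpoint.
t = [1.0, 0.0, 2.0]   # hypothetical text embedding
v = [0.0, 2.0, 2.0]   # hypothetical visual feature vector
g = ground(t, v, lam=1.0)
```

In the paper, an alignment is learned from training pairs so that unseen (including abstract) words can be grounded indirectly via their text vectors alone; the per-word average above only conveys the objective being balanced.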
3. Takahashi K, Kajiya T, Ishihara M. Proposal for a Display Method for Myocardial Single Photon Emission Computed Tomography Based on Left Ventricular Volume. Int Heart J 2023; 64:993-1001. PMID: 37967986. DOI: 10.1536/ihj.23-251.
Abstract
Bull's eye view for the display of myocardial single-photon emission computed tomography (SPECT) 3-D perfusion maps does not reflect left ventricular (LV) volume, an important parameter. We created and evaluated a myocardial SPECT display method that reflects LV volume. Using Digital Imaging and Communications in Medicine data, short-axis slices from the apex to the base were reconstructed and interpolated to 0.5-mm thickness. We obtained the radial lengths at 1° intervals throughout 360° and calculated the length of the LV long axis and half circumference (1/2 circ). Myocardial perfusion was displayed as 2 ellipsoidal developments that exhibited the left anterior descending coronary artery (LAD) and non-LAD regions. We created a system that can display these processes on a personal computer. Myocardial SPECT data from 526 individuals without heart disease were analyzed. The long axis and 1/2 circ were compared with body size, the LV end-diastolic diameter (LVDd) obtained by echocardiography, and the end-diastolic volume (EDV) obtained by electrocardiogram-gated SPECT analysis. The 1/2 circ correlated with the LVDd and EDV. The images obtained allowed a diagnosis comparable to that made using the conventional coordinate display system. The new myocardial display reflects ischemia and LV volume within a single image, which cannot be achieved with conventional SPECT image display. Additional studies of this display system are required to allow its application to patients with heart disease.
Affiliation(s)
- Keiko Takahashi
- Department of Patient Safety and Quality Management, School of Medicine, Hyogo Medical University
- Department of Cardiovascular and Renal Medicine, School of Medicine, Hyogo Medical University
- Masaharu Ishihara
- Department of Cardiovascular and Renal Medicine, School of Medicine, Hyogo Medical University
4. Körner A, Castillo M, Drijvers L, Fischer MH, Günther F, Marelli M, Platonova O, Rinaldi L, Shaki S, Trujillo JP, Tsaregorodtseva O, Glenberg AM. Embodied Processing at Six Linguistic Granularity Levels: A Consensus Paper. J Cogn 2023; 6:60. PMID: 37841668. PMCID: PMC10573585. DOI: 10.5334/joc.231. Received 01/14/2022; accepted 06/13/2022. Open access.
Abstract
Language processing is influenced by sensorimotor experiences. Here, we review behavioral evidence for embodied and grounded influences in language processing across six linguistic levels of granularity. We examine (a) sub-word features, discussing grounded influences on iconicity (systematic associations between word form and meaning); (b) words, discussing boundary conditions and generalizations for the simulation of color, sensory modality, and spatial position; (c) sentences, discussing boundary conditions and applications of action direction simulation; (d) texts, discussing how the teaching of simulation can improve comprehension in beginning readers; (e) conversations, discussing how multi-modal cues improve turn taking and alignment; and (f) text corpora, discussing how distributional semantic models can reveal how grounded and embodied knowledge is encoded in texts. These approaches are converging on a convincing account of the psychology of language, but at the same time, there are important criticisms of the embodied approach and of specific experimental paradigms. The surest way forward requires the adoption of a wide array of scientific methods. By providing complementary evidence, a combination of multiple methods on various levels of granularity can help us gain a more complete understanding of the role of embodiment and grounding in language processing.
Affiliation(s)
- Anita Körner
- Department of Psychology, University of Kassel, DE
- Mauricio Castillo
- Center for Basic Research in Psychology, University of the Republic of Uruguay, UY
- Fritz Günther
- Department of Psychology, Humboldt-Universität zu Berlin, DE
- Marco Marelli
- Department of Psychology, University of Milano-Bicocca, IT
- Luca Rinaldi
- Department of Brain and Behavioral Sciences, University of Pavia, IT
- Samuel Shaki
- Department of Behavioral Sciences, Ariel University, IL
- James P. Trujillo
- Max Planck Institute for Psycholinguistics, NL
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, NL
- Oksana Tsaregorodtseva
- Department of Psychology, University of Tübingen, DE
- Linguistic Anthropology Laboratory, Tomsk State University, RU
- Arthur M. Glenberg
- Department of Psychology, Arizona State University, US
- Department of Psychology, University of Wisconsin-Madison, US
- INICO, Universidad de Salamanca, ES
5. Wingfield C, Connell L. Sensorimotor distance: A grounded measure of semantic similarity for 800 million concept pairs. Behav Res Methods 2023; 55:3416-3432. PMID: 36131199. PMCID: PMC10615916. DOI: 10.3758/s13428-022-01965-7. Accepted 08/24/2022.
Abstract
Experimental design and computational modelling across the cognitive sciences often rely on measures of semantic similarity between concepts. Traditional measures of semantic similarity are typically derived from distance in taxonomic databases (e.g., WordNet), databases of participant-produced semantic features, or corpus-derived linguistic distributional similarity (e.g., CBOW), all of which are theoretically problematic in their lack of grounding in sensorimotor experience. We present a new measure of sensorimotor distance between concepts, based on multidimensional comparisons of their experiential strength across 11 perceptual and action-effector dimensions in the Lancaster Sensorimotor Norms. We demonstrate that, in modelling human similarity judgements, sensorimotor distance has comparable explanatory power to other measures of semantic similarity, explains variance in human judgements which is missed by other measures, and does so with the advantages of remaining both grounded and computationally efficient. Moreover, sensorimotor distance is equally effective for both concrete and abstract concepts. We further introduce a web-based tool (https://lancaster.ac.uk/psychology/smdistance) for easily calculating and visualising sensorimotor distance between words, featuring coverage of nearly 800 million word pairs. Supplementary materials are available at https://osf.io/d42q6/.
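The multidimensional comparison described above can be sketched as a distance between two 11-dimensional strength profiles. The order p = 3 is an assumption here, echoing the Minkowski-3 composite used with the Lancaster norms, and the example profiles are invented rather than taken from the published data.

```python
# Sketch of a sensorimotor distance between two 11-dimensional profiles
# (6 perceptual modalities + 5 action effectors, each rated 0-5).
# p=3 is assumed for illustration; the paper's exact metric may differ.

def sensorimotor_distance(a, b, p=3):
    assert len(a) == len(b) == 11, "expected 11-dimensional profiles"
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

# Invented example profiles, ordered as: touch, hearing, smell, taste,
# vision, interoception, mouth/throat, hand/arm, foot/leg, head, torso.
soap    = [4.1, 0.5, 3.8, 0.3, 4.5, 0.2, 0.4, 4.2, 0.3, 1.0, 0.5]
thunder = [0.2, 4.9, 0.1, 0.0, 2.5, 1.1, 0.2, 0.1, 0.1, 1.5, 0.3]
d = sensorimotor_distance(soap, thunder)
```

Because the metric works on any pair of profiles, precomputing it over a full norm vocabulary yields the kind of large pairwise coverage the paper reports.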
Affiliation(s)
- Cai Wingfield
- Department of Psychology, Fylde College, Lancaster University, Lancaster, LA1 4YF, UK
- Louise Connell
- Department of Psychology, Fylde College, Lancaster University, Lancaster, LA1 4YF, UK
- Department of Psychology, Maynooth University, Maynooth, Co. Kildare, Ireland
6. Lee J, Shin JA. The cross-linguistic comparison of perceptual strength norms for Korean, English and L2 English. Front Psychol 2023; 14:1188909. PMID: 37538997. PMCID: PMC10395129. DOI: 10.3389/fpsyg.2023.1188909. Received 03/29/2023; accepted 06/22/2023. Open access.
Abstract
This study aimed to establish perceptual strength norms for 1,000 words in Korean, English, and L2 English, in order to investigate similarities and differences across languages as well as the influence of the environment on semantic processing. Perceptual strength norms, which are a collection of word profiles summarizing how a word is experienced through different sensory modalities, including the five common senses and interoception, provide a valuable tool for testing embodied cognition theory. The results of this study demonstrated that language users had parallel sensory experiences with concepts, and that L2 learners were also able to associate their sensory experiences with linguistic concepts. Additionally, the results highlighted the importance of incorporating interoception as a sensory modality in the development of perceptual strength norms, as it had a negative correlation with both vision and concreteness. This study was the first to establish such norms for Korean and L2 English and to directly compare languages using an identical, translation-equivalent word list.
Affiliation(s)
- Jonghyun Lee
- Department of English Language and Literature, College of Humanities, Seoul National University, Seoul, Republic of Korea
- Jeong-Ah Shin
- Department of English Language and Literature, College of Humanities, Dongguk University, Seoul, Republic of Korea
7. Caballero R, Paradis C. Sharing Perceptual Experiences through Language. J Intell 2023; 11:129. PMID: 37504772. PMCID: PMC10381558. DOI: 10.3390/jintelligence11070129. Received 04/15/2023; revised 06/14/2023; accepted 06/17/2023. Open access.
Abstract
The aim of this article is to shed light on how sensory perceptions are communicated through authentic language. What are the language resources available to match multimodal perceptions, and how do we use them in real communication? We discuss insights from previous work on the topic of the interaction of perception, cognition, and language and explain how language users recontextualise perception in communication about sensory experiences. Within the framework of cognitive semantics, we show that the complexities of multimodal perception are clearly reflected in the multifunctional use of words to convey meanings and feelings. To showcase the language resources employed, we base our findings on research on how architects convey their perceptions of built space. Two main patterns emerge: they use multimodal expressions (soft, bland, and jarring) and descriptions of built space through motion (the building reaches out, or routes and directions such as destination, promenade, route, or landscape in combination with verbs such as start and lead) in which case the architect may either be the observer or the emerged actor. The important take-home message is that there is no neat and clear a priori link between words and meanings, but rather "unforeseen" patterns surface in natural production data describing sensory perceptions.
Affiliation(s)
- Rosario Caballero
- Facultad de Letras, Universidad de Castilla-La Mancha, 13071 Ciudad Real, Spain
- Carita Paradis
- Centre for Languages and Literature, Lund University, 22100 Lund, Sweden
8. Gatti D, Marelli M, Vecchi T, Rinaldi L. Spatial Representations Without Spatial Computations. Psychol Sci 2022; 33:1947-1958. PMID: 36201754. DOI: 10.1177/09567976221094863. Open access.
Abstract
Cognitive maps are assumed to be fundamentally spatial and grounded only in perceptual processes, as supported by the discovery of functionally dedicated cell types in the human brain, which tile the environment in a maplike fashion. Challenging this view, we demonstrate that spatial representations, such as large-scale geographical maps, can also be retrieved with high confidence from natural language through cognitively plausible artificial-intelligence models on the basis of nonspatial associative-learning mechanisms. More critically, we show that linguistic information accounts for the specific distortions observed when college-age adults have to judge the geographical positions of cities, even when these positions are estimated on real maps. These findings indicate that language experience can encode and reproduce cognitive maps without the need for a dedicated spatial-representation system, thus suggesting that the formation of these maps is the result of a strict interplay between spatial- and nonspatial-learning principles.
Affiliation(s)
- Daniele Gatti
- Department of Brain and Behavioral Sciences, University of Pavia
- Marco Marelli
- Department of Psychology, University of Milano-Bicocca
- NeuroMI, Milan Center for Neuroscience, Milano, Italy
- Tomaso Vecchi
- Department of Brain and Behavioral Sciences, University of Pavia
- Cognitive Psychology Unit, IRCCS Mondino Foundation, Pavia, Italy
- Luca Rinaldi
- Department of Brain and Behavioral Sciences, University of Pavia
- Cognitive Psychology Unit, IRCCS Mondino Foundation, Pavia, Italy
9. Krishna PP, Arulmozi S, Mishra RK. Do You See and Hear More? A Study on Telugu Perception Verbs. J Psycholinguist Res 2022; 51:473-484. PMID: 34993848. DOI: 10.1007/s10936-021-09827-7. Accepted 11/23/2021.
Abstract
Verbs of perception describe the actual perception of some entity, and earlier researchers have emphasized that the lexicon of a language is conceptually oriented and necessary for our daily communicative needs. In this paper, we use a Telugu corpus and a self-rating task to determine which perception verbs, across the five senses (vision, hearing, smell, taste, touch), occur with the highest frequencies. The study shows greater lexical differentiation than studies based on English and other language corpora. Our analysis indicates that vision verbs, followed by hearing verbs, are the most commonly used in daily communication by Telugu speakers, compared with touch, taste, and smell. The usage of the other senses is not consistent with patterns reported in other studies, possibly owing to sampling and methodological variation across the corpora of different languages, but in common these two senses play a key role in perception verbs. Further study of Telugu perception verbs may yield additional facts and insights for the cognitive linguistics paradigm.
Affiliation(s)
- P Phani Krishna
- Centre for Neural and Cognitive Sciences, University of Hyderabad, Hyderabad, 500046, India
- S Arulmozi
- Centre for Applied Linguistics and Translation Studies, University of Hyderabad, Hyderabad, 500046, India
- Ramesh Kumar Mishra
- Centre for Neural and Cognitive Sciences, University of Hyderabad, Hyderabad, 500046, India
10. Grand G, Blank IA, Pereira F, Fedorenko E. Semantic projection recovers rich human knowledge of multiple object features from word embeddings. Nat Hum Behav 2022; 6:975-987. PMID: 35422527. DOI: 10.1038/s41562-022-01316-8. Received 05/25/2020; accepted 01/31/2022.
Abstract
How is knowledge about word meaning represented in the mental lexicon? Current computational models infer word meanings from lexical co-occurrence patterns. They learn to represent words as vectors in a multidimensional space, wherein words that are used in more similar linguistic contexts (that is, words that are more semantically related) are located closer together. However, whereas inter-word proximity captures only overall relatedness, human judgements are highly context dependent. For example, dolphins and alligators are similar in size but differ in dangerousness. Here, we use a domain-general method to extract context-dependent relationships from word embeddings: 'semantic projection' of word vectors onto lines that represent features such as size (the line connecting the words 'small' and 'big') or danger ('safe' to 'dangerous'), analogous to 'mental scales'. This method recovers human judgements across various object categories and properties. Thus, the geometry of word embeddings explicitly represents a wealth of context-dependent world knowledge.
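The projection the abstract describes reduces to elementary vector algebra: subtract the 'small' anchor, take the dot product with the line direction, and normalize. The toy vectors below are invented stand-ins for real embeddings, and the 0-to-1 scaling is one convenient convention, not necessarily the paper's.

```python
# Semantic projection sketch: project a word vector onto the line running
# from a 'small' anchor to a 'big' anchor. The returned scalar is 0 at the
# 'small' end and 1 at the 'big' end of the mental scale.

def semantic_projection(word, anchor_lo, anchor_hi):
    direction = [h - l for l, h in zip(anchor_lo, anchor_hi)]
    num = sum((w - l) * d for w, l, d in zip(word, anchor_lo, direction))
    den = sum(d * d for d in direction)
    return num / den

small = [0.0, 1.0, 0.0]   # hypothetical embedding of 'small'
big   = [1.0, 0.0, 1.0]   # hypothetical embedding of 'big'
whale = [0.9, 0.2, 0.8]   # should land near the 'big' end
ant   = [0.1, 0.9, 0.1]   # should land near the 'small' end
```

Swapping the anchor pair (e.g. 'safe' to 'dangerous') reuses the same function for any feature scale, which is what makes the method domain-general.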
11. Banks B, Wingfield C, Connell L. Linguistic Distributional Knowledge and Sensorimotor Grounding both Contribute to Semantic Category Production. Cogn Sci 2021; 45:e13055. PMID: 34647346. DOI: 10.1111/cogs.13055. Received 12/17/2020; revised 06/22/2021; accepted 09/09/2021.
Abstract
The human conceptual system comprises simulated information of sensorimotor experience and linguistic distributional information of how words are used in language. Moreover, the linguistic shortcut hypothesis predicts that people will use computationally cheaper linguistic distributional information where it is sufficient to inform a task response. In a pre-registered category production study, we asked participants to verbally name members of concrete and abstract categories and tested whether performance could be predicted by a novel measure of sensorimotor similarity (based on an 11-dimensional representation of sensorimotor strength) and linguistic proximity (based on word co-occurrence derived from a large corpus). As predicted, both measures predicted the order and frequency of category production but, critically, linguistic proximity had an effect above and beyond sensorimotor similarity. A follow-up study using typicality ratings as an additional predictor found that typicality was often the strongest predictor of category production variables, but it did not subsume sensorimotor and linguistic effects. Finally, we created a novel, fully grounded computational model of conceptual activation during category production, which best approximated typical human performance when conceptual activation was allowed to spread indirectly between concepts, and when candidate category members came from both sensorimotor and linguistic distributional representations. Critically, model performance was indistinguishable from typical human performance. Results support the linguistic shortcut hypothesis in semantic processing and provide strong evidence that both linguistic and grounded representations are inherent to the functioning of the conceptual system. All materials, data, and code are available at https://osf.io/vaq56/.
Affiliation(s)
- Louise Connell
- Department of Psychology, Lancaster University
- Department of Psychology, Maynooth University
12. Utsumi A. Exploring What Is Encoded in Distributional Word Vectors: A Neurobiologically Motivated Analysis. Cogn Sci 2021; 44:e12844. PMID: 32458523. DOI: 10.1111/cogs.12844. Received 04/06/2019; revised 12/27/2019; accepted 03/21/2020.
Abstract
Distributional semantic models, or word embeddings, are used pervasively in both cognitive modeling and practical applications because of their remarkable ability to represent the meanings of words. However, relatively little effort has been made to explore what types of information are encoded in distributional word vectors. Knowing the internal knowledge embedded in word vectors is important for cognitive modeling using distributional semantic models. Therefore, in this paper, we attempt to identify the knowledge encoded in word vectors by conducting a computational experiment using Binder et al.'s (2016) featural conceptual representations based on neurobiologically motivated attributes. In the experiment, these conceptual vectors are predicted from text-based word vectors using a neural network and a linear transformation, and prediction performance is compared among various types of information. The analysis demonstrates that abstract information is generally predicted more accurately by word vectors than perceptual and spatiotemporal information; specifically, the prediction accuracy of cognitive and social information is higher. Emotional information is also found to be successfully predicted for abstract words. These results indicate that language can be a major source of knowledge about abstract attributes, and they support the recent view that emphasizes the importance of language for abstract concepts. Furthermore, we show that word vectors can capture some types of perceptual and spatiotemporal information about concrete concepts and some relevant word categories. This suggests that language statistics can encode more perceptual knowledge than often expected.
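The predict-features-from-vectors setup can be sketched as an ordinary least-squares mapping from word vectors to one attribute rating. Everything below is a scaled-down illustration: the two-dimensional vectors, the synthetic data, and the 'vision strength' target are assumptions, whereas the paper maps full-size embeddings to the 65 Binder attributes with a neural network and a linear transformation.

```python
# Least-squares sketch: predict one Binder-style attribute (here a made-up
# 'vision' strength) from 2-d word vectors via the normal equations
# w = (X^T X)^{-1} X^T y, with the 2x2 inverse written out by hand.

def fit_two_feature_ols(X, y):
    s00 = sum(x[0] * x[0] for x in X)
    s01 = sum(x[0] * x[1] for x in X)
    s11 = sum(x[1] * x[1] for x in X)
    t0 = sum(x[0] * yi for x, yi in zip(X, y))
    t1 = sum(x[1] * yi for x, yi in zip(X, y))
    det = s00 * s11 - s01 * s01  # must be nonzero for a unique solution
    return ((s11 * t0 - s01 * t1) / det, (s00 * t1 - s01 * t0) / det)

# Synthetic training data where vision strength is exactly 2*x0 - x1,
# so the fitted weights should recover (2, -1).
X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
y = [2.0, -1.0, 1.0]
w = fit_two_feature_ols(X, y)
```

Fitting one such regression per attribute and comparing held-out prediction accuracy across attribute types mirrors, in miniature, the comparison the paper reports.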
Affiliation(s)
- Akira Utsumi
- Department of Informatics & Artificial Intelligence eXploration Research Center, The University of Electro-Communications
13. Davis CP, Yee E. Building semantic memory from embodied and distributional language experience. Wiley Interdiscip Rev Cogn Sci 2021; 12:e1555. PMID: 33533205. DOI: 10.1002/wcs.1555. Received 02/07/2020; revised 09/07/2020; accepted 01/10/2021.
Abstract
Humans seamlessly make sense of a rapidly changing environment, using a seemingly limitless knowledge base to recognize and adapt to most situations we encounter. This knowledge base is called semantic memory. Embodied cognition theories suggest that we represent this knowledge through simulation: understanding the meaning of coffee entails reinstantiating the neural states involved in touching, smelling, seeing, and drinking coffee. Distributional semantic theories suggest that we are sensitive to statistical regularities in natural language, and that a cognitive mechanism picks up on these regularities and transforms them into usable semantic representations reflecting the contextual usage of language. These appear to present contrasting views on semantic memory, but do they? Recent years have seen a push toward combining these approaches under a common framework. These hybrid approaches augment our understanding of semantic memory in important ways, but current versions remain unsatisfactory in part because they treat sensory-perceptual and distributional-linguistic data as interacting but distinct types of data that must be combined. We synthesize several approaches which, taken together, suggest that linguistic and embodied experience should instead be considered as inseparably entangled: just as sensory and perceptual systems are reactivated to understand meaning, so are experience-based representations endemic to linguistic processing; further, sensory-perceptual experience is susceptible to the same distributional principles as language experience. This conclusion produces a characterization of semantic memory that accounts for the interdependencies between linguistic and embodied data that arise across multiple timescales, giving rise to concept representations that reflect our shared and unique experiences. This article is categorized under: Psychology > Language; Neuroscience > Cognition; Linguistics > Language in Mind and Brain.
Affiliation(s)
- Charles P Davis
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, USA
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, Connecticut, USA
- Eiling Yee
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, USA
- Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, Storrs, Connecticut, USA
14. Kabbach A, Herbelot A. Avoiding Conflict: When Speaker Coordination Does Not Require Conceptual Agreement. Front Artif Intell 2021; 3:523920. PMID: 33733196. PMCID: PMC7861244. DOI: 10.3389/frai.2020.523920. Received 12/31/2019; accepted 10/19/2020. Open access.
Abstract
In this paper we discuss the socialization hypothesis, the idea that speakers of the same (linguistic) community should share similar concepts given that they are exposed to similar environments and operate in highly coordinated social contexts, and challenge the assumption that it constitutes a prerequisite to successful communication. We do so using distributional semantic models of meaning (DSMs), which create lexical representations via latent aggregation of co-occurrence information between words and contexts. We argue that DSMs constitute particularly adequate tools for exploring the socialization hypothesis given that (1) they provide full control over the notion of background environment, formally characterized as the training corpus from which distributional information is aggregated; and (2) their geometric structure allows for exploiting alignment-based similarity metrics to measure inter-subject alignment over an entire semantic space, rather than a limited set of entries. We propose to model coordination between two different DSMs trained on two distinct corpora as dimensionality selection over a dense matrix obtained via Singular Value Decomposition. This approximates an ad hoc coordination scenario between two speakers as the attempt to align their similarity ratings on a set of word pairs. Our results underline the specific way in which linguistic information is spread across singular vectors, and highlight the need to distinguish agreement from mere compatibility in alignment-based notions of conceptual similarity. Indeed, we show that compatibility emerges from idiosyncrasy, so that the unique and distinctive aspects of speakers' background experiences can actually facilitate, rather than impede, coordination and communication between them.
We conclude that the socialization hypothesis may constitute an unnecessary prerequisite to successful communication and that, all things considered, communication is probably best formalized as the cooperative act of avoiding conflict rather than of maximizing agreement.
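The coordination scenario described above can be sketched in toy form: train co-occurrence spaces on two distinct corpora, reduce each with SVD, and measure alignment as the rank correlation of pairwise similarity ratings across the two spaces. Everything below is illustrative: the mini-corpora, vocabulary, window size, and the use of Spearman correlation over cosine similarities are assumptions standing in for the paper's actual corpora and metrics.

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def cooc_matrix(sents, vocab, window=2):
    """Symmetric word-by-word co-occurrence counts within a token window."""
    idx = {w: i for i, w in enumerate(vocab)}
    M = np.zeros((len(vocab), len(vocab)))
    for s in sents:
        for i, w in enumerate(s):
            for j in range(max(0, i - window), min(len(s), i + window + 1)):
                if i != j and w in idx and s[j] in idx:
                    M[idx[w], idx[s[j]]] += 1
    return M

def svd_space(M, k):
    """Dense k-dimensional space from the top-k singular dimensions."""
    U, S, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] * S[:k]

def pair_sims(E, pairs, idx):
    """Cosine similarity for every word pair in a given embedding space."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return [cos(E[idx[a]], E[idx[b]]) for a, b in pairs]

vocab = ["dog", "cat", "barks", "meows", "pet", "runs"]
corpus_a = [["dog", "barks"], ["cat", "meows"], ["dog", "pet"], ["cat", "pet"]]
corpus_b = [["dog", "runs"], ["cat", "runs"], ["dog", "pet"], ["cat", "meows"]]
idx = {w: i for i, w in enumerate(vocab)}
pairs = list(combinations(vocab, 2))

# Alignment between the two "speakers" at different dimensionalities
for k in (2, 4):
    sims_a = pair_sims(svd_space(cooc_matrix(corpus_a, vocab), k), pairs, idx)
    sims_b = pair_sims(svd_space(cooc_matrix(corpus_b, vocab), k), pairs, idx)
    rho, _ = spearmanr(sims_a, sims_b)
    print(k, round(float(rho), 2))
```

Varying `k` mimics the dimensionality selection the abstract describes: how well the two speakers' similarity ratings align depends on how many singular dimensions are retained.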
Affiliation(s)
- Alexandre Kabbach
- Department of Linguistics, University of Geneva, Geneva, Switzerland
- Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- Aurélie Herbelot
- Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- Department of Information Engineering and Computer Science, University of Trento, Trento, Italy
15
The Lancaster Sensorimotor Norms: multidimensional measures of perceptual and action strength for 40,000 English words. Behav Res Methods 2020; 52:1271-1291. [PMID: 31832879] [PMCID: PMC7280349] [DOI: 10.3758/s13428-019-01316-z]
Abstract
Sensorimotor information plays a fundamental role in cognition. However, the existing materials that measure the sensorimotor basis of word meanings and concepts have been restricted in terms of their sample size and breadth of sensorimotor experience. Here we present norms of sensorimotor strength for 39,707 concepts across six perceptual modalities (touch, hearing, smell, taste, vision, and interoception) and five action effectors (mouth/throat, hand/arm, foot/leg, head excluding mouth/throat, and torso), gathered from a total of 3,500 individual participants using Amazon’s Mechanical Turk platform. The Lancaster Sensorimotor Norms are unique and innovative in a number of respects: They represent the largest-ever set of semantic norms for English, at 40,000 words × 11 dimensions (plus several informative cross-dimensional variables), they extend perceptual strength norming to the new modality of interoception, and they include the first norming of action strength across separate bodily effectors. In the first study, we describe the data collection procedures, provide summary descriptives of the dataset, and interpret the relations observed between sensorimotor dimensions. We then report two further studies, in which we (1) extracted an optimal single-variable composite of the 11-dimension sensorimotor profile (Minkowski 3 strength) and (2) demonstrated the utility of both perceptual and action strength in facilitating lexical decision times and accuracy in two separate datasets. These norms provide a valuable resource to researchers in diverse areas, including psycholinguistics, grounded cognition, cognitive semantics, knowledge representation, machine learning, and big-data approaches to the analysis of language and conceptual representations. The data are accessible via the Open Science Framework (http://osf.io/7emr6/) and an interactive web application (https://www.lancaster.ac.uk/psychology/lsnorms/).
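The composite reported above, Minkowski 3 strength, can be read as the Minkowski distance of order 3 computed over the 11 dimension ratings. A minimal sketch, with hypothetical ratings on an assumed 0-5 scale:

```python
def minkowski_strength(ratings, p=3):
    """Minkowski-p composite of a sensorimotor profile: (sum r_i^p)^(1/p).
    With p=3, strong dimensions dominate more than in a plain sum (p=1)
    but less than under the maximum (p -> infinity)."""
    return sum(r ** p for r in ratings) ** (1.0 / p)

# hypothetical ratings: 6 perceptual modalities + 5 action effectors
profile = [4.2, 1.0, 0.3, 0.1, 4.8, 1.5, 0.7, 3.9, 0.4, 0.2, 0.9]
print(round(minkowski_strength(profile), 2))
```

The composite exceeds the strongest single rating but stays far below the raw sum, which is the sense in which it compresses the 11-dimension profile into one graded strength variable.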
16
Günther F, Petilli MA, Vergallito A, Marelli M. Images of the unseen: extrapolating visual representations for abstract and concrete words in a data-driven computational model. Psychol Res 2020; 86:2512-2532. [PMID: 33180152] [DOI: 10.1007/s00426-020-01429-7]
Abstract
Theories of grounded cognition assume that conceptual representations are grounded in sensorimotor experience. However, abstract concepts such as jealousy or childhood have no directly associated referents with which such sensorimotor experience can be made; therefore, the grounding of abstract concepts has long been a topic of debate. Here, we propose (a) that systematic relations exist between semantic representations learned from language on the one hand and perceptual experience on the other hand, (b) that these relations can be learned in a bottom-up fashion, and (c) that it is possible to extrapolate from this learning experience to predict expected perceptual representations for words even where direct experience is missing. To test this, we implement a data-driven computational model that is trained to map language-based representations (obtained from text corpora, representing language experience) onto vision-based representations (obtained from an image database, representing perceptual experience), and apply its mapping function onto language-based representations for abstract and concrete words outside the training set. In three experiments, we present participants with these words, accompanied by two images: the image predicted by the model and a random control image. Results show that participants' judgements were in line with model predictions even for the most abstract words. This preference was stronger for more concrete items and decreased for the more abstract ones. Taken together, our findings have substantial implications in support of the grounding of abstract words, suggesting that we can tap into our previous experience to create possible visual representations for referents we have never directly experienced.
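The procedure this abstract outlines (learn a function from language-based to vision-based vectors, then apply it to held-out words and compare the predicted image embedding against a random control) can be sketched with a regularized linear map on synthetic data. The dimensionalities, noise level, and ridge penalty below are illustrative assumptions, not the model's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical embeddings: rows are training words, with toy dimensionalities
n_train, d_text, d_vis = 50, 8, 6
T = rng.normal(size=(n_train, d_text))                    # language-based vectors
W_true = rng.normal(size=(d_text, d_vis))                 # unknown true relation
V = T @ W_true + 0.1 * rng.normal(size=(n_train, d_vis))  # vision-based vectors

# learn the text -> vision mapping by regularized least squares (ridge)
lam = 0.1
W = np.linalg.solve(T.T @ T + lam * np.eye(d_text), T.T @ V)

# extrapolate: predict a visual vector for a word outside the training set
t_new = rng.normal(size=d_text)
v_pred = t_new @ W
v_target = t_new @ W_true          # the "correct" image embedding
v_random = rng.normal(size=d_vis)  # a random control image

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# the predicted embedding should sit closer to the matching image
print(cos(v_pred, v_target), cos(v_pred, v_random))
```

The two-alternative judgement in the experiments corresponds to asking which of the two cosine values is larger.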
Collapse
Affiliation(s)
- Alessandra Vergallito
- University of Milano-Bicocca, Milan, Italy; NeuroMI, Milan Center for Neuroscience, Milan, Italy
- Marco Marelli
- University of Milano-Bicocca, Milan, Italy; NeuroMI, Milan Center for Neuroscience, Milan, Italy
17
Barsalou LW. Challenges and Opportunities for Grounding Cognition. J Cogn 2020; 3:31. [PMID: 33043241] [PMCID: PMC7528688] [DOI: 10.5334/joc.116]
Abstract
According to the grounded perspective, cognition emerges from the interaction of classic cognitive processes with the modalities, the body, and the environment. Rather than being an autonomous impenetrable module, cognition incorporates these other domains intrinsically into its operation. The Situated Action Cycle offers one way of understanding how the modalities, the body, and the environment become integrated to ground cognition. Seven challenges and opportunities are raised for this perspective: (1) How does cognition emerge from the Situated Action Cycle and in turn support it? (2) How can we move beyond simply equating embodiment with action, additionally establishing how embodiment arises in the autonomic, neuroendocrine, immune, cardiovascular, respiratory, digestive, and integumentary systems? (3) How can we better understand the mechanisms underlying multimodal simulation, its functions across the Situated Action Cycle, and its integration with other representational systems? (4) How can we develop and assess theoretical accounts of symbolic processing from the grounded perspective (perhaps using the construct of simulators)? (5) How can we move beyond the simplistic distinction between concrete and abstract concepts, instead addressing how concepts about the external and internal worlds pattern to support the Situated Action Cycle? (6) How do individual differences emerge from different populations of situational memories as the Situated Action Cycle manifests itself differently across individuals? (7) How can constructs from grounded cognition provide insight into the replication and generalization crises, perhaps from a quantum perspective on mechanisms (as exemplified by simulators).
Affiliation(s)
- Lawrence W. Barsalou
- Institute of Neuroscience and Psychology, School of Psychology, University of Glasgow, Glasgow, UK
18
Abstract
Studies on the presence of mental simulations during language comprehension have typically focused only on single object properties. This study investigates whether two objects are combined in mental simulations, and whether this is influenced by task instructions. In both experiments, participants read sentences describing animals using a tool in some way. After each sentence, they saw an image of a cartoon animal holding a tool, and they indicated whether the animal (Experiment 1) or the tool (Experiment 2) was mentioned in the previous sentence or not. The shown image completely matched, partially matched, partially mismatched, or completely mismatched the preceding sentence. In total, 90 Dutch psychology students took part in Experiment 1 and 92 students took part in Experiment 2; both experiments were pre-registered. The results suggest that mental simulations indeed combine multiple objects during language comprehension and that this is not influenced by task instructions. Regardless of the instruction type, participants always responded quickest in the complete match condition compared to the partial match condition, suggesting that language comprehension leads to the creation of a complete mental simulation.
Affiliation(s)
- Rolf A Zwaan
- Erasmus University Rotterdam, Rotterdam, The Netherlands
19
Amenta S, Crepaldi D, Marelli M. Consistency measures individuate dissociating semantic modulations in priming paradigms: A new look on semantics in the processing of (complex) words. Q J Exp Psychol (Hove) 2020; 73:1546-1563. [PMID: 32419617] [DOI: 10.1177/1747021820927663]
Abstract
In human language the mapping between form and meaning is arbitrary, as there is no direct connection between words and the objects that they represent. However, within a given language, it is possible to recognise systematic associations that support productivity and comprehension. In this work, we focus on the consistency between orthographic forms and meaning, and we investigate how the cognitive system may exploit it to process words. We take morphology as our case study, since it arguably represents one of the most notable examples of systematicity in form-meaning mapping. In a series of three experiments, we investigate the impact of form-meaning mapping in word processing by testing new consistency metrics as predictors of priming magnitude in primed lexical decision. In Experiment 1, we re-analyse data from five masked morphological priming studies and show that orthography-semantics consistency explains independent variance in priming magnitude, suggesting that word semantics is accessed already at early stages of word processing and that, crucially, semantic access is constrained by word orthography. In Experiments 2 and 3, we investigate whether this pattern is replicated when looking at semantic priming. In Experiment 2, we show that orthography-semantics consistency is not a viable predictor of priming magnitude with longer stimulus onset asynchrony (SOA). However, in Experiment 3, we develop a new semantic consistency measure based on the semantic density of target neighbourhoods. This measure is shown to significantly predict independent variance in the semantic priming effect. Overall, our results indicate that consistency measures provide crucial information for the understanding of word processing. Specifically, the dissociation between measures and priming paradigms shows that different priming conditions are associated with the activation of different semantic cohorts.
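An orthography-semantics consistency score of the kind this abstract tests can be loosely illustrated as a frequency-weighted mean similarity between a word's vector and the vectors of its orthographic relatives. This is a toy sketch, not necessarily the paper's exact formulation: the vectors, frequencies, and the substring-based notion of "orthographic relative" below are all illustrative assumptions.

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def osc(target, vectors, freqs):
    """Frequency-weighted mean semantic similarity between a word and the
    words that orthographically contain it (its orthographic relatives)."""
    relatives = [w for w in vectors if target in w and w != target]
    if not relatives:
        return 0.0
    weights = np.array([freqs[w] for w in relatives], dtype=float)
    sims = np.array([cos(vectors[target], vectors[w]) for w in relatives])
    return float((weights * sims).sum() / weights.sum())

# hypothetical toy vectors: 'corner' contains 'corn' but is semantically unrelated
vectors = {
    "corn":   np.array([1.0, 0.1, 0.0]),
    "corny":  np.array([0.9, 0.2, 0.1]),
    "corner": np.array([0.0, 0.2, 1.0]),
}
freqs = {"corny": 10, "corner": 200}
print(round(osc("corn", vectors, freqs), 3))
```

Because the frequent but unrelated relative (`corner`) dominates the weighting, `corn` comes out as a low-consistency word: its orthography is a poor cue to its meaning.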
Affiliation(s)
- Simona Amenta
- Centre for Mind/Brain Sciences, University of Trento, Rovereto, Italy; Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Davide Crepaldi
- Cognitive Neuroscience Area, International School for Advanced Studies (SISSA), Trieste, Italy; Milan Center for Neuroscience (NeuroMI), Milan, Italy
- Marco Marelli
- Milan Center for Neuroscience (NeuroMI), Milan, Italy; Department of Psychology, University of Milano-Bicocca, Milan, Italy
20
Bottini R, Ferraro S, Nigri A, Cuccarini V, Bruzzone MG, Collignon O. Brain Regions Involved in Conceptual Retrieval in Sighted and Blind People. J Cogn Neurosci 2020; 32:1009-1025. [PMID: 32013684] [DOI: 10.1162/jocn_a_01538]
Abstract
If conceptual retrieval is partially based on the simulation of sensorimotor experience, people with a different sensorimotor experience, such as congenitally blind people, should retrieve concepts in a different way. However, studies investigating the neural basis of several conceptual domains (e.g., actions, objects, places) have shown a very limited impact of early visual deprivation. We approached this problem by investigating brain regions that encode the perceptual similarity of action and color concepts evoked by spoken words in sighted and congenitally blind people. At first, and in line with previous findings, a contrast between action and color concepts (independently of their perceptual similarity) revealed similar activations in sighted and blind people for action concepts and partially different activations for color concepts, but outside visual areas. On the other hand, adaptation analyses based on subjective ratings of perceptual similarity showed compelling differences across groups. Perceptually similar colors and actions induced adaptation in the posterior occipital cortex of sighted people only, overlapping with regions known to represent low-level visual features of those perceptual domains. Early-blind people instead showed a stronger adaptation for perceptually similar concepts in temporal regions, arguably indexing higher reliance on a lexical-semantic code to represent perceptual knowledge. Overall, our results show that visual deprivation does change the neural bases of conceptual retrieval, but mostly at specific levels of representation supporting perceptual similarity discrimination, reconciling apparently contrasting findings in the field.
Affiliation(s)
- Anna Nigri
- Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
21
Perceptual modality norms for 1,121 Italian words: A comparison with concreteness and imageability scores and an analysis of their impact in word processing tasks. Behav Res Methods 2020; 52:1599-1616. [DOI: 10.3758/s13428-019-01337-8]
22
Perspective in the conceptualization of categories. Psychol Res 2019; 85:697-719. [PMID: 31773254] [DOI: 10.1007/s00426-019-01269-0]
Abstract
The ability to differently perceive and represent entities depending on their perspective is crucial for humans. We report five experiments that investigate how the different perspectives adopted while experiencing entities are reflected in conceptualizations (towards vs. away, near vs. far, beside vs. above, inside vs. outside and vision vs. audition vs. touch). Different groups of participants generated object properties while imagining the same scenario from different perspectives (e.g. entities coming toward them/going away from them while on a highway overpass). If conceptualizations have perspectives, then participants should produce features from a perspective entrenched in memory that reflects typical interactions with objects, independently of their assigned perspective (entrenched perspective). In addition, the perspective adopted in a given experiment should influence the properties generated (situated perspective). Results across the experiments indicate that conceptualizations contain both entrenched and situational perspectives. While entrenched perspectives emerge from canonical actions typically performed with objects, locations and entities, situational perspectives reflect online adaptations to current task contexts. The implications of the interplay between entrenched and situational perspectives for grounded cognition are discussed.
23
Günther F, Rinaldi L, Marelli M. Vector-Space Models of Semantic Representation From a Cognitive Perspective: A Discussion of Common Misconceptions. Perspect Psychol Sci 2019; 14:1006-1033. [DOI: 10.1177/1745691619861372]
Abstract
Models that represent meaning as high-dimensional numerical vectors—such as latent semantic analysis (LSA), hyperspace analogue to language (HAL), bound encoding of the aggregate language environment (BEAGLE), topic models, global vectors (GloVe), and word2vec—have been introduced as extremely powerful machine-learning proxies for human semantic representations and have seen an explosive rise in popularity over the past 2 decades. However, despite their considerable advancements and spread in the cognitive sciences, one can observe problems associated with the adequate presentation and understanding of some of their features. Indeed, when these models are examined from a cognitive perspective, a number of unfounded arguments tend to appear in the psychological literature. In this article, we review the most common of these arguments and discuss (a) what exactly these models represent at the implementational level and their plausibility as a cognitive theory, (b) how they deal with various aspects of meaning such as polysemy or compositionality, and (c) how they relate to the debate on embodied and grounded cognition. We identify common misconceptions that arise as a result of incomplete descriptions, outdated arguments, and unclear distinctions between theory and implementation of the models. We clarify and amend these points to provide a theoretical basis for future research and discussions on vector models of semantic representation.
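One recurring point in discussions of these models is how a single vector handles polysemy: the representation of an ambiguous word aggregates distributional evidence from all of its senses, weighted by how often each sense occurs. A toy illustration with made-up two-dimensional sense prototypes (the word, the prototypes, and the 0.7/0.3 mixing weights are all hypothetical):

```python
import numpy as np

def unit(v):
    """Normalize a vector to unit length."""
    return v / np.linalg.norm(v)

# hypothetical sense prototypes in a two-dimensional toy space
finance = unit(np.array([1.0, 0.0]))
river   = unit(np.array([0.0, 1.0]))

# a single vector for an ambiguous word like 'bank' aggregates
# co-occurrence evidence from both senses, weighted by frequency of use
bank = unit(0.7 * finance + 0.3 * river)

# cosine similarity (here just a dot product of unit vectors) to each sense
print(float(bank @ finance), float(bank @ river))
```

The aggregated vector sits between the two sense clusters, closer to the dominant sense; whether this is a flaw or a reasonable model of lexical ambiguity is one of the questions the article discusses.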
Affiliation(s)
- Fritz Günther
- Department of Psychology, University of Milano–Bicocca
- Luca Rinaldi
- Department of Psychology, University of Milano–Bicocca
- NeuroMI, Milan Center for Neuroscience, Milan, Italy
- Marco Marelli
- Department of Psychology, University of Milano–Bicocca
- NeuroMI, Milan Center for Neuroscience, Milan, Italy
24
Speed LJ, Majid A. Grounding language in the neglected senses of touch, taste, and smell. Cogn Neuropsychol 2019; 37:363-392. [DOI: 10.1080/02643294.2019.1623188]
Affiliation(s)
- Laura J. Speed
- Department of Psychology, University of York, York, England
- Asifa Majid
- Department of Psychology, University of York, York, England
25
Lynott D, Walsh M, McEnery T, Connell L, Cross L, O'Brien K. Are You What You Read? Predicting Implicit Attitudes to Immigration Based on Linguistic Distributional Cues From Newspaper Readership; A Pre-registered Study. Front Psychol 2019; 10:842. [PMID: 31130888] [PMCID: PMC6509147] [DOI: 10.3389/fpsyg.2019.00842]
Abstract
The implicit association test (IAT) measures bias towards often controversial topics (e.g., race, religion), while newspapers typically take strong positive/negative stances on such issues. In a pre-registered study, we developed and administered an immigration IAT to readers of the Daily Mail (a typically anti-immigration publication) and the Guardian (a typically pro-immigration publication) newspapers. IAT materials were constructed based on co-occurrence frequencies from each newspaper's website for immigration-related terms (migrant/immigrant) and positive/negative attributes (skilled/unskilled). Target stimuli showed stronger negative associations with immigration concepts in the Daily Mail corpus compared to the Guardian corpus, and stronger positive associations in the Guardian corpus compared to the Daily Mail corpus. Consistent with these linguistic distributional differences, Daily Mail readers exhibited a larger IAT bias, revealing stronger negative associations to immigration concepts compared to Guardian readers. This difference in overall bias was not fully explained by other variables, and raises the possibility that exposure to biased language contributes to biased implicit attitudes.
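The materials procedure described above, counting how often immigration terms co-occur with positive and negative attributes in each corpus, can be sketched as a windowed co-occurrence count. The mini-corpora, window size, and term sets below are invented stand-ins for the newspapers' actual website text:

```python
from collections import Counter

def window_cooc(tokens, targets, attributes, window=5):
    """Count target-attribute co-occurrences within +/- `window` tokens."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok in targets:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i and tokens[j] in attributes:
                    counts[(tok, tokens[j])] += 1
    return counts

# hypothetical mini-corpora standing in for each newspaper's website text
daily = "unskilled migrant workers strain services the migrant crisis".split()
guardian = "skilled migrant workers enrich communities skilled migrant doctors".split()

targets = {"migrant", "immigrant"}
attrs = {"skilled", "unskilled"}
print(window_cooc(daily, targets, attrs))
print(window_cooc(guardian, targets, attrs))
```

Comparing the resulting counts across the two corpora is the distributional contrast on which the study's stimulus selection rests.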
Affiliation(s)
- Dermot Lynott
- Department of Psychology, Fylde College, Lancaster University, Bailrigg, United Kingdom
- Michael Walsh
- Institute for Natural Language Processing, University of Stuttgart, Stuttgart, Germany
- Tony McEnery
- Linguistics and English Language, Lancaster University, Bailrigg, United Kingdom
- Louise Connell
- Department of Psychology, Fylde College, Lancaster University, Bailrigg, United Kingdom
- Liam Cross
- Department of Psychology, School of Health and Wellbeing, University of Wolverhampton, Wolverhampton, United Kingdom
- Kerry O'Brien
- School of Social Sciences, Monash University, Caulfield East, VIC, Australia
26
Abstract
Modality exclusivity norms have been developed in different languages for research on the relationship between perceptual and conceptual systems. This paper sets up the first modality exclusivity norms for Chinese, a Sino-Tibetan language with semantics as its orthographically relevant level. The norms are collected through two studies based on Chinese sensory words. The experimental designs take into consideration the morpho-lexical and orthographic structures of Chinese. Study 1 provides a set of norms for Mandarin Chinese single-morpheme words in mean ratings of the extent to which a word is experienced through the five sense modalities. The degrees of modality exclusivity are also provided. The collected norms are further analyzed to examine how sub-lexical orthographic representations of sense modalities in Chinese characters affect speakers' interpretation of the sensory words. In particular, we found higher modality exclusivity ratings for the sense modality explicitly represented by a semantic radical component, as well as higher auditory dominant modality ratings for characters with transparent phonetic symbol components. Study 2 presents the mean ratings and modality exclusivity of coordinate disyllabic compounds involving multiple sense modalities. These studies open new perspectives in the study of modality exclusivity. First, links between modality exclusivity and writing systems have been established, strengthening previous accounts of the influence of orthography on the processing of visual information in reading. Second, a new set of modality exclusivity norms of compounds is proposed to show the competition of influence on modality exclusivity from different linguistic factors and potentially allow such norms to be linked to studies on synesthesia and semantic transparency.
27
Winter B. Synaesthetic metaphors are neither synaesthetic nor metaphorical. In: Perception Metaphors. 2019: chapter 6. [DOI: 10.1075/celcr.19.06win]
28
Vision dominates in perceptual language: English sensory vocabulary is optimized for usage. Cognition 2018; 179:213-220. [DOI: 10.1016/j.cognition.2018.05.008]
29
Iatropoulos G, Herman P, Lansner A, Karlgren J, Larsson M, Olofsson JK. The language of smell: Connecting linguistic and psychophysical properties of odor descriptors. Cognition 2018; 178:37-49. [DOI: 10.1016/j.cognition.2018.05.007]
30
Lupyan G, Winter B. Language is more abstract than you think, or, why aren't languages more iconic? Philos Trans R Soc Lond B Biol Sci 2018; 373:20170137. [PMID: 29915005] [PMCID: PMC6015821] [DOI: 10.1098/rstb.2017.0137]
Abstract
How abstract is language? We show that abstractness pervades every corner of language, going far beyond the usual examples of freedom and justice. In light of the ubiquity of abstract words, the need to understand where abstract meanings come from becomes ever more acute. We argue that the best source of knowledge about abstract meanings may be language itself. We then consider a seemingly unrelated question: Why isn't language more iconic? Iconicity, a resemblance between the form of words and their meanings, can be immensely useful in language learning and communication. Languages could be much more iconic than they currently are. So why aren't they? We suggest that one reason is that iconicity is inimical to abstraction, because iconic forms are too connected to specific contexts and sensory depictions. Form-meaning arbitrariness may allow language to better convey abstract meanings. This article is part of the theme issue 'Varieties of abstract concepts: development, use and representation in the brain'.
Affiliation(s)
- Gary Lupyan
- Department of Psychology, University of Wisconsin, Madison, WI 53706, USA
- Bodo Winter
- Department of English Language and Applied Linguistics, University of Birmingham, Birmingham, UK
31
Louwerse MM. Knowing the Meaning of a Word by the Linguistic and Perceptual Company It Keeps. Top Cogn Sci 2018; 10:573-589. [PMID: 29851286] [DOI: 10.1111/tops.12349]
Abstract
Debates on meaning and cognition suggest that an embodied cognition account is exclusive of a symbolic cognition account. Decades of research in the cognitive sciences have, however, shown that these accounts are not at all mutually exclusive. Acknowledging that cognition is both symbolic and embodied generates more relevant questions that propel, rather than divide, the cognitive sciences: questions such as how computational symbolic findings map onto experimental embodied findings, and under what conditions cognition is relatively more symbolic or embodied in nature. The current paper revisits the Symbol Interdependency Hypothesis, which argues that language encodes perceptual information and that language users rely on these language statistics in cognitive processes. It argues that the claim that words are abstract, amodal, and arbitrary symbols and therefore must always be grounded to become meaningful is an oversimplification of the language system. Instead, language has evolved such that it maps onto the perceptual system, whereby language users rely on language statistics, which allow for bootstrapping meaning even when grounding is limited.
Affiliation(s)
- Max M Louwerse
- Cognitive Science & Artificial Intelligence, Tilburg University
32
Malhi SK, Buchanan L. A test of the symbol interdependency hypothesis with both concrete and abstract stimuli. PLoS One 2018; 13:e0192719. [PMID: 29590121] [PMCID: PMC5873929] [DOI: 10.1371/journal.pone.0192719]
Abstract
In Experiment 1, the symbol interdependency hypothesis was tested with both concrete and abstract stimuli. Symbolic (i.e., semantic neighbourhood distance) and embodied (i.e., iconicity) factors were manipulated in two tasks: one that tapped symbolic relations (i.e., semantic relatedness judgment) and another that tapped embodied relations (i.e., iconicity judgment). Results supported the symbol interdependency hypothesis in that the symbolic factor was recruited for the semantic relatedness task and the embodied factor was recruited for the iconicity task. Across tasks, and especially in the iconicity task, abstract stimuli resulted in shorter RTs. This finding was in contrast to the concreteness effect, where concrete words result in shorter RTs. Experiment 2 followed up on this finding by replicating the iconicity task from Experiment 1 in an ERP paradigm. Behavioural results continued to show a reverse concreteness effect with shorter RTs for abstract stimuli. However, ERP results paralleled the N400 and anterior N700 concreteness effects found in the literature, with more negative amplitudes for concrete stimuli.
Affiliation(s)
- Lori Buchanan
- Department of Psychology, University of Windsor, Windsor, Ontario, Canada
33
McRae K, Nedjadrasul D, Pau R, Lo BPH, King L. Abstract Concepts and Pictures of Real-World Situations Activate One Another. Top Cogn Sci 2018; 10:518-532. [PMID: 29498490] [DOI: 10.1111/tops.12328]
Abstract
Abstract concepts typically are defined in terms of lacking physical or perceptual referents. We argue instead that they are not devoid of perceptual information because knowledge of real-world situations is an important component of learning and using many abstract concepts. Although the relationship between perceptual information and abstract concepts is less straightforward than for concrete concepts, situation-based perceptual knowledge is part of many abstract concepts. In Experiment 1, participants made lexical decisions to abstract words that were preceded by related and unrelated pictures of situations. For example, share was preceded by a picture of two girls sharing a cob of corn. When pictures were presented for 500 ms, latencies did not differ. However, when pictures were presented for 1,000 ms, decision latencies were significantly shorter for abstract words preceded by related versus unrelated pictures. Because the abstract concepts corresponded to the pictured situation as a whole, rather than a single concrete object or entity, the necessary relational processing takes time. In Experiment 2, on each trial, an abstract word was presented for 250 ms, immediately followed by a picture. Participants indicated whether or not the picture showed a normal situation. Decision latencies were significantly shorter for pictures preceded by related versus unrelated abstract words. Our experiments provide evidence that knowledge of events and situations is important for learning and using at least some types of abstract concepts. That is, abstract concepts are grounded in situations, but in a more complex manner than for concrete concepts. Although people's understanding of abstract concepts certainly includes knowledge gained from language describing situations and events for which those concepts are relevant, sensory and motor information experienced during real-life events is important as well.
Affiliation(s)
- Ken McRae
- Department of Psychology and Brain & Mind Institute, University of Western Ontario
- Daniel Nedjadrasul
- Department of Psychology and Brain & Mind Institute, University of Western Ontario
- Raymond Pau
- Department of Psychology and Brain & Mind Institute, University of Western Ontario
- Bethany Pui-Hei Lo
- Department of Psychology and Brain & Mind Institute, University of Western Ontario
- Lisa King
- Department of Psychology and Brain & Mind Institute, University of Western Ontario

34
Tillman R, Louwerse M. Estimating Emotions Through Language Statistics and Embodied Cognition. J Psycholinguist Res 2018; 47:159-167. [PMID: 29018982] [DOI: 10.1007/s10936-017-9522-y]
Abstract
Recent research has suggested that language processing activates perceptual simulations. We have demonstrated that findings that have been attributed to an embodied cognition account can also be explained by language statistics, because language encodes perceptual information. We investigated whether comprehension of emotion words can be explained by an embodied cognition or a language statistics account. A corpus linguistic study comparing emotion words showed that words denoting the same emotion (happy-delighted) co-occur more frequently than words denoting different emotions (happy-angry). These findings were used in two experiments in which participants read same-emotion and different-emotion sentence pairs. Sentence pairs with different emotions yielded longer RTs than sentence pairs with the same emotions, both in a task tailored toward linguistic representations and in a task tailored toward embodied representations. These findings contribute to a growing body of literature demonstrating that language processing does not always rely solely on perceptual simulation.
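The corpus analysis this abstract summarizes, counting how often emotion words occur together, can be sketched in a few lines. The toy sentences below are invented stand-ins for the large corpora such studies actually use, so the counts only illustrate the direction of the effect (same-emotion pairs co-occurring more than different-emotion pairs).

```python
from collections import Counter
from itertools import combinations

# Toy corpus standing in for the large corpora used in such studies
sentences = [
    "she felt happy and delighted after the news",
    "he was angry and furious about the delay",
    "they were happy delighted and cheerful",
    "the happy crowd cheered",
    "an angry and bitter reply",
]

# Count how often each pair of words occurs in the same sentence
pair_counts = Counter()
for s in sentences:
    words = set(s.split())
    for w1, w2 in combinations(sorted(words), 2):
        pair_counts[(w1, w2)] += 1

# Same-emotion pair co-occurs more often than the different-emotion pair
print(pair_counts[("delighted", "happy")])
print(pair_counts[("angry", "happy")])
```

A real analysis would use windowed co-occurrence counts or an association measure such as PMI over millions of sentences; the comparison logic is the same.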
Affiliation(s)
- Richard Tillman
- Psychology Department, University of Cincinnati, PO Box 210376, Cincinnati, OH 45221, USA
- Max Louwerse
- Tilburg Center for Cognition and Communication, Tilburg University, PO Box 90153, 5000 LE Tilburg, Netherlands

35
Barsalou LW. What does semantic tiling of the cortex tell us about semantics? Neuropsychologia 2017; 105:18-38. [DOI: 10.1016/j.neuropsychologia.2017.04.011]
36
Abstract
We live our lives surrounded by symbols (e.g., road signs, logos, but especially words and numbers), and throughout our life we use them to evoke, communicate and reflect upon ideas and things that are not currently present to our senses. Symbols are represented in our brains at different levels of complexity: at the first and simplest level, as physical entities, in the corresponding primary and secondary sensory cortices. The crucial property of symbols, however, is that, despite the simplicity of their surface forms, they have the power of evoking higher order multifaceted representations that are implemented in distributed neural networks spanning a large portion of the cortex. The rich internal states that reflect our knowledge of the meaning of symbols are what we call semantic representations. In this review paper, we summarize our current knowledge of both the cognitive and neural substrates of semantic representations, focusing on concrete words (i.e., nouns or verbs referring to concrete objects and actions), which, together with numbers, are the most studied and best-defined classes of symbols. Following a systematic descriptive approach, we will organize this literature review around two key questions: What is the content of semantic representations? And how are semantic representations implemented in the brain, in terms of localization and dynamics? While highlighting the main current opposing perspectives on these topics, we propose that a fruitful way to make substantial progress in this domain would be to adopt a geometrical view of semantic representations as points in high dimensional space, and to operationally partition the space of concrete word meaning into motor-perceptual and conceptual dimensions. By giving concrete examples of the kinds of research that can be done within this perspective, we illustrate how we believe this framework will foster theoretical speculation as well as empirical research.
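The geometric view proposed here, word meanings as points in a high-dimensional space partitioned into motor-perceptual and conceptual dimensions, can be illustrated with a toy sketch. Every word, vector value, and dimension label below is invented purely for illustration; real studies derive such dimensions from corpus statistics or neuroimaging data.

```python
import numpy as np

# Hypothetical 6-dimensional semantic space: the first three dimensions
# stand in for motor-perceptual features, the last three for conceptual
# ones. All values are invented for illustration.
words = {
    #           perceptual          conceptual
    "hammer":  [0.9, 0.8, 0.7,   0.2, 0.6, 0.1],
    "wrench":  [0.8, 0.9, 0.6,   0.3, 0.7, 0.2],
    "justice": [0.1, 0.0, 0.1,   0.9, 0.8, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two meaning vectors (points in the space)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Similarity can be assessed on the full space or on one partition alone
full = cosine(words["hammer"], words["wrench"])
perceptual = cosine(words["hammer"][:3], words["wrench"][:3])
print(full, perceptual)
```

Operationally partitioning the space like this lets one ask, for example, whether two words that are close overall are close because of their perceptual dimensions, their conceptual dimensions, or both.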
Affiliation(s)
- Valentina Borghesani
- École Doctorale Cerveau-Cognition-Comportement, Université Pierre et Marie Curie - Paris 6, 75005 Paris, France; Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, U992, F-91191 Gif/Yvette, France; Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, Italy
- Manuela Piazza
- Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, U992, F-91191 Gif/Yvette, France; Center for Mind/Brain Sciences, University of Trento, 38068 Rovereto, Italy

37
Jones LL, Wurm LH, Calcaterra RD, Ofen N. Integrative Priming of Compositional and Locative Relations. Front Psychol 2017; 8:359. [PMID: 28360872] [PMCID: PMC5350123] [DOI: 10.3389/fpsyg.2017.00359]
Abstract
Integrative priming refers to the facilitated recognition of a target word (bench) as a real word following a prime (park). Prior integrative priming studies have used a wide variety of integrative relations including temporal (summer rain), topical (travel book), locative (forest river), and compositional (peach pie) relations. Yet differences in the types of integrative relations may yield differences in the underlying explanatory processes of integrative priming. In this study, we compared the magnitude, time course, and three theoretically based correlates of integrative priming for compositional (stone table) and locative (patio table) pairs in a lexical decision task across four stimulus onset asynchronies (SOAs; 50, 300, 800, and 1,600 ms). Based on the Complementary Role Activation theory, integrative ratings (the extent to which the prime and target can be combined into a meaningful phrase) were predicted to facilitate target RTs. Based on the Embodied Conceptual Combination (ECCo) theory, the local co-occurrence of the prime and target, and the ability to perceptually simulate (visually experience) the prime-target pair were tested as predictors. In comparison to unrelated pairs (nose table), target RTs were faster for the compositional and locative pairs, though they did not differ between these two relation types. In support of the Complementary Role Activation theory, integrative ratings predicted target RTs above and beyond our control variables. In support of the ECCo theory, co-occurrence emerged as an early predictor of target RTs, and visual experience ratings were a reliable predictor at the 300 ms SOA, though only for the compositional relations.
Affiliation(s)
- Lara L Jones
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Lee H Wurm
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Noa Ofen
- Department of Psychology, Wayne State University, Detroit, MI, USA; Institute of Gerontology, Wayne State University, Detroit, MI, USA

38
Dutch modality exclusivity norms: Simulating perceptual modality in space. Behav Res Methods 2017; 49:2204-2218. [DOI: 10.3758/s13428-017-0852-3]
39
Rasheed N, Amin SH, Sultana U, Shakoor R, Zareen N, Bhatti AR. Theoretical accounts to practical models: Grounding phenomenon for abstract words in cognitive robots. Cogn Syst Res 2016. [DOI: 10.1016/j.cogsys.2016.05.001]
40
Abstract
The 15 articles in this special issue on The Representation of Concepts illustrate the rich variety of theoretical positions and supporting research that characterize the area. Although much agreement exists among contributors, much disagreement exists as well, especially about the roles of grounding and abstraction in conceptual processing. I first review theoretical approaches raised in these articles that I believe are Quixotic dead ends, namely, approaches that are principled and inspired but likely to fail. In the process, I review various theories of amodal symbols, their distortions of grounded theories, and fallacies in the evidence used to support them. Incorporating further contributions across articles, I then sketch a theoretical approach that I believe is likely to be successful, which includes grounding, abstraction, flexibility, explaining classic conceptual phenomena, and making contact with real-world situations. This account further proposes that (1) a key element of grounding is neural reuse, (2) abstraction takes the forms of multimodal compression, distilled abstraction, and distributed linguistic representation (but not amodal symbols), and (3) flexible context-dependent representations are a hallmark of conceptual processing.
Affiliation(s)
- Lawrence W Barsalou
- Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow G12 8QB, UK

41
Shen M, Xie J, Liu W, Lin W, Chen Z, Marmolejo-Ramos F, Wang R. Interplay Between the Object and Its Symbol: The Size-Congruency Effect. Adv Cogn Psychol 2016; 12:115-129. [PMID: 27512529] [PMCID: PMC4976128] [DOI: 10.5709/acp-0191-7]
Abstract
Grounded cognition suggests that conceptual processing shares cognitive resources with perceptual processing. Hence, conceptual processing should be affected by perceptual processing, and vice versa. The current study explored the relationship between conceptual and perceptual processing of size. Within a pair of words, we manipulated the font size of each word, which was either congruent or incongruent with the actual size of the referred object. In Experiment 1a, participants compared object sizes that were referred to by word pairs. Higher accuracy was observed in the congruent condition (e.g., word pairs referring to larger objects in larger font sizes) than in the incongruent condition. This is known as the size-congruency effect. In Experiments 1b and 2, participants compared the font sizes of these word pairs. The size-congruency effect was not observed. In Experiments 3a and 3b, participants compared object and font sizes of word pairs depending on a task cue. Results showed that perceptual processing affected conceptual processing, and vice versa. This suggested that the association between conceptual and perceptual processes may be bidirectional but further modulated by semantic processing. Specifically, conceptual processing might only affect perceptual processing when semantic information is activated.
Affiliation(s)
- Manqiong Shen
- Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, School of Psychology, South China Normal University, Guangzhou, China
- Jiushu Xie
- Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, School of Psychology, South China Normal University, Guangzhou, China
- Wenjuan Liu
- Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, School of Psychology, South China Normal University, Guangzhou, China
- Wenjie Lin
- Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, School of Psychology, South China Normal University, Guangzhou, China
- Zhuoming Chen
- Language Disorder Center, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Ruiming Wang
- Center for Studies of Psychological Application, Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, School of Psychology, South China Normal University, Guangzhou, China

42
Zhou P, Christianson K. I “hear” what you're “saying”: Auditory perceptual simulation, reading speed, and reading comprehension. Q J Exp Psychol (Hove) 2016; 69:972-995. [DOI: 10.1080/17470218.2015.1018282]
Abstract
Auditory perceptual simulation (APS) during silent reading refers to situations in which the reader actively simulates the voice of a character or other person depicted in a text. In three eye-tracking experiments, APS effects were investigated as people read utterances attributed to a native English speaker, a non-native English speaker, or no speaker at all. APS effects were measured via online eye movements and offline comprehension probes. Results demonstrated that inducing APS during silent reading resulted in observable differences in reading speed when readers simulated the speech of faster compared to slower speakers and compared to silent reading without APS. Social attitude survey results indicated that readers’ attitudes towards the native and non-native speech did not consistently influence APS-related effects. APS of both native speech and non-native speech increased reading speed, facilitated deeper, less good-enough sentence processing, and improved comprehension compared to normal silent reading.
Affiliation(s)
- Peiyun Zhou
- Department of Educational Psychology, University of Illinois, Urbana-Champaign, Urbana, IL, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana-Champaign, Urbana, IL, USA
- Kiel Christianson
- Department of Educational Psychology, University of Illinois, Urbana-Champaign, Urbana, IL, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana-Champaign, Urbana, IL, USA

43
Recchia G, Louwerse MM. Reproducing affective norms with lexical co-occurrence statistics: Predicting valence, arousal, and dominance. Q J Exp Psychol (Hove) 2015; 68:1584-1598. [DOI: 10.1080/17470218.2014.941296]
Affiliation(s)
- Gabriel Recchia
- Department of Psychology, Institute for Intelligent Systems, University of Memphis, TN, USA
- Max M. Louwerse
- Tilburg Center for Cognition and Communication, Tilburg University, Netherlands

44
De Deyne S, Verheyen S, Storms G. The role of corpus size and syntax in deriving lexico-semantic representations for a wide range of concepts. Q J Exp Psychol (Hove) 2015; 68:1643-1664. [DOI: 10.1080/17470218.2014.994098]
Affiliation(s)
- Gert Storms
- Department of Psychology, University of Leuven, Belgium

45
Recchia G, Sahlgren M, Kanerva P, Jones MN. Encoding sequential information in semantic space models: Comparing holographic reduced representation and random permutation. Comput Intell Neurosci 2015; 2015:986574. [PMID: 25954306] [PMCID: PMC4405220] [DOI: 10.1155/2015/986574]
Abstract
Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, "noisy" permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
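The two binding operators this abstract compares can be sketched with NumPy. The unbind-by-circular-correlation step and the use of random permutations to encode order follow standard holographic reduced representation practice; the vectors themselves are random, so the recovered items are only noisy approximations that a full model would clean up against an item memory.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1024
a = rng.normal(0, 1 / np.sqrt(d), d)  # random item vectors with
b = rng.normal(0, 1 / np.sqrt(d), d)  # expected unit norm

# Circular convolution binding (holographic reduced representation):
# bind a and b into one trace of the same dimensionality via the FFT.
trace = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Unbinding via circular correlation (multiply by the conjugate spectrum
# of a) yields a noisy copy of b.
b_hat = np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(a))))
conv_recovery = np.corrcoef(b_hat, b)[0, 1]

# Random permutation binding: encode order by permuting one operand
# before superposing; the inverse permutation re-exposes the bound item.
perm = rng.permutation(d)
trace_perm = a + b[perm]               # e.g., "a precedes b"
b_hat2 = trace_perm[np.argsort(perm)]  # noisy copy of b (plus permuted a)
perm_recovery = np.corrcoef(b_hat2, b)[0, 1]

print(conv_recovery, perm_recovery)  # both well above chance
```

Permutation binding is cheap (an index shuffle versus FFTs), which is one reason for the scalability advantage the abstract reports.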
Affiliation(s)
- Pentti Kanerva
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, CA 94720, USA

46
Streicher MC, Estes Z. Touch and Go: Merely Grasping a Product Facilitates Brand Perception and Choice. Appl Cogn Psychol 2015. [DOI: 10.1002/acp.3109]
Affiliation(s)
- Mathias C. Streicher
- Department of Strategic Management, Marketing & Tourism, University of Innsbruck, Austria
- Zachary Estes
- Department of Marketing, Bocconi University, Milan, Italy

47
Lebois LAM, Wilson-Mendenhall CD, Barsalou LW. Are Automatic Conceptual Cores the Gold Standard of Semantic Processing? The Context-Dependence of Spatial Meaning in Grounded Congruency Effects. Cogn Sci 2014; 39:1764-1801. [DOI: 10.1111/cogs.12174]
48
Horchak OV, Giger JC, Cabral M, Pochwatko G. From demonstration to theory in embodied language comprehension: A review. Cogn Syst Res 2014. [DOI: 10.1016/j.cogsys.2013.09.002]
49
Borghi AM, Capirci O, Gianfreda G, Volterra V. The body and the fading away of abstract concepts and words: A sign language analysis. Front Psychol 2014; 5:811. [PMID: 25120515] [PMCID: PMC4114187] [DOI: 10.3389/fpsyg.2014.00811]
Abstract
One of the most important challenges for embodied and grounded theories of cognition concerns the representation of abstract concepts, such as “freedom.” Many embodied theories of abstract concepts have been proposed. Some proposals stress the similarities between concrete and abstract concepts, showing that they are both grounded in the perception and action systems, while others emphasize their differences, favoring a multiple representation view. An influential view proposes that abstract concepts are mapped to concrete ones through metaphors. Furthermore, some theories underline the fact that abstract concepts are grounded in specific contents, such as situations, introspective states, and emotions. These approaches are not necessarily mutually exclusive, since it is possible that they can account for different subsets of abstract concepts and words. One novel and fruitful way to understand how abstract concepts are represented is to analyze how sign languages encode concepts into signs. In the present paper we will discuss these theoretical issues mostly relying on examples taken from Italian Sign Language (LIS, Lingua dei Segni Italiana), the visual-gestural language used within the Italian Deaf community. We will verify whether and to what extent LIS signs provide evidence favoring the different theories of abstract concepts. In analyzing signs we will distinguish between direct forms of involvement of the body and forms in which concepts are grounded differently, for example relying on linguistic experience. In dealing with the LIS evidence, we will consider the possibility that different abstract concepts are represented using different levels of embodiment. The collected evidence will help us to discuss whether a unitary embodied theory of abstract concepts is possible or whether the different theoretical proposals can account for different aspects of their representation.
Affiliation(s)
- Anna M Borghi
- Department of Psychology, University of Bologna and Institute of Cognitive Sciences and Technologies, Italian National Research Council, Rome, Italy
- Olga Capirci
- Institute of Cognitive Sciences and Technologies, Italian National Research Council, Rome, Italy
- Virginia Volterra
- Institute of Cognitive Sciences and Technologies, Italian National Research Council, Rome, Italy

50
Borghi AM, Cangelosi A. Action and language integration: From humans to cognitive robots. Top Cogn Sci 2014; 6:344-358. [PMID: 24943900] [DOI: 10.1111/tops.12103]
Abstract
The topic is characterized by a highly interdisciplinary approach to the issue of action and language integration. Such an approach, combining computational models and cognitive robotics experiments with neuroscience, psychology, philosophy, and linguistics, can be a powerful means of helping researchers disentangle ambiguous issues, provide better and clearer definitions, and formulate clearer predictions on the links between action and language. In the introduction we briefly describe the papers and discuss the challenges they pose to future research. We identify four important phenomena the papers address and discuss in light of empirical and computational evidence: (a) the role played not only by sensorimotor and emotional information but also by natural language in conceptual representation; (b) the contextual dependency and high flexibility of the interaction between action, concepts, and language; (c) the involvement of the mirror neuron system in action and language processing; and (d) the way in which the integration between action and language can be addressed by developmental robotics and human-robot interaction.
Affiliation(s)
- Anna M Borghi
- Department of Psychology, University of Bologna; Institute of Cognitive Sciences and Technologies, Italian National Research Council