1. Sidhu DM, Wingate TG, Bourdage JS, Pexman PM. Would you hire Liam over Kirk? Name sound symbolism and hiring. Acta Psychol (Amst) 2025;256:104978. PMID: 40245669. DOI: 10.1016/j.actpsy.2025.104978.
Abstract
Sound symbolism is the phenomenon by which certain language sounds evoke particular associations. Previous work has demonstrated that names evoke personality associations based on the sounds they contain, with names containing sonorant consonants evoking different associations than those containing voiceless stops. Here we examined whether these associations would impact a mock hiring task. We created job ads that described an ideal candidate as being high in one of the six factors of the HEXACO framework of personality. Participants were given a pair of candidates, one whose name contained sonorants (e.g., "Molly") and one whose name contained voiceless stops (e.g., "Katie"). Whether job ads contained three personality adjectives (Experiment 1), a single adjective (Experiment 2), or a single adjective and a picture (Experiment 3), participants were more likely to choose the candidate with the sonorant name for certain personality factors. In Experiment 4, participants saw videotaped mock interviews of candidates presented with a sonorant or voiceless stop name. In the presence of audiovisual information, names were less influential than perceived name fit. These results demonstrate the impact of name sound symbolism in a more material scenario. They also help establish boundary conditions and moderators for name sound symbolism.
2. Winter B. The size and shape of sound: The role of articulation and acoustics in iconicity and crossmodal correspondences. J Acoust Soc Am 2025;157:2636-2656. PMID: 40202363. DOI: 10.1121/10.0036362.
Abstract
Onomatopoeias like hiss and peep are iconic because their forms resemble their meanings. Iconicity can also involve forms and meanings in different modalities, such as when people match the nonce words bouba and kiki to round and angular objects, and mil and mal to small and large ones, also known as "sound symbolism." This paper focuses on what specific analogies motivate such correspondences in spoken language: do people associate shapes and size with how phonemes sound (auditory), or how they are produced (articulatory)? Based on a synthesis of empirical evidence probing the cognitive mechanisms underlying different types of sound symbolism, this paper argues that analogies based on acoustics alone are often sufficient, rendering extant articulatory explanations for many iconic phenomena superfluous. This paper further suggests that different types of crossmodal iconicity in spoken language can fruitfully be understood as an extension of onomatopoeia: when speakers iconically depict such perceptual characteristics as size and shape, they mimic the acoustics that are correlated with these characteristics in the natural world.
Affiliation(s)
- Bodo Winter, Department of Linguistics and Communication, University of Birmingham, Birmingham B15 2TT, United Kingdom

3. Imai M, Kita S, Akita K, Saji N, Ohba M, Namatame M. Does sound symbolism need sound? The role of articulatory movement in detecting iconicity between sound and meaning. J Acoust Soc Am 2025;157:137-148. PMID: 39791996. DOI: 10.1121/10.0034832.
Abstract
Ever since de Saussure [Course in General Linguistics (Columbia University Press, 1916)], theorists of language have assumed that the relation between form and meaning of words is arbitrary. Recently, however, a body of empirical research has established that language is embodied and contains iconicity. Sound symbolism, an intrinsic link language users perceive between word sound and properties of referents, is a representative example of iconicity in language and has offered profound insights into theories of language pertaining to language processing, language acquisition, and evolution. However, on what basis people detect iconicity between sound and meaning has not yet been made clear. One way to address this question is to ask whether one needs to be able to hear sound to detect sound symbolism. Here, it is shown that (1) deaf and hard-of-hearing (DHH) participants, even those with profound hearing loss, could judge the sound symbolic match between shapes and words at the same level of accuracy as hearing participants do; and (2) restriction of articulatory movements negatively affects DHH individuals' judgments. The results support the articulatory theory of sound symbolism and point to the possibility that linguistic symbols may have emerged through iconic mappings across different sensory modalities, in particular oral gesture and sensory experience of the world in the case of speech.
Affiliation(s)
- Mutsumi Imai, Faculty of Environment and Information Studies, Keio University, Fujisawa, Kanagawa 252-0882, Japan
- Sotaro Kita, Department of Psychology, University of Warwick, Coventry CV4 7AL, United Kingdom
- Kimi Akita, Department of English Linguistics, Nagoya University, Nagoya, Aichi 464-8601, Japan
- Noburo Saji, Faculty of Human Sciences, Waseda University, Tokorozawa, Saitama 359-1192, Japan
- Masato Ohba, Graduate School of Media and Governance, Keio University, Fujisawa, Kanagawa 252-0882, Japan
- Miki Namatame, Department of Apparel and Space Design, Kyoto Women's University, Kyoto 605-8501, Japan

4. Barker H, Bozic M. Forms, Mechanisms, and Roles of Iconicity in Spoken Language: A Review. Psychol Rep 2024:332941241310119. PMID: 39705711. DOI: 10.1177/00332941241310119.
Abstract
Historically, debates over relationships between spoken lexical form and meaning have been dominated by views of arbitrariness. However, more recent research has revealed a different perspective, in which non-arbitrary mappings play an important role in the makeup of a lexicon. It is now clear that phoneme-sound symbolism - along with other types of form-to-meaning mappings - contributes to non-arbitrariness (iconicity) of spoken words, which is present in many forms and degrees in different languages. Attempts have been made to provide a mechanistic explanation of the phenomenon, and these theories largely centre around cross-modal correspondences. We build on these views to explore iconicity within the evolutionary context and the neurobiological framework for human language processing. We argue that the multimodal, bihemispheric communicative system, to which iconicity is integral, has important phylogenetic and ontogenetic advantages, facilitating language learning, comprehension, and processing. Despite its numerous advantages, however, iconicity must compete with arbitrariness, forcing language systems to balance the competing needs of perceptual grounding of the linguistic form and ensuring an effective signal. We conclude that, on balance, iconicity should be viewed as integral to language, and not merely a marginal phenomenon.
Affiliation(s)
- Harry Barker, Department of Psychology, University of Cambridge, Cambridge, UK
- Mirjana Bozic, Department of Psychology, University of Cambridge, Cambridge, UK

5. Ćwiek A, Anselme R, Dediu D, Fuchs S, Kawahara S, Oh GE, Paul J, Perlman M, Petrone C, Reiter S, Ridouane R, Zeller J, Winter B. The alveolar trill is perceived as jagged/rough by speakers of different languages. J Acoust Soc Am 2024;156:3468-3479. PMID: 39565142. DOI: 10.1121/10.0034416.
Abstract
Typological research shows that across languages, trilled [r] sounds are more common in adjectives describing rough as opposed to smooth surfaces. The present study builds on this lexical research with an experiment involving speakers of 28 languages from 12 language families. Participants were presented with images of a jagged and a straight line and imagined running their finger along each. They were then played an alveolar trill [r] and an alveolar approximant [l] and matched each sound to one of the lines. Participants showed a strong tendency to match [r] with the jagged line and [l] with the straight line, even more consistently than in a comparable cross-cultural investigation of the bouba/kiki effect. The pattern is strongest for matching [r] to the jagged line, but also very strong for matching [l] to the straight line. While this effect was found with speakers of languages with different phonetic realizations of the rhotic sound, it was weaker when trilled [r] was the primary variant. This suggests that when a sound is used phonologically to make systemic meaning contrasts, its iconic potential may become more limited. These findings extend our understanding of iconic crossmodal correspondences, highlighting deep-rooted connections between auditory perception and touch/vision.
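The headline analysis in forced-choice matching studies like this one is a comparison of matching rates against the 50% chance level. As a sketch only (the counts below are invented, not the study's data), a one-sided exact binomial test needs nothing beyond the standard library:

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): one-sided exact test of
    whether observed matching exceeds the chance level p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical counts: 85 of 100 participants match [r] to the jagged line.
p_value = binom_sf(85, 100)
print(p_value < 0.05)
```

With matching rates as lopsided as those reported for [r]/jagged, such a test rejects chance decisively; the published analysis would additionally model per-language variation, which this sketch omits.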
Affiliation(s)
- Rémi Anselme, Laboratoire Dynamique Du Langage UMR 5596, Université Lumière Lyon 2, Lyon 69363, France
- Dan Dediu, Department of Catalan Philology and General Linguistics, University of Barcelona, Barcelona 08007, Spain; Universitat de Barcelona Institute of Complex Systems (UBICS), Barcelona 08038, Spain; Catalan Institute for Research and Advanced Studies (ICREA), Barcelona 08010, Spain
- Susanne Fuchs, Leibniz-Centre General Linguistics, Berlin 10719, Germany; IMéRA Institute for Advanced Studies of Aix-Marseille University, Marseille 13004, France
- Shigeto Kawahara, The Institute of Cultural and Linguistic Studies, Keio University, Mita, Minato-ku, Tokyo 108-8345, Japan
- Grace E Oh, Department of English Language and Literature, Konkuk University, Seoul 05029, South Korea
- Jing Paul, Asian Studies Program, Agnes Scott College, Decatur, Georgia 30030, USA
- Marcus Perlman, Department of Linguistics and Communication, University of Birmingham, Birmingham B15 2TT, United Kingdom
- Caterina Petrone, Aix-Marseille Université, CNRS, Laboratoire Parole et Langage, 13100 Aix-en-Provence, France
- Sabine Reiter, Departamento de Polonês, Alemão e Letras Clássicas, Universidade Federal do Paraná, 80060-150 Curitiba, Brazil
- Rachid Ridouane, Laboratoire de Phonétique et Phonologie, UMR 7018, CNRS, Université Sorbonne Nouvelle, 75005 Paris, France
- Jochen Zeller, School of Arts, Linguistics Discipline, University of KwaZulu-Natal, Durban 4041, South Africa
- Bodo Winter, Department of Linguistics and Communication, University of Birmingham, Birmingham B15 2TT, United Kingdom

6. Fink L, Fiehn H, Wald-Fuhrmann M. The role of audiovisual congruence in aesthetic appreciation of contemporary music and visual art. Sci Rep 2024;14:20923. PMID: 39251764. PMCID: PMC11384752. DOI: 10.1038/s41598-024-71399-y.
Abstract
Does congruence between auditory and visual modalities affect aesthetic experience? While cross-modal correspondences between vision and hearing are well-documented, previous studies show conflicting results regarding whether audiovisual correspondence affects subjective aesthetic experience. Here, in collaboration with the Kentler International Drawing Space (NYC, USA), we depart from previous research by using music specifically composed to pair with visual art in the professionally-curated Music as Image and Metaphor exhibition. Our pre-registered online experiment consisted of 4 conditions: Audio, Visual, Audio-Visual-Intended (artist-intended pairing of art/music), and Audio-Visual-Random (random shuffling). Participants (N = 201) were presented with 16 pieces and could click to proceed to the next piece whenever they liked. We used time spent as an implicit index of aesthetic interest. Additionally, after each piece, participants were asked about their subjective experience (e.g., feeling moved). We found that participants spent significantly more time with Audio, followed by Audiovisual, followed by Visual pieces; however, they felt most moved in the Audiovisual (bi-modal) conditions. Ratings of audiovisual correspondence were significantly higher for the Audiovisual-Intended compared to Audiovisual-Random condition; interestingly, though, there were no significant differences between intended and random conditions on any other subjective rating scale, or for time spent. Collectively, these results call into question the relationship between cross-modal correspondence and aesthetic appreciation. Additionally, the results complicate the use of time spent as an implicit measure of aesthetic experience.
Affiliation(s)
- Lauren Fink, Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Max Planck-NYU Center for Language, Music, & Emotion, Frankfurt am Main, Germany; Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, ON, Canada
- Hannah Fiehn, Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Institute of Psychology, Goethe University, Frankfurt am Main, Germany
- Melanie Wald-Fuhrmann, Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main, Germany; Max Planck-NYU Center for Language, Music, & Emotion, Frankfurt am Main, Germany

7. Kilpatrick A, Ćwiek A. Using artificial intelligence to explore sound symbolic expressions of gender in American English. PeerJ Comput Sci 2024;10:e1811. PMID: 38283586. PMCID: PMC10821993. DOI: 10.7717/peerj-cs.1811.
Abstract
This study investigates the extent to which gender can be inferred from the phonemes that make up given names and words in American English. Two extreme gradient boosting classifiers were constructed to classify words according to gender: one trained on a list of the most common given names (N ≈ 1,000) in North America, and the other on the Glasgow Norms (N ≈ 5,500), a corpus of nouns, verbs, adjectives, and adverbs, each assigned a psycholinguistic score of how strongly it is associated with male or female behaviour. Both models report significant findings, but the model constructed from given names achieves greater accuracy despite being trained on a smaller dataset, suggesting that gender is expressed more robustly in given names than in other word classes. Feature importance was examined to determine which features contributed to the decision-making process. Feature importance scores revealed a general pattern across both models, but also showed that not all word classes express gender the same way. Finally, the models were reconstructed and tested on the opposite dataset; they were less accurate when classifying the opposite samples, suggesting that each model is better suited to classifying words of its own class.
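The setup described above, turning each word into a vector of sound-unit counts and classifying it by gender, can be sketched in miniature. Everything below is invented for illustration: letter counts stand in for the phoneme features, a simple nearest-centroid rule stands in for the extreme gradient boosting models, and the training names are toy lists, not the study's dataset of roughly 1,000 given names.

```python
from collections import Counter

def features(name):
    """Letter-count profile of a name (a crude stand-in for phoneme features)."""
    return Counter(name.lower())

def centroid(names):
    """Mean feature profile over a list of names."""
    total = Counter()
    for n in names:
        total.update(features(n))
    return {ch: count / len(names) for ch, count in total.items()}

def classify(name, centroids):
    """Assign the label whose centroid profile best overlaps the name's."""
    f = features(name)
    overlap = lambda cent: sum(f[ch] * cent.get(ch, 0.0) for ch in f)
    return max(centroids, key=lambda label: overlap(centroids[label]))

# Invented toy training lists, deliberately separable by letter profile.
training = {
    "male": ["kurt", "kirk", "patrick", "peter", "tuck"],
    "female": ["molly", "lena", "maria", "nella", "selena"],
}
cents = {label: centroid(names) for label, names in training.items()}
print(classify("molly", cents), classify("kurt", cents))
```

The study's cross-classification step corresponds to building `cents` from one dataset and calling `classify` on items from the other.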
Affiliation(s)
- Alexander Kilpatrick, International Communication, Nagoya University of Commerce and Business, Nagoya, Aichi, Japan

8. Sidhu DM, Athanasopoulou A, Archer SL, Czarnecki N, Curtin S, Pexman PM. The maluma/takete effect is late: No longitudinal evidence for shape sound symbolism in the first year. PLoS One 2023;18:e0287831. PMID: 37943758. PMCID: PMC10635456. DOI: 10.1371/journal.pone.0287831.
Abstract
The maluma/takete effect refers to an association between certain language sounds (e.g., /m/ and /o/) and round shapes, and other language sounds (e.g., /t/ and /i/) and spiky shapes. This is an example of sound symbolism and stands in opposition to arbitrariness of language. It is still unknown when sensitivity to sound symbolism emerges. In the present series of studies, we first confirmed that the classic maluma/takete effect would be observed in adults using our novel 3-D object stimuli (Experiments 1a and 1b). We then conducted the first longitudinal test of the maluma/takete effect, testing infants at 4, 8, and 12 months of age (Experiment 2). Sensitivity to sound symbolism was measured with a looking time preference task, in which infants were shown images of a round and a spiky 3-D object while hearing either a round- or spiky-sounding nonword. We did not detect a significant difference in looking time based on nonword type. We also collected a series of individual difference measures, including measures of vocabulary, movement ability and babbling. Analyses of these measures revealed that 12-month-olds who babbled more showed a greater sensitivity to sound symbolism. Finally, in Experiment 3, we had parents take home round or spiky 3-D printed objects, to present to 7- to 8-month-old infants paired with either congruent or incongruent nonwords. This language experience had no effect on subsequent measures of sound symbolism sensitivity. Taken together, these studies demonstrate that sound symbolism is elusive in the first year, and shed light on the mechanisms that may contribute to its eventual emergence.
Affiliation(s)
- David M. Sidhu, Department of Psychology, Carleton University, Ottawa, Canada
- Angeliki Athanasopoulou, School of Languages, Linguistics, Literatures, and Cultures, University of Calgary, Calgary, Canada
- Suzanne Curtin, Department of Child and Youth Studies, Brock University, St. Catharines, Canada
- Penny M. Pexman, Department of Psychology, University of Calgary, Calgary, Canada

9. Barany DA, Lacey S, Matthews KL, Nygaard LC, Sathian K. Neural basis of sound-symbolic pseudoword-shape correspondences. Neuropsychologia 2023;188:108657. PMID: 37543139. PMCID: PMC10529692. DOI: 10.1016/j.neuropsychologia.2023.108657.
Abstract
Non-arbitrary mapping between the sound of a word and its meaning, termed sound symbolism, is commonly studied through crossmodal correspondences between sounds and visual shapes, e.g., auditory pseudowords, like 'mohloh' and 'kehteh', are matched to rounded and pointed visual shapes, respectively. Here, we used functional magnetic resonance imaging (fMRI) during a crossmodal matching task to investigate the hypotheses that sound symbolism (1) involves language processing; (2) depends on multisensory integration; (3) reflects embodiment of speech in hand movements. These hypotheses lead to corresponding neuroanatomical predictions of crossmodal congruency effects in (1) the language network; (2) areas mediating multisensory processing, including visual and auditory cortex; (3) regions responsible for sensorimotor control of the hand and mouth. Right-handed participants (n = 22) encountered audiovisual stimuli comprising a simultaneously presented visual shape (rounded or pointed) and an auditory pseudoword ('mohloh' or 'kehteh') and indicated via a right-hand keypress whether the stimuli matched or not. Reaction times were faster for congruent than incongruent stimuli. Univariate analysis showed that activity was greater for the congruent compared to the incongruent condition in the left primary and association auditory cortex, and left anterior fusiform/parahippocampal gyri. Multivoxel pattern analysis revealed higher classification accuracy for the audiovisual stimuli when congruent than when incongruent, in the pars opercularis of the left inferior frontal (Broca's area), the left supramarginal, and the right mid-occipital gyri. These findings, considered in relation to the neuroanatomical predictions, support the first two hypotheses and suggest that sound symbolism involves both language processing and multisensory integration.
Affiliation(s)
- Deborah A Barany, Department of Kinesiology, University of Georgia and Augusta University/University of Georgia Medical Partnership, Athens, GA 30602, USA
- Simon Lacey, Department of Neurology, Penn State College of Medicine, Hershey, PA 17033-0859, USA; Department of Neural & Behavioral Sciences, Penn State College of Medicine, Hershey, PA 17033-0859, USA; Department of Psychology, Penn State College of Liberal Arts, University Park, PA 16802, USA
- Kaitlyn L Matthews, Department of Psychology, Emory University, Atlanta, GA 30322, USA (present address: Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, MO 63130, USA)
- Lynne C Nygaard, Department of Psychology, Emory University, Atlanta, GA 30322, USA
- K Sathian, Department of Neurology, Penn State College of Medicine, Hershey, PA 17033-0859, USA; Department of Neural & Behavioral Sciences, Penn State College of Medicine, Hershey, PA 17033-0859, USA; Department of Psychology, Penn State College of Liberal Arts, University Park, PA 16802, USA

10. Barany DA, Lacey S, Matthews KL, Nygaard LC, Sathian K. Neural basis of sound-symbolic pseudoword-shape correspondences. bioRxiv [Preprint] 2023:2023.04.14.536865. PMID: 37425853. PMCID: PMC10327042. DOI: 10.1101/2023.04.14.536865.
Abstract
(Abstract identical to the published Neuropsychologia version above.)
HIGHLIGHTS
- fMRI investigation of sound-symbolic correspondences between auditory pseudowords and visual shapes
- Faster reaction times for congruent than incongruent audiovisual stimuli
- Greater activation in auditory and visual cortices for congruent stimuli
- Higher classification accuracy for congruent stimuli in language and visual areas
- Sound symbolism involves language processing and multisensory integration
Affiliation(s)
- Deborah A. Barany, Department of Kinesiology, University of Georgia and Augusta University/University of Georgia Medical Partnership, Athens, GA 30602, USA
- Simon Lacey, Departments of Neurology, Neural & Behavioral Sciences, and Psychology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA
- Kaitlyn L. Matthews, Department of Psychology, Emory University, Atlanta, GA 30322, USA (present address: Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, MO 63130, USA)
- Lynne C. Nygaard, Department of Psychology, Emory University, Atlanta, GA 30322, USA
- K. Sathian, Departments of Neurology, Neural & Behavioral Sciences, and Psychology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA 17033-0859, USA

11. Sciortino P, Kayser C. Steady state visual evoked potentials reveal a signature of the pitch-size crossmodal association in visual cortex. Neuroimage 2023;273:120093. PMID: 37028733. DOI: 10.1016/j.neuroimage.2023.120093.
Abstract
Crossmodal correspondences describe our tendency to associate sensory features from different modalities with each other, such as the pitch of a sound with the size of a visual object. While such crossmodal correspondences (or associations) are described in many behavioural studies, their neurophysiological correlates remain unclear. Under the current working model of multisensory perception, both a low- and a high-level account seem plausible. That is, the neurophysiological processes shaping these associations could commence in low-level sensory regions, or may predominantly emerge in high-level association regions of semantic and object identification networks. We exploited steady-state visual evoked potentials (SSVEP) to directly probe this question, focusing on the associations between pitch and the visual features of size, hue or chromatic saturation. We found that SSVEPs over occipital regions are sensitive to the congruency between pitch and size, and a source analysis pointed to an origin around primary visual cortices. We speculate that this signature of the pitch-size association in low-level visual cortices reflects the successful pairing of congruent visual and acoustic object properties and may contribute to establishing causal relations between multisensory objects. Beyond this, our study provides a paradigm that can be exploited to study other crossmodal associations involving visual stimuli in the future.
12. Kilpatrick A, Ćwiek A, Lewis E, Kawahara S. A cross-linguistic, sound symbolic relationship between labial consonants, voiced plosives, and Pokémon friendship. Front Psychol 2023;14:1113143. PMID: 36910799. PMCID: PMC10000297. DOI: 10.3389/fpsyg.2023.1113143.
Abstract
Introduction: This paper presents a cross-linguistic study of sound symbolism, analysing a six-language corpus of all Pokémon names available as of January 2022. It tests the effects of labial consonants and voiced plosives on a Pokémon attribute known as friendship. Friendship is a mechanic in the core series of Pokémon video games that arguably reflects how friendly each Pokémon is.
Method: Poisson regression is used to examine the relationship between the friendship mechanic and the number of times /p/, /b/, /d/, /m/, /g/, and /w/ occur in the names of English, Japanese, Korean, Chinese, German, and French Pokémon.
Results: Bilabial plosives, /p/ and /b/, typically represent high friendship values in Pokémon names, while /m/, /d/, and /g/ typically represent low friendship values. No association is found for /w/ in any language.
Discussion: Many of the previously known cases of cross-linguistic sound symbolic patterns can be explained by the relationship between how sounds in words are articulated and the physical qualities of the referents. This study, however, builds upon the underexplored relationship between sound symbolism and abstract qualities.
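The Poisson regression at the heart of the method, modelling a count-valued attribute as a log-linear function of phoneme counts, can be sketched without a statistics library. Everything below is invented for illustration (the names, the friendship values, and the letter-count proxy for phoneme counts are not the paper's data); it fits a single-predictor Poisson GLM by gradient ascent on the log-likelihood:

```python
import math

def count_targets(name, targets="pb"):
    """Count target letters in a name (a crude proxy for /p/ and /b/ counts)."""
    return sum(name.count(t) for t in targets)

def poisson_fit(xs, ys, lr=0.0005, steps=50000):
    """Fit log(lambda) = b0 + b1*x by gradient ascent on the Poisson
    log-likelihood sum_i [y_i*(b0 + b1*x_i) - exp(b0 + b1*x_i)]."""
    b0 = math.log(sum(ys) / len(ys))  # start at the overall mean rate
    b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            lam = math.exp(b0 + b1 * x)
            g0 += y - lam          # d/db0 of the log-likelihood
            g1 += x * (y - lam)    # d/db1 of the log-likelihood
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

# Invented toy data: names heavy in bilabial plosives paired with higher
# friendship values, mirroring the direction the paper reports for /p/, /b/.
names = ["pib", "pap", "bub", "tak", "kit", "rok"]
friendship = [140, 120, 130, 70, 50, 60]
xs = [count_targets(n) for n in names]
b0, b1 = poisson_fit(xs, friendship)
print(b1 > 0)  # positive slope: more bilabial plosives, higher friendship
```

A production analysis would use a GLM routine (e.g., from statsmodels or R) with per-language terms; the hand-rolled ascent here only shows what such a routine optimises.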