1
Li Y, Noguchi Y. The role of beta band phase resetting in audio-visual temporal order judgment. Cogn Neurodyn 2025; 19:28. PMID: 39823079; PMCID: PMC11735826; DOI: 10.1007/s11571-024-10183-0. Received 04/11/2024; revised 10/26/2024; accepted 12/13/2024.
Abstract
The integration of auditory and visual stimuli is essential for effective language processing and social perception. The present study aimed to elucidate the mechanisms underlying audio-visual (A-V) integration by investigating the temporal dynamics of multisensory regions in the human brain. Specifically, we evaluated inter-trial coherence (ITC), a neural index indicative of phase resetting, through scalp electroencephalography (EEG) while participants performed a temporal-order judgment task that involved auditory (beep, A) and visual (flash, V) stimuli. The results indicated that ITC phase resetting was greater for bimodal (A + V) stimuli compared to unimodal (A or V) stimuli in the posterior temporal region, which resembled the responses of A-V multisensory neurons reported in animal studies. Furthermore, the ITC became larger as the stimulus-onset asynchrony (SOA) between beep and flash approached 0 ms. This enhancement in ITC was most clearly seen in the beta band (13-30 Hz). Overall, these findings highlight the importance of beta rhythm activity in the posterior temporal cortex for the detection of synchronous audiovisual stimuli, as assessed through temporal order judgment tasks. Supplementary information: The online version contains supplementary material available at 10.1007/s11571-024-10183-0.
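The ITC index named in this abstract has a standard definition: the magnitude of the trial-averaged unit phase vector, ranging from 0 (uniformly distributed phases, no resetting) to 1 (identical phase on every trial). The following is a minimal sketch of that computation on synthetic data, not the authors' pipeline; it assumes single-trial phases have already been extracted (e.g., via a Morlet wavelet transform in the 13-30 Hz beta band):

```python
import numpy as np

def inter_trial_coherence(phases):
    """ITC at each time point: magnitude of the mean unit phase vector
    across trials. 1 = perfect phase locking, ~0 = no phase resetting.
    phases: array of shape (n_trials, n_times), in radians."""
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

rng = np.random.default_rng(0)
n_trials, n_times = 100, 50
locked = np.zeros((n_trials, n_times))                      # identical phase on every trial
unlocked = rng.uniform(-np.pi, np.pi, (n_trials, n_times))  # uniformly random phases
print(inter_trial_coherence(locked).mean())    # → 1.0
print(inter_trial_coherence(unlocked).mean())  # small, on the order of 1/sqrt(n_trials)
```

In the study's design, ITC computed this way would then be compared across bimodal (A + V) and unimodal (A or V) conditions and across SOAs.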
Affiliation(s)
- Yueying Li
- Department of Psychology, Graduate School of Humanities, Kobe University, 1-1 Rokkodai-cho, Nada, Kobe, 657-8501, Japan
- Yasuki Noguchi
- Department of Psychology, Graduate School of Humanities, Kobe University, 1-1 Rokkodai-cho, Nada, Kobe, 657-8501, Japan
2
Winter B. The size and shape of sound: The role of articulation and acoustics in iconicity and crossmodal correspondences. J Acoust Soc Am 2025; 157:2636-2656. PMID: 40202363; DOI: 10.1121/10.0036362. Received 09/25/2024; accepted 03/14/2025.
Abstract
Onomatopoeias like hiss and peep are iconic because their forms resemble their meanings. Iconicity can also involve forms and meanings in different modalities, such as when people match the nonce words bouba and kiki to round and angular objects, and mil and mal to small and large ones, also known as "sound symbolism." This paper focuses on what specific analogies motivate such correspondences in spoken language: do people associate shapes and size with how phonemes sound (auditory), or how they are produced (articulatory)? Based on a synthesis of empirical evidence probing the cognitive mechanisms underlying different types of sound symbolism, this paper argues that analogies based on acoustics alone are often sufficient, rendering extant articulatory explanations for many iconic phenomena superfluous. This paper further suggests that different types of crossmodal iconicity in spoken language can fruitfully be understood as an extension of onomatopoeia: when speakers iconically depict such perceptual characteristics as size and shape, they mimic the acoustics that are correlated with these characteristics in the natural world.
Affiliation(s)
- Bodo Winter
- Department of Linguistics and Communication, University of Birmingham, Birmingham B15 2TT, United Kingdom
3
Bortolotti A, Chen N, Spence C, Palumbo R. Color-taste correspondences influence visual binding errors. Acta Psychol (Amst) 2025; 254:104785. PMID: 39954634; DOI: 10.1016/j.actpsy.2025.104785. Received 11/29/2024; revised 02/05/2025; accepted 02/06/2025.
Abstract
People consistently associate tastes with colors (e.g., sweet-red, sour-yellow, salty-blue). Here, we examined the effect of the congruency of color-taste correspondences on unimodal visual feature binding by studying illusory conjunctions (binding errors). The visual stimuli were typical food words associated with sweet, sour, and salty tastes, presented in red, yellow, and blue font. The food words were presented briefly in pairs under conditions of divided spatial attention, and the participants reported the font color of one of the two words. The words were either congruent or incongruent with the color-taste correspondences. The participants made more illusory conjunctions in the incongruent condition (e.g., sweet-yellow and sour-red) than in the congruent condition (e.g., sweet-red and sour-yellow). These results suggest that the congruency of color-taste correspondences can bias unimodal visual binding errors, likely through a top-down effect.
Affiliation(s)
- Na Chen
- The Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat Gan, Israel
- Charles Spence
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Riccardo Palumbo
- Department of Neuroscience and Imaging, University of Chieti, Italy
4
Brožová N, Vollmer L, Kampa B, Kayser C, Fels J. Cross-modal congruency modulates evidence accumulation, not decision thresholds. Front Neurosci 2025; 19:1513083. PMID: 40052091; PMCID: PMC11882578; DOI: 10.3389/fnins.2025.1513083. Received 10/17/2024; accepted 01/30/2025.
Abstract
Audiovisual cross-modal correspondences (CMCs) refer to the brain's inherent ability to subconsciously connect auditory and visual information. These correspondences reveal essential aspects of multisensory perception and influence behavioral performance, improving reaction times and accuracy. However, the impact of different types of CMCs (arising from statistical co-occurrences or shaped by semantic associations) on information processing and decision-making remains underexplored. This study utilizes the Implicit Association Test, where unisensory stimuli are sequentially presented and linked via CMCs within an experimental block by the specific response instructions (either congruent or incongruent). Behavioral data are integrated with EEG measurements through neurally informed drift-diffusion modeling to examine how neural activity across both auditory and visual trials is modulated by CMCs. Our findings reveal distinct neural components that differentiate between congruent and incongruent stimuli regardless of modality, offering new insights into the role of congruency in shaping multisensory perceptual decision-making. Two key neural stages were identified: an Early component enhancing sensory encoding in congruent trials and a Late component affecting evidence accumulation, particularly in incongruent trials. These results suggest that cross-modal congruency primarily influences the processing and accumulation of sensory information rather than altering decision thresholds.
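For readers unfamiliar with the framework, the drift-diffusion account referenced here separates two quantities: the drift rate (speed of evidence accumulation) and the decision threshold (amount of evidence required to respond). A toy simulation, not the study's neurally informed model, illustrates why a drift-rate change, the locus the authors identify for congruency, alters both speed and accuracy while the threshold stays fixed:

```python
import numpy as np

def simulate_ddm(drift, threshold, n_trials=1000, dt=0.005, noise=1.0, seed=0):
    """Basic drift-diffusion process: evidence x accumulates at `drift`
    per second plus Gaussian noise until it crosses +threshold (correct)
    or -threshold (error). Returns (mean decision time, accuracy)."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x >= threshold)
    return float(np.mean(rts)), float(np.mean(correct))

# A higher drift rate yields faster and more accurate decisions,
# with no change to the decision threshold itself.
rt_hi, acc_hi = simulate_ddm(drift=2.0, threshold=1.0)
rt_lo, acc_lo = simulate_ddm(drift=1.0, threshold=1.0)
print(rt_hi < rt_lo, acc_hi > acc_lo)   # → True True
```

This is the qualitative signature the abstract describes: congruency acting on evidence accumulation produces joint speed and accuracy effects without any threshold adjustment.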
Affiliation(s)
- Natálie Brožová
- Institute for Hearing Technology and Acoustics, RWTH Aachen University, Aachen, Germany
- Lukas Vollmer
- Institute for Hearing Technology and Acoustics, RWTH Aachen University, Aachen, Germany
- Björn Kampa
- Systems Neurophysiology Department, Institute of Zoology, RWTH Aachen University, Aachen, Germany
- Christoph Kayser
- Department of Cognitive Neuroscience, Universität Bielefeld, Bielefeld, Germany
- Janina Fels
- Institute for Hearing Technology and Acoustics, RWTH Aachen University, Aachen, Germany
5
Indraccolo A, Del Gatto C, Pedale T, Santangelo V, Spence C, Brunetti R. Assessing the limits on size-pitch mapping reveals the interplay between top-down and bottom-up influences on relative crossmodal correspondences. Psychol Res 2025; 89:53. PMID: 39960509; DOI: 10.1007/s00426-025-02082-8. Received 10/08/2024; accepted 01/29/2025.
Abstract
Certain sensory dimensions, such as visual size and auditory pitch, are consistently associated, resulting in performance facilitation or inhibition. The mechanisms underlying these crossmodal correspondences are still the subject of debate: The relative or absolute nature of crossmodal mappings is connected to this debate, as an absolute mapping points to a bottom-up process, whereas a relative one is evidence of stronger top-down influences. Three experiments were conducted (N = 207 participants overall), based on two different tasks, designed to explore a wide range of size-pitch crossmodal mappings. In Experiment 1, the participants were instructed to freely manipulate stimuli varying along a given dimension to 'match' the other. The results revealed evidence for a quasi-absolute mapping, but the correspondences shifted depending on the participants' auditory or visual attentional focus. In Experiment 2, the participants performed a visual speeded categorization task, involving a wide range of auditory task-irrelevant pitches, including the "preferred" ones, estimated on the basis of the results of Experiment 1. The results revealed a rather relative mapping, corroborating a top-down influence on the correspondence effect. Experiment 3 was designed to determine whether the relative mapping involved has a boundary. The results confirmed that the larger the interval between pitches (i.e., the more perceptually salient), the stronger the congruence effect, thus highlighting bottom-up facilitation. Taken together, these findings reveal that the size-pitch correspondences are sensitive to task-related top-down factors, as well as to stimulus-related bottom-up influences, ultimately revealing the adaptive nature of this kind of multisensory integration.
Affiliation(s)
- Allegra Indraccolo
- Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
- Claudia Del Gatto
- Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
- Tiziana Pedale
- Functional Neuroimaging Laboratory, IRCCS Santa Lucia, Rome, Italy
- Valerio Santangelo
- Department of Philosophy, Social Sciences & Education, University of Perugia, Perugia, Italy
- Functional Neuroimaging Laboratory, IRCCS Santa Lucia, Rome, Italy
- Charles Spence
- Department of Experimental Psychology, Oxford University, Oxford, UK
- Riccardo Brunetti
- Experimental and Applied Psychology Laboratory, Department of Human Sciences, Università Europea di Roma, Rome, Italy
6
McEwan J, Kritikos A, Zeljko M. Crossmodal correspondence of elevation/pitch and size/pitch is driven by real-world features. Atten Percept Psychophys 2024; 86:2821-2833. PMID: 39461934; DOI: 10.3758/s13414-024-02975-7. Accepted 10/09/2024.
Abstract
Crossmodal correspondences are consistent associations between sensory features from different modalities, with some theories suggesting they may either reflect environmental correlations or stem from innate neural structures. This study investigates this question by examining whether retinotopic or representational features of stimuli induce crossmodal congruency effects. Participants completed an auditory pitch discrimination task paired with visual stimuli varying in their sensory (retinotopic) or representational (scene integrated) nature, for both the elevation/pitch and size/pitch correspondences. Results show that only representational visual stimuli produced crossmodal congruency effects on pitch discrimination. These results support an environmental statistics hypothesis, suggesting crossmodal correspondences rely on real-world features rather than on sensory representations.
Affiliation(s)
- John McEwan
- School of Psychology, The University of Queensland, St. Lucia, QLD, 4072, Australia
- Ada Kritikos
- School of Psychology, The University of Queensland, St. Lucia, QLD, 4072, Australia
- Mick Zeljko
- School of Psychology, The University of Queensland, St. Lucia, QLD, 4072, Australia
7
McEwan J, Kritikos A, Zeljko M. Involvement of the superior colliculi in crossmodal correspondences. Atten Percept Psychophys 2024; 86:931-941. PMID: 38418807; PMCID: PMC11062976; DOI: 10.3758/s13414-024-02866-x. Accepted 02/11/2024.
Abstract
An increasing body of evidence suggests that low-level perceptual processes are involved in crossmodal correspondences. In this study, we investigate the involvement of the superior colliculi in three basic crossmodal correspondences: elevation/pitch, lightness/pitch, and size/pitch. Using a psychophysical design, we modulate visual input to the superior colliculus to test whether it is required for behavioural crossmodal congruency effects to manifest in an unspeeded multisensory discrimination task. In the elevation/pitch task, superior colliculus involvement is required for a behavioural congruency effect to manifest. In the lightness/pitch and size/pitch tasks, we observed behavioural congruency effects regardless of superior colliculus involvement. These results suggest that the elevation/pitch correspondence may be processed differently from other low-level crossmodal correspondences. The implications of a distributed model of crossmodal correspondence processing in the brain are discussed.
Affiliation(s)
- John McEwan
- School of Psychology, The University of Queensland, St. Lucia, Queensland, 4072, Australia
- Ada Kritikos
- School of Psychology, The University of Queensland, St. Lucia, Queensland, 4072, Australia
- Mick Zeljko
- School of Psychology, The University of Queensland, St. Lucia, Queensland, 4072, Australia
8
Bánki A, Köster M, Cichy RM, Hoehl S. Communicative signals during joint attention promote neural processes of infants and caregivers. Dev Cogn Neurosci 2024; 65:101321. PMID: 38061133; PMCID: PMC10754706; DOI: 10.1016/j.dcn.2023.101321. Received 05/12/2023; revised 10/13/2023; accepted 11/04/2023.
Abstract
Communicative signals such as eye contact increase infants' brain activation to visual stimuli and promote joint attention. Our study assessed whether communicative signals during joint attention enhance infant-caregiver dyads' neural responses to objects, and their neural synchrony. To track mutual attention processes, we applied rhythmic visual stimulation (RVS), presenting images of objects to 12-month-old infants and their mothers (n = 37 dyads), while we recorded dyads' brain activity (i.e., steady-state visual evoked potentials, SSVEPs) with electroencephalography (EEG) hyperscanning. Within dyads, mothers either communicatively showed the images to their infant or watched the images without communicative engagement. Communicative cues increased infants' and mothers' SSVEPs at central-occipital-parietal and central electrode sites, respectively. Infants showed significantly more gaze behaviour toward images during communicative engagement. Dyadic neural synchrony (SSVEP amplitude envelope correlations, AECs) was not modulated by communicative cues. Taken together, maternal communicative cues in joint attention increase infants' neural responses to objects, and shape mothers' own attention processes. We show that communicative cues enhance cortical visual processing and thus play an essential role in social learning. Future studies need to elucidate the effect of communicative cues on neural synchrony during joint attention. Finally, our study introduces RVS to study infant-caregiver neural dynamics in social contexts.
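The SSVEP measure named in this abstract is conventionally quantified as the amplitude of the EEG spectrum at the visual stimulation frequency, which rhythmic visual stimulation makes narrowband and easy to isolate. A synthetic sketch of that readout (sampling rate, stimulation frequency, and amplitudes are illustrative, not taken from the study):

```python
import numpy as np

fs, f_stim, dur = 250.0, 6.0, 10.0   # sampling rate (Hz), RVS frequency (Hz), seconds
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)

# Toy single-channel EEG: a 6 Hz steady-state response buried in broadband noise
eeg = 0.5 * np.sin(2 * np.pi * f_stim * t) + rng.standard_normal(t.size)

# Single-sided amplitude spectrum; the SSVEP is read out at the stimulation frequency
spec = 2 * np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
ssvep_amp = spec[np.argmin(np.abs(freqs - f_stim))]
print(ssvep_amp)   # roughly the injected 0.5 amplitude, well above the noise floor
```

In a hyperscanning analysis like the one described, this amplitude would be computed per participant and condition, while the dyadic AEC measure would instead correlate the two partners' amplitude envelopes over time.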
Affiliation(s)
- Anna Bánki
- University of Vienna, Faculty of Psychology, Vienna, Austria
- Moritz Köster
- University of Regensburg, Institute for Psychology, Regensburg, Germany; Freie Universität Berlin, Faculty of Education and Psychology, Berlin, Germany
- Stefanie Hoehl
- University of Vienna, Faculty of Psychology, Vienna, Austria