1
Sadeghlo N, Selvanathan J, Koshkebaghi D, Cioffi I. Aberrant occlusal sensitivity in adults with increased somatosensory amplification: a case-control study. Clin Oral Investig 2024; 28:250. PMID: 38613726. DOI: 10.1007/s00784-024-05628-z.
Abstract
Objectives: Occlusal sensitivity (OS), the ability to detect fine objects between opposing teeth, relies mainly on the activity of mechanoreceptors located in the periodontal ligament. We tested whether somatosensory amplification (SSA), the tendency to perceive normal somatic sensations as intense, noxious, and disturbing, which plays a critical role in hypervigilance, affects OS.
Materials and methods: We measured OS in 66 adults divided into three groups based on their SSA scores (low [LowSSA], intermediate [IntSSA], and high [HighSSA]) by asking them to bite on aluminum foils (8 to 72 μm thick) and a sham foil, and to report whether they felt each foil. We performed 20 trials for each thickness and for the sham condition (each participant was tested 120 times), and compared the frequency of correct answers (%correct) among groups after adjusting for participants' trait anxiety, depression, self-reported oral behaviors, and masseter cross-sectional area.
Results: %correct was affected by the Foil Thickness-by-SSA interaction (p = 0.007). With the 8 μm foil, the HighSSA group had a lower %correct than the IntSSA (contrast estimate [95% CI]: -14.2 [-25.8 to -2.6]; p = 0.012) and LowSSA groups (-19.1 [-31.5 to -6.6]; p = 0.001). Similarly, with the 24 μm foil, the HighSSA group had a lower %correct than the IntSSA (-12.4 [-24.8 to -0.1]; p = 0.048) and LowSSA groups (-10.8 [-22.5 to 0.8]; p = 0.073).
Conclusion: Individuals with high SSA present with aberrant occlusal sensitivity.
Clinical relevance: Our findings provide novel insights into the relationship between occlusal perception and psychological factors, which may influence an individual's ability to adapt to dental work.
Affiliation(s)
- Negin Sadeghlo
- Faculty of Dentistry, Centre for Multimodal Sensorimotor and Pain Research, University of Toronto, 124 Edward Street, Toronto, ON, M5G 1X3, Canada
- Janannii Selvanathan
- Faculty of Dentistry, Centre for Multimodal Sensorimotor and Pain Research, University of Toronto, 124 Edward Street, Toronto, ON, M5G 1X3, Canada
- Dursa Koshkebaghi
- Faculty of Dentistry, Centre for Multimodal Sensorimotor and Pain Research, University of Toronto, 124 Edward Street, Toronto, ON, M5G 1X3, Canada
- Iacopo Cioffi
- Faculty of Dentistry, Centre for Multimodal Sensorimotor and Pain Research, University of Toronto, 124 Edward Street, Toronto, ON, M5G 1X3, Canada.
2
Wang QJ, Thomadsen JK, Amidi A. Can metaphors help us better remember wines? The effect of wine evaluation style on short-term recognition of red wines. Food Res Int 2024; 179:114009. PMID: 38342534. DOI: 10.1016/j.foodres.2024.114009.
Abstract
People are generally poor at remembering complex food stimuli, such as wine. While writing a description has been shown to improve memory performance, talking about wine is generally a difficult task for novices. However, giving novices a framework in which to evaluate the wine may help with the memory process. Using a short-term recognition task, this experiment compared different forms of wine evaluation on the to-be-remembered wine sample, using either 1) a classic smell and taste evaluation, 2) a multisensory metaphor selection task with visual, auditory, and tactile metaphors, or 3) a control condition with no writing. Results from 153 participants revealed that recognition performance between the three groups was not significantly different. Secondary analysis revealed that recognition accuracy was correlated with wine liking for the control group, suggesting that in the absence of explicitly evaluating the wine, participants relied on wine liking as a cue for memory. Implications for theory development and applications in wine education are discussed.
Affiliation(s)
- Qian Janice Wang
- Department of Food Science, University of Copenhagen, Frederiksberg, Denmark.
- Ali Amidi
- Department of Psychology and Behavioural Sciences, Aarhus University, Aarhus, Denmark
3
Friehs MA, Stegemann MJ, Merz S, Geißler C, Meyerhoff HS, Frings C. The influence of tDCS on perceived bouncing/streaming. Exp Brain Res 2023; 241:59-66. PMID: 36357591. PMCID: PMC9870834. DOI: 10.1007/s00221-022-06505-5.
Abstract
Processing ambiguous situations is a constant challenge in everyday life, and sensory input from different modalities needs to be integrated to form a coherent mental representation of the environment. The bouncing/streaming illusion can be studied to provide insights into the ambiguous perception and processing of multi-modal environments. In short, the likelihood of reporting bouncing rather than streaming impressions increases when a sound coincides with the moment of overlap between two moving disks. Neuroimaging studies revealed that the right posterior parietal cortex is crucial in cross-modal integration and is active during the bouncing/streaming illusion. Consequently, in the present study, we used transcranial direct current stimulation (tDCS) to stimulate this brain area. In the active stimulation conditions, a 9 cm² electrode was positioned over the P4 EEG position and a 35 cm² reference electrode over the left upper arm. The stimulation lasted 15 min. Each participant performed the bouncing/streaming task three times: before, during, and after anodal or sham stimulation. In a sample of N = 60 healthy young adults, we found no influence of anodal tDCS. Bayesian analysis showed strong evidence against tDCS effects. There are two possible explanations for the finding that anodal tDCS over perceptual areas did not modulate multimodal integration. First, upregulation of multimodal integration may not be possible using tDCS over the PPC because the integration process already functions at maximum capacity. Second, prefrontal decision-making areas may have overruled any modulated input from the PPC if it did not match their decision-making criterion, compensating for the modulation.
Affiliation(s)
- Maximilian A. Friehs
- Lise-Meitner Research Group Cognition and Plasticity, Max-Planck-Institute for Human Cognitive and Brain Science, Leipzig, Germany; School of Psychology, University College Dublin, Dublin, Ireland
4
Tachmatzidou O, Paraskevoudi N, Vatakis A. Exposure to multisensory and visual static or moving stimuli enhances processing of nonoptimal visual rhythms. Atten Percept Psychophys 2022. PMID: 36241841. DOI: 10.3758/s13414-022-02569-1.
Abstract
Research has shown that visual moving and multisensory stimuli can efficiently mediate rhythmic information. It is possible, therefore, that the previously reported auditory dominance in rhythm perception is due to the use of nonoptimal visual stimuli. Yet it remains unknown whether exposure to multisensory or visual-moving rhythms would benefit the processing of rhythms consisting of nonoptimal static visual stimuli. Using a perceptual learning paradigm, we tested whether the visual component of the multisensory training pair can affect processing of metric simple two-integer-ratio nonoptimal visual rhythms. Participants were trained with static (AVstat), moving-inanimate (AVinan), or moving-animate (AVan) visual stimuli along with auditory tones and a regular beat. In the pre- and posttraining tasks, participants responded whether two static-visual rhythms differed or not. Results showed improved posttraining performance for all training groups irrespective of the type of visual stimulation. To assess whether this benefit was auditory driven, we introduced visual-only training with a moving (Vinan) or static (Vstat) stimulus and a regular beat. Comparisons between Vinan and Vstat showed that, even in the absence of auditory information, training with visual-only moving or static stimuli resulted in enhanced posttraining performance. Overall, our findings suggest that audiovisual and visual static or moving training can benefit processing of nonoptimal visual rhythms.
5
Chang DHF, Thinnes D, Au PY, Maziero D, Stenger VA, Sinnett S, Vibell J. Sound-modulations of visual motion perception implicate the cortico-vestibular brain. Neuroimage 2022; 257:119285. PMID: 35537600. DOI: 10.1016/j.neuroimage.2022.119285.
Abstract
A widely used example of the intricate (yet poorly understood) intertwining of multisensory signals in the brain is the audiovisual bounce-inducing effect (ABE). In this effect, two identical objects move along the azimuth with uniform motion in opposite directions. The perceptual interpretation of the motion is ambiguous and is modulated if a transient (sound) is presented in coincidence with the point of overlap of the two objects' motion trajectories. This phenomenon has long been written off as reflecting simple attentional or decision-making mechanisms, although its neurological underpinnings are not well understood. Using behavioural metrics concurrently with event-related fMRI, we show that sound-induced modulations of motion perception can be further modulated by changing the motion dynamics of the visual targets. The phenomenon engages the posterior parietal cortex and the parieto-insular vestibular cortical complex, with a close correspondence between activity in these regions and behaviour. These findings suggest that the insular cortex is engaged in deriving a probabilistic perceptual solution through the integration of multisensory data.
Affiliation(s)
- Dorita H F Chang
- Department of Psychology and The State Key Laboratory of Brain and Cognitive Sciences, The University of Hong Kong, Hong Kong.
- David Thinnes
- Department of Psychology, University of Hawai'i at Mānoa, Hawaii, USA; Faculty of Medicine, Systems Neuroscience & Neurotechnology Unit, Saarland University & HTW Saar, Germany
- Pak Yam Au
- Department of Psychology and The State Key Laboratory of Brain and Cognitive Sciences, The University of Hong Kong, Hong Kong
- Danilo Maziero
- Department of Medicine, MR Research Program, John A. Burns School of Medicine, University of Hawai'i, HI, USA
- Victor Andrew Stenger
- Department of Medicine, MR Research Program, John A. Burns School of Medicine, University of Hawai'i, HI, USA
- Scott Sinnett
- Department of Psychology, University of Hawai'i at Mānoa, Hawaii, USA
- Jonas Vibell
- Department of Psychology, University of Hawai'i at Mānoa, Hawaii, USA.
6
Chau E, Murray CA, Shams L. Hierarchical drift diffusion modeling uncovers multisensory benefit in numerosity discrimination tasks. PeerJ 2021; 9:e12273. PMID: 34760356. PMCID: PMC8556708. DOI: 10.7717/peerj.12273.
Abstract
Studies of accuracy and reaction time in decision making often observe a speed-accuracy tradeoff, where either accuracy or reaction time is sacrificed for the other. While this effect may mask certain multisensory benefits in performance when accuracy and reaction time are separately measured, drift diffusion models (DDMs) are able to consider both simultaneously. However, drift diffusion models are often limited by large sample size requirements for reliable parameter estimation. One solution to this restriction is the use of hierarchical Bayesian estimation for DDM parameters. Here, we utilize hierarchical drift diffusion models (HDDMs) to reveal a multisensory advantage in auditory-visual numerosity discrimination tasks. By fitting this model with a modestly sized dataset, we also demonstrate that large sample sizes are not necessary for reliable parameter estimation.
Affiliation(s)
- Edwin Chau
- Department of Mathematics, University of California, Los Angeles, Los Angeles, California, USA
- Carolyn A Murray
- Department of Psychology, University of California, Los Angeles, Los Angeles, California, USA
- Ladan Shams
- Department of Psychology, BioEngineering, and Interdepartmental Neuroscience Program, University of California, Los Angeles, Los Angeles, California, USA
7
Scurry AN, Lovelady Z, Jiang F. Task-dependent audiovisual temporal sensitivity is not affected by stimulus intensity levels. Vision Res 2021; 186:71-79. PMID: 34058622. PMCID: PMC8273142. DOI: 10.1016/j.visres.2021.05.006.
Abstract
Flexibility and robustness of multisensory temporal recalibration are paramount for maintaining perceptual constancy of the surrounding natural world. Different environments impart various impediments, distances, and routes that alter the propagation times of the sight and sound cues comprising a multimodal event. One's ability to rapidly calibrate and account for these external variations allows for maintained perception of synchrony, which is crucial for coherent and consistent perception. The two common paradigms used to compare precision of temporal processing between experimental and control groups, the simultaneity judgment (SJ) and temporal order judgment (TOJ) tasks, often use supra-threshold stimuli. However, few studies have specifically examined the effects of normalizing stimulus intensities to participants' unisensory detection thresholds. The current project presented multiple combinations of auditory and visual stimulus intensity levels, based on individual detection thresholds, during TOJ and SJ tasks. While no effect of stimulus intensity was found on temporal sensitivity or perceived temporal synchrony, there was a significant difference in point of subjective simultaneity (PSS) measures between tasks. In addition, PSS estimates were audio-leading, rather than visual-leading as previously reported, suggesting that exposure to the particular combinations of stimulus intensity levels used influenced temporal synchrony perception. Overall, these results support the use of supra-threshold stimuli in TOJ and SJ tasks as a way of minimizing the confound from differences in unisensory processing.
Affiliation(s)
- Alexandra N Scurry
- Department of Psychology, University of Nevada, 1664 N. Virginia St., Reno, NV 89557, USA.
- Zachary Lovelady
- Department of Psychology, University of Nevada, 1664 N. Virginia St., Reno, NV 89557, USA
- Fang Jiang
- Department of Psychology, University of Nevada, 1664 N. Virginia St., Reno, NV 89557, USA
8
Hu DZ, Wen K, Chen LH, Yu C. Perceptual learning evidence for supramodal representation of stimulus orientation at a conceptual level. Vision Res 2021; 187:120-128. PMID: 34252727. DOI: 10.1016/j.visres.2021.06.010.
Abstract
When stimulus inputs from different senses are integrated to form a coherent percept, inputs from a more precise sense are typically more dominant than those from a less precise sense. Here, we hypothesized that some basic stimulus features, such as orientation, can be represented supramodally at a conceptual level that is independent of the original modality precision. This hypothesis was tested with perceptual learning experiments. Specifically, participants practiced coarser tactile orientation discrimination, which initially had little impact on finer visual orientation discrimination (tactile vs. visual orientation thresholds = 3:1). However, if participants also practiced a functionally orthogonal visual contrast discrimination task in a double-training design, their visual orientation performance improved at both tactile-trained and untrained orientations, as much as through direct visual orientation training. The complete tactile-to-visual learning transfer is consistent with a conceptual supramodal representation of orientation unconstrained by the original modality precision, likely through certain forms of input standardization. Moreover, this conceptual supramodal representation, when improved through perceptual learning in one sense, can in turn facilitate orientation discrimination in an untrained sense.
Affiliation(s)
- Ding-Zhi Hu
- PKU-Tsinghua Center for Life Sciences, Peking University, Beijing, China
- Kai Wen
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Li-Han Chen
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China.
- Cong Yu
- PKU-Tsinghua Center for Life Sciences, Peking University, Beijing, China; School of Psychological and Cognitive Sciences, Peking University, Beijing, China; IDG-McGovern Institute for Brain Research, Peking University, Beijing, China.
9
Villalonga MB, Sussman RF, Sekuler R. Perceptual timing precision with vibrotactile, auditory, and multisensory stimuli. Atten Percept Psychophys 2021; 83:2267-2280. PMID: 33772447. DOI: 10.3758/s13414-021-02254-9.
Abstract
The growing use of vibrotactile signaling devices makes it important to understand the perceptual limits on vibrotactile information processing. To promote that understanding, we carried out a pair of experiments on vibrotactile, auditory, and bimodal (synchronous vibrotactile and auditory) temporal acuity. On each trial, subjects experienced a set of isochronous, standard intervals (400 ms each), followed by one interval of variable duration (400 ± 1-80 ms). Intervals were demarcated by short vibrotactile, auditory, or bimodal pulses. Subjects categorized the timing of the last interval by describing the final pulse as either "early" or "late" relative to its predecessors. In Experiment 1, each trial contained three isochronous standard intervals, followed by an interval of variable length. In Experiment 2, the number of isochronous standard intervals per trial varied, from one to four. Psychometric modeling revealed that vibrotactile stimulation produced poorer temporal discrimination than either auditory or bimodal stimulation. Moreover, auditory signals dominated bimodal sensitivity, and inter-individual differences in temporal discriminability were reduced with bimodal stimulation. Additionally, varying the number of isochronous intervals in a trial failed to improve temporal sensitivity in either modality, suggesting that memory played a key role in judgments of interval duration.
10
Jones SA, Noppeney U. Ageing and multisensory integration: A review of the evidence, and a computational perspective. Cortex 2021; 138:1-23. PMID: 33676086. DOI: 10.1016/j.cortex.2021.02.001.
Abstract
The processing of multisensory signals is crucial for effective interaction with the environment, but our ability to perform this vital function changes as we age. In the first part of this review, we summarise existing research into the effects of healthy ageing on multisensory integration. We note that age differences vary substantially with the paradigms and stimuli used: older adults often receive at least as much benefit (to both accuracy and response times) as younger controls from congruent multisensory stimuli, but are also consistently more negatively impacted by the presence of intersensory conflict. In the second part, we outline a normative Bayesian framework that provides a principled and computationally informed perspective on the key ingredients involved in multisensory perception, and how these are affected by ageing. Applying this framework to the existing literature, we conclude that changes to sensory reliability, prior expectations (together with attentional control), and decisional strategies all contribute to the age differences observed. However, we find no compelling evidence of any age-related changes to the basic inference mechanisms involved in multisensory perception.
Affiliation(s)
- Samuel A Jones
- The Staffordshire Centre for Psychological Research, Staffordshire University, Stoke-on-Trent, UK.
- Uta Noppeney
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands.
11
Schmitz L, Knoblich G, Deroy O, Vesper C. Crossmodal correspondences as common ground for joint action. Acta Psychol (Amst) 2021; 212:103222. PMID: 33302228. PMCID: PMC7755874. DOI: 10.1016/j.actpsy.2020.103222.
Abstract
When performing joint actions, people rely on common ground - shared information that provides the required basis for mutual understanding. Common ground can be based on people's interaction history or on knowledge and expectations people share, e.g., because they belong to the same culture or social class. Here, we suggest that people rely on yet another form of common ground, one that originates in their similarities in multisensory processing. Specifically, we focus on 'crossmodal correspondences' - nonarbitrary associations that people make between stimulus features in different sensory modalities, e.g., between stimuli in the auditory and the visual modality such as high-pitched sounds and small objects. Going beyond previous research that focused on investigating crossmodal correspondences in individuals, we propose that people can use these correspondences for communicating and coordinating with others. Initial support for our proposal comes from a communication game played in a public space (an art gallery) by pairs of visitors. We observed that pairs created nonverbal communication systems by spontaneously relying on 'crossmodal common ground'. Based on these results, we conclude that crossmodal correspondences not only occur within individuals but that they can also be actively used in joint action to facilitate the coordination between individuals.
Affiliation(s)
- Laura Schmitz
- Department of Cognitive Science, Central European University, Budapest, Hungary; Institute for Sports Science, Leibniz Universität Hannover, Hannover, Germany
- Günther Knoblich
- Department of Cognitive Science, Central European University, Budapest, Hungary
- Ophelia Deroy
- Faculty of Philosophy, Ludwig-Maximilians-Universität, Munich, Germany; Munich Centre for Neuroscience, Ludwig-Maximilians-Universität, Munich, Germany; Institute of Philosophy, School of Advanced Study, University of London, London, UK
- Cordula Vesper
- Department of Cognitive Science, Central European University, Budapest, Hungary; Department of Linguistics, Cognitive Science and Semiotics, Aarhus University, Aarhus, Denmark; Interacting Minds Centre, Aarhus University, Aarhus, Denmark.
12
Stacey JE, Howard CJ, Mitra S, Stacey PC. Audio-visual integration in noise: Influence of auditory and visual stimulus degradation on eye movements and perception of the McGurk effect. Atten Percept Psychophys 2020; 82:3544-3557. PMID: 32533526. DOI: 10.3758/s13414-020-02042-x.
Abstract
Seeing a talker’s face can aid audiovisual (AV) integration when speech is presented in noise. However, few studies have simultaneously manipulated auditory and visual degradation. We aimed to establish how degrading the auditory and visual signal affected AV integration. Where people look on the face in this context is also of interest; Buchan, Paré and Munhall (Brain Research, 1242, 162–171, 2008) found fixations on the mouth increased in the presence of auditory noise whilst Wilson, Alsius, Paré and Munhall (Journal of Speech, Language, and Hearing Research, 59(4), 601–615, 2016) found mouth fixations decreased with decreasing visual resolution. In Condition 1, participants listened to clear speech, and in Condition 2, participants listened to vocoded speech designed to simulate the information provided by a cochlear implant. Speech was presented in three levels of auditory noise and three levels of visual blurring. Adding noise to the auditory signal increased McGurk responses, while blurring the visual signal decreased McGurk responses. Participants fixated the mouth more on trials when the McGurk effect was perceived. Adding auditory noise led to people fixating the mouth more, while visual degradation led to people fixating the mouth less. Combined, the results suggest that modality preference and where people look during AV integration of incongruent syllables varies according to the quality of information available.
13
Abstract
From playing basketball to ordering at a food counter, we frequently and effortlessly coordinate our attention with others towards a common focus: we look at the ball, or point at a piece of cake. This non-verbal coordination of attention plays a fundamental role in our social lives: it ensures that we refer to the same object, develop a shared language, understand each other's mental states, and coordinate our actions. Models of joint attention generally attribute this accomplishment to gaze coordination. But are visual attentional mechanisms sufficient to achieve joint attention, in all cases? Besides cases where visual information is missing, we show how combining it with other senses can be helpful, and even necessary to certain uses of joint attention. We explain the two ways in which non-visual cues contribute to joint attention: either as enhancers, when they complement gaze and pointing gestures in order to coordinate joint attention on visible objects, or as modality pointers, when joint attention needs to be shifted away from the whole object to one of its properties, say weight or texture. This multisensory approach to joint attention has important implications for social robotics, clinical diagnostics, pedagogy and theoretical debates on the construction of a shared world.
Affiliation(s)
- Lucas Battich
- Faculty of Philosophy and Philosophy of Science, Ludwig Maximilian University Munich, Geschwister-Scholl-Platz 1, Munich, 80359, Germany.
- Graduate School of Systemic Neurosciences, Ludwig Maximilian University Munich, Munich, Germany.
- Merle Fairhurst
- Faculty of Philosophy and Philosophy of Science, Ludwig Maximilian University Munich, Geschwister-Scholl-Platz 1, Munich, 80359, Germany
- Munich Center for Neuroscience, Ludwig Maximilian University Munich, Munich, Germany
- Institut für Psychologie, Fakultät für Humanwissenschaften, Universität der Bundeswehr München, Munich, Germany
- Ophelia Deroy
- Faculty of Philosophy and Philosophy of Science, Ludwig Maximilian University Munich, Geschwister-Scholl-Platz 1, Munich, 80359, Germany
- Munich Center for Neuroscience, Ludwig Maximilian University Munich, Munich, Germany
- Institute of Philosophy, School of Advanced Study, University of London, London, UK
14
Abstract
Traditionally, architectural practice has been dominated by the eye/sight. In recent decades, though, architects and designers have increasingly started to consider the other senses, namely sound, touch (including proprioception, kinesthesis, and the vestibular sense), smell, and on rare occasions, even taste in their work. As yet, there has been little recognition of the growing understanding of the multisensory nature of the human mind that has emerged from the field of cognitive neuroscience research. This review therefore provides a summary of the role of the human senses in architectural design practice, both when considered individually and, more importantly, when studied collectively. For it is only by recognizing the fundamentally multisensory nature of perception that one can really hope to explain a number of surprising crossmodal environmental or atmospheric interactions, such as between lighting colour and thermal comfort and between sound and the perceived safety of public space. At the same time, however, the contemporary focus on synaesthetic design needs to be reframed in terms of the crossmodal correspondences and multisensory integration, at least if the most is to be made of multisensory interactions and synergies that have been uncovered in recent years. Looking to the future, the hope is that architectural design practice will increasingly incorporate our growing understanding of the human senses, and how they influence one another. Such a multisensory approach will hopefully lead to the development of buildings and urban spaces that do a better job of promoting our social, cognitive, and emotional development, rather than hindering it, as has too often been the case previously.
Affiliation(s)
- Charles Spence
- Department of Experimental Psychology, Crossmodal Research Laboratory, University of Oxford, Anna Watts Building, Oxford, OX2 6GG, UK.
15
Lloyd-Esenkaya T, Lloyd-Esenkaya V, O'Neill E, Proulx MJ. Multisensory inclusive design with sensory substitution. Cogn Res Princ Implic 2020; 5:37. [PMID: 32770416] [PMCID: PMC7415050] [DOI: 10.1186/s41235-020-00240-7]
Abstract
Sensory substitution techniques exploit perceptual and cognitive phenomena to represent one sensory form with an alternative. Current applications of sensory substitution techniques are typically focused on the development of assistive technologies whereby visually impaired users can acquire visual information via auditory and tactile cross-modal feedback. But despite their evident success in scientific research and furthering theory development in cognition, sensory substitution techniques have not yet gained widespread adoption within sensory-impaired populations. Here we argue that shifting the focus from assistive to mainstream applications may resolve some of the current issues regarding the use of sensory substitution devices to improve outcomes for those with disabilities. This article provides a tutorial guide on how to use research into multisensory processing and sensory substitution techniques from the cognitive sciences to design new inclusive cross-modal displays. A greater focus on developing inclusive mainstream applications could lead to innovative technologies that could be enjoyed by every person.
Affiliation(s)
- Tayfun Lloyd-Esenkaya
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Computer Science, University of Bath, Bath, UK
- Eamonn O'Neill
- Department of Computer Science, University of Bath, Bath, UK
- Michael J Proulx
- Crossmodal Cognition Lab, University of Bath, Bath, BA2 7AY, UK; Department of Psychology, University of Bath, Bath, UK
16
Vroomen J, Keetels M. Perception of causality and synchrony dissociate in the audiovisual bounce-inducing effect (ABE). Cognition 2020; 204:104340. [PMID: 32569946] [DOI: 10.1016/j.cognition.2020.104340]
Abstract
A sound can cause two visually streaming objects to appear to bounce (the audiovisual bounce-inducing effect, ABE). Here we examined whether the stream/bounce percept affects perception of audiovisual synchrony. Participants saw two disks that either clearly streamed, clearly bounced, or were ambiguous, and heard a sound around the point of contact (POC). They reported, on each trial, whether they perceived the disks to 'stream' or 'bounce', and whether the sound was 'synchronous' or 'asynchronous' with the POC. Results showed that the optimal time for the sound to induce a bounce was before the POC (-59 msec), whereas audiovisual synchrony was maximal when the sound came after the POC (+16 msec). The range of temporal asynchronies perceived as 'synchronous', the temporal binding window (TBW), was wider when disks were perceived as bouncing than streaming, with no difference between ambiguous and non-ambiguous visual displays. These results demonstrate 1) that causality differs from synchrony, 2) that causality widens the TBW, and 3) that the ABE is perceptually real.
Affiliation(s)
- Jean Vroomen
- Department of Cognitive Neuropsychology, Tilburg University, Warandelaan 2, P.O. Box 90153, 5000, LE, Tilburg, the Netherlands.
- Mirjam Keetels
- Department of Cognitive Neuropsychology, Tilburg University, Warandelaan 2, P.O. Box 90153, 5000, LE, Tilburg, the Netherlands
17
Rau PP, Zheng J. Cross-modal psychological refractory period in vision, audition, and haptics. Atten Percept Psychophys 2020; 82:1573-85. [PMID: 32052346] [DOI: 10.3758/s13414-020-01978-4]
Abstract
People's parallel-processing ability is limited, as demonstrated by the psychological refractory period (PRP) effect: the reaction time to the second stimulus (RT2) increases as the stimulus onset asynchrony (SOA) between two stimuli decreases. Most theoretical models of the PRP are independent of modalities. Previous research on the PRP mainly focused on vision and audition as input modalities; tactile stimuli have not been fully explored. Research using other paradigms and involving tactile stimuli, however, found that dual-task performance depended on input modalities. This study explored the PRP with all combinations of input modalities. Thirty participants judged the magnitude (small or large) of two stimuli presented in different modalities with an SOA of 75-1,200 ms. The PRP effect (i.e., RT2 increased with decreasing SOA) was observed in all modality combinations. Only in the auditory-tactile condition did the accuracy of Task 2 decrease with decreasing SOA. In the auditory-tactile and tactile-visual conditions, the RT to the first stimulus also increased with decreasing SOA. Current models could only explain part of the results, and modality characteristics help to explain the overall data pattern better. Limitations and directions for future studies regarding reaction time, task difficulty, and response modalities are discussed.
18
Jones SA, Beierholm U, Meijer D, Noppeney U. Older adults sacrifice response speed to preserve multisensory integration performance. Neurobiol Aging 2019; 84:148-157. [PMID: 31586863] [DOI: 10.1016/j.neurobiolaging.2019.08.017]
Abstract
Aging has been shown to impact multisensory perception, but the underlying computational mechanisms are unclear. For effective interactions with the environment, observers should integrate signals that share a common source, weighted by their reliabilities, and segregate those from separate sources. Observers are thought to accumulate evidence about the world's causal structure over time until a decisional threshold is reached. Combining psychophysics and Bayesian modeling, we investigated how aging affects audiovisual perception of spatial signals. Older and younger adults were comparable in their final localization and common-source judgment responses under both speeded and unspeeded conditions, but were disproportionately slower for audiovisually incongruent trials. Bayesian modeling showed that aging did not affect the ability to arbitrate between integration and segregation under either unspeeded or speeded conditions. However, modeling the within-trial dynamics of evidence accumulation under speeded conditions revealed that older observers accumulate noisier auditory representations for longer, set higher decisional thresholds, and have impaired motor speed. Older observers preserve audiovisual localization performance, despite noisier sensory representations, by sacrificing response speed.
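The within-trial dynamics modeled in this abstract follow a standard evidence-accumulation scheme: noisy sensory evidence is summed until it crosses a decisional threshold, so noisier input and a higher threshold both lengthen decision times. A generic simulation of that idea (parameters and function names are illustrative, not the paper's fitted model):

```python
import random

def accumulate(drift, noise_sd, threshold, dt=0.001, max_t=5.0, rng=None):
    """Accumulate noisy evidence until |evidence| crosses the threshold.

    Returns (choice, decision_time). Each step adds mean evidence
    drift*dt plus Gaussian noise scaled by sqrt(dt).
    """
    rng = rng or random.Random(0)
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold and t < max_t:
        evidence += drift * dt + rng.gauss(0.0, noise_sd) * dt ** 0.5
        t += dt
    return (evidence > 0), t

# With zero noise, decision time is about threshold / drift, so raising
# the threshold (the pattern estimated for older observers) slows responses.
choice, t = accumulate(drift=1.0, noise_sd=0.0, threshold=1.0)  # t ≈ 1.0 s
choice, t = accumulate(drift=1.0, noise_sd=0.0, threshold=2.0)  # t ≈ 2.0 s
```

A higher threshold trades speed for more accumulated evidence, which is the speed-accuracy compromise the abstract attributes to older observers.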
Affiliation(s)
- Samuel A Jones
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK; The Staffordshire Centre for Psychological Research, Staffordshire University, Stoke-on-Trent, UK.
- David Meijer
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- Uta Noppeney
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
19
Jensen A, Merz S, Spence C, Frings C. Interference of irrelevant information in multisensory selection depends on attentional set. Atten Percept Psychophys 2020; 82:1176-95. [PMID: 31444699] [DOI: 10.3758/s13414-019-01848-8]
Abstract
In the multisensory world in which we live, certain objects and events are of more relevance than others. In the laboratory, this broadly equates to the distinction between targets and distractors. In selection situations like the flanker task, the evidence suggests that the processing of multisensory distractors is influenced by attention. Here, multisensory distractor processing was investigated by modulating attentional set in three experiments in a flanker interference task, in which the targets were unisensory while the distractors were multisensory. Attentional set was modulated by making the target modality either predictable or unpredictable (Experiments 1 vs. 2, respectively). In Experiment 3, this manipulation was implemented on a within-experiment basis. Furthermore, the third experiment compared audiovisual distractors (used in all experiments) with distractors with one feature in a neutral modality (i.e., touch), that never appeared as the target modality in the flanker task. The results demonstrate that there was no interference from the response-compatible crossmodal distractor feature when the target modality was predictable (i.e., blocked). However, when the modality was varied on a trial-by-trial basis, this crossmodal feature significantly influenced information processing. By contrast, a multisensory distractor with a neutral crossmodal feature never influenced behavior. This finding suggests that the processing of multisensory distractors depends on attentional set. When the target modality varies randomly, participants include features from both modalities in their attentional set and the irrelevant crossmodal feature, now part of the set, influences information processing. In contrast, interference from the crossmodal distractor feature does not occur when it is not part of the attentional set.
20
Abstract
When repeatedly exposed to simultaneously presented stimuli, associations between these stimuli are nearly always established, both within as well as between sensory modalities. Such associations guide our subsequent actions and may also play a role in multisensory selection. Thus, crossmodal associations (i.e., associations between stimuli from different modalities) learned in a multisensory interference task might affect subsequent information processing. The aim of this study was to investigate the processing level of multisensory stimuli in multisensory selection by means of crossmodal aftereffects. Either feature or response associations were induced in a multisensory flanker task while the amount of interference in a subsequent crossmodal flanker task was measured. The results of Experiment 1 revealed the existence of crossmodal interference after multisensory selection. Experiments 2 and 3 then went on to demonstrate the dependence of this effect on the perceptual associations between features themselves, rather than on the associations between feature and response. Establishing response associations did not lead to a subsequent crossmodal interference effect (Experiment 2), while stimulus feature associations without response associations (obtained by changing the response effectors) did (Experiment 3). Taken together, this pattern of results suggests that associations in multisensory selection, and the interference of (crossmodal) distractors, predominantly work at the perceptual, rather than at the response, level.
21
Meijer D, Veselič S, Calafiore C, Noppeney U. Integration of audiovisual spatial signals is not consistent with maximum likelihood estimation. Cortex 2019; 119:74-88. [PMID: 31082680] [PMCID: PMC6864592] [DOI: 10.1016/j.cortex.2019.03.026]
Abstract
Multisensory perception is regarded as one of the most prominent examples where human behaviour conforms to the computational principles of maximum likelihood estimation (MLE). In particular, observers are thought to integrate auditory and visual spatial cues weighted in proportion to their relative sensory reliabilities into the most reliable and unbiased percept consistent with MLE. Yet, evidence to date has been inconsistent. The current pre-registered, large-scale (N = 36) replication study investigated the extent to which human behaviour for audiovisual localization is in line with maximum likelihood estimation. The acquired psychophysics data show that while observers were able to reduce their multisensory variance relative to the unisensory variances in accordance with MLE, they weighed the visual signals significantly stronger than predicted by MLE. Simulations show that this dissociation can be explained by a greater sensitivity of standard estimation procedures to detect deviations from MLE predictions for sensory weights than for audiovisual variances. Our results therefore suggest that observers did not integrate audiovisual spatial signals weighted exactly in proportion to their relative reliabilities for localization. These small deviations from the predictions of maximum likelihood estimation may be explained by observers' uncertainty about the world's causal structure as accounted for by Bayesian causal inference.
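For readers unfamiliar with the MLE benchmark tested here, the standard reliability-weighted cue-combination formulas can be sketched as follows (a textbook illustration with hypothetical numbers, not the authors' analysis code):

```python
def mle_combine(x_a, var_a, x_v, var_v):
    """Combine auditory and visual location estimates under MLE.

    Each cue is weighted by its reliability (inverse variance); the
    fused variance is always at most the smaller unisensory variance.
    """
    w_v = (1 / var_v) / (1 / var_v + 1 / var_a)  # visual weight
    w_a = 1 - w_v                                # auditory weight
    x_av = w_v * x_v + w_a * x_a                 # fused location estimate
    var_av = (var_v * var_a) / (var_v + var_a)   # fused variance
    return x_av, var_av

# Example: precise vision (variance 1) vs noisy audition (variance 4).
# Vision gets weight 0.8, so the fused estimate sits near the visual cue:
x_av, var_av = mle_combine(x_a=10.0, var_a=4.0, x_v=0.0, var_v=1.0)
# → fused estimate 2.0, fused variance 0.8
```

The study's finding is that observers' visual weights exceeded the `w_v` this formula predicts, even though their multisensory variance reduction matched `var_av`.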
Affiliation(s)
- David Meijer
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK.
- Sebastijan Veselič
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- Carmelo Calafiore
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
- Uta Noppeney
- Computational Cognitive Neuroimaging Laboratory, Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, Birmingham, UK
22
Abstract
Although music and dance are often experienced simultaneously, it is unclear what modulates their perceptual integration. This study investigated how two factors related to music-dance correspondences influenced audiovisual binding of their rhythms: the metrical match between the music and dance, and the kinematic familiarity of the dance movement. Participants watched a point-light figure dancing synchronously to a triple-meter rhythm that they heard in parallel, whereby the dance communicated a triple (congruent) or a duple (incongruent) visual meter. The movement was either the participant's own or that of another participant. Participants attended to both streams while detecting a temporal perturbation in the auditory beat. The results showed lower sensitivity to the auditory deviant when the visual dance was metrically congruent to the auditory rhythm and when the movement was the participant's own. This indicated stronger audiovisual binding and a more coherent bimodal rhythm in these conditions, thus making a slight auditory deviant less noticeable. Moreover, binding in the metrically incongruent condition involving self-generated visual stimuli was correlated with self-recognition of the movement, suggesting that action simulation mediates the perceived coherence between one's own movement and a mismatching auditory rhythm. Overall, the mechanisms of rhythm perception and action simulation could inform the perceived compatibility between music and dance, thus modulating the temporal integration of these audiovisual stimuli.
23
Mason GM, Goldstein MH, Schwade JA. The role of multisensory development in early language learning. J Exp Child Psychol 2019; 183:48-64. [PMID: 30856417] [DOI: 10.1016/j.jecp.2018.12.011]
Abstract
In typical development, communicative skills such as language emerge from infants' ability to combine multisensory information into cohesive percepts. For example, the act of associating the visual or tactile experience of an object with its spoken name is commonly used as a measure of early word learning, and social attention and speech perception frequently involve integrating both visual and auditory attributes. Early perspectives once regarded perceptual integration as one of infants' primary challenges, whereas recent work suggests that caregivers' social responses contain structured patterns that may facilitate infants' perception of multisensory social cues. In the current review, we discuss the regularities within caregiver feedback that may allow infants to more easily discriminate and learn from social signals. We focus on the statistical regularities that emerge in the moment-by-moment behaviors observed in studies of naturalistic caregiver-infant play. We propose that the spatial form and contingencies of caregivers' responses to infants' looks and prelinguistic vocalizations facilitate communicative and cognitive development. We also explore how individual differences in infants' sensory and motor abilities may reciprocally influence caregivers' response patterns, in turn regulating and constraining the types of social learning opportunities that infants experience across early development. We end by discussing implications for neurodevelopmental conditions affecting both multisensory integration and communication (i.e., autism) and suggest avenues for further research and intervention.
Affiliation(s)
- Gina M Mason
- Department of Psychology, Cornell University, Ithaca, NY 14853, USA.
24
McBeath MK, Addie JD, Krynen RC. Auditory capture of visual apparent motion, both laterally and looming. Acta Psychol (Amst) 2019; 193:105-112. [PMID: 30602130] [DOI: 10.1016/j.actpsy.2018.12.011]
Abstract
Traditional tests of multisensory stimuli typically support that vision dominates spatial judgments and audition dominates temporal ones. Here, we examine if unambiguous auditory spatial cues can capture ambiguous visual ones in judgments of direction of apparent motion. The visual motion judgments include both lateral movement and movement in depth, each when coupled with auditory stimuli moving at one of four rates. Experiment 1 tested lateral visual movement judgments (leftward vs rightward) coupled with auditory stimuli that moved laterally. Experiment 2 tested depth visual movement judgments (approaching vs receding) coupled with auditory stimuli that got louder or quieter. Results of Experiment 1 revealed and replicated an overall leftward motion bias, but with additional acoustic capture to experience visual movement away from the side on which sound initially occurred, and no effect of auditory motion speed. Results of Experiment 2 revealed and replicated an approaching motion bias, but with no effect of initial sound intensity, and an additional systematic capture effect of auditory motion speed. Faster changes in acoustic intensity produced larger visual motion capture consistent with the direction of acoustic intensity change. Findings of both experiments generalized over conditions of listening device (headphones vs speakers) and test setting (laboratory vs web-based data collection). The leftward and approaching motion bias results replicate previous research. Our principal new findings, the auditory motion capture effects, confirm the multisensory nature of dynamic spatial perception and support that the extent of inter-sensory capture is a function of the relative reliability of spatial information acquired by each sensory modality.
Affiliation(s)
- Michael K McBeath
- Department of Psychology, Arizona State University, United States of America.
- Jason D Addie
- Department of Psychology, Arizona State University, United States of America
- R Chandler Krynen
- Department of Psychology, Arizona State University, United States of America
25
Di Cosmo G, Costantini M, Salone A, Martinotti G, Di Iorio G, Di Giannantonio M, Ferri F. Peripersonal space boundary in schizotypy and schizophrenia. Schizophr Res 2018; 197:589-590. [PMID: 29269210] [DOI: 10.1016/j.schres.2017.12.003]
Affiliation(s)
- Giulio Di Cosmo
- Department of Neuroscience, Imaging and Clinical Science, "G.d'Annunzio" University of Chieti, Chieti 60100, Italy
- Marcello Costantini
- Department of Neuroscience, Imaging and Clinical Science, "G.d'Annunzio" University of Chieti, Chieti 60100, Italy; Department of Psychology, University of Essex, Colchester CO4 3SQ, UK; ITAB - Institute for Advanced Biomedical Technologies, Chieti 60100, Italy
- Anatolia Salone
- Department of Neuroscience, Imaging and Clinical Science, "G.d'Annunzio" University of Chieti, Chieti 60100, Italy; ITAB - Institute for Advanced Biomedical Technologies, Chieti 60100, Italy
- Giovanni Martinotti
- Department of Neuroscience, Imaging and Clinical Science, "G.d'Annunzio" University of Chieti, Chieti 60100, Italy; ITAB - Institute for Advanced Biomedical Technologies, Chieti 60100, Italy
- Giuseppe Di Iorio
- Department of Neuroscience, Imaging and Clinical Science, "G.d'Annunzio" University of Chieti, Chieti 60100, Italy; ITAB - Institute for Advanced Biomedical Technologies, Chieti 60100, Italy
- Massimo Di Giannantonio
- Department of Neuroscience, Imaging and Clinical Science, "G.d'Annunzio" University of Chieti, Chieti 60100, Italy; ITAB - Institute for Advanced Biomedical Technologies, Chieti 60100, Italy
- Francesca Ferri
- Department of Psychology, University of Essex, Colchester CO4 3SQ, UK.
26
Jicol C, Proulx MJ, Pollick FE, Petrini K. Long-term music training modulates the recalibration of audiovisual simultaneity. Exp Brain Res 2018; 236:1869-1880. [PMID: 29687204] [DOI: 10.1007/s00221-018-5269-4]
Abstract
To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether the prolonged recalibration process of passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. Hence, we asked groups of drummers, non-drummer musicians, and non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation with two fixed audiovisual asynchronies. We found that the recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and changed with both increased music training and increased perceptual accuracy (i.e. ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.
Affiliation(s)
- Crescent Jicol
- Department of Psychology, University of Bath, Bath, UK.
- Department of Computer Science, University of Bath, Claverton Down, Bath, BA2 7AY, UK.
- Karin Petrini
- Department of Psychology, University of Bath, Bath, UK
27
Abstract
When I am looking at my coffee machine that makes funny noises, this is an instance of multisensory perception – I perceive this event by means of both vision and audition. But very often we only receive sensory stimulation from a multisensory event by means of one sense modality, for example, when I hear the noisy coffee machine in the next room, that is, without seeing it. The aim of this paper is to bring together empirical findings about multimodal perception and empirical findings about (visual, auditory, tactile) mental imagery and argue that on occasions like this, we have multimodal mental imagery: perceptual processing in one sense modality (here: vision) that is triggered by sensory stimulation in another sense modality (here: audition). Multimodal mental imagery is not a rare and obscure phenomenon. The vast majority of what we perceive are multisensory events: events that can be perceived in more than one sense modality – like the noisy coffee machine. And most of the time we are only acquainted with these multisensory events via a subset of the sense modalities involved – all the other aspects of these multisensory events are represented by means of multisensory mental imagery. This means that multisensory mental imagery is a crucial element of almost all instances of everyday perception.
Affiliation(s)
- Bence Nanay
- University of Antwerp, Antwerp, Belgium; Peterhouse, University of Cambridge, Cambridge, UK.
28
Moana-Filho EJ, Alonso AA, Kapos FP, Leon-Salazar V, Durand SH, Hodges JS, Nixdorf DR. Multifactorial assessment of measurement errors affecting intraoral quantitative sensory testing reliability. Scand J Pain 2017; 16:93-98. [PMID: 28850419] [DOI: 10.1016/j.sjpain.2017.03.007]
Abstract
BACKGROUND AND PURPOSE (AIMS) Measurement error of intraoral quantitative sensory testing (QST) has been assessed using traditional methods for reliability, such as intraclass correlation coefficients (ICCs). Most studies reporting QST reliability focused on assessing one source of measurement error at a time, e.g., inter- or intra-examiner (test-retest) reliabilities, and employed two examiners to test inter-examiner reliability. The present study used a complex design with multiple examiners with the aim of assessing the reliability of intraoral QST taking account of multiple sources of error simultaneously. METHODS Four examiners of varied experience assessed 12 healthy participants in two visits separated by 48 h. Seven QST procedures to determine sensory thresholds were used: cold detection (CDT), warmth detection (WDT), cold pain (CPT), heat pain (HPT), mechanical detection (MDT), mechanical pain (MPT) and pressure pain (PPT). Mixed linear models were used to estimate variance components for reliability assessment; dependability coefficients were used to simulate alternative test scenarios. RESULTS Most intraoral QST variability arose from differences between participants (8.8-30.5%), differences between visits within participant (4.6-52.8%), and error (13.3-28.3%). For QST procedures other than CDT and MDT, increasing the number of visits with a single examiner performing the procedures would lead to improved dependability (dependability coefficient ranges: single visit, four examiners = 0.12-0.54; four visits, single examiner = 0.27-0.68). A wide range of reliabilities for QST procedures, as measured by ICCs, was noted for inter- (0.39-0.80) and intra-examiner (0.10-0.62) variation. CONCLUSION Reliability of sensory testing can be better assessed by measuring multiple sources of error simultaneously instead of focusing on one source at a time. In experimental settings, large numbers of participants are needed to obtain accurate estimates of treatment effects based on QST measurements. This is different from clinical use, where variation between persons (the person main effect) is not a concern because clinical measurements are done on a single person. IMPLICATIONS Future studies assessing sensory testing reliability in both clinical and experimental settings would benefit from routinely measuring multiple sources of error. The methods and results of this study can be used by clinical researchers to improve assessment of measurement error related to intraoral sensory testing. This should lead to improved resource allocation when designing studies that use intraoral quantitative sensory testing in clinical and experimental settings.
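The dependability simulations described in this abstract come from generalizability theory: once variance components are estimated, the coefficient for an alternative design divides person variance by person variance plus the error terms, each shrunk by the number of visits or examiners being averaged over. A minimal sketch with made-up variance components (not the study's estimates), assuming a simple crossed persons × visits × examiners error model:

```python
def dependability(var_person, var_visit, var_error, n_visits, n_examiners):
    """G-theory-style dependability coefficient for a design that
    averages measurements over n_visits and n_examiners.

    Error terms shrink as more visits/examiners are averaged, so the
    coefficient rises toward 1 with a larger measurement design.
    """
    error = var_visit / n_visits + var_error / (n_visits * n_examiners)
    return var_person / (var_person + error)

# Illustrative components: person 0.3, visit 0.4, residual 0.3.
single_visit = dependability(0.3, 0.4, 0.3, n_visits=1, n_examiners=4)
four_visits = dependability(0.3, 0.4, 0.3, n_visits=4, n_examiners=1)
# Averaging over four visits raises dependability (≈0.39 → ≈0.63 here),
# mirroring the paper's single-examiner, four-visit scenario.
```

The exact error model depends on which facets of the design are crossed or nested; this sketch only shows why adding visits helps more than adding examiners when visit-to-visit variance dominates.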
Affiliation(s)
- Estephan J Moana-Filho
- Division of TMD and Orofacial Pain, School of Dentistry, University of Minnesota, 6-320d Moos Tower, 515 Delaware St. SE, Minneapolis, MN 55455, United States.
- Aurelio A Alonso
- Center for Translational Pain Medicine, Department of Anesthesiology, Duke University School of Medicine, United States.
- Flavia P Kapos
- Department of Epidemiology, School of Public Health, University of Washington, United States; Department of Oral Health Sciences, School of Dentistry, University of Washington, United States.
- Vladimir Leon-Salazar
- Division of Pediatric Dentistry, School of Dentistry, University of Minnesota, United States.
- Scott H Durand
- Private Dental Practice, 115 East Main Street, Wabasha, MN, 55981, United States.
- James S Hodges
- Division of Biostatistics, School of Public Health, University of Minnesota, United States.
- Donald R Nixdorf
- Division of TMD and Orofacial Pain, School of Dentistry, University of Minnesota, 6-320d Moos Tower, 515 Delaware St. SE, Minneapolis, MN 55455, United States; Department of Neurology, Medical School, University of Minnesota, United States; HealthPartners Institute for Education and Research, United States.
29
Danielson DK, Bruderer AG, Kandhadai P, Vatikiotis-Bateson E, Werker JF. The organization and reorganization of audiovisual speech perception in the first year of life. Cogn Dev 2017; 42:37-48. [PMID: 28970650] [PMCID: PMC5621752] [DOI: 10.1016/j.cogdev.2017.02.004]
Abstract
The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six and nine months, but not at 11 months, detected the audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not for nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.
Affiliation(s)
- D. Kyle Danielson: Department of Psychology, The University of British Columbia, 2136 West Mall, Vancouver BC V6T 1Z4, Canada.
- Alison G. Bruderer: School of Audiology and Speech Sciences, The University of British Columbia, 2177 Wesbrook Mall, Vancouver BC V6T 1Z3, Canada.
- Padmapriya Kandhadai: Department of Psychology, The University of British Columbia, 2136 West Mall, Vancouver BC V6T 1Z4, Canada.
- Eric Vatikiotis-Bateson: Department of Linguistics, The University of British Columbia, 2613 West Mall, Vancouver BC V6T 1Z4, Canada.
- Janet F. Werker: Department of Psychology, The University of British Columbia, 2136 West Mall, Vancouver BC V6T 1Z4, Canada.
30
Reinoso Carvalho F, Wang QJ, van Ee R, Persoone D, Spence C. "Smooth operator": Music modulates the perceived creaminess, sweetness, and bitterness of chocolate. Appetite 2016; 108:383-390. [PMID: 27784634 DOI: 10.1016/j.appet.2016.10.026] [Citation(s) in RCA: 54] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2016] [Revised: 10/11/2016] [Accepted: 10/21/2016] [Indexed: 11/25/2022]
Abstract
There has been a recent growth of interest in determining whether sound (specifically music and soundscapes) can enhance not only the basic taste attributes associated with food and beverage items (such as sweetness, bitterness, and sourness), but also other important components of the tasting experience, such as crunchiness, creaminess, and/or carbonation. In the present study, participants evaluated the perceived creaminess of chocolate. Two contrasting soundtracks were produced with such texture correspondences in mind and validated by means of a pre-test. The participants tasted the same chocolate twice (without knowing that the chocolates were identical), each time listening to one of the soundtracks. The 'creamy' soundtrack enhanced the perceived creaminess and sweetness of the chocolates, as compared with the ratings given while listening to the 'rough' soundtrack. Moreover, although the participants preferred the creamy soundtrack, this preference did not appear to affect their overall enjoyment of the chocolates. Interestingly, and in contrast with previous similar studies, these results demonstrate that in certain cases sounds can have a perceptual effect on gustatory food attributes without necessarily altering the hedonic experience.
Affiliation(s)
- Felipe Reinoso Carvalho: Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels, Belgium; Brain & Cognition, Laboratory of Experimental Psychology, KU Leuven, Leuven, Belgium.
- Qian Janice Wang: Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, UK.
- Raymond van Ee: Brain & Cognition, Laboratory of Experimental Psychology, KU Leuven, Leuven, Belgium; Donders Institute, Radboud University, Department of Biophysics, Nijmegen, The Netherlands; Philips Research Laboratories, Department of Brain, Body & Behavior, Eindhoven, The Netherlands.
- Charles Spence: Crossmodal Research Laboratory, Department of Experimental Psychology, Oxford University, Oxford, UK.
31
Ostrand R, Blumstein SE, Ferreira VS, Morgan JL. What you see isn't always what you get: Auditory word signals trump consciously perceived words in lexical access. Cognition 2016; 151:96-107. [PMID: 27011021 PMCID: PMC4850493 DOI: 10.1016/j.cognition.2016.02.019] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2015] [Revised: 02/17/2016] [Accepted: 02/27/2016] [Indexed: 11/28/2022]
Abstract
Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all. Subjects were presented with McGurk or Congruent primes and performed a lexical decision task on related or unrelated targets. Although subjects perceived the McGurk illusion, McGurk and Congruent primes with matching real-word auditory signals equivalently primed targets that were semantically related to the auditory signal, but not targets related to the McGurk percept. We conclude that the time course of auditory-visual integration is dependent on the lexicality of the auditory and visual input signals, and that listeners can lexically access one word and yet consciously perceive another.
Affiliation(s)
- Rachel Ostrand: Department of Cognitive Science, University of California, San Diego, United States.
- Sheila E Blumstein: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, United States.
- Victor S Ferreira: Department of Psychology, University of California, San Diego, United States.
- James L Morgan: Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, United States.
32
Bolognini N, Convento S, Casati C, Mancini F, Brighina F, Vallar G. Multisensory integration in hemianopia and unilateral spatial neglect: Evidence from the sound induced flash illusion. Neuropsychologia 2016; 87:134-143. [PMID: 27197073 DOI: 10.1016/j.neuropsychologia.2016.05.015] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2016] [Revised: 05/13/2016] [Accepted: 05/14/2016] [Indexed: 11/24/2022]
Abstract
Recent neuropsychological evidence suggests that acquired brain lesions can, in some instances, abolish the ability to integrate inputs from different sensory modalities, disrupting multisensory perception. We explored the ability to perceive multisensory events, in particular the integrity of audio-visual processing in the temporal domain, in brain-damaged patients with visual field defects (VFD), or with unilateral spatial neglect (USN), by assessing their sensitivity to the 'Sound-Induced Flash Illusion' (SIFI). The study yielded two key findings. Firstly, the 'fission' illusion (namely, seeing multiple flashes when a single flash is paired with multiple sounds) is reduced in both left- and right-brain-damaged patients with VFD, but not in right-brain-damaged patients with left USN. The disruption of the fission illusion is proportional to the extent of the occipital damage. Secondly, a reliable 'fusion' illusion (namely, seeing fewer flashes when a single sound is paired with multiple flashes) is evoked in USN patients, but neither in VFD patients nor in healthy participants. A control experiment showed that the fusion, but not the fission, illusion is lost in older participants (over 50 years old), as compared with younger healthy participants (under 30 years old). This evidence indicates that the fission and fusion illusions are dissociable multisensory phenomena, altered differently by impairments of visual perception (i.e. VFD) and spatial attention (i.e. USN). The occipital cortex represents a key cortical site for binding auditory and visual stimuli in the SIFI, while damage to right-hemisphere areas mediating spatial attention and awareness does not prevent the integration of audio-visual inputs in the temporal domain.
Affiliation(s)
- Nadia Bolognini: Department of Psychology, and Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milano, Italy; Laboratory of Neuropsychology, and Department of Neurorehabilitation Sciences, IRCSS Istituto Auxologico, Milano, Italy.
- Silvia Convento: Department of Psychology, and Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milano, Italy; Department of Neuroscience, Baylor College of Medicine, Houston, USA.
- Carlotta Casati: Laboratory of Neuropsychology, and Department of Neurorehabilitation Sciences, IRCSS Istituto Auxologico, Milano, Italy.
- Flavia Mancini: Department of Neuroscience, Physiology & Pharmacology, University College London, London, UK.
- Filippo Brighina: Department of Experimental Biomedicine and Clinical Neuroscience, University of Palermo, Palermo, Italy.
- Giuseppe Vallar: Department of Psychology, and Milan Center for Neuroscience - NeuroMi, University of Milano-Bicocca, Milano, Italy; Laboratory of Neuropsychology, and Department of Neurorehabilitation Sciences, IRCSS Istituto Auxologico, Milano, Italy.
33
Jiang L, Kang J. Combined acoustical and visual performance of noise barriers in mitigating the environmental impact of motorways. Sci Total Environ 2016; 543:52-60. [PMID: 26584069 DOI: 10.1016/j.scitotenv.2015.11.010] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/22/2015] [Revised: 10/22/2015] [Accepted: 11/03/2015] [Indexed: 06/05/2023]
Abstract
This study investigated the overall performance of noise barriers in mitigating the environmental impact of motorways, considering not only their effects in reducing the noise and visual intrusion of moving traffic but also the visual impact they may themselves induce. A laboratory experiment was carried out, using computer-visualised video scenes and motorway traffic noise recordings to present experimental scenarios covering two traffic levels, two receiver-to-road distances, two types of background landscape, and five barrier conditions: motorway only, motorway with tree belt, and motorway with a 3 m timber barrier, a 5 m timber barrier, or a 5 m transparent barrier. Responses were gathered from 30 university-student participants, and perceived barrier performance was analysed. The results show that noise barriers were always beneficial in mitigating the environmental impact of motorways, or made no significant change in environmental quality when the impact of the motorway was low. Overall, the barriers offered a mitigation effect similar to that of the tree belt, but showed some potential to be more advantageous at the higher traffic level. The 5 m timber barrier tended to perform better than the 3 m barrier at a distance of 300 m but not at 100 m, possibly because of its greater negative visual effect at closer range. The transparent barrier did not perform much differently from the timber barriers but tended to be the least effective in most scenarios. Some low positive correlations were found between aesthetic preference for the barriers and the environmental impact reduction they provided.
Affiliation(s)
- Like Jiang: School of Architecture, University of Sheffield, Sheffield S10 2TN, United Kingdom.
- Jian Kang: School of Architecture, University of Sheffield, Sheffield S10 2TN, United Kingdom.
34
de Boisferon AH, Dupierrix E, Quinn PC, Lœvenbruck H, Lewkowicz DJ, Lee K, Pascalis O. Perception of Multisensory Gender Coherence in 6- and 9-month-old Infants. Infancy 2015; 20:661-674. [PMID: 26561475 PMCID: PMC4637175 DOI: 10.1111/infa.12088] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2014] [Accepted: 04/27/2015] [Indexed: 11/29/2022]
Abstract
One of the most salient social categories conveyed by human faces and voices is gender. We investigated the developmental emergence of the ability to perceive the coherence of auditory and visual attributes of gender in 6- and 9-month-old infants. Infants viewed two side-by-side video clips of a man and a woman singing a nursery rhyme and heard a synchronous male or female soundtrack. Results showed that 6-month-old infants did not match the audible and visible attributes of gender, and 9-month-old infants matched only female faces and voices. These findings indicate that the ability to perceive the multisensory coherence of gender emerges relatively late in infancy and that it reflects the greater experience that most infants have with female faces and voices.
Affiliation(s)
- Eve Dupierrix: Laboratoire de Psychologie et NeuroCognition, Université Grenoble Alpes, CNRS-UMR 5105, Grenoble, France.
- Paul C. Quinn: Department of Psychological and Brain Sciences, University of Delaware, Newark, Delaware, USA.
- Hélène Lœvenbruck: Laboratoire de Psychologie et NeuroCognition, Université Grenoble Alpes, CNRS-UMR 5105, Grenoble, France.
- David J. Lewkowicz: Department of Communication Sciences & Disorders, Northeastern University, Boston, Massachusetts, USA.
- Kang Lee: Institute of Child Study, University of Toronto, Toronto, Canada.
- Olivier Pascalis: Laboratoire de Psychologie et NeuroCognition, Université Grenoble Alpes, CNRS-UMR 5105, Grenoble, France.
35
Lewkowicz DJ, Minar NJ, Tift AH, Brandon M. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience. J Exp Child Psychol 2015; 130:147-62. [PMID: 25462038 PMCID: PMC4258456 DOI: 10.1016/j.jecp.2014.10.006] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2014] [Revised: 10/11/2014] [Accepted: 10/13/2014] [Indexed: 11/15/2022]
Abstract
To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing.
Affiliation(s)
- David J Lewkowicz: Department of Communication Sciences and Disorders, Northeastern University, Boston, MA 02115, USA.
- Nicholas J Minar: Department of Psychology, Florida Atlantic University, Boca Raton, FL 33431, USA.
- Amy H Tift: Department of Psychology, Florida Atlantic University, Boca Raton, FL 33431, USA.
- Melissa Brandon: Department of Psychology, Florida Atlantic University, Boca Raton, FL 33431, USA.
36
Abstract
The aim of this study was to investigate neural dynamics of audiovisual temporal fusion processes in 6-month-old infants using event-related brain potentials (ERPs). In a habituation-test paradigm, infants did not show any behavioral signs of discrimination of an audiovisual asynchrony of 200 ms, indicating perceptual fusion. In a subsequent EEG experiment, audiovisual synchronous stimuli and stimuli with a visual delay of 200 ms were presented in random order. In contrast to the behavioral data, brain activity differed significantly between the two conditions. Critically, N1 and P2 latency delays were not observed between synchronous and fused items, contrary to previously observed N1 and P2 latency delays between synchrony and perceived asynchrony. Hence, temporal interaction processes in the infant brain between the two sensory modalities varied as a function of perceptual fusion versus asynchrony perception. The visual recognition components Pb and Nc were modulated prior to sound onset, emphasizing the importance of anticipatory visual events for the prediction of auditory signals. Results suggest mechanisms by which young infants predictively adjust their ongoing neural activity to the temporal synchrony relations to be expected between vision and audition.
Affiliation(s)
- Franziska Kopp: Max Planck Institute for Human Development, Berlin, Germany.
37
Abstract
This paper examines the applicability of the object concept to the chemical senses by evaluating them against a set of criteria for object-hood. Taste and chemesthesis do not generate objects: their parts, perceptible from birth, never combine. Orthonasal olfaction (sniffing) presents a strong case for generating objects. Odorants have many parts yet are perceived as wholes; this process is based on learning, and there is figure-ground segregation. While flavors are multimodal representations bound together by learning, there is no functional need for flavor objects in the mouth. Rather, food identification occurs prior to ingestion using the eye and nose, with the latter retrieving multimodal flavor objects via sniffing (e.g., sweet-smelling caramel). While there are differences in object perception between vision, audition, and orthonasal olfaction, the commonalities suggest that the brain has adopted the same basic solution when faced with extracting meaning from complex stimulus arrays.