1. The development of visuotactile congruency effects for sequences of events. J Exp Child Psychol 2021;207:105094. PMID: 33714049. DOI: 10.1016/j.jecp.2021.105094.
Abstract
Sensitivity to the temporal coherence of visual and tactile signals increases perceptual reliability and is evident during infancy. However, it is not clear how, or whether, bidirectional visuotactile interactions change across childhood. Furthermore, no study has explored whether viewing a body modulates how children perceive visuotactile sequences of events. Here, children aged 5-7 years (n = 19), 8 and 9 years (n = 21), and 10-12 years (n = 24) and adults (n = 20) discriminated the number of target events (one or two) in a task-relevant modality (touch or vision) and ignored distractors (one or two) in the opposing modality. While participants performed the task, an image of either a hand or an object was presented. Children aged 5-7 years and 8 and 9 years showed larger crossmodal interference from visual distractors when discriminating tactile targets than the converse. Across age groups, this was strongest when two visual distractors were presented with one tactile target, implying a "fission-like" crossmodal effect (perceiving one event as two events). There was no influence of visual context (viewing a hand or non-hand image) on visuotactile interactions for any age group. Our results suggest robust interference from discontinuous visual information on tactile discrimination of sequences of events during early and middle childhood. These findings are discussed with respect to age-related changes in sensory dominance, selective attention, and multisensory processing.
2. O'Dowd A, Sorgini F, Newell FN. Seeing an image of the hand affects performance on a crossmodal congruency task for sequences of events. Conscious Cogn 2020;80:102900. DOI: 10.1016/j.concog.2020.102900.
3. Smit S, Rich AN, Zopf R. Visual body form and orientation cues do not modulate visuo-tactile temporal integration. PLoS One 2019;14:e0224174. PMID: 31841510. PMCID: PMC6913941. DOI: 10.1371/journal.pone.0224174.
Abstract
Body ownership relies on spatiotemporal correlations between multisensory signals and visual cues specifying oneself, such as body form and orientation. The mechanism for the integration of bodily signals remains unclear. One approach to modelling multisensory integration that has been influential in the multisensory literature is Bayesian causal inference. This specifies that the brain integrates spatial and temporal signals coming from different modalities when it infers a common cause for inputs. As an example, the rubber hand illusion shows that visual form and orientation cues can promote the inference of a common cause (one's body), leading to spatial integration shown by a proprioceptive drift of the perceived location of the real hand towards the rubber hand. Recent studies investigating the effect of visual cues on temporal integration, however, have led to conflicting findings. These could be due to task differences, variation in ecological validity of stimuli, and/or small samples. In this pre-registered study, we investigated the influence of visual information on temporal integration using a visuo-tactile temporal order judgement task with realistic stimuli and a sufficiently large sample determined by Bayesian analysis. Participants viewed videos of a touch being applied to plausible or implausible visual stimuli for one's hand (a hand oriented plausibly, a hand rotated 180 degrees, or a sponge) while also being touched at varying stimulus onset asynchronies. Participants judged which stimulus came first: the viewed or the felt touch. Results show that visual cues do not modulate visuo-tactile temporal order judgements. This is not in line with the idea that bodily signals indicating oneself influence the integration of multisensory signals in the temporal domain. The current study emphasises the importance of rigour in our methodologies and analyses to advance the understanding of how properties of multisensory events affect the encoding of temporal information in the brain.
Affiliation(s)
- Sophie Smit
- Perception in Action Research Centre & Department of Cognitive Science, Faculty of Human Sciences, Macquarie University, Sydney, Australia
- Anina N. Rich
- Perception in Action Research Centre & Department of Cognitive Science, Faculty of Human Sciences, Macquarie University, Sydney, Australia
- Centre for Elite Performance, Expertise & Training, Macquarie University, Sydney, Australia
- Regine Zopf
- Perception in Action Research Centre & Department of Cognitive Science, Faculty of Human Sciences, Macquarie University, Sydney, Australia
- Body Image and Ingestion Group, Macquarie University, Sydney, Australia
4. Mas-Casadesús A, Gherri E. Ignoring Irrelevant Information: Enhanced Intermodal Attention in Synaesthetes. Multisens Res 2017;30:253-277. PMID: 31287079. DOI: 10.1163/22134808-00002566.
Abstract
Despite the fact that synaesthetes experience additional percepts during their inducer-concurrent associations that are often unrelated or irrelevant to their daily activities, they appear to be relatively unaffected by this potentially distracting information. This might suggest that synaesthetes are particularly good at ignoring irrelevant perceptual information coming from different sensory modalities. To investigate this hypothesis, the performance of a group of synaesthetes was compared to that of a matched non-synaesthete group in two different conflict tasks aimed at assessing participants' abilities to ignore irrelevant information. In order to match the sensory modality of the task-irrelevant distractors (vision) with participants' synaesthetic attentional filtering experience, we tested only synaesthetes experiencing at least one synaesthesia subtype triggering visual concurrents (e.g., grapheme-colour synaesthesia or sequence-space synaesthesia). Synaesthetes and controls performed a classic flanker task (FT) and a visuo-tactile cross-modal congruency task (CCT) in which they had to attend to tactile targets while ignoring visual distractors. While no differences were observed between synaesthetes and controls in the FT, synaesthetes showed reduced interference by the irrelevant distractors of the CCT. These findings provide the first direct evidence that synaesthetes might be more efficient than non-synaesthetes at dissociating conflicting information from different sensory modalities when the irrelevant modality correlates with their synaesthetic concurrent modality (here vision).
Affiliation(s)
- Anna Mas-Casadesús
- School of Philosophy, Psychology, and Language Sciences, Department of Psychology, The University of Edinburgh, 7 George Square, Edinburgh EH8 9JZ, UK
- Elena Gherri
- School of Philosophy, Psychology, and Language Sciences, Department of Psychology, The University of Edinburgh, 7 George Square, Edinburgh EH8 9JZ, UK
5. Pozeg P, Galli G, Blanke O. Those are Your Legs: The Effect of Visuo-Spatial Viewpoint on Visuo-Tactile Integration and Body Ownership. Front Psychol 2015;6:1749. PMID: 26635663. PMCID: PMC4646976. DOI: 10.3389/fpsyg.2015.01749.
Abstract
Experiencing a body part as one's own, i.e., body ownership, depends on the integration of multisensory bodily signals (including visual, tactile, and proprioceptive information) with visual top-down signals from peripersonal space. Although the visuo-spatial viewpoint from which the body is seen has been shown to be an important visual top-down factor for body ownership, different studies have reported diverging results. Furthermore, the role of visuo-spatial viewpoint (sometimes also called first-person perspective) has only been studied for the hands or the whole body, not for the lower limbs. We thus investigated whether and how leg visuo-tactile integration and leg ownership depend on the visuo-spatial viewpoint from which the legs are seen and on the anatomical similarity of the visual leg stimuli. Using a virtual leg illusion, we tested the strength of visuo-tactile integration of leg stimuli using the crossmodal congruency effect (CCE) as well as the subjective sense of leg ownership (assessed by a questionnaire). Fifteen participants viewed virtual legs or non-corporeal control objects, presented either from their habitual first-person viewpoint or from a viewpoint rotated by 90° (third-person viewpoint), while visuo-tactile stroking was applied between the participants' legs and the virtual legs shown on a head-mounted display. The data show that the first-person visuo-spatial viewpoint significantly boosts visuo-tactile integration as well as the sense of leg ownership. Moreover, the viewpoint-dependent increase in visuo-tactile integration was found only in the conditions in which participants viewed the virtual legs (it was absent for control objects). These results confirm the importance of the first-person visuo-spatial viewpoint for the integration of visuo-tactile stimuli and extend findings from the upper extremity and the trunk to visuo-tactile integration and ownership for the legs.
Affiliation(s)
- Polona Pozeg
- Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Giulia Galli
- Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Istituti di Ricovero e Cura a Carattere Scientifico, Fondazione Santa Lucia, Rome, Italy
- Olaf Blanke
- Center for Neuroprosthetics, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Laboratory of Cognitive Neuroscience, Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Department of Neurology, University Hospital of Geneva, Geneva, Switzerland
6.
Abstract
The authors examined the resolution of a discrepancy between visual and proprioceptive estimates of arm position in 10 participants. The participants fixed their right shoulder at 0°, 30°, or 60° of transverse adduction while they viewed a video on a head-mounted display that showed their right arm extended in front of the trunk for 30 min. The perceived arm position more closely approached the seen arm position on the display as the difference between the actual and visually displayed arm positions increased. In the extreme case of a 90° discrepancy, the seen arm position on the display was very gradually perceived as approaching the actual arm position. The magnitude of changes in sensory estimates was larger for proprioception (20%) than for vision (< 10%).
Affiliation(s)
- Junya Masumoto
- The Joint Graduate School in Science of School Education, Hyogo University of Teacher Education, Kato, Japan
7. Wesslein AK, Spence C, Frings C. Vision of embodied rubber hands enhances tactile distractor processing. Exp Brain Res 2014;233:477-486. PMID: 25354970. DOI: 10.1007/s00221-014-4129-0.
Abstract
Previous research has demonstrated that viewing one's hand can induce tactile response compatibility effects at the hands. Here, we investigated the question of whether vision of one's own hand is actually necessary. The Eriksen flanker task was combined with the rubber hand illusion in order to determine whether tactile distractors presented to the hand would be processed up to the level of response selection when a pair of rubber hands was seen (while one's own hands were not). Our results demonstrate that only if the rubber hands are perceived as belonging to one's own body, is enhanced distractor processing (up to the level of response selection) observed at the hands. In conclusion, vision of a pair of fake hands enhances tactile distractor processing at the hands if, and only if, it happens to be incorporated into the body representation.
Affiliation(s)
- Ann-Katrin Wesslein
- Department of Psychology, Cognitive Psychology, University of Trier, 54286 Trier, Germany
8. Pavani F, Rigo P, Galfano G. From body shadows to bodily attention: automatic orienting of tactile attention driven by cast shadows. Conscious Cogn 2014;29:56-67. PMID: 25123629. DOI: 10.1016/j.concog.2014.07.006.
Abstract
Body shadows orient attention to the body part casting the shadow. We investigated the automaticity of this phenomenon by addressing its time-course and its resistance to contextual manipulations. When targets were tactile stimuli at the hands (Exp. 1) or visual stimuli near the body shadow (Exp. 2), cueing effects emerged regardless of the delay between shadow and target onset (100, 600, 1200, or 2400 ms). This suggests fast and sustained orienting of attention to body shadows, involving both the space occupied by the shadow (extra-personal space) and the space the shadow refers to (one's own body). When the target type became unpredictable (tactile or visual), shadow-cueing effects remained robust only for tactile targets; visual stimuli showed no reliable overall effects, regardless of whether they occurred near the shadow (Exp. 3) or near the body (Exp. 4). We conclude that the mandatory attention shifts triggered by body shadows are limited to tactile targets and are less automatic for visual stimuli.
Affiliation(s)
- Francesco Pavani
- Center for Mind/Brain Sciences, University of Trento, Italy; Department of Psychology and Cognitive Science, University of Trento, Italy.
- Paola Rigo
- Department of Psychology and Cognitive Science, University of Trento, Italy
- Giovanni Galfano
- Department of Developmental and Social Psychology, University of Padua, Italy; Center for Cognitive Neuroscience, University of Padua, Italy
9. van Elk M. The effect of manipulability and religion on the multisensory integration of objects in peripersonal space. Cogn Neurosci 2013;5:36-44. DOI: 10.1080/17588928.2013.808612.
10. Hagura N, Hirose S, Matsumura M, Naito E. Am I seeing my hand? Visual appearance and knowledge of controllability both contribute to the visual capture of a person's own body. Proc Biol Sci 2012;279:3476-3481. PMID: 22648159. PMCID: PMC3396906. DOI: 10.1098/rspb.2012.0750.
Abstract
When confronted with complex visual scenes in daily life, how do we know which visual information represents our own hand? We investigated the cues used to assign visual information to one's own hand. Wrist tendon vibration elicits an illusory sensation of wrist movement. The intensity of this illusion attenuates when the actual motionless hand is visually presented. Testing what kind of visual stimuli attenuate this illusion will elucidate factors contributing to visual detection of one's own hand. The illusion was reduced when a stationary object was shown, but only when participants knew it was controllable with their hands. In contrast, the visual image of their own hand attenuated the illusion even when participants knew that it was not controllable. We suggest that long-term knowledge about the appearance of the body and short-term knowledge about controllability of a visual object are combined to robustly extract our own body from a visual scene.
Affiliation(s)
- Nobuhiro Hagura
- ATR Brain Information Communication Research Laboratory Group, Kyoto 619-0288, Japan.
11. The role of the human extrastriate visual cortex in mirror symmetry discrimination: A TMS-adaptation study. Brain Cogn 2011;77:120-127. DOI: 10.1016/j.bandc.2011.04.007.
12. van Elk M, Blanke O. Manipulable objects facilitate cross-modal integration in peripersonal space. PLoS One 2011;6:e24641. PMID: 21949738. PMCID: PMC3176228. DOI: 10.1371/journal.pone.0024641.
Abstract
Previous studies have shown that tool use often modifies one's peripersonal space, i.e. the space directly surrounding our body. Given our profound experience with manipulable objects (e.g. a toothbrush, a comb, or a teapot), in the present study we hypothesized that the observation of pictures representing manipulable objects would result in a remapping of peripersonal space as well. Subjects were required to report the location of vibrotactile stimuli delivered to the right hand, while ignoring visual distractors superimposed on pictures representing everyday objects. Pictures could represent objects of high manipulability (e.g. a cell phone), medium manipulability (e.g. a soap dispenser), or low manipulability (e.g. a computer screen). In the first experiment, when subjects attended to the action associated with the objects, a strong crossmodal congruency effect (CCE) was observed for pictures representing medium- and high-manipulability objects, reflected in faster reaction times when the vibrotactile stimulus and the visual distractor were in the same location, whereas no CCE was observed for low-manipulability objects. This finding was replicated in a second experiment in which subjects attended to the visual properties of the objects. These findings suggest that the observation of manipulable objects facilitates cross-modal integration in peripersonal space.
Affiliation(s)
- Michiel van Elk
- Laboratory of Cognitive Neuroscience, Brain Mind Institute, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
13. Mattavelli G, Cattaneo Z, Papagno C. Transcranial magnetic stimulation of medial prefrontal cortex modulates face expressions processing in a priming task. Neuropsychologia 2011;49:992-998. PMID: 21281653. DOI: 10.1016/j.neuropsychologia.2011.01.038.
Abstract
The medial prefrontal cortex (mPFC) and the right somatosensory cortex (rSC) are known to be involved in emotion processing and face expression recognition, although the possibility of segregated circuits for specific emotions in these regions remains unclear. To investigate this issue, we used transcranial magnetic stimulation (TMS) together with a priming paradigm to modulate the activation state of the mPFC and the rSC during emotional expressions discrimination. This novel paradigm allows analyzing how TMS interacts with the ongoing activity of different neuronal populations following prime processing. Participants were asked to discriminate between angry and happy faces that were preceded by a congruent prime (a word expressing the same emotion), an incongruent prime (a word expressing the opposite emotion) or a neutral prime. In TMS trials, a single pulse was delivered over the mPFC, rSC or Vertex (control site) between prime and target presentation. TMS applied over the mPFC significantly affected the priming effect, by selectively increasing response latencies in congruent trials. This indicates that the mPFC contains different neural representations for angry and happy expressions. TMS over rSC did not significantly affect the priming effect, suggesting that rSC is not involved in processing verbal emotional stimuli.
Affiliation(s)
- G Mattavelli
- Department of Psychology, University of Milano-Bicocca, Piazza Ateneo Nuovo, 1, 20126 Milano, Italy.
- Z Cattaneo
- Department of Psychology, University of Milano-Bicocca, Piazza Ateneo Nuovo, 1, 20126 Milano, Italy
- C Papagno
- Department of Psychology, University of Milano-Bicocca, Piazza Ateneo Nuovo, 1, 20126 Milano, Italy
14. Hartcher-O'Brien J, Levitan C, Spence C. Extending visual dominance over touch for input off the body. Brain Res 2010;1362:48-55. DOI: 10.1016/j.brainres.2010.09.036.
15. Influence of the body on crossmodal interference effects between tactile and two-dimensional visual stimuli. Exp Brain Res 2010;204:419-430. DOI: 10.1007/s00221-010-2267-6.
16. Igarashi Y, Kimura Y, Spence C, Ichihara S. The selective effect of the image of a hand on visuotactile interactions as assessed by performance on the crossmodal congruency task. Exp Brain Res 2007;184:31-38. PMID: 17726605. DOI: 10.1007/s00221-007-1076-z.
Abstract
Seeing one's own body (either directly or indirectly) can influence visuotactile crossmodal interactions. Previously, it has been shown that even viewing a simple line drawing of a hand can modulate such crossmodal interactions, as if viewing the picture of a hand somehow primes the representation of one's own hand. However, factors other than the sight of a symbolic picture of a hand may have modulated the crossmodal interactions reported in previous research. In the present study, we examined the crossmodal modulatory effects of viewing five different visual images (a photograph of a hand, a line drawing of a hand, a line drawing of a car, a U-shape, and an ellipse) on tactile performance. Participants made speeded discrimination responses regarding the location of brief vibrotactile targets presented to either the tip or the base of their left index finger, while trying to ignore visual distractors presented to either the left or right of central fixation. We compared the visuotactile congruency effects elicited when the five different visual images were presented superimposed over the visual distractors. Participants' tactile discrimination performance was modulated to a significantly greater extent by viewing the photograph of a hand than by viewing the outline drawing of a hand. No such crossmodal congruency effects were reported in any of the other conditions. These results therefore suggest that visuotactile interactions are specifically modulated by the image of the hand rather than just by any simple orientation cues that may be provided by the image of a hand.
Affiliation(s)
- Yuka Igarashi
- Department of Psychology, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji, Tokyo 192-0397, Japan.