51. Mind the blind brain to understand the sighted one! Is there a supramodal cortical functional architecture? Neurosci Biobehav Rev 2014; 41:64-77. [DOI: 10.1016/j.neubiorev.2013.10.006]
52. Chi Y, Yue Z, Liu Y, Mo L, Chen Q. Dissociable identity- and modality-specific neural representations as revealed by cross-modal nonspatial inhibition of return. Hum Brain Mapp 2014; 35:4002-15. [PMID: 24453184] [DOI: 10.1002/hbm.22454]
Abstract
There are ongoing debates on whether object concepts are coded as supramodal identity-based or modality-specific representations in the human brain. In this fMRI study, we adopted a cross-modal "prime-neutral cue-target" semantic priming paradigm, in which the prime-target relationship was manipulated along both the identity and the modality dimensions. The prime and the target could refer to either the same or different semantic identities, and could be delivered via either the same or different sensory modalities. By calculating the main effects and interactions of this 2 (identity cue validity: "Identity_Cued" vs. "Identity_Uncued") × 2 (modality cue validity: "Modality_Cued" vs. "Modality_Uncued") factorial design, we aimed to dissociate three neural networks: one involved in creating novel identity-specific representations independent of sensory modality, one in creating modality-specific representations independent of semantic identity, and one in evaluating changes of an object along both the identity and the modality dimensions. Our results suggested that bilateral lateral occipital cortex was involved in creating a new supramodal semantic representation irrespective of the input modality; that left dorsal premotor cortex and left intraparietal sulcus were involved in creating a new modality-specific representation irrespective of its semantic identity; and that bilateral superior temporal sulcus was involved in creating a representation when the identity and modality properties were both cued or both uncued. In addition, right inferior frontal gyrus showed enhanced neural activity only when both the identity and the modality of the target were new, indicating its functional role in novelty detection.
Affiliation
- Yukai Chi: Center for Studies of Psychological Application and School of Psychology, South China Normal University, Guangzhou, China
53.
Abstract
Humans typically rely upon vision to identify object shape, but we can also recognize shape via touch (haptics). Our haptic shape recognition ability raises an intriguing question: To what extent do visual cortical shape recognition mechanisms support haptic object recognition? We addressed this question using a haptic fMRI repetition design, which allowed us to identify neuronal populations sensitive to the shape of objects that were touched but not seen. In addition to the expected shape-selective fMRI responses in dorsal frontoparietal areas, we observed widespread shape-selective responses in the ventral visual cortical pathway, including primary visual cortex. Our results indicate that shape processing via touch engages many of the same neural mechanisms as visual object recognition. The shape-specific repetition effects we observed in primary visual cortex show that visual sensory areas are engaged during the haptic exploration of object shape, even in the absence of concurrent shape-related visual input. Our results complement related findings in visually deprived individuals and highlight the fundamental role of the visual system in the processing of object shape.
54. Maidenbaum S, Abboud S, Amedi A. Sensory substitution: closing the gap between basic research and widespread practical visual rehabilitation. Neurosci Biobehav Rev 2013; 41:3-15. [PMID: 24275274] [DOI: 10.1016/j.neubiorev.2013.11.007]
Abstract
Sensory substitution devices (SSDs) have come a long way since they were first developed for visual rehabilitation. They have produced exciting experimental results and have furthered our understanding of the human brain. Unfortunately, they are still not used for practical visual rehabilitation and remain confined primarily to experiments in controlled settings. Over the past decade, our understanding of the neural mechanisms behind visual restoration has changed as a result of converging evidence, much of which was gathered with SSDs. This evidence suggests that the brain is not a pure sensory machine but rather a highly flexible task machine, i.e., brain regions can maintain or regain their function in vision even with input from other senses. This complements a recent set of more promising behavioral achievements using SSDs, along with new promising technologies and tools. All these changes strongly suggest that the time has come to revive the focus on practical visual rehabilitation with SSDs, and we chart several key steps in this direction, such as training protocols and self-training tools.
Affiliations
- Shachar Maidenbaum: Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
- Sami Abboud: Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
- Amir Amedi: Department of Medical Neurobiology, The Institute for Medical Research Israel-Canada, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel; The Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem, Jerusalem 91220, Israel
55. Disintegration of multisensory signals from the real hand reduces default limb self-attribution: an fMRI study. J Neurosci 2013; 33:13350-66. [PMID: 23946393] [DOI: 10.1523/jneurosci.1363-13.2013]
Abstract
The perception of our limbs in space is built upon the integration of visual, tactile, and proprioceptive signals. Accumulating evidence suggests that these signals are combined in areas of premotor, parietal, and cerebellar cortices. However, it remains to be determined whether neuronal populations in these areas integrate hand signals according to basic temporal and spatial congruence principles of multisensory integration. Here, we developed a setup based on advanced 3D video technology that allowed us to manipulate the spatiotemporal relationships of visuotactile (VT) stimuli delivered on a healthy human participant's real hand during fMRI and investigate the ensuing neural and perceptual correlates. Our experiments revealed two novel findings. First, we found responses in premotor, parietal, and cerebellar regions that were dependent upon the spatial and temporal congruence of VT stimuli. This multisensory integration effect required a simultaneous match between the seen and felt postures of the hand, which suggests that congruent visuoproprioceptive signals from the upper limb are essential for successful VT integration. Second, we observed that multisensory conflicts significantly disrupted the default feeling of ownership of the seen real limb, as indexed by complementary subjective, psychophysiological, and BOLD measures. The degree to which self-attribution was impaired could be predicted from the attenuation of neural responses in key multisensory areas. These results elucidate the neural bases of the integration of multisensory hand signals according to basic spatiotemporal principles and demonstrate that the disintegration of these signals leads to "disownership" of the seen real hand.
56. Pasqualotto A, Finucane CM, Newell FN. Ambient visual information confers a context-specific, long-term benefit on memory for haptic scenes. Cognition 2013; 128:363-79. [DOI: 10.1016/j.cognition.2013.04.011]
57. Berger C, Ehrsson H. Mental imagery changes multisensory perception. Curr Biol 2013; 23:1367-72. [PMID: 23810539] [DOI: 10.1016/j.cub.2013.06.012]
58. Isolating shape from semantics in haptic-visual priming. Exp Brain Res 2013; 227:311-22. [DOI: 10.1007/s00221-013-3489-1]
59. Eck J, Kaas AL, Goebel R. Crossmodal interactions of haptic and visual texture information in early sensory cortex. Neuroimage 2013; 75:123-135. [PMID: 23507388] [DOI: 10.1016/j.neuroimage.2013.02.075]
Abstract
Both visual and haptic information contribute to the perception of surface texture. While prior studies have reported crossmodal interactions of both sensory modalities at the behavioral level, neuroimaging studies have primarily investigated texture perception in separate visual and haptic paradigms. Such experimental designs, however, can only identify overlap between the two sensory processing streams, not interactions between visual and haptic texture processing. By varying texture characteristics in a bimodal task, the current study investigated how these crossmodal interactions are reflected at the cortical level. We used fMRI to compare cortical activation in response to matching versus non-matching visual-haptic texture information. We expected that passive simultaneous presentation of matching visual-haptic input would be sufficient to induce BOLD responses graded with varying texture characteristics. Since no cognitive evaluation of the stimuli was required, we expected to find changes primarily at a rather early processing stage. Our results confirmed our assumptions by showing crossmodal interactions of visual-haptic texture information in early somatosensory and visual cortex. However, the nature of the crossmodal effects differed slightly between the two sensory cortices. In early visual cortex, matching visual-haptic information increased the average activation level and induced parametric BOLD signal variations with varying texture characteristics. In early somatosensory cortex, only the latter was true. These results challenge the notion that visual and haptic texture information is processed independently and indicate a crossmodal interaction of sensory information already at an early cortical processing stage.
Affiliations
- Judith Eck: Department of Cognitive Neuroscience, Maastricht University, The Netherlands; Brain Innovation B.V., Maastricht, The Netherlands
- Amanda L Kaas: Department of Cognitive Neuroscience, Maastricht University, The Netherlands
- Rainer Goebel: Department of Cognitive Neuroscience, Maastricht University, The Netherlands; Brain Innovation B.V., Maastricht, The Netherlands; Netherlands Institute for Neuroscience, Institute of the Royal Netherlands Academy of Arts and Sciences (KNAW), Amsterdam, The Netherlands
60. Striem-Amit E, Bubic A, Amedi A. Neurophysiological mechanisms underlying plastic changes and rehabilitation following sensory loss in blindness and deafness. Front Neurosci 2013. [DOI: 10.1201/9781439812174-27]
61. Kassuba T, Klinge C, Hölig C, Röder B, Siebner HR. Vision holds a greater share in visuo-haptic object recognition than touch. Neuroimage 2013; 65:59-68. [DOI: 10.1016/j.neuroimage.2012.09.054]
62. Mancini F, Bolognini N, Haggard P, Vallar G. tDCS modulation of visually induced analgesia. J Cogn Neurosci 2012; 24:2419-27. [DOI: 10.1162/jocn_a_00293]
Abstract
Multisensory interactions can produce analgesic effects. In particular, viewing one's own body reduces pain levels, perhaps because of changes in connectivity between visual areas specialized for body representation, and sensory areas underlying pain perception. We tested the causal role of the extrastriate visual cortex in triggering visually induced analgesia by modulating the excitability of this region with transcranial direct current stimulation (tDCS). Anodal, cathodal, or sham tDCS (2 mA, 10 min) was administered to 24 healthy participants over the right occipital or over the centro-parietal areas thought to be involved in the sensory processing of pain. Participants were required to rate the intensity of painful electrical stimuli while viewing either their left hand or an object occluding the left hand, both before and immediately after tDCS. We found that the analgesic effect of viewing the body was enhanced selectively by anodal stimulation of the occipital cortex. The effect was specific for the polarity and the site of stimulation. The present results indicate that visually induced analgesia may depend on neural signals from the extrastriate visual cortex.
Affiliations
- Nadia Bolognini: University of Milano-Bicocca; IRCCS Istituto Auxologico Italiano
- Giuseppe Vallar: University of Milano-Bicocca; IRCCS Istituto Auxologico Italiano
63. Transfer of object category knowledge across visual and haptic modalities: experimental and computational studies. Cognition 2012; 126:135-48. [PMID: 23102553] [DOI: 10.1016/j.cognition.2012.08.005]
Abstract
We study people's abilities to transfer object category knowledge across visual and haptic domains. If a person learns to categorize objects based on inputs from one sensory modality, can the person categorize these same objects when the objects are perceived through another modality? Can the person categorize novel objects from the same categories when these objects are, again, perceived through another modality? Our work makes three contributions. First, by fabricating Fribbles (3-D, multi-part objects with a categorical structure), we developed visual-haptic stimuli that are highly complex and realistic, and thus more ecologically valid than objects that are typically used in haptic or visual-haptic experiments. Based on these stimuli, we developed the See and Grasp data set, a data set containing both visual and haptic features of the Fribbles, and are making this data set freely available on the world wide web. Second, complementary to previous research such as studies asking if people transfer knowledge of object identity across visual and haptic domains, we conducted an experiment evaluating whether people transfer object category knowledge across these domains. Our data clearly indicate that we do. Third, we developed a computational model that learns multisensory representations of prototypical 3-D shape. Similar to previous work, the model uses shape primitives to represent parts, and spatial relations among primitives to represent multi-part objects. However, it is distinct in its use of a Bayesian inference algorithm allowing it to acquire multisensory representations, and sensory-specific forward models allowing it to predict visual or haptic features from multisensory representations. The model provides an excellent qualitative account of our experimental data, thereby illustrating the potential importance of multisensory representations and sensory-specific forward models to multisensory perception.
64. Gaißert N, Waterkamp S, Fleming RW, Bülthoff I. Haptic categorical perception of shape. PLoS One 2012; 7:e43062. [PMID: 22900089] [PMCID: PMC3416786] [DOI: 10.1371/journal.pone.0043062]
Abstract
Categorization and categorical perception have been extensively studied, mainly in vision and audition. In the haptic domain, our ability to categorize objects has also been demonstrated in earlier studies. Here we show for the first time that categorical perception also occurs in haptic shape perception. We generated a continuum of complex shapes by morphing between two volumetric objects. Using similarity ratings and multidimensional scaling, we ensured that participants could haptically discriminate all objects equally. Next, we performed classification and discrimination tasks. After a short training with the two shape categories, both tasks revealed categorical perception effects. Training led to between-category expansion, resulting in higher discriminability of physical differences between pairs of stimuli straddling the category boundary. Thus, even brief training can alter haptic representations of shape. This suggests that the weights attached to various haptic shape features can be changed dynamically in response to top-down information about class membership.
Affiliations
- Nina Gaißert: Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Roland W. Fleming: Max Planck Institute for Biological Cybernetics, Tübingen, Germany; University of Giessen, Giessen, Germany
- Isabelle Bülthoff: Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
65. Toussaint L, Caissie AF, Blandin Y. Does mental rotation ability depend on sensory-specific experience? J Cogn Psychol 2012. [DOI: 10.1080/20445911.2011.641529]
66. Likova LT. Drawing enhances cross-modal memory plasticity in the human brain: a case study in a totally blind adult. Front Hum Neurosci 2012; 6:44. [PMID: 22593738] [PMCID: PMC3350955] [DOI: 10.3389/fnhum.2012.00044]
Abstract
In a memory-guided drawing task under blindfolded conditions, we have recently used functional magnetic resonance imaging (fMRI) to demonstrate that the primary visual cortex (V1) may operate as the visuo-spatial buffer, or "sketchpad," for working memory. The results implied, however, a modality-independent or amodal form of its operation. In the present study, to validate the role of V1 in non-visual memory, we eliminated not only the visual input but all levels of visual processing by replicating the paradigm in a congenitally blind individual. Our novel Cognitive-Kinesthetic method was used to train this totally blind subject to draw complex images guided solely by tactile memory. Control tasks of tactile exploration and memorization of the image to be drawn, and of memory-free scribbling, were also included. fMRI was run before and after training. Remarkably, V1 of this congenitally blind individual, which before training exhibited noisy, immature, and non-specific responses, after training produced full-fledged response time-courses specific to the tactile-memory drawing task. The results reveal the operation of a rapid training-based plasticity mechanism that recruits the resources of V1 in the process of learning to draw. The learning paradigm allowed us to investigate for the first time the evolution of plastic re-assignment in V1 in a congenitally blind subject. These findings are consistent with a non-visual memory involvement of V1, and specifically imply that the observed cortical reorganization can be empowered by the process of learning to draw.
Affiliation
- Lora T Likova: The Smith-Kettlewell Eye Research Institute, San Francisco, CA, USA
67. Russo E, Treves A. Cortical free-association dynamics: distinct phases of a latching network. Phys Rev E Stat Nonlin Soft Matter Phys 2012; 85:051920. [PMID: 23004800] [DOI: 10.1103/physreve.85.051920]
Abstract
A Potts associative memory network has been proposed as a simplified model of macroscopic cortical dynamics, in which each Potts unit stands for a patch of cortex, which can be activated in one of S local attractor states. The internal neuronal dynamics of the patch is not described by the model; rather, it is subsumed into an effective description in terms of graded Potts units, with adaptation effects both specific to each attractor state and generic to the patch. If each unit, or patch, receives effective (tensor) connections from C other units, the network has been shown to be able to store a large number p of global patterns, or network attractors, each with a fraction a of the units active, where the critical load p_c scales roughly as p_c ≈ C S^2 / (a ln(1/a)) (if the patterns are randomly correlated). Interestingly, after retrieving an externally cued attractor, the network can continue jumping, or latching, from attractor to attractor, driven by adaptation effects. The occurrence and duration of latching dynamics is found through simulations to depend critically on the strength of local attractor states, expressed in the Potts model by a parameter w. Here we describe with simulations and then analytically the boundaries between distinct phases of no latching, of transient and sustained latching, deriving a phase diagram in the (w, T) plane, where T parametrizes thermal noise effects. Implications for real cortical dynamics are briefly reviewed in the conclusions.
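The capacity scaling quoted in this abstract can be illustrated numerically. A minimal sketch follows; the parameter values (C = 150 connections per unit, S = 7 local states, sparsity a = 0.25) are hypothetical choices for illustration only, not taken from the paper:

```python
import math

def potts_critical_load(C, S, a):
    """Approximate Potts-network storage capacity using the scaling
    p_c ~ C * S**2 / (a * ln(1/a)) quoted in the abstract."""
    return C * S**2 / (a * math.log(1.0 / a))

# Hypothetical illustrative values (not from the paper):
# C = 150 effective connections per unit, S = 7 local attractor
# states, and sparsity a = 0.25 of units active per pattern.
p_c = potts_critical_load(150, 7, 0.25)
print(f"estimated critical load: ~{p_c:.0f} patterns")
```

Consistent with the formula, the estimated capacity grows linearly in the connectivity C and quadratically in the number of local states S, which is why Potts networks can store many more patterns than binary attractor networks of comparable connectivity.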
Affiliation
- Eleonora Russo: SISSA, Cognitive Neuroscience, via Bonomea 265, 34136 Trieste, Italy
68. Martinovic J, Lawson R, Craddock M. Time course of information processing in visual and haptic object classification. Front Hum Neurosci 2012; 6:49. [PMID: 22470327] [PMCID: PMC3311268] [DOI: 10.3389/fnhum.2012.00049]
Abstract
Vision identifies objects rapidly and efficiently. In contrast, object recognition by touch is much slower. Furthermore, haptics usually serially accumulates information from different parts of objects, whereas vision typically processes object information in parallel. Is haptic object identification slower simply due to sequential information acquisition and the resulting memory load or due to more fundamental processing differences between the senses? To compare the time course of visual and haptic object recognition, we slowed visual processing using a novel, restricted viewing technique. In an electroencephalographic (EEG) experiment, participants discriminated familiar, nameable from unfamiliar, unnamable objects both visually and haptically. Analyses focused on the evoked and total fronto-central theta-band (5-7 Hz; a marker of working memory) and the occipital upper alpha-band (10-12 Hz; a marker of perceptual processing) locked to the onset of classification. Decreases in total upper alpha-band activity for haptic identification of objects indicate a likely processing role of multisensory extrastriate areas. Long-latency modulations of alpha-band activity differentiated between familiar and unfamiliar objects in haptics but not in vision. In contrast, theta-band activity showed a general increase over time for the slowed-down visual recognition task only. We conclude that haptic object recognition relies on common representations with vision but also that there are fundamental differences between the senses that do not merely arise from differences in their speed of processing.
Affiliations
- Rebecca Lawson: School of Psychology, University of Liverpool, Liverpool, UK
- Matt Craddock: School of Psychology, University of Liverpool, Liverpool, UK; Institut für Psychologie, Universität Leipzig, Leipzig, Germany
69. Touching motion: rTMS on the human middle temporal complex interferes with tactile speed perception. Brain Topogr 2012; 25:389-98. [PMID: 22367586] [DOI: 10.1007/s10548-012-0223-4]
Abstract
Brain functional and psychophysical studies have clearly demonstrated that visual motion perception relies on the activity of the middle temporal complex (hMT+). However, recent studies have shown that hMT+ also seems to be activated during tactile motion perception, suggesting that this visual extrastriate area is involved in the processing and integration of motion irrespective of sensory modality. In the present study, we used repetitive transcranial magnetic stimulation (rTMS) to assess whether hMT+ plays a causal role in tactile motion processing. Blindfolded participants detected changes in the speed of a grid of tactile moving points with their finger (i.e., tactile modality). The experiment included three different conditions: a control condition with no TMS and two TMS conditions, i.e., hMT+-rTMS and posterior parietal cortex (PPC)-rTMS. Accuracies were significantly impaired during hMT+-rTMS but not in the other two conditions (No-rTMS or PPC-rTMS); moreover, thresholds for detecting speed changes were significantly higher in the hMT+-rTMS condition than in the control TMS conditions. These findings provide stronger evidence that the activity of the hMT+ area is involved in tactile speed processing, consistent with the hypothesis of a supramodal role for that cortical region in motion processing.
70. Ageing affects event-related potentials and brain oscillations: a behavioral and electrophysiological study using a haptic recognition memory task. Neuropsychologia 2011; 49:3967-80. [PMID: 22027172] [DOI: 10.1016/j.neuropsychologia.2011.10.013]
71. Gaissert N, Wallraven C. Categorizing natural objects: a comparison of the visual and the haptic modalities. Exp Brain Res 2011; 216:123-34. [PMID: 22048319] [DOI: 10.1007/s00221-011-2916-4]
Abstract
Although the hands are the most important tool humans use to manipulate objects, little is known about haptic processing of natural objects. Here, we selected a unique set of natural objects, namely seashells, which vary along a variety of object features while sharing others across all stimuli. To interact with objects correctly, we must identify or categorize them. For both processes, measuring similarities between objects is crucial. Our goal is to better understand the haptic similarity percept by comparing it to the visual similarity percept. First, direct similarity measures were analyzed using multidimensional scaling techniques to visualize the perceptual spaces of both modalities. We find that the visual and the haptic modality form almost identical perceptual spaces. Next, we performed three different categorization tasks. All tasks exhibit highly accurate processing of complex shapes by the haptic modality. Moreover, we find that objects grouped into the same category form regions within the perceptual space. Hence, in both modalities, perceived similarity constitutes the basis for categorizing objects, and both modalities focus on shape to form categories. Taken together, our results lead to the assumption that the same cognitive processes link haptic and visual similarity perception and the resulting categorization behavior.
Affiliation
- Nina Gaissert: Max Planck Institute for Biological Cybernetics, Tübingen, Germany
72. Lacey S, Lin JB, Sathian K. Object and spatial imagery dimensions in visuo-haptic representations. Exp Brain Res 2011; 213:267-73. [PMID: 21424255] [PMCID: PMC3121910] [DOI: 10.1007/s00221-011-2623-1]
Abstract
Visual imagery comprises object and spatial dimensions. Both types of imagery encode shape but a key difference is that object imagers are more likely to encode surface properties than spatial imagers. Since visual and haptic object representations share many characteristics, we investigated whether haptic and multisensory representations also share an object-spatial continuum. Experiment 1 involved two tasks in both visual and haptic within-modal conditions, one requiring discrimination of shape across changes in texture, the other discrimination of texture across changes in shape. In both modalities, spatial imagers could ignore changes in texture but not shape, whereas object imagers could ignore changes in shape but not texture. Experiment 2 re-analyzed a cross-modal version of the shape discrimination task from an earlier study. We found that spatial imagers could discriminate shape across changes in texture but object imagers could not and that the more one preferred object imagery, the more texture changes impaired discrimination. These findings are the first evidence that object and spatial dimensions of imagery can be observed in haptic and multisensory representations.
Affiliation(s)
- Simon Lacey, Department of Neurology, Emory University School of Medicine, WMB-6000, 101 Woodruff Circle, Atlanta, GA 30322, USA

73
Gaißert N, Bülthoff HH, Wallraven C. Similarity and categorization: from vision to touch. Acta Psychol (Amst) 2011; 138:219-30. [PMID: 21752344] [DOI: 10.1016/j.actpsy.2011.06.007]
Abstract
Even though human perceptual development relies on combining multiple modalities, most categorization studies so far have focused on the visual modality. To better understand the mechanisms underlying multisensory categorization, we analyzed visual and haptic perceptual spaces and compared them with human categorization behavior. As stimuli we used a three-dimensional object space of complex, parametrically defined objects. First, we gathered similarity ratings for all objects and analyzed the perceptual spaces of both modalities using multidimensional scaling analysis. Next, we performed three different categorization tasks representative of everyday learning scenarios: in a fully unconstrained task, objects were freely categorized; in a semi-constrained task, exactly three groups had to be created; and in a constrained task, participants received three prototype objects and had to assign all other objects accordingly. We found that the haptic modality was on par with the visual modality both in recovering the topology of the physical space and in solving the categorization tasks. We also found that within-category similarity was consistently higher than across-category similarity for all categorization tasks, and thus show how perceptual spaces based on similarity can explain visual and haptic object categorization. Our results suggest that both modalities employ similar processes in forming categories of complex objects.
74
Striem-Amit E, Bubic A, Amedi A. Neurophysiological Mechanisms Underlying Plastic Changes and Rehabilitation following Sensory Loss in Blindness and Deafness. Front Neurosci 2011. [DOI: 10.1201/b11092-27]
77
Recruitment of occipital cortex during sensory substitution training linked to subjective experience of seeing in people with blindness. PLoS One 2011; 6:e23264. [PMID: 21853098] [PMCID: PMC3154329] [DOI: 10.1371/journal.pone.0023264]
Abstract
Over three months of intensive training with a tactile stimulation device, 18 blind and 10 blindfolded sighted subjects improved in their ability to identify geometric figures by touch. Seven blind subjects spontaneously reported 'visual qualia', the subjective sensation of seeing flashes of light congruent with the tactile stimuli. In these subjects, tactile stimulation evoked activation of occipital cortex on electroencephalography (EEG). None of the blind subjects who failed to experience visual qualia, despite identical tactile stimulation training, showed EEG recruitment of occipital cortex, and none of the blindfolded sighted subjects reported visual-like sensations during tactile stimulation. These findings support the notion that the conscious experience of seeing is linked to the activation of occipital brain regions in people with blindness. Moreover, the findings indicate that visual information can be provided through non-visual sensory modalities, which may help to minimize the disability of blind individuals, affording them some degree of object recognition and navigation aid.
78
Mancini F, Bolognini N, Bricolo E, Vallar G. Cross-modal Processing in the Occipito-temporal Cortex: A TMS Study of the Müller-Lyer Illusion. J Cogn Neurosci 2011; 23:1987-97. [DOI: 10.1162/jocn.2010.21561]
Abstract
The Müller-Lyer illusion occurs both in vision and in touch, and transfers cross-modally from vision to haptics [Mancini, F., Bricolo, E., & Vallar, G. Multisensory integration in the Müller-Lyer illusion: From vision to haptics. Quarterly Journal of Experimental Psychology, 63, 818–830, 2010]. Recent evidence suggests that the neural underpinnings of the Müller-Lyer illusion in the visual modality involve the bilateral lateral occipital complex (LOC) and right superior parietal cortex (SPC). Conversely, the neural correlates of the haptic and cross-modal illusions have never been investigated previously. Here we used repetitive TMS (rTMS) to address the causal role of the regions activated by the visual illusion in the generation of the visual, haptic, and cross-modal visuo-haptic illusory effects, investigating putative modality-specific versus cross-modal underlying processes. rTMS was administered to the right and the left hemisphere, over occipito-temporal cortex or SPC. rTMS over left and right occipito-temporal cortex impaired both unisensory (visual, haptic) and cross-modal processing of the illusion in a similar fashion. Conversely, rTMS interference over left and right SPC did not affect the illusion in any modality. These results demonstrate the causal involvement of bilateral occipito-temporal cortex in the representation of the visual, haptic, and cross-modal Müller-Lyer illusion, in favor of the hypothesis of shared underlying processes. This indicates that occipito-temporal cortex plays a cross-modal role in the perception of both illusory and nonillusory shapes.
Affiliation(s)
- Flavia Mancini, University of Milano-Bicocca, Milan, Italy; IRCCS Istituto Auxologico Italiano, Milan, Italy
- Nadia Bolognini, University of Milano-Bicocca, Milan, Italy; IRCCS Istituto Auxologico Italiano, Milan, Italy
- Giuseppe Vallar, University of Milano-Bicocca, Milan, Italy; IRCCS Istituto Auxologico Italiano, Milan, Italy
80
Yalachkov Y, Kaiser J, Görres A, Seehaus A, Naumer MJ. Smoking experience modulates the cortical integration of vision and haptics. Neuroimage 2011; 59:547-55. [PMID: 21835248] [DOI: 10.1016/j.neuroimage.2011.07.041]
Abstract
Human neuroplasticity of multisensory integration has been studied mainly in the context of natural or artificial training situations in healthy subjects. However, regular smokers also offer the opportunity to assess the impact of intensive daily multisensory interactions with smoking-related objects on the neural correlates of crossmodal object processing. The present functional magnetic resonance imaging study revealed that smokers show a comparable visuo-haptic integration pattern for both smoking paraphernalia and control objects in the left lateral occipital complex, a region playing a crucial role in crossmodal object recognition. Moreover, the degree of nicotine dependence correlated positively with the magnitude of visuo-haptic integration in the left lateral occipital complex (LOC) for smoking-associated but not for control objects. In contrast, in the left LOC non-smokers displayed a visuo-haptic integration pattern for control objects, but not for smoking paraphernalia. This suggests that prolonged smoking-related multisensory experiences in smokers facilitate the merging of visual and haptic inputs in the lateral occipital complex for the respective stimuli. Studying clinical populations who engage in compulsive activities may represent an ecologically valid approach to investigating the neuroplasticity of multisensory integration.
Affiliation(s)
- Yavor Yalachkov, Institute of Medical Psychology, Goethe-University, Heinrich-Hoffmann-Strasse 10, D-60528 Frankfurt am Main, Germany

81
Sathian K, Lacey S, Stilla R, Gibson GO, Deshpande G, Hu X, LaConte S, Glielmi C. Dual pathways for haptic and visual perception of spatial and texture information. Neuroimage 2011; 57:462-75. [PMID: 21575727] [PMCID: PMC3128427] [DOI: 10.1016/j.neuroimage.2011.05.001]
Abstract
Segregation of information flow along a dorsally directed pathway for processing object location and a ventrally directed pathway for processing object identity is well established in the visual and auditory systems, but is less clear in the somatosensory system. We hypothesized that segregation of location vs. identity information in touch would be evident if texture is the relevant property for stimulus identity, given the salience of texture for touch. Here, we used functional magnetic resonance imaging (fMRI) to investigate whether the pathways for haptic and visual processing of location and texture are segregated, and the extent of bisensory convergence. Haptic texture-selectivity was found in the parietal operculum and posterior visual cortex bilaterally, and in parts of left inferior frontal cortex. There was bisensory texture-selectivity at some of these sites in posterior visual and left inferior frontal cortex. Connectivity analyses demonstrated, in each modality, flow of information from unisensory non-selective areas to modality-specific texture-selective areas and further to bisensory texture-selective areas. Location-selectivity was mostly bisensory, occurring in dorsal areas, including the frontal eye fields and multiple regions around the intraparietal sulcus bilaterally. Many of these regions received input from unisensory areas in both modalities. Together with earlier studies, the activation and connectivity analyses of the present study establish that somatosensory processing flows into segregated pathways for location and object identity information. The location-selective somatosensory pathway converges with its visual counterpart in dorsal frontoparietal cortex, while the texture-selective somatosensory pathway runs through the parietal operculum before converging with its visual counterpart in visual and frontal cortex. Both segregation of sensory processing according to object property and multisensory convergence appear to be universal organizing principles.
Affiliation(s)
- K Sathian, Department of Neurology, Emory University, Atlanta, GA, USA

82
Kassuba T, Klinge C, Hölig C, Menz MM, Ptito M, Röder B, Siebner HR. The left fusiform gyrus hosts trisensory representations of manipulable objects. Neuroimage 2011; 56:1566-77. [DOI: 10.1016/j.neuroimage.2011.02.032]
83
James TW, Stevenson RA, Kim S, Vanderklok RM, James KH. Shape from sound: evidence for a shape operator in the lateral occipital cortex. Neuropsychologia 2011; 49:1807-15. [PMID: 21397616] [PMCID: PMC3100397] [DOI: 10.1016/j.neuropsychologia.2011.03.004]
Abstract
A recent view of cortical functional specialization suggests that the primary organizing principle of the cortex is based on task requirements, rather than sensory modality. Consistent with this view, recent evidence suggests that a region of the lateral occipitotemporal cortex (LO) may process object shape information regardless of the modality of sensory input. There is considerable evidence that area LO is involved in processing visual and haptic shape information. However, sound can also carry acoustic cues to an object's shape, for example, when a sound is produced by an object's impact with a surface. Thus, the current study used auditory stimuli that were created from recordings of objects impacting a hard surface to test the hypothesis that area LO is also involved in auditory shape processing. The objects were of two shapes, rods and balls, and of two materials, metal and wood. Subjects were required to categorize the impact sounds in one of three tasks: (1) by the shape of the object while ignoring material; (2) by the material of the object while ignoring shape; or (3) by using all the information available. Area LO was more strongly recruited when subjects discriminated impact sounds based on the shape of the object that made them, compared to when subjects discriminated those same sounds based on material. The current findings suggest that activation in area LO is shape selective regardless of sensory input modality, and are consistent with an emerging theory of perceptual functional specialization of the brain that is task-based rather than sensory modality-based.
Affiliation(s)
- Thomas W James, Department of Psychological and Brain Sciences, Indiana University, United States

84
Abstract
Our vision remains stable even though the movements of our eyes, head and bodies create a motion pattern on the retina. One of the most important, yet basic, feats of the visual system is to correctly determine whether this retinal motion is owing to real movement in the world or rather to our own self-movement. This problem has occupied many great thinkers, such as Descartes and Helmholtz, at least since the time of Alhazen. This theme issue brings together leading researchers from animal neurophysiology, clinical neurology, psychophysics and cognitive neuroscience to summarize the state of the art in the study of visual stability. Recently, there has been significant progress in understanding the limits of visual stability in humans and in identifying many of the brain circuits involved in maintaining a stable percept of the world. Clinical studies and new experimental methods, such as transcranial magnetic stimulation, now make it possible to test the causal role of different brain regions in creating visual stability and also allow us to measure the consequences when the mechanisms of visual stability break down.
Affiliation(s)
- David Melcher, Faculty of Cognitive Science, University of Trento, Italy

85
A Ventral Visual Stream Reading Center Independent of Visual Experience. Curr Biol 2011; 21:363-8. [PMID: 21333539] [DOI: 10.1016/j.cub.2011.01.040]
86
Lucan JN, Foxe JJ, Gomez-Ramirez M, Sathian K, Molholm S. Tactile shape discrimination recruits human lateral occipital complex during early perceptual processing. Hum Brain Mapp 2011; 31:1813-21. [PMID: 20162607] [DOI: 10.1002/hbm.20983]
Abstract
Neuroimaging studies investigating somatosensory-based object recognition in humans have revealed activity in the lateral occipital complex (LOC), a cluster of regions primarily associated with visual object recognition. To date, whether this activity occurs during or subsequent to recognition per se has been difficult to determine, owing to the low temporal resolution of the hemodynamic response. To measure the timing of somatosensory object recognition processes more finely, we employed high-density EEG using a modified version of a paradigm previously applied in neuroimaging experiments. Simple geometric shapes were presented to the right index finger of 10 participants while the ongoing EEG was measured, time-locked to the stimulus. In the condition of primary interest, participants discriminated the shape of the stimulus; in the alternate condition, they judged stimulus duration. Using traditional event-related potential analysis techniques, we found significantly greater amplitudes in the evoked potentials of the shape discrimination condition between 140 and 160 ms, a timeframe in which LOC-mediated perceptual processes are believed to occur during visual object recognition. Scalp voltage topography and source analysis procedures indicated the lateral occipital complex as the likely source behind this effect. This finding supports a multisensory role for the lateral occipital complex during object recognition.
Affiliation(s)
- Joshua N Lucan, Program in Cognitive Neuroscience, Department of Psychology, City College of the City University of New York, New York, USA

87
Toroj M, Szubielska M. Prior visual experience, and perception and memory of shape in people with total blindness. British Journal of Visual Impairment 2011. [DOI: 10.1177/0264619610387554]
Abstract
The aim of this study was to explore the role of prior visual experience in tactile differentiation of object shapes. The study investigated whether people who lost their vision later in life were able to identify and recognize object shapes more accurately and more quickly than those who were blind from birth. Four experiments were conducted: the first two concerned tactile shape differentiation, the second two shape recognition. The hypotheses were only partially confirmed. The 'late' blind participants distinguished shapes more accurately than the congenitally blind (particularly in 'simple' perception tasks). This finding may suggest that people who have prior visual experience use an allocentric strategy when visualizing object shapes in their imagery. The 'late' blind participants performed the tasks more slowly than those who were congenitally blind. This may be explained by the complexity of the task, the time needed to create an allocentric representation, and the discrepancy in tactile experience between the congenitally and late blind groups. A number of implications for further research are outlined.
Affiliation(s)
- Malgorzata Toroj, Faculty of Psychology, University of Humanities and Economics in Lodz
89
Seemüller A, Fiehler K, Rösler F. Unimodal and crossmodal working memory representations of visual and kinesthetic movement trajectories. Acta Psychol (Amst) 2011; 136:52-9. [PMID: 20970103] [DOI: 10.1016/j.actpsy.2010.09.014]
Abstract
The present study investigated whether visual and kinesthetic stimuli are stored as multisensory or modality-specific representations in unimodal and crossmodal working memory tasks. To this end, angle-shaped movement trajectories were presented to 16 subjects in delayed matching-to-sample tasks either visually or kinesthetically during encoding and recognition. During the retention interval, a secondary visual or kinesthetic interference task was inserted either immediately or with a delay after encoding. The modality of the interference task interacted significantly with the encoding modality. After visual encoding, memory was more impaired by a visual than by a kinesthetic secondary task, while after kinesthetic encoding the pattern was reversed. The time when the secondary task had to be performed interacted with the encoding modality as well. For visual encoding, memory was more impaired, when the secondary task had to be performed at the beginning of the retention interval. In contrast, memory after kinesthetic encoding was more affected, when the secondary task was introduced later in the retention interval. The findings suggest that working memory traces are maintained in a modality-specific format characterized by distinct consolidation processes that take longer after kinesthetic than after visual encoding.
90
Yau JM, Weber AI, Bensmaia SJ. Separate mechanisms for audio-tactile pitch and loudness interactions. Front Psychol 2010; 1:160. [PMID: 21887147] [PMCID: PMC3157934] [DOI: 10.3389/fpsyg.2010.00160]
Abstract
A major goal in perceptual neuroscience is to understand how signals from different sensory modalities are combined to produce stable and coherent representations. We previously investigated interactions between audition and touch, motivated by the fact that both modalities are sensitive to environmental oscillations. In our earlier study, we characterized the effect of auditory distractors on tactile frequency and intensity perception. Here, we describe the converse experiments examining the effect of tactile distractors on auditory processing. Because the two studies employ the same psychophysical paradigm, we combined their results for a comprehensive view of how auditory and tactile signals interact and how these interactions depend on the perceptual task. Together, our results show that temporal frequency representations are perceptually linked regardless of the attended modality. In contrast, audio-tactile loudness interactions depend on the attended modality: Tactile distractors influence judgments of auditory intensity, but judgments of tactile intensity are impervious to auditory distraction. Lastly, we show that audio-tactile loudness interactions depend critically on stimulus timing, while pitch interactions do not. These results reveal that auditory and tactile inputs are combined differently depending on the perceptual task. That distinct rules govern the integration of auditory and tactile signals in pitch and loudness perception implies that the two are mediated by separate neural mechanisms. These findings underscore the complexity and specificity of multisensory interactions.
Affiliation(s)
- Jeffrey M Yau, Department of Neurology, Division of Cognitive Neuroscience, Johns Hopkins University School of Medicine, Baltimore, MD, USA

91
Haag S. Effects of vision and haptics on categorizing common objects. Cogn Process 2010; 12:33-9. [PMID: 20721600] [DOI: 10.1007/s10339-010-0369-5]
Abstract
Most research on object recognition and categorization centers on vision. However, these phenomena are likely influenced by the commonly used modality of touch. The present study tested this notion by having participants explore three-dimensional objects using vision and haptics in naming and sorting tasks. Results showed greater difficulty naming (recognizing) and sorting (categorizing) objects haptically. For both conditions, error increased from the concrete attribute of size to the more abstract quality of predation, providing behavioral evidence for shared object representation in vision and haptics.
Affiliation(s)
- Susan Haag, Department of Psychology, Arizona State University, Tempe, AZ, USA

92
Kim S, James TW. Enhanced effectiveness in visuo-haptic object-selective brain regions with increasing stimulus salience. Hum Brain Mapp 2010; 31:678-93. [PMID: 19830683] [DOI: 10.1002/hbm.20897]
Abstract
The occipital and parietal lobes contain regions that are recruited for both visual and haptic object processing. The purpose of the present study was to characterize the underlying neural mechanisms for bimodal integration of vision and haptics in these visuo-haptic object-selective brain regions to find out whether these brain regions are sites of neuronal or areal convergence. Our sensory conditions consisted of visual-only (V), haptic-only (H), and visuo-haptic (VH), which allowed us to evaluate integration using the superadditivity metric. We also presented each stimulus condition at two different levels of signal-to-noise ratio or salience. The salience manipulation allowed us to assess integration using the rule of inverse effectiveness. We were able to localize previously described visuo-haptic object-selective regions in the lateral occipital cortex (lateral occipital tactile-visual area) and the intraparietal sulcus, and also localized a new region in the left anterior fusiform gyrus. There was no evidence of superadditivity with the VH stimulus at either level of salience in any of the regions. There was, however, a strong effect of salience on multisensory enhancement: the response to the VH stimulus was more enhanced at higher salience across all regions. In other words, the regions showed enhanced integration of the VH stimulus with increasing effectiveness of the unisensory stimuli. We called the effect "enhanced effectiveness." The presence of enhanced effectiveness in visuo-haptic object-selective brain regions demonstrates neuronal convergence of visual and haptic sensory inputs for the purpose of processing object shape.
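The superadditivity metric and the inverse-effectiveness rule mentioned in this abstract are simple enough to state directly. A schematic with invented response amplitudes (not the study's data); note that these example numbers follow the classic inverse-effectiveness pattern of greater proportional enhancement at low salience, whereas the study itself reports the reverse, "enhanced effectiveness":

```python
# Schematic check of two common fMRI multisensory-integration criteria,
# using invented per-condition response amplitudes (arbitrary units).

def superadditive(v, h, vh):
    """Superadditivity: bimodal response exceeds the sum of the
    unisensory responses (VH > V + H)."""
    return vh > v + h

def multisensory_enhancement(v, h, vh):
    """Enhancement of the bimodal response relative to the stronger
    unisensory response, as a percentage (the quantity compared across
    salience levels when testing inverse effectiveness)."""
    return 100.0 * (vh - max(v, h)) / max(v, h)

# Hypothetical low- vs. high-salience responses for one region.
low  = dict(v=0.2, h=0.3, vh=0.6)
high = dict(v=0.8, h=0.9, vh=1.5)

print(superadditive(**low), superadditive(**high))  # True False
# Inverse effectiveness would predict the low-salience enhancement
# (100%) to exceed the high-salience one (~67%):
print(multisensory_enhancement(**low) > multisensory_enhancement(**high))  # True
```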
Affiliation(s)
- Sunah Kim, Cognitive Science Program, Indiana University, Bloomington, Indiana 47405, USA

93
Naumer MJ, Ratz L, Yalachkov Y, Polony A, Doehrmann O, Van De Ven V, Müller NG, Kaiser J, Hein G. Visuohaptic convergence in a corticocerebellar network. Eur J Neurosci 2010; 31:1730-6. [DOI: 10.1111/j.1460-9568.2010.07208.x]
94
Deshpande G, Hu X, Lacey S, Stilla R, Sathian K. Object familiarity modulates effective connectivity during haptic shape perception. Neuroimage 2010; 49:1991-2000. [PMID: 19732841] [PMCID: PMC3073838] [DOI: 10.1016/j.neuroimage.2009.08.052]
Abstract
In the preceding paper (Lacey, S., Flueckiger, P., Stilla, R., Lava, M., Sathian, K., 2009a. Object familiarity modulates involvement of visual imagery in haptic shape perception), we showed that the activations evoked by visual imagery overlapped more extensively, and their magnitudes were more correlated, with those evoked during haptic shape perception of familiar, compared to unfamiliar, objects. Here we used task-specific analyses of functional and effective connectivity to provide convergent evidence. These analyses showed that the visual imagery and familiar haptic shape tasks activated similar networks, whereas the unfamiliar haptic shape task activated a different network. Multivariate Granger causality analyses of effective connectivity, in both a conventional form and one purged of zero-lag correlations, showed that the visual imagery and familiar haptic shape networks involved top-down paths from prefrontal cortex into the lateral occipital complex (LOC), whereas the unfamiliar haptic shape network was characterized by bottom-up, somatosensory inputs into the LOC. We conclude that shape representations in the LOC are flexibly accessible, either top-down or bottom-up, according to task demands, and that visual imagery is more involved in LOC activation during haptic shape perception when objects are familiar, compared to unfamiliar.
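Bivariate Granger causality, the simplest relative of the multivariate analyses used in this entry, reduces to comparing nested autoregressions. A minimal numpy sketch on synthetic time series (illustrative only; the paper uses a multivariate variant purged of zero-lag correlations):

```python
import numpy as np

def granger_ratio(x, y, lag=2):
    """Ratio of residual sums of squares for predicting x from its own
    past alone vs. its own past plus the past of y; values well above 1
    suggest that y Granger-causes x."""
    n = len(x)
    own = np.array([x[t - lag:t] for t in range(lag, n)])
    full = np.array([np.r_[x[t - lag:t], y[t - lag:t]] for t in range(lag, n)])
    target = x[lag:]
    def rss(A):
        A1 = np.column_stack([A, np.ones(len(A))])        # add intercept
        coef, *_ = np.linalg.lstsq(A1, target, rcond=None)
        return np.sum((target - A1 @ coef) ** 2)
    return rss(own) / rss(full)

# Synthetic system in which y drives x with a one-sample delay.
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.6 * y[t - 1] + 0.1 * rng.standard_normal()

# The y -> x direction should yield a much larger ratio than x -> y.
print(granger_ratio(x, y) > granger_ratio(y, x))  # True
```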
Affiliation(s)
- Gopikrishna Deshpande, Coulter Department of Biomedical Engineering, Emory University & Georgia Institute of Technology, Atlanta, GA, USA
- Xiaoping Hu, Coulter Department of Biomedical Engineering, Emory University & Georgia Institute of Technology, Atlanta, GA, USA
- Simon Lacey, Department of Neurology, Emory University, Atlanta, GA, USA
- Randall Stilla, Department of Neurology, Emory University, Atlanta, GA, USA
- K. Sathian, Departments of Neurology, Rehabilitation Medicine, and Psychology, Emory University, Atlanta, GA, USA; Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA

95
Chan JS, Simões-Franklin C, Garavan H, Newell FN. Static images of novel, moveable objects learned through touch activate visual area hMT+. Neuroimage 2010; 49:1708-16. [DOI: 10.1016/j.neuroimage.2009.09.068]
96
Lacey S, Flueckiger P, Stilla R, Lava M, Sathian K. Object familiarity modulates the relationship between visual object imagery and haptic shape perception. Neuroimage 2009; 49:1977-90. [PMID: 19896540] [DOI: 10.1016/j.neuroimage.2009.10.081]
Abstract
Although visual cortical engagement in haptic shape perception is well established, its relationship with visual imagery remains controversial. We addressed this using functional magnetic resonance imaging during separate visual object imagery and haptic shape perception tasks. Two experiments were conducted. In the first experiment, the haptic shape task employed unfamiliar, meaningless objects, whereas familiar objects were used in the second experiment. The activations evoked by visual object imagery overlapped more extensively, and their magnitudes were more correlated, with those evoked during haptic shape perception of familiar, compared to unfamiliar, objects. In the companion paper (Deshpande et al., this issue), we used task-specific functional and effective connectivity analyses to provide convergent evidence: these analyses showed that the neural networks underlying visual imagery were similar to those underlying haptic shape perception of familiar, but not unfamiliar, objects. We conclude that visual object imagery is more closely linked to haptic shape perception when objects are familiar, compared to when they are unfamiliar.
Affiliation(s)
- Simon Lacey, Department of Neurology, Emory University, Atlanta, GA 30322, USA