1. Borra E, Gerbella M, Rozzi S, Luppino G. Neural substrate for the engagement of the ventral visual stream in motor control in the macaque monkey. Cereb Cortex 2024; 34:bhae354. PMID: 39227311. DOI: 10.1093/cercor/bhae354.
Abstract
The present study aimed to describe the cortical connectivity of a sector located in the ventral bank of the superior temporal sulcus in the macaque (intermediate area TEa and TEm [TEa/m]), which appears to represent the major source of output of the ventral visual stream outside the temporal lobe. The retrograde tracer wheat germ agglutinin was injected into the intermediate TEa/m in four macaque monkeys. The results showed that 58-78% of labeled cells were located within ventral visual stream areas other than the TE complex. Outside the ventral visual stream, there were connections with the memory-related medial temporal area 36 and the parahippocampal cortex, with orbitofrontal areas involved in encoding the subjective values of stimuli for action selection, with eye- or hand-movement-related parietal (LIP, AIP, and SII) and prefrontal (12r, 45A, and 45B) areas, and with a hand-related dysgranular insula field. Altogether, these data provide a solid substrate for the engagement of the ventral visual stream in large-scale cortical networks for skeletomotor or oculomotor control. Accordingly, the role of the ventral visual stream could go beyond purely perceptual processes and could also contribute to the neural mechanisms underlying the control of voluntary motor behavior.
Affiliation(s)
- Elena Borra: Dipartimento di Medicina e Chirurgia, Unità di Neuroscienze, Università di Parma, Parma, Italy
- Marzio Gerbella: Dipartimento di Medicina e Chirurgia, Unità di Neuroscienze, Università di Parma, Parma, Italy
- Stefano Rozzi: Dipartimento di Medicina e Chirurgia, Unità di Neuroscienze, Università di Parma, Parma, Italy
- Giuseppe Luppino: Dipartimento di Medicina e Chirurgia, Unità di Neuroscienze, Università di Parma, Parma, Italy
2. Morfoisse T, Herrera Altamira G, Angelini L, Clément G, Beraneck M, McIntyre J, Tagliabue M. Modality-Independent Effect of Gravity in Shaping the Internal Representation of 3D Space for Visual and Haptic Object Perception. J Neurosci 2024; 44:e2457202023. PMID: 38267257; PMCID: PMC10977025. DOI: 10.1523/jneurosci.2457-20.2023.
Abstract
Visual and haptic perceptions of 3D shape are plagued by distortions, which are influenced by nonvisual factors, such as gravitational vestibular signals. Whether gravity acts directly on the visual or haptic systems or at a higher, modality-independent level of information processing remains unknown. To test these hypotheses, we examined visual and haptic 3D shape perception by asking male and female human subjects to perform a "squaring" task in upright and supine postures and in microgravity. Subjects adjusted one edge of a 3D object to match the length of another in each of the three canonical reference planes, and we recorded the matching errors to obtain a characterization of the perceived 3D shape. The results show opposing, body-centered patterns of errors for visual and haptic modalities, whose amplitudes are negatively correlated, suggesting that they arise in distinct, modality-specific representations that are nevertheless linked at some level. On the other hand, weightlessness significantly modulated both visual and haptic perceptual distortions in the same way, indicating a common, modality-independent origin for gravity's effects. Overall, our findings show a link between modality-specific visual and haptic perceptual distortions and demonstrate a role of gravity-related signals on a modality-independent internal representation of the body and peripersonal 3D space used to interpret incoming sensory inputs.
Affiliation(s)
- Theo Morfoisse: Université Paris Cité, CNRS UMR 8002, INCC - Integrative Neuroscience and Cognition Center, Paris F-75006, France
- Gabriela Herrera Altamira: Université Paris Cité, CNRS UMR 8002, INCC - Integrative Neuroscience and Cognition Center, Paris F-75006, France
- Leonardo Angelini: HumanTech Institute, University of Applied Sciences Western Switzerland//HES-SO, Fribourg 1700, Switzerland; School of Management Fribourg, University of Applied Sciences Western Switzerland//HES-SO, Fribourg 1700, Switzerland
- Gilles Clément: Université de Caen Normandie, Inserm, COMETE U1075, CYCERON, CHU de Caen, Normandie Univ, Caen 14000, France
- Mathieu Beraneck: Université Paris Cité, CNRS UMR 8002, INCC - Integrative Neuroscience and Cognition Center, Paris F-75006, France
- Joseph McIntyre: Tecnalia, Basque Research and Technology Alliance, San Sebastian 20009, Spain; Ikerbasque, Basque Foundation for Science, Bilbao 48009, Spain
- Michele Tagliabue: Université Paris Cité, CNRS UMR 8002, INCC - Integrative Neuroscience and Cognition Center, Paris F-75006, France
3. Tivadar RI, Franceschiello B, Minier A, Murray MM. Learning and navigating digitally rendered haptic spatial layouts. NPJ Sci Learn 2023; 8:61. PMID: 38102127; PMCID: PMC10724186. DOI: 10.1038/s41539-023-00208-4.
Abstract
Learning spatial layouts and navigating through them rely not simply on sight but rather on multisensory processes, including touch. Digital haptics based on ultrasounds are effective for creating and manipulating mental images of individual objects in sighted and visually impaired participants. Here, we tested if this extends to scenes and navigation within them. Using only tactile stimuli conveyed via ultrasonic feedback on a digital touchscreen (i.e., a digital interactive map), 25 sighted, blindfolded participants first learned the basic layout of an apartment based on digital haptics only and then one of two trajectories through it. While still blindfolded, participants successfully reconstructed the haptically learned 2D spaces and navigated these spaces. Digital haptics were thus an effective means, on the one hand, to learn 2D images and translate them into 3D reconstructions of layouts and, on the other hand, to guide navigation within real spaces. Digital haptics based on ultrasounds represent an alternative learning tool for complex scenes as well as for successful navigation in previously unfamiliar layouts, which can likely be further applied in the rehabilitation of spatial functions and mitigation of visual impairments.
Affiliation(s)
- Ruxandra I Tivadar: The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland; Centre for Integrative and Complementary Medicine, Department of Anesthesiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Cognitive Computational Neuroscience Group, Institute for Computer Science, University of Bern, Bern, Switzerland; The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
- Benedetta Franceschiello: The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; The Sense Innovation and Research Center, Lausanne and Sion, Switzerland; Institute of Systems Engineering, School of Engineering, University of Applied Sciences Western Switzerland (HES-SO Valais), Sion, Switzerland
- Astrid Minier: The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Micah M Murray: The Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland; The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
4. Xu Y, Vignali L, Sigismondi F, Crepaldi D, Bottini R, Collignon O. Similar object shape representation encoded in the inferolateral occipitotemporal cortex of sighted and early blind people. PLoS Biol 2023; 21:e3001930. PMID: 37490508; PMCID: PMC10368275. DOI: 10.1371/journal.pbio.3001930.
Abstract
We can sense an object's shape by vision or touch. Previous studies suggested that the inferolateral occipitotemporal cortex (ILOTC) implements supramodal shape representations as it responds more to seeing or touching objects than shapeless textures. However, such activation in the anterior portion of the ventral visual pathway could be due to the conceptual representation of an object or visual imagery triggered by touching an object. We addressed these possibilities by directly comparing shape and conceptual representations of objects in early blind (who lack visual experience/imagery) and sighted participants. We found that bilateral ILOTC in both groups showed stronger activation during a shape verification task than during a conceptual verification task made on the names of the same manmade objects. Moreover, the distributed activity in the ILOTC encoded shape similarity but not conceptual association among objects. Besides the ILOTC, we also found shape representation in both groups' bilateral ventral premotor cortices and intraparietal sulcus (IPS), a frontoparietal circuit relating to object grasping and haptic processing. In contrast, the conceptual verification task activated both groups' left perisylvian brain network relating to language processing and, interestingly, the cuneus in early blind participants only. The ILOTC had stronger functional connectivity to the frontoparietal circuit than to the left perisylvian network, forming a modular structure specialized in shape representation. Our results conclusively support that the ILOTC selectively implements shape representation independently of visual experience, and this unique functionality likely comes from its privileged connection to the frontoparietal haptic circuit.
Affiliation(s)
- Yangwen Xu: Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Lorenzo Vignali: Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy; International School for Advanced Studies (SISSA), Trieste, Italy
- Davide Crepaldi: International School for Advanced Studies (SISSA), Trieste, Italy
- Roberto Bottini: Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
- Olivier Collignon: Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy; Psychological Sciences Research Institute (IPSY) and Institute of NeuroScience (IoNS), University of Louvain, Louvain-la-Neuve, Belgium; School of Health Sciences, HES-SO Valais-Wallis, The Sense Innovation and Research Center, Lausanne and Sion, Switzerland
5. Camponogara I. The integration of action-oriented multisensory information from target and limb within the movement planning and execution. Neurosci Biobehav Rev 2023; 151:105228. PMID: 37201591. DOI: 10.1016/j.neubiorev.2023.105228.
Abstract
The planning and execution of a grasping or reaching movement toward targets we sense with the other hand require integrating multiple sources of sensory information about the limb performing the movement and the target of the action. In the last two decades, several sensory and motor control theories have thoroughly described how this multisensory-motor integration process occurs. However, even though these theories were very influential in their respective fields, they lack a clear, unified vision of how target-related and movement-related multisensory information is integrated within the action planning and execution phases. This brief review aims to summarize the most influential theories in multisensory integration and sensory-motor control by underscoring their critical points and hidden connections, providing new ideas on the multisensory-motor integration process. Throughout the review, I will propose an alternative view of how the multisensory integration process unfolds across action planning and execution, and I will make several connections with the existing multisensory-motor control theories.
Affiliation(s)
- Ivan Camponogara: Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
6. Yizhar O, Tal Z, Amedi A. Loss of action-related function and connectivity in the blind extrastriate body area. Front Neurosci 2023; 17:973525. PMID: 36968509; PMCID: PMC10035577. DOI: 10.3389/fnins.2023.973525.
Abstract
The Extrastriate Body Area (EBA) participates in the visual perception and motor actions of body parts. We recently showed that EBA’s perceptual function develops independently of visual experience, responding to stimuli with body-part information in a supramodal fashion. However, it is still unclear if the EBA similarly maintains its action-related function. Here, we used fMRI to study motor-evoked responses and connectivity patterns in the congenitally blind brain. We found that, unlike the case of perception, EBA does not develop an action-related response without visual experience. In addition, we show that congenital blindness alters EBA’s connectivity profile in a counterintuitive way—functional connectivity with sensorimotor cortices dramatically decreases, whereas connectivity with perception-related visual occipital cortices remains high. To the best of our knowledge, we show for the first time that action-related functions and connectivity in the visual cortex could be contingent on visuomotor experience. We further discuss the role of the EBA within the context of visuomotor control and predictive coding theory.
Affiliation(s)
- Or Yizhar (corresponding author): Department of Cognitive and Brain Sciences, The Hebrew University of Jerusalem, Jerusalem, Israel; Ivcher School of Psychology, The Institute for Brain, Mind and Technology, Reichman University, Herzliya, Israel; Research Group Adaptive Memory and Decision Making, Max Planck Institute for Human Development, Berlin, Germany
- Zohar Tal: Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Amir Amedi: Ivcher School of Psychology, The Institute for Brain, Mind and Technology, Reichman University, Herzliya, Israel; The Ruth & Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel
7. Bernard-Espina J, Dal Canto D, Beraneck M, McIntyre J, Tagliabue M. How Tilting the Head Interferes With Eye-Hand Coordination: The Role of Gravity in Visuo-Proprioceptive, Cross-Modal Sensory Transformations. Front Integr Neurosci 2022; 16:788905. PMID: 35359704; PMCID: PMC8961421. DOI: 10.3389/fnint.2022.788905.
Abstract
To correctly position the hand with respect to the spatial location and orientation of an object to be reached/grasped, visual information about the target and proprioceptive information from the hand must be compared. Since visual and proprioceptive sensory modalities are inherently encoded in a retinal and musculo-skeletal reference frame, respectively, this comparison requires cross-modal sensory transformations. Previous studies have shown that lateral tilts of the head interfere with the visuo-proprioceptive transformations. It is unclear, however, whether this phenomenon is related to the neck flexion or to the head-gravity misalignment. To answer this question, we performed three virtual reality experiments in which we compared a grasping-like movement with lateral neck flexions executed in an upright seated position and while lying supine. In the main experiment, the task requires cross-modal transformations, because the target information is visually acquired, and the hand is sensed through proprioception only. In the other two control experiments, the task is unimodal, because both target and hand are sensed through one and the same sensory channel (vision and proprioception, respectively), and, hence, cross-modal processing is unnecessary. The results show that lateral neck flexions have considerably different effects in the seated and supine posture, but only for the cross-modal task. More precisely, the subjects' response variability and the importance associated with the visual encoding of the information significantly increased when supine. We show that these findings are consistent with the idea that head-gravity misalignment interferes with the visuo-proprioceptive cross-modal processing. Indeed, the principle of statistical optimality in multisensory integration predicts the observed results if the noise associated with the visuo-proprioceptive transformations is assumed to be affected by gravitational signals, and not by neck proprioceptive signals per se. This finding is also consistent with the observation of otolithic projections in the posterior parietal cortex, which is involved in visuo-proprioceptive processing. Altogether, these findings represent clear evidence of the theorized central role of gravity in spatial perception. More precisely, otolithic signals would contribute to reciprocally aligning the reference frames in which the available sensory information can be encoded.
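The statistical-optimality argument invoked in this abstract is the standard maximum-likelihood (inverse-variance) cue-combination rule. The sketch below illustrates that general rule only; it is not the authors' model, and all numerical values, including the transformation-noise term standing in for gravity-dependent cross-modal noise and its assignment to the visually encoded cue, are hypothetical choices for illustration.

```python
import numpy as np

def ml_fusion(estimates, variances):
    """Maximum-likelihood fusion: weight each cue by its inverse variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()
    fused = float(np.dot(w, estimates))
    fused_var = 1.0 / np.sum(1.0 / np.asarray(variances, dtype=float))
    return fused, fused_var

# Hypothetical single-trial estimates of a target parameter (e.g., orientation in degrees).
visual, proprioceptive = 10.0, 14.0
var_visual, var_proprio = 4.0, 9.0

# Cross-modal comparison requires a reference-frame transformation whose noise is
# assumed, for this sketch, to grow when the head is misaligned with gravity.
for posture, var_transform in [("upright", 1.0), ("supine", 6.0)]:
    fused, fused_var = ml_fusion([visual, proprioceptive],
                                 [var_visual + var_transform, var_proprio])
    print(f"{posture}: fused estimate = {fused:.2f}, variance = {fused_var:.2f}")
```

The only point carried over from the abstract is qualitative: inflating the cross-modal transformation noise raises the variance of the fused estimate, mirroring the reported increase in response variability when supine.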
Affiliation(s)
- Jules Bernard-Espina: Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, Paris, France
- Daniele Dal Canto: Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, Paris, France
- Mathieu Beraneck: Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, Paris, France
- Joseph McIntyre: Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, Paris, France; Ikerbasque Science Foundation, Bilbao, Spain; TECNALIA, Basque Research and Technology Alliance (BRTA), San Sebastian, Spain
- Michele Tagliabue (corresponding author): Université de Paris, CNRS, Integrative Neuroscience and Cognition Center, Paris, France
8. Alipour A, Beggs JM, Brown JW, James TW. A computational examination of the two-streams hypothesis: which pathway needs a longer memory? Cogn Neurodyn 2022; 16:149-165. PMID: 35126775; PMCID: PMC8807798. DOI: 10.1007/s11571-021-09703-z.
Abstract
The two visual streams hypothesis is a robust example of neural functional specialization that has inspired countless studies over the past four decades. According to one prominent version of the theory, the fundamental goal of the dorsal visual pathway is the transformation of retinal information for visually-guided motor behavior. To that end, the dorsal stream processes input using absolute (or veridical) metrics only when the movement is initiated, necessitating very little, or no, memory. Conversely, because the ventral visual pathway does not involve motor behavior (its output does not influence the real world), the ventral stream processes input using relative (or illusory) metrics and can accumulate or integrate sensory evidence over long time constants, which provides a substantial capacity for memory. In this study, we tested these relations between functional specialization, processing metrics, and memory by training identical recurrent neural networks to perform either a viewpoint-invariant object classification task or an orientation/size determination task. The former task relies on relative metrics, benefits from accumulating sensory evidence, and is usually attributed to the ventral stream. The latter task relies on absolute metrics, can be computed accurately in the moment, and is usually attributed to the dorsal stream. To quantify the amount of memory required for each task, we chose two types of neural network models. Using a long short-term memory (LSTM) recurrent network, we found that viewpoint-invariant object categorization (object task) required a longer memory than orientation/size determination (orientation task). Additionally, to dissect this memory effect, we considered factors that contributed to longer memory in object tasks. First, we used two different sets of objects, one with self-occlusion of features and one without. Second, we defined object classes either strictly by visual feature similarity or (more liberally) by semantic label. The models required greater memory when features were self-occluded and when object classes were defined by visual feature similarity, showing that self-occlusion and visual similarity among object-task samples contribute to the need for a longer memory. The same set of tasks modeled using modified leaky-integrator echo state recurrent networks (LiESN), however, did not replicate the results, except under some conditions. This may be because LiESNs cannot perform fine-grained memory adjustments due to their network-wide memory coefficient and fixed recurrent weights. In sum, the LSTM simulations suggest that longer memory is advantageous for performing viewpoint-invariant object classification (a putative ventral stream function) because it allows for interpolation of features across viewpoints. The results further suggest that orientation/size determination (a putative dorsal stream function) does not benefit from longer memory. These findings are consistent with the two visual streams theory of functional specialization. Supplementary information: The online version contains supplementary material available at 10.1007/s11571-021-09703-z.
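The recurrent-network comparison described above lends itself to a compact illustration. The PyTorch sketch below shows a generic LSTM sequence classifier of the kind the abstract refers to; the feature dimensions, class count, and random inputs are invented, and the memory-probing analyses and the LiESN variant are not reproduced here.

```python
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    """Minimal LSTM classifier over a sequence of per-frame feature vectors."""
    def __init__(self, n_features=128, n_hidden=64, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_classes)

    def forward(self, x):              # x: (batch, time, n_features)
        out, (h_n, _) = self.lstm(x)   # h_n: (1, batch, n_hidden)
        return self.readout(h_n[-1])   # classify from the final hidden state

# Hypothetical batch: 8 viewpoint sequences, 12 frames each, 128 features per frame.
frames = torch.randn(8, 12, 128)
labels = torch.randint(0, 10, (8,))

model = SequenceClassifier()
loss = nn.CrossEntropyLoss()(model(frames), labels)
loss.backward()   # an optimizer step would follow in a real training loop
print(loss.item())
```

In a study like this one, the quantity of interest is not the classifier itself but how far back in the frame sequence its recurrent state must reach to solve the object task versus the orientation/size task.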
Affiliation(s)
- Abolfazl Alipour: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA
- John M Beggs: Program in Neuroscience, Indiana University, Bloomington, IN, USA; Department of Physics, Indiana University, Bloomington, IN, USA
- Joshua W Brown: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Thomas W James: Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA
9. Visual and Tactile Sensory Systems Share Common Features in Object Recognition. eNeuro 2021; 8:ENEURO.0101-21.2021. PMID: 34544756; PMCID: PMC8493885. DOI: 10.1523/eneuro.0101-21.2021.
Abstract
Although we use our visual and tactile sensory systems interchangeably for object recognition on a daily basis, little is known about the mechanism underlying this ability. This study examined how 3D shape features of objects form two congruent and interchangeable visual and tactile perceptual spaces in healthy male and female participants. Since active exploration plays an important role in shape processing, a virtual reality environment was used to visually explore 3D objects called digital embryos without using the tactile sense. In addition, during the tactile procedure, blindfolded participants actively palpated a 3D-printed version of the same objects with both hands. We first demonstrated that the visual and tactile perceptual spaces were highly similar. We then extracted a series of 3D shape features to investigate how visual and tactile exploration can lead to the correct identification of the relationships between objects. The results indicate that both modalities share the same shape features to form highly similar veridical spaces. This finding suggests that visual and tactile systems might apply similar cognitive processes to sensory inputs that enable humans to rely merely on one modality in the absence of another to recognize surrounding objects.
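One generic way to quantify how similar two perceptual spaces are is a Procrustes superimposition of their low-dimensional embeddings. The sketch below uses random stand-in coordinates; it is an illustration of that kind of comparison, not the authors' analysis pipeline, and the object count and embedding dimensionality are assumptions.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)

# Hypothetical 2-D perceptual embeddings of the same 9 objects, e.g., obtained
# from multidimensional scaling of pairwise dissimilarity judgments per modality.
visual_space = rng.normal(size=(9, 2))
tactile_space = visual_space + 0.1 * rng.normal(size=(9, 2))  # similar by construction

# Procrustes superimposition: disparity near 0 means the two spaces agree
# up to translation, rotation, and uniform scaling.
_, _, disparity = procrustes(visual_space, tactile_space)
print(f"Procrustes disparity between modalities: {disparity:.3f}")
```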
10. Heimler B, Behor T, Dehaene S, Izard V, Amedi A. Core knowledge of geometry can develop independently of visual experience. Cognition 2021; 212:104716. PMID: 33895652. DOI: 10.1016/j.cognition.2021.104716.
Abstract
Geometrical intuitions spontaneously drive visuo-spatial reasoning in human adults, children and animals. Is their emergence intrinsically linked to visual experience, or does it reflect a core property of cognition shared across sensory modalities? To address this question, we tested the sensitivity of blind-from-birth adults to geometrical invariants using a haptic deviant-figure detection task. Blind participants spontaneously used many geometric concepts such as parallelism, right angles and geometrical shapes to detect intruders in haptic displays, but experienced difficulties with symmetry and complex spatial transformations. Across items, their performance was highly correlated with that of sighted adults performing the same task in touch (blindfolded) and in vision, as well as with the performances of uneducated preschoolers and Amazonian adults. Our results support the existence of an amodal core-system of geometry that arises independently of visual experience. However, performance at selecting geometric intruders was generally higher in the visual compared to the haptic modality, suggesting that sensory-specific spatial experience may play a role in refining the properties of this core-system of geometry.
Affiliation(s)
- Benedetta Heimler: Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel; Center of Advanced Technologies in Rehabilitation (CATR), Sheba Medical Center, Tel Hashomer, Israel
- Tomer Behor: The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
- Stanislas Dehaene: Cognitive Neuroimaging Unit, CEA DSV/I2BM, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin Center, 91191 Gif/Yvette, France; Collège de France, 11 Place Marcelin Berthelot, 75005 Paris, France
- Véronique Izard: Integrative Neuroscience and Cognition Center, Université de Paris, 45 rue des Saints-Pères, 75006 Paris, France; CNRS UMR 8002, 45 rue des Saints-Pères, 75006 Paris, France
- Amir Amedi: Department of Medical Neurobiology, Hebrew University of Jerusalem, Hadassah Ein-Kerem, Jerusalem, Israel; The Baruch Ivcher Institute For Brain, Cognition & Technology, The Baruch Ivcher School of Psychology, Interdisciplinary Center (IDC), Herzeliya, Israel; The Cognitive Science Program, The Hebrew University of Jerusalem, Jerusalem, Israel
11. Rahman MS, Barnes KA, Crommett LE, Tommerdahl M, Yau JM. Auditory and tactile frequency representations are co-embedded in modality-defined cortical sensory systems. Neuroimage 2020; 215:116837. PMID: 32289461; PMCID: PMC7292761. DOI: 10.1016/j.neuroimage.2020.116837.
Abstract
Sensory information is represented and elaborated in hierarchical cortical systems that are thought to be dedicated to individual sensory modalities. This traditional view of sensory cortex organization has been challenged by recent evidence of multimodal responses in primary and association sensory areas. Although it is indisputable that sensory areas respond to multiple modalities, it remains unclear whether these multimodal responses reflect selective information processing for particular stimulus features. Here, we used fMRI adaptation to identify brain regions that are sensitive to the temporal frequency information contained in auditory, tactile, and audiotactile stimulus sequences. A number of brain regions distributed over the parietal and temporal lobes exhibited frequency-selective temporal response modulation for both auditory and tactile stimulus events, as indexed by repetition suppression effects. A smaller set of regions responded to crossmodal adaptation sequences in a frequency-dependent manner. Despite an extensive overlap of multimodal frequency-selective responses across the parietal and temporal lobes, representational similarity analysis revealed a cortical "regional landscape" that clearly reflected distinct somatosensory and auditory processing systems that converged on modality-invariant areas. These structured relationships between brain regions were also evident in spontaneous signal fluctuation patterns measured at rest. Our results reveal that multimodal processing in human cortex can be feature-specific and that multimodal frequency representations are embedded in the intrinsically hierarchical organization of cortical sensory systems.
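Representational similarity analysis of the kind mentioned above can be sketched generically: build a representational dissimilarity matrix (RDM) per region from its condition-wise response patterns, then compare RDMs across regions. The dimensions and data below are invented; this is not the authors' code or their specific region-by-region "landscape" analysis.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

def rdm(patterns):
    """Condition-by-condition representational dissimilarity (correlation distance)."""
    return pdist(patterns, metric="correlation")

# Hypothetical response patterns: 6 stimulus frequencies x 50 voxels per region.
region_a = rng.normal(size=(6, 50))
region_b = region_a + 0.5 * rng.normal(size=(6, 50))   # partially shared geometry

# Second-order comparison: how similar are the two regions' representational geometries?
rho, p = spearmanr(rdm(region_a), rdm(region_b))
print(f"RDM correlation between regions: rho = {rho:.2f} (p = {p:.3f})")
```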
Affiliation(s)
- Md Shoaibur Rahman: Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
- Kelly Anne Barnes: Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA; Department of Behavioral and Social Sciences, San Jacinto College - South, 13735 Beamer Rd, S13.269, Houston, TX 77089, USA
- Lexi E Crommett: Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
- Mark Tommerdahl: Department of Biomedical Engineering, University of North Carolina at Chapel Hill, CB No. 7575, Chapel Hill, NC 27599, USA
- Jeffrey M Yau: Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX 77030, USA
12. Reduced Cerebral Blood Flow in the Visual Cortex and Its Correlation With Glaucomatous Structural Damage to the Retina in Patients With Mild to Moderate Primary Open-angle Glaucoma. J Glaucoma 2019; 27:816-822. PMID: 29952821. DOI: 10.1097/ijg.0000000000001017.
Abstract
PURPOSE: Altered ocular and cerebral vascular autoregulation and vasoreactivity have been demonstrated in patients with primary open-angle glaucoma (POAG). In the present study, we investigated the correlations between reduced cerebral blood flow (CBF) in early and higher-tier visual cortical areas and glaucomatous changes in the retinas of patients with mild to moderate POAG. PATIENTS AND METHODS: Three-dimensional pseudocontinuous arterial spin labelling magnetic resonance imaging at 3 T was performed in 20 normal controls and 15 mild to moderate POAG patients. Regions of interest were selected based on the Population-Average, Landmark- and Surface-based (PALS) atlas of the human cerebral cortex. Arterial spin labelling-measured CBF values were extracted in the early and higher-tier visual cortical areas and were compared between patients and controls using a 2-sample t test. Pearson correlation analyses were used to assess the correlations between reduced CBF and cup-to-disc ratio, retinal nerve fiber layer thickness, and ganglion cell complex thickness. RESULTS: Reduced CBF in early visual cortical areas (V1, V2, and the ventral posterior area) and in the higher-tier visual left lateral occipital cortex was present in mild to moderate POAG patients compared with controls. Furthermore, reduced CBF in the right area V2 and the right ventral posterior area was correlated with cup-to-disc ratio, total ganglion cell complex thickness, and average retinal nerve fiber layer thickness. CONCLUSIONS: The complex pathologic progression of POAG includes abnormal cerebral perfusion within the visual cortex from the mild to moderate disease stages onward. The association of cerebral perfusion changes with alterations of the optic disc and the retina may contribute to the early diagnosis of POAG.
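The statistics named in the methods (a two-sample t test for the group comparison and Pearson correlations between CBF and retinal measures) can be run with SciPy as sketched below; all numbers are random stand-ins, not study data, and the group sizes are simply copied from the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical ASL-measured CBF values (ml/100 g/min) in one visual region.
cbf_controls = rng.normal(55, 6, size=20)
cbf_patients = rng.normal(48, 6, size=15)

# Group comparison: two-sample t test.
t, p = stats.ttest_ind(cbf_patients, cbf_controls)
print(f"t = {t:.2f}, p = {p:.3f}")

# Structure-perfusion association: Pearson correlation between patients' CBF and
# a hypothetical retinal nerve fiber layer thickness measure (micrometers).
rnfl_patients = rng.normal(80, 10, size=15)
r, p_r = stats.pearsonr(cbf_patients, rnfl_patients)
print(f"r = {r:.2f}, p = {p_r:.3f}")
```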
13. O'Dowd A, Cooney SM, McGovern DP, Newell FN. Do synaesthesia and mental imagery tap into similar cross-modal processes? Philos Trans R Soc Lond B Biol Sci 2019; 374:20180359. PMID: 31630660. DOI: 10.1098/rstb.2018.0359.
Abstract
Synaesthesia has previously been linked with imagery abilities, although an understanding of a causal role for mental imagery in broader synaesthetic experiences remains elusive. This can be partly attributed to our relatively poor understanding of imagery in sensory domains beyond vision. Investigations into the neural and behavioural underpinnings of mental imagery have nevertheless identified an important role for imagery in perception, particularly in mediating cross-modal interactions. However, the phenomenology of synaesthesia gives rise to the assumption that associated cross-modal interactions may be encapsulated and specific to synaesthesia. As such, evidence for a link between imagery and perception may not generalize to synaesthesia. Here, we present results that challenge this idea: first, we found enhanced somatosensory imagery evoked by visual stimuli of body parts in mirror-touch synaesthetes, relative to other synaesthetes or controls. Moreover, this enhanced imagery generalized to tactile object properties not directly linked to their synaesthetic associations. Second, we report evidence that concurrent experience evoked in grapheme-colour synaesthesia was sufficient to trigger visual-to-tactile correspondences that are common to all. Together, these findings show that enhanced mental imagery is a consistent hallmark of synaesthesia, and suggest the intriguing possibility that imagery may facilitate the cross-modal interactions that underpin synaesthetic experiences. This article is part of a discussion meeting issue 'Bridging senses: novel insights from synaesthesia'.
Affiliation(s)
- Alan O'Dowd: School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin D02 PN40, Ireland
- Sarah M Cooney: School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin D02 PN40, Ireland
- David P McGovern: School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin D02 PN40, Ireland; School of Psychology, Dublin City University, Dublin D09 W6Y4, Ireland
- Fiona N Newell: School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin D02 PN40, Ireland
14. Borra E, Luppino G. Large-scale temporo–parieto–frontal networks for motor and cognitive motor functions in the primate brain. Cortex 2019; 118:19-37. DOI: 10.1016/j.cortex.2018.09.024.
15. Mueller S, de Haas B, Metzger A, Drewing K, Fiehler K. Neural correlates of top-down modulation of haptic shape versus roughness perception. Hum Brain Mapp 2019; 40:5172-5184. PMID: 31430005; PMCID: PMC6864886. DOI: 10.1002/hbm.24764.
Abstract
Exploring an object's shape by touch also renders information about its surface roughness. It has been suggested that shape and roughness are processed distinctly in the brain, a result based on comparing brain activation when exploring objects that differed in one of these features. To investigate the neural mechanisms of top‐down control on haptic perception of shape and roughness, we presented the same multidimensional objects but varied the relevance of each feature. Specifically, participants explored two objects that varied in shape (oblongness of cuboids) and surface roughness. They either had to compare the shape or the roughness in an alternative‐forced‐choice‐task. Moreover, we examined whether the activation strength of the identified brain regions as measured by functional magnetic resonance imaging (fMRI) can predict the behavioral performance in the haptic discrimination task. We observed a widespread network of activation for shape and roughness perception comprising bilateral precentral and postcentral gyrus, cerebellum, and insula. Task‐relevance of the object's shape increased activation in the right supramarginal gyrus (SMG/BA 40) and the right precentral gyrus (PreCG/BA 44) suggesting that activation in these areas does not merely reflect stimulus‐driven processes, such as exploring shape, but also entails top‐down controlled processes driven by task‐relevance. Moreover, the strength of the SMG/PreCG activation predicted individual performance in the shape but not in the roughness discrimination task. No activation was found for the reversed contrast (roughness > shape). We conclude that macrogeometric properties, such as shape, can be modulated by top‐down mechanisms whereas roughness, a microgeometric feature, seems to be processed automatically.
Affiliation(s)
- Stefanie Mueller: Department of Experimental Psychology, Justus Liebig University, Giessen, Germany; Leibniz Institute of Psychology Information (ZPID), Trier, Germany
- Benjamin de Haas: Department of Experimental Psychology, Justus Liebig University, Giessen, Germany
- Anna Metzger: Department of Experimental Psychology, Justus Liebig University, Giessen, Germany
- Knut Drewing: Department of Experimental Psychology, Justus Liebig University, Giessen, Germany
- Katja Fiehler: Department of Experimental Psychology, Justus Liebig University, Giessen, Germany; Center for Mind, Brain, and Behavior (CMBB), Marburg University and Justus Liebig University, Giessen, Germany
16. Bola Ł, Matuszewski J, Szczepanik M, Droździel D, Sliwinska MW, Paplińska M, Jednoróg K, Szwed M, Marchewka A. Functional hierarchy for tactile processing in the visual cortex of sighted adults. Neuroimage 2019; 202:116084. PMID: 31400530. DOI: 10.1016/j.neuroimage.2019.116084.
Abstract
Perception via different sensory modalities was traditionally believed to be supported by largely separate brain systems. However, a growing number of studies demonstrate that the visual cortices of typical, sighted adults are involved in tactile and auditory perceptual processing. Here, we investigated the spatiotemporal dynamics of the visual cortex's involvement in a complex tactile task: Braille letter recognition. Sighted subjects underwent Braille training and then participated in a transcranial magnetic stimulation (TMS) study in which they tactually identified single Braille letters. During this task, TMS was applied to their left early visual cortex, visual word form area (VWFA), and left early somatosensory cortex at five time windows from 20 to 520 ms following the Braille letter presentation's onset. The subjects' response accuracy decreased when TMS was applied to the early visual cortex at the 120-220 ms time window and when TMS was applied to the VWFA at the 320-420 ms time window. Stimulation of the early somatosensory cortex did not have a time-specific effect on the accuracy of the subjects' Braille letter recognition, but rather caused a general slowdown during this task. Our results indicate that the involvement of sighted people's visual cortices in tactile perception respects the canonical visual hierarchy: the early tactile processing stages involve the early visual cortex, whereas more advanced tactile computations involve high-level visual areas. Our findings are compatible with the metamodal account of brain organization and suggest that the whole visual cortex may potentially support spatial perception in a task-specific, sensory-independent manner.
Affiliation(s)
- Łukasz Bola: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteura Street, 02-093, Warsaw, Poland; Institute of Psychology, Jagiellonian University, 6 Ingardena Street, 30-060, Krakow, Poland
- Jacek Matuszewski: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteura Street, 02-093, Warsaw, Poland
- Michał Szczepanik: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteura Street, 02-093, Warsaw, Poland
- Dawid Droździel: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteura Street, 02-093, Warsaw, Poland
- Małgorzata Paplińska: The Maria Grzegorzewska University, 40 Szczęśliwicka Street, 02-353, Warsaw, Poland
- Katarzyna Jednoróg: Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteura Street, 02-093, Warsaw, Poland
- Marcin Szwed: Institute of Psychology, Jagiellonian University, 6 Ingardena Street, 30-060, Krakow, Poland
- Artur Marchewka: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteura Street, 02-093, Warsaw, Poland
17. Norman JF. The Recognition of Solid Object Shape: The Importance of Inhomogeneity. Iperception 2019; 10:2041669519870553. PMID: 31448073; PMCID: PMC6693026. DOI: 10.1177/2041669519870553.
Abstract
A single experiment evaluated the haptic-visual cross-modal matching of solid object shape. One set of randomly shaped artificial objects was used (sinusoidally modulated spheres, SMS) as well as two sets of naturally shaped objects (bell peppers, Capsicum annuum and sweet potatoes, Ipomoea batatas). A total of 66 adults participated in the study. The participants' task was to haptically explore a single object on any particular trial and subsequently indicate which of 12 simultaneously visible objects possessed the same shape. The participants' performance for the natural objects was 60.9 and 78.7 percent correct for the bell peppers and sweet potatoes, respectively. The analogous performance for the SMS objects, while better than chance, was far worse (18.6 percent correct). All of these types of stimulus objects possess a rich geometrical structure (e.g., they all possess multiple elliptic, hyperbolic, and parabolic surface regions). Nevertheless, these three types of stimulus objects are perceived differently: Individual members of sweet potatoes and bell peppers are largely identifiable to human participants, while the individual SMS objects are not. Analyses of differential geometry indicate that these natural objects (e.g., bell peppers and sweet potatoes) possess heterogeneous spatial configurations of distinctly curved surface regions, and this heterogeneity is lacking in SMS objects. The current results therefore suggest that increases in surface structure heterogeneity facilitate human object recognition.
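The classification of surface regions as elliptic, hyperbolic, or parabolic used in this abstract follows from the sign of the Gaussian curvature; the textbook formulation is summarized below in standard differential-geometry notation, which is not taken from the paper itself.

```latex
% Gaussian curvature K of a parametric surface r(u,v), expressed through the
% coefficients of its first (E, F, G) and second (L, M, N) fundamental forms:
K = \frac{LN - M^{2}}{EG - F^{2}},
\qquad
\begin{cases}
K > 0 & \text{elliptic point (locally dome- or bowl-like)}\\
K < 0 & \text{hyperbolic point (locally saddle-shaped)}\\
K = 0 & \text{parabolic (or planar) point}
\end{cases}
```

Mapping the sign of K over an object's surface yields the heterogeneous patchwork of distinctly curved regions that the abstract credits with making bell peppers and sweet potatoes individually recognizable.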
Affiliation(s)
- J. Farley Norman: Department of Psychological Sciences, Ogden College of Science and Engineering, Western Kentucky University, Bowling Green, KY, USA
18. Tivadar RI, Rouillard T, Chappaz C, Knebel JF, Turoman N, Anaflous F, Roche J, Matusz PJ, Murray MM. Mental Rotation of Digitally-Rendered Haptic Objects. Front Integr Neurosci 2019; 13:7. PMID: 30930756; PMCID: PMC6427928. DOI: 10.3389/fnint.2019.00007.
Abstract
Sensory substitution is an effective means to rehabilitate many visual functions after visual impairment or blindness. Tactile information, for example, is particularly useful for functions such as reading, mental rotation, shape recognition, or exploration of space. Extant haptic technologies typically rely on real physical objects or pneumatically driven renderings and thus provide a limited library of stimuli to users. New developments in digital haptic technologies now make it possible to actively simulate an unprecedented range of tactile sensations. We provide a proof-of-concept for a new type of technology (hereafter haptic tablet) that renders haptic feedback by modulating the friction of a flat screen through ultrasonic vibrations of varying shapes to create the sensation of texture when the screen is actively explored. We reasoned that participants should be able to create mental representations of letters presented in normal and mirror-reversed haptic form without the use of any visual information and to manipulate such representations in a mental rotation task. Healthy sighted, blindfolded volunteers were trained to discriminate between two letters (either L and P, or F and G; counterbalanced across participants) on a haptic tablet. They then tactually explored all four letters in normal or mirror-reversed form at different rotations (0°, 90°, 180°, and 270°) and indicated letter form (i.e., normal or mirror-reversed) by pressing one of two mouse buttons. We observed the typical effect of rotation angle on object discrimination performance (i.e., greater deviation from 0° resulted in worse performance) for trained letters, consistent with mental rotation of these haptically-rendered objects. We likewise observed generally slower and less accurate performance with mirror-reversed compared to prototypically oriented stimuli. Our findings extend existing research in multisensory object recognition by indicating that a new technology simulating active haptic feedback can support the generation and spatial manipulation of mental representations of objects. Thus, such haptic tablets can offer a new avenue to mitigate visual impairments and train skills dependent on mental object-based representations and their spatial manipulation.
Affiliation(s)
- Ruxandra I. Tivadar: The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Jean-François Knebel: The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland
- Nora Turoman: The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Fatima Anaflous: Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Jean Roche: Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Pawel J. Matusz: The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- Micah M. Murray: The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland; Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
19. Cohen ZZ, Arend I, Yuen K, Naparstek S, Gliksman Y, Veksler R, Henik A. Tactile enumeration: A case study of acalculia. Brain Cogn 2018; 127:60-71. PMID: 30340181. DOI: 10.1016/j.bandc.2018.10.001.
Abstract
Enumeration is one of the building blocks of arithmetic, and fingers are used as a counting tool in the early steps. Subitizing, the fast and accurate enumeration of small quantities, has been vastly studied in the visual modality, but less so in the tactile modality. We explored tactile enumeration using fingers, and gray matter (GM) changes using voxel-based morphometry (VBM), in acalculia. We examined JD, a 22-year-old female with acalculia following a stroke to the left inferior parietal cortex. JD and a neurologically healthy normal comparison (NC) group reported how many fingers were stimulated. JD was tested at several time points, including the acute and chronic phases. Using the sensory-intact hand for tactile enumeration, JD showed a deficit in the acute phase, compared to the NC group, and improvement in the chronic phase in (1) the RT slope of enumerating up to four stimuli, (2) enumerating neighboring fingers, and (3) arithmetic fluency performance. Moreover, VBM analysis showed a larger GM volume for JD relative to the NC group in the right middle occipital cortex, most profoundly in the chronic phase. JD's performance serves as a first glance at tactile enumeration in acalculia. Pattern-recognition-based results support the suggestion that subitizing is the enumeration process when using one hand. Moreover, the increase in GM in the occipital cortex lays the groundwork for studying the innate and primitive ability to perceive and evaluate sizes or amounts (the "sense of magnitude") as a multisensory magnitude area and as part of a recovery path for deficits in basic numerical abilities.
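The "RT slope" used as an outcome measure is conventionally the slope of a linear fit of response time against the number of stimulated fingers. The sketch below shows that computation on invented values; these are not the patient's data.

```python
import numpy as np

# Hypothetical mean response times (ms) for enumerating 1-4 stimulated fingers.
numerosity = np.array([1, 2, 3, 4])
mean_rt = np.array([520.0, 580.0, 655.0, 740.0])

# The enumeration slope is the linear coefficient of RT on numerosity; relatively
# flat slopes are conventionally taken to index subitizing, steep slopes serial counting.
slope, intercept = np.polyfit(numerosity, mean_rt, deg=1)
print(f"RT slope: {slope:.1f} ms per additional item (intercept {intercept:.0f} ms)")
```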
Affiliation(s)
- Zahira Z Cohen: Department of Psychology, Ben-Gurion University of the Negev, POB 653, Beer-Sheva, Israel; Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, POB 653, Beer-Sheva, Israel
- Isabel Arend: Department of Psychology, Ben-Gurion University of the Negev, POB 653, Beer-Sheva, Israel; Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, POB 653, Beer-Sheva, Israel
- Kenneth Yuen: Neuroimaging Center (NIC), Focus Program Translational Neuroscience, Johannes Gutenberg University Medical Center, Langenbeckstraße 1, 55131 Mainz, Germany
- Sharon Naparstek: Department of Psychology, Ben-Gurion University of the Negev, POB 653, Beer-Sheva, Israel; Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, POB 653, Beer-Sheva, Israel; Department of Rehabilitation, Soroka University Medical Center, POB 151, Beer-Sheva, Israel
- Yarden Gliksman: Department of Psychology, Ben-Gurion University of the Negev, POB 653, Beer-Sheva, Israel; Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, POB 653, Beer-Sheva, Israel
- Ronel Veksler: Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, POB 653, Beer-Sheva, Israel; Departments of Physiology and Cell Biology & Biomedical Engineering, Faculty of Health Sciences, Ben-Gurion University of the Negev, POB 653, Beer-Sheva, Israel; Department of Radiology, Soroka University Medical Center, POB 151, Beer-Sheva, Israel
- Avishai Henik: Department of Psychology, Ben-Gurion University of the Negev, POB 653, Beer-Sheva, Israel; Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, POB 653, Beer-Sheva, Israel
20. Pundik S, Scoco A, Skelly M, McCabe JP, Daly JJ. Greater Cortical Thickness Is Associated With Enhanced Sensory Function After Arm Rehabilitation in Chronic Stroke. Neurorehabil Neural Repair 2018; 32:590-601. DOI: 10.1177/1545968318778810.
Abstract
Objective. Somatosensory function is critical to normal motor control. After stroke, dysfunction of the sensory systems prevents normal motor function and degrades quality of life. Structural neuroplasticity underpinnings of sensory recovery after stroke are not fully understood. The objective of this study was to identify changes in bilateral cortical thickness (CT) that may drive recovery of sensory acuity. Methods. Chronic stroke survivors (n = 20) were treated with 12 weeks of rehabilitation. Measures were sensory acuity (monofilament), Fugl-Meyer upper limb, and CT change. Permutation-based general linear regression modeling identified cortical regions in which change in CT was associated with change in sensory acuity. Results. For the ipsilesional hemisphere in response to treatment, CT increase was significantly associated with sensory improvement in the area encompassing the occipital pole, lateral occipital cortex (inferior and superior divisions), intracalcarine cortex, cuneal cortex, precuneus cortex, inferior temporal gyrus, occipital fusiform gyrus, supracalcarine cortex, and temporal occipital fusiform cortex. For the contralesional hemisphere, increased CT was associated with improved sensory acuity within the posterior parietal cortex that included supramarginal and angular gyri. Following upper limb therapy, monofilament test score changed from 45.0 ± 13.3 to 42.6 ± 12.9 mm (P = .063) and Fugl-Meyer score changed from 22.1 ± 7.8 to 32.3 ± 10.1 (P < .001). Conclusions. Rehabilitation in the chronic stage after stroke produced structural brain changes that were strongly associated with enhanced sensory acuity. Improved sensory perception was associated with increased CT in bilateral high-order association sensory cortices, reflecting the complex nature of sensory function and recovery in response to rehabilitation.
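The permutation-based regression linking cortical-thickness change to sensory-acuity change can be illustrated generically as below. The real analysis was vertex-wise with multiple-comparison control; this sketch tests a single association using made-up per-subject values.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20

# Hypothetical per-subject changes: cortical thickness (mm) and sensory acuity.
ct_change = rng.normal(0.02, 0.05, size=n)
acuity_change = 2.0 * ct_change + rng.normal(0, 0.05, size=n)

def slope(x, y):
    """Slope of the least-squares linear fit of y on x."""
    return np.polyfit(x, y, deg=1)[0]

observed = slope(ct_change, acuity_change)

# Permutation test: shuffle the pairing to build a null distribution of slopes.
null = np.array([slope(ct_change, rng.permutation(acuity_change))
                 for _ in range(5000)])
p_value = np.mean(np.abs(null) >= np.abs(observed))
print(f"observed slope = {observed:.2f}, permutation p = {p_value:.4f}")
```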
Collapse
Affiliation(s)
- Svetlana Pundik
- Case Western Reserve University, Cleveland, OH, USA
- Cleveland VA Medical Center, Cleveland, OH, USA
| | - Aleka Scoco
- Case Western Reserve University, Cleveland, OH, USA
| | | | | | - Janis J. Daly
- University of Florida, Gainesville, FL, USA
- Gainesville VA Medical Center, Gainesville, FL, USA
| |
Collapse
|
21
|
Scanning movements during haptic search: similarity with fixations during visual search. Behav Brain Sci 2018; 40:e151. [PMID: 29342610 DOI: 10.1017/s0140525x16000212] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Finding relevant objects through vision, or visual search, is a crucial function that has received considerable attention in the literature. After decades of research, data suggest that visual fixations are more crucial to understanding how visual search works than are the attributes of stimuli. This idea receives further support from the field of haptic search.
Collapse
|
22
|
Toprak S, Navarro-Guerrero N, Wermter S. Evaluating Integration Strategies for Visuo-Haptic Object Recognition. Cognit Comput 2017; 10:408-425. [PMID: 29881470 PMCID: PMC5971043 DOI: 10.1007/s12559-017-9536-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2017] [Accepted: 12/05/2017] [Indexed: 11/24/2022]
Abstract
In computational systems for visuo-haptic object recognition, vision and haptics are often modeled as separate processes. But this is far from what really happens in the human brain, where crossmodal as well as multimodal interactions take place between the two sensory modalities. Generally, three main principles can be identified as underlying the processing of visual and haptic object-related stimuli in the brain: (1) hierarchical processing, (2) the divergence of processing onto substreams for object shape and material perception, and (3) the experience-driven self-organization of the integratory neural circuits. The question arises whether an object recognition system can benefit in terms of performance from adopting these brain-inspired processing principles for the integration of visual and haptic inputs. To address this, we compare the integration strategy that incorporates all three principles to the two integration strategies commonly used in the literature. We collected data with a NAO robot enhanced with inexpensive contact microphones as tactile sensors. The results of our experiments involving everyday objects indicate that (1) the contact microphones are a good alternative for capturing tactile information and that (2) organizing the processing of the visual and haptic inputs hierarchically and in two pre-processing streams improves performance. Nevertheless, further research is needed to effectively quantify the role of each identified principle by itself as well as in combination with others.
Collapse
Affiliation(s)
- Sibel Toprak
- Knowledge Technology, Department of Informatics, Universität Hamburg, Vogt-Kölln-Str. 30, 22527 Hamburg, Germany
| | - Nicolás Navarro-Guerrero
- Knowledge Technology, Department of Informatics, Universität Hamburg, Vogt-Kölln-Str. 30, 22527 Hamburg, Germany
| | - Stefan Wermter
- Knowledge Technology, Department of Informatics, Universität Hamburg, Vogt-Kölln-Str. 30, 22527 Hamburg, Germany
| |
Collapse
|
23
|
Functional anatomy of the macaque temporo-parieto-frontal connectivity. Cortex 2017; 97:306-326. [DOI: 10.1016/j.cortex.2016.12.007] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2016] [Revised: 11/21/2016] [Accepted: 12/04/2016] [Indexed: 01/19/2023]
|
24
|
Recruitment of Foveal Retinotopic Cortex During Haptic Exploration of Shapes and Actions in the Dark. J Neurosci 2017; 37:11572-11591. [PMID: 29066555 DOI: 10.1523/jneurosci.2428-16.2017] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2016] [Revised: 10/05/2017] [Indexed: 12/23/2022] Open
Abstract
The role of the early visual cortex and higher-order occipitotemporal cortex has been studied extensively for visual recognition and to a lesser degree for haptic recognition and visually guided actions. Using a slow event-related fMRI experiment, we investigated whether tactile and visual exploration of objects recruit the same "visual" areas (and in the case of visual cortex, the same retinotopic zones) and whether these areas show reactivation during delayed actions in the dark toward haptically explored objects (and if so, whether this reactivation might be due to imagery). We examined activation during visual or haptic exploration of objects and action execution (grasping or reaching) separated by an 18 s delay. Twenty-nine human volunteers (13 females) participated in this study. Participants had their eyes open and fixated on a point in the dark. The objects were placed below the fixation point, and accordingly visual exploration activated the cuneus, which processes retinotopic locations in the lower visual field. Strikingly, the occipital pole (OP), representing foveal locations, showed higher activation for tactile than for visual exploration, although the stimulus was unseen and its location in the visual field was peripheral. Moreover, the lateral occipital tactile-visual area (LOtv) showed comparable activation for tactile and visual exploration. Psychophysiological interaction analysis indicated that the OP showed stronger functional connectivity with the anterior intraparietal sulcus and LOtv during haptic than during visual exploration of shapes in the dark. After the delay, the cuneus, OP, and LOtv showed reactivation that was independent of the sensory modality used to explore the object. These results show not only that haptic actions activate "visual" areas during object touch, but also that this information appears to be used in guiding grasping actions toward targets after a delay. SIGNIFICANCE STATEMENT Visual presentation of an object activates shape-processing areas and retinotopic locations in early visual areas. Moreover, if the object is grasped in the dark after a delay, these areas show "reactivation." Here, we show that these areas are also activated and reactivated for haptic object exploration and haptically guided grasping. Touch-related activity occurs not only in the retinotopic location of the visual stimulus, but also at the occipital pole (OP), corresponding to the foveal representation, even though the stimulus was unseen and located peripherally. That is, the same "visual" regions are implicated in both visual and haptic exploration; however, touch also recruits the high-acuity central representation within early visual areas during both haptic exploration of objects and subsequent actions toward them. Functional connectivity analysis shows that the OP is more strongly connected with ventral and dorsal stream areas when participants explore an object in the dark than when they view it.
Collapse
|
25
|
Lee Masson H, Kang HM, Petit L, Wallraven C. Neuroanatomical correlates of haptic object processing: combined evidence from tractography and functional neuroimaging. Brain Struct Funct 2017; 223:619-633. [PMID: 28905126 DOI: 10.1007/s00429-017-1510-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2017] [Accepted: 09/05/2017] [Indexed: 11/25/2022]
Abstract
Touch delivers a wealth of information from birth, helping infants to acquire knowledge about a variety of important object properties using their hands. Despite the fact that we are touch experts as much as we are visual experts, surprisingly little is known about how our perceptual ability in touch is linked to either functional or structural aspects of the brain. The present study therefore investigates and identifies neuroanatomical correlates of haptic perceptual performance using a novel, multi-modal approach. For this, participants' performance in a difficult shape categorization task was first measured in the haptic domain. Using a multi-modal functional magnetic resonance imaging and diffusion-weighted magnetic resonance imaging analysis pipeline, functionally defined and anatomically constrained white-matter pathways were extracted and their microstructural characteristics correlated with individual variability in haptic categorization performance. Controlling for the effects of age, total intracranial volume, and head movements in the regression model, haptic performance was found to correlate significantly with higher axial diffusivity in the functionally defined superior longitudinal fasciculus (fSLF) linking frontal and parietal areas. These results were further localized to specific sub-parts of the fSLF. Using additional data from a second group of participants, who first learned the categories in the visual domain and then transferred to the haptic domain, haptic performance correlates were obtained in the functionally defined inferior longitudinal fasciculus. Our results implicate the SLF linking frontal and parietal areas as an important white-matter tract for processing touch-specific information during object processing, whereas the ILF relays visually learned information during haptic processing. Taken together, the present results chart for the first time potential neuroanatomical correlates and interactions of touch-related object processing.
Collapse
Affiliation(s)
- Haemy Lee Masson
- Department of Brain and Cognition, KU Leuven, 3000, Louvain, Belgium
| | - Hyeok-Mook Kang
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 136-713, Korea
| | - Laurent Petit
- Groupe d'Imagerie Neurofonctionnelle, Institut Des Maladies Neurodégénératives, UMR 5293, CNRS, CEA University of Bordeaux, Bordeaux, France
| | - Christian Wallraven
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 136-713, Korea.
| |
Collapse
|
26
|
Borra E, Gerbella M, Rozzi S, Luppino G. The macaque lateral grasping network: A neural substrate for generating purposeful hand actions. Neurosci Biobehav Rev 2017; 75:65-90. [DOI: 10.1016/j.neubiorev.2017.01.017] [Citation(s) in RCA: 61] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2016] [Revised: 12/22/2016] [Accepted: 01/12/2017] [Indexed: 10/20/2022]
|
27
|
Top-down and bottom-up neurodynamic evidence in patients with tinnitus. Hear Res 2016; 342:86-100. [DOI: 10.1016/j.heares.2016.10.002] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/18/2016] [Revised: 09/12/2016] [Accepted: 10/06/2016] [Indexed: 12/14/2022]
|
28
|
Lee Masson H, Wallraven C, Petit L. "Can touch this": Cross-modal shape categorization performance is associated with microstructural characteristics of white matter association pathways. Hum Brain Mapp 2016; 38:842-854. [PMID: 27696592 DOI: 10.1002/hbm.23422] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2015] [Revised: 09/23/2016] [Accepted: 09/25/2016] [Indexed: 11/07/2022] Open
Abstract
Previous studies on visuo-haptic shape processing provide evidence that visually learned shape information can transfer to the haptic domain. In particular, recent neuroimaging studies have shown that visually learned novel objects that were haptically tested recruited parts of the ventral pathway from early visual cortex to the temporal lobe. Interestingly, in such tasks considerable individual variation in cross-modal transfer performance was observed. Here, we investigate whether this individual variation may be reflected in microstructural characteristics of white-matter (WM) pathways. We first trained participants on a fine-grained categorization task of novel shapes in the visual domain, followed by a haptic categorization test. We then correlated visual training performance and haptic test performance, as well as performance on a symbol-coding task requiring visuo-motor dexterity, with microstructural properties of WM bundles potentially involved in visuo-haptic processing (the inferior longitudinal fasciculus [ILF], the fronto-temporal part of the superior longitudinal fasciculus [SLFft], and the vertical occipital fasciculus [VOF]). Behavioral results showed that haptic categorization performance was good on average but exhibited large inter-individual variability. Haptic performance was also correlated with performance in the symbol-coding task. WM analyses showed that fast visual learners exhibited higher fractional anisotropy (FA) in left SLFft and left VOF. Importantly, haptic test performance (and symbol-coding performance) correlated with FA in the ILF and with axial diffusivity in SLFft. These findings provide clear evidence that individual variation in visuo-haptic performance can be linked to microstructural characteristics of WM pathways.
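As a concrete illustration of relating white-matter microstructure to behaviour while controlling for nuisance covariates, as described above, the following sketch residualizes both variables against age, intracranial volume, and head motion and then correlates the residuals. All data and variable names are synthetic assumptions; this is not the study's code.

```python
# Minimal sketch: partial correlation between a diffusion metric (e.g., FA) and
# behavioural performance, controlling for covariates via residualization.
import numpy as np

rng = np.random.default_rng(1)
n = 30
covariates = np.column_stack([
    np.ones(n),                    # intercept
    rng.normal(45, 10, n),         # age (years)
    rng.normal(1500, 100, n),      # total intracranial volume (cm^3)
    rng.normal(0.2, 0.05, n),      # mean head motion (mm)
])
fa = rng.normal(0.45, 0.05, n)                    # tract-averaged FA (toy)
performance = 0.5 * fa + rng.normal(0, 0.05, n)   # haptic categorization accuracy (toy)

def residualize(y, X):
    """Remove the part of y explained by the covariate matrix X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

fa_res = residualize(fa, covariates)
perf_res = residualize(performance, covariates)
partial_r = np.corrcoef(fa_res, perf_res)[0, 1]
print(f"partial correlation (FA vs. performance | covariates) = {partial_r:.3f}")
```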
Collapse
Affiliation(s)
- Haemy Lee Masson
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 136-713, Korea
| | - Christian Wallraven
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 136-713, Korea
| | - Laurent Petit
- Groupe d'Imagerie Neurofonctionnelle, Institut Des Maladies Neurodégénératives - UMR 5293, CNRS, CEA University of Bordeaux, Bordeaux, France
| |
Collapse
|
29
|
Gohel B, Lee P, Jeong Y. Modality-specific spectral dynamics in response to visual and tactile sequential shape information processing tasks: An MEG study using multivariate pattern classification analysis. Brain Res 2016; 1644:39-52. [DOI: 10.1016/j.brainres.2016.04.068] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2015] [Revised: 03/15/2016] [Accepted: 04/28/2016] [Indexed: 11/29/2022]
|
30
|
Sathian K. Analysis of haptic information in the cerebral cortex. J Neurophysiol 2016; 116:1795-1806. [PMID: 27440247 DOI: 10.1152/jn.00546.2015] [Citation(s) in RCA: 58] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2015] [Accepted: 07/20/2016] [Indexed: 11/22/2022] Open
Abstract
Haptic sensing of objects acquires information about a number of properties. This review summarizes current understanding about how these properties are processed in the cerebral cortex of macaques and humans. Nonnoxious somatosensory inputs, after initial processing in primary somatosensory cortex, are partially segregated into different pathways. A ventrally directed pathway carries information about surface texture into parietal opercular cortex and thence to medial occipital cortex. A dorsally directed pathway transmits information regarding the location of features on objects to the intraparietal sulcus and frontal eye fields. Shape processing occurs mainly in the intraparietal sulcus and lateral occipital complex, while orientation processing is distributed across primary somatosensory cortex, the parietal operculum, the anterior intraparietal sulcus, and a parieto-occipital region. For each of these properties, the respective areas outside primary somatosensory cortex also process corresponding visual information and are thus multisensory. Consistent with the distributed neural processing of haptic object properties, tactile spatial acuity depends on interaction between bottom-up tactile inputs and top-down attentional signals in a distributed neural network. Future work should clarify the roles of the various brain regions and how they interact at the network level.
Collapse
Affiliation(s)
- K Sathian
- Departments of Neurology, Rehabilitation Medicine and Psychology, Emory University, Atlanta, Georgia; and Center for Visual and Neurocognitive Rehabilitation, Atlanta Department of Veterans Affairs Medical Center, Decatur, Georgia
| |
Collapse
|
31
|
Yau JM, DeAngelis GC, Angelaki DE. Dissecting neural circuits for multisensory integration and crossmodal processing. Philos Trans R Soc Lond B Biol Sci 2016; 370:20140203. [PMID: 26240418 DOI: 10.1098/rstb.2014.0203] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
We rely on rich and complex sensory information to perceive and understand our environment. Our multisensory experience of the world depends on the brain's remarkable ability to combine signals across sensory systems. Behavioural, neurophysiological and neuroimaging experiments have established principles of multisensory integration and candidate neural mechanisms. Here we review how targeted manipulation of neural activity using invasive and non-invasive neuromodulation techniques has advanced our understanding of multisensory processing. Neuromodulation studies have provided detailed characterizations of brain networks causally involved in multisensory integration. Despite substantial progress, important questions regarding multisensory networks remain unanswered. Critically, experimental approaches will need to be combined with theory in order to understand how distributed activity across multisensory networks collectively supports perception.
Collapse
Affiliation(s)
- Jeffrey M Yau
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
| | - Gregory C DeAngelis
- Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA
| | - Dora E Angelaki
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030, USA
| |
Collapse
|
32
|
Xue Z, Zeng X, Koehl L, Shen L. Interpretation of Fabric Tactile Perceptions through Visual Features for Textile Products. J SENS STUD 2016. [DOI: 10.1111/joss.12201] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Affiliation(s)
- Z. Xue
- Department of Clothing Design and Engineering, School of Textiles and Clothing; Jiangnan University; Wuxi, Jiangsu Province 214122, P.R. China
- Research group of human centered design (HCD), Laboratoire de Génie et Matériaux Textiles (GEMTEX), Ecole Nationale Supérieure des Arts et Industries Textiles (ENSAIT); 2 allée Louise et Victor Champier, BP30329, F-59056 Roubaix Cedex 1 France
| | - X. Zeng
- Research group of human centered design (HCD), Laboratoire de Génie et Matériaux Textiles (GEMTEX), Ecole Nationale Supérieure des Arts et Industries Textiles (ENSAIT); 2 allée Louise et Victor Champier, BP30329, F-59056 Roubaix Cedex 1 France
| | - L. Koehl
- Research group of human centered design (HCD), Laboratoire de Génie et Matériaux Textiles (GEMTEX), Ecole Nationale Supérieure des Arts et Industries Textiles (ENSAIT); 2 allée Louise et Victor Champier, BP30329, F-59056 Roubaix Cedex 1 France
| | - L. Shen
- Department of Clothing Design and Engineering, School of Textiles and Clothing; Jiangnan University; Wuxi, Jiangsu Province 214122, P.R. China
| |
Collapse
|
33
|
Abstract
The visual and haptic perceptual systems are understood to share a common neural representation of object shape. A region thought to be critical for recognizing visual and haptic shape information is the lateral occipital complex (LOC). We investigated whether LOC is essential for haptic shape recognition in humans by studying behavioral responses and brain activation for haptically explored objects in a patient (M.C.) with bilateral lesions of the occipitotemporal cortex, including LOC. Despite severe deficits in recognizing objects using vision, M.C. was able to accurately recognize objects via touch. M.C.'s psychophysical response profile to haptically explored shapes was also indistinguishable from controls. Using fMRI, M.C. showed no object-selective visual or haptic responses in LOC, but her pattern of haptic activation in other brain regions was remarkably similar to healthy controls. Although LOC is routinely active during visual and haptic shape recognition tasks, it is not essential for haptic recognition of object shape. SIGNIFICANCE STATEMENT The lateral occipital complex (LOC) is a brain region regarded as critical for recognizing object shape, both in vision and in touch. However, causal evidence linking LOC with haptic shape processing is lacking. We studied recognition performance, psychophysical sensitivity, and brain response to touched objects in a patient (M.C.) with extensive lesions involving LOC bilaterally. Despite being severely impaired in visual shape recognition, M.C. was able to identify objects via touch and she showed normal sensitivity to a haptic shape illusion. M.C.'s brain response to touched objects in areas of undamaged cortex was also very similar to that observed in neurologically healthy controls. These results demonstrate that LOC is not necessary for recognizing objects via touch.
Collapse
|
34
|
Stone KD, Gonzalez CLR. The contributions of vision and haptics to reaching and grasping. Front Psychol 2015; 6:1403. [PMID: 26441777 PMCID: PMC4584943 DOI: 10.3389/fpsyg.2015.01403] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2015] [Accepted: 09/02/2015] [Indexed: 11/23/2022] Open
Abstract
This review aims to provide a comprehensive outlook on the sensory (visual and haptic) contributions to reaching and grasping. The focus is on studies in developing children, in normal and neuropsychological populations, and in sensory-deprived individuals. Studies have suggested a right-hand/left-hemisphere specialization for visually guided grasping and a left-hand/right-hemisphere specialization for haptically guided object recognition. This poses the interesting possibility that when vision is not available and grasping relies heavily on the haptic system, there is an advantage to using the left hand. We review the evidence for this possibility and dissect the unique contributions of the visual and haptic systems to grasping. We ultimately discuss how the integration of these two sensory modalities shapes hand preference.
Collapse
Affiliation(s)
- Kayla D Stone
- The Brain in Action Laboratory, Department of Kinesiology, University of Lethbridge, Lethbridge AB, Canada
| | - Claudia L R Gonzalez
- The Brain in Action Laboratory, Department of Kinesiology, University of Lethbridge, Lethbridge AB, Canada
| |
Collapse
|
35
|
Jao RJ, James TW, James KH. Crossmodal enhancement in the LOC for visuohaptic object recognition over development. Neuropsychologia 2015; 77:76-89. [PMID: 26272239 DOI: 10.1016/j.neuropsychologia.2015.08.008] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2014] [Revised: 08/05/2015] [Accepted: 08/07/2015] [Indexed: 10/23/2022]
Abstract
Research has provided strong evidence of multisensory convergence of visual and haptic information within the visual cortex. These studies implement crossmodal matching paradigms to examine how systems use information from different sensory modalities for object recognition. Developmentally, behavioral evidence of visuohaptic crossmodal processing has suggested that communication within sensory systems develops earlier than across systems; nonetheless, it is unknown how the neural mechanisms driving these behavioral effects develop. To address this gap in knowledge, BOLD functional Magnetic Resonance Imaging (fMRI) was measured during delayed match-to-sample tasks that examined intramodal (visual-to-visual, haptic-to-haptic) and crossmodal (visual-to-haptic, haptic-to-visual) novel object recognition in children aged 7-8.5 years and adults. Tasks were further divided into sample encoding and test matching phases to dissociate the relative contributions of each. Results of crossmodal and intramodal object recognition revealed the network of known visuohaptic multisensory substrates, including the lateral occipital complex (LOC) and the intraparietal sulcus (IPS). Critically, both adults and children showed crossmodal enhancement within the LOC, suggesting a sensitivity to changes in sensory modality during recognition. These groups showed similar regions of activation, although children generally exhibited more widespread activity during sample encoding and weaker BOLD signal change during test matching than adults. Results further provided evidence of a bilateral region in the occipitotemporal cortex that was haptic-preferring in both age groups. This region abutted the bimodal LOtv, and was consistent with a medial to lateral organization that transitioned from a visual to haptic bias within the LOC. These findings converge with existing evidence of visuohaptic processing in the LOC in adults, and extend our knowledge of crossmodal processing in adults and children.
Collapse
Affiliation(s)
- R Joanne Jao
- Cognitive Science Program, Indiana University, Bloomington, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA.
| | - Thomas W James
- Cognitive Science Program, Indiana University, Bloomington, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA; Program in Neuroscience, Indiana University, Bloomington, USA
| | - Karin Harman James
- Cognitive Science Program, Indiana University, Bloomington, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA; Program in Neuroscience, Indiana University, Bloomington, USA
| |
Collapse
|
36
|
Ortiz Alonso T, Santos JM, Ortiz Terán L, Borrego Hernández M, Poch Broto J, de Erausquin GA. Differences in Early Stages of Tactile ERP Temporal Sequence (P100) in Cortical Organization during Passive Tactile Stimulation in Children with Blindness and Controls. PLoS One 2015. [PMID: 26225827 PMCID: PMC4520520 DOI: 10.1371/journal.pone.0124527] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Compared to their seeing counterparts, people with blindness have a greater tactile capacity. Differences in the physiology of object recognition between people with blindness and seeing people have been well documented, but not when tactile stimuli require semantic processing. We used a passive vibrotactile device to focus on the differences in spatial brain processing evaluated with event-related potentials (ERP) in children with blindness (n = 12) vs. normally seeing children (n = 12), when learning a simple spatial task (lines with different orientations) or a task involving recognition of letters, to describe the early stages of its temporal sequence (from 80 to 220 msec) and to search for evidence of multi-modal cortical organization. We analysed the P100 of the ERP. Children with blindness showed earlier latencies for cognitive (perceptual) event-related potentials, shorter reaction times, and (paradoxically) worse ability to identify the spatial direction of the stimulus. On the other hand, they were equally proficient in recognizing stimuli with semantic content (letters). The latter observation is consistent with the role of the P100 in somatosensory-based recognition of complex forms. The cortical differences between the seeing control and blind groups during spatial tactile discrimination are associated with activation in visual pathway (occipital) and task-related association (temporal and frontal) areas. The present results show that early processing of tactile stimulation conveying cross-modal information differs between children with blindness and children with normal vision.
Collapse
Affiliation(s)
- Tomás Ortiz Alonso
- Department of Psychiatry, Facultad de Medicina, Universidad Complutense, Madrid, Spain
| | - Juan Matías Santos
- Department of Psychology, Universidad de Atacama, Copiapó, Chile and Fundación J Robert Cade/CONICET, Córdoba, Argentina
| | - Laura Ortiz Terán
- Athinoula A Martinos Center, Department of Radiology, Massachusetts General Hospital, Harvard University, Boston, Massachusetts, United States of America
| | | | - Joaquín Poch Broto
- Department of Ear, Nose and Throat (ENT), Hospital Clínico Universitario San Carlos, Universidad Complutense, Madrid, Spain
| | - Gabriel Alejandro de Erausquin
- Center for Neuromodulation and Roskamp Laboratory of Brain Development, Modulation and Repair, Departments of Psychiatry, Neurology and Neurosurgery, University of South Florida, Tampa, Florida, United States of America
| |
Collapse
|
37
|
Lee Masson H, Bulthé J, Op de Beeck HP, Wallraven C. Visual and Haptic Shape Processing in the Human Brain: Unisensory Processing, Multisensory Convergence, and Top-Down Influences. Cereb Cortex 2015. [DOI: 10.1093/cercor/bhv170] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
|
38
|
Gurtubay-Antolin A, Rodriguez-Herreros B, Rodríguez-Fornells A. The speed of object recognition from a haptic glance: event-related potential evidence. J Neurophysiol 2015; 113:3069-75. [PMID: 25744887 PMCID: PMC4455565 DOI: 10.1152/jn.00836.2014] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2014] [Accepted: 02/27/2015] [Indexed: 11/22/2022] Open
Abstract
Recognition of an object usually involves a wide range of sensory inputs. Accumulating evidence shows that the first brain responses associated with the visual discrimination of objects emerge around 150 ms, but fewer studies have been devoted to measuring the first neural signature of haptic recognition. To investigate the speed of haptic processing, we recorded event-related potentials (ERPs) during a shape discrimination task without visual information. After a restricted exploratory procedure, participants (n = 27) were instructed to judge whether the touched object corresponded to an expected object whose name had been previously presented on a screen. We found that any incongruence between the presented word and the shape of the object evoked a frontocentral negativity starting at ∼175 ms. Using source analysis and L2 minimum-norm estimation, the neural sources of this differential activity were located in higher-level somatosensory areas and prefrontal regions involved in error monitoring and cognitive control. Our findings reveal that the somatosensory system is able to complete an amount of haptic processing substantial enough to trigger conflict-related responses in medial and prefrontal cortices in <200 ms. The present results show that our haptic system is a fast recognition device closely interlinked with error- and conflict-monitoring processes.
Collapse
Affiliation(s)
- Ane Gurtubay-Antolin
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Basic Psychology, Campus Bellvitge, University of Barcelona, L'Hospitalet de Llobregat, Barcelona, Spain
| | - Borja Rodriguez-Herreros
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Basic Psychology, Campus Bellvitge, University of Barcelona, L'Hospitalet de Llobregat, Barcelona, Spain
| | - Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Basic Psychology, Campus Bellvitge, University of Barcelona, L'Hospitalet de Llobregat, Barcelona, Spain; and Catalan Institution for Research and Advanced Studies, Barcelona, Spain
| |
Collapse
|
39
|
Abstract
The idea that faces are represented within a structured face space (Valentine, Quarterly Journal of Experimental Psychology, 43:161-204, 1991) has gained considerable experimental support, from both physiological and perceptual studies. Recent work has also shown that faces can even be recognized haptically, that is, from touch alone. Although some evidence favors congruent processing strategies in the visual and haptic processing of faces, the question of how similar the two modalities are in terms of face processing remains open. Here, this question was addressed by asking whether there is evidence for a haptic face space, and if so, how it compares to visual face space. For this, a physical face space was created, consisting of six laser-scanned individual faces, their morphed average, 50%-morphs between two individual faces, as well as 50%-morphs of the individual faces with the average, resulting in a set of 19 faces. Participants then rated either the visual or haptic pairwise similarity of the tangible 3-D face shapes. Multidimensional scaling analyses showed that both modalities extracted perceptual spaces that conformed to critical predictions of the face space framework, hence providing support for similar processing of complex face shapes in haptics and vision. Despite the overall similarities, however, systematic differences also emerged between the visual and haptic data. These differences are discussed in the context of face processing and complex-shape processing in vision and haptics.
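A minimal sketch of the multidimensional scaling step described above: pairwise similarity ratings are converted to dissimilarities and embedded in a low-dimensional perceptual space. The 19-item set size comes from the abstract; the random ratings, 1-7 scale, and two-dimensional embedding are assumptions for illustration only.

```python
# Minimal sketch: recover a perceptual "face space" from pairwise similarity
# ratings with multidimensional scaling (MDS). Synthetic ratings stand in for data.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(2)
n_faces = 19
# Symmetric similarity matrix on an assumed 1-7 scale (diagonal = maximal similarity).
sim = rng.uniform(1, 7, size=(n_faces, n_faces))
sim = (sim + sim.T) / 2
np.fill_diagonal(sim, 7.0)

# Convert similarities to dissimilarities and embed the faces in two dimensions.
dissim = 7.0 - sim
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)
print(coords.shape)   # (19, 2): one point per face in the recovered space
```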
Collapse
|
40
|
Lacey S, Sathian K. Crossmodal and multisensory interactions between vision and touch. Scholarpedia 2015; 10:7957. [PMID: 26783412 DOI: 10.4249/scholarpedia.7957] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023] Open
Affiliation(s)
- Simon Lacey
- Departments of Neurology, Emory University, Atlanta, GA, USA
| | - K Sathian
- Departments of Neurology, Emory University, Atlanta, GA, USA; Rehabilitation Medicine, Emory University, Atlanta, GA, USA; Psychology, Emory University, Atlanta, GA, USA; Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA
| |
Collapse
|
41
|
Pure associative tactile agnosia for the left hand: Clinical and anatomo-functional correlations. Cortex 2014; 58:206-16. [DOI: 10.1016/j.cortex.2014.06.015] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2014] [Revised: 05/13/2014] [Accepted: 06/18/2014] [Indexed: 11/17/2022]
|
42
|
Sensory modality-specific spatio-temporal dynamics in response to counting tasks. Neurosci Lett 2014; 581:20-5. [PMID: 25130313 DOI: 10.1016/j.neulet.2014.08.015] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2014] [Revised: 07/21/2014] [Accepted: 08/06/2014] [Indexed: 11/22/2022]
Abstract
From perception to behavior, the human brain processes information in a flexible and abstract manner independent of the input sensory modality. However, the mechanism of such multisensory neural information processing in the brain remains under debate. Relatedly, studies often aim to investigate whether certain brain regions behave in a modality-specific manner or invariantly. Previous studies regarding multisensory information processing have commonly reported only on the activation of brain regions in response to unimodal or multimodal sensory stimuli. However, less attention has been given to the modality effect on the dynamics of such regions, which could advance our understanding of neuronal information processing. In this study, we investigated whether brain regions show modality-specific or invariant high-temporal dynamics. The electroencephalogram (EEG) was recorded from healthy, normal subjects during beep-, flash- and click-counting tasks, which corresponded to auditory, visual and tactile modalities, respectively. EEG dynamics regarding event-related spectral perturbations (ERSP) in ICA time-series data were compared across the sensory modalities using a multivariate pattern analysis. We found modality-specific EEG dynamics in the prefrontal cortex, whereas the early visual cortex showed both modality-specific and cross-modal dynamics.
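To illustrate the multivariate pattern analysis idea in this abstract, the sketch below trains a cross-validated classifier to decode the stimulated modality from spectral features. The synthetic feature matrix, trial counts, and choice of logistic regression are assumptions standing in for the ERSP features and classifier actually used.

```python
# Minimal sketch: decode sensory modality (auditory / visual / tactile) from
# spectral features with cross-validation. Synthetic data; illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_trials, n_features = 60, 40    # trials per modality, time-frequency features per trial
X = np.vstack([rng.normal(loc=mu, size=(n_trials, n_features))
               for mu in (0.0, 0.3, 0.6)])          # weak class-dependent shift
y = np.repeat(["auditory", "visual", "tactile"], n_trials)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)           # stratified 5-fold accuracy
print(f"mean decoding accuracy = {scores.mean():.2f} (chance ~ 0.33)")
```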
Collapse
|
43
|
Recognizing familiar objects by hand and foot: Haptic shape perception generalizes to inputs from unusual locations and untrained body parts. Atten Percept Psychophys 2014; 76:541-58. [PMID: 24197503 DOI: 10.3758/s13414-013-0559-1] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The limits of generalization of our 3-D shape recognition system to identifying objects by touch were investigated by testing exploration at unusual locations and with untrained effectors. In Experiments 1 and 2, people found identification by hand of real objects, plastic 3-D models of objects, and raised line drawings placed in front of themselves no easier than when exploration was behind their back. Experiment 3 compared one-handed, two-handed, one-footed, and two-footed haptic object recognition of familiar objects. Recognition by foot was slower (7 vs. 13 s) and much less accurate (9% vs. 47% errors) than recognition by either one or both hands. Nevertheless, item difficulty was similar across hand and foot exploration, and there was a strong correlation between an individual's hand and foot performance. Furthermore, foot recognition was better with the largest 20 of the 80 items (32% errors), suggesting that physical limitations hampered exploration by foot. Thus, object recognition by hand generalized efficiently across the spatial location of stimuli, while object recognition by foot seemed surprisingly good given that no prior training was provided. Active touch (haptics) thus efficiently extracts 3-D shape information and accesses stored representations of familiar objects from novel modes of input.
Collapse
|
44
|
Abstract
We investigated whether the relative position of objects and the body would influence haptic recognition. People felt objects on the right or left side of their body midline, using their right hand. Their head was turned towards or away from the object, and they could not see their hands or the object. People were better at naming 2-D raised line drawings and 3-D small-scale models of objects and also real, everyday objects when they looked towards them. However, this head-towards benefit was reliable only when their right hand crossed their body midline to feel objects on their left side. Thus, haptic object recognition was influenced by people's head position, although vision of their hand and the object was blocked. This benefit of turning the head towards the object being explored suggests that proprioceptive and haptic inputs are remapped into an external coordinate system and that this remapping is harder when the body is in an unusual position (with the hand crossing the body midline and the head turned away from the hand). The results indicate that haptic processes align sensory inputs from the hand and head even though either hand-centered or object-centered coordinate systems should suffice for haptic object recognition.
Collapse
|
45
|
Fryer L, Freeman J, Pring L. Touching words is not enough: How visual experience influences haptic–auditory associations in the “Bouba–Kiki” effect. Cognition 2014; 132:164-73. [DOI: 10.1016/j.cognition.2014.03.015] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2013] [Revised: 11/21/2013] [Accepted: 03/31/2014] [Indexed: 10/25/2022]
|
46
|
Lacey S, Sathian K. Visuo-haptic multisensory object recognition, categorization, and representation. Front Psychol 2014; 5:730. [PMID: 25101014 PMCID: PMC4102085 DOI: 10.3389/fpsyg.2014.00730] [Citation(s) in RCA: 60] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2014] [Accepted: 06/23/2014] [Indexed: 12/15/2022] Open
Abstract
Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate and we review several candidate brain regions, previously thought to be specialized for aspects of visual processing, that are now known also to be involved in analogous haptic tasks. Finally, we lay out the evidence for a model of multisensory object recognition in which top-down and bottom-up pathways to the object-selective lateral occipital complex are modulated by object familiarity and individual differences in object and spatial imagery.
Collapse
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
| | - K Sathian
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University School of Medicine, Atlanta, GA, USA; Department of Psychology, Emory University School of Medicine, Atlanta, GA, USA; Rehabilitation Research and Development Center of Excellence, Atlanta Veterans Affairs Medical Center, Decatur, GA, USA
| |
Collapse
|
47
|
Podrebarac SK, Goodale MA, Snow JC. Are visual texture-selective areas recruited during haptic texture discrimination? Neuroimage 2014; 94:129-137. [DOI: 10.1016/j.neuroimage.2014.03.013] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2013] [Revised: 02/02/2014] [Accepted: 03/07/2014] [Indexed: 11/25/2022] Open
|
48
|
Man K, Kaplan J, Damasio H, Damasio A. Neural convergence and divergence in the mammalian cerebral cortex: from experimental neuroanatomy to functional neuroimaging. J Comp Neurol 2014; 521:4097-111. [PMID: 23840023 DOI: 10.1002/cne.23408] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2013] [Revised: 04/30/2013] [Accepted: 06/28/2013] [Indexed: 11/08/2022]
Abstract
A development essential for understanding the neural basis of complex behavior and cognition is the description, during the last quarter of the twentieth century, of detailed patterns of neuronal circuitry in the mammalian cerebral cortex. This effort established that sensory pathways exhibit successive levels of convergence, from the early sensory cortices to sensory-specific and multisensory association cortices, culminating in maximally integrative regions. It was also established that this convergence is reciprocated by successive levels of divergence, from the maximally integrative areas all the way back to the early sensory cortices. This article first provides a brief historical review of these neuroanatomical findings, which were relevant to the study of brain and mind-behavior relationships and to the proposal of heuristic anatomofunctional frameworks. In a second part, the article reviews new evidence that has accumulated from studies of functional neuroimaging, employing both univariate and multivariate analyses, as well as electrophysiology, in humans and other mammals, that the integration of information across the auditory, visual, and somatosensory-motor modalities proceeds in a content-rich manner. Behaviorally and cognitively relevant information is extracted from and conserved across the different modalities, both in higher order association cortices and in early sensory cortices. Such stimulus-specific information is plausibly relayed along the neuroanatomical pathways alluded to above. The evidence reviewed here suggests the need for further in-depth exploration of the intricate connectivity of the mammalian cerebral cortex in experimental neuroanatomical studies.
Collapse
Affiliation(s)
- Kingson Man
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089
| | | | | | | |
Collapse
|
49
|
Lacey S, Stilla R, Sreenivasan K, Deshpande G, Sathian K. Spatial imagery in haptic shape perception. Neuropsychologia 2014; 60:144-58. [PMID: 25017050 DOI: 10.1016/j.neuropsychologia.2014.05.008] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2014] [Revised: 04/27/2014] [Accepted: 05/13/2014] [Indexed: 12/14/2022]
Abstract
We have proposed that haptic activation of the shape-selective lateral occipital complex (LOC) reflects a model of multisensory object representation in which the role of visual imagery is modulated by object familiarity. Supporting this, a previous functional magnetic resonance imaging (fMRI) study from our laboratory used inter-task correlations of blood oxygenation level-dependent (BOLD) signal magnitude and effective connectivity (EC) patterns based on the BOLD signals to show that the neural processes underlying visual object imagery (objIMG) are more similar to those mediating haptic perception of familiar (fHS) than unfamiliar (uHS) shapes. Here we employed fMRI to test a further hypothesis derived from our model, that spatial imagery (spIMG) would evoke activation and effective connectivity patterns more related to uHS than fHS. We found that few of the regions conjointly activated by spIMG and either fHS or uHS showed inter-task correlations of BOLD signal magnitudes, with parietal foci featuring in both sets of correlations. This may indicate some involvement of spIMG in HS regardless of object familiarity, contrary to our hypothesis, although we cannot rule out alternative explanations for the commonalities between the networks, such as generic imagery or spatial processes. EC analyses, based on inferred neuronal time series obtained by deconvolution of the hemodynamic response function from the measured BOLD time series, showed that spIMG shared more common paths with uHS than fHS. Re-analysis of our previous data, using the same EC methods as those used here, showed that, by contrast, objIMG shared more common paths with fHS than uHS. Thus, although our model requires some refinement, its basic architecture is supported: a stronger relationship between spIMG and uHS compared to fHS, and a stronger relationship between objIMG and fHS compared to uHS.
Collapse
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University, Atlanta, GA, USA
| | - Randall Stilla
- Department of Neurology, Emory University, Atlanta, GA, USA
| | - Karthik Sreenivasan
- AU MRI Research Center, Department of Electrical & Computer Engineering, Auburn University, Auburn, AL, USA
| | - Gopikrishna Deshpande
- AU MRI Research Center, Department of Electrical & Computer Engineering, Auburn University, Auburn, AL, USA; Department of Psychology, Auburn University, Auburn, AL, USA
| | - K Sathian
- Department of Neurology, Emory University, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University, Atlanta, GA, USA; Department of Psychology, Emory University, Atlanta, GA, USA; Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA.
| |
Collapse
|
50
|
Schmidt TT, Ostwald D, Blankenburg F. Imaging tactile imagery: changes in brain connectivity support perceptual grounding of mental images in primary sensory cortices. Neuroimage 2014; 98:216-24. [PMID: 24836010 DOI: 10.1016/j.neuroimage.2014.05.014] [Citation(s) in RCA: 58] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2013] [Revised: 05/05/2014] [Accepted: 05/06/2014] [Indexed: 12/31/2022] Open
Abstract
Constructing mental representations in the absence of sensory stimulation is a fundamental ability of the human mind and has been investigated in numerous brain imaging studies. However, it is still unclear how brain areas facilitating mental construction processes interact with brain regions related to specific sensory representations. In this fMRI study, subjects formed mental representations of tactile stimuli either from memory (imagery) or from presentation of actual corresponding vibrotactile patterned stimuli. First, our analysis addressed the question of whether tactile imagery recruits primary somatosensory cortex (SI), because the activation of early perceptual areas is classically interpreted as perceptual grounding of the mental image. We also tested whether a network referred to as the 'core construction system' is involved in the generation of mental representations in the somatosensory domain. In fact, we observed imagery-induced activation of SI. We further found support for the notion of a modality-independent construction network with the retrosplenial cortices and the precuneus as core components, supplemented by the left inferior frontal gyrus (IFG). Finally, psychophysiological interaction (PPI) analyses revealed robust imagery-modulated changes in the connectivity of these construction-related areas, which suggests that they orchestrate the assembly of an abstract mental representation. Interestingly, we found increased coupling between prefrontal cortex (left IFG) and SI during mental imagery, indicating the augmentation of an abstract mental representation by reactivating perceptually grounded sensory details.
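The sketch below illustrates, on synthetic signals, how a psychophysiological interaction (PPI) regressor is commonly constructed: the product of a mean-centred seed time series and a task regressor, entered into a GLM together with both main effects. The block structure, region labels, and effect sizes are assumptions for illustration, not the study's design or code.

```python
# Minimal sketch of a generic PPI analysis: does coupling between a seed region
# and a target voxel change with the imagery condition? Synthetic signals only.
import numpy as np

rng = np.random.default_rng(4)
n_vols = 200
seed = rng.normal(size=n_vols)                         # seed (e.g., SI) time series (toy)
task = np.tile(np.r_[np.zeros(10), np.ones(10)], 10)   # imagery blocks off/on (toy design)
ppi = (seed - seed.mean()) * (task - task.mean())      # interaction (PPI) regressor

# Toy target signal whose coupling with the seed increases during imagery.
target = 0.8 * ppi + 0.5 * seed + rng.normal(scale=0.5, size=n_vols)

# GLM with intercept, both main effects, and the PPI term.
X = np.column_stack([np.ones(n_vols), seed, task, ppi])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI beta (change in seed-target coupling during imagery) = {beta[3]:.2f}")
```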
Collapse
Affiliation(s)
- Timo Torsten Schmidt
- Neurocomputation and Neuroimaging Unit (NNU), Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany; Max Planck Institute for Human Development, Center for Adaptive Rationality (ARC), 14195 Berlin, Germany.
| | - Dirk Ostwald
- Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany; Max Planck Institute for Human Development, Center for Adaptive Rationality (ARC), 14195 Berlin, Germany
| | - Felix Blankenburg
- Neurocomputation and Neuroimaging Unit (NNU), Department of Education and Psychology, Freie Universität Berlin, 14195 Berlin, Germany; Bernstein Center for Computational Neuroscience, 10115 Berlin, Germany; Max Planck Institute for Human Development, Center for Adaptive Rationality (ARC), 14195 Berlin, Germany
| |
Collapse
|