1
Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024; 66:101360. PMID: 38394708; PMCID: PMC10899073; DOI: 10.1016/j.dcn.2024.101360.
Abstract
How rigidly does innate architecture constrain the function of developing cortex? What is the contribution of early experience? We review insights into these questions from studies of visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What this 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher-cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other hand, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting that visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose a connectivity-constrained, experience-dependent account: functional development is constrained by innate anatomical connectivity, experience, and behavioral needs. Infant cortex is pluripotent; the same anatomical constraints can develop into different functional outcomes.
Affiliation(s)
- Elizabeth J Saccone, Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Mengyu Tian, Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China
- Marina Bedny, Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
2
Dai R, Huang Z, Weng X, He S. Early visual exposure primes future cross-modal specialization of the fusiform face area in tactile face processing in the blind. Neuroimage 2022; 253:119062. PMID: 35263666; DOI: 10.1016/j.neuroimage.2022.119062.
Abstract
The fusiform face area (FFA) is a core cortical region for face information processing. Evidence suggests that its sensitivity to faces is largely innate and tuned by visual experience. However, how experience in different time windows shapes the plasticity of the FFA remains unclear. In this study, we investigated the role of visual experience at different time points of an individual's early development in the cross-modal face specialization of the FFA. Participants (n = 74) were classified into five groups: congenitally blind, early blind, late blind, low vision, and sighted control. Functional magnetic resonance imaging data were acquired while the participants haptically processed carved faces and other objects. Our results showed robust and highly consistent face-selective activation in the FFA region in the early blind participants, invariant to the size and level of abstraction of the face stimuli. The cross-modal face activation in the FFA was much less consistent in the other groups. These results suggest that early visual experience primes cross-modal specialization of the FFA: even after the absence of visual experience for more than 14 years, the FFA of early blind participants can still engage in cross-modal processing of face information.
Affiliation(s)
- Rui Dai, State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China
- Zirui Huang, Center for Consciousness Science, Department of Anesthesiology, University of Michigan Medical School, Ann Arbor, MI 48109, USA
- Xuchu Weng, Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou 510631, China
- Sheng He, State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai 200031, China; University of Chinese Academy of Sciences, Beijing 100049, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
3
Slivkoff S, Gallant JL. Design of complex neuroscience experiments using mixed-integer linear programming. Neuron 2021; 109:1433-1448. PMID: 33689687; DOI: 10.1016/j.neuron.2021.02.019.
Abstract
Over the past few decades, neuroscience experiments have become increasingly complex and naturalistic. Experimental design has in turn become more challenging, as experiments must conform to an ever-increasing diversity of design constraints. In this article, we demonstrate how this design process can be greatly assisted using an optimization tool known as mixed-integer linear programming (MILP). MILP provides a rich framework for incorporating many types of real-world design constraints into a neuroscience experiment. We introduce the mathematical foundations of MILP, compare MILP to other experimental design techniques, and provide four case studies of how MILP can be used to solve complex experimental design challenges.
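As a concrete illustration of the approach this abstract describes, a toy stimulus-selection problem can be posed as a MILP. The sketch below uses SciPy's `milp` solver; the scores, categories, and constraints are invented for illustration and are not taken from the paper:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Hypothetical design problem: from 8 candidate stimuli, select exactly 4
# that maximize a quality score, subject to a balance constraint of
# exactly 2 stimuli per category (0 = face, 1 = object).
rng = np.random.default_rng(0)
scores = rng.random(8)
category = np.array([0, 0, 0, 0, 1, 1, 1, 1])

c = -scores  # milp minimizes, so negate the scores to maximize them
constraints = [
    LinearConstraint(np.ones((1, 8)), lb=4, ub=4),   # select exactly 4
    LinearConstraint((category == 0).astype(float).reshape(1, 8),
                     lb=2, ub=2),                    # exactly 2 faces
]
res = milp(c=c, constraints=constraints,
           integrality=np.ones(8),  # all decision variables are integers
           bounds=Bounds(0, 1))     # ...restricted to {0, 1}
selected = np.flatnonzero(res.x > 0.5)
print(sorted(selected.tolist()))
```

Real designs simply add more rows to the constraint matrix (ordering, timing, counterbalancing), which is the flexibility the article attributes to MILP.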
Affiliation(s)
- Storm Slivkoff, Department of Bioengineering, University of California, Berkeley, Berkeley, CA 94720, USA
- Jack L Gallant, Department of Bioengineering, University of California, Berkeley, Berkeley, CA 94720, USA; Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
4
Fairhall SL, Porter KB, Bellucci C, Mazzetti M, Cipolli C, Gobbini MI. Plastic reorganization of neural systems for perception of others in the congenitally blind. Neuroimage 2017; 158:126-135. PMID: 28669909; DOI: 10.1016/j.neuroimage.2017.06.057.
Abstract
Recent evidence suggests that the function of the core system for face perception might extend beyond visual face perception to a broader role in person perception. To critically test this broader role, we examined the core system during the perception of others in 7 congenitally blind individuals and 15 sighted subjects, measuring their neural responses with fMRI while they listened to voices and performed identity and emotion recognition tasks. We hypothesised that in people who have had no visual experience of faces, core face-system areas may assume a role in the perception of others via voices. Results showed that emotions conveyed by voices could be decoded in homologues of the core face system only in the blind. Moreover, there was a specific enhancement of the response to verbal as compared to non-verbal stimuli in the bilateral fusiform face areas and the right posterior superior temporal sulcus, showing that the core system also assumes some language-related functions in the blind. These results indicate that, in individuals with no history of visual experience, areas of the core system for face perception may assume a role in aspects of voice perception that are relevant to social cognition and the perception of others' emotions.
Affiliation(s)
- S L Fairhall, Center for Mind/Brain Sciences, University of Trento, Italy
- K B Porter, Department of Psychology, Harvard University, Cambridge, MA, USA
- C Bellucci, Dipartimento di Medicina Specialistica, Diagnostica e Sperimentale (DIMES), Medical School, University of Bologna, Bologna, Italy
- M Mazzetti, Dipartimento di Medicina Specialistica, Diagnostica e Sperimentale (DIMES), Medical School, University of Bologna, Bologna, Italy
- C Cipolli, Dipartimento di Medicina Specialistica, Diagnostica e Sperimentale (DIMES), Medical School, University of Bologna, Bologna, Italy
- M I Gobbini, Dipartimento di Medicina Specialistica, Diagnostica e Sperimentale (DIMES), Medical School, University of Bologna, Bologna, Italy; Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
5
Dai R, Huang Z, Tu H, Wang L, Tanabe S, Weng X, He S, Li D. Interplay between Heightened Temporal Variability of Spontaneous Brain Activity and Task-Evoked Hyperactivation in the Blind. Front Hum Neurosci 2017; 10:632. PMID: 28066206; PMCID: PMC5169068; DOI: 10.3389/fnhum.2016.00632.
Abstract
The brain's functional organization can be altered by visual deprivation, as seen when comparing blind and sighted people's activation during tactile discrimination tasks such as braille reading: the blind show higher activation than the sighted, with an especially large difference in ventral occipitotemporal (vOT) cortex. However, it remains unknown whether this vOT hyperactivation is related to alterations of spontaneous activity. To address this question, we examined 16 blind subjects, 19 low-vision individuals, and 21 normally sighted controls using functional magnetic resonance imaging (fMRI). Subjects were scanned at rest and during a tactile discrimination task. In spontaneous activity, compared with sighted subjects, both blind and low-vision subjects showed increased local signal synchronization and increased temporal variability. During the tactile task, the vOT of blind and low-vision subjects showed stronger task-induced activation than that of sighted subjects. Furthermore, an inter-subject partial correlation analysis showed that temporal variability was more strongly related to tactile-task activation than was local signal synchronization. Our results further support the idea that visual impairment induces vOT cortical reorganization. The hyperactivation in the vOT during tactile stimulus processing in the blind may be related to their greater dynamic range of spontaneous activity.
Affiliation(s)
- Rui Dai, School of Life Science, South China Normal University, Guangzhou, China
- Zirui Huang, Institute of Mental Health Research, University of Ottawa, Ottawa, ON, Canada
- Huihui Tu, Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, China
- Luoyu Wang, Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, China
- Sean Tanabe, Faculty of Science, University of Ottawa, Ottawa, ON, Canada
- Xuchu Weng, Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, China
- Sheng He, State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Dongfeng Li, School of Life Science, South China Normal University, Guangzhou, China
6
Abstract
We examined whether a face-inversion effect occurs when participants explore faces by touch. We used a haptic version of the inversion paradigm with 3-D clay facemasks and non-face control objects (teapots) moulded from real objects. Young, neurologically intact, blindfolded participants performed a temporally unconstrained haptic same/different task in each of four stimulus conditions: upright facemasks, inverted facemasks, upright teapots, and inverted teapots. There was a significant inversion effect for faces in terms of accuracy, but none for teapots. The results are considered in terms of the consequences of sequential manual exploration for haptic face processing.
Affiliation(s)
- Andrea R Kilgour, Department of Clinical Health Psychology, University of Manitoba, Winnipeg, Canada
7
Lederman SJ, Klatzky RL, Abramowicz A, Salsman K, Kitada R, Hamilton C. Haptic Recognition of Static and Dynamic Expressions of Emotion in the Live Face. Psychol Sci 2007; 18:158-64. PMID: 17425537; DOI: 10.1111/j.1467-9280.2007.01866.x.
Abstract
If humans can detect the wealth of tactile and haptic information potentially available in live facial expressions of emotion (FEEs), they should be capable of haptically recognizing the six universal expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) at levels well above chance. We tested this hypothesis in the experiments reported here. With minimal training, subjects' overall mean accuracy was 51% for static FEEs (Experiment 1) and 74% for dynamic FEEs (Experiment 2). All FEEs except static fear were successfully recognized above the chance level of 16.7%. Complementing these findings, overall confidence and information transmission were higher for dynamic than for corresponding static faces. Our performance measures (accuracy and confidence ratings, plus response latency in Experiment 2 only) confirmed that happiness, sadness, and surprise were all highly recognizable, and anger, disgust, and fear less so.
8
Bi Y, Wang X, Caramazza A. Object Domain and Modality in the Ventral Visual Pathway. Trends Cogn Sci 2016; 20:282-290. PMID: 26944219; DOI: 10.1016/j.tics.2016.02.002.
Abstract
The nature of domain-specific organization in higher-order visual cortex (ventral occipital temporal cortex, VOTC) has been investigated both in the case of visual experience deprivation and of modality of stimulation in sighted individuals. Object domain interacts in an intriguing and revelatory way with visual experience and modality of stimulation: selectivity for artifacts and scene domains is largely immune to visual deprivation and is multi-modal, whereas selectivity for animate items in lateral posterior fusiform gyrus is present only with visual stimulation. This domain-by-modality interaction is not readily accommodated by existing theories of VOTC representation. We conjecture that these effects reflect a distinction between the visual features that characterize different object domains and their interaction with different types of downstream computational systems.
Affiliation(s)
- Yanchao Bi, State Key Laboratory of Cognitive Neuroscience and Learning, and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Xiaoying Wang, State Key Laboratory of Cognitive Neuroscience and Learning, and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Alfonso Caramazza, Department of Psychology, Harvard University, Cambridge, MA, USA; Center for Mind/Brain Sciences, University of Trento, Rovereto TN, Italy
9
Jao RJ, James TW, James KH. Crossmodal enhancement in the LOC for visuohaptic object recognition over development. Neuropsychologia 2015; 77:76-89. PMID: 26272239; DOI: 10.1016/j.neuropsychologia.2015.08.008.
Abstract
Research has provided strong evidence of multisensory convergence of visual and haptic information within the visual cortex. These studies implement crossmodal matching paradigms to examine how systems use information from different sensory modalities for object recognition. Developmentally, behavioral evidence of visuohaptic crossmodal processing has suggested that communication within sensory systems develops earlier than across systems; nonetheless, it is unknown how the neural mechanisms driving these behavioral effects develop. To address this gap in knowledge, BOLD functional magnetic resonance imaging (fMRI) was measured during delayed match-to-sample tasks that examined intramodal (visual-to-visual, haptic-to-haptic) and crossmodal (visual-to-haptic, haptic-to-visual) novel object recognition in children aged 7-8.5 years and in adults. Tasks were further divided into sample encoding and test matching phases to dissociate the relative contributions of each. Results of crossmodal and intramodal object recognition revealed the network of known visuohaptic multisensory substrates, including the lateral occipital complex (LOC) and the intraparietal sulcus (IPS). Critically, both adults and children showed crossmodal enhancement within the LOC, suggesting a sensitivity to changes in sensory modality during recognition. These groups showed similar regions of activation, although children generally exhibited more widespread activity during sample encoding and weaker BOLD signal change during test matching than adults. Results further provided evidence of a bilateral region in the occipitotemporal cortex that was haptic-preferring in both age groups. This region abutted the bimodal LOtv, and was consistent with a medial-to-lateral organization that transitioned from a visual to a haptic bias within the LOC. These findings converge with existing evidence of visuohaptic processing in the LOC in adults, and extend our knowledge of crossmodal processing in adults and children.
Affiliation(s)
- R Joanne Jao, Cognitive Science Program, Indiana University, Bloomington, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA
- Thomas W James, Cognitive Science Program, Indiana University, Bloomington, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA; Program in Neuroscience, Indiana University, Bloomington, USA
- Karin Harman James, Cognitive Science Program, Indiana University, Bloomington, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA; Program in Neuroscience, Indiana University, Bloomington, USA
10
Abstract
The idea that faces are represented within a structured face space (Valentine, Quarterly Journal of Experimental Psychology, 43, 161-204, 1991) has gained considerable experimental support from both physiological and perceptual studies. Recent work has also shown that faces can even be recognized haptically, that is, from touch alone. Although some evidence favors congruent processing strategies in the visual and haptic processing of faces, the question of how similar the two modalities are in terms of face processing remains open. Here, this question was addressed by asking whether there is evidence for a haptic face space and, if so, how it compares to visual face space. For this, a physical face space was created, consisting of six laser-scanned individual faces, their morphed average, 50%-morphs between two individual faces, and 50%-morphs of the individual faces with the average, resulting in a set of 19 faces. Participants then rated either the visual or the haptic pairwise similarity of the tangible 3-D face shapes. Multidimensional scaling analyses showed that both modalities extracted perceptual spaces that conformed to critical predictions of the face space framework, hence providing support for similar processing of complex face shapes in haptics and vision. Despite the overall similarities, however, systematic differences also emerged between the visual and haptic data. These differences are discussed in the context of face processing and complex-shape processing in vision and haptics.
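The multidimensional scaling (MDS) step described above can be illustrated generically. The sketch below implements classical (Torgerson) MDS on a toy one-dimensional configuration; it is not the study's analysis pipeline, and the toy points are invented:

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed an n x n dissimilarity matrix D into k dimensions
    via double centering and eigendecomposition (Torgerson MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]       # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy check: distances between collinear points are recovered exactly
# (up to sign and translation).
pts = np.array([[0.0], [1.0], [3.0]])
D = np.abs(pts - pts.T)
X = classical_mds(D, k=1)
error = np.abs(D - np.abs(X - X.T)).max()
print(round(float(error), 6))  # -> 0.0
```

In practice the dissimilarities come from pairwise similarity ratings, and the dimensionality k is chosen by inspecting the eigenvalue spectrum or the stress of the fit.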
11
Lacey S, Sathian K. Visuo-haptic multisensory object recognition, categorization, and representation. Front Psychol 2014; 5:730. PMID: 25101014; PMCID: PMC4102085; DOI: 10.3389/fpsyg.2014.00730.
Abstract
Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate and we review several candidate brain regions, previously thought to be specialized for aspects of visual processing, that are now known also to be involved in analogous haptic tasks. Finally, we lay out the evidence for a model of multisensory object recognition in which top-down and bottom-up pathways to the object-selective lateral occipital complex are modulated by object familiarity and individual differences in object and spatial imagery.
Affiliation(s)
- Simon Lacey, Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- K Sathian, Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University School of Medicine, Atlanta, GA, USA; Department of Psychology, Emory University School of Medicine, Atlanta, GA, USA; Rehabilitation Research and Development Center of Excellence, Atlanta Veterans Affairs Medical Center, Decatur, GA, USA
12
Konkle T, Moore CI. What can crossmodal aftereffects reveal about neural representation and dynamics? Commun Integr Biol 2009; 2:479-81. PMID: 22811763; PMCID: PMC3398893; DOI: 10.4161/cib.2.6.9344.
Abstract
The brain continuously adapts to incoming sensory stimuli, which can lead to perceptual illusions in the form of aftereffects. Recently we demonstrated that motion aftereffects transfer between vision and touch [1]. Here, the adapted brain state induced by one modality has consequences for processes in another modality, implying that somewhere in the processing stream, visual and tactile motion have shared underlying neural representations. We propose the adaptive processing hypothesis: any area that processes a stimulus adapts to the features of the stimulus it represents, and this adaptation has consequences for perception. This view argues that there is no single locus of an aftereffect. Rather, aftereffects emerge when the test stimulus used to probe the effect of adaptation requires processing of a given type. The illusion will reflect the properties of the brain area(s) that support that specific level of representation. We further suggest that many cortical areas are more process-dependent than modality-dependent, with crossmodal interactions reflecting shared processing demands in even 'early' sensory cortices.
13
Klatzky RL, Lederman SJ. Haptic object perception: spatial dimensionality and relation to vision. Philos Trans R Soc Lond B Biol Sci 2011; 366:3097-105. PMID: 21969691; DOI: 10.1098/rstb.2011.0153.
Abstract
Enabled by the remarkable dexterity of the human hand, specialized haptic exploration is a hallmark of object perception by touch. Haptic exploration normally takes place in a spatial world that is three-dimensional; nevertheless, stimuli of reduced spatial dimensionality are also used to display spatial information. This paper examines the consequences of full (three-dimensional) versus reduced (two-dimensional) spatial dimensionality for object processing by touch, particularly in comparison with vision. We begin with perceptual recognition of common human-made artefacts, then extend our discussion of spatial dimensionality in touch and vision to include faces, drawing from research on haptic recognition of facial identity and emotional expressions. Faces have often been characterized as constituting a specialized input for human perception. We find that contrary to vision, haptic processing of common objects is impaired by reduced spatial dimensionality, whereas haptic face processing is not. We interpret these results in terms of fundamental differences in object perception across the modalities, particularly the special role of manual exploration in extracting a three-dimensional structure.
Affiliation(s)
- Roberta L Klatzky, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213, USA
14
Kim S, Stevenson RA, James TW. Visuo-haptic neuronal convergence demonstrated with an inversely effective pattern of BOLD activation. J Cogn Neurosci 2012; 24:830-42. PMID: 22185495; DOI: 10.1162/jocn_a_00176.
Abstract
We investigated the neural substrates involved in visuo-haptic neuronal convergence using an additive-factors design in combination with fMRI. Stimuli were explored under three sensory modality conditions: viewing the object through a mirror without touching (V), touching the object with eyes closed (H), or simultaneously viewing and touching the object (VH). This modality factor was crossed with a task difficulty factor, which had two levels. On the basis of an idea similar to the principle of inverse effectiveness, we predicted that increasing difficulty would increase the relative level of multisensory gain in brain regions where visual and haptic sensory inputs converged. An ROI analysis focused on the lateral occipital tactile-visual area found evidence of inverse effectiveness in the left lateral occipital tactile-visual area, but not in the right. A whole-brain analysis also found evidence for the same pattern in the anterior aspect of the intraparietal sulcus, the premotor cortex, and the posterior insula, all in the left hemisphere. In conclusion, this study is the first to demonstrate visuo-haptic neuronal convergence based on an inversely effective pattern of brain activation.
Affiliation(s)
- Sunah Kim, 360 Minor Hall, University of California, Berkeley, Berkeley, CA 94720, USA
15
Wolbers T, Klatzky RL, Loomis JM, Wutte MG, Giudice NA. Modality-independent coding of spatial layout in the human brain. Curr Biol 2011; 21:984-9. PMID: 21620708; PMCID: PMC3119034; DOI: 10.1016/j.cub.2011.04.038.
Abstract
In many nonhuman species, neural computations of navigational information such as position and orientation are not tied to a specific sensory modality [1, 2]. Rather, spatial signals are integrated from multiple input sources, likely leading to abstract representations of space. In contrast, the potential for abstract spatial representations in humans is not known, because most neuroscientific experiments on human navigation have focused exclusively on visual cues. Here, we tested the modality independence hypothesis with two functional magnetic resonance imaging (fMRI) experiments that characterized computations in regions implicated in processing spatial layout [3]. According to the hypothesis, such regions should be recruited for spatial computation of 3D geometric configuration, independent of a specific sensory modality. In support of this view, sighted participants showed strong activation of the parahippocampal place area (PPA) and the retrosplenial cortex (RSC) for visual and haptic exploration of information-matched scenes but not objects. Functional connectivity analyses suggested that these effects were not related to visual recoding, which was further supported by a similar preference for haptic scenes found with blind participants. Taken together, these findings establish the PPA/RSC network as critical in modality-independent spatial computations and provide important evidence for a theory of high-level abstract spatial information processing in the human brain.
Affiliation(s)
- Thomas Wolbers, Centre for Cognitive and Neural Systems, University of Edinburgh, Edinburgh, EH8 9JZ, UK
- Roberta L. Klatzky, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Jack M. Loomis, Department of Psychology, University of California, Santa Barbara, CA 93106, USA
- Magdalena G. Wutte, Graduate School of Systemic Neurosciences, Ludwig-Maximilians University, Munich, Germany
- Nicholas A. Giudice, Department of Spatial Information Science and Engineering, University of Maine, Orono, ME 04469-5711, USA
16
Haptic perception and body representation in lateral and medial occipito-temporal cortices. Neuropsychologia 2011; 49:821-829. DOI: 10.1016/j.neuropsychologia.2011.01.034.
17
James TW, Huh E, Kim S. Temporal and spatial integration of face, object, and scene features in occipito-temporal cortex. Brain Cogn 2010; 74:112-22. PMID: 20727652; DOI: 10.1016/j.bandc.2010.07.007.
Abstract
In three neuroimaging experiments, face, novel object, and building stimuli were compared under conditions of restricted (aperture) viewing and normal (whole) viewing. Aperture viewing restricted the view to a single face/object feature at a time, with the subjects able to move the aperture continuously though time to reveal different features. An analysis of the proportion of time spent viewing different features showed stereotypical exploration patterns for face, object, and building stimuli, and suggested that subjects constrained their viewing to the features most relevant for recognition. Aperture viewing showed much longer response times than whole viewing, due to sequential exploration of the relevant isolated features. An analysis of BOLD activation revealed face-selective activation with both whole viewing and aperture viewing in the left and right fusiform face areas (FFA). Aperture viewing showed strong and sustained activation throughout exploration, suggesting that aperture viewing recruited similar processes as whole viewing, but for a longer time period. Face-selective recruitment of the FFA with aperture viewing suggests that the FFA is involved in the integration of isolated features for the purpose of recognition.
Affiliation(s)
- Thomas W James
- Department of Psychological and Brain Sciences, Indiana University, United States
18
McGregor TA, Klatzky RL, Hamilton C, Lederman SJ. Haptic classification of facial identity in 2D displays: configural versus feature-based processing. IEEE Trans Haptics 2010; 3:48-55. [PMID: 27788089 DOI: 10.1109/toh.2009.49] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3]
Abstract
Participants learned through feedback to haptically classify the identity of upright versus inverted versus scrambled faces depicted in simple 2D raised-line displays. We investigated whether identity classification would make use of a configural face representation, as is evidenced for vision and 3D haptic facial displays. Upright and scrambled faces produced equivalent accuracy, and both were identified more accurately than inverted faces. The mean magnitude of the haptic inversion effect for 2D facial identity was a sizable 26 percent, indicating that the upright orientation was 'privileged' in the haptic representations of facial identity in these 2D displays, as with other facial modalities. However, given the effect of scrambling, we conclude that configural processing was not employed; rather, only local information about the features was used, the features being treated as oriented objects within a body-centered frame of reference. The results indicate a fundamental difference between haptic identification of 2D facial depictions and 3D faces, paralleling a corresponding difference in recognition of nonface objects.
19
Kitada R, Johnsrude IS, Kochiyama T, Lederman SJ. Brain networks involved in haptic and visual identification of facial expressions of emotion: An fMRI study. Neuroimage 2010; 49:1677-89. [DOI: 10.1016/j.neuroimage.2009.09.014] [Citation(s) in RCA: 81] [Impact Index Per Article: 5.8]
20
Kitada R, Johnsrude IS, Kochiyama T, Lederman SJ. Functional specialization and convergence in the occipito-temporal cortex supporting haptic and visual identification of human faces and body parts: An fMRI study. J Cogn Neurosci 2009; 21:2027-45. [DOI: 10.1162/jocn.2009.21115] [Citation(s) in RCA: 55] [Impact Index Per Article: 3.7]
Abstract
Humans can recognize common objects by touch extremely well whenever vision is unavailable. Despite its importance to a thorough understanding of human object recognition, the neuroscientific study of this topic has been relatively neglected. To date, the few published studies have addressed the haptic recognition of nonbiological objects. We now focus on haptic recognition of the human body, a particularly salient object category for touch. Neuroimaging studies demonstrate that regions of the occipito-temporal cortex are specialized for visual perception of faces (fusiform face area, FFA) and other body parts (extrastriate body area, EBA). Are the same category-sensitive regions activated when these components of the body are recognized haptically? Here, we use fMRI to compare brain organization for haptic and visual recognition of human body parts. Sixteen subjects identified exemplars of faces, hands, feet, and nonbiological control objects using vision and haptics separately. We identified two discrete regions within the fusiform gyrus (FFA and the haptic face region) that were each sensitive to both haptically and visually presented faces; however, these two regions differed significantly in their response patterns. Similarly, two regions within the lateral occipito-temporal area (EBA and the haptic body region) were each sensitive to body parts in both modalities, although the response patterns differed. Thus, although the fusiform gyrus and the lateral occipito-temporal cortex appear to exhibit modality-independent, category-sensitive activity, our results also indicate a degree of functional specialization related to sensory modality within these structures.
21
Dopjans L, Wallraven C, Bülthoff HH. Cross-modal transfer in visual and haptic face recognition. IEEE Trans Haptics 2009; 2:236-240. [PMID: 27788108 DOI: 10.1109/toh.2009.18] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5]
Abstract
We report four psychophysical experiments investigating cross-modal transfer in visual and haptic face recognition. We found surprisingly good haptic performance and cross-modal transfer for both modalities. Interestingly, transfer was asymmetric depending on which modality was learned first. These findings are discussed in relation to haptic object processing and face processing.
22
Stevenson RA, Kim S, James TW. An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI. Exp Brain Res 2009; 198:183-94. [PMID: 19352638 DOI: 10.1007/s00221-009-1783-8] [Citation(s) in RCA: 54] [Impact Index Per Article: 3.6]
Abstract
It can be shown empirically and theoretically that inferences based on established metrics used to assess multisensory integration with BOLD fMRI data, such as superadditivity, are dependent on the particular experimental situation. For example, the law of inverse effectiveness shows that the likelihood of finding superadditivity in a known multisensory region increases with decreasing stimulus discriminability. In this paper, we suggest that Sternberg's additive-factors design allows for an unbiased assessment of multisensory integration. Through the manipulation of signal-to-noise ratio as an additive factor, we have identified networks of cortical regions that show properties of audio-visual or visuo-haptic neuronal convergence. These networks contained previously identified multisensory regions and also many new regions, for example, the caudate nucleus for audio-visual integration, and the fusiform gyrus for visuo-haptic integration. A comparison of integrative networks across audio-visual and visuo-haptic conditions showed very little overlap, suggesting that neural mechanisms of integration are unique to particular sensory pairings. Our results provide evidence for the utility of the additive-factors approach by demonstrating its effectiveness across modality (vision, audition, and haptics), stimulus type (speech and non-speech), experimental design (blocked and event-related), method of analysis (SPM and ROI), and experimenter-chosen baseline. The additive-factors approach provides a method for investigating multisensory interactions that goes beyond what can be achieved with more established metric-based, subtraction-type methods.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA.
23
Kleinschmidt A, Cohen L. The neural bases of prosopagnosia and pure alexia: recent insights from functional neuroimaging. Curr Opin Neurol 2006; 19:386-91. [PMID: 16914978 DOI: 10.1097/01.wco.0000236619.89710.ee] [Citation(s) in RCA: 49] [Impact Index Per Article: 2.7]
Abstract
PURPOSE OF REVIEW: To discuss whether recent functional neuroimaging results can account for clinical phenomenology in visual associative agnosias.
RECENT FINDINGS: Functional neuroimaging studies in healthy human subjects have identified only two regions of ventral occipitotemporal cortex that invariantly respond to individual faces and visual words, respectively. The signature of face identity coding in the fusiform neural response was shown to be missing in a patient with prosopagnosia. Another case study established that a surgical lesion close to the region sensitive to visual words can result in pure alexia.
SUMMARY: Evidence is increasing that functional specialization for processing face identity and visual word forms is restricted to two specialized sensory modules in the occipitotemporal cortex. A structural or functional lesion to face-sensitive and word-sensitive regions in the ventral occipitotemporal cortex can provide the most parsimonious account for the clinical syndromes of prosopagnosia and agnosic alexia. This review suggests that functional specialization should be considered in terms of whether exclusively one brain region (instead of many) underpins a defined function and not as whether this brain region underpins exclusively one cognitive function. Such functional specialization seems to exist for at least two higher-order visual perceptual functions, face and word identification.
Affiliation(s)
- Andreas Kleinschmidt
- Institut National de la Santé et de la Recherche Médicale, Unit 562, Service Hospitalier Frederic Joliot CEA, Orsay, France.
24
Casey SJ, Newell FN. Are representations of unfamiliar faces independent of encoding modality? Neuropsychologia 2006; 45:506-13. [PMID: 16597451 DOI: 10.1016/j.neuropsychologia.2006.02.011] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.5]
Abstract
It is well documented that both featural and configural information are important in visual face recognition. Less is known, however, about the nature of the information underlying haptic face recognition and whether or not this information is the same as in vision. In our experiments we found better within-modal than crossmodal face recognition performance, suggesting that face representations are largely specific to each modality. Moreover, this cost in crossmodal performance was found to be independent of differences in exploratory procedures across the modalities during encoding. We found that crossmodal face perception was most efficient when configural information of the facial features was preserved, suggesting that configural information is shared across modalities. Our findings suggest that face information is processed in a similar manner across vision and touch but that qualitative differences in the nature of the information encoded underlie efficient within-modal relative to crossmodal recognition.
Affiliation(s)
- Sarah J Casey
- School of Psychology and Institute of Neuroscience, Trinity College, Dublin, Ireland
25
James TW, Servos P, Kilgour AR, Huh E, Lederman S. The influence of familiarity on brain activation during haptic exploration of 3-D facemasks. Neurosci Lett 2006; 397:269-73. [PMID: 16420973 DOI: 10.1016/j.neulet.2005.12.052] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.7]
Abstract
Little is known about the neural substrates that underlie difficult haptic discrimination of 3-D within-class object stimuli. Recent work [A.R. Kilgour, R. Kitada, P. Servos, T.W. James, S.J. Lederman, Haptic face identification activates ventral occipital and temporal areas: an fMRI study, Brain Cogn. (in press)] suggests that the left fusiform gyrus may contribute to the identification of facemasks that are haptically explored in the absence of vision. Here, we extend this line of research to investigate the influence of familiarity. Subjects were trained extensively to individuate a set of facemasks in the absence of vision using only haptic exploration. Brain activation was then measured using fMRI while subjects performed a haptic face recognition task on familiar and unfamiliar facemasks. A group analysis contrasting familiar and unfamiliar facemasks found that the left fusiform gyrus produced greater activation with familiar facemasks.
Affiliation(s)
- Thomas W James
- Department of Psychological and Brain Sciences, 1101 E 10th Street, Indiana University, Bloomington, IN 47405, USA.