1
Myga KA, Azañón E, Ambroziak KB, Ferrè ER, Longo MR. Haptic experience of bodies alters body perception. Perception 2024; 53:716-729. [PMID: 39324272] [DOI: 10.1177/03010066241270627]
Abstract
Research on media's effects on body perception has focused mainly on vision of extreme body types. However, haptics is a major part of the way children experience bodies. Playing with unrealistically thin dolls has been linked to the emergence of body image concerns, but the perceptual mechanisms remain unknown. We explored the effects of haptic experience of extreme body types on body perception using adaptation aftereffects. Blindfolded participants judged whether haptically explored doll-like stimuli were thinner or fatter than the average body, before and after adaptation to an underweight or overweight doll. In a second experiment, participants underwent a traditional visual adaptation paradigm with extreme bodies, using stimuli matched to those in Experiment 1. In both modalities, test bodies were judged as fatter after adaptation to an underweight body; adaptation to an overweight body produced the opposite effect. For the first time, we show adiposity aftereffects in the haptic modality, analogous to those established in vision, using stimuli matched across the visual and haptic paradigms.
Affiliation(s)
- Kasia A Myga
- Department of Psychological Sciences, Birkbeck, University of London, London, UK; Department of Neurology, Otto-von-Guericke University, Magdeburg, Germany; Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany
- Elena Azañón
- Department of Neurology, Otto-von-Guericke University, Magdeburg, Germany; Department of Behavioral Neurology, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; Center for Intervention and Research on Adaptive and Maladaptive Brain Circuits Underlying Mental Health (C-I-R-C), Jena-Magdeburg-Halle, Germany
- Matthew R Longo
- Department of Psychological Sciences, Birkbeck, University of London, London, UK
2
Saccone EJ, Tian M, Bedny M. Developing cortex is functionally pluripotent: Evidence from blindness. Dev Cogn Neurosci 2024; 66:101360. [PMID: 38394708] [PMCID: PMC10899073] [DOI: 10.1016/j.dcn.2024.101360]
Abstract
How rigidly does innate architecture constrain the function of developing cortex? What is the contribution of early experience? We review insights into these questions from studies of visual cortex function in people born blind. In blindness, occipital cortices are active during auditory and tactile tasks. What this 'cross-modal' plasticity tells us about cortical flexibility is debated. On the one hand, visual networks of blind people respond to higher-cognitive information, such as sentence grammar, suggesting drastic repurposing. On the other, in line with 'metamodal' accounts, sighted and blind populations show shared domain preferences in ventral occipito-temporal cortex (vOTC), suggesting that visual areas switch input modality but perform the same or similar perceptual functions (e.g., face recognition) in blindness. Here we bring these disparate literatures together, reviewing and synthesizing evidence that speaks to whether visual cortices have similar or different functions in blind and sighted people. Together, the evidence suggests that in blindness, visual cortices are incorporated into higher-cognitive (e.g., fronto-parietal) networks, which are a major source of long-range input to the visual system. We propose a connectivity-constrained, experience-dependent account: functional development is constrained by innate anatomical connectivity, experience, and behavioral needs. Infant cortex is pluripotent; the same anatomical constraints can develop into different functional outcomes.
Affiliation(s)
- Elizabeth J Saccone
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Mengyu Tian
- Center for Educational Science and Technology, Beijing Normal University at Zhuhai, China
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
3
Osteopathic clinical reasoning: An ethnographic study of perceptual diagnostic judgments, and metacognition. INT J OSTEOPATH MED 2018. [DOI: 10.1016/j.ijosm.2018.03.005]
4
Dai R, Huang Z, Tu H, Wang L, Tanabe S, Weng X, He S, Li D. Interplay between Heightened Temporal Variability of Spontaneous Brain Activity and Task-Evoked Hyperactivation in the Blind. Front Hum Neurosci 2017; 10:632. [PMID: 28066206] [PMCID: PMC5169068] [DOI: 10.3389/fnhum.2016.00632]
Abstract
The brain's functional organization can be altered by visual deprivation, as shown by comparing blind and sighted people's activation during tactile discrimination tasks such as braille reading. The blind show higher activation than the sighted during tactile discrimination, with an especially large difference in ventral occipitotemporal (vOT) cortex. However, it remains unknown whether this vOT hyperactivation is related to alterations in spontaneous activity. To address this question, we examined 16 blind subjects, 19 low-vision individuals, and 21 normally sighted controls using functional magnetic resonance imaging (fMRI). Subjects were scanned at rest and during a tactile discrimination task. In spontaneous activity, both blind and low-vision subjects showed increased local signal synchronization and increased temporal variability compared with sighted subjects. During the tactile task, the vOT of blind and low-vision subjects showed stronger task-induced activation than that of sighted subjects. Furthermore, an inter-subject partial correlation analysis showed that temporal variability was more closely related to tactile-task activation than was local signal synchronization. Our results further support the view that visual impairment induces vOT cortical reorganization. The vOT hyperactivation during tactile stimulus processing in the blind may be related to their greater dynamic range of spontaneous activity.
Affiliation(s)
- Rui Dai
- School of Life Science, South China Normal University, Guangzhou, China
- Zirui Huang
- Institute of Mental Health Research, University of Ottawa, Ottawa, ON, Canada
- Huihui Tu
- Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, China
- Luoyu Wang
- Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, China
- Sean Tanabe
- Faculty of Science, University of Ottawa, Ottawa, ON, Canada
- Xuchu Weng
- Center for Cognition and Brain Disorders, Hangzhou Normal University, Hangzhou, China
- Sheng He
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Minnesota, Minneapolis, MN, USA
- Dongfeng Li
- School of Life Science, South China Normal University, Guangzhou, China
5
Lederman SJ, Klatzky RL, Abramowicz A, Salsman K, Kitada R, Hamilton C. Haptic Recognition of Static and Dynamic Expressions of Emotion in the Live Face. Psychol Sci 2007; 18:158-164. [PMID: 17425537] [DOI: 10.1111/j.1467-9280.2007.01866.x]
Abstract
If humans can detect the wealth of tactile and haptic information potentially available in live facial expressions of emotion (FEEs), they should be capable of haptically recognizing the six universal expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) at levels well above chance. We tested this hypothesis in the experiments reported here. With minimal training, subjects' overall mean accuracy was 51% for static FEEs (Experiment 1) and 74% for dynamic FEEs (Experiment 2). All FEEs except static fear were successfully recognized above the chance level of 16.7%. Complementing these findings, overall confidence and information transmission were higher for dynamic than for corresponding static faces. Our performance measures (accuracy and confidence ratings, plus response latency in Experiment 2 only) confirmed that happiness, sadness, and surprise were all highly recognizable, and anger, disgust, and fear less so.
6
Jao RJ, James TW, James KH. Crossmodal enhancement in the LOC for visuohaptic object recognition over development. Neuropsychologia 2015; 77:76-89. [PMID: 26272239] [DOI: 10.1016/j.neuropsychologia.2015.08.008]
Abstract
Research has provided strong evidence of multisensory convergence of visual and haptic information within the visual cortex. These studies implement crossmodal matching paradigms to examine how systems use information from different sensory modalities for object recognition. Developmentally, behavioral evidence of visuohaptic crossmodal processing has suggested that communication within sensory systems develops earlier than across systems; nonetheless, it is unknown how the neural mechanisms driving these behavioral effects develop. To address this gap in knowledge, BOLD functional Magnetic Resonance Imaging (fMRI) was measured during delayed match-to-sample tasks that examined intramodal (visual-to-visual, haptic-to-haptic) and crossmodal (visual-to-haptic, haptic-to-visual) novel object recognition in children aged 7-8.5 years and adults. Tasks were further divided into sample encoding and test matching phases to dissociate the relative contributions of each. Results of crossmodal and intramodal object recognition revealed the network of known visuohaptic multisensory substrates, including the lateral occipital complex (LOC) and the intraparietal sulcus (IPS). Critically, both adults and children showed crossmodal enhancement within the LOC, suggesting a sensitivity to changes in sensory modality during recognition. These groups showed similar regions of activation, although children generally exhibited more widespread activity during sample encoding and weaker BOLD signal change during test matching than adults. Results further provided evidence of a bilateral region in the occipitotemporal cortex that was haptic-preferring in both age groups. This region abutted the bimodal LOtv and was consistent with a medial-to-lateral organization that transitioned from a visual to a haptic bias within the LOC. These findings converge with existing evidence of visuohaptic processing in the LOC in adults, and extend our knowledge of crossmodal processing in adults and children.
Affiliation(s)
- R Joanne Jao
- Cognitive Science Program, Indiana University, Bloomington, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA
- Thomas W James
- Cognitive Science Program, Indiana University, Bloomington, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA; Program in Neuroscience, Indiana University, Bloomington, USA
- Karin Harman James
- Cognitive Science Program, Indiana University, Bloomington, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA; Program in Neuroscience, Indiana University, Bloomington, USA
7
Lacey S, Sathian K. Visuo-haptic multisensory object recognition, categorization, and representation. Front Psychol 2014; 5:730. [PMID: 25101014] [PMCID: PMC4102085] [DOI: 10.3389/fpsyg.2014.00730]
Abstract
Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate and we review several candidate brain regions, previously thought to be specialized for aspects of visual processing, that are now known also to be involved in analogous haptic tasks. Finally, we lay out the evidence for a model of multisensory object recognition in which top-down and bottom-up pathways to the object-selective lateral occipital complex are modulated by object familiarity and individual differences in object and spatial imagery.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- K Sathian
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University School of Medicine, Atlanta, GA, USA; Department of Psychology, Emory University School of Medicine, Atlanta, GA, USA; Rehabilitation Research and Development Center of Excellence, Atlanta Veterans Affairs Medical Center, Decatur, GA, USA
8
Kim S, Stevenson RA, James TW. Visuo-haptic neuronal convergence demonstrated with an inversely effective pattern of BOLD activation. J Cogn Neurosci 2012; 24:830-842. [PMID: 22185495] [DOI: 10.1162/jocn_a_00176]
Abstract
We investigated the neural substrates involved in visuo-haptic neuronal convergence using an additive-factors design in combination with fMRI. Stimuli were explored under three sensory modality conditions: viewing the object through a mirror without touching (V), touching the object with eyes closed (H), or simultaneously viewing and touching the object (VH). This modality factor was crossed with a task difficulty factor with two levels. On the basis of an idea similar to the principle of inverse effectiveness, we predicted that increasing difficulty would increase the relative level of multisensory gain in brain regions where visual and haptic sensory inputs converge. An ROI analysis found evidence of inverse effectiveness in the left lateral occipital tactile-visual area, but not in the right. A whole-brain analysis found the same pattern in the anterior aspect of the intraparietal sulcus, the premotor cortex, and the posterior insula, all in the left hemisphere. In conclusion, this study is the first to demonstrate visuo-haptic neuronal convergence based on an inversely effective pattern of brain activation.
Affiliation(s)
- Sunah Kim
- 360 Minor Hall, University of California, Berkeley, Berkeley, CA 94720, USA
9
Haptic perception and body representation in lateral and medial occipito-temporal cortices. Neuropsychologia 2011; 49:821-829. [DOI: 10.1016/j.neuropsychologia.2011.01.034]
10
James TW, Huh E, Kim S. Temporal and spatial integration of face, object, and scene features in occipito-temporal cortex. Brain Cogn 2010; 74:112-122. [PMID: 20727652] [DOI: 10.1016/j.bandc.2010.07.007]
Abstract
In three neuroimaging experiments, face, novel object, and building stimuli were compared under conditions of restricted (aperture) viewing and normal (whole) viewing. Aperture viewing restricted the view to a single face/object feature at a time, with the subjects able to move the aperture continuously through time to reveal different features. An analysis of the proportion of time spent viewing different features showed stereotypical exploration patterns for face, object, and building stimuli, and suggested that subjects constrained their viewing to the features most relevant for recognition. Aperture viewing produced much longer response times than whole viewing, owing to sequential exploration of the relevant isolated features. An analysis of BOLD activation revealed face-selective activation with both whole viewing and aperture viewing in the left and right fusiform face areas (FFA). Aperture viewing showed strong and sustained activation throughout exploration, suggesting that aperture viewing recruited similar processes as whole viewing, but for a longer time period. Face-selective recruitment of the FFA with aperture viewing suggests that the FFA is involved in the integration of isolated features for the purpose of recognition.
Affiliation(s)
- Thomas W James
- Department of Psychological and Brain Sciences, Indiana University, United States
11
Abstract
Since Broca's studies on language processing, cortical functional specialization has been considered to be integral to efficient neural processing. A fundamental question in cognitive neuroscience concerns the type of learning that is required for functional specialization to develop. To address this issue with respect to the development of neural specialization for letters, we used functional magnetic resonance imaging (fMRI) to compare brain activation patterns in pre-school children before and after different letter-learning conditions: a sensori-motor group practised printing letters during the learning phase, while the control group practised visual recognition. Results demonstrated an overall left-hemisphere bias for processing letters in these pre-literate participants, but, more interestingly, showed enhanced blood oxygen-level-dependent activation in the visual association cortex during letter perception only after sensori-motor (printing) learning. It is concluded that sensori-motor experience augments processing in the visual system of pre-school children. The change of activation in these neural circuits provides important evidence that 'learning-by-doing' can lay the foundation for, and potentially strengthen, the neural systems used for visual letter recognition.
Affiliation(s)
- Karin Harman James
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA
12
Kitada R, Johnsrude IS, Kochiyama T, Lederman SJ. Brain networks involved in haptic and visual identification of facial expressions of emotion: An fMRI study. Neuroimage 2010; 49:1677-1689. [DOI: 10.1016/j.neuroimage.2009.09.014]
13
Kitada R, Johnsrude IS, Kochiyama T, Lederman SJ. Functional Specialization and Convergence in the Occipito-temporal Cortex Supporting Haptic and Visual Identification of Human Faces and Body Parts: An fMRI Study. J Cogn Neurosci 2009; 21:2027-2045. [DOI: 10.1162/jocn.2009.21115]
Abstract
Humans can recognize common objects by touch extremely well whenever vision is unavailable. Despite its importance to a thorough understanding of human object recognition, the neuroscientific study of this topic has been relatively neglected. To date, the few published studies have addressed the haptic recognition of nonbiological objects. We now focus on haptic recognition of the human body, a particularly salient object category for touch. Neuroimaging studies demonstrate that regions of the occipito-temporal cortex are specialized for visual perception of faces (fusiform face area, FFA) and other body parts (extrastriate body area, EBA). Are the same category-sensitive regions activated when these components of the body are recognized haptically? Here, we use fMRI to compare brain organization for haptic and visual recognition of human body parts. Sixteen subjects identified exemplars of faces, hands, feet, and nonbiological control objects using vision and haptics separately. We identified two discrete regions within the fusiform gyrus (FFA and the haptic face region) that were each sensitive to both haptically and visually presented faces; however, these two regions differed significantly in their response patterns. Similarly, two regions within the lateral occipito-temporal area (EBA and the haptic body region) were each sensitive to body parts in both modalities, although the response patterns differed. Thus, although the fusiform gyrus and the lateral occipito-temporal cortex appear to exhibit modality-independent, category-sensitive activity, our results also indicate a degree of functional specialization related to sensory modality within these structures.
14
Stevenson RA, Kim S, James TW. An additive-factors design to disambiguate neuronal and areal convergence: measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI. Exp Brain Res 2009; 198:183-194. [PMID: 19352638] [DOI: 10.1007/s00221-009-1783-8]
Abstract
It can be shown empirically and theoretically that inferences based on established metrics used to assess multisensory integration with BOLD fMRI data, such as superadditivity, are dependent on the particular experimental situation. For example, the law of inverse effectiveness shows that the likelihood of finding superadditivity in a known multisensory region increases with decreasing stimulus discriminability. In this paper, we suggest that Sternberg's additive-factors design allows for an unbiased assessment of multisensory integration. Through the manipulation of signal-to-noise ratio as an additive factor, we have identified networks of cortical regions that show properties of audio-visual or visuo-haptic neuronal convergence. These networks contained previously identified multisensory regions and also many new regions, for example, the caudate nucleus for audio-visual integration, and the fusiform gyrus for visuo-haptic integration. A comparison of integrative networks across audio-visual and visuo-haptic conditions showed very little overlap, suggesting that neural mechanisms of integration are unique to particular sensory pairings. Our results provide evidence for the utility of the additive-factors approach by demonstrating its effectiveness across modality (vision, audition, and haptics), stimulus type (speech and non-speech), experimental design (blocked and event-related), method of analysis (SPM and ROI), and experimenter-chosen baseline. The additive-factors approach provides a method for investigating multisensory interactions that goes beyond what can be achieved with more established metric-based, subtraction-type methods.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA