1
Ward RJ, Wuerger SM, Ashraf M, Marshall A. Physicochemical features partially explain olfactory crossmodal correspondences. Sci Rep 2023; 13:10590. PMID: 37391587; PMCID: PMC10313698; DOI: 10.1038/s41598-023-37770-1.
Abstract
During the olfactory perception process, our olfactory receptors are thought to recognize specific chemical features. These features may contribute towards explaining our crossmodal perception. The physicochemical features of odors can be extracted using an array of gas sensors, also known as an electronic nose. The present study investigates the role that the physicochemical features of olfactory stimuli play in explaining the nature and origin of olfactory crossmodal correspondences, a consistently overlooked aspect of prior work. Here, we answer the question of whether the physicochemical features of odors contribute towards explaining olfactory crossmodal correspondences, and by how much. We found a similarity of 49% between the perceptual and the physicochemical spaces of our odors. All of our explored crossmodal correspondences, namely the angularity of shapes, the smoothness of textures, perceived pleasantness, pitch, and colors, have significant predictors for various physicochemical features, including aspects of intensity and odor quality. While it is generally recognized that olfactory perception is strongly shaped by context, experience, and learning, our findings show that a link, albeit small (6-23%), exists between olfactory crossmodal correspondences and their underlying physicochemical features.
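The "49% similarity between spaces" result reflects a general technique worth making concrete: build a pairwise-distance matrix for each space and correlate their upper triangles (a representational similarity analysis). The sketch below uses random stand-in data, not the study's odor ratings or electronic-nose measurements, and the feature counts are illustrative assumptions.

```python
# Minimal sketch: compare a perceptual space with a physicochemical feature
# space by correlating their pairwise-distance structures.
import numpy as np

rng = np.random.default_rng(0)

n_odors = 10
perceptual = rng.normal(size=(n_odors, 5))        # e.g. ratings on 5 perceptual scales
physicochemical = rng.normal(size=(n_odors, 32))  # e.g. 32 gas-sensor features

def pairwise_distances(X):
    """Euclidean distance matrix between rows of X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def space_similarity(A, B):
    """Correlate the upper triangles of two distance matrices."""
    iu = np.triu_indices(A.shape[0], k=1)
    return np.corrcoef(A[iu], B[iu])[0, 1]

r = space_similarity(pairwise_distances(perceptual),
                     pairwise_distances(physicochemical))
print(f"similarity between spaces: r = {r:.2f}")
```

With real data, the two matrices would be built from averaged odor ratings and e-nose readings; the correlation then quantifies how much the perceptual arrangement of odors mirrors their chemistry.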
Affiliation(s)
- Ryan J Ward
- School of Computer Science and Mathematics, Liverpool John Moores University, Liverpool, L3 3AF, UK.
- Digital Innovation Facility, University of Liverpool, Liverpool, L69 3RF, UK.
- Sophie M Wuerger
- Department of Psychology, University of Liverpool, Liverpool, L69 7ZA, UK.
- Maliha Ashraf
- Department of Psychology, University of Liverpool, Liverpool, L69 7ZA, UK.
- Alan Marshall
- Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool, L69 3GJ, UK.
2
Wersényi G. Perception Accuracy of a Multi-Channel Tactile Feedback System for Assistive Technology. Sensors (Basel) 2022; 22:8962. PMID: 36433558; PMCID: PMC9695395; DOI: 10.3390/s22228962.
Abstract
Assistive technology uses multi-modal feedback devices, focusing on the visual, auditory, and haptic modalities. Tactile devices provide additional information via the sense of touch. Perception accuracy of vibrations depends on the spectral and temporal attributes of the signal, as well as on the body part the vibrators are attached to. The widespread use of AR/VR devices, wearables, and gaming interfaces requires information about the usability of feedback devices. This paper presents the results of an experiment using an 8-channel tactile feedback system with vibrators placed on the wrists, arms, ankles, and forehead. Different vibration patterns were designed and presented using sinusoidal frequency bursts on 2, 4, and 8 channels. In total, 27 subjects reported their sensations formally and informally on questionnaires. Results indicate that 2 and 4 channels could be used simultaneously with high accuracy, and that the transducers' optimal placement (best sensitivity) is on the wrists, followed by the ankles. Arm and head positions were inferior and generally inadequate for signal presentation. For optimal performance, signal length should exceed 500 ms. Furthermore, the amplitude level and temporal pattern of the presented signals, rather than the vibration frequency, should be used to carry information.
Affiliation(s)
- György Wersényi
- Department of Telecommunications, Széchenyi István University, H-9026 Gyor, Hungary
3
Miralles D, Garrofé G, Parés C, González A, Serra G, Soto A, Sevillano X, Op de Beeck H, Lee Masson H. Multi-modal self-adaptation during object recognition in an artificial cognitive system. Sci Rep 2022; 12:3772. PMID: 35260603; PMCID: PMC8904602; DOI: 10.1038/s41598-022-07424-9.
Abstract
The cognitive connection between the senses of touch and vision is probably the best-known case of multimodality. Recent discoveries suggest that the mapping between both senses is learned rather than innate. This evidence opens the door to a dynamic multimodality that allows individuals to adaptively develop within their environment. By mimicking this aspect of human learning, we propose a new multimodal mechanism that allows artificial cognitive systems (ACS) to quickly adapt to unforeseen perceptual anomalies generated by the environment or by the system itself. In this context, visual recognition systems have advanced remarkably in recent years thanks to the creation of large-scale datasets together with the advent of deep learning algorithms. However, this has not been the case for the haptic modality, where the lack of two-handed dexterous datasets has limited the ability of learning systems to process the tactile information of human object exploration. This data imbalance hinders the creation of synchronized datasets that would enable the development of multimodality in ACS during object exploration. In this work, we use a multimodal dataset recently generated from tactile sensors placed on a collection of objects that capture haptic data from human manipulation, together with the corresponding visual counterpart. Using this data, we create a multimodal learning transfer mechanism capable of both detecting sudden and permanent anomalies in the visual channel and maintaining visual object recognition performance by retraining the visual mode for a few minutes using haptic information. Our proposal for perceptual awareness and self-adaptation is of noteworthy relevance, as it can be applied by any system that satisfies two generic conditions: it can classify each modality independently, and it is provided with a synchronized multimodal dataset.
Affiliation(s)
- David Miralles
- GTM - Grup de Recerca en Tecnologies Mèdia, La Salle-Universitat Ramon Llull, Barcelona, Catalonia, Spain.
- Guillem Garrofé
- GTM - Grup de Recerca en Tecnologies Mèdia, La Salle-Universitat Ramon Llull, Barcelona, Catalonia, Spain.
- Carlota Parés
- GTM - Grup de Recerca en Tecnologies Mèdia, La Salle-Universitat Ramon Llull, Barcelona, Catalonia, Spain.
- Alejandro González
- GTM - Grup de Recerca en Tecnologies Mèdia, La Salle-Universitat Ramon Llull, Barcelona, Catalonia, Spain.
- Gerard Serra
- GTM - Grup de Recerca en Tecnologies Mèdia, La Salle-Universitat Ramon Llull, Barcelona, Catalonia, Spain.
- Alberto Soto
- GTM - Grup de Recerca en Tecnologies Mèdia, La Salle-Universitat Ramon Llull, Barcelona, Catalonia, Spain.
- Xavier Sevillano
- GTM - Grup de Recerca en Tecnologies Mèdia, La Salle-Universitat Ramon Llull, Barcelona, Catalonia, Spain.
- Hans Op de Beeck
- Department of Brain and Cognition, Leuven Brain Institute, KU Leuven, Leuven, Belgium.
- Haemy Lee Masson
- Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA.
4
Visual and Tactile Sensory Systems Share Common Features in Object Recognition. eNeuro 2021; 8:ENEURO.0101-21.2021. PMID: 34544756; PMCID: PMC8493885; DOI: 10.1523/eneuro.0101-21.2021.
Abstract
Although we use our visual and tactile sensory systems interchangeably for object recognition on a daily basis, little is known about the mechanism underlying this ability. This study examined how 3D shape features of objects form two congruent and interchangeable visual and tactile perceptual spaces in healthy male and female participants. Since active exploration plays an important role in shape processing, a virtual reality environment was used to visually explore 3D objects called digital embryos without using the tactile sense. In addition, during the tactile procedure, blindfolded participants actively palpated a 3D-printed version of the same objects with both hands. We first demonstrated that the visual and tactile perceptual spaces were highly similar. We then extracted a series of 3D shape features to investigate how visual and tactile exploration can lead to the correct identification of the relationships between objects. The results indicate that both modalities share the same shape features to form highly similar veridical spaces. This finding suggests that visual and tactile systems might apply similar cognitive processes to sensory inputs, enabling humans to rely on one modality alone, in the absence of the other, to recognize surrounding objects.
5
Haptic object recognition based on shape relates to visual object recognition ability. Psychol Res 2021; 86:1262-1273. PMID: 34355269; PMCID: PMC8341045; DOI: 10.1007/s00426-021-01560-z.
Abstract
Visual object recognition depends in large part on a domain-general ability (Richler et al. Psychol Rev 126(2): 226–251, 2019). Given evidence pointing towards shared mechanisms for object perception across vision and touch, we ask whether individual differences in haptic and visual object recognition are related. We use existing validated visual tests to estimate visual object recognition ability and relate it to performance on two novel tests of haptic object recognition ability (n = 66). One test includes complex objects that participants chose to explore with a hand grasp. The other test uses a simpler stimulus set that participants chose to explore with just their fingertips. Only performance on the haptic test with complex stimuli correlated with visual object recognition ability, suggesting a shared source of variance across task structures, stimuli, and modalities. A follow-up study using a visual version of the haptic test with simple stimuli shows a correlation with the original visual tests, suggesting that the limited complexity of the stimuli did not limit correlation with visual object recognition ability. Instead, we propose that the manner of exploration may be a critical factor in whether a haptic test relates to visual object recognition ability. Our results suggest a perceptual ability that spans at least across vision and touch; however, it may not be recruited during fingertip-only exploration.
6
German JS, Jacobs RA. Can machine learning account for human visual object shape similarity judgments? Vision Res 2020; 167:87-99. PMID: 31972448; DOI: 10.1016/j.visres.2019.12.001.
Abstract
We describe and analyze the performance of metric learning systems, including deep neural networks (DNNs), on a new dataset of human visual object shape similarity judgments of naturalistic, part-based objects known as "Fribbles". In contrast to previous studies which asked participants to judge similarity when objects or scenes were rendered from a single viewpoint, we rendered Fribbles from multiple viewpoints and asked participants to judge shape similarity in a viewpoint-invariant manner. Metrics trained using pixel-based or DNN-based representations fail to explain our experimental data, but a metric trained with a viewpoint-invariant, part-based representation produces a good fit. We also find that although neural networks can learn to extract the part-based representation, and therefore should be capable of learning to model our data, networks trained with a "triplet loss" function based on similarity judgments do not perform well. We analyze this failure, providing a mathematical description of the relationship between the metric learning objective function and the triplet loss function. The poor performance of neural networks appears to be due to the nonconvexity of the optimization problem in network weight space. We conclude that viewpoint insensitivity is a critical aspect of human visual shape perception, and that neural network and other machine learning methods will need to learn viewpoint-insensitive representations in order to account for people's visual object shape similarity judgments.
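For readers unfamiliar with the "triplet loss" objective the abstract criticizes, a minimal sketch follows: given an anchor, a positive (similar) item, and a negative (dissimilar) item, the loss is zero only when the anchor-positive distance beats the anchor-negative distance by a margin. The toy embeddings here are illustrative stand-ins, not Fribble representations.

```python
# Minimal sketch of the standard triplet-loss objective used in metric learning.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, d(a, p) - d(a, n) + margin) with Euclidean distances."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # close to the anchor: judged "similar"
n = np.array([5.0, 0.0])   # far from the anchor: judged "dissimilar"

print(triplet_loss(a, p, n))  # → 0.0: the margin is already satisfied
```

Training a network to minimize this loss over many human-judged triplets shapes the embedding space to respect the similarity judgments; the paper's point is that this objective, unlike the direct metric-learning objective, gets stuck in poor optima in weight space.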
Affiliation(s)
- Joseph Scott German
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, United States.
- Robert A Jacobs
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, United States.
7
Abstract
Although some studies have shown that haptic and visual identification seem to rely on similar processes, few studies have directly compared the two. We investigated haptic and visual object identification by asking participants to learn to recognize (Experiments 1 and 3), or to match (Experiment 2), novel objects that varied only in shape. Participants explored objects haptically, visually, or bimodally, and were then asked to identify objects haptically and/or visually. We demonstrated that patterns of identification errors were similar across identification modality, independently of learning and testing condition, suggesting that the haptic and visual representations in memory were similar. We also demonstrated that identification performance depended on both learning and testing conditions: visual identification surpassed haptic identification only when participants explored the objects visually or bimodally. When participants explored the objects haptically, haptic and visual identification were equivalent. Interestingly, when participants were simultaneously presented with two objects (one presented haptically, and one presented visually), object similarity only influenced performance when participants were asked to indicate whether the two objects were the same, or when participants had learned about the objects visually, without any haptic input. The results suggest that haptic and visual object representations rely on similar processes, that they may be shared, and that visual processing may not always lead to the best performance.
8
Meade ME, Fernandes MA. Semantic and visual relatedness of distractors impairs episodic retrieval of pictures in a divided attention paradigm. Visual Cognition 2017. DOI: 10.1080/13506285.2017.1344341.
Affiliation(s)
- Melissa E. Meade
- Department of Psychology, University of Waterloo, Waterloo, Canada
9
Sakamoto M, Watanabe J. Exploring Tactile Perceptual Dimensions Using Materials Associated with Sensory Vocabulary. Front Psychol 2017; 8:569. PMID: 28450843; PMCID: PMC5390040; DOI: 10.3389/fpsyg.2017.00569.
Abstract
Considering tactile sensation when designing products is important because the decision to purchase often depends on how products feel. Numerous psychophysical studies have attempted to identify important factors that describe tactile perceptions. However, the numbers and types of major tactile dimensions reported in previous studies have varied because of differences in materials used across experiments. To obtain a more complete picture of perceptual space with regard to touch, our study focuses on using vocabulary that expresses tactile sensations as a guiding principle for collecting material samples because these types of words are expected to cover all the basic categories within tactile perceptual space. We collected 120 materials based on a variety of Japanese sound-symbolic words for tactile sensations, and used the materials to examine tactile perceptual dimensions and their associations with affective evaluations. Analysis revealed six major dimensions: "Affective evaluation and Friction," "Compliance," "Surface," "Volume," "Temperature," and "Naturalness." These dimensions include four factors that previous studies have regarded as fundamental, as well as two new factors: "Volume" and "Naturalness." Additionally, we showed that "Affective evaluation" is more closely related to the "Friction" component (slipperiness and dryness) than to other tactile perceptual features. Our study demonstrates that using vocabulary could be an effective method for selecting material samples to explore tactile perceptual space.
Affiliation(s)
- Maki Sakamoto
- Department of Informatics, The University of Electro-Communications, Tokyo, Japan
- Junji Watanabe
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Kanagawa, Japan
10
Haptic adaptation to slant: No transfer between exploration modes. Sci Rep 2016; 6:34412. PMID: 27698392; PMCID: PMC5048134; DOI: 10.1038/srep34412.
Abstract
Human touch is an inherently active sense: to estimate an object's shape, humans often move their hand across its surface. This way the object is sampled both in a serial fashion (sampling different parts of the object across time) and in a parallel fashion (sampling using different parts of the hand simultaneously). Both the serial (moving a single finger) and parallel (static contact with the entire hand) exploration modes provide reliable and similar global shape information, suggesting the possibility that this information is shared early in the sensory cortex. In contrast, here we show the opposite. Using an adaptation-and-transfer paradigm, a change in haptic perception was induced by slant-adaptation using either the serial or parallel exploration mode. A unified shape-based coding would predict that this would equally affect perception using other exploration modes. However, we found that adaptation-induced perceptual changes did not transfer between exploration modes. Instead, serial and parallel exploration components adapted simultaneously, but to different kinaesthetic aspects of exploration behaviour rather than object shape per se. These results indicate that a potential combination of information from different exploration modes can only occur at downstream cortical processing stages, at which adaptation is no longer effective.
11
Lee Masson H, Wallraven C, Petit L. "Can touch this": Cross-modal shape categorization performance is associated with microstructural characteristics of white matter association pathways. Hum Brain Mapp 2016; 38:842-854. PMID: 27696592; DOI: 10.1002/hbm.23422.
Abstract
Previous studies on visuo-haptic shape processing provide evidence that visually learned shape information can transfer to the haptic domain. In particular, recent neuroimaging studies have shown that visually learned novel objects that were haptically tested recruited parts of the ventral pathway from early visual cortex to the temporal lobe. Interestingly, in such tasks considerable individual variation in cross-modal transfer performance was observed. Here, we investigate whether this individual variation may be reflected in microstructural characteristics of white-matter (WM) pathways. We first trained participants on a fine-grained categorization task of novel shapes in the visual domain, followed by a haptic categorization test. We then correlated visual training-performance and haptic test-performance, as well as performance on a symbol-coding task requiring visuo-motor dexterity, with microstructural properties of WM bundles potentially involved in visuo-haptic processing (the inferior longitudinal fasciculus [ILF], the fronto-temporal part of the superior longitudinal fasciculus [SLFft], and the vertical occipital fasciculus [VOF]). Behavioral results showed that haptic categorization performance was good on average but exhibited large inter-individual variability. Haptic performance also was correlated with performance in the symbol-coding task. WM analyses showed that fast visual learners exhibited higher fractional anisotropy (FA) in left SLFft and left VOF. Importantly, haptic test-performance (and symbol-coding performance) correlated with FA in ILF and with axial diffusivity in SLFft. These findings provide clear evidence that individual variation in visuo-haptic performance can be linked to microstructural characteristics of WM pathways.
Affiliation(s)
- Haemy Lee Masson
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 136-713, Korea
| | - Christian Wallraven
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 136-713, Korea
| | - Laurent Petit
- Groupe d'Imagerie Neurofonctionnelle, Institut des Maladies Neurodégénératives - UMR 5293, CNRS, CEA, University of Bordeaux, Bordeaux, France
12
Erdogan G, Chen Q, Garcea FE, Mahon BZ, Jacobs RA. Multisensory Part-based Representations of Objects in Human Lateral Occipital Cortex. J Cogn Neurosci 2016; 28:869-81. PMID: 26918587; DOI: 10.1162/jocn_a_00937.
Abstract
The format of high-level object representations in temporal-occipital cortex is a fundamental and as yet unresolved issue. Here we use fMRI to show that human lateral occipital cortex (LOC) encodes novel 3-D objects in a multisensory and part-based format. We show that visual and haptic exploration of objects leads to similar patterns of neural activity in human LOC and that the shared variance between visually and haptically induced patterns of BOLD contrast in LOC reflects the part structure of the objects. We also show that linear classifiers trained on neural data from LOC on a subset of the objects successfully predict a novel object based on its component part structure. These data demonstrate a multisensory code for object representations in LOC that specifies the part structure of objects.
13
Iwasa K, Ogawa T. Psychological Basis of the Relationship Between the Rorschach Texture Response and Adult Attachment: The Mediational Role of the Accessibility of Tactile Knowledge. J Pers Assess 2015; 98:238-46. PMID: 26569020; DOI: 10.1080/00223891.2015.1099540.
Abstract
This study clarifies the psychological basis for the linkage between adult attachment and the texture response on the Rorschach by examining the mediational role of the accessibility of tactile knowledge. Japanese undergraduate students (n = 35) completed the Rorschach Inkblot Method, the Experiences in Close Relationship Scale for General Objects (Nakao & Kato, 2004) and a lexical decision task designed to measure the accessibility of tactile knowledge. A mediation analysis revealed that the accessibility of tactile knowledge partially mediates the association between attachment anxiety and the texture response. These results suggest that our hypothetical model focusing on the response process provides a possible explanation of the relationship between the texture response and adult attachment.
Affiliation(s)
- Kazunori Iwasa
- Department of Educational Psychology, Faculty of Education, Shujitsu University, Okayama, Japan
- Toshiki Ogawa
- Clinical Psychology Program of the School of Graduate Studies, The Open University of Japan, Mihama-ku, Japan
14
Erdogan G, Yildirim I, Jacobs RA. From Sensory Signals to Modality-Independent Conceptual Representations: A Probabilistic Language of Thought Approach. PLoS Comput Biol 2015; 11:e1004610. PMID: 26554704; PMCID: PMC4640543; DOI: 10.1371/journal.pcbi.1004610.
Abstract
People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models, that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model's percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects' ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly an emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception.
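The three hypothesized components can be sketched in a heavily simplified toy form: the modality-independent representation is a single number (an object's "size"), each forward model maps size to a sensory signal, and inference inverts the models by grid-scoring the posterior. Everything here (the uniform prior, the noise level, the particular forward mappings) is an illustrative assumption, not the paper's grammar-based model.

```python
# Toy analysis-by-synthesis: infer a latent "size" from visual and/or
# haptic signals by inverting per-modality forward models.
import numpy as np

rng = np.random.default_rng(1)

sizes = np.linspace(0.0, 10.0, 101)   # hypothesis grid for the latent size
prior = np.ones_like(sizes) / len(sizes)

def log_likelihood(signal, predicted, sigma):
    return -0.5 * ((signal - predicted) / sigma) ** 2

# Forward models: vision scales the size, haptics compresses it.
vision_forward = lambda s: 2.0 * s
haptic_forward = lambda s: np.sqrt(s)

true_size = 4.0
visual_signal = vision_forward(true_size) + rng.normal(0, 0.1)
haptic_signal = haptic_forward(true_size) + rng.normal(0, 0.1)

def posterior(signals_and_models, sigma=0.1):
    """Grid posterior over sizes given (signal, forward model) pairs."""
    logp = np.log(prior)
    for signal, forward in signals_and_models:
        logp = logp + log_likelihood(signal, forward(sizes), sigma)
    logp -= logp.max()
    p = np.exp(logp)
    return p / p.sum()

# Inference from vision alone, touch alone, or both recovers roughly the
# same latent size, mirroring the model's modality-invariant percepts.
for name, obs in [("vision", [(visual_signal, vision_forward)]),
                  ("touch", [(haptic_signal, haptic_forward)]),
                  ("both", [(visual_signal, vision_forward),
                            (haptic_signal, haptic_forward)])]:
    est = sizes[np.argmax(posterior(obs))]
    print(f"{name}: MAP size = {est:.1f}")
```

The paper's actual model replaces the scalar latent with shape parses from a probabilistic grammar and the toy forward models with a graphics renderer and a hand simulator, but the inversion logic is the same in spirit.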
Affiliation(s)
- Goker Erdogan
- Department of Brain & Cognitive Sciences, University of Rochester, Rochester, New York, United States of America
- Ilker Yildirim
- Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Laboratory of Neural Systems, The Rockefeller University, New York, New York, United States of America
- Robert A. Jacobs
- Department of Brain & Cognitive Sciences, University of Rochester, Rochester, New York, United States of America
15
Lee Masson H, Bulthé J, Op de Beeck HP, Wallraven C. Visual and Haptic Shape Processing in the Human Brain: Unisensory Processing, Multisensory Convergence, and Top-Down Influences. Cereb Cortex 2015. DOI: 10.1093/cercor/bhv170.
16
The eyes grasp, the hands see: metric category knowledge transfers between vision and touch. Psychon Bull Rev 2015; 21:976-85. PMID: 24307250; DOI: 10.3758/s13423-013-0563-4.
Abstract
Categorization of seen objects is often determined by the shapes of objects. However, shape is not exclusive to the visual modality: The haptic system also is expert at identifying shapes. Hence, an important question for understanding shape processing is whether humans store separate modality-dependent shape representations, or whether information is integrated into one multisensory representation. To answer this question, we created a metric space of computer-generated novel objects varying in shape. These objects were then printed using a 3-D printer, to generate tangible stimuli. In a categorization experiment, participants first explored the objects visually and haptically. We found that both modalities led to highly similar categorization behavior. Next, participants were trained either visually or haptically on shape categories within the metric space. As expected, visual training increased visual performance, and haptic training increased haptic performance. Importantly, however, we found that visual training also improved haptic performance, and vice versa. Two additional experiments showed that the location of the categorical boundary in the metric space also transferred across modalities, as did heightened discriminability of objects adjacent to the boundary. This observed transfer of metric category knowledge across modalities indicates that visual and haptic forms of shape information are integrated into a shared multisensory representation.
17
Streicher MC, Estes Z. Touch and Go: Merely Grasping a Product Facilitates Brand Perception and Choice. Appl Cogn Psychol 2015. DOI: 10.1002/acp.3109.
Affiliation(s)
- Mathias C. Streicher
- Department of Strategic Management, Marketing & Tourism, University of Innsbruck, Austria
- Zachary Estes
- Department of Marketing, Bocconi University, Milan, Italy
18
Kooloos JGM, Schepens-Franke AN, Bergman EM, Donders RART, Vorstenbosch MATM. Anatomical knowledge gain through a clay-modeling exercise compared to live and video observations. Anat Sci Educ 2014; 7:420-429. PMID: 24623632; DOI: 10.1002/ase.1443.
Abstract
Clay modeling is increasingly used as a teaching method alternative to dissection. The haptic experience during clay modeling is assumed to parallel the learning effect of manipulating tissues and organs during dissection-room exercises. We tested this assumption in two pretest-post-test experiments. In these experiments, the learning effects of clay modeling were compared to either live observation (Experiment I) or video observation (Experiment II) of the same clay-modeling exercise. Learning effects were measured with multiple-choice questions, extended matching questions, and recognition of structures on illustrations of cross-sections. Analysis of covariance with pretest scores as the covariate was used to analyze the results. Experiment I showed a significantly higher post-test score for the observers, whereas Experiment II showed a significantly higher post-test score for the clay modelers. This study shows that (1) students who perform clay-modeling exercises gain less anatomical knowledge than students who attentively observe the same exercise being carried out, and (2) performing a clay-modeling exercise yields greater anatomical knowledge gain than studying a video of the recorded exercise. The most important driver of learning seems to be engagement in the exercise, which focuses attention and increases time on task.
Affiliation(s)
- Jan G M Kooloos
- Department of Anatomy, Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands
19
Recognizing familiar objects by hand and foot: Haptic shape perception generalizes to inputs from unusual locations and untrained body parts. Atten Percept Psychophys 2014; 76:541-58. [PMID: 24197503 DOI: 10.3758/s13414-013-0559-1] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Indexed: 11/08/2022]
Abstract
The limits of generalization of our 3-D shape recognition system to identifying objects by touch were investigated by testing exploration at unusual locations and with untrained effectors. In Experiments 1 and 2, people found identification by hand of real objects, plastic 3-D models of objects, and raised line drawings placed in front of themselves no easier than when exploration was behind their back. Experiment 3 compared one-handed, two-handed, one-footed, and two-footed haptic recognition of familiar objects. Recognition by foot was slower (13 vs. 7 s) and much less accurate (47% vs. 9% errors) than recognition by either one or both hands. Nevertheless, item difficulty was similar across hand and foot exploration, and there was a strong correlation between an individual's hand and foot performance. Furthermore, foot recognition was better with the largest 20 of the 80 items (32% errors), suggesting that physical limitations hampered exploration by foot. Thus, object recognition by hand generalized efficiently across the spatial location of stimuli, while object recognition by foot was surprisingly good given that no prior training was provided. Active touch (haptics) thus efficiently extracts 3-D shape information and accesses stored representations of familiar objects from novel modes of input.
20
Lacey S, Sathian K. Visuo-haptic multisensory object recognition, categorization, and representation. Front Psychol 2014; 5:730. [PMID: 25101014 PMCID: PMC4102085 DOI: 10.3389/fpsyg.2014.00730] [Citation(s) in RCA: 60] [Impact Index Per Article: 5.5] [Received: 02/17/2014] [Accepted: 06/23/2014] [Indexed: 12/15/2022] Open
Abstract
Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate and we review several candidate brain regions, previously thought to be specialized for aspects of visual processing, that are now known also to be involved in analogous haptic tasks. Finally, we lay out the evidence for a model of multisensory object recognition in which top-down and bottom-up pathways to the object-selective lateral occipital complex are modulated by object familiarity and individual differences in object and spatial imagery.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- K Sathian
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University School of Medicine, Atlanta, GA, USA; Department of Psychology, Emory University School of Medicine, Atlanta, GA, USA; Rehabilitation Research and Development Center of Excellence, Atlanta Veterans Affairs Medical Center, Decatur, GA, USA
21
Kappers AML, Bergmann Tiest WM. Haptic perception. Wiley Interdiscip Rev Cogn Sci 2013; 4:357-374. [PMID: 26304224 DOI: 10.1002/wcs.1238] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.3] [Indexed: 11/08/2022]
Abstract
Fueled by novel applications, interest in haptic perception is growing. This paper provides an overview of the state of the art in several important aspects of haptic perception. By means of touch we can perceive not only quite different material properties, such as roughness, compliance, friction, coldness, and slipperiness, but also spatial properties, such as shape, curvature, size, and orientation. Moreover, the number of objects we have in our hand can be determined, either by counting or by subitizing. All of these aspects are presented and discussed in this paper. Although our intuition tells us that touch provides veridical information about our environment, prominent haptic illusions show otherwise. Knowledge about haptic perception is interesting from a fundamental viewpoint, but it is also of eminent importance for the technological development of haptic devices. The paper closes with a few recent applications.
22
Baumgartner E, Wiebel CB, Gegenfurtner KR. Visual and Haptic Representations of Material Properties. Multisens Res 2013; 26:429-55. [DOI: 10.1163/22134808-00002429] [Citation(s) in RCA: 67] [Impact Index Per Article: 5.6] [Indexed: 11/19/2022]
Abstract
Research on material perception has received an increasing amount of attention recently. Clearly, both the visual and the haptic sense play important roles in the perception of materials, yet it is still unclear how both senses compare in material perception tasks. Here, we set out to investigate the degree of correspondence between the visual and the haptic representations of different materials. We asked participants to both categorize and rate 84 different materials for several material properties. In the haptic case, participants were blindfolded and asked to assess the materials based on haptic exploration. In the visual condition, participants assessed the stimuli based on their visual impressions only. While categorization performance was less consistent in the haptic condition than in the visual one, ratings correlated highly between the visual and the haptic modality. PCA revealed that all material samples were similarly organized within the perceptual space in both modalities. Moreover, in both senses the first two principal components were dominated by hardness and roughness. These are two material features that are fundamental for the haptic sense. We conclude that although the haptic sense seems to be crucial for material perception, the information it can gather alone might not be quite fine-grained and rich enough for perfect material recognition.
23
Transfer of object category knowledge across visual and haptic modalities: experimental and computational studies. Cognition 2012; 126:135-48. [PMID: 23102553 DOI: 10.1016/j.cognition.2012.08.005] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.2] [Received: 01/02/2012] [Revised: 08/16/2012] [Accepted: 08/19/2012] [Indexed: 11/20/2022]
Abstract
We study people's abilities to transfer object category knowledge across visual and haptic domains. If a person learns to categorize objects based on inputs from one sensory modality, can the person categorize these same objects when the objects are perceived through another modality? Can the person categorize novel objects from the same categories when these objects are, again, perceived through another modality? Our work makes three contributions. First, by fabricating Fribbles (3-D, multi-part objects with a categorical structure), we developed visual-haptic stimuli that are highly complex and realistic, and thus more ecologically valid than objects that are typically used in haptic or visual-haptic experiments. Based on these stimuli, we developed the See and Grasp data set, a data set containing both visual and haptic features of the Fribbles, and are making this data set freely available on the world wide web. Second, complementary to previous research such as studies asking if people transfer knowledge of object identity across visual and haptic domains, we conducted an experiment evaluating whether people transfer object category knowledge across these domains. Our data clearly indicate that we do. Third, we developed a computational model that learns multisensory representations of prototypical 3-D shape. Similar to previous work, the model uses shape primitives to represent parts, and spatial relations among primitives to represent multi-part objects. However, it is distinct in its use of a Bayesian inference algorithm allowing it to acquire multisensory representations, and sensory-specific forward models allowing it to predict visual or haptic features from multisensory representations. The model provides an excellent qualitative account of our experimental data, thereby illustrating the potential importance of multisensory representations and sensory-specific forward models to multisensory perception.
24
Solid shape discrimination from vision and haptics: natural objects (Capsicum annuum) and Gibson's "feelies". Exp Brain Res 2012; 222:321-32. [PMID: 22918607 DOI: 10.1007/s00221-012-3220-7] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.7] [Received: 01/03/2012] [Accepted: 08/04/2012] [Indexed: 10/28/2022]
Abstract
A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while other researchers have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant past findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects--bell peppers). However, the methodology varied across experiments. Experiment 1 used random 3-dimensional (3-D) orientations of the stimulus objects, and the conditions were full-cue (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. We replicated both previous findings in the current study. When we restricted visual and haptic information (and placed the stimulus objects in the same orientation on every trial), the participants' visual performance was superior to that obtained for haptics (replicating the earlier findings of Davidson et al. in Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants able to actively manipulate objects and see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.
25
Gaißert N, Waterkamp S, Fleming RW, Bülthoff I. Haptic categorical perception of shape. PLoS One 2012; 7:e43062. [PMID: 22900089 PMCID: PMC3416786 DOI: 10.1371/journal.pone.0043062] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Received: 02/06/2012] [Accepted: 07/17/2012] [Indexed: 11/18/2022] Open
Abstract
Categorization and categorical perception have been extensively studied, mainly in vision and audition. In the haptic domain, our ability to categorize objects has also been demonstrated in earlier studies. Here we show for the first time that categorical perception also occurs in haptic shape perception. We generated a continuum of complex shapes by morphing between two volumetric objects. Using similarity ratings and multidimensional scaling we ensured that participants could haptically discriminate all objects equally. Next, we performed classification and discrimination tasks. After a short training with the two shape categories, both tasks revealed categorical perception effects. Training leads to between-category expansion resulting in higher discriminability of physical differences between pairs of stimuli straddling the category boundary. Thus, even brief training can alter haptic representations of shape. This suggests that the weights attached to various haptic shape features can be changed dynamically in response to top-down information about class membership.
Affiliation(s)
- Nina Gaißert
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- * E-mail: (IB); (NG)
- Roland W. Fleming
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- University of Giessen, Giessen, Germany
- Isabelle Bülthoff
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
- * E-mail: (IB); (NG)