1
Cooney SM, Holmes CA, Cappagli G, Cocchi E, Gori M, Newell FN. Susceptibility to spatial illusions does not depend on visual experience: Evidence from sighted and blind children. Q J Exp Psychol (Hove) 2025:17470218251336082. PMID: 40205750; DOI: 10.1177/17470218251336082.
Abstract
Visuospatial illusions may be a by-product of learned regularities in the environment, or they may reflect the recruitment of sensory mechanisms that, in some contexts, provide an erroneous spatial estimate. Young children experience visual illusions, and blind adults are susceptible to them using touch alone, suggesting that the perceptual inferences influencing illusions are amodal and rapidly acquired. However, other evidence, such as visual illusions in the newly sighted, points to the involvement of innate mechanisms. To help tease apart cognitive from sensory influences, we investigated susceptibility to the Ebbinghaus, Müller-Lyer and Vertical-Horizontal illusions in children aged 6-14 years following visual-only, haptic-only and bimodal exploration. Consistent with previous findings, children of all ages were susceptible to all three visual illusions. In addition, illusions of extent, but not of size, were experienced using haptics alone. We then tested 17 congenitally blind children to investigate whether the illusions were mediated by vision. Similar to their sighted counterparts, blind children were also susceptible to illusions following haptic exploration, suggesting that early visual experience is not necessary for spatial illusions to be perceived. Reduced susceptibility to some illusions in older children further implies that explicit or formal knowledge of spatial relations is unlikely to mediate these experiences. Instead, the results are consistent with previous evidence for cross-modal interactions in 'visual' brain regions and point to the possibility that illusions may be driven by innate developmental processes that are not entirely dependent on, although are refined by, visual experience.
Affiliation(s)
- Sarah M Cooney: Institute of Neuroscience and School of Psychology, Trinity College Dublin, Ireland; School of Psychology, University College Dublin, Dublin, Ireland
- Corinne A Holmes: Institute of Neuroscience and School of Psychology, Trinity College Dublin, Ireland
- Giulia Cappagli: U-VIP: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Elena Cocchi: Istituto David Chiossone per Ciechi ed Ipovedenti ONLUS, Genova, Italy
- Monica Gori: U-VIP: Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Fiona N Newell: Institute of Neuroscience and School of Psychology, Trinity College Dublin, Ireland; Department of Psychology, New York University Abu Dhabi, Abu Dhabi, UAE
2
Dhar S, Ahmad F, Deshpande A, Rana SS, Ahmed A T, Priyadarsini S. 3-Dimensional printing and bioprinting in neurological sciences: applications in surgery, imaging, tissue engineering, and pharmacology and therapeutics. J Mater Sci Mater Med 2025; 36:32. PMID: 40205004; PMCID: PMC11982170; DOI: 10.1007/s10856-025-06877-4.
Abstract
The rapid evolution of three-dimensional printing (3DP) has significantly impacted the medical field. In neurology, for instance, 3DP has been pivotal in personalized surgical planning and education. It has also facilitated the creation of implants, microfluidic devices, and optogenetic probes, with substantial implications for medical and research applications, and 3D-printed nasal casts are showing great promise for targeted brain drug delivery. 3DP has further aided the creation of 3D "phantoms" aligned with advancements in neuroimaging, and the design of intricate objects for investigating the neurobiology of sensory perception. Furthermore, the emergence of 3D bioprinting (3DBP), a fusion of 3D printing and cell biology, has created new avenues in neural tissue engineering: the effective and ethical creation of tissue-like biomimetic constructs has enabled mechanistic, regenerative, and therapeutic evaluations. While individual reviews have explored the applications of 3DP or 3DBP, a comprehensive review encompassing the success stories across multiple facets of both technologies in neurosurgery, neuroimaging, and neuro-regeneration has been lacking. This review consolidates recent achievements of 3DP and 3DBP across various neurological science domains to encourage interdisciplinary research among neurologists, neurobiologists, and engineers, and to promote the extension of 3DP and 3DBP methodologies into novel areas of neurological science research and practice.
Affiliation(s)
- Sreejita Dhar: Department of Biotechnology, School of Bio Sciences and Technology (SBST), Vellore Institute of Technology (VIT), Vellore, 632014, India
- Faraz Ahmad: Department of Biotechnology, School of Bio Sciences and Technology (SBST), Vellore Institute of Technology (VIT), Vellore, 632014, India
- Aditi Deshpande: Department of Biotechnology, School of Bio Sciences and Technology (SBST), Vellore Institute of Technology (VIT), Vellore, 632014, India
- Sandeep Singh Rana: Department of Bio Sciences, School of Bio Sciences and Technology (SBST), Vellore Institute of Technology (VIT), Vellore, 632014, India
- Toufeeq Ahmed A: Department of Biotechnology, School of Bio Sciences and Technology (SBST), Vellore Institute of Technology (VIT), Vellore, 632014, India
3
Wen X, Malchin L, Womelsdorf T. A Toolbox for Generating Multidimensional 3-D Objects with Fine-Controlled Feature Space: Quaddle 2.0. bioRxiv 2024:2024.12.19.629479 (preprint). PMID: 39763807; PMCID: PMC11702700; DOI: 10.1101/2024.12.19.629479.
Abstract
Multidimensional 3D-rendered objects are an important component of vision research and video-gaming applications, but it has remained challenging to parametrically control and efficiently generate such objects. Here, we describe a toolbox for controlling and efficiently generating 3D-rendered objects composed of ten separate visual feature dimensions that can be fine-adjusted using Python scripts. The toolbox defines objects as multidimensional feature vectors with primary dimensions (object-body-related features), secondary dimensions (head-related features), and accessory dimensions (including arms, ears, or beaks). The toolbox interfaces with the freely available Blender software to create objects. It allows users to gradually morph features along multiple feature dimensions, set the desired feature similarity among objects, and automate the generation of multiple objects in 3D object and 2D image formats. We document the use of multidimensional objects in a sequence learning task that embeds objects in a 3D-rendered augmented-reality environment controlled by the Unity game engine. Taken together, the toolbox enables the efficient generation of multidimensional objects with fine control of low-level features and higher-level object similarity, useful for visual cognition research and immersive visual environments.
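The toolbox's own scripts are not reproduced in this listing; as an illustrative sketch of the core idea the abstract describes (objects as feature vectors that can be gradually morphed and compared for similarity), with hypothetical function names and features assumed normalized to [0, 1]:

```python
import numpy as np

def morph(obj_a, obj_b, alpha):
    """Linear morph between two objects' feature vectors.

    alpha = 0 reproduces obj_a, alpha = 1 reproduces obj_b;
    intermediate values yield graded blends along every dimension.
    """
    a, b = np.asarray(obj_a, float), np.asarray(obj_b, float)
    return (1 - alpha) * a + alpha * b

def feature_similarity(obj_a, obj_b):
    """Similarity as 1 minus normalized Euclidean distance (features in [0, 1])."""
    a, b = np.asarray(obj_a, float), np.asarray(obj_b, float)
    return 1.0 - np.linalg.norm(a - b) / np.sqrt(len(a))

# A hypothetical ten-dimensional object vector: body, head, and accessory features.
plain = np.zeros(10)
ornate = np.ones(10)
halfway = morph(plain, ornate, 0.5)  # every feature blended to 0.5
```

Choosing alpha per object pair is what lets a generator fix the desired similarity structure among objects before exporting them, which is the kind of control the abstract emphasizes.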
Affiliation(s)
- Xuan Wen: Department of Psychology, Vanderbilt University, Nashville, TN 37240; Vanderbilt Brain Institute, Nashville, TN 37240
- Leo Malchin: Department of Psychology, Vanderbilt University, Nashville, TN 37240
- Thilo Womelsdorf: Department of Psychology, Vanderbilt University, Nashville, TN 37240; Vanderbilt Brain Institute, Nashville, TN 37240; Department of Biomedical Engineering, Vanderbilt University, Nashville, TN 37240
4
Sun Y, Yao L, Fu Q. Crossmodal Correspondence Mediates Crossmodal Transfer from Visual to Auditory Stimuli in Category Learning. J Intell 2024; 12:80. PMID: 39330459; PMCID: PMC11433196; DOI: 10.3390/jintelligence12090080.
Abstract
This article investigated whether crossmodal correspondence, as a sensory translation phenomenon, can mediate crossmodal transfer from visual to auditory stimuli in category learning and whether multimodal category learning can influence the crossmodal correspondence between auditory and visual stimuli. Experiment 1 showed that the category knowledge acquired from elevation stimuli affected the categorization of pitch stimuli when there were robust crossmodal correspondence effects between elevation and size, indicating that crossmodal transfer occurred between elevation and pitch stimuli. Experiments 2 and 3 revealed that the size category knowledge could not be transferred to the categorization of pitches, but interestingly, size and pitch category learning determined the direction of the pitch-size correspondence, suggesting that the pitch-size correspondence was not stable and could be determined using multimodal category learning. Experiment 4 provided further evidence that there was no crossmodal transfer between size and pitch, due to the absence of a robust pitch-size correspondence. These results demonstrated that crossmodal transfer can occur between audio-visual stimuli with crossmodal correspondence, and multisensory category learning can change the corresponding relationship between audio-visual stimuli. These findings suggest that crossmodal transfer and crossmodal correspondence share similar abstract representations, which can be mediated by semantic content such as category labels.
Affiliation(s)
- Ying Sun: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; University of Chinese Academy of Sciences, Beijing 101408, China; College of Humanities and Education, Inner Mongolia Medical University, Hohhot 010110, China
- Liansheng Yao: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; University of Chinese Academy of Sciences, Beijing 101408, China
- Qiufang Fu: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; University of Chinese Academy of Sciences, Beijing 101408, China
5
AlAhmed F, Rau A, Wallraven C. Visuo-haptic processing of unfamiliar shapes: Comparing children and adults. PLoS One 2023; 18:e0286905. PMID: 37889903; PMCID: PMC10610448; DOI: 10.1371/journal.pone.0286905.
Abstract
The question of how our sensory perception abilities develop has been an active area of research, establishing trajectories of development from infancy that last well into late childhood and even adolescence. In this context, several studies have established changes in sensory processing of vision and touch around the age of 8 to 9 years. In this experiment, we explored the visual and haptic perceptual development of elementary school children aged 6-11 in similarity-rating tasks with unfamiliar objects and compared their performance to that of adults. Participants in separate child and adult groups were presented with parametrically defined objects to be explored haptically or visually. Our results showed that the raw similarity ratings of the children had more variability than those of the adults. A detailed multidimensional scaling analysis revealed that the reconstructed perceptual space of the adult haptic group was significantly closer to the parameter space than that of the children, whereas both groups' visual perceptual spaces were similarly well reconstructed. Beyond this, however, we found no clear evidence for an age effect in either modality within the children group. These results suggest that haptic processing of unfamiliar, abstract shapes may continue to develop beyond the age of 11 years, into adolescence.
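The multidimensional scaling (MDS) analysis mentioned in this abstract can be sketched as follows. The data here are made up, and the comparison with the parameter space uses a Procrustes fit as one plausible choice, not necessarily the paper's exact pipeline:

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial import procrustes

# Hypothetical data: pairwise dissimilarities (e.g., 1 - similarity rating)
# for 9 objects laid out on a 3x3 parameter grid.
rng = np.random.default_rng(0)
params = np.array([(x, y) for x in range(3) for y in range(3)], float)
true_d = np.linalg.norm(params[:, None] - params[None, :], axis=-1)
rated_d = true_d + rng.normal(0, 0.1, true_d.shape)  # noisy "ratings"
rated_d = (rated_d + rated_d.T) / 2                  # symmetrize
np.fill_diagonal(rated_d, 0)

# Reconstruct a 2-D perceptual space from the rated dissimilarities.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
space = mds.fit_transform(rated_d)

# Procrustes disparity: how close is the perceptual space to the parameter
# space after optimal translation, scaling, and rotation? Lower = closer.
_, _, disparity = procrustes(params, space)
print(f"Procrustes disparity: {disparity:.3f}")
```

A group whose reconstructed space yields a smaller disparity (here, the adult haptic group versus the children) tracks the generating parameters more faithfully.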
Affiliation(s)
- Furat AlAhmed: Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Anne Rau: Department of Psychology, Eberhard Karls University of Tübingen, Tübingen, Germany; Department of Psychiatry, University Hospital Tübingen, Tübingen, Germany
- Christian Wallraven: Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
6
Visual and Tactile Sensory Systems Share Common Features in Object Recognition. eNeuro 2021; 8:ENEURO.0101-21.2021. PMID: 34544756; PMCID: PMC8493885; DOI: 10.1523/eneuro.0101-21.2021.
Abstract
Although we use our visual and tactile sensory systems interchangeably for object recognition on a daily basis, little is known about the mechanism underlying this ability. This study examined how 3D shape features of objects form two congruent and interchangeable visual and tactile perceptual spaces in healthy male and female participants. Since active exploration plays an important role in shape processing, a virtual reality environment was used to visually explore 3D objects called digital embryos without using the tactile sense. In addition, during the tactile procedure, blindfolded participants actively palpated a 3D-printed version of the same objects with both hands. We first demonstrated that the visual and tactile perceptual spaces were highly similar. We then extracted a series of 3D shape features to investigate how visual and tactile exploration can lead to the correct identification of the relationships between objects. The results indicate that both modalities share the same shape features to form highly similar veridical spaces. This finding suggests that the visual and tactile systems might apply similar cognitive processes to sensory inputs, enabling humans to rely on one modality alone, in the absence of the other, to recognize surrounding objects.
7
Perceived similarity ratings predict generalization success after traditional category learning and a new paired-associate learning task. Psychon Bull Rev 2021; 27:791-800. PMID: 32472329; DOI: 10.3758/s13423-020-01754-3.
Abstract
The current study investigated category learning across two experiments using face-blend stimuli that formed face families controlled for within- and between-category similarity. Experiment 1 was a traditional feedback-based category-learning task, with three family names serving as category labels. In Experiment 2, the shared family name was encountered in the context of a face-full-name paired-associate learning task, with a unique first name for each face. A subsequent test that required participants to categorize new faces from each family showed successful generalization in both experiments. Furthermore, perceived similarity ratings for pairs of faces were collected before and after learning, prior to the generalization test. In Experiment 1, similarity ratings increased for faces within a family and decreased for faces that were physically similar but belonged to different families. In Experiment 2, overall similarity ratings decreased after learning, driven primarily by decreases for physically similar faces from different families. The post-learning category bias in similarity ratings was predictive of subsequent generalization success in both experiments. The results indicate that individuals formed generalizable category knowledge prior to an explicit demand to generalize and did so both when attention was directed towards category-relevant features (Experiment 1) and when attention was directed towards individuating faces within a family (Experiment 2). The results tie together research on category learning and categorical perception and extend them beyond a traditional category-learning task.
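The "category bias" measure described in this abstract (similarity increases for within-family pairs, decreases for physically similar between-family pairs) can be sketched with made-up numbers; the function name and the 1-7 rating scale are hypothetical:

```python
import numpy as np

def category_bias(pre, post, same_family):
    """Mean post-minus-pre similarity change for within-family pairs,
    minus the same change for between-family pairs.

    Positive values mean ratings moved in the category-consistent
    direction after learning.
    """
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    same = np.asarray(same_family, bool)
    delta = post - pre
    return delta[same].mean() - delta[~same].mean()

# Hypothetical ratings (scale 1-7) for six face pairs:
pre  = [4.0, 4.5, 4.2, 4.1, 4.3, 4.4]
post = [5.0, 5.2, 5.1, 3.0, 3.2, 3.1]
same = [True, True, True, False, False, False]
print(category_bias(pre, post, same))  # positive: a category-consistent shift
```

Per-participant bias scores like this one could then be correlated with generalization accuracy, which is the predictive relationship the abstract reports.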
8
Kamermans KL, Pouw W, Mast FW, Paas F. Reinterpretation in visual imagery is possible without visual cues: a validation of previous research. Psychol Res 2019; 83:1237-1250. PMID: 29242975; PMCID: PMC6647238; DOI: 10.1007/s00426-017-0956-5.
Abstract
Is visual reinterpretation of bistable figures (e.g., the duck/rabbit figure) possible in visual imagery? Current consensus suggests that it is in principle possible, given converging evidence for quasi-pictorial functioning of visual imagery. Yet studies that have directly tested and found evidence for reinterpretation in visual imagery allow for the possibility that reinterpretation was already achieved during memorization of the figure(s). One study resolved this issue, providing evidence for reinterpretation in visual imagery (Mast and Kosslyn, Cognition 86:57-70, 2002). However, participants in that study performed reinterpretations with the aid of visual cues; hence, reinterpretation was not performed with mental imagery alone. In this study, we therefore assessed the possibility of reinterpretation without visual support. We further explored the possible role of haptic cues to assess the multimodal nature of mental imagery. Fifty-three participants were consecutively presented with three to-be-remembered bistable 2-D figures (reinterpretable when rotated 180°), two of which were visually inspected and one of which was explored haptically. After memorization of the figures, a visually bistable exemplar figure was presented to ensure understanding of the concept of visual bistability. During recall, 11 of the 36 participants (30.6%) who did not spot bistability during memorization successfully performed reinterpretations when instructed to mentally rotate their visual image, but additional haptic cues during mental imagery did not inflate reinterpretation ability. This study validates previous findings that reinterpretation in visual imagery is possible.
Affiliation(s)
- Kevin L Kamermans: Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Wim Pouw: Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, Rotterdam, The Netherlands; Department of Psychological Sciences, University of Connecticut, Storrs, USA
- Fred W Mast: Department of Psychology, University of Bern, Bern, Switzerland
- Fred Paas: Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, Rotterdam, The Netherlands; Early Start Research Institute, University of Wollongong, Wollongong, Australia
9
Kamermans KL, Pouw W, Fassi L, Aslanidou A, Paas F, Hostetter AB. The role of gesture as simulated action in reinterpretation of mental imagery. Acta Psychol (Amst) 2019; 197:131-142. PMID: 31146090; DOI: 10.1016/j.actpsy.2019.05.004.
Abstract
In two experiments, we examined the role of gesture in reinterpreting a mental image. In Experiment 1, we found that participants gestured more about a figure they had learned through manual exploration than about a figure they had learned through vision. This supports claims that gestures emerge from the activation of perception-relevant actions during mental imagery. In Experiment 2, we investigated whether such gestures have a causal role in affecting the quality of mental imagery. Participants were randomly assigned to gesture, not gesture, or engage in a manual interference task as they attempted to reinterpret a figure they had learned through manual exploration. We found that manual interference significantly impaired participants' success on the task. Taken together, these results suggest that gestures reflect mental imaginings of interactions with a mental image and that these imaginings are critically important for mental manipulation and reinterpretation of that image. However, our results suggest that enacting the imagined movements in gesture is not critically important on this particular task.
Affiliation(s)
- Kevin L Kamermans: Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, the Netherlands
- Wim Pouw: Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, the Netherlands; Department of Psychological Sciences, University of Connecticut, USA
- Luisa Fassi: Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, the Netherlands
- Asimina Aslanidou: Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, the Netherlands
- Fred Paas: Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, the Netherlands; School of Education/Early Start, University of Wollongong, Australia
10
Tivadar RI, Rouillard T, Chappaz C, Knebel JF, Turoman N, Anaflous F, Roche J, Matusz PJ, Murray MM. Mental Rotation of Digitally-Rendered Haptic Objects. Front Integr Neurosci 2019; 13:7. PMID: 30930756; PMCID: PMC6427928; DOI: 10.3389/fnint.2019.00007.
Abstract
Sensory substitution is an effective means to rehabilitate many visual functions after visual impairment or blindness. Tactile information, for example, is particularly useful for functions such as reading, mental rotation, shape recognition, or exploration of space. Extant haptic technologies typically rely on real physical objects or pneumatically driven renderings and thus provide a limited library of stimuli to users. New developments in digital haptic technologies now make it possible to actively simulate an unprecedented range of tactile sensations. We provide a proof-of-concept for a new type of technology (hereafter haptic tablet) that renders haptic feedback by modulating the friction of a flat screen through ultrasonic vibrations of varying shapes to create the sensation of texture when the screen is actively explored. We reasoned that participants should be able to create mental representations of letters presented in normal and mirror-reversed haptic form without the use of any visual information and to manipulate such representations in a mental rotation task. Healthy sighted, blindfolded volunteers were trained to discriminate between two letters (either L and P, or F and G; counterbalanced across participants) on a haptic tablet. They then tactually explored all four letters in normal or mirror-reversed form at different rotations (0°, 90°, 180°, and 270°) and indicated letter form (i.e., normal or mirror-reversed) by pressing one of two mouse buttons. We observed the typical effect of rotation angle on object discrimination performance (i.e., greater deviation from 0° resulted in worse performance) for trained letters, consistent with mental rotation of these haptically-rendered objects. We likewise observed generally slower and less accurate performance with mirror-reversed compared to prototypically oriented stimuli. 
Our findings extend existing research in multisensory object recognition by indicating that a new technology simulating active haptic feedback can support the generation and spatial manipulation of mental representations of objects. Thus, such haptic tablets can offer a new avenue to mitigate visual impairments and train skills dependent on mental object-based representations and their spatial manipulation.
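The rotation effect reported above (worse performance with greater deviation from 0°) is commonly summarized as a response-time slope over angular deviation from upright. A minimal sketch with hypothetical data, folding 270° to 90° of deviation since rotation can go either way:

```python
import numpy as np

# Hypothetical mean response times (s) at each presented rotation.
angles = np.array([0, 90, 180, 270])
rts    = np.array([1.20, 1.55, 1.90, 1.52])

# Fold angles onto 0-180 deg of deviation from upright (270 -> 90).
deviation = np.minimum(angles, 360 - angles)

# Least-squares slope: extra seconds per degree of angular deviation.
slope, intercept = np.polyfit(deviation, rts, 1)
print(f"{slope * 90:.2f} s per 90 deg of rotation")
```

A reliably positive slope for trained letters is the signature of mental rotation that the study takes as evidence that the haptically rendered objects were spatially manipulated in imagery.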
Affiliation(s)
- Ruxandra I. Tivadar: The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Jean-François Knebel: The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland
- Nora Turoman: The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland
- Fatima Anaflous: Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Jean Roche: Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland
- Pawel J. Matusz: The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Information Systems Institute at the University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- Micah M. Murray: The Laboratory for Investigative Neurophysiology (LINE), Department of Radiology and Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; Department of Ophthalmology, Fondation Asile des Aveugles, Lausanne, Switzerland; Electroencephalography Brain Mapping Core, Center for Biomedical Imaging (CIBM) of Lausanne and Geneva, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
11
Quaddles: A multidimensional 3-D object set with parametrically controlled and customizable features. Behav Res Methods 2018; 51:2522-2532. PMID: 30088255; DOI: 10.3758/s13428-018-1097-5.
Abstract
Many studies of vision and cognition require novel three-dimensional object sets defined by a parametric feature space. Creating such sets and verifying that they are suitable for a given task, however, can be time-consuming and effortful. Here we present a new set of multidimensional objects, Quaddles, designed for studies of feature-based learning and attention but adaptable for many research purposes. Quaddles have features that are all equally visible from any angle around the vertical axis and can be designed to be equally discriminable along feature dimensions; these objects do not show strong or consistent response biases, with a small number of quantified exceptions. They are available as two-dimensional images, rotating videos, and FBX object files suitable for use with any modern video game engine. We also provide scripts that can be used to generate hundreds of thousands of further Quaddles, as well as examples and tutorials for modifying Quaddles or creating completely new object sets from scratch, with the aim of reducing the development time of future novel-object studies.
12
Lee YS, Sehlstedt I, Olausson H, Jung WM, Wallraven C, Chae Y. Visual and physical affective touch delivered by a rotary tactile stimulation device: A human psychophysical study. Physiol Behav 2018; 185:55-60. DOI: 10.1016/j.physbeh.2017.12.022.
13
Carducci P, Schwing R, Huber L, Truppa V. Tactile information improves visual object discrimination in kea, Nestor notabilis, and capuchin monkeys, Sapajus spp. Anim Behav 2018. DOI: 10.1016/j.anbehav.2017.11.018.
14
Haptic adaptation to slant: No transfer between exploration modes. Sci Rep 2016; 6:34412. PMID: 27698392; PMCID: PMC5048134; DOI: 10.1038/srep34412.
Abstract
Human touch is an inherently active sense: to estimate an object's shape, humans often move their hand across its surface. This way the object is sampled both in a serial fashion (sampling different parts of the object across time) and in a parallel fashion (sampling using different parts of the hand simultaneously). Both the serial (moving a single finger) and parallel (static contact with the entire hand) exploration modes provide reliable and similar global shape information, suggesting the possibility that this information is shared early in the sensory cortex. In contrast, we here show the opposite. Using an adaptation-and-transfer paradigm, a change in haptic perception was induced by slant adaptation using either the serial or the parallel exploration mode. A unified shape-based coding would predict that this would equally affect perception using other exploration modes. However, we found that adaptation-induced perceptual changes did not transfer between exploration modes. Instead, serial and parallel exploration components adapted simultaneously, but to different kinaesthetic aspects of exploration behaviour rather than to object shape per se. These results indicate that a potential combination of information from different exploration modes can only occur at downstream cortical processing stages, at which adaptation is no longer effective.
15
Lee Masson H, Wallraven C, Petit L. "Can touch this": Cross-modal shape categorization performance is associated with microstructural characteristics of white matter association pathways. Hum Brain Mapp 2016; 38:842-854. PMID: 27696592; DOI: 10.1002/hbm.23422.
Abstract
Previous studies on visuo-haptic shape processing provide evidence that visually learned shape information can transfer to the haptic domain. In particular, recent neuroimaging studies have shown that visually learned novel objects that were haptically tested recruited parts of the ventral pathway from early visual cortex to the temporal lobe. Interestingly, in such tasks considerable individual variation in cross-modal transfer performance was observed. Here, we investigate whether this individual variation may be reflected in microstructural characteristics of white-matter (WM) pathways. We first trained participants on a fine-grained categorization task of novel shapes in the visual domain, followed by a haptic categorization test. We then correlated visual training performance and haptic test performance, as well as performance on a symbol-coding task requiring visuo-motor dexterity, with microstructural properties of WM bundles potentially involved in visuo-haptic processing (the inferior longitudinal fasciculus [ILF], the fronto-temporal part of the superior longitudinal fasciculus [SLFft], and the vertical occipital fasciculus [VOF]). Behavioral results showed that haptic categorization performance was good on average but exhibited large inter-individual variability. Haptic performance also was correlated with performance in the symbol-coding task. WM analyses showed that fast visual learners exhibited higher fractional anisotropy (FA) in left SLFft and left VOF. Importantly, haptic test performance (and symbol-coding performance) correlated with FA in ILF and with axial diffusivity in SLFft. These findings provide clear evidence that individual variation in visuo-haptic performance can be linked to microstructural characteristics of WM pathways.
Affiliation(s)
- Haemy Lee Masson, Department of Brain and Cognitive Engineering, Korea University, Seoul, 136-713, Korea
- Christian Wallraven, Department of Brain and Cognitive Engineering, Korea University, Seoul, 136-713, Korea
- Laurent Petit, Groupe d'Imagerie Neurofonctionnelle, Institut des Maladies Neurodégénératives, UMR 5293, CNRS, CEA, University of Bordeaux, Bordeaux, France
16
Tal Z, Geva R, Amedi A. The origins of metamodality in visual object area LO: Bodily topographical biases and increased functional connectivity to S1. Neuroimage 2015;127:363-375. [PMID: 26673114] [PMCID: PMC4758827] [DOI: 10.1016/j.neuroimage.2015.11.058]
Abstract
Recent evidence from blind participants suggests that visual areas are task-oriented and independent of sensory input modality, rather than specific to vision. Specifically, visual areas are thought to retain their functional selectivity when using non-visual inputs (touch or sound) even without any visual experience. However, this theory is still controversial, since it is not clear whether it also characterizes the sighted brain, and whether the reported results in the sighted reflect fundamental amodal processes or are largely an epiphenomenon. In the current study, we addressed these questions using a series of fMRI experiments that explored visual cortex responses to passive touch on various body parts, and the coupling between the parietal and visual cortices as manifested by functional connectivity. We show that passive touch robustly activated the object-selective parts of the lateral occipital (LO) cortex while deactivating almost all other occipital retinotopic areas. Furthermore, passive touch responses in the visual cortex were specific to hand and upper-trunk stimulations. Psychophysiological interaction (PPI) analysis suggests that LO is functionally connected to the hand area in the primary somatosensory homunculus (S1) during hand and shoulder stimulations, but not during stimulation of any of the other body parts. We suggest that LO is a fundamental hub that serves as a node between visual object-selective areas and the S1 hand representation, probably due to the critical evolutionary role of touch in object recognition and manipulation. These results might also point to a more general principle: recruitment or deactivation of the visual cortex by other sensory input depends on the ecological relevance of the information conveyed by that input to the task or computations carried out by each area or network. This is likely to rely on the unique and differential pattern of connectivity of each visual area with the rest of the brain.
Affiliation(s)
- Zohar Tal, Department of Medical Neurobiology, Institute of Medical Research Israel - Canada (IMRIC), Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
- Ran Geva, Department of Medical Neurobiology, Institute of Medical Research Israel - Canada (IMRIC), Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
- Amir Amedi, Department of Medical Neurobiology, Institute of Medical Research Israel - Canada (IMRIC), Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem 91220, Israel; The Edmond and Lily Safra Center for Brain Science (ELSC), The Hebrew University of Jerusalem, Jerusalem 91220, Israel; Program of Cognitive Science, The Hebrew University of Jerusalem, Jerusalem 91220, Israel
17
Erdogan G, Yildirim I, Jacobs RA. From Sensory Signals to Modality-Independent Conceptual Representations: A Probabilistic Language of Thought Approach. PLoS Comput Biol 2015;11:e1004610. [PMID: 26554704] [PMCID: PMC4640543] [DOI: 10.1371/journal.pcbi.1004610]
Abstract
People learn modality-independent, conceptual representations from modality-specific sensory signals. Here, we hypothesize that any system that accomplishes this feat will include three components: a representational language for characterizing modality-independent representations, a set of sensory-specific forward models for mapping from modality-independent representations to sensory signals, and an inference algorithm for inverting forward models; that is, an algorithm for using sensory signals to infer modality-independent representations. To evaluate this hypothesis, we instantiate it in the form of a computational model that learns object shape representations from visual and/or haptic signals. The model uses a probabilistic grammar to characterize modality-independent representations of object shape, uses a computer graphics toolkit and a human hand simulator to map from object representations to visual and haptic features, respectively, and uses a Bayesian inference algorithm to infer modality-independent object representations from visual and/or haptic signals. Simulation results show that the model infers identical object representations when an object is viewed, grasped, or both. That is, the model's percepts are modality invariant. We also report the results of an experiment in which different subjects rated the similarity of pairs of objects in different sensory conditions, and show that the model provides a very accurate account of subjects' ratings. Conceptually, this research significantly contributes to our understanding of modality invariance, an important type of perceptual constancy, by demonstrating how modality-independent representations can be acquired and used. Methodologically, it provides an important contribution to cognitive modeling, particularly the emerging probabilistic language-of-thought approach, by showing how symbolic and statistical approaches can be combined in order to understand aspects of human perception.
Affiliation(s)
- Goker Erdogan, Department of Brain & Cognitive Sciences, University of Rochester, Rochester, New York, United States of America
- Ilker Yildirim, Department of Brain & Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America; Laboratory of Neural Systems, The Rockefeller University, New York, New York, United States of America
- Robert A. Jacobs, Department of Brain & Cognitive Sciences, University of Rochester, Rochester, New York, United States of America
18
Stone KD, Gonzalez CLR. The contributions of vision and haptics to reaching and grasping. Front Psychol 2015;6:1403. [PMID: 26441777] [PMCID: PMC4584943] [DOI: 10.3389/fpsyg.2015.01403]
Abstract
This review aims to provide a comprehensive outlook on the sensory (visual and haptic) contributions to reaching and grasping. The focus is on studies of developing children, of normal and neuropsychological populations, and of sensory-deprived individuals. Studies have suggested a right-hand/left-hemisphere specialization for visually guided grasping and a left-hand/right-hemisphere specialization for haptically guided object recognition. This poses the interesting possibility that when vision is not available and grasping relies heavily on the haptic system, there is an advantage to using the left hand. We review the evidence for this possibility and dissect the unique contributions of the visual and haptic systems to grasping. We ultimately discuss how the integration of these two sensory modalities shapes hand preference.
Affiliation(s)
- Kayla D Stone, The Brain in Action Laboratory, Department of Kinesiology, University of Lethbridge, Lethbridge, AB, Canada
- Claudia L R Gonzalez, The Brain in Action Laboratory, Department of Kinesiology, University of Lethbridge, Lethbridge, AB, Canada
19
Prause N, Park J, Leung S, Miller G. Women's Preferences for Penis Size: A New Research Method Using Selection among 3D Models. PLoS One 2015;10:e0133079. [PMID: 26332467] [PMCID: PMC4558040] [DOI: 10.1371/journal.pone.0133079]
Abstract
Women's preferences for penis size may affect men's comfort with their own bodies and may have implications for sexual health. Studies of women's penis size preferences typically have relied on abstract ratings or on selection among 2D images of flaccid penises. This study used haptic stimuli to allow assessment of women's size recall accuracy for the first time, as well as to examine their preferences for erect penis sizes in different relationship contexts. Women (N = 75) selected among 33 3D models. Women recalled model size accurately using this method, although they made more errors with respect to penis length than circumference. Women preferred a penis of slightly larger circumference and length for one-time (length = 6.4 inches/16.3 cm; circumference = 5.0 inches/12.7 cm) versus long-term (length = 6.3 inches/16.0 cm; circumference = 4.8 inches/12.2 cm) sexual partners. These first estimates of erect penis size preferences using 3D models suggest that women accurately recall size and prefer penises only slightly larger than average.
Affiliation(s)
- Nicole Prause, Department of Psychiatry, University of California Los Angeles, Los Angeles, California, United States of America
- Jaymie Park, Department of Psychiatry, University of California Los Angeles, Los Angeles, California, United States of America
- Shannon Leung, Department of Psychiatry, University of California Los Angeles, Los Angeles, California, United States of America
- Geoffrey Miller, Department of Psychology, University of New Mexico, Albuquerque, New Mexico, United States of America
20
Lee Masson H, Bulthé J, Op de Beeck HP, Wallraven C. Visual and Haptic Shape Processing in the Human Brain: Unisensory Processing, Multisensory Convergence, and Top-Down Influences. Cereb Cortex 2015. [DOI: 10.1093/cercor/bhv170]
21
Truppa V, Carducci P, Trapanese C, Hanus D. Does presentation format influence visual size discrimination in tufted capuchin monkeys (Sapajus spp.)? PLoS One 2015;10:e0126001. [PMID: 25927363] [PMCID: PMC4416040] [DOI: 10.1371/journal.pone.0126001]
Abstract
Most experimental paradigms to study visual cognition in humans and non-human species are based on discrimination tasks involving the choice between two or more visual stimuli. To this end, different types of stimuli and procedures for stimulus presentation are used, which highlights the necessity to compare data obtained with different methods. The present study assessed whether, and to what extent, capuchin monkeys' ability to solve a size discrimination problem is influenced by the type of procedure used to present the problem. Capuchins' ability to generalise knowledge across different tasks was also evaluated. We trained eight adult tufted capuchin monkeys to select the larger of two stimuli of the same shape and different sizes by using pairs of food items (Experiment 1), computer images (Experiment 1) and objects (Experiment 2). Our results indicated that monkeys achieved the learning criterion faster with food stimuli compared to both images and objects. They also required consistently fewer trials with objects than with images. Moreover, female capuchins had higher levels of acquisition accuracy with food stimuli than with images. Finally, capuchins did not immediately transfer the solution of the problem acquired in one task condition to the other conditions. Overall, these findings suggest that, even in relatively simple visual discrimination problems where a single perceptual dimension (i.e., size) has to be judged, learning speed strongly depends on the mode of presentation.
Affiliation(s)
- Valentina Truppa, Institute of Cognitive Sciences and Technologies, National Research Council (CNR), Rome, Italy
- Paola Carducci, Institute of Cognitive Sciences and Technologies, National Research Council (CNR), Rome, Italy; Department of Biology, University of Rome Tor Vergata, Rome, Italy
- Cinzia Trapanese, Institute of Cognitive Sciences and Technologies, National Research Council (CNR), Rome, Italy
- Daniel Hanus, Department of Developmental and Comparative Psychology, Max Planck Institute for Evolutionary Anthropology, Leipzig, Germany
22
Folstein JR, Palmeri TJ, Gauthier I. Perceptual advantage for category-relevant perceptual dimensions: the case of shape and motion. Front Psychol 2014;5:1394. [PMID: 25520691] [PMCID: PMC4249057] [DOI: 10.3389/fpsyg.2014.01394]
Abstract
Category learning facilitates perception along relevant stimulus dimensions, even when tested in a discrimination task that does not require categorization. While this general phenomenon has been demonstrated previously, perceptual facilitation along dimensions has been documented by measuring different specific phenomena in different studies using different kinds of objects. Across several object domains, there is support for acquired distinctiveness, the stretching of a perceptual dimension relevant to learned categories. Studies using faces and studies using simple separable visual dimensions have also found evidence of acquired equivalence, the shrinking of a perceptual dimension irrelevant to learned categories, and categorical perception, the local stretching across the category boundary. These latter two effects are rarely observed with complex non-face objects. Failures to find these effects with complex non-face objects may have been because the dimensions tested previously were perceptually integrated. Here we tested effects of category learning with non-face objects categorized along dimensions that have been found to be processed by different areas of the brain: shape and motion. While we replicated acquired distinctiveness, we found no evidence for acquired equivalence or categorical perception.
Affiliation(s)
- Thomas J Palmeri, Psychological Sciences, Vanderbilt University, Nashville, TN, USA
- Isabel Gauthier, Psychological Sciences, Vanderbilt University, Nashville, TN, USA
23
Learning multisensory representations for auditory-visual transfer of sequence category knowledge: a probabilistic language of thought approach. Psychon Bull Rev 2014;22:673-686. [PMID: 25338656] [DOI: 10.3758/s13423-014-0734-y]
Abstract
If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality. This phenomenon is an instance of cross-modal transfer of knowledge. Here, we study the Multisensory Hypothesis, which states that people extract the intrinsic, modality-independent properties of objects and events, and represent these properties in multisensory representations. These representations underlie cross-modal transfer of knowledge. We conducted an experiment evaluating whether people transfer sequence category knowledge across auditory and visual domains. Our experimental data clearly indicate that we do. We also developed a computational model accounting for our experimental results. Consistent with the probabilistic language of thought approach to cognitive modeling, our model formalizes multisensory representations as symbolic "computer programs" and uses Bayesian inference to learn these representations. Because the model demonstrates how the acquisition and use of amodal, multisensory representations can underlie cross-modal transfer of knowledge, and because the model accounts for subjects' experimental performances, our work lends credence to the Multisensory Hypothesis. Overall, our work suggests that people automatically extract and represent objects' and events' intrinsic properties, and use these properties to process and understand the same (and similar) objects and events when they are perceived through novel sensory modalities.
24
Lacey S, Sathian K. Visuo-haptic multisensory object recognition, categorization, and representation. Front Psychol 2014;5:730. [PMID: 25101014] [PMCID: PMC4102085] [DOI: 10.3389/fpsyg.2014.00730]
Abstract
Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate and we review several candidate brain regions, previously thought to be specialized for aspects of visual processing, that are now known also to be involved in analogous haptic tasks. Finally, we lay out the evidence for a model of multisensory object recognition in which top-down and bottom-up pathways to the object-selective lateral occipital complex are modulated by object familiarity and individual differences in object and spatial imagery.
Affiliation(s)
- Simon Lacey, Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- K Sathian, Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University School of Medicine, Atlanta, GA, USA; Department of Psychology, Emory University School of Medicine, Atlanta, GA, USA; Rehabilitation Research and Development Center of Excellence, Atlanta Veterans Affairs Medical Center, Decatur, GA, USA