1
Abstract
Categorical perception refers to the enhancement of perceptual sensitivity near category boundaries, generally along dimensions that are informative about category membership. However, it remains unclear exactly which dimensions are treated as informative, and why. This article reports a series of experiments in which subjects learned statistically defined categories in a novel, unfamiliar 2D perceptual space of shapes. Perceptual discrimination of various features in the space, each defined by its position and orientation relative to the maximally informative dimension, was tested before and after category learning. The results support a remarkably simple generalization: the magnitude of improvement in perceptual discrimination of each feature is proportional to the mutual information between the feature and the category variable. This finding suggests a rational basis for categorical perception, in which the precision of perceptual discrimination is tuned to the statistical structure of the environment.
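The abstract's central quantity is the mutual information between a feature and the category variable. As a minimal numerical sketch (the joint distribution below is invented for illustration, not data from the study), I(F; C) for a discretized feature F and a binary category C can be computed directly from the joint probability table:

```python
import numpy as np

# Hypothetical joint distribution p(feature_bin, category) over 3 feature
# bins and 2 categories -- illustrative values only, not from the study.
joint = np.array([[0.30, 0.05],
                  [0.10, 0.20],
                  [0.05, 0.30]])

pf = joint.sum(axis=1, keepdims=True)   # marginal p(f) over feature bins
pc = joint.sum(axis=0, keepdims=True)   # marginal p(c) over categories

# I(F;C) = sum_{f,c} p(f,c) * log2[ p(f,c) / (p(f) p(c)) ], in bits
mi = np.sum(joint * np.log2(joint / (pf * pc)))
```

Under the article's generalization, a feature with larger `mi` would show proportionally larger discrimination improvement after category learning.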
Affiliation(s)
- Jacob Feldman
- Department of Psychology, Center for Cognitive Science, Rutgers University
2
Face-voice space: Integrating visual and auditory cues in judgments of person distinctiveness. Atten Percept Psychophys 2020; 82:3710-3727. [PMID: 32696231 DOI: 10.3758/s13414-020-02084-1]
Abstract
Faces and voices each convey multiple cues enabling us to tell people apart. Research on face and voice distinctiveness commonly utilizes multidimensional space to represent these complex perceptual abilities. We extend this framework to examine how a combined face-voice space would relate to its constituent face and voice spaces. Participants rated videos of speakers for their dissimilarity in face-only, voice-only, and face-voice-together conditions. Multidimensional scaling (MDS) and regression analyses showed that whereas face-voice space more closely resembled face space, indicating visual dominance, face-voice distinctiveness was best characterized by a multiplicative integration of face-only and voice-only distinctiveness, indicating that auditory and visual cues are used interactively in person-distinctiveness judgments. Further, the multiplicative integration could not be explained by the small correlation found between face-only and voice-only distinctiveness. As an exploratory analysis, we next identified auditory and visual features that correlated with the dimensions in the MDS solutions. Features pertaining to facial width, lip movement, spectral centroid, fundamental frequency, and loudness variation were identified as important features in face-voice space. We discuss the implications of our findings in terms of person perception, recognition, and face-voice matching abilities.
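The MDS analyses described above recover spatial coordinates for speakers from pairwise dissimilarity ratings. A minimal sketch of classical (Torgerson) MDS in NumPy, using an invented 4-speaker dissimilarity matrix rather than the study's data:

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: embed n points in k dimensions
    from an n x n symmetric dissimilarity matrix d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (d ** 2) @ j              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)           # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]       # take the top-k
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0.0, None))

# Hypothetical dissimilarity ratings among 4 speakers -- illustrative only.
d = np.array([[0.0, 1.0, 2.0, 2.5],
              [1.0, 0.0, 1.5, 2.2],
              [2.0, 1.5, 0.0, 0.8],
              [2.5, 2.2, 0.8, 0.0]])

coords = classical_mds(d)   # one 2-D point per speaker
```

Regressing such coordinates on measured stimulus features (facial width, fundamental frequency, etc.) is one way to interpret the recovered dimensions.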
3
Kamermans KL, Pouw W, Mast FW, Paas F. Reinterpretation in visual imagery is possible without visual cues: a validation of previous research. Psychol Res 2019; 83:1237-1250. [PMID: 29242975 PMCID: PMC6647238 DOI: 10.1007/s00426-017-0956-5]
Abstract
Is visual reinterpretation of bistable figures (e.g., the duck/rabbit figure) possible in visual imagery? Current consensus suggests that it is in principle possible, given converging evidence of quasi-pictorial functioning of visual imagery. Yet studies that have directly tested and found evidence for reinterpretation in visual imagery allow for the possibility that reinterpretation was already achieved during memorization of the figure(s). One study resolved this issue, providing evidence for reinterpretation in visual imagery (Mast and Kosslyn, Cognition 86:57-70, 2002). However, participants in that study performed reinterpretations with the aid of visual cues; reinterpretation was thus not performed with mental imagery alone. Therefore, in this study we assessed the possibility of reinterpretation without visual support. We further explored the possible role of haptic cues to assess the multimodal nature of mental imagery. Fifty-three participants were consecutively presented with three to-be-remembered bistable 2-D figures (reinterpretable when rotated 180°); two were visually inspected and one was explored haptically. After memorization of the figures, a visually bistable exemplar figure was presented to ensure understanding of the concept of visual bistability. During recall, 11 participants (out of 36; 30.6%) who did not spot bistability during memorization successfully performed reinterpretations when instructed to mentally rotate their visual image, but additional haptic cues during mental imagery did not increase reinterpretation ability. This study validates previous findings that reinterpretation in visual imagery is possible.
Affiliation(s)
- Kevin L Kamermans
- Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Wim Pouw
- Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, Rotterdam, The Netherlands.
- Department of Psychological Sciences, University of Connecticut, Storrs, USA.
- Fred W Mast
- Department of Psychology, University of Bern, Bern, Switzerland
- Fred Paas
- Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, Rotterdam, The Netherlands
- Early Start Research Institute, University of Wollongong, Wollongong, Australia
4
Lee Masson H, Kang HM, Petit L, Wallraven C. Neuroanatomical correlates of haptic object processing: combined evidence from tractography and functional neuroimaging. Brain Struct Funct 2017; 223:619-633. [PMID: 28905126 DOI: 10.1007/s00429-017-1510-3]
Abstract
Touch delivers a wealth of information from birth onward, helping infants to acquire knowledge about a variety of important object properties using their hands. Although we are as much touch experts as we are visual experts, surprisingly little is known about how our perceptual ability in touch is linked to either functional or structural aspects of the brain. The present study therefore investigates and identifies neuroanatomical correlates of haptic perceptual performance using a novel, multi-modal approach. For this, participants' performance in a difficult shape categorization task was first measured in the haptic domain. Using a multi-modal functional magnetic resonance imaging and diffusion-weighted magnetic resonance imaging analysis pipeline, functionally defined and anatomically constrained white-matter pathways were extracted and their microstructural characteristics correlated with individual variability in haptic categorization performance. Controlling for the effects of age, total intracranial volume, and head movements in the regression model, haptic performance was found to correlate significantly with higher axial diffusivity in the functionally defined superior longitudinal fasciculus (fSLF) linking frontal and parietal areas. These results were further localized in specific sub-parts of fSLF. Using additional data from a second group of participants, who first learned the categories in the visual domain and then transferred to the haptic domain, haptic performance correlates were obtained in the functionally defined inferior longitudinal fasciculus. Our results implicate SLF linking frontal and parietal areas as an important white-matter tract in processing touch-specific information during object processing, whereas ILF relays visually learned information during haptic processing. Taken together, the present results chart for the first time potential neuroanatomical correlates and interactions of touch-related object processing.
Affiliation(s)
- Haemy Lee Masson
- Department of Brain and Cognition, KU Leuven, 3000 Louvain, Belgium
- Hyeok-Mook Kang
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 136-713, Korea
- Laurent Petit
- Groupe d'Imagerie Neurofonctionnelle, Institut des Maladies Neurodégénératives, UMR 5293, CNRS, CEA, University of Bordeaux, Bordeaux, France
- Christian Wallraven
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 136-713, Korea.
5
Victor JD, Rizvi SM, Conte MM. Two representations of a high-dimensional perceptual space. Vision Res 2017; 137:1-23. [PMID: 28549921 DOI: 10.1016/j.visres.2017.05.003]
Abstract
A perceptual space is a mental workspace of points in a sensory domain that supports similarity and difference judgments and enables further processing such as classification and naming. Perceptual spaces are present across sensory modalities; examples include colors, faces, auditory textures, and odors. Color is perhaps the best-studied perceptual space, but it is atypical in two respects. First, the dimensions of color space are directly linked to the three cone absorption spectra, but the dimensions of generic perceptual spaces are not as readily traceable to single-neuron properties. Second, generic perceptual spaces have more than three dimensions. This is important because representing each distinguishable point in a high-dimensional space by a separate neuron or population is unwieldy; combinatorial strategies may be needed to overcome this hurdle. To study the representation of a complex perceptual space, we focused on a well-characterized 10-dimensional domain of visual textures. Within this domain, we determine perceptual distances in a threshold task (segmentation) and a suprathreshold task (border salience comparison). In N=4 human observers, we find both quantitative and qualitative differences between these sets of measurements. Quantitatively, observers' segmentation thresholds were inconsistent with their uncertainty determined from border salience comparisons. Qualitatively, segmentation thresholds suggested that distances are determined by a coordinate representation with Euclidean geometry. Border salience comparisons, in contrast, indicated a global curvature of the space, and that distances are determined by activity patterns across broadly tuned elements. Thus, our results indicate two representations of this perceptual space, and suggest that they use differing combinatorial strategies.
SIGNIFICANCE STATEMENT: To move from sensory signals to decisions and actions, the brain carries out a sequence of transformations. An important stage in this process is the construction of a "perceptual space" - an internal workspace of sensory information that captures similarities and differences, and enables further processing, such as classification and naming. Perceptual spaces for color, faces, visual and haptic textures and shapes, sounds, and odors (among others) are known to exist. How such spaces are represented is at present unknown. Here, using visual textures as a model, we investigate this. Psychophysical measurements suggest roles for two combinatorial strategies: one based on projections onto coordinate-like axes, and one based on patterns of activity across broadly tuned elements scattered throughout the space.
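The contrast drawn here, between distances read off coordinate-like axes and distances carried by activity patterns across broadly tuned elements, can be made concrete with a toy computation; the unit count and Gaussian tuning width below are invented for illustration, not taken from the study:

```python
import numpy as np

# Two hypothetical points in a 10-D texture space -- illustrative coordinates.
x = np.zeros(10)
y = np.zeros(10)
y[0] = 1.0

# Coordinate representation: distance is Euclidean along the axes.
d_euclid = np.linalg.norm(x - y)

# Population representation: distance between the activity patterns of
# broadly tuned units (Gaussian tuning, randomly placed centers).
rng = np.random.default_rng(0)
centers = rng.normal(size=(50, 10))      # 50 broadly tuned units
width = 4.0                              # assumed tuning width

def activity(p):
    return np.exp(-np.sum((centers - p) ** 2, axis=1) / (2 * width ** 2))

d_pattern = np.linalg.norm(activity(x) - activity(y))
```

The two measures need not agree: the pattern-based distance depends on where the tuned units sit and how broadly they respond, which is one way a representation can exhibit the global curvature the abstract describes.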
Affiliation(s)
- Jonathan D Victor
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, 1300 York Avenue, New York, NY 10065, United States.
- Syed M Rizvi
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, 1300 York Avenue, New York, NY 10065, United States
- Mary M Conte
- Feil Family Brain and Mind Research Institute, Weill Cornell Medical College, 1300 York Avenue, New York, NY 10065, United States
6
Haptic adaptation to slant: No transfer between exploration modes. Sci Rep 2016; 6:34412. [PMID: 27698392 PMCID: PMC5048134 DOI: 10.1038/srep34412]
Abstract
Human touch is an inherently active sense: to estimate an object's shape, humans often move their hand across its surface. This way the object is sampled both in a serial fashion (sampling different parts of the object across time) and in a parallel fashion (sampling using different parts of the hand simultaneously). Both the serial (moving a single finger) and parallel (static contact with the entire hand) exploration modes provide reliable and similar global shape information, suggesting the possibility that this information is shared early in the sensory cortex. In contrast, here we show the opposite. Using an adaptation-and-transfer paradigm, a change in haptic perception was induced by slant-adaptation using either the serial or parallel exploration mode. A unified shape-based coding would predict that this would equally affect perception using other exploration modes. However, we found that adaptation-induced perceptual changes did not transfer between exploration modes. Instead, serial and parallel exploration components adapted simultaneously, but to different kinaesthetic aspects of exploration behaviour rather than object shape per se. These results indicate that a potential combination of information from different exploration modes can only occur at downstream cortical processing stages, at which adaptation is no longer effective.
7
Piantadosi ST, Jacobs RA. Four Problems Solved by the Probabilistic Language of Thought. Curr Dir Psychol Sci 2016. [DOI: 10.1177/0963721415609581]
Abstract
We argue for the advantages of the probabilistic language of thought (pLOT), a recently emerging approach to modeling human cognition. Work using this framework demonstrates how the pLOT (a) refines the debate between symbols and statistics in cognitive modeling, (b) permits theories that draw on insights from both nativist and empiricist approaches, (c) explains the origins of novel and complex computational concepts, and (d) provides a framework for abstraction that can link sensation and conception. In each of these areas, the pLOT provides a productive middle ground between historical divides in cognitive psychology, pointing to a promising way forward for the field.
Affiliation(s)
- Robert A. Jacobs
- Department of Brain and Cognitive Sciences, University of Rochester
8
Prause N, Park J, Leung S, Miller G. Women's Preferences for Penis Size: A New Research Method Using Selection among 3D Models. PLoS One 2015; 10:e0133079. [PMID: 26332467 PMCID: PMC4558040 DOI: 10.1371/journal.pone.0133079]
Abstract
Women's preferences for penis size may affect men's comfort with their own bodies and may have implications for sexual health. Studies of women's penis size preferences typically have relied on abstract ratings or selection among 2D images of flaccid penises. This study used haptic stimuli to allow assessment of women's size recall accuracy for the first time, as well as to examine their preferences for erect penis sizes in different relationship contexts. Women (N = 75) selected among 33 3D models. Women recalled model size accurately using this method, although they made more errors with respect to penis length than circumference. Women preferred a penis of slightly larger circumference and length for one-time (length = 6.4 inches/16.3 cm, circumference = 5.0 inches/12.7 cm) versus long-term (length = 6.3 inches/16.0 cm, circumference = 4.8 inches/12.2 cm) sexual partners. These first estimates of erect penis size preferences using 3D models suggest women accurately recall size and prefer penises only slightly larger than average.
Affiliation(s)
- Nicole Prause
- Department of Psychiatry, University of California Los Angeles, Los Angeles, California, United States of America
- Jaymie Park
- Department of Psychiatry, University of California Los Angeles, Los Angeles, California, United States of America
- Shannon Leung
- Department of Psychiatry, University of California Los Angeles, Los Angeles, California, United States of America
- Geoffrey Miller
- Department of Psychology, University of New Mexico; Albuquerque, New Mexico, United States of America
9
Lacey S, Sathian K. Visuo-haptic multisensory object recognition, categorization, and representation. Front Psychol 2014; 5:730. [PMID: 25101014 PMCID: PMC4102085 DOI: 10.3389/fpsyg.2014.00730]
Abstract
Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate and we review several candidate brain regions, previously thought to be specialized for aspects of visual processing, that are now known also to be involved in analogous haptic tasks. Finally, we lay out the evidence for a model of multisensory object recognition in which top-down and bottom-up pathways to the object-selective lateral occipital complex are modulated by object familiarity and individual differences in object and spatial imagery.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- K Sathian
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University School of Medicine, Atlanta, GA, USA; Department of Psychology, Emory University School of Medicine, Atlanta, GA, USA; Rehabilitation Research and Development Center of Excellence, Atlanta Veterans Affairs Medical Center, Decatur, GA, USA