1
Lee JL, Ma WJ. Point-estimating observer models for latent cause detection. PLoS Comput Biol 2021; 17:e1009159. PMID: 34714835; PMCID: PMC8580258; DOI: 10.1371/journal.pcbi.1009159.
Abstract
The spatial distribution of visual items allows us to infer the presence of latent causes in the world. For instance, a spatial cluster of ants allows us to infer the presence of a common food source. However, optimal inference requires the integration of a computationally intractable number of world states in real-world situations. For example, optimal inference about whether a common cause exists based on N spatially distributed visual items requires marginalizing over both the location of the latent cause and 2^N possible affiliation patterns (where each item may be affiliated or non-affiliated with the latent cause). How might the brain approximate this inference? We show that subject behaviour deviates qualitatively from Bayes-optimal behaviour, in particular showing an unexpected positive effect of N (the number of visual items) on the false-alarm rate. We propose several “point-estimating” observer models that fit subject behaviour better than the Bayesian model. Each avoids a costly marginalization over at least one of the variables of the generative model by “committing” to a point estimate of that variable. These findings suggest that the brain may implement partially committal variants of Bayesian models when detecting latent causes based on complex real-world data.
Perceptual systems are designed to make sense of fragmented sensory data by inferring common, latent causes. Seeing a cluster of insects might allow us to infer the presence of a common food source, whereas the same number of insects scattered over a larger area of land might not evoke the same suspicions. The ability to reliably make this inference based on statistical information about the environment is surprisingly non-trivial: making the best possible inference requires making full use of the probabilistic information provided by the sensory data, which would require considering a combinatorially explosive number of hypothetical world states.
In this paper, we test human subjects on their ability to perform a causal detection task: subjects are asked to judge whether an underlying cause of clustering is present or absent, based on the spatial distribution of visual items. We show that subjects do not reason optimally on this task, and that particular computational shortcuts (“committing” to certain world states over others, rather than representing them all) might underlie perceptual decision-making in these causal detection settings.
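The computational contrast at issue, full marginalization versus committing to a point estimate, can be sketched in a toy one-dimensional version of the task. The generative model below (item positions on a line, a uniformly located cause, a per-item affiliation probability, and the parameter values) is an illustrative assumption, not the paper's actual task; note how independence of affiliations lets the sum over all 2^N affiliation vectors factorize for the optimal observer, while the point-estimating observer simply commits to the best affiliation of each item:

```python
import numpy as np

SIGMA, L_FIELD, P_AFF = 1.0, 10.0, 0.5  # illustrative parameters, not from the paper

def item_liks(x, c):
    """Per-item likelihood of being affiliated vs. non-affiliated with a cause at c."""
    aff = P_AFF * np.exp(-(x - c) ** 2 / (2 * SIGMA ** 2)) / (SIGMA * np.sqrt(2 * np.pi))
    non = np.full_like(x, (1.0 - P_AFF) / L_FIELD)
    return aff, non

def loglik_H0(x):
    """No latent cause: every item is uniform on [0, L_FIELD]."""
    return -len(x) * np.log(L_FIELD)

def loglik_H1_marginal(x, grid):
    """Bayes-optimal: marginalize over cause location and all 2^N affiliation
    vectors; independence makes the 2^N sum factorize into per-item sums."""
    total = sum(np.prod(np.add(*item_liks(x, c))) for c in grid)
    return np.log(total / len(grid))

def loglik_H1_point(x, grid):
    """Point-estimating observer: commit to the single best affiliation of each
    item at every candidate location instead of summing over both options."""
    total = sum(np.prod(np.maximum(*item_liks(x, c))) for c in grid)
    return np.log(total / len(grid))
```

Because the per-item maximum never exceeds the per-item sum, the point-estimate likelihood is a lower bound on the full marginal; the behavioural question is when the two disagree about which hypothesis wins.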
Affiliation(s)
- Jennifer Laura Lee
- Center for Neural Science, New York University, New York City, New York, United States of America
- Wei Ji Ma
- Center for Neural Science, New York University, New York City, New York, United States of America
2
Capparelli F, Pawelzik K, Ernst U. Constrained inference in sparse coding reproduces contextual effects and predicts laminar neural dynamics. PLoS Comput Biol 2019; 15:e1007370. PMID: 31581240; PMCID: PMC6793885; DOI: 10.1371/journal.pcbi.1007370.
Abstract
When probed with complex stimuli that extend beyond their classical receptive field, neurons in primary visual cortex display complex and non-linear response characteristics. Sparse coding models reproduce some of the observed contextual effects, but still fail to provide a satisfactory explanation in terms of realistic neural structures and cortical mechanisms, since the connection scheme they propose consists only of interactions among neurons with overlapping input fields. Here we propose an extended generative model for visual scenes that includes spatial dependencies among different features. We derive a neurophysiologically realistic inference scheme under the constraint that neurons have direct access only to local image information. The scheme can be interpreted as a network in primary visual cortex where two neural populations are organized in different layers within orientation hypercolumns that are connected by local, short-range and long-range recurrent interactions. When trained with natural images, the model predicts a connectivity structure linking neurons with similar orientation preferences, matching the typical patterns found for long-range horizontal axons and feedback projections in visual cortex. Subjected to contextual stimuli typically used in empirical studies, our model replicates several hallmark effects of contextual processing and predicts characteristic differences for surround modulation between the two model populations. In summary, our model provides a novel framework for contextual processing in the visual system, proposing a well-defined functional role for horizontal axons and feedback projections.
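The inference problem in sparse coding models of this kind, finding coefficients that explain an input with few active features, can be sketched with generic ISTA iterations. This is a standard scheme for the classic sparse coding objective, not the constrained, laminar circuit the paper derives, and the dictionary `D` and penalty `lam` below are illustrative:

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, iters=200):
    """Minimize 0.5*||x - D a||^2 + lam*||a||_1 by iterative shrinkage-thresholding."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2   # inverse Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - x)             # gradient of the reconstruction term
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    return a
```

With an orthonormal dictionary the solution reduces to soft-thresholding the input, which makes the behaviour easy to check by hand.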
Affiliation(s)
- Federica Capparelli
- Institute for Theoretical Physics, University of Bremen, Bremen, Germany
- Klaus Pawelzik
- Institute for Theoretical Physics, University of Bremen, Bremen, Germany
- Udo Ernst
- Institute for Theoretical Physics, University of Bremen, Bremen, Germany
3
Van Humbeeck N, Meghanathan RN, Wagemans J, van Leeuwen C, Nikolaev AR. Presaccadic EEG activity predicts visual saliency in free-viewing contour integration. Psychophysiology 2018; 55:e13267. PMID: 30069911; DOI: 10.1111/psyp.13267.
Abstract
While viewing a scene, the eyes are attracted to salient stimuli. We set out to identify the brain signals controlling this process. In a contour integration task, in which participants searched for a collinear contour in a field of randomly oriented Gabor elements, a previously established model was applied to calculate a visual saliency value for each fixation location. We studied brain activity related to the modeled saliency values, using coregistered eye tracking and EEG. To disentangle EEG signals reflecting salience in free viewing from overlapping EEG responses to sequential eye movements, we adopted generalized additive mixed modeling (GAMM) to single epochs of saccade-related EEG. We found that, when saliency at the next fixation location was high, amplitude of the presaccadic EEG activity was low. Since presaccadic activity reflects covert attention to the saccade target, our results indicate that larger attentional effort is needed for selecting less salient saccade targets than more salient ones. This effect was prominent in contour-present conditions (half of the trials), but ambiguous in the contour-absent condition. Presaccadic EEG activity may thus be indicative of bottom-up factors in saccade guidance. The results underscore the utility of GAMM for EEG-eye movement coregistration research.
Affiliation(s)
- Johan Wagemans
- Brain & Cognition Research Unit, KU Leuven-University of Leuven, Leuven, Belgium
- Cees van Leeuwen
- Brain & Cognition Research Unit, KU Leuven-University of Leuven, Leuven, Belgium
- Andrey R Nikolaev
- Brain & Cognition Research Unit, KU Leuven-University of Leuven, Leuven, Belgium
4
A Dynamic Bayesian Observer Model Reveals Origins of Bias in Visual Path Integration. Neuron 2018; 99:194-206.e5. PMID: 29937278; DOI: 10.1016/j.neuron.2018.05.040.
Abstract
Path integration is a strategy by which animals track their position by integrating their self-motion velocity. To identify the computational origins of bias in visual path integration, we asked human subjects to navigate in a virtual environment using optic flow and found that they generally traveled beyond the goal location. Such a behavior could stem from leaky integration of unbiased self-motion velocity estimates or from a prior expectation favoring slower speeds that causes velocity underestimation. Testing both alternatives using a probabilistic framework that maximizes expected reward, we found that subjects' biases were better explained by a slow-speed prior than imperfect integration. When subjects integrate paths over long periods, this framework intriguingly predicts a distance-dependent bias reversal due to buildup of uncertainty, which we also confirmed experimentally. These results suggest that visual path integration in noisy environments is limited largely by biases in processing optic flow rather than by leaky integration.
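The slow-speed-prior account can be illustrated with the textbook Gaussian shrinkage computation: a zero-mean prior pulls the speed estimate toward zero, so perceived distance accumulates too slowly and a subject aiming for a goal keeps going and overshoots. The parameter values below are illustrative, not fits from the paper:

```python
def speed_estimate(v_obs, sigma_sensory, sigma_prior):
    """Posterior mean of speed given a Gaussian likelihood centered on v_obs
    and a zero-mean Gaussian slow-speed prior: shrinkage toward zero."""
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_sensory ** 2)
    return w * v_obs

def perceived_distance(true_speed, duration, sigma_sensory, sigma_prior):
    """Integrating an underestimated speed yields an underestimated distance
    travelled, which is what produces the overshoot of the goal."""
    return speed_estimate(true_speed, sigma_sensory, sigma_prior) * duration
```

The shrinkage weight grows with sensory noise, so noisier optic flow predicts stronger underestimation; this simple static sketch omits the uncertainty buildup that produces the paper's distance-dependent bias reversal.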
5
Grzymisch A, Grimsen C, Ernst UA. Contour Integration in Dynamic Scenes: Impaired Detection Performance in Extended Presentations. Front Psychol 2017; 8:1501. PMID: 28928692; PMCID: PMC5591827; DOI: 10.3389/fpsyg.2017.01501.
Abstract
Since scenes in nature are highly dynamic, perception requires an on-going and robust integration of local information into global representations. In vision, contour integration (CI) is one of these tasks, and it is performed by our brain in a seemingly effortless manner. Following the rule of good continuation, oriented line segments are linked into contour percepts, thus supporting important visual computations such as the detection of object boundaries. This process has been studied almost exclusively using static stimuli, raising the question of whether the observed robustness and "pop-out" quality of CI carries over to dynamic scenes. We investigate contour detection in dynamic stimuli where targets appear at random times by Gabor elements aligning themselves to form contours. In briefly presented displays (230 ms), a situation comparable to classical paradigms in CI, performance is about 87%. Surprisingly, we find that detection performance decreases to 67% in extended presentations (about 1.9-3.8 s) for the same target stimuli. In order to observe the same reduction with briefly presented stimuli, presentation time has to be drastically decreased, to intervals as short as 50 ms. Cueing a specific contour position or shape partially compensates for this deterioration, and only in extended presentations was combining a location and a shape cue more efficient than providing a single cue. Our findings challenge the notion of CI as a mainly stimulus-driven process leading to pop-out percepts, indicating that top-down processes play a much larger role in supporting fundamental integration processes in dynamic scenes than previously thought.
Affiliation(s)
- Axel Grzymisch
- Department of Physics, Institute for Theoretical Physics, University of Bremen, Bremen, Germany
- Cathleen Grimsen
- Institute for Human Neurobiology, University of Bremen, Bremen, Germany
- Udo A. Ernst
- Department of Physics, Institute for Theoretical Physics, University of Bremen, Bremen, Germany
6
Blusseau S, Carboni A, Maiche A, Morel J, Grompone von Gioi R. Measuring the visual salience of alignments by their non-accidentalness. Vision Res 2016; 126:192-206. DOI: 10.1016/j.visres.2015.08.014.
7
A unified account of tilt illusions, association fields, and contour detection based on elastica. Vision Res 2016; 126:164-173. DOI: 10.1016/j.visres.2015.05.021.
8
Persike M, Meinhardt G. Contour integration with corners. Vision Res 2016; 127:132-140. PMID: 27542687; DOI: 10.1016/j.visres.2016.07.010.
Abstract
Contour integration refers to the ability of the visual system to bind disjoint local elements into coherent global shapes. In cluttered images containing randomly oriented elements a contour becomes salient when its elements are coaligned with a smooth global trajectory, as described by the Gestalt law of good continuation. Abrupt changes of curvature strongly diminish contour salience. Here we show that by inserting local corner elements at points of angular discontinuity, a jagged contour becomes as salient as a straight one. We report results from detection experiments for contours with and without corner elements which indicate their psychophysical equivalence. This presents a challenge to the notion that contour integration mostly relies on local interactions between neurons tuned to single orientations, and suggests that a site where single orientations and more complex local features are combined constitutes the early basis of contour and 2D shape processing.
Affiliation(s)
- Malte Persike
- Psychological Institute, Department of Statistical Methods, Johannes Gutenberg University Mainz, Wallstr. 3, D-55122 Mainz, Germany
- Günter Meinhardt
- Psychological Institute, Department of Statistical Methods, Johannes Gutenberg University Mainz, Wallstr. 3, D-55122 Mainz, Germany
9
Wilder J, Feldman J, Singh M. The role of shape complexity in the detection of closed contours. Vision Res 2015; 126:220-231. PMID: 26505685; DOI: 10.1016/j.visres.2015.10.011.
Abstract
The detection of contours in noise has been extensively studied, but the detection of closed contours, such as the boundaries of whole objects, has received relatively little attention. Closed contours pose substantial challenges not present in the simple (open) case, because they form the outlines of whole shapes and thus take on a range of potentially important configural properties. In this paper we consider the detection of closed contours in noise as a probabilistic decision problem. Previous work on open contours suggests that contour complexity, quantified as the negative log probability (Description Length, DL) of the contour under a suitably chosen statistical model, impairs contour detectability; more complex (statistically surprising) contours are harder to detect. In this study we extended this result to closed contours, developing a suitable probabilistic model of whole shapes that gives rise to several distinct though interrelated measures of shape complexity. We asked subjects to detect either natural shapes (Exp. 1) or experimentally manipulated shapes (Exp. 2) embedded in noise fields. We found systematic effects of global shape complexity on detection performance, demonstrating how aspects of global shape and form influence the basic process of object detection.
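The complexity measure described here, the negative log probability of a contour under a statistical model, can be sketched for an open polyline with a simple Gaussian model on turning angles, under which straight continuation is most probable. The model choice and the `sigma` value are illustrative assumptions, simpler than the whole-shape model the paper develops:

```python
import numpy as np

def description_length(points, sigma=np.pi / 8):
    """DL = -log p of a polyline's turning angles under a Gaussian model
    that makes straight continuation (zero turning) the most probable step."""
    pts = np.asarray(points, dtype=float)
    seg = np.diff(pts, axis=0)                      # segment vectors
    heading = np.arctan2(seg[:, 1], seg[:, 0])      # heading of each segment
    turn = np.angle(np.exp(1j * np.diff(heading)))  # turning angles, wrapped to (-pi, pi]
    nll = 0.5 * (turn / sigma) ** 2 + np.log(sigma * np.sqrt(2.0 * np.pi))
    return float(nll.sum())
```

Under this measure a straight contour has lower DL than a jagged one with the same number of elements, matching the prediction that statistically surprising contours are harder to detect.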
Affiliation(s)
- John Wilder
- Department of Computer Science, University of Toronto, Toronto, Canada
- Jacob Feldman
- Department of Psychology, Center for Cognitive Science, Rutgers University - New Brunswick, USA
- Manish Singh
- Department of Psychology, Center for Cognitive Science, Rutgers University - New Brunswick, USA
10
Froyen V, Feldman J, Singh M. Bayesian hierarchical grouping: Perceptual grouping as mixture estimation. Psychol Rev 2015; 122:575-97. PMID: 26322548; DOI: 10.1037/a0039540.
Abstract
We propose a novel framework for perceptual grouping based on the idea of mixture models, called Bayesian hierarchical grouping (BHG). In BHG, we assume that the configuration of image elements is generated by a mixture of distinct objects, each of which generates image elements according to some generative assumptions. Grouping, in this framework, means estimating the number and the parameters of the mixture components that generated the image, including estimating which image elements are "owned" by which objects. We present a tractable implementation of the framework, based on the hierarchical clustering approach of Heller and Ghahramani (2005). We illustrate it with examples drawn from a number of classical perceptual grouping problems, including dot clustering, contour integration, and part decomposition. Our approach yields an intuitive hierarchical representation of image elements, giving an explicit decomposition of the image into mixture components, along with estimates of the probability of various candidate decompositions. We show that BHG accounts well for a diverse range of empirical data drawn from the literature. Because BHG provides a principled quantification of the plausibility of grouping interpretations over a wide range of grouping problems, we argue that it provides an appealing unifying account of the elusive Gestalt notion of Prägnanz.
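Grouping-as-mixture-estimation can be sketched in one dimension with plain EM for two Gaussian components, where the E-step responsibilities play the role of each object's "ownership" of image elements. This is a generic mixture fit, not the hierarchical clustering scheme of Heller and Ghahramani that BHG builds on, and the fixed component count is an assumption BHG specifically avoids:

```python
import numpy as np

def em_two_gaussians(x, iters=50):
    """Minimal EM for a two-component 1-D Gaussian mixture: grouping as
    estimating which component 'owns' each dot (soft responsibilities)."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])            # spread the components apart
    var = np.array([x.var(), x.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each dot
        d = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = d / d.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture parameters from the ownership weights
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
        pi = nk / len(x)
    return mu, r
```

On two well-separated dot clusters the responsibilities saturate, recovering the intuitive grouping along with explicit ownership probabilities.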
Affiliation(s)
- Vicky Froyen
- Department of Psychology, Center for Cognitive Science, Rutgers University
- Jacob Feldman
- Department of Psychology, Center for Cognitive Science, Rutgers University
- Manish Singh
- Department of Psychology, Center for Cognitive Science, Rutgers University
11
Abstract
It is well-known that "smooth" chains of oriented elements (contours) are more easily detected amid background noise than more undulating (i.e., "less smooth") chains. Here, we develop a Bayesian framework for contour detection and show that it predicts that contour detection performance should decrease with the contour's complexity, quantified as the description length (DL; i.e., the negative logarithm of probability integrated along the contour). We tested this prediction in two experiments in which subjects were asked to detect simple open contours amid pixel noise. In Experiment 1, we demonstrate a consistent decline in performance with increasingly complex contours, as predicted by the Bayesian model. In Experiment 2, we confirmed that this effect is due to integrated complexity along the contour, and does not seem to depend on local stretches of linear structure. The results corroborate the probabilistic model of contours, and show how contour detection can be understood as a special case of a more general process: the identification of organized patterns in the environment.
12
Persike M, Meinhardt G. Effects of Spatial Frequency Similarity and Dissimilarity on Contour Integration. PLoS One 2015; 10:e0126449. PMID: 26057620; PMCID: PMC4461267; DOI: 10.1371/journal.pone.0126449.
Abstract
We examined the effects of spatial frequency similarity and dissimilarity on human contour integration under various conditions of uncertainty. Participants performed a temporal 2AFC contour detection task. Spatial frequency jitter up to 3.0 octaves was applied either to background elements, to contour and background elements, or to neither. Results converge on four major findings. (1) Contours defined by spatial frequency similarity alone are only scarcely visible, suggesting the absence of specialized cortical routines for shape detection based on spatial frequency similarity. (2) When orientation collinearity and spatial frequency similarity are combined along a contour, performance improves far beyond probability summation when compared to the fully heterogeneous condition, but only to a margin compatible with probability summation when compared to the fully homogeneous case. (3) Psychometric functions are steeper but not shifted for homogeneous contours in heterogeneous backgrounds, indicating an advantageous signal-to-noise ratio. The additional similarity cue therefore does not so much improve contour detection performance as reduce observer uncertainty about whether a potential candidate is a contour or just a false positive. (4) Contour integration is a broadband mechanism which is only moderately impaired by spatial frequency dissimilarity.
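The probability-summation benchmark the abstract compares against has a common high-threshold form: if the observer combines independent cues only in the sense that detecting any single cue suffices, the predicted combined detection rate follows directly. This simple form is a sketch that ignores the guessing corrections used in signal detection treatments:

```python
def probability_summation(p_cues):
    """High-threshold probability summation: the target is detected if at
    least one of the independent cues is detected on its own."""
    p_miss_all = 1.0
    for p in p_cues:
        p_miss_all *= 1.0 - p   # all cues must be missed for an overall miss
    return 1.0 - p_miss_all
```

Performance "far beyond" this prediction is the usual evidence for genuine integration of the cues rather than independent detection.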
Affiliation(s)
- Malte Persike
- Johannes Gutenberg University, Mainz, Germany
13
Lüdge T, Urbanczik R, Senn W. Modulation of orientation-selective neurons by motion: when additive, when multiplicative? Front Comput Neurosci 2014; 8:67. PMID: 24999328; PMCID: PMC4064552; DOI: 10.3389/fncom.2014.00067.
Abstract
The recurrent interaction among orientation-selective neurons in the primary visual cortex (V1) is suited to enhance contours in a noisy visual scene. Motion is known to have a strong pop-out effect in perceiving contours, but how motion-sensitive neurons in V1 support contour detection remains largely elusive. Here we suggest how the various types of motion-sensitive neurons observed in V1 should be wired together in a micro-circuitry to optimally extract contours in the visual scene. Motion-sensitive neurons can be selective about the direction of motion occurring at some spot or respond equally to all directions (pandirectional). We show that, in the light of figure-ground segregation, direction-selective motion neurons should additively modulate the corresponding orientation-selective neurons with preferred orientation orthogonal to the motion direction. In turn, to maximally enhance contours, pandirectional motion neurons should multiplicatively modulate all orientation-selective neurons with co-localized receptive fields. This multiplicative modulation amplifies the local V1-circuitry among co-aligned orientation-selective neurons for detecting elongated contours. We suggest that the additive modulation by direction-specific motion neurons is achieved through synaptic projections to the somatic region, and the multiplicative modulation by pandirectional motion neurons through projections to the apical region of orientation-specific pyramidal neurons. For the purpose of contour detection, the V1-intrinsic integration of motion information is advantageous over a downstream integration as it exploits the recurrent V1-circuitry designed for that task.
Affiliation(s)
- Torsten Lüdge
- Computational Neuroscience Group, Department of Physiology, University of Bern, Bern, Switzerland
14
Abstract
Contour integration is a fundamental visual process. The constraints on integrating discrete contour elements and the associated neural mechanisms have typically been investigated using static contour paths. However, in our dynamic natural environment objects and scenes vary over space and time. With the aim of investigating the parameters affecting spatiotemporal contour path integration, we measured human contrast detection performance of a briefly presented foveal target embedded in dynamic collinear stimulus sequences (comprising five short ‘predictor’ bars appearing consecutively towards the fovea, followed by the ‘target’ bar) in four experiments. The data showed that participants' target detection performance was relatively unchanged when individual contour elements were separated by up to a 2° spatial gap or a 200 ms temporal gap. Randomising the luminance contrast or colour of the predictors, on the other hand, had a similar detrimental effect on grouping the dynamic contour path and on subsequent target detection performance. Randomising the orientation of the predictors reduced target detection performance more than introducing misalignment relative to the contour path. The results suggest that the visual system integrates dynamic path elements to bias target detection even when the continuity of the path is disrupted in terms of spatial (2°), temporal (200 ms), colour (over 10 colours) and luminance (−25% to 25%) information. We discuss how the findings can be largely reconciled with the functioning of V1 horizontal connections.
15
Vancleef K, Wagemans J. Component processes in contour integration: a direct comparison between snakes and ladders in a detection and a shape discrimination task. Vision Res 2013; 92:39-46. PMID: 24051198; DOI: 10.1016/j.visres.2013.09.003.
Abstract
In contour integration, a relevant question is whether snakes and ladders are processed similarly. Higher presentation time thresholds for ladders in detection tasks indicate this is not the case. However, in a detection task only processing differences at the level of element linking and possibly contour localization might be picked up, while differences at the shape encoding level cannot be noticed. In this study, we make a direct comparison of detection and shape discrimination tasks to investigate if processing differences in the visual system between snakes and ladders are limited to contour detection or extend to higher level contour processing, like shape encoding. Stimuli consisted of elements that were oriented collinearly (snakes) or orthogonally (ladders) to the contour path and were surrounded by randomly oriented background elements. In two tasks, six experienced subjects either detected the contour when presented with a contour and a completely random stimulus or performed a shape discrimination task when presented with two contours with different curvature. Presentation time was varied in 9 steps between 8 and 492 ms. By applying a generalized linear mixed model we found that differences in snake and ladder processing are not limited to a detection stage but are also apparent at a shape encoding stage.
Affiliation(s)
- Kathleen Vancleef
- Laboratory of Experimental Psychology, University of Leuven, Leuven, Belgium
16
Ma WJ. Organizing probabilistic models of perception. Trends Cogn Sci 2012; 16:511-8. PMID: 22981359; DOI: 10.1016/j.tics.2012.08.010.
Abstract
Probability has played a central role in models of perception for more than a century, but a look at probabilistic concepts in the literature raises many questions. Is being Bayesian the same as being optimal? Are recent Bayesian models fundamentally different from classic signal detection theory models? Do findings of near-optimal inference provide evidence that neurons compute with probability distributions? This review aims to disentangle these concepts and to classify empirical evidence accordingly.
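One of the review's questions, whether recent Bayesian models differ fundamentally from classic signal detection theory, can be made concrete in the simplest case: for equal-variance Gaussian SDT, the Bayes-optimal rule is exactly a criterion placed on the internal response axis. The parameter values below are illustrative:

```python
import numpy as np

def optimal_criterion(mu0, mu1, sigma, p1):
    """Equal-variance Gaussian SDT: the criterion that implements the Bayes rule
    for signal prior p1 (mu1 > mu0 assumed)."""
    return (mu0 + mu1) / 2.0 + sigma ** 2 * np.log((1.0 - p1) / p1) / (mu1 - mu0)

def bayes_decision(obs, mu0, mu1, sigma, p1):
    """Report 'signal present' whenever the posterior favors it."""
    post1 = -(obs - mu1) ** 2 / (2.0 * sigma ** 2) + np.log(p1)
    post0 = -(obs - mu0) ** 2 / (2.0 * sigma ** 2) + np.log(1.0 - p1)
    return post1 > post0
```

The two functions agree observation by observation, illustrating that in this case "being Bayesian" and placing an SDT criterion optimally are the same decision rule; rarer signals shift the criterion conservatively.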
Affiliation(s)
- Wei Ji Ma
- Department of Neuroscience, Baylor College of Medicine, 1 Baylor Plaza, Houston, TX 77030, USA