1. Butz MV, Achimova A, Bilkey D, Knott A. Event-Predictive Cognition: A Root for Conceptual Human Thought. Top Cogn Sci 2020; 13:10-24. [PMID: 33274596] [DOI: 10.1111/tops.12522]
Abstract
Our minds navigate a continuous stream of sensorimotor experiences, selectively compressing them into events. Event-predictive encodings and processing abilities have evolved because they mirror interactions between agents and objects, and the pursuit or avoidance of critical interactions lies at the heart of survival and reproduction. However, it appears that these abilities have evolved not only to pursue life-enhancing events and to avoid threatening events, but also to distinguish food sources, to produce and to use tools, to cooperate, and to communicate. They may have even set the stage for the formation of larger societies and the development of cultural identities. Research on event-predictive cognition investigates how events and conceptualizations thereof are learned, structured, and processed dynamically. It suggests that event-predictive encodings and processes optimally mediate between sensorimotor processes and language. On the one hand, they enable us to perceive and control physical interactions with our world in a highly adaptive, versatile, goal-directed manner. On the other hand, they allow us to coordinate complex social interactions and, in particular, to comprehend and produce language. Event-predictive learning segments sensorimotor experiences into event-predictive encodings. Once the first encodings are formed, the mind learns progressively higher-order compositional structures, which allow reflecting on the past, reasoning, and planning on multiple levels of abstraction. We conclude that human conceptual thought may be grounded in the principles of event-predictive cognition, which constitutes its root.
Affiliation(s)
- Martin V Butz, Neuro-Cognitive Modeling Group, Department of Computer Science, Department of Psychology, Faculty of Science, University of Tübingen
- Asya Achimova, Neuro-Cognitive Modeling Group, Department of Computer Science, Department of Psychology, Faculty of Science, University of Tübingen

2. Weghenkel B, Wiskott L. Slowness as a Proxy for Temporal Predictability: An Empirical Comparison. Neural Comput 2018; 30:1151-1179. [PMID: 29566353] [DOI: 10.1162/neco_a_01070]
Abstract
The computational principles of slowness and predictability have been proposed to describe aspects of information processing in the visual system. Viewing slowness as a limited special case of predictability, we investigate the relationship between these two principles empirically. On a collection of real-world data sets, we compare the features extracted by slow feature analysis (SFA) to the features of three recently proposed methods for predictable feature extraction: forecastable component analysis, predictable feature analysis, and graph-based predictable feature analysis. Our experiments show that the predictability of the features learned by these methods is highly correlated and, thus, that SFA appears to effectively implement a method for extracting predictable features according to different measures of predictability.
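As an illustration of the comparison described above, the following sketch extracts SFA features from a toy data set and scores them both by slowness and by linear one-step predictability. It is a minimal stand-in, not the authors' benchmark code; the generalized-eigenvalue SFA solver, the R²-style predictability score, and all variable names are simplifying assumptions.

```python
import numpy as np

def sfa_features(x, n_features=2):
    """Slow Feature Analysis via a generalized eigenvalue problem.
    x: (T, d) time series. Returns the n_features slowest projections."""
    x = x - x.mean(axis=0)
    dx = np.diff(x, axis=0)                      # temporal derivative (finite difference)
    cov = x.T @ x / len(x)                       # covariance of the signal
    dcov = dx.T @ dx / len(dx)                   # covariance of the derivative
    # minimize slowness w' dcov w subject to unit variance w' cov w
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(cov, dcov))
    order = np.argsort(eigvals.real)
    w = eigvecs[:, order[:n_features]].real
    return x @ w

def slowness(y):
    """Mean squared temporal difference (lower = slower)."""
    return np.mean(np.diff(y, axis=0) ** 2, axis=0)

def one_step_predictability(y):
    """R^2 of a linear one-step prediction y[t+1] ~ A y[t] (higher = more predictable)."""
    past, future = y[:-1], y[1:]
    A, *_ = np.linalg.lstsq(past, future, rcond=None)
    resid = future - past @ A
    return 1.0 - resid.var(axis=0) / future.var(axis=0)

# toy data: two slow latent signals mixed into ten noisy channels
rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 5000)
latent = np.stack([np.sin(0.05 * t), np.cos(0.05 * t)], axis=1)
x = latent @ rng.normal(size=(2, 10)) + 0.5 * rng.normal(size=(len(t), 10))
y = sfa_features(x)
print(slowness(y), one_step_predictability(y))
```

On data like this, the slowest SFA components also tend to score highest on the predictability measure, which is the kind of correlation the paper examines systematically.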
Affiliation(s)
- Björn Weghenkel, Institut für Neuroinformatik, Ruhr-Universität, 44801 Bochum, Germany
- Laurenz Wiskott, Institut für Neuroinformatik, Ruhr-Universität, 44801 Bochum, Germany

3. Vidybida A, Shchur O. Information reduction in a reverberatory neuronal network through convergence to complex oscillatory firing patterns. Biosystems 2017; 161:24-30. [PMID: 28756163] [DOI: 10.1016/j.biosystems.2017.07.008]
Abstract
The dynamics of a reverberating neural net are studied by means of computer simulation. The net, composed of 9 leaky integrate-and-fire (LIF) neurons arranged in a square lattice, is fully connected, with interneuronal communication delays proportional to the corresponding distances. The network is initially stimulated with different stimuli and then evolves freely. For each stimulus, in the course of the free evolution, activity either dies out completely or the network converges to a periodic trajectory, which may differ between stimuli. The latter is observed for a set of 285,290 initial stimuli, which constitutes 83% of all stimuli applied. After applying each stimulus from this set, 102 different periodic end-states are found. After analyzing the trajectories, we conclude that neuronal firing is the necessary prerequisite for merging different trajectories into a single one, which eventually settles into a periodic regime. The observed phenomena of self-organization in the time domain are discussed as a possible model for processes taking place during perception. The repetitive firing in the periodic regimes could underpin memory formation.
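A drastically reduced sketch of this kind of simulation, assuming standard discrete-time LIF dynamics: nine neurons with delayed all-to-all coupling are kicked by an initial stimulus, left to run freely, and the tail of the spike raster is checked for periodicity. All parameter values and the periodicity test are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps, dt = 9, 5000, 0.1                 # 9 LIF neurons, small time step
tau, v_th, v_reset, w = 20.0, 1.0, 0.0, 0.35
delay = rng.integers(5, 20, size=(N, N))    # per-pair delays, a stand-in for distance

v = np.zeros(N)
spikes = np.zeros((steps, N), dtype=bool)
v[:4] = 1.2                                 # initial stimulus: push a subset above threshold

for t in range(1, steps):
    # delayed recurrent input from earlier spikes
    inp = np.zeros(N)
    for i in range(N):
        for j in range(N):
            if i != j and t - delay[i, j] >= 0 and spikes[t - delay[i, j], j]:
                inp[i] += w
    v += dt * (-v / tau) + inp              # leaky integration plus synaptic kicks
    fired = v >= v_th
    spikes[t] = fired
    v[fired] = v_reset

def period_of(spike_raster, max_period=500):
    """Smallest shift (in steps) under which the tail of the raster repeats, if any."""
    tail = spike_raster[-2000:]
    for p in range(1, max_period):
        if np.array_equal(tail[p:], tail[:-p]):
            return p
    return None

print("detected period:", period_of(spikes))
```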
Affiliation(s)
- A Vidybida, Bogolyubov Institute for Theoretical Physics, Metrologichna Str., 14-B, Kyiv 03680, Ukraine
- O Shchur, Taras Shevchenko National University of Kyiv, Volodymyrska Str., 60, Kyiv 01033, Ukraine

4. Adiguzel Y. Biophysical and biological perspective in biosemiotics. Prog Biophys Mol Biol 2016; 121:245-54. [PMID: 27374938] [DOI: 10.1016/j.pbiomolbio.2016.06.008]
Abstract
The cell and its basic constituents are introduced here through a biophysical and information-communication-theoretic approach in biology and biosemiotics. With this purpose, the requirements of primordial cellular structures, single binding events, and signalling cascades are first described stepwise, in relation to a model of the cellular sensing mechanism. This is followed by the concepts of cross-reactions in sensing and pattern recognition, wherein an information-theoretic approach is addressed and the features of multicellularity are discussed alongside. Multicellularity is introduced as the path that leads to the loss of direct causal relations. The loss of a true causal relation is considered a form of translation that enables meaning-encoded communication over the informative processes. In this sense, semiosis may not be exclusive. Synthetic biology is exemplified as a form of artificial selection mechanism for the generation of 'self-reproducing' systems with information coding and processing machineries. These discussions are summarised at the end.
Affiliation(s)
- Yekbun Adiguzel, Department of Biophysics, School of Medicine, Istanbul Kemerburgaz University, Kartaltepe Mahallesi Incirli Caddesi No: 11, 34147 Bakirkoy, Istanbul, Turkey

5. Butz MV. Toward a Unified Sub-symbolic Computational Theory of Cognition. Front Psychol 2016; 7:925. [PMID: 27445895] [PMCID: PMC4915327] [DOI: 10.3389/fpsyg.2016.00925]
Abstract
This paper proposes how various disciplinary theories of cognition may be combined into a unifying, sub-symbolic, computational theory of cognition. The following theories are considered for integration: psychological theories, including the theory of event coding, event segmentation theory, the theory of anticipatory behavioral control, and concept development; artificial intelligence and machine learning theories, including reinforcement learning and generative artificial neural networks; and theories from theoretical and computational neuroscience, including predictive coding and free energy-based inference. In the light of such a potential unification, it is discussed how abstract cognitive, conceptualized knowledge and understanding may be learned from actively gathered sensorimotor experiences. The unification rests on the free energy-based inference principle, which essentially implies that the brain builds a predictive, generative model of its environment. Neural activity-oriented inference causes the continuous adaptation of the currently active predictive encodings. Neural structure-oriented inference causes the longer-term adaptation of the developing generative model as a whole. Finally, active inference strives to maintain internal homeostasis, causing goal-directed motor behavior. To learn abstract, hierarchical encodings, however, it is proposed that free energy-based inference needs to be enhanced with structural priors, which bias cognitive development toward the formation of particular, behaviorally suitable encoding structures. As a result, it is hypothesized how abstract concepts can develop from, and thus how they are structured by and grounded in, sensorimotor experiences. Moreover, it is sketched out how symbol-like thought can be generated by a temporarily active set of predictive encodings, which constitute a distributed neural attractor in the form of an interactive free-energy minimum. The activated, interactive network attractor essentially characterizes the semantics of a concept or a concept composition, such as an actual or imagined situation in our environment. Temporal successions of attractors then encode unfolding semantics, which may be generated by a behavioral or mental interaction with an actual or imagined situation in our environment. Implications, further predictions, possible verifications and falsifications, as well as potential enhancements toward a fully spelled-out unified theory of cognition are discussed at the end of the paper.
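The free energy-based inference principle invoked here is commonly written as a variational bound on surprise; a standard textbook formulation (not specific to this paper), for observations o, hidden causes s, generative model p, and approximate posterior q, is:

```latex
F[q] \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
      \;=\; D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] \;-\; \ln p(o)
      \;\ge\; -\ln p(o).
```

Read against the abstract: activity-oriented inference lowers F by adjusting q (the currently active encodings), structure-oriented inference lowers it by adjusting the generative model p, and active inference lowers expected F by selecting actions that change o.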
Affiliation(s)
- Martin V Butz, Cognitive Modeling, Department of Computer Science and Department of Psychology, Eberhard Karls University of Tübingen, Tübingen, Germany

6. Kneissler J, Drugowitsch J, Friston K, Butz MV. Simultaneous learning and filtering without delusions: a Bayes-optimal combination of Predictive Inference and Adaptive Filtering. Front Comput Neurosci 2015; 9:47. [PMID: 25983690] [PMCID: PMC4415408] [DOI: 10.3389/fncom.2015.00047]
Abstract
Predictive coding appears to be one of the fundamental working principles of brain processing. Amongst other aspects, brains often predict the sensory consequences of their own actions. Predictive coding resembles Kalman filtering, where incoming sensory information is filtered to produce prediction errors for subsequent adaptation and learning. However, to generate prediction errors given motor commands, a suitable temporal forward model is required to generate predictions. While in engineering applications it is usually assumed that this forward model is known, the brain has to learn it. When filtering sensory input and learning from the residual signal in parallel, a fundamental problem arises: the system can enter a delusional loop when filtering the sensory information using an overly trusted forward model. In this case, learning stalls before accurate convergence because uncertainty about the forward model is not properly accommodated. We present a Bayes-optimal solution to this generic and pernicious problem for the case of linear forward models, which we call Predictive Inference and Adaptive Filtering (PIAF). PIAF filters incoming sensory information and learns the forward model simultaneously. We show that PIAF is formally related to Kalman filtering and to the Recursive Least Squares linear approximation method, but combines these procedures in a Bayes-optimal fashion. Numerical evaluations confirm that the delusional loop is precluded and that the learning of the forward model is more than 10 times faster than with a naive combination of Kalman filtering and Recursive Least Squares.
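The problematic setup the paper starts from can be written compactly: a Kalman filter that fully trusts its current forward-model estimate, coupled to a recursive-least-squares (RLS) update of that same model from the filtered states. The sketch below implements only this naive combination to make the circularity visible; it is not the Bayes-optimal PIAF algorithm, and the dimensions, noise levels, and names are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2
A_true = np.array([[0.9, 0.1], [0.0, 0.95]])    # unknown forward model to be learned
Q, R = 0.01 * np.eye(d), 0.1 * np.eye(d)        # process and observation noise covariances

A_hat = np.eye(d)                                # current forward-model estimate
P_rls = 10.0 * np.eye(d)                         # RLS inverse-correlation matrix
x_hat, P = np.zeros(d), np.eye(d)                # Kalman state estimate and covariance
x = rng.normal(size=d)

for t in range(2000):
    x = A_true @ x + rng.multivariate_normal(np.zeros(d), Q)     # true dynamics
    y = x + rng.multivariate_normal(np.zeros(d), R)              # noisy observation

    # Kalman prediction/update, trusting the *learned* model A_hat
    x_prev = x_hat.copy()
    x_pred = A_hat @ x_hat
    P_pred = A_hat @ P @ A_hat.T + Q
    K = P_pred @ np.linalg.inv(P_pred + R)
    x_hat = x_pred + K @ (y - x_pred)
    P = (np.eye(d) - K) @ P_pred

    # RLS update of A_hat from filtered state pairs (the source of circularity)
    err = x_hat - A_hat @ x_prev
    g = P_rls @ x_prev / (1.0 + x_prev @ P_rls @ x_prev)
    A_hat = A_hat + np.outer(err, g)
    P_rls = P_rls - np.outer(g, x_prev @ P_rls)

print("forward-model error:", np.linalg.norm(A_hat - A_true))
```

PIAF's contribution, per the abstract, is to propagate the remaining uncertainty about the forward model into the filtering step, which is exactly what this naive loop omits.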
Affiliation(s)
- Jan Kneissler, Chair of Cognitive Modeling, Department of Computer Science, Faculty of Science, Eberhard Karls University of Tübingen, Tübingen, Germany
- Jan Drugowitsch, Département des Neurosciences Fondamentales, Université de Genève, Geneva, Switzerland
- Karl Friston, The Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK
- Martin V Butz, Chair of Cognitive Modeling, Department of Computer Science, Faculty of Science, Eberhard Karls University of Tübingen, Tübingen, Germany

7. Atıl İ, Kalkan S. Towards an Embodied Developing Vision System. Künstliche Intelligenz 2015. [DOI: 10.1007/s13218-015-0351-6]

8.

9. Butz MV. Separating goals from behavioral control: Implications from learning predictive modularizations. New Ideas Psychol 2013. [DOI: 10.1016/j.newideapsych.2013.04.001]

10. Krüger N, Janssen P, Kalkan S, Lappe M, Leonardis A, Piater J, Rodríguez-Sánchez AJ, Wiskott L. Deep hierarchies in the primate visual cortex: what can we learn for computer vision? IEEE Trans Pattern Anal Mach Intell 2013; 35:1847-1871. [PMID: 23787340] [DOI: 10.1109/tpami.2012.272]
Abstract
Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex and attempts to extract biological principles that could further advance computer vision research. Writing for a computer vision audience, we present functional principles of the processing hierarchies in the primate visual system in light of recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy, in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that this functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.
Affiliation(s)
- Norbert Krüger, Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Campusvej 55, Odense M 5230, Denmark

11. Predictions in the light of your own action repertoire as a general computational principle. Behav Brain Sci 2013; 36:219-20. [PMID: 23663324] [DOI: 10.1017/s0140525x12002294]
Abstract
We argue that brains generate predictions only within the constraints of the action repertoire. This makes the computational complexity tractable and fosters a step-by-step parallel development of sensory and motor systems. Hence, it is more of a benefit than a literal constraint and may serve as a universal normative principle to understand sensorimotor coupling and interactions with the world.
12. Clark A. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behav Brain Sci 2013; 36:181-204.
Abstract
Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. Such accounts offer a unifying model of perception and action, illuminate the functional role of attention, and may neatly capture the special contribution of cortical processing to adaptive success. This target article critically examines this "hierarchical prediction machine" approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action. Sections 1 and 2 lay out the key elements and implications of the approach. Section 3 explores a variety of pitfalls and challenges, spanning the evidential, the methodological, and the more properly conceptual. The paper ends (sections 4 and 5) by asking how such approaches might impact our more general vision of mind, experience, and agency.
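The "hierarchical prediction machine" picture can be caricatured in a few lines: each level holds expectations, predicts the level below through a generative weight matrix, and only the prediction errors are passed upward to adjust the expectations above. This is a generic linear predictive-coding sketch under invented layer sizes and learning rates, not the target article's specific model.

```python
import numpy as np

rng = np.random.default_rng(2)
sizes = [16, 8, 4]                          # layer widths, bottom (sensory) to top
W = [rng.normal(scale=0.3, size=(sizes[i], sizes[i + 1])) for i in range(len(sizes) - 1)]
mu = [np.zeros(s) for s in sizes]           # expectations at each level
lr = 0.05

def settle(sensory_input, iterations=50):
    """Relax expectations so that top-down predictions match bottom-up input."""
    mu[0] = sensory_input
    for _ in range(iterations):
        errors = []
        for l in range(len(W)):
            pred = W[l] @ mu[l + 1]         # top-down prediction of level l
            errors.append(mu[l] - pred)     # prediction error at level l
        for l in range(len(W)):
            # higher-level expectations move to explain away the error below ...
            mu[l + 1] += lr * W[l].T @ errors[l]
            if l + 1 < len(W):
                # ... while intermediate levels are also pulled toward their own prediction
                mu[l + 1] -= lr * errors[l + 1]
    return errors

errs = settle(rng.normal(size=sizes[0]))
print([float(np.mean(e ** 2)) for e in errs])   # squared prediction error per level
```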
13. Gepperth A. Efficient online bootstrapping of sensory representations. Neural Netw 2012; 41:39-50. [PMID: 23266481] [DOI: 10.1016/j.neunet.2012.11.002]
Abstract
This is a simulation-based contribution exploring a novel approach to the open-ended formation of multimodal representations in autonomous agents. In particular, we address the issue of transferring ("bootstrapping") feature selectivities between two modalities, from a previously learned or innate reference representation to a new induced representation. We demonstrate the potential of this algorithm by several experiments with synthetic inputs modeled after a robotics scenario where multimodal object representations are "bootstrapped" from a (reference) representation of object affordances. We focus on typical challenges in autonomous agents: absence of human supervision, changing environment statistics and limited computing power. We propose an autonomous and local neural learning algorithm termed PROPRE (projection-prediction) that updates induced representations based on predictability: competitive advantages are given to those feature-sensitive elements that are inferable from activities in the reference representation. PROPRE implements a bi-directional interaction of clustering ("projection") and inference ("prediction"), the key ingredient being an efficient online measure of predictability controlling learning in the projection step. We show that the proposed method is computationally efficient and stable, and that the multimodal transfer of feature selectivity is successful and robust under resource constraints. Furthermore, we successfully demonstrate robustness to noisy reference representations, non-stationary input statistics and uninformative inputs.
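A schematic of the projection-prediction interplay described in the abstract, with invented details: prototypes form the induced representation ("projection"), a linear readout tries to infer their activations from the reference representation ("prediction"), and the prototype update is gated by how predictable each unit currently is. The gating function, dimensions, and names are assumptions; this does not reproduce the published PROPRE equations.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ref, n_ind, n_proto = 6, 4, 8              # reference dim, induced input dim, prototypes
protos = rng.normal(size=(n_proto, n_ind))   # induced representation: prototype vectors
readout = np.zeros((n_proto, n_ref))         # linear "prediction" from reference activity
eta_proj, eta_pred = 0.05, 0.05

for step in range(5000):
    ref = rng.normal(size=n_ref)                                # reference-representation activity
    x = np.tanh(ref[:n_ind]) + 0.1 * rng.normal(size=n_ind)     # correlated induced input

    act = -np.sum((protos - x) ** 2, axis=1)      # projection: prototype activations
    pred = readout @ ref                          # prediction of those activations

    # online predictability per prototype: predictable units get a learning advantage
    gate = np.exp(-np.abs(act - pred))            # in (0, 1]
    winner = np.argmax(act)
    protos[winner] += eta_proj * gate[winner] * (x - protos[winner])

    readout += eta_pred * np.outer(act - pred, ref)   # improve the prediction itself

print("predictability gate range:", gate.min(), gate.max())
```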
Affiliation(s)
- Alexander Gepperth, École Nationale Supérieure de Techniques Avancées, 858 Blvd des Maréchaux, 91762 Palaiseau, France

14. Mathews Z, i Badia SB, Verschure PF. PASAR: An integrated model of prediction, anticipation, sensation, attention and response for artificial sensorimotor systems. Inf Sci (N Y) 2012. [DOI: 10.1016/j.ins.2011.09.042]

15. Auffarth B. Activity-dependent memory organization in the early mammalian olfactory pathway for decorrelation, noise reduction, and sparseness-enhancement. BMC Neurosci 2011. [PMCID: PMC3240285] [DOI: 10.1186/1471-2202-12-s1-p186]

16. Vidybida A. Testing of Information Condensation in a Model Reverberating Spiking Neural Network. Int J Neural Syst 2011; 21:187-98. [DOI: 10.1142/s0129065711002742]
Abstract
Information about the external world is delivered to the brain in the form of temporally structured spike trains. During further processing in higher areas, this information is subjected to a certain condensation process, which results in the formation of abstract conceptual images of the external world, apparently represented as uniform spiking activity that is partially independent of the details of the input spike trains. A possible physical mechanism of condensation at the level of an individual neuron was discussed recently. In a reverberating spiking neural network, due to this mechanism, the dynamics should settle down to the same uniform/periodic activity in response to a set of various inputs. Since the same periodic activity may correspond to different input spike trains, we interpret this as a possible candidate for an information condensation mechanism in a network. Our purpose is to test this possibility in a network model consisting of five fully connected neurons, and in particular the influence of the geometric size of the network on its ability to condense information. The dynamics of 20 spiking neural networks of different geometric sizes are modelled by means of computer simulation. Each network was propelled into reverberating dynamics by applying various initial input spike trains, and the dynamics were run until they became periodic. Shannon's formula is used to calculate the amount of information in any input spike train and in any periodic state found. As a result, we obtain an explicit estimate of the degree of information condensation in the networks and conclude that it depends strongly on the network's geometric size.
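With uniformly probable inputs and end-states, the degree of information condensation reduces to the gap between the entropy of the input ensemble and the entropy of the set of periodic end-states. The numbers and the many-to-one mapping below are invented purely to make the arithmetic concrete; they are not the simulated network's statistics.

```python
import math
from collections import Counter

def shannon_bits(counts):
    """Shannon entropy (in bits) of an empirical distribution given by counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

# toy stand-in: 1024 distinct input spike trains mapped onto 16 periodic end-states
n_inputs = 1024
end_state_of = {i: i % 16 for i in range(n_inputs)}        # invented many-to-one mapping

h_in = shannon_bits([1] * n_inputs)                        # log2(1024) = 10 bits
h_out = shannon_bits(list(Counter(end_state_of.values()).values()))
print(f"input {h_in:.1f} bits -> end states {h_out:.1f} bits "
      f"(condensation: {h_in - h_out:.1f} bits)")
```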
Affiliation(s)
- Alexander Vidybida, Department of Synergetics, Bogolyubov Institute for Theoretical Physics, Metrologichna Str., 14-B, 03680 Kyiv, Ukraine

17. Pugeault N, Wörgötter F, Krüger N. Disambiguating multi-modal scene representations using perceptual grouping constraints. PLoS One 2010; 5:e10663. [PMID: 20544006] [PMCID: PMC2882939] [DOI: 10.1371/journal.pone.0010663]
Abstract
In its early stages, the visual system suffers from considerable ambiguity and noise that severely limit the performance of early vision algorithms. This article presents feedback mechanisms between early visual processes, such as perceptual grouping, stereopsis and depth reconstruction, that allow the system to reduce this ambiguity and improve the early representation of visual information. In the first part, the article proposes a local perceptual grouping algorithm that, in addition to commonly used geometric information, makes use of a novel multi-modal measure between local edge/line features. The grouping information is then used to: 1) disambiguate stereopsis by enforcing that stereo matches preserve groups; and 2) correct the reconstruction error due to image pixel sampling using a linear interpolation over the groups. The integration of mutual feedback between early vision processes is shown to considerably reduce ambiguity and noise without the need for global constraints.
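The group-preservation idea in point 1) can be illustrated with a toy filter: candidate stereo matches are kept only if their disparity is consistent with the other features grouped on the same contour. The data structure, the median test, and the tolerance are illustrative assumptions, not the paper's multi-modal grouping measure.

```python
from collections import defaultdict

# candidate stereo matches: left feature id -> (perceptual group id, disparity in pixels)
candidates = {
    "e1": ("contour_A", 12.0),
    "e2": ("contour_A", 12.4),
    "e3": ("contour_A", 30.0),   # inconsistent with its group: likely a false match
    "e4": ("contour_B", 5.1),
    "e5": ("contour_B", 4.9),
}

def filter_by_grouping(cands, tol=2.0):
    """Keep matches whose disparity stays within tol of their group's median disparity."""
    groups = defaultdict(list)
    for grp, disp in cands.values():
        groups[grp].append(disp)
    medians = {g: sorted(d)[len(d) // 2] for g, d in groups.items()}
    return {f: (g, d) for f, (g, d) in cands.items() if abs(d - medians[g]) <= tol}

print(filter_by_grouping(candidates))   # e3 is rejected as group-violating
```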
Affiliation(s)
- Nicolas Pugeault, Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK

18. Duff A, Verschure PF. Unifying perceptual and behavioral learning with a correlative subspace learning rule. Neurocomputing 2010. [DOI: 10.1016/j.neucom.2009.11.048]

19. Weiller D, Läer L, Engel AK, König P. Unsupervised learning of reflexive and action-based affordances to model adaptive navigational behavior. Front Neurorobot 2010; 4:2. [PMID: 20485463] [PMCID: PMC2871689] [DOI: 10.3389/fnbot.2010.00002]
Abstract
Here we introduce a cognitive model capable of modeling a variety of behavioral domains and apply it to a navigational task. We used place cells as the sensory representation, such that the cells' place fields divided the environment into discrete states. The robot learns knowledge of the environment by memorizing the sensory outcomes of its motor actions. This knowledge is composed of a central process, which learns the probabilities of state-to-state transitions caused by motor actions, and a distal processing routine, which learns the extent to which these state-to-state transitions are caused by sensory-driven reflex behavior (obstacle avoidance). Navigational decision making integrates the centrally and distally learned environmental knowledge to select an action that leads to a goal state. Differentiating distal and central processing increases the behavioral accuracy of the selected actions and the ability to adapt behavior to a changed environment. We propose that the system can canonically be expanded to model other behaviors, using alternative definitions of states and actions. The emphasis of this paper is to test this general cognitive model on a robot in a real-world environment.
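A compact sketch of the central learning-and-planning loop described above, under strong simplifications: transition counts over discrete place-cell states are turned into probabilities P(s'|s,a), and a goal-directed policy is obtained by value iteration on that learned model. The distal reflex pathway and all robot specifics are omitted; the state/action sizes and the toy experience generator are invented.

```python
import numpy as np

n_states, n_actions, gamma = 25, 4, 0.95
counts = np.ones((n_states, n_actions, n_states))     # transition counts (with a weak prior)

def observe(s, a, s_next):
    """Memorize the sensory outcome of a motor action as a state transition."""
    counts[s, a, s_next] += 1

def transition_probabilities():
    return counts / counts.sum(axis=2, keepdims=True)

def plan(goal):
    """Value iteration on the learned model; reward 1 only at the goal state."""
    P = transition_probabilities()
    r = np.zeros(n_states)
    r[goal] = 1.0
    V = np.zeros(n_states)
    for _ in range(200):
        V = r + gamma * np.max(P @ V, axis=1)          # P @ V has shape (states, actions)
    return np.argmax(P @ V, axis=1)                    # greedy action per state

# fake experience on a ring of states, then plan toward state 10
rng = np.random.default_rng(4)
for _ in range(20000):
    s, a = rng.integers(n_states), rng.integers(n_actions)
    observe(s, a, (s + (1 if a == 0 else -1)) % n_states)
print(plan(goal=10)[:10])
```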
Affiliation(s)
- Daniel Weiller, Institute of Cognitive Science, Department of Neurobiopsychology, University of Osnabrück, Osnabrück, Germany

20. Weiller D, Märtin R, Dähne S, Engel AK, König P. Involving motor capabilities in the formation of sensory space representations. PLoS One 2010; 5:e10377. [PMID: 20442849] [PMCID: PMC2860999] [DOI: 10.1371/journal.pone.0010377]
Abstract
A goal of sensory coding is to capture features of sensory input that are behaviorally relevant. Therefore, a generic principle of sensory coding should take into account the motor capabilities of an agent. Up to now, unsupervised learning of sensory representations with respect to generic coding principles has been limited to passively received sensory input. Here we propose an algorithm that reorganizes an agent's representation of sensory space by maximizing the predictability of sensory state transitions given a motor action. We applied the algorithm to the sensory spaces of a number of simple, simulated agents with different motor parameters, moving in two-dimensional mazes. We find that the optimization algorithm generates compact, isotropic representations of space, comparable to hippocampal place fields. As expected, the size and spatial distribution of these place-field-like representations adapt to the motor parameters of the agent as well as to its environment. The representations prove to be well suited as a basis for path planning and navigation. They not only possess a high degree of state-transition predictability, but are also temporally stable. We conclude that the coding principle of predictability is a promising candidate for understanding place field formation as the result of sensorimotor reorganization.
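One way to write the predictability objective being maximized is as the negative conditional entropy of the next state given the current state and action. The reorganization procedure itself (reassigning sensory samples between states to climb this objective) is not reproduced here; the estimator below is a simple plug-in version with invented test data.

```python
import numpy as np

def transition_predictability(states, actions, next_states, n_states, n_actions):
    """Predictability objective: negative conditional entropy H(S' | S, A),
    estimated from a sequence of (state, action, next_state) samples."""
    counts = np.full((n_states, n_actions, n_states), 1e-6)   # tiny floor avoids log(0)
    for s, a, s_next in zip(states, actions, next_states):
        counts[s, a, s_next] += 1.0
    joint = counts / counts.sum()
    cond = counts / counts.sum(axis=2, keepdims=True)
    h = -np.sum(joint * np.log2(cond))
    return -h                                   # higher = more predictable transitions

# a deterministic toy mapping is (nearly) maximally predictable: H ~ 0
s = np.arange(1000) % 4
a = np.zeros(1000, dtype=int)
print(transition_predictability(s, a, (s + 1) % 4, n_states=4, n_actions=1))
```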
Affiliation(s)
- Daniel Weiller, Department of Neurobiopsychology, Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany

21. Roepstorff A. Things to think with: words and objects as material symbols. Philos Trans R Soc Lond B Biol Sci 2008; 363:2049-54. [PMID: 18299276] [DOI: 10.1098/rstb.2008.0015]
Abstract
This paper integrates archaeology, anthropology and functional brain imaging in an examination of the cognition of words and objects. Based on a review of recent brain imaging experiments, it is argued that in cognition and action, material symbols may be the link between internal representations and objects and words in the world. This principle is applied to the sapient paradox, the slow development of material innovation at the advent of the anatomically modern human. This translates the paradox into a long-term build-up of extended and distributed cognition supported by development in the complexity of material symbols.
Affiliation(s)
- Andreas Roepstorff, Department of Social Anthropology and Centre for Functionally Integrative Neuroscience, University of Aarhus, 8000 Aarhus, Denmark

22. Supp GG, Schlögl A, Trujillo-Barreto N, Müller MM, Gruber T. Directed cortical information flow during human object recognition: analyzing induced EEG gamma-band responses in brain's source space. PLoS One 2007; 2:e684. [PMID: 17668062] [PMCID: PMC1925146] [DOI: 10.1371/journal.pone.0000684]
Abstract
The increase of induced gamma-band responses (iGBRs; oscillations >30 Hz) elicited by familiar (meaningful) objects is well established in electroencephalogram (EEG) research. This frequency-specific change at distinct locations is thought to indicate the dynamic formation of local neuronal assemblies during the activation of cortical object representations. Since a power increase is, analytically, a property of a single location only, phase synchrony was introduced to investigate the formation of large-scale networks between spatially distant brain sites. However, classical phase synchrony reveals symmetric, pair-wise correlations and is not suited to uncover the directionality of interactions. Here, we investigated the neural mechanism of visual object processing by means of a directional coupling analysis that goes beyond single recording sites and instead assesses the directionality of oscillatory interactions between brain areas directly. This study is the first to identify the directionality of oscillatory brain interactions in source space during human object recognition, and it suggests that familiar, but not unfamiliar, objects engage widespread reciprocal information flow. The directionality of cortical information flow was calculated with an established Granger-causality coupling measure, partial directed coherence (PDC), based on autoregressive modeling. To enable comparison with previous coupling studies lacking directional information, phase-locking analysis was applied, using wavelet-based signal decompositions. Both autoregressive modeling and wavelet analysis revealed an augmentation of iGBRs during the presentation of familiar objects relative to unfamiliar controls, which was localized to inferior-temporal, superior-parietal and frontal brain areas by means of distributed source reconstruction. The multivariate analysis of PDC evaluated each possible direction of brain interaction and revealed widespread reciprocal information transfer during familiar object processing. In contrast, unfamiliar objects entailed only a sparse number of unidirectional connections converging on parietal areas. Considering the directionality of brain interactions, the current results might indicate that successful activation of object representations is realized through reciprocal (feed-forward and feed-backward) information transfer of oscillatory connections between distant, functionally specific brain areas.
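Partial directed coherence is computed from the coefficients of a multivariate autoregressive (MVAR) model of the source time courses. The sketch below fits a two-channel MVAR model by least squares and evaluates the standard PDC formula; it is a minimal illustration, not the study's source-space pipeline, statistics, or time-frequency analysis.

```python
import numpy as np

def fit_mvar(x, p):
    """Least-squares MVAR fit: x[t] = sum_k A[k] x[t-k] + noise. x: (T, n)."""
    T, n = x.shape
    Y = x[p:]
    Z = np.hstack([x[p - k:T - k] for k in range(1, p + 1)])      # lagged regressors
    B, *_ = np.linalg.lstsq(Z, Y, rcond=None)                     # shape (n*p, n)
    return [B[k * n:(k + 1) * n].T for k in range(p)]             # list of (n, n) A[k]

def pdc(A, n_freqs=64):
    """Partial directed coherence |A_ij(f)| / sqrt(sum_k |A_kj(f)|^2)."""
    n = A[0].shape[0]
    out = np.zeros((n_freqs, n, n))
    for i, f in enumerate(np.linspace(0, 0.5, n_freqs)):
        Af = np.eye(n, dtype=complex)
        for k, Ak in enumerate(A, start=1):
            Af -= Ak * np.exp(-2j * np.pi * f * k)
        out[i] = np.abs(Af) / np.sqrt((np.abs(Af) ** 2).sum(axis=0, keepdims=True))
    return out

# toy data where channel 0 drives channel 1 but not vice versa
rng = np.random.default_rng(5)
x = np.zeros((4000, 2))
for t in range(2, 4000):
    x[t, 0] = 0.6 * x[t - 1, 0] + rng.normal()
    x[t, 1] = 0.6 * x[t - 1, 1] + 0.5 * x[t - 1, 0] + rng.normal()
A = fit_mvar(x, p=2)
P = pdc(A)
print("mean PDC 0->1:", P[:, 1, 0].mean(), " 1->0:", P[:, 0, 1].mean())
```

In this toy data, channel 0 drives channel 1, so PDC from 0 to 1 comes out clearly larger than the reverse; this is the kind of directional asymmetry the study evaluates between brain areas.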
Affiliation(s)
- Gernot G. Supp, Department of Neurophysiology and Pathophysiology, Center of Experimental Medicine, University Medical Center Hamburg-Eppendorf, University of Hamburg, Hamburg, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Alois Schlögl, Institute of Human-Computer Interfaces, University of Technology, Graz, Austria; Intelligent Data Analysis Group, Fraunhofer Institute FIRST, Institute Computer Architecture and Software Technology, Berlin, Germany
- Thomas Gruber, Institute of Psychology I, University of Leipzig, Leipzig, Germany

23. Learning Temporally Stable Representations from Natural Sounds: Temporal Stability as a General Objective Underlying Sensory Processing. 2007. [DOI: 10.1007/978-3-540-74695-9_14]