1
Jurasz K, Kościelnik D, Szyduczyński J, Machowski W. A New Successive Time Balancing Time-to-Digital Conversion Method. Sensors (Basel) 2023; 23:9712. [PMID: 38139557] [PMCID: PMC10747889] [DOI: 10.3390/s23249712]
Abstract
This paper presents a new self-clocked time-to-digital conversion method based on a binary successive approximation (SA) algorithm. Its novelty lies in combining fully clockless operation with direct conversion of the measured time interval. The absence of any reference clock makes the presented method well suited to low-power implementations. Furthermore, its circuit realization is extremely simple, so the ability to convert time intervals directly does not come at the cost of a significant number of components. The method is intended for relatively long time intervals, i.e., hundreds of microseconds, and is therefore suitable for, e.g., biomedical applications using time-mode signal processing.
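For readers unfamiliar with SA conversion, the sketch below applies the plain binary successive-approximation principle to a time interval in software. It illustrates only the algorithm, not the authors' clockless, self-clocked circuit; the full-scale interval t_full and resolution n_bits are hypothetical example values.

```python
# Illustrative only: a software binary successive-approximation (SA) quantizer
# for a time interval. This is not the paper's clockless circuit; t_full and
# n_bits are hypothetical example values.

def sa_tdc(t_in, t_full=100e-6, n_bits=8):
    """Return the n_bits SA code for an interval t_in in [0, t_full)."""
    code, residual, step = 0, t_in, t_full / 2.0
    for bit in range(n_bits - 1, -1, -1):
        if residual >= step:      # measured interval exceeds the trial reference
            residual -= step      # "balance out" that reference portion
            code |= 1 << bit      # keep the bit
        step /= 2.0               # halve the reference for the next trial
    return code

# Example: an 80 us interval against a 100 us full scale
code = sa_tdc(80e-6)
print(code, code / 2**8 * 100e-6)   # 204, about 79.7 us
```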
Affiliation(s)
- Konrad Jurasz
- Department of Electronics, AGH University of Science and Technology, 30-059 Krakow, Poland
- Dariusz Kościelnik
- Department of Electronics, AGH University of Science and Technology, 30-059 Krakow, Poland
2
3
Florescu D, Coca D. Identification of Linear and Nonlinear Sensory Processing Circuits from Spiking Neuron Data. Neural Comput 2018; 30:670-707. [PMID: 29342394] [DOI: 10.1162/neco_a_01051]
Abstract
Inferring mathematical models of sensory processing systems directly from input-output observations, while making the fewest assumptions about the model equations and the types of measurements available, is still a major issue in computational neuroscience. This letter introduces two new approaches for identifying sensory circuit models consisting of linear and nonlinear filters in series with spiking neuron models, based only on the sampled analog input to the filter and the recorded spike train output of the spiking neuron. For an ideal integrate-and-fire neuron model, the first algorithm can identify the spiking neuron parameters as well as the structure and parameters of an arbitrary nonlinear filter connected to it. The second algorithm can identify the parameters of the more general leaky integrate-and-fire spiking neuron model, as well as the parameters of an arbitrary linear filter connected to it. Numerical studies involving simulated and real experimental recordings are used to demonstrate the applicability and evaluate the performance of the proposed algorithms.
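For orientation, the sketch below generates the kind of input-output data these identification algorithms operate on: a sampled analog input, a linear filter in series with an ideal integrate-and-fire (IAF) neuron, and the resulting spike times, together with the t-transform relation that ties the spike times to the unknown parameters. The filter shape, bias, and threshold values are illustrative; the snippet does not implement the paper's identification procedures.

```python
import numpy as np

# Illustrative data generation only (not the identification algorithms): a
# linear filter in series with an ideal integrate-and-fire (IAF) neuron.
# Filter shape, bias b and threshold delta are assumed example values.

dt = 1e-4
t = np.arange(0, 1.0, dt)
u = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)  # sampled analog input

h = np.exp(-t[:500] / 0.02)                   # hypothetical linear filter (decaying exponential)
v = dt * np.convolve(u, h)[: t.size]          # filter output driving the neuron

b, delta = 1.5, 0.01                          # IAF bias and firing threshold
integ, spikes = 0.0, []
for k in range(t.size):
    integ += dt * (b + v[k])                  # ideal IAF: integrate bias plus filtered input
    if integ >= delta:                        # threshold crossing: emit spike, reset by delta
        spikes.append(t[k])
        integ -= delta

# The t-transform exploited by such identification methods: for consecutive
# spike times t_k, t_{k+1},  integral of (b + v) over [t_k, t_{k+1}] = delta,
# which links the observed spike train to the unknown filter and neuron parameters.
print(len(spikes), "spikes, mean interspike interval",
      round(float(np.mean(np.diff(spikes))), 4), "s")
```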
Affiliation(s)
- Dorian Florescu
- Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, S1 3JD, U.K.
- Daniel Coca
- Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, S1 3JD, U.K.
4
Lazar AA, Slutskiy YB, Zhou Y. Massively parallel neural circuits for stereoscopic color vision: encoding, decoding and identification. Neural Netw 2015; 63:254-71. [PMID: 25594573] [DOI: 10.1016/j.neunet.2014.10.014]
Abstract
Past work demonstrated how monochromatic visual stimuli could be faithfully encoded and decoded under Nyquist-type rate conditions. Color visual stimuli were then traditionally encoded and decoded in multiple separate monochromatic channels. The brain, however, appears to mix information about color channels at the earliest stages of the visual system, including the retina itself. If information about color is mixed and encoded by a common pool of neurons, how can colors be demixed and perceived? We present Color Video Time Encoding Machines (Color Video TEMs) for encoding color visual stimuli that take into account a variety of color representations within a single neural circuit. We then derive a Color Video Time Decoding Machine (Color Video TDM) algorithm for color demixing and reconstruction of color visual scenes from spikes produced by a population of visual neurons. In addition, we formulate Color Video Channel Identification Machines (Color Video CIMs) for functionally identifying color visual processing performed by a spiking neural circuit. Furthermore, we derive a duality between TDMs and CIMs that unifies the two and leads to a general theory of neural information representation for stereoscopic color vision. We provide examples demonstrating that a massively parallel color visual neural circuit can be first identified with arbitrary precision and its spike trains can be subsequently used to reconstruct the encoded stimuli. We argue that evaluation of the functional identification methodology can be effectively and intuitively performed in the stimulus space. In this space, a signal reconstructed from spike trains generated by the identified neural circuit can be compared to the original stimulus.
Affiliation(s)
- Aurel A Lazar
- Department of Electrical Engineering, Columbia University, New York, NY, USA.
- Yevgeniy B Slutskiy
- Department of Electrical Engineering, Columbia University, New York, NY, USA.
- Yiyin Zhou
- Department of Electrical Engineering, Columbia University, New York, NY, USA.
5
Lazar AA, Slutskiy YB. Channel identification machines for multidimensional receptive fields. Front Comput Neurosci 2014; 8:117. [PMID: 25309413] [PMCID: PMC4176398] [DOI: 10.3389/fncom.2014.00117]
Abstract
We present algorithms for identifying multidimensional receptive fields directly from spike trains produced by biophysically-grounded neuron models. We demonstrate that only the projection of a receptive field onto the input stimulus space may be perfectly identified and derive conditions under which this identification is possible. We also provide detailed examples of identification of neural circuits incorporating spatiotemporal and spectrotemporal receptive fields.
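The central point, that only the projection of a receptive field onto the span of the test stimuli can be recovered, can be checked with a toy linear example such as the one below. This is not the paper's CIM algorithm itself; the basis, dimensions, and the deliberately unprobed high-frequency component are arbitrary choices.

```python
import numpy as np

# Toy check (not the CIM algorithm): identification from input-output data can
# only recover the projection of the receptive field onto the stimulus space.
# Basis, sizes and the unprobed component are arbitrary example choices.

rng = np.random.default_rng(0)
n = 200
i = np.arange(n)
basis = np.array([np.cos(2 * np.pi * k * i / n) for k in range(1, 6)])  # stimulus space (5 dims)

true_rf = 1.5 * basis[1] - 0.7 * basis[3]            # part of the RF inside the stimulus space
true_rf = true_rf + np.cos(2 * np.pi * 20 * i / n)   # part the test stimuli cannot probe

stimuli = rng.normal(size=(50, 5)) @ basis           # 50 test stimuli drawn from the space
responses = stimuli @ true_rf                        # noiseless linear responses

rf_hat, *_ = np.linalg.lstsq(stimuli, responses, rcond=None)   # identified receptive field

# Projection of the true RF onto the stimulus space
proj = basis.T @ np.linalg.solve(basis @ basis.T, basis @ true_rf)
print("error vs full RF        :", round(float(np.linalg.norm(rf_hat - true_rf)), 3))
print("error vs its projection :", round(float(np.linalg.norm(rf_hat - proj)), 8))  # ~ 0
```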
Affiliation(s)
- Aurel A Lazar
- Bionet Group, Department of Electrical Engineering, Columbia University in the City of New York, New York, NY, USA
- Yevgeniy B Slutskiy
- Bionet Group, Department of Electrical Engineering, Columbia University in the City of New York, New York, NY, USA
6
Spiking neural circuits with dendritic stimulus processors: encoding, decoding, and identification in reproducing kernel Hilbert spaces. J Comput Neurosci 2014; 38:1-24. [PMID: 25175020] [DOI: 10.1007/s10827-014-0522-8]
Abstract
We present a multi-input multi-output neural circuit architecture for nonlinear processing and encoding of stimuli in the spike domain. In this architecture a bank of dendritic stimulus processors implements nonlinear transformations of multiple temporal or spatio-temporal signals such as spike trains or auditory and visual stimuli in the analog domain. Dendritic stimulus processors may act on both individual stimuli and on groups of stimuli, thereby executing complex computations that arise as a result of interactions between concurrently received signals. The results of the analog-domain computations are then encoded into a multi-dimensional spike train by a population of spiking neurons modeled as nonlinear dynamical systems. We investigate general conditions under which such circuits faithfully represent stimuli and demonstrate algorithms for (i) stimulus recovery, or decoding, and (ii) identification of dendritic stimulus processors from the observed spikes. Taken together, our results demonstrate a fundamental duality between the identification of the dendritic stimulus processor of a single neuron and the decoding of stimuli encoded by a population of neurons with a bank of dendritic stimulus processors. This duality result enabled us to derive lower bounds on the number of experiments to be performed and the total number of spikes that need to be recorded for identifying a neural circuit.
7
Addition of visual noise boosts evoked potential-based brain-computer interface. Sci Rep 2014; 4:4953. [PMID: 24828128] [PMCID: PMC4021798] [DOI: 10.1038/srep04953]
Abstract
Although noise has a proven beneficial role in brain function, the stochastic resonance effect has not previously been exploited deliberately in neural engineering applications, in particular in research on brain-computer interfaces (BCIs). In our study, a steady-state motion visual evoked potential (SSMVEP)-based BCI using periodic visual stimulation plus moderate spatiotemporal noise achieved better offline and online performance, owing to the enhancement of periodic components in brain responses accompanied by suppression of higher harmonics. Offline results exhibited a bell-shaped, resonance-like dependence on noise level, and online performance improvements of 7–36% were obtained when identical visual noise was used for different stimulation frequencies. Using neural encoding modeling, these phenomena can be explained as noise-induced input-output synchronization in human sensory systems, which commonly possess a low-pass property. Our work demonstrates that noise can boost BCIs in addressing human needs.
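The bell-shaped, resonance-like behaviour referred to above is the classical stochastic resonance signature, which the minimal sketch below reproduces with a subthreshold sinusoid and a simple threshold detector. It is only a generic illustration of the effect, not the SSMVEP-BCI pipeline; the threshold, amplitudes and frequencies are assumed example values.

```python
import numpy as np

# Generic stochastic-resonance demo (not the SSMVEP-BCI pipeline): a weak,
# subthreshold periodic input drives a threshold detector most effectively at
# intermediate noise levels. All numeric values are assumed example values.

rng = np.random.default_rng(0)
fs, f_sig, duration = 1000.0, 10.0, 20.0            # sample rate (Hz), stimulus (Hz), seconds
t = np.arange(0, duration, 1.0 / fs)
signal = 0.4 * np.sin(2 * np.pi * f_sig * t)        # subthreshold periodic drive
threshold = 1.0

def power_at_stimulus_freq(noise_sigma):
    """Output power of the threshold detector at the stimulation frequency."""
    x = signal + rng.normal(0.0, noise_sigma, t.size)
    y = (x > threshold).astype(float)                # crude threshold detector
    spectrum = np.fft.rfft(y - y.mean())
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
    return np.abs(spectrum[np.argmin(np.abs(freqs - f_sig))]) ** 2

for sigma in (0.05, 0.3, 1.0, 2.5, 5.0):
    print(f"noise sigma {sigma:4.2f} -> power at {f_sig:.0f} Hz: {power_at_stimulus_freq(sigma):.0f}")
# The power is near zero for very weak noise (the signal never crosses threshold),
# peaks at a moderate noise level, and falls again for strong noise: the
# bell-shaped, resonance-like behaviour described in the abstract.
```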
8
Modeling the formation process of grouping stimuli sets through cortical columns and microcircuits to feature neurons. Comput Intell Neurosci 2013; 2013:290358. [PMID: 24369455] [PMCID: PMC3863480] [DOI: 10.1155/2013/290358]
Abstract
A computational model of a self-structuring neuronal net is presented in which repetitively applied pattern sets induce the formation of cortical columns and microcircuits that decode distinct patterns after a learning phase. In a case study, it is demonstrated how specific neurons in a feature classifier layer become orientation selective if they receive bar patterns of different slopes from an input layer. The input layer is mapped and intertwined by self-evolving neuronal microcircuits to the feature classifier layer. In this topical overview, several models are discussed which indicate that the net formation converges in its functionality to a mathematical transform mapping the input pattern space to a feature-representing output space. The self-learning of this mathematical transform is discussed and its implications are interpreted. Model assumptions are deduced which serve as a guide for applying model-derived repetitive stimulus pattern sets to in vitro cultures of neuron ensembles, to condition them to learn and execute a mathematical transform.
9
Afshar S, Cohen GK, Wang RM, Van Schaik A, Tapson J, Lehmann T, Hamilton TJ. The ripple pond: enabling spiking networks to see. Front Neurosci 2013; 7:212. [PMID: 24298234] [PMCID: PMC3829577] [DOI: 10.3389/fnins.2013.00212]
Abstract
We present the biologically inspired Ripple Pond Network (RPN), a simply connected spiking neural network that transforms two-dimensional images into one-dimensional temporal patterns (TPs) suitable for recognition by temporal coding learning and memory networks. The RPN has been developed as a hardware solution linking previously implemented neuromorphic vision and memory structures, such as frameless vision sensors and neuromorphic temporal coding spiking neural networks. Working together, such systems are potentially capable of delivering end-to-end, high-speed, low-power and low-resolution recognition for mobile and autonomous applications where slow, highly sophisticated and power-hungry signal processing solutions are ineffective. Key aspects of the proposed approach include exploiting the spatial properties of physically embedded neural networks and the waves of activity propagating through them for information processing, collapsing the dimensionality of image information into amenable TPs, and using asynchronous frames for information binding.
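As a rough software analogue of the dimensional collapse described above (and not of the spiking RPN hardware itself), the sketch below reduces a 2D image to a 1D pattern by summing intensity over rings expanding from the image centroid, as if a wave were rippling outward; the image content, size, and number of rings are arbitrary choices.

```python
import numpy as np

# Rough software analogue of the RPN's dimensional collapse, not the spiking
# hardware network: sum image intensity over rings expanding from the centroid.
# Image content, size and n_rings are arbitrary example choices.

def radial_collapse(image, n_rings=16):
    ys, xs = np.indices(image.shape)
    total = image.sum()
    cy, cx = (ys * image).sum() / total, (xs * image).sum() / total  # intensity centroid
    r = np.hypot(ys - cy, xs - cx)
    edges = np.linspace(0.0, r.max() + 1e-9, n_rings + 1)
    # One value per ring; the resulting 1D pattern is invariant to rotation about
    # the centroid and, because the centroid tracks the object, to translation.
    return np.array([image[(r >= lo) & (r < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

img = np.zeros((64, 64))
img[20:44, 30:34] = 1.0                      # a vertical bar
print(np.allclose(radial_collapse(img), radial_collapse(img.T)))  # same bar rotated 90 deg -> True
```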
Affiliation(s)
- Saeed Afshar
- Bioelectronics and Neurosciences, The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia; School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, NSW, Australia
10
Functional identification and evaluation of massively parallel neural circuits. BMC Neurosci 2013. [PMCID: PMC3704760] [DOI: 10.1186/1471-2202-14-s1-p431]
11
The power of connectivity: identity preserving transformations on visual streams in the spike domain. Neural Netw 2013; 44:22-35. [PMID: 23545540] [DOI: 10.1016/j.neunet.2013.02.013]
Abstract
We investigate neural architectures for identity preserving transformations (IPTs) on visual stimuli in the spike domain. The stimuli are encoded with a population of spiking neurons; the resulting spikes are processed and finally decoded. A number of IPTs are demonstrated including faithful stimulus recovery, as well as simple transformations on the original visual stimulus such as translations, rotations and zoomings. We show that if the set of receptive fields satisfies certain symmetry properties, then IPTs can easily be realized and additionally, the same basic stimulus decoding algorithm can be employed to recover the transformed input stimulus. Using group theoretic methods we advance two different neural encoding architectures and discuss the realization of exact and approximate IPTs. These are realized in the spike domain processing block by a "switching matrix" that regulates the input/output connectivity between the stimulus encoding and decoding blocks. For example, for a particular connectivity setting of the switching matrix, the original stimulus is faithfully recovered. For other settings, translations, rotations and dilations (or combinations of these operations) of the original video stream are obtained. We evaluate our theoretical derivations through extensive simulations on natural video scenes, and discuss implications of our results on the problem of invariant object recognition in the spike domain.
12
Cao Y, He H, Man H. SOMKE: kernel density estimation over data streams by sequences of self-organizing maps. IEEE Trans Neural Netw Learn Syst 2012; 23:1254-1268. [PMID: 24807522] [DOI: 10.1109/tnnls.2012.2201167]
Abstract
In this paper, we propose SOMKE, a novel method for kernel density estimation (KDE) over data streams based on sequences of self-organizing maps (SOMs). In many stream data mining applications, traditional KDE methods are infeasible because of their high computational cost, processing time, and memory requirements. To reduce the time and space complexity, we propose a SOM structure to obtain well-defined data clusters that estimate the underlying probability distributions of incoming data streams. The main idea is to build a series of SOMs over the data streams via two operations, namely creating and merging SOM sequences. The creation phase produces SOM sequence entries for windows of the data, capturing clustering information about the incoming stream. The size of the SOM sequences can be further reduced by combining consecutive entries in the sequence based on the Kullback-Leibler divergence. Finally, probability density functions over arbitrary time periods along the data streams can be estimated using such SOM sequences. We compare SOMKE with two other KDE methods for data streams, the M-kernel approach and the cluster kernel approach, in terms of accuracy and processing time for various stationary data streams. Furthermore, we also investigate the use of SOMKE over nonstationary (evolving) data streams, including a synthetic nonstationary data stream, a real-world financial data stream, and a group of network traffic data streams. The simulation results illustrate the effectiveness and efficiency of the proposed approach.
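The sketch below illustrates the core SOMKE idea on a single data window: train a small one-dimensional SOM, then form the density estimate from the SOM prototypes weighted by their hit counts. The merging of SOM sequence entries by Kullback-Leibler divergence is omitted, and the map size, learning schedule, and bandwidth are assumed example values.

```python
import numpy as np

# Single-window sketch of the SOMKE idea (the full method also merges SOM
# sequence entries across windows using Kullback-Leibler divergence, omitted
# here). Map size, learning schedule and bandwidth are assumed example values.

rng = np.random.default_rng(1)

def train_som(window, n_nodes=10, epochs=20, lr0=0.5, sigma0=2.0):
    """Fit a 1D SOM to one window of scalar data; return prototypes and hit counts."""
    w = np.sort(rng.choice(window, n_nodes))              # prototype vectors
    idx = np.arange(n_nodes)
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)                     # decaying learning rate
        sig = sigma0 * (1.0 - e / epochs) + 0.5           # shrinking neighbourhood
        for x in rng.permutation(window):
            bmu = np.argmin(np.abs(w - x))                # best-matching unit
            w += lr * np.exp(-((idx - bmu) ** 2) / (2 * sig ** 2)) * (x - w)
    hits = np.bincount(np.argmin(np.abs(window[:, None] - w[None, :]), axis=1),
                       minlength=n_nodes)
    return w, hits

def som_kde(x, w, hits, bandwidth=0.3):
    """Density estimate at points x from prototypes weighted by hit counts."""
    weights = hits / hits.sum()
    kernels = np.exp(-0.5 * ((x[:, None] - w[None, :]) / bandwidth) ** 2)
    return (kernels * weights).sum(axis=1) / (bandwidth * np.sqrt(2 * np.pi))

window = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(1, 1.0, 500)])  # one data window
prototypes, hits = train_som(window)
grid = np.linspace(-4, 4, 9)
print(np.round(som_kde(grid, prototypes, hits), 3))   # summarized-window density estimate
```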
13
Massively parallel neural encoding and decoding of visual stimuli. Neural Netw 2012; 32:303-12. [PMID: 22397951] [DOI: 10.1016/j.neunet.2012.02.007]
Abstract
The massively parallel nature of video Time Encoding Machines (TEMs) calls for scalable, massively parallel decoders that are implemented with neural components. The current generation of decoding algorithms is based on computing the pseudo-inverse of a matrix and does not satisfy these requirements. Here we consider video TEMs with an architecture built using Gabor receptive fields and a population of Integrate-and-Fire neurons. We show how to build a scalable architecture for video Time Decoding Machines using recurrent neural networks. Furthermore, we extend our architecture to handle the reconstruction of visual stimuli encoded with massively parallel video TEMs having neurons with random thresholds. Finally, we discuss in detail our algorithms and demonstrate their scalability and performance on a large scale GPU cluster.
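For context, the pseudo-inverse decoding the abstract refers to can be shown in one dimension: a bandlimited signal is encoded by an ideal integrate-and-fire neuron and reconstructed as a sinc expansion whose coefficients come from a Moore-Penrose pseudo-inverse. This is only a small, assumed-parameter sketch of the principle, not the paper's massively parallel video architecture or its recurrent-network decoder.

```python
import numpy as np

# 1D sketch of pseudo-inverse time decoding (not the paper's massively parallel
# video TEM/TDM or its recurrent-network decoder). Bandwidth, bias and
# threshold are assumed example values; the discrete-time simulation makes the
# recovery approximate.

dt = 1e-4
t = np.arange(0, 1.0, dt)
omega = 2 * np.pi * 15.0                                  # signal bandwidth (rad/s)
rng = np.random.default_rng(2)
freqs, phases = rng.uniform(1, 15, 5), rng.uniform(0, 2 * np.pi, 5)
u = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))
u = u / np.max(np.abs(u))                                 # bandlimited test stimulus

b, kappa, delta = 2.0, 1.0, 0.025                         # IAF bias, capacitance, threshold
integ, tk = 0.0, []
for k in range(t.size):                                   # encode: integrate-and-fire
    integ += dt * (u[k] + b) / kappa
    if integ >= delta:
        tk.append(t[k]); integ -= delta
tk = np.array(tk)

q = kappa * delta - b * np.diff(tk)                       # t-transform measurements
g = lambda tau: omega / np.pi * np.sinc(omega * tau / np.pi)   # lowpass (sinc) kernel
s = 0.5 * (tk[:-1] + tk[1:])                              # reconstruction-kernel centres
G = np.array([[dt * g(t[(t >= ta) & (t <= tb)] - sl).sum()     # G[k,l] ~ integral of g(tau - s_l) over [t_k, t_{k+1}]
               for sl in s] for ta, tb in zip(tk[:-1], tk[1:])])
coef = np.linalg.pinv(G) @ q                              # pseudo-inverse decoding
u_hat = (coef[None, :] * g(t[:, None] - s[None, :])).sum(axis=1)

mask = (t > 0.1) & (t < 0.9)                              # ignore window-edge effects
print("max reconstruction error:", round(float(np.max(np.abs(u - u_hat)[mask])), 3))
```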
14
Folowosele F, Hamilton TJ, Etienne-Cummings R. Silicon modeling of the Mihalaş-Niebur neuron. IEEE Trans Neural Netw 2011; 22:1915-27. [PMID: 21990331] [DOI: 10.1109/tnn.2011.2167020]
Abstract
There are a number of spiking and bursting neuron models of varying complexity, ranging from the simple integrate-and-fire model to the more complex Hodgkin-Huxley model. The simpler models tend to be easy to implement in silicon but are not biologically plausible, whereas the more complex models are more biologically plausible but tend to occupy a large area. In this paper, we present a 0.5 μm complementary metal-oxide-semiconductor (CMOS) implementation of the Mihalaş-Niebur neuron model, a generalized leaky integrate-and-fire neuron with an adaptive threshold that is able to produce most of the known spiking and bursting patterns that have been observed in biology. Our implementation modifies the originally proposed model, making it more amenable to CMOS implementation and more biologically plausible. All but one of the spiking properties (tonic spiking, class 1 spiking, phasic spiking, hyperpolarized spiking, rebound spiking, spike frequency adaptation, accommodation, threshold variability, integrator, and input bistability) are demonstrated in this model.
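For readers who want to experiment with the behavioural repertoire listed above, the sketch below is a minimal forward-Euler simulation of the Mihalaş-Niebur model equations (membrane potential, adaptive threshold, and two decaying internal currents with reset rules). The parameter values are illustrative choices producing tonic spiking and spike-frequency adaptation; they are not the paper's silicon-circuit parameters.

```python
import numpy as np

# Minimal software sketch of the Mihalas-Niebur model the chip implements
# (forward Euler, two behaviours only). All parameter values are illustrative
# and are not the paper's silicon parameters.

def mn_neuron(a, t_max=500.0, dt=0.1):
    """Simulate one MN neuron for t_max ms; return spike times in ms."""
    E_L, V_r = -70.0, -70.0            # resting / reset potential (mV)
    Th_inf, Th_r = -50.0, -60.0        # threshold equilibrium / reset (mV)
    G_over_C, b = 0.05, 0.01           # leak rate and threshold decay (1/ms)
    k1, k2 = 0.2, 0.02                 # internal-current decay rates (1/ms)
    R1, R2, A1, A2 = 0.0, 1.0, 0.0, 0.0
    I_ext = 1.5                        # injected current / C (mV/ms)

    V, Th, I1, I2, spikes = E_L, Th_inf, 0.0, 0.0, []
    for step in range(int(t_max / dt)):
        I1 += dt * (-k1 * I1)
        I2 += dt * (-k2 * I2)
        V += dt * (I_ext + I1 + I2 - G_over_C * (V - E_L))
        Th += dt * (a * (V - E_L) - b * (Th - Th_inf))   # adaptive threshold
        if V >= Th:                                       # spike and apply reset rules
            spikes.append(step * dt)
            I1, I2 = R1 * I1 + A1, R2 * I2 + A2
            V, Th = V_r, max(Th_r, Th)
    return np.array(spikes)

for a in (0.0, 0.002):                 # a = 0: tonic spiking; a > 0: spike-frequency adaptation
    isi = np.diff(mn_neuron(a))
    print(f"a={a}: first ISI {isi[0]:.1f} ms, last ISI {isi[-1]:.1f} ms")
```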
Affiliation(s)
- Fopefolu Folowosele
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA.