1
Tayarani-Najaran MH, Schmuker M. Event-Based Sensing and Signal Processing in the Visual, Auditory, and Olfactory Domain: A Review. Front Neural Circuits 2021; 15:610446. [PMID: 34135736 PMCID: PMC8203204 DOI: 10.3389/fncir.2021.610446]
Abstract
The nervous system converts the physical quantities sensed by its primary receptors into trains of events that are then processed in the brain. This unmatched efficiency in information processing has long inspired engineers to seek brain-like approaches to sensing and signal processing. The key principle pursued in neuromorphic sensing is to shed the traditional approach of periodic sampling in favor of an event-driven scheme that mimics sampling as it occurs in the nervous system, where events are preferentially emitted upon a change in the sensed stimulus. In this paper we highlight the advantages and challenges of event-based sensing and signal processing in the visual, auditory, and olfactory domains. We also provide a survey of the literature covering neuromorphic sensing and signal processing in all three modalities. Our aim is to facilitate research in event-based sensing and signal processing by providing a comprehensive overview of previous work and by highlighting conceptual advantages, current progress, and future challenges in the field.
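As a rough illustration of the change-driven sampling principle summarized above (a sketch under assumed names and thresholds, not code from the review), the following snippet emits an event only when a signal has moved away from its value at the previous event by more than a fixed threshold — the send-on-delta scheme that underlies many event-based sensors.

```python
def send_on_delta(samples, threshold=0.15):
    """Emit (index, polarity) events whenever the signal has moved more than
    `threshold` away from its value at the previous event."""
    events = []
    reference = samples[0]
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - reference) >= threshold:
            events.append((i, +1 if x > reference else -1))
            reference = x  # re-anchor the reference at the new event
    return events

# A slowly drifting signal yields only a handful of events.
signal = [0.0, 0.02, 0.05, 0.2, 0.22, 0.5, 0.48, 0.1]
print(send_on_delta(signal))  # [(3, 1), (5, 1), (7, -1)]
```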
Affiliation(s)
- Michael Schmuker
- School of Physics, Engineering and Computer Science, University of Hertfordshire, Hatfield, United Kingdom
2
Shi C, Li J, Wang Y, Luo G. Exploiting Lightweight Statistical Learning for Event-Based Vision Processing. IEEE Access 2018; 6:19396-19406. [PMID: 29750138 PMCID: PMC5937990 DOI: 10.1109/access.2018.2823260]
Abstract
This paper presents a lightweight statistical learning framework potentially suitable for low-cost event-based vision systems, where visual information is captured by a dynamic vision sensor (DVS) and represented as an asynchronous stream of pixel addresses (events) indicating relative intensity changes at those locations. A simple random ferns classifier based on randomly selected patch-based binary features is employed to categorize pixel event flows. Our experimental results demonstrate that, compared to existing event-based processing algorithms such as spiking convolutional neural networks (SCNNs) and the state-of-the-art bag-of-events (BoE)-based statistical algorithms, our framework excels in processing speed (2× faster than the BoE statistical methods and >100× faster than previous SCNNs in training speed) with an extremely simple online learning process, and achieves state-of-the-art classification accuracy on four popular address-event representation data sets: MNIST-DVS, Poker-DVS, Posture-DVS, and CIFAR10-DVS. Hardware estimation shows that our algorithm will be preferable for low-cost embedded system implementations.
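The core classifier named in this abstract, random ferns over binary features, can be sketched in a few lines. The snippet below is a generic illustration under assumed interfaces (it starts from precomputed binary feature vectors, and the class and parameter names are invented); the paper's own patch-based feature extraction from DVS event flows is not reproduced here.

```python
import numpy as np

class RandomFerns:
    """Minimal random-ferns classifier over binary feature vectors: each fern
    groups `depth` randomly chosen features into an integer code, and
    class-conditional code histograms are combined in log space."""

    def __init__(self, n_features, n_classes, n_ferns=10, depth=8, seed=0):
        rng = np.random.default_rng(seed)
        self.idx = rng.integers(0, n_features, size=(n_ferns, depth))
        self.counts = np.ones((n_ferns, n_classes, 2 ** depth))  # Laplace prior

    def _codes(self, x):
        bits = x[self.idx].astype(int)            # (n_ferns, depth) 0/1 values
        weights = 1 << np.arange(bits.shape[1])   # 1, 2, 4, ...
        return bits @ weights                     # one integer code per fern

    def update(self, x, label):                   # one online learning step
        self.counts[np.arange(len(self.idx)), label, self._codes(x)] += 1

    def predict(self, x):
        codes = self._codes(x)
        probs = self.counts / self.counts.sum(axis=2, keepdims=True)
        log_post = np.log(probs[np.arange(len(self.idx)), :, codes]).sum(axis=0)
        return int(np.argmax(log_post))
```

In use, x would be a flattened binary descriptor built from DVS event patches; that extraction step is where the paper's specific design lies.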
Affiliation(s)
- Cong Shi
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114 USA
- Jiajun Li
- State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100864, China
- Ying Wang
- State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100864, China
- Gang Luo
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Department of Ophthalmology, Harvard Medical School, Boston, MA 02114 USA
3
Park J, Yu T, Joshi S, Maier C, Cauwenberghs G. Hierarchical Address Event Routing for Reconfigurable Large-Scale Neuromorphic Systems. IEEE Trans Neural Netw Learn Syst 2017; 28:2408-2422. [PMID: 27483491 DOI: 10.1109/tnnls.2016.2572164]
Abstract
We present a hierarchical address-event routing (HiAER) architecture for scalable communication of neural and synaptic spike events between neuromorphic processors, implemented with five Xilinx Spartan-6 field-programmable gate arrays and four custom analog neuromorphic integrated circuits serving 262k neurons and 262M synapses. The architecture extends the single-bus address-event representation protocol to a hierarchy of multiple nested buses, routing events across increasing scales of spatial distance. The HiAER protocol provides individually programmable axonal delay in addition to strength for each synapse, lending itself to biologically plausible neural network architectures, and scales across a range of hierarchies suitable for multichip and multiboard systems in reconfigurable large-scale neuromorphic systems. We show approximately linear scaling of net global synaptic event throughput with the number of routing nodes in the network, at 3.6 × 10⁷ synaptic events per second per 16k-neuron node in the hierarchy.
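A toy way to picture the nested-bus routing described here (purely illustrative; the function and address layout below are assumptions, not the HiAER wire format): an event addressed by a (board, chip, neuron) tuple climbs the hierarchy only as far as needed to reach a shared ancestor, then descends to the target.

```python
# Toy sketch of hierarchical routing (illustrative only, not the HiAER
# protocol): addresses are (board, chip, neuron) tuples, and an event climbs
# the hierarchy only until the source and target prefixes agree.

def route(source, target):
    """Return the sequence of routing hops taken from `source` to `target`."""
    hops = []
    level = 0
    # Climb upward while the remaining address prefixes differ.
    while source[:len(source) - level] != target[:len(source) - level]:
        hops.append(("up", source[:len(source) - level - 1]))
        level += 1
    # Descend toward the target leaf.
    for depth in range(len(source) - level, len(target)):
        hops.append(("down", target[:depth + 1]))
    return hops

print(route(source=(0, 1, 42), target=(0, 1, 7)))   # resolved at the chip-level router
print(route(source=(0, 1, 42), target=(1, 0, 7)))   # crosses the top of the hierarchy
```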
Affiliation(s)
- Jongkil Park
- Department of Electrical and Computer Engineering, Jacobs School of Engineering, Institute of Neural Computation, University of California at San Diego, La Jolla, CA, USA
- Siddharth Joshi
- Department of Electrical and Computer Engineering, Jacobs School of Engineering, Institute of Neural Computation, University of California at San Diego, La Jolla, CA, USA
- Christoph Maier
- Institute of Neural Computation, University of California at San Diego, La Jolla, CA, USA
- Gert Cauwenberghs
- Department of Bioengineering, Jacobs School of Engineering, Institute of Neural Computation, University of California at San Diego, La Jolla, CA, USA
4
Serrano-Gotarredona T, Linares-Barranco B. Poker-DVS and MNIST-DVS. Their History, How They Were Made, and Other Details. Front Neurosci 2015; 9:481. [PMID: 26733794 PMCID: PMC4686704 DOI: 10.3389/fnins.2015.00481]
Abstract
This article reports on two databases for event-driven object recognition using a Dynamic Vision Sensor (DVS). The first, which we call Poker-DVS and is being released together with this article, was obtained by browsing specially made poker card decks in front of a DVS camera for 2–4 s. Each card appeared on the screen for about 20–30 ms. The poker pips were tracked and isolated off-line to constitute the 131-recording Poker-DVS database. The second database, which we call MNIST-DVS and which was released in December 2013, consists of a set of 30,000 DVS camera recordings obtained by displaying 10,000 moving symbols from the standard MNIST 70,000-picture database on an LCD monitor for about 2–3 s each. Each of the 10,000 symbols was displayed at three different scales, so that event-driven object recognition algorithms could easily be tested for different object sizes. This article tells the story behind both databases, covering, among other aspects, details of how they work and the reasons for their creation. We provide not only the databases with corresponding scripts, but also the scripts and data used to generate the figures shown in this article (as Supplementary Material).
Affiliation(s)
- Bernabé Linares-Barranco
- Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, Sevilla, Spain
5
Lenero-Bardallo JA, Bryn DH, Hafliger P. Bio-Inspired Asynchronous Pixel Event Tricolor Vision Sensor. IEEE Trans Biomed Circuits Syst 2014; 8:345-357. [PMID: 23934671 DOI: 10.1109/tbcas.2013.2271382]
Abstract
This article investigates the potential of the first-ever prototype of a vision sensor that combines tricolor stacked photodiodes with the bio-inspired asynchronous pixel event communication protocol known as Address Event Representation (AER). The stacked photodiodes are implemented in a 22 × 22 pixel array in a standard STM 90 nm CMOS process. Dynamic range is larger than 60 dB and the pixel fill factor is 28%. The pixels employ either simple pulse frequency modulation (PFM) or a time-to-first-spike (TFS) mode. A heuristic linear combination of the chip's inherent pseudo colors serves to approximate an RGB color representation. Furthermore, the sensor outputs can be processed to represent radiation in the near-infrared (NIR) band without employing external filters, and to color-encode the direction of motion due to an asymmetry in the update rates of the different diode layers.
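The "heuristic linear combination" step mentioned above can be pictured as a small matrix transform from per-layer event rates to RGB. The sketch below is an assumption for illustration only: the mixing coefficients are invented here, and the paper uses its own heuristic coefficients.

```python
import numpy as np

# Invented mixing matrix purely for illustration: rows map the event rates of
# the (shallow, middle, deep) junction layers to approximate R, G, B channels.
MIX = np.array([[ 1.2, -0.4, -0.1],
                [-0.3,  1.1, -0.2],
                [-0.1, -0.5,  1.3]])

def pseudo_rgb(layer_rates):
    """layer_rates: (H, W, 3) per-layer event rates -> (H, W, 3) RGB in [0, 1]."""
    rgb = layer_rates @ MIX.T                 # heuristic linear combination
    rgb = rgb / max(float(rgb.max()), 1e-9)   # normalize to the brightest pixel
    return np.clip(rgb, 0.0, 1.0)
```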
6
Benosman R, Clercq C, Lagorce X, Ieng SH, Bartolozzi C. Event-based visual flow. IEEE Trans Neural Netw Learn Syst 2014; 25:407-417. [PMID: 24807038 DOI: 10.1109/tnnls.2013.2273537]
Abstract
This paper introduces a new methodology to compute dense visual flow using the precise timings of spikes from an asynchronous event-based retina. Biological retinas, and their artificial counterparts, are totally asynchronous and data-driven, and rely on a paradigm of light acquisition radically different from most currently used frame-grabber technologies. This paper introduces a framework to estimate visual flow from the local properties of the events' spatiotemporal space. We show that precise visual flow orientation and amplitude can be estimated using a local differential approach on the surface defined by coactive events. Experimental results are presented; they show that the method copes well with the high data sparseness and temporal resolution of event-based acquisition, which allows the computation of motion flow with microsecond accuracy and at very low computational cost.
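The "local differential approach on the surface defined by coactive events" can be sketched as a least-squares plane fit to event timestamps in a small spatiotemporal neighborhood, with the flow obtained by inverting the time gradient. The snippet below is a simplified illustration under assumed conventions (events as (x, y, t, polarity) tuples, timestamps and the dt window in microseconds); it omits the regularization and validity checks of the published method.

```python
import numpy as np

def local_flow(events, x0, y0, t0, radius=3, dt=50e3):
    """Fit a plane t = a*x + b*y + c to the timestamps of events near
    (x0, y0, t0) and return the flow obtained by inverting the time gradient."""
    pts = np.array([(x, y, t) for (x, y, t, _) in events
                    if abs(x - x0) <= radius and abs(y - y0) <= radius
                    and abs(t - t0) <= dt])
    if len(pts) < 3:
        return None                              # not enough support for a fit
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    (a, b, _), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    g2 = a * a + b * b
    if g2 < 1e-12:
        return None                              # locally flat surface: flow undefined
    return (a / g2, b / g2)                      # velocity in pixels per time unit
```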
7
Mahdiani HR, Fakhraie SM, Lucas C. Relaxed fault-tolerant hardware implementation of neural networks in the presence of multiple transient errors. IEEE Trans Neural Netw Learn Syst 2012; 23:1215-1228. [PMID: 24807519 DOI: 10.1109/tnnls.2012.2199517]
Abstract
Reliability should be identified as the most important challenge in future nano-scale very large scale integration (VLSI) implementation technologies for the development of complex integrated systems. Normally, fault tolerance (FT) in a conventional system is achieved by increasing its redundancy, which implies higher implementation costs and lower performance and sometimes even becomes infeasible. In contrast to such custom approaches, this paper categorizes a class of applications that is inherently capable of absorbing some degree of vulnerability and providing FT through its natural properties. Neural networks are good examples of imprecision-tolerant applications. We also propose a new class of FT techniques, called relaxed fault-tolerant (RFT) techniques, developed for VLSI implementation of imprecision-tolerant applications. The main advantage of RFT techniques with respect to traditional FT solutions is that they exploit the inherent FT of different applications to reduce implementation costs while improving performance. To show the applicability as well as the efficiency of the RFT method, experimental results are presented for the implementation of a computationally intensive face-recognition neural network and its corresponding RFT realization. The results demonstrate the promising performance of artificial neural network VLSI solutions for complex applications in faulty nano-scale implementation environments.
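The notion of imprecision tolerance invoked above can be probed with a toy experiment (not the RFT hardware technique itself; all values and names below are illustrative): flip random bits in the quantized weights of a trivial linear unit and observe how little the output moves for a small number of transient upsets.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.integers(-128, 128, size=256, dtype=np.int8)
inputs = rng.integers(0, 2, size=256).astype(np.int32)

def output(w):
    return int(w.astype(np.int32) @ inputs)         # toy linear unit

def inject_bit_flips(w, n_flips):
    corrupted = w.copy()
    raw = corrupted.view(np.uint8)                   # flip bits on the raw byte view
    for idx in rng.integers(0, w.size, size=n_flips):
        raw[idx] ^= np.uint8(1 << int(rng.integers(0, 8)))   # transient upset
    return corrupted

clean = output(weights)
for n in (1, 4, 16):
    noisy = output(inject_bit_flips(weights, n))
    print(f"{n:2d} bit flips -> relative output change "
          f"{abs(noisy - clean) / max(abs(clean), 1):.3f}")
```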
8
Nere A, Olcese U, Balduzzi D, Tononi G. A neuromorphic architecture for object recognition and motion anticipation using burst-STDP. PLoS One 2012; 7:e36958. [PMID: 22615855 PMCID: PMC3352850 DOI: 10.1371/journal.pone.0036958]
Abstract
In this work we investigate the possibilities offered by a minimal framework of artificial spiking neurons to be deployed in silico. We introduce a hierarchical network architecture of spiking neurons which learns to recognize moving objects in a visual environment and to determine the correct motor output for each object. These tasks are learned through both supervised and unsupervised spike-timing-dependent plasticity (STDP). STDP is responsible for the strengthening (or weakening) of synapses in relation to pre- and post-synaptic spike times and has been described as a Hebbian paradigm taking place both in vitro and in vivo. We utilize a variation of STDP learning, called burst-STDP, which is based on the notion that, since spikes are expensive in terms of energy consumption, strong bursting activity carries more information than single (sparse) spikes. Furthermore, this learning algorithm takes advantage of homeostatic renormalization, which has been hypothesized to promote memory consolidation during NREM sleep. Using this learning rule, we design a spiking neural network architecture capable of object recognition, motion detection, attention towards important objects, and motor control outputs. We demonstrate the abilities of our design in a simple environment with distractor objects, multiple objects moving concurrently, and in the presence of noise. Most importantly, we show how this neural network is capable of performing these tasks using a simple leaky integrate-and-fire (LIF) neuron model with binary synapses, making it fully compatible with state-of-the-art digital neuromorphic hardware designs. As such, the building blocks and learning rules presented in this paper appear promising for scalable, fully neuromorphic systems implemented in hardware chips.
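The building blocks named in this abstract, a leaky integrate-and-fire neuron with binary synapses and a burst-weighted STDP update, can be caricatured as follows. This is a loose sketch under invented parameters and a simplified plasticity rule, not the authors' burst-STDP formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

class LIFNeuron:
    """Leaky integrate-and-fire neuron with binary input synapses."""
    def __init__(self, n_inputs, tau=20.0, threshold=5.0):
        self.w = rng.integers(0, 2, size=n_inputs).astype(float)   # binary weights
        self.tau, self.threshold, self.v = tau, threshold, 0.0

    def step(self, spikes):
        self.v += -self.v / self.tau + float(self.w @ spikes)      # leak + drive
        if self.v >= self.threshold:
            self.v = 0.0
            return True                                            # output spike
        return False

def burst_stdp(neuron, pre_burst_counts, post_fired, p_flip=0.3):
    """Toy burst-weighted plasticity on binary synapses: when the neuron fires,
    inputs that delivered a burst (>= 2 spikes in the window) are probabilistically
    switched on, and quiet inputs are probabilistically switched off."""
    if not post_fired:
        return
    bursting = pre_burst_counts >= 2
    flip = rng.random(neuron.w.size) < p_flip
    neuron.w = np.where(bursting & flip, 1.0, neuron.w)
    neuron.w = np.where(~bursting & flip, 0.0, neuron.w)

# Minimal usage: one time window of pre-synaptic spike counts.
neuron = LIFNeuron(n_inputs=8)
counts = rng.integers(0, 4, size=8)
fired = neuron.step((counts > 0).astype(float))
burst_stdp(neuron, counts, fired)
```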
Affiliation(s)
- Andrew Nere
- Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
- Umberto Olcese
- Department of Psychiatry, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
- Department of Neuroscience and Brain Technologies, Istituto Italiano di Tecnologia, Genova, Italy
- David Balduzzi
- Department of Empirical Inference, Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Giulio Tononi
- Department of Psychiatry, University of Wisconsin-Madison, Madison, Wisconsin, United States of America
9
Selective change driven imaging: a biomimetic visual sensing strategy. Sensors 2011; 11:11000-11020. [PMID: 22346684 PMCID: PMC3274326 DOI: 10.3390/s111111000]
Abstract
Selective Change Driven (SCD) Vision is a biologically inspired strategy for acquiring, transmitting, and processing images that significantly speeds up image sensing. SCD vision is based on a new CMOS image sensor which delivers the pixels that have changed since the last time they were read out, ordered by the absolute magnitude of their change. Moreover, as part of this biomimetic approach, the traditional full-frame processing hardware and programming methodology has to be replaced by a new processing paradigm based on per-pixel processing in a data-flow manner, instead of full-frame image processing.
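The readout policy described above, delivering changed pixels ordered by the absolute magnitude of their change and updating a pixel's reference only when it is actually read out, can be emulated on frame data with a short sketch (the function name and the per-cycle pixel budget are assumptions for illustration):

```python
import numpy as np

def scd_readout(frame, last_read, n_pixels=16):
    """Emit up to `n_pixels` (row, col, value) triples ordered by the absolute
    change since each pixel was last read out; only pixels that are actually
    read out update their stored reference value."""
    delta = np.abs(frame.astype(np.int32) - last_read.astype(np.int32))
    order = np.argsort(delta, axis=None)[::-1][:n_pixels]   # largest change first
    rows, cols = np.unravel_index(order, frame.shape)
    events = []
    for r, c in zip(rows, cols):
        if delta[r, c] == 0:
            break                            # nothing left that has changed
        events.append((int(r), int(c), int(frame[r, c])))
        last_read[r, c] = frame[r, c]        # reference updates only on readout
    return events
```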
10
Zamarreño-Ramos C, Camuñas-Mesa LA, Pérez-Carrasco JA, Masquelier T, Serrano-Gotarredona T, Linares-Barranco B. On spike-timing-dependent-plasticity, memristive devices, and building a self-learning visual cortex. Front Neurosci 2011; 5:26. [PMID: 21442012 PMCID: PMC3062969 DOI: 10.3389/fnins.2011.00026]
Abstract
In this paper we present a very exciting overlap between emergent nanotechnology and neuroscience, which has been discovered by neuromorphic engineers. Specifically, we link one type of memristive nanotechnology device to the biological synaptic update rule known as spike-timing-dependent plasticity (STDP) found in real biological synapses. Understanding this link allows neuromorphic engineers to develop circuit architectures that use this type of memristor to artificially emulate parts of the visual cortex. We concentrate on the type of memristors referred to as voltage- or flux-driven memristors and base our discussion on a behavioral macro-model for such devices. The implementations result in fully asynchronous architectures with neurons sending their action potentials not only forward but also backward. One critical aspect is to use neurons that generate spikes of specific shapes. We show how, by changing the shapes of the neuron action potential spikes, we can tune and manipulate the STDP learning rules for both excitatory and inhibitory synapses. We show how neurons and memristors can be interconnected to achieve large-scale spiking learning systems that follow a type of multiplicative STDP learning rule. We briefly extend the architectures to use three-terminal transistors with similar memristive behavior. We illustrate how a V1 visual cortex layer can be assembled and how it is capable of learning to extract orientations from visual data coming from a real artificial CMOS spiking retina observing real-life scenes. Finally, we discuss limitations of currently available memristors. The results presented are based on behavioral simulations and do not take into account non-idealities of devices and interconnects. The aim of this paper is to present, in a tutorial manner, an initial framework for the possible development of fully asynchronous STDP learning neuromorphic architectures exploiting two- or three-terminal memristive devices. All files used for the simulations are made available through the journal web site.
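The central idea that the shape of the action potential sets the STDP rule seen by a threshold-type memristor can be illustrated with a toy waveform model. Everything below (waveform shape, device threshold, learning rate) is an invented behavioral caricature, not the paper's macro-model: the weight change is taken to be driven only by the part of the pre/post voltage difference that exceeds the device threshold.

```python
import numpy as np

def spike_waveform(t, t_spike, amp_plus=1.0, amp_minus=0.6, tail=20.0):
    """Caricature of an action potential: a brief positive peak followed by a
    slowly decaying negative tail (this shape is what sculpts the STDP curve)."""
    dt = t - t_spike
    wave = np.zeros_like(t)
    wave[(dt >= 0.0) & (dt < 1.0)] = amp_plus
    tail_mask = (dt >= 1.0) & (dt < tail)
    wave[tail_mask] = -amp_minus * (1.0 - (dt[tail_mask] - 1.0) / (tail - 1.0))
    return wave

def memristive_dw(delta_t, v_threshold=1.2, eta=0.05, step=0.1):
    """Weight change induced in an idealized threshold memristor by the voltage
    difference of pre/post spike waveforms offset by delta_t (post minus pre)."""
    t = np.arange(0.0, 60.0, step)
    v = spike_waveform(t, 20.0 + delta_t) - spike_waveform(t, 20.0)
    drive = np.clip(np.abs(v) - v_threshold, 0.0, None) * np.sign(v)
    return eta * drive.sum() * step          # integrate the supra-threshold drive

# Pre-before-post gives potentiation, post-before-pre gives depression, and the
# magnitude shrinks as |delta_t| grows, qualitatively tracing an STDP window.
for d in (-10, -2, 2, 10):
    print(f"delta_t = {d:+3d}: dw = {memristive_dw(d):+.4f}")
```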
Affiliation(s)
- Carlos Zamarreño-Ramos
- Mixed Signal Design, Instituto de Microelectrónica de Sevilla (IMSE–CNM–CSIC), Sevilla, Spain
- Luis A. Camuñas-Mesa
- Mixed Signal Design, Instituto de Microelectrónica de Sevilla (IMSE–CNM–CSIC), Sevilla, Spain
- Jose A. Pérez-Carrasco
- Mixed Signal Design, Instituto de Microelectrónica de Sevilla (IMSE–CNM–CSIC), Sevilla, Spain