1
Hou X, Zhang F, Gulati D, Tan T, Zhang W. E2VIDX: improved bridge between conventional vision and bionic vision. Front Neurorobot 2023; 17:1277160. PMID: 37954492; PMCID: PMC10639115; DOI: 10.3389/fnbot.2023.1277160.
Abstract
Common RGBD, CMOS, and CCD-based cameras produce motion blur and incorrect exposure under high-speed and improper lighting conditions. The event camera, developed according to bionic principles, has the advantages of low latency, high dynamic range, and no motion blur. However, due to its unique data representation, it encounters significant obstacles in practical applications. Event-based image reconstruction algorithms solve this problem by converting a series of "events" into conventional frames so that existing vision algorithms can be applied. Driven by the rapid development of neural networks, this field has made significant breakthroughs in the past few years. Building on the most popular Events-to-Video (E2VID) method, this study designs a new network called E2VIDX. The proposed network includes group convolution and sub-pixel convolution, which not only achieve better feature fusion but also reduce the network model size by 25%. Furthermore, we propose a new loss function with two parts: the first computes high-level features of the reconstructed image and the second computes low-level features. The experimental results clearly outperform the state-of-the-art method: compared with the original method, Structural Similarity (SSIM) increases by 1.3%, Learned Perceptual Image Patch Similarity (LPIPS) decreases by 1.7%, Mean Squared Error (MSE) decreases by 2.5%, and the network runs faster on both GPU and CPU. Additionally, we evaluate E2VIDX applied to image classification, object detection, and instance segmentation. The experiments show that conversions using our method allow event cameras to directly apply existing vision algorithms in most scenarios.
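The events-to-frames idea at the core of E2VID-style reconstruction can be illustrated with a minimal sketch (not the E2VIDX network itself): DVS events, assumed here to be (x, y, t, polarity) tuples, are accumulated into a polarity-weighted count image of the kind such networks take as input. The field layout and the 4x4 resolution are illustrative assumptions.

```python
def events_to_count_image(events, width, height):
    """Sum +1/-1 polarities per pixel to form a simple 2D event frame."""
    frame = [[0] * width for _ in range(height)]
    for x, y, t, p in events:
        frame[y][x] += 1 if p > 0 else -1
    return frame

# Three toy events: two ON events at (0, 0), one OFF event at (1, 2).
events = [(0, 0, 0.001, 1), (0, 0, 0.002, 1), (1, 2, 0.003, -1)]
img = events_to_count_image(events, width=4, height=4)
```

A learned reconstruction network replaces this naive summation with a recurrent encoder-decoder, but the input tensorization step follows the same accumulation principle.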
Affiliation(s)
- Xujia Hou
- School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an, China
- Feihu Zhang
- School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an, China
- Tingfeng Tan
- School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an, China
- Wei Zhang
- School of Marine Science and Technology, Northwestern Polytechnical University, Xi'an, China
2
Cohen K, Hershko O, Levy H, Mendlovic D, Raviv D. Illumination-Based Color Reconstruction for the Dynamic Vision Sensor. Sensors (Basel) 2023; 23:8327. PMID: 37837157; PMCID: PMC10575428; DOI: 10.3390/s23198327.
Abstract
This work demonstrates a novel, state-of-the-art method to reconstruct colored images via the dynamic vision sensor (DVS). The DVS is an image sensor that indicates only binary changes in brightness, with no information about the captured wavelength (color) or intensity level. However, reconstructing the scene's color can be essential for many computer vision tasks performed with the DVS. We present a novel method for reconstructing a full-spatial-resolution colored image using the DVS and an active colored light source. We analyze the DVS response and present two reconstruction algorithms: linear-based and convolutional-neural-network-based. Both methods reconstruct the colored image with high quality and, unlike other methods, do not suffer from any spatial-resolution degradation. In addition, we demonstrate the robustness of our algorithm to changes in environmental conditions such as illumination and distance. Finally, we show how our results reach the state of the art compared with previous works. We share our code on GitHub.
Affiliation(s)
- Dan Raviv
- The Faculty of Engineering, Department of Physical Electronics, Tel Aviv University, Tel Aviv 69978, Israel; (K.C.)
3
Valerdi JL, Bartolozzi C, Glover A. Insights into Batch Selection for Event-Camera Motion Estimation. Sensors (Basel) 2023; 23:3699. PMID: 37050759; PMCID: PMC10099241; DOI: 10.3390/s23073699.
Abstract
Event cameras measure scene changes with high temporal resolution, making them well-suited for visual motion estimation. The activation of pixels results in an asynchronous stream of digital data (events), which rolls continuously over time without the discrete temporal boundaries typical of frame-based cameras (where a data packet or frame is emitted at a fixed temporal rate). As such, it is not trivial to define a priori how to group/accumulate events in a way that is sufficient for computation, and the suitable number of events can vary greatly across environments, motion patterns, and tasks. In this paper, we use neural networks for rotational motion estimation as a scenario to investigate the appropriate selection of event batches to populate input tensors. Our results show that batch selection has a large impact on the results: training should be performed on a wide variety of different batches, regardless of the batch selection method; a simple fixed-time window is a good choice for inference compared with fixed-count batches, and it performs comparably to more complex methods. Our initial hypothesis that a minimal number of events is required to estimate motion (as in contrast maximization) does not hold when estimating motion with a neural network.
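The two batch-selection strategies the abstract compares, fixed-count batches versus a fixed-time window, can be sketched as follows. The (x, y, t, p) event layout and the window parameters are illustrative assumptions, not the authors' implementation.

```python
def fixed_count_batches(events, n):
    """Split an event list into consecutive batches of n events each."""
    return [events[i:i + n] for i in range(0, len(events), n)]

def fixed_time_batches(events, dt):
    """Split events (sorted by timestamp e[2]) into windows of dt seconds."""
    batches, current, window_end = [], [], None
    for e in events:
        if window_end is None:
            window_end = e[2] + dt
        if e[2] >= window_end:          # current window is full: start a new one
            batches.append(current)
            current, window_end = [], e[2] + dt
        current.append(e)
    if current:
        batches.append(current)
    return batches

evts = [(0, 0, t / 1000.0, 1) for t in range(10)]   # 10 events, 1 ms apart
by_count = fixed_count_batches(evts, 4)             # 4, 4, 2 events per batch
by_time = fixed_time_batches(evts, 0.005)           # 5 ms windows: 5, 5 events
```

With a uniform event rate the two strategies coincide; they diverge exactly when scene activity varies, which is why the choice matters for inference.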
4
Fang Y, Piao Y, Xie X, Li M, Li X, Ji H, Xu W, Gao T. Visualization and Object Detection Based on Event Information. Sensors (Basel) 2023; 23:1839. PMID: 36850443; PMCID: PMC9962390; DOI: 10.3390/s23041839.
Abstract
A dynamic vision sensor is an optical sensor that focuses on dynamic changes and outputs event information containing only position, time, and polarity. It has the advantages of high temporal resolution, high dynamic range, low data volume, and low power consumption. However, a single event only indicates that the increase or decrease in light exceeded a threshold at a certain pixel position and moment. To further study the ability of event information to represent targets, this paper proposes an event information visualization method with adaptive temporal resolution. Compared with methods using a constant time interval or a constant number of events, it converts event information into pseudo-frame images more effectively. Additionally, to explore whether pseudo-frame images can support efficient target detection, this paper designs a target detection network named YOLOE, which achieves a more balanced detection performance than other algorithms. Through dataset construction and experimental verification, the detection accuracy on images obtained with the adaptive-temporal-resolution visualization method was 5.11% and 4.74% higher than that of methods using a constant time interval and a constant number of events, respectively. The average detection accuracy of pseudo-frame images in the proposed YOLOE network is 85.11%, at 109 detection frames per second. These results verify the effectiveness of the proposed visualization method and the good performance of the designed detection network.
Affiliation(s)
- Yinghong Fang
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Key Laboratory of Space-Based Dynamic & Rapid Optical Imaging Technology, Chinese Academy of Sciences, Changchun 130033, China
- Yongjie Piao
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Key Laboratory of Space-Based Dynamic & Rapid Optical Imaging Technology, Chinese Academy of Sciences, Changchun 130033, China
- Xiaoguang Xie
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Key Laboratory of Space-Based Dynamic & Rapid Optical Imaging Technology, Chinese Academy of Sciences, Changchun 130033, China
- Miao Li
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Key Laboratory of Space-Based Dynamic & Rapid Optical Imaging Technology, Chinese Academy of Sciences, Changchun 130033, China
- Xiaodong Li
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Key Laboratory of Space-Based Dynamic & Rapid Optical Imaging Technology, Chinese Academy of Sciences, Changchun 130033, China
- Haolin Ji
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Key Laboratory of Space-Based Dynamic & Rapid Optical Imaging Technology, Chinese Academy of Sciences, Changchun 130033, China
- Wei Xu
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Key Laboratory of Space-Based Dynamic & Rapid Optical Imaging Technology, Chinese Academy of Sciences, Changchun 130033, China
- Tan Gao
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Key Laboratory of Space-Based Dynamic & Rapid Optical Imaging Technology, Chinese Academy of Sciences, Changchun 130033, China
5
Cekus D, Depta F, Kubanek M, Kuczyński Ł, Kwiatoń P. Event Visualization and Trajectory Tracking of the Load Carried by Rotary Crane. Sensors (Basel) 2022; 22:480. PMID: 35062441; PMCID: PMC8781732; DOI: 10.3390/s22020480.
Abstract
Tracking the trajectory of the load carried by a rotary crane is an important problem, as it reduces the possibility of damaging the load by hitting an obstacle in the crane's working area. On the basis of the trajectory, it is also possible to design a control system that allows for the safe transport of the load. This work concerns research on the motion of a load carried by a rotary crane. For this purpose, a laboratory crane model was designed in SolidWorks, and numerical simulations were performed using the Motion module. The developed laboratory model is a scaled equivalent of the real Liebherr LTM 1020 crane. The crane control included two movements: changing the inclination angle of the crane's boom and rotating the jib with the platform. On the basis of the developed model, a test stand was built, which allowed for the verification of numerical results. Event visualization and trajectory tracking were performed using a dynamic vision sensor (DVS) and the Tracker program. The experimental results verified the developed numerical model. The proposed trajectory tracking method can be used to develop a control system that prevents collisions during the crane's duty cycle.
Affiliation(s)
- Dawid Cekus
- Department of Mechanics and Machine Design Fundamentals, Faculty of Mechanical Engineering and Computer Science, Czestochowa University of Technology, Dąbrowskiego 73, 42-201 Częstochowa, Poland;
- Filip Depta
- Department of Computer Science, Faculty of Mechanical Engineering and Computer Science, Czestochowa University of Technology, Dąbrowskiego 73, 42-201 Częstochowa, Poland; (F.D.); (M.K.); (Ł.K.)
- Mariusz Kubanek
- Department of Computer Science, Faculty of Mechanical Engineering and Computer Science, Czestochowa University of Technology, Dąbrowskiego 73, 42-201 Częstochowa, Poland; (F.D.); (M.K.); (Ł.K.)
- Łukasz Kuczyński
- Department of Computer Science, Faculty of Mechanical Engineering and Computer Science, Czestochowa University of Technology, Dąbrowskiego 73, 42-201 Częstochowa, Poland; (F.D.); (M.K.); (Ł.K.)
- Paweł Kwiatoń
- Department of Mechanics and Machine Design Fundamentals, Faculty of Mechanical Engineering and Computer Science, Czestochowa University of Technology, Dąbrowskiego 73, 42-201 Częstochowa, Poland;
6
Lin Y, Ding W, Qiang S, Deng L, Li G. ES-ImageNet: A Million Event-Stream Classification Dataset for Spiking Neural Networks. Front Neurosci 2021; 15:726582. PMID: 34899154; PMCID: PMC8655353; DOI: 10.3389/fnins.2021.726582.
Abstract
With event-driven algorithms, especially spiking neural networks (SNNs), achieving continuous improvement in neuromorphic vision processing, a more challenging event-stream (ES) dataset is urgently needed. However, creating an ES dataset with neuromorphic cameras such as dynamic vision sensors (DVS) is a time-consuming and costly task. In this work, we propose a fast and effective algorithm termed Omnidirectional Discrete Gradient (ODG) to convert the popular computer vision dataset ILSVRC2012 into its event-stream version, turning about 1,300,000 frame-based images into ES samples in 1,000 categories. The resulting dataset, ES-ImageNet, is dozens of times larger than other current neuromorphic classification datasets and is generated entirely in software. The ODG algorithm applies image motion to generate local value changes with discrete gradient information in different directions, providing a low-cost and high-speed method for converting frame-based images into event streams, along with an Edge-Integral method to reconstruct high-quality images from the event streams. Furthermore, we analyze the statistics of ES-ImageNet in multiple ways and provide a performance benchmark using both well-known deep neural network and spiking neural network algorithms. We believe this work provides a new large-scale benchmark dataset for SNNs and neuromorphic vision.
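The frame-to-events direction of conversion can be sketched in a simplified, single-direction form: shift the image by one pixel (simulating motion) and emit an event wherever the brightness difference crosses a threshold. The real ODG algorithm uses discrete gradients in multiple directions, so the helper below, its threshold, and the one-pixel shift are illustrative assumptions only.

```python
def frame_to_events(img, threshold=10, dx=1):
    """Emit (x, y, polarity) events where a dx-pixel horizontal shift
    changes brightness by at least `threshold`."""
    h, w = len(img), len(img[0])
    events = []
    for y in range(h):
        for x in range(w - dx):
            diff = img[y][x + dx] - img[y][x]
            if abs(diff) >= threshold:
                events.append((x, y, 1 if diff > 0 else -1))
    return events

img = [[0, 0, 50, 50],
       [0, 0, 0, 0]]
evts = frame_to_events(img)   # one ON event at the vertical edge in row 0
```

Only edges produce events, which is why a static, textureless region of a converted image contributes nothing to the event stream.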
Affiliation(s)
- Yihan Lin
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, China
- Wei Ding
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, China
- Shaohua Qiang
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, China
- Lei Deng
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, China
- Guoqi Li
- Department of Precision Instrument, Center for Brain Inspired Computing Research, Tsinghua University, Beijing, China
7
Abstract
The event camera (EC) is an emerging bio-inspired sensor that can serve as an alternative or complementary vision modality, with the benefits of energy efficiency, high dynamic range, and high temporal resolution coupled with activity-dependent sparse sensing. In this study, we investigate with ECs the problem of face pose alignment, an essential pre-processing stage for facial processing pipelines. EC-based alignment can unlock all these benefits in facial applications, especially where motion and dynamics carry the most relevant information, owing to the sensing of temporal change. We specifically aim at efficient processing by developing a coarse alignment method to handle large pose variations in facial applications. For this purpose, we have prepared, with multiple human annotations, a dataset of extreme head rotations with varying motion intensity. We propose a motion-detection-based alignment approach that generates activity-dependent pose-events, preventing unnecessary computation in the absence of pose change. The alignment is realized by cascaded regression of extremely randomized trees. Since EC sensors perform temporal differentiation, we characterize the alignment performance across different levels of head movement speed, face localization uncertainty, face resolution, and predictor complexity. Our method obtained 2.7% alignment failure on average, whereas annotator disagreement was 1%. The promising coarse alignment performance on EC sensor data, together with a comprehensive analysis, demonstrates the potential of ECs in facial applications.
Affiliation(s)
- Arman Savran
- Department of Computer Engineering, Yasar University, 35100 Izmir, Turkey
- Chiara Bartolozzi
- Event-Driven Perception for Robotics, Istituto Italiano di Tecnologia, 16163 Genova, Italy;
8
Huang X, Muthusamy R, Hassan E, Niu Z, Seneviratne L, Gan D, Zweiri Y. Neuromorphic Vision Based Contact-Level Classification in Robotic Grasping Applications. Sensors (Basel) 2020; 20:4724. PMID: 32825656; PMCID: PMC7506874; DOI: 10.3390/s20174724.
Abstract
In recent years, robotic sorting has become widely used in industry, driven by necessity and opportunity. In this paper, a novel neuromorphic vision-based tactile sensing approach for robotic sorting applications is proposed. This approach has lower latency and lower power consumption than conventional vision-based tactile sensing techniques. Two Machine Learning (ML) methods, Support Vector Machine (SVM) and Dynamic Time Warping-K Nearest Neighbor (DTW-KNN), are developed to classify material hardness, object size, and grasping force. An Event-Based Object Grasping (EBOG) experimental setup is developed to acquire datasets, with 243 experiments produced to train the proposed classifiers. Based on the classifiers' predictions, objects can be sorted automatically. If the prediction accuracy is below a certain threshold, the gripper re-adjusts and re-grasps until reaching a proper grasp. The proposed ML methods achieve good prediction accuracy, which shows the effectiveness and applicability of the proposed approach. The experimental results show that the developed SVM model outperforms the DTW-KNN model in terms of accuracy and efficiency for real-time contact-level classification.
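The distance underlying the DTW-KNN classifier mentioned above is the classic Dynamic Time Warping dynamic program, sketched here for 1-D series (e.g., per-grasp event rates over time). The toy series below are illustrative assumptions, not data from the paper.

```python
def dtw_distance(a, b):
    """O(n*m) Dynamic Time Warping distance between two 1-D series,
    using absolute difference as the local cost."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the best of: insertion, deletion, or match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# A time-warped copy of a series has zero DTW distance to the original.
same_shape = dtw_distance([1, 2, 3], [1, 1, 2, 2, 3, 3])
```

A DTW-KNN classifier then labels a query series by the majority class among its k nearest training series under this distance; this warping invariance is what makes it attractive for grasps of varying duration.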
Affiliation(s)
- Xiaoqian Huang
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University of Science and Technology, Abu Dhabi 127788, UAE; (R.M.); (E.H.); (Z.N.); (L.S.); (Y.Z.)
- Rajkumar Muthusamy
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University of Science and Technology, Abu Dhabi 127788, UAE; (R.M.); (E.H.); (Z.N.); (L.S.); (Y.Z.)
- Eman Hassan
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University of Science and Technology, Abu Dhabi 127788, UAE; (R.M.); (E.H.); (Z.N.); (L.S.); (Y.Z.)
- Zhenwei Niu
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University of Science and Technology, Abu Dhabi 127788, UAE; (R.M.); (E.H.); (Z.N.); (L.S.); (Y.Z.)
- Lakmal Seneviratne
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University of Science and Technology, Abu Dhabi 127788, UAE; (R.M.); (E.H.); (Z.N.); (L.S.); (Y.Z.)
- Dongming Gan
- School of Engineering Technology, Purdue University, West Lafayette, IN 47907, USA;
- Yahya Zweiri
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University of Science and Technology, Abu Dhabi 127788, UAE; (R.M.); (E.H.); (Z.N.); (L.S.); (Y.Z.)
- Faculty of Science, Engineering and Computing, Kingston University, London SW15 3DW, UK
9
Baghaei Naeini F, Makris D, Gan D, Zweiri Y. Dynamic-Vision-Based Force Measurements Using Convolutional Recurrent Neural Networks. Sensors (Basel) 2020; 20:4469. PMID: 32785095; PMCID: PMC7472272; DOI: 10.3390/s20164469.
Abstract
In this paper, a novel dynamic Vision-Based Measurement method is proposed to measure contact force independent of object size. A neuromorphic camera (Dynamic Vision Sensor) is utilized to observe intensity changes within the silicone membrane where the object is in contact. Three deep Long Short-Term Memory neural networks combined with convolutional layers are developed and implemented to estimate the contact force from intensity changes over time. Thirty-five experiments are conducted using three objects of different sizes to validate the proposed approach. We demonstrate that the networks with memory gates are robust against variable contact sizes, as the networks learn object sizes in the early stage of a grasp. Moreover, spatial and temporal features enable the sensor to estimate the contact force accurately every 10 ms. The results are promising, with a Mean Squared Error of less than 0.1 N for grasping and holding contact force using the leave-one-out cross-validation method.
Affiliation(s)
- Dimitrios Makris
- Faculty of Science, Engineering and Computing, Kingston University, London SW15 3DW, UK; (D.M.); (Y.Z.)
- Dongming Gan
- School of Engineering Technology, Purdue University, West Lafayette, IN 47907, USA;
- Yahya Zweiri
- Faculty of Science, Engineering and Computing, Kingston University, London SW15 3DW, UK; (D.M.); (Y.Z.)
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University, Abu Dhabi P.O. Box 127788, UAE
10
Abstract
Object tracking based on the event-based camera or dynamic vision sensor (DVS) remains a challenging task due to noise events, rapid changes in event-stream shape, complex background textures, and occlusion. To address these challenges, this paper presents a robust event-stream object tracking method based on a correlation filter mechanism and convolutional neural network (CNN) representation. In the proposed method, rate coding is used to encode the event-stream object, and feature representations from hierarchical convolutional layers of a pre-trained CNN represent the appearance of the rate-encoded object. The results show that the proposed method not only achieves good tracking performance in many complicated scenes with noise events, complex background textures, occlusion, and intersecting trajectories, but is also robust to variable scale, variable pose, and non-rigid deformations. In addition, the correlation-filter-based method has the advantage of high speed. The proposed approach will promote the potential applications of event-based vision sensors in autonomous driving, robotics, and many other high-speed scenarios.
Affiliation(s)
- Hongmin Li
- Department of Precision Instrument, Center for Brain-Inspired Computing Research, Tsinghua University, Beijing, China
- Luping Shi
- Department of Precision Instrument, Center for Brain-Inspired Computing Research, Tsinghua University, Beijing, China
11
Miao S, Chen G, Ning X, Zi Y, Ren K, Bing Z, Knoll A. Neuromorphic Vision Datasets for Pedestrian Detection, Action Recognition, and Fall Detection. Front Neurorobot 2019; 13:38. PMID: 31275128; PMCID: PMC6591450; DOI: 10.3389/fnbot.2019.00038.
Affiliation(s)
- Shu Miao
- College of Automotive Engineering, Tongji University, Shanghai, China
- Guang Chen
- College of Automotive Engineering, Tongji University, Shanghai, China
- Robotics, Artificial Intelligence and Real-Time Systems, Technische Universität München, München, Germany
- Xiangyu Ning
- College of Automotive Engineering, Tongji University, Shanghai, China
- Yang Zi
- College of Automotive Engineering, Tongji University, Shanghai, China
- Kejia Ren
- College of Automotive Engineering, Tongji University, Shanghai, China
- Zhenshan Bing
- Robotics, Artificial Intelligence and Real-Time Systems, Technische Universität München, München, Germany
- Alois Knoll
- Robotics, Artificial Intelligence and Real-Time Systems, Technische Universität München, München, Germany
12
Barrios-Avilés J, Rosado-Muñoz A, Medus LD, Bataller-Mompeán M, Guerrero-Martínez JF. Less Data Same Information for Event-Based Sensors: A Bioinspired Filtering and Data Reduction Algorithm. Sensors (Basel) 2018; 18:4122. PMID: 30477237; DOI: 10.3390/s18124122.
Abstract
Sensors provide data which need to be processed after acquisition to remove noise and extract relevant information. When the sensor is a network node and acquired data are to be transmitted to other nodes (e.g., through Ethernet), the amount of data generated by multiple nodes can overload the communication channel. Reducing the generated data allows lower hardware requirements and less power consumption for the hardware devices. This work proposes a filtering algorithm (LDSI, Less Data Same Information) which reduces the data generated by event-based sensors without loss of relevant information. It is a bioinspired filter: event data are processed using a structure resembling biological neuronal information processing. The filter is fully configurable, from a "transparent mode" to a very restrictive mode. Based on an analysis of configuration parameters, three main configurations are given: weak, medium, and restrictive. Using data from a DVS event camera, results for a similarity detection algorithm show that event data can be reduced by up to 30% while maintaining the same similarity index as unfiltered data. Data reduction can reach 85% with a penalty of 15% in similarity index compared to the original data. An object tracking algorithm was also used to compare the proposed filter with an existing filter: the LDSI filter yields less error (4.86 ± 1.87) than the background activity filter (5.01 ± 1.93). The algorithm was tested on a PC using pre-recorded datasets, and an FPGA implementation was also carried out. A Xilinx Virtex-6 FPGA received data from a 128 × 128 DVS camera, applied the LDSI algorithm, created an AER dataflow, and sent the data to the PC for analysis and visualization. The FPGA ran at a 177 MHz clock speed with low resource usage (671 LUTs and 40 Block RAMs for the whole system), showing real-time operation capability. The results show that, with adequate filter parameter tuning, the relevant information from the scene is kept while fewer events (i.e., fewer data) are generated.
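The background activity filter that LDSI is compared against can be sketched as a causal spatio-temporal filter: an event survives only if some 8-neighbour pixel fired within the last dt seconds. The (x, y, t) event layout and the dt value are illustrative assumptions, not the paper's implementation.

```python
def background_activity_filter(events, dt=0.01):
    """Keep an event only if an 8-neighbour pixel produced an event
    within the preceding dt seconds; isolated noise events are dropped."""
    last_seen = {}   # (x, y) -> timestamp of most recent event at that pixel
    kept = []
    for x, y, t in events:          # events assumed sorted by timestamp
        supported = any(
            (x + ax, y + ay) in last_seen
            and t - last_seen[(x + ax, y + ay)] <= dt
            for ax in (-1, 0, 1) for ay in (-1, 0, 1)
            if (ax, ay) != (0, 0)
        )
        if supported:
            kept.append((x, y, t))
        last_seen[(x, y)] = t
    return kept

stream = [(5, 5, 0.000), (6, 5, 0.001), (40, 7, 0.002)]
out = background_activity_filter(stream)   # isolated event at (40, 7) dropped
```

Note that this causal variant also drops the first event of a correlated pair, since no neighbour has fired yet when it arrives; hardware implementations tolerate this because real edges generate dense event bursts.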
13
Rigi A, Baghaei Naeini F, Makris D, Zweiri Y. A Novel Event-Based Incipient Slip Detection Using Dynamic Active-Pixel Vision Sensor (DAVIS). Sensors (Basel) 2018; 18:333. PMID: 29364190; PMCID: PMC5856167; DOI: 10.3390/s18020333.
Abstract
In this paper, a novel approach is proposed to detect incipient slip based on the contact area between a transparent silicone medium and different objects, using a neuromorphic event-based vision sensor (DAVIS). Event-based algorithms are developed to detect incipient slip, slip, stress distribution, and object vibration. Thirty-seven experiments were performed on five objects with different sizes, shapes, materials, and weights to compare the precision and response time of the proposed approach, which was validated using a high-speed conventional camera (1000 FPS). The results indicate that the sensor can detect incipient slippage with an average latency of 44.1 ms in an unstructured environment for various objects. It is worth mentioning that the experiments were conducted in an uncontrolled experimental environment, introducing high noise levels that significantly affected the results. Nevertheless, eleven of the experiments had a detection latency below 10 ms, which shows the capability of this method. The results are very promising and show the sensor's high potential for manipulation applications, especially in dynamic environments.
Affiliation(s)
- Amin Rigi
- Faculty of Science, Engineering and Computing, Kingston University London, London SW15 3DW, UK.
- Fariborz Baghaei Naeini
- Faculty of Science, Engineering and Computing, Kingston University London, London SW15 3DW, UK.
- Dimitrios Makris
- Faculty of Science, Engineering and Computing, Kingston University London, London SW15 3DW, UK.
- Yahya Zweiri
- Faculty of Science, Engineering and Computing, Kingston University London, London SW15 3DW, UK.
- Visiting Associate Professor, Robotics Institute, Khalifa University of Science and Technology, P.O. Box 127788, Abu Dhabi 999041, United Arab Emirates.
14
Milde MB, Blum H, Dietmüller A, Sumislawska D, Conradt J, Indiveri G, Sandamirskaya Y. Obstacle Avoidance and Target Acquisition for Robot Navigation Using a Mixed Signal Analog/Digital Neuromorphic Processing System. Front Neurorobot 2017; 11:28. PMID: 28747883; PMCID: PMC5507184; DOI: 10.3389/fnbot.2017.00028.
Abstract
Neuromorphic hardware emulates the dynamics of biological neural networks in electronic circuits, offering a low-power, inherently parallel, and event-driven alternative to the von Neumann computing architecture. This hardware allows neural-network-based robotic controllers to be implemented in an energy-efficient way with low latency, but requires solving the problem of device variability characteristic of analog electronic circuits. In this work, we interfaced a mixed-signal analog-digital neuromorphic processor (ROLLS) to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent able to perform neurally inspired obstacle avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, a moving target, clutter, and poor lighting conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates a working implementation of obstacle avoidance and target acquisition using mixed-signal analog/digital neuromorphic hardware.
Affiliation(s)
- Moritz B Milde
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Hermann Blum
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Alexander Dietmüller
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Dora Sumislawska
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Jörg Conradt
- Neuroscientific System Theory, Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Yulia Sandamirskaya
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
15
Delbruck T, van Schaik A, Hasler J. Research topic: neuromorphic engineering systems and applications. A snapshot of neuromorphic systems engineering. Front Neurosci 2014; 8:424. PMID: 25565952; PMCID: PMC4271593; DOI: 10.3389/fnins.2014.00424.
Affiliation(s)
- Tobi Delbruck
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- André van Schaik
- Bioelectronics and Neuroscience, The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Jennifer Hasler
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA