1
Zhang Y, Zhao H, Chen Z, Liu Z, Huang H, Qu Y, Liu Y, Sun M, Sun D, Zhao X. Optical tweezer-assisted cell pairing and fusion for somatic cell nuclear transfer within an open microchannel. Lab Chip 2024; 24:5215-5224. [PMID: 39503358] [DOI: 10.1039/d4lc00561a]
Abstract
Somatic cell nuclear transfer (SCNT), also referred to as somatic cell cloning, is a pivotal biotechnological technique used across many applications. Although robotic SCNT is currently available, the subsequent oocyte electrical activation/reconstructed-embryo electrofusion is still performed manually by skilled operators, and the uncontrollable positioning of the reconstructed embryo makes efficient manipulation challenging. This study introduces a robotic SCNT-electrofusion system to enable high-precision batch SCNT cloning. The proposed system integrates optical tweezers and microfluidic technologies. An optical tweezer guides somatic cells precisely to the fusion site, and a dedicated polydimethylsiloxane (PDMS) chip assists in positioning and pairing oocytes and somatic cells. PDMS pillars enhance the electric field distribution between two parallel electrodes, significantly reducing the external voltage required for electrofusion/electrical activation. We employed porcine oocytes and porcine fetal fibroblasts for the SCNT experiments. The results show that 90.56% of oocytes successfully paired with somatic cells to form reconstructed embryos, 76.43% of the reconstructed embryos fused successfully, and 70.55% of these embryos underwent cleavage. This demonstrates that the present system achieves a robotic implementation of oocyte electrical activation/reconstructed-embryo electrofusion. By leveraging the batch-operation advantages of microfluidics, it establishes an innovative robotic cloning procedure that scales up embryo cloning.
Affiliation(s)
- Yidi Zhang
- National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases, Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Institute of Robotics and Automatic Information System (IRAIS), Nankai University, Tianjin 300350, China.
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, SAR, China.
- Han Zhao
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, SAR, China.
- Zhenlin Chen
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, SAR, China.
- Zhen Liu
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, SAR, China.
- Hanjin Huang
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, SAR, China.
- Yun Qu
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, SAR, China.
- Yaowei Liu
- National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases, Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Institute of Robotics and Automatic Information System (IRAIS), Nankai University, Tianjin 300350, China.
- Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Shenzhen 518083, China
- Mingzhu Sun
- National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases, Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Institute of Robotics and Automatic Information System (IRAIS), Nankai University, Tianjin 300350, China.
- Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Shenzhen 518083, China
- Dong Sun
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, SAR, China.
- Xin Zhao
- National Key Laboratory of Intelligent Tracking and Forecasting for Infectious Diseases, Engineering Research Center of Trusted Behavior Intelligence, Ministry of Education, Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Institute of Robotics and Automatic Information System (IRAIS), Nankai University, Tianjin 300350, China.
- Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Shenzhen 518083, China
2
Lele AS, Fang Y, Anwar A, Raychowdhury A. Bio-mimetic high-speed target localization with fused frame and event vision for edge application. Front Neurosci 2022; 16:1010302. [PMID: 36507348] [PMCID: PMC9732385] [DOI: 10.3389/fnins.2022.1010302]
Abstract
Evolution has honed predatory skills in the natural world, where localizing and intercepting fast-moving prey is required. The current generation of robotic systems mimics these biological systems using deep learning. High-speed processing of camera frames with convolutional neural networks (CNNs) (the frame pipeline) becomes resource-limited on constrained aerial edge robots. Even with additional compute resources, throughput is ultimately capped at the camera frame rate, and frame-only systems fail to capture the detailed temporal dynamics of the environment. Bio-inspired event cameras and spiking neural networks (SNNs) provide an asynchronous sensor-processor pair (the event pipeline) that captures the continuous temporal details of the scene at high speed but lags in accuracy. In this work, we propose a target localization system that combines event-camera/SNN-based high-speed target estimation with frame-camera/CNN-driven reliable object detection by fusing the complementary spatio-temporal strengths of the event and frame pipelines. One of our main contributions is the design of an SNN filter that borrows from the neural mechanism for ego-motion cancelation in houseflies: it fuses vestibular sensing with vision to cancel the activity corresponding to the predator's self-motion. We also integrate the neuro-inspired multi-pipeline processing with the task-optimized multi-neuronal pathway structure found in primates and insects. The system is validated to outperform CNN-only processing using prey-predator drone simulations in realistic 3D virtual environments. The system is then demonstrated in a real-world multi-drone set-up with emulated event data. Subsequently, we use recorded sensory data from a multi-camera and inertial measurement unit (IMU) assembly to show the desired behavior while tolerating realistic noise in the vision and IMU sensors. We analyze the design space to identify optimal parameters for the spiking neurons and CNN models and to check their effect on the performance metrics of the fused system. Finally, we map the throughput-controlling SNN and fusion network onto an edge-compatible Zynq-7000 FPGA to show a potential 264 outputs per second even under constrained resource availability. This work may open new research directions by coupling multiple sensing and processing modalities inspired by discoveries in neuroscience to break fundamental trade-offs in frame-based computer vision.
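The ego-motion cancelation idea can be illustrated with a minimal sketch (not the authors' SNN implementation; a simplified, rate-coded and depth-independent rotational model in which all names and parameter values are assumptions): IMU-measured rotation predicts the self-motion-induced image flow, which inhibits the corresponding event activity so that only independently moving targets survive.

```python
import numpy as np

def rotational_flow(omega, fx, fy, cx, cy, width, height):
    """Image-plane flow (du, dv) per pixel caused purely by camera rotation
    omega = (wx, wy, wz) in rad/s, from the standard depth-independent
    rotational optical-flow model."""
    u, v = np.meshgrid(np.arange(width) - cx, np.arange(height) - cy)
    wx, wy, wz = omega
    du = (u * v / fy) * wx - (fx + u**2 / fx) * wy + v * wz
    dv = (fy + v**2 / fy) * wx - (u * v / fx) * wy - u * wz
    return du, dv

def suppress_ego_motion(event_counts, omega, intrinsics, gain=0.05, threshold=1.0):
    """Subtract an inhibition proportional to the predicted self-motion flow
    magnitude (the 'vestibular' drive) from the per-pixel event activity, then
    threshold: pixels whose residual activity stays positive are attributed to
    independently moving objects."""
    fx, fy, cx, cy = intrinsics
    h, w = event_counts.shape
    du, dv = rotational_flow(omega, fx, fy, cx, cy, w, h)
    inhibition = gain * np.hypot(du, dv)
    residual = event_counts - inhibition
    return residual > threshold  # binary 'spiking' output mask
```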
Affiliation(s)
- Ashwin Sanjay Lele
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
- Yan Fang
- Department of Electrical and Computer Engineering, Kennesaw State University, Marietta, GA, United States
- Aqeel Anwar
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
- Arijit Raychowdhury
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
3
Hu X, Liu Z, Wang X, Yang L, Wang G. Event-Based Obstacle Sensing and Avoidance for an UAV Through Deep Reinforcement Learning. Artif Intell 2022. [DOI: 10.1007/978-3-031-20503-3_32]
4
Gallego G, Delbruck T, Orchard G, Bartolozzi C, Taba B, Censi A, Leutenegger S, Davison AJ, Conradt J, Daniilidis K, Scaramuzza D. Event-Based Vision: A Survey. IEEE Trans Pattern Anal Mach Intell 2022; 44:154-180. [PMID: 32750812] [DOI: 10.1109/tpami.2020.3008413]
Abstract
Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of the brightness changes. Event cameras offer attractive properties compared with traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB versus 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those requiring low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
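The working principle summarized above can be made concrete with a minimal, idealized simulation of an event camera (no sensor noise or refractory period; the function name and threshold value are assumptions, not part of the survey): a pixel emits an event whenever its log-intensity has changed by a fixed contrast threshold since its last event.

```python
import numpy as np

def frames_to_events(frames, timestamps, contrast_threshold=0.2, eps=1e-6):
    """Approximate an event camera's output from a frame sequence. Returns a
    list of (t, x, y, polarity) tuples, where polarity is +1 for a brightness
    increase and -1 for a decrease."""
    log_ref = np.log(frames[0].astype(np.float64) + eps)  # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_now = np.log(frame.astype(np.float64) + eps)
        diff = log_now - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= contrast_threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, int(x), int(y), polarity))
            log_ref[y, x] += polarity * contrast_threshold  # move the reference
    return events
```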
5
Beck M, Maier G, Flitter M, Gruna R, Längle T, Heizmann M, Beyerer J. An Extended Modular Processing Pipeline for Event-Based Vision in Automatic Visual Inspection. Sensors 2021; 21:6143. [PMID: 34577349] [PMCID: PMC8472878] [DOI: 10.3390/s21186143]
Abstract
Dynamic Vision Sensors differ from conventional cameras in that only intensity changes of individual pixels are perceived and transmitted as an asynchronous stream instead of an entire frame. The technology promises, among other things, high temporal resolution together with low latency and low data rates. While such sensors currently enjoy much scientific attention, there are only few publications on practical applications. One field of application that has hardly been considered so far, yet potentially fits the sensor principle well due to its special properties, is automatic visual inspection. In this paper, we evaluate current state-of-the-art processing algorithms in this new application domain. We further propose an algorithmic approach for identifying ideal time windows within an event stream for object classification. For the evaluation of our method, we acquire two novel datasets that contain typical visual inspection scenarios, i.e., the inspection of objects on a conveyor belt and during free fall. The success of our algorithmic extension for data processing is demonstrated on these new datasets by showing that it substantially increases the classification accuracy of current algorithms. By making our new datasets publicly available, we intend to stimulate further research on the application of Dynamic Vision Sensors in machine vision.
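As a rough illustration of what selecting an "ideal time window" can mean (a minimal sketch under the assumption that the window with the highest event count corresponds to the object being fully in view; this is not necessarily the paper's actual criterion, and the function name and window length are hypothetical):

```python
import numpy as np

def best_time_window(timestamps, window_us=10_000):
    """Slide a fixed-length window over a sorted array of event timestamps (in
    microseconds) and return the (start, end) interval containing the most
    events, to be used as the classification window."""
    timestamps = np.asarray(timestamps)
    right = np.searchsorted(timestamps, timestamps + window_us, side="right")
    counts = right - np.arange(len(timestamps))
    i = int(np.argmax(counts))
    return timestamps[i], timestamps[i] + window_us
```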
Affiliation(s)
- Moritz Beck
- Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, 76131 Karlsruhe, Germany
- Georg Maier
- Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, 76131 Karlsruhe, Germany
- Merle Flitter
- Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, 76131 Karlsruhe, Germany
- Robin Gruna
- Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, 76131 Karlsruhe, Germany
- Thomas Längle
- Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, 76131 Karlsruhe, Germany
- Michael Heizmann
- Institute of Industrial Information Technology (IIIT), Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany
- Jürgen Beyerer
- Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB, 76131 Karlsruhe, Germany
- Vision and Fusion Laboratory (IES), Karlsruhe Institute of Technology (KIT), 76131 Karlsruhe, Germany
6
Tayarani-Najaran MH, Schmuker M. Event-Based Sensing and Signal Processing in the Visual, Auditory, and Olfactory Domain: A Review. Front Neural Circuits 2021; 15:610446. [PMID: 34135736] [PMCID: PMC8203204] [DOI: 10.3389/fncir.2021.610446]
Abstract
The nervous system converts the physical quantities sensed by its primary receptors into trains of events that are then processed in the brain. Its unmatched efficiency in information processing has long inspired engineers to seek brain-like approaches to sensing and signal processing. The key principle pursued in neuromorphic sensing is to shed the traditional approach of periodic sampling in favor of an event-driven scheme that mimics sampling as it occurs in the nervous system, where events are preferably emitted upon a change of the sensed stimulus. In this paper, we highlight the advantages and challenges of event-based sensing and signal processing in the visual, auditory, and olfactory domains. We also provide a survey of the literature covering neuromorphic sensing and signal processing in all three modalities. Our aim is to facilitate research in event-based sensing and signal processing by providing a comprehensive overview of the research performed previously as well as highlighting conceptual advantages, current progress, and future challenges in the field.
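The event-driven sampling principle described above is independent of the sensory modality; a minimal send-on-delta sketch for a generic 1-D sensor signal (the function name and threshold are illustrative assumptions) looks like this:

```python
def send_on_delta(times, signal, delta):
    """Event-driven (send-on-delta) sampling: emit a sample only when the signal
    has moved by at least `delta` since the last emitted sample, instead of
    sampling at a fixed period. Returns a list of (time, value) events."""
    events = [(times[0], signal[0])]
    last = signal[0]
    for t, s in zip(times[1:], signal[1:]):
        if abs(s - last) >= delta:
            events.append((t, s))
            last = s
    return events
```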
Affiliation(s)
- Michael Schmuker
- School of Physics, Engineering and Computer Science, University of Hertfordshire, Hatfield, United Kingdom
7
Seeing through Events: Real-Time Moving Object Sonification for Visually Impaired People Using Event-Based Camera. Sensors 2021; 21:3558. [PMID: 34065360] [PMCID: PMC8161033] [DOI: 10.3390/s21103558]
Abstract
Scene sonification is a powerful technique to help Visually Impaired People (VIP) understand their surroundings. Existing methods usually perform sonification on the entire image of the surrounding scene acquired by a standard camera, or on static obstacles identified a priori by image processing algorithms applied to the RGB image of the surrounding scene. However, if all the information in the scene is delivered to VIP simultaneously, it causes information redundancy. In fact, biological vision is more sensitive to moving objects in the scene than to static objects, which is also the original motivation behind the event-based camera. In this paper, we propose a real-time sonification framework to help VIP understand the moving objects in the scene. First, we capture the events in the scene using an event-based camera and cluster them into multiple moving objects without relying on any prior knowledge. Then, MIDI-based sonification of these objects is performed synchronously. Finally, we conduct comprehensive experiments on scene videos with sonification audio, attended by 20 VIP and 20 Sighted People (SP). The results show that our method allows both groups of participants to clearly distinguish the number, size, motion speed, and motion trajectories of multiple objects, and that it is more comfortable to hear than existing methods in terms of aesthetics.
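A minimal sketch of the two stages described above (clustering events into moving objects without a prior model, then mapping each object to sound) might look as follows; the clustering parameters and the MIDI mapping are illustrative assumptions, not the paper's actual choices.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_events(events, time_scale=1e-3, eps=5.0, min_samples=20):
    """Group the events of a short time window into moving-object candidates by
    density-based clustering over (x, y, t); `events` is an (N, 3) array of
    (x, y, t_us). Time is rescaled so temporal proximity is comparable to
    spatial proximity. Returns a dict: cluster label -> member events."""
    pts = np.column_stack([events[:, 0], events[:, 1], events[:, 2] * time_scale])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(pts).labels_
    return {k: events[labels == k] for k in set(labels) if k != -1}  # -1 is noise

def cluster_to_midi(cluster, sensor_width=346):
    """Map one object cluster to MIDI note parameters: horizontal position to
    pitch, cluster size (event count) to velocity (loudness)."""
    pitch = int(40 + 40 * cluster[:, 0].mean() / sensor_width)
    velocity = int(np.clip(len(cluster) / 10, 20, 127))
    return pitch, velocity
```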
8
Maro JM, Ieng SH, Benosman R. Event-Based Gesture Recognition With Dynamic Background Suppression Using Smartphone Computational Capabilities. Front Neurosci 2020; 14:275. [PMID: 32327968] [PMCID: PMC7160298] [DOI: 10.3389/fnins.2020.00275]
Abstract
In this paper, we introduce a framework for dynamic gesture recognition with background suppression operating on the output of a moving event-based camera. The system is developed to operate in real time using only the computational capabilities of a mobile phone. It introduces a new development around the concept of time-surfaces. It also presents a novel event-based methodology for dynamically removing backgrounds that exploits the high temporal resolution of event-based cameras. To our knowledge, this is the first Android event-based framework for vision-based recognition of dynamic gestures running on a smartphone without off-board processing. We assess performance in several scenarios, both indoors and outdoors, under static and dynamic conditions and in uncontrolled lighting. We also introduce a new event-based dataset for gesture recognition with static and dynamic backgrounds (made publicly available). The set of gestures was selected following a clinical trial to allow human-machine interaction for the visually impaired and older adults. We finally report comparisons with prior work on event-based gesture recognition, obtaining comparable results without the use of advanced classification techniques or power-greedy hardware.
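The time-surface concept at the heart of such methods can be sketched in a few lines (a simplified per-event version with assumed names and parameter values; not the authors' implementation): each pixel stores the timestamp of its most recent event, and the local descriptor for an incoming event is an exponentially decayed patch of those timestamps.

```python
import numpy as np

def local_time_surface(last_ts, x, y, t_now, tau=50e3, radius=3):
    """Update the per-pixel last-event timestamp map `last_ts` (initialized to
    -inf) with the incoming event at (x, y, t_now), then return the local
    time-surface: exp(-(t_now - t_last)/tau) over a (2*radius+1)^2 patch, so
    recent activity is close to 1 and stale activity close to 0. Timestamps and
    tau are in microseconds; border effects are ignored for simplicity."""
    last_ts[y, x] = t_now
    patch = last_ts[y - radius:y + radius + 1, x - radius:x + radius + 1]
    return np.exp(-(t_now - patch) / tau)
```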
Affiliation(s)
- Sio-Hoi Ieng
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- CHNO des Quinze-Vingts, INSERM-DGOS CIC 1423, Paris, France
- Ryad Benosman
- Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- CHNO des Quinze-Vingts, INSERM-DGOS CIC 1423, Paris, France
- Departments of Ophthalmology/ECE/BioE, University of Pittsburgh, Pittsburgh, PA, United States
- Department of Computer Science, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, United States
9
Ramesh B, Ussa A, Della Vedova L, Yang H, Orchard G. Low-Power Dynamic Object Detection and Classification With Freely Moving Event Cameras. Front Neurosci 2020; 14:135. [PMID: 32153357] [PMCID: PMC7044237] [DOI: 10.3389/fnins.2020.00135]
Abstract
We present the first purely event-based, energy-efficient approach for dynamic object detection and categorization with a freely moving event camera. Compared to traditional cameras, event-based object recognition systems are considerably behind in terms of accuracy and algorithmic maturity. To this end, this paper presents an event-based feature extraction method devised by accumulating local activity across the image frame and then applying principal component analysis (PCA) to the normalized neighborhood region. Subsequently, we propose a backtracking-free k-d tree mechanism for efficient feature matching that takes advantage of the low dimensionality of the feature representation. Additionally, the proposed k-d tree mechanism allows for feature selection to obtain a lower-dimensional object representation when hardware resources are too limited to implement PCA. Consequently, the proposed system can be realized on a field-programmable gate array (FPGA) device, leading to a high performance-to-resource ratio. The proposed system is tested on real-world event-based datasets for object categorization, showing superior classification performance compared to state-of-the-art algorithms. Additionally, we verified the real-time FPGA performance of the proposed object detection method, trained with limited data as opposed to deep learning methods, under a closed-loop aerial vehicle flight mode. We also compare the proposed object categorization framework to pre-trained convolutional neural networks using transfer learning and highlight the drawbacks of using frame-based sensors under dynamic camera motion. Finally, we provide critical insights into the effect of the feature extraction method and the classification parameters on system performance, which aids in understanding the framework and adapting it to various low-power (less than a few watts) application scenarios.
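In rough outline, the descriptor stage described above could be sketched as follows (a simplified stand-in with hypothetical names, patch size, and dimensionality; the actual local-activity accumulation and k-d tree details are in the paper): accumulate events into a count map, normalize local patches, and project them onto the top principal components to get low-dimensional descriptors suitable for k-d tree matching.

```python
import numpy as np

def local_event_descriptors(event_count_map, keypoints, patch=7, n_components=8):
    """Extract low-dimensional descriptors: take L2-normalized patches of the
    per-pixel event-count map around active locations, center them, and project
    onto the top principal components obtained via SVD."""
    r = patch // 2
    patches = []
    for x, y in keypoints:
        p = event_count_map[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
        if p.shape != (patch, patch):
            continue  # skip keypoints too close to the border
        norm = np.linalg.norm(p)
        patches.append((p / norm if norm > 0 else p).ravel())
    X = np.array(patches)
    X -= X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)  # PCA via SVD
    return X @ vt[:n_components].T  # (num_keypoints, n_components) descriptors
```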
Affiliation(s)
- Bharath Ramesh
- Life Science Institute, The N.1 Institute for Health, National University of Singapore, Singapore, Singapore
- Temasek Laboratories, National University of Singapore, Singapore, Singapore
- Andrés Ussa
- Life Science Institute, The N.1 Institute for Health, National University of Singapore, Singapore, Singapore
- Temasek Laboratories, National University of Singapore, Singapore, Singapore
- Luca Della Vedova
- Temasek Laboratories, National University of Singapore, Singapore, Singapore
- Hong Yang
- Temasek Laboratories, National University of Singapore, Singapore, Singapore
- Garrick Orchard
- Life Science Institute, The N.1 Institute for Health, National University of Singapore, Singapore, Singapore
- Temasek Laboratories, National University of Singapore, Singapore, Singapore
10
Machine Learning Methodology in a System Applying the Adaptive Strategy for Teaching Human Motions. Sensors 2020; 20:314. [PMID: 31935910] [PMCID: PMC6982902] [DOI: 10.3390/s20010314]
Abstract
The teaching of motion activities in rehabilitation, sports, and professional work has great social significance. However, the automatic teaching of these activities, particularly those involving fast motions, requires the use of an adaptive system that can adequately react to the changing stages and conditions of the teaching process. This paper describes a prototype of an automatic system that utilizes online classification of motion signals to select the proper teaching algorithm. The knowledge necessary to perform the classification is acquired from experts through a machine learning methodology. The system utilizes multidimensional motion signals captured using MEMS (Micro-Electro-Mechanical Systems) sensors. Moreover, an array of vibrotactile actuators is used to provide feedback to the learner. The main goal of the presented article is to show that the effectiveness of the described teaching system is higher than that of a system controlling the learning process without signal classification. Statistical tests carried out with the prototype system confirmed this thesis, which is the main outcome of the presented study. An important contribution is also a proposal to standardize the system structure; the standardization facilitates system configuration and the implementation of individual, specialized teaching algorithms.
11
Li H, Shi L. Robust Event-Based Object Tracking Combining Correlation Filter and CNN Representation. Front Neurorobot 2019; 13:82. [PMID: 31649524] [PMCID: PMC6795673] [DOI: 10.3389/fnbot.2019.00082]
Abstract
Object tracking based on the event-based camera or dynamic vision sensor (DVS) remains a challenging task due to noise events, rapid changes of event-stream shape, cluttered and complex background textures, and occlusion. To address these challenges, this paper presents a robust event-stream object tracking method based on a correlation filter mechanism and convolutional neural network (CNN) representation. In the proposed method, rate coding is used to encode the event-stream object. Feature representations from hierarchical convolutional layers of a pre-trained CNN are used to represent the appearance of the rate-encoded event-stream object. Results show that the proposed method not only achieves good tracking performance in many complicated scenes with noise events, complex background textures, occlusion, and intersecting trajectories, but is also robust to variable scale, variable pose, and non-rigid deformations. In addition, the correlation filter-based method has the advantage of high speed. The proposed approach will promote the potential applications of these event-based vision sensors in autonomous driving, robotics, and many other high-speed scenarios.
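The two building blocks named in the abstract, rate coding of the event stream and a correlation-filter response, can be sketched minimally as follows (illustrative only; the paper's actual pipeline uses hierarchical CNN features rather than raw rate-coded patches, and all names and parameters here are assumptions):

```python
import numpy as np

def rate_code(events, shape, t0, t1):
    """Rate-code an event stream: count events per pixel over [t0, t1) and
    normalize to [0, 1], yielding a frame-like image that correlation filters
    (or a CNN) can operate on. `events` is an iterable of (x, y, t, polarity)."""
    img = np.zeros(shape, dtype=np.float64)
    for x, y, t, _ in events:
        if t0 <= t < t1:
            img[y, x] += 1.0
    return img / img.max() if img.max() > 0 else img

def correlation_response(search_region, template):
    """FFT-based cross-correlation of a search region with a template; the peak
    gives the target's displacement between two rate-coded windows."""
    F = np.fft.fft2(search_region)
    H = np.fft.fft2(template, s=search_region.shape)
    response = np.real(np.fft.ifft2(F * np.conj(H)))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    return dx, dy, response
```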
Affiliation(s)
- Hongmin Li
- Department of Precision Instrument, Center for Brain-Inspired Computing Research, Tsinghua University, Beijing, China
- Luping Shi
- Department of Precision Instrument, Center for Brain-Inspired Computing Research, Tsinghua University, Beijing, China
12
Gehrig D, Rebecq H, Gallego G, Scaramuzza D. EKLT: Asynchronous Photometric Feature Tracking Using Events and Frames. Int J Comput Vis 2019. [DOI: 10.1007/s11263-019-01209-w]
13
Deep representation via convolutional neural network for classification of spatiotemporal event streams. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.02.019]
14
Marcireau A, Ieng SH, Simon-Chane C, Benosman RB. Event-Based Color Segmentation With a High Dynamic Range Sensor. Front Neurosci 2018; 12:135. [PMID: 29695948] [PMCID: PMC5904265] [DOI: 10.3389/fnins.2018.00135]
Abstract
This paper introduces a color asynchronous neuromorphic event-based camera and a methodology to process color output from the device to perform color segmentation and tracking at the native temporal resolution of the sensor (down to one microsecond). Our color vision sensor prototype is a combination of three Asynchronous Time-based Image Sensors, sensitive to absolute color information. We devise a color processing algorithm leveraging this information. It is designed to be computationally cheap, thus showing how low-level processing benefits from asynchronous acquisition and high-temporal-resolution data. The resulting color segmentation and tracking performance is assessed both on an indoor controlled scene and on two outdoor uncontrolled scenes. The tracking's mean error relative to the ground truth for the objects in the outdoor scenes ranges from two to twenty pixels.
Affiliation(s)
- Alexandre Marcireau
- Institut National de la Santé et de la Recherche Médicale, UMRI S 968, Sorbonne Universites, UPMC Univ Paris 06, UMR S 968, Centre National de la Recherche Scientifique, UMR 7210, Institut de la Vision, Paris, France
- Sio-Hoi Ieng
- Institut National de la Santé et de la Recherche Médicale, UMRI S 968, Sorbonne Universites, UPMC Univ Paris 06, UMR S 968, Centre National de la Recherche Scientifique, UMR 7210, Institut de la Vision, Paris, France
- Camille Simon-Chane
- Institut National de la Santé et de la Recherche Médicale, UMRI S 968, Sorbonne Universites, UPMC Univ Paris 06, UMR S 968, Centre National de la Recherche Scientifique, UMR 7210, Institut de la Vision, Paris, France
- Ryad B Benosman
- Institut National de la Santé et de la Recherche Médicale, UMRI S 968, Sorbonne Universites, UPMC Univ Paris 06, UMR S 968, Centre National de la Recherche Scientifique, UMR 7210, Institut de la Vision, Paris, France
15
Rigi A, Baghaei Naeini F, Makris D, Zweiri Y. A Novel Event-Based Incipient Slip Detection Using Dynamic Active-Pixel Vision Sensor (DAVIS). Sensors 2018; 18:333. [PMID: 29364190] [PMCID: PMC5856167] [DOI: 10.3390/s18020333]
Abstract
In this paper, a novel approach to detect incipient slip based on the contact area between a transparent silicone medium and different objects using a neuromorphic event-based vision sensor (DAVIS) is proposed. Event-based algorithms are developed to detect incipient slip, slip, stress distribution, and object vibration. Thirty-seven experiments were performed on five objects with different sizes, shapes, materials, and weights to compare the precision and response time of the proposed approach. The approach is validated using a high-speed conventional camera (1000 FPS). The results indicate that the sensor can detect incipient slippage with an average latency of 44.1 ms in an unstructured environment for various objects. It is worth mentioning that the experiments were conducted in an uncontrolled experimental environment, which added high noise levels that significantly affected the results. However, eleven of the experiments had a detection latency below 10 ms, which shows the capability of this method. The results are very promising and show the sensor's high potential for manipulation applications, especially in dynamic environments.
Affiliation(s)
- Amin Rigi
- Faculty of Science, Engineering and Computing, Kingston University London, London SW15 3DW, UK.
- Fariborz Baghaei Naeini
- Faculty of Science, Engineering and Computing, Kingston University London, London SW15 3DW, UK.
- Dimitrios Makris
- Faculty of Science, Engineering and Computing, Kingston University London, London SW15 3DW, UK.
- Yahya Zweiri
- Faculty of Science, Engineering and Computing, Kingston University London, London SW15 3DW, UK.
- Visiting Associate Professor, Robotics Institute, Khalifa University of Science and Technology, P.O. Box 127788, Abu Dhabi 999041, United Arab Emirates.
16
Glover A, Vasco V, Iacono M, Bartolozzi C. The Event-Driven Software Library for YARP—With Algorithms and iCub Applications. Front Robot AI 2018. [DOI: 10.3389/frobt.2017.00073]
17
Asynchronous, Photometric Feature Tracking Using Events and Frames. Computer Vision – ECCV 2018. [DOI: 10.1007/978-3-030-01258-8_46]
18
Sabatier Q, Ieng SH, Benosman R. Asynchronous Event-Based Fourier Analysis. IEEE Trans Image Process 2017; 26:2192-2202. [PMID: 28186889] [DOI: 10.1109/tip.2017.2661702]
Abstract
This paper introduces a method to compute the FFT of a visual scene at a high temporal precision, of around 1 μs, from the output of an asynchronous event-based camera. Event-based cameras make it possible to go beyond the widespread and ingrained belief that acquiring series of images at some rate is a good way to capture visual motion. Each pixel adapts its own sampling rate to the visual input it receives and defines the timing of its own sampling points in response to its visual input by reacting to changes in the amount of incident light. As a consequence, the sampling process is no longer governed by a fixed timing source but by the signal to be sampled itself, or more precisely by the variations of the signal in the amplitude domain. The event-based acquisition paradigm allows going beyond the current conventional method of computing the FFT. The event-driven FFT algorithm relies on a heuristic methodology designed to operate directly on incoming gray-level events to update the FFT incrementally while reducing both computation and data load. We show that, for reasonable levels of approximation at equivalent frame rates beyond the millisecond, the method performs faster and more efficiently than conventional image acquisition. Several experiments are carried out on indoor and outdoor scenes where both conventional and event-driven FFT computation are shown and compared.
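The incremental idea can be illustrated with a naive, exact sketch (the paper adds approximations to cut the cost further; the class and variable names here are assumptions): when one pixel's gray level changes by delta, every Fourier coefficient changes by a rank-one term, so the transform can be patched per event instead of being recomputed over the whole frame.

```python
import numpy as np

class IncrementalDFT:
    """Maintain the 2-D DFT of an image event by event. A gray-level change
    `delta` at pixel (x, y) changes coefficient F[u, v] by
    delta * exp(-2j*pi*(u*y/H + v*x/W)), a rank-one update."""

    def __init__(self, height, width):
        self.F = np.zeros((height, width), dtype=np.complex128)
        u = np.arange(height)
        v = np.arange(width)
        # row_phase[u, y] = exp(-2j*pi*u*y/H); col_phase[v, x] = exp(-2j*pi*v*x/W)
        self.row_phase = np.exp(-2j * np.pi * np.outer(u, u) / height)
        self.col_phase = np.exp(-2j * np.pi * np.outer(v, v) / width)

    def update(self, x, y, delta):
        """Apply a gray-level change of `delta` at pixel (x, y)."""
        self.F += delta * np.outer(self.row_phase[:, y], self.col_phase[:, x])
```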
19
Mishra A, Ghosh R, Principe JC, Thakor NV, Kukreja SL. A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors. Front Neurosci 2017; 11:83. [PMID: 28316563] [PMCID: PMC5334512] [DOI: 10.3389/fnins.2017.00083]
Abstract
Motion segmentation is a critical pre-processing step for autonomous robotic systems to facilitate tracking of moving objects in cluttered environments. Event-based sensors are low-power analog devices that represent a scene by means of asynchronous information updates of only the dynamic details at high temporal resolution and hence require significantly fewer computations. However, motion segmentation using spatiotemporal data is a challenging task due to data asynchrony. Prior approaches for object tracking using neuromorphic sensors perform well only while the sensor is static or when a known model of the object to be followed is available. To address these limitations, in this paper we develop a technique for generalized motion segmentation based on spatial statistics across time frames. First, we create micromotion on the platform to facilitate the separation of static and dynamic elements of a scene, inspired by human saccadic eye movements. Second, we introduce the concept of spike-groups as a methodology to partition spatio-temporal event groups, which facilitates the computation of scene statistics and the characterization of the objects in it. Experimental results show that our algorithm is able to classify dynamic objects with a moving camera with a maximum accuracy of 92%.
Affiliation(s)
- Abhishek Mishra
- Singapore Institute for Neurotechnology, National University of Singapore, Singapore
- Rohan Ghosh
- Singapore Institute for Neurotechnology, National University of Singapore, Singapore
- Jose C Principe
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, USA
- Nitish V Thakor
- Singapore Institute for Neurotechnology, National University of Singapore, Singapore
- Biomedical Engineering Department, Johns Hopkins University, Baltimore, MD, USA
- Sunil L Kukreja
- Singapore Institute for Neurotechnology, National University of Singapore, Singapore
20
Pacchierotti C, Scheggi S, Prattichizzo D, Misra S. Haptic Feedback for Microrobotics Applications: A Review. Front Robot AI 2016. [DOI: 10.3389/frobt.2016.00053]
21
Reverter Valeiras D, Lagorce X, Clady X, Bartolozzi C, Ieng SH, Benosman R. An Asynchronous Neuromorphic Event-Driven Visual Part-Based Shape Tracking. IEEE Trans Neural Netw Learn Syst 2015; 26:3045-3059. [PMID: 25794399] [DOI: 10.1109/tnnls.2015.2401834]
Abstract
Object tracking is an important step in many artificial vision tasks. The current state-of-the-art implementations remain too computationally demanding for the problem to be solved in real time with high dynamics. This paper presents a novel real-time method for visual part-based tracking of complex objects from the output of an asynchronous event-based camera. This paper extends the pictorial structures model introduced by Fischler and Elschlager 40 years ago and introduces a new formulation of the problem, allowing the dynamic processing of visual input in real time at high temporal resolution using a conventional PC. It relies on the concept of representing an object as a set of basic elements linked by springs. These basic elements consist of simple trackers capable of successfully tracking a target with an ellipse-like shape at several kilohertz on a conventional computer. For each incoming event, the method updates the elastic connections established between the trackers and guarantees a desired geometric structure corresponding to the tracked object in real time. This introduces a high temporal elasticity to adapt to projective deformations of the tracked object in the focal plane. The elastic energy of this virtual mechanical system provides a quality criterion for tracking and can be used to determine whether the measured deformations are caused by the perspective projection of the perceived object or by occlusions. Experiments on real-world data show the robustness of the method in the context of dynamic face tracking.
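The springs-and-trackers idea can be sketched minimally as follows (not the authors' formulation, which uses more elaborate elementary trackers and energy terms; every name and constant below is an assumption): each incoming event pulls its assigned tracker, spring forces restore the desired geometry, and the residual elastic energy serves as a tracking-quality score.

```python
import numpy as np

def spring_update(positions, edges, rest_lengths, event_xy, tracker_idx,
                  alpha=0.1, k=0.02):
    """Per-event update of a part-based tracker modeled as point trackers linked
    by springs. `positions` is an (M, 2) array, `edges` a list of (i, j) pairs,
    `rest_lengths` the desired edge lengths. Returns updated positions and the
    elastic energy of the spring system."""
    positions = positions.copy()
    positions[tracker_idx] += alpha * (event_xy - positions[tracker_idx])
    for (i, j), L0 in zip(edges, rest_lengths):
        d = positions[j] - positions[i]
        dist = np.linalg.norm(d)
        if dist > 0:
            force = k * (dist - L0) * d / dist  # Hooke's law along the edge
            positions[i] += force
            positions[j] -= force
    energy = sum(0.5 * (np.linalg.norm(positions[j] - positions[i]) - L0) ** 2
                 for (i, j), L0 in zip(edges, rest_lengths))
    return positions, energy
```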
22
Tan N, Clevy C, Laurent GJ, Sandoz P, Chaillet N. Accuracy Quantification and Improvement of Serial Micropositioning Robots for In-Plane Motions. IEEE Trans Robot 2015. [DOI: 10.1109/tro.2015.2498301]
23
Lagorce X, Meyer C, Ieng SH, Filliat D, Benosman R. Asynchronous Event-Based Multikernel Algorithm for High-Speed Visual Features Tracking. IEEE Trans Neural Netw Learn Syst 2015; 26:1710-1720. [PMID: 25248193] [DOI: 10.1109/tnnls.2014.2352401]
Abstract
This paper presents a number of new methods for visual tracking using the output of an event-based asynchronous neuromorphic dynamic vision sensor. It allows the tracking of multiple visual features in real time, achieving an update rate of several hundred kilohertz on a standard desktop PC. The approach has been specially adapted to take advantage of the event-driven properties of these sensors by combining both spatial and temporal correlations of events in an asynchronous iterative framework. Various kernels, such as Gaussian, Gabor, combinations of Gabor functions, and arbitrary user-defined kernels, are used to track features from incoming events. The trackers described in this paper are capable of handling variations in position, scale, and orientation through the use of multiple pools of trackers. This approach avoids the N² operations per event associated with conventional kernel-based convolution operations with N × N kernels. The tracking performance was evaluated experimentally for each type of kernel in order to demonstrate the robustness of the proposed solution.
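A stripped-down version of per-event kernel tracking (one Gaussian kernel per tracker, no scale or orientation pools; the names, gating rule, and constants are assumptions rather than the paper's algorithm) conveys the event-driven update style:

```python
import numpy as np

def assign_and_update(trackers, event_xy, sigma=5.0, alpha=0.05, gate=3.0):
    """Per-event multikernel tracking sketch: score every tracker with a Gaussian
    kernel centered at its current position, assign the event to the best-scoring
    tracker if it lies within `gate` standard deviations, and move that tracker
    toward the event. `trackers` is an (M, 2) float array updated in place;
    returns the index of the updated tracker or None if the event was rejected."""
    event_xy = np.asarray(event_xy, dtype=np.float64)
    d2 = np.sum((trackers - event_xy) ** 2, axis=1)
    best = int(np.argmin(d2))          # equivalently, argmax of exp(-d2 / (2*sigma**2))
    if d2[best] > (gate * sigma) ** 2:
        return None                    # event too far from every tracker
    trackers[best] += alpha * (event_xy - trackers[best])
    return best
```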
24
Abstract
This paper focuses on the development of a weakly calibrated three-view-based visual servoing control law applied to the laser steering process. It proposes to revisit the conventional trifocal constraints governing three-view geometry for more suitable use in the design of an efficient trifocal vision-based control. Thereby, an explicit control law is derived, without any matrix inversion, which makes it simple to prove the global exponential stability of the control. Moreover, only '[Formula: see text]' are necessary to design a fast trifocal control system. Thanks to the simplicity of the implementation, our control law is fast, accurate, robust to errors in the weak calibration, and exhibits good behavior in terms of convergence and decoupling. This was demonstrated by different experimental validations performed on a test bench for steering a laser spot on 2D and 3D surfaces using a two-degrees-of-freedom commercial piezoelectric mirror, as well as in preliminary cadaver trials using an endoluminal micromirror prototype.
Affiliation(s)
- Nicolas Andreff
- Automatic Control and Micro-Mechatronic Systems (AS2M) Department, FEMTO-ST Institute, France
- Brahim Tamadazte
- Automatic Control and Micro-Mechatronic Systems (AS2M) Department, FEMTO-ST Institute, France
25
Liu J, Gong Z, Tang K, Lu Z, Ru C, Luo J, Xie S, Sun Y. Locating End-Effector Tips in Robotic Micromanipulation. IEEE Trans Robot 2014. [DOI: 10.1109/tro.2013.2280060]