1. Lesage X, Tran R, Mancini S, Fesquet L. Velocity and Color Estimation Using Event-Based Clustering. Sensors (Basel) 2023; 23:9768. [PMID: 38139614] [PMCID: PMC10747939] [DOI: 10.3390/s23249768]
Abstract
Event-based clustering provides a low-power embedded solution for low-level feature extraction in a scene. The algorithm exploits the non-uniform sampling capability of event-based image sensors to measure local intensity variations within a scene. The clustering algorithm forms groups of similar events while simultaneously estimating their attributes. This work proposes exploiting additional event information to provide new attributes for further processing. We elaborate on estimating object velocity from the mean motion of a cluster. Next, we examine a novel form of events that includes an intensity measurement of the color at the concerned pixel; these events can be processed to estimate the rough color of a cluster, or the color distribution within a cluster. Lastly, the paper presents applications that use these features. The resulting algorithms are exercised with a custom event-based simulator that generates videos of outdoor scenes. The velocity estimation methods give satisfactory results, with a trade-off between accuracy and convergence speed. Regarding color estimation, luminance is challenging to estimate in the test cases, while chrominance is estimated precisely. The estimated quantities are adequate for accurately classifying objects into predefined categories.
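The mean-motion idea in this abstract can be sketched in a few lines: each event nudges a running centroid, and the velocity estimate is the smoothed centroid displacement per unit time. The `Cluster` class, its field names, and the smoothing factor `alpha` are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    """Running state of one event cluster (illustrative names and fields)."""
    cx: float = 0.0      # centroid x (px)
    cy: float = 0.0      # centroid y (px)
    t_last: float = 0.0  # timestamp of the last event folded in (s)
    vx: float = 0.0      # smoothed velocity estimate (px/s)
    vy: float = 0.0
    n: int = 0           # number of events absorbed so far

    def update(self, x: float, y: float, t: float, alpha: float = 0.1) -> None:
        """Fold one event into the running centroid, then update the velocity
        estimate from the centroid's displacement over the elapsed time."""
        if self.n == 0:
            self.cx, self.cy, self.t_last, self.n = x, y, t, 1
            return
        px, py = self.cx, self.cy
        self.cx += (x - self.cx) / (self.n + 1)   # incremental mean
        self.cy += (y - self.cy) / (self.n + 1)
        dt = t - self.t_last
        if dt > 0:
            # exponential smoothing: alpha trades accuracy for convergence speed
            self.vx = (1 - alpha) * self.vx + alpha * (self.cx - px) / dt
            self.vy = (1 - alpha) * self.vy + alpha * (self.cy - py) / dt
            self.t_last = t
        self.n += 1
```

Feeding events from an object sweeping rightward yields a positive `vx`; `alpha` embodies the accuracy/convergence-speed trade-off the abstract mentions.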
Affiliation(s)
- Xavier Lesage: Univ. Grenoble Alpes, CNRS (National Centre for Scientific Research), Grenoble INP (Institute of Engineering), TIMA (Techniques of Informatics and Microelectronics for Integrated Systems Architecture), F-38000 Grenoble, France; Orioma, F-38430 Moirans, France
- Rosalie Tran: Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMA, F-38000 Grenoble, France
- Stéphane Mancini: Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMA, F-38000 Grenoble, France
- Laurent Fesquet: Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMA, F-38000 Grenoble, France
2. Zhu L, Dong S, Li J, Huang T, Tian Y. Ultra-High Temporal Resolution Visual Reconstruction From a Fovea-Like Spike Camera via Spiking Neuron Model. IEEE Trans Pattern Anal Mach Intell 2023; 45:1233-1249. [PMID: 35085071] [DOI: 10.1109/TPAMI.2022.3146140]
Abstract
Neuromorphic vision sensors are a bio-inspired imaging paradigm that has emerged in recent years. They use asynchronous spike signals instead of traditional frames to achieve ultra-high-speed sampling. Unlike the dynamic vision sensor (DVS), which perceives movement by imitating the retinal periphery, the recently developed spike camera perceives fine textures by simulating a small retinal region called the fovea. For this new type of neuromorphic camera, reconstructing ultra-high-speed visual images from spike data is an important yet challenging issue in visual scene perception, analysis, and recognition. This paper proposes, for the first time, a bio-inspired visual reconstruction framework for the spike camera. Its core idea is to use biologically inspired adaptive adjustment mechanisms, combined with the spatiotemporal spike information extracted by the proposed model, to reconstruct the full texture of natural scenes at ultra-high temporal resolution. Specifically, the model consists of a motion local excitation layer, a spike refining layer, and a visual reconstruction layer motivated by bio-realistic leaky integrate-and-fire (LIF) neurons and synaptic connections with a spike-timing-dependent plasticity (STDP) rule. To evaluate performance, a spike dataset was constructed from normal and high-speed real-world scenes recorded by the spike camera. The experimental results show that the proposed approach can reconstruct visual images at 40,000 frames per second in both normal and high-speed scenes, while achieving high dynamic range and high image quality.
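The leaky integrate-and-fire neuron that motivates the reconstruction layer can be sketched as follows. This is the textbook LIF model with assumed parameter values (`tau`, `v_th`), not the paper's spike-camera pipeline.

```python
def lif_neuron(input_current, dt=1e-3, tau=0.02, v_th=1.0, v_reset=0.0):
    """Textbook leaky integrate-and-fire neuron: the membrane potential v
    leaks toward rest while integrating input, and emits a spike (then
    resets) whenever it crosses the threshold. Parameters are illustrative."""
    v = v_reset
    spike_times = []
    for i, current in enumerate(input_current):
        v += (dt / tau) * (current - v)  # leaky integration step
        if v >= v_th:                    # threshold crossing: spike and reset
            spike_times.append(i * dt)
            v = v_reset
    return spike_times
```

A constant input above threshold produces a regular spike train whose rate grows with input strength, which is the intensity-to-spike-rate mapping a fovea-like spike camera exploits.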
3. Blair S, Garcia M, Davis T, Zhu Z, Liang Z, Konopka C, Kauffman K, Colanceski R, Ferati I, Kondov B, Stojanoski S, Todorovska MB, Dimitrovska NT, Jakupi N, Miladinova D, Petrusevska G, Kondov G, Dobrucki WL, Nie S, Gruev V. Hexachromatic bioinspired camera for image-guided cancer surgery. Sci Transl Med 2021; 13(592):eaaw7067. [PMID: 33952675] [DOI: 10.1126/scitranslmed.aaw7067]
Abstract
Cancer affects one in three people worldwide. Surgery remains the primary curative option for localized cancers, but good prognoses require complete removal of primary tumors and timely recognition of metastases. To expand surgical capabilities and enhance patient outcomes, we developed a six-channel color/near-infrared image sensor inspired by the mantis shrimp visual system that enabled near-infrared fluorescence image guidance during surgery. The mantis shrimp's unique eye, which maximizes the number of photons contributing to and the amount of information contained in each glimpse of its surroundings, is recapitulated in our single-chip imaging system that integrates arrays of vertically stacked silicon photodetectors and pixelated spectral filters. To provide information about tumor location unavailable from a single instrument, we tuned three color channels to permit an intuitive perspective of the surgical procedure and three near-infrared channels to permit multifunctional imaging of optical probes highlighting cancerous tissue. In nude athymic mice bearing human prostate tumors, our image sensor enabled simultaneous detection of two tumor-targeted fluorophores, distinguishing diseased from healthy tissue in an estimated 92% of cases. It also permitted extraction of near-infrared structured illumination, enabling the mapping of the three-dimensional topography of tumors and surgical sites to within 1.2-mm error. In the operating room, during surgical resection in 18 patients with breast cancer, our image sensor further enabled sentinel lymph node mapping using clinically approved near-infrared fluorophores. The flexibility and performance afforded by this simple and compact architecture highlight the benefits of biologically inspired sensors in image-guided surgery.
Affiliation(s)
- Steven Blair, Missael Garcia, Tyler Davis, Zhongmin Zhu, Zuodong Liang: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Christian Konopka: Department of Bioengineering and Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Kevin Kauffman: Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI 48109, USA
- Risto Colanceski, Imran Ferati, Borislav Kondov, Natasha Toleska Dimitrovska, Nexhat Jakupi, Goran Kondov: University Clinic Hospital, Department of Thoracic and Vascular Surgery, Ss. Cyril and Methodius University of Skopje, 1000 Skopje, Republic of North Macedonia
- Sinisa Stojanoski, Daniela Miladinova: University Clinic Hospital, Institute of Pathophysiology and Nuclear Medicine, Ss. Cyril and Methodius University of Skopje, 1000 Skopje, Republic of North Macedonia
- Magdalena Bogdanovska Todorovska, Gordana Petrusevska: University Clinic Hospital, Department of Pathology, Ss. Cyril and Methodius University of Skopje, 1000 Skopje, Republic of North Macedonia
- Wawrzyniec Lawrence Dobrucki: Department of Bioengineering and Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, Urbana, IL 61820, USA
- Shuming Nie: Departments of Electrical and Computer Engineering, Bioengineering, Chemistry, and Materials Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Viktor Gruev: Department of Electrical and Computer Engineering and Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, Urbana, IL 61820, USA
4. Steffen L, Reichard D, Weinland J, Kaiser J, Roennau A, Dillmann R. Neuromorphic Stereo Vision: A Survey of Bio-Inspired Sensors and Algorithms. Front Neurorobot 2019; 13:28. [PMID: 31191287] [PMCID: PMC6546825] [DOI: 10.3389/fnbot.2019.00028] Open access.
Abstract
Any visual sensor, whether artificial or biological, maps the 3D world onto a 2D representation. The missing dimension is depth, and most species use stereo vision to recover it. Stereo vision implies multiple perspectives and matching: it obtains depth from a pair of images. Stereo algorithms are also used successfully in robotics. Although biological systems seem to compute disparities effortlessly, artificial methods suffer from high energy demands and latency. The crucial part is the correspondence problem: finding the matching points between two images. The development of event-based cameras, inspired by the retina, enables the exploitation of an additional physical constraint: time. Owing to their asynchronous mode of operation, which preserves the precise timing of spikes, Spiking Neural Networks can take advantage of this constraint. In this work, we investigate sensors and algorithms for event-based stereo vision leading to more biologically plausible robots, focusing mainly on binocular stereo vision.
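The time constraint mentioned in this abstract can be illustrated with a toy matcher: pair left- and right-camera events that share a row (the epipolar line, assuming rectified cameras) and polarity, and are nearly coincident in time. Function and parameter names are assumptions for illustration; real event-based stereo methods are considerably more robust.

```python
def match_events(left, right, dt_max=1e-3):
    """Toy temporal stereo matcher for (x, y, t, polarity) event tuples.

    For each left event, pick the right event on the same row with the same
    polarity whose timestamp is closest, within dt_max seconds; report the
    horizontal disparity (x_left - x_right), which is inversely
    proportional to depth for rectified cameras."""
    matches = []
    for (xl, yl, tl, pl) in left:
        best_x, best_dt = None, dt_max
        for (xr, yr, tr, pr) in right:
            if yr == yl and pr == pl and abs(tr - tl) <= best_dt:
                best_x, best_dt = xr, abs(tr - tl)
        if best_x is not None:
            matches.append((xl, yl, xl - best_x))  # (x, y, disparity)
    return matches
```

Because each pixel fires only when its intensity changes, two cameras observing the same moving edge emit nearly simultaneous events, which is what makes this temporal pruning of the correspondence search effective.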
Affiliation(s)
- Lea Steffen, Daniel Reichard, Jakob Weinland, Jacques Kaiser, Arne Roennau: FZI Research Center for Information Technology, Karlsruhe, Germany
- Rüdiger Dillmann: FZI Research Center for Information Technology, Karlsruhe, Germany; Humanoids and Intelligence Systems Lab, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
5. Zhu Q, Fu Y, Liu Z. A bio-inspired model for bidirectional polarisation detection. Bioinspir Biomim 2018; 13:066002. [PMID: 30156563] [DOI: 10.1088/1748-3190/aadd64]
Abstract
This study investigated a novel polarisation detection model based on the microstructure of the rhabdom in mantis shrimp eyes, in which a single unit can detect two orthogonal polarisation directions. The bionic model incorporates multi-layered orthogonal Si wire grids, and the finite-difference time-domain method was used to simulate light absorption. A single-layer Si wire grid was simulated to study the effects of thickness and duty cycle on extinction ratios, and a multi-layer orthogonal wire grid was simulated to study the effect of the distance between adjacent layers. The simulations revealed that the bionic model can achieve orthogonal polarisation detection. Additionally, for 600 coupled layers, the extinction ratios in both directions were greater than 60, and light absorption in the absorptive directions exceeded 96%.
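For readers unfamiliar with the figure of merit reported here, a linear polarizer's idealized behaviour reduces to Malus's law, and its quality is summarized by the extinction ratio (pass-direction transmission over block-direction transmission). The helper functions below, and any sample numbers used with them, are illustrative textbook formulas, not values or code from the study.

```python
import math

def malus_intensity(i0: float, theta_deg: float) -> float:
    """Malus's law: intensity transmitted by an ideal linear polarizer whose
    axis is rotated theta degrees away from the light's polarisation plane."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

def extinction_ratio(t_pass: float, t_block: float) -> float:
    """Extinction ratio of a polarizer: pass-direction transmission divided
    by block-direction transmission (higher means better discrimination)."""
    return t_pass / t_block
```

For example, a grid transmitting 96% in the pass direction and 1.5% in the blocking direction has an extinction ratio of 64, comfortably above the threshold of 60 cited in the abstract.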
Affiliation(s)
- Qifan Zhu: School of Opto-Engineering, Changchun University of Science and Technology, Changchun 130022, People's Republic of China; Key Laboratory of Opto-electronic Measurement and Optical Information Transmission Technology, Changchun University of Science and Technology, Changchun 130022, People's Republic of China
7. Serrano-Gotarredona T, Linares-Barranco B. Poker-DVS and MNIST-DVS. Their History, How They Were Made, and Other Details. Front Neurosci 2015; 9:481. [PMID: 26733794] [PMCID: PMC4686704] [DOI: 10.3389/fnins.2015.00481] Open access.
Abstract
This article reports on two databases for event-driven object recognition using a Dynamic Vision Sensor (DVS). The first, which we call Poker-DVS and is being released together with this article, was obtained by browsing specially made poker card decks in front of a DVS camera for 2–4 s. Each card appeared on the screen for about 20–30 ms. The poker pips were tracked and isolated off-line to constitute the 131-recording Poker-DVS database. The second database, which we call MNIST-DVS and which was released in December 2013, consists of a set of 30,000 DVS camera recordings obtained by displaying 10,000 moving symbols from the standard MNIST 70,000-picture database on an LCD monitor for about 2–3 s each. Each of the 10,000 symbols was displayed at three different scales, so that event-driven object recognition algorithms could easily be tested for different object sizes. This article tells the story behind both databases, covering, among other aspects, details of how they work and the reasons for their creation. We provide not only the databases with corresponding scripts, but also the scripts and data used to generate the figures shown in this article (as Supplementary Material).
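DVS recordings such as these are streams of (x, y, t, polarity) events, and a common first step when inspecting them is to accumulate a short slice into a signed frame. The sketch below assumes the 128x128 resolution of the sensor used for these datasets and a simple tuple event format; it does not reproduce the actual file format of the released databases.

```python
import numpy as np

def events_to_frame(events, width=128, height=128):
    """Accumulate (x, y, t, polarity) events into a signed 2D histogram,
    a simple way to visualize a slice of a DVS recording. Polarity is
    +1 (brightness increase) or -1 (brightness decrease)."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, _t, p in events:
        frame[y, x] += p   # ON events brighten the pixel, OFF events darken it
    return frame
```

Summing over longer or shorter time slices trades spatial completeness against motion blur, which is why event-driven recognition algorithms usually operate on the raw stream instead.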
Affiliation(s)
- Bernabé Linares-Barranco: Instituto de Microelectrónica de Sevilla (IMSE-CNM), CSIC and Universidad de Sevilla, Sevilla, Spain