1
Du Z, Gupta M, Xu F, Zhang K, Zhang J, Zhou Y, Liu Y, Wang Z, Wrachtrup J, Wong N, Li C, Chu Z. Widefield Diamond Quantum Sensing with Neuromorphic Vision Sensors. Adv Sci (Weinh) 2024; 11:e2304355. [PMID: 37939304 PMCID: PMC10787069 DOI: 10.1002/advs.202304355]
Abstract
Despite increasing interest in developing ultrasensitive widefield diamond magnetometry for various applications, achieving high temporal resolution and sensitivity simultaneously remains a key challenge. This is largely due to the transfer and processing of the massive amounts of data generated by frame-based sensors when capturing the widefield fluorescence intensity of spin defects in diamond. In this study, a neuromorphic vision sensor is adopted to encode the changes of fluorescence intensity into spikes during optically detected magnetic resonance (ODMR) measurements, closely resembling the operation of the human vision system. This leads to highly compressed data volumes and reduced latency, as well as a vast dynamic range, high temporal resolution, and an exceptional signal-to-background ratio. After a thorough theoretical evaluation, an experiment with an off-the-shelf event camera demonstrated a 13× improvement in temporal resolution, with precision in detecting ODMR resonance frequencies comparable to the state-of-the-art, highly specialized frame-based approach. This technology is successfully deployed to monitor the dynamically modulated laser heating of gold nanoparticles coated on a diamond surface, a recognizably difficult task for existing approaches. The current development provides new insights for high-precision, low-latency widefield quantum sensing, with possibilities for integration with emerging memory devices to realize more intelligent quantum sensors.
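The event-based ODMR readout described in this abstract can be sketched numerically: a DVS-style pixel emits ON/OFF spikes whenever the log fluorescence crosses a contrast threshold, so sweeping the microwave frequency across a resonance turns the fluorescence dip into two clusters of opposite-polarity events. This is a minimal illustration assuming a Lorentzian dip; all function names and parameter values below are hypothetical, not the authors' implementation.

```python
import numpy as np

def odmr_contrast(f, f0=2.87e9, width=6e6, contrast=0.02):
    """Lorentzian fluorescence dip around an assumed NV resonance f0."""
    return 1.0 - contrast / (1.0 + ((f - f0) / (width / 2)) ** 2)

def event_encode(intensity, threshold=1e-4):
    """Emit (index, polarity) 'events' whenever log intensity moves more
    than `threshold` from a per-pixel reference level, mimicking a DVS pixel."""
    events = []
    ref = np.log(intensity[0])
    for i in range(1, len(intensity)):
        cur = np.log(intensity[i])
        while cur - ref > threshold:   # brightening -> ON events
            ref += threshold
            events.append((i, +1))
        while ref - cur > threshold:   # dimming -> OFF events
            ref -= threshold
            events.append((i, -1))
    return events

# Sweep the microwave frequency across the resonance and encode the
# resulting fluorescence trace into events.
freqs = np.linspace(2.86e9, 2.88e9, 2001)
trace = odmr_contrast(freqs)
events = event_encode(trace)

# OFF events cluster on the falling edge of the dip, ON events on the
# rising edge; the resonance lies between the two clusters.
off_idx = [i for i, p in events if p < 0]
on_idx = [i for i, p in events if p > 0]
f_est = 0.5 * (freqs[max(off_idx)] + freqs[min(on_idx)])
```

Only the threshold crossings are transmitted, rather than full frames per frequency step, which is the data-compression effect the abstract describes.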
Affiliation(s)
- Zhiyuan Du
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- Madhav Gupta
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- Feng Xu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- Kai Zhang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, 518000, China
- Jiahua Zhang
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- Yan Zhou
- School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, 518000, China
- Yiyao Liu
- Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou, 510006, China
- Zhenyu Wang
- Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, School of Physics and Telecommunication Engineering, South China Normal University, Guangzhou, 510006, China
- Frontier Research Institute for Physics, South China Normal University, Guangzhou, 510006, China
- Jörg Wrachtrup
- 3rd Institute of Physics, Research Center SCoPE and IQST, University of Stuttgart, 70569, Stuttgart, Germany
- Max Planck Institute for Solid State Research, 70569, Stuttgart, Germany
- Ngai Wong
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- Can Li
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- Zhiqin Chu
- Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, 999077, P. R. China
- School of Biomedical Sciences, The University of Hong Kong, Hong Kong, 999077, P. R. China
- Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Hong Kong, 999077, P. R. China
2
Lee J, Kim S, Park S, Lee J, Hwang W, Cho SW, Lee K, Kim SM, Seong TY, Park C, Lee S, Yi H. An Artificial Tactile Neuron Enabling Spiking Representation of Stiffness and Disease Diagnosis. Adv Mater 2022; 34:e2201608. [PMID: 35436369 DOI: 10.1002/adma.202201608]
Abstract
Mechanical properties of biological systems provide useful information about the biochemical status of cells and tissues. Here, an artificial tactile neuron enabling spiking representation of stiffness and spiking neural network (SNN)-based learning for disease diagnosis is reported. An artificial spiking tactile neuron, based on an ovonic threshold switch serving as an artificial soma and a piezoresistive sensor serving as an artificial mechanoreceptor, is developed and shown to encode the elastic stiffness of pressed materials into spike-frequency evolution patterns. SNN-based learning on ultrasound elastography images, abstracted by spike-frequency evolution rate, enables the classification of the malignancy status of breast tumors with a recognition accuracy of up to 95.8%. The stiffness-encoding artificial tactile neuron and the learning of spiking-represented stiffness patterns hold great promise for the identification and classification of tumors in disease diagnosis and robot-assisted surgery with low power consumption, low latency, and high accuracy.
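The stiffness-to-spike-frequency encoding described above can be illustrated with a simple leaky integrate-and-fire model standing in for the ovonic-threshold-switch soma: a drive current growing with the stiffness of the pressed material yields a higher firing rate. The model, names, and constants below are hypothetical and do not reproduce the paper's device physics.

```python
def spike_rate(stiffness, v_th=1.0, leak=0.1, dt=1e-3, t_total=1.0, gain=5.0):
    """Leaky integrate-and-fire stand-in for the OTS-based artificial soma.

    The input current is taken to be proportional to the stiffness of
    the pressed material, so stiffer samples charge the membrane faster
    and produce higher spike frequencies."""
    v, spikes = 0.0, 0
    steps = int(t_total / dt)
    for _ in range(steps):
        v += dt * (gain * stiffness - leak * v)  # leaky integration
        if v >= v_th:
            spikes += 1
            v = 0.0  # reset after firing, as a threshold switch would
    return spikes / t_total  # spikes per second

# A stiffer sample should fire noticeably faster than a soft one.
soft, stiff = spike_rate(0.5), spike_rate(2.0)
```

The classifier in the paper then learns from how such rates evolve over time, rather than from raw pressure values.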
Affiliation(s)
- Junseok Lee
- Post-Silicon Semiconductor Institute, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea
- YU-KIST, Yonsei University, Seoul, 03722, Republic of Korea
- Department of Materials Science and Engineering, Yonsei University, Seoul, 03722, Republic of Korea
- Seonjeong Kim
- Center for Neuromorphic Engineering, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea
- Department of Materials Science and Engineering, Korea University, Seoul, 02841, Republic of Korea
- Seongjin Park
- Post-Silicon Semiconductor Institute, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea
- Jaesang Lee
- Center for Neuromorphic Engineering, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea
- Department of Materials Science and Engineering, Seoul National University, Seoul, 08826, Republic of Korea
- Wonseop Hwang
- Post-Silicon Semiconductor Institute, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea
- Seong Won Cho
- Center for Neuromorphic Engineering, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea
- Department of Materials Science and Engineering, Seoul National University, Seoul, 08826, Republic of Korea
- Kyuho Lee
- Department of Materials Science and Engineering, Yonsei University, Seoul, 03722, Republic of Korea
- Sun Mi Kim
- Department of Radiology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, 13620, Republic of Korea
- Tae-Yeon Seong
- Department of Materials Science and Engineering, Korea University, Seoul, 02841, Republic of Korea
- Cheolmin Park
- YU-KIST, Yonsei University, Seoul, 03722, Republic of Korea
- Department of Materials Science and Engineering, Yonsei University, Seoul, 03722, Republic of Korea
- Suyoun Lee
- Center for Neuromorphic Engineering, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea
- Division of Nano & Information Technology, Korea University of Science and Technology, Daejeon, 34316, Republic of Korea
- Hyunjung Yi
- Post-Silicon Semiconductor Institute, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea
- YU-KIST, Yonsei University, Seoul, 03722, Republic of Korea
3
Steffen L, Elfgen M, Ulbrich S, Roennau A, Dillmann R. A Benchmark Environment for Neuromorphic Stereo Vision. Front Robot AI 2021; 8:647634. [PMID: 34095240 PMCID: PMC8170485 DOI: 10.3389/frobt.2021.647634]
Abstract
Without neuromorphic hardware, artificial stereo vision suffers from high resource demands and processing times that impede real-time capability. This is mainly caused by high frame rates, a quality feature for conventional cameras, generating large amounts of redundant data. Neuromorphic visual sensors generate less redundant and more relevant data, solving the issues of over- and undersampling at the same time. However, they require a rethinking of processing, as established techniques in conventional stereo vision do not exploit the potential of their event-based operation principle. Many alternatives have recently been proposed, but they have yet to be evaluated on a common data basis. We propose a benchmark environment offering the methods and tools to compare different algorithms for depth reconstruction from two event-based sensors. To this end, an experimental setup consisting of two event-based sensors and one depth sensor, as well as a framework enabling synchronized, calibrated data recording, is presented. Furthermore, we define metrics enabling a meaningful comparison of the examined algorithms, covering aspects such as performance, precision, and applicability. To evaluate the benchmark, a stereo matching algorithm was implemented as a testing candidate, and multiple experiments with different settings and camera parameters were carried out. This work provides a foundation for a robust and flexible evaluation of the multitude of new techniques for event-based stereo vision, allowing a meaningful comparison.
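A benchmark of this kind hinges on agreed metrics against the depth-sensor reference. A minimal sketch of two common ones, absolute/relative depth error over jointly valid pixels plus a completeness score, might look like the following; the function and field names are assumptions, not the paper's actual framework.

```python
import numpy as np

def depth_metrics(estimated, ground_truth):
    """Compare a reconstructed depth map against a depth-sensor reference.

    Pixels where either map is invalid (NaN or non-positive) are excluded
    from the error statistics; 'completeness' reports the fraction of valid
    reference pixels that the algorithm actually reconstructed."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    gt_valid = np.isfinite(gt) & (gt > 0)
    est_valid = np.isfinite(est) & (est > 0)
    both = gt_valid & est_valid
    abs_err = np.abs(est[both] - gt[both])
    return {
        "mae": float(abs_err.mean()),               # mean absolute error
        "rel_error": float((abs_err / gt[both]).mean()),
        "completeness": float(both.sum() / gt_valid.sum()),
    }

# Example: a 2x2 depth map where one pixel failed to reconstruct (NaN).
est = np.array([[1.0, 2.0], [np.nan, 4.0]])
gt = np.array([[1.0, 2.5], [3.0, 4.0]])
metrics = depth_metrics(est, gt)
```

Reporting error and completeness separately matters because an algorithm can trivially lower its error by reconstructing only the easiest pixels.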
Affiliation(s)
- L. Steffen
- Interactive Diagnosis and Service Systems (IDS), Intelligent Systems and Production Engineering (ISPE), FZI Research Center for Information Technology, Karlsruhe, Germany
4
Abstract
In order to safely navigate and orient in their local surroundings, autonomous systems need to rapidly extract and persistently track visual features from the environment. While there are many algorithms tackling these tasks for traditional frame-based cameras, they have to deal with the fact that conventional cameras sample their environment at a fixed frequency. Most prominently, the same features have to be found in consecutive frames, and corresponding features then need to be matched using elaborate techniques, as any information between the two frames is lost. We introduce a novel method to detect and track line structures in data streams of event-based silicon retinae [also known as dynamic vision sensors (DVS)]. In contrast to conventional cameras, these biologically inspired sensors generate a quasicontinuous stream of vision information analogous to the information stream created by the ganglion cells in mammalian retinae. All pixels of a DVS operate asynchronously, without a periodic sampling rate, and emit a so-called DVS address event as soon as they perceive a luminance change exceeding an adjustable threshold. We use the high temporal resolution achieved by the DVS to track features continuously through time instead of only at fixed points in time. The focus of this work lies on tracking lines in a mostly static environment observed by a moving camera, a typical setting in mobile robotics. Since DVS events are mostly generated at object boundaries and edges, which in man-made environments often form lines, lines were chosen as the feature to track. Our method is based on detecting planes of DVS address events in x-y-t-space and tracing these planes through time. It is robust against noise and runs in real time on a standard computer, making it suitable for low-latency robotics. The efficacy and performance are evaluated on real-world data sets showing artificial structures in an office building, using event data for tracking and frame data from a DAVIS240C sensor for ground-truth estimation.
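The core idea, detecting planes of address events in x-y-t space, can be sketched with an ordinary least-squares plane fit: an edge swept across the sensor by camera motion produces events lying on a plane t = a·x + b·y + c, whose slope encodes the edge's apparent velocity. This is an illustrative sketch only; the robust estimation and plane tracing through time described in the abstract are omitted, and the synthetic event stream and names are hypothetical.

```python
import numpy as np

def fit_event_plane(events):
    """Least-squares fit of the plane t = a*x + b*y + c to DVS events.

    `events` is an iterable of (x, y, t) address events; the returned
    (a, b) coefficients encode the edge's apparent image-plane velocity."""
    xs, ys, ts = (np.asarray(col, dtype=float) for col in zip(*events))
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(A, ts, rcond=None)
    return coeffs  # (a, b, c)

# Synthetic events from a vertical line edge sweeping right at 2 px/ms:
# the pixel in column x fires at t = x / 2 ms, for every row y.
events = [(x, y, x / 2.0) for x in range(10) for y in range(10)]
a, b, c = fit_event_plane(events)
```

Because events arrive asynchronously, the fitted plane can be updated per event, which is what enables tracking between the time points where a frame-based pipeline would sample.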
Affiliation(s)
- Lukas Everding
- Department of Electrical and Computer Engineering, Neuroscientific Systemtheory, Technical University of Munich, Munich, Germany
- Jörg Conradt
- Department of Electrical and Computer Engineering, Neuroscientific Systemtheory, Technical University of Munich, Munich, Germany
5
Vanarse A, Osseiran A, Rassau A. A Review of Current Neuromorphic Approaches for Vision, Auditory, and Olfactory Sensors. Front Neurosci 2016; 10:115. [PMID: 27065784 PMCID: PMC4809886 DOI: 10.3389/fnins.2016.00115]
Abstract
Conventional vision, auditory, and olfactory sensors generate large volumes of redundant data and, as a result, tend to consume excessive power. To address these shortcomings, neuromorphic sensors have been developed. These sensors mimic the neuro-biological architecture of sensory organs using aVLSI (analog Very Large Scale Integration) and generate asynchronous spiking output that represents sensing information in ways similar to neural signals. This allows for much lower power consumption, owing to the ability to extract useful sensory information from sparse captured data. The foundation for research in neuromorphic sensors was laid more than two decades ago, but recent developments in the understanding of biological sensing and in advanced electronics have stimulated research on sophisticated neuromorphic sensors that provide numerous advantages over conventional sensors. In this paper, we review the current state of the art in neuromorphic implementations of vision, auditory, and olfactory sensors and identify key contributions across these fields. Bringing together these key contributions, we suggest future research directions for further development of the neuromorphic sensing field.
Affiliation(s)
- Anup Vanarse
- School of Engineering, Edith Cowan University, Joondalup, WA, Australia
- Adam Osseiran
- School of Engineering, Edith Cowan University, Joondalup, WA, Australia
- Alexander Rassau
- School of Engineering, Edith Cowan University, Joondalup, WA, Australia