1. Kapitany V, Fatima A, Zickus V, Whitelaw J, McGhee E, Insall R, Machesky L, Faccio D. Single-sample image-fusion upsampling of fluorescence lifetime images. Sci Adv 2024;10:eadn0139. PMID: 38781345; PMCID: PMC11114222; DOI: 10.1126/sciadv.adn0139.
Abstract
Fluorescence lifetime imaging microscopy (FLIM) provides detailed information about molecular interactions and biological processes. A major bottleneck for FLIM is image resolution at high acquisition speeds, owing to the engineering and signal-processing limits of time-resolved imaging technology. Here, we present single-sample image-fusion upsampling, a data-fusion approach to computational FLIM super-resolution that combines measurements from a low-resolution time-resolved detector (which measures photon arrival time) and a high-resolution camera (which measures intensity only). To solve this otherwise ill-posed inverse retrieval problem, we introduce statistically informed priors that encode local and global correlations between the two "single-sample" measurements. This bypasses the out-of-distribution hallucination risk of traditional data-driven approaches and delivers enhanced images compared with, for example, standard bilinear interpolation. The general approach laid out by single-sample image-fusion upsampling can be applied to other image super-resolution problems where two different datasets are available.
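The data-fusion idea above can be illustrated with a much simpler stand-in: a joint-bilateral-style guided upsampling, in which the high-resolution intensity image steers interpolation of the low-resolution lifetime map. This is a minimal sketch and not the paper's statistically informed priors; the function name and the parameters `sigma_s` and `sigma_r` are illustrative assumptions.

```python
import numpy as np

def guided_upsample(lifetime_lr, intensity_hr, sigma_s=2.0, sigma_r=0.1):
    """Joint-bilateral-style upsampling: interpolate the low-res lifetime
    map, weighting each low-res neighbour by (a) its spatial distance and
    (b) how similar the guide intensity at its location is to the guide
    intensity at the output pixel."""
    H, W = intensity_hr.shape
    h, w = lifetime_lr.shape
    sy, sx = H / h, W / w                  # upsampling factors
    r = int(2 * sigma_s)                   # neighbourhood radius (low-res px)
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            ci, cj = i / sy, j / sx        # output pixel in low-res coords
            i0, j0 = int(round(ci)), int(round(cj))
            num = den = 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i0 + di, j0 + dj
                    if not (0 <= ii < h and 0 <= jj < w):
                        continue
                    # guide intensity at the low-res sample's location
                    gi = intensity_hr[min(int(ii * sy), H - 1),
                                      min(int(jj * sx), W - 1)]
                    ws = np.exp(-((ii - ci) ** 2 + (jj - cj) ** 2)
                                / (2 * sigma_s ** 2))
                    wr = np.exp(-(intensity_hr[i, j] - gi) ** 2
                                / (2 * sigma_r ** 2))
                    num += ws * wr * lifetime_lr[ii, jj]
                    den += ws * wr
            out[i, j] = (num / den if den > 0
                         else lifetime_lr[min(i0, h - 1), min(j0, w - 1)])
    return out
```

Because the range weight `wr` suppresses neighbours whose guide intensity differs, lifetime edges in the output snap to intensity edges instead of being blurred as in bilinear interpolation.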
Affiliation(s)
- Valentin Kapitany: School of Physics & Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
- Areeba Fatima: School of Physics & Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
- Vytautas Zickus: School of Physics & Astronomy, University of Glasgow, Glasgow G12 8QQ, UK; Department of Laser Technologies, Center for Physical Sciences and Technology, LT-10257 Vilnius, Lithuania
- Ewan McGhee: School of Physics & Astronomy, University of Glasgow, Glasgow G12 8QQ, UK; Cancer Research UK Beatson Institute, Glasgow, UK
- Daniele Faccio: School of Physics & Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
2. Tan C, Kong W, Huang G, Jia S, Liu Q, Han Q, Hou J, Xue R, Yu S, Shu R. Development of a near-infrared single-photon 3D imaging LiDAR based on 64×64 InGaAs/InP array detector and Risley-prism scanner. Opt Express 2024;32:7426-7447. PMID: 38439423; DOI: 10.1364/oe.514159.
Abstract
A near-infrared single-photon lidar system, equipped with a 64×64 detector array and a Risley-prism scanner, has been engineered for daytime long-range, high-resolution 3D imaging. The system's detector, based on Geiger-mode InGaAs/InP avalanche photodiode technology, attains a single-photon detection efficiency of over 15% at the lidar's 1064 nm wavelength. This efficiency, together with a narrow-pulse laser delivering a single-pulse energy of 0.5 mJ, enables 3D imaging at distances of approximately 6 km. The Risley scanner, comprising two counter-rotating wedge prisms, performs scanning measurements across a 6-degree circular field of view. Precision calibration of the scanning angle and the beam's absolute direction, using a precision dual-axis turntable and a collimator, yields 3D imaging with a scanning resolution of 28 arcseconds. In addition, this work develops a spatial-domain local statistical filtering framework designed to separate daytime background noise photons from signal photons, improving imaging performance under varied lighting conditions. The paper showcases the advantages of array-based single-photon lidar image-side scanning in simultaneously achieving high resolution, a wide field of view, and extended detection range.
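The spatial-domain local statistical filtering idea admits a compact sketch: background photons arrive uniformly in time, while signal photons from neighbouring pixels cluster at nearly the same arrival time, so pooling a small spatial neighbourhood and taking the histogram peak suppresses daytime noise. This is an illustrative reconstruction under an assumed data layout (a dict of per-pixel timestamp arrays), not the authors' exact framework.

```python
import numpy as np

def local_statistical_filter(timestamps, n_bins=100, t_max=1.0):
    """For each pixel, pool photon arrival times from its 3x3 spatial
    neighbourhood, histogram them, and take the peak bin as the depth
    estimate.  Background photons are uniform in time, so the pooled
    peak is dominated by the temporally clustered signal photons.
    `timestamps` maps (row, col) -> array of photon arrival times."""
    rows = max(k[0] for k in timestamps) + 1
    cols = max(k[1] for k in timestamps) + 1
    edges = np.linspace(0.0, t_max, n_bins + 1)
    depth = np.full((rows, cols), np.nan)
    for i in range(rows):
        for j in range(cols):
            pool = []
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    pool.extend(timestamps.get((i + di, j + dj), ()))
            if not pool:
                continue
            hist, _ = np.histogram(pool, bins=edges)
            k = int(np.argmax(hist))
            depth[i, j] = 0.5 * (edges[k] + edges[k + 1])  # peak-bin centre
    return depth
```

Pooling trades a little spatial resolution for a much better signal-to-noise ratio in the per-pixel time histogram, which is what makes daytime operation viable.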
3. Qi F, Zhang P. High-resolution multi-spectral snapshot 3D imaging with a SPAD array camera. Opt Express 2023;31:30118-30129. PMID: 37710561; DOI: 10.1364/oe.492581.
Abstract
Mainstream light detection and ranging (LiDAR) systems usually include a mechanical scanner, which enables large-scale, high-resolution, multi-spectral imaging but is difficult to assemble and enlarges the system; mechanical wear on the scanner's moving parts also shortens its lifetime. Here, we propose a high-resolution, scan-less, multi-spectral three-dimensional (3D) imaging system that quadruples the pixel count and achieves multi-spectral imaging in a single snapshot. The system uses a specially designed multiple field-of-view (multi-FOV) optic to separate four-wavelength echoes carrying depth and spectral-reflectance information by predetermined temporal intervals, so that a single pixel of the SPAD array samples four adjacent positions through the four channels' FOVs with subpixel offsets. Positions and reflectivity are thus mapped to wavelengths in different time-bins. Our results show that the system achieves high-resolution multi-spectral 3D imaging in a single exposure without a scanning component. This scheme is the first to realize scan-less, single-exposure, high-resolution, multi-spectral imaging with a SPAD array sensor.
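The channel-to-subpixel mapping can be sketched as a simple demultiplexer: each SPAD pixel's histogram contains four returns at known time-bins, one per wavelength channel, and each channel views the scene with a half-pixel offset, so the four samples interleave into a 2×-upsampled frame. The function and the fixed offset pattern below are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

def demux_multifov(tof_stack, channel_bins):
    """Hypothetical demultiplexer for a multi-FOV scheme: `tof_stack` is
    (h, w, n_bins) per-pixel histogram data; each entry of `channel_bins`
    is the time-bin of one wavelength channel's echo.  Each channel's FOV
    is offset by half a pixel, so the four channel samples interleave
    into a 2x-upsampled image."""
    h, w, _ = tof_stack.shape
    offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]  # assumed subpixel offsets
    hr = np.zeros((2 * h, 2 * w))
    for (oy, ox), b in zip(offsets, channel_bins):
        hr[oy::2, ox::2] = tof_stack[:, :, b]   # interleave channel samples
    return hr
```

The key point is that the temporal axis, which the SPAD resolves anyway, is reused as a multiplexing dimension, so the ×4 gain in pixel count costs no extra exposures.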
4. Mora-Martín G, Scholes S, Ruget A, Henderson R, Leach J, Gyongy I. Video super-resolution for single-photon LIDAR. Opt Express 2023;31:7060-7072. PMID: 36859845; DOI: 10.1364/oe.478308.
Abstract
3D time-of-flight (ToF) image sensors are widely used in applications such as self-driving cars, augmented reality (AR), and robotics. When implemented with single-photon avalanche diodes (SPADs), compact array-format sensors can be built that deliver accurate depth maps over long distances without mechanical scanning. However, array sizes tend to be small, leading to low lateral resolution, which, combined with the low signal-to-background ratio (SBR) encountered under high ambient illumination, can make scenes difficult to interpret. In this paper, we use synthetic depth sequences to train a 3D convolutional neural network (CNN) for denoising and upscaling (×4) depth data. Experimental results on synthetic as well as real ToF data demonstrate the effectiveness of the scheme. With GPU acceleration, frames are processed at >30 frames per second, making the approach suitable for the low-latency imaging required for obstacle avoidance.
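The paper's 3D CNN is not reproduced here, but its input/output contract, denoise a depth video and upscale each frame ×4, can be shown with a classical stand-in: a sliding temporal median (which exploits the same across-frame redundancy the 3D convolutions do) followed by nearest-neighbour upscaling. All names and parameters below are illustrative.

```python
import numpy as np

def denoise_upscale(depth_seq, scale=4, t_win=3):
    """Classical stand-in for a 3D CNN video pipeline: a sliding temporal
    median suppresses photon-noise outliers across frames, then each
    frame is upscaled `scale`x by nearest-neighbour replication."""
    T, h, w = depth_seq.shape
    out = np.empty((T, h * scale, w * scale))
    half = t_win // 2
    for t in range(T):
        lo, hi = max(0, t - half), min(T, t + half + 1)
        med = np.median(depth_seq[lo:hi], axis=0)       # temporal denoising
        out[t] = np.kron(med, np.ones((scale, scale)))  # nearest-neighbour x4
    return out
```

Unlike this baseline, the trained network can hallucinate-free sharpen edges it has learned from synthetic scenes, which is where the super-resolution gain beyond simple replication comes from.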
5. Dilliway C, Dyer O, Mandrou E, Mitchell D, Menon G, Sparks H, Kapitany V, Payne-Dwyer A. Working at the interface of physics and biology: An early career researcher perspective. iScience 2022;25:105615. DOI: 10.1016/j.isci.2022.105615. Open access.
6. Kang Y, Xue R, Wang X, Zhang T, Meng F, Li L, Zhao W. High-resolution depth imaging with a small-scale SPAD array based on the temporal-spatial filter and intensity image guidance. Opt Express 2022;30:33994-34011. PMID: 36242422; DOI: 10.1364/oe.459787.
Abstract
Single-photon avalanche diode (SPAD) arrays currently have small pixel counts, which makes high-resolution 3D imaging difficult with the array alone. We established a CCD-camera-assisted SPAD-array depth imaging system. Using an illumination laser lattice generated by a diffractive optical element (DOE), the low-resolution depth image gathered by the SPAD is registered with the high-resolution intensity image gathered by the CCD. The intensity information then guides the reconstruction of a resolution-enhanced depth image through a proposed method combining total generalized variation (TGV) regularization with a temporal-spatial (T-S) filtering algorithm. Experimental results show that the native depth-image resolution is increased 4 × 4 times and the depth-imaging quality is also improved by the proposed method.
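The intensity-guidance step can be illustrated with the classical guided filter (He et al.), used here as a lightweight stand-in for the paper's TGV-regularized reconstruction: the high-resolution intensity image steers smoothing of the upsampled depth so that depth edges follow intensity edges. The parameter names `r` and `eps` follow the usual guided-filter convention.

```python
import numpy as np

def _box(x, r):
    """Naive box filter of radius r with edge padding."""
    pad = np.pad(x, r, mode='edge')
    out = np.zeros_like(x, dtype=float)
    n = 2 * r + 1
    for di in range(n):
        for dj in range(n):
            out += pad[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out / n**2

def guided_filter(guide, src, r=2, eps=1e-4):
    """He et al.'s guided filter: locally model src as an affine function
    of the guide, a*I + b, then smooth the coefficients.  Edges present in
    the guide survive; noise in src is averaged away."""
    I, p = guide.astype(float), src.astype(float)
    mean_I, mean_p = _box(I, r), _box(p, r)
    cov_Ip = _box(I * p, r) - mean_I * mean_p
    var_I = _box(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)      # eps regularises flat regions
    b = mean_p - a * mean_I
    return _box(a, r) * I + _box(b, r)
```

In a guided-upsampling pipeline, `src` would be the bilinearly upsampled SPAD depth map and `guide` the registered CCD intensity image.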
7. Zang Z, Xiao D, Wang Q, Li Z, Xie W, Chen Y, Li DDU. Fast Analysis of Time-Domain Fluorescence Lifetime Imaging via Extreme Learning Machine. Sensors 2022;22:3758. PMID: 35632167; PMCID: PMC9146214; DOI: 10.3390/s22103758.
Abstract
We present a fast and accurate analytical method for fluorescence lifetime imaging microscopy (FLIM) using the extreme learning machine (ELM). We evaluated ELM and existing algorithms with extensive metrics. First, we compared the algorithms on synthetic datasets; the results indicate that ELM obtains higher fidelity, even in low-photon conditions. We then used ELM to retrieve lifetime components from human prostate cancer cells loaded with gold nanosensors, showing that ELM also outperforms iterative fitting and non-fitting algorithms. Compared with a computationally efficient neural network, ELM achieves comparable accuracy with less training and inference time. Because ELM involves no back-propagation during training, its training speed is much higher than that of existing neural network approaches. The proposed strategy is promising for edge computing with online training.
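The core of an ELM is small enough to sketch: a fixed random hidden layer followed by a linear readout solved in one least-squares step, so there is no back-propagation at all. Below is a minimal version trained on synthetic mono-exponential decays; the layer sizes and the synthetic-data generator are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_decays(taus, n_bins=32, t_max=8.0):
    """Noise-free mono-exponential decay histograms, area-normalised."""
    t = np.linspace(0.0, t_max, n_bins)
    h = np.exp(-t[None, :] / taus[:, None])
    return h / h.sum(axis=1, keepdims=True)

class ELM:
    """Extreme learning machine: a fixed random hidden layer plus a
    linear readout solved by least squares -- no back-propagation."""
    def __init__(self, n_in, n_hidden=64):
        self.W = rng.normal(size=(n_in, n_hidden))  # random, never trained
        self.b = rng.normal(size=n_hidden)
        self.beta = None                            # readout weights

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        # "Training" is a single linear solve for the readout weights.
        self.beta, *_ = np.linalg.lstsq(self._hidden(X), y, rcond=None)

    def predict(self, X):
        return self._hidden(X) @ self.beta
```

Since fitting is one `lstsq` call, retraining on new data is cheap, which is the property that makes ELM attractive for the online-training edge scenario the abstract mentions.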
Affiliation(s)
- Zhenya Zang: Department of Biomedical Engineering, University of Strathclyde, Glasgow G4 0RE, UK
- Dong Xiao: Department of Biomedical Engineering, University of Strathclyde, Glasgow G4 0RE, UK
- Quan Wang: Department of Biomedical Engineering, University of Strathclyde, Glasgow G4 0RE, UK
- Zinuo Li: Department of Physics, University of Strathclyde, Glasgow G4 0NG, UK
- Wujun Xie: Department of Biomedical Engineering, University of Strathclyde, Glasgow G4 0RE, UK
- Yu Chen: Department of Physics, University of Strathclyde, Glasgow G4 0NG, UK
- David Day Uei Li (corresponding author): Department of Biomedical Engineering, University of Strathclyde, Glasgow G4 0RE, UK
8. Ruget A, McLaughlin S, Henderson RK, Gyongy I, Halimi A, Leach J. Robust super-resolution depth imaging via a multi-feature fusion deep network. Opt Express 2021;29:11917-11937. PMID: 33984963; DOI: 10.1364/oe.415563.
Abstract
The number of applications that use depth imaging is increasing rapidly, e.g., autonomous vehicles and auto-focus assist on smartphone cameras. Light detection and ranging (LIDAR) via single-photon avalanche diode (SPAD) arrays is an emerging technology that enables the acquisition of depth images at high frame rates. However, the spatial resolution of this technology is typically low compared with the intensity images recorded by conventional cameras. To increase the native resolution of depth images from a SPAD camera, we develop a deep network built to exploit the multiple features that can be extracted from the camera's histogram data. The network is designed for a SPAD camera operating in a dual mode, capturing alternate low-resolution depth and high-resolution intensity images at high frame rates, so the system requires no additional sensor to provide intensity images. The network then uses the intensity images and multiple features extracted from down-sampled histograms to guide the up-sampling of the depth. Our network provides significant resolution enhancement and denoising across a wide range of signal-to-noise ratios and photon levels. Additionally, we show that the network can be applied to other types of SPAD data, demonstrating the generality of the algorithm.
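The "multiple features from histogram data" idea is easy to make concrete: besides the peak-bin depth estimate, a SPAD histogram also yields an intensity-like total count and a crude signal-to-background cue, which is the kind of feature set such a fusion network can consume. The specific feature choices below are illustrative, not the paper's exact inputs.

```python
import numpy as np

def histogram_features(hists, bin_width=1.0):
    """Extract several per-pixel features from SPAD histogram data
    (shape (..., n_bins)): a depth estimate from the peak bin, an
    intensity-like total photon count, and a crude signal-to-background
    cue comparing peak photons with all remaining photons."""
    counts = hists.sum(axis=-1)                  # intensity-like feature
    peak = hists.max(axis=-1)                    # photons in the peak bin
    depth = hists.argmax(axis=-1) * bin_width    # depth from peak position
    sbr = peak / np.maximum(counts - peak, 1.0)  # signal vs. everything else
    return depth, counts, sbr
```

Feeding such complementary features alongside the high-resolution intensity frames gives the up-sampling network cues about which depth estimates to trust, which is what makes the scheme robust across SBR levels.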