1. Sultan T, Reza SA, Velten A. Towards a more accurate light transport model for non-line-of-sight imaging. Opt Express 2024;32:7731-7761. PMID: 38439448. DOI: 10.1364/oe.508034.
Abstract
Non-line-of-sight (NLOS) imaging systems measure an optical signal at a diffuse surface. A forward model encodes the physics of these measurements mathematically and can be inverted to reconstruct the hidden scene. Some existing NLOS imaging techniques rely on illuminating the diffuse surface and measuring the photon time of flight (ToF) of multi-bounce light paths. Others depend on measuring high-frequency variations caused by shadows cast by occluders in the hidden scene. While forward models for ToF-NLOS and Shadow-NLOS have been developed separately, there has been limited work on unifying the two imaging modalities. Dove et al. introduced a unified mathematical framework capable of modeling both techniques [Opt. Express 27, 18016 (2019), doi:10.1364/OE.27.018016]. The authors used this general forward model, known as the two-frequency spatial Wigner distribution (TFSWD), to discuss the implications for reconstruction resolution of combining the two modalities, but only when the occluder geometry is known a priori. In this work, we develop a graphical representation of the TFSWD forward model and apply it to novel experimental setups with potential applications in NLOS imaging. Furthermore, we use this unified framework to explore the potential of combining the two imaging modalities when the occluder geometry is not known in advance.
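For intuition about how ToF-NLOS forward models of this kind are typically discretized, here is a minimal sketch under simplifying assumptions (a classical transient model on a 1-D relay wall with hypothetical sizes and units, not the paper's TFSWD formulation):

```python
import numpy as np

# Minimal confocal ToF-NLOS forward model (illustrative).  A hidden reflector
# at distance r from a relay-wall sample contributes intensity ~ 1/r^4 at the
# round-trip time bin t = 2r/c (wall -> object -> wall).
c = 1.0                                   # speed of light in scene units
wall = np.linspace(-1, 1, 16)             # relay-wall sample positions (1-D)
voxels = np.linspace(-1, 1, 32)           # hidden-plane voxel x-positions
depth = 1.0                               # stand-off depth of the hidden plane
t0, t1, nt = 0.1, 6.0, 64                 # time-bin range and count
t_bins = np.linspace(t0, t1, nt)
dt = t_bins[1] - t_bins[0]

# Light-transport matrix A: rows index (wall sample, time bin), cols voxels.
A = np.zeros((wall.size * nt, voxels.size))
for i, w in enumerate(wall):
    r = np.sqrt((voxels - w) ** 2 + depth ** 2)      # wall-to-voxel distance
    k = np.round((2 * r / c - t0) / dt).astype(int)  # round-trip time bin
    np.add.at(A, (i * nt + k, np.arange(voxels.size)), 1.0 / r ** 4)

albedo = np.zeros(voxels.size)
albedo[10] = 1.0                          # a single hidden reflector
transient = A @ albedo                    # simulated transient measurement
recon = np.linalg.lstsq(A, transient, rcond=None)[0]
print(int(np.argmax(recon)))              # locates the reflector at index 10
```

Inverting the same matrix with least squares is the simplest possible reconstruction; real systems add regularization and noise modeling.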

2. Czajkowski R, Murray-Bruce J. Two-edge-resolved three-dimensional non-line-of-sight imaging with an ordinary camera. Nat Commun 2024;15:1162. PMID: 38326381. DOI: 10.1038/s41467-024-45397-7.
Abstract
We introduce an approach to three-dimensional, full-colour, non-line-of-sight imaging with an ordinary camera that relies on a complementary combination of a new measurement acquisition strategy, scene representation model, and tailored reconstruction method. From an ordinary photograph of a matte line-of-sight surface illuminated by the hidden scene, our approach reconstructs a three-dimensional image of the scene hidden behind an occluding structure, exploiting two orthogonal edges of the structure for transverse resolution along azimuth and elevation angles and an information-orthogonal scene representation for accurate range resolution. Prior demonstrations beyond two-dimensional reconstructions used expensive, specialized optical systems to gather information about the hidden scene. Here, we achieve accurate three-dimensional imaging using inexpensive, ubiquitous hardware, without requiring a calibration image. Our system may thus find use in indoor situations like reconnaissance and search-and-rescue.
Affiliations
- Robinson Czajkowski: Department of Computer Science and Engineering, University of South Florida, 4202 E. Fowler Avenue, Tampa, FL 33620, USA
- John Murray-Bruce: Department of Computer Science and Engineering, University of South Florida, 4202 E. Fowler Avenue, Tampa, FL 33620, USA

3. Liu Y, Wornell GW, Freeman WT, Durand F. Imaging privacy threats from an ambient light sensor. Sci Adv 2024;10:eadj3608. PMID: 38198551. PMCID: PMC10780887. DOI: 10.1126/sciadv.adj3608.
Abstract
Embedded sensors in smart devices pose privacy risks, often unintentionally leaking user information. We investigate how combining an ambient light sensor with a device display can capture an image of touch interaction without a camera. By displaying a known video sequence, we use the light sensor to capture variations in reflected light intensity partially blocked by the touching hand, formulating an inverse problem similar to single-pixel imaging. Because of the sensor's heavy quantization and low sensitivity, we propose an inversion algorithm combining an ℓp-norm dequantizer with a deep denoiser as a natural-image prior to reconstruct images from the screen's perspective. We demonstrate touch interactions and eavesdropping of hand gestures on an off-the-shelf Android tablet. Despite limitations in resolution and speed, we aim to raise awareness of potential security and privacy threats induced by the combination of passive and active components in smart devices, and to promote the development of ways to mitigate them.
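The single-pixel-imaging analogy in the abstract can be checked in a few lines. This is illustrative only: the pattern matrix, quantization step, and Tikhonov inversion below are assumptions for the sketch, whereas the paper's actual pipeline uses an ℓp-norm dequantizer plus a learned denoiser:

```python
import numpy as np

# Single-pixel imaging sketch: each displayed pattern p_k yields one scalar
# sensor reading y_k = <p_k, x>, and the sensor quantizes coarsely.  We invert
# the quantized readings with simple Tikhonov regularization.
rng = np.random.default_rng(0)
n = 16 * 16                          # unknown image, 16x16 pixels, flattened
x = np.zeros(n)
x[40:60] = 1.0                       # toy "hand shadow" scene
P = rng.integers(0, 2, size=(4 * n, n)).astype(float)  # display patterns
y = P @ x                            # ideal light-sensor readings
y_q = np.round(y / 4) * 4            # coarse quantization (step of 4 counts)

lam = 1e-1                           # Tikhonov regularization weight
x_hat = np.linalg.solve(P.T @ P + lam * np.eye(n), P.T @ y_q)
print(float(np.abs(x_hat - x).mean()))   # mean reconstruction error
```

Even with 2-bit-scale quantization error per reading, the 4x oversampling averages it out; the paper's dequantizer and denoiser play this role far more effectively at realistic sensitivity.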
Affiliations
- Yang Liu: Computer Science and Artificial Intelligence Laboratory; Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Gregory W. Wornell: Computer Science and Artificial Intelligence Laboratory; Department of Electrical Engineering and Computer Science; Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- William T. Freeman: Computer Science and Artificial Intelligence Laboratory; Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Frédo Durand: Computer Science and Artificial Intelligence Laboratory; Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA

4. Seidel S, Rueda-Chacón H, Cusini I, Villa F, Zappa F, Yu C, Goyal VK. Non-line-of-sight snapshots and background mapping with an active corner camera. Nat Commun 2023;14:3677. PMID: 37344498. DOI: 10.1038/s41467-023-39327-2.
Abstract
The ability to form reconstructions beyond line-of-sight view could be transformative in a variety of fields, including search and rescue, autonomous vehicle navigation, and reconnaissance. Most existing active non-line-of-sight (NLOS) imaging methods use data collection steps in which a pulsed laser is directed at several points on a relay surface, one at a time. The prevailing approaches include raster scanning of a rectangular grid on a vertical wall opposite the volume of interest to generate a collection of confocal measurements. These and a recent method that uses a horizontal relay surface are inherently limited by the need for laser scanning. Methods that avoid laser scanning to operate in a snapshot mode are limited to treating the hidden scene of interest as one or two point targets. In this work, based on more complete optical response modeling yet still without multiple illumination positions, we demonstrate accurate reconstructions of foreground objects while also introducing the capability of mapping the stationary scenery behind moving objects. The ability to count, localize, and characterize the sizes of hidden objects, combined with mapping of the stationary hidden scene, could greatly improve indoor situational awareness in a variety of applications.
Affiliations
- Sheila Seidel: Electrical and Computer Engineering, Boston University, 8 St. Mary's Street, Boston, MA 02215, USA; Charles Stark Draper Laboratory, 555 Technology Square, Cambridge, MA 02139, USA
- Hoover Rueda-Chacón: Electrical and Computer Engineering, Boston University, 8 St. Mary's Street, Boston, MA 02215, USA; Computer Science, Universidad Industrial de Santander, Carrera 29 Calle 7, Bucaramanga, Santander 680002, Colombia
- Iris Cusini: Dip. Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo Da Vinci 32, Milano I-20133, Italy
- Federica Villa: Dip. Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo Da Vinci 32, Milano I-20133, Italy
- Franco Zappa: Dip. Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo Da Vinci 32, Milano I-20133, Italy
- Christopher Yu: Charles Stark Draper Laboratory, 555 Technology Square, Cambridge, MA 02139, USA
- Vivek K. Goyal: Electrical and Computer Engineering, Boston University, 8 St. Mary's Street, Boston, MA 02215, USA

5. Sheinin M, Schechner YY, Kutulakos KN. Computational imaging on the electric grid. IEEE Trans Pattern Anal Mach Intell 2022;44:8728-8739. PMID: 30843801. DOI: 10.1109/tpami.2019.2903035.
Abstract
Night beats with alternating current (AC) illumination. By passively sensing this beat, we reveal new scene information which includes: the type of bulbs in the scene, the phases of the electric grid up to city scale, and the light transport matrix. This information yields unmixing of reflections and semi-reflections, nocturnal high dynamic range, and scene rendering with bulbs not observed during acquisition. The latter is facilitated by a dataset of bulb response functions for a range of sources, which we collected and provide. To do all this, we built a novel coded-exposure high-dynamic-range imaging technique, specifically designed to operate on the grid's AC lighting.

6. Rego JD, Chen H, Li S, Gu J, Jayasuriya S. Deep camera obscura: an image restoration pipeline for pinhole photography. Opt Express 2022;30:27214-27235. PMID: 36236897. DOI: 10.1364/oe.460636.
Abstract
Modern machine learning has enhanced image quality for consumer and mobile photography through low-light denoising, high-dynamic-range (HDR) imaging, and improved demosaicing, among other applications. While most of these advances have been made for conventional lens-based cameras, an emerging body of research addresses photography with lensless cameras that use thin optics such as amplitude or phase masks, diffraction gratings, or diffusion layers. These lensless cameras suit size- and cost-constrained applications, such as tiny robotics and microscopy, that prohibit a large lens. However, the earliest and simplest camera design, the camera obscura or pinhole camera, has been relatively overlooked in machine learning pipelines, with minimal research on enhancing pinhole camera images for everyday photography. In this paper, we develop an image restoration pipeline for the pinhole system that enhances pinhole image quality through joint denoising and deblurring. Our pipeline integrates optics-based filtering and reblur losses for reconstructing high-resolution still images (2600 × 1952) as well as temporal consistency for video reconstruction, enabling practical exposure times (30 FPS) for high-resolution video (1920 × 1080). We demonstrate 2D image quality on real pinhole images that is on par with or slightly better than other lensless cameras. This work opens up the potential for pinhole cameras to be used for photography in size-limited devices such as smartphones.
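For intuition about the deblurring half of such a pipeline, here is a classical Wiener-filter baseline for pinhole blur. Everything here is an assumption for the sketch (disk PSF, circular convolution, the `nsr` constant); the paper's method is a learned joint denoise/deblur network, not this filter:

```python
import numpy as np

# A pinhole of finite diameter blurs the ideal image with roughly a disk PSF;
# Wiener filtering inverts the blur while damping noise-dominated frequencies.
def disk_psf(size, radius):
    yy, xx = np.mgrid[:size, :size] - size // 2
    psf = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)
    return psf / psf.sum()

rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                      # toy high-contrast scene
H = np.fft.fft2(np.fft.ifftshift(disk_psf(64, 3)))   # blur transfer function

blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
noisy = blurred + 0.01 * rng.standard_normal(img.shape)

nsr = 1e-2                                   # assumed noise-to-signal ratio
W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener deconvolution filter
restored = np.real(np.fft.ifft2(np.fft.fft2(noisy) * W))

print(float(restored[32, 32]), float(restored[5, 5]))  # near 1, near 0
```

The `nsr` constant trades sharpness against noise amplification at frequencies where the disk PSF's transfer function is small; the learned pipeline in the paper adapts this trade-off to image content.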

7. Rapp J, Saunders C, Tachella J, Murray-Bruce J, Altmann Y, Tourneret JY, McLaughlin S, Dawson RMA, Wong FNC, Goyal VK. Seeing around corners with edge-resolved transient imaging. Nat Commun 2020;11:5929. PMID: 33230217. PMCID: PMC7683558. DOI: 10.1038/s41467-020-19727-4.
Abstract
Non-line-of-sight (NLOS) imaging is a rapidly growing field seeking to form images of objects outside the field of view, with potential applications in autonomous navigation, reconnaissance, and even medical imaging. The critical challenge of NLOS imaging is that diffuse reflections scatter light in all directions, resulting in weak signals and a loss of directional information. To address this problem, we propose a method for seeing around corners that derives angular resolution from vertical edges and longitudinal resolution from the temporal response to a pulsed light source. We introduce an acquisition strategy, scene response model, and reconstruction algorithm that enable the formation of 2.5-dimensional representations (a plan view plus heights) and a 180° field of view for large-scale scenes. Our experiments demonstrate accurate reconstructions of hidden rooms up to 3 meters in each dimension despite a small scan aperture (1.5-centimeter radius) and only 45 measurement locations.

Non-line-of-sight imaging is typically limited by the loss of directional information caused by diffuse reflections scattering light in all directions. Here, the authors see around corners by using vertical edges and the temporal response to pulsed light to obtain angular and longitudinal resolution, respectively.
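The edge-as-aperture idea behind the angular resolution reduces, in its simplest form, to differentiating a penumbra. This is a deliberately stripped-down 1-D sketch (the bin count and delta-source scene are assumptions, and the paper's pulsed-laser timing for range is not modeled):

```python
import numpy as np

# A vertical edge at the corner occludes hidden sources: a floor point at
# azimuth phi around the edge sees only hidden sources at azimuths below phi,
# so floor brightness is a cumulative integral of the hidden angular
# intensity.  Differentiating along phi recovers that intensity.
n_phi = 90
phi = np.linspace(0, np.pi, n_phi)            # floor scan angles at the edge
hidden = np.zeros(n_phi)
hidden[30] = 1.0                              # one hidden source at bin 30

# Forward model: penumbra = cumulative sum of hidden intensity over azimuth.
penumbra = np.cumsum(hidden)

# Inversion: finite difference of the measured penumbra.
recovered = np.diff(penumbra, prepend=0.0)
print(int(np.argmax(recovered)))              # 30
```

In the actual system this angular cue is combined with photon timing so that each azimuthal wedge also gets a range profile, yielding the plan view.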
Affiliations
- Joshua Rapp: Department of Electrical and Computer Engineering, Boston University, 1 Silber Way, Boston, MA 02215, USA; Charles Stark Draper Laboratory, 555 Technology Square, Cambridge, MA 02139, USA
- Charles Saunders: Department of Electrical and Computer Engineering, Boston University, 1 Silber Way, Boston, MA 02215, USA
- Julián Tachella: School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, UK
- John Murray-Bruce: Department of Electrical and Computer Engineering, Boston University, 1 Silber Way, Boston, MA 02215, USA; Department of Computer Science and Engineering, University of South Florida, 4202 E. Fowler Avenue, Tampa, FL 33620, USA
- Yoann Altmann: School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, UK
- Jean-Yves Tourneret: INP/ENSEEIHT-IRIT-TeSA, University of Toulouse, Toulouse Cedex 7, Toulouse 31071, France
- Stephen McLaughlin: School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, UK
- Robin M. A. Dawson: Charles Stark Draper Laboratory, 555 Technology Square, Cambridge, MA 02139, USA
- Franco N. C. Wong: Research Laboratory of Electronics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
- Vivek K. Goyal: Department of Electrical and Computer Engineering, Boston University, 1 Silber Way, Boston, MA 02215, USA

8. Divitt S, Gardner DF, Watnik AT. Imaging around corners in the mid-infrared using speckle correlations. Opt Express 2020;28:11051-11064. PMID: 32403624. DOI: 10.1364/oe.388260.
Abstract
Speckle correlation imaging offers the ability to see objects through diffusive materials and around corners. Imaging self-illuminating thermal objects in non-line-of-sight scenarios is of particular interest. Here, using bispectrum and phase retrieval methods, we demonstrate speckle correlation imaging of mid-infrared objects through diffusers and around corners at resolutions near the diffraction limit. The images agree well with those recorded by conventional cameras with line-of-sight to the same objects.
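The core identity behind speckle-correlation imaging can be checked numerically. This is an illustrative sketch under assumptions (array sizes, an exponential speckle model, circular convolution); the bispectrum and phase-retrieval steps that recover the object itself are omitted:

```python
import numpy as np

# Within the memory effect, a camera image I is the hidden object O convolved
# with an unknown speckle pattern S.  Because the speckle autocorrelation is
# a sharp peak, the autocorrelation of I approximates the autocorrelation of
# O, which phase retrieval can then invert.
rng = np.random.default_rng(2)
n = 128
obj = np.zeros((n, n))
obj[60:68, 55:75] = 1.0                       # hidden object
speckle = rng.exponential(size=(n, n))        # random speckle intensity PSF

def autocorr(a):
    a = a - a.mean()
    f = np.fft.fft2(a)
    ac = np.real(np.fft.ifft2(np.abs(f) ** 2))
    return np.fft.fftshift(ac)                # zero lag at the array center

# Camera image: object convolved with speckle (circular convolution).
img = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(speckle)))

ac_img, ac_obj = autocorr(img), autocorr(obj)
# The two autocorrelations should be strongly correlated.
rho = np.corrcoef(ac_img.ravel(), ac_obj.ravel())[0, 1]
print(float(rho))
```

In practice many speckle frames (or long exposures) are averaged to suppress the single-realization fluctuations visible here, and bispectrum methods additionally recover Fourier phase.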

9.
Abstract
This paper presents a non-line-of-sight technique for estimating the position and temperature of an occluded object from a camera via reflection on a wall. Because warm objects emit far-infrared light in proportion to their temperature, positions and temperatures can be estimated from reflections on a wall. The key idea is that the light paths from a hidden object to the camera depend on the position of that object. The position is recovered from the angular distribution of the specular and diffuse reflection components, and the temperature of the heat source is recovered from the estimated position and the intensity of the reflection. The effectiveness of our method is evaluated in real-world experiments, which show that the position and temperature of a hidden object can be recovered from its reflection on the wall using a conventional thermal camera.

10. Computational periscopy with an ordinary digital camera. Nature 2019;565:472-475. DOI: 10.1038/s41586-018-0868-6.

11. Koppal SJ, Narasimhan SG. Beyond perspective dual photography with illumination masks. IEEE Trans Image Process 2015;24:2083-2097. PMID: 25794389. DOI: 10.1109/tip.2015.2413291.
Abstract
Scene appearance from the point of view of a light source is called a reciprocal or dual view. Because illumination is so diverse, these virtual views may be nonperspective and multiviewpoint in nature. In this paper, we demonstrate the use of occluding masks to recover these dual views, which we term shadow cameras. We first show how to render a single reciprocal scene view by swapping the camera and light source positions. We then extend this technique to multiple views, both by building a virtual shadow-camera array and by exploiting area sources. We also capture nonperspective views such as orthographic, cross-slit, and a pushbroom variant, while introducing novel applications such as converting between camera projections and removing refractive and catadioptric distortions. Finally, since a shadow camera is artificial, we can manipulate any of its intrinsic parameters, such as camera skew, to create perspective distortions. We demonstrate a variety of indoor and outdoor results and show a rendering application for capturing the light field of a light source.
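The reciprocity at the heart of dual photography is just a matrix transpose. A toy sketch (the 6×4 transport matrix and index choices are arbitrary; shadow-camera methods estimate the relevant entries of T with occluding masks rather than a projector):

```python
import numpy as np

# If T maps the source's light vector p to the camera's pixel vector c
# (c = T p), then by Helmholtz reciprocity the "dual" image seen from the
# source's viewpoint under a virtual light q at the camera is T^T q.
rng = np.random.default_rng(3)
T = rng.random((6, 4))               # toy transport: 4 source elements -> 6 pixels

p = np.array([1.0, 0.0, 0.0, 0.0])   # light only the first source element
c = T @ p                            # primal image: column 0 of T
q = np.zeros(6)
q[2] = 1.0                           # virtual light at camera pixel 2
dual = T.T @ q                       # dual image: row 2 of T

assert np.allclose(c, T[:, 0]) and np.allclose(dual, T[2, :])
print(dual.shape)                    # (4,): one value per source element
```

Measuring enough of T (here with shadows from occluding masks) is the whole acquisition problem; once T is known, any virtual view or relighting is a matrix product.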