1. Baek WJ, Park J, Gao L. Depth-resolved imaging through scattering media using time-gated light field tomography. Optics Letters 2024;49:6581-6584. [PMID: 39546724] [DOI: 10.1364/ol.541549]
Abstract
We present what is, to the best of our knowledge, a novel approach to overcoming the limitations imposed by scattering media using time-gated light field tomography. By integrating the time-gating technique with light field imaging, we demonstrate the ability to capture and reconstruct images at different depths through highly scattering environments. Our method exploits the temporal characteristics of light propagation to selectively isolate ballistic photons, enabling enhanced depth resolution and improved imaging quality. Through comprehensive experimental validation and analysis, we showcase the effectiveness of our technique in resolving depth information with high fidelity, even in the presence of significant scattering. The resulting system can simultaneously acquire multi-angled projections of the object without requiring prior knowledge of the medium or the target. This advancement holds promise for a wide range of applications, including non-invasive medical imaging, environmental monitoring, and industrial inspection, where imaging through scattering media is critical for accurate and reliable analysis.
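The time-gating principle this abstract describes can be illustrated with a minimal numeric sketch: ballistic photons arrive earliest, so summing each pixel's time-of-flight histogram over an early temporal window rejects late, multiply scattered light. This is a generic illustration under assumed array shapes and gate parameters, not the authors' implementation.

```python
import numpy as np

def time_gate(tof, t_axis, t0, gate_width):
    """Sum photon counts inside an early temporal window.

    tof        : (H, W, T) per-pixel time-of-flight histograms
    t_axis     : (T,) bin centres in seconds
    t0         : expected ballistic arrival time (s)
    gate_width : width of the accepted window (s)
    """
    mask = (t_axis >= t0) & (t_axis < t0 + gate_width)
    return tof[:, :, mask].sum(axis=2)  # gated intensity image, shape (H, W)

# Synthetic example: an early ballistic spike on top of a diffuse scattered tail.
rng = np.random.default_rng(0)
t = np.linspace(0, 10e-9, 100)                           # 10 ns window, 100 bins
tof = rng.poisson(0.5, size=(8, 8, 100)).astype(float)   # scattered background
tof[4, 4, 10] += 50.0                                    # ballistic photons near 1 ns
gated = time_gate(tof, t, t0=0.9e-9, gate_width=0.4e-9)
```

Only the pixel carrying the early ballistic spike survives the gate with high counts; the scattered background is suppressed in proportion to the gate width.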
2. Yang Y, Yang K, Zhang A. Influence of Target Surface BRDF on Non-Line-of-Sight Imaging. J Imaging 2024;10:273. [PMID: 39590737] [PMCID: PMC11595747] [DOI: 10.3390/jimaging10110273]
Abstract
The surface material of an object is a key factor affecting non-line-of-sight (NLOS) imaging. In this paper, we introduce the bidirectional reflectance distribution function (BRDF) into NLOS imaging to study how the target surface material influences the quality of NLOS images. First, the BRDF of two surface materials (aluminized insulation material and white paint board) was modeled using deep neural networks and compared with a five-parameter empirical model to validate the method's accuracy. The method was then applied to fit BRDF data for other common materials. Finally, NLOS targets with varying surface materials were simulated and reconstructed using the confocal diffuse tomography algorithm. The reconstructed NLOS images were classified with a convolutional neural network to assess how different surface materials affect imaging quality. The results show that image clarity improves as specular reflection decreases and diffuse reflection increases, with the best results obtained for surfaces exhibiting high diffuse reflection and no specular reflection.
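The abstract does not reproduce the five-parameter empirical model; as a rough sketch of the diffuse-plus-specular decomposition such models capture, a minimal two-term BRDF (a Lambertian term plus a Phong-style specular lobe — my simplification, not the authors' model) could look like:

```python
import numpy as np

def brdf(cos_alpha, kd, ks, n):
    """Toy diffuse + specular BRDF value (sr^-1).

    cos_alpha : cosine of the angle between the outgoing direction and the
                ideal mirror direction
    kd, ks    : diffuse and specular weights
    n         : specular lobe sharpness (large n -> mirror-like)
    """
    diffuse = kd / np.pi                                   # Lambertian term
    specular = ks * (n + 2) / (2 * np.pi) * np.clip(cos_alpha, 0, 1) ** n
    return diffuse + specular

# A diffuse-dominant surface (like white paint board) spreads light evenly,
# while a specular surface (like aluminized film) concentrates it in a lobe.
ca = np.linspace(0, 1, 50)                  # cosine of mirror-offset angle
paint = brdf(ca, kd=0.8, ks=0.1, n=5)       # illustrative parameter values
foil = brdf(ca, kd=0.05, ks=0.9, n=200)
```

Sweeping the mirror-offset angle shows the specular-dominant surface concentrating energy into a narrow lobe, which is the regime the paper associates with degraded NLOS reconstructions.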
Affiliation(s)
- Yufeng Yang
- College of Automation & Information Engineering, Xi’an University of Technology, Xi’an 710048, China; (Y.Y.); (A.Z.)
- Xi’an Key Laboratory of Wireless Optical Communication and Network Research, Xi’an 710048, China
- Nanjing Institute of Multi-Platform Observation Technology, Nanjing 211500, China
- Kailei Yang
- College of Automation & Information Engineering, Xi’an University of Technology, Xi’an 710048, China; (Y.Y.); (A.Z.)
- Ao Zhang
- College of Automation & Information Engineering, Xi’an University of Technology, Xi’an 710048, China; (Y.Y.); (A.Z.)
3. Rosen J, Alford S, Allan B, Anand V, Arnon S, Arockiaraj FG, Art J, Bai B, Balasubramaniam GM, Birnbaum T, Bisht NS, Blinder D, Cao L, Chen Q, Chen Z, Dubey V, Egiazarian K, Ercan M, Forbes A, Gopakumar G, Gao Y, Gigan S, Gocłowski P, Gopinath S, Greenbaum A, Horisaki R, Ierodiaconou D, Juodkazis S, Karmakar T, Katkovnik V, Khonina SN, Kner P, Kravets V, Kumar R, Lai Y, Li C, Li J, Li S, Li Y, Liang J, Manavalan G, Mandal AC, Manisha M, Mann C, Marzejon MJ, Moodley C, Morikawa J, Muniraj I, Narbutis D, Ng SH, Nothlawala F, Oh J, Ozcan A, Park Y, Porfirev AP, Potcoava M, Prabhakar S, Pu J, Rai MR, Rogalski M, Ryu M, Choudhary S, Salla GR, Schelkens P, Şener SF, Shevkunov I, Shimobaba T, Singh RK, Singh RP, Stern A, Sun J, Zhou S, Zuo C, Zurawski Z, Tahara T, Tiwari V, Trusiak M, Vinu RV, Volotovskiy SG, Yılmaz H, De Aguiar HB, Ahluwalia BS, Ahmad A. Roadmap on computational methods in optical imaging and holography [invited]. Applied Physics B: Lasers and Optics 2024;130:166. [PMID: 39220178] [PMCID: PMC11362238] [DOI: 10.1007/s00340-024-08280-3]
Abstract
Computational methods have become cornerstones of optical imaging and holography in recent years. The dependence of optical imaging and holography on computational methods grows significantly every year, to the extent that optical methods and components are being completely and efficiently replaced by low-cost computational methods. This roadmap reviews the current scenario in four major areas, namely incoherent digital holography, quantitative phase imaging, imaging through scattering layers, and super-resolution imaging. In addition to registering the perspectives of the modern-day architects of these research areas, the roadmap also reports some of the latest studies on the topic. Computational codes and pseudocodes are presented in a plug-and-play fashion so that readers can not only read and understand but also practice the latest algorithms with their own data. We believe this roadmap will be a valuable tool for analyzing current trends in computational methods and for predicting and preparing the future of computational methods in optical imaging and holography. Supplementary information: the online version contains supplementary material available at 10.1007/s00340-024-08280-3.
Affiliation(s)
- Joseph Rosen
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Simon Alford
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
- Blake Allan
- Faculty of Science Engineering and Built Environment, Deakin University, Princes Highway, Warrnambool, VIC 3280 Australia
- Vijayakumar Anand
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Optical Sciences Center and ARC Training Centre in Surface Engineering for Advanced Materials (SEAM), School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Hawthorn, Melbourne, VIC 3122 Australia
- Shlomi Arnon
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Francis Gracy Arockiaraj
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Jonathan Art
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
- Bijie Bai
- Electrical and Computer Engineering Department, Bioengineering Department, California NanoSystems Institute, University of California, Los Angeles (UCLA), Los Angeles, CA USA
- Ganesh M. Balasubramaniam
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Tobias Birnbaum
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, 1050 Brussel, Belgium
- Swave BV, Gaston Geenslaan 2, 3001 Leuven, Belgium
- Nandan S. Bisht
- Applied Optics and Spectroscopy Laboratory, Department of Physics, Soban Singh Jeena University Campus Almora, Almora, Uttarakhand 263601 India
- David Blinder
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, 1050 Brussel, Belgium
- IMEC, Kapeldreef 75, 3001 Leuven, Belgium
- Graduate School of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, Chiba Japan
- Liangcai Cao
- Department of Precision Instruments, Tsinghua University, Beijing, 100084 China
- Qian Chen
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Ziyang Chen
- Fujian Provincial Key Laboratory of Light Propagation and Transformation, College of Information Science and Engineering, Huaqiao University, Xiamen, 361021 Fujian China
- Vishesh Dubey
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
- Karen Egiazarian
- Computational Imaging Group, Faculty of Information Technology and Communication Sciences, Tampere University, 33100 Tampere, Finland
- Mert Ercan
- Institute of Materials Science and Nanotechnology, National Nanotechnology Research Center (UNAM), Bilkent University, 06800 Ankara, Turkey
- Department of Physics, Bilkent University, 06800 Ankara, Turkey
- Andrew Forbes
- School of Physics, University of the Witwatersrand, Johannesburg, South Africa
- G. Gopakumar
- Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Amritapuri, Vallikavu, Kerala India
- Yunhui Gao
- Department of Precision Instruments, Tsinghua University, Beijing, 100084 China
- Sylvain Gigan
- Laboratoire Kastler Brossel, Centre National de la Recherche Scientifique (CNRS) UMR 8552, Sorbonne Université, École Normale Supérieure-Paris Sciences et Lettres (PSL) Research University, Collège de France, 24 rue Lhomond, 75005 Paris, France
- Paweł Gocłowski
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
- Alon Greenbaum
- Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695 USA
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695 USA
- Bioinformatics Research Center, North Carolina State University, Raleigh, NC 27695 USA
- Ryoichi Horisaki
- Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656 Japan
- Daniel Ierodiaconou
- Faculty of Science Engineering and Built Environment, Deakin University, Princes Highway, Warrnambool, VIC 3280 Australia
- Saulius Juodkazis
- Optical Sciences Center and ARC Training Centre in Surface Engineering for Advanced Materials (SEAM), School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Hawthorn, Melbourne, VIC 3122 Australia
- World Research Hub Initiative (WRHI), Tokyo Institute of Technology, 2-12-1, Ookayama, Tokyo, 152-8550 Japan
- Tanushree Karmakar
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
- Vladimir Katkovnik
- Computational Imaging Group, Faculty of Information Technology and Communication Sciences, Tampere University, 33100 Tampere, Finland
- Svetlana N. Khonina
- IPSI RAS-Branch of the FSRC “Crystallography and Photonics” RAS, 443001 Samara, Russia
- Samara National Research University, 443086 Samara, Russia
- Peter Kner
- School of Electrical and Computer Engineering, University of Georgia, Athens, GA 30602 USA
- Vladislav Kravets
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Ravi Kumar
- Department of Physics, SRM University – AP, Amaravati, Andhra Pradesh 522502 India
- Yingming Lai
- Laboratory of Applied Computational Imaging, Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, Varennes, QC J3X 1P7 Canada
- Chen Li
- Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695 USA
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695 USA
- Jiaji Li
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
- Shaoheng Li
- School of Electrical and Computer Engineering, University of Georgia, Athens, GA 30602 USA
- Yuzhu Li
- Electrical and Computer Engineering Department, Bioengineering Department, California NanoSystems Institute, University of California, Los Angeles (UCLA), Los Angeles, CA USA
- Jinyang Liang
- Laboratory of Applied Computational Imaging, Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, Varennes, QC J3X 1P7 Canada
- Gokul Manavalan
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Aditya Chandra Mandal
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
- Manisha Manisha
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
- Christopher Mann
- Department of Applied Physics and Materials Science, Northern Arizona University, Flagstaff, AZ 86011 USA
- Center for Materials Interfaces in Research and Development, Northern Arizona University, Flagstaff, AZ 86011 USA
- Marcin J. Marzejon
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., 02-525 Warsaw, Poland
- Chané Moodley
- School of Physics, University of the Witwatersrand, Johannesburg, South Africa
- Junko Morikawa
- World Research Hub Initiative (WRHI), Tokyo Institute of Technology, 2-12-1, Ookayama, Tokyo, 152-8550 Japan
- Inbarasan Muniraj
- LiFE Lab, Department of Electronics and Communication Engineering, Alliance School of Applied Engineering, Alliance University, Bangalore, Karnataka 562106 India
- Donatas Narbutis
- Institute of Theoretical Physics and Astronomy, Faculty of Physics, Vilnius University, Sauletekio 9, 10222 Vilnius, Lithuania
- Soon Hock Ng
- Optical Sciences Center and ARC Training Centre in Surface Engineering for Advanced Materials (SEAM), School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Hawthorn, Melbourne, VIC 3122 Australia
- Fazilah Nothlawala
- School of Physics, University of the Witwatersrand, Johannesburg, South Africa
- Jeonghun Oh
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141 South Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, 34141 South Korea
- Aydogan Ozcan
- Electrical and Computer Engineering Department, Bioengineering Department, California NanoSystems Institute, University of California, Los Angeles (UCLA), Los Angeles, CA USA
- YongKeun Park
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141 South Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, 34141 South Korea
- Tomocube Inc., Daejeon, 34051 South Korea
- Alexey P. Porfirev
- IPSI RAS-Branch of the FSRC “Crystallography and Photonics” RAS, 443001 Samara, Russia
- Mariana Potcoava
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
- Shashi Prabhakar
- Quantum Science and Technology Laboratory, Physical Research Laboratory, Navrangpura, Ahmedabad, 380009 India
- Jixiong Pu
- Fujian Provincial Key Laboratory of Light Propagation and Transformation, College of Information Science and Engineering, Huaqiao University, Xiamen, 361021 Fujian China
- Mani Ratnam Rai
- Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695 USA
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695 USA
- Mikołaj Rogalski
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., 02-525 Warsaw, Poland
- Meguya Ryu
- Research Institute for Material and Chemical Measurement, National Metrology Institute of Japan (AIST), 1-1-1 Umezono, Tsukuba, 305-8563 Japan
- Sakshi Choudhary
- Department of Chemical Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Gangi Reddy Salla
- Department of Physics, SRM University – AP, Amaravati, Andhra Pradesh 522502 India
- Peter Schelkens
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, 1050 Brussel, Belgium
- IMEC, Kapeldreef 75, 3001 Leuven, Belgium
- Sarp Feykun Şener
- Institute of Materials Science and Nanotechnology, National Nanotechnology Research Center (UNAM), Bilkent University, 06800 Ankara, Turkey
- Department of Physics, Bilkent University, 06800 Ankara, Turkey
- Igor Shevkunov
- Computational Imaging Group, Faculty of Information Technology and Communication Sciences, Tampere University, 33100 Tampere, Finland
- Tomoyoshi Shimobaba
- Graduate School of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, Chiba Japan
- Rakesh K. Singh
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
- Ravindra P. Singh
- Quantum Science and Technology Laboratory, Physical Research Laboratory, Navrangpura, Ahmedabad, 380009 India
- Adrian Stern
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Jiasong Sun
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
- Shun Zhou
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
- Chao Zuo
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
- Zack Zurawski
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
- Tatsuki Tahara
- Applied Electromagnetic Research Center, Radio Research Institute, National Institute of Information and Communications Technology (NICT), 4-2-1 Nukuikitamachi, Koganei, Tokyo 184-8795 Japan
- Vipin Tiwari
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Maciej Trusiak
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., 02-525 Warsaw, Poland
- R. V. Vinu
- Fujian Provincial Key Laboratory of Light Propagation and Transformation, College of Information Science and Engineering, Huaqiao University, Xiamen, 361021 Fujian China
- Sergey G. Volotovskiy
- IPSI RAS-Branch of the FSRC “Crystallography and Photonics” RAS, 443001 Samara, Russia
- Hasan Yılmaz
- Institute of Materials Science and Nanotechnology, National Nanotechnology Research Center (UNAM), Bilkent University, 06800 Ankara, Turkey
- Hilton Barbosa De Aguiar
- Laboratoire Kastler Brossel, Centre National de la Recherche Scientifique (CNRS) UMR 8552, Sorbonne Université, École Normale Supérieure-Paris Sciences et Lettres (PSL) Research University, Collège de France, 24 rue Lhomond, 75005 Paris, France
- Balpreet S. Ahluwalia
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
- Azeem Ahmad
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
4. Naich AY, Carrión JR. LiDAR-Based Intensity-Aware Outdoor 3D Object Detection. Sensors (Basel) 2024;24:2942. [PMID: 38733047] [PMCID: PMC11086319] [DOI: 10.3390/s24092942]
Abstract
LiDAR-based 3D object detection and localization are crucial components of autonomous navigation systems, including autonomous vehicles and mobile robots. Most existing LiDAR-based approaches primarily use geometric or structural feature abstractions from LiDAR point clouds. However, these approaches can be susceptible to environmental noise due to adverse weather conditions or the presence of highly scattering media. In this work, we propose an intensity-aware voxel encoder for robust 3D object detection. The proposed voxel encoder generates an intensity histogram that describes the distribution of point intensities within a voxel and uses it to enhance the voxel feature set. We integrate this intensity-aware encoder into an efficient single-stage voxel-based detector. Experimental results on the KITTI dataset show that our method achieves results comparable to the state of the art for car objects in both 3D and bird's-eye-view detection, and superior results for pedestrian and cyclist objects. Furthermore, our model achieves a detection rate of 40.7 FPS at inference time, higher than that of state-of-the-art methods, at a lower computational cost.
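The intensity-histogram idea can be sketched as follows (the shapes, bin count, and intensity range below are my assumptions, not the paper's configuration): for each occupied voxel, bin the LiDAR return intensities of the points it contains and use the normalized histogram as an extra feature alongside the geometric voxel features.

```python
import numpy as np

def intensity_histogram_feature(points, voxel_size, n_bins=8):
    """Per-voxel normalized intensity histograms.

    points : (N, 4) array of x, y, z, intensity, with intensity in [0, 1]
    Returns a dict mapping voxel index (ix, iy, iz) -> (n_bins,) histogram.
    """
    voxel_ids = np.floor(points[:, :3] / voxel_size).astype(int)
    features = {}
    for vid in {tuple(v) for v in voxel_ids}:
        in_voxel = np.all(voxel_ids == vid, axis=1)
        hist, _ = np.histogram(points[in_voxel, 3], bins=n_bins, range=(0.0, 1.0))
        features[vid] = hist / max(hist.sum(), 1)  # normalize to a distribution
    return features

pts = np.array([[0.1, 0.1, 0.1, 0.9],   # two bright returns in one voxel
                [0.2, 0.3, 0.2, 0.8],
                [1.5, 0.1, 0.1, 0.1]])  # one dim return in a neighbouring voxel
feats = intensity_histogram_feature(pts, voxel_size=1.0)
```

In a real detector this dictionary lookup would be replaced by a dense scatter into the voxel feature tensor; the histogram lets the network separate, say, a retroreflective sign from rain clutter with similar geometry.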
Affiliation(s)
- Ammar Yasir Naich
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, UK
- Jesús Requena Carrión
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, UK
5. Yu Q, Cai H, Zhu X, Liu Z, Yin H, Li L. Terahertz bistatic three-dimensional computational imaging of hidden objects through random media. Sci Rep 2024;14:6147. [PMID: 38480807] [PMCID: PMC11636813] [DOI: 10.1038/s41598-024-56535-y]
Abstract
Random media limit the imaging capability of photoelectric detection devices. Current imaging techniques for seeing through random media operate primarily at laser wavelengths, leaving the imaging potential of terahertz waves unexplored. In this study, we present an approach for terahertz bistatic three-dimensional computational imaging (TBTCI) of hidden objects through random media. By deducing the field distribution of a bistatic terahertz time-domain spectroscopy system and proposing an explicit point spread function of the random media, we performed three-dimensional imaging of hidden objects obscured by the random media. The proposed method has promising applications in imaging scenarios with millimeter-wave radar, including non-invasive testing and biological imaging.
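Once an explicit point spread function of the medium is available, each depth slice can in principle be recovered by standard deconvolution. Wiener filtering here is a generic stand-in for that step, not necessarily the authors' reconstruction algorithm, and the Gaussian PSF and image sizes are illustrative:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Recover an image from a blurred measurement given a known PSF.

    blurred : (H, W) measured image
    psf     : (H, W) point spread function, centred and normalized
    snr     : assumed signal-to-noise ratio controlling regularization
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))          # PSF transfer function
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifft2(W * G))

# Synthetic check: blur a point target with a Gaussian PSF, then invert.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
psf /= psf.sum()
obj = np.zeros((n, n))
obj[20, 40] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(np.fft.ifftshift(psf))))
est = wiener_deconvolve(blurred, psf)
```

The `1/snr` term regularizes the division so that frequencies where the PSF carries little energy do not amplify noise.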
Affiliation(s)
- Quanchun Yu
- National Key Laboratory of Scattering and Radiation, Beijing, 100854, People's Republic of China
- He Cai
- National Key Laboratory of Scattering and Radiation, Beijing, 100854, People's Republic of China
- Xianli Zhu
- National Key Laboratory of Scattering and Radiation, Beijing, 100854, People's Republic of China
- Zihao Liu
- National Key Laboratory of Scattering and Radiation, Beijing, 100854, People's Republic of China
- Hongcheng Yin
- National Key Laboratory of Scattering and Radiation, Beijing, 100854, People's Republic of China
- Liangsheng Li
- National Key Laboratory of Scattering and Radiation, Beijing, 100854, People's Republic of China
6. Park CI, Choe S, Lee W, Choi W, Kim M, Seung HM, Kim YY. Ultrasonic barrier-through imaging by Fabry-Perot resonance-tailoring panel. Nat Commun 2023;14:7818. [PMID: 38016968] [PMCID: PMC10684589] [DOI: 10.1038/s41467-023-43675-4]
Abstract
Imaging technologies that provide detailed information on the intricate shapes and states of an object play critical roles in nanoscale dynamics, bio-organ and cell studies, medical diagnostics, and underwater detection. However, ultrasonic imaging of an object hidden behind a nearly impenetrable metal barrier remains intractable. Here, we present experimental results for ultrasonic imaging of an object in water behind a metal barrier with a high impedance mismatch. Compared with direct ultrasonic images, our method yields sufficient information on object shapes and locations with minimal errors. While our imaging principle is based on the Fabry-Perot (FP) resonance, our strategy for reducing attenuation focuses on tailoring the resonance to any desired frequency. To tailor the resonance frequency, we engineered a panel of a specific material and thickness, called the FP resonance-tailoring panel (RTP), and installed it in front of the barrier at a controlled distance. Since our RTP-based imaging technique is readily compatible with conventional ultrasound devices, it can enable underwater barrier-through imaging and communication and enhance skull-through ultrasonic brain imaging.
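The underlying Fabry-Perot condition is the textbook half-wavelength resonance of a layer: through-transmission peaks when the panel thickness is an integer number of half wavelengths, f_m = m·c/(2d). A sketch of sizing a panel for a desired operating frequency (the sound speed and frequency below are illustrative numbers, not the paper's design values):

```python
def fp_resonance_frequency(c, d, m=1):
    """m-th Fabry-Perot through-transmission resonance of a layer.

    c : longitudinal sound speed in the panel material (m/s)
    d : panel thickness (m)
    """
    return m * c / (2 * d)

def thickness_for_resonance(c, f, m=1):
    """Panel thickness placing the m-th resonance at frequency f (Hz)."""
    return m * c / (2 * f)

# Example: an aluminium-like panel (c ~ 6320 m/s) tailored to a 1 MHz transducer.
d = thickness_for_resonance(6320.0, 1.0e6)   # half-wavelength thickness in metres
```

At resonance the layer transmits strongly despite the impedance mismatch, which is why tuning the panel (rather than the barrier, which is fixed) recovers signal.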
Affiliation(s)
- Chung Il Park
- Department of Mechanical Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Institute of Advanced Machines and Design, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Seungah Choe
- Department of Mechanical Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Institute of Advanced Machines and Design, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Woorim Lee
- Department of Mechanical Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Institute of Advanced Machines and Design, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Wonjae Choi
- Intelligent Wave Engineering Team, Korea Research Institute of Standards and Science (KRISS), 267 Gajeong-ro, Yuseong-gu, Daejeon, 34113, Republic of Korea
- Department of Precision Measurement, University of Science and Technology (UST), 217 Gajeong-ro, Yuseong-gu, Daejeon, 34113, Republic of Korea
- Miso Kim
- School of Advanced Materials Science and Engineering, Sungkyunkwan University (SKKU), 2066 Seobu-ro, Jangan-gu, Suwon, 16419, Republic of Korea
- SKKU Institute of Energy Science and Technology (SIEST), Sungkyunkwan University (SKKU), 2066 Seobu-ro, Jangan-gu, Suwon, 16419, Republic of Korea
- Hong Min Seung
- Intelligent Wave Engineering Team, Korea Research Institute of Standards and Science (KRISS), 267 Gajeong-ro, Yuseong-gu, Daejeon, 34113, Republic of Korea
- Department of Precision Measurement, University of Science and Technology (UST), 217 Gajeong-ro, Yuseong-gu, Daejeon, 34113, Republic of Korea
- Yoon Young Kim
- Department of Mechanical Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
- Institute of Advanced Machines and Design, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea
7. Shi Y, Sheng W, Fu Y, Liu Y. Overlapping speckle correlation algorithm for high-resolution imaging and tracking of objects in unknown scattering media. Nat Commun 2023;14:7742. [PMID: 38007546] [PMCID: PMC10676403] [DOI: 10.1038/s41467-023-43674-5]
Abstract
Optical imaging in scattering media is important to many fields but remains challenging. Recent methods have focused on imaging through thin scattering layers or thicker scattering media with prior knowledge of the sample, but this still limits practical applications. Here, we report an imaging method named 'speckle kinetography' that enables high-resolution imaging in unknown scattering media with thicknesses up to about 6 transport mean free paths. Speckle kinetography non-invasively records a series of incoherent speckle images accompanied by object motion and the inherently retained object information is extracted through an overlapping speckle correlation algorithm to construct the object's autocorrelation for imaging. Under single-colour light-emitting diode, white light, and fluorescence illumination, we experimentally demonstrate 1 μm resolution imaging and tracking of objects moving in scattering samples, while reducing the requirements for prior knowledge. We anticipate this method will enable imaging in currently inaccessible scenarios.
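The correlation step rests on the memory-effect result that a speckle image's autocorrelation approximates the hidden object's autocorrelation, from which the object is recovered by phase retrieval. A minimal single-frame, FFT-based autocorrelation (the paper's overlapping multi-frame algorithm is more involved) can be sketched as:

```python
import numpy as np

def autocorrelation(img):
    """Normalized autocorrelation of a mean-subtracted image, computed via
    the Wiener-Khinchin theorem: AC = IFFT(|FFT(img)|^2)."""
    f = np.fft.fft2(img - img.mean())
    ac = np.real(np.fft.ifft2(np.abs(f) ** 2))
    ac = np.fft.fftshift(ac)   # move zero shift to the array centre
    return ac / ac.max()

# The autocorrelation of any image peaks at zero shift (here the centre).
rng = np.random.default_rng(1)
speckle = rng.random((64, 64))
ac = autocorrelation(speckle)
```

In an object-motion scheme like the paper's, autocorrelations from many overlapping frames would be accumulated before the phase-retrieval step, improving the statistics well beyond this single-frame sketch.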
Affiliation(s)
- Yaoyao Shi
- College of Physics, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
- College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
- Key Laboratory of Radar Imaging and Microwave Photonics, Ministry of Education, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
- Wei Sheng
- College of Physics, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
- Yangyang Fu
- College of Physics, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
- Youwen Liu
- College of Physics, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
8. Wiesel B, Arnon S. Imaging inside highly scattering media using hybrid deep learning and analytical algorithm. Journal of Biophotonics 2023;16:e202300127. [PMID: 37434270] [DOI: 10.1002/jbio.202300127]
Abstract
Imaging through highly scattering media is a challenging problem with numerous applications in the biomedical and remote-sensing fields. Existing methods that use analytical or deep learning tools are limited by simplified forward models or a requirement for prior physical knowledge, resulting in blurry images or a need for large training databases. To address these limitations, we propose a hybrid scheme called Hybrid-DOT that combines analytically derived image estimates with a deep learning network. Our analysis demonstrates that Hybrid-DOT outperforms a state-of-the-art ToF-DOT algorithm, improving PSNR by 4.6 dB and improving resolution by a factor of 2.5. Furthermore, when compared to a stand-alone deep learning model, Hybrid-DOT achieves a 0.8 dB increase in PSNR, 1.5 times the resolution, and a significant reduction in the required dataset size (a factor of 1.6-3). The proposed model remains effective at greater depths, providing similar improvements for up to 160 mean free paths.
Affiliation(s)
- Ben Wiesel
- Ben-Gurion University of the Negev, Department of Electrical and Computer Engineering, Beer-Sheva, Israel
- Shlomi Arnon
- Ben-Gurion University of the Negev, Department of Electrical and Computer Engineering, Beer-Sheva, Israel
9
Debnath B, M S M, Dharmadhikari JA, Chaudhuri S, Philip R, Ramachandran H. Acousto-optic modulator-based improvement in imaging through scattering media. APPLIED OPTICS 2023; 62:6609-6613. [PMID: 37706792 DOI: 10.1364/ao.496770] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/08/2023] [Accepted: 07/28/2023] [Indexed: 09/15/2023]
Abstract
Reduced visibility is a common problem when light traverses a scattering medium, making it difficult to identify an object in such scenarios. We present what we believe to be a novel proof-of-principle technique for improving image visibility, based on the quadrature lock-in discrimination algorithm, in which demodulation is performed using an acousto-optic modulator. A significant improvement in image visibility is achieved using a series of frames. We have also performed systematic imaging while varying camera parameters such as exposure time, frame rate, and series length to investigate their effect on enhancing image visibility.
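The quadrature lock-in discrimination idea can be sketched numerically: modulate the illumination, then project the recorded frame series onto in-phase and quadrature references at the modulation frequency, so that unmodulated scattered background averages out. In the paper the demodulation is performed optically by the acousto-optic modulator; the code below is only a software analogue with made-up frame rates and frequencies.

```python
import numpy as np

def quadrature_lockin(frames, f_mod, f_frame):
    """Per-pixel quadrature lock-in: project the frame series onto
    sin/cos references at the modulation frequency and return the
    demodulated amplitude, rejecting unmodulated background."""
    n = frames.shape[0]
    t = np.arange(n) / f_frame
    ref_i = np.cos(2 * np.pi * f_mod * t)
    ref_q = np.sin(2 * np.pi * f_mod * t)
    i_comp = np.tensordot(ref_i, frames, axes=1) * 2 / n
    q_comp = np.tensordot(ref_q, frames, axes=1) * 2 / n
    return np.hypot(i_comp, q_comp)

# Toy example: one pixel carries a modulated signal on a strong constant
# background, the other sees only the background (scattered light).
f_frame, f_mod, n = 100.0, 10.0, 200      # illustrative values
t = np.arange(n) / f_frame
frames = np.zeros((n, 2))
frames[:, 0] = 5.0 + np.cos(2 * np.pi * f_mod * t)   # signal pixel
frames[:, 1] = 5.0                                    # background-only pixel
amp = quadrature_lockin(frames, f_mod, f_frame)
```

The demodulated amplitude is near 1 for the modulated pixel and near 0 for the background-only pixel, which is the visibility gain the technique exploits.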
10
Deng R, Jin X, Du D, Li Z. Scan-free time-of-flight-based three-dimensional imaging through a scattering layer. OPTICS EXPRESS 2023; 31:23662-23677. [PMID: 37475446 DOI: 10.1364/oe.492864] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/12/2023] [Accepted: 06/11/2023] [Indexed: 07/22/2023]
Abstract
Reconstructing an object's three-dimensional shape behind a scattering layer with a single exposure is of great significance in real-life applications. However, because a single exposure captures little information, which is strongly perturbed by the scattering layer and encoded by free-space propagation, existing methods cannot achieve scan-free three-dimensional reconstruction through a scattering layer in macroscopic scenarios with an acquisition time of seconds. In this paper, we propose a scan-free time-of-flight-based three-dimensional reconstruction method based on explicitly modeling and inverting time-of-flight-based scattered light propagation in a non-confocal imaging system. The non-confocal time-of-flight-based scattering imaging model maps the three-dimensional object shape to the time-resolved measurements by encoding the object shape into the free-space propagation result and then convolving with the scattering blur kernel derived from the diffusion equation. To solve the inverse problem, a three-dimensional shape reconstruction algorithm consisting of deconvolution and diffractive wave propagation is developed to invert the effects of scattering diffusion and free-space propagation, which reshapes the temporal and spatial distribution of scattered signal photons and recovers the object shape information. Experiments on a real scattering imaging system demonstrate the effectiveness of the proposed method. The single exposure used in the experiment takes only 3.5 s, which is more than 200 times faster than confocal scanning methods. Experimental results show that the proposed method outperforms existing methods in three-dimensional reconstruction accuracy and imaging limit, both subjectively and objectively.
Even though the signal photons captured by a single exposure are too highly scattered and attenuated to present any valid information in time gating, the proposed method can reconstruct three-dimensional objects located behind the scattering layer of 9.6 transport mean free paths (TMFPs), corresponding to the round-trip scattering length of 19.2 TMFPs.
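The deconvolution step described above, inverting a scattering blur kernel in the frequency domain, can be sketched with a standard Wiener filter. The Gaussian kernel and SNR constant below are stand-ins for illustration, not the paper's diffusion-equation-derived kernel or full wave-propagation inversion.

```python
import numpy as np

def wiener_deconvolve(measured, kernel, snr=100.0):
    """Frequency-domain Wiener deconvolution: invert the scattering blur
    H while regularizing where |H| is small to avoid noise blow-up."""
    H = np.fft.fft2(np.fft.ifftshift(kernel))
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.fft.ifft2(np.fft.fft2(measured) * G).real

# Toy example with a Gaussian stand-in for the diffusion-derived kernel.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
kernel = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
kernel /= kernel.sum()
obj = np.zeros((n, n))
obj[20:26, 30:44] = 1.0                               # toy object
blur_f = np.fft.fft2(np.fft.ifftshift(kernel))
measured = np.fft.ifft2(np.fft.fft2(obj) * blur_f).real
recovered = wiener_deconvolve(measured, kernel)
```

The recovered image is closer to the original object than the blurred measurement, which is the role deconvolution plays before the diffractive wave-propagation step in the reconstruction pipeline.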
11
Maccarone A, Drummond K, McCarthy A, Steinlehner UK, Tachella J, Garcia DA, Pawlikowska A, Lamb RA, Henderson RK, McLaughlin S, Altmann Y, Buller GS. Submerged single-photon LiDAR imaging sensor used for real-time 3D scene reconstruction in scattering underwater environments. OPTICS EXPRESS 2023; 31:16690-16708. [PMID: 37157743 DOI: 10.1364/oe.487129] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
We demonstrate a fully submerged underwater LiDAR transceiver system based on single-photon detection technologies. The LiDAR imaging system used a silicon single-photon avalanche diode (SPAD) detector array fabricated in complementary metal-oxide semiconductor (CMOS) technology to measure photon time-of-flight using picosecond resolution time-correlated single-photon counting. The SPAD detector array was directly interfaced to a Graphics Processing Unit (GPU) for real-time image reconstruction capability. Experiments were performed with the transceiver system and target objects immersed in a water tank at a depth of 1.8 meters, with the targets placed at a stand-off distance of approximately 3 meters. The transceiver used a picosecond pulsed laser source with a central wavelength of 532 nm, operating at a repetition rate of 20 MHz and average optical power of up to 52 mW, dependent on scattering conditions. Three-dimensional imaging was demonstrated by implementing a joint surface detection and distance estimation algorithm for real-time processing and visualization, which achieved images of stationary targets with up to 7.5 attenuation lengths between the transceiver and the target. The average processing time per frame was approximately 33 ms, allowing real-time three-dimensional video demonstrations of moving targets at ten frames per second at up to 5.5 attenuation lengths between transceiver and target.
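Per-pixel depth estimation from time-correlated single-photon counting histograms, the basic operation behind such a LiDAR, can be sketched as a matched filter against the instrument response function (IRF) followed by a time-to-distance conversion. The bin width, IRF shape, and speed of light in water used below are illustrative assumptions, not the paper's joint surface-detection and distance-estimation algorithm.

```python
import numpy as np

C_WATER = 2.25e8  # approximate speed of light in water, m/s (assumption)

def depth_from_histogram(hist, irf, bin_width_s):
    """Matched-filter ToF estimate: cross-correlate the photon-count
    histogram with the IRF and convert the peak lag to a one-way range
    (half the round-trip distance)."""
    xc = np.correlate(hist, irf, mode="full")
    lag = xc.argmax() - (len(irf) - 1)
    return 0.5 * lag * bin_width_s * C_WATER

# Toy histogram: Gaussian IRF, echo starting at bin 100, flat background
# from scattered light and dark counts.
irf = np.exp(-0.5 * ((np.arange(11) - 5) / 1.5) ** 2)
hist = np.zeros(400)
hist[100:111] += irf * 50          # signal echo
hist += 1.0                        # flat background
d = depth_from_histogram(hist, irf, bin_width_s=100e-12)
```

With 100 ps bins, a peak lag of 100 bins corresponds to a range of about 1.125 m in water; the matched filter makes the peak estimate robust to the flat background level.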
12
Pan L, Shen Y, Qi J, Shi J, Feng X. Single photon single pixel imaging into thick scattering medium. OPTICS EXPRESS 2023; 31:13943-13958. [PMID: 37157269 DOI: 10.1364/oe.484874] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Imaging into a thick scattering medium is a long-standing challenge. Beyond the quasi-ballistic regime, multiple scattering scrambles the spatiotemporal information of incident/emitted light, making canonical imaging based on light focusing nearly impossible. Diffuse optical tomography (DOT) is one of the most popular approaches for looking inside scattering media, but quantitatively inverting the diffusion equation is ill-posed, and prior information about the medium is typically necessary and nontrivial to obtain. Here, we show theoretically and experimentally that, by synergizing the one-way light scattering characteristic of single-pixel imaging with ultrasensitive single-photon detection and a metric-guided image reconstruction, single-photon single-pixel imaging can serve as a simple and powerful alternative to DOT for imaging into a thick scattering medium without prior knowledge or inversion of the diffusion equation. We demonstrate an image resolution of 12 mm inside a 60 mm thick (∼78 mean free paths) scattering medium.
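The single-pixel measurement principle this entry builds on can be sketched in a few lines: structured patterns illuminate the scene one at a time, a bucket detector records one number per pattern, and an orthogonal pattern basis makes reconstruction a simple transpose. This illustrates plain Hadamard single-pixel imaging only, not the paper's single-photon detection or metric-guided reconstruction; the 4×4 scene is a toy.

```python
import numpy as np

def sylvester_hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Single-pixel imaging: project each pattern (a row of H), record one
# 'bucket' detector value, and reconstruct by the inverse transform.
n = 16                                   # 4x4 image -> 16 Hadamard patterns
H = sylvester_hadamard(n)
x = np.arange(n, dtype=float)            # toy scene, flattened
bucket = H @ x                           # one detector reading per pattern
x_rec = (H.T @ bucket) / n               # H^T H = n I for Hadamard matrices
```

Because the detector only ever records a single scalar per pattern, light need traverse the medium one way with no imaging optics behind it, which is the "one-way scattering" property the abstract refers to.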
13
Santos J, Rodrigo PJ, Petersen PM, Pedersen C. Confocal LiDAR for remote high-resolution imaging of auto-fluorescence in aquatic media. Sci Rep 2023; 13:4807. [PMID: 36959390 PMCID: PMC10036608 DOI: 10.1038/s41598-023-32036-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Accepted: 03/21/2023] [Indexed: 03/25/2023] Open
Abstract
Spatially resolved in situ monitoring of plankton can provide insights on the impacts of climate change on aquatic ecosystems due to their vital role in the biological carbon pump. However, high-resolution underwater imaging is technically complex and restricted to small close-range volumes with current techniques. Here, we report a novel inelastic scanning confocal light detection and ranging (LiDAR) system for remote underwater volumetric imaging of fluorescent objects. A continuous wave excitation beam is combined with a pinhole in a conjugated detection plane to reject out-of-focus scattering and accomplish near-diffraction-limited probe volumes. The combination of bi-directional scanning with remote focusing enables the acquisition of three-dimensional data. We experimentally determine the point spread and axial weighting functions, and demonstrate selective volumetric imaging of obstructed layers through spatial filtering. Finally, we spatially resolve in vivo autofluorescence from sub-millimeter Acocyclops royi copepods to demonstrate the applicability of our novel instrument in non-intrusive morphological and spectroscopic studies of aquatic fauna. The proposed system constitutes a unique tool, e.g., for profiling chlorophyll distributions and for quantitative studies of zooplankton, with reduced interference from intervening scatterers in the water column that degrade the performance of conventional imaging systems currently in place.
Affiliation(s)
- Joaquim Santos
- DTU Electro, Department of Electrical and Photonics Engineering, Technical University of Denmark, Frederiksborgvej 399, 4000, Roskilde, Denmark.
- Peter John Rodrigo
- DTU Electro, Department of Electrical and Photonics Engineering, Technical University of Denmark, Frederiksborgvej 399, 4000, Roskilde, Denmark.
- Paul Michael Petersen
- DTU Electro, Department of Electrical and Photonics Engineering, Technical University of Denmark, Frederiksborgvej 399, 4000, Roskilde, Denmark.
- Christian Pedersen
- DTU Electro, Department of Electrical and Photonics Engineering, Technical University of Denmark, Frederiksborgvej 399, 4000, Roskilde, Denmark.
14
Zhao Y, Raghuram A, Wang F, Kim SH, Hielscher A, Robinson JT, Veeraraghavan A. Unrolled-DOT: an interpretable deep network for diffuse optical tomography. JOURNAL OF BIOMEDICAL OPTICS 2023; 28:036002. [PMID: 36908760 PMCID: PMC9995139 DOI: 10.1117/1.jbo.28.3.036002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/19/2022] [Accepted: 02/09/2023] [Indexed: 06/18/2023]
Abstract
Significance: Imaging through scattering media is critical in many biomedical imaging applications, such as breast tumor detection and functional neuroimaging. Time-of-flight diffuse optical tomography (ToF-DOT) is one of the most promising methods for high-resolution imaging through scattering media. ToF-DOT and many traditional DOT methods require an image reconstruction algorithm. Unfortunately, this algorithm often requires long computational runtimes and may produce lower quality reconstructions in the presence of model mismatch or improper hyperparameter tuning.
Aim: We used a data-driven unrolled network as our ToF-DOT inverse solver. The unrolled network is faster than traditional inverse solvers and achieves higher reconstruction quality by accounting for model mismatch.
Approach: Our model "Unrolled-DOT" uses the learned iterative shrinkage thresholding algorithm. In addition, we incorporate a refinement U-Net and Visual Geometry Group (VGG) perceptual loss to further increase the reconstruction quality. We trained and tested our model on simulated and real-world data and benchmarked against physics-based and learning-based inverse solvers.
Results: In experiments on real-world data, Unrolled-DOT outperformed learning-based algorithms and achieved over 10× reduction in runtime and mean-squared error, compared to traditional physics-based solvers.
Conclusion: We demonstrated a learning-based ToF-DOT inverse solver that achieves state-of-the-art performance in speed and reconstruction quality, which can aid in future applications for noninvasive biomedical imaging.
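The learned iterative shrinkage thresholding algorithm (LISTA) at the heart of Unrolled-DOT unrolls ISTA iterations into network layers with learnable matrices and thresholds. A minimal sketch with the classical (untrained) ISTA weights on a made-up sparse-recovery problem, rather than DOT data:

```python
import numpy as np

def soft_threshold(x, theta):
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def lista(y, W1, W2, theta, n_layers):
    """Unrolled ISTA: each 'layer' computes x <- soft(W1 @ y + W2 @ x, theta).
    In a trained LISTA network, W1, W2, and theta are learned (often per
    layer); here they are fixed to the classical ISTA choices."""
    x = np.zeros(W1.shape[0])
    for _ in range(n_layers):
        x = soft_threshold(W1 @ y + W2 @ x, theta)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 50)) / np.sqrt(30)      # toy forward model
x_true = np.zeros(50)
x_true[[5, 20, 40]] = [1.0, -0.5, 0.8]           # sparse ground truth
y = A @ x_true                                   # noiseless measurement
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of A^T A
W1 = A.T / L
W2 = np.eye(50) - (A.T @ A) / L
x_hat = lista(y, W1, W2, theta=0.01 / L, n_layers=500)
```

Training replaces the hand-derived W1, W2, and theta with learned values, which is what lets the unrolled network compensate for model mismatch in far fewer layers than ISTA needs iterations.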
Affiliation(s)
- Yongyi Zhao
- Rice University, Department of Electrical and Computer Engineering, Houston, Texas, United States
- Ankit Raghuram
- Rice University, Department of Electrical and Computer Engineering, Houston, Texas, United States
- Fay Wang
- Columbia University, Department of Biomedical Engineering, New York, New York, United States
- Stephen Hyunkeol Kim
- Columbia University Irving Medical Center, Department of Radiology, New York, New York, United States
- New York University - Tandon School of Engineering, Department of Biomedical Engineering, New York, New York, United States
- Andreas Hielscher
- New York University - Tandon School of Engineering, Department of Biomedical Engineering, New York, New York, United States
- Jacob T. Robinson
- Rice University, Department of Electrical and Computer Engineering, Houston, Texas, United States
- Ashok Veeraraghavan
- Rice University, Department of Electrical and Computer Engineering, Houston, Texas, United States
15
Zhang Y, Li S, Sun J, Zhang X, Liu D, Zhou X, Li H, Hou Y. Three-dimensional single-photon imaging through realistic fog in an outdoor environment during the day. OPTICS EXPRESS 2022; 30:34497-34509. [PMID: 36242460 DOI: 10.1364/oe.464297] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/19/2022] [Accepted: 08/21/2022] [Indexed: 06/16/2023]
Abstract
Due to the strong scattering of fog and strong background noise, the signal-to-background ratio (SBR) is extremely low, which severely limits the 3D imaging capability of single-photon detector arrays through fog. Here, we propose an outdoor three-dimensional imaging algorithm for fog that can separate signal photons from non-signal photons (scattering and noise photons) at an SBR as low as 0.003. This is achieved by using an observation model based on the multinomial distribution to compensate for pile-up, and dual-Gamma estimation to eliminate non-signal photons. We show that the proposed algorithm enables accurate 3D imaging at a range of 1.4 km in a visibility of 1.7 km. Compared with traditional algorithms, the target recovery (TR) of the reconstructed image is improved by 20.5%, and the relative average ranging error (RARE) is reduced by 28.2%. The method has been successfully demonstrated for targets at different distances and imaging times. This research expands the fog scattering estimation model from indoor to outdoor environments, and improves the weather adaptability of single-photon detector arrays.
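Pile-up compensation, which this abstract handles with a multinomial observation model, has a classical closed form known as the Coates correction: the detection probability in each histogram bin is conditioned on no detection having occurred in earlier bins of the same laser cycle. The sketch below shows that classical form, not the authors' dual-Gamma pipeline; the flat photon rate is a toy input.

```python
import numpy as np

def coates_correction(counts, n_cycles):
    """Coates pile-up correction for a TCSPC histogram: bin i's detection
    probability is counts[i] / (cycles with no detection before bin i),
    and -log(1 - p) converts that probability back to a mean photon rate."""
    counts = np.asarray(counts, dtype=float)
    prior = np.concatenate(([0.0], np.cumsum(counts)[:-1]))
    p = counts / (n_cycles - prior)
    return -np.log1p(-p)

# Toy input: a flat true rate of 0.2 photons/cycle in every bin. Because a
# SPAD records at most one photon per cycle, the *expected* raw histogram
# is skewed toward early bins; the correction undoes that skew exactly.
lam = 0.2
n_cycles, n_bins = 10_000, 10
q = 1.0 - np.exp(-lam)                               # per-bin detection prob.
raw = n_cycles * q * (1.0 - q) ** np.arange(n_bins)  # expected raw counts
corrected = coates_correction(raw, n_cycles)
```

On the expected counts the correction recovers the flat rate of 0.2 in every bin, illustrating why an explicit observation model is needed before range estimation under high flux.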
16
Laurenzis M, Christnacher F. Time domain analysis of photon scattering and Huygens-Fresnel back projection. OPTICS EXPRESS 2022; 30:30441-30454. [PMID: 36242148 DOI: 10.1364/oe.468668] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/23/2022] [Accepted: 07/21/2022] [Indexed: 06/16/2023]
Abstract
Stand-off detection and characterization of scattering media such as fog and aerosols is an important task in environmental monitoring and related applications. We present, for the first time, a stand-off characterization of sprayed water fog in the time domain. Using time-correlated single-photon counting, we measure transient signatures of photons reflected off a target within the fog volume and can distinguish ballistic from scattered photons. By applying a forward propagation model, we reconstruct the scattered photon paths and determine the fog's mean scattering length μscat. in a range of 1.55 m to 1.86 m. Moreover, in a second analysis, we project the recorded transients back to reconstruct the scene using virtual Huygens-Fresnel wavefronts. While in medium-density fog some ballistic contribution remains in the signatures, we demonstrate that in high-density fog all recorded photons are scattered at least once. This work may pave the way to novel tools for the characterization of, and enhanced imaging in, scattering media.
17
Luesia P, Crespo M, Jarabo A, Redo-Sanchez A. Non-line-of-sight imaging in the presence of scattering media using phasor fields. OPTICS LETTERS 2022; 47:3796-3799. [PMID: 35913317 DOI: 10.1364/ol.463296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/13/2022] [Accepted: 07/07/2022] [Indexed: 06/15/2023]
Abstract
Non-line-of-sight (NLOS) imaging aims to reconstruct partially or completely occluded scenes. Recent approaches have demonstrated high-quality reconstructions of complex scenes with arbitrary reflectance, occlusions, and significant multi-path effects. However, previous works focused on surface scattering only, which reduces the generality in more challenging scenarios such as scenes submerged in scattering media. In this work, we investigate current state-of-the-art NLOS imaging methods based on phasor fields to reconstruct scenes submerged in scattering media. We empirically analyze the capability of phasor fields in reconstructing complex synthetic scenes submerged in thick scattering media. We also apply the method to real scenes, showing that it performs similarly to recent diffuse optical tomography methods.
18
Xu S, Yang X, Liu W, Jönsson J, Qian R, Konda PC, Zhou KC, Kreiß L, Wang H, Dai Q, Berrocal E, Horstmeyer R. Imaging Dynamics Beneath Turbid Media via Parallelized Single-Photon Detection. ADVANCED SCIENCE (WEINHEIM, BADEN-WURTTEMBERG, GERMANY) 2022; 9:e2201885. [PMID: 35748188 PMCID: PMC9404405 DOI: 10.1002/advs.202201885] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Revised: 05/16/2022] [Indexed: 05/05/2023]
Abstract
Noninvasive optical imaging through dynamic scattering media has numerous important biomedical applications but still remains a challenging task. While standard diffuse imaging methods measure optical absorption or fluorescent emission, it is also well-established that the temporal correlation of scattered coherent light diffuses through tissue much like optical intensity. Few works to date, however, have aimed to experimentally measure and process such temporal correlation data to demonstrate deep-tissue video reconstruction of decorrelation dynamics. In this work, a single-photon avalanche diode array camera is utilized to simultaneously monitor the temporal dynamics of speckle fluctuations at the single-photon level from 12 different phantom tissue surface locations delivered via a customized fiber bundle array. Then a deep neural network is applied to convert the acquired single-photon measurements into video of scattering dynamics beneath rapidly decorrelating tissue phantoms. The ability to reconstruct images of transient (0.1-0.4 s) dynamic events occurring up to 8 mm beneath a decorrelating tissue phantom with millimeter-scale resolution is demonstrated, and it is highlighted how the model can flexibly extend to monitor flow speed within buried phantom vessels.
Affiliation(s)
- Shiqi Xu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Xi Yang
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Wenhui Liu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Automation, Tsinghua University, Beijing 100084, China
- Joakim Jönsson
- Division of Combustion Physics, Department of Physics, Lund University, Lund 22100, Sweden
- Ruobing Qian
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Kevin C. Zhou
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Lucas Kreiß
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Institute of Medical Biotechnology, Friedrich-Alexander-University Erlangen-Nürnberg (FAU), Erlangen 91054, Germany
- Haoqian Wang
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Qionghai Dai
- Department of Automation, Tsinghua University, Beijing 100084, China
- Edouard Berrocal
- Division of Combustion Physics, Department of Physics, Lund University, Lund 22100, Sweden
- Roarke Horstmeyer
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, USA
- Department of Physics, Duke University, Durham, NC 27708, USA
19
A boundary migration model for imaging within volumetric scattering media. Nat Commun 2022; 13:3234. [PMID: 35680924 PMCID: PMC9184484 DOI: 10.1038/s41467-022-30948-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2021] [Accepted: 05/12/2022] [Indexed: 11/25/2022] Open
Abstract
Effectively imaging within volumetric scattering media is of great importance and challenging especially in macroscopic applications. Recent works have demonstrated the ability to image through scattering media or within the weak volumetric scattering media using spatial distribution or temporal characteristics of the scattered field. Here, we focus on imaging Lambertian objects embedded in highly scattering media, where signal photons are dramatically attenuated during propagation and highly coupled with background photons. We address these challenges by providing a time-to-space boundary migration model (BMM) of the scattered field to convert the scattered measurements in spectral form to the scene information in the temporal domain using all of the optical signals. The experiments are conducted under two typical scattering scenarios: 2D and 3D Lambertian objects embedded in the polyethylene foam and the fog, which demonstrate the effectiveness of the proposed algorithm. It outperforms related works including time gating in terms of reconstruction precision and scattering strength. Even though the proportion of signal photons is only 0.75%, Lambertian objects located at more than 25 transport mean free paths (TMFPs), corresponding to the round-trip scattering length of more than 50 TMFPs, can be reconstructed. Also, the proposed method provides low reconstruction complexity and millisecond-scale runtime, which significantly benefits its application. Imaging in scattering media is challenging due to signal attenuation and strong coupling of scattered and signal photons. The authors present a boundary migration model of the scattered field, converting scattered measurements in spectral form to scene information in temporal domain, and image Lambertian objects in highly scattering media.
20
Bentz BZ, Pattyn CA, van der Laan JD, Redman BJ, Glen A, Sanchez AL, Westlake K, Wright JB. Incorporating the effects of objects in an approximate model of light transport in scattering media. OPTICS LETTERS 2022; 47:2000-2003. [PMID: 35427321 DOI: 10.1364/ol.451725] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Accepted: 02/13/2022] [Indexed: 06/14/2023]
Abstract
A computationally efficient radiative transport model is presented that predicts a camera measurement and accounts for the light reflected and blocked by an object in a scattering medium. The model is in good agreement with experimental data acquired at the Sandia National Laboratory Fog Chamber Facility (SNLFC). The model is applicable in computational imaging to detect, localize, and image objects hidden in scattering media. Here, a statistical approach was implemented to study object detection limits in fog.
21
Balasubramaniam GM, Biton N, Arnon S. Imaging through diffuse media using multi-mode vortex beams and deep learning. Sci Rep 2022; 12:1561. [PMID: 35091633 PMCID: PMC8799672 DOI: 10.1038/s41598-022-05358-w] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 01/11/2022] [Indexed: 01/20/2023] Open
Abstract
Optical imaging through diffuse media is a challenging problem with applications in many fields, such as biomedical imaging, non-destructive testing, and computer-assisted surgery. However, light interaction with diffuse media leads to multiple scattering of the photons in the angular and spatial domains, severely degrading the image reconstruction process. In this article, a novel method to image through diffuse media using multiple modes of vortex beams and a new deep learning network named "LGDiffNet" is derived. A proof-of-concept numerical simulation is conducted using this method, and the results are experimentally verified. In this technique, multiple modes of Gaussian and Laguerre-Gaussian beams illuminate the displayed digits from the dataset, and the beams are then propagated through the diffuser before being captured on the beam profiler. Furthermore, we investigated whether imaging through diffuse media using multiple modes of vortex beams instead of Gaussian beams improves the imaging system's capability and enhances the network's reconstruction ability. Our results show that illuminating the diffuser with vortex beams and employing the "LGDiffNet" network provides enhanced image reconstruction compared to existing modalities. An enhancement of ~1 dB in PSNR is achieved with this method when a highly scattering diffuser of grit 220 and width 2 mm (7.11 times the mean free path) is used. No additional optimizations or reference beams were used in the imaging system, revealing the robustness of the "LGDiffNet" network and the adaptability of the imaging system for practical applications in medical imaging.
Affiliation(s)
- Ganesh M Balasubramaniam
- Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8441405, Beersheba, Israel.
- Netanel Biton
- Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8441405, Beersheba, Israel.
- Shlomi Arnon
- Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8441405, Beersheba, Israel.
22
Zhao X, Jiang X, Han A, Mao T, He W, Chen Q. Photon-efficient 3D reconstruction employing a edge enhancement method. OPTICS EXPRESS 2022; 30:1555-1569. [PMID: 35209313 DOI: 10.1364/oe.446369] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Accepted: 12/20/2021] [Indexed: 06/14/2023]
Abstract
Photon-efficient 3D reconstruction under sparse photon conditions remains challenging. At scene edge locations in particular, light scattering results in a weaker echo signal than at non-edge locations. Depth images can be viewed as smooth regions stitched together by edge segmentation, yet none of the existing methods focus on improving the accuracy of edge reconstruction when performing 3D reconstruction. Moreover, the impact of edge reconstruction on overall depth reconstruction has not been investigated. In this paper, we explore how to improve edge reconstruction accuracy from several aspects: improving the network structure, employing hybrid loss functions, and taking advantage of the non-local correlation of SPAD measurements. Meanwhile, we investigate the correlation between edge reconstruction accuracy and the reconstruction accuracy of overall depth based on quantitative metrics. The experimental results show that the proposed method achieves superior performance in both edge reconstruction and overall depth reconstruction compared with other state-of-the-art methods, and they confirm that improved edge reconstruction accuracy promotes the reconstruction accuracy of the depth map.
23
Stellinga D, Phillips DB, Mekhail SP, Selyem A, Turtaev S, Čižmár T, Padgett MJ. Time-of-flight 3D imaging through multimode optical fibers. Science 2021; 374:1395-1399. [PMID: 34882470 DOI: 10.1126/science.abl3771] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Affiliation(s)
- Daan Stellinga
- School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
- David B Phillips
- School of Physics and Astronomy, University of Exeter, Exeter EX4 4QL, UK
- Adam Selyem
- Fraunhofer Centre for Applied Photonics, Glasgow G1 1RD, UK
- Sergey Turtaev
- Leibniz Institute of Photonic Technology, Albert-Einstein-Straße 9, 07745 Jena, Germany
- Tomáš Čižmár
- Leibniz Institute of Photonic Technology, Albert-Einstein-Straße 9, 07745 Jena, Germany
- Institute of Scientific Instruments of the CAS, Královopolská 147, 612 64 Brno, Czech Republic
- Miles J Padgett
- School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
24
Fast non-line-of-sight imaging with high-resolution and wide field of view using synthetic wavelength holography. Nat Commun 2021; 12:6647. [PMID: 34789724 PMCID: PMC8599621 DOI: 10.1038/s41467-021-26776-w] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Accepted: 10/01/2021] [Indexed: 11/12/2022] Open
Abstract
The presence of a scattering medium in the imaging path between an object and an observer is known to severely limit the visual acuity of the imaging system. We present an approach to circumvent the deleterious effects of scattering, by exploiting spectral correlations in scattered wavefronts. Our Synthetic Wavelength Holography (SWH) method is able to recover a holographic representation of hidden targets with sub-mm resolution over a nearly hemispheric angular field of view. The complete object field is recorded within 46 ms, by monitoring the scattered light return in a probe area smaller than 6 cm × 6 cm. This unique combination of attributes opens up a plethora of new Non-Line-of-Sight imaging applications ranging from medical imaging and forensics, to early-warning navigation systems and reconnaissance. Adapting the findings of this work to other wave phenomena will help unlock a wider gamut of applications beyond those envisioned in this paper.
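The synthetic-wavelength mechanism can be sketched with two lines of arithmetic: two closely spaced optical wavelengths beat at a much longer synthetic wavelength Λ = λ₁λ₂/|λ₁ − λ₂|, and depth is read out from the phase of that beat, which survives scattering far better than the optical phase. The specific wavelengths and depth below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def synthetic_wavelength(lam1, lam2):
    """Beat (synthetic) wavelength of two closely spaced optical wavelengths."""
    return lam1 * lam2 / abs(lam1 - lam2)

# Illustrative example: two wavelengths near 854 nm spaced to synthesize
# a ~1 mm wave.
lam1, lam2 = 854.0e-9, 854.73e-9
L_syn = synthetic_wavelength(lam1, lam2)          # about 1 mm

# A depth offset d imprints a round-trip synthetic phase of 4*pi*d/L_syn;
# inverting the wrapped phase recovers d modulo L_syn/2.
d_true = 120e-6
phase = np.angle(np.exp(1j * 4 * np.pi * d_true / L_syn))
d_rec = phase * L_syn / (4 * np.pi)
```

Because depth is encoded modulo Λ/2 rather than modulo half the optical wavelength, sub-millimeter resolution is achievable while the unambiguous range grows by the ratio Λ/λ.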
25
Morland I, Zhu F, Martín GM, Gyongy I, Leach J. Intensity-corrected 4D light-in-flight imaging. Opt Express 2021; 29:22504-22516. [PMID: 34266012 DOI: 10.1364/oe.425930] [Received: 03/26/2021] [Accepted: 05/18/2021] [Indexed: 06/13/2023]
Abstract
Light-in-flight (LIF) imaging is the measurement and reconstruction of light's path as it moves and interacts with objects. It is well known that relativistic effects can result in apparent velocities that differ significantly from the speed of light. Less well known is that Rayleigh scattering and the effects of imaging optics can cause observed intensities to change by several orders of magnitude along light's path. We develop a model that corrects for all of these effects, allowing us to accurately invert the observed data and reconstruct the true intensity-corrected optical path of a laser pulse as it travels in air. We demonstrate the validity of our model by observing the photon arrival time and intensity distribution obtained from single-photon avalanche detector (SPAD) array data for a laser pulse propagating towards and away from the camera. We can then reconstruct the true intensity-corrected path of the light in four dimensions (three spatial dimensions and time).
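A minimal sketch of the kind of correction the abstract describes, assuming only the two dominant effects it names: the full model in the paper also accounts for the imaging optics, which is not reproduced here. The idea is to undo the 1/r² propagation falloff and the Rayleigh phase function's (1 + cos²θ) angular weighting.

```python
import math

def corrected_intensity(i_observed, r, theta):
    """Recover a relative source intensity from an observed one by
    undoing inverse-square falloff over distance r and the Rayleigh
    scattering angular factor (1 + cos^2 theta)."""
    rayleigh = 1.0 + math.cos(theta) ** 2
    return i_observed * r ** 2 / rayleigh
```

At θ = 90°, where Rayleigh scattering toward the camera is weakest, the correction reduces to the inverse-square term alone.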
26
Zhao Y, Raghuram A, Kim HK, Hielscher AH, Robinson JT, Veeraraghavan A. High Resolution, Deep Imaging Using Confocal Time-of-Flight Diffuse Optical Tomography. IEEE Trans Pattern Anal Mach Intell 2021; 43:2206-2219. [PMID: 33891548 PMCID: PMC8270678 DOI: 10.1109/tpami.2021.3075366] [Indexed: 05/24/2023]
Abstract
Light scattering by tissue severely limits how deep beneath the surface one can image, and the spatial resolution one can obtain from these images. Diffuse optical tomography (DOT) is one of the most powerful techniques for imaging deep within tissue - well beyond the conventional ∼10-15 mean scattering lengths tolerated by ballistic imaging techniques such as confocal and two-photon microscopy. Unfortunately, existing DOT systems are limited, achieving only centimeter-scale resolution. Furthermore, they suffer from slow acquisition times and slow reconstruction speeds, making real-time imaging infeasible. We show that time-of-flight diffuse optical tomography (ToF-DOT) and its confocal variant (CToF-DOT), by exploiting photon travel time information, achieve millimeter spatial resolution in the highly scattered diffusion regime ( mean free paths). In addition, we demonstrate two further innovations: focusing on confocal measurements and multiplexing the illumination sources, which together significantly reduce the measurement acquisition time. Finally, we rely on a novel convolutional approximation to develop a fast reconstruction algorithm, achieving a 100× speedup in reconstruction time compared to traditional DOT reconstruction techniques. Together, we believe these technical advances serve as the first step towards real-time, millimeter-resolution, deep-tissue imaging using DOT.
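The "convolutional approximation" can be pictured as a shift-invariant forward model: the measurement is the hidden absorption map convolved with a fixed sensitivity kernel, which is what makes fast FFT-based deconvolution possible. The sketch below shows only a 1D forward model under that assumption; the kernel values are illustrative, not taken from the paper.

```python
def convolve(obj, kernel):
    """Shift-invariant forward model: each voxel of the hidden object
    contributes a shifted, scaled copy of the sensitivity kernel to
    the measurement vector."""
    out = [0.0] * (len(obj) + len(kernel) - 1)
    for i, o in enumerate(obj):
        for j, h in enumerate(kernel):
            out[i + j] += o * h
    return out
```

Because the kernel does not change with position, reconstruction reduces to a single deconvolution rather than inverting a dense, measurement-by-voxel sensitivity matrix, which is the source of the reported speedup.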
27
Kufcsák A, Bagnaninchi P, Erdogan AT, Henderson RK, Krstajić N. Time-resolved spectral-domain optical coherence tomography with CMOS SPAD sensors. Opt Express 2021; 29:18720-18733. [PMID: 34154122 DOI: 10.1364/oe.422648] [Received: 02/26/2021] [Accepted: 05/14/2021] [Indexed: 06/13/2023]
Abstract
We present the first spectral-domain optical coherence tomography (SD-OCT) system deploying a complementary metal-oxide-semiconductor (CMOS) single-photon avalanche diode (SPAD) based, time-resolved line sensor. The 1024-pixel sensor achieves a sensitivity of 87 dB at an A-scan rate of 1 kHz using a supercontinuum laser source with a repetition rate of 20 MHz, 38 nm bandwidth, and 2 mW power at 850 nm centre wavelength. In the sensor's time-resolved mode, the system combines low-coherence interferometry (LCI) with massively parallel time-resolved single-photon counting to control the detection of interference spectra at the single-photon level based on the time of arrival of photons. As a proof-of-concept demonstration of the combined detection scheme, we show the acquisition of time-resolved interference spectra and the reconstruction of OCT images from selected time bins. We then exemplify the temporal discrimination feature, with 50 ps time resolution and 249 ps timing uncertainty, by removing unwanted reflections from along the optical path at a 30 mm distance from the sample. The current limitations of the proposed technique in terms of sensor parameters are analysed and potential improvements are identified for advanced photonic applications.
28
Tobin R, Halimi A, McCarthy A, Soan PJ, Buller GS. Robust real-time 3D imaging of moving scenes through atmospheric obscurant using single-photon LiDAR. Sci Rep 2021; 11:11236. [PMID: 34045553 PMCID: PMC8159934 DOI: 10.1038/s41598-021-90587-8] [Received: 01/20/2021] [Accepted: 05/11/2021] [Indexed: 01/16/2023]
Abstract
Recently, time-of-flight LiDAR using the single-photon detection approach has emerged as a potential solution for three-dimensional imaging in challenging measurement scenarios, such as over distances of many kilometres. The high sensitivity and picosecond timing resolution afforded by single-photon detection offers high-resolution depth profiling of remote, complex scenes while maintaining low power optical illumination. These properties are ideal for imaging in highly scattering environments such as through atmospheric obscurants, for example fog and smoke. In this paper we present the reconstruction of depth profiles of moving objects through high levels of obscurant equivalent to five attenuation lengths between transceiver and target at stand-off distances up to 150 m. We used a robust statistically based processing algorithm designed for the real time reconstruction of single-photon data obtained in the presence of atmospheric obscurant, including providing uncertainty estimates in the depth reconstruction. This demonstration of real-time 3D reconstruction of moving scenes points a way forward for high-resolution imaging from mobile platforms in degraded visual environments.
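The paper's contribution is a robust statistical reconstruction with per-pixel uncertainty estimates; as a point of reference, the naive baseline it improves on is simple peak-picking on each pixel's time-of-flight histogram, sketched below with illustrative numbers.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def depth_from_histogram(hist, bin_width_s):
    """Naive per-pixel depth estimate: take the most populated
    time-of-flight bin (using its centre time) and halve the
    round-trip distance. Breaks down under heavy obscurant, where
    background counts swamp the signal peak."""
    peak = max(range(len(hist)), key=hist.__getitem__)
    tof = (peak + 0.5) * bin_width_s
    return C * tof / 2.0
```

With picosecond-scale bins this already gives millimetre-scale depth quantisation in clear conditions; the statistical approach in the paper is what makes the estimate usable at five attenuation lengths of obscurant.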
Affiliation(s)
- Rachael Tobin
- School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK
- Abderrahim Halimi
- School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK
- Aongus McCarthy
- School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK
- Philip J Soan
- Defence Science and Technology Laboratory, Porton Down, Salisbury, SP4 0LQ, UK
- Gerald S Buller
- School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK
29
Maruca S, Rehain P, Sua YM, Zhu S, Huang Y. Non-invasive single photon imaging through strongly scattering media. Opt Express 2021; 29:9981-9990. [PMID: 33820159 DOI: 10.1364/oe.417299] [Received: 12/21/2020] [Accepted: 01/28/2021] [Indexed: 06/12/2023]
Abstract
Non-invasive optical imaging through opaque and multi-scattering media remains highly desirable across many application domains. The random scattering and diffusion of light in such media inflict exponential decay and aberration, prohibiting diffraction-limited imaging. By non-interferometric, few-picosecond optical gating of backscattered photons, we demonstrate single-photon-sensitive, non-invasive 3D imaging of targets occluded by strongly scattering media with optical thicknesses reaching 9.5 ℓs (19 ℓs round trip). It achieves diffraction-limited imaging of a target placed 130 cm away through the opaque media, with millimeter lateral and depth resolution, while requiring only one photon detection out of 50,000 probe pulses. Our single-photon-sensitive imaging technique requires neither wavefront shaping nor computationally intensive image reconstruction algorithms, promising practical solutions for diffraction-limited imaging through highly opaque and diffusive media with low illumination power.
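The gating idea can be sketched as keeping only photon arrivals inside a few-picosecond window around the expected ballistic time of flight; multiply scattered photons take longer paths, arrive late, and fall outside the gate. The times and gate width below are illustrative, not the paper's values.

```python
def ballistic_gate(arrivals_ps, t_ballistic_ps, gate_ps):
    """Keep only photon arrival times (in picoseconds) within a
    narrow gate centred on the expected ballistic time of flight;
    late, multiply scattered photons are rejected."""
    lo = t_ballistic_ps - gate_ps / 2.0
    hi = t_ballistic_ps + gate_ps / 2.0
    return [t for t in arrivals_ps if lo <= t <= hi]
```

Scanning the gate centre in time then yields depth-resolved images, one gate position per depth slice.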
30
O'Connor T, Hawxhurst C, Shor LM, Javidi B. Red blood cell classification in lensless single random phase encoding using convolutional neural networks. Opt Express 2020; 28:33504-33515. [PMID: 33115011 DOI: 10.1364/oe.405563] [Received: 08/17/2020] [Accepted: 10/13/2020] [Indexed: 06/11/2023]
Abstract
Rapid cell identification is achieved in a compact and field-portable system employing single random phase encoding to record opto-biological signatures of living biological cells of interest. The lensless, 3D-printed system uses a diffuser to encode the complex amplitude of the sample, and the encoded signal is recorded by a CMOS image sensor for classification. Removing the lenses from this 3D sensing system removes the restrictions on field of view, numerical aperture, and depth of field normally imposed by objective lenses in comparable microscopy systems, enabling robust 3D capture of biological volumes. Opto-biological signatures for two classes of animal red blood cells, situated in a microfluidic device, are captured and then input into a convolutional neural network for classification, wherein the AlexNet architecture, pretrained on the ImageNet database, is used as the deep learning model. Video data of the opto-biological signatures was recorded for multiple samples, and each frame was treated as an input image to the network. The pre-trained network was fine-tuned and evaluated using a dataset of over 36,000 images. The results show improved performance in comparison to a previously studied Random Forest classification model using statistical features extracted from the opto-biological signatures. The system is further compared to, and outperforms, a similar shearing-based 3D digital holographic microscopy system for cell classification. In addition to improvements in classification performance, the use of convolutional neural networks in this work is demonstrated to provide improved performance in the presence of noise. Red blood cell identification as presented here may serve as a key step toward lensless pseudorandom phase encoding applications in rapid disease screening. To the best of our knowledge this is the first report of lensless cell identification in single random phase encoding using convolutional neural networks.