1
Zhang X, Wang B, Li S, Liang K, Guan H, Chen Q, Zuo C. Lensless imaging with a programmable Fresnel zone aperture. SCIENCE ADVANCES 2025; 11:eadt3909. PMID: 40117355; PMCID: PMC11927639; DOI: 10.1126/sciadv.adt3909.
Abstract
Optical imaging has long been dominated by traditional lens-based systems that, despite their success, are inherently limited by size, weight, and cost. Lensless imaging seeks to overcome these limitations by replacing lenses with thinner, lighter, and cheaper optical modulators and reconstructing images computationally, but it faces trade-offs in image quality, artifacts, and flexibility that are inherent to traditional static modulation. Here, we propose a lensless imaging method with a programmable Fresnel zone aperture (FZA), termed LIP. Using a commercial liquid crystal display, we designed an integrated LIP module and demonstrated its capability for high-quality, artifact-free reconstruction through dynamic modulation and offset-FZA parallel merging. Compared to static-modulation approaches, LIP achieves a 2.5× resolution enhancement and a 3 dB improvement in signal-to-noise ratio in "static mode" while maintaining an interaction frame rate of 15 frames per second in "dynamic mode." Experimental results demonstrate LIP's potential as a miniaturized platform for versatile advanced imaging tasks such as virtual reality and human-computer interaction.
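For readers who want to experiment with the cosine-FZA imaging model underlying this line of work, a minimal sketch is given below: the mask transmittance t(r) = (1 + cos(πr²/r₁²))/2 is shadow-cast onto the sensor, the measurement is modeled as a convolution of the scene with that pattern, and a single Wiener deconvolution recovers the scene. This is a toy, static-mask illustration, not the paper's programmable-LCD pipeline; all sizes and parameter values (n, pitch, r1, nsr) are arbitrary assumptions.

```python
import numpy as np

def fza_mask(n, pitch, r1):
    """Cosine FZA transmittance t(r) = (1 + cos(pi * r^2 / r1^2)) / 2."""
    x = (np.arange(n) - n / 2) * pitch
    xx, yy = np.meshgrid(x, x)
    return 0.5 * (1.0 + np.cos(np.pi * (xx**2 + yy**2) / r1**2))

def simulate_measurement(scene, psf):
    """Shadow-casting model: sensor image = scene convolved with the mask pattern."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))

def wiener_reconstruct(meas, psf, nsr=1e-2):
    """Single-step Wiener deconvolution of the coded shadow image."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(meas) * np.conj(H) / (np.abs(H)**2 + nsr)))

n = 256
psf = fza_mask(n, pitch=10e-6, r1=0.4e-3)
psf /= psf.sum()                                        # normalize mask energy
scene = np.zeros((n, n)); scene[96:160, 96:160] = 1.0   # toy binary target
rec = wiener_reconstruct(simulate_measurement(scene, psf), psf)
```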
Affiliation(s)
- Xu Zhang
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, No. 200 Xiaolingwei Street, Nanjing, Jiangsu Province 210094, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing, Jiangsu Province 210094, China
- Smart Computational Imaging Laboratory (SCILab), Nanjing University of Science and Technology, Nanjing, Jiangsu Province 210094, China
- Bowen Wang
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, No. 200 Xiaolingwei Street, Nanjing, Jiangsu Province 210094, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing, Jiangsu Province 210094, China
- Smart Computational Imaging Laboratory (SCILab), Nanjing University of Science and Technology, Nanjing, Jiangsu Province 210094, China
- Sheng Li
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, No. 200 Xiaolingwei Street, Nanjing, Jiangsu Province 210094, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing, Jiangsu Province 210094, China
- Smart Computational Imaging Laboratory (SCILab), Nanjing University of Science and Technology, Nanjing, Jiangsu Province 210094, China
- Kunyao Liang
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, No. 200 Xiaolingwei Street, Nanjing, Jiangsu Province 210094, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing, Jiangsu Province 210094, China
- Smart Computational Imaging Laboratory (SCILab), Nanjing University of Science and Technology, Nanjing, Jiangsu Province 210094, China
- Haitao Guan
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, No. 200 Xiaolingwei Street, Nanjing, Jiangsu Province 210094, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing, Jiangsu Province 210094, China
- Smart Computational Imaging Laboratory (SCILab), Nanjing University of Science and Technology, Nanjing, Jiangsu Province 210094, China
- Qian Chen
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, No. 200 Xiaolingwei Street, Nanjing, Jiangsu Province 210094, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing, Jiangsu Province 210094, China
- Smart Computational Imaging Laboratory (SCILab), Nanjing University of Science and Technology, Nanjing, Jiangsu Province 210094, China
- Chao Zuo
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, No. 200 Xiaolingwei Street, Nanjing, Jiangsu Province 210094, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing, Jiangsu Province 210094, China
- Smart Computational Imaging Laboratory (SCILab), Nanjing University of Science and Technology, Nanjing, Jiangsu Province 210094, China
2
Liu Z, Zeng T, Zhan X, Zhang X, Lam EY. Generative approach for lensless imaging in low-light conditions. OPTICS EXPRESS 2025; 33:3021-3039. PMID: 39876436; DOI: 10.1364/oe.544875.
Abstract
Lensless imaging offers a lightweight, compact alternative to traditional lens-based systems, ideal for exploration in space-constrained environments. However, the absence of a focusing lens and the limited lighting in such environments often result in low-light conditions, where the measurements suffer from complex noise interference due to insufficient photon capture. This study presents a robust reconstruction method for high-quality imaging in low-light scenarios, employing two complementary perspectives: model-driven and data-driven. First, we apply a physics-model-driven perspective to reconstruct the range space of the pseudo-inverse of the measurement model, which serves as the first guidance for extracting information from the noisy measurements. Then, we integrate a generative-model-based perspective, which serves as the second guidance, to suppress the residual noise in the initial noisy results. Specifically, a learnable Wiener filter-based module generates an initial, noisy reconstruction. Then, for fast and, more importantly, stable generation of the clear image from the noisy version, we implement a modified conditional generative diffusion module. This module converts the raw image into the latent wavelet domain for efficiency and uses modified bidirectional training processes for stabilization. Simulations and real-world experiments demonstrate substantial improvements in overall visual quality, advancing lensless imaging in challenging low-light environments.
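A minimal sketch of the two-stage idea (a differentiable Wiener first stage followed by a learned refiner) is shown below; the learnable noise-to-signal parameter and the placeholder CNN are assumptions standing in for the paper's learnable Wiener module and conditional wavelet-domain diffusion model, which are far more elaborate.

```python
import torch
import torch.nn as nn

class LearnableWiener(nn.Module):
    """Differentiable Wiener deconvolution with a learnable regularizer,
    serving as the model-driven first stage that yields a noisy initial image."""
    def __init__(self, psf):
        super().__init__()
        self.register_buffer("H", torch.fft.fft2(torch.as_tensor(psf, dtype=torch.complex64)))
        self.log_nsr = nn.Parameter(torch.tensor(-4.0))   # learnable noise-to-signal ratio

    def forward(self, meas):                              # meas: (H, W) float tensor
        G = torch.conj(self.H) / (self.H.abs() ** 2 + torch.exp(self.log_nsr))
        return torch.fft.ifft2(torch.fft.fft2(meas.to(torch.complex64)) * G).real

# placeholder for the data-driven second stage (the paper uses a wavelet-domain
# conditional diffusion model; a small CNN denoiser stands in here)
refiner = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))

psf = torch.rand(64, 64)                                  # hypothetical calibrated PSF
stage1 = LearnableWiener(psf)
noisy0 = stage1(torch.rand(64, 64))                       # initial noisy reconstruction
clean = refiner(noisy0[None, None])                       # refined estimate
```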
3
Zhang J, Geng R, Du X, Chen Y, Li H, Hu Y. Passive Non-Line-of-Sight Imaging with Light Transport Modulation. IEEE TRANSACTIONS ON IMAGE PROCESSING 2024; PP:410-424. PMID: 40030648; DOI: 10.1109/tip.2024.3518097.
Abstract
Passive non-line-of-sight (NLOS) imaging has witnessed rapid development in recent years, due to its ability to image objects that are out of sight. The light transport condition plays an important role in this task since changing the conditions will lead to different imaging models. Existing learning-based NLOS methods usually train independent models for different light transport conditions, which is computationally inefficient and impairs the practicality of the models. In this work, we propose NLOS-LTM, a novel passive NLOS imaging method that effectively handles multiple light transport conditions with a single network. We achieve this by inferring a latent light transport representation from the projection image and using this representation to modulate the network that reconstructs the hidden image from the projection image. We train a light transport encoder together with a vector quantizer to obtain the light transport representation. To further regulate this representation, we jointly learn both the reconstruction network and the reprojection network during training. A set of light transport modulation blocks is used to modulate the two jointly trained networks in a multi-scale way. Extensive experiments on a large-scale passive NLOS dataset demonstrate the superiority of the proposed method. The code is available at https://github.com/JerryOctopus/NLOS-LTM.
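The modulation idea can be illustrated with a FiLM-style block: a latent light-transport code predicts per-channel scales and shifts applied to feature maps. This sketch is an assumption about the general mechanism, not the paper's exact block design.

```python
import torch
import torch.nn as nn

class LightTransportModulation(nn.Module):
    """FiLM-style block: a latent light-transport code predicts per-channel
    scale and shift parameters that modulate reconstruction features."""
    def __init__(self, code_dim, channels):
        super().__init__()
        self.to_scale = nn.Linear(code_dim, channels)
        self.to_shift = nn.Linear(code_dim, channels)

    def forward(self, feats, code):        # feats: (N, C, H, W); code: (N, code_dim)
        gamma = self.to_scale(code)[:, :, None, None]
        beta = self.to_shift(code)[:, :, None, None]
        return feats * (1 + gamma) + beta

block = LightTransportModulation(code_dim=64, channels=32)
out = block(torch.randn(2, 32, 16, 16), torch.randn(2, 64))
```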
4
Molani A, Pennati F, Ravazzani S, Scarpellini A, Storti FM, Vegetali G, Paganelli C, Aliverti A. Advances in Portable Optical Microscopy Using Cloud Technologies and Artificial Intelligence for Medical Applications. SENSORS (BASEL, SWITZERLAND) 2024; 24:6682. PMID: 39460161; PMCID: PMC11510803; DOI: 10.3390/s24206682.
Abstract
The need for faster and more accessible alternatives to laboratory microscopy is driving many innovations throughout the image and data acquisition chain in the biomedical field. Benchtop microscopes are bulky, lack communication capabilities, and require trained personnel for analysis. New technologies, such as compact 3D-printed devices integrated with the Internet of Things (IoT) for data sharing and cloud computing, as well as automated image processing using deep learning algorithms, can address these limitations and enhance the conventional imaging workflow. This review reports on recent advancements in microscope miniaturization, with a focus on emerging technologies such as photoacoustic microscopy and more established approaches like smartphone-based microscopy. The potential applications of IoT in microscopy are examined in detail. Furthermore, this review discusses the evolution of image processing in microscopy, transitioning from traditional to deep learning methods that facilitate image enhancement and data interpretation. Despite numerous advancements in the field, there is a noticeable lack of studies that holistically address the entire microscopy acquisition chain. This review aims to highlight the potential of IoT and artificial intelligence (AI) in combination with portable microscopy, emphasizing the importance of a comprehensive approach to the microscopy acquisition chain, from portability to image analysis.
5
Wdowiak E, Rogalski M, Arcab P, Zdańkowski P, Józwik M, Trusiak M. Quantitative phase imaging verification in large field-of-view lensless holographic microscopy via two-photon 3D printing. Sci Rep 2024; 14:23611. PMID: 39384947; PMCID: PMC11464779; DOI: 10.1038/s41598-024-74866-8.
Abstract
Large field-of-view (FOV) microscopic imaging (over 100 mm²) with high lateral resolution (1-2 μm) plays a pivotal role in biomedicine and biophotonics, especially within the label-free regime. Lensless digital holographic microscopy (LDHM) is promising in this context, but ensuring accurate quantitative phase imaging (QPI) in large-FOV LDHM is challenging. While phantoms 3D-printed by two-photon polymerization (TPP) have facilitated testing of small-FOV lens-based QPI systems, an equivalent evaluation for lensless techniques remains elusive, compounded by issues such as the twin image and beam distortions, particularly towards the detector's edges. Here, we propose an application of TPP over a large area to examine phase consistency in LDHM. Our research involves fabricating widefield phase test targets with galvo and piezo scanning, scrutinizing them under single-shot twin-image-corrupted conditions and multi-frame iterative twin-image minimization scenarios. By measuring the structures near the detector's edges, we verified LDHM phase imaging errors across the entire FOV, with less than 12% difference in phase values between areas. Our findings indicate that TPP, followed by LDHM and Linnik interferometry cross-verification, requires new design considerations for precise large-area photonic manufacturing. This research paves the way for quantitative benchmarking of large-FOV lensless phase imaging, enhancing understanding and further development of the LDHM technique.
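Numerical refocusing in LDHM is typically done with the angular spectrum method; a minimal sketch follows. The hologram array, wavelength, pixel pitch, and propagation distance below are placeholder assumptions, and the single-shot result remains twin-image corrupted, as the abstract notes.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field over distance z with the angular spectrum method,
    the standard numerical refocusing step in lensless in-line holography."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    kz = 2 * np.pi * np.sqrt(np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

hologram = np.ones((512, 512))            # placeholder intensity frame
rec = angular_spectrum(np.sqrt(hologram), wavelength=532e-9, pitch=1.85e-6, z=-2e-3)
phase = np.angle(rec)                     # single-shot result: twin-image corrupted
```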
Affiliation(s)
- Emilia Wdowiak
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., Warsaw, 02-525, Poland.
- Mikołaj Rogalski
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., Warsaw, 02-525, Poland
- Piotr Arcab
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., Warsaw, 02-525, Poland
- Piotr Zdańkowski
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., Warsaw, 02-525, Poland
- Michał Józwik
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., Warsaw, 02-525, Poland
- Maciej Trusiak
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., Warsaw, 02-525, Poland.
6
Rosen J, Alford S, Allan B, Anand V, Arnon S, Arockiaraj FG, Art J, Bai B, Balasubramaniam GM, Birnbaum T, Bisht NS, Blinder D, Cao L, Chen Q, Chen Z, Dubey V, Egiazarian K, Ercan M, Forbes A, Gopakumar G, Gao Y, Gigan S, Gocłowski P, Gopinath S, Greenbaum A, Horisaki R, Ierodiaconou D, Juodkazis S, Karmakar T, Katkovnik V, Khonina SN, Kner P, Kravets V, Kumar R, Lai Y, Li C, Li J, Li S, Li Y, Liang J, Manavalan G, Mandal AC, Manisha M, Mann C, Marzejon MJ, Moodley C, Morikawa J, Muniraj I, Narbutis D, Ng SH, Nothlawala F, Oh J, Ozcan A, Park Y, Porfirev AP, Potcoava M, Prabhakar S, Pu J, Rai MR, Rogalski M, Ryu M, Choudhary S, Salla GR, Schelkens P, Şener SF, Shevkunov I, Shimobaba T, Singh RK, Singh RP, Stern A, Sun J, Zhou S, Zuo C, Zurawski Z, Tahara T, Tiwari V, Trusiak M, Vinu RV, Volotovskiy SG, Yılmaz H, De Aguiar HB, Ahluwalia BS, Ahmad A. Roadmap on computational methods in optical imaging and holography [invited]. APPLIED PHYSICS. B, LASERS AND OPTICS 2024; 130:166. PMID: 39220178; PMCID: PMC11362238; DOI: 10.1007/s00340-024-08280-3.
Abstract
Computational methods have been established as cornerstones of optical imaging and holography in recent years. Every year, the dependence of optical imaging and holography on computational methods increases significantly, to the extent that optical methods and components are being completely and efficiently replaced with computational methods at low cost. This roadmap reviews the current scenario in four major areas, namely incoherent digital holography, quantitative phase imaging, imaging through scattering layers, and super-resolution imaging. In addition to registering the perspectives of the modern-day architects of the above research areas, the roadmap also reports some of the latest studies on the topic. Computational codes and pseudocodes are presented in a plug-and-play fashion for readers to not only read and understand but also practice the latest algorithms with their own data. We believe that this roadmap will be a valuable tool for analyzing current trends in computational methods and for predicting and preparing the future of computational methods in optical imaging and holography. Supplementary Information: The online version contains supplementary material available at 10.1007/s00340-024-08280-3.
Affiliation(s)
- Joseph Rosen
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Simon Alford
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
- Blake Allan
- Faculty of Science Engineering and Built Environment, Deakin University, Princes Highway, Warrnambool, VIC 3280 Australia
- Vijayakumar Anand
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Optical Sciences Center and ARC Training Centre in Surface Engineering for Advanced Materials (SEAM), School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Hawthorn, Melbourne, VIC 3122 Australia
- Shlomi Arnon
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Francis Gracy Arockiaraj
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Jonathan Art
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
- Bijie Bai
- Electrical and Computer Engineering Department, Bioengineering Department, California NanoSystems Institute, University of California, Los Angeles (UCLA), Los Angeles, CA USA
- Ganesh M. Balasubramaniam
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Tobias Birnbaum
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, 1050 Brussel, Belgium
- Swave BV, Gaston Geenslaan 2, 3001 Leuven, Belgium
- Nandan S. Bisht
- Applied Optics and Spectroscopy Laboratory, Department of Physics, Soban Singh Jeena University Campus Almora, Almora, Uttarakhand 263601 India
- David Blinder
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, 1050 Brussel, Belgium
- IMEC, Kapeldreef 75, 3001 Leuven, Belgium
- Graduate School of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, Chiba Japan
- Liangcai Cao
- Department of Precision Instruments, Tsinghua University, Beijing, 100084 China
- Qian Chen
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Ziyang Chen
- Fujian Provincial Key Laboratory of Light Propagation and Transformation, College of Information Science and Engineering, Huaqiao University, Xiamen, 361021 Fujian China
- Vishesh Dubey
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
- Karen Egiazarian
- Computational Imaging Group, Faculty of Information Technology and Communication Sciences, Tampere University, 33100 Tampere, Finland
- Mert Ercan
- Institute of Materials Science and Nanotechnology, National Nanotechnology Research Center (UNAM), Bilkent University, 06800 Ankara, Turkey
- Department of Physics, Bilkent University, 06800 Ankara, Turkey
- Andrew Forbes
- School of Physics, University of the Witwatersrand, Johannesburg, South Africa
- G. Gopakumar
- Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Amritapuri, Vallikavu, Kerala India
- Yunhui Gao
- Department of Precision Instruments, Tsinghua University, Beijing, 100084 China
- Sylvain Gigan
- Laboratoire Kastler Brossel, Centre National de la Recherche Scientifique (CNRS) UMR 8552, Sorbonne Université, École Normale Supérieure-Paris Sciences et Lettres (PSL) Research University, Collège de France, 24 rue Lhomond, 75005 Paris, France
- Paweł Gocłowski
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
- Alon Greenbaum
- Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695 USA
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695 USA
- Bioinformatics Research Center, North Carolina State University, Raleigh, NC 27695 USA
- Ryoichi Horisaki
- Graduate School of Information Science and Technology, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656 Japan
- Daniel Ierodiaconou
- Faculty of Science Engineering and Built Environment, Deakin University, Princes Highway, Warrnambool, VIC 3280 Australia
- Saulius Juodkazis
- Optical Sciences Center and ARC Training Centre in Surface Engineering for Advanced Materials (SEAM), School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Hawthorn, Melbourne, VIC 3122 Australia
- World Research Hub Initiative (WRHI), Tokyo Institute of Technology, 2-12-1, Ookayama, Tokyo, 152-8550 Japan
- Tanushree Karmakar
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
- Vladimir Katkovnik
- Computational Imaging Group, Faculty of Information Technology and Communication Sciences, Tampere University, 33100 Tampere, Finland
- Svetlana N. Khonina
- IPSI RAS-Branch of the FSRC “Crystallography and Photonics” RAS, 443001 Samara, Russia
- Samara National Research University, 443086 Samara, Russia
- Peter Kner
- School of Electrical and Computer Engineering, University of Georgia, Athens, GA 30602 USA
- Vladislav Kravets
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Ravi Kumar
- Department of Physics, SRM University – AP, Amaravati, Andhra Pradesh 522502 India
- Yingming Lai
- Laboratory of Applied Computational Imaging, Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, Varennes, QC J3X 1P7 Canada
- Chen Li
- Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695 USA
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695 USA
- Jiaji Li
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
- Shaoheng Li
- School of Electrical and Computer Engineering, University of Georgia, Athens, GA 30602 USA
- Yuzhu Li
- Electrical and Computer Engineering Department, Bioengineering Department, California NanoSystems Institute, University of California, Los Angeles (UCLA), Los Angeles, CA USA
- Jinyang Liang
- Laboratory of Applied Computational Imaging, Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, Varennes, QC J3X 1P7 Canada
- Gokul Manavalan
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Aditya Chandra Mandal
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
- Manisha Manisha
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
- Christopher Mann
- Department of Applied Physics and Materials Science, Northern Arizona University, Flagstaff, AZ 86011 USA
- Center for Materials Interfaces in Research and Development, Northern Arizona University, Flagstaff, AZ 86011 USA
- Marcin J. Marzejon
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., 02-525 Warsaw, Poland
- Chané Moodley
- School of Physics, University of the Witwatersrand, Johannesburg, South Africa
- Junko Morikawa
- World Research Hub Initiative (WRHI), Tokyo Institute of Technology, 2-12-1, Ookayama, Tokyo, 152-8550 Japan
- Inbarasan Muniraj
- LiFE Lab, Department of Electronics and Communication Engineering, Alliance School of Applied Engineering, Alliance University, Bangalore, Karnataka 562106 India
- Donatas Narbutis
- Institute of Theoretical Physics and Astronomy, Faculty of Physics, Vilnius University, Sauletekio 9, 10222 Vilnius, Lithuania
- Soon Hock Ng
- Optical Sciences Center and ARC Training Centre in Surface Engineering for Advanced Materials (SEAM), School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Hawthorn, Melbourne, VIC 3122 Australia
- Fazilah Nothlawala
- School of Physics, University of the Witwatersrand, Johannesburg, South Africa
- Jeonghun Oh
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141 South Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, 34141 South Korea
- Aydogan Ozcan
- Electrical and Computer Engineering Department, Bioengineering Department, California NanoSystems Institute, University of California, Los Angeles (UCLA), Los Angeles, CA USA
- YongKeun Park
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141 South Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, 34141 South Korea
- Tomocube Inc., Daejeon, 34051 South Korea
- Alexey P. Porfirev
- IPSI RAS-Branch of the FSRC “Crystallography and Photonics” RAS, 443001 Samara, Russia
- Mariana Potcoava
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
- Shashi Prabhakar
- Quantum Science and Technology Laboratory, Physical Research Laboratory, Navrangpura, Ahmedabad, 380009 India
- Jixiong Pu
- Fujian Provincial Key Laboratory of Light Propagation and Transformation, College of Information Science and Engineering, Huaqiao University, Xiamen, 361021 Fujian China
- Mani Ratnam Rai
- Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695 USA
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695 USA
- Mikołaj Rogalski
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., 02-525 Warsaw, Poland
- Meguya Ryu
- Research Institute for Material and Chemical Measurement, National Metrology Institute of Japan (AIST), 1-1-1 Umezono, Tsukuba, 305-8563 Japan
- Sakshi Choudhary
- Department of Chemical Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Gangi Reddy Salla
- Department of Physics, SRM University – AP, Amaravati, Andhra Pradesh 522502 India
- Peter Schelkens
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel (VUB), Pleinlaan 2, 1050 Brussel, Belgium
- IMEC, Kapeldreef 75, 3001 Leuven, Belgium
- Sarp Feykun Şener
- Institute of Materials Science and Nanotechnology, National Nanotechnology Research Center (UNAM), Bilkent University, 06800 Ankara, Turkey
- Department of Physics, Bilkent University, 06800 Ankara, Turkey
- Igor Shevkunov
- Computational Imaging Group, Faculty of Information Technology and Communication Sciences, Tampere University, 33100 Tampere, Finland
- Tomoyoshi Shimobaba
- Graduate School of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba, Chiba Japan
- Rakesh K. Singh
- Laboratory of Information Photonics and Optical Metrology, Department of Physics, Indian Institute of Technology (Banaras Hindu University), Varanasi, Uttar Pradesh 221005 India
- Ravindra P. Singh
- Quantum Science and Technology Laboratory, Physical Research Laboratory, Navrangpura, Ahmedabad, 380009 India
- Adrian Stern
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, 8410501 Beer-Sheva, Israel
- Jiasong Sun
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
- Shun Zhou
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
- Chao Zuo
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Laboratory (SCILab), School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing, 210094 Jiangsu China
- Smart Computational Imaging Research Institute (SCIRI), Nanjing, 210019 Jiangsu China
- Zack Zurawski
- Department of Anatomy and Cell Biology, University of Illinois at Chicago, 808 South Wood Street, Chicago, IL 60612 USA
- Tatsuki Tahara
- Applied Electromagnetic Research Center, Radio Research Institute, National Institute of Information and Communications Technology (NICT), 4-2-1 Nukuikitamachi, Koganei, Tokyo 184-8795 Japan
- Vipin Tiwari
- Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Maciej Trusiak
- Institute of Micromechanics and Photonics, Warsaw University of Technology, 8 Sw. A. Boboli St., 02-525 Warsaw, Poland
- R. V. Vinu
- Fujian Provincial Key Laboratory of Light Propagation and Transformation, College of Information Science and Engineering, Huaqiao University, Xiamen, 361021 Fujian China
- Sergey G. Volotovskiy
- IPSI RAS-Branch of the FSRC “Crystallography and Photonics” RAS, 443001 Samara, Russia
- Hasan Yılmaz
- Institute of Materials Science and Nanotechnology, National Nanotechnology Research Center (UNAM), Bilkent University, 06800 Ankara, Turkey
- Hilton Barbosa De Aguiar
- Laboratoire Kastler Brossel, Centre National de la Recherche Scientifique (CNRS) UMR 8552, Sorbonne Université, École Normale Supérieure-Paris Sciences et Lettres (PSL) Research University, Collège de France, 24 rue Lhomond, 75005 Paris, France
- Balpreet S. Ahluwalia
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
- Azeem Ahmad
- Department of Physics and Technology, UiT The Arctic University of Norway, 9037 Tromsø, Norway
7
Ni C, Yang C, Zhang X, Li Y, Zhang W, Zhai Y, He W, Chen Q. Address model mismatch and defocus in FZA lensless imaging via model-driven CycleGAN. OPTICS LETTERS 2024; 49:4170-4173. PMID: 39090886; DOI: 10.1364/ol.528502.
Abstract
Mask-based lensless imaging systems suffer from model mismatch and defocus. In this Letter, we propose a model-driven CycleGAN, MDGAN, to reconstruct objects across a long range of distances. MDGAN includes two translation cycles, for objects and measurements respectively, each consisting of a forward propagation module and a backward reconstruction module. The backward module resembles the Wiener-U-Net, and the forward module consists of the estimated image formation model of a Fresnel zone aperture camera (FZACam), followed by a CNN that compensates for the model mismatch. By imposing cycle consistency, the backward module can adaptively match the actual depth-varying imaging process. We demonstrate that MDGAN, based on either a simulated or a calibrated imaging model, produces higher-quality images than existing methods. Thus, it can be applied to other mask-based systems.
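The cycle-consistency objective at the heart of such a design can be sketched in a few lines; `forward_net` and `backward_net` are hypothetical stand-ins for the paper's forward (imaging model plus compensation CNN) and backward (Wiener-U-Net-like) modules, and the adversarial terms are omitted.

```python
import torch.nn.functional as F

def cycle_losses(obj, meas, forward_net, backward_net):
    """Cycle-consistency terms for the two translation cycles:
    object -> measurement -> object and measurement -> object -> measurement.
    The GAN and fidelity losses of the full method are omitted."""
    obj_cycle = backward_net(forward_net(obj))
    meas_cycle = forward_net(backward_net(meas))
    return F.l1_loss(obj_cycle, obj) + F.l1_loss(meas_cycle, meas)
```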
8
Olorocisimo JP, Ohta Y, Regonia PR, Castillo VCG, Yoshimoto J, Takehara H, Sasagawa K, Ohta J. Brain-implantable needle-type CMOS imaging device enables multi-layer dissection of seizure calcium dynamics in the hippocampus. J Neural Eng 2024; 21:046022. PMID: 38925109; DOI: 10.1088/1741-2552/ad5c03.
Abstract
Objective: Current neuronal imaging methods mostly use bulky lenses that either impede animal behavior or prohibit multi-depth imaging. To overcome these limitations, we developed a lightweight lensless biophotonic system for neuronal imaging, enabling compact and simultaneous visualization of multiple brain layers. Approach: Our 'CIS-NAIST' device integrates a micro-CMOS image sensor, a thin-film fluorescence filter, micro-LEDs, and a needle-shaped flexible printed circuit. With this device, we monitored neuronal calcium dynamics during seizures across the different layers of the hippocampus and employed machine learning techniques for seizure classification and prediction. Main results: The CIS-NAIST device revealed distinct calcium activity patterns across the CA1 layer, molecular interlayer, and dentate gyrus. Our findings indicated elevated calcium amplitude activity specifically in the dentate gyrus compared to the other layers. Then, leveraging the multi-layer data obtained from the device, we successfully classified seizure calcium activity and predicted seizure behavior using Long Short-Term Memory and Hidden Markov models. Significance: Taken together, our CIS-NAIST device offers an effective and minimally invasive method of seizure monitoring that can help elucidate the mechanisms of temporal lobe epilepsy.
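A minimal sketch of an LSTM-based seizure classifier over multi-layer calcium traces is given below; all sizes (three depth channels, hidden width, sequence length) are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SeizureLSTM(nn.Module):
    """LSTM over per-layer calcium traces; input (batch, time, depth_channels),
    output seizure/non-seizure logits."""
    def __init__(self, n_depths=3, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_depths, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, traces):
        _, (h, _) = self.lstm(traces)      # h: (num_layers, batch, hidden)
        return self.head(h[-1])

model = SeizureLSTM()
logits = model(torch.randn(8, 200, 3))     # 8 clips, 200 time points, 3 depths
```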
Affiliation(s)
- Yasumi Ohta
- Division of Materials Science, Nara Institute of Science and Technology, Ikoma, Japan
- Paul R Regonia
- Department of Computer Science, University of the Philippines Diliman, Manila, The Philippines
- Virgil C G Castillo
- Division of Materials Science, Nara Institute of Science and Technology, Ikoma, Japan
- Junichiro Yoshimoto
- Department of Biomedical Data Science, Fujita Health University School of Medicine, Toyoake, Japan
- Hironari Takehara
- Division of Materials Science, Nara Institute of Science and Technology, Ikoma, Japan
- Kiyotaka Sasagawa
- Division of Materials Science, Nara Institute of Science and Technology, Ikoma, Japan
- Jun Ohta
- Division of Materials Science, Nara Institute of Science and Technology, Ikoma, Japan
9
Yang Q, Guo R, Hu G, Xue Y, Li Y, Tian L. Wide-field, high-resolution reconstruction in computational multi-aperture miniscope using a Fourier neural network. OPTICA 2024; 11:860-871. PMID: 39895923; PMCID: PMC11784641; DOI: 10.1364/optica.523636.
Abstract
Traditional fluorescence microscopy is constrained by inherent trade-offs among resolution, field of view, and system complexity. To navigate these challenges, we introduce a simple and low-cost computational multi-aperture miniature microscope, utilizing a microlens array for single-shot wide-field, high-resolution imaging. Addressing the challenges posed by extensive view multiplexing and non-local, shift-variant aberrations in this device, we present SV-FourierNet, a multi-channel Fourier neural network. SV-FourierNet facilitates high-resolution image reconstruction across the entire imaging field through its learned global receptive field. We establish a close relationship between the physical spatially varying point-spread functions and the network's learned effective receptive field. This ensures that SV-FourierNet has effectively encapsulated the spatially varying aberrations in our system and learned a physically meaningful function for image reconstruction. Training of SV-FourierNet is conducted entirely on a physics-based simulator. We showcase wide-field, high-resolution video reconstructions on colonies of freely moving C. elegans and imaging of a mouse brain section. Our computational multi-aperture miniature microscope, augmented with SV-FourierNet, represents a major advancement in computational microscopy and may find broad applications in biomedical research and other fields requiring compact microscopy solutions.
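The key ingredient of a Fourier neural network, a learnable full-field transfer function applied in the frequency domain, can be sketched as follows; this single-channel toy (with an assumed 128×128 grid) only illustrates how a global receptive field arises, not SV-FourierNet's multi-channel architecture.

```python
import torch
import torch.nn as nn

class FourierChannel(nn.Module):
    """A learnable full-field transfer function applied in the frequency domain;
    every output pixel depends on every input pixel (global receptive field)."""
    def __init__(self, h, w):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(h, w, dtype=torch.cfloat))

    def forward(self, x):                  # x: (N, 1, H, W) real
        return torch.fft.ifft2(torch.fft.fft2(x) * self.weight).real

layer = FourierChannel(128, 128)
y = layer(torch.randn(2, 1, 128, 128))
```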
Affiliation(s)
- Qianwan Yang
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Ruipeng Guo
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Guorong Hu
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Yujia Xue
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Yunzhe Li
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, California 94720, USA
- Lei Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215, USA
- Department of Biomedical Engineering, Boston University, Boston, Massachusetts 02215, USA
- Neurophotonics Center, Boston University, Boston, Massachusetts 02215, USA
10
Tian Z, Li L, Ma J, Cao L, Su P. CFZA camera: a high-resolution lensless imaging technique based on compound Fresnel zone aperture. OPTICS LETTERS 2024; 49:3532-3535. PMID: 38875663; DOI: 10.1364/ol.527533.
Abstract
In lensless imaging using a Fresnel zone aperture (FZA), it is generally believed that the resolution is limited by the outermost ring breadth of the FZA. This limit can potentially be broken by exploiting the multi-order property of binary FZAs. In this Letter, we propose to use a high-order component of the FZA as the point spread function (PSF) and develop a high-order transfer function backpropagation (HBP) algorithm to enhance the resolution. Because the proportion of high-order diffraction energy is low, severe defocus noise appears in the reconstructed image. To address this issue, we propose a compound FZA (CFZA), which merges two partial FZAs operating at different orders as the mask, striking a balance between noise and resolution. Experimental results verify that a CFZA-based camera achieves double the resolution of a traditional FZA-based camera with an identical outermost ring breadth and supports high-quality reconstruction by a single HBP without calibration. Our method offers a cost-effective solution for achieving high-resolution imaging, expanding the potential applications of FZA-based lensless imaging in a variety of areas.
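The idea of backpropagating with a high-order transfer function can be sketched as follows: the m-th harmonic of a binary FZA behaves like a fringe pattern with effective zone parameter r₁²/m, so the Fresnel-type transfer function uses that scaled parameter. This is a toy, uncalibrated version under assumed parameters, not the Letter's full HBP implementation.

```python
import numpy as np

def hbp_reconstruct(meas, r1, pitch, order=3):
    """Backpropagate an FZA shadow with the transfer function of its m-th
    harmonic, i.e. an effective zone parameter r1^2 / m (finer fringes,
    hence higher resolution). Toy, uncalibrated version."""
    n = meas.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(1j * np.pi * (r1**2 / order) * (FX**2 + FY**2))
    return np.abs(np.fft.ifft2(np.fft.fft2(meas) * H))

meas = np.random.rand(256, 256)            # placeholder coded measurement
rec = hbp_reconstruct(meas, r1=0.2e-3, pitch=5e-6, order=3)
```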
11
Zheng Z, Liu B, Song J, Ding L, Zhong X, Chang L, Wu X, McGloin D, Wang F. Temporal compressive edge imaging enabled by a lensless diffuser camera. OPTICS LETTERS 2024; 49:3058-3061. PMID: 38824327; DOI: 10.1364/ol.515429.
Abstract
Lensless imagers based on diffusers or encoding masks enable high-dimensional imaging from a single-shot measurement and have been applied in various applications. However, to further extract image information such as edges, conventional post-processing filtering operations are needed after the reconstruction of the original object images in diffuser imaging systems. Here, we present the concept of a temporal compressive edge detection method based on a lensless diffuser camera, which can directly recover a time sequence of edge images of a moving object from a single-shot measurement, without further post-processing steps. Our approach provides higher image quality during edge detection compared with the conventional post-processing method. We demonstrate the effectiveness of this approach by both numerical simulation and experiments. The proof-of-concept approach can be further developed with other image post-processing operations or versatile computer vision tasks toward task-oriented intelligent lensless imaging systems.
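One plausible way to fold edge extraction into the sensing operator is sketched below: each video frame is edge-filtered, modulated by a time-varying mask, and summed into a single coded snapshot, from which a compressive-sensing solver (not shown) would recover the edge video. The Laplacian kernel, mask statistics, and sizes are assumptions, not the Letter's exact formulation.

```python
import numpy as np

def conv2(img, k):
    """Circular 2D convolution via shifted sums (adequate for a toy model)."""
    kh, kw = k.shape
    return sum(k[i, j] * np.roll(img, (i - kh // 2, j - kw // 2), (0, 1))
               for i in range(kh) for j in range(kw))

def snapshot(frames, masks, kernel):
    """One coded exposure: edge-filter each frame, modulate it with its
    time-varying mask, and sum the stream on the sensor."""
    return sum(m * conv2(f, kernel) for f, m in zip(frames, masks))

laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)   # edge kernel
frames = np.random.rand(8, 64, 64)                   # toy moving-object video
masks = (np.random.rand(8, 64, 64) > 0.5).astype(float)
meas = snapshot(frames, masks, laplacian)            # single coded measurement
```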
12
Shutler PME, Byard K. Optical experimental results using Singer product apertures. APPLIED OPTICS 2024; 63:2759-2782. PMID: 38856371; DOI: 10.1364/ao.514108.
Abstract
We present the first optical experimental results obtained using the recently developed Singer product apertures. We also show that Fenimore and Cannon's fine sampling and delta decoding techniques can be combined with the fast direct vector decoding algorithm for Singer product apertures. We demonstrate resolutions and decoding speeds comparable to, or better than, those currently reported in the optical literature. Taken together these make possible coded aperture video in the optical domain.
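For context, the baseline coded-aperture reconstruction that delta decoding refines is a cyclic cross-correlation of the shadowgram with a decoding array, shown below with FFTs; the fast direct vector decoding specific to Singer product apertures is not reproduced here.

```python
import numpy as np

def correlation_decode(shadowgram, decoding_array):
    """Coded-aperture reconstruction by cyclic cross-correlation with a
    decoding array, computed in O(N log N) with FFTs."""
    S = np.fft.fft2(shadowgram)
    G = np.fft.fft2(decoding_array)
    return np.real(np.fft.ifft2(S * np.conj(G)))
```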
13
Yang C, Ni C, Zhang X, Li Y, Zhai Y, He W, Zhang W, Chen Q. Extended depth of field for Fresnel zone aperture camera via fast passive depth estimation. OPTICS EXPRESS 2024; 32:11323-11336. PMID: 38570982; DOI: 10.1364/oe.519871.
Abstract
The lensless camera with incoherent illumination has gained significant research interest for its thin and flexible structure. However, it faces challenges in resolving scenes with a wide depth of field (DoF) due to its depth-dependent point spread function (PSF). In this paper, we present a single-shot method for extending the DoF in Fresnel zone aperture (FZA) cameras at visible wavelengths through passive depth estimation. An improved ternary search method is utilized to rapidly determine the depth of targets by evaluating the sharpness of the back-propagation reconstruction. Based on the depth estimation results, a set of reconstructed images focused on targets at varying depths is derived from the encoded image. After that, the DoF is extended through focus stacking. The experimental results demonstrate an 8-fold increase in DoF compared with the calibrated DoF at 130 mm depth. Moreover, our depth estimation method is five times faster than the traversal method while maintaining the same level of accuracy. The proposed method facilitates the development of lensless imaging in practical applications such as photography, microscopy, and surveillance.
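The depth search itself is easy to sketch: ternary search over the candidate depth range, scoring each back-propagation reconstruction with a sharpness metric, assuming sharpness is unimodal in depth. The Laplacian-variance metric and tolerance below are assumptions; `reconstruct` stands for the user's back-propagation routine.

```python
import numpy as np

def sharpness(img):
    """Focus metric: variance of the Laplacian (any sharpness measure works)."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap.var()

def ternary_search_depth(reconstruct, z_lo, z_hi, tol=1e-3):
    """Ternary search for the depth maximizing reconstruction sharpness;
    `reconstruct(z)` returns the back-propagation reconstruction at depth z."""
    while z_hi - z_lo > tol:
        z1 = z_lo + (z_hi - z_lo) / 3
        z2 = z_hi - (z_hi - z_lo) / 3
        if sharpness(reconstruct(z1)) < sharpness(reconstruct(z2)):
            z_lo = z1
        else:
            z_hi = z2
    return 0.5 * (z_lo + z_hi)
```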
14
Wu J, Chen Y, Veeraraghavan A, Seidemann E, Robinson JT. Mesoscopic calcium imaging in a head-unrestrained male non-human primate using a lensless microscope. Nat Commun 2024; 15:1271. PMID: 38341403; PMCID: PMC10858944; DOI: 10.1038/s41467-024-45417-6.
Abstract
Mesoscopic calcium imaging enables studies of cell-type specific neural activity over large areas. A growing body of literature suggests that neural activity can be different when animals are free to move compared to when they are restrained. Unfortunately, existing systems for imaging calcium dynamics over large areas in non-human primates (NHPs) are table-top devices that require restraint of the animal's head. Here, we demonstrate an imaging device capable of imaging mesoscale calcium activity in a head-unrestrained male non-human primate. We successfully miniaturize our system by replacing lenses with an optical mask and computational algorithms. The resulting lensless microscope can fit comfortably on an NHP, allowing its head to move freely while imaging. We are able to measure orientation columns maps over a 20 mm2 field-of-view in a head-unrestrained macaque. Our work establishes mesoscopic imaging using a lensless microscope as a powerful approach for studying neural activity under more naturalistic conditions.
Affiliation(s)
- Jimin Wu
- Department of Bioengineering, Rice University, 6100 Main Street, Houston, TX, 77005, USA
- Yuzhi Chen
- Department of Neuroscience, University of Texas at Austin, 100 E 24th St., Austin, TX, 78712, USA
- Department of Psychology, University of Texas at Austin, 108 E Dean Keeton St., Austin, TX, 78712, USA
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, 6100 Main Street, Houston, TX, 77005, USA
- Department of Computer Science, Rice University, 6100 Main Street, Houston, TX, 77005, USA
- Eyal Seidemann
- Department of Neuroscience, University of Texas at Austin, 100 E 24th St., Austin, TX, 78712, USA
- Department of Psychology, University of Texas at Austin, 108 E Dean Keeton St., Austin, TX, 78712, USA
- Jacob T Robinson
- Department of Bioengineering, Rice University, 6100 Main Street, Houston, TX, 77005, USA
- Department of Electrical and Computer Engineering, Rice University, 6100 Main Street, Houston, TX, 77005, USA
- Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, TX, 77030, USA
15
Xu F, Wu Z, Tan C, Liao Y, Wang Z, Chen K, Pan A. Fourier Ptychographic Microscopy 10 Years on: A Review. Cells 2024; 13:324. PMID: 38391937; PMCID: PMC10887115; DOI: 10.3390/cells13040324.
Abstract
Fourier ptychographic microscopy (FPM) emerged as a prominent imaging technique in 2013, attracting significant interest due to its remarkable features such as precise phase retrieval, expansive field of view (FOV), and superior resolution. Over the past decade, FPM has become an essential tool in microscopy, with applications in metrology, scientific research, biomedicine, and inspection. This achievement arises from its ability to effectively address the persistent challenge of achieving a trade-off between FOV and resolution in imaging systems. It has a wide range of applications, including label-free imaging, drug screening, and digital pathology. In this comprehensive review, we present a concise overview of the fundamental principles of FPM and compare it with similar imaging techniques. In addition, we present a study on achieving colorization of restored photographs and enhancing the speed of FPM. Subsequently, we showcase several FPM applications utilizing the previously described technologies, with a specific focus on digital pathology, drug screening, and three-dimensional imaging. We thoroughly examine the benefits and challenges associated with integrating deep learning and FPM. To summarize, we express our own viewpoints on the technological progress of FPM and explore prospective avenues for its future developments.
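The core Gerchberg-Saxton-style FPM recovery loop can be condensed to a few lines: for each LED, crop the corresponding sub-spectrum, enforce the measured amplitude in real space, and write the updated spectrum back. The sketch below is a bare-bones toy under simplifying assumptions (plane-wave initialization, known binary pupil, precomputed spectrum offsets, no pupil or LED-position recovery).

```python
import numpy as np

def fpm_recover(images, centers, pupil, n_hr, n_iter=10):
    """Sequential spectrum-stitching FPM recovery (Gerchberg-Saxton style).
    images: low-res intensity images, one per LED; centers: top-left (cy, cx)
    of each sub-spectrum window; pupil: binary low-res pupil mask."""
    n_lr = pupil.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(np.ones((n_hr, n_hr), complex)))
    scale = (n_lr / n_hr) ** 2
    for _ in range(n_iter):
        for img, (cy, cx) in zip(images, centers):
            sub = spectrum[cy:cy + n_lr, cx:cx + n_lr]
            low = np.fft.ifft2(np.fft.ifftshift(sub * pupil)) * scale
            low = np.sqrt(img) * np.exp(1j * np.angle(low))   # amplitude replacement
            upd = np.fft.fftshift(np.fft.fft2(low)) / scale
            sub[pupil > 0] = upd[pupil > 0]                   # write back in support
    return np.fft.ifft2(np.fft.ifftshift(spectrum))
```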
Affiliation(s)
- Fannuo Xu
- State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China; (F.X.); (Z.W.); (C.T.); (Y.L.); (Z.W.); (K.C.)
- University of Chinese Academy of Sciences, Beijing 100049, China
- Zipei Wu
- State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China; (F.X.); (Z.W.); (C.T.); (Y.L.); (Z.W.); (K.C.)
- School of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
- Chao Tan
- State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China; (F.X.); (Z.W.); (C.T.); (Y.L.); (Z.W.); (K.C.)
- School of Electronics and Information Engineering, Sichuan University, Chengdu 610065, China
- Yizheng Liao
- State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China; (F.X.); (Z.W.); (C.T.); (Y.L.); (Z.W.); (K.C.)
- University of Chinese Academy of Sciences, Beijing 100049, China
- Zhiping Wang
- State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China; (F.X.); (Z.W.); (C.T.); (Y.L.); (Z.W.); (K.C.)
- School of Physical Science and Technology, Lanzhou University, Lanzhou 730000, China
- Keru Chen
- State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China; (F.X.); (Z.W.); (C.T.); (Y.L.); (Z.W.); (K.C.)
- School of Automation Science and Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- An Pan
- State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China; (F.X.); (Z.W.); (C.T.); (Y.L.); (Z.W.); (K.C.)
- University of Chinese Academy of Sciences, Beijing 100049, China
16
Wang Z, Zheng S, Ding Z, Guo C. Dual-constrained physics-enhanced untrained neural network for lensless imaging. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2024; 41:165-173. PMID: 38437329; DOI: 10.1364/josaa.510147.
Abstract
An untrained neural network (UNN) paves a new way to realize lensless imaging from single-frame intensity data. Built on a physics engine, such methods utilize the smoothness property of a convolutional kernel and provide an iterative self-supervised learning framework that relieves the need for an end-to-end training scheme with a large dataset. However, the intrinsic overfitting problem of UNNs is a challenging issue for stable and robust reconstruction. To address it, we model the phase retrieval problem as a dual-constrained untrained network, in which a phase-amplitude alternating optimization framework is designed to split the intensity-to-phase problem into two tasks: phase and amplitude optimization. In the phase optimization step, we combine a deep image prior with a total variation prior to construct the loss function for the phase update. In the amplitude optimization step, a total variation denoising-based Wirtinger gradient descent method is constructed to form an amplitude constraint. Alternating iterations of the two tasks result in high-performance wavefield reconstruction. Experimental results demonstrate the superiority of our method.
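A bare-bones version of the alternating scheme is sketched below: freeze the amplitude and take a TV-regularized gradient step on the phase, then swap roles. The plain gradient steps and anisotropic TV term are simplifying assumptions; the paper's deep-image-prior parametrization and TV-denoising-based Wirtinger updates are not reproduced.

```python
import torch

def tv(x):
    """Anisotropic total-variation prior."""
    return (x[1:, :] - x[:-1, :]).abs().mean() + (x[:, 1:] - x[:, :-1]).abs().mean()

def alternate_step(amp, phase, meas_amp, propagate, lr=0.1, tv_w=1e-2):
    """One alternating update: optimize phase with amplitude frozen, then
    amplitude with phase frozen, each via a TV-regularized gradient step.
    `propagate` is a differentiable forward model to the sensor plane."""
    phase = phase.detach().requires_grad_(True)
    loss = ((propagate(amp.detach() * torch.exp(1j * phase)).abs()
             - meas_amp) ** 2).mean() + tv_w * tv(phase)
    phase = (phase - lr * torch.autograd.grad(loss, phase)[0]).detach()

    amp = amp.detach().requires_grad_(True)
    loss = ((propagate(amp * torch.exp(1j * phase)).abs()
             - meas_amp) ** 2).mean() + tv_w * tv(amp)
    amp = (amp - lr * torch.autograd.grad(loss, amp)[0]).detach()
    return amp, phase
```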
17
Huang Y, Krishnan G, Goswami S, Javidi B. Underwater optical signal detection system using diffuser-based lensless imaging. OPTICS EXPRESS 2024; 32:1489-1500. PMID: 38297699; DOI: 10.1364/oe.512438.
Abstract
We propose a diffuser-based lensless underwater optical signal detection system. The system consists of a lensless one-dimensional (1D) camera array equipped with random phase modulators for signal acquisition and one-dimensional integral imaging convolutional neural network (1DInImCNN) for signal classification. During the acquisition process, the encoded signal transmitted by a light-emitting diode passes through a turbid medium as well as partial occlusion. The 1D diffuser-based lensless camera array is used to capture the transmitted information. The captured pseudorandom patterns are then classified through the 1DInImCNN to output the desired signal. We compared our proposed underwater lensless optical signal detection system with an equivalent lens-based underwater optical signal detection system in terms of detection performance and computational cost. The results show that the former outperforms the latter. Moreover, we use dimensionality reduction on the lensless pattern and study their theoretical computational costs and detection performance. The results show that the detection performance of lensless systems does not suffer appreciably. This makes lensless systems a great candidate for low-cost compressive underwater optical imaging and signal detection.
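A toy 1D CNN classifier of the kind that could stand in for the paper's 1DInImCNN is sketched below; the layer sizes, pooling, and two-class head are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Signal1DCNN(nn.Module):
    """Toy 1D CNN that classifies lensless pseudorandom intensity patterns
    into transmitted-signal classes."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, 7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
            nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):                  # x: (N, 1, L) captured 1D patterns
        return self.net(x)

model = Signal1DCNN()
logits = model(torch.randn(4, 1, 1024))
```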
18
Baik M, Shin S, Kumar S, Seo D, Lee I, Jun HS, Kang KW, Kim BS, Nam MH, Seo S. Label-Free CD34+ Cell Identification Using Deep Learning and Lens-Free Shadow Imaging Technology. BIOSENSORS 2023; 13:993. PMID: 38131753; PMCID: PMC10741567; DOI: 10.3390/bios13120993.
Abstract
Accurate and efficient classification and quantification of CD34+ cells are essential for the diagnosis and monitoring of leukemia. Current methods, such as flow cytometry, are complex, time-consuming, and require specialized expertise and equipment. This study proposes a novel approach for the label-free identification of CD34+ cells using a deep learning model and lens-free shadow imaging technology (LSIT). LSIT is a portable and user-friendly technique that eliminates the need for cell staining, enhances accessibility for nonexperts, and reduces the risk of sample degradation. The study involved three phases: sample preparation, dataset generation, and data analysis. Bone marrow and peripheral blood samples were collected from leukemia patients, and mononuclear cells were isolated using Ficoll density gradient centrifugation. The samples were then injected into a cell chip and analyzed using a proprietary LSIT-based device (Cellytics). A robust dataset was generated, and a custom AlexNet deep learning model was meticulously trained to distinguish CD34+ from non-CD34+ cells using the dataset. The model achieved high accuracy in identifying CD34+ cells from 1929 bone marrow cell images, with training and validation accuracies of 97.3% and 96.2%, respectively. The customized AlexNet model outperformed the Vgg16 and ResNet50 models. It also demonstrated a strong correlation with the standard fluorescence-activated cell sorting (FACS) technique for quantifying CD34+ cells across 13 patient samples, yielding a coefficient of determination of 0.81. Bland-Altman analysis confirmed the model's reliability, with a mean bias of -2.29 and 95% limits of agreement between -23.07 and 18.49. This deep-learning-powered LSIT offers a groundbreaking approach to detecting CD34+ cells without the need for cell staining, facilitating rapid CD34+ cell classification, even by individuals without prior expertise.
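The reported agreement statistics are straightforward to reproduce for one's own data; the sketch below computes the Bland-Altman mean bias and 95% limits of agreement between model-derived and FACS-derived CD34+ counts (the input arrays are hypothetical).

```python
import numpy as np

def bland_altman(model_counts, facs_counts):
    """Mean bias and 95% limits of agreement between two quantification methods."""
    diff = np.asarray(model_counts, float) - np.asarray(facs_counts, float)
    bias = diff.mean()
    half = 1.96 * diff.std(ddof=1)
    return bias, (bias - half, bias + half)

bias, (low, high) = bland_altman([12, 30, 7, 55], [14, 28, 9, 60])  # hypothetical counts
```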
Affiliation(s)
- Minyoung Baik
- Department of Electronics and Information Engineering, Korea University, Sejong 30019, Republic of Korea; (M.B.); (S.S.); (S.K.)
- Sanghoon Shin
- Department of Electronics and Information Engineering, Korea University, Sejong 30019, Republic of Korea; (M.B.); (S.S.); (S.K.)
- Samir Kumar
- Department of Electronics and Information Engineering, Korea University, Sejong 30019, Republic of Korea; (M.B.); (S.S.); (S.K.)
- Dongmin Seo
- Department of Electrical Engineering, Semyung University, Jecheon 27136, Republic of Korea
- Inha Lee
- Department of Biotechnology and Bioinformatics, Korea University, Sejong 30019, Republic of Korea; (I.L.); (H.S.J.)
- Hyun Sik Jun
- Department of Biotechnology and Bioinformatics, Korea University, Sejong 30019, Republic of Korea; (I.L.); (H.S.J.)
- Ka-Won Kang
- Department of Hematology, Anam Hospital, Korea University College of Medicine, Seoul 02841, Republic of Korea; (K.-W.K.); (B.S.K.)
- Byung Soo Kim
- Department of Hematology, Anam Hospital, Korea University College of Medicine, Seoul 02841, Republic of Korea; (K.-W.K.); (B.S.K.)
- Myung-Hyun Nam
- Department of Laboratory Medicine, Anam Hospital, Korea University College of Medicine, Seoul 02841, Republic of Korea
- Sungkyu Seo
- Department of Electronics and Information Engineering, Korea University, Sejong 30019, Republic of Korea; (M.B.); (S.S.); (S.K.)
Collapse
19
Li Y, Li Z, Chen K, Guo Y, Rao C. MWDNs: reconstruction in multi-scale feature spaces for lensless imaging. OPTICS EXPRESS 2023; 31:39088-39101. [PMID: 38017997 DOI: 10.1364/oe.501970] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/28/2023] [Accepted: 10/18/2023] [Indexed: 11/30/2023]
Abstract
Lensless cameras, consisting of only a sensor and a mask, are small and flexible enough to be used in many applications with stringent scale constraints. These mask-based imagers encode scenes in caustic patterns. Most existing reconstruction algorithms rely on multiple physical-model-based iterations for deconvolution followed by deep learning for perception; the main limitation on reconstruction quality is the mismatch between the ideal and the real model. To address this, we learn a class of multi-Wiener deconvolution networks (MWDNs) that deconvolve in multi-scale feature spaces with Wiener filters to reduce information loss, and that improve the accuracy of the given model by correcting the inputs. A comparison with state-of-the-art algorithms shows that ours achieves much better images and performs well in real-world environments. In addition, our method requires less computation time because it abandons iterations.
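The core operation the network's name refers to is frequency-domain Wiener deconvolution, which MWDNs apply per learned feature channel. A minimal single-channel, image-space sketch; the noise-to-signal constant is an assumed tuning parameter (MWDNs instead learn such quantities):

```python
# Classical frequency-domain Wiener deconvolution, the building block that
# MWDNs apply per feature channel (shown here once, in image space).
import numpy as np

def wiener_deconvolve(measurement, psf, nsr=1e-2):
    """Estimate the scene from measurement ~ psf * scene.

    nsr is an assumed noise-to-signal power ratio (a tuning constant).
    """
    H = np.fft.fft2(np.fft.ifftshift(psf), s=measurement.shape)
    Y = np.fft.fft2(measurement)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener filter
    return np.real(np.fft.ifft2(W * Y))

# toy demo: blur a random "scene" with a Gaussian PSF, then deconvolve
x = np.random.rand(128, 128)
yy, xx = np.mgrid[-64:64, -64:64]
psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2)); psf /= psf.sum()
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(np.fft.ifftshift(psf))))
x_hat = wiener_deconvolve(y, psf)
```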
20
Ling YC, Yoo SJB. Review: tunable nanophotonic metastructures. NANOPHOTONICS 2023; 12:3851-3870. [PMID: 38013926 PMCID: PMC10566255 DOI: 10.1515/nanoph-2023-0034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/17/2023] [Accepted: 08/08/2023] [Indexed: 11/29/2023]
Abstract
Tunable nanophotonic metastructures offer new capabilities in computing, networking, and imaging by providing reconfigurability in computer interconnect topologies, new optical information processing capabilities, optical network switching, and image processing. Depending on the materials and nanostructures employed in the metastructure devices, various tuning mechanisms can be used, including thermo-optical, electro-optical (e.g., Pockels and Kerr effects), magneto-optical, ionic-optical, piezo-optical, mechano-optical (deformation in MEMS or NEMS), and phase-change mechanisms. Such mechanisms can alter the real and/or imaginary parts of the optical susceptibility tensors, tuning the optical characteristics. In particular, tunable nanophotonic metastructures with relatively large tuning strengths (e.g., large changes in the refractive index) can lead to particularly useful device applications. This paper reviews the tuning mechanisms, tuning characteristics, tuning speeds, and non-volatility of various tunable nanophotonic metastructures. Among those reviewed, some phase-change mechanisms offer a relatively large index-change magnitude together with non-volatility; Ge-Sb-Se-Te (GSST) and vanadium dioxide (VO2) are popular materials for this reason. Mechanically tunable metastructures offer relatively small changes in optical losses while providing large index changes. Electro-optically tunable metastructures offer relatively fast tuning speeds while achieving relatively small index changes. Thermo-optically tunable metastructures offer nearly zero change in optical losses while realizing modest changes in optical index, at the expense of relatively large power consumption. Magneto-optically tunable metastructures offer non-reciprocal optical index changes that can be induced by changing the magnetic field strength or direction. Tunable nanophotonic metastructures can find a very wide range of applications, including imaging, computing, communications, and sensing. Practical commercial deployment of these technologies will require scalable, repeatable, and high-yield manufacturing. Most demonstrations to date have required specialized nanofabrication tools such as e-beam lithography on relatively small fractional areas of semiconductor wafers; however, with advanced CMOS fabrication and heterogeneous integration techniques deployed for photonics, scalable and practical wafer-scale fabrication of tunable nanophotonic metastructures should be on the horizon, driven by strong interest from multiple application areas.
Affiliation(s)
- Yi-Chun Ling: Department of Electrical and Computer Engineering, University of California, Davis, CA 95616, USA
- Sung Joo Ben Yoo: Department of Electrical and Computer Engineering, University of California, Davis, CA 95616, USA
21
Wu J, Boominathan V, Veeraraghavan A, Robinson JT. Real-time, deep-learning aided lensless microscope. BIOMEDICAL OPTICS EXPRESS 2023; 14:4037-4051. [PMID: 37799697 PMCID: PMC10549754 DOI: 10.1364/boe.490199] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Revised: 06/28/2023] [Accepted: 06/29/2023] [Indexed: 10/07/2023]
Abstract
Traditional miniaturized fluorescence microscopes are critical tools for modern biology, but they invariably struggle to image simultaneously with high spatial resolution and a large field of view (FOV). Lensless microscopes offer a solution to this limitation. However, real-time visualization of samples has not been possible with lensless imaging, as image reconstruction can take minutes to complete. This poses a usability challenge, since real-time visualization is crucial for identifying and locating the imaging target. The issue is particularly pronounced in lensless microscopes that operate at close imaging distances, where reconstruction requires shift-varying deconvolution to account for the variation of the point spread function (PSF) across the FOV. Here, we present a lensless microscope that achieves real-time image reconstruction by eliminating the iterative reconstruction algorithm. Our neural-network-based reconstruction achieves a more than 10,000-fold increase in reconstruction speed compared with iterative reconstruction, allowing us to visualize results at more than 25 frames per second (fps) while achieving better than 7 µm resolution over a FOV of 10 mm². This ability to reconstruct and visualize samples in real time enables a more user-friendly interaction with lensless microscopes: users can operate them much as they currently do conventional microscopes.
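The shift-varying deconvolution mentioned above is commonly approximated by tiling the field of view and deconvolving each tile with its locally measured PSF. The sketch below shows that generic idea only (it is not the paper's neural-network reconstruction) and assumes image dimensions divisible by the tile size:

```python
# Generic patch-wise approximation of shift-varying deconvolution: tile the
# field of view and deconvolve each tile with its local PSF. Illustrates why
# close-distance lensless microscopy costs more than shift-invariant imaging.
import numpy as np

def wiener(y, psf, nsr=1e-2):
    H = np.fft.fft2(np.fft.ifftshift(psf), s=y.shape)
    return np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(y) /
                                (np.abs(H) ** 2 + nsr)))

def shift_varying_deconvolve(y, local_psfs, tile=64):
    """local_psfs[(i, j)] holds the tile-sized PSF measured for tile (i, j)."""
    out = np.zeros_like(y)
    for i in range(0, y.shape[0], tile):
        for j in range(0, y.shape[1], tile):
            patch = y[i:i + tile, j:j + tile]
            out[i:i + tile, j:j + tile] = wiener(patch,
                                                 local_psfs[(i // tile, j // tile)])
    return out

y = np.random.rand(128, 128)                     # stand-in measurement
psfs = {(i, j): np.random.rand(64, 64) for i in range(2) for j in range(2)}
for k in psfs:
    psfs[k] /= psfs[k].sum()                     # normalize each local PSF
x_hat = shift_varying_deconvolve(y, psfs, tile=64)
```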
Affiliation(s)
- Jimin Wu: Department of Bioengineering, Rice University, Houston, Texas 77005, USA
- Vivek Boominathan: Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005, USA
- Ashok Veeraraghavan: Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005, USA; Department of Computer Science, Rice University, Houston, Texas 77005, USA
- Jacob T. Robinson: Department of Bioengineering, Rice University, Houston, Texas 77005, USA; Department of Electrical and Computer Engineering, Rice University, Houston, Texas 77005, USA; Department of Neuroscience, Baylor College of Medicine, One Baylor Plaza, Houston, Texas 77030, USA
22
Li L, Ma J, Sun D, Tian Z, Cao L, Su P. Amp-vortex edge-camera: a lensless multi-modality imaging system with edge enhancement. OPTICS EXPRESS 2023; 31:22519-22531. [PMID: 37475361 DOI: 10.1364/oe.491380] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/27/2023] [Accepted: 05/27/2023] [Indexed: 07/22/2023]
Abstract
We demonstrate a lensless imaging system with edge-enhanced imaging, constructed from a Fresnel zone aperture (FZA) mask placed 3 mm away from a CMOS sensor. We propose vortex back-propagation (vortex-BP) and amplitude vortex-BP algorithms for the FZA-based lensless imaging system to remove noise and achieve fast reconstruction with high-contrast edge enhancement. Directionally controlled anisotropic edge enhancement can be achieved with our proposed superimposed vortex-BP algorithm. With the different reconstruction algorithms, the proposed amp-vortex edge-camera achieves 2D bright-field imaging as well as isotropic and directionally controllable anisotropic edge-enhanced imaging under incoherent illumination, from a single-shot captured hologram. The edge-detection effect is equivalent to that of optical edge detection, namely a redistribution of light energy. Noise-free in-focus edge detection is achieved by back-propagation alone, without a denoising algorithm, which is an advantage over other lensless imaging technologies. We expect wide use in autonomous driving, artificial-intelligence recognition in consumer electronics, and related areas.
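The isotropic edge enhancement rests on vortex (spiral-phase) filtering: multiplying the image spectrum by exp(i·charge·φ), where φ is the azimuthal angle in the Fourier plane. A minimal sketch, applied here to an ordinary intensity image rather than inside the FZA back-propagation itself:

```python
# Isotropic edge enhancement with a spiral (vortex) phase filter exp(i*phi)
# applied in the Fourier domain -- the filtering idea behind vortex-BP,
# shown on a plain image rather than inside FZA back-propagation.
import numpy as np

def vortex_edge_filter(img, charge=1):
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    phi = np.arctan2(fy, fx)                 # azimuthal angle in Fourier plane
    spiral = np.exp(1j * charge * phi)       # vortex of topological charge 1
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * spiral))

img = np.zeros((256, 256))
img[96:160, 96:160] = 1.0                    # bright square test object
edges = vortex_edge_filter(img)              # energy concentrates at the edges
```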
23
Zheng S, Ding Z, Jiang R, Guo C. Lensless masked imaging with self-calibrated phase retrieval. OPTICS LETTERS 2023; 48:3279-3282. [PMID: 37319081 DOI: 10.1364/ol.492476] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Accepted: 05/25/2023] [Indexed: 06/17/2023]
Abstract
Lensless imaging with a mask is an attractive topic, as it enables a compact configuration that acquires the wavefront information of a sample with computational approaches. Most existing methods choose a customized phase mask for wavefront modulation and then decode the sample's wave field from modulated diffraction patterns. Compared with phase masks, a binary amplitude mask offers a lower fabrication cost, but high-quality mask calibration and image reconstruction have not been well resolved. Here we propose a self-calibrated phase retrieval (SCPR) method that jointly recovers the binary mask and the sample's wave field in a lensless masked imaging system. Compared with conventional methods, our method achieves high-performance, flexible image recovery without an extra calibration device. Experimental results on different samples demonstrate the superiority of our method.
24
Zhu S, Guo E, Zhang W, Bai L, Liu H, Han J. Deep speckle reassignment: towards bootstrapped imaging in complex scattering states with limited speckle grains. OPTICS EXPRESS 2023; 31:19588-19603. [PMID: 37381370 DOI: 10.1364/oe.487667] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Accepted: 05/17/2023] [Indexed: 06/30/2023]
Abstract
Optical imaging through scattering media is a practical challenge with crucial applications in many fields. Many computational imaging methods have been designed for object reconstruction through opaque scattering layers, and remarkable recovery results have been demonstrated with physical models or learning models. However, most imaging approaches depend on relatively ideal conditions with a sufficient number of speckle grains and an adequate data volume. Here, in-depth information is unearthed from limited speckle grains via speckle reassignment, and a bootstrapped imaging method is proposed for reconstruction in complex scattering states. Benefiting from a bootstrap-prior-informed data augmentation strategy with a limited training dataset, the validity of the physics-aware learning method is demonstrated, and high-fidelity reconstructions through unknown diffusers are obtained. This bootstrapped imaging method with limited speckle grains broadens the way to highly scalable imaging in complex scattering scenes and gives a heuristic reference for practical imaging problems.
25
Wang J, Zhao J, Lin B, Zhang P, Cui G, Hou C. Multi-angle lensless ptychographic imaging via adaptive correction and the Nesterov method. APPLIED OPTICS 2023; 62:2617-2628. [PMID: 37132811 DOI: 10.1364/ao.480923] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Lensless systems based on ptychographic imaging can simultaneously achieve a large field of view and high resolution, with the advantages of small size, portability, and low cost compared with traditional lensed imaging. However, lensless imaging systems are susceptible to environmental noise and have lower resolution in individual images than lens-based systems, so they require a longer time to obtain a good result. To improve the convergence rate and noise robustness of lensless ptychographic imaging, we propose an adaptive correction method that adds an adaptive error term and a noise-correction term to lensless ptychographic algorithms, reaching convergence faster and better suppressing both Gaussian and Poisson noise. The Wirtinger flow and Nesterov algorithms are used in our method to reduce computational complexity and improve the convergence rate. We applied the method to phase reconstruction for lensless imaging and demonstrated its effectiveness in simulation and experiment. The method can be easily applied to other ptychographic iterative algorithms.
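To make the optimization ingredients concrete, below is a generic Nesterov-accelerated, Wirtinger-type amplitude-flow iteration for intensity measurements b = |Az|². The random Gaussian forward matrix, step size, and momentum are illustrative stand-ins for the paper's multi-angle ptychographic model, and convergence from a random start is not guaranteed; robustness is exactly what the paper's adaptive error and noise-correction terms address:

```python
# Generic Nesterov-accelerated, Wirtinger-type amplitude-flow iteration for
# measurements b = |A z|^2. All problem sizes and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 512
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
z_true = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b_amp = np.abs(A @ z_true)                      # measured amplitudes sqrt(b)

z = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # random initial guess
v = z.copy()                                    # extrapolated (look-ahead) point
L = np.linalg.norm(A, 2) ** 2 / m               # smoothness estimate for the step
mu, beta = 1.0 / L, 0.9                         # step size, momentum
for k in range(300):
    Av = A @ v
    grad = A.conj().T @ (Av - b_amp * Av / (np.abs(Av) + 1e-12)) / m
    z_new = v - mu * grad                       # gradient step at the look-ahead
    v = z_new + beta * (z_new - z)              # Nesterov extrapolation
    z = z_new
residual = np.linalg.norm(np.abs(A @ z) - b_amp) / np.linalg.norm(b_amp)
```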
26
Siu DMD, Lee KCM, Chung BMF, Wong JSJ, Zheng G, Tsia KK. Optofluidic imaging meets deep learning: from merging to emerging. LAB ON A CHIP 2023; 23:1011-1033. [PMID: 36601812 DOI: 10.1039/d2lc00813k] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
Propelled by the striking advances in optical microscopy and deep learning (DL), the role of imaging in lab-on-a-chip has been dramatically transformed from a silo inspection tool to a quantitative "smart" engine. A suite of advanced optical microscopes now enables imaging over a range of spatial scales (from molecules to organisms) and temporal windows (from microseconds to hours). On the other hand, the staggering diversity of DL algorithms has revolutionized image processing and analysis at a scale and complexity that were once inconceivable. Recognizing these exciting but overwhelming developments, we provide a timely review of their latest trends in the context of lab-on-a-chip imaging, or coined optofluidic imaging. More importantly, we discuss the strengths and caveats of how to adopt, reinvent, and integrate these imaging techniques and DL algorithms in order to tailor different lab-on-a-chip applications. In particular, we highlight three areas where the latest advances in lab-on-a-chip imaging and DL can form unique synergisms: image formation, image analytics, and intelligent image-guided autonomous lab-on-a-chip. Despite the ongoing challenges, we anticipate that they will represent the next frontiers in lab-on-a-chip imaging that will spearhead new capabilities in advancing analytical chemistry research, accelerating biological discovery, and empowering new intelligent clinical applications.
Affiliation(s)
- Dickson M D Siu: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong
- Kelvin C M Lee: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong
- Bob M F Chung: Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Shatin, New Territories, Hong Kong
- Justin S J Wong: Conzeb Limited, Hong Kong Science Park, Shatin, New Territories, Hong Kong
- Guoan Zheng: Department of Biomedical Engineering, University of Connecticut, Storrs, CT, USA
- Kevin K Tsia: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong; Advanced Biomedical Instrumentation Centre, Hong Kong Science Park, Shatin, New Territories, Hong Kong
27
Shin S, Oh S, Seo D, Kumar S, Lee A, Lee S, Kim YR, Lee M, Seo S. Field-portable seawater toxicity monitoring platform using lens-free shadow imaging technology. WATER RESEARCH 2023; 230:119585. [PMID: 36638739 DOI: 10.1016/j.watres.2023.119585] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Revised: 12/23/2022] [Accepted: 01/05/2023] [Indexed: 06/17/2023]
Abstract
The accidental spill of hazardous and noxious substances (HNSs) in the ocean has serious environmental and human health consequences. Assessing the ecotoxicity of seawater exposed to various HNSs is challenging because new HNSs and mixtures are constantly being developed, and assessment methods are limited. Microalgae viability tests are often used among the various biological indicators for ecotoxicity testing, as microalgae are the primary producers in aquatic ecosystems. However, since the conventional cell-growth-rate test measures cell viability over three to four days by manual inspection under a conventional optical microscope, it is labor- and time-intensive and prone to subjective errors. In this study, we propose a rapid and automated method to evaluate seawater ecotoxicity by quantifying the morphological changes of microalgae exposed to more than 30 HNSs, validated against conventional growth-rate test results. Dunaliella tertiolecta, a microalgal species without rigid cell walls, was selected as the test organism. Its morphological changes in response to HNS exposure were measured at the single-cell level using a custom-developed device based on lens-free shadow imaging technology. With the proposed method and device, the morphology-based ecotoxicity evaluation is available in as little as 5 min and was effective for 20 of the 30 HNSs tested. Moreover, tests on six selected HNSs with high marine transport volume and toxicity revealed that the sensitivity of the proposed method extends to the half-maximal effective concentration (EC50) and even to the lowest observed effective concentration (LOEC). Furthermore, the average correlation index between the growth inhibition test (three to four days) and the proposed morphology-change test (5 min) for the six selected HNSs was 0.84, indicating great promise for point-of-care water-quality monitoring. Thus, the proposed equipment and technology may provide a viable alternative to traditional on-site toxicity testing, and rapid morphological analysis may come to replace traditional growth inhibition testing.
Affiliation(s)
- Sanghoon Shin: Department of Electronics and Information Engineering, Korea University, Sejong 30019, Republic of Korea
- Sangwoo Oh: Maritime Safety & Environmental Research Division, Korea Research Institute of Ships & Ocean Engineering (KRISO), Daejeon 34103, Republic of Korea
- Dongmin Seo: Ocean System Engineering Research Division, Korea Research Institute of Ships & Ocean Engineering (KRISO), Daejeon 34103, Republic of Korea
- Samir Kumar: Department of Electronics and Information Engineering, Korea University, Sejong 30019, Republic of Korea
- Ahyeon Lee: Department of Electronics and Information Engineering, Korea University, Sejong 30019, Republic of Korea
- Sujin Lee: Marine Eco-Technology Institute, Busan 48520, Republic of Korea
- Young-Ryun Kim: Marine Eco-Technology Institute, Busan 48520, Republic of Korea
- Moonjin Lee: Maritime Safety & Environmental Research Division, Korea Research Institute of Ships & Ocean Engineering (KRISO), Daejeon 34103, Republic of Korea
- Sungkyu Seo: Department of Electronics and Information Engineering, Korea University, Sejong 30019, Republic of Korea
28
Kingshott O, Antipa N, Bostan E, Akşit K. Unrolled primal-dual networks for lensless cameras. OPTICS EXPRESS 2022; 30:46324-46335. [PMID: 36558589 DOI: 10.1364/oe.475521] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Accepted: 11/14/2022] [Indexed: 06/17/2023]
Abstract
Conventional models for lensless imaging assume that each measurement results from convolving a given scene with a single experimentally measured point spread function. Such models fail to simulate lensless cameras truthfully, as they do not account for optical aberrations or scenes with depth variation. Our work shows that learning a supervised primal-dual reconstruction method yields image quality matching the state of the art in the literature without demanding a large network capacity. We show that embedding learnable forward and adjoint models improves the reconstruction quality of lensless images (+5 dB PSNR) compared with works that assume a fixed point spread function.
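A minimal unrolled primal-dual reconstruction in this spirit alternates a dual update on the measurement residual with a primal update on the image, each followed by a small learnable correction. In the sketch below, the fixed-convolution forward model and the layer sizes are stand-ins, not the paper's learned forward and adjoint operators:

```python
# Minimal unrolled primal-dual reconstruction: K iterations alternate a dual
# update in measurement space with a primal update in image space, each with
# a small learnable CNN correction. Forward model and sizes are stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnrolledPrimalDual(nn.Module):
    def __init__(self, psf, n_iter=5):
        super().__init__()
        self.register_buffer("psf", psf)        # (1, 1, kh, kw) convolution kernel
        self.n_iter = n_iter
        self.dual_nets = nn.ModuleList(
            nn.Conv2d(2, 1, 3, padding=1) for _ in range(n_iter))
        self.primal_nets = nn.ModuleList(
            nn.Conv2d(2, 1, 3, padding=1) for _ in range(n_iter))

    def A(self, x):   # forward model: convolution with the PSF
        return F.conv2d(x, self.psf, padding="same")

    def At(self, y):  # adjoint: correlation (flipped kernel)
        return F.conv2d(y, torch.flip(self.psf, dims=(-2, -1)), padding="same")

    def forward(self, y):
        x = self.At(y)                          # primal variable (image estimate)
        d = torch.zeros_like(y)                 # dual variable (measurement space)
        for k in range(self.n_iter):
            d = self.dual_nets[k](torch.cat([d, self.A(x) - y], dim=1))
            x = self.primal_nets[k](torch.cat([x, self.At(d)], dim=1))
        return x

psf = torch.rand(1, 1, 11, 11); psf /= psf.sum()
net = UnrolledPrimalDual(psf)
y = torch.randn(2, 1, 64, 64)                   # stand-in measurements
x_hat = net(y)                                  # reconstructed images
```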
29
Zhang W, Zhu S, Bai K, Bai L, Guo E, Han J. Locating through dynamic scattering media based on speckle correlations. APPLIED OPTICS 2022; 61:10352-10361. [PMID: 36607093 DOI: 10.1364/ao.470271] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Accepted: 11/14/2022] [Indexed: 06/17/2023]
Abstract
In complex imaging settings, optical scattering often prevents the formation of a clear target image; instead, only a speckle without the original spatial structure is obtained. Scattering seriously interferes with target localization, and when the scattering medium is dynamic, optical information decorrelates rapidly in time, increasing the challenge. Here, a locating method is proposed to detect a target hidden behind a dynamic scattering medium, using a priori information about a known reference object in the neighborhood of the target. We further design an automatic calibration method to simplify the locating process and analyze the factors affecting positioning accuracy. The proposed method predicts the position of a target from the autocorrelation of the captured speckle pattern; the angle and distance deviations of the target are all within 2.5%. The approach locates a target using only a single-shot speckle pattern, which is beneficial for target localization under dynamic scattering conditions.
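The autocorrelation step can be computed in closed form via the Wiener-Khinchin theorem: the autocorrelation is the inverse FFT of the power spectrum. A minimal sketch, with mean subtraction to suppress the dominant zero-shift (DC) peak:

```python
# Autocorrelation of a speckle pattern via the Wiener-Khinchin theorem:
# autocorrelation = inverse FFT of the power spectrum. Subtracting the mean
# suppresses the DC peak so the hidden correlation structure is visible.
import numpy as np

def speckle_autocorrelation(speckle):
    s = speckle - speckle.mean()
    spectrum = np.abs(np.fft.fft2(s)) ** 2
    ac = np.real(np.fft.ifft2(spectrum))
    return np.fft.fftshift(ac) / ac.max()   # center the zero-shift peak

speckle = np.random.rand(512, 512)           # stand-in for a camera frame
ac = speckle_autocorrelation(speckle)
```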
30
Fu Q, Yan DM, Heidrich W. Diffractive lensless imaging with optimized Voronoi-Fresnel phase. OPTICS EXPRESS 2022; 30:45807-45823. [PMID: 36522977 DOI: 10.1364/oe.475004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Accepted: 11/03/2022] [Indexed: 06/17/2023]
Abstract
Lensless cameras are a class of imaging devices that shrink the physical dimensions to the close vicinity of the image sensor by replacing conventional compound lenses with integrated flat optics and computational algorithms. Here we report a diffractive lensless camera with a spatially coded Voronoi-Fresnel phase that achieves superior image quality. We propose a design principle of maximizing the information acquired in optics to facilitate computational reconstruction. By introducing an easy-to-optimize Fourier-domain metric, the modulation transfer function volume (MTFv), which is related to the Strehl ratio, we devise a framework to guide the optimization of the diffractive optical element. The resulting Voronoi-Fresnel phase features an irregular array of quasi-centroidal Voronoi cells, each containing a base first-order Fresnel phase function. We demonstrate and verify the imaging performance for photography applications with a prototype Voronoi-Fresnel lensless camera on a 1.6-megapixel image sensor under various illumination conditions. Results show that the proposed design outperforms existing lensless cameras and could benefit the development of compact imaging systems that work in extreme physical conditions.
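In the spirit of the MTFv metric, a PSF can be scored by the volume under its modulation transfer function, i.e., the magnitude of the Fourier transform of the PSF normalized to its DC value. The normalization and frequency handling below are assumptions of this sketch, not the paper's exact definition:

```python
# Fourier-domain figure of merit in the spirit of the MTF volume (MTFv):
# integrate the MTF, |FT(PSF)| normalized to its DC value, over frequency.
import numpy as np

def mtf_volume(psf):
    otf = np.fft.fft2(psf / psf.sum())   # normalized optical transfer function
    mtf = np.abs(otf)                    # MTF = |OTF|, DC value is 1
    return mtf.sum() / mtf.size          # volume under the MTF surface

# a tighter PSF should score higher than a blurrier one
yy, xx = np.mgrid[-32:32, -32:32]
sharp = np.exp(-(xx**2 + yy**2) / (2 * 1.0**2))
blurry = np.exp(-(xx**2 + yy**2) / (2 * 4.0**2))
print(mtf_volume(sharp) > mtf_volume(blurry))   # True
```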
31
Arcab P, Mirecki B, Stefaniuk M, Pawłowska M, Trusiak M. Experimental optimization of lensless digital holographic microscopy with rotating diffuser-based coherent noise reduction. OPTICS EXPRESS 2022; 30:42810-42828. [PMID: 36522993 DOI: 10.1364/oe.470860] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Accepted: 09/23/2022] [Indexed: 06/17/2023]
Abstract
Laser-based lensless digital holographic microscopy (LDHM) is often degraded by a considerable coherent noise factor. We propose a novel LDHM method with significantly reduced coherent artifacts, e.g., speckle noise and parasitic interference fringes. This is achieved by incorporating a rotating diffuser, which introduces partial spatial coherence while preserving the high temporal coherence of laser light that is crucial for credible in-line hologram reconstruction. We present the first implementation of the classical rotating-diffuser concept in LDHM, significantly increasing the signal-to-noise ratio while preserving the straightforwardness and compactness of the LDHM imaging device. Prior to introducing the rotating diffuser, we optimized the LDHM hardware experimentally, employing four light sources, four cameras, and three optical magnifications (camera-sample distances). The optimization was guided by quantitative assessment of numerical amplitude/phase reconstructions of test targets, based on standard-deviation calculation (noise quantification) and resolution evaluation (information-throughput quantification). The optimized rotating-diffuser LDHM (RD-LDHM) method was successfully corroborated in technical test-target imaging and in the examination of a challenging biomedical sample (a 60 µm thick mouse brain tissue slice). Physical minimization of coherent noise (by up to 50%) was verified while preserving optimal spatial resolution of phase and amplitude imaging. Coherent noise removal, ensured by the proposed RD-LDHM method, is especially important in biomedical inference, as speckles can falsely imitate valid biological features. Combining this favorable outcome with large field-of-view imaging can promote the use of the reported RD-LDHM technique in high-throughput stain-free biomedical screening.
32
Tian F, Yang W. Learned lensless 3D camera. OPTICS EXPRESS 2022; 30:34479-34496. [PMID: 36242459 PMCID: PMC9576281 DOI: 10.1364/oe.465933] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/05/2022] [Revised: 07/28/2022] [Accepted: 07/30/2022] [Indexed: 05/25/2023]
Abstract
Single-shot three-dimensional (3D) imaging with a compact device footprint, high imaging quality, and fast processing speed is challenging in computational imaging. Mask-based lensless imagers, which replace bulky optics with customized thin optical masks, are portable and lightweight and can recover a 3D object from a snapshot image. Existing lensless imaging typically requires extensive calibration of the point spread function and heavy computational resources to reconstruct the object. Here we overcome these challenges and demonstrate a compact, learnable lensless 3D camera for real-time photorealistic imaging. We custom-designed and fabricated an optical phase mask with optimized spatial-frequency support and axial resolving ability, and developed a simple, robust physics-aware deep learning model with an adversarial learning module for real-time depth-resolved photorealistic reconstruction. Our lensless imager does not require calibrating the point spread function and can resolve depth and "see through" opaque obstacles to image features that are blocked, enabling broad applications in computational imaging.
Affiliation(s)
- Feng Tian: Department of Electrical and Computer Engineering, University of California, Davis, CA 95616, USA
- Weijian Yang: Department of Electrical and Computer Engineering, University of California, Davis, CA 95616, USA
33
Rego JD, Chen H, Li S, Gu J, Jayasuriya S. Deep camera obscura: an image restoration pipeline for pinhole photography. OPTICS EXPRESS 2022; 30:27214-27235. [PMID: 36236897 DOI: 10.1364/oe.460636] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 06/18/2022] [Indexed: 06/16/2023]
Abstract
Modern machine learning has enhanced image quality for consumer and mobile photography through low-light denoising, high-dynamic-range (HDR) imaging, and improved demosaicing, among other applications. While most of these advances target conventional lens-based cameras, an emerging body of research addresses improved photography with lensless cameras using thin optics such as amplitude or phase masks, diffraction gratings, or diffusion layers. These lensless cameras suit size- and cost-constrained applications, such as tiny robotics and microscopy, that prohibit a large lens. However, the earliest and simplest camera design, the camera obscura or pinhole camera, has been relatively overlooked in machine learning pipelines, with minimal research on enhancing pinhole camera images for everyday photography. In this paper, we develop an image restoration pipeline for the pinhole system that enhances image quality through joint denoising and deblurring. Our pipeline integrates optics-based filtering and reblur losses for reconstructing high-resolution still images (2600 × 1952), as well as temporal consistency for video reconstruction, to enable practical exposure times (30 fps) for high-resolution video (1920 × 1080). We demonstrate 2D image quality on real pinhole images that is on par with or slightly better than that of other lensless cameras. This work opens up the potential for pinhole cameras to be used for photography in size-limited devices such as future smartphones.
34
Ma Y, Wu J, Chen S, Cao L. Explicit-restriction convolutional framework for lensless imaging. OPTICS EXPRESS 2022; 30:15266-15278. [PMID: 35473252 DOI: 10.1364/oe.456665] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Accepted: 04/04/2022] [Indexed: 06/14/2023]
Abstract
Mask-based lensless cameras break the constraints of traditional lens-based cameras, introducing highly flexible imaging systems. However, the inherent restrictions of imaging devices lead to low reconstruction quality. To overcome this challenge, we propose an explicit-restriction convolutional framework for lensless imaging, whose forward model effectively incorporates multiple restrictions by introducing linear and noise-like nonlinear terms. As examples, numerical and experimental reconstructions under the limitations of sensor size, pixel pitch, and bit depth are analyzed. By tailoring our framework to specific factors, better perceptual image quality or reconstructions with 4× pixel density can be achieved. The proposed framework can be extended to lensless imaging systems with different masks or structures.
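A generic restricted forward model of the kind the framework incorporates can be composed from convolution with the mask PSF, a crop to the finite sensor, binning to the pixel pitch, and bit-depth quantization. All sizes and the noise level in this sketch are illustrative assumptions:

```python
# Generic restricted lensless forward model: convolve the scene with the mask
# PSF, crop to the finite sensor, bin to the pixel pitch, add read noise, and
# quantize to the bit depth. Sizes and noise level are illustrative.
import numpy as np

def restricted_forward(scene, psf, sensor=(96, 96), binning=2, bits=8, noise=0.01):
    field = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                                 np.fft.fft2(np.fft.ifftshift(psf))))
    # finite sensor size: keep only the central crop
    cy, cx = field.shape[0] // 2, field.shape[1] // 2
    crop = field[cy - sensor[0] // 2: cy + sensor[0] // 2,
                 cx - sensor[1] // 2: cx + sensor[1] // 2]
    # pixel pitch: average over binning x binning blocks
    h, w = crop.shape[0] // binning, crop.shape[1] // binning
    binned = crop.reshape(h, binning, w, binning).mean(axis=(1, 3))
    # read noise plus bit-depth quantization
    noisy = binned + noise * np.random.randn(*binned.shape)
    levels = 2 ** bits - 1
    clipped = np.clip(noisy / (noisy.max() + 1e-12), 0, 1)
    return np.round(clipped * levels) / levels

scene = np.random.rand(128, 128)
psf = np.random.rand(128, 128); psf /= psf.sum()
y = restricted_forward(scene, psf)   # 48 x 48, 8-bit measurement
```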