1
Ma Y, Zhou W, Ma R, Wang E, Yang S, Tang Y, Zhang XP, Guan X. DOVE: Doodled vessel enhancement for photoacoustic angiography super resolution. Med Image Anal 2024;94:103106. [PMID: 38387244] [DOI: 10.1016/j.media.2024.103106]
Abstract
Deep-learning-based super-resolution photoacoustic angiography (PAA) has emerged as a valuable tool for enhancing the resolution of blood vessel images and aiding in disease diagnosis. However, owing to the scarcity of training samples, PAA super-resolution models generalize poorly, especially in the challenging in-vivo imaging of organs with deep tissue penetration. Furthermore, prolonged exposure to high laser intensity during image acquisition can cause tissue damage and secondary infections. To address these challenges, we propose doodled vessel enhancement (DOVE), an approach that uses hand-drawn doodles to train a PAA super-resolution model. With a training dataset of only 32 real PAA images, we construct a diffusion model that interprets hand-drawn doodles as low-resolution images. DOVE enables us to generate a large number of realistic PAA images, achieving a 49.375% fool rate even among experts in photoacoustic imaging. We then employ these generated images to train a self-similarity-based model for super resolution. In cross-domain tests, our method, trained solely on generated images, achieves a structural similarity value of 0.8591, surpassing the scores of all other models trained with real high-resolution images. DOVE overcomes the limitation of insufficient training samples and unlocks the clinical application potential of super-resolution-based biomedical imaging.
Affiliation(s)
- Yuanzheng Ma
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Wangting Zhou
- Engineering Research Center of Molecular & Neuro Imaging of the Ministry of Education, Xidian University, Xi'an, Shaanxi 710126, China
- Rui Ma
- MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China
- Erqi Wang
- MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China
- Sihua Yang
- MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China
- Yansong Tang
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Xiao-Ping Zhang
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Xun Guan
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
2
Cho SW, Nguyen VT, DiSpirito A, Yang J, Kim CS, Yao J. Sounding out the dynamics: a concise review of high-speed photoacoustic microscopy. Journal of Biomedical Optics 2024;29:S11521. [PMID: 38323297] [PMCID: PMC10846286] [DOI: 10.1117/1.jbo.29.s1.s11521]
Abstract
Significance: Photoacoustic microscopy (PAM) offers advantages in high-resolution, high-contrast imaging of biomedical chromophores. Imaging speed is critical for leveraging these benefits in both preclinical and clinical settings. Ongoing technological innovations have substantially boosted PAM's imaging speed, enabling real-time monitoring of dynamic biological processes. Aim: This concise review synthesizes historical context and current advancements in high-speed PAM, with an emphasis on developments enabled by ultrafast lasers, scanning mechanisms, and advanced image processing methods. Approach: We examine cutting-edge innovations across multiple facets of PAM, including light sources, scanning and detection systems, and computational techniques, and explore their representative applications in biomedical research. Results: This work delineates the challenges that persist in achieving optimal high-speed PAM performance and forecasts its prospective impact on biomedical imaging. Conclusions: Recognizing the current limitations, overcoming the drawbacks, and adopting the optimal combination of each technology will lead to the realization of ultimate high-speed PAM for both fundamental research and clinical translation.
Affiliation(s)
- Soon-Woo Cho
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Pusan National University, Engineering Research Center for Color-Modulated Extra-Sensory Perception Technology, Busan, Republic of Korea
- Van Tu Nguyen
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Anthony DiSpirito
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Joseph Yang
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Chang-Seok Kim
- Pusan National University, Engineering Research Center for Color-Modulated Extra-Sensory Perception Technology, Busan, Republic of Korea
- Junjie Yao
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
3
John S, Hester S, Basij M, Paul A, Xavierselvan M, Mehrmohammadi M, Mallidi S. Niche preclinical and clinical applications of photoacoustic imaging with endogenous contrast. Photoacoustics 2023;32:100533. [PMID: 37636547] [PMCID: PMC10448345] [DOI: 10.1016/j.pacs.2023.100533]
Abstract
In the past decade, photoacoustic (PA) imaging has gained a great deal of popularity as an emergent diagnostic technology owing to its successful demonstration in both preclinical and clinical arenas by various academic and industrial research groups. Such steady growth of PA imaging can mainly be attributed to its salient features, including being non-ionizing, cost-effective, easily deployable, and having sufficient axial, lateral, and temporal resolutions for resolving various tissue characteristics and assessing therapeutic efficacy. In addition, PA imaging can easily be integrated with ultrasound imaging systems, the combination of which confers the ability to co-register and cross-reference various features in the structural, functional, and molecular imaging regimes. PA imaging relies on either an endogenous source of contrast (e.g., hemoglobin) or those of an exogenous nature, such as nano-sized tunable optical absorbers or dyes, that may boost imaging contrast beyond that provided by the endogenous sources. In this review, we discuss the applications of PA imaging with endogenous contrast as they pertain to clinically relevant niches, including tissue characterization, cancer diagnostics/therapies (termed theranostics), cardiovascular applications, and surgical applications. We believe that PA imaging's role as a facile indicator of several disease-relevant states will continue to expand and evolve as it is adopted by an increasing number of research laboratories and clinics worldwide.
Affiliation(s)
- Samuel John
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
- Scott Hester
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Maryam Basij
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
- Avijit Paul
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Marvin Xavierselvan
- Mohammad Mehrmohammadi
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Wilmot Cancer Institute, Rochester, NY, USA
- Srivalleesha Mallidi
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Wellman Center for Photomedicine, Massachusetts General Hospital, Boston, MA 02114, USA
4
He P, Chen G, Huang M, Jing L, Wu W, Kuo HC, Tu CC, Chen SL. Biodegradable germanium nanoparticles as contrast agents for near-infrared-II photoacoustic imaging. Nanoscale 2023. [PMID: 37366254] [DOI: 10.1039/d3nr01594g]
Abstract
Photoacoustic (PA) imaging using contrast agents with strong near-infrared-II (NIR-II, 1000-1700 nm) absorption enables deep penetration into biological tissue. Moreover, biocompatibility and biodegradability are essential for clinical translation. Herein, we developed biocompatible and biodegradable germanium nanoparticles (GeNPs) with high photothermal stability as well as strong and broad absorption for NIR-II PA imaging. We first demonstrate the excellent biocompatibility of the GeNPs through experiments, including zebrafish embryo survival rates, nude mouse body weight curves, and histological images of the major organs. Then, comprehensive PA imaging demonstrations are presented to showcase the versatile imaging capabilities and excellent biodegradability, including in vitro PA imaging which can bypass blood absorption, in vivo dual-wavelength PA imaging which can clearly distinguish the injected GeNPs from the background blood vessels, in vivo and ex vivo PA imaging with deep penetration, in vivo time-lapse PA imaging of a mouse ear for observing biodegradation, ex vivo time-lapse PA imaging of the major organs of a mouse model for observing the biodistribution after intravenous injection, and notably in vivo dual-modality fluorescence and PA imaging of osteosarcoma tumors. The in vivo biodegradation of GeNPs is observed not only in normal tissue but also in the tumor, making the GeNPs a promising candidate for clinical NIR-II PA imaging applications.
Affiliation(s)
- Pengbo He
- University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Guo Chen
- University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Mengling Huang
- School of Pharmacy, Shanghai Jiao Tong University, Shanghai 200240, China
- Lili Jing
- School of Pharmacy, Shanghai Jiao Tong University, Shanghai 200240, China
- Wen Wu
- Shanghai Key Laboratory of Orthopaedic Implants, Department of Orthopaedics, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China
- Engineering Research Center of Digital Medicine and Clinical Translation, Ministry of Education, Shanghai 200030, China
- Hao-Chung Kuo
- Hon Hai Research Institute, Foxconn Technology Group, Shenzhen 518109, China
- Chang-Ching Tu
- University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Hon Hai Research Institute, Foxconn Technology Group, Shenzhen 518109, China
- Engineering Research Center of Digital Medicine and Clinical Translation, Ministry of Education, Shanghai 200030, China
- Sung-Liang Chen
- University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Engineering Research Center of Digital Medicine and Clinical Translation, Ministry of Education, Shanghai 200030, China
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Shanghai Jiao Tong University, Shanghai 200240, China
5
Tserevelakis GJ, Barmparis GD, Kokosalis N, Giosa ES, Pavlopoulos A, Tsironis GP, Zacharakis G. Deep learning-assisted frequency-domain photoacoustic microscopy. Optics Letters 2023;48:2720-2723. [PMID: 37186749] [DOI: 10.1364/ol.486624]
Abstract
Frequency-domain photoacoustic microscopy (FD-PAM) is a powerful, cost-efficient imaging method that uses intensity-modulated laser beams to excite single-frequency photoacoustic waves. Nevertheless, FD-PAM provides an extremely low signal-to-noise ratio (SNR), which can be up to two orders of magnitude lower than that of conventional time-domain (TD) systems. To overcome this inherent SNR limitation of FD-PAM, we employ a U-Net neural network for image enhancement without the need for excessive averaging or the application of high optical power. In this way, we improve the accessibility of PAM, as the system's cost is dramatically reduced, and we expand its applicability to demanding observations while retaining sufficiently high image quality standards.
6
Zhou LX, Xia Y, Dai R, Liu AR, Zhu SW, Shi P, Song W, Yuan XC. Non-uniform image reconstruction for fast photoacoustic microscopy of histology imaging. Biomedical Optics Express 2023;14:2080-2090. [PMID: 37206133] [PMCID: PMC10191656] [DOI: 10.1364/boe.487622]
Abstract
Photoacoustic microscopic imaging utilizes the characteristic optical absorption properties of pigmented materials in tissues to enable label-free observation of fine morphological and structural features. Since DNA/RNA can strongly absorb ultraviolet light, ultraviolet photoacoustic microscopy can highlight the cell nucleus without complicated sample preparations such as staining, yielding images comparable to standard pathological images. Further improvements in image acquisition speed are critical to advancing the clinical translation of photoacoustic histology imaging. However, improving the imaging speed with additional hardware is hampered by considerable costs and complex design. In this work, considering the heavy redundancy in biological photoacoustic images, which overconsumes computing power, we propose an image reconstruction framework called non-uniform image reconstruction (NFSR), which exploits an object detection network to reconstruct low-sampled photoacoustic histology images into high-resolution images. The sampling speed of photoacoustic histology imaging is significantly improved, saving 90% of the time cost. Furthermore, NFSR focuses on reconstructing the region of interest while maintaining high PSNR and SSIM evaluation indicators of more than 99%, yet reducing the overall computation by 60%.
Affiliation(s)
- Ling Xiao Zhou
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
- Yu Xia
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
- Renxiang Dai
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
- An Ran Liu
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
- Si Wei Zhu
- The Institute of Translational Medicine, Tianjin Union Medical Center of Nankai University, Tianjin, 300121, China
- Peng Shi
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
- Wei Song
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
- Xiao Cong Yuan
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
7
He D, Zhou J, Shang X, Tang X, Luo J, Chen SL. De-Noising of Photoacoustic Microscopy Images by Attentive Generative Adversarial Network. IEEE Transactions on Medical Imaging 2023;42:1349-1362. [PMID: 37015584] [DOI: 10.1109/tmi.2022.3227105]
Abstract
As a hybrid imaging technology, photoacoustic microscopy (PAM) suffers from noise due to the maximum permissible exposure of laser intensity, attenuation of ultrasound in tissue, and the inherent noise of the transducer. De-noising is an image processing approach that reduces noise so that PAM image quality can be recovered. However, previous de-noising techniques usually rely heavily on manually selected parameters, resulting in unsatisfactory and slow de-noising performance for different noisy images, which greatly hinders practical and clinical applications. In this work, we propose a deep learning-based method to remove noise from PAM images without manual selection of settings for different noisy images. An attention-enhanced generative adversarial network is used to extract image features and adaptively remove various levels of Gaussian, Poisson, and Rayleigh noise. The proposed method is demonstrated on both synthetic and real datasets, including phantom (leaf veins) and in vivo (mouse ear blood vessels and zebrafish pigment) experiments. In the experiments using synthetic datasets, our method achieves improvements of 6.53 dB in peak signal-to-noise ratio and 0.26 in structural similarity. The results show that, compared with previous PAM de-noising methods, our method performs well in recovering images both qualitatively and quantitatively. In addition, a de-noising processing speed of 0.016 s is achieved for an image with 256×256 pixels, which has potential for real-time applications. Our approach is effective and practical for the de-noising of PAM images.
8
Wang R, Zhu J, Xia J, Yao J, Shi J, Li C. Photoacoustic imaging with limited sampling: a review of machine learning approaches. Biomedical Optics Express 2023;14:1777-1799. [PMID: 37078052] [PMCID: PMC10110324] [DOI: 10.1364/boe.483081]
Abstract
Photoacoustic imaging combines high optical absorption contrast and deep acoustic penetration, and can reveal structural, molecular, and functional information about biological tissue non-invasively. Due to practical restrictions, photoacoustic imaging systems often face various challenges, such as complex system configuration, long imaging time, and/or less-than-ideal image quality, which collectively hinder their clinical application. Machine learning has been applied to improve photoacoustic imaging and mitigate the otherwise strict requirements in system setup and data acquisition. In contrast to the previous reviews of learned methods in photoacoustic computed tomography (PACT), this review focuses on the application of machine learning approaches to address the limited spatial sampling problems in photoacoustic imaging, specifically the limited view and undersampling issues. We summarize the relevant PACT works based on their training data, workflow, and model architecture. Notably, we also introduce the recent limited sampling works on the other major implementation of photoacoustic imaging, i.e., photoacoustic microscopy (PAM). With machine learning-based processing, photoacoustic imaging can achieve improved image quality with modest spatial sampling, presenting great potential for low-cost and user-friendly clinical applications.
Affiliation(s)
- Ruofan Wang
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Jing Zhu
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Jun Xia
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Junhui Shi
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Chiye Li
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
9
Seong D, Lee E, Kim Y, Han S, Lee J, Jeon M, Kim J. Three-dimensional reconstructing undersampled photoacoustic microscopy images using deep learning. Photoacoustics 2023;29:100429. [PMID: 36544533] [PMCID: PMC9761854] [DOI: 10.1016/j.pacs.2022.100429]
Abstract
Spatial sampling density and data size are important determinants of the imaging speed of photoacoustic microscopy (PAM). Therefore, undersampling methods that reduce the number of scanning points by increasing the scanning step size are typically adopted to enhance the imaging speed of PAM. Since undersampling sacrifices spatial sampling density, in this study we report a deep learning-based method for fully reconstructing undersampled 3D PAM data, taking into account the number of data points, the data size, and the fact that PAM provides three-dimensional (3D) volume data. Quantitative analyses demonstrate that the proposed method is robust and outperforms interpolation-based reconstruction methods at various undersampling ratios, enhancing PAM system performance with an 80-times faster imaging speed and an 800-times smaller data size. The proposed method is shown to be well suited to experimental conditions, effectively shortening imaging time while significantly reducing the data size for processing.
Affiliation(s)
- Daewoon Seong
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Euimin Lee
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Yoonseok Kim
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Sangyeob Han
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Institute of Biomedical Engineering, School of Medicine, Kyungpook National University, Daegu 41566, Republic of Korea
- Jaeyul Lee
- Department of Bioengineering, University of California, Los Angeles, CA 90095, USA
- Mansik Jeon
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Jeehyun Kim
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
10
Wang Y, Fu G. A Novel Object Recognition Algorithm Based on Improved YOLOv5 Model for Patient Care Robots. Int J Hum Robot 2022. [DOI: 10.1142/s0219843622500104]
11
Feng F, Liang S, Luo J, Chen SL. High-fidelity deconvolution for acoustic-resolution photoacoustic microscopy enabled by convolutional neural networks. Photoacoustics 2022;26:100360. [PMID: 35574187] [PMCID: PMC9095893] [DOI: 10.1016/j.pacs.2022.100360]
Abstract
Acoustic-resolution photoacoustic microscopy (AR-PAM) image resolution is determined by the point spread function (PSF) of the imaging system. Previous algorithms, including Richardson-Lucy (R-L) deconvolution and model-based (MB) deconvolution, improve spatial resolution by taking advantage of the PSF as prior knowledge. However, these methods encounter the problem of inaccurate deconvolution, meaning the deconvolved feature size and the original one are not consistent (e.g., the former can be smaller than the latter). We present a novel deep convolutional neural network (CNN)-based algorithm featuring high-fidelity recovery of multiscale feature size to improve the lateral resolution of AR-PAM. The CNN is trained with simulated image pairs of line patterns, which mimic blood vessels. To investigate suitable CNN model structures and elaborate on the effectiveness of CNN methods compared with non-learning methods, we select five different CNN models, while R-L and directional MB methods are also applied for comparison. Besides simulated data, experimental data including tungsten wires, leaf veins, and in vivo blood vessels are also evaluated. A custom-defined metric of relative size error (RSE) is used to quantify the multiscale feature recovery ability of different methods. Compared to other methods, the enhanced deep super resolution (EDSR) network and residual in residual dense block network (RRDBNet) models show better recovery in terms of RSE for tungsten wires with diameters ranging from 30 μm to 120 μm. Moreover, AR-PAM images of leaf veins are tested to demonstrate the effectiveness of the optimized CNN methods (EDSR and RRDBNet) for complex patterns. Finally, in vivo images of mouse ear blood vessels and rat ear blood vessels are acquired and then deconvolved, and the results show that the proposed CNN method (notably RRDBNet) enables accurate deconvolution of multiscale feature size and thus good fidelity.
Affiliation(s)
- Fei Feng
- University of Michigan–Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Siqi Liang
- University of Michigan–Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Jiajia Luo
- Institute of Medical Technology, Peking University Health Science Center, Beijing 100191, China
- Biomedical Engineering Department, Peking University, Beijing 100191, China
- Peking University People's Hospital, Beijing 100044, China
- Corresponding author at: Biomedical Engineering Department, Peking University, Beijing 100191, China
- Sung-Liang Chen
- University of Michigan–Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Shanghai Jiao Tong University, Shanghai 200240, China
- Engineering Research Center of Digital Medicine and Clinical Translation, Ministry of Education, Shanghai 200030, China
- Corresponding author at: University of Michigan–Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
12
Cho SW, Park SM, Park B, Kim DY, Lee TG, Kim BM, Kim C, Kim J, Lee SW, Kim CS. High-speed photoacoustic microscopy: A review dedicated on light sources. Photoacoustics 2021;24:100291. [PMID: 34485074] [PMCID: PMC8403586] [DOI: 10.1016/j.pacs.2021.100291]
Abstract
In recent years, many methods have been investigated to improve imaging speed in photoacoustic microscopy (PAM). These methods have mainly focused on three critical factors contributing to fast PAM: laser pulse repetition rate, scanning speed, and the computing power of the microprocessors. A high laser repetition rate is fundamentally the most crucial factor for increasing PAM speed. In this paper, we review in detail the methods adopted for fast PAM systems, specifically with respect to light sources. To the best of our knowledge, ours is the first review article analyzing the fundamental requirements for developing high-speed PAM, and their limitations, from the perspective of light sources.
Affiliation(s)
- Soon-Woo Cho
- Department of Cogno-Mechatronics Engineering, Pusan National University, Busan, 46241, Republic of Korea
- Sang Min Park
- Department of Cogno-Mechatronics Engineering, Pusan National University, Busan, 46241, Republic of Korea
- Byullee Park
- Department of Electrical Engineering, Convergence IT Engineering, and Mechanical Engineering, Medical Device Innovation Center, Pohang University of Science and Technology, Pohang, 37673, Republic of Korea
- Do Yeon Kim
- Safety Measurement Institute, Korea Research Institute of Standards and Science, Daejeon, 34113, Republic of Korea
- Department of Bio-Convergence Engineering, Korea University, Seoul, 02841, Republic of Korea
- Tae Geol Lee
- Safety Measurement Institute, Korea Research Institute of Standards and Science, Daejeon, 34113, Republic of Korea
- Beop-Min Kim
- Department of Bio-Convergence Engineering, Korea University, Seoul, 02841, Republic of Korea
- Interdisciplinary Program in Precision Public Health, Korea University, Seoul, 02481, Republic of Korea
- Chulhong Kim
- Department of Electrical Engineering, Convergence IT Engineering, and Mechanical Engineering, Medical Device Innovation Center, Pohang University of Science and Technology, Pohang, 37673, Republic of Korea
- Jeesu Kim
- Department of Cogno-Mechatronics Engineering, Pusan National University, Busan, 46241, Republic of Korea
- Sang-Won Lee
- Safety Measurement Institute, Korea Research Institute of Standards and Science, Daejeon, 34113, Republic of Korea
- Department of Medical Physics, University of Science and Technology, Daejeon, 34113, Republic of Korea
- Chang-Seok Kim
- Department of Cogno-Mechatronics Engineering, Pusan National University, Busan, 46241, Republic of Korea
13
Photoacoustic imaging aided with deep learning: a review. Biomed Eng Lett 2021; 12:155-173. [DOI: 10.1007/s13534-021-00210-y]
14
Chen J, Zhang Y, Bai S, Zhu J, Chirarattananon P, Ni K, Zhou Q, Wang L. Dual-foci fast-scanning photoacoustic microscopy with 3.2-MHz A-line rate. Photoacoustics 2021; 23:100292. [PMID: 34430201] [PMCID: PMC8367837] [DOI: 10.1016/j.pacs.2021.100292]
Abstract
We report fiber-based dual-foci fast-scanning optical-resolution photoacoustic microscopy (OR-PAM) that doubles the scanning rate without compromising imaging resolution, field of view, or detection sensitivity. To achieve a fast scanning speed, the OR-PAM system uses a single-axis water-immersible resonant scanning mirror that confocally scans the optical and acoustic beams at 1018 Hz over a 3-mm range. Pulse energies of 45–100 nJ are sufficient for acquiring vascular and oxygen-saturation images. The dual-foci method doubles the B-scan rate to 2036 Hz. Using two lasers and stimulated Raman scattering, we achieve dual-wavelength excitation at both foci, for a total A-line rate of 3.2 MHz. In in vivo experiments, we injected epinephrine and monitored the hemodynamic and oxygen-saturation response in peripheral vessels at 1.7 Hz over a 2.5 × 6.7 mm2 field. Dual-foci OR-PAM offers a new imaging tool for the study of fast physiological and pathological changes.
Affiliation(s)
- Jiangbo Chen: Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Ave, Kowloon, Hong Kong SAR, China
- Yachao Zhang: Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Ave, Kowloon, Hong Kong SAR, China
- Songnan Bai: Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Ave, Kowloon, Hong Kong SAR, China
- Jingyi Zhu: Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Ave, Kowloon, Hong Kong SAR, China
- Pakpong Chirarattananon: Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Ave, Kowloon, Hong Kong SAR, China
- Kai Ni: Division of Advanced Manufacturing, Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Qian Zhou: Division of Advanced Manufacturing, Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Lidai Wang (corresponding author): Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Ave, Kowloon, Hong Kong SAR, China; City University of Hong Kong Shenzhen Research Institute, Yuexing Yi Dao, Shenzhen, Guang Dong, 518057, China
15
Vu T, DiSpirito A, Li D, Wang Z, Zhu X, Chen M, Jiang L, Zhang D, Luo J, Zhang YS, Zhou Q, Horstmeyer R, Yao J. Deep image prior for undersampling high-speed photoacoustic microscopy. Photoacoustics 2021; 22:100266. [PMID: 33898247] [PMCID: PMC8056431] [DOI: 10.1016/j.pacs.2021.100266]
Abstract
Photoacoustic microscopy (PAM) is an emerging imaging method combining light and sound. However, limited by the laser's repetition rate, state-of-the-art high-speed PAM technology often sacrifices spatial sampling density (i.e., undersamples) to increase imaging speed over a large field of view. Deep-learning (DL) methods have recently been used to improve sparsely sampled PAM images; however, these methods often require time-consuming pre-training and large training datasets with ground truth. Here, we propose the use of a deep image prior (DIP) to improve the quality of undersampled PAM images. Unlike other DL approaches, DIP requires neither pre-training nor fully sampled ground truth, enabling flexible and fast implementation on various imaging targets. Our results demonstrate substantial improvement in high-speed PAM images using as few as 1.4% of the fully sampled pixels. Our approach outperforms interpolation, is competitive with pre-trained supervised DL methods, and is readily translated to other high-speed, undersampled imaging modalities.
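The core idea behind DIP can be illustrated with a toy sketch (this is not the authors' implementation; the network, its sizes, and the 1-D "signal" standing in for a PAM scan line are all hypothetical): an untrained network with a fixed random input is fitted, by gradient descent, to the measured pixels only, and its structural bias fills in the unmeasured ones.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n)
signal = np.sin(x)                       # fully sampled ground truth (unknown in practice)
mask = rng.random(n) < 0.25              # keep roughly 25% of pixels (undersampling)

z = rng.standard_normal((n, 8))          # fixed random network input ("z" in DIP)
W1 = 0.1 * rng.standard_normal((8, 32))  # untrained weights, optimized below
W2 = 0.1 * rng.standard_normal((32, 1))

def forward(W1, W2):
    h = np.tanh(z @ W1)                  # one hidden layer with tanh activation
    return (h @ W2).ravel(), h

lr = 0.02
losses = []
for _ in range(800):
    pred, h = forward(W1, W2)
    err = (pred - signal) * mask         # the loss uses only the sampled pixels
    losses.append(0.5 * float(np.sum(err ** 2)))
    g_pred = err[:, None]                # dL/dpred
    gW2 = h.T @ g_pred                   # backprop through the output layer
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)  # tanh derivative
    gW1 = z.T @ g_h
    W1 -= lr * gW1
    W2 -= lr * gW2

recon, _ = forward(W1, W2)               # network output on all pixels, sampled or not
```

In a real DIP pipeline the network is a convolutional architecture and the optimization is early-stopped, but the key property shown here is the same: no pre-training and no fully sampled ground truth are needed, only the undersampled measurements themselves.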
Affiliation(s)
- Tri Vu: Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Daiwei Li: Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Zixuan Wang: Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Cambridge, MA, 02139, USA
- Xiaoyi Zhu: Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Maomao Chen: Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Laiming Jiang: Department of Biomedical Engineering and USC Roski Eye Institute, University of Southern California, Los Angeles, CA, 90089, USA
- Dong Zhang: Department of Biomedical Engineering, Tsinghua University, Beijing, 100084, China
- Jianwen Luo: Department of Biomedical Engineering, Tsinghua University, Beijing, 100084, China
- Yu Shrike Zhang: Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Cambridge, MA, 02139, USA
- Qifa Zhou: Department of Biomedical Engineering and USC Roski Eye Institute, University of Southern California, Los Angeles, CA, 90089, USA
- Junjie Yao: Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
16
DiSpirito A, Vu T, Pramanik M, Yao J. Sounding out the hidden data: A concise review of deep learning in photoacoustic imaging. Exp Biol Med (Maywood) 2021; 246:1355-1367. [PMID: 33779342] [PMCID: PMC8243210] [DOI: 10.1177/15353702211000310]
Abstract
The rapidly evolving field of photoacoustic tomography utilizes endogenous chromophores to extract both functional and structural information from deep within tissues. It is this power to perform precise quantitative measurements in vivo, with endogenous or exogenous contrast, that makes photoacoustic tomography highly promising for clinical translation in functional brain imaging, early cancer detection, real-time surgical guidance, and the visualization of dynamic drug responses. Since photoacoustic tomography has benefited from numerous engineering innovations, it is no surprise that many of its current cutting-edge developments incorporate advances from the equally novel field of artificial intelligence. More specifically, the growth of graphics processing unit capabilities in recent years has enabled an offshoot of artificial intelligence known as deep learning. Rooted in the solid foundation of signal processing, deep learning typically uses an optimization method known as gradient descent to minimize a loss function and update model parameters. A number of innovative efforts in photoacoustic tomography already use deep-learning techniques for a variety of purposes, including resolution enhancement, reconstruction-artifact removal, undersampling correction, and improved quantification. Most of these efforts have proven highly promising in addressing long-standing technical obstacles where traditional solutions either fail completely or make only incremental progress. This concise review covers the history of applied artificial intelligence in photoacoustic tomography, presents recent advances at this multifaceted intersection of fields, and outlines the advances most likely to propagate into promising future innovations.
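The gradient-descent loop mentioned in the abstract can be reduced to a hypothetical one-parameter toy (not taken from the review): a parameter w is fitted to a single data point (x, y) = (2, 6) under the squared-error loss L(w) = (w·x − y)², whose minimizer is w = y/x = 3.

```python
# One gradient-descent step: move w against the gradient of the loss.
def grad_step(w, x, y, lr):
    grad = 2.0 * (w * x - y) * x   # dL/dw for L(w) = (w*x - y)**2
    return w - lr * grad

w = 0.0
for _ in range(100):
    w = grad_step(w, x=2.0, y=6.0, lr=0.05)
# w converges geometrically toward the minimizer w = 3
```

Training a deep network is this same loop scaled up: millions of parameters, a loss summed over a dataset, and gradients computed by backpropagation rather than by hand.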
Affiliation(s)
- Anthony DiSpirito: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Tri Vu: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Manojit Pramanik: School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore 637459, Singapore
- Junjie Yao: Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
17
Deng H, Qiao H, Dai Q, Ma C. Deep learning in photoacoustic imaging: a review. J Biomed Opt 2021; 26:040901. [PMID: 33837678] [PMCID: PMC8033250] [DOI: 10.1117/1.jbo.26.4.040901]
Abstract
SIGNIFICANCE: Photoacoustic (PA) imaging can provide structural, functional, and molecular information for preclinical and clinical studies. For PA imaging (PAI), non-ideal signal detection deteriorates image quality, and quantitative PAI (QPAI) remains challenging due to the unknown light-fluence spectra in deep tissue. In recent years, deep learning (DL) has shown outstanding performance when applied to PAI, with applications in image reconstruction, quantification, and understanding.
AIM: We provide (i) a comprehensive overview of the DL techniques that have been applied in PAI, (ii) references for designing DL models for various PAI tasks, and (iii) a summary of future challenges and opportunities.
APPROACH: Papers published before November 2020 on applying DL in PAI were reviewed. We categorized them into three types: image understanding, reconstruction of the initial pressure distribution, and QPAI.
RESULTS: When applied in PAI, DL can effectively process images, improve reconstruction quality, fuse information, and assist quantitative analysis.
CONCLUSION: DL has become a powerful tool in PAI. With the development of DL theory and technology, it will continue to boost performance and facilitate the clinical translation of PAI.
Affiliation(s)
- Handi Deng: Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Hui Qiao: Tsinghua University, Department of Automation, Haidian, Beijing, China; Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China; Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China; Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Qionghai Dai: Tsinghua University, Department of Automation, Haidian, Beijing, China; Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China; Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China; Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Cheng Ma: Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China; Beijing Innovation Center for Future Chip, Beijing, China