1
Zhao M, Zhang Q, Li D, Tao C, Liu X. Highly sensitive self-focused ultrasound transducer with a bionic back-reflector for multiscale-resolution photoacoustic microscopy. Opt Express 2024;32:1501-1511. PMID: 38297700. DOI: 10.1364/oe.513574. Received 11/16/2023; accepted 12/17/2023.
Abstract
In this study, we designed a self-focused ultrasonic transducer made of polyvinylidene fluoride (PVDF). The transducer incorporates a back-reflector modeled after the tapetum lucidum in the eyes of some nocturnal animals. This bionic structure reflects ultrasound that passes through the PVDF membrane back onto the PVDF, giving the membrane a second chance to convert the ultrasound into electrical signals. The design increases the amount of ultrasound absorbed by the PVDF, thereby improving detection sensitivity. Both ultrasonic and photoacoustic (PA) experiments were conducted to characterize the transducer's performance. The results show that the fabricated transducer has a center frequency of 13.07 MHz and a -6 dB bandwidth of 96%. With an acoustic numerical aperture (NA) of 0.64, the transducer provides a lateral resolution of 140 µm. Importantly, the bionic design improves the detection sensitivity of the transducer by about 30%. Finally, we applied the fabricated transducer to optical-resolution (OR) and acoustic-resolution photoacoustic microscopy (AR-PAM) to achieve multiscale-resolution PA imaging. Imaging of a bamboo leaf and a leaf skeleton demonstrates that the proposed transducer provides high spatial resolution together with improved imaging intensity and contrast. Therefore, the proposed transducer design should be useful for enhancing the performance of multiscale-resolution PAM.
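As a sanity check on the reported figures, the 140 µm lateral resolution is consistent with the acoustic diffraction-limit estimate R ≈ 0.71λ/NA commonly used for focused detectors. A quick calculation (the 1540 m/s sound speed, and the formula itself, are common assumptions, not values stated in the abstract):

```python
# Hedged sanity check of the reported 140 µm lateral resolution.
# Assumes c = 1540 m/s (soft tissue) and the usual acoustic diffraction
# limit R ~= 0.71 * wavelength / NA; neither is taken from the paper.
c = 1540.0    # speed of sound in soft tissue, m/s (assumption)
f0 = 13.07e6  # reported center frequency, Hz
na = 0.64     # reported acoustic numerical aperture

wavelength = c / f0             # acoustic wavelength, ~117.8 µm
r_lat = 0.71 * wavelength / na  # diffraction-limited lateral resolution

print(round(r_lat * 1e6, 1))   # ~130.7 µm, close to the reported 140 µm
```

The small gap between the ~131 µm estimate and the measured 140 µm is plausible given transducer bandwidth and aperture apodization effects.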
2
Gao Y, Feng T, Qiu H, Gu Y, Chen Q, Zuo C, Ma H. 4D spectral-spatial computational photoacoustic dermoscopy. Photoacoustics 2023;34:100572. PMID: 38058749. PMCID: PMC10696115. DOI: 10.1016/j.pacs.2023.100572. Received 09/11/2023; revised 10/16/2023; accepted 11/09/2023.
Abstract
Photoacoustic dermoscopy (PAD) is an emerging non-invasive imaging technology that aids the diagnosis of dermatological conditions by obtaining optical absorption information from skin tissues. Despite advances in PAD, it remains unclear how the quantitative accuracy of reconstructed PAD images depends on the optical and acoustic properties of multilayered skin, the wavelength and distribution of the excitation light, and the detection performance of the ultrasound transducer. In this work, a four-dimensional (4D) spectral-spatial computational imaging method for PAD is developed to enable quantitative analysis and optimization of structural and functional imaging of skin. The method takes the optical and acoustic properties of heterogeneous skin tissues into account and can be used to correct the excitation light field and the detectable ultrasonic field, and to provide accurate single-spectrum analysis or multi-spectral imaging solutions for multilayered skin tissues. A series of experiments was performed, and simulation datasets obtained from the computational model were used to train neural networks to further improve the imaging quality of the PAD system. The results demonstrate that the method can contribute to the development and optimization of clinical PAD systems through datasets with multiple variable parameters, and can provide clinical predictability of photoacoustic (PA) data for human skin.
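The quantitative relation such a model must capture is that the initial PA pressure is the product of the Grüneisen parameter, the local absorption, and the depth-dependent fluence. A minimal 1-D sketch of this forward model and the corresponding fluence division (all tissue parameter values are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Minimal 1-D PAD forward model: p0(z) = Gamma * mu_a(z) * Phi(z), with the
# fluence Phi decaying exponentially with depth (a Beer-Lambert-style
# simplification of the paper's full optical model). All parameter values
# below are illustrative assumptions.
gamma = 0.2                               # Grueneisen parameter (dimensionless)
z = np.linspace(0.0, 2e-3, 200)           # depth axis, m
mu_a = np.where(z < 0.1e-3, 150.0, 20.0)  # "epidermis" vs "dermis" absorption, 1/m
mu_eff = 100.0                            # effective attenuation, 1/m
phi = np.exp(-mu_eff * z)                 # normalized fluence vs depth

p0 = gamma * mu_a * phi                   # initial PA pressure (arbitrary units)

# Dividing out the modeled fluence undoes the depth-dependent scaling and
# recovers the absorption map, which is the goal of quantitative correction:
mu_a_recovered = p0 / (gamma * phi)
```

The division is exact here only because the same fluence model generated the data; in practice the fluence must itself be estimated, which is what a computational model of this kind provides.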
Affiliation(s)
- Yang Gao
- Nanjing University of Science and Technology, School of Electronic and Optical Engineering, Smart Computational Imaging Laboratory (SCILab), Nanjing 210094, China
- Smart Computational Imaging Research Institute (SCIRI) of Nanjing University of Science and Technology, Nanjing 210094, China
- Nanjing University of Science and Technology, School of Electronic and Optical Engineering, Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing 210094, China
- Ting Feng
- Fudan University, Academy for Engineering and Technology, Shanghai 200433, China
- Haixia Qiu
- First Medical Center of PLA General Hospital, Beijing 100853, China
- Ying Gu
- First Medical Center of PLA General Hospital, Beijing 100853, China
- Qian Chen
- Nanjing University of Science and Technology, School of Electronic and Optical Engineering, Smart Computational Imaging Laboratory (SCILab), Nanjing 210094, China
- Nanjing University of Science and Technology, School of Electronic and Optical Engineering, Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing 210094, China
- Chao Zuo
- Nanjing University of Science and Technology, School of Electronic and Optical Engineering, Smart Computational Imaging Laboratory (SCILab), Nanjing 210094, China
- Smart Computational Imaging Research Institute (SCIRI) of Nanjing University of Science and Technology, Nanjing 210094, China
- Nanjing University of Science and Technology, School of Electronic and Optical Engineering, Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing 210094, China
- Haigang Ma
- Nanjing University of Science and Technology, School of Electronic and Optical Engineering, Smart Computational Imaging Laboratory (SCILab), Nanjing 210094, China
- Smart Computational Imaging Research Institute (SCIRI) of Nanjing University of Science and Technology, Nanjing 210094, China
- Nanjing University of Science and Technology, School of Electronic and Optical Engineering, Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing 210094, China
3
Wang R, Zhang Z, Chen R, Yu X, Zhang H, Hu G, Liu Q, Song X. Noise-insensitive defocused signal and resolution enhancement for optical-resolution photoacoustic microscopy via deep learning. J Biophotonics 2023;16:e202300149. PMID: 37491832. DOI: 10.1002/jbio.202300149. Received 04/29/2023; revised 06/30/2023; accepted 07/22/2023.
Abstract
Optical-resolution photoacoustic microscopy suffers from a narrow depth of field and a significant deterioration of defocused signal intensity and spatial resolution. Here, a deep learning-based method is proposed to enhance the defocused resolution and signal-to-noise ratio. A virtual optical-resolution photoacoustic microscope based on the k-Wave toolbox was used to generate training datasets with different noise levels. A fully dense U-Net was trained on randomly distributed sources to improve the quality of photoacoustic images. The results show that the PSNR of the defocused signal was enhanced by more than 1.2 times, with an over 2.6-fold enhancement in lateral resolution and an over 3.4-fold enhancement in axial resolution in defocused regions. Large-volume, high-resolution imaging of blood vessels further verified that the proposed method can effectively overcome the deterioration of signal and spatial resolution caused by the narrow depth of field of optical-resolution photoacoustic microscopy.
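PSNR, the figure of merit quoted above, can be computed in a few lines; a minimal NumPy version (an illustrative implementation, not the authors' code):

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

# Example: a uniform 0.1 error on a unit-range image gives MSE = 0.01,
# i.e. PSNR = 10 * log10(1 / 0.01) = 20 dB.
ref = np.zeros((64, 64))
noisy = ref + 0.1
print(psnr(ref, noisy))  # ~20 dB
```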
Affiliation(s)
- Rui Wang
- School of Information Engineering, Nanchang University, Nanchang, China
- Jiluan Academy, Nanchang University, Nanchang, China
- Zhipeng Zhang
- School of Information Engineering, Nanchang University, Nanchang, China
- Ruiyi Chen
- School of Information Engineering, Nanchang University, Nanchang, China
- Xiaohai Yu
- Jiluan Academy, Nanchang University, Nanchang, China
- Hongyu Zhang
- School of Information Engineering, Nanchang University, Nanchang, China
- Gang Hu
- Jiangxi Medical College, Nanchang University, Nanchang, China
- Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang, China
- Xianlin Song
- School of Information Engineering, Nanchang University, Nanchang, China
4
Le TD, Min JJ, Lee C. Enhanced resolution and sensitivity acoustic-resolution photoacoustic microscopy with semi/unsupervised GANs. Sci Rep 2023;13:13423. PMID: 37591911. PMCID: PMC10435476. DOI: 10.1038/s41598-023-40583-x. Received 05/03/2023; accepted 08/13/2023. Open access.
Abstract
Acoustic-resolution photoacoustic microscopy (AR-PAM) enables visualization of biological tissues at depths of several millimeters with superior optical absorption contrast. However, the lateral resolution and sensitivity of AR-PAM are generally lower than those of optical-resolution PAM (OR-PAM) owing to its intrinsic physical acoustic focusing mechanism. Here, we demonstrate a computational strategy with two generative adversarial networks (GANs) that performs semi-supervised and unsupervised reconstruction with high resolution and sensitivity in AR-PAM while maintaining its imaging capability at enhanced depths. B-scan PAM images were prepared as paired (for the semi-supervised conditional GAN) and unpaired (for the unsupervised CycleGAN) groups for label-free reconstructed AR-PAM B-scan image generation and training. The semi/unsupervised GANs successfully improved resolution and sensitivity in phantom and in vivo mouse ear tests with ground truth. We also confirmed that the GANs could enhance resolution and sensitivity in deep tissue without ground truth.
Affiliation(s)
- Thanh Dat Le
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, 61186, Korea
- Jung-Joon Min
- Department of Nuclear Medicine, Chonnam National University Medical School and Hwasun Hospital, 264, Seoyang-ro, Hwasun-eup, Hwasun-gun, 58128, Jeollanam-do, Korea
- Changho Lee
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, 61186, Korea
- Department of Nuclear Medicine, Chonnam National University Medical School and Hwasun Hospital, 264, Seoyang-ro, Hwasun-eup, Hwasun-gun, 58128, Jeollanam-do, Korea
5
De Rosa L, L’Abbate S, Kusmic C, Faita F. Applications of Deep Learning Algorithms to Ultrasound Imaging Analysis in Preclinical Studies on In Vivo Animals. Life (Basel) 2023;13:1759. PMID: 37629616. PMCID: PMC10455134. DOI: 10.3390/life13081759. Received 06/13/2023; revised 07/28/2023; accepted 08/08/2023. Open access.
Abstract
BACKGROUND AND AIM: Ultrasound (US) imaging is increasingly preferred over more invasive modalities in preclinical studies using animal models. However, the technique has some limitations, mainly related to operator dependence. To overcome some of the current drawbacks, sophisticated data-processing models have been proposed, in particular artificial intelligence models based on deep learning (DL) networks. This systematic review provides an overview of the application of DL algorithms in assisting US analysis of images acquired in in vivo preclinical studies on animal models. METHODS: A literature search was conducted using the Scopus and PubMed databases. Studies published from January 2012 to November 2022 that developed DL models on US images acquired in preclinical/animal experimental scenarios were eligible for inclusion. The review was conducted according to PRISMA guidelines. RESULTS: Fifty-six studies were enrolled and classified into five groups based on the anatomical district in which the DL models were used. Sixteen studies focused on the cardiovascular system and fourteen on the abdominal organs. Five studies applied DL networks to images of the musculoskeletal system, and eight investigations involved the brain. Thirteen papers, grouped under a miscellaneous category, proposed heterogeneous applications of DL systems. Our analysis also highlighted that murine models were the most common animals used in in vivo studies applying DL to US imaging. CONCLUSION: DL techniques show great potential for US images acquired in preclinical studies using animal models. However, in this scenario these techniques are still in their early stages, and there is room for improvement in areas such as sample size, data preprocessing, and model interpretability.
Affiliation(s)
- Laura De Rosa
- Institute of Clinical Physiology, National Research Council (CNR), 56124 Pisa, Italy
- Department of Information Engineering and Computer Science, University of Trento, 38123 Trento, Italy
- Serena L’Abbate
- Institute of Life Sciences, Scuola Superiore Sant’Anna, 56124 Pisa, Italy
- Claudia Kusmic
- Institute of Clinical Physiology, National Research Council (CNR), 56124 Pisa, Italy
- Francesco Faita
- Institute of Clinical Physiology, National Research Council (CNR), 56124 Pisa, Italy
6
John S, Hester S, Basij M, Paul A, Xavierselvan M, Mehrmohammadi M, Mallidi S. Niche preclinical and clinical applications of photoacoustic imaging with endogenous contrast. Photoacoustics 2023;32:100533. PMID: 37636547. PMCID: PMC10448345. DOI: 10.1016/j.pacs.2023.100533. Received 03/10/2022; revised 06/30/2023; accepted 07/14/2023.
Abstract
In the past decade, photoacoustic (PA) imaging has gained a great deal of popularity as an emergent diagnostic technology, owing to successful demonstrations in both preclinical and clinical arenas by various academic and industrial research groups. This steady growth of PA imaging can mainly be attributed to its salient features: it is non-ionizing, cost-effective, and easily deployable, and it offers sufficient axial, lateral, and temporal resolution for resolving various tissue characteristics and assessing therapeutic efficacy. In addition, PA imaging can easily be integrated with ultrasound imaging systems; the combination confers the ability to co-register and cross-reference various features in the structural, functional, and molecular imaging regimes. PA imaging relies on either an endogenous source of contrast (e.g., hemoglobin) or exogenous sources such as nano-sized tunable optical absorbers or dyes that may boost imaging contrast beyond that provided by the endogenous sources. In this review, we discuss the applications of PA imaging with endogenous contrast as they pertain to clinically relevant niches, including tissue characterization, cancer diagnostics and therapies (termed theranostics), cardiovascular applications, and surgical applications. We believe that PA imaging's role as a facile indicator of several disease-relevant states will continue to expand and evolve as it is adopted by an increasing number of research laboratories and clinics worldwide.
Affiliation(s)
- Samuel John
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
- Scott Hester
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Maryam Basij
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
- Avijit Paul
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Mohammad Mehrmohammadi
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Wilmot Cancer Institute, Rochester, NY, USA
- Srivalleesha Mallidi
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Wellman Center for Photomedicine, Massachusetts General Hospital, Boston, MA 02114, USA
7
Gao R, Chen T, Ren Y, Liu L, Chen N, Wong KK, Song L, Ma X, Liu C. Restoring the imaging quality of circular transducer array-based PACT using synthetic aperture focusing technique integrated with 2nd-derivative-based back projection scheme. Photoacoustics 2023;32:100537. PMID: 37559663. PMCID: PMC10407438. DOI: 10.1016/j.pacs.2023.100537. Received 05/26/2023; revised 07/16/2023; accepted 07/19/2023.
Abstract
Circular-array-based photoacoustic computed tomography (CA-PACT) is a promising imaging tool owing to its broad acoustic detection coverage and fidelity. However, CA-PACT suffers from poor image quality outside the focal zone along both the elevational and lateral dimensions. To address this challenge, we propose a novel reconstruction strategy that integrates the synthetic aperture focusing technique (SAFT) with a 2nd-derivative-based back projection (2nd D-BP) algorithm to restore image quality outside the focal zone along both axes. The proposed solution is a two-phase reconstruction scheme. In the first phase, with the assistance of an acoustic lens, we designed a circular array-based SAFT algorithm to restore resolution and SNR along the elevational axis; the acoustic lens raises the upper limit of the elevational resolution the SAFT scheme can achieve. In the second phase, we propose a 2nd D-BP scheme to improve the lateral resolution and suppress noise in 3D imaging results. The 2nd D-BP strategy enhances image quality along the lateral dimension by up-converting the high spatial frequencies of the object's absorption pattern. We validated the effectiveness of the proposed strategy using both phantom and in vivo human experiments. The experimental results demonstrated superior image quality: a 7-fold enhancement in elevational resolution, a 3-fold enhancement in lateral resolution, and an 11-dB increase in SNR. This strategy provides a new paradigm for PACT systems, as it significantly enhances spatial resolution and imaging contrast in both the elevational and lateral dimensions while maintaining a large focal zone.
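At its core, the SAFT phase is a delay-and-sum across neighboring A-lines that treats the transducer focus as a virtual detector. A self-contained toy sketch on synthetic data (geometry and sampling values are illustrative assumptions; the acoustic lens and the 2nd D-BP phase are omitted):

```python
import numpy as np

# Toy virtual-detector SAFT: each scan position contributes an A-line whose
# focus acts as a virtual point detector; delay-and-sum refocuses targets
# outside the focal zone. All parameters are illustrative assumptions.
c, fs = 1540.0, 100e6                     # sound speed (m/s), sampling rate (Hz)
zf = 5e-3                                 # transducer focal depth, m
xs = np.arange(-2e-3, 2.001e-3, 0.1e-3)   # scan positions, m
n_t = 2048

def tof(x_pt, z_pt):
    """One-way time of flight from a point below the focus, per scan position."""
    return (zf + np.sqrt((xs - x_pt) ** 2 + (z_pt - zf) ** 2)) / c

# Synthesize A-lines for a single point absorber below the focus
x0, z0 = 0.0, 7e-3
rf = np.zeros((xs.size, n_t))
rf[np.arange(xs.size), np.round(tof(x0, z0) * fs).astype(int)] = 1.0

def saft(x_pt, z_pt):
    """Delay-and-sum across all A-lines at one reconstruction point."""
    i = np.clip(np.round(tof(x_pt, z_pt) * fs).astype(int), 0, n_t - 1)
    return rf[np.arange(xs.size), i].sum()

zs = np.arange(6e-3, 8.001e-3, 0.05e-3)
img = np.array([[saft(x, z) for z in zs] for x in xs])
ix, iz = np.unravel_index(np.argmax(img), img.shape)
# The coherent sum peaks at the true absorber position (0 mm, 7 mm)
```

In the paper's circular-array setting the same idea is applied along the elevational direction, with the lens-defined focus as the virtual element.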
Affiliation(s)
- Rongkang Gao
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Tao Chen
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yaguang Ren
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Liangjian Liu
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Ningbo Chen
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- The University of Hong Kong, Department of Electrical and Electronic Engineering, Hong Kong, China
- Kenneth K.Y. Wong
- The University of Hong Kong, Department of Electrical and Electronic Engineering, Hong Kong, China
- Liang Song
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xiaohui Ma
- The First Medical Center of Chinese PLA General Hospital, Department of Vascular and Endovascular Surgery, Beijing, China
- Chengbo Liu
- Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
8
Tserevelakis GJ, Barmparis GD, Kokosalis N, Giosa ES, Pavlopoulos A, Tsironis GP, Zacharakis G. Deep learning-assisted frequency-domain photoacoustic microscopy. Opt Lett 2023;48:2720-2723. PMID: 37186749. DOI: 10.1364/ol.486624.
Abstract
Frequency-domain photoacoustic microscopy (FD-PAM) constitutes a powerful, cost-efficient imaging method that uses intensity-modulated laser beams for the excitation of single-frequency photoacoustic waves. Nevertheless, FD-PAM provides an extremely low signal-to-noise ratio (SNR), which can be up to two orders of magnitude lower than that of conventional time-domain (TD) systems. To overcome this inherent SNR limitation, we employ a U-Net neural network for image augmentation, without the need for excessive averaging or the application of high optical power. In this way, we improve the accessibility of PAM, as the system's cost is dramatically reduced, and we expand its applicability to demanding observations while retaining sufficiently high image-quality standards.
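For context on why a learned restoration is attractive here: recovering two orders of magnitude of SNR purely by averaging would require on the order of 10⁴ acquisitions, since averaging N traces improves amplitude SNR only by √N. A small numerical illustration on synthetic white noise (not FD-PAM data):

```python
import numpy as np

# Averaging N noisy acquisitions shrinks the noise standard deviation by
# sqrt(N): a 100x (40 dB) amplitude-SNR gain needs N = 10,000 averages.
rng = np.random.default_rng(0)
n, length = 10_000, 256
acq = rng.normal(0.0, 1.0, size=(n, length))  # zero-signal noise traces

std_single = acq[0].std()         # noise level of one acquisition, ~1
std_avg = acq.mean(axis=0).std()  # noise level after averaging, ~0.01
gain = std_single / std_avg       # close to sqrt(10_000) = 100
```

That 10⁴-fold acquisition burden is exactly what the network sidesteps.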
9
He D, Zhou J, Shang X, Tang X, Luo J, Chen SL. De-Noising of Photoacoustic Microscopy Images by Attentive Generative Adversarial Network. IEEE Trans Med Imaging 2023;42:1349-1362. PMID: 37015584. DOI: 10.1109/tmi.2022.3227105.
Abstract
As a hybrid imaging technology, photoacoustic microscopy (PAM) suffers from noise due to the maximum permissible exposure of laser intensity, the attenuation of ultrasound in tissue, and the inherent noise of the transducer. De-noising is an image-processing method that reduces this noise so that PAM image quality can be recovered. However, previous de-noising techniques usually rely heavily on manually selected parameters, resulting in unsatisfactory and slow de-noising performance for different noisy images, which greatly hinders practical and clinical applications. In this work, we propose a deep learning-based method that removes noise from PAM images without manual selection of settings for each noisy image. An attention-enhanced generative adversarial network is used to extract image features and adaptively remove various levels of Gaussian, Poisson, and Rayleigh noise. The proposed method is demonstrated on both synthetic and real datasets, including phantom (leaf veins) and in vivo (mouse ear blood vessels and zebrafish pigment) experiments. In the experiments using synthetic datasets, our method achieves improvements of 6.53 dB in peak signal-to-noise ratio and 0.26 in the structural similarity metric. The results show that, compared with previous PAM de-noising methods, our method performs well in recovering images both qualitatively and quantitatively. In addition, a de-noising time of 0.016 s is achieved for a 256×256-pixel image, which has potential for real-time applications. Our approach is effective and practical for the de-noising of PAM images.
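Training such a network requires corrupting clean images with the three noise families at controlled levels. A minimal NumPy sketch of how noisy training inputs might be synthesized (the noise levels and photon scale are illustrative assumptions, not the paper's settings):

```python
import numpy as np

# Corrupt a clean PAM image with the three noise families mentioned above.
# Levels (sigma, photon scale) are illustrative, not from the paper.
rng = np.random.default_rng(42)
clean = rng.uniform(0.0, 1.0, size=(256, 256))  # stand-in "clean" image

gaussian = clean + rng.normal(0.0, 0.05, clean.shape)           # additive
poisson = rng.poisson(clean * 200.0) / 200.0                    # signal-dependent
rayleigh = clean + rng.rayleigh(scale=0.05, size=clean.shape)   # positively biased

# Rayleigh noise has nonzero mean (scale * sqrt(pi/2)), so it biases the
# image, which is one reason a single fixed filter handles it poorly.
bias = (rayleigh - clean).mean()
```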
10
Zhang Z, Jin H, Zhang W, Lu W, Zheng Z, Sharma A, Pramanik M, Zheng Y. Adaptive enhancement of acoustic resolution photoacoustic microscopy imaging via deep CNN prior. Photoacoustics 2023;30:100484. PMID: 37095888. PMCID: PMC10121479. DOI: 10.1016/j.pacs.2023.100484. Received 02/11/2023; accepted 03/29/2023.
Abstract
Acoustic resolution photoacoustic microscopy (AR-PAM) is a promising medical imaging modality that can be employed for deep bio-tissue imaging. However, its relatively low imaging resolution has greatly hindered its wide application. Previous model-based or learning-based PAM enhancement algorithms either require the design of complex handcrafted priors to achieve good performance or lack the interpretability and flexibility to adapt to different degradation models. Moreover, the degradation model of AR-PAM imaging depends on both the imaging depth and the center frequency of the ultrasound transducer, which vary across imaging conditions and cannot be handled by a single neural network model. To address this limitation, an algorithm integrating learning-based and model-based methods is proposed here, so that a single framework can deal with various distortion functions adaptively. Vasculature image statistics are implicitly learned by a deep convolutional neural network, which serves as a plug-and-play (PnP) prior. The trained network can be plugged directly into a model-based optimization framework for iterative AR-PAM image enhancement, fitting different degradation mechanisms. Based on a physical model, the point spread function (PSF) kernels for various AR-PAM imaging situations are derived and used for the enhancement of simulated and in vivo AR-PAM images, which collectively prove the effectiveness of the proposed method. Quantitatively, the proposed algorithm achieves the best PSNR and SSIM values in all three simulation scenarios, and in an in vivo test the SNR and CNR rose significantly from 6.34 and 5.79 to 35.37 and 29.66, respectively.
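The plug-and-play idea alternates a data-fidelity gradient step under the known PSF with a denoising step. A 1-D toy version, with a small fixed smoothing filter standing in for the trained CNN prior (everything here is an illustrative simplification, not the paper's implementation):

```python
import numpy as np

# PnP-style restoration, 1-D toy: descend on ||A x - y||^2 and apply a
# denoiser D after each gradient step. D here is a tiny smoothing kernel,
# a stand-in for the paper's trained CNN prior.
def gaussian_kernel(size=9, sigma=1.5):
    ax = np.arange(size) - size // 2
    k = np.exp(-ax ** 2 / (2 * sigma ** 2))
    return k / k.sum()

psf = gaussian_kernel()
A = lambda v: np.convolve(v, psf, mode="same")           # blur operator (A^T = A)
D = lambda v: np.convolve(v, [0.05, 0.9, 0.05], "same")  # mild "denoiser"

x_true = np.zeros(64)
x_true[[20, 40]] = 1.0      # two point absorbers
y = A(x_true)               # blurred observation (noise-free toy)

x = y.copy()
for _ in range(50):
    x = D(x - 1.0 * A(A(x) - y))   # gradient step, then plug-in denoiser

err_obs = np.linalg.norm(y - x_true)
err_rec = np.linalg.norm(x - x_true)
# err_rec < err_obs: the iterations sharpen the blurred peaks
```

Swapping D for a network trained on vasculature statistics, and psf for the depth- and frequency-dependent kernels the paper derives, gives the adaptive behavior described above.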
Affiliation(s)
- Zhengyuan Zhang
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Haoran Jin
- Zhejiang University, College of Mechanical Engineering, The State Key Laboratory of Fluid Power and Mechatronic Systems, Hangzhou 310027, China
- Wenwen Zhang
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Wenhao Lu
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Zesheng Zheng
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Arunima Sharma
- Johns Hopkins University, Electrical and Computer Engineering, Baltimore, MD 21218, USA
- Manojit Pramanik
- Iowa State University, Department of Electrical and Computer Engineering, Ames, Iowa, USA
- Yuanjin Zheng
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Corresponding author.
11
Zhou Y, Sun N, Hu S. Deep Learning-Powered Bessel-Beam Multiparametric Photoacoustic Microscopy. IEEE Trans Med Imaging 2022;41:3544-3551. PMID: 35788453. PMCID: PMC9767649. DOI: 10.1109/tmi.2022.3188739.
Abstract
Enabling simultaneous and high-resolution quantification of the total concentration of hemoglobin ([Formula: see text]), oxygen saturation of hemoglobin (sO2), and cerebral blood flow (CBF), multi-parametric photoacoustic microscopy (PAM) has emerged as a promising tool for functional and metabolic imaging of the live mouse brain. However, due to the limited depth of focus imposed by Gaussian-beam excitation, the quantitative measurements become inaccurate when the imaging object is out of focus. To address this problem, we have developed a combined hardware-software approach that integrates Bessel-beam excitation and conditional generative adversarial network (cGAN)-based deep learning. A side-by-side comparison of the new cGAN-powered Bessel-beam multi-parametric PAM against the conventional Gaussian-beam multi-parametric PAM shows that the new system enables high-resolution, quantitative imaging of [Formula: see text], sO2, and CBF over a depth range of [Formula: see text] in the live mouse brain, with errors 13-58 times lower than those of the conventional system. By better fulfilling the rigid requirement of light focusing for accurate hemodynamic measurements, the deep learning-powered Bessel-beam multi-parametric PAM may find applications in large-field functional recording across the uneven brain surface and beyond (e.g., tumor imaging).
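The Gaussian-beam limitation mentioned above is set by the Rayleigh range: the depth of focus is DOF = 2z_R = 2πw₀²/λ, only tens of micrometers for a tightly focused visible-light beam. A quick calculation (the example wavelength and waist are assumptions for illustration, not the system's actual parameters):

```python
import math

# Depth of focus of a Gaussian excitation beam: DOF = 2 * z_R = 2*pi*w0^2/lam.
# The example values (532 nm light, 2 um waist) are illustrative assumptions.
lam = 532e-9   # wavelength, m
w0 = 2e-6      # beam waist radius, m

z_rayleigh = math.pi * w0 ** 2 / lam
dof = 2 * z_rayleigh
print(round(dof * 1e6, 1))  # ~47.2 um
```

A Bessel beam's needle-like focus extends this usable depth range by orders of magnitude, at the cost of sidelobe artifacts that the cGAN is trained to suppress.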
12
Zhang Z, Jin H, Zheng Z, Sharma A, Wang L, Pramanik M, Zheng Y. Deep and Domain Transfer Learning Aided Photoacoustic Microscopy: Acoustic Resolution to Optical Resolution. IEEE Trans Med Imaging 2022;41:3636-3648. PMID: 35849667. DOI: 10.1109/tmi.2022.3192072.
Abstract
Acoustic resolution photoacoustic microscopy (AR-PAM) can achieve greater imaging depth in biological tissue, at the sacrifice of imaging resolution compared with optical resolution photoacoustic microscopy (OR-PAM). Here we aim to enhance AR-PAM image quality towards that of OR-PAM images, which specifically includes enhancing imaging resolution, restoring micro-vasculature, and reducing artifacts. To this end, a network (MultiResU-Net) is first trained as a generative model with simulated AR-OR image pairs synthesized with a physical transducer model. Moderate enhancement results can already be obtained when applying this model to in vivo AR imaging data; nevertheless, the perceptual quality is unsatisfactory due to domain shift. A domain transfer learning technique under a generative adversarial network (GAN) framework is therefore proposed to drive the enhanced image's manifold towards that of real OR images. In this way, a perceptually convincing AR-to-OR enhancement result is obtained, which is also supported by quantitative analysis. Peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values increase significantly from 14.74 dB to 19.01 dB and from 0.1974 to 0.2937, respectively, validating the improvement in reconstruction correctness and overall perceptual quality. The proposed algorithm has also been validated across different imaging depths, with experiments conducted in both shallow and deep tissue. The above AR-to-OR domain transfer learning with GAN (AODTL-GAN) framework enables the enhancement target with only a limited amount of matched in vivo AR-OR imaging data.
13
Guezzi N, Lee C, Le TD, Seong H, Choi KH, Min JJ, Yu J. Multistage adaptive noise reduction technique for optical resolution photoacoustic microscopy. J Biophotonics 2022;15:e202200164. PMID: 36053943. DOI: 10.1002/jbio.202200164. Received 05/30/2022; revised 07/22/2022; accepted 07/25/2022.
Abstract
Photoacoustic microscopy has received great attention owing to its optical absorption contrast, superior spatial resolution, and relatively deep imaging depth. Like other imaging modalities, photoacoustic images suffer from noise, and filtering techniques are required to remove it. To overcome the noise, we propose a combination of filters for noise removal and image-quality enhancement: an adaptive median filter, effective against impulsive noise, and a nonlocal means filter, effective against background noise. Our proposed method enhanced the signal-to-noise ratio by 16 dB in an in vivo study compared with the traditional image reconstruction approach, and preserved image detail with minimal blurring, which usually occurs when filtering. These experimental results verify that the proposed adaptive multistage denoising technique can effectively improve image quality under noisy data-acquisition conditions, providing a strong foundation for photoacoustic microscopy with limited laser power.
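The first stage, the adaptive median filter, grows its window until the local median is itself impulse-free and then replaces only the pixels judged to be impulses. A compact NumPy sketch (an illustrative re-implementation, not the authors' code):

```python
import numpy as np

def adaptive_median(img, max_half=2):
    """Adaptive median filter: replaces only impulse-like pixels, growing
    the window from 3x3 up to (2*max_half+1)^2 when the local median
    itself looks like an impulse."""
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            for half in range(1, max_half + 1):
                win = img[max(0, y - half):y + half + 1,
                          max(0, x - half):x + half + 1]
                med, lo, hi = np.median(win), win.min(), win.max()
                if lo < med < hi:                  # median is not an impulse
                    if not (lo < img[y, x] < hi):  # pixel is an impulse
                        out[y, x] = med
                    break
            else:                                  # window maxed out
                out[y, x] = med
    return out

# Salt-and-pepper impulses on a flat 0.5 image are fully removed
clean = np.full((16, 16), 0.5)
noisy = clean.copy()
noisy[2, 3], noisy[8, 8], noisy[12, 5] = 1.0, 0.0, 1.0
restored = adaptive_median(noisy)
```

Unlike a plain median filter, non-impulse pixels pass through untouched, which is why this stage blurs so little before the nonlocal means stage runs.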
Affiliation(s)
- Nizar Guezzi
  - Department of Robotics and Mechatronics Engineering, DGIST, Daegu, South Korea
  - DGIST Robotics Research Center, DGIST, Daegu, South Korea
- Changho Lee
  - Department of Nuclear Medicine, Chonnam National University Medical School & Hwasun Hospital, Hwasun, Jeollanam-do, South Korea
  - Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, South Korea
- Thanh Dat Le
  - Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, South Korea
- Hyojin Seong
  - Department of Robotics and Mechatronics Engineering, DGIST, Daegu, South Korea
  - DGIST Robotics Research Center, DGIST, Daegu, South Korea
- Kang-Ho Choi
  - Department of Neurology, Chonnam National University Hospital, Gwangju, South Korea
- Jung-Joon Min
  - Department of Nuclear Medicine, Chonnam National University Medical School & Hwasun Hospital, Hwasun, Jeollanam-do, South Korea
- Jaesok Yu
  - Department of Robotics and Mechatronics Engineering, DGIST, Daegu, South Korea
  - DGIST Robotics Research Center, DGIST, Daegu, South Korea
  - The Interdisciplinary Studies of Artificial Intelligence, DGIST, Daegu, South Korea
14
Madasamy A, Gujrati V, Ntziachristos V, Prakash J. Deep learning methods hold promise for light fluence compensation in three-dimensional optoacoustic imaging. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:106004. [PMID: 36209354 PMCID: PMC9547608 DOI: 10.1117/1.jbo.27.10.106004] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Accepted: 08/30/2022] [Indexed: 06/16/2023]
Abstract
SIGNIFICANCE: Quantitative optoacoustic imaging (QOAI) continues to be a challenge due to the influence of the nonlinear optical fluence distribution, which distorts the optoacoustic image representation. Nonlinear optical fluence correction in OA imaging is highly ill-posed, leading to inaccurate recovery of optical absorption maps. This work aims to recover optical absorption maps using a deep learning (DL) approach that corrects for the fluence effect.
AIM: Different DL models were compared and investigated to enable optical absorption coefficient recovery at a particular wavelength in a nonhomogeneous foreground and background medium.
APPROACH: Data-driven models were trained with a two-dimensional (2D) blood-vessel phantom and a three-dimensional (3D) numerical breast phantom with highly heterogeneous, realistic structures to correct for the nonlinear optical fluence distribution. The trained DL models, namely U-Net, Fully Dense (FD) U-Net, Y-Net, FD Y-Net, Deep Residual U-Net (Deep ResU-Net), and a generative adversarial network (GAN), were tested to evaluate the performance of optical absorption coefficient recovery (fluence compensation) with in-silico and in-vivo datasets.
RESULTS: The results indicate that FD U-Net-based deconvolution improves the reconstructed optoacoustic images by about 10% in terms of peak signal-to-noise ratio. Further, the DL models can indeed highlight deep-seated structures with higher contrast due to fluence compensation. Importantly, the DL models were found to be about 17 times faster than solving the diffusion equation for fluence correction.
CONCLUSIONS: The DL methods compensated for the nonlinear optical fluence distribution more effectively and improved the optoacoustic image quality.
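The physics these networks learn to invert is simple to state: the initial pressure is p0 = Γ·μa·Φ, so recovering the absorption μa means dividing out the unknown, depth-dependent fluence Φ. A toy sketch with a known exponential fluence model (all values illustrative; in practice Φ is heterogeneous and unknown, which is exactly why the paper trains U-Net variants to perform this compensation implicitly):

```python
import numpy as np

# Ground-truth absorption map: two identical line absorbers at different depths.
mu_a_true = np.zeros((64, 64))
mu_a_true[20, :] = 1.0            # shallow absorber
mu_a_true[50, :] = 1.0            # deep absorber

gamma = 0.8                       # Grueneisen parameter (assumed constant)
depth = np.arange(64)[:, None]
fluence = np.exp(-0.05 * depth) * np.ones((1, 64))  # simple exponential decay

# Simulated initial-pressure image: the deep absorber appears dimmer in p0
# although its absorption coefficient is identical...
p0 = gamma * mu_a_true * fluence
# ...and dividing out the fluence restores equal amplitudes.
mu_a_rec = p0 / (gamma * fluence)
```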
Affiliation(s)
- Arumugaraj Madasamy
  - Indian Institute of Science, Department of Instrumentation and Applied Physics, Bengaluru, Karnataka, India
- Vipul Gujrati
  - Institute of Biological and Medical Imaging, Helmholtz Zentrum München (GmbH), Neuherberg, Germany
  - Technical University of Munich, School of Medicine, Chair of Biological Imaging, Munich, Germany
- Vasilis Ntziachristos
  - Institute of Biological and Medical Imaging, Helmholtz Zentrum München (GmbH), Neuherberg, Germany
  - Technical University of Munich, School of Medicine, Chair of Biological Imaging, Munich, Germany
  - Technical University of Munich, Munich Institute of Robotics and Machine Intelligence (MIRMI), Munich, Germany
- Jaya Prakash
  - Indian Institute of Science, Department of Instrumentation and Applied Physics, Bengaluru, Karnataka, India
15
Meng J, Zhang X, Liu L, Zeng S, Fang C, Liu C. Depth-extended acoustic-resolution photoacoustic microscopy based on a two-stage deep learning network. BIOMEDICAL OPTICS EXPRESS 2022; 13:4386-4397. [PMID: 36032586 PMCID: PMC9408237 DOI: 10.1364/boe.461183] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/14/2022] [Revised: 06/25/2022] [Accepted: 07/17/2022] [Indexed: 06/15/2023]
Abstract
Acoustic-resolution photoacoustic microscopy (AR-PAM) is a major modality of photoacoustic imaging. It can non-invasively provide high-resolution morphological and functional information about biological tissues. However, the image quality of AR-PAM degrades rapidly as targets move away from the focus. Although some work has been done to extend the high-resolution imaging depth of AR-PAM, most of it requires a small focal spot, which is generally not available in a regular AR-PAM system. Therefore, we propose a two-stage deep learning (DL) reconstruction strategy for AR-PAM to adaptively recover high-resolution photoacoustic images at different out-of-focus depths. A residual U-Net with attention gates was developed to implement the image reconstruction. We carried out phantom and in vivo experiments to optimize the proposed DL network and verify the performance of the proposed reconstruction method. Experimental results demonstrate that our approach extends the depth of focus of AR-PAM from 1 mm to 3 mm under the 4 mJ/cm2 optical fluence used in the imaging system. In addition, the imaging resolution in regions 2 mm away from the focus can be improved to a level similar to that of the in-focus area. The proposed method effectively improves the imaging ability of AR-PAM and thus could be used in various biomedical studies requiring greater imaging depth.
Affiliation(s)
- Jing Meng
  - School of Computer, Qufu Normal University, Rizhao 276826, China
  - These authors contributed equally to this work
- Xueting Zhang
  - School of Computer, Qufu Normal University, Rizhao 276826, China
  - These authors contributed equally to this work
- Liangjian Liu
  - Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - These authors contributed equally to this work
- Silue Zeng
  - Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - Department of Hepatobiliary Surgery I, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China
- Chihua Fang
  - Department of Hepatobiliary Surgery I, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China
- Chengbo Liu
  - Institute of Biomedical and Health Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
16
Xu Z, Pan Y, Chen N, Zeng S, Liu L, Gao R, Zhang J, Fang C, Song L, Liu C. Visualizing tumor angiogenesis and boundary with polygon-scanning multiscale photoacoustic microscopy. PHOTOACOUSTICS 2022; 26:100342. [PMID: 35433255 PMCID: PMC9010793 DOI: 10.1016/j.pacs.2022.100342] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Revised: 02/15/2022] [Accepted: 02/21/2022] [Indexed: 05/05/2023]
Abstract
Recently, we developed an integrated optical-resolution (OR) and acoustic-resolution (AR) PAM, which has multiscale imaging capability at different resolutions. However, limited by the scanning method, a tradeoff exists between imaging speed and field of view, which impedes its wider application. Here, we present an improved multiscale PAM that achieves high-speed wide-field imaging based on a homemade polygon scanner. An encoder-trigger mode was proposed to avoid jitter of the polygon scanner during imaging. Distortions caused by polygon scanning were analyzed theoretically and compared with traditional types of distortions in optical-scanning PAM. A depth-correction method was then proposed and verified to compensate for the distortions. System characterization of OR-PAM and AR-PAM was performed prior to in vivo imaging. Blood reperfusion of an in vivo mouse ear was imaged continuously to demonstrate the feasibility of the multiscale PAM for high-speed imaging. Results showed that the maximum B-scan rate reaches 14.65 Hz over a fixed scanning range of 10 mm. Compared with our previous multiscale system, the imaging speed of the improved system was increased by a factor of 12.35. In vivo imaging of a subcutaneously inoculated B-16 melanoma in a mouse was performed. Results showed that the blood vasculature around the melanoma could be resolved and the melanoma could be visualized at depths up to 1.6 mm using the multiscale PAM.
Affiliation(s)
- Zhiqiang Xu
  - Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yinhao Pan
  - Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - College of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou 510006, China
- Ningbo Chen
  - Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Silue Zeng
  - Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - Department of Hepatobiliary Surgery I, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China
- Liangjian Liu
  - Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Rongkang Gao
  - Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jianhui Zhang
  - College of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou 510006, China
- Chihua Fang
  - Department of Hepatobiliary Surgery I, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China
- Liang Song
  - Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Chengbo Liu
  - Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - Corresponding author.
17
Feng F, Liang S, Luo J, Chen SL. High-fidelity deconvolution for acoustic-resolution photoacoustic microscopy enabled by convolutional neural networks. PHOTOACOUSTICS 2022; 26:100360. [PMID: 35574187 PMCID: PMC9095893 DOI: 10.1016/j.pacs.2022.100360] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 04/18/2022] [Accepted: 04/18/2022] [Indexed: 05/10/2023]
Abstract
Acoustic-resolution photoacoustic microscopy (AR-PAM) image resolution is determined by the point spread function (PSF) of the imaging system. Previous algorithms, including Richardson-Lucy (R-L) deconvolution and model-based (MB) deconvolution, improve spatial resolution by using the PSF as prior knowledge. However, these methods suffer from inaccurate deconvolution, meaning the deconvolved feature size and the original one are not consistent (e.g., the former can be smaller than the latter). We present a novel deep convolutional neural network (CNN)-based algorithm featuring high-fidelity recovery of multiscale feature sizes to improve the lateral resolution of AR-PAM. The CNN is trained with simulated image pairs of line patterns, which mimic blood vessels. To investigate suitable CNN model structures and demonstrate the effectiveness of CNN methods relative to non-learning methods, we select five different CNN models, and the R-L and directional MB methods are also applied for comparison. Besides simulated data, experimental data including tungsten wires, leaf veins, and in vivo blood vessels are also evaluated. A custom-defined metric, the relative size error (RSE), is used to quantify the multiscale feature recovery ability of the different methods. Compared with the other methods, the enhanced deep super-resolution (EDSR) network and the residual-in-residual dense block network (RRDBNet) show better recovery in terms of RSE for tungsten wires with diameters ranging from 30 μm to 120 μm. Moreover, AR-PAM images of leaf veins are tested to demonstrate the effectiveness of the optimized CNN methods (EDSR and RRDBNet) for complex patterns. Finally, in vivo images of mouse ear and rat ear blood vessels are acquired and deconvolved, and the results show that the proposed CNN method (notably RRDBNet) enables accurate deconvolution of multiscale feature sizes and thus good fidelity.
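The relative size error (RSE) metric is custom-defined in the paper; a plausible reading is the relative mismatch between a measured feature width (e.g., the FWHM of a wire profile) and its true size. A sketch under that assumption, with a hypothetical `fwhm` helper:

```python
import numpy as np

def fwhm(profile, dx=1.0):
    """Full width at half maximum of a single-peak 1-D profile,
    with linear interpolation at the two half-maximum crossings."""
    y = np.asarray(profile, float) - np.min(profile)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    left = i0 - (y[i0] - half) / (y[i0] - y[i0 - 1]) if i0 > 0 else float(i0)
    right = i1 + (y[i1] - half) / (y[i1] - y[i1 + 1]) if i1 < len(y) - 1 else float(i1)
    return (right - left) * dx

def relative_size_error(measured, true):
    """RSE as assumed here: |measured - true| / true."""
    return abs(measured - true) / true

# A Gaussian of sigma = 10 samples has FWHM = 2*sqrt(2*ln 2)*10, about 23.55.
x = np.arange(200)
profile = np.exp(-((x - 100.0) ** 2) / (2 * 10.0 ** 2))
```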
Affiliation(s)
- Fei Feng
  - University of Michigan–Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Siqi Liang
  - University of Michigan–Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Jiajia Luo
  - Institute of Medical Technology, Peking University Health Science Center, Beijing 100191, China
  - Biomedical Engineering Department, Peking University, Beijing 100191, China
  - Peking University People’s Hospital, Beijing 100044, China
  - Corresponding author at: Biomedical Engineering Department, Peking University, Beijing 100191, China.
- Sung-Liang Chen
  - University of Michigan–Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
  - State Key Laboratory of Advanced Optical Communication Systems and Networks, Shanghai Jiao Tong University, Shanghai 200240, China
  - Engineering Research Center of Digital Medicine and Clinical Translation, Ministry of Education, Shanghai 200030, China
  - Corresponding author at: University of Michigan–Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China.
18
Rajendran P, Pramanik M. High frame rate (∼3 Hz) circular photoacoustic tomography using single-element ultrasound transducer aided with deep learning. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:066005. [PMID: 36452448 PMCID: PMC9209813 DOI: 10.1117/1.jbo.27.6.066005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/27/2022] [Accepted: 06/01/2022] [Indexed: 05/29/2023]
Abstract
SIGNIFICANCE: In circular-scanning photoacoustic tomography (PAT), it takes several minutes to generate an image of acceptable quality, especially with a single-element ultrasound transducer (UST). The imaging speed can be enhanced by faster scanning (with high-repetition-rate light sources) and by using multiple USTs. However, artifacts arising from sparse signal acquisition and the low signal-to-noise ratio at higher scanning speeds limit the imaging speed. Thus, there is a need to improve the imaging speed of PAT systems without hampering image quality.
AIM: To improve the frame rate (imaging speed) of the PAT system by using deep learning (DL).
APPROACH: We propose a novel U-Net-based DL framework to reconstruct PAT images from fast-scanning data.
RESULTS: The efficiency of the network was evaluated on both single- and multiple-UST-based PAT systems. Both phantom and in vivo imaging demonstrate that the network can improve the imaging frame rate by approximately sixfold in single-UST-based PAT systems and by approximately twofold in multi-UST-based PAT systems.
CONCLUSIONS: We proposed an innovative DL-based method to improve the frame rate, achieving imaging at a frame rate of ∼3 Hz without hampering the quality of the reconstructed image.
Affiliation(s)
- Manojit Pramanik
  - Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore
19
Gao R, Xue Q, Ren Y, Zhang H, Song L, Liu C. Achieving depth-independent lateral resolution in AR-PAM using the synthetic-aperture focusing technique. PHOTOACOUSTICS 2022; 26:100328. [PMID: 35242539 PMCID: PMC8861412 DOI: 10.1016/j.pacs.2021.100328] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Revised: 12/12/2021] [Accepted: 12/23/2021] [Indexed: 05/02/2023]
Abstract
Acoustic-resolution photoacoustic microscopy (AR-PAM) is a promising imaging modality that renders images with ultrasound resolution and extends the imaging depth beyond the optical ballistic regime. To achieve a high lateral resolution, a focused transducer with a large numerical aperture (NA) is usually applied in AR-PAM. However, AR-PAM fails to hold its performance in the out-of-focus region: the lateral resolution and signal-to-noise ratio (SNR) degrade substantially, leading to significantly deteriorated image quality outside the focal area. Based on the concept of the synthetic-aperture focusing technique (SAFT), various strategies have been developed to address this challenge, including 1D-SAFT, 2D-SAFT, adaptive SAFT, spatial impulse response (SIR)-based schemes, and delay-multiply-and-sum (DMAS) strategies. These techniques have made progress toward depth-independent lateral resolution, while several challenges remain. This review introduces these SAFT-based developments, highlights their fundamental mechanisms, underlines the advantages and limitations of each approach, and discusses the outlook for the remaining challenges and future advances.
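At their core, the SAFT variants surveyed in this review are delay-and-sum operations: A-lines from neighboring scan positions are shifted by the geometric time of flight to a candidate pixel and summed coherently. A minimal one-way (photoacoustic) sketch with illustrative geometry and an idealized point absorber, not any specific variant from the review:

```python
import numpy as np

c = 1.5        # speed of sound, mm/us (illustrative)
fs = 100.0     # sampling rate, samples per us
xs = np.linspace(-5.0, 5.0, 41)   # transducer scan positions, mm
nt = 1200

# Simulate A-lines for a point absorber at (x, z) = (0 mm, 10 mm):
# each scan position records a spike at its one-way time of flight.
sino = np.zeros((len(xs), nt))
for j, x in enumerate(xs):
    t = np.hypot(x - 0.0, 10.0) / c                 # us
    sino[j, int(round(t * fs))] = 1.0

def saft(sino, xs, xi, zi):
    """Delay-and-sum the A-lines along the hyperbola expected for (xi, zi)."""
    acc = 0.0
    for j, x in enumerate(xs):
        k = int(round(np.hypot(x - xi, zi) / c * fs))
        if k < sino.shape[1]:
            acc += sino[j, k]
    return acc

# The coherent sum peaks at the true absorber location and collapses elsewhere.
print(saft(sino, xs, 0.0, 10.0), saft(sino, xs, 0.0, 12.0))
```

The variants discussed in the review refine this basic idea with virtual detectors, coherence weighting, spatial impulse response modeling, or multiplicative (DMAS) beamforming.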
Affiliation(s)
- Rongkang Gao
  - Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Qiang Xue
  - Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - School of Medicine, Southern University of Science and Technology, Shenzhen 518055, China
  - Department of Ultrasound, First Affiliated Hospital of Southern University of Science and Technology, The Shenzhen Medical Ultrasound Engineering Center, Shenzhen People's Hospital, Shenzhen 518020, China
- Yaguang Ren
  - Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Hai Zhang
  - Department of Ultrasound, First Affiliated Hospital of Southern University of Science and Technology, The Shenzhen Medical Ultrasound Engineering Center, Shenzhen People's Hospital, Shenzhen 518020, China
  - Department of Ultrasound, The Second Clinical College of Jinan University, Shenzhen People's Hospital, Shenzhen 518020, China
- Liang Song
  - Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Chengbo Liu
  - Research Laboratory for Biomedical Optics and Molecular Imaging, CAS Key Laboratory of Health Informatics, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
  - Corresponding author.
20
Zhang H, Bo W, Wang D, DiSpirito A, Huang C, Nyayapathi N, Zheng E, Vu T, Gong Y, Yao J, Xu W, Xia J. Deep-E: A Fully-Dense Neural Network for Improving the Elevation Resolution in Linear-Array-Based Photoacoustic Tomography. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1279-1288. [PMID: 34928793 PMCID: PMC9161237 DOI: 10.1109/tmi.2021.3137060] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/02/2023]
Abstract
Linear-array-based photoacoustic tomography has shown broad applications in biomedical research and preclinical imaging. However, the elevational resolution of a linear array is fundamentally limited by the weak cylindrical focus of the transducer elements. While several methods have been proposed to address this issue, they all handle the problem in a time-inefficient way. In this work, we propose to improve the elevational resolution of a linear array through Deep-E, a fully dense neural network based on U-Net. Deep-E achieves high computational efficiency by converting the three-dimensional problem into a two-dimensional one: it trains a model to enhance the resolution along the elevational direction using only the 2D slices in the axial-elevational plane, thereby reducing the computational burden in simulation and training. We demonstrated the efficacy of Deep-E using various datasets, including simulation, phantom, and human-subject results. We found that Deep-E improves elevational resolution by at least four times and recovers the object's true size. We envision that Deep-E will have a significant impact on linear-array-based photoacoustic imaging studies by providing high-speed, high-resolution image enhancement.
21
Cheng S, Zhou Y, Chen J, Li H, Wang L, Lai P. High-resolution photoacoustic microscopy with deep penetration through learning. PHOTOACOUSTICS 2022; 25:100314. [PMID: 34824976 PMCID: PMC8604673 DOI: 10.1016/j.pacs.2021.100314] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Revised: 11/01/2021] [Accepted: 11/01/2021] [Indexed: 05/18/2023]
Abstract
Optical-resolution photoacoustic microscopy (OR-PAM) enjoys superior spatial resolution and has received intense attention in recent years. Its application, however, has been limited to shallow depths because of the strong scattering of light in biological tissues. In this work, we propose to achieve deep-penetrating OR-PAM performance by applying deep-learning-enabled image transformation to blurry in vivo mouse vascular images acquired with an acoustic-resolution photoacoustic microscopy (AR-PAM) setup. A generative adversarial network (GAN) was trained in this study and improved the lateral resolution of AR-PAM from 54.0 µm to 5.1 µm, comparable to that of a typical OR-PAM (4.7 µm). The feasibility of the network was evaluated with living mouse ear data, producing superior microvasculature images that outperform blind deconvolution. The generalization of the network was validated with in vivo mouse brain data. Moreover, it was shown experimentally that the deep-learning method retains high resolution at tissue depths beyond one optical transport mean free path. While it can be further improved, the proposed method provides new horizons for expanding the scope of OR-PAM toward deep-tissue imaging and wide applications in biomedicine.
Affiliation(s)
- Shengfu Cheng
  - Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
  - The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Yingying Zhou
  - Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
  - Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
  - The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Jiangbo Chen
  - Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
  - City University of Hong Kong Shenzhen Research Institute, Shenzhen, China
- Huanhao Li
  - Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
  - The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Lidai Wang
  - Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
  - City University of Hong Kong Shenzhen Research Institute, Shenzhen, China
- Puxiang Lai
  - Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
  - The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
22
Feng F, Liang S, Chen SL. Image enhancement in acoustic-resolution photoacoustic microscopy enabled by a novel directional algorithm. BIOMEDICAL OPTICS EXPRESS 2022; 13:1026-1044. [PMID: 35284174 PMCID: PMC8884221 DOI: 10.1364/boe.452017] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Revised: 01/14/2022] [Accepted: 01/17/2022] [Indexed: 05/25/2023]
Abstract
Motivated by the line patterns of acoustic-resolution photoacoustic microscopy (AR-PAM) vessel images, we develop modified algorithms for the synthetic aperture focusing technique (SAFT) and deconvolution based on a directional approach to enhance images. The modified algorithms consist of Fourier-accumulation SAFT (FA-SAFT) and directional model-based (D-MB) deconvolution. To evaluate the performance of our algorithms, we conduct a series of imaging experiments, applying our algorithms alongside existing SAFT and deconvolution algorithms for side-by-side comparison. In imaging a tungsten-wire phantom, our algorithms achieve a full width at half maximum of 26-31 µm over a depth of focus of 1.8 mm and a minimum resolvable distance of 46-49 µm, besting existing SAFT and deconvolution algorithms. Imaging of a leaf-skeleton phantom and in vivo imaging of mouse blood vessels also show that our algorithms provide high-resolution, high-signal-to-noise-ratio, and good-fidelity results for complex structures and in vivo applications, especially for images with line patterns. The proposed directional approach can be used not only in AR-PAM but also in other imaging modalities that deal with line patterns, such as FA-SAFT for ultrasound imaging and D-MB deconvolution for optical coherence tomography angiography.
Affiliation(s)
- Fei Feng
  - University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
  - These authors contributed equally to this work
- Siqi Liang
  - University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
  - These authors contributed equally to this work
- Sung-Liang Chen
  - University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
  - Engineering Research Center of Digital Medicine and Clinical Translation, Ministry of Education, Shanghai 200030, China
  - State Key Laboratory of Advanced Optical Communication Systems and Networks, Shanghai Jiao Tong University, Shanghai 200240, China
23
Zhou Y, Ni J, Wen C, Lai P. Light on osteoarthritic joint: from bench to bed. Theranostics 2022; 12:542-557. [PMID: 34976200 PMCID: PMC8692899 DOI: 10.7150/thno.64340] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Accepted: 11/08/2021] [Indexed: 12/19/2022] Open
Abstract
Osteoarthritis (OA) is one of the most rapidly growing disability-associated conditions amid population aging worldwide. There is a pressing need for precise diagnosis and timely intervention for OA in its early stage. Current clinical imaging modalities, including plain radiography, magnetic resonance imaging, ultrasound, and optical coherence tomography, can only reveal structural changes once the damage has been established or advanced. This prompts further endeavors in search of novel functional and molecular imaging, which could enable early diagnosis and intervention of OA. Photoacoustic imaging, a hybrid imaging modality based on the photothermal effect, has drawn wide attention in recent years and has seen a variety of biomedical applications, owing to its ability to yield high-contrast, high-resolution images from structure to function, from tissue down to molecular levels, and from animals to human subjects. Photoacoustic imaging has shown promising potential and preliminary results in OA diagnosis. Regarding the treatment of OA, photothermally triggered therapy has exhibited attractions for enhanced therapeutic outcomes. In this narrative review, we discuss photoacoustic imaging for the diagnosis and monitoring of OA at different stages. Structural, functional, and molecular parameter changes associated with OA joints captured by photoacoustics are summarized, forming the diagnostic perspective of the review. Photothermal therapy applications related to OA are also discussed. Lastly, relevant clinical applications and potential solutions for extending photoacoustic imaging to deeper OA scenarios are proposed. Although some aspects may not be covered, this mini-review provides a better understanding of the diagnosis and treatment of OA with exciting innovations based on tissue photothermal effects. It may also inspire more explorations in the field toward earlier and better theranostics of OA.
24
Zhang X, Ma F, Zhang Y, Wang J, Liu C, Meng J. Sparse-sampling photoacoustic computed tomography: Deep learning vs. compressed sensing. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103233] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
25
Song X, Chen G, Zhao A, Liu X, Zeng J. Virtual optical-resolution photoacoustic microscopy using the k-Wave method. APPLIED OPTICS 2021; 60:11241-11246. [PMID: 35201116 DOI: 10.1364/ao.444106] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 11/27/2021] [Indexed: 06/14/2023]
Abstract
Deep learning has been widely used in image processing, quantitative analysis, and other applications of optical-resolution photoacoustic microscopy (OR-PAM). It requires a large amount of photoacoustic data for training and testing. However, owing to the complex structure, high cost, slow imaging speed, and other constraints of OR-PAM, it is difficult to obtain the amount of data required by deep learning, which limits such research in OR-PAM to a certain extent. To solve this problem, a virtual OR-PAM based on k-Wave is proposed. The virtual photoacoustic microscope mainly involves configuration of the excitation light source and the ultrasonic probe, scanning, and signal processing, and can realize the common Gaussian-beam and Bessel-beam OR-PAMs. The system performance (lateral resolution, axial resolution, and depth of field) was tested by imaging a vertically tilted fiber, and the effectiveness and feasibility of the virtual simulation platform were verified by 3D imaging of a virtual vascular network. The ability to generate datasets for deep learning was also verified. The construction of the virtual OR-PAM can promote OR-PAM research and the application of deep learning in OR-PAM.
Collapse
|
26
|
Photoacoustic imaging aided with deep learning: a review. Biomed Eng Lett 2021; 12:155-173. [DOI: 10.1007/s13534-021-00210-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Revised: 10/19/2021] [Accepted: 11/07/2021] [Indexed: 12/21/2022] Open
|
27
|
Rajendran P, Pramanik M. Deep-learning-based multi-transducer photoacoustic tomography imaging without radius calibration. OPTICS LETTERS 2021; 46:4510-4513. [PMID: 34525034 DOI: 10.1364/ol.434513] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Pulsed laser diodes are used in photoacoustic tomography (PAT) as excitation sources because of their low cost, compact size, and high pulse repetition rate. In combination with multiple single-element ultrasound transducers (SUTs), the imaging speed of PAT can be improved. However, during PAT image reconstruction, the exact radius of each SUT is required for accurate reconstruction. Here we developed a novel deep learning approach to alleviate the need for radius calibration. We used a convolutional neural network (fully dense U-Net) aided with a convolutional long short-term memory block to reconstruct the PAT images. Our analysis on the test set demonstrates that the proposed network eliminates the need for radius calibration and improves the peak signal-to-noise ratio by ∼73% without compromising the image quality. In vivo imaging was used to verify the performance of the network.
Collapse
|
28
|
Yazdani A, Agrawal S, Johnstonbaugh K, Kothapalli SR, Monga V. Simultaneous Denoising and Localization Network for Photoacoustic Target Localization. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2367-2379. [PMID: 33939612 PMCID: PMC8526152 DOI: 10.1109/tmi.2021.3077187] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
A significant research problem of recent interest is the localization of targets like vessels, surgical needles, and tumors in photoacoustic (PA) images. To achieve accurate localization, a high photoacoustic signal-to-noise ratio (SNR) is required. However, this is not guaranteed for deep targets, as optical scattering causes an exponential decay in optical fluence with respect to tissue depth. To address this, we develop a novel deep learning method designed to explicitly exhibit robustness to noise present in photoacoustic radio-frequency (RF) data. More precisely, we describe and evaluate a deep neural network architecture consisting of a shared encoder and two parallel decoders. One decoder extracts the target coordinates from the input RF data while the other boosts the SNR and estimates clean RF data. The joint optimization of the shared encoder and dual decoders lends significant noise robustness to the features extracted by the encoder, which in turn enables the network to retain detailed information about deep targets that may be obscured by noise. Additional custom layers and newly proposed regularizers in the training loss function (designed based on observed RF data signal and noise behavior) serve to increase the SNR in the cleaned RF output and improve model performance. To account for depth-dependent strong optical scattering, our network was trained with simulated photoacoustic datasets of targets embedded at different depths inside tissue media of different scattering levels. The network trained on this novel dataset accurately locates targets in experimental PA data that is clinically relevant with respect to the localization of vessels, needles, or brachytherapy seeds. We verify the merits of the proposed architecture by outperforming the state of the art on both simulated and experimental datasets.
Collapse
|
29
|
Li C, Moatti A, Zhang X, Troy Ghashghaei H, Greenbaum A. Deep learning-based autofocus method enhances image quality in light-sheet fluorescence microscopy. BIOMEDICAL OPTICS EXPRESS 2021; 12:5214-5226. [PMID: 34513252 PMCID: PMC8407817 DOI: 10.1364/boe.427099] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Revised: 06/12/2021] [Accepted: 07/07/2021] [Indexed: 05/23/2023]
Abstract
Light-sheet fluorescence microscopy (LSFM) is a minimally invasive and high-throughput imaging technique ideal for capturing large volumes of tissue with sub-cellular resolution. A fundamental requirement for LSFM is a seamless overlap of the light-sheet that excites a selective plane in the specimen with the focal plane of the objective lens. However, spatial heterogeneity in the refractive index of the specimen often results in violation of this requirement when imaging deep in the tissue. To address this issue, autofocus methods are commonly used to refocus the focal plane of the objective lens on the light-sheet. Yet, autofocus techniques are slow, since they require capturing a stack of images, and tend to fail in the presence of the spherical aberrations that dominate volume imaging. To address these issues, we present a deep learning-based autofocus framework that can estimate the position of the objective-lens focal plane relative to the light-sheet based on two defocused images. This approach outperforms the best traditional autofocus method on small image patches and provides comparable results on large patches. When the trained network is integrated with a custom-built LSFM, a certainty measure is used to further refine the network's prediction. The network's performance is demonstrated in real time on cleared, genetically labeled mouse forebrain and pig cochlea samples. Our study provides a framework that could improve light-sheet microscopy and its application toward imaging large 3D specimens with high spatial resolution.
Collapse
Affiliation(s)
- Chen Li
- Joint Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695, USA
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695, USA
| | - Adele Moatti
- Joint Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695, USA
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695, USA
| | - Xuying Zhang
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695, USA
- Department of Molecular Biomedical Sciences, North Carolina State University, Raleigh, NC 27695, USA
| | - H. Troy Ghashghaei
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695, USA
- Department of Molecular Biomedical Sciences, North Carolina State University, Raleigh, NC 27695, USA
| | - Alon Greenbaum
- Joint Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC 27695, USA
- Comparative Medicine Institute, North Carolina State University, Raleigh, NC 27695, USA
- Bioinformatics Research Center, North Carolina State University, Raleigh, NC 27695, USA
| |
Collapse
|
30
|
Xia J, Lediju Bell MA, Laufer J, Yao J. Translational Photoacoustic Imaging for Disease Diagnosis, Monitoring, and Surgical Guidance: introduction to the feature issue. BIOMEDICAL OPTICS EXPRESS 2021; 12:4115-4118. [PMID: 34457402 PMCID: PMC8367276 DOI: 10.1364/boe.430421] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/29/2021] [Indexed: 05/10/2023]
Abstract
This feature issue of Biomedical Optics Express covers all aspects of translational photoacoustic research. Application areas include screening and diagnosis of diseases, imaging of disease progression and therapeutic response, and image-guided treatment, such as surgery, drug delivery, and photothermal/photodynamic therapy. The feature issue also covers relevant developments in photoacoustic instrumentation, contrast agents, image processing and reconstruction algorithms.
Collapse
Affiliation(s)
- Jun Xia
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
| | - Muyinatu A. Lediju Bell
- Department of Electrical and Computer Engineering, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218, USA
| | - Jan Laufer
- Institut für Physik, Martin-Luther-Universität Halle-Wittenberg, von-Danckelmann-Platz 3, 06120 Halle (Saale), Germany
| | - Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| |
Collapse
|
31
|
DiSpirito A, Vu T, Pramanik M, Yao J. Sounding out the hidden data: A concise review of deep learning in photoacoustic imaging. Exp Biol Med (Maywood) 2021; 246:1355-1367. [PMID: 33779342 PMCID: PMC8243210 DOI: 10.1177/15353702211000310] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
The rapidly evolving field of photoacoustic tomography utilizes endogenous chromophores to extract both functional and structural information from deep within tissues. It is this power to perform precise quantitative measurements in vivo, with endogenous or exogenous contrast, that makes photoacoustic tomography highly promising for clinical translation in functional brain imaging, early cancer detection, real-time surgical guidance, and the visualization of dynamic drug responses. Considering that photoacoustic tomography has benefited from numerous engineering innovations, it is no surprise that many of its current cutting-edge developments incorporate advances from the equally novel field of artificial intelligence. More specifically, alongside the growth and prevalence of graphics processing unit capabilities in recent years has emerged an offshoot of artificial intelligence known as deep learning. Rooted in the solid foundation of signal processing, deep learning typically utilizes a method of optimization known as gradient descent to minimize a loss function and update model parameters. There are already a number of innovative efforts in photoacoustic tomography utilizing deep learning techniques for a variety of purposes, including resolution enhancement, reconstruction artifact removal, undersampling correction, and improved quantification. Most of these efforts have proven highly promising in addressing long-standing technical obstacles where traditional solutions either fail completely or make only incremental progress. This concise review focuses on the history of applied artificial intelligence in photoacoustic tomography, presents recent advances at this multifaceted intersection of fields, and outlines the most exciting advances that will likely propagate into promising future innovations.
Collapse
Affiliation(s)
- Anthony DiSpirito
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| | - Tri Vu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| | - Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore 637459, Singapore
| | - Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| |
Collapse
|
32
|
Yao J, Wang LV. Perspective on fast-evolving photoacoustic tomography. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-210105-PERR. [PMID: 34196136 PMCID: PMC8244998 DOI: 10.1117/1.jbo.26.6.060602] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Accepted: 06/17/2021] [Indexed: 05/19/2023]
Abstract
SIGNIFICANCE Acoustically detecting the rich optical absorption contrast in biological tissues, photoacoustic tomography (PAT) seamlessly bridges the functional and molecular sensitivity of optical excitation with the deep penetration and high scalability of ultrasound detection. As a result of continuous technological innovations and commercial development, PAT has been playing an increasingly important role in life sciences and patient care, including functional brain imaging, smart drug delivery, early cancer diagnosis, and interventional therapy guidance. AIM Built on our 2016 tutorial article that focused on the principles and implementations of PAT, this perspective aims to provide an update on the exciting technical advances in PAT. APPROACH This perspective focuses on the recent PAT innovations in volumetric deep-tissue imaging, high-speed wide-field microscopic imaging, high-sensitivity optical ultrasound detection, and machine-learning enhanced image reconstruction and data processing. Representative applications are introduced to demonstrate these enabling technical breakthroughs in biomedical research. CONCLUSIONS We conclude the perspective by discussing the future development of PAT technologies.
Collapse
Affiliation(s)
- Junjie Yao
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
| | - Lihong V. Wang
- California Institute of Technology, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, Pasadena, California, United States
| |
Collapse
|
33
|
Gröhl J, Schellenberg M, Dreher K, Maier-Hein L. Deep learning for biomedical photoacoustic imaging: A review. PHOTOACOUSTICS 2021; 22:100241. [PMID: 33717977 PMCID: PMC7932894 DOI: 10.1016/j.pacs.2021.100241] [Citation(s) in RCA: 80] [Impact Index Per Article: 26.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/05/2020] [Revised: 01/18/2021] [Accepted: 01/20/2021] [Indexed: 05/04/2023]
Abstract
Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extraction of relevant tissue parameters from the raw data requires the solving of inverse image reconstruction problems, which have proven extremely difficult to solve. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in the context of medical imaging and also finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and the fact that they can be adapted to any given problem. In this review, we examine the current state of the art regarding deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.
Collapse
Affiliation(s)
- Janek Gröhl
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
| | - Melanie Schellenberg
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
| | - Kris Dreher
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
| | - Lena Maier-Hein
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
| |
Collapse
|
34
|
Das D, Sharma A, Rajendran P, Pramanik M. Another decade of photoacoustic imaging. Phys Med Biol 2020; 66. [PMID: 33361580 DOI: 10.1088/1361-6560/abd669] [Citation(s) in RCA: 50] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Accepted: 12/23/2020] [Indexed: 01/09/2023]
Abstract
Photoacoustic imaging is a hybrid biomedical imaging modality finding its way into clinical practice. Although the photoacoustic phenomenon has been known for more than a century, only in the last two decades has it been widely researched and used for biomedical imaging applications. In this review we focus on the development and progress of the technology in the last decade (2010-2020). Becoming ever more user friendly, cheaper, and more portable, photoacoustic imaging promises a wide range of applications if translated to the clinic. The growth of the photoacoustic community is steady, and with the several new directions researchers are exploring, it is inevitable that photoacoustic imaging will one day establish itself as a regular imaging modality in clinical practice.
Collapse
Affiliation(s)
- Dhiman Das
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, SINGAPORE
| | - Arunima Sharma
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, SINGAPORE
| | - Praveenbalaji Rajendran
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, SINGAPORE
| | - Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, 70 Nanyang Drive, N1.3-B2-11, Singapore, 637457, SINGAPORE
| |
Collapse
|
35
|
Rajendran P, Pramanik M. Deep learning approach to improve tangential resolution in photoacoustic tomography. BIOMEDICAL OPTICS EXPRESS 2020; 11:7311-7323. [PMID: 33408998 PMCID: PMC7747891 DOI: 10.1364/boe.410145] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Revised: 10/29/2020] [Accepted: 11/15/2020] [Indexed: 05/09/2023]
Abstract
In circular-scan photoacoustic tomography (PAT), the axial resolution is spatially invariant and is limited by the bandwidth of the detector. However, the tangential resolution is spatially variant and depends on the aperture size of the detector. In particular, the tangential resolution improves with decreasing aperture size, but using a detector with a smaller aperture reduces the sensitivity of the transducer. Thus, large-aperture detectors are widely preferred in circular-scan PAT imaging systems. Although several techniques have been proposed to improve the tangential resolution, they have inherent limitations such as high cost and the need for customized detectors. Herein, we propose a novel deep learning architecture to counter the spatially variant tangential resolution in circular-scan PAT imaging systems. We used a fully dense U-Net-based convolutional neural network architecture along with 9 residual blocks to improve the tangential resolution of the PAT images. The network was trained on simulated datasets, and its performance was verified by experimental in vivo imaging. Results show that the proposed deep learning network improves the tangential resolution eightfold without compromising the structural similarity or quality of the image.
Collapse
Affiliation(s)
- Praveenbalaji Rajendran
- Nanyang Technological University, School of Chemical and Biomedical Engineering, 62 Nanyang Drive, Singapore 637459, Singapore
| | - Manojit Pramanik
- Nanyang Technological University, School of Chemical and Biomedical Engineering, 62 Nanyang Drive, Singapore 637459, Singapore
| |
Collapse
|