1. Zhang Y, Tian H, Wan M, Tang S, Ding Z, Huang W, Yang Y, Li W. High resolution photoacoustic vascular image reconstruction through the fast residual dense generative adversarial network. Photoacoustics 2025;43:100720. [PMID: 40241881] [PMCID: PMC12000740] [DOI: 10.1016/j.pacs.2025.100720]
Abstract
Photoacoustic imaging is a powerful technique that provides high-resolution, deep tissue imaging. However, the time-intensive nature of photoacoustic microscopy (PAM) poses a significant challenge, especially when high-resolution images are required for real-time applications. In this study, we propose an optimized Fast Residual Dense Generative Adversarial Network (FRDGAN) for high-quality PAM reconstruction. In validation on a mouse ear vasculature dataset, FRDGAN demonstrated superior image quality, background noise suppression, and computational efficiency across multiple down-sampling scales (×4, ×8) compared with classical methods. Furthermore, in in vivo experiments on mouse cerebral vasculature, FRDGAN improved peak signal-to-noise ratio and structural similarity by 2.24 dB and 0.0255, respectively, relative to SRGAN. Our FRDGAN method provides a promising solution for fast, high-quality PAM microvascular imaging in biomedical research.
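Most entries in this list report reconstruction quality as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) gains. As context for those numbers, here is a minimal sketch of the two metrics — PSNR in its standard form, SSIM in a simplified single-window form rather than the usual sliding-window average. This is generic illustration, not any paper's evaluation code:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, test, data_range=1.0):
    """Single-window SSIM over the whole image -- a simplification of the
    usual sliding-window mean SSIM, with the standard C1/C2 constants."""
    x, y = ref.astype(np.float64), test.astype(np.float64)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cov + c2)
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

# Toy data: a clean image and a noisy copy of it.
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = np.clip(clean + 0.05 * rng.standard_normal(clean.shape), 0.0, 1.0)
```

Library implementations (e.g. scikit-image's `structural_similarity`) use a sliding Gaussian window and will give somewhat different SSIM values than this global variant.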
Affiliation(s)
- Yameng Zhang
- School of Computer Engineering, Nanjing Institute of Technology, Nanjing, Jiangsu 211167, China
- Department of Biomedical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 211106, China
- Hua Tian
- School of Computer Engineering, Nanjing Institute of Technology, Nanjing, Jiangsu 211167, China
- Min Wan
- Department of Biomedical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 211106, China
- Shihao Tang
- Department of Biomedical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 211106, China
- Ziyun Ding
- School of Engineering, University of Birmingham, Birmingham B15 2TT, UK
- Wei Huang
- School of Computer Engineering, Nanjing Institute of Technology, Nanjing, Jiangsu 211167, China
- Yamin Yang
- Department of Biomedical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 211106, China
- Weitao Li
- Department of Biomedical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu 211106, China
2. Zafar M, Avanaki K. Adaptive Run-Length Encoded DCT: A High-Fidelity Compression Algorithm for Real-Time Photoacoustic Microscopy Imaging in LabVIEW. Journal of Biophotonics 2025:e70043. [PMID: 40268498] [DOI: 10.1002/jbio.70043]
Abstract
Continuous photoacoustic microscopy (PAM) imaging generates large volumes of data, resulting in significant storage demands. Here, we propose a high-fidelity real-time compression algorithm for PAM data in LabVIEW by combining Discrete Cosine Transform (DCT) with adaptive thresholding and Run Length Encoding (RLE), which we term Adaptive Run Length Encoded DCT (AR-DCT) compression. This algorithm reduces data storage requirements while preserving all the details of the images. AR-DCT ensures real-time compression, achieving superior compression ratios (CRs) compared to traditional DCT compression. We evaluated the performance of AR-DCT using in vivo mouse brain imaging data, demonstrating a CR of ~50, with a structural similarity index of 0.980 and minimal degradation in signal quality (percentage-root-mean-square-difference of 1.345%). The results show that AR-DCT outperforms traditional DCT, offering higher compression efficiency without significantly sacrificing image quality. These findings suggest that AR-DCT provides an effective solution for applications requiring continuous experiments, such as cerebral hemodynamics studies.
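The AR-DCT pipeline described above — DCT, adaptive thresholding, then run-length encoding of the sparsified coefficients — can be sketched roughly as follows. The quantile-based thresholding rule and the `keep_quantile` knob are illustrative assumptions, not the paper's actual adaptive rule, and the LabVIEW implementation is replaced here by Python/SciPy:

```python
import numpy as np
from scipy.fft import dctn, idctn  # orthonormal 2D DCT / inverse DCT

def ar_dct_compress(img, keep_quantile=0.98):
    """Sketch of DCT + adaptive threshold + run-length encoding.
    keep_quantile is a hypothetical per-image knob: coefficients below
    that magnitude quantile are zeroed before encoding."""
    coeffs = dctn(img, norm="ortho")
    thresh = np.quantile(np.abs(coeffs), keep_quantile)  # adapts per image
    sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    # Run-length encode the flattened coefficients as (value, run) pairs,
    # collapsing the long zero runs produced by thresholding.
    flat, rle, i = sparse.ravel(), [], 0
    while i < flat.size:
        j = i
        while j + 1 < flat.size and flat[j + 1] == flat[i]:
            j += 1
        rle.append((flat[i], j - i + 1))
        i = j + 1
    return sparse, rle

rng = np.random.default_rng(1)
img = rng.random((32, 32))
sparse, rle = ar_dct_compress(img)
recon = idctn(sparse, norm="ortho")       # decompression step
cr = img.size / max(len(rle), 1)          # crude compression-ratio proxy
```

A real codec would additionally serialize the `(value, run)` pairs to a byte stream; the ratio of pixel count to RLE entries used here only approximates the storage saving.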
Affiliation(s)
- Mohsin Zafar
- The Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
- Kamran Avanaki
- The Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
- Department of Dermatology and Pediatric, University of Illinois at Chicago, Chicago, Illinois, USA
3. Zhang S, Li J, Shen L, Zhao Z, Lee M, Qian K, Sun N, Hu B. Structure and oxygen saturation recovery of sparse photoacoustic microscopy images by deep learning. Photoacoustics 2025;42:100687. [PMID: 39896070] [PMCID: PMC11787619] [DOI: 10.1016/j.pacs.2025.100687]
Abstract
Photoacoustic microscopy (PAM) leverages the photoacoustic effect to provide high-resolution structural and functional imaging. However, achieving high-speed imaging with high spatial resolution remains challenging. To address this, undersampling and deep learning have emerged as common techniques to enhance imaging speed. Yet, existing methods rarely achieve effective recovery of functional images. In this study, we propose Mask-enhanced U-net (MeU-net) for recovering sparsely sampled PAM structural and functional images. The model utilizes dual-channel input, processing photoacoustic data from the 532 nm and 558 nm wavelengths. Additionally, we introduce an adaptive vascular attention mask module that focuses on vascular information recovery and design a vessel-specific loss function to enhance restoration accuracy. We simulate data from mouse brain and ear imaging under various levels of sparsity (4×, 8×, 12×) and conduct extensive experiments. The results demonstrate that MeU-net significantly outperforms traditional interpolation methods and other representative models in structural information and oxygen saturation recovery.
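The vessel-specific loss idea — weighting vascular pixels more heavily than background during training — can be illustrated with a generic masked L1 loss. The mask construction and the weight value below are placeholders; the paper's adaptive vascular attention mask is learned by a module that is not reproduced here:

```python
import numpy as np

def vessel_weighted_l1(pred, target, vessel_mask, vessel_weight=5.0):
    """Generic vessel-weighted L1 loss: pixels inside the binary vessel
    mask contribute vessel_weight times more than background pixels.
    The weight value is illustrative, not taken from the paper."""
    w = np.where(vessel_mask > 0, vessel_weight, 1.0)
    return float(np.sum(w * np.abs(pred - target)) / np.sum(w))

rng = np.random.default_rng(2)
target = rng.random((16, 16))
mask = target > 0.8            # stand-in "vessel" mask from raw intensity
pred = target + 0.1 * mask     # reconstruction error only on vessel pixels
```

Because the error in this toy example sits entirely on vessel pixels, raising `vessel_weight` raises the loss — exactly the pressure a vessel-specific loss puts on a network to restore vascular detail first.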
Affiliation(s)
- Shuyan Zhang
- Key Laboratory of Brain Health Intelligent Evaluation and Intervention (Beijing Institute of Technology), Ministry of Education, China
- School of Medical Technology, Beijing Institute of Technology, China
- Jingtan Li
- Key Laboratory of Brain Health Intelligent Evaluation and Intervention (Beijing Institute of Technology), Ministry of Education, China
- School of Medical Technology, Beijing Institute of Technology, China
- Lin Shen
- Key Laboratory of Brain Health Intelligent Evaluation and Intervention (Beijing Institute of Technology), Ministry of Education, China
- School of Medical Technology, Beijing Institute of Technology, China
- Zhonghao Zhao
- Key Laboratory of Brain Health Intelligent Evaluation and Intervention (Beijing Institute of Technology), Ministry of Education, China
- School of Medical Technology, Beijing Institute of Technology, China
- Minjun Lee
- Key Laboratory of Brain Health Intelligent Evaluation and Intervention (Beijing Institute of Technology), Ministry of Education, China
- School of Computer Science and Technology, Beijing Institute of Technology, China
- Kun Qian
- Key Laboratory of Brain Health Intelligent Evaluation and Intervention (Beijing Institute of Technology), Ministry of Education, China
- School of Medical Technology, Beijing Institute of Technology, China
- Naidi Sun
- Key Laboratory of Brain Health Intelligent Evaluation and Intervention (Beijing Institute of Technology), Ministry of Education, China
- School of Medical Technology, Beijing Institute of Technology, China
- Bin Hu
- Key Laboratory of Brain Health Intelligent Evaluation and Intervention (Beijing Institute of Technology), Ministry of Education, China
- School of Medical Technology, Beijing Institute of Technology, China
4. Senthil Anandhi A, Jaiganesh M. An enhanced image restoration using deep learning and transformer based contextual optimization algorithm. Sci Rep 2025;15:10324. [PMID: 40133442] [PMCID: PMC11937541] [DOI: 10.1038/s41598-025-94449-5]
Abstract
Image processing and restoration are important in computer vision, particularly for images degraded by noise, blur, and other artifacts. Traditional methods often struggle with problems such as periodic noise and do not effectively combine local and global information during restoration. To address these problems, we propose an enhanced image restoration model that merges the Lewin architecture with SwinIR using advanced deep learning methods, improving the restoration process by 4.2%. The model's effectiveness is evaluated using PSNR and SSIM measurements, showing that it can reduce noise while keeping key image details intact. Compared with traditional methods, our model shows better results, setting a new standard in image restoration for difficult situations. Test results show that this combined approach greatly enhances restoration performance across various image datasets, making it a strong solution for producing clearer images with less noise.
Affiliation(s)
- A Senthil Anandhi
- Research Scholar-ICE, Anna University, Chennai, India
- Department of CSE, New Horizon College of Engineering, Bangalore, India
- M Jaiganesh
- Department of IT, Karpagam College of Engineering, Coimbatore, India
5. Tang S, Wan M, Zhang Y, Li J, Tao L, Li W. Method for Selecting the Down-Sampling Factor of Photoacoustic Image by Using Cumulative Power Difference in Frequency Domain. Journal of Biophotonics 2025:e70013. [PMID: 40103335] [DOI: 10.1002/jbio.70013]
Abstract
As a novel non-invasive imaging technology, photoacoustic microscopy (PAM) is constrained by its imaging speed. PAM often uses sparse spatial sampling, which requires extensive prior experimentation to select down-sampling factors accurately. To overcome this limitation, this study proposes a frequency-domain evaluation index, the cumulative power difference (CPD), for rapid selection of the optimal down-sampling factor. We apply the proposed CPD to photoacoustic images of the mouse ear and brain. The results show that as the down-sampling factor increases, the quality of the 20 images follows a similar decreasing trend, and CPD is significantly correlated with PCC, MSE, and SSIM (p < 0.001). The findings suggest that CPD can evaluate the quality of photoacoustic images and quickly quantify the quality loss of down-sampled images without prior inspection. This study helps expand the application range of PAM and supports its clinical prospects.
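The abstract does not give the exact CPD formula. One plausible reading — comparing normalized cumulative power spectra of the full image and a down-sampled surrogate — can be sketched as follows; treat every detail here (radial ordering, mean-absolute gap) as an assumption rather than the paper's definition:

```python
import numpy as np

def cumulative_power(img):
    """Normalized cumulative power spectrum, ordered from low to high
    spatial frequency by radial distance (a simplification)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2.0, xx - w / 2.0).ravel()
    power = spec.ravel()[np.argsort(radius)]
    csum = np.cumsum(power)
    return csum / csum[-1]

def cpd(full, down):
    """Hypothetical cumulative-power-difference score: mean absolute gap
    between the two normalized cumulative spectra. Illustrates the idea
    of a frequency-domain quality-loss index only."""
    return float(np.mean(np.abs(cumulative_power(full) - cumulative_power(down))))

rng = np.random.default_rng(42)
img = rng.random((64, 64))
```

On a texture-rich image, heavier row down-sampling (simulated by keep-and-repeat) discards more high-frequency power, so the score should grow with the down-sampling factor — which is the behavior a selection index needs.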
Affiliation(s)
- Shihao Tang
- Department of Biomedical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu, China
- Min Wan
- Department of Biomedical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu, China
- Yameng Zhang
- Nanjing Institute of Technology, Nanjing, Jiangsu, China
- Jiani Li
- Department of Biomedical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu, China
- Ling Tao
- Department of Biomedical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu, China
- Weitao Li
- Department of Biomedical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu, China
6. Wang L, Meng YC, Qian Y. MSD-Net: Multi-scale dense convolutional neural network for photoacoustic image reconstruction with sparse data. Photoacoustics 2025;41:100679. [PMID: 39802237] [PMCID: PMC11720879] [DOI: 10.1016/j.pacs.2024.100679]
Abstract
Photoacoustic imaging (PAI) is an emerging hybrid imaging technology that combines the advantages of optical and ultrasound imaging. Despite its excellent imaging capabilities, PAI still faces numerous challenges in clinical applications, particularly sparse spatial sampling and limited view detection. These limitations often result in severe streak artifacts and blurring when using standard methods to reconstruct images from incomplete data. In this work, we propose an improved convolutional neural network (CNN) architecture, called multi-scale dense UNet (MSD-Net), to correct artifacts in 2D photoacoustic tomography (PAT). MSD-Net exploits the advantages of multi-scale information fusion and dense connections to improve the performance of CNN. Experimental validation with both simulated and in vivo datasets demonstrates that our method achieves better reconstructions with improved speed.
Affiliation(s)
- Liangjie Wang
- Institute of Fiber Optics, Shanghai University, Shanghai 201800, China
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, Shanghai University, Shanghai 200444, China
- Yi-Chao Meng
- Institute of Fiber Optics, Shanghai University, Shanghai 201800, China
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, Shanghai University, Shanghai 200444, China
- Yiming Qian
- Institute of Fiber Optics, Shanghai University, Shanghai 201800, China
- Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, Shanghai University, Shanghai 200444, China
7. Shang R, Luke GP, O'Donnell M. Joint segmentation and image reconstruction with error prediction in photoacoustic imaging using deep learning. Photoacoustics 2024;40:100645. [PMID: 39347464] [PMCID: PMC11424948] [DOI: 10.1016/j.pacs.2024.100645]
Abstract
Deep learning has been used to improve photoacoustic (PA) image reconstruction. One major challenge is that errors cannot be quantified to validate predictions when the ground truth is unknown. Validation is key to quantitative applications, especially those using limited-bandwidth ultrasonic linear detector arrays. Here, we propose a hybrid Bayesian convolutional neural network (Hybrid-BCNN) to jointly predict PA images and segmentations with error (uncertainty) predictions. Each output pixel represents a probability distribution from which error can be quantified. The Hybrid-BCNN was trained with simulated PA data and applied to both simulations and experiments. Because PA images are sparse, segmentation focuses Hybrid-BCNN on minimizing the loss function in regions with PA signals, yielding better predictions. The results show that accurate PA segmentations and images are obtained, and error predictions are highly statistically correlated with actual errors. To leverage the error predictions, confidence processing was used to create PA images above a specific confidence level.
Affiliation(s)
- Ruibo Shang
- uWAMIT Center, Department of Bioengineering, University of Washington, Seattle, WA 98195, USA
- Geoffrey P Luke
- Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
- Matthew O'Donnell
- uWAMIT Center, Department of Bioengineering, University of Washington, Seattle, WA 98195, USA
8. Zhu X, Menozzi L, Cho SW, Yao J. High speed innovations in photoacoustic microscopy. NPJ Imaging 2024;2:46. [PMID: 39525278] [PMCID: PMC11541221] [DOI: 10.1038/s44303-024-00052-0]
Abstract
Photoacoustic microscopy (PAM) is a key implementation of photoacoustic imaging (PAI). PAM merges rich optical contrast with deep acoustic detection, allowing for broad biomedical research and diverse clinical applications. Recent advancements in PAM technology have dramatically improved its imaging speed, enabling real-time observation of dynamic biological processes in vivo and motion-sensitive targets in situ, such as brain activities and placental development. This review introduces the engineering principles of high-speed PAM, focusing on various excitation and detection methods, each presenting unique benefits and challenges. Driven by these technological innovations, high-speed PAM has expanded its applications across fundamental, preclinical, and clinical fields. We highlight these notable applications, discuss ongoing technical challenges, and outline future directions for the development of high-speed PAM.
Affiliation(s)
- Xiaoyi Zhu
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Luca Menozzi
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Soon-Woo Cho
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
9. Wang J, Li B, Zhou T, Liu C, Lu M, Gu W, Liu X, Ta D. Reconstructing Cancellous Bone From Down-Sampled Optical-Resolution Photoacoustic Microscopy Images With Deep Learning. Ultrasound in Medicine & Biology 2024;50:1459-1471. [PMID: 38972792] [DOI: 10.1016/j.ultrasmedbio.2024.05.027]
Abstract
OBJECTIVE: Bone diseases deteriorate the microstructure of bone tissue. Optical-resolution photoacoustic microscopy (OR-PAM) enables high-spatial-resolution imaging of bone tissue. However, the spatiotemporal trade-off limits the application of OR-PAM. The purpose of this study was to improve the quality of OR-PAM images without sacrificing temporal resolution.
METHODS: We proposed the Photoacoustic Dense Attention U-Net (PADA U-Net) model for reconstructing full-scanning images from under-sampled images, thereby breaking the trade-off between imaging speed and spatial resolution.
RESULTS: The proposed method was validated on resolution test targets and bovine cancellous bone samples to demonstrate the capability of PADA U-Net to recover full-scanning images from under-sampled OR-PAM images. With a down-sampling ratio of [4, 1], the peak signal-to-noise ratio and structural similarity index measure values (averaged over the bovine cancellous bone test set) of PADA U-Net were improved by 2.325 dB and 0.117, respectively, compared with bilinear interpolation.
CONCLUSION: The results demonstrate that the PADA U-Net model reconstructs OR-PAM images well at different levels of sparsity. Our proposed method can further facilitate early diagnosis and treatment of bone diseases using OR-PAM.
Affiliation(s)
- Jingxian Wang
- Human Phenome Institute, Fudan University, Shanghai, China
- Boyi Li
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Tianhua Zhou
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Chengcheng Liu
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Mengyang Lu
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Wenting Gu
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Xin Liu
- Academy for Engineering and Technology, Fudan University, Shanghai, China
- Dean Ta
- Academy for Engineering and Technology, Fudan University, Shanghai, China; Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
10. Sulistyawan IGE, Nishimae D, Ishii T, Saijo Y. Singular value decomposition with weighting matrix applied for optical-resolution photoacoustic microscopes. Ultrasonics 2024;143:107424. [PMID: 39084109] [DOI: 10.1016/j.ultras.2024.107424]
Abstract
The excellent target selectivity and imaging depth of the optical-resolution photoacoustic microscope (OR-PAM) have attracted attention for enabling advanced intracellular visualization. However, the broadband nature of photoacoustic signals makes them prone to noise and artifacts caused by inefficient light-to-pressure conversion, resulting in poor image quality. The present study applies singular value decomposition (SVD) to extract photoacoustic signals from this noise and these artifacts. Although spatiotemporal SVD has succeeded in ultrasound flow-signal extraction, the conventional multi-frame model is not suitable for data acquired with a scanning OR-PAM because of the burden of accessing multiple frames. To utilize SVD in OR-PAM, this study first explores SVD applied to multiple A-lines of the photoacoustic signal instead of frames. These explorations revealed an obstacle: the uncertain presence of unwanted singular vectors. To tackle this, a data-driven weighting matrix was designed to extract the relevant singular vectors based on analyses of the temporal and spatial singular vectors. Evaluation of the extraction capability of SVD with the weighting matrix showed superior signal quality with efficient computation compared with past studies. In summary, this study contributes to the field by exploring SVD applied to A-line signals and by providing a practical means to distinguish and recover photoacoustic signals from noise and artifact components.
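The core operation — decomposing a stack of A-lines with SVD and re-synthesizing from weighted singular components — can be sketched as below. The paper's data-driven weighting design is replaced here by a user-supplied weight vector; everything else is a generic SVD filter, not the authors' implementation:

```python
import numpy as np

def svd_filter(alines, weights):
    """Project a matrix of A-lines (one per row) onto weighted singular
    components. weights[i] scales the i-th singular value; zero weights
    remove components (e.g. noise-dominated ones)."""
    U, s, Vt = np.linalg.svd(alines, full_matrices=False)
    return (U * (s * weights)) @ Vt  # scale each singular component

rng = np.random.default_rng(3)
# Synthetic data: one strong waveform shared across 50 A-lines + noise.
t = np.linspace(0.0, 1.0, 200)
signal = np.outer(rng.random(50) + 0.5, np.sin(2 * np.pi * 5 * t))
noisy = signal + 0.3 * rng.standard_normal(signal.shape)

w = np.zeros(50)
w[0] = 1.0                        # keep only the dominant component
clean = svd_filter(noisy, w)
```

Because the synthetic signal is rank-1 and strong, keeping only the first singular component recovers it well; the paper's contribution is deciding those weights automatically when the split between signal and noise components is not known in advance.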
Affiliation(s)
- Daisuke Nishimae
- Graduate School of Biomedical Engineering, Tohoku University, Sendai, Japan
- Takuro Ishii
- Graduate School of Biomedical Engineering, Tohoku University, Sendai, Japan; Frontier Research Institute for Interdisciplinary Sciences, Tohoku University, Sendai, Japan
- Yoshifumi Saijo
- Graduate School of Biomedical Engineering, Tohoku University, Sendai, Japan
11. Liu Y, Zhou J, Luo Y, Li J, Chen SL, Guo Y, Yang GZ. UPAMNet: A unified network with deep knowledge priors for photoacoustic microscopy. Photoacoustics 2024;38:100608. [PMID: 39669096] [PMCID: PMC11636894] [DOI: 10.1016/j.pacs.2024.100608]
Abstract
Photoacoustic microscopy (PAM) has gained increasing popularity in biomedical imaging, providing new opportunities for tissue monitoring and characterization. With the development of deep learning techniques, convolutional neural networks have been used for PAM image resolution enhancement and denoising. However, several inherent challenges remain for this approach. This work presents a Unified PhotoAcoustic Microscopy image reconstruction Network (UPAMNet) for both PAM image super-resolution and denoising. The proposed method takes advantage of deep image priors by incorporating three effective attention-based modules and a mixed training constraint at both the pixel and perception levels. The generalization ability of the model is evaluated in detail on different PAM datasets. Experimental results show peak signal-to-noise ratio improvements of 0.59 dB and 1.37 dB for 1/4 and 1/16 sparse image reconstruction, respectively, and of 3.9 dB for image denoising.
Affiliation(s)
- Yuxuan Liu
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Jiasheng Zhou
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yating Luo
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Jinkai Li
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Sung-Liang Chen
- University of Michigan–Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yao Guo
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Guang-Zhong Yang
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
12. Wu J, Zhang K, Huang C, Ma Y, Ma R, Chen X, Guo T, Yang S, Yuan Z, Zhang Z. Parallel diffusion models promote high detail-fidelity photoacoustic microscopy in sparse sampling. Optics Express 2024;32:27574-27590. [PMID: 39538591] [DOI: 10.1364/oe.528474]
Abstract
Reconstructing sparsely sampled data is fundamental for achieving high-spatiotemporal-resolution photoacoustic microscopy (PAM) of microvascular morphology in vivo. Convolutional neural networks (CNNs) and generative adversarial networks (GANs) have been introduced to high-speed PAM, but because CNN-based networks rely on upsampling to restore details and GAN training is unstable, they struggle to learn the entangled microvascular network structure and vascular texture features, achieving only low detail-fidelity imaging of microvasculature. Diffusion models sample richly and can generate high-quality images, which is very helpful for the complex vascular features in PAM. Here, we propose an approach named parallel diffusion models (PDM) with parallel learning of a noise task and an image task, where the noise task is optimized through variational lower bounds to generate visually realistic microvascular structures, and the image task improves the fidelity of the generated microvascular details through an image-based loss. With only 1.56% of fully sampled pixels from photoacoustic human oral data, PDM achieves an LPIPS of 0.199. Additionally, using PDM in high-speed 16x PAM prevents the breathing artifacts and image distortion caused by low-speed sampling, reduces the standard deviation of the row-wise self-correlation coefficient, and maintains high image quality. PDM reconstructs detailed information from sparsely sampled data with high confidence and will promote the use of sparsely sampled data in achieving high-spatiotemporal-resolution PAM.
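The row-wise self-correlation coefficient used above as a motion-artifact indicator is not defined in the abstract. One plausible interpretation — Pearson correlation between adjacent scan rows, whose standard deviation rises when rows are distorted independently — can be sketched as follows (the paper's exact definition may differ):

```python
import numpy as np

def rowwise_self_correlation(img):
    """Pearson correlation between each row and the next one.
    A smooth, artifact-free scan gives uniformly high values; rows
    corrupted independently (e.g. by breathing motion) scatter them."""
    rows = img.astype(np.float64)
    r = []
    for a, b in zip(rows[:-1], rows[1:]):
        a, b = a - a.mean(), b - b.mean()
        denom = np.sqrt((a @ a) * (b @ b))
        r.append((a @ b) / denom if denom > 0 else 0.0)
    return np.array(r)

rng = np.random.default_rng(4)
smooth = np.tile(np.sin(np.linspace(0, 6, 128)), (64, 1))  # identical rows
noise = rng.standard_normal((64, 128))                     # independent rows
```

Under this reading, a lower standard deviation of the coefficient across rows indicates more consistent scan lines, matching how the abstract uses it as an image-quality signal.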
13. Loc I, Unlu MB. Accelerating photoacoustic microscopy by reconstructing undersampled images using diffusion models. Sci Rep 2024;14:16996. [PMID: 39043802] [PMCID: PMC11266665] [DOI: 10.1038/s41598-024-67957-z]
Abstract
Photoacoustic microscopy (PAM) integrates optical and acoustic imaging, offering enhanced penetration depth for detecting optical-absorbing components in tissues. Nonetheless, challenges arise in scanning large areas with high spatial resolution. With speed limitations imposed by laser pulse repetition rates, computational methods can play a key role in accelerating PAM imaging. We propose a novel and highly adaptable algorithm named DiffPam that utilizes diffusion models to speed up the photoacoustic imaging process. We leveraged a diffusion model trained exclusively on natural images, comparing its performance with an in-domain trained U-Net model on a dataset of PAM images of mouse brain microvasculature. Our findings indicate that DiffPam performs comparably to a dedicated U-Net model without needing a large dataset. We demonstrate that scanning can be accelerated fivefold with limited information loss, achieving a 24.70% increase in peak signal-to-noise ratio and a 27.54% increase in structural similarity index over the baseline bilinear interpolation method. The study also shows that shortened diffusion processes can reduce computing time without compromising accuracy. DiffPam stands out from existing methods because it does not require supervised training or the detailed parameter optimization typically needed by other unsupervised methods. This study underscores the significance of DiffPam as a practical algorithm for reconstructing undersampled PAM images, particularly for researchers with limited artificial-intelligence expertise and computational resources.
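The bilinear-interpolation baseline that DiffPam is compared against amounts to linearly interpolating the skipped scan lines. A minimal sketch of that baseline (1D linear interpolation along the slow axis, which is what "bilinear" reduces to for row-undersampled data; the toy image and factor are illustrative, not the paper's data):

```python
import numpy as np

def undersample_rows(img, factor):
    """Keep every `factor`-th scan line, simulating fewer B-scans."""
    return img[::factor], np.arange(0, img.shape[0], factor)

def bilinear_restore(sub, kept_rows, n_rows):
    """Linearly interpolate the missing rows column by column -- the
    baseline that learning-based reconstructions are measured against.
    np.interp holds the edge value for rows past the last kept line."""
    out = np.empty((n_rows, sub.shape[1]))
    full = np.arange(n_rows)
    for c in range(sub.shape[1]):
        out[:, c] = np.interp(full, kept_rows, sub[:, c])
    return out

# Smooth synthetic "vascular" pattern: outer product of two sinusoids.
x = np.linspace(0, 4 * np.pi, 100)
img = np.outer(np.sin(np.linspace(0, 2 * np.pi, 80)), np.sin(x)) + 1.0
sub, rows = undersample_rows(img, 5)          # fivefold undersampling
recon = bilinear_restore(sub, rows, img.shape[0])
```

On smooth content the baseline is already decent, which is why the reported gains over it are stated as relative PSNR/SSIM improvements rather than absolute ones.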
Affiliation(s)
- Irem Loc
- Bogazici University Physics Department, Istanbul, Turkey
- M Burcin Unlu
- Faculty of Engineering, Ozyegin University, Istanbul, Turkey
- Faculty of Aviation and Aeronautical Sciences, Ozyegin University, Istanbul, Turkey
14. Ma X. The analysis of the internet of things database query and optimization using deep learning network model. PLoS One 2024;19:e0306291. [PMID: 38941309] [PMCID: PMC11213290] [DOI: 10.1371/journal.pone.0306291]
Abstract
This study explores the application of a deep learning (DL) network model to Internet of Things (IoT) database query and optimization. It first analyzes the architecture of IoT database queries, then explores the DL network model, and finally optimizes the model through optimization strategies. The advantages of the optimized model are verified through experiments. Experimental results show that the optimized model is more efficient than other models in the model training and parameter optimization stages; notably, at a data volume of 2000, its training time and parameter-optimization time are markedly lower than those of the traditional model. In terms of resource consumption, the CPU and GPU usage and memory usage of all models increase as the data volume rises, but the optimized model exhibits better energy-consumption performance. In throughput analysis, the optimized model maintains high transaction counts and data volumes per second when handling large data requests, especially at a data volume of 4000, where its peak-time processing capacity exceeds that of the other models. Regarding latency, although latency increases with data volume for all models, the optimized model performs better in database query response time and data-processing latency. These results reveal the optimized model's superior performance in processing and optimizing IoT database queries and provide a valuable reference for IoT data processing and DL model optimization, helping to promote the application of DL technology in the IoT field, especially in scenarios that involve large-scale data and require efficient processing.
Affiliation(s)
- Xiaowen Ma
- Library, Shandong University of Arts, Jinan, China
15
Ma Y, Zhou W, Ma R, Wang E, Yang S, Tang Y, Zhang XP, Guan X. DOVE: Doodled vessel enhancement for photoacoustic angiography super resolution. Med Image Anal 2024; 94:103106. [PMID: 38387244] [DOI: 10.1016/j.media.2024.103106]
Abstract
Deep-learning-based super-resolution photoacoustic angiography (PAA) has emerged as a valuable tool for enhancing the resolution of blood vessel images and aiding in disease diagnosis. However, due to the scarcity of training samples, PAA super-resolution models do not generalize well, especially in the challenging in-vivo imaging of organs with deep tissue penetration. Furthermore, prolonged exposure to high laser intensity during image acquisition can lead to tissue damage and secondary infections. To address these challenges, we propose doodled vessel enhancement (DOVE), an approach that utilizes hand-drawn doodles to train a PAA super-resolution model. With a training dataset of only 32 real PAA images, we construct a diffusion model that interprets hand-drawn doodles as low-resolution images. DOVE enables us to generate a large number of realistic PAA images, achieving a 49.375% fool rate even among experts in photoacoustic imaging. We then employ these generated images to train a self-similarity-based super-resolution model. In cross-domain tests, our method, trained solely on generated images, achieves a structural similarity value of 0.8591, surpassing the scores of all other models trained with real high-resolution images. DOVE overcomes the limitation of insufficient training samples and unlocks the clinical application potential of super-resolution-based biomedical imaging.
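The structural similarity (SSIM) score cited above is a standard full-reference image-quality metric. As a reading aid, here is a minimal single-window SSIM sketch in numpy; the synthetic arrays and the usual 8-bit stabilizing constants are illustrative assumptions, not the paper's data or code:

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM over whole images (no sliding Gaussian window)."""
    c1 = (0.01 * data_range) ** 2  # stabilizers from the standard SSIM formula
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

rng = np.random.default_rng(0)
ref = rng.uniform(0, 255, (64, 64))          # stand-in high-resolution image
noisy = ref + rng.normal(0, 25, ref.shape)   # stand-in degraded reconstruction
print(global_ssim(ref, ref))    # identical images -> 1.0
print(global_ssim(ref, noisy))  # degraded image  -> below 1.0
```

Published scores are usually computed with a sliding Gaussian window (e.g., scikit-image's `structural_similarity`) rather than this whole-image variant.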
Affiliation(s)
- Yuanzheng Ma
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Wangting Zhou
- Engineering Research Center of Molecular & Neuro Imaging of the Ministry of Education, Xidian University, Xi'an, Shaanxi 710126, China
- Rui Ma
- MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China
- Erqi Wang
- MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China
- Sihua Yang
- MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China
- Yansong Tang
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Xiao-Ping Zhang
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Xun Guan
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
16
Lu Y, Sun Y, Shen Z, Xu X, Ma T, Peng C, Li F, Ning C, Wang J, Liu S, Liu Z, Xu L, Liu W. Thermal-tagging photoacoustic remote sensing flowmetry. Opt Lett 2024; 49:1725-1728. [PMID: 38560847] [DOI: 10.1364/ol.521564]
Abstract
Ultrasound coupling is one of the critical challenges in transferring traditional photoacoustic (or optoacoustic) microscopy (PAM) techniques to the clinical examination of chronic wounds and open tissues. A promising alternative that breaks this limitation is photoacoustic remote sensing (PARS), which implements all-optical, non-interferometric photoacoustic measurements. Functional imaging with PARS microscopy has been demonstrated for histopathology and oxygen metabolism, but its performance in hemodynamic quantification remains unexplored. In this Letter, we present an all-optical thermal-tagging flowmetry approach for PARS microscopy and demonstrate it with comprehensive mathematical modeling and ex vivo and in vivo experimental validations. Experimental results show a detectable blood flow rate range of 0 to 12 mm/s with high accuracy (measurement error: ±1.2%) at a 10 kHz laser pulse repetition rate. The proposed all-optical thermal-tagging flowmetry offers an effective route for PARS microscopy to realize non-contact, dye-free hemodynamic imaging.
17
Gao F, Li B, Chen L, Wei X, Shang Z, Liu C. Ultrasound image super-resolution reconstruction based on semi-supervised CycleGAN. Ultrasonics 2024; 137:107177. [PMID: 37832382] [DOI: 10.1016/j.ultras.2023.107177]
Abstract
In ultrasonic testing, diffraction artifacts generated around defects increase the challenge of quantitatively characterizing defects. In this paper, we propose a label-enhanced semi-supervised CycleGAN network model, referred to as LESS-CycleGAN, which is a conditional cycle generative adversarial network designed for accurately characterizing defect morphology in ultrasonic testing images. The proposed method introduces paired cross-domain image samples during model training to achieve a defect transformation between the ultrasound image domain and the morphology image domain, thereby eliminating artifacts. Furthermore, the method incorporates a novel authenticity loss function to ensure high-precision defect reconstruction capability. To validate the effectiveness and robustness of the model, we use simulated 2D images of defects and corresponding ultrasonic detection images as training and test sets, and an actual ultrasonic phased array image of a test block as the validation set to evaluate the model's application performance. The experimental results demonstrate that the proposed method is convenient and effective, achieving subwavelength-scale defect reconstruction with good robustness.
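For readers unfamiliar with the cycle-consistency idea underpinning CycleGAN variants such as the one above: each generator's output is mapped back by the opposite generator, and the round trip is penalized with an L1 loss. A toy numpy sketch follows; the affine "generators" and the weighting constant are illustrative assumptions, not LESS-CycleGAN itself:

```python
import numpy as np

# Toy stand-ins for the two generators: G maps the ultrasound domain to the
# morphology domain, F maps back. Real CycleGANs use CNNs; here, affine maps
# that happen to be exact inverses of each other.
def G(x):
    return 2.0 * x + 1.0

def F(y):
    return 0.5 * (y - 1.0)

def l1(a, b):
    return np.abs(a - b).mean()

def cycle_consistency_loss(x, y, lam=10.0):
    """L_cyc = lam * (E||F(G(x)) - x||_1 + E||G(F(y)) - y||_1)."""
    return lam * (l1(F(G(x)), x) + l1(G(F(y)), y))

x = np.linspace(0.0, 1.0, 5)  # fake 'ultrasound domain' samples
y = np.linspace(1.0, 3.0, 5)  # fake 'morphology domain' samples
print(cycle_consistency_loss(x, y))  # exact inverses -> 0.0
```

In training, this term is minimized alongside the adversarial losses so that the learned mappings preserve content while translating between domains.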
Affiliation(s)
- Fei Gao
- State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Bing Li
- State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Lei Chen
- State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Xiang Wei
- State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Zhongyu Shang
- State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, 710049, China
- Chunman Liu
- State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, 710049, China; International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technology, Xi'an Jiaotong University, Xi'an, 710049, China
18
Cho SW, Nguyen VT, DiSpirito A, Yang J, Kim CS, Yao J. Sounding out the dynamics: a concise review of high-speed photoacoustic microscopy. J Biomed Opt 2024; 29:S11521. [PMID: 38323297] [PMCID: PMC10846286] [DOI: 10.1117/1.jbo.29.s1.s11521]
Abstract
Significance: Photoacoustic microscopy (PAM) offers advantages in high-resolution and high-contrast imaging of biomedical chromophores. The speed of imaging is critical for leveraging these benefits in both preclinical and clinical settings. Ongoing technological innovations have substantially boosted PAM's imaging speed, enabling real-time monitoring of dynamic biological processes.
Aim: This concise review synthesizes historical context and current advancements in high-speed PAM, with an emphasis on developments enabled by ultrafast lasers, scanning mechanisms, and advanced image processing methods.
Approach: We examine cutting-edge innovations across multiple facets of PAM, including light sources, scanning and detection systems, and computational techniques, and explore their representative applications in biomedical research.
Results: This work delineates the challenges that persist in achieving optimal high-speed PAM performance and forecasts its prospective impact on biomedical imaging.
Conclusions: Recognizing the current limitations, breaking through the drawbacks, and adopting the optimal combination of each technology will lead to the realization of ultimate high-speed PAM for both fundamental research and clinical translation.
Affiliation(s)
- Soon-Woo Cho
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Pusan National University, Engineering Research Center for Color-Modulated Extra-Sensory Perception Technology, Busan, Republic of Korea
- Van Tu Nguyen
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Anthony DiSpirito
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Joseph Yang
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Chang-Seok Kim
- Pusan National University, Engineering Research Center for Color-Modulated Extra-Sensory Perception Technology, Busan, Republic of Korea
- Junjie Yao
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
19
Shen Y, Zhang J, Jiang D, Gao Z, Zheng Y, Gao F, Gao F. S-Wave accelerates optimization-based photoacoustic image reconstruction in vivo. Ultrasound Med Biol 2024; 50:18-27. [PMID: 37806923] [DOI: 10.1016/j.ultrasmedbio.2023.07.014]
Abstract
Objective: Photoacoustic imaging has undergone rapid development in recent years. The most popular MATLAB toolbox for simulating the forward projection process in photoacoustic imaging is k-Wave, which suffers from long computation times. Here we propose a straightforward simulation approach based on superposed waves (s-Wave) to accelerate photoacoustic simulation.
Methods: We treat the initial pressure distribution as a collection of individual pixels. By obtaining standard sensor data from a single pixel beforehand, we can easily manipulate the phase and amplitude of the sensor data for specific pixels using loop and multiplication operators. The effectiveness of this approach is validated through an optimization-based reconstruction algorithm.
Results: Computation time is significantly reduced compared with k-Wave. In a sparse 3-D configuration in particular, s-Wave is more than 2000 times faster than k-Wave. For optimization-based image reconstruction, in vivo imaging results show that the s-Wave method yields images highly similar to those obtained with k-Wave while reducing reconstruction time by approximately 50 times.
Conclusion: We propose an accelerated optimization-based algorithm for photoacoustic image reconstruction using the fast s-Wave forward projection simulation. Our method achieves substantial time savings, particularly in sparse system configurations. Future work will focus on further optimizing the algorithm and expanding its applicability to a broader range of photoacoustic imaging scenarios.
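The superposition idea described in the Methods rests on linearity: once the sensor trace of a single unit-amplitude pixel is known, the data for any source distribution follow by scaling and time-shifting that trace and summing. A 1-D toy sketch, where the impulse response, amplitudes, and delays are made up for illustration and are not the paper's k-Wave data:

```python
import numpy as np

n_t = 64
# Pretend this is the precomputed sensor trace of one unit-amplitude pixel.
h = np.zeros(n_t)
h[5:10] = [0.2, 1.0, 0.5, -0.3, -0.1]  # made-up impulse response

def sensor_data(amplitudes, delays, h):
    """Superpose time-shifted, scaled copies of the single-pixel trace."""
    out = np.zeros_like(h)
    for a, d in zip(amplitudes, delays):
        out += a * np.roll(h, d)  # delay by d samples, scale by amplitude a
    return out

# Three 'pixels' with different initial pressures and acoustic delays.
amps, delays = [1.0, 0.5, 2.0], [0, 7, 15]
y = sensor_data(amps, delays, h)

# Linearity check: the superposed trace equals the sum of individual traces.
y_direct = sum(a * np.roll(h, d) for a, d in zip(amps, delays))
assert np.allclose(y, y_direct)
```

A full forward model would use the physically simulated single-pixel trace and per-pixel propagation delays; the shift-scale-sum structure is what removes the need to rerun the wave solver for every source map.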
Affiliation(s)
- Yuting Shen
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Jiadong Zhang
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Daohuai Jiang
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Zijian Gao
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Yuwei Zheng
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Feng Gao
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Fei Gao
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China; Shanghai Engineering Research Center of Energy Efficient and Custom AI IC, Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
20
Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: A review. Comput Biol Med 2023; 167:107617. [PMID: 37918261] [DOI: 10.1016/j.compbiomed.2023.107617]
Abstract
Mesoscale microscopy images of the brain contain a wealth of information that can help us understand its working mechanisms. However, processing and analyzing these data is challenging because of the large image sizes, high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in cells and tissues, and imaging artifacts. Owing to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to brain microscopy images to address these challenges, and they perform strongly across a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in mesoscale brain microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Min Liu
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China; Research Institute of Hunan University in Chongqing, Chongqing, 401135, China
- Weixun Chen
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Yaonan Wang
- College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
21
Li J, Meng YC. Multikernel positional embedding convolutional neural network for photoacoustic reconstruction with sparse data. Appl Opt 2023; 62:8506-8516. [PMID: 38037963] [DOI: 10.1364/ao.504094]
Abstract
Photoacoustic imaging (PAI) is an emerging noninvasive imaging modality that merges the high contrast of optical imaging with the high resolution of ultrasonic imaging. Low-quality photoacoustic reconstruction from sparse data, caused by sparse spatial sampling and limited-view detection, is a major obstacle to the adoption of PAI in medical applications, and deep learning has been regarded as the most promising solution to this problem over the past decade. In this paper, we propose a novel architecture, named DPM-UNet, which consists of a U-Net backbone with an additional position-embedding block, two multi-kernel-size convolution blocks, a dilated dense block, and a dilated multi-kernel-size convolution block. Our method was experimentally validated with both simulated and in vivo data, achieving an SSIM of 0.9824 and a PSNR of 33.2744 dB. Furthermore, the reconstructed images were compared with those obtained by other advanced methods; the results show that DPM-UNet holds a clear advantage over other methods in both image quality and memory consumption.
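PSNR figures such as the 33.27 dB reported above follow directly from the mean-squared error against a reference image. A minimal sketch with synthetic arrays normalized to a peak value of 1.0 (purely illustrative, not the paper's data):

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
ref = rng.uniform(0.0, 1.0, (32, 32))                              # stand-in ground truth
recon = np.clip(ref + rng.normal(0.0, 0.02, ref.shape), 0.0, 1.0)  # stand-in reconstruction
print(round(psnr(ref, recon), 2))  # roughly 34 dB for sigma = 0.02
```

Note that PSNR depends on the assumed peak value (`data_range`); comparisons across papers are only meaningful when images are normalized consistently.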
22
Patro KK, Allam JP, Sanapala U, Marpu CK, Samee NA, Alabdulhafith M, Plawiak P. An effective correlation-based data modeling framework for automatic diabetes prediction using machine and deep learning techniques. BMC Bioinformatics 2023; 24:372. [PMID: 37784049] [PMCID: PMC10544445] [DOI: 10.1186/s12859-023-05488-6]
Abstract
The rising risk of diabetes, particularly in emerging countries, highlights the importance of early detection. Manual prediction is a challenging task, motivating automatic approaches. The major challenge with biomedical datasets is data scarcity: such data are often difficult to obtain in large quantities, which limits the ability to train deep learning models effectively, and they can be noisy and inconsistent, making it difficult to train accurate models. To overcome these challenges, this work presents a new data modeling framework, based on correlation measures between features, that processes data effectively for predicting diabetes. The standard, publicly available Pima Indians Medical Diabetes (PIMA) dataset is used to verify the effectiveness of the proposed techniques. Experiments on the PIMA dataset showed that the proposed data modeling method improved the accuracy of machine learning models by an average of 9%, with deep convolutional neural network models achieving an accuracy of 96.13%. Overall, this study demonstrates the effectiveness of the proposed strategy for early and reliable prediction of diabetes.
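A correlation-based data-modeling step of the kind described can be illustrated as ranking features by their absolute Pearson correlation with the label and keeping the strongest. The sketch below uses synthetic stand-ins, not the PIMA dataset or the paper's actual pipeline; the feature names and coefficients are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
# Synthetic stand-ins for PIMA-like features.
glucose = rng.normal(120, 30, n)
bmi = rng.normal(32, 6, n)
noise_feat = rng.normal(0, 1, n)  # deliberately uninformative feature
# Synthetic binary label driven mostly by glucose, weakly by BMI.
label = (0.03 * glucose + 0.05 * bmi + rng.normal(0, 1, n) > 5.2).astype(float)

features = {"glucose": glucose, "bmi": bmi, "noise": noise_feat}

def corr_rank(features, label):
    """Rank features by |Pearson r| with the label, strongest first."""
    scores = {k: abs(np.corrcoef(v, label)[0, 1]) for k, v in features.items()}
    return sorted(scores, key=scores.get, reverse=True)

ranking = corr_rank(features, label)
print(ranking)  # 'glucose' should rank first
```

The selected features would then feed a downstream classifier; the paper's actual framework applies its correlation measures within a fuller preprocessing pipeline.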
Affiliation(s)
- Kiran Kumar Patro
- Department of ECE, Aditya Institute of Technology and Management, Tekkali, AP, 532201, India
- Jaya Prakash Allam
- School of Computer Science and Engineering, VIT Vellore, Katpadi, Vellore, Tamil Nadu, 632014, India
- Chaitanya Kumar Marpu
- Department of ECE, Aditya Institute of Technology and Management, Tekkali, AP, 532201, India
- Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Maali Alabdulhafith
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh, 11671, Saudi Arabia
- Pawel Plawiak
- Department of Computer Science, Faculty of Computer Science and Telecommunications, Cracow University of Technology, Warszawska 24, 31-155, Krakow, Poland
- Institute of Theoretical and Applied Informatics, Polish Academy of Sciences, Bałtycka 5, 44-100, Gliwice, Poland
23
Wang R, Zhang Z, Chen R, Yu X, Zhang H, Hu G, Liu Q, Song X. Noise-insensitive defocused signal and resolution enhancement for optical-resolution photoacoustic microscopy via deep learning. J Biophotonics 2023; 16:e202300149. [PMID: 37491832] [DOI: 10.1002/jbio.202300149]
Abstract
Optical-resolution photoacoustic microscopy suffers from a narrow depth of field and significant deterioration of defocused signal intensity and spatial resolution. Here, a deep-learning-based method is proposed to enhance defocused resolution and signal-to-noise ratio. A virtual optical-resolution photoacoustic microscope based on the k-Wave toolbox was used to generate training datasets with different noise levels, and a fully dense U-Net was trained with randomly distributed sources to improve image quality. The results show that the PSNR of the defocused signal was enhanced by more than 1.2 times, with an over 2.6-fold enhancement in lateral resolution and an over 3.4-fold enhancement in axial resolution in defocused regions. Large-volume, high-resolution imaging of blood vessels further verified that the proposed method can effectively overcome the signal and resolution deterioration caused by the narrow depth of field of optical-resolution photoacoustic microscopy.
Affiliation(s)
- Rui Wang
- School of Information Engineering, Nanchang University, Nanchang, China
- Ji luan Academy, Nanchang University, Nanchang, China
- Zhipeng Zhang
- School of Information Engineering, Nanchang University, Nanchang, China
- Ruiyi Chen
- School of Information Engineering, Nanchang University, Nanchang, China
- Xiaohai Yu
- Ji luan Academy, Nanchang University, Nanchang, China
- Hongyu Zhang
- School of Information Engineering, Nanchang University, Nanchang, China
- Gang Hu
- Jiangxi Medical College, Nanchang University, Nanchang, China
- Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang, China
- Xianlin Song
- School of Information Engineering, Nanchang University, Nanchang, China
24
Zheng W, Zhang H, Huang C, Shijo V, Xu C, Xu W, Xia J. Deep learning enhanced volumetric photoacoustic imaging of vasculature in human. Adv Sci (Weinh) 2023; 10:e2301277. [PMID: 37530209] [PMCID: PMC10582405] [DOI: 10.1002/advs.202301277]
Abstract
The development of high-performance image processing algorithms is a core area of photoacoustic tomography. While various deep-learning-based image processing techniques have been developed in this area, their application to 3D imaging is still limited by challenges in computational cost and memory allocation. To address these limitations, this work applies a 3D fully dense (3DFD) U-net to linear-array-based photoacoustic tomography and utilizes volumetric simulation and mixed precision training to increase efficiency and training size. Through numerical simulation, phantom imaging, and in vivo experiments, this work demonstrates that the trained network restores the true object size, reduces noise and artifacts, improves contrast in deep regions, and reveals vessels subject to limited-view distortion. With these enhancements, the 3DFD U-net successfully produces clear 3D vascular images of the palm, arms, breasts, and feet of human subjects, offering improved capabilities for biometric identification, foot ulcer evaluation, and breast cancer imaging. These results indicate that the new algorithm will have a significant impact on preclinical and clinical photoacoustic tomography.
Affiliation(s)
- Wenhan Zheng
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Huijuan Zhang
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Chuqin Huang
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Varun Shijo
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Chenhan Xu
- Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Wenyao Xu
- Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Jun Xia
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
25
Khan S, Vasudevan S. Biomedical instrumentation of photoacoustic imaging and quantitative sensing for clinical applications. Rev Sci Instrum 2023; 94:091502. [PMID: 37747328] [DOI: 10.1063/5.0151882]
Abstract
Photoacoustic (PA) imaging has been researched extensively over the last two decades and has found many applications in biomedical engineering, spurring interest in developing it as a compact instrument for biomedical diagnosis. This review discusses instrumentation developments for PA experimental setups and their applications in biomedical diagnostics. It also covers the PA spectral response, or PA sensing, technique, which uses the spectral information of the PA signal to deliver a fast, cost-effective, and compact screening tool instead of imaging. The review first provides an overview of PA imaging concepts and the development of hardware instrumentation in both the excitation and acquisition stages of the technique. It then discusses PA sensing, the extraction of quantitative spectral parameters from the PA spectrum, and correlation studies between the spectral parameters and the physical parameters of the tissue. The PA sensing technique has been used to diagnose diseases such as thyroid nodules, breast cancer, renal disorders, and zoonotic diseases based on the mechanical and biological characteristics of the tissues. The paper culminates with a discussion of the future developments needed to bring this technique into clinical application as a quantitative PA imaging modality.
Affiliation(s)
- S Khan
- Department of Electrical Engineering, Indian Institute of Technology, Indore 453552, India
- S Vasudevan
- Department of Electrical Engineering, Indian Institute of Technology, Indore 453552, India
26
Le TD, Min JJ, Lee C. Enhanced resolution and sensitivity acoustic-resolution photoacoustic microscopy with semi/unsupervised GANs. Sci Rep 2023; 13:13423. [PMID: 37591911] [PMCID: PMC10435476] [DOI: 10.1038/s41598-023-40583-x]
Abstract
Acoustic-resolution photoacoustic microscopy (AR-PAM) enables visualization of biological tissues at depths of several millimeters with superior optical absorption contrast. However, its lateral resolution and sensitivity are generally lower than those of optical-resolution PAM (OR-PAM) owing to the intrinsic physical acoustic focusing mechanism. Here, we demonstrate a computational strategy with two generative adversarial networks (GANs) that performs semi/unsupervised reconstruction with high resolution and sensitivity in AR-PAM while maintaining its imaging capability at enhanced depths. B-scan PAM images were prepared in paired (for the semi-supervised conditional GAN) and unpaired (for the unsupervised CycleGAN) groups to train label-free generation of reconstructed AR-PAM b-scan images. The semi/unsupervised GANs successfully improved resolution and sensitivity in phantom and in vivo mouse ear tests with ground truth, and we confirmed that the GANs could also enhance resolution and sensitivity in deep tissues without ground truth.
Affiliation(s)
- Thanh Dat Le
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, 61186, Korea
- Jung-Joon Min
- Department of Nuclear Medicine, Chonnam National University Medical School and Hwasun Hospital, 264, Seoyang-ro, Hwasun-eup, Hwasun-gun, 58128, Jeollanam-do, Korea
- Changho Lee
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju, 61186, Korea
- Department of Nuclear Medicine, Chonnam National University Medical School and Hwasun Hospital, 264, Seoyang-ro, Hwasun-eup, Hwasun-gun, 58128, Jeollanam-do, Korea
27
John S, Hester S, Basij M, Paul A, Xavierselvan M, Mehrmohammadi M, Mallidi S. Niche preclinical and clinical applications of photoacoustic imaging with endogenous contrast. Photoacoustics 2023; 32:100533. [PMID: 37636547] [PMCID: PMC10448345] [DOI: 10.1016/j.pacs.2023.100533]
Abstract
In the past decade, photoacoustic (PA) imaging has gained a great deal of popularity as an emergent diagnostic technology owing to its successful demonstration in both preclinical and clinical arenas by various academic and industrial research groups. Such steady growth of PA imaging can mainly be attributed to its salient features, including being non-ionizing, cost-effective, and easily deployable, and having sufficient axial, lateral, and temporal resolution for resolving various tissue characteristics and assessing therapeutic efficacy. In addition, PA imaging can easily be integrated with ultrasound imaging systems, a combination that confers the ability to co-register and cross-reference various features in the structural, functional, and molecular imaging regimes. PA imaging relies on either an endogenous source of contrast (e.g., hemoglobin) or exogenous sources such as nano-sized tunable optical absorbers or dyes that may boost imaging contrast beyond that provided by the endogenous sources. In this review, we discuss the applications of PA imaging with endogenous contrast as they pertain to clinically relevant niches, including tissue characterization, cancer diagnostics and therapies (termed theranostics), cardiovascular applications, and surgical applications. We believe that PA imaging's role as a facile indicator of several disease-relevant states will continue to expand and evolve as it is adopted by an increasing number of research laboratories and clinics worldwide.
Collapse
Affiliation(s)
- Samuel John
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
| | - Scott Hester
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
| | - Maryam Basij
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
| | - Avijit Paul
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
| | | | - Mohammad Mehrmohammadi
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Wilmot Cancer Institute, Rochester, NY, USA
| | - Srivalleesha Mallidi
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Wellman Center for Photomedicine, Massachusetts General Hospital, Boston, MA 02114, USA
| |
Collapse
|
28
|
Song W, Guo C, Zhao Y, Wang YC, Zhu S, Min C, Yuan X. Ultraviolet metasurface-assisted photoacoustic microscopy with great enhancement in DOF for fast histology imaging. PHOTOACOUSTICS 2023; 32:100525. [PMID: 37645256 PMCID: PMC10461204 DOI: 10.1016/j.pacs.2023.100525] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/10/2023] [Revised: 06/14/2023] [Accepted: 06/22/2023] [Indexed: 08/31/2023]
Abstract
Pathology interpretation of tissue relies on the gold standard of histology imaging, which can hamper timely access to critical information for the diagnosis and management of neoplasms because of tedious sample preparation. Slide-free capture of cell nuclei in unprocessed, unstained specimens is preferable; however, the inevitably irregular surfaces of fresh tissues impose limitations. An ultraviolet metasurface that generates an ultraviolet optical focus maintaining a lateral resolution of < 1.1 µm over a depth of field (DOF) of ∼290 µm is proposed for fast, high-resolution, label-free photoacoustic histological imaging of unprocessed tissues with uneven surfaces. Microanatomical characteristics of the cell nuclei can be observed, as demonstrated on hand-cut mouse brain samples, for which a ∼3 × 3 mm² field of view was imaged in ∼27 min. Ultraviolet metasurface-assisted photoacoustic microscopy is therefore anticipated to benefit intraoperative pathological assessment and basic scientific research by alleviating laborious tissue preparation.
Collapse
Affiliation(s)
- Wei Song
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics & State Key Laboratory of Radio Frequency Heterogeneous, Shenzhen University, Shenzhen 518060, China
| | - Changkui Guo
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics & State Key Laboratory of Radio Frequency Heterogeneous, Shenzhen University, Shenzhen 518060, China
| | - Yuting Zhao
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics & State Key Laboratory of Radio Frequency Heterogeneous, Shenzhen University, Shenzhen 518060, China
| | - Ya-chao Wang
- Department of Neurosurgery, The First Affiliated Hospital of Shenzhen University, Shenzhen Second People's Hospital, Shenzhen 518060, China
| | - Siwei Zhu
- The Institute of Translational Medicine, Tianjin Union Medical Center of Nankai University, Tianjin 300121, China
| | - Changjun Min
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics & State Key Laboratory of Radio Frequency Heterogeneous, Shenzhen University, Shenzhen 518060, China
| | - Xiaocong Yuan
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics & State Key Laboratory of Radio Frequency Heterogeneous, Shenzhen University, Shenzhen 518060, China
- Research Center for Humanoid Sensing, Zhejiang Laboratory, Hangzhou 311100, China
| |
Collapse
|
29
|
Tserevelakis GJ, Barmparis GD, Kokosalis N, Giosa ES, Pavlopoulos A, Tsironis GP, Zacharakis G. Deep learning-assisted frequency-domain photoacoustic microscopy. OPTICS LETTERS 2023; 48:2720-2723. [PMID: 37186749 DOI: 10.1364/ol.486624] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/17/2023]
Abstract
Frequency-domain photoacoustic microscopy (FD-PAM) is a powerful, cost-efficient imaging method that uses intensity-modulated laser beams to excite single-frequency photoacoustic waves. Nevertheless, FD-PAM provides an extremely small signal-to-noise ratio (SNR), which can be up to two orders of magnitude lower than that of conventional time-domain (TD) systems. To overcome this inherent SNR limitation of FD-PAM, we employ a U-Net neural network for image augmentation without the need for excessive averaging or the application of high optical power. In this context, we improve the accessibility of PAM, as the system's cost is dramatically reduced, and we expand its applicability to demanding observations while retaining sufficiently high image-quality standards.
Collapse
|
30
|
Zhou LX, Xia Y, Dai R, Liu AR, Zhu SW, Shi P, Song W, Yuan XC. Non-uniform image reconstruction for fast photoacoustic microscopy of histology imaging. BIOMEDICAL OPTICS EXPRESS 2023; 14:2080-2090. [PMID: 37206133 PMCID: PMC10191656 DOI: 10.1364/boe.487622] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/16/2023] [Revised: 03/18/2023] [Accepted: 04/02/2023] [Indexed: 05/21/2023]
Abstract
Photoacoustic microscopic imaging utilizes the characteristic optical absorption properties of pigmented materials in tissues to enable label-free observation of fine morphological and structural features. Since DNA/RNA can strongly absorb ultraviolet light, ultraviolet photoacoustic microscopy can highlight the cell nucleus without complicated sample preparations such as staining, which is comparable to the standard pathological images. Further improvements in the imaging acquisition speed are critical to advancing the clinical translation of photoacoustic histology imaging technology. However, improving the imaging speed with additional hardware is hampered by considerable costs and complex design. In this work, considering heavy redundancy in the biological photoacoustic images that overconsume the computing power, we propose an image reconstruction framework called non-uniform image reconstruction (NFSR), which exploits an object detection network to reconstruct low-sampled photoacoustic histology images into high-resolution images. The sampling speed of photoacoustic histology imaging is significantly improved, saving 90% of the time cost. Furthermore, NFSR focuses on the reconstruction of the region of interest while maintaining high PSNR and SSIM evaluation indicators of more than 99% but reducing the overall computation by 60%.
Collapse
Affiliation(s)
- Ling Xiao Zhou
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
| | - Yu Xia
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
| | - Renxiang Dai
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
| | - An Ran Liu
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
| | - Si Wei Zhu
- The Institute of Translational Medicine, Tianjin Union Medical Center of Nankai University, Tianjin, 300121, China
| | - Peng Shi
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
| | - Wei Song
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
| | - Xiao Cong Yuan
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, 518060, China
| |
Collapse
|
31
|
Zhang Z, Jin H, Zhang W, Lu W, Zheng Z, Sharma A, Pramanik M, Zheng Y. Adaptive enhancement of acoustic resolution photoacoustic microscopy imaging via deep CNN prior. PHOTOACOUSTICS 2023; 30:100484. [PMID: 37095888 PMCID: PMC10121479 DOI: 10.1016/j.pacs.2023.100484] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/11/2023] [Accepted: 03/29/2023] [Indexed: 05/03/2023]
Abstract
Acoustic resolution photoacoustic microscopy (AR-PAM) is a promising medical imaging modality that can be employed for deep bio-tissue imaging. However, its relatively low imaging resolution has greatly hindered its wide application. Previous model-based or learning-based PAM enhancement algorithms either require the design of complex handcrafted priors to achieve good performance or lack the interpretability and flexibility to adapt to different degradation models. Moreover, the degradation model of AR-PAM imaging depends on both the imaging depth and the center frequency of the ultrasound transducer, which vary across imaging conditions and cannot be handled by a single neural network model. To address this limitation, an algorithm integrating learning-based and model-based methods is proposed here so that a single framework can deal with various distortion functions adaptively. The vasculature image statistics are implicitly learned by a deep convolutional neural network, which serves as a plug-and-play (PnP) prior. The trained network can be plugged directly into the model-based optimization framework for iterative AR-PAM image enhancement, fitting different degradation mechanisms. Based on the physical model, point spread function (PSF) kernels for various AR-PAM imaging situations are derived and used to enhance simulated and in vivo AR-PAM images, collectively proving the effectiveness of the proposed method. Quantitatively, the PSNR and SSIM values achieve the best performance with the proposed algorithm in all three simulation scenarios; in an in vivo test, the SNR and CNR values also rise significantly, from 6.34 and 5.79 to 35.37 and 29.66, respectively.
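As editorial context for the plug-and-play (PnP) strategy summarized above, the following sketch illustrates the generic PnP iteration in a toy 1-D denoising setting. The paper's trained CNN prior is replaced here by a simple moving-average filter, and all names and parameters are illustrative assumptions, not the authors' implementation.

```python
def pnp_iterations(y, denoise, step=0.5, iters=20):
    """Toy plug-and-play loop for the denoising problem min 0.5*||x - y||^2.

    Alternates a gradient step on the data-fidelity term with a denoiser
    that stands in for the learned prior's proximal operator.
    """
    x = list(y)
    for _ in range(iters):
        # gradient step on the data-fidelity term 0.5*||x - y||^2
        x = [xi - step * (xi - yi) for xi, yi in zip(x, y)]
        # plug-and-play step: the denoiser plays the role of the prior
        x = denoise(x)
    return x

def moving_average(x):
    """Stand-in denoiser: 3-tap moving average with edge clamping."""
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3.0
            for i in range(n)]
```

Swapping `moving_average` for a trained network changes nothing in the loop structure, which is the flexibility the abstract refers to: one optimization framework, many degradation models.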
Collapse
Affiliation(s)
- Zhengyuan Zhang
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
| | - Haoran Jin
- Zhejiang University, College of Mechanical Engineering, The State Key Laboratory of Fluid Power and Mechatronic Systems, Hangzhou 310027, China
| | - Wenwen Zhang
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
| | - Wenhao Lu
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
| | - Zesheng Zheng
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
| | - Arunima Sharma
- Johns Hopkins University, Electrical and Computer Engineering, Baltimore, MD 21218, USA
| | - Manojit Pramanik
- Iowa State University, Department of Electrical and Computer Engineering, Ames, Iowa, USA
| | - Yuanjin Zheng
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Corresponding author.
| |
Collapse
|
32
|
Wang R, Zhu J, Xia J, Yao J, Shi J, Li C. Photoacoustic imaging with limited sampling: a review of machine learning approaches. BIOMEDICAL OPTICS EXPRESS 2023; 14:1777-1799. [PMID: 37078052 PMCID: PMC10110324 DOI: 10.1364/boe.483081] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/09/2022] [Revised: 03/03/2023] [Accepted: 03/17/2023] [Indexed: 05/03/2023]
Abstract
Photoacoustic imaging combines high optical absorption contrast and deep acoustic penetration, and can reveal structural, molecular, and functional information about biological tissue non-invasively. Due to practical restrictions, photoacoustic imaging systems often face various challenges, such as complex system configuration, long imaging time, and/or less-than-ideal image quality, which collectively hinder their clinical application. Machine learning has been applied to improve photoacoustic imaging and mitigate the otherwise strict requirements in system setup and data acquisition. In contrast to the previous reviews of learned methods in photoacoustic computed tomography (PACT), this review focuses on the application of machine learning approaches to address the limited spatial sampling problems in photoacoustic imaging, specifically the limited view and undersampling issues. We summarize the relevant PACT works based on their training data, workflow, and model architecture. Notably, we also introduce the recent limited sampling works on the other major implementation of photoacoustic imaging, i.e., photoacoustic microscopy (PAM). With machine learning-based processing, photoacoustic imaging can achieve improved image quality with modest spatial sampling, presenting great potential for low-cost and user-friendly clinical applications.
Collapse
Affiliation(s)
- Ruofan Wang
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
| | - Jing Zhu
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
| | - Jun Xia
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
| | - Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| | - Junhui Shi
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
| | - Chiye Li
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
| |
Collapse
|
33
|
Zafar M, Manwar R, McGuire LS, Charbel FT, Avanaki K. Ultra-widefield and high-speed spiral laser scanning OR-PAM: System development and characterization. JOURNAL OF BIOPHOTONICS 2023:e202200383. [PMID: 36998211 DOI: 10.1002/jbio.202200383] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Revised: 03/01/2023] [Accepted: 03/15/2023] [Indexed: 06/19/2023]
Abstract
Photoacoustic microscopy (PAM) is a high-resolution imaging modality that has been mainly implemented with small field of view applications. Here, we developed a fast PAM system that utilizes a unique spiral laser scanning mechanism and a wide acoustic detection unit. The developed system can image an area of 12.5 cm2 in 6.4 s. The system has been characterized using highly detailed phantoms. Finally, the imaging capabilities of the system were further demonstrated by imaging a sheep brain ex vivo and a rat brain in vivo.
Collapse
Affiliation(s)
- Mohsin Zafar
- The Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
- Department of Biomedical Engineering, Wayne State University, Detroit, Michigan, USA
| | - Rayyan Manwar
- The Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
| | - Laura S McGuire
- Department of Neurological Surgery, University of Illinois at Chicago - College of Medicine, Chicago, Illinois, USA
| | - Fady T Charbel
- Department of Neurological Surgery, University of Illinois at Chicago - College of Medicine, Chicago, Illinois, USA
| | - Kamran Avanaki
- The Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
- Department of Dermatology, University of Illinois at Chicago, Chicago, Illinois, USA
| |
Collapse
|
34
|
Bicer MB. Radar-Based Microwave Breast Imaging Using Neurocomputational Models. Diagnostics (Basel) 2023; 13:diagnostics13050930. [PMID: 36900075 PMCID: PMC10000704 DOI: 10.3390/diagnostics13050930] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Revised: 02/21/2023] [Accepted: 02/27/2023] [Indexed: 03/06/2023] Open
Abstract
In this study, neurocomputational models are proposed for the acquisition of radar-based microwave images of breast tumors using deep neural networks (DNNs) and convolutional neural networks (CNNs). The circular synthetic aperture radar (CSAR) technique for radar-based microwave imaging (MWI) was used to generate 1000 numerical simulations of randomly generated scenarios. The scenarios contain information such as the number, size, and location of tumors for each simulation. A dataset of 1000 distinct complex-valued simulations based on these scenarios was then built. Subsequently, a real-valued DNN (RV-DNN) with five hidden layers, a real-valued CNN (RV-CNN) with seven convolutional layers, and a real-valued combined model (RV-MWINet) consisting of CNN and U-Net sub-models were built and trained to generate the radar-based microwave images. While the proposed RV-DNN, RV-CNN, and RV-MWINet models are real-valued, the MWINet model was also restructured with complex-valued layers (CV-MWINet), resulting in a total of four models. For the RV-DNN model, the training and test errors in terms of mean squared error (MSE) are 103.400 and 96.395, respectively, whereas for the RV-CNN model they are 45.283 and 153.818. Because the RV-MWINet model is a combined U-Net model, the accuracy metric is analyzed instead: the RV-MWINet model has training and testing accuracies of 0.9135 and 0.8635, whereas the CV-MWINet model has training and testing accuracies of 0.991 and 1.000, respectively. The peak signal-to-noise ratio (PSNR), universal quality index (UQI), and structural similarity index (SSIM) metrics were also evaluated for the images generated by the proposed neurocomputational models. The generated images demonstrate that the proposed neurocomputational models can be successfully utilized for radar-based microwave imaging, especially breast imaging.
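The PSNR metric reported in this entry (and in several of the abstracts above) is straightforward to compute. As a hedged illustration in pure Python, not the authors' code, it reduces to a mean squared error and a logarithm:

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images.

    Images are given as flat lists of pixel values; higher is better,
    and identical images yield infinity.
    """
    if len(reference) != len(test):
        raise ValueError("images must have the same number of pixels")
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # no distortion at all
    return 10.0 * math.log10(max_val ** 2 / mse)
```

For example, a 4-pixel image in which one pixel is off by 16 gray levels has an MSE of 64 and hence a PSNR of roughly 30 dB on an 8-bit scale.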
Collapse
Affiliation(s)
- Mustafa Berkan Bicer
- Electrical and Electronics Engineering Department, Engineering Faculty, Tarsus University, 33400 Mersin, Turkey
| |
Collapse
|
35
|
Seong D, Lee E, Kim Y, Han S, Lee J, Jeon M, Kim J. Three-dimensional reconstructing undersampled photoacoustic microscopy images using deep learning. PHOTOACOUSTICS 2023; 29:100429. [PMID: 36544533 PMCID: PMC9761854 DOI: 10.1016/j.pacs.2022.100429] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Revised: 10/29/2022] [Accepted: 11/28/2022] [Indexed: 05/31/2023]
Abstract
Spatial sampling density and data size are important determinants of the imaging speed of photoacoustic microscopy (PAM). Therefore, undersampling methods that reduce the number of scanning points by increasing the scanning step size are typically adopted to enhance the imaging speed of PAM. Since undersampling sacrifices spatial sampling density, in this study we report a deep learning-based method that fully reconstructs undersampled 3D PAM data, taking into account the number of data points, the data size, and the fact that PAM provides three-dimensional (3D) volume data. Quantitative analyses demonstrate that the proposed method is robust and outperforms interpolation-based reconstruction methods at various undersampling ratios, enhancing PAM system performance with an 80-fold faster imaging speed and an 800-fold smaller data size. The proposed method is shown to be the model closest to practical experimental conditions, effectively shortening imaging time with a significantly reduced data size for processing.
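The interpolation-based baselines that such learned reconstructions are compared against can be sketched in one dimension as follows. This minimal linear-interpolation upsampler is an illustrative stand-in for a scan-line baseline, not the authors' implementation:

```python
def linear_upsample(samples, factor):
    """Linearly interpolate between scan points to restore grid density.

    `samples` are values at coarse scan positions; `factor` is the
    undersampling ratio being undone. Returns the dense grid including
    the original points.
    """
    out = []
    for a, b in zip(samples, samples[1:]):
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(samples[-1])  # keep the final scan point
    return out
```

A learned reconstructor replaces this per-segment linear model with a data-driven one, which is why it can recover structure (e.g., thin vessels) that falls between coarse scan lines.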
Collapse
Affiliation(s)
- Daewoon Seong
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
| | - Euimin Lee
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
| | - Yoonseok Kim
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
| | - Sangyeob Han
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Institute of Biomedical Engineering, School of Medicine, Kyungpook National University, Daegu 41566, Republic of Korea
| | - Jaeyul Lee
- Department of Bioengineering, University of California, Los Angeles, CA 90095, USA
| | - Mansik Jeon
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
| | - Jeehyun Kim
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
| |
Collapse
|
36
|
Zhou Y, Sun N, Hu S. Deep Learning-Powered Bessel-Beam Multiparametric Photoacoustic Microscopy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3544-3551. [PMID: 35788453 PMCID: PMC9767649 DOI: 10.1109/tmi.2022.3188739] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Enabling simultaneous and high-resolution quantification of the total concentration of hemoglobin ( [Formula: see text]), oxygen saturation of hemoglobin (sO2), and cerebral blood flow (CBF), multi-parametric photoacoustic microscopy (PAM) has emerged as a promising tool for functional and metabolic imaging of the live mouse brain. However, due to the limited depth of focus imposed by the Gaussian-beam excitation, the quantitative measurements become inaccurate when the imaging object is out of focus. To address this problem, we have developed a hardware-software combined approach by integrating Bessel-beam excitation and conditional generative adversarial network (cGAN)-based deep learning. Side-by-side comparison of the new cGAN-powered Bessel-beam multi-parametric PAM against the conventional Gaussian-beam multi-parametric PAM shows that the new system enables high-resolution, quantitative imaging of [Formula: see text], sO2, and CBF over a depth range of [Formula: see text] in the live mouse brain, with errors 13-58 times lower than those of the conventional system. Better fulfilling the rigid requirement of light focusing for accurate hemodynamic measurements, the deep learning-powered Bessel-beam multi-parametric PAM may find applications in large-field functional recording across the uneven brain surface and beyond (e.g., tumor imaging).
Collapse
|
37
|
Menozzi L, Yang W, Feng W, Yao J. Sound out the impaired perfusion: Photoacoustic imaging in preclinical ischemic stroke. Front Neurosci 2022; 16:1055552. [PMID: 36532279 PMCID: PMC9751426 DOI: 10.3389/fnins.2022.1055552] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Accepted: 11/17/2022] [Indexed: 09/19/2023] Open
Abstract
Acoustically detecting optical absorption contrast, photoacoustic imaging (PAI) is a highly versatile imaging modality that can provide anatomical, functional, molecular, and metabolic information about biological tissues. PAI is highly scalable and can probe the same biological process at length scales ranging from single cells (microscopic) to the whole organ (macroscopic). Using hemoglobin as an endogenous contrast agent, PAI is capable of label-free imaging of blood vessels in the brain and of mapping hemodynamic functions such as blood oxygenation and blood flow. These merits make PAI a great tool for studying ischemic stroke, particularly for probing hemodynamic changes and impaired cerebral blood perfusion as a consequence of stroke. In this narrative review, we summarize the scientific progress of the past decade in using PAI to monitor cerebral blood vessel impairment and restoration after ischemic stroke, mostly in the preclinical setting. We also outline and discuss the major technological barriers and challenges that must be overcome so that PAI can play a more significant role in preclinical stroke research and, more importantly, accelerate its translation into a useful clinical tool for the diagnosis and management of human stroke.
Collapse
Affiliation(s)
- Luca Menozzi
- Department of Biomedical Engineering, Duke University, Durham, NC, United States
| | - Wei Yang
- Multidisciplinary Brain Protection Program, Department of Anesthesiology, Duke University, Durham, NC, United States
| | - Wuwei Feng
- Department of Neurology, Duke University School of Medicine, Durham, NC, United States
| | - Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC, United States
| |
Collapse
|
38
|
Zhang Z, Jin H, Zheng Z, Sharma A, Wang L, Pramanik M, Zheng Y. Deep and Domain Transfer Learning Aided Photoacoustic Microscopy: Acoustic Resolution to Optical Resolution. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3636-3648. [PMID: 35849667 DOI: 10.1109/tmi.2022.3192072] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Acoustic resolution photoacoustic microscopy (AR-PAM) can achieve greater imaging depth in biological tissue, at the sacrifice of imaging resolution compared with optical resolution photoacoustic microscopy (OR-PAM). Here we aim to enhance AR-PAM image quality toward that of OR-PAM images, which specifically includes enhancing the imaging resolution, restoring the microvasculature, and reducing artifacts. To this end, a network (MultiResU-Net) is first trained as a generative model with simulated AR-OR image pairs synthesized with a physical transducer model. Moderate enhancement results can already be obtained when applying this model to in vivo AR imaging data; nevertheless, the perceptual quality is unsatisfactory due to domain shift. A domain transfer learning technique under a generative adversarial network (GAN) framework is therefore proposed to drive the enhanced image's manifold toward that of real OR images. In this way, perceptually convincing AR-to-OR enhancement results are obtained, as supported by quantitative analysis: the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values increase significantly, from 14.74 dB to 19.01 dB and from 0.1974 to 0.2937, respectively, validating the improvement in reconstruction correctness and overall perceptual quality. The proposed algorithm has also been validated across different imaging depths, with experiments conducted in both shallow and deep tissue. This AR-to-OR domain transfer learning with GAN (AODTL-GAN) framework enables the enhancement target to be met with a limited amount of matched in vivo AR-OR imaging data.
Collapse
|
39
|
Chen J, Zhang Y, Zhu J, Tang X, Wang L. Freehand scanning photoacoustic microscopy with simultaneous localization and mapping. PHOTOACOUSTICS 2022; 28:100411. [PMID: 36254241 PMCID: PMC9568868 DOI: 10.1016/j.pacs.2022.100411] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Revised: 10/03/2022] [Accepted: 10/06/2022] [Indexed: 05/02/2023]
Abstract
Optical-resolution photoacoustic microscopy offers high-resolution, label-free hemodynamic and functional imaging for many biomedical applications. However, long-standing technical barriers, such as a limited field of view, bulky scanning probes, and slow imaging speed, have limited its application. Here, we present freehand scanning photoacoustic microscopy (FS-PAM), which can flexibly image various anatomical sites. We develop a compact handheld photoacoustic probe that acquires 3D images with high speed and great flexibility. The high scanning speed not only enables video-camera-mode imaging but also allows the first implementation of simultaneous localization and mapping (SLAM) in photoacoustic microscopy. We demonstrate fast in vivo imaging of several mouse organs and the human oral mucosa. The high imaging speed greatly reduces motion artifacts and distortions caused by tissue motion, breathing, and unintended hand shaking. We also demonstrate small-lesion localization in a large region of the brain. FS-PAM offers a flexible, high-speed imaging tool with an extendable field of view, enabling more biomedical imaging applications.
Collapse
Affiliation(s)
- Jiangbo Chen
- Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Ave, Kowloon, Hong Kong Special Administrative Region of China
| | - Yachao Zhang
- Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Ave, Kowloon, Hong Kong Special Administrative Region of China
| | - Jingyi Zhu
- Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Ave, Kowloon, Hong Kong Special Administrative Region of China
| | - Xu Tang
- Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Ave, Kowloon, Hong Kong Special Administrative Region of China
| | - Lidai Wang
- Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Ave, Kowloon, Hong Kong Special Administrative Region of China
- City University of Hong Kong Shenzhen Research Institute, Yuexing Yi Dao, Shenzhen, Guang Dong 518057, China
- Corresponding author at: Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Ave, Kowloon, Hong Kong Special Administrative Region of China.
| |
Collapse
|
40
|
Hybrid Methodology Based on Symmetrized Dot Pattern and Convolutional Neural Networks for Fault Diagnosis of Power Cables. Processes (Basel) 2022. [DOI: 10.3390/pr10102009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022] Open
Abstract
This study proposes a recognition method based on symmetrized dot pattern (SDP) analysis and a convolutional neural network (CNN) for rapid and accurate diagnosis of insulation defects by detecting the partial discharge (PD) signals of XLPE power cables. First, a normal power cable model and three models with different insulation defects are built. The PD signals resulting from power cable insulation defects are measured. The frequency and amplitude variations of PD signals from different defects are rendered as comprehensible images using the proposed SDP analysis method, presenting the features of the different power cable defects. Finally, the feature images are trained and classified by the CNN to realize a power cable insulation fault diagnosis system. The experimental results show that the proposed method can accurately diagnose the fault types of power cable insulation defects with a recognition accuracy of 98%. The proposed method is characterized by a short detection time and high diagnostic accuracy, and can effectively detect power cable PD to identify the fault type of the insulation defect.
Collapse
|
41
|
Deep learning alignment of bidirectional raster scanning in high speed photoacoustic microscopy. Sci Rep 2022; 12:16238. [PMID: 36171249 PMCID: PMC9519743 DOI: 10.1038/s41598-022-20378-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Accepted: 09/13/2022] [Indexed: 11/08/2022] Open
Abstract
Simultaneous point-by-point raster scanning of optical and acoustic beams has been widely adopted in high-speed photoacoustic microscopy (PAM) using a water-immersible microelectromechanical system or galvanometer scanner. However, when using high-speed water-immersible scanners, the two consecutively acquired bidirectional PAM images are misaligned with each other because of unstable performance, which causes a non-uniform time interval between scanning points. Therefore, only one unidirectionally acquired image is typically used; consequently, the imaging speed is halved. Here, we demonstrate a scanning framework based on a deep neural network (DNN) to correct misaligned PAM images acquired via bidirectional raster scanning. The proposed method doubles the imaging speed compared to that of conventional methods by aligning nonlinearly mismatched cross-sectional B-scan photoacoustic images during bidirectional raster scanning. Our DNN-assisted raster scanning framework can potentially be applied to other raster-scanning-based biomedical imaging tools, such as optical coherence tomography, ultrasound microscopy, and confocal microscopy.
Collapse
|
42
|
Hu Y, Lafci B, Luzgin A, Wang H, Klohs J, Dean-Ben XL, Ni R, Razansky D, Ren W. Deep learning facilitates fully automated brain image registration of optoacoustic tomography and magnetic resonance imaging. BIOMEDICAL OPTICS EXPRESS 2022; 13:4817-4833. [PMID: 36187259 PMCID: PMC9484422 DOI: 10.1364/boe.458182] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 07/14/2022] [Accepted: 07/17/2022] [Indexed: 06/16/2023]
Abstract
Multispectral optoacoustic tomography (MSOT) is an emerging optical imaging method providing multiplexed molecular and functional information from the rodent brain. It can be greatly augmented by magnetic resonance imaging (MRI), which offers excellent soft-tissue contrast and high-resolution brain anatomy. Nevertheless, registration of MSOT-MRI images remains challenging, chiefly due to the entirely different image contrast rendered by the two modalities. Previously reported registration algorithms mostly relied on manual, user-dependent brain segmentation, which compromised data interpretation and quantification. Here we propose a fully automated registration method for MSOT-MRI multimodal imaging empowered by deep learning. The automated workflow includes neural-network-based image segmentation to generate suitable masks, which are subsequently registered using an additional neural network. The performance of the algorithm is showcased with datasets acquired by cross-sectional MSOT and high-field MRI preclinical scanners. The automated registration method is further validated against manual and semi-automated registration, demonstrating its robustness and accuracy.
Collapse
Affiliation(s)
- Yexing Hu
- School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- contributed equally
| | - Berkan Lafci
- Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
- contributed equally
| | - Artur Luzgin
- Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
| | - Hao Wang
- Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
| | - Jan Klohs
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
| | - Xose Luis Dean-Ben
- Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
| | - Ruiqing Ni
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
- Institute for Regenerative Medicine, University of Zurich, Zurich 8952, Switzerland
| | - Daniel Razansky
- Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Zurich 8052, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Zurich 8093, Switzerland
| | - Wuwei Ren
- School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
| |
Collapse
|
43
|
Zhao T, Pham TT, Baker C, Ma MT, Ourselin S, Vercauteren T, Zhang E, Beard PC, Xia W. Ultrathin, high-speed, all-optical photoacoustic endomicroscopy probe for guiding minimally invasive surgery. BIOMEDICAL OPTICS EXPRESS 2022; 13:4414-4428. [PMID: 36032566 PMCID: PMC9408236 DOI: 10.1364/boe.463057] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/04/2022] [Revised: 07/01/2022] [Accepted: 07/01/2022] [Indexed: 06/15/2023]
Abstract
Photoacoustic (PA) endoscopy has shown significant potential for clinical diagnosis and surgical guidance. Multimode fibres (MMFs) are becoming increasingly attractive for the development of miniature endoscopy probes owing to their ultrathin size, low cost, and diffraction-limited spatial resolution enabled by wavefront shaping. However, current MMF-based PA endomicroscopy probes are limited by either a bulky ultrasound detector or a low imaging speed, which hinders their usability. In this work, we report the development of a highly miniaturised, high-speed PA endomicroscopy probe that is integrated within the cannula of a 20-gauge medical needle. This probe comprises an MMF for delivering the PA excitation light and a single-mode optical fibre with a plano-concave microresonator for ultrasound detection. Wavefront shaping with a digital micromirror device enabled rapid raster scanning of a focused light spot at the distal end of the MMF for tissue interrogation. High-resolution PA imaging of mouse red blood cells covering an area 100 µm in diameter was achieved with the needle probe at ∼3 frames per second. Mosaic imaging was performed after fibre characterisation by translating the needle probe to enlarge the field-of-view in real time. The developed ultrathin PA endomicroscopy probe is promising for guiding minimally invasive surgery by providing functional, molecular and microstructural information of tissue in real time.
Collapse
Affiliation(s)
- Tianrui Zhao
- School of Biomedical Engineering and Imaging Sciences, King’s College London, 4 Floor, Lambeth Wing St Thomas’ Hospital, London SE1 7EH, United Kingdom
| | - Truc Thuy Pham
- School of Biomedical Engineering and Imaging Sciences, King’s College London, 4 Floor, Lambeth Wing St Thomas’ Hospital, London SE1 7EH, United Kingdom
| | - Christian Baker
- School of Biomedical Engineering and Imaging Sciences, King’s College London, 4 Floor, Lambeth Wing St Thomas’ Hospital, London SE1 7EH, United Kingdom
| | - Michelle T. Ma
- School of Biomedical Engineering and Imaging Sciences, King’s College London, 4 Floor, Lambeth Wing St Thomas’ Hospital, London SE1 7EH, United Kingdom
| | - Sebastien Ourselin
- School of Biomedical Engineering and Imaging Sciences, King’s College London, 4 Floor, Lambeth Wing St Thomas’ Hospital, London SE1 7EH, United Kingdom
| | - Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King’s College London, 4 Floor, Lambeth Wing St Thomas’ Hospital, London SE1 7EH, United Kingdom
| | - Edward Zhang
- Department of Medical Physics and Biomedical Engineering, University College London, Gower Street, London WC1E 6BT, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Charles Bell House, 67-73 Riding House Street, London W1W 7EJ, UK
| | - Paul C. Beard
- Department of Medical Physics and Biomedical Engineering, University College London, Gower Street, London WC1E 6BT, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Charles Bell House, 67-73 Riding House Street, London W1W 7EJ, UK
| | - Wenfeng Xia
- School of Biomedical Engineering and Imaging Sciences, King’s College London, 4 Floor, Lambeth Wing St Thomas’ Hospital, London SE1 7EH, United Kingdom
| |
Collapse
|
44
|
Gao Y, Xu W, Chen Y, Xie W, Cheng Q. Deep Learning-Based Photoacoustic Imaging of Vascular Network Through Thick Porous Media. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2191-2204. [PMID: 35294347 DOI: 10.1109/tmi.2022.3158474] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Photoacoustic imaging is a promising approach for realizing in vivo transcranial cerebral vascular imaging. However, the strong attenuation and distortion of the photoacoustic wave caused by the thick porous skull greatly degrade the imaging quality. In this study, we developed a convolutional neural network based on U-Net to extract the effective photoacoustic information hidden in the speckle patterns obtained from vascular network image datasets under porous media. Our simulation and experimental results show that the proposed neural network can learn the mapping relationship between the speckle pattern and the target, and extract the photoacoustic signals of vessels submerged in noise to reconstruct high-quality images of the vessels with sharp outlines and a clean background. Compared with traditional photoacoustic reconstruction methods, the proposed deep-learning-based reconstruction algorithm performs better, with a lower mean absolute error, higher structural similarity, and higher peak signal-to-noise ratio of the reconstructed images. In conclusion, the proposed neural network can effectively extract valid information from highly blurred speckle patterns for the rapid reconstruction of target images, which offers promising applications in transcranial photoacoustic imaging.
Collapse
|
45
|
Feng F, Liang S, Luo J, Chen SL. High-fidelity deconvolution for acoustic-resolution photoacoustic microscopy enabled by convolutional neural networks. PHOTOACOUSTICS 2022; 26:100360. [PMID: 35574187 PMCID: PMC9095893 DOI: 10.1016/j.pacs.2022.100360] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 04/18/2022] [Accepted: 04/18/2022] [Indexed: 05/10/2023]
Abstract
Acoustic-resolution photoacoustic microscopy (AR-PAM) image resolution is determined by the point spread function (PSF) of the imaging system. Previous algorithms, including Richardson-Lucy (R-L) deconvolution and model-based (MB) deconvolution, improve spatial resolution by taking advantage of the PSF as prior knowledge. However, these methods suffer from inaccurate deconvolution, meaning the deconvolved feature size and the original one are not consistent (e.g., the former can be smaller than the latter). We present a novel deep convolutional neural network (CNN)-based algorithm featuring high-fidelity recovery of multiscale feature size to improve the lateral resolution of AR-PAM. The CNN is trained with simulated image pairs of line patterns, which mimic blood vessels. To investigate suitable CNN model structures and elaborate on the effectiveness of CNN methods compared with non-learning methods, we select five different CNN models, while R-L and directional MB methods are also applied for comparison. Besides simulated data, experimental data including tungsten wires, leaf veins, and in vivo blood vessels are also evaluated. A custom-defined metric of relative size error (RSE) is used to quantify the multiscale feature recovery ability of different methods. Compared to other methods, the enhanced deep super-resolution (EDSR) network and the residual-in-residual dense block network (RRDBNet) show better recovery in terms of RSE for tungsten wires with diameters ranging from 30 μm to 120 μm. Moreover, AR-PAM images of leaf veins are tested to demonstrate the effectiveness of the optimized CNN methods (EDSR and RRDBNet) for complex patterns. Finally, in vivo images of mouse ear and rat ear blood vessels are acquired and then deconvolved, and the results show that the proposed CNN method (notably RRDBNet) enables accurate deconvolution of multiscale feature size and thus good fidelity.
Collapse
Affiliation(s)
- Fei Feng
- University of Michigan–Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
| | - Siqi Liang
- University of Michigan–Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
| | - Jiajia Luo
- Institute of Medical Technology, Peking University Health Science Center, Beijing 100191, China
- Biomedical Engineering Department, Peking University, Beijing 100191, China
- Peking University People’s Hospital, Beijing 100044, China
- Corresponding author at: Biomedical Engineering Department, Peking University, Beijing 100191, China.
| | - Sung-Liang Chen
- University of Michigan–Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- State Key Laboratory of Advanced Optical Communication Systems and Networks, Shanghai Jiao Tong University, Shanghai 200240, China
- Engineering Research Center of Digital Medicine and Clinical Translation, Ministry of Education, Shanghai 200030, China
- Corresponding author at: University of Michigan–Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China.
| |
Collapse
|
46
|
Rajendran P, Pramanik M. High frame rate (∼3 Hz) circular photoacoustic tomography using single-element ultrasound transducer aided with deep learning. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:066005. [PMID: 36452448 PMCID: PMC9209813 DOI: 10.1117/1.jbo.27.6.066005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/27/2022] [Accepted: 06/01/2022] [Indexed: 05/29/2023]
Abstract
Significance: In circular scanning photoacoustic tomography (PAT), it takes several minutes to generate an image of acceptable quality, especially with a single-element ultrasound transducer (UST). The imaging speed can be enhanced by faster scanning (with high-repetition-rate light sources) and by using multiple USTs. However, artifacts arising from sparse signal acquisition and the low signal-to-noise ratio at higher scanning speeds limit the imaging speed. Thus, there is a need to improve the imaging speed of PAT systems without hampering the quality of the PAT image.
Aim: To improve the frame rate (or imaging speed) of the PAT system by using deep learning (DL).
Approach: For improving the frame rate (or imaging speed) of the PAT system, we propose a novel U-Net-based DL framework to reconstruct PAT images from fast-scanning data.
Results: The efficiency of the network was evaluated on both single- and multiple-UST-based PAT systems. Both phantom and in vivo imaging demonstrate that the network can improve the imaging frame rate by approximately sixfold in single-UST-based PAT systems and by approximately twofold in multi-UST-based PAT systems.
Conclusions: We proposed an innovative method to improve the frame rate (or imaging speed) by using DL, and with this method the fastest frame rate of ∼3 Hz imaging is achieved without hampering the quality of the reconstructed image.
Collapse
Affiliation(s)
| | - Manojit Pramanik
- Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore
| |
Collapse
|
47
|
Deep-Learning-Based Algorithm for the Removal of Electromagnetic Interference Noise in Photoacoustic Endoscopic Image Processing. SENSORS 2022; 22:s22103961. [PMID: 35632370 PMCID: PMC9147354 DOI: 10.3390/s22103961] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Revised: 05/18/2022] [Accepted: 05/21/2022] [Indexed: 12/10/2022]
Abstract
Despite all the expectations for photoacoustic endoscopy (PAE), there are still several technical issues that must be resolved before the technique can be successfully translated into clinics. Among these, electromagnetic interference (EMI) noise, in addition to the limited signal-to-noise ratio (SNR), has hindered the rapid development of related technologies. Unlike endoscopic ultrasound, in which the SNR can be increased by simply applying a higher pulsing voltage, there is a fundamental limitation in improving the SNR of PAE signals because they are mostly determined by the applied optical pulse energy, which must remain within safety limits. Moreover, a typical PAE hardware configuration requires a wide separation between the ultrasonic sensor and the amplifier, meaning that it is not easy to build an ideal PAE system that would be unaffected by EMI noise. With the intention of expediting the progress of related research, in this study we investigated the feasibility of deep-learning-based EMI noise removal in PAE image processing. In particular, we selected four fully convolutional neural network architectures, U-Net, SegNet, FCN-16s, and FCN-8s, and observed that a modified U-Net architecture outperformed the other architectures in EMI noise removal. Classical filter methods were also compared to confirm the superiority of the deep-learning-based approach. With the U-Net architecture, we were able to successfully produce a denoised 3D vasculature map that could even depict the mesh-like capillary networks distributed in the wall of a rat colorectum.
As the development of low-cost laser-diode- or LED-based photoacoustic tomography (PAT) systems is now emerging as an important topic in PAT, we expect that the presented AI strategy for the removal of EMI noise could be broadly applicable to many areas of PAT in which hardware-based prevention is limited and EMI noise therefore appears more prominently due to poor SNR.
Collapse
|
48
|
Zhu X, Huang Q, DiSpirito A, Vu T, Rong Q, Peng X, Sheng H, Shen X, Zhou Q, Jiang L, Hoffmann U, Yao J. Real-time whole-brain imaging of hemodynamics and oxygenation at micro-vessel resolution with ultrafast wide-field photoacoustic microscopy. LIGHT, SCIENCE & APPLICATIONS 2022; 11:138. [PMID: 35577780 PMCID: PMC9110749 DOI: 10.1038/s41377-022-00836-2] [Citation(s) in RCA: 60] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 04/27/2022] [Accepted: 05/04/2022] [Indexed: 05/10/2023]
Abstract
High-speed, high-resolution imaging of whole-brain hemodynamics is critically important to facilitating neurovascular research. High imaging speed and image quality are crucial to visualizing real-time hemodynamics in complex brain vascular networks and tracking fast pathophysiological activities at the microvessel level, which will enable advances in current questions in neurovascular and brain metabolism research, including stroke, dementia, and acute brain injury. Further, real-time imaging of the oxygen saturation of hemoglobin (sO2) can capture fast-paced oxygen delivery dynamics, which is needed to address pertinent questions in these fields and beyond. Here, we present a novel ultrafast functional photoacoustic microscopy (UFF-PAM) to image whole-brain hemodynamics and oxygenation. UFF-PAM takes advantage of several key engineering innovations, including stimulated Raman scattering (SRS) based dual-wavelength laser excitation, a water-immersible 12-facet polygon scanner, a high-sensitivity ultrasound transducer, and deep-learning-based image upsampling. A volumetric imaging rate of 2 Hz has been achieved over a field of view (FOV) of 11 × 7.5 × 1.5 mm3 with a high spatial resolution of ~10 μm. Using the UFF-PAM system, we have demonstrated proof-of-concept studies on mouse brains in response to systemic hypoxia, sodium nitroprusside, and stroke. We observed the mouse brain's fast morphological and functional changes over the entire cortex, including vasoconstriction, vasodilation, and deoxygenation. More interestingly, for the first time, with the whole-brain FOV and micro-vessel resolution, we captured vasoconstriction and hypoxia simultaneously in the spreading depolarization (SD) wave. We expect the new imaging technology will provide great potential for fundamental brain research under various pathological and physiological conditions.
Collapse
Affiliation(s)
- Xiaoyi Zhu
- Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA
| | - Qiang Huang
- Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA
- Department of Pediatric Surgery, Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
| | - Anthony DiSpirito
- Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA
| | - Tri Vu
- Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA
| | - Qiangzhou Rong
- Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA
| | - Xiaorui Peng
- Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA
| | - Huaxin Sheng
- Roski Eye Institute, Department of Ophthalmology, Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA
| | - Xiling Shen
- Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA
| | - Qifa Zhou
- Roski Eye Institute, Department of Ophthalmology, Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, 90089, USA
| | - Laiming Jiang
- Roski Eye Institute, Department of Ophthalmology, Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA.
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, 90089, USA.
| | - Ulrike Hoffmann
- Department of Anesthesiology, Duke University, Durham, NC, 27708, USA.
| | - Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC, 27708, USA.
| |
Collapse
|
49
|
Kim J, Kim G, Li L, Zhang P, Kim JY, Kim Y, Kim HH, Wang LV, Lee S, Kim C. Deep learning acceleration of multiscale superresolution localization photoacoustic imaging. LIGHT, SCIENCE & APPLICATIONS 2022; 11:131. [PMID: 35545614 PMCID: PMC9095876 DOI: 10.1038/s41377-022-00820-w] [Citation(s) in RCA: 48] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/21/2021] [Revised: 04/24/2022] [Accepted: 04/26/2022] [Indexed: 05/02/2023]
Abstract
A superresolution imaging approach that localizes very small targets, such as red blood cells or droplets of injected photoacoustic dye, has significantly improved spatial resolution in various biological and medical imaging modalities. However, this superior spatial resolution is achieved by sacrificing temporal resolution, because many raw image frames, each containing the localization target, must be superimposed to form a sufficiently sampled high-density superresolution image. Here, we demonstrate a computational strategy based on deep neural networks (DNNs) to reconstruct high-density superresolution images from far fewer raw image frames. The localization strategy can be applied to both 3D label-free localization optical-resolution photoacoustic microscopy (OR-PAM) and 2D labeled localization photoacoustic computed tomography (PACT). For the former, the required number of raw volumetric frames is reduced from tens to fewer than ten. For the latter, the required number of raw 2D frames is reduced 12-fold. Therefore, our proposed method has simultaneously improved temporal (via the DNN) and spatial (via the localization method) resolution in both label-free microscopy and labeled tomography. Deep-learning-powered localization PA imaging can potentially provide a practical tool in preclinical and clinical studies requiring fast temporal and fine spatial resolutions.
Collapse
Affiliation(s)
- Jongbeom Kim
- Departments of Electrical Engineering, Mechanical Engineering, Convergence IT Engineering, and Interdisciplinary Bioscience and Bioengineering, Graduate School of Artificial Intelligence, Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk, 37673, Republic of Korea
| | - Gyuwon Kim
- Departments of Electrical Engineering, Mechanical Engineering, Convergence IT Engineering, and Interdisciplinary Bioscience and Bioengineering, Graduate School of Artificial Intelligence, Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk, 37673, Republic of Korea
| | - Lei Li
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 E. California Blvd., MC 138-78, Pasadena, CA, 91125, USA
| | - Pengfei Zhang
- School of Precision Instruments and Optoelectronics Engineering, Tianjin University, 92 Weijin Road, Nankai District, Tianjin, 300072, China
| | - Jin Young Kim
- Departments of Electrical Engineering, Mechanical Engineering, Convergence IT Engineering, and Interdisciplinary Bioscience and Bioengineering, Graduate School of Artificial Intelligence, Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk, 37673, Republic of Korea
- Opticho, 532, CHANGeUP GROUND, 87 Cheongam-ro, Nam-gu, Pohang, Gyeongsangbuk, 37673, Republic of Korea
| | - Yeonggeun Kim
- Departments of Electrical Engineering, Mechanical Engineering, Convergence IT Engineering, and Interdisciplinary Bioscience and Bioengineering, Graduate School of Artificial Intelligence, Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk, 37673, Republic of Korea
| | - Hyung Ham Kim
- Departments of Electrical Engineering, Mechanical Engineering, Convergence IT Engineering, and Interdisciplinary Bioscience and Bioengineering, Graduate School of Artificial Intelligence, Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk, 37673, Republic of Korea
| | - Lihong V Wang
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 E. California Blvd., MC 138-78, Pasadena, CA, 91125, USA.
| | - Seungchul Lee
- Departments of Electrical Engineering, Mechanical Engineering, Convergence IT Engineering, and Interdisciplinary Bioscience and Bioengineering, Graduate School of Artificial Intelligence, Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk, 37673, Republic of Korea.
| | - Chulhong Kim
- Departments of Electrical Engineering, Mechanical Engineering, Convergence IT Engineering, and Interdisciplinary Bioscience and Bioengineering, Graduate School of Artificial Intelligence, Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk, 37673, Republic of Korea.
- Opticho, 532, CHANGeUP GROUND, 87 Cheongam-ro, Nam-gu, Pohang, Gyeongsangbuk, 37673, Republic of Korea.
| |
Collapse
|
50
|
Song W, Wang YC, Chen H, Li X, Zhou L, Min C, Zhu S, Yuan X. Label-free identification of human glioma xenograft of mouse brain with quantitative ultraviolet photoacoustic histology imaging. JOURNAL OF BIOPHOTONICS 2022; 15:e202100329. [PMID: 35000293 DOI: 10.1002/jbio.202100329] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/24/2021] [Revised: 12/14/2021] [Accepted: 01/07/2022] [Indexed: 06/14/2023]
Abstract
The ability of ultraviolet photoacoustic imaging to unveil the molecular specificities of endogenous nonfluorescent chromophores enables label-free histology imaging of tissue specimens. In this work, we exploit ultraviolet photoacoustic microscopy for identifying human glioma xenografts in mouse brain ex vivo. The intrinsically excellent imaging contrast of cell nuclei under ultraviolet photoacoustic illumination, along with good spatial resolution, allows discernment of brain glioma in freshly harvested thick brain slices, which circumvents the laborious, time-consuming specimen preparation, including micrometer-thick sectioning and H&E staining, that is prerequisite in standard histology analysis. Identification of tumor margins and quantitative analysis of tumor areas are implemented, showing good agreement with standard H&E-stained observations. Quantitative ultraviolet photoacoustic microscopy enables fast pathological assessment of brain tissue, and thus could facilitate intraoperative brain tumor resection to precisely remove all cancerous cells while preserving healthy tissue and its essential function.
Collapse
Affiliation(s)
- Wei Song
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, China
| | - Ya-Chao Wang
- Department of Neurosurgery, The First Affiliated Hospital of Shenzhen University, Shenzhen Second People's Hospital, Shenzhen, China
- The Institute Translational Medicine, The First Affiliated Hospital of Shenzhen University, Shenzhen Second People's Hospital, Shenzhen, China
| | - Huang Chen
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, China
| | - Xiangzhu Li
- Department of Neurosurgery, The First Affiliated Hospital of Shenzhen University, Shenzhen Second People's Hospital, Shenzhen, China
| | - Lingxiao Zhou
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, China
| | - Changjun Min
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, China
| | - Siwei Zhu
- The Institute of Translational Medicine, Tianjin Union Medical Center of Nankai University, Tianjin, China
| | - Xiaocong Yuan
- Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, China
| |
Collapse
|