51. Zhang H, Bo W, Wang D, DiSpirito A, Huang C, Nyayapathi N, Zheng E, Vu T, Gong Y, Yao J, Xu W, Xia J. Deep-E: A Fully-Dense Neural Network for Improving the Elevation Resolution in Linear-Array-Based Photoacoustic Tomography. IEEE Transactions on Medical Imaging 2022; 41:1279-1288. [PMID: 34928793; PMCID: PMC9161237; DOI: 10.1109/tmi.2021.3137060]
Abstract
Linear-array-based photoacoustic tomography has found broad applications in biomedical research and preclinical imaging. However, the elevational resolution of a linear array is fundamentally limited by the weak cylindrical focus of the transducer elements. While several methods have been proposed to address this issue, all of them handle the problem in a time-inefficient way. In this work, we propose to improve the elevational resolution of a linear array through Deep-E, a fully dense neural network based on U-Net. Deep-E achieves high computational efficiency by converting the three-dimensional problem into a two-dimensional one: it trains a model to enhance resolution along the elevational direction using only 2D slices in the axial-elevational plane, thereby reducing the computational burden in simulation and training. We demonstrated the efficacy of Deep-E on simulation, phantom, and human-subject datasets. We found that Deep-E improves elevational resolution by at least four times and recovers the object's true size. We envision that Deep-E will have a significant impact on linear-array-based photoacoustic imaging studies by providing high-speed, high-resolution image enhancement.
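The dimensionality reduction Deep-E relies on can be sketched directly: a 3D volume is decomposed into independent 2D axial-elevational slices, one per lateral scan position. The array shapes and axis order below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def axial_elevational_slices(volume):
    """Split an (axial, lateral, elevational) volume into 2D slices in the
    axial-elevational plane, one per lateral scan position. Each slice can
    then be enhanced independently by a 2D network such as Deep-E."""
    n_axial, n_lateral, n_elev = volume.shape
    return [volume[:, i, :] for i in range(n_lateral)]

vol = np.zeros((64, 128, 32))        # toy volume (axis order assumed)
slices = axial_elevational_slices(vol)
print(len(slices), slices[0].shape)  # 128 (64, 32)
```

Processing per-slice keeps both the simulated training data and the network strictly two-dimensional, which is the source of the reported efficiency gain.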
52. Zheng S, Meng Q, Wang XY. Quantitative endoscopic photoacoustic tomography using a convolutional neural network. Applied Optics 2022; 61:2574-2581. [PMID: 35471325; DOI: 10.1364/ao.441250]
Abstract
Endoscopic photoacoustic tomography (EPAT) is a catheter-based hybrid imaging modality capable of providing structural and functional information about biological luminal structures, such as coronary arterial vessels and the digestive tract. Recovering the optical properties of the imaged tissue from acoustic measurements via optical inversion is essential for quantitative EPAT (qEPAT). In this paper, a convolutional neural network (CNN) based on deep gradient descent is developed for qEPAT. The network reconstructs images of the spatially varying absorption coefficient in cross-sections of tubular structures from limited measurement data. The forward operator, which maps the absorption coefficient to the optical deposition due to pulsed irradiation, is embedded into the network training. The network parameters are optimized layer by layer through the deep gradient descent mechanism using numerically simulated data, while the forward operator and its adjoint are applied separately from the network training. The trained network takes as input an image representing the optical deposition and outputs an image of the absorption coefficient distribution. The method has been tested with computer-generated phantoms mimicking coronary arterial vessels containing various tissue types. Results suggest that, for the same measurement view, the structural similarity of images reconstructed by our method is about 10% higher than that of a non-learning method based on error minimization.
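The gradient-descent mechanism at the core of this approach can be illustrated with a toy linear inverse problem: recover coefficients mu from data d = A mu by repeatedly applying a forward operator A and its adjoint A^T outside of any network, mirroring the separation the paper describes. The operator and sizes here are illustrative stand-ins, not the paper's optical model.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20)) / np.sqrt(40)  # toy linear forward operator
mu_true = rng.random(20)                         # "absorption coefficients"
d = A @ mu_true                                  # simulated measurement data

mu = np.zeros(20)
step = 0.5
for _ in range(5000):
    residual = A @ mu - d           # forward operator applied to the estimate
    mu -= step * (A.T @ residual)   # adjoint operator drives the update

print(float(np.max(np.abs(mu - mu_true))) < 1e-2)  # True: mu converges
```

In the paper's learned variant, each such descent step is interleaved with a trained CNN update; the plain iteration above is the non-learning baseline it builds on.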
53. Kim H, Kim JY, Cho S, Ahn J, Kim Y, Kim H, Kim C. Performance comparison of high-speed photoacoustic microscopy: opto-ultrasound combiner versus ring-shaped ultrasound transducer. Biomed Eng Lett 2022; 12:147-153. [PMID: 35529340; PMCID: PMC9046515; DOI: 10.1007/s13534-022-00218-y]
Abstract
Photoacoustic microscopy (PAM) with a 532 nm pulsed laser is widely used to visualize microvascular structures in both small animals and humans in vivo. An opto-ultrasound combiner (OUC) is often used in high-speed PAM to confocally align the optical and acoustic beams and thereby improve the system's sensitivity. However, acoustic impedance mismatch in the OUC yields little sensitivity improvement. Alternatively, a ring-shaped ultrasound transducer (RUT) can also achieve the confocal configuration. Here, we compare the performance of OUC and RUT modules through ultrasound pulse-echo tests and PA imaging experiments. The signal-to-noise ratios (SNRs) of the RUT-based system were 15 dB, 12 dB, and 7 dB higher than those of the OUC-based system in the ultrasound pulse-echo, PA phantom imaging, and in vivo PA imaging tests, respectively. In addition, the RUT-based system could image the microvascular structures of small parts of a mouse body in a few seconds with minimal loss in SNR. Thus, with increased sensitivity, improved image detail, and fast image acquisition, we believe RUT-based systems could play a significant role in the design of future fast-PAM systems.
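The SNR gains quoted above are in decibels; converting them back to amplitude ratios makes the improvement concrete. A small helper, illustrative rather than taken from the paper:

```python
import math

def snr_db(signal_amplitude, noise_rms):
    """Amplitude SNR in decibels."""
    return 20 * math.log10(signal_amplitude / noise_rms)

# A 15 dB SNR advantage (pulse-echo test, RUT vs. OUC) corresponds to a
# ~5.6x amplitude ratio; the 7 dB in vivo advantage is ~2.2x.
print(round(10 ** (15 / 20), 2), round(10 ** (7 / 20), 2))  # 5.62 2.24
```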
Affiliation(s)
- Hyojin Kim, Jin Young Kim, Seonghee Cho, Joongho Ahn, Yeonggeun Kim, Hyungham Kim, Chulhong Kim: Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, and Interdisciplinary Bioscience and Bioengineering, Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk 37673, Republic of Korea

54. Ly CD, Nguyen VT, Vo TH, Mondal S, Park S, Choi J, Vu TTH, Kim CS, Oh J. Full-view in vivo skin and blood vessels profile segmentation in photoacoustic imaging based on deep learning. Photoacoustics 2022; 25:100310. [PMID: 34824975; PMCID: PMC8603312; DOI: 10.1016/j.pacs.2021.100310]
Abstract
Photoacoustic (PA) microscopy allows imaging of soft biological tissue with optical absorption contrast and ultrasonic spatial resolution. One of the major applications of PA imaging is the characterization of microvasculature. However, the strong PA signal from the skin layer overshadows the subcutaneous blood vessels, complicating PA image reconstruction in human studies. To address this, we examined an automatic deep learning (DL) algorithm to achieve high-resolution, high-contrast segmentation and thereby widen the applications of PA imaging. We propose a DL model based on a modified U-Net that extracts the relationship between the amplitudes of PA signals generated by the skin and by the underlying vessels. This study illustrates the broader potential of a hybrid complex network as an automatic segmentation tool for in vivo PA imaging. With this DL-infused solution, our results outperform previous studies, achieving real-time semantic segmentation on large, high-resolution PA images.
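The abstract does not name its evaluation metric; the Dice coefficient is a standard choice for scoring semantic segmentation masks like the skin and vessel profiles here, and is sketched below as an assumed example.

```python
def dice(pred, target):
    """Dice similarity between two binary masks given as flat 0/1 lists."""
    intersection = sum(p & t for p, t in zip(pred, target))
    return 2 * intersection / (sum(pred) + sum(target))

# One overlapping pixel, 2 predicted and 1 true positive: 2*1/(2+1)
print(round(dice([1, 1, 0, 0], [1, 0, 0, 0]), 3))  # 0.667
```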
Affiliation(s)
- Cao Duong Ly, Van Tu Nguyen, Tan Hung Vo, Sumin Park, Thi Thu Ha Vu: Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Sudip Mondal: New-senior Healthcare Innovation Center (BK21 Plus), Pukyong National University, Busan 48513, Republic of Korea
- Jaeyeop Choi: Industry 4.0 Convergence Bionics Engineering, Pukyong National University; Ohlabs Corp, Busan 48513, Republic of Korea
- Chang-Seok Kim: Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Junghwan Oh: Industry 4.0 Convergence Bionics Engineering, Department of Biomedical Engineering, and New-senior Healthcare Innovation Center (BK21 Plus), Pukyong National University, Busan 48513; Ohlabs Corp, Busan 48513, Republic of Korea

55. Cheng S, Zhou Y, Chen J, Li H, Wang L, Lai P. High-resolution photoacoustic microscopy with deep penetration through learning. Photoacoustics 2022; 25:100314. [PMID: 34824976; PMCID: PMC8604673; DOI: 10.1016/j.pacs.2021.100314]
Abstract
Optical-resolution photoacoustic microscopy (OR-PAM) offers superior spatial resolution and has received intense attention in recent years. Its application, however, has been limited to shallow depths because of strong optical scattering in biological tissues. In this work, we propose to achieve deep-penetrating OR-PAM performance by applying deep-learning-based image transformation to blurry in vivo mouse vascular images acquired with an acoustic-resolution photoacoustic microscopy (AR-PAM) setup. A generative adversarial network (GAN) was trained in this study and improved the lateral resolution of AR-PAM from 54.0 µm to 5.1 µm, comparable to that of a typical OR-PAM system (4.7 µm). The feasibility of the network was evaluated with in vivo mouse ear data, producing superior microvasculature images that outperform blind deconvolution. The generalization of the network was validated with in vivo mouse brain data. Moreover, we showed experimentally that the method retains high resolution at tissue depths beyond one optical transport mean free path. While it can be further improved, the proposed method opens new horizons for expanding the scope of OR-PAM toward deep-tissue imaging and wide applications in biomedicine.
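Lateral resolution figures such as 54.0 µm and 5.1 µm are full-width-at-half-maximum (FWHM) values; if the point spread function is modeled as Gaussian (an assumption, not stated in the abstract), the corresponding standard deviations follow from FWHM = 2 sqrt(2 ln 2) sigma:

```python
import math

def fwhm_to_sigma(fwhm):
    """Standard deviation of a Gaussian PSF with the given FWHM."""
    return fwhm / (2 * math.sqrt(2 * math.log(2)))

print(round(fwhm_to_sigma(54.0), 2))  # AR-PAM input:   sigma ~ 22.93 um
print(round(fwhm_to_sigma(5.1), 2))   # network output: sigma ~ 2.17 um
```

Under this Gaussian assumption, the GAN's job amounts to inverting a roughly tenfold blur of the vascular image.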
Affiliation(s)
- Shengfu Cheng, Huanhao Li, Puxiang Lai: Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China; The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Yingying Zhou: Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China; Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Jiangbo Chen, Lidai Wang: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; City University of Hong Kong Shenzhen Research Institute, Shenzhen, China

56. Kim G, Kim J, Choi WJ, Kim C, Lee S. Integrated deep learning framework for accelerated optical coherence tomography angiography. Sci Rep 2022; 12:1289. [PMID: 35079046; PMCID: PMC8789830; DOI: 10.1038/s41598-022-05281-0]
Abstract
Label-free optical coherence tomography angiography (OCTA) has become a premier clinical imaging tool for obtaining structural and functional information about microvasculature. One primary technical drawback of OCTA, however, is its imaging speed: current protocols require high sampling density and multiple acquisitions of cross-sectional B-scans to form one image frame, resulting in slow acquisition. Recently, deep learning (DL)-based methods have gained attention for accelerating OCTA acquisition. They achieve faster acquisition through two independent reconstruction approaches: producing high-quality angiograms from a few repeated B-scans, or producing high-resolution angiograms from undersampled data. While these approaches have shown promising results, each provides a partial solution that accounts for only one aspect of the OCTA scanning mechanism. Herein, we propose an integrated DL method that tackles both factors simultaneously and further enhances reconstruction speed and quality. We designed an end-to-end deep neural network (DNN) framework with a two-stage adversarial training scheme that reconstructs fully sampled, high-quality (8 repeated B-scans) angiograms from their undersampled, low-quality (2 repeated B-scans) counterparts by successively enhancing pixel resolution and image quality. Using an in vivo mouse brain vasculature dataset, we evaluated the proposed framework through quantitative and qualitative assessments and demonstrate that it achieves superior reconstruction performance compared with conventional means. Our DL-based framework can accelerate OCTA imaging by 16× to 256× while preserving image quality, enabling a convenient software-only solution for enhancing preclinical and clinical studies.
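The quoted 16× to 256× speed-up can be decomposed into the two factors the framework tackles jointly. The split below (4× from using 2 instead of 8 repeated B-scans, times 4× to 64× from spatial undersampling) is an illustrative assumption consistent with the numbers in the abstract, not a decomposition the paper states.

```python
def acceleration(full_repeats, used_repeats, undersampling_factor):
    """Combined acquisition speed-up from fewer repeated B-scans and
    spatial undersampling."""
    return (full_repeats / used_repeats) * undersampling_factor

print(acceleration(8, 2, 4), acceleration(8, 2, 64))  # 16.0 256.0
```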
Affiliation(s)
- Gyuwon Kim: Department of Mechanical Engineering, Pohang University of Science and Technology (POSTECH), Pohang, 37673, Republic of Korea
- Jongbeom Kim: Department of Mechanical Engineering; Departments of Electrical Engineering and Convergence I.T. Engineering, Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang, 37673, Republic of Korea
- Woo June Choi: School of Electrical and Electronics Engineering, College of ICT Engineering, Chung-Ang University, Seoul, 06974, Republic of Korea
- Chulhong Kim: Department of Mechanical Engineering; Departments of Electrical Engineering and Convergence I.T. Engineering, Medical Device Innovation Center; Graduate School of Artificial Intelligence, Pohang University of Science and Technology (POSTECH), Pohang, 37673, Republic of Korea
- Seungchul Lee: Department of Mechanical Engineering; Graduate School of Artificial Intelligence, Pohang University of Science and Technology (POSTECH), Pohang, 37673, Republic of Korea

57. Sathyanarayana SG, Wang Z, Sun N, Ning B, Hu S, Hossack JA. Recovery of Blood Flow From Undersampled Photoacoustic Microscopy Data Using Sparse Modeling. IEEE Transactions on Medical Imaging 2022; 41:103-120. [PMID: 34388091; DOI: 10.1109/tmi.2021.3104521]
Abstract
Photoacoustic microscopy (PAM) leverages the optical absorption contrast of blood hemoglobin for high-resolution, multi-parametric imaging of the microvasculature in vivo. However, quantifying blood flow speed requires dense spatial sampling to assess the flow-induced loss of correlation between sequentially acquired A-line signals, which raises the laser pulse repetition rate and, consequently, the optical fluence. To address this issue, we developed a sparse modeling approach for blood flow quantification based on downsampled PAM data. Evaluation both in vitro and in vivo shows that this sparse modeling method can accurately recover substantially downsampled data (up to 8 times) for correlation-based blood flow analysis, with a relative error of 12.7 ± 6.1% across 10 datasets in vitro and 12.7 ± 12.1% in vivo for 8-times-downsampled data. Reconstruction with the proposed method is on par with compressive sensing recovery, which exhibits an error of 12.0 ± 7.9% in vitro and 33.86 ± 26.18% in vivo at the same downsampling. Both methods outperform bicubic interpolation, which shows an error of 15.95 ± 9.85% in vitro and 110.7 ± 87.1% in vivo.
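The correlation-based flow analysis that the recovered data feeds into can be sketched as a normalized correlation between successive A-lines: faster flow decorrelates the signals more. The synthetic signals below are illustrative, not the paper's data model.

```python
import numpy as np

def aline_correlation(a, b):
    """Normalized correlation of two A-line signals (1.0 = identical)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

t = np.linspace(0, 1, 200, endpoint=False)
a = np.sin(2 * np.pi * 5 * t)
b = np.roll(a, 3)  # small shift mimicking flow between acquisitions

print(round(aline_correlation(a, a), 6))                  # 1.0
print(aline_correlation(a, b) < aline_correlation(a, a))  # True
```

Dense sampling is needed precisely because this correlation must be estimated between closely spaced acquisitions; sparse modeling restores the missing samples so the estimate remains valid at lower fluence.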
58. Song X, Chen G, Zhao A, Liu X, Zeng J. Virtual optical-resolution photoacoustic microscopy using the k-Wave method. Applied Optics 2021; 60:11241-11246. [PMID: 35201116; DOI: 10.1364/ao.444106]
Abstract
Deep learning has been widely used in image processing, quantitative analysis, and other applications of optical-resolution photoacoustic microscopy (OR-PAM), and it requires a large amount of photoacoustic data for training and testing. However, owing to OR-PAM's complex structure, high cost, slow imaging speed, and other factors, it is difficult to obtain enough data for deep learning, which limits such research to a certain extent. To solve this problem, a virtual OR-PAM based on k-Wave is proposed. The virtual microscope covers excitation light source and ultrasonic probe configuration, scanning, and signal processing, and can realize the common Gaussian-beam and Bessel-beam OR-PAM variants. The system performance (lateral resolution, axial resolution, and depth of field) was tested by imaging a vertically tilted fiber, and the effectiveness and feasibility of the virtual simulation platform were verified by 3D imaging of a virtual vascular network. Its ability to generate datasets for deep learning was also verified. The virtual OR-PAM can promote OR-PAM research and the application of deep learning to OR-PAM.
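At the core of any such acoustic simulation is the time-of-flight relation that maps A-line samples to depth. A minimal helper; the sound speed is a typical water/soft-tissue value assumed here, not a parameter quoted by the paper:

```python
def time_of_flight_us(depth_mm, sound_speed_mm_per_us=1.5):
    """Acoustic arrival time in microseconds for an absorber at depth_mm,
    assuming a uniform sound speed (~1.5 mm/us in water and soft tissue)."""
    return depth_mm / sound_speed_mm_per_us

print(time_of_flight_us(3.0))  # 2.0
```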
59. Photoacoustic imaging aided with deep learning: a review. Biomed Eng Lett 2021; 12:155-173. [DOI: 10.1007/s13534-021-00210-y]

60. Sarmiento LC, Villamizar S, López O, Collazos AC, Sarmiento J, Rodríguez JB. Recognition of EEG Signals from Imagined Vowels Using Deep Learning Methods. Sensors (Basel) 2021; 21:6503. [PMID: 34640824; PMCID: PMC8512781; DOI: 10.3390/s21196503]
Abstract
The use of imagined speech with electroencephalographic (EEG) signals is a promising field of brain-computer interfaces (BCIs) that seeks communication between language-related areas of the cerebral cortex and devices or machines. However, the complexity of this brain process makes the analysis and classification of such signals a relevant research topic. The goals of this study were: to develop a new deep learning (DL) algorithm, referred to as CNNeeg1-1, to recognize EEG signals in imagined-vowel tasks; to create an imagined-speech database of 50 subjects specializing in imagined Spanish vowels (/a/, /e/, /i/, /o/, /u/); and to compare the performance of CNNeeg1-1 against the Shallow CNN and EEGNet benchmark DL algorithms using an open-access database (BD1) and the newly developed database (BD2). A mixed analysis of variance was conducted to assess intra-subject and inter-subject training of the proposed algorithms. The results show that, for intra-subject training, CNNeeg1-1 performed best among the three methods in classifying imagined vowels, with an accuracy of 65.62% on BD1 and 85.66% on BD2.
Affiliation(s)
- Luis Carlos Sarmiento, Omar López, Ana Claros Collazos, Jhon Sarmiento: Departamento de Tecnología, Universidad Pedagógica Nacional, Bogotá 111321, Colombia
- Sergio Villamizar, Jan Bacca Rodríguez: Department of Electrical and Electronics Engineering, School of Engineering, Universidad Nacional de Colombia, Bogotá 111321, Colombia

61. Rajendran P, Pramanik M. Deep-learning-based multi-transducer photoacoustic tomography imaging without radius calibration. Optics Letters 2021; 46:4510-4513. [PMID: 34525034; DOI: 10.1364/ol.434513]
Abstract
Pulsed laser diodes are used as excitation sources in photoacoustic tomography (PAT) because of their low cost, compact size, and high pulse repetition rate. In combination with multiple single-element ultrasound transducers (SUTs), the imaging speed of PAT can be improved. However, during PAT image reconstruction, the exact radius of each SUT is required for accurate reconstruction. Here we developed a novel deep learning approach that alleviates the need for radius calibration. We used a convolutional neural network (fully dense U-Net) aided by a convolutional long short-term memory block to reconstruct the PAT images. Our analysis on the test set demonstrates that the proposed network eliminates the need for radius calibration and improves the peak signal-to-noise ratio by ∼73% without compromising image quality. In vivo imaging was used to verify the performance of the network.
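Why radius calibration matters in conventional reconstruction can be seen from the delay-and-sum geometry: the radius enters every per-pixel delay, so a small radius error shifts all delays. The sketch below is a generic delay calculation with illustrative sound speed and sampling rate, not the paper's reconstruction code.

```python
import math

def das_delay_samples(px, pz, radius, theta, c=1500.0, fs=40e6):
    """Delay-and-sum sample index for pixel (px, pz) and a single-element
    transducer on a circle of the given radius at angle theta (circular
    scanning geometry). c (m/s) and fs (Hz) are illustrative values."""
    tx, tz = radius * math.cos(theta), radius * math.sin(theta)
    distance = math.hypot(px - tx, pz - tz)
    return distance / c * fs

# A 1 mm error in the assumed radius shifts every delay by ~27 samples,
# which is why conventional reconstruction needs per-SUT radius calibration.
shift = das_delay_samples(0, 0, 0.041, 0) - das_delay_samples(0, 0, 0.040, 0)
print(round(shift, 1))  # 26.7
```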
62. Chen J, Zhang Y, Bai S, Zhu J, Chirarattananon P, Ni K, Zhou Q, Wang L. Dual-foci fast-scanning photoacoustic microscopy with 3.2-MHz A-line rate. Photoacoustics 2021; 23:100292. [PMID: 34430201; PMCID: PMC8367837; DOI: 10.1016/j.pacs.2021.100292]
Abstract
We report fiber-based dual-foci fast-scanning OR-PAM that can double the scanning rate without compromising imaging resolution, field of view, or detection sensitivity. To achieve a fast scanning speed, the OR-PAM system uses a single-axis water-immersible resonant scanning mirror that confocally scans the optical and acoustic beams at 1018 Hz over a 3-mm range. Pulse energies of 45-100 nJ are sufficient for acquiring vascular and oxygen-saturation images. The dual-foci method doubles the B-scan rate to 2036 Hz. Using two lasers and stimulated Raman scattering, we achieve dual-wavelength excitation at both foci, for a total A-line rate of 3.2 MHz. In in vivo experiments, we injected epinephrine and monitored the hemodynamic and oxygen-saturation responses in peripheral vessels at 1.7 Hz over a 2.5 × 6.7 mm² region. Dual-foci OR-PAM offers a new imaging tool for studying fast physiological and pathological changes.
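From the quoted rates one can back out an approximate A-line count per B-scan. The arithmetic below assumes the 3.2 MHz total A-line rate is spread evenly over the 2036 Hz dual-foci B-scan rate, an illustrative decomposition not stated in the abstract.

```python
total_aline_rate_hz = 3.2e6   # dual-wavelength, dual-foci combined
bscan_rate_hz = 2 * 1018      # 1018 Hz resonant mirror, doubled by dual foci

alines_per_bscan = total_aline_rate_hz / bscan_rate_hz
print(round(alines_per_bscan))  # 1572
```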
Affiliation(s)
- Jiangbo Chen, Yachao Zhang, Songnan Bai, Jingyi Zhu, Pakpong Chirarattananon: Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Ave, Kowloon, Hong Kong SAR, China
- Kai Ni, Qian Zhou: Division of Advanced Manufacturing, Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Lidai Wang (corresponding author): Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Ave, Kowloon, Hong Kong SAR, China; City University of Hong Kong Shenzhen Research Institute, Yuexing Yi Dao, Shenzhen, Guang Dong, 518057, China

63. Zhou J, He D, Shang X, Guo Z, Chen SL, Luo J. Photoacoustic microscopy with sparse data by convolutional neural networks. Photoacoustics 2021; 22:100242. [PMID: 33763327; PMCID: PMC7973247; DOI: 10.1016/j.pacs.2021.100242]
Abstract
The point-by-point scanning mechanism of photoacoustic microscopy (PAM) results in low imaging speed, limiting PAM's applications. In this work, we propose a method to improve the quality of sparse PAM images using convolutional neural networks (CNNs), thereby speeding up image acquisition while maintaining good image quality. The CNN model uses attention modules, residual blocks, and perceptual losses to reconstruct a sparse PAM image, learning a mapping from a 1/4- or 1/16-sampled sparse PAM image to a latent fully sampled one. The model is trained and validated mainly on PAM images of leaf veins, showing clear quantitative and qualitative improvements. We also tested the model on in vivo PAM images of blood vessels in mouse ears and eyes. The results suggest that the model enhances the quality of sparse PAM images of blood vessels in several respects, facilitating fast PAM and its clinical applications.
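The sparse inputs can be emulated with a regular sampling mask: a factor of 2 per axis keeps 1/4 of the pixels and a factor of 4 keeps 1/16, matching the sampling ratios in the abstract. The regular-grid masking scheme itself is an assumption for illustration.

```python
import numpy as np

def undersample(img, factor):
    """Zero out all but every `factor`-th pixel along both axes."""
    sparse = np.zeros_like(img)
    sparse[::factor, ::factor] = img[::factor, ::factor]
    return sparse

img = np.ones((8, 8))
print(undersample(img, 2).sum() / img.sum())  # 0.25   (1/4 sampling)
print(undersample(img, 4).sum() / img.sum())  # 0.0625 (1/16 sampling)
```

The network's task is the inverse of this operation: fill the zeroed positions with values consistent with the fully sampled image.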
Affiliation(s)
- Jiasheng Zhou, Da He, Xiaoyu Shang, Zhendong Guo, Sung-Liang Chen (corresponding author): University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai Jiao Tong University, Shanghai 200240, China
- Jiajia Luo (corresponding author): Biomedical Engineering Department, Peking University, Beijing 100191, China

64. Vu T, DiSpirito A, Li D, Wang Z, Zhu X, Chen M, Jiang L, Zhang D, Luo J, Zhang YS, Zhou Q, Horstmeyer R, Yao J. Deep image prior for undersampling high-speed photoacoustic microscopy. Photoacoustics 2021; 22:100266. [PMID: 33898247; PMCID: PMC8056431; DOI: 10.1016/j.pacs.2021.100266]
Abstract
Photoacoustic microscopy (PAM) is an emerging imaging method combining light and sound. However, limited by the laser's repetition rate, state-of-the-art high-speed PAM technology often sacrifices spatial sampling density (i.e., undersamples) for increased imaging speed over a large field of view. Deep learning (DL) methods have recently been used to improve sparsely sampled PAM images; however, these methods often require time-consuming pre-training and large training datasets with ground truth. Here, we propose using deep image prior (DIP) to improve the quality of undersampled PAM images. Unlike other DL approaches, DIP requires neither pre-training nor fully sampled ground truth, enabling flexible and fast implementation on various imaging targets. Our results demonstrate substantial improvement in PAM images with as few as 1.4% of the fully sampled pixels on high-speed PAM. Our approach outperforms interpolation, is competitive with a pre-trained supervised DL method, and is readily translatable to other high-speed, undersampled imaging modalities.
Affiliation(s)
- Tri Vu
- Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Daiwei Li
- Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Zixuan Wang
- Division of Engineering in Medicine, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Cambridge, MA, 02139, USA
- Xiaoyi Zhu
- Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Maomao Chen
- Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Laiming Jiang
- Department of Biomedical Engineering and USC Roski Eye Institute, University of Southern California, Los Angeles, CA, 90089, USA
- Dong Zhang
- Department of Biomedical Engineering, Tsinghua University, Beijing, 100084, China
- Jianwen Luo
- Department of Biomedical Engineering, Tsinghua University, Beijing, 100084, China
- Yu Shrike Zhang
- Division of Engineering in Medicine, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Cambridge, MA, 02139, USA
- Qifa Zhou
- Department of Biomedical Engineering and USC Roski Eye Institute, University of Southern California, Los Angeles, CA, 90089, USA
- Junjie Yao
- Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
65
DiSpirito A, Vu T, Pramanik M, Yao J. Sounding out the hidden data: A concise review of deep learning in photoacoustic imaging. Exp Biol Med (Maywood) 2021; 246:1355-1367. [PMID: 33779342 PMCID: PMC8243210 DOI: 10.1177/15353702211000310] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
The rapidly evolving field of photoacoustic tomography utilizes endogenous chromophores to extract both functional and structural information from deep within tissues. It is this power to perform precise quantitative measurements in vivo, with endogenous or exogenous contrast, that makes photoacoustic tomography highly promising for clinical translation in functional brain imaging, early cancer detection, real-time surgical guidance, and the visualization of dynamic drug responses. Considering that photoacoustic tomography has benefited from numerous engineering innovations, it is no surprise that many of its current cutting-edge developments incorporate advances from the equally novel field of artificial intelligence. More specifically, alongside the recent growth in graphics processing unit capabilities has emerged an offshoot of artificial intelligence known as deep learning. Rooted in the solid foundation of signal processing, deep learning typically utilizes an optimization method known as gradient descent to minimize a loss function and update model parameters. There are already a number of innovative efforts in photoacoustic tomography utilizing deep learning techniques for a variety of purposes, including resolution enhancement, reconstruction artifact removal, undersampling correction, and improved quantification. Most of these efforts have proven highly promising in addressing long-standing technical obstacles where traditional solutions either fail completely or make only incremental progress. This concise review focuses on the history of applied artificial intelligence in photoacoustic tomography, presents recent advances at this multifaceted intersection of fields, and outlines the most exciting advances that will likely propagate into promising future innovations.
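The gradient-descent update mentioned in the abstract (minimize a loss function, then step each parameter against its gradient) is simple enough to sketch directly. This is a generic illustration, not code from the review; the function name and the quadratic example loss are my own.

```python
def gradient_descent(grad, params, lr=0.1, steps=100):
    """Generic gradient-descent loop: params <- params - lr * grad(params)."""
    for _ in range(steps):
        g = grad(params)
        params = [p - lr * gi for p, gi in zip(params, g)]
    return params

# Example: minimize the quadratic loss L(w) = (w0 - 3)^2 + (w1 + 1)^2,
# whose gradient is [2*(w0 - 3), 2*(w1 + 1)].
grad = lambda w: [2 * (w[0] - 3), 2 * (w[1] + 1)]
w = gradient_descent(grad, [0.0, 0.0], lr=0.1, steps=200)  # converges to [3.0, -1.0]
```

In deep learning the same loop runs over millions of parameters, with the gradient supplied by backpropagation rather than a hand-derived formula.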
Affiliation(s)
- Anthony DiSpirito
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Tri Vu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore 637459, Singapore
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
66
Yao J, Wang LV. Perspective on fast-evolving photoacoustic tomography. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-210105-PERR. [PMID: 34196136 PMCID: PMC8244998 DOI: 10.1117/1.jbo.26.6.060602] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Accepted: 06/17/2021] [Indexed: 05/19/2023]
Abstract
SIGNIFICANCE Acoustically detecting the rich optical absorption contrast in biological tissues, photoacoustic tomography (PAT) seamlessly bridges the functional and molecular sensitivity of optical excitation with the deep penetration and high scalability of ultrasound detection. As a result of continuous technological innovations and commercial development, PAT has been playing an increasingly important role in life sciences and patient care, including functional brain imaging, smart drug delivery, early cancer diagnosis, and interventional therapy guidance. AIM Built on our 2016 tutorial article that focused on the principles and implementations of PAT, this perspective aims to provide an update on the exciting technical advances in PAT. APPROACH This perspective focuses on the recent PAT innovations in volumetric deep-tissue imaging, high-speed wide-field microscopic imaging, high-sensitivity optical ultrasound detection, and machine-learning enhanced image reconstruction and data processing. Representative applications are introduced to demonstrate these enabling technical breakthroughs in biomedical research. CONCLUSIONS We conclude the perspective by discussing the future development of PAT technologies.
Affiliation(s)
- Junjie Yao
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Lihong V. Wang
- California Institute of Technology, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, Pasadena, California, United States
67
Gröhl J, Schellenberg M, Dreher K, Maier-Hein L. Deep learning for biomedical photoacoustic imaging: A review. PHOTOACOUSTICS 2021; 22:100241. [PMID: 33717977 PMCID: PMC7932894 DOI: 10.1016/j.pacs.2021.100241] [Citation(s) in RCA: 101] [Impact Index Per Article: 25.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/05/2020] [Revised: 01/18/2021] [Accepted: 01/20/2021] [Indexed: 05/04/2023]
Abstract
Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extraction of relevant tissue parameters from the raw data requires solving inverse image reconstruction problems, which have proven extremely difficult. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in the context of medical imaging and also finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and the fact that they can be adapted to any given problem. In this review, we examine the current state of the art regarding deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.
Affiliation(s)
- Janek Gröhl
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Melanie Schellenberg
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Kris Dreher
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Lena Maier-Hein
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
68
Deng H, Qiao H, Dai Q, Ma C. Deep learning in photoacoustic imaging: a review. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-200374VRR. [PMID: 33837678 PMCID: PMC8033250 DOI: 10.1117/1.jbo.26.4.040901] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Accepted: 03/18/2021] [Indexed: 05/18/2023]
Abstract
SIGNIFICANCE Photoacoustic (PA) imaging can provide structural, functional, and molecular information for preclinical and clinical studies. For PA imaging (PAI), non-ideal signal detection deteriorates image quality, and quantitative PAI (QPAI) remains challenging due to the unknown light fluence spectra in deep tissue. In recent years, deep learning (DL) has shown outstanding performance when implemented in PAI, with applications in image reconstruction, quantification, and understanding. AIM We provide (i) a comprehensive overview of the DL techniques that have been applied in PAI, (ii) references for designing DL models for various PAI tasks, and (iii) a summary of the future challenges and opportunities. APPROACH Papers published before November 2020 in the area of applying DL in PAI were reviewed. We categorized them into three types: image understanding, reconstruction of the initial pressure distribution, and QPAI. RESULTS When applied in PAI, DL can effectively process images, improve reconstruction quality, fuse information, and assist quantitative analysis. CONCLUSION DL has become a powerful tool in PAI. With the development of DL theory and technology, it will continue to boost the performance and facilitate the clinical translation of PAI.
Affiliation(s)
- Handi Deng
- Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Hui Qiao
- Tsinghua University, Department of Automation, Haidian, Beijing, China
- Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
- Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
- Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Qionghai Dai
- Tsinghua University, Department of Automation, Haidian, Beijing, China
- Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
- Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
- Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Cheng Ma
- Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Beijing Innovation Center for Future Chip, Beijing, China
69
Yang C, Lan H, Gao F, Gao F. Review of deep learning for photoacoustic imaging. PHOTOACOUSTICS 2021; 21:100215. [PMID: 33425679 PMCID: PMC7779783 DOI: 10.1016/j.pacs.2020.100215] [Citation(s) in RCA: 79] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/15/2020] [Revised: 10/11/2020] [Accepted: 10/11/2020] [Indexed: 05/02/2023]
Abstract
Machine learning has developed dramatically and found numerous applications in various fields over the past few years. This boom originated around 2009, when a new model, the deep artificial neural network, began to surpass other established, mature models on some important benchmarks. Deep neural networks were subsequently adopted widely in academia and industry and, in applications ranging from image analysis to natural language processing, have become the state-of-the-art machine learning models. They hold great potential for medical imaging technology, medical data analysis, medical diagnosis, and other healthcare issues, and are being promoted in both preclinical and clinical settings. In this review, we provide an overview of new developments and challenges in the application of machine learning to medical image analysis, with a special focus on deep learning in photoacoustic imaging. The aim of this review is threefold: (i) to introduce deep learning with some important basics, (ii) to review recent works that apply deep learning across the entire ecological chain of photoacoustic imaging, from image reconstruction to disease diagnosis, and (iii) to provide some open-source materials and other resources for researchers interested in applying deep learning to photoacoustic imaging.
Affiliation(s)
- Changchun Yang
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai, 200050, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Hengrong Lan
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai, 200050, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Feng Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Fei Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
70
Das D, Sharma A, Rajendran P, Pramanik M. Another decade of photoacoustic imaging. Phys Med Biol 2020; 66. [PMID: 33361580 DOI: 10.1088/1361-6560/abd669] [Citation(s) in RCA: 65] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Accepted: 12/23/2020] [Indexed: 01/09/2023]
Abstract
Photoacoustic imaging is a hybrid biomedical imaging modality that is finding its way into clinical practice. Although the photoacoustic phenomenon has been known for more than a century, only in the last two decades has it been widely researched and used for biomedical imaging applications. In this review we focus on the development and progress of the technology in the last decade (2010-2020). Having become more user-friendly, cheaper, and more portable, photoacoustic imaging promises a wide range of applications if translated to the clinic. The growth of the photoacoustic community is steady, and with the several new directions researchers are exploring, it is inevitable that photoacoustic imaging will one day establish itself as a regular imaging modality in clinical practice.
Affiliation(s)
- Dhiman Das
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, SINGAPORE
- Arunima Sharma
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, SINGAPORE
- Praveenbalaji Rajendran
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, SINGAPORE
- Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, 70 Nanyang Drive, N1.3-B2-11, Singapore, 637457, SINGAPORE
71
Rajendran P, Pramanik M. Deep learning approach to improve tangential resolution in photoacoustic tomography. BIOMEDICAL OPTICS EXPRESS 2020; 11:7311-7323. [PMID: 33408998 PMCID: PMC7747891 DOI: 10.1364/boe.410145] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Revised: 10/29/2020] [Accepted: 11/15/2020] [Indexed: 05/09/2023]
Abstract
In circular scan photoacoustic tomography (PAT), the axial resolution is spatially invariant and is limited by the bandwidth of the detector. However, the tangential resolution is spatially variant and depends on the aperture size of the detector. In particular, the tangential resolution improves with decreasing aperture size. However, using a detector with a smaller aperture reduces the sensitivity of the transducer. Thus, large-aperture detectors are widely preferred in circular scan PAT imaging systems. Although several techniques have been proposed to improve the tangential resolution, they have inherent limitations such as high cost and the need for customized detectors. Herein, we propose a novel deep learning architecture to counter the spatially variant tangential resolution in circular scanning PAT imaging systems. We used a fully dense U-Net based convolutional neural network architecture along with nine residual blocks to improve the tangential resolution of the PAT images. The network was trained on simulated datasets and its performance was verified by experimental in vivo imaging. Results show that the proposed deep learning network improves the tangential resolution eightfold, without compromising the structural similarity or quality of the image.
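Resolution figures like the eight-fold improvement reported here are typically quantified as the full width at half maximum (FWHM) of a point- or line-spread profile. A minimal sketch of that measurement (my own helper, not the paper's code) is:

```python
def fwhm(profile, dx=1.0):
    """Full width at half maximum of a unimodal sampled profile.

    Finds the rising and falling half-maximum crossings by linear
    interpolation between adjacent samples; dx is the sample spacing.
    """
    half = max(profile) / 2.0
    left = right = None
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if left is None and a < half <= b:       # rising crossing
            left = i + (half - a) / (b - a)
        if left is not None and a >= half > b:   # falling crossing
            right = i + (a - half) / (a - b)
    return (right - left) * dx

# Example: triangular profile peaking at 4; half-maximum (2.0) is crossed
# at indices 2 and 6, so the FWHM is 4.0 samples.
width = fwhm([0, 1, 2, 3, 4, 3, 2, 1, 0])
```

Measuring the FWHM of the same point target before and after enhancement, at several tangential positions, is one straightforward way to verify a claimed resolution improvement.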
Affiliation(s)
- Praveenbalaji Rajendran
- Nanyang Technological University, School of Chemical and Biomedical Engineering, 62 Nanyang Drive, Singapore 637459, Singapore
- Manojit Pramanik
- Nanyang Technological University, School of Chemical and Biomedical Engineering, 62 Nanyang Drive, Singapore 637459, Singapore
72
Sharma A, Pramanik M. Convolutional neural network for resolution enhancement and noise reduction in acoustic resolution photoacoustic microscopy. BIOMEDICAL OPTICS EXPRESS 2020; 11:6826-6839. [PMID: 33408964 PMCID: PMC7747888 DOI: 10.1364/boe.411257] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 10/24/2020] [Accepted: 10/24/2020] [Indexed: 05/03/2023]
Abstract
In acoustic resolution photoacoustic microscopy (AR-PAM), a high numerical aperture focused ultrasound transducer (UST) is used for deep-tissue, high-resolution photoacoustic imaging. There is a significant degradation of lateral resolution in the out-of-focus region, and improving out-of-focus resolution without degrading the image quality remains a challenge. In this work, we propose a deep learning-based method to improve the resolution of AR-PAM images, especially at the out-of-focus plane. A modified fully dense U-Net based architecture was trained on simulated AR-PAM images. Applying the trained model to experimental images showed that the variation in resolution is ∼10% across the entire imaging depth (∼4 mm) with the deep learning-based method, compared to ∼180% variation in the original PAM images. The performance of the trained network on in vivo rat vasculature imaging further validated that noise-free, high-resolution images can be obtained using this method.
Affiliation(s)
- Arunima Sharma
- School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, 637459, Singapore
- Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, 637459, Singapore