1. Wang Z, Tao W, Zhao H. Extractor-attention-predictor network for quantitative photoacoustic tomography. Photoacoustics 2024; 38:100609. PMID: 38745884; PMCID: PMC11091525; DOI: 10.1016/j.pacs.2024.100609.
Abstract
Quantitative photoacoustic tomography (qPAT) holds great potential for estimating chromophore concentrations, but the underlying optical inverse problem, which aims to recover absorption coefficient distributions from photoacoustic images, remains challenging. To address this problem, we propose an extractor-attention-predictor network architecture (EAPNet), which employs a contracting-expanding structure to capture contextual information alongside a multilayer perceptron to enhance nonlinear modeling capability. A spatial attention module is introduced to facilitate the utilization of important information. We also use a balanced loss function to prevent network parameter updates from being biased towards specific regions. Our method obtains satisfactory quantitative metrics in simulated and real-world validations. Moreover, it demonstrates superior robustness to target properties and yields reliable results for targets with small size, deep location, or relatively low absorption intensity, indicating broader applicability. Compared to the conventional UNet, EAPNet exhibits improved efficiency, significantly enhancing performance while maintaining similar network size and computational complexity.
Affiliation(s)
- Zeqi Wang
- School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Wei Tao
- School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Hui Zhao
- School of Sensing Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
2. Poimala J, Cox B, Hauptmann A. Compensating unknown speed of sound in learned fast 3D limited-view photoacoustic tomography. Photoacoustics 2024; 37:100597. PMID: 38425677; PMCID: PMC10901832; DOI: 10.1016/j.pacs.2024.100597.
Abstract
Real-time applications in three-dimensional photoacoustic tomography from planar sensors rely on fast reconstruction algorithms that assume the speed of sound (SoS) in the tissue is homogeneous. Moreover, the reconstruction quality depends on the correct choice of the constant SoS. In this study, we discuss the possibility of ameliorating the problem of unknown or heterogeneous SoS distributions by using learned reconstruction methods. This can be done by modelling the uncertainties in the training data; in addition, a correction term can be included in the learned reconstruction method. We investigate the influence of both and show that, while a learned correction component can further improve reconstruction quality, a careful choice of uncertainties in the training data is the primary factor in overcoming an unknown SoS. We support our findings with simulated and in vivo measurements in 3D.
Affiliation(s)
- Jenni Poimala
- Research Unit of Mathematical Sciences, University of Oulu, Finland
- Ben Cox
- Department of Medical Physics and Biomedical Engineering, University College London, UK
- Andreas Hauptmann
- Research Unit of Mathematical Sciences, University of Oulu, Finland
- Department of Computer Science, University College London, UK
3. Lang Y, Jiang Z, Sun L, Xiang L, Ren L. Hybrid-supervised deep learning for domain transfer 3D protoacoustic image reconstruction. Phys Med Biol 2024; 69. PMID: 38471184; PMCID: PMC11076107; DOI: 10.1088/1361-6560/ad3327.
Abstract
Objective. Protoacoustic imaging shows great promise in providing real-time 3D dose verification for proton therapy. However, the limited acquisition angle in protoacoustic imaging induces severe artifacts, which impair its accuracy for dose verification. In this study, we developed a hybrid-supervised deep learning method for protoacoustic imaging to address the limited-view issue. Approach. We proposed a Recon-Enhance two-stage deep learning method. In the Recon stage, a transformer-based network was developed to reconstruct initial pressure maps from raw acoustic signals. The network is trained in a hybrid-supervised approach: it is first trained with supervision from the iteratively reconstructed pressure map and then fine-tuned using transfer learning and self-supervision based on a data fidelity constraint. In the Enhance stage, a 3D U-net is applied to further enhance the image quality with supervision from the ground truth pressure map. The final protoacoustic images are then converted to dose for proton verification. Main results. Evaluated on a dataset of 126 prostate cancer patients, the method achieved an average root mean squared error (RMSE) of 0.0292 and an average structural similarity index measure (SSIM) of 0.9618, outperforming related state-of-the-art methods. Qualitative results also demonstrated that our approach addressed the limited-view issue, reconstructing more detail. Dose verification achieved an average RMSE of 0.018 and an average SSIM of 0.9891. Gamma index evaluation demonstrated a high agreement (94.7% and 95.7% for 1%/3 mm and 1%/5 mm) between the predicted and ground truth dose maps. Notably, the processing time was reduced to 6 s, demonstrating the feasibility of online 3D dose verification for prostate proton therapy. Significance. Our study achieved state-of-the-art performance in the challenging task of direct reconstruction from radiofrequency signals, demonstrating the great promise of PA imaging as a highly efficient and accurate tool for in vivo 3D proton dose verification to minimize the range uncertainties of proton therapy and improve its precision and outcomes.
Affiliation(s)
- Yankun Lang
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Baltimore, MD 21201, United States of America
- Zhuoran Jiang
- Department of Radiation Oncology, Duke University, Durham, NC 27710, United States of America
- Leshan Sun
- Department of Biomedical Engineering and Radiology, University of California, Irvine, Irvine, CA 92617, United States of America
- Liangzhong Xiang
- Department of Biomedical Engineering and Radiology, University of California, Irvine, Irvine, CA 92617, United States of America
- Lei Ren
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Baltimore, MD 21201, United States of America
4. Prakash R, Manwar R, Avanaki K. Evaluation of 10 current image reconstruction algorithms for linear array photoacoustic imaging. J Biophotonics 2024; 17:e202300117. PMID: 38010300; DOI: 10.1002/jbio.202300117.
Abstract
Various reconstruction algorithms have been implemented for linear array photoacoustic imaging systems with the goal of accurately reconstructing the strength of absorbers within the imaged tissue. Because the existing algorithms were introduced by different research groups and evaluated under inconsistent conditions, it is difficult to make a fair comparison between them. In this study, we systematically compared the performance of 10 published image reconstruction algorithms (DAS, UBP, pDAS, DMAS, MV, EIGMV, SLSC, GSC, TR, and FD) using in-vitro phantom data. Evaluations were conducted based on lateral resolution of the reconstructed images, computational time, target detectability, and noise sensitivity. We anticipate the outcome of this study will assist researchers in selecting appropriate algorithms for their linear array PA imaging applications.
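As context for the baseline in this comparison, delay-and-sum (DAS) sums each channel's RF sample at the pixel-to-element time of flight. A minimal sketch for a linear array follows (geometry and variable names are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def das_beamform(rf, fs, c, pitch, z_axis, x_axis):
    """Naive delay-and-sum for one linear-array photoacoustic frame.

    rf     : (n_elements, n_samples) recorded RF data
    fs     : sampling rate [Hz]
    c      : assumed speed of sound [m/s]
    pitch  : element spacing [m]
    z_axis, x_axis : image grid coordinates [m]
    """
    n_elem, n_samp = rf.shape
    # element x-positions, centered on the array
    elem_x = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch
    img = np.zeros((len(z_axis), len(x_axis)))
    for iz, z in enumerate(z_axis):
        for ix, x in enumerate(x_axis):
            # one-way time of flight from pixel to each element
            d = np.sqrt((x - elem_x) ** 2 + z ** 2)
            idx = np.round(d / c * fs).astype(int)
            valid = idx < n_samp
            img[iz, ix] = rf[np.arange(n_elem)[valid], idx[valid]].sum()
    return img
```

The other algorithms in the study (DMAS, MV, SLSC, ...) replace the plain sum with coherence- or covariance-weighted combinations of the same delayed samples.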
Affiliation(s)
- Ravi Prakash
- The Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
- Rayyan Manwar
- The Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
- Kamran Avanaki
- The Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
- Department of Dermatology, University of Illinois at Chicago, Chicago, Illinois, USA
5. Susmelj AK, Lafci B, Ozdemir F, Davoudi N, Deán-Ben XL, Perez-Cruz F, Razansky D. Signal domain adaptation network for limited-view optoacoustic tomography. Med Image Anal 2024; 91:103012. PMID: 37922769; DOI: 10.1016/j.media.2023.103012.
Abstract
Optoacoustic (OA) imaging is based on optical excitation of biological tissues with nanosecond-duration laser pulses and detection of ultrasound (US) waves generated by thermoelastic expansion following light absorption. The image quality and fidelity of OA images critically depend on the extent of tomographic coverage provided by the US detector arrays. However, full tomographic coverage is not always possible due to experimental constraints. One major challenge concerns an efficient integration between OA and pulse-echo US measurements using the same transducer array. A common approach toward this hybridization consists of using standard linear transducer arrays, which readily results in arc-type artifacts and distorted shapes in OA images due to the limited angular coverage. Deep learning methods have been proposed to mitigate limited-view artifacts in OA reconstructions by mapping artifactual to artifact-free (ground truth) images. However, acquisition of ground truth data with full angular coverage is not always possible, particularly when using handheld probes in a clinical setting. Deep learning methods operating in the image domain are then commonly based on networks trained on simulated data. This approach cannot transfer the learned features between the two domains, which results in poor performance on experimental data. Here, we propose a signal domain adaptation network (SDAN) consisting of i) a domain adaptation network to reduce the domain gap between simulated and experimental signals and ii) a sides prediction network to complement the missing signals in limited-view OA datasets acquired from a human forearm by means of a handheld linear transducer array. The proposed method showed improved performance in reducing limited-view artifacts without the need for ground truth signals from full tomographic acquisitions.
Affiliation(s)
- Berkan Lafci
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland
- Firat Ozdemir
- Swiss Data Science Center, ETH Zurich and EPFL, Switzerland
- Neda Davoudi
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland
- Xosé Luís Deán-Ben
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland
- Fernando Perez-Cruz
- Swiss Data Science Center, ETH Zurich and EPFL, Switzerland; Institute for Machine Learning, Department of Computer Science, ETH Zurich, Switzerland
- Daniel Razansky
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland
6. Shen Y, Zhang J, Jiang D, Gao Z, Zheng Y, Gao F, Gao F. S-Wave accelerates optimization-based photoacoustic image reconstruction in vivo. Ultrasound Med Biol 2024; 50:18-27. PMID: 37806923; DOI: 10.1016/j.ultrasmedbio.2023.07.014.
Abstract
OBJECTIVE Photoacoustic imaging has undergone rapid development in recent years. To simulate photoacoustic imaging on a computer, the most popular MATLAB toolbox currently used for the forward projection process is k-Wave. However, k-Wave suffers from significant computation time. Here we propose a straightforward simulation approach based on superposed Wave (s-Wave) to accelerate photoacoustic simulation. METHODS In this study, we consider the initial pressure distribution as a collection of individual pixels. By obtaining standard sensor data from a single pixel beforehand, we can easily manipulate the phase and amplitude of the sensor data for specific pixels using loop and multiplication operators. The effectiveness of this approach is validated through an optimization-based reconstruction algorithm. RESULTS The results reveal significantly reduced computation time compared with k-Wave. Particularly in a sparse 3-D configuration, s-Wave exhibits a speed improvement >2000 times compared with k-Wave. In terms of optimization-based image reconstruction, in vivo imaging results reveal that using the s-Wave method yields images highly similar to those obtained using k-Wave, while reducing the reconstruction time by approximately 50 times. CONCLUSION Proposed here is an accelerated optimization-based algorithm for photoacoustic image reconstruction, using the fast s-Wave forward projection simulation. Our method achieves substantial time savings, particularly in sparse system configurations. Future work will focus on further optimizing the algorithm and expanding its applicability to a broader range of photoacoustic imaging scenarios.
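The superposition idea in this abstract — precompute the sensor response of a single reference pixel, then time-shift and amplitude-scale it for every pixel of the initial pressure map — can be sketched as follows. This is an illustrative sketch only: it assumes linear acoustics, integer-sample delays, and a shift-invariant geometry, and the variable names are hypothetical rather than taken from the paper's code.

```python
import numpy as np

def swave_forward(p0, unit_response, delays):
    """Superpose time-shifted, scaled copies of a precomputed
    single-pixel sensor response to form the full sensor trace.

    p0            : (n_pixels,) initial pressure per pixel
    unit_response : (n_samples,) sensor data for a reference pixel
    delays        : (n_pixels,) integer sample shifts, pixel -> sensor
    """
    n_samp = len(unit_response)
    out = np.zeros(n_samp)
    for amp, d in zip(p0, delays):
        if amp == 0:
            continue  # skip empty pixels (the sparse-configuration speedup)
        # shift the reference response by d samples and scale by the pixel value
        shifted = np.zeros(n_samp)
        shifted[d:] = unit_response[: n_samp - d]
        out += amp * shifted
    return out
```

The speedup over a full wave simulation comes from replacing the PDE solve with these loop-and-multiply operations, which is why sparse initial pressure distributions benefit the most.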
Affiliation(s)
- Yuting Shen
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Jiadong Zhang
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Daohuai Jiang
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Zijian Gao
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Yuwei Zheng
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Feng Gao
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Fei Gao
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China; Shanghai Engineering Research Center of Energy Efficient and Custom AI IC, Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
7. Kim M, Pelivanov I, O'Donnell M. Review of deep learning approaches for interleaved photoacoustic and ultrasound (PAUS) imaging. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:1591-1606. PMID: 37910419; PMCID: PMC10788151; DOI: 10.1109/tuffc.2023.3329119.
Abstract
Photoacoustic (PA) imaging provides optical contrast at relatively large depths within the human body, compared to other optical methods, at ultrasound (US) spatial resolution. By integrating real-time PA and US (PAUS) modalities, PAUS imaging has the potential to become a routine clinical modality, bringing the molecular sensitivity of optics to medical US imaging. For applications where the full capabilities of clinical US scanners must be maintained in PAUS, conventional transducers with limited view and bandwidth must be used. This approach, however, cannot provide high-quality maps of PA sources, especially vascular structures. Deep learning (DL) using data-driven modeling with minimal human design has been very effective in medical imaging, medical data analysis, and disease diagnosis, and has the potential to overcome many of the technical limitations of current PAUS imaging systems. The primary purpose of this article is to summarize the background and current status of DL applications in PAUS imaging. It also looks beyond current approaches to identify remaining challenges and opportunities for robust translation of PAUS technologies to the clinic.
8. Wang R, Zhang Z, Chen R, Yu X, Zhang H, Hu G, Liu Q, Song X. Noise-insensitive defocused signal and resolution enhancement for optical-resolution photoacoustic microscopy via deep learning. J Biophotonics 2023; 16:e202300149. PMID: 37491832; DOI: 10.1002/jbio.202300149.
Abstract
Optical-resolution photoacoustic microscopy suffers from a narrow depth of field and significant deterioration in defocused signal intensity and spatial resolution. Here, a method based on deep learning is proposed to enhance the defocused resolution and signal-to-noise ratio. A virtual optical-resolution photoacoustic microscope based on k-Wave was used to generate deep learning datasets with different noise levels. A fully dense U-Net was trained on randomly distributed sources to improve the quality of photoacoustic images. The results show that the PSNR of the defocused signal was enhanced by more than 1.2 times. An over 2.6-fold enhancement in lateral resolution and an over 3.4-fold enhancement in axial resolution of defocused regions were achieved. Large-volume, high-resolution imaging of blood vessels further verified that the proposed method can effectively overcome the deterioration in signal and spatial resolution caused by the narrow depth of field of optical-resolution photoacoustic microscopy.
Affiliation(s)
- Rui Wang
- School of Information Engineering, Nanchang University, Nanchang, China
- Jiluan Academy, Nanchang University, Nanchang, China
- Zhipeng Zhang
- School of Information Engineering, Nanchang University, Nanchang, China
- Ruiyi Chen
- School of Information Engineering, Nanchang University, Nanchang, China
- Xiaohai Yu
- Jiluan Academy, Nanchang University, Nanchang, China
- Hongyu Zhang
- School of Information Engineering, Nanchang University, Nanchang, China
- Gang Hu
- Jiangxi Medical College, Nanchang University, Nanchang, China
- Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang, China
- Xianlin Song
- School of Information Engineering, Nanchang University, Nanchang, China
9. John S, Hester S, Basij M, Paul A, Xavierselvan M, Mehrmohammadi M, Mallidi S. Niche preclinical and clinical applications of photoacoustic imaging with endogenous contrast. Photoacoustics 2023; 32:100533. PMID: 37636547; PMCID: PMC10448345; DOI: 10.1016/j.pacs.2023.100533.
Abstract
In the past decade, photoacoustic (PA) imaging has gained a great deal of popularity as an emergent diagnostic technology owing to its successful demonstration in both preclinical and clinical arenas by various academic and industrial research groups. Such steady growth of PA imaging can mainly be attributed to its salient features, including being non-ionizing, cost-effective, easily deployable, and having sufficient axial, lateral, and temporal resolutions for resolving various tissue characteristics and assessing therapeutic efficacy. In addition, PA imaging can easily be integrated with ultrasound imaging systems, a combination that confers the ability to co-register and cross-reference various features in the structural, functional, and molecular imaging regimes. PA imaging relies on either an endogenous source of contrast (e.g., hemoglobin) or exogenous sources such as nano-sized tunable optical absorbers or dyes that may boost imaging contrast beyond that provided by the endogenous sources. In this review, we discuss the applications of PA imaging with endogenous contrast as they pertain to clinically relevant niches, including tissue characterization, cancer diagnostics/therapies (termed theranostics), cardiovascular applications, and surgical applications. We believe that PA imaging's role as a facile indicator of several disease-relevant states will continue to expand and evolve as it is adopted by an increasing number of research laboratories and clinics worldwide.
Affiliation(s)
- Samuel John
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
- Scott Hester
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Maryam Basij
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
- Avijit Paul
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Mohammad Mehrmohammadi
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Wilmot Cancer Institute, Rochester, NY, USA
- Srivalleesha Mallidi
- Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Wellman Center for Photomedicine, Massachusetts General Hospital, Boston, MA 02114, USA
10. Song E, Zhan B, Liu H, Cetinkaya C, Hung CC. NMNet: Learning multi-level semantic information from scale extension domain for improved medical image segmentation. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104651.
11. Vousten V, Moradi H, Wu Z, Boctor EM, Salcudean SE. Laser diode photoacoustic point source detection: machine learning-based denoising and reconstruction. Opt Express 2023; 31:13895-13910. PMID: 37157265; DOI: 10.1364/oe.483892.
Abstract
A recent development in photoacoustic (PA) imaging has been the use of compact, portable and low-cost laser diodes (LDs), but LD-based PA imaging suffers from low signal intensity as recorded by conventional transducers. A common method to improve signal strength is temporal averaging, which reduces frame rate and increases laser exposure to patients. To tackle this problem, we propose a deep learning method that denoises point source PA radio-frequency (RF) data before beamforming using very few frames, even a single one. We also present a deep learning method to automatically reconstruct point sources from noisy pre-beamformed data. Finally, we employ a strategy of combined denoising and reconstruction, which can supplement the reconstruction algorithm for very low signal-to-noise ratio inputs.
12. Zhang Z, Jin H, Zhang W, Lu W, Zheng Z, Sharma A, Pramanik M, Zheng Y. Adaptive enhancement of acoustic resolution photoacoustic microscopy imaging via deep CNN prior. Photoacoustics 2023; 30:100484. PMID: 37095888; PMCID: PMC10121479; DOI: 10.1016/j.pacs.2023.100484.
Abstract
Acoustic resolution photoacoustic microscopy (AR-PAM) is a promising medical imaging modality that can be employed for deep bio-tissue imaging. However, its relatively low imaging resolution has greatly hindered its wide application. Previous model-based or learning-based PAM enhancement algorithms either require the design of complex handcrafted priors to achieve good performance or lack the interpretability and flexibility to adapt to different degradation models. Moreover, the degradation model of AR-PAM imaging depends on both the imaging depth and the center frequency of the ultrasound transducer, which vary across imaging conditions and cannot be handled by a single neural network model. To address this limitation, an algorithm integrating learning-based and model-based methods is proposed here, so that a single framework can deal with various distortion functions adaptively. The vasculature image statistics are implicitly learned by a deep convolutional neural network, which serves as a plug-and-play (PnP) prior. The trained network can be directly plugged into the model-based optimization framework for iterative AR-PAM image enhancement, fitting different degradation mechanisms. Based on the physical model, the point spread function (PSF) kernels for various AR-PAM imaging situations are derived and used for the enhancement of simulated and in vivo AR-PAM images, which collectively prove the effectiveness of the proposed method. Quantitatively, the PSNR and SSIM values achieve the best performance with the proposed algorithm in all three simulation scenarios; the SNR and CNR values also rose significantly, from 6.34 and 5.79 to 35.37 and 29.66, respectively, in an in vivo test.
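The plug-and-play scheme described in this abstract alternates a gradient step on the data-fidelity term with a denoising step, where the trained CNN plays the role of the denoiser. A minimal sketch of that iteration follows (function names are hypothetical, and a generic denoiser stands in for the learned prior; this is not the paper's implementation):

```python
import numpy as np

def pnp_enhance(y, forward, adjoint, denoise, x0, step=1.0, iters=50):
    """Plug-and-play iteration for image enhancement.

    y       : observed (degraded) image, flattened or 2D
    forward : callable applying the degradation model A (e.g., PSF blur)
    adjoint : callable applying A^T
    denoise : callable standing in for the learned CNN prior
    x0      : initial estimate
    """
    x = x0.copy()
    for _ in range(iters):
        # gradient step on the data term 0.5 * ||A x - y||^2
        grad = adjoint(forward(x) - y)
        # denoising step replaces the proximal operator of the prior
        x = denoise(x - step * grad)
    return x
```

Swapping `denoise` for a different network adapts the prior without retraining the whole pipeline, which is the flexibility the abstract emphasizes: the degradation model (here `forward`/`adjoint`) can change per imaging depth and transducer frequency while the prior stays fixed.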
Affiliation(s)
- Zhengyuan Zhang
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Haoran Jin
- Zhejiang University, College of Mechanical Engineering, The State Key Laboratory of Fluid Power and Mechatronic Systems, Hangzhou 310027, China
- Wenwen Zhang
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Wenhao Lu
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Zesheng Zheng
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Arunima Sharma
- Johns Hopkins University, Electrical and Computer Engineering, Baltimore, MD 21218, USA
- Manojit Pramanik
- Iowa State University, Department of Electrical and Computer Engineering, Ames, Iowa, USA
- Yuanjin Zheng (corresponding author)
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
13. Cheng Z, Wang D, Zhang Z, Wang Z, Yang F, Zeng L, Ji X. Photoacoustic maximum amplitude projection microscopy by ultra-low data sampling. Opt Lett 2023; 48:1718-1721. PMID: 37221749; DOI: 10.1364/ol.485628.
Abstract
Photoacoustic microscopy (PAM) has attracted increasing research interest in the biomedical field due to its unique merit of combining light and sound. In general, the bandwidth of a photoacoustic signal reaches up to tens or even hundreds of MHz, which requires a high-performance acquisition card to meet strict requirements on sampling precision and control. For most depth-insensitive scenes, capturing photoacoustic maximum amplitude projection (MAP) images is therefore complex and costly. Herein, we propose a simple and low-cost MAP-PAM system based on a custom-made peak-holding circuit that captures the extremum values with ultra-low-rate data sampling. The dynamic range of the input signal is 0.01-2.5 V, and the -6-dB bandwidth of the input signal can be up to 45 MHz. Through in vitro and in vivo experiments, we have verified that the system has the same imaging ability as conventional PAM. Owing to its compact size and ultra-low price (approximately $18), it offers a new performance paradigm for PAM and opens up a new way toward optimal photoacoustic sensing and imaging devices.
14. Gu Y, Sun Y, Wang X, Li H, Qiu J, Lu W. Application of photoacoustic computed tomography in biomedical imaging: A literature review. Bioeng Transl Med 2023; 8:e10419. PMID: 36925681; PMCID: PMC10013779; DOI: 10.1002/btm2.10419.
Abstract
Photoacoustic computed tomography (PACT) is a hybrid imaging modality that combines optical excitation and acoustic detection techniques. It obtains high-resolution deep-tissue images based on the deep penetration of light, the anisotropy of light absorption in objects, and the photoacoustic effect. Hence, PACT shows great potential in biomedical sample imaging. Recently, due to its advantages of high sensitivity to optical absorption and wide scalability of spatial resolution with the desired imaging depth, PACT has received increasing attention in preclinical and clinical practice. To date, there has been a proliferation of PACT systems designed for specific biomedical imaging applications, from small animals to human organs, from ex vivo to in vivo real-time imaging, and from simple structural imaging to functional and molecular imaging with external contrast agents. Therefore, it is of great importance to summarize the previous applications of PACT systems in biomedical imaging and clinical practice. In this review, we searched for studies related to PACT imaging of biomedical tissues and samples over the past two decades; divided the studies into two categories, PACT imaging of preclinical animals and PACT imaging of human organs and body parts; and discussed the significance of the studies. Finally, we pointed out future directions of PACT in biomedical applications. With the development of exogenous contrast agents and advances in imaging techniques, PACT will enable biomedical imaging from organs to whole bodies, from superficial vasculature to internal organs, and from anatomy to function, and will play an increasingly important role in biomedical research and clinical practice.
Affiliation(s)
- Yanru Gu
- Department of Radiology, The Second Affiliated Hospital of Shandong First Medical University, Taian, China
- Department of Radiology, Shandong First Medical University and Shandong Academy of Medical Sciences, Taian, China
- Yuanyuan Sun
- Department of Radiology, Shandong First Medical University and Shandong Academy of Medical Sciences, Taian, China
- Xiao Wang
- College of Ocean Science and Engineering, Shandong University of Science and Technology, Qingdao, China
- Hongyu Li
- College of Ocean Science and Engineering, Shandong University of Science and Technology, Qingdao, China
- Jianfeng Qiu
- Department of Radiology, Shandong First Medical University and Shandong Academy of Medical Sciences, Taian, China
- Weizhao Lu
- Department of Radiology, The Second Affiliated Hospital of Shandong First Medical University, Taian, China
- Department of Radiology, Shandong First Medical University and Shandong Academy of Medical Sciences, Taian, China
15.
Nizam NI, Ochoa M, Smith JT, Intes X. Deep learning-based fusion of widefield diffuse optical tomography and micro-CT structural priors for accurate 3D reconstructions. BIOMEDICAL OPTICS EXPRESS 2023; 14:1041-1053. [PMID: 36950248 PMCID: PMC10026582 DOI: 10.1364/boe.480091] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/04/2022] [Revised: 01/10/2023] [Accepted: 01/24/2023] [Indexed: 06/17/2023]
Abstract
Widefield illumination and detection strategies leveraging structured light have enabled fast and robust probing of tissue properties over large surface areas and volumes. However, when applied to diffuse optical tomography (DOT) applications, they still require a time-consuming and expert-centric solving of an ill-posed inverse problem. Deep learning (DL) models have been recently proposed to facilitate this challenging step. Herein, we expand on a previously reported deep neural network (DNN) -based architecture (modified AUTOMAP - ModAM) for accurate and fast reconstructions of the absorption coefficient in 3D DOT based on a structured light illumination and detection scheme. Furthermore, we evaluate the improved performances when incorporating a micro-CT structural prior in the DNN-based workflow, named Z-AUTOMAP. This Z-AUTOMAP significantly improves the widefield imaging process's spatial resolution, especially in the transverse direction. The reported DL-based strategies are validated both in silico and in experimental phantom studies using spectral micro-CT priors. Overall, this is the first successful demonstration of micro-CT and DOT fusion using deep learning, greatly enhancing the prospect of rapid data-integration strategies, often demanded in challenging pre-clinical scenarios.
16.
Lan H, Yang C, Gao F. A jointed feature fusion framework for photoacoustic image reconstruction. PHOTOACOUSTICS 2023; 29:100442. [PMID: 36589516 PMCID: PMC9798177 DOI: 10.1016/j.pacs.2022.100442] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Accepted: 12/19/2022] [Indexed: 06/17/2023]
Abstract
The standard reconstruction of photoacoustic computed tomography (PACT) images can introduce artifacts due to interference or an ill-posed setup. Recently, deep learning has been used to reconstruct PA images under ill-posed conditions. In this paper, we propose a jointed feature fusion framework (JEFF-Net) based on deep learning to reconstruct the PA image using limited-view data. The cross-domain features from limited-view position-wise data and the reconstructed image are fused by a backtracked supervision. Quarter-view position-wise data (32 channels) is fed into the model, which outputs the other three-quarters-view data (96 channels). Moreover, two novel losses are designed to restrain the artifacts by sufficiently manipulating superposed data. The experimental results demonstrate superior performance, and quantitative evaluations show that our proposed method outperformed the ground truth in some metrics, with improvements of 135% (SSIM, simulation) and 40% (gCNR, in vivo).
Affiliation(s)
- Hengrong Lan
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Changchun Yang
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Fei Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Shanghai Clinical Research and Trial Center, Shanghai 201210, China
17.
Zhang Z, Jin H, Zheng Z, Sharma A, Wang L, Pramanik M, Zheng Y. Deep and Domain Transfer Learning Aided Photoacoustic Microscopy: Acoustic Resolution to Optical Resolution. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3636-3648. [PMID: 35849667 DOI: 10.1109/tmi.2022.3192072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Acoustic resolution photoacoustic microscopy (AR-PAM) can achieve deeper imaging depth in biological tissue, at the sacrifice of imaging resolution compared with optical resolution photoacoustic microscopy (OR-PAM). Here we aim to enhance AR-PAM image quality towards that of OR-PAM images, which specifically includes the enhancement of imaging resolution, restoration of micro-vasculatures, and reduction of artifacts. To address this issue, a network (MultiResU-Net) is first trained as a generative model with simulated AR-OR image pairs, which are synthesized with a physical transducer model. Moderate enhancement results can already be obtained when applying this model to in vivo AR imaging data. Nevertheless, the perceptual quality is unsatisfactory due to domain shift. A domain transfer learning technique under the generative adversarial network (GAN) framework is therefore proposed to drive the enhanced image's manifold towards that of real OR images. In this way, a perceptually convincing AR-to-OR enhancement result is obtained, which is also supported by quantitative analysis. Peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values are significantly increased from 14.74 dB to 19.01 dB and from 0.1974 to 0.2937, respectively, validating the improvement in reconstruction correctness and overall perceptual quality. The proposed algorithm has also been validated across different imaging depths, with experiments conducted in both shallow and deep tissue. The above AR-to-OR domain transfer learning with GAN (AODTL-GAN) framework has enabled the enhancement target with a limited amount of matched in vivo AR-OR imaging data.
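The PSNR figures quoted above (14.74 dB to 19.01 dB) follow the standard definition of the metric; a minimal sketch (the `psnr` helper and the toy images below are illustrative, not taken from the paper):

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images scaled to [0, max_val]."""
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a uniform error of 0.1 over the whole image gives MSE = 0.01,
# hence PSNR = 10 * log10(1 / 0.01) = 20 dB.
ref = np.zeros((64, 64))
est = np.full((64, 64), 0.1)
print(round(psnr(ref, est), 2))  # → 20.0
```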
18.
Fang Z, Gao F, Jin H, Liu S, Wang W, Zhang R, Zheng Z, Xiao X, Tang K, Lou L, Tang KT, Chen J, Zheng Y. A Review of Emerging Electromagnetic-Acoustic Sensing Techniques for Healthcare Monitoring. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2022; 16:1075-1094. [PMID: 36459601 DOI: 10.1109/tbcas.2022.3226290] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Conventional electromagnetic (EM) sensing techniques such as radar and LiDAR are widely used for remote sensing, vehicle applications, weather monitoring, and clinical monitoring. Acoustic techniques such as sonar and ultrasound sensors are also used for consumer applications, such as ranging and in vivo medical/healthcare applications. Realizing continuous healthcare monitoring in hospitals and/or homes has been of long-term interest to doctors and clinical practitioners. Real-time physiological and biopotential signals serve as important health indicators for predicting and preventing serious illness. Emerging electromagnetic-acoustic (EMA) sensing techniques synergistically combine the merits of EM sensing with acoustic imaging to achieve comprehensive detection of physiological and biopotential signals. Further, EMA enables complementary fusion sensing for challenging healthcare settings, such as real-world long-term monitoring of treatment effects at home or in remote environments. This article reviews various examples of EMA sensing instruments, including implementation, performance, and application, from the perspective of circuits to systems, and discusses their novel and significant applications to healthcare. Three types of EMA sensors are presented: (1) chip-based radar sensors for health status monitoring, (2) thermo-acoustic sensing instruments for biomedical applications, and (3) photoacoustic (PA) sensing and imaging systems, whose dedicated reconstruction algorithms are reviewed across time-domain, frequency-domain, time-reversal, and model-based solutions. The future of EMA techniques for continuous healthcare with enhanced accuracy supported by artificial intelligence (AI) is also presented.
19.
Arooj S, Rehman SU, Imran A, Almuhaimeed A, Alzahrani AK, Alzahrani A. A Deep Convolutional Neural Network for the Early Detection of Heart Disease. Biomedicines 2022; 10:2796. [PMID: 36359317 PMCID: PMC9687844 DOI: 10.3390/biomedicines10112796] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Revised: 10/26/2022] [Accepted: 10/29/2022] [Indexed: 08/08/2023] Open
Abstract
Heart disease is one of the key contributors to human death: according to the WHO, 17.9 million people die each year from it. With the various technologies and techniques developed for heart-disease detection, the use of image classification can further improve the results. Image classification is one of the most basic tasks in pattern identification and computer vision, and refers to assigning one or more labels to images. Pattern identification from images has become easier through machine learning, and deep learning has made it more precise than traditional image-classification methods. This study uses a deep-learning approach based on image classification for heart-disease detection. A deep convolutional neural network (DCNN) is currently the most popular classification technique for image recognition. The proposed model is evaluated on the public UCI heart-disease dataset comprising 1050 patients and 14 attributes. By gathering a set of directly obtainable features from the heart-disease dataset, we used this feature vector as input to a DCNN to discriminate whether an instance belongs to the healthy or cardiac-disease class. To assess the performance of the proposed method, different performance metrics, namely accuracy, precision, recall, and the F1 measure, were employed, and the model achieved a validation accuracy of 91.7%. The experimental results indicate the effectiveness of the proposed approach in a real-world environment.
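The accuracy, precision, recall, and F1 figures reported above follow the usual binary-classification definitions; a small self-contained sketch (the labels below are made up for illustration, not drawn from the UCI dataset):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (1 = disease)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # 3 TP, 3 TN, 1 FP, 1 FN below
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(binary_metrics(y_true, y_pred))  # → (0.75, 0.75, 0.75, 0.75)
```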
Affiliation(s)
- Sadia Arooj
- University Institute of Information Technology, PMAS-Arid Agriculture University, Rawalpindi 46000, Pakistan
- Saif ur Rehman
- University Institute of Information Technology, PMAS-Arid Agriculture University, Rawalpindi 46000, Pakistan
- Azhar Imran
- Department of Creative Technologies, Faculty of Computing & Artificial Intelligence, Air University, Islamabad 42000, Pakistan
- Abdullah Almuhaimeed
- The National Centre for Genomics Technologies and Bioinformatics, King Abdulaziz City for Science and Technology, Riyadh 11442, Saudi Arabia
- A. Khuzaim Alzahrani
- Faculty of Applied Medical Sciences, Northern Border University, Arar 91431, Saudi Arabia
- Abdulkareem Alzahrani
- Faculty of Computer Science and Information Technology, Al Baha University, Al Baha 65779, Saudi Arabia
20.
Wang T, He M, Shen K, Liu W, Tian C. Learned regularization for image reconstruction in sparse-view photoacoustic tomography. BIOMEDICAL OPTICS EXPRESS 2022; 13:5721-5737. [PMID: 36733736 PMCID: PMC9872879 DOI: 10.1364/boe.469460] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Revised: 09/07/2022] [Accepted: 10/01/2022] [Indexed: 06/18/2023]
Abstract
Constrained data acquisitions, such as sparse view measurements, are sometimes used in photoacoustic computed tomography (PACT) to accelerate data acquisition. However, it is challenging to reconstruct high-quality images under such scenarios. Iterative image reconstruction with regularization is a typical choice to solve this problem but it suffers from image artifacts. In this paper, we present a learned regularization method to suppress image artifacts in model-based iterative reconstruction in sparse view PACT. A lightweight dual-path network is designed to learn regularization features from both the data and the image domains. The network is trained and tested on both simulation and in vivo datasets and compared with other methods such as Tikhonov regularization, total variation regularization, and a U-Net based post-processing approach. Results show that although the learned regularization network possesses a size of only 0.15% of a U-Net, it outperforms other methods and converges after as few as five iterations, which takes less than one-third of the time of conventional methods. Moreover, the proposed reconstruction method incorporates the physical model of photoacoustic imaging and explores structural information from training datasets. The integration of deep learning with a physical model can potentially achieve improved imaging performance in practice.
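The model-based iteration described above can be sketched generically: each step combines a data-fidelity gradient with a regularization gradient, and the paper's contribution is to learn the latter with a network. A toy sketch with a Tikhonov term standing in for the learned regularizer (the forward matrix, step size, and all values are illustrative, not the paper's setup):

```python
import numpy as np

# Generic sketch of model-based iterative reconstruction with a plug-in
# regularizer. A simple Tikhonov gradient (lam * x) stands in for the
# learned regularization network described in the paper.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))     # underdetermined forward model (sparse view)
x_true = np.zeros(50)
x_true[[5, 20, 35]] = 1.0             # three point absorbers
y = A @ x_true                        # simulated measurements

x = np.zeros(50)
lam, step = 0.1, 1e-3
for _ in range(2000):
    grad_data = A.T @ (A @ x - y)     # data-fidelity gradient
    grad_reg = lam * x                # regularizer gradient (a trained net in the paper)
    x -= step * (grad_data + grad_reg)

print(np.linalg.norm(A @ x - y) < np.linalg.norm(y))  # → True (residual shrank)
```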
Affiliation(s)
- Tong Wang
- School of Physical Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Menghui He
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui 230088, China
- Kang Shen
- School of Engineering Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Wen Liu
- School of Physical Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Chao Tian
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui 230088, China
- School of Engineering Science, University of Science and Technology of China, Hefei, Anhui 230026, China
21.
Dimaridis I, Sridharan P, Ntziachristos V, Karlas A, Hadjileontiadis L. Image Quality Improvement Techniques and Assessment Adequacy in Clinical Optoacoustic Imaging: A Systematic Review. BIOSENSORS 2022; 12:901. [PMID: 36291038 PMCID: PMC9599915 DOI: 10.3390/bios12100901] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 09/09/2022] [Accepted: 09/17/2022] [Indexed: 06/16/2023]
Abstract
Optoacoustic imaging relies on the detection of optically induced acoustic waves to offer new possibilities in morphological and functional imaging. As the modality matures towards clinical application, research efforts aim to address multifactorial limitations that negatively impact the resulting image quality. In an endeavor to obtain a clear view on the limitations and their effects, as well as the status of this progressive refinement process, we conduct an extensive search for optoacoustic image quality improvement approaches that have been evaluated with humans in vivo, thus focusing on clinically relevant outcomes. We query six databases (PubMed, Scopus, Web of Science, IEEE Xplore, ACM Digital Library, and Google Scholar) for articles published from 1 January 2010 to 31 October 2021, and identify 45 relevant research works through a systematic screening process. We review the identified approaches, describing their primary objectives, targeted limitations, and key technical implementation details. Moreover, considering comprehensive and objective quality assessment as an essential prerequisite for the adoption of such approaches in clinical practice, we subject 36 of the 45 papers to a further in-depth analysis of the reported quality evaluation procedures, and elicit a set of criteria with the intent to capture key evaluation aspects. Through a comparative criteria-wise rating process, we seek research efforts that exhibit excellence in quality assessment of their proposed methods, and discuss features that distinguish them from works with similar objectives. Additionally, informed by the rating results, we highlight areas with improvement potential, and extract recommendations for designing quality assessment pipelines capable of providing rich evidence.
Affiliation(s)
- Ioannis Dimaridis
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Patmaa Sridharan
- Chair of Biological Imaging, Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München, 85764 Neuherberg, Germany
- Vasilis Ntziachristos
- Chair of Biological Imaging, Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München, 85764 Neuherberg, Germany
- Munich Institute of Robotics and Machine Intelligence (MIRMI), Technical University of Munich, 80992 Munich, Germany
- German Centre for Cardiovascular Research (DZHK), partner site Munich Heart Alliance, 80636 Munich, Germany
- Angelos Karlas
- Chair of Biological Imaging, Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München, 85764 Neuherberg, Germany
- German Centre for Cardiovascular Research (DZHK), partner site Munich Heart Alliance, 80636 Munich, Germany
- Clinic for Vascular and Endovascular Surgery, Klinikum rechts der Isar, 81675 Munich, Germany
- Leontios Hadjileontiadis
- Department of Biomedical Engineering, Khalifa University, Abu Dhabi P.O. Box 127788, United Arab Emirates
- Healthcare Engineering Innovation Center (HEIC), Khalifa University, Abu Dhabi P.O. Box 127788, United Arab Emirates
- Signal Processing and Biomedical Technology Unit, Telecommunications Laboratory, Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
22.
Madasamy A, Gujrati V, Ntziachristos V, Prakash J. Deep learning methods hold promise for light fluence compensation in three-dimensional optoacoustic imaging. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:106004. [PMID: 36209354 PMCID: PMC9547608 DOI: 10.1117/1.jbo.27.10.106004] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Accepted: 08/30/2022] [Indexed: 06/16/2023]
Abstract
SIGNIFICANCE Quantitative optoacoustic imaging (QOAI) continues to be a challenge due to the influence of nonlinear optical fluence distribution, which distorts the optoacoustic image representation. Nonlinear optical fluence correction in OA imaging is highly ill-posed, leading to inaccurate recovery of optical absorption maps. This work aims to recover the optical absorption maps using a deep learning (DL) approach that corrects for the fluence effect. AIM Different DL models were compared and investigated to enable optical absorption coefficient recovery at a particular wavelength in a nonhomogeneous foreground and background medium. APPROACH Data-driven models were trained with two-dimensional (2D) blood vessel and three-dimensional (3D) numerical breast phantoms with highly heterogeneous/realistic structures to correct for the nonlinear optical fluence distribution. The trained DL models, namely U-Net, Fully Dense (FD) U-Net, Y-Net, FD Y-Net, Deep Residual U-Net (Deep ResU-Net), and a generative adversarial network (GAN), were tested to evaluate the performance of optical absorption coefficient recovery (or fluence compensation) with in-silico and in-vivo datasets. RESULTS The results indicated that FD U-Net-based deconvolution improves by about 10% over reconstructed optoacoustic images in terms of peak signal-to-noise ratio. Further, it was observed that DL models can indeed highlight deep-seated structures with higher contrast due to fluence compensation. Importantly, the DL models were found to be about 17 times faster than solving the diffusion equation for fluence correction. CONCLUSIONS The DL methods were able to compensate for the nonlinear optical fluence distribution more effectively and improve optoacoustic image quality.
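The problem being corrected here is compact to state: the reconstructed initial pressure is p0 = Γ·μa·Φ, so the image is the absorption map μa modulated by the unknown, depth-dependent fluence Φ. A toy 1D sketch of the ideal case in which Φ is known exactly (the exponential fluence model and all values are illustrative; the DL models above replace this division when Φ is unknown):

```python
import numpy as np

# Toy 1D illustration of fluence compensation: the initial pressure is
# p0 = Gamma * mu_a * fluence, so with a known fluence the absorption map
# is recovered by pointwise division. All values here are made up.
depth = np.linspace(0.0, 2.0, 100)                        # depth (cm)
mu_a = np.where((depth > 1.0) & (depth < 1.2), 0.5, 0.1)  # absorption map (cm^-1)
fluence = np.exp(-depth)                                  # fluence decaying with depth
gamma = 1.0                                               # Grueneisen parameter

p0 = gamma * mu_a * fluence              # what an ideal PA reconstruction yields
mu_a_recovered = p0 / (gamma * fluence)  # "perfect" fluence correction

print(np.allclose(mu_a_recovered, mu_a))  # → True
```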
Affiliation(s)
- Arumugaraj Madasamy
- Indian Institute of Science, Department of Instrumentation and Applied Physics, Bengaluru, Karnataka, India
- Vipul Gujrati
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München (GmbH), Neuherberg, Germany
- Technical University of Munich, School of Medicine, Chair of Biological Imaging, Munich, Germany
- Vasilis Ntziachristos
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München (GmbH), Neuherberg, Germany
- Technical University of Munich, School of Medicine, Chair of Biological Imaging, Munich, Germany
- Technical University of Munich, Munich Institute of Robotics and Machine Intelligence (MIRMI), Munich, Germany
- Jaya Prakash
- Indian Institute of Science, Department of Instrumentation and Applied Physics, Bengaluru, Karnataka, India
23.
Deep-Learning-Based Algorithm for the Removal of Electromagnetic Interference Noise in Photoacoustic Endoscopic Image Processing. SENSORS 2022; 22:s22103961. [PMID: 35632370 PMCID: PMC9147354 DOI: 10.3390/s22103961] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Revised: 05/18/2022] [Accepted: 05/21/2022] [Indexed: 12/10/2022]
Abstract
Despite all the expectations for photoacoustic endoscopy (PAE), there are still several technical issues that must be resolved before the technique can be successfully translated into clinics. Among these, electromagnetic interference (EMI) noise, in addition to the limited signal-to-noise ratio (SNR), has hindered the rapid development of related technologies. Unlike endoscopic ultrasound, in which the SNR can be increased by simply applying a higher pulsing voltage, there is a fundamental limitation in improving the SNR of PAE signals because they are mostly determined by the applied optical pulse energy, which must remain within safety limits. Moreover, a typical PAE hardware configuration requires a wide separation between the ultrasonic sensor and the amplifier, meaning that it is not easy to build an ideal PAE system unaffected by EMI noise. With the intention of expediting the progress of related research, in this study we investigated the feasibility of deep-learning-based EMI noise removal in PAE image processing. In particular, we selected four fully convolutional neural network architectures, U-Net, SegNet, FCN-16s, and FCN-8s, and observed that a modified U-Net architecture outperformed the others in EMI noise removal. Classical filter methods were also compared to confirm the superiority of the deep-learning-based approach. Using the modified U-Net architecture, we were able to produce a denoised 3D vasculature map that could even depict the mesh-like capillary networks distributed in the wall of a rat colorectum. As the development of low-cost laser-diode- or LED-based photoacoustic tomography (PAT) systems is now emerging as an important topic in PAT, we expect that the presented AI strategy for EMI noise removal could be broadly applicable to many areas of PAT in which hardware-based prevention is limited and EMI noise therefore appears more prominently due to poor SNR.
24.
Althobaiti MM, Ashour AA, Alhindi NA, Althobaiti A, Mansour RF, Gupta D, Khanna A. Deep Transfer Learning-Based Breast Cancer Detection and Classification Model Using Photoacoustic Multimodal Images. BIOMED RESEARCH INTERNATIONAL 2022; 2022:3714422. [PMID: 35572730 PMCID: PMC9098312 DOI: 10.1155/2022/3714422] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/13/2022] [Revised: 03/29/2022] [Accepted: 04/07/2022] [Indexed: 01/28/2023]
Abstract
The rapid development of technologies in biomedical research has enriched and broadened the range of medical equipment. Magnetic resonance imaging, ultrasonic imaging, and optical imaging have been combined by diverse research communities to design multimodal systems, which are essential for biomedical applications. One important tool is photoacoustic multimodal imaging (PAMI), which combines the concepts of optical and ultrasonic systems. At the same time, early detection of breast cancer is essential to reduce mortality. Recent advances in deep learning (DL) models enable the detection and classification of breast cancer from biomedical images. This article introduces a novel social engineering optimization with deep transfer learning-based breast cancer detection and classification (SEODTL-BDC) model using PAMI. The intention of the SEODTL-BDC technique is to detect and categorize the presence of breast cancer using ultrasound images. Primarily, bilateral filtering (BF) is applied as an image preprocessing technique to remove noise. Besides, a lightweight LEDNet model is employed for the segmentation of biomedical images. In addition, a residual network (ResNet-18) model is utilized as a feature extractor. Finally, SEO with a recurrent neural network (RNN) model, named the SEO-RNN classifier, is applied to assign proper class labels to the biomedical images. The performance validation of the SEODTL-BDC technique is carried out using a benchmark dataset, and the experimental outcomes point out the supremacy of the SEODTL-BDC approach over existing methods.
Affiliation(s)
- Maha M. Althobaiti
- Department of Computer Science, College of Computing and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Amal Adnan Ashour
- Department of Oral & Maxillofacial Surgery and Diagnostic Sciences, Faculty of Dentistry, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Nada A. Alhindi
- Oral Diagnostic Sciences Department, Faculty of Dentistry, King Abdulaziz University, Jeddah, Saudi Arabia
- Asim Althobaiti
- Regional Laboratory and Blood Bank, Taif Health, Taif, Saudi Arabia
- Romany F. Mansour
- Department of Mathematics, Faculty of Science, New Valley University, El-Kharga 72511, Egypt
- Deepak Gupta
- Department of Computer Science & Engineering, Maharaja Agrasen Institute of Technology, Delhi, India
- Ashish Khanna
- Department of Computer Science & Engineering, Maharaja Agrasen Institute of Technology, Delhi, India
25.
Gröhl J, Dreher KK, Schellenberg M, Rix T, Holzwarth N, Vieten P, Ayala L, Bohndiek SE, Seitel A, Maier-Hein L. SIMPA: an open-source toolkit for simulation and image processing for photonics and acoustics. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:JBO-210395SSR. [PMID: 35380031 PMCID: PMC8978263 DOI: 10.1117/1.jbo.27.8.083010] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/20/2021] [Accepted: 02/28/2022] [Indexed: 05/09/2023]
Abstract
SIGNIFICANCE Optical and acoustic imaging techniques enable noninvasive visualisation of structural and functional properties of tissue. The quantification of measurements, however, remains challenging due to the inverse problems that must be solved. Emerging data-driven approaches are promising, but they rely heavily on the presence of high-quality simulations across a range of wavelengths due to the lack of ground truth knowledge of tissue acoustical and optical properties in realistic settings. AIM To facilitate this process, we present the open-source simulation and image processing for photonics and acoustics (SIMPA) Python toolkit. SIMPA is being developed according to modern software design standards. APPROACH SIMPA enables the use of computational forward models, data processing algorithms, and digital device twins to simulate realistic images within a single pipeline. SIMPA's module implementations can be seamlessly exchanged as SIMPA abstracts from the concrete implementation of each forward model and builds the simulation pipeline in a modular fashion. Furthermore, SIMPA provides comprehensive libraries of biological structures, such as vessels, as well as optical and acoustic properties and other functionalities for the generation of realistic tissue models. RESULTS To showcase the capabilities of SIMPA, we show examples in the context of photoacoustic imaging: the diversity of creatable tissue models, the customisability of a simulation pipeline, and the degree of realism of the simulations. CONCLUSIONS SIMPA is an open-source toolkit that can be used to simulate optical and acoustic imaging modalities. The code is available at: https://github.com/IMSY-DKFZ/simpa, and all of the examples and experiments in this paper can be reproduced using the code available at: https://github.com/IMSY-DKFZ/simpa_paper_experiments.
Affiliation(s)
- Janek Gröhl
- German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
- Kris K. Dreher
- German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
- Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Melanie Schellenberg
- German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
- Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
- HIDSS4Health - Helmholtz Information and Data Science School for Health, Heidelberg, Germany
- Tom Rix
- German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
- Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
- Niklas Holzwarth
- German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
- Patricia Vieten
- German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
- Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Leonardo Ayala
- German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Sarah E. Bohndiek
- University of Cambridge, Cancer Research UK Cambridge Institute, Robinson Way, Cambridge, United Kingdom
- University of Cambridge, Department of Physics, Cambridge, United Kingdom
- Alexander Seitel
- German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
- Lena Maier-Hein
- German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
- Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
Collapse
|
26
|
Feng J, Zhang W, Li Z, Jia K, Jiang S, Dehghani H, Pogue BW, Paulsen KD. Deep-learning based image reconstruction for MRI-guided near-infrared spectral tomography. OPTICA 2022; 9:264-267. [PMID: 35340570 PMCID: PMC8952193 DOI: 10.1364/optica.446576] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/02/2023]
Abstract
Non-invasive near-infrared spectral tomography (NIRST) can incorporate the structural information provided by simultaneous magnetic resonance imaging (MRI), which has significantly improved the images obtained of tissue function. However, MRI guidance in NIRST has been time consuming because of the need for tissue-type segmentation and forward modeling of diffuse light propagation. To overcome these problems, a deep-learning-based reconstruction algorithm for MRI-guided NIRST is proposed and validated with simulation and real patient imaging data for breast cancer characterization. In this approach, diffuse optical signals and MRI images were both used as input to the neural network, which simultaneously recovered the concentrations of oxy-hemoglobin, deoxy-hemoglobin, and water via end-to-end training on 20,000 sets of computer-generated simulation phantoms. The simulation phantom studies showed that the quality of the reconstructed images was improved compared to that obtained by other existing reconstruction methods. Reconstructed patient images show that the neural network, trained only on simulation data sets, can be directly used for differentiating malignant from benign breast tumors.
Collapse
Affiliation(s)
- Jinchao Feng
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire 03755, USA
| | - Wanlong Zhang
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
| | - Zhe Li
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
| | - Kebin Jia
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
| | - Shudong Jiang
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire 03755, USA
| | - Hamid Dehghani
- School of Computer Science, University of Birmingham, Birmingham, B15 2TT, UK
| | - Brian W. Pogue
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire 03755, USA
| | - Keith D. Paulsen
- Thayer School of Engineering, Dartmouth College, Hanover, New Hampshire 03755, USA
| |
Collapse
|
27
|
Song X, Chen G, Zhao A, Liu X, Zeng J. Virtual optical-resolution photoacoustic microscopy using the k-Wave method. APPLIED OPTICS 2021; 60:11241-11246. [PMID: 35201116 DOI: 10.1364/ao.444106] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 11/27/2021] [Indexed: 06/14/2023]
Abstract
Deep learning has been widely used in image processing, quantitative analysis, and other applications in optical-resolution photoacoustic microscopy (OR-PAM), and it requires a large amount of photoacoustic data for training and testing. However, the complex structure, high cost, and slow imaging speed of OR-PAM make it difficult to acquire the quantity of data deep learning requires, which limits research on deep learning in OR-PAM to a certain extent. To solve this problem, a virtual OR-PAM based on k-Wave is proposed. The virtual photoacoustic microscope covers configuration of the excitation light source and ultrasonic probe, scanning, and signal processing, and can realize both the common Gaussian-beam and Bessel-beam OR-PAM variants. The system performance (lateral resolution, axial resolution, and depth of field) was tested by imaging a vertically tilted fiber, and the effectiveness and feasibility of the virtual simulation platform were verified by 3D imaging of a virtual vascular network. Its ability to generate datasets for deep learning was also verified. The construction of the virtual OR-PAM can promote research on OR-PAM and the application of deep learning in OR-PAM.
Collapse
|
28
|
Wu M, Awasthi N, Rad NM, Pluim JPW, Lopata RGP. Advanced Ultrasound and Photoacoustic Imaging in Cardiology. SENSORS (BASEL, SWITZERLAND) 2021; 21:7947. [PMID: 34883951 PMCID: PMC8659598 DOI: 10.3390/s21237947] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Revised: 11/23/2021] [Accepted: 11/26/2021] [Indexed: 12/26/2022]
Abstract
Cardiovascular diseases (CVDs) remain the leading cause of death worldwide. Effective management and treatment of CVDs rely heavily on accurate diagnosis of the disease. As the most common imaging technique for clinical diagnosis of CVDs, ultrasound (US) imaging has been intensively explored. Especially with the introduction of deep learning (DL) techniques, US imaging has advanced tremendously in recent years. Photoacoustic imaging (PAI) is one of the most promising new imaging methods alongside the existing clinical imaging modalities. It can characterize different tissue compositions based on optical absorption contrast and can thus assess the functionality of the tissue. This paper reviews major technological developments in both US imaging (combined with deep learning techniques) and PA imaging as applied to the diagnosis of CVDs.
Collapse
Affiliation(s)
- Min Wu
- Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands; (N.M.R.); (R.G.P.L.)
| | - Navchetan Awasthi
- Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands; (N.M.R.); (R.G.P.L.)
- Medical Image Analysis Group (IMAG/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands;
| | - Nastaran Mohammadian Rad
- Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands; (N.M.R.); (R.G.P.L.)
- Medical Image Analysis Group (IMAG/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands;
| | - Josien P. W. Pluim
- Medical Image Analysis Group (IMAG/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands;
| | - Richard G. P. Lopata
- Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands; (N.M.R.); (R.G.P.L.)
| |
Collapse
|
29
|
Photoacoustic imaging aided with deep learning: a review. Biomed Eng Lett 2021; 12:155-173. [DOI: 10.1007/s13534-021-00210-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Revised: 10/19/2021] [Accepted: 11/07/2021] [Indexed: 12/21/2022] Open
|
30
|
Al Mukaddim R, Ahmed R, Varghese T. Improving Minimum Variance Beamforming with Sub-Aperture Processing for Photoacoustic Imaging. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:2879-2882. [PMID: 34891848 PMCID: PMC8908882 DOI: 10.1109/embc46164.2021.9630278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Minimum variance (MV) beamforming improves resolution and reduces sidelobes compared to delay-and-sum (DAS) beamforming for photoacoustic imaging (PAI). However, some level of sidelobe signal and incoherent clutter persists, degrading MV PAI quality. Here, an adaptive beamforming algorithm (PSAPMV) combining the MV formulation with sub-aperture processing is proposed. In PSAPMV, the received channel data are split into two complementary nonoverlapping sub-apertures and beamformed using MV. A weighting matrix based on the similarity between the sub-aperture beamformed images is derived and multiplied with the full-aperture MV image, suppressing sidelobes and incoherent clutter in the PA image. Numerical simulation experiments with point targets, diffuse inclusions, and microvasculature networks are used to validate PSAPMV. Quantitative evaluation was done in terms of main-lobe-to-side-lobe ratio, full width at half maximum (FWHM), contrast ratio (CR), and generalized contrast-to-noise ratio (gCNR). PSAPMV demonstrated improved beamforming performance both qualitatively and quantitatively: higher resolution (FWHM = 0.19 mm) than MV (0.21 mm) and DAS (0.22 mm) in point target simulations, better target detectability (gCNR = 0.99) than MV (0.89) and DAS (0.84) for diffuse inclusions, and improved contrast in the microvasculature simulation (CR: DAS = 15.38, MV = 22.42, PSAPMV = 51.74 dB).
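Of the metrics above, the generalized contrast-to-noise ratio (gCNR) has a particularly compact definition: one minus the overlap between the amplitude histograms of a target region and a background region. A minimal NumPy sketch, where the region arrays, bin count, and test distributions are illustrative choices:

```python
import numpy as np

def gcnr(target: np.ndarray, background: np.ndarray, bins: int = 100) -> float:
    """Generalized contrast-to-noise ratio: 1 minus the overlap of the
    normalized amplitude histograms of target and background regions."""
    lo = min(target.min(), background.min())
    hi = max(target.max(), background.max())
    h_t, _ = np.histogram(target, bins=bins, range=(lo, hi))
    h_b, _ = np.histogram(background, bins=bins, range=(lo, hi))
    p_t = h_t / h_t.sum()          # normalize to probability mass functions
    p_b = h_b / h_b.sum()
    return float(1.0 - np.minimum(p_t, p_b).sum())

rng = np.random.default_rng(0)
inclusion = rng.normal(5.0, 1.0, 10_000)   # bright target pixels
background = rng.normal(0.0, 1.0, 10_000)  # background pixels
score = gcnr(inclusion, background)        # well-separated regions give a score near 1
```

gCNR is bounded in [0, 1] and is robust to dynamic-range transformations of the image, which is why it is often preferred over CR for comparing beamformers.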
Collapse
|
31
|
Leitgeb R, Placzek F, Rank E, Krainz L, Haindl R, Li Q, Liu M, Andreana M, Unterhuber A, Schmoll T, Drexler W. Enhanced medical diagnosis for dOCTors: a perspective of optical coherence tomography. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-210150-PER. [PMID: 34672145 PMCID: PMC8528212 DOI: 10.1117/1.jbo.26.10.100601] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Accepted: 09/23/2021] [Indexed: 05/17/2023]
Abstract
SIGNIFICANCE After three decades, more than 75,000 publications, tens of companies involved in its commercialization, and a global market perspective of about USD 1.5 billion in 2023, optical coherence tomography (OCT) has become one of the most rapidly and successfully translated imaging techniques, with substantial clinical and economic impact and acceptance. AIM Our perspective focuses on disruptive forward-looking innovations and key technologies to further boost OCT performance and thereby enable significantly enhanced medical diagnosis. APPROACH A comprehensive review of state-of-the-art accomplishments in OCT has been performed. RESULTS The most disruptive future OCT innovations include improvements in imaging resolution and speed (single-beam raster scanning versus parallelization), new implementations of dual-modality or even multimodality systems, and the use of endogenous or exogenous contrast in these hybrid OCT systems targeting molecular and metabolic imaging. Aside from OCT angiography, no other functional or contrast-enhancing OCT extension has achieved comparable clinical and commercial impact. Some more recently developed extensions, e.g., optical coherence elastography, dynamic contrast OCT, optoretinography, and artificial-intelligence-enhanced OCT, are also considered to have high potential for the future. In addition, OCT miniaturization for portable, compact, handheld, and/or cost-effective capsule-based OCT applications, home-OCT, and self-OCT systems based on micro-optic assemblies or photonic integrated circuits will open up new applications and availability in the near future. Finally, clinical translation of OCT, including medical device regulatory challenges, will remain absolutely essential. CONCLUSIONS With its exquisite non-invasive, micrometer-resolution depth-sectioning capability, OCT has especially revolutionized ophthalmic diagnosis and is hence the fastest-adopted imaging technology in the history of ophthalmology. Nonetheless, OCT has not been fully exploited and retains substantial growth potential, in academia as well as in industry. This applies not only to the ophthalmic application field, but especially to the original motivation of OCT: enabling optical biopsy, i.e., the in situ imaging of tissue microstructure with a resolution approaching that of histology but without the need for tissue excision.
Collapse
Affiliation(s)
- Rainer Leitgeb
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Medical University of Vienna, Christian Doppler Laboratory OPTRAMED, Vienna, Austria
| | - Fabian Placzek
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Elisabet Rank
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Lisa Krainz
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Richard Haindl
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Qian Li
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Mengyang Liu
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Marco Andreana
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Angelika Unterhuber
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
| | - Tilman Schmoll
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Carl Zeiss Meditec, Inc., Dublin, California, United States
| | - Wolfgang Drexler
- Medical University of Vienna, Center for Medical Physics and Biomedical Engineering, Vienna, Austria
- Address all correspondence to Wolfgang Drexler,
| |
Collapse
|
32
|
Fang Z, Yang C, Zheng Z, Jin H, Tang K, Lou L, Tang X, Wang W, Zheng Y. A Mixed-Signal Chip-Based Configurable Coherent Photoacoustic-Radar Sensing Platform for In Vivo Temperature Monitoring and Vital Signs Detection. IEEE TRANSACTIONS ON BIOMEDICAL CIRCUITS AND SYSTEMS 2021; 15:666-678. [PMID: 33877986 DOI: 10.1109/tbcas.2021.3074430] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
For precise health status monitoring and accurate disease diagnostics in the current COVID-19 pandemic, it is essential to detect various kinds of target signals robustly under high noise and strong interference. Moreover, the health monitoring system is preferably realized in a small form factor for convenient mass deployment. A CMOS-integrated coherent sensing platform is proposed to achieve this goal, which synergetically leverages quadrature coherent photoacoustic (PA) detection and coherent radar sensing toward universal healthcare. By utilizing configurable mixed-signal quadrature coherent PA detection, high sensitivity and enhanced specificity can be achieved. In-phase (I) and quadrature (Q) templates are specifically designed to accurately sense and precisely reconstruct the target PA signals in a coherent mode. Through a mixed-signal implementation that leverages an FPGA to generate template waveforms adaptively, accurate tracking and precise reconstruction of the target PA signal can be attained based on the early-late tracking principle. The multiplication between the received PA signal and the templates is implemented efficiently in the analog domain by an on-chip Gilbert cell. In vivo blood temperature monitoring was realized with the integrated PA sensing platform fabricated in a 65-nm CMOS process. With an integrated radar sensor deployed in an indoor scenario, noncontact monitoring of respiration and heartbeat rates can be attained based on electromagnetic (EM) sensing. By complementary use of the PA and EM sensing mechanisms, comprehensive health status monitoring and precise remote disease diagnostics can be achieved for the current global COVID-19 pandemic and future pervasive healthcare in the Internet of Everything (IoE) era.
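The quadrature coherent detection principle described above, which the chip implements with analog Gilbert-cell multipliers, can be sketched in software. All numbers below (sampling rate, signal frequency, amplitude, noise level) are arbitrary illustrative values, not parameters from the paper:

```python
import numpy as np

def iq_coherent_amplitude(signal: np.ndarray, fs: float, f0: float) -> float:
    """Correlate the received signal with in-phase (I) and quadrature (Q)
    templates at the expected frequency and recover its amplitude.
    Averaging the products emulates a mixer followed by a low-pass filter."""
    t = np.arange(signal.size) / fs
    i_comp = 2.0 * np.mean(signal * np.cos(2 * np.pi * f0 * t))
    q_comp = 2.0 * np.mean(signal * np.sin(2 * np.pi * f0 * t))
    return float(np.hypot(i_comp, q_comp))

fs, f0 = 100e6, 5e6                      # 100 MHz sampling, 5 MHz tone (assumed)
t = np.arange(2000) / fs                 # spans an integer number of periods
rng = np.random.default_rng(1)
received = 0.3 * np.sin(2 * np.pi * f0 * t + 0.7) + rng.normal(0.0, 1.0, t.size)
amplitude = iq_coherent_amplitude(received, fs, f0)  # close to 0.3 despite the noise
```

Because the I and Q correlations capture both phase components, the amplitude estimate is independent of the unknown signal phase, which is the core benefit of quadrature over single-template coherent detection.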
Collapse
|
33
|
Mukaddim RA, Ahmed R, Varghese T. Subaperture Processing-Based Adaptive Beamforming for Photoacoustic Imaging. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2021; 68:2336-2350. [PMID: 33606629 PMCID: PMC8330397 DOI: 10.1109/tuffc.2021.3060371] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/07/2023]
Abstract
Delay-and-sum (DAS) beamformers, when applied to photoacoustic (PA) image reconstruction, produce strong sidelobes due to the absence of transmit focusing. Consequently, DAS PA images are often severely degraded by strong off-axis clutter. For preclinical in vivo cardiac PA imaging, the presence of these noise artifacts hampers the detectability and interpretation of PA signals from the myocardial wall, which are crucial for studying blood-dominated cardiac pathological information and for complementing functional information derived from ultrasound imaging. In this article, we present PA subaperture processing (PSAP), an adaptive beamforming method, to mitigate these image-degrading effects. In PSAP, a pair of DAS-reconstructed images is formed by splitting the received channel data into two complementary nonoverlapping subapertures. A weighting matrix is then derived by analyzing the correlation between the subaperture beamformed images and multiplied with the full-aperture DAS PA image to reduce sidelobes and incoherent clutter. We validated PSAP using numerical simulation studies with point-target, diffuse-inclusion, and microvasculature imaging, and in vivo feasibility studies on five healthy murine models. Qualitative and quantitative analyses demonstrate improvements in PA image quality with PSAP compared to DAS and coherence-factor-weighted DAS (DAS-CF). PSAP demonstrated improved target detectability, with a higher generalized contrast-to-noise ratio (gCNR) in vasculature simulations, where PSAP produces 19.61% and 19.53% higher gCNRs than DAS and DAS-CF, respectively. Furthermore, PSAP provided higher image contrast, quantified using the contrast ratio (CR) (e.g., 89.26% and 11.90% higher CR than DAS and DAS-CF in vasculature simulations), and improved clutter suppression.
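The subaperture weighting idea described above can be sketched in a few lines of NumPy. This is a toy stand-in, not the paper's exact formulation: the channel data are assumed already delay-compensated (so DAS reduces to a sum over the aperture), and the normalized-product weight merely illustrates how agreement between the two half-aperture images can gate the full-aperture image:

```python
import numpy as np

def das(channel_data: np.ndarray) -> np.ndarray:
    """Toy delay-and-sum: data are assumed pre-delayed, so beamforming
    reduces to summing across the aperture (axis 0)."""
    return channel_data.sum(axis=0)

def psap_sketch(channel_data: np.ndarray) -> np.ndarray:
    """Beamform two complementary half-apertures, derive a per-pixel
    similarity weight, and apply it to the full-aperture image."""
    n = channel_data.shape[0]
    img_a = das(channel_data[: n // 2])   # first subaperture image
    img_b = das(channel_data[n // 2:])    # second subaperture image
    full = das(channel_data)              # full-aperture DAS image
    # Coherent signal: the subaperture images agree -> weight near 1.
    # Incoherent clutter: they disagree -> weight is driven toward 0.
    weight = np.clip(2.0 * img_a * img_b / (img_a**2 + img_b**2 + 1e-12), 0.0, 1.0)
    return weight * full

coherent = np.ones((8, 4))     # identical signal on all 8 channels
out = psap_sketch(coherent)    # weight is ~1 everywhere -> plain DAS image
```

A true target produces consistent subaperture images and passes through almost unchanged, while clutter that differs between the half-apertures is attenuated, which is the mechanism behind the sidelobe and clutter suppression reported above.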
Collapse
|
34
|
Davoudi N, Lafci B, Özbek A, Deán-Ben XL, Razansky D. Deep learning of image- and time-domain data enhances the visibility of structures in optoacoustic tomography. OPTICS LETTERS 2021; 46:3029-3032. [PMID: 34197371 DOI: 10.1364/ol.424571] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 05/15/2021] [Indexed: 06/13/2023]
Abstract
Images rendered with common optoacoustic system implementations are often afflicted with distortions and poor visibility of structures, hindering reliable image interpretation and quantification of bio-chrome distribution. Among the practical limitations contributing to artifactual reconstructions are insufficient tomographic detection coverage and suboptimal illumination geometry, as well as the inability to accurately account for acoustic reflections and speed-of-sound heterogeneities in the imaged tissues. Here we developed a convolutional neural network (CNN) approach for enhancing optoacoustic image quality that combines training on both time-resolved signals and tomographic reconstructions. Reference human finger data for training the CNN were recorded using a full-ring array system that provides optimal tomographic coverage around the imaged object. The reconstructions were further refined with a dedicated algorithm that minimizes acoustic reflection artifacts induced by acoustically mismatched structures, such as bones. The combined methodology is shown to outperform other learning-based methods operating solely on image-domain data.
Collapse
|
35
|
DiSpirito A, Vu T, Pramanik M, Yao J. Sounding out the hidden data: A concise review of deep learning in photoacoustic imaging. Exp Biol Med (Maywood) 2021; 246:1355-1367. [PMID: 33779342 PMCID: PMC8243210 DOI: 10.1177/15353702211000310] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
The rapidly evolving field of photoacoustic tomography utilizes endogenous chromophores to extract both functional and structural information from deep within tissues. It is this power to perform precise quantitative measurements in vivo, with endogenous or exogenous contrast, that makes photoacoustic tomography highly promising for clinical translation in functional brain imaging, early cancer detection, real-time surgical guidance, and the visualization of dynamic drug responses. Considering photoacoustic tomography has benefited from numerous engineering innovations, it is no surprise that many of its current cutting-edge developments incorporate advances from the equally novel field of artificial intelligence. More specifically, the growth and prevalence of graphical processing unit capabilities in recent years has fueled an offshoot of artificial intelligence known as deep learning. Rooted in the solid foundation of signal processing, deep learning typically utilizes a method of optimization known as gradient descent to minimize a loss function and update model parameters. There are already a number of innovative efforts in photoacoustic tomography utilizing deep learning techniques for a variety of purposes, including resolution enhancement, reconstruction artifact removal, undersampling correction, and improved quantification. Most of these efforts have proven highly promising in addressing long-standing technical obstacles where traditional solutions either fail completely or make only incremental progress. This concise review focuses on the history of applied artificial intelligence in photoacoustic tomography, presents recent advances at this multifaceted intersection of fields, and outlines the most exciting advances that will likely propagate into promising future innovations.
Collapse
Affiliation(s)
- Anthony DiSpirito
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| | - Tri Vu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| | - Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore 637459, Singapore
| | - Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| |
Collapse
|
36
|
Gröhl J, Schellenberg M, Dreher K, Maier-Hein L. Deep learning for biomedical photoacoustic imaging: A review. PHOTOACOUSTICS 2021; 22:100241. [PMID: 33717977 PMCID: PMC7932894 DOI: 10.1016/j.pacs.2021.100241] [Citation(s) in RCA: 80] [Impact Index Per Article: 26.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/05/2020] [Revised: 01/18/2021] [Accepted: 01/20/2021] [Indexed: 05/04/2023]
Abstract
Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extraction of relevant tissue parameters from the raw data requires solving inverse image reconstruction problems, which have proven extremely difficult. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in the context of medical imaging and finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and the fact that they can be adapted to any given problem. In this review, we examine the current state of the art of deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.
Collapse
Affiliation(s)
- Janek Gröhl
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
| | - Melanie Schellenberg
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
| | - Kris Dreher
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
| | - Lena Maier-Hein
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
| |
Collapse
|
37
|
Deng H, Qiao H, Dai Q, Ma C. Deep learning in photoacoustic imaging: a review. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-200374VRR. [PMID: 33837678 PMCID: PMC8033250 DOI: 10.1117/1.jbo.26.4.040901] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Accepted: 03/18/2021] [Indexed: 05/18/2023]
Abstract
SIGNIFICANCE Photoacoustic (PA) imaging can provide structural, functional, and molecular information for preclinical and clinical studies. For PA imaging (PAI), non-ideal signal detection deteriorates image quality, and quantitative PAI (QPAI) remains challenging due to the unknown light fluence spectra in deep tissue. In recent years, deep learning (DL) has shown outstanding performance when implemented in PAI, with applications in image reconstruction, quantification, and understanding. AIM We provide (i) a comprehensive overview of the DL techniques that have been applied in PAI, (ii) references for designing DL models for various PAI tasks, and (iii) a summary of the future challenges and opportunities. APPROACH Papers published before November 2020 in the area of applying DL in PAI were reviewed. We categorized them into three types: image understanding, reconstruction of the initial pressure distribution, and QPAI. RESULTS When applied in PAI, DL can effectively process images, improve reconstruction quality, fuse information, and assist quantitative analysis. CONCLUSION DL has become a powerful tool in PAI. With the development of DL theory and technology, it will continue to boost the performance and facilitate the clinical translation of PAI.
Collapse
Affiliation(s)
- Handi Deng
- Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
| | - Hui Qiao
- Tsinghua University, Department of Automation, Haidian, Beijing, China
- Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
- Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
- Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
| | - Qionghai Dai
- Tsinghua University, Department of Automation, Haidian, Beijing, China
- Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
- Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
- Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
| | - Cheng Ma
- Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Beijing Innovation Center for Future Chip, Beijing, China
| |
Collapse
|
38
|
Yang C, Lan H, Gao F, Gao F. Review of deep learning for photoacoustic imaging. PHOTOACOUSTICS 2021; 21:100215. [PMID: 33425679 PMCID: PMC7779783 DOI: 10.1016/j.pacs.2020.100215] [Citation(s) in RCA: 58] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/15/2020] [Revised: 10/11/2020] [Accepted: 10/11/2020] [Indexed: 05/02/2023]
Abstract
Machine learning has developed dramatically and found numerous applications across many fields over the past few years. This boom originated in 2009, when a new model, the deep artificial neural network, began to surpass other established, mature models on some important benchmarks. Deep networks were subsequently adopted widely in academia and industry and, in applications ranging from image analysis to natural language processing, have become the state-of-the-art machine learning models. Deep neural networks have great potential in medical imaging technology, medical data analysis, medical diagnosis, and other healthcare issues, and are being promoted in both pre-clinical and clinical stages. In this review, we provide an overview of recent developments and challenges in the application of machine learning to medical image analysis, with a special focus on deep learning in photoacoustic imaging. The aim of this review is threefold: (i) to introduce deep learning and some of its important basics, (ii) to review recent works that apply deep learning across the entire ecological chain of photoacoustic imaging, from image reconstruction to disease diagnosis, and (iii) to provide open-source materials and other resources for researchers interested in applying deep learning to photoacoustic imaging.
Collapse
Affiliation(s)
- Changchun Yang
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai, 200050, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
| | - Hengrong Lan
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai, 200050, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
| | - Feng Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
| | - Fei Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
| |
|
39
|
Tian C, Zhang C, Zhang H, Xie D, Jin Y. Spatial resolution in photoacoustic computed tomography. REPORTS ON PROGRESS IN PHYSICS. PHYSICAL SOCIETY (GREAT BRITAIN) 2021; 84:036701. [PMID: 33434890 DOI: 10.1088/1361-6633/abdab9] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Accepted: 01/12/2021] [Indexed: 06/12/2023]
Abstract
Photoacoustic computed tomography (PACT) is a novel biomedical imaging modality that has undergone rapid development over the past two decades. Spatial resolution is an important criterion for measuring the imaging performance of a PACT system. Here we survey state-of-the-art literature on the spatial resolution of PACT and analyze resolution degradation models spanning signal generation, propagation, and reception through image reconstruction. In particular, the impacts of laser pulse duration, acoustic attenuation, acoustic heterogeneity, detector bandwidth, detector aperture, detector view angle, signal sampling, and image reconstruction algorithms are reviewed and discussed. Analytical expressions for the point spread functions related to these factors are summarized on the basis of rigorous mathematical formulas. State-of-the-art approaches devoted to enhancing spatial resolution are also reviewed. This work is expected to elucidate the concept of spatial resolution in PACT and inspire novel image quality enhancement techniques.
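As context for the aperture analysis surveyed above, a commonly cited first-order geometric estimate (an illustrative assumption stated here; the review's own PSF expressions are not reproduced) relates the aperture-induced tangential blur $R_T$ of a flat detector element of width $d$, scanning at radius $R_0$, to the radial position $r$ of the source:

```latex
% First-order geometric estimate of aperture-induced tangential blur
% (illustrative assumption; exact PSF expressions are derived in the review).
R_T(r) \approx \frac{r}{R_0}\, d
```

Under this estimate, tangential resolution is best near the scan center ($r \to 0$) and degrades toward the detector ring, consistent with the spatially variant behavior discussed in the review.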
Affiliation(s)
- Chao Tian
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
| | - Chenxi Zhang
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
| | - Haoran Zhang
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
| | - Dan Xie
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
| | - Yi Jin
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
| |
|
40
|
Razansky D, Klohs J, Ni R. Multi-scale optoacoustic molecular imaging of brain diseases. Eur J Nucl Med Mol Imaging 2021; 48:4152-4170. [PMID: 33594473 PMCID: PMC8566397 DOI: 10.1007/s00259-021-05207-4] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Accepted: 01/17/2021] [Indexed: 02/07/2023]
Abstract
The ability to non-invasively visualize endogenous chromophores and exogenous probes and sensors across the entire rodent brain at high spatial and temporal resolution has empowered optoacoustic imaging modalities with unprecedented capacities for interrogating the brain under physiological and diseased conditions. This has rapidly transformed optoacoustic microscopy (OAM) and multi-spectral optoacoustic tomography (MSOT) into emerging research tools for studying animal models of brain diseases. In this review, we describe the principles of optoacoustic imaging and showcase recent technical advances that enable high-resolution real-time brain observations in preclinical models. In addition, advanced molecular probe designs allow for efficient visualization of pathophysiological processes playing a central role in a variety of neurodegenerative diseases, brain tumors, and stroke. We describe outstanding challenges in optoacoustic imaging methodologies and propose a future outlook.
Affiliation(s)
- Daniel Razansky
- Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Wolfgang-Pauli-Strasse 27, HIT E42.1, 8093, Zurich, Switzerland
- Zurich Neuroscience Center (ZNZ), Zurich, Switzerland
- Faculty of Medicine and Institute of Pharmacology and Toxicology, University of Zurich, Zurich, Switzerland
| | - Jan Klohs
- Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Wolfgang-Pauli-Strasse 27, HIT E42.1, 8093, Zurich, Switzerland
- Zurich Neuroscience Center (ZNZ), Zurich, Switzerland
| | - Ruiqing Ni
- Institute for Biomedical Engineering, University of Zurich & ETH Zurich, Wolfgang-Pauli-Strasse 27, HIT E42.1, 8093, Zurich, Switzerland.
- Zurich Neuroscience Center (ZNZ), Zurich, Switzerland.
- Institute for Regenerative Medicine, University of Zurich, Zurich, Switzerland.
| |
|
41
|
Lu T, Chen T, Gao F, Sun B, Ntziachristos V, Li J. LV-GAN: A deep learning approach for limited-view optoacoustic imaging based on hybrid datasets. JOURNAL OF BIOPHOTONICS 2021; 14:e202000325. [PMID: 33098215 DOI: 10.1002/jbio.202000325] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Revised: 09/28/2020] [Accepted: 10/13/2020] [Indexed: 06/11/2023]
Abstract
Optoacoustic imaging (OAI) methods are rapidly evolving for resolving optical contrast in medical imaging applications. In practice, measurement strategies are commonly implemented under limited-view conditions because of oversized imaging objects or system design limitations. Data acquired by limited-view detection may introduce artifacts and distortions into reconstructed optoacoustic (OA) images. We propose a hybrid data-driven deep learning approach based on a generative adversarial network (GAN), termed LV-GAN, to efficiently recover high-quality images from limited-view OA images. Trained on both simulation and experimental data, LV-GAN is capable of achieving high recovery accuracy even at detection angles of less than 60°. The feasibility of LV-GAN for artifact removal in biological applications was validated by ex vivo experiments on two different OAI systems, suggesting high potential for ubiquitous use of LV-GAN to optimize image quality or system design for different scanners and application scenarios.
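The abstract above does not specify LV-GAN's losses; as a hedged, minimal numpy sketch of the standard adversarial objective such image-recovery GANs minimize (all scores and values below are hypothetical, not from the paper), the discriminator and generator binary cross-entropy losses can be computed as:

```python
import numpy as np

def bce(pred, target, eps=1e-12):
    """Binary cross-entropy between predicted probabilities and labels."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

# Hypothetical discriminator outputs (probability of "real") for one batch.
d_on_real = np.array([0.9, 0.8, 0.95])  # scores on full-view reference images
d_on_fake = np.array([0.2, 0.1, 0.3])   # scores on generator reconstructions

# Discriminator loss: push real scores toward 1 and fake scores toward 0.
d_loss = (bce(d_on_real, np.ones_like(d_on_real))
          + bce(d_on_fake, np.zeros_like(d_on_fake)))

# Non-saturating generator loss: push the discriminator's fake scores toward 1.
g_loss = bce(d_on_fake, np.ones_like(d_on_fake))
```

In a full training loop these two losses are minimized alternately, typically alongside a pixel-wise reconstruction term that anchors the generator's output to the limited-view data.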
Affiliation(s)
- Tong Lu
- School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
| | - Tingting Chen
- School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
| | - Feng Gao
- School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin, China
| | - Biao Sun
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
| | - Vasilis Ntziachristos
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Munich, Germany
- Chair of Biological Imaging and TranslaTUM, Technical University of Munich, Munich, Germany
| | - Jiao Li
- School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin, China
| |
|
42
|
Das D, Sharma A, Rajendran P, Pramanik M. Another decade of photoacoustic imaging. Phys Med Biol 2020; 66. [PMID: 33361580 DOI: 10.1088/1361-6560/abd669] [Citation(s) in RCA: 50] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Accepted: 12/23/2020] [Indexed: 01/09/2023]
Abstract
Photoacoustic imaging is a hybrid biomedical imaging modality that is finding its way into clinical practice. Although the photoacoustic phenomenon was known more than a century ago, only in the last two decades has it been widely researched and used for biomedical imaging applications. In this review we focus on the development and progress of the technology in the last decade (2010-2020). Having become more user-friendly, cheaper, and more portable, photoacoustic imaging promises a wide range of applications if translated to the clinic. The growth of the photoacoustic community is steady, and with the several new directions researchers are exploring, it is inevitable that photoacoustic imaging will one day establish itself as a regular imaging system in clinical practice.
Affiliation(s)
- Dhiman Das
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, SINGAPORE
| | - Arunima Sharma
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, SINGAPORE
| | - Praveenbalaji Rajendran
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, SINGAPORE
| | - Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, 70 Nanyang Drive, N1.3-B2-11, Singapore, 637457, SINGAPORE
| |
|
43
|
Rajendran P, Pramanik M. Deep learning approach to improve tangential resolution in photoacoustic tomography. BIOMEDICAL OPTICS EXPRESS 2020; 11:7311-7323. [PMID: 33408998 PMCID: PMC7747891 DOI: 10.1364/boe.410145] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Revised: 10/29/2020] [Accepted: 11/15/2020] [Indexed: 05/09/2023]
Abstract
In circular-scan photoacoustic tomography (PAT), the axial resolution is spatially invariant and is limited by the bandwidth of the detector. The tangential resolution, however, is spatially variant and depends on the aperture size of the detector; in particular, it improves with decreasing aperture size. However, using a detector with a smaller aperture reduces the sensitivity of the transducer, so large-aperture detectors are widely preferred in circular-scan PAT imaging systems. Although several techniques have been proposed to improve the tangential resolution, they have inherent limitations such as high cost and the need for customized detectors. Herein, we propose a novel deep learning architecture to counter the spatially variant tangential resolution in circular-scan PAT imaging systems. We used a fully dense U-Net-based convolutional neural network architecture along with nine residual blocks to improve the tangential resolution of the PAT images. The network was trained on simulated datasets, and its performance was verified by experimental in vivo imaging. Results show that the proposed deep learning network improves the tangential resolution eightfold without compromising the structural similarity or quality of the images.
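The fully dense U-Net and residual blocks used in the paper are not detailed in the abstract; as a minimal sketch of the residual idea alone (identity skip plus a learned transform, here with an assumed ReLU activation and a hypothetical fixed 3x3 smoothing kernel standing in for trained weights), in numpy:

```python
import numpy as np

def filter2d_same(img, kernel):
    """Naive 'same'-padded 2-D cross-correlation (the deep-learning
    'convolution'), for illustration only."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def residual_block(img, kernel):
    """y = x + ReLU(filter(x)): the identity skip preserves the input,
    so the layer only has to learn a correction on top of it."""
    return img + np.maximum(filter2d_same(img, kernel), 0.0)

x = np.random.default_rng(0).random((8, 8))
k = np.full((3, 3), 1.0 / 9.0)   # hypothetical smoothing kernel
y = residual_block(x, k)
assert y.shape == x.shape        # residual blocks keep the spatial size
```

Stacking several such blocks, as the paper does with nine, deepens the network while the skip connections keep gradients and low-level image detail flowing through.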
Affiliation(s)
- Praveenbalaji Rajendran
- Nanyang Technological University, School of Chemical and Biomedical Engineering, 62 Nanyang Drive, Singapore 637459, Singapore
| | - Manojit Pramanik
- Nanyang Technological University, School of Chemical and Biomedical Engineering, 62 Nanyang Drive, Singapore 637459, Singapore
| |
|