1. Ma Y, Zhou W, Ma R, Wang E, Yang S, Tang Y, Zhang XP, Guan X. DOVE: Doodled vessel enhancement for photoacoustic angiography super resolution. Med Image Anal 2024;94:103106. PMID: 38387244. DOI: 10.1016/j.media.2024.103106.
Abstract
Deep-learning-based super-resolution photoacoustic angiography (PAA) has emerged as a valuable tool for enhancing the resolution of blood vessel images and aiding in disease diagnosis. However, due to the scarcity of training samples, PAA super-resolution models do not generalize well, especially in the challenging in vivo imaging of organs with deep tissue penetration. Furthermore, prolonged exposure to high laser intensity during image acquisition can lead to tissue damage and secondary infections. To address these challenges, we propose doodled vessel enhancement (DOVE), an approach that utilizes hand-drawn doodles to train a PAA super-resolution model. With a training dataset consisting of only 32 real PAA images, we construct a diffusion model that interprets hand-drawn doodles as low-resolution images. DOVE enables us to generate a large number of realistic PAA images, achieving a 49.375% fool rate even among experts in photoacoustic imaging. We then employ these generated images to train a self-similarity-based super-resolution model. In cross-domain tests, our method, trained solely on generated images, achieves a structural similarity value of 0.8591, surpassing the scores of all other models trained with real high-resolution images. DOVE overcomes the limitation of insufficient training samples and unlocks the clinical application potential of super-resolution-based biomedical imaging.
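The structural similarity (SSIM) figure cited above is a standard full-reference image-quality metric. As a rough illustration only — a single-window, global SSIM rather than the sliding-window mean that imaging toolkits report, with the commonly used constants — it can be computed as:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM (illustrative; toolkits use a sliding window)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + 0.1 * rng.standard_normal(img.shape), 0.0, 1.0)
print(global_ssim(img, img))    # ~1.0 for identical images
print(global_ssim(img, noisy))  # noise pushes the score below 1.0
```

A score of 1 indicates identical images; values such as the 0.8591 reported here are averages of windowed SSIM over reconstructed/reference image pairs.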
Affiliation(s)
- Yuanzheng Ma
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Wangting Zhou
- Engineering Research Center of Molecular & Neuro Imaging of the Ministry of Education, Xidian University, Xi'an, Shaanxi 710126, China
- Rui Ma
- MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China
- Erqi Wang
- MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China
- Sihua Yang
- MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China
- Yansong Tang
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Xiao-Ping Zhang
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Xun Guan
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
2. Wang Y, Chen Y, Zhao Y, Liu S. Compressed Sensing for Biomedical Photoacoustic Imaging: A Review. Sensors (Basel) 2024;24:2670. PMID: 38732775. PMCID: PMC11085525. DOI: 10.3390/s24092670.
Abstract
Photoacoustic imaging (PAI) is a rapidly developing, non-invasive biomedical imaging technique that combines the strong contrast of optical absorption imaging with the high resolution of acoustic imaging. Abnormal biological tissues (such as tumors and inflammation) undergo different degrees of thermal expansion after absorbing optical energy, producing acoustic signals distinct from those of normal tissues. The technique can detect small lesions in biological tissue and has demonstrated significant potential for applications in tumor research, melanoma detection, and cardiovascular disease diagnosis. During photoacoustic signal acquisition, various factors influence the signals, such as absorption, scattering, and attenuation in biological tissue, and a single ultrasound transducer cannot provide sufficient information to reconstruct high-precision photoacoustic images. To obtain more accurate and clearer reconstructions, PAI systems typically use a large number of ultrasound transducers to collect multi-channel signals from different angles and positions. Reconstructing high-quality photoacoustic images therefore requires many measurement signals, which incurs substantial hardware and time costs. Compressed sensing is a signal-recovery framework that circumvents the Nyquist sampling limit and can reconstruct the original signal from a small number of measurements. Compressed-sensing-based PAI has made breakthroughs over the past decade, enabling reconstruction of high-quality, low-artifact images from few photoacoustic measurement signals while improving time efficiency and reducing hardware costs.
This article provides a detailed introduction to compressed-sensing-based PAI, covering the physical transmission model-based method, the two-stage reconstruction method, and the single-pixel camera method, and discusses challenges and future perspectives of compressed-sensing-based PAI.
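The compressed-sensing principle summarized in this abstract — recovering a sparse signal from far fewer measurements than Nyquist sampling would require — can be sketched with a minimal iterative shrinkage-thresholding (ISTA) solver. This is an illustrative toy problem, not any of the PAI methods reviewed in the article; the sensing matrix, sparsity level, and parameters are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 200, 60, 5          # signal length, number of measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
y = A @ x_true                                # only m << n noiseless samples

# ISTA: gradient step on ||Ax - y||^2 followed by soft thresholding,
# which promotes sparsity in the estimate
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.01
x = np.zeros(n)
for _ in range(5000):
    g = x + step * (A.T @ (y - A @ x))
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative recovery error: {rel_err:.3f}")  # small: recovered from 60 of 200 samples
```

The same idea underlies CS-based PAI: the photoacoustic image is sparse in some basis, so far fewer transducer measurements suffice than a dense array would collect.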
Affiliation(s)
- Yuanmao Wang
- School of Physics, Nanjing University of Science and Technology, Nanjing 210094, China
- Yang Chen
- School of Physics, Nanjing University of Science and Technology, Nanjing 210094, China
- Yongjian Zhao
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong 999077, China
- Siyu Liu
- School of Physics, Nanjing University of Science and Technology, Nanjing 210094, China
- Southwest Institute of Technical Physics, Chengdu 610041, China
3. Paul A, Mallidi S. U-Net enhanced real-time LED-based photoacoustic imaging. J Biophotonics 2024:e202300465. PMID: 38622811. DOI: 10.1002/jbio.202300465.
Abstract
Photoacoustic (PA) imaging is a hybrid imaging modality offering good optical contrast and spatial resolution. Portable, cost-effective, small-footprint light-emitting diodes (LEDs) are rapidly becoming important PA optical sources. However, the key challenge faced by LED-based systems is low light fluence, which is generally compensated by heavy frame averaging at the cost of acquisition frame rate. In this study, we present a simple deep-learning U-Net framework that enhances the signal-to-noise ratio (SNR) and contrast of PA images obtained by averaging a low number of frames. The SNR increased approximately four-fold for both in-class in vitro phantoms (4.39 ± 2.55) and out-of-class in vivo models (4.27 ± 0.87). We also demonstrate the noise invariance of the network and discuss its downsides (blurry output and failure to reduce salt-and-pepper noise). Overall, the developed U-Net framework can provide a real-time image enhancement platform for clinically translatable, low-cost, low-energy light-source-based PA imaging systems.
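The frame-averaging trade-off described in this abstract — SNR growing roughly as the square root of the number of averaged frames for uncorrelated noise, at the cost of frame rate — can be checked numerically. The signal and noise model below are assumptions for illustration, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
signal = np.sin(np.linspace(0.0, 4.0 * np.pi, 1000))  # stand-in for a PA A-line

def snr_db(n_frames, noise_std=1.0):
    """SNR (dB) of the average of n_frames independently noisy acquisitions."""
    frames = signal + noise_std * rng.standard_normal((n_frames, signal.size))
    residual = frames.mean(axis=0) - signal
    return 10.0 * np.log10(signal.var() / residual.var())

s1, s16 = snr_db(1), snr_db(16)
gain = s16 - s1  # for uncorrelated noise, expect roughly 10*log10(16), i.e. ~12 dB
print(f"SNR gain from averaging 16 frames: {gain:.1f} dB")
```

This is why a learned denoiser that works on few-frame averages, as proposed here, directly buys back acquisition frame rate.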
Affiliation(s)
- Avijit Paul
- Department of Biomedical Engineering, Tufts University, Medford, Massachusetts, USA
- Srivalleesha Mallidi
- Department of Biomedical Engineering, Tufts University, Medford, Massachusetts, USA
4. Chang KW, Karthikesh MS, Zhu Y, Hudson HM, Barbay S, Bundy D, Guggenmos DJ, Frost S, Nudo RJ, Wang X, Yang X. Photoacoustic imaging of squirrel monkey cortical responses induced by peripheral mechanical stimulation. J Biophotonics 2024;17:e202300347. PMID: 38171947. PMCID: PMC10961203. DOI: 10.1002/jbio.202300347.
Abstract
Non-human primates (NHPs) are crucial models for studies of neuronal activity. Emerging photoacoustic imaging modalities offer excellent tools for studying NHP brains with high sensitivity and high spatial resolution. In this work, a photoacoustic microscopy (PAM) device provided label-free quantitative characterization of cerebral hemodynamic changes induced by peripheral mechanical stimulation. A 5 × 5 mm area within the somatosensory cortex of an adult squirrel monkey was imaged. A deep, fully connected neural network was characterized and applied to the PAM images of the cortex to enhance the vessel structures after mechanical stimulation of the forelimb digits. The quality of the PAM images improved significantly with the neural network while the hemodynamic responses were preserved, and the functional responses to mechanical stimulation were characterized from the improved images. This study demonstrates the capability of PAM combined with machine learning for functional imaging of the NHP brain.
Affiliation(s)
- Kai-Wei Chang
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan, 48109, United States
- Yunhao Zhu
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan, 48109, United States
- Heather M. Hudson
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Scott Barbay
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- David Bundy
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- David J. Guggenmos
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Shawn Frost
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Randolph J. Nudo
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Xueding Wang
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan, 48109, United States
- Xinmai Yang
- Bioengineering Graduate Program and Institute for Bioengineering Research, University of Kansas, Lawrence, Kansas, 66045, United States
- Department of Mechanical Engineering, University of Kansas, Lawrence, Kansas, 66045, United States
5. Zou Y, Lin Y, Zhu Q. PA-NeRF, a neural radiance field model for 3D photoacoustic tomography reconstruction from limited Bscan data. Biomed Opt Express 2024;15:1651-1667. PMID: 38495696. PMCID: PMC10942707. DOI: 10.1364/boe.511807.
Abstract
We introduce a novel deep-learning-based photoacoustic tomography method, Photoacoustic Tomography Neural Radiance Field (PA-NeRF), for reconstructing 3D volumetric PAT images from limited 2D B-scan data. In conventional 3D volumetric imaging, a 3D reconstruction requires transducer element data obtained from all directions. Our model employs a NeRF-based PAT 3D reconstruction method that learns the relationship between transducer element positions and the corresponding 3D image volume. Compared with convolution-based deep-learning models such as U-Net and TransUNet, PA-NeRF does not learn an interpolation process but instead draws on 3D photoacoustic imaging principles. Additionally, we introduce a forward loss that improves reconstruction quality. Both simulation and phantom studies validate the performance of PA-NeRF, and we apply the model to clinical examples to demonstrate its feasibility. To the best of our knowledge, PA-NeRF is the first photoacoustic tomography method to successfully reconstruct a 3D volume from sparse B-scan data.
Affiliation(s)
- Yun Zou
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Yixiao Lin
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Quing Zhu
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Department of Radiology, Washington University in St. Louis School of Medicine, St. Louis, Missouri, USA
6. Wang D, Wang X, Chen S, Li J, Liang L, Liu Y. Joint learning of sparse and limited-view guided waves signals for feature reconstruction and imaging. Ultrasonics 2024;137:107200. PMID: 37988767. DOI: 10.1016/j.ultras.2023.107200.
Abstract
Sparse and limited-view ultrasonic guided wave imaging has become a research hotspot in the field. Studies have shown that traditional under-sampled ultrasonic imaging methods either require a significant amount of time to recover the full data or produce poor-quality images. To address these issues, this paper proposes an end-to-end ultrasonic guided wave joint learning imaging method for sparse and limited-view transducer arrays, which integrates sparse feature reconstruction with deep-learning imaging. Numerical and experimental studies demonstrate that this approach significantly improves imaging quality. The quality of the results for sparse and limited-view transducer arrays is quantified using average correlation coefficients on the testing set, verifying the feasibility and effectiveness of the proposed method.
Affiliation(s)
- Dingpeng Wang
- State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China
- Xiaocen Wang
- State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China
- Shili Chen
- State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China
- Jian Li
- State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China
- Lin Liang
- Schlumberger-Doll Research, One Hampshire St., Cambridge, MA 02139, USA
- Yang Liu
- State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China; International Institute for Innovative Design and Intelligent Manufacturing of Tianjin University in Zhejiang, Shaoxing 330100, China
7. Susmelj AK, Lafci B, Ozdemir F, Davoudi N, Deán-Ben XL, Perez-Cruz F, Razansky D. Signal domain adaptation network for limited-view optoacoustic tomography. Med Image Anal 2024;91:103012. PMID: 37922769. DOI: 10.1016/j.media.2023.103012.
Abstract
Optoacoustic (OA) imaging is based on optical excitation of biological tissues with nanosecond-duration laser pulses and detection of the ultrasound (US) waves generated by thermoelastic expansion following light absorption. The image quality and fidelity of OA images critically depend on the extent of tomographic coverage provided by the US detector arrays. However, full tomographic coverage is not always possible due to experimental constraints. One major challenge concerns efficient integration of OA and pulse-echo US measurements using the same transducer array. A common approach to this hybridization consists of using standard linear transducer arrays, which readily results in arc-type artifacts and distorted shapes in OA images due to the limited angular coverage. Deep learning methods have been proposed to mitigate limited-view artifacts in OA reconstructions by mapping artifactual images to artifact-free (ground truth) images. However, acquisition of ground truth data with full angular coverage is not always possible, particularly when using handheld probes in a clinical setting. Deep learning methods operating in the image domain are then commonly trained on simulated data, but this approach cannot transfer the learned features between the two domains, resulting in poor performance on experimental data. Here, we propose a signal domain adaptation network (SDAN) consisting of i) a domain adaptation network that reduces the domain gap between simulated and experimental signals and ii) a sides prediction network that complements the missing signals in limited-view OA datasets acquired from a human forearm with a handheld linear transducer array. The proposed method showed improved performance in reducing limited-view artifacts without the need for ground truth signals from full tomographic acquisitions.
Affiliation(s)
| | - Berkan Lafci
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland
| | - Firat Ozdemir
- Swiss Data Science Center, ETH Zürich and EPFL, Switzerland
| | - Neda Davoudi
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland
| | - Xosé Luís Deán-Ben
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland
| | - Fernando Perez-Cruz
- Swiss Data Science Center, ETH Zürich and EPFL, Switzerland; Institute for Machine Learning, Department of Computer Science, ETH Zurich, Switzerland
| | - Daniel Razansky
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland.
| |
Collapse
|
8. Wang R, Zhu J, Meng Y, Wang X, Chen R, Wang K, Li C, Shi J. Adaptive machine learning method for photoacoustic computed tomography based on sparse array sensor data. Comput Methods Programs Biomed 2023;242:107822. PMID: 37832425. DOI: 10.1016/j.cmpb.2023.107822.
Abstract
BACKGROUND AND OBJECTIVE: Photoacoustic computed tomography (PACT) is a non-invasive biomedical imaging technology that has developed rapidly in recent decades and has shown particular potential for small-animal studies and early diagnosis of human disease. To obtain high-quality images, a photoacoustic imaging system needs a high-element-density detector array. In practice, however, cost limitations, manufacturing technology, and system requirements for miniaturization and robustness make it challenging to achieve sufficient elements and high-quality reconstructed images, which may suffer from artifacts. Unlike recent machine learning methods that remove distortions and artifacts to recover high-quality images, this paper proposes an adaptive machine learning method that first predicts and complements the photoacoustic sensor channel data from sparse array sampling and then reconstructs images with conventional reconstruction algorithms.
METHODS: We develop an adaptive machine learning model to predict and complement the photoacoustic sensor channel data. The model consists of XGBoost and a neural network named SS-net. To handle data sets of different sizes and improve generalization, a tunable parameter controls the weights of the XGBoost and SS-net outputs.
RESULTS: The proposed method achieved superior performance in simulation, phantom, and in vivo experiments. Compared with linear interpolation, XGBoost, CAE, and U-net, the simulation SSIM increased by 12.83%, 6.78%, 21.46%, and 12.33%, respectively, and the median R2 on in vivo data increased by 34.4%, 8.1%, 28.6%, and 84.1%.
CONCLUSIONS: This model provides a framework for predicting the missing photoacoustic sensor data on a sparse ring-shaped array for PACT imaging and achieves considerable improvements in reconstruction quality. Compared qualitatively and quantitatively with linear interpolation and other deep learning methods, it effectively suppresses artifacts and improves image quality. A key advantage is that no large image training dataset is needed: the training data come directly from the sensors. The method has the potential to be applied to a wide range of photoacoustic detector arrays for low-cost, user-friendly clinical applications.
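The tunable weighting idea described in this abstract — a single parameter blending two models' outputs — can be sketched generically. The stand-in predictors below are arbitrary noisy surrogates for illustration, not XGBoost or SS-net:

```python
import numpy as np

rng = np.random.default_rng(7)
y_true = rng.standard_normal(500)                 # ground-truth channel data (stand-in)
pred_a = y_true + 0.3 * rng.standard_normal(500)  # placeholder for model A's output
pred_b = y_true + 0.3 * rng.standard_normal(500)  # placeholder for model B's output

def blend(alpha):
    """Convex combination controlled by a single tunable weight."""
    return alpha * pred_a + (1.0 - alpha) * pred_b

# tune the weight on a grid by minimizing mean squared error
alphas = np.linspace(0.0, 1.0, 101)
mses = np.array([np.mean((blend(a) - y_true) ** 2) for a in alphas])
best = alphas[int(np.argmin(mses))]

mse_a = np.mean((pred_a - y_true) ** 2)
mse_b = np.mean((pred_b - y_true) ** 2)
# the grid includes alpha = 0 and 1, so the tuned blend is never worse
# than either model on the tuning set
print(best, mses.min() <= min(mse_a, mse_b))
```

Because the endpoints of the grid reproduce each model alone, tuning the weight can only help on the data it is tuned on; generalization to new data is the harder question the paper's adaptive scheme addresses.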
Affiliation(s)
- Jing Zhu
- Zhejiang Lab, Hangzhou 311100, China
- Chiye Li
- Zhejiang Lab, Hangzhou 311100, China; Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311100, China
- Junhui Shi
- Zhejiang Lab, Hangzhou 311100, China; Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311100, China
9. Li J, Meng YC. Multikernel positional embedding convolutional neural network for photoacoustic reconstruction with sparse data. Appl Opt 2023;62:8506-8516. PMID: 38037963. DOI: 10.1364/ao.504094.
Abstract
Photoacoustic imaging (PAI) is an emerging noninvasive imaging modality that merges the high contrast of optical imaging with the high resolution of ultrasonic imaging. Low-quality photoacoustic reconstruction from sparse data, caused by sparse spatial sampling and limited-view detection, is a major obstacle to the adoption of PAI in medical applications. Deep learning has been considered the leading solution to this problem over the past decade. In this paper, we propose what we believe to be a novel architecture, DPM-UNet, consisting of a U-Net backbone augmented with a position embedding block, two multi-kernel-size convolution blocks, a dilated dense block, and a dilated multi-kernel-size convolution block. Our method was experimentally validated with both simulated and in vivo data, achieving an SSIM of 0.9824 and a PSNR of 33.2744 dB. The reconstructed images were also compared with those obtained by other advanced methods; the results show that DPM-UNet offers a clear advantage in image quality and memory consumption.
10. Zhu J, Huynh N, Ogunlade O, Ansari R, Lucka F, Cox B, Beard P. Mitigating the Limited View Problem in Photoacoustic Tomography for a Planar Detection Geometry by Regularized Iterative Reconstruction. IEEE Trans Med Imaging 2023;42:2603-2615. PMID: 37115840. DOI: 10.1109/tmi.2023.3271390.
Abstract
The use of a planar detection geometry in photoacoustic tomography results in the so-called limited-view problem due to the finite extent of the acoustic detection aperture. When images are reconstructed using one-step reconstruction algorithms, image quality is compromised by streaking artefacts, reduced contrast, image distortion and reduced signal-to-noise ratio. To mitigate this, model-based iterative reconstruction approaches based on least squares minimisation, with and without total variation regularization, were evaluated using in-silico, experimental phantom, ex vivo and in vivo data. Compared to one-step reconstruction methods, the iterative methods were shown to provide better image quality in terms of enhanced signal-to-artefact ratio, signal-to-noise ratio, amplitude accuracy and spatial fidelity. For the total variation approaches, the impact of the regularization parameter on image feature scale and amplitude distribution was evaluated, as was the extent to which Bregman iterations can compensate for the systematic amplitude bias introduced by total variation. This investigation is expected to inform the practical application of model-based iterative image reconstruction for improving photoacoustic image quality when using finite-aperture planar detection geometries.
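The least-squares model-based iteration evaluated in this work can be sketched in its simplest unregularized form as a Landweber iteration; the random forward operator below stands in for the acoustic model and is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_px, n_meas = 100, 150
A = rng.standard_normal((n_meas, n_px)) / np.sqrt(n_meas)  # toy forward operator
x_true = rng.random(n_px)                                  # "image" to recover
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)        # noisy measurements

# Landweber iteration: repeated gradient steps minimizing ||Ax - y||^2
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n_px)
for _ in range(2000):
    x = x + step * (A.T @ (y - A @ x))

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")  # small for this toy problem
```

The total-variation variants studied in the paper add a regularization term to this objective and replace the plain gradient step with a proximal update, trading a small amplitude bias for artefact suppression.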
11. Xu M, Ouyang Y, Yuan Z. Deep Learning Aided Neuroimaging and Brain Regulation. Sensors (Basel) 2023;23:4993. PMID: 37299724. DOI: 10.3390/s23114993.
Abstract
Deep-learning-aided medical imaging is currently a focal point of applied AI and a likely direction for precision neuroscience. This review aims to provide comprehensive and informative insights into recent progress in deep learning and its applications in medical imaging for brain monitoring and regulation. The article starts with an overview of current brain-imaging methods, highlighting their limitations and introducing the potential benefits of deep learning techniques in overcoming them. We then delve into the details of deep learning, explaining the basic concepts and providing examples of its use in medical imaging. A key strength of the review is its discussion of the different types of deep learning models applicable to medical imaging, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs), across magnetic resonance imaging (MRI), positron emission tomography (PET)/computed tomography (CT), electroencephalography (EEG)/magnetoencephalography (MEG), optical imaging, and other modalities. Overall, this review provides a useful reference at the intersection of deep-learning-aided neuroimaging and brain regulation.
Affiliation(s)
- Mengze Xu
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai 519087, China
- Centre for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau SAR 999078, China
- Yuanyuan Ouyang
- Nanomicro Sino-Europe Technology Company Limited, Zhuhai 519031, China
- Jiangfeng China-Portugal Technology Co., Ltd., Macau SAR 999078, China
- Zhen Yuan
- Centre for Cognitive and Brain Sciences, Institute of Collaborative Innovation, University of Macau, Macau SAR 999078, China
12. Wang R, Zhu J, Xia J, Yao J, Shi J, Li C. Photoacoustic imaging with limited sampling: a review of machine learning approaches. Biomed Opt Express 2023;14:1777-1799. PMID: 37078052. PMCID: PMC10110324. DOI: 10.1364/boe.483081.
Abstract
Photoacoustic imaging combines high optical absorption contrast and deep acoustic penetration, and can reveal structural, molecular, and functional information about biological tissue non-invasively. Due to practical restrictions, photoacoustic imaging systems often face various challenges, such as complex system configuration, long imaging time, and/or less-than-ideal image quality, which collectively hinder their clinical application. Machine learning has been applied to improve photoacoustic imaging and mitigate the otherwise strict requirements in system setup and data acquisition. In contrast to the previous reviews of learned methods in photoacoustic computed tomography (PACT), this review focuses on the application of machine learning approaches to address the limited spatial sampling problems in photoacoustic imaging, specifically the limited view and undersampling issues. We summarize the relevant PACT works based on their training data, workflow, and model architecture. Notably, we also introduce the recent limited sampling works on the other major implementation of photoacoustic imaging, i.e., photoacoustic microscopy (PAM). With machine learning-based processing, photoacoustic imaging can achieve improved image quality with modest spatial sampling, presenting great potential for low-cost and user-friendly clinical applications.
Affiliation(s)
- Ruofan Wang
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Jing Zhu
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Jun Xia
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Junhui Shi
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Chiye Li
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
13.
Zafar M, Manwar R, McGuire LS, Charbel FT, Avanaki K. Ultra-widefield and high-speed spiral laser scanning OR-PAM: System development and characterization. J Biophotonics 2023:e202200383. [PMID: 36998211] [DOI: 10.1002/jbio.202200383]
Abstract
Photoacoustic microscopy (PAM) is a high-resolution imaging modality that has mainly been implemented for small field-of-view applications. Here, we developed a fast PAM system that combines a unique spiral laser scanning mechanism with a wide acoustic detection unit. The developed system can image an area of 12.5 cm2 in 6.4 s. The system was characterized using highly detailed phantoms, and its imaging capabilities were further demonstrated by imaging a sheep brain ex vivo and a rat brain in vivo.
Affiliation(s)
- Mohsin Zafar
- Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
- Department of Biomedical Engineering, Wayne State University, Detroit, Michigan, USA
- Rayyan Manwar
- Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
- Laura S McGuire
- Department of Neurological Surgery, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Fady T Charbel
- Department of Neurological Surgery, University of Illinois at Chicago College of Medicine, Chicago, Illinois, USA
- Kamran Avanaki
- Richard and Loan Hill Department of Biomedical Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
- Department of Dermatology, University of Illinois at Chicago, Chicago, Illinois, USA
14.
Lan H, Yang C, Gao F. A jointed feature fusion framework for photoacoustic image reconstruction. Photoacoustics 2023; 29:100442. [PMID: 36589516] [PMCID: PMC9798177] [DOI: 10.1016/j.pacs.2022.100442]
Abstract
Standard reconstruction of photoacoustic (PA) computed tomography (PACT) images can introduce artifacts due to interference or an ill-posed setup. Recently, deep learning has been used to reconstruct PA images under such ill-posed conditions. In this paper, we propose a jointed feature fusion framework (JEFF-Net) based on deep learning to reconstruct PA images from limited-view data. Cross-domain features from the limited-view position-wise data and the reconstructed image are fused through a backtracked supervision. A quarter of the position-wise data (32 channels) is fed into the model, which outputs the remaining three-quarters-view data (96 channels). Moreover, two novel losses are designed to suppress artifacts by sufficiently manipulating the superposed data. The experimental results demonstrate superior performance, and quantitative evaluations show that our proposed method outperformed the ground truth in some metrics, with improvements of 135% (SSIM, simulation) and 40% (gCNR, in vivo).
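For readers unfamiliar with the SSIM metric quoted above, a minimal single-window ("global") SSIM can be sketched in a few lines of NumPy. This is an illustrative simplification with the common constants for images scaled to [0, 1], not the windowed SSIM implementation used in the paper:

```python
import numpy as np

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified single-window SSIM for two images scaled to [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + 0.1 * rng.standard_normal((64, 64)), 0.0, 1.0)
s_same = global_ssim(img, img)     # identical images score 1 (up to rounding)
s_noisy = global_ssim(img, noisy)  # a degraded copy scores strictly below 1
```

In practice SSIM is computed over local windows and averaged, but the per-window formula is exactly the ratio above.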
Affiliation(s)
- Hengrong Lan
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Changchun Yang
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Fei Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Shanghai Clinical Research and Trial Center, Shanghai 201210, China
15.
Hsu KT, Guan S, Chitnis PV. Fast iterative reconstruction for photoacoustic tomography using learned physical model: Theoretical validation. Photoacoustics 2023; 29:100452. [PMID: 36700132] [PMCID: PMC9867977] [DOI: 10.1016/j.pacs.2023.100452]
Abstract
Iterative reconstruction has demonstrated superior performance in medical imaging under compressed, sparse, and limited-view sensing scenarios. However, iterative reconstruction algorithms are slow to converge and rely heavily on hand-crafted parameters to achieve good performance. Many iterations are usually required to reconstruct a high-quality image, which is computationally expensive due to repeated evaluations of the physical model. While learned iterative reconstruction approaches such as model-based learning (MBLr) can reduce the number of iterations through convolutional neural networks, they still require repeated evaluations of the physical model at each iteration. Therefore, the goal of this study is to develop a Fast Iterative Reconstruction (FIRe) algorithm that incorporates a learned physical model into the learned iterative reconstruction scheme to further reduce reconstruction time while maintaining robust performance. We also propose an efficient training scheme for FIRe that relieves the enormous memory footprint of learned iterative reconstruction methods through the concept of recursive training. Our method achieves reconstruction performance comparable to learned iterative reconstruction methods with a 9x reduction in computation time, and a 620x reduction in computation time compared to variational reconstruction.
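The repeated physical-model evaluations that FIRe seeks to avoid can be seen in the classical counterpart of these schemes: gradient descent on the data-fidelity term, where every iteration applies the forward model A and its adjoint once. The sketch below uses a random linear operator as a stand-in for the acoustic forward model; it is a generic illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_meas = 64, 128
# Toy linear forward model standing in for acoustic propagation
A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)
x_true = np.zeros(n_pix)
x_true[10:20] = 1.0           # toy "initial pressure" distribution
y = A @ x_true                # simulated sensor data

x = np.zeros(n_pix)
step = 0.5
for _ in range(200):
    # Each iteration costs one forward (A) and one adjoint (A.T) evaluation,
    # descending the gradient of 0.5 * ||A x - y||^2.
    x = x - step * A.T @ (A @ x - y)

err = np.linalg.norm(x - x_true)   # residual shrinks toward 0
```

Learned iterative schemes replace or augment this update with a network, and FIRe additionally replaces A itself with a learned surrogate to cut the per-iteration cost.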
16.
Liu X, Dai S, Wang M, Zhang Y. Compressed Sensing Photoacoustic Imaging Reconstruction Using Elastic Net Approach. Mol Imaging 2022; 2022:7877049. [PMID: 36721731] [DOI: 10.1155/2022/7877049]
Abstract
Photoacoustic imaging involves reconstructing an estimate of the absorbed energy density distribution from measured ultrasound data. Reconstruction from incomplete and noisy experimental data is usually an ill-posed problem that requires regularization to obtain meaningful solutions. The purpose of this work is to propose an elastic network (EN) model to improve the quality of reconstructed photoacoustic images. To evaluate the performance of the proposed method, a series of numerical simulations and tissue-mimicking phantom experiments are performed. The results indicate that, compared with L1-norm- and L2-norm-based regularization methods across different numerical phantoms, Gaussian noise levels of 10-50 dB, and different regularization parameters, the EN method with α = 0.5 achieves better image quality, computation speed, and noise robustness.
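The elastic-net penalty referred to above blends the L1 and L2 norms, minimizing 0.5·||Ax − y||² + λ(α||x||₁ + 0.5(1 − α)||x||²), and can be solved with a proximal-gradient (ISTA-style) loop. The following is a generic sketch on a toy linear system, with hypothetical parameter values, not the paper's reconstruction code:

```python
import numpy as np

def elastic_net_ista(A, y, lam=0.02, alpha=0.5, n_iter=500):
    """Minimize 0.5*||Ax - y||^2 + lam*(alpha*||x||_1 + 0.5*(1-alpha)*||x||^2)."""
    L = np.linalg.norm(A, 2) ** 2 + lam * (1 - alpha)  # Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + lam * (1 - alpha) * x  # smooth part
        z = x - grad / L
        # Soft-thresholding: the proximal operator of the L1 term
        x = np.sign(z) * np.maximum(np.abs(z) - lam * alpha / L, 0.0)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((80, 40)) / np.sqrt(80)       # toy sensing matrix
x_true = np.zeros(40)
x_true[[3, 17, 30]] = [1.0, -0.7, 0.5]                # sparse "scene"
y = A @ x_true + 0.01 * rng.standard_normal(80)       # noisy measurements
x_hat = elastic_net_ista(A, y, lam=0.02, alpha=0.5)
```

With α = 1 this reduces to the pure L1 (LASSO) case and with α = 0 to ridge regression; α = 0.5, as used in the paper, weights the two penalties equally.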
17.
Menozzi L, Yang W, Feng W, Yao J. Sound out the impaired perfusion: Photoacoustic imaging in preclinical ischemic stroke. Front Neurosci 2022; 16:1055552. [PMID: 36532279] [PMCID: PMC9751426] [DOI: 10.3389/fnins.2022.1055552]
Abstract
By acoustically detecting optical absorption contrast, photoacoustic imaging (PAI) is a highly versatile imaging modality that can provide anatomical, functional, molecular, and metabolic information about biological tissues. PAI is highly scalable and can probe the same biological process at length scales ranging from single cells (microscopic) to the whole organ (macroscopic). Using hemoglobin as an endogenous contrast agent, PAI is capable of label-free imaging of blood vessels in the brain and of mapping hemodynamic functions such as blood oxygenation and blood flow. These merits make PAI a valuable tool for studying ischemic stroke, particularly for probing hemodynamic changes and impaired cerebral blood perfusion after stroke. In this narrative review, we summarize the scientific progress of the past decade in using PAI to monitor cerebral blood vessel impairment and restoration after ischemic stroke, mostly in the preclinical setting. We also outline and discuss the major technological barriers and challenges that must be overcome for PAI to play a more significant role in preclinical stroke research and, more importantly, to accelerate its translation into a useful clinical tool for the diagnosis and management of human stroke.
Affiliation(s)
- Luca Menozzi
- Department of Biomedical Engineering, Duke University, Durham, NC, United States
- Wei Yang
- Multidisciplinary Brain Protection Program, Department of Anesthesiology, Duke University, Durham, NC, United States
- Wuwei Feng
- Department of Neurology, Duke University School of Medicine, Durham, NC, United States
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC, United States
18.
Kye H, Song Y, Ninjbadgar T, Kim C, Kim J. Whole-Body Photoacoustic Imaging Techniques for Preclinical Small Animal Studies. Sensors (Basel) 2022; 22:5130. [PMID: 35890810] [PMCID: PMC9318812] [DOI: 10.3390/s22145130]
Abstract
Photoacoustic imaging is a hybrid imaging technique that has received considerable attention in biomedical studies. In contrast to pure optical imaging techniques, photoacoustic imaging enables the visualization of optical absorption properties at deeper imaging depths. In preclinical small animal studies, photoacoustic imaging is widely used to visualize biodistribution at the molecular level. Monitoring the whole-body distribution of chromophores in small animals is a key method used in preclinical research, including drug-delivery monitoring, treatment assessment, contrast-enhanced tumor imaging, and gastrointestinal tracking. In this review, photoacoustic systems for the whole-body imaging of small animals are explored and summarized. The configurations of the systems vary with the scanning methods and geometries of the ultrasound transducers. The future direction of research is also discussed with regard to achieving a deeper imaging depth and faster imaging speed, which are the main factors that an imaging system should realize to broaden its application in biomedical studies.
Affiliation(s)
- Hyunjun Kye
- Departments of Cogno-Mechatronics Engineering and Optics & Mechatronics Engineering, Pusan National University, Busan 46241, Korea
- Yuon Song
- Departments of Cogno-Mechatronics Engineering and Optics & Mechatronics Engineering, Pusan National University, Busan 46241, Korea
- Tsedendamba Ninjbadgar
- Departments of Cogno-Mechatronics Engineering and Optics & Mechatronics Engineering, Pusan National University, Busan 46241, Korea
- Chulhong Kim
- Departments of Convergence IT Engineering, Mechanical Engineering, and Electrical Engineering, School of Interdisciplinary Bioscience and Bioengineering, Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), Pohang 37673, Korea
- Jeesu Kim
- Departments of Cogno-Mechatronics Engineering and Optics & Mechatronics Engineering, Pusan National University, Busan 46241, Korea
19.
Yaqub M, Jinchao F, Arshid K, Ahmed S, Zhang W, Nawaz MZ, Mahmood T. Deep Learning-Based Image Reconstruction for Different Medical Imaging Modalities. Comput Math Methods Med 2022; 2022:8750648. [PMID: 35756423] [PMCID: PMC9225884] [DOI: 10.1155/2022/8750648]
Abstract
Image reconstruction in magnetic resonance imaging (MRI) and computed tomography (CT) is a mathematical process that generates images from measurements acquired at many different angles around the patient, and it has a fundamental impact on image quality. In recent years, the literature has focused on deep learning and its applications in medical imaging, particularly image reconstruction. Owing to the performance of deep learning models in a wide variety of vision applications, a considerable amount of recent work has applied them to image reconstruction in medical images. MRI and CT remain among the most widely used imaging modalities for identifying and diagnosing disease. This study reviews a number of deep learning image reconstruction approaches and provides a comprehensive overview of the most widely used databases. We also discuss the challenges and promising future directions for medical image reconstruction.
Affiliation(s)
- Muhammad Yaqub
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Feng Jinchao
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Kaleem Arshid
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Shahzad Ahmed
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Wenqian Zhang
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Muhammad Zubair Nawaz
- College of Science and Shanghai Institute of Intelligent Electronics and Systems, Donghua University, 24105 Songjiang District, Shanghai, China
- Tariq Mahmood
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Division of Science and Technology, University of Education, Lahore, Pakistan
20.
Maneas E, Hauptmann A, Alles EJ, Xia W, Vercauteren T, Ourselin S, David AL, Arridge S, Desjardins AE. Deep Learning for Instrumented Ultrasonic Tracking: From Synthetic Training Data to In Vivo Application. IEEE Trans Ultrason Ferroelectr Freq Control 2022; 69:543-552. [PMID: 34748488] [DOI: 10.1109/tuffc.2021.3126530]
Abstract
Instrumented ultrasonic tracking is used to improve needle localization during ultrasound guidance of minimally invasive percutaneous procedures. Here, it is implemented with transmitted ultrasound pulses from a clinical ultrasound imaging probe, which are detected by a fiber-optic hydrophone integrated into a needle. The detected transmissions are then reconstructed to form the tracking image. Two challenges arise with the current implementation of ultrasonic tracking. First, tracking transmissions are interleaved with the acquisition of B-mode images, which reduces the effective B-mode frame rate. Second, it is challenging to localize the needle tip accurately when the signal-to-noise ratio is low. To address these challenges, we present a framework based on a convolutional neural network (CNN) that maintains spatial resolution with fewer tracking transmissions and enhances signal quality. A major component of the framework is the generation of realistic synthetic training data. The trained network was applied to unseen synthetic data and to experimental in vivo tracking data, and needle localization was evaluated when reconstruction was performed with up to eightfold fewer tracking transmissions. CNN-based processing of conventional reconstructions showed that axial and lateral spatial resolution could be improved even with an eightfold reduction in tracking transmissions. The framework presented in this study will significantly improve the performance of ultrasonic tracking, leading to faster image acquisition and increased localization accuracy.
21.
Yongyue Z, Yang S, Li Z, Rongjin Z, Shumin W. Functional Brain Imaging Based on the Neurovascular Unit for Evaluating Neural Networks after Stroke. Advanced Ultrasound in Diagnosis and Therapy 2022. [DOI: 10.37015/audt.2022.210033]
22.
Rajendran P, Sharma A, Pramanik M. Photoacoustic imaging aided with deep learning: a review. Biomed Eng Lett. [DOI: 10.1007/s13534-021-00210-y]
23.
Hsu KT, Guan S, Chitnis PV. Comparing Deep Learning Frameworks for Photoacoustic Tomography Image Reconstruction. Photoacoustics 2021; 23:100271. [PMID: 34094851] [PMCID: PMC8165448] [DOI: 10.1016/j.pacs.2021.100271]
Abstract
Conventional reconstruction methods for photoacoustic images are not well suited to sparse sensing and geometrically limited detection. To overcome these challenges and enhance reconstruction quality, several learning-based methods have recently been introduced for photoacoustic tomography reconstruction. The goal of this study is to compare and systematically evaluate recently proposed learning-based methods and modified networks for photoacoustic image reconstruction. Specifically, learning-based post-processing methods and model-based learned iterative reconstruction methods are investigated. In addition to comparing the differences inherent to the models, we also study the impact of different inputs on reconstruction quality. Our results demonstrate that reconstruction performance stems mainly from the effective amount of information carried by the input; among learning-based post-processing methods, inherent model differences do not produce significant differences in reconstruction. Furthermore, the results indicate that the model-based learned iterative reconstruction method outperforms all the learning-based post-processing methods in terms of generalizability and robustness.
24.
Tian L, Hunt B, Bell MAL, Yi J, Smith JT, Ochoa M, Intes X, Durr NJ. Deep Learning in Biomedical Optics. Lasers Surg Med 2021; 53:748-775. [PMID: 34015146] [PMCID: PMC8273152] [DOI: 10.1002/lsm.23414]
Abstract
This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized.
Affiliation(s)
- L. Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA
- B. Hunt
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- M. A. L. Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- J. Yi
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Ophthalmology, Johns Hopkins University, Baltimore, MD, USA
- J. T. Smith
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- M. Ochoa
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- X. Intes
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- N. J. Durr
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
25.
Sun Z, Wang X, Yan X. An iterative gradient convolutional neural network and its application in endoscopic photoacoustic image formation from incomplete acoustic measurement. Neural Comput Appl 2021. [DOI: 10.1007/s00521-020-05607-x]
26.
Abstract
The rapidly evolving field of photoacoustic tomography utilizes endogenous chromophores to extract both functional and structural information from deep within tissues. It is this power to perform precise quantitative measurements in vivo, with endogenous or exogenous contrast, that makes photoacoustic tomography highly promising for clinical translation in functional brain imaging, early cancer detection, real-time surgical guidance, and the visualization of dynamic drug responses. Since photoacoustic tomography has benefited from numerous engineering innovations, it is no surprise that many of its current cutting-edge developments incorporate advances from the equally young field of artificial intelligence. More specifically, the growth of graphical processing unit capabilities in recent years has driven the rise of an offshoot of artificial intelligence known as deep learning. Rooted in the solid foundation of signal processing, deep learning typically uses an optimization method known as gradient descent to minimize a loss function and update model parameters. A number of innovative efforts in photoacoustic tomography already employ deep learning techniques for a variety of purposes, including resolution enhancement, reconstruction artifact removal, undersampling correction, and improved quantification. Most of these efforts have proven highly promising in addressing long-standing technical obstacles where traditional solutions either fail completely or make only incremental progress. This concise review traces the history of applied artificial intelligence in photoacoustic tomography, presents recent advances at this multifaceted intersection of fields, and outlines the advances most likely to propagate into future innovations.
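The gradient-descent optimization mentioned above, the workhorse beneath all of these deep-learning methods, reduces to the update rule w ← w − η·dL/dw. A minimal illustration on a one-parameter least-squares loss (a hypothetical toy problem, not an example from the review):

```python
# Fit a single weight w so that w * x approximates t, by gradient descent
# on the loss L(w) = mean((w * x - t)^2).
xs = [1.0, 2.0, 3.0, 4.0]
ts = [2.0, 4.0, 6.0, 8.0]  # targets lie exactly on the line t = 2 * x

w, lr = 0.0, 0.05
for _ in range(100):
    # dL/dw = mean(2 * (w*x - t) * x)
    grad = sum(2 * (w * x - t) * x for x, t in zip(xs, ts)) / len(xs)
    w -= lr * grad         # the gradient-descent update: w <- w - lr * dL/dw

# w converges to 2.0, the slope that minimizes the loss
```

Deep networks apply exactly this rule, only with millions of parameters and gradients computed by backpropagation.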
Affiliation(s)
- Anthony DiSpirito
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Tri Vu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore 637459, Singapore
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
27.
Gröhl J, Schellenberg M, Dreher K, Maier-Hein L. Deep learning for biomedical photoacoustic imaging: A review. Photoacoustics 2021; 22:100241. [PMID: 33717977] [PMCID: PMC7932894] [DOI: 10.1016/j.pacs.2021.100241]
Abstract
Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extraction of relevant tissue parameters from the raw data requires solving inverse image reconstruction problems, which have proven extremely difficult. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in the context of medical imaging and also finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and the fact that they can be adapted to any given problem. In this review, we examine the current state of the art regarding deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.
Affiliation(s)
- Janek Gröhl
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Melanie Schellenberg
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Kris Dreher
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Lena Maier-Hein
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
28.
Abstract
SIGNIFICANCE: Photoacoustic (PA) imaging can provide structural, functional, and molecular information for preclinical and clinical studies. For PA imaging (PAI), non-ideal signal detection deteriorates image quality, and quantitative PAI (QPAI) remains challenging due to the unknown light fluence spectra in deep tissue. In recent years, deep learning (DL) has shown outstanding performance when implemented in PAI, with applications in image reconstruction, quantification, and understanding.
AIM: We provide (i) a comprehensive overview of the DL techniques that have been applied in PAI, (ii) references for designing DL models for various PAI tasks, and (iii) a summary of the future challenges and opportunities.
APPROACH: Papers published before November 2020 in the area of applying DL in PAI were reviewed. We categorized them into three types: image understanding, reconstruction of the initial pressure distribution, and QPAI.
RESULTS: When applied in PAI, DL can effectively process images, improve reconstruction quality, fuse information, and assist quantitative analysis.
CONCLUSION: DL has become a powerful tool in PAI. With the development of DL theory and technology, it will continue to boost the performance and facilitate the clinical translation of PAI.
Affiliation(s)
- Handi Deng
- Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Hui Qiao
- Tsinghua University, Department of Automation, Haidian, Beijing, China
- Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
- Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
- Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Qionghai Dai
- Tsinghua University, Department of Automation, Haidian, Beijing, China
- Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
- Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
- Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Cheng Ma
- Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Beijing Innovation Center for Future Chip, Beijing, China
29.
Abstract
Assessing cancer response to therapeutic interventions is an important way to predict curative efficacy and treatment outcomes early, given tumor heterogeneity. Compared to traditional invasive tissue biopsy, molecular imaging techniques have fundamentally revolutionized the ability to evaluate cancer response in a spatiotemporal manner. The past few years have witnessed a paradigm shift from manufacturing functional molecular imaging probes for seeing a tumor toward interpreting the tumor response during different treatments. This review surveys the current development of advanced imaging technologies for predicting treatment response in cancer therapy. Special interest is placed on systems able to provide rapid and noninvasive assessment of pharmacokinetic drug fates (e.g., drug distribution, release, and activation) and tumor microenvironment heterogeneity (e.g., tumor cells, macrophages, dendritic cells (DCs), T cells, and inflammatory cells). The current status, practical significance, and future challenges of emerging artificial intelligence (AI) and machine learning in medical imaging are also reviewed. Ultimately, the authors hope that this review is timely to spur research interest in molecular imaging and precision medicine.
Affiliation(s)
- Changrong Shi
- State Key Laboratory of Molecular Vaccinology and Molecular Diagnostics and Center for Molecular Imaging and Translational Medicine, School of Public Health, Xiamen University, Xiamen, 361102, China
- Zijian Zhou
- State Key Laboratory of Molecular Vaccinology and Molecular Diagnostics and Center for Molecular Imaging and Translational Medicine, School of Public Health, Xiamen University, Xiamen, 361102, China
- Hongyu Lin
- State Key Laboratory of Physical Chemistry of Solid Surfaces, The Key Laboratory for Chemical Biology of Fujian Province and Department of Chemical Biology, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, 361005, China
- Jinhao Gao
- State Key Laboratory of Physical Chemistry of Solid Surfaces, The Key Laboratory for Chemical Biology of Fujian Province and Department of Chemical Biology, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, 361005, China
30
Yang C, Lan H, Gao F, Gao F. Review of deep learning for photoacoustic imaging. Photoacoustics 2021; 21:100215. [PMID: 33425679] [PMCID: PMC7779783] [DOI: 10.1016/j.pacs.2020.100215] [Received: 08/15/2020] [Revised: 10/11/2020] [Accepted: 10/11/2020] [Indexed: 05/02/2023]
Abstract
Machine learning has developed dramatically and found many applications across various fields over the past few years. This boom originated in 2009, when a new model, the deep artificial neural network, began to surpass other established, mature models on some important benchmarks; it was subsequently adopted widely in academia and industry. From image analysis to natural language processing, deep networks have proved remarkably effective and have now become the state-of-the-art machine learning models. Deep neural networks hold great potential for medical imaging technology, medical data analysis, medical diagnosis, and other healthcare issues, and are being promoted in both pre-clinical and even clinical stages. In this review, we give an overview of new developments and challenges in applying machine learning to medical image analysis, with a special focus on deep learning in photoacoustic imaging. The aim of this review is threefold: (i) to introduce deep learning along with some important basics, (ii) to review recent works that apply deep learning across the entire ecological chain of photoacoustic imaging, from image reconstruction to disease diagnosis, and (iii) to provide open-source materials and other resources for researchers interested in applying deep learning to photoacoustic imaging.
Affiliation(s)
- Changchun Yang
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai, 200050, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Hengrong Lan
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai, 200050, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Feng Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Fei Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
31
Abstract
Photoacoustic imaging is a hybrid biomedical imaging modality that is finding its way into clinical practice. Although the photoacoustic phenomenon was known more than a century ago, only in the last two decades has it been widely researched and used for biomedical imaging applications. In this review we focus on the development and progress of the technology in the last decade (2010-2020). Having become more user friendly, cheaper, and more portable, photoacoustic imaging promises a wide range of applications if translated to the clinic. The growth of the photoacoustic community is steady, and with the several new directions researchers are exploring, it is inevitable that photoacoustic imaging will one day establish itself as a regular imaging modality in clinical practice.
Affiliation(s)
- Dhiman Das
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, SINGAPORE
- Arunima Sharma
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, SINGAPORE
- Praveenbalaji Rajendran
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore, SINGAPORE
- Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, 70 Nanyang Drive, N1.3-B2-11, Singapore, 637457, SINGAPORE
32
Abstract
Photoacoustic tomography (PAT) has become increasingly popular for molecular imaging due to its unique optical absorption contrast, high spatial resolution, deep imaging depth, and high imaging speed. Yet, the strong optical attenuation of biological tissues has traditionally prevented PAT from penetrating more than a few centimeters, limiting its application to deep-seated targets. A variety of PAT technologies have been developed to extend the imaging depth, including employing deep-penetrating microwaves and X-ray photons as excitation sources, delivering the light to the inside of the organ, reshaping the light wavefront to better focus into scattering media, and improving the sensitivity of ultrasonic transducers. At the same time, novel optical fluence mapping algorithms and image reconstruction methods have been developed to improve the quantitative accuracy of PAT, which is crucial for recovering weak molecular signals at larger depths. The development of highly absorbing near-infrared PA molecular probes has also flourished, providing high sensitivity and specificity for studying cellular processes. This review introduces recent developments in deep PA molecular imaging, including novel imaging systems, image processing methods, and molecular probes, as well as their representative biomedical applications. Existing challenges and future directions are also discussed.
Affiliation(s)
- Mucong Li
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Nikhila Nyayapathi
- Department of Biomedical Engineering, University of Buffalo, NY, USA
- Hailey I Kilian
- Department of Biomedical Engineering, University of Buffalo, NY, USA
- Jun Xia
- Department of Biomedical Engineering, University of Buffalo, NY, USA
- Jonathan F Lovell
- Department of Biomedical Engineering, University of Buffalo, NY, USA
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
33
Sharma A, Pramanik M. Convolutional neural network for resolution enhancement and noise reduction in acoustic resolution photoacoustic microscopy. Biomed Opt Express 2020; 11:6826-6839. [PMID: 33408964] [PMCID: PMC7747888] [DOI: 10.1364/boe.411257] [Received: 09/29/2020] [Revised: 10/24/2020] [Accepted: 10/24/2020] [Indexed: 05/03/2023]
Abstract
In acoustic resolution photoacoustic microscopy (AR-PAM), a high numerical aperture focused ultrasound transducer (UST) is used for deep-tissue, high-resolution photoacoustic imaging. Lateral resolution degrades significantly in the out-of-focus region, and improving out-of-focus resolution without degrading image quality remains a challenge. In this work, we propose a deep learning-based method to improve the resolution of AR-PAM images, especially in the out-of-focus plane. A modified fully dense U-Net based architecture was trained on simulated AR-PAM images. Applying the trained model to experimental images showed that the variation in resolution is ∼10% across the entire imaging depth (∼4 mm) with the deep learning-based method, compared to ∼180% variation in the original PAM images. Performance of the trained network on in vivo rat vasculature imaging further validated that noise-free, high-resolution images can be obtained using this method.
Affiliation(s)
- Arunima Sharma
- School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, 637459, Singapore
- Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, 637459, Singapore