1. Xiao Y, Shen Y, Liao S, Yao B, Cai X, Zhang Y, Gao F. Limited-view photoacoustic imaging reconstruction via high-quality self-supervised neural representation. Photoacoustics 2025;42:100685. [PMID: 39931293; PMCID: PMC11808520; DOI: 10.1016/j.pacs.2025.100685]
Abstract
In practical applications within the human body, detector arrays often cannot fully encompass the target tissue or organ, necessitating the use of limited-view arrays and leading to the loss of crucial information. Reconstructing photoacoustic images from sensor signals acquired over limited detection views has therefore become a focal point of current research. In this study, we introduce a self-supervised network termed HIgh-quality Self-supervised neural representation (HIS), which tackles the inverse problem of photoacoustic imaging to reconstruct high-quality photoacoustic images from sensor data acquired under limited viewpoints. We regard the desired reconstructed photoacoustic image as an implicit continuous function over 2D image space, viewing the pixels of the image as sparse discrete samples. The objective of HIS is to learn this continuous function from limited observations by using a fully connected neural network combined with Fourier feature position encoding. By simply minimizing the error between the network's predicted sensor data and the actual sensor data, HIS is trained to represent the observed continuous model. The results indicate that the proposed HIS model offers superior image reconstruction quality compared with three commonly used methods for photoacoustic image reconstruction.
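The core idea of HIS, fitting a coordinate-based network with Fourier feature positional encoding directly to the measured sensor data, can be sketched roughly as follows. This is an illustrative sketch rather than the authors' implementation: the network width, the Fourier feature scale, the optimizer settings, and the differentiable `forward_project` operator (mapping the predicted image to sensor data) are all assumptions that a reader would have to supply.

```python
import torch
import torch.nn as nn

class FourierFeatureMLP(nn.Module):
    """Maps 2D pixel coordinates to an initial-pressure value via Fourier features + MLP."""
    def __init__(self, num_features=128, scale=10.0, hidden=256):
        super().__init__()
        # Random Fourier feature projection matrix (fixed, not trained).
        self.register_buffer("B", torch.randn(2, num_features) * scale)
        self.net = nn.Sequential(
            nn.Linear(2 * num_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):  # coords: (N, 2) in [0, 1]
        proj = 2 * torch.pi * coords @ self.B
        feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        return self.net(feats)

def train_his(model, coords, forward_project, measured_signals, steps=2000, lr=1e-3):
    """Self-supervised fitting: match predicted sensor data to the measured data.

    `forward_project` is a placeholder for a differentiable acoustic forward
    operator (image -> sensor signals); it is not defined here.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        image = model(coords)               # predicted initial pressure on the pixel grid
        predicted = forward_project(image)  # simulate limited-view sensor data
        loss = torch.mean((predicted - measured_signals) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```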
Affiliation(s)
- Youshen Xiao, Yuting Shen, Sheng Liao, Bowei Yao, Xiran Cai, Yuyao Zhang: School of Information Science and Technology, ShanghaiTech University, No. 393 HuaXia Middle Road, Pudong New Dist., 201210, China
- Fei Gao: School of Biomedical Engineering, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, 230026, China; Hybrid Imaging System Laboratory, Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, Jiangsu, 215123, China; School of Engineering Science, University of Science and Technology of China, Hefei, Anhui, 230026, China
2. Zhang S, Li J, Shen L, Zhao Z, Lee M, Qian K, Sun N, Hu B. Structure and oxygen saturation recovery of sparse photoacoustic microscopy images by deep learning. Photoacoustics 2025;42:100687. [PMID: 39896070; PMCID: PMC11787619; DOI: 10.1016/j.pacs.2025.100687]
Abstract
Photoacoustic microscopy (PAM) leverages the photoacoustic effect to provide high-resolution structural and functional imaging. However, achieving high-speed imaging with high spatial resolution remains challenging. To address this, undersampling and deep learning have emerged as common techniques to enhance imaging speed. Yet, existing methods rarely achieve effective recovery of functional images. In this study, we propose Mask-enhanced U-net (MeU-net) for recovering sparsely sampled PAM structural and functional images. The model utilizes dual-channel input, processing photoacoustic data from 532 nm and 558 nm wavelengths. Additionally, we introduce an adaptive vascular attention mask module that focuses on vascular information recovery and design a vessel-specific loss function to enhance restoration accuracy. We simulate data from mouse brain and ear imaging under various levels of sparsity (4×, 8×, 12×) and conduct extensive experiments. The results demonstrate that MeU-net significantly outperforms traditional interpolation methods and other representative models in structural information and oxygen saturation recovery.
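As a rough illustration of the vessel-specific loss idea mentioned above (weighting reconstruction errors more heavily inside vascular regions), one possible form is sketched below; the weighting factor and the source of the vessel mask are assumptions, not values taken from the paper.

```python
import torch

def vessel_weighted_loss(pred, target, vessel_mask, vessel_weight=5.0):
    """Illustrative vessel-specific loss: pixels inside a vascular mask are weighted
    more heavily than background, so the network focuses on vessel recovery.
    `vessel_mask` is a float tensor in [0, 1]; the weight value is an assumption."""
    weights = 1.0 + (vessel_weight - 1.0) * vessel_mask
    return torch.mean(weights * (pred - target) ** 2)
```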
Affiliation(s)
- Shuyan Zhang, Jingtan Li, Lin Shen, Zhonghao Zhao, Kun Qian, Naidi Sun, Bin Hu: Key Laboratory of Brain Health Intelligent Evaluation and Intervention (Beijing Institute of Technology), Ministry of Education, China; School of Medical Technology, Beijing Institute of Technology, China
- Minjun Lee: Key Laboratory of Brain Health Intelligent Evaluation and Intervention (Beijing Institute of Technology), Ministry of Education, China; School of Computer Science and Technology, Beijing Institute of Technology, China
3. Mondal S, Paul S, Singh N, Warbal P, Khanam Z, Saha RK. Deep learning aided determination of the optimal number of detectors for photoacoustic tomography. Biomed Phys Eng Express 2025;11:025029. [PMID: 39874604; DOI: 10.1088/2057-1976/adaf29]
Abstract
Photoacoustic tomography (PAT) is a non-destructive, non-ionizing, and rapidly expanding hybrid biomedical imaging technique, yet it faces challenges in obtaining clear images due to limited data from detectors or angles. As a result, the methodology suffers from significant streak artifacts and low-quality images. The integration of deep learning (DL), specifically convolutional neural networks (CNNs), has recently demonstrated powerful performance in various fields of PAT. This work introduces a post-processing-based CNN architecture named residual-dense UNet (RDUNet) to address the streak artifacts in reconstructed PA images. The framework adopts the benefits of residual and dense blocks to form high-resolution reconstructed images. The network is trained with two different types of datasets to learn the relationship between the reconstructed images and their corresponding ground truths (GTs). In the first protocol, RDUNet (identified as RDUNet I) underwent training on heterogeneous simulated images featuring three distinct phantom types. Subsequently, in the second protocol, RDUNet (referred to as RDUNet II) was trained on a heterogeneous composition of 81% simulated data and 19% experimental data. The motivation behind this is to allow the network to adapt to diverse experimental challenges. The RDUNet algorithm was validated by performing numerical and experimental studies involving single-disk, T-shape, and vasculature phantoms. The performance of both protocols was compared with the widely used backprojection (BP) and the traditional UNet algorithms. This study shows that RDUNet can substantially reduce the number of detectors required, from 100 to 25 for simulated testing images and to 30 for experimental scenarios.
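For orientation, the kind of residual-dense building block implied by the RDUNet name can be sketched as follows; the channel count, growth rate, and layer depth are illustrative assumptions rather than the paper's actual configuration.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Densely connected convolutions followed by a residual shortcut, of the kind
    used in residual-dense U-Nets. All sizes here are illustrative assumptions."""
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth
        # 1x1 convolution fuses the concatenated features back to `channels`.
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))  # dense connections
        return x + self.fuse(torch.cat(feats, dim=1))      # residual shortcut
```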
Affiliation(s)
- Sudeep Mondal, Subhadip Paul, Pankaj Warbal, Zartab Khanam, Ratan K Saha: Department of Applied Sciences, Indian Institute of Information Technology Allahabad, Prayagraj, 211015, India
- Navjot Singh: Department of Information Technology, Indian Institute of Information Technology Allahabad, Prayagraj, 211015, India
4. Wang L, Meng YC, Qian Y. MSD-Net: Multi-scale dense convolutional neural network for photoacoustic image reconstruction with sparse data. Photoacoustics 2025;41:100679. [PMID: 39802237; PMCID: PMC11720879; DOI: 10.1016/j.pacs.2024.100679]
Abstract
Photoacoustic imaging (PAI) is an emerging hybrid imaging technology that combines the advantages of optical and ultrasound imaging. Despite its excellent imaging capabilities, PAI still faces numerous challenges in clinical applications, particularly sparse spatial sampling and limited-view detection. These limitations often result in severe streak artifacts and blurring when using standard methods to reconstruct images from incomplete data. In this work, we propose an improved convolutional neural network (CNN) architecture, called multi-scale dense UNet (MSD-Net), to correct artifacts in 2D photoacoustic tomography (PAT). MSD-Net exploits the advantages of multi-scale information fusion and dense connections to improve CNN performance. Experimental validation with both simulated and in vivo datasets demonstrates that our method achieves better reconstructions with improved speed.
Affiliation(s)
- Liangjie Wang, Yi-Chao Meng, Yiming Qian: Institute of Fiber Optics, Shanghai University, Shanghai 201800, China; Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Joint International Research Laboratory of Specialty Fiber Optics and Advanced Communication, Shanghai University, Shanghai 200444, China
5. Li Z, Lin J, Wang Y, Li J, Cao Y, Liu X, Wan W, Liu Q, Song X. Ultra-sparse reconstruction for photoacoustic tomography: Sinogram domain prior-guided method exploiting enhanced score-based diffusion model. Photoacoustics 2025;41:100670. [PMID: 39687486; PMCID: PMC11648917; DOI: 10.1016/j.pacs.2024.100670]
Abstract
Photoacoustic tomography, a novel non-invasive imaging modality, combines the principles of optical and acoustic imaging for use in biomedical applications. In scenarios where photoacoustic signal acquisition is insufficient due to sparse-view sampling, conventional direct reconstruction methods significantly degrade image resolution and generate numerous artifacts. To mitigate these constraints, a novel sinogram-domain prior-guided reconstruction method for extremely sparse-view photoacoustic tomography, boosted by an enhanced diffusion model, is proposed. The model learns prior information from the data distribution of sinograms acquired with a full-ring array under 512 projections. In iterative reconstruction, the prior information serves as a constraint in least-squares optimization, facilitating convergence towards more plausible solutions. The performance of the method is evaluated using blood vessel simulations, phantoms, and in vivo experimental data. Subsequently, the transformation of the reconstructed sinograms into the image domain is achieved through the delay-and-sum method, enabling a thorough assessment of the proposed method. The results show that the proposed method demonstrates superior performance compared to the U-Net method, yielding images of markedly higher quality. Notably, for in vivo data under 32 projections, the sinogram structural similarity improved by ∼21% over U-Net, and the image structural similarity increased by ∼51% and ∼84% compared to U-Net and delay-and-sum methods, respectively. The reconstruction in the sinogram domain for photoacoustic tomography enhances sparse-view imaging capabilities, potentially expanding the applications of photoacoustic tomography.
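The general pattern described above, alternating a least-squares data-consistency update with a learned prior step, can be sketched generically as follows. This is a simplified illustration, not the paper's algorithm: the system matrix `A`, the step size, and the `denoise` callable standing in for the learned diffusion prior are assumptions.

```python
import numpy as np

def reconstruct_with_prior(y, A, denoise, num_iters=50, step=0.1):
    """Alternate a gradient step on the data-fit term with a learned prior step.

    y       : measured sparse-view data (flattened), shape (m,)
    A       : linear operator mapping the full sinogram to the sparse measurements, shape (m, n)
    denoise : callable implementing the learned prior (e.g. one reverse-diffusion
              step); supplied by the caller, not defined here.
    """
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        # Gradient step on 0.5 * ||A x - y||^2 enforces consistency with measurements.
        grad = A.T @ (A @ x - y)
        x = x - step * grad
        # Prior step pulls the estimate toward the learned sinogram distribution.
        x = denoise(x)
    return x
```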
Affiliation(s)
- Yiguang Wang, Jiahong Li, Yubin Cao, Xuan Liu, Wenbo Wan, Qiegen Liu, Xianlin Song: School of Information Engineering, Nanchang University, Nanchang 330031, China
6. Zhang Z, Pan Z, Lin Z, Sharma A, Lin CW, Pramanik M, Zheng Y. Acoustic resolution photoacoustic microscopy imaging enhancement: Integration of group sparsity with deep denoiser prior. IEEE Trans Image Process 2025;PP:522-537. [PMID: 40030994; DOI: 10.1109/tip.2025.3526065]
Abstract
Acoustic resolution photoacoustic microscopy (AR-PAM) is a novel medical imaging modality, which can be used for both structural and functional imaging in deep bio-tissue. However, its imaging resolution is degraded and structural details are lost due to its dependence on acoustic focusing, which significantly constrains its scope of applications in medical and clinical scenarios. To address this issue, model-based approaches incorporating traditional analytical prior terms have been employed, but such handcrafted priors make it challenging to capture finer details of anatomical bio-structures. In this paper, we propose an innovative prior named the group sparsity prior for simultaneous reconstruction, which utilizes the non-local structural similarity between patches extracted from internal AR-PAM images. This improves local image details and resolution, but patch-based reconstruction also introduces artifacts. To mitigate these artifacts, we further integrate an external image dataset as an extra information provider and consolidate the group sparsity prior with a deep denoiser prior. In this way, complementary information can be exploited to improve reconstruction results. Extensive experiments are conducted to enhance the simulated and in vivo AR-PAM imaging results. Specifically, in the simulated images, the mean peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) values increased from 16.36 dB and 0.46 to 27.62 dB and 0.92, respectively. The in vivo reconstructed results also demonstrate that the proposed method achieves superior local and global perceptual quality; the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) metrics increased significantly from 10.59 and 8.61 to 30.83 and 27.54, respectively. Additionally, reconstruction fidelity is validated using optical resolution photoacoustic microscopy (OR-PAM) data as the reference.
7. Liu Z, Zhou X, Yang H, Zhang Q, Zhou L, Wu Y, Liu Q, Yan W, Song J, Ding M, Yuchi M, Qiu W. Reconstruction of reflection ultrasound computed tomography with sparse transmissions using conditional generative adversarial network. Ultrasonics 2025;145:107486. [PMID: 39426346; DOI: 10.1016/j.ultras.2024.107486]
Abstract
Ultrasound computed tomography (UCT) has attracted increasing attention due to its potential for early breast cancer diagnosis and screening. Synthetic aperture imaging is a widely used means for reflection UCT image reconstruction, owing to its ability to produce isotropic, high-resolution anatomical images. However, obtaining fully sampled UCT data from all directions over multiple transmissions is a time-consuming scanning process. Although a sparse transmission strategy can mitigate the data acquisition burden, the image quality reconstructed by traditional delay-and-sum (DAS) methods may degrade substantially. This study presents a deep learning framework based on a conditional generative adversarial network, UCT-GAN, to efficiently reconstruct reflection UCT images from sparse transmission data. Evaluation experiments using in vivo breast imaging data show that the proposed UCT-GAN is able to generate high-quality reflection UCT images using only 8 transmissions, comparable to those reconstructed from data acquired with 512 transmissions. Quantitative assessment in terms of peak signal-to-noise ratio (PSNR), normalized mean square error (NMSE), and structural similarity index measurement (SSIM) shows that the proposed UCT-GAN efficiently reconstructs high-quality reflection UCT images from sparsely available transmission data, outperforming several other methods such as RED-GAN, DnCNN-GAN, and BM3D. In the experiment with 8-transmission sparse data, the PSNR is 29.52 dB and the SSIM is 0.7619. The proposed method has the potential to be integrated into UCT imaging systems for clinical use.
Affiliation(s)
- Zhaohui Liu, Xiang Zhou, Hantao Yang, Qiude Zhang, Liang Zhou, Yun Wu, Quanquan Liu, Weicheng Yan, Mingyue Ding, Ming Yuchi, Wu Qiu: Department of Biomedical Engineering, School of Life Science and Technology, Key Laboratory of Molecular Biophysics of Education Ministry of China, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Junjie Song: Wesee Medical Imaging Co., Ltd, Wuhan, Hubei, China
8. Tasmara FA, Mitrayana M, Setiawan A, Ishii T, Saijo Y, Widyaningrum R. Trends and developments in 3D photoacoustic imaging systems: A review of recent progress. Med Eng Phys 2025;135:104268. [PMID: 39922642; DOI: 10.1016/j.medengphy.2024.104268]
Abstract
Photoacoustic imaging (PAI) is a non-invasive diagnostic imaging technique that utilizes the photoacoustic effect by combining optical and ultrasound imaging systems. The development of PAI is mostly centered on the generation of high-quality 3D reconstruction systems for more optimal and accurate identification of tissue abnormalities. This literature study was conducted to analyze 3D image reconstruction in PAI over 2017-2024. In this review, the collected articles on 3D photoacoustic imaging were categorized based on the approach, design, and purpose of each study. First, the approaches of the studies were classified into three groups: experimental studies, numerical simulation, and numerical simulation with experimental validation. Second, the design of each study was assessed based on the photoacoustic modality, laser type, and sensing mechanism. Third, the purposes of the collected studies were summarized into seven subsections: image quality improvement, frame rate improvement, image segmentation, system integration, inter-system comparisons, improving computational efficiency, and portable system development. The results of this review reveal that 3D PAI systems have been developed by various research groups for the investigation of numerous biological objects. Therefore, 3D PAI has the potential to yield a wide range of novel biological imaging systems that support real-time biomedical imaging in the future.
Affiliation(s)
- Fikhri Astina Tasmara, Mitrayana Mitrayana: Department of Physics, Faculty of Mathematics and Natural Sciences, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Andreas Setiawan: Department of Physics, Universitas Kristen Satya Wacana, Salatiga, Central Java, Indonesia
- Takuro Ishii: Frontier Research Institute for Interdisciplinary Sciences, Tohoku University, Sendai, Japan; Graduate School of Biomedical Engineering, Tohoku University, Sendai, Japan
- Yoshifumi Saijo: Graduate School of Biomedical Engineering, Tohoku University, Sendai, Japan
- Rini Widyaningrum: Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta, Indonesia
9. Sun M, Wang X, Wang Y, Meng Y, Gao D, Li C, Chen R, Huang K, Shi J. Full-view volumetric photoacoustic imaging using a hemispheric transducer array combined with an acoustic reflector. Biomed Opt Express 2024;15:6864-6876. [PMID: 39679402; PMCID: PMC11640568; DOI: 10.1364/boe.540392]
Abstract
Photoacoustic computed tomography (PACT) has evoked extensive interest for applications in preclinical and clinical research. However, the current systems suffer from the limited view provided by detection setups, thus impeding the sufficient acquisition of intricate tissue structures. Here, we propose an approach to enable fast 3D full-view imaging. A hemispherical ultrasonic transducer array combined with a planar acoustic reflector serves as the ultrasonic detection device in the PACT system. The planar acoustic reflector can create a mirrored virtual transducer array, and the detection view range can be enlarged to cover approximately 3.7 π steradians in our detection setup. To verify the effectiveness of our proposed configuration, we present the imaging results of a hair phantom, an in vivo zebrafish larva, and a leaf skeleton phantom. Furthermore, the real-time dynamic imaging capacity of this system is demonstrated by observing the movement of zebrafish within 2 s. This strategy holds great potential for both preclinical and clinical research by providing more detailed and comprehensive images of biological tissues.
Affiliation(s)
- Mingli Sun: School of Science, Zhejiang University of Science and Technology, Hangzhou 310023, China
- Yuqi Wang, Da Gao, Chiye Li: Zhejiang Lab, Hangzhou 311100, China
- Kaikai Huang: School of Physics, Zhejiang University, Hangzhou 310027, China
10. Song X, Zou X, Zeng K, Li J, Hou S, Wu Y, Li Z, Ma C, Zheng Z, Guo K, Liu Q. Multiple diffusion models-enhanced extremely limited-view reconstruction strategy for photoacoustic tomography boosted by multi-scale priors. Photoacoustics 2024;40:100646. [PMID: 39351140; PMCID: PMC11440308; DOI: 10.1016/j.pacs.2024.100646]
Abstract
Photoacoustic tomography (PAT) is an innovative biomedical imaging technology, which has the capacity to obtain high-resolution images of biological tissue. In extremely limited-view cases, traditional reconstruction methods for photoacoustic tomography frequently produce severe artifacts and distortion. Therefore, a reconstruction strategy for PAT enhanced by multiple diffusion models is proposed in this study. Boosted by multi-scale priors learned from sinograms obtained in the full-view case and in the limited-view case of 240°, an alternating iteration method is adopted to generate data for the missing views in the sinogram domain. The strategy refines the image information from global to local, which improves the stability of the reconstruction process and promotes high-quality PAT reconstruction. A blood vessel simulation dataset and an in vivo experimental dataset were utilized to assess the performance of the proposed method. When applied to the in vivo experimental dataset in the limited-view case of 60°, the proposed method demonstrates a significant enhancement in peak signal-to-noise ratio and structural similarity of 23.08% and 7.14%, respectively, while concurrently reducing mean squared error by 108.91% compared to the traditional method. The results indicate that the proposed approach achieves superior reconstruction quality in extremely limited-view cases when compared to other methods. This innovative approach offers a promising pathway for extremely limited-view PAT reconstruction, with potential implications for expanding its utility in clinical diagnostics.
Affiliation(s)
- Xianlin Song, Jiahong Li, Shangkun Hou, Zilong Li, Zhiyuan Zheng, Kangjun Guo, Qiegen Liu: School of Information Engineering, Nanchang University, Nanchang 330031, China
- Xueyang Zou, Yuhua Wu: Ji luan Academy, Nanchang University, Nanchang 330031, China
- Kaixin Zeng: School of Mathematics and Computer Science, Nanchang University, Nanchang 330031, China
- Cheng Ma: Department of Electronic Engineering, Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
11. Zhu X, Menozzi L, Cho SW, Yao J. High speed innovations in photoacoustic microscopy. npj Imaging 2024;2:46. [PMID: 39525278; PMCID: PMC11541221; DOI: 10.1038/s44303-024-00052-0]
Abstract
Photoacoustic microscopy (PAM) is a key implementation of photoacoustic imaging (PAI). PAM merges rich optical contrast with deep acoustic detection, allowing for broad biomedical research and diverse clinical applications. Recent advancements in PAM technology have dramatically improved its imaging speed, enabling real-time observation of dynamic biological processes in vivo and motion-sensitive targets in situ, such as brain activities and placental development. This review introduces the engineering principles of high-speed PAM, focusing on various excitation and detection methods, each presenting unique benefits and challenges. Driven by these technological innovations, high-speed PAM has expanded its applications across fundamental, preclinical, and clinical fields. We highlight these notable applications, discuss ongoing technical challenges, and outline future directions for the development of high-speed PAM.
Affiliation(s)
- Xiaoyi Zhu, Luca Menozzi, Soon-Woo Cho, Junjie Yao: Department of Biomedical Engineering, Duke University, Durham, NC, USA
12. Lan H, Huang L, Wei X, Li Z, Lv J, Ma C, Nie L, Luo J. Masked cross-domain self-supervised deep learning framework for photoacoustic computed tomography reconstruction. Neural Netw 2024;179:106515. [PMID: 39032393; DOI: 10.1016/j.neunet.2024.106515]
Abstract
Accurate image reconstruction is crucial for photoacoustic (PA) computed tomography (PACT). Recently, deep learning has been used to reconstruct PA images with a supervised scheme, which requires high-quality images as ground truth labels. However, practical implementations encounter inevitable trade-offs between cost and performance due to the expense of employing additional channels to access more measurements. Here, we propose a masked cross-domain self-supervised (CDSS) reconstruction strategy to overcome the lack of ground truth labels from limited PA measurements. We implement the self-supervised reconstruction in a model-based form. Simultaneously, we take advantage of self-supervision to enforce the consistency of measurements and images across three partitions of the measured PA data, achieved by randomly masking different channels. Our findings indicate that dynamically masking a substantial proportion of channels, such as 80%, yields meaningful self-supervisors in both the image and signal domains. Consequently, this approach reduces the multiplicity of pseudo solutions and enables efficient image reconstruction using fewer PA measurements, ultimately minimizing reconstruction error. Experimental results on an in vivo PACT dataset of mice demonstrate the potential of our self-supervised framework. Moreover, our method exhibits impressive performance, achieving a structural similarity index (SSIM) of 0.87 in an extremely sparse case utilizing only 13 channels, which outperforms the supervised scheme with 16 channels (0.77 SSIM). Adding to its advantages, our method can be deployed on different trainable models in an end-to-end manner, further enhancing its versatility and applicability.
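The channel-masking idea at the heart of this self-supervision scheme can be illustrated with a small helper that draws random channel subsets; the 20% keep ratio mirrors the roughly 80% masking quoted in the abstract, while the partitioning details are assumptions.

```python
import numpy as np

def random_channel_masks(num_channels, keep_ratio=0.2, num_partitions=3, seed=0):
    """Split transducer channels into random subsets for masked self-supervision:
    each partition keeps only `keep_ratio` of the channels, and reconstructions
    from different partitions are encouraged to agree. The partitioning scheme
    here is an illustrative assumption, not the paper's exact procedure."""
    rng = np.random.default_rng(seed)
    masks = []
    for _ in range(num_partitions):
        keep = rng.choice(num_channels, size=int(keep_ratio * num_channels), replace=False)
        mask = np.zeros(num_channels, dtype=bool)
        mask[keep] = True
        masks.append(mask)
    return masks

# Example: a 128-channel array split into three 20% partitions used as mutual self-supervisors.
masks = random_channel_masks(128)
```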
Affiliation(s)
- Hengrong Lan, Lijie Huang, Xingyue Wei, Zhiqiang Li, Jianwen Luo: School of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Jing Lv, Liming Nie: Medical Research Institute, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Southern Medical University, Guangzhou 510080, China
- Cheng Ma: Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
13. Zhong Y, Zhang X, Mo Z, Zhang S, Nie L, Chen W, Qi L. Spiral scanning and self-supervised image reconstruction enable ultra-sparse sampling multispectral photoacoustic tomography. Photoacoustics 2024;39:100641. [PMID: 39676906; PMCID: PMC11639357; DOI: 10.1016/j.pacs.2024.100641]
Abstract
Multispectral photoacoustic tomography (PAT) is an imaging modality that utilizes the photoacoustic effect to achieve non-invasive, high-contrast imaging of internal tissues while also providing molecular functional information derived from multispectral measurements. However, the hardware cost and computational demand of a multispectral PAT system consisting of up to thousands of detectors are substantial. To address this challenge, we propose an ultra-sparse spiral sampling strategy for multispectral PAT, which we named U3S-PAT. Our strategy employs a sparse ring-shaped transducer that simultaneously rotates and translates when switching excitation wavelengths. This creates a spiral scanning pattern with multispectral angle-interlaced sampling. To solve the highly ill-conditioned image reconstruction problem, we propose a self-supervised learning method that is able to introduce structural information shared during spiral scanning. We simulate the proposed U3S-PAT method on a commercial PAT system and conduct in vivo animal experiments to verify its performance. The results show that even with a sparse sampling rate as low as 1/30, our U3S-PAT strategy achieves similar reconstruction and spectral unmixing accuracy as non-spiral dense sampling. Given its ability to dramatically reduce the time required for three-dimensional multispectral scanning, our U3S-PAT strategy has the potential to perform volumetric molecular imaging of dynamic biological activities.
Affiliation(s)
- Yutian Zhong, Xiaoming Zhang, Zongxin Mo, Shuangyang Zhang, Wufan Chen, Li Qi: School of Biomedical Engineering, Southern Medical University, Guangzhou, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Liming Nie: School of Biomedical Engineering, Southern Medical University, Guangzhou, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
14. Li B, Lu M, Zhou T, Bu M, Gu W, Wang J, Zhu Q, Liu X, Ta D. Removing artifacts in transcranial photoacoustic imaging with polarized self-attention Dense-UNet. Ultrasound Med Biol 2024;50:1530-1543. [PMID: 39013725; DOI: 10.1016/j.ultrasmedbio.2024.06.006]
Abstract
OBJECTIVE: Photoacoustic imaging (PAI) is a promising transcranial imaging technique. However, the distortion of photoacoustic signals induced by the skull significantly influences its imaging quality. We aimed to use deep learning for removing artifacts in PAI.
METHODS: In this study, we propose a polarized self-attention dense U-Net, termed PSAD-UNet, to correct the distortion and accurately recover imaged objects beneath bone plates. To evaluate the performance of the proposed method, a series of experiments was performed using a custom-built PAI system.
RESULTS: The experimental results showed that the proposed PSAD-UNet method could effectively implement transcranial PAI through a one- or two-layer bone plate. Compared with the conventional delay-and-sum and classical U-Net methods, PSAD-UNet can diminish the influence of bone plates and provide high-quality PAI results in terms of structural similarity and peak signal-to-noise ratio. The 3-D experimental results further confirm the feasibility of PSAD-UNet in 3-D transcranial imaging.
CONCLUSION: PSAD-UNet paves the way for implementing transcranial PAI with high imaging accuracy, which reveals broad application prospects in preclinical and clinical fields.
Affiliation(s)
- Boyi Li, Mengyang Lu, Mengxu Bu, Wenting Gu, Qiuchen Zhu, Xin Liu: Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Tianhua Zhou, Junyi Wang: Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, China
- Dean Ta: Academy for Engineering and Technology, Fudan University, Shanghai 200438, China; Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, China
15. Guo K, Zheng Z, Zhong W, Li Z, Wang G, Li J, Cao Y, Wang Y, Lin J, Liu Q, Song X. Score-based generative model-assisted information compensation for high-quality limited-view reconstruction in photoacoustic tomography. Photoacoustics 2024;38:100623. [PMID: 38832333; PMCID: PMC11144813; DOI: 10.1016/j.pacs.2024.100623]
Abstract
Photoacoustic tomography (PAT) regularly operates in limited-view cases owing to data acquisition limitations. The results of traditional methods in limited-view PAT exhibit distortions and numerous artifacts. Here, a novel limited-view PAT reconstruction strategy that combines model-based iteration with a score-based generative model is proposed. By incrementally adding noise to the training samples, prior knowledge can be learned from the complex probability distribution. The acquired prior is then utilized as a constraint in model-based iteration. The information from missing views can be gradually compensated through cyclic iteration to achieve high-quality reconstruction. The performance of the proposed method was evaluated with a circular phantom and in vivo experimental data. Experimental results demonstrate the outstanding effectiveness of the proposed method in limited-view cases. Notably, the proposed method exhibits excellent performance in the limited-view case of 70° compared with the traditional method. It achieves a remarkable improvement of 203% in PSNR and 48% in SSIM for the circular phantom experimental data, and an enhancement of 81% in PSNR and 65% in SSIM for the in vivo experimental data, respectively. The proposed method is capable of reconstructing PAT images in extremely limited-view cases, which will further expand its application in clinical scenarios.
Affiliation(s)
- Guijun Wang, Jiahong Li, Yubin Cao, Yiguang Wang, Jiabin Lin, Qiegen Liu, Xianlin Song: School of Information Engineering, Nanchang University, Nanchang 330031, China
16. Cam RM, Villa U, Anastasio MA. Learning a stable approximation of an existing but unknown inverse mapping: application to the half-time circular Radon transform. Inverse Problems 2024;40:085002. [PMID: 38933410; PMCID: PMC11197394; DOI: 10.1088/1361-6420/ad4f0a]
Abstract
Supervised deep learning-based methods have inspired a new wave of image reconstruction methods that implicitly learn effective regularization strategies from a set of training data. While they hold potential for improving image quality, they have also raised concerns regarding their robustness. Instabilities can manifest when learned methods are applied to find approximate solutions to ill-posed image reconstruction problems for which a unique and stable inverse mapping does not exist, which is a typical use case. In this study, we investigate the performance of supervised deep learning-based image reconstruction in an alternate use case in which a stable inverse mapping is known to exist but is not yet analytically available in closed form. For such problems, a deep learning-based method can learn a stable approximation of the unknown inverse mapping that generalizes well to data that differ significantly from the training set. The learned approximation of the inverse mapping eliminates the need to employ an implicit (optimization-based) reconstruction method and can potentially yield insights into the unknown analytic inverse formula. The specific problem addressed is image reconstruction from a particular case of radially truncated circular Radon transform (CRT) data, referred to as 'half-time' measurement data. For the half-time image reconstruction problem, we develop and investigate a learned filtered backprojection method that employs a convolutional neural network to approximate the unknown filtering operation. We demonstrate that this method behaves stably and readily generalizes to data that differ significantly from training data. The developed method may find application to wave-based imaging modalities that include photoacoustic computed tomography.
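The learned filtered backprojection structure described above (a trainable filter applied to each projection, followed by a fixed backprojection) can be sketched as follows. The 1D CNN filter sizes are assumptions, and `backproject` is a placeholder for the adjoint of the half-time circular Radon transform, which is not implemented here.

```python
import torch
import torch.nn as nn

class LearnedFilterFBP(nn.Module):
    """Sketch of a learned filtered backprojection: a small 1D CNN filters each
    measured projection, then a fixed (non-trainable) backprojection operator maps
    the filtered data to the image. All layer sizes are illustrative assumptions."""
    def __init__(self, backproject, channels=32, kernel_size=9):
        super().__init__()
        pad = kernel_size // 2
        self.filter = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size, padding=pad), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad), nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size, padding=pad),
        )
        self.backproject = backproject  # placeholder for the CRT adjoint

    def forward(self, sinogram):             # sinogram: (batch, views, samples)
        b, v, s = sinogram.shape
        filtered = self.filter(sinogram.reshape(b * v, 1, s)).reshape(b, v, s)
        return self.backproject(filtered)    # fixed backprojection closes the reconstruction
```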
Affiliation(s)
- Refik Mert Cam: Department of Electrical and Computer Engineering, University of Illinois Urbana–Champaign, Urbana, IL 61801, United States of America
- Umberto Villa: Oden Institute for Computational Engineering & Sciences, The University of Texas at Austin, Austin, TX 78712, United States of America
- Mark A Anastasio: Department of Electrical and Computer Engineering, University of Illinois Urbana–Champaign, Urbana, IL 61801, United States of America; Department of Bioengineering, University of Illinois Urbana–Champaign, Urbana, IL 61801, United States of America
17. Ge J, Mo Z, Zhang S, Zhang X, Zhong Y, Liang Z, Hu C, Chen W, Qi L. Image reconstruction of multispectral sparse sampling photoacoustic tomography based on deep algorithm unrolling. Photoacoustics 2024;38:100618. [PMID: 38957484; PMCID: PMC11217744; DOI: 10.1016/j.pacs.2024.100618]
Abstract
Photoacoustic tomography (PAT), as a novel medical imaging technology, provides structural, functional, and metabolic information about biological tissue in vivo. Sparse-sampling PAT, or SS-PAT, generates images with a smaller number of detectors, yet its image reconstruction is inherently ill-posed. Model-based methods are the state of the art for SS-PAT image reconstruction, but they require the design of complex handcrafted priors. Owing to their ability to derive robust priors from labeled datasets, deep-learning-based methods have achieved great success in solving inverse problems, yet their interpretability is poor. Herein, we propose a novel SS-PAT image reconstruction method based on deep algorithm unrolling (DAU), which integrates the advantages of model-based and deep-learning-based methods. We first provide a thorough analysis of DAU for PAT reconstruction. Then, in order to incorporate the structural prior constraint, we propose a nested DAU framework based on plug-and-play Alternating Direction Method of Multipliers (PnP-ADMM) to deal with the sparse sampling problem. Experimental results on numerical simulations, in vivo animal imaging, and multispectral unmixing demonstrate that the proposed DAU image reconstruction framework outperforms state-of-the-art model-based and deep-learning-based methods.
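To make the unrolling idea concrete, a minimal sketch of an unrolled plug-and-play loop is given below. Note that this follows a proximal-gradient-style simplification rather than the paper's nested PnP-ADMM formulation; the forward operator `A`, its adjoint `At`, the denoiser architecture, the step sizes, and the number of stages are all assumptions.

```python
import torch
import torch.nn as nn

class UnrolledPnP(nn.Module):
    """Unrolled plug-and-play iteration: each stage performs a data-consistency
    gradient step followed by a small learned denoiser acting as the prior.
    `A` and `At` are caller-supplied callables (forward operator and its adjoint)."""
    def __init__(self, A, At, num_stages=8):
        super().__init__()
        self.A, self.At = A, At
        self.denoisers = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),
            ) for _ in range(num_stages)
        ])
        # Per-stage step sizes are learned along with the denoiser weights.
        self.steps = nn.Parameter(torch.full((num_stages,), 0.1))

    def forward(self, y, x0):
        x = x0
        for k, denoiser in enumerate(self.denoisers):
            x = x - self.steps[k] * self.At(self.A(x) - y)  # data consistency
            x = denoiser(x)                                  # learned prior (PnP step)
        return x
```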
Affiliation(s)
- Jia Ge, Zongxin Mo, Shuangyang Zhang, Xiaoming Zhang, Yutian Zhong, Zhaoyong Liang, Chaobin Hu, Wufan Chen, Li Qi: School of Biomedical Engineering, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Rd., Baiyun District, Guangzhou, Guangdong 510515, China
18. Mozumder M, Hirvi P, Nissilä I, Hauptmann A, Ripoll J, Singh DE. Diffuse optical tomography of the brain: effects of inaccurate baseline optical parameters and refinements using learned post-processing. Biomed Opt Express 2024;15:4470-4485. [PMID: 39347006; PMCID: PMC11427210; DOI: 10.1364/boe.524245]
Abstract
Diffuse optical tomography (DOT) uses near-infrared light to image spatially varying optical parameters in biological tissues. In functional brain imaging, DOT uses a perturbation model to estimate the changes in optical parameters, corresponding to changes in measured data due to brain activity. The perturbation model typically uses approximate baseline optical parameters of the different brain compartments, since the actual baseline optical parameters are unknown. We simulated the effects of these approximate baseline optical parameters using parameter variations earlier reported in literature, and brain atlases from four adult subjects. We report the errors in estimated activation contrast, localization, and area when incorrect baseline values were used. Further, we developed a post-processing technique based on deep learning methods that can reduce the effects due to inaccurate baseline optical parameters. The method improved imaging of brain activation changes in the presence of such errors.
Collapse
Affiliation(s)
- Meghdoot Mozumder
- Department of Technical Physics, University of Eastern Finland, P.O. Box 1627, 70211 Kuopio, Finland
| | - Pauliina Hirvi
- Department of Mathematics and Systems Analysis, Aalto University, P.O. Box 11100, 00076 Aalto, Finland
| | - Ilkka Nissilä
- Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, 00076 Aalto, Finland
| | - Andreas Hauptmann
- Research Unit of Mathematical Sciences, University of Oulu, Oulu, Finland
- Department of Computer Science, University College London, London WC1E 6BT, United Kingdom
| | - Jorge Ripoll
- Department of Bioengineering, Universidad Carlos III de Madrid, 28911 Leganés, Madrid, Spain
| | - David E Singh
- Departamento de Informática, Universidad Carlos III de Madrid, 28911 Leganés, Madrid, Spain
| |
Collapse
|
19
|
Shi M, Vercauteren T, Xia W. Learning-based sound speed estimation and aberration correction for linear-array photoacoustic imaging. PHOTOACOUSTICS 2024; 38:100621. [PMID: 39669099 PMCID: PMC11637060 DOI: 10.1016/j.pacs.2024.100621] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/05/2024] [Revised: 05/10/2024] [Accepted: 05/17/2024] [Indexed: 12/14/2024]
Abstract
Photoacoustic (PA) image reconstruction involves acoustic inversion that necessitates the specification of the speed of sound (SoS) within the medium of propagation. Due to the lack of information on the spatial distribution of the SoS within heterogeneous soft tissue, a homogeneous SoS distribution (such as 1540 m/s) is typically assumed in PA image reconstruction, similar to that of ultrasound (US) imaging. Failure to compensate for the SoS variations leads to aberration artefacts, deteriorating the image quality. Various methods have been proposed to address this issue, but they usually involve complex hardware and/or time-consuming algorithms, hindering clinical translation. In this work, we introduce a deep learning framework for SoS estimation and subsequent aberration correction in a dual-modal PA/US imaging system exploiting a clinical US probe. As the acquired PA and US images were inherently co-registered, the SoS distribution estimated from US channel data using a deep neural network was incorporated for accurate PA image reconstruction. The framework comprised an initial pre-training stage based on digital phantoms, which was further enhanced through transfer learning using physical phantom data and associated SoS maps obtained from measurements. This framework achieved a root mean square error of 10.2 m/s and 15.2 m/s for SoS estimation on digital and physical phantoms, respectively, and structural similarity index measures of up to 0.88 for PA reconstructions, compared with 0.69 for the conventional approach. A maximum improvement of 1.2 times in the signal-to-noise ratio of PA images was further demonstrated in a human volunteer study. Our results show that the proposed framework could be valuable in various clinical and preclinical applications to enhance PA image reconstruction.
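To make explicit where an estimated SoS enters the reconstruction, the sketch below shows a minimal delay-and-sum beamformer for a linear array in which the assumed sound speed sets the time-of-flight delays. It is not the paper's reconstruction code; the array geometry, sampling rate, grid, and the use of a single scalar SoS (rather than a full SoS map) are simplifying assumptions.

```python
# Minimal delay-and-sum (DAS) sketch: the assumed speed of sound determines the delays,
# so replacing the default 1540 m/s with a network-estimated value changes the focusing.
import numpy as np

fs = 40e6                          # sampling rate [Hz] (assumption)
n_elem, n_samp = 128, 1024
pitch = 0.3e-3                     # element pitch [m] (assumption)
elem_x = (np.arange(n_elem) - n_elem / 2) * pitch
rf = np.random.randn(n_elem, n_samp)   # placeholder channel data

def das_reconstruct(rf, sos, nx=128, nz=128, dx=0.2e-3, dz=0.2e-3):
    """Beamform an image grid assuming a homogeneous speed of sound `sos`."""
    img = np.zeros((nz, nx))
    x = (np.arange(nx) - nx / 2) * dx
    z = np.arange(1, nz + 1) * dz
    for ix, xi in enumerate(x):
        for iz, zi in enumerate(z):
            # one-way time of flight from the pixel to each element (photoacoustic case)
            t = np.sqrt((xi - elem_x) ** 2 + zi ** 2) / sos
            idx = np.clip((t * fs).astype(int), 0, n_samp - 1)
            img[iz, ix] = rf[np.arange(n_elem), idx].sum()
    return img

img_default = das_reconstruct(rf, sos=1540.0)      # conventional fixed SoS
img_corrected = das_reconstruct(rf, sos=1580.0)    # SoS value as might be predicted by a network
```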
Collapse
Affiliation(s)
- Mengjie Shi
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, SE1 7EH, United Kingdom
| | - Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, SE1 7EH, United Kingdom
| | - Wenfeng Xia
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, SE1 7EH, United Kingdom
| |
Collapse
|
20
|
Poimala J, Cox B, Hauptmann A. Compensating unknown speed of sound in learned fast 3D limited-view photoacoustic tomography. PHOTOACOUSTICS 2024; 37:100597. [PMID: 38425677 PMCID: PMC10901832 DOI: 10.1016/j.pacs.2024.100597] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Revised: 08/15/2023] [Accepted: 02/16/2024] [Indexed: 03/02/2024]
Abstract
Real-time applications in three-dimensional photoacoustic tomography from planar sensors rely on fast reconstruction algorithms that assume the speed of sound (SoS) in the tissue is homogeneous. Moreover, the reconstruction quality depends on the correct choice of the constant SoS. In this study, we discuss the possibility of ameliorating the problem of unknown or heterogeneous SoS distributions by using learned reconstruction methods. This can be done by modelling the uncertainties in the training data. In addition, a correction term can be included in the learned reconstruction method. We investigate the influence of both and, while a learned correction component can further improve reconstruction quality, we show that a careful choice of uncertainties in the training data is the primary factor in overcoming unknown SoS. We support our findings with simulated and in vivo measurements in 3D.
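The key idea of "modelling the uncertainties in the training data" can be illustrated by sampling the sound speed per training example rather than fixing it. The sketch below is a toy data-generation loop under stated assumptions: the forward operator is a simple SoS-dependent linear map standing in for an acoustic simulator, and the sizes, SoS range, and noise level are arbitrary.

```python
# Toy sketch of SoS-aware training-data generation: each simulated pair uses a sound speed
# drawn from a plausible range, so a learned reconstruction never relies on one "correct" value.
import numpy as np

rng = np.random.default_rng(1)
n = 64
A = rng.normal(size=(n, n)) / np.sqrt(n)       # fixed toy acoustic operator (assumption)

def toy_forward(p0, sos):
    """Placeholder forward model whose output scales with the assumed speed of sound."""
    return (sos / 1500.0) * (A @ p0)

def make_training_pair(sos_range=(1450.0, 1600.0)):
    p0 = np.zeros(n)
    p0[rng.integers(0, n, size=5)] = 1.0       # sparse toy initial pressure distribution
    sos = rng.uniform(*sos_range)              # SoS sampled per example, *not* fixed
    data = toy_forward(p0, sos) + 0.01 * rng.normal(size=n)
    return data, p0                            # (network input, reconstruction target)

pairs = [make_training_pair() for _ in range(8)]
```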
Collapse
Affiliation(s)
- Jenni Poimala
- Research Unit of Mathematical Sciences, University of Oulu, Finland
| | - Ben Cox
- Department of Medical Physics and Biomedical Engineering, University College London, UK
| | - Andreas Hauptmann
- Research Unit of Mathematical Sciences, University of Oulu, Finland
- Department of Computer Science, University College London, UK
| |
Collapse
|
21
|
Zumbo S, Mandija S, Meliadò EF, Stijnman P, Meerbothe TG, van den Berg CA, Isernia T, Bevacqua MT. Unrolled Optimization via Physics-Assisted Convolutional Neural Network for MR-Based Electrical Properties Tomography: A Numerical Investigation. IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY 2024; 5:505-513. [PMID: 39050972 PMCID: PMC11268945 DOI: 10.1109/ojemb.2024.3402998] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2024] [Revised: 05/14/2024] [Accepted: 05/14/2024] [Indexed: 07/27/2024] Open
Abstract
Magnetic Resonance imaging based Electrical Properties Tomography (MR-EPT) is a non-invasive technique that measures the electrical properties (EPs) of biological tissues. In this work, we present and numerically investigate the performance of an unrolled, physics-assisted method for 2D MR-EPT reconstructions, where a cascade of Convolutional Neural Networks is used to compute the contrast update. Each network takes as input the EPs and the gradient-descent direction (encoding the physics underlying the adopted scattering model) and returns the updated contrast function as output. The network is trained and tested in silico using 2D slices of realistic brain models at 128 MHz. Results show the capability of the proposed procedure to reconstruct EP maps with quality comparable to that of the popular Contrast Source Inversion-EPT, while significantly reducing the computational time.
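The unrolled structure described here can be sketched as a cascade of small networks, each fed the current estimate together with a model-based gradient direction. The PyTorch snippet below is only a structural sketch: the channel counts, depth, number of stages, and the toy gradient function are assumptions and do not reproduce the paper's architecture or its scattering model.

```python
# Structural sketch of a physics-assisted unrolled update: each stage sees the current
# electrical-property (EP) map and a gradient-descent direction and predicts the update.
import torch
import torch.nn as nn

class UpdateCNN(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, eps, grad_dir):
        # concatenate the current estimate and the physics-based direction along channels
        return self.net(torch.cat([eps, grad_dir], dim=1))

def toy_gradient(eps, data):
    """Stand-in for the scattering-model gradient direction (assumption)."""
    return eps - data

stages = nn.ModuleList([UpdateCNN() for _ in range(3)])   # unrolled cascade of 3 stages
eps = torch.zeros(1, 1, 128, 128)                         # initial EP estimate
data = torch.randn(1, 1, 128, 128)                        # toy measured-data surrogate
for stage in stages:
    eps = stage(eps, toy_gradient(eps, data))
```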
Collapse
Affiliation(s)
- Sabrina Zumbo
- Department DIIES, Università Mediterranea di Reggio Calabria, 89124 Reggio Calabria, Italy
| | - Stefano Mandija
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, 3584 CX Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, Utrecht University, 3584 CS Utrecht, The Netherlands
| | - Ettore F. Meliadò
- Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, Utrecht University, 3584 CS Utrecht, The Netherlands
| | - Peter Stijnman
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, 3584 CX Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, Utrecht University, 3584 CS Utrecht, The Netherlands
| | - Thierry G. Meerbothe
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, 3584 CX Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, Utrecht University, 3584 CS Utrecht, The Netherlands
| | - Cornelis A.T. van den Berg
- Department of Radiotherapy, Division of Imaging & Oncology, University Medical Center Utrecht, 3584 CX Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, Utrecht University, 3584 CS Utrecht, The Netherlands
| | - Tommaso Isernia
- Department DIIES, Università Mediterranea di Reggio Calabria, 89124 Reggio Calabria, Italy
| | - Martina T. Bevacqua
- Department DIIES, Università Mediterranea di Reggio Calabria, 89124 Reggio Calabria, Italy
| |
Collapse
|
22
|
Ma Y, Zhou W, Ma R, Wang E, Yang S, Tang Y, Zhang XP, Guan X. DOVE: Doodled vessel enhancement for photoacoustic angiography super resolution. Med Image Anal 2024; 94:103106. [PMID: 38387244 DOI: 10.1016/j.media.2024.103106] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2023] [Revised: 12/12/2023] [Accepted: 02/08/2024] [Indexed: 02/24/2024]
Abstract
Deep-learning-based super-resolution photoacoustic angiography (PAA) has emerged as a valuable tool for enhancing the resolution of blood vessel images and aiding in disease diagnosis. However, due to the scarcity of training samples, PAA super-resolution models do not generalize well, especially in the challenging in vivo imaging of organs requiring deep tissue penetration. Furthermore, prolonged exposure to high laser intensity during image acquisition can lead to tissue damage and secondary infections. To address these challenges, we propose an approach, Doodled Vessel Enhancement (DOVE), that utilizes hand-drawn doodles to train a PAA super-resolution model. With a training dataset consisting of only 32 real PAA images, we construct a diffusion model that interprets hand-drawn doodles as low-resolution images. DOVE enables us to generate a large number of realistic PAA images, achieving a 49.375% fool rate even among experts in photoacoustic imaging. Subsequently, we employ these generated images to train a self-similarity-based model for super-resolution. During cross-domain tests, our method, trained solely on generated images, achieves a structural similarity value of 0.8591, surpassing the scores of all other models trained with real high-resolution images. DOVE overcomes the limitation of insufficient training samples and unlocks the clinical application potential of super-resolution-based biomedical imaging.
Collapse
Affiliation(s)
- Yuanzheng Ma
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
| | - Wangting Zhou
- Engineering Research Center of Molecular & Neuro Imaging of the Ministry of Education, Xidian University, Xi'an, Shaanxi 710126, China
| | - Rui Ma
- MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China
| | - Erqi Wang
- MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China
| | - Sihua Yang
- MOE Key Laboratory of Laser Life Science & Institute of Laser Life Science, College of Biophotonics, South China Normal University, Guangzhou, 510631, China.
| | - Yansong Tang
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
| | - Xiao-Ping Zhang
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
| | - Xun Guan
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China; Institute of Data and Information, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China.
| |
Collapse
|
23
|
Chang KW, Karthikesh MS, Zhu Y, Hudson HM, Barbay S, Bundy D, Guggenmos DJ, Frost S, Nudo RJ, Wang X, Yang X. Photoacoustic imaging of squirrel monkey cortical responses induced by peripheral mechanical stimulation. JOURNAL OF BIOPHOTONICS 2024; 17:e202300347. [PMID: 38171947 PMCID: PMC10961203 DOI: 10.1002/jbio.202300347] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Revised: 11/08/2023] [Accepted: 11/29/2023] [Indexed: 01/05/2024]
Abstract
Non-human primates (NHPs) are crucial models for studies of neuronal activity. Emerging photoacoustic imaging modalities offer excellent tools for studying NHP brains with high sensitivity and high spatial resolution. In this research, a photoacoustic microscopy (PAM) device was used to provide a label-free quantitative characterization of cerebral hemodynamic changes due to peripheral mechanical stimulation. A 5 × 5 mm area within the somatosensory cortex of an adult squirrel monkey was imaged. A deep, fully connected neural network was characterized and applied to the PAM images of the cortex to enhance the vessel structures after mechanical stimulation of the forelimb digits. The quality of the PAM images was improved significantly with the neural network while preserving the hemodynamic responses. The functional responses to the mechanical stimulation were characterized based on the improved PAM images. This study demonstrates the capability of PAM combined with machine learning for functional imaging of the NHP brain.
Collapse
Affiliation(s)
- Kai-Wei Chang
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan, 48109, United States
| | | | - Yunhao Zhu
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan, 48109, United States
| | - Heather M. Hudson
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
| | - Scott Barbay
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
| | - David Bundy
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
| | - David J. Guggenmos
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
| | - Shawn Frost
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
| | - Randolph J. Nudo
- Landon Center on Aging, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
- Department of Rehabilitation Medicine, University of Kansas Medical Center, Kansas City, Kansas, 66160, United States
| | - Xueding Wang
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan, 48109, United States
| | - Xinmai Yang
- Bioengineering Graduate Program and Institute for Bioengineering Research, University of Kansas, Lawrence, Kansas, 66045, United States
- Department of Mechanical Engineering, University of Kansas, Lawrence, Kansas, 66045, United States
| |
Collapse
|
24
|
Nyayapathi N, Zheng E, Zhou Q, Doyley M, Xia J. Dual-modal Photoacoustic and Ultrasound Imaging: from preclinical to clinical applications. FRONTIERS IN PHOTONICS 2024; 5:1359784. [PMID: 39185248 PMCID: PMC11343488 DOI: 10.3389/fphot.2024.1359784] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/27/2024]
Abstract
Photoacoustic imaging is a novel biomedical imaging modality that has emerged over recent decades. Because optical energy is converted into acoustic waves, photoacoustic imaging offers high-resolution imaging at depths beyond the optical diffusion limit. Photoacoustic imaging is frequently used in conjunction with ultrasound as a hybrid modality. The combination enables the acquisition of both optical and acoustic contrasts of tissue, providing functional, structural, molecular, and vascular information within the same field of view. In this review, we first describe the principles of various photoacoustic and ultrasound imaging techniques and then classify dual-modal imaging systems based on their preclinical and clinical imaging applications. The advantages of dual-modal imaging are thoroughly analyzed. Finally, the review ends with a critical discussion of existing developments and a look toward the future.
Collapse
Affiliation(s)
- Nikhila Nyayapathi
- Electrical and Computer Engineering, University of Rochester, Rochester, New York, 14627
| | - Emily Zheng
- Department of Biomedical Engineering, University at Buffalo, Buffalo, New York, 14226
| | - Qifa Zhou
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90007
| | - Marvin Doyley
- Electrical and Computer Engineering, University of Rochester, Rochester, New York, 14627
| | - Jun Xia
- Department of Biomedical Engineering, University at Buffalo, Buffalo, New York, 14226
| |
Collapse
|
25
|
Song X, Zhong W, Li Z, Peng S, Zhang H, Wang G, Dong J, Liu X, Xu X, Liu Q. Accelerated model-based iterative reconstruction strategy for sparse-view photoacoustic tomography aided by multi-channel autoencoder priors. JOURNAL OF BIOPHOTONICS 2024; 17:e202300281. [PMID: 38010827 DOI: 10.1002/jbio.202300281] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Revised: 11/06/2023] [Accepted: 11/07/2023] [Indexed: 11/29/2023]
Abstract
Photoacoustic tomography (PAT) commonly operates with sparse views due to data acquisition limitations. However, reconstruction with traditional algorithms suffers from serious deterioration (e.g., severe artifacts) under sparse view. Here, a novel accelerated model-based iterative reconstruction strategy for sparse-view PAT aided by multi-channel autoencoder priors is proposed. A multi-channel denoising autoencoder network was designed to learn prior information, which provides constraints for the model-based iterative reconstruction. This integration accelerates the iteration process, leading to optimal reconstruction outcomes. The performance of the proposed method was evaluated using simulated blood vessel data and experimental data. The results show that the proposed method can achieve superior sparse-view reconstruction with significantly accelerated iteration. Notably, the proposed method exhibits excellent performance under extremely sparse conditions (e.g., 32 projections) compared with the U-Net method, with an improvement of 48% in PSNR and 12% in SSIM for in vivo experimental data.
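A common way to couple a learned denoising prior with model-based iteration is to add, to the data-fidelity gradient step, a pull toward the autoencoder's output. The sketch below shows only that generic coupling under toy assumptions (a random system matrix, a tiny untrained autoencoder, arbitrary step sizes); it is not the paper's multi-channel network or its optimization scheme.

```python
# Toy sketch of model-based iteration aided by a denoising-autoencoder prior:
# x <- x - alpha * [ A^T(Ax - y) + lambda * (x - D(x)) ]
import torch
import torch.nn as nn

n = 64 * 64
A = torch.randn(2000, n) / n ** 0.5            # toy sparse-view system matrix (assumption)
x_true = torch.zeros(n); x_true[1000:1100] = 1.0
y = A @ x_true                                 # simulated sensor data

class TinyAE(nn.Module):                       # stands in for a pretrained denoising autoencoder
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(n, 256)
        self.dec = nn.Linear(256, n)
    def forward(self, x):
        return self.dec(torch.relu(self.enc(x)))

D = TinyAE().eval()
x = torch.zeros(n)
alpha, lam = 1e-2, 0.1
for _ in range(50):
    grad_fid = A.T @ (A @ x - y)               # gradient of 0.5 * ||Ax - y||^2
    with torch.no_grad():
        prior_pull = x - D(x)                  # deviation from the learned prior
    x = x - alpha * (grad_fid + lam * prior_pull)
```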
Collapse
Affiliation(s)
- Xianlin Song
- School of Information Engineering, Nanchang University, Nanchang, China
| | - Wenhua Zhong
- School of Information Engineering, Nanchang University, Nanchang, China
| | - Zilong Li
- School of Information Engineering, Nanchang University, Nanchang, China
| | - Shuchong Peng
- Ji luan Academy, Nanchang University, Nanchang, China
| | - Hongyu Zhang
- School of Information Engineering, Nanchang University, Nanchang, China
| | - Guijun Wang
- School of Information Engineering, Nanchang University, Nanchang, China
| | - Jiaqing Dong
- School of Information Engineering, Nanchang University, Nanchang, China
| | - Xuan Liu
- School of Information Engineering, Nanchang University, Nanchang, China
| | - Xiaoling Xu
- School of Information Engineering, Nanchang University, Nanchang, China
| | - Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang, China
| |
Collapse
|
26
|
Huang M, Liu W, Sun G, Shi C, Liu X, Han K, Liu S, Wang Z, Xie Z, Guo Q. Unveiling precision: a data-driven approach to enhance photoacoustic imaging with sparse data. BIOMEDICAL OPTICS EXPRESS 2024; 15:28-43. [PMID: 38223183 PMCID: PMC10783920 DOI: 10.1364/boe.506334] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/19/2023] [Revised: 10/31/2023] [Accepted: 11/21/2023] [Indexed: 01/16/2024]
Abstract
This study presents the Fourier Decay Perception Generative Adversarial Network (FDP-GAN), an innovative approach dedicated to alleviating limitations in photoacoustic imaging stemming from restricted sensor availability and biological tissue heterogeneity. By integrating diverse photoacoustic data, FDP-GAN notably enhances image fidelity and reduces artifacts, particularly in scenarios of low sampling. Its demonstrated effectiveness highlights its potential for substantial contributions to clinical applications, marking a significant stride in addressing pertinent challenges within the realm of photoacoustic acquisition techniques.
Collapse
Affiliation(s)
- Mengyuan Huang
- Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
| | - Wu Liu
- Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
| | - Guocheng Sun
- Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
| | - Chaojing Shi
- Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
| | - Xi Liu
- Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
| | - Kaitai Han
- Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
| | - Shitou Liu
- Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
| | - Zijun Wang
- Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
| | - Zhennian Xie
- Xiyuan Hospital, Chinese Academy of Traditional Chinese Medicine, China
| | - Qianjin Guo
- Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing 102617, China
- School of Mechanical Engineering & Hydrogen Energy Research Centre, Beijing Institute of Petrochemical Technology, Beijing 102617, China
| |
Collapse
|
27
|
Cam RM, Wang C, Thompson W, Ermilov SA, Anastasio MA, Villa U. Spatiotemporal image reconstruction to enable high-frame-rate dynamic photoacoustic tomography with rotating-gantry volumetric imagers. JOURNAL OF BIOMEDICAL OPTICS 2024; 29:S11516. [PMID: 38249994 PMCID: PMC10798269 DOI: 10.1117/1.jbo.29.s1.s11516] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/30/2023] [Revised: 11/22/2023] [Accepted: 12/20/2023] [Indexed: 01/23/2024]
Abstract
Significance: Dynamic photoacoustic computed tomography (PACT) is a valuable imaging technique for monitoring physiological processes. However, current dynamic PACT imaging techniques are often limited to two-dimensional spatial imaging. Although volumetric PACT imagers are commercially available, these systems typically employ a rotating measurement gantry in which the tomographic data are sequentially acquired as opposed to being acquired simultaneously at all views. Because the dynamic object varies during the data-acquisition process, the sequential data-acquisition process poses substantial challenges to image reconstruction associated with data incompleteness. The proposed image reconstruction method is highly significant in that it will address these challenges and enable volumetric dynamic PACT imaging with existing preclinical imagers.
Aim: The aim of this study is to develop a spatiotemporal image reconstruction (STIR) method for dynamic PACT that can be applied to commercially available volumetric PACT imagers that employ a sequential scanning strategy. The proposed reconstruction method aims to overcome the challenges caused by the limited number of tomographic measurements acquired per frame.
Approach: A low-rank matrix estimation-based STIR (LRME-STIR) method is proposed to enable dynamic volumetric PACT. The LRME-STIR method leverages the spatiotemporal redundancies in the dynamic object to accurately reconstruct a four-dimensional (4D) spatiotemporal image.
Results: The conducted numerical studies substantiate the LRME-STIR method's efficacy in reconstructing 4D dynamic images from tomographic measurements acquired with a rotating measurement gantry. The experimental study demonstrates the method's ability to faithfully recover the flow of a contrast agent with a frame rate of 10 frames per second, even when only a single tomographic measurement per frame is available.
Conclusions: The proposed LRME-STIR method offers a promising solution to the challenges faced in enabling 4D dynamic imaging using commercially available volumetric PACT imagers. By enabling accurate spatiotemporal image reconstructions, this method has the potential to significantly advance preclinical research and facilitate the monitoring of critical physiological biomarkers.
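The low-rank assumption at the heart of such spatiotemporal methods can be illustrated independently of the acquisition model: stack the frames as columns of a space-by-time matrix and keep only its leading singular components. The snippet below is a toy demonstration of that step alone; the sizes, rank, and the way frames are generated are assumptions, and the full LRME-STIR method additionally enforces consistency with the measured tomographic data.

```python
# Toy low-rank spatiotemporal estimate via truncated SVD of a space x time matrix.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_frames, rank = 1024, 60, 3

# toy dynamic object: a few spatial modes with smooth temporal weights
U = rng.normal(size=(n_pix, rank))
t = np.linspace(0, 1, n_frames)
V = np.stack([np.sin(2 * np.pi * (k + 1) * t) for k in range(rank)], axis=1)
frames_noisy = U @ V.T + 0.5 * rng.normal(size=(n_pix, n_frames))

# low-rank estimate: keep only the leading singular components
Uh, s, Vh = np.linalg.svd(frames_noisy, full_matrices=False)
frames_lowrank = (Uh[:, :rank] * s[:rank]) @ Vh[:rank, :]

rel_err = np.linalg.norm(frames_lowrank - U @ V.T) / np.linalg.norm(U @ V.T)
print("relative error of the low-rank estimate:", rel_err)
```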
Collapse
Affiliation(s)
- Refik Mert Cam
- University of Illinois Urbana-Champaign, Department of Electrical and Computer Engineering, Urbana, Illinois, United States
| | - Chao Wang
- National University of Singapore, Department of Statistics and Data Science, Singapore
| | | | | | - Mark A. Anastasio
- University of Illinois Urbana-Champaign, Department of Bioengineering, Urbana, Illinois, United States
| | - Umberto Villa
- The University of Texas at Austin, Oden Institute for Computational Engineering and Sciences, Austin, Texas, United States
| |
Collapse
|
28
|
Tarvainen T, Cox B. Quantitative photoacoustic tomography: modeling and inverse problems. JOURNAL OF BIOMEDICAL OPTICS 2024; 29:S11509. [PMID: 38125717 PMCID: PMC10731766 DOI: 10.1117/1.jbo.29.s1.s11509] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/20/2023] [Revised: 11/19/2023] [Accepted: 11/28/2023] [Indexed: 12/23/2023]
Abstract
Significance: Quantitative photoacoustic tomography (QPAT) exploits the photoacoustic effect with the aim of estimating images of clinically relevant quantities related to the tissue's optical absorption. The technique has two aspects: an acoustic part, where the initial acoustic pressure distribution is estimated from measured photoacoustic time-series, and an optical part, where the distributions of the optical parameters are estimated from the initial pressure.
Aim: Our study is focused on the optical part. In particular, computational modeling of light propagation (forward problem) and numerical solution methodologies for the image reconstruction (inverse problem) are discussed.
Approach: The commonly used mathematical models of how light and sound propagate in biological tissue are reviewed. A short overview of how the acoustic inverse problem is usually treated is given. The optical inverse problem and methods for its solution are reviewed. In addition, some limitations of real-life measurements and their effect on the inverse problems are discussed.
Results: An overview of QPAT with a focus on the optical part was given. Computational modeling and inverse problems of QPAT were addressed, and some key challenges were discussed. Furthermore, the developments for tackling these problems were reviewed. Although modeling of light transport is well understood and there is a well-developed framework of inverse mathematics for approaching the inverse problem of QPAT, there are still challenges in taking these methodologies to practice.
Conclusions: Modeling and inverse problems of QPAT were discussed together. The scope was limited to the optical part, and the acoustic aspects were discussed only to the extent that they relate to the optical aspect.
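For readers unfamiliar with the optical part of QPAT, the standard relations usually reviewed in this context are the photoacoustic generation of the initial pressure and the diffusion approximation for the light fluence. The block below states these in generic notation (not tied to this particular review): Γ is the Grüneisen parameter, μ_a the absorption coefficient, μ_s' the reduced scattering coefficient, Φ the fluence, and q_0 the source term.

```latex
% Standard QPAT relations in generic notation.
\begin{align}
  p_0(\mathbf{r}) &= \Gamma(\mathbf{r})\,\mu_a(\mathbf{r})\,\Phi(\mathbf{r};\mu_a,\mu_s'), \\
  -\nabla\cdot\big(\kappa(\mathbf{r})\,\nabla\Phi(\mathbf{r})\big) + \mu_a(\mathbf{r})\,\Phi(\mathbf{r}) &= q_0(\mathbf{r}),
  \qquad \kappa = \frac{1}{3\,(\mu_a + \mu_s')} .
\end{align}
```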
Collapse
Affiliation(s)
- Tanja Tarvainen
- University of Eastern Finland, Department of Technical Physics, Kuopio, Finland
| | - Ben Cox
- University College London, Department of Medical Physics and Biomedical Engineering, London, United Kingdom
| |
Collapse
|
29
|
Susmelj AK, Lafci B, Ozdemir F, Davoudi N, Deán-Ben XL, Perez-Cruz F, Razansky D. Signal domain adaptation network for limited-view optoacoustic tomography. Med Image Anal 2024; 91:103012. [PMID: 37922769 DOI: 10.1016/j.media.2023.103012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 09/19/2023] [Accepted: 10/18/2023] [Indexed: 11/07/2023]
Abstract
Optoacoustic (OA) imaging is based on optical excitation of biological tissues with nanosecond-duration laser pulses and detection of ultrasound (US) waves generated by thermoelastic expansion following light absorption. The image quality and fidelity of OA images critically depend on the extent of tomographic coverage provided by the US detector arrays. However, full tomographic coverage is not always possible due to experimental constraints. One major challenge concerns an efficient integration between OA and pulse-echo US measurements using the same transducer array. A common approach toward the hybridization consists in using standard linear transducer arrays, which readily results in arc-type artifacts and distorted shapes in OA images due to the limited angular coverage. Deep learning methods have been proposed to mitigate limited-view artifacts in OA reconstructions by mapping artifactual to artifact-free (ground truth) images. However, acquisition of ground truth data with full angular coverage is not always possible, particularly when using handheld probes in a clinical setting. Deep learning methods operating in the image domain are then commonly based on networks trained on simulated data. This approach is yet incapable of transferring the learned features between two domains, which results in poor performance on experimental data. Here, we propose a signal domain adaptation network (SDAN) consisting of i) a domain adaptation network to reduce the domain gap between simulated and experimental signals and ii) a sides prediction network to complement the missing signals in limited-view OA datasets acquired from a human forearm by means of a handheld linear transducer array. The proposed method showed improved performance in reducing limited-view artifacts without the need for ground truth signals from full tomographic acquisitions.
Collapse
Affiliation(s)
| | - Berkan Lafci
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland
| | - Firat Ozdemir
- Swiss Data Science Center, ETH Zürich and EPFL, Switzerland
| | - Neda Davoudi
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland
| | - Xosé Luís Deán-Ben
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland
| | - Fernando Perez-Cruz
- Swiss Data Science Center, ETH Zürich and EPFL, Switzerland; Institute for Machine Learning, Department of Computer Science, ETH Zurich, Switzerland
| | - Daniel Razansky
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland; Institute of Pharmacology and Toxicology and Institute for Biomedical Engineering, Faculty of Medicine, University of Zurich, Switzerland.
| |
Collapse
|
30
|
Shen Y, Zhang J, Jiang D, Gao Z, Zheng Y, Gao F, Gao F. S-Wave Accelerates Optimization-based Photoacoustic Image Reconstruction in vivo. ULTRASOUND IN MEDICINE & BIOLOGY 2024; 50:18-27. [PMID: 37806923 DOI: 10.1016/j.ultrasmedbio.2023.07.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Revised: 07/25/2023] [Accepted: 07/29/2023] [Indexed: 10/10/2023]
Abstract
OBJECTIVE: Photoacoustic imaging has undergone rapid development in recent years. The most popular MATLAB toolbox currently used for the forward projection step when simulating photoacoustic imaging on a computer is k-Wave. However, k-Wave suffers from long computation times. Here we propose a straightforward simulation approach based on superposed Wave (s-Wave) to accelerate photoacoustic simulation.
METHODS: In this study, we consider the initial pressure distribution as a collection of individual pixels. By obtaining standard sensor data from a single pixel beforehand, we can easily manipulate the phase and amplitude of the sensor data for specific pixels using loop and multiplication operators. The effectiveness of this approach is validated through an optimization-based reconstruction algorithm.
RESULTS: The results reveal significantly reduced computation time compared with k-Wave. Particularly in a sparse 3-D configuration, s-Wave exhibits a speed improvement of more than 2000 times over k-Wave. In terms of optimization-based image reconstruction, in vivo imaging results reveal that the s-Wave method yields images highly similar to those obtained using k-Wave, while reducing the reconstruction time by approximately 50 times.
CONCLUSION: We propose an accelerated optimization-based algorithm for photoacoustic image reconstruction that uses the fast s-Wave forward projection simulation. Our method achieves substantial time savings, particularly in sparse system configurations. Future work will focus on further optimizing the algorithm and expanding its applicability to a broader range of photoacoustic imaging scenarios.
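The superposition idea described in the Methods can be sketched in a few lines: precompute the sensor response to a single unit pixel once, then build the full forward projection by time-shifting (phase) and scaling (amplitude) that template per pixel and summing. The snippet below is only that conceptual sketch; the template waveform, geometry, and sampling parameters are toy assumptions standing in for a k-Wave-style single-pixel simulation.

```python
# Toy superposition sketch: shift and scale one precomputed single-pixel response per pixel.
import numpy as np

fs, sos = 40e6, 1500.0
n_samp = 2048
t = np.arange(n_samp) / fs

# precomputed response of one sensor to a unit-amplitude pixel at a reference distance
template = np.exp(-((t - 5e-6) ** 2) / (2 * (5e-8) ** 2)) * np.sin(2 * np.pi * 5e6 * t)
ref_delay = 5e-6                                       # delay already baked into the template

sensor_pos = np.array([0.0, 0.0])
pixels = {(0.002, 0.010): 1.0, (0.004, 0.012): 0.5}    # {(x, z) in m: initial pressure}

signal = np.zeros(n_samp)
for (x, z), amp in pixels.items():
    delay = np.hypot(x - sensor_pos[0], z - sensor_pos[1]) / sos
    shift = int(round((delay - ref_delay) * fs))       # phase manipulation as a sample shift
    signal += amp * np.roll(template, shift)           # amplitude scaling + superposition
```

Because the superposition is linear, such a forward operator can be reused inside any optimization-based reconstruction loop without re-running a full wave simulation per iteration.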
Collapse
Affiliation(s)
- Yuting Shen
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
| | - Jiadong Zhang
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
| | - Daohuai Jiang
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
| | - Zijian Gao
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
| | - Yuwei Zheng
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
| | - Feng Gao
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
| | - Fei Gao
- Hybrid Imaging System Laboratory, School of Information Science and Technology, ShanghaiTech University, Shanghai, China; Shanghai Engineering Research Center of Energy Efficient and Custom AI IC, Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China.
| |
Collapse
|
31
|
Haltmeier M, Ye M, Felbermayer K, Hinterleitner F, Burgholzer P. Design, implementation, and analysis of a compressed sensing photoacoustic projection imaging system. JOURNAL OF BIOMEDICAL OPTICS 2024; 29:S11529. [PMID: 38650979 PMCID: PMC11033734 DOI: 10.1117/1.jbo.29.s1.s11529] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/24/2023] [Revised: 02/16/2024] [Accepted: 02/28/2024] [Indexed: 04/25/2024]
Abstract
Significance: Compressed sensing (CS) uses special measurement designs combined with powerful mathematical algorithms to reduce the amount of data to be collected while maintaining image quality. This is relevant to almost any imaging modality, and in this paper we focus on CS in photoacoustic projection imaging (PAPI) with integrating line detectors (ILDs).
Aim: Our previous research involved rather general CS measurements, where each ILD can contribute to any measurement. In the real world, however, the design of CS measurements is subject to practical constraints. In this research, we aim at a CS-PAPI system where each measurement involves only a subset of ILDs, and which can be implemented in a cost-effective manner.
Approach: We extend the existing PAPI with a self-developed CS unit. The system provides structured CS matrices for which the existing recovery theory cannot be applied directly. A random search strategy is applied to select the CS measurement matrix within this class for which we obtain exact sparse recovery.
Results: We implement a CS-PAPI system for a compression factor of 4:3, where specific measurements are made on separate groups of 16 ILDs. We algorithmically design optimal CS measurements that have proven sparse CS capabilities. Numerical experiments are used to support our results.
Conclusions: CS with proven sparse recovery capabilities can be integrated into PAPI, and numerical results support this setup. Future work will focus on applying it to experimental data and utilizing data-driven approaches to enhance the compression factor and generalize the signal class.
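A random-search design over structured measurement matrices, as described in the Approach, can be illustrated with a simple criterion such as mutual coherence. The sketch below builds block-structured candidates in which each measurement combines detectors only within one group of 16 and keeps the candidate with the lowest coherence. The group count, compression ratio, ±1 patterns, and the coherence criterion itself are illustrative assumptions, not the paper's exact selection procedure.

```python
# Toy random search over block-structured CS measurement matrices (groups of 16 detectors).
import numpy as np

rng = np.random.default_rng(0)
n_groups, group_size = 4, 16              # 64 detectors in groups of 16 (assumption)
n_meas_per_group = 12                     # roughly 4:3 compression within each group (assumption)

def random_structured_matrix():
    M = np.zeros((n_groups * n_meas_per_group, n_groups * group_size))
    for g in range(n_groups):
        block = rng.choice([-1.0, 1.0], size=(n_meas_per_group, group_size))
        M[g * n_meas_per_group:(g + 1) * n_meas_per_group,
          g * group_size:(g + 1) * group_size] = block
    return M

def mutual_coherence(M):
    Mn = M / np.linalg.norm(M, axis=0, keepdims=True)   # normalize columns
    G = np.abs(Mn.T @ Mn)
    np.fill_diagonal(G, 0.0)
    return G.max()

best = min((random_structured_matrix() for _ in range(200)), key=mutual_coherence)
print("best coherence found:", mutual_coherence(best))
```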
Collapse
Affiliation(s)
- Markus Haltmeier
- University of Innsbruck, Department of Mathematics, Innsbruck, Austria
| | - Matthias Ye
- University of Innsbruck, Department of Mathematics, Innsbruck, Austria
| | | | | | | |
Collapse
|
32
|
Wang R, Zhu J, Meng Y, Wang X, Chen R, Wang K, Li C, Shi J. Adaptive machine learning method for photoacoustic computed tomography based on sparse array sensor data. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 242:107822. [PMID: 37832425 DOI: 10.1016/j.cmpb.2023.107822] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/20/2023] [Revised: 08/18/2023] [Accepted: 09/17/2023] [Indexed: 10/15/2023]
Abstract
BACKGROUND AND OBJECTIVE: Photoacoustic computed tomography (PACT) is a non-invasive biomedical imaging technology that has developed rapidly in recent decades and has shown particular potential for small-animal studies and the early diagnosis of human diseases. To obtain high-quality images, a photoacoustic imaging system needs a high-element-density detector array. However, in practical applications, due to cost limitations, manufacturing technology, and system requirements for miniaturization and robustness, it is challenging to achieve sufficient elements and high-quality reconstructed images, which may even suffer from artifacts. In contrast to recent machine learning methods that remove distortions and artifacts to recover high-quality images, this paper proposes an adaptive machine learning method that first predicts and completes the photoacoustic sensor channel data from sparse array sampling and then reconstructs images with conventional reconstruction algorithms.
METHODS: We developed an adaptive machine learning method to predict and complete the photoacoustic sensor channel data. The model consists of XGBoost and a neural network named SS-net. To handle datasets of different sizes and improve generalization, a tunable parameter is used to control the weights of the XGBoost and SS-net outputs.
RESULTS: The proposed method achieved superior performance, as demonstrated by simulation, phantom, and in vivo experimental results. Compared with linear interpolation, XGBoost, CAE, and U-net, the simulation results show that the SSIM value is increased by 12.83%, 6.78%, 21.46%, and 12.33%, respectively. Moreover, the median R2 is increased by 34.4%, 8.1%, 28.6%, and 84.1% with the in vivo data.
CONCLUSIONS: This model provides a framework to predict the missing photoacoustic sensor data on a sparse ring-shaped array for PACT imaging and achieves considerable improvements in reconstructing the objects. Compared with linear interpolation and other deep learning methods, both qualitatively and quantitatively, the proposed method can effectively suppress artifacts and improve image quality. An advantage of our method is that there is no need to prepare a large number of images as the training dataset, since the training data come directly from the sensors. It has the potential to be applied to a wide range of photoacoustic imaging detector arrays for low-cost and user-friendly clinical applications.
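The adaptive weighting described in the Methods amounts to blending two channel predictors with a tunable weight before handing the completed data to a standard reconstruction. The snippet below illustrates only that blending step; both predictors are simple neighbour-based stand-ins (not XGBoost or SS-net), and the sparse-array pattern, weight value, and data are assumptions.

```python
# Toy sketch of blending two channel-data predictors with a tunable weight w.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 64, 1024
data = rng.normal(size=(n_channels, n_samples))    # placeholder full-array channel data
measured = data[0::2]                              # simulate a sparse array: every second element

def predictor_a(measured):
    """Stand-in for the gradient-boosting predictor: average of neighbouring measured channels."""
    return 0.5 * (measured + np.roll(measured, -1, axis=0))

def predictor_b(measured):
    """Stand-in for the network predictor: a temporally smoothed neighbour average."""
    est = 0.5 * (measured + np.roll(measured, -1, axis=0))
    kernel = np.ones(5) / 5
    return np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), 1, est)

w = 0.6                                            # tunable weight between the two predictors
predicted_missing = w * predictor_a(measured) + (1 - w) * predictor_b(measured)

completed = np.empty_like(data)
completed[0::2] = measured
completed[1::2] = predicted_missing                # completed data then feeds a standard reconstruction
```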
Collapse
Affiliation(s)
| | - Jing Zhu
- Zhejiang Lab, Hangzhou 311100, China
| | | | | | | | | | - Chiye Li
- Zhejiang Lab, Hangzhou 311100, China; Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311100, China.
| | - Junhui Shi
- Zhejiang Lab, Hangzhou 311100, China; Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311100, China.
| |
Collapse
|
33
|
Yan B, Song B, Mu G, Fan Y, Zhao Y. Compressed single-shot 3D photoacoustic imaging with a single-element transducer. PHOTOACOUSTICS 2023; 34:100570. [PMID: 38027529 PMCID: PMC10661598 DOI: 10.1016/j.pacs.2023.100570] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/02/2023] [Revised: 10/14/2023] [Accepted: 11/06/2023] [Indexed: 12/01/2023]
Abstract
Three-dimensional (3D) photoacoustic imaging (PAI) can provide rich information content and has gained increasing attention in various biomedical applications. However, current 3D PAI methods either involve pointwise scanning of the 3D volume using a single-element transducer, which can be time-consuming, or require an array of transducers, which is complex and expensive. By utilizing a 3D encoder and compressed sensing techniques, we develop a new imaging modality that is capable of single-shot 3D PAI using a single-element transducer. The proposed method is validated with a phantom study, which demonstrates single-shot 3D imaging of different objects and 3D tracking of a moving object. After one-time calibration, the system could perform single-shot 3D imaging of different objects, and the calibration remained effective over 7 days, which is highly beneficial for practical translation. Overall, the experimental results showcase the potential of this technique for both scientific research and clinical applications.
Collapse
Affiliation(s)
- Bingbao Yan
- Beijing Advanced Innovation Center for Biomedical Engineering, Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Engineering Medicine, Beihang University, Beijing 100191, China
| | - Bowen Song
- Beijing Advanced Innovation Center for Biomedical Engineering, Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Engineering Medicine, Beihang University, Beijing 100191, China
| | - Gen Mu
- Beijing Advanced Innovation Center for Biomedical Engineering, Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Engineering Medicine, Beihang University, Beijing 100191, China
| | - Yubo Fan
- Beijing Advanced Innovation Center for Biomedical Engineering, Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Engineering Medicine, Beihang University, Beijing 100191, China
| | - Yanyu Zhao
- Beijing Advanced Innovation Center for Biomedical Engineering, Key Laboratory for Biomechanics and Mechanobiology of Ministry of Education, School of Engineering Medicine, Beihang University, Beijing 100191, China
| |
Collapse
|
34
|
Li J, Meng YC. Multikernel positional embedding convolutional neural network for photoacoustic reconstruction with sparse data. APPLIED OPTICS 2023; 62:8506-8516. [PMID: 38037963 DOI: 10.1364/ao.504094] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/24/2023] [Accepted: 10/14/2023] [Indexed: 12/02/2023]
Abstract
Photoacoustic imaging (PAI) is an emerging noninvasive imaging modality that merges the high contrast of optical imaging with the high resolution of ultrasonic imaging. Low-quality photoacoustic reconstruction from sparse data, caused by sparse spatial sampling and limited-view detection, is a major obstacle to the popularization of PAI for medical applications. Deep learning has been considered the best solution to this problem over the past decade. In this paper, we propose what we believe to be a novel architecture, named DPM-UNet, which consists of a U-Net backbone with an additional position embedding block, two multi-kernel-size convolution blocks, a dilated dense block, and a dilated multi-kernel-size convolution block. Our method was experimentally validated with both simulated data and in vivo data, achieving an SSIM of 0.9824 and a PSNR of 33.2744 dB. Furthermore, the reconstructed images of our proposed method were compared with those obtained by other advanced methods. The results show that our proposed DPM-UNet has a great advantage in PAI over other methods with respect to imaging quality and memory consumption.
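The building block named in the architecture description, a multi-kernel-size convolution block, can be sketched as parallel convolutions with different kernel sizes whose outputs are concatenated and fused. The PyTorch snippet below is a generic sketch of such a block; the channel counts and kernel sizes are assumptions, and the positional-embedding and dilated components of the full DPM-UNet are not shown.

```python
# Generic multi-kernel-size convolution block: parallel 3x3 / 5x5 / 7x7 branches fused by a 1x1 conv.
import torch
import torch.nn as nn

class MultiKernelBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )
        self.fuse = nn.Conv2d(out_ch * len(kernel_sizes), out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [self.act(branch(x)) for branch in self.branches]   # same spatial size per branch
        return self.act(self.fuse(torch.cat(feats, dim=1)))         # concatenate and fuse channels

block = MultiKernelBlock(1, 32)
y = block(torch.randn(1, 1, 128, 128))   # -> shape (1, 32, 128, 128)
```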
Collapse
|
35
|
Murad N, Pan MC, Hsu YF. Optimizing diffuse optical imaging for breast tissues with a dual-encoder neural network to preserve small structural information and fine features. J Med Imaging (Bellingham) 2023; 10:066003. [PMID: 38074624 PMCID: PMC10704257 DOI: 10.1117/1.jmi.10.6.066003] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Revised: 11/15/2023] [Accepted: 11/17/2023] [Indexed: 12/08/2024] Open
Abstract
Purpose: Various research groups have recently made progress in implementing deep learning models for biomedical optical imaging of soft biological tissues. The highly scattering nature of tissue at specific optical wavelengths results in poor spatial resolution. This opens up opportunities for diffuse optical imaging to improve the spatial resolution of the obtained optical properties, which otherwise suffer from artifacts. This study investigates a dual-encoder deep learning model for detecting tumors of different sizes in different phantoms with diffuse optical imaging.
Approach: Our proposed dual-encoder network extends U-net by adding a parallel branch of signal data to obtain information directly from the source measurements. This allows the trained network to localize the inclusions without degrading them or merging them with the background. The signals from the forward model and the images from the inverse problem are combined in a single decoder, filling the gap between existing direct-processing and post-processing approaches.
Results: Absorption and reduced scattering coefficients are well reconstructed in both simulation and phantom test datasets. The proposed dual-encoder networks characterize optical-property images better than the signal-encoder and image-encoder networks, and their contrast-and-size detail resolution outperforms the other two approaches. Among the performance measures, the structural similarity and peak signal-to-noise ratio of the images reconstructed by the dual-encoder networks remain the highest.
Conclusions: In this study, we synthesized the advantages of direct reconstruction from boundary data, namely the extracted signals, and of iterative methods, namely the obtained images, into a unified network architecture.
Collapse
Affiliation(s)
- Nazish Murad
- National Central University, Department of Mechanical Engineering, Taoyuan City, Taiwan
| | - Min-Chun Pan
- National Central University, Department of Mechanical Engineering, Taoyuan City, Taiwan
| | - Ya-Fen Hsu
- Landseed Hospital International, Department of Surgery, Taoyuan City, Taiwan
| |
Collapse
|
36
|
Song X, Wang G, Zhong W, Guo K, Li Z, Liu X, Dong J, Liu Q. Sparse-view reconstruction for photoacoustic tomography combining diffusion model with model-based iteration. PHOTOACOUSTICS 2023; 33:100558. [PMID: 38021282 PMCID: PMC10658608 DOI: 10.1016/j.pacs.2023.100558] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/08/2023] [Revised: 08/14/2023] [Accepted: 09/16/2023] [Indexed: 12/01/2023]
Abstract
As a non-invasive hybrid biomedical imaging technology, photoacoustic tomography combines the high contrast of optical imaging with the high penetration of acoustic imaging. However, conventional standard reconstruction under sparse view can result in low-quality images in photoacoustic tomography. Here, a novel model-based sparse reconstruction method for photoacoustic tomography via a diffusion model is proposed. A score-based diffusion model is designed to learn the prior information of the data distribution. The learned prior information is utilized as a constraint on the data consistency term of a least-squares optimization problem in the model-based iterative reconstruction, aiming to achieve the optimal solution. Simulated blood vessel data and in vivo animal experimental data were used to evaluate the performance of the proposed method. The results demonstrate that the proposed method achieves higher-quality sparse reconstruction compared with conventional reconstruction methods and U-Net. In particular, under extremely sparse projection (e.g., 32 projections), the proposed method achieves an improvement of ∼260% in structural similarity and ∼30% in peak signal-to-noise ratio for in vivo data, compared with the conventional delay-and-sum method. This method has the potential to reduce the acquisition time and cost of photoacoustic tomography, which will further expand its application range.
Collapse
Affiliation(s)
| | | | - Wenhua Zhong
- School of Information Engineering, Nanchang University, Nanchang 330031, China
| | - Kangjun Guo
- School of Information Engineering, Nanchang University, Nanchang 330031, China
| | - Zilong Li
- School of Information Engineering, Nanchang University, Nanchang 330031, China
| | - Xuan Liu
- School of Information Engineering, Nanchang University, Nanchang 330031, China
| | - Jiaqing Dong
- School of Information Engineering, Nanchang University, Nanchang 330031, China
| | - Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang 330031, China
| |
Collapse
|
37
|
Zheng W, Zhang H, Huang C, Shijo V, Xu C, Xu W, Xia J. Deep Learning Enhanced Volumetric Photoacoustic Imaging of Vasculature in Human. ADVANCED SCIENCE (WEINHEIM, BADEN-WURTTEMBERG, GERMANY) 2023; 10:e2301277. [PMID: 37530209 PMCID: PMC10582405 DOI: 10.1002/advs.202301277] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/24/2023] [Revised: 06/26/2023] [Indexed: 08/03/2023]
Abstract
The development of high-performance image processing algorithms is a core area of photoacoustic tomography. While various deep-learning-based image processing techniques have been developed in this area, their application to 3D imaging is still limited due to challenges in computational cost and memory allocation. To address those limitations, this work applies a 3D fully-dense (3DFD) U-net to linear-array-based photoacoustic tomography and utilizes volumetric simulation and mixed-precision training to increase efficiency and training size. Through numerical simulation, phantom imaging, and in vivo experiments, this work demonstrates that the trained network restores the true object size, reduces the noise level and artifacts, improves the contrast in deep regions, and reveals vessels subject to limited-view distortion. With these enhancements, the 3DFD U-net successfully produces clear 3D vascular images of the palm, arms, breasts, and feet of human subjects. These enhanced vascular images offer improved capabilities for biometric identification, foot ulcer evaluation, and breast cancer imaging. These results indicate that the new algorithm will have a significant impact on preclinical and clinical photoacoustic tomography.
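Mixed-precision training, mentioned as one of the enablers for volumetric networks, follows a standard PyTorch pattern: run the forward pass under autocast and scale the loss before backpropagation. The snippet below is a generic sketch of that pattern with a tiny placeholder 3D model and random volumes; it is not the paper's 3DFD U-Net or its training data.

```python
# Generic mixed-precision (AMP) training loop for a small 3D convolutional model.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"                       # AMP is a no-op on CPU in this sketch

model = nn.Sequential(                           # placeholder for a 3D fully-dense U-Net
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv3d(8, 1, 3, padding=1),
).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
loss_fn = nn.MSELoss()

for step in range(3):                            # toy training loop
    x = torch.randn(1, 1, 32, 32, 32, device=device)   # degraded input volume (placeholder)
    target = torch.randn_like(x)                        # reference volume (placeholder)
    opt.zero_grad()
    with torch.cuda.amp.autocast(enabled=use_amp):      # half-precision forward pass
        loss = loss_fn(model(x), target)
    scaler.scale(loss).backward()                       # scaled backward to avoid underflow
    scaler.step(opt)
    scaler.update()
```

Reducing activation memory in this way is what makes it practical to train on full volumes rather than 2D slices.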
Collapse
Affiliation(s)
- Wenhan Zheng
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
| | - Huijuan Zhang
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
| | - Chuqin Huang
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
| | - Varun Shijo
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
| | - Chenhan Xu
- Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
| | - Wenyao Xu
- Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
| | - Jun Xia
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Department of Computer Science and Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
| |
Collapse
|
38
|
Poplack SP, Park EY, Ferrara KW. Optical Breast Imaging: A Review of Physical Principles, Technologies, and Clinical Applications. JOURNAL OF BREAST IMAGING 2023; 5:520-537. [PMID: 37981994 PMCID: PMC10655724 DOI: 10.1093/jbi/wbad057] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2023]
Abstract
Optical imaging involves the propagation of light through tissue. Current optical breast imaging technologies, including diffuse optical spectroscopy, diffuse optical tomography, and photoacoustic imaging, capitalize on the selective absorption of light in the near-infrared spectrum by deoxygenated and oxygenated hemoglobin. They provide information on the morphological and functional characteristics of different tissues based on their varied interactions with light, including physiologic information on lesion vascular content and anatomic information on tissue vascularity. Fluorescent contrast agents, such as indocyanine green, are used to visualize specific tissues, molecules, or proteins depending on how and where the agent accumulates. In this review, we describe the physical principles, spectrum of technologies, and clinical applications of the most common optical systems currently being used or developed for breast imaging. Most notably, US co-registered photoacoustic imaging and US-guided diffuse optical tomography have demonstrated efficacy in differentiating benign from malignant breast masses, thereby improving the specificity of diagnostic imaging. Diffuse optical tomography and diffuse optical spectroscopy have shown promise in assessing treatment response to preoperative systemic therapy, and photoacoustic imaging and diffuse optical tomography may help predict tumor phenotype. Lastly, fluorescent imaging using indocyanine green dye performs comparably to radioisotope mapping of sentinel lymph nodes and appears to improve the outcomes of autologous tissue flap breast reconstruction.
Affiliation(s)
- Steven P. Poplack
- Stanford University School of Medicine, Department of Radiology, Palo Alto, CA, USA
| | - Eun-Yeong Park
- Stanford University School of Medicine, Department of Radiology, Palo Alto, CA, USA
| | - Katherine W. Ferrara
- Stanford University School of Medicine, Department of Radiology, Palo Alto, CA, USA
| |
39
Zhu J, Huynh N, Ogunlade O, Ansari R, Lucka F, Cox B, Beard P. Mitigating the Limited View Problem in Photoacoustic Tomography for a Planar Detection Geometry by Regularized Iterative Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:2603-2615. [PMID: 37115840 DOI: 10.1109/tmi.2023.3271390] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
The use of a planar detection geometry in photoacoustic tomography results in the so-called limited-view problem due to the finite extent of the acoustic detection aperture. When images are reconstructed using one-step reconstruction algorithms, image quality is compromised by the presence of streaking artefacts, reduced contrast, image distortion and reduced signal-to-noise ratio. To mitigate this, model-based iterative reconstruction approaches based on least squares minimisation with and without total variation regularization were evaluated using in-silico, experimental phantom, ex vivo and in vivo data. Compared to one-step reconstruction methods, it has been shown that iterative methods provide better image quality in terms of enhanced signal-to-artefact ratio, signal-to-noise ratio, amplitude accuracy and spatial fidelity. For the total variation approaches, the impact of the regularization parameter on image feature scale and amplitude distribution was evaluated. In addition, the extent to which the use of Bregman iterations can compensate for the systematic amplitude bias introduced by total variation was studied. This investigation is expected to inform the practical application of model-based iterative image reconstruction approaches for improving photoacoustic image quality when using finite aperture planar detection geometries.
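The variational problems evaluated in such model-based approaches can be written generically as below; the notation (forward operator A, measured data y, regularisation weight λ) is assumed here for illustration rather than copied from the paper.

```latex
% Generic penalised least-squares reconstruction (assumed notation):
\begin{equation}
  \hat{p}_0 = \arg\min_{p_0 \ge 0}\;
  \tfrac{1}{2}\,\lVert A p_0 - y \rVert_2^2 + \lambda\,\mathrm{TV}(p_0),
\end{equation}
% where A maps the initial pressure p_0 to the sensor data y. Setting \lambda = 0
% recovers the plain least-squares formulation, and Bregman iterations re-inject
% the residual to counteract the amplitude bias introduced by the TV penalty.
```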
40
Tang K, Zhang S, Wang Y, Zhang X, Liu Z, Liang Z, Wang H, Chen L, Chen W, Qi L. Learning spatially variant degradation for unsupervised blind photoacoustic tomography image restoration. PHOTOACOUSTICS 2023; 32:100536. [PMID: 37575971 PMCID: PMC10413197 DOI: 10.1016/j.pacs.2023.100536] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Revised: 07/18/2023] [Accepted: 07/19/2023] [Indexed: 08/15/2023]
Abstract
Photoacoustic tomography (PAT) images contain inherent distortions due to the imaging system and heterogeneous tissue properties. Improving image quality requires the removal of these system distortions. While model-based approaches and data-driven techniques have been proposed for PAT image restoration, achieving accurate and robust image recovery remains challenging. Recently, deep-learning-based image deconvolution approaches have shown promise for image recovery. However, PAT imaging presents unique challenges, including spatially varying resolution and the absence of ground truth data. Consequently, there is a pressing need for a novel learning strategy specifically tailored for PAT imaging. Herein, we propose a configurable network model named Deep hybrid Image-PSF Prior (DIPP) that builds upon the physical image degradation model of PAT. DIPP is an unsupervised and deeply learned network model that aims to extract the ideal PAT image from complex system degradation. Our DIPP framework captures the degraded information solely from the acquired PAT image, without relying on ground truth or labeled data for network training. Additionally, we can incorporate the experimentally measured Point Spread Functions (PSFs) of the specific PAT system as a reference to further enhance performance. To evaluate the algorithm's effectiveness in addressing multiple degradations in PAT, we conduct extensive experiments using simulation images, publicly available datasets, phantom images, and in vivo small animal imaging data. Comparative analyses with classical analytical methods and state-of-the-art deep learning models demonstrate that our DIPP approach achieves significantly improved restoration results in terms of image details and contrast.
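The underlying idea, fitting a network to the acquired image alone through a known degradation model, can be illustrated with a short deep-image-prior-style loop. The small CNN, the Gaussian PSF stand-in, and all hyperparameters below are assumptions for illustration and do not reproduce the DIPP architecture.

```python
# Deep-image-prior-style sketch of unsupervised deblurring with a known PSF.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_psf(size=9, sigma=2.0):
    """Build a normalized 2D Gaussian kernel used as a stand-in PSF."""
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    k = torch.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return (k / k.sum()).view(1, 1, size, size)

net = nn.Sequential(                      # small CNN acting as the implicit prior
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
psf = gaussian_psf()
z = torch.randn(1, 1, 64, 64)             # fixed random input code
observed = torch.rand(1, 1, 64, 64)       # stands in for the acquired degraded image
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for it in range(200):
    opt.zero_grad()
    estimate = net(z)                               # candidate restored image
    reblurred = F.conv2d(estimate, psf, padding=4)  # re-apply the degradation model
    loss = F.mse_loss(reblurred, observed)          # fit the observed image only
    loss.backward()
    opt.step()
```

No ground truth enters the loop; the network weights themselves act as the prior, and a measured PSF can replace the Gaussian stand-in.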
Affiliation(s)
- Kaiyi Tang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, China
| | - Shuangyang Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, China
| | - Yang Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, China
| | - Xiaoming Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, China
| | - Zhenyang Liu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, China
| | - Zhichao Liang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, China
| | - Huafeng Wang
- Research Center of Narrative Medicine, Shunde Hospital, Southern Medical University, Foshan, Guangdong, China
| | - Lingjian Chen
- Research Center of Narrative Medicine, Shunde Hospital, Southern Medical University, Foshan, Guangdong, China
| | - Wufan Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, China
| | - Li Qi
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong, China
| |
41
Menozzi L, Del Águila Á, Vu T, Ma C, Yang W, Yao J. Integrated Photoacoustic, Ultrasound, and Angiographic Tomography (PAUSAT) for NonInvasive Whole-Brain Imaging of Ischemic Stroke. J Vis Exp 2023:10.3791/65319. [PMID: 37335115 PMCID: PMC10411115 DOI: 10.3791/65319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/21/2023] Open
Abstract
Presented here is an experimental ischemic stroke study using our newly developed noninvasive imaging system that integrates three acoustic-based imaging technologies: photoacoustic, ultrasound, and angiographic tomography (PAUSAT). Combining these three modalities helps acquire multi-spectral photoacoustic tomography (PAT) of the brain blood oxygenation, high-frequency ultrasound imaging of the brain tissue, and acoustic angiography of the cerebral blood perfusion. The multi-modal imaging platform allows the study of cerebral perfusion and oxygenation changes in the whole mouse brain after stroke. Two commonly used ischemic stroke models were evaluated: the permanent middle cerebral artery occlusion (pMCAO) model and the photothrombotic (PT) model. PAUSAT was used to image the same mouse brains before and after a stroke and quantitatively analyze both stroke models. This imaging system was able to clearly show the brain vascular changes after ischemic stroke, including significantly reduced blood perfusion and oxygenation in the stroke infarct region (ipsilateral) compared to the uninjured tissue (contralateral). The results were confirmed by both laser speckle contrast imaging and triphenyltetrazolium chloride (TTC) staining. Furthermore, stroke infarct volume in both stroke models was measured and validated by TTC staining as the ground truth. Through this study, we have demonstrated that PAUSAT can be a powerful tool in noninvasive and longitudinal preclinical studies of ischemic stroke.
Affiliation(s)
- Luca Menozzi
- Department of Biomedical Engineering, Duke University
| | - Ángela Del Águila
- Multidisciplinary Brain Protection Program, Department of Anesthesiology, Duke University School of Medicine
| | - Tri Vu
- Department of Biomedical Engineering, Duke University
| | - Chenshuo Ma
- Department of Biomedical Engineering, Duke University
| | - Wei Yang
- Multidisciplinary Brain Protection Program, Department of Anesthesiology, Duke University School of Medicine
| | - Junjie Yao
- Department of Biomedical Engineering, Duke University
| |
42
Kurnikov A, Volkov G, Orlova A, Kovalchuk A, Khochenkova Y, Razansky D, Subochev P. Fisheye piezo polymer detector for scanning optoacoustic angiography of experimental neoplasms. PHOTOACOUSTICS 2023; 31:100507. [PMID: 37252652 PMCID: PMC10212753 DOI: 10.1016/j.pacs.2023.100507] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Revised: 04/14/2023] [Accepted: 05/06/2023] [Indexed: 05/31/2023]
Abstract
A number of optoacoustic (or photoacoustic) microscopy and mesoscopy techniques have been employed successfully for non-invasive tumor angiography. However, accurate rendering of tortuous and multidirectional neoplastic vessels is commonly hindered by the limited aperture size, narrow bandwidth and insufficient angular coverage of commercially available ultrasound transducers. We exploited the excellent flexibility and elasticity of a piezo polymer (PVDF) material to devise a fisheye-shaped ultrasound detector with a high numerical aperture of 0.9, a wide 1-30 MHz detection bandwidth and a 27 mm diameter aperture suitable for imaging tumors of various sizes. We show theoretically and experimentally that the detector's wide view angle and bandwidth are paramount for achieving detailed visualization of the intricate, arbitrarily oriented neovasculature in experimental tumors. The developed approach is shown to be well suited to the tasks of experimental oncology, thus allowing the angiographic potential of optoacoustics to be better exploited.
Affiliation(s)
- Alexey Kurnikov
- Institute of Applied Physics, Russian Academy of Sciences, 46 Ulyanov Str., Nizhny Novgorod 603950, Russia
| | - Grigory Volkov
- Institute of Applied Physics, Russian Academy of Sciences, 46 Ulyanov Str., Nizhny Novgorod 603950, Russia
| | - Anna Orlova
- Institute of Applied Physics, Russian Academy of Sciences, 46 Ulyanov Str., Nizhny Novgorod 603950, Russia
| | - Andrey Kovalchuk
- Institute of Applied Physics, Russian Academy of Sciences, 46 Ulyanov Str., Nizhny Novgorod 603950, Russia
| | - Yulia Khochenkova
- National Medical Research Center of Oncology named after N. N. Blokhin, Kashirskoe highway 23, Moscow 115522, Russia
| | - Daniel Razansky
- Institute of Pharmacology and Toxicology, Faculty of Medicine, UZH Zurich, Rämistrasse 71, Zurich 8006, Switzerland
- Institute for Biomedical Engineering, Department of Information Technology and Electrical Engineering, ETH Zurich, Gloriastrasse 35, Zurich 8092, Switzerland
| | - Pavel Subochev
- Institute of Applied Physics, Russian Academy of Sciences, 46 Ulyanov Str., Nizhny Novgorod 603950, Russia
| |
43
Wang T, Chen C, Shen K, Liu W, Tian C. Streak artifact suppressed back projection for sparse-view photoacoustic computed tomography. APPLIED OPTICS 2023; 62:3917-3925. [PMID: 37706701 DOI: 10.1364/ao.487957] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/16/2023] [Accepted: 04/21/2023] [Indexed: 09/15/2023]
Abstract
The development of fast and accurate image reconstruction algorithms under constrained data acquisition conditions is important for photoacoustic computed tomography (PACT). Sparse-view measurements have been used to accelerate data acquisition and reduce system complexity; however, reconstructed images suffer from sparsity-induced streak artifacts. In this paper, a modified back-projection (BP) method termed anti-streak BP is proposed to suppress streak artifacts in sparse-view PACT reconstruction. During the reconstruction process, the anti-streak BP identifies back-projection terms contaminated by high-intensity sources using an outlier detection method. The weights of the contaminated back-projection terms are then adaptively adjusted to eliminate the effects of the high-intensity sources. The proposed anti-streak BP method is compared with the conventional BP method on both simulation and in vivo data. The anti-streak BP method shows substantially fewer artifacts in the reconstructed images, and the streak index is 54% and 20% lower than that of the conventional BP method on simulation and in vivo data, respectively, when the number of transducers is N=128. The anti-streak BP method is thus a powerful improvement over conventional BP, offering effective artifact suppression.
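The idea can be sketched as a per-pixel robust weighting of the standard delay-and-sum back-projection terms: contributions that deviate strongly from the per-pixel median are treated as outliers and down-weighted before summation. The geometry, the median/MAD test, and the shrinkage rule below are illustrative assumptions, not the paper's exact formulation.

```python
# Outlier-aware back-projection sketch (assumed weighting rule, not the published one).
import numpy as np

def antistreak_backproject(sinogram, det_xy, pixels_xy, fs, c=1500.0, k=3.0):
    """sinogram: (n_det, n_t) sensor data; det_xy, pixels_xy: (n, 2) positions in metres."""
    n_det, n_t = sinogram.shape
    image = np.zeros(len(pixels_xy))
    for p, (px, py) in enumerate(pixels_xy):
        dist = np.hypot(det_xy[:, 0] - px, det_xy[:, 1] - py)       # pixel-detector distances
        idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_t - 1)
        terms = sinogram[np.arange(n_det), idx]                      # per-detector BP terms
        med = np.median(terms)
        dev = np.abs(terms - med)
        mad = np.median(dev) + 1e-12
        w = np.ones(n_det)
        outliers = dev > k * mad                                     # robust outlier test
        w[outliers] = (k * mad) / dev[outliers]                      # shrink outlier weights
        image[p] = np.sum(w * terms)
    return image

# Toy usage: 64 detectors on a half-circle (limited view), random data, 32x32 grid.
rng = np.random.default_rng(0)
ang = np.linspace(0, np.pi, 64)
det = 0.02 * np.column_stack([np.cos(ang), np.sin(ang)])
gx, gy = np.meshgrid(np.linspace(-0.01, 0.01, 32), np.linspace(-0.01, 0.01, 32))
pix = np.column_stack([gx.ravel(), gy.ravel()])
img = antistreak_backproject(rng.standard_normal((64, 2000)), det, pix, fs=40e6)
```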
44
Zhang Z, Jin H, Zhang W, Lu W, Zheng Z, Sharma A, Pramanik M, Zheng Y. Adaptive enhancement of acoustic resolution photoacoustic microscopy imaging via deep CNN prior. PHOTOACOUSTICS 2023; 30:100484. [PMID: 37095888 PMCID: PMC10121479 DOI: 10.1016/j.pacs.2023.100484] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/11/2023] [Accepted: 03/29/2023] [Indexed: 05/03/2023]
Abstract
Acoustic-resolution photoacoustic microscopy (AR-PAM) is a promising medical imaging modality that can be employed for deep bio-tissue imaging. However, its relatively low imaging resolution has greatly hindered its wide application. Previous model-based or learning-based PAM enhancement algorithms either require the design of complex handcrafted priors to achieve good performance or lack the interpretability and flexibility needed to adapt to different degradation models. Moreover, the degradation model of AR-PAM imaging depends on both the imaging depth and the center frequency of the ultrasound transducer, which vary across imaging conditions and cannot be handled by a single neural network model. To address this limitation, an algorithm integrating learning-based and model-based methods is proposed here so that a single framework can deal with various distortion functions adaptively. The vasculature image statistics are implicitly learned by a deep convolutional neural network, which serves as a plug-and-play (PnP) prior. The trained network can be directly plugged into the model-based optimization framework for iterative AR-PAM image enhancement, fitting different degradation mechanisms. Based on a physical model, the point spread function (PSF) kernels for various AR-PAM imaging situations are derived and used for the enhancement of simulated and in vivo AR-PAM images, which collectively prove the effectiveness of the proposed method. Quantitatively, the PSNR and SSIM values achieve the best performance with the proposed algorithm in all three simulation scenarios, and in an in vivo test the SNR and CNR values rose significantly from 6.34 and 5.79 to 35.37 and 29.66, respectively.
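The plug-and-play strategy itself is simple to illustrate: alternate a data-consistency step under the assumed degradation model with a denoising step into which any prior can be plugged. In the half-quadratic-splitting sketch below the learned CNN prior is replaced by a Gaussian-filter stand-in, and the blur forward model and all parameters are assumptions for illustration only.

```python
# Plug-and-play half-quadratic-splitting sketch with a stand-in denoiser.
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_hqs_deblur(y, psf_sigma=2.0, rho=0.1, n_iter=20, step=0.5):
    """y: degraded 2D image; the forward model is assumed to be a Gaussian blur."""
    blur = lambda img: gaussian_filter(img, psf_sigma)  # H and H^T (symmetric PSF)
    x = y.copy()
    z = y.copy()
    for _ in range(n_iter):
        # Data step: gradient descent on 0.5*||Hx - y||^2 + 0.5*rho*||x - z||^2.
        grad = blur(blur(x) - y) + rho * (x - z)
        x = x - step * grad
        # Prior step: plug in any denoiser here (a trained CNN in the paper).
        z = gaussian_filter(x, 1.0)
    return x

y = gaussian_filter(np.random.rand(64, 64), 2.0)  # toy degraded image
x_hat = pnp_hqs_deblur(y)
```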
Affiliation(s)
- Zhengyuan Zhang
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
| | - Haoran Jin
- Zhejiang University, College of Mechanical Engineering, The State Key Laboratory of Fluid Power and Mechatronic Systems, Hangzhou 310027, China
| | - Wenwen Zhang
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
| | - Wenhao Lu
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
| | - Zesheng Zheng
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
| | - Arunima Sharma
- Johns Hopkins University, Electrical and Computer Engineering, Baltimore, MD 21218, USA
| | - Manojit Pramanik
- Iowa State University, Department of Electrical and Computer Engineering, Ames, Iowa, USA
| | - Yuanjin Zheng
- Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Corresponding author.
| |
45
Wang R, Zhu J, Xia J, Yao J, Shi J, Li C. Photoacoustic imaging with limited sampling: a review of machine learning approaches. BIOMEDICAL OPTICS EXPRESS 2023; 14:1777-1799. [PMID: 37078052 PMCID: PMC10110324 DOI: 10.1364/boe.483081] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/09/2022] [Revised: 03/03/2023] [Accepted: 03/17/2023] [Indexed: 05/03/2023]
Abstract
Photoacoustic imaging combines high optical absorption contrast and deep acoustic penetration, and can reveal structural, molecular, and functional information about biological tissue non-invasively. Due to practical restrictions, photoacoustic imaging systems often face various challenges, such as complex system configuration, long imaging time, and/or less-than-ideal image quality, which collectively hinder their clinical application. Machine learning has been applied to improve photoacoustic imaging and mitigate the otherwise strict requirements in system setup and data acquisition. In contrast to the previous reviews of learned methods in photoacoustic computed tomography (PACT), this review focuses on the application of machine learning approaches to address the limited spatial sampling problems in photoacoustic imaging, specifically the limited view and undersampling issues. We summarize the relevant PACT works based on their training data, workflow, and model architecture. Notably, we also introduce the recent limited sampling works on the other major implementation of photoacoustic imaging, i.e., photoacoustic microscopy (PAM). With machine learning-based processing, photoacoustic imaging can achieve improved image quality with modest spatial sampling, presenting great potential for low-cost and user-friendly clinical applications.
Affiliation(s)
- Ruofan Wang
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
| | - Jing Zhu
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
| | - Jun Xia
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
| | - Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| | - Junhui Shi
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
| | - Chiye Li
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
| |
46
Mom K, Langer M, Sixou B. Deep Gauss-Newton for phase retrieval. OPTICS LETTERS 2023; 48:1136-1139. [PMID: 36857232 DOI: 10.1364/ol.484862] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/03/2023] [Accepted: 01/26/2023] [Indexed: 06/18/2023]
Abstract
We propose the deep Gauss-Newton (DGN) algorithm, which incorporates knowledge of the forward model into a deep neural network by unrolling a Gauss-Newton optimization method. No regularization or step size needs to be chosen; both are learned through convolutional neural networks. The proposed algorithm does not require an initial reconstruction and is able to retrieve the phase and absorption simultaneously from a single-distance diffraction pattern. The DGN method was applied to both simulated and experimental data and yielded large reductions in reconstruction error and improvements in resolution compared with a state-of-the-art iterative method and another neural-network-based reconstruction algorithm.
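For reference, the classical Gauss-Newton update that DGN unrolls is shown on a toy nonlinear least-squares problem below; in the paper each unrolled iteration additionally passes through learned convolutional modules that supply the regularization and step size, which are omitted in this sketch.

```python
# Classical Gauss-Newton iteration on a toy model y ~ a*exp(b*t) (illustration only).
import numpy as np

def gauss_newton(t, y, theta0, n_iter=10):
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        a, b = theta
        r = a * np.exp(b * t) - y                  # residual vector
        J = np.stack([np.exp(b * t),               # d r / d a
                      a * t * np.exp(b * t)], 1)   # d r / d b
        delta = np.linalg.solve(J.T @ J, -J.T @ r) # solve the normal equations
        theta = theta + delta
    return theta

t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * t) + 0.01 * np.random.randn(50)
print(gauss_newton(t, y, theta0=(1.0, 1.0)))       # should approach (2.0, 1.5)
```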
47
Lan H, Yang C, Gao F. A jointed feature fusion framework for photoacoustic image reconstruction. PHOTOACOUSTICS 2023; 29:100442. [PMID: 36589516 PMCID: PMC9798177 DOI: 10.1016/j.pacs.2022.100442] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Accepted: 12/19/2022] [Indexed: 06/17/2023]
Abstract
Standard reconstruction of photoacoustic (PA) computed tomography (PACT) images can introduce artifacts due to interference or an ill-posed setup. Recently, deep learning has been used to reconstruct PA images under ill-posed conditions. In this paper, we propose a jointed feature fusion framework (JEFF-Net) based on deep learning to reconstruct the PA image using limited-view data. Cross-domain features from the limited-view position-wise data and the reconstructed image are fused through backtracked supervision. A quarter of the position-wise data (32 channels) is fed into the model, which outputs the remaining three-quarters-view data (96 channels). Moreover, two novel losses are designed to restrain artifacts by sufficiently manipulating the superposed data. The experimental results demonstrate superior performance, and quantitative evaluations show that our proposed method outperformed the ground truth in some metrics, with improvements of 135% (SSIM for simulation) and 40% (gCNR for in vivo).
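The channel-completion idea described above can be sketched as a network that maps the measured quarter of the sensor channels to the missing three quarters; the architecture and the sinogram length below are assumptions, not the JEFF-Net design.

```python
# Toy channel-completion sketch: predict missing sensor channels from measured ones.
import torch
import torch.nn as nn

n_t = 1024                                     # time samples per channel (assumed)
completion_net = nn.Sequential(
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(64, 96, kernel_size=5, padding=2),   # 32 measured -> 96 missing channels
)
quarter_view = torch.randn(1, 32, n_t)         # limited-view position-wise data
predicted_rest = completion_net(quarter_view)  # (1, 96, n_t) estimated channels
full_view = torch.cat([quarter_view, predicted_rest], dim=1)  # 128-channel data
```

In the actual framework, such completed data are fused with image-domain features under the backtracked supervision and the two artifact-restraining losses described above.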
Affiliation(s)
- Hengrong Lan
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
| | - Changchun Yang
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
| | - Fei Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Shanghai Clinical Research and Trial Center, Shanghai 201210, China
| |
48
Hsu KT, Guan S, Chitnis PV. Fast iterative reconstruction for photoacoustic tomography using learned physical model: Theoretical validation. PHOTOACOUSTICS 2023; 29:100452. [PMID: 36700132 PMCID: PMC9867977 DOI: 10.1016/j.pacs.2023.100452] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/26/2022] [Revised: 12/21/2022] [Accepted: 01/11/2023] [Indexed: 06/17/2023]
Abstract
Iterative reconstruction has demonstrated superior performance in medical imaging under compressed, sparse, and limited-view sensing scenarios. However, iterative reconstruction algorithms are slow to converge and rely heavily on hand-crafted parameters to achieve good performance. Many iterations are usually required to reconstruct a high-quality image, which is computationally expensive due to repeated evaluations of the physical model. While learned iterative reconstruction approaches such as model-based learning (MBLr) can reduce the number of iterations through convolutional neural networks, they still require repeated evaluations of the physical model at each iteration. Therefore, the goal of this study is to develop a Fast Iterative Reconstruction (FIRe) algorithm that incorporates a learned physical model into the learned iterative reconstruction scheme to further reduce the reconstruction time while maintaining robust reconstruction performance. We also propose an efficient training scheme for FIRe that releases the enormous memory footprint required by learned iterative reconstruction methods through the concept of recursive training. The results of our proposed method demonstrate reconstruction performance comparable to learned iterative reconstruction methods with a 9x reduction in computation time, and a 620x reduction in computation time compared to variational reconstruction.
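Structurally, the speed-up comes from replacing the expensive acoustic forward operator inside a learned iterative update with a cheap learned surrogate. The sketch below shows that structure only: the convolutional stand-ins are untrained, and their shapes and the update rule are assumptions, not the FIRe implementation.

```python
# Structural sketch of a learned iterative update with a learned forward surrogate.
import torch
import torch.nn as nn

forward_surrogate = nn.Conv2d(1, 1, 9, padding=4, bias=False)  # stands in for learned A
adjoint_surrogate = nn.Conv2d(1, 1, 9, padding=4, bias=False)  # stands in for learned A^T
refiner = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 1, 3, padding=1))

def fire_like_iteration(x, y, step=0.1):
    # Gradient of 0.5*||A x - y||^2 with A replaced by its surrogate, then refinement.
    residual = forward_surrogate(x) - y
    return refiner(x - step * adjoint_surrogate(residual))

true_image = torch.rand(1, 1, 64, 64)
with torch.no_grad():
    y = forward_surrogate(true_image)      # toy "measurement" produced by the surrogate
    x = torch.zeros_like(true_image)
    for _ in range(5):                     # a few unrolled iterations
        x = fire_like_iteration(x, y)
```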
49
Compressed Sensing Photoacoustic Imaging Reconstruction Using Elastic Net Approach. Mol Imaging 2022; 2022:7877049. [PMID: 36721731 PMCID: PMC9881674 DOI: 10.1155/2022/7877049] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 11/04/2022] [Accepted: 12/10/2022] [Indexed: 12/24/2022] Open
Abstract
Photoacoustic imaging involves reconstructing an estimate of the absorbed energy density distribution from measured ultrasound data. The reconstruction task based on incomplete and noisy experimental data is usually an ill-posed problem that requires regularization to obtain meaningful solutions. The purpose of this work is to propose an elastic net (EN) model to improve the quality of reconstructed photoacoustic images. To evaluate the performance of the proposed method, a series of numerical simulations and tissue-mimicking phantom experiments are performed. The experimental results indicate that, compared with L1-norm- and L2-norm-based regularization methods across different numerical phantoms, Gaussian noise levels of 10-50 dB, and different regularization parameters, the EN method with α = 0.5 offers better image quality, calculation speed, and anti-noise ability.
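The elastic-net reconstruction objective combines the two penalties compared above in a single term; in assumed notation, with A the system matrix mapping the absorbed-energy image x to the measured data y and α the mixing parameter (α = 0.5 in the comparison), it reads:

```latex
% Elastic-net regularized reconstruction (assumed notation, standard formulation):
\begin{equation}
  \hat{x} = \arg\min_{x}\;
  \tfrac{1}{2}\,\lVert A x - y \rVert_2^2
  + \lambda \left( \alpha\,\lVert x \rVert_1
  + \tfrac{1-\alpha}{2}\,\lVert x \rVert_2^2 \right),
\end{equation}
% which reduces to L1 (lasso) regularization for \alpha = 1 and to L2 (ridge/Tikhonov)
% regularization for \alpha = 0.
```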
50
Menozzi L, Yang W, Feng W, Yao J. Sound out the impaired perfusion: Photoacoustic imaging in preclinical ischemic stroke. Front Neurosci 2022; 16:1055552. [PMID: 36532279 PMCID: PMC9751426 DOI: 10.3389/fnins.2022.1055552] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Accepted: 11/17/2022] [Indexed: 09/19/2023] Open
Abstract
Acoustically detecting optical absorption contrast, photoacoustic imaging (PAI) is a highly versatile imaging modality that can provide anatomical, functional, molecular, and metabolic information about biological tissues. PAI is highly scalable and can probe the same biological process at various length scales, ranging from single cells (microscopic) to the whole organ (macroscopic). Using hemoglobin as the endogenous contrast, PAI is capable of label-free imaging of blood vessels in the brain and mapping hemodynamic functions such as blood oxygenation and blood flow. These imaging merits make PAI a great tool for studying ischemic stroke, particularly for probing hemodynamic changes and impaired cerebral blood perfusion as a consequence of stroke. In this narrative review, we aim to summarize the scientific progress made over the past decade in using PAI to monitor cerebral blood vessel impairment and restoration after ischemic stroke, mostly in the preclinical setting. We also outline and discuss the major technological barriers and challenges that need to be overcome so that PAI can play a more significant role in preclinical stroke research and, more importantly, accelerate its translation into a useful clinical tool for the diagnosis and management of human stroke.
Affiliation(s)
- Luca Menozzi
- Department of Biomedical Engineering, Duke University, Durham, NC, United States
| | - Wei Yang
- Multidisciplinary Brain Protection Program, Department of Anesthesiology, Duke University, Durham, NC, United States
| | - Wuwei Feng
- Department of Neurology, Duke University School of Medicine, Durham, NC, United States
| | - Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC, United States
| |