1
Lian T, Lv Y, Guo K, Li Z, Li J, Wang G, Lin J, Cao Y, Liu Q, Song X. Generative priors-constraint accelerated iterative reconstruction for extremely sparse photoacoustic tomography boosted by mean-reverting diffusion model: Towards 8 projections. Photoacoustics 2025;43:100709. PMID: 40161358; PMCID: PMC11951203; DOI: 10.1016/j.pacs.2025.100709.
Abstract
As a novel non-invasive hybrid biomedical imaging technology, photoacoustic tomography combines the high contrast of optical imaging with the high penetration of acoustic imaging. However, conventional reconstruction methods under sparse view can produce low-quality images. To address this problem, an advanced sparse reconstruction method for photoacoustic tomography based on a mean-reverting diffusion model is proposed. By modeling the degradation process from a high-quality image under full-view scanning (512 projections) to a sparse image with stable Gaussian noise (i.e., the mean state), a mean-reverting diffusion model is trained to learn prior information about the data distribution. The learned prior is then used to generate a high-quality image from the sparse image by iteratively sampling the noisy state. Simulated blood-vessel data and in vivo animal experimental data were used to evaluate the performance of the proposed method. The results demonstrate that the proposed method achieves higher-quality sparse reconstruction than conventional reconstruction methods and the U-Net method. In addition, the proposed method dramatically speeds up sparse reconstruction and achieves better results for extremely sparse data than a method based on a conventional diffusion model. For extremely sparse projections (8 projections), the proposed method improves structural similarity by 0.52 (∼289 %) and peak signal-to-noise ratio by 10.01 dB (∼59 %) compared with the conventional delay-and-sum method. This method is expected to shorten acquisition time and reduce the cost of photoacoustic tomography, further expanding its range of applications.
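The degradation modelling described above follows a mean-reverting (Ornstein-Uhlenbeck-style) diffusion: each forward step pulls the current image toward the mean state (the sparse-view image) while injecting Gaussian noise. A minimal numerical sketch of one such forward process, with illustrative parameters (`theta`, `sigma`) that are not taken from the paper:

```python
import numpy as np

def mean_reverting_step(x, mu, theta=0.1, sigma=0.05, rng=None):
    """One Euler-Maruyama step of dx = theta * (mu - x) dt + sigma dW
    with dt = 1: the state drifts toward the mean state mu while
    Gaussian noise is injected."""
    rng = np.random.default_rng(0) if rng is None else rng
    drift = theta * (mu - x)                 # pull toward the mean state
    noise = sigma * rng.standard_normal(x.shape)
    return x + drift + noise

# Toy run: a "full-view" image degrades toward a "sparse-view" mean state.
rng = np.random.default_rng(42)
x = np.ones((8, 8))                          # stand-in high-quality image
mu = np.zeros((8, 8))                        # stand-in mean (sparse) state
for _ in range(200):
    x = mean_reverting_step(x, mu, rng=rng)
```

After enough steps the state fluctuates around the mean state with bounded stationary variance, which is the noisy "mean state" a reverse sampler would start from.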
Affiliation(s)
- Teng Lian
- Jiluan Academy, Nanchang University, Nanchang 330031, China
- Yichen Lv
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Jiluan Academy, Nanchang University, Nanchang 330031, China
- Kangjun Guo
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Zilong Li
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Jiahong Li
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Guijun Wang
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Jiabin Lin
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Yiyang Cao
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Xianlin Song
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Jiangxi Provincial Key Laboratory of Advanced Signal Processing and Intelligent Communications, Nanchang University, Nanchang 330031, China
- Jiangxi Provincial Engineering Research Center for Intelligent Medical Information Detection and Internet of Things, Nanchang University, Nanchang 330031, China
2
Pérez-Liva M, Alonso de Leciñana M, Gutiérrez-Fernández M, Camacho Sosa Dias J, F Cruza J, Rodríguez-Pardo J, García-Suárez I, Laso-García F, Herraiz JL, Elvira Segura L. Dual photoacoustic/ultrasound technologies for preclinical research: current status and future trends. Phys Med Biol 2025;70:07TR01. PMID: 39914003; DOI: 10.1088/1361-6560/adb368.
Abstract
Photoacoustic (PA) imaging, by integrating optical and ultrasound (US) modalities, combines high spatial resolution with deep tissue penetration, making it a transformative tool in biomedical research. This review presents a comprehensive analysis of the current status of dual PA/US imaging technologies, emphasising their applications in preclinical research. It details advancements in light excitation strategies, including tomographic and microscopic modalities, innovations in pulsed laser and alternative light sources, and US instrumentation. The review further explores preclinical methodologies, encompassing dedicated instrumentation, signal processing, and data analysis techniques essential for PA/US systems. Key applications discussed include the visualisation of blood vessels, micro-circulation, and tissue perfusion; diagnosis and monitoring of inflammation; evaluation of infections, atherosclerosis, burn injuries, healing, and scar formation; assessment of liver and renal diseases; monitoring of epilepsy and neurodegenerative conditions; studies on brain disorders and preeclampsia; cell therapy monitoring; and tumour detection, staging, and recurrence monitoring. Challenges related to imaging depth, resolution, cost, and the translation of contrast agents to clinical practice are analysed, alongside advancements in high-speed acquisition, artificial intelligence-driven reconstruction, and innovative light-delivery methods. While clinical translation remains complex, this review underscores the crucial role of preclinical studies in unravelling fundamental biomedical questions and assessing novel imaging strategies. Ultimately, this review delves into the future trends of dual PA/US imaging, highlighting its potential to bridge preclinical discoveries with clinical applications and drive advances in diagnostics, therapeutic monitoring, and personalised medicine.
Affiliation(s)
- Mailyn Pérez-Liva
- IPARCOS Institute and EMFTEL Department, Universidad Complutense de Madrid, Pl. de las Ciencias, 1, Moncloa-Aravaca, Madrid 28040, Spain
- Health Research Institute of the Hospital Clínico San Carlos, IdISSC, C/ Profesor Martín Lagos s/n, Madrid 28040, Spain
- María Alonso de Leciñana
- Department of Neurology and Stroke Centre, Neurological Sciences and Cerebrovascular Research Laboratory, Neurology and Cerebrovascular Disease Group, Neuroscience Area Hospital La Paz Institute for Health Research-IdiPAZ (La Paz University Hospital, Universidad Autónoma de Madrid), Madrid, Spain
- María Gutiérrez-Fernández
- Department of Neurology and Stroke Centre, Neurological Sciences and Cerebrovascular Research Laboratory, Neurology and Cerebrovascular Disease Group, Neuroscience Area Hospital La Paz Institute for Health Research-IdiPAZ (La Paz University Hospital, Universidad Autónoma de Madrid), Madrid, Spain
- Jorge Camacho Sosa Dias
- Instituto de Tecnologías Físicas y de la Información (ITEFI, CSIC), Serrano 144, Madrid 28006, Spain
- Jorge F Cruza
- Instituto de Tecnologías Físicas y de la Información (ITEFI, CSIC), Serrano 144, Madrid 28006, Spain
- Jorge Rodríguez-Pardo
- Department of Neurology and Stroke Centre, Neurological Sciences and Cerebrovascular Research Laboratory, Neurology and Cerebrovascular Disease Group, Neuroscience Area Hospital La Paz Institute for Health Research-IdiPAZ (La Paz University Hospital, Universidad Autónoma de Madrid), Madrid, Spain
- Iván García-Suárez
- Department of Neurology and Stroke Centre, Neurological Sciences and Cerebrovascular Research Laboratory, Neurology and Cerebrovascular Disease Group, Neuroscience Area Hospital La Paz Institute for Health Research-IdiPAZ (La Paz University Hospital, Universidad Autónoma de Madrid), Madrid, Spain
- Department of Emergency Service, San Agustín University Hospital, Asturias, Spain
- Fernando Laso-García
- Department of Neurology and Stroke Centre, Neurological Sciences and Cerebrovascular Research Laboratory, Neurology and Cerebrovascular Disease Group, Neuroscience Area Hospital La Paz Institute for Health Research-IdiPAZ (La Paz University Hospital, Universidad Autónoma de Madrid), Madrid, Spain
- Joaquin L Herraiz
- IPARCOS Institute and EMFTEL Department, Universidad Complutense de Madrid, Pl. de las Ciencias, 1, Moncloa-Aravaca, Madrid 28040, Spain
- Health Research Institute of the Hospital Clínico San Carlos, IdISSC, C/ Profesor Martín Lagos s/n, Madrid 28040, Spain
- Luis Elvira Segura
- Instituto de Tecnologías Físicas y de la Información (ITEFI, CSIC), Serrano 144, Madrid 28006, Spain
3
Mondal S, Paul S, Singh N, Warbal P, Khanam Z, Saha RK. Deep learning aided determination of the optimal number of detectors for photoacoustic tomography. Biomed Phys Eng Express 2025;11:025029. PMID: 39874604; DOI: 10.1088/2057-1976/adaf29.
Abstract
Photoacoustic tomography (PAT) is a non-destructive, non-ionizing, and rapidly expanding hybrid biomedical imaging technique, yet it struggles to produce clear images when data from detectors or view angles are limited. As a result, reconstructions suffer from significant streak artifacts and low image quality. The integration of deep learning (DL), specifically convolutional neural networks (CNNs), has recently demonstrated powerful performance in various areas of PAT. This work introduces a post-processing CNN architecture named residual-dense UNet (RDUNet) to address streak artifacts in reconstructed PA images. The framework combines the benefits of residual and dense blocks to form high-resolution reconstructed images. The network is trained with two different types of datasets to learn the relationship between reconstructed images and their corresponding ground truths (GTs). In the first protocol, RDUNet (identified as RDUNet I) was trained on heterogeneous simulated images featuring three distinct phantom types. In the second protocol, RDUNet (referred to as RDUNet II) was trained on a heterogeneous mix of 81% simulated data and 19% experimental data, allowing the network to adapt to diverse experimental challenges. The RDUNet algorithm was validated through numerical and experimental studies involving single-disk, T-shape, and vasculature phantoms. Its performance was compared with the widely used backprojection (BP) and traditional UNet algorithms. This study shows that RDUNet can substantially reduce the number of detectors required, from 100 to 25 for simulated test images and to 30 for experimental scenarios.
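The RDUNet II training mix (81 % simulated, 19 % experimental) can be sketched as a simple dataset-labelling step; the function below is an illustrative stand-in, not the authors' code:

```python
import numpy as np

def mix_datasets(n_total, frac_sim=0.81, seed=0):
    """Label each training sample as simulated ("sim") or experimental
    ("exp") with the given simulated fraction, then shuffle."""
    n_sim = round(n_total * frac_sim)
    labels = np.array(["sim"] * n_sim + ["exp"] * (n_total - n_sim))
    np.random.default_rng(seed).shuffle(labels)
    return labels

labels = mix_datasets(1000)   # 810 simulated, 190 experimental samples
```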
Affiliation(s)
- Sudeep Mondal
- Department of Applied Sciences, Indian Institute of Information Technology Allahabad, Prayagraj, 211015, India
- Subhadip Paul
- Department of Applied Sciences, Indian Institute of Information Technology Allahabad, Prayagraj, 211015, India
- Navjot Singh
- Department of Information Technology, Indian Institute of Information Technology Allahabad, Prayagraj, 211015, India
- Pankaj Warbal
- Department of Applied Sciences, Indian Institute of Information Technology Allahabad, Prayagraj, 211015, India
- Zartab Khanam
- Department of Applied Sciences, Indian Institute of Information Technology Allahabad, Prayagraj, 211015, India
- Ratan K Saha
- Department of Applied Sciences, Indian Institute of Information Technology Allahabad, Prayagraj, 211015, India
4
Zou Z, Li D, Guo H, Yao Y, Yin J, Tao C, Liu X. Enhancement of structural and functional photoacoustic imaging based on a reference-inputted convolutional neural network. Optics Express 2025;33:1260-1270. PMID: 39876303; DOI: 10.1364/oe.541906.
Abstract
Photoacoustic microscopy has demonstrated outstanding performance in high-resolution functional imaging. However, photoacoustic signals are inevitably contaminated by background noise during imaging. In addition, image quality is limited by laser biosafety constraints. Conventional approaches to improving image quality, such as increasing laser pulse energy or averaging over multiple acquisitions, can increase health risks and introduce motion artifacts owing to higher laser exposure. To balance biosafety and image quality, we propose a reference-inputted convolutional neural network (Ri-Net). The network is trained using photoacoustic signal and noise datasets from phantom experiments. Evaluation of the trained network demonstrates significant signal improvement. Human cuticle microvasculature imaging experiments further assess the performance and practicality of the network. The quantitative results show a 2.6-fold improvement in image contrast and a 9.6 dB increase in signal-to-noise ratio. Finally, we apply our network, trained on single-wavelength data, to multi-wavelength functional imaging. Functional imaging of the mouse ear demonstrates the robustness of our method and its potential to capture the oxygen saturation of microvasculature. Ri-Net enhances photoacoustic microscopy imaging, enabling more efficient microcirculation assessment in a clinical setting.
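The reported gains (2.6-fold contrast, 9.6 dB SNR) are standard image-quality metrics; a minimal sketch of common definitions (these conventions are assumptions on my part, not formulas quoted from the paper):

```python
import numpy as np

def snr_db(signal_region, noise_region):
    """SNR in dB: peak signal amplitude over noise standard deviation."""
    return 20.0 * np.log10(np.abs(signal_region).max() / noise_region.std())

def contrast(signal_region, background_region):
    """Simple contrast: mean signal amplitude over mean background."""
    return signal_region.mean() / background_region.mean()

rng = np.random.default_rng(0)
noise = 0.01 * rng.standard_normal(1000)     # background noise samples
signal = np.full(100, 1.0)                   # idealized vessel signal
```

With these toy values the SNR is about 40 dB, since the noise standard deviation is close to 0.01.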
5
Tasmara FA, Mitrayana M, Setiawan A, Ishii T, Saijo Y, Widyaningrum R. Trends and developments in 3D photoacoustic imaging systems: A review of recent progress. Med Eng Phys 2025;135:104268. PMID: 39922642; DOI: 10.1016/j.medengphy.2024.104268.
Abstract
Photoacoustic imaging (PAI) is a non-invasive diagnostic imaging technique that exploits the photoacoustic effect by combining optical and ultrasound imaging systems. Much of the development of PAI centers on high-quality 3D reconstruction systems for more accurate identification of tissue abnormalities. This literature review analyzes 3D image reconstruction in PAI over 2017-2024. The collected articles on 3D photoacoustic imaging were categorized based on the approach, design, and purpose of each study. First, the approaches were classified into three groups: experimental studies, numerical simulation, and numerical simulation with experimental validation. Second, the study designs were assessed based on the photoacoustic modality, laser type, and sensing mechanism. Third, the purposes of the collected studies were summarized into seven subsections: image quality improvement, frame rate improvement, image segmentation, system integration, inter-system comparisons, computational efficiency improvement, and portable system development. The review reveals that 3D PAI systems have been developed by various research groups investigating numerous biological objects. Therefore, 3D PAI has the potential to contribute to a wide range of novel biological imaging systems that support real-time biomedical imaging in the future.
Affiliation(s)
- Fikhri Astina Tasmara
- Department of Physics, Faculty of Mathematics and Natural Sciences, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Mitrayana Mitrayana
- Department of Physics, Faculty of Mathematics and Natural Sciences, Universitas Gadjah Mada, Yogyakarta, Indonesia
- Andreas Setiawan
- Department of Physics, Universitas Kristen Satya Wacana, Salatiga, Central Java, Indonesia
- Takuro Ishii
- Frontier Research Institute for Interdisciplinary Sciences, Tohoku University, Sendai, Japan; Graduate School of Biomedical Engineering, Tohoku University, Sendai, Japan
- Yoshifumi Saijo
- Graduate School of Biomedical Engineering, Tohoku University, Sendai, Japan
- Rini Widyaningrum
- Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Universitas Gadjah Mada, Yogyakarta, Indonesia
6
Sun M, Wang X, Wang Y, Meng Y, Gao D, Li C, Chen R, Huang K, Shi J. Full-view volumetric photoacoustic imaging using a hemispheric transducer array combined with an acoustic reflector. Biomedical Optics Express 2024;15:6864-6876. PMID: 39679402; PMCID: PMC11640568; DOI: 10.1364/boe.540392.
Abstract
Photoacoustic computed tomography (PACT) has evoked extensive interest for applications in preclinical and clinical research. However, current systems suffer from the limited view of their detection setups, which impedes full acquisition of intricate tissue structures. Here, we propose an approach that enables fast 3D full-view imaging. A hemispherical ultrasonic transducer array combined with a planar acoustic reflector serves as the ultrasonic detection device in the PACT system. The planar acoustic reflector creates a mirrored virtual transducer array, enlarging the detection view to cover approximately 3.7 π steradians in our setup. To verify the effectiveness of the proposed configuration, we present imaging results for a hair phantom, an in vivo zebrafish larva, and a leaf skeleton phantom. Furthermore, the real-time dynamic imaging capacity of the system is demonstrated by observing the movement of zebrafish over 2 s. This strategy holds great potential for both preclinical and clinical research by providing more detailed and comprehensive images of biological tissues.
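The planar reflector effectively doubles the aperture by mirroring each physical element across the reflector plane; a geometric sketch (the plane z = 0 and the coordinates are illustrative, not taken from the paper):

```python
import numpy as np

def mirror_across_plane(points, z_plane=0.0):
    """Mirror (x, y, z) sensor positions across the plane z = z_plane,
    giving the virtual transducer array created by a planar reflector."""
    mirrored = points.copy()
    mirrored[:, 2] = 2.0 * z_plane - mirrored[:, 2]
    return mirrored

# Physical hemispherical-array elements above a reflector at z = 0.
real = np.array([[1.0, 2.0, 3.0],
                 [0.0, 1.0, 0.5]])
virtual = mirror_across_plane(real)
full_aperture = np.vstack([real, virtual])   # physical + virtual elements
```

Reconstruction then proceeds as if the combined physical-plus-virtual aperture had recorded the (appropriately time-shifted) signals.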
Affiliation(s)
- Mingli Sun
- School of Science, Zhejiang University of Science and Technology, Hangzhou 310023, China
- Yuqi Wang
- Zhejiang Lab, Hangzhou 311100, China
- Da Gao
- Zhejiang Lab, Hangzhou 311100, China
- Chiye Li
- Zhejiang Lab, Hangzhou 311100, China
- Kaikai Huang
- School of Physics, Zhejiang University, Hangzhou 310027, China
7
Shang R, Luke GP, O'Donnell M. Joint segmentation and image reconstruction with error prediction in photoacoustic imaging using deep learning. Photoacoustics 2024;40:100645. PMID: 39347464; PMCID: PMC11424948; DOI: 10.1016/j.pacs.2024.100645.
Abstract
Deep learning has been used to improve photoacoustic (PA) image reconstruction. One major challenge is that errors cannot be quantified to validate predictions when the ground truth is unknown. Validation is key to quantitative applications, especially those using limited-bandwidth ultrasonic linear detector arrays. Here, we propose a hybrid Bayesian convolutional neural network (Hybrid-BCNN) to jointly predict the PA image and segmentation with error (uncertainty) predictions. Each output pixel represents a probability distribution from which error can be quantified. The Hybrid-BCNN was trained with simulated PA data and applied to both simulations and experiments. Because PA images are sparse, segmentation focuses Hybrid-BCNN on minimizing the loss function in regions with PA signals, yielding better predictions. The results show that accurate PA segmentations and images are obtained, and that error predictions are highly statistically correlated with actual errors. To leverage the error predictions, confidence processing was used to generate PA images above a specified confidence level.
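The confidence processing described above can be sketched as masking out pixels whose predicted uncertainty is too large; the thresholding rule below is an illustrative stand-in, not the paper's exact procedure:

```python
import numpy as np

def confidence_mask(mean_img, std_img, max_std):
    """Keep pixels whose predicted standard deviation (uncertainty) is
    below max_std; zero out low-confidence pixels."""
    return np.where(std_img < max_std, mean_img, 0.0)

mean_img = np.array([[1.0, 2.0],
                     [3.0, 4.0]])            # predicted PA image
std_img = np.array([[0.1, 0.9],
                    [0.2, 0.8]])             # predicted per-pixel error
confident = confidence_mask(mean_img, std_img, max_std=0.5)
```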
Affiliation(s)
- Ruibo Shang
- uWAMIT Center, Department of Bioengineering, University of Washington, Seattle, WA 98195, USA
- Geoffrey P Luke
- Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
- Matthew O'Donnell
- uWAMIT Center, Department of Bioengineering, University of Washington, Seattle, WA 98195, USA
8
Lan H, Huang L, Wei X, Li Z, Lv J, Ma C, Nie L, Luo J. Masked cross-domain self-supervised deep learning framework for photoacoustic computed tomography reconstruction. Neural Netw 2024;179:106515. PMID: 39032393; DOI: 10.1016/j.neunet.2024.106515.
Abstract
Accurate image reconstruction is crucial for photoacoustic (PA) computed tomography (PACT). Recently, deep learning has been used to reconstruct PA images with a supervised scheme, which requires high-quality images as ground-truth labels. However, practical implementations face an inevitable trade-off between cost and performance, because accessing more measurements requires expensive additional channels. Here, we propose a masked cross-domain self-supervised (CDSS) reconstruction strategy to overcome the lack of ground-truth labels from limited PA measurements. We implement the self-supervised reconstruction in a model-based form. Simultaneously, we use self-supervision to enforce the consistency of measurements and images across three partitions of the measured PA data, obtained by randomly masking different channels. Our findings indicate that dynamically masking a substantial proportion of channels, such as 80%, yields meaningful self-supervisors in both the image and signal domains. Consequently, this approach reduces the multiplicity of pseudo solutions and enables efficient image reconstruction using fewer PA measurements, ultimately minimizing reconstruction error. Experimental results on an in vivo PACT dataset of mice demonstrate the potential of our self-supervised framework. Moreover, our method exhibits impressive performance, achieving a structural similarity index (SSIM) of 0.87 in an extremely sparse case using only 13 channels, which outperforms the supervised scheme with 16 channels (0.77 SSIM). In addition, our method can be deployed on different trainable models in an end-to-end manner, further enhancing its versatility and applicability.
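The random channel masking at the core of the self-supervision can be sketched as follows (the 80 % masking ratio is from the abstract; the three-partition scheme is simplified here to a single random mask):

```python
import numpy as np

def mask_channels(sinogram, mask_ratio=0.8, seed=0):
    """Zero out a random subset of detector channels (rows) of the
    measured PA data; returns the masked data and the keep mask."""
    rng = np.random.default_rng(seed)
    keep = rng.random(sinogram.shape[0]) >= mask_ratio   # True = kept
    masked = np.where(keep[:, None], sinogram, 0.0)
    return masked, keep

sino = np.ones((100, 256))                   # 100 channels x 256 samples
masked, keep = mask_channels(sino)
```

In a CDSS-style loop, different masks would be drawn each iteration and the reconstructions from different partitions forced to agree in both the signal and image domains.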
Affiliation(s)
- Hengrong Lan
- School of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Lijie Huang
- School of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Xingyue Wei
- School of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Zhiqiang Li
- School of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Jing Lv
- Medical Research Institute, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Southern Medical University, Guangzhou 510080, China
- Cheng Ma
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Liming Nie
- Medical Research Institute, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Southern Medical University, Guangzhou 510080, China
- Jianwen Luo
- School of Biomedical Engineering, Tsinghua University, Beijing 100084, China
9
Li B, Lu M, Zhou T, Bu M, Gu W, Wang J, Zhu Q, Liu X, Ta D. Removing Artifacts in Transcranial Photoacoustic Imaging With Polarized Self-Attention Dense-UNet. Ultrasound Med Biol 2024;50:1530-1543. PMID: 39013725; DOI: 10.1016/j.ultrasmedbio.2024.06.006.
Abstract
OBJECTIVE: Photoacoustic imaging (PAI) is a promising transcranial imaging technique. However, distortion of photoacoustic signals induced by the skull significantly degrades its imaging quality. We aimed to use deep learning to remove artifacts in PAI. METHODS: In this study, we propose a polarized self-attention dense U-Net, termed PSAD-UNet, to correct the distortion and accurately recover imaged objects beneath bone plates. To evaluate the performance of the proposed method, a series of experiments was performed using a custom-built PAI system. RESULTS: The experimental results showed that the proposed PSAD-UNet method could effectively implement transcranial PAI through a one- or two-layer bone plate. Compared with the conventional delay-and-sum and classical U-Net methods, PSAD-UNet can diminish the influence of bone plates and provide high-quality PAI results in terms of structural similarity and peak signal-to-noise ratio. The 3-D experimental results further confirm the feasibility of PSAD-UNet in 3-D transcranial imaging. CONCLUSION: PSAD-UNet paves the way for transcranial PAI with high imaging accuracy, revealing broad application prospects in preclinical and clinical fields.
Affiliation(s)
- Boyi Li
- Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Mengyang Lu
- Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Tianhua Zhou
- Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, China
- Mengxu Bu
- Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Wenting Gu
- Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Junyi Wang
- Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, China
- Qiuchen Zhu
- Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Xin Liu
- Academy for Engineering and Technology, Fudan University, Shanghai 200438, China
- Dean Ta
- Academy for Engineering and Technology, Fudan University, Shanghai 200438, China; Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, China
10
Zhong W, Li T, Hou S, Zhang H, Li Z, Wang G, Liu Q, Song X. Unsupervised disentanglement strategy for mitigating artifact in photoacoustic tomography under extremely sparse view. Photoacoustics 2024;38:100613. PMID: 38764521; PMCID: PMC11101706; DOI: 10.1016/j.pacs.2024.100613.
Abstract
Traditional sparse-view reconstruction methods for photoacoustic tomography (PAT) often produce significant artifacts. Here, a novel image-to-image translation method based on an unsupervised artifact disentanglement network (ADN), named PAT-ADN, is proposed to address this issue. The network is equipped with specialized encoders and decoders that encode and decode the artifact and content components of unpaired images, respectively. The performance of the proposed PAT-ADN was evaluated using circular phantom data and in vivo animal experimental data. The results demonstrate that PAT-ADN removes artifacts effectively. In particular, under an extremely sparse view (e.g., 16 projections), the structural similarity index and peak signal-to-noise ratio are improved by ∼188 % and ∼85 % on in vivo experimental data using the proposed method compared to traditional reconstruction methods. PAT-ADN improves the imaging performance of PAT, opening up possibilities for its application in multiple domains.
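The percentage improvements quoted above come from standard image-quality metrics; a minimal PSNR implementation for reference (the standard definition, not code from the paper):

```python
import numpy as np

def psnr_db(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction, for images scaled to the given data range."""
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((4, 4))                       # reference image
rec = np.full((4, 4), 0.1)                   # reconstruction off by 0.1
# mse = 0.01, so psnr_db(ref, rec) = 10 * log10(1 / 0.01) = 20 dB
```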
Affiliation(s)
- Wenhua Zhong
- Nanchang University, School of Information Engineering, Nanchang, China
- Tianle Li
- Nanchang University, Jiluan Academy, Nanchang, China
- Shangkun Hou
- Nanchang University, School of Information Engineering, Nanchang, China
- Hongyu Zhang
- Nanchang University, School of Information Engineering, Nanchang, China
- Zilong Li
- Nanchang University, School of Information Engineering, Nanchang, China
- Guijun Wang
- Nanchang University, School of Information Engineering, Nanchang, China
- Qiegen Liu
- Nanchang University, School of Information Engineering, Nanchang, China
- Xianlin Song
- Nanchang University, School of Information Engineering, Nanchang, China
11
Poimala J, Cox B, Hauptmann A. Compensating unknown speed of sound in learned fast 3D limited-view photoacoustic tomography. Photoacoustics 2024;37:100597. PMID: 38425677; PMCID: PMC10901832; DOI: 10.1016/j.pacs.2024.100597.
Abstract
Real-time applications in three-dimensional photoacoustic tomography from planar sensors rely on fast reconstruction algorithms that assume the speed of sound (SoS) in the tissue is homogeneous. Moreover, the reconstruction quality depends on the correct choice of the constant SoS. In this study, we discuss the possibility of ameliorating the problem of unknown or heterogeneous SoS distributions by using learned reconstruction methods. This can be done by modelling the uncertainties in the training data. In addition, a correction term can be included in the learned reconstruction method. We investigate the influence of both and show that, while a learned correction component can further improve reconstruction quality, a careful choice of uncertainties in the training data is the primary factor in overcoming unknown SoS. We support our findings with simulated and in vivo measurements in 3D.
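Why an incorrect constant SoS degrades reconstruction can be seen from a one-line time-of-flight calculation; a sketch with illustrative numbers (1540 m/s is a common soft-tissue value, not a figure from the paper):

```python
def tof_error_us(distance_m, sos_true=1540.0, sos_assumed=1500.0):
    """Arrival-time mismatch (in microseconds) when reconstructing with
    an assumed speed of sound that differs from the true one."""
    return (distance_m / sos_assumed - distance_m / sos_true) * 1e6

err = tof_error_us(0.03)                     # 3 cm propagation path
```

At a 3 cm propagation distance, a 40 m/s SoS mismatch shifts arrival times by roughly half a microsecond, which translates into sub-millimetre but resolution-degrading positioning errors.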
Affiliation(s)
- Jenni Poimala
- Research Unit of Mathematical Sciences, University of Oulu, Finland
- Ben Cox
- Department of Medical Physics and Biomedical Engineering, University College London, UK
- Andreas Hauptmann
- Research Unit of Mathematical Sciences, University of Oulu, Finland
- Department of Computer Science, University College London, UK
12
Nyayapathi N, Zheng E, Zhou Q, Doyley M, Xia J. Dual-modal Photoacoustic and Ultrasound Imaging: from preclinical to clinical applications. Frontiers in Photonics 2024;5:1359784. PMID: 39185248; PMCID: PMC11343488; DOI: 10.3389/fphot.2024.1359784.
Abstract
Photoacoustic imaging is a novel biomedical imaging modality that has emerged over recent decades. Because optical energy is converted into acoustic waves, photoacoustic imaging offers high-resolution imaging at depths beyond the optical diffusion limit. Photoacoustic imaging is frequently used in conjunction with ultrasound as a hybrid modality. The combination enables the acquisition of both optical and acoustic contrasts of tissue, providing functional, structural, molecular, and vascular information within the same field of view. In this review, we first describe the principles of various photoacoustic and ultrasound imaging techniques and then classify dual-modal imaging systems based on their preclinical and clinical imaging applications. The advantages of dual-modal imaging are thoroughly analyzed. Finally, the review ends with a critical discussion of existing developments and a look toward the future.
Affiliation(s)
- Nikhila Nyayapathi
  - Electrical and Computer Engineering, University of Rochester, Rochester, New York 14627
- Emily Zheng
  - Department of Biomedical Engineering, University at Buffalo, Buffalo, New York 14226
- Qifa Zhou
  - Department of Biomedical Engineering, University of Southern California, Los Angeles, CA 90007
- Marvin Doyley
  - Electrical and Computer Engineering, University of Rochester, Rochester, New York 14627
- Jun Xia
  - Department of Biomedical Engineering, University at Buffalo, Buffalo, New York 14226
13. Zheng S, Lu L, Yingsa H, Meichen S. Deep learning framework for three-dimensional surface reconstruction of object of interest in photoacoustic tomography. OPTICS EXPRESS 2024; 32:6037-6061. [PMID: 38439316 DOI: 10.1364/oe.507476]
Abstract
Photoacoustic tomography (PAT) is a non-ionizing hybrid imaging technology of clinical importance that combines the high contrast of optical imaging with the high penetration of ultrasonic imaging. Two-dimensional (2D) tomographic images can only provide the cross-sectional structure of the imaging target rather than its overall spatial morphology. This work proposes a deep learning framework for reconstructing the three-dimensional (3D) surface of an object of interest from a series of 2D images. It achieves end-to-end mapping from a series of 2D images to a 3D image, visually displaying the overall morphology of the object. The framework consists of four modules: a segmentation module, a point cloud generation module, a point cloud completion module, and a mesh conversion module, which respectively implement the tasks of segmenting a region of interest, generating a sparse point cloud, completing the sparse point cloud, and reconstructing the 3D surface. The network model is trained on simulation data sets and verified on simulation, phantom, and in vivo data sets. The results show superior 3D reconstruction performance, both visually and in terms of quantitative evaluation metrics, compared to state-of-the-art non-learning and learning approaches. This method potentially enables high-precision 3D surface reconstruction from the tomographic images output by a preclinical PAT system without changing the imaging system. It provides a general deep learning scheme for 3D reconstruction from tomographic scanning data.
14. Kim M, Pelivanov I, O'Donnell M. Review of Deep Learning Approaches for Interleaved Photoacoustic and Ultrasound (PAUS) Imaging. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2023; 70:1591-1606. [PMID: 37910419 PMCID: PMC10788151 DOI: 10.1109/tuffc.2023.3329119]
Abstract
Photoacoustic (PA) imaging provides optical contrast at relatively large depths within the human body, compared to other optical methods, at ultrasound (US) spatial resolution. By integrating real-time PA and US (PAUS) modalities, PAUS imaging has the potential to become a routine clinical modality bringing the molecular sensitivity of optics to medical US imaging. For applications where the full capabilities of clinical US scanners must be maintained in PAUS, conventional limited view and bandwidth transducers must be used. This approach, however, cannot provide high-quality maps of PA sources, especially vascular structures. Deep learning (DL) using data-driven modeling with minimal human design has been very effective in medical imaging, medical data analysis, and disease diagnosis, and has the potential to overcome many of the technical limitations of current PAUS imaging systems. The primary purpose of this article is to summarize the background and current status of DL applications in PAUS imaging. It also looks beyond current approaches to identify remaining challenges and opportunities for robust translation of PAUS technologies to the clinic.
15. Xu S, Momin M, Ahmed S, Hossain A, Veeramuthu L, Pandiyan A, Kuo CC, Zhou T. Illuminating the Brain: Advances and Perspectives in Optoelectronics for Neural Activity Monitoring and Modulation. ADVANCED MATERIALS (DEERFIELD BEACH, FLA.) 2023; 35:e2303267. [PMID: 37726261 DOI: 10.1002/adma.202303267]
Abstract
Optogenetic modulation of brain neural activity that combines optical and electrical modes in a unitary neural system has recently gained robust momentum. Controlling illumination spatial coverage, designing light-activated modulators, and developing wireless light delivery and data transmission are crucial for maximizing the use of optical neuromodulation. To this end, biocompatible electrodes with enhanced optoelectrical performance, device integration for multiplexed addressing, wireless transmission, and multimodal operation in soft systems have been developed. This review provides an outlook for uniformly illuminating large brain areas while spatiotemporally imaging the neural responses to optoelectrical stimulation with few artifacts. Representative concepts and important breakthroughs, such as head-mounted illumination, multiple implanted optical fibers, and micro-light-delivery devices, are discussed. Examples of techniques that incorporate electrophysiological monitoring and optoelectrical stimulation are presented. Challenges and perspectives are posed for further research efforts toward high-density optoelectrical neural interface modulation, with the potential for nonpharmacological neurological disease treatments and wireless optoelectrical stimulation.
Affiliation(s)
- Shumao Xu
  - Department of Engineering Science and Mechanics, Center for Neural Engineering, The Pennsylvania State University, Pennsylvania 16802, USA
- Marzia Momin
  - Department of Engineering Science and Mechanics, Center for Neural Engineering, The Pennsylvania State University, Pennsylvania 16802, USA
- Salahuddin Ahmed
  - Department of Engineering Science and Mechanics, Center for Neural Engineering, The Pennsylvania State University, Pennsylvania 16802, USA
- Arafat Hossain
  - Department of Electrical Engineering, The Pennsylvania State University, Pennsylvania 16802, USA
- Loganathan Veeramuthu
  - Department of Molecular Science and Engineering, National Taipei University of Technology, Taipei 10608, Republic of China
- Archana Pandiyan
  - Department of Molecular Science and Engineering, National Taipei University of Technology, Taipei 10608, Republic of China
- Chi-Ching Kuo
  - Department of Molecular Science and Engineering, National Taipei University of Technology, Taipei 10608, Republic of China
- Tao Zhou
  - Department of Engineering Science and Mechanics, Center for Neural Engineering, The Pennsylvania State University, Pennsylvania 16802, USA
16. Song X, Wang G, Zhong W, Guo K, Li Z, Liu X, Dong J, Liu Q. Sparse-view reconstruction for photoacoustic tomography combining diffusion model with model-based iteration. PHOTOACOUSTICS 2023; 33:100558. [PMID: 38021282 PMCID: PMC10658608 DOI: 10.1016/j.pacs.2023.100558]
Abstract
As a non-invasive hybrid biomedical imaging technology, photoacoustic tomography combines the high contrast of optical imaging with the high penetration of acoustic imaging. However, conventional standard reconstruction under sparse view can result in low-quality images in photoacoustic tomography. Here, a novel model-based sparse reconstruction method for photoacoustic tomography via a diffusion model is proposed. A score-based diffusion model is designed to learn the prior information of the data distribution. The learned prior information is utilized as a constraint for the data consistency term of an optimization problem based on the least-squares method in the model-based iterative reconstruction, aiming to achieve the optimal solution. Blood vessel simulation data and animal in vivo experimental data were used to evaluate the performance of the proposed method. The results demonstrate that the proposed method achieves higher-quality sparse reconstruction compared with conventional reconstruction methods and U-Net. In particular, under extremely sparse projection (e.g., 32 projections), the proposed method achieves an improvement of ~260% in structural similarity and ~30% in peak signal-to-noise ratio for in vivo data, compared with the conventional delay-and-sum method. This method has the potential to reduce the acquisition time and cost of photoacoustic tomography, further expanding its range of applications.
Affiliation(s)
- Wenhua Zhong
  - School of Information Engineering, Nanchang University, Nanchang 330031, China
- Kangjun Guo
  - School of Information Engineering, Nanchang University, Nanchang 330031, China
- Zilong Li
  - School of Information Engineering, Nanchang University, Nanchang 330031, China
- Xuan Liu
  - School of Information Engineering, Nanchang University, Nanchang 330031, China
- Jiaqing Dong
  - School of Information Engineering, Nanchang University, Nanchang 330031, China
- Qiegen Liu
  - School of Information Engineering, Nanchang University, Nanchang 330031, China
17. Shang R, O'Brien MA, Wang F, Situ G, Luke GP. Approximating the uncertainty of deep learning reconstruction predictions in single-pixel imaging. COMMUNICATIONS ENGINEERING 2023; 2:53. [PMID: 38463559 PMCID: PMC10923550 DOI: 10.1038/s44172-023-00103-1]
Abstract
Single-pixel imaging (SPI) has the advantages of high-speed acquisition over a broad wavelength range and system compactness. Deep learning (DL) is a powerful tool that can achieve higher image quality than conventional reconstruction approaches. Here, we propose a Bayesian convolutional neural network (BCNN) to approximate the uncertainty of the DL predictions in SPI. Each pixel in the predicted image represents a probability distribution rather than an image intensity value, indicating the uncertainty of the prediction. We show that the BCNN uncertainty predictions are correlated to the reconstruction errors. When the BCNN is trained and used in practical applications where the ground truths are unknown, the level of the predicted uncertainty can help to determine whether system, data, or network adjustments are needed. Overall, the proposed BCNN can provide a reliable tool to indicate the confidence levels of DL predictions as well as the quality of the model and dataset for many applications of SPI.
Affiliation(s)
- Ruibo Shang
  - Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
  - Department of Bioengineering, University of Washington, Seattle, WA 98195, USA
- Fei Wang
  - Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
  - Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
- Guohai Situ
  - Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China
  - Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
  - Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310024, China
- Geoffrey P. Luke
  - Thayer School of Engineering, Dartmouth College, Hanover, NH 03755, USA
18. John S, Hester S, Basij M, Paul A, Xavierselvan M, Mehrmohammadi M, Mallidi S. Niche preclinical and clinical applications of photoacoustic imaging with endogenous contrast. PHOTOACOUSTICS 2023; 32:100533. [PMID: 37636547 PMCID: PMC10448345 DOI: 10.1016/j.pacs.2023.100533]
Abstract
In the past decade, photoacoustic (PA) imaging has gained a great deal of popularity as an emergent diagnostic technology, owing to its successful demonstration in both preclinical and clinical arenas by various academic and industrial research groups. Such steady growth of PA imaging can mainly be attributed to its salient features: it is non-ionizing, cost-effective, and easily deployable, and it offers sufficient axial, lateral, and temporal resolution for resolving various tissue characteristics and assessing therapeutic efficacy. In addition, PA imaging can easily be integrated with ultrasound imaging systems, a combination that confers the ability to co-register and cross-reference various features in the structural, functional, and molecular imaging regimes. PA imaging relies on either an endogenous source of contrast (e.g., hemoglobin) or an exogenous one, such as nano-sized tunable optical absorbers or dyes, which may boost imaging contrast beyond that provided by the endogenous sources. In this review, we discuss the applications of PA imaging with endogenous contrast as they pertain to clinically relevant niches, including tissue characterization, cancer diagnostics and therapies (termed theranostics), cardiovascular applications, and surgical applications. We believe that PA imaging's role as a facile indicator of several disease-relevant states will continue to expand and evolve as it is adopted by an increasing number of research laboratories and clinics worldwide.
Affiliation(s)
- Samuel John
  - Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
- Scott Hester
  - Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Maryam Basij
  - Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
- Avijit Paul
  - Department of Biomedical Engineering, Tufts University, Medford, MA, USA
- Mohammad Mehrmohammadi
  - Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA
  - Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
  - Wilmot Cancer Institute, Rochester, NY, USA
- Srivalleesha Mallidi
  - Department of Biomedical Engineering, Tufts University, Medford, MA, USA
  - Wellman Center for Photomedicine, Massachusetts General Hospital, Boston, MA 02114, USA
19. Lee H, Choi W, Kim C, Park B, Kim J. Review on ultrasound-guided photoacoustic imaging for complementary analyses of biological systems in vivo. Exp Biol Med (Maywood) 2023; 248:762-774. [PMID: 37452700 PMCID: PMC10468641 DOI: 10.1177/15353702231181341]
Abstract
Photoacoustic imaging has been developed as a new biomedical molecular imaging modality. Due to its similarity to conventional ultrasound imaging in terms of signal detection and image generation, dual-modal photoacoustic and ultrasound imaging has been applied to visualize physiological and morphological information in biological systems in vivo. By complementing each other, the two modalities have shown synergistic advances in photoacoustic imaging under the guidance of ultrasound images. In this review, we introduce our recent progress in dual-modal photoacoustic and ultrasound imaging systems at various scales of study, from preclinical small animals to clinical humans. A summary of the works reveals various strategies for combining the structural information of ultrasound images with the molecular information of photoacoustic images.
Affiliation(s)
- Haeni Lee
  - Department of Cogno-Mechatronics Engineering and Optics & Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Wonseok Choi
  - Department of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Chulhong Kim
  - Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, and Medical Device Innovation Center, Pohang University of Science and Technology, Pohang 37673, Republic of Korea
- Byullee Park
  - Department of Biophysics, Institute of Quantum Biophysics, Sungkyunkwan University, Suwon 16419, Republic of Korea
  - Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- Jeesu Kim
  - Department of Cogno-Mechatronics Engineering and Optics & Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
20. Vousten V, Moradi H, Wu Z, Boctor EM, Salcudean SE. Laser diode photoacoustic point source detection: machine learning-based denoising and reconstruction. OPTICS EXPRESS 2023; 31:13895-13910. [PMID: 37157265 DOI: 10.1364/oe.483892]
Abstract
A new development in photoacoustic (PA) imaging has been the use of compact, portable, and low-cost laser diodes (LDs), but LD-based PA imaging suffers from the low signal intensity recorded by conventional transducers. A common method to improve signal strength is temporal averaging, which reduces frame rate and increases laser exposure to patients. To tackle this problem, we propose a deep learning method that denoises point source PA radio-frequency (RF) data before beamforming using very few frames, even a single one. We also present a deep learning method to automatically reconstruct point sources from noisy pre-beamformed data. Finally, we employ a strategy of combined denoising and reconstruction, which can supplement the reconstruction algorithm for inputs with very low signal-to-noise ratio.
21. Zhang Z, Jin H, Zhang W, Lu W, Zheng Z, Sharma A, Pramanik M, Zheng Y. Adaptive enhancement of acoustic resolution photoacoustic microscopy imaging via deep CNN prior. PHOTOACOUSTICS 2023; 30:100484. [PMID: 37095888 PMCID: PMC10121479 DOI: 10.1016/j.pacs.2023.100484]
Abstract
Acoustic resolution photoacoustic microscopy (AR-PAM) is a promising medical imaging modality that can be employed for deep bio-tissue imaging. However, its relatively low imaging resolution has greatly hindered its wide application. Previous model-based or learning-based PAM enhancement algorithms either require the design of complex handcrafted priors to achieve good performance or lack the interpretability and flexibility to adapt to different degradation models. Moreover, the degradation model of AR-PAM imaging depends on both the imaging depth and the center frequency of the ultrasound transducer, which vary across imaging conditions and cannot be handled by a single neural network model. To address this limitation, an algorithm integrating learning-based and model-based methods is proposed here, so that a single framework can deal with various distortion functions adaptively. The vasculature image statistics are implicitly learned by a deep convolutional neural network, which serves as a plug-and-play (PnP) prior. The trained network can be directly plugged into the model-based optimization framework for iterative AR-PAM image enhancement, fitting different degradation mechanisms. Based on the physical model, the point spread function (PSF) kernels for various AR-PAM imaging situations are derived and used for the enhancement of simulated and in vivo AR-PAM images, which collectively proves the effectiveness of the proposed method. Quantitatively, the PSNR and SSIM values achieve the best performance with the proposed algorithm in all three simulation scenarios, and in an in vivo test the SNR and CNR values rise significantly from 6.34 and 5.79 to 35.37 and 29.66, respectively.
Affiliation(s)
- Zhengyuan Zhang
  - Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Haoran Jin
  - Zhejiang University, College of Mechanical Engineering, The State Key Laboratory of Fluid Power and Mechatronic Systems, Hangzhou 310027, China
- Wenwen Zhang
  - Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Wenhao Lu
  - Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Zesheng Zheng
  - Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
- Arunima Sharma
  - Johns Hopkins University, Electrical and Computer Engineering, Baltimore, MD 21218, USA
- Manojit Pramanik
  - Iowa State University, Department of Electrical and Computer Engineering, Ames, Iowa, USA
- Yuanjin Zheng (corresponding author)
  - Nanyang Technological University, School of Electrical and Electronic Engineering, 639798, Singapore
22. Lan H, Yang C, Gao F. A jointed feature fusion framework for photoacoustic image reconstruction. PHOTOACOUSTICS 2023; 29:100442. [PMID: 36589516 PMCID: PMC9798177 DOI: 10.1016/j.pacs.2022.100442]
Abstract
Standard reconstruction of photoacoustic (PA) computed tomography (PACT) images can produce artifacts due to interference or an ill-posed setup. Recently, deep learning has been used to reconstruct PA images under ill-posed conditions. In this paper, we propose a jointed feature fusion framework (JEFF-Net) based on deep learning to reconstruct the PA image using limited-view data. The cross-domain features from limited-view position-wise data and the reconstructed image are fused by backtracked supervision. A quarter of the position-wise data (32 channels) is fed into the model, which outputs the other three-quarters-view data (96 channels). Moreover, two novel losses are designed to restrain the artifacts by sufficiently manipulating the superposed data. The experimental results demonstrate superior performance, and quantitative evaluations show that our proposed method outperformed the ground truth in some metrics, with improvements of 135% (SSIM, simulation) and 40% (gCNR, in vivo).
Affiliation(s)
- Hengrong Lan
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
  - Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Changchun Yang
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Fei Gao
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
  - Shanghai Clinical Research and Trial Center, Shanghai 201210, China
23. Hsu KT, Guan S, Chitnis PV. Fast iterative reconstruction for photoacoustic tomography using learned physical model: Theoretical validation. PHOTOACOUSTICS 2023; 29:100452. [PMID: 36700132 PMCID: PMC9867977 DOI: 10.1016/j.pacs.2023.100452]
Abstract
Iterative reconstruction has demonstrated superior performance in medical imaging under compressed, sparse, and limited-view sensing scenarios. However, iterative reconstruction algorithms are slow to converge and rely heavily on hand-crafted parameters to achieve good performance. Many iterations are usually required to reconstruct a high-quality image, which is computationally expensive due to repeated evaluations of the physical model. While learned iterative reconstruction approaches such as model-based learning (MBLr) can reduce the number of iterations through convolutional neural networks, they still require repeated evaluations of the physical model at each iteration. Therefore, the goal of this study is to develop a Fast Iterative Reconstruction (FIRe) algorithm that incorporates a learned physical model into the learned iterative reconstruction scheme to further reduce the reconstruction time while maintaining robust reconstruction performance. We also propose an efficient training scheme for FIRe, which relieves the enormous memory footprint required by learned iterative reconstruction methods through the concept of recursive training. The results of our proposed method demonstrate reconstruction performance comparable to learned iterative reconstruction methods, with a 9x reduction in computation time, and a 620x reduction in computation time compared to variational reconstruction.
24. Zhang Z, Jin H, Zheng Z, Sharma A, Wang L, Pramanik M, Zheng Y. Deep and Domain Transfer Learning Aided Photoacoustic Microscopy: Acoustic Resolution to Optical Resolution. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:3636-3648. [PMID: 35849667 DOI: 10.1109/tmi.2022.3192072]
Abstract
Acoustic resolution photoacoustic microscopy (AR-PAM) can achieve deeper imaging depth in biological tissue, at the sacrifice of imaging resolution compared with optical resolution photoacoustic microscopy (OR-PAM). Here we aim to enhance AR-PAM image quality towards that of OR-PAM images, which specifically includes enhancement of imaging resolution, restoration of micro-vasculature, and reduction of artifacts. To address this issue, a network (MultiResU-Net) is first trained as a generative model with simulated AR-OR image pairs, which are synthesized with a physical transducer model. Moderate enhancement results can already be obtained when applying this model to in vivo AR imaging data. Nevertheless, the perceptual quality is unsatisfactory due to domain shift. Further, a domain transfer learning technique under a generative adversarial network (GAN) framework is proposed to drive the enhanced image's manifold towards that of real OR images. In this way, perceptually convincing AR-to-OR enhancement results are obtained, which is also supported by quantitative analysis. Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) values are significantly increased from 14.74 dB to 19.01 dB and from 0.1974 to 0.2937, respectively, validating the improvement in reconstruction correctness and overall perceptual quality. The proposed algorithm has also been validated across different imaging depths, with experiments conducted in both shallow and deep tissue. The above AR-to-OR domain transfer learning with GAN (AODTL-GAN) framework has enabled the enhancement target with a limited amount of matched in vivo AR-OR imaging data.
25. Choi S, Yang J, Lee SY, Kim J, Lee J, Kim WJ, Lee S, Kim C. Deep Learning Enhances Multiparametric Dynamic Volumetric Photoacoustic Computed Tomography In Vivo (DL-PACT). ADVANCED SCIENCE (WEINHEIM, BADEN-WURTTEMBERG, GERMANY) 2022; 10:e2202089. [PMID: 36354200 PMCID: PMC9811490 DOI: 10.1002/advs.202202089]
Abstract
Photoacoustic computed tomography (PACT) has become a premier preclinical and clinical imaging modality. Although PACT's image quality can be dramatically improved with a large number of ultrasound (US) transducer elements and associated multiplexed data acquisition systems, the associated high system cost and/or slow temporal resolution are significant problems. Here, a deep learning-based approach is demonstrated that qualitatively and quantitatively diminishes the limited-view artifacts that reduce image quality and improves the slow temporal resolution. This deep learning-enhanced multiparametric dynamic volumetric PACT approach, called DL-PACT, requires only a clustered subset of many US transducer elements on the conventional multiparametric PACT. Using DL-PACT, high-quality static structural and dynamic contrast-enhanced whole-body images as well as dynamic functional brain images of live animals and humans are successfully acquired, all in a relatively fast and cost-effective manner. It is believed that the strategy can significantly advance the use of PACT technology for preclinical and clinical applications such as neurology, cardiology, pharmacology, endocrinology, and oncology.
Affiliation(s)
- Seongwook Choi
  - Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, School of Interdisciplinary Bioscience and Bioengineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk 37673, Republic of Korea
- Jinge Yang
  - Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, School of Interdisciplinary Bioscience and Bioengineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk 37673, Republic of Korea
- Soo Young Lee
  - Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, School of Interdisciplinary Bioscience and Bioengineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk 37673, Republic of Korea
- Jiwoong Kim
  - Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, School of Interdisciplinary Bioscience and Bioengineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk 37673, Republic of Korea
- Jihye Lee
  - Department of Chemistry, POSTECH-CATHOLIC Biomedical Engineering Institute, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk 37673, Republic of Korea
- Won Jong Kim
  - Department of Chemistry, POSTECH-CATHOLIC Biomedical Engineering Institute, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk 37673, Republic of Korea
- Seungchul Lee
  - Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, School of Interdisciplinary Bioscience and Bioengineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk 37673, Republic of Korea
- Chulhong Kim
  - Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, School of Interdisciplinary Bioscience and Bioengineering, Graduate School of Artificial Intelligence, and Medical Device Innovation Center, Pohang University of Science and Technology (POSTECH), 77 Cheongam-ro, Nam-gu, Pohang, Gyeongbuk 37673, Republic of Korea
26
Dimaridis I, Sridharan P, Ntziachristos V, Karlas A, Hadjileontiadis L. Image Quality Improvement Techniques and Assessment Adequacy in Clinical Optoacoustic Imaging: A Systematic Review. BIOSENSORS 2022; 12:901. [PMID: 36291038 PMCID: PMC9599915 DOI: 10.3390/bios12100901] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 09/09/2022] [Accepted: 09/17/2022] [Indexed: 06/16/2023]
Abstract
Optoacoustic imaging relies on the detection of optically induced acoustic waves to offer new possibilities in morphological and functional imaging. As the modality matures towards clinical application, research efforts aim to address multifactorial limitations that negatively impact the resulting image quality. In an endeavor to obtain a clear view on the limitations and their effects, as well as the status of this progressive refinement process, we conduct an extensive search for optoacoustic image quality improvement approaches that have been evaluated with humans in vivo, thus focusing on clinically relevant outcomes. We query six databases (PubMed, Scopus, Web of Science, IEEE Xplore, ACM Digital Library, and Google Scholar) for articles published from 1 January 2010 to 31 October 2021, and identify 45 relevant research works through a systematic screening process. We review the identified approaches, describing their primary objectives, targeted limitations, and key technical implementation details. Moreover, considering comprehensive and objective quality assessment as an essential prerequisite for the adoption of such approaches in clinical practice, we subject 36 of the 45 papers to a further in-depth analysis of the reported quality evaluation procedures, and elicit a set of criteria with the intent to capture key evaluation aspects. Through a comparative criteria-wise rating process, we seek research efforts that exhibit excellence in quality assessment of their proposed methods, and discuss features that distinguish them from works with similar objectives. Additionally, informed by the rating results, we highlight areas with improvement potential, and extract recommendations for designing quality assessment pipelines capable of providing rich evidence.
Affiliation(s)
- Ioannis Dimaridis
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Patmaa Sridharan
- Chair of Biological Imaging, Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München, 85764 Neuherberg, Germany
- Vasilis Ntziachristos
- Chair of Biological Imaging, Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München, 85764 Neuherberg, Germany
- Munich Institute of Robotics and Machine Intelligence (MIRMI), Technical University of Munich, 80992 Munich, Germany
- German Centre for Cardiovascular Research (DZHK), partner site Munich Heart Alliance, 80636 Munich, Germany
- Angelos Karlas
- Chair of Biological Imaging, Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München, 85764 Neuherberg, Germany
- German Centre for Cardiovascular Research (DZHK), partner site Munich Heart Alliance, 80636 Munich, Germany
- Clinic for Vascular and Endovascular Surgery, Klinikum rechts der Isar, 81675 Munich, Germany
- Leontios Hadjileontiadis
- Department of Biomedical Engineering, Khalifa University, Abu Dhabi P.O. Box 127788, United Arab Emirates
- Healthcare Engineering Innovation Center (HEIC), Khalifa University, Abu Dhabi P.O. Box 127788, United Arab Emirates
- Signal Processing and Biomedical Technology Unit, Telecommunications Laboratory, Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
27
Rushambwa MC, Suvendi R, Pandelani T, Palaniappan R, Vijean V, Nabi FG. A Review of Optical Ultrasound Imaging Modalities for Intravascular Imaging. PERTANIKA JOURNAL OF SCIENCE AND TECHNOLOGY 2022. [DOI: 10.47836/pjst.31.1.17] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Recent advances in medical imaging include integrating photoacoustic and optoacoustic techniques with conventional imaging modalities. These developments have led to the use of optics combined with the conventional ultrasound technique for imaging intravascular tissues, applied to different areas of the human body. Conventional ultrasound is a skin contact-based imaging method. It does not expose patients to harmful radiation, in contrast to techniques such as Computerised Tomography (CT) and Magnetic Resonance Imaging (MRI) scans. Optical Ultrasound (OpUS), on the other hand, provides a new way of viewing internal organs of the human body using skin- and eye-safe laser ranges. OpUS is mostly used for binary measurements, since these do not need to be resolved at high resolution, but it can also be applied to intravascular imaging. Various signal processing techniques and reconstruction methodologies exist for Photo-Acoustic Imaging, and their applicability in bioimaging is explored in this paper.
28
Madasamy A, Gujrati V, Ntziachristos V, Prakash J. Deep learning methods hold promise for light fluence compensation in three-dimensional optoacoustic imaging. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:106004. [PMID: 36209354 PMCID: PMC9547608 DOI: 10.1117/1.jbo.27.10.106004] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Accepted: 08/30/2022] [Indexed: 06/16/2023]
Abstract
SIGNIFICANCE Quantitative optoacoustic imaging (QOAI) continues to be a challenge due to the influence of the nonlinear optical fluence distribution, which distorts the optoacoustic image representation. Nonlinear optical fluence correction in OA imaging is highly ill-posed, leading to inaccurate recovery of optical absorption maps. This work aims to recover the optical absorption maps using a deep learning (DL) approach by correcting for the fluence effect. AIM Different DL models were compared and investigated to enable optical absorption coefficient recovery at a particular wavelength in a nonhomogeneous foreground and background medium. APPROACH Data-driven models were trained with a two-dimensional (2D) blood vessel dataset and a three-dimensional (3D) numerical breast phantom with highly heterogeneous/realistic structures to correct for the nonlinear optical fluence distribution. The trained DL models, namely U-Net, Fully Dense (FD) U-Net, Y-Net, FD Y-Net, Deep Residual U-Net (Deep ResU-Net), and a generative adversarial network (GAN), were tested to evaluate the performance of optical absorption coefficient recovery (or fluence compensation) with in-silico and in-vivo datasets. RESULTS The results indicated that FD U-Net-based deconvolution improves on the reconstructed optoacoustic images by about 10% in terms of peak signal-to-noise ratio. Further, it was observed that DL models can indeed highlight deep-seated structures with higher contrast due to fluence compensation. Importantly, the DL models were found to be about 17 times faster than solving the diffusion equation for fluence correction. CONCLUSIONS The DL methods were able to compensate for the nonlinear optical fluence distribution more effectively and improve the optoacoustic image quality.
Affiliation(s)
- Arumugaraj Madasamy
- Indian Institute of Science, Department of Instrumentation and Applied Physics, Bengaluru, Karnataka, India
- Vipul Gujrati
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München (GmbH), Neuherberg, Germany
- Technical University of Munich, School of Medicine, Chair of Biological Imaging, Munich, Germany
- Vasilis Ntziachristos
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München (GmbH), Neuherberg, Germany
- Technical University of Munich, School of Medicine, Chair of Biological Imaging, Munich, Germany
- Technical University of Munich, Munich Institute of Robotics and Machine Intelligence (MIRMI), Munich, Germany
- Jaya Prakash
- Indian Institute of Science, Department of Instrumentation and Applied Physics, Bengaluru, Karnataka, India
29
Yip LCM, Omidi P, Rascevska E, Carson JJL. Approaching closed spherical, full-view detection for photoacoustic tomography. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:JBO-220034GRR. [PMID: 36042544 PMCID: PMC9424748 DOI: 10.1117/1.jbo.27.8.086004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/11/2022] [Accepted: 07/01/2022] [Indexed: 05/28/2023]
Abstract
SIGNIFICANCE Photoacoustic tomography (PAT) is a widely explored imaging modality and has excellent potential for clinical applications. On the acoustic detection side, limited-view angle and limited-bandwidth are common key issues in PAT systems that result in unwanted artifacts. While analytical and simulation studies of limited-view artifacts are fairly extensive, experimental setups capable of comparing limited-view to an ideal full-view case are lacking. AIMS A custom ring-shaped detector array was assembled and mounted to a 6-axis robot, then rotated and translated to achieve up to 3.8π steradian view angle coverage of an imaged object. APPROACH Minimization of negativity artifacts and phantom imaging were used to optimize the system, followed by demonstrative imaging of a star contrast phantom, a synthetic breast tumor specimen phantom, and a vascular phantom. RESULTS Optimization of the angular/rotation scans found ≈212 effective detectors were needed for high-quality images, while 15-mm steps were used to increase the field of view as required depending on the size of the imaged object. Example phantoms were clearly imaged with all discerning features visible and minimal artifacts. CONCLUSIONS A near full-view closed spherical system has been developed, paving the way for future work demonstrating experimentally the significant advantages of using a full-view PAT setup.
Affiliation(s)
- Lawrence C. M. Yip
- Lawson Health Research Institute, Imaging Program, London, Ontario, Canada
- Western University, Schulich School of Medicine and Dentistry, Department of Medical Biophysics, London, Ontario, Canada
- Parsa Omidi
- Lawson Health Research Institute, Imaging Program, London, Ontario, Canada
- Western University, School of Biomedical Engineering, London, Ontario, Canada
- Elina Rascevska
- Lawson Health Research Institute, Imaging Program, London, Ontario, Canada
- Western University, School of Biomedical Engineering, London, Ontario, Canada
- Jeffrey J. L. Carson
- Lawson Health Research Institute, Imaging Program, London, Ontario, Canada
- Western University, Schulich School of Medicine and Dentistry, Department of Medical Biophysics, London, Ontario, Canada
- Western University, School of Biomedical Engineering, London, Ontario, Canada
- Western University, Schulich School of Medicine and Dentistry, Department of Surgery, London, Ontario, Canada
30
Gao Y, Xu W, Chen Y, Xie W, Cheng Q. Deep Learning-Based Photoacoustic Imaging of Vascular Network Through Thick Porous Media. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2191-2204. [PMID: 35294347 DOI: 10.1109/tmi.2022.3158474] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Photoacoustic imaging is a promising approach used to realize in vivo transcranial cerebral vascular imaging. However, the strong attenuation and distortion of the photoacoustic wave caused by the thick porous skull greatly affect the imaging quality. In this study, we developed a convolutional neural network based on U-Net to extract the effective photoacoustic information hidden in the speckle patterns obtained from vascular network images datasets under porous media. Our simulation and experimental results show that the proposed neural network can learn the mapping relationship between the speckle pattern and the target, and extract the photoacoustic signals of the vessels submerged in noise to reconstruct high-quality images of the vessels with a sharp outline and a clean background. Compared with the traditional photoacoustic reconstruction methods, the proposed deep learning-based reconstruction algorithm has a better performance with a lower mean absolute error, higher structural similarity, and higher peak signal-to-noise ratio of reconstructed images. In conclusion, the proposed neural network can effectively extract valid information from highly blurred speckle patterns for the rapid reconstruction of target images, which offers promising applications in transcranial photoacoustic imaging.
31
Hui X, Malik MOA, Pramanik M. Looking deep inside tissue with photoacoustic molecular probes: a review. JOURNAL OF BIOMEDICAL OPTICS 2022; 27:070901. [PMID: 36451698 PMCID: PMC9307281 DOI: 10.1117/1.jbo.27.7.070901] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Accepted: 07/01/2022] [Indexed: 05/19/2023]
Abstract
Significance Deep tissue noninvasive high-resolution imaging with light is challenging due to the high degree of light absorption and scattering in biological tissue. Photoacoustic imaging (PAI) can overcome some of the challenges of pure optical or ultrasound imaging to provide high-resolution deep tissue imaging. However, label-free PAI signals from light absorbing chromophores within the tissue are nonspecific. The use of exogenous contrast agents (probes) not only enhances the imaging contrast (and imaging depth) but also increases the specificity of PAI by binding only to targeted molecules and often providing signals distinct from the background. Aim We aim to review the current development and future progression of photoacoustic molecular probes/contrast agents. Approach First, PAI and the need for using contrast agents are briefly introduced. Then, the recent development of contrast agents in terms of the materials used to construct them is discussed. Then, various probes are discussed based on targeting mechanisms, in vivo molecular imaging applications, multimodal uses, and use in theranostic applications. Results Material combinations are being used to develop highly specific contrast agents. In addition to passive accumulation, probes utilizing activation mechanisms show promise for greater controllability. Several probes also enable concurrent multimodal use with fluorescence, ultrasound, Raman, magnetic resonance imaging, and computed tomography. Finally, targeted probes are also shown to aid localized and molecularly specific photo-induced therapy. Conclusions The development of contrast agents provides a promising prospect for increased contrast, higher imaging depth, and molecularly specific information. Of note are agents that allow for controlled activation, explore other optical windows, and enable multimodal use to overcome some of the shortcomings of label-free PAI.
Affiliation(s)
- Xie Hui
- Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore
- Mohammad O. A. Malik
- Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore
- Manojit Pramanik
- Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore
32
Zuo H, Cui M, Wang X, Ma C. Spectral crosstalk in photoacoustic computed tomography. PHOTOACOUSTICS 2022; 26:100356. [PMID: 35574185 PMCID: PMC9095891 DOI: 10.1016/j.pacs.2022.100356] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/03/2022] [Revised: 04/04/2022] [Accepted: 04/11/2022] [Indexed: 06/15/2023]
Abstract
Multispectral photoacoustic (PA) imaging faces two major challenges: the spectral coloring effect, which has been studied extensively as an optical inversion problem, and the spectral crosstalk, which is basically a result of non-ideal acoustic inversion. So far, there is no systematic work to analyze the spectral crosstalk because acoustic inversion and spectroscopic measurement are always treated as decoupled. In this work, we theorize and demonstrate through a series of simulations and experiments how imperfect acoustic inversion induces inaccurate PA spectrum measurement. We provide detailed analysis to elucidate how different factors, including limited bandwidth, limited view, light attenuation, out-of-plane signal, and image reconstruction schemes, conspire to render the measured PA spectrum inaccurate. We found that the model-based reconstruction outperforms universal back-projection in suppressing the spectral crosstalk in some cases.
Affiliation(s)
- Hongzhi Zuo
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Manxiu Cui
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Xuanhao Wang
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Cheng Ma
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Center for Clinical Big Data Research, Institute of Precision Medicine, Tsinghua University, Beijing 100084, China
- Photomedicine Laboratory, Institute of Precision Medicine, Tsinghua University, Beijing 100084, China
33
Dermoscopic Image Classification of Pigmented Nevus under Deep Learning and the Correlation with Pathological Features. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:9726181. [PMID: 35669372 PMCID: PMC9167096 DOI: 10.1155/2022/9726181] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Revised: 04/19/2022] [Accepted: 04/23/2022] [Indexed: 11/26/2022]
Abstract
The objective of this study was to explore the image classification and case characteristics of pigmented nevus (PN) diagnosed by dermoscopy under deep learning. 268 patients were included as the research objects and they were randomly divided into observation group (n = 134) and control group (n = 134). Image recognition algorithm was used for feature extraction, segmentation, and classification of dermoscopic images, and the image recognition and classification algorithm were studied as the performance and accuracy of fusion classifier were compared. The results showed that the classifier was optimized, and the linear kernel accuracy was 85.82%. The PN studied mainly included mixed nevus, junctional nevus, intradermal nevus, and acral nevus. The sensitivity under collaborative training was higher than that under feature training and fusion feature training, and the differences among three trainings were significant (P < 0.05). The sensitivity of the observation group was 88.65%, and the specificity was 90.26%, while the sensitivity and the specificity of the control group were 85.65% and 84.03%, respectively; there were significant differences between the two groups (P < 0.05). In conclusion, dermoscopy under deep learning could be applied as a diagnostic way of PN, which helped improve the accuracy of diagnosis. The dermoscopic manifestations of PN showed a certain corresponding relationship with the type of cases and could provide auxiliary diagnosis in clinical practice. It could be applied clinically.
34
Zhang H, Bo W, Wang D, DiSpirito A, Huang C, Nyayapathi N, Zheng E, Vu T, Gong Y, Yao J, Xu W, Xia J. Deep-E: A Fully-Dense Neural Network for Improving the Elevation Resolution in Linear-Array-Based Photoacoustic Tomography. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1279-1288. [PMID: 34928793 PMCID: PMC9161237 DOI: 10.1109/tmi.2021.3137060] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/02/2023]
Abstract
Linear-array-based photoacoustic tomography has shown broad applications in biomedical research and preclinical imaging. However, the elevational resolution of a linear array is fundamentally limited due to the weak cylindrical focus of the transducer element. While several methods have been proposed to address this issue, they have all handled the problem in a less time-efficient way. In this work, we propose to improve the elevational resolution of a linear array through Deep-E, a fully dense neural network based on U-Net. Deep-E exhibits high computational efficiency by converting the three-dimensional problem into a two-dimensional one: it focuses on training a model to enhance the resolution along the elevational direction using only the 2D slices in the axial and elevational plane, thereby reducing the computational burden in simulation and training. We demonstrated the efficacy of Deep-E using various datasets, including simulation, phantom, and human subject results. We found that Deep-E could improve elevational resolution by at least four times and recover the object's true size. We envision that Deep-E will have a significant impact on linear-array-based photoacoustic imaging studies by providing high-speed and high-resolution image enhancement.
35
Wu Q. Research on deep learning image processing technology of second-order partial differential equations. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07017-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
36
Si L, Huang T, Wang X, Yao Y, Dong Y, Liao R, Ma H. Deep learning Mueller matrix feature retrieval from a snapshot Stokes image. OPTICS EXPRESS 2022; 30:8676-8689. [PMID: 35299314 DOI: 10.1364/oe.451612] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/17/2021] [Accepted: 02/18/2022] [Indexed: 06/14/2023]
Abstract
A Mueller matrix (MM) provides a comprehensive representation of the polarization properties of a complex medium and encodes very rich information on macro- and microstructural features. Histopathological features can be characterized by polarization parameters derived from the MM. However, a MM must be derived from at least four Stokes vectors corresponding to four different incident polarization states, which makes the quality of the MM very sensitive to small changes in the imaging system or the sample during the exposures, such as fluctuations in the illumination light and co-registration of polarization component images. In this work, we use a deep learning approach to retrieve MM-based specific polarimetry basis parameters (PBPs) from a snapshot Stokes vector. This data post-processing method is capable of eliminating errors introduced by multi-exposure, as well as reducing the imaging time and hardware complexity. It shows potential for accurate MM imaging on dynamic samples or in unstable environments. The translation model is designed based on a generative adversarial network with customized loss functions. The effectiveness of the approach was demonstrated on liver and breast tissue slices and blood smears. Finally, we evaluated the performance with quantitative similarity assessment methods at both the pixel and image levels.
37
Maneas E, Hauptmann A, Alles EJ, Xia W, Vercauteren T, Ourselin S, David AL, Arridge S, Desjardins AE. Deep Learning for Instrumented Ultrasonic Tracking: From Synthetic Training Data to In Vivo Application. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:543-552. [PMID: 34748488 DOI: 10.1109/tuffc.2021.3126530] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Instrumented ultrasonic tracking is used to improve needle localization during ultrasound guidance of minimally invasive percutaneous procedures. Here, it is implemented with transmitted ultrasound pulses from a clinical ultrasound imaging probe, which is detected by a fiber-optic hydrophone integrated into a needle. The detected transmissions are then reconstructed to form the tracking image. Two challenges are considered with the current implementation of ultrasonic tracking. First, tracking transmissions are interleaved with the acquisition of B-mode images, and thus, the effective B-mode frame rate is reduced. Second, it is challenging to achieve an accurate localization of the needle tip when the signal-to-noise ratio is low. To address these challenges, we present a framework based on a convolutional neural network (CNN) to maintain spatial resolution with fewer tracking transmissions and enhance signal quality. A major component of the framework included the generation of realistic synthetic training data. The trained network was applied to unseen synthetic data and experimental in vivo tracking data. The performance of needle localization was investigated when reconstruction was performed with fewer (up to eightfold) tracking transmissions. CNN-based processing of conventional reconstructions showed that the axial and lateral spatial resolutions could be improved even with an eightfold reduction in tracking transmissions. The framework presented in this study will significantly improve the performance of ultrasonic tracking, leading to faster image acquisition rates and increased localization accuracy.
38
Sathyanarayana SG, Wang Z, Sun N, Ning B, Hu S, Hossack JA. Recovery of Blood Flow From Undersampled Photoacoustic Microscopy Data Using Sparse Modeling. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:103-120. [PMID: 34388091 DOI: 10.1109/tmi.2021.3104521] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Photoacoustic microscopy (PAM) leverages the optical absorption contrast of blood hemoglobin for high-resolution, multi-parametric imaging of the microvasculature in vivo. However, to quantify the blood flow speed, dense spatial sampling is required to assess blood flow-induced loss of correlation of sequentially acquired A-line signals, resulting in increased laser pulse repetition rate and consequently optical fluence. To address this issue, we have developed a sparse modeling approach for blood flow quantification based on downsampled PAM data. Evaluation of its performance both in vitro and in vivo shows that this sparse modeling method can accurately recover the substantially downsampled data (up to 8 times) for correlation-based blood flow analysis, with a relative error of 12.7 ± 6.1 % across 10 datasets in vitro and 12.7 ± 12.1 % in vivo for data downsampled 8 times. Reconstruction with the proposed method is on par with recovery using compressive sensing, which exhibits an error of 12.0 ± 7.9 % in vitro and 33.86 ± 26.18 % in vivo for data downsampled 8 times. Both methods outperform bicubic interpolation, which shows an error of 15.95 ± 9.85 % in vitro and 110.7 ± 87.1 % in vivo for data downsampled 8 times.
39
Xia F, Gao X, Shen X, Xu H, Zhong S. Preparation of a gold@europium-based coordination polymer nanocomposite with excellent photothermal properties and its potential for four-mode imaging. NEW J CHEM 2022. [DOI: 10.1039/d2nj01021f] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
A nanocomposite was synthesized by replacing the toxic CTAB on the surface of GNRs with a europium-based hyaluronic acid coordination polymer. The nanocomposite exhibits excellent photothermal performance and also has potential for four-mode imaging.
Affiliation(s)
- Faming Xia
- Key Lab of Fluorine and Silicon for Energy Materials and Chemistry of Ministry of Education, College of Chemistry and Chemical Engineering, Jiangxi Normal University, Nanchang, 330022, China
- Xuejiao Gao
- Key Lab of Fluorine and Silicon for Energy Materials and Chemistry of Ministry of Education, College of Chemistry and Chemical Engineering, Jiangxi Normal University, Nanchang, 330022, China
- Xiaomei Shen
- Key Lab of Fluorine and Silicon for Energy Materials and Chemistry of Ministry of Education, College of Chemistry and Chemical Engineering, Jiangxi Normal University, Nanchang, 330022, China
- Hualan Xu
- Analytical and Testing Center, Jiangxi Normal University, Nanchang, 330022, China
- Shengliang Zhong
- Key Lab of Fluorine and Silicon for Energy Materials and Chemistry of Ministry of Education, College of Chemistry and Chemical Engineering, Jiangxi Normal University, Nanchang, 330022, China
40
Wu M, Awasthi N, Rad NM, Pluim JPW, Lopata RGP. Advanced Ultrasound and Photoacoustic Imaging in Cardiology. SENSORS (BASEL, SWITZERLAND) 2021; 21:7947. [PMID: 34883951 PMCID: PMC8659598 DOI: 10.3390/s21237947] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Revised: 11/23/2021] [Accepted: 11/26/2021] [Indexed: 12/26/2022]
Abstract
Cardiovascular diseases (CVDs) remain the leading cause of death worldwide. Effective management and treatment of CVDs rely heavily on accurate diagnosis of the disease. As the most common clinical imaging technique for diagnosing CVDs, ultrasound (US) imaging has been intensively explored. Especially with the introduction of deep learning (DL) techniques, US imaging has advanced tremendously in recent years. Photoacoustic imaging (PAI) is one of the most promising new imaging methods alongside the existing clinical imaging methods. It can characterize different tissue compositions based on optical absorption contrast and thus can assess the functionality of the tissue. This paper reviews major technological developments in both US (combined with deep learning techniques) and PA imaging in the application of CVD diagnosis.
Affiliation(s)
- Min Wu
- Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Navchetan Awasthi
- Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Medical Image Analysis Group (IMAG/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Nastaran Mohammadian Rad
- Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Medical Image Analysis Group (IMAG/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Josien P. W. Pluim
- Medical Image Analysis Group (IMAG/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Richard G. P. Lopata
- Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
41
Photoacoustic imaging aided with deep learning: a review. Biomed Eng Lett 2021; 12:155-173. [DOI: 10.1007/s13534-021-00210-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Revised: 10/19/2021] [Accepted: 11/07/2021] [Indexed: 12/21/2022] Open
42
Al Mukaddim R, Ahmed R, Varghese T. Improving Minimum Variance Beamforming with Sub-Aperture Processing for Photoacoustic Imaging. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:2879-2882. [PMID: 34891848 PMCID: PMC8908882 DOI: 10.1109/embc46164.2021.9630278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Minimum variance (MV) beamforming improves resolution and reduces sidelobes compared to delay-and-sum (DAS) beamforming for photoacoustic imaging (PAI). However, some level of sidelobe signal and incoherent clutter persists, degrading MV PAI quality. Here, an adaptive beamforming algorithm (PSAPMV) combining the MV formulation with sub-aperture processing is proposed. In PSAPMV, the received channel data are split into two complementary nonoverlapping sub-apertures and beamformed using MV. A weighting matrix based on the similarity between the sub-aperture beamformed images is derived and multiplied with the full-aperture MV image, suppressing sidelobes and incoherent clutter in the PA image. Numerical simulation experiments with point targets, diffuse inclusions, and microvasculature networks are used to validate PSAPMV. Quantitative evaluation was done in terms of main-lobe-to-side-lobe ratio, full width at half maximum (FWHM), contrast ratio (CR), and generalized contrast-to-noise ratio (gCNR). PSAPMV demonstrated improved beamforming performance both qualitatively and quantitatively: higher resolution (FWHM = 0.19 mm) than MV (0.21 mm) and DAS (0.22 mm) in point target simulations, better target detectability (gCNR = 0.99) than MV (0.89) and DAS (0.84) for diffuse inclusions, and improved contrast in the microvasculature simulation (CR: DAS = 15.38 dB, MV = 22.42 dB, PSAPMV = 51.74 dB).
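The sub-aperture weighting idea described in this abstract can be sketched as follows. This is a simplified illustration only: it uses plain DAS in place of MV beamforming, assumes integer sample delays, and the specific similarity weight is an assumption, not the authors' implementation.

```python
import numpy as np

def das_beamform(channel_data, delays):
    """Delay-and-sum: shift each channel by its delay (in samples) and sum.

    channel_data: (n_channels, n_samples) received signals.
    delays: (n_channels,) integer sample delays for one image line.
    """
    n_ch = channel_data.shape[0]
    out = np.zeros(channel_data.shape[1])
    for c in range(n_ch):
        out += np.roll(channel_data[c], -int(delays[c]))
    return out / n_ch

def subaperture_weighted_das(channel_data, delays):
    """Sketch of sub-aperture processing: beamform two complementary
    halves of the aperture, derive a [0, 1] coherence weight from their
    similarity, and apply it to the full-aperture DAS output."""
    half = channel_data.shape[0] // 2
    b1 = das_beamform(channel_data[:half], delays[:half])
    b2 = das_beamform(channel_data[half:], delays[half:])
    full = das_beamform(channel_data, delays)
    # High weight where the two sub-aperture images agree (coherent
    # signal), low weight where they disagree (clutter, sidelobes).
    weight = np.clip(b1 * b2 / (0.5 * (b1**2 + b2**2) + 1e-12), 0.0, 1.0)
    return full * weight
```

For perfectly coherent channel data the weight approaches 1 and the output matches plain DAS; for uncorrelated clutter the product term averages toward zero and the pixel is suppressed, which is the qualitative behavior the abstract reports.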
43
Lu M, Liu X, Liu C, Li B, Gu W, Jiang J, Ta D. Artifact removal in photoacoustic tomography with an unsupervised method. BIOMEDICAL OPTICS EXPRESS 2021; 12:6284-6299. [PMID: 34745737 PMCID: PMC8548009 DOI: 10.1364/boe.434172] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Revised: 08/13/2021] [Accepted: 09/07/2021] [Indexed: 05/02/2023]
Abstract
Photoacoustic tomography (PAT) is an emerging biomedical imaging technology that realizes high-contrast imaging with acoustic penetration depth. Recently, deep learning (DL) methods have been successfully applied to PAT to improve image reconstruction quality. However, current DL-based PAT methods follow a supervised learning strategy, so imaging performance depends on the available ground-truth data. To overcome this limitation, this work introduces a new image domain transformation method based on a cyclic generative adversarial network (CycleGAN), termed PA-GAN, which removes artifacts in PAT images caused by limited-view measurement data in an unsupervised manner. A series of phantom and in vivo experiments are used to evaluate the performance of the proposed PA-GAN. The experimental results show that PA-GAN performs well in removing artifacts from photoacoustic tomographic images. In particular, when dealing with extremely sparse measurement data (e.g., 8 projections in circle phantom experiments), the unsupervised PA-GAN achieves higher imaging performance, with an improvement of ∼14% in structural similarity (SSIM) and ∼66% in peak signal-to-noise ratio (PSNR), compared with the supervised U-Net method. With an increasing number of projections (e.g., 128 projections), U-Net, and especially FD U-Net, shows a slight improvement in artifact removal capability in terms of SSIM and PSNR. Furthermore, once the networks are trained, the computational times of PA-GAN and U-Net are similar (∼60 ms/frame). More importantly, PA-GAN is more flexible than U-Net because it can be effectively trained with unpaired data. As a result, PA-GAN makes it possible to implement PAT with higher flexibility without compromising imaging performance.
Affiliation(s)
- Mengyang Lu
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Xin Liu
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- State Key Laboratory of Medical Neurobiology, Institutes of Brain Science, Fudan University, Shanghai 200433, China
- Chengcheng Liu
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- Boyi Li
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- Wenting Gu
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Jiehui Jiang
- School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Dean Ta
- Academy for Engineering and Technology, Fudan University, Shanghai 200433, China
- Center for Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200433, China
44
Jia Q, Chang L, Qiang B, Zhang S, Xie W, Yang X, Sun Y, Yang M. Real-Time 3D Reconstruction Method Based on Monocular Vision. SENSORS (BASEL, SWITZERLAND) 2021; 21:5909. [PMID: 34502800 PMCID: PMC8434368 DOI: 10.3390/s21175909] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 08/27/2021] [Accepted: 08/29/2021] [Indexed: 11/17/2022]
Abstract
Real-time 3D reconstruction is one of the current popular research directions in computer vision, and it has become a core technology in virtual reality, industrial automation, and mobile robot path planning. Currently, there are three main problems in the real-time 3D reconstruction field. First, it is expensive: it requires multiple, varied sensors, which makes it less convenient. Second, reconstruction is slow, and an accurate 3D model cannot be established in real time. Third, the reconstruction error is large and cannot meet the accuracy requirements of many scenes. For this reason, we propose a real-time 3D reconstruction method based on monocular vision. First, a single RGB-D camera collects visual information in real time, and the YOLACT++ network identifies and segments this information to extract the important parts. Second, we combine the three stages of depth recovery, depth optimization, and depth fusion to propose a deep-learning-based 3D position estimation method that jointly encodes the visual information. It reduces the depth error introduced in the depth measurement process, and accurate 3D point values of the segmented image can be obtained directly. Finally, we propose a method based on limited outlier adjustment of the cluster-center distance to optimize the 3D point values obtained above, improving real-time reconstruction accuracy and yielding a 3D model of the object in real time. Experimental results show that this method needs only a single RGB-D camera, which is not only low-cost and convenient to use but also significantly improves the speed and accuracy of 3D reconstruction.
Affiliation(s)
- Qingyu Jia
- Guangxi Key Laboratory of Image and Graphics Intelligent Processing, Guilin University of Electronic Technology, Guilin 541004, China
- Liang Chang
- Guangxi Key Laboratory of Image and Graphics Intelligent Processing, Guilin University of Electronic Technology, Guilin 541004, China
- Baohua Qiang
- Guangxi Key Laboratory of Image and Graphics Intelligent Processing, Guilin University of Electronic Technology, Guilin 541004, China
- Shihao Zhang
- Guangxi Key Laboratory of Image and Graphics Intelligent Processing, Guilin University of Electronic Technology, Guilin 541004, China
- Wu Xie
- Guangxi Key Laboratory of Image and Graphics Intelligent Processing, Guilin University of Electronic Technology, Guilin 541004, China
- Xianyi Yang
- Guangxi Key Laboratory of Image and Graphics Intelligent Processing, Guilin University of Electronic Technology, Guilin 541004, China
- Yangchang Sun
- Research Center for Brain-Inspired Intelligence (BII), Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing 100190, China
- Minghao Yang
- Research Center for Brain-Inspired Intelligence (BII), Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing 100190, China
45
Yang X, Chen YH, Xia F, Sawan M. Photoacoustic imaging for monitoring of stroke diseases: A review. PHOTOACOUSTICS 2021; 23:100287. [PMID: 34401324 PMCID: PMC8353507 DOI: 10.1016/j.pacs.2021.100287] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/14/2021] [Revised: 07/02/2021] [Accepted: 07/16/2021] [Indexed: 05/14/2023]
Abstract
Stroke is the leading cause of death and disability after ischemic heart disease. However, a non-invasive technique for long-term monitoring in stroke diagnosis and therapy is still lacking. Photoacoustic imaging reconstructs images of an object from the acoustic waves generated when absorbed optical energy is converted to pressure through thermoelastic expansion, combining optical contrast with acoustic propagation. This emerging functional imaging method is a non-invasive technique, and its precision makes it particularly attractive for stroke monitoring. In this paper, we review the achievements of this technology and its applications to stroke, as well as its development status in both animal and human applications. Various photoacoustic systems and multi-modality photoacoustic imaging are also introduced with a view toward potential clinical applications. Finally, the challenges of photoacoustic imaging for monitoring stroke are discussed.
Affiliation(s)
- Xi Yang
- Zhejiang University, Hangzhou, 310024, Zhejiang, China
- CenBRAIN Lab., School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
- Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou, 310024, Zhejiang, China
- Yun-Hsuan Chen
- CenBRAIN Lab., School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
- Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou, 310024, Zhejiang, China
- Fen Xia
- Zhejiang University, Hangzhou, 310024, Zhejiang, China
- CenBRAIN Lab., School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
- Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou, 310024, Zhejiang, China
- Mohamad Sawan
- CenBRAIN Lab., School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China
- Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou, 310024, Zhejiang, China
- Corresponding author at: CenBRAIN Lab., School of Engineering, Westlake University, Hangzhou, 310024, Zhejiang, China.
46
Tian L, Hunt B, Bell MAL, Yi J, Smith JT, Ochoa M, Intes X, Durr NJ. Deep Learning in Biomedical Optics. Lasers Surg Med 2021; 53:748-775. [PMID: 34015146 PMCID: PMC8273152 DOI: 10.1002/lsm.23414] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 04/02/2021] [Accepted: 04/15/2021] [Indexed: 01/02/2023]
Abstract
This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized. Lasers Surg. Med. © 2021 Wiley Periodicals LLC.
Affiliation(s)
- L. Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA
- B. Hunt
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- M. A. L. Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- J. Yi
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Ophthalmology, Johns Hopkins University, Baltimore, MD, USA
- J. T. Smith
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- M. Ochoa
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- X. Intes
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- N. J. Durr
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
47
Mukaddim RA, Ahmed R, Varghese T. Subaperture Processing-Based Adaptive Beamforming for Photoacoustic Imaging. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2021; 68:2336-2350. [PMID: 33606629 PMCID: PMC8330397 DOI: 10.1109/tuffc.2021.3060371] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/07/2023]
Abstract
Delay-and-sum (DAS) beamformers, when applied to photoacoustic (PA) image reconstruction, produce strong sidelobes due to the absence of transmit focusing. Consequently, DAS PA images are often severely degraded by strong off-axis clutter. For preclinical in vivo cardiac PA imaging, these noise artifacts hamper the detectability and interpretation of PA signals from the myocardial wall, which are crucial for studying blood-dominated cardiac pathology and for complementing functional information derived from ultrasound imaging. In this article, we present PA subaperture processing (PSAP), an adaptive beamforming method, to mitigate these image-degrading effects. In PSAP, a pair of DAS-reconstructed images is formed by splitting the received channel data into two complementary nonoverlapping subapertures. Then, a weighting matrix is derived by analyzing the correlation between the subaperture beamformed images and multiplied with the full-aperture DAS PA image to reduce sidelobes and incoherent clutter. We validated PSAP in numerical simulation studies with point targets, diffuse inclusions, and microvasculature imaging, and in in vivo feasibility studies on five healthy murine models. Qualitative and quantitative analyses demonstrate improvements in PAI image quality with PSAP compared to DAS and coherence-factor-weighted DAS (DAS-CF). PSAP demonstrated improved target detectability, with a higher generalized contrast-to-noise ratio (gCNR) in vasculature simulations, producing 19.61% and 19.53% higher gCNRs than DAS and DAS-CF, respectively. Furthermore, PSAP provided higher image contrast, quantified using the contrast ratio (CR) (e.g., 89.26% and 11.90% higher CR than DAS and DAS-CF in vasculature simulations), and improved clutter suppression.
48
Davoudi N, Lafci B, Özbek A, Deán-Ben XL, Razansky D. Deep learning of image- and time-domain data enhances the visibility of structures in optoacoustic tomography. OPTICS LETTERS 2021; 46:3029-3032. [PMID: 34197371 DOI: 10.1364/ol.424571] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 05/15/2021] [Indexed: 06/13/2023]
Abstract
Images rendered with common optoacoustic system implementations are often afflicted with distortions and poor visibility of structures, hindering reliable image interpretation and quantification of bio-chrome distribution. Among the practical limitations contributing to artifactual reconstructions are insufficient tomographic detection coverage and suboptimal illumination geometry, as well as the inability to accurately account for acoustic reflections and speed-of-sound heterogeneities in the imaged tissues. Here we developed a convolutional neural network (CNN) approach for enhancing optoacoustic image quality that combines training on both time-resolved signals and tomographic reconstructions. Reference human finger data for training the CNN were recorded using a full-ring array system that provides optimal tomographic coverage around the imaged object. The reconstructions were further refined with a dedicated algorithm that minimizes acoustic reflection artifacts induced by acoustically mismatched structures, such as bones. The combined methodology is shown to outperform other learning-based methods operating solely on image-domain data.
49
Na S, Wang LV. Photoacoustic computed tomography for functional human brain imaging [Invited]. BIOMEDICAL OPTICS EXPRESS 2021; 12:4056-4083. [PMID: 34457399 PMCID: PMC8367226 DOI: 10.1364/boe.423707] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Revised: 06/05/2021] [Accepted: 06/08/2021] [Indexed: 05/02/2023]
Abstract
The successes of magnetic resonance imaging and modern optical imaging of human brain function have stimulated the development of complementary modalities that offer molecular specificity, fine spatiotemporal resolution, and sufficient penetration simultaneously. By virtue of its rich optical contrast, acoustic resolution, and imaging depth far beyond the optical transport mean free path (∼1 mm in biological tissues), photoacoustic computed tomography (PACT) offers a promising complementary modality. In this article, PACT for functional human brain imaging is reviewed in its hardware, reconstruction algorithms, in vivo demonstration, and potential roadmap.
Affiliation(s)
- Shuai Na
- Caltech Optical Imaging Laboratory, Andrew
and Peggy Cherng Department of Medical Engineering,
California Institute of Technology, 1200
East California Boulevard, Pasadena, CA 91125, USA
| | - Lihong V. Wang
- Caltech Optical Imaging Laboratory, Andrew
and Peggy Cherng Department of Medical Engineering,
California Institute of Technology, 1200
East California Boulevard, Pasadena, CA 91125, USA
- Caltech Optical Imaging Laboratory,
Department of Electrical Engineering, California
Institute of Technology, 1200 East California Boulevard,
Pasadena, CA 91125, USA
| |
Collapse
|
50
|
DiSpirito A, Vu T, Pramanik M, Yao J. Sounding out the hidden data: A concise review of deep learning in photoacoustic imaging. Exp Biol Med (Maywood) 2021; 246:1355-1367. [PMID: 33779342 PMCID: PMC8243210 DOI: 10.1177/15353702211000310] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
The rapidly evolving field of photoacoustic tomography utilizes endogenous chromophores to extract both functional and structural information from deep within tissues. It is this power to perform precise quantitative measurements in vivo, with endogenous or exogenous contrast, that makes photoacoustic tomography highly promising for clinical translation in functional brain imaging, early cancer detection, real-time surgical guidance, and the visualization of dynamic drug responses. Considering that photoacoustic tomography has benefited from numerous engineering innovations, it is no surprise that many of its current cutting-edge developments incorporate advances from the equally novel field of artificial intelligence. More specifically, the growth of graphical processing unit capabilities in recent years has driven an offshoot of artificial intelligence known as deep learning. Rooted in the solid foundation of signal processing, deep learning typically uses an optimization method known as gradient descent to minimize a loss function and update model parameters. There are already a number of innovative efforts in photoacoustic tomography utilizing deep learning techniques for a variety of purposes, including resolution enhancement, reconstruction artifact removal, undersampling correction, and improved quantification. Most of these efforts have proven highly promising in addressing long-standing technical obstacles where traditional solutions either fail completely or make only incremental progress. This concise review focuses on the history of applied artificial intelligence in photoacoustic tomography, presents recent advances at this multifaceted intersection of fields, and outlines the most exciting advances that will likely propagate into promising future innovations.
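The gradient-descent update mentioned in this abstract can be shown in a minimal, generic sketch. This is not any photoacoustic model; the loss function, learning rate, and step count below are illustrative assumptions.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Minimize a loss by repeatedly stepping against its gradient.

    grad: callable returning dL/dx at the current parameter value.
    x0: initial parameter value; lr: learning rate; steps: iterations.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)  # the update rule: x <- x - lr * dL/dx
    return x

# Toy loss L(x) = (x - 3)^2 with gradient 2*(x - 3); the minimizer is x = 3.
x_min = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

In deep learning the same update is applied to millions of network weights, with the gradient computed by backpropagation and the loss measuring reconstruction error against training targets.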
Affiliation(s)
- Anthony DiSpirito
- Department of Biomedical Engineering, Duke University, Durham,
NC 27708, USA
| | - Tri Vu
- Department of Biomedical Engineering, Duke University, Durham,
NC 27708, USA
| | - Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang
Technological University, Singapore 637459, Singapore
| | - Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham,
NC 27708, USA
| |
Collapse
|