51
Zhang Z, Jin H, Zheng Z, Sharma A, Wang L, Pramanik M, Zheng Y. Deep and Domain Transfer Learning Aided Photoacoustic Microscopy: Acoustic Resolution to Optical Resolution. IEEE Transactions on Medical Imaging 2022;41:3636-3648. PMID: 35849667. DOI: 10.1109/tmi.2022.3192072.
Abstract
Acoustic resolution photoacoustic microscopy (AR-PAM) can achieve deeper imaging depth in biological tissue, at the cost of lower imaging resolution compared with optical resolution photoacoustic microscopy (OR-PAM). Here we aim to enhance AR-PAM image quality toward that of OR-PAM, which specifically includes enhancing imaging resolution, restoring micro-vasculature, and reducing artifacts. To address this problem, a network (MultiResU-Net) is first trained as a generative model with simulated AR-OR image pairs synthesized with a physical transducer model. Moderate enhancement can already be obtained when applying this model to in vivo AR imaging data; nevertheless, the perceptual quality is unsatisfactory due to domain shift. A domain transfer learning technique under a generative adversarial network (GAN) framework is therefore proposed to drive the enhanced image's manifold toward that of real OR images. In this way, a perceptually convincing AR-to-OR enhancement result is obtained, which is also supported by quantitative analysis: peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values increase significantly from 14.74 dB to 19.01 dB and from 0.1974 to 0.2937, respectively, validating the improvement in reconstruction correctness and overall perceptual quality. The proposed algorithm has also been validated across different imaging depths, with experiments conducted in both shallow and deep tissue. This AR-to-OR domain transfer learning with GAN (AODTL-GAN) framework enables the enhancement target to be met with only a limited amount of matched in vivo AR-OR imaging data.
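The PSNR and SSIM figures reported in this abstract are standard image-quality metrics. A minimal sketch of how they are computed is below; note this is a generic illustration (the SSIM here uses global image statistics rather than the usual sliding window, and it is not the authors' evaluation code).

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """Simplified SSIM from global statistics (no sliding window)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For example, a uniform 0.1 error over a unit-range image gives a PSNR of exactly 20 dB, and any image compared with itself gives an SSIM of 1.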
52
Wang T, He M, Shen K, Liu W, Tian C. Learned regularization for image reconstruction in sparse-view photoacoustic tomography. Biomedical Optics Express 2022;13:5721-5737. PMID: 36733736. PMCID: PMC9872879. DOI: 10.1364/boe.469460.
Abstract
Constrained data acquisition schemes, such as sparse-view measurements, are sometimes used in photoacoustic computed tomography (PACT) to accelerate imaging. However, it is challenging to reconstruct high-quality images under such conditions. Iterative image reconstruction with regularization is a typical choice for this problem, but it suffers from image artifacts. In this paper, we present a learned regularization method to suppress image artifacts in model-based iterative reconstruction for sparse-view PACT. A lightweight dual-path network is designed to learn regularization features from both the data and the image domains. The network is trained and tested on both simulated and in vivo datasets and compared with other methods, including Tikhonov regularization, total variation regularization, and a U-Net-based post-processing approach. Results show that although the learned regularization network is only 0.15% of the size of a U-Net, it outperforms the other methods and converges after as few as five iterations, taking less than one-third of the time of conventional methods. Moreover, the proposed reconstruction method incorporates the physical model of photoacoustic imaging and exploits structural information from the training datasets. The integration of deep learning with a physical model can potentially achieve improved imaging performance in practice.
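The model-based iteration this paper builds on can be illustrated with a plain, non-learned baseline: gradient descent on a Tikhonov-regularized least-squares objective. In the paper, a trained network replaces the hand-crafted regularizer gradient (the `lam * x` term below); this toy example with a random stand-in system matrix is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))   # stand-in for the sparse-view PACT forward model
x_true = rng.normal(size=10)
y = A @ x_true                  # simulated measurements

lam = 0.1                                         # Tikhonov regularization weight
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)    # guaranteed-convergent step size

x = np.zeros(10)
for _ in range(5000):
    # gradient of 0.5*||Ax - y||^2 + 0.5*lam*||x||^2
    grad = A.T @ (A @ x - y) + lam * x
    x -= step * grad

# closed-form regularized solution for comparison
x_ref = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ y)
```

The iterate converges to the closed-form regularized solution; a learned regularizer keeps the same data-consistency gradient but replaces the quadratic penalty with features learned from training data.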
Affiliation(s)
- Tong Wang: School of Physical Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Menghui He: Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui 230088, China
- Kang Shen: School of Engineering Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Wen Liu: School of Physical Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Chao Tian: Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui 230088, China; School of Engineering Science, University of Science and Technology of China, Hefei, Anhui 230026, China
53
Dimaridis I, Sridharan P, Ntziachristos V, Karlas A, Hadjileontiadis L. Image Quality Improvement Techniques and Assessment Adequacy in Clinical Optoacoustic Imaging: A Systematic Review. Biosensors 2022;12:901. PMID: 36291038. PMCID: PMC9599915. DOI: 10.3390/bios12100901.
Abstract
Optoacoustic imaging relies on the detection of optically induced acoustic waves to offer new possibilities in morphological and functional imaging. As the modality matures towards clinical application, research efforts aim to address multifactorial limitations that negatively impact the resulting image quality. In an endeavor to obtain a clear view on the limitations and their effects, as well as the status of this progressive refinement process, we conduct an extensive search for optoacoustic image quality improvement approaches that have been evaluated with humans in vivo, thus focusing on clinically relevant outcomes. We query six databases (PubMed, Scopus, Web of Science, IEEE Xplore, ACM Digital Library, and Google Scholar) for articles published from 1 January 2010 to 31 October 2021, and identify 45 relevant research works through a systematic screening process. We review the identified approaches, describing their primary objectives, targeted limitations, and key technical implementation details. Moreover, considering comprehensive and objective quality assessment as an essential prerequisite for the adoption of such approaches in clinical practice, we subject 36 of the 45 papers to a further in-depth analysis of the reported quality evaluation procedures, and elicit a set of criteria with the intent to capture key evaluation aspects. Through a comparative criteria-wise rating process, we seek research efforts that exhibit excellence in quality assessment of their proposed methods, and discuss features that distinguish them from works with similar objectives. Additionally, informed by the rating results, we highlight areas with improvement potential, and extract recommendations for designing quality assessment pipelines capable of providing rich evidence.
Affiliation(s)
- Ioannis Dimaridis: Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
- Patmaa Sridharan: Chair of Biological Imaging, Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, 81675 Munich, Germany; Institute of Biological and Medical Imaging, Helmholtz Zentrum München, 85764 Neuherberg, Germany
- Vasilis Ntziachristos: Chair of Biological Imaging, Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, 81675 Munich, Germany; Institute of Biological and Medical Imaging, Helmholtz Zentrum München, 85764 Neuherberg, Germany; Munich Institute of Robotics and Machine Intelligence (MIRMI), Technical University of Munich, 80992 Munich, Germany; German Centre for Cardiovascular Research (DZHK), partner site Munich Heart Alliance, 80636 Munich, Germany
- Angelos Karlas: Chair of Biological Imaging, Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, 81675 Munich, Germany; Institute of Biological and Medical Imaging, Helmholtz Zentrum München, 85764 Neuherberg, Germany; German Centre for Cardiovascular Research (DZHK), partner site Munich Heart Alliance, 80636 Munich, Germany; Clinic for Vascular and Endovascular Surgery, Klinikum rechts der Isar, 81675 Munich, Germany
- Leontios Hadjileontiadis: Department of Biomedical Engineering, Khalifa University, Abu Dhabi P.O. Box 127788, United Arab Emirates; Healthcare Engineering Innovation Center (HEIC), Khalifa University, Abu Dhabi P.O. Box 127788, United Arab Emirates; Signal Processing and Biomedical Technology Unit, Telecommunications Laboratory, Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
54
Zhang Y, Yang S, Xia Z, Hou R, Xu B, Hou L, Marsh JH, Hou JJ, Sani SMR, Liu X, Xiong J. Co-optimization method to improve lateral resolution in photoacoustic computed tomography. Biomedical Optics Express 2022;13:4621-4636. PMID: 36187257. PMCID: PMC9484412. DOI: 10.1364/boe.469744.
Abstract
In biomedical imaging, photoacoustic computed tomography (PACT) has recently gained increased interest because it combines good optical contrast with deep acoustic penetration. However, spinning blur is introduced during image reconstruction due to the limited size of the ultrasonic transducer (UT) and the discontinuous measurement process. In this study, a damping-UT and adaptive back-projection co-optimization (CODA) method is developed to improve the lateral spatial resolution of PACT. In our PACT system, a damping-aperture UT controls the size of the receiving area, which suppresses image blur at the signal acquisition stage. Then, an adaptive back-projection algorithm is developed to correct the remaining undesirable artifacts. The proposed method was evaluated with agar phantom and ex vivo experiments. The results show that the CODA method can effectively compensate for spinning blur and eliminate unwanted artifacts in PACT. The proposed method significantly improves the lateral spatial resolution and quality of reconstructed images, making PACT more appealing as a novel, cost-effective modality for wider clinical application.
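The back-projection family of algorithms that CODA refines can be illustrated with a minimal, non-adaptive delay-and-sum reconstruction: every pixel accumulates, from each detector, the sample recorded at the acoustic time of flight from that pixel. This generic sketch (point source, circular array; all geometry values are made up for illustration) is not the authors' adaptive algorithm.

```python
import numpy as np

def delay_and_sum(signals, det_pos, pixels, t, c=1500.0):
    """Naive back-projection: for each pixel, sum the detector samples
    recorded at the pixel-to-detector time of flight (speed of sound c)."""
    dt = t[1] - t[0]
    img = np.zeros(len(pixels))
    for sig, d in zip(signals, det_pos):
        dist = np.linalg.norm(pixels - d, axis=1)           # metres
        idx = np.round((dist / c - t[0]) / dt).astype(int)  # flight-time sample
        idx = np.clip(idx, 0, len(t) - 1)
        img += sig[idx]
    return img

# Point source at the origin, 64 detectors on a 50 mm circle
n_det, radius = 64, 0.05
ang = np.linspace(0, 2 * np.pi, n_det, endpoint=False)
det_pos = radius * np.stack([np.cos(ang), np.sin(ang)], axis=1)
t = np.arange(0, 1e-4, 1e-7)                            # 0.1 us sampling
signals = np.zeros((n_det, len(t)))
signals[:, int(round(radius / 1500.0 / 1e-7))] = 1.0    # delta at r/c everywhere

pixels = np.array([[0.0, 0.0], [0.02, 0.01]])  # source pixel, off-source pixel
img = delay_and_sum(signals, det_pos, pixels, t)
```

Here `img[0]` receives a contribution from every detector, while the off-source pixel accumulates far less, which is why finite transducer apertures and discrete scan steps (the blur sources CODA targets) smear such point responses.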
Affiliation(s)
- Yang Zhang: School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Shufan Yang: School of Computing, Edinburgh Napier University, Edinburgh, Scotland, EH10 5DT, UK
- Zhiying Xia: School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Ruijie Hou: School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Bin Xu: School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Lianping Hou: James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK
- John H. Marsh: James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK
- Jamie Jiangmin Hou: The Royal College of Surgeons of Edinburgh, Nicolson Street, Edinburgh, Scotland, EH8 9DW, UK
- Xuefeng Liu: School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China (equal contribution)
- Jichuan Xiong: School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China (equal contribution)
55
Shahid H, Khalid A, Yue Y, Liu X, Ta D. Feasibility of a Generative Adversarial Network for Artifact Removal in Experimental Photoacoustic Imaging. Ultrasound in Medicine & Biology 2022;48:1628-1643. PMID: 35660105. DOI: 10.1016/j.ultrasmedbio.2022.04.008.
Abstract
Photoacoustic tomography (PAT) reconstruction is attracting rapidly growing interest among biomedical researchers because of the modality's potential transition from the laboratory to the clinic. Nonetheless, the PAT inverse problem has yet to reach an optimal solution for rapid and precise reconstruction under practical constraints. Specifically, sparse sampling and random noise are the main impediments to reconstructions that are both accurate and fast: undersampling introduces artifacts that degrade image quality, so fast image formation alone does not suffice for clinical settings. Addressing this problem, here we explore a deep learning-based generative adversarial network (GAN) to improve image quality by denoising and removing these artifacts. The network's specially designed attributes and the way the problem is optimized, such as accounting for dataset limitations and providing stable training performance, constitute the main motivation for employing a GAN. Moreover, using a U-Net variant as the generator network offers robust performance in terms of quality and computational cost, which is further validated with detailed quantitative and qualitative analysis. The quantitative evaluation (structural similarity index = 0.980 ± 0.043 and peak signal-to-noise ratio = 31 ± 0.002 dB) indicates that the proposed method produces high-resolution output images even when trained with a low-quality dataset.
Affiliation(s)
- Husnain Shahid: Center for Biomedical Engineering, Fudan University, China
- Adnan Khalid: School of Information and Communication Engineering, Tianjin University, China
- Yaoting Yue: Center for Biomedical Engineering, Fudan University, China
- Xin Liu: Academy for Engineering and Technology, Fudan University, Shanghai, China
- Dean Ta: Center for Biomedical Engineering, Fudan University, China; Academy for Engineering and Technology, Fudan University, Shanghai, China
56
Yip LCM, Omidi P, Rascevska E, Carson JJL. Approaching closed spherical, full-view detection for photoacoustic tomography. Journal of Biomedical Optics 2022;27:086004. PMID: 36042544. PMCID: PMC9424748. DOI: 10.1117/1.jbo.27.8.086004.
Abstract
SIGNIFICANCE Photoacoustic tomography (PAT) is a widely explored imaging modality and has excellent potential for clinical applications. On the acoustic detection side, limited-view angle and limited-bandwidth are common key issues in PAT systems that result in unwanted artifacts. While analytical and simulation studies of limited-view artifacts are fairly extensive, experimental setups capable of comparing limited-view to an ideal full-view case are lacking. AIMS A custom ring-shaped detector array was assembled and mounted to a 6-axis robot, then rotated and translated to achieve up to 3.8π steradian view angle coverage of an imaged object. APPROACH Minimization of negativity artifacts and phantom imaging were used to optimize the system, followed by demonstrative imaging of a star contrast phantom, a synthetic breast tumor specimen phantom, and a vascular phantom. RESULTS Optimization of the angular/rotation scans found ≈212 effective detectors were needed for high-quality images, while 15-mm steps were used to increase the field of view as required depending on the size of the imaged object. Example phantoms were clearly imaged with all discerning features visible and minimal artifacts. CONCLUSIONS A near full-view closed spherical system has been developed, paving the way for future work demonstrating experimentally the significant advantages of using a full-view PAT setup.
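The view-angle coverage quoted in this abstract (up to 3.8π sr, against a full sphere's 4π sr) can be related to an equivalent spherical-cap opening angle via the standard formula Ω = 2π(1 − cos θ). A small sketch using that textbook formula (not the authors' detector geometry):

```python
import math

def cap_solid_angle(theta):
    """Solid angle (steradians) of a spherical cap with half-opening angle theta."""
    return 2.0 * math.pi * (1.0 - math.cos(theta))

def coverage_fraction(omega):
    """Fraction of the full 4*pi-steradian sphere covered by omega steradians."""
    return omega / (4.0 * math.pi)
```

For example, 3.8π sr is a coverage fraction of 0.95, equivalent to a cap with θ = arccos(1 − 1.9) ≈ 154°, which is why the authors describe the system as "near full-view".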
Affiliation(s)
- Lawrence C. M. Yip: Lawson Health Research Institute, Imaging Program, London, Ontario, Canada; Western University, Schulich School of Medicine and Dentistry, Department of Medical Biophysics, London, Ontario, Canada
- Parsa Omidi: Lawson Health Research Institute, Imaging Program, London, Ontario, Canada; Western University, School of Biomedical Engineering, London, Ontario, Canada
- Elina Rascevska: Lawson Health Research Institute, Imaging Program, London, Ontario, Canada; Western University, School of Biomedical Engineering, London, Ontario, Canada
- Jeffrey J. L. Carson: Lawson Health Research Institute, Imaging Program, London, Ontario, Canada; Western University, Schulich School of Medicine and Dentistry, Department of Medical Biophysics, London, Ontario, Canada; Western University, School of Biomedical Engineering, London, Ontario, Canada; Western University, Schulich School of Medicine and Dentistry, Department of Surgery, London, Ontario, Canada
57
Gao Y, Xu W, Chen Y, Xie W, Cheng Q. Deep Learning-Based Photoacoustic Imaging of Vascular Network Through Thick Porous Media. IEEE Transactions on Medical Imaging 2022;41:2191-2204. PMID: 35294347. DOI: 10.1109/tmi.2022.3158474.
Abstract
Photoacoustic imaging is a promising approach for realizing in vivo transcranial cerebral vascular imaging. However, the strong attenuation and distortion of the photoacoustic wave caused by the thick, porous skull greatly degrade imaging quality. In this study, we developed a convolutional neural network based on U-Net to extract the effective photoacoustic information hidden in the speckle patterns obtained from vascular network image datasets under porous media. Our simulation and experimental results show that the proposed neural network can learn the mapping between the speckle pattern and the target, extracting the photoacoustic signals of vessels submerged in noise to reconstruct high-quality vessel images with sharp outlines and a clean background. Compared with traditional photoacoustic reconstruction methods, the proposed deep learning-based reconstruction algorithm achieves a lower mean absolute error, higher structural similarity, and higher peak signal-to-noise ratio in the reconstructed images. In conclusion, the proposed neural network can effectively extract valid information from highly blurred speckle patterns for rapid reconstruction of target images, which offers promising applications in transcranial photoacoustic imaging.
58
Zhang H, Bo W, Wang D, DiSpirito A, Huang C, Nyayapathi N, Zheng E, Vu T, Gong Y, Yao J, Xu W, Xia J. Deep-E: A Fully-Dense Neural Network for Improving the Elevation Resolution in Linear-Array-Based Photoacoustic Tomography. IEEE Transactions on Medical Imaging 2022;41:1279-1288. PMID: 34928793. PMCID: PMC9161237. DOI: 10.1109/tmi.2021.3137060.
Abstract
Linear-array-based photoacoustic tomography has shown broad applications in biomedical research and preclinical imaging. However, the elevational resolution of a linear array is fundamentally limited by the weak cylindrical focus of the transducer element. While several methods have been proposed to address this issue, they have all handled the problem in a time-inefficient way. In this work, we propose to improve the elevational resolution of a linear array through Deep-E, a fully dense neural network based on U-Net. Deep-E achieves high computational efficiency by converting the three-dimensional problem into a two-dimensional one: it focuses on training a model to enhance resolution along the elevational direction using only 2D slices in the axial-elevational plane, thereby reducing the computational burden in simulation and training. We demonstrated the efficacy of Deep-E using various datasets, including simulation, phantom, and human subject results. We found that Deep-E could improve elevational resolution by at least four times and recover the object's true size. We envision that Deep-E will have a significant impact on linear-array-based photoacoustic imaging studies by providing high-speed, high-resolution image enhancement.
59
Mozumder M, Hauptmann A, Nissila I, Arridge SR, Tarvainen T. A Model-Based Iterative Learning Approach for Diffuse Optical Tomography. IEEE Transactions on Medical Imaging 2022;41:1289-1299. PMID: 34914584. DOI: 10.1109/tmi.2021.3136461.
Abstract
Diffuse optical tomography (DOT) utilises near-infrared light for imaging spatially distributed optical parameters, typically the absorption and scattering coefficients. The image reconstruction problem of DOT is an ill-posed inverse problem, due to the non-linear light propagation in tissues and limited boundary measurements. The ill-posedness means that the image reconstruction is sensitive to measurement and modelling errors. The Bayesian approach for the inverse problem of DOT offers the possibility of incorporating prior information about the unknowns, rendering the problem less ill-posed. It also allows marginalisation of modelling errors utilising the so-called Bayesian approximation error method. A more recent trend in image reconstruction techniques is the use of deep learning, which has shown promising results in various applications from image processing to tomographic reconstructions. In this work, we study the non-linear DOT inverse problem of estimating the (absolute) absorption and scattering coefficients utilising a 'model-based' learning approach, essentially intertwining learned components with the model equations of DOT. The proposed approach was validated with 2D simulations and 3D experimental data. We demonstrated improved absorption and scattering estimates for targets with a mix of smooth and sharp image features, implying that the proposed approach could learn image features that are difficult to model using standard Gaussian priors. Furthermore, it was shown that the approach can be utilised in compensating for modelling errors due to coarse discretisation enabling computationally efficient solutions. Overall, the approach provided improved computation times compared to a standard Gauss-Newton iteration.
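The Gauss-Newton iteration used above as the baseline is the standard workhorse for nonlinear least-squares inverse problems: linearize the residual, solve the normal equations, repeat. A generic sketch on a toy residual (not the DOT forward model; the paper intertwines learned components with this kind of model-based update):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iter=20):
    """Gauss-Newton: minimize ||residual(x)||^2 by repeatedly linearizing
    the residual and solving the resulting normal equations."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_iter):
        J = jacobian(x)
        r = residual(x)
        x -= np.linalg.solve(J.T @ J, J.T @ r)   # normal-equations step
    return x

# Toy problem: recover (a, b) from residuals r = (a^2 - 4, b^2 - 9)
res = lambda x: np.array([x[0] ** 2 - 4.0, x[1] ** 2 - 9.0])
jac = lambda x: np.diag([2.0 * x[0], 2.0 * x[1]])
x_hat = gauss_newton(res, jac, [1.0, 1.0])
```

In a learned variant, parts of this update (for example, the correction applied to each iterate) are replaced by trained network components while the forward-model residual is kept.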
60
Nam S, Kim D, Jung W, Zhu Y. Understanding the Research Landscape of Deep Learning in Biomedical Science: Scientometric Analysis. Journal of Medical Internet Research 2022;24:e28114. PMID: 35451980. PMCID: PMC9077503. DOI: 10.2196/28114.
Abstract
BACKGROUND Advances in biomedical research using deep learning techniques have generated a large volume of related literature. However, there is a lack of scientometric studies that provide a bird's-eye view of them. This absence has led to a partial and fragmented understanding of the field and its progress. OBJECTIVE This study aimed to gain a quantitative and qualitative understanding of the scientific domain by analyzing diverse bibliographic entities that represent the research landscape from multiple perspectives and levels of granularity. METHODS We searched and retrieved 978 deep learning studies in biomedicine from the PubMed database. A scientometric analysis was performed by analyzing the metadata, content of influential works, and cited references. RESULTS In the process, we identified the current leading fields, major research topics and techniques, knowledge diffusion, and research collaboration. There was a predominant focus on applying deep learning, especially convolutional neural networks, to radiology and medical imaging, whereas a few studies focused on protein or genome analysis. Radiology and medical imaging also appeared to be the most significant knowledge sources and an important field in knowledge diffusion, followed by computer science and electrical engineering. A coauthorship analysis revealed various collaborations among engineering-oriented and biomedicine-oriented clusters of disciplines. CONCLUSIONS This study investigated the landscape of deep learning research in biomedicine and confirmed its interdisciplinary nature. Although it has been successful, we believe that there is a need for diverse applications in certain areas to further boost the contributions of deep learning in addressing biomedical research problems. We expect the results of this study to help researchers and communities better align their present and future work.
Affiliation(s)
- Seojin Nam: Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
- Donghun Kim: Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
- Woojin Jung: Department of Library and Information Science, Sungkyunkwan University, Seoul, Republic of Korea
- Yongjun Zhu: Department of Library and Information Science, Yonsei University, Seoul, Republic of Korea
61
Zheng S, Meng Q, Wang XY. Quantitative endoscopic photoacoustic tomography using a convolutional neural network. Applied Optics 2022;61:2574-2581. PMID: 35471325. DOI: 10.1364/ao.441250.
Abstract
Endoscopic photoacoustic tomography (EPAT) is a catheter-based hybrid imaging modality capable of providing structural and functional information of biological luminal structures, such as coronary arterial vessels and the digestive tract. The recovery of the optical properties of the imaged tissue from acoustic measurements achieved by optical inversion is essential for implementing quantitative EPAT (qEPAT). In this paper, a convolutional neural network (CNN) based on deep gradient descent is developed for qEPAT. The network enables the reconstruction of images representing the spatially varying absorption coefficient in cross-sections of the tubular structures from limited measurement data. The forward operator reflecting the mapping from the absorption coefficient to the optical deposition due to pulsed irradiation is embedded into the network training. The network parameters are optimized layer by layer through the deep gradient descent mechanism using the numerically simulated data. The operation processes of the forward operator and its adjoint operator are separated from the network training. The trained network outputs an image representing the distribution of absorption coefficients by inputting an image that represents the optical deposition. The method has been tested with computer-generated phantoms mimicking coronary arterial vessels containing various tissue types. Results suggest that the structural similarity of the images reconstructed by our method is increased by about 10% in comparison with the non-learning method based on error minimization in the case of the same measuring view.
62
Gröhl J, Dreher KK, Schellenberg M, Rix T, Holzwarth N, Vieten P, Ayala L, Bohndiek SE, Seitel A, Maier-Hein L. SIMPA: an open-source toolkit for simulation and image processing for photonics and acoustics. Journal of Biomedical Optics 2022;27:083010. PMID: 35380031. PMCID: PMC8978263. DOI: 10.1117/1.jbo.27.8.083010.
Abstract
SIGNIFICANCE Optical and acoustic imaging techniques enable noninvasive visualisation of structural and functional properties of tissue. The quantification of measurements, however, remains challenging due to the inverse problems that must be solved. Emerging data-driven approaches are promising, but they rely heavily on the presence of high-quality simulations across a range of wavelengths due to the lack of ground truth knowledge of tissue acoustical and optical properties in realistic settings. AIM To facilitate this process, we present the open-source simulation and image processing for photonics and acoustics (SIMPA) Python toolkit. SIMPA is being developed according to modern software design standards. APPROACH SIMPA enables the use of computational forward models, data processing algorithms, and digital device twins to simulate realistic images within a single pipeline. SIMPA's module implementations can be seamlessly exchanged as SIMPA abstracts from the concrete implementation of each forward model and builds the simulation pipeline in a modular fashion. Furthermore, SIMPA provides comprehensive libraries of biological structures, such as vessels, as well as optical and acoustic properties and other functionalities for the generation of realistic tissue models. RESULTS To showcase the capabilities of SIMPA, we show examples in the context of photoacoustic imaging: the diversity of creatable tissue models, the customisability of a simulation pipeline, and the degree of realism of the simulations. CONCLUSIONS SIMPA is an open-source toolkit that can be used to simulate optical and acoustic imaging modalities. The code is available at: https://github.com/IMSY-DKFZ/simpa, and all of the examples and experiments in this paper can be reproduced using the code available at: https://github.com/IMSY-DKFZ/simpa_paper_experiments.
Affiliation(s)
- Janek Gröhl: German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
- Kris K. Dreher: German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany; Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Melanie Schellenberg: German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany; Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany; HIDSS4Health - Helmholtz Information and Data Science School for Health, Heidelberg, Germany
- Tom Rix: German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany; Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
- Niklas Holzwarth: German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
- Patricia Vieten: German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany; Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Leonardo Ayala: German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany; Heidelberg University, Medical Faculty, Heidelberg, Germany
- Sarah E. Bohndiek: University of Cambridge, Cancer Research UK Cambridge Institute, Robinson Way, Cambridge, United Kingdom; University of Cambridge, Department of Physics, Cambridge, United Kingdom
- Alexander Seitel: German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany
- Lena Maier-Hein: German Cancer Research Center (DKFZ), Division of Intelligent Medical Systems, Heidelberg, Germany; Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany; Heidelberg University, Medical Faculty, Heidelberg, Germany
63
Lan H, Gong J, Gao F. Deep learning adapted acceleration for limited-view photoacoustic image reconstruction. OPTICS LETTERS 2022; 47:1911-1914. [PMID: 35363767 DOI: 10.1364/ol.450860] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Accepted: 03/07/2022] [Indexed: 06/14/2023]
Abstract
The limited-view problem causes low-quality images in photoacoustic (PA) computed tomography due to the restricted geometric coverage of the detection. Model-based methods with various regularization terms are used to address this problem. To enable fast, high-quality reconstruction of limited-view PA data, in this Letter, a model-based method that combines the mathematical variational model with deep learning is proposed to speed up and regularize the unrolled reconstruction procedure. A deep neural network is designed to adapt the step size of the data-consistency gradient update in the gradient descent procedure, which yields a high-quality PA image with only a few iterations. A comparison of different model-based methods shows that our proposed scheme has superior performance (an SSIM improvement of over 0.05) at the same number of iterations (three). Finally, we find that our method obtains superior results (an SSIM of 0.94 in vivo) with high robustness and accelerated reconstruction.
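The idea of adapting the step of the data-consistency gradient update can be sketched on a toy least-squares problem. Here the per-iteration step sizes are hand-picked stand-ins for what the paper's network would predict; this is not the authors' code:

```python
# Toy unrolled gradient descent for ||A x - b||^2 with a per-iteration
# step size. In the paper the steps are produced by a trained network;
# here they are fixed by hand for illustration.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def grad(A, x, b):
    r = [yi - bi for yi, bi in zip(matvec(A, x), b)]   # residual A x - b
    At = list(zip(*A))                                 # transpose of A
    return matvec(At, r)                               # A^T (A x - b)

A = [[1.0, 0.0], [0.0, 2.0]]
b = [1.0, 2.0]
x = [0.0, 0.0]
steps = [0.4, 0.3, 0.2]          # "learned" per-iteration step sizes

losses = [sum((yi - bi) ** 2 for yi, bi in zip(matvec(A, x), b))]
for alpha in steps:
    g = grad(A, x, b)
    x = [xi - alpha * gi for xi, gi in zip(x, g)]
    losses.append(sum((yi - bi) ** 2 for yi, bi in zip(matvec(A, x), b)))
```

Each unrolled iteration reduces the data-consistency loss; choosing (or learning) the step sizes well is what lets only a handful of iterations suffice.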
64
Mom K, Sixou B, Langer M. Mixed scale dense convolutional networks for x-ray phase contrast imaging. APPLIED OPTICS 2022; 61:2497-2505. [PMID: 35471314 DOI: 10.1364/ao.443330] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Accepted: 02/25/2022] [Indexed: 06/14/2023]
Abstract
X-ray in-line phase contrast imaging relies on the measurement of Fresnel diffraction intensity patterns due to the phase shift and the attenuation induced by the object. The recovery of phase and attenuation from one or several diffraction patterns is a nonlinear ill-posed inverse problem. In this work, we propose supervised learning approaches using mixed scale dense (MS-D) convolutional neural networks to simultaneously retrieve the phase and the attenuation from x-ray phase contrast images. This network architecture uses dilated convolutions to capture features at different image scales and densely connects all feature maps. Long-range information in images becomes quickly available, and a greater receptive field size can be obtained without losing resolution. This architecture appears to account for the effect of the Fresnel operator very efficiently. We train the networks using simulated data of objects consisting of either homogeneous components, characterized by a fixed ratio of the induced refractive phase shifts and attenuation, or heterogeneous components, consisting of various materials. We also train the networks in the image domain by applying a simple initial reconstruction using the adjoint of the Fréchet derivative. We compare the results obtained with the MS-D network to reconstructions using U-Net, another popular network architecture, as well as to reconstructions using the contrast transfer function method, a direct phase and attenuation retrieval method based on linearization of the direct problem. The networks are evaluated using simulated noisy data as well as images acquired at NanoMAX (MAX IV, Lund, Sweden). In all cases, the reconstruction errors on simulated data are greatly reduced compared to the linearized method. Moreover, on experimental data, the networks improve the reconstruction quantitatively, improving the low-frequency behavior and the resolution.
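The receptive-field benefit of dilated convolutions can be illustrated with a toy 1-D example (illustrative kernels and sizes, not the paper's network):

```python
# Sketch of 1-D dilated convolution, the building block of MS-D nets:
# stacking kernels with growing dilation enlarges the receptive field
# without losing resolution. Values are illustrative only.

def dilated_conv1d(x, w, d):
    """'Same' convolution of list x with odd-length kernel w and dilation d."""
    k = len(w) // 2
    out = []
    for i in range(len(x)):
        s = 0.0
        for j, wj in enumerate(w):
            idx = i + (j - k) * d          # taps spaced d samples apart
            if 0 <= idx < len(x):          # zero padding at the borders
                s += wj * x[idx]
        out.append(s)
    return out

def receptive_field(dilations, kernel=3):
    """Receptive field of stacked stride-1 'same' convolutions."""
    return 1 + (kernel - 1) * sum(dilations)

impulse = [0.0] * 9
impulse[4] = 1.0
y = dilated_conv1d(impulse, [1.0, 1.0, 1.0], d=2)   # taps at -2, 0, +2
```

With kernel size 3 and dilations 1, 2, 4, the stacked receptive field is 1 + 2·(1+2+4) = 15 samples, while each layer's output stays at full resolution.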
65
Ly CD, Nguyen VT, Vo TH, Mondal S, Park S, Choi J, Vu TTH, Kim CS, Oh J. Full-view in vivo skin and blood vessels profile segmentation in photoacoustic imaging based on deep learning. PHOTOACOUSTICS 2022; 25:100310. [PMID: 34824975 PMCID: PMC8603312 DOI: 10.1016/j.pacs.2021.100310] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 09/23/2021] [Accepted: 10/18/2021] [Indexed: 05/08/2023]
Abstract
Photoacoustic (PA) microscopy allows imaging of soft biological tissue based on optical absorption contrast with ultrasonic spatial resolution. One of the major applications of PA imaging is the characterization of microvasculature. However, the strong PA signal from the skin layer overshadows the subcutaneous blood vessels, so PA images in human studies can only be reconstructed indirectly. To address this situation, we examined a deep learning (DL) automatic algorithm to achieve high-resolution and high-contrast segmentation, widening PA imaging applications. In this research, we propose a DL model based on a modified U-Net for extracting the relationship between the amplitudes of the PA signals generated from skin and from underlying vessels. This study illustrates the broader potential of hybrid complex networks as an automatic segmentation tool for in vivo PA imaging. With this DL-based solution, our results outperform previous studies, achieving real-time semantic segmentation on large high-resolution PA images.
Affiliation(s)
- Cao Duong Ly
  - Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Van Tu Nguyen
  - Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Tan Hung Vo
  - Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Sudip Mondal
  - New-senior Healthcare Innovation Center (BK21 Plus), Pukyong National University, Busan 48513, Republic of Korea
- Sumin Park
  - Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Jaeyeop Choi
  - Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
  - Ohlabs Corp, Busan 48513, Republic of Korea
- Thi Thu Ha Vu
  - Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Chang-Seok Kim
  - Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Junghwan Oh
  - Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
  - Department of Biomedical Engineering, Pukyong National University, Busan 48513, Republic of Korea
  - Ohlabs Corp, Busan 48513, Republic of Korea
  - New-senior Healthcare Innovation Center (BK21 Plus), Pukyong National University, Busan 48513, Republic of Korea
66
Cheng S, Zhou Y, Chen J, Li H, Wang L, Lai P. High-resolution photoacoustic microscopy with deep penetration through learning. PHOTOACOUSTICS 2022; 25:100314. [PMID: 34824976 PMCID: PMC8604673 DOI: 10.1016/j.pacs.2021.100314] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Revised: 11/01/2021] [Accepted: 11/01/2021] [Indexed: 05/18/2023]
Abstract
Optical-resolution photoacoustic microscopy (OR-PAM) enjoys superior spatial resolution and has received intense attention in recent years. Its application, however, has been limited to shallow depths because of strong scattering of light in biological tissues. In this work, we propose to achieve deep-penetrating OR-PAM performance by using deep learning enabled image transformation on blurry living mouse vascular images that were acquired with an acoustic-resolution photoacoustic microscopy (AR-PAM) setup. A generative adversarial network (GAN) was trained in this study and improved the imaging lateral resolution of AR-PAM from 54.0 µm to 5.1 µm, comparable to that of a typical OR-PAM (4.7 µm). The feasibility of the network was evaluated with living mouse ear data, producing superior microvasculature images that outperform blind deconvolution. The generalization of the network was validated with in vivo mouse brain data. Moreover, it was shown experimentally that the deep-learning method can retain high resolution at tissue depths beyond one optical transport mean free path. Whilst it can be further improved, the proposed method provides new horizons to expand the scope of OR-PAM towards deep-tissue imaging and wide applications in biomedicine.
Affiliation(s)
- Shengfu Cheng
  - Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
  - The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Yingying Zhou
  - Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
  - Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
  - The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Jiangbo Chen
  - Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
  - City University of Hong Kong Shenzhen Research Institute, Shenzhen, China
- Huanhao Li
  - Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
  - The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
- Lidai Wang
  - Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
  - City University of Hong Kong Shenzhen Research Institute, Shenzhen, China
- Puxiang Lai
  - Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong, China
  - The Hong Kong Polytechnic University Shenzhen Research Institute, Shenzhen, China
67
Duan T, Peng X, Chen M, Zhang D, Gao F, Yao J. Detection of weak optical absorption by optical-resolution photoacoustic microscopy. PHOTOACOUSTICS 2022; 25:100335. [PMID: 35198378 PMCID: PMC8844787 DOI: 10.1016/j.pacs.2022.100335] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/30/2021] [Revised: 02/04/2022] [Accepted: 02/04/2022] [Indexed: 06/14/2023]
Abstract
Optical-resolution photoacoustic microscopy (OR-PAM) is one of the major implementations of photoacoustic (PA) imaging. With tightly focused optical illumination and high-frequency ultrasound detection, OR-PAM provides micrometer-level resolutions as well as high sensitivity to optical absorption contrast. Traditionally, it is assumed that the detected PA signal in OR-PAM has a linear dependence on the target's optical absorption coefficient, which is the basis for quantitative functional and molecular PA imaging. In this paper, we demonstrate that, due to the limited detection bandwidth and detection view, OR-PAM can have a strong nonlinear dependence on the optical absorption, especially for weak optical absorption (<10 cm⁻¹). We have investigated the nonlinear dependence in OR-PAM using numerical simulations, analyzed the underlying mechanisms, proposed potential solutions, and experimentally confirmed the results on phantoms. This work may correct a traditional misunderstanding of the OR-PAM signals and improve quantitative accuracy for functional and molecular applications.
Affiliation(s)
- Tingyang Duan
  - Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
  - Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Xiaorui Peng
  - Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
  - Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Maomao Chen
  - Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Dong Zhang
  - Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
  - Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Fei Gao
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Junjie Yao
  - Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
68
Sangha GS, Hu B, Li G, Fox SE, Sholl AB, Brown JQ, Goergen CJ. Assessment of photoacoustic tomography contrast for breast tissue imaging using 3D correlative virtual histology. Sci Rep 2022; 12:2532. [PMID: 35169198 PMCID: PMC8847353 DOI: 10.1038/s41598-022-06501-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2021] [Accepted: 01/25/2022] [Indexed: 11/12/2022] Open
Abstract
Current breast tumor margin detection methods are destructive, time-consuming, and result in significant reoperation rates. Dual-modality photoacoustic tomography (PAT) and ultrasound imaging has the potential to enhance breast margin characterization by providing clinically relevant compositional information with high sensitivity and tissue penetration. However, quantitative methods that rigorously compare volumetric PAT and ultrasound images with gold-standard histology are lacking, thus limiting clinical validation and translation. Here, we present a quantitative multimodality workflow that uses inverted Selective Plane Illumination Microscopy (iSPIM) to facilitate image co-registration between volumetric PAT-ultrasound datasets and histology in human invasive ductal carcinoma breast tissue samples. Our ultrasound-PAT system consisted of a tunable Nd:YAG laser coupled with a 40 MHz central frequency ultrasound transducer. A linear stepper motor was used to acquire volumetric PAT and ultrasound breast biopsy datasets, using 1100 nm light to identify hemoglobin-rich regions and 1210 nm light to identify lipid-rich regions. Our iSPIM system used 488 nm and 647 nm laser excitation combined with Eosin and DRAQ5, a cell-permeant nucleic acid binding dye, to produce high-resolution volumetric datasets comparable to histology. Image thresholding was applied to PAT and iSPIM images to extract, quantify, and topologically visualize breast biopsy lipid, stroma, hemoglobin, and nuclei distributions. Our lipid-weighted PAT and iSPIM images suggest that low lipid regions strongly correlate with malignant breast tissue. Hemoglobin-weighted PAT images, however, correlated poorly with cancerous regions determined by histology and interpreted by a board-certified pathologist. Nuclei-weighted iSPIM images revealed similar cellular content in cancerous and non-cancerous tissues, suggesting malignant cell migration from the breast ducts to the surrounding tissues. We demonstrate the utility of our nondestructive, volumetric, region-based quantitative method for comprehensive validation of 3D tomographic imaging methods suitable for bedside tumor margin detection.
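The image-thresholding quantification step can be sketched as follows (toy intensities, not the study's data):

```python
# Toy version of the thresholding step used to quantify tissue
# composition: classify pixels against a threshold and report the
# fraction of, e.g., lipid-rich area. Values are illustrative only.

def component_fraction(image, threshold):
    """Fraction of pixels whose intensity exceeds the threshold."""
    pixels = [v for row in image for v in row]
    return sum(v > threshold for v in pixels) / len(pixels)

lipid_weighted = [
    [0.9, 0.8, 0.1],
    [0.7, 0.2, 0.1],
    [0.1, 0.1, 0.1],
]
lipid_area = component_fraction(lipid_weighted, threshold=0.5)  # 3 of 9 pixels
```

The same fraction computed per region is what allows lipid, stroma, hemoglobin, and nuclei distributions to be compared across modalities.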
Affiliation(s)
- Gurneet S Sangha
  - Fischell Department of Bioengineering, University of Maryland, 8278 Paint Branch Dr, College Park, MD, 20742, USA
  - Weldon School of Biomedical Engineering, Purdue University, 206 S. Martin Jischke Dr., West Lafayette, IN, 47907, USA
- Bihe Hu
  - Department of Biomedical Engineering, Tulane University, 547 Lindy Boggs Center, New Orleans, LA, 70118, USA
- Guang Li
  - Department of Biomedical Engineering, Tulane University, 547 Lindy Boggs Center, New Orleans, LA, 70118, USA
- Sharon E Fox
  - Department of Pathology, LSU Health Sciences Center, New Orleans, 433 Bolivar St, New Orleans, LA, 70112, USA
  - Pathology and Laboratory Medicine Service, Southeast Louisiana Veterans Healthcare System, 2400 Canal Street, New Orleans, LA, 70112, USA
- Andrew B Sholl
  - Delta Pathology Group, Touro Infirmary, 1401 Foucher St, New Orleans, LA, 70115, USA
- J Quincy Brown
  - Department of Biomedical Engineering, Tulane University, 547 Lindy Boggs Center, New Orleans, LA, 70118, USA
- Craig J Goergen
  - Weldon School of Biomedical Engineering, Purdue University, 206 S. Martin Jischke Dr., West Lafayette, IN, 47907, USA
  - Purdue University Center for Cancer Research, Purdue University, 201 S. University St., West Lafayette, IN, 47907, USA
69
Maneas E, Hauptmann A, Alles EJ, Xia W, Vercauteren T, Ourselin S, David AL, Arridge S, Desjardins AE. Deep Learning for Instrumented Ultrasonic Tracking: From Synthetic Training Data to In Vivo Application. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:543-552. [PMID: 34748488 DOI: 10.1109/tuffc.2021.3126530] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Instrumented ultrasonic tracking is used to improve needle localization during ultrasound guidance of minimally invasive percutaneous procedures. Here, it is implemented with transmitted ultrasound pulses from a clinical ultrasound imaging probe, which are detected by a fiber-optic hydrophone integrated into a needle. The detected transmissions are then reconstructed to form the tracking image. Two challenges are considered with the current implementation of ultrasonic tracking. First, tracking transmissions are interleaved with the acquisition of B-mode images, and thus the effective B-mode frame rate is reduced. Second, it is challenging to achieve accurate localization of the needle tip when the signal-to-noise ratio is low. To address these challenges, we present a framework based on a convolutional neural network (CNN) to maintain spatial resolution with fewer tracking transmissions and to enhance signal quality. A major component of the framework is the generation of realistic synthetic training data. The trained network was applied to unseen synthetic data and to experimental in vivo tracking data. The performance of needle localization was investigated when reconstruction was performed with fewer (up to eightfold) tracking transmissions. CNN-based processing of conventional reconstructions showed that the axial and lateral spatial resolutions could be improved even with an eightfold reduction in tracking transmissions. The framework presented in this study will significantly improve the performance of ultrasonic tracking, leading to faster image acquisition rates and increased localization accuracy.
70
Zhang X, Ma F, Zhang Y, Wang J, Liu C, Meng J. Sparse-sampling photoacoustic computed tomography: Deep learning vs. compressed sensing. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103233] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
71
Eldar YC, Li Y, Ye JC. Mathematical Foundations of AIM. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_333] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
72
Gallet A, Rigby S, Tallman TN, Kong X, Hajirasouliha I, Liew A, Liu D, Chen L, Hauptmann A, Smyl D. Structural engineering from an inverse problems perspective. Proc Math Phys Eng Sci 2022; 478:20210526. [PMID: 35153609 PMCID: PMC8791046 DOI: 10.1098/rspa.2021.0526] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Accepted: 12/07/2021] [Indexed: 01/16/2023] Open
Abstract
The field of structural engineering is vast, spanning areas from the design of new infrastructure to the assessment of existing infrastructure. From the outset, traditional entry-level university courses teach students to analyse structural responses given data including external forces, geometry, member sizes, restraint, etc., characterizing a forward problem (structural causalities → structural response). Shortly thereafter, junior engineers are introduced to structural design, where they aim to, for example, select an appropriate structural form for members based on design criteria, which is the inverse of what they previously learned. Similar inverse realizations also hold true in structural health monitoring and a number of structural engineering sub-fields (response → structural causalities). In this light, we aim to demonstrate that many structural engineering sub-fields may be fundamentally or partially viewed as inverse problems and thus benefit from the rich and established methodologies of the inverse problems community. To this end, we conclude that the future of inverse problems in structural engineering is inexorably linked to engineering education and machine learning developments.
Affiliation(s)
- A. Gallet
  - Department of Civil and Structural Engineering, University of Sheffield, Sheffield, UK
- S. Rigby
  - Department of Civil and Structural Engineering, University of Sheffield, Sheffield, UK
- T. N. Tallman
  - School of Aeronautics and Astronautics, Purdue University, West Lafayette, IN, USA
- X. Kong
  - Department of Physics and Engineering Science, Coastal Carolina University, Conway, SC, USA
- I. Hajirasouliha
  - Department of Civil and Structural Engineering, University of Sheffield, Sheffield, UK
- A. Liew
  - Department of Civil and Structural Engineering, University of Sheffield, Sheffield, UK
- D. Liu
  - School of Physical Sciences, University of Science and Technology of China, Hefei, People’s Republic of China
- L. Chen
  - Department of Civil and Structural Engineering, University of Sheffield, Sheffield, UK
- A. Hauptmann
  - Research Unit of Mathematical Sciences, University of Oulu, Oulu, Finland
  - Department of Computer Science, University College London, London, UK
- D. Smyl
  - Department of Civil, Coastal, and Environmental Engineering, University of South Alabama, Mobile, AL, USA
73
Herzberg W, Rowe DB, Hauptmann A, Hamilton SJ. Graph Convolutional Networks for Model-Based Learning in Nonlinear Inverse Problems. IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING 2021; 7:1341-1353. [PMID: 35873096 PMCID: PMC9307146 DOI: 10.1109/tci.2021.3132190] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The majority of model-based learned image reconstruction methods in medical imaging have been limited to uniform domains, such as pixelated images. If the underlying model is solved on nonuniform meshes, arising from a finite element method typical for nonlinear inverse problems, interpolation and embeddings are needed. To overcome this, we present a flexible framework to extend model-based learning directly to nonuniform meshes, by interpreting the mesh as a graph and formulating our network architectures using graph convolutional neural networks. This gives rise to the proposed iterative Graph Convolutional Newton-type Method (GCNM), which includes the forward model in the solution of the inverse problem, while all updates are directly computed by the network on the problem-specific mesh. We present results for Electrical Impedance Tomography (EIT), a severely ill-posed nonlinear inverse problem that is frequently solved via optimization-based methods, where the forward problem is solved by finite element methods. Results for absolute EIT imaging are compared to standard iterative methods as well as to a graph residual network. We show that the GCNM generalizes well to different domain shapes and meshes, to out-of-distribution data, and to experimental data, despite being trained purely on simulated data and without transfer training.
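The core operation of interpreting a mesh as a graph and aggregating features over neighbours can be sketched as follows. This is a simplified, weight-free layer for illustration, not the GCNM implementation:

```python
# Sketch of a graph-convolution aggregation when a finite element mesh
# is interpreted as a graph: each node's feature is replaced by the
# mean over itself and its mesh neighbours. A learned layer would add
# trainable weights; this weight-free version is illustrative only.

def graph_conv(features, edges):
    """One aggregation step: mean over each node and its neighbours."""
    n = len(features)
    neigh = {i: {i} for i in range(n)}        # include a self-loop
    for a, b in edges:                        # undirected mesh edges
        neigh[a].add(b)
        neigh[b].add(a)
    return [sum(features[j] for j in neigh[i]) / len(neigh[i])
            for i in range(n)]

# A tiny "mesh": 4 nodes connected in a path 0-1-2-3
edges = [(0, 1), (1, 2), (2, 3)]
x = [1.0, 0.0, 0.0, 0.0]
h = graph_conv(x, edges)
```

Because the aggregation is defined on the edge list rather than a pixel grid, the same layer applies unchanged to any mesh, which is what lets such networks operate directly on finite element domains.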
Affiliation(s)
- William Herzberg
  - Department of Mathematical and Statistical Sciences, Marquette University, Milwaukee, WI 53233, USA
- Daniel B Rowe
  - Department of Mathematical and Statistical Sciences, Marquette University, Milwaukee, WI 53233, USA
- Andreas Hauptmann
  - Research Unit of Mathematical Sciences, University of Oulu, Oulu, Finland
  - Department of Computer Science, University College London, London, United Kingdom
- Sarah J Hamilton
  - Department of Mathematical and Statistical Sciences, Marquette University, Milwaukee, WI 53233, USA
74
Lan H, Zhang J, Yang C, Gao F. Compressed sensing for photoacoustic computed tomography based on an untrained neural network with a shape prior. BIOMEDICAL OPTICS EXPRESS 2021; 12:7835-7848. [PMID: 35003870 PMCID: PMC8713655 DOI: 10.1364/boe.441901] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Revised: 10/12/2021] [Accepted: 10/12/2021] [Indexed: 05/20/2023]
Abstract
Photoacoustic (PA) computed tomography (PACT) shows great potential in various preclinical and clinical applications. A large number of measurements is a prerequisite for a high-quality image, which implies a low imaging rate or a high system cost. Artifacts or sidelobes can contaminate the image if the number of measured channels is decreased or the detection view is limited. In this paper, a novel compressed sensing method for PACT using an untrained neural network is proposed, which halves the number of measured channels while recovering sufficient detail. Based on the deep image prior, this method uses a neural network to reconstruct without requiring any additional training data. The model can reconstruct the image using only a few detections via gradient descent. As an unlearned strategy, our method can cooperate with other existing regularization and further improve the quality. In addition, we introduce a shape prior to help the model converge to the image. We verify the feasibility of untrained network-based compressed sensing in PA image reconstruction and compare this method with a conventional method using total variation minimization. The experimental results show that our proposed method outperforms the traditional compressed sensing method by 32.72% in SSIM under the same regularization. It can dramatically reduce the required number of transducers, by sparsely sampling the raw PA data, and significantly improve the quality of the PA image.
Affiliation(s)
- Hengrong Lan
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
  - Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai 200050, China
  - University of Chinese Academy of Sciences, Beijing 100049, China
- Juze Zhang
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Changchun Yang
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Fei Gao
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
75
Photoacoustic imaging aided with deep learning: a review. Biomed Eng Lett 2021; 12:155-173. [DOI: 10.1007/s13534-021-00210-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2021] [Revised: 10/19/2021] [Accepted: 11/07/2021] [Indexed: 12/21/2022] Open
76
Li Z, Lan H, Gao F. Learned Parameters and Increment for Iterative Photoacoustic Image Reconstruction via Deep Learning. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:2989-2992. [PMID: 34891873 DOI: 10.1109/embc46164.2021.9630545] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Photoacoustic (PA) tomography is a relatively new medical imaging technique that combines traditional ultrasound imaging and optical imaging, and it has shown great application prospects in recent years. To reveal the light absorption coefficient of biological tissues, images are reconstructed from PA signals by reconstruction algorithms. However, traditional model-based reconstruction methods require a large number of iterations to obtain relatively good experimental results, which is quite time-consuming. In this paper, we propose to use a deep learning method to replace brute-force parameter adjustment in model-based reconstruction and to speed up convergence by building convolutional neural networks (CNN). The parameters defined in our model can be learned automatically. Meanwhile, our method can optimize the increment of the gradient in each iteration step. The numerical experiment validates our method, showing that only three iterations are needed to obtain satisfactory image quality, which normally requires 10 iterations for the traditional method. This demonstrates that the efficiency of photoacoustic reconstruction can be greatly improved by our proposed method compared with traditional model-based methods.
77
Jeon S, Choi W, Park B, Kim C. A Deep Learning-Based Model That Reduces Speed of Sound Aberrations for Improved In Vivo Photoacoustic Imaging. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:8773-8784. [PMID: 34665732 DOI: 10.1109/tip.2021.3120053] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Photoacoustic imaging (PAI) has attracted great attention as a medical imaging method. Typically, photoacoustic (PA) images are reconstructed via beamforming, but many factors still hinder beamforming techniques from reconstructing optimal images in terms of image resolution, imaging depth, or processing speed. Here, we demonstrate a novel deep learning PAI method that uses multiple speed of sound (SoS) inputs. With this novel method, we achieved SoS aberration mitigation, streak artifact removal, and temporal resolution improvement all at once in structural and functional in vivo PA images of healthy human limbs and melanoma patients. The presented method produces high-contrast PA images in vivo with reduced distortion, even in adverse conditions where the medium is heterogeneous and/or the data sampling is sparse. Thus, we believe that this new method can achieve high image quality with fast data acquisition and can contribute to the advance of clinical PAI.
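Why the assumed SoS matters can be seen from the time-of-flight relation underlying beamforming: delay = distance / c, so an incorrect c misplaces the reconstructed source. A toy calculation (illustrative geometry and sound speeds):

```python
# Sketch of SoS-induced aberration in PA beamforming: the delay used
# to back-project a signal is distance / c, so a wrong assumed c
# shifts the apparent source depth. Values are illustrative only.
import math

def time_of_flight(src, sensor, c):
    """Propagation delay in seconds from source to sensor at sound speed c (m/s)."""
    return math.dist(src, sensor) / c

src = (0.0, 0.02)          # toy source 20 mm above the sensor, in metres
sensor = (0.0, 0.0)
t_true = time_of_flight(src, sensor, c=1540.0)   # a typical soft-tissue SoS
t_wrong = time_of_flight(src, sensor, c=1480.0)  # e.g. the SoS of water

# apparent depth error if the true delay is mapped with the wrong SoS
depth_error = t_true * 1480.0 - 0.02             # about -0.8 mm here
```

A network given reconstructions at multiple candidate SoS values, as in this work, can in effect learn which delay mapping places structures consistently.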
|
78
|
Rajendran P, Pramanik M. Deep-learning-based multi-transducer photoacoustic tomography imaging without radius calibration. OPTICS LETTERS 2021; 46:4510-4513. [PMID: 34525034 DOI: 10.1364/ol.434513] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Pulsed laser diodes are used in photoacoustic tomography (PAT) as excitation sources because of their low cost, compact size, and high pulse repetition rate. In combination with multiple single-element ultrasound transducers (SUTs), the imaging speed of PAT can be improved. However, during PAT image reconstruction, the exact radius of each SUT is required for accurate reconstruction. Here, we developed a novel deep learning approach to alleviate the need for radius calibration. We used a convolutional neural network (fully dense U-Net) aided with a convolutional long short-term memory block to reconstruct the PAT images. Our analysis on the test set demonstrates that the proposed network eliminates the need for radius calibration and improves the peak signal-to-noise ratio by ∼73% without compromising the image quality. In vivo imaging was used to verify the performance of the network.
|
79
|
Hsu KT, Guan S, Chitnis PV. Comparing Deep Learning Frameworks for Photoacoustic Tomography Image Reconstruction. PHOTOACOUSTICS 2021; 23:100271. [PMID: 34094851 PMCID: PMC8165448 DOI: 10.1016/j.pacs.2021.100271] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/24/2020] [Revised: 04/08/2021] [Accepted: 05/11/2021] [Indexed: 05/02/2023]
Abstract
Conventional reconstruction methods for photoacoustic images are not well suited to scenarios with sparse sensing and limited detection geometries. To overcome these challenges and enhance the quality of reconstruction, several learning-based methods have recently been introduced for photoacoustic tomography reconstruction. The goal of this study is to compare and systematically evaluate the recently proposed learning-based methods and modified networks for photoacoustic image reconstruction. Specifically, learning-based post-processing methods and model-based learned iterative reconstruction methods are investigated. In addition to comparing the inherent differences between the models, we also study the impact of different inputs on reconstruction quality. Our results demonstrate that reconstruction performance mainly stems from the effective amount of information carried by the input. Among the learning-based post-processing methods, the models' inherent differences do not lead to significant differences in photoacoustic image reconstruction. Furthermore, the results indicate that the model-based learned iterative reconstruction method outperforms all learning-based post-processing methods in terms of generalizability and robustness.
|
80
|
Zhang C, Li Y, Chen GH. Accurate and robust sparse-view angle CT image reconstruction using deep learning and prior image constrained compressed sensing (DL-PICCS). Med Phys 2021; 48:5765-5781. [PMID: 34458996 DOI: 10.1002/mp.15183] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Revised: 07/09/2021] [Accepted: 08/02/2021] [Indexed: 12/27/2022] Open
Abstract
BACKGROUND Sparse-view CT image reconstruction problems encountered in dynamic CT acquisitions are technically challenging. Recently, many deep learning strategies have been proposed to reconstruct CT images from sparse-view angle acquisitions, with promising results. However, two fundamental problems with these deep learning reconstruction methods remain to be addressed: (1) limited reconstruction accuracy for individual patients and (2) limited generalizability across patient cohorts. PURPOSE The purpose of this work is to address these challenges in current deep learning methods. METHODS A method that combines a deep learning strategy with prior image constrained compressed sensing (PICCS) was developed to address these two problems. In this method, the sparse-view CT data are first reconstructed by the conventional filtered backprojection (FBP) method and then processed by the trained deep neural network to eliminate streaking artifacts. The output of the deep learning architecture is then used as the prior image in PICCS to reconstruct the final image. If the noise level of the PICCS reconstruction is not satisfactory, another light-duty deep neural network can be used to reduce it. Both extensive numerical simulation data and human subject data were used to quantitatively and qualitatively assess the performance of the proposed DL-PICCS method in terms of reconstruction accuracy and generalizability.
RESULTS Extensive evaluation studies demonstrated that: (1) the quantitative reconstruction accuracy of DL-PICCS for individual patients is improved compared with deep learning methods and CS-based methods; (2) the false-positive lesion-like structures and false-negative missing anatomical structures seen in deep learning approaches are effectively eliminated in DL-PICCS reconstructions; and (3) DL-PICCS enables a deep learning scheme to relax its working conditions and thereby enhance its generalizability. CONCLUSIONS DL-PICCS offers a promising opportunity to achieve personalized reconstruction with improved reconstruction accuracy and enhanced generalizability.
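The role the deep-learning output plays here is that of the prior image in the standard PICCS objective (Chen et al.). In its generic form, with the FBP-plus-CNN result as the prior image $x_p$, the reconstruction solves

```latex
\min_{x}\;\; \alpha \,\bigl\lVert \Psi_1 (x - x_p) \bigr\rVert_1
           + (1-\alpha)\,\bigl\lVert \Psi_2\, x \bigr\rVert_1
\quad \text{subject to} \quad A x = y,
```

where $A$ is the sparse-view system matrix, $y$ the measured projections, $\Psi_1, \Psi_2$ sparsifying transforms (e.g., image gradients), and $\alpha$ balances fidelity to the prior image against conventional compressed-sensing regularization. This is the generic PICCS form, not necessarily the exact formulation used in this paper.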
Affiliation(s)
- Chengzhu Zhang
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Yinsheng Li
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Guang-Hong Chen
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
|
81
|
Tian L, Hunt B, Bell MAL, Yi J, Smith JT, Ochoa M, Intes X, Durr NJ. Deep Learning in Biomedical Optics. Lasers Surg Med 2021; 53:748-775. [PMID: 34015146 PMCID: PMC8273152 DOI: 10.1002/lsm.23414] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 04/02/2021] [Accepted: 04/15/2021] [Indexed: 01/02/2023]
Abstract
This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized.
Affiliation(s)
- L. Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA
- B. Hunt
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- M. A. L. Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- J. Yi
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Ophthalmology, Johns Hopkins University, Baltimore, MD, USA
- J. T. Smith
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- M. Ochoa
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- X. Intes
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- N. J. Durr
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
|
82
|
Sun Z, Wang X, Yan X. An iterative gradient convolutional neural network and its application in endoscopic photoacoustic image formation from incomplete acoustic measurement. Neural Comput Appl 2021. [DOI: 10.1007/s00521-020-05607-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
83
|
Na S, Wang LV. Photoacoustic computed tomography for functional human brain imaging [Invited]. BIOMEDICAL OPTICS EXPRESS 2021; 12:4056-4083. [PMID: 34457399 PMCID: PMC8367226 DOI: 10.1364/boe.423707] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Revised: 06/05/2021] [Accepted: 06/08/2021] [Indexed: 05/02/2023]
Abstract
The successes of magnetic resonance imaging and modern optical imaging of human brain function have stimulated the development of complementary modalities that simultaneously offer molecular specificity, fine spatiotemporal resolution, and sufficient penetration. By virtue of its rich optical contrast, acoustic resolution, and imaging depth far beyond the optical transport mean free path (∼1 mm in biological tissues), photoacoustic computed tomography (PACT) offers a promising complementary modality. In this article, PACT for functional human brain imaging is reviewed in terms of hardware, reconstruction algorithms, in vivo demonstrations, and a potential roadmap.
Affiliation(s)
- Shuai Na
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, USA
- Lihong V. Wang
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, USA
- Caltech Optical Imaging Laboratory, Department of Electrical Engineering, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA 91125, USA
|
84
|
Pantazis D, Adler A. MEG Source Localization via Deep Learning. SENSORS (BASEL, SWITZERLAND) 2021; 21:4278. [PMID: 34206620 PMCID: PMC8271934 DOI: 10.3390/s21134278] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/29/2021] [Revised: 06/14/2021] [Accepted: 06/17/2021] [Indexed: 12/22/2022]
Abstract
We present a deep learning solution to the problem of localization of magnetoencephalography (MEG) brain signals. The proposed deep model architectures are tuned to single and multiple time point MEG data, and can estimate varying numbers of dipole sources. Results from simulated MEG data on the cortical surface of a real human subject demonstrated improvements against the popular RAP-MUSIC localization algorithm in specific scenarios with varying SNR levels, inter-source correlation values, and number of sources. Importantly, the deep learning models had robust performance to forward model errors resulting from head translation and rotation and a significant reduction in computation time, to a fraction of 1 ms, paving the way to real-time MEG source localization.
Affiliation(s)
- Dimitrios Pantazis
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Amir Adler
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Electrical Engineering Department, Braude College of Engineering, Karmiel 2161002, Israel
|
85
|
Abstract
Photoacoustic tomography (PAT), which integrates the molecular contrast of optical imaging with the high spatial resolution of ultrasound imaging in deep tissue, has widespread applications in basic biological science, preclinical research, and clinical trials. Recently, tremendous progress has been made in PAT regarding technical innovations, preclinical applications, and clinical translation. Here, we selectively review recent progress and advances in PAT, including the development of advanced PAT systems for small-animal and human imaging, newly engineered optical probes for molecular imaging, broad-spectrum PAT for label-free imaging of biological tissues, high-throughput snapshot photoacoustic topography, and the integration of machine learning for image reconstruction and processing. We envision that PAT will see further technical developments and more impactful applications in biomedicine.
Affiliation(s)
- Lei Li
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 East California Boulevard, Mail Code 138-78, Pasadena, CA 91125, USA
- Lihong V. Wang
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 East California Boulevard, Mail Code 138-78, Pasadena, CA 91125, USA
|
86
|
Vu T, DiSpirito A, Li D, Wang Z, Zhu X, Chen M, Jiang L, Zhang D, Luo J, Zhang YS, Zhou Q, Horstmeyer R, Yao J. Deep image prior for undersampling high-speed photoacoustic microscopy. PHOTOACOUSTICS 2021; 22:100266. [PMID: 33898247 PMCID: PMC8056431 DOI: 10.1016/j.pacs.2021.100266] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Revised: 03/15/2021] [Accepted: 03/23/2021] [Indexed: 05/02/2023]
Abstract
Photoacoustic microscopy (PAM) is an emerging imaging method combining light and sound. However, limited by the laser's repetition rate, state-of-the-art high-speed PAM technology often sacrifices spatial sampling density (i.e., undersampling) for increased imaging speed over a large field-of-view. Deep learning (DL) methods have recently been used to improve sparsely sampled PAM images; however, these methods often require time-consuming pre-training and a large training dataset with ground truth. Here, we propose the use of deep image prior (DIP) to improve the image quality of undersampled PAM images. Unlike other DL approaches, DIP requires neither pre-training nor fully sampled ground truth, enabling its flexible and fast implementation on various imaging targets. Our results demonstrate substantial improvement in PAM images with as few as 1.4% of the fully sampled pixels on high-speed PAM. Our approach outperforms interpolation, is competitive with a pre-trained supervised DL method, and is readily translated to other high-speed, undersampled imaging modalities.
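The deep-image-prior scheme can be sketched as follows: a randomly initialized generator is fit ONLY to the sampled pixels, and its full output is read out as the restored image, with no pre-training and no fully sampled ground truth. A one-hidden-layer network and a synthetic smooth "image" stand in for the paper's CNN and PAM data; all sizes and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 16
x_full = np.outer(np.hanning(H), np.hanning(W))   # hypothetical smooth PAM image
mask = rng.random((H, W)) < 0.3                   # ~30% of pixels sampled

z = rng.standard_normal(32)                       # fixed random input code
W1 = 0.1 * rng.standard_normal((64, 32))          # generator weights (theta)
W2 = 0.1 * rng.standard_normal((H * W, 64))

def forward(W1, W2):
    h = np.tanh(W1 @ z)
    return (W2 @ h).reshape(H, W), h

lr = 0.02
for _ in range(500):                              # gradient descent on theta
    out, h = forward(W1, W2)
    r = (out - x_full) * mask                     # loss evaluated ONLY on samples
    gW2 = np.outer(r.ravel(), h)                  # backprop through the two layers
    ga = (W2.T @ r.ravel()) * (1.0 - h ** 2)
    gW1 = np.outer(ga, z)
    W2 -= lr * gW2
    W1 -= lr * gW1

restored, _ = forward(W1, W2)                     # full-image readout
```

The restoration quality of real DIP comes from the convolutional architecture's implicit smoothness prior; this toy dense network only illustrates the optimization scheme itself.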
Affiliation(s)
- Tri Vu
- Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Daiwei Li
- Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Zixuan Wang
- Division of Engineering in Medicine, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Cambridge, MA, 02139, USA
- Xiaoyi Zhu
- Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Maomao Chen
- Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
- Laiming Jiang
- Department of Biomedical Engineering and USC Roski Eye Institute, University of Southern California, Los Angeles, CA, 90089, USA
- Dong Zhang
- Department of Biomedical Engineering, Tsinghua University, Beijing, 100084, China
- Jianwen Luo
- Department of Biomedical Engineering, Tsinghua University, Beijing, 100084, China
- Yu Shrike Zhang
- Division of Engineering in Medicine, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Cambridge, MA, 02139, USA
- Qifa Zhou
- Department of Biomedical Engineering and USC Roski Eye Institute, University of Southern California, Los Angeles, CA, 90089, USA
- Junjie Yao
- Photoacoustic Imaging Lab, Duke University, Durham, NC, 27708, USA
|
87
|
Lan H, Jiang D, Gao F, Gao F. Deep learning enabled real-time photoacoustic tomography system via single data acquisition channel. PHOTOACOUSTICS 2021; 22:100270. [PMID: 34026492 PMCID: PMC8122165 DOI: 10.1016/j.pacs.2021.100270] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/05/2021] [Revised: 04/26/2021] [Accepted: 04/27/2021] [Indexed: 05/02/2023]
Abstract
Photoacoustic computed tomography (PACT) combines the optical contrast of optical imaging with the penetrability of sonography. In this work, we develop a novel PACT system that provides real-time imaging with a 120-element ultrasound array using only a single data acquisition (DAQ) channel. To reduce the number of DAQ channels, we superimpose the signals of 30 nearby channels in the analog domain, shrinking the data to four channels (120/30 = 4). Furthermore, a four-to-one delay-line module is designed to combine these four channels into one before entering the single-channel DAQ, and the signals are decoupled after data acquisition. To reconstruct the image from the four superimposed 30-channel PA signals, we train a dedicated deep learning model to reconstruct the final PA image. In this paper, we present preliminary results of phantom and in vivo experiments, which demonstrate robust real-time imaging performance. The significance of this novel PACT system is that it dramatically reduces the cost of the multi-channel DAQ module (from 120 channels to 1), paving the way to a portable, low-cost, real-time PACT system.
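The channel-reduction chain above (120 channels, summed in groups of 30, time-multiplexed into one stream, then decoupled) can be sketched numerically. The delay values and signal sizes below are illustrative, not the paper's hardware parameters: here each group is delayed by a full window so the slots never overlap, whereas the real analog delay lines use different lags.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_samp, group = 120, 256, 30
raw = rng.standard_normal((n_ch, n_samp))   # hypothetical raw PA channel data

# Analog-domain superposition: sum each block of 30 adjacent channels,
# shrinking 120 channels to 4 (120 / 30 = 4).
sums = raw.reshape(4, group, n_samp).sum(axis=1)

# Four-to-one delay line: offset each group signal by a known lag and add
# everything into a single DAQ stream.
stream = np.zeros(4 * n_samp)
for k in range(4):
    stream[k * n_samp:(k + 1) * n_samp] += sums[k]

# Decoupling after acquisition: slice the known delay windows back out.
decoded = stream.reshape(4, n_samp)
```

Recovering the 120 individual channels from the four superimposed sums is the ill-posed part, which is where the paper's trained reconstruction network comes in.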
Affiliation(s)
- Hengrong Lan
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai, 200050, China
- Daohuai Jiang
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai, 200050, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Feng Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Fei Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
|
88
|
DiSpirito A, Vu T, Pramanik M, Yao J. Sounding out the hidden data: A concise review of deep learning in photoacoustic imaging. Exp Biol Med (Maywood) 2021; 246:1355-1367. [PMID: 33779342 PMCID: PMC8243210 DOI: 10.1177/15353702211000310] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
The rapidly evolving field of photoacoustic tomography utilizes endogenous chromophores to extract both functional and structural information from deep within tissues. It is this power to perform precise quantitative measurements in vivo, with endogenous or exogenous contrast, that makes photoacoustic tomography highly promising for clinical translation in functional brain imaging, early cancer detection, real-time surgical guidance, and the visualization of dynamic drug responses. Considering photoacoustic tomography has benefited from numerous engineering innovations, it is no surprise that many of photoacoustic tomography's current cutting-edge developments incorporate advances from the equally novel field of artificial intelligence. More specifically, alongside the recent growth of graphics processing unit capabilities has emerged an offshoot of artificial intelligence known as deep learning. Rooted in the solid foundation of signal processing, deep learning typically utilizes a method of optimization known as gradient descent to minimize a loss function and update model parameters. There are already a number of innovative efforts in photoacoustic tomography utilizing deep learning techniques for a variety of purposes, including resolution enhancement, reconstruction artifact removal, undersampling correction, and improved quantification. Most of these efforts have proven highly promising in addressing long-standing technical obstacles where traditional solutions either completely fail or make only incremental progress. This concise review focuses on the history of applied artificial intelligence in photoacoustic tomography, presents recent advances at this multifaceted intersection of fields, and outlines the most exciting advances that will likely propagate into promising future innovations.
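The gradient-descent update mentioned in this abstract, stripped to its essentials: parameters are repeatedly moved against the gradient of a loss. A toy quadratic loss stands in for a network's loss here; in deep models the gradient comes from backpropagation rather than a closed form.

```python
import numpy as np

def loss(w):
    return float(np.sum((w - 3.0) ** 2))   # toy quadratic loss, minimum at w = 3

def grad(w):
    return 2.0 * (w - 3.0)                 # its analytic gradient

w = np.zeros(2)      # model parameters
lr = 0.1             # learning rate
for _ in range(100):
    w = w - lr * grad(w)                   # w_{t+1} = w_t - lr * dL/dw
```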
Affiliation(s)
- Anthony DiSpirito
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Tri Vu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore 637459, Singapore
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
|
89
|
Yao J, Wang LV. Perspective on fast-evolving photoacoustic tomography. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-210105-PERR. [PMID: 34196136 PMCID: PMC8244998 DOI: 10.1117/1.jbo.26.6.060602] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Accepted: 06/17/2021] [Indexed: 05/19/2023]
Abstract
SIGNIFICANCE Acoustically detecting the rich optical absorption contrast in biological tissues, photoacoustic tomography (PAT) seamlessly bridges the functional and molecular sensitivity of optical excitation with the deep penetration and high scalability of ultrasound detection. As a result of continuous technological innovations and commercial development, PAT has been playing an increasingly important role in life sciences and patient care, including functional brain imaging, smart drug delivery, early cancer diagnosis, and interventional therapy guidance. AIM Built on our 2016 tutorial article that focused on the principles and implementations of PAT, this perspective aims to provide an update on the exciting technical advances in PAT. APPROACH This perspective focuses on the recent PAT innovations in volumetric deep-tissue imaging, high-speed wide-field microscopic imaging, high-sensitivity optical ultrasound detection, and machine-learning enhanced image reconstruction and data processing. Representative applications are introduced to demonstrate these enabling technical breakthroughs in biomedical research. CONCLUSIONS We conclude the perspective by discussing the future development of PAT technologies.
Affiliation(s)
- Junjie Yao
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Lihong V. Wang
- California Institute of Technology, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, Pasadena, California, United States
|
90
|
Chen Q, Qin W, Qi W, Xi L. Progress of clinical translation of handheld and semi-handheld photoacoustic imaging. PHOTOACOUSTICS 2021; 22:100264. [PMID: 33868921 PMCID: PMC8040335 DOI: 10.1016/j.pacs.2021.100264] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Revised: 03/10/2021] [Accepted: 03/10/2021] [Indexed: 05/05/2023]
Abstract
Photoacoustic imaging (PAI), featuring rich contrast, high spatial/temporal resolution, and deep penetration, is one of the fastest-growing biomedical imaging technologies of the last decade. To date, a number of handheld and semi-handheld photoacoustic imaging devices have been reported, along with corresponding potential clinical applications. Here, we summarize emerging handheld and semi-handheld systems in terms of photoacoustic computed tomography (PACT), optoacoustic mesoscopy (OAMes), and photoacoustic microscopy (PAM). We discuss each modality in three aspects: laser delivery, scanning protocol, and acoustic detection. Besides new technical developments, we also review the associated clinical studies and the advantages/disadvantages of these new techniques. Finally, we discuss the challenges and future perspectives of miniaturized PAI.
Affiliation(s)
- Qian Chen
- School of Electronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, Guangdong, 518055, China
- Wei Qin
- School of Physics, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, Guangdong, 518055, China
- Weizhi Qi
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, Guangdong, 518055, China
- Lei Xi
- Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, Guangdong, 518055, China
|
91
|
Gröhl J, Schellenberg M, Dreher K, Maier-Hein L. Deep learning for biomedical photoacoustic imaging: A review. PHOTOACOUSTICS 2021; 22:100241. [PMID: 33717977 PMCID: PMC7932894 DOI: 10.1016/j.pacs.2021.100241] [Citation(s) in RCA: 101] [Impact Index Per Article: 25.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/05/2020] [Revised: 01/18/2021] [Accepted: 01/20/2021] [Indexed: 05/04/2023]
Abstract
Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extraction of relevant tissue parameters from the raw data requires the solving of inverse image reconstruction problems, which have proven extremely difficult to solve. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in the context of medical imaging and also finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and the fact that they can be adapted to any given problem. In this review, we examine the current state of the art regarding deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.
Affiliation(s)
- Janek Gröhl
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Melanie Schellenberg
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Kris Dreher
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Lena Maier-Hein
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
|
92
|
Yin L, Cao Z, Wang K, Tian J, Yang X, Zhang J. A review of the application of machine learning in molecular imaging. ANNALS OF TRANSLATIONAL MEDICINE 2021; 9:825. [PMID: 34268438 PMCID: PMC8246214 DOI: 10.21037/atm-20-5877] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Accepted: 10/02/2020] [Indexed: 12/12/2022]
Abstract
Molecular imaging (MI) uses imaging methods to reveal molecular-level changes in living subjects and to study their biological behaviors qualitatively and quantitatively. Optical molecular imaging (OMI) and nuclear medicine imaging are two key research fields of MI. OMI detects the optical information generated by an imaging target (such as a tumor) in response to drug intervention or other causes; by collecting this optical information, researchers can track the motion trajectory of the imaging target at the molecular level. Owing to its high specificity and sensitivity, OMI has been widely used in preclinical research and clinical surgery. Nuclear medicine imaging mainly detects ionizing radiation emitted by radioactive substances and can provide molecular information for the early diagnosis, effective treatment, and basic research of diseases, making it one of the frontiers of modern medicine. Both OMI and nuclear medicine imaging require extensive data processing and analysis. In recent years, artificial intelligence, especially neural-network-based machine learning (ML), has been widely used in MI because of its powerful data processing capability, providing a feasible strategy for handling the large and complex datasets that MI produces. In this review, we focus on the applications of ML methods in OMI and nuclear medicine imaging.
Collapse
Affiliation(s)
- Lin Yin
- Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Zhen Cao
- Peking University First Hospital, Beijing, China
- Kun Wang
- Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Jie Tian
- Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, China
- Xing Yang
- Peking University First Hospital, Beijing, China
93
Deng H, Qiao H, Dai Q, Ma C. Deep learning in photoacoustic imaging: a review. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-200374VRR. [PMID: 33837678 PMCID: PMC8033250 DOI: 10.1117/1.jbo.26.4.040901] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Accepted: 03/18/2021] [Indexed: 05/18/2023]
Abstract
SIGNIFICANCE Photoacoustic (PA) imaging can provide structural, functional, and molecular information for preclinical and clinical studies. For PA imaging (PAI), non-ideal signal detection deteriorates image quality, and quantitative PAI (QPAI) remains challenging due to the unknown light fluence spectra in deep tissue. In recent years, deep learning (DL) has shown outstanding performance when implemented in PAI, with applications in image reconstruction, quantification, and understanding. AIM We provide (i) a comprehensive overview of the DL techniques that have been applied in PAI, (ii) references for designing DL models for various PAI tasks, and (iii) a summary of the future challenges and opportunities. APPROACH Papers published before November 2020 in the area of applying DL in PAI were reviewed. We categorized them into three types: image understanding, reconstruction of the initial pressure distribution, and QPAI. RESULTS When applied in PAI, DL can effectively process images, improve reconstruction quality, fuse information, and assist quantitative analysis. CONCLUSION DL has become a powerful tool in PAI. With the development of DL theory and technology, it will continue to boost the performance and facilitate the clinical translation of PAI.
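The fluence problem the authors describe can be made concrete with the standard photoacoustic initial-pressure relation (a textbook identity, not taken from this review; symbols follow common photoacoustic notation):

```latex
p_0(\mathbf{r}) = \Gamma(\mathbf{r})\,\mu_a(\mathbf{r})\,F(\mathbf{r})
```

Here \(p_0\) is the initial pressure, \(\Gamma\) the Grüneisen parameter, \(\mu_a\) the optical absorption coefficient, and \(F\) the local light fluence. Recovering \(\mu_a\) quantitatively requires knowing \(F\), which is generally unknown in deep tissue; this is the core difficulty that QPAI methods, including the DL approaches surveyed here, aim to address.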
Affiliation(s)
- Handi Deng
- Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Hui Qiao
- Tsinghua University, Department of Automation, Haidian, Beijing, China
- Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
- Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
- Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Qionghai Dai
- Tsinghua University, Department of Automation, Haidian, Beijing, China
- Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
- Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
- Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Cheng Ma
- Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Beijing Innovation Center for Future Chip, Beijing, China
94
Tordera Mora J, Feng X, Nyayapathi N, Xia J, Gao L. Generalized spatial coherence reconstruction for photoacoustic computed tomography. JOURNAL OF BIOMEDICAL OPTICS 2021; 26:JBO-210008R. [PMID: 33880892 PMCID: PMC8056071 DOI: 10.1117/1.jbo.26.4.046002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/14/2021] [Accepted: 03/11/2021] [Indexed: 06/12/2023]
Abstract
SIGNIFICANCE Coherence, a fundamental property of waves and fields, plays a key role in photoacoustic image reconstruction. Previously, techniques such as short-lag spatial coherence (SLSC) and filtered delay-multiply-and-sum (FDMAS) have utilized spatial coherence to improve the reconstructed resolution and contrast relative to delay-and-sum (DAS). While SLSC uses spatial coherence directly as the imaging contrast, FDMAS employs spatial coherence implicitly. Despite being more robust against noise, both techniques have their own drawbacks: SLSC does not preserve relative signal magnitude, and FDMAS shows a reduced contrast-to-noise ratio. AIM To overcome these limitations, our aim is to develop a beamforming algorithm, generalized spatial coherence (GSC), that unifies SLSC and FDMAS into a single equation and outperforms both beamformers. APPROACH We demonstrated the application of GSC in photoacoustic computed tomography (PACT) through simulation and experiments and compared it to previous beamformers: DAS, FDMAS, and SLSC. RESULTS GSC outperforms the imaging metrics of previous state-of-the-art coherence-based beamformers in both simulation and experiments. CONCLUSIONS GSC is an innovative reconstruction algorithm for PACT that combines the strengths of FDMAS and SLSC, expanding PACT's applications.
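For readers unfamiliar with the baseline beamformers being compared, the following minimal sketch (ours, not from the paper; it omits the filtering step of FDMAS, assumes integer sample delays, and evaluates a single pixel) contrasts plain DAS summation with the pairwise coherence weighting at the heart of DMAS-style methods:

```python
import numpy as np

def das_beamform(channel_data, delays):
    """Delay-and-sum (DAS): align each channel by its time-of-flight
    delay (in samples) and sum coherently across the aperture."""
    aligned = np.array([np.roll(ch, -int(d)) for ch, d in zip(channel_data, delays)])
    return aligned.sum(axis=0)[0]  # beamformed value at the aligned sample

def dmas_beamform(channel_data, delays):
    """Delay-multiply-and-sum (DMAS): sum the signed square roots of all
    distinct channel-pair products, which implicitly rewards signals that
    are spatially coherent across the aperture."""
    aligned = [np.roll(ch, -int(d))[0] for ch, d in zip(channel_data, delays)]
    total = 0.0
    for i in range(len(aligned)):
        for j in range(i + 1, len(aligned)):
            prod = aligned[i] * aligned[j]
            total += np.sign(prod) * np.sqrt(abs(prod))
    return total

# Perfectly coherent toy data: four channels, identical aligned amplitude 2.0
data = np.full((4, 8), 2.0)
delays = [0, 0, 0, 0]
das_value = das_beamform(data, delays)    # 4 channels * 2.0 = 8.0
dmas_value = dmas_beamform(data, delays)  # 6 pairs * sqrt(4.0) = 12.0
```

For incoherent (random-sign) channels, the pair products change sign and largely cancel, which is why DMAS-family methods suppress clutter relative to DAS.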
Affiliation(s)
- Jorge Tordera Mora
- University of California Los Angeles, Samueli School of Engineering, Department of Bioengineering, California, United States
- Xiaohua Feng
- University of California Los Angeles, Samueli School of Engineering, Department of Bioengineering, California, United States
- Nikhila Nyayapathi
- University at Buffalo, School of Engineering and Applied Sciences, Department of Biomedical Engineering, Buffalo, New York, United States
- Jun Xia
- University at Buffalo, School of Engineering and Applied Sciences, Department of Biomedical Engineering, Buffalo, New York, United States
- Liang Gao
- University of California Los Angeles, Samueli School of Engineering, Department of Bioengineering, California, United States
95
Wiacek A, Lediju Bell MA. Photoacoustic-guided surgery from head to toe [Invited]. BIOMEDICAL OPTICS EXPRESS 2021; 12:2079-2117. [PMID: 33996218 PMCID: PMC8086464 DOI: 10.1364/boe.417984] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Revised: 02/17/2021] [Accepted: 02/18/2021] [Indexed: 05/04/2023]
Abstract
Photoacoustic imaging, the combination of optics and acoustics to visualize differences in optical absorption, has recently demonstrated strong viability as a promising method to provide critical guidance for multiple surgeries and procedures. Benefits include its potential to assist with tumor resection, identify hemorrhaged and ablated tissue, visualize metal implants (e.g., needle tips, tool tips, brachytherapy seeds), track catheter tips, and avoid accidental injury to critical subsurface anatomy (e.g., major vessels and nerves hidden by tissue during surgery). These benefits are significant because they reduce surgical error, associated surgery-related complications (e.g., cancer recurrence, paralysis, excessive bleeding), and accidental patient death in the operating room. This invited review covers multiple aspects of the use of photoacoustic imaging to guide both surgical and related non-surgical interventions. Applicable organ systems span structures within the head to contents of the toes, with an eye toward surgical and interventional translation for the benefit of patients and for use in operating rooms and interventional suites worldwide. We additionally include a critical discussion of the complete systems and tools needed to maximize the success of surgical and interventional applications of photoacoustic-based technology, spanning light delivery, acoustic detection, and robotic methods. Multiple enabling hardware and software integration components are also discussed, concluding with a summary and future outlook based on the current state of technological developments, recent achievements, and possible new directions.
Affiliation(s)
- Alycen Wiacek
- Department of Electrical and Computer Engineering, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218, USA
- Muyinatu A. Lediju Bell
- Department of Electrical and Computer Engineering, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Biomedical Engineering, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218, USA
- Department of Computer Science, 3400 N. Charles St., Johns Hopkins University, Baltimore, MD 21218, USA
96
Yang C, Lan H, Gao F, Gao F. Review of deep learning for photoacoustic imaging. PHOTOACOUSTICS 2021; 21:100215. [PMID: 33425679 PMCID: PMC7779783 DOI: 10.1016/j.pacs.2020.100215] [Citation(s) in RCA: 79] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/15/2020] [Revised: 10/11/2020] [Accepted: 10/11/2020] [Indexed: 05/02/2023]
Abstract
Machine learning has developed dramatically and found applications in many fields over the past few years. This boom originated in 2009, when a new model, the deep artificial neural network, began to surpass other established, mature models on some important benchmarks; it was subsequently adopted widely in academia and industry. From image analysis to natural language processing, deep neural networks now define the state of the art in machine learning. They hold great potential for medical imaging technology, medical data analysis, medical diagnosis, and other healthcare problems, and are being promoted in both preclinical and clinical settings. In this review, we give an overview of recent developments and challenges in applying machine learning to medical image analysis, with a special focus on deep learning in photoacoustic imaging. The aims of this review are threefold: (i) to introduce deep learning and its essential basics, (ii) to review recent work applying deep learning across the entire ecological chain of photoacoustic imaging, from image reconstruction to disease diagnosis, and (iii) to provide open-source materials and other resources for researchers interested in applying deep learning to photoacoustic imaging.
Affiliation(s)
- Changchun Yang
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai, 200050, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Hengrong Lan
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai, 200050, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Feng Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Fei Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
97
Godefroy G, Arnal B, Bossy E. Compensating for visibility artefacts in photoacoustic imaging with a deep learning approach providing prediction uncertainties. PHOTOACOUSTICS 2021; 21:100218. [PMID: 33364161 PMCID: PMC7750172 DOI: 10.1016/j.pacs.2020.100218] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Revised: 10/15/2020] [Accepted: 10/17/2020] [Indexed: 05/04/2023]
Abstract
Conventional photoacoustic imaging may suffer from the limited view and bandwidth of ultrasound transducers. A deep learning approach is proposed to handle these problems and is demonstrated both in simulations and in experiments on a multi-scale model of a leaf skeleton. We employed an experimental approach to build the training and test sets, using photographs of the samples as ground-truth images. Reconstructions produced by the neural network show greatly improved image quality compared to conventional approaches. In addition, this work aimed at quantifying the reliability of the neural network's predictions. To achieve this, the Monte-Carlo dropout procedure is applied to estimate a pixel-wise degree of confidence for each predicted picture. Last, we address the possibility of using transfer learning with simulated data in order to drastically limit the size of the experimental dataset.
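The Monte-Carlo dropout idea can be sketched in a few lines (our illustration, not the authors' implementation; `toy_model` is a hypothetical stand-in for a dropout-equipped network):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(model, x, n_samples=20):
    """Monte-Carlo dropout: keep dropout active at inference time, run the
    model n_samples times, and return the per-pixel mean as the prediction
    and the per-pixel standard deviation as an uncertainty map
    (larger std = lower confidence in that pixel)."""
    preds = np.stack([model(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

def toy_model(x, p_drop=0.5):
    # Hypothetical stand-in for a network layer with dropout: randomly
    # zero activations and rescale by 1/(1-p), as in inverted dropout.
    mask = rng.random(x.shape) > p_drop
    return (x * mask) / (1.0 - p_drop)

image = np.ones((8, 8))
mean_pred, uncertainty = mc_dropout_predict(toy_model, image)
```

In a real network the stochasticity comes from the trained dropout layers rather than a random mask on the input, but the averaging and per-pixel spread computation are the same.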
Affiliation(s)
- Bastien Arnal
- Univ. Grenoble Alpes, CNRS, LIPhy, 38000 Grenoble, France
- Emmanuel Bossy
- Univ. Grenoble Alpes, CNRS, LIPhy, 38000 Grenoble, France
98
Tian C, Zhang C, Zhang H, Xie D, Jin Y. Spatial resolution in photoacoustic computed tomography. REPORTS ON PROGRESS IN PHYSICS. PHYSICAL SOCIETY (GREAT BRITAIN) 2021; 84:036701. [PMID: 33434890 DOI: 10.1088/1361-6633/abdab9] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Accepted: 01/12/2021] [Indexed: 06/12/2023]
Abstract
Photoacoustic computed tomography (PACT) is a novel biomedical imaging modality that has experienced rapid development over the past two decades. Spatial resolution is an important criterion for measuring the imaging performance of a PACT system. Here we survey state-of-the-art literature on the spatial resolution of PACT and analyze resolution degradation models across signal generation, propagation, reception, and image reconstruction. In particular, the impacts of laser pulse duration, acoustic attenuation, acoustic heterogeneity, detector bandwidth, detector aperture, detector view angle, signal sampling, and image reconstruction algorithms are reviewed and discussed. Analytical expressions of point spread functions related to these impacting factors are summarized based on rigorous mathematical formulas. State-of-the-art approaches devoted to enhancing spatial resolution are also reviewed. This work is expected to elucidate the concept of spatial resolution in PACT and inspire novel image quality enhancement techniques.
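As a rough illustration of one of the factors surveyed, detector bandwidth, the photoacoustic literature often quotes a bandwidth-limited axial resolution of about 0.88 c/B; the prefactor below is that commonly cited value, an assumption on our part, since its exact number depends on the PSF criterion adopted:

```python
def axial_resolution(c=1540.0, bandwidth=30e6, prefactor=0.88):
    """Rule-of-thumb bandwidth-limited axial resolution in metres:
    R_axial ~ prefactor * c / B, with speed of sound c (m/s) and
    one-way detector bandwidth B (Hz). The 0.88 prefactor is a
    commonly quoted value, not an exact constant."""
    return prefactor * c / bandwidth

# e.g. a 30-MHz-bandwidth detector in soft tissue (c ~ 1540 m/s)
res_m = axial_resolution()  # ~4.5e-5 m, i.e. about 45 micrometres
```

The same scaling makes clear why broadband detectors are a prerequisite for fine axial resolution, independent of the reconstruction algorithm.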
Affiliation(s)
- Chao Tian
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Chenxi Zhang
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Haoran Zhang
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Dan Xie
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Yi Jin
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
- Key Laboratory of Precision Scientific Instrumentation of Anhui Higher Education Institutes, University of Science and Technology of China, Hefei, Anhui 230026, People's Republic of China
99
Lafci B, Mercep E, Morscher S, Dean-Ben XL, Razansky D. Deep Learning for Automatic Segmentation of Hybrid Optoacoustic Ultrasound (OPUS) Images. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2021; 68:688-696. [PMID: 32894712 DOI: 10.1109/tuffc.2020.3022324] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
The highly complementary information provided by multispectral optoacoustics and pulse-echo ultrasound has recently prompted the development of hybrid imaging instruments that bring together the unique contrast advantages of both modalities. In the hybrid optoacoustic ultrasound (OPUS) combination, images retrieved by one modality may further be used to improve the reconstruction accuracy of the other. In this regard, image segmentation plays a major role, as it can help improve image quality and quantification by facilitating the modeling of light and sound propagation through the imaged tissues and the surrounding coupling medium. Here, we propose an automated approach for surface segmentation in whole-body mouse OPUS imaging using a deep convolutional neural network (CNN). The method has shown robust performance, attaining accurate segmentation of the animal boundary in both optoacoustic and pulse-echo ultrasound images, as evinced by quantitative performance evaluation using the Dice coefficient metric.
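The Dice coefficient used for the evaluation is a standard overlap metric; a minimal sketch (ours, not from the paper) on two toy binary masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2*|A & B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.
    eps guards against division by zero for two empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1  # 4-pixel square
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1  # 6-pixel rectangle
score = dice_coefficient(a, b)  # 2*4 / (4 + 6) = 0.8
```

Unlike pixel accuracy, Dice is insensitive to the large true-negative background, which is why it is the usual choice for segmentation evaluation.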
100
DiSpirito A, Li D, Vu T, Chen M, Zhang D, Luo J, Horstmeyer R, Yao J. Reconstructing Undersampled Photoacoustic Microscopy Images Using Deep Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:562-570. [PMID: 33064648 PMCID: PMC7858223 DOI: 10.1109/tmi.2020.3031541] [Citation(s) in RCA: 68] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
One primary technical challenge in photoacoustic microscopy (PAM) is the necessary compromise between spatial resolution and imaging speed. In this study, we propose a novel application of deep learning principles to reconstruct undersampled PAM images and transcend the trade-off between spatial resolution and imaging speed. We compared various convolutional neural network (CNN) architectures, and selected a Fully Dense U-net (FD U-net) model that produced the best results. To mimic various undersampling conditions in practice, we artificially downsampled fully-sampled PAM images of mouse brain vasculature at different ratios. This allowed us to not only definitively establish the ground truth, but also train and test our deep learning model at various imaging conditions. Our results and numerical analysis have collectively demonstrated the robust performance of our model to reconstruct PAM images with as few as 2% of the original pixels, which can effectively shorten the imaging time without substantially sacrificing the image quality.
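The artificial downsampling step described above can be sketched as follows (our illustration of the general idea, not the authors' code; they downsample fully-sampled mouse-brain images, here replaced by a toy array):

```python
import numpy as np

def undersample(image, keep_ratio, seed=0):
    """Artificially undersample a fully-sampled image by keeping a random
    fraction (keep_ratio) of pixels and zeroing the rest. Returns the
    sparse image and the binary sampling mask; the fully-sampled input
    remains available as ground truth for training and evaluation."""
    rng = np.random.default_rng(seed)
    mask = rng.random(image.shape) < keep_ratio
    return image * mask, mask

full = np.arange(100.0).reshape(10, 10)  # stand-in for a fully sampled PAM image
sparse, mask = undersample(full, keep_ratio=0.02)  # keep roughly 2% of pixels
```

A reconstruction network is then trained to map `sparse` back to `full`, which is what lets the imaging system scan fewer points without a proportional loss in image quality.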