101
Lu T, Chen T, Gao F, Sun B, Ntziachristos V, Li J. LV-GAN: A deep learning approach for limited-view optoacoustic imaging based on hybrid datasets. J Biophotonics 2021;14:e202000325. PMID: 33098215. DOI: 10.1002/jbio.202000325.
Abstract
Optoacoustic imaging (OAI) methods are rapidly evolving for resolving optical contrast in medical imaging applications. In practice, measurement strategies are commonly implemented under limited-view conditions owing to oversized imaging targets or system design limitations. Data acquired by limited-view detection may impart artifacts and distortions in reconstructed optoacoustic (OA) images. We propose a hybrid data-driven deep learning approach based on a generative adversarial network (GAN), termed LV-GAN, to efficiently recover high-quality images from limited-view OA images. Trained on both simulated and experimental data, LV-GAN achieves high recovery accuracy even for detection angles of less than 60°. The feasibility of LV-GAN for artifact removal in biological applications was validated by ex vivo experiments on two different OAI systems, suggesting the potential for ubiquitous use of LV-GAN to optimize image quality or system design across different scanners and application scenarios.
Affiliation(s)
- Tong Lu
  - School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Tingting Chen
  - School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Feng Gao
  - School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
  - Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin, China
- Biao Sun
  - School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Vasilis Ntziachristos
  - Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Munich, Germany
  - Chair of Biological Imaging and TranslaTUM, Technical University of Munich, Munich, Germany
- Jiao Li
  - School of Precision Instrument and Optoelectronics Engineering, Tianjin University, Tianjin, China
  - Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin, China
102
Eldar YC, Li Y, Ye JC. Mathematical Foundations of AIM. Artif Intell Med 2021. DOI: 10.1007/978-3-030-58080-3_333-1.
103
Lunz S, Hauptmann A, Tarvainen T, Schönlieb CB, Arridge S. On Learned Operator Correction in Inverse Problems. SIAM J Imaging Sci 2021;14:92-127. PMID: 39741577. PMCID: PMC7617273. DOI: 10.1137/20m1338460.
Abstract
We discuss the possibility of learning a data-driven explicit model correction for inverse problems, and whether such a correction can be used within a variational framework to obtain regularized reconstructions. This paper discusses the conceptual difficulty of learning such a forward model correction and presents a possible solution: a forward-adjoint correction that explicitly corrects in both the data and solution spaces. We then derive conditions under which solutions to the variational problem with a learned correction converge to solutions obtained with the correct operator. The proposed approach is evaluated on limited-view photoacoustic tomography and compared with the established framework of the Bayesian approximation error method.
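As a schematic illustration of the idea in this entry, the variational problem with a learned correction can be written as follows. The symbols here are generic (approximate operator, learned correction, regularizer) and not necessarily the paper's notation:

```latex
% \tilde{A}: cheap approximate forward operator; F_\theta: learned data-space
% correction; y: measured data; R: regularizer with weight \alpha.
\min_{x} \; \tfrac{1}{2} \bigl\| F_\theta\!\bigl(\tilde{A} x\bigr) - y \bigr\|_2^2 + \alpha \, R(x)
% A forward-adjoint correction additionally learns a solution-space map
% G_\phi, so that \tilde{A}^{*} G_\phi\!\bigl(F_\theta(\tilde{A}x) - y\bigr)
% approximates the exact gradient term A^{*}\!\bigl(Ax - y\bigr).
```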
Affiliation(s)
- Sebastian Lunz
  - University of Cambridge, Department of Applied Mathematics and Theoretical Physics, Cambridge
- Andreas Hauptmann
  - University of Oulu, Research Unit of Mathematical Sciences
  - University College London, Department of Computer Science, London
- Tanja Tarvainen
  - University of Eastern Finland, Department of Applied Physics, Kuopio
  - University College London, Department of Computer Science, London
- Simon Arridge
  - University College London, Department of Computer Science, London
104
Das D, Sharma A, Rajendran P, Pramanik M. Another decade of photoacoustic imaging. Phys Med Biol 2020;66. PMID: 33361580. DOI: 10.1088/1361-6560/abd669.
Abstract
Photoacoustic imaging is a hybrid biomedical imaging modality that is finding its way into clinical practice. Although the photoacoustic phenomenon has been known for more than a century, only in the last two decades has it been widely researched and used for biomedical imaging applications. In this review we focus on the development and progress of the technology in the last decade (2010-2020). Having become more user-friendly, cheaper, and more portable, photoacoustic imaging promises a wide range of applications if translated to the clinic. The growth of the photoacoustic community is steady, and with the several new directions researchers are exploring, it is inevitable that photoacoustic imaging will one day establish itself as a regular imaging modality in clinical practice.
Affiliation(s)
- Dhiman Das
  - School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Arunima Sharma
  - School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Praveenbalaji Rajendran
  - School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
- Manojit Pramanik
  - School of Chemical and Biomedical Engineering, Nanyang Technological University, 70 Nanyang Drive, N1.3-B2-11, Singapore 637457
105
Li M, Nyayapathi N, Kilian HI, Xia J, Lovell JF, Yao J. Sound Out the Deep Colors: Photoacoustic Molecular Imaging at New Depths. Mol Imaging 2020;19:1536012120981518. PMID: 33336621. PMCID: PMC7750763. DOI: 10.1177/1536012120981518.
Abstract
Photoacoustic tomography (PAT) has become increasingly popular for molecular imaging due to its unique optical absorption contrast, high spatial resolution, deep imaging depth, and high imaging speed. Yet the strong optical attenuation of biological tissue has traditionally prevented PAT from penetrating more than a few centimeters, limiting its application for studying deeply seated targets. A variety of PAT technologies have been developed to extend the imaging depth, including employing deep-penetrating microwaves and X-ray photons as excitation sources, delivering light to the inside of the organ, reshaping the light wavefront to better focus into the scattering medium, and improving the sensitivity of ultrasonic transducers. At the same time, novel optical fluence mapping algorithms and image reconstruction methods have been developed to improve the quantitative accuracy of PAT, which is crucial for recovering weak molecular signals at larger depths. The development of highly absorbing near-infrared PA molecular probes has also flourished, providing high sensitivity and specificity in studying cellular processes. This review introduces recent developments in deep PA molecular imaging, including novel imaging systems, image processing methods, and molecular probes, as well as their representative biomedical applications. Existing challenges and future directions are also discussed.
Affiliation(s)
- Mucong Li
  - Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Nikhila Nyayapathi
  - Department of Biomedical Engineering, University of Buffalo, NY, USA
- Hailey I Kilian
  - Department of Biomedical Engineering, University of Buffalo, NY, USA
- Jun Xia
  - Department of Biomedical Engineering, University of Buffalo, NY, USA
- Jonathan F Lovell
  - Department of Biomedical Engineering, University of Buffalo, NY, USA
- Junjie Yao
  - Department of Biomedical Engineering, Duke University, Durham, NC, USA
106
107
Johnstonbaugh K, Agrawal S, Durairaj DA, Fadden C, Dangi A, Karri SPK, Kothapalli SR. A Deep Learning Approach to Photoacoustic Wavefront Localization in Deep-Tissue Medium. IEEE Trans Ultrason Ferroelectr Freq Control 2020;67:2649-2659. PMID: 31944951. PMCID: PMC7769001. DOI: 10.1109/tuffc.2020.2964698.
Abstract
Optical photons undergo strong scattering when propagating more than 1 mm deep inside biological tissue, and finding the origin of these diffused optical wavefronts is a challenging task. Breaking through the optical diffusion limit, photoacoustic (PA) imaging (PAI) provides high-resolution, label-free images of human vasculature with high contrast due to the optical absorption of hemoglobin. In real-time PAI, an ultrasound transducer array detects PA signals, and B-mode images are formed by delay-and-sum or frequency-domain beamforming. Fundamentally, the strength of a PA signal is proportional to the local optical fluence, which decreases with increasing depth due to depth-dependent optical attenuation. This limits the visibility of deep-tissue vasculature and other light-absorbing PA targets. To address this practical challenge, an encoder-decoder convolutional neural network architecture was constructed with custom modules and trained to identify the origin of PA wavefronts inside an optically scattering deep-tissue medium. A comprehensive ablation study provides strong evidence that each module improves the localization accuracy. The network was trained on model-based simulated PA signals produced by 16,240 blood-vessel targets subjected to both optical scattering and Gaussian noise. Test results on 4,600 simulated and five experimental PA signals collected under various scattering conditions show that the network can localize targets with a mean error of less than 30 microns (standard deviation 20.9 microns) at imaging depths below 40 mm, and 1.06 mm (standard deviation 2.68 mm) at depths between 40 and 60 mm. The proposed work has broad applications such as diffuse optical wavefront shaping, circulating melanoma cell detection, and real-time vascular surgeries (e.g., for deep-vein thrombosis).
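Several entries in this list refer to delay-and-sum (DAS) beamforming as the conventional baseline, so a minimal sketch may help. This is generic textbook DAS for photoacoustic channel data, not this paper's network; all names and parameter values are illustrative:

```python
import math

def delay_and_sum(channel_data, elem_x, pixel, fs, c):
    """Beamform one pixel from photoacoustic channel data.

    channel_data[i][k]: sample k recorded by array element i
    elem_x: lateral positions (m) of the elements at depth z = 0
    pixel:  (x, z) position of the reconstruction point (m)
    fs: sampling rate (Hz); c: speed of sound (m/s)

    For PA imaging the delay is the one-way acoustic time of
    flight from the pixel to each element (no transmit delay).
    """
    px, pz = pixel
    total = 0.0
    for i, x in enumerate(elem_x):
        dist = math.hypot(px - x, pz)      # pixel-to-element distance
        k = int(round(dist / c * fs))      # nearest-sample delay index
        if 0 <= k < len(channel_data[i]):
            total += channel_data[i][k]    # sum the delayed samples
    return total
```

At the true source location the delayed samples add coherently, while elsewhere they largely cancel, which is what produces the image contrast (and, with few elements or limited view, the artifacts these deep-learning papers target).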
Affiliation(s)
- Kerrick Johnstonbaugh
  - Department of Biomedical Engineering, Pennsylvania State University, University Park, State College, Pennsylvania, USA, 16802
- Sumit Agrawal
  - Department of Biomedical Engineering, Pennsylvania State University, University Park, State College, Pennsylvania, USA, 16802
- Deepit Abhishek Durairaj
  - Department of Electrical Engineering, Pennsylvania State University, University Park, State College, Pennsylvania, USA, 16802
- Christopher Fadden
  - Department of Electrical Engineering, Pennsylvania State University, University Park, State College, Pennsylvania, USA, 16802
- Ajay Dangi
  - Department of Biomedical Engineering, Pennsylvania State University, University Park, State College, Pennsylvania, USA, 16802
- Sri Phani Krishna Karri
  - Department of Electrical Engineering, National Institute of Technology Andhra Pradesh, AP, India, 534102
- Sri-Rajasekhar Kothapalli
  - Department of Biomedical Engineering, Pennsylvania State University, University Park, State College, Pennsylvania, USA, 16802
  - Penn State Cancer Institute, Pennsylvania State University, Hershey, Pennsylvania, USA, 17033
  - Graduate Program in Acoustics, The Pennsylvania State University, University Park, Pennsylvania, USA, 16802
108
Awasthi N, Jain G, Kalva SK, Pramanik M, Yalavarthy PK. Deep Neural Network-Based Sinogram Super-Resolution and Bandwidth Enhancement for Limited-Data Photoacoustic Tomography. IEEE Trans Ultrason Ferroelectr Freq Control 2020;67:2660-2673. PMID: 32142429. DOI: 10.1109/tuffc.2020.2977210.
Abstract
Photoacoustic tomography (PAT) is a noninvasive imaging modality combining the benefits of optical contrast with ultrasonic resolution. Analytical reconstruction algorithms for photoacoustic (PA) signals require a large number of data points for accurate image reconstruction. In practical scenarios, however, data are collected with a limited number of transducers and are often corrupted by noise, yielding only qualitative images. Furthermore, the collected boundary data are band-limited due to the limited bandwidth (BW) of the transducer, which likewise degrades limited-data PA imaging to a qualitative level. In this work, a deep neural network-based model with a scaled root-mean-squared error loss function was proposed for super-resolution, denoising, and BW enhancement of the PA signals collected at the boundary of the domain. The proposed network was compared with traditional as well as other popular deep learning methods in numerical and experimental cases, and is shown to improve the collected boundary data, in turn providing a reconstructed PA image of superior quality. The improvements in Pearson correlation, structural similarity index metric, and root-mean-square error were as high as 35.62%, 33.81%, and 41.07%, respectively, for phantom cases, and the signal-to-noise ratio improvement in the reconstructed PA images was as high as 11.65 dB for in vivo cases, compared with reconstructions from the original limited-BW data. Code is available at https://sites.google.com/site/sercmig/home/dnnpat.
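The loss named in this entry, a scaled root-mean-squared error, can be sketched as follows. The scaling choice here (normalizing by the peak magnitude of the target sinogram, so the loss is comparable across signals of different amplitudes) is an assumption for illustration; the paper's exact definition may differ:

```python
import math

def scaled_rmse(pred, target):
    """RMSE between two 1-D signals, scaled by the peak magnitude of
    the target.

    Illustrative scaling choice; the paper's exact definition may differ.
    """
    assert len(pred) == len(target) and len(target) > 0
    peak = max(abs(t) for t in target)
    if peak == 0.0:
        peak = 1.0  # guard against an all-zero target
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(target)
    return math.sqrt(mse) / peak
```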
109
Lan H, Jiang D, Yang C, Gao F, Gao F. Y-Net: Hybrid deep learning image reconstruction for photoacoustic tomography in vivo. Photoacoustics 2020;20:100197. PMID: 32612929. PMCID: PMC7322183. DOI: 10.1016/j.pacs.2020.100197.
Abstract
Conventional reconstruction algorithms (e.g., delay-and-sum) used in photoacoustic imaging (PAI) provide a fast solution, but many artifacts remain, especially for the ill-posed limited-view problem. In this paper, we propose a new convolutional neural network (CNN) framework, Y-Net: a CNN architecture that reconstructs the initial PA pressure distribution by exploiting both the raw data and the beamformed image at once. The network combines two encoders with one decoder path, optimally utilizing the complementary information in the raw data and the beamformed image. We compared our result with several ablation studies, and the results on the test set show better performance than conventional reconstruction algorithms and another deep learning method (U-Net). Both in vitro and in vivo experiments were used to validate our method, which again outperforms the existing methods. The proposed Y-Net architecture also has high potential for medical image reconstruction in imaging modalities beyond PAI.
Affiliation(s)
- Hengrong Lan
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
  - Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai 200050, China
  - University of Chinese Academy of Sciences, Beijing 100049, China
- Daohuai Jiang
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
  - Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai 200050, China
  - University of Chinese Academy of Sciences, Beijing 100049, China
- Changchun Yang
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
  - Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai 200050, China
  - University of Chinese Academy of Sciences, Beijing 100049, China
- Feng Gao
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Fei Gao
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
110
Ansari R, Zhang EZ, Desjardins AE, Beard PC. Miniature all-optical flexible forward-viewing photoacoustic endoscopy probe for surgical guidance. Opt Lett 2020;45:6238-6241. PMID: 33186959. PMCID: PMC8219374. DOI: 10.1364/ol.400295.
Abstract
A miniature flexible photoacoustic endoscopy probe that provides high-resolution 3D images of vascular structures in a forward-viewing configuration is described. A planar Fabry-Perot ultrasound sensor with a -3 dB bandwidth of 53 MHz located at the tip of the probe is interrogated via a flexible fiber bundle and a miniature optical relay system to realize an all-optical probe measuring 7.4 mm in outer diameter at the tip. This approach to photoacoustic endoscopy offers advantages over previous piezoelectric-based distal-end scanning probes: a forward-viewing configuration in widefield photoacoustic tomography mode, finer spatial sampling (87 µm sampling interval) and wider detection bandwidth (53 MHz) than achievable with conventional ultrasound detection technology, and an all-optical passive imaging head for safe endoscopic use.
Affiliation(s)
- Rehman Ansari
  - Department of Medical Physics and Biomedical Engineering, University College London, Gower Street, London WC1E 6BT, UK
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, 43-45 Foley Street, London W1W 7TS, UK
- Edward Z. Zhang
  - Department of Medical Physics and Biomedical Engineering, University College London, Gower Street, London WC1E 6BT, UK
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, 43-45 Foley Street, London W1W 7TS, UK
- Adrien E. Desjardins
  - Department of Medical Physics and Biomedical Engineering, University College London, Gower Street, London WC1E 6BT, UK
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, 43-45 Foley Street, London W1W 7TS, UK
- Paul C. Beard
  - Department of Medical Physics and Biomedical Engineering, University College London, Gower Street, London WC1E 6BT, UK
  - Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, 43-45 Foley Street, London W1W 7TS, UK
111
Li X, Zhang S, Wu J, Huang S, Feng Q, Qi L, Chen W. Multispectral Interlaced Sparse Sampling Photoacoustic Tomography. IEEE Trans Med Imaging 2020;39:3463-3474. PMID: 32746097. DOI: 10.1109/tmi.2020.2996240.
Abstract
Multispectral photoacoustic tomography (PAT) is capable of resolving tissue chromophore distribution based on spectral un-mixing. It works by identifying absorption spectrum variations across a sequence of photoacoustic images acquired at multiple illumination wavelengths. Multispectral acquisition inevitably creates a large dataset. To cut down the data volume, sparse sampling methods that reduce the number of detectors have been developed. However, image reconstruction in sparse sampling PAT is challenging because of insufficient angular coverage, and during spectral un-mixing these inaccurate reconstructions further amplify imaging artefacts and contaminate the results. To solve this problem, we present interlaced sparse sampling (ISS) PAT, a method that involves: 1) a novel scanning-based image acquisition scheme in which the sparse detector array rotates while switching illumination wavelength, such that dense angular coverage is achieved using only a few detectors; and 2) a corresponding image reconstruction algorithm that uses an anatomical prior image created from the ISS strategy to guide PAT image computation. Reconstructed from the signals acquired at different wavelengths (angles), this self-generated prior image fuses multispectral and angular information, and thus has rich anatomical features and minimal artefacts. A specialized iterative imaging model that effectively incorporates this anatomical prior image into the reconstruction process is also developed. Simulation, phantom, and in vivo animal experiments showed that even at a 1/6 or 1/8 sparse sampling rate, our method achieved image reconstruction and spectral un-mixing results comparable to those obtained by the conventional dense sampling method.
112
Liu X, Zhou T, Lu M, Yang Y, He Q, Luo J. Deep Learning for Ultrasound Localization Microscopy. IEEE Trans Med Imaging 2020;39:3064-3078. PMID: 32286964. DOI: 10.1109/tmi.2020.2986781.
Abstract
Ultrasound localization microscopy (ULM), which works by localizing microbubbles (MBs) in the vasculature, has recently been proposed; it greatly improves the spatial resolution of ultrasound (US) imaging and is expected to aid clinical diagnosis. Nevertheless, several challenges remain in fast ULM imaging. The main problems are that current localization methods used to implement fast ULM imaging, e.g., a previously reported localization method based on sparse recovery (CS-ULM), suffer from long data-processing times and exhaustive parameter tuning (optimization). To address these problems, we propose a ULM method based on deep learning, achieved with a modified sub-pixel convolutional neural network (CNN), termed mSPCN-ULM. Simulations and in vivo experiments are performed to evaluate the performance of mSPCN-ULM. Simulation results show that even under high-density conditions (6.4 MBs/mm²), mSPCN-ULM obtains high localization precision ([Formula: see text] in the lateral direction and [Formula: see text] in the axial direction) and high localization reliability (Jaccard index of 0.66), compared to CS-ULM. The in vivo experimental results indicate that with plane-wave scans at a transmit center frequency of 15.625 MHz, microvessels with diameters of [Formula: see text] can be detected and adjacent microvessels separated by [Formula: see text] can be resolved. Furthermore, with GPU acceleration, the data-processing time of mSPCN-ULM can be shortened to ~6 s/frame in the simulations and ~23 s/frame in the in vivo experiments, which is 3-4 orders of magnitude faster than CS-ULM. Finally, once the network is trained, mSPCN-ULM needs no parameter tuning to implement ULM. As a result, mSPCN-ULM opens the door to ULM with fast data processing, high imaging accuracy, short data-acquisition time, and high flexibility (robustness to parameters).
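To make "sub-pixel localization" concrete, here is the classical non-learning step that such networks aim to replace or accelerate: fitting a parabola through a peak sample and its two neighbors to estimate the peak position at sub-sample precision. This is a standard textbook technique, not this paper's CNN:

```python
def subpixel_peak(profile):
    """Locate a peak with sub-sample precision via a 3-point parabolic fit.

    profile: list of samples (e.g., one line of a correlation map).
    Returns the peak position in fractional sample units.
    """
    k = max(range(len(profile)), key=profile.__getitem__)
    if k == 0 or k == len(profile) - 1:
        return float(k)  # peak at the border: no neighbors to fit
    ym, y0, yp = profile[k - 1], profile[k], profile[k + 1]
    denom = ym - 2.0 * y0 + yp
    if denom == 0.0:
        return float(k)  # degenerate (flat) neighborhood
    # vertex of the parabola through the three samples
    return k + 0.5 * (ym - yp) / denom
```

For a peak that is exactly parabolic, this recovers the true sub-sample position; for other peak shapes it is only an approximation, which is one motivation for learned localizers.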
113
Feng J, Deng J, Li Z, Sun Z, Dou H, Jia K. End-to-end Res-Unet based reconstruction algorithm for photoacoustic imaging. Biomed Opt Express 2020;11:5321-5340. PMID: 33014617. PMCID: PMC7510873. DOI: 10.1364/boe.396598.
Abstract
Recently, deep neural networks have attracted great attention in photoacoustic imaging (PAI). In PAI, reconstructing the initial pressure distribution from acquired photoacoustic (PA) signals is a typical inverse problem. In this paper, an end-to-end Unet with residual blocks (Res-Unet) is designed and trained to solve this inverse problem. The performance of the proposed algorithm is explored and analyzed by comparison with a recent model-resolution-based regularization algorithm (MRR) in numerical and physical phantom experiments. The improvement obtained in the reconstructed images was more than 95% in Pearson correlation and 39% in peak signal-to-noise ratio (PSNR) relative to the MRR. The Res-Unet also achieved superior performance over the state-of-the-art Unet++ architecture, by more than 18% in PSNR in simulation experiments.
Affiliation(s)
- Jinchao Feng
  - Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
  - Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Jianguang Deng
  - Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
  - Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Zhe Li
  - Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
  - Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Zhonghua Sun
  - Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
  - Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
- Huijing Dou
  - Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Kebin Jia
  - Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
  - Beijing Laboratory of Advanced Information Networks, Beijing 100124, China
114
Tong T, Huang W, Wang K, He Z, Yin L, Yang X, Zhang S, Tian J. Domain Transform Network for Photoacoustic Tomography from Limited-view and Sparsely Sampled Data. Photoacoustics 2020;19:100190. PMID: 32617261. PMCID: PMC7322684. DOI: 10.1016/j.pacs.2020.100190.
Abstract
Medical image reconstruction methods based on deep learning have recently demonstrated powerful performance in photoacoustic tomography (PAT) from limited-view and sparse data. However, because most of these methods must utilize conventional linear reconstruction methods to implement signal-to-image transformations, their performance is restricted. In this paper, we propose a novel deep learning reconstruction approach that integrates appropriate data pre-processing and training strategies. The Feature Projection Network (FPnet) presented herein is designed to learn this signal-to-image transformation through data-driven learning rather than through direct use of linear reconstruction. To further improve reconstruction results, our method integrates an image post-processing network (U-net). Experiments show that the proposed method can achieve high reconstruction quality from limited-view data with sparse measurements. When employing GPU acceleration, this method can achieve a reconstruction speed of 15 frames per second.
Affiliation(s)
- Tong Tong
  - CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Wenhui Huang
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
  - Medical Imaging Center, the First Affiliated Hospital, Jinan University, Guangzhou 510632, China
- Kun Wang
  - CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Zicong He
  - Medical Imaging Center, the First Affiliated Hospital, Jinan University, Guangzhou 510632, China
- Lin Yin
  - CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Xin Yang
  - CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Shuixing Zhang
  - Medical Imaging Center, the First Affiliated Hospital, Jinan University, Guangzhou 510632, China
- Jie Tian
  - CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing 100191, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
115
Nguyen HNY, Steenbergen W. Three-dimensional view of out-of-plane artifacts in photoacoustic imaging using a laser-integrated linear-transducer-array probe. Photoacoustics 2020;19:100176. PMID: 32257797. PMCID: PMC7096763. DOI: 10.1016/j.pacs.2020.100176.
Abstract
Research on photoacoustic imaging (PAI) using a handheld integrated photoacoustic probe has been a recent focus of clinical translation of this imaging technique. One of the remaining challenges is the occurrence of out-of-plane artifacts (OPAs) in such a probe. Previously, we proposed a method to identify and remove OPAs by axially displacing the transducer array. Here we show that besides the benefit of removing OPAs from the imaging plane, the proposed method can provide a three-dimensional (3D) view of the OPAs. In this work, we present a 3D reconstruction method using axial transducer array displacement. By axially displacing the transducer array, out-of-plane absorbers can be three-dimensionally visualized at an elevation distance of up to the acquired imaging depth. Additionally, OPAs in the in-plane image are significantly reduced. We experimentally demonstrate the method with phantom and in vivo experiments using an integrated PAI probe. We also compare the method with elevational transducer array displacement and take into account the sensitivity of the transducer array in the 3D reconstruction.
116
Ding L, Razansky D, Dean-Ben XL. Model-Based Reconstruction of Large Three-Dimensional Optoacoustic Datasets. IEEE Trans Med Imaging 2020; 39:2931-2940. [PMID: 32191883] [DOI: 10.1109/tmi.2020.2981835]
Abstract
Iterative model-based algorithms are known to enable more accurate and quantitative optoacoustic (photoacoustic) tomographic reconstructions than standard back-projection methods. However, three-dimensional (3D) model-based inversion is often hampered by high computational complexity and memory overhead. Parallel implementations on a graphics processing unit (GPU) have been shown to efficiently reduce the memory requirements by on-the-fly calculation of the actions of the optoacoustic model matrix, but the high complexity still makes these approaches impractical for large 3D optoacoustic datasets. Herein, we show that the computational complexity of 3D model-based iterative inversion can be significantly reduced by splitting the model matrix into two parts: one maximally sparse matrix containing only one entry per voxel-transducer pair and a second matrix corresponding to cyclic convolution. We further suggest reconstructing the images by multiplying the transpose of the model matrix calculated in this manner with the acquired signals, which is equivalent to using a very large regularization parameter in the iterative inversion method. The performance of these two approaches is compared to that of standard back-projection and a recently introduced GPU-based model-based method using datasets from in vivo experiments. The reconstruction time was accelerated by approximately an order of magnitude with the new iterative method, while multiplication with the transpose of the matrix is shown to be as fast as standard back-projection.
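The matrix-splitting idea described above can be illustrated with a toy numpy sketch (not the authors' implementation; the arrival-time bins, weights, and kernel below are made-up stand-ins): the model matrix is factored as A = C·S, where S holds one entry per voxel-transducer pair and C applies a cyclic convolution, so A·x can be evaluated with a scatter-add and an FFT instead of a dense multiplication.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, T = 16, 32        # voxels and time samples, one transducer for simplicity

# "Maximally sparse" factor S: one (time-bin, weight) entry per voxel-transducer pair
t_idx = rng.integers(0, T, size=n_vox)   # hypothetical arrival-time bin per voxel
w = rng.uniform(0.5, 1.5, size=n_vox)    # hypothetical per-voxel weight

# Cyclic-convolution factor C: a short kernel embedded in a length-T circular filter
h = np.zeros(T)
h[:5] = rng.standard_normal(5)

def forward(x):
    """Evaluate A x as C (S x): scatter-add, then FFT-based circular convolution."""
    s = np.zeros(T)
    np.add.at(s, t_idx, w * x)                       # S x, one entry per voxel
    return np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(s)))

# Reference: the same operator assembled densely as A = C S
S = np.zeros((T, n_vox))
S[t_idx, np.arange(n_vox)] = w
C = np.array([[h[(i - j) % T] for j in range(T)] for i in range(T)])

x = rng.standard_normal(n_vox)
print(np.allclose(forward(x), C @ S @ x))   # True: both evaluate A x
```

The factored form never stores the dense matrix: the sparse part costs one operation per voxel-transducer pair and the convolution is O(T log T) per transducer.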
117
Lediju Bell MA. Photoacoustic imaging for surgical guidance: Principles, applications, and outlook. J Appl Phys 2020; 128:060904. [PMID: 32817994] [PMCID: PMC7428347] [DOI: 10.1063/5.0018190]
Abstract
Minimally invasive surgeries often require complicated maneuvers and delicate hand-eye coordination and ideally would incorporate "x-ray vision" to see beyond tool tips and underneath tissues prior to making incisions. Photoacoustic imaging has the potential to offer this feature but not with ionizing x-rays. Instead, optical fibers and acoustic receivers enable photoacoustic sensing of major structures-such as blood vessels and nerves-that are otherwise hidden from view. This imaging process is initiated by transmitting laser pulses that illuminate regions of interest, causing thermal expansion and the generation of sound waves that are detectable with conventional ultrasound transducers. The recorded signals are then converted to images through the beamforming process. Photoacoustic imaging may be implemented to both target and avoid blood-rich surgical contents (and in some cases simultaneously or independently visualize optical fiber tips or metallic surgical tool tips) in order to prevent accidental injury and assist device operators during minimally invasive surgeries and interventional procedures. Novel light delivery systems, counterintuitive findings, and robotic integration methods introduced by the Photoacoustic & Ultrasonic Systems Engineering Lab are summarized in this invited Perspective, setting the foundation and rationale for the subsequent discussion of the author's views on possible future directions for this exciting frontier known as photoacoustic-guided surgery.
Affiliation(s)
- Muyinatu A. Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland 21218, USA
118
Bench C, Hauptmann A, Cox B. Toward accurate quantitative photoacoustic imaging: learning vascular blood oxygen saturation in three dimensions. J Biomed Opt 2020; 25:085003. [PMID: 32840068] [PMCID: PMC7443711] [DOI: 10.1117/1.jbo.25.8.085003]
Abstract
SIGNIFICANCE Two-dimensional (2-D) fully convolutional neural networks have been shown capable of producing maps of sO2 from 2-D simulated images of simple tissue models. However, their potential to produce accurate estimates in vivo is uncertain as they are limited by the 2-D nature of the training data when the problem is inherently three-dimensional (3-D), and they have not been tested with realistic images. AIM To demonstrate the capability of deep neural networks to process whole 3-D images and output 3-D maps of vascular sO2 from realistic tissue models/images. APPROACH Two separate fully convolutional neural networks were trained to produce 3-D maps of vascular blood oxygen saturation and vessel positions from multiwavelength simulated images of tissue models. RESULTS The mean of the absolute difference between the true mean vessel sO2 and the network output for 40 examples was 4.4% and the standard deviation was 4.5%. CONCLUSIONS 3-D fully convolutional networks were shown capable of producing accurate sO2 maps using the full extent of spatial information contained within 3-D images generated under conditions mimicking real imaging scenarios. We demonstrate that networks can cope with some of the confounding effects present in real images such as limited-view artifacts and have the potential to produce accurate estimates in vivo.
Affiliation(s)
- Ciaran Bench
- University College London, Department of Medical Physics and Biomedical Engineering, Gower Street, London, United Kingdom
- Andreas Hauptmann
- University of Oulu, Research Unit of Mathematical Sciences, Oulu, Finland
- University College London, Department of Computer Science, Gower Street, London, United Kingdom
- Ben Cox
- University College London, Department of Medical Physics and Biomedical Engineering, Gower Street, London, United Kingdom

119
Kofler A, Haltmeier M, Schaeffter T, Kachelrieß M, Dewey M, Wald C, Kolbitsch C. Neural networks-based regularization for large-scale medical image reconstruction. Phys Med Biol 2020; 65:135003. [PMID: 32492660] [DOI: 10.1088/1361-6560/ab990e]
Abstract
In this paper we present a generalized deep-learning-based approach for solving ill-posed large-scale inverse problems occurring in medical image reconstruction. Recently, deep learning methods using iterative neural networks (NNs) and cascaded NNs have been reported to achieve state-of-the-art results with respect to various quantitative quality measures such as PSNR, NRMSE and SSIM across different imaging modalities. However, because these approaches repeatedly apply the forward and adjoint operators within the network architecture, the network has to process whole images or volumes at once, which for some applications is computationally infeasible. In this work, we follow a different reconstruction strategy by strictly separating the application of the NN, the regularization of the solution and the consistency with the measured data. The regularization is given in the form of an image prior, obtained as the output of a previously trained NN, which is used within a Tikhonov regularization framework. By doing so, more complex and sophisticated network architectures can be used for the removal of artefacts or noise than is usually the case in iterative NNs. Due to the large scale of the considered problems and the resulting computational complexity of the employed networks, the priors are obtained by processing the images or volumes as patches or slices. We evaluated the method for 3D cone-beam low-dose CT and undersampled 2D radial cine MRI and compared it to a total-variation-minimization-based reconstruction algorithm as well as to a method with regularization based on learned overcomplete dictionaries. The proposed method outperformed all the reported methods with respect to all chosen quantitative measures and further accelerates the regularization step in the reconstruction by several orders of magnitude.
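The separation described above reduces, in its simplest form, to a Tikhonov problem with an NN-generated image prior. A minimal numpy sketch (a toy stand-in: a perturbed copy of the ground truth plays the role of the network output) solves min_x ||Ax − y||² + λ||x − x_prior||² via its normal equations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 32, 20                       # image size and (underdetermined) measurement count
A = rng.standard_normal((m, n))     # toy linear forward operator
x_true = rng.standard_normal(n)
y = A @ x_true                      # noiseless measurements

# Stand-in for the trained network's output: a perturbed copy of the truth
x_prior = x_true + 0.1 * rng.standard_normal(n)

# Tikhonov problem  min_x ||A x - y||^2 + lam ||x - x_prior||^2,
# solved via its normal equations (A^T A + lam I) x = A^T y + lam x_prior
lam = 1.0
x_rec = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y + lam * x_prior)

# The minimizer cannot have a worse data residual than the prior alone
print(np.linalg.norm(A @ x_rec - y) <= np.linalg.norm(A @ x_prior - y))   # True
```

Because the NN only produces the prior, data consistency is enforced entirely by this (cheap, linear) solve, which is what lets the prior be computed patch- or slice-wise.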
Affiliation(s)
- A Kofler
- Department of Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany
120
Chen XL, Yan TY, Wang N, von Deneen KM. Rising role of artificial intelligence in image reconstruction for biomedical imaging. Artif Intell Med Imaging 2020; 1:1-5. [DOI: 10.35711/aimi.v1.i1.1]
Abstract
In this editorial, we review recent progress in the application of artificial intelligence (AI) to image reconstruction for biomedical imaging. Because it abandons the hand-crafted priors of traditional algorithm design and adopts a completely data-driven mode that learns deeper prior information, AI technology plays an increasingly important role in biomedical image reconstruction. The combination of AI technology with biomedical image reconstruction methods has become a hotspot in the field. With the aid of AI, the performance of biomedical image reconstruction has improved in terms of accuracy, resolution, imaging speed, etc. We specifically focus on how AI technology can be used to improve the performance of biomedical image reconstruction, and propose possible future directions for this field.
Affiliation(s)
- Xue-Li Chen
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
- Tian-Yu Yan
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
- Nan Wang
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China
- Karen M von Deneen
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an 710126, Shaanxi Province, China

121
A New Deep Learning Network for Mitigating Limited-view and Under-sampling Artifacts in Ring-shaped Photoacoustic Tomography. Comput Med Imaging Graph 2020; 84:101720. [PMID: 32679469] [DOI: 10.1016/j.compmedimag.2020.101720]
Abstract
Photoacoustic tomography (PAT) is a hybrid technique for high-resolution imaging of optical absorption in tissue. Among various transducer arrays proposed for PAT, the ring-shaped transducer array is widely used in cross-sectional imaging applications. However, due to the high fabrication cost, most ring-shaped transducer arrays have a sparse transducer arrangement, which leads to limited-view problems and under-sampling artifacts. To address these issues, we paired conventional PAT reconstruction with deep learning, which recently achieved a breakthrough in image processing and tomographic reconstruction. In this study, we designed a convolutional neural network (CNN) called a ring-array deep learning network (RADL-net), which can eliminate limited-view and under-sampling artifacts in PAT images. The method was validated on a three-quarter ring transducer array using numerical simulation, phantom imaging, and in vivo imaging. Our results indicate that the proposed RADL-net significantly improves the quality of reconstructed images on a three-quarter ring transducer array. The method is also superior to the conventional compressed sensing (CS) algorithm.
122
Yang C, Lan H, Gao F. Accelerated Photoacoustic Tomography Reconstruction via Recurrent Inference Machines. Annu Int Conf IEEE Eng Med Biol Soc 2019:6371-6374. [PMID: 31947300] [DOI: 10.1109/embc.2019.8856290]
Abstract
Accelerated photoacoustic tomography (PAT) reconstruction is important for real-time photoacoustic imaging (PAI) applications. PAT requires a reconstruction algorithm to recover an image of the tissue from the detected photoacoustic signals, which is usually posed as an inverse problem. Instead of the typical approach of defining a model and choosing an inference procedure, we propose to use Recurrent Inference Machines (RIM) as a framework for PAT reconstruction. Our model performs an accelerated iterative reconstruction and directly learns to solve the inverse problem in PAT using information from a forward model based on k-space methods. As shown in experiments, our method achieves faster high-resolution PAT reconstruction and outperforms a deep-neural-network-based method in some respects.
123
Guan S, Khan AA, Sikdar S, Chitnis PV. Limited-View and Sparse Photoacoustic Tomography for Neuroimaging with Deep Learning. Sci Rep 2020; 10:8510. [PMID: 32444649] [PMCID: PMC7244747] [DOI: 10.1038/s41598-020-65235-2]
Abstract
Photoacoustic tomography (PAT) is a non-ionizing imaging modality capable of acquiring high contrast and resolution images of optical absorption at depths greater than traditional optical imaging techniques. Practical considerations with instrumentation and geometry limit the number of available acoustic sensors and their "view" of the imaging target, which result in image reconstruction artifacts degrading image quality. Iterative reconstruction methods can be used to reduce artifacts but are computationally expensive. In this work, we propose a novel deep learning approach termed pixel-wise deep learning (Pixel-DL) that first employs pixel-wise interpolation governed by the physics of photoacoustic wave propagation and then uses a convolutional neural network to reconstruct an image. Simulated photoacoustic data from synthetic, mouse-brain, lung, and fundus vasculature phantoms were used for training and testing. Results demonstrated that Pixel-DL achieved comparable or better performance than iterative methods and consistently outperformed other CNN-based approaches for correcting artifacts. Pixel-DL is a computationally efficient approach that enables real-time PAT rendering and improved image reconstruction quality for limited-view and sparse PAT.
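The pixel-wise interpolation stage can be sketched as follows (a toy numpy illustration based only on the abstract; the sound speed, sampling rate, and sensor geometry are assumed): each sensor's time trace is sampled at every pixel's acoustic time of flight, yielding one channel image per sensor that would serve as the CNN input.

```python
import numpy as np

c, fs = 1500.0, 20e6                     # assumed sound speed [m/s] and sampling rate [Hz]
nx, T, n_sensors = 8, 256, 4             # tiny grid, trace length, sensor count
grid = np.stack(np.meshgrid(np.linspace(0, 1e-3, nx),
                            np.linspace(0, 1e-3, nx), indexing="ij"), axis=-1)
angles = np.linspace(0, 2 * np.pi, n_sensors, endpoint=False)
sensors = 5e-3 * np.stack([np.cos(angles), np.sin(angles)], axis=-1)  # ring of sensors

def tof_bin(pixel, sensor):
    """Time of flight from pixel to sensor, expressed as a sample index."""
    return int(round(np.linalg.norm(pixel - sensor) / c * fs))

# Synthetic sinogram: a point source at one grid pixel produces a unit impulse
# at each sensor's time-of-flight bin.
src = grid[3, 5]
sino = np.zeros((n_sensors, T))
for s in range(n_sensors):
    sino[s, tof_bin(src, sensors[s])] = 1.0

# Pixel-wise interpolation: one channel image per sensor (the CNN input stack)
stack = np.zeros((n_sensors, nx, nx))
for s in range(n_sensors):
    for i in range(nx):
        for j in range(nx):
            stack[s, i, j] = sino[s, tof_bin(grid[i, j], sensors[s])]

# Every channel lights up at the true source pixel
print(np.all(stack[:, 3, 5] == 1.0))   # True
```

This remapping encodes the wave-propagation physics before any learning happens, so the network only has to fuse the per-sensor channels rather than learn the acoustic delays themselves.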
Affiliation(s)
- Steven Guan
- Bioengineering Department, George Mason University, 4400 University Drive, Fairfax, 22030, VA, USA
- The MITRE Corporation, McLean, VA, 22102, USA
- Amir A Khan
- Bioengineering Department, George Mason University, 4400 University Drive, Fairfax, 22030, VA, USA
- Siddhartha Sikdar
- Bioengineering Department, George Mason University, 4400 University Drive, Fairfax, 22030, VA, USA
- Parag V Chitnis
- Bioengineering Department, George Mason University, 4400 University Drive, Fairfax, 22030, VA, USA

124
Hauptmann A, Adler J, Arridge S, Öktem O. Multi-Scale Learned Iterative Reconstruction. IEEE Trans Comput Imaging 2020; 6:843-856. [PMID: 33644260] [PMCID: PMC7116830] [DOI: 10.1109/tci.2020.2990299]
Abstract
Model-based learned iterative reconstruction methods have recently been shown to outperform classical reconstruction algorithms. Applicability of these methods to large-scale inverse problems is, however, limited by the available memory for training and by extensive training times, the latter due to computationally expensive forward models. As a possible solution to these restrictions we propose a multi-scale learned iterative reconstruction scheme that computes iterates on discretisations of increasing resolution. This procedure not only reduces memory requirements and considerably speeds up reconstruction and training times but, most importantly, is scalable to large-scale inverse problems with non-trivial forward operators, such as those arising in many 3D tomographic applications. In particular, we propose a hybrid network that combines the multi-scale iterative approach with a particularly expressive network architecture, and the combination exhibits excellent scalability in 3D. Applicability of the algorithm is demonstrated for 3D cone-beam computed tomography from real measurement data of an organic phantom. Additionally, we examine scalability and reconstruction quality, in comparison to established learned reconstruction methods, for two-dimensional low-dose computed tomography on human phantoms.
Affiliation(s)
- Andreas Hauptmann
- Research Unit of Mathematical Sciences, University of Oulu, Oulu, Finland and Department of Computer Science, University College London, London, United Kingdom
- Jonas Adler
- Elekta, Stockholm, Sweden and KTH - Royal Institute of Technology, Stockholm, Sweden. He is currently with DeepMind, London, UK
- Simon Arridge
- Department of Computer Science, University College London, London, United Kingdom
- Ozan Öktem
- Department of Mathematics, KTH - Royal Institute of Technology, Stockholm, Sweden

125
Feigin M, Freedman D, Anthony BW. A Deep Learning Framework for Single-Sided Sound Speed Inversion in Medical Ultrasound. IEEE Trans Biomed Eng 2020; 67:1142-1151. [DOI: 10.1109/tbme.2019.2931195]
126
Vu T, Li M, Humayun H, Zhou Y, Yao J. A generative adversarial network for artifact removal in photoacoustic computed tomography with a linear-array transducer. Exp Biol Med (Maywood) 2020; 245:597-605. [PMID: 32208974] [DOI: 10.1177/1535370220914285]
Abstract
With balanced spatial resolution, penetration depth, and imaging speed, photoacoustic computed tomography (PACT) is promising for clinical translation such as in breast cancer screening, functional brain imaging, and surgical guidance. Typically using a linear ultrasound (US) transducer array, PACT has great flexibility for hand-held applications. However, the linear US transducer array has a limited detection angle range and frequency bandwidth, resulting in limited-view and limited-bandwidth artifacts in the reconstructed PACT images. These artifacts significantly reduce the imaging quality. To address these issues, existing solutions often have to pay the price of system complexity, cost, and/or imaging speed. Here, we propose a deep-learning-based method that uses the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) to reduce the limited-view and limited-bandwidth artifacts in PACT. Compared with existing reconstruction and convolutional-neural-network approaches, our model has shown improvement in imaging quality and resolution. Our results on simulation, phantom, and in vivo data have collectively demonstrated the feasibility of applying WGAN-GP to improve PACT’s image quality without any modification to the current imaging set-up. Impact statement: This study offers a promising solution for removing limited-view and limited-bandwidth artifacts in PACT using a linear-array transducer and conventional image reconstruction, which have long hindered its clinical translation. Our solution shows unprecedented artifact-removal ability for in vivo images, which may enable important applications such as imaging tumor angiogenesis and hypoxia. The study also reports, for the first time, the use of an advanced deep-learning model based on a stabilized generative adversarial network. Our results demonstrate its superiority over other state-of-the-art deep-learning methods.
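The gradient penalty that stabilizes WGAN-GP training is evaluated on random interpolates between real and generated images. A toy numpy sketch of the penalty term (using a linear critic so the input gradient is available in closed form, rather than via automatic differentiation as in a real training loop; all sizes and weights are made up):

```python
import numpy as np

rng = np.random.default_rng(5)
d, batch = 16, 8                              # flattened image size, batch size (toy)
w = rng.standard_normal(d)                    # weights of a linear critic f(x) = w @ x
real = rng.standard_normal((batch, d))        # stand-in for artifact-free images
fake = rng.standard_normal((batch, d))        # stand-in for generator outputs

# WGAN-GP evaluates the critic's input gradient at random interpolates
eps = rng.uniform(size=(batch, 1))
x_hat = eps * real + (1.0 - eps) * fake

# For the linear critic, grad_x f(x) = w everywhere, so the penalty
#   lam * E[(||grad f(x_hat)||_2 - 1)^2]
# is available in closed form; a real implementation would backprop through f.
lam = 10.0
grad_norms = np.full(batch, np.linalg.norm(w))
gp = lam * np.mean((grad_norms - 1.0) ** 2)
print(gp)
```

The penalty drives the critic's gradient norm toward 1 along real-fake interpolation paths, which is what replaces weight clipping in the original WGAN and stabilizes training.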
Affiliation(s)
- Tri Vu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Mucong Li
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Hannah Humayun
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Yuan Zhou
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA; IBM Research-China, ZPark, Beijing 100085, China
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA

127
Li M, Lan B, Sankin G, Zhou Y, Liu W, Xia J, Wang D, Trahey G, Zhong P, Yao J. Simultaneous Photoacoustic Imaging and Cavitation Mapping in Shockwave Lithotripsy. IEEE Trans Med Imaging 2020; 39:468-477. [PMID: 31329550] [PMCID: PMC6960366] [DOI: 10.1109/tmi.2019.2928740]
Abstract
Kidney stone disease is a major health problem worldwide. Shockwave lithotripsy (SWL), which uses high-energy shockwave pulses to break up kidney stones, is extensively used in clinic. However, despite its noninvasiveness, SWL can produce cavitation in vivo. The rapid expansion and violent collapse of cavitation bubbles in small blood vessels may result in renal vascular injury. To better understand the mechanism of tissue injury and improve treatment safety and efficiency, it is highly desirable to concurrently detect cavitation and vascular injury during SWL. Current imaging modalities used in SWL (e.g., C-arm fluoroscopy and B-mode ultrasound) are not sensitive to vascular injuries. By contrast, photoacoustic imaging is a non-invasive and non-radiative imaging modality that is sensitive to blood, by using hemoglobin as the endogenous contrast. Moreover, photoacoustic imaging is also compatible with passive cavitation detection by sharing the ultrasound detection system. Here, we have integrated shockwave treatment, photoacoustic imaging, and passive cavitation detection into a single system. Our experimental results on phantoms and in vivo small animals have collectively demonstrated that the integrated system is capable of capturing shockwave-induced cavitation and the resultant vascular injury simultaneously. We expect that the integrated system, when combined with our recently developed internal-light-illumination photoacoustic imaging, will find important applications for monitoring shockwave-induced vascular injury in deep tissues during SWL.
Affiliation(s)
- Mucong Li
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Bangxin Lan
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Georgii Sankin
- Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC 27708, USA
- Yuan Zhou
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Wei Liu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Jun Xia
- Department of Biomedical Engineering, University at Buffalo, Buffalo, NY 14260, USA
- Depeng Wang
- Department of Biomedical Engineering, University at Buffalo, Buffalo, NY 14260, USA
- Gregg Trahey
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Pei Zhong
- Department of Mechanical Engineering and Materials Science, Duke University, Durham, NC 27708, USA
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA

128
Boink YE, Manohar S, Brune C. A Partially-Learned Algorithm for Joint Photo-acoustic Reconstruction and Segmentation. IEEE Trans Med Imaging 2020; 39:129-139. [PMID: 31180846] [DOI: 10.1109/tmi.2019.2922026]
Abstract
In an inhomogeneously illuminated photoacoustic image, important information like vascular geometry is not readily available when only the initial pressure is reconstructed. To obtain the desired information, algorithms for image segmentation are often applied as a post-processing step. In this article, we propose to jointly acquire the photoacoustic reconstruction and segmentation by modifying a recently developed partially learned algorithm based on a convolutional neural network. We investigate the stability of the algorithm against changes in initial pressures and photoacoustic system settings. These insights are used to develop an algorithm that is robust to input and system settings. Our approach can easily be applied to other imaging modalities and can be modified to perform other high-level tasks different from segmentation. The method is validated on challenging synthetic and experimental photoacoustic tomography data in limited angle and limited view scenarios. It is computationally less expensive than classical iterative methods and enables higher quality reconstructions and segmentations than the state-of-the-art learned and non-learned methods.
129
An Y, Meng H, Gao Y, Tong T, Zhang C, Wang K, Tian J. Application of machine learning method in optical molecular imaging: a review. Sci China Inf Sci 2020; 63:111101. [DOI: 10.1007/s11432-019-2708-1]
130
Sun Z, Yan X. Image reconstruction based on compressed sensing for sparse-data endoscopic photoacoustic tomography. Comput Biol Med 2019; 116:103587. [PMID: 32001014] [DOI: 10.1016/j.compbiomed.2019.103587]
Abstract
Endoscopic photoacoustic tomography (EPAT) is an interventional application of photoacoustic tomography (PAT) to visualize anatomical features and functional components of biological cavity structures such as the nasal cavity, digestive tract or coronary arterial vessels. One of the main challenges for the clinical applicability of EPAT is the incomplete acoustic measurements due to a limited number of detectors or limited-view acoustic detection enclosed within the cavity. In this case, conventional image reconstruction methodologies suffer from significantly degraded image quality. This work introduces a compressed-sensing (CS)-based method to reconstruct a high-quality image that represents the initial pressure distribution on a luminal cross-section from incomplete discrete acoustic measurements. The method constructs and trains a complete dictionary for the sparse representation of the photoacoustically-induced acoustic measurements. The sparse representation of the complete acoustic signals is then optimally obtained based on the sparse measurements and a sensing matrix. The complete acoustic signals are recovered from the sparse representation by inverse sparse transformation. The image of the initial pressure distribution is finally reconstructed from the recovered complete signals by using the time reversal (TR) algorithm. It was shown with numerical experiments that high-quality images with reduced under-sampling artifacts can be reconstructed from sparse measurements. The comparison results suggest that the proposed method outperforms the standard TR reconstruction by 40% in terms of the structural similarity of the reconstructed images.
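The signal-recovery stage described above can be sketched with orthogonal matching pursuit in numpy (a hypothetical stand-in: the paper trains a dictionary on photoacoustic signals, whereas random matrices are used here purely for illustration; the recovered complete signal would then feed the time-reversal reconstruction):

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_atoms, m, k = 64, 96, 24, 3   # signal length, dictionary size, measurements, sparsity
D = rng.standard_normal((n, n_atoms))   # stand-in for the learned dictionary
Phi = rng.standard_normal((m, n))       # stand-in sensing (sub-sampling) operator

# Ground truth: a "complete" acoustic signal that is k-sparse in D
alpha = np.zeros(n_atoms)
alpha[rng.choice(n_atoms, k, replace=False)] = rng.standard_normal(k)
signal = D @ alpha
y = Phi @ signal                        # incomplete measurements

# Orthogonal matching pursuit on the effective dictionary Phi @ D
A = Phi @ D
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))        # most correlated atom
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef                            # update residual

alpha_hat = np.zeros(n_atoms)
alpha_hat[support] = coef
signal_hat = D @ alpha_hat             # recovered complete signal (input to TR)
print(np.linalg.norm(r) <= np.linalg.norm(y))   # True: residual never grows
```

Each OMP iteration greedily adds the atom most correlated with the residual and re-solves a small least-squares problem on the selected support, which is what makes recovery tractable from far fewer measurements than signal samples.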
Affiliation(s)
- Sun Zheng
- Department of Electronic and Communication Engineering, North China Electric Power University, Baoding, 071003, China
- Yan Xiangyang
- Department of Electronic and Communication Engineering, North China Electric Power University, Baoding, 071003, China

131
Vu T, Razansky D, Yao J. Listening to tissues with new light: recent technological advances in photoacoustic imaging. J Opt 2019; 21. [PMID: 32051756] [PMCID: PMC7015182] [DOI: 10.1088/2040-8986/ab3b1a]
Abstract
Photoacoustic tomography (PAT), or optoacoustic tomography, has achieved remarkable progress in the past decade, benefiting from the joint developments in optics, acoustics, chemistry, computing and mathematics. Unlike pure optical or ultrasound imaging, PAT can provide unique optical absorption contrast as well as widely scalable spatial resolution, penetration depth and imaging speed. Moreover, PAT has inherent sensitivity to tissue's functional, molecular, and metabolic state. With these merits, PAT has been applied in a wide range of life science disciplines, and has enabled biomedical research unattainable by other imaging methods. This Review article aims at introducing state-of-the-art PAT technologies and their representative applications. The focus is on recent technological breakthroughs in structural, functional, molecular PAT, including super-resolution imaging, real-time small-animal whole-body imaging, and high-sensitivity functional/molecular imaging. We also discuss the remaining challenges in PAT and envisioned opportunities.
Affiliation(s)
- Tri Vu
- Photoacoustic Imaging Lab, Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Daniel Razansky
- Faculty of Medicine and Institute of Pharmacology and Toxicology, University of Zurich, Switzerland
- Institute for Biomedical Engineering and Department of Information Technology and Electrical Engineering, ETH Zurich, Switzerland
- Junjie Yao
- Photoacoustic Imaging Lab, Department of Biomedical Engineering, Duke University, Durham, NC, USA

132
Qin T, Zheng Z, Zhang R, Wang C, Yu W. $\ell_0$ gradient minimization for limited-view photoacoustic tomography. Phys Med Biol 2019; 64:195004. [DOI: 10.1088/1361-6560/ab3704] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
|
133
|
Wu D, Kim K, Li Q. Computationally efficient deep neural network for computed tomography image reconstruction. Med Phys 2019; 46:4763-4776. [PMID: 31132144 DOI: 10.1002/mp.13627] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2019] [Revised: 04/22/2019] [Accepted: 05/14/2019] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Deep neural network-based image reconstruction has demonstrated promising performance in medical imaging for undersampled and low-dose scenarios. However, it requires a large amount of memory and extensive time for training. It is especially challenging to train reconstruction networks for three-dimensional computed tomography (CT) because of the high resolution of CT images. The purpose of this work is to reduce the memory and time consumption of training reconstruction networks for CT, to make it practical for current hardware while maintaining the quality of the reconstructed images. METHODS We unrolled the proximal gradient descent algorithm for iterative image reconstruction to finite iterations and replaced the terms related to the penalty function with trainable convolutional neural networks (CNNs). The network was trained greedily, iteration by iteration, in the image domain on patches, which requires a reasonable amount of memory and time on a mainstream graphics processing unit (GPU). To overcome the local-minimum problem caused by greedy learning, we used a deep UNet as the CNN and incorporated separable quadratic surrogates with ordered subsets for data fidelity, so that the solution could escape from shallow local minima and achieve better image quality. RESULTS The proposed method achieved image quality comparable to state-of-the-art neural networks for CT image reconstruction on two-dimensional (2D) sparse-view and limited-angle problems on the low-dose CT challenge dataset. The differences in root-mean-square error (RMSE) and structural similarity index (SSIM) were within [-0.23, 0.47] HU and [0, 0.001], respectively, at the 95% confidence level. For three-dimensional (3D) image reconstruction with an ordinary-size CT volume, the proposed method needed only 2 GB of GPU memory and 0.45 s per training iteration as the minimum requirement, whereas existing methods may require 417 GB and 31 min.
The proposed method achieved improved performance compared to total variation- and dictionary learning-based iterative reconstruction for both 2D and 3D problems. CONCLUSIONS We proposed a training-time computationally efficient neural network for CT image reconstruction. The proposed method achieved image quality comparable to state-of-the-art neural networks for CT reconstruction, with significantly reduced memory and time requirements during training. The proposed method is applicable to 3D image reconstruction problems such as cone-beam CT and tomosynthesis on mainstream GPUs.
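The unrolling described in the METHODS section above can be sketched compactly. Below is a minimal numpy illustration in which a fixed soft-thresholding step stands in for the trainable CNN that replaces the penalty term in the paper; the operator, data, and parameter values are invented for the demo:

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau*||x||_1 -- stand-in for the trained CNN prior."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_pgd(A, y, n_iters=300, tau=0.01):
    """Proximal gradient descent unrolled to a fixed number of iterations.

    In the paper each iteration's proximal step is a trainable CNN; here a
    fixed soft-threshold plays that role so the skeleton stays self-contained.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)             # data-fidelity gradient
        x = soft_threshold(x - step * grad, step * tau)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))            # toy forward operator
x_true = np.zeros(20); x_true[[2, 7, 11]] = [1.0, -2.0, 0.5]
y = A @ x_true
x_hat = unrolled_pgd(A, y)
err = np.linalg.norm(x_hat - x_true)
```

Training the unrolled network "greedily, iteration by iteration" would correspond to fitting each iteration's proximal module separately before moving to the next, rather than backpropagating through the whole chain at once.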
Collapse
Affiliation(s)
- Dufan Wu
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA.,Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
| | - Kyungsang Kim
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA.,Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
| | - Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA.,Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
| |
Collapse
|
134
|
Deán-Ben XL, Razansky D. Optoacoustic image formation approaches-a clinical perspective. Phys Med Biol 2019; 64:18TR01. [PMID: 31342913 DOI: 10.1088/1361-6560/ab3522] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
Clinical translation of optoacoustic imaging is fostered by the rapid technical advances in imaging performance as well as the growing number of clinicians recognizing the immense diagnostic potential of this technology. Clinical optoacoustic systems are available in multiple configurations, including hand-held and endoscopic probes as well as raster-scan approaches. The hardware design must be adapted to the accessible portion of the imaged region and other application-specific requirements pertaining to the achievable depth, field of view or spatio-temporal resolution. Equally important is the adequate choice of the signal and image processing approach, which is largely responsible for the resulting imaging performance. Thus, new image reconstruction algorithms are constantly evolving in parallel to the newly-developed set-ups. This review focuses on recent progress in optoacoustic image formation algorithms and processing methods in the clinical setting. Major reconstruction challenges include real-time image rendering in two and three dimensions, efficient hybridization with other imaging modalities, as well as accurate interpretation and quantification of biomarkers, herein discussed in the context of ongoing progress in clinical translation.
Collapse
Affiliation(s)
- Xosé Luís Deán-Ben
- Faculty of Medicine and Institute of Pharmacology and Toxicology, University of Zurich, Zurich, Switzerland. Department of Information Technology and Electrical Engineering and Institute for Biomedical Engineering, ETH Zurich, Zurich, Switzerland
| | | |
Collapse
|
135
|
Davoudi N, Deán-Ben XL, Razansky D. Deep learning optoacoustic tomography with sparse data. NAT MACH INTELL 2019. [DOI: 10.1038/s42256-019-0095-3] [Citation(s) in RCA: 93] [Impact Index Per Article: 15.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
|
136
|
Arridge S, Hauptmann A. Networks for Nonlinear Diffusion Problems in Imaging. JOURNAL OF MATHEMATICAL IMAGING AND VISION 2019; 62:471-487. [PMID: 32300266 PMCID: PMC7138784 DOI: 10.1007/s10851-019-00901-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/29/2018] [Accepted: 08/23/2019] [Indexed: 06/11/2023]
Abstract
A multitude of imaging and vision tasks have recently seen a major transformation by deep learning methods, in particular by the application of convolutional neural networks. These methods achieve impressive results, even for applications where it is not apparent that convolutions are suited to capture the underlying physics. In this work, we develop a network architecture based on nonlinear diffusion processes, named DiffNet. By design, we obtain a nonlinear network architecture that is well suited for diffusion-related problems in imaging. Furthermore, the performed updates are explicit, by which we obtain better interpretability and generalisability compared to classical convolutional neural network architectures. The performance of DiffNet is tested on the inverse problem of nonlinear diffusion with the Perona-Malik filter on the STL-10 image dataset. We obtain results competitive with the established U-Net architecture, with a fraction of the parameters and necessary training data.
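For readers unfamiliar with the forward problem that DiffNet inverts, the following is a small numpy sketch of the explicit Perona-Malik diffusion filter itself (not the paper's network); the grid size, step sizes, and conductance parameter are arbitrary demo choices:

```python
import numpy as np

def perona_malik(img, n_steps=20, dt=0.1, kappa=0.1):
    """Explicit Perona-Malik diffusion: smooths noise, preserves strong edges.

    Conductance g(d) = 1 / (1 + (d/kappa)^2) shuts diffusion off across
    large gradients, i.e. across edges.
    """
    u = img.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / kappa) ** 2)
    for _ in range(n_steps):
        # one-sided differences to the four neighbours (zero-flux borders)
        dn = np.roll(u, -1, axis=0) - u; dn[-1, :] = 0
        ds = np.roll(u, 1, axis=0) - u;  ds[0, :] = 0
        de = np.roll(u, -1, axis=1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, axis=1) - u;  dw[:, 0] = 0
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0           # step edge
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
smoothed = perona_malik(noisy)
```

DiffNet's design point is that updates of exactly this explicit form, with learned conductances, make the network's behaviour interpretable in diffusion terms.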
Collapse
Affiliation(s)
- S. Arridge
- Department of Computer Science, University College London, London, UK
| | - A. Hauptmann
- Department of Computer Science, University College London, London, UK
- Research Unit of Mathematical Sciences, University of Oulu, Oulu, Finland
| |
Collapse
|
137
|
Cai C, Wang X, Si K, Qian J, Luo J, Ma C. Streak artifact suppression in photoacoustic computed tomography using adaptive back projection. BIOMEDICAL OPTICS EXPRESS 2019; 10:4803-4814. [PMID: 31565526 PMCID: PMC6757473 DOI: 10.1364/boe.10.004803] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/10/2019] [Revised: 07/30/2019] [Accepted: 08/12/2019] [Indexed: 05/18/2023]
Abstract
For photoacoustic computed tomography (PACT), an insufficient number of ultrasound detectors can cause serious streak-type artifacts. These artifacts are overlaid on image features, and thus locally jeopardize image quality and resolution. Here, a reconstruction algorithm, termed Contamination-Tracing Back-Projection (CTBP), is proposed for the mitigation of streak-type artifacts. During reconstruction, CTBP adaptively adjusts the back-projection weight, whose value is determined by the likelihood of contamination, to minimize the negative influence of strong absorbers. An iterative solution of the eikonal equation is implemented to accurately trace the time of flight for different pixels. Numerical, phantom and in vivo experiments demonstrate that CTBP can dramatically suppress streak artifacts in PACT and improve image quality.
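To make the back-projection framework concrete, here is a plain delay-and-sum back-projection sketch in numpy with a hook for per-sensor, per-pixel weights. CTBP would set those weights from a contamination likelihood and trace times of flight with an eikonal solver; this demo uses uniform weights and a constant sound speed, and all geometry and sampling values are invented:

```python
import numpy as np

def back_project(signals, sensors, grid, c, dt, weights=None):
    """Delay-and-sum back-projection onto a 2D pixel grid.

    signals : (n_sensors, n_samples) recorded pressure traces
    sensors : (n_sensors, 2) detector positions
    grid    : (n_pix, 2) pixel positions
    weights : optional (n_sensors, n_pix) adaptive weights (uniform here;
              CTBP derives them from a contamination likelihood)
    """
    n_sensors, n_samples = signals.shape
    if weights is None:
        weights = np.ones((n_sensors, grid.shape[0]))
    image = np.zeros(grid.shape[0])
    for i in range(n_sensors):
        # straight-ray time of flight at a constant sound speed c
        tof = np.linalg.norm(grid - sensors[i], axis=1) / c
        idx = np.clip(np.round(tof / dt).astype(int), 0, n_samples - 1)
        image += weights[i] * signals[i, idx]
    return image

# point source, ring of 64 detectors with 10 mm radius
c, dt = 1.5, 0.01                        # mm/us and us
ang = np.linspace(0, 2 * np.pi, 64, endpoint=False)
sensors = 10.0 * np.stack([np.cos(ang), np.sin(ang)], axis=1)
src = np.array([1.0, -2.0])
n_samples = 2000
signals = np.zeros((64, n_samples))
arrival = np.linalg.norm(sensors - src, axis=1) / c
signals[np.arange(64), np.round(arrival / dt).astype(int)] = 1.0

xs = np.linspace(-5, 5, 41)
grid = np.stack(np.meshgrid(xs, xs, indexing='ij'), axis=-1).reshape(-1, 2)
image = back_project(signals, sensors, grid, c, dt)
peak = grid[np.argmax(image)]            # localizes the source
```

Streak artifacts from strong absorbers enter through exactly these summed contributions, which is why down-weighting contaminated sensor-pixel pairs suppresses them.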
Collapse
Affiliation(s)
- Chuangjian Cai
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- These authors contributed equally
| | - Xuanhao Wang
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- These authors contributed equally
| | - Ke Si
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
- Center for Neuroscience, Department of Neurobiology, NHC and CAMS Key Laboratory of Medical Neurobiology, Zhejiang University School of Medicine, Hangzhou 310058, China
| | - Jun Qian
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China
| | - Jianwen Luo
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
| | - Cheng Ma
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Beijing National Research Center for Information Science and Technology, Beijing 100084, China
- Beijing Innovation Center for Future Chip, Beijing 100084, China
| |
Collapse
|
138
|
Hamilton SJ, Hänninen A, Hauptmann A, Kolehmainen V. Beltrami-net: domain-independent deep D-bar learning for absolute imaging with electrical impedance tomography (a-EIT). Physiol Meas 2019; 40:074002. [PMID: 31091516 PMCID: PMC6816539 DOI: 10.1088/1361-6579/ab21b2] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
OBJECTIVE To develop, and demonstrate the feasibility of, a novel image reconstruction method for absolute electrical impedance tomography (a-EIT) that pairs deep learning techniques with real-time robust D-bar methods, and to examine the influence of prior information on the reconstruction. APPROACH A D-bar method is paired with a trained convolutional neural network (CNN) as a post-processing step. Training data are simulated for the network without knowledge of the boundary shape, using an associated nonphysical Beltrami equation rather than simulating the traditional current and voltage data specific to a given domain. This makes the training data independent of the boundary shape. The method is tested on experimental data from two EIT systems (ACT4 and KIT4) with separate training sets of varying prior information. MAIN RESULTS Post-processing the D-bar images with a CNN produces significant improvements in image quality as measured by structural similarity indices (SSIMs) as well as relative [Formula: see text] and [Formula: see text] image errors. SIGNIFICANCE This work demonstrates that more general networks can be trained without being specific about boundary shape, a key challenge in EIT image reconstruction. The work is promising for future studies involving databases of anatomical atlases.
Collapse
Affiliation(s)
- S J Hamilton
- Department of Mathematics, Statistics, and Computer Science, Marquette University, Milwaukee, WI 53233, United States of America. Authors to whom any correspondence should be addressed
| | | | | | | |
Collapse
|
139
|
Accelerated Correction of Reflection Artifacts by Deep Neural Networks in Photo-Acoustic Tomography. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9132615] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Photo-Acoustic Tomography (PAT) is an emerging non-invasive hybrid modality driven by a constant yearning for superior imaging performance. Image quality, however, can be degraded by acoustic reflections, which may compromise diagnostic performance. To address this challenge, we propose to incorporate a deep neural network into conventional iterative algorithms to accelerate and improve the correction of reflection artifacts. Based on a simulated PAT dataset derived from computed tomography (CT) scans, this network-accelerated reconstruction approach is shown to outperform two state-of-the-art iterative algorithms in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) in the presence of noise. The proposed network also demonstrates considerably higher computational efficiency than conventional iterative algorithms, which are time-consuming and cumbersome.
Collapse
|
140
|
段 爽, 孙 正. [Review on photoacoustic tomographic image reconstruction for acoustically heterogeneous tissues]. SHENG WU YI XUE GONG CHENG XUE ZA ZHI = JOURNAL OF BIOMEDICAL ENGINEERING = SHENGWU YIXUE GONGCHENGXUE ZAZHI 2019; 36:486-492. [PMID: 31232553 PMCID: PMC9929972 DOI: 10.7507/1001-5515.201809014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 09/06/2018] [Indexed: 06/09/2023]
Abstract
Acoustic properties of biological tissues usually vary inhomogeneously in space, and tissues with different chemical composition often have different acoustic properties. The assumption of acoustic homogeneity may lead to blurred details, misalignment of targets and artifacts in reconstructed photoacoustic tomography (PAT) images. This paper summarizes the main solutions for PAT imaging of acoustically heterogeneous tissues, including variable sound speed and acoustic attenuation. The advantages and limitations of these methods are discussed, and possible future developments are outlined.
Collapse
Affiliation(s)
- 爽 段
- Department of Electronic and Communication Engineering, North China Electric Power University, Baoding, Hebei 071003, P.R. China
| | - 正 孙
- Department of Electronic and Communication Engineering, North China Electric Power University, Baoding, Hebei 071003, P.R. China
| |
Collapse
|
141
|
Guan S, Khan AA, Sikdar S, Chitnis PV. Fully Dense UNet for 2-D Sparse Photoacoustic Tomography Artifact Removal. IEEE J Biomed Health Inform 2019; 24:568-576. [PMID: 31021809 DOI: 10.1109/jbhi.2019.2912935] [Citation(s) in RCA: 171] [Impact Index Per Article: 28.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Photoacoustic imaging is an emerging imaging modality that is based upon the photoacoustic effect. In photoacoustic tomography (PAT), the induced acoustic pressure waves are measured by an array of detectors and used to reconstruct an image of the initial pressure distribution. A common challenge faced in PAT is that the measured acoustic waves can only be sparsely sampled. Reconstructing sparsely sampled data using standard methods results in severe artifacts that obscure information within the image. We propose a modified convolutional neural network (CNN) architecture termed fully dense UNet (FD-UNet) for removing artifacts from two-dimensional PAT images reconstructed from sparse data and compare the proposed CNN with the standard UNet in terms of reconstructed image quality.
Collapse
|
142
|
Liu F, Gong X, Wang LV, Guan J, Song L, Meng J. Dictionary learning sparse-sampling reconstruction method for in-vivo 3D photoacoustic computed tomography. BIOMEDICAL OPTICS EXPRESS 2019; 10:1660-1677. [PMID: 31061761 PMCID: PMC6484974 DOI: 10.1364/boe.10.001660] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/10/2018] [Revised: 02/14/2019] [Accepted: 02/17/2019] [Indexed: 05/02/2023]
Abstract
The sparse transforms currently used in model-based reconstruction for photoacoustic computed tomography (PACT) are predefined and typically cannot adequately capture the underlying features of specific data sets, limiting the high-quality recovery of photoacoustic images. In this work, we present an advanced reconstruction model using the K-SVD dictionary learning technique and present in vivo results after adapting the model to the 3D PACT system. The in vivo experiments, performed under an IRB-approved protocol, imaged a human hand and two rats. Compared to a traditional sparse transform, our proposed method improved the accuracy and contrast-to-noise ratio of the reconstructed photoacoustic images by, on average, 3.7 and 1.8 times, respectively, at a 50% sparse-sampling rate. We also compared the performance of our algorithm against other techniques; imaging speed was 60% faster than the other approaches. Our system would require a sparser transducer array and less data acquisition hardware (DAQs), potentially reducing the cost of the system. Thus, our work provides a new way of reconstructing photoacoustic images and could enable the development of new high-speed, low-cost 3D PACT systems for various biomedical applications.
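As background on the dictionary learning step, here is a toy K-SVD in numpy: orthogonal matching pursuit for the sparse coding stage, then a rank-1 SVD update per atom. This is a generic textbook sketch on synthetic data, not the paper's reconstruction model, and all sizes and sparsity levels are invented:

```python
import numpy as np

def omp(D, y, T):
    """Orthogonal matching pursuit: greedily pick T atoms to represent y."""
    idx, r = [], y.copy()
    for _ in range(T):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1]); x[idx] = coef
    return x

def ksvd(Y, n_atoms, T=2, n_iters=10, seed=0):
    """Toy K-SVD: alternate OMP sparse coding and rank-1 SVD atom updates."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iters):
        X = np.stack([omp(D, y, T) for y in Y.T], axis=1)
        for k in range(n_atoms):
            used = np.nonzero(X[k])[0]
            if used.size == 0:
                continue
            # residual without atom k, restricted to signals that use it
            E = Y[:, used] - D @ X[:, used] + np.outer(D[:, k], X[k, used])
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, k] = U[:, 0]                 # best rank-1 refit of atom k
            X[k, used] = s[0] * Vt[0]
    return D, X

# synthetic data: 2-sparse combinations of a hidden dictionary
rng = np.random.default_rng(3)
D_true = rng.standard_normal((16, 8)); D_true /= np.linalg.norm(D_true, axis=0)
codes = np.zeros((8, 200))
for j in range(200):
    codes[rng.choice(8, 2, replace=False), j] = rng.standard_normal(2)
Y = D_true @ codes
D, X = ksvd(Y, n_atoms=8)
err = np.linalg.norm(Y - D @ X) / np.linalg.norm(Y)
```

A learned dictionary of this kind replaces the predefined sparse transform criticized in the abstract, which is where the reported accuracy gains come from.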
Collapse
Affiliation(s)
- Fangyan Liu
- Qufu Normal University, School of Information Science and Engineering, 80 Yantai Road North, Rizhao, 276826, China
- Equal Contribution
| | - Xiaojing Gong
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Institute of Biomedical and Health Engineering, 1068 Xueyuan Boulevard, Shenzhen, 518055, China
- Equal Contribution
| | - Lihong V Wang
- California Institute of Technology, Department of Electrical Engineering, Andrew & Peggy Cherng Department of Medical Engineering, Pasadena, CA 91125, USA
| | - Jingjing Guan
- Qufu Normal University, School of Information Science and Engineering, 80 Yantai Road North, Rizhao, 276826, China
| | - Liang Song
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Institute of Biomedical and Health Engineering, 1068 Xueyuan Boulevard, Shenzhen, 518055, China
| | - Jing Meng
- Qufu Normal University, School of Information Science and Engineering, 80 Yantai Road North, Rizhao, 276826, China
| |
Collapse
|
143
|
Abstract
In medical applications, the accuracy and robustness of imaging methods are of crucial importance to ensure optimal patient care. While photoacoustic imaging (PAI) is an emerging modality with promising clinical applicability, state-of-the-art approaches to quantitative photoacoustic imaging (qPAI), which aim to solve the ill-posed inverse problem of recovering optical absorption from the measurements obtained, currently cannot comply with these high standards. This can be attributed to the fact that existing methods often rely on several simplifying a priori assumptions of the underlying physical tissue properties or cannot deal with realistic noise levels. In this manuscript, we address this issue with a new method for estimating an indicator of the uncertainty of an estimated optical property. Specifically, our method uses a deep learning model to compute error estimates for optical parameter estimations of a qPAI algorithm. Functional tissue parameters, such as blood oxygen saturation, are usually derived by averaging over entire signal intensity-based regions of interest (ROIs). Therefore, we propose to reduce the systematic error of the ROI samples by additionally discarding those pixels for which our method estimates a high error and thus a low confidence. In silico experiments show an improvement in the accuracy of optical absorption quantification when applying our method to refine the ROI, and it might thus become a valuable tool for increasing the robustness of qPAI methods.
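The ROI-refinement idea above, discarding pixels whose estimated error is high before averaging, can be sketched in a few lines. The uncertainty values below come from a hypothetical error model, and all numbers are synthetic:

```python
import numpy as np

def refine_roi_mean(values, est_error, keep_fraction=0.5):
    """Average only the most confident pixels in a region of interest.

    values    : per-pixel parameter estimates (e.g. blood oxygen saturation)
    est_error : per-pixel error predicted by an uncertainty model
    Pixels whose estimated error exceeds the keep_fraction quantile
    are discarded before averaging.
    """
    thr = np.quantile(est_error, keep_fraction)
    return values[est_error <= thr].mean()

# synthetic ROI: true value 0.7; half the pixels are biased, and the
# (hypothetical) uncertainty model flags exactly those with larger error
rng = np.random.default_rng(0)
good = 0.7 + 0.01 * rng.standard_normal(100)
bad = 0.9 + 0.01 * rng.standard_normal(100)
values = np.concatenate([good, bad])
est_error = np.concatenate([0.02 * np.ones(100), 0.2 * np.ones(100)])
naive = values.mean()                 # biased toward 0.8
refined = refine_roi_mean(values, est_error)
```

The systematic-error reduction the abstract reports corresponds to `refined` landing much closer to the true value than `naive` whenever the error estimates correlate with the actual errors.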
Collapse
|
144
|
Wang J, Wang Y. Photoacoustic imaging reconstruction using combined nonlocal patch and total-variation regularization for straight-line scanning. Biomed Eng Online 2018; 17:105. [PMID: 30075784 PMCID: PMC6076421 DOI: 10.1186/s12938-018-0537-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2018] [Accepted: 07/30/2018] [Indexed: 01/06/2023] Open
Abstract
Background In practical straight-line scanning for photoacoustic imaging (PAI), missing data causes serious artifacts. Traditional total variation (TV)-based algorithms fail to obtain satisfactory results, yielding over-smoothed and blurred geometric structures. It is therefore important to develop a new algorithm to improve the quality of practical straight-line reconstructed images. Methods In this paper, a combined nonlocal patch and TV-based regularization model for PAI reconstruction is proposed to solve these problems. A modified adaptive nonlocal weight function is adopted to provide more reliable estimates of the similarities between patches. Similar patches are searched for throughout the entire image; thus, the model realizes an adaptive search for the neighborhood of each patch. The optimization problem is simplified to a common iterative PAI reconstruction problem. Results and conclusion The proposed algorithm is validated by a series of numerical simulations and an in vitro experiment for straight-line scanning. The results of patch-TV are compared to those of two mainstream TV-based algorithms as well as an iterative algorithm with patch-based regularization alone. Moreover, the peak signal-to-noise ratio, noise robustness, and convergence and calculation speeds are compared and discussed. The results show that the proposed patch-TV yields significant qualitative and quantitative improvement over the other three algorithms. These simulations and experiments indicate that the patch-TV algorithm successfully solves the problems of PAI reconstruction and is highly effective in practical PAI applications.
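To illustrate the TV half of such a combined model, the following numpy sketch minimizes a smoothed isotropic TV denoising objective by gradient descent. The nonlocal patch term of the paper is deliberately omitted, and all parameter values are arbitrary demo choices:

```python
import numpy as np

def tv_eps(u, eps=0.05):
    """Smoothed isotropic total variation of a 2D image."""
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return np.sqrt(gx**2 + gy**2 + eps**2).sum()

def tv_denoise(y, lam=0.15, eps=0.05, step=0.05, n_iters=300):
    """Gradient descent on 0.5*||u - y||^2 + lam * TV_eps(u).

    The paper combines a TV term like this with a nonlocal patch prior;
    the patch term is left out to keep the sketch short.
    """
    u = y.copy()
    for _ in range(n_iters):
        gx = np.diff(u, axis=0, append=u[-1:, :])   # forward differences
        gy = np.diff(u, axis=1, append=u[:, -1:])
        mag = np.sqrt(gx**2 + gy**2 + eps**2)
        px, py = gx / mag, gy / mag
        # divergence = negative adjoint of the forward differences above
        div = np.diff(px, axis=0, prepend=0) + np.diff(py, axis=1, prepend=0)
        u -= step * ((u - y) - lam * div)
    return u

rng = np.random.default_rng(4)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0      # piecewise-constant target
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
```

The over-smoothing the abstract attributes to pure TV shows up here as shrinkage of fine structure; the patch term is what restores it in the proposed method.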
Collapse
Affiliation(s)
- Jin Wang
- Department of Electronic Engineering, Fudan University, No. 220 Handan Road, Shanghai, 200433, China
| | - Yuanyuan Wang
- Department of Electronic Engineering, Fudan University, No. 220 Handan Road, Shanghai, 200433, China. .,Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention (MICCAI) of Shanghai, Fudan University, Shanghai, 200433, China.
| |
Collapse
|
145
|
Li M, Tang Y, Yao J. Photoacoustic tomography of blood oxygenation: A mini review. PHOTOACOUSTICS 2018; 10:65-73. [PMID: 29988848 PMCID: PMC6033062 DOI: 10.1016/j.pacs.2018.05.001] [Citation(s) in RCA: 192] [Impact Index Per Article: 27.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/18/2017] [Revised: 05/24/2018] [Accepted: 05/28/2018] [Indexed: 05/04/2023]
Abstract
Photoacoustic tomography (PAT) is a hybrid imaging modality that combines the rich contrast of optical excitation with the deep penetration of ultrasound detection. With its unique optical absorption contrast mechanism, PAT is inherently sensitive to the functional and molecular information of biological tissues, and thus has been widely used in preclinical and clinical studies. Among the many functional capabilities of PAT, measuring blood oxygenation is arguably one of the most important, and has been widely performed in photoacoustic studies of brain function, tumor hypoxia, wound healing, and cancer therapy. Yet the complex optical conditions of biological tissues, especially the strong wavelength-dependent optical attenuation, have long hindered PAT measurement of blood oxygenation at depths beyond a few millimeters. A variety of PAT methods have been developed to improve the accuracy of blood oxygenation measurement, using novel laser illumination schemes, oxygen-sensitive fluorescent dyes, comprehensive mathematical models, or prior information provided by complementary imaging modalities. These novel methods have made exciting progress, while several challenges remain. This concise review aims to introduce recent developments in photoacoustic blood oxygenation measurement, compare each method's advantages and limitations, highlight representative applications, and discuss the remaining challenges for future advances.
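The baseline computation underlying all the oxygenation methods reviewed above is linear spectral unmixing: PA amplitudes at two or more wavelengths are decomposed into oxy- and deoxyhemoglobin contributions. The sketch below assumes a wavelength-flat fluence (the very assumption these methods try to remove) and uses illustrative extinction coefficients, not tabulated reference values:

```python
import numpy as np

def unmix_so2(pa, E):
    """Least-squares spectral unmixing of PA amplitudes into [HbO2, Hb].

    pa : (n_wavelengths,) PA amplitudes; fluence is assumed flat across
         wavelengths, which breaks down at depth
    E  : (n_wavelengths, 2) extinction coefficients, columns [HbO2, Hb]
    Returns blood oxygen saturation sO2 = c_HbO2 / (c_HbO2 + c_Hb).
    """
    c, *_ = np.linalg.lstsq(E, pa, rcond=None)
    return c[0] / (c[0] + c[1])

# illustrative extinction coefficients at two near-infrared wavelengths
# (order-of-magnitude values only; real work must use tabulated spectra)
E = np.array([[518.0, 1405.0],     # ~750 nm
              [1058.0, 691.0]])    # ~850 nm

true_so2 = 0.85
conc = np.array([true_so2, 1 - true_so2])
pa = E @ conc                      # noiseless synthetic two-wavelength signal
so2_hat = unmix_so2(pa, E)         # recovers true_so2 in this idealized case
```

At depth, the wavelength-dependent fluence multiplies `pa` nonuniformly, biasing `so2_hat`; correcting for that is exactly what the reviewed illumination, dye, and model-based methods address.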
Collapse
Affiliation(s)
| | | | - Junjie Yao
- Photoacoustic Imaging Laboratory, Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
| |
Collapse
|
146
|
Kirchner T, Gröhl J, Maier-Hein L. Context encoding enables machine learning-based quantitative photoacoustics. JOURNAL OF BIOMEDICAL OPTICS 2018; 23:1-9. [PMID: 29777580 PMCID: PMC7138258 DOI: 10.1117/1.jbo.23.5.056008] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/08/2017] [Accepted: 04/25/2018] [Indexed: 05/02/2023]
Abstract
Real-time monitoring of functional tissue parameters, such as local blood oxygenation, based on optical imaging could provide groundbreaking advances in the diagnosis and interventional therapy of various diseases. Although photoacoustic (PA) imaging is a modality with great potential to measure optical absorption deep inside tissue, quantification of the measurements remains a major challenge. We introduce the first machine learning-based approach to quantitative PA imaging (qPAI), which relies on learning the fluence in a voxel to deduce the corresponding optical absorption. The method encodes relevant information of the measured signal and the characteristics of the imaging system in voxel-based feature vectors, which allow the generation of thousands of training samples from a single simulated PA image. Comprehensive in silico experiments suggest that context encoding-qPAI enables highly accurate and robust quantification of the local fluence and thereby the optical absorption from PA images.
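The per-voxel learning setup described above can be caricatured in a few lines: build a feature vector per voxel, regress the fluence, then divide the signal by the estimated fluence to recover absorption. The real method uses much richer context-encoding features and a nonlinear regressor; this linear fit on log-fluence with an invented exponential-decay model is only to show the structure of the pipeline:

```python
import numpy as np

# toy per-voxel data: depth-dependent fluence, absorption as the target
rng = np.random.default_rng(2)
depth = rng.uniform(0, 3, 500)                 # cm, a feature of each voxel
mu_a = rng.uniform(0.1, 1.0, 500)              # cm^-1, quantity to recover
fluence = np.exp(-0.5 * depth)                 # simplistic decay (assumption)
signal = mu_a * fluence                        # PA amplitude ~ mu_a * fluence

# "learn" the fluence from voxel features (here just [1, depth]) by
# linear regression on log-fluence, then invert for absorption
X = np.stack([np.ones_like(depth), depth], axis=1)
w, *_ = np.linalg.lstsq(X, np.log(fluence), rcond=None)
fluence_hat = np.exp(X @ w)
mu_a_hat = signal / fluence_hat                # quantitative absorption
```

Because each voxel contributes one training sample, a single simulated image yields thousands of samples, which is the data-efficiency point the abstract makes.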
Collapse
Affiliation(s)
- Thomas Kirchner
- German Cancer Research Center (DKFZ), Division of Computer Assisted Medical Interventions (CAMI), Heidelberg, Germany
- Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
| | - Janek Gröhl
- German Cancer Research Center (DKFZ), Division of Computer Assisted Medical Interventions (CAMI), Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
| | - Lena Maier-Hein
- German Cancer Research Center (DKFZ), Division of Computer Assisted Medical Interventions (CAMI), Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
| |
Collapse
|
147
|
Approximate k-Space Models and Deep Learning for Fast Photoacoustic Reconstruction. MACHINE LEARNING FOR MEDICAL IMAGE RECONSTRUCTION 2018. [DOI: 10.1007/978-3-030-00129-2_12] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
148
|
Ansari R, Zhang EZ, Desjardins AE, Beard PC. All-optical forward-viewing photoacoustic probe for high-resolution 3D endoscopy. LIGHT, SCIENCE & APPLICATIONS 2018; 7:75. [PMID: 30323927 PMCID: PMC6177463 DOI: 10.1038/s41377-018-0070-5] [Citation(s) in RCA: 91] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/03/2018] [Revised: 08/29/2018] [Accepted: 08/30/2018] [Indexed: 05/03/2023]
Abstract
A miniature forward-viewing endoscopic probe that provides high-resolution 3D photoacoustic images is demonstrated. The probe has an outer diameter of 3.2 mm and comprises a transparent Fabry-Pérot (FP) polymer-film ultrasound sensor located at the distal end of a rigid optical fiber bundle. Excitation laser pulses are coupled simultaneously into all cores of the bundle and are transmitted through the FP sensor to provide wide-field tissue illumination at the distal end. The resulting photoacoustic waves are mapped in 2D by sequentially scanning the input end of the bundle with an interrogation laser beam in order to individually address different points on the FP sensor. In this way, the sensor acts as a high-density ultrasound array comprising 50,000 individual elements, each 12 µm in diameter, within the 3.2 mm diameter footprint of the probe. The fine spatial sampling that this affords, along with the wide bandwidth (f−3dB = 34 MHz) of the sensor, enables a high-resolution photoacoustic image to be reconstructed. The measured on-axis lateral resolution of the probe was depth-dependent, ranging from 45 to 170 µm for depths between 1 and 7 mm, and the vertical resolution was 31 µm over the same depth range. The system was evaluated by acquiring 3D images of absorbing phantoms and the microvascular anatomies of a duck embryo and mouse skin. Excellent image fidelity was demonstrated. It is anticipated that this type of probe could find application as a tool for guiding laparoscopic procedures, fetal surgery and other minimally invasive interventions that require a millimeter-scale forward-viewing 3D photoacoustic imaging probe.
Collapse
Affiliation(s)
- Rehman Ansari
- Department of Medical Physics and Biomedical Engineering, University College London, Gower Street, London, WC1E 6BT UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Charles Bell House, 67-73 Riding House Street, London, W1W 7EJ UK
| | - Edward Z. Zhang
- Department of Medical Physics and Biomedical Engineering, University College London, Gower Street, London, WC1E 6BT UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Charles Bell House, 67-73 Riding House Street, London, W1W 7EJ UK
| | - Adrien E. Desjardins
- Department of Medical Physics and Biomedical Engineering, University College London, Gower Street, London, WC1E 6BT UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Charles Bell House, 67-73 Riding House Street, London, W1W 7EJ UK
| | - Paul C. Beard
- Department of Medical Physics and Biomedical Engineering, University College London, Gower Street, London, WC1E 6BT UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Charles Bell House, 67-73 Riding House Street, London, W1W 7EJ UK
| |
Collapse
|