1
Lan H, Huang L, Wei X, Li Z, Lv J, Ma C, Nie L, Luo J. Masked cross-domain self-supervised deep learning framework for photoacoustic computed tomography reconstruction. Neural Netw 2024;179:106515. PMID: 39032393. DOI: 10.1016/j.neunet.2024.106515. Received 10/22/2023; revised 06/24/2024; accepted 07/05/2024; indexed 07/23/2024.
Abstract
Accurate image reconstruction is crucial for photoacoustic (PA) computed tomography (PACT). Recently, deep learning has been used to reconstruct PA images with a supervised scheme, which requires high-quality images as ground-truth labels. However, practical implementations face an inevitable trade-off between cost and performance, because accessing more measurements requires expensive additional channels. Here, we propose a masked cross-domain self-supervised (CDSS) reconstruction strategy to overcome the lack of ground-truth labels from limited PA measurements. We implement the self-supervised reconstruction in a model-based form. Simultaneously, we take advantage of self-supervision to enforce the consistency of measurements and images across three partitions of the measured PA data, obtained by randomly masking different channels. Our findings indicate that dynamically masking a substantial proportion of channels, such as 80%, yields meaningful self-supervisors in both the image and signal domains. This approach reduces the multiplicity of pseudo-solutions and enables efficient image reconstruction from fewer PA measurements, ultimately minimizing reconstruction error. Experimental results on an in-vivo PACT dataset of mice demonstrate the potential of our self-supervised framework. Moreover, our method achieves a structural similarity index (SSIM) of 0.87 in an extremely sparse case using only 13 channels, outperforming the supervised scheme with 16 channels (0.77 SSIM). In addition, our method can be deployed on different trainable models in an end-to-end manner, further enhancing its versatility and applicability.
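The channel-masking idea at the core of this abstract is easy to sketch. The snippet below is an illustrative reading, not the authors' implementation: the sinogram shape, the 80% mask ratio, and zero-filling of masked channels are all assumptions (with 64 channels, an 80% mask happens to leave 13 visible, the same count as the extreme sparse case quoted above).

```python
import numpy as np

def mask_channels(sinogram, mask_ratio=0.8, rng=None):
    """Randomly hide a fraction of transducer channels in a PA sinogram.

    Returns the masked sinogram (hidden rows zero-filled), the hidden
    rows themselves (usable as a self-supervision target), and the
    indices of the channels that remain visible.
    """
    rng = np.random.default_rng(rng)
    n_channels = sinogram.shape[0]
    n_masked = int(round(mask_ratio * n_channels))
    perm = rng.permutation(n_channels)
    masked_idx, visible_idx = perm[:n_masked], perm[n_masked:]
    masked_sino = sinogram.copy()
    masked_sino[masked_idx] = 0.0    # hide these channels from the network
    target = sinogram[masked_idx]    # held out as the self-supervisor
    return masked_sino, target, np.sort(visible_idx)

# Toy sinogram: 64 channels x 128 time samples; an 80% mask keeps 13 channels.
sino = np.random.default_rng(0).standard_normal((64, 128))
masked_sino, target, visible_idx = mask_channels(sino, mask_ratio=0.8, rng=1)
print(len(visible_idx))  # -> 13
```

Repeating this masking with fresh random partitions each training step is what makes the supervision "dynamic".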
Affiliation(s)
- Hengrong Lan
- School of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Lijie Huang
- School of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Xingyue Wei
- School of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Zhiqiang Li
- School of Biomedical Engineering, Tsinghua University, Beijing 100084, China
- Jing Lv
- Medical Research Institute, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Southern Medical University, Guangzhou 510080, China
- Cheng Ma
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Liming Nie
- Medical Research Institute, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Southern Medical University, Guangzhou 510080, China
- Jianwen Luo
- School of Biomedical Engineering, Tsinghua University, Beijing 100084, China.

2
Meng J, Yu J, Wu Z, Ma F, Zhang Y, Liu C. WSA-MP-Net: Weak-signal-attention and multi-scale perception network for microvascular extraction in optical-resolution photoacoustic microscopy. Photoacoustics 2024;37:100600. PMID: 38516294. PMCID: PMC10955652. DOI: 10.1016/j.pacs.2024.100600. Received 11/15/2023; revised 03/01/2024; accepted 03/05/2024; indexed 03/23/2024.
Abstract
The unique advantage of optical-resolution photoacoustic microscopy (OR-PAM) is its ability to achieve high-resolution microvascular imaging without exogenous agents, which has excellent potential for studying tissue microcirculation. However, tracing and monitoring microvascular morphology and hemodynamics in tissue is challenging, because segmenting microvasculature in OR-PAM images is complicated by the high density, structural complexity, and low contrast of vascular structures. Various microvasculature extraction techniques have been developed over the years, but they have notable limitations: they cannot segment thick and thin blood vessels simultaneously, they cannot address incompleteness and discontinuity in the microvasculature, and open-access datasets for DL-based algorithms are lacking. We have developed a novel segmentation approach that extracts vasculature from OR-PAM images using a deep learning network incorporating a weak-signal attention mechanism and multi-scale perception (WSA-MP-Net) model. The proposed WSA network focuses on weak and tiny vessels, while the MP module extracts features across vessel sizes. In addition, Hessian-matrix enhancement is incorporated into the pre- and post-processing of the network's input and output data to improve vessel continuity. We constructed normal-vessel (NV-ORPAM, 660 data pairs) and tumor-vessel (TV-ORPAM, 1168 data pairs) datasets to verify the performance of the proposed method, and developed a semi-automatic annotation algorithm to obtain the ground truth for network optimization. We successfully applied the optimized model to monitor glioma angiogenesis in mouse brains, demonstrating the feasibility and excellent generalization ability of our model. Compared to previous works, the proposed WSA-MP-Net extracts substantially more microvasculature while maintaining vessel continuity and signal fidelity. In quantitative analysis, the indicator values of our method improved by about 1.3% to 25.9%. We believe the proposed approach provides a promising way to extract a complete and continuous microvascular network from OR-PAM images, enabling its use in many microvascular-related biological studies and medical diagnoses.
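The Hessian-matrix enhancement mentioned above is typically a Frangi-style eigenvalue filter. Below is a minimal single-scale 2D sketch, not the authors' exact pre-/post-processing; the `beta` and `c` values are assumptions.

```python
import numpy as np

def hessian_vesselness(img, beta=0.5, c=0.5):
    """Minimal 2D Frangi-style vesselness from Hessian eigenvalues.

    Bright tubular structures give |l1| << |l2| with l2 < 0.
    """
    gy, gx = np.gradient(img)
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)
    hxy = (hxy + hyx) / 2.0                     # symmetrise the mixed term
    # eigenvalues of [[hxx, hxy], [hxy, hyy]], ordered so |l1| <= |l2|
    tmp = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    mu = (hxx + hyy) / 2.0
    l1, l2 = mu + tmp, mu - tmp
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb2 = (l1 / (l2 + 1e-12)) ** 2              # blobness vs. tubularity
    s2 = l1 ** 2 + l2 ** 2                      # second-order structure strength
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    return np.where(l2 < 0, v, 0.0)             # keep only bright ridges

# A single bright horizontal "vessel" responds strongly along its centerline.
img = np.zeros((21, 21))
img[10, :] = 1.0
v = hessian_vesselness(img)
```

scikit-image's `filters.frangi` provides a multi-scale version of the same idea.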
Affiliation(s)
- Jing Meng
- School of Computer, Qufu Normal University, Rizhao 276826, China
- Jialing Yu
- School of Computer, Qufu Normal University, Rizhao 276826, China
- Zhifeng Wu
- Research Center for Biomedical Optics and Molecular Imaging, Key Laboratory of Biomedical Imaging Science and System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Fei Ma
- School of Computer, Qufu Normal University, Rizhao 276826, China
- Yuanke Zhang
- School of Computer, Qufu Normal University, Rizhao 276826, China
- Chengbo Liu
- Research Center for Biomedical Optics and Molecular Imaging, Key Laboratory of Biomedical Imaging Science and System, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China

3
Poimala J, Cox B, Hauptmann A. Compensating unknown speed of sound in learned fast 3D limited-view photoacoustic tomography. Photoacoustics 2024;37:100597. PMID: 38425677. PMCID: PMC10901832. DOI: 10.1016/j.pacs.2024.100597. Received 04/29/2022; revised 08/15/2023; accepted 02/16/2024; indexed 03/02/2024.
Abstract
Real-time applications in three-dimensional photoacoustic tomography from planar sensors rely on fast reconstruction algorithms that assume the speed of sound (SoS) in the tissue is homogeneous; moreover, reconstruction quality depends on the correct choice of this constant SoS. In this study, we discuss the possibility of ameliorating the problem of unknown or heterogeneous SoS distributions by using learned reconstruction methods. This can be done by modelling the uncertainties in the training data; in addition, a correction term can be included in the learned reconstruction method. We investigate the influence of both and show that, while a learned correction component can further improve reconstruction quality, a careful choice of uncertainties in the training data is the primary factor in overcoming unknown SoS. We support our findings with simulated and in vivo measurements in 3D.
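Modelling SoS uncertainty in the training data can be sketched simply: each simulated training sample is generated with a speed of sound drawn from a range around the nominal value, so the network never sees a single "correct" constant. The range, sampling rate, and geometry below are assumptions for illustration only.

```python
import numpy as np

def sample_training_sos(n_samples, sos_nominal=1500.0, spread=50.0, rng=None):
    """Draw a per-sample speed of sound (m/s) for simulating training data,
    so the learned reconstruction is exposed to a range of plausible SoS."""
    rng = np.random.default_rng(rng)
    return rng.uniform(sos_nominal - spread, sos_nominal + spread, n_samples)

def delay_samples(distance_m, sos_m_s, fs_hz):
    """Time of flight converted to a receive-sample index for a given SoS."""
    return distance_m / sos_m_s * fs_hz

# A 100 m/s SoS spread shifts a 30 mm echo by tens of samples at 40 MHz;
# this is the mismatch the trained network must learn to tolerate.
sos_draws = sample_training_sos(1000, rng=0)
shift = delay_samples(0.03, 1450.0, 40e6) - delay_samples(0.03, 1550.0, 40e6)
```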
Affiliation(s)
- Jenni Poimala
- Research Unit of Mathematical Sciences, University of Oulu, Finland
- Ben Cox
- Department of Medical Physics and Biomedical Engineering, University College London, UK
- Andreas Hauptmann
- Research Unit of Mathematical Sciences, University of Oulu, Finland
- Department of Computer Science, University College London, UK

4
Ylisiurua S, Sipola A, Nieminen MT, Brix MAK. Deep learning enables time-efficient soft tissue enhancement in CBCT: Proof-of-concept study for dentomaxillofacial applications. Phys Med 2024;117:103184. PMID: 38016216. DOI: 10.1016/j.ejmp.2023.103184. Received 04/19/2023; revised 10/06/2023; accepted 11/19/2023; indexed 11/30/2023. Open access.
Abstract
PURPOSE The use of iterative and deep learning reconstruction methods, which would allow effective noise reduction, is limited in cone-beam computed tomography (CBCT); as a consequence, the visibility of soft tissues with CBCT is limited. This study aimed to address this issue with time-efficient deep learning enhancement (DLE) methods. METHODS Two DLE networks, UNIT and U-Net, were trained with simulated CBCT data. The performance of the networks was tested with three different test data sets. The quantitative evaluation measured the structural similarity index measure (SSIM) and the peak signal-to-noise ratio (PSNR) of the DLE reconstructions with respect to the ground-truth iterative reconstruction (IR) method. In the second assessment, a dentomaxillofacial radiologist assessed the resolution of hard tissue structures, visibility of soft tissues, and overall image quality of real patient data using a Likert scale. Finally, technical image quality was determined using modulation transfer function, noise power spectrum, and noise magnitude analyses. RESULTS The study demonstrated that deep learning CBCT denoising is feasible and time efficient. The DLE methods, trained with simulated CBCT data, generalized well, providing noise reduction quantitatively (SSIM/PSNR) and visually similar to conventional IR, but with faster processing. Through noise reduction, the DLE methods improved soft tissue visibility compared to the conventional Feldkamp-Davis-Kress (FDK) algorithm. However, in hard tissue quantification tasks, the radiologist preferred FDK over the DLE methods. CONCLUSION Post-reconstruction DLE achieved feasible reconstruction times while improving soft tissue visibility in each dataset.
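The quantitative metrics used here can be computed as follows. The PSNR definition is standard; `ssim_global` is a single-window simplification of the windowed SSIM such a study would normally use.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio (dB) of img against a reference."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, img, data_range=1.0):
    """Single-window SSIM over the whole image (a simplification of the
    usual locally windowed SSIM)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = ref.mean(), img.mean()
    vx, vy = ref.var(), img.var()
    cov = ((ref - mx) * (img - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = ref + rng.normal(0.0, 0.01, ref.shape)   # mild perturbation
```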
Affiliation(s)
- Sampo Ylisiurua
- Research Unit of Health Sciences and Technology, University of Oulu, Oulu 90220, Finland; Department of Diagnostic Radiology, Oulu University Hospital, Oulu 90220, Finland.
- Annina Sipola
- Medical Research Center, University of Oulu and Oulu University Hospital, Oulu 90220, Finland; Department of Dental Imaging, Oulu University Hospital, Oulu 90220, Finland; Research Unit of Oral Health Sciences, University of Oulu, Oulu 90220, Finland.
- Miika T Nieminen
- Research Unit of Health Sciences and Technology, University of Oulu, Oulu 90220, Finland; Department of Diagnostic Radiology, Oulu University Hospital, Oulu 90220, Finland; Medical Research Center, University of Oulu and Oulu University Hospital, Oulu 90220, Finland.
- Mikael A K Brix
- Research Unit of Health Sciences and Technology, University of Oulu, Oulu 90220, Finland; Department of Diagnostic Radiology, Oulu University Hospital, Oulu 90220, Finland; Medical Research Center, University of Oulu and Oulu University Hospital, Oulu 90220, Finland.

5
Song X, Wang G, Zhong W, Guo K, Li Z, Liu X, Dong J, Liu Q. Sparse-view reconstruction for photoacoustic tomography combining diffusion model with model-based iteration. Photoacoustics 2023;33:100558. PMID: 38021282. PMCID: PMC10658608. DOI: 10.1016/j.pacs.2023.100558. Received 07/08/2023; revised 08/14/2023; accepted 09/16/2023; indexed 12/01/2023.
Abstract
As a non-invasive hybrid biomedical imaging technology, photoacoustic tomography combines the high contrast of optical imaging with the high penetration of acoustic imaging. However, conventional standard reconstruction under sparse views can result in low-quality images. Here, a novel model-based sparse reconstruction method for photoacoustic tomography via a diffusion model is proposed. A score-based diffusion model is designed to learn the prior information of the data distribution. The learned prior is then used as a constraint on the data-consistency term of a least-squares optimization problem in model-based iterative reconstruction, aiming to achieve the optimal solution. Simulated blood-vessel data and in vivo animal experimental data were used to evaluate the performance of the proposed method. The results demonstrate that the proposed method achieves higher-quality sparse reconstruction than conventional reconstruction methods and U-Net. In particular, under extremely sparse projection (e.g., 32 projections), the proposed method improves structural similarity by ∼260% and peak signal-to-noise ratio by ∼30% for in vivo data, compared with the conventional delay-and-sum method. This method has the potential to reduce the acquisition time and cost of photoacoustic tomography, further expanding its range of applications.
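The alternation between a data-consistency step and a learned prior can be sketched as a plug-and-play-style iteration. Below, a trivial identity "denoiser" stands in for the score-based diffusion prior, and a random matrix stands in for the forward model; both are placeholders, not the paper's operators.

```python
import numpy as np

def reconstruct(A, y, denoise, n_iter=100, step=None):
    """Schematic model-based iteration: a least-squares data-consistency
    gradient step, then a prior (denoising) step, each iteration."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2    # <= 1/L ensures stability
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * (A.T @ (A @ x - y))        # data consistency
        x = denoise(x)                            # prior / denoiser plug-in
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40))                 # stand-in forward model
x_true = np.zeros(40)
x_true[::8] = 1.0                                 # sparse "vessel" targets
y = A @ x_true
x_hat = reconstruct(A, y, denoise=lambda x: x, n_iter=1000)
```

With a well-posed stand-in model, the iteration recovers `x_true`; the diffusion prior matters precisely when the sparse-view problem makes the data-consistency term alone insufficient.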
Affiliation(s)
- Wenhua Zhong
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Kangjun Guo
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Zilong Li
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Xuan Liu
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Jiaqing Dong
- School of Information Engineering, Nanchang University, Nanchang 330031, China
- Qiegen Liu
- School of Information Engineering, Nanchang University, Nanchang 330031, China

6
Rix T, Dreher KK, Nölke JH, Schellenberg M, Tizabi MD, Seitel A, Maier-Hein L. Efficient Photoacoustic Image Synthesis with Deep Learning. Sensors (Basel) 2023;23:7085. PMID: 37631628. PMCID: PMC10457787. DOI: 10.3390/s23167085. Received 06/30/2023; revised 07/25/2023; accepted 08/07/2023; indexed 08/27/2023.
Abstract
Photoacoustic imaging potentially allows for the real-time visualization of functional human tissue parameters such as oxygenation but is subject to a challenging underlying quantification problem. While in silico studies have revealed the great potential of deep learning (DL) methodology in solving this problem, the inherent lack of an efficient gold standard method for model training and validation remains a grand challenge. This work investigates whether DL can be leveraged to accurately and efficiently simulate photon propagation in biological tissue, enabling photoacoustic image synthesis. Our approach is based on estimating the initial pressure distribution of the photoacoustic waves from the underlying optical properties using a back-propagatable neural network trained on synthetic data. In proof-of-concept studies, we validated the performance of two complementary neural network architectures, namely a conventional U-Net-like model and a Fourier Neural Operator (FNO) network. Our in silico validation on multispectral human forearm images shows that DL methods can speed up image generation by a factor of 100 when compared to Monte Carlo simulations with 5×10⁸ photons. While the FNO is slightly more accurate than the U-Net, when compared to Monte Carlo simulations performed with a reduced number of photons (5×10⁶), both neural network architectures achieve equivalent accuracy. In contrast to Monte Carlo simulations, the proposed DL models can be used as inherently differentiable surrogate models in the photoacoustic image synthesis pipeline, allowing for back-propagation of the synthesis error and gradient-based optimization over the entire pipeline. Due to their efficiency, they have the potential to enable large-scale training data generation that can expedite the clinical application of photoacoustic imaging.
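The quantity these surrogate networks predict is the initial pressure p0 = Γ·μa·φ. A toy sketch, with a 1D exponential fluence decay standing in for the Monte Carlo (or learned) photon propagation; all parameter values are assumed for illustration.

```python
import numpy as np

def fluence_1d(mu_eff, depth_m, phi0=1.0):
    """Depth-dependent fluence under a 1D exponential decay, a crude
    stand-in for Monte Carlo (or learned) photon propagation."""
    return phi0 * np.exp(-mu_eff * depth_m)

def initial_pressure(mu_a, phi, grueneisen=0.2):
    """p0 = Gamma * mu_a * phi: the map from optical properties to the
    initial pressure that the surrogate networks learn to produce."""
    return grueneisen * mu_a * phi

z = np.linspace(0.0, 0.01, 5)      # depths up to 10 mm
phi = fluence_1d(100.0, z)         # mu_eff = 100 1/m (assumed)
p0 = initial_pressure(2.0, phi)    # mu_a = 2 1/m (assumed)
```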
Affiliation(s)
- Tom Rix
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Faculty of Mathematics and Computer Sciences, Heidelberg University, 69120 Heidelberg, Germany
- Kris K. Dreher
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Faculty of Physics and Astronomy, Heidelberg University, 69120 Heidelberg, Germany
- Jan-Hinrich Nölke
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Faculty of Mathematics and Computer Sciences, Heidelberg University, 69120 Heidelberg, Germany
- Melanie Schellenberg
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Faculty of Mathematics and Computer Sciences, Heidelberg University, 69120 Heidelberg, Germany
- HIDSS4Health—Helmholtz Information and Data Science School for Health, 69120 Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, 69120 Heidelberg, Germany
- Minu D. Tizabi
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, 69120 Heidelberg, Germany
- Alexander Seitel
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, 69120 Heidelberg, Germany
- Lena Maier-Hein
- Division of Intelligent Medical Systems, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 223, 69120 Heidelberg, Germany
- Faculty of Mathematics and Computer Sciences, Heidelberg University, 69120 Heidelberg, Germany
- HIDSS4Health—Helmholtz Information and Data Science School for Health, 69120 Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, 69120 Heidelberg, Germany
- Medical Faculty, Heidelberg University, 69120 Heidelberg, Germany

7
Wang T, Chen C, Shen K, Liu W, Tian C. Streak artifact suppressed back projection for sparse-view photoacoustic computed tomography. Appl Opt 2023;62:3917-3925. PMID: 37706701. DOI: 10.1364/ao.487957. Received 02/16/2023; accepted 04/21/2023; indexed 09/15/2023.
Abstract
The development of fast and accurate image reconstruction algorithms under constrained data-acquisition conditions is important for photoacoustic computed tomography (PACT). Sparse-view measurements have been used to accelerate data acquisition and reduce system complexity; however, the reconstructed images suffer from sparsity-induced streak artifacts. In this paper, a modified back-projection (BP) method, termed anti-streak BP, is proposed to suppress streak artifacts in sparse-view PACT reconstruction. During reconstruction, anti-streak BP identifies back-projection terms contaminated by high-intensity sources using an outlier detection method, then adaptively adjusts the weights of the contaminated terms to eliminate the effects of those sources. The proposed anti-streak BP method is compared with the conventional BP method on both simulation and in vivo data. Anti-streak BP shows substantially fewer artifacts in the reconstructed images, with a streak index 54% and 20% lower than conventional BP on simulation and in vivo data, respectively, for a transducer number N=128. Anti-streak BP is thus a powerful improvement over BP, with added artifact-suppression capability.
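The outlier detection and adaptive re-weighting of back-projection terms can be illustrated with a median/MAD rule. This captures the spirit of anti-streak BP only; the paper's actual detection statistic and weighting scheme may differ.

```python
import numpy as np

def robust_bp_term_average(contribs, k=3.0):
    """Average per-channel back-projection contributions for one pixel,
    down-weighting terms flagged as outliers by a median/MAD rule."""
    contribs = np.asarray(contribs, dtype=float)
    med = np.median(contribs)
    mad = np.median(np.abs(contribs - med)) + 1e-12
    z = np.abs(contribs - med) / (1.4826 * mad)   # robust z-score
    w = np.ones_like(z)
    outlier = z > k
    w[outlier] = k / z[outlier]                   # shrink outlier weights
    return float(np.sum(w * contribs) / np.sum(w))

# 15 consistent terms near 1 plus one streak-causing spike from a
# high-intensity source: the plain mean is pulled above 4, while the
# robust average stays near 1.
c = np.ones(16)
c[0] = 50.0
```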
8
Lan H, Yang C, Gao F. A jointed feature fusion framework for photoacoustic image reconstruction. Photoacoustics 2023;29:100442. PMID: 36589516. PMCID: PMC9798177. DOI: 10.1016/j.pacs.2022.100442. Received 11/16/2022; accepted 12/19/2022; indexed 06/17/2023.
Abstract
Standard reconstruction of photoacoustic (PA) computed tomography (PACT) images can produce artifacts due to interference or an ill-posed setup. Recently, deep learning has been used to reconstruct PA images under ill-posed conditions. In this paper, we propose a jointed feature fusion framework (JEFF-Net) based on deep learning to reconstruct PA images using limited-view data. Cross-domain features from limited-view position-wise data and the reconstructed image are fused by backtracked supervision. A quarter of the position-wise data (32 channels) is fed into the model, which outputs the remaining three-quarters-view data (96 channels). Moreover, two novel losses are designed to restrain artifacts by sufficiently manipulating the superposed data. Experimental results demonstrate superior performance, and quantitative evaluations show that our proposed method outperformed the ground truth in some metrics, with improvements of 135% (SSIM, simulation) and 40% (gCNR, in vivo).
Affiliation(s)
- Hengrong Lan
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Changchun Yang
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Fei Gao
- Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
- Shanghai Clinical Research and Trial Center, Shanghai 201210, China

9
Wang T, He M, Shen K, Liu W, Tian C. Learned regularization for image reconstruction in sparse-view photoacoustic tomography. Biomed Opt Express 2022;13:5721-5737. PMID: 36733736. PMCID: PMC9872879. DOI: 10.1364/boe.469460. Received 07/07/2022; revised 09/07/2022; accepted 10/01/2022; indexed 06/18/2023.
Abstract
Constrained data acquisition, such as sparse-view measurement, is sometimes used in photoacoustic computed tomography (PACT) to accelerate data acquisition. However, it is challenging to reconstruct high-quality images under such scenarios. Iterative image reconstruction with regularization is a typical choice to solve this problem, but it suffers from image artifacts. In this paper, we present a learned regularization method to suppress image artifacts in model-based iterative reconstruction for sparse-view PACT. A lightweight dual-path network is designed to learn regularization features from both the data and image domains. The network is trained and tested on both simulation and in vivo datasets and compared with other methods such as Tikhonov regularization, total-variation regularization, and a U-Net-based post-processing approach. Results show that although the learned regularization network is only 0.15% the size of a U-Net, it outperforms the other methods and converges after as few as five iterations, taking less than one-third of the time of conventional methods. Moreover, the proposed reconstruction method incorporates the physical model of photoacoustic imaging and exploits structural information from the training datasets. The integration of deep learning with a physical model can potentially achieve improved imaging performance in practice.
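Tikhonov regularization, the classical baseline compared against here, has a closed form that makes a useful reference point; the forward model and regularization weights below are illustrative stand-ins.

```python
import numpy as np

def tikhonov(A, y, lam):
    """Tikhonov-regularized least squares:
        x* = argmin ||A x - y||^2 + lam * ||x||^2
           = (A^T A + lam I)^{-1} A^T y
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40))   # stand-in system matrix
x_true = np.zeros(40)
x_true[::8] = 1.0
y = A @ x_true
x_light = tikhonov(A, y, 1e-6)      # barely regularized: near-exact recovery
x_heavy = tikhonov(A, y, 1e3)       # heavily regularized: shrunken solution
```

A learned regularizer replaces the fixed `lam * ||x||^2` penalty with features fitted to training data, which is where the reported quality gains come from.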
Affiliation(s)
- Tong Wang
- School of Physical Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Menghui He
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui 230088, China
- Kang Shen
- School of Engineering Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Wen Liu
- School of Physical Science, University of Science and Technology of China, Hefei, Anhui 230026, China
- Chao Tian
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui 230088, China
- School of Engineering Science, University of Science and Technology of China, Hefei, Anhui 230026, China

10
Madasamy A, Gujrati V, Ntziachristos V, Prakash J. Deep learning methods hold promise for light fluence compensation in three-dimensional optoacoustic imaging. J Biomed Opt 2022;27:106004. PMID: 36209354. PMCID: PMC9547608. DOI: 10.1117/1.jbo.27.10.106004. Received 04/19/2022; accepted 08/30/2022; indexed 06/16/2023.
Abstract
SIGNIFICANCE Quantitative optoacoustic imaging (QOAI) remains a challenge due to the influence of the nonlinear optical fluence distribution, which distorts the optoacoustic image representation. Nonlinear optical fluence correction in OA imaging is highly ill-posed, leading to inaccurate recovery of optical absorption maps. This work aims to recover optical absorption maps using a deep learning (DL) approach that corrects for the fluence effect. AIM Different DL models were compared and investigated to enable recovery of the optical absorption coefficient at a particular wavelength in a non-homogeneous foreground and background medium. APPROACH Data-driven models were trained with two-dimensional (2D) blood vessel and three-dimensional (3D) numerical breast phantoms with highly heterogeneous, realistic structures to correct for the nonlinear optical fluence distribution. The trained DL models, namely U-Net, Fully Dense (FD) U-Net, Y-Net, FD Y-Net, deep residual U-Net (Deep ResU-Net), and a generative adversarial network (GAN), were tested to evaluate the performance of optical absorption coefficient recovery (fluence compensation) with in-silico and in-vivo datasets. RESULTS The results indicated that FD U-Net-based deconvolution improves peak signal-to-noise ratio by about 10% over the reconstructed optoacoustic images. Further, the DL models can highlight deep-seated structures with higher contrast due to fluence compensation. Importantly, the DL models were found to be about 17 times faster than solving the diffusion equation for fluence correction. CONCLUSIONS The DL methods compensated for the nonlinear optical fluence distribution more effectively and improved optoacoustic image quality.
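For intuition, the classical physics-based alternative that the DL models are benchmarked against can be sketched as a fixed-point fluence correction. The 1D Beer-Lambert decay below is a crude stand-in for the diffusion-equation solve mentioned in the abstract; units and parameter values are illustrative.

```python
import numpy as np

def fluence_correct_1d(p0, z, grueneisen=0.2, n_iter=40):
    """Fixed-point fluence correction in 1D: alternately estimate the
    fluence from the current absorption map (Beer-Lambert decay) and
    update mu_a = p0 / (Gamma * phi)."""
    mu_a = np.ones_like(p0)
    dz = z[1] - z[0]
    for _ in range(n_iter):
        phi = np.exp(-np.cumsum(mu_a) * dz)   # attenuation down to each depth
        mu_a = p0 / (grueneisen * phi + 1e-12)
    return mu_a

# Ground-truth absorber (an embedded high-absorption layer), its fluence,
# and the measured p0; the iteration recovers mu_a despite the decay,
# while the naive estimate p0 / Gamma underestimates it at depth.
z = np.linspace(0.0, 1.0, 100)                       # depth (assumed cm)
mu_true = np.where((z > 0.4) & (z < 0.6), 5.0, 1.0)  # assumed 1/cm
phi_true = np.exp(-np.cumsum(mu_true) * (z[1] - z[0]))
p0 = 0.2 * mu_true * phi_true
mu_est = fluence_correct_1d(p0, z)
```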
Affiliation(s)
- Arumugaraj Madasamy
- Indian Institute of Science, Department of Instrumentation and Applied Physics, Bengaluru, Karnataka, India
- Vipul Gujrati
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München (GmbH), Neuherberg, Germany
- Technical University of Munich, School of Medicine, Chair of Biological Imaging, Munich, Germany
- Vasilis Ntziachristos
- Institute of Biological and Medical Imaging, Helmholtz Zentrum München (GmbH), Neuherberg, Germany
- Technical University of Munich, School of Medicine, Chair of Biological Imaging, Munich, Germany
- Technical University of Munich, Munich Institute of Robotics and Machine Intelligence (MIRMI), Munich, Germany
- Jaya Prakash
- Indian Institute of Science, Department of Instrumentation and Applied Physics, Bengaluru, Karnataka, India

11
Shi M, Zhao T, West SJ, Desjardins AE, Vercauteren T, Xia W. Improving needle visibility in LED-based photoacoustic imaging using deep learning with semi-synthetic datasets. Photoacoustics 2022;26:100351. PMID: 35495095. PMCID: PMC9048160. DOI: 10.1016/j.pacs.2022.100351. Received 01/05/2022; revised 03/29/2022; accepted 03/30/2022; indexed 06/14/2023.
Abstract
Photoacoustic imaging has shown great potential for guiding minimally invasive procedures through accurate identification of critical tissue targets and invasive medical devices (such as metallic needles). The use of light-emitting diodes (LEDs) as excitation light sources accelerates its clinical translation owing to their affordability and portability. However, needle visibility in LED-based photoacoustic imaging is compromised primarily by low optical fluence. In this work, we propose a deep learning framework based on U-Net to improve the visibility of clinical metallic needles with an LED-based photoacoustic and ultrasound imaging system. To address the complexity of capturing ground truth for real data and the poor realism of purely simulated data, the framework includes the generation of semi-synthetic training datasets that combine simulated data, representing features from the needles, with in vivo measurements of the tissue background. Evaluation of the trained neural network was performed with needle insertions into blood-vessel-mimicking phantoms, pork joint tissue ex vivo, and measurements on human volunteers. The framework substantially improved needle visibility in photoacoustic imaging in vivo compared to conventional reconstruction by suppressing background noise and image artefacts, achieving 5.8- and 4.5-fold improvements in signal-to-noise ratio and modified Hausdorff distance, respectively. The proposed framework could thus help reduce complications during percutaneous needle insertions through accurate identification of clinical needles in photoacoustic imaging.
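The semi-synthetic composition idea (simulated needle signal overlaid on a real in vivo background) can be sketched as follows; the composition is hypothetical, and the actual pipeline may randomize placement and scaling differently.

```python
import numpy as np

def make_semi_synthetic(background, needle_sim, alpha=1.0, rng=None):
    """Compose a semi-synthetic training pair: a simulated needle signal
    placed at a random lateral position on a real in vivo background.
    Returns the mixed network input and the needle-only target."""
    rng = np.random.default_rng(rng)
    shift = int(rng.integers(0, background.shape[1]))  # random placement
    needle = np.roll(needle_sim, shift, axis=1)
    mixed = background + alpha * needle
    return mixed, needle

rng = np.random.default_rng(0)
background = rng.random((32, 32))                # stands in for an in vivo frame
needle_sim = np.zeros((32, 32))
needle_sim[np.arange(32), np.arange(32)] = 2.0   # a simulated oblique needle
mixed, target = make_semi_synthetic(background, needle_sim, rng=1)
```

Because the needle-only target is known exactly by construction, no manual ground-truth annotation of real frames is needed.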
Collapse
Affiliation(s)
- Mengjie Shi
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
- Tianrui Zhao
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
- Simeon J. West
- Department of Anaesthesia, University College Hospital, London NW1 2BU, United Kingdom
- Adrien E. Desjardins
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TY, United Kingdom
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, United Kingdom
- Tom Vercauteren
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
- Wenfeng Xia
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, United Kingdom
12
Ly CD, Nguyen VT, Vo TH, Mondal S, Park S, Choi J, Vu TTH, Kim CS, Oh J. Full-view in vivo skin and blood vessels profile segmentation in photoacoustic imaging based on deep learning. Photoacoustics 2022; 25:100310. [PMID: 34824975 PMCID: PMC8603312 DOI: 10.1016/j.pacs.2021.100310]
Abstract
Photoacoustic (PA) microscopy allows imaging of soft biological tissue with optical absorption contrast and ultrasonic spatial resolution. One of its major applications is the characterization of microvasculature. However, the strong PA signal from the skin layer overshadows the subcutaneous blood vessels, complicating PA image reconstruction in human studies. To address this, we examined an automatic deep learning (DL) algorithm to achieve high-resolution, high-contrast segmentation and thereby widen the applications of PA imaging. We propose a DL model based on a modified U-Net for extracting the relationship between the amplitudes of the PA signals generated from the skin and from the underlying vessels. This study illustrates the broader potential of hybrid complex networks as automatic segmentation tools for in vivo PA imaging. Our DL-based solution outperforms previous studies, achieving real-time semantic segmentation on large, high-resolution PA images.
Affiliation(s)
- Cao Duong Ly
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Van Tu Nguyen
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Tan Hung Vo
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Sudip Mondal
- New-senior Healthcare Innovation Center (BK21 Plus), Pukyong National University, Busan 48513, Republic of Korea
- Sumin Park
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Jaeyeop Choi
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Ohlabs Corp, Busan 48513, Republic of Korea
- Thi Thu Ha Vu
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Chang-Seok Kim
- Department of Cogno-Mechatronics Engineering, Pusan National University, Busan 46241, Republic of Korea
- Junghwan Oh
- Industry 4.0 Convergence Bionics Engineering, Pukyong National University, Republic of Korea
- Department of Biomedical Engineering, Pukyong National University, Busan 48513, Republic of Korea
- Ohlabs Corp, Busan 48513, Republic of Korea
- New-senior Healthcare Innovation Center (BK21 Plus), Pukyong National University, Busan 48513, Republic of Korea
13
Design of Metaheuristic Optimization-Based Vascular Segmentation Techniques for Photoacoustic Images. Contrast Media Mol Imaging 2022; 2022:4736113. [PMID: 35173560 PMCID: PMC8818398 DOI: 10.1155/2022/4736113]
Abstract
Biomedical imaging technologies are designed to offer functional, anatomical, and molecular details of the internal organs. Photoacoustic imaging (PAI) is gaining attention among researchers and in industry, and has proven useful in several brain and cancer imaging applications, including prostate, breast, and ovarian cancer. At the same time, vessel images hold important medical details that support a qualified diagnosis, and recently developed image processing techniques can be employed to segment vessels. Since vessel segmentation on PAI is a difficult process, this paper employs metaheuristic optimization-based vascular segmentation techniques for PAI. The proposed model involves two distinct vessel segmentation approaches: Shannon’s entropy function (SEF) and multilevel Otsu thresholding (MLOT). Moreover, the threshold value and entropy function in the segmentation process are optimized using three metaheuristics: the cuckoo search (CS), equilibrium optimizer (EO), and harmony search (HS) algorithms. A detailed experimental analysis is made on a benchmark PAI dataset, and the results are inspected under varying aspects. The obtained results point to the supremacy of the presented model, with a higher accuracy of 98.71%.
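The MLOT idea above can be sketched as follows: choose the thresholds that maximize Otsu's between-class variance of the image histogram. This minimal pure-Python version searches the two-threshold case exhaustively, whereas the paper replaces this search with the CS, EO, and HS metaheuristics; the function name and histogram input are illustrative assumptions, not the authors' code:

```python
def multilevel_otsu(hist):
    """Exhaustive two-threshold Otsu on a grey-level histogram `hist`
    (list of counts): maximize the between-class variance of the
    three resulting classes [0,t1), [t1,t2), [t2,L)."""
    total = sum(hist)
    probs = [h / total for h in hist]
    mu_total = sum(i * p for i, p in enumerate(probs))

    def class_stats(lo, hi):  # weight and mean of class [lo, hi)
        w = sum(probs[lo:hi])
        if w == 0:
            return 0.0, 0.0
        return w, sum(i * probs[i] for i in range(lo, hi)) / w

    best, best_var = None, -1.0
    for t1 in range(1, len(hist) - 1):
        for t2 in range(t1 + 1, len(hist)):
            var = 0.0
            for lo, hi in ((0, t1), (t1, t2), (t2, len(hist))):
                w, mu = class_stats(lo, hi)
                var += w * (mu - mu_total) ** 2
            if var > best_var:
                best_var, best = var, (t1, t2)
    return best
```

For a trimodal histogram the returned thresholds fall in the gaps between the modes; a metaheuristic simply explores the same objective with far fewer evaluations.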
14
Sathyanarayana SG, Wang Z, Sun N, Ning B, Hu S, Hossack JA. Recovery of Blood Flow From Undersampled Photoacoustic Microscopy Data Using Sparse Modeling. IEEE Trans Med Imaging 2022; 41:103-120. [PMID: 34388091 DOI: 10.1109/tmi.2021.3104521]
Abstract
Photoacoustic microscopy (PAM) leverages the optical absorption contrast of blood hemoglobin for high-resolution, multi-parametric imaging of the microvasculature in vivo. However, to quantify the blood flow speed, dense spatial sampling is required to assess blood flow-induced loss of correlation of sequentially acquired A-line signals, resulting in increased laser pulse repetition rate and consequently optical fluence. To address this issue, we have developed a sparse modeling approach for blood flow quantification based on downsampled PAM data. Evaluation of its performance both in vitro and in vivo shows that this sparse modeling method can accurately recover the substantially downsampled data (up to 8 times) for correlation-based blood flow analysis, with a relative error of 12.7 ± 6.1 % across 10 datasets in vitro and 12.7 ± 12.1 % in vivo for data downsampled 8 times. Reconstruction with the proposed method is on par with recovery using compressive sensing, which exhibits an error of 12.0 ± 7.9 % in vitro and 33.86 ± 26.18 % in vivo for data downsampled 8 times. Both methods outperform bicubic interpolation, which shows an error of 15.95 ± 9.85 % in vitro and 110.7 ± 87.1 % in vivo for data downsampled 8 times.
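The correlation analysis underlying such flow estimation can be illustrated with a small sketch: compute the mean Pearson correlation between A-line pairs at increasing time lags. Faster flow decorrelates the signal faster, so the decay of this curve encodes speed. This is a simplified pure-Python illustration of the general principle, not the paper's sparse-modeling algorithm (function names are my own):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0.0 or sy == 0.0:  # constant A-lines carry no correlation info
        return 0.0
    return sxy / (sx * sy)

def decorrelation_curve(alines):
    """Mean correlation of A-line pairs separated by lag k = 1..n-1;
    faster flow makes this curve decay faster with k."""
    n = len(alines)
    return [
        sum(pearson(alines[i], alines[i + k]) for i in range(n - k)) / (n - k)
        for k in range(1, n)
    ]
```

With dense sampling the curve is well resolved; the paper's contribution is recovering it after the raw data have been downsampled up to 8 times.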
15
Zheng H. Detection of Tibiofemoral Joint Injury in High-Impact Motion Based on Neural Network Reconstruction Algorithm. J Healthc Eng 2021; 2021:5800893. [PMID: 34900197 PMCID: PMC8654531 DOI: 10.1155/2021/5800893]
Abstract
To reduce the damage to joint bones, ligaments, and soft tissues caused by the high impact on the tibiofemoral joint during landing, a method for detecting tibiofemoral joint injury under high-impact motion based on a neural network reconstruction algorithm is proposed. Two-dimensional X-ray images of the knee joint from extension to flexion were acquired in 10 healthy volunteers. CT scans were performed on the knee joint of the same side, and a 3D model was reconstructed from the acquired images. The full-degree-of-freedom kinematics of the femur relative to the tibia were measured by registering the 3D model with the 2D images. The results showed that in the extended position, the femur was rotated inward (5.5° ± 6.3°) relative to the tibia. From extension to 90° of flexion, the range of femoral external rotation was 18.7° ± 5.9°; however, from 90° to 120°, a small amount of internal rotation occurred (1.4° ± 1.9°). Over the whole flexion process the femur therefore rotated 17.3° ± 6.9°, of which 10.0° ± 5.6° occurred between the extended position and 15°. Damage in different areas was determined from the magnitude of the interlayer displacement, using a sample-space-reduction method for the sample size. The detection method based on the neural network reconstruction algorithm is shown to have high accuracy and consistency.
Affiliation(s)
- Hongbo Zheng
- PE Department of Shenyang Pharmaceutical University, Shenyang 110016, China
16
Photoacoustic imaging aided with deep learning: a review. Biomed Eng Lett 2021; 12:155-173. [DOI: 10.1007/s13534-021-00210-y]
17
Hsu KT, Guan S, Chitnis PV. Comparing Deep Learning Frameworks for Photoacoustic Tomography Image Reconstruction. Photoacoustics 2021; 23:100271. [PMID: 34094851 PMCID: PMC8165448 DOI: 10.1016/j.pacs.2021.100271]
Abstract
Conventional reconstruction methods for photoacoustic images are not well suited to sparse sensing and geometrically limited detection. To overcome these challenges and enhance reconstruction quality, several learning-based methods have recently been introduced for photoacoustic tomography reconstruction. The goal of this study is to systematically compare and evaluate recently proposed learning-based methods and modified networks for photoacoustic image reconstruction. Specifically, learning-based post-processing methods and model-based learned iterative reconstruction methods are investigated. In addition to comparing the models' inherent differences, we also study the impact of different inputs on reconstruction quality. Our results demonstrate that reconstruction performance mainly stems from the effective amount of information carried by the input; the inherent differences among models based on learning-based post-processing do not yield significant differences in photoacoustic image reconstruction. Furthermore, the results indicate that the model-based learned iterative reconstruction method outperforms all learning-based post-processing methods in terms of generalizability and robustness.
18
Prakash J, Kalva SK, Pramanik M, Yalavarthy PK. Binary photoacoustic tomography for improved vasculature imaging. J Biomed Opt 2021; 26:JBO-210135R. [PMID: 34405599 PMCID: PMC8370884 DOI: 10.1117/1.jbo.26.8.086004]
Abstract
SIGNIFICANCE The proposed binary tomography approach recovers vasculature structures accurately, which could enable its use in scenarios such as therapy monitoring and hemorrhage detection in different organs. AIM Photoacoustic tomography (PAT) involves the reconstruction of vascular networks, with direct implications for cancer research, cardiovascular studies, and neuroimaging. Various methods have been proposed for recovering vascular networks in photoacoustic imaging; however, most are two-step (image reconstruction followed by image segmentation) in nature. We propose a binary PAT approach wherein direct reconstruction of the vascular network from the acquired photoacoustic sinogram data is plausible. APPROACH The binary tomography approach relies on solving a dual-optimization problem to reconstruct images in which every pixel takes a binary outcome (i.e., either background or absorber). The approach was compared against backprojection, Tikhonov regularization, and sparse recovery-based schemes. RESULTS Numerical simulations, a physical phantom experiment, and in-vivo rat brain vasculature data were used to compare the performance of the different algorithms. On in-silico data, the binary tomography approach improved vasculature recovery by 10% in terms of the Dice similarity coefficient relative to the other reconstruction methods. CONCLUSION The proposed algorithm demonstrates superior vasculature recovery with limited data, both visually and by quantitative image metrics.
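The Dice similarity coefficient used in the evaluation above is a standard overlap metric between binary masks; a minimal pure-Python sketch (illustrative, not the authors' code):

```python
def dice(a, b):
    """Dice similarity coefficient between two flat binary masks:
    twice the intersection over the sum of mask sizes."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    size = sum(a) + sum(b)
    return 2.0 * inter / size if size else 1.0

print(dice([1, 1, 0, 0], [1, 0, 0, 0]))  # 2*1/(2+1) ~ 0.667
```

A value of 1.0 means perfect overlap between reconstructed and reference vasculature; 0.0 means none.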
Affiliation(s)
- Jaya Prakash
- Indian Institute of Science, Department of Instrumentation and Applied Physics, Bangalore, Karnataka, India
- Address all correspondence to Jaya Prakash.
- Sandeep Kumar Kalva
- Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore, Singapore
- Manojit Pramanik
- Nanyang Technological University, School of Chemical and Biomedical Engineering, Singapore, Singapore
- Phaneendra K. Yalavarthy
- Indian Institute of Science, Department of Computational and Data Sciences, Bangalore, Karnataka, India
19
Tian L, Hunt B, Bell MAL, Yi J, Smith JT, Ochoa M, Intes X, Durr NJ. Deep Learning in Biomedical Optics. Lasers Surg Med 2021; 53:748-775. [PMID: 34015146 PMCID: PMC8273152 DOI: 10.1002/lsm.23414]
Abstract
This article reviews deep learning applications in biomedical optics with a particular emphasis on image formation. The review is organized by imaging domains within biomedical optics and includes microscopy, fluorescence lifetime imaging, in vivo microscopy, widefield endoscopy, optical coherence tomography, photoacoustic imaging, diffuse tomography, and functional optical brain imaging. For each of these domains, we summarize how deep learning has been applied and highlight methods by which deep learning can enable new capabilities for optics in medicine. Challenges and opportunities to improve translation and adoption of deep learning in biomedical optics are also summarized.
Collapse
Affiliation(s)
- L. Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA
- B. Hunt
- Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
- M. A. L. Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- J. Yi
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Ophthalmology, Johns Hopkins University, Baltimore, MD, USA
- J. T. Smith
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- M. Ochoa
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- X. Intes
- Center for Modeling, Simulation, and Imaging in Medicine, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- N. J. Durr
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
20
Sun Z, Wang X, Yan X. An iterative gradient convolutional neural network and its application in endoscopic photoacoustic image formation from incomplete acoustic measurement. Neural Comput Appl 2021. [DOI: 10.1007/s00521-020-05607-x]
21
DiSpirito A, Vu T, Pramanik M, Yao J. Sounding out the hidden data: A concise review of deep learning in photoacoustic imaging. Exp Biol Med (Maywood) 2021; 246:1355-1367. [PMID: 33779342 PMCID: PMC8243210 DOI: 10.1177/15353702211000310]
Abstract
The rapidly evolving field of photoacoustic tomography utilizes endogenous chromophores to extract both functional and structural information from deep within tissues. It is this power to perform precise quantitative measurements in vivo, with endogenous or exogenous contrast, that makes photoacoustic tomography highly promising for clinical translation in functional brain imaging, early cancer detection, real-time surgical guidance, and the visualization of dynamic drug responses. Given that photoacoustic tomography has benefited from numerous engineering innovations, it is no surprise that many of its current cutting-edge developments incorporate advances from the equally novel field of artificial intelligence. More specifically, the growth of graphical processing unit capabilities in recent years has fueled an offshoot of artificial intelligence known as deep learning. Rooted in the solid foundation of signal processing, deep learning typically uses an optimization method known as gradient descent to minimize a loss function and update model parameters. A number of innovative efforts in photoacoustic tomography already use deep learning techniques for a variety of purposes, including resolution enhancement, reconstruction artifact removal, undersampling correction, and improved quantification. Most of these efforts have proven highly promising in addressing long-standing technical obstacles where traditional solutions either fail completely or make only incremental progress. This concise review focuses on the history of applied artificial intelligence in photoacoustic tomography, presents recent advances at this multifaceted intersection of fields, and outlines the most exciting advances that will likely propagate into promising future innovations.
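The gradient-descent loop mentioned above, stripped to its essentials, repeatedly steps parameters against the gradient of a loss. A toy pure-Python example minimizing a quadratic loss (the function names, learning rate, and step count are illustrative choices, not tied to any paper in this list):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a 1-D loss by stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # update rule: x <- x - lr * dL/dx
    return x

# Loss L(x) = (x - 3)^2 has gradient dL/dx = 2(x - 3); the minimum is at x = 3.
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 6))  # 3.0
```

Deep learning applies the same update rule to millions of network weights, with the gradient computed by backpropagation.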
Affiliation(s)
- Anthony DiSpirito
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Tri Vu
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Manojit Pramanik
- School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore 637459, Singapore
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
22
Yao J, Wang LV. Perspective on fast-evolving photoacoustic tomography. J Biomed Opt 2021; 26:JBO-210105-PERR. [PMID: 34196136 PMCID: PMC8244998 DOI: 10.1117/1.jbo.26.6.060602]
Abstract
SIGNIFICANCE Acoustically detecting the rich optical absorption contrast in biological tissues, photoacoustic tomography (PAT) seamlessly bridges the functional and molecular sensitivity of optical excitation with the deep penetration and high scalability of ultrasound detection. As a result of continuous technological innovations and commercial development, PAT has been playing an increasingly important role in life sciences and patient care, including functional brain imaging, smart drug delivery, early cancer diagnosis, and interventional therapy guidance. AIM Built on our 2016 tutorial article that focused on the principles and implementations of PAT, this perspective aims to provide an update on the exciting technical advances in PAT. APPROACH This perspective focuses on the recent PAT innovations in volumetric deep-tissue imaging, high-speed wide-field microscopic imaging, high-sensitivity optical ultrasound detection, and machine-learning enhanced image reconstruction and data processing. Representative applications are introduced to demonstrate these enabling technical breakthroughs in biomedical research. CONCLUSIONS We conclude the perspective by discussing the future development of PAT technologies.
Affiliation(s)
- Junjie Yao
- Duke University, Department of Biomedical Engineering, Durham, North Carolina, United States
- Lihong V. Wang
- California Institute of Technology, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, Pasadena, California, United States
23
Gröhl J, Schellenberg M, Dreher K, Maier-Hein L. Deep learning for biomedical photoacoustic imaging: A review. Photoacoustics 2021; 22:100241. [PMID: 33717977 PMCID: PMC7932894 DOI: 10.1016/j.pacs.2021.100241]
Abstract
Photoacoustic imaging (PAI) is a promising emerging imaging modality that enables spatially resolved imaging of optical tissue properties up to several centimeters deep in tissue, creating the potential for numerous exciting clinical applications. However, extraction of relevant tissue parameters from the raw data requires the solving of inverse image reconstruction problems, which have proven extremely difficult to solve. The application of deep learning methods has recently exploded in popularity, leading to impressive successes in the context of medical imaging and also finding first use in the field of PAI. Deep learning methods possess unique advantages that can facilitate the clinical translation of PAI, such as extremely fast computation times and the fact that they can be adapted to any given problem. In this review, we examine the current state of the art regarding deep learning in PAI and identify potential directions of research that will help to reach the goal of clinical applicability.
Affiliation(s)
- Janek Gröhl
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Melanie Schellenberg
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Kris Dreher
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Lena Maier-Hein
- German Cancer Research Center, Computer Assisted Medical Interventions, Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Heidelberg University, Faculty of Mathematics and Computer Science, Heidelberg, Germany
24
Liu L, Wolterink JM, Brune C, Veldhuis RNJ. Anatomy-aided deep learning for medical image segmentation: a review. Phys Med Biol 2021; 66. [PMID: 33906186 DOI: 10.1088/1361-6560/abfbf4]
Abstract
Deep learning (DL) has become widely used for medical image segmentation in recent years. Despite these advances, however, there are still problems for which DL-based segmentation fails. Recently, some DL approaches achieved a breakthrough by using anatomical information, a crucial cue in manual segmentation. In this paper, we provide a review of anatomy-aided DL for medical image segmentation, systematically summarizing the categories of anatomical information and their corresponding representation methods. We address known and potentially solvable challenges in anatomy-aided DL and present a categorized methodological overview, drawn from over 70 papers, of how anatomical information is used with DL. Finally, we discuss the strengths and limitations of current anatomy-aided DL approaches and suggest potential future work.
Affiliation(s)
- Lu Liu
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Jelmer M Wolterink
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Christoph Brune
- Applied Analysis, Department of Applied Mathematics, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Raymond N J Veldhuis
- Data Management and Biometrics, Department of Computer Science, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
25
Yin L, Cao Z, Wang K, Tian J, Yang X, Zhang J. A review of the application of machine learning in molecular imaging. Ann Transl Med 2021; 9:825. [PMID: 34268438 PMCID: PMC8246214 DOI: 10.21037/atm-20-5877]
Abstract
Molecular imaging (MI) uses imaging methods to reflect molecular-level changes in the living state and to study biological behaviors qualitatively and quantitatively through imaging. Optical molecular imaging (OMI) and nuclear medical imaging are two key research fields of MI. OMI collects the optical information generated by an imaging target (such as a tumor) in response to drug intervention or other causes, allowing researchers to track the target's motion at the molecular level. Owing to its high specificity and sensitivity, OMI has been widely used in preclinical research and clinical surgery. Nuclear medical imaging mainly detects ionizing radiation emitted by radioactive substances. It can provide molecular information for the early diagnosis, effective treatment, and basic research of diseases, and has become one of the frontiers and hot topics in medicine worldwide. Both OMI and nuclear medical imaging require extensive data processing and analysis. In recent years, artificial intelligence, especially neural network-based machine learning (ML), has been widely used in MI because of its powerful data processing capability, providing a feasible strategy for handling the large and complex datasets that MI produces. In this review, we focus on the applications of ML methods in OMI and nuclear medical imaging.
Affiliation(s)
- Lin Yin
- Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Zhen Cao
- Peking University First Hospital, Beijing, China
- Kun Wang
- Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Jie Tian
- Key Laboratory of Molecular Imaging of Chinese Academy of Sciences, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, China
- Xing Yang
- Peking University First Hospital, Beijing, China
26
Deng H, Qiao H, Dai Q, Ma C. Deep learning in photoacoustic imaging: a review. J Biomed Opt 2021; 26:JBO-200374VRR. [PMID: 33837678 PMCID: PMC8033250 DOI: 10.1117/1.jbo.26.4.040901]
Abstract
SIGNIFICANCE Photoacoustic (PA) imaging can provide structural, functional, and molecular information for preclinical and clinical studies. For PA imaging (PAI), non-ideal signal detection deteriorates image quality, and quantitative PAI (QPAI) remains challenging due to the unknown light fluence spectra in deep tissue. In recent years, deep learning (DL) has shown outstanding performance when implemented in PAI, with applications in image reconstruction, quantification, and understanding. AIM We provide (i) a comprehensive overview of the DL techniques that have been applied in PAI, (ii) references for designing DL models for various PAI tasks, and (iii) a summary of the future challenges and opportunities. APPROACH Papers published before November 2020 in the area of applying DL in PAI were reviewed. We categorized them into three types: image understanding, reconstruction of the initial pressure distribution, and QPAI. RESULTS When applied in PAI, DL can effectively process images, improve reconstruction quality, fuse information, and assist quantitative analysis. CONCLUSION DL has become a powerful tool in PAI. With the development of DL theory and technology, it will continue to boost the performance and facilitate the clinical translation of PAI.
Affiliation(s)
- Handi Deng
- Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Hui Qiao
- Tsinghua University, Department of Automation, Haidian, Beijing, China
- Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
- Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
- Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Qionghai Dai
- Tsinghua University, Department of Automation, Haidian, Beijing, China
- Tsinghua University, Institute for Brain and Cognitive Science, Beijing, China
- Tsinghua University, Beijing Laboratory of Brain and Cognitive Intelligence, Beijing, China
- Tsinghua University, Beijing Key Laboratory of Multi-Dimension and Multi-Scale Computational Photography, Beijing, China
- Cheng Ma
- Tsinghua University, Department of Electronic Engineering, Haidian, Beijing, China
- Beijing Innovation Center for Future Chip, Beijing, China
| |
Collapse
|
27
|
Leuschner J, Schmidt M, Ganguly PS, Andriiashen V, Coban SB, Denker A, Bauer D, Hadjifaradji A, Batenburg KJ, Maass P, van Eijnatten M. Quantitative Comparison of Deep Learning-Based Image Reconstruction Methods for Low-Dose and Sparse-Angle CT Applications. J Imaging 2021; 7:44. [PMID: 34460700] [PMCID: PMC8321320] [DOI: 10.3390/jimaging7030044]
Abstract
The reconstruction of computed tomography (CT) images is an active area of research. Following the rise of deep learning methods, many data-driven models have been proposed in recent years. In this work, we present the results of a data challenge that we organized, bringing together algorithm experts from different institutes to jointly work on quantitative evaluation of several data-driven methods on two large, public datasets during a ten-day sprint. We focus on two applications of CT, namely, low-dose CT and sparse-angle CT. This enables us to fairly compare different methods using standardized settings. As a general result, we observe that the deep learning-based methods are able to improve the reconstruction quality metrics in both CT applications, while the top-performing methods show only minor differences in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). We further discuss a number of other important criteria that should be taken into account when selecting a method, such as the availability of training data, knowledge of the physical measurement model, and the reconstruction speed.
Affiliation(s)
- Johannes Leuschner
  - Center for Industrial Mathematics, University of Bremen, Bibliothekstr. 5, 28359 Bremen, Germany
- Maximilian Schmidt
  - Center for Industrial Mathematics, University of Bremen, Bibliothekstr. 5, 28359 Bremen, Germany
- Poulami Somanya Ganguly
  - Centrum Wiskunde & Informatica, Science Park 123, 1098 XG Amsterdam, The Netherlands
  - The Mathematical Institute, Leiden University, Niels Bohrweg 1, 2333 CA Leiden, The Netherlands
- Vladyslav Andriiashen
  - Centrum Wiskunde & Informatica, Science Park 123, 1098 XG Amsterdam, The Netherlands
- Sophia Bethany Coban
  - Centrum Wiskunde & Informatica, Science Park 123, 1098 XG Amsterdam, The Netherlands
- Alexander Denker
  - Center for Industrial Mathematics, University of Bremen, Bibliothekstr. 5, 28359 Bremen, Germany
- Dominik Bauer
  - Computer Assisted Clinical Medicine, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
- Amir Hadjifaradji
  - School of Biomedical Engineering, University of British Columbia, 2222 Health Sciences Mall, Vancouver, BC V6T 1Z3, Canada
- Kees Joost Batenburg
  - Centrum Wiskunde & Informatica, Science Park 123, 1098 XG Amsterdam, The Netherlands
  - Leiden Institute of Advanced Computer Science, Niels Bohrweg 1, 2333 CA Leiden, The Netherlands
- Peter Maass
  - Center for Industrial Mathematics, University of Bremen, Bibliothekstr. 5, 28359 Bremen, Germany
- Maureen van Eijnatten
  - Centrum Wiskunde & Informatica, Science Park 123, 1098 XG Amsterdam, The Netherlands
  - Department of Biomedical Engineering, Eindhoven University of Technology, Groene Loper 3, 5612 AE Eindhoven, The Netherlands

28
Yang C, Lan H, Gao F, Gao F. Review of deep learning for photoacoustic imaging. Photoacoustics 2021; 21:100215. [PMID: 33425679] [PMCID: PMC7779783] [DOI: 10.1016/j.pacs.2020.100215]
Abstract
Machine learning has developed dramatically and found numerous applications across many fields over the past few years. This boom originated in 2009, when a new model, the deep artificial neural network, began to surpass other established, mature models on several important benchmarks; it was subsequently adopted widely in academia and industry. From image analysis to natural language processing, deep neural networks have become the state-of-the-art machine learning models. They hold great potential for medical imaging technology, medical data analysis, medical diagnosis, and other healthcare applications, and are being promoted in both pre-clinical and clinical settings. In this review, we provide an overview of recent developments and challenges in applying machine learning to medical image analysis, with a special focus on deep learning in photoacoustic imaging. The aim of this review is threefold: (i) introducing deep learning with some important basics, (ii) reviewing recent works that apply deep learning across the entire ecological chain of photoacoustic imaging, from image reconstruction to disease diagnosis, and (iii) providing open-source materials and other resources for researchers interested in applying deep learning to photoacoustic imaging.
Affiliation(s)
- Changchun Yang
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
  - Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai, 200050, China
  - University of Chinese Academy of Sciences, Beijing, 100049, China
- Hengrong Lan
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
  - Chinese Academy of Sciences, Shanghai Institute of Microsystem and Information Technology, Shanghai, 200050, China
  - University of Chinese Academy of Sciences, Beijing, 100049, China
- Feng Gao
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China
- Fei Gao
  - Hybrid Imaging System Laboratory, Shanghai Engineering Research Center of Intelligent Vision and Imaging, School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210, China

29
Yuan AY, Gao Y, Peng L, Zhou L, Liu J, Zhu S, Song W. Hybrid deep learning network for vascular segmentation in photoacoustic imaging. Biomed Opt Express 2020; 11:6445-6457. [PMID: 33282500] [PMCID: PMC7687958] [DOI: 10.1364/boe.409246]
Abstract
Photoacoustic (PA) technology has been used extensively for vessel imaging due to its capability of identifying molecular specificities and achieving optical-diffraction-limited lateral resolution down to the cellular level. Vessel images carry essential medical information that guides professional diagnosis. Modern image processing techniques contribute substantially to vessel segmentation, but these methods suffer from under- or over-segmentation. We therefore demonstrate the results of applying both a fully convolutional network (FCN) and a U-net to PA vessel images, and propose a hybrid network that combines the two. Comparison results indicate that the hybrid network significantly increases segmentation accuracy and robustness.
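The abstract does not specify how the two networks' outputs are combined; as a minimal sketch of one plausible late-fusion design (the function name, fusion weight, and toy probability maps below are illustrative assumptions, not the paper's method), one could average the per-pixel vessel probabilities from the FCN and the U-net and threshold the result:

```python
import numpy as np

def fuse_vessel_masks(prob_fcn, prob_unet, w=0.5, thresh=0.5):
    """Weighted average of two per-pixel vessel probability maps,
    thresholded into a binary segmentation mask (hypothetical fusion)."""
    fused = w * prob_fcn + (1.0 - w) * prob_unet
    return (fused >= thresh).astype(np.uint8)

# Toy 2x2 probability maps standing in for the two networks' outputs.
p_fcn = np.array([[0.9, 0.2], [0.4, 0.6]])
p_unet = np.array([[0.7, 0.1], [0.8, 0.5]])
mask = fuse_vessel_masks(p_fcn, p_unet)  # pixels where the average >= 0.5
```

In practice the weight `w` and threshold would be tuned on validation data, or the fusion itself learned end-to-end as the paper's hybrid network presumably does.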
Affiliation(s)
- Alan Yilun Yuan
  - Department of Electrical and Electronic Engineering, Imperial College London, London, UK
  - These authors contributed equally to this work
- Yang Gao
  - Nanophotonics Research Center, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
  - These authors contributed equally to this work
- Liangliang Peng
  - Nanophotonics Research Center, College of Physics and Optoelectronic Engineering, Shenzhen University, Shenzhen, China
- Lingxiao Zhou
  - Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, China
  - Department of Respiratory Medicine, Zhongshan-Xuhui Hospital, Fudan University, Shanghai, China
- Jun Liu
  - Tianjin Union Medical Centre, Tianjin, China
- Siwei Zhu
  - Tianjin Union Medical Centre, Tianjin, China
- Wei Song
  - Nanophotonics Research Center, Shenzhen Key Laboratory of Micro-Scale Optical Information Technology, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen, China

30
Tong T, Huang W, Wang K, He Z, Yin L, Yang X, Zhang S, Tian J. Domain Transform Network for Photoacoustic Tomography from Limited-view and Sparsely Sampled Data. Photoacoustics 2020; 19:100190. [PMID: 32617261] [PMCID: PMC7322684] [DOI: 10.1016/j.pacs.2020.100190]
Abstract
Medical image reconstruction methods based on deep learning have recently demonstrated powerful performance in photoacoustic tomography (PAT) from limited-view and sparse data. However, because most of these methods must utilize conventional linear reconstruction methods to implement signal-to-image transformations, their performance is restricted. In this paper, we propose a novel deep learning reconstruction approach that integrates appropriate data pre-processing and training strategies. The Feature Projection Network (FPnet) presented herein is designed to learn this signal-to-image transformation through data-driven learning rather than through direct use of linear reconstruction. To further improve reconstruction results, our method integrates an image post-processing network (U-net). Experiments show that the proposed method can achieve high reconstruction quality from limited-view data with sparse measurements. When employing GPU acceleration, this method can achieve a reconstruction speed of 15 frames per second.
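The FPnet architecture itself is not detailed in the abstract; as a minimal sketch of the underlying idea, a fixed linear reconstruction can be replaced by a trainable signal-to-image projection, here reduced to a single dense layer (all array sizes and the `signal_to_image` name are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_t = 16, 128   # sparse detector channels x time samples (the sinogram)
h = w = 32            # reconstructed image grid

# One dense layer standing in for a learned signal-to-image transform:
# in training, these weights would be fitted to (sinogram, image) pairs
# instead of being fixed by a linear formula such as delay-and-sum.
W_proj = rng.normal(0.0, 0.01, size=(h * w, n_ch * n_t))

def signal_to_image(sinogram):
    """Map an (n_ch, n_t) sinogram onto the (h, w) image grid."""
    return (W_proj @ sinogram.reshape(-1)).reshape(h, w)

img = signal_to_image(rng.normal(size=(n_ch, n_t)))
```

In the paper's pipeline, the output of the learned transform is further refined by a U-net post-processing stage, which this sketch omits.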
Affiliation(s)
- Tong Tong
  - CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Wenhui Huang
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
  - Medical Imaging Center, the First Affiliated Hospital, Jinan University, Guangzhou, 510632, China
- Kun Wang
  - CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Zicong He
  - Medical Imaging Center, the First Affiliated Hospital, Jinan University, Guangzhou, 510632, China
- Lin Yin
  - CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Xin Yang
  - CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Shuixing Zhang
  - Medical Imaging Center, the First Affiliated Hospital, Jinan University, Guangzhou, 510632, China
- Jie Tian
  - CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, 100191, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China

31
Hauptmann A, Adler J, Arridge S, Öktem O. Multi-Scale Learned Iterative Reconstruction. IEEE Trans Comput Imaging 2020; 6:843-856. [PMID: 33644260] [PMCID: PMC7116830] [DOI: 10.1109/tci.2020.2990299]
Abstract
Model-based learned iterative reconstruction methods have recently been shown to outperform classical reconstruction algorithms. The applicability of these methods to large-scale inverse problems is, however, limited by the memory available for training and by extensive training times, the latter due to computationally expensive forward models. As a possible solution to these restrictions, we propose a multi-scale learned iterative reconstruction scheme that computes iterates on discretisations of increasing resolution. This procedure not only reduces memory requirements, it also considerably speeds up reconstruction and training times; most importantly, it is scalable to large-scale inverse problems with non-trivial forward operators, such as those that arise in many 3D tomographic applications. In particular, we propose a hybrid network that combines the multi-scale iterative approach with a particularly expressive network architecture, a combination that exhibits excellent scalability in 3D. Applicability of the algorithm is demonstrated for 3D cone-beam computed tomography from real measurement data of an organic phantom. Additionally, we examine scalability and reconstruction quality, in comparison to established learned reconstruction methods, in two dimensions for low-dose computed tomography on human phantoms.
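A toy sketch of the coarse-to-fine iteration pattern described above, with the paper's learned updater networks and tomographic forward operator replaced by plain gradient steps on an identity forward model (all function names are illustrative; only the multi-scale control flow reflects the abstract):

```python
import numpy as np

def downsample(x):
    """2x2 average pooling: a toy restriction to the coarser grid."""
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour prolongation back to the finer grid."""
    return np.kron(x, np.ones((2, 2)))

def grad_step(x, y, step=0.5):
    """Gradient step on the toy data term ||x - y||^2 (identity forward model);
    a learned scheme would replace this with a trained updater network."""
    return x - step * (x - y)

def multiscale_reconstruct(y, n_iter=10):
    """Iterate cheaply at low resolution, then upsample the estimate
    and refine it against the full-resolution measurement y."""
    yc = downsample(y)
    x = np.zeros_like(yc)
    for _ in range(n_iter):          # coarse-scale iterations
        x = grad_step(x, yc)
    x = upsample(x)                  # warm-start the fine scale
    for _ in range(n_iter):          # fine-scale refinement
        x = grad_step(x, y)
    return x

y = np.full((8, 8), 3.0)             # toy "measurement" on the fine grid
x = multiscale_reconstruct(y)
```

The point of the warm start is that the fine-scale loop begins close to the solution, so fewer expensive full-resolution iterations are needed; the paper realizes the same idea with non-trivial forward operators and learned updates.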
Affiliation(s)
- Andreas Hauptmann
  - Research Unit of Mathematical Sciences, University of Oulu, Oulu, Finland, and Department of Computer Science, University College London, London, United Kingdom
- Jonas Adler
  - Elekta, Stockholm, Sweden, and KTH - Royal Institute of Technology, Stockholm, Sweden; currently with DeepMind, London, UK
- Simon Arridge
  - Department of Computer Science, University College London, London, United Kingdom
- Ozan Öktem
  - Department of Mathematics, KTH - Royal Institute of Technology, Stockholm, Sweden