1
Li Y, Sun X, Wang S, Guo L, Qin Y, Pan J, Chen P. TD-STrans: Tri-domain sparse-view CT reconstruction based on sparse transformer. Computer Methods and Programs in Biomedicine 2025; 260:108575. PMID: 39733746. DOI: 10.1016/j.cmpb.2024.108575.
Abstract
BACKGROUND AND OBJECTIVE Sparse-view computed tomography (CT) speeds up scanning and reduces radiation exposure in medical diagnosis. However, when the projection views are severely under-sampled, deep learning-based reconstruction methods often suffer from over-smoothing of the reconstructed images due to the lack of high-frequency information. To address this issue, we introduce frequency-domain information into the widely used projection-image domain reconstruction and propose a tri-domain sparse-view CT reconstruction model based on a sparse transformer (TD-STrans). METHODS TD-STrans integrates three essential modules: the projection recovery module completes the sparse-view projections; the Fourier domain filling module mitigates artifacts and over-smoothing by filling in missing high-frequency details; and the image refinement module further enhances and preserves image details. Additionally, a multi-domain joint loss function is designed to simultaneously enhance the reconstruction quality in the projection domain, image domain, and frequency domain, thereby further improving the preservation of image details. RESULTS The results of simulation experiments on the lymph node dataset and real experiments on the walnut dataset consistently demonstrate the effectiveness of TD-STrans in artifact removal, suppression of over-smoothing, and preservation of structural fidelity. CONCLUSION The reconstruction results of TD-STrans indicate that a sparse transformer applied across multiple domains can alleviate the over-smoothing and detail loss caused by reduced views, offering a novel solution for ultra-sparse-view CT imaging.
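The multi-domain joint loss described above combines discrepancies measured in the projection (sinogram), image, and frequency domains. The snippet below is a minimal NumPy sketch of such a tri-domain loss; the weights, the use of mean-squared errors, and the FFT term are illustrative assumptions, not the exact loss of TD-STrans.

```python
import numpy as np

def tri_domain_loss(sino_pred, sino_gt, img_pred, img_gt,
                    w_proj=1.0, w_img=1.0, w_freq=0.1):
    """Weighted sum of projection-, image-, and frequency-domain errors."""
    l_proj = np.mean((sino_pred - sino_gt) ** 2)          # projection domain
    l_img = np.mean((img_pred - img_gt) ** 2)             # image domain
    f_pred, f_gt = np.fft.fft2(img_pred), np.fft.fft2(img_gt)
    l_freq = np.mean(np.abs(f_pred - f_gt) ** 2)          # frequency domain
    return w_proj * l_proj + w_img * l_img + w_freq * l_freq
```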
Affiliation(s)
- Yu Li
- Department of Information and Communication Engineering, North University of China, Taiyuan 030051, China; The State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
- Xueqin Sun
- Department of Information and Communication Engineering, North University of China, Taiyuan 030051, China; The State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
- Sukai Wang
- The State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China; Department of Computer Science and Technology, North University of China, Taiyuan 030051, China
- Lina Guo
- Department of Information and Communication Engineering, North University of China, Taiyuan 030051, China; The State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
- Yingwei Qin
- Department of Information and Communication Engineering, North University of China, Taiyuan 030051, China; The State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
- Jinxiao Pan
- Department of Information and Communication Engineering, North University of China, Taiyuan 030051, China; The State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China
- Ping Chen
- Department of Information and Communication Engineering, North University of China, Taiyuan 030051, China; The State Key Lab for Electronic Testing Technology, North University of China, Taiyuan 030051, China.
2
Shi Y, Gao Y, Xu Q, Li Y, Mou X, Liang Z. Learned Tensor Neural Network Texture Prior for Photon-Counting CT Reconstruction. IEEE Transactions on Medical Imaging 2024; 43:3830-3842. PMID: 38753483. DOI: 10.1109/tmi.2024.3402079.
Abstract
Photon-counting computed tomography (PCCT) reconstructs multiple energy-channel images to describe the same object, and there exists a strong correlation among the different channel images. In addition, the reconstruction of each channel image suffers from the photon-count-starving problem. To make full use of the correlation among different channel images to suppress data noise and enhance texture details when reconstructing each channel image, this paper proposes a tensor neural network (TNN) architecture to learn a multi-channel texture prior for PCCT reconstruction. Specifically, we first learn a spatial texture prior in each individual channel image by modeling the relationship between the center pixels and their corresponding neighbor pixels using a neural network. Then, we merge the single-channel spatial texture prior into a multi-channel neural network to learn the spectral local correlation information among different channel images. Since the proposed TNN is trained on a series of unpaired small spatial-spectral cubes extracted from a single reference multi-channel image, the local correlation in the spatial-spectral cubes is captured by the TNN. To boost the TNN performance, a low-rank representation is also employed to account for the global correlation among different channel images. Finally, we integrate the learned TNN and the low-rank representation as priors into a Bayesian reconstruction framework. To evaluate the performance of the proposed method, four references are considered: simulated images from ultra-high-resolution CT, spectral images from dual-energy CT, and animal tissue and preclinical mouse images from a custom-made PCCT system. Our TNN-prior Bayesian reconstruction demonstrated better performance than other state-of-the-art competing algorithms, in terms of not only preserving texture features but also suppressing image noise in each channel image.
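To make the "center pixel predicted from its neighbors" idea concrete, the following is a minimal sketch of a single-channel texture prior as a small fully connected network mapping an 8-pixel neighborhood to its center pixel; the layer sizes, training step, and random stand-in data are illustrative assumptions and do not reproduce the paper's TNN architecture.

```python
import torch
import torch.nn as nn

class CenterFromNeighbors(nn.Module):
    """Tiny MLP that predicts a center pixel from its 8 neighbors,
    i.e. a learned single-channel texture prior."""
    def __init__(self, n_neighbors=8, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_neighbors, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, neighbors):            # (batch, 8) -> (batch, 1)
        return self.net(neighbors)

# Illustrative training step on random patches standing in for CT data.
model = CenterFromNeighbors()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
neighbors = torch.randn(256, 8)              # 8-neighborhoods
centers = torch.randn(256, 1)                # corresponding center pixels
loss = nn.functional.mse_loss(model(neighbors), centers)
opt.zero_grad(); loss.backward(); opt.step()
```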
3
Gao Y, Tan J, Shi Y, Zhang H, Lu S, Gupta A, Li H, Reiter M, Liang Z. Machine Learned Texture Prior From Full-Dose CT Database via Multi-Modality Feature Selection for Bayesian Reconstruction of Low-Dose CT. IEEE Transactions on Medical Imaging 2023; 42:3129-3139. PMID: 34968178. PMCID: PMC9243192. DOI: 10.1109/tmi.2021.3139533.
Abstract
In our earlier study, we proposed a regional Markov random field (MRF)-type tissue-specific texture prior from a previous full-dose computed tomography (FdCT) scan for current low-dose CT (LdCT) imaging, which showed clinical benefits through task-based evaluation. Nevertheless, two assumptions were made in that earlier study. One assumption is that the center pixel has a linear relationship with its nearby neighbors, and the other is that previous FdCT scans of the same subject are available. To eliminate these two assumptions, we proposed a database-assisted end-to-end LdCT reconstruction framework that includes a deep learning texture prior model and a multi-modality feature-based candidate selection model. A convolutional neural network-based texture prior is proposed to eliminate the linear-relationship assumption. For scenarios in which the subject of interest has no previous FdCT scans, we propose to select a proper prior candidate from an FdCT database using multi-modality features. Features from three modalities are used, including the subjects' physiological factors, the CT scan protocol, and a novel feature named Lung Mark, which is designed to reflect the z-axial property of human anatomy. Moreover, a majority-vote strategy is designed to overcome the noise effect from LdCT scans. Experimental results showed the effectiveness of Lung Mark. The selection model has an accuracy of 84% when tested on 1,470 images from 49 subjects. The texture prior learned from the FdCT database provided reconstructions comparable to those of subjects having their own corresponding FdCT scans. This study demonstrated the feasibility of bringing clinically relevant textures from an available FdCT database to the Bayesian reconstruction of any current LdCT scan.
4
Gao Y, Lu S, Shi Y, Chang S, Zhang H, Hou W, Li L, Liang Z. A Joint-Parameter Estimation and Bayesian Reconstruction Approach to Low-Dose CT. Sensors (Basel, Switzerland) 2023; 23:1374. PMID: 36772417. PMCID: PMC9921255. DOI: 10.3390/s23031374.
Abstract
Most penalized maximum likelihood methods for tomographic image reconstruction based on Bayes' law include a freely adjustable hyperparameter to balance the data fidelity term and the prior/penalty term for a specific noise-resolution tradeoff. In many applications, the hyperparameter is determined empirically in a trial-and-error fashion, which then selects the optimal result from multiple iterative reconstructions. These penalized methods are not only time-consuming by their iterative nature but also require manual adjustment. This study aims to investigate a theory-based strategy for Bayesian image reconstruction without a freely adjustable hyperparameter, to substantially save time and computational resources. The Bayesian image reconstruction problem is formulated by two probability density functions (PDFs), one for the data fidelity term and the other for the prior term. When formulating these PDFs, we introduce two parameters. While these two parameters ensure that the PDFs completely describe the data and prior terms, they cannot be determined from the acquired data; thus, they are called complete but unobservable parameters. Estimating these two parameters becomes possible under conditional expectation and maximization for the image reconstruction, given the acquired data and the PDFs. This leads to an iterative algorithm, denoted joint-parameter-Bayes, which jointly estimates the two parameters and computes the to-be-reconstructed image by maximizing the a posteriori probability. In addition to the theoretical formulation, comprehensive simulation experiments are performed to analyze the stopping criterion of the iterative joint-parameter-Bayes method. Finally, given the data, an optimal reconstruction is obtained without any freely adjustable hyperparameter by satisfying the PDF condition for both the data likelihood and the prior probability, and by satisfying the stopping criterion. Moreover, the stability of joint-parameter-Bayes is investigated with respect to factors such as the initialization, the PDF specification, and renormalization in an iterative manner. Both phantom simulation and clinical patient data results show that joint-parameter-Bayes can provide reconstructed image quality comparable to conventional methods, but with much less reconstruction time. To see the response of the algorithm to different types of noise, three common noise models are introduced to the simulation data: white Gaussian noise added to the post-log sinogram data, Poisson-like signal-dependent noise added to the post-log sinogram data, and Poisson noise applied to the pre-log transmission data. The experimental outcomes with white Gaussian noise reveal that the two parameters estimated by the joint-parameter-Bayes method agree well with the simulations. It is observed that the parameter introduced to satisfy the prior's PDF is more sensitive to stopping the iteration process for all three noise models. A stability investigation showed that initialization with the filtered back-projection image is very robust. Clinical patient data demonstrated the effectiveness of the proposed joint-parameter-Bayes and its stopping criterion.
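For orientation, the kind of penalized maximum-likelihood (MAP) objective discussed above can be written in its standard generic form; the notation below is assumed for illustration and is not taken from the paper:

$$
\hat{x} = \arg\max_{x}\,\bigl[\ln p(y \mid x) + \ln p(x)\bigr]
\;\equiv\; \arg\min_{x}\; \tfrac{1}{2}\,(y - Ax)^{\top}\Sigma^{-1}(y - Ax) + \beta\,R(x),
$$

where $y$ is the measured sinogram, $A$ the system matrix, $R(\cdot)$ the prior/penalty, and $\beta$ the freely adjustable hyperparameter that joint-parameter-Bayes seeks to eliminate by estimating the parameters of both PDFs directly from the data.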
Affiliation(s)
- Yongfeng Gao
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Siming Lu
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Yongyi Shi
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Shaojie Chang
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Hao Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Wei Hou
- Department of Preventive Medicine, Stony Brook University, Stony Brook, NY 11794, USA
- Lihong Li
- Department of Engineering Science and Physics, CUNY/CSI, Staten Island, NY 10314, USA
- Zhengrong Liang
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
5
Joy A, Nagarajan R, Saucedo A, Iqbal Z, Sarma MK, Wilson N, Felker E, Reiter RE, Raman SS, Thomas MA. Dictionary learning compressed sensing reconstruction: pilot validation of accelerated echo planar J-resolved spectroscopic imaging in prostate cancer. Magnetic Resonance Materials in Physics, Biology and Medicine 2022; 35:667-682. PMID: 35869359. PMCID: PMC9363346. DOI: 10.1007/s10334-022-01029-z.
Abstract
Objectives: This study aimed at developing dictionary learning (DL)-based compressed sensing (CS) reconstruction for randomly undersampled five-dimensional (5D) MR spectroscopic imaging (3D spatial + 2D spectral) data acquired in prostate cancer patients and healthy controls, and at testing its feasibility at 8x and 12x undersampling factors. Materials and methods: Prospectively undersampled 5D echo-planar J-resolved spectroscopic imaging (EP-JRESI) data were acquired in nine prostate cancer (PCa) patients and three healthy males. The 5D EP-JRESI data were reconstructed using DL and compared with the gradient-sparsity-based total variation (TV) and Perona-Malik (PM) methods. A hybrid reconstruction technique, dictionary learning-total variation (DLTV), was also designed to further improve the quality of the reconstructed spectra. Results: The CS reconstruction of prospectively undersampled (8x and 12x) 5D EP-JRESI data acquired in prostate cancer and healthy subjects was performed using DL, DLTV, TV, and PM. It is evident that the hybrid DLTV method can unambiguously resolve 2D J-resolved peaks including myo-inositol, citrate, creatine, spermine, and choline. Conclusion: Improved reconstruction of the accelerated 5D EP-JRESI data was observed using the hybrid DLTV. Accelerated acquisition of in vivo 5D data with as low as 8.33% of the samples (12x) corresponds to a total scan time of 14 min, as opposed to a fully sampled scan that needs a total duration of 2.4 h (TR = 1.2 s, 32 k_x × 16 k_y × 8 k_z, 512 t_2, and 64 t_1). Supplementary information: The online version contains supplementary material available at 10.1007/s10334-022-01029-z.
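As a point of reference, dictionary-learning CS reconstruction of this kind is commonly posed as a patch-sparsity problem, optionally combined with a gradient-sparsity (total variation) term as in the hybrid DLTV variant. A generic formulation, with assumed notation and not necessarily the authors' exact model, is:

$$
\min_{x,\,D,\,\{\alpha_i\}} \;\|F_u x - y\|_2^2 \;+\; \lambda_D \sum_i \|R_i x - D\alpha_i\|_2^2 \;+\; \lambda_{TV}\,\mathrm{TV}(x)
\quad \text{s.t.}\;\; \|\alpha_i\|_0 \le T,
$$

where $F_u$ is the undersampled Fourier sampling operator, $y$ the acquired data, $R_i$ extracts the $i$-th patch, $D$ is the learned dictionary with sparse codes $\alpha_i$, and $T$ bounds the sparsity level. Setting $\lambda_{TV}=0$ recovers plain DL, while dropping the dictionary term recovers gradient-sparsity (TV/PM-style) CS.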
6
Rabbani H, Teyfouri N, Jabbari I. Low-dose cone-beam computed tomography reconstruction through a fast three-dimensional compressed sensing method based on the three-dimensional pseudo-polar Fourier transform. Journal of Medical Signals & Sensors 2022; 12:8-24. PMID: 35265461. PMCID: PMC8804585. DOI: 10.4103/jmss.jmss_114_21.
Abstract
Background: The reconstruction of high-quality two-dimensional images from fan-beam computed tomography (CT) with a limited number of projections is already feasible through Fourier-based iterative reconstruction methods. This article, however, focuses on the more complicated reconstruction of three-dimensional (3D) images in sparse-view cone-beam computed tomography (CBCT) by utilizing compressed sensing (CS) based on the 3D pseudo-polar Fourier transform (PPFT). Method: In comparison with the prevalent Cartesian grid, PPFT regridding removes rebinning and interpolation errors. Furthermore, using the PPFT-based Radon transform as the measurement matrix reduces the computational complexity. Results: To show the computational efficiency of the proposed method, we compare it with an algebraic reconstruction technique and a CS-type algorithm. Our algorithm converges in fewer than 20 iterations, whereas the others need at least 50 iterations to reconstruct a qualified phantom image. Furthermore, using a fast composite splitting algorithm as the solver in each iteration makes it a fast CBCT reconstruction algorithm. The algorithm minimizes a linear combination of three terms corresponding to least-squares data fitting, a Hessian (HS) penalty, and l1-norm wavelet regularization; we named it PP-based compressed sensing-HS-W. In the reconstruction range of 120 projections over the 360° rotation, the image quality is visually similar to that of images reconstructed by the Feldkamp-Davis-Kress algorithm using 720 projections, which represents a large dose reduction. Conclusion: The main achievement of this work is to reduce the radiation dose without degrading the image quality. Its ability to remove the staircase effect, preserve edges and regions with smooth intensity transitions, and produce high-resolution, low-noise reconstructions at low dose levels is also shown.
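The three-term objective named above (least-squares data fitting, Hessian penalty, and wavelet regularization) can be rendered generically as follows; the specific norms and weights are assumptions for illustration rather than the paper's exact formulation:

$$
\hat{x} = \arg\min_{x}\; \tfrac{1}{2}\,\|Ax - b\|_2^2 \;+\; \lambda_{H}\,\Phi_{\mathrm{HS}}(x) \;+\; \lambda_{W}\,\|Wx\|_1,
$$

where $A$ is the PPFT-based projection (Radon) operator, $b$ the measured cone-beam data, $\Phi_{\mathrm{HS}}$ a penalty on the second-order (Hessian) derivatives of the image, and $W$ a wavelet transform; each outer iteration is solved with a fast composite splitting algorithm.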
7
Zhang L, Zhao H, Zhou Z, Jia M, Zhang L, Jiang J, Gao F. Improving spatial resolution with an edge-enhancement model for low-dose propagation-based X-ray phase-contrast computed tomography. Optics Express 2021; 29:37399-37417. PMID: 34808812. DOI: 10.1364/oe.440664.
Abstract
Propagation-based X-ray phase-contrast computed tomography (PB-PCCT) has become increasingly popular for distinguishing low-contrast tissues. Phase retrieval is an important step for quantitatively obtaining the phase information before the tomographic reconstruction, but typical phase retrieval methods in PB-PCCT, such as the homogeneous transport-of-intensity equation (TIE-Hom), are essentially low-pass filters and thus improve the signal-to-noise ratio at the expense of reduced spatial resolution in the reconstructed image. To improve the reconstructed spatial resolution, measured phase-contrast projections with high edge enhancement and the phase projections retrieved by TIE-Hom were summed with weights and fed into an iterative tomographic algorithm within the framework of adaptive steepest descent projection onto convex sets (ASD-POCS), which was employed to suppress image noise in low-dose reconstructions arising from the sparse-view scanning strategy or the low exposure time of a single phase-contrast projection. The merging strategy decreases the accuracy of the linear model of PB-PCCT and would ultimately lead to reconstruction failure in iterative reconstruction. Therefore, an additive median root prior is also introduced into the algorithm to partly restore the model accuracy. The reconstructed spatial resolution and noise performance can be flexibly balanced by a pair of antagonistic hyperparameters. Validations were performed with the established phase-contrast Feldkamp-Davis-Kress, phase-retrieved Feldkamp-Davis-Kress, conventional ASD-POCS, and the proposed enhanced ASD-POCS methods on a numerical phantom dataset and an experimental biomaterial dataset. Simulation results show that the proposed algorithm outperforms conventional ASD-POCS in spatial evaluation assessments such as root-mean-square error (a ratio of 9.78%) and contrast-to-noise ratio (CNR) (a ratio of 7.46%), and also in frequency evaluation assessments such as the modulation transfer function (a ratio of 66.48% of MTF50% (50% MTF value)), noise power spectrum (a ratio of 35.25% of f50% (50% value of the Nyquist frequency)), and noise equivalent quanta (1-2 orders of magnitude at high frequencies). Experimental results again confirm the superiority of the proposed strategy over the conventional one in terms of edge sharpness and CNR (an average increase of 67.35%).
8
Bai T, Wang B, Nguyen D, Wang B, Dong B, Cong W, Kalra MK, Jiang S. Deep Interactive Denoiser (DID) for X-Ray Computed Tomography. IEEE Transactions on Medical Imaging 2021; 40:2965-2975. PMID: 34329156. DOI: 10.1109/tmi.2021.3101241.
Abstract
Low-dose computed tomography (LDCT) is desirable for both diagnostic imaging and image-guided interventions. Denoisers are widely used to improve the quality of LDCT images. Deep learning (DL)-based denoisers have shown state-of-the-art performance and are becoming mainstream methods. However, there are two challenges to using DL-based denoisers: 1) a trained model typically does not generate different image candidates with different noise-resolution tradeoffs, which are sometimes needed for different clinical tasks; and 2) the model's generalizability might be an issue when the noise level in the testing images differs from that in the training dataset. To address these two challenges, in this work we introduce a lightweight optimization process that can run on top of any existing DL-based denoiser during the testing phase to generate, in real time, multiple image candidates with different noise-resolution tradeoffs suitable for different clinical tasks. Consequently, our method allows users to interact with the denoiser to efficiently review the various image candidates and quickly pick the desired one; we therefore termed this method the deep interactive denoiser (DID). Experimental results demonstrated that DID can deliver multiple image candidates with different noise-resolution tradeoffs and shows great generalizability across various network architectures, as well as training and testing datasets with various noise levels.
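The abstract describes a lightweight test-time optimization that exposes a family of noise-resolution tradeoffs. The sketch below illustrates only the general idea of producing multiple candidates from one denoiser output, using a much simpler stand-in (convex blending between the denoised and noisy images); it is not the paper's actual optimization procedure.

```python
import numpy as np

def candidate_images(noisy, denoised, alphas=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Blend a denoiser output with the noisy input to sweep a
    noise-resolution tradeoff: alpha=1 keeps full denoising (smoother),
    alpha=0 keeps the original noisy image (sharper but noisier)."""
    return {a: a * denoised + (1.0 - a) * noisy for a in alphas}

# Usage: given an LDCT slice `x` and any trained denoiser `f`,
# candidates = candidate_images(x, f(x)); the user then picks one visually.
```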
9
Wong KK, Cummock JS, He Y, Ghosh R, Volpi JJ, Wong STC. Retrospective study of deep learning to reduce noise in non-contrast head CT images. Comput Med Imaging Graph 2021; 94:101996. PMID: 34637998. DOI: 10.1016/j.compmedimag.2021.101996.
Abstract
PURPOSE Presented herein is a novel CT denoising method that uses a skip residual encoder-decoder framework with group convolutions and a custom loss function to improve subjective and objective image quality for better disease detection in patients with acute ischemic stroke (AIS). MATERIALS AND METHODS In this retrospective study, confirmed AIS patients with full-dose non-contrast CT (NCCT) head scans were randomly selected from a stroke registry between 2016 and 2020. 325 patients (67 ± 15 years, 176 men) were included. 18 patients, each with 4-7 NCCTs performed within a 5-day timeframe (83 total scans), were used for model training; 307 patients, each with 1-4 NCCTs performed within a 5-day timeframe (380 total scans), were used for hold-out testing. In the training group, a mean CT was created from each patient's co-registered scans for each input CT to train a rotation-reflection equivariant U-Net with skip and residual connections and group convolutions (SRED-GCNN), using a custom loss function to remove image noise. Denoising performance was compared with the standard block-matching and 3D filtering (BM3D) method and with RED-CNN, both quantitatively and visually. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were measured in manually drawn regions of interest in grey matter (GM), white matter (WM), and deep grey matter (DG). Visual comparison and the impact on spatial resolution were assessed with phantom images. RESULTS SRED-GCNN reduced the original CT image noise significantly better than BM3D, with SNR improvements in GM, WM, and DG of 2.47x, 2.83x, and 2.64x, respectively, and CNR improvements in DG/WM and GM/WM of 2.30x and 2.16x, respectively. Compared with the proposed SRED-GCNN, RED-CNN reduces noise effectively, though the results are visibly blurred. Scans denoised by SRED-GCNN are visually clearer with preserved anatomy. CONCLUSION The proposed SRED-GCNN model significantly reduces image noise and improves signal-to-noise and contrast-to-noise ratios in 380 unseen head NCCT cases.
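For readers who want to reproduce the ROI-based metrics, the snippet below shows common definitions of SNR and CNR computed from manually drawn regions of interest; the exact noise normalization used in the paper may differ.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest: mean over std."""
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two tissue ROIs (e.g. deep grey
    matter vs. white matter), using a pooled noise estimate."""
    noise = np.sqrt(0.5 * (roi_a.var() + roi_b.var()))
    return abs(roi_a.mean() - roi_b.mean()) / noise
```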
Affiliation(s)
- Kelvin K Wong
- Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Houston Methodist Hospital and Department of Radiology, Weill Cornell Medicine, 6670 Bertner Ave, Houston, TX 77030, USA; The Ting Tsung and Wei Fong Chao Center for BRAIN, Houston Methodist Hospital, 6670 Bertner Ave, Houston, TX 77030, USA; Department of Radiology, Houston Methodist Institute for Academic Medicine, 6670 Bertner Ave, Houston, TX 77030, USA.
- Jonathon S Cummock
- Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Houston Methodist Hospital and Department of Radiology, Weill Cornell Medicine, 6670 Bertner Ave, Houston, TX 77030, USA; MD/PhD Program, Texas A&M University College of Medicine, 8447 Riverside Parkway, Suite 1002, Bryan, TX 77807, USA
- Yunjie He
- Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Houston Methodist Hospital and Department of Radiology, Weill Cornell Medicine, 6670 Bertner Ave, Houston, TX 77030, USA
- Rahul Ghosh
- Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Houston Methodist Hospital and Department of Radiology, Weill Cornell Medicine, 6670 Bertner Ave, Houston, TX 77030, USA; MD/PhD Program, Texas A&M University College of Medicine, 8447 Riverside Parkway, Suite 1002, Bryan, TX 77807, USA
- John J Volpi
- Department of Neurology, Houston Methodist Institute for Academic Medicine, 6670 Bertner Ave, Houston, TX 77030, USA
- Stephen T C Wong
- Systems Medicine and Bioengineering, Houston Methodist Cancer Center, Houston Methodist Hospital and Department of Radiology, Weill Cornell Medicine, 6670 Bertner Ave, Houston, TX 77030, USA; The Ting Tsung and Wei Fong Chao Center for BRAIN, Houston Methodist Hospital, 6670 Bertner Ave, Houston, TX 77030, USA; Department of Radiology, Houston Methodist Institute for Academic Medicine, 6670 Bertner Ave, Houston, TX 77030, USA; Department of Neuroscience and Experimental Therapeutics, Texas A&M University College of Medicine, 8447 Riverside Parkway, Suite 1005, Bryan, TX 77807, USA.
10
Zhi S, Kachelrieß M, Mou X. Spatiotemporal structure-aware dictionary learning-based 4D CBCT reconstruction. Med Phys 2021; 48:6421-6436. PMID: 34514608. DOI: 10.1002/mp.15009.
Abstract
PURPOSE Four-dimensional cone-beam computed tomography (4D CBCT) has been developed to reconstruct a sequence of phase-resolved images, which can assist in verifying the patient's position and offer information for cancer treatment planning. However, 4D CBCT images suffer from severe streaking artifacts and noise because the reconstruction of each phase is an extreme sparse-view CT problem, which in turn causes inaccuracies in treatment estimation. The purpose of this paper was to develop a new 4D CBCT reconstruction method to generate a series of 4D CBCT images with high spatiotemporal resolution. METHODS Considering the advantage of dictionary learning (DL) in effectively representing structural features and the correlation between neighboring pixels, we construct a novel DL-based method for 4D CBCT reconstruction. In this study, both a motion-aware dictionary and a spatially structural 2D dictionary are trained for 4D CBCT by exploiting the spatiotemporal correlation among the ten phase-resolved images and the spatial information in each image, respectively. Specifically, two reconstruction models are produced. The first one is the motion-aware dictionary learning-based 4D CBCT algorithm (MaDL). The second one is MaDL equipped with a prior knowledge constraint (pMaDL). Qualitative and quantitative evaluations are performed using a 4D extended cardiac-torso (XCAT) phantom, simulated patient data, and two sets of patient data. Several state-of-the-art 4D CBCT algorithms, such as the McKinnon-Bates (MKB) algorithm, prior image constrained compressed sensing (PICCS), and the high-quality initial image-guided 4D CBCT reconstruction method (HQI-4DCBCT), are applied for comparison to validate the performance of the proposed MaDL and prior-constrained MaDL (pMaDL) reconstruction frameworks. RESULTS Experimental results validate that the proposed MaDL can output reconstructions with few streaking artifacts, but some structural information, such as tumors and blood vessels, may still be missed. Meanwhile, the results of the proposed pMaDL demonstrate an improved spatiotemporal resolution of the reconstructed 4D CBCT images: streaking artifacts are largely suppressed and detailed structures are restored. For the XCAT phantom, quantitative evaluations indicate that average decreases of 58.70%, 45.25%, and 40.10% in root-mean-square error (RMSE) and average improvements of 2.10, 1.37, and 1.37 times in structural similarity index (SSIM) are achieved by the proposed pMaDL method when compared with MKB, PICCS, and MaDL(2D), respectively. Moreover, the proposed pMaDL achieves performance comparable to the HQI-4DCBCT algorithm in terms of RMSE and SSIM, while having a better ability to suppress streaking artifacts. CONCLUSIONS The proposed algorithm can reconstruct a set of 4D CBCT images with both high spatiotemporal resolution and detailed feature preservation. Moreover, the proposed pMaDL can effectively suppress streaking artifacts in the resulting reconstructions while achieving an overall improved spatiotemporal resolution by incorporating the motion-aware dictionary with a prior constraint into the proposed 4D CBCT iterative framework.
Affiliation(s)
- Shaohua Zhi
- Institute of Image Processing and Pattern Recognition, School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Marc Kachelrieß
- German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Xuanqin Mou
- Institute of Image Processing and Pattern Recognition, School of Information and Communications Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
11
Duan J, Mou X. Image quality guided iterative reconstruction for low-dose CT based on CT image statistics. Phys Med Biol 2021; 66. PMID: 34352735. DOI: 10.1088/1361-6560/ac1b1b.
Abstract
The iterative reconstruction framework shows advantages in low-dose and incomplete-data situations. In this framework, there are two components: the fidelity term, which aims to maintain the structural details of the reconstructed object, and the regularization term, which uses prior information to suppress artifacts such as noise. A regularization parameter balances the two, aiming to find a good trade-off between noise and resolution. Currently, the regularization parameter is selected by rule of thumb or requires some prior-knowledge assumption, which limits practical use. Furthermore, the computational cost of regularization parameter selection is heavy. In this paper, we address this problem by introducing CT image quality assessment (IQA) into the iterative reconstruction framework. Several steps are involved in the study. First, we analyze CT image statistics using the dual dictionary learning (DDL) method. Regularities are observed and summarized, revealing the relationship among the regularization parameter, the iterations, and CT image quality. Second, by deriving and simplifying the DDL procedure, a CT IQA metric named SODVAC is designed. SODVAC locates the optimal regularization parameter that results in a reconstructed image with distinct structures and little or no noise. Third, we introduce SODVAC into the iterative reconstruction framework, propose a general image-quality-guided iterative reconstruction (QIR) framework, and give a specific example (sQIR) by embedding SODVAC in the iterations. sQIR simultaneously optimizes the reconstructed image and the regularization parameter during the iterations. Results confirm the effectiveness of the proposed method. Compared with the existing state-of-the-art L-curve and ZIP selection strategies, our method requires no prior information and has a low computational cost.
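In spirit, image-quality-guided selection replaces a manual sweep over the regularization parameter with a no-reference quality score evaluated automatically. The skeleton below illustrates that idea in a simple grid-search form; `reconstruct` and `iqa_score` are hypothetical placeholders, and the actual sQIR updates the parameter inside the iterations rather than over a grid.

```python
import numpy as np

def pick_regularization(sinogram, betas, reconstruct, iqa_score):
    """Reconstruct with each candidate beta and keep the one whose
    no-reference IQA score is best (higher = better assumed here)."""
    best_beta, best_img, best_score = None, None, -np.inf
    for beta in betas:
        img = reconstruct(sinogram, beta)   # iterative reconstruction
        score = iqa_score(img)              # e.g. a SODVAC-like metric
        if score > best_score:
            best_beta, best_img, best_score = beta, img, score
    return best_beta, best_img
```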
Affiliation(s)
- Jiayu Duan
- Institute of Image Processing and Pattern Recognition, School of Electronics and Information Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Xuanqin Mou
- Institute of Image Processing and Pattern Recognition, School of Electronics and Information Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
12
Bai T, Wang B, Nguyen D, Jiang S. Probabilistic self-learning framework for low-dose CT denoising. Med Phys 2021; 48:2258-2270. PMID: 33621348. DOI: 10.1002/mp.14796.
Abstract
PURPOSE Despite the indispensable role of X-ray computed tomography (CT) in diagnostic medicine, the associated harmful ionizing radiation dose is a major concern, as it may cause genetic diseases and cancer. Decreasing patients' exposure can reduce the radiation dose and hence the related risks, but it inevitably induces higher quantum noise. Supervised deep learning techniques have been used to train deep neural networks for denoising low-dose CT (LDCT) images, but the success of such strategies requires massive sets of pixel-level paired LDCT and normal-dose CT (NDCT) images, which are rarely available in real clinical practice. Our purpose is to mitigate the data scarcity problem for deep learning-based LDCT denoising. METHODS To solve this problem, we devised a shift-invariant-property-based neural network that uses only the LDCT images to characterize both the inherent pixel correlations and the noise distribution, forming our probabilistic self-learning (PSL) framework. The AAPM Low-Dose CT Challenge dataset was used to train the network. Both simulated datasets and a real dataset were employed to test the denoising performance as well as the model generalizability. The performance was compared with a conventional method (total variation (TV)-based), a popular self-learning method (noise2void (N2V)), and a well-known unsupervised learning method (CycleGAN) using both qualitative visual inspection and quantitative metrics including peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and contrast-to-noise ratio (CNR). The standard deviations (STD) of selected flat regions were also calculated for comparison. RESULTS The PSL method improved the averaged PSNR/SSIM values from 27.61/0.5939 (LDCT) to 30.50/0.6797. By comparison, the averaged PSNR/SSIM values were 31.49/0.7284 (TV), 29.43/0.6699 (N2V), and 29.79/0.6992 (CycleGAN). The averaged STDs of selected flat regions were 132.3 HU (LDCT), 25.77 HU (TV), 19.95 HU (N2V), 75.06 HU (CycleGAN), 60.62 HU (PSL), and 57.28 HU (NDCT). For the low-contrast lesion detectability quantification, the CNRs were 0.202 (LDCT), 0.356 (TV), 0.372 (N2V), 0.383 (CycleGAN), 0.399 (PSL), and 0.359 (NDCT). By visual inspection, we observed that the proposed PSL method delivers a noise-suppressed and detail-preserved image, while the TV-based method leads to blocky artifacts, the N2V method produces over-smoothed structures and biased CT values, and the CycleGAN method generates slightly noisy results with inaccurate CT values. We also verified the generalizability of the PSL method, which exhibited superior denoising performance across various testing datasets with different data distribution shifts. CONCLUSIONS A deep learning-based convolutional neural network can be trained without paired datasets. Qualitative visual inspection showed that the proposed PSL method achieves better denoising performance than all the competitors, although the quantitative metrics (PSNR, SSIM, and CNR) did not always show consistently better values.
Affiliation(s)
- Ti Bai
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Centre, Dallas, Texas, 75239, USA
- Biling Wang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Centre, Dallas, Texas, 75239, USA
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Centre, Dallas, Texas, 75239, USA
- Steve Jiang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Centre, Dallas, Texas, 75239, USA
13
Zhang H, Liu B, Yu H, Dong B. MetaInv-Net: Meta Inversion Network for Sparse View CT Image Reconstruction. IEEE Transactions on Medical Imaging 2021; 40:621-634. PMID: 33104506. DOI: 10.1109/tmi.2020.3033541.
Abstract
X-ray computed tomography (CT) is widely used in clinical applications such as diagnosis and image-guided interventions. In this paper, we propose a new deep learning-based model for CT image reconstruction whose backbone network architecture is built by unrolling an iterative algorithm. However, unlike the existing strategy of including as many data-adaptive components in the unrolled dynamics model as possible, we find that it is enough to learn only the parts where traditional designs mostly rely on intuition and experience. More specifically, we propose to learn an initializer for the conjugate gradient (CG) algorithm that is involved in one of the subproblems of the backbone model. Other components, such as image priors and hyperparameters, are kept as in the original design. Since a hypernetwork is introduced to infer the initialization of the CG module, the proposed model can be viewed as a meta-learning model; we therefore call it the meta-inversion network (MetaInv-Net). The proposed MetaInv-Net can be designed with far fewer trainable parameters while still achieving image reconstruction performance superior to some state-of-the-art deep models in CT imaging. In simulated and real-data experiments, MetaInv-Net performs very well and generalizes beyond the training setting, i.e., to other scanning settings, noise levels, and data sets.
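The key point is that the CG solver inside the unrolled iteration is sensitive to its starting point, so a hypernetwork predicts that starting point. Below is a minimal sketch of a CG solver that accepts an externally supplied initial guess; the network producing `x0` and the CT subproblem operator `A` are placeholders, not the paper's architecture.

```python
import numpy as np

def conjugate_gradient(A, b, x0, n_iter=10, tol=1e-6):
    """Solve A x = b for symmetric positive-definite A (given as a
    callable), starting from x0 -- e.g. an initializer predicted by a
    hypernetwork instead of the usual zero or FBP start."""
    x = x0.copy()
    r = b - A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```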
14
Shi Y, Gao Y, Zhang Y, Sun J, Mou X, Liang Z. Spectral CT Reconstruction via Low-Rank Representation and Region-Specific Texture Preserving Markov Random Field Regularization. IEEE Transactions on Medical Imaging 2020; 39:2996-3007. PMID: 32217474. PMCID: PMC7529661. DOI: 10.1109/tmi.2020.2983414.
Abstract
Photon-counting spectral computed tomography (CT) is capable of material characterization and can improve diagnostic performance over traditional clinical CT. However, it suffers from photon-count starvation in each individual energy channel, which may cause severe artifacts in the reconstructed images. Furthermore, since the images in different energy channels describe the same object, there are high correlations among the channels. To make full use of the inter-channel correlations and minimize the count-starving effect while maintaining clinically meaningful texture information, this paper combines a region-specific texture model with a low-rank correlation descriptor as an a priori regularization to explore a superior texture-preserving Bayesian reconstruction of spectral CT. Specifically, the inter-channel correlations are characterized by a low-rank representation, and the inner-channel regional textures are modeled by a texture-preserving Markov random field. In other words, this paper integrates the spectral and spatial information into a unified Bayesian reconstruction framework. The widely used Split-Bregman algorithm is employed to minimize the objective function because of the non-differentiability of the low-rank representation. To evaluate the tissue-texture-preserving performance of the proposed method for each channel, three references are built for comparison: the traditional CT image from energy-integrating detection, spectral images from dual-energy CT, and individual channel images from a custom-made photon-counting spectral CT. As expected, the proposed method produced promising results in terms of not only preserving texture features but also suppressing image noise in each channel, compared with the existing methods of total variation (TV), low-rank TV, and tensor dictionary learning, by both visual inspection and the quantitative indexes of root-mean-square error, peak signal-to-noise ratio, structural similarity, and feature similarity.
15
Gao Y, Tan J, Shi Y, Lu S, Gupta A, Li H, Liang Z. Constructing a tissue-specific texture prior by machine learning from previous full-dose scan for Bayesian reconstruction of current ultralow-dose CT images. J Med Imaging (Bellingham) 2020; 7:032502. PMID: 32118093. PMCID: PMC7040436. DOI: 10.1117/1.jmi.7.3.032502.
Abstract
Purpose: Bayesian theory provides a sound framework for ultralow-dose computed tomography (ULdCT) image reconstruction, with two terms modeling the data statistical property and incorporating a priori knowledge of the image to be reconstructed. We investigate the feasibility of using a machine learning (ML) strategy, particularly a convolutional neural network (CNN), to construct a tissue-specific texture prior from a previous full-dose computed tomography (FdCT) scan. Approach: Our study constructs four tissue-specific texture priors, corresponding to lung, bone, fat, and muscle, and integrates the priors with the pre-log shifted Poisson (SP) data property for Bayesian reconstruction of ULdCT images. The Bayesian reconstruction was implemented by an algorithm called SP-CNN-T and compared with our previous Markov random field (MRF)-based tissue-specific texture prior algorithm, SP-MRF-T. Results: In addition to the conventional quantitative measures of mean squared error and peak signal-to-noise ratio, the structural similarity index, feature similarity, and Haralick texture features were used to measure the performance difference between the SP-CNN-T and SP-MRF-T algorithms in terms of structure and tissue texture preservation, demonstrating the feasibility and potential of the investigated ML approach. Conclusions: Both the training performance and the image reconstruction results showed the feasibility of constructing a CNN texture prior model and its potential for improving the structure preservation of nodules compared with our previous regional tissue-specific MRF texture prior model.
Affiliation(s)
- Yongfeng Gao
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- Jiaxing Tan
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- Yongyi Shi
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- Siming Lu
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- State University of New York, Department of Biomedical Engineering, Stony Brook, New York, United States
- Amit Gupta
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- Haifang Li
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- Zhengrong Liang
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- State University of New York, Department of Biomedical Engineering, Stony Brook, New York, United States
16
Zhi S, Kachelrieß M, Mou X. High-quality initial image-guided 4D CBCT reconstruction. Med Phys 2020; 47:2099-2115. PMID: 32017128. DOI: 10.1002/mp.14060.
Abstract
PURPOSE Four-dimensional cone-beam computed tomography (4D CBCT) has been developed to provide a sequence of phase-resolved reconstructions in image-guided radiation therapy. However, 4D CBCT images are degraded by severe streaking artifacts because the 4D CBCT reconstruction process is an extreme sparse-view CT procedure wherein only under-sampled projections are used for the reconstruction of each phase. To obtain a set of 4D CBCT images with both high spatial and high temporal resolution, we propose an algorithm that provides a high-quality initial image at the beginning of the iterative reconstruction process for each phase to guide the final reconstructed result toward its optimal solution. METHODS The proposed method consists of three steps to generate the initial image. First, a prior image is obtained by an iterative reconstruction method using the measured projections of the entire set of 4D CBCT images. The prior image clearly shows the structures in static regions, although it contains blurring artifacts in motion regions. Second, the robust principal component analysis (RPCA) model is adopted to extract the motion components corresponding to each phase-resolved image. Third, a set of initial images is produced by the proposed linear estimation model that combines the prior image and the RPCA-decomposed motion components. The final 4D CBCT images are derived from the simultaneous algebraic reconstruction technique (SART) initialized with these images. Qualitative and quantitative evaluations were performed using two extended cardiac-torso (XCAT) phantoms and two sets of patient data. Several state-of-the-art 4D CBCT algorithms were run for comparison to validate the performance of the proposed method. RESULTS The image quality of the phase-resolved images is greatly improved by the proposed method in both the phantom and patient studies. The results show outstanding spatial resolution, with streaking artifacts suppressed to a large extent and detailed structures such as tumors and blood vessels well restored. Meanwhile, the proposed method achieves high temporal resolution, with distinct respiratory motion changes at different phases. For the simulation phantoms, quantitative evaluations indicate that average decreases in root-mean-square error (RMSE) of 36.72% at the EI phase and 42% at the EE phase are achieved by our method compared with the PICCS algorithm in Phantom 1 and Phantom 2. In addition, the proposed method has the lowest entropy and the highest normalized mutual information in the simulation experiments compared with existing methods such as PRI, RPCA-4DCT, SMART, and PICCS. For the real patient cases, the proposed method also achieves the lowest entropy value compared with the competing methods. CONCLUSIONS The proposed algorithm can generate an optimal initial image to improve iterative reconstruction performance. The final sequence of phase-resolved volumes guided by the initial images achieves high spatiotemporal resolution by eliminating motion-induced artifacts. This study presents a practical 4D CBCT reconstruction method with leading image quality.
Affiliation(s)
- Shaohua Zhi
- Institute of Image Processing and Pattern Recognition, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Marc Kachelrieß
- German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120, Heidelberg, Germany
- Xuanqin Mou
- Institute of Image Processing and Pattern Recognition, Xi'an Jiaotong University, Xi'an, Shaanxi, China
17
Zhu G, Jiang B, Tong L, Xie Y, Zaharchuk G, Wintermark M. Applications of Deep Learning to Neuro-Imaging Techniques. Front Neurol 2019; 10:869. PMID: 31474928. PMCID: PMC6702308. DOI: 10.3389/fneur.2019.00869.
Abstract
Many clinical applications of deep learning have been proposed and studied in radiology for classification, risk assessment, segmentation, diagnosis, prognosis, and even prediction of therapy response. There are many other innovative applications of AI in the technical aspects of medical imaging, particularly in image acquisition, ranging from removing image artifacts and normalizing/harmonizing images to improving image quality, lowering radiation and contrast dose, and shortening the duration of imaging studies. This article addresses this topic and presents an overview of deep learning applied to neuroimaging techniques.
Affiliation(s)
- Max Wintermark
- Neuroradiology Section, Department of Radiology, Stanford Healthcare, Stanford, CA, United States
18
Liu F, Gong X, Wang LV, Guan J, Song L, Meng J. Dictionary learning sparse-sampling reconstruction method for in-vivo 3D photoacoustic computed tomography. Biomedical Optics Express 2019; 10:1660-1677. PMID: 31061761. PMCID: PMC6484974. DOI: 10.1364/boe.10.001660.
Abstract
The sparse transforms currently used in model-based reconstruction methods for photoacoustic computed tomography (PACT) are predefined, and they typically cannot adequately capture the underlying features of specific data sets, thus limiting the high-quality recovery of photoacoustic images. In this work, we present an advanced reconstruction model using the K-SVD dictionary learning technique and present in vivo results after adapting the model to a 3D PACT system. The in vivo experiments were performed, under an IRB-approved protocol, on a human hand and two rats. Compared with the traditional sparse transform, our proposed method improved the accuracy and contrast-to-noise ratio of the reconstructed photoacoustic images by, on average, 3.7 and 1.8 times, respectively, at a 50% sparse-sampling rate. We also compared the performance of our algorithm against other techniques, and the imaging speed was 60% faster than that of the other approaches. Our system would require a sparse transducer array and fewer data acquisition (DAQ) modules, potentially reducing the cost of the system. Thus, our work provides a new way of reconstructing photoacoustic images and could enable the development of new high-speed, low-cost 3D PACT systems for various biomedical applications.
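For readers unfamiliar with dictionary-learning sparse coding, the fragment below illustrates the generic two-stage idea (learn a patch dictionary, then sparse-code new patches) with scikit-learn on toy 2D data; it is not the paper's K-SVD implementation and omits the 3D PACT forward model entirely.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

# Toy data: a random "training" image standing in for reference PA images.
rng = np.random.default_rng(0)
train_img = rng.standard_normal((128, 128))

# Learn an overcomplete patch dictionary (playing the K-SVD role; the
# scikit-learn mini-batch solver is used here instead of true K-SVD).
patches = extract_patches_2d(train_img, (8, 8), max_patches=2000, random_state=0)
X = patches.reshape(len(patches), -1)
X -= X.mean(axis=1, keepdims=True)
dico = MiniBatchDictionaryLearning(n_components=128,
                                   transform_algorithm='omp',
                                   transform_n_nonzero_coefs=5,
                                   random_state=0).fit(X)

# Sparse-code new patches; codes @ dictionary gives the patch estimates
# that a model-based reconstruction would use as its sparsity prior.
codes = dico.transform(X[:10])
recon_patches = codes @ dico.components_
```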
Affiliation(s)
- Fangyan Liu
- Qufu Normal University, School of Information Science and Engineering, 80 Yantai Road North, Rizhao, 276826, China
- Equal Contribution
- Xiaojing Gong
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Institute of Biomedical and Health Engineering, 1068 Xueyuan Boulevard, Shenzhen, 518055, China
- Equal Contribution
- Lihong V Wang
- California Institute of Technology, Department of Electronic Engineering, Andrew & Peggy Cherng Department of Medical Engineering, Pasadena, CA 91125, USA
- Jingjing Guan
- Qufu Normal University, School of Information Science and Engineering, 80 Yantai Road North, Rizhao, 276826, China
- Liang Song
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Institute of Biomedical and Health Engineering, 1068 Xueyuan Boulevard, Shenzhen, 518055, China
- Jing Meng
- Qufu Normal University, School of Information Science and Engineering, 80 Yantai Road North, Rizhao, 276826, China