51. Cheng W, He J, Liu Y, Zhang H, Wang X, Liu Y, Zhang P, Chen H, Gui Z. CAIR: Combining integrated attention with iterative optimization learning for sparse-view CT reconstruction. Comput Biol Med 2023;163:107161. PMID: 37311381. DOI: 10.1016/j.compbiomed.2023.107161.
Abstract
Sparse-view CT is an efficient approach to low-dose scanning, but it degrades image quality. Inspired by the success of non-local attention in natural-image denoising and compression-artifact removal, we propose CAIR, a network combining integrated attention and iterative optimization learning for sparse-view CT reconstruction. Specifically, we first unroll proximal gradient descent into a deep network and add an enhanced initializer between the gradient term and the approximation term, which strengthens the information flow between layers, preserves image details, and speeds up network convergence. Second, an integrated attention module is introduced into the reconstruction process as a regularization term; it adaptively fuses local and non-local image features, which are used to reconstruct the image's complex textures and repetitive details, respectively. We also designed a one-shot iteration strategy that simplifies the network structure and reduces reconstruction time while maintaining image quality. Experiments showed that the proposed method is robust and outperforms state-of-the-art methods both quantitatively and qualitatively, markedly improving structure preservation and artifact removal.
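The unrolled scheme this abstract describes can be illustrated with a classical (non-learned) proximal-gradient sketch; the operator `A`, the soft-threshold proximal step, and all parameter values below are assumptions standing in for the paper's learned modules:

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the L1 norm; in CAIR this role is played by a
    # learned regularization (the integrated attention module).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_pgd(A, y, n_iters=200, lam=0.01):
    """Classical proximal gradient descent, the scheme that CAIR unrolls:
    each 'layer' is a gradient step on 0.5*||Ax - y||^2 (the gradient term)
    followed by a proximal step (the approximation term)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L step size for convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)             # data-fidelity gradient
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy example: recover a sparse signal from noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 40))
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = unrolled_pgd(A, y)
```

In the paper, each unrolled iteration additionally passes through the enhanced initializer and the attention-based regularizer; the sketch only shows the fixed-point structure being unrolled.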
Affiliation(s)
- Weiting Cheng: State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
- Jichun He: School of Medical and BioInformation Engineering, Northeastern University, Shenyang 110000, China
- Yi Liu: State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
- Haowen Zhang: State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
- Xiang Wang: State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
- Yuhang Liu: State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
- Pengcheng Zhang: State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
- Hao Chen: State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
- Zhiguo Gui: State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, China
52. Huang Z, Li W, Wang Y, Liu Z, Zhang Q, Jin Y, Wu R, Quan G, Liang D, Hu Z, Zhang N. MLNAN: Multi-level noise-aware network for low-dose CT imaging implemented with constrained cycle Wasserstein generative adversarial networks. Artif Intell Med 2023;143:102609. PMID: 37673577. DOI: 10.1016/j.artmed.2023.102609.
Abstract
Low-dose CT techniques minimize the radiation exposure of patients to reduce the risk of radiation-induced cancer. In recent years, many deep learning methods have been proposed to restore image quality by learning a mapping from low-dose CT images to their normal-dose counterparts. However, most of these methods ignore the effect of different radiation doses on the final CT images, which leads to large differences in the intensity of the observable noise. Moreover, the noise intensity of low-dose CT images differs significantly across medical device manufacturers. In this paper, we propose a multi-level noise-aware network (MLNAN), implemented with constrained cycle Wasserstein generative adversarial networks, to restore low-dose CT images under uncertain noise levels. In particular, a noise-level classification is predicted and reused as a prior in the generator networks, and the discriminator network incorporates noise-level determination. Under two dose-reduction strategies, experiments were conducted on two datasets: the simulated clinical AAPM challenge dataset and commercial CT datasets from United Imaging Healthcare (UIH). The results illustrate the effectiveness of the proposed method in noise suppression and structural detail preservation compared with several other deep learning-based methods, and ablation studies validate the contribution of the individual components. Further research is required for practical clinical applications and other medical modalities.
Affiliation(s)
- Zhenxing Huang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Wenbo Li: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Yunling Wang: Department of Radiology, First Affiliated Hospital of Xinjiang Medical University, Urumqi 830011, China
- Zhou Liu: Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, China
- Qiyang Zhang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yuxi Jin: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Ruodai Wu: Department of Radiology, Shenzhen University General Hospital, Shenzhen University Clinical Medical Academy, Shenzhen 518055, China
- Guotao Quan: Shanghai United Imaging Healthcare, Shanghai 201807, China
- Dong Liang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Zhanli Hu: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Na Zhang: Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
53. Wang H, Chi J, Wu C, Yu X, Wu H. Degradation Adaption Local-to-Global Transformer for Low-Dose CT Image Denoising. J Digit Imaging 2023;36:1894-1909. PMID: 37118101. PMCID: PMC10407009. DOI: 10.1007/s10278-023-00831-y.
Abstract
Computed tomography (CT) plays an essential role in medical diagnosis. Considering the potential risk of exposing patients to X-ray radiation, low-dose CT (LDCT) has been widely applied in medical imaging. Since reducing the radiation dose can introduce noise and artifacts, methods that eliminate them have drawn increasing attention and produced impressive results over the past decades. However, recently proposed methods mostly suffer from residual noise, over-smoothed structures, or false lesions derived from noise. To tackle these issues, we propose a novel degradation-adaption local-to-global transformer (DALG-Transformer) for restoring LDCT images. Specifically, the DALG-Transformer is built on self-attention modules, which excel at modeling long-range dependencies between image patch sequences. Meanwhile, an unsupervised degradation representation learning scheme is developed, for the first time in medical image processing, to learn abstract degradation representations of LDCT images, which can distinguish various degradations in the representation space rather than the pixel space. We then introduce a degradation-aware modulated convolution and a gated mechanism into the building modules (i.e., multi-head attention and feed-forward network) of each Transformer block, bringing in the complementary strength of convolution to emphasize the spatially local context. Experimental results show that the DALG-Transformer provides superior performance in noise removal, structure preservation, and false-lesion elimination compared with five representative deep networks. The proposed network may be readily applied to other image processing tasks, including image reconstruction, deblurring, and super-resolution.
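The degradation-aware modulated convolution mentioned above can be pictured as ordinary convolution whose kernels are rescaled by a projected degradation embedding. The `1 + tanh` modulation, the projection matrix `proj`, and all shapes below are illustrative assumptions, not the paper's design:

```python
import numpy as np

def modulated_conv2d(x, weight, deg_embed, proj):
    """Hypothetical degradation-aware modulated convolution: a degradation
    embedding is projected to per-input-channel scales that modulate the
    kernel before a plain (valid) 2D convolution.
    x: (C_in, H, W), weight: (C_out, C_in, k, k), deg_embed: (D,), proj: (C_in, D)."""
    scales = 1.0 + np.tanh(proj @ deg_embed)      # (C_in,), stays near 1.0
    w = weight * scales[None, :, None, None]      # modulate the kernels
    c_out, c_in, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    out = np.zeros((c_out, H, W))
    for o in range(c_out):                        # naive valid convolution
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(w[o] * x[:, i:i+k, j:j+k])
    return out

rng = np.random.default_rng(1)
x = rng.standard_normal((3, 8, 8))
w = rng.standard_normal((4, 3, 3, 3))
d = rng.standard_normal(16)            # abstract degradation representation
P = 0.1 * rng.standard_normal((3, 16))
y = modulated_conv2d(x, w, d, P)       # shape (4, 6, 6)
```

With a zero embedding the scales collapse to 1 and the layer reduces to a standard convolution, which is the point of the design: the degradation representation perturbs, rather than replaces, the learned filters.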
Affiliation(s)
- Huan Wang: Northeastern University, No. 195 Chuangxin Road, Shenyang, China
- Jianning Chi: Northeastern University, No. 195 Chuangxin Road, Shenyang, China
- Chengdong Wu: Northeastern University, No. 195 Chuangxin Road, Shenyang, China
- Xiaosheng Yu: Northeastern University, No. 195 Chuangxin Road, Shenyang, China
- Hao Wu: University of Sydney, Sydney, NSW 2006, Australia
54. Zhao C, Yan H. Deep learning enables nanoscale X-ray 3D imaging with limited data. Light Sci Appl 2023;12:159. PMID: 37369649. DOI: 10.1038/s41377-023-01198-z.
Abstract
Deep neural networks can greatly improve tomographic reconstruction from limited data. A recent effort combining a ptycho-tomography model with a 3D U-Net demonstrated a significant reduction in both the number of projections and the computation time, showing the approach's potential for integrated-circuit imaging, which requires high resolution and fast measurement.
Affiliation(s)
- Chonghang Zhao: National Synchrotron Light Source II, Brookhaven National Laboratory, Upton, NY 11973, USA
- Hanfei Yan: National Synchrotron Light Source II, Brookhaven National Laboratory, Upton, NY 11973, USA
55. Chan Y, Liu X, Wang T, Dai J, Xie Y, Liang X. An attention-based deep convolutional neural network for ultra-sparse-view CT reconstruction. Comput Biol Med 2023;161:106888. DOI: 10.1016/j.compbiomed.2023.106888.
56. Du C, Qiao Z. EPRI sparse reconstruction method based on deep learning. Magn Reson Imaging 2023;97:24-30. PMID: 36493992. DOI: 10.1016/j.mri.2022.12.008.
Abstract
Electron paramagnetic resonance imaging (EPRI) is an advanced method for imaging tumor oxygen concentration. Its current bottleneck is the long scanning time. Sparse reconstruction, i.e., reconstructing images from sparse-view projections, is an effective way to accelerate imaging. However, EPRI images sparsely reconstructed by the classic filtered back-projection (FBP) algorithm often contain severe streak artifacts, which hinder subsequent image processing. In this work, we propose a feature-pyramid-attention-based residual dense deep convolutional network (FRD-Net) to suppress the streak artifacts in FBP-reconstructed images. The network combines residual connections, an attention mechanism, and dense connections, and introduces a perceptual loss. An EPRI image with streak artifacts serves as the network input, and the label is the corresponding high-quality image densely reconstructed by the FBP algorithm. After training, the FRD-Net learns to suppress streak artifacts. Reconstruction experiments on real data show that the FRD-Net improves sparse reconstruction accuracy compared with three representative deep networks.
Affiliation(s)
- Congcong Du: School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi 030006, China
- Zhiwei Qiao: School of Computer and Information Technology, Shanxi University, Taiyuan, Shanxi 030006, China
57. Li S, Peng L, Li F, Liang Z. Low-dose sinogram restoration enabled by conditional GAN with cross-domain regularization in SPECT imaging. Math Biosci Eng 2023;20:9728-9758. PMID: 37322909. DOI: 10.3934/mbe.2023427.
Abstract
To generate high-quality single-photon emission computed tomography (SPECT) images under low-dose acquisition, a sinogram denoising method was studied for suppressing random oscillation and enhancing contrast in the projection domain. A conditional generative adversarial network with cross-domain regularization (CGAN-CDR) is proposed for low-dose SPECT sinogram restoration. The generator stepwise extracts multiscale sinusoidal features from a low-dose sinogram, which are then rebuilt into a restored sinogram. Long skip connections are introduced into the generator so that low-level features can be better shared and reused, and the spatial and angular sinogram information can be better recovered. A patch discriminator is employed to capture detailed sinusoidal features within sinogram patches, so that detailed features in local receptive fields can be effectively characterized. Meanwhile, a cross-domain regularization is developed in both the projection and image domains. The projection-domain regularization directly constrains the generator by penalizing the difference between generated and label sinograms, while the image-domain regularization imposes a similarity constraint on the reconstructed images, ameliorating ill-posedness and serving as an indirect constraint on the generator. Through adversarial learning, the CGAN-CDR model achieves high-quality sinogram restoration. Finally, the preconditioned alternating projection algorithm with total variation regularization is adopted for image reconstruction. Extensive numerical experiments show that the proposed model performs well in low-dose sinogram restoration: visually, CGAN-CDR suppresses noise and artifacts, enhances contrast, and preserves structure, particularly in low-contrast regions; quantitatively, it obtains superior results in both global and local image-quality metrics; and in robustness analysis, it better recovers detailed bone structure from higher-noise sinograms. This work demonstrates the feasibility and effectiveness of CGAN-CDR in low-dose SPECT sinogram restoration, yielding significant quality improvement in both the projection and image domains and enabling potential applications in real low-dose studies.
Affiliation(s)
- Si Li: School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
- Limei Peng: School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
- Fenghuan Li: School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
- Zengguo Liang: School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
58. Zhu Y, Zhao H, Wang T, Deng L, Yang Y, Jiang Y, Li N, Chan Y, Dai J, Zhang C, Li Y, Xie Y, Liang X. Sinogram domain metal artifact correction of CT via deep learning. Comput Biol Med 2023;155:106710. PMID: 36842222. DOI: 10.1016/j.compbiomed.2023.106710.
Abstract
PURPOSE Metal artifacts can significantly degrade the quality of computed tomography (CT) images. They arise as X-rays penetrate implanted metals, causing severe attenuation; the resulting degradation can hinder subsequent clinical diagnosis and treatment planning. Beam-hardening artifacts typically manifest as severe streak artifacts in the image domain. In the sinogram domain, metal occupies specific regions, and processing restricted to those regions preserves image information elsewhere, making the model more robust. We therefore propose a region-based correction of beam-hardening artifacts in the sinogram domain using deep learning. METHODS The model comprises three modules: (a) a Sinogram Metal Segmentation Network (Seg-Net), (b) a Sinogram Enhancement Network (Sino-Net), and (c) a Fusion Module. An Attention U-Net first segments the metal regions in the sinogram, and the segmented regions are interpolated to obtain a metal-free sinogram. Sino-Net is then applied to compensate for the loss of tissue and artifact information in the metal regions. The corrected metal sinogram and the interpolated metal-free sinogram are reconstructed into metal and metal-free CT images, respectively, and the Fusion Module combines the two to produce the final result. RESULTS The proposed method performs strongly in both qualitative and quantitative evaluations. The peak signal-to-noise ratio (PSNR) improved from 18.22 before correction to 30.32 after, the structural similarity index measure (SSIM) improved from 0.75 to 0.99, and the weighted peak signal-to-noise ratio (WPSNR) increased from 21.69 to 35.68. CONCLUSIONS The proposed method achieves reliable, high-accuracy correction of beam-hardening artifacts.
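The interpolation step in the METHODS (replacing the segmented metal trace to obtain a metal-free sinogram) can be sketched with simple per-view linear interpolation; the segmentation mask is assumed given here, whereas the paper obtains it with an Attention U-Net:

```python
import numpy as np

def inpaint_metal_trace(sino, metal_mask):
    """For each projection view (row), linearly interpolate the detector
    readings across the segmented metal trace.
    sino, metal_mask: (n_views, n_det); mask is True where metal projects."""
    out = sino.copy()
    det = np.arange(sino.shape[1])
    for v in range(sino.shape[0]):
        m = metal_mask[v]
        if m.any() and (~m).any():
            # Interpolate corrupted detector bins from their clean neighbors.
            out[v, m] = np.interp(det[m], det[~m], sino[v, ~m])
    return out

# Toy example: a corrupted band in the middle of each (smooth) view.
sino = np.linspace(0.0, 1.0, 32)[None, :].repeat(4, axis=0)
mask = np.zeros_like(sino, dtype=bool)
mask[:, 12:18] = True
sino_corrupt = sino.copy()
sino_corrupt[mask] = 10.0            # simulated metal spike
fixed = inpaint_metal_trace(sino_corrupt, mask)
```

This interpolation alone discards whatever tissue signal overlapped the metal trace, which is exactly the loss that Sino-Net is introduced to compensate for.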
Affiliation(s)
- Yulin Zhu: The First Dongguan Affiliated Hospital, Guangdong Medical University, Dongguan 523808, China
- Hanqing Zhao: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Tangsheng Wang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Lei Deng: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yupeng Yang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yuming Jiang: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Na Li: Department of Biomedical Engineering, Guangdong Medical University, Dongguan 523808, China
- Yinping Chan: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jingjing Dai: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Chulong Zhang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yunhui Li: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Yaoqin Xie: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xiaokun Liang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
59. Wu X, Gao P, Zhang P, Shang Y, He B, Zhang L, Jiang J, Hui H, Tian J. Cross-domain knowledge transfer based parallel-cascaded multi-scale attention network for limited view reconstruction in projection magnetic particle imaging. Comput Biol Med 2023;158:106809. PMID: 37004433. DOI: 10.1016/j.compbiomed.2023.106809.
Abstract
Projection magnetic particle imaging (MPI) can significantly improve the temporal resolution of three-dimensional (3D) imaging compared with traditional point-by-point scanning. However, the dense set of projections required for tomographic reconstruction limits the scope for temporal-resolution optimization. In computed tomography (CT), this problem is addressed by reconstructing from limited-view projections (sparse-view or limited-angle), with methods falling into two categories: completing the limited-view sinogram, and image-domain post-processing of the streaking artifacts caused by insufficient projections. Benefiting from large-scale CT datasets, both categories of deep learning-based methods have achieved tremendous progress; in MPI, however, data are scarce. We propose a cross-domain knowledge transfer learning strategy that transfers prior knowledge of the limited-view problem, learned by the model on CT data, to MPI, reducing the network's requirement for real MPI data. In addition, because the size of the imaging target affects the scale of the streaking artifacts, we propose a parallel-cascaded multi-scale attention module that allows the network to adaptively identify streaking artifacts at different scales. The proposed method was evaluated on real phantom and in vivo mouse data, significantly outperformed several advanced limited-view methods, and overcame the streaking artifacts caused by an insufficient number of projections.
Affiliation(s)
- Xiangjun Wu: School of Engineering Medicine & School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Pengli Gao: School of Engineering Medicine & School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Peng Zhang: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Department of Biomedical Engineering, School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
- Yaxin Shang: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Department of Biomedical Engineering, School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
- Bingxi He: School of Engineering Medicine & School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Liwen Zhang: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Molecular Imaging, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Jingying Jiang: School of Engineering Medicine & School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Hui Hui: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Molecular Imaging, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Jie Tian: School of Engineering Medicine & School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China; CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, China; Beijing Key Laboratory of Molecular Imaging, Beijing, China; Zhuhai Precision Medical Center, Zhuhai People's Hospital, Jinan University, Zhuhai, China
60. Lyu Q, Neph R, Sheng K. Tomographic detection of photon pairs produced from high-energy X-rays for the monitoring of radiotherapy dosing. Nat Biomed Eng 2023;7:323-334. PMID: 36280738. PMCID: PMC10038801. DOI: 10.1038/s41551-022-00953-8.
Abstract
Measuring the radiation dose reaching a patient's body is difficult. Here we report a technique for the tomographic reconstruction of the location of photon pairs originating from the annihilation of positron-electron pairs produced by high-energy X-rays travelling through tissue. We used Monte Carlo simulations on pre-recorded data from tissue-mimicking phantoms and from a patient with a brain tumour to show the feasibility of this imaging modality, which we named 'pair-production tomography', for the monitoring of radiotherapy dosing. We simulated three image-reconstruction methods, one applicable to a pencil X-ray beam scanning through a region of interest, and two applicable to the excitation of tissue volumes via broad beams (with temporal resolution sufficient to identify coincident photon pairs via filtered back projection, or with higher temporal resolution sufficient for the estimation of a photon's time-of-flight). In addition to the monitoring of radiotherapy dosing, we show that image contrast resulting from pair-production tomography is highly proportional to the material's atomic number. The technique may thus also allow for element mapping and for soft-tissue differentiation.
Affiliation(s)
- Qihui Lyu: Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
- Ryan Neph: Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
- Ke Sheng: Department of Radiation Oncology, University of California Los Angeles, Los Angeles, CA, USA
61. Chen X, Zhou B, Xie H, Miao T, Liu H, Holler W, Lin M, Miller EJ, Carson RE, Sinusas AJ, Liu C. DuDoSS: Deep-learning-based dual-domain sinogram synthesis from sparsely sampled projections of cardiac SPECT. Med Phys 2023;50:89-103. PMID: 36048541. PMCID: PMC9868054. DOI: 10.1002/mp.15958.
Abstract
PURPOSE Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is widely applied for the diagnosis of cardiovascular diseases. In clinical practice, long scanning procedures and acquisition times can induce patient anxiety and discomfort, motion artifacts, and misalignments between SPECT and computed tomography (CT). Reducing the number of projection angles shortens scanning time, but fewer projection angles can lower reconstruction accuracy and increase noise and artifacts due to reduced angular sampling. We developed a deep learning-based approach for high-quality SPECT image reconstruction from sparsely sampled projections. METHODS We propose a novel deep learning-based dual-domain sinogram synthesis (DuDoSS) method to recover full-view projections from sparsely sampled projections of cardiac SPECT. DuDoSS uses SPECT images predicted in the image domain as guidance to generate synthetic full-view projections in the sinogram domain. The synthetic projections are then reconstructed into non-attenuation-corrected and attenuation-corrected (AC) SPECT images for voxel-wise and segment-wise quantitative evaluation in terms of normalized mean square error (NMSE) and absolute percent error (APE). Previous deep learning-based approaches, direct sinogram generation (Direct Sino2Sino) and direct image prediction (Direct Img2Img), were tested for comparison. The dataset comprised 500 anonymized clinical stress-state MPI studies acquired on a GE NM/CT 850 scanner with 60 projection angles following injection of 99mTc-tetrofosmin. RESULTS DuDoSS generated synthetic projections and SPECT images more consistent with the ground truth than the other approaches. The average voxel-wise NMSE between the synthetic projections by DuDoSS and the ground-truth full-view projections was 2.08% ± 0.81%, versus 2.21% ± 0.86% (p < 0.001) by Direct Sino2Sino. The average voxel-wise NMSE between the AC SPECT images by DuDoSS and the ground truth was 1.63% ± 0.72%, versus 1.84% ± 0.79% (p < 0.001) by Direct Sino2Sino and 1.90% ± 0.66% (p < 0.001) by Direct Img2Img. The average segment-wise APE between the AC SPECT images by DuDoSS and the ground truth was 3.87% ± 3.23%, versus 3.95% ± 3.21% (p = 0.023) by Direct Img2Img and 4.46% ± 3.58% (p < 0.001) by Direct Sino2Sino. CONCLUSIONS DuDoSS can generate accurate synthetic full-view projections from sparsely sampled projections of cardiac SPECT. Its synthetic projections and reconstructed SPECT images are more consistent with the ground truth than those of the other approaches, potentially enabling fast data acquisition of cardiac SPECT.
Affiliation(s)
- Xiongchao Chen: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Bo Zhou: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Huidong Xie: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Tianshun Miao: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Hui Liu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- MingDe Lin: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA; Visage Imaging, Inc., San Diego, CA 92130, USA
- Edward J. Miller: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA; Department of Internal Medicine (Cardiology), Yale University School of Medicine, New Haven, CT 06511, USA
- Richard E. Carson: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Albert J. Sinusas: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA; Department of Internal Medicine (Cardiology), Yale University School of Medicine, New Haven, CT 06511, USA
- Chi Liu: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
Collapse
|
62
|
Lina J, Xu H, Aimin H, Beibei J, Zhiguo G. A densely connected LDCT image denoising network based on dual-edge extraction and multi-scale attention under compound loss. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2023; 31:1207-1226. [PMID: 37742690 DOI: 10.3233/xst-230132] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/26/2023]
Abstract
BACKGROUND Low-dose computed tomography (LDCT) uses a lower radiation dose, but the reconstructed images contain more noise, which can negatively impact disease diagnosis. Although deep learning with edge extraction operators preserves edge information well, applying the edge extraction operators only to the input LDCT images does not yield overall satisfactory results. OBJECTIVE To improve LDCT image quality, this study proposes and tests a dual-edge-extraction, multi-scale-attention convolutional neural network (DEMACNN) based on a compound loss. METHODS The network applies edge extraction operators both to the input images and to the feature maps within the network, improving the utilization of the edge operators and retaining the images' edge information. A feature enhancement block is constructed by fusing an attention mechanism with a multi-scale module, enhancing effective information while suppressing useless information. Residual learning is used to train the network, improving its performance and mitigating vanishing gradients. Beyond the network structure, a compound loss function consisting of the MSE loss, the proposed joint total variation loss, and an edge loss is proposed to enhance the denoising ability of the network and preserve image edges. RESULTS Compared with other advanced methods (REDCNN, CT-former, and EDCNN), the proposed network achieves the best PSNR and SSIM values on abdominal LDCT images, 33.3486 and 0.9104, respectively. The network also performs well on head and chest image data. CONCLUSION The experimental results demonstrate that the proposed network structure and denoising algorithm not only effectively remove the noise in LDCT images but also protect the edges and details of the images well.
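A compound loss of the shape described above (MSE plus a total-variation term plus an edge loss) can be sketched roughly as below. This is an illustrative NumPy version only: the Sobel operator, the anisotropic TV formulation, and the weights `w_mse`, `w_tv`, `w_edge` are assumptions for exposition, not the paper's exact operators or values.

```python
import numpy as np

def sobel_edges(img):
    """Edge magnitude via Sobel filtering (one illustrative edge operator)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)

def total_variation(img):
    """Anisotropic total variation: sum of absolute neighbor differences."""
    return np.sum(np.abs(np.diff(img, axis=0))) + np.sum(np.abs(np.diff(img, axis=1)))

def compound_loss(pred, target, w_mse=1.0, w_tv=1e-3, w_edge=0.1):
    """Weighted sum of an MSE term, a TV-difference term, and an edge term."""
    mse = np.mean((pred - target) ** 2)
    tv = abs(total_variation(pred) - total_variation(target))
    edge = np.mean(np.abs(sobel_edges(pred) - sobel_edges(target)))
    return w_mse * mse + w_tv * tv + w_edge * edge

rng = np.random.default_rng(0)
target = rng.random((16, 16))
noisy = target + 0.05 * rng.standard_normal((16, 16))
assert compound_loss(noisy, target) > compound_loss(target, target)  # 0 at optimum
```

A perfect prediction drives all three terms to zero, while a noisy prediction is penalized by every term, which is the behavior a denoising loss needs.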
Affiliation(s)
- Jia Lina
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan, China
- He Xu
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan, China
- Huang Aimin
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan, China
- Jia Beibei
- School of Physics and Electronic Engineering, Shanxi University, Taiyuan, China
- Gui Zhiguo
- State Key Laboratory of Dynamic Measurement Technology, North University of China, Taiyuan, China
63
TIME-Net: Transformer-Integrated Multi-Encoder Network for limited-angle artifact removal in dual-energy CBCT. Med Image Anal 2023; 83:102650. [PMID: 36334394 DOI: 10.1016/j.media.2022.102650] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Revised: 08/25/2022] [Accepted: 10/07/2022] [Indexed: 11/08/2022]
Abstract
Dual-energy cone-beam computed tomography (DE-CBCT) is a promising imaging technique with foreseeable clinical applications. DE-CBCT images acquired with two different spectra can provide material-specific information. Meanwhile, the anatomical consistency and energy-domain correlation result in significant information redundancy, which could be exploited to improve image quality. In this context, this paper develops the Transformer-Integrated Multi-Encoder Network (TIME-Net) for DE-CBCT to remove the limited-angle artifacts. TIME-Net comprises three encoders (image encoder, prior encoder, and transformer encoder), two decoders (low- and high-energy decoders), and one feature fusion module. Three encoders extract various features for image restoration. The feature fusion module compresses these features into more compact shared features and feeds them to the decoders. Two decoders perform differential learning for DE-CBCT images. By design, TIME-Net could obtain high-quality DE-CBCT images using two complementary quarter-scans, holding great potential to reduce radiation dose and shorten the acquisition time. Qualitative and quantitative analyses based on simulated data and real rat data have demonstrated the promising performance of TIME-Net in artifact removal, subtle structure restoration, and reconstruction accuracy preservation. Two clinical applications, virtual non-contrast (VNC) imaging and iodine quantification, have proved the potential utility of the DE-CBCT images provided by TIME-Net.
64
Kim B, Shim H, Baek J. A streak artifact reduction algorithm in sparse-view CT using a self-supervised neural representation. Med Phys 2022; 49:7497-7515. [PMID: 35880806 DOI: 10.1002/mp.15885] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 07/13/2022] [Accepted: 07/17/2022] [Indexed: 12/27/2022] Open
Abstract
PURPOSE Sparse-view computed tomography (CT) has been attracting attention for its reduced radiation dose and scanning time. However, analytical image reconstruction methods suffer from streak artifacts due to insufficient projection views. Recently, various deep learning-based methods have been developed to solve this ill-posed inverse problem. Despite their promising results, they are easily overfitted to the training data, showing limited generalizability to unseen systems and patients. In this work, we propose a novel streak artifact reduction algorithm that provides a system- and patient-specific solution. METHODS Motivated by the fact that streak artifacts are deterministic errors, we regenerate the same artifacts from a prior CT image under the same system geometry. This prior image need not be perfect but should contain patient-specific information and be consistent with full-view projection data for accurate regeneration of the artifacts. To this end, we use a coordinate-based neural representation that often causes image blur but can greatly suppress the streak artifacts while having multiview consistency. By employing techniques in neural radiance fields originally proposed for scene representations, the neural representation is optimized to the measured sparse-view projection data via self-supervised learning. Then, we subtract the regenerated artifacts from the analytically reconstructed original image to obtain the final corrected image. RESULTS To validate the proposed method, we used simulated data of extended cardiac-torso phantoms and the 2016 NIH-AAPM-Mayo Clinic Low-Dose CT Grand Challenge and experimental data of physical pediatric and head phantoms. The performance of the proposed method was compared with a total variation-based iterative reconstruction method, naive application of the neural representation, and a convolutional neural network-based method. 
In visual inspection, small anatomical features were best preserved by the proposed method. The proposed method also achieved the best scores in visual information fidelity, modulation transfer function, and lung nodule segmentation. CONCLUSIONS The results on both simulated and experimental data suggest that the proposed method can effectively reduce the streak artifacts while preserving small anatomical structures that are easily blurred or replaced with misleading features by existing methods. Since the proposed method does not require any additional training datasets, it would be useful in clinical practice where large datasets cannot be collected.
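The core idea of this abstract — streak artifacts are deterministic, so they can be regenerated from a prior image under the same system geometry and then subtracted — can be illustrated with a toy linear degradation standing in for sparse-view FBP. Everything below (the 1-D signals, the circular-convolution "degradation") is a hypothetical stand-in for exposition, not the paper's projector or phantom data.

```python
import numpy as np

def degrade(x, kernel):
    """Stand-in for sparse-view FBP: a fixed linear operator that adds a
    deterministic, image-dependent error (here, circular convolution)."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(kernel, len(x))))

rng = np.random.default_rng(1)
truth = rng.random(64)                              # unknown ground-truth image
prior = truth + 0.01 * rng.standard_normal(64)      # imperfect but consistent prior

kernel = np.zeros(64)
kernel[0], kernel[5] = 1.0, 0.3                     # identity plus a "streak" term
corrupted = degrade(truth, kernel)                  # analogous to a streaky FBP image

# Regenerate the artifacts from the prior under the same "geometry",
# then subtract them from the corrupted reconstruction.
artifact_estimate = degrade(prior, kernel) - prior
corrected = corrupted - artifact_estimate

err_before = np.linalg.norm(corrupted - truth)
err_after = np.linalg.norm(corrected - truth)
assert err_after < err_before
```

Because the degradation is linear, the residual error after subtraction scales with the prior's error rather than the artifact magnitude, which is why the prior "need not be perfect" as the abstract notes.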
Affiliation(s)
- Byeongjoon Kim
- School of Integrated Technology, Yonsei University, Incheon, South Korea
- Hyunjung Shim
- School of Integrated Technology, Yonsei University, Incheon, South Korea
- Jongduk Baek
- School of Integrated Technology, Yonsei University, Incheon, South Korea
65
Meng M, Zhang M, Shen D, He G. Differentiation of breast lesions on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) using deep transfer learning based on DenseNet201. Medicine (Baltimore) 2022; 101:e31214. [PMID: 36397422 PMCID: PMC9666147 DOI: 10.1097/md.0000000000031214] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Abstract
In order to achieve better performance, artificial intelligence is used in breast cancer diagnosis. In this study, we evaluated the efficacy of different fine-tuning strategies of deep transfer learning (DTL) based on the DenseNet201 model to differentiate malignant from benign lesions on breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). We collected 4260 images of benign lesions and 4140 images of malignant lesions of the breast pertaining to pathologically confirmed cases. The benign and malignant groups were randomly divided into a training set and a testing set at a ratio of 9:1. A DTL model based on the DenseNet201 model was established, and the effectiveness of 4 fine-tuning strategies (S0, S1, S2, and S3) was compared. Additionally, DCE-MRI images of 48 breast lesions were selected to verify the robustness of the model. Ten images were obtained for each lesion. The classification was considered correct if more than 5 images were correctly classified. The metrics for model performance evaluation included accuracy (Ac) in the training and testing sets, precision (Pr), recall rate (Rc), f1 score (f1), and area under the receiver operating characteristic curve (AUROC) in the validation set. The Ac of the 4 fine-tuning strategies reached 100.00% in the training set. The S2 strategy exhibited good convergence in the testing set. The Ac of S2 was 98.01% in the testing set, which was higher than those of S0 (93.10%), S1 (90.45%), and S3 (93.90%). The average classification Pr, Rc, f1, and AUROC of S2 in the validation set (89.00%, 80.00%, 0.81, and 0.79, respectively) were higher than those of S0 (76.00%, 67.00%, 0.69, and 0.65), S1 (60.00%, 60.00%, 0.60, and 0.66), and S3 (77.00%, 73.00%, 0.74, and 0.72). The degree of coincidence between S2 and the histopathological method for differentiating between benign and malignant breast lesions was high (κ = 0.749).
The S2 strategy can improve the robustness of the DenseNet201 model in relatively small breast DCE-MRI datasets, and this is a reliable method to increase the Ac of discriminating benign from malignant breast lesions on DCE-MRI.
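The lesion-level decision rule in this abstract ("correct if more than 5 of 10 images are correctly classified") amounts to a majority vote over per-image predictions. A minimal sketch, assuming image-level malignancy probabilities and a 0.5 decision threshold (both assumptions, since the abstract does not state them):

```python
def classify_lesion(image_probs, threshold=0.5):
    """Call a lesion malignant when more than half of its image-level
    predictions (here, 10 per lesion) vote malignant."""
    votes = sum(p >= threshold for p in image_probs)
    return "malignant" if votes > len(image_probs) // 2 else "benign"

# Seven of ten images predicted malignant -> the lesion is called malignant.
probs = [0.9, 0.8, 0.7, 0.6, 0.55, 0.52, 0.51, 0.4, 0.3, 0.2]
print(classify_lesion(probs))  # malignant
```

Aggregating ten slices per lesion this way makes the lesion-level call more robust than any single-image prediction.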
Affiliation(s)
- Mingzhu Meng
- Department of Radiology, The Affiliated Changzhou No. 2 People’s Hospital of Nanjing Medical University, Changzhou, China
- Ming Zhang
- Department of Radiology, The Affiliated Changzhou No. 2 People’s Hospital of Nanjing Medical University, Changzhou, China
- Dong Shen
- Department of Radiology, The Affiliated Changzhou No. 2 People’s Hospital of Nanjing Medical University, Changzhou, China
- Guangyuan He
- Department of Radiology, The Affiliated Changzhou No. 2 People’s Hospital of Nanjing Medical University, Changzhou, China
- * Correspondence: Guangyuan He, Department of Radiology, The Affiliated Changzhou No. 2 People’s Hospital of Nanjing Medical University, No. 68 Gehuzhong Rd, Changzhou 213164, Jiangsu Province, China
66
Zhou J, Wang Y, Zhang C, Wu W, Ji Y, Zou Y. Eyebirds: Enabling the Public to Recognize Water Birds at Hand. Animals (Basel) 2022; 12:3000. [PMID: 36359124 PMCID: PMC9658372 DOI: 10.3390/ani12213000] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Revised: 10/25/2022] [Accepted: 10/26/2022] [Indexed: 09/29/2023] Open
Abstract
Enabling the public to easily recognize water birds has a positive effect on wetland bird conservation. However, classifying water birds requires advanced ornithological knowledge, which makes it very difficult for the public to recognize water bird species in daily life. To break the knowledge barrier of water bird recognition for the public, we constructed a water bird recognition system (Eyebirds) using deep learning, implemented as a smartphone app. Eyebirds consists of three main modules: (1) a water bird image dataset; (2) an attention-mechanism-based deep convolutional neural network for water bird recognition (AM-CNN); and (3) an app for smartphone users. The water bird image dataset currently covers 48 families, 203 genera, and 548 species of water birds worldwide and is used to train our water bird recognition model. The AM-CNN model employs an attention mechanism to enhance the shallow features of bird images to boost image classification performance. Experimental results on the North American bird dataset (CUB200-2011) show that the AM-CNN model achieves an average classification accuracy of 85%. On our self-built water bird image dataset, the AM-CNN model also works well, with classification accuracies of 94.0%, 93.6%, and 86.4% at the family, genus, and species levels, respectively. The user-side app is a WeChat applet deployed on smartphones. With the app, users can easily recognize water birds during expeditions, camping, sightseeing, or even daily life. In summary, our system can bring not only fun but also water bird knowledge to the public, thus inspiring their interest and further promoting their participation in bird ecological conservation.
Affiliation(s)
- Jiaogen Zhou
- Jiangsu Provincial Engineering Research Center for Intelligent Monitoring and Ecological Management of Pond and Reservoir Water Environment, Huaiyin Normal University, Huaian 223300, China
- Yang Wang
- Department of Computer Science and Technology, Tongji University, Shanghai 201804, China
- Caiyun Zhang
- Jiangsu Provincial Engineering Research Center for Intelligent Monitoring and Ecological Management of Pond and Reservoir Water Environment, Huaiyin Normal University, Huaian 223300, China
- Wenbo Wu
- Research Center of Information Technology, Beijing Academy of Agriculture and Forestry Sciences, Beijing 100097, China
- Yanzhu Ji
- Key Laboratory of Zoological Systematics and Evolution, Institute of Zoology, Chinese Academy of Sciences, Beijing 100101, China
- Yeai Zou
- Dongting Lake Station for Wetland Ecosystem Research, Institute of Subtropical Agriculture, Chinese Academy of Sciences, Changsha 410125, China
67
Zhang P, Li K. A dual-domain neural network based on sinogram synthesis for sparse-view CT reconstruction. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 226:107168. [PMID: 36219892 DOI: 10.1016/j.cmpb.2022.107168] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 09/28/2022] [Accepted: 09/29/2022] [Indexed: 06/16/2023]
Abstract
OBJECTIVE Dual-domain deep-learning-based reconstruction techniques have enjoyed many successful applications in the field of medical image reconstruction. Because they apply an analytical-reconstruction-based operator to transfer the data from the projection domain to the image domain, dual-domain techniques may suffer from insufficient suppression or removal of streak artifacts in areas with missing view data when addressing sparse-view reconstruction problems. In this work, to overcome this problem, an intelligent sinogram-synthesis-based back-projection network (iSSBP-Net) was proposed for sparse-view computed tomography (CT) reconstruction. In the iSSBP-Net method, a convolutional neural network (CNN) was involved in the dual-domain method to inpaint the missing view data in the sinogram before CT reconstruction. METHODS The proposed iSSBP-Net method fused a sinogram synthesis sub-network (SS-Net), a sinogram filter sub-network (SF-Net), a back-projection layer, and a post-CNN into an end-to-end network. Firstly, to inpaint the missing view data, the SS-Net employed a CNN to synthesize the full-view sinogram in the projection domain. Secondly, to improve the visual quality of the sparse-view CT images, the synthesized sinogram was filtered by a CNN. Thirdly, the filtered sinogram was brought into the image domain through the back-projection layer. Finally, to yield images of high visual sensitivity, the post-CNN was applied to restore the desired images from the outputs of the back-projection layer. RESULTS The numerical experiments demonstrate that the proposed iSSBP-Net is superior to all competing algorithms under different scanning conditions for sparse-view CT reconstruction. Compared to the competing algorithms, the proposed iSSBP-Net method improved the peak signal-to-noise ratio of the reconstructed images by about 1.21 dB, 0.26 dB, 0.01 dB, and 0.37 dB under scanning conditions of 360, 180, 90, and 60 views, respectively.
CONCLUSION The promising reconstruction results indicate that involving the SS-Net in the dual-domain method could be an effective way to suppress or remove the streak artifacts in sparse-view CT images. Given these promising results, this study is intended to inspire further development of sparse-view CT reconstruction by involving an SS-Net in the dual-domain method.
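The PSNR gains quoted above follow the standard peak signal-to-noise-ratio definition, which can be computed as in this sketch (the `data_range` parameter and the synthetic images are assumptions for illustration):

```python
import numpy as np

def psnr(pred, ref, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((np.asarray(pred, float) - np.asarray(ref, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
noisy = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0, 1)
less_noisy = np.clip(ref + 0.02 * rng.standard_normal(ref.shape), 0, 1)
assert psnr(less_noisy, ref) > psnr(noisy, ref)  # smaller error -> higher PSNR
```

Because PSNR is logarithmic in the mean squared error, a gain of about 1.2 dB corresponds to roughly a 24% reduction in MSE.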
Affiliation(s)
- Pengcheng Zhang
- State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, PR China
- Kunpeng Li
- State Key Laboratory of Dynamic Testing Technology, North University of China, Taiyuan 030051, PR China
68
Zhang X, Cao X, Zhang P, Song F, Zhang J, Zhang L, Zhang G. Self-Training Strategy Based on Finite Element Method for Adaptive Bioluminescence Tomography Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2629-2643. [PMID: 35436185 DOI: 10.1109/tmi.2022.3167809] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Bioluminescence tomography (BLT) is a promising pre-clinical imaging technique for a wide variety of biomedical applications, which can non-invasively reveal functional activities inside living animal bodies through the detection of visible or near-infrared light produced by bioluminescent reactions. Recently, reconstruction approaches based on deep learning have shown great potential in optical tomography modalities. However, these reports only generate data with stationary patterns of constant target number, shape, and size. Neural networks trained on such data sets struggle to reconstruct patterns outside the data sets, which severely restricts the applications of deep learning in optical tomography reconstruction. To address this problem, a self-training strategy is proposed for BLT reconstruction in this paper. The proposed strategy can rapidly generate large-scale BLT data sets with random target numbers, shapes, and sizes through a random seed growth algorithm, and the neural network is automatically self-trained. In addition, the proposed strategy uses the neural network to build a map between photon densities on the surface of and inside the imaged object, rather than an end-to-end neural network that directly infers the distribution of sources from the photon density on the surface. The map of photon density is further converted into the distribution of sources through multiplication with the stiffness matrix. Simulation, phantom, and mouse studies are carried out. Results demonstrate the effectiveness of the proposed self-training strategy.
69
Chen C, Xing Y, Gao H, Zhang L, Chen Z. Sam's Net: A Self-Augmented Multistage Deep-Learning Network for End-to-End Reconstruction of Limited Angle CT. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2912-2924. [PMID: 35576423 DOI: 10.1109/tmi.2022.3175529] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Limited angle reconstruction is a typical ill-posed problem in computed tomography (CT). Given incomplete projection data, images reconstructed by conventional analytical algorithms and iterative methods suffer from severe structural distortions and artifacts. In this paper, we propose a self-augmented multi-stage deep-learning network (Sam's Net) for end-to-end reconstruction of limited angle CT. With the merit of the alternating minimization technique, Sam's Net integrates multi-stage self-constraints into cross-domain optimization to provide additional constraints on the manifold of neural networks. In practice, a sinogram completion network (SCNet) and an artifact suppression network (ASNet), together with domain transformation layers, constitute the backbone for cross-domain optimization. An online self-augmentation module was designed following the manner defined by alternating minimization, which enables a self-augmented learning procedure and a multi-stage inference manner. In addition, a substitution operation was applied as a hard constraint on the solution space based on data fidelity, and a learnable weighting layer was constructed for data consistency refinement. Sam's Net forms a new framework for ill-posed reconstruction problems. In the training phase, the self-augmented procedure guides the optimization into a tightened solution space with an enriched, diverse data distribution and enhanced data consistency. In the inference phase, multi-stage prediction can improve performance progressively. Extensive experiments with both simulated and practical projections under 90-degree and 120-degree fan-beam configurations validate that Sam's Net can significantly improve reconstruction quality with high stability and robustness.
70
Li D, Ma L, Li J, Qi S, Yao Y, Teng Y. A comprehensive survey on deep learning techniques in CT image quality improvement. Med Biol Eng Comput 2022; 60:2757-2770. [DOI: 10.1007/s11517-022-02631-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2021] [Accepted: 06/15/2022] [Indexed: 11/28/2022]
71
Patil BD, Singhal V, Agrawal U, Langoju R, Hsieh J, Lakshminarasimhan S, Das B. Deep learning based correction of low performing pixel in computed tomography. Biomed Phys Eng Express 2022; 8. [PMID: 35939980 DOI: 10.1088/2057-1976/ac87b4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2022] [Accepted: 08/08/2022] [Indexed: 11/12/2022]
Abstract
Low-performing pixels (LPPs)/bad pixels in CT detectors cause ring and streak artifacts, structured non-uniformities, and deterioration of image quality. These artifacts can make images unusable for diagnostic purposes. A missing/defective detector pixel translates to a missing channel across all views in the sinogram domain, and its effect spills over the entire image as artifacts in the reconstruction domain. Most existing ring and streak removal algorithms perform correction only in the reconstructed image domain. In this work, we propose a supervised deep learning algorithm that operates in the sinogram domain to remove distortions caused by LPPs. The method leverages CT scan geometry, including conjugate ray information, to learn the interpolation in the sinogram domain. While the experiments are designed to cover the entire detector space, we emphasize LPPs near the detector iso-center, as these have the most adverse impact on image quality, especially when the LPPs fall on a high-frequency region (bone-tissue interface). We demonstrated the efficacy of the proposed method using data acquired on a GE RevACT multi-slice CT system with a flat-panel detector. Experimental results on head scans show a significant reduction in ring artifacts regardless of LPP location in the detector geometry. We simulated isolated LPPs accounting for 5% and 10% of the total channels. Detailed statistical analysis shows approximately 5 dB improvement in SNR in both the sinogram and reconstruction domains compared to classical bicubic and Lagrange interpolation methods. With the reduction in ring and streak artifacts, perceptual image quality improved across all test images.
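The classical baselines this abstract compares against (bicubic and Lagrange interpolation of the missing channel) can be sketched as follows. The synthetic sinogram and the four-point cubic Lagrange stencil are illustrative assumptions, not the paper's exact scheme, but they show why interpolation works well for smooth sinograms and why it is a meaningful baseline.

```python
import numpy as np

def repair_dead_channel(sino, ch):
    """Classical baseline: fill a dead detector channel, per view, by cubic
    Lagrange interpolation across the four nearest working channels."""
    fixed = sino.astype(float).copy()
    xs = np.array([ch - 2, ch - 1, ch + 1, ch + 2])   # neighboring channels
    for view in range(sino.shape[0]):
        ys = sino[view, xs]
        # Lagrange basis evaluated at the dead channel position `ch`.
        val = 0.0
        for i, xi in enumerate(xs):
            li = 1.0
            for j, xj in enumerate(xs):
                if i != j:
                    li *= (ch - xj) / (xi - xj)
            val += ys[i] * li
        fixed[view, ch] = val
    return fixed

# Smooth synthetic sinogram: per-view quadratic profile times a view weight.
views, chans = 32, 16
s = np.arange(chans, dtype=float)
sino = np.sin(np.linspace(0, np.pi, views))[:, None] * (s / chans) ** 2
truth = sino[:, 8].copy()
sino[:, 8] = 0.0                        # simulate a low-performing pixel
repaired = repair_dead_channel(sino, 8)
assert np.max(np.abs(repaired[:, 8] - truth)) < 1e-6
```

Cubic Lagrange interpolation is exact for polynomials up to degree three, so it recovers this smooth channel perfectly; real sinograms with high-frequency content (e.g. bone-tissue interfaces) are where such baselines fail and a learned, geometry-aware interpolation can help.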
Affiliation(s)
- Bhushan D Patil
- Wipro GE Healthcare Pvt Ltd, JFWTC, Bangalore, Karnataka, 560066, India
- Vanika Singhal
- Wipro GE Healthcare Pvt Ltd, JFWTC, Bangalore, Karnataka, 560066, India
- Utkarsh Agrawal
- Wipro GE Healthcare Pvt Ltd, JFWTC, Bangalore, Karnataka, 560066, India
- Rajesh Langoju
- Wipro GE Healthcare Pvt Ltd, JFWTC, Bangalore, Karnataka, 560066, India
- Jiang Hsieh
- GE Healthcare, Chicago, Illinois, 60661-3655, United States
- Bipul Das
- Wipro GE Healthcare Pvt Ltd, JFWTC, Bangalore, Karnataka, 560066, India
72
Kandarpa VSS, Perelli A, Bousse A, Visvikis D. LRR-CED: low-resolution reconstruction-aware convolutional encoder–decoder network for direct sparse-view CT image reconstruction. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7bce] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2021] [Accepted: 06/23/2022] [Indexed: 11/12/2022]
Abstract
Objective. Sparse-view computed tomography (CT) reconstruction has been at the forefront of research in medical imaging. Reducing the total x-ray radiation dose to the patient while preserving the reconstruction accuracy is a big challenge. The sparse-view approach is based on reducing the number of rotation angles, which leads to poor quality reconstructed images as it introduces several artifacts. These artifacts are more clearly visible in traditional reconstruction methods like the filtered-backprojection (FBP) algorithm. Approach. Over the years, several model-based iterative and more recently deep learning-based methods have been proposed to improve sparse-view CT reconstruction. Many deep learning-based methods improve FBP-reconstructed images as a post-processing step. In this work, we propose a direct deep learning-based reconstruction that exploits the information from low-dimensional scout images, to learn the projection-to-image mapping. This is done by concatenating FBP scout images at multiple resolutions in the decoder part of a convolutional encoder–decoder (CED). Main results. This approach is investigated on two different networks, based on Dense Blocks and U-Net to show that a direct mapping can be learned from a sinogram to an image. The results are compared to two post-processing deep learning methods (FBP-ConvNet and DD-Net) and an iterative method that uses a total variation (TV) regularization. Significance. This work presents a novel method that uses information from both sinogram and low-resolution scout images for sparse-view CT image reconstruction. We also generalize this idea by demonstrating results with two different neural networks. This work is in the direction of exploring deep learning across the various stages of the image reconstruction pipeline involving data correction, domain transfer and image improvement.
73
Liu H, Jin X, Liu L, Jin X. Low-Dose CT Image Denoising Based on Improved DD-Net and Local Filtered Mechanism. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:2692301. [PMID: 35965772 PMCID: PMC9365583 DOI: 10.1155/2022/2692301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/20/2022] [Revised: 06/02/2022] [Accepted: 06/20/2022] [Indexed: 11/18/2022]
Abstract
Low-dose CT (LDCT) imaging can reduce radiation damage to patients; however, the unavoidable information loss under low-dose conditions, manifesting as noise, streak artifacts, and over-smoothed details, will influence clinical diagnosis. LDCT image denoising is therefore a significant topic in medical image processing. This work proposes an improved DD-Net (DenseNet- and deconvolution-based network) with a joint local filtered mechanism: the DD-Net is enhanced by introducing an improved residual dense block to strengthen its feature representation ability, and the local filtered mechanism and a gradient loss are employed to effectively restore subtle structures. First, the LDCT image is input into the network to obtain the denoised image. The original loss between the denoised image and the normal-dose CT (NDCT) image is calculated, and the difference image between the NDCT image and the denoised image is obtained. Second, a mask image is generated by applying a threshold operation to the difference image, and the filtered LDCT and NDCT images are obtained by elementwise multiplication of the LDCT and NDCT images with the mask image. Third, the filtered image is input into the network to obtain the filtered denoised image, and the correction loss is calculated. Finally, the sum of the original loss and the correction loss of the improved DD-Net is used to optimize the network. Considering that the combination of mean square error (MSE) and multiscale structural similarity (MS-SSIM) is insufficient to generate edge information, we introduce a gradient loss that computes the loss of the high-frequency portion. The experimental results show that the proposed method achieves better performance than conventional schemes and most neural networks. Our source code is made available at https://github.com/LHE-IT/Low-dose-CT-Image-Denoising/tree/main/Local Filtered Mechanism.
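The mask-and-multiply step described in this abstract (threshold the NDCT-minus-denoised difference image, then gate the LDCT/NDCT pair elementwise) can be sketched as below; the threshold value, array shapes, and variable names are illustrative assumptions.

```python
import numpy as np

def make_mask(ndct, denoised, threshold):
    """Binary mask marking pixels where the first-pass denoised image
    still deviates strongly from the normal-dose reference."""
    diff = np.abs(ndct - denoised)
    return (diff > threshold).astype(float)

rng = np.random.default_rng(0)
ndct = rng.random((8, 8))                         # normal-dose reference
ldct = ndct + 0.05 * rng.standard_normal((8, 8))  # noisy low-dose input
denoised = ndct.copy()
denoised[2, 3] += 0.5                             # one badly restored pixel

mask = make_mask(ndct, denoised, threshold=0.1)

# Elementwise products keep only the flagged regions for the second pass.
filtered_ldct = ldct * mask
filtered_ndct = ndct * mask
assert mask[2, 3] == 1.0 and mask.sum() == 1.0
```

Restricting the second-pass (correction) loss to the masked regions focuses the network on exactly the pixels the first pass restored poorly.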
Affiliation(s)
- Hongen Liu
- School of Software, Yunnan University, Kunming 650091, Yunnan, China
- Xin Jin
- School of Software, Yunnan University, Kunming 650091, Yunnan, China
- Ling Liu
- School of Software, Yunnan University, Kunming 650091, Yunnan, China
- Xin Jin
- School of Software, Yunnan University, Kunming 650091, Yunnan, China
- Engineering Research Center of Cyberspace, Yunnan University, Kunming 650000, Yunnan, China
74
Kim S, Ahn J, Kim B, Kim C, Baek J. Convolutional neural network-based metal and streak artifacts reduction in dental CT images with sparse-view sampling scheme. Med Phys 2022; 49:6253-6277. [DOI: 10.1002/mp.15884]
Affiliation(s)
- Seongjun Kim
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
- Junhyun Ahn
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
- Byeongjoon Kim
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
- Chulhong Kim
- Departments of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, School of Interdisciplinary Bioscience and Bioengineering, and Medical Device Innovation Center, Pohang University of Science and Technology, Pohang 37673, South Korea
- Jongduk Baek
- School of Integrated Technology, Yonsei University, Incheon 21983, South Korea
75
Hu D, Zhang Y, Liu J, Luo S, Chen Y. DIOR: Deep Iterative Optimization-Based Residual-Learning for Limited-Angle CT Reconstruction. IEEE Trans Med Imaging 2022; 41:1778-1790. [PMID: 35100109] [DOI: 10.1109/tmi.2022.3148110]
Abstract
Limited-angle CT is a challenging problem in real applications. Incomplete projection data lead to severe artifacts and distortions in the reconstructed images. To tackle this problem, we propose a novel reconstruction framework termed Deep Iterative Optimization-based Residual-learning (DIOR) for limited-angle CT. Instead of deploying the regularization term directly in the image space, DIOR combines iterative optimization and deep learning in the residual domain, significantly improving the convergence property and generalization ability. Specifically, asymmetric convolutional modules are adopted to strengthen the feature extraction capacity in smooth regions for the deep priors. In addition, the information contained in the low-frequency and high-frequency components is evaluated by a perceptual loss to improve tissue preservation. Experiments on both simulated and clinical datasets validate the performance of DIOR. Compared with existing competitive algorithms, quantitative and qualitative results show that the proposed method brings a promising improvement in artifact removal, detail restoration and edge preservation.
76
Pan J, Zhang H, Wu W, Gao Z, Wu W. Multi-domain integrative Swin transformer network for sparse-view tomographic reconstruction. Patterns (N Y) 2022; 3:100498. [PMID: 35755869] [PMCID: PMC9214338] [DOI: 10.1016/j.patter.2022.100498]
Abstract
Decreasing projection views to a lower X-ray radiation dose usually leads to severe streak artifacts. To improve image quality from sparse-view data, a multi-domain integrative Swin transformer network (MIST-net) was developed and is reported in this article. First, MIST-net incorporated lavish domain features from data, residual data, image, and residual image using flexible network architectures, where a residual data and residual image sub-network was considered as a data consistency module to eliminate interpolation and reconstruction errors. Second, a trainable edge enhancement filter was incorporated to detect and protect image edges. Third, a high-quality reconstruction Swin transformer (i.e., Recformer) was designed to capture image global features. The experimental results on numerical and real cardiac clinical datasets with 48 views demonstrated that our proposed MIST-net provided better image quality with more small features and sharp edges than other competitors.
Affiliation(s)
- Jiayi Pan
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, Guangdong, China
- Heye Zhang
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, Guangdong, China
- Weifei Wu
- Department of Orthopedics, The People’s Hospital of China Three Gorges University, The First People’s Hospital of Yichang, Yichang, Hubei, China
- Zhifan Gao
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, Guangdong, China
- Weiwen Wu
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, Guangdong, China
77
Shen L, Zhao W, Capaldi D, Pauly J, Xing L. A geometry-informed deep learning framework for ultra-sparse 3D tomographic image reconstruction. Comput Biol Med 2022; 148:105710. [DOI: 10.1016/j.compbiomed.2022.105710]
78
The use of deep learning methods in low-dose computed tomography image reconstruction: a systematic review. Complex Intell Syst 2022. [DOI: 10.1007/s40747-022-00724-7]
Abstract
Conventional reconstruction techniques, such as filtered back projection (FBP) and iterative reconstruction (IR), which have been widely used in the image reconstruction process of computed tomography (CT), are not suitable for low-dose CT applications because of the unsatisfactory quality of the reconstructed image and inefficient reconstruction time. Therefore, as the demand for CT radiation dose reduction continues to increase, the use of artificial intelligence (AI) in image reconstruction has become a trend attracting more and more attention. This systematic review examined various deep learning methods to determine their characteristics, availability, intended use and expected outputs concerning low-dose CT image reconstruction. Following the methodology of Kitchenham and Charter, we performed a systematic search of the literature from 2016 to 2021 in Springer, Science Direct, arXiv, PubMed, ACM, IEEE, and Scopus. This review showed that algorithms using deep learning technology are superior to traditional IR methods in noise suppression, artifact reduction and structure preservation, in terms of improving the image quality of low-dose reconstructed images. In conclusion, we provide an overview of the use of deep learning approaches in low-dose CT image reconstruction together with their benefits, limitations, and opportunities for improvement.
79
Okamoto T, Kumakiri T, Haneishi H. Patch-based artifact reduction for three-dimensional volume projection data of sparse-view micro-computed tomography. Radiol Phys Technol 2022; 15:206-223. [PMID: 35622229] [DOI: 10.1007/s12194-022-00661-7]
Abstract
Micro-computed tomography (micro-CT) enables the non-destructive acquisition of three-dimensional (3D) morphological structures at the micrometer scale. Although it is expected to be used in pathology and histology to analyze the 3D microstructure of tissues, micro-CT imaging of tissue specimens requires a long scan time. A high-speed imaging method, sparse-view CT, can reduce the total scan time and radiation dose; however, it causes severe streak artifacts on tomographic images reconstructed with analytical algorithms due to insufficient sampling. In this paper, we propose an artifact reduction method for 3D volume projection data from sparse-view micro-CT. Specifically, we developed a patch-based lightweight fully convolutional network to estimate full-view 3D volume projection data from sparse-view 3D volume projection data. We evaluated the effectiveness of the proposed method using physically acquired datasets. The qualitative and quantitative results showed that the proposed method achieved high estimation accuracy and suppressed streak artifacts in the reconstructed images. In addition, we confirmed that the proposed method requires both short training and prediction times. Our study demonstrates that the proposed method has great potential for artifact reduction for 3D volume projection data under sparse-view conditions.
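The patch-based processing the abstract describes (split a projection into patches, process each patch with a lightweight network, reassemble) can be illustrated independently of the network. A minimal NumPy sketch with hypothetical patch size and stride; the per-patch network is omitted here, so patches pass through unchanged and reassembly with overlap-averaging reproduces the input.

```python
import numpy as np

def extract_patches(img, size, stride):
    """Slide a window over a 2-D projection and collect patches."""
    ph, pw = size
    patches, coords = [], []
    for i in range(0, img.shape[0] - ph + 1, stride):
        for j in range(0, img.shape[1] - pw + 1, stride):
            patches.append(img[i:i + ph, j:j + pw])
            coords.append((i, j))
    return np.stack(patches), coords

def reassemble(patches, coords, shape):
    """Average overlapping patch predictions back into one image."""
    out = np.zeros(shape)
    weight = np.zeros(shape)
    for p, (i, j) in zip(patches, coords):
        out[i:i + p.shape[0], j:j + p.shape[1]] += p
        weight[i:i + p.shape[0], j:j + p.shape[1]] += 1.0
    return out / weight

img = np.arange(36, dtype=float).reshape(6, 6)
patches, coords = extract_patches(img, (4, 4), 2)
# A real pipeline would run each patch through the estimation network here.
rec = reassemble(patches, coords, img.shape)
```

In the paper's setting the patches come from sparse-view 3D volume projection data and the network estimates the corresponding full-view patches before reassembly.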
Affiliation(s)
- Takayuki Okamoto
- Graduate School of Science and Engineering, Chiba University, Chiba 263-8522, Japan
- Toshio Kumakiri
- Graduate School of Science and Engineering, Chiba University, Chiba 263-8522, Japan
- Hideaki Haneishi
- Center for Frontier Medical Engineering, Chiba University, Chiba 263-8522, Japan
80
Bai J, Liu Y, Yang H. Sparse-View CT Reconstruction Based on a Hybrid Domain Model with Multi-Level Wavelet Transform. Sensors (Basel) 2022; 22:3228. [PMID: 35590918] [PMCID: PMC9105730] [DOI: 10.3390/s22093228]
Abstract
The reconstruction of sparsely sampled projection data generates obvious streak artifacts, resulting in image quality degradation that can affect medical diagnosis. The wavelet transform can effectively decompose the directional components of an image, so artifact features and edge details with high directionality can be better detected in the wavelet domain. Therefore, a hybrid-domain method based on the wavelet transform is proposed in this paper for sparse-view CT reconstruction. The reconstruction model combines the wavelet, spatial, and Radon domains to restore projection consistency and enhance image details. In addition, the global distribution of the artifacts requires the network to have a large receptive field, so a multi-level wavelet transform network (MWCNN) is applied in the hybrid-domain model. The wavelet transform is used in the encoding part of the network to reduce the size of feature maps in place of pooling, and the inverse wavelet transform is deployed in the decoding part to recover image details. The proposed method achieves a PSNR of 41.049 dB and an SSIM of 0.958 with 120 projections at three-degree angular intervals, the highest values reported in this paper. Numerical analysis and the reconstructed images show that the hybrid-domain method is superior to single-domain methods, and that the multi-level wavelet transform model is more suitable for CT reconstruction than a single-level wavelet transform.
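The point of replacing pooling with a wavelet transform is that downsampling becomes invertible. Below is a minimal NumPy sketch of a single-level 2-D Haar transform, the building block that a multi-level network stacks; unlike max-pooling, the inverse transform recovers the input exactly. This illustrates the general technique, not the paper's network.

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2-D Haar DWT: halves the spatial size and
    returns four sub-bands (LL, LH, HL, HH) stacked along axis 0."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-frequency approximation
    lh = (a + b - c - d) / 2.0   # horizontal detail
    hl = (a - b + c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return np.stack([ll, lh, hl, hh])

def haar_idwt2(bands):
    """Inverse transform: reconstructs the input exactly."""
    ll, lh, hl, hh = bands
    h, w = ll.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    x[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    x[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return x

img = np.arange(64, dtype=float).reshape(8, 8)
bands = haar_dwt2(img)    # four 4x4 sub-bands, no information lost
rec = haar_idwt2(bands)   # lossless, unlike max-pooling
```

Stacking this transform over several levels, with convolutions between levels, is the multi-level structure the abstract refers to.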
81
Prospects of Structural Similarity Index for Medical Image Analysis. Appl Sci (Basel) 2022. [DOI: 10.3390/app12083754]
Abstract
An image quality metric provides a principled way to objectively assess an image based on the alteration between the original and distorted images. During the past two decades, a universal image quality assessment adapted to human visual perception has been developed for measuring the difference between a degraded image and a reference image, namely the structural similarity index. Structural similarity has since been widely used in various sectors, including medical image evaluation. Although numerous studies have reported the use of structural similarity as an evaluation strategy for computer-based medical images, reviews on the prospects of using structural similarity for medical imaging applications have been rare. This paper presents previous studies implementing structural similarity to analyze medical images from various imaging modalities. In addition, this review describes structural similarity from the perspective of its historical background, the progress made from the original to more recent variants, and its strengths and drawbacks. Potential research directions in applying such similarity measures to medical image analysis are also described. This review should help guide researchers toward medical image examination methods that can be improved through the structural similarity index.
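The structural similarity index the review surveys has a compact closed form. Below is a single-window NumPy sketch of the original formula of Wang et al.; in practice SSIM is computed over local sliding windows and averaged, but the per-window formula is the same. The constants follow the customary K1 = 0.01, K2 = 0.03.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM:
    ((2*mu_x*mu_y + C1)(2*cov_xy + C2)) /
    ((mu_x^2 + mu_y^2 + C1)(var_x + var_y + C2))."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = np.linspace(0.0, 1.0, 256).reshape(16, 16)
perfect = ssim_global(a, a)          # identical images -> 1.0
degraded = ssim_global(a, 0.5 * a)   # contrast change -> below 1.0
```

The luminance, contrast, and structure terms are multiplied together here; the multiscale variant (MS-SSIM) evaluates the contrast and structure terms across several image scales.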
82
Wang H, Wang N, Xie H, Wang L, Zhou W, Yang D, Cao X, Zhu S, Liang J, Chen X. Two-stage deep learning network-based few-view image reconstruction for parallel-beam projection tomography. Quant Imaging Med Surg 2022; 12:2535-2551. [PMID: 35371942] [PMCID: PMC8923870] [DOI: 10.21037/qims-21-778]
Abstract
BACKGROUND: Projection tomography (PT) is a very important and valuable method for fast volumetric imaging with isotropic spatial resolution. Sparse-view or limited-angle reconstruction-based PT can greatly reduce data acquisition time, lower radiation doses, and simplify sample fixation modes. However, few techniques can currently achieve image reconstruction based on few-view projection data, which is especially important for in vivo PT in living organisms. METHODS: A two-stage deep learning network (TSDLN)-based framework was proposed for parallel-beam PT reconstruction using few-view projections. The framework is composed of a reconstruction network (R-net) and a correction network (C-net). The R-net is a generative adversarial network (GAN) that completes image information from the direct back-projection (BP) of a sparse signal, bringing the reconstructed image close to reconstruction results obtained from fully projected data. The C-net is a U-net array that denoises the compensation result to obtain a high-quality reconstructed image. RESULTS: The accuracy and feasibility of the proposed TSDLN-based framework for few-view reconstruction were first evaluated in simulations using images from the DeepLesion public dataset. The framework exhibited better reconstruction performance than traditional analytic and iterative algorithms, especially with sparse-view projection images. For example, with as few as two projections, the TSDLN-based framework reconstructed high-quality images very close to the original image, with structural similarities greater than 0.8. By using previously acquired optical PT (OPT) data in the TSDLN-based framework trained on computed tomography (CT) data, we further exemplified its migration capabilities: when the number of projections was reduced to 5, the contours and distribution information of the samples in question could still be seen in the reconstructed images. CONCLUSIONS: The simulations and experimental results showed that the TSDLN-based framework has strong reconstruction abilities using few-view projection images, and has great potential for application to in vivo PT.
Affiliation(s)
- Huiyuan Wang
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
- Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Nan Wang
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
- Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Hui Xie
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
- Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Lin Wang
- School of Computer Science and Engineering, Xi’an University of Technology, Xi’an, China
- Wangting Zhou
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
- Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Defu Yang
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China
- Xu Cao
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
- Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Shouping Zhu
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
- Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
- Jimin Liang
- School of Electronic Engineering, Xidian University, Xi’an, China
- Xueli Chen
- Engineering Research Center of Molecular and Neuro Imaging of Ministry of Education, School of Life Science and Technology, Xidian University, Xi’an, China
- Xi’an Key Laboratory of Intelligent Sensing and Regulation of Trans-scale Life Information, Xi’an, China
83
Abstract
In clinical medical applications, sparse-view computed tomography (CT) imaging is an effective method for reducing radiation doses. The iterative reconstruction method is usually adopted for sparse-view CT. In the process of optimizing the iterative model, the approach of directly solving the quadratic penalty function of the objective function can be expected to perform poorly. Compared with the direct solution method, the alternating direction method of multipliers (ADMM) algorithm can avoid the ill-posed problem associated with the quadratic penalty function. However, the regularization items, sparsity transform, and parameters in the traditional ADMM iterative model need to be manually adjusted. In this paper, we propose a data-driven ADMM reconstruction method that can automatically optimize the above terms that are difficult to choose within an iterative framework. The main contribution of this paper is that a modified U-net represents the sparse transformation, and the prior information and related parameters are automatically trained by the network. Based on a comparison with other state-of-the-art reconstruction algorithms, the qualitative and quantitative results show the effectiveness of our method for sparse-view CT image reconstruction. The experimental results show that the proposed method performs well in streak artifact elimination and detail structure preservation. The proposed network can deal with a wide range of noise levels and has exceptional performance in low-dose reconstruction tasks.
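The hand-tuned ADMM baseline that the data-driven method improves on can be written down concretely. Below is a minimal NumPy sketch for the l1-regularized least-squares model min 0.5·||Ax − b||² + λ·||x||₁ with an identity sparsifying transform; the soft-threshold step is precisely the kind of hand-crafted component the abstract says is replaced by a trained network. The toy operator A and all parameter values are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm; the data-driven variant
    # replaces this fixed rule with a learned network.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(A, b, lam=0.1, rho=1.0, iters=200):
    """Classical ADMM for min 0.5*||Ax-b||^2 + lam*||x||_1."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    Q = np.linalg.inv(AtA + rho * np.eye(n))   # cache the x-update solve
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))          # quadratic x-update
        z = soft_threshold(x + u, lam / rho)   # sparsity prox step
        u = u + x - z                          # dual ascent
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10); x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
x_hat = admm_l1(A, b, lam=0.05)
```

Splitting the quadratic data term from the non-smooth prior is what lets ADMM sidestep the ill-posedness of attacking the quadratic penalty directly; the paper's contribution is learning the prox step, transform, and parameters rather than tuning them by hand.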
84
Li S, Ye W, Li F. LU-Net: combining LSTM and U-Net for sinogram synthesis in sparse-view SPECT reconstruction. Math Biosci Eng 2022; 19:4320-4340. [PMID: 35341300] [DOI: 10.3934/mbe.2022200]
Abstract
Lowering the dose in single-photon emission computed tomography (SPECT) imaging to reduce the radiation damage to patients has become very significant. In SPECT imaging, lower radiation dose can be achieved by reducing the activity of administered radiotracer, which will lead to projection data with either sparse projection views or reduced photon counts per view. Direct reconstruction of sparse-view projection data may lead to severe ray artifacts in the reconstructed image. Many existing works use neural networks to synthesize the projection data of sparse-view to address the issue of ray artifacts. However, these methods rarely consider the sequence feature of projection data along projection view. This work is dedicated to developing a neural network architecture that accounts for the sequence feature of projection data at adjacent view angles. In this study, we propose a network architecture combining Long Short-Term Memory network (LSTM) and U-Net, dubbed LU-Net, to learn the mapping from sparse-view projection data to full-view data. In particular, the LSTM module in the proposed network architecture can learn the sequence feature of projection data at adjacent angles to synthesize the missing views in the sinogram. All projection data used in the numerical experiment are generated by the Monte Carlo simulation software SIMIND. We evenly sample the full-view sinogram and obtain the 1/2-, 1/3- and 1/4-view projection data, respectively, representing three different levels of view sparsity. We explore the performance of the proposed network architecture at the three simulated view levels. Finally, we employ the preconditioned alternating projection algorithm (PAPA) to reconstruct the synthesized projection data. Compared with U-Net and traditional iterative reconstruction method with total variation regularization as well as PAPA solver (TV-PAPA), the proposed network achieves significant improvement in both global and local quality metrics.
Affiliation(s)
- Si Li
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
- Wenquan Ye
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
- Fenghuan Li
- School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
85
Deng K, Sun C, Gong W, Liu Y, Yang H. A Limited-View CT Reconstruction Framework Based on Hybrid Domains and Spatial Correlation. Sensors (Basel) 2022; 22:1446. [PMID: 35214348] [PMCID: PMC8875841] [DOI: 10.3390/s22041446]
Abstract
Limited-view computed tomography (CT) can efficaciously reduce the radiation dose in clinical diagnosis; it is also adopted when inevitable mechanical and physical limitations are encountered in industrial inspection. Nevertheless, limited-view CT leads to severe artifacts in its images, a major issue in the low-dose protocol. How to exploit the limited prior information to obtain high-quality CT images thus becomes a crucial question. We notice that almost all existing methods focus solely on a single CT image while neglecting the fact that the scanned objects are always highly spatially correlated; consequently, bountiful spatial information between consecutive acquired CT images is still largely left unexploited. In this paper, we propose a novel hybrid-domain structure composed of fully convolutional networks that explores the three-dimensional neighborhood and works in a “coarse-to-fine” manner. We first conduct data completion in the Radon domain, and transform the completed full-view Radon data into images through FBP. Subsequently, we employ the spatial correlation between continuous CT images to restore them and then refine the image texture to finally obtain the ideal high-quality CT images, achieving a PSNR of 40.209 and an SSIM of 0.943. Besides, unlike other current limited-view CT reconstruction methods, we adopt FBP (implemented on GPUs) instead of SART-TV to significantly accelerate the overall procedure and realize it in an end-to-end manner.
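The first, Radon-domain completion stage can be mimicked with simple interpolation. The NumPy sketch below fills unmeasured projection angles by linear interpolation along the angle axis; it is a stand-in for the paper's learned completion network, and the toy sinogram is hypothetical. The completed sinogram would then be passed through FBP.

```python
import numpy as np

def complete_sinogram(sino, measured_idx, n_angles):
    """Fill unmeasured projection angles by linear interpolation
    along the angle axis; a simple stand-in for network-based
    Radon-domain data completion."""
    full = np.empty((n_angles, sino.shape[1]))
    for col in range(sino.shape[1]):
        full[:, col] = np.interp(np.arange(n_angles),
                                 measured_idx, sino[:, col])
    return full

# Toy example: 180 one-degree angles, every 4th one measured.
angles = np.arange(0, 180, 4)
truth = np.sin(np.deg2rad(np.arange(180)))[:, None] * np.ones((1, 8))
sparse = truth[angles]
full = complete_sinogram(sparse, angles, 180)
```

On this smooth toy sinogram, interpolation alone is already accurate between measured angles; the learned completion in the paper is needed because real sinograms of anatomy vary far less smoothly across views.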
86
Jiang Z, Zhang Z, Chang Y, Ge Y, Yin FF, Ren L. Enhancement of 4-D Cone-Beam Computed Tomography (4D-CBCT) Using a Dual-Encoder Convolutional Neural Network (DeCNN). IEEE Trans Radiat Plasma Med Sci 2022; 6:222-230. [PMID: 35386935] [PMCID: PMC8979258] [DOI: 10.1109/trpms.2021.3133510]
Abstract
4D-CBCT is a powerful tool for providing respiration-resolved images for moving-target localization. However, projections in each respiratory phase are intrinsically under-sampled given clinical scanning-time and imaging-dose constraints. Images reconstructed by compressed sensing (CS)-based methods suffer from blurred edges. Introducing the average-4D-image constraint into CS-based reconstruction, as in prior-image-constrained CS (PICCS), can improve the edge sharpness of the stable structures, but PICCS can lead to motion artifacts in the moving regions. In this study, we proposed a dual-encoder convolutional neural network (DeCNN) to realize average-image-constrained 4D-CBCT reconstruction. The proposed DeCNN has two parallel encoders that extract features from both the under-sampled target-phase images and the average images. The features are then concatenated and fed into the decoder for high-quality target-phase image reconstruction. The 4D-CBCT reconstructed by the proposed DeCNN from real lung cancer patient data showed (1) qualitatively, clear and accurate edges for both stable and moving structures; (2) quantitatively, low intensity errors, high peak signal-to-noise ratio, and high structural similarity compared to the ground-truth images; and (3) superior quality to reconstructions by several other state-of-the-art methods, including back-projection, CS total-variation, PICCS, and a single-encoder CNN. Overall, the proposed DeCNN is effective in exploiting the average-image constraint to improve 4D-CBCT image quality.
Affiliation(s)
- Zhuoran Jiang
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
- Zeyu Zhang
- Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA
- Yushi Chang
- Department of Radiation Oncology, Hospital of the University of Pennsylvania, Philadelphia, PA 19104, USA
- Yun Ge
- School of Electronic Science and Engineering, Nanjing University, 163 Xianlin Road, Nanjing 210046, China
- Fang-Fang Yin
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC 27710, USA; also with the Medical Physics Graduate Program, Duke University, Durham, NC 27705, USA, and the Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu 215316, China
- Lei Ren
- Department of Radiation Oncology, University of Maryland, Baltimore, MD 21201, USA
87
Fu Y, Zhang H, Morris ED, Glide-Hurst CK, Pai S, Traverso A, Wee L, Hadzic I, Lønne PI, Shen C, Liu T, Yang X. Artificial Intelligence in Radiation Therapy. IEEE Trans Radiat Plasma Med Sci 2022; 6:158-181. [PMID: 35992632] [PMCID: PMC9385128] [DOI: 10.1109/trpms.2021.3107454]
Abstract
Artificial intelligence (AI) has great potential to transform the clinical workflow of radiotherapy. Since the introduction of deep neural networks, many AI-based methods have been proposed to address challenges in different aspects of radiotherapy. Commercial vendors have started to release AI-based tools that can be readily integrated to the established clinical workflow. To show the recent progress in AI-aided radiotherapy, we have reviewed AI-based studies in five major aspects of radiotherapy including image reconstruction, image registration, image segmentation, image synthesis, and automatic treatment planning. In each section, we summarized and categorized the recently published methods, followed by a discussion of the challenges, concerns, and future development. Given the rapid development of AI-aided radiotherapy, the efficiency and effectiveness of radiotherapy in the future could be substantially improved through intelligent automation of various aspects of radiotherapy.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hao Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Eric D. Morris
- Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA 90095, USA
- Carri K. Glide-Hurst
- Department of Human Oncology, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53792, USA
- Suraj Pai
- Maastricht University Medical Centre, Netherlands
- Leonard Wee
- Maastricht University Medical Centre, Netherlands
- Per-Ivar Lønne
- Department of Medical Physics, Oslo University Hospital, PO Box 4953 Nydalen, 0424 Oslo, Norway
- Chenyang Shen
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75002, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
88
Fan Y, Wang H, Gemmeke H, Hopp T, Hesser J. Model-data-driven image reconstruction with neural networks for ultrasound computed tomography breast imaging. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.09.035]
89
Zhou B, Chen X, Zhou SK, Duncan JS, Liu C. DuDoDR-Net: Dual-domain data consistent recurrent network for simultaneous sparse view and metal artifact reduction in computed tomography. Med Image Anal 2022; 75:102289. [PMID: 34758443] [PMCID: PMC8678361] [DOI: 10.1016/j.media.2021.102289]
Abstract
Sparse-view computed tomography (SVCT) aims to reconstruct a cross-sectional image using a reduced number of x-ray projections. While SVCT can efficiently reduce the radiation dose, the reconstruction suffers from severe streak artifacts, and the artifacts are further amplified in the presence of metallic implants, which can adversely impact medical diagnosis and other downstream applications. Previous methods have extensively explored either SVCT reconstruction without metallic implants or full-view CT metal artifact reduction (MAR). The issue of simultaneous sparse-view and metal artifact reduction (SVMAR) remains under-explored, and directly applying previous SVCT and MAR methods to SVMAR may yield non-ideal reconstruction quality. In this work, we propose a dual-domain data consistent recurrent network, called DuDoDR-Net, for SVMAR. Our DuDoDR-Net aims to reconstruct an artifact-free image by recurrent image-domain and sinogram-domain restorations. To ensure that the metal-free part of the acquired projection data is preserved, we also develop an image data consistent layer (iDCL) and a sinogram data consistent layer (sDCL) that are interleaved in our recurrent framework. Our experimental results demonstrate that our DuDoDR-Net produces superior artifact-reduced results while preserving the anatomical structures, outperforming previous SVCT and MAR methods under different sparse-view acquisition settings.
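The data-consistency idea behind the interleaved iDCL/sDCL layers can be illustrated with a minimal sketch: given a mask marking where metal-free projection data were actually acquired, the layer overwrites the network's prediction with the measured values at those positions and keeps the prediction everywhere else. This is an illustrative simplification in NumPy, not the authors' implementation; the mask convention and array shapes are assumptions.

```python
import numpy as np

def sinogram_data_consistency(predicted, measured, metal_free_mask):
    """Keep network predictions only where no reliable measurement exists.

    predicted       : sinogram estimated by the network
    measured        : acquired sparse-view sinogram (zeros at missing views)
    metal_free_mask : True where a metal-free measurement was acquired
    """
    out = predicted.copy()
    out[metal_free_mask] = measured[metal_free_mask]  # trust the data where valid
    return out

# Toy example: 4 views x 5 detector bins; only views 0 and 2 were acquired,
# and one bin of view 2 is corrupted by a metal trace.
measured = np.zeros((4, 5))
measured[0] = 1.0
measured[2] = 2.0
mask = np.zeros((4, 5), dtype=bool)
mask[0] = True
mask[2] = True
mask[2, 3] = False            # metal trace: do not enforce consistency here
predicted = np.full((4, 5), 9.0)
dc = sinogram_data_consistency(predicted, measured, mask)
```

At acquired metal-free positions the measured value wins; at missing views and metal traces the network's estimate is kept.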
Affiliation(s)
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- S Kevin Zhou
- School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China; Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- James S Duncan
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
90
Luca AR, Ursuleanu TF, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Grigorovici A. Impact of quality, type and volume of data used by deep learning models in the analysis of medical images. Inform Med Unlocked 2022. [DOI: 10.1016/j.imu.2022.100911]
91
Wang S, Cao G, Wang Y, Liao S, Wang Q, Shi J, Li C, Shen D. Review and Prospect: Artificial Intelligence in Advanced Medical Imaging. Front Radiol 2021; 1:781868. [PMID: 37492170] [PMCID: PMC10365109] [DOI: 10.3389/fradi.2021.781868]
Abstract
Artificial intelligence (AI) as an emerging technology is gaining momentum in medical imaging. Recently, deep learning-based AI techniques have been actively investigated in medical imaging, and their potential applications range from data acquisition and image reconstruction to image analysis and understanding. In this review, we focus on the use of deep learning in image reconstruction for advanced medical imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). In particular, recent deep learning-based methods for image reconstruction are emphasized, organized according to their methodological designs and performance in handling volumetric imaging data. We expect that this review can help relevant researchers understand how to adapt AI for medical imaging and what advantages can be achieved with the assistance of AI.
Affiliation(s)
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Pengcheng Laboratory, Shenzhen, China
- Guohua Cao
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Yan Wang
- School of Computer Science, Sichuan University, Chengdu, China
- Shu Liao
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Qian Wang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jun Shi
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
92
Sun C, Liu Y, Yang H. Degradation-Aware Deep Learning Framework for Sparse-View CT Reconstruction. Tomography 2021; 7:932-949. [PMID: 34941649] [PMCID: PMC8704775] [DOI: 10.3390/tomography7040077]
Abstract
Sparse-view CT reconstruction is a fundamental task in computed tomography that aims to suppress undesired artifacts and recover the detailed textural structure in degraded CT images. Recently, many deep learning-based networks have achieved performance superior to iterative reconstruction algorithms. However, the performance of these methods may severely deteriorate when the degradation strength of the test image is not consistent with that of the training dataset. In addition, these methods do not pay enough attention to the characteristics of different degradation levels, so simply extending the training dataset with multiple degraded images is not effective either. Although training a separate model for each degradation level can mitigate this problem, it entails extensive parameter storage. Accordingly, in this paper, we focus on sparse-view CT reconstruction across multiple degradation levels. We propose a single degradation-aware deep learning framework to predict clear CT images by understanding the disparity of degradation in both the frequency domain and the image domain. The dual-domain procedure can perform operations tailored to different degradation levels in frequency-component recovery and spatial-detail reconstruction. The peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and visual results demonstrate that our method outperformed classical deep learning-based reconstruction methods in terms of effectiveness and scalability.
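PSNR and SSIM, the metrics quoted throughout these entries, can be sketched in a few lines. Note that this computes a single-window (global) SSIM for brevity, whereas standard implementations such as scikit-image use local sliding windows, so values differ slightly; the sketch is illustrative, not a drop-in replacement.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB (assumes ref != test)."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """SSIM computed over a single whole-image window (simplified)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For an image in [0, 1], a uniform error of 0.1 gives an MSE of 0.01 and hence a PSNR of 20 dB, and SSIM of an image with itself is 1.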
93
Sun H, Xi Q, Fan R, Sun J, Xie K, Ni X, Yang J. Synthesis of pseudo-CT images from pelvic MRI images based on MD-CycleGAN model for radiotherapy. Phys Med Biol 2021; 67. [PMID: 34879356] [DOI: 10.1088/1361-6560/ac4123]
Abstract
OBJECTIVE A multi-discriminator-based cycle generative adversarial network (MD-CycleGAN) model was proposed to synthesize higher-quality pseudo-CT from MRI. APPROACH MRI and CT images obtained at the simulation stage from patients with cervical cancer were selected to train the model. The generator adopted DenseNet as its main architecture. Local and global discriminators based on convolutional neural networks jointly discriminated the authenticity of the input image data. In the testing phase, the model was verified by the four-fold cross-validation method. In the prediction stage, data were selected to evaluate the accuracy of the pseudo-CT in anatomy and dosimetry, and the results were compared with pseudo-CT synthesized by GANs with generators based on the ResNet, sU-Net, and FCN architectures. MAIN RESULTS There are significant differences (P < 0.05) in the four-fold cross-validation results on the peak signal-to-noise ratio and structural similarity index metrics between the pseudo-CT obtained with MD-CycleGAN and the ground-truth CT (CTgt). The pseudo-CT synthesized by MD-CycleGAN had anatomical information closer to the CTgt, with a root mean square error of 47.83±2.92 HU, a normalized mutual information value of 0.9014±0.0212, and a mean absolute error of 46.79±2.76 HU. The differences in dose distribution between the pseudo-CT obtained by MD-CycleGAN and the CTgt were minimal. The mean absolute dose errors of Dosemax, Dosemin, and Dosemean based on the planning target volume were used to evaluate the dose uncertainty of the four pseudo-CTs. The u-values of the Wilcoxon test were 55.407, 41.82, and 56.208, and the differences were statistically significant. The 2%/2 mm-based gamma pass rate (%) of the proposed method was 95.45±1.91, while those of the comparison methods (ResNet_GAN, sUnet_GAN, and FCN_GAN) were 93.33±1.20, 89.64±1.63, and 87.31±1.94, respectively.
SIGNIFICANCE The pseudo-CT obtained with MD-CycleGAN has higher imaging quality and is closer to the CTgt in terms of anatomy and dosimetry than pseudo-CT from the other GAN models.
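The dose metrics compared above (Dosemax, Dosemin, and Dosemean over the planning target volume, and their mean absolute errors between two plans) reduce to simple masked statistics. A minimal sketch with toy dose grids, illustrative values only and not the paper's evaluation pipeline:

```python
import numpy as np

def ptv_dose_stats(dose, ptv_mask):
    """Max, min, and mean dose restricted to the planning target volume."""
    d = dose[ptv_mask]
    return {"Dmax": float(d.max()), "Dmin": float(d.min()), "Dmean": float(d.mean())}

def mean_absolute_dose_error(stats_a, stats_b):
    """Per-metric absolute differences between two dose summaries."""
    return {k: abs(stats_a[k] - stats_b[k]) for k in stats_a}

# Toy 2x2 dose grids (Gy) with a 3-voxel PTV mask.
dose_gt = np.array([[50.0, 60.0], [55.0, 40.0]])
dose_synth = np.array([[51.0, 58.0], [53.0, 40.0]])
ptv = np.array([[True, True], [True, False]])
stats_gt = ptv_dose_stats(dose_gt, ptv)
stats_synth = ptv_dose_stats(dose_synth, ptv)
mae = mean_absolute_dose_error(stats_gt, stats_synth)
```

Here `stats_gt` is {Dmax: 60, Dmin: 50, Dmean: 55} and the absolute errors follow directly; gamma analysis (2%/2 mm) additionally searches over a spatial tolerance and is deliberately omitted from this sketch.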
Affiliation(s)
- Hongfei Sun
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, China
- Qianyi Xi
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, Jiangsu, 213003, China
- Rongbo Fan
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, China
- Jiawei Sun
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, Jiangsu, 213003, China
- Kai Xie
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, Jiangsu, 213003, China
- Xinye Ni
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, China
- Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, China
94
Wang K, Jiang P, Meng J, Jiang X. Attention-Based DenseNet for Pneumonia Classification. Ing Rech Biomed 2021. [DOI: 10.1016/j.irbm.2021.12.004]
95
Xia W, Lu Z, Huang Y, Liu Y, Chen H, Zhou J, Zhang Y. CT Reconstruction With PDF: Parameter-Dependent Framework for Data From Multiple Geometries and Dose Levels. IEEE Trans Med Imaging 2021; 40:3065-3076. [PMID: 34086564] [DOI: 10.1109/tmi.2021.3085839]
Abstract
The current mainstream computed tomography (CT) reconstruction methods based on deep learning usually need to fix the scanning geometry and dose level, which significantly increases the training cost and requires more training data for real clinical applications. In this paper, we propose a parameter-dependent framework (PDF) that trains a reconstruction network with data originating from multiple alternative geometries and dose levels simultaneously. In the proposed PDF, the geometry and dose level are parameterized and fed into two multilayer perceptrons (MLPs). The outputs of the MLPs are used to modulate the feature maps of the CT reconstruction network, conditioning the network outputs on different geometries and dose levels. The experiments show that our proposed method obtains performance competitive with the original network trained with either a specific or a mixed geometry and dose level, saving the extra training cost of maintaining separate networks for multiple geometries and dose levels.
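The modulation mechanism described here resembles feature-wise linear modulation: a small MLP maps the scan parameters to per-channel scale and shift values that are applied to the reconstruction network's feature maps. A rough NumPy sketch of that idea follows; the network sizes, the two-parameter scan descriptor, and the `1 + scale` convention are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, theta):
    """Tiny two-layer perceptron: scan parameters -> per-channel scale & shift."""
    w1, b1, w2, b2 = params
    h = np.tanh(theta @ w1 + b1)
    return h @ w2 + b2

def modulate(feature_maps, scale, shift):
    """Channel-wise affine modulation of (C, H, W) feature maps."""
    return feature_maps * scale[:, None, None] + shift[:, None, None]

channels = 4
params = (rng.normal(size=(2, 8)), np.zeros(8),
          rng.normal(size=(8, 2 * channels)), np.zeros(2 * channels))
theta = np.array([0.5, 1.0])          # e.g. normalized (n_views, dose) descriptor
out = mlp(params, theta)
scale, shift = 1.0 + out[:channels], out[channels:]
feats = np.ones((channels, 6, 6))     # stand-in for a conv layer's output
modulated = modulate(feats, scale, shift)
```

Changing `theta` changes the scale/shift pair, so one set of convolutional weights can serve several acquisition settings.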
96
Wu W, Hu D, Niu C, Yu H, Vardhanabhuti V, Wang G. DRONE: Dual-Domain Residual-based Optimization NEtwork for Sparse-View CT Reconstruction. IEEE Trans Med Imaging 2021; 40:3002-3014. [PMID: 33956627] [PMCID: PMC8591633] [DOI: 10.1109/tmi.2021.3078067]
Abstract
Deep learning has attracted rapidly increasing attention in the field of tomographic image reconstruction, especially for CT, MRI, PET/SPECT, ultrasound, and optical imaging. Among various topics, sparse-view CT remains a challenge: it targets a decent image reconstruction from very few projections. To address this challenge, in this article we propose a Dual-domain Residual-based Optimization NEtwork (DRONE). DRONE consists of three modules for embedding, refinement, and awareness, respectively. In the embedding module, a sparse sinogram is first extended. Then, sparse-view artifacts are effectively suppressed in the image domain. After that, the refinement module recovers image details in the residual data and image domains synergistically. Finally, the results from the embedding and refinement modules in the data and image domains are regularized for optimized image quality in the awareness module, which ensures consistency between measurements and images with the kernel awareness of compressed sensing. The DRONE network is trained, validated, and tested on preclinical and clinical datasets, demonstrating its merits in edge preservation, feature recovery, and reconstruction accuracy.
97
Tao X, Wang Y, Lin L, Hong Z, Ma J. Learning to Reconstruct CT Images From the VVBP-Tensor. IEEE Trans Med Imaging 2021; 40:3030-3041. [PMID: 34138703] [DOI: 10.1109/tmi.2021.3090257]
Abstract
Deep learning (DL) is driving a major shift in the field of computed tomography (CT) imaging. In general, DL for CT imaging can be applied by processing the projection or the image data with trained deep neural networks (DNNs), unrolling the iterative reconstruction as a DNN for training, or training a well-designed DNN to directly reconstruct the image from the projection. In all of these applications, the whole or part of the DNNs work in the projection or image domain, alone or in combination. In this study, instead of focusing on the projection or image, we train DNNs to reconstruct CT images from the view-by-view backprojection tensor (VVBP-Tensor). The VVBP-Tensor is the 3D data before summation in backprojection. It contains structures of the scanned object after a sorting operation is applied. Unlike the image or projection, which provides compressed information due to the integration/summation step in forward or back projection, the VVBP-Tensor provides lossless information for processing, allowing the trained DNNs to preserve fine details of the image. We develop a learning strategy that inputs slices of the VVBP-Tensor as feature maps and outputs the image. This strategy can be viewed as a generalization of the summation step in conventional filtered backprojection reconstruction. Numerous experiments reveal that the proposed VVBP-Tensor domain learning framework obtains significant improvements over the image, projection, and hybrid projection-image domain learning frameworks. We hope the VVBP-Tensor domain learning framework can inspire algorithm development for DL-based CT imaging.
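The summation step that the VVBP-Tensor approach generalizes can be sketched as a combination over the view axis of a (n_views, H, W) tensor: with uniform weights it reduces to an ordinary (scaled) backprojection summation, while the paper's network replaces it with a learned, much richer mapping. A toy illustration, not the authors' implementation; shapes and weights are assumptions.

```python
import numpy as np

def summation_layer(vvbp_tensor, view_weights=None):
    """Collapse a (n_views, H, W) VVBP tensor into an (H, W) image.

    Uniform weights correspond (up to scaling) to the plain summation of
    filtered backprojection; a trained network would replace this with a
    learned, possibly spatially varying, combination of the view slices.
    """
    n_views = vvbp_tensor.shape[0]
    if view_weights is None:
        view_weights = np.full(n_views, 1.0 / n_views)
    # Contract the view axis: sum_v w[v] * tensor[v, :, :]
    return np.tensordot(view_weights, vvbp_tensor, axes=1)

# Three constant single-view backprojections with values 1, 2, 3.
tensor = np.stack([np.full((3, 3), v) for v in (1.0, 2.0, 3.0)])
image = summation_layer(tensor)     # uniform weights -> mean over views
```

The point of the tensor-domain formulation is that the per-view slices are still available to the network at this stage, rather than already collapsed into one image.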
98
Zhang Z, Yu S, Qin W, Liang X, Xie Y, Cao G. Self-supervised CT super-resolution with hybrid model. Comput Biol Med 2021; 138:104775. [PMID: 34666243] [DOI: 10.1016/j.compbiomed.2021.104775]
Abstract
Software-based methods can improve CT spatial resolution without changing the hardware of the scanner or increasing the radiation dose to the object. In this work, we aim to develop a deep learning (DL) based CT super-resolution (SR) method that can reconstruct low-resolution (LR) sinograms into high-resolution (HR) CT images. We mathematically analyzed the imaging processes in the CT SR problem and synergistically integrated the SR model in the sinogram domain and the deblurring model in the image domain into a hybrid model (SADIR). SADIR incorporates CT domain knowledge and is unrolled into a DL network (SADIR-Net). SADIR-Net is a self-supervised network that can be trained and tested with a single sinogram. SADIR-Net was evaluated through SR CT imaging of a Catphan700 physical phantom and a real porcine phantom, and its performance was compared to other state-of-the-art (SotA) DL-based CT SR methods. On both phantoms, SADIR-Net obtains the highest information fidelity criterion (IFC) and structural similarity index (SSIM) and the lowest root-mean-square error (RMSE). For the modulation transfer function (MTF), SADIR-Net also obtains the best result, improving MTF50% by 69.2% and MTF10% by 69.5% compared with FBP. Meanwhile, the spatial resolutions at MTF50% and MTF10% from SADIR-Net reach 91.3% and 89.3% of the counterparts reconstructed from the HR sinogram with FBP. The results show that SADIR-Net can provide performance comparable to the other SotA methods for CT SR reconstruction, especially in the case of extremely limited training data or even no data at all. Thus, the SADIR method could find use in improving CT resolution without changing the hardware of the scanner or increasing the radiation dose to the object.
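MTF50% and MTF10% are simply the spatial frequencies at which the modulation transfer function falls to 50% and 10% of its zero-frequency value; given a sampled, monotonically decreasing MTF curve they can be read off by interpolation. A hypothetical example with an idealized exponential MTF (toy values, not measured data):

```python
import numpy as np

def freq_at_mtf(freqs, mtf, level):
    """Spatial frequency where a monotonically decreasing MTF curve
    drops to `level`, by linear interpolation between samples.

    np.interp needs increasing x-values, so both arrays are reversed.
    """
    return float(np.interp(level, mtf[::-1], freqs[::-1]))

freqs = np.linspace(0.0, 1.0, 11)     # cycles/mm (toy grid)
mtf = np.exp(-3.0 * freqs)            # idealized decreasing MTF, MTF(0) = 1
mtf50 = freq_at_mtf(freqs, mtf, 0.5)  # ~0.23 cycles/mm for this toy curve
mtf10 = freq_at_mtf(freqs, mtf, 0.1)  # ~0.77 cycles/mm
```

A higher MTF50/MTF10 frequency means finer detail is transferred, which is why the abstract reports percentage improvements in these two numbers.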
Affiliation(s)
- Zhicheng Zhang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305-5847, USA; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Shaode Yu
- College of Information and Communication Engineering, Communication University of China, Beijing 100024, China
- Wenjian Qin
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Xiaokun Liang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Yaoqin Xie
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
- Guohua Cao
- Virginia Polytechnic Institute & State University, Blacksburg, VA 24061, USA

99
Zhi S, Kachelrieß M, Pan F, Mou X. CycN-Net: A Convolutional Neural Network Specialized for 4D CBCT Images Refinement. IEEE Trans Med Imaging 2021; 40:3054-3064. [PMID: 34010129] [DOI: 10.1109/tmi.2021.3081824]
Abstract
Four-dimensional cone-beam computed tomography (4D CBCT) has been developed to provide a sequence of phase-resolved reconstructions in image-guided radiation therapy. However, 4D CBCT images are degraded by severe streaking artifacts and noise because phase-resolved reconstruction is an extremely sparse-view CT procedure in which only a few under-sampled projections are used to reconstruct each phase. Aiming at improving the overall quality of 4D CBCT images, we propose two CNN models, named N-Net and CycN-Net, that fully exploit the inherent properties of 4D CBCT. Specifically, the proposed N-Net incorporates the prior image reconstructed from the entire projection data, based on U-Net, to boost the image quality of each phase-resolved image. Building on N-Net, the proposed CycN-Net also considers the temporal correlation among the phase-resolved images. Extensive experiments on both XCAT simulation data and real patient 4D CBCT datasets were carried out to verify the feasibility of the proposed CNNs. Both networks can effectively suppress streaking artifacts and noise while simultaneously restoring distinct features, compared with existing CNN models and two state-of-the-art iterative algorithms. Moreover, the proposed method is robust in handling complicated tasks across various patient datasets and imaging devices, which implies excellent generalization ability.
100
Roder M, Passos LA, de Rosa GH, de Albuquerque VHC, Papa JP. Reinforcing learning in Deep Belief Networks through nature-inspired optimization. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107466]