51
Abstract
In clinical applications, sparse-view computed tomography (CT) is an effective way to reduce the radiation dose. Iterative reconstruction methods are usually adopted for sparse-view CT. When optimizing the iterative model, directly minimizing the quadratic penalty form of the objective function can be expected to perform poorly; the alternating direction method of multipliers (ADMM) avoids the ill-conditioning associated with the quadratic penalty. However, the regularization terms, sparsifying transform, and parameters in the traditional ADMM iterative model must all be chosen manually. In this paper, we propose a data-driven ADMM reconstruction method that automatically optimizes these hard-to-choose components within the iterative framework. The main contribution is that a modified U-net represents the sparsifying transform, while the prior information and related parameters are trained automatically by the network. Qualitative and quantitative comparisons with other state-of-the-art reconstruction algorithms show the effectiveness of our method for sparse-view CT image reconstruction: the proposed method performs well in streak-artifact elimination and detail preservation. The network copes with a wide range of noise levels and performs exceptionally well in low-dose reconstruction tasks.
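For orientation, the ADMM splitting that this line of work builds on can be sketched on a toy ℓ1-regularized least-squares problem. This is an illustrative NumPy sketch of generic ADMM, not the paper's learned network; the function and variable names are ours.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1 with the split z = x."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # Cache the matrix inverse used by every x-update.
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(n_iter):
        x = Q @ (Atb + rho * (z - u))          # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)   # sparsity subproblem
        u = u + x - z                          # dual (multiplier) update
    return z
```

In a learned variant of this scheme, the soft-threshold step and the penalty parameters would be replaced by trained network components rather than hand-tuned constants.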
52
Lv L, Zeng GL, Zan Y, Hong X, Guo M, Chen G, Tao W, Ding W, Huang Q. A back-projection-and-filtering-like (BPF-like) reconstruction method with the deep learning filtration from listmode data in TOF-PET. Med Phys 2022; 49:2531-2544. [PMID: 35122265; PMCID: PMC10080664; DOI: 10.1002/mp.15520]
Abstract
PURPOSE Time-of-flight (TOF) information improves the signal-to-noise ratio (SNR) of positron emission tomography (PET) imaging. Existing analytical algorithms for TOF PET usually follow a filtered back-projection process when reconstructing images from sinogram data. This work aims to develop a back-projection-and-filtering-like (BPF-like) algorithm that rapidly reconstructs the TOF PET image directly from listmode data. METHODS We extended the conventional 2D non-TOF PET projection model to the TOF case, where projection data are represented as line integrals weighted by a one-dimensional TOF kernel along the projection direction. After deriving the central slice theorem and the TOF back-projection of listmode data, we designed a deep learning network with a modified U-net architecture to perform the spatial filtration (reconstruction filter). The proposed BP-Net method was validated via Monte Carlo simulations of TOF PET listmode data with three different time resolutions for two types of activity phantoms. The network was trained only on the simulated full-dose XCAT dataset and then evaluated on XCAT and Jaszczak data with different time resolutions and dose levels. RESULTS Reconstructed images show that, compared with the conventional BPF algorithm and the MLEM algorithm for TOF PET, the proposed BP-Net obtains better image quality in terms of peak signal-to-noise ratio, relative mean square error, and structural similarity index; moreover, the reconstruction speed of the BP-Net is 1.75 times faster than BPF and 29.05 times faster than MLEM with 15 iterations. The results also indicate that the performance of the BP-Net degrades with worse time resolution and lower tracer dose, but degrades less than BPF or MLEM reconstructions. CONCLUSION In this work, we developed an analytical-like reconstruction in the form of BPF, with the reconstruction filtering performed by a deep network. The method runs even faster than the conventional BPF algorithm and provides accurate reconstructions from listmode data in TOF-PET, without rebinning the data into a sinogram.
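The TOF-weighted back-projection that a BPF-like pipeline starts from can be sketched in NumPy. This is a toy 2D geometry of our own devising, not the authors' implementation: each listmode event deposits a one-dimensional Gaussian TOF kernel along its line of response instead of a uniform smear.

```python
import numpy as np

def tof_backproject(events, grid_size=64, sigma_tof=3.0):
    """
    Naive TOF-weighted back-projection of listmode events onto a 2D grid.
    Each event is (x0, y0, dx, dy, t): a line through (x0, y0) with unit
    direction (dx, dy), and a TOF-estimated annihilation position t along
    that line. Instead of smearing uniformly along the line (non-TOF BP),
    each pixel on the line is weighted by a 1D Gaussian TOF kernel
    centered at t.
    """
    img = np.zeros((grid_size, grid_size))
    s = np.arange(-grid_size, grid_size)            # parameter along the line
    for x0, y0, dx, dy, t in events:
        xs = np.rint(x0 + s * dx).astype(int)
        ys = np.rint(y0 + s * dy).astype(int)
        w = np.exp(-0.5 * ((s - t) / sigma_tof) ** 2)  # TOF kernel weights
        ok = (xs >= 0) & (xs < grid_size) & (ys >= 0) & (ys < grid_size)
        np.add.at(img, (ys[ok], xs[ok]), w[ok])
    return img
```

Two perpendicular events whose TOF estimates agree on the same point produce a back-projection image whose maximum sits at that crossing, which is the raw input a learned reconstruction filter would then sharpen.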
Affiliation(s)
- Li Lv
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Gengsheng L. Zeng
- Department of Computer Science, Utah Valley University, Orem, UT 84058, USA
- Yunlong Zan
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- Xiang Hong
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Minghao Guo
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Gaoyu Chen
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Weijie Tao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- Wenxiang Ding
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Qiu Huang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
53
Flenner S, Bruns S, Longo E, Parnell AJ, Stockhausen KE, Müller M, Greving I. Machine learning denoising of high-resolution X-ray nanotomography data. J Synchrotron Radiat 2022; 29:230-238. [PMID: 34985440; PMCID: PMC8733986; DOI: 10.1107/s1600577521011139]
Abstract
High-resolution X-ray nanotomography is a quantitative tool for investigating specimens from a wide range of research areas. However, the reconstructed tomograms are often degraded by noise and are therefore not suitable for automatic segmentation, so filtering is usually required before a detailed quantitative analysis. Most filters, however, blur the reconstructed tomograms. Machine learning (ML) techniques offer a powerful alternative to conventional filtering methods. In this article, we show that a self-supervised ML denoising technique can eliminate noise from nanotomography data very efficiently. The technique is applied to high-resolution nanotomography data and compared with conventional filters, such as a median filter and a nonlocal-means filter optimized for tomographic data sets. The ML approach proves to be a very powerful tool that outperforms conventional filters by removing noise without blurring relevant structural features, thus enabling efficient quantitative analysis in different scientific fields.
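The article's exact training scheme is not reproduced here, but a common self-supervised denoising setup (Noise2Void-style blind-spot masking) can be sketched as follows. It assumes pixelwise-independent noise, and the function name and parameters are ours.

```python
import numpy as np

def blindspot_batch(noisy, n_mask=64, rng=None):
    """
    Build one self-supervised training pair from a single noisy image
    ('blind spot' masking). Randomly chosen pixels are replaced by a
    random pixel from their 3x3 neighborhood; the network is then trained
    to predict the ORIGINAL noisy value at exactly those pixels, so it
    cannot learn the identity mapping and, for pixelwise-independent
    noise, converges toward the clean signal.
    """
    rng = np.random.default_rng(rng)
    h, w = noisy.shape
    inp = noisy.copy()
    ys = rng.integers(1, h - 1, n_mask)
    xs = rng.integers(1, w - 1, n_mask)
    oy = rng.integers(-1, 2, n_mask)   # neighborhood offsets (may be 0)
    ox = rng.integers(-1, 2, n_mask)
    inp[ys, xs] = noisy[ys + oy, xs + ox]
    mask = np.zeros_like(noisy, dtype=bool)
    mask[ys, xs] = True
    return inp, noisy, mask            # network input, target, loss mask
```

During training, the loss would be evaluated only where `mask` is true; everything outside the mask is untouched input.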
Affiliation(s)
- Silja Flenner
- Helmholtz-Zentrum Hereon, Max-Planck-Strasse 1, 21502 Geesthacht, Germany
- Stefan Bruns
- Helmholtz-Zentrum Hereon, Max-Planck-Strasse 1, 21502 Geesthacht, Germany
- Elena Longo
- Helmholtz-Zentrum Hereon, Max-Planck-Strasse 1, 21502 Geesthacht, Germany
- Andrew J. Parnell
- Department of Physics and Astronomy, University of Sheffield, Western Bank, Sheffield S3 7RH, United Kingdom
- Kilian E. Stockhausen
- Department of Osteology and Biomechanics, University Medical Center, Lottestrasse 55a, 22529 Hamburg, Germany
- Martin Müller
- Helmholtz-Zentrum Hereon, Max-Planck-Strasse 1, 21502 Geesthacht, Germany
- Imke Greving
- Helmholtz-Zentrum Hereon, Max-Planck-Strasse 1, 21502 Geesthacht, Germany
54
Zeng D, Wang L, Geng M, Li S, Deng Y, Xie Q, Li D, Zhang H, Li Y, Xu Z, Meng D, Ma J. Noise-Generating-Mechanism-Driven Unsupervised Learning for Low-Dose CT Sinogram Recovery. IEEE Trans Radiat Plasma Med Sci 2022. [DOI: 10.1109/trpms.2021.3083361]
55
Olasz C, Varga LG, Nagy A. Novel U-net based deep neural networks for transmission tomography. J Xray Sci Technol 2022; 30:13-31. [PMID: 34806643; DOI: 10.3233/xst-210962]
Abstract
BACKGROUND The fusion of computed tomography and deep learning is an effective way of achieving improved image quality and artifact reduction in reconstructed images. OBJECTIVE In this paper, we present two novel neural network architectures for tomographic reconstruction with reduced effects of beam hardening and electrical noise. METHODS In the proposed architectures, the image reconstruction step is located inside the neural network, which allows the network to be trained with the mathematical model of the projections taken into account. This strong connection enables us to enhance the projection data and the reconstructed image together. We tested the two proposed models against three other methods on two datasets containing physically realistic simulated data with strong beam hardening and electrical noise. We also evaluated the networks numerically on the reconstructed images according to three error measures and derived a scoring system for the methods from them. RESULTS The results showed the superiority of the novel architecture called TomoNet2. Compared with the FBP method, TomoNet2 improved the average structural similarity index from 0.9372 to 0.9977 and from 0.9519 to 0.9886 on the two datasets. This network also yielded the best peak signal-to-noise ratio results, for 79.2 and 53.0 percent of the two datasets, among the compared improvement techniques. CONCLUSIONS Our experimental results showed that using the reconstruction step in skip connections of deep neural networks improves the quality of the reconstructions. We are confident that the proposed method can be applied effectively to other datasets for tomographic purposes.
Affiliation(s)
- Antal Nagy
- University of Szeged, 6720 Szeged, Hungary
56
Jiang Z, Zhang Z, Chang Y, Ge Y, Yin FF, Ren L. Prior image-guided cone-beam computed tomography augmentation from under-sampled projections using a convolutional neural network. Quant Imaging Med Surg 2021; 11:4767-4780. [PMID: 34888188; DOI: 10.21037/qims-21-114]
Abstract
Background Acquiring sparse-view cone-beam computed tomography (CBCT) is an effective way to reduce the imaging dose. However, images reconstructed by the conventional filtered back-projection method suffer from severe streak artifacts due to projection under-sampling. Existing deep learning models have demonstrated the feasibility of restoring volumetric structures from highly under-sampled images. However, owing to inter-patient variability, they fail to restore patient-specific details with the common restoration pattern learned from group data. Patient-specific models trained on intra-patient data can restore such details, but they must be retrained for each patient. It is therefore highly desirable to develop a generalized model that can use patient-specific information for under-sampled image augmentation. Methods In this study, we proposed a merging-encoder convolutional neural network (MeCNN) to realize prior image-guided augmentation of under-sampled CBCT. Instead of learning patient-specific structures, the proposed model learns a generalized pattern of exploiting the patient-specific information in the prior images to facilitate under-sampled image enhancement. Specifically, the MeCNN consists of a merging encoder and a decoder. The merging encoder extracts image features from both the prior CT images and the under-sampled CBCT images and merges the features at multiple scales via deep convolutions. The merged features are then connected to the decoder via shortcuts to yield high-quality CBCT images. The proposed model was tested on both simulated and clinical CBCTs. The predicted CBCT images were evaluated qualitatively and quantitatively in terms of image quality and tumor-localization accuracy. The Mann-Whitney U test was used for statistical analysis, with P<0.05 considered statistically significant. Results The proposed model yields CT-like, high-quality CBCT images from only 36 half-fan projections. Compared with other methods, CBCT images augmented by the proposed model have significantly lower intensity errors, significantly higher peak signal-to-noise ratio, and significantly higher structural similarity with respect to the ground-truth images. The proposed method also significantly reduced the 3D distance of the CBCT-based tumor-localization errors. In addition, the CBCT augmentation is nearly real-time. Conclusions With prior-image guidance, the proposed method is effective in reconstructing high-quality CBCT images from highly under-sampled projections, considerably reducing the imaging dose and improving the clinical utility of CBCT.
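The image-quality figures of merit used in evaluations like this one are easy to state concretely. Below is a minimal sketch of PSNR and mean absolute intensity error; the helper names are ours, not the paper's code.

```python
import numpy as np

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    ref = np.asarray(ref, float); test = np.asarray(test, float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - test) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def mean_intensity_error(ref, test):
    """Mean absolute intensity error (e.g., in HU for CT-like images)."""
    ref = np.asarray(ref, float); test = np.asarray(test, float)
    return float(np.mean(np.abs(ref - test)))
```

Structural similarity (SSIM), the third metric mentioned, additionally compares local means, variances, and covariances within sliding windows, so it is usually taken from an image-processing library rather than re-implemented.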
Affiliation(s)
- Zhuoran Jiang
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Zeyu Zhang
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
- Yushi Chang
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
- Yun Ge
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
- Fang-Fang Yin
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA; Medical Physics Graduate Program, Duke University, Durham, NC, USA; Medical Physics Graduate Program, Duke Kunshan University, Kunshan, China
- Lei Ren
- Department of Radiation Oncology, University of Maryland, Baltimore, MD, USA
57
Decuyper M, Maebe J, Van Holen R, Vandenberghe S. Artificial intelligence with deep learning in nuclear medicine and radiology. EJNMMI Phys 2021; 8:81. [PMID: 34897550; PMCID: PMC8665861; DOI: 10.1186/s40658-021-00426-y]
Abstract
The use of deep learning in medical imaging has increased rapidly over the past few years, finding applications throughout the entire radiology pipeline, from improved scanner performance to automatic disease detection and diagnosis. These advancements have resulted in a wide variety of deep learning approaches being developed, solving unique challenges for various imaging modalities. This paper provides a review on these developments from a technical point of view, categorizing the different methodologies and summarizing their implementation. We provide an introduction to the design of neural networks and their training procedure, after which we take an extended look at their uses in medical imaging. We cover the different sections of the radiology pipeline, highlighting some influential works and discussing the merits and limitations of deep learning approaches compared to other traditional methods. As such, this review is intended to provide a broad yet concise overview for the interested reader, facilitating adoption and interdisciplinary research of deep learning in the field of medical imaging.
Affiliation(s)
- Milan Decuyper
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
- Jens Maebe
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
- Roel Van Holen
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
- Stefaan Vandenberghe
- Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
58
Ma G, Zhao X, Zhu Y, Zhang H. Projection-to-image transform frame: a lightweight block reconstruction network (LBRN) for computed tomography. Phys Med Biol 2021; 67. [PMID: 34879357; DOI: 10.1088/1361-6560/ac4122]
Abstract
Several reconstruction networks have been proposed for learning-based computed tomography (CT) reconstruction. However, applying neural networks to tomographic reconstruction remains challenging due to unacceptable memory requirements. In this study, we present a novel lightweight block reconstruction network (LBRN), which transforms the reconstruction operator into a deep neural network by unrolling the filtered back-projection (FBP) method. Specifically, the proposed network contains two main modules corresponding, respectively, to the filtering and back-projection steps of the FBP method. The first module of the LBRN decouples the Radon-transform relationship between the reconstructed image and the projection data, so the following module, the block back-projection module, can use a block reconstruction strategy. Because each image block is connected only with part of the filtered projection data, the network structure is greatly simplified and the number of parameters of the whole network is dramatically reduced. Moreover, the approach is trained end-to-end, works directly from raw projection data, and does not depend on any initial image. Five reconstruction experiments were conducted to evaluate the performance of the proposed LBRN: full-angle, low-dose CT, region-of-interest (ROI), metal-artifact reduction, and real-data experiments. The results show that the LBRN can be effectively introduced into the reconstruction process and has outstanding advantages across these different reconstruction problems.
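The two FBP modules such a network unrolls (per-projection ramp filtering, then back-projection) can be sketched for the parallel-beam case in NumPy. This is a generic textbook FBP sketch, not the LBRN itself; names and the nearest-neighbor interpolation are our simplifications.

```python
import numpy as np

def ramp_filter(sino):
    """Apply the FBP ramp filter to each row (projection) in the Fourier domain."""
    n = sino.shape[1]
    freqs = np.fft.fftfreq(n)                       # ramp |f| in cycles/sample
    return np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * np.abs(freqs), axis=1))

def backproject(filtered, angles, size):
    """Smear each filtered projection back across the image grid."""
    c = size // 2
    y, x = np.mgrid[:size, :size] - c
    img = np.zeros((size, size))
    for row, theta in zip(filtered, angles):
        t = x * np.cos(theta) + y * np.sin(theta)   # detector coordinate of each pixel
        idx = np.clip(np.rint(t).astype(int) + c, 0, size - 1)
        img += row[idx]                             # nearest-neighbor interpolation
    return img * np.pi / len(angles)
```

In an unrolled network, the fixed ramp kernel and the dense back-projection become learnable, block-partitioned layers; here they are plain closed-form operations.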
Affiliation(s)
- Genwei Ma
- School of Mathematical Sciences, Capital Normal University, Beijing 100037, China
- Xing Zhao
- School of Mathematical Sciences, Capital Normal University, West Third Ring Road North, Beijing 100037, China
- Yining Zhu
- School of Mathematical Sciences, Capital Normal University, Beijing 100037, China
- Huitao Zhang
- School of Mathematical Sciences, Capital Normal University, Beijing 100037, China
59
Chao L, Wang Z, Zhang H, Xu W, Zhang P, Li Q. Sparse-view cone beam CT reconstruction using dual CNNs in projection domain and image domain. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.12.096]
60
Su T, Cui Z, Yang J, Zhang Y, Liu J, Zhu J, Gao X, Fang S, Zheng H, Ge Y, Liang D. Generalized deep iterative reconstruction for sparse-view CT imaging. Phys Med Biol 2021; 67. [PMID: 34847538; DOI: 10.1088/1361-6560/ac3eae]
Abstract
Sparse-view CT is a promising approach to reducing the X-ray radiation dose in clinical CT imaging. However, images reconstructed by the conventional filtered back-projection (FBP) algorithm suffer from severe streaking artifacts. Iterative reconstruction (IR) algorithms have been widely adopted to mitigate these artifacts, but they may prolong the imaging time due to intense data-specific computation. Recently, model-driven deep learning (DL) reconstruction methods, which unroll the iterative optimization procedure into a deep neural network, have shown exciting prospects for improving image quality and shortening reconstruction time. In this work, we explore a generalized unrolling scheme for such iterative models to further enhance their performance on sparse-view CT imaging. With it, the iteration parameters, the regularizer, the data-fidelity term, and even the mathematical operations are all learned and optimized via network training. Results from numerical and experimental sparse-view CT imaging demonstrate that the proposed network with maximum generalization provides the best reconstruction performance.
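What "unrolling" means here can be shown on a toy problem: a fixed number of gradient-style iterations is written out as a feed-forward computation whose per-iteration step sizes and regularization (proximal) operator would become trainable components. The sketch below is ours, not the paper's network.

```python
import numpy as np

def unrolled_recon(A, b, step_sizes, prox=lambda x: x):
    """
    K 'unrolled' iterations of  x <- prox(x - a_k * A^T (A x - b)).
    In a model-driven network, each step size a_k and the prox operator
    (the learned regularizer) would be trainable layers; here they are
    plain inputs, and the default prox is the identity (no regularization).
    """
    x = np.zeros(A.shape[1])
    for a in step_sizes:
        x = prox(x - a * (A.T @ (A @ x - b)))   # one unrolled "layer"
    return x
```

With a well-conditioned system and small enough steps, the unrolled computation converges toward the least-squares solution; training would instead tune each layer to reach a good image in very few iterations.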
Affiliation(s)
- Ting Su
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhuoxu Cui
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Jiecheng Yang
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yunxin Zhang
- Beijing Jishuitan Hospital, Beijing, China
- Jian Liu
- Beijing Tiantan Hospital, Beijing, China
- Jiongtao Zhu
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Xiang Gao
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shibo Fang
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Hairong Zheng
- Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Avenue, Shenzhen University Town, Shenzhen, China
- Yongshuai Ge
- Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Dong Liang
- Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, 1068 Xueyuan Avenue, Shenzhen University Town, Shenzhen 518055, China
61
Fahrig R, Jaffray DA, Sechopoulos I, Webster Stayman J. Flat-panel conebeam CT in the clinic: history and current state. J Med Imaging (Bellingham) 2021; 8:052115. [PMID: 34722795; DOI: 10.1117/1.jmi.8.5.052115]
Abstract
Research into conebeam CT concepts began as soon as the first clinical single-slice CT scanner was conceived. Early implementations of conebeam CT in the 1980s focused on high-contrast applications where concurrent high resolution (<200 μm), for visualization of small contrast-filled vessels, bones, or teeth, was an imaging requirement that could not be met by the contemporaneous CT scanners. However, the use of nonlinear imagers, e.g., X-ray image intensifiers, limited the clinical utility of the earliest diagnostic conebeam CT systems. The development of consumer-electronics large-area displays provided a technical foundation that was leveraged in the 1990s to produce first large-area digital X-ray detectors for use in radiography and then compact flat panels suitable for high-resolution and high-frame-rate conebeam CT. In this review, we show the concurrent evolution of digital flat panel (DFP) technology and clinical conebeam CT. We give a brief summary of conebeam CT reconstruction, followed by a brief review of correction approaches for DFP-specific artifacts. The historical development and current status of flat-panel conebeam CT in four clinical areas (breast, fixed C-arm, image-guided radiation therapy, and extremity/head) are presented. Advances in DFP technology over the past two decades have led to improved visualization in high-contrast, high-resolution clinical tasks, and image quality now approaches the soft-tissue contrast resolution that is the standard in clinical CT. Future technical developments in DFPs will enable an even broader range of clinical applications; research in the arena of flat-panel CT shows no signs of slowing down.
Affiliation(s)
- Rebecca Fahrig
- Innovation, Advanced Therapies, Siemens Healthcare GmbH, Forchheim, Germany; Department of Computer Science 5, Friedrich-Alexander-Universität, Erlangen, Germany
- David A Jaffray
- MD Anderson Cancer Center, Departments of Radiation Physics and Imaging Physics, Houston, Texas, United States
- Ioannis Sechopoulos
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen, The Netherlands; Dutch Expert Center for Screening (LRCB), Nijmegen, The Netherlands; University of Twente, Technical Medical Center, Enschede, The Netherlands
- J Webster Stayman
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
62
Huang Y, Preuhs A, Manhart M, Lauritsch G, Maier A. Data Extrapolation From Learned Prior Images for Truncation Correction in Computed Tomography. IEEE Trans Med Imaging 2021; 40:3042-3053. [PMID: 33844627; DOI: 10.1109/tmi.2021.3072568]
Abstract
Data truncation is a common problem in computed tomography (CT). Truncation causes cupping artifacts inside the field of view (FOV) and missing anatomical structures outside the FOV. Deep learning has achieved impressive results in CT reconstruction from limited data; however, its robustness is still a concern for clinical applications. Although the image quality of learning-based compensation schemes may be inadequate for clinical diagnosis, they can provide prior information for more accurate extrapolation than conventional heuristic extrapolation methods. With extrapolated projections, a conventional image reconstruction algorithm can then be applied to obtain the final reconstruction. In this work, a general plug-and-play (PnP) method for truncation correction is proposed based on this idea, into which various deep learning methods and conventional reconstruction algorithms can be plugged. Such a PnP method integrates data consistency for the measured data with learned prior-image information for the truncated data, and shows better robustness and interpretability than deep learning alone. To demonstrate its efficacy, two state-of-the-art deep learning methods, FBPConvNet and Pix2pixGAN, are investigated for truncation correction in cone-beam CT in noise-free and noisy cases. Their robustness is evaluated by showing false-negative and false-positive lesion cases. With the proposed PnP method, false lesion structures are corrected for both deep learning methods. For FBPConvNet, the root-mean-square error (RMSE) inside the FOV is improved from 92 HU to around 30 HU by PnP in the noisy case. Pix2pixGAN alone achieves better image quality than FBPConvNet alone for truncation correction in general, and PnP further improves its RMSE inside the FOV from 42 HU to around 27 HU. The efficacy of PnP is also demonstrated on real clinical head data.
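The data-consistency idea, keeping measured samples inside the FOV and filling the truncated samples from projections of a learned prior image, reduces to a masked merge in the projection domain. Below is a minimal sketch under our own naming, with the deep-learning prior and the final reconstruction left abstract.

```python
import numpy as np

def merge_truncated(measured, prior_projection, fov_mask):
    """
    Plug-and-play-style projection completion: keep the measured detector
    samples inside the field of view (fov_mask True) and fill the
    truncated, outside-FOV samples with the forward projection of a
    learned prior image. A full pipeline would then run a conventional
    reconstruction (e.g., FDK/FBP) on the merged projection data.
    """
    return np.where(fov_mask, measured, prior_projection)
```

The appeal of this split is interpretability: measured data are never overwritten by the network, which only supplies the samples that were physically missing.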
63
He J, Chen S, Zhang H, Tao X, Lin W, Zhang S, Zeng D, Ma J. Downsampled Imaging Geometric Modeling for Accurate CT Reconstruction via Deep Learning. IEEE Trans Med Imaging 2021; 40:2976-2985. [PMID: 33881992; DOI: 10.1109/tmi.2021.3074783]
Abstract
X-ray computed tomography (CT) is widely used clinically to diagnose a variety of diseases by reconstructing tomographic images of a living subject using penetrating X-rays. For accurate CT image reconstruction, a precise imaging geometric model of the radiation attenuation process is usually required to solve the inversion problem of CT scanning, which encodes the subject into a set of intermediate representations at different angular positions. Here, we show that accurate CT image reconstruction can be achieved with downsampled imaging geometric modeling via deep-learning techniques. Specifically, we first propose a downsampled imaging geometric modeling approach for the data acquisition process and then incorporate it into a hierarchical neural network that combines geometric modeling knowledge of the CT imaging system with prior knowledge gained from a data-driven training process. The proposed neural network is denoted DSigNet, i.e., a downsampled-imaging-geometry-based network for CT image reconstruction. We demonstrate the feasibility of DSigNet for accurate CT image reconstruction with clinical patient data. In addition to improving CT image quality, the proposed DSigNet may reduce computational complexity and accelerate reconstruction speed for modern CT imaging systems.
64
Okawa Y, Hotate K. Computed tomography for distributed Brillouin sensing. Opt Express 2021; 29:35067-35077. [PMID: 34808950; DOI: 10.1364/oe.435320]
Abstract
A method is proposed to reconstruct the spatial distribution of the Brillouin gain spectrum from its Radon transform, a form of optical computed tomography. To verify the concept, an experiment was performed on distributed Brillouin fiber sensing, which succeeded in detecting a 55-cm strained section along a 10-m fiber. The experimental system that acquires the Radon transform of the Brillouin gain spectrum is based on Brillouin optical correlation-domain analysis with a linear frequency-modulated continuous-wave laser. By combining distributed fiber sensing with computed tomography, this method can realize Brillouin sensing with a high signal-to-noise ratio.
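The Radon transform at the heart of this approach can be sketched discretely: each projection bins every pixel by its rotated detector coordinate and accumulates the pixel values, approximating line integrals. This is an illustrative NumPy sketch of the generic transform, not the experimental system.

```python
import numpy as np

def radon(img, angles):
    """
    Discrete Radon transform: for each angle, bin each pixel's rotated
    detector coordinate t = x*cos(theta) + y*sin(theta) (measured from the
    image center) and sum the pixel values per bin (nearest-bin model).
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    y, x = np.mgrid[:h, :w]
    n_det = int(np.ceil(np.hypot(h, w)))        # detector long enough for any angle
    sino = np.zeros((len(angles), n_det))
    for i, th in enumerate(angles):
        t = (x - cx) * np.cos(th) + (y - cy) * np.sin(th)
        bins = np.clip(np.rint(t + n_det / 2).astype(int), 0, n_det - 1)
        np.add.at(sino[i], bins.ravel(), img.ravel())
    return sino
```

A useful sanity property of this model is mass conservation: every pixel lands in exactly one bin per angle, so each projection sums to the total image mass.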
65
Wu W, Hu D, Niu C, Broeke LV, Butler APH, Cao P, Atlas J, Chernoglazov A, Vardhanabhuti V, Wang G. Deep learning based spectral CT imaging. Neural Netw 2021; 144:342-358. [PMID: 34560584; DOI: 10.1016/j.neunet.2021.08.026]
Abstract
Spectral computed tomography (CT) has attracted much attention for radiation dose reduction, metal artifact removal, tissue quantification, and material discrimination. The X-ray energy spectrum is divided into several bins, and each energy-bin-specific projection has a lower signal-to-noise ratio (SNR) than its current-integrating counterpart, which makes image reconstruction a unique challenge. Traditional wisdom is to use prior-knowledge-based iterative methods, but these demand a great computational cost. Inspired by deep learning, we first develop a deep-learning-based reconstruction method, i.e., U-net with L_p^p-norm, Total variation, Residual learning, and Anisotropic adaptation (ULTRA). Specifically, we emphasize multi-scale feature fusion and multichannel filtering enhancement with a densely connected encoding architecture for residual learning and feature fusion. To address the image blurring associated with the L_2^2 loss, we propose a general L_p^p loss, p>0. Furthermore, because the images from different energy bins share similar structures of the same object, a regularization characterizing the correlations among energy bins is incorporated into the L_p^p loss function, which helps unify deep-learning-based methods with traditional compressed-sensing-based methods. Finally, an anisotropically weighted total variation is employed to characterize sparsity in the spatial-spectral domain and regularize the proposed network. We validate our ULTRA networks on three large-scale spectral CT datasets and obtain excellent results relative to the competing algorithms. In conclusion, our quantitative and qualitative results from numerical simulations and preclinical experiments demonstrate that the proposed approach is accurate, efficient, and robust for high-quality spectral CT image reconstruction.
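The L_p^p loss and an anisotropically weighted total variation regularizer can be written down directly. Below is a sketch under our own naming, with uniform unit weights as a placeholder assumption; the paper's learned weighting is not reproduced.

```python
import numpy as np

def lp_loss(pred, target, p=1.5):
    """L_p^p data loss, sum(|pred - target|^p). p=2 gives the usual squared
    loss; smaller p penalizes large residuals less and was argued to reduce
    over-smoothing relative to L_2^2."""
    pred = np.asarray(pred, float); target = np.asarray(target, float)
    return float(np.sum(np.abs(pred - target) ** p))

def anisotropic_tv(vol, weights=(1.0, 1.0, 1.0)):
    """Anisotropically weighted total variation over a (bins, H, W) spectral
    volume: finite differences along the spectral axis and the two spatial
    axes get separate weights, allowing different smoothing strengths."""
    diffs = [np.abs(np.diff(vol, axis=a)).sum() for a in range(3)]
    return float(sum(w * d for w, d in zip(weights, diffs)))
```

In training, the total objective would be the L_p^p data term plus a weighted TV term, with the weights tuned (or learned) to trade spectral consistency against spatial sharpness.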
Affiliation(s)
- Weiwen Wu
- Department of Diagnostic Radiology, Queen Mary Hospital, University of Hong Kong, Hong Kong, People's Republic of China; Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, School of Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Dianlin Hu
- The Laboratory of Image Science and Technology, Southeast University, Nanjing, People's Republic of China
- Chuang Niu
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, School of Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Lieza Vanden Broeke
- Department of Diagnostic Radiology, Queen Mary Hospital, University of Hong Kong, Hong Kong, People's Republic of China
- Peng Cao
- Department of Diagnostic Radiology, Queen Mary Hospital, University of Hong Kong, Hong Kong, People's Republic of China
- James Atlas
- Department of Radiology, University of Otago, Christchurch, New Zealand
- Varut Vardhanabhuti
- Department of Diagnostic Radiology, Queen Mary Hospital, University of Hong Kong, Hong Kong, People's Republic of China
- Ge Wang
- Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, School of Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
66
Alla Takam C, Tchagna Kouanou A, Samba O, Mih Attia T, Tchiotsop D. Big Data Framework Using Spark Architecture for Dose Optimization Based on Deep Learning in Medical Imaging. Artif Intell 2021. [DOI: 10.5772/intechopen.97746]
Abstract
Deep learning and machine learning provide consistent tools and powerful functions for recognition, classification, reconstruction, noise reduction, quantification, and segmentation in biomedical image analysis, and several breakthroughs have followed. Recently, applications of deep learning and machine learning for low-dose optimization in computed tomography have been developed. Owing to advances in reconstruction and processing technology, it has become crucial to develop architectures and/or methods based on deep learning algorithms to minimize radiation during computed tomography scan inspections. This chapter extends the work of Alla et al. (2020). It introduces deep learning for computed tomography scan low-dose optimization, shows examples described in the literature, briefly discusses new methods for computed tomography scan image processing, and provides conclusions. We propose a pipeline for low-dose computed tomography scan image reconstruction based on the literature; the pipeline relies on deep learning and big data technology using the Spark framework. We compare it with pipelines proposed in the literature to establish its efficiency and importance. A big data architecture using computed tomography images for low-dose optimization is proposed; it relies on deep learning and allows us to develop effective and appropriate methods for dose optimization with computed tomography scan images. The realization of the image denoising pipeline shows that the radiation dose can be reduced and that the recommended pipeline improves the quality of the captured images.
67
Gong H, Ren L, Hsieh SS, McCollough CH, Yu L. Deep learning enabled ultra-fast-pitch acquisition in clinical X-ray computed tomography. Med Phys 2021; 48:5712-5726. [PMID: 34415068] [DOI: 10.1002/mp.15176]
Abstract
OBJECTIVE In X-ray computed tomography (CT), many important clinical applications may benefit from a fast acquisition speed. The helical scan is the most widely used acquisition mode in clinical CT, where a fast helical pitch can improve the acquisition speed. However, on a typical single-source helical CT (SSCT) system, the helical pitch p typically cannot exceed 1.5; otherwise, reconstruction artifacts will result from data insufficiency. The purpose of this work is to develop a deep convolutional neural network (CNN) to correct for artifacts caused by an ultra-fast pitch, which can enable faster acquisition speed than what is currently achievable. METHODS A customized CNN (denoted as ultra-fast-pitch network (UFP-net)) was developed to restore the underlying anatomical structure from artifact-corrupted post-reconstruction data acquired from SSCT with ultra-fast pitch (i.e., p ≥ 2). UFP-net employed residual learning to capture the features of image artifacts. UFP-net further deployed in-house-customized functional blocks with spatial-domain local operators and frequency-domain non-local operators to explore multi-scale feature representation. Images of contrast-enhanced patient exams (n = 83) with routine pitch settings (i.e., p < 1) were retrospectively collected and used as training and testing datasets. This patient cohort involved CT exams over different anatomical scan ranges (chest, abdomen, and pelvis) and CT systems (Siemens Definition, Definition Flash, and Definition AS+; Siemens Healthcare, Inc.), and the corresponding base CT scanning protocols used consistent settings of major scan parameters (e.g., collimation and pitch). Forward projection of the original images was calculated to synthesize helical CT scans with one regular pitch setting (p = 1) and two ultra-fast-pitch settings (p = 2 and 3). All patient images were reconstructed using the standard filtered-back-projection (FBP) algorithm. A customized multi-stage training scheme was developed to incrementally optimize the parameters of UFP-net, using ultra-fast-pitch images as network inputs and regular-pitch images as labels. Visual inspection was conducted to evaluate image quality. Structural similarity index (SSIM) and relative root-mean-square error (rRMSE) were used as quantitative quality metrics. RESULTS UFP-net dramatically improved image quality over standard FBP at both ultra-fast-pitch settings. At p = 2, UFP-net yielded a higher mean SSIM (> 0.98) with a lower mean rRMSE (< 2.9%), compared to FBP (mean SSIM < 0.93; mean rRMSE > 9.1%). At p = 3, UFP-net yielded mean SSIM in the range [0.86, 0.94] and mean rRMSE in [5.0%, 8.2%], versus FBP mean SSIM in [0.36, 0.61] and mean rRMSE in [36.0%, 58.6%]. CONCLUSION The proposed UFP-net has the potential to enable ultra-fast data acquisition in clinical CT without sacrificing image quality. This method demonstrated reasonable generalizability over different body parts when the corresponding CT exams involved consistent base scan parameters.
Affiliation(s)
- Hao Gong
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Liqiang Ren
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Scott S Hsieh
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Lifeng Yu
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA

68
Lu K, Ren L, Yin FF. A geometry-guided deep learning technique for CBCT reconstruction. Phys Med Biol 2021; 66. [PMID: 34261057] [DOI: 10.1088/1361-6560/ac145b]
Abstract
Purpose. Although the deep learning (DL) technique has been successfully used for computed tomography (CT) reconstruction, its implementation on cone-beam CT (CBCT) reconstruction is extremely challenging due to memory limitations. In this study, a novel DL technique is developed to resolve the memory issue, and its feasibility is demonstrated for CBCT reconstruction from sparsely sampled projection data. Methods. The novel geometry-guided deep learning (GDL) technique is composed of a GDL reconstruction module and a post-processing module. The GDL reconstruction module learns and performs the projection-to-image domain transformation by replacing the traditional single fully connected layer with an array of small fully connected layers in the network architecture, based on the projection geometry. The DL post-processing module further improves image quality after reconstruction. We demonstrated the feasibility and advantage of the model by comparing ground truth CBCT with CBCT images reconstructed using (1) the GDL reconstruction module only, (2) the GDL reconstruction module with the DL post-processing module, (3) Feldkamp, Davis, and Kress (FDK) only, (4) FDK with the DL post-processing module, (5) ray-tracing only, and (6) ray-tracing with the DL post-processing module. The differences are quantified by peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root-mean-square error (RMSE). Results. CBCT images reconstructed with GDL show improvements in the quantitative scores of PSNR, SSIM, and RMSE. Reconstruction time per image is comparable for all reconstruction methods. Compared to current DL methods using large fully connected layers, the estimated memory requirement of GDL is four orders of magnitude less, making DL CBCT reconstruction feasible. Conclusion. With a much lower memory requirement than other existing networks, the GDL technique is demonstrated to be the first DL technique that can rapidly and accurately reconstruct CBCT images from sparsely sampled data.
Affiliation(s)
- Ke Lu
- Medical Physics Graduate Program, Duke University, Durham, NC, USA; Department of Radiation Oncology, Duke University, Durham, NC, USA
- Lei Ren
- Medical Physics Graduate Program, Duke University, Durham, NC, USA; Department of Radiation Oncology, Duke University, Durham, NC, USA
- Fang-Fang Yin
- Medical Physics Graduate Program, Duke University, Durham, NC, USA; Department of Radiation Oncology, Duke University, Durham, NC, USA; Medical Physics Graduate Program, Duke Kunshan University, Kunshan, People's Republic of China

69
Pan H, Xiao D, Zhang F, Li X, Xu M. Adaptive weight matrix and phantom intensity learning for computed tomography of chemiluminescence. Opt Express 2021; 29:23682-23700. [PMID: 34614629] [DOI: 10.1364/oe.427459]
Abstract
The classic algebraic reconstruction technique (ART) for computed tomography requires pre-determined weights relating the voxels to the projected pixel values in order to build the system of equations. However, such weights cannot be obtained accurately in chemiluminescence measurements due to the high physical complexity and the computational resources required. Moreover, streaks arise in the results of the ART method, especially with imperfect projections. In this study, we propose a semi-case-wise learning-based method named Weight Encode Reconstruction Network (WERNet) to co-learn the target phantom intensities and the adaptive weight matrix of the case without labeling the target voxel set, which offers a more applicable solution for computed tomography problems. Both numerical and experimental validations were conducted to evaluate the algorithm. In the numerical test, with the help of gradient normalization, WERNet reconstructed the voxel set with high accuracy and showed a higher denoising capability than the classic ART methods. In the experimental test, WERNet produced results comparable to the ART method while performing better at avoiding streaks. Furthermore, with the adaptive weight matrix, WERNet is not sensitive to the ensemble intensity of the projections, showing much better robustness than the ART method.
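For context, the classic ART referenced above is the Kaczmarz row-action scheme: each measured projection value defines one linear equation, and the estimate is successively projected onto each equation's hyperplane. A minimal dense-matrix sketch of that standard formulation (not the WERNet method itself; the tiny system is illustrative):

```python
import numpy as np

def kaczmarz_art(A, b, n_sweeps=50, relax=1.0):
    """Classic ART (Kaczmarz): cycle through the rows of A, projecting
    the current estimate onto each hyperplane <a_i, x> = b_i."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue  # skip all-zero rows (no measurement)
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Tiny consistent system: the iterates converge to the exact solution.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true
x = kaczmarz_art(A, b, n_sweeps=200)
```

In real CTC problems the rows of A come from the (here hard-to-determine) voxel weights, which is exactly the quantity WERNet learns adaptively instead.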
70
Wang J, Li M, Cheng J, Guo Z, Li D, Wu S. Exact reconstruction condition for angle-limited computed tomography of chemiluminescence. Appl Opt 2021; 60:4273-4281. [PMID: 34143113] [DOI: 10.1364/ao.420223]
Abstract
Computed tomography of chemiluminescence (CTC) is an effective technique for three-dimensional (3D) combustion diagnostics. It reconstructs the 3D concentrations of intermediate species or 3D images of flame topology from multiple chemiluminescence projections captured from different perspectives. In previous studies of CTC systems, it was assumed that projections from arbitrary perspectives are available. However, in some practical applications, the range of view angles and the number of projections may be restricted by limited optical access, greatly affecting the reconstruction quality. In this paper, the exact reconstruction condition for angle-limited computed tomography of chemiluminescence was studied based on Mojette transform theories and demonstrated by numerical simulations and experiments. The studies indicate that an object tested within limited angles can be well reconstructed when the number of grids, the number of projections, and the sampling rate of the projections satisfy the exact reconstruction condition. By increasing the sampling rate of the projections, high-quality tomographic reconstruction can be achieved with a few projections in a small angle range. Although this technique is discussed in the context of combustion diagnostics, it can also be adapted for other tomography methods.
71
Hooper SM, Dunnmon JA, Lungren MP, Mastrodicasa D, Rubin DL, Ré C, Wang A, Patel BN. Impact of Upstream Medical Image Processing on Downstream Performance of a Head CT Triage Neural Network. Radiol Artif Intell 2021; 3:e200229. [PMID: 34350412] [PMCID: PMC8328108] [DOI: 10.1148/ryai.2021200229]
Abstract
PURPOSE To develop a convolutional neural network (CNN) to triage head CT (HCT) studies and investigate the effect of upstream medical image processing on the CNN's performance. MATERIALS AND METHODS A total of 9776 HCT studies were retrospectively collected from 2001 through 2014, and a CNN was trained to triage them as normal or abnormal, with 7856 CT studies in the training set, 936 in the validation set, and 984 in the test set. CNN performance was evaluated on the held-out test set, assessing triage performance and sensitivity to 20 disorders to assess differential model performance. This CNN was used to understand how the upstream imaging chain affects CNN performance by evaluating performance after altering three variables: image acquisition, by reducing the number of x-ray projections; image reconstruction, by inputting sinogram data into the CNN; and image preprocessing. To evaluate performance, the DeLong test was used to assess differences in the area under the receiver operating characteristic curve (AUROC), and the McNemar test was used to compare sensitivities. RESULTS The CNN achieved a mean AUROC of 0.84 (95% CI: 0.83, 0.84) in discriminating normal and abnormal HCT studies. The number of x-ray projections could be reduced by a factor of 16, and the raw sensor data could be input into the CNN, with no statistically significant difference in classification performance. Additionally, CT windowing consistently improved CNN performance, increasing the mean triage AUROC by 0.07. CONCLUSION A CNN was developed to triage HCT studies, which may help streamline image evaluation, and the means by which upstream image acquisition, reconstruction, and preprocessing affect downstream CNN performance were investigated, bringing focus to this important part of the imaging chain. Keywords: Head CT, Automated Triage, Deep Learning, Sinogram, Dataset. Supplemental material is available for this article. © RSNA, 2021.
72
Zhang Y. An unsupervised 2D-3D deformable registration network (2D3D-RegNet) for cone-beam CT estimation. Phys Med Biol 2021; 66. [PMID: 33631734] [DOI: 10.1088/1361-6560/abe9f6]
Abstract
Acquiring CBCTs over a limited scan angle can help to reduce the imaging time, save imaging dose, and allow continuous target localization through arc-based treatments with high temporal resolution. However, insufficient scan-angle sampling leads to severe distortions and artifacts in the reconstructed CBCT images, limiting their clinical applicability. 2D-3D deformable registration can map a prior fully sampled CT/CBCT volume to estimate a new CBCT based on limited-angle on-board cone-beam projections. CBCT images estimated by 2D-3D deformable registration can successfully suppress the distortions and artifacts and reflect up-to-date patient anatomy. However, the traditional iterative 2D-3D deformable registration algorithm is computationally expensive and time-consuming, taking hours to generate a high-quality deformation vector field (DVF) and the CBCT. In this work, we developed an unsupervised, end-to-end 2D-3D deformable registration framework using convolutional neural networks (2D3D-RegNet) to address the speed bottleneck of the conventional iterative 2D-3D deformable registration algorithm. 2D3D-RegNet was able to solve the DVFs within 5 seconds for 90 orthogonally arranged projections covering a combined 90° scan angle, with DVF accuracy superior to 3D-3D deformable registration and on par with the conventional 2D-3D deformable registration algorithm. We also performed a preliminary robustness analysis of 2D3D-RegNet with respect to projection angular sampling frequency variations and scan-angle offsets. The synergy of 2D3D-RegNet with biomechanical modeling was also evaluated, demonstrating that 2D3D-RegNet can function as a fast DVF solution core for further DVF refinement.
Affiliation(s)
- You Zhang
- Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75235, USA

73
Jiang Z, Yin FF, Ge Y, Ren L. Enhancing digital tomosynthesis (DTS) for lung radiotherapy guidance using patient-specific deep learning model. Phys Med Biol 2021; 66:035009. [PMID: 33238249] [DOI: 10.1088/1361-6560/abcde8]
Abstract
Digital tomosynthesis (DTS) has been proposed as a fast low-dose imaging technique for image-guided radiation therapy (IGRT). However, due to the limited scanning angle, DTS reconstructed by the conventional FDK method suffers from significant distortions and poor plane-to-plane resolution without full volumetric information, which severely limits its capability for image guidance. Although existing deep learning-based methods have shown the feasibility of restoring volumetric information in DTS, they ignore inter-patient variability by training the model on grouped patient data; consequently, the restored images still suffer from blurred and inaccurate edges. In this study, we present a DTS enhancement method based on a patient-specific deep learning model to recover the volumetric information in DTS images. The main idea is to use patient-specific prior knowledge to train the model to learn the patient-specific correlation between DTS and the ground truth volumetric images. To validate the performance of the proposed method, we enrolled both simulated and real on-board projections from lung cancer patient data. Results demonstrated the benefits of the proposed method: (1) qualitatively, DTS enhanced by the proposed method shows CT-like high image quality with accurate and clear edges; (2) quantitatively, the enhanced DTS has low intensity errors and high structural similarity with respect to the ground truth CT images; (3) in the tumor localization study, compared to the ground truth CT-CBCT registration, the enhanced DTS shows 3D localization errors of ≤0.7 mm and ≤1.6 mm for studies using simulated and real projections, respectively; and (4) the DTS enhancement is nearly real-time. Overall, the proposed method is effective and efficient in enhancing DTS, making it a valuable tool for IGRT applications.
Affiliation(s)
- Zhuoran Jiang
- School of Electronic Science and Engineering, Nanjing University, 163 Xianlin Road, Nanjing, Jiangsu, 210046, People's Republic of China; Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC 27710, USA
- Fang-Fang Yin
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC 27710, USA; Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA; Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, 215316, People's Republic of China
- Yun Ge
- School of Electronic Science and Engineering, Nanjing University, 163 Xianlin Road, Nanjing, Jiangsu, 210046, People's Republic of China
- Lei Ren
- Department of Radiation Oncology, Duke University Medical Center, DUMC Box 3295, Durham, NC 27710, USA; Medical Physics Graduate Program, Duke University, 2424 Erwin Road Suite 101, Durham, NC 27705, USA

74
Hu D, Liu J, Lv T, Zhao Q, Zhang Y, Quan G, Feng J, Chen Y, Luo L. Hybrid-Domain Neural Network Processing for Sparse-View CT Reconstruction. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.3011413]
75
Shi L, Qu G. A preconditioned Landweber iteration scheme for the limited-angle image reconstruction. J Xray Sci Technol 2021; 29:1045-1063. [PMID: 34542052] [DOI: 10.3233/xst-210936]
Abstract
BACKGROUND The limited-angle reconstruction problem is of both theoretical and practical importance. Due to the severe ill-posedness of the problem, it is very challenging to obtain a valid reconstruction from small limited-angle projection data. The theoretical ill-posedness causes the normal equation AᵀAx = Aᵀb of the linear system derived by discretizing the Radon transform to be severely ill-posed, which is quantified by the large condition number of AᵀA. OBJECTIVE To develop and test a new valid algorithm for improving limited-angle image reconstruction with an appropriately small angle range, from [0, π/3] to [0, π/2]. METHODS We propose a reweighting method for improving the condition number of AᵀAx = Aᵀb and the corresponding preconditioned Landweber iteration scheme. The weighting consists of multiplying AᵀAx = Aᵀb by a matrix related to AᵀA, and the weighting process is repeated multiple times. In the experiment, the condition number of the coefficient matrix in the reweighted linear system decreases monotonically to 1 as the number of weighting iterations approaches infinity. RESULTS The numerical experiments showed that the proposed algorithm is significantly superior to other iterative algorithms (Landweber, Cimmino, NWL-a, and AEDS) and can reconstruct a valid image from an appropriately small angle range. CONCLUSIONS The proposed algorithm is effective for the limited-angle reconstruction problem with an appropriately small angle range.
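As background for the scheme above, the unpreconditioned Landweber iteration for AᵀAx = Aᵀb is x_{k+1} = x_k + τ Aᵀ(b − A x_k), which converges for 0 < τ < 2/‖A‖₂² on consistent systems. A minimal numpy sketch of this baseline (the paper's reweighted preconditioner is not reproduced here; the small matrix is a toy stand-in for a discretized Radon transform):

```python
import numpy as np

def landweber(A, b, n_iter=500, tau=None):
    """Plain Landweber iteration: x <- x + tau * A^T (b - A x).
    Converges to a least-squares solution for 0 < tau < 2 / ||A||_2^2."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2  # spectral norm squared
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += tau * A.T @ (b - A @ x)
    return x

# Small consistent system for demonstration.
A = np.array([[3.0, 1.0], [1.0, 2.0], [0.0, 1.0]])
x_true = np.array([0.5, 1.5])
b = A @ x_true
x = landweber(A, b)
```

For ill-posed limited-angle systems the convergence is slow precisely because of the large condition number of AᵀA, which is what the paper's repeated reweighting is designed to reduce.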
Affiliation(s)
- Lei Shi
- School of Science, Beijing Jiaotong University, Beijing, China
- Gangrong Qu
- School of Science, Beijing Jiaotong University, Beijing, China

76
Podgorsak AR, Shiraz Bhurwani MM, Ionita CN. CT artifact correction for sparse and truncated projection data using generative adversarial networks. Med Phys 2020; 48:615-626. [PMID: 32996149] [DOI: 10.1002/mp.14504]
Abstract
PURPOSE Computed tomography image reconstruction using truncated or sparsely acquired projection data to reduce radiation dose, iodine volume, and patient motion artifacts has been widely investigated. To continue these efforts, we investigated machine learning-based reconstruction using deep convolutional generative adversarial networks (DCGANs) and evaluated its effect using standard imaging metrics. METHODS Ten thousand head computed tomography (CT) scans were collected from the 2019 RSNA Intracranial Hemorrhage Detection and Classification Challenge dataset. Sinograms were simulated and then resampled in both a one-third truncated and a one-third sparse manner. DCGANs were tasked with correcting the incomplete projection data, either in the sinogram domain, where the full sinogram was recovered by the DCGAN and then reconstructed, or in the reconstruction domain, where the incomplete data were first reconstructed and the sparse or truncation artifacts were corrected by the DCGAN. A total of 7500 images were used for network training and 2500 were withheld for network assessment, using mean absolute error (MAE), structural similarity index measure (SSIM), and peak signal-to-noise ratio (PSNR) between the results of the different correction techniques. Image data from a quality-assurance phantom were also resampled in the two manners, corrected, and reconstructed for network performance assessment using line profiles across high-contrast features, the modulation transfer function (MTF), the noise power spectrum (NPS), and Hounsfield unit (HU) linearity analysis. RESULTS Better agreement with the fully sampled reconstructions was achieved for the sparse acquisition corrected in the sinogram domain and the truncated acquisition corrected in the reconstruction domain. MAE, SSIM, and PSNR showed quantitative improvement from the DCGAN correction techniques. HU linearity of the reconstructions was maintained by the correction techniques for both the sparse and truncated acquisitions. MTF curves reached the 10% modulation cutoff frequency at 5.86 lp/cm for the truncated corrected reconstruction, compared with 2.98 lp/cm for the truncated uncorrected reconstruction, and at 5.36 lp/cm for the sparse corrected reconstruction, compared with around 2.91 lp/cm for the sparse uncorrected reconstruction. NPS analyses yielded better agreement across a range of frequencies between the resampled corrected phantom and truth reconstructions. CONCLUSIONS We demonstrated the use of DCGANs for CT image correction from sparse and truncated simulated projection data, while preserving the imaging quality of the fully sampled projection data.
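Two of the image-quality metrics used above, MAE and PSNR, have standard closed-form definitions. A small numpy sketch with a synthetic image pair (helper names and toy data are illustrative; SSIM is usually taken from a library such as scikit-image rather than hand-rolled):

```python
import numpy as np

def mae(ref, img):
    """Mean absolute error between a reference and a test image."""
    return float(np.mean(np.abs(ref - img)))

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB for the given dynamic range."""
    mse = np.mean((ref - img) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                      # "fully sampled" stand-in
noisy = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0.0, 1.0)
```

A corrected reconstruction that better matches the fully sampled image shows up as lower MAE and higher PSNR, which is how the DCGAN variants are compared in the study.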
Affiliation(s)
- Alexander R Podgorsak
- Canon Stroke and Vascular Research Center, 875 Ellicott Street, Buffalo, NY, 14203, USA; Medical Physics Program, State University of New York at Buffalo, 955 Main Street, Buffalo, NY, 14203, USA; Department of Biomedical Engineering, State University of New York at Buffalo, 200 Lee Road, Buffalo, NY, 14228, USA
- Mohammad Mahdi Shiraz Bhurwani
- Canon Stroke and Vascular Research Center, 875 Ellicott Street, Buffalo, NY, 14203, USA; Department of Biomedical Engineering, State University of New York at Buffalo, 200 Lee Road, Buffalo, NY, 14228, USA
- Ciprian N Ionita
- Canon Stroke and Vascular Research Center, 875 Ellicott Street, Buffalo, NY, 14203, USA; Medical Physics Program, State University of New York at Buffalo, 955 Main Street, Buffalo, NY, 14203, USA; Department of Biomedical Engineering, State University of New York at Buffalo, 200 Lee Road, Buffalo, NY, 14228, USA

77
Zheng A, Gao H, Zhang L, Xing Y. A dual-domain deep learning-based reconstruction method for fully 3D sparse data helical CT. Phys Med Biol 2020; 65:245030. [PMID: 32365345] [DOI: 10.1088/1361-6560/ab8fc1]
Abstract
Helical CT has been widely used in clinical diagnosis. In this work, we focus on a new prototype of helical CT equipped with a sparsely spaced multidetector and a multi-slit collimator (MSC) in the axial direction. This type of system can not only lower the radiation dose and suppress scattering via the MSC, but also cut down the manufacturing cost of the detector. The major problem to overcome with such a system, however, is insufficient data for reconstruction. Hence, we propose a deep learning-based function optimization method for this ill-posed inverse problem. By incorporating a Radon inverse operator and disentangling each slice, we significantly simplify the complexity of our network for 3D reconstruction. The network is composed of three subnetworks. First, a convolutional neural network (CNN) in the projection domain is constructed to estimate missing projection data and to convert helical projection data to 2D fan-beam projection data. This is followed by the deployment of an analytical linear operator to transfer the data from the projection domain to the image domain. Finally, an additional CNN in the image domain is added for further image refinement. These three steps work collectively and can be trained end to end. The overall network is trained on a simulated CT dataset based on eight patients from the American Association of Physicists in Medicine (AAPM) Low Dose CT Grand Challenge. We evaluate the trained network on both simulated and clinical datasets. Extensive experimental studies have yielded very encouraging results based on both visual examination and quantitative evaluation. These results demonstrate the effectiveness of our method and its potential for clinical usage. The proposed method provides a new solution for a fully 3D ill-posed problem.
Affiliation(s)
- Ao Zheng
- Department of Engineering Physics, Tsinghua University, Beijing 100084, People's Republic of China; Key Laboratory of Particle & Radiation Imaging, Tsinghua University, Ministry of Education, Beijing 100084, People's Republic of China

78

79
Zhang T, Zhang L, Chen Z, Xing Y, Gao H. Fourier Properties of Symmetric-Geometry Computed Tomography and Its Linogram Reconstruction With Neural Network. IEEE Trans Med Imaging 2020; 39:4445-4457. [PMID: 32866095] [DOI: 10.1109/tmi.2020.3020720]
Abstract
In this work, we investigate the Fourier properties of a symmetric-geometry computed tomography (SGCT) with linearly distributed source and detector in a stationary configuration. A linkage between the 1D Fourier Transform of a weighted projection from SGCT and the 2D Fourier Transform of a deformed object is established in a simple mathematical form (i.e., the Fourier slice theorem for SGCT). Based on its Fourier slice theorem and its unique data sampling in the Fourier space, a Linogram-based Fourier reconstruction method is derived for SGCT. We demonstrate that the entire Linogram reconstruction process can be embedded as known operators into an end-to-end neural network. As a learning-based approach, the proposed Linogram-Net has capability of improving CT image quality for non-ideal imaging scenarios, a limited-angle SGCT for instance, through combining weights learning in the projection domain and loss minimization in the image domain. Numerical simulations and physical experiments on an SGCT prototype platform showed that our proposed Linogram-based method can achieve accurate reconstruction from a dual-SGCT scan and can greatly reduce computational complexity when compared with the filtered backprojection type reconstruction. The Linogram-Net achieved accurate reconstruction when projection data are complete and significantly suppressed image artifacts from a limited-angle SGCT scan mimicked by using a clinical CT dataset, with the average CT number error in the selected regions of interest reduced from 67.7 Hounsfield Units (HU) to 28.7 HU, and the average normalized mean square error of overall images reduced from 4.21e-3 to 2.65e-3.
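The classical Fourier slice theorem, of which the SGCT result above is an analogue, is easy to verify numerically: the 1D Fourier transform of a parallel projection at angle zero equals the corresponding axis slice of the object's 2D Fourier transform. A minimal NumPy check for the standard parallel-beam case (the phantom and sizes are illustrative, not from the paper):

```python
import numpy as np

# Classical Fourier slice theorem (the SGCT paper derives an analogue of this):
# the 1D FT of the angle-0 parallel projection equals the ky = 0 slice
# of the object's 2D FT.
img = np.zeros((64, 64))
img[24:40, 20:44] = 1.0                  # a simple rectangular phantom

proj0 = img.sum(axis=0)                  # parallel projection along y (angle 0)
slice_2d = np.fft.fft2(img)[0, :]        # ky = 0 row of the 2D FT

print(np.allclose(np.fft.fft(proj0), slice_2d))  # True
```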
80
Peng C, Li B, Liang P, Zheng J, Zhang Y, Qiu B, Chen DZ. A Cross-Domain Metal Trace Restoring Network for Reducing X-Ray CT Metal Artifacts. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3831-3842. [PMID: 32746126 DOI: 10.1109/tmi.2020.3005432] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Metal artifacts commonly appear in computed tomography (CT) images of patients with metal implants and can affect disease diagnosis. Known deep learning and traditional metal trace restoration methods do not effectively restore detail and sinogram-consistency information in X-ray CT sinograms, hence often causing considerable secondary artifacts in CT images. In this paper, we propose a new cross-domain metal trace restoring network which promotes sinogram consistency while reducing metal artifacts and recovering tissue details in CT images. Our new approach includes a cross-domain procedure that ensures information exchange between the image domain and the sinogram domain in order to help them promote and complement each other. Under this cross-domain structure, we develop a hierarchical analytic network (HAN) to recover fine details of the metal trace, and utilize the perceptual loss to guide HAN to concentrate on the absorption of sinogram-consistency information of the metal trace. To allow our entire cross-domain network to be trained end-to-end efficiently and to reduce the graphics memory usage and time cost, we propose effective and differentiable forward projection (FP) and filtered back-projection (FBP) layers based on the FP and FBP algorithms. We use both simulated and clinical datasets in three different clinical scenarios to evaluate our proposed network's practicality and universality. Both quantitative and qualitative evaluation results show that our new network outperforms state-of-the-art metal artifact reduction methods. In addition, the elapsed-time analysis shows that our proposed method meets the clinical time requirement.
81
Wang B, Liu H. FBP-Net for direct reconstruction of dynamic PET images. Phys Med Biol 2020; 65. [PMID: 33049720 DOI: 10.1088/1361-6560/abc09d] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2020] [Accepted: 10/13/2020] [Indexed: 12/22/2022]
Abstract
Dynamic positron emission tomography (PET) imaging can provide information about metabolic changes over time, used for kinetic analysis and auxiliary diagnosis. Existing deep learning-based reconstruction methods have too many trainable parameters and poor generalization, and require massive amounts of data to train the neural network. However, obtaining large amounts of medical data is expensive and time-consuming. To reduce the need for data and improve the generalization of the network, we combined the filtered back-projection (FBP) algorithm with a neural network and proposed FBP-Net, which directly reconstructs PET images from sinograms instead of post-processing the rough reconstruction images obtained by traditional methods. The FBP-Net contains two parts: the FBP part and the denoiser part. The FBP part adaptively learns the frequency filter to realize the transformation from the detector domain to the image domain and normalizes the coarse reconstruction images obtained. The denoiser part merges the information of all time frames to improve the quality of dynamic PET reconstruction images, especially for the early time frames. The proposed FBP-Net was evaluated on simulated and real datasets, and the results were compared with the state-of-the-art U-net and DeepPET. The results showed that FBP-Net did not tend to overfit the training set and had stronger generalization.
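The FBP part described in this abstract, a per-projection frequency-domain filter followed by backprojection, can be sketched in NumPy. Here the filter is the fixed Ram-Lak ramp, whereas FBP-Net learns the frequency response; all function names and sizes below are illustrative, not from the paper:

```python
import numpy as np

def ramp_filter(sinogram):
    """Filter each projection row in the frequency domain (fixed Ram-Lak ramp;
    FBP-Net replaces this fixed response with learned weights)."""
    ramp = np.abs(np.fft.fftfreq(sinogram.shape[1]))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

def backproject(filtered, angles):
    """Smear each filtered projection back over the image grid (nearest neighbour)."""
    n = filtered.shape[1]
    recon = np.zeros((n, n))
    xs, ys = np.meshgrid(np.arange(n) - n / 2, np.arange(n) - n / 2)
    for proj, theta in zip(filtered, angles):
        t = xs * np.cos(theta) + ys * np.sin(theta) + n / 2
        recon += proj[np.clip(t.round().astype(int), 0, n - 1)]
    return recon * np.pi / (2 * len(angles))

angles = np.linspace(0, np.pi, 180, endpoint=False)
sino = np.ones((180, 64))    # toy sinogram, for shape/pipeline illustration only
image = backproject(ramp_filter(sino), angles)
```

A learned variant would replace the fixed `ramp` array with a trainable per-frequency weight vector, which is the essential idea behind the FBP part of FBP-Net.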
Affiliation(s)
- Bo Wang
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, 310027 Hangzhou, People's Republic of China
- Huafeng Liu
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, 310027 Hangzhou, People's Republic of China. Author to whom any correspondence should be addressed
82
Abstract
The immense growth of cloud infrastructure has led to the deployment of several machine learning as a service (MLaaS) offerings, in which the training and development of machine learning models are ultimately performed in the cloud provider's environment. However, this can also introduce potential security threats and privacy risks, as the deep learning algorithms need to access the generated data collections, which lack security by nature. This paper predominantly focuses on developing a secure deep learning system design, with the associated threat analysis, for smart farming technologies, which are attracting increasing attention as global food supply demands intensify. Smart farming is a combination of data-driven technology and agricultural applications that helps yield quality food products while enhancing crop yield. Nowadays, many use cases have been developed under the smart farming paradigm, with high impact on agricultural land.
83
Syben C, Stimpel B, Roser P, Dorfler A, Maier A. Known Operator Learning Enables Constrained Projection Geometry Conversion: Parallel to Cone-Beam for Hybrid MR/X-Ray Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3488-3498. [PMID: 32746099 DOI: 10.1109/tmi.2020.2998179] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
X-ray imaging is a wide-spread real-time imaging technique. Magnetic Resonance Imaging (MRI) offers a multitude of contrasts that provide improved guidance to interventionalists. As such, simultaneous real-time acquisition and overlay would be highly favorable for image-guided interventions, e.g., in stroke therapy. One major obstacle in this setting is the fundamentally different acquisition geometry: MRI k-space sampling is associated with parallel projection geometry, while X-ray acquisition results in perspectively distorted projections. The classical rebinning methods used to overcome this limitation inherently suffer from a loss of resolution. To counter this problem, we present a novel rebinning algorithm for parallel- to cone-beam conversion. We derive a rebinning formula that is then used to find an appropriate deep neural network architecture. Following the known operator learning paradigm, the novel algorithm is mapped to a neural network with differentiable projection operators, enabling data-driven learning of the remaining unknown operators. The evaluation proceeds in two directions: first, we give a profound analysis of the different hypotheses for the unknown operator and investigate the influence of numerical training data; second, we evaluate the performance of the proposed method against the classical rebinning approach. We demonstrate that the derived network achieves better results than the baseline method and that such operators can be trained with simulated data without losing their generality, making them applicable to real data without the need for retraining or transfer learning.
84
Gao C, Liu X, Gu W, Killeen B, Armand M, Taylor R, Unberath M. Generalizing Spatial Transformers to Projective Geometry with Applications to 2D/3D Registration. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2020; 12263:329-339. [PMID: 33135014 DOI: 10.1007/978-3-030-59716-0_32] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Differentiable rendering is a technique to connect 3D scenes with corresponding 2D images. Since it is differentiable, processes during image formation can be learned. Previous approaches to differentiable rendering focus on mesh-based representations of 3D scenes, which are inappropriate for medical applications where volumetric, voxelized models are used to represent anatomy. We propose a novel Projective Spatial Transformer module that generalizes spatial transformers to projective geometry, thus enabling differentiable volume rendering. We demonstrate the usefulness of this architecture on the example of 2D/3D registration between radiographs and CT scans. Specifically, we show that our transformer enables end-to-end learning of an image processing and projection model that approximates an image similarity function that is convex with respect to the pose parameters, and can thus be optimized effectively using conventional gradient descent. To the best of our knowledge, we are the first to describe spatial transformers in the context of projective transmission imaging, including rendering and pose estimation. We hope that our developments will benefit related 3D research applications. The source code is available at https://github.com/gaocong13/Projective-Spatial-Transformers.
Affiliation(s)
- Cong Gao
- Johns Hopkins University, Baltimore MD 21218, USA
- Xingtong Liu
- Johns Hopkins University, Baltimore MD 21218, USA
- Wenhao Gu
- Johns Hopkins University, Baltimore MD 21218, USA
85
Chen G, Zhao Y, Huang Q, Gao H. 4D-AirNet: a temporally-resolved CBCT slice reconstruction method synergizing analytical and iterative method with deep learning. Phys Med Biol 2020; 65:175020. [DOI: 10.1088/1361-6560/ab9f60] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
86
87
Zhang Q, Hu Z, Jiang C, Zheng H, Ge Y, Liang D. Artifact removal using a hybrid-domain convolutional neural network for limited-angle computed tomography imaging. Phys Med Biol 2020; 65:155010. [PMID: 32369793 DOI: 10.1088/1361-6560/ab9066] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
The suppression of streak artifacts in computed tomography with a limited-angle configuration is challenging. Conventional analytical algorithms, such as filtered backprojection (FBP), are not successful due to incomplete projection data. Moreover, model-based iterative total variation algorithms effectively reduce small streaks but do not work well at eliminating large streaks. In contrast, FBP mapping networks and deep-learning-based postprocessing networks are outstanding at removing large streak artifacts; however, these methods perform processing in separate domains, and the advantages of multiple deep learning algorithms operating in different domains have not been simultaneously explored. In this paper, we present a hybrid-domain convolutional neural network (hdNet) for the reduction of streak artifacts in limited-angle computed tomography. The network consists of three components: the first component is a convolutional neural network operating in the sinogram domain, the second is a domain transformation operation, and the last is a convolutional neural network operating in the CT image domain. After training the network, we can obtain artifact-suppressed CT images directly from the sinogram domain. Verification results based on numerical, experimental and clinical data confirm that the proposed method can significantly reduce serious artifacts.
Affiliation(s)
- Qiyang Zhang
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
88
Chen R, Wang M, Lai Y. Analysis of the role and robustness of artificial intelligence in commodity image recognition under deep learning neural network. PLoS One 2020; 15:e0235783. [PMID: 32634167 PMCID: PMC7340283 DOI: 10.1371/journal.pone.0235783] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2020] [Accepted: 06/22/2020] [Indexed: 12/23/2022] Open
Abstract
In order to explore the application of an image recognition model based on a multi-stage convolutional neural network (MS-CNN) to the intelligent recognition of commodity images, and to assess the method's recognition performance, this study first analyzes the color, shape, and texture features of commodity images and the basic structure of the deep convolutional neural network (CNN) model. Then, a set of 50,000 pictures containing different commodities is constructed to verify the recognition effect of the model. Finally, the MS-CNN model is taken as the research object for improvement, exploring the influence of label errors with different probabilities (p = 0.03, 0.05, 0.07, 0.09, 0.12) and different parameter settings (convolutional kernel size, dropout rate) on the recognition accuracy of the MS-CNN model. At the same time, a commodity image recognition (CIR) system platform based on the MS-CNN model is built, the recognition performance on salt-and-pepper noise images with different SNRs (0, 0.03, 0.05, 0.07, 0.1) is compared, and the performance of the algorithm in an actual image recognition test is evaluated. The results show that recognition accuracy is highest (97.8%) when the convolution kernel sizes in the MS-CNN model are 2*2 and 3*3, and average recognition accuracy is highest (97.8%) when the dropout rate is 0.1; when the error probability of the picture labels is 12%, the recognition accuracy of the model constructed in this study remains above 96%. Finally, the commodity image database constructed in this study is used to validate the model. The recognition accuracy of the proposed algorithm is significantly higher than that of the mini-batch stochastic gradient descent algorithm under different SNR conditions, and the recognition accuracy is highest when SNR = 0 (99.3%).
The test results show that the proposed model performs well in identifying commodity images under local occlusion, different perspectives, different backgrounds, and different light intensities, with a recognition accuracy of 97.1%. In summary, the CIR platform based on the MS-CNN model has high recognition accuracy and robustness, which can lay a foundation for subsequent intelligent commodity recognition technology.
Affiliation(s)
- Rui Chen
- School of Communications and Information Engineering, Xi’an University of Posts and Telecommunications, Xi’an, China
- Meiling Wang
- School of Communications and Information Engineering, Xi’an University of Posts and Telecommunications, Xi’an, China
- Yi Lai
- School of Communications and Information Engineering, Xi’an University of Posts and Telecommunications, Xi’an, China
89
Xie S, Huang W, Yang T, Wu D, Liu H. Compressed Sensing based Image Reconstruction with Projection Recovery for Limited Angle Cone-Beam CT Imaging. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:1307-1310. [PMID: 33018228 DOI: 10.1109/embc44109.2020.9175367] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
This paper presents a new 3D CT image reconstruction method for a limited-angle C-arm cone-beam CT imaging system, based on total-variation (TV) regularization in the image domain and an L1-penalty in the projection domain. This is motivated by the facts that CT images are sparse in the TV setting and that their projections have sinusoid-like forms, which are sparse in the discrete cosine transform (DCT) domain. Furthermore, the artifacts in the image domain are directional due to the limited-angle views, so an anisotropic TV is employed, and a reweighted L1-penalty in the projection domain is adopted to enhance sparsity. Hence, this paper applies the anisotropic TV-norm and reweighted L1-norm sparsity techniques to the limited-angle C-arm CT imaging system to enhance image quality in both the CT image and projection domains. Experimental results also show the efficiency of the proposed method.
Clinical Relevance: This new CT reconstruction approach provides high-quality images and projections for practicing clinicians.
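The two sparsity terms in this abstract can be made concrete: an anisotropic TV that weights horizontal and vertical differences separately (matching the directional artifacts), and reweighted-L1 weights on the DCT coefficients of a sinusoid-like projection trace. A minimal sketch with illustrative names, not the authors' implementation:

```python
import numpy as np
from scipy.fft import dct

def anisotropic_tv(img, wx=1.0, wy=1.0):
    """Anisotropic TV: separately weighted sums of |horizontal| and |vertical| differences."""
    return (wx * np.abs(np.diff(img, axis=1)).sum()
            + wy * np.abs(np.diff(img, axis=0)).sum())

def reweight(coeffs, eps=1e-3):
    """Reweighted-L1 weights: large coefficients are penalized less on the next pass."""
    return 1.0 / (np.abs(coeffs) + eps)

proj = np.sin(np.linspace(0, 2 * np.pi, 128))   # sinusoid-like projection trace
c = dct(proj, norm='ortho')                      # sparse in the DCT domain
w = reweight(c)
weighted_l1 = np.sum(w * np.abs(c))              # the reweighted L1-penalty value
```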
90
Wang Y, Yang T, Huang W. Limited-Angle Computed Tomography Reconstruction using Combined FDK-Based Neural Network and U-Net. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:1572-1575. [PMID: 33018293 DOI: 10.1109/embc44109.2020.9176040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
The limited-angle cone-beam computed tomography (CT) system is often used in C-arm devices for clinical diagnosis, with the advantages of low cost and radiation dose reduction. However, due to incomplete projection data, the 3-dimensional CT images reconstructed by conventional methods, such as the Feldkamp, Davis and Kress (FDK) algorithm [1], suffer from heavy artifacts and missing features. In this paper, we propose a novel pipeline that jointly combines an FDK-based neural network, revisited from Würfl et al.'s work [2], with an image-domain U-Net to enhance the 3-dimensional reconstruction quality for limited projection sinograms spanning less than 180 degrees, i.e. 145 degrees in our work. Experimental results on simulated projections of real-scan CTs show that the proposed pipeline can reduce some of the major artifacts caused by the limited views while keeping the key features, with a 16.60% improvement in peak signal-to-noise ratio over Würfl et al.'s work.
91
Abstract
The Radon transform is widely used in the physical and life sciences, and one of its major applications is medical X-ray computed tomography (CT), which plays a significant role in disease screening and diagnosis. In this paper, we propose a novel reconstruction framework for Radon inversion with deep learning (DL) techniques. For simplicity, the proposed framework is denoted as iRadonMAP, i.e., inverse Radon transform approximation. Specifically, we construct an interpretable neural network that contains three dedicated components. The first component is a fully connected filtering (FCF) layer along the rotation angle direction in the sinogram domain, and the second one is a sinusoidal back-projection (SBP) layer, which back-projects the filtered sinogram data into the spatial domain. Next, a common network structure is added to further improve the overall performance. iRadonMAP is first pretrained on a large number of generic images from the ImageNet database and then fine-tuned with clinical patient data. The experimental results demonstrate the feasibility of the proposed iRadonMAP framework for Radon inversion.
92
Chen G, Hong X, Ding Q, Zhang Y, Chen H, Fu S, Zhao Y, Zhang X, Ji H, Wang G, Huang Q, Gao H. AirNet: Fused analytical and iterative reconstruction with deep neural network regularization for sparse‐data CT. Med Phys 2020; 47:2916-2930. [DOI: 10.1002/mp.14170] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2019] [Revised: 03/26/2020] [Accepted: 03/28/2020] [Indexed: 11/06/2022] Open
Affiliation(s)
- Gaoyu Chen
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA 30322, USA
- Xiang Hong
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Qiaoqiao Ding
- Department of Mathematics, National University of Singapore, 119077, Singapore
- Yi Zhang
- College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, China
- Hu Chen
- College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, China
- Shujun Fu
- School of Mathematics, Shandong University, Jinan, Shandong 250100, China
- Yunsong Zhao
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA 30322, USA
- School of Mathematical Sciences, Capital Normal University, Beijing 100048, China
- Xiaoqun Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Hui Ji
- Department of Mathematics, National University of Singapore, 119077, Singapore
- Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Qiu Huang
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Hao Gao
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA 30322, USA
93
Huang Y, Wang S, Guan Y, Maier A. Limited angle tomography for transmission X-ray microscopy using deep learning. JOURNAL OF SYNCHROTRON RADIATION 2020; 27:477-485. [PMID: 32153288 PMCID: PMC7064107 DOI: 10.1107/s160057752000017x] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/11/2019] [Accepted: 01/07/2020] [Indexed: 05/25/2023]
Abstract
In transmission X-ray microscopy (TXM) systems, the rotation of a scanned sample might be restricted to a limited angular range to avoid collision with other system parts or high attenuation at certain tilting angles. Image reconstruction from such limited-angle data suffers from artifacts because of missing data. In this work, deep learning is applied to limited-angle reconstruction in TXMs for the first time. Given the challenge of obtaining sufficient real data for training, training a deep neural network from synthetic data is investigated. In particular, U-Net, the state-of-the-art neural network in biomedical imaging, is trained from synthetic ellipsoid data and multi-category data to reduce artifacts in filtered back-projection (FBP) reconstruction images. The proposed method is evaluated on synthetic data and real scanned chlorella data in 100° limited-angle tomography. For synthetic test data, U-Net significantly reduces the root-mean-square error (RMSE) from 2.55 × 10-3 µm-1 in the FBP reconstruction to 1.21 × 10-3 µm-1 in the U-Net reconstruction and also improves the structural similarity (SSIM) index from 0.625 to 0.920. With penalized weighted least-square denoising of measured projections, the RMSE and SSIM are further improved to 1.16 × 10-3 µm-1 and 0.932, respectively. For real test data, the proposed method remarkably improves the 3D visualization of the subcellular structures in the chlorella cell, which indicates its important value for nanoscale imaging in biology, nanoscience and materials science.
Affiliation(s)
- Yixing Huang
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Shengxiang Wang
- Spallation Neutron Source Science Center, Dongguan, Guangdong 523803, People’s Republic of China
- Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, People’s Republic of China
- Yong Guan
- National Synchrotron Radiation Laboratory, University of Science and Technology of China, Hefei, Anhui 230026, People’s Republic of China
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Erlangen Graduate School in Advanced Optical Technologies (SAOT), 91052 Erlangen, Germany
94
Tao X, Zhang H, Wang Y, Yan G, Zeng D, Chen W, Ma J. VVBP-Tensor in the FBP Algorithm: Its Properties and Application in Low-Dose CT Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:764-776. [PMID: 31425024 DOI: 10.1109/tmi.2019.2935187] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
For decades, commercial X-ray computed tomography (CT) scanners have been using the filtered backprojection (FBP) algorithm for image reconstruction. However, the desire for lower radiation doses has pushed the FBP algorithm to its limit. Previous studies have made significant efforts to improve the results of FBP through preprocessing the sinogram, modifying the ramp filter, or postprocessing the reconstructed images. In this paper, we focus on analyzing and processing the stacked view-by-view backprojections (named VVBP-Tensor) in the FBP algorithm. A key challenge for our analysis lies in the radial structures in each backprojection slice. To overcome this difficulty, a sorting operation was introduced to the VVBP-Tensor in its z direction (the direction of the projection views). The results show that, after sorting, the tensor contains structures that are similar to those of the object, and structures in different slices of the tensor are correlated. We then analyzed the properties of the VVBP-Tensor, including structural self-similarity, tensor sparsity, and noise statistics. Considering these properties, we have developed an algorithm using the tensor singular value decomposition (named VVBP-tSVD) to denoise the VVBP-Tensor for low-mAs CT imaging. Experiments were conducted using a physical phantom and clinical patient data with different mAs levels. The results demonstrate that the VVBP-tSVD is superior to all competing methods under different reconstruction schemes, including sinogram preprocessing, image postprocessing, and iterative reconstruction. We conclude that the VVBP-Tensor is a suitable processing target for improving the quality of FBP reconstruction, and the proposed VVBP-tSVD is an effective algorithm for noise reduction in low-mAs CT imaging. This preliminary work might provide a heuristic perspective for reviewing and rethinking the FBP algorithm.
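The central manipulation above, stacking view-by-view backprojections and sorting along the view (z) direction before low-rank denoising, can be sketched in NumPy. The truncated matrix SVD below is a simplified stand-in for the tensor SVD (t-SVD) actually used in the paper, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
vvbp = rng.normal(size=(60, 64, 64))   # stack of 60 single-view backprojections

# Sort along the view axis: after sorting, slices of the tensor share
# object-like structure and become correlated, which enables low-rank denoising.
sorted_tensor = np.sort(vvbp, axis=0)

def lowrank_denoise(tensor, rank):
    """Truncated matrix SVD on the unfolded tensor (a stand-in for t-SVD)."""
    v, h, w = tensor.shape
    U, s, Vt = np.linalg.svd(tensor.reshape(v, h * w), full_matrices=False)
    s[rank:] = 0.0                      # keep only the leading singular values
    return (U @ np.diag(s) @ Vt).reshape(v, h, w)

denoised = lowrank_denoise(sorted_tensor, rank=8)
```

The final image would then be obtained by summing the denoised backprojections over the view axis, as in ordinary FBP.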
95
Ge Y, Su T, Zhu J, Deng X, Zhang Q, Chen J, Hu Z, Zheng H, Liang D. ADAPTIVE-NET: deep computed tomography reconstruction network with analytical domain transformation knowledge. Quant Imaging Med Surg 2020; 10:415-427. [PMID: 32190567 DOI: 10.21037/qims.2019.12.12] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
Background: Recently, the paradigm of computed tomography (CT) reconstruction has shifted as the deep learning technique evolves. In this study, we proposed a new convolutional neural network (called ADAPTIVE-NET) to perform CT image reconstruction directly from a sinogram by integrating analytical domain transformation knowledge.
Methods: In the proposed ADAPTIVE-NET, a specific network layer with constant weights was customized to transform the sinogram onto the CT image domain via analytical back-projection. With this new framework, feature extraction was performed simultaneously on both the sinogram domain and the CT image domain. The Mayo low dose CT (LDCT) data was used to validate the new network. In particular, the new network was compared with the previously proposed residual encoder-decoder (RED)-CNN network. For each network, the mean square error (MSE) loss with and without VGG-based perceptual loss was compared. Furthermore, to evaluate the image quality with certain metrics, the noise correlation was quantified via the noise power spectrum (NPS) on the reconstructed LDCT for each method.
Results: CT images that have clinically relevant dimensions of 512×512 can be easily reconstructed from a sinogram on a single graphics processing unit (GPU) with moderate memory size (e.g., 11 GB) by ADAPTIVE-NET. With the same MSE loss function, the new network is able to generate better results than the RED-CNN. Moreover, the new network is able to reconstruct natural-looking CT images with enhanced image quality when jointly using the VGG loss.
Conclusions: The newly proposed end-to-end supervised ADAPTIVE-NET is able to reconstruct high-quality LDCT images directly from a sinogram.
Affiliation(s)
- Yongshuai Ge
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China
- Ting Su
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jiongtao Zhu
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xiaolei Deng
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Qiyang Zhang
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jianwei Chen
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Zhanli Hu
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China
- Hairong Zheng
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China
- Dong Liang
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China
|
96
|
Alla Takam C, Samba O, Tchagna Kouanou A, Tchiotsop D. Spark Architecture for deep learning-based dose optimization in medical imaging. INFORMATICS IN MEDICINE UNLOCKED 2020. [DOI: 10.1016/j.imu.2020.100335] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
|
97
|
Xie H, Shan H, Cong W, Liu C, Zhang X, Liu S, Ning R, Wang GE. Deep Efficient End-to-end Reconstruction (DEER) Network for Few-view Breast CT Image Reconstruction. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2020; 8:196633-196646. [PMID: 33251081 PMCID: PMC7695229 DOI: 10.1109/access.2020.3033795] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Breast CT provides image volumes with isotropic resolution and high contrast, enabling the detection of small calcifications (down to a few hundred microns in size) and subtle density differences. Since the breast is sensitive to x-ray radiation, dose reduction in breast CT is an important topic, and few-view scanning is a main approach for this purpose. In this article, we propose a Deep Efficient End-to-end Reconstruction (DEER) network for few-view breast CT image reconstruction. The major merits of our network include high dose efficiency, excellent image quality, and low model complexity. By design, the proposed network can learn the reconstruction process with as few as O(N) parameters, where N is the side length of the image to be reconstructed, representing an orders-of-magnitude improvement over state-of-the-art deep-learning-based reconstruction methods that map raw data directly to tomographic images. Validated on a cone-beam breast CT dataset prepared by Koning Corporation on a commercial scanner, our method demonstrates competitive performance against state-of-the-art reconstruction networks in terms of image quality. The source code of this paper is available at: https://github.com/HuidongXie/DEER.
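The O(N) parameter claim above can be made concrete with a quick back-of-the-envelope count. This sketch assumes, for illustration only, that the sinogram has roughly N views of N detector samples; the exact architecture in the paper differs, but the scaling contrast is the point: a dense sinogram-to-image mapping is O(N^4) in weights, while a learned 1-D filtration shared across views (with back-projection kept as a fixed operator) is O(N).

```python
N = 512  # side length of the reconstructed image

# A fully connected mapping from an ~N x N sinogram to an N x N image
# connects every output pixel to every sinogram sample.
dense_params = (N * N) * (N * N)   # O(N^4) weights

# Learning only a length-N 1-D filter, shared across all projection views,
# keeps the trainable parameter count linear in N.
filter_params = N                  # O(N) weights

print(dense_params, filter_params)  # 68719476736 512
```

At N = 512 the dense mapping would need roughly 6.9e10 weights, which is why direct data-to-image networks are hard to fit on a single GPU, whereas an O(N)-parameter design is trivially small.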
Affiliation(s)
- Huidong Xie
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY, USA
- Hongming Shan
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY, USA
- Wenxiang Cong
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY, USA
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Ruola Ning
- Koning Corporation, West Henrietta, NY, USA
- G E Wang
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY, USA
|
98
|
A deep learning approach for converting prompt gamma images to proton dose distributions: A Monte Carlo simulation study. Phys Med 2020; 69:110-119. [DOI: 10.1016/j.ejmp.2019.12.006] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/26/2019] [Revised: 11/11/2019] [Accepted: 12/05/2019] [Indexed: 11/20/2022] Open
|
99
|
Bastiaannet R, van der Velden S, Lam MGEH, Viergever MA, de Jong HWAM. Fast and accurate quantitative determination of the lung shunt fraction in hepatic radioembolization. Phys Med Biol 2019; 64:235002. [DOI: 10.1088/1361-6560/ab4e49] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
|
100
|
Syben C, Michen M, Stimpel B, Seitz S, Ploner S, Maier AK. Technical Note: PYRO-NN: Python reconstruction operators in neural networks. Med Phys 2019; 46:5110-5115. [PMID: 31389023 PMCID: PMC6899669 DOI: 10.1002/mp.13753] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Revised: 07/23/2019] [Accepted: 07/24/2019] [Indexed: 11/24/2022] Open
Abstract
PURPOSE Recently, several attempts have been made to transfer deep learning to medical image reconstruction. An increasing number of publications follow the concept of embedding the computed tomography (CT) reconstruction as a known operator into a neural network. However, most of the presented approaches lack an efficient CT reconstruction framework fully integrated into deep learning environments. As a result, many approaches use workarounds for problems that are mathematically unambiguously solvable. METHODS PYRO-NN is a generalized framework for embedding known operators into the prevalent deep learning framework Tensorflow. The current status includes state-of-the-art parallel-, fan-, and cone-beam projectors and back-projectors, accelerated with CUDA and provided as Tensorflow layers. On top, the framework provides a high-level Python API to conduct filtered back-projection (FBP) and iterative reconstruction experiments with data from real CT systems. RESULTS The framework provides all the algorithms and tools necessary to design end-to-end neural network pipelines with integrated CT reconstruction algorithms. The high-level Python API allows the layers to be used as simply as standard Tensorflow layers. All algorithms and tools are referenced to a scientific publication and compared to existing non-deep-learning reconstruction frameworks. To demonstrate the capabilities of the layers, the framework comes with baseline experiments, which are described in the supplementary material. The framework is available as open-source software under the Apache 2.0 license at https://github.com/csyben/PYRO-NN. CONCLUSIONS PYRO-NN builds on the prevalent deep learning framework Tensorflow and allows end-to-end trainable neural networks to be set up in the medical image reconstruction context. We believe the framework will be a step toward reproducible research and will give the medical physics community a toolkit for elevating medical image reconstruction with new deep learning techniques.
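For PYRO-NN's actual layer API, the linked repository should be consulted. As a library-agnostic sketch of the FBP pipeline that such frameworks expose as differentiable layers (ramp filtering of each projection followed by back-projection), a minimal NumPy parallel-beam version can be written as follows; the function names and toy sizes here are illustrative, not PYRO-NN's.

```python
import numpy as np

def ramp_filter(sinogram):
    # Frequency-domain ramp (Ram-Lak) filtering of each projection row.
    n = sinogram.shape[-1]
    ramp = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=-1) * ramp, axis=-1))

def backproject(sinogram, angles, size):
    # Naive pixel-driven parallel-beam back-projection onto a size x size grid.
    img = np.zeros((size, size))
    c = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    for row, theta in zip(sinogram, angles):
        # Detector coordinate of each pixel for this view (nearest-neighbor).
        t = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + (len(row) - 1) / 2.0
        idx = np.clip(np.round(t).astype(int), 0, len(row) - 1)
        img += row[idx]
    return img * np.pi / (2 * len(angles))

angles = np.linspace(0, np.pi, 90, endpoint=False)
sino = np.ones((90, 64))                       # dummy sinogram
recon = backproject(ramp_filter(sino), angles, 64)
print(recon.shape)                             # (64, 64)
```

In a known-operator framework, `backproject` would be implemented as a CUDA-accelerated layer with a defined gradient, so trainable layers (e.g., a learnable filter replacing `ramp_filter`) can be placed before or after it and trained end to end.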
Affiliation(s)
- Christopher Syben
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Markus Michen
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Bernhard Stimpel
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Stephan Seitz
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Stefan Ploner
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Andreas K. Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
|