101
Li Y, Li K, Zhang C, Montoya J, Chen GH. Learning to Reconstruct Computed Tomography Images Directly From Sinogram Data Under A Variety of Data Acquisition Conditions. IEEE Trans Med Imaging 2019; 38:2469-2481. [PMID: 30990179] [PMCID: PMC7962902] [DOI: 10.1109/tmi.2019.2910760] [Citation(s) in RCA: 81] [Impact Index Per Article: 13.5]
Abstract
Computed tomography (CT) is widely used in medical diagnosis and non-destructive detection. Image reconstruction in CT aims to accurately recover pixel values from measured line integrals, i.e., the summed pixel values along straight lines. Provided that the acquired data satisfy the data sufficiency condition as well as other conditions regarding the view angle sampling interval and the severity of transverse data truncation, researchers have discovered many solutions to accurately reconstruct the image. However, if these conditions are violated, accurate image reconstruction from line integrals remains an intellectual challenge. In this paper, a deep learning method with a common network architecture, termed iCT-Net, was developed and trained to accurately reconstruct images for previously solved and unsolved CT reconstruction problems with high quantitative accuracy. Particularly, accurate reconstructions were achieved for the case when the sparse view reconstruction problem (i.e., compressed sensing problem) is entangled with the classical interior tomographic problems.
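The abstract describes a single network that maps measured sinogram data directly to a reconstructed image. The toy PyTorch sketch below illustrates that general idea (learned detector-domain filtering, a learned backprojection layer, and image-domain refinement); it is not the published iCT-Net architecture, and the view/bin counts and layer sizes are arbitrary assumptions chosen to keep the example small.

```python
# Illustrative direct sinogram-to-image network, NOT the published iCT-Net.
import torch
import torch.nn as nn

class SinogramToImageNet(nn.Module):
    def __init__(self, n_views=60, n_bins=64, img_size=64):
        super().__init__()
        # Learned per-view filtering along the detector-bin axis.
        self.filtering = nn.Conv1d(n_views, n_views, kernel_size=9,
                                   padding=4, groups=n_views)
        # Learned "backprojection" from the filtered sinogram to image pixels.
        self.backproject = nn.Linear(n_views * n_bins, img_size * img_size)
        self.img_size = img_size
        # Image-domain refinement CNN.
        self.refine = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, sino):                     # sino: (batch, n_views, n_bins)
        x = self.filtering(sino)
        x = self.backproject(x.flatten(1))
        x = x.view(-1, 1, self.img_size, self.img_size)
        return self.refine(x)

model = SinogramToImageNet()
recon = model(torch.randn(2, 60, 64))            # -> (2, 1, 64, 64)
```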
Affiliation(s)
- Yinsheng Li
- Department of Medical Physics at the University of Wisconsin-Madison
- Ke Li
- Department of Medical Physics at the University of Wisconsin-Madison
- Department of Radiology at the University of Wisconsin-Madison
- Chengzhu Zhang
- Department of Medical Physics at the University of Wisconsin-Madison
- Juan Montoya
- Department of Medical Physics at the University of Wisconsin-Madison
- Guang-Hong Chen
- Department of Medical Physics at the University of Wisconsin-Madison
- Department of Radiology at the University of Wisconsin-Madison
102
Liu X, Huang Z, Wang Z, Wen C, Jiang Z, Yu Z, Liu J, Liu G, Huang X, Maier A, Ren Q, Lu Y. A deep learning based pipeline for optical coherence tomography angiography. J Biophotonics 2019; 12:e201900008. [PMID: 31168927] [DOI: 10.1002/jbio.201900008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Optical coherence tomography angiography (OCTA) is a relatively new imaging modality that generates microvasculature maps. Meanwhile, deep learning has recently attracted considerable attention in image-to-image translation tasks such as image denoising, super-resolution and prediction. In this paper, we propose a deep learning based pipeline for OCTA. This pipeline consists of three parts: training data preparation, model learning and OCTA prediction using the trained model. Notably, the datasets used in this work were automatically generated by a conventional system setup without any expert labeling. Promising results have been validated by in-vivo animal experiments, which demonstrate that deep learning is able to outperform traditional OCTA methods. Image quality is improved not only through a higher signal-to-noise ratio but also through better vasculature connectivity owing to laser speckle elimination, showing potential for clinical use. Schematic description of the deep learning based optical coherence tomography angiography pipeline.
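The pipeline's training targets are generated automatically by a conventional OCTA algorithm rather than by expert labeling. As a hedged illustration of what such label-free target generation can look like, the sketch below computes a simple inter-frame speckle-variance angiogram from repeated B-scans; the array shapes and the plain variance formula are assumptions for illustration, not the authors' exact processing chain.

```python
# Label-free OCTA target generation from repeated B-scans (illustrative only).
import numpy as np

def speckle_variance_octa(bscans: np.ndarray) -> np.ndarray:
    """bscans: (n_repeats, depth, width) intensity B-scans at one location."""
    return np.var(bscans, axis=0)            # flowing blood decorrelates speckle -> high variance

rng = np.random.default_rng(0)
repeats = rng.random((4, 512, 400))           # 4 repeated B-scans (toy data)
angio = speckle_variance_octa(repeats)        # (512, 400) angiography target
```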
Affiliation(s)
- Xi Liu
- Department of Biomedical Engineering, College of Engineering, Peking University, Beijing, China
- Zhiyu Huang
- Department of Biomedical Engineering, College of Engineering, Peking University, Beijing, China
- Zhenzhou Wang
- Department of Emergency Medicine, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Chenyao Wen
- Department of Biomedical Engineering, College of Engineering, Peking University, Beijing, China
- Zhe Jiang
- Department of Biomedical Engineering, College of Engineering, Peking University, Beijing, China
- Zekuan Yu
- Department of Biomedical Engineering, College of Engineering, Peking University, Beijing, China
- Jingfeng Liu
- Department of Emergency Medicine, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Gangjun Liu
- Shenzhen Graduate School, Peking University, Shenzhen, China
- Xiaolin Huang
- Institute of Image Processing and Pattern Recognition, Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Andreas Maier
- Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
- Qiushi Ren
- Department of Biomedical Engineering, College of Engineering, Peking University, Beijing, China
- Shenzhen Graduate School, Peking University, Shenzhen, China
- Yanye Lu
- Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
103
Xu J, Liu H. Three-dimensional convolutional neural networks for simultaneous dual-tracer PET imaging. Phys Med Biol 2019; 64:185016. [PMID: 31292287] [DOI: 10.1088/1361-6560/ab3103] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5]
Abstract
Dual-tracer positron emission tomography (PET) is a promising technique to measure the distribution of two tracers in the body with a single scan, which can improve the clinical accuracy of disease diagnosis and can also serve as a research tool for scientists. Most current research on dual-tracer PET reconstruction is based on mixed images pre-reconstructed by algorithms, which restricts further improvement of reconstruction precision. In this study, we present a hybrid loss-guided deep learning based framework for dual-tracer PET imaging using sinogram data, which achieves reconstruction by naturally unifying two functions: the reconstruction of the mixed images and the separation of the individual tracers. Combined with volumetric dual-tracer images, we adopted a three-dimensional (3D) convolutional neural network (CNN) to learn full features, including spatial and temporal information simultaneously. In addition, an auxiliary loss layer was introduced to guide the reconstruction of the dual tracers. We used Monte Carlo simulations with data augmentation to generate sufficient datasets for training and testing. The results were analyzed in terms of bias and variance both spatially (different regions of interest) and temporally (different frames). The analysis verified the feasibility of the 3D CNN framework for dual-tracer reconstruction. Furthermore, we compared the reconstruction results with a deep belief network (DBN), another deep learning based technique for the separation of dual-tracer images based on time-activity curves (TACs). The comparison provides insights into the superior features and performance of the 3D CNN. We also tested the [11C]FMZ-[11C]DTBZ images at three total-count levels ([Formula: see text], [Formula: see text], [Formula: see text]), which correspond to different noise levels. The analysis demonstrates that our method can successfully recover the respective distributions at lower total counts with nearly the same accuracy as at higher total counts within the total-count range we applied, which also indicates that the proposed 3D CNN framework is more robust to noise than the DBN.
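A hedged PyTorch sketch of the central architectural idea: a shared 3D CNN trunk with a main head for the two separated tracer volumes and an auxiliary head for the mixed reconstruction, whose losses are summed into a hybrid objective. The layer widths, the image-domain input (used here instead of sinogram data for simplicity), the loss weighting, and the random placeholder targets are illustrative assumptions, not the authors' network.

```python
# Toy 3D CNN with an auxiliary loss head for dual-tracer separation.
import torch
import torch.nn as nn

class DualTracer3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.mixed_head = nn.Conv3d(32, 1, 3, padding=1)   # auxiliary: mixed image
        self.tracer_head = nn.Conv3d(32, 2, 3, padding=1)  # main: two separated tracers

    def forward(self, x):                  # x: (batch, 1, D, H, W) volume
        feats = self.encoder(x)
        return self.mixed_head(feats), self.tracer_head(feats)

model = DualTracer3DCNN()
x = torch.randn(2, 1, 16, 32, 32)
mixed_pred, tracers_pred = model(x)
# Hybrid loss: main separation loss plus a weighted auxiliary reconstruction loss.
# Random tensors stand in for the ground-truth tracer and mixed volumes.
loss = nn.functional.mse_loss(tracers_pred, torch.randn_like(tracers_pred)) \
     + 0.5 * nn.functional.mse_loss(mixed_pred, torch.randn_like(mixed_pred))
loss.backward()
```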
Affiliation(s)
- Jinmin Xu
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, People's Republic of China
104
Unberath M, Zaech JN, Gao C, Bier B, Goldmann F, Lee SC, Fotouhi J, Taylor R, Armand M, Navab N. Enabling machine learning in X-ray-based procedures via realistic simulation of image formation. Int J Comput Assist Radiol Surg 2019; 14:1517-1528. [PMID: 31187399] [PMCID: PMC7297499] [DOI: 10.1007/s11548-019-02011-2] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7]
Abstract
PURPOSE: Machine learning-based approaches now outperform competing methods in most disciplines relevant to diagnostic radiology. Image-guided procedures, however, have not yet benefited substantially from the advent of deep learning, in particular because images for procedural guidance are not archived and thus unavailable for learning, and even if they were available, annotations would be a severe challenge due to the vast amounts of data. In silico simulation of X-ray images from 3D CT is an interesting alternative to using true clinical radiographs since labeling is comparably easy and potentially readily available.
METHODS: We extend our framework for fast and realistic simulation of fluoroscopy from high-resolution CT, called DeepDRR, with tool modeling capabilities. The framework is publicly available, open source, and tightly integrated with the software platforms native to deep learning, i.e., Python, PyTorch, and PyCuda. DeepDRR relies on machine learning for material decomposition and scatter estimation in 3D and 2D, respectively, but uses analytic forward projection and noise injection to ensure acceptable computation times. On two X-ray image analysis tasks, namely (1) anatomical landmark detection and (2) segmentation and localization of robot end-effectors, we demonstrate that convolutional neural networks (ConvNets) trained on DeepDRRs generalize well to real data without re-training or domain adaptation. To this end, we use the exact same training protocol to train ConvNets on naïve DRRs and on DeepDRRs and compare their performance on data of cadaveric specimens acquired using a clinical C-arm X-ray system.
RESULTS: Our findings are consistent across both considered tasks. All ConvNets performed similarly well when evaluated on the respective synthetic testing set. However, when applied to real radiographs of cadaveric anatomy, ConvNets trained on DeepDRRs significantly outperformed ConvNets trained on naïve DRRs ([Formula: see text]).
CONCLUSION: Our findings for both tasks are positive and promising. Combined with complementary approaches, such as image style transfer, the proposed framework for fast and realistic simulation of fluoroscopy from CT contributes to promoting the implementation of machine learning in X-ray-guided procedures. This paradigm shift has the potential to revolutionize intra-operative image analysis and to simplify surgical workflows.
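The core simulation recipe mentioned in the methods, analytic forward projection followed by noise injection, can be illustrated in a few lines of NumPy. The sketch below is not the DeepDRR API (which additionally performs learned material decomposition and scatter estimation on GPU); the parallel-beam geometry, single-energy attenuation, and photon count are simplified assumptions for illustration only.

```python
# Simplified digitally reconstructed radiograph (DRR) with Poisson noise.
import numpy as np

def simple_drr(mu_volume: np.ndarray, voxel_size_mm: float = 1.0,
               photons_per_pixel: float = 1e5, rng=None) -> np.ndarray:
    """mu_volume: (Z, Y, X) linear attenuation coefficients in 1/mm."""
    if rng is None:
        rng = np.random.default_rng()
    line_integrals = mu_volume.sum(axis=0) * voxel_size_mm     # parallel rays along Z
    expected = photons_per_pixel * np.exp(-line_integrals)      # Beer-Lambert law
    detected = rng.poisson(expected)                            # quantum noise injection
    return -np.log(np.maximum(detected, 1) / photons_per_pixel) # log-normalized projection

mu = np.full((128, 200, 200), 0.02)        # toy water-like attenuation volume
drr = simple_drr(mu)                        # (200, 200) noisy synthetic radiograph
```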
Affiliation(s)
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Jan-Nico Zaech
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Cong Gao
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Bastian Bier
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Florian Goldmann
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Sing Chun Lee
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Javad Fotouhi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
- Russell Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Mehran Armand
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA
- Nassir Navab
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Laboratory for Computational Sensing + Robotics, Johns Hopkins University, Baltimore, MD, USA
- Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
105
Maier A, Syben C, Lasser T, Riess C. A gentle introduction to deep learning in medical image processing. Z Med Phys 2019; 29:86-101. [DOI: 10.1016/j.zemedi.2018.12.003] [Citation(s) in RCA: 229] [Impact Index Per Article: 38.2]
106
Micieli D, Minniti T, Evans LM, Gorini G. Accelerating Neutron Tomography experiments through Artificial Neural Network based reconstruction. Sci Rep 2019; 9:2450. [PMID: 30792423] [PMCID: PMC6385317] [DOI: 10.1038/s41598-019-38903-1] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8]
Abstract
Neutron Tomography (NT) is a non-destructive technique for investigating the inner structure of a wide range of objects and, in some cases, provides valuable results in comparison to the more common X-ray imaging techniques. However, NT is time consuming, and scanning a set of similar objects during a beamtime leads to data redundancy and long acquisition times. Nowadays NT is unfeasible for quality-checking studies of large quantities of similar objects. One way to decrease the total scan time is to reduce the number of projections. Analytical reconstruction methods are very fast but, under this condition, generate streaking artifacts in the reconstructed images. Iterative algorithms generally provide better reconstructions for limited-data problems, but at the expense of longer reconstruction times. In this study, we propose the recently introduced Neural Network Filtered Back-Projection (NN-FBP) method to optimize the time usage in NT experiments. Simulated and real neutron data were used to assess the performance of the NN-FBP method as a function of the number of projections. For the first time, a machine learning based algorithm is applied and tested for the NT image reconstruction problem. We demonstrate that the NN-FBP method can reliably reduce acquisition and reconstruction times and that it outperforms conventional reconstruction methods used in NT, providing high image quality for limited datasets.
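NN-FBP keeps the speed of analytical FBP by learning a small bank of reconstruction filters and combining the resulting backprojections pixel-wise with a tiny neural network. The sketch below, assuming SciPy and scikit-image are available, shows that structure with random stand-in weights; in the actual method the filters and combination weights are learned from training data, so the parameterization here is an illustrative assumption.

```python
# Structure of an NN-FBP style reconstruction with placeholder (unlearned) weights.
import numpy as np
from scipy.ndimage import convolve1d
from skimage.transform import iradon

def nn_fbp_sketch(sino, theta, filters, w_hid, b_hid, w_out, b_out):
    """sino: (n_bins, n_angles); filters: (K, taps); w_hid, b_hid, w_out: (K,)."""
    hidden = []
    for h, w, b in zip(filters, w_hid, b_hid):
        filtered = convolve1d(sino, h, axis=0)                    # learned detector filter
        recon = iradon(filtered, theta=theta, filter_name=None)   # plain backprojection
        hidden.append(1.0 / (1.0 + np.exp(-(w * recon + b))))     # sigmoid hidden unit
    hidden = np.stack(hidden)                                     # (K, H, W)
    return 1.0 / (1.0 + np.exp(-(np.tensordot(w_out, hidden, axes=1) + b_out)))

theta = np.linspace(0.0, 180.0, 30, endpoint=False)   # only 30 projections
sino = np.random.rand(128, 30)                          # toy limited-data sinogram
K = 4                                                   # small filter bank
recon = nn_fbp_sketch(sino, theta,
                      filters=0.1 * np.random.randn(K, 17),
                      w_hid=np.ones(K), b_hid=np.zeros(K),
                      w_out=np.ones(K) / K, b_out=0.0)
```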
Affiliation(s)
- Davide Micieli
- Università della Calabria, Dipartimento di Fisica, Arcavacata di Rende (Cosenza), 87036, Italy
- Università degli Studi Milano-Bicocca, Dipartimento di Fisica "G. Occhialini", Milano, 20126, Italy
- Triestino Minniti
- STFC, Rutherford Appleton Laboratory, ISIS Facility, Harwell, United Kingdom
- Llion Marc Evans
- Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, Oxfordshire, United Kingdom
- College of Engineering, Swansea University, Bay Campus, Fabian Way, Swansea, United Kingdom
- Giuseppe Gorini
- Università degli Studi Milano-Bicocca, Dipartimento di Fisica "G. Occhialini", Milano, 20126, Italy
107
Deep Variational Networks with Exponential Weighting for Learning Computed Tomography. Lecture Notes in Computer Science 2019. [DOI: 10.1007/978-3-030-32226-7_35] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7]
108
Wu X, He P, Long Z, Guo X, Chen M, Ren X, Chen P, Deng L, An K, Li P, Wei B, Feng P. Multi-material decomposition of spectral CT images via Fully Convolutional DenseNets. J Xray Sci Technol 2019; 27:461-471. [PMID: 31177260] [DOI: 10.3233/xst-190500] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3]
Abstract
BACKGROUND: Spectral computed tomography (CT) has the capability to resolve the energy levels of incident photons, which has the potential to distinguish different material compositions. Although material decomposition methods based on x-ray attenuation characteristics perform well in dual-energy CT imaging, they have limitations in terms of image contrast and noise levels.
OBJECTIVE: This study focused on multi-material decomposition of spectral CT images based on a deep learning approach.
METHODS: To classify and quantify different materials, we proposed a multi-material decomposition method via improved Fully Convolutional DenseNets (FC-DenseNets). A mouse specimen was first scanned by a spectral CT system based on a photon-counting detector with different energy ranges. We then constructed a training set from the reconstructed CT images for deep learning to decompose different materials.
RESULTS: Experimental results demonstrated that the proposed multi-material decomposition method could identify bone, lung and soft tissue more effectively than basis material decomposition in the post-reconstruction space at high noise levels.
CONCLUSIONS: The proposed approach yielded good performance on spectral CT material decomposition and could establish guidelines for multi-material decomposition approaches based on deep learning algorithms.
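A minimal PyTorch sketch of the building block behind an FC-DenseNet style decomposition network: a dense block followed by a per-pixel classification head. The three output classes (bone, lung, soft tissue) follow the abstract; the growth rate, depth, number of energy-bin input channels, and everything else are illustrative assumptions rather than the authors' improved FC-DenseNet.

```python
# Dense block + 1x1 classification head for per-pixel material labels.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth=12, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_ch + i * growth), nn.ReLU(),
                nn.Conv2d(in_ch + i * growth, growth, 3, padding=1)))

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)   # dense connectivity
        return x

n_energy_bins, n_materials = 5, 3                  # spectral CT channels -> 3 materials
block = DenseBlock(in_ch=n_energy_bins)
head = nn.Conv2d(n_energy_bins + 4 * 12, n_materials, 1)
logits = head(block(torch.randn(1, n_energy_bins, 64, 64)))   # (1, 3, 64, 64)
```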
Affiliation(s)
- Xiaochuan Wu
- The Key Lab of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Peng He
- The Key Lab of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- ICT NDT Engineering Research Center, Ministry of Education, Chongqing University, Chongqing, China
- Zourong Long
- The Key Lab of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Xiaodong Guo
- The Key Lab of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Mianyi Chen
- The Key Lab of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Xuezhi Ren
- The Key Lab of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Peijun Chen
- The Key Lab of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Luzhen Deng
- The Key Lab of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Kang An
- ICT NDT Engineering Research Center, Ministry of Education, Chongqing University, Chongqing, China
- Pengcheng Li
- The Key Lab of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Biao Wei
- The Key Lab of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- ICT NDT Engineering Research Center, Ministry of Education, Chongqing University, Chongqing, China
- Peng Feng
- The Key Lab of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing, China
- Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- ICT NDT Engineering Research Center, Ministry of Education, Chongqing University, Chongqing, China
109
Ben Yedder H, Shokoufi M, Cardoen B, Golnaraghi F, Hamarneh G. Limited-Angle Diffuse Optical Tomography Image Reconstruction Using Deep Learning. Lecture Notes in Computer Science 2019. [DOI: 10.1007/978-3-030-32239-7_8] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0]
110
Huang Y, Preuhs A, Lauritsch G, Manhart M, Huang X, Maier A. Data Consistent Artifact Reduction for Limited Angle Tomography with Deep Learning Prior. Machine Learning for Medical Image Reconstruction 2019. [DOI: 10.1007/978-3-030-33843-5_10] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7]
111
Maier J, Eulig E, Vöth T, Knaup M, Kuntz J, Sawall S, Kachelrieß M. Real-time scatter estimation for medical CT using the deep scatter estimation: Method and robustness analysis with respect to different anatomies, dose levels, tube voltages, and data truncation. Med Phys 2018; 46:238-249. [DOI: 10.1002/mp.13274] [Citation(s) in RCA: 42] [Impact Index Per Article: 6.0]
Affiliation(s)
- Joscha Maier
- German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Department of Physics and Astronomy, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 226, 69120 Heidelberg, Germany
- Elias Eulig
- German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Department of Physics and Astronomy, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 226, 69120 Heidelberg, Germany
- Tim Vöth
- German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Department of Physics and Astronomy, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 226, 69120 Heidelberg, Germany
- Michael Knaup
- German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Jan Kuntz
- German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Stefan Sawall
- German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Medical Faculty, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
- Marc Kachelrieß
- German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Medical Faculty, Ruprecht-Karls-University Heidelberg, Im Neuenheimer Feld 672, 69120 Heidelberg, Germany
112
Jia X, Liao Y, Zeng D, Zhang H, Zhang Y, He J, Bian Z, Wang Y, Tao X, Liang Z, Huang J, Ma J. Statistical CT reconstruction using region-aware texture preserving regularization learning from prior normal-dose CT image. Phys Med Biol 2018; 63:225020. [PMID: 30457116] [PMCID: PMC6309620] [DOI: 10.1088/1361-6560/aaebc9] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3]
Abstract
In some clinical applications, prior normal-dose CT (NdCT) images are available, and the valuable texture and structure features in them may be used to promote follow-up low-dose CT (LdCT) reconstruction. This study aims to learn texture information from the NdCT images and leverage it for follow-up LdCT image reconstruction to preserve textures and structure features. Specifically, the proposed reconstruction method first learns texture information from those patches with similar structures in the NdCT image, where similar patches can be clustered efficiently by searching context features in the surroundings of the current patch. It then utilizes redundant texture information from the similar patches as a priori knowledge to describe specific regions in the LdCT image. The advanced region-aware texture preserving prior is termed 'RATP'. The main advantage of the RATP prior is that it can properly learn the texture features from available NdCT images and adaptively characterize the region-specific structures in the LdCT image. Experiments using patient data were performed to evaluate the performance of the proposed method. The proposed RATP method demonstrated superior performance in LdCT imaging compared to the filtered back projection (FBP) and statistical iterative reconstruction (SIR) methods using Gaussian regularization, Huber regularization and the original texture preserving regularization.
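The key preprocessing step described above is grouping NdCT patches with similar structures so their shared texture can serve as a reference for corresponding regions of the LdCT image. A minimal sketch of that clustering step, assuming scikit-learn is available and using crude mean/std context features, a 7x7 patch size, and 20 clusters as illustrative assumptions rather than the paper's actual feature design and regularizer:

```python
# Cluster NdCT patches by simple context features to obtain texture references.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.image import extract_patches_2d

ndct = np.random.rand(128, 128)                     # stand-in prior NdCT slice
patches = extract_patches_2d(ndct, (7, 7), max_patches=2000, random_state=0)
flat = patches.reshape(len(patches), -1)
context = np.c_[flat.mean(1), flat.std(1), flat]    # crude context features + intensities
labels = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(context)
texture_refs = np.stack([patches[labels == k].mean(0) for k in range(20)])  # (20, 7, 7)
```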
Affiliation(s)
- Xiao Jia
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- School of Software Engineering, Nanyang Normal University, Nanyang, Henan 473061, People’s Republic of China
- Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Yuting Liao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Dong Zeng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Hao Zhang
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, United States of America
- Yuanke Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Ji He
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Zhaoying Bian
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Yongbo Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Xi Tao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Zhengrong Liang
- Department of Radiology and Biomedical Engineering, State University of New York at Stony Brook, NY 11794, United States of America
- Jing Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Jianhua Ma
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
- Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangzhou, Guangdong 510515, People’s Republic of China
113
Huang Y, Lu Y, Taubmann O, Lauritsch G, Maier A. Traditional machine learning for limited angle tomography. Int J Comput Assist Radiol Surg 2018; 14:11-19. [DOI: 10.1007/s11548-018-1851-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4]
114
Huang Y, Würfl T, Breininger K, Liu L, Lauritsch G, Maier A. Some Investigations on Robustness of Deep Learning in Limited Angle Tomography. Medical Image Computing and Computer Assisted Intervention – MICCAI 2018, 2018. [DOI: 10.1007/978-3-030-00928-1_17] [Citation(s) in RCA: 67] [Impact Index Per Article: 9.6]