201
Li Y, Li K, Zhang C, Montoya J, Chen GH. Learning to Reconstruct Computed Tomography Images Directly From Sinogram Data Under A Variety of Data Acquisition Conditions. IEEE Trans Med Imaging 2019; 38:2469-2481. [PMID: 30990179] [PMCID: PMC7962902] [DOI: 10.1109/tmi.2019.2910760]
Abstract
Computed tomography (CT) is widely used in medical diagnosis and non-destructive testing. Image reconstruction in CT aims to accurately recover pixel values from measured line integrals, i.e., the summed pixel values along straight lines. Provided that the acquired data satisfy the data sufficiency condition as well as other conditions regarding the view-angle sampling interval and the severity of transverse data truncation, researchers have discovered many solutions that accurately reconstruct the image. However, if these conditions are violated, accurate image reconstruction from line integrals remains an intellectual challenge. In this paper, a deep learning method with a common network architecture, termed iCT-Net, was developed and trained to accurately reconstruct images for previously solved and unsolved CT reconstruction problems with high quantitative accuracy. In particular, accurate reconstructions were achieved when the sparse-view reconstruction problem (i.e., the compressed sensing problem) is entangled with the classical interior tomography problem.
Affiliation(s)
- Yinsheng Li
- Department of Medical Physics at the University of Wisconsin-Madison
- Ke Li
- Department of Medical Physics at the University of Wisconsin-Madison
- Department of Radiology at the University of Wisconsin-Madison
- Chengzhu Zhang
- Department of Medical Physics at the University of Wisconsin-Madison
- Juan Montoya
- Department of Medical Physics at the University of Wisconsin-Madison
- Guang-Hong Chen
- Department of Medical Physics at the University of Wisconsin-Madison
- Department of Radiology at the University of Wisconsin-Madison
202
Bilgic B, Chatnuntawech I, Manhard MK, Tian Q, Liao C, Iyer SS, Cauley SF, Huang SY, Polimeni JR, Wald LL, Setsompop K. Highly accelerated multishot echo planar imaging through synergistic machine learning and joint reconstruction. Magn Reson Med 2019; 82:1343-1358. [PMID: 31106902] [PMCID: PMC6626584] [DOI: 10.1002/mrm.27813]
Abstract
PURPOSE To introduce a combined machine learning (ML)- and physics-based image reconstruction framework that enables navigator-free, highly accelerated multishot echo planar imaging (msEPI) and demonstrate its application in high-resolution structural and diffusion imaging. METHODS Single-shot EPI is an efficient encoding technique, but does not lend itself well to high-resolution imaging because of severe distortion artifacts and blurring. Although msEPI can mitigate these artifacts, high-quality msEPI has been elusive because of shot-to-shot phase variations, which preclude combining the multishot data into a single image. We utilize deep learning to obtain an interim image with minimal artifacts, which permits estimation of image phase variations attributed to shot-to-shot changes. These variations are then included in a joint virtual coil sensitivity encoding (JVC-SENSE) reconstruction to utilize data from all shots and improve upon the ML solution. RESULTS Our combined ML + physics approach enabled R_inplane × multiband (MB) = 8 × 2-fold acceleration using 2 EPI shots for multiecho imaging, so that whole-brain T2 and T2* parameter maps could be derived from an 8.3-second acquisition at 1 × 1 × 3 mm³ resolution. This has also allowed high-resolution diffusion imaging with high geometrical fidelity using 5 shots at R_inplane × MB = 9 × 2-fold acceleration. To make this possible, we extended the state-of-the-art MUSSELS reconstruction technique to simultaneous multislice encoding and used it as an input to our ML network. CONCLUSION Combination of ML and JVC-SENSE enabled navigator-free msEPI at higher accelerations than previously possible while using fewer shots, while avoiding the poor generalizability and limited acceptance of end-to-end ML approaches.
Affiliation(s)
- Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, MIT, Cambridge, MA, USA
- Itthi Chatnuntawech
- National Nanotechnology Center, National Science and Technology Development Agency, Pathum Thani, Thailand
- Mary Kate Manhard
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Qiyuan Tian
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Congyu Liao
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Siddharth S. Iyer
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Stephen F. Cauley
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Susie Y. Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, MIT, Cambridge, MA, USA
- Jonathan R. Polimeni
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, MIT, Cambridge, MA, USA
- Lawrence L. Wald
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, MIT, Cambridge, MA, USA
- Kawin Setsompop
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, MIT, Cambridge, MA, USA
203
Wu D, Kim K, Li Q. Computationally efficient deep neural network for computed tomography image reconstruction. Med Phys 2019; 46:4763-4776. [PMID: 31132144] [DOI: 10.1002/mp.13627]
Abstract
PURPOSE Deep neural network-based image reconstruction has demonstrated promising performance in medical imaging for undersampled and low-dose scenarios. However, it requires a large amount of memory and extensive time for training. Training reconstruction networks is especially challenging for three-dimensional computed tomography (CT) because of the high resolution of CT images. The purpose of this work is to reduce the memory and time consumption of training reconstruction networks for CT so that it is practical on current hardware, while maintaining the quality of the reconstructed images. METHODS We unrolled the proximal gradient descent algorithm for iterative image reconstruction to a finite number of iterations and replaced the terms related to the penalty function with trainable convolutional neural networks (CNNs). The network was trained greedily, iteration by iteration, in the image domain on patches, which requires a reasonable amount of memory and time on a mainstream graphics processing unit (GPU). To overcome the local-minimum problem caused by greedy learning, we used a deep U-Net as the CNN and incorporated a separable quadratic surrogate with ordered subsets for the data fidelity, so that the solution could escape shallow local minima and achieve better image quality. RESULTS The proposed method achieved image quality comparable to a state-of-the-art neural network for CT image reconstruction on two-dimensional (2D) sparse-view and limited-angle problems on the low-dose CT challenge dataset. The difference in root-mean-square error (RMSE) and structural similarity index (SSIM) was within [-0.23, 0.47] HU and [0, 0.001], respectively, at the 95% confidence level. For three-dimensional (3D) image reconstruction with an ordinary-size CT volume, the proposed method needed as little as 2 GB of GPU memory and 0.45 s per training iteration, whereas existing methods may require 417 GB and 31 min. The proposed method achieved improved performance compared to total variation- and dictionary learning-based iterative reconstruction for both 2D and 3D problems. CONCLUSIONS We proposed a training-time computationally efficient neural network for CT image reconstruction. The proposed method achieved image quality comparable to a state-of-the-art neural network for CT reconstruction, with significantly reduced memory and time requirements during training. The proposed method is applicable to 3D image reconstruction problems such as cone-beam CT and tomosynthesis on mainstream GPUs.
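The training scheme described above unrolls proximal gradient descent and swaps the proximal step for a learned network. A minimal NumPy sketch of one such unrolled iteration, assuming a toy random system matrix and a fixed soft-threshold denoiser as a stand-in for the trained, per-iteration U-Net:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model standing in for the CT system matrix
# (assumption: a small random matrix; the paper uses real projection geometry).
n_pix, n_meas = 32, 24
A = rng.standard_normal((n_meas, n_pix)) / np.sqrt(n_meas)
x_true = np.zeros(n_pix)
x_true[8:16] = 1.0                       # piecewise-constant "image"
y = A @ x_true                           # noiseless measurements for simplicity

def denoise(x, tau=0.02):
    """Stand-in for the trained CNN prior: soft-threshold small values.
    In the paper this step is a deep U-Net, trained greedily per iteration."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def unrolled_pgd(y, A, n_iters=200, alpha=0.2):
    """Unrolled proximal gradient descent: a gradient step on the data
    fidelity ||Ax - y||^2 followed by the (learned) proximal step."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x - alpha * A.T @ (A @ x - y)   # data-fidelity gradient step
        x = denoise(x)                      # prior / proximal step
    return x

x_hat = unrolled_pgd(y, A)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Each loop body corresponds to one unrolled "iteration block"; the paper's contribution is training each block's denoiser greedily on image patches so the whole pipeline fits in ordinary GPU memory.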
Affiliation(s)
- Dufan Wu
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Kyungsang Kim
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
204
Chen F, Cheng JY, Taviani V, Sheth VR, Brunsing RL, Pauly JM, Vasanawala SS. Data-driven self-calibration and reconstruction for non-cartesian wave-encoded single-shot fast spin echo using deep learning. J Magn Reson Imaging 2019; 51:841-853. [PMID: 31322799] [DOI: 10.1002/jmri.26871]
Abstract
BACKGROUND Current self-calibration and reconstruction methods for wave-encoded single-shot fast spin echo (SSFSE) imaging require long computation times, especially when high accuracy is needed. PURPOSE To develop and investigate the clinical feasibility of data-driven self-calibration and reconstruction of wave-encoded SSFSE imaging for computation time reduction and quality improvement. STUDY TYPE Prospective controlled clinical trial. SUBJECTS With Institutional Review Board approval, the proposed method was assessed on 29 consecutive adult patients (18 males, 11 females; age range, 24-77 years). FIELD STRENGTH/SEQUENCE A wave-encoded variable-density SSFSE sequence was developed for clinical 3.0T abdominal scans to enable 3.5× acceleration with full-Fourier acquisitions. Data-driven calibration of the wave-encoding point-spread function (PSF) was developed using a trained deep neural network. Data-driven reconstruction was developed with another set of neural networks based on the calibrated wave-encoding PSF. Training of the calibration and reconstruction networks was performed on 15,783 2D wave-encoded SSFSE abdominal images. ASSESSMENT Image quality of the proposed data-driven approach was compared independently and blindly with that of a conventional approach using iterative self-calibration and reconstruction with parallel imaging and compressed sensing by three radiologists on a scale from -2 to 2 for noise, contrast, sharpness, artifacts, and confidence. The computation times of the two approaches were also compared. STATISTICAL TESTS Wilcoxon signed-rank tests were used to compare image quality, and two-tailed t-tests were used to compare computation times, with P values under 0.05 considered statistically significant. RESULTS An average 2.1-fold computation speedup was achieved using the proposed method. The proposed data-driven self-calibration and reconstruction approach significantly reduced the perceived noise level (mean score 0.82, P < 0.0001).
DATA CONCLUSION The proposed data-driven calibration and reconstruction achieved roughly twofold faster computation with reduced perceived noise, providing fast and robust self-calibration and reconstruction for clinical abdominal SSFSE imaging. LEVEL OF EVIDENCE 1 Technical Efficacy: Stage 1 J. Magn. Reson. Imaging 2020;51:841-853.
Affiliation(s)
- Feiyu Chen
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Joseph Y Cheng
- Department of Radiology, Stanford University, Stanford, California, USA
- Valentina Taviani
- Global MR Applications and Workflow, GE Healthcare, Menlo Park, California, USA
- Vipul R Sheth
- Department of Radiology, Stanford University, Stanford, California, USA
- Ryan L Brunsing
- Department of Radiology, Stanford University, Stanford, California, USA
- John M Pauly
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
205
Liu J, Zhang Y, Zhao Q, Lv T, Wu W, Cai N, Quan G, Yang W, Chen Y, Luo L, Shu H, Coatrieux JL. Deep iterative reconstruction estimation (DIRE): approximate iterative reconstruction estimation for low dose CT imaging. Phys Med Biol 2019; 64:135007. [DOI: 10.1088/1361-6560/ab18db]
206
Gong K, Catana C, Qi J, Li Q. PET Image Reconstruction Using Deep Image Prior. IEEE Trans Med Imaging 2019; 38:1655-1665. [PMID: 30575530] [PMCID: PMC6584077] [DOI: 10.1109/tmi.2018.2888491]
Abstract
Recently, deep neural networks have been widely and successfully applied in computer vision tasks and have attracted growing interest in medical imaging. One barrier to the application of deep neural networks in medical imaging is the need for large amounts of prior training pairs, which is not always feasible in clinical practice. This is especially true for medical image reconstruction problems, where raw data are needed. Inspired by the deep image prior framework, in this paper we propose a personalized network training method where no prior training pairs are needed, only the patient's own prior information. The network is updated during the iterative reconstruction process using the patient-specific prior information and measured data. We formulated the maximum-likelihood estimation as a constrained optimization problem and solved it using the alternating direction method of multipliers (ADMM) algorithm. Magnetic resonance imaging guided positron emission tomography reconstruction was employed as an example to demonstrate the effectiveness of the proposed framework. Quantification results based on simulation and real data show that the proposed reconstruction framework can outperform Gaussian post-smoothing and anatomically guided reconstructions using the kernel method or the neural-network penalty.
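The core idea, that the structure of the representation, not any training data, acts as the prior, can be caricatured in one dimension. A loose NumPy sketch under stated assumptions: a smooth Gaussian basis stands in for the network architecture, and a plain least-squares fit stands in for the ADMM updates the paper uses:

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean 1D "image" and a noisy observation of it; no training pairs anywhere.
t = np.linspace(0, 1, 100)
clean = np.exp(-(t - 0.3) ** 2 / 0.01) + 0.5 * np.exp(-(t - 0.7) ** 2 / 0.02)
noisy = clean + 0.3 * rng.standard_normal(t.size)

# "Network": a smooth parameterization x = B @ w whose structure, not any
# training data, acts as the prior (a Gaussian basis standing in for the
# convolutional architecture the paper updates inside an ADMM loop).
centers = np.linspace(0, 1, 15)
B = np.exp(-(t[:, None] - centers[None, :]) ** 2 / 0.005)

# Fit the constrained representation to the noisy data alone.
w, *_ = np.linalg.lstsq(B, noisy, rcond=None)
recon = B @ w

err_noisy = np.linalg.norm(noisy - clean)
err_recon = np.linalg.norm(recon - clean)
print(err_recon < err_noisy)  # True: the structural prior suppresses noise
```

The fitted signal cannot represent most of the noise, so the reconstruction lands closer to the clean signal than the observation did, which is the deep-image-prior effect in miniature.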
207
Abstract
PET images often suffer from poor signal-to-noise ratio (SNR). Our objective is to improve the SNR of PET images using a deep neural network (DNN) model and MRI images, without requiring any higher-SNR PET images for training. Our proposed DNN model consists of three modified U-Nets (3U-net). The PET training inputs and targets were reconstructed using filtered backprojection (FBP) and maximum-likelihood expectation maximization (MLEM), respectively. FBP reconstruction was used because of its computational efficiency, so that the trained network not only removes noise but also accelerates image reconstruction. Digital brain phantoms downloaded from BrainWeb were used to evaluate the proposed method. Poisson noise was added to the sinogram data to simulate a 6 min brain PET scan. The attenuation effect was included and corrected before image reconstruction. Extra Poisson noise was introduced into the training inputs to improve the network's denoising capability. Three independent experiments were conducted to examine reproducibility. A lesion was inserted into the testing data to evaluate the impact of mismatched MRI information using the contrast-to-noise ratio (CNR). The negative impact on noise reduction was also studied when misregistration between the PET and MRI images occurs. Compared with a 1U-net trained with only PET images, training with PET/MRI decreased the mean squared error (MSE) by 31.3% and 34.0% for 1U-net and 3U-net, respectively. The MSE reduction is equivalent to increasing the count level 2.5-fold and 2.9-fold for 1U-net and 3U-net, respectively. Compared with the MLEM images, the lesion CNR was improved 2.7-fold and 1.4-fold for 1U-net and 3U-net, respectively. The results show that the proposed method can improve PET SNR without requiring higher-SNR PET images.
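The MLEM targets mentioned above come from the classical multiplicative update x ← x / (Aᵀ1) · Aᵀ(y / (Ax)). A compact NumPy sketch, with a random nonnegative matrix as a stand-in for the PET projector (an assumption; a real system matrix encodes the scanner geometry):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random nonnegative projector standing in for the PET system matrix
# (assumption; a real system matrix comes from the scanner geometry).
n_pix, n_bins = 16, 48
A = rng.uniform(0.0, 1.0, size=(n_bins, n_pix))
x_true = rng.uniform(0.5, 2.0, size=n_pix)
y = rng.poisson(A @ x_true).astype(float)      # noisy sinogram counts

def mlem(y, A, n_iters=100):
    """Maximum-likelihood expectation maximization for Poisson data:
       x <- x / (A^T 1) * A^T (y / (A x))."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image A^T 1
    for _ in range(n_iters):
        ratio = y / np.maximum(A @ x, 1e-12)   # measured / modeled counts
        x = x / sens * (A.T @ ratio)
    return x

x_hat = mlem(y, A)
# MLEM keeps the estimate nonnegative and, after every update, the
# sensitivity-weighted image sum equals the total measured counts.
```

That count-preservation invariant is a convenient unit test for any MLEM implementation, and the nonnegativity is what makes MLEM a better training target than unconstrained FBP.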
Affiliation(s)
- Chih-Chieh Liu
- Department of Biomedical Engineering, University of California, Davis, CA, United States of America
208
Zeng DY, Shaikh J, Holmes S, Brunsing RL, Pauly JM, Nishimura DG, Vasanawala SS, Cheng JY. Deep residual network for off-resonance artifact correction with application to pediatric body MRA with 3D cones. Magn Reson Med 2019; 82:1398-1411. [PMID: 31115936] [DOI: 10.1002/mrm.27825]
Abstract
PURPOSE To enable rapid imaging with a scan-time-efficient 3D cones trajectory using a deep-learning off-resonance artifact correction technique. METHODS A residual convolutional neural network to correct off-resonance artifacts (Off-ResNet) was trained with a prospective study of pediatric MRA exams. Each exam acquired a short-readout scan (1.18 ± 0.38 ms) and a long-readout scan (3.35 ± 0.74 ms) at 3 T. Short-readout scans, with longer scan times but negligible off-resonance blurring, were used as reference images and augmented with additional off-resonance for supervised training examples. Long-readout scans, with greater off-resonance artifacts but shorter scan times, were corrected by autofocus and by Off-ResNet and compared with short-readout scans by normalized RMS error, structural similarity index, and peak SNR. Scans were also compared by scoring on 8 anatomical features by two radiologists, using analysis of variance with post hoc Tukey's test and two one-sided t-tests. Reader agreement was determined with intraclass correlation. RESULTS The total scan time for long-readout scans was on average 59.3% shorter than for short-readout scans. Images from Off-ResNet had superior normalized RMS error, structural similarity index, and peak SNR compared with uncorrected images across ±1 kHz off-resonance (P < .01). The proposed method had superior normalized RMS error over -677 Hz to +1 kHz and superior structural similarity index and peak SNR over ±1 kHz compared with autofocus (P < .01). Radiologic scoring demonstrated that long-readout scans corrected with Off-ResNet were noninferior to short-readout scans (P < .05). CONCLUSION The proposed method can correct off-resonance artifacts from rapid long-readout 3D cones scans to an image quality noninferior to that of diagnostically standard short-readout scans.
Affiliation(s)
- David Y Zeng
- Department of Electrical Engineering, Stanford University, Stanford, California
- Jamil Shaikh
- Department of Radiology, Stanford University, Stanford, California
- Signy Holmes
- Department of Radiology, Stanford University, Stanford, California
- Ryan L Brunsing
- Department of Radiology, Stanford University, Stanford, California
- John M Pauly
- Department of Electrical Engineering, Stanford University, Stanford, California
- Dwight G Nishimura
- Department of Electrical Engineering, Stanford University, Stanford, California
- Joseph Y Cheng
- Department of Radiology, Stanford University, Stanford, California
209
A gentle introduction to deep learning in medical image processing. Z Med Phys 2019; 29:86-101. [DOI: 10.1016/j.zemedi.2018.12.003]
210
Aggarwal HK, Mani MP, Jacob M. Multi-Shot Sensitivity-Encoded Diffusion MRI Using Model-Based Deep Learning (MoDL-MUSSELS). Proc IEEE Int Symp Biomed Imaging 2019; 2019:1541-1544. [PMID: 33584974] [PMCID: PMC7879460] [DOI: 10.1109/isbi.2019.8759514]
Abstract
We propose a model-based deep learning architecture for the correction of phase errors in multishot diffusion-weighted echo-planar MRI images. This work is a generalization of MUSSELS, which is a structured low-rank algorithm. We show that an iterative reweighted least-squares implementation of MUSSELS resembles the model-based deep learning (MoDL) framework. We propose to replace the self-learned linear filter bank in MUSSELS with a convolutional neural network, whose parameters are learned from exemplary data. The proposed algorithm reduces the computational complexity of MUSSELS by several orders of magnitude, while providing comparable image quality.
211
Häggström I, Schmidtlein CR, Campanella G, Fuchs TJ. DeepPET: A deep encoder-decoder network for directly solving the PET image reconstruction inverse problem. Med Image Anal 2019; 54:253-262. [PMID: 30954852] [DOI: 10.1016/j.media.2019.03.013]
Abstract
The purpose of this research was to implement a deep learning network to overcome two of the major bottlenecks in improved image reconstruction for clinical positron emission tomography (PET): the lack of an automated means of optimizing advanced image reconstruction algorithms, and the computational expense associated with these state-of-the-art methods. We thus present a novel end-to-end PET image reconstruction technique, called DeepPET, based on a deep convolutional encoder-decoder network, which takes PET sinogram data as input and directly and quickly outputs high-quality, quantitative PET images. Using simulated data derived from a whole-body digital phantom, we randomly sampled the configurable parameters to generate realistic images, which were augmented to a total of more than 291,000 reference images. Realistic PET acquisitions of these images were simulated, resulting in noisy sinogram data used for training, validating, and testing the DeepPET network. We demonstrated that DeepPET generates higher-quality images than conventional techniques in terms of relative root mean squared error (11%/53% lower than ordered subset expectation maximization (OSEM)/filtered back-projection (FBP)), structural similarity index (1%/11% higher than OSEM/FBP), and peak signal-to-noise ratio (1.1/3.8 dB higher than OSEM/FBP). In addition, we show that DeepPET reconstructs images 108 and 3 times faster than OSEM and FBP, respectively. Finally, DeepPET was successfully applied to real clinical data. This study shows that an end-to-end encoder-decoder network can produce high-quality PET images in a fraction of the time required by conventional methods.
Affiliation(s)
- Ida Häggström
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- C Ross Schmidtlein
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Gabriele Campanella
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Department of Physiology and Biophysics, Weill Cornell Medicine, New York, NY 10065, United States
- Thomas J Fuchs
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Department of Physiology and Biophysics, Weill Cornell Medicine, New York, NY 10065, United States
212
Micieli D, Minniti T, Evans LM, Gorini G. Accelerating Neutron Tomography experiments through Artificial Neural Network based reconstruction. Sci Rep 2019; 9:2450. [PMID: 30792423] [PMCID: PMC6385317] [DOI: 10.1038/s41598-019-38903-1]
Abstract
Neutron Tomography (NT) is a non-destructive technique to investigate the inner structure of a wide range of objects and, in some cases, provides valuable results compared with the more common X-ray imaging techniques. However, NT is time consuming, and scanning a set of similar objects during a beamtime leads to data redundancy and long acquisition times. This currently makes NT unfeasible for quality-control studies of large quantities of similar objects. One way to decrease the total scan time is to reduce the number of projections. Analytical reconstruction methods are very fast but, under this condition, generate streaking artifacts in the reconstructed images. Iterative algorithms generally provide better reconstructions for limited-data problems, but at the expense of longer reconstruction times. In this study, we propose the recently introduced Neural Network Filtered Back-Projection (NN-FBP) method to optimize time usage in NT experiments. Simulated and real neutron data were used to assess the performance of the NN-FBP method as a function of the number of projections. For the first time, a machine-learning-based algorithm is applied and tested for the NT image reconstruction problem. We demonstrate that the NN-FBP method can reliably reduce acquisition and reconstruction times and that it outperforms conventional reconstruction methods used in NT, providing high image quality for limited datasets.
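NN-FBP keeps the filtered back-projection pipeline but learns the filters from data. For reference, the fixed frequency-domain ramp-filter step being replaced can be sketched as follows (assuming 1D parallel-beam projections):

```python
import numpy as np

def ramp_filter(proj):
    """Frequency-domain ramp filtering of one parallel-beam projection:
    the fixed filtering step of FBP that NN-FBP replaces with a bank of
    learned filters feeding a small neural network."""
    freqs = np.fft.fftfreq(proj.shape[-1])
    return np.real(np.fft.ifft(np.fft.fft(proj) * np.abs(freqs)))

# A constant projection is pure DC, which the ramp filter removes entirely.
flat = np.ones(64)
print(np.allclose(ramp_filter(flat), 0.0))  # True
```

Because filtering and back-projection are both linear, learning the filter (rather than a full image-domain network) keeps reconstruction nearly as fast as plain FBP, which is the point of the method for limited-projection NT.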
Affiliation(s)
- Davide Micieli
- Università della Calabria, Dipartimento di Fisica, Arcavacata di Rende (Cosenza), 87036, Italy
- Università degli Studi Milano-Bicocca, Dipartimento di Fisica "G. Occhialini", Milano, 20126, Italy
- Triestino Minniti
- STFC, Rutherford Appleton Laboratory, ISIS Facility, Harwell, United Kingdom
- Llion Marc Evans
- Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, Oxfordshire, United Kingdom
- College of Engineering, Swansea University, Bay Campus, Fabian Way, Swansea, United Kingdom
- Giuseppe Gorini
- Università degli Studi Milano-Bicocca, Dipartimento di Fisica "G. Occhialini", Milano, 20126, Italy
213
Deep Variational Networks with Exponential Weighting for Learning Computed Tomography. Lect Notes Comput Sci 2019. [DOI: 10.1007/978-3-030-32226-7_35]
214
Qin C, Schlemper J, Caballero J, Price AN, Hajnal JV, Rueckert D. Convolutional Recurrent Neural Networks for Dynamic MR Image Reconstruction. IEEE Trans Med Imaging 2019; 38:280-290. [PMID: 30080145] [DOI: 10.1109/tmi.2018.2863670]
Abstract
Accelerating the data acquisition of dynamic magnetic resonance imaging leads to a challenging ill-posed inverse problem, which has received great interest from both the signal processing and machine learning communities over the last decades. The key ingredient of the problem is how to exploit the temporal correlations of the MR sequence to resolve aliasing artifacts. Traditionally, such observations led to the formulation of an optimization problem solved using iterative algorithms. Recently, however, deep learning-based approaches have gained significant popularity because of their ability to solve general inverse problems. In this paper, we propose a novel convolutional recurrent neural network architecture which reconstructs high-quality cardiac MR images from highly undersampled k-space data by jointly exploiting the dependencies of the temporal sequences and the iterative nature of traditional optimization algorithms. In particular, the proposed architecture embeds the structure of traditional iterative algorithms, efficiently modeling the recurrence of the iterative reconstruction stages with recurrent hidden connections over the iterations. In addition, spatio-temporal dependencies are simultaneously learnt by exploiting bidirectional recurrent hidden connections across time sequences. The proposed method is able to learn both the temporal dependence and the iterative reconstruction process effectively with only a very small number of parameters, while outperforming current MR reconstruction methods in terms of reconstruction accuracy and speed.
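The idea of carrying a hidden state across reconstruction iterations can be caricatured in NumPy. Assumptions in this sketch: a row-selection matrix stands in for masked k-space sampling, and a fixed momentum-style linear recurrence stands in for the learned convolutional recurrent cell:

```python
import numpy as np

rng = np.random.default_rng(3)

# Row-selection matrix standing in for undersampled k-space acquisition
# (assumption; real dynamic MRI uses Fourier-domain undersampling).
n = 32
keep = rng.choice(n, size=20, replace=False)
A = np.eye(n)[keep]
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
y = A @ x_true

def recurrent_recon(y, A, n_iters=50, alpha=0.5, beta=0.5):
    """Iterative reconstruction whose update is carried by a hidden state
    shared across iterations; here the 'cell' is a fixed momentum-style
    linear recurrence, whereas the paper learns a convolutional recurrent
    unit over the unrolled iterations."""
    x = np.zeros(A.shape[1])
    h = np.zeros(A.shape[1])              # hidden state across iterations
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)          # data-consistency gradient
        h = beta * h - alpha * grad       # recurrent hidden update
        x = x + h
    return x

x_hat = recurrent_recon(y, A)
# Sampled entries are driven to the measured values; unsampled ones stay
# at the zero prior, since no learned spatio-temporal prior fills them in.
```

The hidden state `h` is the analogue of the recurrent connection over iterations; the paper's learned cell additionally propagates information across the time dimension of the cardiac sequence, which is what recovers the unsampled entries.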
215
Kang E, Koo HJ, Yang DH, Seo JB, Ye JC. Cycle-consistent adversarial denoising network for multiphase coronary CT angiography. Med Phys 2018; 46:550-562. [PMID: 30449055] [DOI: 10.1002/mp.13284]
Abstract
PURPOSE In multiphase coronary CT angiography (CTA), a series of CT images is taken at different levels of radiation dose during the examination. Although this reduces the total radiation dose, image quality during the low-dose phases is significantly degraded. Recently, deep neural network approaches based on supervised learning have demonstrated impressive performance improvements over conventional model-based iterative methods for low-dose CT. However, matched low- and routine-dose CT image pairs are difficult to obtain in multiphase CT. To address this problem, we aim at developing a new deep learning framework. METHODS We propose an unsupervised learning technique that can remove the noise of CT images in the low-dose phases by learning from CT images in the routine-dose phases. Although a supervised learning approach is not applicable because of differences in the underlying heart structure between the two phases, the images in the two phases are closely related, so we propose a cycle-consistent adversarial denoising network to learn the mapping between the low- and high-dose cardiac phases. RESULTS Experimental results showed that the proposed method effectively reduces the noise in the low-dose CT images while preserving detailed texture and edge information. Moreover, thanks to the cycle-consistency and identity losses, the proposed network does not create any artificial features that are not present in the input images. Visual grading and quality evaluation also confirm that the proposed method provides a significant improvement in diagnostic quality. CONCLUSIONS The proposed network can learn the image distributions from the routine-dose cardiac phases, which is a big advantage over existing supervised learning networks that need exactly matched low- and routine-dose CT images. Considering the effectiveness and practicability of the proposed method, we believe it can be applied to many other CT acquisition protocols.
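The cycle-consistency term that discourages the generators from inventing features can be written down directly. A sketch with toy invertible "generators" (an assumption; the paper trains CNN generators and adds adversarial and identity losses on top of this term):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy generators between the low-dose and routine-dose domains
# (assumption: simple affine maps; the paper trains CNN generators).
def G(x):                  # low-dose -> routine-dose
    return 2.0 * x + 1.0

def F(x):                  # routine-dose -> low-dose (exact inverse of G)
    return (x - 1.0) / 2.0

def cycle_loss(x_low, x_routine):
    """Cycle-consistency term of the objective,
       ||F(G(x)) - x||_1 + ||G(F(y)) - y||_1,
    which penalizes generators that invent features absent from the input."""
    return (np.abs(F(G(x_low)) - x_low).mean()
            + np.abs(G(F(x_routine)) - x_routine).mean())

x_low = rng.standard_normal(100)
x_routine = rng.standard_normal(100)
print(cycle_loss(x_low, x_routine) < 1e-12)  # True: exact inverses incur ~zero penalty
```

Unpaired training works because only the round trip is penalized: any hallucinated detail that cannot be mapped back to the input raises this loss, even though no matched low-/routine-dose pair ever appears in training.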
Affiliation(s)
- Eunhee Kang: Bio Imaging and Signal Processing Laboratory, Department of Bio and Brain Engineering, KAIST, Daejeon, Republic of Korea
- Hyun Jung Koo: Department of Radiology, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Dong Hyun Yang: Department of Radiology, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Joon Bum Seo: Department of Radiology, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jong Chul Ye: Bio Imaging and Signal Processing Laboratory, Department of Bio and Brain Engineering, KAIST, Daejeon, Republic of Korea
216
Improving Tomographic Reconstruction from Limited Data Using Mixed-Scale Dense Convolutional Neural Networks. J Imaging 2018. [DOI: 10.3390/jimaging4110128] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023] Open
Abstract
In many applications of tomography, the acquired data are limited in one or more ways due to unavoidable experimental constraints. In such cases, popular direct reconstruction algorithms tend to produce inaccurate images, and more accurate iterative algorithms often have prohibitively high computational costs. Using machine learning to improve the image quality of direct algorithms is a recently proposed alternative, for which promising results have been shown. However, previous attempts have focused on using encoder–decoder networks, which have several disadvantages when applied to large tomographic images, preventing wide application in practice. Here, we propose the use of the Mixed-Scale Dense convolutional neural network architecture, which was specifically designed to avoid these disadvantages, to improve tomographic reconstruction from limited data. Results are shown for various types of data limitations and object types, for both simulated data and large-scale real-world experimental data. The results are compared with popular tomographic reconstruction algorithms and machine learning algorithms, showing that Mixed-Scale Dense networks are able to significantly improve reconstruction quality even with severely limited data, and produce more accurate results than existing algorithms.
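The two ingredients of the Mixed-Scale Dense architecture named above, dilated convolutions that mix scales without downsampling and dense connections from every layer to all previous layers, can be sketched in 1-D with NumPy. The kernel size (3 taps), depth, dilation schedule, and random weights below are illustrative assumptions, not the trained network from the paper:

```python
import numpy as np

def dilated_conv1d(x, w, d):
    # 'same'-padded 3-tap convolution with dilation d
    n = len(x)
    xp = np.pad(x, d)
    return w[0] * xp[0:n] + w[1] * xp[d:d + n] + w[2] * xp[2 * d:2 * d + n]

def msd_forward(x, n_layers=4, dilations=(1, 2, 4, 8), seed=0):
    rng = np.random.default_rng(seed)
    feats = [x]  # dense connectivity: keep every channel ever computed
    for i in range(n_layers):
        d = dilations[i % len(dilations)]
        # the new channel mixes ALL previous channels, each through its
        # own dilated kernel, then applies a ReLU
        z = sum(dilated_conv1d(f, 0.1 * rng.standard_normal(3), d)
                for f in feats)
        feats.append(np.maximum(z, 0.0))
    # a final 1x1 mixing of all retained channels produces the output
    return sum(0.1 * f for f in feats)

y = msd_forward(np.ones(32))
print(y.shape)  # (32,)
```

Because no layer downsamples, every feature map keeps the input resolution, which is one reason the architecture avoids the disadvantages of encoder–decoder networks on large tomographic images.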
217
Hauptmann A, Lucka F, Betcke M, Huynh N, Adler J, Cox B, Beard P, Ourselin S, Arridge S. Model-Based Learning for Accelerated, Limited-View 3-D Photoacoustic Tomography. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:1382-1393. [PMID: 29870367 PMCID: PMC7613684 DOI: 10.1109/tmi.2018.2820382] [Citation(s) in RCA: 135] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Recent advances in deep learning for tomographic reconstruction have shown great potential to create accurate, high-quality images with a considerable speed-up. In this paper, we present a deep neural network that is specifically designed to provide high-resolution 3-D images from restricted photoacoustic measurements. The network is designed to represent an iterative scheme and incorporates gradient information of the data fit to compensate for limited-view artifacts. Due to the high complexity of the photoacoustic forward operator, we separate the training from the computation of the gradient information. A suitable prior for the desired image structures is learned as part of the training. The resulting network is trained and tested on a set of segmented vessels from lung computed tomography scans and then applied to in vivo photoacoustic measurement data.
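A minimal sketch of the unrolled idea: each iteration receives the current estimate together with the gradient of the data fit, A^T(Ax - y). Replacing the learned update G_theta with a plain gradient step (an assumption for illustration; the paper learns the update instead) reduces the scheme to classical Landweber iteration:

```python
import numpy as np

def data_fit_grad(A, x, y):
    # gradient of 0.5 * ||A x - y||^2, the quantity fed to each network step
    return A.T @ (A @ x - y)

def iterative_recon(A, y, n_iters=2000):
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe step size (< 2 / sigma_max^2)
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x - step * data_fit_grad(A, x, y)  # a learned G_theta would go here
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))   # toy forward operator, not a PAT model
x_true = rng.standard_normal(10)
x_hat = iterative_recon(A, A @ x_true)
print(np.allclose(x_hat, x_true, atol=1e-3))  # True
```

In the paper the expensive photoacoustic forward operator makes even this gradient costly, which is why its computation is separated from network training.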
Affiliation(s)
- Andreas Hauptmann: Department of Computer Science, University College London, London WC1E 6BT, U.K.
- Felix Lucka: Department of Computer Science, University College London, London WC1E 6BT, U.K.; also with Centrum Wiskunde & Informatica, 1098 XG Amsterdam, The Netherlands
- Marta Betcke: Department of Computer Science, University College London, London WC1E 6BT, U.K.
- Nam Huynh: Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, U.K.
- Jonas Adler: Department of Mathematics, KTH Royal Institute of Technology, 100 44 Stockholm, Sweden; also with Elekta, 103 93 Stockholm, Sweden
- Ben Cox: Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, U.K.
- Paul Beard: Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, U.K.
- Sebastien Ourselin: Department of Computer Science, University College London, London WC1E 6BT, U.K.
- Simon Arridge: Department of Computer Science, University College London, London WC1E 6BT, U.K.
218
Han Y, Ye JC. Framing U-Net via Deep Convolutional Framelets: Application to Sparse-View CT. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:1418-1429. [PMID: 29870370 DOI: 10.1109/tmi.2018.2823768] [Citation(s) in RCA: 212] [Impact Index Per Article: 30.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
X-ray computed tomography (CT) using sparse projection views is a recent approach to reducing the radiation dose. However, because of the insufficient projection views, an analytic reconstruction approach using filtered back projection (FBP) produces severe streaking artifacts. Recently, deep learning approaches using large-receptive-field neural networks such as U-Net have demonstrated impressive performance for sparse-view CT reconstruction, but theoretical justification is still lacking. Inspired by the recent theory of deep convolutional framelets, the main goal of this paper is therefore to reveal the limitations of U-Net and to propose new multi-resolution deep learning schemes. In particular, we show that U-Net variants such as the dual-frame and tight-frame U-Nets satisfy the so-called frame condition, which makes them more effective at recovering high-frequency edges in sparse-view CT. Through extensive experiments with a real patient data set, we demonstrate that the new network architectures provide better reconstruction performance.
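The frame condition invoked above can be illustrated with a one-level Haar split, the kind of low-/high-pass decomposition underlying tight-frame pooling: the analysis operator W satisfies W^T W = I, so energy (including the high-frequency edge content) is preserved and the signal is perfectly reconstructable. This NumPy sketch is a minimal illustration under that assumption, not the paper's network:

```python
import numpy as np

def haar_analysis(x):
    # one-level orthonormal Haar split of an even-length signal
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-pass branch
    det = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass branch (edges)
    return avg, det

def haar_synthesis(avg, det):
    # exact inverse of haar_analysis
    x = np.empty(2 * len(avg))
    x[0::2] = (avg + det) / np.sqrt(2)
    x[1::2] = (avg - det) / np.sqrt(2)
    return x

x = np.random.default_rng(1).random(16)
avg, det = haar_analysis(x)
# Tight frame: energy is preserved and reconstruction is perfect.
print(np.allclose(np.sum(avg**2) + np.sum(det**2), np.sum(x**2)))  # True
print(np.allclose(haar_synthesis(avg, det), x))                    # True
```

A plain max-pool/unpool pair, by contrast, discards the high-pass branch, which is the kind of limitation the paper attributes to the standard U-Net.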