101. Zhang C, Li Y, Chen GH. Accurate and robust sparse-view angle CT image reconstruction using deep learning and prior image constrained compressed sensing (DL-PICCS). Med Phys 2021;48:5765-5781. PMID: 34458996; DOI: 10.1002/mp.15183.
Abstract
BACKGROUND Sparse-view CT image reconstruction problems encountered in dynamic CT acquisitions are technically challenging. Recently, many deep learning strategies have been proposed to reconstruct CT images from sparse-view acquisitions with promising results. However, two fundamental problems with these deep learning reconstruction methods remain to be addressed: (1) limited reconstruction accuracy for individual patients and (2) limited generalizability across patient cohorts. PURPOSE The purpose of this work is to address these challenges in current deep learning methods. METHODS A method that combines a deep learning strategy with prior image constrained compressed sensing (PICCS) was developed to address these two problems. In this method, the sparse-view CT data were first reconstructed by conventional filtered backprojection (FBP) and then processed by a trained deep neural network to eliminate streaking artifacts. The output of the deep learning architecture was then used as the prior image in PICCS to reconstruct the final image. If the noise level of the PICCS reconstruction is not satisfactory, a light-duty deep neural network can be applied to reduce it further. Both extensive numerical simulation data and human subject data were used to quantitatively and qualitatively assess the performance of the proposed DL-PICCS method in terms of reconstruction accuracy and generalizability.
RESULTS Extensive evaluation studies demonstrated that: (1) the quantitative reconstruction accuracy of DL-PICCS for individual patients is improved compared with deep learning methods and CS-based methods; (2) the false-positive lesion-like structures and false-negative missing anatomical structures seen in deep learning approaches are effectively eliminated in DL-PICCS reconstructions; and (3) DL-PICCS enables a deep learning scheme to relax its working conditions and thereby enhance its generalizability. CONCLUSIONS DL-PICCS offers a promising opportunity to achieve personalized reconstruction with improved accuracy and enhanced generalizability.
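The PICCS step described above admits a compact sketch. The following toy 1-D example is not the authors' implementation: the sampling operator, weights, and step sizes are illustrative assumptions, and a noisy copy of the truth stands in for the deep-learning prior. It minimizes a penalized PICCS objective, data fidelity plus lam*(alpha*TV(x - x_prior) + (1 - alpha)*TV(x)), by gradient descent:

```python
import numpy as np

def tv(z, eps=1e-6):
    """Smoothed 1-D total variation."""
    return np.sum(np.sqrt(np.diff(z) ** 2 + eps))

def grad_tv(z, eps=1e-6):
    """Gradient of the smoothed TV term."""
    d = np.diff(z)
    g = d / np.sqrt(d ** 2 + eps)
    out = np.zeros_like(z)
    out[:-1] -= g   # each difference d_i = z[i+1] - z[i]
    out[1:] += g
    return out

def piccs(A, y, x_prior, alpha=0.5, lam=0.05, lr=0.05, iters=400):
    """Penalized PICCS: 0.5||Ax-y||^2 + lam*(alpha*TV(x-x_prior) + (1-alpha)*TV(x))."""
    x = x_prior.copy()
    for _ in range(iters):
        g = A.T @ (A @ x - y)
        g += lam * (alpha * grad_tv(x - x_prior) + (1 - alpha) * grad_tv(x))
        x -= lr * g
    return x

rng = np.random.default_rng(0)
x_true = np.concatenate([np.zeros(16), np.ones(16), 0.5 * np.ones(16)])
A = np.eye(48)[rng.choice(48, size=24, replace=False)]   # toy "sparse-view" sampling
y = A @ x_true                                           # incomplete measurements
x_prior = x_true + 0.05 * rng.standard_normal(48)        # stands in for the DL output
x_hat = piccs(A, y, x_prior)
```

The alpha parameter plays the same balancing role as in PICCS proper: it trades agreement with the prior image against plain sparsity of the reconstruction itself.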
Affiliation(s)
- Chengzhu Zhang
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Yinsheng Li
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
- Guang-Hong Chen
- Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA; Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
102. Yu L, Zhang Z, Li X, Ren H, Zhao W, Xing L. Metal artifact reduction in 2D CT images with self-supervised cross-domain learning. Phys Med Biol 2021;66. PMID: 34330119; DOI: 10.1088/1361-6560/ac195c.
Abstract
The presence of metallic implants often introduces severe metal artifacts in x-ray computed tomography (CT) images, which can adversely influence clinical diagnosis or dose calculation in radiation therapy. In this work, we present a novel deep-learning-based approach for metal artifact reduction (MAR). To alleviate the need for anatomically identical CT image pairs (i.e., metal artifact-corrupted and metal artifact-free CT images) for network learning, we propose a self-supervised cross-domain learning framework. Specifically, we train a neural network to restore the values in the metal trace region of a given metal-free sinogram, where the metal trace is identified by forward projection of metal masks. We then design a novel filtered backprojection (FBP) reconstruction loss to encourage the network to generate more faithful completion results, and a residual-learning-based image refinement module to reduce secondary artifacts in the reconstructed CT images. To preserve fine structural details and the fidelity of the final MAR image, instead of directly adopting the convolutional neural network (CNN)-refined image as output, we incorporate metal trace replacement into our framework: the metal-affected projections of the original sinogram are replaced with the prior sinogram generated by forward projection of the CNN output, and the FBP algorithm is then used for final MAR image reconstruction. We conduct an extensive evaluation on simulated and real artifact data to show the effectiveness of our design. Our method produces superior MAR results and outperforms other competing methods. We also demonstrate the potential of our framework for other organ sites.
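The metal trace replacement step at the heart of this pipeline is simple to state in code. A minimal sketch follows; the array shapes and values are illustrative, the prior sinogram would come from forward-projecting the CNN output, and the trace mask from forward-projecting the metal mask:

```python
import numpy as np

def replace_metal_trace(sino_raw, sino_prior, metal_trace):
    """Keep measured data outside the metal trace; inside it, substitute the
    prior sinogram (forward projection of the CNN-refined image)."""
    return np.where(metal_trace, sino_prior, sino_raw)

# Toy sinograms: 4 views x 6 detector bins.
sino_raw = np.arange(24, dtype=float).reshape(4, 6)
sino_prior = np.full((4, 6), -1.0)            # stands in for FP(CNN output)
metal_trace = np.zeros((4, 6), dtype=bool)
metal_trace[:, 2:4] = True                    # bins shadowed by the metal

sino_mar = replace_metal_trace(sino_raw, sino_prior, metal_trace)
# sino_mar would then be passed to FBP for the final reconstruction.
```

The design choice here mirrors the abstract's argument: measured projections are trusted wherever metal did not corrupt them, so the network only ever fills the gap it was trained to fill.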
Affiliation(s)
- Lequan Yu
- Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China; Department of Radiation Oncology, Stanford University, United States of America
- Zhicheng Zhang
- Department of Radiation Oncology, Stanford University, United States of America
- Xiaomeng Li
- Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Hong Kong, China; Department of Radiation Oncology, Stanford University, United States of America
- Hongyi Ren
- Department of Radiation Oncology, Stanford University, United States of America
- Wei Zhao
- Department of Radiation Oncology, Stanford University, United States of America
- Lei Xing
- Department of Radiation Oncology, Stanford University, United States of America
103. Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021;11:1373. PMID: 34441307; PMCID: PMC8393354; DOI: 10.3390/diagnostics11081373.
Abstract
The growing volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes demands ever more of the physician's time and attention, which has encouraged the development of deep learning (DL) models as constructive and effective support. DL has developed exponentially in recent years, with a major impact on the interpretation of medical images, and has in turn driven improvements in the quality of scientific data, in knowledge-construction methods, and in the DL models used in medical applications. Existing research papers describe, highlight, or classify individual constituent elements of the DL models used in medical image interpretation, but do not provide a unified picture of the importance and impact of each constituent on model performance. The novelty of our paper lies primarily in its unified treatment of the constituent elements of DL models, namely the data, the tools used by DL architectures, and purpose-built combinations of DL architectures, and in highlighting their "key" features for completing tasks in current medical image interpretation applications. Using the "key" characteristics specific to each constituent of DL models and correctly determining their correlations may be the subject of future research aimed at increasing the performance of DL models in the interpretation of medical images.
Affiliation(s)
- Tudor Florin Ursuleanu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
- Andreea Roxana Luca
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
- Liliana Gheorghe
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Roxana Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Stefan Iancu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Maria Hlusneac
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Cristina Preda
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Alexandru Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
104. Zhang Z, Liang X, Zhao W, Xing L. Noise2Context: Context-assisted learning 3D thin-layer for low-dose CT. Med Phys 2021;48:5794-5803. PMID: 34287948; DOI: 10.1002/mp.15119.
Abstract
PURPOSE Computed tomography (CT) plays a vital role in medical diagnosis, assessment, and therapy planning. In clinical practice, concerns about increased x-ray radiation exposure are attracting more and more attention. To lower the radiation dose, low-dose CT (LDCT) has been widely adopted in certain scenarios, although it degrades CT image quality. In this paper, we propose a deep learning-based method that can train denoising neural networks without any clean data. METHODS For 3D thin-slice LDCT scanning, we first derive an unsupervised loss function that is equivalent to a supervised loss function with paired noisy and clean samples when the noise in different slices of a single scan is uncorrelated and zero-mean. We then train the denoising neural network to map each noisy LDCT slice to its two adjacent slices in the same 3D thin-layer LDCT scan simultaneously. In essence, under these assumptions, we propose an unsupervised loss function that exploits the similarity between adjacent CT slices in 3D thin-layer LDCT to train the denoising neural network in an unsupervised manner. RESULTS Experiments on the Mayo LDCT dataset and a realistic pig head were carried out. On the Mayo LDCT dataset, our unsupervised method obtained performance comparable to that of the supervised baseline. On the realistic pig head, our method achieved the best performance at different noise levels compared with all the other methods, demonstrating the superiority and robustness of the proposed Noise2Context. CONCLUSIONS We present a generalizable LDCT image denoising method that requires no clean data, dispensing not only with hand-crafted image priors but also with large paired high-quality training datasets.
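The loss described above can be sketched directly: the denoised middle slice is scored against its two noisy neighbours rather than a clean target. A minimal numpy version follows, with a toy volume and a box-filter "denoiser" standing in for the network; the slice geometry and noise level are illustrative assumptions:

```python
import numpy as np

def noise2context_loss(denoise, volume):
    """Score the denoised slice k against its noisy neighbours k-1 and k+1.
    This is a valid surrogate for supervised MSE when slice noise is
    uncorrelated and zero-mean and adjacent thin slices are anatomically
    similar, which is the paper's working assumption."""
    total, count = 0.0, 0
    for k in range(1, volume.shape[0] - 1):
        pred = denoise(volume[k])
        total += np.mean((pred - volume[k - 1]) ** 2)
        total += np.mean((pred - volume[k + 1]) ** 2)
        count += 2
    return total / count

def box3(s):
    """3x3 box filter with edge padding: a crude stand-in 'denoiser'."""
    p = np.pad(s, 1, mode="edge")
    n, m = s.shape
    return sum(p[i:i + n, j:j + m] for i in range(3) for j in range(3)) / 9.0

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0.0, 1.0, 32), (5, 32, 1))   # 5 near-identical slices
noisy = clean + 0.1 * rng.standard_normal(clean.shape)   # independent slice noise

loss_id = noise2context_loss(lambda s: s, noisy)   # no denoising
loss_box = noise2context_loss(box3, noisy)         # smoothing lowers the loss
```

Because the neighbour's noise is independent of the prediction, its variance enters the loss as a constant floor; only the residual noise in the prediction can be reduced, which is exactly why minimizing this loss trains a denoiser.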
Affiliation(s)
- Zhicheng Zhang
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
- Xiaokun Liang
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
- Wei Zhao
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
- Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
105. Zhou B, Zhou SK, Duncan JS, Liu C. Limited View Tomographic Reconstruction Using a Cascaded Residual Dense Spatial-Channel Attention Network With Projection Data Fidelity Layer. IEEE Trans Med Imaging 2021;40:1792-1804. PMID: 33729929; PMCID: PMC8325575; DOI: 10.1109/tmi.2021.3066318.
Abstract
Limited view tomographic reconstruction aims to reconstruct a tomographic image from a limited number of projection views arising from sparse-view or limited-angle acquisitions that reduce radiation dose or shorten scanning time. However, such reconstructions suffer from severe artifacts due to the incompleteness of the sinogram. To obtain high-quality reconstructions, previous methods use UNet-like neural architectures to predict the full-view reconstruction directly from limited-view data; but these methods leave the deep network architecture issue largely intact and cannot guarantee consistency between the sinogram of the reconstructed image and the acquired sinogram, leading to non-ideal reconstructions. In this work, we propose a cascaded network consisting of residual dense spatial-channel attention networks and projection data fidelity layers. We evaluate our method on two datasets. Experimental results on the AAPM Low Dose CT Grand Challenge dataset demonstrate that our algorithm achieves a consistent and substantial improvement over existing neural network methods on both limited-angle and sparse-view reconstruction. In addition, experimental results on the DeepLesion dataset demonstrate that our method generates high-quality reconstructions for eight major lesion types.
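The role of a projection data fidelity layer, pulling the network output back toward agreement with the acquired projections, can be illustrated with a few gradient steps on the data term. This is a toy linear sketch: the random system matrix, sizes, and step count are illustrative assumptions, not the paper's geometry or exact layer:

```python
import numpy as np

def fidelity_layer(x, A, y, n_steps=20, lr=0.5):
    """Simplified projection-data-fidelity module: nudge the network output x
    toward consistency with the acquired projections y, where A maps the image
    to the measured views, via gradient steps on 0.5*||Ax - y||^2."""
    for _ in range(n_steps):
        x = x - lr * A.T @ (A @ x - y)
    return x

rng = np.random.default_rng(2)
x_true = rng.random(16)
A = rng.standard_normal((8, 16)) / 4.0          # toy sparse-view system matrix
y = A @ x_true                                  # acquired projections
x_cnn = x_true + 0.2 * rng.standard_normal(16)  # network estimate, slightly off
x_out = fidelity_layer(x_cnn, A, y)
```

Interleaving such layers between attention networks, as the cascade described above does, keeps each refinement stage from drifting away from the measured sinogram.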
106. Liu J, Kang Y, Qiang J, Wang Y, Hu D, Chen Y. Low-dose CT imaging via cascaded ResUnet with spectrum loss. Methods 2021;202:78-87. PMID: 33992773; DOI: 10.1016/j.ymeth.2021.05.005.
Abstract
The suppression of artifact noise in computed tomography (CT) with a low-dose scan protocol is challenging. Conventional statistical iterative algorithms can improve reconstruction but cannot substantially eliminate large streaks and strong noise. In this paper, we present a 3D cascaded ResUnet neural network (Ca-ResUnet) strategy with a modified noise power spectrum loss for reducing artifact noise in low-dose CT imaging. The imaging workflow consists of four components. The first is filtered backprojection (FBP) reconstruction via a domain transformation module that suppresses artifact noise. The second is a ResUnet neural network that operates on the CT image. The third is an image compensation module that compensates for the loss of tiny structures, and the last is a second ResUnet neural network with the modified spectrum loss for fine-tuning the reconstructed image. Verification results based on American Association of Physicists in Medicine (AAPM) and United Imaging Healthcare (UIH) datasets confirm that the proposed strategy significantly reduces serious artifact noise while retaining the desired structures.
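A loss defined on the noise power spectrum can be sketched in a few lines: the prediction and the label are compared in the Fourier domain rather than only in the pixel domain. The version below is a simplified illustration of the idea, not the paper's exact modified-NPS formulation:

```python
import numpy as np

def spectrum_loss(pred, target):
    """Penalize differences between the 2D magnitude spectra. A network
    trained with such a term is discouraged from over-smoothing, which
    flattens the high-frequency part of the noise power spectrum."""
    return np.mean((np.abs(np.fft.fft2(pred)) - np.abs(np.fft.fft2(target))) ** 2)

rng = np.random.default_rng(3)
img = rng.random((16, 16))
smoothed = (img + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)) / 3.0

loss_same = spectrum_loss(img, img)        # identical spectra
loss_blur = spectrum_loss(smoothed, img)   # smoothing shifts the spectrum
```

In practice such a term would be weighted against a pixel-domain loss, matching the paper's use of the spectrum loss for fine-tuning rather than as the sole objective.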
Affiliation(s)
- Jin Liu
- College of Computer and Information, Anhui Polytechnic University, Wuhu, China; Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing, China
- Yanqin Kang
- College of Computer and Information, Anhui Polytechnic University, Wuhu, China; Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing, China
- Jun Qiang
- College of Computer and Information, Anhui Polytechnic University, Wuhu, China
- Yong Wang
- College of Computer and Information, Anhui Polytechnic University, Wuhu, China
- Dianlin Hu
- Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing, China; School of Cyber Science and Engineering, Southeast University, Nanjing, China; School of Computer Science and Engineering, Southeast University, Nanjing, China
- Yang Chen
- Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing, China; School of Cyber Science and Engineering, Southeast University, Nanjing, China; School of Computer Science and Engineering, Southeast University, Nanjing, China
107. Shi Z, Li H, Cao Q, Wang Z, Cheng M. A material decomposition method for dual-energy CT via dual interactive Wasserstein generative adversarial networks. Med Phys 2021;48:2891-2905. PMID: 33704786; DOI: 10.1002/mp.14828.
Abstract
PURPOSE Dual-energy computed tomography (DECT) is highly promising for material characterization and identification, but the reconstructed material-specific images are affected by magnified noise and beam-hardening artifacts. Although various DECT material decomposition methods have been proposed to solve this problem, the quality of the decomposed images remains unsatisfactory, particularly at image edges. In this study, a data-driven approach using dual interactive Wasserstein generative adversarial networks (DIWGAN) is developed to improve DECT decomposition accuracy and produce edge-preserving decomposed images. METHODS In the proposed DIWGAN, two interactive generators are used to synthesize the decomposed images of two basis materials by modeling the spatial and spectral correlations in the input DECT reconstructed images, and the corresponding discriminators are employed to distinguish the generated images from the labels. The DECT images reconstructed from the high- and low-energy bins are sent to the two generators separately, and each generator synthesizes one material-specific image, thereby ensuring the specificity of the network modeling. In addition, information from the different energy bins is exploited through feature sharing between the two generators. During decomposition model training, a hybrid loss function including L1 loss, edge loss, and adversarial loss is incorporated to preserve the texture and edges of the generated images. Additionally, a selector is employed to define which generator should be trained in each iteration, which ensures the modeling ability of the two different generators and improves material decomposition accuracy. The performance of the proposed method is evaluated using a digital phantom, the XCAT phantom, and real data from a mouse. RESULTS On the digital phantom, the bone and soft-tissue regions are strictly and accurately separated by the trained decomposition model.
The material densities in the different bone and soft-tissue regions are near the ground truth, and the error in material density is below 3 mg/ml. Results from the XCAT phantom show that the material-specific images generated by direct matrix inversion and iterative decomposition methods have severe noise and artifacts. Among the learning-based methods, the decomposed images of the fully convolutional network (FCN) and butterfly network (Butterfly-Net) still contain varying degrees of artifacts, whereas the proposed DIWGAN yields high-quality images. Compared with Butterfly-Net, the root-mean-square error (RMSE) of the soft-tissue images generated by DIWGAN decreased by 0.01 g/ml, whereas the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the soft-tissue images reached 31.43 dB and 0.9987, respectively. The mass densities of the decomposed materials are closest to the ground truth when using the DIWGAN method. The noise standard deviation of the decomposed images is reduced by 69%, 60%, 33%, and 21% compared with direct matrix inversion, iterative decomposition, FCN, and Butterfly-Net, respectively. Furthermore, the performance on the mouse data indicates the potential of the proposed material decomposition method on real scanned data. CONCLUSIONS A DECT material decomposition method based on deep learning is proposed, and the relationship between reconstructed and material-specific images is learned by training the DIWGAN model. Results from both simulation phantoms and real data demonstrate the advantages of this method in suppressing noise and beam-hardening artifacts.
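The hybrid generator loss can be sketched compactly. In the toy version below the edge term uses simple finite differences rather than the paper's edge operator, the adversarial term is the usual WGAN generator objective (negated critic score), and the weights are illustrative, not the published ones:

```python
import numpy as np

def edge_maps(img):
    """Finite-difference gradients: a simple stand-in for an edge operator."""
    return np.diff(img, axis=1), np.diff(img, axis=0)

def hybrid_loss(fake, real, critic_score, w_l1=1.0, w_edge=0.5, w_adv=0.01):
    """L1 + edge + adversarial (WGAN-style) generator loss."""
    l1 = np.mean(np.abs(fake - real))
    fx, fy = edge_maps(fake)
    rx, ry = edge_maps(real)
    edge = np.mean(np.abs(fx - rx)) + np.mean(np.abs(fy - ry))
    adv = -critic_score            # the generator tries to raise the critic score
    return w_l1 * l1 + w_edge * edge + w_adv * adv

rng = np.random.default_rng(4)
real = rng.random((8, 8))
perfect = hybrid_loss(real, real, critic_score=0.0)       # exact match
offset = hybrid_loss(real + 0.1, real, critic_score=0.0)  # L1 penalty only
```

Note how a constant intensity offset leaves the edge term at essentially zero: that separation is the point of adding an explicit edge loss on top of L1.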
Affiliation(s)
- Zaifeng Shi
- School of Microelectronics, Tianjin University, Tianjin, 300072, China; Tianjin Key Laboratory of Imaging and Sensing Microelectronic Technology, Tianjin, 300072, China
- Huilong Li
- School of Microelectronics, Tianjin University, Tianjin, 300072, China
- Qingjie Cao
- School of Mathematical Sciences, Tianjin Normal University, Tianjin, 300072, China
- Zhongqi Wang
- School of Microelectronics, Tianjin University, Tianjin, 300072, China
- Ming Cheng
- School of Microelectronics, Tianjin University, Tianjin, 300072, China
108. Shao W, Rowe SP, Du Y. SPECTnet: a deep learning neural network for SPECT image reconstruction. Ann Transl Med 2021;9:819. PMID: 34268432; PMCID: PMC8246183; DOI: 10.21037/atm-20-3345.
Abstract
Background Single photon emission computed tomography (SPECT) is an important functional tool for clinical diagnosis and scientific research on brain disorders, but suffers from limited spatial resolution and high noise due to hardware design and imaging physics. The present study develops a deep learning technique for SPECT image reconstruction that directly converts raw projection data to images with high resolution and low noise, and presents an efficient training method specifically applicable to medical image reconstruction. Methods Custom software was developed to generate 20,000 2-D brain phantoms, of which 16,000 were used to train the neural network, 2,000 for validation, and the final 2,000 for testing. To reduce development difficulty, a two-step training strategy was adopted. We first compressed the full-size activity image (128×128 pixels) to a 1-D vector of 256×1 pixels using an autoencoder (AE) consisting of an encoder and a decoder. The vector is a good representation of the full-size image in a lower-dimensional space and was used as a compact label to develop a second network that maps between the projection-data domain and the vector domain. Since the label had only 256 pixels, the second network was compact and easy to converge. Once successfully developed, the second network was connected to the decoder (a portion of the AE) to decompress the vector to a regular 128×128 image. A complex network was thus essentially divided into two compact neural networks trained separately in sequence but eventually connectable. Results A total of 2,000 test examples, a synthetic brain phantom, and de-identified patient data were used to validate SPECTnet. Results obtained from SPECTnet were compared with those obtained from our clinical OS-EM method. SPECTnet produced images with lower noise and more accurate information in the uptake areas.
Conclusions The challenge of developing a complex deep neural network is reduced by training two separate compact connectable networks. The combination of the two networks forms the full version of SPECTnet. Results show that the developed neural network can produce more accurate SPECT images.
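The two-step construction reads naturally as function composition: a projection-to-latent network followed by the autoencoder's decoder. Below is a shape-only sketch with random linear maps standing in for the trained networks; the 120-view sinogram size is an assumption, while the 128×128 image and 256×1 vector sizes follow the abstract:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stage 1: an autoencoder compresses a 128x128 activity image to a 256-vector.
# Random linear maps stand in for the trained encoder/decoder weights.
W_enc = rng.standard_normal((256, 128 * 128)) * 0.01
W_dec = rng.standard_normal((128 * 128, 256)) * 0.01
encode = lambda img: W_enc @ img.ravel()
decode = lambda z: (W_dec @ z).reshape(128, 128)

# Stage 2: a compact network maps projection data to the 256-vector label
# produced by the frozen encoder (toy sinogram: 120 views x 128 bins).
W_proj = rng.standard_normal((256, 120 * 128)) * 0.01
project_to_latent = lambda sino: W_proj @ sino.ravel()

def spectnet_like(sino):
    """Deployment path: stage-2 network chained with the stage-1 decoder."""
    return decode(project_to_latent(sino))

sino = rng.random((120, 128))
recon = spectnet_like(sino)
```

The payoff of the design is visible in the shapes: the hard projection-to-image mapping is learned only up to a 256-element target, and the heavy 256-to-16,384 expansion is inherited from the separately trained decoder.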
Affiliation(s)
- Wenyi Shao
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Steven P Rowe
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Yong Du
- Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
109. Mizusawa S, Sei Y, Orihara R, Ohsuga A. Computed tomography image reconstruction using stacked U-Net. Comput Med Imaging Graph 2021;90:101920. PMID: 33901918; DOI: 10.1016/j.compmedimag.2021.101920.
Abstract
Since the development of deep learning methods, many researchers have focused on image quality improvement using convolutional neural networks, which have proved effective in noise reduction, single-image super-resolution, and segmentation. In this study, we apply stacked U-Net, a deep learning method, to X-ray computed tomography image reconstruction to generate high-quality images in a short time from a small number of projections. It is not easy to create highly accurate models because medical imaging offers few training images due to patient privacy issues; we therefore utilize images from ImageNet, a widely known visual database. Results show that a cross-sectional image with a peak signal-to-noise ratio of 27.93 dB and a structural similarity of 0.886 is recovered for a 512 × 512 image using 360-degree rotation, 512 detectors, and 64 projections, with a processing time of 0.11 s on the GPU. The proposed method therefore achieves a shorter reconstruction time and better image quality than existing methods.
Affiliation(s)
- Satoru Mizusawa
- The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan
- Yuichi Sei
- The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan
- Ryohei Orihara
- The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan
- Akihiko Ohsuga
- The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu, Tokyo 182-8585, Japan
110. Adaptive channel and multiscale spatial context network for breast mass segmentation in full-field mammograms. Appl Intell 2021. DOI: 10.1007/s10489-021-02297-3.
111. Zhang Z, Yu L, Zhao W, Xing L. Modularized data-driven reconstruction framework for nonideal focal spot effect elimination in computed tomography. Med Phys 2021;48:2245-2257. PMID: 33595900; DOI: 10.1002/mp.14785.
Abstract
PURPOSE High-performance computed tomography (CT) plays a vital role in clinical decision-making. However, CT imaging performance is adversely affected by a nonideal focal spot size of the x-ray source, and can be further degraded as the focal spot enlarges with tube aging. In this work, we aim to develop a deep learning-based strategy to mitigate this problem so that high spatial resolution CT images can be obtained even with a nonideal x-ray source. METHODS To reconstruct high-quality CT images from blurred sinograms via joint image and sinogram learning, a cross-domain hybrid model is formulated via deep learning into a modularized data-driven reconstruction (MDR) framework. The proposed MDR framework comprises several blocks, all of which share the same network architecture and network parameters. In essence, each block uses two sub-models to generate an estimated blur kernel and a high-quality CT image simultaneously. In this way, our framework generates not only a final high-quality CT image but also a series of intermediate images with gradually improved anatomical detail, enhancing visual perception for clinicians through this dynamic process. We trained our model end-to-end on simulated datasets and tested it on both simulated and realistic experimental datasets. RESULTS On the simulated testing datasets, our approach increases the information fidelity criterion (IFC) by up to 34.2%, the universal quality index (UQI) by up to 20.3%, and the signal-to-noise ratio (SNR) by up to 6.7%, and reduces the root-mean-square error (RMSE) by up to 10.5% compared with FBP. Compared with an iterative deconvolution method (NSM), MDR increases IFC by up to 24.7%, UQI by up to 16.7%, and SNR by up to 6.0%, and reduces RMSE by up to 9.4%.
In the modulation transfer function (MTF) experiment, our method improves MTF50% by 34.5% and MTF10% by 18.7% compared with FBP, and improves MTF50% by 14.3% and MTF10% by 0.9% compared with NSM. Our method also shows better imaging results at the edges of bony structures and other tiny structures in experiments using a phantom consisting of a ham and a bottle of peanuts. CONCLUSIONS A modularized data-driven CT reconstruction framework is established to mitigate the blurring caused by a nonideal x-ray source with a relatively large focal spot, enabling high-resolution imaging with a less-than-ideal x-ray source.
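The block structure, the same update applied repeatedly while emitting gradually sharpened intermediates, can be illustrated with a weight-shared deblurring loop. The 1-D toy below assumes the blur kernel is known, sidestepping the paper's joint kernel estimation; the signal, kernel, and step size are all illustrative:

```python
import numpy as np

def mdr_unrolled(y, kernel, n_blocks=4, lr=0.3):
    """Unrolled, weight-shared refinement (a toy 1-D analogue of the MDR idea):
    every block applies the same update rule and emits an intermediate
    estimate, so the caller sees a sequence of gradually sharpened images."""
    k = kernel / kernel.sum()
    x = y.copy()                      # initialise from the blurred signal
    intermediates = []
    for _ in range(n_blocks):
        resid = np.convolve(x, k, mode="same") - y
        x = x - lr * np.convolve(resid, k[::-1], mode="same")  # adjoint step
        intermediates.append(x.copy())
    return intermediates

rng = np.random.default_rng(6)
x_true = (np.arange(64) % 16 < 8).astype(float)     # square wave "anatomy"
psf = np.array([1.0, 2.0, 3.0, 2.0, 1.0]); psf /= psf.sum()
y = np.convolve(x_true, psf, mode="same")           # focal-spot-blurred signal
estimates = mdr_unrolled(y, psf)
```

Returning the whole list of intermediates mirrors the framework's stated benefit: clinicians (or a loss function) can inspect every stage of the refinement, not just the final image.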
Affiliation(s)
- Zhicheng Zhang
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
- Lequan Yu
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
- Wei Zhao
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
- Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
112. Ketcha MD, Marrama M, Souza A, Uneri A, Wu P, Zhang X, Helm PA, Siewerdsen JH. Sinogram + image domain neural network approach for metal artifact reduction in low-dose cone-beam computed tomography. J Med Imaging (Bellingham) 2021;8:052103. PMID: 33732755; DOI: 10.1117/1.jmi.8.5.052103.
Abstract
Purpose: Cone-beam computed tomography (CBCT) is commonly used in the operating room to evaluate the placement of surgical implants in relation to critical anatomical structures. A particularly problematic setting, however, is the imaging of metallic implants, where strong artifacts can obscure visualization of both the implant and surrounding anatomy. Such artifacts are compounded when combined with low-dose imaging techniques such as sparse-view acquisition. Approach: This work presents a dual convolutional neural network approach, one operating in the sinogram domain and one in the reconstructed image domain, that is specifically designed for the physics and setting of intraoperative CBCT to address the sources of beam hardening and sparse view sampling that contribute to metal artifacts. The networks were trained with images from cadaver scans with simulated metal hardware. Results: The trained networks were tested on images of cadavers with surgically implanted metal hardware, and performance was compared with a method operating in the image domain alone. While both methods removed most image artifacts, superior performance was observed for the dual-convolutional neural network (CNN) approach in which beam-hardening and view sampling effects were addressed in both the sinogram and image domain. Conclusion: The work demonstrates an innovative approach for eliminating metal and sparsity artifacts in CBCT using a dual-CNN framework which does not require a metal segmentation.
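For context, the classical sinogram-inpainting baseline that dual-domain networks like this are typically compared against replaces metal-corrupted detector samples by interpolation from their uncorrupted neighbors along each projection view. A minimal sketch (hypothetical helper name, not the authors' code):

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_mask):
    """Replace metal-corrupted sinogram samples by linear interpolation
    along each detector row (one projection view at a time).

    sinogram:   (n_views, n_bins) array of projection data
    metal_mask: boolean array of the same shape, True inside the metal trace
    """
    out = np.asarray(sinogram, dtype=float).copy()
    n_views, n_bins = out.shape
    bins = np.arange(n_bins)
    for v in range(n_views):
        bad = metal_mask[v]
        if bad.any() and not bad.all():
            # fill corrupted bins from the surrounding clean bins
            out[v, bad] = np.interp(bins[bad], bins[~bad], out[v, ~bad])
    return out
```

The CNN-based approach in the paper aims to outperform exactly this kind of interpolation, which tends to introduce new streaks when the missing trace is wide.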
Affiliation(s)
- Michael D Ketcha
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Andre Souza
- Medtronic, Littleton, Massachusetts, United States
- Ali Uneri
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Pengwei Wu
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Xiaoxuan Zhang
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Jeffrey H Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
113
Xu P, Geng C, Shu D, Tang X, Liu H, Tian F, Ye H. Two-dimensional dose distribution measurement based on rotational optical fiber array: A Monte Carlo simulation study. RADIAT MEAS 2021. [DOI: 10.1016/j.radmeas.2021.106556] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
114
Xie S, Yang T. Artifact Removal in Sparse-Angle CT Based on Feature Fusion Residual Network. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.3000789] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
115
Di J, Han W, Liu S, Wang K, Tang J, Zhao J. Sparse-view imaging of a fiber internal structure in holographic diffraction tomography via a convolutional neural network. APPLIED OPTICS 2021; 60:A234-A242. [PMID: 33690374 DOI: 10.1364/ao.404276] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/03/2020] [Accepted: 10/29/2020] [Indexed: 06/12/2023]
Abstract
Deep learning has recently shown great potential in computational imaging. Here, we propose a deep-learning-based reconstruction method to realize the sparse-view imaging of a fiber internal structure in holographic diffraction tomography. By taking the sparse-view sinogram as the input and the cross-section image obtained by the dense-view sinogram as the ground truth, the neural network can reconstruct the cross-section image from the sparse-view sinogram. It performs better than the corresponding filtered back-projection algorithm with a sparse-view sinogram, both in the case of simulated data and real experimental data.
116
Zhang H, Liu B, Yu H, Dong B. MetaInv-Net: Meta Inversion Network for Sparse View CT Image Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:621-634. [PMID: 33104506 DOI: 10.1109/tmi.2020.3033541] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
X-ray Computed Tomography (CT) is widely used in clinical applications such as diagnosis and image-guided interventions. In this paper, we propose a new deep learning based model for CT image reconstruction with the backbone network architecture built by unrolling an iterative algorithm. However, unlike the existing strategy of including as many data-adaptive components in the unrolled dynamics model as possible, we find that it is enough to learn only the parts where traditional designs rely mostly on intuition and experience. More specifically, we propose to learn an initializer for the conjugate gradient (CG) algorithm that is involved in one of the subproblems of the backbone model. Other components, such as image priors and hyperparameters, are kept as in the original design. Since a hypernetwork is introduced to infer the initialization of the CG module, the proposed model is a form of meta-learning model. Therefore, we call the proposed model the meta-inversion network (MetaInv-Net). The proposed MetaInv-Net can be designed with far fewer trainable parameters while still preserving image reconstruction performance superior to some state-of-the-art deep models in CT imaging. In simulated and real data experiments, MetaInv-Net performs very well and generalizes beyond the training setting, i.e., to other scanning settings, noise levels, and data sets.
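MetaInv-Net keeps the conjugate gradient solver itself fixed and learns only its initialization. The standard CG iteration whose warm start the hypernetwork would predict can be sketched as follows; `x0` is where a learned initializer plugs in:

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=100):
    """Plain CG for a symmetric positive-definite system A x = b.
    A good x0 (e.g. predicted by a hypernetwork) reduces the iterations needed."""
    x = np.asarray(x0, dtype=float).copy()
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p  # conjugate direction update
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most n steps for an n-dimensional SPD system, so the practical payoff of a learned `x0` is fewer iterations to a given tolerance.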
117
Nogales A, García-Tejedor ÁJ, Monge D, Vara JS, Antón C. A survey of deep learning models in medical therapeutic areas. Artif Intell Med 2021; 112:102020. [PMID: 33581832 DOI: 10.1016/j.artmed.2021.102020] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2020] [Revised: 12/21/2020] [Accepted: 01/10/2021] [Indexed: 12/18/2022]
Abstract
Artificial intelligence is a broad field that comprises a wide range of techniques, of which deep learning presently has the greatest impact. Moreover, in medicine, the complexity and volume of the data, together with the importance of the decisions made by doctors, make it one of the fields in which deep learning techniques can have the greatest impact. A systematic review following the Cochrane recommendations was conducted by a multidisciplinary team of physicians, research methodologists, and computer scientists. This survey aims to identify the main therapeutic areas and the deep learning models used for diagnosis and treatment tasks. The most relevant databases included were MedLine, Embase, Cochrane Central, Astrophysics Data System, Europe PubMed Central, Web of Science, and Science Direct. Inclusion and exclusion criteria were defined and applied in the first and second peer-review screenings. A set of quality criteria was developed to select the papers obtained after the second screening. Finally, 126 studies from the initial 3493 papers were selected, and 64 were described. Results show that the number of publications on deep learning in medicine is increasing every year. Convolutional neural networks are the most widely used models, and the most developed area is oncology, where they are used mainly for image analysis.
Affiliation(s)
- Alberto Nogales
- CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
- Álvaro J García-Tejedor
- CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
- Diana Monge
- Faculty of Medicine, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
- Juan Serrano Vara
- CEIEC, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
- Cristina Antón
- Faculty of Medicine, Research Institute, Universidad Francisco de Vitoria, Ctra. M-515 Pozuelo-Majadahonda km 1800, 28223, Pozuelo de Alarcón, Spain.
118
Huang Z, Chen Z, Chen J, Lu P, Quan G, Du Y, Li C, Gu Z, Yang Y, Liu X, Zheng H, Liang D, Hu Z. DaNet: dose-aware network embedded with dose-level estimation for low-dose CT imaging. Phys Med Biol 2021; 66:015005. [PMID: 33120378 DOI: 10.1088/1361-6560/abc5cc] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Many deep learning (DL)-based image restoration methods for low-dose CT (LDCT) directly apply end-to-end networks to low-dose training data without considering dose differences. However, the radiation dose difference has a great impact on the ultimate results, and lower doses increase the difficulty of restoration. Moreover, there is increasing demand to design and estimate acceptable scanning doses for patients in clinical practice, necessitating dose-aware networks embedded with adaptive dose estimation. In this paper, we consider these dose differences of input LDCT images and propose an adaptive dose-aware network. First, considering a large dose distribution range, for simulation convenience we coarsely define five dose levels in advance: lowest, lower, mild, higher, and highest. Instead of directly building an end-to-end mapping between LDCT images and their high-dose CT counterparts, the dose level is estimated in the first stage. In the second stage, the adaptively learned dose level is used to guide the image restoration process as prior information through a channel feature transform. We conduct experiments on a simulated dataset based on the original high-dose portion of the American Association of Physicists in Medicine challenge datasets from the Mayo Clinic. Ablation studies validate the effectiveness of the dose-level estimation, and the experimental results show that our method is superior to several other DL-based methods. Specifically, our method provides clearly better performance in terms of peak signal-to-noise ratio and visual quality as reflected in subjective scores. Due to its dual-stage design, our method may be limited by its larger parameter count and coarse dose-level definitions; further improvements for clinical applications with different CT equipment vendors are planned in future work.
Affiliation(s)
- Zhenxing Huang
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science & Technology, Wuhan 430074, People's Republic of China
- School of Computer Science & Technology, Huazhong University of Science & Technology, Wuhan 430074, People's Republic of China
- Key Laboratory of Information Storage System, Engineering Research Center of Data Storage Systems and Technology, Ministry of Education of China, Wuhan 430074, People's Republic of China
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
119
Hu D, Liu J, Lv T, Zhao Q, Zhang Y, Quan G, Feng J, Chen Y, Luo L. Hybrid-Domain Neural Network Processing for Sparse-View CT Reconstruction. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021. [DOI: 10.1109/trpms.2020.3011413] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
120
Baessler B. [Artificial Intelligence in Radiology - Definition, Potential and Challenges]. PRAXIS 2021; 110:48-53. [PMID: 33406927 DOI: 10.1024/1661-8157/a003597] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Artificial Intelligence in Radiology - Definition, Potential and Challenges Abstract. Artificial Intelligence (AI) is omnipresent. It has neatly permeated our daily life, even if we are not always fully aware of its ubiquitous presence. The healthcare sector in particular is experiencing a revolution that will change our daily routine considerably in the near future. Due to its advanced digitization and its historical technical affinity, radiology is especially prone to these developments. But what exactly is AI, and what makes AI so potent that established medical disciplines such as radiology worry about their future job prospects? What are the assets of AI in radiology today - and what are the major challenges? This review article tries to answer these questions.
Affiliation(s)
- Bettina Baessler
- Institut für Diagnostische und Interventionelle Radiologie, Universitätsspital Zürich
121
Yu L, Zhang Z, Li X, Xing L. Deep Sinogram Completion With Image Prior for Metal Artifact Reduction in CT Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:228-238. [PMID: 32956044 PMCID: PMC7875504 DOI: 10.1109/tmi.2020.3025064] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
Computed tomography (CT) has been widely used for medical diagnosis, assessment, and therapy planning and guidance. In reality, CT images may be affected adversely in the presence of metallic objects, which can lead to severe metal artifacts and influence clinical diagnosis or dose calculation in radiation therapy. In this article, we propose a generalizable framework for metal artifact reduction (MAR) by simultaneously leveraging the advantages of image-domain and sinogram-domain MAR techniques. We formulate our framework as a sinogram completion problem and train a neural network (SinoNet) to restore the metal-affected projections. To improve the continuity of the completed projections at the boundary of the metal trace, and thus alleviate new artifacts in the reconstructed CT images, we train another neural network (PriorNet) to generate a good prior image to guide sinogram learning, and further design a novel residual sinogram learning strategy to effectively utilize the prior image information for better sinogram completion. The two networks are jointly trained in an end-to-end fashion with a differentiable forward projection (FP) operation so that the prior image generation and deep sinogram completion procedures can benefit from each other. Finally, the artifact-reduced CT images are reconstructed using filtered backprojection (FBP) from the completed sinogram. Extensive experiments on simulated and real artifact data demonstrate that our method produces superior artifact-reduced results while preserving the anatomical structures, and outperforms other MAR methods.
122
Luo Y, Majoe S, Kui J, Qi H, Pushparajah K, Rhode K. Ultra-Dense Denoising Network: Application to Cardiac Catheter-Based X-Ray Procedures. IEEE Trans Biomed Eng 2020; 68:2626-2636. [PMID: 33259291 DOI: 10.1109/tbme.2020.3041571] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Reducing radiation dose in cardiac catheter-based X-ray procedures increases safety but also image noise and artifacts. Excessive noise and artifacts can compromise vital image information, which can affect clinical decision-making. Developing more effective X-ray denoising methodologies will be beneficial to both patients and healthcare professionals by allowing imaging at lower radiation dose without compromising image information. This paper proposes a framework based on a convolutional neural network (CNN), namely Ultra-Dense Denoising Network (UDDN), for low-dose X-ray image denoising. To promote feature extraction, we designed a novel residual block which establishes a solid correlation among multiple-path neural units via abundant cross connections in its representation enhancement section. Experiments on synthetic additive noise X-ray data show that the UDDN achieves statistically significant higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) than other comparative methods. We enhanced the clinical adaptability of our framework by training using normally-distributed noise and tested on clinical data taken from procedures at St. Thomas' hospital in London. The performance was assessed by using local SNR and by clinical voting using ten cardiologists. The results show that the UDDN outperforms the other comparative methods and is a promising solution to this challenging but clinically impactful task.
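The peak signal-to-noise ratio used to evaluate denoising methods like UDDN has a standard definition; a minimal sketch:

```python
import numpy as np

def psnr(ref, img, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and a test image.
    data_range defaults to the dynamic range of the reference image."""
    ref = np.asarray(ref, dtype=float)
    img = np.asarray(img, dtype=float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - img) ** 2)
    return float(10.0 * np.log10(data_range**2 / mse))
```

Note that PSNR is undefined (infinite) for identical images, since the mean squared error in the denominator is zero.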
123
Awasthi N, Jain G, Kalva SK, Pramanik M, Yalavarthy PK. Deep Neural Network-Based Sinogram Super-Resolution and Bandwidth Enhancement for Limited-Data Photoacoustic Tomography. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2020; 67:2660-2673. [PMID: 32142429 DOI: 10.1109/tuffc.2020.2977210] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Photoacoustic tomography (PAT) is a noninvasive imaging modality combining the benefits of optical contrast with ultrasonic resolution. Analytical reconstruction algorithms for photoacoustic (PA) signals require a large number of data points for accurate image reconstruction. However, in practical scenarios, data are collected using a limited number of transducers and are often corrupted by noise, resulting in only qualitative images. Furthermore, the collected boundary data are band-limited due to the limited bandwidth (BW) of the transducer, which also renders PA imaging with limited data qualitative. In this work, a deep neural network-based model with a scaled root-mean-squared-error loss function was proposed for super-resolution, denoising, and BW enhancement of the PA signals collected at the boundary of the domain. The proposed network has been compared with traditional as well as other popular deep-learning methods in numerical and experimental cases and is shown to improve the collected boundary data, in turn providing a superior-quality reconstructed PA image. The improvement obtained in the Pearson correlation, structural similarity index metric, and root-mean-square error was as high as 35.62%, 33.81%, and 41.07%, respectively, for phantom cases, and the signal-to-noise ratio improvement in the reconstructed PA images was as high as 11.65 dB for in vivo cases compared with the reconstructed image obtained using the original limited-BW data. Code is available at https://sites.google.com/site/sercmig/home/dnnpat.
124
Zuo G, Zheng T, Liu Y, Xu Z, Gong D, Yu J. Fine semantic mapping based on dense segmentation network. INTEL SERV ROBOT 2020. [DOI: 10.1007/s11370-020-00341-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
125
Singh R, Wu W, Wang G, Kalra MK. Artificial intelligence in image reconstruction: The change is here. Phys Med 2020; 79:113-125. [DOI: 10.1016/j.ejmp.2020.11.012] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Revised: 11/06/2020] [Accepted: 11/07/2020] [Indexed: 12/19/2022] Open
126
Li Y, Xie D, Cember A, Nanga RPR, Yang H, Kumar D, Hariharan H, Bai L, Detre JA, Reddy R, Wang Z. Accelerating GluCEST imaging using deep learning for B 0 correction. Magn Reson Med 2020; 84:1724-1733. [PMID: 32301185 PMCID: PMC8082274 DOI: 10.1002/mrm.28289] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2019] [Revised: 03/27/2020] [Accepted: 03/27/2020] [Indexed: 02/06/2023]
Abstract
PURPOSE Glutamate-weighted chemical exchange saturation transfer (GluCEST) MRI is a noninvasive technique for mapping parenchymal glutamate in the brain. Because of its sensitivity to field (B0) inhomogeneity, the total acquisition time is prolonged by the repeated image acquisitions at several saturation offset frequencies, which can cause practical issues such as increased sensitivity to patient motion. Because the GluCEST signal is derived from a small z-spectrum difference, it often has a low signal-to-noise ratio (SNR). We proposed a novel deep learning (DL)-based algorithm armed with wide-activation neural network blocks to address both issues. METHODS B0 correction based on reduced saturation offset acquisitions was performed for the positive and negative sides of the z-spectrum separately. For each side, a separate deep residual network was trained to learn the nonlinear mapping from a few CEST-weighted images acquired at different ppm values to the one at 3 ppm (where GluCEST peaks) on the same side of the z-spectrum. RESULTS All DL-based methods outperformed the "traditional" method visually and quantitatively. The wide-activation-block-based method showed the highest performance in terms of structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR), which were 0.84 and 25 dB, respectively. SNR increases in regions of interest were over 8 dB. CONCLUSION We demonstrated that the new DL-based method can reduce the entire GluCEST imaging time by ~50% and yield higher SNR than the current state-of-the-art.
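The reason both sides of the z-spectrum must be mapped to 3 ppm is that GluCEST contrast is conventionally computed as an asymmetry between the B0-corrected images at -3 ppm and +3 ppm. A sketch of the standard formulation (the paper's exact normalization may differ):

```python
import numpy as np

def glucest_contrast(m_neg3ppm, m_pos3ppm):
    """GluCEST asymmetry contrast (%) from B0-corrected saturation images:
    100 * (M(-3 ppm) - M(+3 ppm)) / M(-3 ppm), computed voxel-wise."""
    m_neg = np.asarray(m_neg3ppm, dtype=float)
    m_pos = np.asarray(m_pos3ppm, dtype=float)
    return 100.0 * (m_neg - m_pos) / m_neg
```

Because the contrast is a small difference of two large signals, noise in either input propagates strongly, which is why the paper emphasizes the low SNR of the derived GluCEST map.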
Affiliation(s)
- Yiran Li
- Department of Electrical and Computer Engineering, Temple University, Philadelphia, PA 19121, USA
- Danfeng Xie
- Department of Electrical and Computer Engineering, Temple University, Philadelphia, PA 19121, USA
- Abigail Cember
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, USA
- Ravi Prakash Reddy Nanga
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, USA
- Hanlu Yang
- Department of Electrical and Computer Engineering, Temple University, Philadelphia, PA 19121, USA
- Dushyant Kumar
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, USA
- Hari Hariharan
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, USA
- Li Bai
- Department of Electrical and Computer Engineering, Temple University, Philadelphia, PA 19121, USA
- John A. Detre
- Department of Neurology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, USA
- Ravinder Reddy
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA 19104, USA
- Ze Wang
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD 21201, USA
127
128
Shao W, Du Y. Microwave Imaging by Deep Learning Network: Feasibility and Training Method. IEEE TRANSACTIONS ON ANTENNAS AND PROPAGATION 2020; 68:5626-5635. [PMID: 34113046 PMCID: PMC8189033 DOI: 10.1109/tap.2020.2978952] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Microwave image reconstruction based on a deep-learning method is investigated in this paper. The neural network is capable of converting measured microwave signals acquired from a 24×24 antenna array at 4 GHz into a 128×128 image. To reduce the training difficulty, we first developed an autoencoder by which high-resolution images (128×128) were represented with 256×1 vectors; we then developed a second neural network that maps the microwave signals to the compressed features (256×1 vectors). Once both are successfully developed, the two neural networks can be combined into a full network to perform reconstruction. The present two-stage training method reduces the difficulty of training deep learning networks (DLNs) for inverse reconstruction. The developed neural network is validated by simulation examples and experimental data with objects of different shapes and sizes, placed in different locations, and with dielectric constants ranging from 2 to 6. Comparisons between the imaging results achieved by the present method and two conventional approaches, the distorted Born iterative method (DBIM) and the phase confocal method (PCM), are also provided.
Affiliation(s)
- Wenyi Shao
- Department of Radiology and Radiological Science, Johns Hopkins University, Baltimore, MD 21287, USA
- Yong Du
- Department of Radiology and Radiological Science, Johns Hopkins University, Baltimore, MD 21287, USA
129
Ding Q, Chen G, Zhang X, Huang Q, Ji H, Gao H. Low-dose CT with deep learning regularization via proximal forward-backward splitting. Phys Med Biol 2020; 65:125009. [PMID: 32209742 DOI: 10.1088/1361-6560/ab831a] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Low-dose x-ray computed tomography (LDCT) is desirable for reduced patient dose. This work develops new image reconstruction methods with deep learning (DL) regularization for LDCT. Our methods are based on the unrolling of a proximal forward-backward splitting (PFBS) framework with data-driven image regularization via deep neural networks. In contrast to PFBS-IR, which utilizes standard data fidelity updates via an iterative reconstruction (IR) method, PFBS-AIR involves preconditioned data fidelity updates that fuse the analytical reconstruction (AR) and IR methods in a synergistic way, i.e. fused analytical and iterative reconstruction (AIR). The results suggest that the DL-regularized methods (PFBS-IR and PFBS-AIR) provide better reconstruction quality compared to conventional methods (AR or IR). In addition, owing to the AIR, PFBS-AIR noticeably outperformed PFBS-IR and another DL-based postprocessing method, FBPConvNet.
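The PFBS backbone alternates a gradient (forward) step on the data fidelity with a proximal (backward) step on the regularizer; in PFBS-IR and PFBS-AIR the proximal step is replaced by a deep network. A toy sketch with an l1 prior, whose proximal operator is soft-thresholding (illustrative only, not the paper's DL-regularized version):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1; a learned denoiser would replace this."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def pfbs(A, b, lam=0.1, step=None, n_iter=200):
    """Proximal forward-backward splitting for min_x 0.5 ||A x - b||^2 + lam ||x||_1."""
    if step is None:
        # step size 1/L, where L = ||A||_2^2 is the Lipschitz constant of the gradient
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                          # forward step: data fidelity
        x = soft_threshold(x - step * grad, step * lam)   # backward step: prox of prior
    return x
```

For `A = I` this converges to the soft-thresholded data, which is the closed-form solution of the l1-regularized denoising problem.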
Affiliation(s)
- Qiaoqiao Ding
- Department of Mathematics, National University of Singapore, Singapore 119076, Singapore
130
Kugele A, Pfeil T, Pfeiffer M, Chicca E. Efficient Processing of Spatio-Temporal Data Streams With Spiking Neural Networks. Front Neurosci 2020; 14:439. [PMID: 32431592 PMCID: PMC7214871 DOI: 10.3389/fnins.2020.00439] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2019] [Accepted: 04/09/2020] [Indexed: 11/15/2022] Open
Abstract
Spiking neural networks (SNNs) are potentially highly efficient models for inference on fully parallel neuromorphic hardware, but existing training methods that convert conventional artificial neural networks (ANNs) into SNNs are unable to exploit these advantages. Although ANN-to-SNN conversion has achieved state-of-the-art accuracy for static image classification tasks, the following subtle but important difference in the way SNNs and ANNs integrate information over time makes the direct application of conversion techniques for sequence processing tasks challenging. Whereas all connections in SNNs have a certain propagation delay larger than zero, ANNs assign different roles to feed-forward connections, which immediately update all neurons within the same time step, and recurrent connections, which have to be rolled out in time and are typically assigned a delay of one time step. Here, we present a novel method to obtain highly accurate SNNs for sequence processing by modifying the ANN training before conversion, such that delays induced by ANN rollouts match the propagation delays in the targeted SNN implementation. Our method builds on the recently introduced framework of streaming rollouts, which aims for fully parallel model execution of ANNs and inherently allows for temporal integration by merging paths of different delays between input and output of the network. The resulting networks achieve state-of-the-art accuracy for multiple event-based benchmark datasets, including N-MNIST, CIFAR10-DVS, N-CARS, and DvsGesture, and through the use of spatio-temporal shortcut connections yield low-latency approximate network responses that improve over time as more of the input sequence is processed. In addition, our converted SNNs are consistently more energy-efficient than their corresponding ANNs.
Affiliation(s)
- Alexander Kugele
- Faculty of Technology and Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Bosch Center for Artificial Intelligence, Renningen, Germany
- Thomas Pfeil
- Bosch Center for Artificial Intelligence, Renningen, Germany
- Elisabetta Chicca
- Faculty of Technology and Center of Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
131
Chen G, Hong X, Ding Q, Zhang Y, Chen H, Fu S, Zhao Y, Zhang X, Ji H, Wang G, Huang Q, Gao H. AirNet: Fused analytical and iterative reconstruction with deep neural network regularization for sparse‐data CT. Med Phys 2020; 47:2916-2930. [DOI: 10.1002/mp.14170] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2019] [Revised: 03/26/2020] [Accepted: 03/28/2020] [Indexed: 11/06/2022] Open
Affiliation(s)
- Gaoyu Chen
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA 30322, USA
- Xiang Hong
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Qiaoqiao Ding
- Department of Mathematics, National University of Singapore, 119077, Singapore
- Yi Zhang
- College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, China
- Hu Chen
- College of Computer Science, Sichuan University, Chengdu, Sichuan 610065, China
- Shujun Fu
- School of Mathematics, Shandong University, Jinan, Shandong 250100, China
- Yunsong Zhao
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA 30322, USA
- School of Mathematical Sciences, Capital Normal University, Beijing 100048, China
- Xiaoqun Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Hui Ji
- Department of Mathematics, National University of Singapore, 119077, Singapore
- Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Qiu Huang
- Department of Nuclear Medicine, Rui Jin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200240, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Hao Gao
- Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, GA 30322, USA
132
Feigin M, Freedman D, Anthony BW. A Deep Learning Framework for Single-Sided Sound Speed Inversion in Medical Ultrasound. IEEE Trans Biomed Eng 2020; 67:1142-1151. [DOI: 10.1109/tbme.2019.2931195]
133
Lin L, Tao X, Pang S, Su Z, Lu H, Li S, Feng Q, Chen B. Multiple Axial Spine Indices Estimation via Dense Enhancing Network With Cross-Space Distance-Preserving Regularization. IEEE J Biomed Health Inform 2020; 24:3248-3257. [PMID: 32142463 DOI: 10.1109/jbhi.2020.2977224]
Abstract
Automatic estimation of axial spine indices is clinically desired for various spine computer-aided procedures, such as disease diagnosis, therapeutic evaluation, pathophysiological understanding, risk assessment, and biomechanical modeling. Currently, the spine indices are measured manually by physicians, which is time-consuming and laborious; worse, the tedious manual procedure can yield inaccurate measurements. To address this problem, in this paper we aim to develop an automatic method that estimates multiple indices from axial spine images. Inspired by the success of deep learning for regression problems and of the densely connected network for image classification, we propose a dense enhancing network (DE-Net) that uses dense enhancing blocks (DEBs) as its main body, where a feature enhancing layer is added to each bypass in a dense block. The DEB is designed to enhance discriminative feature embedding from the intervertebral disc and dural sac areas. In addition, the cross-space distance-preserving regularization (CSDPR), which enforces consistent inter-sample distances between the output and the label spaces, is proposed to regularize the loss function of the DE-Net. To train and validate the proposed method, we collected 895 axial spine MRI images from 143 subjects and manually measured the indices as the ground truth. The results show that all deep learning models obtain very small prediction errors, and the proposed DE-Net with CSDPR achieves the smallest error among all methods, indicating that our method has great potential for spine computer-aided procedures.
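The cross-space distance-preserving idea above can be sketched in a few lines: penalize the gap between inter-sample distances computed in the prediction space and in the label space. This is an illustrative reading of the CSDPR term, not the paper's implementation; the function names and data are invented.

```python
import math

def pairwise_dists(vectors):
    """Euclidean distance between every pair of samples."""
    n = len(vectors)
    return {(i, j): math.dist(vectors[i], vectors[j])
            for i in range(n) for j in range(i + 1, n)}

def csdpr_penalty(predictions, labels):
    """Cross-space distance-preserving penalty: mean squared gap between
    inter-sample distances in the output space and in the label space."""
    d_pred = pairwise_dists(predictions)
    d_true = pairwise_dists(labels)
    return sum((d_pred[k] - d_true[k]) ** 2 for k in d_pred) / len(d_pred)

# If the predictions reproduce the labels' geometry exactly, the penalty is 0.
print(csdpr_penalty([[0, 0], [3, 4]], [[1, 1], [4, 5]]))  # -> 0.0
```

In the paper this term is added to the regression loss, so training preserves the relative geometry of samples rather than fitting each index in isolation.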
134
Bi Q, Qin K, Li Z, Zhang H, Xu K, Xia GS. A Multiple-Instance Densely-Connected ConvNet for Aerial Scene Classification. IEEE Trans Image Process 2020; 29:4911-4926. [PMID: 32149687 DOI: 10.1109/tip.2020.2975718]
Abstract
In contrast with natural scenes, aerial scenes are often composed of many objects densely distributed over the surface as seen from a bird's-eye view, so describing them usually demands more discriminative features as well as local semantics. However, when applied to scene classification, most existing convolutional neural networks (ConvNets) tend to capture the global semantics of images, and the loss of low- and mid-level features can hardly be avoided, especially as the model goes deeper. To tackle these challenges, in this paper we propose a multiple-instance densely-connected ConvNet (MIDC-Net) for aerial scene classification. It treats aerial scene classification as a multiple-instance learning problem so that local semantics can be further investigated. Our classification model consists of an instance-level classifier, a multiple-instance pooling layer, and a bag-level classification layer. In the instance-level classifier, we propose a simplified dense connection structure to effectively preserve features from different levels. The extracted convolution features are further converted into instance feature vectors. Then, we propose a trainable attention-based multiple-instance pooling, which highlights the local semantics relevant to the scene label and outputs the bag-level probability directly. Finally, with our bag-level classification layer, the multiple-instance learning framework is placed under the direct supervision of bag labels. Experiments on three widely used aerial scene benchmarks demonstrate that our proposed method outperforms many state-of-the-art methods by a large margin with far fewer parameters.
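The attention-based multiple-instance pooling described above can be pictured as a softmax over per-instance attention scores followed by a weighted sum of instance probabilities. A minimal sketch (the scores here are hand-set for illustration, whereas the network learns them):

```python
import math

def attention_mil_pool(instance_probs, attention_scores):
    """Attention-based multiple-instance pooling: softmax the attention
    scores, then return the attention-weighted bag-level probability."""
    m = max(attention_scores)                      # shift for stability
    exps = [math.exp(s - m) for s in attention_scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    return sum(w * p for w, p in zip(weights, instance_probs))

# Instances relevant to the scene label get large scores and dominate the bag.
probs = [0.9, 0.1, 0.2]      # instance-level class probabilities
scores = [5.0, -2.0, -2.0]   # attention scores (learned in the real model)
bag_prob = attention_mil_pool(probs, scores)
```

With uniform scores this reduces to plain average pooling, which is why the attention weights are what lets the model focus on label-relevant local patches.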
135
Cerebrovascular segmentation from TOF-MRA using model- and data-driven method via sparse labels. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.092]
136

137
Yang F, Zhang D, Zhang H, Huang K, Du Y, Teng M. Streaking artifacts suppression for cone-beam computed tomography with the residual learning in neural network. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.09.087]
138
Ge Y, Su T, Zhu J, Deng X, Zhang Q, Chen J, Hu Z, Zheng H, Liang D. ADAPTIVE-NET: deep computed tomography reconstruction network with analytical domain transformation knowledge. Quant Imaging Med Surg 2020; 10:415-427. [PMID: 32190567 DOI: 10.21037/qims.2019.12.12]
Abstract
Background Recently, the paradigm of computed tomography (CT) reconstruction has shifted as deep learning techniques have evolved. In this study, we proposed a new convolutional neural network (called ADAPTIVE-NET) to perform CT image reconstruction directly from a sinogram by integrating analytical domain-transformation knowledge. Methods In the proposed ADAPTIVE-NET, a specific network layer with constant weights was customized to transform the sinogram onto the CT image domain via analytical back-projection. With this new framework, feature extraction was performed simultaneously in both the sinogram domain and the CT image domain. The Mayo low-dose CT (LDCT) data were used to validate the new network. In particular, the new network was compared with the previously proposed residual encoder-decoder (RED)-CNN network. For each network, the mean square error (MSE) loss with and without VGG-based perceptual loss was compared. Furthermore, to evaluate the image quality with certain metrics, the noise correlation was quantified via the noise power spectrum (NPS) on the LDCT images reconstructed by each method. Results CT images with clinically relevant dimensions of 512×512 can be easily reconstructed from a sinogram by ADAPTIVE-NET on a single graphics processing unit (GPU) with moderate memory (e.g., 11 GB). With the same MSE loss function, the new network generates better results than the RED-CNN. Moreover, the new network can reconstruct natural-looking CT images with enhanced image quality when the VGG loss is used jointly. Conclusions The newly proposed end-to-end supervised ADAPTIVE-NET is able to reconstruct high-quality LDCT images directly from a sinogram.
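The constant-weight domain-transformation layer amounts to an analytical back-projection baked into the network. A toy unfiltered parallel-beam back-projection conveys the operation (tiny grid, nearest-neighbour interpolation; this is an illustration of the transform, not the ADAPTIVE-NET layer itself):

```python
import math

def backproject(sinogram, angles_deg, size):
    """Unfiltered parallel-beam back-projection onto a size x size grid.
    Detector bins are indexed so that bin (size - 1) / 2 passes through
    the image centre; nearest-neighbour interpolation keeps it short."""
    img = [[0.0] * size for _ in range(size)]
    c = (size - 1) / 2.0
    for proj, ang in zip(sinogram, angles_deg):
        th = math.radians(ang)
        cos_t, sin_t = math.cos(th), math.sin(th)
        for y in range(size):
            for x in range(size):
                t = (x - c) * cos_t + (y - c) * sin_t  # detector coordinate
                k = int(round(t + c))
                if 0 <= k < len(proj):
                    img[y][x] += proj[k]
    return img

# A single bright detector bin at the centre back-projects to a line
# through the image centre for every view angle.
sino = [[0, 0, 1, 0, 0], [0, 0, 1, 0, 0]]  # two views, 5 detector bins
image = backproject(sino, [0, 90], size=5)
```

Because every weight in this operation is fixed by geometry, it can be implemented as a network layer with constant (non-trainable) weights, letting the trainable layers work in both the sinogram and image domains.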
Affiliation(s)
- Yongshuai Ge
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China
- Ting Su
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jiongtao Zhu
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Xiaolei Deng
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Qiyang Zhang
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jianwei Chen
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Zhanli Hu
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China
- Hairong Zheng
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China
- Dong Liang
- Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Chinese Academy of Sciences Key Laboratory of Health Informatics, Shenzhen Institutes of Advanced Technology, Shenzhen 518055, China
139
Zhao Y, Ji D, Li Y, Zhao X, Lv W, Xin X, Han S, Hu C. Three-dimensional visualization of microvasculature from few-projection data using a novel CT reconstruction algorithm for propagation-based X-ray phase-contrast imaging. Biomed Opt Express 2020; 11:364-387. [PMID: 32010522 PMCID: PMC6968748 DOI: 10.1364/boe.380084]
Abstract
Propagation-based X-ray phase-contrast imaging (PBI) is a powerful nondestructive imaging technique that can reveal the internal detailed structures of weakly absorbing samples. Extending PBI to CT (PBCT) enables high-resolution, high-contrast 3D visualization of microvasculature, which can be used for the understanding, diagnosis and therapy of diseases involving vasculopathy, such as cardiovascular disease, stroke and tumors. However, the long scan time of PBCT impedes its wider use in biomedical and preclinical microvascular studies. To address this issue, a novel CT reconstruction algorithm for PBCT is presented that aims to shorten the scan time for microvascular samples by reducing the number of projections while maintaining the high quality of the reconstructed images. The proposed algorithm incorporates the filtered backprojection method into an iterative reconstruction framework, and a weighted guided image filtering (WGIF) approach is utilized to optimize the intermediate reconstructed images. Notably, the homogeneity assumption on the microvasculature sample is adopted as prior knowledge, so a prior image of microvasculature structures can be acquired with a k-means clustering approach; the prior image is then used as the guidance image in the WGIF procedure to effectively suppress streaking artifacts and preserve microvasculature structures. To evaluate the effectiveness and capability of the proposed algorithm, simulation experiments on a 3D microvasculature numerical phantom and real CT reconstruction experiments on a microvasculature sample were performed. The results demonstrate that, under both noise-free and noisy conditions, the proposed algorithm can significantly reduce artifacts and effectively preserve microvasculature structures in the reconstructed images, enabling clear and accurate 3D visualization of microvasculature from few-projection data. Therefore, for 3D visualization of microvasculature, the proposed algorithm can be considered an effective approach to reducing the scan time required by PBCT.
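The k-means-derived prior image described above can be sketched in one dimension: cluster the pixel intensities, then replace each pixel with its cluster centre to obtain a piecewise-constant guidance image. An illustrative sketch under the two-material homogeneity assumption, not the paper's code:

```python
def kmeans_1d(values, centers, iters=20):
    """Lloyd's algorithm on scalar intensities: assign each value to the
    nearest centre, then move each centre to the mean of its members."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def prior_image(pixels, centers):
    """Replace every pixel with its nearest cluster centre, yielding the
    piecewise-constant prior used to guide the filtering step."""
    return [min(centers, key=lambda c: abs(p - c)) for p in pixels]

# Noisy two-material slice: background near 0, vessel near 100.
pixels = [1, 2, 0, 98, 101, 99, 3, 102]
centers = kmeans_1d(pixels, centers=[0.0, 50.0])
prior = prior_image(pixels, centers)
```

The resulting prior is streak-free by construction, which is what makes it a useful guidance image for suppressing artifacts without blurring vessel boundaries.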
Affiliation(s)
- Yuqing Zhao
- School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin 300070, China
- Dongjiang Ji
- The School of Science, Tianjin University of Technology and Education, Tianjin 300222, China
- Yimin Li
- School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin 300070, China
- Xinyan Zhao
- Liver Research Center, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
- Wenjuan Lv
- School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin 300070, China
- Xiaohong Xin
- School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin 300070, China
- Shuo Han
- School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin 300070, China
- Chunhong Hu
- School of Biomedical Engineering and Technology, Tianjin Medical University, Tianjin 300070, China
140
Bastiaannet R, van der Velden S, Lam MGEH, Viergever MA, de Jong HWAM. Fast and accurate quantitative determination of the lung shunt fraction in hepatic radioembolization. Phys Med Biol 2019; 64:235002. [DOI: 10.1088/1361-6560/ab4e49]
141
An Efficient and Scene-Adaptive Algorithm for Vehicle Detection in Aerial Images Using an Improved YOLOv3 Framework. ISPRS Int J Geo-Inf 2019. [DOI: 10.3390/ijgi8110483]
Abstract
Vehicle detection in aerial images has attracted great attention as an approach to providing the information necessary for transportation road network planning and traffic management. However, the low resolution, complex scenes, occlusion, shadows, and high requirements for detection efficiency make vehicle detection in aerial images challenging. We therefore propose an efficient and scene-adaptive algorithm for vehicle detection in aerial images using an improved YOLOv3 framework, applied not only to aerial still images but also to videos composed of consecutive frames. First, rather than directly using the traditional YOLOv3 network, we construct a new structure with fewer layers to improve detection efficiency. Then, since complex scenes in aerial images can cause partial occlusion of vehicles, we construct a context-aware feature map fusion to make full use of the information in adjacent frames and accurately detect partially occluded vehicles. The traditional YOLOv3 network adopts a horizontal bounding box, which attains the expected detection results only for vehicles with a small length-width ratio; moreover, vehicles that are close to each other are liable to cause lower accuracy and a higher detection error rate. Hence, we design a sloping bounding box attached to the angle of the target vehicles. This modification is conducive to predicting not only the position but also the angle. Finally, two data sets were used to perform extensive experiments and comparisons. The results show that the proposed algorithm delivers excellent performance.
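Converting a sloping-box parameterization (centre, size, angle) to corner points is a small rotation exercise; a sketch of that conversion (illustrative, not the paper's implementation):

```python
import math

def sloping_box_corners(cx, cy, w, h, angle_deg):
    """Corner points of a box of width w and height h centred at (cx, cy)
    and rotated by angle_deg, as a sloping-box head would predict."""
    th = math.radians(angle_deg)
    cos_t, sin_t = math.cos(th), math.sin(th)
    corners = []
    for dx, dy in ((-w / 2, -h / 2), (w / 2, -h / 2),
                   (w / 2, h / 2), (-w / 2, h / 2)):
        corners.append((cx + dx * cos_t - dy * sin_t,
                        cy + dx * sin_t + dy * cos_t))
    return corners

# A 4 x 2 box rotated 90 degrees spans 2 units in x and 4 units in y.
pts = sloping_box_corners(0, 0, 4, 2, 90)
```

Predicting the extra angle parameter lets tightly packed, elongated vehicles keep tight boxes, whereas an axis-aligned box would swallow its neighbours.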
142
Fu J, Dong J, Zhao F. A Deep Learning Reconstruction Framework for Differential Phase-Contrast Computed Tomography With Incomplete Data. IEEE Trans Image Process 2019; 29:2190-2202. [PMID: 31647435 DOI: 10.1109/tip.2019.2947790]
Abstract
Differential phase-contrast computed tomography (DPC-CT) is a powerful analysis tool for soft-tissue and low-atomic-number samples. Owing to practical implementation constraints, DPC-CT acquisitions with incomplete projections occur quite often. Conventional reconstruction algorithms face difficulty with incomplete data: they usually involve complicated parameter-selection operations, are sensitive to noise, and are time-consuming. In this paper, we report a new deep learning reconstruction framework for incomplete-data DPC-CT. It tightly couples a deep neural network with the DPC-CT reconstruction algorithm in the domain of DPC projection sinograms. The estimated result is not an image degraded by incomplete-data artifacts, but a complete phase-contrast projection sinogram. After training, the framework is fixed and can be used to reconstruct the final DPC-CT images from a given incomplete projection sinogram. Taking sparse-view, limited-view and missing-view DPC-CT as examples, the framework is validated and demonstrated with synthetic and experimental data sets. Compared with other methods, our framework achieves the best imaging quality at a faster speed and with fewer parameters. This work supports the application of state-of-the-art deep learning theory in the field of DPC-CT.
143
[Sparse-view helical CT reconstruction based on tensor total generalized variation minimization]. Nan Fang Yi Ke Da Xue Xue Bao 2019; 39:1213-1220. [PMID: 31801709 PMCID: PMC6867954 DOI: 10.12122/j.issn.1673-4254.2019.10.13]
Abstract
OBJECTIVE We propose a sparse-view helical CT iterative reconstruction algorithm based on projection onto convex sets with tensor total generalized variation minimization (TTGV-POCS) to reduce the X-ray dose of helical CT scanning. METHODS The three-dimensional volume data of helical CT reconstruction were viewed as a third-order tensor. The tensor total generalized variation (TTGV) was used to describe the structural sparsity of the three-dimensional image. The POCS iterative reconstruction framework was adopted to achieve a robust sparse-view helical CT reconstruction. The TTGV-POCS algorithm fully exploits the structural sparsity of the first- and second-order derivatives and the correlation between slices of the helical CT image data to effectively suppress artifacts and noise in sparse-view reconstruction and better preserve image edge information. RESULTS The experimental results on XCAT phantom and patient scan data showed that the TTGV-POCS algorithm performed better at reducing noise, removing artifacts and maintaining edges than the existing reconstruction algorithms. In sparse-view reconstructions of the XCAT phantom data with 144 exposure views, the TTGV-POCS algorithm improved the PSNR quantitative index by 9.17%-15.24% over the comparison algorithms; the FSIM quantitative index was increased by 1.27%-9.30%. CONCLUSIONS The TTGV-POCS algorithm can effectively improve the image quality of sparse-view helical CT reconstruction and reduce the radiation dose of helical CT examinations, thereby improving clinical imaging diagnosis.
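The POCS framework alternates projections onto constraint sets: re-impose the measured data, then apply a regularizing step. A 1-D sketch with a plain neighbour-smoothing step standing in for the much richer TTGV prior (illustrative only; none of this is the paper's code):

```python
def pocs_step(x, measured, smooth_weight=0.5):
    """One sketch iteration: project onto the data-consistency set by
    re-imposing the measured samples, then take a smoothing step that
    stands in for the (much richer) TTGV regularization."""
    # Data consistency: wherever a sample was measured, restore it.
    x = [m if m is not None else v for v, m in zip(x, measured)]
    # Regularization: pull each interior sample toward its neighbours.
    out = list(x)
    for i in range(1, len(x) - 1):
        out[i] = ((1 - smooth_weight) * x[i]
                  + smooth_weight * (x[i - 1] + x[i + 1]) / 2)
    return out

# A flat signal with two unmeasured gaps (None) is recovered by iterating.
measured = [1.0, None, 1.0, 1.0, None, 1.0]
x = [0.0] * len(measured)
for _ in range(50):
    x = pocs_step(x, measured)
```

The paper's algorithm has the same alternating shape, but its regularization step minimizes the TTGV of the full 3D tensor rather than smoothing a 1-D signal.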
144
Li Y, Li K, Zhang C, Montoya J, Chen GH. Learning to Reconstruct Computed Tomography Images Directly From Sinogram Data Under A Variety of Data Acquisition Conditions. IEEE Trans Med Imaging 2019; 38:2469-2481. [PMID: 30990179 PMCID: PMC7962902 DOI: 10.1109/tmi.2019.2910760]
Abstract
Computed tomography (CT) is widely used in medical diagnosis and non-destructive detection. Image reconstruction in CT aims to accurately recover pixel values from measured line integrals, i.e., the summed pixel values along straight lines. Provided that the acquired data satisfy the data sufficiency condition as well as other conditions regarding the view angle sampling interval and the severity of transverse data truncation, researchers have discovered many solutions to accurately reconstruct the image. However, if these conditions are violated, accurate image reconstruction from line integrals remains an intellectual challenge. In this paper, a deep learning method with a common network architecture, termed iCT-Net, was developed and trained to accurately reconstruct images for previously solved and unsolved CT reconstruction problems with high quantitative accuracy. Particularly, accurate reconstructions were achieved for the case when the sparse view reconstruction problem (i.e., compressed sensing problem) is entangled with the classical interior tomographic problems.
Affiliation(s)
- Yinsheng Li
- Department of Medical Physics at the University of Wisconsin-Madison
- Ke Li
- Department of Medical Physics at the University of Wisconsin-Madison
- Department of Radiology at the University of Wisconsin-Madison
- Chengzhu Zhang
- Department of Medical Physics at the University of Wisconsin-Madison
- Juan Montoya
- Department of Medical Physics at the University of Wisconsin-Madison
- Guang-Hong Chen
- Department of Medical Physics at the University of Wisconsin-Madison
- Department of Radiology at the University of Wisconsin-Madison
145
Liu J, Zhang Y, Zhao Q, Lv T, Wu W, Cai N, Quan G, Yang W, Chen Y, Luo L, Shu H, Coatrieux JL. Deep iterative reconstruction estimation (DIRE): approximate iterative reconstruction estimation for low dose CT imaging. Phys Med Biol 2019; 64:135007. [DOI: 10.1088/1361-6560/ab18db]
146
Liang X, Li N, Zhang Z, Yu S, Qin W, Li Y, Chen S, Zhang H, Xie Y. Shading correction for volumetric CT using deep convolutional neural network and adaptive filter. Quant Imaging Med Surg 2019; 9:1242-1254. [PMID: 31448210 PMCID: PMC6685805 DOI: 10.21037/qims.2019.05.19]
Abstract
BACKGROUND Shading artifacts can lead to CT number inaccuracy, image contrast loss and spatial non-uniformity (SNU), which is considered one of the fundamental limitations of volumetric CT (VCT). To correct shading artifacts, a novel approach is proposed using deep learning and an adaptive filter (AF). METHODS First, we apply a deep convolutional neural network (DCNN) to train a human tissue segmentation model, and the trained model is used to segment the tissue. Based on the general knowledge that the CT number of a given human tissue is approximately constant, a template image free of shading artifacts is generated by filling each segmented tissue with its corresponding CT number. Subtracting the template image from the uncorrected image yields a residual image containing both image detail and the shading artifact. The shading artifact consists mainly of low-frequency signals, whereas the image details are mainly high-frequency signals; we therefore propose an adaptive filter to separate the shading artifact from the image details accurately. Finally, the estimated shading artifact is subtracted from the raw image to generate the corrected image. RESULTS In the Catphan©504 study, the CT number error in the corrected image's region of interest (ROI) is reduced from 109 to 11 HU, and image contrast is increased by a factor of 1.46 on average. In the patient pelvis study, the CT number error in the selected ROI is reduced from 198 to 10 HU, and the SNU calculated from the ROIs decreases from 24% to 9% after correction. CONCLUSIONS The proposed shading correction method using a DCNN and an AF may find useful application in future clinical practice.
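The pipeline in the Methods section (template, residual, low-pass shading estimate, subtraction) can be sketched in one dimension, with a simple moving average standing in for the paper's adaptive filter; the threshold and CT numbers below are invented for illustration:

```python
def moving_average(signal, radius):
    """Simple low-pass filter standing in for the paper's adaptive filter."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def correct_shading(image, tissue_values, threshold, radius=2):
    """1-D sketch of the pipeline: build a template from a two-class
    segmentation, treat the low-frequency residual as shading, subtract it."""
    lo_val, hi_val = tissue_values
    template = [hi_val if p > threshold else lo_val for p in image]
    residual = [p - t for p, t in zip(image, template)]  # detail + shading
    shading = moving_average(residual, radius)           # keep low frequencies
    return [p - s for p, s in zip(image, shading)]

# A two-tissue signal corrupted by a constant shading offset of 10 HU.
image = [10, 10, 10, 110, 110, 110, 10, 10]
corrected = correct_shading(image, tissue_values=(0, 100), threshold=50)
```

The key design point survives the simplification: because the template removes the anatomy first, the low-pass filter only ever sees detail plus shading, so sharp tissue boundaries are not mistaken for artifact.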
Affiliation(s)
- Xiaokun Liang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Na Li
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Zhicheng Zhang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Shaode Yu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Wenjian Qin
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Yafen Li
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Huailing Zhang
- School of Information Engineering, Guangdong Medical University, Dongguan 523808, China
- Yaoqin Xie
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
147
A gentle introduction to deep learning in medical image processing. Z Med Phys 2019; 29:86-101. [DOI: 10.1016/j.zemedi.2018.12.003]
148
Guan S, Khan AA, Sikdar S, Chitnis PV. Fully Dense UNet for 2-D Sparse Photoacoustic Tomography Artifact Removal. IEEE J Biomed Health Inform 2019; 24:568-576. [PMID: 31021809 DOI: 10.1109/jbhi.2019.2912935]
Abstract
Photoacoustic imaging is an emerging imaging modality that is based upon the photoacoustic effect. In photoacoustic tomography (PAT), the induced acoustic pressure waves are measured by an array of detectors and used to reconstruct an image of the initial pressure distribution. A common challenge faced in PAT is that the measured acoustic waves can only be sparsely sampled. Reconstructing sparsely sampled data using standard methods results in severe artifacts that obscure information within the image. We propose a modified convolutional neural network (CNN) architecture termed fully dense UNet (FD-UNet) for removing artifacts from two-dimensional PAT images reconstructed from sparse data and compare the proposed CNN with the standard UNet in terms of reconstructed image quality.
149
Häggström I, Schmidtlein CR, Campanella G, Fuchs TJ. DeepPET: A deep encoder-decoder network for directly solving the PET image reconstruction inverse problem. Med Image Anal 2019; 54:253-262. [PMID: 30954852 DOI: 10.1016/j.media.2019.03.013]
Abstract
The purpose of this research was to implement a deep learning network to overcome two of the major bottlenecks in improved image reconstruction for clinical positron emission tomography (PET): the lack of an automated means for optimizing advanced image reconstruction algorithms, and the computational expense associated with these state-of-the-art methods. We thus present a novel end-to-end PET image reconstruction technique, called DeepPET, based on a deep convolutional encoder-decoder network, which takes PET sinogram data as input and directly and quickly outputs high-quality, quantitative PET images. Using simulated data derived from a whole-body digital phantom, we randomly sampled the configurable parameters to generate realistic images, which were each augmented to a total of more than 291,000 reference images. Realistic PET acquisitions of these images were simulated, resulting in noisy sinogram data used for training, validating, and testing the DeepPET network. We demonstrated that DeepPET generates higher-quality images than conventional techniques in terms of relative root mean squared error (11%/53% lower than ordered subset expectation maximization (OSEM) / filtered back-projection (FBP)), structural similarity index (1%/11% higher than OSEM/FBP), and peak signal-to-noise ratio (1.1/3.8 dB higher than OSEM/FBP). In addition, we show that DeepPET reconstructs images 108 and 3 times faster than OSEM and FBP, respectively. Finally, DeepPET was successfully applied to real clinical data. This study shows that an end-to-end encoder-decoder network can produce high-quality PET images at a fraction of the time compared to conventional methods.
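The rRMSE and PSNR figures quoted above follow from standard definitions, sketched here for flat image vectors (the data below are invented for illustration):

```python
import math

def rrmse(est, ref):
    """Relative root mean squared error against the reference image."""
    num = sum((e - r) ** 2 for e, r in zip(est, ref))
    den = sum(r ** 2 for r in ref)
    return math.sqrt(num / den)

def psnr(est, ref, peak):
    """Peak signal-to-noise ratio in dB for a given peak intensity."""
    mse = sum((e - r) ** 2 for e, r in zip(est, ref)) / len(ref)
    return 10 * math.log10(peak ** 2 / mse)

ref = [10.0, 20.0, 30.0, 40.0]
est = [11.0, 19.0, 31.0, 39.0]     # off by 1 everywhere -> MSE = 1
score = psnr(est, ref, peak=40.0)  # 10 * log10(1600) ~ 32.04 dB
```

Note that "X% lower rRMSE" and "Y dB higher PSNR" are not interchangeable: rRMSE is a ratio of errors, while each 10 dB of PSNR corresponds to a tenfold reduction in MSE.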
Affiliation(s)
- Ida Häggström
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- C Ross Schmidtlein
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Gabriele Campanella
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Department of Physiology and Biophysics, Weill Cornell Medicine, New York, NY 10065, United States
- Thomas J Fuchs
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Department of Physiology and Biophysics, Weill Cornell Medicine, New York, NY 10065, United States
150
Zou L, Yu S, Meng T, Zhang Z, Liang X, Xie Y. A Technical Review of Convolutional Neural Network-Based Mammographic Breast Cancer Diagnosis. Comput Math Methods Med 2019; 2019:6509357. [PMID: 31019547 PMCID: PMC6452645 DOI: 10.1155/2019/6509357]
Abstract
This study reviews convolutional neural network (CNN) techniques applied to the specific field of mammographic breast cancer diagnosis (MBCD). It aims to provide several clues on how to use CNNs for related tasks. MBCD is a long-standing problem, and numerous computer-aided diagnosis models have been proposed. The models of CNN-based MBCD can be broadly categorized into three groups. One is to design shallow models or to modify existing ones to decrease the time cost as well as the number of instances needed for training; another is to make the best use of a pretrained CNN through transfer learning and fine-tuning; the third uses CNN models for feature extraction, with the differentiation of malignant lesions from benign ones performed by machine learning classifiers. This study surveys peer-reviewed journal publications and presents the technical details and the pros and cons of each model. Furthermore, the findings, challenges, and limitations are summarized, and some directions for future work are given. In conclusion, CNN-based MBCD is at an early stage, and there is still a long way to go toward the ultimate goal of using deep learning tools to facilitate clinical practice. This review benefits scientific researchers, industrial engineers, and those who are devoted to intelligent cancer diagnosis.
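The third category of models described above, a CNN used as a fixed feature extractor feeding a conventional machine learning classifier, can be sketched framework-agnostically. In this illustrative sketch, `extract_features` is a hypothetical stand-in for a pretrained CNN's penultimate-layer activations, and a nearest-centroid rule stands in for the downstream classifier; none of these names come from the review itself:

```python
import numpy as np

def extract_features(images):
    # Hypothetical stand-in for a pretrained CNN's penultimate-layer
    # activations: flatten each image and summarize it with two statistics.
    flat = images.reshape(len(images), -1)
    return np.stack([flat.mean(axis=1), flat.std(axis=1)], axis=1)

def fit_centroids(features, labels):
    # One centroid per class (e.g., benign = 0, malignant = 1).
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(features, centroids):
    # Assign each feature vector to the class with the nearest centroid.
    classes = sorted(centroids)
    dists = np.stack(
        [np.linalg.norm(features - centroids[c], axis=1) for c in classes],
        axis=1,
    )
    return np.array(classes)[dists.argmin(axis=1)]
```

The appeal of this design, as the review notes, is that the expensive representation learning is reused from the pretrained network, while only the lightweight classifier is fit on the (often small) mammography dataset.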
Affiliation(s)
- Lian Zou
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Cancer Center of Sichuan Provincial People's Hospital, Chengdu, China
- Shaode Yu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX, USA
- Tiebao Meng
- Department of Medical Imaging, Sun Yat-sen University Cancer Center, Guangzhou, China
- Zhicheng Zhang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Xiaokun Liang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Medical Physics Division in the Department of Radiation Oncology, Stanford University, Palo Alto, CA, USA
- Yaoqin Xie
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China