151. Yuan N, Zhou J, Qi J. Half2Half: deep neural network based CT image denoising without independent reference data. Phys Med Biol 2020; 65:215020. [DOI: 10.1088/1361-6560/aba939]
152. Yang CC. Evaluation of Impact of Factors Affecting CT Radiation Dose for Optimizing Patient Dose Levels. Diagnostics (Basel) 2020; 10:E787. [PMID: 33028021] [PMCID: PMC7600150] [DOI: 10.3390/diagnostics10100787]
Abstract
The dose metrics and factors influencing radiation exposure for patients undergoing head, chest, and abdominal computed tomography (CT) scans were investigated to optimize patient dose levels. The local diagnostic reference levels (DRLs) of adult CT scans performed in our hospital were established from 28,147 consecutive examinations, including 5510 head scans, 9091 chest scans, and 13,526 abdominal scans. Among the six CT scanners used in our hospital, four are 64-slice multi-detector CT units (MDCT64), and the other two have more than 64 detector slices (MDCTH). Multivariate analysis was conducted to evaluate the effects of body size, kVp, mAs, and pitch on the volume CT dose index (CTDIvol). The local DRLs, expressed as the 75th percentile of CTDIvol, for head, chest, and abdominal scans performed on MDCT64 were 59.32, 9.24, and 10.64 mGy, respectively. The corresponding results for MDCTH were 57.90, 7.67, and 9.86 mGy. In the multivariate analysis, CTDIvol showed varying dependence on the predictors investigated in this study. All regression relationships had coefficients of determination (R²) larger than 0.75, indicating a good fit to the data. Overall, the results obtained through our workflow could guide modification of CT imaging procedures when local DRLs are unusually high compared with national DRLs.
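The local DRL described in this abstract is a simple percentile computation over recorded dose indices. A minimal sketch (the CTDIvol values below are made up for illustration, not the study's data):

```python
import numpy as np

def local_drl(ctdi_vol_values, percentile=75):
    """Local diagnostic reference level: the 75th percentile of CTDIvol
    (in mGy) over a series of examinations."""
    return float(np.percentile(ctdi_vol_values, percentile))

# Hypothetical CTDIvol readings (mGy) from head CT examinations.
head_scans = [48.1, 52.3, 55.0, 57.9, 59.4, 61.2, 63.8, 50.6]
drl = local_drl(head_scans)
```

In practice the percentile would be taken per protocol (head, chest, abdomen) and per scanner class, as the abstract does for MDCT64 versus MDCTH.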
Affiliation(s)
- Ching-Ching Yang
- Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung 80708, Taiwan;
- Department of Medical Research, Kaohsiung Medical University Chung-Ho Memorial Hospital, Kaohsiung 80708, Taiwan
153. Nemoto M, Chida K. Reducing the Breast Cancer Risk and Radiation Dose of Radiography for Scoliosis in Children: A Phantom Study. Diagnostics (Basel) 2020; 10:E753. [PMID: 32993028] [PMCID: PMC7600947] [DOI: 10.3390/diagnostics10100753]
Abstract
Full-spinal radiographs (FRs) are often the first-choice imaging modality in the investigation of scoliosis. However, the multiple large-field radiographic examinations taken during childhood and adolescence may increase the risk of breast cancer in adulthood among women with scoliosis. The purpose of this study was to examine various technical parameters to reduce the patient radiation dose of FRs for scoliosis. To evaluate breast surface doses (BSDs) in FRs, radiophotoluminescence dosimeters were placed in contact with a child phantom. Using the PC-based Monte Carlo (PMC) program for calculating patient doses in medical X-ray examinations, the breast organ dose (BOD) and the effective dose were calculated by performing Monte Carlo simulations with mathematical phantom models. The BSDs in the posteroanterior (PA) view were 0.15-0.34-fold those in the anteroposterior (AP) view. The effective dose in the PA view was 0.40-0.61-fold that in the AP view. BSD measurements were almost equivalent to the BODs obtained using PMC at all exposure settings. During FRs, the PA view without an anti-scatter grid significantly reduced the breast dose compared with the AP view with an anti-scatter grid.
Affiliation(s)
- Manami Nemoto
- Course of Radiological Technology, Health Sciences, Tohoku University Graduate School of Medicine, 2-1 Seiryo, Aoba, Sendai 980-8575, Miyagi, Japan;
- Koichi Chida
- Course of Radiological Technology, Health Sciences, Tohoku University Graduate School of Medicine, 2-1 Seiryo, Aoba, Sendai 980-8575, Miyagi, Japan;
- Department of Radiation Disaster Medicine, International Research Institute of Disaster Science, Tohoku University, 468-1 Aramaki Aza-Aoba, Aoba, Sendai 980-0845, Miyagi, Japan
154. Al'Aref SJ, Einstein AJ. Reduction in radiation exposure using a focused low-voltage scan before coronary CT angiography. J Cardiovasc Comput Tomogr 2020; 15:246-248. [PMID: 32948486] [DOI: 10.1016/j.jcct.2020.08.012]
Affiliation(s)
- Subhi J Al'Aref
- Division of Cardiology, Department of Medicine, University of Arkansas for Medical Sciences (UAMS), Little Rock, AR, United States
- Andrew J Einstein
- Department of Medicine, Seymour, Paul, and Gloria Milstein Division of Cardiology, And Department of Radiology, Columbia University Irving Medical Center and New York-Presbyterian Hospital, New York, NY, United States.
155. Lyu Q, Shan H, Steber C, Helis C, Whitlow CT, Chan M, Wang G. Multi-Contrast Super-Resolution MRI Through a Progressive Network. IEEE Trans Med Imaging 2020; 39:2738-2749. [PMID: 32086201] [PMCID: PMC7673259] [DOI: 10.1109/tmi.2020.2974858]
Abstract
Magnetic resonance imaging (MRI) is widely used for screening, diagnosis, image-guided therapy, and scientific research. A significant advantage of MRI over other imaging modalities such as computed tomography (CT) and nuclear imaging is that it clearly shows soft tissues in multiple contrasts. Compared with medical image super-resolution methods that operate on a single contrast, multi-contrast super-resolution methods can synergize multiple contrast images to achieve better results. In this paper, we propose a one-level non-progressive neural network for low-upsampling multi-contrast super-resolution and a two-level progressive network for high-upsampling multi-contrast super-resolution. The proposed networks integrate multi-contrast information in a high-level feature space and optimize imaging performance by minimizing a composite loss function comprising mean-squared-error, adversarial, perceptual, and textural losses. Our experimental results demonstrate that 1) the proposed networks produce MRI super-resolution images with good image quality and outperform other multi-contrast super-resolution methods in terms of structural similarity and peak signal-to-noise ratio; 2) combining multi-contrast information in a high-level feature space yields significantly better results than combining it in the low-level pixel space; and 3) the progressive network produces better super-resolution image quality than the non-progressive network, even when the original low-resolution images are highly down-sampled.
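The composite loss in this abstract combines pixel-wise, perceptual, and textural terms. A minimal numpy sketch of how such terms are typically assembled (the feature maps and weighting factors here are placeholders, not the paper's network; the adversarial term is omitted since it requires a trained discriminator):

```python
import numpy as np

def mse_loss(pred, target):
    # Pixel-wise mean-squared error.
    return float(np.mean((pred - target) ** 2))

def gram_matrix(feat):
    # feat: (channels, height*width) flattened feature map.
    # The Gram matrix of channel correlations is a standard texture descriptor.
    return feat @ feat.T / feat.shape[1]

def textural_loss(feat_pred, feat_target):
    return float(np.mean((gram_matrix(feat_pred) - gram_matrix(feat_target)) ** 2))

def composite_loss(pred, target, feat_pred, feat_target, w_perc=0.1, w_text=0.01):
    # Perceptual loss: MSE taken in feature space instead of pixel space.
    perceptual = float(np.mean((feat_pred - feat_target) ** 2))
    return (mse_loss(pred, target)
            + w_perc * perceptual
            + w_text * textural_loss(feat_pred, feat_target))
```

In a real training setup, `feat_pred` and `feat_target` would come from a fixed feature extractor (e.g. a pretrained network), and the weights `w_perc`, `w_text` would be tuned.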
Affiliation(s)
- Qing Lyu
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Cole Steber
- Department of Radiation Oncology, Wake Forest School of Medicine, Winston-Salem, NC, 27101, USA
- Corbin Helis
- Department of Radiation Oncology, Wake Forest School of Medicine, Winston-Salem, NC, 27101, USA
- Christopher T. Whitlow
- Department of Radiology, Department of Biomedical Engineering, and Department of Biostatistics and Data Science, Wake Forest School of Medicine, Winston-Salem, NC, 27157, USA
156. Ramon AJ, Yang Y, Pretorius PH, Johnson KL, King MA, Wernick MN. Improving Diagnostic Accuracy in Low-Dose SPECT Myocardial Perfusion Imaging With Convolutional Denoising Networks. IEEE Trans Med Imaging 2020; 39:2893-2903. [PMID: 32167887] [PMCID: PMC9472754] [DOI: 10.1109/tmi.2020.2979940]
Abstract
Lowering the administered dose in SPECT myocardial perfusion imaging (MPI) has become an important clinical problem. In this study we investigate the potential benefit of applying a deep learning (DL) approach for suppressing the elevated imaging noise in low-dose SPECT-MPI studies. We adopt a supervised learning approach to train a neural network on image pairs obtained from full-dose (target) and low-dose (input) acquisitions of the same patients. In the experiments, we used acquisitions from 1,052 subjects and demonstrated the approach for two reconstruction methods commonly used in clinical SPECT-MPI: 1) filtered backprojection (FBP), and 2) ordered-subsets expectation-maximization (OSEM) with corrections for attenuation, scatter, and resolution. We evaluated the DL output for the clinical task of perfusion-defect detection at a number of successively reduced dose levels (1/2, 1/4, 1/8, and 1/16 of full dose). The results indicate that the proposed DL approach can achieve substantial noise reduction and improve the diagnostic accuracy of low-dose data. In particular, at 1/2 dose, DL yielded an area under the ROC curve (AUC) of 0.799, nearly identical to the AUC of 0.801 obtained by OSEM at full dose (p = 0.73); similar results were obtained for FBP reconstruction. Moreover, even at 1/8 dose, DL achieved an AUC of 0.770 for OSEM, above the AUC of 0.755 obtained at full dose by FBP. These results indicate that, compared with conventional reconstruction filtering, DL denoising can allow additional dose reduction without sacrificing diagnostic accuracy in SPECT-MPI.
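The AUC values quoted in this abstract come from ROC analysis of a detection task; the AUC equals the probability that a randomly chosen defect-present case scores higher than a defect-absent one (the Mann-Whitney rank-sum formulation). A small illustrative computation, with invented scores:

```python
import numpy as np

def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    fraction of (positive, negative) pairs ranked correctly; ties count 0.5."""
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Perfectly separated scores give AUC = 1.0; chance-level ordering gives 0.5.
print(auc([0.9, 0.8], [0.2, 0.1]))
```

Comparing two AUCs (as the p = 0.73 in the abstract does) additionally requires a statistical test on correlated ROC curves, which is beyond this sketch.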
157. Kiyasseh D, Zhu T, Clifton D. The Promise of Clinical Decision Support Systems Targetting Low-Resource Settings. IEEE Rev Biomed Eng 2020; 15:354-371. [PMID: 32813662] [DOI: 10.1109/rbme.2020.3017868]
Abstract
Low-resource clinical settings are plagued by low physician-to-patient ratios and a shortage of high-quality medical expertise and infrastructure. Together, these phenomena lead to over-burdened healthcare systems that under-serve the needs of the community. This burden can be alleviated by introducing clinical decision support systems (CDSSs): systems that support stakeholders (ranging from physicians to patients) in their day-to-day clinical activities. Such systems, which have proven effective in the developed world, remain under-explored in low-resource settings. This review summarizes the research on clinical decision support systems that target either stakeholders within low-resource clinical settings or diseases commonly found in such environments. Categorizing our findings by disease application, we find that CDSSs predominantly address bacterial infections and maternal care, do not leverage deep learning, and have not been evaluated prospectively. Together, these findings highlight the need for increased research in this domain in order to address a diverse set of medical conditions and ultimately improve patient outcomes.
158. Generalization of diffusion magnetic resonance imaging-based brain age prediction model through transfer learning. Neuroimage 2020; 217:116831. [DOI: 10.1016/j.neuroimage.2020.116831]
159. Baffour FI, Glazebrook KN, Kumar SK, Broski SM. Role of imaging in multiple myeloma. Am J Hematol 2020; 95:966-977. [PMID: 32350883] [DOI: 10.1002/ajh.25846]
Abstract
With rapid advancements in the diagnosis and treatment of multiple myeloma (MM), imaging has become instrumental in detecting intramedullary and extramedullary disease, providing prognostic information, and assessing therapeutic efficacy. Whole-body low-dose computed tomography (WBLDCT) has emerged as the study of choice to detect osteolytic bone disease. Positron emission tomography/computed tomography (PET/CT) combines functional and morphologic information to identify MM disease activity and assess treatment response. Magnetic resonance imaging (MRI) has excellent soft-tissue contrast and is the modality of choice for bone marrow evaluation. This review focuses on the imaging modalities available for MM patient management, highlighting the advantages, disadvantages, and applications of each.
Affiliation(s)
- Shaji K. Kumar
- Department of Internal Medicine, Division of Hematology, Mayo Clinic, Rochester, Minnesota, USA
160. Sheng K. Artificial intelligence in radiotherapy: a technological review. Front Med 2020; 14:431-449. [PMID: 32728877] [DOI: 10.1007/s11684-020-0761-1]
Abstract
Radiation therapy (RT) is widely used to treat cancer. Substantial technological advances in RT have occurred over the past 30 years. These advances, such as three-dimensional image guidance, intensity modulation, and robotics, created challenges and opportunities for the next breakthrough, in which artificial intelligence (AI) will likely play important roles. AI will replace certain repetitive and labor-intensive tasks and improve the accuracy and consistency of others, particularly those whose complexity has increased because of technological advances. The improvement in efficiency and consistency is important for managing the increasing cancer burden on society. Furthermore, AI may provide new functionalities that facilitate satisfactory RT, including superior images for real-time intervention and adaptive and personalized RT. AI may effectively synthesize and analyze big data for such purposes. This review describes the RT workflow and identifies areas, including imaging, treatment planning, quality assurance, and outcome prediction, that can benefit from AI. It primarily focuses on deep-learning techniques, although conventional machine-learning techniques are also mentioned.
Affiliation(s)
- Ke Sheng
- Department of Radiation Oncology, University of California, Los Angeles, CA, 90095, USA.
161. Zhou Q, Ding M, Zhang X. Image Deblurring Using Multi-Stream Bottom-Top-Bottom Attention Network and Global Information-Based Fusion and Reconstruction Network. Sensors (Basel) 2020; 20:3724. [PMID: 32635206] [PMCID: PMC7374418] [DOI: 10.3390/s20133724]
Abstract
Image deblurring is a challenging ill-posed problem in computer vision. Gaussian blur is a common model for image and signal degradation. Deep learning-based deblurring methods have attracted much attention due to their advantages over traditional methods relying on hand-designed features. However, existing deep learning-based deblurring techniques still fall short in restoring fine details and reconstructing sharp edges. To address this issue, we have designed an effective end-to-end deep learning-based non-blind image deblurring algorithm. In the proposed method, a multi-stream bottom-top-bottom attention network (MBANet) with an encoder-to-decoder structure is designed to integrate low-level cues and high-level semantic information, which facilitates extracting image features more effectively and improves the computational efficiency of the network. Moreover, the MBANet adopts a coarse-to-fine multi-scale strategy to process the input images and improve deblurring performance. Furthermore, a global information-based fusion and reconstruction network is proposed to fuse multi-scale output maps, improving global spatial information and recurrently refining the output deblurred image. Experiments on the public GoPro dataset and the realistic and dynamic scenes (REDS) dataset evaluated the effectiveness and robustness of the proposed method. The results show that the proposed method generally outperforms some traditional deblurring methods and state-of-the-art deep learning-based deblurring methods such as the scale-recurrent network (SRN) and the denoising prior driven deep neural network (DPDNN) in terms of quantitative indexes such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), as well as human visual assessment.
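PSNR, one of the quantitative indexes cited in this abstract, is defined as 10·log10(MAX²/MSE), where MAX is the maximum possible pixel value. A minimal sketch:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 10 gray levels on 8-bit images gives roughly 28.13 dB.
ref = np.zeros((4, 4))
deg = np.full((4, 4), 10.0)
print(psnr(ref, deg))
```

Higher PSNR indicates a restored image closer to the reference; SSIM is a perceptually motivated complement that compares local luminance, contrast, and structure rather than raw pixel error.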
162. Li M, Hsu W, Xie X, Cong J, Gao W. SACNN: Self-Attention Convolutional Neural Network for Low-Dose CT Denoising With Self-Supervised Perceptual Loss Network. IEEE Trans Med Imaging 2020; 39:2289-2301. [PMID: 31985412] [DOI: 10.1109/tmi.2020.2968472]
Abstract
Computed tomography (CT) is a widely used screening and diagnostic tool that allows clinicians to obtain a high-resolution, volumetric image of internal structures in a non-invasive manner. Increasingly, efforts have been made to improve the image quality of low-dose CT (LDCT) to reduce the cumulative radiation exposure of patients undergoing routine screening exams. The resurgence of deep learning has yielded a new approach for noise reduction: training a deep multi-layer convolutional neural network (CNN) to map low-dose to normal-dose CT images. However, CNN-based methods rely heavily on convolutional kernels, which use fixed-size filters to process one local neighborhood within the receptive field at a time. As a result, they are not efficient at retrieving structural information across large regions. In this paper, we propose a novel 3D self-attention convolutional neural network for the LDCT denoising problem. Our 3D self-attention module leverages the 3D volume of CT images to capture a wide range of spatial information both within and between CT slices. With the help of the 3D self-attention module, CNNs can exploit pixels with stronger relationships regardless of their distance and achieve better denoising results. In addition, we propose a self-supervised learning scheme to train a domain-specific autoencoder as the perceptual loss function. We combine these two methods and demonstrate their effectiveness on both CNN-based and WGAN-based neural networks with comprehensive experiments. Tested on the AAPM-Mayo Clinic Low Dose CT Grand Challenge dataset, our experiments demonstrate that the self-attention (SA) module and the autoencoder (AE) perceptual loss function can efficiently enhance traditional CNNs and achieve results comparable to or better than the state-of-the-art methods.
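The self-attention mechanism this abstract relies on weights every position by its similarity to every other position, regardless of distance. A minimal single-head sketch in numpy (random matrices stand in for learned projections; this is an illustration of the mechanism, not the paper's 3D module):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stabilized
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """x: (n_positions, d) features; wq/wk/wv: (d, d_k) projections.
    Each output position is a similarity-weighted mix of all positions."""
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]))  # (n, n) weights, rows sum to 1
    return attn @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 8))                    # 6 positions, 8-dim features
wq, wk, wv = (rng.standard_normal((8, 4)) for _ in range(3))
out = self_attention(x, wq, wk, wv)                # shape (6, 4)
```

In the 3D setting of the paper, the positions would be voxels (or patches) spanning multiple CT slices, so attention can relate structures both within and between slices.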
163. Shan H, Jia X, Yan P, Li Y, Paganetti H, Wang G. Synergizing medical imaging and radiotherapy with deep learning. Mach Learn Sci Technol 2020. [DOI: 10.1088/2632-2153/ab869f]
164. Kalare KW, Bajpai MK. RecDNN: deep neural network for image reconstruction from limited view projection data. Soft Comput 2020. [DOI: 10.1007/s00500-020-05013-4]
165. Fan F, Shan H, Kalra MK, Singh R, Qian G, Getzin M, Teng Y, Hahn J, Wang G. Quadratic Autoencoder (Q-AE) for Low-Dose CT Denoising. IEEE Trans Med Imaging 2020; 39:2035-2050. [PMID: 31902758] [PMCID: PMC7376975] [DOI: 10.1109/tmi.2019.2963248]
Abstract
Inspired by the complexity and diversity of biological neurons, our group proposed quadratic neurons, which replace the inner product in current artificial neurons with a quadratic operation on the input data, thereby enhancing the capability of an individual neuron. Along this direction, we are motivated to evaluate the power of quadratic neurons in popular network architectures, simulating human-like learning in the form of "quadratic-neuron-based deep learning". Our prior theoretical studies have shown important merits of quadratic neurons and networks in representation, efficiency, and interpretability. In this paper, we use quadratic neurons to construct an encoder-decoder structure, referred to as the quadratic autoencoder, and apply it to low-dose CT denoising. Experimental results on the Mayo low-dose CT dataset demonstrate the utility and robustness of the quadratic autoencoder in terms of image denoising and model efficiency. To the best of our knowledge, this is the first time that a deep learning approach has been implemented with this new type of neuron, demonstrating significant potential in the medical imaging field.
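The quadratic neuron described in this abstract replaces the single inner product of a conventional neuron with a quadratic function of the input. A minimal numpy sketch of one published formulation, sigma((wr·x+br)(wg·x+bg) + wb·x² + c), with illustrative weight values (the exact parameterization used in the paper may differ):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conventional_neuron(x, w, b):
    # Standard neuron: nonlinearity applied to a single inner product.
    return sigmoid(w @ x + b)

def quadratic_neuron(x, wr, br, wg, bg, wb, c):
    """Quadratic neuron: product of two linear terms plus a power term,
    giving each neuron a second-order (quadric) decision surface."""
    return sigmoid((wr @ x + br) * (wg @ x + bg) + wb @ (x ** 2) + c)
```

Setting wg = 0, bg = 1, wb = 0, c = 0 recovers the conventional neuron exactly, which shows that quadratic neurons strictly generalize the standard inner-product neuron.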
Affiliation(s)
- Fenglei Fan
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Hongming Shan
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Mannudeep K. Kalra
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Ramandeep Singh
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Guhan Qian
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Matthew Getzin
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Yueyang Teng
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang 110169, China
- Juergen Hahn
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
166. Stepwise PathNet: a layer-by-layer knowledge-selection-based transfer learning algorithm. Sci Rep 2020; 10:8132. [PMID: 32424180] [PMCID: PMC7235242] [DOI: 10.1038/s41598-020-64165-3]
Abstract
A neural network can be trained by transfer learning, which uses a network pre-trained on a source task as a starting point for a small target-task dataset. The performance of transfer learning depends on the knowledge (i.e., layers) selected from the pre-trained network. At present, this knowledge is usually chosen by humans. The transfer learning method PathNet automatically selects pre-trained or adjustable modules in a modular neural network. However, PathNet requires a modular neural network as the pre-trained network, so non-modular pre-trained neural networks cannot be used; this limits the versatility of the network structure. To address this limitation, we propose Stepwise PathNet, which regards the layers of a non-modular pre-trained neural network as PathNet modules and selects layers automatically through training. In an experimental validation of transfer learning from InceptionV3 pre-trained on the ImageNet dataset to networks trained on three other datasets (CIFAR-100, SVHN, and Food-101), Stepwise PathNet was up to 8% and 10% more accurate than fine-tuned and from-scratch approaches, respectively. Also, some of the selected layers were not supported by the layer functions assumed in PathNet.
167.
Abstract
Deep learning research has demonstrated the effectiveness of using pre-trained networks as feature encoders. The large majority of these networks are trained on 2D datasets with millions of samples and diverse classes of information. We demonstrate and evaluate approaches to transferring deep 2D feature spaces to 3D in order to take advantage of these and related resources in the biomedical domain. First, we show how VGG-19 activations can be mapped to a 3D variant of the network (VGG-19-3D). Second, using varied medical decathlon data, we provide a technique for training 3D networks to predict the encodings induced by 3D VGG-19. Lastly, we compare five different 3D networks (one of which is trained only on 3D MRI and another of which is not trained at all) across layers and patch sizes in terms of their ability to identify hippocampal landmark points in 3D MRI data that was not included in their training. We make observations about the performance, recommend different networks and layers and make them publicly available for further evaluation.
168. Deep Learning-based Inaccuracy Compensation in Reconstruction of High Resolution XCT Data. Sci Rep 2020; 10:7682. [PMID: 32376852] [PMCID: PMC7203197] [DOI: 10.1038/s41598-020-64733-7]
Abstract
While X-ray computed tomography (XCT) is pushed further into the micro- and nanoscale, the limitations of various tool components and object motion become more apparent. For high-resolution XCT, it is necessary but practically difficult to align these tool components with sub-micron precision. The aim is to develop a novel reconstruction methodology that accounts for unavoidable misalignment and object motion during data acquisition in order to obtain high-quality three-dimensional images, and that is applicable for data recovery from incomplete datasets. Reconstruction software empowered by sophisticated correction modules that autonomously estimate and compensate artefacts using gradient descent and deep learning algorithms has been developed and applied. For motion estimation, a novel computer vision methodology coupled with a deep convolutional neural network provides estimates of object motion by tracking features throughout adjacent projections. The model is trained using forward projections of simulated phantoms consisting of several simple geometrical features such as spheres, triangles, and rectangles. The feature maps extracted by a neural network are used to detect and classify features with a support vector machine. For missing data recovery, a novel deep convolutional neural network is used to infer high-quality reconstruction data from incomplete sets of projections. Forward and back projections of simulated geometric shapes from a range of angular ranges are used to train the model. The model is able to learn the angular dependency based on limited angle coverage and to propose a new set of projections to suppress artefacts.
High-quality three-dimensional images demonstrate that it is possible to effectively suppress artefacts caused by the thermomechanical instability of tool components and objects resulting in motion, by center-of-rotation misalignment, and by inaccuracy in the detector position, without additional computational effort. Data recovery from incomplete sets of projections results in directly corrected projections instead of suppressing artefacts in the final reconstructed images. The proposed methodology has been proven and is demonstrated for a ball bearing sample. The reconstruction results are compared to prior corrections and benchmarked against a commercially available reconstruction software package. Compared to conventional approaches in XCT imaging and data analysis, the proposed methodology for generating high-quality three-dimensional X-ray images is fully autonomous. The methodology presented here has been proven for high-resolution micro-XCT and nano-XCT and is applicable at all length scales.
169. Feigin M, Freedman D, Anthony BW. A Deep Learning Framework for Single-Sided Sound Speed Inversion in Medical Ultrasound. IEEE Trans Biomed Eng 2020; 67:1142-1151. [DOI: 10.1109/tbme.2019.2931195]
170. Automatic mandibular canal detection using a deep convolutional neural network. Sci Rep 2020; 10:5711. [PMID: 32235882] [PMCID: PMC7109125] [DOI: 10.1038/s41598-020-62586-8]
Abstract
The practicability of deep learning techniques has been demonstrated by their successful implementation in varied fields, including diagnostic imaging for clinicians. In accordance with increasing demands in the healthcare industry, techniques for automatic prediction and detection are being widely researched. Particularly in dentistry, automated mandibular canal detection has become highly desirable for various reasons. The position of the inferior alveolar nerve (IAN), one of the major structures in the mandible, is crucial to preventing nerve injury during surgical procedures. However, automatic segmentation using cone-beam computed tomography (CBCT) poses certain difficulties, such as the complex appearance of the human skull, limited number of datasets, unclear edges, and noisy images. Using work-in-progress automation software, experiments were conducted with models based on 2D SegNet and 2D and 3D U-Nets as preliminary research for a dental segmentation automation tool. The 2D U-Net with adjacent images demonstrated a higher global accuracy (0.82) than naïve U-Net variants. The 2D SegNet showed the second-highest global accuracy (0.96), and the 3D U-Net showed the best global accuracy (0.99). An automated canal detection system based on deep learning will contribute significantly to efficient treatment planning and to reducing patient discomfort. This study is a preliminary report and an opportunity to explore the application of deep learning to other dental fields.
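The global accuracy figures reported in this abstract are the voxel-wise fraction of correctly labeled voxels. A minimal sketch (the arrays below are toy label maps, not CBCT data):

```python
import numpy as np

def global_accuracy(pred, target):
    """Fraction of voxels whose predicted label matches the ground truth."""
    pred = np.asarray(pred)
    target = np.asarray(target)
    return float((pred == target).mean())

# Toy 2x2 label maps: three of four labels agree, so accuracy is 0.75.
print(global_accuracy([[1, 0], [1, 1]], [[1, 0], [0, 1]]))
```

Note that for thin structures like the mandibular canal, global accuracy is dominated by background voxels; overlap metrics such as the Dice coefficient are usually reported alongside it for that reason.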
|
171
|
Hu H. Recent Advances of Bioresponsive Nano-Sized Contrast Agents for Ultra-High-Field Magnetic Resonance Imaging. Front Chem 2020; 8:203. [PMID: 32266217 PMCID: PMC7100386 DOI: 10.3389/fchem.2020.00203] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2020] [Accepted: 03/04/2020] [Indexed: 12/11/2022] Open
Abstract
Ultra-high-field magnetic resonance imaging (MRI) has been receiving enormous attention in both biomaterials research and clinical diagnosis. MRI contrast agents generally comprise T1-weighted and T2-weighted types: T1-weighted agents produce positive contrast enhancement (brighter images) by shortening the protons' longitudinal relaxation time, whereas T2-weighted agents produce negative contrast enhancement (darker images) by shortening the protons' transverse relaxation time. To meet the growing demands of MRI, ultra-high-field T2 MRI is attracting increasing research and clinical attention owing to its high resolution and detection accuracy. High-field MRI contrast agents are expected to achieve high imaging performance, with the chemical composition, molecular structure, and size of each agent exerting distinct influences in specific diagnostic tests. This review first presents recent advances in nanoparticle contrast agents for MRI. Multimodal molecular imaging with MRI for better monitoring of biological processes is then discussed. To accelerate the development of better contrast agents, deep learning and artificial intelligence (AI) can be integrated to optimize the crucial parameters of nanoparticle contrast agents and achieve high-resolution MRI prior to clinical application. Finally, prospects and challenges are summarized.
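The T1/T2 shortening mechanism summarized above is conventionally quantified by the linear relaxivity model (a textbook relation added here for context, not taken from the review), where r_i is the relaxivity of the contrast agent and [CA] its concentration:

```latex
% Observed relaxation rate as a function of contrast-agent concentration
\[
  \frac{1}{T_{i,\mathrm{obs}}} \;=\; \frac{1}{T_{i,0}} \;+\; r_i\,[\mathrm{CA}],
  \qquad i \in \{1, 2\}
\]
% T_{i,0}: relaxation time without agent. A larger r_1 gives brighter
% T1-weighted images; a larger r_2 gives darker T2-weighted images.
```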
Affiliation(s)
- Hailong Hu
- School of Aeronautics and Astronautics, Central South University, Changsha, China
- Research Center in Intelligent Thermal Structures for Aerospace, Central South University, Changsha, China
|
172
|
Zhu Q, Du B, Yan P. Boundary-Weighted Domain Adaptive Neural Network for Prostate MR Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:753-763. [PMID: 31425022 PMCID: PMC7015773 DOI: 10.1109/tmi.2019.2935018] [Citation(s) in RCA: 74] [Impact Index Per Article: 14.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Accurate segmentation of the prostate from magnetic resonance (MR) images provides useful information for prostate cancer diagnosis and treatment. However, automated prostate segmentation from 3D MR images faces several challenges. The lack of a clear edge between the prostate and other anatomical structures makes it challenging to accurately extract the boundaries. The complex background texture and large variations in the size, shape, and intensity distribution of the prostate itself complicate segmentation even further. Recently, with deep learning, especially convolutional neural networks (CNNs), emerging as the best-performing approach to medical image segmentation, the difficulty of obtaining large numbers of annotated medical images for training CNNs has become more pronounced than ever. Since a large-scale dataset is one of the critical components for the success of deep learning, the lack of sufficient training data makes it difficult to fully train complex CNNs. To tackle the above challenges, in this paper, we propose a boundary-weighted domain adaptive neural network (BOWDA-Net). To make the network more sensitive to the boundaries during segmentation, a boundary-weighted segmentation loss is proposed. Furthermore, an advanced boundary-weighted transfer learning approach is introduced to address the problem of small medical imaging datasets. We evaluate our proposed model on three different MR prostate datasets. The experimental results demonstrate that the proposed model is more sensitive to object boundaries and outperforms other state-of-the-art methods.
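A minimal stdlib-only sketch of the boundary-weighting idea (the weighting scheme and all names are illustrative assumptions, not the BOWDA-Net implementation): per-pixel errors are up-weighted wherever the ground-truth label changes, making the loss more sensitive to boundary mistakes.

```python
def boundary_weighted_loss(pred, target, boundary_weight=4.0):
    """Mean squared error with boundary pixels up-weighted.
    pred, target: 2D lists of floats (probabilities and 0/1 labels)."""
    height, width = len(target), len(target[0])

    def is_boundary(i, j):
        # A boundary pixel has at least one 4-neighbour with a different label.
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < height and 0 <= nj < width and target[ni][nj] != target[i][j]:
                return True
        return False

    total = norm = 0.0
    for i in range(height):
        for j in range(width):
            w = boundary_weight if is_boundary(i, j) else 1.0
            total += w * (pred[i][j] - target[i][j]) ** 2
            norm += w
    return total / norm

# A wrong prediction on a boundary pixel is penalized 4x more heavily.
target = [[0.0, 0.0], [0.0, 1.0]]
loss = boundary_weighted_loss([[0.0, 0.0], [0.0, 0.0]], target)
```

The same weighting idea applies unchanged to cross-entropy or Dice losses; squared error is used here only to keep the sketch short.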
Affiliation(s)
- Qikui Zhu
- School of Computer Science, Wuhan University, Wuhan, China
- Bo Du
- co-corresponding authors: B. Du, P. Yan
- Pingkun Yan
- co-corresponding authors: B. Du, P. Yan
|
173
|
Tao X, Zhang H, Wang Y, Yan G, Zeng D, Chen W, Ma J. VVBP-Tensor in the FBP Algorithm: Its Properties and Application in Low-Dose CT Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:764-776. [PMID: 31425024 DOI: 10.1109/tmi.2019.2935187] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
For decades, commercial X-ray computed tomography (CT) scanners have been using the filtered backprojection (FBP) algorithm for image reconstruction. However, the desire for lower radiation doses has pushed the FBP algorithm to its limit. Previous studies have made significant efforts to improve the results of FBP through preprocessing the sinogram, modifying the ramp filter, or postprocessing the reconstructed images. In this paper, we focus on analyzing and processing the stacked view-by-view backprojections (named VVBP-Tensor) in the FBP algorithm. A key challenge for our analysis lies in the radial structures in each backprojection slice. To overcome this difficulty, a sorting operation was introduced to the VVBP-Tensor in its z direction (the direction of the projection views). The results show that, after sorting, the tensor contains structures that are similar to those of the object, and structures in different slices of the tensor are correlated. We then analyzed the properties of the VVBP-Tensor, including structural self-similarity, tensor sparsity, and noise statistics. Considering these properties, we have developed an algorithm using the tensor singular value decomposition (named VVBP-tSVD) to denoise the VVBP-Tensor for low-mAs CT imaging. Experiments were conducted using a physical phantom and clinical patient data with different mAs levels. The results demonstrate that the VVBP-tSVD is superior to all competing methods under different reconstruction schemes, including sinogram preprocessing, image postprocessing, and iterative reconstruction. We conclude that the VVBP-Tensor is a suitable processing target for improving the quality of FBP reconstruction, and the proposed VVBP-tSVD is an effective algorithm for noise reduction in low-mAs CT imaging. This preliminary work might provide a heuristic perspective for reviewing and rethinking the FBP algorithm.
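The pixel-wise sorting of the stacked backprojections along the view (z) direction described above can be sketched in a few lines of stdlib Python (an illustrative toy, not the authors' code): at every pixel, the values across all views are replaced by their sorted order, so each slice of the sorted tensor collects values of similar rank.

```python
def sort_vvbp_tensor(tensor):
    """tensor: list of V backprojection slices, each an H x W list of lists.
    Returns a new tensor sorted along the view axis at every pixel."""
    n_views = len(tensor)
    height, width = len(tensor[0]), len(tensor[0][0])
    out = [[[0.0] * width for _ in range(height)] for _ in range(n_views)]
    for i in range(height):
        for j in range(width):
            ranked = sorted(tensor[v][i][j] for v in range(n_views))
            for v in range(n_views):
                out[v][i][j] = ranked[v]
    return out

# Three views of a 1 x 2 image: pixel (0, 0) holds 3, 1, 2 across views.
views = [[[3.0, 1.0]], [[1.0, 2.0]], [[2.0, 0.0]]]
sorted_views = sort_vvbp_tensor(views)
```

After this step, the paper applies tensor singular value decomposition to the sorted stack; the SVD itself is omitted here for brevity.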
|
174
|
Liu Z, Bicer T, Kettimuthu R, Gursoy D, De Carlo F, Foster I. TomoGAN: low-dose synchrotron x-ray tomography with generative adversarial networks: discussion. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2020; 37:422-434. [PMID: 32118926 DOI: 10.1364/josaa.375595] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/16/2019] [Accepted: 01/08/2020] [Indexed: 06/10/2023]
Abstract
Synchrotron-based x-ray tomography is a noninvasive imaging technique that allows for reconstructing the internal structure of materials at high spatial resolutions from tens of micrometers to a few nanometers. In order to resolve sample features at smaller length scales, however, a higher radiation dose is required. Therefore, the limitation on the achievable resolution is set primarily by noise at these length scales. We present TomoGAN, a denoising technique based on generative adversarial networks, for improving the quality of reconstructed images for low-dose imaging conditions. We evaluate our approach in two photon-budget-limited experimental conditions: (1) sufficient number of low-dose projections (based on Nyquist sampling), and (2) insufficient or limited number of high-dose projections. In both cases, the angular sampling is assumed to be isotropic, and the photon budget throughout the experiment is fixed based on the maximum allowable radiation dose on the sample. Evaluation with both simulated and experimental datasets shows that our approach can significantly reduce noise in reconstructed images, improving the structural similarity score of simulation and experimental data from 0.18 to 0.9 and from 0.18 to 0.41, respectively. Furthermore, the quality of the reconstructed images with filtered back projection followed by our denoising approach exceeds that of reconstructions with the simultaneous iterative reconstruction technique, showing the computational superiority of our approach.
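For context, the structural similarity scores quoted above refer to the standard SSIM index (standard definition, not reproduced from the paper):

```latex
\[
  \mathrm{SSIM}(x, y) \;=\;
  \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}
       {(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}
\]
% \mu_x, \mu_y: local means; \sigma_x^2, \sigma_y^2: variances;
% \sigma_{xy}: covariance; C_1, C_2: small stabilizing constants.
```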
|
175
|
Chen J, Bi S, Zhang G, Cao G. High-Density Surface EMG-Based Gesture Recognition Using a 3D Convolutional Neural Network. SENSORS 2020; 20:s20041201. [PMID: 32098264 PMCID: PMC7070985 DOI: 10.3390/s20041201] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/08/2020] [Revised: 02/13/2020] [Accepted: 02/19/2020] [Indexed: 11/23/2022]
Abstract
High-density surface electromyography (HD-sEMG) and deep learning technology are increasingly being used in gesture recognition. Based on electrode grid data, information can be extracted in the form of images generated from instantaneous values of multi-channel sEMG signals. In previous studies, image-based two-dimensional convolutional neural networks (2D CNNs) have been applied to recognize patterns in the electrical activity of muscles from an instantaneous image. However, 2D CNNs with 2D kernels are unable to handle a sequence of images that carry information about how the instantaneous image evolves with time. This paper presents a 3D CNN with 3D kernels to capture both spatial and temporal structures from sequential sEMG images and investigates its performance on HD-sEMG-based gesture recognition in comparison to the 2D CNN. Extensive experiments were carried out on two benchmark datasets (i.e., CapgMyo DB-a and CSL-HDEMG). The results show that, where the same network architecture is used, the 3D CNN can achieve better performance than the 2D CNN, especially for CSL-HDEMG, which contains the dynamic part of finger movement. For CapgMyo DB-a, the accuracy of the 3D CNN was 1% higher than that of the 2D CNN when the recognition window length was 40 ms, and 1.5% higher when it was 150 ms. For CSL-HDEMG, the accuracies of the 3D CNN were 15.3% and 18.6% higher than those of the 2D CNN for window lengths of 40 ms and 150 ms, respectively. Furthermore, the 3D CNN achieves competitive performance in comparison to the baseline methods.
Affiliation(s)
- Jiangcheng Chen
- Shenzhen Academy of Robotics, Shenzhen 518057, China
- Correspondence: (J.C.); (S.B.)
- Sheng Bi
- Shenzhen Academy of Robotics, Shenzhen 518057, China
- School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
- Correspondence: (J.C.); (S.B.)
- George Zhang
- Shenzhen Academy of Robotics, Shenzhen 518057, China
- Guangzhong Cao
- Shenzhen Key Laboratory of Electromagnetic Control, Shenzhen University, Shenzhen 518060, China
|
176
|
Xue X, Wang Y, Li J, Jiao Z, Ren Z, Gao X. Progressive Sub-Band Residual-Learning Network for MR Image Super Resolution. IEEE J Biomed Health Inform 2020; 24:377-386. [DOI: 10.1109/jbhi.2019.2945373] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
177
|
Lei Y, Tian Y, Shan H, Zhang J, Wang G, Kalra MK. Shape and margin-aware lung nodule classification in low-dose CT images via soft activation mapping. Med Image Anal 2020; 60:101628. [DOI: 10.1016/j.media.2019.101628] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2019] [Revised: 12/04/2019] [Accepted: 12/06/2019] [Indexed: 10/25/2022]
|
178
|
Alla Takam C, Samba O, Tchagna Kouanou A, Tchiotsop D. Spark Architecture for deep learning-based dose optimization in medical imaging. INFORMATICS IN MEDICINE UNLOCKED 2020. [DOI: 10.1016/j.imu.2020.100335] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
|
179
|
Xie H, Shan H, Cong W, Liu C, Zhang X, Liu S, Ning R, Wang GE. Deep Efficient End-to-end Reconstruction (DEER) Network for Few-view Breast CT Image Reconstruction. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2020; 8:196633-196646. [PMID: 33251081 PMCID: PMC7695229 DOI: 10.1109/access.2020.3033795] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Breast CT provides image volumes with isotropic resolution in high contrast, enabling detection of small calcifications (down to a few hundred microns in size) and subtle density differences. Since the breast is sensitive to x-ray radiation, dose reduction in breast CT is an important topic, and for this purpose, few-view scanning is a main approach. In this article, we propose a Deep Efficient End-to-end Reconstruction (DEER) network for few-view breast CT image reconstruction. The major merits of our network include high dose efficiency, excellent image quality, and low model complexity. By design, the proposed network can learn the reconstruction process with as few as O(N) parameters, where N is the side length of an image to be reconstructed, which represents an orders-of-magnitude improvement relative to state-of-the-art deep-learning-based reconstruction methods that map raw data to tomographic images directly. Also, validated on a cone-beam breast CT dataset prepared by Koning Corporation on a commercial scanner, our method demonstrates competitive performance over state-of-the-art reconstruction networks in terms of image quality. The source code of this paper is available at: https://github.com/HuidongXie/DEER.
Affiliation(s)
- Huidong Xie
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY USA
- Hongming Shan
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY USA
- Wenxiang Cong
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY USA
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Ruola Ning
- Koning Corporation, West Henrietta, NY USA
- G E Wang
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY USA
|
180
|
Kim J, Kim J, Han G, Rim C, Jo H. Low-dose CT Image Restoration using generative adversarial networks. INFORMATICS IN MEDICINE UNLOCKED 2020. [DOI: 10.1016/j.imu.2020.100468] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022] Open
|
181
|
You C, Li G, Zhang Y, Zhang X, Shan H, Li M, Ju S, Zhao Z, Zhang Z, Cong W, Vannier MW, Saha PK, Hoffman EA, Wang G. CT Super-Resolution GAN Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE). IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:188-203. [PMID: 31217097 PMCID: PMC11662229 DOI: 10.1109/tmi.2019.2922960] [Citation(s) in RCA: 181] [Impact Index Per Article: 36.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2023]
Abstract
In this paper, we present a semi-supervised deep learning approach to accurately recover high-resolution (HR) CT images from low-resolution (LR) counterparts. Specifically, with the generative adversarial network (GAN) as the building block, we enforce cycle-consistency in terms of the Wasserstein distance to establish a nonlinear end-to-end mapping from noisy LR input images to denoised and deblurred HR outputs. We also include joint constraints in the loss function to facilitate structural preservation. In this process, we incorporate deep convolutional neural network (CNN), residual learning, and network-in-network techniques for feature extraction and restoration. In contrast to the current trend of increasing network depth and complexity to boost imaging performance, we apply a parallel 1×1 CNN to compress the output of the hidden layer and optimize the number of layers and the number of filters for each convolutional layer. The quantitative and qualitative evaluation results demonstrate that our proposed model is accurate, efficient, and robust for super-resolution (SR) image restoration from noisy LR input images. In particular, we validate our composite SR networks on three large-scale CT datasets, and obtain promising results compared with other state-of-the-art methods.
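The cycle-consistency idea referenced in the abstract has the generic form below (schematic only, with our notation; GAN-CIRCLE enforces the constraint via the Wasserstein distance and adds identity and supervised terms), where G maps LR images to HR and F maps HR back to LR:

```latex
\[
  L_{\mathrm{cyc}}(G, F) \;=\;
  \mathbb{E}_{x \sim p_{\mathrm{LR}}}\!\big[\lVert F(G(x)) - x \rVert\big]
  \;+\;
  \mathbb{E}_{y \sim p_{\mathrm{HR}}}\!\big[\lVert G(F(y)) - y \rVert\big]
\]
% Each image should survive a round trip through both generators,
% which regularizes training when paired LR/HR data are scarce.
```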
|
182
|
Xie H, Shan H, Wang G. Deep Encoder-Decoder Adversarial Reconstruction (DEAR) Network for 3D CT from Few-View Data. Bioengineering (Basel) 2019; 6:E111. [PMID: 31835430 PMCID: PMC6956312 DOI: 10.3390/bioengineering6040111] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2019] [Revised: 11/20/2019] [Accepted: 12/05/2019] [Indexed: 11/16/2022] Open
Abstract
X-ray computed tomography (CT) is widely used in clinical practice. The involved ionizing X-ray radiation, however, could increase cancer risk. Hence, the reduction of the radiation dose has been an important topic in recent years. Few-view CT image reconstruction is one of the main ways to minimize radiation dose and potentially allow a stationary CT architecture. In this paper, we propose a deep encoder-decoder adversarial reconstruction (DEAR) network for 3D CT image reconstruction from few-view data. Since the artifacts caused by few-view reconstruction appear in 3D instead of 2D geometry, a 3D deep network has a great potential for improving the image quality in a data-driven fashion. More specifically, our proposed DEAR-3D network aims at reconstructing a 3D volume directly from clinical 3D spiral cone-beam image data. DEAR is validated on a publicly available abdominal CT dataset prepared and authorized by Mayo Clinic. Compared with other 2D deep learning methods, the proposed DEAR-3D network can utilize 3D information to produce promising reconstruction results.
Affiliation(s)
- Ge Wang
- Biomedical Imaging Center, Department of Biomedical Engineering, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY 12180, USA; (H.X.); (H.S.)
|
183
|
Manee V, Zhu W, Romagnoli JA. A Deep Learning Image-Based Sensor for Real-Time Crystal Size Distribution Characterization. Ind Eng Chem Res 2019. [DOI: 10.1021/acs.iecr.9b02450] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Affiliation(s)
- V. Manee
- Department of Chemical Engineering, Louisiana State University, Baton Rouge, Louisiana 70803, United States
- W. Zhu
- Department of Chemical Engineering, Louisiana State University, Baton Rouge, Louisiana 70803, United States
- J. A. Romagnoli
- Department of Chemical Engineering, Louisiana State University, Baton Rouge, Louisiana 70803, United States
|
184
|
Javaid U, Souris K, Dasnoy D, Huang S, Lee JA. Mitigating inherent noise in Monte Carlo dose distributions using dilated U-Net. Med Phys 2019; 46:5790-5798. [PMID: 31600829 DOI: 10.1002/mp.13856] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2019] [Revised: 09/17/2019] [Accepted: 09/29/2019] [Indexed: 01/22/2023] Open
Abstract
PURPOSE Monte Carlo (MC) algorithms offer accurate modeling of dose calculation by simulating the transport and interactions of many particles through the patient geometry. However, given their random nature, the resulting dose distributions have statistical uncertainty (noise), which prevents making reliable clinical decisions. This issue is partly addressable by using a huge number of simulated particles but is computationally expensive, as it results in significantly greater computation times. Therefore, there is a trade-off between the computation time and the noise level in MC dose maps. In this work, we address the mitigation of noise inherent to MC dose distributions using dilated U-Net, an encoder-decoder-styled fully convolutional neural network, which allows fast and fully automated denoising of whole-volume dose maps. METHODS We use mean squared error (MSE) as the loss function to train the model, where training is done in 2D and 2.5D settings by considering a number of adjacent slices. Our model is trained on proton therapy MC dose distributions of different tumor sites (brain, head and neck, liver, lungs, and prostate) acquired from 35 patients. We provide the network with input MC dose distributions simulated using 1 × 10⁶ particles, while keeping 1 × 10⁹ particles as reference. RESULTS After training, our model successfully denoises new MC dose maps. On average (over five patients with different tumor sites), our model recovers a D95 of 55.99 Gy from the noisy MC input of 49.51 Gy, whereas the low-noise MC reference offers 56.03 Gy. We observed a significantly smaller average RMSE (thresholded >10% of the reference maximum) for reference vs denoised (1.25 Gy) than for reference vs input (16.96 Gy), leading to an improvement in signal-to-noise ratio (ISNR) of 18.06 dB. Moreover, the inference time of our model for a dose distribution is less than 10 s, vs 100 min for the MC simulation using 1 × 10⁹ particles.
CONCLUSIONS We propose an end-to-end fully convolutional network that can denoise Monte Carlo dose distributions. The network provides qualitative and quantitative results comparable to the MC dose distribution simulated with 1 × 10⁹ particles, offering a significant reduction in computation time.
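The RMSE-to-ISNR relationship used in the results above can be illustrated with a short stdlib sketch (a common definition of ISNR as the dB ratio of input error to residual error; the paper's exact definition and thresholding are not reproduced here):

```python
import math

def rmse(a, b):
    # Root-mean-square error between two flattened dose maps.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def isnr_db(reference, noisy, denoised):
    # Improvement in SNR: input error over residual error, in decibels.
    return 20.0 * math.log10(rmse(reference, noisy) / rmse(reference, denoised))

# Toy example: denoising shrinks the error tenfold, i.e. a 20 dB gain.
gain = isnr_db([0.0, 0.0], [10.0, 10.0], [1.0, 1.0])
```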
Affiliation(s)
- Umair Javaid
- ICTEAM, UCLouvain, Louvain-la-Neuve, 1348, Belgium
- IREC/MIRO, UCLouvain, Brussels, 1200, Belgium
- Sheng Huang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- John A Lee
- ICTEAM, UCLouvain, Louvain-la-Neuve, 1348, Belgium
- IREC/MIRO, UCLouvain, Brussels, 1200, Belgium
|
185
|
Gjesteby L, Shan H, Yang Q, Xi Y, Jin Y, Giantsoudi D, Paganetti H, De Man B, Wang G. A dual-stream deep convolutional network for reducing metal streak artifacts in CT images. ACTA ACUST UNITED AC 2019; 64:235003. [PMID: 31618724 DOI: 10.1088/1361-6560/ab4e3e] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Machine learning and deep learning are rapidly finding applications in the medical imaging field. In this paper, we address the long-standing problem of metal artifacts in computed tomography (CT) images by training a dual-stream deep convolutional neural network for streak removal. While many metal artifact reduction methods exist, even state-of-the-art algorithms fall short in some clinical applications. Specifically, proton therapy planning requires high image quality with accurate tumor volumes to ensure treatment success. We explore a dual-stream deep network structure with residual learning to correct metal streak artifacts after a first-pass by a state-of-the-art interpolation-based algorithm, NMAR. We provide the network with a mask of the streaks in order to focus attention on those areas. Our experiments compare a mean squared error loss function with a perceptual loss function to emphasize preservation of image features and texture. Both visual and quantitative metrics are used to assess the resulting image quality for metal implant cases. Success may be due to the duality of information processing, with one network stream performing local structure correction, while the other stream provides an attention mechanism to destreak effectively. This study shows that image-domain deep learning can be highly effective for metal artifact reduction (MAR), and highlights the benefits and drawbacks of different loss functions for solving a major CT reconstruction challenge.
|
186
|
Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09788-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
187
|
Zuo W, Zhou F, He Y, Li X. Automatic classification of lung nodule candidates based on a novel 3D convolution network and knowledge transferred from a 2D network. Med Phys 2019; 46:5499-5513. [PMID: 31621916 DOI: 10.1002/mp.13867] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2019] [Revised: 10/01/2019] [Accepted: 10/02/2019] [Indexed: 12/19/2022] Open
Abstract
OBJECTIVE In the automatic lung nodule detection system, the authenticity of a large number of nodule candidates needs to be judged, which is a classification task. However, the variable shapes and sizes of the lung nodules have posed a great challenge to the classification of candidates. To solve this problem, we propose a method for classifying nodule candidates through three-dimensional (3D) convolution neural network (ConvNet) model which is trained by transferring knowledge from a multiresolution two-dimensional (2D) ConvNet model. METHODS In this scheme, a novel 3D ConvNet model is preweighted with the weights of the trained 2D ConvNet model, and then the 3D ConvNet model is trained with 3D image volumes. In this way, the knowledge transfer method can make 3D network easier to converge and make full use of the spatial information of nodules with different sizes and shapes to improve the classification accuracy. RESULTS The experimental results on 551 065 pulmonary nodule candidates in the LUNA16 dataset show that our method gains a competitive average score in the false-positive reduction track in lung nodule detection, with the sensitivities of 0.619 and 0.642 at 0.125 and 0.25 FPs per scan, respectively. CONCLUSIONS The proposed method can maintain satisfactory classification accuracy even when the false-positive rate is extremely small in the face of nodules of different sizes and shapes. Moreover, as a transfer learning idea, the method to transfer knowledge from 2D ConvNet to 3D ConvNet is the first attempt to carry out full migration of parameters of various layers including convolution layers, full connection layers, and classifier between different dimensional models, which is more conducive to utilizing the existing 2D ConvNet resources and generalizing transfer learning schemes.
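One common way to preweight a 3D ConvNet with weights from a trained 2D ConvNet is to replicate each 2D kernel along the new depth axis, the "inflation" trick. This is an illustrative stdlib sketch of that general idea only; the paper's transfer scheme covers all layer types and may differ in detail.

```python
def inflate_kernel_2d_to_3d(kernel2d, depth):
    # Replicate a trained 2D kernel along a new depth axis, dividing by
    # depth so that a depth-constant input yields the same response as
    # the original 2D convolution.
    return [[[w / depth for w in row] for row in kernel2d] for _ in range(depth)]

kernel3d = inflate_kernel_2d_to_3d([[1.0, 2.0], [3.0, 4.0]], depth=4)
```

Summing the inflated kernel over its depth axis recovers the original 2D kernel, which is what makes the 3D network start from a sensible initialization.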
Affiliation(s)
- Wangxia Zuo
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100083, China
- College of Electrical Engineering, University of South China, Hengyang, Hunan, 421001, China
- Fuqiang Zhou
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100083, China
- Yuzhu He
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100083, China
- Xiaosong Li
- School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing, 100083, China
|
188
|
Li Y, Li K, Zhang C, Montoya J, Chen GH. Learning to Reconstruct Computed Tomography Images Directly From Sinogram Data Under A Variety of Data Acquisition Conditions. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2469-2481. [PMID: 30990179 PMCID: PMC7962902 DOI: 10.1109/tmi.2019.2910760] [Citation(s) in RCA: 81] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Computed tomography (CT) is widely used in medical diagnosis and non-destructive detection. Image reconstruction in CT aims to accurately recover pixel values from measured line integrals, i.e., the summed pixel values along straight lines. Provided that the acquired data satisfy the data sufficiency condition as well as other conditions regarding the view angle sampling interval and the severity of transverse data truncation, researchers have discovered many solutions to accurately reconstruct the image. However, if these conditions are violated, accurate image reconstruction from line integrals remains an intellectual challenge. In this paper, a deep learning method with a common network architecture, termed iCT-Net, was developed and trained to accurately reconstruct images for previously solved and unsolved CT reconstruction problems with high quantitative accuracy. Particularly, accurate reconstructions were achieved for the case when the sparse view reconstruction problem (i.e., compressed sensing problem) is entangled with the classical interior tomographic problems.
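The line integrals defined above ("the summed pixel values along straight lines") reduce, for axis-aligned rays, to simple row and column sums; a toy stdlib sketch of two sinogram rows:

```python
def line_integrals(image):
    # Axis-aligned line integrals (ray sums) of a discrete image:
    # row sums give the 0-degree projection, column sums the 90-degree one.
    row_sums = [sum(row) for row in image]
    col_sums = [sum(col) for col in zip(*image)]
    return row_sums, col_sums

rows, cols = line_integrals([[1.0, 2.0], [3.0, 4.0]])
# rows == [3.0, 7.0], cols == [4.0, 6.0]
```

General view angles require interpolated ray tracing through the pixel grid; iCT-Net learns the inverse of this forward model from data.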
Affiliation(s)
- Yinsheng Li
- Department of Medical Physics at the University of Wisconsin-Madison
- Ke Li
- Department of Medical Physics at the University of Wisconsin-Madison
- Department of Radiology at the University of Wisconsin-Madison
- Chengzhu Zhang
- Department of Medical Physics at the University of Wisconsin-Madison
- Juan Montoya
- Department of Medical Physics at the University of Wisconsin-Madison
- Guang-Hong Chen
- Department of Medical Physics at the University of Wisconsin-Madison
- Department of Radiology at the University of Wisconsin-Madison
|
189
|
The Role of Generative Adversarial Networks in Radiation Reduction and Artifact Correction in Medical Imaging. J Am Coll Radiol 2019; 16:1273-1278. [DOI: 10.1016/j.jacr.2019.05.040] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2019] [Accepted: 05/23/2019] [Indexed: 01/08/2023]
|
190
|
Kim B, Han M, Shim H, Baek J. A performance comparison of convolutional neural network-based image denoising methods: The effect of loss functions on low-dose CT images. Med Phys 2019; 46:3906-3923. [PMID: 31306488 PMCID: PMC9555720 DOI: 10.1002/mp.13713] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2019] [Revised: 07/03/2019] [Accepted: 07/05/2019] [Indexed: 02/04/2023] Open
Abstract
PURPOSE Convolutional neural network (CNN)-based image denoising techniques have shown promising results in low-dose CT denoising. However, CNN often introduces blurring in denoised images when trained with a widely used pixel-level loss function. Perceptual loss and adversarial loss have been proposed recently to further improve the image denoising performance. In this paper, we investigate the effect of different loss functions on image denoising performance using task-based image quality assessment methods for various signals and dose levels. METHODS We used a modified version of U-net that was effective at reducing the correlated noise in CT images. The loss functions used for comparison were two pixel-level losses (i.e., the mean-squared error and the mean absolute error), Visual Geometry Group network-based perceptual loss (VGG loss), adversarial loss used to train the Wasserstein generative adversarial network with gradient penalty (WGAN-GP), and their weighted summation. Each image denoising method was applied to reconstructed images and sinogram images independently and validated using the extended cardiac-torso (XCAT) simulation and Mayo Clinic datasets. In the XCAT simulation, we generated fan-beam CT datasets with four different dose levels (25%, 50%, 75%, and 100% of a normal-dose level) using 10 XCAT phantoms and inserted signals in a test set. The signals had two different shapes (spherical and spiculated), sizes (4 and 12 mm), and contrast levels (60 and 160 HU). To evaluate signal detectability, we used a detection task SNR (tSNR) calculated from a non-prewhitening model observer with an eye filter. We also measured the noise power spectrum (NPS) and modulation transfer function (MTF) to compare the noise and signal transfer properties. RESULTS Compared to CNNs without VGG loss, VGG-loss-based CNNs achieved a more similar tSNR to that of the normal-dose CT for all signals at different dose levels except for a small signal at the 25% dose level. 
For a low-contrast signal at the 25% or 50% dose level, adding the other losses to the VGG loss improved performance further relative to using the VGG loss alone. The NPS shapes from VGG-loss-based CNNs closely matched those of normal-dose CT images, while CNNs without VGG loss overly reduced the mid-to-high-frequency noise power at all dose levels. The MTF results also showed that VGG-loss-based CNNs better preserved high resolution for all dose and contrast levels. We also observed that the additional WGAN-GP loss helps improve the noise and signal transfer properties of VGG-loss-based CNNs. CONCLUSIONS The evaluation results using tSNR, NPS, and MTF indicate that VGG-loss-based CNNs are more effective than those without VGG loss for natural denoising of low-dose images, and that the WGAN-GP loss further improves the denoising performance of VGG-loss-based CNNs, consistent with the qualitative evaluation.
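The weighted-summation loss design compared in this study can be outlined in a few lines. The sketch below is illustrative only, not the authors' implementation: the weights `w_px`, `w_vgg`, `w_adv` are hypothetical placeholders, and the perceptual (VGG-style) and adversarial (WGAN-GP-style) terms are passed in as stand-in callables rather than real networks:

```python
import numpy as np

def pixel_loss(pred, target, kind="mse"):
    """Pixel-level loss: mean-squared error or mean absolute error."""
    diff = pred - target
    return np.mean(diff ** 2) if kind == "mse" else np.mean(np.abs(diff))

def combined_loss(pred, target, perceptual_fn, adversarial_fn,
                  w_px=1.0, w_vgg=0.1, w_adv=0.01):
    """Weighted summation of pixel, perceptual, and adversarial terms,
    mirroring the loss combinations compared in the study.

    perceptual_fn and adversarial_fn stand in for the VGG and WGAN-GP
    network-based terms; the weights are hypothetical."""
    return (w_px * pixel_loss(pred, target)
            + w_vgg * perceptual_fn(pred, target)
            + w_adv * adversarial_fn(pred))
```

In practice each term would be computed by a trained network, and the weights tuned per task, e.g., against the tSNR/NPS/MTF criteria the paper uses.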
Affiliation(s)
- Byeongjoon Kim: School of Integrated Technology and Yonsei Institute of Convergence Technology, Yonsei University, Incheon 21983, South Korea
- Minah Han: School of Integrated Technology and Yonsei Institute of Convergence Technology, Yonsei University, Incheon 21983, South Korea
- Hyunjung Shim: School of Integrated Technology and Yonsei Institute of Convergence Technology, Yonsei University, Incheon 21983, South Korea
- Jongduk Baek: School of Integrated Technology and Yonsei Institute of Convergence Technology, Yonsei University, Incheon 21983, South Korea
|
191
|
Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: A review. Med Image Anal 2019; 58:101552. [PMID: 31521965 DOI: 10.1016/j.media.2019.101552] [Citation(s) in RCA: 597] [Impact Index Per Article: 99.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2018] [Revised: 08/23/2019] [Accepted: 08/30/2019] [Indexed: 01/30/2023]
Abstract
Generative adversarial networks have gained a lot of attention in the computer vision community due to their capability of data generation without explicitly modelling the probability density function. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into training and imposing higher order consistency. This has proven to be useful in many cases, such as domain adaptation, data augmentation, and image-to-image translation. These properties have attracted researchers in the medical imaging community, and we have seen rapid adoption in many traditional and novel applications, such as image reconstruction, segmentation, detection, classification, and cross-modality synthesis. Based on our observations, this trend will continue and we therefore conducted a review of recent advances in medical imaging using the adversarial training scheme with the hope of benefiting researchers interested in this technique.
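The adversarial loss this review describes, the discriminator's objective that the generator learns to fool, can be written down compactly. The sketch below is a generic toy illustration of the standard (non-saturating) GAN objective in numpy, not code from any paper listed here:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy form of the GAN discriminator objective:
    push D(real) toward 1 and D(fake) toward 0."""
    eps = 1e-12  # numerical guard against log(0)
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def generator_loss(d_fake):
    """Non-saturating generator objective: push D(fake) toward 1."""
    eps = 1e-12
    return -np.mean(np.log(d_fake + eps))
```

Variants such as the Wasserstein loss replace these log terms with raw critic scores, which is why WGAN-style training appears frequently in the medical imaging applications surveyed.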
Affiliation(s)
- Xin Yi: Department of Medical Imaging, University of Saskatchewan, 103 Hospital Dr, Saskatoon, SK S7N 0W8, Canada
- Ekta Walia: Department of Medical Imaging, University of Saskatchewan, 103 Hospital Dr, Saskatoon, SK S7N 0W8, Canada; Philips Canada, 281 Hillmount Road, Markham, ON L6C 2S3, Canada
- Paul Babyn: Department of Medical Imaging, University of Saskatchewan, 103 Hospital Dr, Saskatoon, SK S7N 0W8, Canada
|
192
|
Ran M, Hu J, Chen Y, Chen H, Sun H, Zhou J, Zhang Y. Denoising of 3D magnetic resonance images using a residual encoder–decoder Wasserstein generative adversarial network. Med Image Anal 2019; 55:165-180. [DOI: 10.1016/j.media.2019.05.001] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2018] [Revised: 04/25/2019] [Accepted: 05/04/2019] [Indexed: 10/26/2022]
|
193
|
Accelerated Correction of Reflection Artifacts by Deep Neural Networks in Photo-Acoustic Tomography. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9132615] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Photo-Acoustic Tomography (PAT) is an emerging non-invasive hybrid modality driven by a constant pursuit of superior imaging performance. Image quality, however, is degraded by acoustic reflections, which may compromise diagnostic performance. To address this challenge, we propose incorporating a deep neural network into conventional iterative algorithms to accelerate and improve the correction of reflection artifacts. On a simulated PAT dataset derived from computed tomography (CT) scans, this network-accelerated reconstruction approach is shown to outperform two state-of-the-art iterative algorithms in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) in the presence of noise. The proposed network also demonstrates considerably higher computational efficiency than conventional iterative algorithms, which are time-consuming and cumbersome.
|
194
|
Competitive performance of a modularized deep neural network compared to commercial algorithms for low-dose CT image reconstruction. NAT MACH INTELL 2019; 1:269-276. [PMID: 33244514 DOI: 10.1038/s42256-019-0057-9] [Citation(s) in RCA: 186] [Impact Index Per Article: 31.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
Commercial iterative reconstruction techniques help to reduce CT radiation dose, but altered image appearance and artifacts limit their adoption and potential use. Deep learning has been investigated for low-dose CT (LDCT). Here we designed a modularized neural network for LDCT and compared it with commercial iterative reconstruction methods from three leading CT vendors. While popular networks are trained for an end-to-end mapping, our network performs an end-to-process mapping, so that intermediate denoised images are obtained along with associated noise-reduction directions toward a final denoised image. The learned workflow allows radiologists-in-the-loop to optimize the denoising depth in a task-specific fashion. Our network was trained on the Mayo LDCT Dataset and tested on separate chest and abdominal CT exams from Massachusetts General Hospital. The best deep learning reconstructions were systematically compared with the best iterative reconstructions in a double-blinded reader study. This study confirms that our deep learning approach performed favorably or comparably in terms of noise suppression and structural fidelity, and is much faster than the commercial iterative reconstruction algorithms.
|
195
|
Häggström I, Schmidtlein CR, Campanella G, Fuchs TJ. DeepPET: A deep encoder-decoder network for directly solving the PET image reconstruction inverse problem. Med Image Anal 2019; 54:253-262. [PMID: 30954852 DOI: 10.1016/j.media.2019.03.013] [Citation(s) in RCA: 141] [Impact Index Per Article: 23.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2018] [Revised: 03/29/2019] [Accepted: 03/30/2019] [Indexed: 01/01/2023]
Abstract
The purpose of this research was to implement a deep learning network to overcome two of the major bottlenecks in improved image reconstruction for clinical positron emission tomography (PET). These are the lack of an automated means for the optimization of advanced image reconstruction algorithms, and the computational expense associated with these state-of-the-art methods. We thus present a novel end-to-end PET image reconstruction technique, called DeepPET, based on a deep convolutional encoder-decoder network, which takes PET sinogram data as input and directly and quickly outputs high quality, quantitative PET images. Using simulated data derived from a whole-body digital phantom, we randomly sampled the configurable parameters to generate realistic images, which were each augmented to a total of more than 291,000 reference images. Realistic PET acquisitions of these images were simulated, resulting in noisy sinogram data, used for training, validation, and testing the DeepPET network. We demonstrated that DeepPET generates higher quality images compared to conventional techniques, in terms of relative root mean squared error (11%/53% lower than ordered subset expectation maximization (OSEM)/filtered back-projection (FBP)), structural similarity index (1%/11% higher than OSEM/FBP), and peak signal-to-noise ratio (1.1/3.8 dB higher than OSEM/FBP). In addition, we show that DeepPET reconstructs images 108 and 3 times faster than OSEM and FBP, respectively. Finally, DeepPET was successfully applied to real clinical data. This study shows that an end-to-end encoder-decoder network can produce high quality PET images at a fraction of the time compared to conventional methods.
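The figures of merit quoted in this abstract (relative root mean squared error and PSNR) follow standard definitions, sketched below for reference. This is generic metric code under the usual textbook definitions, not part of DeepPET:

```python
import numpy as np

def relative_rmse(pred, ref):
    """Root-mean-squared error normalized by the reference magnitude."""
    return np.sqrt(np.mean((pred - ref) ** 2)) / np.sqrt(np.mean(ref ** 2))

def psnr(pred, ref, data_range=None):
    """Peak signal-to-noise ratio in dB; data_range defaults to the
    dynamic range of the reference image."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((pred - ref) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

SSIM, the third metric reported, additionally compares local luminance, contrast, and structure, and is available in standard imaging libraries.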
Affiliation(s)
- Ida Häggström: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- C Ross Schmidtlein: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Gabriele Campanella: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States; Department of Physiology and Biophysics, Weill Cornell Medicine, New York, NY 10065, United States
- Thomas J Fuchs: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States; Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States; Department of Physiology and Biophysics, Weill Cornell Medicine, New York, NY 10065, United States
|
196
|
Shan H, Zhang Y, Yang Q, Kruger U, Kalra MK, Sun L, Cong W, Wang G. Correction for “3D Convolutional Encoder-Decoder Network for Low-Dose CT via Transfer Learning From a 2D Trained Network” [Jun 18 1522-1534]. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:2750-2750. [DOI: 10.1109/tmi.2018.2878429] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/30/2023]
|
197
|
You C, Yang Q, Shan H, Gjesteby L, Li G, Ju S, Zhang Z, Zhao Z, Zhang Y, Wenxiang C, Wang G. Structurally-sensitive Multi-scale Deep Neural Network for Low-Dose CT Denoising. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2018; 6:41839-41855. [PMID: 30906683 PMCID: PMC6426337 DOI: 10.1109/access.2018.2858196] [Citation(s) in RCA: 108] [Impact Index Per Article: 15.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Computed tomography (CT) is a popular medical imaging modality with wide clinical applications. At the same time, the x-ray radiation dose associated with CT scanning raises public concern due to its potential risk to patients. Over the past years, major efforts have been dedicated to the development of low-dose CT (LDCT) methods. However, the radiation dose reduction compromises the signal-to-noise ratio (SNR), leading to strong noise and artifacts that degrade CT image quality. In this paper, we propose a novel 3D noise reduction method, called Structurally-sensitive Multi-scale Generative Adversarial Net (SMGAN), to improve LDCT image quality. Specifically, we incorporate three-dimensional (3D) volumetric information to improve the image quality. We also investigate different loss functions for training the denoising models. Experiments show that the proposed method effectively preserves structural and textural information relative to normal-dose CT (NDCT) images and significantly suppresses noise and artifacts. Qualitative visual assessments by three experienced radiologists demonstrate that the proposed method retrieves more information and outperforms competing methods.
Affiliation(s)
- Chenyu You: Departments of Bioengineering and Electrical Engineering, Stanford University, Stanford, CA 94305
- Qingsong Yang: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180
- Hongming Shan: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180
- Lars Gjesteby: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180
- Guang Li: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180
- Shenghong Ju: Jiangsu Key Laboratory of Molecular and Functional Imaging, Department of Radiology, Zhongda Hospital, Medical School, Southeast University, Nanjing 210009, China
- Zhuiyang Zhang: Department of Radiology, Wuxi No.2 People's Hospital, Wuxi 214000, China
- Zhen Zhao: Jiangsu Key Laboratory of Molecular and Functional Imaging, Department of Radiology, Zhongda Hospital, Medical School, Southeast University, Nanjing 210009, China
- Yi Zhang: College of Computer Science, Sichuan University, Chengdu 610065, China
- Wenxiang Cong: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180
- Ge Wang: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180
|