451
Mishra D, Chaudhury S, Sarkar M, Soin AS. Ultrasound Image Enhancement Using Structure Oriented Adversarial Network. IEEE Signal Processing Letters 2018; 25:1349-1353. [DOI: 10.1109/lsp.2018.2858147]
452
Park J, Hwang D, Kim KY, Kang SK, Kim YK, Lee JS. Computed tomography super-resolution using deep convolutional neural network. Phys Med Biol 2018; 63:145011. [PMID: 29923839] [DOI: 10.1088/1361-6560/aacdd4]
Abstract
The objective of this study is to develop a convolutional neural network (CNN) for computed tomography (CT) image super-resolution. The network learns an end-to-end mapping between low (thick-slice thickness) and high (thin-slice thickness) resolution images using the modified U-Net. To verify the proposed method, we train and test the CNN using axially averaged data of existing thin-slice CT images as input and their middle slice as the label. Fifty-two CT studies are used as the CNN training set, and 13 CT studies are used as the test set. We perform five-fold cross-validation to confirm the performance consistency. Because all input and output images are used in two-dimensional slice format, the total number of slices for training the CNN is 7670. We assess the performance of the proposed method with respect to the resolution and contrast, as well as the noise properties. The CNN generates output images that are virtually equivalent to the ground truth. The most remarkable image-recovery improvement by the CNN is deblurring of boundaries of bone structures and air cavities. The CNN output yields an approximately 10% higher peak signal-to-noise ratio and lower normalized root mean square error than the input (thicker slices). The CNN output noise level is lower than the ground truth and equivalent to the iterative image reconstruction result. The proposed deep learning method is useful for both super-resolution and de-noising.
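The training-pair construction described in this abstract (axially averaged thin slices as the low-resolution input, the middle thin slice as the label) can be sketched as follows; the array layout and the averaging window size are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def make_training_pair(volume, center, window=5):
    """Build one (input, label) pair from a thin-slice CT volume.

    volume : (n_slices, H, W) array of thin-slice images
    center : index of the slice used as the high-resolution label
    window : number of adjacent thin slices averaged to simulate
             a thick-slice (low-resolution) acquisition
    """
    half = window // 2
    thick = volume[center - half:center + half + 1].mean(axis=0)  # simulated thick slice
    label = volume[center]                                        # ground-truth thin slice
    return thick, label

# toy volume: 9 slices of 4x4, values linear in the slice index
vol = np.arange(9 * 16, dtype=float).reshape(9, 4, 4)
x, y = make_training_pair(vol, center=4)
```

The CNN would then be trained to map `x` back to `y` slice by slice.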
Affiliation(s)
- Junyoung Park: Department of Biomedical Sciences, College of Medicine, Seoul National University, Seoul 03080, Republic of Korea; Department of Nuclear Medicine, College of Medicine, Seoul National University, Seoul 03080, Republic of Korea
453
Hu B, Tang Y, Chang EIC, Fan Y, Lai M, Xu Y. Unsupervised Learning for Cell-Level Visual Representation in Histopathology Images With Generative Adversarial Networks. IEEE J Biomed Health Inform 2018; 23:1316-1328. [PMID: 29994411] [DOI: 10.1109/jbhi.2018.2852639]
Abstract
The visual attributes of cells, such as nuclear morphology and chromatin openness, are critical for histopathology image analysis. By learning cell-level visual representations, we can obtain a rich mix of features that are highly reusable for various tasks, such as cell-level classification, nuclei segmentation, and cell counting. In this paper, we propose a unified generative adversarial network architecture with a new loss formulation to perform robust cell-level visual representation learning in an unsupervised setting. Our model is not only label-free and easily trained but also capable of cell-level unsupervised classification with interpretable visualization, achieving promising results in the unsupervised classification of bone marrow cellular components. Based on the proposed cell-level visual representation learning, we further develop a pipeline that exploits the varieties of cellular elements to perform histopathology image classification, the advantages of which are demonstrated on bone marrow datasets.
454
Hasan AM, Melli A, Wahid KA, Babyn P. Denoising Low-Dose CT Images Using Multiframe Blind Source Separation and Block Matching Filter. IEEE Transactions on Radiation and Plasma Medical Sciences 2018. [DOI: 10.1109/trpms.2018.2810221]
455
Choy G, Khalilzadeh O, Michalski M, Do S, Samir AE, Pianykh OS, Geis JR, Pandharipande PV, Brink JA, Dreyer KJ. Current Applications and Future Impact of Machine Learning in Radiology. Radiology 2018; 288:318-328. [PMID: 29944078] [DOI: 10.1148/radiol.2018171820]
Abstract
Recent advances and future perspectives of machine learning techniques offer promising applications in medical imaging. Machine learning has the potential to improve different steps of the radiology workflow including order scheduling and triage, clinical decision support systems, detection and interpretation of findings, postprocessing and dose estimation, examination quality control, and radiology reporting. In this article, the authors review examples of current applications of machine learning and artificial intelligence techniques in diagnostic radiology. In addition, the future impact and natural extension of these techniques in radiology practice are discussed.
Affiliation(s)
- Garry Choy, Omid Khalilzadeh, Mark Michalski, Synho Do, Anthony E Samir, Oleg S Pianykh, Pari V Pandharipande, James A Brink, Keith J Dreyer: Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit St, Boston, Mass 02114
- J Raymond Geis: Department of Radiology, University of Colorado School of Medicine, Aurora, Colo
456
Shan H, Zhang Y, Yang Q, Kruger U, Kalra MK, Sun L, Cong W, Wang G. 3-D Convolutional Encoder-Decoder Network for Low-Dose CT via Transfer Learning From a 2-D Trained Network. IEEE Transactions on Medical Imaging 2018; 37:1522-1534. [PMID: 29870379] [PMCID: PMC6022756] [DOI: 10.1109/tmi.2018.2832217]
Abstract
Low-dose computed tomography (LDCT) has attracted major attention in the medical imaging field, since CT-associated X-ray radiation carries health risks for patients. The reduction of the CT radiation dose, however, compromises the signal-to-noise ratio, which affects image quality and diagnostic performance. Recently, deep-learning-based algorithms have achieved promising results in LDCT denoising, especially convolutional neural network (CNN) and generative adversarial network (GAN) architectures. This paper introduces a conveying path-based convolutional encoder-decoder (CPCE) network in 2-D and 3-D configurations within the GAN framework for LDCT denoising. A novel feature of this approach is that an initial 3-D CPCE denoising model can be obtained directly by extending a trained 2-D CNN, which is then fine-tuned to incorporate 3-D spatial information from adjacent slices. Thanks to this transfer learning from 2-D to 3-D, the 3-D network converges faster and achieves better denoising performance than training from scratch. Comparing the CPCE network with recently published work on the simulated Mayo data set and the real MGH data set, we demonstrate that the 3-D CPCE denoising model performs better in that it suppresses image noise while preserving subtle structures.
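One common way to realize the 2-D-to-3-D extension this abstract describes is to replicate each trained 2-D kernel along the slice axis and rescale it, so the initial 3-D convolution reproduces the 2-D result on inputs that are constant along z; this is an illustrative sketch of the idea, not the paper's exact initialization:

```python
import numpy as np

def inflate_kernel(w2d, depth=3):
    """Turn a trained 2-D kernel of shape (out_ch, in_ch, k, k) into a 3-D
    kernel of shape (out_ch, in_ch, depth, k, k) by replication along the
    slice axis and rescaling by 1/depth, so the 3-D convolution matches the
    2-D one on inputs constant along z (a transfer-learning starting point
    that is then fine-tuned on volumetric data)."""
    return np.repeat(w2d[:, :, None, :, :], depth, axis=2) / depth

w2d = np.random.randn(8, 1, 3, 3)   # a "trained" 2-D kernel stack (toy values)
w3d = inflate_kernel(w2d, depth=3)
```

Summing the inflated kernel over its slice axis recovers the original 2-D weights, which is exactly the property that makes the 3-D network start from the 2-D network's behavior.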
Affiliation(s)
- Hongming Shan, Qingsong Yang, Uwe Kruger, Wenxiang Cong: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Mannudeep K. Kalra: Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Ling Sun: Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital, Sichuan University, Chengdu 610041, China
- Yi Zhang, Ge Wang: corresponding authors
457
Kang E, Chang W, Yoo J, Ye JC. Deep Convolutional Framelet Denosing for Low-Dose CT via Wavelet Residual Network. IEEE Transactions on Medical Imaging 2018; 37:1358-1369. [PMID: 29870365] [DOI: 10.1109/tmi.2018.2823756]
Abstract
Model-based iterative reconstruction algorithms for low-dose X-ray computed tomography (CT) are computationally expensive. To address this problem, we recently proposed a deep convolutional neural network (CNN) for low-dose X-ray CT, which won second place in the 2016 AAPM Low-Dose CT Grand Challenge. However, some of the textures were not fully recovered. To address this issue, here we propose a novel framelet-based denoising algorithm using a wavelet residual network, which synergistically combines the expressive power of deep learning with the performance guarantees of framelet-based denoising algorithms. The new algorithms are inspired by the recent interpretation of a deep CNN as a cascaded convolution framelet signal representation. Extensive experimental results confirm that the proposed networks have significantly improved performance and preserve the detailed texture of the original images.
458
Han Y, Ye JC. Framing U-Net via Deep Convolutional Framelets: Application to Sparse-View CT. IEEE Transactions on Medical Imaging 2018; 37:1418-1429. [PMID: 29870370] [DOI: 10.1109/tmi.2018.2823768]
Abstract
X-ray computed tomography (CT) using sparse projection views is a recent approach to reducing the radiation dose. However, due to the insufficient projection views, an analytic reconstruction approach using filtered back projection (FBP) produces severe streaking artifacts. Recently, deep learning approaches using large-receptive-field neural networks such as U-Net have demonstrated impressive performance for sparse-view CT reconstruction, but theoretical justification is still lacking. Inspired by the recent theory of deep convolutional framelets, the main goal of this paper is therefore to reveal the limitation of U-Net and to propose new multi-resolution deep learning schemes. In particular, we show that U-Net variants such as the dual-frame and tight-frame U-Nets satisfy the so-called frame condition, which makes them better suited for effective recovery of high-frequency edges in sparse-view CT. Using extensive experiments with a real patient data set, we demonstrate that the new network architectures provide better reconstruction performance.
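The frame condition discussed in this abstract can be illustrated with a one-dimensional toy example: plain average pooling discards the high-frequency band and cannot be inverted, whereas a tight-frame (orthonormal Haar) decomposition that keeps both low- and high-pass bands reconstructs the signal exactly. A minimal numpy sketch of this contrast, not the paper's network:

```python
import numpy as np

def haar_decompose(x):
    """Orthonormal 1-D Haar analysis: low-pass (pairwise averages) and
    high-pass (pairwise differences), each scaled by 1/sqrt(2)."""
    lo = (x[0::2] + x[1::2]) / np.sqrt(2)
    hi = (x[0::2] - x[1::2]) / np.sqrt(2)
    return lo, hi

def haar_reconstruct(lo, hi):
    """Perfect reconstruction from both bands: the frame condition holds."""
    x = np.empty(2 * lo.size)
    x[0::2] = (lo + hi) / np.sqrt(2)
    x[1::2] = (lo - hi) / np.sqrt(2)
    return x

x = np.array([4.0, 2.0, 5.0, 7.0])
lo, hi = haar_decompose(x)
x_rec = haar_reconstruct(lo, hi)               # exact recovery
x_pool = np.repeat((x[0::2] + x[1::2]) / 2, 2)  # pool + unpool: detail is lost
```

Keeping the high-pass band (as the dual-frame and tight-frame U-Nets do via their skip paths) is what allows the high-frequency edges to survive the multi-resolution decomposition.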
459
Chen H, Zhang Y, Chen Y, Zhang J, Zhang W, Sun H, Lv Y, Liao P, Zhou J, Wang G. LEARN: Learned Experts' Assessment-Based Reconstruction Network for Sparse-Data CT. IEEE Transactions on Medical Imaging 2018; 37:1333-1347. [PMID: 29870363] [PMCID: PMC6019143] [DOI: 10.1109/tmi.2018.2805692]
Abstract
Compressive sensing (CS) has proved effective for tomographic reconstruction from sparsely collected data or under-sampled measurements, which are practically important for few-view computed tomography (CT), tomosynthesis, interior tomography, and so on. To perform sparse-data CT, iterative reconstruction commonly uses regularizers in the CS framework. Currently, how to choose the regularization parameters adaptively is a major open problem. In this paper, inspired by machine learning, especially deep learning, we unfold the state-of-the-art "fields of experts"-based iterative reconstruction scheme for a fixed number of iterations for data-driven training, construct a learned experts' assessment-based reconstruction network (LEARN) for sparse-data CT, and demonstrate the feasibility and merits of our LEARN network. The experimental results show that the proposed LEARN network produces superior performance on the well-known Mayo Clinic low-dose challenge data set relative to several state-of-the-art methods in terms of artifact reduction, feature preservation, and computational speed. This is consistent with our insight that, because all the regularization terms and parameters used in the iterative reconstruction are now learned from the training data, our LEARN network utilizes application-oriented knowledge more effectively and recovers underlying images more favorably than competing algorithms. Also, the number of layers in the LEARN network is only 50, reducing the computational complexity of typical iterative algorithms by orders of magnitude.
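The unrolling idea in this abstract (a fixed number of iterations, each with its own regularization, trained end-to-end) can be sketched on a linear toy problem; the plain gradient-step form, the fixed step sizes, and the quadratic "regularizer" here are illustrative assumptions, not the LEARN architecture:

```python
import numpy as np

def unrolled_reconstruction(A, b, n_iters=50, alpha=0.01, lam=0.001):
    """Unrolled iterative reconstruction:
        x_{k+1} = x_k - alpha_k * (A^T (A x_k - b) + lam_k * x_k)
    In LEARN-style training, each of the n_iters layers would carry its own
    learned step size and learned regularizer; here every layer uses the
    same fixed illustrative values."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iters):                  # one loop pass == one network layer
        grad = A.T @ (A @ x - b) + lam * x    # data-fidelity + regularizer gradient
        x = x - alpha * grad
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))   # toy "projection" operator
x_true = rng.normal(size=10)
b = A @ x_true                  # toy measurements
x_hat = unrolled_reconstruction(A, b, n_iters=50)
```

Capping the loop at 50 passes mirrors the paper's point that a fixed, shallow unrolling replaces the hundreds of iterations a conventional solver might need.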
Affiliation(s)
- Hu Chen, Yi Zhang, Weihua Zhang, Jiliu Zhou: College of Computer Science, Sichuan University, Chengdu 610065, China
- Junfeng Zhang: School of Computer and Information Engineering, Henan University of Economics and Law, Zhengzhou 450046, China
- Huaiqiang Sun: Department of Radiology, West China Hospital of Sichuan University, Chengdu 610041, China
- Yang Lv: Shanghai United Imaging Healthcare Co., Ltd, Shanghai 210807, China
- Peixi Liao: Department of Scientific Research and Education, The Sixth People's Hospital of Chengdu, Chengdu 610065, China
- Ge Wang: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
460
Yang Q, Yan P, Zhang Y, Yu H, Shi Y, Mou X, Kalra MK, Zhang Y, Sun L, Wang G. Low-Dose CT Image Denoising Using a Generative Adversarial Network With Wasserstein Distance and Perceptual Loss. IEEE Transactions on Medical Imaging 2018; 37:1348-1357. [PMID: 29870364] [PMCID: PMC6021013] [DOI: 10.1109/tmi.2018.2827462]
Abstract
The continuous development and extensive use of computed tomography (CT) in medical practice has raised a public concern over the associated radiation dose to the patient. Reducing the radiation dose may lead to increased noise and artifacts, which can adversely affect the radiologists' judgment and confidence. Hence, advanced image reconstruction from low-dose CT data is needed to improve the diagnostic performance, which is a challenging problem due to its ill-posed nature. Over the past years, various low-dose CT methods have produced impressive results. However, most of the algorithms developed for this application, including the recently popularized deep learning techniques, aim for minimizing the mean-squared error (MSE) between a denoised CT image and the ground truth under generic penalties. Although the peak signal-to-noise ratio is improved, MSE- or weighted-MSE-based methods can compromise the visibility of important structural details after aggressive denoising. This paper introduces a new CT image denoising method based on the generative adversarial network (GAN) with Wasserstein distance and perceptual similarity. The Wasserstein distance is a key concept of optimal transport theory and promises to improve the performance of GANs. The perceptual loss suppresses noise by comparing the perceptual features of a denoised output against those of the ground truth in an established feature space, while the GAN focuses on statistically shifting the noise distribution from strong to weak. Therefore, our proposed method transfers our knowledge of visual perception to the image denoising task and is capable of not only reducing the image noise level but also preserving critical structural information. Promising results have been obtained in our experiments with clinical CT images.
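The composite generator objective this abstract describes (a Wasserstein adversarial term plus a feature-space perceptual term) can be sketched as follows. The `critic` and `features` callables stand in for the trained Wasserstein critic and the pretrained perceptual feature extractor; both, along with the weight `lam`, are hypothetical placeholders, not the paper's trained networks:

```python
import numpy as np

def generator_loss(denoised, target, critic, features, lam=0.1):
    """Composite generator objective:
       adversarial term (negated Wasserstein critic score on the output)
       + lam * perceptual term (MSE between feature maps of output and
       ground truth). `critic` and `features` are illustrative stand-ins."""
    adv = -np.mean(critic(denoised))                              # fool the critic
    perc = np.mean((features(denoised) - features(target)) ** 2)  # perceptual loss
    return adv + lam * perc

# toy stand-ins: critic = per-image mean intensity, features = horizontal gradients
critic = lambda img: img.mean(axis=(-2, -1))
features = lambda img: np.diff(img, axis=-1)
x = np.ones((2, 8, 8))   # batch of "denoised" images
y = np.ones((2, 8, 8))   # batch of ground-truth images
loss = generator_loss(x, y, critic, features)
```

The perceptual term compares feature maps rather than raw pixels, which is what lets the method avoid the over-smoothing that per-pixel MSE training tends to produce.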
461
Zhang Y, Yu H. Convolutional Neural Network Based Metal Artifact Reduction in X-Ray Computed Tomography. IEEE Transactions on Medical Imaging 2018; 37:1370-1381. [PMID: 29870366] [PMCID: PMC5998663] [DOI: 10.1109/tmi.2018.2823083]
Abstract
In the presence of metal implants, metal artifacts are introduced into x-ray computed tomography (CT) images. Although a large number of metal artifact reduction (MAR) methods have been proposed in the past decades, MAR remains one of the major problems in clinical x-ray CT. In this paper, we develop a convolutional neural network (CNN)-based open MAR framework, which fuses information from the original and corrected images to suppress artifacts. The proposed approach consists of two phases. In the CNN training phase, we build a database consisting of metal-free, metal-inserted, and pre-corrected CT images, from which image patches are extracted and used for CNN training. In the MAR phase, the uncorrected and pre-corrected images are used as input to the trained CNN to generate a CNN image with reduced artifacts. To further reduce the remaining artifacts, water-equivalent tissues in the CNN image are set to a uniform value to yield a CNN prior, whose forward projections are used to replace the metal-affected projections, followed by filtered back projection (FBP) reconstruction. The effectiveness of the proposed method is validated on both simulated and real data. Experimental results demonstrate the superior MAR capability of the proposed method over its competitors in terms of artifact suppression and preservation of anatomical structures in the vicinity of metal implants.
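The projection-completion step in this pipeline (substituting forward projections of the CNN prior inside the metal trace, keeping measured data elsewhere) reduces to a masked replacement in the sinogram domain. The sinogram arrays and mask below are illustrative stand-ins; forward projection and the subsequent FBP are not shown:

```python
import numpy as np

def replace_metal_trace(sino_measured, sino_prior, metal_trace):
    """Inside the metal trace, replace the measured (metal-affected)
    projections with forward projections of the CNN prior; outside it,
    keep the measured data. FBP of the completed sinogram follows."""
    return np.where(metal_trace, sino_prior, sino_measured)

measured = np.full((4, 6), 2.0)        # toy measured sinogram (views x detectors)
prior = np.full((4, 6), 1.0)           # toy forward projection of the CNN prior
trace = np.zeros((4, 6), dtype=bool)   # metal trace mask in sinogram domain
trace[:, 2:4] = True
completed = replace_metal_trace(measured, prior, trace)
```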
462
Lee D, Yoo J, Tak S, Ye JC. Deep Residual Learning for Accelerated MRI Using Magnitude and Phase Networks. IEEE Trans Biomed Eng 2018; 65:1985-1995. [PMID: 29993390] [DOI: 10.1109/tbme.2018.2821699]
Abstract
OBJECTIVE: Accelerated magnetic resonance (MR) image acquisition with compressed sensing (CS) and parallel imaging is a powerful method to reduce MR imaging scan time. However, many reconstruction algorithms have high computational costs. To address this, we investigate deep residual learning networks to remove aliasing artifacts from artifact-corrupted images.
METHODS: The deep residual learning networks are composed of magnitude and phase networks that are separately trained. If both phase and magnitude information are available, the proposed algorithm can work as an iterative k-space interpolation algorithm using framelet representation. When only magnitude data are available, the proposed approach works as an image-domain postprocessing algorithm.
RESULTS: Even with strong coherent aliasing artifacts, the proposed network successfully learned and removed the aliasing artifacts, whereas current parallel and CS reconstruction methods were unable to remove these artifacts.
CONCLUSION: Comparisons using single- and multiple-coil acquisitions show that the proposed residual network provides good reconstruction results with orders-of-magnitude faster computational time than existing CS methods.
SIGNIFICANCE: The proposed deep learning framework may have great potential for accelerated MR reconstruction by generating accurate results immediately.
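The residual-learning formulation in this abstract has the network estimate the aliasing artifact itself, which is then subtracted from the corrupted input. A minimal sketch, with `artifact_net` as a placeholder for the trained magnitude/phase networks:

```python
import numpy as np

def residual_correct(aliased, artifact_net):
    """Residual-learning reconstruction: the network predicts the aliasing
    artifact (the residual), and the corrected image is input minus the
    predicted artifact. `artifact_net` is a hypothetical stand-in for the
    trained networks."""
    return aliased - artifact_net(aliased)

# toy example: known clean signal plus a known "aliasing" pattern;
# a perfect artifact estimator recovers the clean signal exactly
clean = np.linspace(0.0, 1.0, 8)
artifact = 0.3 * np.sin(np.arange(8))
aliased = clean + artifact
recon = residual_correct(aliased, lambda x: artifact)
```

Learning the (often sparse, structured) artifact rather than the full image is what makes the residual mapping easier to train.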
463
464
Wang Y, Yu B, Wang L, Zu C, Lalush DS, Lin W, Wu X, Zhou J, Shen D, Zhou L. 3D conditional generative adversarial networks for high-quality PET image estimation at low dose. Neuroimage 2018; 174:550-562. [PMID: 29571715] [DOI: 10.1016/j.neuroimage.2018.03.045]
Abstract
Positron emission tomography (PET) is a widely used imaging modality, providing insight into both the biochemical and physiological processes of the human body. Usually, a full-dose radioactive tracer is required to obtain high-quality PET images for clinical needs. This inevitably raises concerns about potential health hazards. On the other hand, dose reduction may increase noise in the reconstructed PET images, which degrades image quality to a certain extent. In this paper, in order to reduce radiation exposure while maintaining high PET image quality, we propose a novel method based on 3D conditional generative adversarial networks (3D c-GANs) to estimate high-quality full-dose PET images from low-dose ones. Generative adversarial networks (GANs) include a generator network and a discriminator network, which are trained simultaneously with the goal of one beating the other. Similar to GANs, in the proposed 3D c-GANs, we condition the model on an input low-dose PET image and generate a corresponding output full-dose PET image. Specifically, to capture the shared underlying information between the low-dose and full-dose PET images, a 3D U-net-like deep architecture, which can combine hierarchical features via skip connections, is designed as the generator network to synthesize the full-dose image. To guarantee that the synthesized PET image is close to the real one, we take into account the estimation error loss in addition to the discriminator feedback when training the generator network. Furthermore, a concatenated 3D c-GANs-based progressive refinement scheme is also proposed to further improve the quality of the estimated images. Validation was performed on a real human brain dataset including both normal subjects and subjects diagnosed with mild cognitive impairment (MCI). Experimental results show that our proposed 3D c-GANs method outperforms the benchmark methods and achieves much better performance than the state-of-the-art methods in both qualitative and quantitative measures.
Affiliation(s)
- Yan Wang: School of Computer Science, Sichuan University, China
- Biting Yu, Lei Wang: School of Computing and Information Technology, University of Wollongong, Australia
- Chen Zu: School of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, China
- David S Lalush: Joint Department of Biomedical Engineering, University of North Carolina at Chapel Hill and North Carolina State University, NC, USA
- Weili Lin: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA
- Xi Wu: School of Computer Science, Chengdu University of Information Technology, China
- Jiliu Zhou: School of Computer Science, Sichuan University, China; School of Computer Science, Chengdu University of Information Technology, China
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, South Korea
- Luping Zhou: School of Electrical and Information Engineering, University of Sydney, Australia; School of Computing and Information Technology, University of Wollongong, Australia
465
Burlingame EA, Margolin AA, Gray JW, Chang YH. SHIFT: speedy histopathological-to-immunofluorescent translation of whole slide images using conditional generative adversarial networks. Proc SPIE Int Soc Opt Eng 2018; 10581. [PMID: 30283195] [DOI: 10.1117/12.2293249]
Abstract
Multiplexed imaging such as multicolor immunofluorescence (IF) staining, multiplexed immunohistochemistry (mIHC), or cyclic immunofluorescence (cycIF) enables deep assessment of cellular complexity in situ and, in conjunction with standard histology stains like hematoxylin and eosin (H&E), can help unravel the complex molecular relationships and spatial interdependencies that undergird disease states. However, these multiplexed imaging methods are costly and can degrade both tissue quality and antigenicity with each successive cycle of staining. In addition, computationally intensive image processing, such as image registration across multiple channels, is required. We have developed a novel method, speedy histopathological-to-immunofluorescent translation (SHIFT) of whole slide images (WSIs), using conditional generative adversarial networks (cGANs). This approach is rooted in the assumption that specific patterns captured in IF images by stains like DAPI, pan-cytokeratin (panCK), or α-smooth muscle actin (α-SMA) are encoded in H&E images, such that a SHIFT model can learn useful feature representations or architectural patterns in the H&E stain that help generate relevant IF stain patterns. We demonstrate that the proposed method is capable of generating realistic tumor-marker IF WSIs conditioned on corresponding H&E-stained WSIs with up to 94.5% accuracy in a matter of seconds. Thus, this method has the potential not only to improve our understanding of the mapping of histological and morphological profiles into protein expression profiles, but also to greatly increase the efficiency of diagnostic and prognostic decision-making.
Affiliation(s)
- Joe W Gray: Oregon Health and Science University, Portland, OR, USA
466
You C, Yang Q, Shan H, Gjesteby L, Li G, Ju S, Zhang Z, Zhao Z, Zhang Y, Cong W, Wang G. Structurally-sensitive Multi-scale Deep Neural Network for Low-Dose CT Denoising. IEEE Access 2018; 6:41839-41855. [PMID: 30906683] [PMCID: PMC6426337] [DOI: 10.1109/access.2018.2858196]
Abstract
Computed tomography (CT) is a popular medical imaging modality that enjoys wide clinical application. At the same time, the x-ray radiation dose associated with CT scanning raises public concern due to its potential risks to patients. Over the past years, major efforts have been dedicated to the development of low-dose CT (LDCT) methods. However, the radiation dose reduction compromises the signal-to-noise ratio (SNR), leading to strong noise and artifacts that degrade CT image quality. In this paper, we propose a novel 3D noise reduction method, called Structurally-sensitive Multi-scale Generative Adversarial Net (SMGAN), to improve LDCT image quality. Specifically, we incorporate three-dimensional (3D) volumetric information to improve the image quality, and different loss functions for training denoising models are investigated. Experiments show that the proposed method effectively preserves structural and textural information in reference to normal-dose CT (NDCT) images and significantly suppresses noise and artifacts. Qualitative visual assessments by three experienced radiologists demonstrate that the proposed method retrieves more information and outperforms competing methods.
Collapse
Affiliation(s)
- Chenyu You: Departments of Bioengineering and Electrical Engineering, Stanford University, Stanford, CA 94305
- Qingsong Yang: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180
- Hongming Shan: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180
- Lars Gjesteby: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180
- Guang Li: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180
- Shenghong Ju: Jiangsu Key Laboratory of Molecular and Functional Imaging, Department of Radiology, Zhongda Hospital, Medical School, Southeast University, Nanjing 210009, China
- Zhuiyang Zhang: Department of Radiology, Wuxi No. 2 People's Hospital, Wuxi 214000, China
- Zhen Zhao: Jiangsu Key Laboratory of Molecular and Functional Imaging, Department of Radiology, Zhongda Hospital, Medical School, Southeast University, Nanjing 210009, China
- Yi Zhang: College of Computer Science, Sichuan University, Chengdu 610065, China
- Wenxiang Cong: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180
- Ge Wang: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180
|
467
|
Cui X, Liu Y, Zhang Y, Wang C. Tire Defects Classification with Multi-Contrast Convolutional Neural Networks. INT J PATTERN RECOGN 2017. [DOI: 10.1142/s0218001418500118] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
The objective of this study is to improve the accuracy of tire defect classification with limited training samples under varying illumination. We investigate a deep learning based algorithm to achieve high accuracy with limited samples. First, image contrast normalization and data augmentation are used to avoid overfitting in a network with a large number of parameters. Second, a multi-column CNN (MC-CNN) is proposed that combines several CNNs trained on differently preprocessed data; their predictions are averaged as the output of the network. An average accuracy of 98.47% is achieved with the proposed CNN-based method. Experimental results show that our scheme achieves satisfactory classification accuracy and outperforms state-of-the-art methods on the same tire defect dataset.
Affiliation(s)
- Xuehong Cui: School of Information Science and Technology, Qingdao University of Science and Technology, Qingdao 266061, P. R. China
- Yun Liu: Development Planning Office, Qingdao University of Science and Technology, Qingdao 266061, P. R. China
- Yan Zhang: School of Electromechanical Engineering, Qingdao University of Science and Technology, Qingdao 266061, P. R. China
- Chuanxu Wang: School of Information Science and Technology, Qingdao University of Science and Technology, Qingdao 266061, P. R. China
|
468
|
Wu D, Kim K, El Fakhri G, Li Q. Iterative Low-Dose CT Reconstruction With Priors Trained by Artificial Neural Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:2479-2486. [PMID: 28922116 PMCID: PMC5897914 DOI: 10.1109/tmi.2017.2753138] [Citation(s) in RCA: 128] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
Dose reduction in computed tomography (CT) is essential for decreasing radiation risk in clinical applications. Iterative reconstruction algorithms are among the most promising ways to compensate for the increased noise due to the reduction of photon flux. Most iterative reconstruction algorithms incorporate manually designed prior functions of the reconstructed image to suppress noise while maintaining image structure. These priors basically rely on smoothness constraints and cannot exploit more complex image features. Recent developments in artificial neural networks and machine learning have enabled the learning of more complex image features, which has the potential to improve reconstruction quality. In this letter, a K-sparse autoencoder was used for unsupervised feature learning. A manifold was learned from normal-dose images, and the distance between the reconstructed image and the manifold was minimized along with the data fidelity during reconstruction. Experiments on the 2016 Low-Dose CT Grand Challenge were used for method verification, and the results demonstrated the noise reduction and detail preservation abilities of the proposed method.
|
469
|
Adversarial Training and Dilated Convolutions for Brain MRI Segmentation. DEEP LEARNING IN MEDICAL IMAGE ANALYSIS AND MULTIMODAL LEARNING FOR CLINICAL DECISION SUPPORT 2017. [DOI: 10.1007/978-3-319-67558-9_7] [Citation(s) in RCA: 65] [Impact Index Per Article: 8.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|