351. Simulation Study of Low-Dose Sparse-Sampling CT with Deep Learning-Based Reconstruction: Usefulness for Evaluation of Ovarian Cancer Metastasis. Appl Sci (Basel) 2020. [DOI: 10.3390/app10134446]
Abstract
The usefulness of sparse-sampling CT with deep learning-based reconstruction for detecting metastases of malignant ovarian tumors was evaluated. We obtained contrast-enhanced CT images (n = 141) of ovarian cancers from a public database; the cases were randomly divided into 71 training, 20 validation, and 50 test cases. Sparse-sampling CT images were calculated slice by slice by software simulation. Two models for deep learning-based reconstruction were evaluated: the Residual Encoder-Decoder Convolutional Neural Network (RED-CNN) and a deeper U-net. For the 50 test cases, we evaluated the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as quantitative measures. Two radiologists independently performed a qualitative evaluation of the following points: overall CT image quality; visibility of the iliac artery; and visibility of peritoneal dissemination, liver metastasis, and lymph node metastasis. The Wilcoxon signed-rank test and the McNemar test were used to compare image quality and metastasis detectability, respectively, between the two models. Mean PSNR and SSIM were higher with the deeper U-net than with RED-CNN, and the deeper U-net scored significantly better on all items of the visual evaluation. Metastasis detectability with the deeper U-net was more than 95%. Sparse-sampling CT with deep learning-based reconstruction proved useful in detecting metastases of malignant ovarian tumors and might contribute to reducing overall CT radiation exposure.
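The PSNR and SSIM measures used in this study can be computed directly from image arrays. A minimal sketch (NumPy, with a single-window SSIM; published evaluations typically use a windowed implementation such as scikit-image's, and the test images below are synthetic stand-ins, not the study's data):

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, img, data_range=1.0):
    """Single-window (global) SSIM; library implementations average
    this statistic over local sliding windows."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), img.mean()
    var_x, var_y = ref.var(), img.var()
    cov = ((ref - mu_x) * (img - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

rng = np.random.default_rng(0)
truth = rng.random((64, 64))   # stand-in "full-dose" image
noisy = np.clip(truth + 0.05 * rng.standard_normal(truth.shape), 0.0, 1.0)
print(f"PSNR: {psnr(truth, noisy):.1f} dB, SSIM: {ssim_global(truth, noisy):.3f}")
```

An identical image pair gives SSIM of 1.0, and PSNR grows as the images agree more closely.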
352. Xie Z, Baikejiang R, Li T, Zhang X, Gong K, Zhang M, Qi W, Asma E, Qi J. Generative adversarial network based regularized image reconstruction for PET. Phys Med Biol 2020; 65:125016. [PMID: 32357352] [PMCID: PMC7413644] [DOI: 10.1088/1361-6560/ab8f72]
Abstract
Image reconstruction in positron emission tomography (PET) is an ill-posed inverse problem, and the images suffer from high noise because of the limited number of detected events. Prior information can be used to improve the quality of reconstructed PET images, and deep neural networks have been applied to regularized image reconstruction. One such method uses a pretrained denoising neural network to represent the PET image and performs a constrained maximum-likelihood estimation. In this work, we propose using a generative adversarial network (GAN) to further improve network performance. We also modify the objective function to include a data-matching term on the network input. Experimental studies using Monte Carlo simulations and real patient datasets demonstrate that the proposed method yields noticeable improvements over kernel-based and U-net-based regularization methods in terms of the trade-off between lesion contrast recovery and background noise.
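The maximum-likelihood reconstruction that such regularized methods build on can be illustrated with the classical MLEM algorithm. A toy 1-D sketch (the system matrix, sizes, and counts are illustrative assumptions, and no GAN or network regularizer is included):

```python
import numpy as np

# Toy 1-D "PET" problem: y ~ Poisson(A @ x_true). MLEM is the classical
# maximum-likelihood reconstruction that regularized methods such as the
# paper's GAN-constrained approach build on and improve.
rng = np.random.default_rng(1)
A = np.abs(rng.standard_normal((48, 32)))   # illustrative positive system matrix
x_true = np.zeros(32)
x_true[10:20] = 5.0                         # hot region
y = rng.poisson(A @ x_true).astype(float)   # noisy measured counts

def mlem(y, A, iters=50):
    """Multiplicative EM update: x <- x * [A^T (y / Ax)] / A^T 1."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                    # sensitivity term A^T 1
    for _ in range(iters):
        x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens
    return x

x_hat = mlem(y, A)
```

MLEM keeps the estimate nonnegative by construction; methods like the paper's add a prior on top of exactly this likelihood model.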
Affiliation(s)
- Zhaoheng Xie
- Department of Biomedical Engineering University of California Davis CA United States of America
353. Shan H, Jia X, Yan P, Li Y, Paganetti H, Wang G. Synergizing medical imaging and radiotherapy with deep learning. Mach Learn Sci Technol 2020. [DOI: 10.1088/2632-2153/ab869f]
354. Ding Q, Chen G, Zhang X, Huang Q, Ji H, Gao H. Low-dose CT with deep learning regularization via proximal forward-backward splitting. Phys Med Biol 2020; 65:125009. [PMID: 32209742] [DOI: 10.1088/1361-6560/ab831a]
Abstract
Low-dose X-ray computed tomography (LDCT) is desirable because it reduces patient dose. This work develops new image reconstruction methods for LDCT with deep learning (DL) regularization. Our methods are based on unrolling a proximal forward-backward splitting (PFBS) framework with data-driven image regularization via deep neural networks. In contrast to PFBS-IR, which uses standard data-fidelity updates via an iterative reconstruction (IR) method, PFBS-AIR uses preconditioned data-fidelity updates that fuse the analytical reconstruction (AR) and IR methods synergistically, i.e., fused analytical and iterative reconstruction (AIR). The results suggest that the DL-regularized methods (PFBS-IR and PFBS-AIR) provide better reconstruction quality than conventional methods (AR or IR). In addition, owing to AIR, PFBS-AIR noticeably outperformed PFBS-IR as well as FBPConvNet, a DL-based postprocessing method.
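The unrolled PFBS iteration alternates a data-fidelity gradient (forward) step with a proximal (backward) step. A minimal sketch in which a fixed 3-tap smoother stands in for the paper's learned deep-network regularizer and a random matrix stands in for the CT operator (both are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 64)) / np.sqrt(80)   # toy forward operator
x_true = np.zeros(64)
x_true[20:40] = 1.0                               # piecewise-constant phantom
y = A @ x_true + 0.01 * rng.standard_normal(80)   # noisy "projections"

def prox_stub(x):
    """Stand-in for the learned regularizer: 3-point moving average."""
    return np.convolve(x, np.ones(3) / 3.0, mode="same")

def pfbs(y, A, steps=200, t=0.2):
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x - t * A.T @ (A @ x - y)   # forward: gradient of ||Ax - y||^2 / 2
        x = prox_stub(x)                # backward: proximal / denoising step
    return x

x_hat = pfbs(y, A)
```

In the paper the proximal step is a trained network and the data-fidelity step is preconditioned (the AIR variant); the alternation itself is the pattern shown here.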
Affiliation(s)
- Qiaoqiao Ding
- Department of Mathematics, National University of Singapore, Singapore 119076, Singapore
355. Jamshidi MB, Lalbakhsh A, Talla J, Peroutka Z, Hadjilooei F, Lalbakhsh P, Jamshidi M, Spada LL, Mirmozafari M, Dehghani M, Sabet A, Roshani S, Roshani S, Bayat-Makou N, Mohamadzade B, Malek Z, Jamshidi A, Kiani S, Hashemi-Dezaki H, Mohyuddin W. Artificial Intelligence and COVID-19: Deep Learning Approaches for Diagnosis and Treatment. IEEE Access 2020; 8:109581-109595. [PMID: 34192103] [PMCID: PMC8043506] [DOI: 10.1109/access.2020.3001973]
Abstract
The COVID-19 outbreak has put the whole world in an unprecedentedly difficult situation, bringing life to a frightening halt and claiming thousands of lives. Having spread to 212 countries and territories, with the numbers of infected cases and deaths reaching 5,212,172 and 334,915, respectively (as of May 22, 2020), it remains a real threat to public health systems. This paper presents a response to combating the virus through Artificial Intelligence (AI). Several Deep Learning (DL) methods are illustrated to reach this goal, including Generative Adversarial Networks (GANs), Extreme Learning Machine (ELM), and Long Short-Term Memory (LSTM). The paper delineates an integrated bioinformatics approach in which different kinds of information from a continuum of structured and unstructured data sources are combined to form user-friendly platforms for physicians and researchers. The main advantage of these AI-based platforms is to accelerate the diagnosis and treatment of COVID-19. The most recent related publications and medical reports were investigated with the purpose of choosing network inputs and targets that could facilitate a reliable Artificial Neural Network-based tool for COVID-19-related challenges. In addition, each platform has specific inputs, including various forms of data such as clinical data and medical imaging, which can improve the performance of the introduced approaches in practical applications.
Affiliation(s)
- Mohammad Behdad Jamshidi
- Department of Electromechanical Engineering and Power Electronics (KEV), University of West Bohemia in Pilsen, 301 00 Pilsen, Czech Republic
- Ali Lalbakhsh
- School of Engineering, Macquarie University, Sydney, NSW 2109, Australia
- Jakub Talla
- Department of Electromechanical Engineering and Power Electronics (KEV), University of West Bohemia in Pilsen, 301 00 Pilsen, Czech Republic
- Zdeněk Peroutka
- Regional Innovation Centre for Electrical Engineering (RICE), University of West Bohemia in Pilsen, 301 00 Pilsen, Czech Republic
- Farimah Hadjilooei
- Department of Radiation Oncology, Cancer Institute, Tehran University of Medical Sciences, Tehran 1416753955, Iran
- Pedram Lalbakhsh
- Department of English Language and Literature, Razi University, Kermanshah 6714414971, Iran
- Morteza Jamshidi
- Young Researchers and Elite Club, Kermanshah Branch, Islamic Azad University, Kermanshah 1477893855, Iran
- Luigi La Spada
- School of Engineering and the Built Environment, Edinburgh Napier University, Edinburgh EH11 4DY, UK
- Mirhamed Mirmozafari
- Department of Electrical and Computer Engineering, University of Wisconsin–Madison, Madison, WI 53706, USA
- Mojgan Dehghani
- Physics and Astronomy Department, Louisiana State University, Baton Rouge, LA 70803, USA
- Asal Sabet
- Irma Lerma Rangel College of Pharmacy, Texas A&M University, Kingsville, TX 78363, USA
- Saeed Roshani
- Department of Electrical Engineering, Kermanshah Branch, Islamic Azad University, Kermanshah 1477893855, Iran
- Sobhan Roshani
- Department of Electrical Engineering, Kermanshah Branch, Islamic Azad University, Kermanshah 1477893855, Iran
- Nima Bayat-Makou
- The Edward S. Rogers, Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON M5S, Canada
- Zahra Malek
- Medical Sciences Research Center, Faculty of Medicine, Tehran Medical Sciences Branch, Islamic Azad University, Tehran 1477893855, Iran
- Alireza Jamshidi
- Dentistry School, Babol University of Medical Sciences, Babol 4717647745, Iran
- Sarah Kiani
- Medical Biology Research Center, Health Technology Institute, Kermanshah University of Medical Sciences, Kermanshah 6715847141, Iran
- Hamed Hashemi-Dezaki
- Regional Innovation Centre for Electrical Engineering (RICE), University of West Bohemia in Pilsen, 301 00 Pilsen, Czech Republic
- Wahab Mohyuddin
- Research Institute for Microwave and Millimeter-Wave Studies, National University of Sciences and Technology, Islamabad 24090, Pakistan
356. Beyer T, Bidaut L, Dickson J, Kachelriess M, Kiessling F, Leitgeb R, Ma J, Shiyam Sundar LK, Theek B, Mawlawi O. What scans we will read: imaging instrumentation trends in clinical oncology. Cancer Imaging 2020; 20:38. [PMID: 32517801] [PMCID: PMC7285725] [DOI: 10.1186/s40644-020-00312-3]
Abstract
Oncological diseases account for a significant portion of the burden on public healthcare systems, with associated costs driven primarily by complex and long-lasting therapies. Through the visualization of patient-specific morphology and functional-molecular pathways, cancerous tissue can be detected and characterized non-invasively, providing referring oncologists with essential information to support therapy management decisions. Following the onset of stand-alone anatomical and functional imaging, we are witnessing a push towards integrating molecular image information through various methods, including anato-metabolic imaging (e.g., PET/CT), advanced MRI, and optical or ultrasound imaging.

This perspective paper highlights key technological and methodological advances in imaging instrumentation related to anatomical, functional, and molecular medicine and hybrid imaging, understood here as the hardware-based combination of complementary anatomical and molecular imaging. These include novel detector technologies for the ionizing radiation used in CT and nuclear medicine imaging, and novel system developments in MRI, optical, and opto-acoustic imaging. We also highlight new data processing methods for improved non-invasive tissue characterization. Following a general introduction to the role of imaging in oncology patient management, we introduce imaging methods with well-defined clinical applications and potential for clinical translation. For each modality, we report first on the status quo and then point to perceived technological and methodological advances in a subsequent "status go" section.

Considering the breadth and dynamics of these developments, this perspective ends with a critical reflection on where the authors, the majority of whom are imaging experts with a background in physics and engineering, believe imaging methods will be a few years from now. Overall, methodological and technological advances in medical imaging are geared towards increased image contrast, the derivation of reproducible quantitative parameters, increased volume sensitivity, and reduced overall examination time. To ensure full translation to the clinic, this progress in technologies and instrumentation must be complemented by advances in acquisition and image-processing protocols and improved data analysis. To this end, we should accept diagnostic images as "data" and, through the wider adoption of advanced analysis, including machine learning approaches and a "big data" concept, move to the next stage of non-invasive tumour phenotyping. The scans we will be reading in 10 years will likely be composed of highly diverse multi-dimensional data from multiple sources, which mandate advanced and interactive visualization and analysis platforms powered by Artificial Intelligence (AI) for real-time data handling by cross-specialty clinical experts whose domain knowledge will need to go beyond plain imaging.
Affiliation(s)
- Thomas Beyer
- QIMP Team, Centre for Medical Physics and Biomedical Engineering, Medical University Vienna, Währinger Gürtel 18-20/4L, 1090 Vienna, Austria
- Luc Bidaut
- College of Science, University of Lincoln, Lincoln, UK
- John Dickson
- Institute of Nuclear Medicine, University College London Hospital, London, UK
- Marc Kachelriess
- Division of X-ray Imaging and CT, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Fabian Kiessling
- Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Pauwelsstrasse 20, 52074 Aachen, Germany
- Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany
- Rainer Leitgeb
- Centre for Medical Physics and Biomedical Engineering, Medical University Vienna, Vienna, Austria
- Jingfei Ma
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Lalith Kumar Shiyam Sundar
- QIMP Team, Centre for Medical Physics and Biomedical Engineering, Medical University Vienna, Währinger Gürtel 18-20/4L, 1090 Vienna, Austria
- Benjamin Theek
- Institute for Experimental Molecular Imaging, University Clinic and Helmholtz Institute for Biomedical Engineering, RWTH Aachen University, Pauwelsstrasse 20, 52074 Aachen, Germany
- Fraunhofer Institute for Digital Medicine MEVIS, Am Fallturm 1, 28359 Bremen, Germany
- Osama Mawlawi
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
357. Kalare KW, Bajpai MK. RecDNN: deep neural network for image reconstruction from limited view projection data. Soft Comput 2020. [DOI: 10.1007/s00500-020-05013-4]
358. Fan F, Shan H, Kalra MK, Singh R, Qian G, Getzin M, Teng Y, Hahn J, Wang G. Quadratic Autoencoder (Q-AE) for Low-Dose CT Denoising. IEEE Trans Med Imaging 2020; 39:2035-2050. [PMID: 31902758] [PMCID: PMC7376975] [DOI: 10.1109/tmi.2019.2963248]
Abstract
Inspired by the complexity and diversity of biological neurons, our group proposed quadratic neurons, which replace the inner product in conventional artificial neurons with a quadratic operation on the input data, thereby enhancing the capability of an individual neuron. Along this direction, we are motivated to evaluate the power of quadratic neurons in popular network architectures, simulating human-like learning in the form of "quadratic-neuron-based deep learning". Our prior theoretical studies have shown important merits of quadratic neurons and networks in representation, efficiency, and interpretability. In this paper, we use quadratic neurons to construct an encoder-decoder structure, referred to as the quadratic autoencoder, and apply it to low-dose CT denoising. Experimental results on the Mayo low-dose CT dataset demonstrate the utility and robustness of the quadratic autoencoder in terms of image denoising and model efficiency. To the best of our knowledge, this is the first time that a deep learning approach has been implemented with a new type of neuron and demonstrated significant potential in the medical imaging field.
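A single quadratic neuron, following the definition in this group's earlier work on quadratic neurons (the product of two inner-product terms plus a power term on the squared input; the weights below are random illustrative values, not trained parameters), can be sketched as:

```python
import numpy as np

def quadratic_neuron(x, wr, br, wg, bg, wb, c):
    """Quadratic neuron: (wr.x + br)(wg.x + bg) + wb.(x*x) + c,
    replacing the single inner product of a conventional neuron
    (nonlinear activation omitted for brevity)."""
    return (x @ wr + br) * (x @ wg + bg) + (x * x) @ wb + c

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
wr, wg, wb = rng.standard_normal((3, 8))
out = quadratic_neuron(x, wr, 0.1, wg, -0.2, wb, 0.0)

# With wg = 0, bg = 1, and wb = 0 the quadratic neuron reduces to an
# ordinary linear neuron computing wr.x + br.
```

The reduction noted in the last comment is what makes conventional networks a special case of quadratic-neuron networks.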
Affiliation(s)
- Fenglei Fan
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Hongming Shan
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Mannudeep K. Kalra
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Ramandeep Singh
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Guhan Qian
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Matthew Getzin
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Yueyang Teng
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang 110169, China
- Juergen Hahn
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
359. Hariri A, Alipour K, Mantri Y, Schulze JP, Jokerst JV. Deep learning improves contrast in low-fluence photoacoustic imaging. Biomed Opt Express 2020; 11:3360-3373. [PMID: 32637260] [PMCID: PMC7316023] [DOI: 10.1364/boe.395683]
Abstract
Low-fluence illumination sources can facilitate the clinical translation of photoacoustic imaging because they are rugged, portable, affordable, and safe. However, these sources also degrade image quality because of their low fluence. Here, we propose a denoising method that uses a multi-level wavelet convolutional neural network to map images acquired with a low-fluence illumination source to their high-fluence counterparts. Quantitative and qualitative results show significant potential to remove background noise while preserving target structures. We observed substantial improvements of up to 2.20-, 2.25-, and 4.3-fold in the PSNR, SSIM, and CNR metrics, respectively, as well as enhanced contrast (up to 1.76-fold) in an in vivo application. We suggest that this tool can increase the value of low-fluence sources in photoacoustic imaging.
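As a point of reference for the multi-level wavelet processing that the network learns, a classical multi-level Haar soft-threshold denoiser can be sketched as follows (a hand-built stand-in, not the paper's trained CNN; the 1-D signal, noise level, and threshold are illustrative assumptions):

```python
import numpy as np

def haar_denoise(signal, levels=3, thresh=0.1):
    """Classical multi-level Haar soft-threshold denoiser (a simple
    stand-in for the learned multi-level wavelet CNN in the paper).
    Signal length must be divisible by 2**levels."""
    x = np.asarray(signal, dtype=float)
    details = []
    for _ in range(levels):
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass band
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass band
        # Soft-threshold the detail coefficients, where noise concentrates.
        detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0.0)
        details.append(detail)
        x = approx
    for detail in reversed(details):                  # inverse transform
        out = np.empty(2 * x.size)
        out[0::2] = (x + detail) / np.sqrt(2.0)
        out[1::2] = (x - detail) / np.sqrt(2.0)
        x = out
    return x

rng = np.random.default_rng(0)
clean = np.zeros(64)
clean[16:48] = 1.0                  # toy "target structure"
noisy = clean + 0.05 * rng.standard_normal(64)
denoised = haar_denoise(noisy)
```

With the threshold set to zero the transform reconstructs the input exactly; with a threshold of roughly twice the noise level, most background noise in the detail bands is removed.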
Affiliation(s)
- Ali Hariri
- Department of NanoEngineering, University of California, San Diego, La Jolla, CA 92093, USA
- These authors contributed equally to this paper
- Kamran Alipour
- Department of Computer Science, University of California, San Diego, La Jolla, CA 92093, USA
- These authors contributed equally to this paper
- Yash Mantri
- Department of BioEngineering, University of California, San Diego, La Jolla, CA 92093, USA
- Jurgen P. Schulze
- Department of Computer Science, University of California, San Diego, La Jolla, CA 92093, USA
- Qualcomm Institute, University of California, San Diego, La Jolla, CA 92093, USA
- Jesse V. Jokerst
- Department of NanoEngineering, University of California, San Diego, La Jolla, CA 92093, USA
- Department of Radiology, University of California, San Diego, La Jolla, CA 92093, USA
- Material Science Program, University of California, San Diego, La Jolla, CA 92093, USA
360.
Abstract
Computed tomography angiography (CTA) has become a mainstay for the imaging of vascular diseases because of its high accuracy, availability, and rapid turnaround time. High-quality CTA images can now be routinely obtained with high isotropic spatial resolution and temporal resolution. Advances in CTA have focused on improving image quality, increasing acquisition speed, eliminating artifacts, and reducing the doses of radiation and iodinated contrast media. Dual-energy computed tomography provides material-composition capabilities that can be used for characterizing lesions, optimizing contrast, decreasing artifacts, and reducing radiation dose. Deep learning techniques can be used for classification, segmentation, quantification, and image enhancement.
Affiliation(s)
- Prabhakar Rajiah
- Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN 55904, USA.
361. van Assen M, Lee SJ, De Cecco CN. Artificial intelligence from A to Z: From neural network to legal framework. Eur J Radiol 2020; 129:109083. [PMID: 32526670] [DOI: 10.1016/j.ejrad.2020.109083]
Abstract
Artificial intelligence (AI) will continue to cause substantial changes within the field of radiology, and it will become increasingly important for clinicians to be familiar with several concepts behind AI algorithms in order to effectively guide their clinical implementation. This review aims to give medical professionals the basic information needed to understand AI development and research. The general concepts behind several AI algorithms, including their data requirements, training, and evaluation methods are explained. The potential legal implications of using AI algorithms in clinical practice are also discussed.
Affiliation(s)
- Marly van Assen
- Division of Cardiothoracic Imaging, Department of Radiology and Imaging Sciences, Emory University Hospital
- Emory Healthcare, Inc., Atlanta, GA, USA
- Scott J Lee
- Division of Cardiothoracic Imaging, Department of Radiology and Imaging Sciences, Emory University Hospital
- Emory Healthcare, Inc., Atlanta, GA, USA
- Carlo N De Cecco
- Division of Cardiothoracic Imaging, Department of Radiology and Imaging Sciences, Emory University Hospital
- Emory Healthcare, Inc., Atlanta, GA, USA
362. Gao Y, Tan J, Shi Y, Lu S, Gupta A, Li H, Liang Z. Constructing a tissue-specific texture prior by machine learning from previous full-dose scan for Bayesian reconstruction of current ultralow-dose CT images. J Med Imaging (Bellingham) 2020; 7:032502. [PMID: 32118093] [PMCID: PMC7040436] [DOI: 10.1117/1.jmi.7.3.032502]
Abstract
Purpose: Bayesian theory provides a sound framework for ultralow-dose computed tomography (ULdCT) image reconstruction, with one term modeling the statistical properties of the data and another incorporating a priori knowledge of the image to be reconstructed. We investigate the feasibility of using a machine learning (ML) strategy, specifically a convolutional neural network (CNN), to construct a tissue-specific texture prior from a previous full-dose computed tomography scan. Approach: We construct four tissue-specific texture priors, corresponding to lung, bone, fat, and muscle, and integrate each prior with the pre-log shifted-Poisson (SP) data model for Bayesian reconstruction of ULdCT images. The Bayesian reconstruction was implemented by an algorithm called SP-CNN-T and compared with our previous Markov random field (MRF)-based tissue-specific texture prior algorithm, SP-MRF-T. Results: In addition to the conventional quantitative measures of mean squared error and peak signal-to-noise ratio, the structural similarity index, feature similarity, and Haralick texture features were used to measure the performance difference between the SP-CNN-T and SP-MRF-T algorithms in terms of structure and tissue-texture preservation, demonstrating the feasibility and potential of the investigated ML approach. Conclusions: Both the training performance and the image reconstruction results show the feasibility of constructing a CNN texture prior model and its potential for improving structure preservation of nodules compared with our previous regional tissue-specific MRF texture prior model.
Affiliation(s)
- Yongfeng Gao
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- Jiaxing Tan
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- Yongyi Shi
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- Siming Lu
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- State University of New York, Department of Biomedical Engineering, Stony Brook, New York, United States
- Amit Gupta
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- Haifang Li
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- Zhengrong Liang
- State University of New York, Department of Radiology, Stony Brook, New York, United States
- State University of New York, Department of Biomedical Engineering, Stony Brook, New York, United States
363. Willemink MJ, Koszek WA, Hardell C, Wu J, Fleischmann D, Harvey H, Folio LR, Summers RM, Rubin DL, Lungren MP. Preparing Medical Imaging Data for Machine Learning. Radiology 2020; 295:4-15. [PMID: 32068507] [PMCID: PMC7104701] [DOI: 10.1148/radiol.2020192224]
Abstract
Artificial intelligence (AI) continues to garner substantial interest in medical imaging. The potential applications are vast and span the entire medical imaging life cycle, from image creation to diagnosis to outcome prediction. The chief obstacle to the development and clinical implementation of AI algorithms is the limited availability of sufficiently large, curated, and representative training data that include expert labeling (e.g., annotations). Current supervised AI methods require a curation process so that data can optimally train, validate, and test algorithms. Most research groups and industry currently have limited data access, with small sample sizes from small geographic areas. In addition, the preparation of data is a costly and time-intensive process, and the result is often algorithms with limited utility and poor generalization. In this article, the authors describe fundamental steps for preparing medical imaging data for AI algorithm development, explain current limitations to data curation, and explore new approaches to address the problem of data availability.
Affiliation(s)
- Martin J. Willemink
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
| | - Wojciech A. Koszek
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
| | - Cailin Hardell
- From the Department of Radiology, Stanford University School of Medicine, 300 Pasteur Dr, S-072, Stanford, CA 94305-5105 (M.J.W., D.F., D.L.R., M.P.L.); Segmed, Menlo Park, Calif (M.J.W., W.A.K., C.H., J.W.); School of Engineering, Stanford University, Stanford, Calif (J.W.); Institute of Cognitive Neuroscience, University College London, London, England (H.H.); Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (L.R.F.); Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, National Institutes of Health, Clinical Center, Bethesda, Md (R.M.S.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (D.L.R.); and Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI), Stanford, Calif (M.P.L.)
| | - Jie Wu
| | - Dominik Fleischmann
| | - Hugh Harvey
| | - Les R. Folio
| | - Ronald M. Summers
| | - Daniel L. Rubin
| | - Matthew P. Lungren
| |
Collapse
|
364
|
Possibility of Deep Learning in Medical Imaging Focusing Improvement of Computed Tomography Image Quality. J Comput Assist Tomogr 2020; 44:161-167. [PMID: 31789682 DOI: 10.1097/rct.0000000000000928] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Deep learning (DL), part of a broader family of machine learning methods, is based on learning data representations rather than task-specific algorithms. Deep learning can be used to improve the image quality of clinical scans through image noise reduction. We review the ability of DL to reduce image noise, present the advantages and disadvantages of current computed tomography image reconstruction methods, and examine the potential value of new DL-based computed tomography image reconstruction.
Collapse
|
365
|
Kalmet PHS, Sanduleanu S, Primakov S, Wu G, Jochems A, Refaee T, Ibrahim A, Hulst LV, Lambin P, Poeze M. Deep learning in fracture detection: a narrative review. Acta Orthop 2020; 91:215-220. [PMID: 31928116 PMCID: PMC7144272 DOI: 10.1080/17453674.2019.1711323] [Citation(s) in RCA: 72] [Impact Index Per Article: 14.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/08/2023] Open
Abstract
Artificial intelligence (AI) is a general term that implies the use of a computer to model intelligent behavior with minimal human intervention. AI, particularly deep learning, has recently made substantial strides in perception tasks, allowing machines to better represent and interpret complex data. Deep learning is a subset of AI represented by the combination of artificial neuron layers. In recent years, deep learning has gained considerable momentum. In the field of orthopaedics and traumatology, several studies have used deep learning to detect fractures on radiographs. Deep learning studies that detect and classify fractures on computed tomography (CT) scans are even more limited. In this narrative review, we provide a brief overview of deep learning technology: we (1) describe the ways in which deep learning has so far been applied to fracture detection on radiographs and CT examinations; (2) discuss what value deep learning offers to this field; and finally (3) comment on future directions of this technology.
Collapse
Affiliation(s)
| | - Sebastian Sanduleanu
- The D-Lab: Decision Support for Precision Medicine, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center+, Maastricht
| | - Sergey Primakov
- The D-Lab: Decision Support for Precision Medicine, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center+, Maastricht
| | - Guangyao Wu
- The D-Lab: Decision Support for Precision Medicine, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center+, Maastricht
| | - Arthur Jochems
- The D-Lab: Decision Support for Precision Medicine, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center+, Maastricht
| | - Turkey Refaee
- The D-Lab: Decision Support for Precision Medicine, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center+, Maastricht
| | - Abdalla Ibrahim
- The D-Lab: Decision Support for Precision Medicine, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center+, Maastricht
| | - Luca v. Hulst
- Maastricht University Medical Center+, Department of Trauma Surgery, Maastricht
| | - Philippe Lambin
- The D-Lab: Decision Support for Precision Medicine, GROW—School for Oncology and Developmental Biology, Maastricht University Medical Center+, Maastricht
| | - Martijn Poeze
- Maastricht University Medical Center+, Department of Trauma Surgery, Maastricht
- Nutrim School for Nutrition, Toxicology and Metabolism, Maastricht University, Maastricht, The Netherlands
| |
Collapse
|
366
|
|
367
|
Chen C, Qin C, Qiu H, Tarroni G, Duan J, Bai W, Rueckert D. Deep Learning for Cardiac Image Segmentation: A Review. Front Cardiovasc Med 2020; 7:25. [PMID: 32195270 PMCID: PMC7066212 DOI: 10.3389/fcvm.2020.00025] [Citation(s) in RCA: 355] [Impact Index Per Article: 71.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2019] [Accepted: 02/17/2020] [Indexed: 12/15/2022] Open
Abstract
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, covering common imaging modalities, including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound, and the major anatomical structures of interest (ventricles, atria, and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a base for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.
Collapse
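Segmentation results in the literature reviewed above are commonly compared with overlap metrics such as the Dice coefficient. A minimal illustration of that metric (generic sketch on flat binary masks, not code from the review):

```python
def dice_coefficient(pred, target):
    """Dice overlap between two binary masks given as flat 0/1 lists."""
    assert len(pred) == len(target)
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 1D "masks": perfect overlap scores 1.0, disjoint masks score 0.0
print(dice_coefficient([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0
print(dice_coefficient([1, 0, 0, 0], [0, 0, 0, 1]))  # 0.0
```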
Affiliation(s)
- Chen Chen
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| | - Chen Qin
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| | - Huaqi Qiu
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| | - Giacomo Tarroni
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- CitAI Research Centre, Department of Computer Science, City University of London, London, United Kingdom
| | - Jinming Duan
- School of Computer Science, University of Birmingham, Birmingham, United Kingdom
| | - Wenjia Bai
- Data Science Institute, Imperial College London, London, United Kingdom
- Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom
| | - Daniel Rueckert
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
| |
Collapse
|
368
|
Tao X, Zhang H, Wang Y, Yan G, Zeng D, Chen W, Ma J. VVBP-Tensor in the FBP Algorithm: Its Properties and Application in Low-Dose CT Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:764-776. [PMID: 31425024 DOI: 10.1109/tmi.2019.2935187] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
For decades, commercial X-ray computed tomography (CT) scanners have been using the filtered backprojection (FBP) algorithm for image reconstruction. However, the desire for lower radiation doses has pushed the FBP algorithm to its limit. Previous studies have made significant efforts to improve the results of FBP through preprocessing the sinogram, modifying the ramp filter, or postprocessing the reconstructed images. In this paper, we focus on analyzing and processing the stacked view-by-view backprojections (named VVBP-Tensor) in the FBP algorithm. A key challenge for our analysis lies in the radial structures in each backprojection slice. To overcome this difficulty, a sorting operation was introduced to the VVBP-Tensor in its z direction (the direction of the projection views). The results show that, after sorting, the tensor contains structures that are similar to those of the object, and structures in different slices of the tensor are correlated. We then analyzed the properties of the VVBP-Tensor, including structural self-similarity, tensor sparsity, and noise statistics. Considering these properties, we have developed an algorithm using the tensor singular value decomposition (named VVBP-tSVD) to denoise the VVBP-Tensor for low-mAs CT imaging. Experiments were conducted using a physical phantom and clinical patient data with different mAs levels. The results demonstrate that the VVBP-tSVD is superior to all competing methods under different reconstruction schemes, including sinogram preprocessing, image postprocessing, and iterative reconstruction. We conclude that the VVBP-Tensor is a suitable processing target for improving the quality of FBP reconstruction, and the proposed VVBP-tSVD is an effective algorithm for noise reduction in low-mAs CT imaging. This preliminary work might provide a heuristic perspective for reviewing and rethinking the FBP algorithm.
Collapse
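The key sorting operation described above — ordering the stacked view-by-view backprojections pixel-wise along the view (z) direction — can be shown on a toy tensor. This is a hedged sketch of the idea only, not the authors' implementation:

```python
def sort_vvbp_tensor(tensor):
    """Sort a stack of backprojection slices pixel-wise along the view axis.

    `tensor` is a list of V slices, each an HxW nested list. At every
    (row, col) position the V values are sorted in ascending order, which
    is the z-direction sorting described for the VVBP-Tensor.
    """
    views = len(tensor)
    rows, cols = len(tensor[0]), len(tensor[0][0])
    out = [[[0.0] * cols for _ in range(rows)] for _ in range(views)]
    for r in range(rows):
        for c in range(cols):
            column = sorted(t[r][c] for t in tensor)
            for v in range(views):
                out[v][r][c] = column[v]
    return out

# Three 1x2 "backprojection slices"; each pixel column is sorted independently
t = [[[3.0, 1.0]], [[1.0, 2.0]], [[2.0, 3.0]]]
print(sort_vvbp_tensor(t))  # [[[1.0, 1.0]], [[2.0, 2.0]], [[3.0, 3.0]]]
```

After sorting, each slice of the toy tensor varies smoothly, mirroring the paper's observation that sorted slices contain object-like, correlated structures suitable for tensor denoising.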
|
369
|
Erickson BJ, Cai J. Magician's Corner: 5. Generative Adversarial Networks. Radiol Artif Intell 2020; 2:e190215. [PMID: 33937820 PMCID: PMC8017406 DOI: 10.1148/ryai.2020190215] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2019] [Revised: 01/14/2020] [Accepted: 01/22/2020] [Indexed: 11/11/2022]
Affiliation(s)
- Bradley J. Erickson
- From the Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN 55905
| | - Jason Cai

| |
Collapse
|
370
|
Liu Z, Bicer T, Kettimuthu R, Gursoy D, De Carlo F, Foster I. TomoGAN: low-dose synchrotron x-ray tomography with generative adversarial networks: discussion. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2020; 37:422-434. [PMID: 32118926 DOI: 10.1364/josaa.375595] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/16/2019] [Accepted: 01/08/2020] [Indexed: 06/10/2023]
Abstract
Synchrotron-based x-ray tomography is a noninvasive imaging technique that allows for reconstructing the internal structure of materials at high spatial resolutions from tens of micrometers to a few nanometers. In order to resolve sample features at smaller length scales, however, a higher radiation dose is required. Therefore, the limitation on the achievable resolution is set primarily by noise at these length scales. We present TomoGAN, a denoising technique based on generative adversarial networks, for improving the quality of reconstructed images for low-dose imaging conditions. We evaluate our approach in two photon-budget-limited experimental conditions: (1) sufficient number of low-dose projections (based on Nyquist sampling), and (2) insufficient or limited number of high-dose projections. In both cases, the angular sampling is assumed to be isotropic, and the photon budget throughout the experiment is fixed based on the maximum allowable radiation dose on the sample. Evaluation with both simulated and experimental datasets shows that our approach can significantly reduce noise in reconstructed images, improving the structural similarity score of simulation and experimental data from 0.18 to 0.9 and from 0.18 to 0.41, respectively. Furthermore, the quality of the reconstructed images with filtered back projection followed by our denoising approach exceeds that of reconstructions with the simultaneous iterative reconstruction technique, showing the computational superiority of our approach.
Collapse
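The structural similarity scores quoted above follow the standard SSIM definition. A minimal single-window version over flat pixel lists (the full metric averages this over local windows; generic sketch, not the paper's code):

```python
def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM between two images given as flat pixel lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx * mx + my * my + c1) * (vx + vy + c2))

# Identical images score 1.0 by construction
img = [0.1, 0.5, 0.9, 0.3]
print(round(ssim_global(img, img), 6))  # 1.0
```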
|
371
|
Yang Y, Sun J, Li H, Xu Z. ADMM-CSNet: A Deep Learning Approach for Image Compressive Sensing. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2020; 42:521-538. [PMID: 30507495 DOI: 10.1109/tpami.2018.2883941] [Citation(s) in RCA: 170] [Impact Index Per Article: 34.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Compressive sensing (CS) is an effective technique for reconstructing images from a small amount of sampled data. It has been widely applied in medical imaging, remote sensing, image compression, etc. In this paper, we propose two versions of a novel deep learning architecture, dubbed ADMM-CSNet, combining the traditional model-based CS method and the data-driven deep learning method for image reconstruction from sparsely sampled measurements. We first consider a generalized CS model for image reconstruction with undetermined regularizations in undetermined transform domains, and then propose two efficient solvers based on the Alternating Direction Method of Multipliers (ADMM) algorithm for optimizing the model. We further unroll and generalize the ADMM algorithm into two deep architectures, in which all parameters of the CS model and the ADMM algorithm are discriminatively learned by end-to-end training. For both fast CS complex-valued MR imaging and CS imaging of real-valued natural images, the proposed ADMM-CSNet achieved favorable reconstruction accuracy at fast computational speed compared with traditional and other deep learning methods.
Collapse
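The unrolled ADMM layers described above alternate a data-consistency update, a proximal (shrinkage) step, and a dual update. The toy loop below illustrates that alternation for the simplest case (identity sensing, l1 prior); one unrolled network layer corresponds to one pass of this loop with learned parameters. This is a hedged sketch of the algorithm family, not the paper's architecture:

```python
def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: shrink each coefficient toward zero."""
    return [max(abs(v) - lam, 0.0) * (1 if v >= 0 else -1) for v in x]

def admm_denoise(y, lam, rho=1.0, iters=50):
    """Toy ADMM for min_x 0.5*||x - y||^2 + lam*||x||_1 (identity sensing).

    x-update: quadratic data-consistency step; z-update: soft threshold;
    u-update: dual ascent.
    """
    z = [0.0] * len(y)
    u = [0.0] * len(y)
    for _ in range(iters):
        x = [(yi + rho * (zi - ui)) / (1.0 + rho)
             for yi, zi, ui in zip(y, z, u)]
        z = soft_threshold([xi + ui for xi, ui in zip(x, u)], lam / rho)
        u = [ui + xi - zi for ui, xi, zi in zip(u, x, z)]
    return z

# Small coefficients are shrunk to exactly zero, large ones survive
print(admm_denoise([3.0, 0.05, -2.0], lam=0.5))  # ≈ [2.5, 0.0, -1.5]
```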
|
372
|
Yang Q, Li N, Zhao Z, Fan X, Chang EIC, Xu Y. MRI Cross-Modality Image-to-Image Translation. Sci Rep 2020; 10:3753. [PMID: 32111966 PMCID: PMC7048849 DOI: 10.1038/s41598-020-60520-6] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2019] [Accepted: 02/12/2020] [Indexed: 11/23/2022] Open
Abstract
We present a cross-modality generation framework that learns to generate translated modalities from given modalities in MR images. Our proposed method performs Image Modality Translation (abbreviated as IMT) by means of a deep learning model that leverages conditional generative adversarial networks (cGANs). Our framework jointly exploits the low-level features (pixel-wise information) and high-level representations (e.g. brain tumors, brain structures such as gray matter, etc.) between cross modalities, which are important for resolving the challenging complexity in brain structures. Based on our proposed framework, we first propose a method for cross-modality registration by fusing the deformation fields to adopt the cross-modality information from translated modalities. Second, we propose an approach for MRI segmentation, translated multichannel segmentation (TMS), in which given modalities, along with translated modalities, are segmented by fully convolutional networks (FCN) in a multichannel manner. Both methods successfully adopt the cross-modality information to improve performance without adding any extra data. Experiments demonstrate that our proposed framework advances the state-of-the-art on five brain MRI datasets. We also observe encouraging results in cross-modality registration and segmentation on several widely adopted brain datasets. Overall, our work can serve as an auxiliary method in medical use and be applied to various tasks in medical fields.
Collapse
Grants
- This work is supported by Microsoft Research under the eHealth program, the National Natural Science Foundation in China under Grant 81771910, the National Science and Technology Major Project of the Ministry of Science and Technology in China under Grant 2017YFC0110903, the Beijing Natural Science Foundation in China under Grant 4152033, the Technology and Innovation Commission of Shenzhen in China under Grant shenfagai2016-627, Beijing Young Talent Project in China, the Fundamental Research Funds for the Central Universities of China under Grant SKLSDE-2017ZX-08 from the State Key Laboratory of Software Development Environment in Beihang University in China, the 111 Project in China under Grant B13003.
Collapse
Affiliation(s)
- Qianye Yang
- State Key Laboratory of Software Development Environment and Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and Research Institute of Beihang University in Shenzhen, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
| | - Nannan Li
- State Key Laboratory of Software Development Environment and Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and Research Institute of Beihang University in Shenzhen, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Ping An Technology (Shenzhen) Co., Ltd., Shanghai, 200030, China
| | - Zixu Zhao
- State Key Laboratory of Software Development Environment and Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and Research Institute of Beihang University in Shenzhen, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
| | - Xingyu Fan
- Bioengineering College of Chongqing University, Chongqing, 400044, China
| | | | - Yan Xu
- State Key Laboratory of Software Development Environment and Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and Research Institute of Beihang University in Shenzhen, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China.
- Microsoft Research Asia, Beijing, 100080, China.
| |
Collapse
|
373
|
Abstract
CLINICAL/METHODOLOGICAL PROBLEM In the reconstruction of three-dimensional image data, artifacts that interfere with interpretation often occur as a result of attempts to minimize the dose or due to missing data. The iterative reconstruction methods currently used are time-consuming and have disadvantages. STANDARD RADIOLOGICAL METHODS These problems are known to occur in computed tomography (CT), cone beam CT, interventional imaging, magnetic resonance imaging (MRI), and nuclear medicine imaging (PET and SPECT). METHODOLOGICAL INNOVATIONS Using techniques based on artificial intelligence (AI) for data analysis and data supplementation, a number of these problems can be solved to a certain extent. PERFORMANCE The performance of the methods varies greatly. Although the image data generated by the AI-based methods presented here usually look very good, the results depend strongly on the study design, so reliable, comparable quantitative statements on performance are not yet broadly available. EVALUATION In principle, AI-based image reconstruction methods offer many possibilities for improving and optimizing three-dimensional image datasets. However, their validity depends strongly on the design of the respective study for each individual procedure. Suitable testing prior to use in clinical practice is therefore essential. PRACTICAL RECOMMENDATIONS Before the widespread use of AI-based reconstruction methods can be recommended, meaningful test procedures must be established that can characterize actual performance and applicability in terms of information content, together with a meaningful study design during the learning phase of the algorithms.
Collapse
Affiliation(s)
- C Hoeschen
- Institut für Medizintechnik, Fakultät für Elektro- und Informationstechnik, Otto-von-Guericke-Universität Magdeburg, Universitätsplatz 2, 39106, Magdeburg, Deutschland.
| |
Collapse
|
374
|
Guha I, Nadeem SA, You C, Zhang X, Levy SM, Wang G, Torner JC, Saha PK. Deep Learning Based High-Resolution Reconstruction of Trabecular Bone Microstructures from Low-Resolution CT Scans using GAN-CIRCLE. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2020; 11317:113170U. [PMID: 32201450 PMCID: PMC7085412 DOI: 10.1117/12.2549318] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
Osteoporosis is a common age-related disease characterized by reduced bone density and increased fracture risk. The microstructural quality of trabecular bone (Tb), commonly found at axial skeletal sites and at the ends of long bones, is an important determinant of bone strength and fracture risk. Emerging high-resolution CT scanners enable in vivo measurement of Tb microstructure at peripheral sites. However, the resolution-dependence of microstructural measures and wide resolution discrepancies among various CT scanners, together with rapid upgrades in technology, warrant data harmonization in CT-based cross-sectional and longitudinal bone studies. This paper presents a deep learning-based method for high-resolution reconstruction of Tb microstructure from low-resolution CT scans using GAN-CIRCLE. A network was developed and evaluated using post-registered ankle CT scans of nineteen volunteers on both low- and high-resolution CT scanners. 9,000 matching pairs of low- and high-resolution patches of size 64×64 were randomly harvested from ten volunteers for training and validation. Another 5,000 matching pairs of patches from the nine other volunteers were used for evaluation. Quantitative comparison shows that predicted high-resolution scans have a significantly improved structural similarity index (p < 0.01) with true high-resolution scans as compared to the same metric for low-resolution data. Different Tb microstructural measures, such as thickness, spacing, and network area density, were also computed from low- and predicted high-resolution images and compared with the values derived from true high-resolution scans. Thickness and network area measures from predicted images showed higher agreement with values derived from true high-resolution CT (CCC = [0.95, 0.91]) than the same measures from low-resolution images (CCC = [0.72, 0.88]).
Collapse
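The agreement values reported above are concordance correlation coefficients (CCC), which penalize both poor correlation and systematic offset between predicted and true measures. Lin's CCC can be computed as follows (generic sketch, not the authors' code):

```python
def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Perfect agreement gives 1.0; a constant offset lowers the score
print(concordance_ccc([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))            # 1.0
print(round(concordance_ccc([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]), 4))  # 0.5714
```

Unlike plain Pearson correlation, the second pair scores below 1.0 even though the samples are perfectly correlated, because the `(mx - my)**2` bias term penalizes their constant offset.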
Affiliation(s)
- Indranil Guha
- Department of Electrical and Computer Engineering, College of Engineering, University of Iowa, Iowa City, IA 52242
| | - Syed Ahmed Nadeem
- Department of Electrical and Computer Engineering, College of Engineering, University of Iowa, Iowa City, IA 52242
| | - Chenyu You
- Department of Computer Science, Yale University, New Haven, CT 05620
| | - Xiaoliu Zhang
- Department of Electrical and Computer Engineering, College of Engineering, University of Iowa, Iowa City, IA 52242
| | - Steven M Levy
- Department of Preventive and Community Dentistry, College of Dentistry, University of Iowa, Iowa City, IA 52242
| | - Ge Wang
- Biomedical Imaging Center, BME/CBIS, Rensselaer Polytechnic Institute, Troy, New York, NY 12180
| | - James C Torner
- Department of Epidemiology, University of Iowa, Iowa City, IA 52242
| | - Punam K Saha
- Department of Electrical and Computer Engineering, College of Engineering, University of Iowa, Iowa City, IA 52242
- Department of Radiology, Carver College of Medicine, University of Iowa, Iowa City, IA 52242
| |
Collapse
|
375
|
Satpute N, Naseem R, Pelanis E, Gómez-Luna J, Cheikh FA, Elle OJ, Olivares J. GPU acceleration of liver enhancement for tumor segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 184:105285. [PMID: 31896055 DOI: 10.1016/j.cmpb.2019.105285] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/07/2019] [Revised: 11/27/2019] [Accepted: 12/16/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Medical image segmentation plays a vital role in medical image analysis. Many algorithms developed for medical image segmentation are based on edge or region characteristics and therefore depend on image quality. The contrast of a CT or MRI image plays an important role in identifying the region of interest, i.e., lesion(s). To enhance image contrast, clinicians generally use a manual histogram adjustment technique based on 1D histogram specification, which is time-consuming and results in a poor distribution of pixels over the image. Cross-modality-based contrast enhancement is a 2D histogram specification technique. It is robust and provides a more uniform distribution of pixels over the CT image by exploiting inner structure information from the MRI image, which helps increase the sensitivity and accuracy of lesion segmentation from the enhanced CT image. The sequential implementation of cross-modality-based contrast enhancement is slow; hence, we propose GPU acceleration of cross-modality-based contrast enhancement for tumor segmentation. METHODS The aim of this study is fast parallel cross-modality-based contrast enhancement for CT liver images, including the pairwise 2D histogram, histogram equalization, and histogram matching. The sequential implementation of cross-modality-based contrast enhancement is computationally expensive and hence time-consuming. We propose persistence- and grid-stride-loop-based fast parallel contrast enhancement for CT liver images. We use the enhanced CT liver image for lesion or tumor segmentation, implementing fast parallel gradient-based dynamic seeded region growing. RESULTS The proposed parallel approach is 104.416 (± 5.166) times faster than the sequential implementation and increases the sensitivity and specificity of tumor segmentation.
CONCLUSION The cross-modality approach is inspired by 2D histogram specification, which incorporates spatial information existing in both guidance and input images to remap the input image intensity values. Cross-modality-based liver contrast enhancement improves the quality of tumor segmentation.
Collapse
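Histogram specification, the operation at the core of the abstract above, remaps input intensities so their cumulative distribution follows a reference. A 1D sketch of the idea (the paper extends this to pairwise 2D histograms across CT and MRI; this is a generic illustration, not the paper's code):

```python
def histogram_match(source, reference, levels=256):
    """Remap integer intensities in `source` so its CDF follows `reference`.

    Classic 1D histogram specification: build both CDFs, then map each
    source level to the reference level with the nearest CDF value.
    """
    def cdf(pixels):
        hist = [0] * levels
        for p in pixels:
            hist[p] += 1
        total, acc, out = len(pixels), 0, []
        for h in hist:
            acc += h
            out.append(acc / total)
        return out

    cs, cr = cdf(source), cdf(reference)
    # For each source level, find the reference level with the closest CDF
    lut = [min(range(levels), key=lambda r: abs(cr[r] - cs[level]))
           for level in range(levels)]
    return [lut[p] for p in source]

# A dark image remapped toward a brighter reference distribution
src = [0, 0, 1, 1, 2, 2]
ref = [3, 3, 4, 4, 5, 5]
print(histogram_match(src, ref, levels=8))  # [3, 3, 4, 4, 5, 5]
```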
Affiliation(s)
- Nitin Satpute
- Department of Electronic and Computer Engineering, Universidad de Córdoba, Spain.
| | - Rabia Naseem
- Norwegian Colour and Visual Computing Lab, Norwegian University of Science and Technology, Norway
| | - Egidijus Pelanis
- The Intervention Centre, Oslo University Hospital, Oslo, Norway; The Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Oslo, Norway
| | | | - Faouzi Alaya Cheikh
- Norwegian Colour and Visual Computing Lab, Norwegian University of Science and Technology, Norway
| | - Ole Jakob Elle
- The Intervention Centre, Oslo University Hospital, Oslo, Norway; The Department of Informatics, The Faculty of Mathematics and Natural Sciences, University of Oslo, Oslo, Norway
| | - Joaquín Olivares
- Department of Electronic and Computer Engineering, Universidad de Córdoba, Spain
| |
Collapse
|
376
|
Huang L, Jiang H, Li S, Bai Z, Zhang J. Two stage residual CNN for texture denoising and structure enhancement on low dose CT image. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 184:105115. [PMID: 31627148 DOI: 10.1016/j.cmpb.2019.105115] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/23/2019] [Revised: 09/27/2019] [Accepted: 10/02/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE X-ray computed tomography (CT) plays an important role in modern medical science. Health problems caused by CT radiation have attracted wide attention in the academic community. Reducing the radiation dose, however, deteriorates image quality and in turn affects the doctor's diagnosis. Therefore, this paper introduces a new denoising method for low dose CT (LDCT) images, called the two stage residual convolutional neural network (TS-RCNN). METHODS There are two important parts to our network. 1) The first stage RCNN is proposed for texture denoising via the stationary wavelet transform (SWT) and the perceptual loss. Specifically, SWT is performed on each normal dose CT (NDCT) image, and the four generated wavelet images are used as the labels. 2) The second stage RCNN is established for structure enhancement via the average NDCT model on the basis of the first network's result. Finally, the denoised CT image is obtained via the inverse SWT. RESULTS Our proposed TS-RCNN is trained on three groups of simulated LDCT images with 1123 images per group and evaluated on 129 simulated LDCT images for each group. Besides, to demonstrate the clinical applicability of TS-RCNN, we also test our method on the 2016 Low Dose CT Grand Challenge dataset. Quantitative results show that TS-RCNN almost achieves the best results in terms of MSE, SSIM and PSNR compared to other methods. CONCLUSIONS The experimental results and comparisons demonstrate that TS-RCNN not only preserves more texture information, but also enhances structural information in LDCT images.
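The PSNR figure reported by this and several neighboring entries is computed from the mean-square error against the normal-dose reference image; a minimal numpy version (the function name and `data_range` default are our own) is:

```python
import numpy as np

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a denoised image x and
    its normal-dose reference y, for intensities spanning data_range."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf  # identical images: PSNR is unbounded
    return 10.0 * np.log10((data_range ** 2) / mse)
```

Higher is better; a gain of a few dB over a baseline is the kind of improvement these abstracts report.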
Affiliation(s)
- Liangliang Huang
- Software College, Northeastern University, Shenyang 110819, China
- Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, China
- Shaojie Li
- Sino-Dutch Biomedical and Information Engineering College, Northeastern University, Shenyang 110819, China
- Zhiqi Bai
- Software College, Northeastern University, Shenyang 110819, China
- Jitong Zhang
- Sino-Dutch Biomedical and Information Engineering College, Northeastern University, Shenyang 110819, China
377
Choi K, Vania M, Kim S. Semi-Supervised Learning for Low-Dose CT Image Restoration with Hierarchical Deep Generative Adversarial Network (HD-GAN). ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:2683-2686. [PMID: 31946448 DOI: 10.1109/embc.2019.8857572] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
In the absence of duplicate high-dose CT data, it is challenging to restore high-quality images based on deep learning with only low-dose CT (LDCT) data. When different reconstruction algorithms and settings are adopted to prepare high-quality images, LDCT datasets for deep learning can be unpaired. To address this problem, we propose hierarchical deep generative adversarial networks (HD-GANs) for semi-supervised learning with unpaired datasets. We first cluster each patient's CT images into multiple categories, and then collect the images in the same categories across different patients to build an imageset for denoising. Each imageset is fed into a generative adversarial network that consists of a denoising network and a following classification network. The denoising network efficiently reuses feature maps from the lower layers for end-to-end learning with full-size images. The classifier is trained to distinguish between the denoised images and the high-quality images. Evaluated on a clinical LDCT dataset, the proposed semi-supervised learning approach efficiently reduces the noise level of LDCT images without loss of information, thereby addressing the major shortcomings of iterative reconstruction (IR), namely computation time and anatomical inaccuracy.
378
Ganesan P, Xue Z, Singh S, Long R, Ghoraani B, Antani S. Performance Evaluation of a Generative Adversarial Network for Deblurring Mobile-phone Cervical Images. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:4487-4490. [PMID: 31946862 DOI: 10.1109/embc.2019.8857124] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Visual examination forms an integral part of cervical cancer screening. With the recent rise of smartphone-based health technologies, capturing cervical images with a smartphone camera for telemedicine and automated screening is gaining popularity. However, such images are highly prone to corruption, typically from an out-of-focus target or camera-shake blur. In this paper, we apply a generative adversarial network (GAN) to deblur mobile-phone cervical (MC) images and evaluate the deblurring quality using various measures. Our evaluation process is three-fold: first, we calculate the peak signal to noise ratio (PSNR) and the structural similarity (SSIM) on a test dataset with ground truth available. Next, we calculate the perception based image quality evaluator (PIQE) score on a test dataset without ground truth. Finally, we classify a dataset of blurred and corresponding deblurred images into normal/abnormal MC images, and use the resulting change in classification accuracy as our final assessment. Our evaluation experiments show that deblurring MC images can potentially improve the accuracy of both manual and automated cancerous lesion screening.
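For the SSIM metric used here and in neighboring entries, a single-window (global) variant can be sketched as follows; the standard metric averages the same statistics over local Gaussian windows, and the constants follow the common k1 = 0.01, k2 = 0.03 convention:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Structural similarity computed over the whole image as one window.
    The usual SSIM averages this quantity over sliding local windows;
    this global variant keeps the sketch short."""
    c1 = (0.01 * data_range) ** 2  # stabilizers from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

The value is 1 for identical images and decreases as luminance, contrast, or structure diverge.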
379
Gholizadeh-Ansari M, Alirezaie J, Babyn P. Low-dose CT Denoising Using Edge Detection Layer and Perceptual Loss. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:6247-6250. [PMID: 31947270 DOI: 10.1109/embc.2019.8857940] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Low-dose CT imaging is a valid approach to reduce patients' exposure to X-ray radiation. However, reducing the X-ray tube current increases noise and artifacts in the reconstructed CT images. Deep neural networks have been successfully employed to remove noise from low-dose CT images. This study proposes two novel techniques to boost the performance of a neural network with minimal added complexity. First, a non-trainable edge detection layer is proposed that extracts four edge maps from the input image. The layer improves the quantitative metrics (PSNR and SSIM) and helps predict a CT image with more precise boundaries. Next, a joint function of mean-square error and perceptual loss is employed to optimize the network. Using the perceptual loss helps preserve structural detail; however, it adds checkerboard artifacts to the output. The proposed joint objective function takes advantage of the benefits offered by each loss: it mitigates the over-smoothing caused by mean-square error and the checkerboard artifacts caused by perceptual loss.
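A non-trainable edge detection layer of this kind can be mimicked with fixed convolution kernels. The paper does not list its kernels, so Sobel-style filters in four orientations are assumed here purely for illustration:

```python
import numpy as np

# Four fixed (non-trainable) edge kernels: horizontal, vertical, and the
# two diagonals. Sobel-style filters are an assumption for this sketch;
# the abstract does not specify the kernels used.
KERNELS = [
    np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),
    np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], float),
    np.array([[0, -1, -2], [1, 0, -1], [2, 1, 0]], float),
]

def edge_layer(img):
    """Stack the input image with its four edge maps; edge padding keeps
    every map at the input's H x W size."""
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    maps = []
    for k in KERNELS:
        out = np.empty((h, w))
        for i in range(h):
            for j in range(w):
                out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * k)
        maps.append(out)
    return np.stack([img.astype(float)] + maps)  # shape (5, H, W)
```

In a network this stacked tensor would simply replace the single-channel input of the first trainable layer.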
380
Xie H, Shan H, Cong W, Liu C, Zhang X, Liu S, Ning R, Wang GE. Deep Efficient End-to-end Reconstruction (DEER) Network for Few-view Breast CT Image Reconstruction. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2020; 8:196633-196646. [PMID: 33251081 PMCID: PMC7695229 DOI: 10.1109/access.2020.3033795] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Breast CT provides image volumes with isotropic resolution and high contrast, enabling detection of small calcifications (down to a few hundred microns in size) and subtle density differences. Since the breast is sensitive to X-ray radiation, dose reduction in breast CT is an important topic, and few-view scanning is a main approach for this purpose. In this article, we propose a Deep Efficient End-to-end Reconstruction (DEER) network for few-view breast CT image reconstruction. The major merits of our network include high dose efficiency, excellent image quality, and low model complexity. By design, the proposed network can learn the reconstruction process with as few as O(N) parameters, where N is the side length of the image to be reconstructed, representing orders-of-magnitude improvement over state-of-the-art deep-learning-based reconstruction methods that map raw data directly to tomographic images. Validated on a cone-beam breast CT dataset prepared by Koning Corporation on a commercial scanner, our method also demonstrates competitive performance relative to state-of-the-art reconstruction networks in terms of image quality. The source code of this paper is available at: https://github.com/HuidongXie/DEER.
Affiliation(s)
- Huidong Xie
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY, USA
- Hongming Shan
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY, USA
- Wenxiang Cong
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY, USA
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Ruola Ning
- Koning Corporation, West Henrietta, NY, USA
- G E Wang
- Department of Biomedical Engineering, Biomedical Imaging Center, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY, USA
381

382
Kim J, Kim J, Han G, Rim C, Jo H. Low-dose CT Image Restoration using generative adversarial networks. INFORMATICS IN MEDICINE UNLOCKED 2020. [DOI: 10.1016/j.imu.2020.100468] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022] Open
|
383
|
Ichikawa K. [9. Virtual Monochromatic X-ray Computed Tomography]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2020; 76:237-241. [PMID: 32074533 DOI: 10.6009/jjrt.2020_jsrt_76.2.237] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Affiliation(s)
- Katsuhiro Ichikawa
- Institute of Medical, Pharmaceutical and Health Sciences, Kanazawa University
384
Gong K, Berg E, Cherry SR, Qi J. Machine Learning in PET: from Photon Detection to Quantitative Image Reconstruction. PROCEEDINGS OF THE IEEE. INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS 2020; 108:51-68. [PMID: 38045770 PMCID: PMC10691821 DOI: 10.1109/jproc.2019.2936809] [Citation(s) in RCA: 62] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/05/2023]
Abstract
Machine learning has found unique applications in nuclear medicine from photon detection to quantitative image reconstruction. While there have been impressive strides in detector development for time-of-flight positron emission tomography, most detectors still make use of simple signal processing methods to extract the time and position information from the detector signals. Now with the availability of fast waveform digitizers, machine learning techniques have been applied to estimate the position and arrival time of high-energy photons. In quantitative image reconstruction, machine learning has been used to estimate various correction factors, including scattered events and attenuation images, as well as to reduce statistical noise in reconstructed images. Here machine learning either provides a faster alternative to an existing time-consuming computation, such as in the case of scatter estimation, or creates a data-driven approach to map an implicitly defined function, such as in the case of estimating the attenuation map for PET/MR scans. In this article, we will review the abovementioned applications of machine learning in nuclear medicine.
Affiliation(s)
- Kuang Gong
- Department of Biomedical Engineering, University of California, Davis, CA, USA; now with Massachusetts General Hospital, Boston, MA, USA
- Eric Berg
- Department of Biomedical Engineering, University of California, Davis, CA, USA
- Simon R. Cherry
- Department of Biomedical Engineering and Department of Radiology, University of California, Davis, CA, USA
- Jinyi Qi
- Department of Biomedical Engineering, University of California, Davis, CA, USA
385
Higaki T, Nakamura Y, Zhou J, Yu Z, Nemoto T, Tatsugami F, Awai K. Deep Learning Reconstruction at CT: Phantom Study of the Image Characteristics. Acad Radiol 2020; 27:82-87. [PMID: 31818389 DOI: 10.1016/j.acra.2019.09.008] [Citation(s) in RCA: 138] [Impact Index Per Article: 27.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2019] [Revised: 06/21/2019] [Accepted: 09/08/2019] [Indexed: 12/22/2022]
Abstract
OBJECTIVES Noise, commonly encountered in computed tomography (CT) images, can impact diagnostic accuracy. To reduce image noise, we developed a deep-learning reconstruction (DLR) method that integrates deep convolutional neural networks into image reconstruction. In this phantom study, we compared the image noise characteristics, spatial resolution, and task-based detectability of DLR images and images reconstructed with other state-of-the-art techniques. METHODS We scanned a phantom harboring cylindrical modules of different contrast on a 320-row detector CT scanner. Phantom images were reconstructed with filtered back projection, hybrid iterative reconstruction, model-based iterative reconstruction, and DLR. The standard deviation of the CT number and the noise power spectrum were calculated for noise characterization. The 10% modulation-transfer function (MTF) level was used to evaluate spatial resolution; task-based detectability was assessed using the model observer method. RESULTS On DLR images, the noise was lower than on images subjected to the other reconstructions, especially at low radiation dose settings. Noise power spectrum measurements also showed that the noise amplitude was lower, especially for low-frequency components, on DLR images. Based on the MTF, spatial resolution was higher on model-based iterative reconstruction images than on DLR images; however, for lower-contrast objects, the MTF on DLR images was comparable to that of the other methods. The model observer study showed that at reduced radiation-dose settings, DLR yielded the best detectability. CONCLUSION On DLR images, image noise was lower, and high-contrast spatial resolution and task-based detectability were better, than on images reconstructed with the other state-of-the-art techniques.
386
Ravishankar S, Ye JC, Fessler JA. Image Reconstruction: From Sparsity to Data-adaptive Methods and Machine Learning. PROCEEDINGS OF THE IEEE. INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS 2020; 108:86-109. [PMID: 32095024 PMCID: PMC7039447 DOI: 10.1109/jproc.2019.2936204] [Citation(s) in RCA: 91] [Impact Index Per Article: 18.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
The field of medical image reconstruction has seen roughly four types of methods. The first type tended to be analytical methods, such as filtered back-projection (FBP) for X-ray computed tomography (CT) and the inverse Fourier transform for magnetic resonance imaging (MRI), based on simple mathematical models for the imaging systems. These methods are typically fast, but have suboptimal properties such as poor resolution-noise trade-off for CT. A second type is iterative reconstruction methods based on more complete models for the imaging system physics and, where appropriate, models for the sensor statistics. These iterative methods improved image quality by reducing noise and artifacts. The FDA-approved methods among these have been based on relatively simple regularization models. A third type of methods has been designed to accommodate modified data acquisition methods, such as reduced sampling in MRI and CT to reduce scan time or radiation dose. These methods typically involve mathematical image models involving assumptions such as sparsity or low-rank. A fourth type of methods replaces mathematically designed models of signals and systems with data-driven or adaptive models inspired by the field of machine learning. This paper focuses on the two most recent trends in medical image reconstruction: methods based on sparsity or low-rank models, and data-driven methods based on machine learning techniques.
Affiliation(s)
- Saiprasad Ravishankar
- Departments of Computational Mathematics, Science and Engineering, and Biomedical Engineering at Michigan State University, East Lansing, MI, 48824 USA
- Jong Chul Ye
- Department of Bio and Brain Engineering and Department of Mathematical Sciences at the Korea Advanced Institute of Science & Technology (KAIST), Daejeon, South Korea
- Jeffrey A Fessler
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109 USA
387
Lim LJ, Tison GH, Delling FN. Artificial Intelligence in Cardiovascular Imaging. Methodist Debakey Cardiovasc J 2020; 16:138-145. [PMID: 32670474 PMCID: PMC7350824 DOI: 10.14797/mdcj-16-2-138] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
Abstract
The number of cardiovascular imaging studies is growing exponentially, and so is the need to improve clinical workflow efficiency and avoid missed diagnoses. With the availability and use of large datasets, artificial intelligence (AI) has the potential to improve patient care at every stage of the imaging chain. Current literature indicates that in the short-term, AI has the capacity to reduce human error and save time in the clinical workflow through automated segmentation of cardiac structures. In the future, AI may expand the informational value of diagnostic images based on images alone or a combination of images and clinical variables, thus facilitating disease detection, prognosis, and decision making. This review describes the role of AI, specifically machine learning, in multimodality imaging, including echocardiography, nuclear imaging, computed tomography, and cardiac magnetic resonance, and highlights current uses of AI as well as potential challenges to its widespread implementation.
Affiliation(s)
- Lisa J. Lim
- University of California San Francisco, San Francisco, California
388
Jia X, Xing X, Yuan Y, Xing L, Meng MQH. Wireless Capsule Endoscopy: A New Tool for Cancer Screening in the Colon With Deep-Learning-Based Polyp Recognition. PROCEEDINGS OF THE IEEE 2020; 108:178-197. [DOI: 10.1109/jproc.2019.2950506] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2025]
389
You C, Li G, Zhang Y, Zhang X, Shan H, Li M, Ju S, Zhao Z, Zhang Z, Cong W, Vannier MW, Saha PK, Hoffman EA, Wang G. CT Super-Resolution GAN Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE). IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:188-203. [PMID: 31217097 PMCID: PMC11662229 DOI: 10.1109/tmi.2019.2922960] [Citation(s) in RCA: 181] [Impact Index Per Article: 36.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2023]
Abstract
In this paper, we present a semi-supervised deep learning approach to accurately recover high-resolution (HR) CT images from low-resolution (LR) counterparts. Specifically, with the generative adversarial network (GAN) as the building block, we enforce the cycle-consistency in terms of the Wasserstein distance to establish a nonlinear end-to-end mapping from noisy LR input images to denoised and deblurred HR outputs. We also include the joint constraints in the loss function to facilitate structural preservation. In this process, we incorporate deep convolutional neural network (CNN), residual learning, and network in network techniques for feature extraction and restoration. In contrast to the current trend of increasing network depth and complexity to boost the imaging performance, we apply a parallel 1×1 CNN to compress the output of the hidden layer and optimize the number of layers and the number of filters for each convolutional layer. The quantitative and qualitative evaluative results demonstrate that our proposed model is accurate, efficient and robust for super-resolution (SR) image restoration from noisy LR input images. In particular, we validate our composite SR networks on three large-scale CT datasets, and obtain promising results as compared to the other state-of-the-art methods.
390
Xie H, Shan H, Wang G. Deep Encoder-Decoder Adversarial Reconstruction(DEAR) Network for 3D CT from Few-View Data. Bioengineering (Basel) 2019; 6:E111. [PMID: 31835430 PMCID: PMC6956312 DOI: 10.3390/bioengineering6040111] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2019] [Revised: 11/20/2019] [Accepted: 12/05/2019] [Indexed: 11/16/2022] Open
Abstract
X-ray computed tomography (CT) is widely used in clinical practice. The involved ionizing X-ray radiation, however, could increase cancer risk. Hence, the reduction of the radiation dose has been an important topic in recent years. Few-view CT image reconstruction is one of the main ways to minimize radiation dose and potentially allow a stationary CT architecture. In this paper, we propose a deep encoder-decoder adversarial reconstruction (DEAR) network for 3D CT image reconstruction from few-view data. Since the artifacts caused by few-view reconstruction appear in 3D instead of 2D geometry, a 3D deep network has a great potential for improving the image quality in a data-driven fashion. More specifically, our proposed DEAR-3D network aims at reconstructing a 3D volume directly from clinical 3D spiral cone-beam image data. DEAR is validated on a publicly available abdominal CT dataset prepared and authorized by Mayo Clinic. Compared with other 2D deep learning methods, the proposed DEAR-3D network can utilize 3D information to produce promising reconstruction results.
Affiliation(s)
- Ge Wang
- Biomedical Imaging Center, Department of Biomedical Engineering, Center for Biotechnology & Interdisciplinary Studies, Rensselaer Polytechnic Institute, 110 Eighth Street, Troy, NY 12180, USA; (H.X.); (H.S.)
391
Unpaired Low-Dose CT Denoising Network Based on Cycle-Consistent Generative Adversarial Network with Prior Image Information. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2019; 2019:8639825. [PMID: 31885686 PMCID: PMC6925923 DOI: 10.1155/2019/8639825] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/01/2019] [Revised: 09/02/2019] [Accepted: 09/16/2019] [Indexed: 01/22/2023]
Abstract
The widespread application of X-ray computed tomography (CT) in clinical diagnosis has led to increasing public concern regarding the excessive radiation dose administered to patients. However, reducing the radiation dose inevitably causes severe noise, which affects radiologists' judgment and confidence. Hence, progressive low-dose CT (LDCT) image reconstruction methods must be developed to improve image quality. Over the past two years, deep learning-based approaches have shown impressive performance in noise reduction for LDCT images. Most existing deep learning-based approaches require a paired training dataset in which the LDCT images correspond one-to-one to normal-dose CT (NDCT) images, but acquiring well-paired datasets requires multiple scans and hence an increased radiation dose. Therefore, well-paired datasets are not readily available. To resolve this problem, this paper proposes an unpaired LDCT image denoising network based on cycle-consistent generative adversarial networks (CycleGAN) with prior image information, which does not require a one-to-one training dataset. In this method, the cyclic loss, an important trick in unpaired image-to-image translation, promises to map the distribution from LDCT to NDCT using unpaired training data. Furthermore, to guarantee accurate correspondence of image content between the output and the NDCT, the prior information obtained by preprocessing the LDCT image is integrated into the network to supervise the generation of content. Given the mapping of distributions through the cyclic loss and the supervision of content through the prior image loss, the proposed method can not only reduce image noise but also retain critical information. Real-data experiments were carried out to test the performance of the proposed method.
The peak signal-to-noise ratio (PSNR) improves by more than 3 dB, and the structural similarity (SSIM) increases, when compared with the original CycleGAN without prior information. The real LDCT data experiment demonstrates the superiority of the proposed method in both visual inspection and quantitative evaluation.
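The composite objective described in this abstract (a cyclic loss plus a prior image loss) can be sketched in numpy as below; the L1 form of both terms and the weights are our assumptions for illustration, not values from the paper:

```python
import numpy as np

def cycle_prior_loss(ldct, denoised, cycled, prior,
                     lam_cyc=10.0, lam_prior=1.0):
    """Composite generator objective sketched from the abstract:
    - cyclic term ties G_B(G_A(ldct)) (the 'cycled' image) back to ldct,
      enforcing cycle consistency without paired data;
    - prior term ties the denoised output to a conventionally
      preprocessed version of the LDCT image ('prior').
    Weights lam_cyc and lam_prior are illustrative defaults."""
    cyc = np.abs(cycled - ldct).mean()      # cycle-consistency (L1)
    pri = np.abs(denoised - prior).mean()   # prior-image supervision (L1)
    return lam_cyc * cyc + lam_prior * pri
```

During training this scalar would be added to the usual adversarial losses of the two generators.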
392
Yin X, Zhao Q, Liu J, Yang W, Yang J, Quan G, Chen Y, Shu H, Luo L, Coatrieux JL. Domain Progressive 3D Residual Convolution Network to Improve Low-Dose CT Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2903-2913. [PMID: 31107644 DOI: 10.1109/tmi.2019.2917258] [Citation(s) in RCA: 113] [Impact Index Per Article: 18.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
The wide application of X-ray computed tomography (CT) makes low-dose CT (LDCT) a clinical prerequisite, but reducing the radiation exposure in CT often leads to significantly increased noise and artifacts, which might lower the judgment accuracy of radiologists. In this paper, we put forward a domain progressive 3D residual convolution network (DP-ResNet) for the LDCT imaging procedure that contains three stages: a sinogram domain network (SD-net), filtered back projection (FBP), and an image domain network (ID-net). Though both are based on the residual network structure, the SD-net and ID-net have complementary effects on improving the final LDCT quality. Experimental results with both simulated and real projection data show that this domain progressive deep-learning network achieves significantly improved performance by combining the network processing in the two domains.
393
Lewis SJ, Gandomkar Z, Brennan PC. Artificial Intelligence in medical imaging practice: looking to the future. J Med Radiat Sci 2019; 66:292-295. [PMID: 31709775 PMCID: PMC6920680 DOI: 10.1002/jmrs.369] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2019] [Revised: 10/03/2019] [Accepted: 10/09/2019] [Indexed: 01/17/2023] Open
Abstract
Artificial intelligence (AI) is heralded as the most disruptive technology to health services in the 21st century. Many commentary articles published in both the general public and health domains recognise that medical imaging is at the forefront of these changes due to our large digital data footprint. Radiomics is transforming medical images into mineable high-dimensional data to optimise clinical decision-making; however, some would argue that AI could infiltrate workplaces with very few ethical checks and balances. In this commentary article, we describe how AI is beginning to change medical imaging services and the innovations that are on the horizon. We explore how AI and its various forms, including machine learning, will challenge the way medical imaging is delivered, from workflow, image acquisition, and image registration to interpretation. Diagnostic radiographers will need to learn to work alongside our 'virtual colleagues', and we argue that vital changes to entry and advanced curricula, together with national professional capabilities, are needed to ensure machine-learning tools are used in the safest and most effective manner for our patients.
Affiliation(s)
- Sarah J Lewis
- Discipline of Medical Imaging Science, The University of Sydney, Lidcombe, New South Wales, Australia
- Ziba Gandomkar
- Discipline of Medical Imaging Science, The University of Sydney, Lidcombe, New South Wales, Australia
- Patrick C Brennan
- Discipline of Medical Imaging Science, The University of Sydney, Lidcombe, New South Wales, Australia
394
Hong Y, Kim J, Chen G, Lin W, Yap PT, Shen D. Longitudinal Prediction of Infant Diffusion MRI Data via Graph Convolutional Adversarial Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2717-2725. [PMID: 30990424 PMCID: PMC6935161 DOI: 10.1109/tmi.2019.2911203] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Missing data is a common problem in longitudinal studies due to subject dropouts and failed scans. We present a graph-based convolutional neural network to predict missing diffusion MRI data. In particular, we consider the relationships between sampling points in the spatial domain and the diffusion wave-vector domain to construct a graph. We then use a graph convolutional network to learn the non-linear mapping from available data to missing data. Our method harnesses a multi-scale residual architecture with adversarial learning for prediction with greater accuracy and perceptual quality. Experimental results show that our method is accurate and robust in the longitudinal prediction of infant brain diffusion MRI data.
Affiliation(s)
- Yoonmi Hong
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, U.S.A
- Jaeil Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Korea
- Geng Chen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, U.S.A
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, U.S.A
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, U.S.A
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, U.S.A
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
Collapse
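The abstract above describes a graph convolutional network that maps available diffusion MRI samples to missing ones. As a hedged illustration only (the symmetric normalization D^{-1/2} A D^{-1/2} with self-loops is the standard GCN convention, not a detail taken from this paper), a single graph-convolution layer can be sketched in NumPy:

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution layer: symmetrically normalized adjacency
    (with self-loops) times node features times weights, then ReLU.
    X: (n_nodes, f_in), A: (n_nodes, n_nodes), W: (f_in, f_out)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ X @ W, 0.0)  # ReLU activation

# Toy graph: 3 nodes on a path (0-1-2), 2 input and 2 output features
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.eye(2)
H = gcn_layer(X, A, W)
```

In the paper's setting the graph would connect sampling points in the spatial and wave-vector domains; here the toy path graph only shows the layer mechanics.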
|
395
|
Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09788-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Indexed: 12/19/2022]
|
396
|
Application of Artificial Intelligence–based Image Optimization for Computed Tomography Angiography of the Aorta With Low Tube Voltage and Reduced Contrast Medium Volume. J Thorac Imaging 2019; 34:393-399. [DOI: 10.1097/rti.0000000000000438] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Indexed: 11/26/2022]
|
397
|
Mao Z, Miki A, Mei S, Dong Y, Maruyama K, Kawasaki R, Usui S, Matsushita K, Nishida K, Chan K. Deep learning based noise reduction method for automatic 3D segmentation of the anterior of lamina cribrosa in optical coherence tomography volumetric scans. BIOMEDICAL OPTICS EXPRESS 2019; 10:5832-5851. [PMID: 31799050 PMCID: PMC6865099 DOI: 10.1364/boe.10.005832] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Received: 08/14/2019] [Revised: 10/15/2019] [Accepted: 10/16/2019] [Indexed: 06/07/2023]
Abstract
A deep-learning (DL) based noise reduction algorithm, combined with a vessel shadow compensation method and a three-dimensional (3D) segmentation technique, has been developed to achieve, to the authors' best knowledge, the first automatic segmentation of the anterior surface of the lamina cribrosa (LC) in volumetric ophthalmic optical coherence tomography (OCT) scans. The DL-based OCT noise reduction algorithm was trained without noise-free ground-truth images by exploiting recent advances in deep learning of denoising from single noisy images, and was demonstrated to cover more retinal locations and disease cases of different types with high robustness. Compared with the original single OCT images, a 6.6 dB improvement in peak signal-to-noise ratio and a 0.65 improvement in the structural similarity index were achieved. The vessel shadow compensation method analyzes the energy profile of each A-line and automatically compensates the pixel intensity at locations beneath detected blood vessels. Combining the noise reduction algorithm with the shadow compensation and contrast enhancement technique, medical experts were able to identify the anterior surface of the LC in 98.3% of the OCT images. The 3D segmentation algorithm employs a two-round procedure based on gradient information and information from neighboring images. In a validation study involving 180 individual B-scans from 36 subjects, an accuracy of 90.6% was achieved, compared with 64.4% in raw images. This imaging and analysis strategy enables, to the authors' best knowledge, the first automatic complete view of the anterior LC surface, which may support the development of new LC parameters for glaucoma diagnosis and management.
Affiliation(s)
- Zaixing Mao - Topcon Advanced Biomedical Imaging Laboratory, Oakland, NJ 07436, USA
- Atsuya Miki - Department of Ophthalmology, Osaka University Graduate School of Medicine, Osaka, Japan
- Song Mei - Topcon Advanced Biomedical Imaging Laboratory, Oakland, NJ 07436, USA
- Ying Dong - Topcon Advanced Biomedical Imaging Laboratory, Oakland, NJ 07436, USA
- Kazuichi Maruyama - Department of Ophthalmology, Osaka University Graduate School of Medicine, Osaka, Japan
- Ryo Kawasaki - Department of Ophthalmology, Osaka University Graduate School of Medicine, Osaka, Japan
- Shinichi Usui - Department of Ophthalmology, Osaka University Graduate School of Medicine, Osaka, Japan
- Kenji Matsushita - Department of Ophthalmology, Osaka University Graduate School of Medicine, Osaka, Japan
- Kohji Nishida - Department of Ophthalmology, Osaka University Graduate School of Medicine, Osaka, Japan
- Kinpui Chan - Topcon Advanced Biomedical Imaging Laboratory, Oakland, NJ 07436, USA
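The vessel shadow compensation step in the abstract above analyzes the energy profile of each A-line and boosts the intensity of shadowed columns. A minimal sketch of that idea follows; the threshold, the gain rule, and the array layout are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def compensate_shadows(bscan, rel_threshold=0.7):
    """bscan: (depth, n_alines) intensity image. A-lines whose total
    energy falls below rel_threshold * median energy are treated as
    vessel-shadowed and rescaled up to the median energy."""
    energy = bscan.sum(axis=0)                    # energy profile per A-line
    median_energy = np.median(energy)
    shadowed = energy < rel_threshold * median_energy
    gain = np.ones_like(energy)
    gain[shadowed] = median_energy / np.maximum(energy[shadowed], 1e-12)
    return bscan * gain[np.newaxis, :], shadowed

# Toy B-scan: 4 depth samples x 5 A-lines, with A-line 2 shadowed
bscan = np.ones((4, 5))
bscan[:, 2] = 0.25
out, shadowed = compensate_shadows(bscan)   # column 2 rescaled to energy 4.0
```

A production implementation would detect vessels more carefully (e.g., from the en-face projection) and smooth the gain across neighboring A-lines; this sketch only shows the per-column energy normalization.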
|
398
|
Zhu H, Tong D, Zhang L, Wang S, Wu W, Tang H, Chen Y, Luo L, Zhu J, Li B. Temporally downsampled cerebral CT perfusion image restoration using deep residual learning. Int J Comput Assist Radiol Surg 2019; 15:193-201. [DOI: 10.1007/s11548-019-02082-1] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Received: 01/17/2019] [Accepted: 10/18/2019] [Indexed: 12/27/2022]
|
399
|
Shen T, Gou C, Wang FY, He Z, Chen W. Learning from adversarial medical images for X-ray breast mass segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 180:105012. [PMID: 31421601 DOI: 10.1016/j.cmpb.2019.105012] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Received: 03/29/2019] [Revised: 07/06/2019] [Accepted: 08/03/2019] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Simulating diverse lesions in images has been proposed to overcome the scarcity of labeled data, which has hindered the application of deep learning in medical imaging. However, most current studies focus on generating samples with class labels for classification and detection rather than segmentation, because generating images with precise masks remains a challenge. We therefore aim to generate realistic medical images with precise masks to improve lesion segmentation in mammograms. METHODS In this paper, we propose a new framework for improving X-ray breast mass segmentation, aided by generated adversarial lesion images with precise masks. First, we introduce a conditional generative adversarial network (cGAN) to learn the distribution of real mass images as well as a mapping between images and their corresponding segmentation masks. Next, lesion images are generated from various binary input masks using the generator of the trained cGAN. The generated adversarial samples are then concatenated with the original samples to produce a dataset with increased diversity. Finally, we introduce an improved U-net and train it on the augmented dataset for breast mass segmentation. RESULTS To demonstrate the effectiveness of the proposed method, we conduct experiments on the publicly available INbreast mammogram database and a private database provided by Nanfang Hospital in China. Experimental results show an improvement of up to 7% in the Jaccard index over the same model trained on original real lesion images. CONCLUSIONS The proposed method can be viewed as one of the first steps toward generating realistic X-ray breast mass images with masks for precise segmentation.
Affiliation(s)
- Tianyu Shen - Institute of Automation, Chinese Academy of Sciences, Zhongguancun East Road 95, Beijing 100190, China; Qingdao Academy of Intelligent Industries, Zhilidao Road 1, Qingdao 266000, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Chao Gou - School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou 510275, China
- Fei-Yue Wang - Institute of Automation, Chinese Academy of Sciences, Zhongguancun East Road 95, Beijing 100190, China; Qingdao Academy of Intelligent Industries, Zhilidao Road 1, Qingdao 266000, China
- Zilong He - Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Weiguo Chen - Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
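The 7% improvement reported above is measured in the Jaccard index (intersection over union). For binary masks this is a standard definition, sketched here for reference; the empty-mask convention J = 1 is a common choice, not something stated in the paper:

```python
import numpy as np

def jaccard_index(pred, target):
    """Intersection over union of two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:                 # both masks empty: define J = 1
        return 1.0
    inter = np.logical_and(pred, target).sum()
    return inter / union

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1]])
j = jaccard_index(pred, target)    # intersection = 2, union = 4 -> 0.5
```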
|
400
|
Li Y, Li K, Zhang C, Montoya J, Chen GH. Learning to Reconstruct Computed Tomography Images Directly From Sinogram Data Under A Variety of Data Acquisition Conditions. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2469-2481. [PMID: 30990179 PMCID: PMC7962902 DOI: 10.1109/tmi.2019.2910760] [Citation(s) in RCA: 81] [Impact Index Per Article: 13.5] [Indexed: 05/04/2023]
Abstract
Computed tomography (CT) is widely used in medical diagnosis and non-destructive detection. Image reconstruction in CT aims to accurately recover pixel values from measured line integrals, i.e., the summed pixel values along straight lines. Provided that the acquired data satisfy the data sufficiency condition as well as other conditions regarding the view angle sampling interval and the severity of transverse data truncation, researchers have discovered many solutions to accurately reconstruct the image. However, if these conditions are violated, accurate image reconstruction from line integrals remains an intellectual challenge. In this paper, a deep learning method with a common network architecture, termed iCT-Net, was developed and trained to accurately reconstruct images for previously solved and unsolved CT reconstruction problems with high quantitative accuracy. Particularly, accurate reconstructions were achieved for the case when the sparse view reconstruction problem (i.e., compressed sensing problem) is entangled with the classical interior tomographic problems.
Affiliation(s)
- Yinsheng Li - Department of Medical Physics at the University of Wisconsin-Madison
- Ke Li - Department of Medical Physics at the University of Wisconsin-Madison; Department of Radiology at the University of Wisconsin-Madison
- Chengzhu Zhang - Department of Medical Physics at the University of Wisconsin-Madison
- Juan Montoya - Department of Medical Physics at the University of Wisconsin-Madison
- Guang-Hong Chen - Department of Medical Physics at the University of Wisconsin-Madison; Department of Radiology at the University of Wisconsin-Madison
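iCT-Net, as described in the abstract above, reconstructs images directly from sinogram data, i.e., from line integrals (summed pixel values along straight lines). As a hedged illustration of what a parallel-beam sinogram contains, the two axis-aligned views can be computed with simple axis sums; a real projector samples many angles, which this sketch deliberately omits:

```python
import numpy as np

def parallel_projections(img):
    """Line integrals of a square image at 0 and 90 degrees:
    summed pixel values down each column and across each row.
    A full sinogram would stack such views over many angles."""
    p0 = img.sum(axis=0)    # view at 0 deg: integrate down each column
    p90 = img.sum(axis=1)   # view at 90 deg: integrate across each row
    return p0, p90

# Toy 3x3 "phantom" with a single bright pixel at row 1, column 2
img = np.zeros((3, 3))
img[1, 2] = 5.0
p0, p90 = parallel_projections(img)
```

Note that the total measured signal is the same in every view (each projection integrates the whole image), which is one of the consistency conditions that sinogram data must satisfy.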
|