401. Mupparapu M, Chen YC, Hong DK, Wu CW. The Use of Deep Convolutional Neural Networks in Biomedical Imaging: A Review. J Orofac Sci 2019. [DOI: 10.4103/jofs.jofs_55_19]
402. Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys 2019; 46:e1-e36. [PMID: 30367497] [PMCID: PMC9560030] [DOI: 10.1002/mp.13264]
Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and strategies that researchers have taken to address these challenges; and (c) identify some of the promising avenues for the future both in terms of applications as well as technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
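As a quick companion to the review's introduction of convolutional neural networks, the sketch below shows a minimal image classifier in PyTorch; the layer sizes, class count, and input resolution are illustrative assumptions, not values taken from the paper.

```python
# Minimal CNN classifier sketch (illustrative only; sizes are assumptions).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # single-channel (grayscale) input
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

logits = SmallCNN()(torch.randn(4, 1, 224, 224))  # batch of 4 dummy images
print(logits.shape)  # torch.Size([4, 2])
```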
Affiliation(s)
- Berkman Sahiner
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Aria Pezeshk
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Xiaosong Wang
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
- Karen Drukker
- Department of Radiology, University of Chicago, Chicago, IL 60637, USA
- Kenny H. Cha
- DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Ronald M. Summers
- Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
403
404. Xue B, Tong N. Real-World ISAR Object Recognition and Relation Discovery Using Deep Relation Graph Learning. IEEE Access 2019; 7:43906-43914. [DOI: 10.1109/access.2019.2896293]
405. Impact of Enhancement for Coronary Artery Segmentation Based on Deep Learning Neural Network. Pattern Recognition and Image Analysis 2019. [DOI: 10.1007/978-3-030-31321-0_23]
406. Hu J, Chen Y, Zhong J, Ju R, Yi Z. Automated Analysis for Retinopathy of Prematurity by Deep Neural Networks. IEEE Trans Med Imaging 2019; 38:269-279. [PMID: 30080144] [DOI: 10.1109/tmi.2018.2863562]
Abstract
Retinopathy of Prematurity (ROP) is a retinal vasoproliferative disorder principally observed in infants born prematurely with low birth weight, and it is an important cause of childhood blindness. Although automatic or semi-automatic diagnosis of ROP has been attempted, most previous studies have focused on "plus" disease, which is indicated by abnormalities of the retinal vasculature; few studies have reported methods for identifying the "stage" of ROP. Deep neural networks have achieved impressive results in many computer vision and medical image analysis problems, raising expectations that they might be a promising tool for the automatic diagnosis of ROP. In this paper, convolutional neural networks with a novel architecture are proposed to recognize the presence and severity of ROP on a per-examination basis. The severity of ROP is divided into mild and severe cases according to disease progression. The proposed architecture consists of two sub-networks connected by a feature aggregation operator. The first sub-network extracts high-level features from fundus images; the features from the different images in an examination are fused by the aggregation operator and then used as the input to the second sub-network, which predicts the class of the examination. A large dataset imaged with RetCam 3 is used to train and evaluate the model. The high classification accuracy in the experiments demonstrates the effectiveness of the proposed architecture for recognizing ROP.
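The two-sub-network design described above can be pictured with a short sketch: per-image features come from a shared backbone, are fused across all images of one examination by an aggregation operator (element-wise max is used here as a stand-in), and feed a small classifier. The layer sizes and the choice of max fusion are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ExamClassifier(nn.Module):
    """Sketch of per-examination classification: aggregate features over images."""
    def __init__(self, feat_dim: int = 128, num_classes: int = 3):
        super().__init__()
        self.backbone = nn.Sequential(          # first sub-network (feature extractor)
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(16, feat_dim), nn.ReLU(),
        )
        self.head = nn.Linear(feat_dim, num_classes)  # second sub-network (classifier)

    def forward(self, exam_images):               # (n_images, 3, H, W) for one exam
        feats = self.backbone(exam_images)         # (n_images, feat_dim)
        fused, _ = feats.max(dim=0)                # aggregation operator (max here)
        return self.head(fused)                    # logits: e.g. normal / mild / severe

logits = ExamClassifier()(torch.randn(5, 3, 224, 224))  # 5 fundus images in one exam
print(logits.shape)
```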
407. Zhang Q, Wang H, Yoon SW, Won D, Srihari K. Lung Nodule Diagnosis on 3D Computed Tomography Images Using Deep Convolutional Neural Networks. Procedia Manufacturing 2019. [DOI: 10.1016/j.promfg.2020.01.375]
408. Napel S, Mu W, Jardim-Perassi BV, Aerts HJWL, Gillies RJ. Quantitative imaging of cancer in the postgenomic era: Radio(geno)mics, deep learning, and habitats. Cancer 2018; 124:4633-4649. [PMID: 30383900] [PMCID: PMC6482447] [DOI: 10.1002/cncr.31630]
Abstract
Although cancer often is referred to as "a disease of the genes," it is indisputable that the (epi)genetic properties of individual cancer cells are highly variable, even within the same tumor. Hence, preexisting resistant clones will emerge and proliferate after therapeutic selection that targets sensitive clones. Herein, the authors propose that quantitative image analytics, known as "radiomics," can be used to quantify and characterize this heterogeneity. Virtually every patient with cancer is imaged radiologically. Radiomics is predicated on the beliefs that these images reflect underlying pathophysiologies, and that they can be converted into mineable data for improved diagnosis, prognosis, prediction, and therapy monitoring. In the last decade, the radiomics of cancer has grown from a few laboratories to a worldwide enterprise. During this growth, radiomics has established a convention, wherein a large set of annotated image features (1-2000 features) are extracted from segmented regions of interest and used to build classifier models to separate individual patients into their appropriate class (eg, indolent vs aggressive disease). An extension of this conventional radiomics is the application of "deep learning," wherein convolutional neural networks can be used to detect the most informative regions and features without human intervention. A further extension of radiomics involves automatically segmenting informative subregions ("habitats") within tumors, which can be linked to underlying tumor pathophysiology. The goal of the radiomics enterprise is to provide informed decision support for the practice of precision oncology.
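A toy version of the conventional radiomics workflow described above (hand-crafted features extracted from a segmented region of interest, then a classifier) might look like the following; the feature set, the random-forest choice, and the synthetic data are placeholders rather than any published pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def roi_features(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """A handful of toy first-order radiomic features from a segmented ROI."""
    roi = image[mask > 0]
    return np.array([roi.mean(), roi.std(), roi.min(), roi.max(),
                     np.percentile(roi, 90), float(mask.sum())])  # intensity + size

# Synthetic cohort: one (image, mask) pair and a binary label per patient.
rng = np.random.default_rng(0)
X = np.stack([roi_features(rng.normal(size=(64, 64)),
                           rng.integers(0, 2, size=(64, 64))) for _ in range(100)])
y = rng.integers(0, 2, size=100)                 # e.g. indolent vs aggressive

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict_proba(X[:3]))
```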
Affiliation(s)
- Sandy Napel
- Department of Radiology, Stanford University, Stanford, California
- Wei Mu
- Department of Cancer Physiology, H. Lee Moffitt Cancer Center, Tampa, Florida
- Hugo J. W. L. Aerts
- Dana-Farber Cancer Institute, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Robert J. Gillies
- Department of Cancer Physiology, H. Lee Moffitt Cancer Center, Tampa, Florida
409. Luo J, Ning Z, Zhang S, Feng Q, Zhang Y. Bag of deep features for preoperative prediction of sentinel lymph node metastasis in breast cancer. Phys Med Biol 2018; 63:245014. [DOI: 10.1088/1361-6560/aaf241]
410. Chilamkurthy S, Ghosh R, Tanamala S, Biviji M, Campeau NG, Venugopal VK, Mahajan V, Rao P, Warier P. Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study. Lancet 2018; 392:2388-2396. [PMID: 30318264] [DOI: 10.1016/s0140-6736(18)31645-3]
Abstract
BACKGROUND Non-contrast head CT scan is the current standard for initial imaging of patients with head trauma or stroke symptoms. We aimed to develop and validate a set of deep learning algorithms for automated detection of the following key findings from these scans: intracranial haemorrhage and its types (ie, intraparenchymal, intraventricular, subdural, extradural, and subarachnoid); calvarial fractures; midline shift; and mass effect. METHODS We retrospectively collected a dataset containing 313 318 head CT scans together with their clinical reports from around 20 centres in India between Jan 1, 2011, and June 1, 2017. A randomly selected part of this dataset (Qure25k dataset) was used for validation and the rest was used to develop algorithms. An additional validation dataset (CQ500 dataset) was collected in two batches from centres that were different from those used for the development and Qure25k datasets. We excluded postoperative scans and scans of patients younger than 7 years. The original clinical radiology report and consensus of three independent radiologists were considered as gold standard for the Qure25k and CQ500 datasets, respectively. Areas under the receiver operating characteristic curves (AUCs) were primarily used to assess the algorithms. FINDINGS The Qure25k dataset contained 21 095 scans (mean age 43 years; 9030 [43%] female patients), and the CQ500 dataset consisted of 214 scans in the first batch (mean age 43 years; 94 [44%] female patients) and 277 scans in the second batch (mean age 52 years; 84 [30%] female patients). On the Qure25k dataset, the algorithms achieved an AUC of 0·92 (95% CI 0·91-0·93) for detecting intracranial haemorrhage (0·90 [0·89-0·91] for intraparenchymal, 0·96 [0·94-0·97] for intraventricular, 0·92 [0·90-0·93] for subdural, 0·93 [0·91-0·95] for extradural, and 0·90 [0·89-0·92] for subarachnoid). On the CQ500 dataset, AUC was 0·94 (0·92-0·97) for intracranial haemorrhage (0·95 [0·93-0·98], 0·93 [0·87-1·00], 0·95 [0·91-0·99], 0·97 [0·91-1·00], and 0·96 [0·92-0·99], respectively). AUCs on the Qure25k dataset were 0·92 (0·91-0·94) for calvarial fractures, 0·93 (0·91-0·94) for midline shift, and 0·86 (0·85-0·87) for mass effect, while AUCs on the CQ500 dataset were 0·96 (0·92-1·00), 0·97 (0·94-1·00), and 0·92 (0·89-0·95), respectively. INTERPRETATION Our results show that deep learning algorithms can accurately identify head CT scan abnormalities requiring urgent attention, opening up the possibility to use these algorithms to automate the triage process. FUNDING Qure.ai.
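The study reports AUCs with 95% confidence intervals as its primary metric; a minimal sketch of that kind of evaluation, using a percentile bootstrap for the interval (the paper's exact CI method is not reproduced here) and synthetic labels and scores, is shown below.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=1000, seed=0):
    """AUC plus a percentile-bootstrap 95% CI (illustrative, not the paper's method)."""
    rng = np.random.default_rng(seed)
    auc = roc_auc_score(y_true, y_score)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:      # need both classes in the resample
            continue
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return auc, lo, hi

y = np.random.randint(0, 2, 500)                          # dummy haemorrhage labels
s = np.clip(y * 0.6 + np.random.rand(500) * 0.5, 0, 1)    # dummy algorithm scores
print(auc_with_ci(y, s))
```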
Affiliation(s)
- Vidur Mahajan
- Centre for Advanced Research in Imaging, Neurosciences and Genomics, New Delhi, India
- Pooja Rao
- Qure.ai, Goregaon East, Mumbai, India
411. Automatic classification of cervical cancer from cytological images by using convolutional neural network. Biosci Rep 2018; 38:BSR20181769. [PMID: 30341239] [PMCID: PMC6259017] [DOI: 10.1042/bsr20181769]
Abstract
Cervical cancer (CC) is one of the most common gynecologic malignancies in the world, and its incidence and mortality remain high in remote regions of China with limited medical resources. To improve this situation and help pathologists in such regions diagnose CC more accurately, we propose an intelligent and efficient classification model for CC based on a convolutional neural network (CNN) with a relatively simple architecture. The model was trained and tested on two groups of image datasets: an original group of 3012 images and an augmented group of 108,432 images. Each group contains fixed-size RGB images (227 × 227) of keratinizing squamous, non-keratinizing squamous, and basaloid squamous carcinoma. Three-fold cross-validation was applied to the model. The overall classification accuracy was 89.48% for the original image group and 93.33% for the augmented image group; the improvement of 3.85% was achieved by using augmented images as training data. A paired-samples t-test indicated that the classification accuracies of the two models differ significantly (P<0.05). The developed scheme is useful for classifying CC from cytological images, and the model can serve as an assistant to pathologists, with considerable potential for application in regions of China with limited medical resources.
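A rough sketch of the training protocol described above, augmenting the training images and evaluating with three-fold cross-validation, is given below; the augmentation operations, array sizes, and the omitted CNN are placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def augment(img: np.ndarray) -> list:
    """Toy augmentation: rotations plus a brightness shift (stand-ins for the paper's
    enhancement and rotation operations)."""
    return [np.rot90(img, k) for k in range(4)] + [np.clip(img * 1.2, 0, 255)]

# Dummy cytology patches (227 x 227 RGB) and three-class labels.
X = np.random.randint(0, 256, size=(30, 227, 227, 3), dtype=np.uint8)
y = np.arange(30) % 3

skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)   # "three-fold" CV
for fold, (tr, te) in enumerate(skf.split(X, y)):
    X_aug = [a for i in tr for a in augment(X[i])]                 # augment training set only
    y_aug = [y[i] for i in tr for _ in range(5)]                   # 5 augmented copies per image
    # train_cnn(X_aug, y_aug); evaluate on X[te], y[te]  -- model code omitted
    print(f"fold {fold}: {len(X_aug)} augmented training images, {len(te)} test images")
```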
412. Miyamoto A, Kurosaki A, Moriguchi S, Takahashi Y, Ogawa K, Murase K, Hanada S, Uruga H, Takaya H, Morokawa N, Fujii T, Hoshino J, Kishi K. Reduced area of the normal lung on high-resolution computed tomography predicts poor survival in patients with lung cancer and combined pulmonary fibrosis and emphysema. Respir Investig 2018; 57:140-149. [PMID: 30472091] [DOI: 10.1016/j.resinv.2018.10.007]
Abstract
BACKGROUND This study aimed to determine the radiologic predictors and clarify the clinical features related to survival in patients with combined pulmonary fibrosis and emphysema (CPFE) and lung cancer. METHODS We retrospectively reviewed the medical chart data and high-resolution computed tomography (HRCT) findings for 81 consecutive patients with CPFE and 92 primary lung cancers (70 men, 11 women; mean age, 70.9 years). We selected 8 axial HRCT images per patient, and visually determined the normal lung, modified Goddard, and fibrosis scores. Multivariate analysis was performed using the Cox proportional hazards regression model. RESULTS The major clinical features were a high smoking index of 54.8 pack-years and idiopathic pulmonary fibrosis (n = 44). The major lung cancer profile was a peripherally located squamous cell carcinoma (n = 40) or adenocarcinoma (n = 31) adjacent to emphysema in the upper/middle lobe (n = 27) or fibrosis in the lower lobe (n = 26). The median total normal lung, modified Goddard, and fibrosis scores were 10, 8, and 8, respectively. TNM Classification of malignant tumors (TNM) stage I, II, III, and IV was noted in 37, 7, 26, and 22 patients, respectively. Acute exacerbation occurred in 20 patients. Multivariate analysis showed that a higher normal lung score and TNM stage were independent radiologic and clinical predictors of poor survival at the time of diagnosis of lung cancer. CONCLUSIONS A markedly reduced area of normal lung on HRCT was a relevant radiologic predictor of survival.
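The multivariate survival analysis mentioned in the methods can be sketched with the lifelines package's Cox proportional hazards model; the covariates and data below are synthetic placeholders and do not reproduce the study's variables or results.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 81
df = pd.DataFrame({
    "normal_lung_score": rng.integers(0, 25, n),   # placeholder HRCT score
    "tnm_stage":         rng.integers(1, 5, n),    # placeholder TNM stage (1-4)
    "survival_months":   rng.exponential(24, n),
    "death":             rng.integers(0, 2, n),    # 1 = event observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="survival_months", event_col="death")
cph.print_summary()   # hazard ratios with confidence intervals
```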
Affiliation(s)
- Atsushi Miyamoto
- Department of Respiratory Medicine, Respiratory Centre, Toranomon Hospital, 2-2-2 Toranomon Minato-ku, Tokyo 105-8470, Japan.
- Atsuko Kurosaki
- Department of Diagnostic Radiology, Fukujuji Hospital, Japan Anti-tuberculosis Association, 3-1-24 Matsuyama Kiyose-shi, Tokyo 204-8522, Japan.
- Shuhei Moriguchi
- Department of Respiratory Medicine, Respiratory Centre, Toranomon Hospital, 2-2-2 Toranomon Minato-ku, Tokyo 105-8470, Japan.
- Yui Takahashi
- Department of Respiratory Medicine, Respiratory Centre, Toranomon Hospital, 2-2-2 Toranomon Minato-ku, Tokyo 105-8470, Japan.
- Kazumasa Ogawa
- Department of Respiratory Medicine, Respiratory Centre, Toranomon Hospital, 2-2-2 Toranomon Minato-ku, Tokyo 105-8470, Japan.
- Kyoko Murase
- Department of Respiratory Medicine, Respiratory Centre, Toranomon Hospital, 2-2-2 Toranomon Minato-ku, Tokyo 105-8470, Japan.
- Shigeo Hanada
- Department of Respiratory Medicine, Respiratory Centre, Toranomon Hospital, 2-2-2 Toranomon Minato-ku, Tokyo 105-8470, Japan.
- Hironori Uruga
- Department of Respiratory Medicine, Respiratory Centre, Toranomon Hospital, 2-2-2 Toranomon Minato-ku, Tokyo 105-8470, Japan.
- Hisashi Takaya
- Department of Respiratory Medicine, Respiratory Centre, Toranomon Hospital, 2-2-2 Toranomon Minato-ku, Tokyo 105-8470, Japan.
- Nasa Morokawa
- Department of Respiratory Medicine, Respiratory Centre, Toranomon Hospital, 2-2-2 Toranomon Minato-ku, Tokyo 105-8470, Japan.
- Takeshi Fujii
- Department of Pathology, Toranomon Hospital, 2-2-2 Toranomon Minato-ku, Tokyo 105-8470, Japan; Okinaka Memorial Institute for Medical Research, 2-2-2 Toranomon Minato-ku, Tokyo 105-8470, Japan.
- Junichi Hoshino
- Clinical Research Centre, Toranomon Hospital, 2-2-2 Toranomon Minato-ku, Tokyo 105-8470, Japan.
- Kazuma Kishi
- Department of Respiratory Medicine, Respiratory Centre, Toranomon Hospital, 2-2-2 Toranomon Minato-ku, Tokyo 105-8470, Japan; Okinaka Memorial Institute for Medical Research, 2-2-2 Toranomon Minato-ku, Tokyo 105-8470, Japan.
413. McBee MP, Awan OA, Colucci AT, Ghobadi CW, Kadom N, Kansagra AP, Tridandapani S, Auffermann WF. Deep Learning in Radiology. Acad Radiol 2018; 25:1472-1480. [PMID: 29606338] [DOI: 10.1016/j.acra.2018.02.018]
Abstract
As radiology is inherently a data-driven specialty, it is especially conducive to utilizing data processing techniques. One such technique, deep learning (DL), has become a remarkably powerful tool for image processing in recent years. In this work, the Association of University Radiologists Radiology Research Alliance Task Force on Deep Learning provides an overview of DL for the radiologist. This article aims to present an overview of DL in a manner that is understandable to radiologists; to examine past, present, and future applications; as well as to evaluate how radiologists may benefit from this remarkable new tool. We describe several areas within radiology in which DL techniques are having the most significant impact: lesion or disease detection, classification, quantification, and segmentation. The legal and ethical hurdles to implementation are also discussed. By taking advantage of this powerful tool, radiologists can become increasingly more accurate in their interpretations with fewer errors and spend more time to focus on patient care.
Affiliation(s)
- Morgan P McBee
- Department of Radiology and Medical Imaging, Cincinnati Children's Hospital, Cincinnati, Ohio
- Omer A Awan
- Department of Radiology, Temple University Hospital, Philadelphia, Pennsylvania
- Andrew T Colucci
- Department of Radiology, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Nadja Kadom
- Department of Radiology and Imaging Sciences, Children's Healthcare of Atlanta (Egleston), Emory University School of Medicine, Atlanta, Georgia
- Akash P Kansagra
- Mallinckrodt Institute of Radiology and Departments of Neurological Surgery and Neurology, Washington University School of Medicine, Saint Louis, Missouri
- Srini Tridandapani
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Electrical & Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia
- William F Auffermann
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1365 Clifton Road NE, Atlanta, GA 30322.
414. Anwar SM, Majid M, Qayyum A, Awais M, Alnowami M, Khan MK. Medical Image Analysis using Convolutional Neural Networks: A Review. J Med Syst 2018; 42:226. [DOI: 10.1007/s10916-018-1088-1]
415. Iakovidis DK, Georgakopoulos SV, Vasilakakis M, Koulaouzidis A, Plagianakos VP. Detecting and Locating Gastrointestinal Anomalies Using Deep Learning and Iterative Cluster Unification. IEEE Trans Med Imaging 2018; 37:2196-2210. [PMID: 29994763] [DOI: 10.1109/tmi.2018.2837002]
Abstract
This paper proposes a novel methodology for automatic detection and localization of gastrointestinal (GI) anomalies in endoscopic video frame sequences. Training is performed with weakly annotated images, using only image-level semantic labels instead of detailed pixel-level annotations. This makes it a cost-effective approach for the analysis of large video-endoscopy repositories. Other advantages of the proposed methodology include its capability to suggest possible locations of GI anomalies within the video frames and its generality, in the sense that abnormal frame detection is based on automatically derived image features. It is implemented in three phases: 1) it classifies the video frames as abnormal or normal using a weakly supervised convolutional neural network (WCNN) architecture; 2) it detects salient points from deeper WCNN layers using a deep saliency detection algorithm; and 3) it localizes GI anomalies using an iterative cluster unification (ICU) algorithm. ICU is based on a pointwise cross-feature-map (PCFM) descriptor extracted locally from the detected salient points using information derived from the WCNN. Results from extensive experimentation using publicly available collections of gastrointestinal endoscopy video frames are presented. The datasets used include a variety of GI anomalies. Both the anomaly detection and localization performance achieved, in terms of the area under the receiver operating characteristic curve (AUC), were >80%. The highest AUC for anomaly detection was obtained on conventional gastroscopy images, reaching 96%, and the highest AUC for anomaly localization was obtained on wireless capsule endoscopy images, reaching 88%.
416. Coudray N, Ocampo PS, Sakellaropoulos T, Narula N, Snuderl M, Fenyö D, Moreira AL, Razavian N, Tsirigos A. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat Med 2018; 24:1559-1567. [PMID: 30224757] [PMCID: PMC9847512] [DOI: 10.1038/s41591-018-0177-5]
Abstract
Visual inspection of histopathology slides is one of the main methods used by pathologists to assess the stage, type and subtype of lung tumors. Adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC) are the most prevalent subtypes of lung cancer, and their distinction requires visual inspection by an experienced pathologist. In this study, we trained a deep convolutional neural network (inception v3) on whole-slide images obtained from The Cancer Genome Atlas to accurately and automatically classify them into LUAD, LUSC or normal lung tissue. The performance of our method is comparable to that of pathologists, with an average area under the curve (AUC) of 0.97. Our model was validated on independent datasets of frozen tissues, formalin-fixed paraffin-embedded tissues and biopsies. Furthermore, we trained the network to predict the ten most commonly mutated genes in LUAD. We found that six of them (STK11, EGFR, FAT1, SETBP1, KRAS and TP53) can be predicted from pathology images, with AUCs from 0.733 to 0.856 as measured on a held-out population. These findings suggest that deep-learning models can assist pathologists in the detection of cancer subtype or gene mutations. Our approach can be applied to any cancer type, and the code is available at https://github.com/ncoudray/DeepPATH.
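A minimal transfer-learning sketch in the spirit of the approach described (an ImageNet-pretrained Inception v3 with its final layers replaced for a three-class tile classifier) follows; tile extraction, the mutation-prediction heads, and training details are omitted, and the torchvision argument names may need adjusting for your version.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained Inception v3 and adapt it to 3 classes
# (e.g. LUAD / LUSC / normal). The weights argument name and accepted
# values depend on the installed torchvision version.
model = models.inception_v3(weights="IMAGENET1K_V1", aux_logits=True)
model.fc = nn.Linear(model.fc.in_features, 3)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 3)

model.train()
tiles = torch.randn(2, 3, 299, 299)           # Inception v3 expects 299x299 inputs
labels = torch.tensor([0, 2])
out = model(tiles)                            # returns (logits, aux_logits) in train mode
loss = (nn.CrossEntropyLoss()(out.logits, labels)
        + 0.4 * nn.CrossEntropyLoss()(out.aux_logits, labels))
loss.backward()
```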
Affiliation(s)
- Nicolas Coudray
- Applied Bioinformatics Laboratories, New York University School of Medicine, NY 10016, USA; Skirball Institute, Dept. of Cell Biology, New York University School of Medicine, NY 10016, USA
- Theodore Sakellaropoulos
- School of Mechanical Engineering, National Technical University of Athens, Zografou 15780, Greece
- Navneet Narula
- Department of Pathology, New York University School of Medicine, NY 10016, USA
- Matija Snuderl
- Department of Pathology, New York University School of Medicine, NY 10016, USA
- David Fenyö
- Institute for Systems Genetics, New York University School of Medicine, NY 10016, USA; Department of Biochemistry and Molecular Pharmacology, New York University School of Medicine, NY 10016, USA
- Andre L. Moreira
- Department of Pathology, New York University School of Medicine, NY 10016, USA; Center for Biospecimen Research and Development, New York University, NY 10016, USA
- Narges Razavian
- Department of Population Health and the Center for Healthcare Innovation and Delivery Science, New York University School of Medicine, NY 10016, USA
- Aristotelis Tsirigos
- Applied Bioinformatics Laboratories, New York University School of Medicine, NY 10016, USA; Department of Pathology, New York University School of Medicine, NY 10016, USA
417. TRec: an efficient recommendation system for hunting passengers with deep neural networks. Neural Comput Appl 2018. [DOI: 10.1007/s00521-018-3728-2]
418. Bi XA, Jiang Q, Sun Q, Shu Q, Liu Y. Analysis of Alzheimer's Disease Based on the Random Neural Network Cluster in fMRI. Front Neuroinform 2018; 12:60. [PMID: 30245623] [PMCID: PMC6137384] [DOI: 10.3389/fninf.2018.00060]
Abstract
As Alzheimer's disease (AD) is characterized by degeneration and irreversibility, diagnosis of AD at an early stage is important. In recent years, some researchers have tried to apply neural networks (NNs) to classify AD patients from healthy controls (HC) based on functional MRI (fMRI) data. However, most studies have focused on a single NN, and the classification accuracy was not high. Therefore, this paper used a random neural network cluster, composed of multiple NNs, to improve classification performance. Sixty-one subjects (25 AD and 36 HC) were acquired from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The method can be used not only for classification but also for feature selection. First, based on the results of feature selection, we chose the Elman NN from five types of NNs as the optimal base classifier of the random neural network cluster; the accuracy of the random Elman neural network cluster reached 92.31%, which was the highest and most stable. We then used the random Elman neural network cluster to select significant features, which could be used to identify abnormal brain regions. Finally, we found 23 abnormal regions, such as the precentral gyrus, the frontal gyrus, and the supplementary motor area. These results show that the random neural network cluster is a worthwhile and meaningful tool for the diagnosis of AD.
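A rough sketch of the "cluster of neural networks" idea, an ensemble of independently trained networks whose votes are combined, is shown below; it uses scikit-learn feedforward MLPs as stand-ins for the paper's Elman networks and random numbers in place of fMRI-derived features.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(61, 200))          # placeholder fMRI-derived features (61 subjects)
y = rng.integers(0, 2, size=61)         # 0 = HC, 1 = AD (random labels for illustration)

# Train a cluster of small networks, each on a random subset of features.
cluster, feature_sets = [], []
for seed in range(10):
    feats = rng.choice(X.shape[1], size=30, replace=False)
    net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=seed)
    cluster.append(net.fit(X[:, feats], y))
    feature_sets.append(feats)

# Majority vote across the cluster.
votes = np.stack([net.predict(X[:, f]) for net, f in zip(cluster, feature_sets)])
pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy on training data:", (pred == y).mean())
```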
Affiliation(s)
- Xia-An Bi
- College of Information Science and Engineering, Hunan Normal University, Changsha, China
- Qin Jiang
- College of Information Science and Engineering, Hunan Normal University, Changsha, China
- Qi Sun
- College of Information Science and Engineering, Hunan Normal University, Changsha, China
- Qing Shu
- College of Information Science and Engineering, Hunan Normal University, Changsha, China
- Yingchao Liu
- College of Information Science and Engineering, Hunan Normal University, Changsha, China
419. Bernal J, Kushibar K, Asfaw DS, Valverde S, Oliver A, Martí R, Lladó X. Deep convolutional neural networks for brain image analysis on magnetic resonance imaging: a review. Artif Intell Med 2018; 95:64-81. [PMID: 30195984] [DOI: 10.1016/j.artmed.2018.08.008]
Abstract
In recent years, deep convolutional neural networks (CNNs) have shown record-shattering performance in a variety of computer vision problems, such as visual object recognition, detection and segmentation. These methods have also been utilised in the medical image analysis domain for lesion segmentation, anatomical segmentation and classification. We present an extensive literature review of CNN techniques applied in brain magnetic resonance imaging (MRI) analysis, focusing on the architectures, pre-processing, data-preparation and post-processing strategies used in these works. The aim of this study is three-fold. Our primary goal is to report how different CNN architectures have evolved, discuss state-of-the-art strategies, condense the results obtained on public datasets and examine their pros and cons. Second, this paper is intended to be a detailed reference for research activity in deep CNNs for brain MRI analysis. Finally, we present a perspective on the future of CNNs, in which we hint at some of the research directions of subsequent years.
Affiliation(s)
- Jose Bernal
- Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain.
- Kaisar Kushibar
- Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain.
- Daniel S Asfaw
- Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain.
- Sergi Valverde
- Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain.
- Arnau Oliver
- Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain.
- Robert Martí
- Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain.
- Xavier Lladó
- Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain.
420. Zhang R, Zhao L, Lou W, Abrigo JM, Mok VCT, Chu WCW, Wang D, Shi L. Automatic Segmentation of Acute Ischemic Stroke From DWI Using 3-D Fully Convolutional DenseNets. IEEE Trans Med Imaging 2018; 37:2149-2160. [PMID: 29994088] [DOI: 10.1109/tmi.2018.2821244]
Abstract
Acute ischemic stroke is recognized as a common cerebral vascular disease in aging people. Accurate diagnosis and timely treatment can effectively improve the blood supply of the ischemic area and reduce the risk of disability or even death. Understanding the location and size of infarcts plays a critical role in the diagnosis decision. However, manual localization and quantification of stroke lesions are laborious and time-consuming. In this paper, we propose a novel automatic method to segment acute ischemic stroke from diffusion weighted images (DWIs) using deep 3-D convolutional neural networks (CNNs). Our method can efficiently utilize 3-D contextual information and automatically learn very discriminative features in an end-to-end and data-driven way. To relieve the difficulty of training very deep 3-D CNN, we equip our network with dense connectivity to enable the unimpeded propagation of information and gradients throughout the network. We train our model with Dice objective function to combat the severe class imbalance problem in data. A DWI data set containing 242 subjects (90 for training, 62 for validation, and 90 for testing) with various types of acute ischemic stroke was constructed to evaluate our method. Our model achieved high performance on various metrics (Dice similarity coefficient: 79.13%, lesionwise precision: 92.67%, and lesionwise F1 score: 89.25%), outperforming the other state-of-the-art CNN methods by a large margin. We also evaluated the model on ISLES2015-SSIS data set and achieved very competitive performance, which further demonstrated its generalization capacity. The proposed method is fast and accurate, demonstrating a good potential in clinical routines.
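The Dice objective mentioned above, used to counter the severe class imbalance in lesion segmentation, can be written compactly; this is a generic soft-Dice sketch rather than the authors' exact implementation.

```python
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss for binary segmentation.
    logits: (N, 1, D, H, W) raw network outputs; target: same shape, values in {0, 1}."""
    probs = torch.sigmoid(logits)
    dims = tuple(range(1, logits.dim()))               # sum over all but the batch axis
    intersection = (probs * target).sum(dim=dims)
    union = probs.sum(dim=dims) + target.sum(dim=dims)
    dice = (2 * intersection + eps) / (union + eps)
    return 1 - dice.mean()

pred = torch.randn(2, 1, 8, 32, 32, requires_grad=True)      # dummy 3-D DWI patches
mask = (torch.rand(2, 1, 8, 32, 32) > 0.95).float()          # sparse "lesion" voxels
loss = soft_dice_loss(pred, mask)
loss.backward()
```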
421. Bisoi R, Dash PK, Das PP. Short-term electricity price forecasting and classification in smart grids using optimized multikernel extreme learning machine. Neural Comput Appl 2018. [DOI: 10.1007/s00521-018-3652-5]
422. Commandeur F, Goeller M, Betancur J, Cadet S, Doris M, Chen X, Berman DS, Slomka PJ, Tamarappoo BK, Dey D. Deep Learning for Quantification of Epicardial and Thoracic Adipose Tissue From Non-Contrast CT. IEEE Trans Med Imaging 2018; 37:1835-1846. [PMID: 29994362] [PMCID: PMC6076348] [DOI: 10.1109/tmi.2018.2804799]
Abstract
Epicardial adipose tissue (EAT) is a visceral fat deposit related to coronary artery disease. Fully automated quantification of EAT volume in clinical routine could be a timesaving and reliable tool for cardiovascular risk assessment. We propose a new fully automated deep learning framework for EAT and thoracic adipose tissue (TAT) quantification from non-contrast coronary artery calcium computed tomography (CT) scans. The first multi-task convolutional neural network (ConvNet) is used to determine heart limits and perform segmentation of heart and adipose tissues. The second ConvNet, combined with a statistical shape model, allows for pericardium detection. EAT and TAT segmentations are then obtained from outputs of both ConvNets. We evaluate the performance of the method on CT data sets from 250 asymptomatic individuals. Strong agreement between automatic and expert manual quantification is obtained for both EAT and TAT with median Dice score coefficients of 0.823 (inter-quartile range (IQR): 0.779-0.860) and 0.905 (IQR: 0.862-0.928), respectively; with excellent correlations of 0.924 and 0.945 for EAT and TAT volumes. Computations are performed in <6 s on a standard personal computer for one CT scan. Therefore, the proposed method represents a tool for rapid fully automated quantification of adipose tissue and may improve cardiovascular risk stratification in patients referred for routine CT calcium scans.
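Agreement is summarized in the paper as a median Dice coefficient with an inter-quartile range; computing that summary from per-case Dice scores is straightforward, as in this small sketch with synthetic masks.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-9)

rng = np.random.default_rng(0)
scores = []
for _ in range(250):                                   # one score per (automatic, manual) pair
    auto = rng.random((64, 64)) > 0.6
    manual = np.logical_xor(auto, rng.random((64, 64)) > 0.9)  # perturbed copy of auto
    scores.append(dice(auto, manual))

q1, med, q3 = np.percentile(scores, [25, 50, 75])
print(f"median Dice {med:.3f} (IQR: {q1:.3f}-{q3:.3f})")
```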
423. Mutasa S, Chang PD, Ruzal-Shapiro C, Ayyala R. MABAL: a Novel Deep-Learning Architecture for Machine-Assisted Bone Age Labeling. J Digit Imaging 2018; 31:513-519. [PMID: 29404850] [PMCID: PMC6113150] [DOI: 10.1007/s10278-018-0053-3]
Abstract
Bone age assessment (BAA) is a commonly performed diagnostic study in pediatric radiology to assess skeletal maturity. The most commonly utilized method for assessment of BAA is the Greulich and Pyle method (Pediatr Radiol 46.9:1269-1274, 2016; Arch Dis Child 81.2:172-173, 1999) atlas. The evaluation of BAA can be a tedious and time-consuming process for the radiologist. As such, several computer-assisted detection/diagnosis (CAD) methods have been proposed for automation of BAA. Classical CAD tools have traditionally relied on hard-coded algorithmic features for BAA which suffer from a variety of drawbacks. Recently, the advent and proliferation of convolutional neural networks (CNNs) has shown promise in a variety of medical imaging applications. There have been at least two published applications of using deep learning for evaluation of bone age (Med Image Anal 36:41-51, 2017; JDI 1-5, 2017). However, current implementations are limited by a combination of both architecture design and relatively small datasets. The purpose of this study is to demonstrate the benefits of a customized neural network algorithm carefully calibrated to the evaluation of bone age utilizing a relatively large institutional dataset. In doing so, this study will aim to show that advanced architectures can be successfully trained from scratch in the medical imaging domain and can generate results that outperform any existing proposed algorithm. The training data consisted of 10,289 images of different skeletal age examinations, 8909 from the hospital Picture Archiving and Communication System at our institution and 1383 from the public Digital Hand Atlas Database. The data was separated into four cohorts, one each for male and female children above the age of 8, and one each for male and female children below the age of 10. The testing set consisted of 20 radiographs of each 1-year-age cohort from 0 to 1 years to 14-15+ years, half male and half female. The testing set included left-hand radiographs done for bone age assessment, trauma evaluation without significant findings, and skeletal surveys. A 14 hidden layer-customized neural network was designed for this study. The network included several state of the art techniques including residual-style connections, inception layers, and spatial transformer layers. Data augmentation was applied to the network inputs to prevent overfitting. A linear regression output was utilized. Mean square error was used as the network loss function and mean absolute error (MAE) was utilized as the primary performance metric. MAE accuracies on the validation and test sets for young females were 0.654 and 0.561 respectively. For older females, validation and test accuracies were 0.662 and 0.497 respectively. For young males, validation and test accuracies were 0.649 and 0.585 respectively. Finally, for older males, validation and test set accuracies were 0.581 and 0.501 respectively. The female cohorts were trained for 900 epochs each and the male cohorts were trained for 600 epochs. An eightfold cross-validation set was employed for hyperparameter tuning. Test error was obtained after training on a full data set with the selected hyperparameters. Using our proposed customized neural network architecture on our large available data, we achieved an aggregate validation and test set mean absolute errors of 0.637 and 0.536 respectively. To date, this is the best published performance on utilizing deep learning for bone age assessment. 
Our results support our initial hypothesis that customized, purpose-built neural networks provide improved performance over networks derived from pre-trained imaging data sets. We build on that initial work by showing that the addition of state-of-the-art techniques such as residual connections and inception architecture further improves prediction accuracy. This is important because the current assumption for use of residual and/or inception architectures is that a large pre-trained network is required for successful implementation given the relatively small datasets in medical imaging. Instead we show that a small, customized architecture incorporating advanced CNN strategies can indeed be trained from scratch, yielding significant improvements in algorithm accuracy. It should be noted that for all four cohorts, testing error outperformed validation error. One reason for this is that our ground truth for our test set was obtained by averaging two pediatric radiologist reads compared to our training data for which only a single read was used. This suggests that despite relatively noisy training data, the algorithm could successfully model the variation between observers and generate estimates that are close to the expected ground truth.
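A compressed sketch of the regression setup described above, a CNN with a linear output trained with mean-square error and reported with mean absolute error, follows; the tiny backbone is a placeholder rather than the 14-layer MABAL architecture.

```python
import torch
import torch.nn as nn

class TinyBoneAgeNet(nn.Module):
    """Placeholder regressor: a real model would use residual/inception-style blocks."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.out = nn.Linear(8, 1)                    # linear regression output (age)

    def forward(self, x):
        return self.out(self.body(x)).squeeze(1)

model = TinyBoneAgeNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 1, 256, 256)                      # dummy hand radiographs
age = torch.rand(16) * 15                             # dummy skeletal ages

pred = model(x)
loss = nn.functional.mse_loss(pred, age)              # MSE as the training loss
loss.backward(); opt.step()
mae = (pred.detach() - age).abs().mean()              # MAE as the reported metric
print(f"MAE: {mae:.3f}")
```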
Affiliation(s)
- Simukayi Mutasa
- Columbia University Medical Center, PB 1-301, New York, NY, 10032, USA.
- Peter D Chang
- Columbia University Medical Center, PB 1-301, New York, NY, 10032, USA
- Rama Ayyala
- Columbia University Medical Center, PB 1-301, New York, NY, 10032, USA
424. Lee H, Mansouri M, Tajmir S, Lev MH, Do S. A Deep-Learning System for Fully-Automated Peripherally Inserted Central Catheter (PICC) Tip Detection. J Digit Imaging 2018; 31:393-402. [PMID: 28983851] [PMCID: PMC6113157] [DOI: 10.1007/s10278-017-0025-z]
Abstract
A peripherally inserted central catheter (PICC) is a thin catheter that is inserted via the arm veins and threaded until near the heart, providing intravenous access. The final catheter tip position is always confirmed on a chest radiograph (CXR) immediately after insertion, since malpositioned PICCs can cause potentially life-threatening complications. Although radiologists interpret PICC tip location with high accuracy, delays in interpretation can be significant. In this study, we proposed a fully automated deep-learning system with a cascading segmentation approach containing two fully convolutional neural networks for detecting a PICC line and its tip location. A preprocessing module performed image quality and dimension normalization, and a post-processing module located the PICC tip accurately by pruning false positives. Our best model, trained on 400 training cases and selectively tuned on 50 validation cases, obtained absolute distances from ground truth with a mean of 3.10 mm, a standard deviation of 2.03 mm, and a root-mean-square error (RMSE) of 3.71 mm on 150 held-out test cases. This system could help speed confirmation of PICC position and could further be generalized to include other types of vascular access and therapeutic support devices.
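One way to picture the post-processing step, reading a tip coordinate off a predicted catheter mask and scoring it against ground truth, is sketched below; the tip heuristic (most inferior labelled pixel) and the pixel spacing are assumptions for illustration only.

```python
import numpy as np

def tip_from_mask(mask: np.ndarray):
    """Assume the PICC tip is the most inferior (largest-row) pixel of the catheter mask."""
    rows, cols = np.nonzero(mask)
    i = rows.argmax()
    return int(rows[i]), int(cols[i])

def rmse_mm(pred_pts, true_pts, pixel_spacing_mm=0.5):
    d = np.linalg.norm((np.array(pred_pts) - np.array(true_pts)) * pixel_spacing_mm, axis=1)
    return float(np.sqrt((d ** 2).mean()))

pred_mask = np.zeros((512, 512), dtype=np.uint8)
pred_mask[100:300, 250] = 1                      # dummy predicted catheter course
print(tip_from_mask(pred_mask))                  # (299, 250)
print(rmse_mm([(299, 250), (310, 260)], [(301, 250), (305, 258)]))
```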
Affiliation(s)
- Hyunkwang Lee
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA
- Mohammad Mansouri
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA
- Shahein Tajmir
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA
- Michael H. Lev
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA
- Synho Do
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114, USA
425. Liu X, Fu T, Pan Z, Liu D, Hu W, Liu J, Zhang K. Automated Layer Segmentation of Retinal Optical Coherence Tomography Images Using a Deep Feature Enhanced Structured Random Forests Classifier. IEEE J Biomed Health Inform 2018; 23:1404-1416. [PMID: 30010602] [DOI: 10.1109/jbhi.2018.2856276]
Abstract
Optical coherence tomography (OCT) is a high-resolution and noninvasive imaging modality that has become one of the most prevalent techniques for ophthalmic diagnosis. Retinal layer segmentation is crucial for doctors to diagnose and study retinal diseases, but manual segmentation is often a time-consuming and subjective process. In this work, we propose a new method for automatically segmenting retinal OCT images, which integrates deep features and hand-designed features to train a structured random forests classifier. The deep convolutional features are learned from a deep residual network. With the trained classifier, we obtain the contour probability graph of each layer; finally, the shortest path is employed to achieve the final layer segmentation. The experimental results show that our method achieves good results, with a mean layer contour error of 1.215 pixels, compared with 1.464 pixels for the state-of-the-art method, and an F1-score of 0.885, which is also better than the 0.863 obtained by the state-of-the-art method.
426. Kim J, Hong J, Park H. Prospects of deep learning for medical imaging. Precision and Future Medicine 2018; 2:37-52. [DOI: 10.23838/pfm.2018.00030]
427. Huang M, Han H, Wang H, Li L, Zhang Y, Bhatti UA. A Clinical Decision Support Framework for Heterogeneous Data Sources. IEEE J Biomed Health Inform 2018; 22:1824-1833. [PMID: 29994279] [DOI: 10.1109/jbhi.2018.2846626]
Abstract
To keep pace with developments in medical informatics, health data are being collected continually. However, owing to the diversity of its categories and sources, medical data has become so complicated in many hospitals that it now needs a clinical decision support (CDS) system for its management. To effectively utilize the accumulating health data, we propose a CDS framework that can integrate heterogeneous health data from different sources, such as laboratory test results, basic patient information, and health records, into a consolidated representation of features for all patients. Using the electronic health data so created, multilabel classification was employed to recommend a list of diseases and thus assist physicians in diagnosing or treating their patients' health issues more efficiently. Once the physician diagnoses a patient's disease, the next step is to consider the likely complications of that disease, which can lead to further diseases. Previous studies reveal that correlations exist among some diseases. Considering these correlations, a k-nearest neighbors algorithm is improved for multilabel learning by using correlations among labels (CML-kNN). The CML-kNN algorithm first exploits the dependence between each pair of labels to update the original label matrix and then performs multilabel learning to estimate the probabilities of labels from the integrated features. Finally, it recommends the top N diseases to the physicians. Experimental results on real health data establish the effectiveness and practicability of the proposed CDS framework.
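The multilabel disease-recommendation step can be illustrated with a plain binary-relevance k-nearest-neighbours sketch; the label-correlation update that distinguishes CML-kNN from ordinary kNN is only noted in a comment, and the data are synthetic.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))              # consolidated patient features (labs, history, ...)
Y = rng.integers(0, 2, size=(500, 8))       # 8 candidate diseases per patient (multilabel)
# CML-kNN would first update Y using pairwise label correlations; omitted here.

knn = MultiOutputClassifier(KNeighborsClassifier(n_neighbors=5)).fit(X, Y)
probs = np.column_stack([p[:, 1] for p in knn.predict_proba(X[:1])])  # P(label=1) per disease
top_n = np.argsort(probs[0])[::-1][:3]
print("top-3 recommended disease indices:", top_n)
```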
428. Mahmud M, Kaiser MS, Hussain A, Vassanelli S. Applications of Deep Learning and Reinforcement Learning to Biological Data. IEEE Trans Neural Netw Learn Syst 2018; 29:2063-2079. [PMID: 29771663] [DOI: 10.1109/tnnls.2018.2790388]
Abstract
Rapid advances in hardware-based technologies during the past decades have opened up new possibilities for life scientists to gather multimodal data in various application domains, such as omics, bioimaging, medical imaging, and (brain/body)-machine interfaces. These have generated novel opportunities for development of dedicated data-intensive machine learning techniques. In particular, recent research in deep learning (DL), reinforcement learning (RL), and their combination (deep RL) promise to revolutionize the future of artificial intelligence. The growth in computational power accompanied by faster and increased data storage, and declining computing costs have already allowed scientists in various fields to apply these techniques on data sets that were previously intractable owing to their size and complexity. This paper provides a comprehensive survey on the application of DL, RL, and deep RL techniques in mining biological data. In addition, we compare the performances of DL techniques when applied to different data sets across various application domains. Finally, we outline open issues in this challenging research area and discuss future development perspectives.
429. Pan F, He P, Liu C, Li T, Murray A, Zheng D. Variation of the Korotkoff Stethoscope Sounds During Blood Pressure Measurement: Analysis Using a Convolutional Neural Network. IEEE J Biomed Health Inform 2018; 21:1593-1598. [PMID: 29136608] [DOI: 10.1109/jbhi.2017.2703115]
Abstract
Korotkoff sounds are known to change their characteristics during blood pressure (BP) measurement, resulting in some uncertainties for systolic and diastolic pressure (SBP and DBP) determinations. The aim of this study was to assess the variation of Korotkoff sounds during BP measurement by examining all stethoscope sounds associated with each heartbeat from above systole to below diastole during linear cuff deflation. Three repeat BP measurements were taken from 140 healthy subjects (age 21 to 73 years; 62 female and 78 male) by a trained observer, giving 420 measurements. During the BP measurements, the cuff pressure and stethoscope signals were simultaneously recorded digitally to a computer for subsequent analysis. Heartbeats were identified from the oscillometric cuff pressure pulses. The presence of each beat was used to create a time window (1 s, 2000 samples) centered on the oscillometric pulse peak for extracting beat-by-beat stethoscope sounds. A time-frequency two-dimensional matrix was obtained for the stethoscope sounds associated with each beat, and all beats between the manually determined SBPs and DBPs were labeled as "Korotkoff." A convolutional neural network was then used to analyze consistency in sound patterns that were associated with Korotkoff sounds. A 10-fold cross-validation strategy was applied to the stethoscope sounds from all 140 subjects, with the data from ten groups of 14 subjects being analyzed separately, allowing consistency to be evaluated between groups. Next, within-subject variation of the Korotkoff sounds analyzed from the three repeats was quantified, separately for each stethoscope sound beat. There was consistency between folds with no significant differences between groups of 14 subjects (P = 0.09 to P = 0.62). Our results showed that 80.7% beats at SBP and 69.5% at DBP were analyzed as Korotkoff sounds, with significant differences between adjacent beats at systole (13.1%, P = 0.001) and diastole (17.4%, P < 0.001). Results reached stability for SBP (97.8%, at sixth beat below SBP) and DBP (98.1%, at sixth beat above DBP) with no significant differences between adjacent beats (SBP P = 0.74; DBP P = 0.88). There were no significant differences at high-cuff pressures, but at low pressures close to diastole there was a small difference (3.3%, P = 0.02). In addition, greater within subject variability was observed at SBP (21.4%) and DBP (28.9%), with a significant difference between both (P < 0.02). In conclusion, this study has demonstrated that Korotkoff sounds can be consistently identified during the period below SBP and above DBP, but that at systole and diastole there can be substantial variations that are associated with high variation in the three repeat measurements in each subject.
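The "time-frequency two-dimensional matrix" computed for each beat-centred 1-s window can be sketched with a standard spectrogram; the synthetic signal and window parameters below are placeholders, not the study's settings.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 2000                                        # 1 s window = 2000 samples, as in the study
t = np.arange(fs) / fs
# Synthetic stand-in for one beat-centred stethoscope window.
beat_sound = np.sin(2 * np.pi * 60 * t) * np.exp(-5 * t) + 0.01 * np.random.randn(fs)

# Time-frequency matrix for the window; window length/overlap are illustrative.
f, tt, Sxx = spectrogram(beat_sound, fs=fs, nperseg=128, noverlap=64)
print(Sxx.shape)        # (frequency bins, time frames) -> input "image" for a CNN
```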
430. Automatic classification of ovarian cancer types from cytological images using deep convolutional neural networks. Biosci Rep 2018; 38:BSR20180289. [PMID: 29572387] [PMCID: PMC5938423] [DOI: 10.1042/bsr20180289]
Abstract
Ovarian cancer is one of the most common gynecologic malignancies. Accurate classification of ovarian cancer types (serous carcinoma, mucinous carcinoma, endometrioid carcinoma, and clear cell carcinoma) is an essential part of the differential diagnosis, and computer-aided diagnosis (CADx) can provide useful advice to help pathologists reach the correct diagnosis. In our study, we employed a deep convolutional neural network (DCNN) based on AlexNet to automatically classify the different types of ovarian cancer from cytological images. The DCNN consists of five convolutional layers, three max-pooling layers, and two fully connected layers. We trained the model on two groups of input data separately: original images, and augmented images produced by image enhancement and image rotation. Testing results obtained by 10-fold cross-validation show that the classification accuracy improved from 72.76% to 78.20% when augmented images were used as training data. The developed scheme is useful for classifying ovarian cancers from cytological images.
431. Tang A, Tam R, Cadrin-Chênevert A, Guest W, Chong J, Barfett J, Chepelev L, Cairns R, Mitchell JR, Cicero MD, Poudrette MG, Jaremko JL, Reinhold C, Gallix B, Gray B, Geis R, O'Connell T, Babyn P, Koff D, Ferguson D, Derkatch S, Bilbily A, Shabana W. Canadian Association of Radiologists White Paper on Artificial Intelligence in Radiology. Can Assoc Radiol J 2018; 69:120-135. [DOI: 10.1016/j.carj.2018.02.002]
Abstract
Artificial intelligence (AI) is rapidly moving from an experimental phase to an implementation phase in many fields, including medicine. The combination of improved availability of large datasets, increasing computing power, and advances in learning algorithms has created major performance breakthroughs in the development of AI applications. In the last 5 years, AI techniques known as deep learning have delivered rapidly improving performance in image recognition, caption generation, and speech recognition. Radiology, in particular, is a prime candidate for early adoption of these techniques. It is anticipated that the implementation of AI in radiology over the next decade will significantly improve the quality, value, and depth of radiology's contribution to patient care and population health, and will revolutionize radiologists' workflows. The Canadian Association of Radiologists (CAR) is the national voice of radiology committed to promoting the highest standards in patient-centered imaging, lifelong learning, and research. The CAR has created an AI working group with the mandate to discuss and deliberate on practice, policy, and patient care issues related to the introduction and implementation of AI in imaging. This white paper provides recommendations for the CAR derived from deliberations between members of the AI working group. This white paper on AI in radiology will inform CAR members and policymakers on key terminology, educational needs of members, research and development, partnerships, potential clinical applications, implementation, structure and governance, role of radiologists, and potential impact of AI on radiology in Canada.
Collapse
Affiliation(s)
- An Tang
- Department of Radiology, Université de Montréal, Montréal, Québec, Canada
- Centre de recherche du Centre hospitalier de l'Université de Montréal, Montréal, Québec, Canada
| | - Roger Tam
- Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, British Columbia, Canada
| | | | - Will Guest
- Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada
| | - Jaron Chong
- Department of Radiology, McGill University Health Center, Montréal, Québec, Canada
| | - Joseph Barfett
- Department of Medical Imaging, St. Michael's Hospital, University of Toronto, Toronto, Ontario, Canada
| | - Leonid Chepelev
- Department of Radiology, University of Ottawa, Ottawa, Ontario, Canada
| | - Robyn Cairns
- Department of Radiology, British Columbia's Children's Hospital, University of British Columbia, Vancouver, British Columbia, Canada
| | | | - Mark D. Cicero
- Department of Medical Imaging, St. Michael's Hospital, University of Toronto, Toronto, Ontario, Canada
| | | | - Jacob L. Jaremko
- Department of Radiology and Diagnostic Imaging, University of Alberta, Edmonton, Alberta, Canada
| | - Caroline Reinhold
- Department of Radiology, McGill University Health Center, Montréal, Québec, Canada
| | - Benoit Gallix
- Department of Radiology, McGill University Health Center, Montréal, Québec, Canada
| | - Bruce Gray
- Department of Medical Imaging, St. Michael's Hospital, University of Toronto, Toronto, Ontario, Canada
| | - Raym Geis
- Department of Radiology, National Jewish Health, Denver, Colorado, USA
| | | | | | | | | | | | | | | | | |
Collapse
|
432
|
He JY, Wu X, Jiang YG, Peng Q, Jain R. Hookworm Detection in Wireless Capsule Endoscopy Images With Deep Learning. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:2379-2392. [PMID: 29470172 DOI: 10.1109/tip.2018.2801119] [Citation(s) in RCA: 74] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
As one of the most common human helminths, hookworm is a leading cause of maternal and child morbidity and a serious threat to human health. Recently, wireless capsule endoscopy (WCE) has been applied to automatic hookworm detection, but this remains a challenging task. In recent years, deep convolutional neural networks (CNNs) have demonstrated impressive performance in various image and video analysis tasks. In this paper, a novel deep hookworm detection framework is proposed for WCE images, which simultaneously models the visual appearance and the tubular patterns of hookworms. This is the first deep learning framework specifically designed for hookworm detection in WCE images. Two CNNs, an edge extraction network and a hookworm classification network, are seamlessly integrated in the proposed framework, which avoids edge feature caching and speeds up classification. Two edge pooling layers are introduced to integrate the tubular regions induced from the edge extraction network with the feature maps from the hookworm classification network, producing enhanced feature maps that emphasize the tubular regions. Experiments conducted on one of the largest WCE datasets demonstrate the effectiveness of the proposed hookworm detection framework, which significantly outperforms state-of-the-art approaches. The high sensitivity and accuracy of the proposed method in detecting hookworms show its potential for clinical application.
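A toy illustration of the fusion idea described above: an edge map from one small network is pooled and used to reweight the feature maps of a classification network so that tubular regions are emphasized. This is PyTorch; the layer counts, the multiplicative fusion, and all names are assumptions made for the sketch, not the paper's architecture.

```python
# Illustration only: fusing an edge map with classification feature maps so
# tubular regions are emphasised. The real framework's layers and fusion
# details differ; everything here is simplified and hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeNet(nn.Module):                        # produces a 1-channel edge map
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, x):
        return self.conv(x)

class HookwormClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.edge = EdgeNet()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Linear(64, 2)             # hookworm vs. normal

    def forward(self, x):
        edge = self.edge(x)                       # B x 1 x H x W
        feat = self.features(x)                   # B x 64 x H/4 x W/4
        # crude "edge pooling": downsample the edge map, reweight the features
        edge_small = F.max_pool2d(edge, kernel_size=4)
        feat = feat * (1.0 + edge_small)          # emphasise tubular regions
        return self.head(feat.mean(dim=(2, 3)))   # global average pool + classifier

logits = HookwormClassifier()(torch.randn(2, 3, 128, 128))  # -> shape (2, 2)
```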
Collapse
|
433
|
Fan F, Cong W, Wang G. Generalized backpropagation algorithm for training second-order neural networks. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING 2018; 34:e2956. [PMID: 29277960 DOI: 10.1002/cnm.2956] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/23/2017] [Revised: 12/10/2017] [Accepted: 12/11/2017] [Indexed: 06/07/2023]
Abstract
The artificial neural network is a popular framework in machine learning. To empower individual neurons, we recently suggested that the current type of neuron could be upgraded to a second-order counterpart, in which the linear operation between the inputs to a neuron and the associated weights is replaced with a nonlinear quadratic operation. A single second-order neuron already has strong nonlinear modeling ability, such as implementing basic fuzzy logic operations. In this paper, we develop a generalized backpropagation algorithm to train networks consisting of second-order neurons. Numerical studies are performed to verify the generalized backpropagation algorithm.
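A minimal numerical illustration, assuming a generic quadratic neuron y = sigmoid(x^T W x + w^T x + b) rather than the paper's exact formulation: trained by the chain rule on XOR, which a single linear neuron cannot represent but a single second-order neuron can.

```python
# Toy example (not the paper's formulation): one quadratic neuron trained by
# hand-derived gradients on XOR, illustrating the extra capacity of
# second-order units.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0., 1., 1., 0.])                   # XOR targets

W = rng.normal(scale=0.5, size=(2, 2))           # quadratic weights
w = rng.normal(scale=0.5, size=2)                # linear weights
b = 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    for x, target in zip(X, t):
        z = x @ W @ x + w @ x + b
        y = sigmoid(z)
        dz = (y - target) * y * (1.0 - y)        # dL/dz for squared error
        W -= 0.5 * dz * np.outer(x, x)           # dz/dW = x x^T
        w -= 0.5 * dz * x                        # dz/dw = x
        b -= 0.5 * dz

# outputs should approach [0, 1, 1, 0]
print([round(float(sigmoid(x @ W @ x + w @ x + b)), 2) for x in X])
```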
Collapse
Affiliation(s)
- Fenglei Fan
- Biomedical Imaging Center, BME/CBIS, Rensselaer Polytechnic Institute, Troy, NY, USA
| | - Wenxiang Cong
- Biomedical Imaging Center, BME/CBIS, Rensselaer Polytechnic Institute, Troy, NY, USA
| | - Ge Wang
- Biomedical Imaging Center, BME/CBIS, Rensselaer Polytechnic Institute, Troy, NY, USA
| |
Collapse
|
434
|
Chartrand G, Cheng PM, Vorontsov E, Drozdzal M, Turcotte S, Pal CJ, Kadoury S, Tang A. Deep Learning: A Primer for Radiologists. Radiographics 2018; 37:2113-2131. [PMID: 29131760 DOI: 10.1148/rg.2017170077] [Citation(s) in RCA: 681] [Impact Index Per Article: 97.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. ©RSNA, 2017.
Collapse
Affiliation(s)
- Gabriel Chartrand
- From the Departments of Radiology (G.C., E.V., A.T.) and Hepatopancreatobiliary Surgery (S.T.), Centre Hospitalier de l'Université de Montréal, Hôpital Saint-Luc, 850 rue Saint-Denis, Montréal, QC, Canada H2X 0A9; Imagia Cybernetics, Montréal, Québec, Canada (G.C., M.D.); Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, Calif (P.M.C.); Montreal Institute for Learning Algorithms, Montréal, Québec, Canada (E.V., M.D., C.J.P.); École Polytechnique, Montréal, Québec, Canada (E.V., C.J.P., S.K.); Department of Surgery, University of Montreal, Montréal, Québec, Canada (S.T.); and Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada (S.T., S.K., A.T.)
| | - Phillip M Cheng
- From the Departments of Radiology (G.C., E.V., A.T.) and Hepatopancreatobiliary Surgery (S.T.), Centre Hospitalier de l'Université de Montréal, Hôpital Saint-Luc, 850 rue Saint-Denis, Montréal, QC, Canada H2X 0A9; Imagia Cybernetics, Montréal, Québec, Canada (G.C., M.D.); Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, Calif (P.M.C.); Montreal Institute for Learning Algorithms, Montréal, Québec, Canada (E.V., M.D., C.J.P.); École Polytechnique, Montréal, Québec, Canada (E.V., C.J.P., S.K.); Department of Surgery, University of Montreal, Montréal, Québec, Canada (S.T.); and Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada (S.T., S.K., A.T.)
| | - Eugene Vorontsov
- From the Departments of Radiology (G.C., E.V., A.T.) and Hepatopancreatobiliary Surgery (S.T.), Centre Hospitalier de l'Université de Montréal, Hôpital Saint-Luc, 850 rue Saint-Denis, Montréal, QC, Canada H2X 0A9; Imagia Cybernetics, Montréal, Québec, Canada (G.C., M.D.); Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, Calif (P.M.C.); Montreal Institute for Learning Algorithms, Montréal, Québec, Canada (E.V., M.D., C.J.P.); École Polytechnique, Montréal, Québec, Canada (E.V., C.J.P., S.K.); Department of Surgery, University of Montreal, Montréal, Québec, Canada (S.T.); and Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada (S.T., S.K., A.T.)
| | - Michal Drozdzal
- From the Departments of Radiology (G.C., E.V., A.T.) and Hepatopancreatobiliary Surgery (S.T.), Centre Hospitalier de l'Université de Montréal, Hôpital Saint-Luc, 850 rue Saint-Denis, Montréal, QC, Canada H2X 0A9; Imagia Cybernetics, Montréal, Québec, Canada (G.C., M.D.); Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, Calif (P.M.C.); Montreal Institute for Learning Algorithms, Montréal, Québec, Canada (E.V., M.D., C.J.P.); École Polytechnique, Montréal, Québec, Canada (E.V., C.J.P., S.K.); Department of Surgery, University of Montreal, Montréal, Québec, Canada (S.T.); and Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada (S.T., S.K., A.T.)
| | - Simon Turcotte
- From the Departments of Radiology (G.C., E.V., A.T.) and Hepatopancreatobiliary Surgery (S.T.), Centre Hospitalier de l'Université de Montréal, Hôpital Saint-Luc, 850 rue Saint-Denis, Montréal, QC, Canada H2X 0A9; Imagia Cybernetics, Montréal, Québec, Canada (G.C., M.D.); Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, Calif (P.M.C.); Montreal Institute for Learning Algorithms, Montréal, Québec, Canada (E.V., M.D., C.J.P.); École Polytechnique, Montréal, Québec, Canada (E.V., C.J.P., S.K.); Department of Surgery, University of Montreal, Montréal, Québec, Canada (S.T.); and Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada (S.T., S.K., A.T.)
| | - Christopher J Pal
- From the Departments of Radiology (G.C., E.V., A.T.) and Hepatopancreatobiliary Surgery (S.T.), Centre Hospitalier de l'Université de Montréal, Hôpital Saint-Luc, 850 rue Saint-Denis, Montréal, QC, Canada H2X 0A9; Imagia Cybernetics, Montréal, Québec, Canada (G.C., M.D.); Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, Calif (P.M.C.); Montreal Institute for Learning Algorithms, Montréal, Québec, Canada (E.V., M.D., C.J.P.); École Polytechnique, Montréal, Québec, Canada (E.V., C.J.P., S.K.); Department of Surgery, University of Montreal, Montréal, Québec, Canada (S.T.); and Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada (S.T., S.K., A.T.)
| | - Samuel Kadoury
- From the Departments of Radiology (G.C., E.V., A.T.) and Hepatopancreatobiliary Surgery (S.T.), Centre Hospitalier de l'Université de Montréal, Hôpital Saint-Luc, 850 rue Saint-Denis, Montréal, QC, Canada H2X 0A9; Imagia Cybernetics, Montréal, Québec, Canada (G.C., M.D.); Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, Calif (P.M.C.); Montreal Institute for Learning Algorithms, Montréal, Québec, Canada (E.V., M.D., C.J.P.); École Polytechnique, Montréal, Québec, Canada (E.V., C.J.P., S.K.); Department of Surgery, University of Montreal, Montréal, Québec, Canada (S.T.); and Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada (S.T., S.K., A.T.)
| | - An Tang
- From the Departments of Radiology (G.C., E.V., A.T.) and Hepatopancreatobiliary Surgery (S.T.), Centre Hospitalier de l'Université de Montréal, Hôpital Saint-Luc, 850 rue Saint-Denis, Montréal, QC, Canada H2X 0A9; Imagia Cybernetics, Montréal, Québec, Canada (G.C., M.D.); Department of Radiology, Keck School of Medicine, University of Southern California, Los Angeles, Calif (P.M.C.); Montreal Institute for Learning Algorithms, Montréal, Québec, Canada (E.V., M.D., C.J.P.); École Polytechnique, Montréal, Québec, Canada (E.V., C.J.P., S.K.); Department of Surgery, University of Montreal, Montréal, Québec, Canada (S.T.); and Centre de Recherche du Centre Hospitalier de l'Université de Montréal, Montréal, Québec, Canada (S.T., S.K., A.T.)
| |
Collapse
|
435
|
Abstract
Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.
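The kind of boilerplate such libraries and tools remove: a minimal CNN for single-channel medical image classification, written in PyTorch as one representative toolkit. This is an illustration only; the paper surveys several libraries, and this code is not taken from it.

```python
# A minimal CNN definition in one representative deep learning library
# (PyTorch). Layer sizes are arbitrary illustrative choices.
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1))              # makes the head size-independent
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):                         # x: B x 1 x H x W
        return self.fc(self.body(x).flatten(1))
```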
Collapse
Affiliation(s)
| | | | | | - Timothy Kline
- Mayo Clinic, 200 First St SW, Rochester, MN, 55905, USA
| | | |
Collapse
|
436
|
Computerized Classification of Pneumoconiosis on Digital Chest Radiography Artificial Neural Network with Three Stages. J Digit Imaging 2018; 30:413-426. [PMID: 28108817 DOI: 10.1007/s10278-017-9942-0] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022] Open
Abstract
It is difficult for radiologists to classify pneumoconiosis from category 0 to category 3 on chest radiographs. Therefore, we have developed a computer-aided diagnosis (CAD) system based on a three-stage artificial neural network (ANN) method for classification based on four texture features. The image database consists of 36 chest radiographs classified as category 0 to category 3. Regions of interest (ROIs) with a matrix size of 32 × 32 were selected from chest radiographs. We obtained a gray-level histogram, histogram of gray-level difference, gray-level run-length matrix (GLRLM) feature image, and gray-level co-occurrence matrix (GLCOM) feature image in each ROI. For ROI-based classification, the first ANN was trained with each texture feature. Next, the second ANN was trained with output patterns obtained from the first ANN. Finally, we obtained a case-based classification for distinguishing among four categories with the third ANN method. We determined the performance of the third ANN by receiver operating characteristic (ROC) analysis. The areas under the ROC curve (AUC) of the highest category (severe pneumoconiosis) case and the lowest category (early pneumoconiosis) case were 0.89 ± 0.09 and 0.84 ± 0.12, respectively. The three-stage ANN with four texture features showed the highest performance for classification among the four categories. Our CAD system would be useful for assisting radiologists in classification of pneumoconiosis from category 0 to category 3.
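For concreteness, a sketch of one of the texture inputs mentioned above: a gray-level co-occurrence matrix for a 32 × 32 ROI, computed directly in NumPy. The 8-level quantization and the (0, 1) offset are illustrative choices, not the study's parameters.

```python
# Gray-level co-occurrence matrix for one ROI, plus a contrast feature.
# Quantisation level and pixel offset are illustrative assumptions.
import numpy as np

def glcm(roi, levels=8, dy=0, dx=1):
    q = np.floor(roi.astype(float) / 256.0 * levels).astype(int)  # quantise to 0..levels-1
    q = np.clip(q, 0, levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()                                            # normalise to probabilities

roi = (np.random.default_rng(1).random((32, 32)) * 255).astype(np.uint8)
p = glcm(roi)
contrast = sum(p[i, j] * (i - j) ** 2 for i in range(8) for j in range(8))
```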
Collapse
|
437
|
Lee H, Troschel FM, Tajmir S, Fuchs G, Mario J, Fintelmann FJ, Do S. Pixel-Level Deep Segmentation: Artificial Intelligence Quantifies Muscle on Computed Tomography for Body Morphometric Analysis. J Digit Imaging 2018; 30:487-498. [PMID: 28653123 PMCID: PMC5537099 DOI: 10.1007/s10278-017-9988-z] [Citation(s) in RCA: 120] [Impact Index Per Article: 17.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023] Open
Abstract
Pretreatment risk stratification is key for personalized medicine. While many physicians rely on an “eyeball test” to assess whether patients will tolerate major surgery or chemotherapy, “eyeballing” is inherently subjective and difficult to quantify. The concept of morphometric age derived from cross-sectional imaging has been found to correlate well with outcomes such as length of stay, morbidity, and mortality. However, the determination of the morphometric age is time intensive and requires highly trained experts. In this study, we propose a fully automated deep learning system for the segmentation of skeletal muscle cross-sectional area (CSA) on an axial computed tomography image taken at the third lumbar vertebra. We utilized a fully automated deep segmentation model derived from an extended implementation of a fully convolutional network with weight initialization of an ImageNet pre-trained model, followed by post processing to eliminate intramuscular fat for a more accurate analysis. This experiment was conducted by varying window level (WL), window width (WW), and bit resolutions in order to better understand the effects of the parameters on the model performance. Our best model, fine-tuned on 250 training images and ground truth labels, achieves 0.93 ± 0.02 Dice similarity coefficient (DSC) and 3.68 ± 2.29% difference between predicted and ground truth muscle CSA on 150 held-out test cases. Ultimately, the fully automated segmentation system can be embedded into the clinical environment to accelerate the quantification of muscle and expanded to volume analysis of 3D datasets.
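The reported evaluation metric, the Dice similarity coefficient between a predicted and a manual binary muscle mask, can be computed as in this short NumPy sketch (function and variable names are illustrative).

```python
# Dice similarity coefficient between two binary masks.
import numpy as np

def dice(pred, truth, eps=1e-7):
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# e.g. dice(model_mask, manual_mask) -> ~0.93 would match the reported mean DSC
```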
Collapse
Affiliation(s)
- Hyunkwang Lee
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Fabian M. Troschel
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Shahein Tajmir
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Georg Fuchs
- Department of Radiology, Charite - Universitaetsmedizin Berlin, Chariteplatz 1, 10117 Berlin, Germany
| | - Julia Mario
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Florian J. Fintelmann
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Synho Do
- Department of Radiology, Massachusetts General Hospital, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| |
Collapse
|
438
|
Rajkomar A, Lingam S, Taylor AG, Blum M, Mongan J. High-Throughput Classification of Radiographs Using Deep Convolutional Neural Networks. J Digit Imaging 2018; 30:95-101. [PMID: 27730417 PMCID: PMC5267603 DOI: 10.1007/s10278-016-9914-9] [Citation(s) in RCA: 70] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
The study aimed to determine if computer vision techniques rooted in deep learning can use a small set of radiographs to perform clinically relevant image classification with high fidelity. One thousand eight hundred eighty-five chest radiographs on 909 patients obtained between January 2013 and July 2015 at our institution were retrieved and anonymized. The source images were manually annotated as frontal or lateral and randomly divided into training, validation, and test sets. Training and validation sets were augmented to over 150,000 images using standard image manipulations. We then pre-trained a series of deep convolutional networks based on the open-source GoogLeNet with various transformations of the open-source ImageNet (non-radiology) images. These trained networks were then fine-tuned using the original and augmented radiology images. The model with the highest validation accuracy was applied to our institutional test set and a publicly available set. Accuracy was assessed by using the Youden Index to set a binary cutoff for frontal or lateral classification. This retrospective study was IRB approved prior to initiation. A network pre-trained on 1.2 million greyscale ImageNet images and fine-tuned on augmented radiographs was chosen. The binary classification method correctly classified 100% (95% CI 99.73–100%) of both our test set and the publicly available images. Classification was rapid, at 38 images per second. A deep convolutional neural network created using non-radiological images and an augmented set of radiographs is effective in highly accurate classification of chest radiograph view type and is a feasible, rapid method for high-throughput annotation.
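A sketch of how a Youden-index cutoff like the one described can be chosen from classifier scores. This is NumPy; the function name and the hypothetical `frontal_probabilities` usage are assumptions, and the study's actual scores are not reproduced here.

```python
# Pick the score threshold maximising Youden's J = sensitivity + specificity - 1.
import numpy as np

def youden_cutoff(scores, labels):
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    best_t, best_j = None, -1.0
    for t in np.unique(scores):
        pred = scores >= t
        sens = (pred & labels).sum() / max(labels.sum(), 1)
        spec = (~pred & ~labels).sum() / max((~labels).sum(), 1)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# cutoff = youden_cutoff(frontal_probabilities, is_frontal)   # hypothetical usage
```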
Collapse
Affiliation(s)
- Alvin Rajkomar
- Department of Medicine, Division of Hospital Medicine, University of California, San Francisco, 533 Parnassus Ave., Suite 127a, San Francisco, CA, 94143-0131, USA. .,Center for Digital Health Innovation, University of California, San Francisco, San Francisco, CA, USA.
| | - Sneha Lingam
- Center for Digital Health Innovation, University of California, San Francisco, San Francisco, CA, USA
| | - Andrew G Taylor
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
| | - Michael Blum
- Center for Digital Health Innovation, University of California, San Francisco, San Francisco, CA, USA
| | - John Mongan
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
| |
Collapse
|
439
|
Cheng PM, Malhi HS. Transfer Learning with Convolutional Neural Networks for Classification of Abdominal Ultrasound Images. J Digit Imaging 2018; 30:234-243. [PMID: 27896451 DOI: 10.1007/s10278-016-9929-2] [Citation(s) in RCA: 102] [Impact Index Per Article: 14.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
The purpose of this study is to evaluate transfer learning with deep convolutional neural networks for the classification of abdominal ultrasound images. Grayscale images from 185 consecutive clinical abdominal ultrasound studies were categorized into 11 categories based on the text annotation specified by the technologist for the image. Cropped images were rescaled to 256 × 256 resolution and randomized, with 4094 images from 136 studies constituting the training set, and 1423 images from 49 studies constituting the test set. The fully connected layers of two convolutional neural networks based on CaffeNet and VGGNet, previously trained on the 2012 Large Scale Visual Recognition Challenge data set, were retrained on the training set. Weights in the convolutional layers of each network were frozen to serve as fixed feature extractors. Accuracy on the test set was evaluated for each network. A radiologist experienced in abdominal ultrasound also independently classified the images in the test set into the same 11 categories. The CaffeNet network classified 77.3% of the test set images accurately (1100/1423 images), with a top-2 accuracy of 90.4% (1287/1423 images). The larger VGGNet network classified 77.9% of the test set accurately (1109/1423 images), with a top-2 accuracy of 89.7% (1276/1423 images). The radiologist classified 71.7% of the test set images correctly (1020/1423 images). The differences in classification accuracies between both neural networks and the radiologist were statistically significant (p < 0.001). The results demonstrate that transfer learning with convolutional neural networks may be used to construct effective classifiers for abdominal ultrasound images.
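A rough PyTorch analogue of the transfer-learning recipe described above: freeze the convolutional layers of an ImageNet-pretrained VGG network and retrain only a new fully connected output layer for 11 ultrasound categories. The original work used Caffe-era CaffeNet/VGGNet models, so this is an illustration, not the authors' code.

```python
# Frozen convolutional feature extractor + retrained classification head.
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1")    # pretrained on ImageNet
for p in model.features.parameters():
    p.requires_grad = False                      # conv layers act as fixed feature extractors

model.classifier[6] = nn.Linear(4096, 11)        # new output layer for 11 categories
trainable = [p for p in model.parameters() if p.requires_grad]
# the optimiser is then built over `trainable` only, e.g. torch.optim.SGD(trainable, lr=1e-3)
```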
Collapse
Affiliation(s)
- Phillip M Cheng
- Department of Radiology, Keck School of Medicine of USC, Los Angeles, CA, USA.
- USC Norris Cancer Center and Hospital, 1441 Eastlake Avenue, Suite 2315B, Los Angeles, CA, 90033-0377, USA.
| | - Harshawn S Malhi
- Department of Radiology, Keck School of Medicine of USC, Los Angeles, CA, USA
| |
Collapse
|
440
|
Yang Y, Feng X, Chi W, Li Z, Duan W, Liu H, Liang W, Wang W, Chen P, He J, Liu B. Deep learning aided decision support for pulmonary nodules diagnosing: a review. J Thorac Dis 2018; 10:S867-S875. [PMID: 29780633 DOI: 10.21037/jtd.2018.02.57] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Deep learning techniques have recently emerged as promising decision-support approaches for automatically analyzing medical images for different clinical diagnostic purposes. Computer-assisted diagnosis of pulmonary nodules has received considerable theoretical, computational, and empirical research attention, and many methods have been developed over the past five decades for the detection and classification of pulmonary nodules on different image formats, including chest radiographs, computed tomography (CT), and positron emission tomography. The recent remarkable progress in deep learning for pulmonary nodules, achieved in both academia and industry, has demonstrated that deep learning techniques are promising alternative decision-support schemes for tackling the central issues in pulmonary nodule diagnosis, including feature extraction, nodule detection, false-positive reduction, and benign-malignant classification, across huge volumes of chest scan data. The main goal of this investigation is to provide a comprehensive state-of-the-art review of deep learning aided decision support for pulmonary nodule diagnosis. As far as the authors know, this is the first review devoted exclusively to deep learning techniques for pulmonary nodule diagnosis.
Collapse
Affiliation(s)
- Yixin Yang
- Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China.,University of Chinese Academy of Sciences, Beijing 100049, China
| | - Xiaoyi Feng
- Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China.,University of Chinese Academy of Sciences, Beijing 100049, China
| | - Wenhao Chi
- Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China.,University of Chinese Academy of Sciences, Beijing 100049, China
| | - Zhengyang Li
- Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China.,University of Chinese Academy of Sciences, Beijing 100049, China
| | - Wenzhe Duan
- Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China.,University of Chinese Academy of Sciences, Beijing 100049, China
| | - Haiping Liu
- PET/CT Center, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou 510120, China
| | - Wenhua Liang
- Department of Thoracic Surgery and Oncology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou 510120, China
| | - Wei Wang
- Department of Thoracic Surgery and Oncology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou 510120, China
| | - Ping Chen
- PET/CT Center, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou 510120, China
| | - Jianxing He
- Department of Thoracic Surgery and Oncology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou 510120, China
| | - Bo Liu
- Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China
| |
Collapse
|
441
|
|
442
|
Bermejo-Peláez D, San José Estépar R, Ledesma-Carbayo MJ. EMPHYSEMA CLASSIFICATION USING A MULTI-VIEW CONVOLUTIONAL NETWORK. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2018; 2018:519-522. [PMID: 32454948 PMCID: PMC7243961 DOI: 10.1109/isbi.2018.8363629] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In this article we propose and validate a fully automatic tool for emphysema classification in computed tomography (CT) images. We hypothesize that a relatively simple convolutional neural network (CNN) architecture can learn even better discriminative features from the input data than more complex and deeper architectures. The proposed architecture comprises only 4 convolutional and 3 pooling layers, and its input is a 2.5D multi-view representation (axial, sagittal, and coronal views) of the pulmonary tissue segment to be classified. The proposed architecture is compared to similar 2D and 3D CNNs, and to more complex architectures involving a larger number of parameters (up to six times larger). The method has been evaluated on 1553 tissue samples and achieves an overall sensitivity of 81.78% and a specificity of 97.34%; the results show that the proposed method outperforms deeper state-of-the-art architectures particularly designed for lung pattern classification. The method also shows satisfactory results in full-lung classification.
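A toy version of the 2.5D multi-view idea: axial, sagittal, and coronal patches pass through a small shared CNN branch, and the pooled features are concatenated before classification. This is PyTorch; the layer counts, the weight sharing, and the six emphysema classes are assumptions for the sketch.

```python
# Multi-view (2.5D) classification sketch: one shared branch per orthogonal view.
import torch
import torch.nn as nn

class ViewBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1))
    def forward(self, x):
        return self.net(x).flatten(1)             # B x 32 feature vector per view

class MultiViewCNN(nn.Module):
    def __init__(self, n_classes=6):
        super().__init__()
        self.branch = ViewBranch()                 # weights shared across the three views
        self.fc = nn.Linear(3 * 32, n_classes)
    def forward(self, axial, sagittal, coronal):
        feats = [self.branch(v) for v in (axial, sagittal, coronal)]
        return self.fc(torch.cat(feats, dim=1))

out = MultiViewCNN()(*(torch.randn(4, 1, 32, 32) for _ in range(3)))  # -> shape (4, 6)
```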
Collapse
Affiliation(s)
- David Bermejo-Peláez
- Biomedical Image Technologies, Universidad Politécnica de Madrid & CIBER-BBN, Madrid, Spain
| | | | - M J Ledesma-Carbayo
- Biomedical Image Technologies, Universidad Politécnica de Madrid & CIBER-BBN, Madrid, Spain
| |
Collapse
|
443
|
Anthimopoulos M, Christodoulidis S, Ebner L, Geiser T, Christe A, Mougiakakou S. Semantic Segmentation of Pathological Lung Tissue With Dilated Fully Convolutional Networks. IEEE J Biomed Health Inform 2018; 23:714-722. [PMID: 29993791 DOI: 10.1109/jbhi.2018.2818620] [Citation(s) in RCA: 54] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Early and accurate diagnosis of interstitial lung diseases (ILDs) is crucial for making treatment decisions, but can be challenging even for experienced radiologists. The diagnostic procedure is based on the detection and recognition of the different ILD pathologies in thoracic CT scans, yet their manifestation often appears similar. In this study, we propose the use of a deep purely convolutional neural network for the semantic segmentation of ILD patterns, as the basic component of a computer aided diagnosis system for ILDs. The proposed CNN, which consists of convolutional layers with dilated filters, takes as input a lung CT image of arbitrary size and outputs the corresponding label map. We trained and tested the network on a data set of 172 sparsely annotated CT scans, within a cross-validation scheme. The training was performed in an end-to-end and semisupervised fashion, utilizing both labeled and nonlabeled image regions. The experimental results show significant performance improvement with respect to the state of the art.
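A minimal sketch in the spirit of the dilated, fully convolutional labeller described above: no pooling, increasing dilation to enlarge the receptive field, and a per-pixel label map of the same spatial size as the input. This is PyTorch; channel counts and the number of ILD classes are illustrative.

```python
# Dilated fully convolutional network producing a per-pixel label map.
import torch
import torch.nn as nn

def dilated_block(cin, cout, d):
    # padding = dilation keeps the spatial size constant for 3x3 kernels
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=d, dilation=d), nn.ReLU())

class DilatedFCN(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            dilated_block(1, 32, 1),
            dilated_block(32, 32, 2),
            dilated_block(32, 32, 4),
            dilated_block(32, 32, 8),
            nn.Conv2d(32, n_classes, 1))           # 1x1 conv -> per-pixel class logits
    def forward(self, x):                           # x: B x 1 x H x W, arbitrary size
        return self.net(x)                          # B x n_classes x H x W

labels = DilatedFCN()(torch.randn(1, 1, 96, 96)).argmax(dim=1)   # 1 x 96 x 96 label map
```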
Collapse
|
444
|
Affiliation(s)
- Eyal Klang
- Department of Radiology, The Chaim Sheba Medical Center, Tel Hashomer, Israel
| |
Collapse
|
445
|
Joyseeree R, Müller H, Depeursinge A. Rotation-covariant tissue analysis for interstitial lung diseases using learned steerable filters: Performance evaluation and relevance for diagnostic aid. Comput Med Imaging Graph 2018; 64:1-11. [DOI: 10.1016/j.compmedimag.2018.01.005] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2017] [Revised: 12/19/2017] [Accepted: 01/09/2018] [Indexed: 11/30/2022]
|
446
|
Yasaka K, Akai H, Kunimatsu A, Kiryu S, Abe O. Deep learning with convolutional neural network in radiology. Jpn J Radiol 2018; 36:257-272. [PMID: 29498017 DOI: 10.1007/s11604-018-0726-3] [Citation(s) in RCA: 211] [Impact Index Per Article: 30.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2017] [Accepted: 02/26/2018] [Indexed: 12/28/2022]
Abstract
Deep learning with a convolutional neural network (CNN) has recently been gaining attention for its high performance in image recognition. With this technique, images themselves can be used in the learning process, and feature extraction in advance of learning is not required; important features are learned automatically. Thanks to developments in hardware and software, as well as in deep learning techniques, the application of this technique to radiological images for predicting clinically useful information, such as the detection and evaluation of lesions, is beginning to be investigated. This article presents basic technical knowledge regarding deep learning with CNNs along the actual workflow (collecting data, implementing CNNs, and the training and testing phases). Pitfalls of this technique and how to manage them are also illustrated. We also describe some advanced topics of deep learning, results of recent clinical studies, and future directions for the clinical application of deep learning techniques.
Collapse
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan.
| | - Hiroyuki Akai
- Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
| | - Akira Kunimatsu
- Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
| | - Shigeru Kiryu
- Department of Radiology, Graduate School of Medical Sciences, International University of Health and Welfare, 4-3 Kozunomori, Narita, Chiba, Japan
| | - Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan
| |
Collapse
|
447
|
Thrall JH, Li X, Li Q, Cruz C, Do S, Dreyer K, Brink J. Artificial Intelligence and Machine Learning in Radiology: Opportunities, Challenges, Pitfalls, and Criteria for Success. J Am Coll Radiol 2018; 15:504-508. [PMID: 29402533 DOI: 10.1016/j.jacr.2017.12.026] [Citation(s) in RCA: 302] [Impact Index Per Article: 43.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2017] [Accepted: 12/15/2017] [Indexed: 12/13/2022]
Abstract
Worldwide interest in artificial intelligence (AI) applications, including imaging, is high and growing rapidly, fueled by availability of large datasets ("big data"), substantial advances in computing power, and new deep-learning algorithms. Apart from developing new AI methods per se, there are many opportunities and challenges for the imaging community, including the development of a common nomenclature, better ways to share image data, and standards for validating AI program use across different imaging platforms and patient populations. AI surveillance programs may help radiologists prioritize work lists by identifying suspicious or positive cases for early review. AI programs can be used to extract "radiomic" information from images not discernible by visual inspection, potentially increasing the diagnostic and prognostic value derived from image datasets. Predictions have been made that suggest AI will put radiologists out of business. This issue has been overstated, and it is much more likely that radiologists will beneficially incorporate AI methods into their practices. Current limitations in availability of technical expertise and even computing power will be resolved over time and can also be addressed by remote access solutions. Success for AI in imaging will be measured by value created: increased diagnostic certainty, faster turnaround, better outcomes for patients, and better quality of work life for radiologists. AI offers a new and promising set of methods for analyzing image data. Radiologists will explore these new pathways and are likely to play a leading role in medical applications of AI.
Collapse
Affiliation(s)
- James H Thrall
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts.
| | - Xiang Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
| | - Quanzheng Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
| | - Cinthia Cruz
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
| | - Synho Do
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
| | - Keith Dreyer
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
| | - James Brink
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
| |
Collapse
|
448
|
Giger ML. Machine Learning in Medical Imaging. J Am Coll Radiol 2018; 15:512-520. [PMID: 29398494 DOI: 10.1016/j.jacr.2017.12.028] [Citation(s) in RCA: 257] [Impact Index Per Article: 36.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2017] [Accepted: 12/20/2017] [Indexed: 12/12/2022]
Abstract
Advances in both imaging and computers have synergistically led to a rapid rise in the potential use of artificial intelligence in various radiological imaging tasks, such as risk assessment, detection, diagnosis, prognosis, and therapy response, as well as in multi-omics disease discovery. A brief overview of the field is given here, allowing the reader to recognize the terminology, the various subfields, and components of machine learning, as well as the clinical potential. Radiomics, an expansion of computer-aided diagnosis, has been defined as the conversion of images to minable data. The ultimate benefit of quantitative radiomics is to (1) yield predictive image-based phenotypes of disease for precision medicine or (2) yield quantitative image-based phenotypes for data mining with other -omics for discovery (ie, imaging genomics). For deep learning in radiology to succeed, note that well-annotated large data sets are needed since deep networks are complex, computer software and hardware are evolving constantly, and subtle differences in disease states are more difficult to perceive than differences in everyday objects. In the future, machine learning in radiology is expected to have a substantial clinical impact with imaging examinations being routinely obtained in clinical practice, providing an opportunity to improve decision support in medical image interpretation. The term of note is decision support, indicating that computers will augment human decision making, making it more effective and efficient. The clinical impact of having computers in the routine clinical practice may allow radiologists to further integrate their knowledge with their clinical colleagues in other medical specialties and allow for precision medicine.
Collapse
Affiliation(s)
- Maryellen L Giger
- Department of Radiology, The University of Chicago, Chicago, Illinois.
| |
Collapse
|
449
|
Nasr-Esfahani E, Karimi N, Jafari M, Soroushmehr S, Samavi S, Nallamothu B, Najarian K. Segmentation of vessels in angiograms using convolutional neural networks. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2017.09.012] [Citation(s) in RCA: 51] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
450
|
Kalantari A, Kamsin A, Shamshirband S, Gani A, Alinejad-Rokny H, Chronopoulos AT. Computational intelligence approaches for classification of medical data: State-of-the-art, future challenges and research directions. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2017.01.126] [Citation(s) in RCA: 70] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
|