1801
|
Deep Learning for Medical Image Processing: Overview, Challenges and the Future. Lecture Notes in Computational Vision and Biomechanics 2018. [DOI: 10.1007/978-3-319-65981-7_12] [Citation(s) in RCA: 369] [Impact Index Per Article: 52.7]
|
1802
|
Lei H, Zhao Y, Wen Y, Luo Q, Cai Y, Liu G, Lei B. Sparse feature learning for multi-class Parkinson's disease classification. Technol Health Care 2018; 26:193-203. [PMID: 29710748] [PMCID: PMC6004973] [DOI: 10.3233/thc-174548] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6]
Abstract
This paper addresses the multi-class classification problem in Parkinson's disease (PD) analysis with a sparse discriminative feature selection framework. Specifically, we propose a framework that constructs a least squares regression model based on Fisher's linear discriminant analysis (LDA) and locality preserving projection (LPP). This framework uses both global and local information to select the most relevant and discriminative features and thus boost classification performance. Unlike previous methods restricted to binary classification, we perform multi-class classification for PD diagnosis. The proposed method is evaluated on the publicly available Parkinson's Progression Markers Initiative (PPMI) dataset. Extensive experimental results indicate that it identifies regions highly suitable for further PD analysis and diagnosis and outperforms state-of-the-art methods.
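For illustration, a minimal sketch of the generic ingredient behind such sparse feature selection frameworks: a least-squares regression onto class labels with an L1 sparsity penalty, solved by proximal gradient descent, with features ranked by the norm of their regression weights. The paper's actual objective additionally incorporates LDA- and LPP-based terms, which are omitted here; all data and settings below are made-up assumptions.

```python
# Illustrative sketch only: generic sparse least-squares feature selection.
import numpy as np

def sparse_ls_feature_selection(X, y, lam=0.1, lr=1e-3, iters=2000):
    """Minimize ||XW - Y||_F^2 + lam*||W||_1 by proximal gradient (ISTA),
    then rank features by the L2 norm of the corresponding row of W."""
    n, d = X.shape
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)        # one-hot class labels
    W = np.zeros((d, len(classes)))
    for _ in range(iters):
        grad = 2.0 * X.T @ (X @ W - Y)                         # gradient of the LS term
        W = W - lr * grad
        W = np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0) # soft-threshold (L1 prox)
    scores = np.linalg.norm(W, axis=1)                         # row-wise feature importance
    return np.argsort(scores)[::-1], W

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 50))                             # synthetic feature matrix
    y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int) + (X[:, 12] > 1).astype(int)
    ranked, _ = sparse_ls_feature_selection(X, y)
    print("Top-ranked features:", ranked[:5])
```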
Affiliation(s)
- Haijun Lei, Yujia Zhao, Yuting Wen, Qiuming Luo, Ye Cai, Gang Liu
- College of Computer Science and Software Engineering, Shenzhen University, Key Laboratory of Service Computing and Applications, Guangdong Province Key Laboratory of Popular High Performance Computers, Shenzhen, Guangdong, China
- Baiying Lei
- School of Biomedical Engineering, Health Science Center, Shenzhen University, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen, Guangdong, China
|
1803
|
Medical Image Synthesis for Data Augmentation and Anonymization Using Generative Adversarial Networks. Simulation and Synthesis in Medical Imaging 2018. [DOI: 10.1007/978-3-030-00536-8_1] [Citation(s) in RCA: 186] [Impact Index Per Article: 26.6]
|
1804
|
Discriminant analysis of neural style representations for breast lesion classification in ultrasound. Biocybern Biomed Eng 2018. [DOI: 10.1016/j.bbe.2018.05.003] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1]
|
1805
|
Vesal S, Ravikumar N, Davari A, Ellmann S, Maier A. Classification of Breast Cancer Histology Images Using Transfer Learning. Lecture Notes in Computer Science 2018. [DOI: 10.1007/978-3-319-93000-8_92] [Citation(s) in RCA: 53] [Impact Index Per Article: 7.6]
|
1806
|
Segmentation of the hippocampus by transferring algorithmic knowledge for large cohort processing. Med Image Anal 2018; 43:214-228. [DOI: 10.1016/j.media.2017.11.004] [Citation(s) in RCA: 43] [Impact Index Per Article: 6.1]
|
1807
|
Mohamed AA, Berg WA, Peng H, Luo Y, Jankowitz RC, Wu S. A deep learning method for classifying mammographic breast density categories. Med Phys 2017; 45:314-321. [PMID: 29159811] [DOI: 10.1002/mp.12683] [Citation(s) in RCA: 125] [Impact Index Per Article: 15.6]
Abstract
PURPOSE Mammographic breast density is an established risk marker for breast cancer and is visually assessed by radiologists in routine mammogram image reading, using four qualitative Breast Imaging and Reporting Data System (BI-RADS) breast density categories. It is particularly difficult for radiologists to consistently distinguish the two most common and most variably assigned BI-RADS categories, i.e., "scattered density" and "heterogeneously dense". The aim of this work was to investigate a deep learning-based breast density classifier to consistently distinguish these two categories, aiming at providing a potential computerized tool to assist radiologists in assigning a BI-RADS category in current clinical workflow. METHODS In this study, we constructed a convolutional neural network (CNN)-based model coupled with a large (i.e., 22,000 images) digital mammogram imaging dataset to evaluate the classification performance between the two aforementioned breast density categories. All images were collected from a cohort of 1,427 women who underwent standard digital mammography screening from 2005 to 2016 at our institution. The truths of the density categories were based on standard clinical assessment made by board-certified breast imaging radiologists. Effects of direct training from scratch solely using digital mammogram images and transfer learning of a pretrained model on a large nonmedical imaging dataset were evaluated for the specific task of breast density classification. In order to measure the classification performance, the CNN classifier was also tested on a refined version of the mammogram image dataset by removing some potentially inaccurately labeled images. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to measure the accuracy of the classifier. RESULTS The AUC was 0.9421 when the CNN-model was trained from scratch on our own mammogram images, and the accuracy increased gradually along with an increased size of training samples. Using the pretrained model followed by a fine-tuning process with as few as 500 mammogram images led to an AUC of 0.9265. After removing the potentially inaccurately labeled images, AUC was increased to 0.9882 and 0.9857 for without and with the pretrained model, respectively, both significantly higher (P < 0.001) than when using the full imaging dataset. CONCLUSIONS Our study demonstrated high classification accuracies between two difficult to distinguish breast density categories that are routinely assessed by radiologists. We anticipate that our approach will help enhance current clinical assessment of breast density and better support consistent density notification to patients in breast cancer screening.
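A minimal sketch of the transfer-learning recipe the abstract describes (ImageNet pre-training followed by fine-tuning on mammograms for a two-class density task). The ResNet-18 backbone, optimizer settings, and dummy tensors are stand-in assumptions, not the authors' implementation.

```python
# Illustrative transfer-learning sketch: pretrained backbone, new 2-class head.
import torch
import torch.nn as nn
import torchvision

# Downloads ImageNet weights on first use; older torchvision uses pretrained=True.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():                       # optionally freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)      # scattered vs heterogeneously dense

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)

# Dummy batch standing in for preprocessed mammogram crops (grayscale images
# would be replicated to 3 channels to match the ImageNet-trained stem).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

model.train()
for _ in range(5):                                 # a few fine-tuning steps for illustration
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
print("final loss:", loss.item())
```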
Affiliation(s)
- Aly A Mohamed
- Department of Radiology, University of Pittsburgh School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA
- Wendie A Berg
- Department of Radiology, University of Pittsburgh School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA; Magee-Womens Hospital of University of Pittsburgh Medical Center, 300 Halket St, Pittsburgh, PA, 15213, USA
- Hong Peng
- Department of Radiology, Chinese PLA General Hospital, 28 Fuxing Rd, Haidian District, Beijing, 100853, China
- Yahong Luo
- Department of Radiology, Liaoning Cancer Hospital & Institute, 44 Xiaoheyan Rd, Dadong District, Shenyang City, Liaoning, 110042, China
- Rachel C Jankowitz
- Magee-Womens Hospital of University of Pittsburgh Medical Center, 300 Halket St, Pittsburgh, PA, 15213, USA; Department of Medicine, School of Medicine, University of Pittsburgh, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA
- Shandong Wu
- Departments of Radiology, Biomedical Informatics, Bioengineering, and Computer Science, University of Pittsburgh, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA
|
1808
|
Leveraging uncertainty information from deep neural networks for disease detection. Sci Rep 2017; 7:17816. [PMID: 29259224] [PMCID: PMC5736701] [DOI: 10.1038/s41598-017-17876-z] [Citation(s) in RCA: 144] [Impact Index Per Article: 18.0]
Abstract
Deep learning (DL) has revolutionized the field of computer vision and image processing. In medical imaging, algorithmic solutions based on DL have been shown to achieve high performance on tasks that previously required medical experts. However, DL-based solutions for disease detection have been proposed without methods to quantify and control their uncertainty in a decision. In contrast, a physician knows whether she is uncertain about a case and will consult more experienced colleagues if needed. Here we evaluate dropout-based Bayesian uncertainty measures for DL in diagnosing diabetic retinopathy (DR) from fundus images and show that they capture uncertainty better than straightforward alternatives. Furthermore, we show that uncertainty-informed decision referral can improve diagnostic performance. Experiments across different networks, tasks and datasets show robust generalization. Depending on network capacity and task/dataset difficulty, we surpass 85% sensitivity and 80% specificity as recommended by the NHS when referring 0-20% of the most uncertain decisions for further inspection. We analyse causes of uncertainty by relating intuitions from 2D visualizations to the high-dimensional image space. While uncertainty is sensitive to clinically relevant cases, sensitivity to unfamiliar data samples is task dependent but can be rendered more robust.
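A minimal sketch of the dropout-based uncertainty idea evaluated above: dropout is kept active at test time, several stochastic forward passes are averaged, and the most uncertain cases are referred for further inspection. The toy network, number of passes, and referral fraction are illustrative assumptions.

```python
# Monte-Carlo dropout uncertainty with decision referral (illustrative sketch).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 2))

def mc_dropout_predict(model, x, T=20):
    model.train()                                  # keeps Dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(T)])
    mean = probs.mean(0)                           # predictive mean over T samples
    std = probs.std(0)[:, 1]                       # spread of the disease probability = uncertainty
    return mean, std

x = torch.randn(200, 100)                          # dummy features standing in for fundus images
mean, uncertainty = mc_dropout_predict(net, x)
refer = uncertainty.argsort(descending=True)[: int(0.2 * len(x))]  # refer the 20% most uncertain
keep = torch.ones(len(x), dtype=torch.bool)
keep[refer] = False
print("decided automatically:", int(keep.sum()), "referred:", len(refer))
```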
|
1809
|
Co-trained convolutional neural networks for automated detection of prostate cancer in multi-parametric MRI. Med Image Anal 2017; 42:212-227. [DOI: 10.1016/j.media.2017.08.006] [Citation(s) in RCA: 84] [Impact Index Per Article: 10.5]
|
1810
|
Affiliation(s)
- Michael F Byrne, Neal Shahidi
- University of British Columbia, Vancouver, British Columbia, Canada
- Douglas K Rex
- Indiana University Medical Center, Indianapolis, Indiana
|
1811
|
Building data-driven models with microstructural images: Generalization and interpretability. ACTA ACUST UNITED AC 2017. [DOI: 10.1016/j.md.2018.03.002] [Citation(s) in RCA: 54] [Impact Index Per Article: 6.8]
|
1812
|
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42:60-88. [PMID: 28778026] [DOI: 10.1016/j.media.2017.07.005] [Citation(s) in RCA: 4787] [Impact Index Per Article: 598.4]
Affiliation(s)
- Geert Litjens, Thijs Kooi, Francesco Ciompi, Mohsen Ghafoorian, Bram van Ginneken, Clara I Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
|
1813
|
Olczak J, Fahlberg N, Maki A, Razavian AS, Jilert A, Stark A, Sköldenberg O, Gordon M. Artificial intelligence for analyzing orthopedic trauma radiographs. Acta Orthop 2017; 88:581-586. [PMID: 28681679] [PMCID: PMC5694800] [DOI: 10.1080/17453674.2017.1344459] [Citation(s) in RCA: 254] [Impact Index Per Article: 31.8]
Abstract
Background and purpose - Recent advances in artificial intelligence (deep learning) have shown remarkable performance in classifying non-medical images, and the technology is believed to be the next technological revolution. So far it has never been applied in an orthopedic setting, and in this study we sought to determine the feasibility of using deep learning for skeletal radiographs. Methods - We extracted 256,000 wrist, hand, and ankle radiographs from Danderyd's Hospital and identified 4 classes: fracture, laterality, body part, and exam view. We then selected 5 openly available deep learning networks that were adapted for these images. The most accurate network was benchmarked against a gold standard for fractures. We furthermore compared the network's performance with 2 senior orthopedic surgeons who reviewed images at the same resolution as the network. Results - All networks exhibited an accuracy of at least 90% when identifying laterality, body part, and exam view. The final accuracy for fractures was estimated at 83% for the best performing network. The network performed similarly to senior orthopedic surgeons when presented with images at the same resolution as the network. The two-reviewer Cohen's kappa under these conditions was 0.76. Interpretation - This study supports the use of artificial intelligence for orthopedic radiographs, where it can perform at a human level. While the current implementation lacks important features that surgeons require, e.g. risk of dislocation, classifications, measurements, and combining multiple exam views, these problems have technical solutions that are waiting to be implemented for orthopedics.
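A small sketch of the inter-rater agreement statistic quoted above (Cohen's kappa between two reviewers); the label vectors below are made-up stand-ins for the surgeons' fracture calls.

```python
# Cohen's kappa between two raters (illustrative data).
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters scoring the same items."""
    labels = np.unique(np.concatenate([r1, r2]))
    n = len(r1)
    cm = np.zeros((len(labels), len(labels)))
    for a, b in zip(r1, r2):
        cm[np.searchsorted(labels, a), np.searchsorted(labels, b)] += 1
    po = np.trace(cm) / n                          # observed agreement
    pe = np.sum(cm.sum(0) * cm.sum(1)) / n**2      # agreement expected by chance
    return (po - pe) / (1 - pe)

rater1 = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])  # 1 = fracture, 0 = no fracture
rater2 = np.array([1, 1, 0, 0, 0, 0, 1, 1, 1, 1])
print("kappa =", round(cohens_kappa(rater1, rater2), 2))
```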
Affiliation(s)
- Jakub Olczak, André Stark, Olof Sköldenberg, Max Gordon
- Department of Clinical Sciences, Karolinska Institutet, Danderyd Hospital
- Atsuto Maki
- Department of Robotics, Perception and Learning (RPL), School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Ali Sharif Razavian
- Department of Clinical Sciences, Karolinska Institutet, Danderyd Hospital; Department of Robotics, Perception and Learning (RPL), School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Anthony Jilert
- Radiology clinic, Danderyd Hospital, Danderyd Hospital AB
|
1814
|
Kovacs W, Hsieh N, Roth H, Nnamdi-Emeratom C, Bandettini WP, Arai A, Mankodi A, Summers RM, Yao J. Holistic segmentation of the lung in cine MRI. J Med Imaging (Bellingham) 2017; 4:041310. [PMID: 29226176] [DOI: 10.1117/1.jmi.4.4.041310] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9]
Abstract
Duchenne muscular dystrophy (DMD) is a childhood-onset neuromuscular disease that results in the degeneration of muscle, starting in the extremities, before progressing to more vital areas, such as the lungs. Respiratory failure and pneumonia due to respiratory muscle weakness lead to hospitalization and early mortality. However, tracking the disease in this region can be difficult, as current methods are based on breathing tests and are incapable of distinguishing between muscle involvements. Cine MRI scans give insight into respiratory muscle movements, but the images suffer due to low spatial resolution and poor signal-to-noise ratio. Thus, a robust lung segmentation method is required for accurate analysis of the lung and respiratory muscle movement. We deployed a deep learning approach that utilizes sequence-specific prior information to assist the segmentation of lung in cine MRI. More specifically, we adopt a holistically nested network to conduct image-to-image holistic training and prediction. One frame of the cine MRI is used in the training and applied to the remainder of the sequence ([Formula: see text] frames). We applied this method to cine MRIs of the lung in the axial, sagittal, and coronal planes. Characteristic lung motion patterns during the breathing cycle were then derived from the segmentations and used for diagnosis. Our data set consisted of 31 young boys, age [Formula: see text] years, 15 of whom suffered from DMD. The remaining 16 subjects were age-matched healthy volunteers. For validation, slices from inspiratory and expiratory cycles were manually segmented and compared with results obtained from our method. The Dice similarity coefficient for the deep learning-based method was [Formula: see text] for the sagittal view, [Formula: see text] for the axial view, and [Formula: see text] for the coronal view. The holistic neural network approach was compared with an approach using Demon's registration and showed superior performance. These results suggest that the deep learning-based method reliably and accurately segments the lung across the breathing cycle.
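A short sketch of the overlap metric used for validation above, the Dice similarity coefficient between an automatic and a manual mask; the random masks are placeholders for real segmentations.

```python
# Dice similarity coefficient for binary segmentation masks.
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice coefficient for binary masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

rng = np.random.default_rng(0)
auto_mask = rng.random((256, 256)) > 0.5      # stand-in for the network's lung mask
manual_mask = rng.random((256, 256)) > 0.5    # stand-in for the manual reference
print("Dice =", round(dice(auto_mask, manual_mask), 3))
```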
Affiliation(s)
- William Kovacs, Nathan Hsieh, Holger Roth, Ronald M Summers, Jianhua Yao
- National Institutes of Health, Radiology and Imaging Sciences, Clinical Center, Clinical Image Processing Services, Bethesda, Maryland, United States
- Chioma Nnamdi-Emeratom, Ami Mankodi
- National Institutes of Health, National Institute of Neurological Disorders and Stroke, Neurogenetics Branch, Bethesda, Maryland, United States
- W Patricia Bandettini, Andrew Arai
- National Institutes of Health, National Heart, Lung and Blood Institute, Advanced Cardiovascular Imaging, Bethesda, Maryland, United States
|
1815
|
Deep-learning Versus OBIA for Scattered Shrub Detection with Google Earth Imagery: Ziziphus lotus as Case Study. Remote Sensing 2017. [DOI: 10.3390/rs9121220] [Citation(s) in RCA: 96] [Impact Index Per Article: 12.0]
|
1816
|
Computational biology: deep learning. Emerg Top Life Sci 2017; 1:257-274. [PMID: 33525807] [PMCID: PMC7289034] [DOI: 10.1042/etls20160025] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0]
Abstract
Deep learning is the trendiest tool in a computational biologist's toolbox. This exciting class of methods, based on artificial neural networks, quickly became popular due to its competitive performance in prediction problems. In pioneering early work, applying simple network architectures to abundant data already provided gains over traditional counterparts in functional genomics, image analysis, and medical diagnostics. Now, ideas for constructing and training networks and even off-the-shelf models have been adapted from the rapidly developing machine learning subfield to improve performance in a range of computational biology tasks. Here, we review some of these advances in the last 2 years.
|
1817
|
Samala RK, Chan HP, Hadjiiski LM, Helvie MA, Cha KH, Richter CD. Multi-task transfer learning deep convolutional neural network: application to computer-aided diagnosis of breast cancer on mammograms. Phys Med Biol 2017; 62:8894-8908. [PMID: 29035873] [PMCID: PMC5859950] [DOI: 10.1088/1361-6560/aa93d4] [Citation(s) in RCA: 98] [Impact Index Per Article: 12.3]
Abstract
Transfer learning in deep convolutional neural networks (DCNNs) is an important step in its application to medical imaging tasks. We propose a multi-task transfer learning DCNN with the aim of translating the 'knowledge' learned from non-medical images to medical diagnostic tasks through supervised training and increasing the generalization capabilities of DCNNs by simultaneously learning auxiliary tasks. We studied this approach in an important application: classification of malignant and benign breast masses. With Institutional Review Board (IRB) approval, digitized screen-film mammograms (SFMs) and digital mammograms (DMs) were collected from our patient files and additional SFMs were obtained from the Digital Database for Screening Mammography. The data set consisted of 2242 views with 2454 masses (1057 malignant, 1397 benign). In single-task transfer learning, the DCNN was trained and tested on SFMs. In multi-task transfer learning, SFMs and DMs were used to train the DCNN, which was then tested on SFMs. N-fold cross-validation with the training set was used for training and parameter optimization. On the independent test set, the multi-task transfer learning DCNN was found to have significantly (p = 0.007) higher performance compared to the single-task transfer learning DCNN. This study demonstrates that multi-task transfer learning may be an effective approach for training DCNN in medical imaging applications when training samples from a single modality are limited.
Affiliation(s)
- Ravi K Samala
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109-5842, United States of America
|
1818
|
Cheplygina V, Pena IP, Pedersen JH, Lynch DA, Sorensen L, de Bruijne M. Transfer Learning for Multicenter Classification of Chronic Obstructive Pulmonary Disease. IEEE J Biomed Health Inform 2017; 22:1486-1496. [PMID: 29990220] [DOI: 10.1109/jbhi.2017.2769800] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6]
Abstract
Chronic obstructive pulmonary disease (COPD) is a lung disease that can be quantified using chest computed tomography scans. Recent studies have shown that COPD can be automatically diagnosed using weakly supervised learning of intensity and texture distributions. However, until now such classifiers have only been evaluated on scans from a single domain, and it is unclear whether they would generalize across domains, such as different scanners or scanning protocols. To address this problem, we investigate classification of COPD in a multicenter dataset with a total of 803 scans from three different centers and four different scanners, with heterogeneous subject distributions. Our method is based on Gaussian texture features and a weighted logistic classifier, which increases the weights of samples similar to the test data. We show that Gaussian texture features outperform intensity features previously used in multicenter classification tasks. We also show that a weighting strategy based on a classifier that is trained to discriminate between scans from different domains can further improve the results. To encourage further research into transfer learning methods for the classification of COPD, upon acceptance of this paper we will release two feature datasets used in this study on http://bigr.nl/research/projects/copd.
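A minimal sketch of the weighting strategy described above: a domain discriminator scores how target-like each training scan is, and those scores become sample weights for a logistic classifier. The synthetic features stand in for the Gaussian texture features of the paper; this is not the authors' code.

```python
# Instance weighting with a domain discriminator (illustrative sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(300, 20))      # labeled scans from the source domain
y_src = (X_src[:, 0] > 0).astype(int)             # dummy COPD labels
X_tgt = rng.normal(0.5, 1.2, size=(200, 20))      # unlabeled scans from a different scanner

# 1) Domain discriminator: source (0) versus target (1).
dom_X = np.vstack([X_src, X_tgt])
dom_y = np.hstack([np.zeros(len(X_src)), np.ones(len(X_tgt))])
domain_clf = LogisticRegression(max_iter=1000).fit(dom_X, dom_y)

# 2) Weight source samples by how target-like they look.
weights = domain_clf.predict_proba(X_src)[:, 1]

# 3) Weighted COPD classifier trained on the source domain only.
copd_clf = LogisticRegression(max_iter=1000).fit(X_src, y_src, sample_weight=weights)
print("mean weight:", weights.mean().round(3), "target predictions:", copd_clf.predict(X_tgt)[:5])
```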
|
1819
|
Carneiro G, Nascimento J, Bradley AP. Automated Analysis of Unregistered Multi-View Mammograms With Deep Learning. IEEE Trans Med Imaging 2017; 36:2355-2365. [PMID: 28920897] [DOI: 10.1109/tmi.2017.2751523] [Citation(s) in RCA: 67] [Impact Index Per Article: 8.4]
Abstract
We describe an automated methodology for the analysis of unregistered cranio-caudal (CC) and medio-lateral oblique (MLO) mammography views in order to estimate the patient's risk of developing breast cancer. The main innovation behind this methodology lies in the use of deep learning models for the problem of jointly classifying unregistered mammogram views and respective segmentation maps of breast lesions (i.e., masses and micro-calcifications). This is a holistic methodology that can classify a whole mammographic exam, containing the CC and MLO views and the segmentation maps, as opposed to the classification of individual lesions, which is the dominant approach in the field. We also demonstrate that the proposed system is capable of using the segmentation maps generated by automated mass and micro-calcification detection systems, and still producing accurate results. The semi-automated approach (using manually defined mass and micro-calcification segmentation maps) is tested on two publicly available data sets (INbreast and DDSM), and results show that the volume under ROC surface (VUS) for a 3-class problem (normal tissue, benign, and malignant) is over 0.9, the area under ROC curve (AUC) for the 2-class "benign versus malignant" problem is over 0.9, and for the 2-class breast screening problem (malignancy versus normal/benign) is also over 0.9. For the fully automated approach, the VUS results on INbreast is over 0.7, and the AUC for the 2-class "benign versus malignant" problem is over 0.78, and the AUC for the 2-class breast screening is 0.86.
|
1820
|
Epithelium-Stroma Classification via Convolutional Neural Networks and Unsupervised Domain Adaptation in Histopathological Images. IEEE J Biomed Health Inform 2017; 21:1625-1632. [DOI: 10.1109/jbhi.2017.2691738] [Citation(s) in RCA: 44] [Impact Index Per Article: 5.5]
|
1821
|
Qayyum A, Anwar SM, Awais M, Majid M. Medical image retrieval using deep convolutional neural network. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2017.05.025] [Citation(s) in RCA: 109] [Impact Index Per Article: 13.6]
|
1822
|
Development of automatic retinal vessel segmentation method in fundus images via convolutional neural networks. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2017:681-684. [PMID: 29059964] [DOI: 10.1109/embc.2017.8036916] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9]
Abstract
The analysis of fundus photographs is a useful diagnostic tool for diverse retinal diseases such as diabetic retinopathy and hypertensive retinopathy. Specifically, the morphology of retinal vessels is used as a classification measure for retinal diseases, and the automatic processing of fundus images has been widely investigated to improve diagnostic efficiency. The automatic segmentation of retinal vessels is essential and needs to precede a computer-aided diagnosis system. In this study, we propose a method that performs patch-based pixel-wise segmentation with convolutional neural networks (CNNs) for automatic retinal vessel segmentation in fundus images. We construct a network composed of several modules that include convolutional and upsampling layers. Feature maps produced by the modules are concatenated into a single feature map to capture coarse and fine vessel structures simultaneously. The concatenated feature map is followed by a convolutional layer for performing a pixel-wise prediction. The performance of the proposed method is measured on the DRIVE dataset. We show that our method is comparable to other state-of-the-art algorithms.
|
1823
|
Zhen X, Chen J, Zhong Z, Hrycushko B, Zhou L, Jiang S, Albuquerque K, Gu X. Deep convolutional neural network with transfer learning for rectum toxicity prediction in cervical cancer radiotherapy: a feasibility study. Phys Med Biol 2017; 62:8246-8263. [PMID: 28914611] [DOI: 10.1088/1361-6560/aa8d09] [Citation(s) in RCA: 110] [Impact Index Per Article: 13.8]
Abstract
Better understanding of the dose-toxicity relationship is critical for safe dose escalation to improve local control in late-stage cervical cancer radiotherapy. In this study, we introduced a convolutional neural network (CNN) model to analyze rectum dose distribution and predict rectum toxicity. Forty-two cervical cancer patients treated with combined external beam radiotherapy (EBRT) and brachytherapy (BT) were retrospectively collected, including twelve toxicity patients and thirty non-toxicity patients. We adopted a transfer learning strategy to overcome the limited patient data issue. A 16-layer CNN developed by the Visual Geometry Group (VGG-16) at the University of Oxford was pre-trained on a large-scale natural image database, ImageNet, and fine-tuned with patient rectum surface dose maps (RSDMs), i.e., accumulated EBRT + BT doses on the unfolded rectum surface. We used adaptive synthetic sampling and data augmentation to address two challenges: data imbalance and data scarcity. Gradient-weighted class activation maps (Grad-CAM) were also generated to highlight the discriminative regions of the RSDM alongside the prediction model. We compared different strategies for fine-tuning the CNN coefficients, and compared the predictive performance against traditional dose-volume parameters, e.g. D0.1cc, D1cc, and D2cc, and against texture features extracted from the RSDM. Satisfactory prediction performance was achieved with the proposed scheme, and the mean Grad-CAM over the toxicity patient group was geometrically consistent with the distribution found by statistical analysis, indicating possible rectum toxicity locations. The evaluation results demonstrate the feasibility of building a CNN-based rectum dose-toxicity prediction model with transfer learning for cervical cancer radiotherapy.
Affiliation(s)
- Xin Zhen
- Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America; Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China
|
1824
|
Xu Y, Ma J, Liaw A, Sheridan RP, Svetnik V. Demystifying Multitask Deep Neural Networks for Quantitative Structure-Activity Relationships. J Chem Inf Model 2017; 57:2490-2504. [PMID: 28872869] [DOI: 10.1021/acs.jcim.7b00087] [Citation(s) in RCA: 137] [Impact Index Per Article: 17.1]
Abstract
Deep neural networks (DNNs) are complex computational models that have found great success in many artificial intelligence applications, such as computer vision and natural language processing. In the past four years, DNNs have also generated promising results for quantitative structure-activity relationship (QSAR) tasks. Previous work showed that DNNs can routinely make better predictions than traditional methods, such as random forests, on a diverse collection of QSAR data sets. It was also found that multitask DNN models (those trained on and predicting multiple QSAR properties simultaneously) outperform DNNs trained separately on the individual data sets in many, but not all, tasks. To date there has been no satisfactory explanation of why the QSAR of one task embedded in a multitask DNN can borrow information from other unrelated QSAR tasks. Thus, using multitask DNNs in a way that consistently provides a predictive advantage becomes a challenge. In this work, we explored why multitask DNNs make a difference in predictive performance. Our results show that during prediction a multitask DNN does borrow "signal" from molecules with similar structures in the training sets of the other tasks. However, whether this borrowing leads to better or worse predictive performance depends on whether the activities are correlated. On the basis of this, we have developed a strategy to use multitask DNNs that incorporate prior domain knowledge to select training sets with correlated activities, and we demonstrate its effectiveness on several examples.
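A minimal sketch of a multitask feed-forward network of the kind discussed above: a shared representation with one output per QSAR task, trained with a masked loss so molecules lacking a measurement for a task do not contribute to that task's error. Layer sizes and data are illustrative assumptions, not the authors' configuration.

```python
# Multitask regression network with a masked loss for missing activities (sketch).
import torch
import torch.nn as nn

n_desc, n_tasks = 512, 3
model = nn.Sequential(nn.Linear(n_desc, 256), nn.ReLU(), nn.Linear(256, n_tasks))

x = torch.randn(64, n_desc)                        # molecular descriptors
y = torch.randn(64, n_tasks)                       # activities; NaN marks a missing measurement
y[torch.rand_like(y) < 0.4] = float("nan")

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    pred = model(x)
    mask = ~torch.isnan(y)                         # only observed activities enter the loss
    loss = ((pred[mask] - y[mask]) ** 2).mean()
    loss.backward()
    opt.step()
print("masked MSE:", loss.item())
```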
Affiliation(s)
- Yuting Xu, Junshui Ma, Andy Liaw, Vladimir Svetnik
- Biometrics Research Department, Merck & Co., Inc., Rahway, New Jersey 07065, United States
- Robert P Sheridan
- Modeling and Informatics Department, Merck & Co., Inc., Kenilworth, New Jersey 07033, United States
|
1825
|
Li Z, Zhang X, Müller H, Zhang S. Large-scale retrieval for medical image analytics: A comprehensive review. Med Image Anal 2017; 43:66-84. [PMID: 29031831] [DOI: 10.1016/j.media.2017.09.007] [Citation(s) in RCA: 75] [Impact Index Per Article: 9.4]
Abstract
Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, which have produced huge amounts of medical images with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of handling such large amounts of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of large-scale medical image analytics. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, covering a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis.
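A minimal sketch of the core retrieval step in the reviewed pipeline: images reduced to fixed-length feature vectors, with a query answered by cosine-similarity ranking. A production system would add approximate indexing (hashing, trees) on top of this brute-force search; all data below are synthetic.

```python
# Brute-force content-based retrieval by cosine similarity (illustrative sketch).
import numpy as np

def build_index(features):
    """L2-normalize database features so cosine similarity becomes a dot product."""
    return features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)

def retrieve(index, query, k=5):
    q = query / (np.linalg.norm(query) + 1e-12)
    sims = index @ q                      # cosine similarity to every database item
    top = np.argsort(sims)[::-1][:k]
    return top, sims[top]

rng = np.random.default_rng(0)
database = rng.normal(size=(10000, 128))  # e.g. CNN embeddings of medical images
index = build_index(database)
ids, scores = retrieve(index, rng.normal(size=128))
print("top matches:", ids, scores.round(3))
```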
Affiliation(s)
- Zhongyu Li, Xiaofan Zhang, Shaoting Zhang
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
- Henning Müller
- Information Systems Institute, HES-SO Valais, Sierre, Switzerland
|
1826
|
Dou Q, Yu L, Chen H, Jin Y, Yang X, Qin J, Heng PA. 3D deeply supervised network for automated segmentation of volumetric medical images. Med Image Anal 2017; 41:40-54. [DOI: 10.1016/j.media.2017.05.001] [Citation(s) in RCA: 198] [Impact Index Per Article: 24.8]
|
1827
|
Weng S, Xu X, Li J, Wong STC. Combining deep learning and coherent anti-Stokes Raman scattering imaging for automated differential diagnosis of lung cancer. J Biomed Opt 2017; 22:1-10. [PMID: 29086544] [PMCID: PMC5661703] [DOI: 10.1117/1.jbo.22.10.106017] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.8]
Abstract
Lung cancer is the most prevalent type of cancer and the leading cause of cancer-related deaths worldwide. Coherent anti-Stokes Raman scattering (CARS) is capable of providing cellular-level images and resolving pathologically related features on human lung tissues. However, conventional means of analyzing CARS images requires extensive image processing, feature engineering, and human intervention. This study demonstrates the feasibility of applying a deep learning algorithm to automatically differentiate normal and cancerous lung tissue images acquired by CARS. We leverage the features learned by pretrained deep neural networks and retrain the model using CARS images as the input. We achieve 89.2% accuracy in classifying normal, small-cell carcinoma, adenocarcinoma, and squamous cell carcinoma lung images. This computational method is a step toward on-the-spot diagnosis of lung cancer and can be further strengthened by the efforts aimed at miniaturizing the CARS technique for fiber-based microendoscopic imaging.
Affiliation(s)
- Sheng Weng, Stephen T. C. Wong
- Translational Biophotonics Laboratory, Department of Systems Medicine and Bioengineering, Houston Methodist Research Institute, Weill Cornell Medicine, Houston, Texas, United States; Rice University, Department of Electrical and Computer Engineering, Houston, Texas, United States
- Xiaoyun Xu, Jiasong Li
- Translational Biophotonics Laboratory, Department of Systems Medicine and Bioengineering, Houston Methodist Research Institute, Weill Cornell Medicine, Houston, Texas, United States
|
1828
|
Liu S, Xie Y, Jirapatnakul A, Reeves AP. Pulmonary nodule classification in lung cancer screening with three-dimensional convolutional neural networks. J Med Imaging (Bellingham) 2017; 4:041308. [PMID: 29181428] [PMCID: PMC5685809] [DOI: 10.1117/1.jmi.4.4.041308] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3]
Abstract
A three-dimensional (3-D) convolutional neural network (CNN) trained from scratch is presented for the classification of pulmonary nodule malignancy from low-dose chest CT scans. Recent approval of lung cancer screening in the United States provides motivation for determining the likelihood of malignancy of pulmonary nodules from the initial CT scan finding to minimize the number of follow-up actions. Classifier ensembles of different combinations of the 3-D CNN and traditional machine learning models based on handcrafted 3-D image features are also explored. The dataset consisting of 326 nodules is constructed with balanced size and class distribution with the malignancy status pathologically confirmed. The results show that both the 3-D CNN single model and the ensemble models with 3-D CNN outperform the respective counterparts constructed using only traditional models. Moreover, complementary information can be learned by the 3-D CNN and the conventional models, which together are combined to construct an ensemble model with statistically superior performance compared with the single traditional model. The performance of the 3-D CNN model demonstrates the potential for improving the lung cancer screening follow-up protocol, which currently mainly depends on the nodule size.
Affiliation(s)
- Shuang Liu, Yiting Xie, Anthony P. Reeves
- Cornell University, School of Electrical and Computer Engineering, Ithaca, New York, United States
- Artit Jirapatnakul
- Icahn School of Medicine at Mount Sinai, Department of Radiology, New York, United States
|
1829
|
Aubreville M, Knipfer C, Oetter N, Jaremenko C, Rodner E, Denzler J, Bohr C, Neumann H, Stelzle F, Maier A. Automatic Classification of Cancerous Tissue in Laserendomicroscopy Images of the Oral Cavity using Deep Learning. Sci Rep 2017; 7:11979. [PMID: 28931888] [PMCID: PMC5607286] [DOI: 10.1038/s41598-017-12320-8] [Citation(s) in RCA: 123] [Impact Index Per Article: 15.4]
Abstract
Oral Squamous Cell Carcinoma (OSCC) is a common type of cancer of the oral epithelium. Despite their high impact on mortality, sufficient screening methods for early diagnosis of OSCC often lack accuracy and thus OSCCs are mostly diagnosed at a late stage. Early detection and accurate outline estimation of OSCCs would lead to a better curative outcome and a reduction in recurrence rates after surgical treatment. Confocal Laser Endomicroscopy (CLE) records sub-surface micro-anatomical images for in vivo cell structure analysis. Recent CLE studies showed great prospects for a reliable, real-time ultrastructural imaging of OSCC in situ. We present and evaluate a novel automatic approach for OSCC diagnosis using deep learning technologies on CLE images. The method is compared against textural feature-based machine learning approaches that represent the current state of the art. For this work, CLE image sequences (7894 images) from patients diagnosed with OSCC were obtained from 4 specific locations in the oral cavity, including the OSCC lesion. The present approach is found to outperform the state of the art in CLE image recognition with an area under the curve (AUC) of 0.96 and a mean accuracy of 88.3% (sensitivity 86.6%, specificity 90%).
Affiliation(s)
- Marc Aubreville
- Pattern Recognition Lab, Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Christian Knipfer
- Department of Oral and Maxillofacial Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Erlangen Graduate School in Advanced Optical Technologies (SAOT), Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Nicolai Oetter
- Erlangen Graduate School in Advanced Optical Technologies (SAOT), Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Department of Oral and Maxillofacial Surgery, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Christian Jaremenko
- Pattern Recognition Lab, Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Erik Rodner, Joachim Denzler
- Computer Vision Group, Friedrich-Schiller-Universität Jena, Jena, Germany
- Christopher Bohr
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Helmut Neumann
- Erlangen Graduate School in Advanced Optical Technologies (SAOT), Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; First Department of Internal Medicine, University Hospital Mainz, Johannes Gutenberg-Universität Mainz, Mainz, Germany
- Florian Stelzle
- Erlangen Graduate School in Advanced Optical Technologies (SAOT), Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Department of Oral and Maxillofacial Surgery, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, Computer Science, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Erlangen Graduate School in Advanced Optical Technologies (SAOT), Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
|
1830
|
Automatic Radiographic Position Recognition from Image Frequency and Intensity. Journal of Healthcare Engineering 2017; 2017:2727686. [PMID: 29104743] [PMCID: PMC5623794] [DOI: 10.1155/2017/2727686] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4]
Abstract
PURPOSE With the development of digital X-ray imaging and processing methods, the categorization and analysis of massive numbers of digital radiographic images need to be automated. Crucial to this processing is the automatic retrieval and recognition of the radiographic position. To address these concerns, we developed an automatic method to identify a patient's position and body region using only frequency curve classification and gray-level matching. METHODS Our method combines frequency analysis with gray-level image matching. The radiographic position is determined from frequency similarity and amplitude classification. Body region recognition is performed by image matching against a whole-body phantom image with prior knowledge of templates; the whole-body phantom image is stitched together from radiographs of different body parts. RESULTS The proposed method automatically retrieves and recognizes the radiographic position and body region using frequency and intensity information. It replaces 2D image retrieval with 1D frequency curve classification, with higher speed and an accuracy of up to 93.78%. CONCLUSION The proposed method outperforms existing approaches to radiographic position recognition for digital X-ray images, with limited time cost and a simple algorithm. The frequency information of a radiograph can make image classification quicker and more accurate.
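A minimal sketch of the 1D frequency idea described above: an image is collapsed to a row-intensity profile, its FFT amplitude spectrum serves as a signature, and the position is assigned by nearest-template matching. The synthetic images and templates are assumptions for illustration only.

```python
# 1-D frequency-curve classification of image "position" (illustrative sketch).
import numpy as np

def frequency_signature(image, n_coeffs=32):
    profile = image.mean(axis=1)                    # collapse to a 1-D intensity curve
    spectrum = np.abs(np.fft.rfft(profile))         # amplitude spectrum of the curve
    spectrum /= spectrum.max() + 1e-12              # normalize amplitude
    return spectrum[:n_coeffs]

def classify(image, templates):
    sig = frequency_signature(image)
    dists = {name: np.linalg.norm(sig - t) for name, t in templates.items()}
    return min(dists, key=dists.get)                # nearest template wins

rng = np.random.default_rng(0)
chest = rng.random((256, 256)) + np.sin(np.linspace(0, 6, 256))[:, None]   # low-frequency pattern
skull = rng.random((256, 256)) + np.sin(np.linspace(0, 20, 256))[:, None]  # higher-frequency pattern
templates = {"chest": frequency_signature(chest), "skull": frequency_signature(skull)}
print(classify(chest + 0.1 * rng.random((256, 256)), templates))
```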
|
1831
|
Han S, Kang HK, Jeong JY, Park MH, Kim W, Bang WC, Seong YK. A deep learning framework for supporting the classification of breast lesions in ultrasound images. Phys Med Biol 2017; 62:7714-7728. [PMID: 28753132] [DOI: 10.1088/1361-6560/aa82ec] [Citation(s) in RCA: 188] [Impact Index Per Article: 23.5]
Abstract
In this research, we exploited a deep learning framework to differentiate the distinctive types of lesions and nodules in breast images acquired with ultrasound imaging. A biopsy-proven benchmarking dataset was built from 5151 patient cases containing a total of 7408 ultrasound breast images, representative of semi-automatically segmented lesions associated with masses. The dataset comprised 4254 benign and 3154 malignant lesions. The developed method includes histogram equalization, image cropping and margin augmentation. The GoogLeNet convolutional neural network was trained on the database to differentiate benign and malignant tumors. The networks were trained on the data with augmentation and the data without augmentation, and both showed an area under the curve of over 0.9. The networks showed an accuracy of about 0.9 (90%), a sensitivity of 0.86 and a specificity of 0.96. Although target regions of interest (ROIs) were selected by radiologists, meaning that radiologists still have to point out the location of the ROI, the classification of malignant lesions showed promising results. If this method is used by radiologists in clinical practice, it can classify malignant lesions in a short time and support radiologists' diagnoses in discriminating malignant lesions. Therefore, the proposed method can work in tandem with human radiologists to improve performance, which is a fundamental purpose of computer-aided diagnosis.
Affiliation(s)
- Seokmin Han
- Korea National University of Transportation, Uiwang-si, Kyunggi-do, Republic of Korea
|
1832
|
Li H, Giger ML, Huynh BQ, Antropova NO. Deep learning in breast cancer risk assessment: evaluation of convolutional neural networks on a clinical dataset of full-field digital mammograms. J Med Imaging (Bellingham) 2017; 4:041304. [PMID: 28924576] [DOI: 10.1117/1.jmi.4.4.041304] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3]
Abstract
To evaluate deep learning in the assessment of breast cancer risk in which convolutional neural networks (CNNs) with transfer learning are used to extract parenchymal characteristics directly from full-field digital mammographic (FFDM) images instead of using computerized radiographic texture analysis (RTA), 456 clinical FFDM cases were included: a "high-risk" BRCA1/2 gene-mutation carriers dataset (53 cases), a "high-risk" unilateral cancer patients dataset (75 cases), and a "low-risk dataset" (328 cases). Deep learning was compared to the use of features from RTA, as well as to a combination of both in the task of distinguishing between high- and low-risk subjects. Similar classification performances were obtained using CNN [area under the curve [Formula: see text]; standard error [Formula: see text]] and RTA ([Formula: see text]; [Formula: see text]) in distinguishing BRCA1/2 carriers and low-risk women. However, in distinguishing unilateral cancer patients and low-risk women, performance was significantly greater with CNN ([Formula: see text]; [Formula: see text]) compared to RTA ([Formula: see text]; [Formula: see text]). Fusion classifiers performed significantly better than the RTA-alone classifiers with AUC values of 0.86 and 0.84 in differentiating BRCA1/2 carriers from low-risk women and unilateral cancer patients from low-risk women, respectively. In conclusion, deep learning extracted parenchymal characteristics from FFDMs performed as well as, or better than, conventional texture analysis in the task of distinguishing between cancer risk populations.
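A minimal sketch of using a pretrained CNN as a fixed feature extractor for risk classification, the idea compared above against radiographic texture analysis; the ResNet-18 backbone, the SVM, and the dummy ROIs are stand-in assumptions rather than the authors' setup.

```python
# Frozen pretrained CNN as a feature extractor + classical classifier (sketch).
import torch
import torchvision
from sklearn.svm import SVC

backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")  # downloads weights on first use
backbone.fc = torch.nn.Identity()          # drop the ImageNet head, keep 512-d features
backbone.eval()

with torch.no_grad():
    rois = torch.randn(40, 3, 224, 224)    # stand-ins for FFDM parenchymal ROIs
    feats = backbone(rois).numpy()         # 40 x 512 feature matrix

labels = ([1] * 20) + ([0] * 20)           # 1 = high-risk, 0 = low-risk (dummy labels)
clf = SVC(kernel="linear", probability=True).fit(feats, labels)
print("risk probability of first case:", clf.predict_proba(feats[:1])[0, 1].round(3))
```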
Affiliation(s)
- Hui Li, Maryellen L Giger, Benjamin Q Huynh, Natalia O Antropova
- University of Chicago, Department of Radiology, Chicago, Illinois, United States
|
1833
|
Bi L, Kim J, Ahn E, Kumar A, Fulham M, Feng D. Dermoscopic Image Segmentation via Multistage Fully Convolutional Networks. IEEE Trans Biomed Eng 2017; 64:2065-2074. [DOI: 10.1109/tbme.2017.2712771] [Citation(s) in RCA: 168] [Impact Index Per Article: 21.0]
|
1834
|
Dmitriev K, Kaufman AE, Javed AA, Hruban RH, Fishman EK, Lennon AM, Saltz JH. Classification of Pancreatic Cysts in Computed Tomography Images Using a Random Forest and Convolutional Neural Network Ensemble. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017; 10435:150-158. [PMID: 29881827] [PMCID: PMC5987215] [DOI: 10.1007/978-3-319-66179-7_18] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3]
Abstract
There are many different types of pancreatic cysts. These range from completely benign to malignant, and identifying the exact cyst type can be challenging in clinical practice. This work describes an automatic classification algorithm that classifies the four most common types of pancreatic cysts using computed tomography images. The proposed approach utilizes the general demographic information about a patient as well as the imaging appearance of the cyst. It is based on a Bayesian combination of the random forest classifier, which learns subclass-specific demographic, intensity, and shape features, and a new convolutional neural network that relies on the fine texture information. Quantitative assessment of the proposed method was performed using a 10-fold cross validation on 134 patients and reported a classification accuracy of 83.6%.
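A minimal sketch of one way to realize the Bayesian late fusion described above: per-class posteriors from a random forest and a CNN are combined with a product rule under a conditional-independence assumption. The class names and probability values below are illustrative placeholders only.

```python
# Late fusion of two classifiers' posteriors with a simple product rule (sketch).
import numpy as np

def combine_posteriors(p_rf, p_cnn, prior=None):
    """Fuse two class-probability vectors; an optional class prior is divided out."""
    if prior is None:
        prior = np.ones_like(p_rf) / len(p_rf)
    fused = p_rf * p_cnn / prior            # product rule assuming conditional independence
    return fused / fused.sum()              # renormalize to a proper posterior

classes = ["type A", "type B", "type C", "type D"]   # placeholder cyst-type labels
p_rf = np.array([0.50, 0.20, 0.20, 0.10])            # random-forest posterior (made up)
p_cnn = np.array([0.35, 0.05, 0.45, 0.15])           # CNN posterior (made up)
fused = combine_posteriors(p_rf, p_cnn)
print(dict(zip(classes, fused.round(3))), "->", classes[int(fused.argmax())])
```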
Affiliation(s)
- Arie E Kaufman
- Department of Computer Science, Stony Brook University, Stony Brook, USA
- Ammar A Javed
- Department of Surgery, Johns Hopkins School of Medicine, Baltimore, MD, USA
- Ralph H Hruban
- The Department of Pathology, The Sol Goldman Pancreatic Cancer Research Center, Johns Hopkins School of Medicine, Baltimore, MD, USA
- Elliot K Fishman
- Department of Radiology, Johns Hopkins School of Medicine, Baltimore, MD, USA
- Anne Marie Lennon
- Department of Surgery, Johns Hopkins School of Medicine, Baltimore, MD, USA; Division of Gastroenterology and Hepatology, Johns Hopkins School of Medicine, Baltimore, MD, USA
- Joel H Saltz
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
|
1835
|
Yuan Y, Chao M, Lo YC. Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:1876-1886. [PMID: 28436853 DOI: 10.1109/tmi.2017.2695227] [Citation(s) in RCA: 217] [Impact Index Per Article: 27.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between the lesion and the surrounding skin, the irregular and fuzzy lesion borders, the presence of various artifacts, and varying image acquisition conditions. In this paper, we present a fully automatic method for skin lesion segmentation that leverages a 19-layer deep convolutional neural network trained end-to-end, without relying on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data. Furthermore, we design a novel loss function based on the Jaccard distance to eliminate the need for sample re-weighting, a typical procedure when cross entropy is used as the loss function for image segmentation because of the strong imbalance between the numbers of foreground and background pixels. We evaluated the effectiveness, efficiency, and generalization capability of the proposed framework on two publicly available databases: the ISBI 2016 skin lesion analysis towards melanoma detection challenge dataset and the PH2 database. Experimental results show that the proposed method outperformed other state-of-the-art algorithms on both databases. Our method is general and requires only minimal pre- and post-processing, which allows its adoption in a variety of medical image segmentation tasks.
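A minimal sketch of a soft Jaccard-distance loss in the spirit of the one described above; the smoothing constant and the exact reduction over the batch are assumptions.

```python
# Soft Jaccard-distance loss for binary segmentation. Because the measure is
# already normalized by the foreground size, no per-pixel re-weighting is
# needed to handle the foreground/background imbalance.
import tensorflow as tf

def jaccard_distance_loss(y_true, y_pred, eps=1e-7):
    """y_true, y_pred: tensors of shape (batch, H, W, 1) with values in [0, 1]."""
    axes = (1, 2, 3)
    intersection = tf.reduce_sum(y_true * y_pred, axis=axes)
    union = (tf.reduce_sum(y_true, axis=axes)
             + tf.reduce_sum(y_pred, axis=axes) - intersection)
    jaccard = (intersection + eps) / (union + eps)
    return tf.reduce_mean(1.0 - jaccard)   # Jaccard distance, averaged over batch

# Usage: model.compile(optimizer="adam", loss=jaccard_distance_loss)
```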
Collapse
|
1836
|
Transfer Learning with Deep Convolutional Neural Network for SAR Target Classification with Limited Labeled Data. REMOTE SENSING 2017. [DOI: 10.3390/rs9090907] [Citation(s) in RCA: 85] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
1837
|
Zhou X, Takayama R, Wang S, Hara T, Fujita H. Deep learning of the sectional appearances of 3D CT images for anatomical structure segmentation based on an FCN voting method. Med Phys 2017; 44:5221-5233. [PMID: 28730602 DOI: 10.1002/mp.12480] [Citation(s) in RCA: 83] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2016] [Revised: 07/03/2017] [Accepted: 07/10/2017] [Indexed: 12/31/2022] Open
Abstract
PURPOSE We propose a single network trained by pixel-to-label deep learning to address the general issue of automatic multiple organ segmentation in three-dimensional (3D) computed tomography (CT) images. Our method can be described as a voxel-wise multiple-class classification scheme for automatically assigning labels to each pixel/voxel in a 2D/3D CT image. METHODS We simplify the segmentation algorithms of anatomical structures (including multiple organs) in a CT image (generally in 3D) to a majority voting scheme over the semantic segmentation of multiple 2D slices drawn from different viewpoints with redundancy. The proposed method inherits the spirit of fully convolutional networks (FCNs) that consist of "convolution" and "deconvolution" layers for 2D semantic image segmentation, and expands the core structure with 3D-2D-3D transformations to adapt to 3D CT image segmentation. All parameters in the proposed network are trained pixel-to-label from a small number of CT cases with human annotations as the ground truth. The proposed network naturally fulfills the requirements of multiple organ segmentations in CT cases of different sizes that cover arbitrary scan regions without any adjustment. RESULTS The proposed network was trained and validated using the simultaneous segmentation of 19 anatomical structures in the human torso, including 17 major organs and two special regions (lumen and content inside of stomach). Some of these structures have never been reported in previous research on CT segmentation. A database consisting of 240 (95% for training and 5% for testing) 3D CT scans, together with their manually annotated ground-truth segmentations, was used in our experiments. The results show that the 19 structures of interest were segmented with acceptable accuracy (88.1% and 87.9% voxels in the training and testing datasets, respectively, were labeled correctly) against the ground truth. CONCLUSIONS We propose a single network based on pixel-to-label deep learning to address the challenging issue of anatomical structure segmentation in 3D CT cases. The novelty of this work is the policy of deep learning of the different 2D sectional appearances of 3D anatomical structures for CT cases and the majority voting of the 3D segmentation results from multiple crossed 2D sections to achieve availability and reliability with better efficiency, generality, and flexibility than conventional segmentation methods, which must be guided by human expertise.
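The voting idea can be sketched as follows, assuming a trained 2D segmentation function is available (here a placeholder `segment_slice`); the redundancy and 3D-2D-3D transformation details of the actual network are omitted.

```python
# Sketch: run a 2D semantic-segmentation model slice-by-slice along each axis
# of a 3D CT volume, re-stack the per-slice label maps into 3D, and take a
# per-voxel majority vote across the three sectional directions.
import numpy as np

def segment_volume_along_axis(volume, segment_slice, axis):
    """volume: (Z, Y, X) array; segment_slice: 2D image -> 2D label map."""
    slices = np.moveaxis(volume, axis, 0)
    labels = np.stack([segment_slice(s) for s in slices], axis=0)
    return np.moveaxis(labels, 0, axis)

def majority_vote_segmentation(volume, segment_slice, n_classes):
    votes = np.zeros(volume.shape + (n_classes,), dtype=np.int32)
    for axis in (0, 1, 2):   # axial, coronal, sagittal sections
        labels = segment_volume_along_axis(volume, segment_slice, axis)
        for c in range(n_classes):
            votes[..., c] += (labels == c)
    return votes.argmax(axis=-1)   # per-voxel majority label
```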
Collapse
Affiliation(s)
- Xiangrong Zhou
- Department of Intelligent Image Information, Graduate School of Medicine, Gifu University, Gifu, 501-1194, Japan
| | - Ryosuke Takayama
- Department of Intelligent Image Information, Graduate School of Medicine, Gifu University, Gifu, 501-1194, Japan
| | - Song Wang
- Department of Computer Science and Engineering, University of South Carolina, Columbia, SC, 29208, USA
| | - Takeshi Hara
- Department of Intelligent Image Information, Graduate School of Medicine, Gifu University, Gifu, 501-1194, Japan
| | - Hiroshi Fujita
- Department of Intelligent Image Information, Graduate School of Medicine, Gifu University, Gifu, 501-1194, Japan
| |
Collapse
|
1838
|
Bladder Cancer Treatment Response Assessment in CT using Radiomics with Deep-Learning. Sci Rep 2017; 7:8738. [PMID: 28821822 PMCID: PMC5562694 DOI: 10.1038/s41598-017-09315-w] [Citation(s) in RCA: 109] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2016] [Accepted: 07/18/2017] [Indexed: 02/06/2023] Open
Abstract
Cross-sectional X-ray imaging has become the standard for staging most solid organ malignancies. However, for some malignancies such as urinary bladder cancer, the ability to accurately assess the local extent of disease and the response to systemic chemotherapy is limited with current imaging approaches. In this study, we explored whether radiomics-based predictive models using pre- and post-treatment computed tomography (CT) images can distinguish between bladder cancers with and without complete chemotherapy responses. We assessed three radiomics-based predictive models, each built on different design principles: a pattern-recognition method based on a deep-learning convolutional neural network (DL-CNN), a more deterministic radiomics feature-based approach, and a bridging method between the two that extracts radiomics features from the learned image patterns. Our study indicates that computerized assessment using radiomics information from the pre- and post-treatment CT of bladder cancer patients has the potential to assist in the assessment of treatment response.
Collapse
|
1839
|
Antropova N, Huynh BQ, Giger ML. A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets. Med Phys 2017; 44:5162-5171. [PMID: 28681390 DOI: 10.1002/mp.12453] [Citation(s) in RCA: 215] [Impact Index Per Article: 26.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2017] [Revised: 06/12/2017] [Accepted: 06/25/2017] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Deep learning methods for radiomics/computer-aided diagnosis (CADx) are often hindered by small datasets, long computation times, and the need for extensive image preprocessing. AIMS We aim to develop a breast CADx methodology that addresses these issues by exploiting the efficiency of pre-trained convolutional neural networks (CNNs) and using pre-existing handcrafted CADx features. MATERIALS & METHODS We present a methodology that extracts and pools low- to mid-level features using a pretrained CNN and fuses them with handcrafted radiomic features computed using conventional CADx methods. Our methodology is tested on three different clinical imaging modalities (dynamic contrast-enhanced MRI [690 cases], full-field digital mammography [245 cases], and ultrasound [1125 cases]). RESULTS From ROC analysis, our fusion-based method demonstrates, on all three imaging modalities, statistically significant improvements in AUC compared with previous breast cancer CADx methods in the task of distinguishing between malignant and benign lesions (DCE-MRI: AUC = 0.89 [se = 0.01]; FFDM: AUC = 0.86 [se = 0.01]; ultrasound: AUC = 0.90 [se = 0.01]). DISCUSSION/CONCLUSION We propose a novel breast CADx methodology that characterizes breast lesions more effectively than existing methods. Furthermore, the proposed methodology is computationally efficient and circumvents the need for image preprocessing.
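A rough sketch of the fusion idea under stated assumptions: feature maps from several depths of a pretrained CNN (VGG19 and the listed layer names are illustrative choices) are average-pooled, concatenated with handcrafted radiomic features, and fed to a conventional classifier.

```python
# Hypothetical fusion sketch: pooled multi-depth CNN features + handcrafted
# radiomic features -> SVM classifier.
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.models import Model
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def pooled_cnn_features(images,
                        layer_names=("block2_pool", "block3_pool", "block4_pool")):
    base = VGG19(weights="imagenet", include_top=False)
    extractor = Model(base.input, [base.get_layer(n).output for n in layer_names])
    outputs = extractor.predict(images, verbose=0)
    # Global-average-pool each selected feature map, then concatenate.
    return np.concatenate([o.mean(axis=(1, 2)) for o in outputs], axis=1)

def fused_classifier(images, handcrafted, labels):
    """handcrafted: (N, F) conventional CADx features for the same lesions."""
    fused = np.concatenate([pooled_cnn_features(images), handcrafted], axis=1)
    clf = make_pipeline(StandardScaler(), SVC(probability=True))
    return clf.fit(fused, labels)
```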
Collapse
Affiliation(s)
- Natalia Antropova
- Department of Radiology, University of Chicago, 5841 S Maryland Ave., Chicago, IL, 60637, USA
| | - Benjamin Q Huynh
- Department of Radiology, University of Chicago, 5841 S Maryland Ave., Chicago, IL, 60637, USA
| | - Maryellen L Giger
- Department of Radiology, University of Chicago, 5841 S Maryland Ave., Chicago, IL, 60637, USA
| |
Collapse
|
1840
|
Song Q, Zhao L, Luo X, Dou X. Using Deep Learning for Classification of Lung Nodules on Computed Tomography Images. JOURNAL OF HEALTHCARE ENGINEERING 2017; 2017:8314740. [PMID: 29065651 PMCID: PMC5569872 DOI: 10.1155/2017/8314740] [Citation(s) in RCA: 131] [Impact Index Per Article: 16.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/10/2017] [Accepted: 05/14/2017] [Indexed: 12/28/2022]
Abstract
Lung cancer is the most common cancer and is frequently fatal when care is sought late. CT can help doctors detect lung cancer in its early stages, but in many cases the diagnosis depends on the experience of individual doctors, so some patients may be missed. Deep learning has proven to be a popular and powerful method in many areas of medical imaging diagnosis. In this paper, three types of deep neural networks (CNN, DNN, and SAE) are designed for lung nodule classification. These networks are applied to the CT image classification task, with some modifications for distinguishing benign from malignant lung nodules, and are evaluated on the LIDC-IDRI database. The experimental results show that the CNN achieved the best performance of the three networks, with an accuracy of 84.15%, sensitivity of 83.96%, and specificity of 84.32%.
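As a rough illustration of the patch-based CNN branch of such a comparison, here is a minimal Keras sketch; the layer sizes, input patch size, and training settings are assumptions and not the architecture used in the paper.

```python
# Minimal binary benign/malignant nodule-patch classifier (illustrative only).
from tensorflow.keras import layers, models

def build_nodule_cnn(input_shape=(64, 64, 1)):   # assumed 64x64 CT patches
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),    # malignant probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```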
Collapse
Affiliation(s)
- QingZeng Song
- School of Computer Science & Software Engineering, Tianjin Polytechnics University, Tianjin, China
| | - Lei Zhao
- School of Computer Science & Software Engineering, Tianjin Polytechnics University, Tianjin, China
| | - XingKe Luo
- School of Computer Science & Software Engineering, Tianjin Polytechnics University, Tianjin, China
| | - XueChen Dou
- School of Computer Science & Software Engineering, Tianjin Polytechnics University, Tianjin, China
| |
Collapse
|
1841
|
Lopes UK, Valiati JF. Pre-trained convolutional neural networks as feature extractors for tuberculosis detection. Comput Biol Med 2017; 89:135-143. [PMID: 28800442 DOI: 10.1016/j.compbiomed.2017.08.001] [Citation(s) in RCA: 100] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2017] [Revised: 08/01/2017] [Accepted: 08/01/2017] [Indexed: 02/07/2023]
Abstract
It is estimated that in 2015 approximately 1.8 million people infected with tuberculosis died, most of them in developing countries. Many of those deaths could have been prevented had the disease been detected at an earlier stage, but the most advanced diagnostic methods remain cost-prohibitive for mass adoption. One of the most common tuberculosis diagnosis methods is the analysis of frontal thoracic radiographs; however, its impact is diminished by the need for each radiograph to be analyzed individually by properly trained radiologists. Significant research exists on automating diagnosis by applying computational techniques to medical images, thereby eliminating the need for individual image analysis and greatly reducing overall costs. In addition, recent advances in deep learning have achieved excellent results in classifying images across diverse domains, but their application to tuberculosis diagnosis remains limited. This work therefore presents three proposals for applying pre-trained convolutional neural networks as feature extractors to detect the disease. The proposals are implemented and compared with the current literature. The results obtained are competitive with published work, demonstrating the potential of pre-trained convolutional networks as medical image feature extractors.
Collapse
Affiliation(s)
- U K Lopes
- DevGrid, 482, Italia Avenue, Caxias do Sul, RS, Brazil
| | - J F Valiati
- Artificial Intelligence Engineers - AIE, 262, Vieira de Castro Street, Porto Alegre, RS, Brazil.
| |
Collapse
|
1842
|
Song Y, Li Q, Huang H, Feng D, Chen M, Cai W. Low Dimensional Representation of Fisher Vectors for Microscopy Image Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:1636-1649. [PMID: 28358678 DOI: 10.1109/tmi.2017.2687466] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Microscopy image classification is important in various biomedical applications, such as cancer subtype identification and protein localization for high-content screening. To achieve automated and effective microscopy image classification, the representative and discriminative capability of image feature descriptors is essential. To this end, in this paper we propose a new feature representation algorithm to facilitate automated microscopy image classification. In particular, we incorporate Fisher vector (FV) encoding with multiple types of local features that are handcrafted or learned, and we design a separation-guided dimension reduction method that reduces the descriptor dimension while increasing its discriminative capability. Our method is evaluated on four publicly available microscopy image data sets of different imaging types and applications: the UCSB breast cancer data set, the MICCAI 2015 CBTC challenge data set, and the IICBU malignant lymphoma and RNAi data sets. Our experimental results demonstrate the advantage of the proposed low-dimensional FV representation, showing consistent performance improvement over the existing state of the art and over commonly used dimension reduction techniques.
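A simplified Fisher-vector encoder is sketched below, assuming a diagonal-covariance GMM codebook and keeping only the gradients with respect to the means; the paper's separation-guided dimension reduction step is omitted.

```python
# Simplified Fisher-vector encoding of local descriptors (means-gradient only).
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_codebook(all_descriptors, n_components=16):
    """all_descriptors: (M, D) local features pooled from the training images."""
    return GaussianMixture(n_components, covariance_type="diag").fit(all_descriptors)

def fisher_vector(descriptors, gmm):
    """descriptors: (N, D) local features from one image -> (K*D,) encoding."""
    q = gmm.predict_proba(descriptors)                    # (N, K) soft assignments
    diff = (descriptors[:, None, :] - gmm.means_[None]) / np.sqrt(gmm.covariances_)[None]
    grad_mu = (q[..., None] * diff).sum(axis=0)           # (K, D) gradient w.r.t. means
    grad_mu /= descriptors.shape[0] * np.sqrt(gmm.weights_)[:, None]
    fv = grad_mu.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)              # L2 normalization
```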
Collapse
|
1843
|
Akkus Z, Galimzianova A, Hoogi A, Rubin DL, Erickson BJ. Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions. J Digit Imaging 2017; 30:449-459. [PMID: 28577131 PMCID: PMC5537095 DOI: 10.1007/s10278-017-9983-4] [Citation(s) in RCA: 472] [Impact Index Per Article: 59.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.
Collapse
Affiliation(s)
- Zeynettin Akkus
- Radiology Informatics Lab, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA
| | - Alfiia Galimzianova
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| | - Assaf Hoogi
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| | - Daniel L Rubin
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
| | - Bradley J Erickson
- Radiology Informatics Lab, Mayo Clinic, 200 First Street SW, Rochester, MN, 55905, USA.
| |
Collapse
|
1844
|
Lee H, Tajmir S, Lee J, Zissen M, Yeshiwas BA, Alkasab TK, Choy G, Do S. Fully Automated Deep Learning System for Bone Age Assessment. J Digit Imaging 2017; 30:427-441. [PMID: 28275919 PMCID: PMC5537090 DOI: 10.1007/s10278-017-9955-8] [Citation(s) in RCA: 198] [Impact Index Per Article: 24.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
Skeletal maturity progresses through discrete phases, a fact used routinely in pediatrics, where bone age assessments (BAAs) are compared to chronological age in the evaluation of endocrine and metabolic disorders. While central to many disease evaluations, little has changed to improve this tedious process since its introduction in 1950. In this study, we propose a fully automated deep learning pipeline to segment a region of interest, standardize and preprocess input radiographs, and perform BAA. Our models use an ImageNet-pretrained, fine-tuned convolutional neural network (CNN) to achieve 57.32% and 61.40% accuracies for the female and male cohorts on our held-out test images. Female test radiographs were assigned a BAA within 1 year 90.39% of the time and within 2 years 98.11% of the time; male test radiographs were assigned a BAA within 1 year 94.18% of the time and within 2 years 99.00% of the time. Using the input occlusion method, attention maps were created that reveal which features the trained model uses to perform BAA; these correspond to what human experts look at when manually performing BAA. Finally, the fully automated BAA system was deployed in the clinical environment as a decision support system, providing more accurate and efficient BAAs with a much faster interpretation time (<2 s) than the conventional method.
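The input occlusion method mentioned above can be sketched as follows, assuming a Keras-style classifier that outputs class probabilities; patch size, stride, and fill value are illustrative choices.

```python
# Occlusion-based attention map: slide a grey patch over the radiograph, record
# how much the predicted probability for the assigned bone-age class drops, and
# use the drop as a per-location importance score.
import numpy as np

def occlusion_map(model, image, target_class, patch=32, stride=16, fill=0.5):
    """image: (H, W, C) array scaled to the model's input range."""
    h, w = image.shape[:2]
    base = model.predict(image[None], verbose=0)[0][target_class]
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            prob = model.predict(occluded[None], verbose=0)[0][target_class]
            heat[i, j] = base - prob      # large drop = important region
    return heat
```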
Collapse
Affiliation(s)
- Hyunkwang Lee
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Shahein Tajmir
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Jenny Lee
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Maurice Zissen
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Bethel Ayele Yeshiwas
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Tarik K. Alkasab
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Garry Choy
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| | - Synho Do
- Massachusetts General Hospital and Harvard Medical School, Radiology, 25 New Chardon Street, Suite 400B, Boston, MA 02114 USA
| |
Collapse
|
1845
|
|
1846
|
Mankodi A, Kovacs W, Norato G, Hsieh N, Bandettini WP, Bishop CA, Shimellis H, Newbould RD, Kim E, Fischbeck KH, Arai AE, Yao J. Respiratory magnetic resonance imaging biomarkers in Duchenne muscular dystrophy. Ann Clin Transl Neurol 2017; 4:655-662. [PMID: 28904987 PMCID: PMC5590523 DOI: 10.1002/acn3.440] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2017] [Accepted: 06/28/2017] [Indexed: 02/04/2023] Open
Abstract
OBJECTIVE To examine the diaphragm and chest wall dynamics with cine breathing magnetic resonance imaging (MRI) in ambulatory boys with Duchenne muscular dystrophy (DMD) without respiratory symptoms and controls. METHODS In 11 DMD boys and 15 controls, cine MRI of maximal breathing was recorded for 10 sec. The lung segmentations were done by an automated pipeline based on a Holistically-Nested Network model (HNN method). Lung areas, diaphragm, and chest wall motion were measured throughout the breathing cycle. RESULTS The HNN method reliably identified the contours of the lung and the diaphragm in every frame of each dataset (~180 frames) within seconds. The lung areas at maximal inspiration and expiration were reduced in DMD patients relative to controls (P = 0.02 and <0.01, respectively). The change in the lung area between inspiration and expiration correlated with percent predicted forced vital capacity (FVC) in patients (rs = 0.75, P = 0.03) and was not significantly different between groups. The diaphragm position, length, contractility, and motion were not significantly different between groups. Chest wall motion was reduced in patients compared to controls (P < 0.01). INTERPRETATION Cine breathing MRI allows independent and reliable assessment of the diaphragm and chest wall dynamics during the breathing cycle in DMD patients and controls. The MRI data indicate that ambulatory DMD patients breathe at lower lung volumes than controls when their FVC is in the normal range. The diaphragm moves normally, whereas chest wall motion is reduced in these boys with DMD.
Collapse
Affiliation(s)
- Ami Mankodi
- Neurogenetics Branch National Institute of Neurological Disorders and Stroke National Institutes of Health Bethesda Maryland
| | - William Kovacs
- Radiology and Imaging Sciences The National Institutes of Health Clinical Center Bethesda Maryland
| | - Gina Norato
- Office of Biostatistics National Institute of Neurological Disorders and Stroke National Institutes of Health Bethesda Maryland
| | - Nathan Hsieh
- Radiology and Imaging Sciences The National Institutes of Health Clinical Center Bethesda Maryland
| | - W Patricia Bandettini
- Advanced Cardiovascular Imaging National Heart Lung and Blood Institute National Institutes of Health Bethesda Maryland
| | - Courtney A Bishop
- Imanova Center for Imaging Sciences Imperial College London Hammersmith Hospital London United Kingdom
| | - Hirity Shimellis
- Neurogenetics Branch National Institute of Neurological Disorders and Stroke National Institutes of Health Bethesda Maryland
| | - Rexford D Newbould
- Imanova Center for Imaging Sciences Imperial College London Hammersmith Hospital London United Kingdom
| | - Eunhee Kim
- Office of Biostatistics National Institute of Neurological Disorders and Stroke National Institutes of Health Bethesda Maryland
| | - Kenneth H Fischbeck
- Neurogenetics Branch National Institute of Neurological Disorders and Stroke National Institutes of Health Bethesda Maryland
| | - Andrew E Arai
- Advanced Cardiovascular Imaging National Heart Lung and Blood Institute National Institutes of Health Bethesda Maryland
| | - Jianhua Yao
- Radiology and Imaging Sciences The National Institutes of Health Clinical Center Bethesda Maryland
| |
Collapse
|
1847
|
Le MH, Chen J, Wang L, Wang Z, Liu W, Cheng KTT, Yang X. Automated diagnosis of prostate cancer in multi-parametric MRI based on multimodal convolutional neural networks. Phys Med Biol 2017; 62:6497-6514. [PMID: 28582269 DOI: 10.1088/1361-6560/aa7731] [Citation(s) in RCA: 93] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
Automated methods for prostate cancer (PCa) diagnosis in multi-parametric magnetic resonance imaging (MP-MRIs) are critical for alleviating requirements for interpretation of radiographs while helping to improve diagnostic accuracy (Artan et al 2010 IEEE Trans. Image Process. 19 2444-55, Litjens et al 2014 IEEE Trans. Med. Imaging 33 1083-92, Liu et al 2013 SPIE Medical Imaging (International Society for Optics and Photonics) p 86701G, Moradi et al 2012 J. Magn. Reson. Imaging 35 1403-13, Niaf et al 2014 IEEE Trans. Image Process. 23 979-91, Niaf et al 2012 Phys. Med. Biol. 57 3833, Peng et al 2013a SPIE Medical Imaging (International Society for Optics and Photonics) p 86701H, Peng et al 2013b Radiology 267 787-96, Wang et al 2014 BioMed. Res. Int. 2014). This paper presents an automated method based on multimodal convolutional neural networks (CNNs) for two PCa diagnostic tasks: (1) distinguishing between cancerous and noncancerous tissues and (2) distinguishing between clinically significant (CS) and indolent PCa. Specifically, our multimodal CNNs effectively fuse apparent diffusion coefficients (ADCs) and T2-weighted MP-MRI images (T2WIs). To effectively fuse ADCs and T2WIs we design a new similarity loss function to enforce consistent features being extracted from both ADCs and T2WIs. The similarity loss is combined with the conventional classification loss functions and integrated into the back-propagation procedure of CNN training. The similarity loss enables better fusion results than existing methods as the feature learning processes of both modalities are mutually guided, jointly facilitating CNN to 'see' the true visual patterns of PCa. The classification results of multimodal CNNs are further combined with the results based on handcrafted features using a support vector machine classifier. To achieve a satisfactory accuracy for clinical use, we comprehensively investigate three critical factors which could greatly affect the performance of our multimodal CNNs but have not been carefully studied previously. (1) Given limited training data, how can these be augmented in sufficient numbers and variety for fine-tuning deep CNN networks for PCa diagnosis? (2) How can multimodal MP-MRI information be effectively combined in CNNs? (3) What is the impact of different CNN architectures on the accuracy of PCa diagnosis? Experimental results on extensive clinical data from 364 patients with a total of 463 PCa lesions and 450 identified noncancerous image patches demonstrate that our system can achieve a sensitivity of 89.85% and a specificity of 95.83% for distinguishing cancer from noncancerous tissues and a sensitivity of 100% and a specificity of 76.92% for distinguishing indolent PCa from CS PCa. This result is significantly superior to the state-of-the-art method relying on handcrafted features.
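The combination of a similarity loss with the classification losses can be sketched as below, assuming each modality (ADC, T2WI) has its own CNN branch producing a feature embedding and class logits; the L2 form of the similarity term and the weighting factor are assumptions, not the paper's exact design.

```python
# Joint objective for a two-branch multimodal CNN: per-modality classification
# losses plus a similarity term that pulls the ADC and T2WI embeddings together,
# encouraging consistent features to be extracted from both modalities.
import tensorflow as tf

def multimodal_loss(y_true, logits_adc, logits_t2, feat_adc, feat_t2, lam=0.1):
    ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    cls_loss = ce(y_true, logits_adc) + ce(y_true, logits_t2)
    # Similarity loss: mean squared distance between the two modality embeddings.
    sim_loss = tf.reduce_mean(tf.reduce_sum(tf.square(feat_adc - feat_t2), axis=-1))
    return cls_loss + lam * sim_loss
```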
Collapse
Affiliation(s)
- Minh Hung Le
- School of Electronics and Communications, Huazhong University of Science and Technology, Wuhan, People's Republic of China
| | | | | | | | | | | | | |
Collapse
|
1848
|
Jamaludin A, Kadir T, Zisserman A. SpineNet: Automated classification and evidence visualization in spinal MRIs. Med Image Anal 2017; 41:63-73. [PMID: 28756059 DOI: 10.1016/j.media.2017.07.002] [Citation(s) in RCA: 91] [Impact Index Per Article: 11.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2017] [Revised: 07/19/2017] [Accepted: 07/20/2017] [Indexed: 12/28/2022]
Abstract
The objective of this work is to automatically produce radiological gradings of lumbar spinal MRIs and to localize the predicted pathologies. We show that this can be achieved with a Convolutional Neural Network (CNN) framework that takes intervertebral disc volumes as inputs and is trained only on disc-specific class labels. Our contributions are: (i) a CNN architecture that predicts multiple gradings at once, together with variants of the architecture including 3D convolutions; (ii) showing that this architecture can be trained using a multi-task loss function without requiring segmentation-level annotation; and (iii) a localization method that clearly shows pathological regions in the disc volumes. We compare three visualization methods for the localization. The network is applied to a large corpus of T2-weighted sagittal lumbar spinal MRIs (acquired with a standard clinical scan protocol) from multiple machines, and is used to automatically compute disc and vertebra gradings for each MRI: Pfirrmann grading, disc narrowing, upper/lower endplate defects, upper/lower marrow changes, spondylolisthesis, and central canal stenosis. We report near-human performance across the eight gradings, and we also visualize the evidence for these gradings localized on the original scans.
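A minimal multi-task sketch under stated assumptions: a shared trunk over a stacked disc volume with one softmax head per grading, trained with a summed cross-entropy loss. The trunk depth, input shape, and the gradings/class counts listed are illustrative only.

```python
# Multi-task grading sketch: one shared CNN trunk, one classification head per
# radiological grading; Keras sums the per-head losses into a single objective.
from tensorflow.keras import layers, models

GRADINGS = {"pfirrmann": 5, "disc_narrowing": 4, "spondylolisthesis": 2}

def build_spine_multitask(input_shape=(112, 112, 9)):   # assumed stacked disc slices
    x_in = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.GlobalAveragePooling2D()(x)
    heads = {name: layers.Dense(k, activation="softmax", name=name)(x)
             for name, k in GRADINGS.items()}
    model = models.Model(x_in, heads)
    model.compile(optimizer="adam",
                  loss={name: "sparse_categorical_crossentropy" for name in GRADINGS})
    return model
```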
Collapse
Affiliation(s)
- Amir Jamaludin
- VGG, Department of Engineering Science, University of Oxford, Oxford, UK.
| | | | - Andrew Zisserman
- VGG, Department of Engineering Science, University of Oxford, Oxford, UK
| |
Collapse
|
1849
|
Jiang H, Ma H, Qian W, Gao M, Li Y. An Automatic Detection System of Lung Nodule Based on Multigroup Patch-Based Deep Learning Network. IEEE J Biomed Health Inform 2017; 22:1227-1237. [PMID: 28715341 DOI: 10.1109/jbhi.2017.2725903] [Citation(s) in RCA: 102] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
High-efficiency lung nodule detection contributes substantially to the risk assessment of lung cancer, and quickly locating the exact positions of lung nodules remains a significant and challenging task. Extensive work has been done in this domain for approximately two decades. However, previous computer-aided detection (CADe) schemes are mostly intricate and time-consuming, since they may require several image processing modules, such as computed tomography image transformation, lung nodule segmentation, and feature extraction, to construct a complete CADe system. It is difficult for these schemes to process and analyze the ever-growing volume of medical images, and some state-of-the-art deep learning schemes impose strict requirements on the underlying database. This study proposes an effective lung nodule detection scheme based on multigroup patches cut from the lung images and enhanced with the Frangi filter. By combining the two groups of images, a four-channel convolutional neural network model is designed to learn the knowledge of radiologists for detecting nodules at four levels. The CADe scheme achieves a sensitivity of 80.06% with 4.7 false positives per scan and a sensitivity of 94% with 15.1 false positives per scan. The results demonstrate that the multigroup patch-based learning system efficiently improves lung nodule detection performance and greatly reduces false positives on large amounts of image data.
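One plausible way to assemble a multi-channel input from Frangi-enhanced patches is sketched below; the paper's exact channel definitions and patch groups may differ, so treat the filter scales and stacking as assumptions.

```python
# Illustrative multi-channel input: the raw CT patch plus Frangi-filtered
# (vesselness-enhanced) versions at different scales stacked into four channels.
import numpy as np
from skimage.filters import frangi

def four_channel_patch(ct_patch):
    """ct_patch: (H, W) float array of a candidate lung-nodule region."""
    channels = [
        ct_patch,
        frangi(ct_patch),                          # default scales
        frangi(ct_patch, sigmas=range(1, 4)),      # finer structures
        frangi(ct_patch, sigmas=range(4, 10)),     # coarser structures
    ]
    return np.stack(channels, axis=-1)             # (H, W, 4) CNN input
```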
Collapse
|
1850
|
Deep Learning based Radiomics (DLR) and its usage in noninvasive IDH1 prediction for low grade glioma. Sci Rep 2017; 7:5467. [PMID: 28710497 PMCID: PMC5511238 DOI: 10.1038/s41598-017-05848-2] [Citation(s) in RCA: 188] [Impact Index Per Article: 23.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2016] [Accepted: 05/25/2017] [Indexed: 02/06/2023] Open
Abstract
Deep learning-based radiomics (DLR) was developed to extract deep information from multiple modalities of magnetic resonance (MR) images. The performance of DLR for predicting the mutation status of isocitrate dehydrogenase 1 (IDH1) was validated on a dataset of 151 patients with low-grade glioma. A modified convolutional neural network (CNN) structure with 6 convolutional layers and a fully connected layer of 4096 neurons was used to segment tumors. Instead of calculating image features from segmented images, as is typically done in normal radiomics approaches, image features were obtained by normalizing the information of the last convolutional layers of the CNN. A Fisher vector was used to encode the CNN features from image slices of different sizes. High-throughput features with dimensionality greater than 1.6 × 10^4 were obtained from the CNN. Paired t-tests and F-scores were used to select CNN features able to discriminate IDH1 status. On the same dataset, the area under the receiver operating characteristic curve (AUC) of the normal radiomics method was 86% for IDH1 estimation, whereas for DLR the AUC was 92%. The AUC of IDH1 estimation was further improved to 95% using DLR based on multiple-modality MR images. DLR could be a powerful way to extract deep information from medical images.
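The univariate feature-screening step can be sketched as follows, assuming scikit-learn's ANOVA F-score selector in front of a simple classifier; the number of retained features and the downstream classifier are assumptions.

```python
# F-score screening of high-dimensional DLR features before an IDH1 classifier.
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def idh1_pipeline(k_features=512):
    """Return a pipeline: F-score screening of DLR features, then logistic regression."""
    return make_pipeline(
        SelectKBest(score_func=f_classif, k=k_features),   # keep top-k features
        LogisticRegression(max_iter=1000),
    )

# Usage (illustrative names):
#   model = idh1_pipeline().fit(train_features, train_labels)
#   scores = model.predict_proba(test_features)[:, 1]      # for ROC/AUC analysis
```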
Collapse
|