451
Gandomkar Z, Brennan PC, Mello-Thoms C. MuDeRN: Multi-category classification of breast histopathological image using deep residual networks. Artif Intell Med 2018;88:14-24. [PMID: 29705552] [DOI: 10.1016/j.artmed.2018.04.005]
Abstract
MOTIVATION Identifying the carcinoma subtype can help select appropriate treatment options, and determining the subtype of benign lesions can help estimate a patient's risk of developing cancer in the future. Pathologists' assessment of lesion subtypes is considered the gold standard; however, strong disagreements among pathologists in distinguishing lesion subtypes have been reported in the literature. OBJECTIVE To propose a framework for classifying hematoxylin-eosin stained breast digital slides as either benign or cancer, and then categorizing cancer and benign cases into four subtypes each. MATERIALS AND METHODS We used data from a publicly available database (BreakHis) of 81 patients, where each patient had images available at four magnification factors (×40, ×100, ×200, and ×400), for a total of 7786 images. The proposed framework, called MuDeRN (MUlti-category classification of breast histopathological image using DEep Residual Networks), consists of two stages. In the first stage, for each magnification factor, a deep residual network (ResNet) with 152 layers was trained to classify patches from the images as benign or malignant. In the next stage, the images classified as malignant were subdivided into four cancer subcategories, and those categorized as benign were classified into four subtypes. Finally, the diagnosis for each patient was made by combining the ResNets' outputs across the different magnification factors using a meta-decision tree. RESULTS For the malignant/benign classification of images, MuDeRN's first stage achieved correct classification rates (CCR) of 98.52%, 97.90%, 98.33%, and 97.66% at the ×40, ×100, ×200, and ×400 magnification factors, respectively. For the eight-class categorization of images based on the output of both of MuDeRN's stages, the CCRs at the four magnification factors were 95.40%, 94.90%, 95.70%, and 94.60%. Finally, for making a patient-level diagnosis, MuDeRN achieved a CCR of 96.25% for eight-class categorization. CONCLUSIONS MuDeRN can be helpful in the categorization of breast lesions.
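As a concrete illustration of the first stage described above (one ResNet-152 patch classifier per magnification factor), here is a minimal, hypothetical PyTorch sketch; the torchvision weights API (version 0.13 or later), the 224 x 224 patch size, and the optimizer settings are assumptions of the illustration, not the authors' published configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# One binary patch classifier per magnification factor (x40, x100, x200, x400),
# mirroring MuDeRN's first stage: a 152-layer ResNet fine-tuned on image patches.
def make_patch_classifier(num_classes: int = 2) -> nn.Module:
    net = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, num_classes)  # benign vs. malignant
    return net

classifiers = {mag: make_patch_classifier() for mag in ("x40", "x100", "x200", "x400")}

# One illustrative training step (cross-entropy on patch labels); shapes assumed.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(classifiers["x40"].parameters(), lr=1e-3, momentum=0.9)
patches = torch.randn(8, 3, 224, 224)   # stand-in for H&E patches at x40
labels = torch.randint(0, 2, (8,))      # 0 = benign, 1 = malignant
loss = criterion(classifiers["x40"](patches), labels)
loss.backward()
optimizer.step()
```

The second stage (four-way subtyping within each branch) and the meta-decision tree fusing the four magnification outputs would sit on top of classifiers like these.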
Affiliation(s)
- Ziba Gandomkar
- Image Optimisation and Perception, Discipline of Medical Imaging and Radiation Sciences, Faculty of Health Sciences, University of Sydney, Sydney, NSW, Australia
- Patrick C Brennan
- Image Optimisation and Perception, Discipline of Medical Imaging and Radiation Sciences, Faculty of Health Sciences, University of Sydney, Sydney, NSW, Australia
- Claudia Mello-Thoms
- Image Optimisation and Perception, Discipline of Medical Imaging and Radiation Sciences, Faculty of Health Sciences, University of Sydney, Sydney, NSW, Australia; Department of Biomedical Informatics, School of Medicine, University of Pittsburgh, Pittsburgh, PA, USA
452
Saltz J, Gupta R, Hou L, Kurc T, Singh P, Nguyen V, Samaras D, Shroyer KR, Zhao T, Batiste R, Van Arnam J, Shmulevich I, Rao AUK, Lazar AJ, Sharma A, Thorsson V. Spatial Organization and Molecular Correlation of Tumor-Infiltrating Lymphocytes Using Deep Learning on Pathology Images. Cell Rep 2018;23:181-193.e7. [PMID: 29617659] [PMCID: PMC5943714] [DOI: 10.1016/j.celrep.2018.03.086]
Abstract
Beyond sample curation and basic pathologic characterization, the digitized H&E-stained images of TCGA samples remain underutilized. To highlight this resource, we present mappings of tumor-infiltrating lymphocytes (TILs) based on H&E images from 13 TCGA tumor types. These TIL maps are derived through computational staining using a convolutional neural network trained to classify patches of images. Affinity propagation revealed local spatial structure in TIL patterns and correlation with overall survival. TIL map structural patterns were grouped using standard histopathological parameters. These patterns are enriched in particular T cell subpopulations derived from molecular measures. TIL densities and spatial structure were differentially enriched among tumor types, immune subtypes, and tumor molecular subtypes, implying that spatial infiltrate state could reflect particular tumor cell aberration states. Obtaining spatial lymphocytic patterns linked to the rich genomic characterization of TCGA samples demonstrates one use for the TCGA image archives with insights into the tumor-immune microenvironment.
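The map-assembly step implied by "computational staining" can be illustrated compactly: a CNN scores each slide patch for TIL content, and the per-patch probabilities are arranged on the slide's patch grid. The sketch below shows only that assembly step with stand-in probabilities; the grid size and the 0.5 threshold are assumptions, and the study's actual classifier and post-processing are more involved.

```python
import numpy as np

def til_map_from_patch_probs(probs: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize per-patch TIL probabilities (rows x cols of slide patches)."""
    return (probs >= threshold).astype(np.uint8)

# Stand-in for CNN output over a 50 x 80 patch grid of one whole-slide image.
rng = np.random.default_rng(0)
patch_probs = rng.random((50, 80))
til_map = til_map_from_patch_probs(patch_probs)
print("TIL-positive patch fraction:", til_map.mean())
```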
Affiliation(s)
- Joel Saltz
- Department of Biomedical Informatics, Stony Brook Medicine, Stony Brook, NY 11794, USA
- Rajarsi Gupta
- Department of Biomedical Informatics, Stony Brook Medicine, Stony Brook, NY 11794, USA; Department of Pathology, Stony Brook Medicine, Stony Brook, NY 11794, USA
- Le Hou
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, USA
- Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook Medicine, Stony Brook, NY 11794, USA
- Pankaj Singh
- Department of Bioinformatics and Computational Biology, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Vu Nguyen
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, USA
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, USA
- Kenneth R Shroyer
- Department of Pathology, Stony Brook Medicine, Stony Brook, NY 11794, USA
- Tianhao Zhao
- Department of Pathology, Stony Brook Medicine, Stony Brook, NY 11794, USA
- Rebecca Batiste
- Department of Pathology, Stony Brook Medicine, Stony Brook, NY 11794, USA
- John Van Arnam
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA 19104, USA
- Arvind U K Rao
- Department of Bioinformatics and Computational Biology, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Alexander J Lazar
- Departments of Pathology, Genomic Medicine, and Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Ashish Sharma
- Department of Biomedical Informatics, Emory University, Atlanta, GA 30322, USA
453
Hermessi H, Mourali O, Zagrouba E. Convolutional neural network-based multimodal image fusion via similarity learning in the shearlet domain. Neural Comput Appl 2018. [DOI: 10.1007/s00521-018-3441-1]
454
Yang SJ, Berndl M, Michael Ando D, Barch M, Narayanaswamy A, Christiansen E, Hoyer S, Roat C, Hung J, Rueden CT, Shankar A, Finkbeiner S, Nelson P. Assessing microscope image focus quality with deep learning. BMC Bioinformatics 2018. [PMID: 29540156] [PMCID: PMC5853029] [DOI: 10.1186/s12859-018-2087-4]
Abstract
Background Large image datasets acquired on automated microscopes typically have some fraction of low-quality, out-of-focus images, despite the use of hardware autofocus systems. Identifying these images with high accuracy using automated image analysis is important for obtaining a clean, unbiased image dataset. Complicating this task is the fact that image focus quality is only well defined in the foreground regions of images; as a result, most previous approaches only enable a computation of the relative difference in quality between two or more images, rather than an absolute measure of quality. Results We present a deep neural network model capable of predicting an absolute measure of image focus on a single image in isolation, without any user-specified parameters. The model operates at the image-patch level and also outputs a measure of prediction certainty, enabling interpretable predictions. The model was trained on only 384 in-focus Hoechst (nuclei) stain images of U2OS cells, which were synthetically defocused to one of 11 absolute defocus levels during training. The trained model can generalize to previously unseen real Hoechst stain images, identifying the absolute image focus to within one defocus level (approximately 3 pixel blur diameter difference) with 95% accuracy. On a simpler binary in/out-of-focus classification task, the trained model outperforms previous approaches on both Hoechst and Phalloidin (actin) stain images (F-scores of 0.89 and 0.86, versus 0.84 and 0.83 for previous approaches), despite having been presented only Hoechst stain images during training. Lastly, we observe qualitatively that the model generalizes to two additional stains, Hoechst and Tubulin, of an unseen cell type (human MCF-7) acquired on a different instrument. Conclusions Our deep neural network enables classification of out-of-focus microscope images with both higher accuracy and greater precision than previous approaches via interpretable patch-level focus and certainty predictions. The use of synthetically defocused images precludes the need for a manually annotated training dataset. The model also generalizes to different image and cell types. The framework for model training and image prediction is available as a free software library, and the pre-trained model is available for immediate use in Fiji (ImageJ) and CellProfiler.
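The notable training trick here is that labels come from synthetic defocus of in-focus images rather than from manual annotation. A minimal sketch of that idea follows, with Gaussian blur standing in for the defocus kernel; the level-to-sigma mapping is an assumption of the illustration, not the paper's optics model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetically_defocus(image: np.ndarray, level: int, n_levels: int = 11):
    """Blur an in-focus image to one of `n_levels` absolute defocus levels.

    Level 0 returns the image unchanged; higher levels blur more. The
    0.75-per-level sigma scaling below is illustrative only.
    """
    assert 0 <= level < n_levels
    if level == 0:
        return image.copy(), 0
    return gaussian_filter(image, sigma=0.75 * level), level

rng = np.random.default_rng(1)
in_focus = rng.random((84, 84))                  # stand-in for a nuclei-stain patch
blurred_patch, label = synthetically_defocus(in_focus, level=4)
```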
Affiliation(s)
- Mariya Barch
- Taube/Koret Center for Neurodegenerative Disease Research and DaedalusBio, Gladstone, USA
- Jane Hung
- Imaging Platform, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Department of Chemical Engineering, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
- Curtis T Rueden
- Laboratory for Optical and Computational Instrumentation, University of Wisconsin at Madison, Madison, WI, USA
- Steven Finkbeiner
- Taube/Koret Center for Neurodegenerative Disease Research and DaedalusBio, Gladstone, USA; Departments of Neurology and Physiology, University of California, San Francisco, CA, USA
455
Mobadersany P, Yousefi S, Amgad M, Gutman DA, Barnholtz-Sloan JS, Velázquez Vega JE, Brat DJ, Cooper LAD. Predicting cancer outcomes from histology and genomics using convolutional networks. Proc Natl Acad Sci U S A 2018;115:E2970-E2979. [PMID: 29531073] [DOI: 10.1073/pnas.1717139115]
Abstract
Predicting the expected outcome of patients diagnosed with cancer is a critical step in treatment. Advances in genomic and imaging technologies provide physicians with vast amounts of data, yet prognostication remains largely subjective, leading to suboptimal clinical management. We developed a computational approach based on deep learning to predict the overall survival of patients diagnosed with brain tumors from microscopic images of tissue biopsies and genomic biomarkers. This method uses adaptive feedback to simultaneously learn the visual patterns and molecular biomarkers associated with patient outcomes. Our approach surpasses the prognostic accuracy of human experts using the current clinical standard for classifying brain tumors and presents an innovative approach for objective, accurate, and integrated prediction of patient outcomes. Cancer histology reflects underlying molecular processes and disease progression and contains rich phenotypic information that is predictive of patient outcomes. In this study, we show a computational approach for learning patient outcomes from digital pathology images using deep learning to combine the power of adaptive machine learning algorithms with traditional survival models. We illustrate how these survival convolutional neural networks (SCNNs) can integrate information from both histology images and genomic biomarkers into a single unified framework to predict time-to-event outcomes and show prediction accuracy that surpasses the current clinical paradigm for predicting the overall survival of patients diagnosed with glioma. We use statistical sampling techniques to address challenges in learning survival from histology images, including tumor heterogeneity and the need for large training cohorts. We also provide insights into the prediction mechanisms of SCNNs, using heat map visualization to show that SCNNs recognize important structures, like microvascular proliferation, that are related to prognosis and that are used by pathologists in grading. These results highlight the emerging role of deep learning in precision medicine and suggest an expanding utility for computational analysis of histology in the future practice of pathology.
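Training a network against time-to-event data, as the SCNN described here does, is commonly done by minimizing the negative Cox partial log-likelihood over the network's risk scores. The sketch below is a generic PyTorch formulation of that loss for illustration, not the authors' exact implementation.

```python
import torch

def neg_cox_partial_log_likelihood(risk: torch.Tensor,
                                   time: torch.Tensor,
                                   event: torch.Tensor) -> torch.Tensor:
    """Negative Cox partial log-likelihood for network risk scores.

    risk:  (N,) predicted log-hazard per patient (network output)
    time:  (N,) follow-up time
    event: (N,) 1 if the event was observed, 0 if censored
    """
    order = torch.argsort(time, descending=True)     # build risk sets by time
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)     # log-sum over each risk set
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

# Toy usage: risk scores for 5 patients, 3 observed events.
scores = torch.randn(5, requires_grad=True)
loss = neg_cox_partial_log_likelihood(scores,
                                      torch.tensor([2., 5., 3., 8., 1.]),
                                      torch.tensor([1., 0., 1., 0., 1.]))
loss.backward()
```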
456
Saha M, Chakraborty C, Racoceanu D. Efficient deep learning model for mitosis detection using breast histopathology images. Comput Med Imaging Graph 2018;64:29-40. [PMID: 29409716] [DOI: 10.1016/j.compmedimag.2017.12.001]
Abstract
Mitosis detection is one of the critical factors in cancer prognosis, carrying significant diagnostic information required for breast cancer grading. It provides vital clues for estimating the aggressiveness and the proliferation rate of the tumour. Manual mitosis quantification from whole slide images is a very labor-intensive and challenging task. The aim of this study is to propose a supervised model to detect mitosis signatures in breast histopathology whole slide images (WSIs). The model has been designed using a deep learning architecture combined with handcrafted features drawn from the previous medical challenges MITOS @ ICPR 2012 and AMIDA-13 and from project expertise (MICO ANR TecSan). The deep learning architecture mainly consists of five convolution layers, four max-pooling layers, rectified linear unit (ReLU) activations, and two fully connected layers. ReLU has been used after each convolution layer as the activation function, and a dropout layer has been included after the first fully connected layer to avoid overfitting. The handcrafted features mainly consist of morphological, textural, and intensity features. The proposed architecture achieved 92% precision, 88% recall, and a 90% F-score. Prospectively, the proposed model will be very beneficial in routine examinations, providing pathologists with an efficient and effective second opinion for breast cancer grading from whole slide images. This model could also lead junior and senior pathologists, as well as medical researchers, to a better understanding and evaluation of breast cancer stage and genesis.
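The layer inventory given in the abstract (five convolution layers, four max-pooling layers, ReLU activations, two fully connected layers, dropout after the first fully connected layer) maps naturally onto a sequential model. The sketch below follows that inventory; the kernel sizes, channel counts, and 64 x 64 input patch size are assumptions, since the abstract does not specify them.

```python
import torch.nn as nn

# Follows the abstract's inventory: 5 conv layers with ReLU, 4 max-pool layers,
# 2 fully connected layers, dropout after the first FC layer. Expects 3 x 64 x 64
# input patches; all kernel sizes and channel counts are assumed.
mitosis_net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(256 * 4 * 4, 512), nn.ReLU(),
    nn.Dropout(0.5),          # guards against overfitting, as described in the paper
    nn.Linear(512, 2),        # mitosis vs. non-mitosis
)
```

The handcrafted morphological, textural, and intensity features that the paper combines with this network form a separate input path not shown here.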
Affiliation(s)
- Monjoy Saha
- School of Medical Science and Technology, Indian Institute of Technology, Kharagpur, West Bengal, India
- Chandan Chakraborty
- School of Medical Science and Technology, Indian Institute of Technology, Kharagpur, West Bengal, India
- Daniel Racoceanu
- Sorbonne University, Paris, France; Pontifical Catholic University of Peru, Lima, Peru
457
Kolachalama VB, Singh P, Lin CQ, Mun D, Belghasem ME, Henderson JM, Francis JM, Salant DJ, Chitalia VC. Association of Pathological Fibrosis With Renal Survival Using Deep Neural Networks. Kidney Int Rep 2018;3:464-475. [PMID: 29725651] [PMCID: PMC5932308] [DOI: 10.1016/j.ekir.2017.11.002]
Abstract
INTRODUCTION Chronic kidney damage is routinely assessed semiquantitatively by scoring the amount of fibrosis and tubular atrophy in a renal biopsy sample. Although image digitization and morphometric techniques can better quantify the extent of histologic damage, more widely applicable ways to stratify kidney disease severity are needed. METHODS We leveraged a deep learning architecture to better associate patient-specific histologic images with clinical phenotypes (training classes), including chronic kidney disease (CKD) stage, serum creatinine, and nephrotic-range proteinuria at the time of biopsy, and 1-, 3-, and 5-year renal survival. Trichrome-stained images processed from renal biopsy samples were collected from 171 patients treated at the Boston Medical Center from 2009 to 2012. Six convolutional neural network (CNN) models were trained using these images as inputs and the training classes as outputs. For comparison, we also trained separate classifiers using the pathologist-estimated fibrosis score (PEFS) as input and the training classes as outputs. RESULTS The CNN models outperformed PEFS across the classification tasks. Specifically, the CNN model predicted the CKD stage more accurately than the PEFS model (κ = 0.519 vs. 0.051). For the creatinine models, the area under the curve (AUC) was 0.912 (CNN) versus 0.840 (PEFS). For the proteinuria models, the AUC was 0.867 (CNN) versus 0.702 (PEFS). AUC values for the CNN models for 1-, 3-, and 5-year renal survival were 0.878, 0.875, and 0.904, respectively, whereas the AUC values for the PEFS models were 0.811, 0.800, and 0.786, respectively. CONCLUSION The study demonstrates a proof of principle that deep learning can be applied to routine renal biopsy images.
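The headline comparisons in this abstract (Cohen's kappa for CKD stage, AUC for the binary outcomes) are straightforward to reproduce once per-model predictions are available; a minimal scikit-learn sketch with stand-in predictions:

```python
from sklearn.metrics import cohen_kappa_score, roc_auc_score

# Stand-in predictions; in the study these come from the CNN and PEFS models.
true_stage = [1, 2, 3, 3, 4, 5, 2, 1]
cnn_stage  = [1, 2, 3, 4, 4, 5, 2, 2]
print("kappa (CKD stage):", cohen_kappa_score(true_stage, cnn_stage))

true_elevated_creatinine = [0, 1, 1, 0, 1, 0, 0, 1]
cnn_prob                 = [0.2, 0.8, 0.7, 0.4, 0.9, 0.1, 0.3, 0.6]
print("AUC (creatinine):", roc_auc_score(true_elevated_creatinine, cnn_prob))
```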
Affiliation(s)
- Vijaya B. Kolachalama
- Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine, Boston, Massachusetts, USA
- Whitaker Cardiovascular Institute, Boston University School of Medicine, Boston, Massachusetts, USA
- Hariri Institute for Computing and Computational Science & Engineering, Boston University, Boston, Massachusetts, USA
- Priyamvada Singh
- Renal Section, Department of Medicine, Boston University School of Medicine, Boston, Massachusetts, USA
- Dan Mun
- College of Health & Rehabilitation Sciences: Sargent College, Boston University, Boston, Massachusetts, USA
- Mostafa E. Belghasem
- Department of Pathology and Laboratory Medicine, Boston University School of Medicine, Boston, Massachusetts, USA
- Joel M. Henderson
- Department of Pathology and Laboratory Medicine, Boston University School of Medicine, Boston, Massachusetts, USA
- Jean M. Francis
- Renal Section, Department of Medicine, Boston University School of Medicine, Boston, Massachusetts, USA
- David J. Salant
- Renal Section, Department of Medicine, Boston University School of Medicine, Boston, Massachusetts, USA
- Vipul C. Chitalia
- Whitaker Cardiovascular Institute, Boston University School of Medicine, Boston, Massachusetts, USA
- Renal Section, Department of Medicine, Boston University School of Medicine, Boston, Massachusetts, USA
458
Bychkov D, Linder N, Turkki R, Nordling S, Kovanen PE, Verrill C, Walliander M, Lundin M, Haglund C, Lundin J. Deep learning based tissue analysis predicts outcome in colorectal cancer. Sci Rep 2018;8:3395. [PMID: 29467373] [PMCID: PMC5821847] [DOI: 10.1038/s41598-018-21758-3]
Abstract
Image-based machine learning, and deep learning in particular, has recently shown expert-level accuracy in medical image classification. In this study, we combine convolutional and recurrent architectures to train a deep network to predict colorectal cancer outcome from images of tumour tissue samples. The novelty of our approach is that we directly predict patient outcome, without any intermediate tissue classification. We evaluate a set of digitized haematoxylin-eosin-stained tumour tissue microarray (TMA) samples from 420 colorectal cancer patients with clinicopathological and outcome data available. The results show that deep learning-based outcome prediction with only small tissue areas as input outperforms (hazard ratio 2.3; 95% CI 1.79-3.03; AUC 0.69) visual histological assessment performed by human experts at both the TMA spot level (HR 1.67; 95% CI 1.28-2.19; AUC 0.58) and the whole-slide level (HR 1.65; 95% CI 1.30-2.15; AUC 0.57) in the stratification of patients into low- and high-risk groups. Our results suggest that state-of-the-art deep learning techniques can extract more prognostic information from the tissue morphology of colorectal cancer than an experienced human observer.
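The convolutional-plus-recurrent combination described here amounts to encoding each small tissue tile with a CNN and letting a recurrent network aggregate the tile sequence from a TMA spot into a single outcome score. The sketch below is a minimal, hypothetical PyTorch rendering of that pattern; the tiny tile encoder and all dimensions are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class TileSequenceOutcome(nn.Module):
    """CNN encodes tiles; an LSTM aggregates the tile sequence into a risk score."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(            # tiny stand-in tile encoder
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(16 * 4 * 4, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, 32, batch_first=True)
        self.head = nn.Linear(32, 1)             # high- vs. low-risk logit

    def forward(self, tiles: torch.Tensor) -> torch.Tensor:
        b, t = tiles.shape[:2]                   # (batch, tiles, 3, H, W)
        feats = self.encoder(tiles.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1]).squeeze(-1)

scores = TileSequenceOutcome()(torch.randn(2, 9, 3, 32, 32))  # 9 tiles per spot
```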
Affiliation(s)
- Dmitrii Bychkov
- Institute for Molecular Medicine Finland FIMM, Helsinki Institute for Life Science HiLIFE, University of Helsinki, Helsinki, Finland
- Nina Linder
- Institute for Molecular Medicine Finland FIMM, Helsinki Institute for Life Science HiLIFE, University of Helsinki, Helsinki, Finland
- Department of Women's and Children's Health, International Maternal and Child Health (IMCH), Uppsala University, Uppsala, Sweden
- Riku Turkki
- Institute for Molecular Medicine Finland FIMM, Helsinki Institute for Life Science HiLIFE, University of Helsinki, Helsinki, Finland
- Stig Nordling
- Department of Pathology, Medicum, University of Helsinki, Helsinki, Finland
- Panu E Kovanen
- Department of Pathology, University of Helsinki and HUSLAB, Helsinki University Hospital, Helsinki, Finland
- Clare Verrill
- Nuffield Department of Surgical Sciences, NIHR Oxford Biomedical Research Centre, University of Oxford, Oxford, UK
- Margarita Walliander
- Institute for Molecular Medicine Finland FIMM, Helsinki Institute for Life Science HiLIFE, University of Helsinki, Helsinki, Finland
- Mikael Lundin
- Institute for Molecular Medicine Finland FIMM, Helsinki Institute for Life Science HiLIFE, University of Helsinki, Helsinki, Finland
- Caj Haglund
- Department of Surgery, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Research Programs Unit, Translational Cancer Biology, University of Helsinki, Helsinki, Finland
- Johan Lundin
- Institute for Molecular Medicine Finland FIMM, Helsinki Institute for Life Science HiLIFE, University of Helsinki, Helsinki, Finland
- Department of Public Health Sciences, Global Health/IHCAR, Karolinska Institutet, Stockholm, Sweden
459
Liu C, Huang Y, Ozolek JA, Hanna MG, Singh R, Rohde GK. SetSVM: An Approach to Set Classification in Nuclei-Based Cancer Detection. IEEE J Biomed Health Inform 2018;23:351-361. [PMID: 29994380] [DOI: 10.1109/jbhi.2018.2803793]
Abstract
Due to the importance of nuclear structure in cancer diagnosis, several predictive models have been described for diagnosing a wide variety of cancers based on nuclear morphology. In many computer-aided diagnosis (CAD) systems, cancer detection tasks can generally be formulated as set classification problems, which cannot be solved directly by classifying single instances. In this paper, we propose a novel set classification approach, SetSVM, to build a predictive model that considers a nuclei set as a whole, without specific assumptions. SetSVM offers highly discriminative power in cancer detection in the sense that it not only optimizes the classifier decision boundary but also transfers discriminative information to set representation learning. During model training, these two processes are unified in the support vector machine (SVM) maximum separation margin problem. Experimental results show that SetSVM provides significant improvements over five commonly used approaches in cancer detection tasks, using 260 patients in total across three different cancer types: thyroid cancer, liver cancer, and melanoma. In addition, we show that SetSVM enables visual interpretation of the discriminative nuclear characteristics representing the nuclei set. These features make SetSVM a potentially practical tool for building accurate and interpretable CAD systems for cancer detection.
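For contrast with SetSVM, which learns the set representation and the decision boundary jointly, the naive baseline it improves upon pools each patient's nuclei set into a fixed feature vector and feeds that to a standard SVM. The sketch below shows only that baseline (it is not the SetSVM algorithm), with random stand-ins for nuclear morphology features:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Each patient contributes a *set* of nuclei, each described by a feature vector.
# Naive baseline: mean-pool each set into one vector, then classify with an SVM.
def pool_set(nuclei_features: np.ndarray) -> np.ndarray:
    return nuclei_features.mean(axis=0)

patients = [rng.normal(loc=y, size=(rng.integers(20, 60), 12))  # 12 nuclear features
            for y in (0, 0, 1, 1, 0, 1)]
labels = [0, 0, 1, 1, 0, 1]                                     # benign vs. cancer

X = np.stack([pool_set(p) for p in patients])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X))
```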
460
González G, Washko GR, Estépar RSJ. Deep learning for biomarker regression: application to osteoporosis and emphysema on chest CT scans. Proc SPIE Int Soc Opt Eng 2018;10574:105741H. [PMID: 30122802] [PMCID: PMC6097534] [DOI: 10.1117/12.2293455]
Abstract
INTRODUCTION Biomarker computation using deep learning often relies on a two-step process, in which a deep learning algorithm segments the region of interest and the biomarker is then measured. We propose an alternative paradigm in which the biomarker is estimated directly using a regression network. We showcase this image-to-biomarker paradigm using two biomarkers: the estimation of bone mineral density (BMD) and the estimation of the lung percentage of emphysema from CT scans. MATERIALS AND METHODS We used a large database of 9,925 CT scans, for which reference-standard BMD and percentage emphysema had already been computed, to train, validate, and test the network. First, the 3D dataset is reduced to a set of canonical 2D slices in which the organ of interest is visible (the spine for BMD, the lungs for emphysema). This data reduction is performed using an automatic object detector. Second, the regression neural network is composed of three convolutional layers, followed by a fully connected layer and an output layer. The network is optimized using a momentum optimizer with an exponential decay rate, with the root mean squared error as the cost function. RESULTS The Pearson correlation coefficients obtained against the reference standards are r = 0.940 (p < 0.00001) for BMD and r = 0.976 (p < 0.00001) for percentage emphysema. CONCLUSIONS The deep learning regression architecture can learn biomarkers directly from images, without being told the structures of interest. This approach simplifies the development of biomarker-extraction algorithms. The proposed data reduction based on object detectors conveys enough information to compute the biomarkers of interest.
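The regression network described (three convolutional layers, a fully connected layer, an output layer, RMSE cost, momentum optimizer with exponential decay) is small enough to sketch in full. The version below is a hypothetical PyTorch rendering; kernel sizes, channel counts, and the 64 x 64 slice size are assumptions.

```python
import torch
import torch.nn as nn

# Three conv layers -> fully connected -> scalar biomarker (e.g., BMD or %emphysema).
net = nn.Sequential(
    nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 6 * 6, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

optimizer = torch.optim.SGD(net.parameters(), lr=1e-2, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)

slices = torch.randn(8, 1, 64, 64)     # canonical 2D slices from the object detector
targets = torch.rand(8, 1)             # reference-standard biomarker values
rmse = torch.sqrt(nn.functional.mse_loss(net(slices), targets))
rmse.backward()
optimizer.step()
scheduler.step()                       # exponential learning-rate decay
```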
Affiliation(s)
- George R Washko
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Brigham and Women's Hospital, 75 Francis St, Boston, MA, USA
- Raúl San José Estépar
- Applied Chest Imaging Laboratory, Department of Radiology, Brigham and Women's Hospital, 1249 Boylston St, Boston, MA 02115, USA
461
Xie Y, Xing F, Shi X, Kong X, Su H, Yang L. Efficient and robust cell detection: A structured regression approach. Med Image Anal 2018;44:245-254. [PMID: 28797548] [PMCID: PMC6051760] [DOI: 10.1016/j.media.2017.07.003]
Abstract
Efficient and robust cell detection serves as a critical prerequisite for many subsequent biomedical image analysis methods and computer-aided diagnosis (CAD). It remains a challenging task due to touching cells, inhomogeneous background noise, and large variations in cell sizes and shapes. In addition, the ever-increasing amount of available datasets and the high resolution of whole-slide scanned images pose a further demand for efficient processing algorithms. In this paper, we present a novel structured regression model based on a proposed fully residual convolutional neural network for efficient cell detection. For each testing image, our model learns to produce a dense proximity map that exhibits higher responses at locations near cell centers. Our method requires only a few training images with weak annotations (a single dot indicating each cell centroid). We have extensively evaluated our method using four different datasets, covering different microscopy staining methods (e.g., H&E or Ki-67 staining) and image acquisition techniques (e.g., bright-field or phase-contrast imaging). Experimental results demonstrate the superiority of our method over existing state-of-the-art methods in terms of both detection accuracy and running time.
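The training targets here are dense proximity maps built from dot annotations at cell centroids. One common construction, matching the exponential-decay form used in this line of work, assigns each pixel a value that decays with its distance to the nearest annotated centroid and is zero beyond a cutoff radius; the radius and decay constant below are assumptions of the illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def proximity_map(shape, centroids, radius: float = 8.0, alpha: float = 3.0):
    """Dense regression target peaking at annotated cell centers.

    shape:     (H, W) of the image
    centroids: list of (row, col) dot annotations, one per cell
    """
    mask = np.ones(shape, dtype=bool)
    for r, c in centroids:
        mask[r, c] = False                 # zeros exactly at the annotated dots
    d = distance_transform_edt(mask)       # distance to the nearest centroid
    out = np.exp(alpha * (1.0 - d / radius)) - 1.0
    out[d > radius] = 0.0                  # flat background away from cells
    return out / out.max()

target = proximity_map((64, 64), [(10, 12), (40, 45), (30, 20)])
```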
Affiliation(s)
- Yuanpu Xie
- Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Fuyong Xing
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
- Xiaoshuang Shi
- Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Xiangfei Kong
- School of Electrical and Electronic Engineering, Nanyang Technological University, Nanyang Drive 637553, Singapore
- Hai Su
- Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Lin Yang
- Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA; Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
462
Cao C, Liu F, Tan H, Song D, Shu W, Li W, Zhou Y, Bo X, Xie Z. Deep Learning and Its Applications in Biomedicine. Genomics Proteomics Bioinformatics 2018;16:17-32. [PMID: 29522900] [PMCID: PMC6000200] [DOI: 10.1016/j.gpb.2017.07.003]
Abstract
Advances in biological and medical technologies have been providing us with explosive volumes of biological and physiological data, such as medical images, electroencephalography, and genomic and protein sequences. Learning from these data facilitates the understanding of human health and disease. Developed from artificial neural networks, deep learning-based algorithms show great promise in extracting features and learning patterns from complex data. The aim of this paper is to provide an overview of deep learning techniques and some of the state-of-the-art applications in the biomedical field. We first introduce the development of artificial neural networks and deep learning. We then describe two main components of deep learning, i.e., deep learning architectures and model optimization. Subsequently, we present examples of deep learning applications, including medical image classification, genomic sequence analysis, and protein structure classification and prediction. Finally, we offer our perspectives on future directions in the field of deep learning.
Affiliation(s)
- Chensi Cao
- CapitalBio Corporation, Beijing 102206, China
- Feng Liu
- Department of Biotechnology, Beijing Institute of Radiation Medicine, Beijing 100850, China
- Hai Tan
- State Key Lab of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 500040, China
- Deshou Song
- State Key Lab of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 500040, China
- Wenjie Shu
- Department of Biotechnology, Beijing Institute of Radiation Medicine, Beijing 100850, China
- Weizhong Li
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou 500040, China
- Yiming Zhou
- CapitalBio Corporation, Beijing 102206, China; Department of Biomedical Engineering, Medical Systems Biology Research Center, Tsinghua University School of Medicine, Beijing 100084, China
- Xiaochen Bo
- Department of Biotechnology, Beijing Institute of Radiation Medicine, Beijing 100850, China
- Zhi Xie
- State Key Lab of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 500040, China
463
Chen H, Engkvist O, Wang Y, Olivecrona M, Blaschke T. The rise of deep learning in drug discovery. Drug Discov Today 2018;23:1241-1250. [PMID: 29366762] [DOI: 10.1016/j.drudis.2018.01.039]
Abstract
Over the past decade, deep learning has achieved remarkable success in various artificial intelligence research areas. Evolved from earlier research on artificial neural networks, this technology has shown performance superior to other machine learning algorithms in areas such as image and voice recognition and natural language processing. The first wave of applications of deep learning in pharmaceutical research has emerged in recent years, and its utility has gone beyond bioactivity predictions, showing promise in addressing diverse problems in drug discovery. Examples are discussed covering bioactivity prediction, de novo molecular design, synthesis prediction, and biological image analysis.
Affiliation(s)
- Hongming Chen
- Hit Discovery, Discovery Sciences, Innovative Medicines and Early Development Biotech Unit, AstraZeneca R&D Gothenburg, Mölndal 43183, Sweden
- Ola Engkvist
- Hit Discovery, Discovery Sciences, Innovative Medicines and Early Development Biotech Unit, AstraZeneca R&D Gothenburg, Mölndal 43183, Sweden
- Yinhai Wang
- Quantitative Biology, Discovery Sciences, Innovative Medicines and Early Development Biotech Unit, AstraZeneca, Unit 310, Cambridge Science Park, Milton Road, Cambridge CB4 0WG, UK
- Marcus Olivecrona
- Hit Discovery, Discovery Sciences, Innovative Medicines and Early Development Biotech Unit, AstraZeneca R&D Gothenburg, Mölndal 43183, Sweden
- Thomas Blaschke
- Hit Discovery, Discovery Sciences, Innovative Medicines and Early Development Biotech Unit, AstraZeneca R&D Gothenburg, Mölndal 43183, Sweden
464
González G, Ash SY, Vegas-Sánchez-Ferrero G, Onieva Onieva J, Rahaghi FN, Ross JC, Díaz A, San José Estépar R, Washko GR. Disease Staging and Prognosis in Smokers Using Deep Learning in Chest Computed Tomography. Am J Respir Crit Care Med 2018;197:193-203. [PMID: 28892454] [PMCID: PMC5768902] [DOI: 10.1164/rccm.201705-0860OC]
Abstract
RATIONALE Deep learning is a powerful tool that may allow for improved outcome prediction. OBJECTIVES To determine if deep learning, specifically convolutional neural network (CNN) analysis, could detect and stage chronic obstructive pulmonary disease (COPD) and predict acute respiratory disease (ARD) events and mortality in smokers. METHODS A CNN was trained using computed tomography scans from 7,983 COPDGene participants and evaluated using 1,000 nonoverlapping COPDGene participants and 1,672 ECLIPSE participants. Logistic regression (C statistic and the Hosmer-Lemeshow test) was used to assess COPD diagnosis and ARD prediction. Cox regression (C index and the Greenwood-Nam-D'Agostino test) was used to assess mortality. MEASUREMENTS AND MAIN RESULTS In COPDGene, the C statistic for the detection of COPD was 0.856. A total of 51.1% of participants in COPDGene were accurately staged, and 74.95% were within one stage. In ECLIPSE, 29.4% were accurately staged and 74.6% were within one stage. In COPDGene and ECLIPSE, the C statistics for ARD events were 0.64 and 0.55, respectively, and the Hosmer-Lemeshow P values were 0.502 and 0.380, respectively, suggesting no evidence of poor calibration. In COPDGene and ECLIPSE, the CNN predicted mortality with fair discrimination (C indices, 0.72 and 0.60, respectively) and without evidence of poor calibration (Greenwood-Nam-D'Agostino P values, 0.307 and 0.331, respectively). CONCLUSIONS A deep learning approach that uses only computed tomography imaging data can identify those smokers who have COPD and predict who are most likely to have ARD events and those with the highest mortality. At a population level, CNN analysis may be a powerful tool for risk assessment.
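The C statistic / C index used throughout these results is the fraction of comparable patient pairs that the model orders correctly. A minimal, self-contained implementation of Harrell's C-index for illustration (an O(n²) loop, fine for small cohorts):

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C-index: fraction of comparable patient pairs ordered correctly.

    A pair (i, j) is comparable when the patient with the shorter follow-up
    time had an observed event; it is concordant when that patient also has
    the higher predicted risk. Ties in risk count as half-concordant.
    """
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    for i in range(len(time)):
        if not event[i]:
            continue
        for j in range(len(time)):
            if time[i] < time[j]:                # patient i failed first
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

print(concordance_index([2, 4, 6, 8], [1, 1, 0, 0], [0.9, 0.7, 0.4, 0.1]))  # 1.0
```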
Affiliation(s)
- Germán González
- Sierra Research, Alicante, Spain
- Applied Chest Imaging Laboratory, Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts
- Samuel Y. Ash
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts
- Farbod N. Rahaghi
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts
- James C. Ross
- Applied Chest Imaging Laboratory, Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts
- Alejandro Díaz
- Applied Chest Imaging Laboratory, Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts
- George R. Washko
- Division of Pulmonary and Critical Care Medicine, Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts
465
Affiliation(s)
- Dabao Zhang
- Department of Statistics, Purdue University, West Lafayette, IN
466
Composition Loss for Counting, Density Map Estimation and Localization in Dense Crowds. Computer Vision – ECCV 2018; 2018. [DOI: 10.1007/978-3-030-01216-8_33]
467
Deep Learning for Medical Image Processing: Overview, Challenges and the Future. Lecture Notes in Computational Vision and Biomechanics 2018. [DOI: 10.1007/978-3-319-65981-7_12]
468
Context-Aware Learning Using Transferable Features for Classification of Breast Cancer Histology Images. Lecture Notes in Computer Science 2018. [DOI: 10.1007/978-3-319-93000-8_89]
469
Nawaz W, Ahmed S, Tahir A, Khan HA. Classification of Breast Cancer Histology Images Using AlexNet. Lecture Notes in Computer Science 2018. [DOI: 10.1007/978-3-319-93000-8_99]
470
Lai Z, Deng H. Multiscale High-Level Feature Fusion for Histopathological Image Classification. Comput Math Methods Med 2017;2017:7521846. [PMID: 29463986] [PMCID: PMC5804108] [DOI: 10.1155/2017/7521846]
Abstract
Histopathological image classification is one of the most important steps in disease diagnosis. We propose a method for multiclass histopathological image classification based on a deep convolutional neural network, referred to as the coding network, which can yield a better representation of the histopathological image than the coding network alone. The main process is to train a deep convolutional neural network to extract high-level features, and to fuse the high-level features of two convolutional layers into a multiscale high-level feature. To gain better performance and higher efficiency, we employ a sparse autoencoder (SAE) and principal component analysis (PCA) to reduce the dimensionality of the multiscale high-level feature. We evaluate the proposed method on a real histopathological image dataset. Our results suggest that the proposed method is effective and outperforms the coding network alone.
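The fusion-and-reduction step described (concatenating the high-level features of two convolutional layers, then compressing with PCA) is easy to sketch; the scikit-learn fragment below uses random stand-ins for the coding network's activations, and the feature dimensions are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Stand-ins for high-level features from two convolutional layers of the network.
feats_layer_a = rng.random((200, 512))   # 200 images x 512-dim features
feats_layer_b = rng.random((200, 256))

multiscale = np.hstack([feats_layer_a, feats_layer_b])     # fuse into 768-dim
reduced = PCA(n_components=64).fit_transform(multiscale)   # compress for the classifier
print(reduced.shape)  # (200, 64)
```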
Affiliation(s)
- ZhiFei Lai
- Department of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
- HuiFang Deng
- Department of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China
471
Awan R, Sirinukunwattana K, Epstein D, Jefferyes S, Qidwai U, Aftab Z, Mujeeb I, Snead D, Rajpoot N. Glandular Morphometrics for Objective Grading of Colorectal Adenocarcinoma Histology Images. Sci Rep 2017;7:16852. [PMID: 29203775] [PMCID: PMC5715083] [DOI: 10.1038/s41598-017-16516-w]
Abstract
Determining the grade of colon cancer from tissue slides is a routine part of the pathological analysis. In the case of colorectal adenocarcinoma (CRA), grading is partly determined by morphology and degree of formation of glandular structures. Achieving consistency between pathologists is difficult due to the subjective nature of grading assessment. An objective grading using computer algorithms will be more consistent, and will be able to analyse images in more detail. In this paper, we measure the shape of glands with a novel metric that we call the Best Alignment Metric (BAM). We show a strong correlation between a novel measure of glandular shape and grade of the tumour. We used shape specific parameters to perform a two-class classification of images into normal or cancerous tissue and a three-class classification into normal, low grade cancer, and high grade cancer. The task of detecting gland boundaries, which is a prerequisite of shape-based analysis, was carried out using a deep convolutional neural network designed for segmentation of glandular structures. A support vector machine (SVM) classifier was trained using shape features derived from BAM. Through cross-validation, we achieved an accuracy of 97% for the two-class and 91% for three-class classification.
Affiliation(s)
- Ruqayya Awan
- Department of Computer Science and Engineering, Qatar University, Doha, Qatar
- Department of Computer Science, University of Warwick, Coventry, UK
- David Epstein
- Mathematics Institute, University of Warwick, Coventry, UK
- Samuel Jefferyes
- Department of Computer Science, University of Warwick, Coventry, UK
- Uvais Qidwai
- Department of Computer Science and Engineering, Qatar University, Doha, Qatar
- Zia Aftab
- Hamad Medical Corporation, Doha, Qatar
- David Snead
- Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, UK
- Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK
472
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal 2017;42:60-88. [PMID: 28778026] [DOI: 10.1016/j.media.2017.07.005]
Affiliation(s)
- Geert Litjens
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Thijs Kooi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Francesco Ciompi
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Mohsen Ghafoorian
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Bram van Ginneken
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
- Clara I Sánchez
- Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
473
Ahmad A, Asif A, Rajpoot N, Arif M, Minhas FUAA. Correlation Filters for Detection of Cellular Nuclei in Histopathology Images. J Med Syst 2017;42:7. [DOI: 10.1007/s10916-017-0863-8]
474
Wang X, Yang W, Weinreb J, Han J, Li Q, Kong X, Yan Y, Ke Z, Luo B, Liu T, Wang L. Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning. Sci Rep 2017;7:15415. [PMID: 29133818] [PMCID: PMC5684419] [DOI: 10.1038/s41598-017-15720-y]
Abstract
Prostate cancer (PCa) has been a major cause of death since ancient times, as documented in imaging of Egyptian Ptolemaic mummies. PCa detection is critical to personalized medicine, and the appearance of PCa varies considerably on MRI scans. A total of 2,602 morphologic images (axial 2D T2-weighted imaging) of the prostate were obtained from 172 patients. A deep learning method using a deep convolutional neural network (DCNN) and a non-deep learning method using SIFT image features with a bag-of-words (BoW) model, a representative method for image recognition and analysis, were used to distinguish pathologically confirmed PCa patients from patients with prostate benign conditions (BCs), such as prostatitis or benign prostatic hyperplasia (BPH). In fully automated detection of PCa patients, deep learning had a statistically higher area under the receiver operating characteristic curve (AUC) than non-deep learning (P = 0.0007). The AUCs were 0.84 (95% CI 0.78-0.89) for the deep learning method and 0.70 (95% CI 0.63-0.77) for the non-deep learning method, respectively. Our results suggest that deep learning with a DCNN is superior to non-deep learning with SIFT image features and a BoW model for fully automated differentiation of PCa patients from prostate BCs patients. Our deep learning method is extensible to image modalities such as MR imaging, CT, and PET of other organs.
Affiliation(s)
- Xinggang Wang
- Department of Radiology, Tongji Hospital, Huazhong University of Science and Technology, Jiefang Road 1095, 430030, Wuhan, China
- School of Electronics Information and Communications, Huazhong University of Science and Technology, Luoyu Road 1037, Wuhan, Hubei, 430074, China
- Wei Yang
- Department of Nutrition and Food Hygiene, MOE Key Lab of Environment and Health, Hubei Key Laboratory of Food Nutrition and Safety, School of Public Health, Tongji Medical College, Huazhong University of Science and Technology, Hangkong Road 13, 430030, Wuhan, China
- Jeffrey Weinreb
- Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, 208042, Connecticut, USA
- Juan Han
- Department of Maternal and Child and Adolescent Health & Department of Epidemiology and Biostatistics, School of Public Health, Tongji Medical College, Huazhong University of Science and Technology, Hangkong Road 13, 430030, Wuhan, China
- Qiubai Li
- Program in Cellular and Molecular Medicine, Boston Children's Hospital, Boston, MA, 02115, USA
- Xiangchuang Kong
- Department of Radiology, Union Hospital, Huazhong University of Science and Technology, Jiefang Road 1277, 430022, Wuhan, China
- Yongluan Yan
- School of Electronics Information and Communications, Huazhong University of Science and Technology, Luoyu Road 1037, Wuhan, Hubei, 430074, China
- Zan Ke
- Department of Radiology, Tongji Hospital, Huazhong University of Science and Technology, Jiefang Road 1095, 430030, Wuhan, China
- Bo Luo
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Luoyu Road 1037, 430074, Wuhan, China
- Tao Liu
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Luoyu Road 1037, 430074, Wuhan, China
- Liang Wang
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science & Technology, Jie-Fang-Da-Dao 1095, Wuhan, 430030, P.R. China
475
Mishra R, Daescu O, Leavey P, Rakheja D, Sengupta A. Convolutional Neural Network for Histopathological Analysis of Osteosarcoma. J Comput Biol 2017;25:313-325. [PMID: 29083930] [DOI: 10.1089/cmb.2017.0153]
Abstract
Pathologists often deal with high complexity and sometimes disagreement over osteosarcoma tumor classification due to cellular heterogeneity in the dataset. Segmentation and classification of histology tissue in H&E stained tumor image datasets is a challenging task because of intra-class variations, inter-class similarity, crowded context, and noisy data. In recent years, deep learning approaches have led to encouraging results in breast cancer and prostate cancer analysis. In this article, we propose a convolutional neural network (CNN) as a tool to improve the efficiency and accuracy of osteosarcoma tumor classification into tumor classes (viable tumor, necrosis) versus nontumor. The proposed CNN architecture contains eight learned layers: three sets of two stacked convolutional layers interspersed with max-pooling layers for feature extraction, and two fully connected layers, with data augmentation strategies to boost performance. The use of the neural network results in an average classification accuracy of 92%. We compare the proposed architecture with three existing, proven CNN architectures for image classification: AlexNet, LeNet, and VGGNet. We also provide a pipeline to calculate the percentage of necrosis in a given whole slide image. We conclude that the use of neural networks can assure both high accuracy and efficiency in osteosarcoma classification.
Affiliation(s)
- Rashika Mishra
- Department of Computer Science, University of Texas at Dallas, Richardson, Texas
- Ovidiu Daescu
- Department of Computer Science, University of Texas at Dallas, Richardson, Texas
- Patrick Leavey
- University of Texas Southwestern Medical Center, Dallas, Texas
- Dinesh Rakheja
- University of Texas Southwestern Medical Center, Dallas, Texas
- Anita Sengupta
- University of Texas Southwestern Medical Center, Dallas, Texas
476
Peikari M, Salama S, Nofech-Mozes S, Martel AL. Automatic cellularity assessment from post-treated breast surgical specimens. Cytometry A 2017;91:1078-1087. [PMID: 28976721] [DOI: 10.1002/cyto.a.23244]
Abstract
Neoadjuvant treatment (NAT) of breast cancer (BCa) is an option for patients with locally advanced disease. It has been compared with standard adjuvant therapy with the aim of improving prognosis and surgical outcome. Moreover, the response of the tumor to therapy provides useful information for patient management. Pathological examination of tissue sections after surgery is the gold standard for estimating residual tumor, and the assessment of cellularity is an important component of tumor burden assessment. In current clinical practice, tumor cellularity is manually estimated by pathologists on hematoxylin and eosin (H&E) stained slides; the quality and reliability of this estimate may be impaired by inter-observer variability, which potentially affects the assessment of prognostic power in NAT trials. The procedure is also qualitative and time-consuming. In this paper, we describe a method for automatically assessing cellularity. A pipeline was developed to automatically segment nuclear figures and estimate residual cancer cellularity from patches and whole slide images (WSIs) of BCa. We compared the performance of our proposed pipeline in estimating residual cancer cellularity with that of two expert pathologists. We found an intra-class correlation coefficient (ICC) of 0.89 (95% CI [0.70, 0.95]) between pathologists, 0.74 (95% CI [0.70, 0.77]) between pathologist #1 and the proposed method, and 0.75 (95% CI [0.71, 0.79]) between pathologist #2 and the proposed method. We also successfully applied the proposed technique to a WSI to locate areas with a high concentration of residual cancer. The main advantage of our approach is that it is fully automatic and can be used to find areas of high cellularity in WSIs. This provides a first step in developing an automatic technique for post-NAT tumor response assessment from pathology slides. © 2017 International Society for Advancement of Cytometry.
Affiliation(s)
- Sherine Salama
- Laboratory Medicine and Pathobiology, University of Toronto, Canada
- Anne L Martel
- Medical Biophysics, University of Toronto, Canada; Physical Sciences, Sunnybrook Research Institute, Canada
477
Dou Q, Yu L, Chen H, Jin Y, Yang X, Qin J, Heng PA. 3D deeply supervised network for automated segmentation of volumetric medical images. Med Image Anal 2017;41:40-54. [DOI: 10.1016/j.media.2017.05.001]
478
Conceptual data sampling for breast cancer histology image classification. Comput Biol Med 2017;89:59-67. [DOI: 10.1016/j.compbiomed.2017.07.018]
479
Lan R, Zhou Y. Medical Image Retrieval via Histogram of Compressed Scattering Coefficients. IEEE J Biomed Health Inform 2017;21:1338-1346. [DOI: 10.1109/jbhi.2016.2623840]
480
Automated detection of arrhythmias using different intervals of tachycardia ECG segments with convolutional neural network. Inf Sci (N Y) 2017. [DOI: 10.1016/j.ins.2017.04.012]
481
Dercle L, Ammari S, Bateson M, Durand PB, Haspinger E, Massard C, Jaudet C, Varga A, Deutsch E, Soria JC, Ferté C. Limits of radiomic-based entropy as a surrogate of tumor heterogeneity: ROI-area, acquisition protocol and tissue site exert substantial influence. Sci Rep 2017;7:7952. [PMID: 28801575] [PMCID: PMC5554130] [DOI: 10.1038/s41598-017-08310-5]
Abstract
Entropy is a promising quantitative imaging biomarker for characterizing cancer imaging phenotype. Entropy has been associated with tumor gene expression, tumor metabolism, tumor stage, patient prognosis, and treatment response. Our hypothesis states that tumor-specific biomarkers such as entropy should be correlated between synchronous metastases. Therefore, a significant proportion of the variance of entropy should be attributed to the malignant process. We analyzed 112 patients with matched/paired synchronous metastases (SM#1 and SM#2) prospectively enrolled in the MOSCATO-01 clinical trial. Imaging features were extracted from Regions Of Interest (ROI) delineated on CT-scan using TexRAD software. We showed that synchronous metastasis entropy was correlated across 5 Spatial Scale Filters: Spearman's Rho ranged between 0.41 and 0.59 (P = 0.0001, Bonferroni correction). Multivariate linear analysis revealed that entropy in SM#1 is significantly associated with (i) primary tumor type; (ii) entropy in SM#2 (same malignant process); (iii) ROI area size; (iv) metastasis site; and (v) entropy in the psoas muscle (reference tissue). Entropy was a logarithmic function of ROI area in normal control tissues (aorta, psoas) and in mathematical models (P < 0.01). We concluded that entropy is a tumor-specific metric only if confounding factors are corrected.
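First-order entropy of the kind reported here is the Shannon entropy of the ROI's (filtered) intensity histogram. The sketch below uses a plain fixed-bin histogram (an assumption, since TexRAD's exact binning is not given) and reproduces the qualitative ROI-area dependence the authors describe: with a fixed bin count, the entropy estimate of pure noise grows roughly logarithmically with ROI size.

```python
import numpy as np

def roi_entropy(pixels, n_bins=128):
    """First-order entropy (base 2) of the intensity histogram inside an ROI."""
    hist, _ = np.histogram(pixels, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins; 0*log(0) -> 0
    return -(p * np.log2(p)).sum()

# Entropy of noise-only 'tissue' increases with ROI area at fixed binning.
rng = np.random.default_rng(1)
for n_pixels in (100, 1_000, 10_000, 100_000):
    roi = rng.normal(0.0, 10.0, n_pixels)   # stand-in for CT numbers
    print(f"{n_pixels:>7} px: entropy = {roi_entropy(roi):.2f} bits")
```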
Collapse
Affiliation(s)
- Laurent Dercle
- INSERM U1015, Equipe Labellisée Ligue Nationale Contre le Cancer, Gustave Roussy Cancer Campus, Villejuif, France.
- Département de l'imagerie médicale, Gustave Roussy, Université Paris Saclay, F-94805, Villejuif, France.
- Department of Radiology, Columbia University Medical Center, New York, New York, USA.
| | - Samy Ammari
- Département de l'imagerie médicale, Gustave Roussy, Université Paris Saclay, F-94805, Villejuif, France
- Département d'Innovation Thérapeutique et des Essais Précoces (DITEP), Gustave Roussy, Université Paris Saclay, F-94805, Villejuif, France
| | | | - Paul Blanc Durand
- Département d'Innovation Thérapeutique et des Essais Précoces (DITEP), Gustave Roussy, Université Paris Saclay, F-94805, Villejuif, France
| | - Eva Haspinger
- Département d'Innovation Thérapeutique et des Essais Précoces (DITEP), Gustave Roussy, Université Paris Saclay, F-94805, Villejuif, France
| | - Christophe Massard
- Département d'Innovation Thérapeutique et des Essais Précoces (DITEP), Gustave Roussy, Université Paris Saclay, F-94805, Villejuif, France
| | - Cyril Jaudet
- Department of Radiotherapy, UZ Brussel, Brussels, Belgium
| | - Andrea Varga
- Département d'Innovation Thérapeutique et des Essais Précoces (DITEP), Gustave Roussy, Université Paris Saclay, F-94805, Villejuif, France
| | - Eric Deutsch
- Département de radiothérapie, Gustave Roussy Cancer Campus, Université Paris Saclay, F-94805, Villejuif, France
- INSERM U981, Biomarqueurs prédictifs et nouvelles stratégies en oncologie, Université Paris Sud, Gustave Roussy, Villejuif, France
| | - Jean-Charles Soria
- Département d'Innovation Thérapeutique et des Essais Précoces (DITEP), Gustave Roussy, Université Paris Saclay, F-94805, Villejuif, France
- INSERM U981, Biomarqueurs prédictifs et nouvelles stratégies en oncologie, Université Paris Sud, Gustave Roussy, Villejuif, France
- INSERM U1030, Paris Sud University, Gustave Roussy, Villejuif, France
| | - Charles Ferté
- Département d'Innovation Thérapeutique et des Essais Précoces (DITEP), Gustave Roussy, Université Paris Saclay, F-94805, Villejuif, France.
- INSERM U981, Biomarqueurs prédictifs et nouvelles stratégies en oncologie, Université Paris Sud, Gustave Roussy, Villejuif, France.
- INSERM U1030, Paris Sud University, Gustave Roussy, Villejuif, France.
| |
Collapse
|
482
|
Song Y, Li Q, Huang H, Feng D, Chen M, Cai W. Low Dimensional Representation of Fisher Vectors for Microscopy Image Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:1636-1649. [PMID: 28358678 DOI: 10.1109/tmi.2017.2687466] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Microscopy image classification is important in various biomedical applications, such as cancer subtype identification and protein localization for high-content screening. To achieve automated and effective microscopy image classification, the representative and discriminative capability of image feature descriptors is essential. To this end, we propose a new feature representation algorithm to facilitate automated microscopy image classification. In particular, we incorporate Fisher vector (FV) encoding with multiple types of local features that are handcrafted or learned, and we design a separation-guided dimension reduction method that reduces the descriptor dimension while increasing its discriminative capability. Our method is evaluated on four publicly available microscopy image data sets of different imaging types and applications: the UCSB breast cancer data set, the MICCAI 2015 CBTC challenge data set, and the IICBU malignant lymphoma and RNAi data sets. The experimental results demonstrate the advantage of the proposed low-dimensional FV representation, showing consistent performance improvements over the existing state of the art and commonly used dimension reduction techniques.
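A sketch of FV encoding restricted to the mean-gradient components (a common simplification of the full Fisher vector); the descriptor dimensionality and GMM size below are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector_means(local_feats, gmm):
    """Fisher vector, mean-gradient components only.

    local_feats: (N, D) local descriptors from one image.
    gmm: fitted sklearn GaussianMixture with diagonal covariance.
    """
    gamma = gmm.predict_proba(local_feats)            # (N, K) soft assignments
    N = local_feats.shape[0]
    diff = (local_feats[:, None, :] - gmm.means_[None]) / np.sqrt(gmm.covariances_)[None]
    fv = (gamma[..., None] * diff).sum(axis=0)        # (K, D)
    fv /= N * np.sqrt(gmm.weights_)[:, None]
    fv = fv.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))            # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)          # L2 normalisation

# Hypothetical pipeline: fit the GMM vocabulary on pooled training descriptors,
# then encode each image's descriptors as one fixed-length vector.
rng = np.random.default_rng(2)
train_desc = rng.normal(size=(5000, 16))              # stand-in local features
gmm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(train_desc)
image_desc = rng.normal(size=(300, 16))
print(fisher_vector_means(image_desc, gmm).shape)     # (8 * 16,) = (128,)
```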
Collapse
|
483
|
Korbar B, Olofson AM, Miraflor AP, Nicka CM, Suriawinata MA, Torresani L, Suriawinata AA, Hassanpour S. Deep Learning for Classification of Colorectal Polyps on Whole-slide Images. J Pathol Inform 2017; 8:30. [PMID: 28828201 PMCID: PMC5545773 DOI: 10.4103/jpi.jpi_34_17] [Citation(s) in RCA: 162] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2017] [Accepted: 05/22/2017] [Indexed: 11/04/2022] Open
Abstract
CONTEXT Histopathological characterization of colorectal polyps is critical for determining the risk of colorectal cancer and future rates of surveillance for patients. However, this characterization is a challenging task and suffers from significant inter- and intra-observer variability. AIMS We built an automatic image analysis method that can accurately classify different types of colorectal polyps on whole-slide images to help pathologists with this characterization and diagnosis. SETTING AND DESIGN Our method is based on deep-learning techniques, which rely on numerous levels of abstraction for data representation and have shown state-of-the-art results for various image analysis tasks. SUBJECTS AND METHODS Our method covers five common types of polyps (i.e., hyperplastic, sessile serrated, traditional serrated, tubular, and tubulovillous/villous) that are included in the US Multi-Society Task Force guidelines for colorectal cancer risk assessment and surveillance. We developed multiple deep-learning approaches by leveraging a dataset of 2074 cropped images, which were annotated by multiple domain-expert pathologists as reference standards. STATISTICAL ANALYSIS We evaluated our method on an independent test set of 239 whole-slide images and measured standard machine-learning evaluation metrics of accuracy, precision, recall, and F1 score, with their 95% confidence intervals. RESULTS Our evaluation shows that our method with a residual network architecture achieves the best performance for classification of colorectal polyps on whole-slide images (overall accuracy: 93.0%, 95% confidence interval: 89.0%-95.9%). CONCLUSIONS Our method can reduce the cognitive burden on pathologists and improve their efficacy in the histopathological characterization of colorectal polyps and in subsequent risk assessment and follow-up recommendations.
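The paper reports accuracy with a 95% confidence interval; one standard way to obtain such an interval (the paper does not say which it used, so this is an assumption) is a percentile bootstrap over the test slides, sketched here on synthetic labels.

```python
import numpy as np

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap 95% CI for classification accuracy."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    accs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample slides with replacement
        accs[b] = (y_true[idx] == y_pred[idx]).mean()
    lo, hi = np.quantile(accs, [alpha / 2, 1 - alpha / 2])
    return (y_true == y_pred).mean(), (lo, hi)

# Hypothetical labels for 239 test slides across 5 polyp classes.
rng = np.random.default_rng(3)
truth = rng.integers(0, 5, 239)
pred = np.where(rng.random(239) < 0.93, truth, rng.integers(0, 5, 239))
acc, (lo, hi) = bootstrap_accuracy_ci(truth, pred)
print(f"accuracy {acc:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```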
Collapse
Affiliation(s)
- Bruno Korbar
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, One Medical Center Drive, Lebanon, NH 03756, USA.,Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
| | - Andrea M Olofson
- Department of Pathology and Laboratory Medicine, Geisel School of Medicine at Dartmouth, One Medical Center Drive, Lebanon, NH 03756, USA
| | - Allen P Miraflor
- Department of Pathology and Laboratory Medicine, Geisel School of Medicine at Dartmouth, One Medical Center Drive, Lebanon, NH 03756, USA
| | - Catherine M Nicka
- Department of Pathology and Laboratory Medicine, Geisel School of Medicine at Dartmouth, One Medical Center Drive, Lebanon, NH 03756, USA
| | - Matthew A Suriawinata
- Department of Pathology and Laboratory Medicine, Geisel School of Medicine at Dartmouth, One Medical Center Drive, Lebanon, NH 03756, USA
| | - Lorenzo Torresani
- Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
| | - Arief A Suriawinata
- Department of Pathology and Laboratory Medicine, Geisel School of Medicine at Dartmouth, One Medical Center Drive, Lebanon, NH 03756, USA
| | - Saeed Hassanpour
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, One Medical Center Drive, Lebanon, NH 03756, USA.,Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA.,Department of Epidemiology, Geisel School of Medicine at Dartmouth, One Medical Center Drive, Lebanon, NH 03756, USA
| |
Collapse
|
484
|
Young Hwan Chang, Thibault G, Madin O, Azimi V, Meyers C, Johnson B, Link J, Margolin A, Gray JW. Deep learning based Nucleus Classification in pancreas histological images. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2017; 2017:672-675. [PMID: 29059962 DOI: 10.1109/embc.2017.8036914] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Tumor specimens contain a variety of healthy cells as well as cancerous cells, and this heterogeneity underlies resistance to various cancer therapies, yet the problem has only recently been investigated thoroughly. Meanwhile, technological breakthroughs in imaging have led to an explosion of molecular and cellular profiling data from large numbers of samples, and modern machine learning approaches, including deep learning, have produced encouraging results by finding hidden structures and making accurate predictions. In this paper, we propose a Deep learning based Nucleus Classification (DeepNC) approach that uses paired histopathology and immunofluorescence images (the latter providing labels), and we demonstrate its predictive power. This method can help resolve the current discrepancy between genomic- or transcriptomic-based and pathology-based tumor purity estimates by improving histological evaluation. We also discuss the challenges of training a deep learning model on a very large dataset.
Collapse
|
485
|
Kumar N, Verma R, Sharma S, Bhargava S, Vahadane A, Sethi A. A Dataset and a Technique for Generalized Nuclear Segmentation for Computational Pathology. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:1550-1560. [PMID: 28287963 DOI: 10.1109/tmi.2017.2677499] [Citation(s) in RCA: 363] [Impact Index Per Article: 45.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analysis in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images, in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out-of-the-box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. We also propose a segmentation technique based on deep learning that lays a special emphasis on identifying the nuclear boundaries, including those between the touching or overlapping nuclei, and works well on a diverse set of test images.
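One widely used way to make a segmentation network attend explicitly to nuclear boundaries, including those shared by touching nuclei, is to train it on a three-class per-pixel target derived from the instance annotations. The sketch below illustrates that construction; the erosion depth and class scheme are assumptions for illustration, not necessarily the authors' exact formulation.

```python
import numpy as np
from scipy import ndimage

def three_class_target(instance_mask):
    """Convert an instance-labelled nuclei mask into background / interior /
    boundary classes, so a per-pixel classifier is supervised directly on
    boundaries (including edges between touching nuclei)."""
    target = np.zeros(instance_mask.shape, dtype=np.uint8)
    for lbl in np.unique(instance_mask):
        if lbl == 0:
            continue
        nucleus = instance_mask == lbl
        eroded = ndimage.binary_erosion(nucleus, iterations=2)
        target[eroded] = 1                  # nucleus interior
        target[nucleus & ~eroded] = 2       # boundary ring
    return target

# Two touching nuclei: their shared edge stays labelled as boundary,
# which is what lets a per-pixel classifier separate the instances.
demo = np.zeros((32, 32), dtype=int)
demo[8:20, 6:16] = 1
demo[8:20, 16:26] = 2
print(np.bincount(three_class_target(demo).ravel()))  # counts per class
```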
Collapse
|
486
|
Abstract
This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.
Collapse
Affiliation(s)
- Dinggang Shen
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina 27599;
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea;
| | - Guorong Wu
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina 27599;
| | - Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea;
| |
Collapse
|
487
|
An Advanced Deep Learning Approach for Ki-67 Stained Hotspot Detection and Proliferation Rate Scoring for Prognostic Evaluation of Breast Cancer. Sci Rep 2017; 7:3213. [PMID: 28607456 PMCID: PMC5468356 DOI: 10.1038/s41598-017-03405-5] [Citation(s) in RCA: 58] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2017] [Accepted: 04/26/2017] [Indexed: 02/08/2023] Open
Abstract
Being a non-histone protein, Ki-67 is one of the essential biomarkers for the immunohistochemical assessment of the proliferation rate in breast cancer screening and grading. The Ki-67 signature is also sensitive to radiotherapy and chemotherapy. Due to random morphological, color, and intensity variations of cell nuclei (immunopositive and immunonegative), manual/subjective assessment of the Ki-67 score is error-prone and time-consuming. Several machine learning approaches have therefore been reported; nevertheless, none of them has addressed deep learning based hotspot detection and proliferation scoring. In this article, we propose an advanced deep learning model for computerized recognition of candidate hotspots and subsequent proliferation rate scoring by quantifying Ki-67 appearance in breast cancer immunohistochemical images. Unlike existing Ki-67 scoring techniques, our methodology uses a Gamma mixture model (GMM) with Expectation-Maximization for seed point detection and patch selection, and a deep learning model with a decision layer for hotspot detection and proliferation scoring. Experimental results show a precision of 0.93, a recall of 0.88, and an F-score of 0.91. The model's performance has also been compared with pathologists' manual annotations and with recently published work. The proposed deep learning framework could be highly reliable and beneficial to junior and senior pathologists for fast and efficient Ki-67 scoring.
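The paper fits a Gamma mixture with EM for seed detection; as a rough stand-in (a deliberate simplification, not the paper's model), the sketch below fits a two-component Gaussian mixture to a synthetic DAB-intensity sample and reads off a positive fraction. All values are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two-component mixture on the DAB channel: one mode for immunonegative
# background, one for Ki-67-positive (brown) nuclei.
rng = np.random.default_rng(4)
dab = np.concatenate([rng.normal(0.2, 0.05, 8000),    # negative background
                      rng.normal(0.7, 0.10, 2000)])   # Ki-67 positive nuclei
gm = GaussianMixture(n_components=2, random_state=0).fit(dab.reshape(-1, 1))
positive = np.argmax(gm.means_.ravel())               # brighter-OD component
labels = gm.predict(dab.reshape(-1, 1))
proliferation_index = (labels == positive).mean()
print(f"estimated Ki-67 positive fraction: {proliferation_index:.2%}")
```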
Collapse
|
488
|
Araújo T, Aresta G, Castro E, Rouco J, Aguiar P, Eloy C, Polónia A, Campilho A. Classification of breast cancer histology images using Convolutional Neural Networks. PLoS One 2017; 12:e0177544. [PMID: 28570557 PMCID: PMC5453426 DOI: 10.1371/journal.pone.0177544] [Citation(s) in RCA: 345] [Impact Index Per Article: 43.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2016] [Accepted: 04/28/2017] [Indexed: 11/26/2022] Open
Abstract
Breast cancer is one of the main causes of cancer death worldwide. The diagnosis of biopsy tissue from hematoxylin and eosin stained images is non-trivial, and specialists often disagree on the final diagnosis. Computer-aided diagnosis systems help reduce the cost and increase the efficiency of this process. Conventional classification approaches rely on feature extraction methods designed for a specific problem based on field knowledge. To overcome the many difficulties of feature-based approaches, deep learning methods are becoming important alternatives. We propose a method for the classification of hematoxylin and eosin stained breast biopsy images using Convolutional Neural Networks (CNNs). Images are classified into four classes (normal tissue, benign lesion, in situ carcinoma, and invasive carcinoma) and into two classes (carcinoma and non-carcinoma). The architecture of the network is designed to retrieve information at different scales, covering both nuclei and overall tissue organization. This design allows the proposed system to be extended to whole-slide histology images. The features extracted by the CNN are also used to train a Support Vector Machine classifier. Accuracies of 77.8% for the four-class problem and 83.3% for carcinoma/non-carcinoma are achieved. The sensitivity of our method for cancer cases is 95.6%.
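The CNN-features-into-SVM step can be sketched as follows; a pretrained ResNet-18 stands in for the paper's custom CNN (an assumption), with random tensors in place of real, normalised patches.

```python
import torch
import torchvision.models as models
from torch import nn
from sklearn.svm import SVC

# Feature extractor: pretrained backbone with the classifier head removed
# (downloads weights on first use).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()          # expose the 512-d penultimate features
backbone.eval()

def extract_features(batch):         # batch: (N, 3, 224, 224) normalised patches
    with torch.no_grad():
        return backbone(batch).numpy()

# Hypothetical data: 32 patches, 4 classes (normal / benign / in situ / invasive).
patches = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 4, (32,)).numpy()
svm = SVC(kernel="rbf").fit(extract_features(patches), labels)
print(svm.predict(extract_features(patches[:4])))
```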
Collapse
Affiliation(s)
- Teresa Araújo
- Faculdade de Engenharia da Universidade do Porto (FEUP), R. Dr. Roberto Frias s/n, 4200-465 Porto, Portugal
- Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência (INESC-TEC), R. Dr. Roberto Frias, 4200 Porto, Portugal
| | - Guilherme Aresta
- Faculdade de Engenharia da Universidade do Porto (FEUP), R. Dr. Roberto Frias s/n, 4200-465 Porto, Portugal
- Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência (INESC-TEC), R. Dr. Roberto Frias, 4200 Porto, Portugal
| | - Eduardo Castro
- Faculdade de Engenharia da Universidade do Porto (FEUP), R. Dr. Roberto Frias s/n, 4200-465 Porto, Portugal
| | - José Rouco
- Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência (INESC-TEC), R. Dr. Roberto Frias, 4200 Porto, Portugal
| | - Paulo Aguiar
- Instituto de Investigação e Inovação em Saúde (i3S), Universidade do Porto, Rua Alfredo Allen, 208, 4200-135 Porto, Portugal
- Instituto de Engenharia Biomédica (INEB), Universidade do Porto, Rua Alfredo Allen, 208, 4200-135 Porto, Portugal
| | - Catarina Eloy
- Laboratório de Anatomia Patológica, Ipatimup Diagnósticos, Rua Júlio Amaral de Carvalho, 45, 4200-135 Porto, Portugal
- Faculdade de Medicina, Universidade do Porto, Alameda Prof Hernâni Monteiro, 4200-319 Porto, Portugal
| | - António Polónia
- Laboratório de Anatomia Patológica, Ipatimup Diagnósticos, Rua Júlio Amaral de Carvalho, 45, 4200-135 Porto, Portugal
- Faculdade de Medicina, Universidade do Porto, Alameda Prof Hernâni Monteiro, 4200-319 Porto, Portugal
| | - Aurélio Campilho
- Faculdade de Engenharia da Universidade do Porto (FEUP), R. Dr. Roberto Frias s/n, 4200-465 Porto, Portugal
- Instituto de Engenharia de Sistemas e Computadores - Tecnologia e Ciência (INESC-TEC), R. Dr. Roberto Frias, 4200 Porto, Portugal
| |
Collapse
|
489
|
Kost H, Homeyer A, Molin J, Lundström C, Hahn HK. Training Nuclei Detection Algorithms with Simple Annotations. J Pathol Inform 2017; 8:21. [PMID: 28584683 PMCID: PMC5450511 DOI: 10.4103/jpi.jpi_3_17] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2017] [Accepted: 03/17/2017] [Indexed: 12/27/2022] Open
Abstract
BACKGROUND Generating good training datasets is essential for machine learning-based nuclei detection methods. However, creating exhaustive nuclei contour annotations from which to derive optimal training data is often infeasible. METHODS We compared different approaches for training nuclei detection methods solely on nucleus center markers. Such markers contain less accurate information, especially with regard to nuclear boundaries, but can be produced much more easily and in greater quantities. The approaches use different automated sample extraction methods to derive image positions and class labels from nucleus center markers. In addition, they use different automated sample selection methods to improve the detection quality of the classification algorithm and reduce the run time of the training process. We evaluated the approaches based on a previously published generic nuclei detection algorithm and a set of Ki-67-stained breast cancer images. RESULTS A Voronoi tessellation-based sample extraction method produced the best-performing training sets. However, subsampling of the extracted training samples was crucial: even simple class balancing improved the detection quality considerably, and incorporating active learning led to a further increase. CONCLUSIONS With appropriate sample extraction and selection methods, nuclei detection algorithms trained on simple center marker annotations can reach a quality comparable to that of algorithms trained on conventionally created training sets.
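A minimal version of tessellation-based sample extraction from center markers might look like the following: each pixel is assigned to its nearest marker, and a small disc around each center is labelled positive. The radius and labelling scheme are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_labels(shape, centers, radius=6):
    """Derive per-pixel training labels from nucleus centre markers alone.

    Pixels within `radius` of their nearest centre become positive samples,
    remaining pixels negative; the nearest-centre (Voronoi) assignment keeps
    labels consistent where nuclei are close together.
    """
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    pix = np.column_stack([yy.ravel(), xx.ravel()])
    dist, owner = cKDTree(centers).query(pix)
    labels = (dist <= radius).astype(np.uint8).reshape(shape)
    return labels, owner.reshape(shape)

centers = np.array([[10, 10], [12, 20], [30, 32]])   # hypothetical markers
labels, owner = voronoi_labels((48, 48), centers)
print("positive pixels:", int(labels.sum()))
```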
Collapse
Affiliation(s)
- Henning Kost
- Fraunhofer Institute for Medical Image Computing MEVIS, 28359 Bremen, Germany
| | - André Homeyer
- Fraunhofer Institute for Medical Image Computing MEVIS, 28359 Bremen, Germany
| | - Jesper Molin
- Department of Applied Information Technology, Chalmers University of Technology, 41258 Gothenburg, Sweden.,Sectra AB, 58330 Linköping, Sweden.,Center for Medical Image Science and Visualization, Linköping University, 58183 Linköping, Sweden
| | - Claes Lundström
- Sectra AB, 58330 Linköping, Sweden.,Center for Medical Image Science and Visualization, Linköping University, 58183 Linköping, Sweden
| | - Horst Karl Hahn
- Fraunhofer Institute for Medical Image Computing MEVIS, 28359 Bremen, Germany
| |
Collapse
|
490
|
夏 靖, 纪 小. Value of deep learning and intelligent image diagnosis in the pathological diagnosis of well-differentiated gastric adenocarcinoma. Shijie Huaren Xiaohua Zazhi 2017; 25:1043-1049. [DOI: 10.11569/wcjd.v25.i12.1043] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
With the development of computer technology, machine learning has been studied in depth and applied across many fields. Its application in medicine will transform current medical practice: using machine learning to process the vast amounts of data in medicine can improve diagnostic accuracy, guide treatment, and assess prognosis. Deep learning, a branch of machine learning, has been widely applied to intelligent image-based pathological diagnosis and has already achieved good results in mitosis detection, nuclear segmentation and detection, and tissue classification. Histopathologically, well-differentiated gastric adenocarcinoma is easily missed because its architectural and cytological atypia is subtle and biopsy specimens are often superficial. Existing intelligent image diagnosis systems for early gastric cancer have not investigated glandular lumen roundness; roundness measurement can convert features such as irregular or dilated glandular lumina into quantitative numerical indices, enabling diagnostic analysis based on concrete values and providing a reference for pathological diagnosis.
Collapse
|
491
|
Valkonen M, Kartasalo K, Liimatainen K, Nykter M, Latonen L, Ruusuvuori P. Metastasis detection from whole slide images using local features and random forests. Cytometry A 2017; 91:555-565. [PMID: 28426134 DOI: 10.1002/cyto.a.23089] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
Abstract
Digital pathology has led to a demand for automated detection of regions of interest, such as cancerous tissue, from scanned whole slide images. With accurate methods using image analysis and machine learning, significant speed-ups and cost savings could be achieved through increased throughput in histological assessment. This article describes a machine learning approach for detecting cancerous tissue in scanned whole slide images. Our method is based on feature engineering and supervised learning with a random forest model. The features extracted from the whole slide images include several local descriptors related to image texture, spatial structure, and the distribution of nuclei. The method was evaluated on breast cancer metastasis detection from lymph node samples. Our results show that the method detects metastatic areas with high accuracy (AUC = 0.97-0.98 for tumor detection within the whole image area, AUC = 0.84-0.91 for tumor vs. normal tissue detection) and that it generalizes well to images from more than one laboratory. Further, the method outputs an interpretable classification model, enabling individual features to be linked to differences between tissue types. © 2017 International Society for Advancement of Cytometry.
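The supervised stage of such a pipeline reduces to fitting a random forest on per-tile descriptors and scoring with AUC; a skeleton with synthetic features standing in for the paper's texture, structure, and nuclei descriptors:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 40 local descriptors per image tile, binary tumor label.
rng = np.random.default_rng(5)
X = rng.normal(size=(2000, 40))
w = rng.normal(size=40)
y = (X @ w + rng.normal(0, 2, 2000) > 0).astype(int)   # tumor / normal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
prob = forest.predict_proba(X_te)[:, 1]                # per-tile tumor probability
print(f"AUC = {roc_auc_score(y_te, prob):.3f}")
# Feature importances give the interpretability the authors highlight.
print("most important feature index:", int(np.argmax(forest.feature_importances_)))
```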
Collapse
Affiliation(s)
- Mira Valkonen
- BioMediTech and Faculty of Medicine and Life Sciences, University of Tampere, Tampere, Finland.,BioMediTech Institute and Faculty of Biomedical Science and Engineering, Tampere University of Technology, Tampere, Finland
| | - Kimmo Kartasalo
- BioMediTech and Faculty of Medicine and Life Sciences, University of Tampere, Tampere, Finland.,BioMediTech Institute and Faculty of Biomedical Science and Engineering, Tampere University of Technology, Tampere, Finland
| | - Kaisa Liimatainen
- BioMediTech and Faculty of Medicine and Life Sciences, University of Tampere, Tampere, Finland.,BioMediTech Institute and Faculty of Biomedical Science and Engineering, Tampere University of Technology, Tampere, Finland
| | - Matti Nykter
- BioMediTech and Faculty of Medicine and Life Sciences, University of Tampere, Tampere, Finland.,BioMediTech Institute and Faculty of Biomedical Science and Engineering, Tampere University of Technology, Tampere, Finland
| | - Leena Latonen
- BioMediTech and Faculty of Medicine and Life Sciences, University of Tampere, Tampere, Finland
| | - Pekka Ruusuvuori
- BioMediTech and Faculty of Medicine and Life Sciences, University of Tampere, Tampere, Finland.,Faculty of Computing and Electrical Engineering, Tampere University of Technology, Pori, Finland
| |
Collapse
|
492
|
Khoshdeli M, Cong R, Parvin B. Detection of Nuclei in H&E Stained Sections Using Convolutional Neural Networks. ... IEEE-EMBS INTERNATIONAL CONFERENCE ON BIOMEDICAL AND HEALTH INFORMATICS. IEEE-EMBS INTERNATIONAL CONFERENCE ON BIOMEDICAL AND HEALTH INFORMATICS 2017; 2017:105-108. [PMID: 28580455 DOI: 10.1109/bhi.2017.7897216] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Detection of nuclei is an important step in the phenotypic profiling of histology sections, which are usually imaged in bright field. However, nuclei can have multiple phenotypes, which are difficult to model. We show that convolutional neural networks (CNNs) can learn different phenotypic signatures for nuclear detection, and that performance improves with a feature-based representation of the original image. The feature-based representation uses the Laplacian of Gaussian (LoG) filter, which accentuates blob-shaped objects. Several combinations of input representations are evaluated to show that the LoG representation advances the detection of nuclei. In addition, the efficacy of the CNN for vesicular and hyperchromatic nuclei is evaluated; in particular, the detection rate for nuclei with vesicular and apoptotic phenotypes is increased. The overall system has been evaluated against manually annotated nuclei, and F-scores for the alternative representations are reported.
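The LoG representation itself is a one-liner with scipy; the sketch below shows the filter accentuating a synthetic blob-shaped "nucleus" (sigma is an assumed, nucleus-scale value):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_channel(image, sigma=4.0):
    """Laplacian-of-Gaussian response, which accentuates blob-shaped nuclei;
    fed alongside (or instead of) raw intensity as CNN input."""
    response = -gaussian_laplace(image.astype(float), sigma=sigma)
    return (response - response.min()) / (np.ptp(response) + 1e-12)

# A synthetic 'nucleus': bright Gaussian blob on a noisy background.
rng = np.random.default_rng(6)
img = rng.normal(0.1, 0.02, (64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img += 0.8 * np.exp(-(((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 5.0 ** 2)))
blob = log_channel(img)
print("peak response at:", np.unravel_index(blob.argmax(), blob.shape))
```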
Collapse
Affiliation(s)
- Mina Khoshdeli
- Biomedical and Electrical Engineering Department, University of Nevada, Reno, NV, U.S.A
| | - Richard Cong
- Amador Valley High School, Pleasanton, Ca, U.S.A
| | - Bahram Parvin
- Biomedical and Electrical Engineering Department, University of Nevada, Reno, NV, U.S.A
| |
Collapse
|
493
|
Fatima K, Majeed H, Irshad H. Nuclear spatial and spectral features based evolutionary method for meningioma subtypes classification in histopathology. Microsc Res Tech 2017; 80:851-861. [PMID: 28379628 DOI: 10.1002/jemt.22874] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2017] [Accepted: 03/17/2017] [Indexed: 11/11/2022]
Abstract
Meningioma subtype classification is a real-world multiclass problem from the realm of neuropathology. The major challenge in solving this problem is the inherent complexity due to high intra-class variability and low inter-class variation in tissue samples. The development of computational methods to assist pathologists in characterizing these tissue samples would have great diagnostic and prognostic value. In this article, we propose an optimized evolutionary framework for the classification of benign meningioma into four subtypes. The framework investigates the imperative role of the RGB color channels in discriminating tumor subtypes and computes structural, statistical, and spectral phenotypes. A Genetic Algorithm, in combination with a Support Vector Machine, is applied to tune classifier parameters and to select the best possible combination of extracted phenotypes, improving the classification accuracy (94.88%) on the meningioma histology dataset provided by the Institute of Neuropathology, Bielefeld. These results show that the computational framework can robustly discriminate the four subtypes of benign meningioma and may aid pathologists in the diagnosis and classification of these lesions.
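A toy version of GA-driven SVM tuning, evolving (log C, log gamma) by elitist selection and Gaussian mutation on a synthetic dataset; the real framework also evolves the feature subset, which is omitted here for brevity.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(7)
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def fitness(genome):
    """Cross-validated accuracy of an RBF SVM; genome = (log10 C, log10 gamma)."""
    C, gamma = 10.0 ** genome
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

pop = rng.uniform(-3, 3, size=(12, 2))               # initial population
for generation in range(10):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-4:]]           # elitist selection
    children = parents[rng.integers(0, 4, (8,))] + rng.normal(0, 0.3, (8, 2))
    pop = np.vstack([parents, children])             # mutation-only offspring
best = pop[np.argmax([fitness(g) for g in pop])]
print(f"best C=10^{best[0]:.2f}, gamma=10^{best[1]:.2f}")
```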
Collapse
Affiliation(s)
- Kiran Fatima
- Department of Computer Science, National University of Computer and Emerging Sciences, A. K. Brohi Road, H-11/4, Islamabad, Pakistan
| | - Hammad Majeed
- Department of Computer Science, National University of Computer and Emerging Sciences, A. K. Brohi Road, H-11/4, Islamabad, Pakistan
| | - Humayun Irshad
- Department of Pathology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts
| |
Collapse
|
494
|
Vandenberghe ME, Scott MLJ, Scorer PW, Söderberg M, Balcerzak D, Barker C. Relevance of deep learning to facilitate the diagnosis of HER2 status in breast cancer. Sci Rep 2017; 7:45938. [PMID: 28378829 PMCID: PMC5380996 DOI: 10.1038/srep45938] [Citation(s) in RCA: 110] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2017] [Accepted: 03/06/2017] [Indexed: 11/10/2022] Open
Abstract
Tissue biomarker scoring by pathologists is central to defining the appropriate therapy for patients with cancer. Yet, inter-pathologist variability in the interpretation of ambiguous cases can affect diagnostic accuracy. Modern artificial intelligence methods such as deep learning have the potential to supplement pathologist expertise to ensure constant diagnostic accuracy. We developed a computational approach based on deep learning that automatically scores HER2, a biomarker that defines patient eligibility for anti-HER2 targeted therapies in breast cancer. In a cohort of 71 breast tumour resection samples, automated scoring showed a concordance of 83% with a pathologist. The twelve discordant cases were then independently reviewed, leading to a modification of diagnosis from initial pathologist assessment for eight cases. Diagnostic discordance was found to be largely caused by perceptual differences in assessing HER2 expression due to high HER2 staining heterogeneity. This study provides evidence that deep learning aided diagnosis can facilitate clinical decision making in breast cancer by identifying cases at high risk of misdiagnosis.
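Concordance of this kind is often summarized as raw agreement plus a chance-corrected kappa; a sketch on hypothetical four-level HER2 scores (0/1+/2+/3+), with discordant cases flagged for the kind of independent review the paper describes (all scores below are synthetic):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical automated vs pathologist HER2 scores on 71 resection samples;
# the paper reports 83% raw agreement on a cohort of this size.
rng = np.random.default_rng(10)
pathologist = rng.integers(0, 4, 71)
automated = np.where(rng.random(71) < 0.83, pathologist, rng.integers(0, 4, 71))

agreement = (pathologist == automated).mean()
kappa = cohen_kappa_score(pathologist, automated, weights="quadratic")
print(f"raw agreement {agreement:.2%}, weighted kappa {kappa:.2f}")

discordant = np.flatnonzero(pathologist != automated)   # cases to re-review
print("cases flagged for independent review:", discordant[:5], "...")
```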
Collapse
Affiliation(s)
- Michel E. Vandenberghe
- Personalised Healthcare & Biomarkers, IMED Biotech Unit, AstraZeneca, HODGKIN, C/o B310 Cambridge Science Park, Milton Road, Cambridge, CB4 0WG, United Kingdom
| | - Marietta L. J. Scott
- Personalised Healthcare & Biomarkers, IMED Biotech Unit, AstraZeneca, HODGKIN, C/o B310 Cambridge Science Park, Milton Road, Cambridge, CB4 0WG, United Kingdom
| | - Paul W. Scorer
- Personalised Healthcare & Biomarkers, IMED Biotech Unit, AstraZeneca, HODGKIN, C/o B310 Cambridge Science Park, Milton Road, Cambridge, CB4 0WG, United Kingdom
| | - Magnus Söderberg
- Pathology, Drug Safety & Metabolism, IMED Biotech Unit, AstraZeneca, Pepparedsleden 1, 431 50 Mölndal, Sweden
| | - Denis Balcerzak
- Personalised Healthcare & Biomarkers, IMED Biotech Unit, AstraZeneca, HODGKIN, C/o B310 Cambridge Science Park, Milton Road, Cambridge, CB4 0WG, United Kingdom
| | - Craig Barker
- Personalised Healthcare & Biomarkers, IMED Biotech Unit, AstraZeneca, HODGKIN, C/o B310 Cambridge Science Park, Milton Road, Cambridge, CB4 0WG, United Kingdom
| |
Collapse
|
495
|
Li S, Jiang H, Pang W. Joint multiple fully connected convolutional neural network with extreme learning machine for hepatocellular carcinoma nuclei grading. Comput Biol Med 2017; 84:156-167. [PMID: 28365546 DOI: 10.1016/j.compbiomed.2017.03.017] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2016] [Revised: 03/14/2017] [Accepted: 03/17/2017] [Indexed: 12/27/2022]
Abstract
Accurate cell grading of cancerous tissue from pathological images is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in the preprocessing stage, fixed-size grayscale image patches are obtained using the center-proliferation segmentation (CPS) method, and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract multi-form feature vectors from each input image automatically, taking sufficient account of the multi-scale contextual information in deep-layer maps. After that, a convolutional neural network extreme learning machine (CNN-ELM) model grades the HCC nuclei. Finally, a back propagation (BP) algorithm, which contains a new up-sampling method, is used to train the MFC-CNN-ELM architecture. Experimental comparisons demonstrate that the proposed MFC-CNN-ELM outperforms related work on HCC nuclei grading. Meanwhile, external validation on the ICPR 2014 HEp-2 cell dataset shows the good generalization ability of the architecture.
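The ELM head that replaces a conventional softmax classifier is small enough to sketch in full: a fixed random hidden layer followed by a closed-form least-squares readout. Layer width and input features below are placeholders, not the paper's configuration.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random hidden layer, closed-form
    (least-squares) output weights."""
    def __init__(self, n_hidden=256, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y, n_classes):
        d = X.shape[1]
        self.W = self.rng.normal(size=(d, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # random nonlinear projection
        T = np.eye(n_classes)[y]              # one-hot targets
        self.beta = np.linalg.pinv(H) @ T     # closed-form readout weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

rng = np.random.default_rng(8)
feats = rng.normal(size=(200, 64))            # stand-in CNN features
grades = rng.integers(0, 3, 200)              # hypothetical nuclei grades
elm = ELM().fit(feats, grades, n_classes=3)
print("train accuracy:", (elm.predict(feats) == grades).mean())
```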
Collapse
Affiliation(s)
- Siqi Li
- Software College, Northeastern University, Shenyang 110819, China.
| | - Huiyan Jiang
- Software College, Northeastern University, Shenyang 110819, China.
| | - Wenbo Pang
- Software College, Northeastern University, Shenyang 110819, China.
| |
Collapse
|
496
|
Murthy V, Hou L, Samaras D, Kurc TM, Saltz JH. Center-Focusing Multi-task CNN with Injected Features for Classification of Glioma Nuclear Images. IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION. IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION 2017; 2017:834-841. [PMID: 29881826 PMCID: PMC5988234 DOI: 10.1109/wacv.2017.98] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Classifying the various shapes and attributes of a glioma cell nucleus is crucial for diagnosis and understanding of the disease. We investigate the automated classification of the nuclear shapes and visual attributes of glioma cells, using Convolutional Neural Networks (CNNs) on pathology images of automatically segmented nuclei. We propose three methods that improve the performance of a previously developed semi-supervised CNN. First, we propose a method that allows the CNN to focus on the most important part of an image: the center containing the nucleus. Second, we inject (concatenate) pre-extracted VGG features into an intermediate layer of our semi-supervised CNN so that, during training, the CNN can learn a set of additional features. Third, we separate the losses of the two groups of target classes (nuclear shapes and attributes) into a single-label loss and a multi-label loss, in order to incorporate prior knowledge of inter-label exclusiveness. On a dataset of 2078 images, the combination of the proposed methods reduces the error rate of attribute and shape classification by 21.54% and 15.07%, respectively, compared with the existing state-of-the-art method on the same dataset.
Collapse
Affiliation(s)
| | | | | | - Tahsin M Kurc
- Stony Brook University, Oak Ridge National Laboratory
| | - Joel H Saltz
- Stony Brook University, Stony Brook University Hospital
| |
Collapse
|
497
|
Chen H, Zhang Y, Zhang W, Liao P, Li K, Zhou J, Wang G. Low-dose CT via convolutional neural network. BIOMEDICAL OPTICS EXPRESS 2017; 8:679-694. [PMID: 28270976 PMCID: PMC5330597 DOI: 10.1364/boe.8.000679] [Citation(s) in RCA: 332] [Impact Index Per Article: 41.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/11/2016] [Revised: 12/26/2016] [Accepted: 12/27/2016] [Indexed: 05/11/2023]
Abstract
In order to reduce the potential radiation risk, low-dose CT has attracted increasing attention. However, simply lowering the radiation dose significantly degrades image quality. In this paper, we propose a new noise reduction method for low-dose CT via deep learning, without access to the original projection data. A deep convolutional neural network is used to map low-dose CT images to their corresponding normal-dose counterparts in a patch-by-patch fashion. Qualitative results demonstrate the great potential of the proposed method for artifact reduction and structure preservation. In terms of quantitative metrics, the proposed method shows substantial improvements in PSNR, RMSE, and SSIM over the competing state-of-the-art methods. Furthermore, our method is one order of magnitude faster than iterative reconstruction and patch-based image denoising methods.
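A minimal patch-to-patch denoiser in this spirit, predicting the noise residual and scored by PSNR; the layer sizes and the single training step are assumptions for illustration, not the paper's architecture.

```python
import torch
from torch import nn

class DenoiseCNN(nn.Module):
    """Shallow residual mapping from low-dose to normal-dose patches."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):                 # predict and subtract the noise
        return x - self.net(x)

def psnr(ref, est, data_range=1.0):
    mse = torch.mean((ref - est) ** 2)
    return 10 * torch.log10(data_range ** 2 / mse)

model = DenoiseCNN()
low_dose = torch.rand(8, 1, 64, 64)       # hypothetical paired patches
normal_dose = torch.rand(8, 1, 64, 64)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(model(low_dose), normal_dose)
loss.backward()
opt.step()
print(f"PSNR after one step: {psnr(normal_dose, model(low_dose).detach()).item():.2f} dB")
```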
Collapse
Affiliation(s)
- Hu Chen
- College of Computer Science, Sichuan University, Chengdu 610065, China
- National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Chengdu 610065, China
| | - Yi Zhang
- College of Computer Science, Sichuan University, Chengdu 610065, China
| | - Weihua Zhang
- College of Computer Science, Sichuan University, Chengdu 610065, China
| | - Peixi Liao
- Department of Scientific Research and Education, The Sixth People’s Hospital of Chengdu, Chengdu 610065, China
| | - Ke Li
- College of Computer Science, Sichuan University, Chengdu 610065, China
- National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Chengdu 610065, China
| | - Jiliu Zhou
- College of Computer Science, Sichuan University, Chengdu 610065, China
| | - Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
| |
Collapse
|
498
|
Stain Deconvolution Using Statistical Analysis of Multi-Resolution Stain Colour Representation. PLoS One 2017; 12:e0169875. [PMID: 28076381 PMCID: PMC5226799 DOI: 10.1371/journal.pone.0169875] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2016] [Accepted: 12/23/2016] [Indexed: 01/16/2023] Open
Abstract
Stain colour estimation is a prominent factor in the analysis pipeline of most histology image processing algorithms, and a reliable and efficient stain colour deconvolution approach is fundamental for a robust algorithm. In this paper, we propose a novel method for stain colour deconvolution of histology images. The approach statistically analyses the multi-resolution representation of the image to separate the independent observations from the correlated ones, and then estimates the stain mixing matrix from the filtered, uncorrelated data. We conducted an extensive set of experiments comparing the proposed method with recent state-of-the-art methods, and we demonstrate its robustness on three different datasets of scanned slides prepared in different labs using different scanners.
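For context, the classic optical-density deconvolution that any estimated stain matrix plugs into; this is the standard Ruifrok-Johnston formulation with its published H&E vectors, not the paper's statistical estimation method.

```python
import numpy as np

# Standard Ruifrok & Johnston stain OD vectors (rows: H, E, residual/DAB).
HE_MATRIX = np.array([[0.650, 0.704, 0.286],
                      [0.072, 0.990, 0.105],
                      [0.268, 0.570, 0.776]])

def deconvolve(rgb):
    """rgb: (H, W, 3) uint8 image -> (H, W, 3) per-stain concentrations."""
    od = -np.log10(np.maximum(rgb.astype(float), 1) / 255.0)   # Beer-Lambert
    stains = od.reshape(-1, 3) @ np.linalg.inv(HE_MATRIX)      # od = c @ M
    return stains.reshape(rgb.shape)

rng = np.random.default_rng(9)
tile = rng.integers(60, 255, size=(32, 32, 3), dtype=np.uint8)  # stand-in tile
concentrations = deconvolve(tile)
print("hematoxylin channel range:",
      concentrations[..., 0].min().round(2), concentrations[..., 0].max().round(2))
```

Estimation methods such as the paper's replace the fixed HE_MATRIX with a per-slide mixing matrix learned from the image itself.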
Collapse
|
499
|
Review of Deep Learning Methods in Mammography, Cardiovascular, and Microscopy Image Analysis. DEEP LEARNING AND CONVOLUTIONAL NEURAL NETWORKS FOR MEDICAL IMAGE COMPUTING 2017. [DOI: 10.1007/978-3-319-42999-1_2] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
|
500
|
Automatic Recognition of Mild Cognitive Impairment from MRI Images Using Expedited Convolutional Neural Networks. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING – ICANN 2017 2017. [DOI: 10.1007/978-3-319-68600-4_43] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/14/2023]
|