401. Vu QD, Graham S, Kurc T, To MNN, Shaban M, Qaiser T, Koohbanani NA, Khurram SA, Kalpathy-Cramer J, Zhao T, Gupta R, Kwak JT, Rajpoot N, Saltz J, Farahani K. Methods for Segmentation and Classification of Digital Microscopy Tissue Images. Front Bioeng Biotechnol 2019;7:53. [PMID: 31001524; PMCID: PMC6454006; DOI: 10.3389/fbioe.2019.00053; Citation(s) in RCA: 110; Impact Index Per Article: 18.3]
Abstract
High-resolution microscopy images of tissue specimens provide detailed information about the morphology of normal and diseased tissue. Image analysis of tissue morphology can help cancer researchers develop a better understanding of cancer biology. Segmentation of nuclei and classification of tissue images are two common tasks in tissue image analysis. Development of accurate and efficient algorithms for these tasks is a challenging problem because of the complexity of tissue morphology and tumor heterogeneity. In this paper, we present two computer algorithms: one designed for segmentation of nuclei and the other for classification of whole slide tissue images. The segmentation algorithm implements a multiscale deep residual aggregation network to accurately segment nuclear material and then separate clumped nuclei into individual nuclei. The classification algorithm initially carries out patch-level classification via a deep learning method; patch-level statistical and morphological features are then used as input to a random forest regression model for whole slide image classification. The segmentation and classification algorithms were evaluated in the MICCAI 2017 Digital Pathology challenge. The segmentation algorithm achieved an accuracy score of 0.78 and the classification algorithm an accuracy score of 0.81; both scores were the highest in the challenge.
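The two-stage whole-slide classification scheme described above lends itself to a compact illustration. Below is a minimal sketch, assuming hypothetical inputs: per-patch class probabilities from a deep patch-level classifier are aggregated into slide-level statistical features that feed a random forest regressor. The aggregation statistics and model settings are illustrative choices, not the authors' exact pipeline.

```python
# Hedged sketch of a two-stage WSI classifier: patch-level CNN probabilities
# are summarized into slide-level features for a random forest regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def slide_features(patch_probs: np.ndarray) -> np.ndarray:
    """Aggregate per-patch class probabilities (n_patches x n_classes)
    into one slide-level feature vector of simple statistics."""
    return np.concatenate([
        patch_probs.mean(axis=0),                 # mean class probability
        patch_probs.std(axis=0),                  # spread across patches
        np.percentile(patch_probs, 90, axis=0),   # high-confidence tail
    ])

# Toy stand-in for CNN outputs on 20 slides (~900 tissue patches each, 4 classes).
rng = np.random.default_rng(0)
X = np.stack([slide_features(rng.random((900, 4))) for _ in range(20)])
y = rng.integers(0, 4, size=20).astype(float)     # slide-level labels/scores

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X, y)
print(rf.predict(X[:3]))
```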
Affiliation(s)
- Quoc Dang Vu: Department of Computer Science and Engineering, Sejong University, Seoul, South Korea
- Simon Graham: Department of Computer Science, University of Warwick, Coventry, United Kingdom
- Tahsin Kurc: Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, United States
- Minh Nguyen Nhat To: Department of Computer Science and Engineering, Sejong University, Seoul, South Korea
- Muhammad Shaban: Department of Computer Science, University of Warwick, Coventry, United Kingdom
- Talha Qaiser: Department of Computer Science, University of Warwick, Coventry, United Kingdom
- Syed Ali Khurram: School of Clinical Dentistry, The University of Sheffield, Sheffield, United Kingdom
- Jayashree Kalpathy-Cramer: Department of Radiology, Harvard Medical School and Mass General Hospital, Boston, MA, United States
- Tianhao Zhao: Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, United States; Department of Pathology, Stony Brook University, Stony Brook, NY, United States
- Rajarsi Gupta: Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, United States; Department of Pathology, Stony Brook University, Stony Brook, NY, United States
- Jin Tae Kwak: Department of Computer Science and Engineering, Sejong University, Seoul, South Korea
- Nasir Rajpoot: Department of Computer Science, University of Warwick, Coventry, United Kingdom
- Joel Saltz: Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, United States
- Keyvan Farahani: Cancer Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD, United States
402. Koelzer VH, Sirinukunwattana K, Rittscher J, Mertz KD. Precision immunoprofiling by image analysis and artificial intelligence. Virchows Arch 2019;474:511-522. [PMID: 30470933; PMCID: PMC6447694; DOI: 10.1007/s00428-018-2485-z; Citation(s) in RCA: 76; Impact Index Per Article: 12.7]
Abstract
Clinical success of immunotherapy is driving the need for new prognostic and predictive assays to inform patient selection and stratification. This requirement can be met by a combination of computational pathology and artificial intelligence. Here, we critically assess computational approaches supporting the development of a standardized methodology for the assessment of immuno-oncology biomarkers, such as PD-L1 and immune cell infiltrates. We examine immunoprofiling through spatial analysis of tumor-immune cell interactions and multiplexing technologies as a predictor of patient response to cancer treatment. Further, we discuss how integrated bioinformatics can enable the amalgamation of complex morphological phenotypes with the multiomics datasets that drive precision medicine. We provide an outline of machine learning (ML) and artificial intelligence tools and illustrate fields of application in immuno-oncology, such as pattern recognition in large and complex datasets and deep learning approaches for survival analysis. Synergies of surgical pathology and computational analyses are expected to improve patient stratification in immuno-oncology. We propose that future clinical demands will be best met by (1) dedicated research at the interface of pathology and bioinformatics, supported by professional societies, and (2) the integration of data sciences and digital image analysis in the professional education of pathologists.
Affiliation(s)
- Viktor H Koelzer: Institute of Cancer and Genomic Science, University of Birmingham, 6 Mindelsohn Way, Birmingham, B15 2SY, UK; Molecular and Population Genetics Laboratory, Wellcome Centre for Human Genetics, University of Oxford, Headington, Oxford, OX3 7BN, UK
- Korsuk Sirinukunwattana: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research Building, Headington, Oxford, OX3 7DQ, UK
- Jens Rittscher: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research Building, Headington, Oxford, OX3 7DQ, UK; Ludwig Institute for Cancer Research, Nuffield Department of Medicine, University of Oxford, Old Road Campus Research Building, Oxford, OX3 7DQ, UK; Target Discovery Institute, NDM Research Building, University of Oxford, Old Road Campus, Headington, OX3 7FZ, UK
- Kirsten D Mertz: Institute of Pathology, Cantonal Hospital Baselland, Mühlemattstrasse 11, CH-4410, Liestal, Switzerland
403. Saikia AR, Bora K, Mahanta LB, Das AK. Comparative assessment of CNN architectures for classification of breast FNAC images. Tissue Cell 2019;57:8-14. [PMID: 30947968; DOI: 10.1016/j.tice.2019.02.001; Citation(s) in RCA: 22; Impact Index Per Article: 3.7]
Abstract
Fine needle aspiration cytology (FNAC) entails using a narrow-gauge (25-22 G) needle to collect a sample of a lesion for microscopic examination. It allows a minimally invasive, rapid diagnosis of tissue but does not preserve its histological architecture. FNAC is commonly used for diagnosis of breast cancer, with traditional practice based on the subjective visual assessment of breast cytopathology cell samples under a microscope to evaluate the state of various cytological features. It is therefore challenging to maintain consistency and reproducibility of findings. However, the advent of digital imaging and computational aids to diagnosis can improve diagnostic accuracy and reduce the effective workload of pathologists. This paper presents a comparison of fine-tuned, transfer-learned classification approaches based on deep convolutional neural networks (CNNs) for the diagnosis of cell samples. The proposed approach was tested using the VGG16, VGG19, ResNet-50 and GoogLeNet-V3 (aka Inception V3) CNN architectures on an image dataset of 212 images (99 benign and 113 malignant), later augmented and cleansed to 2120 images (990 benign and 1130 malignant), with each network trained on 80% of the cell-sample images and tested on the rest. This comparative assessment of the models gives a new dimension to FNAC study, with fine-tuned GoogLeNet-V3 achieving a highly satisfactory accuracy of 96.25%.
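As a rough illustration of the fine-tuned transfer-learning setup this paper compares, the sketch below freezes ImageNet-pretrained VGG16 convolutional features and retrains a two-class head; VGG19, ResNet-50 or Inception-V3 slot in the same way. The freezing depth, learning rate and head size are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of fine-tuned transfer learning for benign/malignant patches.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                # keep ImageNet convolutional features

# Replace the classifier head for the 2-class benign/malignant task.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB patches.
x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```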
Affiliation(s)
- Amartya Ranjan Saikia: Department of Computer Science and Engineering, Assam Engineering College, Guwahati 781013, Assam, India
- Kangkana Bora: Centre for Computational and Numerical Sciences, Institute of Advanced Study in Science and Technology, Guwahati 781035, Assam, India
- Lipi B Mahanta: Centre for Computational and Numerical Sciences, Institute of Advanced Study in Science and Technology, Guwahati 781035, Assam, India
404. Pell R, Oien K, Robinson M, Pitman H, Rajpoot N, Rittscher J, Snead D, Verrill C. The use of digital pathology and image analysis in clinical trials. J Pathol Clin Res 2019;5:81-90. [PMID: 30767396; PMCID: PMC6463857; DOI: 10.1002/cjp2.127; Citation(s) in RCA: 69; Impact Index Per Article: 11.5]
Abstract
Digital pathology and image analysis potentially provide greater accuracy, reproducibility and standardisation of pathology-based trial entry criteria and endpoints, alongside extracting new insights from both existing and novel features. Image analysis has great potential to identify, extract and quantify features in greater detail in comparison to pathologist assessment, which may produce improved prediction models or perform tasks beyond manual capability. In this article, we provide an overview of the utility of such technologies in clinical trials and provide a discussion of the potential applications, current challenges, limitations and remaining unanswered questions that require addressing prior to routine adoption in such studies. We reiterate the value of central review of pathology in clinical trials, and discuss inherent logistical, cost and performance advantages of using a digital approach. The current and emerging regulatory landscape is outlined. The role of digital platforms and remote learning to improve the training and performance of clinical trial pathologists is discussed. The impact of image analysis on quantitative tissue morphometrics in key areas such as standardisation of immunohistochemical stain interpretation, assessment of tumour cellularity prior to molecular analytical applications and the assessment of novel histological features is described. The standardisation of digital image production, establishment of criteria for digital pathology use in pre-clinical and clinical studies, establishment of performance criteria for image analysis algorithms and liaison with regulatory bodies to facilitate incorporation of image analysis applications into clinical practice are key issues to be addressed to improve digital pathology incorporation into clinical trials.
Affiliation(s)
- Robert Pell: Nuffield Department of Surgical Sciences, University of Oxford, and Oxford NIHR Biomedical Research Centre, Oxford, UK
- Karin Oien: Institute of Cancer Sciences – Pathology, University of Glasgow, Glasgow, UK
- Max Robinson: Centre for Oral Health Research, Newcastle University, Newcastle upon Tyne, UK
- Helen Pitman: Strategy and Initiatives, National Cancer Research Institute, London, UK
- Nasir Rajpoot: Department of Computer Science, University of Warwick, Warwick, UK
- Jens Rittscher: Nuffield Department of Surgical Sciences, University of Oxford, and Oxford NIHR Biomedical Research Centre, Oxford, UK
- David Snead: Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK
- Clare Verrill: Nuffield Department of Surgical Sciences, University of Oxford, and Oxford NIHR Biomedical Research Centre, Oxford, UK
405. Shapcott M, Hewitt KJ, Rajpoot N. Deep Learning With Sampling in Colon Cancer Histology. Front Bioeng Biotechnol 2019;7:52. [PMID: 30972333; PMCID: PMC6445856; DOI: 10.3389/fbioe.2019.00052; Citation(s) in RCA: 41; Impact Index Per Article: 6.8]
Abstract
This study applied a deep-learning cell identification algorithm to diagnostic images from the colon cancer repository at The Cancer Genome Atlas (TCGA). Within-image sampling greatly improved processing speed with minimal loss of accuracy. The features thus derived were associated with various clinical variables including metastasis, residual tumor, venous invasion, and lymphatic invasion. The deep-learning algorithm was trained using images from a locally available data set, then applied to the TCGA images by tiling them and identifying cells in each patch defined by the tiling. In this application, the average number of patches containing tissue in an image was ~900, so processing a random sample of patches greatly reduced computation costs. The cell identification algorithm was applied directly to each sampled patch, resulting in a list of cells. Each cell was labeled with its location and classification ("epithelial," "inflammatory," "fibroblast," or "other"). The number of cells of each type in the patch was calculated, resulting in a patch profile containing four features, and a morphological profile for the entire image was obtained by averaging profiles over all patches. Two sampling policies were examined: random sampling, which samples patches with uniform weighting, and systematic random sampling, which takes spatial dependencies into account. Compared with processing complete whole-slide images, there was a seven-fold improvement in performance when systematic random spatial sampling was used to select 100 tiles from the whole-slide image, with very little loss of accuracy (~4% on average). We found links between the predicted features and clinical variables in the TCGA colon cancer data set. Several significant associations were found: increased fibroblast numbers were associated with the presence of metastasis, venous invasion, lymphatic invasion and residual tumor, while decreased numbers of inflammatory cells were associated with mucinous carcinomas. For each of the four cell types, deep learning generated morphological features that are indicators of cell density. The features are related to cellularity: the number, degree, or quality of cells present in a tumor. Cellularity has been reported to be related to patient survival and other diagnostic and prognostic indicators, indicating that the features calculated here may be of general usefulness.
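The two sampling policies compared in this abstract can be sketched in a few lines: uniform random sampling of tiles versus systematic random sampling, which spreads the sample across the slide by taking one tile from each spatial block. The grid size and sample count below are illustrative assumptions.

```python
# Hedged sketch of uniform random vs. systematic random tile sampling.
import numpy as np

def random_sample(tiles: np.ndarray, n: int, rng) -> np.ndarray:
    """Uniform random sample of n tile indices."""
    return rng.choice(len(tiles), size=n, replace=False)

def systematic_random_sample(tiles: np.ndarray, n: int, rng) -> np.ndarray:
    """Order tiles by grid position, then take one tile at random from each
    of n equal-sized consecutive blocks, so the sample covers the slide."""
    order = np.lexsort((tiles[:, 1], tiles[:, 0]))   # row-major spatial order
    blocks = np.array_split(order, n)
    return np.array([rng.choice(b) for b in blocks])

rng = np.random.default_rng(0)
# ~900 tissue tiles, each identified by its (row, col) grid coordinates.
tiles = np.argwhere(np.ones((30, 30), dtype=bool))
picked = systematic_random_sample(tiles, 100, rng)
print(len(picked), tiles[picked][:5])
```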
Affiliation(s)
- Mary Shapcott: Department of Computer Science, University of Warwick, Coventry, United Kingdom
- Katherine J Hewitt: Cellular Pathology Department, University Hospital of Coventry and Warwickshire, Coventry, United Kingdom
- Nasir Rajpoot: Department of Computer Science, University of Warwick, Coventry, United Kingdom; Cellular Pathology Department, University Hospital of Coventry and Warwickshire, Coventry, United Kingdom
406. NHL Pathological Image Classification Based on Hierarchical Local Information and GoogLeNet-Based Representations. Biomed Res Int 2019;2019:1065652. [PMID: 31016181; PMCID: PMC6448331; DOI: 10.1155/2019/1065652; Citation(s) in RCA: 13; Impact Index Per Article: 2.2]
Abstract
Background: Accurate classification of different non-Hodgkin lymphomas (NHL) is one of the main challenges in clinical pathological diagnosis due to its intrinsic complexity. Therefore, this paper proposes an effective classification model for three types of NHL pathological images: mantle cell lymphoma (MCL), follicular lymphoma (FL), and chronic lymphocytic leukemia (CLL). Methods: There are three main parts to our model. First, NHL pathological images stained by hematoxylin and eosin (H&E) are transferred into blue ratio (BR) and Lab spaces, respectively. Specific patch-level textural and statistical features are then extracted from BR images, and color features are obtained from Lab images, both in a hierarchical way, yielding a set of hand-crafted representations corresponding to the different image spaces; a random forest classifier is subsequently trained for patch-level classification. Second, H&E images are cropped and fed into a pretrained Google Inception Net (GoogLeNet) for learning high-level representations, and a softmax classifier is used for patch-level classification. Finally, three image-level classification strategies based on patch-level results are discussed, including a novel method for calculating the weighted sum of patch results, and the different classification results are fused at both the feature and image levels to obtain a more satisfactory result. Results: The proposed model is evaluated on the public IICBU Malignant Lymphoma dataset and achieves an improved overall accuracy of 0.991 and an area under the receiver operating characteristic curve of 0.998. Conclusion: The experiments demonstrate the significantly increased classification performance of the proposed model, indicating that it is a suitable classification approach for NHL pathological images.
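The blue-ratio (BR) space mentioned in the Methods is commonly computed with the transform sketched below, which amplifies blue (hematoxylin, i.e., nuclear) content relative to the other channels. This is the widely used BR formula; the exact constants the authors used may differ.

```python
# Hedged sketch of the blue-ratio transform for H&E histology patches.
import numpy as np

def blue_ratio(rgb: np.ndarray) -> np.ndarray:
    """Map an RGB image (H x W x 3, uint8) to a single-channel blue-ratio
    image that emphasizes blue (nuclear) content."""
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    return (100.0 * b / (1.0 + r + g)) * (256.0 / (1.0 + r + g + b))

patch = (np.random.default_rng(0).random((64, 64, 3)) * 255).astype(np.uint8)
br = blue_ratio(patch)
print(br.shape, float(br.min()), float(br.max()))
```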
407. Liu C, Li HC, Liao W, Philips W, Emery WJ. Variational Textured Dirichlet Process Mixture Model with Pairwise Constraint for Unsupervised Classification of Polarimetric SAR Images. IEEE Trans Image Process 2019;28:4145-4160. [PMID: 30892209; DOI: 10.1109/tip.2019.2906009; Citation(s) in RCA: 4; Impact Index Per Article: 0.7]
Abstract
This paper proposes an unsupervised classification method for multilook polarimetric synthetic aperture radar (PolSAR) data. The proposed method simultaneously deals with the heterogeneity and incorporates the local correlation in PolSAR images. Specifically, within the probabilistic framework of the Dirichlet process mixture model (DPMM), an observed PolSAR data point is described by the multiplication of a Wishart-distributed component and a class-dependent random variable (i.e., the texture variable). This modeling scheme leads to the proposed textured DPMM (tDPMM), which possesses more flexibility in characterizing PolSAR data in heterogeneous areas and from high-resolution images due to the introduction of the class-dependent texture variable. The proposed tDPMM is learned by solving an optimization problem to achieve its Bayesian inference. With the knowledge of this optimization-based learning, the local correlation is incorporated through a pairwise constraint, which integrates an appropriate penalty term into the objective function so as to encourage neighboring pixels to fall into the same category and to alleviate the "salt-and-pepper" classification appearance. We develop the learning algorithm with all closed-form updates. The performance of the proposed method is evaluated with both low-resolution and high-resolution PolSAR images, which involve homogeneous, heterogeneous, and extremely heterogeneous areas. The experimental results reveal that the class-dependent texture variable is beneficial to PolSAR image classification and the pairwise constraint can effectively incorporate the local correlation in PolSAR images.
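In our own notation, the product model this abstract describes can be written as follows; the symbols (observed covariance C_i, texture variable τ_i, speckle term W_i, class label z_i, number of looks L, class center Σ_k) are assumptions for illustration, not the authors' notation.

```latex
% Hedged sketch of the tDPMM observation model described above.
\[
  C_i \;=\; \tau_i \, W_i,
  \qquad
  W_i \mid (z_i = k) \;\sim\; \mathcal{W}_{\mathbb{C}}(L, \Sigma_k),
  \qquad
  \tau_i \mid (z_i = k) \;\sim\; p_k(\tau),
\]
% where z_i follows a Dirichlet-process prior over classes, so the number of
% mixture components need not be fixed in advance.
```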
408. The present and future of deep learning in radiology. Eur J Radiol 2019;114:14-24. [PMID: 31005165; DOI: 10.1016/j.ejrad.2019.02.038; Citation(s) in RCA: 182; Impact Index Per Article: 30.3]
Abstract
The advent of Deep Learning (DL) is poised to dramatically change the delivery of healthcare in the near future. Not only has DL profoundly affected the healthcare industry, it has also influenced global businesses. Within a span of very few years, advances such as self-driving cars, robots performing jobs that are hazardous to humans, and chatbots talking with human operators have proved that DL has already made a large impact on our lives. The open-source nature of DL and decreasing prices of computer hardware will further propel such changes. In healthcare, the potential is immense due to the need to automate processes and develop error-free paradigms. The sheer quantum of DL publications in healthcare has surpassed other domains, growing at a very fast pace, particularly in radiology. It is therefore imperative for radiologists to learn about DL and how it differs from other approaches to Artificial Intelligence (AI). The next generation of radiology will see a significant role for DL, which will likely serve as the base for augmented radiology (AR). Better clinical judgement by AR will help in improving quality of life and in life-saving decisions, while lowering healthcare costs. A comprehensive review of DL as well as its implications for healthcare is presented here. We analysed 150 articles on DL in the healthcare domain from PubMed, Google Scholar, and IEEE Xplore, focused on medical imagery only. We further examined the ethical, moral and legal issues surrounding the use of DL in medical imaging.
409. Xing F, Cornish TC, Bennett T, Ghosh D, Yang L. Pixel-to-Pixel Learning With Weak Supervision for Single-Stage Nucleus Recognition in Ki67 Images. IEEE Trans Biomed Eng 2019;66:3088-3097. [PMID: 30802845; DOI: 10.1109/tbme.2019.2900378; Citation(s) in RCA: 22; Impact Index Per Article: 3.7]
Abstract
OBJECTIVE: Nucleus recognition is a critical yet challenging step in histopathology image analysis, for example in Ki67 immunohistochemistry-stained images. Although many automated methods have been proposed, most use a multi-stage processing pipeline to categorize nuclei, leading to cumbersome, low-throughput, and error-prone assessments. To address this issue, we propose a novel deep fully convolutional network for single-stage nucleus recognition. METHODS: Instead of conducting direct pixel-wise classification, we formulate nucleus identification as a deep structured regression model. For each input image, it produces multiple proximity maps, each of which corresponds to one nucleus category and exhibits strong responses in central regions of the nuclei. In addition, taking into consideration the nucleus distribution in histopathology images, we introduce an auxiliary task, region of interest (ROI) extraction, to assist and boost the nucleus quantification with weak ROI annotation. The proposed network can be learned in an end-to-end, pixel-to-pixel manner for simultaneous nucleus detection and classification. RESULTS: We have evaluated this network on a pancreatic neuroendocrine tumor Ki67 image dataset, and the experiments demonstrate that our method outperforms recent state-of-the-art approaches. CONCLUSION: We present a new pixel-to-pixel deep neural network with two sibling branches for effective nucleus recognition and observe that learning with another relevant task, ROI extraction, can further boost individual nucleus localization and classification. SIGNIFICANCE: Our method provides a clean, single-stage nucleus recognition pipeline for histopathology image analysis, and in particular a new perspective on Ki67 image quantification, which could benefit individual object quantification in whole-slide images.
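To make the proximity-map idea concrete, here is a small sketch of how such a regression target could be built from dot annotations: each nucleus centroid contributes a response that peaks at the center and decays with distance, and detection reduces to finding local maxima. The decay function and radius are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch of a proximity-map regression target from dot annotations.
import numpy as np

def proximity_map(shape, centroids, radius=6.0):
    """Build an (H x W) map with a peak at each (row, col) centroid."""
    h, w = shape
    rr, cc = np.mgrid[0:h, 0:w]
    out = np.zeros(shape)
    for r0, c0 in centroids:
        d2 = (rr - r0) ** 2 + (cc - c0) ** 2
        # Response decays with squared distance, zero outside the radius.
        out = np.maximum(out, np.where(d2 <= radius**2,
                                       1.0 / (1.0 + 0.2 * d2), 0.0))
    return out

target = proximity_map((64, 64), [(20, 20), (40, 45)])
print(target.max(), target[20, 20], target[40, 45])
```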
410. Effland A, Kobler E, Brandenburg A, Klatzer T, Neuhäuser L, Hölzel M, Landsberg J, Pock T, Rumpf M. Joint reconstruction and classification of tumor cells and cell interactions in melanoma tissue sections with synthesized training data. Int J Comput Assist Radiol Surg 2019;14:587-599. [PMID: 30779021; PMCID: PMC6420907; DOI: 10.1007/s11548-019-01919-z; Citation(s) in RCA: 5; Impact Index Per Article: 0.8]
Abstract
Purpose: Cancers are almost always diagnosed by morphologic features in tissue sections. In this context, machine learning tools provide new opportunities to describe tumor-immune cell interactions within the tumor microenvironment and thus provide phenotypic information that might be predictive of the response to immunotherapy. Methods: We develop a machine learning approach using variational networks for joint image denoising and classification of tissue sections for melanoma, which is an established model tumor for immuno-oncology research. Manual annotation of real training data would require substantial user interaction by experienced pathologists for each single training image, and the training of larger networks would rely on a very large number of such data sets with ground-truth annotation. To overcome this bottleneck, we synthesize training data together with a proper tissue structure classification. To this end, a stochastic data generation process is used to mimic cell morphology, cell distribution and tissue architecture in the tumor microenvironment. Particular components of this tool are random placement and rotation of a large number of patches for presegmented cell nuclei, a stochastic fast-marching approach to mimic the geometry of cells, and texture generation based on a color covariance analysis of real data. The generated training data thus reflect a large range of interaction patterns. Results: In several applications to histological tissue sections, we analyze the efficiency and accuracy of the proposed approach. Depending on the scenario considered, almost all cells and nuclei that ought to be detected are in fact detected and classified, and hardly any misclassifications occur. Conclusions: The proposed method allows for computer-aided screening of histological tissue sections using variational networks, with a particular emphasis on tumor-immune cell interactions and robust cell nucleus classification.
Affiliation(s)
- Alexander Effland: Institute for Numerical Simulation, University of Bonn, Bonn, Germany
- Erich Kobler: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Anne Brandenburg: Department of Dermatology and Allergy, University of Bonn, Bonn, Germany
- Teresa Klatzer: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Leonie Neuhäuser: Institute for Numerical Simulation, University of Bonn, Bonn, Germany
- Michael Hölzel: Institute of Clinical Chemistry and Clinical Pharmacology, University of Bonn, Bonn, Germany
- Jennifer Landsberg: Department of Dermatology and Allergy, University of Bonn, Bonn, Germany
- Thomas Pock: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Martin Rumpf: Institute for Numerical Simulation, University of Bonn, Bonn, Germany
411. Liimatainen K, Kananen L, Latonen L, Ruusuvuori P. Iterative unsupervised domain adaptation for generalized cell detection from brightfield z-stacks. BMC Bioinformatics 2019;20:80. [PMID: 30767778; PMCID: PMC6376647; DOI: 10.1186/s12859-019-2605-z; Citation(s) in RCA: 4; Impact Index Per Article: 0.7]
Abstract
BACKGROUND: Cell counting from cell cultures is required in multiple biological and biomedical research applications. In particular, accurate brightfield-based cell counting methods are needed for cell growth analysis. With deep learning, cells can be detected with high accuracy, but manually annotated training data is required. We propose a method for cell detection that requires annotated training data for one cell line only and generalizes to other, unseen cell lines. RESULTS: Training a deep learning model with one cell line only can provide accurate detections for similar unseen cell lines (domains). However, if the new domain is very dissimilar from the training domain, high precision but lower recall is achieved. The generalization capability of the model can be improved with training data transformations, but only to a certain degree. To further improve the detection accuracy on unseen domains, we propose an iterative unsupervised domain adaptation method. High-precision predictions on unseen cell lines enable automatic generation of training data, which is used to train the model together with parts of the previously used annotated training data. We used a U-Net-based model and three consecutive focal planes from brightfield image z-stacks. We trained the model initially with the PC-3 cell line and used the LNCaP, BT-474 and 22Rv1 cell lines as target domains for domain adaptation. The highest improvement in accuracy was achieved for 22Rv1 cells: the F1-score after supervised training was only 0.65, but after unsupervised domain adaptation we achieved a score of 0.84. Mean accuracy for the target domains was 0.87, with a mean improvement of 16 percent. CONCLUSIONS: With our method for generalized cell detection, we can train a model that accurately detects different cell lines from brightfield images. A new cell line can be introduced to the model without a single manual annotation, and after iterative domain adaptation the model is ready to detect these cells with high accuracy.
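The iterative self-training loop at the heart of this method can be sketched compactly. The toy example below uses a stand-in scikit-learn classifier on synthetic features instead of the paper's U-Net on brightfield z-stacks; the 0.9 confidence threshold, three iterations, and the source/pseudo-label mix are illustrative assumptions.

```python
# Hedged sketch of iterative unsupervised domain adaptation via self-training:
# confident predictions on the unlabeled target domain become pseudo-labels
# and are mixed with part of the source training data for retraining.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Xs, ys = rng.normal(0, 1, (200, 8)), rng.integers(0, 2, 200)  # source (labeled)
Xt = rng.normal(0.5, 1, (300, 8))                             # target (unlabeled)

model = LogisticRegression(max_iter=1000).fit(Xs, ys)
for it in range(3):
    proba = model.predict_proba(Xt).max(axis=1)
    keep = proba > 0.9                        # high-precision predictions only
    if not keep.any():
        break
    X_mix = np.vstack([Xs[:100], Xt[keep]])   # part of source + pseudo-labels
    y_mix = np.concatenate([ys[:100], model.predict(Xt[keep])])
    model = LogisticRegression(max_iter=1000).fit(X_mix, y_mix)
    print(f"iteration {it}: {keep.sum()} pseudo-labeled target samples")
```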
Affiliation(s)
- Kaisa Liimatainen: Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Lauri Kananen: Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Leena Latonen: Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland; Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Pekka Ruusuvuori: Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
412. Heeke S, Delingette H, Fanjat Y, Long-Mira E, Lassalle S, Hofman V, Benzaquen J, Marquette CH, Hofman P, Ilié M. [The age of artificial intelligence in lung cancer pathology: Between hope, gloom and perspectives]. Ann Pathol 2019;39:130-136. [PMID: 30772062; DOI: 10.1016/j.annpat.2019.01.003; Citation(s) in RCA: 4; Impact Index Per Article: 0.7]
Abstract
Histopathology is the fundamental tool of pathology, used for more than a century to establish the final diagnosis of lung cancer. In addition, the phenotypic data contained in histological images reflect the overall effect of molecular alterations on the behavior of cancer cells and provide a practical visual readout of the aggressiveness of the disease. However, human evaluation of histological images is sometimes subjective and may lack reproducibility. Therefore, computational analysis of histological images using so-called "artificial intelligence" (AI) approaches has recently received considerable attention as a way to improve diagnostic accuracy. Computational analysis of lung cancer images has recently been evaluated for the optimization of histological or cytological classification and for prediction of prognosis or of the genomic profile of patients with lung cancer. This rapidly growing field continues to demonstrate great power in computational medical imaging, producing highly accurate detection, segmentation and recognition results. However, several challenges and issues still need to be addressed before this approach can be successfully transferred into clinical routine. The objective of this review is to highlight recent applications of AI in pulmonary cancer pathology, and to clarify the advantages and limitations of the approach, as well as the steps required for a potential transfer into clinical routine.
Affiliation(s)
- Simon Heeke: Laboratoire de pathologie clinique et expérimentale/biobanque (BB 0033-00025), Fédération hospitalo-universitaire OncoAge, CHU de Nice, université Côte-d'Azur, 30, voie Romaine, 06000 Nice, France; Équipe 4, CNRS UMR7284, Inserm U1081, faculté de médecine, institut de recherche sur le cancer et le vieillissement de Nice (Ircan), 28, avenue de Valombrose, 06107 Nice, France
- Hervé Delingette: Équipe Asclepios, Inria Sophia-Antipolis, université Côte-d'Azur, 2004, route des Lucioles, 06902 Sophia-Antipolis, France
- Youta Fanjat: Laboratoire de pathologie clinique et expérimentale/biobanque (BB 0033-00025), Fédération hospitalo-universitaire OncoAge, CHU de Nice, université Côte-d'Azur, 30, voie Romaine, 06000 Nice, France
- Elodie Long-Mira: Laboratoire de pathologie clinique et expérimentale/biobanque (BB 0033-00025), Fédération hospitalo-universitaire OncoAge, CHU de Nice, université Côte-d'Azur, 30, voie Romaine, 06000 Nice, France; Équipe 4, CNRS UMR7284, Inserm U1081, faculté de médecine, institut de recherche sur le cancer et le vieillissement de Nice (Ircan), 28, avenue de Valombrose, 06107 Nice, France
- Sandra Lassalle: Laboratoire de pathologie clinique et expérimentale/biobanque (BB 0033-00025), Fédération hospitalo-universitaire OncoAge, CHU de Nice, université Côte-d'Azur, 30, voie Romaine, 06000 Nice, France; Équipe 4, CNRS UMR7284, Inserm U1081, faculté de médecine, institut de recherche sur le cancer et le vieillissement de Nice (Ircan), 28, avenue de Valombrose, 06107 Nice, France
- Véronique Hofman: Laboratoire de pathologie clinique et expérimentale/biobanque (BB 0033-00025), Fédération hospitalo-universitaire OncoAge, CHU de Nice, université Côte-d'Azur, 30, voie Romaine, 06000 Nice, France; Équipe 4, CNRS UMR7284, Inserm U1081, faculté de médecine, institut de recherche sur le cancer et le vieillissement de Nice (Ircan), 28, avenue de Valombrose, 06107 Nice, France
- Jonathan Benzaquen: Équipe 4, CNRS UMR7284, Inserm U1081, faculté de médecine, institut de recherche sur le cancer et le vieillissement de Nice (Ircan), 28, avenue de Valombrose, 06107 Nice, France; Service de pneumologie, Fédération hospitalo-universitaire OncoAge, CHU de Nice, université Côte-d'Azur, 30, voie Romaine, 06000 Nice, France
- Charles-Hugo Marquette: Service de pneumologie, Fédération hospitalo-universitaire OncoAge, CHU de Nice, université Côte-d'Azur, 30, voie Romaine, 06000 Nice, France
- Paul Hofman: Laboratoire de pathologie clinique et expérimentale/biobanque (BB 0033-00025), Fédération hospitalo-universitaire OncoAge, CHU de Nice, université Côte-d'Azur, 30, voie Romaine, 06000 Nice, France; Équipe 4, CNRS UMR7284, Inserm U1081, faculté de médecine, institut de recherche sur le cancer et le vieillissement de Nice (Ircan), 28, avenue de Valombrose, 06107 Nice, France
- Marius Ilié: Laboratoire de pathologie clinique et expérimentale/biobanque (BB 0033-00025), Fédération hospitalo-universitaire OncoAge, CHU de Nice, université Côte-d'Azur, 30, voie Romaine, 06000 Nice, France; Équipe 4, CNRS UMR7284, Inserm U1081, faculté de médecine, institut de recherche sur le cancer et le vieillissement de Nice (Ircan), 28, avenue de Valombrose, 06107 Nice, France
413. Xu J, Gong L, Wang G, Lu C, Gilmore H, Zhang S, Madabhushi A. Convolutional neural network initialized active contour model with adaptive ellipse fitting for nuclear segmentation on breast histopathological images. J Med Imaging (Bellingham) 2019;6:017501. [PMID: 30840729; DOI: 10.1117/1.jmi.6.1.017501; Citation(s) in RCA: 9; Impact Index Per Article: 1.5]
Abstract
Automated detection and segmentation of nuclei from high-resolution histopathological images is a challenging problem owing to the size and complexity of digitized histopathologic images. In the context of breast cancer, morphological and topological nuclear features are highly correlated with modified Bloom-Richardson grading. Therefore, to develop a computer-aided prognosis system, automated detection and segmentation of nuclei are critical prerequisite steps. We present a method for automated detection and segmentation of breast cancer nuclei named convolutional neural network initialized active contour model with adaptive ellipse fitting (CoNNACaeF). The CoNNACaeF model is able to detect and segment nuclei simultaneously and consists of three modules: (1) a convolutional neural network (CNN) for accurate nuclei detection, (2) a region-based active contour (RAC) model for subsequent nuclear segmentation based on the initial CNN-based detection of nuclear patches, and (3) adaptive ellipse fitting for resolving overlap in clumped nuclear regions. The performance of the CoNNACaeF model is evaluated on three different breast histological data sets, comprising a total of 257 H&E-stained images. The model achieves improved detection accuracy, with F-measures of 80.18%, 85.71%, and 80.36% and average areas under the precision-recall curve (AveP) of 77%, 82%, and 74% on a total of 3 million nuclei from 204 whole slide images from the three datasets. Additionally, CoNNACaeF yields F-measures of 74.01% and 85.36% on two further breast cancer datasets. The CoNNACaeF model also outperforms three other state-of-the-art nuclear detection and segmentation approaches: blue ratio initialized, iterative radial voting initialized, and maximally stable extremal region initialized local region active contour models.
Affiliation(s)
- Jun Xu: Nanjing University of Information Science and Technology, Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing, China
- Lei Gong: Nanjing University of Information Science and Technology, Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing, China
- Guanhao Wang: Nanjing University of Information Science and Technology, Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing, China
- Cheng Lu: Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States
- Hannah Gilmore: University Hospitals Case Medical Center, Case Western Reserve University, Institute for Pathology, Cleveland, Ohio, United States
- Shaoting Zhang: University of North Carolina at Charlotte, Department of Computer Science, Charlotte, North Carolina, United States
- Anant Madabhushi: Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio, United States
414. Investigation of Polymer Coatings Formed by Polyvinyl Alcohol and Silver Nanoparticles on Copper Surface in Acid Medium by Means of Deep Convolutional Neural Networks. Coatings 2019. [DOI: 10.3390/coatings9020105; Citation(s) in RCA: 12; Impact Index Per Article: 2.0]
Abstract
In order to assemble effective protective coatings against corrosion, electrochemical techniques such as linear potentiometry and cyclic voltammetry were performed on a copper surface in 0.1 mol·L−1 HCl solution containing 0.1% polyvinyl alcohol (PVA) in the absence and presence of silver nanoparticles (nAg/PVA). A recent paradigm was used to distinguish the features of the coatings: a deep convolutional neural network (CNN) was implemented to automatically and hierarchically extract discriminative characteristics from optical microscopy images. In our study, analysis of the material surface morphology, performed by the CNN without human interference, successfully extracted the similarities and differences between unprotected and protected surfaces in order to establish the performance of PVA and nAg/PVA in retarding copper corrosion. The CNN results were confirmed by classical investigation of copper behavior in hydrochloric acid solution in the absence and presence of polyvinyl alcohol and silver nanoparticles. The electrochemical measurements showed that the corrosion current density (icorr) decreased and the polarization resistance (Rp) increased, with both PVA and nAg/PVA being effective inhibitors of copper corrosion in an acid environment, forming protective polymer coatings by adsorption on the metal surface. Furthermore, scanning electron microscopy (SEM) confirms the formation of polymer coatings, revealing a specific morphology of the copper surface in the presence of PVA and nAg/PVA, very different from that of corroded copper in uninhibited solutions. Finally, the correlation of the CNN information with the experimental data is reported.
415. Graham S, Chen H, Gamper J, Dou Q, Heng PA, Snead D, Tsang YW, Rajpoot N. MILD-Net: Minimal information loss dilated network for gland instance segmentation in colon histology images. Med Image Anal 2019;52:199-211. [PMID: 30594772; DOI: 10.1016/j.media.2018.12.001; Citation(s) in RCA: 126; Impact Index Per Article: 21.0]
Abstract
The analysis of glandular morphology within colon histopathology images is an important step in determining the grade of colon cancer. Despite the importance of this task, manual segmentation is laborious, time-consuming and can suffer from subjectivity among pathologists. The rise of computational pathology has led to the development of automated methods for gland segmentation that aim to overcome the challenges of manual segmentation. However, this task is non-trivial due to the large variability in glandular appearance and the difficulty in differentiating between certain glandular and non-glandular histological structures. Furthermore, a measure of uncertainty is essential for diagnostic decision making. To address these challenges, we propose a fully convolutional neural network that counters the loss of information caused by max-pooling by re-introducing the original image at multiple points within the network. We also use atrous spatial pyramid pooling with varying dilation rates to preserve resolution and aggregate multi-level features. To incorporate uncertainty, we apply random transformations during test time, producing an enhanced segmentation result that simultaneously generates an uncertainty map highlighting areas of ambiguity. We show that this map can be used to define a metric for disregarding predictions with high uncertainty. The proposed network achieves state-of-the-art performance on the GlaS challenge dataset and on a second independent colorectal adenocarcinoma dataset. In addition, we perform gland instance segmentation on whole-slide images from two further datasets to highlight the generalisability of our method. As an extension, we introduce MILD-Net+ for simultaneous gland and lumen segmentation, to increase the diagnostic power of the network.
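The test-time random-transformation idea can be illustrated with a small sketch: segment several transformed copies of the input, undo each transform on the prediction, then take the per-pixel mean as the enhanced segmentation and the per-pixel variance as the uncertainty map. The flip-only transform set and the noisy stand-in `segment` function are assumptions for illustration, not MILD-Net's exact procedure.

```python
# Hedged sketch of test-time augmentation producing an uncertainty map.
import numpy as np

def segment(img: np.ndarray, rng) -> np.ndarray:
    """Stand-in for the network: a squashed response plus small model noise."""
    return 1.0 / (1.0 + np.exp(-(img - img.mean()))) + rng.normal(0, 0.01, img.shape)

def tta_segment(img: np.ndarray, n: int = 8, seed: int = 0):
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n):
        flip_h, flip_v = rng.random() < 0.5, rng.random() < 0.5
        t = img[::-1] if flip_v else img           # random flips at test time
        t = t[:, ::-1] if flip_h else t
        p = segment(t, rng)
        p = p[::-1] if flip_v else p               # map prediction back
        p = p[:, ::-1] if flip_h else p
        preds.append(p)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.var(axis=0)   # segmentation, uncertainty

seg, unc = tta_segment(np.random.default_rng(1).random((32, 32)))
print(seg.shape, float(unc.max()))
```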
Affiliation(s)
- Simon Graham: Mathematics for Real World Systems Centre for Doctoral Training, University of Warwick, Coventry, CV4 7AL, UK; Department of Computer Science, University of Warwick, UK
- Hao Chen: Department of Computer Science and Engineering, The Chinese University of Hong Kong, China
- Jevgenij Gamper: Mathematics for Real World Systems Centre for Doctoral Training, University of Warwick, Coventry, CV4 7AL, UK; Department of Computer Science, University of Warwick, UK
- Qi Dou: Department of Computer Science and Engineering, The Chinese University of Hong Kong, China
- Pheng-Ann Heng: Department of Computer Science and Engineering, The Chinese University of Hong Kong, China
- David Snead: Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK
- Yee Wah Tsang: Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK
- Nasir Rajpoot: Department of Computer Science, University of Warwick, UK; Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK; The Alan Turing Institute, London, UK
416. Hou L, Nguyen V, Kanevsky AB, Samaras D, Kurc TM, Zhao T, Gupta RR, Gao Y, Chen W, Foran D, Saltz JH. Sparse Autoencoder for Unsupervised Nucleus Detection and Representation in Histopathology Images. Pattern Recognition 2019;86:188-200. [PMID: 30631215; PMCID: PMC6322841; DOI: 10.1016/j.patcog.2018.09.007; Citation(s) in RCA: 55; Impact Index Per Article: 9.2]
Abstract
We propose a sparse Convolutional Autoencoder (CAE) for simultaneous nucleus detection and feature extraction in histopathology tissue images. Our CAE detects nuclei in image patches from tissue images and encodes them into sparse feature maps that capture both the location and appearance of nuclei. A primary contribution of our work is the development of an unsupervised detection network that exploits the characteristics of histopathology image patches. The pretrained nucleus detection and feature extraction modules in our CAE can be fine-tuned for supervised learning in an end-to-end fashion. We evaluate our method on four datasets and achieve state-of-the-art results. In addition, we are able to achieve comparable performance with only 5% of the fully-supervised annotation cost.
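As a generic approximation of the idea (not the authors' exact architecture), the sketch below penalizes the L1 norm of a convolutional autoencoder's bottleneck feature maps so that only a few spatial locations respond strongly, which is what lets the encoding double as a candidate-nucleus detector. Layer sizes and the penalty weight are illustrative assumptions.

```python
# Hedged sketch: a convolutional autoencoder with an L1 sparsity penalty on
# its bottleneck feature maps, trained only to reconstruct unlabeled patches.
import torch
import torch.nn as nn

class SparseCAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, 3, stride=2, padding=1), nn.ReLU(),  # sparse maps
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(8, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = SparseCAE()
x = torch.rand(4, 3, 64, 64)                       # unlabeled tissue patches
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x) + 1e-3 * z.abs().mean()  # L1 sparsity
loss.backward()
print(recon.shape, float(z.abs().mean()))
```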
Affiliation(s)
- Le Hou: Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Vu Nguyen: Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Ariel B Kanevsky: Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA; Montreal Institute for Learning Algorithms, University of Montreal, Montreal, Canada
- Dimitris Samaras: Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Tahsin M Kurc: Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA; Dept. of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA; Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Tianhao Zhao: Dept. of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA; Dept. of Pathology, Stony Brook University Medical Center, Stony Brook, NY, USA
- Rajarsi R Gupta: Dept. of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA; Dept. of Pathology, Stony Brook University Medical Center, Stony Brook, NY, USA
- Yi Gao: School of Biomedical Engineering, Health Science Center, Shenzhen University, China
- Wenjin Chen: Center for Biomedical Imaging & Informatics, Rutgers, the State University of New Jersey, New Brunswick, NJ, USA; Rutgers Cancer Institute of New Jersey, Rutgers, the State University of New Jersey, NJ, USA
- David Foran: Center for Biomedical Imaging & Informatics, Rutgers, the State University of New Jersey, New Brunswick, NJ, USA; Rutgers Cancer Institute of New Jersey, Rutgers, the State University of New Jersey, NJ, USA; Div. of Medical Informatics, Rutgers-Robert Wood Johnson Medical School, Piscataway Township, NJ, USA
- Joel H Saltz: Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA; Dept. of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA; Dept. of Pathology, Stony Brook University Medical Center, Stony Brook, NY, USA; Cancer Center, Stony Brook University Hospital, Stony Brook, NY, USA
417. Lichtblau D, Stoean C. Cancer diagnosis through a tandem of classifiers for digitized histopathological slides. PLoS One 2019;14:e0209274. [PMID: 30650087; PMCID: PMC6334911; DOI: 10.1371/journal.pone.0209274; Citation(s) in RCA: 28; Impact Index Per Article: 4.7]
Abstract
This study is concerned with the automated differentiation of histopathological slides from colon tissue with respect to four classes (healthy tissue and cancerous tissue of grades 1, 2 or 3) through an optimized ensemble of predictors. Six distinct classifiers with prediction accuracies ranging from 87% to 95% are considered for the task. The proposed method of combining them takes into account the probabilities assigned by the individual classifiers to each sample for each of the four classes, optimizes weights for each technique by differential evolution, and attains an accuracy that is significantly better than the individual results. Moreover, a degree of confidence is defined that allows the pathologists to separate the data into two distinct sets: one that is correctly classified with a high level of confidence, and the rest, which would need their further attention. The tandem is also validated on other benchmark data sets. The proposed methodology proves to be efficient in improving the classification accuracy of each algorithm taken separately and performs reasonably well on other data sets, even with default weights. In addition, by establishing a degree of confidence, the method becomes more viable for use by actual practitioners.
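The weight-optimization step lends itself to a compact sketch: each classifier outputs per-class probabilities, and differential evolution searches for classifier weights that maximize the accuracy of the weighted vote. The toy probabilities below stand in for the six real classifiers; the objective, bounds and iteration budget are illustrative assumptions.

```python
# Hedged sketch of ensemble weighting via differential evolution.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
n_samples, n_classes, n_clf = 120, 4, 6
y = rng.integers(0, n_classes, n_samples)
# probs[c, i, k]: classifier c's probability that sample i belongs to class k,
# built to be noisily correct so the weighting has something to improve.
probs = np.full((n_clf, n_samples, n_classes), 0.1)
for c in range(n_clf):
    correct = rng.random(n_samples) < rng.uniform(0.87, 0.95)
    guess = np.where(correct, y, rng.integers(0, n_classes, n_samples))
    probs[c, np.arange(n_samples), guess] += 0.6

def neg_accuracy(w):
    fused = np.tensordot(w, probs, axes=1)       # weighted sum over classifiers
    return -np.mean(fused.argmax(axis=1) == y)

result = differential_evolution(neg_accuracy, bounds=[(0, 1)] * n_clf,
                                seed=0, maxiter=50)
print("ensemble accuracy:", -result.fun, "weights:", result.x.round(2))
```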
Affiliation(s)
- Catalin Stoean: Faculty of Sciences, University of Craiova, Craiova, Romania
418. Khoshdeli M, Parvin B. Feature-Based Representation Improves Color Decomposition and Nuclear Detection Using a Convolutional Neural Network. IEEE Trans Biomed Eng 2019;65:625-634. [PMID: 29461964; DOI: 10.1109/tbme.2017.2711529; Citation(s) in RCA: 12; Impact Index Per Article: 2.0]
Abstract
Detection of nuclei is an important step in phenotypic profiling of 1) histology sections imaged in bright field and 2) colony formation in 3-D cell culture models imaged using confocal microscopy. It is shown that a feature-based representation of the original image improves color decomposition (CD) and subsequent nuclear detection using convolutional neural networks, independent of the imaging modality. The feature-based representation utilizes the Laplacian of Gaussian (LoG) filter, which accentuates blob-shaped objects. Moreover, in the case of samples imaged in bright field, the LoG response also provides the necessary initial statistics for CD using non-negative matrix factorization. Several permutations of input data representations and network architectures are evaluated to show that, by coupling improved CD with the LoG response of this representation, detection of nuclei is advanced. In particular, the frequencies of detection of nuclei with vesicular or necrotic phenotypes, or poor staining, are improved. The overall system has been evaluated against manually annotated images, and F-scores for the alternative representations and architectures are reported.
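The LoG-plus-NMF combination described above can be sketched in a few lines: a Laplacian of Gaussian filter accentuates blob-shaped nuclei, and its strongest responses select the pixels used to drive a non-negative matrix factorization of the color signal. The sigma, percentile threshold and NMF rank are illustrative assumptions, not the authors' exact settings.

```python
# Hedged sketch of LoG blob enhancement feeding an NMF color decomposition.
import numpy as np
from scipy.ndimage import gaussian_laplace
from sklearn.decomposition import NMF

rgb = np.random.default_rng(0).random((64, 64, 3))   # stand-in bright-field patch

# LoG response per channel; negated so bright blobs yield positive peaks.
log_resp = -np.stack(
    [gaussian_laplace(rgb[..., i], sigma=3) for i in range(3)], axis=-1)

# Optical-density transform, then a 2-component NMF (e.g., two stains) fit on
# the pixels with the strongest LoG response.
od = -np.log(np.clip(rgb, 1e-6, 1.0))
strong = log_resp.max(axis=-1) > np.percentile(log_resp.max(axis=-1), 90)
nmf = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
stain_concentrations = nmf.fit_transform(od[strong])  # per-pixel stain weights
print(nmf.components_.round(2))                       # estimated stain vectors
```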
419. Mittal H, Saraswat M. Classification of Histopathological Images Through Bag-of-Visual-Words and Gravitational Search Algorithm. Advances in Intelligent Systems and Computing 2019. [DOI: 10.1007/978-981-13-1595-4_18; Citation(s) in RCA: 21; Impact Index Per Article: 3.5]
420. Deep Instance-Level Hard Negative Mining Model for Histopathology Images. Lecture Notes in Computer Science 2019. [DOI: 10.1007/978-3-030-32239-7_57; Citation(s) in RCA: 8; Impact Index Per Article: 1.3]
421. Hernandez-Cabronero M, Sanchez V, Blanes I, Auli-Llinas F, Marcellin MW, Serra-Sagrista J. Mosaic-Based Color-Transform Optimization for Lossy and Lossy-to-Lossless Compression of Pathology Whole-Slide Images. IEEE Trans Med Imaging 2019;38:21-32. [PMID: 29994394; DOI: 10.1109/tmi.2018.2852685; Citation(s) in RCA: 4; Impact Index Per Article: 0.7]
Abstract
The use of whole-slide images (WSIs) in pathology entails stringent storage and transmission requirements because of their huge dimensions. Therefore, image compression is an essential tool to enable efficient access to these data. In particular, color transforms are needed to exploit the very high degree of inter-component correlation and obtain competitive compression performance. Even though the state-of-the-art color transforms remove some redundancy, they disregard important details of the compression algorithm applied after the transform. Therefore, their coding performance is not optimal. We propose an optimization method called mosaic optimization for designing irreversible and reversible color transforms simultaneously optimized for any given WSI and the subsequent compression algorithm. Mosaic optimization is designed to attain reasonable computational complexity and enable continuous scanner operation. Exhaustive experimental results indicate that, for JPEG 2000 at identical compression ratios, the optimized transforms yield images more similar to the original than the other state-of-the-art transforms. Specifically, irreversible optimized transforms outperform the Karhunen-Loève Transform in terms of PSNR (up to 1.1 dB), the HDR-VDP-2 visual distortion metric (up to 3.8 dB), and the accuracy of computer-aided nuclei detection tasks (F1 score up to 0.04 higher). In addition, reversible optimized transforms achieve PSNR, HDR-VDP-2, and nuclei detection accuracy gains of up to 0.9 dB, 7.1 dB, and 0.025, respectively, when compared with the reversible color transform in lossy-to-lossless compression regimes.
422. Sabeena Beevi K, Nair MS, Bindu G. Automatic mitosis detection in breast histopathology images using Convolutional Neural Network based deep transfer learning. Biocybern Biomed Eng 2019. [DOI: 10.1016/j.bbe.2018.10.007; Citation(s) in RCA: 32; Impact Index Per Article: 5.3]
423. Bergler M, Benz M, Rauber D, Hartmann D, Kötter M, Eckstein M, Schneider-Stock R, Hartmann A, Merkel S, Bruns V, Wittenberg T, Geppert C. Automatic Detection of Tumor Buds in Pan-Cytokeratin Stained Colorectal Cancer Sections by a Hybrid Image Analysis Approach. Digital Pathology 2019. [DOI: 10.1007/978-3-030-23937-4_10; Citation(s) in RCA: 5; Impact Index Per Article: 0.8]
|
424
|
Carse J, McKenna S. Active Learning for Patch-Based Digital Pathology Using Convolutional Neural Networks to Reduce Annotation Costs. DIGITAL PATHOLOGY 2019. [DOI: 10.1007/978-3-030-23937-4_3] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
|
425
|
Qaiser T, Pugh M, Margielewska S, Hollows R, Murray P, Rajpoot N. Digital Tumor-Collagen Proximity Signature Predicts Survival in Diffuse Large B-Cell Lymphoma. DIGITAL PATHOLOGY 2019:163-171. [DOI: 10.1007/978-3-030-23937-4_19] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
|
426
|
Ramesh N, Tasdizen T. Cell Segmentation Using a Similarity Interface With a Multi-Task Convolutional Neural Network. IEEE J Biomed Health Inform 2018; 23:1457-1468. [PMID: 30530343 DOI: 10.1109/jbhi.2018.2885544] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Even though convolutional neural networks (CNNs) have been used for cell segmentation, they require pixel-level ground truth annotations. This paper proposes a multitask learning algorithm for cell detection and segmentation using CNNs. We use dot annotations placed inside each cell, indicating approximate cell centroids, to create training datasets for the detection and segmentation tasks. The segmentation task maps the input image to foreground versus background regions, whereas the detection task predicts the centroids of the cells. Our multitask model shares convolutional layers between the two tasks while having task-specific output layers. Learning the two tasks simultaneously reduces the risk of overfitting and also helps to separate overlapping cells better. We also introduce a similarity interface (SI) that can be integrated with our multitask network to allow easy adaptation between domains and to compensate for the variability in contrast and texture of cells seen in microscopy images. The SI comprises an unsupervised first layer in combination with a neighborhood similarity layer (NSL). A layer of logistic sigmoid functions is used as the unsupervised first layer to separate clustered image patches from each other. The NSL transforms its input feature map at a given pixel by computing its similarity to the surrounding neighborhood. Our proposed method achieves detection and segmentation scores that are higher than, or comparable to, those of recent state-of-the-art methods, with significantly reduced effort for generating training data.
Collapse
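A minimal sketch of the shared-trunk, two-head design described above, assuming PyTorch; the layer sizes are illustrative, and the similarity interface is not reproduced here.

```python
import torch
import torch.nn as nn

class MultiTaskCellNet(nn.Module):
    """Sketch of the shared-encoder idea: one convolutional trunk,
    one head for foreground/background segmentation and one for
    cell-centroid detection maps."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, 1, 1)   # per-pixel foreground logit
        self.det_head = nn.Conv2d(64, 1, 1)   # per-pixel centroid logit

    def forward(self, x):
        features = self.trunk(x)               # shared between both tasks
        return self.seg_head(features), self.det_head(features)

# joint training objective over the two tasks
bce = nn.BCEWithLogitsLoss()
def multitask_loss(seg_logit, det_logit, seg_gt, det_gt, w=1.0):
    return bce(seg_logit, seg_gt) + w * bce(det_logit, det_gt)
```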
|
427
|
Song TH, Sanchez V, ElDaly H, Rajpoot NM. Simultaneous Cell Detection and Classification in Bone Marrow Histology Images. IEEE J Biomed Health Inform 2018; 23:1469-1476. [PMID: 30387756 DOI: 10.1109/jbhi.2018.2878945] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Recently, deep learning frameworks have been shown to be successful and efficient in processing digital histology images for various detection and classification tasks. Among these tasks, cell detection and classification are key steps in many computer-assisted diagnosis systems. Traditionally, cell detection and classification are performed as a sequence of two consecutive steps using two separate deep learning networks: one for detection and the other for classification. This strategy inevitably increases the computational complexity of the training stage. In this paper, we propose a synchronized deep autoencoder network for simultaneous detection and classification of cells in bone marrow histology images. The proposed network uses a single architecture to detect the positions of cells and classify the detected cells in parallel. It uses a curve-support Gaussian model to compute probability maps that allow irregularly shaped cells to be detected precisely. Moreover, the network includes a novel neighborhood selection mechanism to boost the classification accuracy. We show that the performance of the proposed network is superior to that of traditional deep learning detection methods and very competitive with traditional deep learning classification networks. Runtime comparison also shows that our network requires less time to train.
Collapse
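The curve-support Gaussian model used by the authors is specialized; the sketch below shows the plain isotropic version of the same idea, building a probability-map training target from dot annotations. The sigma value and the max-combination rule are our assumptions.

```python
import numpy as np

def gaussian_probability_map(shape, centroids, sigma=4.0):
    """Build a detection target map from dot annotations: each cell
    centre contributes an isotropic Gaussian peak. The paper uses a
    curve-support Gaussian variant; this plain version shows the idea."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    pmap = np.zeros(shape, dtype=np.float32)
    for cy, cx in centroids:
        g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        pmap = np.maximum(pmap, g)   # keep peaks distinct where cells touch
    return pmap
```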
|
428
|
Lindsey R, Daluiski A, Chopra S, Lachapelle A, Mozer M, Sicular S, Hanel D, Gardner M, Gupta A, Hotchkiss R, Potter H. Deep neural network improves fracture detection by clinicians. Proc Natl Acad Sci U S A 2018; 115:11429-11434. [PMID: 30348771 DOI: 10.1073/pnas.1806905115]
Abstract
Suspected fractures are among the most common reasons for patients to visit emergency departments (EDs), and X-ray imaging is the primary diagnostic tool used by clinicians to assess patients for fractures. Missing a fracture in a radiograph often has severe consequences for patients, resulting in delayed treatment and poor recovery of function. Nevertheless, radiographs in emergency settings are often read out of necessity by emergency medicine clinicians who lack subspecialized expertise in orthopedics, and misdiagnosed fractures account for upward of four of every five reported diagnostic errors in certain EDs. In this work, we developed a deep neural network to detect and localize fractures in radiographs. We trained it to accurately emulate the expertise of 18 senior subspecialized orthopedic surgeons by having them annotate 135,409 radiographs. We then ran a controlled experiment with emergency medicine clinicians to evaluate their ability to detect fractures in wrist radiographs with and without the assistance of the deep learning model. The average clinician's sensitivity was 80.8% (95% CI, 76.7-84.1%) unaided and 91.5% (95% CI, 89.3-92.9%) aided, and specificity was 87.5% (95% CI, 85.3-89.5%) unaided and 93.9% (95% CI, 92.9-94.9%) aided. The average clinician experienced a relative reduction in misinterpretation rate of 47.0% (95% CI, 37.4-53.9%). The significant improvements in diagnostic accuracy that we observed in this study show that deep learning methods are a mechanism by which senior medical specialists can deliver their expertise to generalists on the front lines of medicine, thereby providing substantial improvements to patient care.
Collapse
|
429
|
Nishida K, Hotta K. Robust cell particle detection to dense regions and subjective training samples based on prediction of particle center using convolutional neural network. PLoS One 2018; 13:e0203646. [PMID: 30303957 PMCID: PMC6179199 DOI: 10.1371/journal.pone.0203646] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2017] [Accepted: 08/26/2018] [Indexed: 01/09/2023] Open
Abstract
In recent years, cell images have been increasingly used to investigate the causes of pathogenesis. In this paper, we propose a cell particle detection method for cell images. Particle detection in cell images poses two main problems. The first is that cell images differ in character from the standard images used in computer vision research: edges of cell particles are ambiguous, and cell particles often overlap in dense regions, making it difficult to detect them with a simple binary classifier. The second concerns the ground truth produced by cell biologists: the number of training samples available for training a classifier is limited, and observer subjectivity introduces incorrect samples. We therefore propose a cell particle detection method that addresses these problems. In our method, we predict the center of a cell particle from its peripheral regions with a convolutional neural network, and the predictions are aggregated by voting. Because not all edges of overlapping particles are ambiguous, using the clearly visible peripheral edges lets us robustly detect overlapping cell particles, and voting from peripheral views makes the detection reliable. Moreover, the method is useful in practical applications because many training samples can be prepared from a single cell particle. In experiments, we evaluated our method on two cell detection datasets: a challenging synthetic-cell dataset, on which it achieved state-of-the-art performance, and a real dataset of lipid droplets, on which it outperformed a conventional CNN detector with binary particle/non-particle outputs.
Collapse
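A minimal sketch of the voting step described above, assuming a CNN has already predicted, for each peripheral patch, an offset to the particle centre; the function name and accumulator representation are illustrative, not the authors' code.

```python
import numpy as np

def vote_for_centers(patch_centers, predicted_offsets, shape):
    """Accumulate centre votes: each peripheral patch casts a vote at
    its predicted particle centre; local maxima of the accumulator are
    the detections."""
    acc = np.zeros(shape, dtype=np.float32)
    for (py, px), (dy, dx) in zip(patch_centers, predicted_offsets):
        y, x = int(round(py + dy)), int(round(px + dx))
        if 0 <= y < shape[0] and 0 <= x < shape[1]:
            acc[y, x] += 1.0
    return acc
```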
Affiliation(s)
- Kenshiro Nishida
- Department of Electrical and Electronic Engineering, Graduate School of Science and Technology, Meijo University, Nagoya-shi, Aichi, Japan
| | - Kazuhiro Hotta
- Department of Electrical and Electronic Engineering, Faculty of Science and Technology, Meijo University, Nagoya-shi, Aichi, Japan
| |
Collapse
|
430
|
Anwar SM, Majid M, Qayyum A, Awais M, Alnowami M, Khan MK. Medical Image Analysis using Convolutional Neural Networks: A Review. J Med Syst 2018; 42:226. [DOI: 10.1007/s10916-018-1088-1] [Citation(s) in RCA: 247] [Impact Index Per Article: 35.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2018] [Accepted: 09/25/2018] [Indexed: 01/03/2023]
|
431
|
Coudray N, Ocampo PS, Sakellaropoulos T, Narula N, Snuderl M, Fenyö D, Moreira AL, Razavian N, Tsirigos A. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nat Med 2018; 24:1559-1567. [PMID: 30224757 PMCID: PMC9847512 DOI: 10.1038/s41591-018-0177-5] [Citation(s) in RCA: 1481] [Impact Index Per Article: 211.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2017] [Accepted: 07/06/2018] [Indexed: 02/06/2023]
Abstract
Visual inspection of histopathology slides is one of the main methods used by pathologists to assess the stage, type and subtype of lung tumors. Adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC) are the most prevalent subtypes of lung cancer, and their distinction requires visual inspection by an experienced pathologist. In this study, we trained a deep convolutional neural network (inception v3) on whole-slide images obtained from The Cancer Genome Atlas to accurately and automatically classify them into LUAD, LUSC or normal lung tissue. The performance of our method is comparable to that of pathologists, with an average area under the curve (AUC) of 0.97. Our model was validated on independent datasets of frozen tissues, formalin-fixed paraffin-embedded tissues and biopsies. Furthermore, we trained the network to predict the ten most commonly mutated genes in LUAD. We found that six of them (STK11, EGFR, FAT1, SETBP1, KRAS and TP53) can be predicted from pathology images, with AUCs from 0.733 to 0.856 as measured on a held-out population. These findings suggest that deep-learning models can assist pathologists in the detection of cancer subtype or gene mutations. Our approach can be applied to any cancer type, and the code is available at https://github.com/ncoudray/DeepPATH.
Collapse
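A hedged sketch of the transfer-learning setup the abstract describes, using torchvision's inception v3 re-headed for the three classes. The authors' full tiling and training pipeline lives in their DeepPATH repository and is not reproduced here.

```python
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained inception v3 (expects 299x299 inputs), re-headed
# for three classes: LUAD, LUSC, normal lung tissue. The `pretrained`
# flag is the older torchvision API; newer versions use `weights=`.
model = models.inception_v3(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 3)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 3)
```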
Affiliation(s)
- Nicolas Coudray
- Applied Bioinformatics Laboratories, New York University School of Medicine, NY 10016, USA; Skirball Institute, Dept. of Cell Biology, New York University School of Medicine, NY 10016, USA
| | | | - Theodore Sakellaropoulos
- School of Mechanical Engineering, National Technical University of Athens, Zografou 15780, Greece
| | - Navneet Narula
- Department of Pathology, New York University School of Medicine, NY 10016, USA
| | - Matija Snuderl
- Department of Pathology, New York University School of Medicine, NY 10016, USA
| | - David Fenyö
- Institute for Systems Genetics, New York University School of Medicine, NY 10016, USA; Department of Biochemistry and Molecular Pharmacology, New York University School of Medicine, NY 10016, USA
| | - Andre L. Moreira
- Department of Pathology, New York University School of Medicine, NY 10016, USA; Center for Biospecimen Research and Development, New York University, NY 10016, USA
| | - Narges Razavian
- Department of Population Health and the Center for Healthcare Innovation and Delivery Science, New York University School of Medicine, NY 10016, USA
| | - Aristotelis Tsirigos
- Applied Bioinformatics Laboratories, New York University School of Medicine, NY 10016, USA; Department of Pathology, New York University School of Medicine, NY 10016, USA
| |
Collapse
|
432
|
Xing F, Xie Y, Su H, Liu F, Yang L. Deep Learning in Microscopy Image Analysis: A Survey. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2018; 29:4550-4568. [PMID: 29989994 DOI: 10.1109/tnnls.2017.2766168] [Citation(s) in RCA: 168] [Impact Index Per Article: 24.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
Computerized microscopy image analysis plays an important role in computer aided diagnosis and prognosis. Machine learning techniques have powered many aspects of medical investigation and clinical practice. Recently, deep learning has emerged as a leading machine learning tool in computer vision and has attracted considerable attention in biomedical image analysis. In this paper, we provide a snapshot of this fast-growing field, specifically for microscopy image analysis. We briefly introduce the popular deep neural networks and summarize current deep learning achievements in various tasks, such as detection, segmentation, and classification in microscopy image analysis. In particular, we explain the architectures and the principles of convolutional neural networks, fully convolutional networks, recurrent neural networks, stacked autoencoders, and deep belief networks, and interpret how they are formulated or adapted for specific tasks on various microscopy images. In addition, we discuss the open challenges and the potential trends of future research in microscopy image analysis using deep learning.
Collapse
|
433
|
Analysis on the potential of an EA–surrogate modelling tandem for deep learning parametrization: an example for cancer classification from medical images. Neural Comput Appl 2018. [DOI: 10.1007/s00521-018-3709-5] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
|
434
|
Höfener H, Homeyer A, Weiss N, Molin J, Lundström CF, Hahn HK. Deep learning nuclei detection: A simple approach can deliver state-of-the-art results. Comput Med Imaging Graph 2018; 70:43-52. [PMID: 30286333 DOI: 10.1016/j.compmedimag.2018.08.010] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2018] [Revised: 07/13/2018] [Accepted: 08/23/2018] [Indexed: 11/28/2022]
Abstract
BACKGROUND Deep convolutional neural networks have become a widespread tool for the detection of nuclei in histopathology images. Many implementations share a basic approach that includes generation of an intermediate map indicating the presence of a nucleus center, which we refer to as PMap. Nevertheless, these implementations often still differ in several parameters, resulting in different detection qualities. METHODS We identified several essential parameters and configured the basic PMap approach using combinations of them. We thoroughly evaluated and compared various configurations on multiple datasets with respect to detection quality, efficiency and training effort. RESULTS Post-processing of the PMap was found to have the largest impact on detection quality. Also, two different network architectures were identified that improve either detection quality or runtime performance. The best-performing configuration yields f1-measures of 0.816 on H&E stained images of colorectal adenocarcinomas and 0.819 on Ki-67 stained images of breast tumor tissue. On average, it was fully trained in less than 15,000 iterations and processed 4.15 megapixels per second at prediction time. CONCLUSIONS The basic PMap approach is greatly affected by certain parameters. Our evaluation provides guidance on their impact and best settings. When configured properly, this simple and efficient approach can yield equal detection quality as more complex and time-consuming state-of-the-art approaches.
Collapse
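A minimal sketch of the post-processing step the study found most impactful: peak-picking on the nucleus-centre probability map (PMap). The threshold and neighbourhood size are illustrative defaults, not the paper's tuned values.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_nuclei(pmap, min_distance=6, threshold=0.5):
    """Post-process a nucleus-centre probability map: keep pixels that
    are local maxima within a neighbourhood and exceed a probability
    threshold."""
    local_max = maximum_filter(pmap, size=2 * min_distance + 1)
    peaks = (pmap == local_max) & (pmap >= threshold)
    return np.argwhere(peaks)   # (row, col) coordinates of detections
```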
Affiliation(s)
| | - André Homeyer
- Fraunhofer MEVIS, Am Fallturm 1, 28359, Bremen, Germany.
| | - Nick Weiss
- Fraunhofer MEVIS, Am Fallturm 1, 28359, Bremen, Germany.
| | - Jesper Molin
- Sectra AB, Teknikringen 20, 58330, Linköping, Sweden.
| | - Claes F Lundström
- Sectra AB, Teknikringen 20, 58330, Linköping, Sweden; Center for Medical Image Science and Visualization, Linköping University, 58183, Linköping, Sweden.
| | - Horst K Hahn
- Fraunhofer MEVIS, Am Fallturm 1, 28359, Bremen, Germany; Jacobs University, Campus Ring 1, 28759, Bremen, Germany.
| |
Collapse
|
435
|
Sirinukunwattana K, Snead D, Epstein D, Aftab Z, Mujeeb I, Tsang YW, Cree I, Rajpoot N. Novel digital signatures of tissue phenotypes for predicting distant metastasis in colorectal cancer. Sci Rep 2018; 8:13692. [PMID: 30209315 PMCID: PMC6135776 DOI: 10.1038/s41598-018-31799-3] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2018] [Accepted: 08/07/2018] [Indexed: 12/18/2022] Open
Abstract
Distant metastasis is the major cause of death in colorectal cancer (CRC). Patients at high risk of developing distant metastasis could benefit from appropriate adjuvant and follow-up treatments if stratified accurately at an early stage of the disease. Studies have increasingly recognized the role of diverse cellular components within the tumor microenvironment in the development and progression of CRC tumors. In this paper, we show that automated analysis of digitized images from locally advanced colorectal cancer tissue slides can provide an estimate of the risk of distant metastasis on the basis of novel tissue phenotypic signatures of the tumor microenvironment. Specifically, we determine what cell types are found in the vicinity of other cell types, and in what numbers, rather than concentrating exclusively on the cancerous cells. We then extract novel tissue phenotypic signatures using statistical measurements about tissue composition. Such signatures can underpin clinical decisions about the advisability of various types of adjuvant therapy.
Collapse
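A rough sketch of the neighbourhood statistics underlying such signatures, assuming cell coordinates and type labels are already available; the radius and normalisation are our own choices, not the paper's.

```python
import numpy as np
from scipy.spatial import cKDTree

def cooccurrence_signature(coords, labels, n_types, radius=50.0):
    """Count, for each cell type, how many cells of every type lie
    within `radius` of it -- a simple version of 'which cell types are
    found in the vicinity of which, and in what numbers'."""
    tree = cKDTree(coords)
    counts = np.zeros((n_types, n_types))
    for i, neighbours in enumerate(tree.query_ball_point(coords, r=radius)):
        for j in neighbours:
            if j != i:
                counts[labels[i], labels[j]] += 1
    return counts / max(counts.sum(), 1)   # normalised co-occurrence matrix
```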
Affiliation(s)
| | - David Snead
- Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK
| | - David Epstein
- Mathematics Institute, University of Warwick, Coventry, UK
| | - Zia Aftab
- Hamad Medical Corporation, Doha, Qatar
| | | | - Yee Wah Tsang
- Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK
| | - Ian Cree
- International Agency for Research on Cancer, Lyon, France
| | - Nasir Rajpoot
- Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK.
- Department of Computer Science, University of Warwick, Coventry, UK.
- The Alan Turing Institute, London, UK.
| |
Collapse
|
436
|
High-throughput ovarian follicle counting by an innovative deep learning approach. Sci Rep 2018; 8:13499. [PMID: 30202115 PMCID: PMC6131397 DOI: 10.1038/s41598-018-31883-8] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2018] [Accepted: 08/02/2018] [Indexed: 01/16/2023] Open
Abstract
The evaluation of the number of mouse ovarian primordial follicles (PMF) can provide important information about ovarian function, regulation of folliculogenesis or the impact of chemotherapy on fertility. This counting, usually performed by specialized operators, is a tedious, time-consuming but indispensable procedure. The development and increasing use of deep machine learning algorithms promise to speed up and improve this process. Here, we present a new methodology for automatically detecting and counting PMF, using convolutional neural networks driven by labelled datasets and a sliding window algorithm to select test data. Trained on a database of 9 million images extracted from mouse ovaries, and tested on two ovaries (3 million images to classify and 2,000 follicles to detect), the algorithm processes the digitized histological slides of a complete ovary in less than one minute, dividing the usual processing time by a factor of about 30. It also outperforms the measurements made by a pathologist through optical detection. Its ability to correct label errors enables an active learning process with the operator, improving the overall counting iteratively. These results suggest that the methodology could be adapted to human ovarian follicles by transfer learning.
Collapse
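A minimal sketch of the sliding-window selection of test data mentioned above; the patch size and stride are illustrative, not the study's values.

```python
import numpy as np

def sliding_windows(image, patch=256, stride=128):
    """Yield (y, x, patch) tuples covering a digitised slide, so each
    window can be classified by the trained CNN in turn."""
    h, w = image.shape[:2]
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            yield y, x, image[y:y + patch, x:x + patch]
```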
|
437
|
Bernal J, Kushibar K, Asfaw DS, Valverde S, Oliver A, Martí R, Lladó X. Deep convolutional neural networks for brain image analysis on magnetic resonance imaging: a review. Artif Intell Med 2018; 95:64-81. [PMID: 30195984 DOI: 10.1016/j.artmed.2018.08.008] [Citation(s) in RCA: 150] [Impact Index Per Article: 21.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2016] [Revised: 04/25/2018] [Accepted: 08/27/2018] [Indexed: 02/07/2023]
Abstract
In recent years, deep convolutional neural networks (CNNs) have shown record-shattering performance in a variety of computer vision problems, such as visual object recognition, detection and segmentation. These methods have also been utilised in the medical image analysis domain for lesion segmentation, anatomical segmentation and classification. We present an extensive literature review of CNN techniques applied in brain magnetic resonance imaging (MRI) analysis, focusing on the architectures, pre-processing, data-preparation and post-processing strategies available in these works. The aim of this study is three-fold. Our primary goal is to report how different CNN architectures have evolved, discuss state-of-the-art strategies, condense their results obtained using public datasets and examine their pros and cons. Second, this paper is intended to be a detailed reference of the research activity in deep CNN for brain MRI analysis. Finally, we present a perspective on the future of CNNs in which we hint at some research directions for subsequent years.
Collapse
Affiliation(s)
- Jose Bernal
- Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain.
| | - Kaisar Kushibar
- Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain.
| | - Daniel S Asfaw
- Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain.
| | - Sergi Valverde
- Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain.
| | - Arnau Oliver
- Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain.
| | - Robert Martí
- Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain.
| | - Xavier Lladó
- Computer Vision and Robotics Institute, Dept. of Computer Architecture and Technology, University of Girona, Ed. P-IV, Av. Lluis Santaló s/n, 17003 Girona, Spain.
| |
Collapse
|
438
|
A deep convolutional neural network approach for astrocyte detection. Sci Rep 2018; 8:12878. [PMID: 30150631 PMCID: PMC6110828 DOI: 10.1038/s41598-018-31284-x] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2018] [Accepted: 08/10/2018] [Indexed: 12/26/2022] Open
Abstract
Astrocytes are involved in various brain pathologies including trauma, stroke, neurodegenerative disorders such as Alzheimer’s and Parkinson’s diseases, or chronic pain. Determining cell density in a complex tissue environment in microscopy images and elucidating the temporal characteristics of morphological and biochemical changes is essential to understand the role of astrocytes in physiological and pathological conditions. Nowadays, manual stereological cell counting or semi-automatic segmentation techniques are widely used for the quantitative analysis of microscopy images. Detecting astrocytes automatically is a highly challenging computational task, for which we currently lack efficient image analysis tools. We have developed a fast and fully automated software tool that assesses the number of astrocytes using Deep Convolutional Neural Networks (DCNN). The method substantially outperforms state-of-the-art image analysis and machine learning methods and provides precision comparable to that of human experts. Additionally, the runtime of cell detection is significantly less than that of the other three computational methods analysed, and it is faster than human observers by orders of magnitude. We applied our DCNN-based method to examine the number of astrocytes in different brain regions of rats with opioid-induced hyperalgesia/tolerance (OIH/OIT), as morphine tolerance is believed to activate glia. We have demonstrated a strong positive correlation between manual and DCNN-based quantification of astrocytes in rat brain.
Collapse
|
439
|
Khoshdeli M, Winkelmaier G, Parvin B. Fusion of encoder-decoder deep networks improves delineation of multiple nuclear phenotypes. BMC Bioinformatics 2018; 19:294. [PMID: 30086715 PMCID: PMC6081825 DOI: 10.1186/s12859-018-2285-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2018] [Accepted: 07/16/2018] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Nuclear segmentation is an important step for profiling aberrant regions of histology sections. If nuclear segmentation can be resolved, then new biomarkers of nuclear phenotypes and their organization can be predicted for the application of precision medicine. However, segmentation is a complex problem as a result of variations in nuclear geometry (e.g., size, shape), nuclear type (e.g., epithelial, fibroblast), nuclear phenotypes (e.g., vesicular, aneuploidy), and overlapping nuclei. The problem is further complicated as a result of variations in sample preparation (e.g., fixation, staining). Our hypothesis is that (i) deep learning techniques can learn complex phenotypic signatures that arise in tumor sections, and (ii) fusion of different representations (e.g., regions, boundaries) contributes to improved nuclear segmentation. RESULTS We have demonstrated that training deep encoder-decoder convolutional networks overcomes complexities associated with multiple nuclear phenotypes, evaluating alternative deep learning architectures that trade improved performance against simplicity of design. In addition, improved nuclear segmentation is achieved by color decomposition and by combining region- and boundary-based features through a fusion network. The trained models have been evaluated against approximately 19,000 manually annotated nuclei, and object-level Precision, Recall, F1-score and Standard Error are reported, with the best F1-score being 0.91. Raw training images, annotated images, processed images, and source codes are released as a part of the Additional file 1. CONCLUSIONS There are two intrinsic barriers to nuclear segmentation in H&E stained images, which correspond to the diversity of nuclear phenotypes and the perceptual boundaries between adjacent cells. We demonstrate that (i) the encoder-decoder architecture can learn complex phenotypes, including the vesicular type; (ii) delineation of overlapping nuclei is enhanced by fusion of region- and edge-based networks; (iii) fusion of ENets produces an improved result over fusion of UNets; and (iv) fusion of networks is better than multitask learning. We suggest that our protocol enables processing of a large cohort of whole slide images for applications in precision medicine.
Collapse
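A minimal sketch of the fusion idea, assuming region and boundary probability maps have already been produced by separate networks; the paper fuses full ENet branches, and the layer sizes here are our own.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Sketch of region/boundary fusion: probability maps from a
    region network and a boundary network are concatenated and passed
    through a small convolutional head that outputs the fused nuclear
    mask logits."""
    def __init__(self):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, region_map, boundary_map):
        return self.fuse(torch.cat([region_map, boundary_map], dim=1))
```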
Affiliation(s)
- Mina Khoshdeli
- Electrical and Biomedical Department, University of Nevada, Reno, 1664 N. Virginia, Reno, USA
| | - Garrett Winkelmaier
- Electrical and Biomedical Department, University of Nevada, Reno, 1664 N. Virginia, Reno, USA
| | - Bahram Parvin
- Electrical and Biomedical Department, University of Nevada, Reno, 1664 N. Virginia, Reno, USA
| |
Collapse
|
440
|
Loughrey MB, Bankhead P, Coleman HG, Hagan RS, Craig S, McCorry AMB, Gray RT, McQuaid S, Dunne PD, Hamilton PW, James JA, Salto-Tellez M. Validation of the systematic scoring of immunohistochemically stained tumour tissue microarrays using QuPath digital image analysis. Histopathology 2018; 73:327-338. [PMID: 29575153 DOI: 10.1111/his.13516] [Citation(s) in RCA: 56] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2018] [Accepted: 03/11/2018] [Indexed: 12/12/2022]
Abstract
AIMS Output from biomarker studies involving immunohistochemistry applied to tissue microarrays (TMA) is limited by the lack of an efficient and reproducible scoring methodology. In this study, we examine the functionality and reproducibility of biomarker scoring using the new, open-source, digital image analysis software, QuPath. METHODS AND RESULTS Three different reviewers, with varying experience of digital pathology and image analysis, applied an agreed QuPath scoring methodology to CD3 and p53 immunohistochemically stained TMAs from a colon cancer cohort (n = 661). Manual assessment was conducted by one reviewer for CD3. Survival analyses were conducted and intra- and interobserver reproducibility assessed. Median raw scores differed significantly between reviewers, but this had little impact on subsequent analyses. Lower CD3 scores were detected in cases who died from colorectal cancer compared to control cases, and this finding was significant for all three reviewers (P-value range = 0.002-0.02). Higher median p53 scores were generated among cases who died from colorectal cancer compared with controls (P-value range = 0.04-0.12). The ability to dichotomise cases into high versus low expression of CD3 and p53 showed excellent agreement between all three reviewers (kappa score range = 0.82-0.93). All three reviewers produced dichotomised expression scores that resulted in very similar hazard ratios for colorectal cancer-specific survival for each biomarker. Results from manual and QuPath methods of CD3 scoring were comparable, but QuPath scoring revealed stronger prognostic stratification. CONCLUSIONS Scoring of immunohistochemically stained tumour TMAs using QuPath is functional and reproducible, even among users with limited experience of digital pathology images, and more accurate than manual scoring.
Collapse
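The agreement statistic reported above (kappa range 0.82-0.93) can be reproduced for any pair of reviewers with scikit-learn; the labels below are invented purely for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Inter-observer agreement on dichotomised (1 = high, 0 = low)
# biomarker calls from two reviewers over the same cases.
reviewer_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
reviewer_b = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]
print(cohen_kappa_score(reviewer_a, reviewer_b))
```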
Affiliation(s)
- Maurice B Loughrey
- Northern Ireland Molecular Pathology Laboratory, Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland, UK
- Cellular Pathology, Belfast Health and Social Care Trust, Belfast, Northern Ireland, UK
| | - Peter Bankhead
- Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland, UK
| | - Helen G Coleman
- Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland, UK
- Centre for Public Health, Queen's University Belfast, Belfast, Northern Ireland, UK
| | - Ryan S Hagan
- Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland, UK
| | - Stephanie Craig
- Northern Ireland Molecular Pathology Laboratory, Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland, UK
| | - Amy M B McCorry
- Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland, UK
| | - Ronan T Gray
- Centre for Public Health, Queen's University Belfast, Belfast, Northern Ireland, UK
| | - Stephen McQuaid
- Northern Ireland Molecular Pathology Laboratory, Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland, UK
- Cellular Pathology, Belfast Health and Social Care Trust, Belfast, Northern Ireland, UK
| | - Philip D Dunne
- Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland, UK
| | - Peter W Hamilton
- Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland, UK
- Philips Digital Pathology Solutions, Belfast, Northern Ireland, UK
| | - Jacqueline A James
- Northern Ireland Molecular Pathology Laboratory, Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland, UK
- Cellular Pathology, Belfast Health and Social Care Trust, Belfast, Northern Ireland, UK
| | - Manuel Salto-Tellez
- Northern Ireland Molecular Pathology Laboratory, Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland, UK
- Cellular Pathology, Belfast Health and Social Care Trust, Belfast, Northern Ireland, UK
| |
Collapse
|
441
|
Amezcua J, Melin P. A new fuzzy learning vector quantization method for classification problems based on a granular approach. GRANULAR COMPUTING 2018. [DOI: 10.1007/s41066-018-0120-7] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
442
|
Van Eycke YR, Balsat C, Verset L, Debeir O, Salmon I, Decaestecker C. Segmentation of glandular epithelium in colorectal tumours to automatically compartmentalise IHC biomarker quantification: A deep learning approach. Med Image Anal 2018; 49:35-45. [PMID: 30081241 DOI: 10.1016/j.media.2018.07.004] [Citation(s) in RCA: 39] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2017] [Revised: 06/29/2018] [Accepted: 07/05/2018] [Indexed: 12/18/2022]
Abstract
In this paper, we propose a method for automatically annotating slide images from colorectal tissue samples. Our objective is to segment glandular epithelium in histological images from tissue slides submitted to different staining techniques, including the usual haematoxylin-eosin (H&E) as well as immunohistochemistry (IHC). The proposed method makes use of Deep Learning and is based on a new convolutional network architecture. Our method achieves better performance than the state of the art on the H&E images of the GlaS challenge contest, while using only the haematoxylin colour channel extracted by colour deconvolution from the RGB images in order to extend its applicability to IHC. The network only needs to be fine-tuned on a small number of additional examples to be accurate on a new IHC dataset. Our approach also includes a new method of data augmentation to achieve good generalisation when working with different experimental conditions and different IHC markers. We show that our methodology makes it possible to automate the compartmentalisation of IHC biomarker analysis, with results that concur strongly with manual annotations.
Collapse
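The haematoxylin-channel extraction by colour deconvolution that the method relies on is available off the shelf in scikit-image; a minimal sketch follows (the sample image is a stand-in, not the paper's data).

```python
from skimage.color import rgb2hed
from skimage import data

# Colour deconvolution: project an RGB stained image onto stain
# components and keep only the haematoxylin channel, which the
# network then segments.
ihc_rgb = data.immunohistochemistry()   # sample RGB image bundled with skimage
hed = rgb2hed(ihc_rgb)                  # channels: haematoxylin, eosin, DAB
haematoxylin = hed[:, :, 0]
```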
Affiliation(s)
- Yves-Rémi Van Eycke
- DIAPath, Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles (ULB), CPI 305/1, Rue Adrienne Bolland, 8, 6041 Gosselies, Belgium; Laboratories of Image, Signal processing & Acoustics, Université Libre de Bruxelles (ULB), CPI 165/57, Avenue Franklin Roosevelt 50, Brussels 1050, Belgium.
| | - Cédric Balsat
- DIAPath, Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles (ULB), CPI 305/1, Rue Adrienne Bolland, 8, 6041 Gosselies, Belgium
| | - Laurine Verset
- Department of Pathology, Erasme Hospital, Université Libre de Bruxelles (ULB), Route de Lennik 808, Brussels 1070, Belgium
| | - Olivier Debeir
- Laboratories of Image, Signal processing & Acoustics, Université Libre de Bruxelles (ULB), CPI 165/57, Avenue Franklin Roosevelt 50, Brussels 1050, Belgium; MIP, Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles (ULB), CPI 305/1, Rue Adrienne Bolland, 8, 6041 Gosselies, Belgium
| | - Isabelle Salmon
- DIAPath, Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles (ULB), CPI 305/1, Rue Adrienne Bolland, 8, 6041 Gosselies, Belgium; Department of Pathology, Erasme Hospital, Université Libre de Bruxelles (ULB), Route de Lennik 808, Brussels 1070, Belgium
| | - Christine Decaestecker
- DIAPath, Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles (ULB), CPI 305/1, Rue Adrienne Bolland, 8, 6041 Gosselies, Belgium; Laboratories of Image, Signal processing & Acoustics, Université Libre de Bruxelles (ULB), CPI 165/57, Avenue Franklin Roosevelt 50, Brussels 1050, Belgium.
| |
Collapse
|
443
|
444
|
Hu B, Tang Y, Chang EIC, Fan Y, Lai M, Xu Y. Unsupervised Learning for Cell-Level Visual Representation in Histopathology Images With Generative Adversarial Networks. IEEE J Biomed Health Inform 2018; 23:1316-1328. [PMID: 29994411 DOI: 10.1109/jbhi.2018.2852639] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The visual attributes of cells, such as nuclear morphology and chromatin openness, are critical for histopathology image analysis. By learning cell-level visual representations, we can obtain a rich mix of features that are highly reusable for various tasks, such as cell-level classification, nuclei segmentation, and cell counting. In this paper, we propose a unified generative adversarial network architecture with a new loss formulation to perform robust cell-level visual representation learning in an unsupervised setting. Our model is not only label-free and easily trained but also capable of cell-level unsupervised classification with interpretable visualization, achieving promising results in the unsupervised classification of bone marrow cellular components. Based on the proposed cell-level visual representation learning, we further develop a pipeline that exploits the varieties of cellular elements to perform histopathology image classification, the advantages of which are demonstrated on bone marrow datasets.
Collapse
|
445
|
Tahir MW, Zaidi NA, Rao AA, Blank R, Vellekoop MJ, Lang W. A Fungus Spores Dataset and a Convolutional Neural Network Based Approach for Fungus Detection. IEEE Trans Nanobioscience 2018; 17:281-290. [DOI: 10.1109/tnb.2018.2839585] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
446
|
Salvi M, Molinari F. Multi-tissue and multi-scale approach for nuclei segmentation in H&E stained images. Biomed Eng Online 2018; 17:89. [PMID: 29925379 PMCID: PMC6011253 DOI: 10.1186/s12938-018-0518-0] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2018] [Accepted: 06/12/2018] [Indexed: 02/04/2023] Open
Abstract
Background Accurate nuclei detection and segmentation in histological images is essential for many clinical purposes. While manual annotations are time-consuming and operator-dependent, fully automated segmentation remains a challenging task due to the high variability of cell intensity, size and morphology. Most of the proposed algorithms for the automated segmentation of nuclei were designed for a specific organ or tissue. Results The aim of this study was to develop and validate a fully automated multiscale method, named MANA (Multiscale Adaptive Nuclei Analysis), for nuclei segmentation in different tissues and magnifications. MANA was tested on a dataset of H&E stained tissue images with more than 59,000 annotated nuclei, taken from six organs (colon, liver, bone, prostate, adrenal gland and thyroid) and three magnifications (10×, 20×, 40×). Automatic results were compared with manual segmentations and with three open-source software packages designed for nuclei detection. For each organ, MANA always obtained an F1-score higher than 0.91, with an average F1 of 0.9305 ± 0.0161. The average computational time was about 20 s, independently of the number of nuclei to be detected (always higher than 1000), indicating the efficiency of the proposed technique. Conclusion To the best of our knowledge, MANA is the first fully automated multi-scale and multi-tissue algorithm for nuclei detection. Overall, the robustness and versatility of MANA allowed it to achieve, on different organs and magnifications, performance in line with or better than that of state-of-the-art algorithms optimized for single tissues.
Collapse
Affiliation(s)
- Massimo Salvi
- Biolab, Department of Electronics and Telecomunications, Politecnico di Torino, 10129, Turin, Italy.
| | - Filippo Molinari
- Biolab, Department of Electronics and Telecomunications, Politecnico di Torino, 10129, Turin, Italy
| |
Collapse
|
447
|
Ma Y, Jiang Z, Zhang H, Xie F, Zheng Y, Shi H, Zhao Y, Shi J. Generating region proposals for histopathological whole slide image retrieval. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 159:1-10. [PMID: 29650303 DOI: 10.1016/j.cmpb.2018.02.020] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/01/2017] [Revised: 01/15/2018] [Accepted: 02/22/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND AND OBJECTIVE Content-based image retrieval is an effective method for histopathological image analysis. However, given a database of huge whole slide images (WSIs), acquiring appropriate regions of interest (ROIs) for training is significant and difficult. Moreover, histopathological images can only be annotated by pathologists, so labeling information is scarce. Therefore, generating ROIs from WSIs and retrieving images with few labels is an important and challenging task. METHODS This paper presents a novel unsupervised region proposing method for histopathological WSIs based on Selective Search. Specifically, the WSI is over-segmented into regions which are hierarchically merged until the WSI becomes a single region. Nucleus-oriented similarity measures for region mergence and a Nucleus-Cytoplasm color space for histopathological images are specially defined to generate accurate region proposals. Additionally, we propose a new semi-supervised hashing method for image retrieval. The semantic features of images are extracted with Latent Dirichlet Allocation and transformed into binary hashing codes with Supervised Hashing. RESULTS The methods are tested on a large-scale multi-class database of breast histopathological WSIs. The results demonstrate that, for one WSI, our region proposing method can generate 7.3 thousand contoured regions, which fit well with 95.8% of the ROIs annotated by pathologists. The proposed hashing method can retrieve a query image among 136 thousand images in 0.29 s and reach a precision of 91% with only 10% of images labeled. CONCLUSIONS The unsupervised region proposing method can generate regions as predictions of lesions in histopathological WSIs. The region proposals can also serve as training samples for machine-learning models for image retrieval. The proposed hashing method can achieve fast and precise image retrieval with a small number of labels. Furthermore, the proposed methods can potentially be applied in online computer-aided diagnosis systems.
Collapse
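A rough sketch of the retrieval side, assuming bag-of-visual-words count vectors as input; we substitute unsupervised random-hyperplane hashing for the paper's Supervised Hashing step, so this shows the shape of the pipeline rather than the method itself.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

def hash_codes(bow_counts, n_topics=32, n_bits=48, seed=0):
    """Topic features from LDA, binarised into compact codes that can
    be compared by Hamming distance at query time."""
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    topics = lda.fit_transform(bow_counts)              # (n_images, n_topics)
    rng = np.random.default_rng(seed)
    hyperplanes = rng.standard_normal((n_topics, n_bits))
    return (topics @ hyperplanes > 0).astype(np.uint8)  # binary hash codes
```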
Affiliation(s)
- Yibing Ma
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Beijing Key Laboratory of Digital Media, Beijing 100191, China.
| | - Zhiguo Jiang
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Beijing Key Laboratory of Digital Media, Beijing 100191, China.
| | - Haopeng Zhang
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Beijing Key Laboratory of Digital Media, Beijing 100191, China.
| | - Fengying Xie
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Beijing Key Laboratory of Digital Media, Beijing 100191, China.
| | - Yushan Zheng
- Image Processing Center, School of Astronautics, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China; Beijing Key Laboratory of Digital Media, Beijing 100191, China.
| | - Huaqiang Shi
- Motic (Xiamen) Medical Diagnostic Systems Co. Ltd., Xiamen 361101, China; People's Liberation Army Air Force General Hospital, Beijing 100142, China.
| | - Yu Zhao
- Motic (Xiamen) Medical Diagnostic Systems Co. Ltd., Xiamen 361101, China.
| | - Jun Shi
- School of Software, Hefei University of Technology, Hefei 230601, China.
| |
Collapse
|
448
|
Nuclear IHC enumeration: A digital phantom to evaluate the performance of automated algorithms in digital pathology. PLoS One 2018; 13:e0196547. [PMID: 29746503 PMCID: PMC5944932 DOI: 10.1371/journal.pone.0196547] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2017] [Accepted: 04/15/2018] [Indexed: 12/20/2022] Open
Abstract
Automatic and accurate detection of positive and negative nuclei from images of immunostained tissue biopsies is critical to the success of digital pathology. The evaluation of most nuclei detection algorithms relies on manually generated ground truth prepared by pathologists, which is unfortunately time-consuming and suffers from inter-pathologist variability. In this work, we developed a digital immunohistochemistry (IHC) phantom that can be used for evaluating computer algorithms for enumeration of IHC-positive cells. Our phantom development consists of two main steps: 1) extraction of individual nuclei as well as nuclei clumps, both positive and negative, from real WSIs, and 2) systematic placement of the extracted nuclei clumps on an image canvas. The resulting images are visually similar to the original tissue images. We created a set of 42 images with different concentrations of positive and negative nuclei. These images were evaluated by four board-certified pathologists in the task of estimating the ratio of positive to total number of nuclei. The resulting concordance correlation coefficients (CCC) between the pathologists and the true ratio range from 0.86 to 0.95 (point estimates). The same ratio was also computed by an automated computer algorithm, which yielded a CCC value of 0.99. Reading the phantom data with known ground truth, the human readers show substantial variability and lower average performance than the computer algorithm in terms of CCC. This shows the limitation of using a human reader panel to establish a reference standard for the evaluation of computer algorithms, thereby highlighting the usefulness of the phantom developed in this work. Using our phantom images, we further developed a function that can approximate the true ratio from the areas of the positive and negative nuclei, hence avoiding the need to detect individual nuclei. The predicted ratios of 10 held-out images using the function (trained on 32 images) are within ±2.68% of the true ratio. Moreover, we also report the evaluation of a computerized image analysis method on the synthetic tissue dataset.
Collapse
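A minimal sketch of the area-based approximation described above; the paper fits a correction function on phantom images with known ground truth, whereas this shows only the uncalibrated raw ratio.

```python
import numpy as np

def ratio_from_areas(pos_mask, neg_mask):
    """Approximate the positive-nuclei ratio from stained areas rather
    than from individual detections. A real implementation would
    calibrate a correction function on phantom images with known
    ground truth; here we return the raw area fraction."""
    a_pos = np.count_nonzero(pos_mask)
    a_neg = np.count_nonzero(neg_mask)
    return a_pos / max(a_pos + a_neg, 1)
```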
|
449
|
He JY, Wu X, Jiang YG, Peng Q, Jain R. Hookworm Detection in Wireless Capsule Endoscopy Images With Deep Learning. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:2379-2392. [PMID: 29470172 DOI: 10.1109/tip.2018.2801119] [Citation(s) in RCA: 74] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
As one of the most common human helminths, hookworm is a leading cause of maternal and child morbidity, and it seriously threatens human health. Recently, wireless capsule endoscopy (WCE) has been applied to automatic hookworm detection. Unfortunately, it remains a challenging task. In recent years, deep convolutional neural networks (CNNs) have demonstrated impressive performance in various image and video analysis tasks. In this paper, a novel deep hookworm detection framework is proposed for WCE images, which simultaneously models the visual appearance and tubular patterns of hookworms. This is the first deep learning framework specifically designed for hookworm detection in WCE images. Two CNNs, namely an edge extraction network and a hookworm classification network, are seamlessly integrated in the proposed framework, which avoids edge-feature caching and speeds up classification. Two edge pooling layers are introduced to integrate the tubular regions induced from the edge extraction network and the feature maps from the hookworm classification network, leading to enhanced feature maps emphasizing the tubular regions. Experiments have been conducted on one of the largest WCE datasets, and they demonstrate the effectiveness of the proposed hookworm detection framework, which significantly outperforms the state-of-the-art approaches. The high sensitivity and accuracy of the proposed method in detecting hookworms show its potential for clinical application.
Collapse
|
450
|
Fondón I, Sarmiento A, García AI, Silvestre M, Eloy C, Polónia A, Aguiar P. Automatic classification of tissue malignancy for breast carcinoma diagnosis. Comput Biol Med 2018; 96:41-51. [DOI: 10.1016/j.compbiomed.2018.03.003] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2017] [Revised: 03/05/2018] [Accepted: 03/05/2018] [Indexed: 02/08/2023]
|