351
Learned and handcrafted features for early-stage laryngeal SCC diagnosis. Med Biol Eng Comput 2019; 57:2683-2692. [PMID: 31728933 DOI: 10.1007/s11517-019-02051-5]
Abstract
Squamous cell carcinoma (SCC) is the most common malignancy of the larynx. Early-stage diagnosis is of crucial importance to lower patient mortality and preserve both the laryngeal anatomy and vocal-fold function. However, this may be challenging, as the initial laryngeal modifications, mainly concerning the mucosal vascular tree and the epithelial texture and color, are small and can go unnoticed by the human eye. The primary goal of this paper was to investigate a learning-based approach to early-stage SCC diagnosis, comparing (i) texture-based global descriptors, such as local binary patterns, and (ii) deep-learning-based descriptors. These features, extracted from endoscopic narrow-band images of the larynx, were classified with support vector machines to discriminate healthy, precancerous, and early-stage SCC tissues. When tested on a benchmark dataset, a median classification recall of 98% was obtained with the best feature combination, outperforming the state of the art (recall = 95%). Although further investigation is needed (e.g., testing on a larger dataset), the achieved results support the use of the developed methodology in actual clinical practice to provide accurate early-stage SCC diagnosis. Graphical Abstract Workflow of the proposed solution: patches of laryngeal tissue are pre-processed, features are extracted, and these features are used for laryngeal tissue classification.
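The texture branch of the comparison above rests on local binary patterns. As a rough illustration of how such a global texture descriptor is built (a generic 8-neighbour LBP histogram, not the authors' exact feature pipeline), consider:

```python
def lbp_histogram(img):
    """Basic 8-neighbour local binary patterns over a 2D grayscale
    image (list of lists); returns a 256-bin histogram that can be
    fed to a classifier such as an SVM."""
    # clockwise neighbour offsets starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            center = img[r][c]
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                # set the bit when the neighbour is at least as bright
                if img[r + dr][c + dc] >= center:
                    code |= 1 << bit
            hist[code] += 1
    return hist
```

Each interior pixel contributes one 8-bit code, so the histogram is a fixed-length global descriptor regardless of patch size.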
352
Niemann A, Weigand S, Hoffmann T, Skalej M, Tulamo R, Preim B, Saalfeld S. Interactive exploration of a 3D intracranial aneurysm wall model extracted from histologic slices. Int J Comput Assist Radiol Surg 2019; 15:99-107. [PMID: 31705419 DOI: 10.1007/s11548-019-02083-0]
Abstract
PURPOSE Currently, no detailed in vivo imaging of the intracranial vessel wall exists. Ex vivo histologic images can provide information about intracranial aneurysm (IA) wall composition that is useful for understanding IA development and rupture risk. For a 3D analysis, the 2D histologic slices must be incorporated into a 3D model, which can then be used for a spatial evaluation of the IA's morphology, including analysis of the IA neck. METHODS In 2D images of histologic slices, different wall layers were manually segmented and a 3D model was generated. Nuclei were automatically detected and classified as round or elongated, and a neural-network-based wall type classification was performed. The information was combined in a software prototype visualization providing a unique view of the wall characteristics of an IA and allowing interactive exploration. Furthermore, the heterogeneity of the wall (as the variance of wall thickness) was evaluated. RESULTS A 3D model correctly representing the histologic data was reconstructed. The visualization integrating wall information was perceived as useful by a medical expert, and the classification produced plausible results. CONCLUSION Using histologic images makes it possible to create a 3D model with new information about the aneurysm wall. The model provides information about wall thickness and its heterogeneity and, when built from cadaveric samples, includes information about the transition between the IA neck and sac.
Affiliation(s)
- Annika Niemann
- Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Universitätsplatz 2, 39106, Magdeburg, Germany.
- Simon Weigand
- Ludwig-Maximilians-Universität Klinikum, Munich, Germany
- Riikka Tulamo
- Helsinki University Hospital, University of Helsinki, Helsinki, Finland
- Bernhard Preim
- Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Universitätsplatz 2, 39106, Magdeburg, Germany
- Sylvia Saalfeld
- Faculty of Computer Science, Otto-von-Guericke University Magdeburg, Universitätsplatz 2, 39106, Magdeburg, Germany
- Research Campus STIMULATE, Magdeburg, Germany
353
Cai CJ, Winter S, Steiner D, Wilcox L, Terry M. "Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making. Proc ACM Hum Comput Interact 2019. [DOI: 10.1145/3359206]
354
FABnet: feature attention-based network for simultaneous segmentation of microvessels and nerves in routine histology images of oral cancer. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04516-y]
355
Xue Y, Bigras G, Hugh J, Ray N. Training Convolutional Neural Networks and Compressed Sensing End-to-End for Microscopy Cell Detection. IEEE Trans Med Imaging 2019; 38:2632-2641. [PMID: 30908206 DOI: 10.1109/tmi.2019.2907093]
Abstract
Automated cell detection and localization from microscopy images are significant tasks in biomedical research and clinical practice. In this paper, we design a new cell detection and localization algorithm that combines deep convolutional neural networks (CNN) and compressed sensing (CS) or sparse coding (SC) for end-to-end training. We also derive, for the first time, a backpropagation rule, which is applicable to train any algorithm that implements a sparse code recovery layer. The key innovation behind our algorithm is that the cell detection task is structured as a point object detection task in computer vision, where the cell centers (i.e., point objects) occupy only a tiny fraction of the total number of pixels in an image. Thus, we can apply compressed sensing (or equivalently SC) to compactly represent a variable number of cells in a projected space. Subsequently, CNN regresses this compressed vector from the input microscopy image. The SC/CS recovery algorithm (L1 optimization) can then recover sparse cell locations from the output of CNN. We train this entire processing pipeline end-to-end and demonstrate that end-to-end training improves accuracy over a training paradigm that treats CNN and CS-recovery layers separately. We have validated our algorithm on five benchmark datasets with excellent results.
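The core trick described here, representing a handful of cell centres compactly via a random projection, can be sketched in a few lines. This is a toy illustration of the sparse-encoding step only; the CNN regressor and the L1 recovery solver are omitted, and the ±1 sensing matrix is an assumption for illustration, not necessarily the paper's choice:

```python
import random

def compress_cell_map(cell_indices, n_pixels, n_measurements, seed=0):
    """Encode sparse cell-centre locations as a short measurement
    vector y = A @ x, where x is a binary indicator over all pixels
    and A is a fixed random +/-1 sensing matrix. A CNN can then
    regress y directly; an L1 solver (omitted here) recovers x."""
    rng = random.Random(seed)
    # fixed random sensing matrix, one row per measurement
    A = [[rng.choice((-1, 1)) for _ in range(n_pixels)]
         for _ in range(n_measurements)]
    x = [0] * n_pixels
    for i in cell_indices:
        x[i] = 1  # one spike per cell centre
    y = [sum(a * b for a, b in zip(row, x)) for row in A]
    return y
```

The point is dimensionality: a handful of cell centres in a large image becomes a short dense vector, which is a far easier regression target than a full-resolution dot map.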
356
Turner OC, Aeffner F, Bangari DS, High W, Knight B, Forest T, Cossic B, Himmel LE, Rudmann DG, Bawa B, Muthuswamy A, Aina OH, Edmondson EF, Saravanan C, Brown DL, Sing T, Sebastian MM. Society of Toxicologic Pathology Digital Pathology and Image Analysis Special Interest Group Article*: Opinion on the Application of Artificial Intelligence and Machine Learning to Digital Toxicologic Pathology. Toxicol Pathol 2019; 48:277-294. [DOI: 10.1177/0192623319881401]
Abstract
Toxicologic pathology is transitioning from analog to digital methods. This transition seems inevitable due to a host of ongoing social and medical technological forces. Of these, artificial intelligence (AI) and in particular machine learning (ML) are globally disruptive, rapidly growing sectors of technology whose impact on the long-established field of histopathology is quickly being realized. The development of increasing numbers of algorithms, peering ever deeper into the histopathological space, has demonstrated to the scientific community that AI pathology platforms are now poised to truly impact the future of precision and personalized medicine. However, as with all great technological advances, there are implementation and adoption challenges. This review aims to define common and relevant AI and ML terminology, describe data generation and interpretation, outline current and potential future business cases, discuss validation and regulatory hurdles, and most importantly, propose how overcoming the challenges of this burgeoning technology may shape toxicologic pathology for years to come, enabling pathologists to contribute even more effectively to answering scientific questions and solving global health issues.
Affiliation(s)
- Oliver C. Turner
- Novartis, Novartis Institutes for Biomedical Research, Preclinical Safety, East Hanover, NJ, USA
- Famke Aeffner
- Amgen Inc, Research, Comparative Biology and Safety Sciences, San Francisco, CA, USA
- Wanda High
- High Preclinical Pathology Consulting, Rochester, NY, USA
- Brian Knight
- Boehringer Ingelheim Pharmaceuticals Incorporated, Nonclinical Drug Safety, Ridgefield, CT, USA
- Brieuc Cossic
- Roche, Pharmaceutical Research and Early Development (pRED), Roche Innovation Center, Basel, Switzerland
- Lauren E. Himmel
- Division of Animal Care, Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA
- Elijah F. Edmondson
- Pathology/Histotechnology Laboratory, Frederick National Laboratory for Cancer Research, NIH, Frederick, MD, USA
- Chandrassegar Saravanan
- Novartis, Novartis Institutes for Biomedical Research, Preclinical Safety, Cambridge, MA, USA
- Tobias Sing
- Novartis, Novartis Institutes for Biomedical Research, NIBR Informatics, Basel, Switzerland
- Manu M. Sebastian
- Department of Epigenetics and Molecular Carcinogenesis, The University of Texas MD Anderson Cancer Center, Smithville, TX, USA
357
Jung H, Lodhi B, Kang J. An automatic nuclei segmentation method based on deep convolutional neural networks for histopathology images. BMC Biomed Eng 2019; 1:24. [PMID: 32903361 PMCID: PMC7422516 DOI: 10.1186/s42490-019-0026-8]
Abstract
BACKGROUND Since nuclei segmentation in histopathology images can provide key information for identifying the presence or stage of a disease, the images need to be assessed carefully. However, color variation across histopathology images and the varied structures of nuclei are two major obstacles to accurately segmenting and analyzing them. Several machine learning methods rely heavily on hand-crafted features, which have limitations due to manual thresholding. RESULTS To obtain robust results, deep-learning-based methods have been proposed. Deep convolutional neural networks (DCNNs), which automatically extract features from raw image data, have been proven to achieve great performance. Inspired by such achievements, we propose a nuclei segmentation method based on DCNNs. To normalize the color of histopathology images, we use a deep convolutional Gaussian mixture color normalization model, which is able to cluster pixels while considering the structures of nuclei. To segment nuclei, we use Mask R-CNN, which achieves state-of-the-art object segmentation performance in the field of computer vision. In addition, we perform multiple inference as a post-processing step to boost segmentation performance. We evaluate our segmentation method on two different datasets: the first consists of histopathology images of various organs, while the other consists of histopathology images of a single organ. Performance is measured in various experimental setups at the object level and the pixel level, and we compare our method with existing state-of-the-art methods. The experimental results show that our nuclei segmentation method outperforms the existing methods. CONCLUSIONS We propose a DCNN-based nuclei segmentation method for histopathology images. The proposed method, which uses Mask R-CNN with color normalization and multiple-inference post-processing, provides robust nuclei segmentation results. It can also facilitate downstream nuclei morphological analyses, as it provides high-quality features extracted from histopathology images.
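The multiple-inference post-processing step mentioned above is essentially test-time augmentation. A minimal sketch, assuming a generic `predict` function and simple flip augmentations (the paper's exact augmentation set may differ):

```python
def flip_h(mask):
    # mirror each row left-right
    return [row[::-1] for row in mask]

def flip_v(mask):
    # mirror the rows top-bottom
    return mask[::-1]

def multi_inference(predict, image):
    """Test-time augmentation: run the segmenter on flipped copies
    of the image, undo each flip on the predicted mask, and average.
    `predict` stands in for any per-pixel segmentation model."""
    preds = [
        predict(image),
        flip_h(predict(flip_h(image))),
        flip_v(predict(flip_v(image))),
    ]
    rows, cols = len(image), len(image[0])
    return [[sum(p[r][c] for p in preds) / len(preds)
             for c in range(cols)] for r in range(rows)]
```

Averaging several augmented predictions tends to smooth out orientation-dependent errors at the cost of extra forward passes.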
Affiliation(s)
- Hwejin Jung
- Department of Computer Science and Engineering, Korea University, Seoul, Republic of Korea
- Bilal Lodhi
- Department of Computer Science and Engineering, Korea University, Seoul, Republic of Korea
- Jaewoo Kang
- Department of Computer Science and Engineering, Korea University, Seoul, Republic of Korea
- Interdisciplinary Graduate Program in Bioinformatics, Korea University, Seoul, Republic of Korea
358
Van Eycke YR, Foucart A, Decaestecker C. Strategies to Reduce the Expert Supervision Required for Deep Learning-Based Segmentation of Histopathological Images. Front Med (Lausanne) 2019; 6:222. [PMID: 31681779 PMCID: PMC6803466 DOI: 10.3389/fmed.2019.00222]
Abstract
The emergence of computational pathology comes with a demand to extract more and more information from each tissue sample. Such information extraction often requires the segmentation of numerous histological objects (e.g., cell nuclei, glands, etc.) in histological slide images, a task for which deep learning algorithms have demonstrated their effectiveness. However, these algorithms require many training examples to be efficient and robust. For this purpose, pathologists must manually segment hundreds or even thousands of objects in histological images, i.e., a long, tedious and potentially biased task. The present paper aims to review strategies that could help provide the very large number of annotated images needed to automate the segmentation of histological images using deep learning. This review identifies and describes four different approaches: the use of immunohistochemical markers as labels, realistic data augmentation, Generative Adversarial Networks (GAN), and transfer learning. In addition, we describe alternative learning strategies that can use imperfect annotations. Adding real data with high-quality annotations to the training set is a safe way to improve the performance of a well configured deep neural network. However, the present review provides new perspectives through the use of artificially generated data and/or imperfect annotations, in addition to transfer learning opportunities.
Affiliation(s)
- Yves-Rémi Van Eycke
- Digital Image Analysis in Pathology (DIAPath), Center for Microscopy and Molecular Imaging (CMMI), Université Libre de Bruxelles, Charleroi, Belgium
- Laboratory of Image Synthesis and Analysis (LISA), Ecole Polytechnique de Bruxelles, Université Libre de Bruxelles, Brussels, Belgium
- Adrien Foucart
- Laboratory of Image Synthesis and Analysis (LISA), Ecole Polytechnique de Bruxelles, Université Libre de Bruxelles, Brussels, Belgium
- Christine Decaestecker
- Digital Image Analysis in Pathology (DIAPath), Center for Microscopy and Molecular Imaging (CMMI), Université Libre de Bruxelles, Charleroi, Belgium
- Laboratory of Image Synthesis and Analysis (LISA), Ecole Polytechnique de Bruxelles, Université Libre de Bruxelles, Brussels, Belgium
359
Zormpas-Petridis K, Failmezger H, Raza SEA, Roxanis I, Jamin Y, Yuan Y. Superpixel-Based Conditional Random Fields (SuperCRF): Incorporating Global and Local Context for Enhanced Deep Learning in Melanoma Histopathology. Front Oncol 2019; 9:1045. [PMID: 31681583 PMCID: PMC6798642 DOI: 10.3389/fonc.2019.01045]
Abstract
Computational pathology-based cell classification algorithms are revolutionizing the study of the tumor microenvironment and can provide novel predictive/prognostic biomarkers crucial for the delivery of precision oncology. Current algorithms used on hematoxylin and eosin (H&E) slides are based on individual cell nuclei morphology with limited local context features. Here, we propose a novel multi-resolution hierarchical framework (SuperCRF), inspired by the way pathologists perceive regional tissue architecture, to improve cell classification, and demonstrate its clinical applications. We develop SuperCRF by training a state-of-the-art deep learning spatially constrained convolutional neural network (SC-CNN) to detect and classify cells from 105 high-resolution (20×) H&E-stained slides of The Cancer Genome Atlas melanoma dataset and, subsequently, a conditional random field (CRF) that combines the cellular neighborhood with tumor regional classifications from lower-resolution images (5×, 1.25×) given by a superpixel-based machine learning framework. SuperCRF led to an 11.85% overall improvement in the accuracy of the state-of-the-art SC-CNN cell classifier. Consistent with a stroma-mediated immunosuppressive microenvironment, SuperCRF demonstrated that (i) a high ratio of lymphocytes within the stromal compartment to all lymphocytes (p = 0.026) and (ii) a high ratio of stromal cells to all cells (p < 0.0001, compared to p = 0.039 for SC-CNN alone) are associated with poor survival in patients with melanoma. SuperCRF improves cell classification by introducing global and local context-based information and can be implemented in combination with any single-cell classifier. SuperCRF provides valuable tools to study the tumor microenvironment and identify predictors of survival and response to therapy.
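The CRF step combines a cell-level prediction with its neighbourhood and a coarse regional label. A deliberately simplified stand-in for that idea (a confidence-gated majority vote, not an actual CRF, with hypothetical label names) might look like:

```python
from collections import Counter

def refine_label(cell_label, cell_conf, neighbour_labels, region_label,
                 threshold=0.7):
    """Toy stand-in for the CRF refinement: a confident single-cell
    prediction is kept; otherwise the label is revised by majority
    vote over neighbouring cell labels plus the coarse tissue-region
    label supplied by a superpixel classifier."""
    if cell_conf >= threshold:
        return cell_label
    votes = Counter(neighbour_labels)
    votes[region_label] += 1  # region context gets one vote
    return votes.most_common(1)[0][0]
```

A real CRF would optimize all labels jointly with learned pairwise potentials; the gist, that local and regional context can overturn a weak per-cell prediction, is the same.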
Affiliation(s)
- Konstantinos Zormpas-Petridis
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, United Kingdom
- The Royal Marsden NHS Trust, Surrey, United Kingdom
- Henrik Failmezger
- Division of Molecular Pathology, The Institute of Cancer Research, London, United Kingdom
- Shan E Ahmed Raza
- Division of Molecular Pathology, The Institute of Cancer Research, London, United Kingdom
- Ioannis Roxanis
- Division of Molecular Pathology, The Institute of Cancer Research, London, United Kingdom
- Royal Free London NHS Foundation Trust, London, United Kingdom
- Yann Jamin
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, United Kingdom
- The Royal Marsden NHS Trust, Surrey, United Kingdom
- Yinyin Yuan
- Division of Molecular Pathology, The Institute of Cancer Research, London, United Kingdom
360
Rączkowska A, Możejko M, Zambonelli J, Szczurek E. ARA: accurate, reliable and active histopathological image classification framework with Bayesian deep learning. Sci Rep 2019; 9:14347. [PMID: 31586139 PMCID: PMC6778075 DOI: 10.1038/s41598-019-50587-1]
Abstract
Machine learning algorithms hold the promise to effectively automate the analysis of histopathological images that are routinely generated in clinical practice. Any machine learning method used in the clinical diagnostic process has to be extremely accurate and, ideally, provide a measure of uncertainty for its predictions. Such accurate and reliable classifiers need enough labelled data for training, which requires time-consuming and costly manual annotation by pathologists. Thus, it is critical to minimise the amount of data needed to reach the desired accuracy by maximising the efficiency of training. We propose an accurate, reliable and active (ARA) image classification framework and introduce a new Bayesian Convolutional Neural Network (ARA-CNN) for classifying histopathological images of colorectal cancer. The model achieves exceptional classification accuracy, outperforming other models trained on the same dataset. The network outputs an uncertainty measurement for each tested image. We show that uncertainty measures can be used to detect mislabelled training samples and can be employed in an efficient active learning workflow. Using a variational dropout-based entropy measure of uncertainty in the workflow speeds up the learning process by roughly 45%. Finally, we utilise our model to segment whole-slide images of colorectal tissue and compute segmentation-based spatial statistics.
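The uncertainty measure described above can be computed from repeated stochastic forward passes. A minimal sketch of predictive entropy over sampled class-probability vectors (the variational-dropout sampling itself is assumed to happen in the network and is not shown):

```python
import math

def predictive_entropy(prob_samples):
    """Predictive entropy from T stochastic forward passes (e.g.
    with dropout left active at test time): average the per-pass
    class probabilities, then take the entropy of that mean
    distribution. High entropy flags uncertain images for active
    learning or label review."""
    n_classes = len(prob_samples[0])
    mean = [sum(p[c] for p in prob_samples) / len(prob_samples)
            for c in range(n_classes)]
    return -sum(p * math.log(p) for p in mean if p > 0)
```

A confidently classified image yields entropy near zero, while an image the network is unsure about approaches the maximum log(K) for K classes, which is what makes the score usable for prioritising annotation.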
Affiliation(s)
- Alicja Rączkowska
- Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Warsaw, Poland
- Marcin Możejko
- Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Warsaw, Poland
- Joanna Zambonelli
- Department of Pathology, Medical University of Warsaw, Warsaw, Poland
- Ewa Szczurek
- Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Warsaw, Poland.
361
Akbar S, Peikari M, Salama S, Panah AY, Nofech-Mozes S, Martel AL. Automated and Manual Quantification of Tumour Cellularity in Digital Slides for Tumour Burden Assessment. Sci Rep 2019; 9:14099. [PMID: 31576001 PMCID: PMC6773948 DOI: 10.1038/s41598-019-50568-4]
Abstract
The residual cancer burden index is an important quantitative measure used for assessing treatment response following neoadjuvant therapy for breast cancer. It has been shown to be predictive of overall survival and is composed of two key metrics: a qualitative assessment of lymph nodes and the percentage of invasive or in situ tumour cellularity (TC) in the tumour bed (TB). Currently, TC is assessed by eye-balling routine histopathology slides to estimate the proportion of tumour cells within the TB. With advances in the production of digitized slides and the increasing availability of slide scanners in pathology laboratories, there is potential to measure TC using automated algorithms with greater precision and accuracy. We describe two methods for automated TC scoring: 1) a traditional approach to image analysis development, whereby we mimic the pathologists' workflow, and 2) a recent development in artificial intelligence in which features are learned automatically in deep neural networks using image data alone. We show strong agreement between automated and manual analysis of digital slides. Agreement between our trained deep neural networks and experts in this study (0.82) approaches the inter-rater agreement between pathologists (0.89). We also reveal properties that are captured when we apply deep neural networks to whole-slide images, and discuss the potential of using such visualisations to improve upon TC assessment in the future.
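The agreement figures quoted above are chance-corrected inter-rater statistics. As an illustration, Cohen's kappa for two raters can be computed as follows (this is a common choice for such comparisons; the paper's exact agreement measure may differ):

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same
    items: (observed - expected) / (1 - expected), where 'expected'
    is the agreement two independent raters with these label
    frequencies would reach by chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    labels = set(rater_a) | set(rater_b)
    expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                   for l in labels)
    return (observed - expected) / (1 - expected)
```

Kappa of 1 means perfect agreement, 0 means chance-level agreement, which is why values like 0.82 vs. 0.89 are directly comparable across rater pairs.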
Affiliation(s)
- Shazia Akbar
- Physical Sciences, Sunnybrook Research Institute, Toronto, Canada
- Medical Biophysics, University of Toronto, Toronto, Canada
- Vector Institute, Toronto, Canada
- Anne L Martel
- Physical Sciences, Sunnybrook Research Institute, Toronto, Canada
- Medical Biophysics, University of Toronto, Toronto, Canada
- Vector Institute, Toronto, Canada
362
Yoon JS, Choi EY, Saad M, Choi TS. Automated integrated system for stained neuron detection: An end-to-end framework with a high negative predictive rate. Comput Methods Programs Biomed 2019; 180:105028. [PMID: 31437805 DOI: 10.1016/j.cmpb.2019.105028]
Abstract
BACKGROUND AND OBJECTIVE Mapping the architecture of the brain is essential for identifying the neural computations that affect behavior. Traditionally in histology, stained objects in tissue slices are hand-marked under a microscope in a manually intensive, time-consuming process. An integrated hardware and software system is needed to automate image acquisition, image processing, and object detection. Such a system would enable high-throughput tissue analysis to rapidly map an entire brain. METHODS We demonstrate an automated system to detect neurons using a monkey brain slice immunohistochemically stained for retrogradely labeled neurons. The proposed system obtains a reconstructed image of the sample, and stained neurons are detected in three steps. First, the reconstructed image is pre-processed using adaptive histogram equalization. Second, candidates for stained neurons are segmented from each region via a marker-controlled watershed transformation (MCWT) using maximally stable extremal regions (MSERs). Third, the candidates are categorized as neurons or non-neurons using deep transfer learning via pre-trained convolutional neural networks (CNNs). RESULTS The proposed MCWT algorithm was compared qualitatively against MorphLibJ and an IHC analysis tool, while our unified classification algorithm was evaluated quantitatively using ROC analyses. The proposed classification system was first compared with five previously developed network architectures (AlexNet, VGG-16, VGG-19, GoogleNet, and ResNet). A comparison with conventional multi-stage frameworks followed, using six off-the-shelf classifiers [Bayesian network (BN), support vector machine (SVM), decision tree (DT), bagging (BAG), AdaBoost (ADA), and logistic regression (LR)] and two descriptors (LBP and HOG). The system achieved a 0.918 F1-score with an 86.6% negative predictive value, and other metrics such as precision, recall, and F-score surpassed 90%, outperforming the traditional methods.
CONCLUSIONS We demonstrate a fully automated, integrated hardware and software system for rapidly acquiring focused images and identifying neurons from a stained brain slice. This system could be adapted for the identification of stained features of any biological tissue.
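The reported figures combine an F1-score with a negative predictive value. For reference, these metrics follow directly from the raw confusion counts:

```python
def detection_metrics(tp, fp, tn, fn):
    """Precision, recall, F1 and negative predictive value (NPV)
    from true/false positive and negative counts; the NPV measures
    how trustworthy a 'non-neuron' call is."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    npv = tn / (tn + fn)
    return {"precision": precision, "recall": recall,
            "f1": f1, "npv": npv}
```

A high NPV matters in this setting because a missed neuron (a false negative) silently corrupts the resulting brain map, whereas false positives can still be caught at review.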
Affiliation(s)
- Ji-Seok Yoon
- School of Mechatronics, Gwangju Institute of Science and Technology, 123 Cheomdan-gwagiro, Buk-gu, Gwangju 61005, Republic of Korea
- Eun Young Choi
- Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA
- Maliazurina Saad
- School of Mechatronics, Gwangju Institute of Science and Technology, 123 Cheomdan-gwagiro, Buk-gu, Gwangju 61005, Republic of Korea
- Tae-Sun Choi
- School of Mechatronics, Gwangju Institute of Science and Technology, 123 Cheomdan-gwagiro, Buk-gu, Gwangju 61005, Republic of Korea.
363
Das DK, Koley S, Bose S, Maiti AK, Mitra B, Mukherjee G, Dutta PK. Computer aided tool for automatic detection and delineation of nucleus from oral histopathology images for OSCC screening. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2019.105642]
364
Sun H, Zeng X, Xu T, Peng G, Ma Y. Computer-Aided Diagnosis in Histopathological Images of the Endometrium Using a Convolutional Neural Network and Attention Mechanisms. IEEE J Biomed Health Inform 2019; 24:1664-1676. [PMID: 31581102 DOI: 10.1109/jbhi.2019.2944977]
Abstract
Uterine cancer (also known as endometrial cancer) can seriously affect the female reproductive system, and histopathological image analysis is the gold standard for diagnosing endometrial cancer. Due to their limited ability to model the complicated relationships between histopathological images and their interpretations, existing computer-aided diagnosis (CAD) approaches using traditional machine learning algorithms often fail to achieve satisfying results. In this study, we develop a CAD approach based on a convolutional neural network (CNN) and attention mechanisms, called HIENet. In ten-fold cross-validation on ∼3,300 hematoxylin and eosin (H&E) image patches from ∼500 endometrial specimens, HIENet achieved 76.91 ± 1.17% (mean ± s.d.) accuracy for four classes of endometrial tissue: normal endometrium, endometrial polyp, endometrial hyperplasia, and endometrial adenocarcinoma. HIENet also obtained an area under the curve (AUC) of 0.9579 ± 0.0103, with 81.04 ± 3.87% sensitivity and 94.78 ± 0.87% specificity, in a binary classification task detecting endometrioid adenocarcinoma. Moreover, in external validation on 200 H&E image patches from 50 randomly selected female patients, HIENet achieved 84.50% accuracy in the four-class classification task, as well as an AUC of 0.9829 with 77.97% (95% confidence interval, CI, 65.27%∼87.71%) sensitivity and 100% (95% CI, 97.42%∼100.00%) specificity. The proposed CAD method outperformed three human experts and five CNN-based classifiers in overall classification performance. It was also able to provide pathologists with better interpretability of diagnoses by highlighting the correlations of local pixel-level image features with morphological characteristics of endometrial tissue.
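Attention mechanisms of the kind used here weight local features before pooling them into a diagnosis. A toy softmax-attention pooling over scalar region features (an analogue of the general idea only, not HIENet's architecture):

```python
import math

def attention_pool(features, scores):
    """Attention-weighted pooling: softmax the per-region relevance
    scores (with max-subtraction for numerical stability) and return
    the weighted sum of the region feature values plus the weights.
    The weights themselves double as an interpretability map."""
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]
    z = sum(exp)
    weights = [e / z for e in exp]
    pooled = sum(w * f for w, f in zip(weights, features))
    return pooled, weights
```

In a real network the scores are learned; here the useful point is that the normalized weights show which regions dominated the pooled prediction, which is the basis of the highlighting described above.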
365
Microvascularity detection and quantification in glioma: a novel deep-learning-based framework. J Transl Med 2019; 99:1515-1526. [PMID: 31201368 DOI: 10.1038/s41374-019-0272-3]
Abstract
Microvascularity is highly correlated with the grading and subtyping of gliomas, making it one of their most important histological features. Accurate quantitative analysis of microvessels is helpful for the development of targeted antiangiogenic therapy. Deep-learning algorithms are by far the most effective segmentation and detection models and enable the location and recognition of complex microvascular networks in large images obtained from hematoxylin and eosin (H&E)-stained specimens. We propose an automated deep-learning-based method to detect and quantify microvascularity in glioma and apply it to comprehensive clinical analyses. A total of 350 glioma patients were enrolled in our study, for whom digitized images of H&E-stained slides were reviewed, molecular diagnosis was performed, and follow-up was investigated. Microvascular features were compared according to histologic type, molecular type, and patient prognosis. The results show that the proposed method can quantify microvascular characteristics automatically and effectively. Significant increases in microvascular density and microvascular area were observed in glioblastomas (95%, p < 0.001 in density; 170%, p < 0.001 in area) in comparison with other histologic types; increases were also observed in cases with TERT-mut only (68%, p < 0.001 in density; 54%, p < 0.001 in area) compared with other molecular types. Survival analysis showed that microvascular features can be used to cluster cases into two groups with different survival periods (hazard ratio [HR] 2.843, log-rank p < 0.001), indicating that the quantified microvascular features may be alternative signatures for revealing patient prognosis. This deep-learning-based method may be a useful tool in routine clinical practice for precise diagnosis and antiangiogenic treatment.
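Once vessels are segmented, microvascular density and area can, in simplified form, be read off a binary mask via connected-component analysis. A stdlib-only sketch (4-connectivity, no size filtering, unlike a full pipeline):

```python
def vessel_stats(mask):
    """Count connected components (vessel candidates) and the area
    fraction of positive pixels in a binary mask via iterative
    flood fill; a simplified analogue of microvascular density and
    microvascular area quantification."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count, area = 0, 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1  # new component found
                stack = [(r, c)]
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count, area / (rows * cols)
```

Density would then be the component count normalized by tissue area, and the area fraction approximates the microvascular area measure.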
Collapse
|
366
|
Xing F, Bennett T, Ghosh D. Adversarial Domain Adaptation and Pseudo-Labeling for Cross-Modality Microscopy Image Quantification. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2019; 11764:740-749. [PMID: 31825019 PMCID: PMC6903918 DOI: 10.1007/978-3-030-32239-7_82] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
Cell or nucleus quantification has recently achieved state-of-the-art performance by using convolutional neural networks (CNNs). In general, training CNNs requires a large amount of annotated microscopy image data, which is prohibitively expensive or even impossible to obtain in some applications. Additionally, when applying a deep supervised model to new datasets, it is common to annotate individual cells in those target datasets for model re-training or fine-tuning, leading to low-throughput image analysis. In this paper, we propose a novel adversarial domain adaptation method for cell/nucleus quantification across multimodality microscopy image data. Specifically, we learn a fully convolutional network detector with task-specific cycle-consistent adversarial learning, which conducts pixel-level adaptation between source and target domains and completes a cell/nucleus detection task. Then we generate pseudo-labels on target training data using the detector trained with adapted source images and further fine-tune the detector towards the target domain to boost the performance. We evaluate the proposed method on multiple cross-modality microscopy image datasets and obtain a significant improvement in cell/nucleus detection compared to the reference baselines and a recent state-of-the-art deep domain adaptation approach. In addition, our method is very competitive with the fully supervised models trained with all real target training labels.
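The pseudo-labeling step can be illustrated with a minimal sketch: keep only high-confidence detections from the adapted detector as training targets for fine-tuning on the target domain. The confidence threshold here is an assumed value, not one reported by the authors.

```python
def make_pseudo_labels(detections, conf_thresh=0.9):
    """Keep only high-confidence detections as pseudo-labels for fine-tuning
    on the unlabeled target domain. `detections` is a list of (x, y, score)
    tuples from the adapted detector; the 0.9 threshold is an assumed value."""
    return [(x, y) for x, y, score in detections if score >= conf_thresh]
```

The retained (x, y) locations would then be treated as ground truth when fine-tuning the detector on target-domain images, trading some label noise for coverage of the new modality.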
Collapse
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus
- Data Science to Patient Value, University of Colorado Anschutz Medical Campus
| | - Tell Bennett
- Data Science to Patient Value, University of Colorado Anschutz Medical Campus
- Department of Pediatrics, University of Colorado Anschutz Medical Campus
| | - Debashis Ghosh
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus
- Data Science to Patient Value, University of Colorado Anschutz Medical Campus
| |
Collapse
|
367
|
Halicek M, Shahedi M, Little JV, Chen AY, Myers LL, Sumer BD, Fei B. Head and Neck Cancer Detection in Digitized Whole-Slide Histology Using Convolutional Neural Networks. Sci Rep 2019; 9:14043. [PMID: 31575946 PMCID: PMC6773771 DOI: 10.1038/s41598-019-50313-x] [Citation(s) in RCA: 53] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2019] [Accepted: 09/10/2019] [Indexed: 01/01/2023] Open
Abstract
Primary management for head and neck cancers, including squamous cell carcinoma (SCC), involves surgical resection with negative cancer margins. Pathologists guide surgeons during these operations by detecting cancer in histology slides made from the excised tissue. In this study, 381 digitized, histological whole-slide images (WSI) from 156 patients with head and neck cancer were used to train, validate, and test an inception-v4 convolutional neural network. The proposed method is able to detect and localize primary head and neck SCC on WSI with an AUC of 0.916 for patients in the SCC testing group and 0.954 for patients in the thyroid carcinoma testing group. Moreover, the proposed method is able to diagnose WSI with cancer versus normal slides with an AUC of 0.944 and 0.995 for the SCC and thyroid carcinoma testing groups, respectively. For comparison, we tested the proposed, diagnostic method on an open-source dataset of WSI from sentinel lymph nodes with breast cancer metastases, CAMELYON 2016, to obtain patch-based cancer localization and slide-level cancer diagnoses. The experimental design yields a robust method with potential to help create a tool to increase efficiency and accuracy of pathologists detecting head and neck cancers in histological images.
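A hedged sketch of the slide-level diagnosis implied above: patch-level cancer probabilities from the CNN are aggregated into one slide score, and the AUC over slides is computed via the rank-sum identity. The top-k aggregation rule is an assumption for illustration, not necessarily the paper's exact procedure.

```python
import numpy as np

def slide_score(patch_probs, top_k=10):
    """Slide-level cancer score: mean of the top-k most suspicious patch
    probabilities (top-k is an assumed aggregation rule, not the paper's)."""
    p = np.sort(np.asarray(patch_probs, dtype=float))[::-1]
    return float(p[:top_k].mean())

def auc(scores, labels):
    """AUC via the Mann-Whitney rank-sum identity: the fraction of
    (positive, negative) slide pairs ranked correctly, ties counting 1/2."""
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=bool)
    pos, neg = s[y], s[~y]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((greater + 0.5 * ties) / (len(pos) * len(neg)))
```

An AUC of 0.944 for SCC slides, as reported, would mean a randomly chosen cancer slide outscores a randomly chosen normal slide about 94% of the time.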
Collapse
Affiliation(s)
- Martin Halicek
- Department of Bioengineering, University of Texas at Dallas, Richardson, TX, USA.,Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
| | - Maysam Shahedi
- Department of Bioengineering, University of Texas at Dallas, Richardson, TX, USA
| | - James V Little
- Department of Pathology and Laboratory Medicine, Emory University School of Medicine, Atlanta, GA, USA
| | - Amy Y Chen
- Department of Otolaryngology, Emory University School of Medicine, Atlanta, GA, USA
| | - Larry L Myers
- Department of Otolaryngology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Baran D Sumer
- Department of Otolaryngology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Baowei Fei
- Department of Bioengineering, University of Texas at Dallas, Richardson, TX, USA. .,Advanced Imaging Research Center, University of Texas Southwestern Medical Center, Dallas, TX, USA. .,Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, USA.
| |
Collapse
|
368
|
Xu J, Jing M, Wang S, Yang C, Chen X. A review of medical image detection for cancers in digestive system based on artificial intelligence. Expert Rev Med Devices 2019; 16:877-889. [PMID: 31530047 DOI: 10.1080/17434440.2019.1669447] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Introduction: At present, cancer imaging examination relies mainly on manual reading by doctors, which demands a high standard of professional skill, clinical experience, and concentration. However, the increasing amount of medical imaging data has brought more and more challenges to radiologists. The detection of digestive system cancer (DSC) based on artificial intelligence (AI) can provide a solution for automatic analysis of medical images and assist doctors in achieving high-precision intelligent diagnosis of cancers. Areas covered: The main goal of this paper is to introduce the main research methods of AI-based detection of DSC and provide a relevant reference for researchers. It also summarizes the main problems existing in these methods and provides guidance for future research. Expert commentary: The automatic classification, recognition, and segmentation of DSC can be better realized through machine learning and deep learning methods, which can mine internal image information that is difficult for humans to discover. In the diagnosis of DSC, using AI to assist clinicians can achieve rapid and effective cancer detection and save diagnosis time. These can lay the foundation for better clinical diagnosis, treatment planning, and accurate quantitative evaluation of DSC.
Collapse
Affiliation(s)
- Jiangchang Xu
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University , Shanghai , China
| | - Mengjie Jing
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University , Shanghai , China
| | - Shiming Wang
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University , Shanghai , China
| | - Cuiping Yang
- Department of Gastroenterology, Ruijin North Hospital of Shanghai Jiao Tong University School of Medicine , Shanghai , China
| | - Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University , Shanghai , China
| |
Collapse
|
369
|
Xia T, Kumar A, Feng D, Kim J. Patch-level Tumor Classification in Digital Histopathology Images with Domain Adapted Deep Learning. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2019; 2018:644-647. [PMID: 30440479 DOI: 10.1109/embc.2018.8512353] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Tumor histopathology is a crucial step in cancer diagnosis which involves visual inspection of imaging data to detect the presence of tumor cells among healthy tissues. This manual process can be time-consuming, error-prone, and influenced by the expertise of the pathologist. Recent deep learning methods for image classification and detection using convolutional neural networks (CNNs) have demonstrated marked improvements in the accuracy of a variety of medical imaging analysis tasks. However, most well-established deep learning methods require large annotated training datasets that are specific to the particular problem domain; such datasets are difficult to acquire for histopathology data where visual characteristics differ between different tissue types, in addition to the need for precise annotations. In this study, we overcome the lack of an annotated training dataset in histopathology images of a particular domain by adapting annotated histopathology images from different domains (tissue types). The data from other tissue types are used to pre-train CNNs into a shared histopathology domain (e.g., stains, cellular structures) such that they can be further tuned/optimized for a specific tissue type. We evaluated our classification method on publicly available datasets of histopathology images; the accuracy and area under the receiver operating characteristic curve (AUC) of our method were higher than those of CNNs trained from scratch on limited data (accuracy: 84.3% vs. 78.3%; AUC: 0.918 vs. 0.867), suggesting that domain adaptation can be a valuable approach to histopathological image classification.
Collapse
|
370
|
Graham S, Vu QD, Raza SEA, Azam A, Tsang YW, Kwak JT, Rajpoot N. Hover-Net: Simultaneous segmentation and classification of nuclei in multi-tissue histology images. Med Image Anal 2019; 58:101563. [PMID: 31561183 DOI: 10.1016/j.media.2019.101563] [Citation(s) in RCA: 460] [Impact Index Per Article: 76.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2019] [Revised: 09/04/2019] [Accepted: 09/16/2019] [Indexed: 12/21/2022]
Abstract
Nuclear segmentation and classification within Haematoxylin & Eosin stained histology images is a fundamental prerequisite in the digital pathology work-flow. The development of automated methods for nuclear segmentation and classification enables the quantitative analysis of tens of thousands of nuclei within a whole-slide pathology image, opening up possibilities of further analysis of large-scale nuclear morphometry. However, automated nuclear segmentation and classification is faced with a major challenge in that there are several different types of nuclei, some of them exhibiting large intra-class variability such as the nuclei of tumour cells. Additionally, some of the nuclei are often clustered together. To address these challenges, we present a novel convolutional neural network for simultaneous nuclear segmentation and classification that leverages the instance-rich information encoded within the vertical and horizontal distances of nuclear pixels to their centres of mass. These distances are then utilised to separate clustered nuclei, resulting in an accurate segmentation, particularly in areas with overlapping instances. Then, for each segmented instance the network predicts the type of nucleus via a devoted up-sampling branch. We demonstrate state-of-the-art performance compared to other methods on multiple independent multi-tissue histology image datasets. As part of this work, we introduce a new dataset of Haematoxylin & Eosin stained colorectal adenocarcinoma image tiles, containing 24,319 exhaustively annotated nuclei with associated class labels.
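The horizontal/vertical distance encoding at the heart of this approach can be sketched as follows: for each labelled instance, every nucleus pixel stores its signed offset to the instance's centre of mass, normalised per instance. This is a simplified version of the published target maps, shown only to make the encoding concrete.

```python
import numpy as np

def hv_maps(inst_mask):
    """Horizontal/vertical regression targets: each nucleus pixel stores its
    signed x/y offset to that instance's centre of mass, normalised to [-1, 1]
    per instance (background label is 0). Sharp gradients in these maps are
    what let clustered nuclei be separated at their touching boundaries."""
    inst_mask = np.asarray(inst_mask)
    h_map = np.zeros(inst_mask.shape, dtype=float)
    v_map = np.zeros(inst_mask.shape, dtype=float)
    for inst_id in np.unique(inst_mask):
        if inst_id == 0:
            continue  # skip background
        ys, xs = np.nonzero(inst_mask == inst_id)
        dx, dy = xs - xs.mean(), ys - ys.mean()
        h_map[ys, xs] = dx / (np.abs(dx).max() or 1.0)  # avoid /0 for thin nuclei
        v_map[ys, xs] = dy / (np.abs(dy).max() or 1.0)
    return h_map, v_map
```

At inference, the network predicts these two maps and large jumps in their gradients mark boundaries between touching instances.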
Collapse
Affiliation(s)
- Simon Graham
- Mathematics for Real World Systems Centre for Doctoral Training, University of Warwick, UK; Department of Computer Science, University of Warwick, UK.
| | - Quoc Dang Vu
- Department of Computer Science and Engineering, Sejong University, South Korea
| | - Shan E Ahmed Raza
- Department of Computer Science, University of Warwick, UK; Centre for Evolution and Cancer & Division of Molecular Pathology, The Institute of Cancer Research, London, UK
| | - Ayesha Azam
- Department of Computer Science, University of Warwick, UK; University Hospitals Coventry and Warwickshire, Coventry, UK
| | - Yee Wah Tsang
- University Hospitals Coventry and Warwickshire, Coventry, UK
| | - Jin Tae Kwak
- Department of Computer Science and Engineering, Sejong University, South Korea
| | - Nasir Rajpoot
- Department of Computer Science, University of Warwick, UK; The Alan Turing Institute, London, UK
| |
Collapse
|
371
|
Shaban M, Khurram SA, Fraz MM, Alsubaie N, Masood I, Mushtaq S, Hassan M, Loya A, Rajpoot NM. A Novel Digital Score for Abundance of Tumour Infiltrating Lymphocytes Predicts Disease Free Survival in Oral Squamous Cell Carcinoma. Sci Rep 2019; 9:13341. [PMID: 31527658 PMCID: PMC6746698 DOI: 10.1038/s41598-019-49710-z] [Citation(s) in RCA: 98] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2018] [Accepted: 07/31/2019] [Indexed: 01/06/2023] Open
Abstract
Oral squamous cell carcinoma (OSCC) is the most common type of head and neck (H&N) cancers with an increasing worldwide incidence and a worsening prognosis. The abundance of tumour infiltrating lymphocytes (TILs) has been shown to be a key prognostic indicator in a range of cancers with emerging evidence of its role in OSCC progression and treatment response. However, the current methods of TIL analysis are subjective and open to variability in interpretation. An automated method for quantification of TIL abundance has the potential to facilitate better stratification and prognostication of oral cancer patients. We propose a novel method for objective quantification of TIL abundance in OSCC histology images. The proposed TIL abundance (TILAb) score is calculated by first segmenting the whole slide images (WSIs) into underlying tissue types (tumour, lymphocytes, etc.) and then quantifying the co-localization of lymphocytes and tumour areas in a novel fashion. We investigate the prognostic significance of TILAb score on digitized WSIs of Hematoxylin and Eosin (H&E) stained slides of OSCC patients. Our deep learning based tissue segmentation achieves high accuracy of 96.31%, which paves the way for reliable downstream analysis. We show that the TILAb score is a strong prognostic indicator (p = 0.0006) of disease free survival (DFS) on our OSCC test cohort. The automated TILAb score has a significantly higher prognostic value than the manual TIL score (p = 0.0024). In summary, the proposed TILAb score is a digital biomarker that is based on more accurate classification of tumour and lymphocytic regions, is motivated by the biological definition of TILs as tumour-infiltrating lymphocytes, and has the added advantages of objective and reproducible quantification.
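The co-localization idea can be illustrated with a hypothetical, much-simplified score: the fraction of tumour patches that have at least one lymphocyte-rich patch among their 4-neighbours. The published TILAb formula differs; this sketch only mimics the tumour-lymphocyte co-localisation idea on a grid of per-patch tissue labels.

```python
import numpy as np

def tilab_like_score(grid):
    """Hypothetical co-localisation score: fraction of tumour patches ('T')
    with at least one lymphocyte-rich patch ('L') among their 4-neighbours.
    `grid` holds per-patch tissue labels ('.' = other tissue). The actual
    TILAb formula is defined in the paper and is not reproduced here."""
    g = np.asarray(grid)
    tumour = np.argwhere(g == 'T')
    if len(tumour) == 0:
        return 0.0
    h, w = g.shape
    hits = 0
    for y, x in tumour:
        if any(0 <= y + dy < h and 0 <= x + dx < w and g[y + dy, x + dx] == 'L'
               for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))):
            hits += 1  # this tumour patch is infiltrated by lymphocytes
    return hits / len(tumour)
```

The key design point carries over from the paper: the score rewards lymphocytes *inside or adjacent to* tumour regions rather than raw lymphocyte counts anywhere on the slide.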
Collapse
Affiliation(s)
- Muhammad Shaban
- Department of Computer Science, University of Warwick, Coventry, CV47AL, UK
| | - Syed Ali Khurram
- School of Clinical Dentistry, University of Sheffield, Sheffield, UK
| | - Muhammad Moazam Fraz
- Department of Computer Science, University of Warwick, Coventry, CV47AL, UK
- School of Electrical Engineering and Computer Science, National University of Sciences and Technology, H-12, Islamabad, Pakistan
- The Alan Turing Institute, NW1 2DB, London, UK
| | - Najah Alsubaie
- Department of Computer Science, University of Warwick, Coventry, CV47AL, UK
- Department of Computer Science, Princess Nourah University, Riyadh, Saudi Arabia
| | - Iqra Masood
- Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
| | - Sajid Mushtaq
- Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
| | - Mariam Hassan
- Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
| | - Asif Loya
- Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
| | - Nasir M Rajpoot
- Department of Computer Science, University of Warwick, Coventry, CV47AL, UK.
- The Alan Turing Institute, NW1 2DB, London, UK.
- University Hospitals Coventry, Department of Pathology, Warwickshire, UK.
| |
Collapse
|
372
|
Xing F, Xie Y, Shi X, Chen P, Zhang Z, Yang L. Towards pixel-to-pixel deep nucleus detection in microscopy images. BMC Bioinformatics 2019; 20:472. [PMID: 31521104 PMCID: PMC6744696 DOI: 10.1186/s12859-019-3037-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2018] [Accepted: 08/21/2019] [Indexed: 12/21/2022] Open
Abstract
BACKGROUND Nucleus detection is a fundamental task in microscopy image analysis and supports many other quantitative studies such as object counting, segmentation, and tracking. Deep neural networks are emerging as a powerful tool for biomedical image computing; in particular, convolutional neural networks have been widely applied to nucleus/cell detection in microscopy images. However, almost all models are tailored for specific datasets and their applicability to other microscopy image data remains unknown. Some existing studies casually learn and evaluate deep neural networks on multiple microscopy datasets, but there are still several critical, open questions to be addressed. RESULTS We analyze the applicability of deep models specifically for nucleus detection across a wide variety of microscopy image data. More specifically, we present a fully convolutional network-based regression model and extensively evaluate it on large-scale digital pathology and microscopy image datasets, which cover 23 organs (or cancer diseases) and come from multiple institutions. We demonstrate that for a specific target dataset, training with images from the same types of organs is usually necessary for nucleus detection. Although the images can be visually similar due to the same staining technique and imaging protocol, deep models learned with images from different organs might not deliver desirable results and would require model fine-tuning to be on a par with those trained with target data. We also observe that training with a mixture of target and other/non-target data does not always mean a higher accuracy of nucleus detection, and it might require proper data manipulation during model training to achieve good performance. CONCLUSIONS We conduct a systematic case study on deep models for nucleus detection in a wide variety of microscopy images, aiming to address several important but previously understudied questions. We present and extensively evaluate an end-to-end, pixel-to-pixel fully convolutional regression network and report a few significant findings, some of which might not have been reported in previous studies. The model performance analysis and observations should be helpful to nucleus detection in microscopy images.
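One common way to set up such a pixel-to-pixel regression target — assumed here for illustration, not necessarily the authors' exact formulation — is a proximity map with a Gaussian peak at each annotated nucleus centre:

```python
import numpy as np

def proximity_map(shape, centers, sigma=2.0):
    """Regression target for pixel-to-pixel nucleus detection: a Gaussian peak
    at each annotated (row, col) centre. `sigma` is an assumed bandwidth. At
    inference, nuclei are recovered as local maxima of the predicted map."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    target = np.zeros(shape, dtype=float)
    for cy, cx in centers:
        peak = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
        target = np.maximum(target, peak)  # overlapping peaks keep the max
    return target
```

The FCN is then trained to regress this map from the raw image, so detection reduces to non-maximum suppression on its output.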
Collapse
Affiliation(s)
- Fuyong Xing
- Department of Biostatistics and Informatics, and the Data Science to Patient Value initiative, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, Colorado 80045, United States
| | - Yuanpu Xie
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, Florida 32611, United States
| | - Xiaoshuang Shi
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, Florida 32611, United States
| | - Pingjun Chen
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, Florida 32611, United States
| | - Zizhao Zhang
- Department of Computer and Information Science and Engineering, University of Florida, 432 Newell Drive, Gainesville, Florida 32611, United States
| | - Lin Yang
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, 1275 Center Drive, Gainesville, Florida 32611, United States
- Department of Computer and Information Science and Engineering, University of Florida, 432 Newell Drive, Gainesville, Florida 32611, United States
| |
Collapse
|
373
|
Joseph J, Roudier MP, Narayanan PL, Augulis R, Ros VR, Pritchard A, Gerrard J, Laurinavicius A, Harrington EA, Barrett JC, Howat WJ. Proliferation Tumour Marker Network (PTM-NET) for the identification of tumour region in Ki67 stained breast cancer whole slide images. Sci Rep 2019; 9:12845. [PMID: 31492872 PMCID: PMC6731323 DOI: 10.1038/s41598-019-49139-4] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2018] [Accepted: 08/16/2019] [Indexed: 12/20/2022] Open
Abstract
Uncontrolled proliferation is a hallmark of cancer and can be assessed by labelling breast tissue using immunohistochemistry for Ki67, a protein associated with cell proliferation. Accurate measurement of Ki67-positive tumour nuclei is of critical importance, but requires annotation of the tumour regions by a pathologist. This manual annotation process is highly subjective, time-consuming and subject to inter- and intra-annotator variability. To address this challenge, we have developed Proliferation Tumour Marker Network (PTM-NET), a deep learning model that objectively annotates the tumour regions in Ki67-labelled breast cancer digital pathology images using a convolutional neural network. Our custom designed deep learning model was trained on 45 immunohistochemical Ki67-labelled whole slide images to classify tumour and non-tumour regions and was validated on 45 whole slide images from two different sources that were stained using different protocols. Our results show a Dice coefficient of 0.74, positive predictive value of 70% and negative predictive value of 88.3% against the manual ground truth annotation for the combined dataset. There were minimal differences between the images from different sources and the model was further tested on oestrogen receptor and progesterone receptor-labelled images. Finally, using an extension of the model, we could identify possible hotspot regions of high proliferation within the tumour. In the future, this approach could be useful in identifying tumour regions in biopsy samples and tissue microarray images.
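The reported metrics follow from standard pixel-count definitions; a short helper makes them concrete (nothing here is model-specific):

```python
import numpy as np

def dice_ppv_npv(pred, truth):
    """Standard Dice coefficient, positive predictive value, and negative
    predictive value for a binary tumour-region prediction vs ground truth."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = (pred & truth).sum()   # tumour pixels correctly flagged
    fp = (pred & ~truth).sum()  # non-tumour flagged as tumour
    fn = (~pred & truth).sum()  # tumour missed
    tn = (~pred & ~truth).sum() # non-tumour correctly left alone
    return 2 * tp / (2 * tp + fp + fn), tp / (tp + fp), tn / (tn + fn)
```

Note the asymmetry the abstract's numbers reflect: a model can have a modest PPV (70%) yet a high NPV (88.3%) when non-tumour tissue dominates the slide.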
Collapse
Affiliation(s)
- Jesuchristopher Joseph
- Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom.
| | - Martine P Roudier
- Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom
| | - Priya Lakshmi Narayanan
- Centre for Evolution and Cancer, Division of Molecular Pathology, Institute of Cancer Research London, London, United Kingdom
| | - Renaldas Augulis
- Vilnius University, Faculty of Medicine and the National Centre of Pathology, affiliate of Vilnius University Hospital Santaros Clinics, Vilnius, Lithuania
| | - Vidalba Rocher Ros
- Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom
| | - Alison Pritchard
- Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom
| | - Joe Gerrard
- Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom
| | - Arvydas Laurinavicius
- Vilnius University, Faculty of Medicine and the National Centre of Pathology, affiliate of Vilnius University Hospital Santaros Clinics, Vilnius, Lithuania
| | - Elizabeth A Harrington
- Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom
| | - J Carl Barrett
- Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom
| | - William J Howat
- Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom
| |
Collapse
|
375
|
Tofighi M, Guo T, Vanamala JKP, Monga V. Prior Information Guided Regularized Deep Learning for Cell Nucleus Detection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2047-2058. [PMID: 30703016 DOI: 10.1109/tmi.2019.2895318] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Cell nuclei detection is a challenging research topic because of limitations in cellular image quality and diversity of nuclear morphology, i.e., varying nuclei shapes, sizes, and overlaps between multiple cell nuclei. This has been a topic of enduring interest with promising recent success shown by deep learning methods. These methods train convolutional neural networks (CNNs) with a training set of input images and known, labeled nuclei locations. Many such methods are supplemented by spatial or morphological processing. Using a set of canonical cell nuclei shapes, prepared with the help of a domain expert, we develop a new approach that we call shape priors (SPs) with CNNs (SPs-CNN). We further extend the network by introducing an SP layer and then allowing it to become trainable (i.e., optimizable). We call this network the tunable SP-CNN (TSP-CNN). In summary, we present new network structures that can incorporate "expected behavior" of nucleus shapes via two components: learnable layers that perform the nucleus detection and a fixed processing part that guides the learning with prior information. Analytically, we formulate two new regularization terms that are targeted at: 1) learning the shapes and 2) reducing false positives while simultaneously encouraging detection inside the cell nucleus boundary. Experimental results on two challenging datasets reveal that the proposed SP-CNN and TSP-CNN can outperform the state-of-the-art alternatives.
Collapse
|
376
|
Wang S, Zhu Y, Yu L, Chen H, Lin H, Wan X, Fan X, Heng PA. RMDL: Recalibrated multi-instance deep learning for whole slide gastric image classification. Med Image Anal 2019; 58:101549. [PMID: 31499320 DOI: 10.1016/j.media.2019.101549] [Citation(s) in RCA: 84] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2019] [Revised: 08/24/2019] [Accepted: 08/29/2019] [Indexed: 12/11/2022]
Abstract
The whole slide histopathology images (WSIs) play a critical role in gastric cancer diagnosis. However, due to the large scale of WSIs and the various sizes of abnormal areas, selecting informative regions and analyzing them is quite challenging during the automatic diagnosis process. Multi-instance learning based on the most discriminative instances can be of great benefit for whole slide gastric image diagnosis. In this paper, we design a recalibrated multi-instance deep learning method (RMDL) to address this challenging problem. We first select the discriminative instances, and then utilize these instances to diagnose diseases based on the proposed RMDL approach. The designed RMDL network is capable of capturing instance-wise dependencies and recalibrating instance features according to the importance coefficient learned from the fused features. Furthermore, we build a large whole-slide gastric histopathology image dataset with detailed pixel-level annotations. Experimental results on the constructed gastric dataset demonstrate the significant improvement on the accuracy of our proposed framework compared with other state-of-the-art multi-instance learning methods. Moreover, our method is general and can be extended to other diagnosis tasks of different cancer types based on WSIs.
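The recalibration idea — scoring instances and re-weighting their features before pooling — can be sketched with a fixed weight vector standing in for the learned one; in RMDL the importance coefficients are learned end to end from fused features, so this is only a structural illustration.

```python
import numpy as np

def recalibrated_pool(instance_feats, w):
    """Importance-weighted instance pooling: score each instance feature
    vector with weight vector `w` (fixed here; learned end-to-end in RMDL),
    softmax the scores into importance coefficients, and return the weighted
    sum as the slide-level descriptor."""
    f = np.asarray(instance_feats, dtype=float)  # (n_instances, dim)
    scores = f @ np.asarray(w, dtype=float)      # one relevance score each
    e = np.exp(scores - scores.max())            # numerically stable softmax
    alpha = e / e.sum()                          # importance coefficients
    return alpha @ f                             # (dim,) slide descriptor
```

The effect is that a few highly discriminative patches dominate the slide-level representation instead of being diluted by uniform averaging.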
Collapse
Affiliation(s)
- Shujun Wang
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Yaxi Zhu
- Department of Pathology, The Sixth Affiliated Hospital of Sun Yat-sen University, China
| | - Lequan Yu
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| | - Hao Chen
- Imsight Medical Technology Co., Ltd., China.
| | - Huangjing Lin
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China; Imsight Medical Technology Co., Ltd., China
| | - Xiangbo Wan
- Department of Radiation Oncology, The Sixth Affiliated Hospital of Sun Yat-sen University, China
| | - Xinjuan Fan
- Department of Pathology, The Sixth Affiliated Hospital of Sun Yat-sen University, China.
| | - Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
| |
Collapse
|
377
|
Swiderska-Chadaj Z, Pinckaers H, van Rijthoven M, Balkenhol M, Melnikova M, Geessink O, Manson Q, Sherman M, Polonia A, Parry J, Abubakar M, Litjens G, van der Laak J, Ciompi F. Learning to detect lymphocytes in immunohistochemistry with deep learning. Med Image Anal 2019; 58:101547. [PMID: 31476576 DOI: 10.1016/j.media.2019.101547] [Citation(s) in RCA: 87] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Revised: 08/12/2019] [Accepted: 08/20/2019] [Indexed: 12/17/2022]
Abstract
The immune system is of critical importance in the development of cancer. The evasion of destruction by the immune system is one of the emerging hallmarks of cancer. We have built a dataset of 171,166 manually annotated CD3+ and CD8+ cells, which we used to train deep learning algorithms for automatic detection of lymphocytes in histopathology images to better quantify immune response. Moreover, we investigate the effectiveness of four deep learning based methods when different subcompartments of the whole-slide image are considered: normal tissue areas, areas with immune cell clusters, and areas containing artifacts. We have compared the proposed methods in breast, colon and prostate cancer tissue slides collected from nine different medical centers. Finally, we report the results of an observer study on lymphocyte quantification, which involved four pathologists from different medical centers, and compare their performance with the automatic detection. The results give insights on the applicability of the proposed methods for clinical use. U-Net obtained the highest performance with an F1-score of 0.78 and the highest agreement with manual evaluation (κ=0.72), whereas the pathologists' average agreement with the reference standard was κ=0.64. The test set and the automatic evaluation procedure are publicly available at lyon19.grand-challenge.org.
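The two headline numbers (F1-score and Cohen's κ) follow from standard definitions. Below is a short helper; the count combination in the test is illustrative (it happens to yield 0.78), not the study's actual tallies, and the kappa sketch assumes binary categories.

```python
def f1_score(tp, fp, fn):
    """Detection F1 from counts: harmonic mean of precision and recall."""
    return 2 * tp / (2 * tp + fp + fn)

def cohens_kappa(a, b):
    """Cohen's kappa for two binary (0/1) label sequences: observed agreement
    corrected for the agreement expected by chance between the two raters."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n     # raw agreement
    pa, pb = sum(a) / n, sum(b) / n                   # each rater's positive rate
    p_chance = pa * pb + (1 - pa) * (1 - pb)          # chance agreement
    return (p_obs - p_chance) / (1 - p_chance)
```

Kappa is the right comparison here because raw agreement is inflated by chance; κ=0.72 for the model vs κ=0.64 for the average pathologist is a chance-corrected statement.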
Affiliation(s)
- Hans Pinckaers
- Department of Pathology, Radboud University Medical Center, The Netherlands
- Mart van Rijthoven
- Department of Pathology, Radboud University Medical Center, The Netherlands
- Margarita Melnikova
- Department of Pathology, Radboud University Medical Center, The Netherlands; Department of Clinical Medicine, Aarhus University, Denmark; Institute of Pathology, Randers Regional Hospital, Denmark
- Oscar Geessink
- Department of Pathology, Radboud University Medical Center, The Netherlands
- Quirine Manson
- Department of Pathology, University Medical Center, Utrecht, The Netherlands
- Antonio Polonia
- Institute of Molecular Pathology and Immunology, University of Porto, Porto, Portugal
- Jeremy Parry
- Fiona Stanley Hospital, Murdoch, Perth, Western Australia
- Mustapha Abubakar
- Integrative Tumor Epidemiology Branch, Division of Cancer Epidemiology and Genetics, National Cancer Institute, USA
- Geert Litjens
- Department of Pathology, Radboud University Medical Center, The Netherlands
- Jeroen van der Laak
- Department of Pathology, Radboud University Medical Center, The Netherlands; Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Francesco Ciompi
- Department of Pathology, Radboud University Medical Center, The Netherlands
|
378
|
Azer SA. Challenges Facing the Detection of Colonic Polyps: What Can Deep Learning Do? MEDICINA (KAUNAS, LITHUANIA) 2019; 55:473. [PMID: 31409050 PMCID: PMC6723854 DOI: 10.3390/medicina55080473] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 06/05/2019] [Revised: 08/01/2019] [Accepted: 08/06/2019] [Indexed: 12/24/2022]
Abstract
Colorectal cancer (CRC) is one of the most common causes of cancer mortality in the world. The incidence increases with age and is associated with western dietary habits. Early detection through screening by colonoscopy has been proven to effectively reduce disease-related mortality. Currently, it is generally accepted that most colorectal cancers originate from adenomas. This is known as the "adenoma-carcinoma sequence", and several studies have shown that early detection and removal of adenomas can effectively prevent the development of colorectal cancer. The other two pathways for CRC development are the Lynch syndrome pathway and the sessile serrated pathway. The adenoma detection rate is an established indicator of a colonoscopy's quality. A 1% increase in the adenoma detection rate has been associated with a 3% decrease in interval CRC incidence. However, several factors may affect the adenoma detection rate during a colonoscopy, and techniques to address these factors have been thoroughly discussed in the literature. Interestingly, despite the use of these techniques in colonoscopy training programs and the introduction of quality measures in colonoscopy, the adenoma detection rate varies widely. Considering these limitations, initiatives that use deep learning, particularly convolutional neural networks (CNNs), to detect cancerous lesions and colonic polyps have been introduced. The CNN architecture seems to offer several advantages in this field, including polyp classification, detection, and segmentation, polyp tracking, and an increase in the rate of accurate diagnosis. Given the challenges in detecting colon cancer affecting the ascending (proximal) colon, which is more common in women aged over 65 years and is responsible for the higher mortality of these patients, one of the questions that remains to be answered is whether CNNs can help to maximize the CRC detection rate in the proximal versus the distal colon in relation to gender distribution.
This review discusses the current challenges facing CRC screening and training programs, quality measures in colonoscopy, and the role of CNNs in increasing the detection rate of colonic polyps and early cancerous lesions.
Affiliation(s)
- Samy A Azer
- Department of Medical Education, King Saud University, College of Medicine, Riyadh 11461, Saudi Arabia
|
379
|
Lin H, Chen H, Graham S, Dou Q, Rajpoot N, Heng PA. Fast ScanNet: Fast and Dense Analysis of Multi-Gigapixel Whole-Slide Images for Cancer Metastasis Detection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:1948-1958. [PMID: 30624213 DOI: 10.1109/tmi.2019.2891305] [Citation(s) in RCA: 60] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
Lymph node metastasis is one of the most important indicators in breast cancer diagnosis and is traditionally assessed under the microscope by pathologists. In recent years, with the dramatic advance of high-throughput scanning and deep learning technology, automatic analysis of histology from whole-slide images has received a wealth of interest in the field of medical image computing, aiming to alleviate pathologists' workload and simultaneously reduce the misdiagnosis rate. However, the automatic detection of lymph node metastases from whole-slide images remains a key challenge because such images are typically very large, often multiple gigabytes in size. Also, the presence of hard mimics may result in a large number of false positives. In this paper, we propose a novel method with anchor layers for model conversion, which not only leverages the efficiency of fully convolutional architectures to meet the speed requirement in clinical practice but also densely scans the whole-slide image to achieve accurate predictions on both micro- and macro-metastases. Incorporating the strategies of asynchronous sample prefetching and hard negative mining, the network can be effectively trained. The efficacy of our method is corroborated on the benchmark dataset of the 2016 Camelyon Grand Challenge. Our method achieved significant improvements over the state-of-the-art methods in tumor localization accuracy at a much faster speed, and even surpassed human performance on both challenge tasks.
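Hard negative mining, as mentioned above, generally means re-training on the normal-tissue patches the current model most confidently mistakes for tumor. A generic sketch of that selection step (not the authors' implementation; the patch identifiers and scores are hypothetical):

```python
def mine_hard_negatives(patches, k):
    """patches: (patch_id, tumor_score, true_label) triples scored by the
    current model; return the k negatives with the highest tumor scores,
    i.e. the 'hard mimics' to emphasize in the next training round."""
    negatives = [p for p in patches if p[2] == "normal"]
    negatives.sort(key=lambda p: p[1], reverse=True)  # most confident mistakes first
    return [p[0] for p in negatives[:k]]
```

In practice the mined patches are appended to the training set and the detector is fine-tuned for another round, which suppresses the false positives the abstract attributes to hard mimics.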
|
380
|
Cai J, He WG, Wang L, Zhou K, Wu TX. Osteoporosis Recognition in Rats under Low-Power Lens Based on Convexity Optimization Feature Fusion. Sci Rep 2019; 9:10971. [PMID: 31358772 PMCID: PMC6662810 DOI: 10.1038/s41598-019-47281-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2018] [Accepted: 07/15/2019] [Indexed: 11/09/2022] Open
Abstract
Considering the poor medical conditions in some regions of China, this paper attempts to develop a simple and easy way to extract and process the bone features of blurry medical images and improve the diagnostic accuracy of osteoporosis as much as possible. After reviewing previous studies on osteoporosis, especially those focusing on texture analysis, a convexity optimization model based on intra-class dispersion was proposed, which combines texture features and shape features. Experimental results show that the proposed model has a broader application scope than Lasso, a popular feature selection method that only supports generalized linear models. The research findings support accurate osteoporosis diagnosis and show good potential for clinical application.
Affiliation(s)
- Jie Cai
- School of Information Engineering, Guangdong Medical University, Zhanjiang, 524023, China
- Wen-Guang He
- School of Information Engineering, Guangdong Medical University, Zhanjiang, 524023, China
- Long Wang
- School of Information Engineering, Guangdong Medical University, Zhanjiang, 524023, China
- Ke Zhou
- School of Information Engineering, Guangdong Medical University, Zhanjiang, 524023, China
- Tian-Xiu Wu
- School of Basic Medical Science, Guangdong Medical University, Zhanjiang, 524023, China
|
381
|
Gupta R, Kurc T, Sharma A, Almeida JS, Saltz J. The Emergence of Pathomics. CURRENT PATHOBIOLOGY REPORTS 2019. [DOI: 10.1007/s40139-019-00200-x] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
382
|
Cui Y, Zhang G, Liu Z, Xiong Z, Hu J. A deep learning algorithm for one-step contour aware nuclei segmentation of histopathology images. Med Biol Eng Comput 2019; 57:2027-2043. [PMID: 31346949 DOI: 10.1007/s11517-019-02008-8] [Citation(s) in RCA: 63] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2018] [Accepted: 06/24/2019] [Indexed: 12/12/2022]
Abstract
This paper addresses the task of nuclei segmentation in high-resolution histopathology images. We propose an automatic end-to-end deep neural network algorithm for segmentation of individual nuclei. A nucleus-boundary model is introduced to predict nuclei and their boundaries simultaneously using a fully convolutional neural network. Given a color-normalized image, the model directly outputs an estimated nuclei map and a boundary map. A simple, fast, and parameter-free post-processing procedure is performed on the estimated nuclei map to produce the final segmented nuclei. An overlapped patch extraction and assembling method is also designed for seamless prediction of nuclei in large whole-slide images. We also show the effectiveness of data augmentation methods for the nuclei segmentation task. Our experiments show that our method outperforms prior state-of-the-art methods. Moreover, it is efficient: a 1000×1000 image can be segmented in less than 5 s, which makes it possible to precisely segment a whole-slide image in acceptable time. The source code is available at https://github.com/easycui/nuclei_segmentation. Graphical Abstract The neural network for nuclei segmentation.
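The overlapped patch extraction and assembling idea can be sketched in pure Python: tile the image with a stride smaller than the patch size, always covering the border, then average predictions wherever patches overlap. This is a simplified sketch of the general technique, not the authors' released code:

```python
def patch_coords(size, patch, stride):
    """Top-left coordinates covering a 1-D extent with overlap (stride < patch)."""
    coords = list(range(0, size - patch + 1, stride))
    if coords[-1] != size - patch:  # make sure the border is covered
        coords.append(size - patch)
    return coords

def assemble(pred_patches, h, w, patch):
    """Average per-pixel predictions from overlapping patches.
    pred_patches: (y, x, p) triples, with p a patch x patch grid of scores."""
    acc = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    for y, x, p in pred_patches:
        for i in range(patch):
            for j in range(patch):
                acc[y + i][x + j] += p[i][j]
                cnt[y + i][x + j] += 1
    return [[acc[i][j] / cnt[i][j] for j in range(w)] for i in range(h)]
```

Averaging the overlap regions is what makes the stitched whole-slide prediction seamless: pixels near patch borders are seen by several patches instead of one.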
Affiliation(s)
- Yuxin Cui
- Department of Computer Science and Technology, University of South Carolina, Columbia, SC, 29208, USA
- Guiying Zhang
- Department of Medical Information Engineering, Zunyi Medical University, Zunyi, China
- Zhonghao Liu
- Department of Computer Science and Technology, University of South Carolina, Columbia, SC, 29208, USA
- Zheng Xiong
- Department of Computer Science and Technology, University of South Carolina, Columbia, SC, 29208, USA
- Jianjun Hu
- Department of Computer Science and Technology, University of South Carolina, Columbia, SC, 29208, USA
|
383
|
Lafarge MW, Pluim JPW, Eppenhof KAJ, Veta M. Learning Domain-Invariant Representations of Histological Images. Front Med (Lausanne) 2019; 6:162. [PMID: 31380377 PMCID: PMC6646468 DOI: 10.3389/fmed.2019.00162] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2019] [Accepted: 07/01/2019] [Indexed: 11/13/2022] Open
Abstract
Histological images present high appearance variability due to inconsistent latent parameters related to the preparation and scanning procedure of histological slides, as well as the inherent biological variability of tissues. Machine-learning models are trained with images from a limited set of domains, and are expected to generalize to images from unseen domains. Methodological design choices have to be made in order to yield domain invariance and proper generalization. In digital pathology, standard approaches focus either on ad-hoc normalization of the latent parameters based on prior knowledge, such as staining normalization, or aim at anticipating new variations of these parameters via data augmentation. Since every histological image originates from a unique data distribution, we propose to consider every histological slide of the training data as a separate domain and investigate the alternative approach of domain-adversarial training to learn features that are invariant to this available domain information. We carried out a comparative analysis with staining normalization and data augmentation on two different tasks: generalization to images acquired in unseen pathology labs for mitosis detection, and generalization to unseen organs for nuclei segmentation. We report that the utility of each method depends on the type of task and the type of data variability present at training and test time. The proposed framework for domain-adversarial training is able to improve generalization performance on top of conventional methods.
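Domain-adversarial training hinges on a gradient reversal layer: identity in the forward pass, sign-flipped and scaled gradient in the backward pass, so the feature extractor is pushed to confuse the domain classifier. A framework-free sketch of just that behavior (in a real model this would typically be a custom autograd function):

```python
class GradientReversal:
    """Identity forward; multiplies the incoming gradient by -lam backward.
    Placed between the feature extractor and the domain classifier, it makes
    the learned features maximally uninformative about the domain label."""

    def __init__(self, lam):
        self.lam = lam  # trade-off weight for the adversarial signal

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad):
        return -self.lam * grad  # reversed, scaled gradient to the extractor
```

With each slide treated as a domain, the domain classifier tries to identify the slide while the reversed gradient trains the extractor to remove slide-specific cues such as staining.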
Affiliation(s)
- Maxime W. Lafarge
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
|
384
|
İnik Ö, Ceyhan A, Balcıoğlu E, Ülker E. A new method for automatic counting of ovarian follicles on whole slide histological images based on convolutional neural network. Comput Biol Med 2019; 112:103350. [PMID: 31330319 DOI: 10.1016/j.compbiomed.2019.103350] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Revised: 07/03/2019] [Accepted: 07/04/2019] [Indexed: 01/30/2023]
Abstract
The ovary is a complex endocrine organ that shows significant structural and functional changes in the female reproductive system over recurrent cycles. There are different types of follicles in the ovarian tissue. The reproductive potential of each individual depends on the numbers of these follicles. However, genetic mutations, toxins, and some specific drugs have an effect on follicles. To determine these effects, it is of great importance to count the follicles. The number of follicles in the ovary is usually counted manually by experts, which is a tedious, time-consuming and intense process. In some cases, the experts count the follicles in a subjective way that depends on their individual expertise. In this study, for the first time, a method is proposed for automatically counting the follicles of ovarian tissue. Our method primarily involves filter-based segmentation of whole slide histological images, based on a convolutional neural network (CNN). A new method is also proposed to eliminate the noise that occurs after the segmentation process and to determine the boundaries of the follicles. Finally, the follicles whose boundaries are determined are classified. To evaluate its performance, the results of the proposed method were compared with those obtained by two different experts and with the results of the Faster R-CNN model. The number of follicles obtained by the proposed method was very close to the number counted by the experts. It was also found that the proposed method was much more successful than the Faster R-CNN model.
Affiliation(s)
- Özkan İnik
- Department of Computer Engineering, Gaziosmanpaşa University, Tokat, Turkey
- Ayşe Ceyhan
- Department of Histology and Embryology, Erciyes University School of Medicine, Kayseri, Turkey
- Esra Balcıoğlu
- Department of Histology and Embryology, Erciyes University School of Medicine, Kayseri, Turkey
- Erkan Ülker
- Department of Computer Engineering, Konya Technical University, Konya, Turkey
|
385
|
Martin DR, Hanson JA, Gullapalli RR, Schultz FA, Sethi A, Clark DP. A Deep Learning Convolutional Neural Network Can Recognize Common Patterns of Injury in Gastric Pathology. Arch Pathol Lab Med 2019; 144:370-378. [PMID: 31246112 DOI: 10.5858/arpa.2019-0004-oa] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
CONTEXT.— Most deep learning (DL) studies have focused on neoplastic pathology, with the realm of inflammatory pathology remaining largely untouched. OBJECTIVE.— To investigate the use of DL for nonneoplastic gastric biopsies. DESIGN.— Gold standard diagnoses were blindly established by 2 gastrointestinal pathologists. For phase 1, 300 classic cases (100 normal, 100 Helicobacter pylori, 100 reactive gastropathy) that best displayed the desired pathology were scanned and annotated for DL analysis. A total of 70% of the cases for each group were selected for the training set, and 30% were included in the test set. The software assigned colored labels to the test biopsies, corresponding to the area of the tissue assigned a diagnosis by the DL algorithm, termed area distribution (AD). For phase 2, an additional 106 consecutive nonclassical gastric biopsies from our archives were tested in the same fashion. RESULTS.— For phase 1, receiver operating characteristic curves showed near perfect agreement with the gold standard diagnoses at an AD percentage cutoff of 50% for normal (area under the curve [AUC] = 99.7%) and H pylori (AUC = 100%), and 40% for reactive gastropathy (AUC = 99.9%). Sensitivity/specificity pairings were as follows: normal (96.7%, 86.7%), H pylori (100%, 98.3%), and reactive gastropathy (96.7%, 96.7%). For phase 2, receiver operating characteristic curves were slightly less discriminatory, with optimal AD cutoffs reduced to 40% across diagnostic groups. The AUCs were 91.9% for normal, 100% for H pylori, and 94.0% for reactive gastropathy. Sensitivity/specificity pairings were as follows: normal (73.7%, 79.6%), H pylori (95.7%, 100%), and reactive gastropathy (100%, 62.5%). CONCLUSIONS.— A convolutional neural network can serve as an effective screening tool/diagnostic aid for H pylori gastritis.
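The area-distribution (AD) decision rule described above amounts to thresholding the fraction of labeled tissue that the algorithm assigns to a diagnosis. A minimal sketch with a hypothetical label map (None marks background, and the label strings are illustrative, not the study's):

```python
def area_distribution(label_map, diagnosis):
    """Fraction of labeled tissue pixels assigned to `diagnosis`."""
    labels = [l for row in label_map for l in row if l is not None]
    return sum(1 for l in labels if l == diagnosis) / len(labels)

def call_diagnosis(label_map, diagnosis, cutoff):
    """Positive call when the AD fraction reaches the ROC-derived cutoff."""
    return area_distribution(label_map, diagnosis) >= cutoff
```

The study's cutoffs (50% in phase 1, 40% in phase 2) would be the `cutoff` argument, chosen from the receiver operating characteristic curves.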
Affiliation(s)
- David R Martin
- From the Departments of Pathology (Drs Martin, Hanson, Gullapalli, Sethi, and Clark, and Mr Schultz) and Chemical and Biological Engineering (Dr Gullapalli), University of New Mexico, Albuquerque
- Joshua A Hanson
- From the Departments of Pathology (Drs Martin, Hanson, Gullapalli, Sethi, and Clark, and Mr Schultz) and Chemical and Biological Engineering (Dr Gullapalli), University of New Mexico, Albuquerque
- Rama R Gullapalli
- From the Departments of Pathology (Drs Martin, Hanson, Gullapalli, Sethi, and Clark, and Mr Schultz) and Chemical and Biological Engineering (Dr Gullapalli), University of New Mexico, Albuquerque
- Fred A Schultz
- From the Departments of Pathology (Drs Martin, Hanson, Gullapalli, Sethi, and Clark, and Mr Schultz) and Chemical and Biological Engineering (Dr Gullapalli), University of New Mexico, Albuquerque
- Aisha Sethi
- From the Departments of Pathology (Drs Martin, Hanson, Gullapalli, Sethi, and Clark, and Mr Schultz) and Chemical and Biological Engineering (Dr Gullapalli), University of New Mexico, Albuquerque
- Douglas P Clark
- From the Departments of Pathology (Drs Martin, Hanson, Gullapalli, Sethi, and Clark, and Mr Schultz) and Chemical and Biological Engineering (Dr Gullapalli), University of New Mexico, Albuquerque
|
386
|
An Industrial Micro-Defect Diagnosis System via Intelligent Segmentation Region. SENSORS 2019; 19:s19112636. [PMID: 31212594 PMCID: PMC6603651 DOI: 10.3390/s19112636] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/08/2019] [Revised: 06/05/2019] [Accepted: 06/06/2019] [Indexed: 11/25/2022]
Abstract
In the field of machine-vision defect detection for micro workpieces, it is very important that the neural network produce complete masks for the segmented regions of the analyte. When recognizing small workpieces, fatal defects often lie in borderline areas that are difficult to demarcate. Non-maximum suppression (NMS) based on intersection over union (IOU) loses crucial texture information, especially in cluttered and occluded detection areas. In this paper, simple linear iterative clustering (SLIC) is used to augment the mask as well as calibrate the mask score. We propose an SLIC head for Mask R-CNN (object instance segmentation in proposal regions) containing a network block that learns the quality of the predicted masks. We found that the parallel K-means limited-region mechanism in the SLIC head improved the confidence of the mask score for our workpieces. A continuous fine-tuning mechanism was utilized to steadily improve model robustness on a large-scale production line. We established a detection system comprising an optical fiber locator, a telecentric lens system, matrix stereoscopic lighting, a rotating platform, and a neural network with an SLIC head. The accuracy of defect detection is effectively improved for micro workpieces with cluttered and borderline areas.
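At the core of SLIC superpixels is K-means clustering over joint (color, position) features, with each search restricted to a local window. The unrestricted clustering step can be sketched in pure Python as a toy illustration (this is the generic SLIC building block, not the paper's SLIC head):

```python
def kmeans_color_xy(pixels, centers, iters=10):
    """Minimal k-means over (intensity, x, y) features -- the clustering at
    the core of SLIC (full SLIC limits each center's search to a 2S x 2S
    window and weights color vs. position by a compactness factor).
    pixels: list of (intensity, x, y); centers: initial feature tuples."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in pixels:
            # assign each pixel to the nearest center in feature space
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        # move each center to the mean of its cluster (keep empty ones fixed)
        centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else c
                   for cl, c in zip(clusters, centers)]
    return centers, clusters
```

Because position is part of the feature vector, the resulting clusters are spatially compact regions that hug intensity boundaries, which is what makes them useful for refining mask borders.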
|
387
|
Local bit-plane decoded convolutional neural network features for biomedical image retrieval. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04279-6] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
388
|
Deep Learning and Big Data in Healthcare: A Double Review for Critical Beginners. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9112331] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
In the last few years, there has been a growing expectation created about the analysis of large amounts of data often available in organizations, which has been both scrutinized by the academic world and successfully exploited by industry. Nowadays, two of the most common terms heard in scientific circles are Big Data and Deep Learning. In this double review, we aim to shed some light on the current state of these different, yet somehow related, branches of Data Science, in order to understand their current state and future evolution within the healthcare area. We start by giving a simple description of the technical elements of Big Data technologies, as well as an overview of the elements of Deep Learning techniques, according to their usual description in the scientific literature. Then, we pay attention to the application fields that can be said to have delivered relevant real-world success stories, with emphasis on examples from large technology companies and financial institutions, among others. The academic effort that has been put into bringing these technologies to the healthcare sector is then summarized and analyzed from a twofold view as follows: first, the landscape of application examples is globally scrutinized according to the varying nature of medical data, including the data forms in electronic health recordings, medical time signals, and medical images; second, a specific application field is given special attention, in particular electrocardiographic signal analysis, where a number of works have been published in the last two years. A set of toy application examples is provided with the publicly available MIMIC dataset, aiming to help beginners start with some principled, basic, and structured material and available code. Critical discussion is provided of current and forthcoming challenges in the use of both sets of techniques in future healthcare.
|
389
|
|
390
|
Halicek M, Fabelo H, Ortega S, Callico GM, Fei B. In-Vivo and Ex-Vivo Tissue Analysis through Hyperspectral Imaging Techniques: Revealing the Invisible Features of Cancer. Cancers (Basel) 2019; 11:E756. [PMID: 31151223 PMCID: PMC6627361 DOI: 10.3390/cancers11060756] [Citation(s) in RCA: 109] [Impact Index Per Article: 18.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2019] [Revised: 05/20/2019] [Accepted: 05/24/2019] [Indexed: 12/27/2022] Open
Abstract
In contrast to conventional optical imaging modalities, hyperspectral imaging (HSI) is able to capture much more information from a certain scene, both within and beyond the visual spectral range (from 400 to 700 nm). This imaging modality is based on the principle that each material provides different responses to light reflection, absorption, and scattering across the electromagnetic spectrum. Due to these properties, it is possible to differentiate and identify the different materials/substances present in a certain scene by their spectral signature. Over the last two decades, HSI has demonstrated potential to become a powerful tool to study and identify several diseases in the medical field, being a non-contact, non-ionizing, and label-free imaging modality. In this review, the use of HSI as an imaging tool for the analysis and detection of cancer is presented. The basic concepts related to this technology are detailed. The most relevant, state-of-the-art studies in the literature using HSI for cancer analysis, both in-vivo and ex-vivo, are presented and summarized. Lastly, we discuss the current limitations of this technology in the field of cancer detection, together with some insights into possible future steps in the improvement of this technology.
Affiliation(s)
- Martin Halicek
- Department of Bioengineering, The University of Texas at Dallas, 800 W. Campbell Road, Richardson, TX 75080, USA
- Department of Biomedical Engineering, Emory University and The Georgia Institute of Technology, 1841 Clifton Road NE, Atlanta, GA 30329, USA
- Himar Fabelo
- Department of Bioengineering, The University of Texas at Dallas, 800 W. Campbell Road, Richardson, TX 75080, USA
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35017 Las Palmas de Gran Canaria, Spain
- Samuel Ortega
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35017 Las Palmas de Gran Canaria, Spain
- Gustavo M Callico
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), 35017 Las Palmas de Gran Canaria, Spain
- Baowei Fei
- Department of Bioengineering, The University of Texas at Dallas, 800 W. Campbell Road, Richardson, TX 75080, USA
- Advanced Imaging Research Center, University of Texas Southwestern Medical Center, 5323 Harry Hine Blvd, Dallas, TX 75390, USA
- Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hine Blvd, Dallas, TX 75390, USA
|
391
|
Automated segmentation of cell membranes to evaluate HER2 status in whole slide images using a modified deep learning network. Comput Biol Med 2019; 110:164-174. [PMID: 31163391 DOI: 10.1016/j.compbiomed.2019.05.020] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2019] [Revised: 05/24/2019] [Accepted: 05/25/2019] [Indexed: 02/06/2023]
Abstract
The uncontrollable growth of cells in the breast tissue causes breast cancer, which is the second most common type of cancer affecting women in the United States. Normally, human epidermal growth factor receptor 2 (HER2) proteins are responsible for the division and growth of healthy breast cells. HER2 status is currently assessed using immunohistochemistry (IHC), as well as in situ hybridization (ISH) in equivocal cases. Manual HER2 evaluation of IHC-stained microscopic images is error-prone, tedious, time-consuming, and subject to inter-observer variability due to diverse staining, overlapping regions, and remarkably large, non-homogeneous slides. To address these issues, digital pathology offers reproducible, automatic, and objective analysis and interpretation of whole slide images (WSI). In this paper, we present a machine learning (ML) framework to segment, classify, and quantify IHC breast cancer images in an effective way. The proposed method consists of two major parts, classification and segmentation. Since HER2 is associated with tumors of epithelial origin and most breast tumors originate in epithelial tissue, it is crucial to develop an approach to segment the different tissue structures. The proposed technique comprises three steps. In the first step, a superpixel-based support vector machine (SVM) feature learning classifier is proposed to classify epithelial and stromal regions from the WSI. In the second stage, on the classified epithelial regions, a convolutional neural network (CNN) based segmentation method is applied to segment membrane regions. Finally, divided tiles are merged and the overall score of each slide is evaluated. Experimental results for 127 slides are presented and compared with state-of-the-art handcrafted and deep learning-based approaches. The experiments demonstrate that the proposed method achieved promising performance on IHC stained data.
The presented automated algorithm was shown to outperform other approaches in terms of superpixel based classifying of epithelial regions and segmentation of membrane staining using CNN.
|
392
|
Jang HJ, Cho KO. Applications of deep learning for the analysis of medical data. Arch Pharm Res 2019; 42:492-504. [PMID: 31140082 DOI: 10.1007/s12272-019-01162-9] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2018] [Accepted: 05/20/2019] [Indexed: 02/06/2023]
Abstract
Over the past decade, deep learning has demonstrated superior performance in solving many problems in various fields of medicine compared with other machine learning methods. To understand how deep learning has surpassed traditional machine learning techniques, in this review we briefly explore the basic learning algorithms underlying deep learning. In addition, the procedures for building deep learning-based classifiers for seizure electroencephalograms and gastric tissue slides are described as examples to demonstrate the simplicity and effectiveness of deep learning applications. Finally, we review the clinical applications of deep learning in radiology, pathology, and drug discovery, where deep learning has been actively adopted. Considering the great advantages of deep learning techniques, they will be increasingly and widely utilized in a broad variety of areas in medicine in the coming decades.
Affiliation(s)
- Hyun-Jong Jang
- Department of Physiology, Department of Biomedicine & Health Sciences, Catholic Neuroscience Institute, College of Medicine, The Catholic University of Korea, Seoul, 06591, South Korea
- Kyung-Ok Cho
- Department of Pharmacology, Department of Biomedicine & Health Sciences, Catholic Neuroscience Institute, Institute of Aging and Metabolic Diseases, College of Medicine, The Catholic University of Korea, 222 Banpo-Daero, Seocho-Gu, Seoul, 06591, South Korea
|
393
|
Medical image classification using synergic deep learning. Med Image Anal 2019; 54:10-19. [DOI: 10.1016/j.media.2019.02.010] [Citation(s) in RCA: 152] [Impact Index Per Article: 25.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2018] [Revised: 01/21/2019] [Accepted: 02/15/2019] [Indexed: 02/07/2023]
|
394
|
Vu QD, Kwak JT. A dense multi-path decoder for tissue segmentation in histopathology images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 173:119-129. [PMID: 31046986 DOI: 10.1016/j.cmpb.2019.03.007] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/03/2018] [Revised: 02/19/2019] [Accepted: 03/13/2019] [Indexed: 06/09/2023]
Abstract
BACKGROUND AND OBJECTIVE Segmenting different tissue components in histopathological images is of great importance for analyzing tissues and tumor environments. In recent years, the encoder-decoder family of convolutional neural networks has been increasingly adopted to develop automated segmentation tools. While the encoder has been the main focus of most investigations, the role of the decoder has so far not been well studied and understood. Herein, we propose an improved decoder design for the segmentation of epithelium and stroma components in histopathology images. METHODS The proposed decoder is built upon a multi-path layout with dense shortcut connections between layers to maximize learning and inference capability. Equipped with the proposed decoder, neural networks are built using three types of encoders (VGG, ResNet, and pre-activated ResNet). To assess the proposed method, breast and prostate tissue datasets are utilized, including 108 and 52 hematoxylin and eosin (H&E)-stained breast tissue images and 224 H&E prostate tissue images. RESULTS Combining the pre-activated ResNet encoder with the proposed decoder, we achieved a pixel-wise accuracy (ACC) of 0.9122, a Rand index (RAND) of 0.8398, an area under the receiver operating characteristic curve (AUC) of 0.9716, a Dice coefficient for stroma (DICE_STR) of 0.9092, and a Dice coefficient for epithelium (DICE_EPI) of 0.9150 on the breast tissue dataset. The same network obtained an ACC of 0.9074, a RAND of 0.8320, an AUC of 0.9719, a DICE_EPI of 0.9021, and a DICE_STR of 0.9121 on the prostate dataset. CONCLUSIONS Overall, the experimental results confirm that the proposed network is superior to networks combined with a conventional decoder and could therefore aid in improving tissue analysis in histopathology images.
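The dense-shortcut, multi-path decoder layout described in this abstract can be illustrated with a deliberately simplified NumPy sketch. This is not the authors' implementation: learned fusion convolutions are replaced by a channel-mean stand-in, and all function names are hypothetical.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def dense_decoder(encoder_feat, n_stages=3):
    """Toy decoder with dense shortcut connections: every stage sees the
    upsampled outputs of all earlier stages concatenated along the channel
    axis. A channel mean stands in for the learned fusion convolutions."""
    feats = [encoder_feat]
    out = encoder_feat
    for _ in range(n_stages):
        feats = [upsample2x(f) for f in feats]   # bring all paths to the next resolution
        fused = np.concatenate(feats, axis=0)    # dense shortcut: concatenate every path
        out = fused.mean(axis=0, keepdims=True)  # stand-in for a learned 1x1 convolution
        feats.append(out)                        # this stage's output feeds all later stages
    return out

# An (8, 4, 4) encoder map decoded through 3 stages yields a (1, 32, 32) map.
```

In a real network each concatenation would be followed by convolution blocks, but the skeleton above shows why the layout is called "dense": the number of paths entering each stage grows with depth.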
Affiliation(s)
- Quoc Dang Vu: Department of Computer Science and Engineering, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul 05006, Korea
- Jin Tae Kwak: Department of Computer Science and Engineering, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul 05006, Korea
|
395
|
Sari CT, Gunduz-Demir C. Unsupervised Feature Extraction via Deep Learning for Histopathological Classification of Colon Tissue Images. IEEE Trans Med Imaging 2019; 38:1139-1149. [PMID: 30403624] [DOI: 10.1109/tmi.2018.2879369]
Abstract
Histopathological examination is today's gold standard for cancer diagnosis. However, this task is time consuming and prone to errors, as it requires detailed visual inspection and interpretation by a pathologist. Digital pathology aims to alleviate these problems by providing computerized methods that quantitatively analyze digitized histopathological tissue images. The performance of these methods relies mainly on the features they use, and thus their success strictly depends on the ability of these features to successfully quantify the histopathology domain. With this motivation, this paper presents a new unsupervised feature extractor for effective representation and classification of histopathological tissue images. This feature extractor makes three main contributions. First, it identifies salient subregions in an image, based on domain-specific prior knowledge, and quantifies the image by employing only the characteristics of these subregions instead of considering all image locations. Second, it introduces a new deep-learning-based technique that quantizes the salient subregions by extracting a set of features learned directly from image data and uses the distribution of these quantizations for image representation and classification. To this end, the proposed technique constructs a deep belief network of restricted Boltzmann machines (RBMs), defines the activation values of the hidden units in the final RBM as the features, and learns the quantizations by clustering these features in an unsupervised way. Third, this extractor is the first example of successfully using restricted Boltzmann machines in the domain of histopathological image analysis. Our experiments on microscopic colon tissue images reveal that the proposed feature extractor obtains more accurate classification results than its counterparts.
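The core building block of this extractor, hidden-unit activations of an RBM used as learned features, can be sketched with a minimal Bernoulli RBM trained by one-step contrastive divergence (CD-1). This is an illustrative stand-in, not the authors' deep belief network; the class and parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyRBM:
    """Minimal Bernoulli RBM trained with one-step contrastive divergence
    (CD-1); the hidden-unit activation values serve as learned features,
    analogous to the final RBM of the deep belief network described above."""

    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_vis, n_hid))
        self.b_v = np.zeros(n_vis)
        self.b_h = np.zeros(n_hid)
        self.lr = lr

    def fit(self, X, epochs=50):
        n = X.shape[0]
        for _ in range(epochs):
            # positive phase: hidden probabilities and a stochastic sample
            ph = sigmoid(X @ self.W + self.b_h)
            h = (rng.random(ph.shape) < ph).astype(float)
            # negative phase: one Gibbs step back to the visible layer
            pv = sigmoid(h @ self.W.T + self.b_v)
            ph2 = sigmoid(pv @ self.W + self.b_h)
            # CD-1 parameter updates
            self.W += self.lr * (X.T @ ph - pv.T @ ph2) / n
            self.b_v += self.lr * (X - pv).mean(axis=0)
            self.b_h += self.lr * (ph - ph2).mean(axis=0)

    def features(self, X):
        """Hidden-unit activation values used as the learned features."""
        return sigmoid(X @ self.W + self.b_h)
```

In the paper's pipeline these features would then be clustered (unsupervised) to quantize salient subregions; any off-the-shelf k-means on the output of `features` would play that role.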
|
396
|
Physician perspectives on integration of artificial intelligence into diagnostic pathology. NPJ Digit Med 2019; 2:28. [PMID: 31304375] [PMCID: PMC6550202] [DOI: 10.1038/s41746-019-0106-0]
Abstract
Advancements in computer vision and artificial intelligence (AI) carry the potential to make significant contributions to health care, particularly in diagnostic specialties such as radiology and pathology. The impact of these technologies on physician stakeholders is the subject of significant speculation, yet there is a dearth of information regarding the opinions, enthusiasm, and concerns of the pathology community at large. Here, we report results from a survey of 487 pathologist-respondents practicing in 54 countries, conducted to examine perspectives on AI implementation in clinical practice. Despite limitations, including the difficulty of quantifying response bias and verifying the identity of respondents to this anonymous and voluntary survey, several interesting findings emerged. Overall, respondents held generally positive attitudes towards AI, with nearly 75% reporting interest or excitement in AI as a diagnostic tool to facilitate improvements in workflow efficiency and quality assurance in pathology. Importantly, even within this more optimistic cohort, a significant number of respondents endorsed concerns about AI, including the potential for job displacement and replacement. Around 80% of respondents predicted the introduction of AI technology into the pathology laboratory within the coming decade. Attempts to identify statistically significant demographic characteristics (e.g., age, sex, type/place of practice) predictive of attitudes towards AI using Kolmogorov-Smirnov (KS) testing revealed several associations. Important themes raised by respondents included the need to increase efforts towards physician training and to resolve medical-legal implications prior to the generalized implementation of AI in pathology.
|
397
|
Shen N, Li X, Zheng S, Zhang L, Fu Y, Liu X, Li M, Li J, Guo S, Zhang H. Automated and accurate quantification of subcutaneous and visceral adipose tissue from magnetic resonance imaging based on machine learning. Magn Reson Imaging 2019; 64:28-36. [PMID: 31004712] [DOI: 10.1016/j.mri.2019.04.007]
Abstract
Accurate measurement of subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) is vital for research into many diseases. Localization and quantification of SAT and VAT by computed tomography (CT) expose patients to harmful ionizing radiation, whereas magnetic resonance imaging (MRI) is a safe and painless test. The aim of this paper is to explore a practical method for the segmentation of SAT and VAT based on iterative decomposition of water and fat with echo asymmetry and least-squares estimation-iron quantification (IDEAL-IQ) technology and machine learning. The approach involves two main steps. First, a deep network is designed to segment the inner and outer boundaries of SAT in fat images and the peritoneal cavity contour in water images. Second, after mapping the peritoneal cavity contour onto the fat images, the assumption-free K-means++ with Markov chain Monte Carlo (AFK-MC2) clustering method is used to obtain the VAT content. An MRI dataset from 75 subjects is utilized to construct and evaluate the new strategy. The Dice coefficients between the SAT and VAT content obtained by the proposed method and the manual measurements performed by experts are 0.96 and 0.97, respectively. These results indicate high agreement between the proposed method and manual measurement.
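AFK-MC2 approximates classical k-means++ seeding, replacing full distance scans with a short Markov chain. The underlying seeding idea can be sketched in NumPy as follows; this is a plain k-means++ with Lloyd iterations, not the paper's AFK-MC2 implementation, and the function names are hypothetical.

```python
import numpy as np

def kmeanspp_init(X, k, rng):
    """Classical k-means++ seeding: each new centre is drawn with
    probability proportional to its squared distance to the nearest
    centre chosen so far (AFK-MC2 approximates exactly this sampling
    with a Markov chain instead of full distance scans)."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.square(X - c).sum(axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

def kmeans(X, k, iters=20, seed=0):
    """k-means clustering with k-means++ seeding and Lloyd updates."""
    rng = np.random.default_rng(seed)
    C = kmeanspp_init(X, k, rng)
    for _ in range(iters):
        # assign each point to its nearest centre
        labels = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        # recompute centres, keeping old ones for empty clusters
        C = np.array([X[labels == j].mean(axis=0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return labels, C
```

Applied to fat-image pixel intensities inside the mapped peritoneal contour, such a clustering would separate VAT from non-adipose tissue in the manner the abstract describes.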
Affiliation(s)
- Ning Shen, Xueyan Li, Xiaoming Liu, Mingyang Li, Jiasheng Li, Shuxu Guo: State Key Laboratory on Integrated Optoelectronics, College of Electronic Science and Engineering, Jilin University, 130012 Changchun, China
- Shuang Zheng, Lei Zhang, Yu Fu, Huimao Zhang: Department of Radiology, the First Hospital of Jilin University, 130021 Changchun, China
|
398
|
A Novel Multispace Image Reconstruction Method for Pathological Image Classification Based on Structural Information. Biomed Res Int 2019; 2019:3530903. [PMID: 31111048] [PMCID: PMC6487174] [DOI: 10.1155/2019/3530903]
Abstract
Pathological image classification is of great importance in various biomedical applications, such as lesion detection, cancer subtype identification, and pathological grading. To this end, this paper proposes a novel classification framework using multispace image reconstruction inputs and transfer learning. Specifically, a multispace image reconstruction method is first developed to generate a new image whose three channels are composed of the gradient, gray-level co-occurrence matrix (GLCM), and local binary pattern (LBP) spaces, respectively. Then, a pretrained VGG-16 network is utilized to extract high-level semantic features from the original (RGB) and reconstructed images. Subsequently, a long short-term memory (LSTM) layer is used for feature selection and refinement while increasing discrimination capability. Finally, the classification task is performed via a softmax classifier. The framework was evaluated on the publicly available IICBU malignant lymphoma microscopy image dataset. Experimental results demonstrate the performance advantages of the proposed classification framework over related works.
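The three-channel reconstruction can be approximated in NumPy. This sketch computes the gradient and LBP channels directly; because a true GLCM is a global rather than per-pixel statistic, a simple local-contrast map is substituted for the GLCM channel here (an assumption of this sketch, not the paper's method).

```python
import numpy as np

def lbp8(img):
    """8-neighbour local binary pattern codes (0-255) for interior pixels."""
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=int)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit  # one bit per neighbour comparison
    return code

def multispace_reconstruct(img):
    """Stack gradient, LBP, and local-contrast channels into one image,
    each channel min-max normalised to [0, 1]."""
    gy, gx = np.gradient(img)
    grad = np.hypot(gy, gx)[1:-1, 1:-1]
    codes = lbp8(img).astype(float)
    # local-contrast stand-in for the GLCM channel
    contrast = np.abs(img[1:-1, 1:-1] -
                      (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:]) / 4.0)
    channels = []
    for ch in (grad, codes, contrast):
        span = ch.max() - ch.min()
        channels.append((ch - ch.min()) / span if span > 0 else np.zeros_like(ch))
    return np.stack(channels, axis=-1)
```

The resulting three-channel array has the same shape as an RGB image and could be fed to a pretrained backbone alongside the original image, as the framework describes.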
|
399
|
Aprupe L, Litjens G, Brinker TJ, van der Laak J, Grabe N. Robust and accurate quantification of biomarkers of immune cells in lung cancer micro-environment using deep convolutional neural networks. PeerJ 2019; 7:e6335. [PMID: 30993030] [PMCID: PMC6462181] [DOI: 10.7717/peerj.6335]
Abstract
Recent years have seen a growing awareness of the role the immune system plays in successful cancer treatment, especially in novel therapies such as immunotherapy. Characterizing the immunological composition of tumors and their micro-environment is thus becoming a necessity. In this paper we introduce a deep-learning-based immune cell detection and quantification method based on supervised learning, i.e., the input data for training comprise labeled images. Our approach objectively deals with staining variation and staining artifacts in immunohistochemically stained lung cancer tissue and is as precise as human observers, as evidenced by an average cell-count difference from humans of only 0.033 cells. This method, based on convolutional neural networks, has the potential to provide a new quantitative basis for research on immunotherapy.
Affiliation(s)
- Lilija Aprupe: Hamamatsu Tissue Imaging and Analysis (TIGA) Center, BioQuant, Heidelberg University, Heidelberg, Germany; Department of Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
- Geert Litjens: Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands; Steinbeis Center for Medical Systems Biology (STCMSB), Heidelberg, Germany
- Titus J Brinker: Department of Dermatology and National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
- Jeroen van der Laak: Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Niels Grabe: Hamamatsu Tissue Imaging and Analysis (TIGA) Center, BioQuant, Heidelberg University, Heidelberg, Germany; Department of Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany; Steinbeis Center for Medical Systems Biology (STCMSB), Heidelberg, Germany
|
400
|
Inés A, Domínguez C, Heras J, Mata E, Pascual V. DeepClas4Bio: Connecting bioimaging tools with deep learning frameworks for image classification. Comput Biol Med 2019; 108:49-56. [PMID: 31003179] [DOI: 10.1016/j.compbiomed.2019.03.026]
Abstract
BACKGROUND AND OBJECTIVE Deep learning techniques have been successfully applied to several image classification problems in bioimaging. However, the models created with deep learning frameworks cannot be easily accessed from bioimaging tools such as ImageJ or Icy, which means that life scientists cannot take advantage of the results obtained with those models from their usual tools. In this paper, we aim to facilitate the interoperability of bioimaging tools with deep learning frameworks. METHODS In this project, called DeepClas4Bio, we have developed an extensible API that provides a common access point for classification models from several deep learning frameworks. In addition, this API can be employed to compare deep learning models and to extend the functionality of bioimaging programs by creating plugins. RESULTS Using the DeepClas4Bio API, we have developed a metagenerator to easily create ImageJ plugins. In addition, we have implemented a Java application that allows users to compare several deep learning models in a simple way using the DeepClas4Bio API. Moreover, we present three examples showing how to work with the different models and frameworks included in the DeepClas4Bio API from several bioimaging tools, namely ImageJ, Icy, and ImagePy. CONCLUSIONS This project offers benefits from several perspectives. Developers of deep learning models can disseminate those models through well-known tools widely employed by life scientists. Developers of bioimaging programs can easily create plugins that use models from deep learning frameworks. Finally, users of bioimaging tools gain access to powerful models within an environment familiar to them.
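The "common access point" idea can be sketched in a few lines of framework-agnostic Python. The class and method names below are hypothetical illustrations of the pattern, not the actual DeepClas4Bio API.

```python
class ModelRegistry:
    """Minimal sketch of a common access point for classification models:
    each framework registers its models as plain predict callables, so a
    bioimaging plugin can invoke or compare them through one interface."""

    def __init__(self):
        self._models = {}

    def register(self, framework, name, predict_fn):
        """Expose a model under a (framework, model-name) key."""
        self._models[(framework, name)] = predict_fn

    def predict(self, framework, name, image):
        """Classify an image with one specific registered model."""
        key = (framework, name)
        if key not in self._models:
            raise KeyError(f"no model {name!r} registered for {framework!r}")
        return self._models[key](image)

    def compare(self, image):
        """Run every registered model on the same image, enabling the
        kind of cross-framework comparison described above."""
        return {key: fn(image) for key, fn in self._models.items()}
```

A plugin metagenerator could then wrap `predict` calls for a chosen key behind a GUI, while `compare` supports side-by-side evaluation of models from different frameworks.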
Affiliation(s)
- A Inés, C Domínguez, J Heras, E Mata, V Pascual: Department of Mathematics and Computer Science, University of La Rioja, Spain
|