301
Deng S, Zhang X, Yan W, Chang EIC, Fan Y, Lai M, Xu Y. Deep learning in digital pathology image analysis: a survey. Front Med 2020; 14:470-487. [PMID: 32728875] [DOI: 10.1007/s11684-020-0782-9]
Abstract
Deep learning (DL) has achieved state-of-the-art performance in many digital pathology analysis tasks. Whereas traditional methods usually require hand-crafted, domain-specific features, DL methods can learn representations without manually designed features, making them less labor intensive for feature extraction than conventional machine learning methods. In this paper, we comprehensively summarize recent DL-based image analysis studies in histopathology, covering different tasks (e.g., classification, semantic segmentation, detection, and instance segmentation) and various applications (e.g., stain normalization and cell/gland/region structure analysis). DL methods can provide consistent and accurate outcomes, and DL is a promising tool for assisting pathologists in clinical diagnosis.
Affiliation(s)
- Shujian Deng
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Xin Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Wen Yan
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Yubo Fan
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Maode Lai
- Department of Pathology, School of Medicine, Zhejiang University, Hangzhou, 310007, China
- Yan Xu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education and State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
- Microsoft Research Asia, Beijing, 100080, China
302
Mudali D, Jeevanandam J, Danquah MK. Probing the characteristics and biofunctional effects of disease-affected cells and drug response via machine learning applications. Crit Rev Biotechnol 2020; 40:951-977. [PMID: 32633615] [DOI: 10.1080/07388551.2020.1789062]
Abstract
Drug-induced transformations in disease characteristics at the cellular and molecular level offer the opportunity to predict and evaluate the efficacy of pharmaceutical ingredients while enabling the optimal design of new and improved drugs with enhanced pharmacokinetics and pharmacodynamics. Machine learning is a promising in-silico tool for simulating cells with specific disease properties and determining their response to drug uptake. Differences in the properties of normal and infected cells, including biophysical, biochemical, and physiological characteristics, play a key role in developing fundamental cellular probing platforms for machine learning applications. Cellular features can be extracted periodically from drug-treated, infected, and normal cells via image segmentation in order to probe dynamic differences in cell behavior, and the segmentations can be evaluated to reflect the level of drug effect on a distinct cell or group of cells via probability scoring. This article provides an account of the use of machine learning methods to probe differences in the biophysical, biochemical, and physiological characteristics of infected cells in response to pharmacokinetic uptake of drug ingredients, with applications in cancer, diabetes, and neurodegenerative disease therapies.
Affiliation(s)
- Deborah Mudali
- Department of Computer Science, University of Tennessee, Chattanooga, TN, USA
- Jaison Jeevanandam
- Department of Chemical Engineering, Faculty of Engineering and Science, Curtin University, Miri, Malaysia
- Michael K Danquah
- Chemical Engineering Department, University of Tennessee, Chattanooga, TN, USA
303
Automatic Detection and Counting of Lymphocytes from Immunohistochemistry Cancer Images Using Deep Learning. J Med Biol Eng 2020. [DOI: 10.1007/s40846-020-00545-4]
304
Panayides AS, Amini A, Filipovic ND, Sharma A, Tsaftaris SA, Young A, Foran D, Do N, Golemati S, Kurc T, Huang K, Nikita KS, Veasey BP, Zervakis M, Saltz JH, Pattichis CS. AI in Medical Imaging Informatics: Current Challenges and Future Directions. IEEE J Biomed Health Inform 2020; 24:1837-1857. [PMID: 32609615] [PMCID: PMC8580417] [DOI: 10.1109/jbhi.2020.2991043]
Abstract
This paper reviews state-of-the-art research solutions across the spectrum of medical imaging informatics, discusses clinical translation, and provides future directions for advancing clinical practice. More specifically, it summarizes advances in medical imaging acquisition technologies for different modalities, highlighting the necessity for efficient medical data management strategies in the context of AI in big healthcare data analytics. It then provides a synopsis of contemporary and emerging algorithmic methods for disease classification and organ/tissue segmentation, focusing on AI and deep learning architectures that have already become the de facto approach. The clinical benefits of in-silico modelling advances linked with evolving 3D reconstruction and visualization applications are further documented. In conclusion, integrative analytics approaches driven by the associated research branches highlighted in this study promise to revolutionize imaging informatics across the healthcare continuum for both radiology and digital pathology applications, enabling informed, more accurate diagnosis, timely prognosis, and effective treatment planning, thereby underpinning precision medicine.
305
Abstract
Additive manufacturing (AM) is evolving rapidly, and this trend is creating a number of growth opportunities for several industries. Recent studies on AM have focused mainly on developing new machines and materials, with only a limited number of studies on the troubleshooting, maintenance, and problem-solving aspects of AM processes. Deep learning (DL) is an emerging branch of machine learning (ML) that has been widely used in research. This research team believes that applying DL can help make AM processes smoother and AM-printed objects more accurate. In this research, a new DL application is developed and implemented to minimize the material consumption of a failed print. The material used is polylactic acid (PLA), and the DL method is the convolutional neural network (CNN). This study reports the nature of this newly developed DL application and the relationships between various algorithm parameters and the accuracy of the algorithm.
306
Ho DJ, Mas Montserrat D, Fu C, Salama P, Dunn KW, Delp EJ. Sphere estimation network: three-dimensional nuclei detection of fluorescence microscopy images. J Med Imaging (Bellingham) 2020; 7:044003. [PMID: 32904135] [PMCID: PMC7451995] [DOI: 10.1117/1.jmi.7.4.044003]
Abstract
Purpose: Fluorescence microscopy visualizes three-dimensional subcellular structures in tissue, with two-photon microscopy achieving deeper tissue penetration. Nuclei detection, which is essential for analyzing tissue for clinical and research purposes, remains a challenging problem due to the spatial variability of nuclei. Recent advancements in deep learning techniques have enabled the analysis of fluorescence microscopy data to localize and segment nuclei. However, these localization or segmentation techniques require additional steps to extract characteristics of the nuclei. We develop a 3D convolutional neural network, called Sphere Estimation Network (SphEsNet), to extract characteristics of nuclei without any postprocessing steps. Approach: To simultaneously estimate the center locations of nuclei and their sizes, SphEsNet is composed of two branches: one to localize nuclei center coordinates and one to estimate their radii. Synthetic microscopy volumes automatically generated using a spatially constrained cycle-consistent adversarial network are used for training the network because manually generating 3D real ground-truth volumes would be extremely tedious. Results: Three SphEsNet models based on the size of nuclei were trained and tested on five real fluorescence microscopy data sets from rat kidney and mouse intestine. Our method can successfully detect nuclei in multiple locations with various sizes. In addition, our method was compared with other techniques and outperformed them in object-level precision, recall, and F1 score, achieving an F1 score of 89.90%. Conclusions: SphEsNet can simultaneously localize nuclei and estimate their size without additional steps and can potentially be used to extract more information from nuclei in fluorescence microscopy images.
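For orientation, the two-branch design described above can be sketched as follows; this is an illustrative approximation, not the published SphEsNet (depth, widths, and losses here are assumptions):

```python
# Illustrative two-branch 3D CNN in the spirit of SphEsNet: one head scores
# candidate nucleus centers, the other regresses a radius at every voxel.
# Layer counts and widths are assumptions, not the published network.
import torch
import torch.nn as nn

class TwoBranchSphereNet(nn.Module):
    def __init__(self, in_channels: int = 1, width: int = 16):
        super().__init__()
        # Shared encoder over the fluorescence microscopy volume.
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.center_head = nn.Conv3d(width, 1, kernel_size=1)  # center-ness
        self.radius_head = nn.Conv3d(width, 1, kernel_size=1)  # radius (voxels)

    def forward(self, volume: torch.Tensor):
        feats = self.encoder(volume)
        center_logits = self.center_head(feats)      # (B, 1, D, H, W)
        radii = torch.relu(self.radius_head(feats))  # radii must be non-negative
        return center_logits, radii

# One 64x64x64 single-channel volume in, two dense maps out; local maxima of
# the center map give detections, with sizes read off the radius map.
centers, radii = TwoBranchSphereNet()(torch.randn(1, 1, 64, 64, 64))
```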
Affiliation(s)
- David Joon Ho
- Memorial Sloan Kettering Cancer Center, Department of Pathology, New York, New York, United States
- Daniel Mas Montserrat
- Purdue University, School of Electrical and Computer Engineering, Video and Image Processing Laboratory, West Lafayette, Indiana, United States
- Chichen Fu
- Purdue University, School of Electrical and Computer Engineering, Video and Image Processing Laboratory, West Lafayette, Indiana, United States
- Paul Salama
- Indiana University-Purdue University, Department of Electrical and Computer Engineering, Indianapolis, Indiana, United States
- Kenneth W. Dunn
- Indiana University, School of Medicine, Division of Nephrology, Indianapolis, Indiana, United States
- Edward J. Delp
- Purdue University, School of Electrical and Computer Engineering, Video and Image Processing Laboratory, West Lafayette, Indiana, United States
307
A case-based ensemble learning system for explainable breast cancer recurrence prediction. Artif Intell Med 2020; 107:101858. [DOI: 10.1016/j.artmed.2020.101858]
308
Hussain E, Mahanta LB, Das CR, Choudhury M, Chowdhury M. A shape context fully convolutional neural network for segmentation and classification of cervical nuclei in Pap smear images. Artif Intell Med 2020; 107:101897. [DOI: 10.1016/j.artmed.2020.101897]
309
Shaban M, Awan R, Fraz MM, Azam A, Tsang YW, Snead D, Rajpoot NM. Context-Aware Convolutional Neural Network for Grading of Colorectal Cancer Histology Images. IEEE Trans Med Imaging 2020; 39:2395-2405. [PMID: 32012004] [DOI: 10.1109/tmi.2020.2971006]
Abstract
Digital histology images are amenable to the application of convolutional neural networks (CNNs) for analysis due to the sheer size of pixel data present in them. CNNs are generally used for representation learning from small image patches (e.g., 224×224) extracted from digital histology images due to computational and memory constraints. However, this approach does not incorporate the high-resolution contextual information available in histology images. We propose a novel way to incorporate a larger context through a context-aware neural network operating on images of 1792×1792 pixels. The proposed framework first encodes the local representation of a histology image into high-dimensional features and then aggregates these features by considering their spatial organization to make a final prediction. We evaluated the proposed method on two colorectal cancer datasets for the task of cancer grading. Our method outperformed traditional patch-based approaches, problem-specific methods, and existing context-based methods. We also present a comprehensive analysis of different variants of the proposed method.
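The encode-then-aggregate pattern the abstract describes can be sketched as follows; the 8×8 tiling of 224×224 patches matches the stated dimensions, but the layer widths and depths are illustrative assumptions, not the published architecture:

```python
# Encode-then-aggregate sketch: a patch encoder turns a 1792x1792 tile into an
# 8x8 grid of local feature vectors (one per 224x224 patch), and an aggregator
# convolves that grid to produce one grade for the whole tile.
import torch
import torch.nn as nn

class LocalEncoder(nn.Module):
    """Maps one 224x224 patch to a feature vector."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=7, stride=4, padding=3),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
    def forward(self, x):
        return self.net(x).flatten(1)                 # (N, dim)

class ContextAggregator(nn.Module):
    """Treats the grid of patch features as an image and convolves it."""
    def __init__(self, dim: int = 64, n_grades: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(dim, n_grades),
        )
    def forward(self, grid):
        return self.net(grid)

encoder, aggregator = LocalEncoder(), ContextAggregator()
tile = torch.randn(1, 3, 1792, 1792)
patches = tile.unfold(2, 224, 224).unfold(3, 224, 224)   # 8x8 grid of patches
patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, 3, 224, 224)
feats = encoder(patches).reshape(1, 8, 8, -1).permute(0, 3, 1, 2)
grade_logits = aggregator(feats)   # one prediction per 1792x1792 tile
```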
310
AbdulJabbar K, Raza SEA, Rosenthal R, Jamal-Hanjani M, Veeriah S, Akarca A, Lund T, Moore DA, Salgado R, Al Bakir M, Zapata L, Hiley CT, Officer L, Sereno M, Smith CR, Loi S, Hackshaw A, Marafioti T, Quezada SA, McGranahan N, Le Quesne J, Swanton C, Yuan Y. Geospatial immune variability illuminates differential evolution of lung adenocarcinoma. Nat Med 2020; 26:1054-1062. [PMID: 32461698] [PMCID: PMC7610840] [DOI: 10.1038/s41591-020-0900-x]
Abstract
Remarkable progress in molecular analyses has improved our understanding of the evolution of cancer cells toward immune escape (refs. 1-5). However, the spatial configurations of immune and stromal cells, which may shed light on the evolution of immune escape across tumor geographical locations, remain unaddressed. We integrated multiregion exome and RNA-sequencing (RNA-seq) data with spatial histology mapped by deep learning in 100 patients with non-small cell lung cancer from the TRACERx cohort (ref. 6). Cancer subclones derived from immune cold regions were more closely related in mutation space, diversifying more recently than subclones from immune hot regions. In TRACERx and in an independent multisample cohort of 970 patients with lung adenocarcinoma, tumors with more than one immune cold region had a higher risk of relapse, independently of tumor size, stage and number of samples per patient. In lung adenocarcinoma, but not lung squamous cell carcinoma, geometrical irregularity and complexity of the cancer-stromal cell interface significantly increased in tumor regions without disruption of antigen presentation. Decreased lymphocyte accumulation in adjacent stroma was observed in tumors with low clonal neoantigen burden. Collectively, immune geospatial variability elucidates tumor ecological constraints that may shape the emergence of immune-evading subclones and aggressive clinical phenotypes.
Affiliation(s)
- Khalid AbdulJabbar
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK
- Shan E Ahmed Raza
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK
- Rachel Rosenthal
- Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, UK
- Cancer Evolution and Genome Instability Laboratory, The Francis Crick Institute, London, UK
- Mariam Jamal-Hanjani
- Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, UK
- Department of Medical Oncology, University College London Hospitals NHS Foundation Trust, London, UK
- Selvaraju Veeriah
- Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, UK
- Cancer Evolution and Genome Instability Laboratory, The Francis Crick Institute, London, UK
- Ayse Akarca
- Department of Cellular Pathology, University College London, University College Hospital, London, UK
- Tom Lund
- Translational Immune Oncology Group, Centre for Molecular Medicine, Royal Marsden Hospital NHS Trust, London, UK
- David A Moore
- Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, UK
- Department of Cellular Pathology, University College London, University College Hospital, London, UK
- Roberto Salgado
- Department of Pathology, GZA-ZNA-Ziekenhuizen, Antwerp, Belgium
- Division of Research, Peter MacCallum Cancer Centre, University of Melbourne, Melbourne, Victoria, Australia
- Maise Al Bakir
- Cancer Evolution and Genome Instability Laboratory, The Francis Crick Institute, London, UK
- Luis Zapata
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK
- Crispin T Hiley
- Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, UK
- Cancer Evolution and Genome Instability Laboratory, The Francis Crick Institute, London, UK
- Leah Officer
- MRC Toxicology Unit, University of Cambridge, Leicester, UK
- Marco Sereno
- Leicester Cancer Research Centre, University of Leicester, Leicester, UK
- Sherene Loi
- Division of Research, Peter MacCallum Cancer Centre, University of Melbourne, Melbourne, Victoria, Australia
- Allan Hackshaw
- Cancer Research UK & University College London Cancer Trials Centre, University College London, London, UK
- Teresa Marafioti
- Department of Cellular Pathology, University College London, University College Hospital, London, UK
- Sergio A Quezada
- Cancer Immunology Unit, University College London Cancer Institute, London, UK
- Nicholas McGranahan
- Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, UK
- Cancer Genome Evolution Research Group, University College London Cancer Institute, University College London, London, UK
- John Le Quesne
- MRC Toxicology Unit, University of Cambridge, Leicester, UK
- Leicester Cancer Research Centre, University of Leicester, Leicester, UK
- Glenfield Hospital, University Hospitals Leicester NHS Trust, Leicester, UK
- Charles Swanton
- Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, UK
- Cancer Evolution and Genome Instability Laboratory, The Francis Crick Institute, London, UK
- Department of Medical Oncology, University College London Hospitals NHS Foundation Trust, London, UK
- Yinyin Yuan
- Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK
311
Khoshdeli M, Winkelmaier G, Parvin B. Deep fusion of contextual and object-based representations for delineation of multiple nuclear phenotypes. Bioinformatics 2020; 35:4860-4861. [PMID: 31135022] [DOI: 10.1093/bioinformatics/btz430]
Abstract
MOTIVATION Nuclear delineation and phenotypic profiling are important steps in the automated analysis of histology sections. However, these are challenging problems due to (i) technical variations (e.g. fixation, staining) that originate as a result of sample preparation; (ii) biological heterogeneity (e.g. vesicular versus high-chromatin phenotypes, nuclear atypia) and (iii) overlapping nuclei. This Application Note couples contextual information about the cellular organization with the individual signature of nuclei to improve performance. As a result, routine delineation of nuclei in H&E-stained histology sections is enabled for either computer-aided pathology or integration with genome-wide molecular data. RESULTS The method has been evaluated on two independent datasets. One dataset originates from our lab and includes H&E-stained sections of brain and breast samples. The second dataset is publicly available through IEEE with a focus on gland-based tissue architecture. We report an approximate AJI of 0.592 and an F1-score of 0.93 on both datasets. AVAILABILITY AND IMPLEMENTATION The code-base, modified dataset and results are publicly available. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Affiliation(s)
- Mina Khoshdeli
- Department of Electrical and Biomedical Engineering, University of Nevada, Reno, NV 89557-0260, USA
- Garrett Winkelmaier
- Department of Electrical and Biomedical Engineering, University of Nevada, Reno, NV 89557-0260, USA
- Bahram Parvin
- Department of Electrical and Biomedical Engineering, University of Nevada, Reno, NV 89557-0260, USA
312
313
Enhancing the Value of Histopathological Assessment of Allograft Biopsy Monitoring. Transplantation 2020; 103:1306-1322. [PMID: 30768568] [DOI: 10.1097/tp.0000000000002656]
Abstract
Traditional histopathological allograft biopsy evaluation provides, within hours, diagnoses, prognostic information, and mechanistic insights into disease processes. However, proponents of an array of alternative monitoring platforms, broadly classified as "invasive" or "noninvasive" depending on whether allograft tissue is needed, question the value proposition of tissue histopathology. The authors explore the pros and cons of current analytical methods relative to the value of traditional histopathology and illustrate advancements in next-generation histopathological evaluation of tissue biopsies. We describe the continuing value of traditional histopathological tissue assessment and of "next-generation pathology" (NGP), broadly defined as staining/labeling techniques coupled with digital imaging and automated image analysis. Noninvasive imaging and fluid (blood and urine) analyses promote low-risk, global organ assessment and "molecular" data output, respectively; invasive alternatives promote objective, "mechanistic" insights by creating gene lists with variably increased or decreased expression compared with steady state/baseline. Proponents of alternative approaches contrast their preferred methods with traditional histopathology but (1) fail to cite the main value of traditional histopathology and NGP, namely the retention of spatial and inferred temporal context available for innumerable objective analyses, and (2) betray an unfamiliarity with the impact of advances in imaging and software-guided analytics on emerging histopathology practices. Illustrative NGP examples demonstrate the value of multidimensional data that preserve tissue-based spatial and temporal contexts. We outline a path forward for clinical NGP implementation in which "software-assisted sign-out" will enable pathologists to conduct objective analyses that can be incorporated into their final reports and improve patient care.
314
Kosaraju SC, Hao J, Koh HM, Kang M. Deep-Hipo: Multi-scale receptive field deep learning for histopathological image analysis. Methods 2020; 179:3-13. [PMID: 32442672] [DOI: 10.1016/j.ymeth.2020.05.012]
Abstract
Digitizing whole-slide imaging in digital pathology has led to the advancement of computer-aided tissue examination using machine learning techniques, especially convolutional neural networks. A number of convolutional neural network-based methodologies have been proposed to accurately analyze histopathological images for cancer detection, risk prediction, and cancer subtype classification. Most existing methods conduct patch-based examinations, due to the extremely large size of histopathological images. However, patches within a small window often do not contain sufficient information or patterns for the tasks of interest; correspondingly, pathologists examine tissues at various magnification levels while checking complex morphological patterns under a microscope. We propose a novel multi-task deep learning model for HIstoPatholOgy (named Deep-Hipo) that takes multi-scale patches simultaneously for accurate histopathological image analysis. Deep-Hipo extracts two patches of the same size at both high and low magnification levels, capturing complex morphological patterns in both large and small receptive fields of a whole-slide image. Deep-Hipo has outperformed the current state-of-the-art deep learning methods. We assessed the proposed method on various types of whole-slide images of the stomach: well-differentiated, moderately differentiated, and poorly differentiated adenocarcinoma; poorly cohesive carcinoma, including signet-ring cell features; and normal gastric mucosa. The optimally trained model was also applied to histopathological images of The Cancer Genome Atlas (TCGA) Stomach Adenocarcinoma (TCGA-STAD) and TCGA Colon Adenocarcinoma (TCGA-COAD) cohorts, which show pathological patterns similar to gastric carcinoma, and the experimental results were clinically verified by a pathologist. The source code of Deep-Hipo is publicly available at http://dataxlab.org/deep-hipo.
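A minimal sketch of the multi-magnification input scheme described above; the concatenation fusion and layer sizes are assumptions, not the published Deep-Hipo code:

```python
# Two patches of identical pixel size, cropped around the same location at
# high and low magnification, are encoded separately and fused before
# classification over the tissue categories.
import torch
import torch.nn as nn

def make_encoder(dim: int = 32) -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(3, dim, kernel_size=5, stride=2, padding=2),
        nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
    )

class MultiMagNet(nn.Module):
    def __init__(self, dim: int = 32, n_classes: int = 5):
        super().__init__()
        self.high_mag = make_encoder(dim)  # fine nuclear detail
        self.low_mag = make_encoder(dim)   # broad glandular context
        self.classifier = nn.Linear(2 * dim, n_classes)

    def forward(self, patch_high, patch_low):
        fused = torch.cat([self.high_mag(patch_high),
                           self.low_mag(patch_low)], dim=1)
        return self.classifier(fused)

net = MultiMagNet()
logits = net(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
```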
Affiliation(s)
- Jie Hao
- Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, PA, USA
- Hyun Min Koh
- Department of Pathology, Gyeongsang National University Changwon Hospital, Changwon, Republic of Korea
- Mingon Kang
- Department of Computer Science, University of Nevada, Las Vegas, NV, USA
315
Koyuncu CF, Gunesli GN, Cetin-Atalay R, Gunduz-Demir C. DeepDistance: A multi-task deep regression model for cell detection in inverted microscopy images. Med Image Anal 2020; 63:101720. [PMID: 32438298] [DOI: 10.1016/j.media.2020.101720]
Abstract
This paper presents a new deep regression model, which we call DeepDistance, for cell detection in images acquired with inverted microscopy. This model considers cell detection as the task of finding the most probable locations that suggest cell centers in an image and represents this main task as a regression task of learning an inner distance metric. However, unlike previously reported regression-based methods, the DeepDistance model approaches this learning as a multi-task regression problem in which multiple tasks are learned using shared feature representations. To this end, it defines a secondary metric, the normalized outer distance, to represent a different aspect of the problem and defines its learning as complementary to the main cell detection task. In order to learn these two complementary tasks more effectively, the DeepDistance model designs a fully convolutional network (FCN) with a shared encoder path and trains this FCN end-to-end to learn the tasks concurrently, in parallel. For further performance improvement on the main task, this paper also presents an extended version of the DeepDistance model that includes an auxiliary classification task and learns it in parallel to the two regression tasks, also sharing feature representations with them. DeepDistance uses the inner distances estimated by these FCNs in a detection algorithm to locate individual cells in a given image. In addition to this detection algorithm, the paper also suggests a cell segmentation algorithm that employs the estimated maps to find cell boundaries. Our experiments on three different human cell lines reveal that the proposed multi-task learning models, the DeepDistance model and its extended version, successfully identify the locations of cells as well as delineate their boundaries, even for the cell line that was not used in training, and improve on the results of their counterparts.
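The kind of regression target such a model learns from can be illustrated as follows; the exponential decay below is a simplified stand-in for the paper's normalized inner/outer distance definitions, purely to show the idea:

```python
# Simplified "inner distance"-style target: a map that peaks at annotated
# cell centers and decays with distance, so a network regressing it can
# localize cells via the local maxima of its predicted map.
import numpy as np

def inner_distance_map(shape, centers, alpha: float = 0.1) -> np.ndarray:
    """Per-pixel score: 1 at the nearest cell center, decaying outward."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    dist = np.full(shape, np.inf)
    for cy, cx in centers:
        dist = np.minimum(dist, np.hypot(yy - cy, xx - cx))
    return np.exp(-alpha * dist)

target = inner_distance_map((256, 256), centers=[(40, 60), (180, 200)])
```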
Affiliation(s)
- Gozde Nur Gunesli
- Department of Computer Engineering, Bilkent University, Ankara TR-06800, Turkey
- Rengul Cetin-Atalay
- CanSyL, Graduate School of Informatics, Middle East Technical University, Ankara TR-06800, Turkey
- Cigdem Gunduz-Demir
- Department of Computer Engineering, Bilkent University, Ankara TR-06800, Turkey; Neuroscience Graduate Program, Bilkent University, Ankara TR-06800, Turkey
316
Kumar N, Verma R, Anand D, Zhou Y, Onder OF, Tsougenis E, Chen H, Heng PA, Li J, Hu Z, Wang Y, Koohbanani NA, Jahanifar M, Tajeddin NZ, Gooya A, Rajpoot N, Ren X, Zhou S, Wang Q, Shen D, Yang CK, Weng CH, Yu WH, Yeh CY, Yang S, Xu S, Yeung PH, Sun P, Mahbod A, Schaefer G, Ellinger I, Ecker R, Smedby O, Wang C, Chidester B, Ton TV, Tran MT, Ma J, Do MN, Graham S, Vu QD, Kwak JT, Gunda A, Chunduri R, Hu C, Zhou X, Lotfi D, Safdari R, Kascenas A, O'Neil A, Eschweiler D, Stegmaier J, Cui Y, Yin B, Chen K, Tian X, Gruening P, Barth E, Arbel E, Remer I, Ben-Dor A, Sirazitdinova E, Kohl M, Braunewell S, Li Y, Xie X, Shen L, Ma J, Baksi KD, Khan MA, Choo J, Colomer A, Naranjo V, Pei L, Iftekharuddin KM, Roy K, Bhattacharjee D, Pedraza A, Bueno MG, Devanathan S, Radhakrishnan S, Koduganty P, Wu Z, Cai G, Liu X, Wang Y, Sethi A. A Multi-Organ Nucleus Segmentation Challenge. IEEE Trans Med Imaging 2020; 39:1380-1391. [PMID: 31647422] [PMCID: PMC10439521] [DOI: 10.1109/tmi.2019.2947628]
Abstract
Generalized nucleus segmentation techniques can contribute greatly to reducing the time to develop and validate visual biomarkers for new digital pathology datasets. We summarize the results of the MoNuSeg 2018 challenge, whose objective was to develop generalizable nuclei segmentation techniques for digital pathology. The challenge was an official satellite event of the MICCAI 2018 conference, in which 32 teams with more than 80 participants from geographically diverse institutes participated. Contestants were given a training set of 30 images from seven organs with annotations of 21,623 individual nuclei. A test dataset of 14 images taken from seven organs, including two organs that did not appear in the training set, was released without annotations. Entries were evaluated based on the average aggregated Jaccard index (AJI) on the test set to prioritize accurate instance segmentation as opposed to mere semantic segmentation. More than half the teams that completed the challenge outperformed a previous baseline. Among the trends that contributed to increased accuracy were the use of color normalization and heavy data augmentation. Additionally, fully convolutional networks inspired by variants of U-Net, FCN, and Mask R-CNN were popular, typically based on ResNet or VGG base architectures. Watershed segmentation on predicted semantic segmentation maps was a popular post-processing strategy. Several of the top techniques compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
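For reference, the aggregated Jaccard index used to rank entries can be computed along the following lines; this is a compact sketch of the standard definition, assuming label images where 0 is background and each nucleus carries its own integer id:

```python
# AJI: greedily match each ground-truth nucleus to the predicted nucleus with
# the highest overlap, accumulate intersections and unions, and penalize every
# unmatched predicted nucleus by adding its area to the union.
import numpy as np

def aggregated_jaccard_index(gt: np.ndarray, pred: np.ndarray) -> float:
    pred_ids = [i for i in np.unique(pred) if i != 0]
    used = set()
    inter_sum, union_sum = 0, 0
    for g in np.unique(gt):
        if g == 0:
            continue
        g_mask = gt == g
        best_iou, best_id = 0.0, None
        for p in pred_ids:
            p_mask = pred == p
            inter = np.logical_and(g_mask, p_mask).sum()
            if inter == 0:
                continue
            iou = inter / np.logical_or(g_mask, p_mask).sum()
            if iou > best_iou:
                best_iou, best_id = iou, p
        if best_id is None:
            union_sum += g_mask.sum()       # missed nucleus: pure union
        else:
            p_mask = pred == best_id
            inter_sum += np.logical_and(g_mask, p_mask).sum()
            union_sum += np.logical_or(g_mask, p_mask).sum()
            used.add(best_id)
    for p in pred_ids:                      # unmatched predictions penalize
        if p not in used:
            union_sum += (pred == p).sum()
    return inter_sum / union_sum if union_sum else 0.0
```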
317
An End-to-end System for Automatic Characterization of Iba1 Immunopositive Microglia in Whole Slide Imaging. Neuroinformatics 2020; 17:373-389. [PMID: 30406865] [DOI: 10.1007/s12021-018-9405-x]
Abstract
Traumatic brain injury (TBI) is one of the leading causes of death and disability worldwide. Detailed studies of the microglial response after TBI require high-throughput quantification of changes in microglial count and morphology in histological sections throughout the brain. In this paper, we present a fully automated end-to-end system that is capable of assessing microglial activation in white matter regions on whole slide images of Iba1-stained sections. Our approach involves the division of the full brain slides into smaller image patches that are automatically classified into white and grey matter sections. On the patches classified as white matter, we jointly apply functional minimization methods and deep learning classification to identify Iba1-immunopositive microglia. Detected cells are then automatically traced to preserve their complex branching structure, after which fractal analysis is applied to determine the activation states of the cells. The resulting system detects white matter regions with 84% accuracy, detects microglia with a performance level of 0.70 (F1 score, the harmonic mean of precision and sensitivity) and performs binary microglia morphology classification with 70% accuracy. This automated pipeline performs these analyses at a 20-fold increase in speed compared with a human pathologist. Moreover, we have demonstrated robustness to the variations in stain intensity common in Iba1 immunostaining. A preliminary analysis indicated that this pipeline can identify differences in the microglial response due to TBI. An automated solution to microglia cell analysis can greatly increase standardized analysis of brain slides, allowing pathologists and neuroscientists to focus on characterizing the associated underlying diseases and injuries.
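The F1 score quoted above is the harmonic mean of precision and recall (sensitivity) over detected cells; a worked example with assumed detection counts:

```python
# F1 for detection: harmonic mean of precision and recall. The counts below
# are assumed, purely to show the arithmetic behind a 0.70 score.
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 70 true detections, 30 false alarms, 30 missed cells -> precision = recall
# = 0.70, so F1 = 0.70, matching the performance level quoted above.
print(f1_score(tp=70, fp=30, fn=30))
```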
318
A Novel Architecture to Classify Histopathology Images Using Convolutional Neural Networks. Appl Sci (Basel) 2020. [DOI: 10.3390/app10082929]
Abstract
Histopathology is the study of tissue structure under the microscope to determine whether cells are normal or abnormal. It is a very important examination that is used to determine a patient's treatment plan. The classification of histopathology images is difficult even for an experienced pathologist, and a second opinion is often needed. The convolutional neural network (CNN), a particular type of deep learning architecture, has obtained outstanding results in computer vision tasks such as image classification. In this paper, we propose a novel CNN architecture to classify histopathology images. The proposed model consists of 15 convolution layers and two fully connected layers. A comparison between different activation functions was performed to identify the most efficient one, taking into account two different optimizers. To train and evaluate the proposed model, the publicly available PatchCamelyon dataset was used, consisting of 220,000 annotated images for training and 57,000 unannotated images for testing. The proposed model achieved higher performance than state-of-the-art architectures, with an AUC of 95.46%.
319
Yoon H, Lee J, Oh JE, Kim HR, Lee S, Chang HJ, Sohn DK. Tumor Identification in Colorectal Histology Images Using a Convolutional Neural Network. J Digit Imaging 2020; 32:131-140. [PMID: 30066123] [DOI: 10.1007/s10278-018-0112-9]
Abstract
Colorectal cancer (CRC) is a major global health concern. Its early diagnosis is extremely important, as it determines treatment options and strongly influences the length of survival. Histologic diagnosis can be made by pathologists based on images of tissues obtained from a colonoscopic biopsy. Convolutional neural networks (CNNs), i.e., deep neural networks (DNNs) specifically adapted to image data, have been employed to effectively classify or locate tumors in many types of cancer. Colorectal histology images of 28 normal and 29 tumor samples were obtained from the National Cancer Center, South Korea, and cropped into 6806 normal and 3474 tumor images. We developed five modifications of the system from the Visual Geometry Group (VGG), the winning entry in the classification task of the 2014 ImageNet Large Scale Visual Recognition Challenge (ILSVRC), and examined them in two experiments. In the first experiment, we determined the best modified VGG configuration for our partial dataset, with the five configurations achieving accuracies of 82.50%, 87.50%, 87.50%, 91.40%, and 94.30%, respectively. In the second experiment, the best modified VGG configuration was applied to evaluate the performance of the CNN model. Using the entire dataset with the modified VGG-E configuration, the highest results for accuracy, loss, sensitivity, and specificity were 93.48%, 0.4385, 95.10%, and 92.76%, respectively, which equates to correctly classifying 294 of 309 normal images and 667 of 719 tumor images.
Affiliation(s)
- Hongjun Yoon
- Innovative Medical Engineering and Technology Branch, Research Institute and Hospital, National Cancer Center, Goyang, Gyeonggi, South Korea
- College of Engineering and Computer Science, Syracuse University, Syracuse, NY, USA
- Joohyung Lee
- Innovative Medical Engineering and Technology Branch, Research Institute and Hospital, National Cancer Center, Goyang, Gyeonggi, South Korea
- Ji Eun Oh
- Innovative Medical Engineering and Technology Branch, Research Institute and Hospital, National Cancer Center, Goyang, Gyeonggi, South Korea
- Hong Rae Kim
- Innovative Medical Engineering and Technology Branch, Research Institute and Hospital, National Cancer Center, Goyang, Gyeonggi, South Korea
- Seonhye Lee
- Innovative Medical Engineering and Technology Branch, Research Institute and Hospital, National Cancer Center, Goyang, Gyeonggi, South Korea
- Hee Jin Chang
- Center for Colorectal Cancer, Research Institute and Hospital, National Cancer Center, 323 Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, Republic of Korea
- Dae Kyung Sohn
- Innovative Medical Engineering and Technology Branch, Research Institute and Hospital, National Cancer Center, Goyang, Gyeonggi, South Korea
- Center for Colorectal Cancer, Research Institute and Hospital, National Cancer Center, 323 Ilsan-ro, Ilsandong-gu, Goyang-si, Gyeonggi-do, 10408, Republic of Korea
320
Hägele M, Seegerer P, Lapuschkin S, Bockmayr M, Samek W, Klauschen F, Müller KR, Binder A. Resolving challenges in deep learning-based analyses of histopathological images using explanation methods. Sci Rep 2020; 10:6423. [PMID: 32286358] [PMCID: PMC7156509] [DOI: 10.1038/s41598-020-62724-2]
Abstract
Deep learning has recently gained popularity in digital pathology due to its high prediction quality. However, the medical domain requires explanation and insight for a better understanding beyond standard quantitative performance evaluation, and many explanation methods have recently emerged. This work shows how heatmaps generated by these explanation methods make it possible to resolve common challenges encountered in deep learning-based digital histopathology analyses. We elaborate on biases that are typically inherent in histopathological image data. In the binary classification task of tumour tissue discrimination in publicly available haematoxylin-eosin-stained images of various tumour entities, we investigate three types of biases: (1) biases that affect the entire dataset, (2) biases that are by chance correlated with class labels and (3) sampling biases. While standard analyses focus on patch-level evaluation, we advocate pixel-wise heatmaps, which offer a more precise and versatile diagnostic instrument. These heatmaps are shown to be helpful not only for detecting but also for removing the effects of common hidden biases, which improves generalisation within and across datasets. For example, we observed a trend of roughly 5% improvement in the area under the receiver operating characteristic (ROC) curve when reducing a labelling bias. Explanation techniques are thus demonstrated to be a helpful and highly relevant tool for the development and deployment phases within the life cycle of real-world applications in digital pathology.
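As a simple stand-in for the pixel-wise heatmaps discussed above, plain gradient saliency can be computed as below; note that the study itself uses layer-wise relevance propagation, which this sketch does not reproduce:

```python
# Gradient saliency for a patch classifier: the magnitude of the class
# score's gradient at each input pixel serves as a crude pixel-wise heatmap.
import torch
import torch.nn as nn

model = nn.Sequential(                      # stand-in tumour/non-tumour model
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
patch = torch.randn(1, 3, 224, 224, requires_grad=True)
score = model(patch)[0, 1]                  # logit of the "tumour" class
score.backward()
heatmap = patch.grad.abs().max(dim=1).values  # (1, 224, 224) pixel heatmap
```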
Affiliation(s)
- Miriam Hägele
- TU Berlin, Machine Learning Group, Berlin, 10587, Germany
- Sebastian Lapuschkin
- Fraunhofer Heinrich Hertz Institute, Department of Video Coding & Analytics, Berlin, 10587, Germany
- Michael Bockmayr
- Charité University Hospital, Institute of Pathology, Berlin, 10117, Germany
- University Medical Center Hamburg-Eppendorf, Department of Pediatric Hematology and Oncology, Hamburg, 20251, Germany
- Wojciech Samek
- Fraunhofer Heinrich Hertz Institute, Department of Video Coding & Analytics, Berlin, 10587, Germany
- Frederick Klauschen
- Charité University Hospital, Institute of Pathology, Berlin, 10117, Germany
- Klaus-Robert Müller
- TU Berlin, Machine Learning Group, Berlin, 10587, Germany
- Korea University, Department of Brain and Cognitive Engineering, Anam-dong, Seongbuk-gu, Seoul, 02841, Korea
- Max-Planck-Institute for Informatics, Campus E1 4, Saarbrücken, 66123, Germany
- Alexander Binder
- Singapore University of Technology and Design, ISTD Pillar, Singapore, 487372, Singapore
321
Javed S, Mahmood A, Fraz MM, Koohbanani NA, Benes K, Tsang YW, Hewitt K, Epstein D, Snead D, Rajpoot N. Cellular community detection for tissue phenotyping in colorectal cancer histology images. Med Image Anal 2020; 63:101696. [PMID: 32330851] [DOI: 10.1016/j.media.2020.101696]
Abstract
Classification of the various types of tissue in cancer histology images based on their cellular composition is an important step towards the development of computational pathology tools for systematic digital profiling of the spatial tumor microenvironment. Most existing methods for tissue phenotyping are limited to the classification of tumor and stroma and require large amounts of annotated histology images, which are often not available. In the current work, we pose the problem of identifying distinct tissue phenotypes as finding communities in cellular graphs or networks. First, we train a deep neural network for cell detection and classification into five distinct cellular components. Considering the detected nuclei as nodes, potential cell-cell connections are assigned using Delaunay triangulation, resulting in a cell-level graph. Based on this cell graph, a feature vector capturing potential cell-cell connections of different types of cells is computed. These feature vectors are used to construct a patch-level graph based on the chi-square distance. We map patch-level nodes to the geometric space by representing each node as a vector of geodesic distances from other nodes in the network and iteratively drifting the patch nodes in the direction of positive density gradients towards maximum-density regions. The proposed algorithm is evaluated on a publicly available dataset and another new large-scale dataset consisting of 280K patches of seven tissue phenotypes. The estimated communities have significant biological meaning, as verified by expert pathologists. A comparison with current state-of-the-art methods reveals significant performance improvement in tissue phenotyping.
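The first graph-building step described above, Delaunay triangulation over detected nuclei, can be sketched as follows (random centroids stand in for detector output):

```python
# Detected nuclei become nodes; Delaunay triangulation over their centroids
# supplies the candidate cell-cell edges of the cell-level graph.
import numpy as np
from scipy.spatial import Delaunay

centroids = np.random.rand(50, 2) * 1000   # (x, y) of 50 detected nuclei
tri = Delaunay(centroids)

edges = set()
for simplex in tri.simplices:              # each triangle contributes 3 edges
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

# `edges` is the cell-level graph; per-edge features (e.g. which cell types
# the two endpoints carry) can then be pooled into patch-level descriptors.
print(len(edges), "cell-cell connections")
```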
Affiliation(s)
- Sajid Javed
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; Khalifa University Center for Autonomous Robotic Systems (KUCARS), Abu Dhabi, P.O. Box 127788, UAE
- Arif Mahmood
- Department of Computer Science, Information Technology University, Lahore, Pakistan
- Muhammad Moazam Fraz
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; National University of Science and Technology (NUST), Islamabad, Pakistan
- Ksenija Benes
- Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK
- Yee-Wah Tsang
- Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK
- Katherine Hewitt
- Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK
- David Epstein
- Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK
- David Snead
- Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK; The Alan Turing Institute, London, UK
322
Wang L, Chen A, Zhang Y, Wang X, Zhang Y, Shen Q, Xue Y. AK-DL: A Shallow Neural Network Model for Diagnosing Actinic Keratosis with Better Performance Than Deep Neural Networks. Diagnostics (Basel) 2020; 10:217. [PMID: 32294962] [PMCID: PMC7235884] [DOI: 10.3390/diagnostics10040217]
Abstract
Actinic keratosis (AK) is one of the most common precancerous skin lesions and is easily confused with benign keratosis (BK). At present, the diagnosis of AK mainly depends on histopathological examination, and the lesion is easily overlooked in its early stage, missing the opportunity for treatment. In this study, we designed a shallow convolutional neural network (CNN) named actinic keratosis deep learning (AK-DL) and further developed an intelligent diagnostic system for AK based on the iOS platform. After data preprocessing, the AK-DL model was trained and tested with AK and BK images from the HAM10000 dataset. We further compared it with mainstream deep CNN models, such as AlexNet, GoogLeNet, and ResNet, as well as traditional medical image processing algorithms. Our results showed that the performance of AK-DL was better than that of the mainstream deep CNN models and traditional medical image processing algorithms on the AK dataset. The recognition accuracy of AK-DL was 0.925, the area under the receiver operating characteristic curve (AUC) was 0.887, and the training time was only 123.0 s. An iOS app implementing the intelligent diagnostic system was developed based on the AK-DL model for accurate and automatic diagnosis of AK. Our results indicate that it is better to employ a shallow CNN for the recognition of AK.
Affiliation(s)
- Liyang Wang
- Beijing Advanced Innovation Center for Food Nutrition and Human Health, Key Laboratory of Plant Protein and Grain Processing, National Engineering and Technology Research Center for Fruits and Vegetables, College of Food Science and Nutritional Engineering, China Agricultural University, Beijing 100083, China
- Angxuan Chen
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Yan Zhang
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Xiaoya Wang
- Beijing Advanced Innovation Center for Food Nutrition and Human Health, Key Laboratory of Plant Protein and Grain Processing, National Engineering and Technology Research Center for Fruits and Vegetables, College of Food Science and Nutritional Engineering, China Agricultural University, Beijing 100083, China
- Yu Zhang
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Qun Shen
- Beijing Advanced Innovation Center for Food Nutrition and Human Health, Key Laboratory of Plant Protein and Grain Processing, National Engineering and Technology Research Center for Fruits and Vegetables, College of Food Science and Nutritional Engineering, China Agricultural University, Beijing 100083, China
- Yong Xue
- Beijing Advanced Innovation Center for Food Nutrition and Human Health, Key Laboratory of Plant Protein and Grain Processing, National Engineering and Technology Research Center for Fruits and Vegetables, College of Food Science and Nutritional Engineering, China Agricultural University, Beijing 100083, China
323
Precise pulmonary scanning and reducing medical radiation exposure by developing a clinically applicable intelligent CT system: Toward improving patient care. EBioMedicine 2020; 54:102724. [PMID: 32251997] [PMCID: PMC7132170] [DOI: 10.1016/j.ebiom.2020.102724]
Abstract
Background: Interstitial lung disease requires frequent re-examination, which directly causes excessive cumulative radiation exposure. To date, AI has not been applied to CT to enhance clinical care in this way; we therefore hypothesized that AI could empower CT with the intelligence to realize automatic and accurate pulmonary scanning and thus dramatically decrease medical radiation exposure without compromising patient care. Methods: Facial boundary detection was realized by recognizing the adjacent jaw position through training and testing a region proposal network (RPN) on 76,882 human faces using a preinstalled 2-dimensional camera; the lung fields were then segmented by V-Net on another training set with 314 subjects, and the moving distance of the scanning couch was calculated based on a pre-generated calibration table. A multi-cohort study including 1,186 patients was used for validation and radiation dose quantification under three clinical scenarios. Findings: A U-HAPPY (United imaging Human Automatic Planbox for PulmonarY) scanning CT was designed. The error distance of the RPN was 4.46 ± 0.02 pixels with a success rate of 98.7% in the training set and 2.23 ± 0.10 pixels with a 100% success rate in the testing set. The average Dice's coefficient was 0.99 in the training set and 0.96 in the testing set. A calibration table with 1,344,000 matches was generated to support the linkage between camera and scanner. This real-time automation produces an accurate plan-box covering exactly the location and area needed to scan, thus reducing the amount of radiation exposure significantly (all P < 0.001). Interpretation: U-HAPPY CT, designed to standardize pulmonary imaging acquisition, is promising for reducing patient risk and optimizing public health expenditures. Funding: The National Natural Science Foundation of China.
324
Tennakoon R, Bortsova G, Orting S, Gostar AK, Wille MMW, Saghir Z, Hoseinnezhad R, de Bruijne M, Bab-Hadiashar A. Classification of Volumetric Images Using Multi-Instance Learning and Extreme Value Theorem. IEEE Trans Med Imaging 2020; 39:854-865. [PMID: 31425069] [DOI: 10.1109/tmi.2019.2936244]
Abstract
Volumetric imaging is an essential diagnostic tool for medical practitioners. The use of popular techniques such as convolutional neural networks (CNNs) for the analysis of volumetric images is constrained by the availability of detailed (locally annotated) training data and GPU memory. In this paper, the volumetric image classification problem is posed as a multi-instance classification problem, and a novel method is proposed to adaptively select positive instances from positive bags during the training phase. This method uses extreme value theory to model the feature distribution of images without a pathology and uses it to identify positive instances of an imaged pathology. The experimental results on three separate image classification tasks (classifying retinal OCT images according to the presence or absence of fluid build-ups, detecting emphysema in pulmonary 3D-CT images, and detecting cancerous regions in 2D histopathology images) show that the proposed method produces classifiers with performance similar to fully supervised methods and achieves state-of-the-art performance in all examined test cases.
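One way to picture the extreme-value idea: fit an extreme value distribution to instance-score maxima from pathology-free bags and flag instances in a positive bag that land in its upper tail. The generalized extreme value fit below is an illustrative stand-in for the paper's exact formulation, with synthetic scores throughout:

```python
# Model the maxima of instance scores from negative (healthy) bags with a
# generalized extreme value law, then flag instances in a positive bag whose
# scores exceed a high quantile of that law as likely positive instances.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
neg_bag_maxima = rng.normal(0.0, 1.0, size=(200, 64)).max(axis=1)  # healthy bags
c, loc, scale = genextreme.fit(neg_bag_maxima)

def positive_instances(bag_scores: np.ndarray, p: float = 0.99) -> np.ndarray:
    """Indices of instances exceeding the p-quantile of the negative-maxima law."""
    threshold = genextreme.ppf(p, c, loc=loc, scale=scale)
    return np.nonzero(bag_scores > threshold)[0]

bag = rng.normal(0.0, 1.0, size=64)
bag[[5, 17]] += 6.0                    # two injected "pathology" instances
print(positive_instances(bag))         # flags the two injected instances
```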
325
Mansoor A, Cerrolaza JJ, Perez G, Biggs E, Okada K, Nino G, Linguraru MG. A Generic Approach to Lung Field Segmentation From Chest Radiographs Using Deep Space and Shape Learning. IEEE Trans Biomed Eng 2020; 67:1206-1220. [PMID: 31425015] [PMCID: PMC7293875] [DOI: 10.1109/tbme.2019.2933508]
Abstract
Computer-aided diagnosis (CAD) techniques for lung field segmentation from chest radiographs (CXR) have been proposed for adult cohorts, but rarely for pediatric subjects. Statistical shape models (SSMs), the workhorse of most state-of-the-art CXR-based lung field segmentation methods, do not efficiently accommodate the shape variation of the lung field during the pediatric developmental stages. The main contributions of our work are: 1) a generic lung field segmentation framework from CXR accommodating large shape variation for adult and pediatric cohorts; 2) a deep representation learning detection mechanism, ensemble space learning, for robust object localization; and 3) marginal shape deep learning for shape deformation parameter estimation. Unlike the iterative approach of conventional SSMs, the proposed shape learning mechanism transforms the parameter space into marginal subspaces that are solvable efficiently using a recursive representation learning mechanism. Furthermore, our method is the first to include the challenging retro-cardiac region in CXR-based lung segmentation for accurate lung capacity estimation. The framework is evaluated on 668 CXRs of patients between 3 months and 89 years of age. We obtain a mean Dice similarity coefficient of 0.96 ± 0.03 (including the retro-cardiac region). For a given accuracy, the proposed approach is also found to be faster than conventional SSM-based iterative segmentation methods. The computational simplicity of the proposed generic framework could similarly be applied to the fast segmentation of other deformable objects.
Collapse
|
326
|
Jin Y, Qin C, Huang Y, Zhao W, Liu C. Multi-domain modeling of atrial fibrillation detection with twin attentional convolutional long short-term memory neural networks. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2019.105460] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
327
|
Applied Deep Learning in Plastic Surgery: Classifying Rhinoplasty With a Mobile App. J Craniofac Surg 2020; 31:102-106. [PMID: 31633665 DOI: 10.1097/scs.0000000000005905] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022] Open
Abstract
BACKGROUND Advances in deep learning (DL) have been transformative in computer vision and natural language processing, as well as in healthcare. The authors present a novel application of DL to plastic surgery. Here, the authors describe and demonstrate the mobile deployment of a deep neural network that predicts rhinoplasty status, assess model accuracy compared to surgeons, and describe future directions for such applications in plastic surgery. METHODS A deep convolutional neural network ("RhinoNet") was developed to classify rhinoplasty images using only pixels and rhinoplasty status labels ("before"/"after") as inputs. RhinoNet was trained using a dataset of 22,686 before-and-after photos collected from publicly available sites. Network classification was compared to that of plastic surgery attendings and residents on 2269 previously unseen test-set images. RESULTS RhinoNet correctly predicted rhinoplasty status in 85% of the test-set images. Sensitivity and specificity of model predictions were 0.840 (0.79-0.89) and 0.826 (0.77-0.88), respectively; the corresponding values for expert consensus predictions were 0.814 (0.76-0.87) and 0.867 (0.82-0.91). RhinoNet and humans performed with effectively equivalent accuracy in this classification task. CONCLUSION The authors describe the development of DL applications that identify the presence of superficial surgical procedures solely from images and labels. DL is especially well suited for unstructured, high-fidelity visual and auditory data that do not lend themselves to classical statistical analysis, and it may be deployed as mobile applications for potentially unbridled use, so the authors expect DL to play a key role in many areas of plastic surgery.
Collapse
|
328
|
Optimizing the Performance of Breast Cancer Classification by Employing the Same Domain Transfer Learning from Hybrid Deep Convolutional Neural Network Model. ELECTRONICS 2020. [DOI: 10.3390/electronics9030445] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Breast cancer is a significant factor in female mortality. An early cancer diagnosis leads to a reduction in the breast cancer death rate. With the help of a computer-aided diagnosis system, the efficiency of cancer diagnosis is increased and its cost is reduced. Traditional breast cancer classification techniques are based on handcrafted features, and their performance relies upon the chosen features. They are also very sensitive to variations in size and to complex shapes; histopathological breast cancer images, however, are very complex in shape. Currently, deep learning models have become an alternative solution for diagnosis and have overcome the drawbacks of classical classification techniques. Although deep learning has performed well in various tasks of computer vision and pattern recognition, it still has some challenges. One of the main challenges is the lack of training data. To address this challenge and optimize the performance, we have utilized a transfer learning technique, in which a deep learning model is first trained on one task and then fine-tuned for another. We have employed transfer learning in two ways: training our proposed model first on a same-domain dataset and then on the target dataset, and training it on a different-domain dataset and then on the target dataset. We have empirically shown that same-domain transfer learning optimized the performance. Our hybrid model of parallel convolutional layers and residual links is utilized to classify hematoxylin–eosin-stained breast biopsy images into four classes: invasive carcinoma, in-situ carcinoma, benign tumor, and normal tissue. To reduce the effect of overfitting, we have augmented the images with different image processing techniques. The proposed model achieved state-of-the-art performance, and it outperformed the latest methods by achieving a patch-wise classification accuracy of 90.5% and an image-wise classification accuracy of 97.4% on the validation set. Moreover, we have achieved an image-wise classification accuracy of 96.1% on the test set of the microscopy ICIAR-2018 dataset.
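A minimal sketch of the two-stage (same-domain, then target) transfer learning recipe, assuming a stock ResNet-18 from torchvision stands in for the authors' hybrid model and that source_loader/target_loader are placeholder data loaders:

    # Hedged sketch of two-stage transfer learning: stage 1 trains on a large
    # same-domain (histopathology) dataset, stage 2 fine-tunes on the small
    # target dataset. The loaders are placeholders, not the authors' data.
    import torch
    import torch.nn as nn
    from torchvision import models

    def finetune(model, loader, epochs, lr):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()

    model = models.resnet18(weights="IMAGENET1K_V1")  # generic initialisation
    model.fc = nn.Linear(model.fc.in_features, 4)     # 4 classes, as in the paper
    # Stage 1: train on a same-domain source dataset, e.g. other H&E patches.
    # finetune(model, source_loader, epochs=10, lr=1e-4)
    # Stage 2: fine-tune on the target dataset (e.g. ICIAR-2018 patches).
    # finetune(model, target_loader, epochs=10, lr=1e-5)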
Collapse
|
329
|
Encarnacion-Rivera L, Foltz S, Hartzell HC, Choo H. Myosoft: An automated muscle histology analysis tool using machine learning algorithm utilizing FIJI/ImageJ software. PLoS One 2020; 15:e0229041. [PMID: 32130242 PMCID: PMC7055860 DOI: 10.1371/journal.pone.0229041] [Citation(s) in RCA: 43] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2019] [Accepted: 01/28/2020] [Indexed: 11/18/2022] Open
Abstract
METHODS Muscle sections were stained for cell boundary (laminin) and myofiber type (myosin heavy chain isoforms). Myosoft, running in the open-access software platform FIJI (ImageJ), was used to analyze myofiber size and type in transverse sections of entire gastrocnemius/soleus muscles. RESULTS Myosoft provides an accurate analysis of hundreds to thousands of muscle fibers within 25 minutes, which is more than 10 times faster than manual analysis. We demonstrate that Myosoft is capable of handling high-content images even when image or staining quality is suboptimal, which is a marked improvement over currently available, comparable programs. CONCLUSIONS Myosoft is a reliable, accurate, high-throughput, and convenient tool to analyze high-content muscle histology. Myosoft is freely available to download from Github at https://github.com/Hyojung-Choo/Myosoft/tree/Myosoft-hub.
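Once a labelled myofiber mask exists, per-fiber morphometry reduces to connected-component measurements; a minimal scikit-image sketch of that step (illustrative only, not the Myosoft macro itself):

    # Hedged sketch: per-fiber measurements from a labelled myofiber mask.
    # In Myosoft the mask comes from FIJI's segmentation of the laminin stain;
    # here two rectangles stand in for fibers.
    import numpy as np
    from skimage import measure

    mask = np.zeros((200, 200), dtype=int)
    mask[20:80, 20:80] = 1                  # fake fiber 1
    mask[100:180, 100:160] = 2              # fake fiber 2
    for region in measure.regionprops(mask):
        print(f"fiber {region.label}: area={region.area} px, "
              f"minor axis={region.minor_axis_length:.1f} px")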
Collapse
Affiliation(s)
- Lucas Encarnacion-Rivera
- Department of Cell Biology, School of Medicine, Emory University, Atlanta, Georgia, United States of America
- Undergraduate program in Neuroscience and Behavioral Biology, School of Medicine, Emory University, Atlanta, Georgia, United States of America
| | - Steven Foltz
- Department of Cell Biology, School of Medicine, Emory University, Atlanta, Georgia, United States of America
| | - H. Criss Hartzell
- Department of Cell Biology, School of Medicine, Emory University, Atlanta, Georgia, United States of America
| | - Hyojung Choo
- Department of Cell Biology, School of Medicine, Emory University, Atlanta, Georgia, United States of America
| |
Collapse
|
330
|
Morriss NJ, Conley GM, Ospina SM, Meehan III WP, Qiu J, Mannix R. Automated Quantification of Immunohistochemical Staining of Large Animal Brain Tissue Using QuPath Software. Neuroscience 2020; 429:235-244. [DOI: 10.1016/j.neuroscience.2020.01.006] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2019] [Revised: 12/20/2019] [Accepted: 01/06/2020] [Indexed: 12/14/2022]
|
331
|
|
332
|
Orbit Image Analysis: An open-source whole slide image analysis tool. PLoS Comput Biol 2020; 16:e1007313. [PMID: 32023239 PMCID: PMC7028292 DOI: 10.1371/journal.pcbi.1007313] [Citation(s) in RCA: 84] [Impact Index Per Article: 16.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2019] [Revised: 02/18/2020] [Accepted: 12/11/2019] [Indexed: 11/20/2022] Open
Abstract
We describe Orbit Image Analysis, an open-source whole slide image analysis tool. The tool consists of a generic tile-processing engine that allows the execution of various image analysis algorithms, provided either by Orbit itself or by other open-source platforms, using a tile-based map-reduce execution framework. Orbit Image Analysis is capable of sophisticated whole slide imaging analyses due to several key features. First, Orbit has machine-learning capabilities, including deep learning segmentation that can be integrated with complex object detection for the analysis of intricate tissues. In addition, Orbit can run locally as a standalone application or connect to the open-source image server OMERO. Another important characteristic is its scale-out functionality, using the Apache Spark framework for distributed computing. In this paper, we describe the use of Orbit in three different real-world applications: quantification of idiopathic lung fibrosis, nerve fibre density quantification, and glomeruli detection in the kidney.
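The tile-based map-reduce pattern generalises beyond Orbit's Java engine; a toy Python sketch of the idea, in which the tissue_fraction "classifier" is a trivial stand-in for a real per-tile analysis:

    # Hedged sketch of tile-based map-reduce for whole-slide analysis: split a
    # huge image into tiles, "map" an analysis over each tile, then "reduce"
    # the per-tile results. Orbit's actual engine can distribute the map step
    # over Apache Spark.
    import numpy as np

    def tiles(image, tile=512):
        h, w = image.shape[:2]
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                yield image[y:y + tile, x:x + tile]

    def tissue_fraction(tile):
        # Trivial stand-in "classifier": fraction of non-background pixels.
        return float((tile < 220).mean())

    slide = np.random.randint(0, 255, (2048, 2048), dtype=np.uint8)  # fake slide
    per_tile = [tissue_fraction(t) for t in tiles(slide)]            # map
    print("mean tissue fraction:", sum(per_tile) / len(per_tile))    # reduce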
Collapse
|
333
|
Yasuda Y, Tokunaga K, Koga T, Sakamoto C, Goldberg IG, Saitoh N, Nakao M. Computational analysis of morphological and molecular features in gastric cancer tissues. Cancer Med 2020; 9:2223-2234. [PMID: 32012497 PMCID: PMC7064096 DOI: 10.1002/cam4.2885] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2019] [Revised: 11/13/2019] [Accepted: 01/14/2020] [Indexed: 02/06/2023] Open
Abstract
Biological morphologies of cells and tissues represent their physiological and pathological conditions. The importance of quantitative assessment of morphological information has been highly recognized in clinical diagnosis and therapeutic strategies. In this study, we used a supervised machine learning algorithm, wndchrm, to classify hematoxylin and eosin (H&E)-stained images of human gastric cancer tissues. This analysis distinguished between noncancer and cancer tissues with different histological grades. We then classified the H&E-stained images by expression levels of cancer-associated nuclear ATF7IP/MCAF1 and membranous PD-L1 proteins using immunohistochemistry of serial sections. Interestingly, classes with low and high expression of each protein exhibited significant morphological dissimilarity in H&E images. These results indicate that morphological features in cancer tissues are correlated with the expression of specific cancer-associated proteins, suggesting the usefulness of biomolecule-based morphological classification.
Collapse
Affiliation(s)
- Yoko Yasuda
- Department of Medical Cell Biology, Institute of Molecular Embryology and Genetics, Kumamoto University, Kumamoto, Japan; Department of Health Science, Faculty of Medical Science, Kyushu University, Fukuoka, Japan
| | - Kazuaki Tokunaga
- Department of Medical Cell Biology, Institute of Molecular Embryology and Genetics, Kumamoto University, Kumamoto, Japan
| | - Tomoaki Koga
- Department of Medical Cell Biology, Institute of Molecular Embryology and Genetics, Kumamoto University, Kumamoto, Japan
| | - Chiyomi Sakamoto
- Department of Medical Cell Biology, Institute of Molecular Embryology and Genetics, Kumamoto University, Kumamoto, Japan
| | - Ilya G Goldberg
- Image Informatics and Computational Biology Unit, Laboratory of Genetics, National Institute on Aging, National Institutes of Health, Baltimore, MD, USA
| | | | - Mitsuyoshi Nakao
- Department of Medical Cell Biology, Institute of Molecular Embryology and Genetics, Kumamoto University, Kumamoto, Japan
| |
Collapse
|
334
|
Zeng Y, Xu S, Chapman WC, Li S, Alipour Z, Abdelal H, Chatterjee D, Mutch M, Zhu Q. Real-time colorectal cancer diagnosis using PR-OCT with deep learning. Theranostics 2020; 10:2587-2596. [PMID: 32194821 PMCID: PMC7052898 DOI: 10.7150/thno.40099] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2019] [Accepted: 11/08/2019] [Indexed: 12/24/2022] Open
Abstract
Prior reports have shown optical coherence tomography (OCT) can differentiate normal colonic mucosa from neoplasia, potentially offering an alternative technique to endoscopic biopsy - the current gold-standard colorectal cancer screening and surveillance modality. To aid clinical translation, which is limited by the need to process the large volume of generated data, we designed a deep learning-based pattern recognition (PR) OCT system that automates image processing and provides accurate diagnosis, potentially in real time. Method: OCT is an emerging imaging technique to obtain 3-dimensional (3D) "optical biopsies" of biological samples with high resolution. We designed a convolutional neural network to capture the structure patterns in human colon OCT images. The network was trained and tested using around 26,000 OCT images acquired from 20 tumor areas, 16 benign areas, and 6 other abnormal areas. Results: The trained network successfully detected patterns that identify normal and neoplastic colorectal tissue. Experimental diagnoses predicted by the PR-OCT system were compared to the known histologic findings and quantitatively evaluated. A sensitivity of 100% and a specificity of 99.7% were reached, and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.998 was achieved. Conclusions: Our results demonstrate that PR-OCT can be used to give an accurate real-time computer-aided diagnosis of colonic neoplastic mucosa. Future development of this system as an "optical biopsy" tool to assist doctors in real time in screening for early mucosal neoplasms and evaluating treatment following initial oncologic therapy is planned.
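For reference, the reported metrics follow directly from a vector of classifier scores and ground-truth labels; a minimal sketch with toy data using scikit-learn:

    # Hedged sketch of the evaluation: sensitivity, specificity and ROC AUC
    # for a binary (normal vs. neoplastic) classifier. Labels and scores are
    # toy values, not the study data.
    import numpy as np
    from sklearn.metrics import roc_auc_score, confusion_matrix

    y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])            # ground-truth labels
    y_score = np.array([.1, .3, .9, .8, .7, .2, .95, .4])  # network outputs

    auc = roc_auc_score(y_true, y_score)
    tn, fp, fn, tp = confusion_matrix(y_true, y_score > 0.5).ravel()
    print(f"AUC={auc:.3f} sensitivity={tp/(tp+fn):.3f} specificity={tn/(tn+fp):.3f}")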
Collapse
Affiliation(s)
- Yifeng Zeng
- Department of Biomedical Engineering, Washington University in St. Louis
| | - Shiqi Xu
- Department of Electrical & System Engineering, Washington University in St. Louis
| | - William C. Chapman
- Department of Surgery, Section of Colon and Rectal Surgery, Washington University School of Medicine
| | - Shuying Li
- Department of Biomedical Engineering, Washington University in St. Louis
| | - Zahra Alipour
- Department of Pathology and Immunology, Washington University School of Medicine
| | - Heba Abdelal
- Department of Pathology and Immunology, Washington University School of Medicine
| | - Deyali Chatterjee
- Department of Pathology and Immunology, Washington University School of Medicine
| | - Matthew Mutch
- Department of Surgery, Section of Colon and Rectal Surgery, Washington University School of Medicine
| | - Quing Zhu
- Department of Biomedical Engineering, Washington University in St. Louis
- Department of Radiology, Washington University School of Medicine
| |
Collapse
|
335
|
Kowal M, Żejmo M, Skobel M, Korbicz J, Monczak R. Cell Nuclei Segmentation in Cytological Images Using Convolutional Neural Network and Seeded Watershed Algorithm. J Digit Imaging 2020; 33:231-242. [PMID: 31161430 PMCID: PMC7064474 DOI: 10.1007/s10278-019-00200-8] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023] Open
Abstract
Morphometric analysis of nuclei is crucial in cytological examinations. Unfortunately, nuclei segmentation presents many challenges because nuclei usually form complex clusters in cytological samples. To deal with this problem, we propose an approach that combines a convolutional neural network and the watershed transform to segment nuclei in cytological images of breast cancer. The method first preprocesses images using color deconvolution to highlight hematoxylin-stained objects (nuclei). Next, a convolutional neural network is applied to perform semantic segmentation of the preprocessed image, identifying nuclei areas, cytoplasm areas, edges of nuclei, and background. All connected components in the binary mask of nuclei are treated as potential nuclei. However, some objects are actually clusters of overlapping nuclei; these are detected by outlying values of their morphometric features. An attempt is then made to separate them using seeded watershed segmentation, and if the attempt is successful, they are included in the nuclei set. The accuracy of this approach is evaluated against manually segmented reference images. The degree of matching between reference nuclei and discovered objects is measured using the Jaccard distance and the Hausdorff distance. As part of the study, we verified how using a convolutional neural network instead of intensity thresholding to generate a topographical map for the watershed improves segmentation outcomes. Our results show that the convolutional neural network outperforms Otsu thresholding and adaptive thresholding in most cases, especially in scenarios with many overlapping nuclei.
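A minimal sketch of the CNN-plus-seeded-watershed idea, with a synthetic probability map standing in for the network output (illustrative, not the published code):

    # Hedged sketch: treat the CNN's "nucleus" probability map as the
    # topographic surface and split touching nuclei with a seeded watershed.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def split_cluster(nucleus_prob, threshold=0.5):
        mask = nucleus_prob > threshold
        distance = ndi.distance_transform_edt(mask)
        # Seeds at local maxima of the distance map, one per presumed nucleus.
        coords = peak_local_max(distance, min_distance=5, labels=mask)
        seeds = np.zeros_like(mask, dtype=int)
        seeds[tuple(coords.T)] = np.arange(1, len(coords) + 1)
        # Flood from the seeds over the inverted probability map.
        return watershed(-nucleus_prob, seeds, mask=mask)

    # Toy demo: two overlapping Gaussian "nuclei" in one probability map.
    yy, xx = np.mgrid[0:80, 0:80]
    prob = (np.exp(-((yy - 40) ** 2 + (xx - 28) ** 2) / 120)
            + np.exp(-((yy - 40) ** 2 + (xx - 52) ** 2) / 120))
    labels = split_cluster(np.clip(prob, 0, 1), threshold=0.4)
    print(labels.max(), "nuclei found")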
Collapse
Affiliation(s)
- Marek Kowal
- Institute of Control and Computation Engineering, University of Zielona Góra, Szafrana 2, 65-516, Zielona Góra, Poland
| | - Michał Żejmo
- Institute of Control and Computation Engineering, University of Zielona Góra, Szafrana 2, 65-516, Zielona Góra, Poland.
| | - Marcin Skobel
- Institute of Control and Computation Engineering, University of Zielona Góra, Szafrana 2, 65-516, Zielona Góra, Poland
| | - Józef Korbicz
- Institute of Control and Computation Engineering, University of Zielona Góra, Szafrana 2, 65-516, Zielona Góra, Poland
| | - Roman Monczak
- Department of Pathology, University Hospital in Zielona Góra, Zyty 26, 65-046, Zielona Góra, Poland
| |
Collapse
|
336
|
Basha SS, Dubey SR, Pulabaigari V, Mukherjee S. Impact of fully connected layers on performance of convolutional neural networks for image classification. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.10.008] [Citation(s) in RCA: 101] [Impact Index Per Article: 20.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
337
|
Kalimuthu SN, Wilson GW, Grant RC, Seto M, O'Kane G, Vajpeyi R, Notta F, Gallinger S, Chetty R. Morphological classification of pancreatic ductal adenocarcinoma that predicts molecular subtypes and correlates with clinical outcome. Gut 2020; 69:317-328. [PMID: 31201285 DOI: 10.1136/gutjnl-2019-318217] [Citation(s) in RCA: 89] [Impact Index Per Article: 17.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/03/2019] [Revised: 05/06/2019] [Accepted: 05/14/2019] [Indexed: 02/04/2023]
Abstract
INTRODUCTION Transcriptional analyses have identified several distinct molecular subtypes in pancreatic ductal adenocarcinoma (PDAC) that have prognostic and potential therapeutic significance. However, to date, an in-depth clinicomorphological correlation of these molecular subtypes has not been performed. We sought to identify specific morphological patterns to compare with known molecular subtypes, interrogate their biological significance, and furthermore reappraise the current grading system in PDAC. DESIGN We first assessed 86 primary, chemotherapy-naive PDAC resection specimens with matched RNA-Seq data for specific, reproducible morphological patterns. Differential expression was applied to the gene expression data using the morphological features. We next compared the differentially expressed gene signatures with previously published molecular subtypes. Overall survival (OS) was correlated with the morphological and molecular subtypes. RESULTS We identified four morphological patterns that segregated into two components ('gland forming' and 'non-gland forming') based on the presence/absence of well-formed glands. A morphological cut-off (≥40% 'non-gland forming') was established using RNA-Seq data, which identified two groups (A and B) with gene signatures that correlated with known molecular subtypes. There was a significant difference in OS between the groups. The morphological groups remained significantly prognostic within cancers that were moderately differentiated and classified as 'classical' using RNA-Seq. CONCLUSION Our study has demonstrated that PDACs can be morphologically classified into distinct and biologically relevant categories that predict known molecular subtypes. These results provide the basis for an improved taxonomy of PDAC, which may lend itself to future treatment strategies and the development of deep learning models.
Collapse
Affiliation(s)
- Sangeetha N Kalimuthu
- Anatomical Pathology, Laboratory Medicine Program, University Health Network, Toronto, Ontario, Canada
- Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
| | - Gavin W Wilson
- Latner Thoracic Surgery Laboratory, Division of Thoracic Surgery, Department of Surgery, University Health Network, Toronto, Ontario, Canada
| | - Robert C Grant
- Department of Medical Oncology, Princess Margaret Hospital Cancer Centre, Toronto, Ontario, Canada
- PanCuRx Translational Research Initiative, Ontario Institute for Cancer Research, Toronto, Ontario, Canada
| | - Matthew Seto
- Anatomical Pathology, Laboratory Medicine Program, University Health Network, Toronto, Ontario, Canada
| | - Grainne O'Kane
- Department of Medical Oncology, Princess Margaret Hospital Cancer Centre, Toronto, Ontario, Canada
- PanCuRx Translational Research Initiative, Ontario Institute for Cancer Research, Toronto, Ontario, Canada
| | - Rajkumar Vajpeyi
- Anatomical Pathology, Laboratory Medicine Program, University Health Network, Toronto, Ontario, Canada
- Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
| | - Faiyaz Notta
- Division of Research, Princess Margaret Hospital Cancer Centre, Toronto, Ontario, Canada
- Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
| | - Steven Gallinger
- PanCuRx Translational Research Initiative, Ontario Institute for Cancer Research, Toronto, Ontario, Canada
- Hepatobiliary/Pancreatic Surgical Oncology Program, University Health Network, Toronto, Ontario, Canada
| | - Runjan Chetty
- Anatomical Pathology, Laboratory Medicine Program, University Health Network, Toronto, Ontario, Canada
- Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
| |
Collapse
|
338
|
Iizuka O, Kanavati F, Kato K, Rambeau M, Arihiro K, Tsuneki M. Deep Learning Models for Histopathological Classification of Gastric and Colonic Epithelial Tumours. Sci Rep 2020; 10:1504. [PMID: 32001752 PMCID: PMC6992793 DOI: 10.1038/s41598-020-58467-9] [Citation(s) in RCA: 194] [Impact Index Per Article: 38.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2019] [Accepted: 01/14/2020] [Indexed: 11/09/2022] Open
Abstract
Histopathological classification of gastric and colonic epithelial tumours is one of the routine pathological diagnosis tasks for pathologists. Computational pathology techniques based on artificial intelligence (AI) would be of high benefit in easing the ever-increasing workloads on pathologists, especially in regions that have shortages in access to pathological diagnosis services. In this study, we trained convolutional neural networks (CNNs) and recurrent neural networks (RNNs) on biopsy histopathology whole-slide images (WSIs) of stomach and colon. The models were trained to classify WSIs into adenocarcinoma, adenoma, and non-neoplastic categories. We evaluated our models on three independent test sets each, achieving areas under the curve (AUCs) of up to 0.97 and 0.99 for gastric adenocarcinoma and adenoma, respectively, and 0.96 and 0.99 for colonic adenocarcinoma and adenoma, respectively. The results demonstrate the generalisation ability of our models and their promising potential for deployment in a practical histopathological diagnostic workflow.
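A toy sketch of coupling a tile-level CNN embedder with an RNN that emits one slide-level prediction, as the abstract describes; all sizes are illustrative, not those of the paper:

    # Hedged sketch: the CNN embeds each tile, the RNN consumes the tile
    # sequence, and a linear head yields one whole-slide prediction.
    import torch
    import torch.nn as nn

    cnn = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())   # tile -> 8-D
    rnn = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)
    head = nn.Linear(32, 3)          # adenocarcinoma / adenoma / non-neoplastic

    tiles = torch.randn(1, 20, 3, 64, 64)                  # 20 tiles of one WSI
    emb = cnn(tiles.flatten(0, 1)).reshape(1, 20, -1)      # (batch, seq, feat)
    _, (h, _) = rnn(emb)
    print(head(h[-1]).softmax(dim=1))                      # slide-level probabilities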
Collapse
Affiliation(s)
| | - Fahdi Kanavati
- Medmain Research, Medmain Inc., Fukuoka, 810-0042, Japan
| | - Kei Kato
- Medmain Research, Medmain Inc., Fukuoka, 810-0042, Japan; School of Medicine, Hiroshima University, Hiroshima, 734-0037, Japan
| | | | - Koji Arihiro
- Department of Anatomical Pathology, Hiroshima University Hospital, Hiroshima, 734-0037, Japan
| | - Masayuki Tsuneki
- Medmain Inc., Fukuoka, 810-0042, Japan; Medmain Research, Medmain Inc., Fukuoka, 810-0042, Japan
| |
Collapse
|
339
|
Abstract
Toxoplasma gondii, one of the world's most common parasites, can infect all types of warm-blooded animals, including one-third of the world's human population. Most current routine diagnostic methods are costly, time-consuming, and labor-intensive. Although T. gondii can be directly observed under the microscope in tissue or spinal fluid samples, this form of identification is difficult and requires well-trained professionals. Nevertheless, the traditional identification of parasites under the microscope is still performed by a large number of laboratories. Novel, efficient, and reliable methods of T. gondii identification are therefore needed, particularly in developing countries. To this end, we developed a novel transfer learning-based microscopic image recognition method for T. gondii identification. This approach employs the fuzzy cycle generative adversarial network (FCGAN) with transfer learning, utilizing the knowledge gained by parasitologists that Toxoplasma is banana or crescent shaped. Our approach aims to build connections between microscopic and macroscopic associated objects by embedding the fuzzy C-means clustering algorithm into the cycle generative adversarial network (Cycle GAN). Our approach achieves 93.1% and 94.0% detection accuracy for ×400 and ×1,000 Toxoplasma microscopic images, respectively. We showed the high accuracy and effectiveness of our approach on newly collected, unlabeled Toxoplasma microscopic images, compared to other currently available deep learning methods. This novel method for Toxoplasma microscopic image recognition will open a new window for developing cost-effective and scalable deep learning-based diagnostic solutions, potentially enabling broader clinical access in developing countries. IMPORTANCE Toxoplasma gondii, one of the world's most common parasites, can infect all types of warm-blooded animals, including one-third of the world's human population. Artificial intelligence (AI) could provide accurate and rapid diagnosis in fighting Toxoplasma. So far, none of the previously reported deep learning methods have attempted to explore the advantages of transfer learning for Toxoplasma detection. The knowledge from parasitologists is that the Toxoplasma parasite is generally banana or crescent shaped. Based on this, we built connections between microscopic and macroscopic associated objects by embedding the fuzzy C-means clustering algorithm into the cycle generative adversarial network (Cycle GAN). Our approach achieves high accuracy and effectiveness on ×400 and ×1,000 Toxoplasma microscopic images.
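The fuzzy C-means step that FCGAN embeds into the Cycle GAN alternates soft-membership and centroid updates; a plain NumPy sketch on generic feature vectors (illustrative, not the published code):

    # Hedged sketch of fuzzy C-means: memberships U (rows sum to 1) and
    # centroids are updated alternately; m > 1 controls the fuzziness.
    import numpy as np

    def fuzzy_cmeans(X, c=2, m=2.0, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per point
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            U = 1.0 / (d ** (2 / (m - 1)))       # standard FCM membership update
            U /= U.sum(axis=1, keepdims=True)
        return U, centers

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(size=(50, 2)), rng.normal(size=(50, 2)) + 4])
    U, centers = fuzzy_cmeans(X)
    print(np.round(centers, 2))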
Collapse
|
340
|
Ning Z, Pan W, Chen Y, Xiao Q, Zhang X, Luo J, Wang J, Zhang Y. Integrative analysis of cross-modal features for the prognosis prediction of clear cell renal cell carcinoma. Bioinformatics 2020; 36:2888-2895. [DOI: 10.1093/bioinformatics/btaa056] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2019] [Revised: 01/14/2020] [Accepted: 01/20/2020] [Indexed: 12/19/2022] Open
Abstract
Motivation
As a highly heterogeneous disease, clear cell renal cell carcinoma (ccRCC) has quite variable clinical behaviors. Prognostic biomarkers play a crucial role in stratifying patients suffering from ccRCC to avoid over- and under-treatment. Studies based on hand-crafted features and single-modal data have been widely conducted to predict the prognosis of ccRCC. However, these experience-dependent methods, neglecting the synergy among multimodal data, have limited capacity to perform accurate prediction. Inspired by the complementary information among multimodal data and the successful application of convolutional neural networks (CNNs) in medical image analysis, a novel framework was proposed to improve prediction performance.
Results
We proposed a cross-modal feature-based integrative framework, in which deep features extracted from computed tomography/histopathological images using CNNs were combined with eigengenes generated from functional genomic data to construct a prognostic model for ccRCC. Results showed that our proposed model can stratify high- and low-risk subgroups with a significant difference (P < 0.05) and outperform models based on single-modality features in the independent testing cohort [C-index, 0.808 (0.728–0.888)]. In addition, we also explored the relationship between deep image features and eigengenes, and made an attempt to explain deep image features from the viewpoint of genomic data. Notably, the integrative framework is applicable to prognosis prediction of other cancers with matched multimodal data.
Availability and implementation
https://github.com/zhang-de-lab/zhang-lab?from=singlemessage
Supplementary information
Supplementary data are available at Bioinformatics online.
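To illustrate the integration step described under Results, a minimal sketch that concatenates placeholder deep image features with eigengenes and fits a Cox proportional-hazards model (assuming the lifelines library; all column names and data are invented):

    # Hedged sketch of cross-modal integration: CNN image features plus
    # genomic eigengenes as covariates of one Cox model.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 120
    df = pd.DataFrame({
        "img_feat_1": rng.normal(size=n),     # deep feature from the CT/histology CNN
        "img_feat_2": rng.normal(size=n),
        "eigengene_1": rng.normal(size=n),    # first eigengene of a gene module
        "time": rng.exponential(48, size=n),  # follow-up time (months)
        "event": rng.integers(0, 2, size=n),  # 1 = event observed
    })
    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    print(cph.concordance_index_)             # C-index, the metric reported above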
Collapse
Affiliation(s)
- Zhenyuan Ning
- School of Biomedical Engineering
- Guangdong Provincial Key Laboratory of Medical Image Processing
| | - Weihao Pan
- School of Biomedical Engineering
- Guangdong Provincial Key Laboratory of Medical Image Processing
| | - Yuting Chen
- Department of Radiotherapy Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong 510515, China
| | - Qing Xiao
- School of Biomedical Engineering
- Guangdong Provincial Key Laboratory of Medical Image Processing
| | - Xinsen Zhang
- School of Biomedical Engineering
- Guangdong Provincial Key Laboratory of Medical Image Processing
| | - Jiaxiu Luo
- School of Biomedical Engineering
- Guangdong Provincial Key Laboratory of Medical Image Processing
| | - Jian Wang
- School of Biomedical Engineering
- Department of Radiotherapy Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong 510515, China
| | - Yu Zhang
- School of Biomedical Engineering
- Guangdong Provincial Key Laboratory of Medical Image Processing
| |
Collapse
|
341
|
Wang S, Lin B, Lin G, Lin R, Huang F, Liu W, Wang X, Liu X, Zhang Y, Wang F, Lin Y, Chen L, Chen J. Automated label-free detection of injured neuron with deep learning by two-photon microscopy. JOURNAL OF BIOPHOTONICS 2020; 13:e201960062. [PMID: 31602806 DOI: 10.1002/jbio.201960062] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/30/2019] [Revised: 09/18/2019] [Accepted: 09/27/2019] [Indexed: 06/10/2023]
Abstract
Stroke is a significant cause of morbidity and long-term disability globally. Detection of injured neurons is a prerequisite for defining the degree of focal ischemic brain injury, which can be used to guide further therapy. Here, we demonstrate the capability of two-photon microscopy (TPM) to identify injured neurons, label-free, on unstained thin sections and fresh tissue of a rat cerebral ischemia-reperfusion model, revealing definite diagnostic features compared with conventional staining images. Moreover, a deep learning model based on a convolutional neural network is developed to automatically detect the location of injured neurons on TPM images. We then apply deep learning-assisted TPM to evaluate the ischemic regions based on tissue edema, two-photon excited fluorescence signal intensity, and neuronal injury, presenting a novel means of identifying the infarct core, peri-infarct area, and remote area. These results establish an automated and label-free method that could provide supplementary information to augment diagnostic accuracy, and that holds the potential to be used as an intravital diagnostic tool for evaluating the effectiveness of drug interventions and predicting potential therapeutics.
Collapse
Affiliation(s)
- Shu Wang
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, China
| | - Bingbing Lin
- College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
| | - Guimin Lin
- College of Physics & Electronic Information Engineering, Minjiang University, Fuzhou, China
| | - Ruolan Lin
- Department of Radiology, Fujian Medical University Union Hospital, Fuzhou, China
| | - Feng Huang
- College of Mechanical Engineering and Automation, Fuzhou University, Fuzhou, China
| | - Weilin Liu
- College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
| | - Xingfu Wang
- Department of Pathology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, China
| | - Xueyong Liu
- Department of Pathology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, China
| | - Yu Zhang
- Department of Pathology, The First Affiliated Hospital of Fujian Medical University, Fuzhou, China
| | - Feng Wang
- Department of Neurosurgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, China
| | - Yuanxiang Lin
- Department of Neurosurgery, The First Affiliated Hospital of Fujian Medical University, Fuzhou, China
| | - Lidian Chen
- College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
| | - Jianxin Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou, China
| |
Collapse
|
342
|
Gao F, Wu T, Chu X, Yoon H, Xu Y, Patel B. Deep Residual Inception Encoder–Decoder Network for Medical Imaging Synthesis. IEEE J Biomed Health Inform 2020; 24:39-49. [DOI: 10.1109/jbhi.2019.2912659] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
343
|
Chandradevan R, Aljudi AA, Drumheller BR, Kunananthaseelan N, Amgad M, Gutman DA, Cooper LAD, Jaye DL. Machine-based detection and classification for bone marrow aspirate differential counts: initial development focusing on nonneoplastic cells. J Transl Med 2020; 100:98-109. [PMID: 31570774 PMCID: PMC6920560 DOI: 10.1038/s41374-019-0325-7] [Citation(s) in RCA: 54] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2019] [Revised: 07/30/2019] [Accepted: 09/02/2019] [Indexed: 12/16/2022] Open
Abstract
Bone marrow aspirate (BMA) differential cell counts (DCCs) are critical for the classification of hematologic disorders. While manual counts are considered the gold standard, they are labor intensive, time consuming, and subject to bias. A reliable automated counter has yet to be developed, largely due to the inherent complexity of bone marrow specimens. Digital pathology imaging coupled with machine learning algorithms represents a highly promising emerging technology for this purpose. Yet, training datasets for BMA cellular constituents, critical for building and validating machine learning algorithms, are lacking. Herein, we report our experience creating and employing such datasets to develop a machine learning algorithm to detect and classify BMA cells. Utilizing a web-based system that we developed for annotating and managing digital pathology images, over 10,000 cells from scanned whole slide images of BMA smears were manually annotated, including all classes that comprise the standard clinical DCC. We implemented a two-stage, detection and classification approach that allows design flexibility and improved classification accuracy. In a sixfold cross-validation, our algorithms achieved high overall accuracy in detection (0.959 ± 0.008 precision-recall AUC) and classification (0.982 ± 0.03 ROC AUC) using nonneoplastic samples. Testing on a small set of acute myeloid leukemia and multiple myeloma samples demonstrated similar detection and classification performance. In summary, our algorithms showed promising early results and represent an important initial step in the effort to devise a reliable, objective method to automate DCCs. With further development to include formal clinical validation, such a system has the potential to assist in disease diagnosis and prognosis, and significantly impact clinical practice.
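The decoupled two-stage design can be sketched as a detector feeding a classifier; everything below is a stand-in (toy boxes, an abridged class list), shown only to make the pipeline shape concrete:

    # Hedged sketch of the two-stage approach: stage 1 proposes cell locations,
    # stage 2 classifies each crop, and the differential count is the class mix.
    import numpy as np

    CLASSES = ["blast", "promyelocyte", "myelocyte", "erythroid", "lymphocyte"]

    def detect_cells(image):
        # Stand-in for stage 1 (e.g. a region-proposal CNN): (y, x, h, w) boxes.
        return [(100, 120, 64, 64), (300, 310, 64, 64)]

    def classify_cell(crop):
        # Stand-in for stage 2: a CNN over the cropped cell image.
        return CLASSES[int(crop.mean()) % len(CLASSES)]

    def differential_count(image):
        counts = {c: 0 for c in CLASSES}
        for y, x, h, w in detect_cells(image):
            counts[classify_cell(image[y:y + h, x:x + w])] += 1
        total = sum(counts.values()) or 1
        return {c: n / total for c, n in counts.items()}  # DCC as fractions

    print(differential_count(np.zeros((512, 512), dtype=np.uint8)))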
Collapse
Affiliation(s)
| | - Ahmed A Aljudi
- Department of Pathology and Laboratory Medicine, Emory University, Atlanta, GA, USA
- Department of Pathology, Children's Healthcare of Atlanta, Atlanta, GA, USA
| | - Bradley R Drumheller
- Department of Pathology and Laboratory Medicine, Emory University, Atlanta, GA, USA
| | | | - Mohamed Amgad
- Department of Biomedical Informatics, Emory University, Atlanta, GA, USA
| | - David A Gutman
- Department of Neurology, Emory University, Atlanta, GA, USA
| | - Lee A D Cooper
- Department of Biomedical Informatics, Emory University, Atlanta, GA, USA.
- Department of Pathology, Northwestern University, Chicago, IL and Robert H. Lurie Comprehensive Cancer Center of Northwestern University, Chicago, IL, USA.
| | - David L Jaye
- Department of Pathology and Laboratory Medicine, Emory University, Atlanta, GA, USA.
- Winship Cancer Institute, Emory University, Atlanta, GA, USA.
| |
Collapse
|
344
|
Abe N, Matsumoto H, Takamatsu R, Tamaki K, Takigami N, Uehara K, Kamada Y, Tamaki N, Motonari T, Unesoko M, Nakada N, Zaha H, Yoshimi N. Quantitative digital image analysis of tumor-infiltrating lymphocytes in HER2-positive breast cancer. Virchows Arch 2019; 476:701-709. [PMID: 31873876 DOI: 10.1007/s00428-019-02730-6] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2019] [Revised: 10/03/2019] [Accepted: 11/03/2019] [Indexed: 12/12/2022]
Abstract
As visual quantification of the density of tumor-infiltrating lymphocytes (TILs) lacks precision, digital image analysis (DIA) approaches have been applied to improve it. In several studies, TIL density has been examined on hematoxylin and eosin (HE)-stained sections using DIA. The aim of the present study was to quantify TIL density on HE sections of core needle biopsies using DIA and investigate its association with clinicopathological parameters and pathological response to neoadjuvant chemotherapy in human epidermal growth factor receptor 2 (HER2)-positive breast cancer. The study cohort comprised patients with HER2-positive breast cancer, all treated with neoadjuvant anti-HER2 therapy. DIA software applying machine learning-based classification of epithelial and stromal elements was used to count TILs. TIL density was determined as the number of TILs per square millimeter of stromal tissue. Median TIL density was 1287/mm2 (range, 123-8101/mm2). A high TIL density was associated with higher histological grade (P = 0.02), estrogen receptor negativity (P = 0.036), and pathological complete response (pCR) (P < 0.0001). In analyses using receiver operating characteristic curves, a threshold TIL density of 2420/mm2 best discriminated pCR from non-pCR. In multivariate analysis, high TIL density (> 2420/mm2) was significantly associated with pCR (P < 0.0001). Our results indicate that DIA can assess TIL density quantitatively, with a machine learning-based classification algorithm allowing determination of TIL density as the number of TILs per unit area of stroma, and that TIL density established by this method appears to be an independent predictor of pCR in HER2-positive breast cancer.
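The density definition is simple arithmetic once the stromal area and TIL count are known; a worked toy example against the reported 2420/mm2 cut-off:

    # Hedged worked example of the paper's definition: TILs per square
    # millimetre of stromal tissue. The counts and area below are invented.
    til_count = 1900                  # lymphocytes counted within stroma
    stroma_area_um2 = 7.5e5           # segmented stromal area in square microns
    density = til_count / (stroma_area_um2 / 1e6)   # 1 mm2 = 1e6 um2
    label = "high (pCR-associated)" if density > 2420 else "low"
    print(f"{density:.0f} TILs/mm2 -> {label}")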
Collapse
Affiliation(s)
- Norie Abe
- Department of Breast Surgery, Nakagami Hospital, Okinawa, Japan
- Department of Pathology and Oncology, Graduate School of Medicine, University of the Ryukyus, Okinawa, Japan
| | - Hirofumi Matsumoto
- Department of Pathology, Ryukyu University Hospital, 207 Uehara, Nishihara, Okinawa, 903-0215, Japan.
| | - Reika Takamatsu
- Department of Pathology and Oncology, Graduate School of Medicine, University of the Ryukyus, Okinawa, Japan
| | - Kentaro Tamaki
- Department of Breast Surgical Oncology, Nahanishi Clinic, Okinawa, Japan
| | - Naoko Takigami
- Department of Breast Surgical Oncology, Nahanishi Clinic, Okinawa, Japan
| | - Kano Uehara
- Department of Breast Surgical Oncology, Nahanishi Clinic, Okinawa, Japan
| | - Yoshihiko Kamada
- Department of Breast Surgical Oncology, Nahanishi Clinic, Okinawa, Japan
| | - Nobumitsu Tamaki
- Department of Breast Surgical Oncology, Nahanishi Clinic, Okinawa, Japan
| | - Tokiwa Motonari
- Department of Breast Surgery, Nakagami Hospital, Okinawa, Japan
| | - Mikiko Unesoko
- Department of Breast Surgery, Nakagami Hospital, Okinawa, Japan
| | | | - Hisamitsu Zaha
- Department of Breast Surgery, Nakagami Hospital, Okinawa, Japan
| | - Naoki Yoshimi
- Department of Pathology and Oncology, Graduate School of Medicine, University of the Ryukyus, Okinawa, Japan
| |
Collapse
|
345
|
Azer SA. Deep learning with convolutional neural networks for identification of liver masses and hepatocellular carcinoma: A systematic review. World J Gastrointest Oncol 2019; 11:1218-1230. [PMID: 31908726 PMCID: PMC6937442 DOI: 10.4251/wjgo.v11.i12.1218] [Citation(s) in RCA: 44] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/02/2019] [Revised: 07/09/2019] [Accepted: 10/03/2019] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Artificial intelligence techniques such as convolutional neural networks (CNNs) have been used in the interpretation of images and the diagnosis of hepatocellular cancer (HCC) and liver masses. CNNs, a class of deep learning algorithms, have demonstrated the capability to recognise specific features that can detect pathological lesions. AIM To assess the use of CNNs in examining HCC and liver mass images for the diagnosis of cancer, and to evaluate the accuracy and performance of CNNs. METHODS The databases PubMed, EMBASE, the Web of Science, and research books were systematically searched using related keywords. Studies analysing pathological anatomy, cellular, and radiological images of HCC or liver masses using CNNs were identified according to the study protocol, whether to detect cancer, differentiate cancer from other lesions, or stage the lesion. The data were extracted as per a predefined extraction protocol. The accuracy and performance of the CNNs in detecting cancer or early stages of cancer were analysed. The primary outcomes of the study were analysing the type of cancer or liver mass and identifying the type of images that showed optimum accuracy in cancer detection. RESULTS A total of 11 studies that met the selection criteria and were consistent with the aims of the study were identified. The studies demonstrated the ability to differentiate liver masses or differentiate HCC from other lesions (n = 6), HCC from cirrhosis or development of new tumours (n = 3), and HCC nuclei grading or segmentation (n = 2). The CNNs showed satisfactory levels of accuracy. The studies aimed at detecting lesions (n = 4), classification (n = 5), and segmentation (n = 2). Several methods were used to assess the accuracy of the CNN models. CONCLUSION The role of CNNs in analysing images and as tools in the early detection of HCC or liver masses has been demonstrated in these studies. While a few limitations have been identified in these studies, overall the CNNs used achieved a high level of accuracy in the segmentation and classification of liver cancer images.
Collapse
Affiliation(s)
- Samy A Azer
- Department of Medical Education, King Saud University College of Medicine, Riyadh 11461, Saudi Arabia
| |
Collapse
|
346
|
Pesteie M, Abolmaesumi P, Rohling RN. Adaptive Augmentation of Medical Data Using Independently Conditional Variational Auto-Encoders. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2807-2820. [PMID: 31059432 DOI: 10.1109/tmi.2019.2914656] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Current deep supervised learning methods typically require large amounts of labeled data for training. Since there is a significant cost associated with clinical data acquisition and labeling, medical datasets used for training these models are relatively small in size. In this paper, we aim to alleviate this limitation by proposing a variational generative model along with an effective data augmentation approach that utilizes the generative model to synthesize data. In our approach, the model learns the probability distribution of image data conditioned on a latent variable and the corresponding labels. The trained model can then be used to synthesize new images for data augmentation. We demonstrate the effectiveness of the approach on two independent clinical datasets consisting of ultrasound images of the spine and magnetic resonance images of the brain. For the spine dataset, a baseline and a residual model achieve an accuracy of 85% and 92%, respectively, on the image classification task using our method, compared to 78% and 83% using a conventional training approach. For the brain dataset, a baseline and a U-net network achieve Dice coefficients of 84% and 88%, respectively, in tumor segmentation, compared to 80% and 83% for the conventional training approach.
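A minimal PyTorch sketch of a conditional VAE of the kind the abstract describes: both encoder and decoder see the label, so sampling the latent prior with a chosen label synthesises new labelled images. Architecture sizes are illustrative, not those of the paper:

    # Hedged sketch of a conditional VAE for label-preserving data augmentation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CVAE(nn.Module):
        def __init__(self, img_dim=28 * 28, n_classes=2, z_dim=16):
            super().__init__()
            self.enc = nn.Linear(img_dim + n_classes, 256)
            self.mu, self.logvar = nn.Linear(256, z_dim), nn.Linear(256, z_dim)
            self.dec = nn.Sequential(
                nn.Linear(z_dim + n_classes, 256), nn.ReLU(),
                nn.Linear(256, img_dim), nn.Sigmoid())
            self.n_classes = n_classes

        def forward(self, x, y):
            y1 = F.one_hot(y, self.n_classes).float()
            h = F.relu(self.enc(torch.cat([x, y1], dim=1)))
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
            return self.dec(torch.cat([z, y1], dim=1)), mu, logvar

        def sample(self, y):                 # synthesise new images for label y
            y1 = F.one_hot(y, self.n_classes).float()
            z = torch.randn(len(y), self.mu.out_features)
            return self.dec(torch.cat([z, y1], dim=1))

    def loss_fn(recon, x, mu, logvar):       # reconstruction + KL divergence
        rec = F.binary_cross_entropy(recon, x, reduction="sum")
        kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kld

    model = CVAE()
    x, y = torch.rand(8, 28 * 28), torch.randint(0, 2, (8,))
    recon, mu, logvar = model(x, y)
    print(loss_fn(recon, x, mu, logvar).item())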
Collapse
|
347
|
Wang X, Yan Y, Tang P, Liu W, Guo X. Bag similarity network for deep multi-instance learning. Inf Sci (N Y) 2019. [DOI: 10.1016/j.ins.2019.07.071] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
348
|
Wang S, Wang T, Yang L, Yang DM, Fujimoto J, Yi F, Luo X, Yang Y, Yao B, Lin S, Moran C, Kalhor N, Weissferdt A, Minna J, Xie Y, Wistuba II, Mao Y, Xiao G. ConvPath: A software tool for lung adenocarcinoma digital pathological image analysis aided by a convolutional neural network. EBioMedicine 2019; 50:103-110. [PMID: 31767541 PMCID: PMC6921240 DOI: 10.1016/j.ebiom.2019.10.033] [Citation(s) in RCA: 63] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2019] [Revised: 10/16/2019] [Accepted: 10/16/2019] [Indexed: 12/20/2022] Open
Abstract
BACKGROUND The spatial distributions of different types of cells could reveal a cancer cell's growth pattern, its relationships with the tumor microenvironment and the immune response of the body, all of which represent key "hallmarks of cancer". However, the process by which pathologists manually recognize and localize all the cells in pathology slides is extremely labor intensive and error prone. METHODS In this study, we developed an automated cell type classification pipeline, ConvPath, which includes nuclei segmentation; convolutional neural network-based classification of tumor cells, stromal cells, and lymphocytes; and extraction of tumor microenvironment-related features for lung cancer pathology images. To facilitate users in leveraging this pipeline for their research, all source scripts for the ConvPath software are available at https://qbrc.swmed.edu/projects/cnn/. FINDINGS The overall classification accuracy was 92.9% and 90.1% in the training and independent testing datasets, respectively. By identifying cells and classifying cell types, this pipeline can convert a pathology image into a "spatial map" of tumor cells, stromal cells, and lymphocytes. From this spatial map, we can extract features that characterize the tumor microenvironment. Based on these features, we developed an image feature-based prognostic model and validated the model in two independent cohorts. The predicted risk group serves as an independent prognostic factor, after adjusting for clinical variables that include age, gender, smoking status, and stage. INTERPRETATION The analysis pipeline developed in this study could convert the pathology image into a "spatial map" of tumor cells, stromal cells, and lymphocytes. This could greatly facilitate and empower comprehensive analysis of the spatial organization of cells, as well as their roles in tumor progression and metastasis.
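Once cells are classified, the "spatial map" yields neighbourhood features; a toy sketch of one such feature (the lymphocyte fraction among each tumor cell's nearest neighbours), illustrative only and not the ConvPath code:

    # Hedged sketch: summarise the local microenvironment of each tumor cell
    # from classified centroids using a k-d tree nearest-neighbour query.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(2)
    coords = rng.uniform(0, 1000, size=(300, 2))           # cell centroids (um)
    labels = rng.choice(["tumor", "stroma", "lymphocyte"], size=300)

    tree = cKDTree(coords)
    tumor_idx = np.flatnonzero(labels == "tumor")
    _, nbrs = tree.query(coords[tumor_idx], k=11)          # self + 10 neighbours
    mix = [(labels[n[1:]] == "lymphocyte").mean() for n in nbrs]
    print("mean lymphocyte fraction around tumor cells:", float(np.mean(mix)))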
Collapse
Affiliation(s)
- Shidan Wang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX
| | - Tao Wang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX; Center for the Genetics of Host Defense, University of Texas Southwestern Medical Center, Dallas, TX
| | - Lin Yang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX; Department of Pathology, National Cancer Center/Cancer Hospital, Chinese Academy of Medical Sciences (CHCAMS), China
| | - Donghan M Yang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX
| | - Junya Fujimoto
- Department of Translational Molecular Pathology, University of Texas MD Anderson Cancer Center, Houston, TX
| | - Faliu Yi
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX
| | - Xin Luo
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX
| | - Yikun Yang
- Department of Thoracic Surgery, National Cancer Center/Cancer Hospital, Chinese Academy of Medical Sciences (CHCAMS), China
| | - Bo Yao
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX
| | - ShinYi Lin
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX
| | - Cesar Moran
- Department of Pathology, Division of Pathology/Lab Medicine, The University of Texas MD Anderson Cancer Center, Houston, TX
| | - Neda Kalhor
- Department of Pathology, Division of Pathology/Lab Medicine, The University of Texas MD Anderson Cancer Center, Houston, TX
| | - Annikka Weissferdt
- Department of Pathology, Division of Pathology/Lab Medicine, The University of Texas MD Anderson Cancer Center, Houston, TX
| | - John Minna
- Hamon Center for Therapeutic Oncology Research, Department of Internal Medicine and Department of Pharmacology, University of Texas Southwestern Medical Center, Dallas, TX; Harold C. Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, TX
| | - Yang Xie
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX; Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX; Harold C. Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, TX
| | - Ignacio I Wistuba
- Department of Translational Molecular Pathology, University of Texas MD Anderson Cancer Center, Houston, TX
| | - Yousheng Mao
- Department of Thoracic Surgery, National Cancer Center/Cancer Hospital, Chinese Academy of Medical Sciences (CHCAMS), China
| | - Guanghua Xiao
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX; Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX; Harold C. Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, TX.
| |
Collapse
|
349
|
Sena P, Fioresi R, Faglioni F, Losi L, Faglioni G, Roncucci L. Deep learning techniques for detecting preneoplastic and neoplastic lesions in human colorectal histological images. Oncol Lett 2019; 18:6101-6107. [PMID: 31788084 PMCID: PMC6865164 DOI: 10.3892/ol.2019.10928] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2019] [Accepted: 08/30/2019] [Indexed: 12/12/2022] Open
Abstract
Trained pathologists base colorectal cancer identification on the visual interpretation of microscope images. However, image labeling is not always straightforward, and this repetitive task is prone to mistakes due to human distraction. Significant efforts are underway to develop informative tools to assist pathologists and decrease the burden and frequency of errors. The present study proposes a deep learning approach to recognize four different stages of cancerous tissue development: normal mucosa, early preneoplastic lesion, adenoma, and cancer. A dataset of human colon tissue images collected and labeled over a 10-year period by a team of pathologists was partitioned into three sets, used to train, validate, and test the neural network, which comprises several convolutional and a few linear layers. The approach used in the present study is 'direct': it labels raw images and bypasses the segmentation step. An overall accuracy of >95% was achieved, with the majority of mislabeled images assigned to an adjacent category. Tests on an external dataset with a different resolution yielded accuracies >80%. The present study demonstrates that the neural network, when properly trained, can provide fast, accurate, and reproducible labeling for colon cancer images, with the potential to significantly improve the quality and speed of medical diagnoses.
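A minimal sketch of the 'direct' architecture described above: a few convolutional blocks followed by linear layers mapping a raw image to one of the four stages (layer sizes are illustrative, not the authors' network):

    # Hedged sketch: raw image in, one of four tissue stages out, no
    # segmentation step in between.
    import torch
    import torch.nn as nn

    classes = ["normal mucosa", "preneoplastic lesion", "adenoma", "cancer"]
    net = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(), nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, len(classes)))

    logits = net(torch.randn(1, 3, 224, 224))   # one fake RGB image
    print(classes[int(logits.argmax())])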
Collapse
Affiliation(s)
- Paola Sena
- Department of Biomedical, Metabolic and Neurosciences, University of Modena and Reggio Emilia, I-41125 Modena, Italy
| | - Rita Fioresi
- Department of Mathematics, University of Bologna, I-40126 Bologna, Italy
| | - Francesco Faglioni
- Department of Chemistry and Geology, University of Modena and Reggio Emilia, I-41125 Modena, Italy
| | - Lorena Losi
- Department of Life Sciences, University of Modena and Reggio Emilia, I-41125 Modena, Italy
| | | | - Luca Roncucci
- Department of Diagnostic and Clinical Medicine, and Public Health, University of Modena and Reggio Emilia, I-41125 Modena, Italy
| |
Collapse
|
350
|
Arefan D, Mohamed AA, Berg WA, Zuley ML, Sumkin JH, Wu S. Deep learning modeling using normal mammograms for predicting breast cancer risk. Med Phys 2019; 47:110-118. [PMID: 31667873 DOI: 10.1002/mp.13886] [Citation(s) in RCA: 46] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2018] [Revised: 08/30/2019] [Accepted: 10/16/2019] [Indexed: 02/06/2023] Open
Abstract
PURPOSE To investigate two deep learning-based modeling schemes for predicting short-term risk of developing breast cancer using prior normal screening digital mammograms in a case-control setting. METHODS We conducted a retrospective Institutional Review Board-approved study on a case-control cohort of 226 patients (including 113 women diagnosed with breast cancer and 113 controls) who underwent general population breast cancer screening. For each patient, a prior normal (i.e., with negative or benign findings) digital mammogram examination [including two images: a mediolateral oblique (MLO) view and a craniocaudal (CC) view] was collected. Thus, a total of 452 normal images (226 MLO view images and 226 CC view images) of this case-control cohort were analyzed to predict the outcome, i.e., developing breast cancer (cancer cases) or remaining breast cancer-free (controls) within the follow-up period. We implemented an end-to-end deep learning model and a GoogLeNet-LDA model and compared their performance in several experimental settings, using the two mammographic views and inputting two different subregions of the images to the models. The proposed models were also compared to logistic regression modeling of mammographic breast density. Area under the receiver operating characteristic curve (AUC) was used as the model performance metric. RESULTS The highest AUC was 0.73 [95% Confidence Interval (CI): 0.68-0.78; GoogLeNet-LDA model on CC view] when using the whole breast and 0.72 (95% CI: 0.67-0.76; GoogLeNet-LDA model on MLO + CC view) when using the dense tissue as the model input. The GoogLeNet-LDA model significantly (all P < 0.05) outperformed the end-to-end GoogLeNet model in all experiments. The CC view was consistently more predictive than the MLO view in both deep learning models, regardless of the input subregions. Both models exhibited performance superior to percent breast density (AUC = 0.54; 95% CI: 0.49-0.59). CONCLUSIONS The proposed deep learning modeling approach can predict short-term breast cancer risk using normal screening mammogram images. Larger studies are needed to further reveal the promise of deep learning in enhancing imaging-based breast cancer risk assessment.
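A minimal sketch of the GoogLeNet-LDA scheme: a pretrained GoogLeNet serves as a fixed feature extractor and a linear discriminant analysis classifier is fitted on the pooled features (data and shapes below are placeholders, not the study cohort):

    # Hedged sketch: frozen GoogLeNet features + LDA classifier.
    import torch
    from torchvision import models
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    backbone = models.googlenet(weights="IMAGENET1K_V1")  # pretrained features
    backbone.fc = torch.nn.Identity()      # expose the 1024-D pooled features
    backbone.eval()

    @torch.no_grad()
    def features(batch):                   # batch: (N, 3, 224, 224) float tensor
        return backbone(batch).numpy()

    X_train = features(torch.randn(32, 3, 224, 224))   # stand-in mammogram crops
    y_train = (torch.rand(32) > 0.5).long().numpy()    # case vs. control labels
    lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
    print(lda.predict_proba(features(torch.randn(4, 3, 224, 224)))[:, 1])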
Collapse
Affiliation(s)
- Dooman Arefan
- Department of Radiology, University of Pittsburgh, School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA
| | - Aly A Mohamed
- Department of Radiology, University of Pittsburgh, School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA
| | - Wendie A Berg
- Department of Radiology, University of Pittsburgh, School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA; Magee-Womens Hospital of University of Pittsburgh Medical Center, 300 Halket St, Pittsburgh, PA, 15213, USA
| | - Margarita L Zuley
- Department of Radiology, University of Pittsburgh, School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA; Magee-Womens Hospital of University of Pittsburgh Medical Center, 300 Halket St, Pittsburgh, PA, 15213, USA
| | - Jules H Sumkin
- Department of Radiology, University of Pittsburgh, School of Medicine, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA; Magee-Womens Hospital of University of Pittsburgh Medical Center, 300 Halket St, Pittsburgh, PA, 15213, USA
| | - Shandong Wu
- Departments of Radiology, Biomedical Informatics, Bioengineering, and Intelligent Systems Program, University of Pittsburgh, 4200 Fifth Ave, Pittsburgh, PA, 15260, USA
| |
Collapse
|