1
Gao Y, Yang X, Li H, Ding DW. A knowledge-enhanced interpretable network for early recurrence prediction of hepatocellular carcinoma via multi-phase CT imaging. Int J Med Inform 2024;189:105509. [PMID: 38851131] [DOI: 10.1016/j.ijmedinf.2024.105509]
Abstract
BACKGROUND Accurately predicting early recurrence (ER) of hepatocellular carcinoma (HCC) can guide treatment decisions and improve survival. Computed tomography (CT) imaging, analyzed by deep learning (DL) models that incorporate domain knowledge, has been employed for this prediction. However, these DL models used late fusion, restricting the interaction between domain knowledge and images during feature extraction, thereby limiting prediction performance and compromising decision-making interpretability. METHODS We propose a novel Vision Transformer (ViT)-based DL network, referred to as Dual-Style ViT (DSViT), to augment the interaction between domain knowledge and images and the effective fusion among multi-phase CT images, improving both predictive performance and interpretability. We apply DSViT to develop pre-/post-operative models for predicting ER. Within DSViT, we propose an adaptive self-attention mechanism to balance the utilization of domain knowledge and images. Moreover, we present an attention-guided supervised learning module that balances the contributions of multi-phase CT images to the prediction, and a domain knowledge self-supervision module that enhances the fusion between domain knowledge and images, further improving predictive performance. Finally, we provide interpretability for the DSViT decision-making. RESULTS Experiments on our multi-phase data demonstrate that the DSViT models surpass existing models across multiple performance metrics and provide decision-making interpretability. Additional validation on a publicly available dataset underscores the generalizability of DSViT. CONCLUSIONS The proposed DSViT can significantly improve the performance and interpretability of ER prediction, thereby fortifying the trustworthiness of artificial intelligence tools for HCC ER prediction in clinical settings.
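The paper's DSViT architecture is not reproduced here, but the core idea it builds on, letting domain-knowledge tokens and image tokens interact during feature extraction rather than only at a late-fusion stage, can be illustrated with plain scaled dot-product self-attention over a joint token sequence. This is a generic ViT building block, not the authors' adaptive mechanism; the token shapes and weight matrices below are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_self_attention(image_tokens, knowledge_tokens, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a joint sequence of image
    and domain-knowledge tokens: every output token attends over both
    modalities, so they interact while features are being extracted."""
    tokens = np.concatenate([image_tokens, knowledge_tokens], axis=0)
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=-1)
    return attn @ V  # (n_image + n_knowledge, d) fused token features
```

Contrast this with late fusion, where image and knowledge features would be extracted by separate towers and only concatenated before the final classifier.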
Affiliation(s)
- Yu Gao
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China; Key Laboratory of Knowledge Automation for Industrial Processes, Ministry of Education, Beijing 100083, China
- Xue Yang
- First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan 450052, China; Department of Radiology, Beijing Youan Hospital, Capital Medical University, Beijing 100069, China
- Hongjun Li
- Department of Radiology, Beijing Youan Hospital, Capital Medical University, Beijing 100069, China
- Da-Wei Ding
- School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China; Key Laboratory of Knowledge Automation for Industrial Processes, Ministry of Education, Beijing 100083, China
2
He J, Bi X. Automatic classification of spinal osteosarcoma and giant cell tumor of bone using optimized DenseNet. J Bone Oncol 2024;46:100606. [PMID: 38778836] [PMCID: PMC11109027] [DOI: 10.1016/j.jbo.2024.100606]
Abstract
Objective This study aims to explore an optimized deep-learning model for automatically classifying spinal osteosarcoma and giant cell tumors. In particular, it aims to provide a reliable method for distinguishing between these challenging diagnoses in medical imaging. Methods This research employs an optimized DenseNet model with a self-attention mechanism to enhance feature extraction and reduce misclassification in differentiating spinal osteosarcoma and giant cell tumors. The model uses multi-scale feature map extraction for improved classification accuracy. The paper also examines the practical use of Gradient-weighted Class Activation Mapping (Grad-CAM) for medical image classification, specifically its application in diagnosing spinal osteosarcoma and giant cell tumors. The results demonstrate that incorporating Grad-CAM visualization techniques improved the performance of the deep learning model, yielding an overall accuracy of 85.61%. Grad-CAM visualizations of images for these conditions include class activation maps that highlight the tumor regions on which the model focuses during prediction. Results The model achieves an overall accuracy of 80% or higher, with sensitivity and specificity each exceeding 80%. The average area under the curve (AUC) is 0.814 for spinal osteosarcoma and 0.882 for giant cell tumors. The model significantly supports orthopedic physicians in developing treatment and care plans. Conclusion The DenseNet-based automatic classification model accurately distinguishes spinal osteosarcoma from giant cell tumors. This study contributes to medical image analysis, providing a valuable tool for clinicians in accurate diagnostic classification. Future efforts will focus on expanding the dataset and refining the algorithm to enhance the model's applicability in diverse clinical settings.
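The Grad-CAM step described above has a simple core: the weight of each feature-map channel is the global-average-pooled gradient of the class score with respect to that channel, and the heatmap is the ReLU of the weighted sum of feature maps. A minimal numpy sketch of that computation (array shapes are illustrative, not taken from the paper):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer.

    activations: (K, H, W) feature maps of the layer
    gradients:   (K, H, W) d(class score)/d(activations)
    """
    # Channel weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))                      # (K,)
    # Weighted sum over channels, then ReLU to keep only
    # positive evidence for the target class.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] so it can be overlaid on the input image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

The resulting (H, W) map is upsampled to the input resolution and overlaid on the image to show which regions drove the prediction.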
Affiliation(s)
- Jingteng He
- General Hospital of Northern Theatre Command, Shenyang, Liaoning 110016, China
- Xiaojun Bi
- General Hospital of Northern Theatre Command, Shenyang, Liaoning 110016, China
3
Park Y, Kim M, Ashraf M, Ko YS, Yi MY. MixPatch: A New Method for Training Histopathology Image Classifiers. Diagnostics (Basel) 2022;12:1493. [PMID: 35741303] [PMCID: PMC9221905] [DOI: 10.3390/diagnostics12061493]
Abstract
CNN-based image processing has been actively applied to histopathological analysis to detect and classify cancerous tumors automatically. However, CNN-based classifiers generally predict labels with overconfidence, which is a serious problem in the medical domain. The objective of this study is to propose a new training method, called MixPatch, designed to improve a CNN-based classifier by specifically addressing the prediction-uncertainty problem, and to examine its effectiveness in improving diagnostic performance in histopathological image analysis. MixPatch generates and uses a new sub-training dataset, consisting of mixed-patches and their predefined ground-truth labels, for every mini-batch. Mixed-patches are generated from small clean patches confirmed by pathologists, while their ground-truth labels are defined using a proportion-based soft labeling method. Our results on a large histopathological image dataset show that the proposed method performs better and alleviates overconfidence more effectively than any other method examined in the study. More specifically, our model achieved 97.06% accuracy, an increase of 1.6% to 12.18%, while achieving an expected calibration error of 0.76%, a decrease of 0.6% to 6.3%, over the other models. By specifically considering the mixed-region variation characteristics of histopathology images, MixPatch augments the extant mixed-image methods for medical image analysis, in which prediction uncertainty is a crucial issue. The proposed method provides a new way to systematically alleviate the overconfidence problem of CNN-based classifiers and improve their prediction accuracy, contributing toward more calibrated and reliable histopathology image analysis.
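The two ingredients named in the abstract, tiling clean patches into a mixed-patch and assigning it a proportion-based soft label, can be sketched directly. The grid size and patch shapes below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def make_mixed_patch(patches, grid=2):
    """Tile grid x grid clean patches (each H x W) into one mixed-patch."""
    rows = [np.concatenate(patches[i * grid:(i + 1) * grid], axis=1)
            for i in range(grid)]
    return np.concatenate(rows, axis=0)

def mixpatch_label(patch_labels, num_classes):
    """Proportion-based soft label: the fraction of constituent clean
    patches belonging to each class, instead of a one-hot hard label."""
    label = np.zeros(num_classes)
    for c in patch_labels:
        label[c] += 1.0
    return label / len(patch_labels)
```

Training against such soft targets with cross-entropy is what discourages the classifier from placing all probability mass on a single class for heterogeneous tissue.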
Affiliation(s)
- Youngjin Park
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Mujin Kim
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Murtaza Ashraf
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Young Sin Ko
- Pathology Center, Seegene Medical Foundation, Seoul 04805, Korea
- Mun Yong Yi
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Correspondence:
4
Zhu Y, Venugopalan J, Zhang Z, Chanani NK, Maher KO, Wang MD. Domain Adaptation Using Convolutional Autoencoder and Gradient Boosting for Adverse Events Prediction in the Intensive Care Unit. Front Artif Intell 2022;5:640926. [PMID: 35481281] [PMCID: PMC9036368] [DOI: 10.3389/frai.2022.640926]
Abstract
More than 5 million patients are admitted annually to intensive care units (ICUs) in the United States. The leading causes of mortality are cardiovascular failure, multi-organ failure, and sepsis. Data-driven techniques have been used to analyze patient data and predict adverse events, such as ICU mortality and ICU readmission. These models often make use of temporal or static features from a single ICU database to predict subsequent adverse events. To explore the potential of domain adaptation, we propose a method of data analysis using gradient boosting and a convolutional autoencoder (CAE) to predict significant adverse events in the ICU, such as ICU mortality and ICU readmission. We demonstrate our results from a retrospective data analysis using patient records from a publicly available database called Multi-parameter Intelligent Monitoring in Intensive Care-II (MIMIC-II) and a local database from Children's Healthcare of Atlanta (CHOA). We demonstrate that after adopting novel data imputation on patient ICU data, gradient boosting is effective in both the mortality prediction task and the ICU readmission prediction task. In addition, we use gradient boosting to identify top-ranking temporal and non-temporal features in both prediction tasks. We discuss the relationship between these features and the specific prediction task. Lastly, we indicate that CAE might not be effective in feature extraction on one dataset, but domain adaptation with CAE feature extraction across two datasets shows promising results.
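The gradient-boosting machinery behind predictors like these is compact enough to sketch from scratch: each round fits a weak learner to the negative gradient of the logistic loss (the residual y − p) and adds its scaled output to the running score. The toy below uses one-feature decision stumps on a 1-D input; it is a sketch of the general technique under these simplifying assumptions, not the authors' pipeline:

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-threshold split minimizing squared error on residuals."""
    best = None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    return best[1:]  # (threshold, left value, right value)

def gradient_boost(x, y, rounds=20, lr=0.5):
    """Binary classifier: boosted stumps fit to the negative gradient
    of the logistic loss at each round."""
    f, stumps = np.zeros(len(y)), []
    for _ in range(rounds):
        p = 1.0 / (1.0 + np.exp(-f))      # current probabilities
        residual = y - p                  # negative gradient of log-loss
        t, lv, rv = fit_stump(x, residual)
        f += lr * np.where(x <= t, lv, rv)
        stumps.append((t, lv, rv))
    def predict(xq):
        fq = sum(lr * np.where(xq <= t, lv, rv) for t, lv, rv in stumps)
        return (1.0 / (1.0 + np.exp(-fq)) > 0.5).astype(int)
    return predict
```

Production systems (e.g., XGBoost-style libraries) replace the stump with depth-limited trees over many features, but the residual-fitting loop is the same.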
Affiliation(s)
- Yuanda Zhu
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, United States
- Janani Venugopalan
- Biomedical Engineering Department, Georgia Institute of Technology, Emory University, Atlanta, GA, United States
- Zhenyu Zhang
- Biomedical Engineering Department, Georgia Institute of Technology, Atlanta, GA, United States
- Department of Biomedical Engineering, Peking University, Beijing, China
- Kevin O. Maher
- Pediatrics Department, Emory University, Atlanta, GA, United States
- May D. Wang
- Biomedical Engineering Department, Georgia Institute of Technology, Emory University, Atlanta, GA, United States
- *Correspondence: May D. Wang
5
Ursuleanu TF, Luca AR, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, Preda C, Grigorovici A. Deep Learning Application for Analyzing of Constituents and Their Correlations in the Interpretations of Medical Images. Diagnostics (Basel) 2021;11:1373. [PMID: 34441307] [PMCID: PMC8393354] [DOI: 10.3390/diagnostics11081373]
Abstract
The growing volume of medical data that must be interpreted and filtered for diagnostic and therapeutic purposes demands ever more of physicians' time and attention, which has encouraged the development of deep learning (DL) models as constructive and effective support tools. DL has developed exponentially in recent years, with a major impact on the interpretation of medical images. This has driven the development, diversification, and quality improvement of scientific data, the development of knowledge-construction methods, and the refinement of DL models used in medical applications. Existing research papers focus on describing, highlighting, or classifying a single constituent element of the DL models used in interpreting medical images, and do not provide a unified picture of the importance and impact of each constituent on model performance. The novelty of our paper lies primarily in its unitary treatment of the constituent elements of DL models, namely the data, the tools used by DL architectures, and specifically constructed combinations of DL architectures, highlighting their "key" features for completing tasks in current medical image interpretation applications. The use of the "key" characteristics specific to each constituent of DL models, and the correct determination of their correlations, may be the subject of future research aimed at increasing the performance of DL models in the interpretation of medical images.
Affiliation(s)
- Tudor Florin Ursuleanu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Department of Surgery I, Regional Institute of Oncology, 700483 Iasi, Romania
- Andreea Roxana Luca
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department Obstetrics and Gynecology, Integrated Ambulatory of Hospital “Sf. Spiridon”, 700106 Iasi, Romania
- Liliana Gheorghe
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Radiology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Roxana Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Stefan Iancu
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Maria Hlusneac
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Cristina Preda
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Endocrinology, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
- Alexandru Grigorovici
- Faculty of General Medicine, “Grigore T. Popa” University of Medicine and Pharmacy, 700115 Iasi, Romania
- Department of Surgery VI, “Sf. Spiridon” Hospital, 700111 Iasi, Romania
6
Pei L, Jones KA, Shboul ZA, Chen JY, Iftekharuddin KM. Deep Neural Network Analysis of Pathology Images With Integrated Molecular Data for Enhanced Glioma Classification and Grading. Front Oncol 2021;11:668694. [PMID: 34277415] [PMCID: PMC8282424] [DOI: 10.3389/fonc.2021.668694]
Abstract
Gliomas are primary brain tumors that originate from glial cells. Classification and grading of these tumors is critical to prognosis and treatment planning. The current criteria for glioma classification in the central nervous system (CNS) were introduced by the World Health Organization (WHO) in 2016. These criteria require the integration of histology with genomics. In 2017, the Consortium to Inform Molecular and Practical Approaches to CNS Tumor Taxonomy (cIMPACT-NOW) was established to provide up-to-date recommendations for CNS tumor classification, which the WHO is in turn expected to adopt in its upcoming edition. In this work, we propose a novel glioma analytical method that, for the first time in the literature, integrates a cellularity feature derived from the digital analysis of brain histopathology images with molecular features following the latest WHO criteria. We first propose a novel over-segmentation strategy for region-of-interest (ROI) selection in large histopathology whole slide images (WSIs). A Deep Neural Network (DNN)-based classification method then fuses molecular features with cellularity features to improve tumor classification performance. We evaluate the proposed method with 549 patient cases from The Cancer Genome Atlas (TCGA) dataset. The cross-validated classification accuracies are 93.81% for lower-grade glioma (LGG) versus high-grade glioma (HGG) using a regular DNN, and 73.95% for LGG II versus LGG III using a residual neural network (ResNet) DNN, respectively. Our experiments suggest that the type of deep learning has a significant impact on tumor subtype discrimination between LGG II and LGG III. These results outperform state-of-the-art methods in classifying LGG II vs. LGG III and offer competitive performance in distinguishing LGG vs. HGG in the literature. In addition, we also investigate molecular subtype classification using pathology images and cellularity information. Finally, for the first time in the literature, this work shows promise for cellularity quantification to predict brain tumor grading for LGGs with IDH mutations.
Affiliation(s)
- Linmin Pei
- Vision Lab, Department of Electrical & Computer Engineering, Old Dominion University, Norfolk, VA, United States
- Karra A. Jones
- Department of Pathology, University of Iowa Hospitals & Clinics, Iowa City, IA, United States
- Zeina A. Shboul
- Vision Lab, Department of Electrical & Computer Engineering, Old Dominion University, Norfolk, VA, United States
- James Y. Chen
- Department of Radiology, Division of Neuroradiology, San Diego VA Medical Center, La Jolla, CA, United States
- Department of Radiology, Division of Neuroradiology, UC San Diego Health System, San Diego, CA, United States
- Khan M. Iftekharuddin
- Vision Lab, Department of Electrical & Computer Engineering, Old Dominion University, Norfolk, VA, United States
7
Gao S, Duan H, An D, Yi X, Li J, Liao C. Knockdown of long non-coding RNA LINC00467 inhibits glioma cell progression via modulation of E2F3 targeted by miR-200a. Cell Cycle 2020;19:2040-2053. [PMID: 32684096] [PMCID: PMC7469466] [DOI: 10.1080/15384101.2020.1792127]
Abstract
Studies have found that LINC00467 is an important regulator of cancer. However, the function of LINC00467 in glioma cells is unclear. This study was therefore designed to explore the mechanism of action of LINC00467 in glioma cells. RT-qPCR was used to detect the expression of LINC00467 and miR-200a in glioma cell lines. MTT, EdU and Transwell assays and flow cytometry were used to assess the effects of LINC00467 and miR-200a on glioma cell proliferation, migration and apoptosis. Target-gene prediction and screening, together with luciferase reporter assays, were used to validate the downstream target genes of LINC00467 and miR-200a. Western blotting was used to detect the protein expression of E2F3. Tumor changes were assessed in vivo in nude mice. LINC00467 was up-regulated in glioma cells. Knockdown of LINC00467 inhibited the viability, migration and invasion of glioma cells. In glioma cells, miR-200a was significantly reduced, while E2F3 was significantly elevated. LINC00467 negatively regulated the expression of miR-200a in gliomas, while miR-200a negatively regulated the expression of E2F3. LINC00467 promoted the development of glioma by inhibiting miR-200a and promoting E2F3 expression. LINC00467 may be a potential therapeutic target for gliomas.
Affiliation(s)
- Shuzi Gao
- Department of Cerebrovascular Diseases, Zhuhai People’s Hospital (Zhuhai Hospital Affiliated with Jinan University), Zhuhai City, Guangdong Province, P.R. China
- Haixia Duan
- Department of Ophthalmology, Zhuhai Hospital of Integrated Traditional Chinese and Western Medicine, Zhuhai City, Guangdong Province, P.R. China
- Dezhu An
- Department of Neurosurgery, Zhuhai People’s Hospital (Zhuhai Hospital Affiliated with Jinan University), Zhuhai City, Guangdong Province, P.R. China
- Xinfeng Yi
- Department of Neurosurgery, Zhuhai People’s Hospital (Zhuhai Hospital Affiliated with Jinan University), Zhuhai City, Guangdong Province, P.R. China
- Jiayan Li
- Department of Neurosurgery, Zhuhai People’s Hospital (Zhuhai Hospital Affiliated with Jinan University), Zhuhai City, Guangdong Province, P.R. China
- Changchun Liao
- Department of Neurosurgery, Zhuhai People’s Hospital (Zhuhai Hospital Affiliated with Jinan University), Zhuhai City, Guangdong Province, P.R. China
8
Hou L, Gupta R, Van Arnam JS, Zhang Y, Sivalenka K, Samaras D, Kurc TM, Saltz JH. Dataset of segmented nuclei in hematoxylin and eosin stained histopathology images of ten cancer types. Sci Data 2020;7:185. [PMID: 32561748] [PMCID: PMC7305328] [DOI: 10.1038/s41597-020-0528-1]
Abstract
The distribution and appearance of nuclei are essential markers for the diagnosis and study of cancer. Despite the importance of nuclear morphology, there is a lack of large-scale, accurate, publicly accessible nucleus segmentation data. To address this, we developed an analysis pipeline, with a quality control process, that segments nuclei in whole slide tissue images from multiple cancer types. We have generated nucleus segmentation results for 5,060 whole slide tissue images from 10 cancer types in The Cancer Genome Atlas (TCGA). One key component of our work is a multi-level quality control process (WSI level and image-patch level) to evaluate the quality of our segmentation results. The image patch-level quality control used manual segmentation ground-truth data from 1,356 sampled image patches. The datasets we publish in this work consist of roughly 5 billion quality-controlled nuclei from more than 5,060 TCGA WSIs spanning 10 cancer types, and 1,356 manually segmented TCGA image patches from the same 10 cancer types plus an additional 4 cancer types.
Affiliation(s)
- Le Hou
- Computer Science Department, 203C New Computer Science Building, Stony Brook University, Stony Brook, NY, 11794, USA
- Rajarsi Gupta
- Biomedical Informatics Department, HSC L3-045, Stony Brook Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- John S Van Arnam
- Biomedical Informatics Department, HSC L3-045, Stony Brook Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- Yuwei Zhang
- Biomedical Informatics Department, HSC L3-045, Stony Brook Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- Kaustubh Sivalenka
- Computer Science Department, 203C New Computer Science Building, Stony Brook University, Stony Brook, NY, 11794, USA
- Dimitris Samaras
- Computer Science Department, 203C New Computer Science Building, Stony Brook University, Stony Brook, NY, 11794, USA
- Tahsin M Kurc
- Biomedical Informatics Department, HSC L3-045, Stony Brook Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
- Joel H Saltz
- Biomedical Informatics Department, HSC L3-045, Stony Brook Medicine, Stony Brook University, Stony Brook, NY, 11794, USA
9
Sun J, Tárnok A, Su X. Deep Learning-Based Single-Cell Optical Image Studies. Cytometry A 2020;97:226-240. [PMID: 31981309] [DOI: 10.1002/cyto.a.23973]
Abstract
Optical imaging technology, with its high sensitivity and cost-effectiveness, has greatly promoted nondestructive single-cell studies. Complex cellular image analysis tasks such as three-dimensional reconstruction call for machine-learning technology in cell optical image research. With the rapid development of high-throughput imaging flow cytometry, large volumes of cell optical image data are obtained that may require machine learning for analysis. In recent years, deep learning has been prevalent in machine learning for large-scale image processing and analysis, which brings a new dawn for single-cell optical image studies with an explosive growth of data availability. Popular deep learning techniques offer new ideas for multimodal and multitask single-cell optical image research. This article provides an overview of the basics of deep learning and its applications in single-cell optical image studies. We explore the feasibility of applying deep learning techniques to single-cell optical image analysis, reviewing popular techniques such as transfer learning, multimodal learning, multitask learning, and end-to-end learning. Image preprocessing and deep learning model training methods are then summarized. Applications of deep learning in single-cell optical image studies are reviewed, including image segmentation, super-resolution image reconstruction, cell tracking, cell counting, cross-modal image reconstruction, and the design and control of cell imaging systems. In addition, deep learning in popular single-cell optical imaging techniques such as label-free cell optical imaging, high-content screening, and high-throughput optical imaging cytometry is also discussed. Finally, the perspectives of deep learning technology for single-cell optical image analysis are outlined. © 2020 International Society for Advancement of Cytometry.
Affiliation(s)
- Jing Sun
- Institute of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, 250061, China
- Attila Tárnok
- Department of Therapy Validation, Fraunhofer Institute for Cell Therapy and Immunology (IZI), Leipzig, Germany
- Institute for Medical Informatics, Statistics and Epidemiology (IMISE), University of Leipzig, Leipzig, Germany
- Xuantao Su
- Institute of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, 250061, China
10
Hou L, Agarwal A, Samaras D, Kurc TM, Gupta RR, Saltz JH. Robust Histopathology Image Analysis: To Label or to Synthesize? Proc IEEE Conf Comput Vis Pattern Recognit (CVPR) 2019:8533-8542. [PMID: 34025103] [PMCID: PMC8139403] [DOI: 10.1109/cvpr.2019.00873]
Abstract
Detection, segmentation and classification of nuclei are fundamental analysis operations in digital pathology. Existing state-of-the-art approaches demand extensive amounts of supervised training data from pathologists and may still perform poorly on images from unseen tissue types. We propose an unsupervised approach for histopathology image segmentation that synthesizes heterogeneous sets of training image patches for every tissue type. Although our synthetic patches are not always of high quality, we harness the motley crew of generated samples through a generally applicable importance sampling method. This proposed approach, for the first time, re-weights the training loss over synthetic data so that the ideal (unbiased) generalization loss over the true data distribution is minimized. This enables us to use a random polygon generator to synthesize approximate cellular structures (i.e., nuclear masks) for which no real examples are given in many tissue types, and for which GAN-based methods are therefore not suited. In addition, we propose a hybrid synthesis pipeline that utilizes textures in real histopathology patches and GAN models to tackle heterogeneity in tissue textures. Compared with existing state-of-the-art supervised models, our approach generalizes significantly better to cancer types without training data. Even in cancer types with training data, our approach achieves the same performance without the supervision cost. We release code and segmentation results on over 5000 Whole Slide Images (WSI) in The Cancer Genome Atlas (TCGA) repository, a dataset that would be orders of magnitude larger than what is available today.
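The importance-sampling re-weighting described above reduces, in its simplest form, to weighting each synthetic sample's loss by the density ratio p_real/p_synth, so that training on samples drawn from the synthetic distribution estimates the expected loss under the real-data distribution. A minimal sketch under the simplifying assumption that both densities are given (in practice they must be estimated):

```python
import numpy as np

def reweighted_loss(losses, density_real, density_synth):
    """Importance-sampling estimate of the loss under the real
    distribution p, using samples drawn from the synthetic
    distribution q:  E_p[loss] ~ (1/N) * sum_i (p(x_i)/q(x_i)) * loss_i."""
    w = density_real / density_synth   # importance weights p/q
    return float(np.mean(w * losses))
```

When p = q the weights are all 1 and this collapses to the ordinary mean loss; the further the synthetic distribution drifts from the real one, the more the weights correct for it (at the cost of higher variance).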
Affiliation(s)
- Ayush Agarwal
- Stony Brook University
- Stanford University, California
11
Hou L, Nguyen V, Kanevsky AB, Samaras D, Kurc TM, Zhao T, Gupta RR, Gao Y, Chen W, Foran D, Saltz JH. Sparse Autoencoder for Unsupervised Nucleus Detection and Representation in Histopathology Images. Pattern Recognition 2019;86:188-200. [PMID: 30631215] [PMCID: PMC6322841] [DOI: 10.1016/j.patcog.2018.09.007]
Abstract
We propose a sparse Convolutional Autoencoder (CAE) for simultaneous nucleus detection and feature extraction in histopathology tissue images. Our CAE detects nuclei in tissue image patches and encodes them into sparse feature maps that capture both the location and appearance of nuclei. A primary contribution of our work is the development of an unsupervised detection network that exploits the characteristics of histopathology image patches. The pretrained nucleus detection and feature extraction modules in our CAE can be fine-tuned for supervised learning in an end-to-end fashion. We evaluate our method on four datasets and achieve state-of-the-art results. In addition, we are able to achieve comparable performance with only 5% of the fully-supervised annotation cost.
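The load-bearing idea in a sparse feature map is that only a few strong responses survive, and those surviving peaks double as detections (location) while their values carry appearance information. The paper's CAE learns this end-to-end; the sketch below only illustrates the sparsity/detection reading of a feature map with a crude top-k threshold, which is an assumption of this illustration, not the authors' trained encoder:

```python
import numpy as np

def sparsify(feature_map, k):
    """Keep only the k strongest responses in a feature map, zeroing
    the rest -- a stand-in for a learned sparsity constraint."""
    flat = feature_map.ravel().copy()
    if k < flat.size:
        thresh = np.partition(np.abs(flat), -k)[-k]  # k-th largest magnitude
        flat = np.where(np.abs(flat) >= thresh, flat, 0.0)
    return flat.reshape(feature_map.shape)

def detections(sparse_map):
    """Read nucleus candidates off the sparse map: the coordinates of
    the nonzero responses."""
    return list(zip(*np.nonzero(sparse_map)))
```

In the actual CAE the decoder reconstructs the input from these few responses, which is what forces each surviving peak to summarize one nucleus.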
Collapse
Affiliation(s)
- Le Hou
- Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Vu Nguyen
- Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Ariel B Kanevsky
- Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Montreal Institute for Learning Algorithms, University of Montreal, Montreal, Canada
- Dimitris Samaras
- Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Tahsin M Kurc
- Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Dept. of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Tianhao Zhao
- Dept. of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Dept. of Pathology, Stony Brook University Medical Center, Stony Brook, NY, USA
- Rajarsi R Gupta
- Dept. of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Dept. of Pathology, Stony Brook University Medical Center, Stony Brook, NY, USA
- Yi Gao
- School of Biomedical Engineering, Health Science Center, Shenzhen University, China
- Wenjin Chen
- Center for Biomedical Imaging & Informatics, Rutgers, the State University of New Jersey, New Brunswick, NJ, USA
- Rutgers Cancer Institute of New Jersey, Rutgers, the State University of New Jersey, NJ, USA
- David Foran
- Center for Biomedical Imaging & Informatics, Rutgers, the State University of New Jersey, New Brunswick, NJ, USA
- Rutgers Cancer Institute of New Jersey, Rutgers, the State University of New Jersey, NJ, USA
- Div. of Medical Informatics, Rutgers-Robert Wood Johnson Medical School, Piscataway Township, NJ, USA
- Joel H Saltz
- Dept. of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Dept. of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Dept. of Pathology, Stony Brook University Medical Center, Stony Brook, NY, USA
- Cancer Center, Stony Brook University Hospital, Stony Brook, NY, USA
12
Pantanowitz L, Sharma A, Carter AB, Kurc T, Sussman A, Saltz J. Twenty Years of Digital Pathology: An Overview of the Road Travelled, What is on the Horizon, and the Emergence of Vendor-Neutral Archives. J Pathol Inform 2018; 9:40. [PMID: 30607307 PMCID: PMC6289005 DOI: 10.4103/jpi.jpi_69_18] [Citation(s) in RCA: 117] [Impact Index Per Article: 16.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2018] [Accepted: 10/28/2018] [Indexed: 12/13/2022] Open
Abstract
Almost 20 years have passed since the commercial introduction of whole-slide imaging (WSI) scanners. During this time, the creation of various WSI devices with the ability to digitize an entire glass slide has transformed the field of pathology. Parallel advances in computational technology and storage have permitted rapid processing of large-scale WSI datasets. This article provides an overview of important past and present efforts related to WSI. An account of how the virtual microscope evolved from the need to visualize and manage satellite data for earth science applications is provided. The article also discusses important milestones, beginning from the first WSI scanner designed by Bacus to the Food and Drug Administration approval of the first digital pathology system for primary diagnosis in surgical pathology. As pathology laboratories commit to going fully digital, the need has emerged to include WSIs in an enterprise-level vendor-neutral archive (VNA). The different types of VNAs available are reviewed, as well as how best to implement them and how pathology can benefit from participating in this effort. Differences between traditional image algorithms that extract pixel-, object-, and semantic-level features versus deep learning methods are highlighted. The need for large-scale data management, analysis, and visualization in computational pathology is also addressed.
Affiliation(s)
- Liron Pantanowitz
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Ashish Sharma
- Department of Biomedical Informatics, Emory University, GA, USA
- Alexis B. Carter
- Department of Pathology and Laboratory Medicine, Children's Healthcare of Atlanta, GA, USA
- Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Alan Sussman
- Department of Computer Science, University of Maryland, College Park, MD, USA
- Joel Saltz
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA