251
Mormont R, Geurts P, Maree R. Multi-Task Pre-Training of Deep Neural Networks for Digital Pathology. IEEE J Biomed Health Inform 2021;25:412-421. PMID: 32386169. DOI: 10.1109/jbhi.2020.2992878.
Abstract
In this work, we investigate multi-task learning as a way of pre-training models for classification tasks in digital pathology. It is motivated by the fact that many small and medium-sized datasets have been released by the community over the years, whereas no large-scale dataset comparable to ImageNet exists in the domain. We first assemble and transform many digital pathology datasets into a pool of 22 classification tasks and almost 900k images. Then, we propose a simple architecture and training scheme for creating a transferable model, together with a robust evaluation and selection protocol for assessing our method. Depending on the target task, we show that our models used as feature extractors either improve significantly over ImageNet pre-trained models or provide comparable performance. Fine-tuning improves performance over feature extraction and compensates for the lack of specificity of ImageNet features, as both pre-training sources then yield comparable performance.
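The multi-task pre-training scheme summarized above, a single shared feature extractor feeding one classification head per task, can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' architecture: the task names, dimensions, and single linear "backbone" are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM, HIDDEN = 128, 256
TASKS = {"task_a": 4, "task_b": 2, "task_c": 9}   # hypothetical class counts

# Shared "backbone": a single linear map standing in for a deep network.
W_shared = rng.normal(scale=0.01, size=(FEAT_DIM, HIDDEN))

# One independent linear head per pre-training task.
heads = {name: rng.normal(scale=0.01, size=(HIDDEN, n_cls))
         for name, n_cls in TASKS.items()}

def forward(x, task):
    """Logits for one task, computed from the shared representation."""
    h = np.maximum(x @ W_shared, 0.0)   # shared features (ReLU)
    return h @ heads[task]              # task-specific logits

x = rng.normal(size=(8, FEAT_DIM))      # a batch of 8 images as flat vectors
print({t: forward(x, t).shape for t in TASKS})
```

After multi-task training, only the shared part would be kept and transferred to the target task as a feature extractor or fine-tuning initialization.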
252
Li Z, Zhao W, Shi F, Qi L, Xie X, Wei Y, Ding Z, Gao Y, Wu S, Liu J, Shi Y, Shen D. A novel multiple instance learning framework for COVID-19 severity assessment via data augmentation and self-supervised learning. Med Image Anal 2021;69:101978. PMID: 33588121. PMCID: PMC7857016. DOI: 10.1016/j.media.2021.101978.
Abstract
Fast and accurate assessment of the severity of COVID-19 is an essential problem now that millions of people around the world are suffering from the pandemic. Chest CT is currently regarded as a popular and informative imaging tool for COVID-19 diagnosis. However, we observe two issues, weak annotation and insufficient data, that may obstruct automatic COVID-19 severity assessment with CT images. To address these challenges, we propose a novel three-component method: 1) a deep multiple instance learning component with instance-level attention to jointly classify the bag and weigh the instances, 2) a bag-level data augmentation component to generate virtual bags by reorganizing high-confidence instances, and 3) a self-supervised pretext component to aid the learning process. We systematically evaluated our method on the CT images of 229 COVID-19 cases, including 50 severe and 179 non-severe cases. Our method obtained an average accuracy of 95.8%, with 93.6% sensitivity and 96.4% specificity, outperforming previous works.
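The first component above, instance-level attention that jointly weighs instances and pools them into a bag representation, follows the standard attention-based MIL pooling pattern. A minimal NumPy sketch, with illustrative dimensions and random parameters rather than the paper's actual network:

```python
import numpy as np

rng = np.random.default_rng(1)

def attention_mil_pool(instances, V, w):
    """Pool a bag of instance features into one bag embedding.

    A small attention network scores each instance (e.g. each CT-slice
    feature), the scores are softmax-normalized, and the bag embedding
    is the attention-weighted sum of the instances.
    """
    scores = np.tanh(instances @ V) @ w          # one score per instance
    scores -= scores.max()                       # numerical stability
    a = np.exp(scores) / np.exp(scores).sum()    # softmax attention weights
    return a @ instances, a

d, h, n = 32, 16, 10                  # feature dim, attention dim, bag size
V = rng.normal(scale=0.1, size=(d, h))
w = rng.normal(scale=0.1, size=h)
bag = rng.normal(size=(n, d))         # 10 instance features form one bag

z, a = attention_mil_pool(bag, V, w)  # bag embedding + instance weights
```

The attention weights `a` also indicate which instances drove the bag-level decision, which is what makes this pooling useful under weak annotation.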
Affiliation(s)
- Zekun Li: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210046, China; National Institute of Healthcare Data Science, Nanjing University, Nanjing, 210046, China
- Wei Zhao: Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha, 410011, China
- Feng Shi: Shanghai United Imaging Intelligence Co., Ltd., Shanghai, 201807, China
- Lei Qi: School of Computer Science and Artificial Intelligence, Southeast University, Nanjing, 210018, China
- Xingzhi Xie: Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha, 410011, China
- Ying Wei: Shanghai United Imaging Intelligence Co., Ltd., Shanghai, 201807, China
- Zhongxiang Ding: Department of Radiology, Hangzhou First People's Hospital, Zhejiang University School of Medicine, Hangzhou, 310006, China
- Yang Gao: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210046, China; National Institute of Healthcare Data Science, Nanjing University, Nanjing, 210046, China
- Shangjie Wu: Department of Pulmonary and Critical Care Medicine, The Second Xiangya Hospital, Central South University, Changsha, 410011, China
- Jun Liu: Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha, 410011, China; Department of Radiology Quality Control Center, Hunan Province, Changsha, 410011, China
- Yinghuan Shi: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, 210046, China; National Institute of Healthcare Data Science, Nanjing University, Nanjing, 210046, China
- Dinggang Shen: Shanghai United Imaging Intelligence Co., Ltd., Shanghai, 201807, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
253
He S, Minn KT, Solnica-Krezel L, Anastasio MA, Li H. Deeply-supervised density regression for automatic cell counting in microscopy images. Med Image Anal 2021;68:101892. PMID: 33285481. PMCID: PMC7856299. DOI: 10.1016/j.media.2020.101892.
Abstract
Accurately counting cells in microscopy images is required in many medical diagnoses and biological studies. The task is tedious, time-consuming, and prone to subjective errors. However, designing automatic counting methods remains challenging due to low image contrast, complex backgrounds, large variance in cell shapes and counts, and significant cell occlusions in two-dimensional microscopy images. In this study, we propose a new density regression-based method for automatically counting cells in microscopy images. The proposed method offers two innovations over other state-of-the-art density regression-based methods. First, the density regression model (DRM) is designed as a concatenated fully convolutional regression network (C-FCRN) that employs multi-scale image features to estimate cell density maps from given images. Second, auxiliary convolutional neural networks (AuxCNNs) assist in training the intermediate layers of the designed C-FCRN to improve DRM performance on unseen datasets. Experiments on four datasets demonstrate the superior performance of the proposed method.
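Density regression methods of this kind train a network to predict a per-pixel density map whose integral equals the cell count. Building such a ground-truth map from dot annotations, with one unit-mass Gaussian per annotated cell, can be sketched as follows (the cell positions and kernel width here are made up for illustration):

```python
import numpy as np

def density_map(points, shape, sigma=2.0):
    """Place a unit-mass Gaussian at each annotated cell centre.

    The resulting map sums to the number of cells, which is exactly
    what a density regression model is trained to reproduce.
    """
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    dm = np.zeros(shape, dtype=float)
    for (cy, cx) in points:
        g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        dm += g / g.sum()                 # normalise to unit mass per cell
    return dm

cells = [(10, 10), (20, 35), (40, 12), (44, 44)]
dm = density_map(cells, (64, 64))
print(round(dm.sum(), 4))   # estimated count: 4.0
```

At inference time the predicted map is simply summed to obtain the count, so no explicit cell detection or segmentation is needed.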
Affiliation(s)
- Shenghua He: Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, MO 63110, USA
- Kyaw Thu Minn: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63110, USA; Department of Developmental Biology, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA
- Lilianna Solnica-Krezel: Department of Developmental Biology, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA; Center of Regenerative Medicine, Washington University School of Medicine in St. Louis, St. Louis, MO 63110, USA
- Mark A Anastasio: Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
- Hua Li: Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; Cancer Center at Illinois, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; Carle Cancer Center, Carle Foundation Hospital, Urbana, IL 61801, USA
254
Sub-classification of invasive and non-invasive cancer from magnification independent histopathological images using hybrid neural networks. Evol Intell 2021. DOI: 10.1007/s12065-021-00564-3.
255
Masud M, Sikder N, Nahid AA, Bairagi AK, AlZain MA. A Machine Learning Approach to Diagnosing Lung and Colon Cancer Using a Deep Learning-Based Classification Framework. Sensors (Basel) 2021;21:748. PMID: 33499364. PMCID: PMC7865416. DOI: 10.3390/s21030748.
Abstract
The field of medicine and healthcare has attained revolutionary advancements in the last forty years. Within this period, the actual causes of numerous diseases were unveiled, novel diagnostic methods were designed, and new medicines were developed. Even after all these achievements, diseases like cancer continue to haunt us, since we are still vulnerable to them. Cancer is the second leading cause of death globally; about one in every six deaths is due to it. Among the many types of cancer, the lung and colon variants are the most common and deadliest. Together, they account for more than 25% of all cancer cases. Identifying the disease at an early stage, however, significantly improves the chances of survival. Cancer diagnosis can be automated using the potential of Artificial Intelligence (AI), which allows us to assess more cases in less time and at lower cost. With the help of modern Deep Learning (DL) and Digital Image Processing (DIP) techniques, this paper describes a classification framework to differentiate among five types of lung and colon tissues (two benign and three malignant) by analyzing their histopathological images. The acquired results show that the proposed framework can identify cancer tissues with up to 96.33% accuracy. Implementation of this model will help medical professionals develop an automatic and reliable system capable of identifying various types of lung and colon cancers.
Affiliation(s)
- Mehedi Masud (corresponding author): Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Niloy Sikder: Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
- Abdullah-Al Nahid: Electronics and Communication Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
- Anupam Kumar Bairagi: Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
- Mohammed A. AlZain: Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
256
Qayyum A, Qadir J, Bilal M, Al-Fuqaha A. Secure and Robust Machine Learning for Healthcare: A Survey. IEEE Rev Biomed Eng 2021;14:156-180. PMID: 32746371. DOI: 10.1109/rbme.2020.3013489.
Abstract
Recent years have witnessed widespread adoption of machine learning (ML)/deep learning (DL) techniques due to their superior performance in a variety of healthcare applications, ranging from the prediction of cardiac arrest from one-dimensional heart signals to computer-aided diagnosis (CADx) using multi-dimensional medical images. Notwithstanding this impressive performance, lingering doubts remain regarding the robustness of ML/DL in healthcare settings (traditionally considered quite challenging due to the myriad security and privacy issues involved), especially in light of recent results showing that ML/DL models are vulnerable to adversarial attacks. In this paper, we present an overview of various healthcare application areas that leverage such techniques from a security and privacy point of view, and present the associated challenges. In addition, we present potential methods to ensure secure and privacy-preserving ML for healthcare applications. Finally, we provide insight into the current research challenges and promising directions for future research.
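The adversarial attacks this survey discusses can be illustrated with the fast gradient sign method (FGSM) applied to a toy logistic-regression "model". This is a generic, illustrative sketch with made-up weights and data, unrelated to any specific system in the survey:

```python
import numpy as np

def logloss(x, y, w, b):
    """Logistic log-loss of a linear classifier on one example."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, eps):
    """Fast gradient sign method.

    Perturbs x by eps in the sign of the loss gradient, the canonical
    white-box adversarial attack referenced in robustness discussions.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w                 # d(logloss)/dx for this model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(2)
w, b = rng.normal(size=8), 0.0
x, y = rng.normal(size=8), 1.0

# The adversarial input stays within eps of x but incurs a higher loss.
x_adv = fgsm(x, y, w, b, eps=0.1)
```

For a linear model the loss increase is guaranteed; for deep networks the same one-step perturbation is merely very effective in practice.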
257
Zormpas-Petridis K, Noguera R, Ivankovic DK, Roxanis I, Jamin Y, Yuan Y. SuperHistopath: A Deep Learning Pipeline for Mapping Tumor Heterogeneity on Low-Resolution Whole-Slide Digital Histopathology Images. Front Oncol 2021;10:586292. PMID: 33552964. PMCID: PMC7855703. DOI: 10.3389/fonc.2020.586292.
Abstract
The high computational cost associated with digital pathology image analysis approaches is a challenge to their translation into the routine pathology clinic. Here, we propose a computationally efficient framework (SuperHistopath) designed to map global context features reflecting the rich morphological heterogeneity of tumors. SuperHistopath efficiently combines i) a segmentation approach using the simple linear iterative clustering (SLIC) superpixel algorithm, applied directly to whole-slide images at low resolution (5x magnification) to adhere to region boundaries and form homogeneous tissue-level spatial units, followed by ii) classification of superpixels using a convolutional neural network (CNN). To demonstrate how versatile SuperHistopath is in accomplishing histopathology tasks, we classified tumor tissue, stroma, necrosis, lymphocyte clusters, differentiating regions, fat, hemorrhage and normal tissue in 127 melanomas, 23 triple-negative breast cancers, and 73 samples from transgenic mouse models of high-risk childhood neuroblastoma, with high accuracy (98.8%, 93.1% and 98.3%, respectively). Furthermore, SuperHistopath enabled discovery of significant differences in tumor phenotype among neuroblastoma mouse models emulating genomic variants of high-risk disease, and stratification of melanoma patients (a high ratio of lymphocyte-to-tumor superpixels (p = 0.015) and a low stroma-to-tumor ratio (p = 0.028) were associated with a favorable prognosis). Finally, SuperHistopath is efficient for annotation of ground-truth datasets (as there is no need for boundary delineation), training, and application (~5 min to classify a whole-slide image and as little as ~30 min for network training). These attributes make SuperHistopath particularly attractive for research on rich datasets and could also facilitate its adoption in the clinic to accelerate the pathologist's workflow with the quantification of phenotypes and predictive/prognostic markers.
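The second stage described above, assigning a class to each superpixel and deriving slide-level ratios such as lymphocyte-to-tumor, reduces to a simple aggregation once superpixel labels and per-superpixel predictions exist. A toy sketch, where the label map and predictions are fabricated (real superpixels would come from SLIC and real predictions from the CNN):

```python
import numpy as np

# Hypothetical 4x6 superpixel label map (what SLIC would produce at 5x),
# and one class prediction per superpixel from the classification stage.
sp_labels = np.array([[0, 0, 1, 1, 2, 2],
                      [0, 0, 1, 1, 2, 2],
                      [3, 3, 4, 4, 5, 5],
                      [3, 3, 4, 4, 5, 5]])
CLASSES = ["tumor", "stroma", "lymphocytes"]
sp_class = {0: "tumor", 1: "tumor", 2: "stroma",
            3: "lymphocytes", 4: "tumor", 5: "stroma"}

# Paint the per-pixel class map from the superpixel-level predictions.
class_map = np.vectorize(lambda s: CLASSES.index(sp_class[s]))(sp_labels)

# Slide-level phenotype ratio of the kind used for patient stratification.
counts = {c: sum(v == c for v in sp_class.values()) for c in CLASSES}
lym_to_tumor = counts["lymphocytes"] / counts["tumor"]
print(counts, round(lym_to_tumor, 3))
```

Because superpixels follow tissue boundaries, a single prediction per superpixel yields a full-resolution phenotype map at a fraction of the usual patch-classification cost.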
Affiliation(s)
- Rosa Noguera: Department of Pathology, Medical School, University of Valencia-INCLIVA Biomedical Health Research Institute, Valencia, Spain; Low Prevalence Tumors, Centro de Investigación Biomédica en Red de Cáncer (CIBERONC), Instituto de Salud Carlos III, Madrid, Spain
- Ioannis Roxanis: Breast Cancer Now Toby Robins Research Centre, The Institute of Cancer Research, London, United Kingdom
- Yann Jamin: Division of Radiotherapy and Imaging, The Institute of Cancer Research, London, United Kingdom
- Yinyin Yuan: Division of Molecular Pathology, The Institute of Cancer Research, London, United Kingdom
258
Zhou C, Jin Y, Chen Y, Huang S, Huang R, Wang Y, Zhao Y, Chen Y, Guo L, Liao J. Histopathology classification and localization of colorectal cancer using global labels by weakly supervised deep learning. Comput Med Imaging Graph 2021;88:101861. PMID: 33497891. DOI: 10.1016/j.compmedimag.2021.101861.
Abstract
Colorectal cancer (CRC) is the second leading cause of cancer-related mortality worldwide. Histopathology image analysis (HIA) provides key information for its clinical diagnosis. Deep learning methods are now widely used to improve cancer classification and tumor-region localization in HIA. However, these efforts are both time-consuming and labor-intensive because of the manual annotation of tumor regions in whole slide images (WSIs). Furthermore, classical deep learning methods that analyze thousands of patches extracted from WSIs may lose the integrated information of the image. Herein, a novel method was developed that uses only global labels to achieve WSI classification and localization of carcinoma by combining features from different magnifications of WSIs. The model was trained and tested on 1346 colorectal cancer WSIs from The Cancer Genome Atlas (TCGA). Our method classified colorectal cancer with an accuracy of 94.6%, slightly outperforming most existing methods. Its cancerous-location probability maps were in good agreement with annotations from three individual expert pathologists. Independent tests on 50 newly collected colorectal cancer WSIs from hospitals produced 92.0% accuracy, with cancerous-location probability maps again in good agreement with the three pathologists. The results demonstrate that the method achieves WSI classification and localization using only global labels. This weakly supervised deep learning method is efficient in time and cost, delivering better performance than state-of-the-art methods.
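With only a global slide label available, patch-level predictions must be aggregated into a slide-level score. As an illustrative stand-in for the paper's multi-magnification fusion (not the authors' actual aggregation), a common weak-supervision heuristic is to average the top-k most suspicious patch probabilities; the patch vector itself doubles as a cancerous-location probability map:

```python
import numpy as np

def slide_probability(patch_probs, k=3):
    """Aggregate patch-level cancer probabilities into one slide score.

    Averaging the top-k most suspicious patches is robust to a few
    noisy patches while still firing when only a small tumor region
    exists; the full vector serves as a localization map.
    """
    top_k = np.sort(np.asarray(patch_probs))[-k:]
    return float(top_k.mean())

# Hypothetical per-patch probabilities for two slides.
benign_like = [0.05, 0.10, 0.08, 0.12, 0.07, 0.11]
tumor_like  = [0.05, 0.95, 0.90, 0.12, 0.88, 0.11]
print(slide_probability(benign_like), slide_probability(tumor_like))
```

Plain max-pooling (k=1) is the other common choice; it localizes sharply but is more sensitive to a single false-positive patch.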
Affiliation(s)
- Changjiang Zhou: School of Science, China Pharmaceutical University, Nanjing, China
- Yi Jin: School of Science, China Pharmaceutical University, Nanjing, China
- Yuzong Chen: School of Science, China Pharmaceutical University, Nanjing, China; Bioinformatics and Drug Design Group, Department of Pharmacy, National University of Singapore, Singapore
- Shan Huang: Department of Pathology, The First Affiliated Hospital of Soochow University, Soochow, China
- Rengpeng Huang: Department of Pathology, The First Affiliated Hospital of Soochow University, Soochow, China
- Yuhong Wang: Department of Pathology, The First Affiliated Hospital of Soochow University, Soochow, China
- Youcai Zhao: Department of Pathology, Nanjing First Hospital, Nanjing, China
- Yao Chen: School of Science, China Pharmaceutical University, Nanjing, China
- Lingchuan Guo: Department of Pathology, The First Affiliated Hospital of Soochow University, Soochow, China
- Jun Liao: School of Science, China Pharmaceutical University, Nanjing, China
259
Sarvamangala DR, Kulkarni RV. Convolutional neural networks in medical image understanding: a survey. Evol Intell 2021;15:1-22. PMID: 33425040. PMCID: PMC7778711. DOI: 10.1007/s12065-020-00540-3.
Abstract
Imaging techniques are used to capture anomalies of the human body. The captured images must be understood for diagnosis, prognosis and treatment planning of the anomalies. Medical image understanding is generally performed by skilled medical professionals. However, the scarce availability of human experts, together with fatigue and the approximate nature of manual assessment, limits the effectiveness of such image understanding. Convolutional neural networks (CNNs) are effective tools for image understanding and have outperformed human experts in many image understanding tasks. This article aims to provide a comprehensive survey of applications of CNNs in medical image understanding. The underlying objective is to motivate medical image understanding researchers to apply CNNs extensively in their research and diagnosis. A brief introduction to CNNs and a discussion of their various award-winning frameworks are presented. The major medical image understanding tasks, namely image classification, segmentation, localization and detection, are introduced. Applications of CNNs in medical image understanding of ailments of the brain, breast, lung and other organs are surveyed critically and comprehensively. A critical discussion of some of the challenges is also presented.
260
Salvi M, Acharya UR, Molinari F, Meiburger KM. The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis. Comput Biol Med 2021;128:104129. DOI: 10.1016/j.compbiomed.2020.104129.
261
Srinidhi CL, Ciga O, Martel AL. Deep neural network models for computational histopathology: A survey. Med Image Anal 2021;67:101813. PMID: 33049577. PMCID: PMC7725956. DOI: 10.1016/j.media.2020.101813.
Abstract
Histopathological images contain rich phenotypic information that can be used to monitor underlying mechanisms contributing to disease progression and patient survival outcomes. Recently, deep learning has become the mainstream methodological choice for analyzing and interpreting histology images. In this paper, we present a comprehensive review of state-of-the-art deep learning approaches that have been used in the context of histopathological image analysis. Drawing on a survey of over 130 papers, we review the field's progress through the lens of different machine learning strategies, such as supervised, weakly supervised, unsupervised, and transfer learning, and various sub-variants of these methods. We also provide an overview of deep learning-based survival models that are applicable to disease-specific prognosis tasks. Finally, we summarize several existing open datasets and highlight critical challenges and limitations of current deep learning approaches, along with possible avenues for future research.
Affiliation(s)
- Chetan L Srinidhi: Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada
- Ozan Ciga: Department of Medical Biophysics, University of Toronto, Canada
- Anne L Martel: Physical Sciences, Sunnybrook Research Institute, Toronto, Canada; Department of Medical Biophysics, University of Toronto, Canada
262
Xing F, Zhang X, Cornish TC. Artificial intelligence for pathology. Artif Intell Med 2021. DOI: 10.1016/b978-0-12-821259-2.00011-9.
263
Bouteldja N, Klinkhammer BM, Bülow RD, Droste P, Otten SW, Freifrau von Stillfried S, Moellmann J, Sheehan SM, Korstanje R, Menzel S, Bankhead P, Mietsch M, Drummer C, Lehrke M, Kramann R, Floege J, Boor P, Merhof D. Deep Learning-Based Segmentation and Quantification in Experimental Kidney Histopathology. J Am Soc Nephrol 2021;32:52-68. PMID: 33154175. PMCID: PMC7894663. DOI: 10.1681/asn.2020050597.
Abstract
BACKGROUND: Nephropathologic analyses provide important outcomes-related data in experiments with the animal models that are essential for understanding kidney disease pathophysiology. Precision medicine increases the demand for quantitative, unbiased, reproducible, and efficient histopathologic analyses, which will require novel high-throughput tools. A deep learning technique, the convolutional neural network, is increasingly applied in pathology because of its high performance in tasks like histology segmentation.
METHODS: We investigated use of a convolutional neural network architecture for accurate segmentation of periodic acid-Schiff-stained kidney tissue from healthy mice and five murine disease models and from other species used in preclinical research. We trained the convolutional neural network to segment six major renal structures: glomerular tuft, glomerulus including Bowman's capsule, tubules, arteries, arterial lumina, and veins. To achieve high accuracy, we performed a large number of expert-based annotations, 72,722 in total.
RESULTS: Multiclass segmentation performance was very high in all disease models. The convolutional neural network allowed high-throughput and large-scale, quantitative and comparative analyses of various models. In disease models, computational feature extraction revealed interstitial expansion, tubular dilation and atrophy, and glomerular size variability. Validation showed a high correlation of findings with current standard morphometric analysis. The convolutional neural network also showed high performance in other species used in research, including rats, pigs, bears, and marmosets, as well as in humans, providing a translational bridge between preclinical and clinical studies.
CONCLUSIONS: We developed a deep learning algorithm for accurate multiclass segmentation of digital whole-slide images of periodic acid-Schiff-stained kidneys from various species and renal disease models. This enables reproducible quantitative histopathologic analyses in preclinical models that might also be applicable to clinical studies.
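Multiclass segmentation performance of the kind reported here is typically scored per structure with the Dice coefficient. A minimal sketch on fabricated 4-class toy masks (real evaluation would run on whole-slide annotations):

```python
import numpy as np

def dice_per_class(pred, truth, n_classes):
    """Dice coefficient per class for a multiclass segmentation mask."""
    scores = {}
    for c in range(n_classes):
        p, t = (pred == c), (truth == c)
        denom = p.sum() + t.sum()
        # 2|P∩T| / (|P|+|T|); undefined when the class is absent in both.
        scores[c] = 2.0 * (p & t).sum() / denom if denom else float("nan")
    return scores

truth = np.array([[0, 0, 1, 1],
                  [0, 2, 2, 1],
                  [3, 3, 2, 1]])
pred  = np.array([[0, 0, 1, 1],
                  [0, 2, 1, 1],
                  [3, 3, 2, 2]])
print(dice_per_class(pred, truth, 4))
```

Reporting Dice per structure (tuft, tubules, arteries, ...) rather than one pooled score keeps rare classes from being masked by abundant ones.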
Affiliation(s)
- Nassim Bouteldja: Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- Barbara M. Klinkhammer: Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany
- Roman D. Bülow: Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Patrick Droste: Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Simon W. Otten: Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Julia Moellmann: Department of Cardiology and Vascular Medicine, RWTH Aachen University Hospital, Aachen, Germany
- Sylvia Menzel: Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany
- Peter Bankhead: Edinburgh Pathology, University of Edinburgh, Edinburgh, United Kingdom; Institute of Genetics and Molecular Medicine, University of Edinburgh, Edinburgh, United Kingdom
- Matthias Mietsch: Laboratory Animal Science Unit, German Primate Center, Goettingen, Germany
- Charis Drummer: Platform Degenerative Diseases, German Primate Center, Goettingen, Germany
- Michael Lehrke: Department of Cardiology and Vascular Medicine, RWTH Aachen University Hospital, Aachen, Germany
- Rafael Kramann: Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany; Department of Internal Medicine, Nephrology and Transplantation, Erasmus Medical Center, Rotterdam, The Netherlands
- Jürgen Floege: Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany
- Peter Boor: Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Department of Nephrology and Immunology, RWTH Aachen University Hospital, Aachen, Germany
- Dorit Merhof: Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
264
Fahami MA, Roshanzamir M, Izadi NH, Keyvani V, Alizadehsani R. Detection of effective genes in colon cancer: A machine learning approach. Inform Med Unlocked 2021. DOI: 10.1016/j.imu.2021.100605.
265
Reducing annotation effort in digital pathology: A Co-Representation learning framework for classification tasks. Med Image Anal 2021;67:101859. DOI: 10.1016/j.media.2020.101859.
266
Applying Machine Learning for Integration of Multi-Modal Genomics Data and Imaging Data to Quantify Heterogeneity in Tumour Tissues. Methods Mol Biol 2021;2190:209-228. PMID: 32804368. DOI: 10.1007/978-1-0716-0826-5_10.
Abstract
With rapid advances in experimental instruments and protocols, imaging and sequencing data are being generated at an unprecedented rate, contributing significantly to current and future biomedical big data. Meanwhile, unprecedented advances in computational infrastructure and analysis algorithms are realizing image-based digital diagnosis not only in radiology and cardiology but also in oncology and other diseases. Machine learning methods, especially deep learning techniques, are already broadly implemented in diverse technological and industrial sectors, but their applications in healthcare are just starting. Uniquely in biomedical research, a vast potential exists to integrate genomics data with histopathological imaging data. This integration has the potential to extend the pathologist's limits and boundaries, which may create breakthroughs in diagnosis, treatment, and monitoring at the molecular and tissue levels. Moreover, applications of genomics data are realizing the potential of personalized medicine, making diagnosis, treatment, monitoring, and prognosis more accurate. In this chapter, we discuss machine learning methods readily available for digital pathology applications, new prospects for integrating spatial genomics data on tissues with tissue morphology, and frontier approaches to combining genomics data with pathological imaging data. We present perspectives on how artificial intelligence can be synergized with molecular genomics and imaging to make breakthroughs in biomedical and translational research for computer-aided applications.
267
De Vera Mudry MC, Martin J, Schumacher V, Venugopal R. Deep Learning in Toxicologic Pathology: A New Approach to Evaluate Rodent Retinal Atrophy. Toxicol Pathol 2020;49:851-861. PMID: 33371793. DOI: 10.1177/0192623320980674.
Abstract
Quantification of retinal atrophy, caused by therapeutics and/or light, by manual measurement of retinal layers is labor intensive and time-consuming. In this study, we explored the role of deep learning (DL) in automating the assessment of retinal atrophy, particularly of the outer and inner nuclear layers, in rats. Herein, we report our experience creating and employing a hybrid approach, which combines conventional image processing and DL to quantify rodent retinal atrophy. Utilizing a DL approach based upon the VGG16 model architecture, models were trained, tested, and validated using 10,746 image patches scanned from whole slide images (WSIs) of hematoxylin-eosin stained rodent retina. The accuracy of this computational method was validated throughout using pathologist-annotated WSIs and was used to separately quantify the thickness of the outer and inner nuclear layers of the retina. Our results show that DL can efficiently facilitate the evaluation of therapeutic and/or light-induced atrophy, particularly of the outer retina, in rodents. In addition, this study provides a template which can be used to train, validate, and analyze the results of toxicologic pathology DL models across different animal species used in preclinical efficacy and safety studies.
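The abstract does not specify how layer thickness is computed once a layer is segmented; as an illustrative sketch (function name and micron scale are assumptions, not details from the paper), per-column thickness of a binary layer mask could be measured like this:

```python
import numpy as np

def mean_layer_thickness(layer_mask, microns_per_pixel=0.5):
    """Mean thickness of a segmented retinal layer, measured per image
    column of a binary mask (rows x cols) and converted to microns."""
    pixels_per_column = layer_mask.sum(axis=0)           # thickness in px
    occupied = pixels_per_column[pixels_per_column > 0]  # skip empty columns
    if occupied.size == 0:
        return 0.0
    return float(occupied.mean() * microns_per_pixel)
```

Averaging per column rather than over the whole mask keeps the estimate meaningful when the layer spans only part of the image.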
Affiliation(s)
- Maria Cristina De Vera Mudry
- Roche Pharma Research and Early Development, Pharmaceutical Sciences, Roche Innovation Center Basel, F. Hoffmann-La Roche Ltd, Basel, Switzerland
- Jim Martin
- Roche Tissue Diagnostics, Santa Clara, CA, USA
- Vanessa Schumacher
- Roche Pharma Research and Early Development, Pharmaceutical Sciences, Roche Innovation Center Basel, F. Hoffmann-La Roche Ltd, Basel, Switzerland
268
Li B, Keikhosravi A, Loeffler AG, Eliceiri KW. Single image super-resolution for whole slide image using convolutional neural networks and self-supervised color normalization. Med Image Anal 2020; 68:101938. [PMID: 33359932 DOI: 10.1016/j.media.2020.101938]
Abstract
High-quality whole slide scanners used for animal and human pathology scanning are expensive and can produce massive datasets, which limits the access to and adoption of this technique. As a potential solution to these challenges, we present a deep learning-based approach making use of single image super-resolution (SISR) to reconstruct high-resolution histology images from low-resolution inputs. Such low-resolution images can easily be shared, require less storage, and can be acquired quickly using widely available low-cost slide scanners. The network consists of multi-scale fully convolutional networks capable of capturing hierarchical features. Conditional generative adversarial loss is incorporated to penalize blurriness in the output images. The network is trained using a progressive strategy where the scaling factor is sampled from a normal distribution with an increasing mean. The results are evaluated with quantitative metrics and are used in a clinical histopathology diagnosis procedure which shows that the SISR framework can be used to reconstruct high-resolution images with clinical level quality. We further propose a self-supervised color normalization method that can remove staining variation artifacts. Quantitative evaluations show that the SISR framework can generalize well on unseen data collected from other patient tissue cohorts by incorporating the color normalization method.
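The progressive training strategy, where the scaling factor is sampled from a normal distribution with an increasing mean, can be sketched as follows (the parameter names and the 1x-4x range are illustrative assumptions, not values from the paper):

```python
import random

def sample_scale_factor(epoch, total_epochs, s_min=1.0, s_max=4.0, sd=0.5):
    """Draw an upscaling factor from a normal distribution whose mean
    increases linearly over training, then clamp it to the valid range."""
    t = epoch / max(1, total_epochs - 1)        # training progress in [0, 1]
    mean = s_min + (s_max - s_min) * t          # mean drifts toward s_max
    return min(max(random.gauss(mean, sd), s_min), s_max)
```

Early epochs then mostly see easy (small) upscaling factors, and harder factors dominate as training progresses.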
Affiliation(s)
- Bin Li
- Laboratory for Optical and Computational Instrumentation, Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA; Morgridge Institute for Research, Madison, WI 53706, USA
- Adib Keikhosravi
- Laboratory for Optical and Computational Instrumentation, Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA
- Agnes G Loeffler
- Department of Pathology, MetroHealth Medical Center, Cleveland, OH, USA
- Kevin W Eliceiri
- Laboratory for Optical and Computational Instrumentation, Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA; Morgridge Institute for Research, Madison, WI 53706, USA; Department of Medical Physics, University of Wisconsin-Madison, Madison, WI 53706, USA
269
Stanitsas P, Cherian A, Morellas V, Tejpaul R, Papanikolopoulos N, Truskinovsky A. Image Descriptors for Weakly Annotated Histopathological Breast Cancer Data. Front Digit Health 2020; 2. [PMID: 33345255 PMCID: PMC7749086 DOI: 10.3389/fdgth.2020.572671]
Abstract
Introduction: Cancerous Tissue Recognition (CTR) methodologies continuously integrate advancements at the forefront of machine learning and computer vision, providing a variety of inference schemes for histopathological data. Histopathological data, in most cases, come in the form of high-resolution images, so methodologies operating at the patch level are more computationally attractive. Such methodologies capitalize on pixel-level annotations (tissue delineations) from expert pathologists, which are then used to derive labels at the patch level. In this work, we envision a digital connected health system that augments the capabilities of clinicians by providing powerful feature descriptors that may describe malignant regions. Material and Methods: We start with a patch-level descriptor, termed the Covariance-Kernel Descriptor (CKD), capable of compactly describing tissue architectures associated with carcinomas. To extend the recognition capability of the CKD to larger slide regions, we resort to a multiple instance learning framework. In that direction, we derive the Weakly Annotated Image Descriptor (WAID) as the parameters of classifier decision boundaries in a multiple instance learning framework. The WAID is computed on bags of patches corresponding to larger image regions for which binary labels (malignant vs. benign) are provided, thus obviating the need for tissue delineations. Results: The CKD outperformed all the considered descriptors, reaching a classification accuracy (ACC) of 92.83% and an area under the curve (AUC) of 0.98. The CKD captures higher-order correlations between features and achieved superior performance against a large collection of computer vision features on a private breast cancer dataset. The WAID outperformed all other descriptors on the Breast Cancer Histopathological database (BreakHis), where correctly classified malignant (CCM) instances reached 91.27% and 92.00% at the patient and image level, respectively, achieving state-of-the-art performance without resorting to a deep learning scheme. Discussion: Our proposed CKD and WAID can help medical experts accomplish their work more accurately and faster than the current state-of-the-art.
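A minimal sketch of a covariance-style patch descriptor in the spirit of the CKD (the paper's exact kernel construction may differ; the log-Euclidean mapping and regularization used here are assumed choices for illustration):

```python
import numpy as np

def covariance_kernel_descriptor(patch_features):
    """Compactly describe a tissue patch by the covariance of its
    per-pixel feature vectors (n_pixels x d), mapped through the matrix
    logarithm so Euclidean distances respect SPD geometry, then
    vectorized as the upper triangle."""
    cov = np.cov(patch_features, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])        # keep the matrix positive definite
    w, v = np.linalg.eigh(cov)                # eigendecomposition of SPD matrix
    log_cov = (v * np.log(w)) @ v.T           # matrix logarithm
    iu = np.triu_indices(cov.shape[0])
    return log_cov[iu]                        # d*(d+1)/2 descriptor entries
```

Descriptors of this form capture second-order feature correlations in a fixed-size vector regardless of patch size, which is what makes them usable inside a standard classifier.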
Affiliation(s)
- Panagiotis Stanitsas
- Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN, United States
- Anoop Cherian
- Australian Center for Robotic Vision, Australian National University, Canberra, ACT, Australia
- Vassilios Morellas
- Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN, United States
- Resha Tejpaul
- Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN, United States
- Nikolaos Papanikolopoulos
- Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN, United States
- Alexander Truskinovsky
- Department of Pathology & Laboratory Medicine, Roswell Park Cancer Institute, Buffalo, NY, United States
270
A Semisupervised Learning Scheme with Self-Paced Learning for Classifying Breast Cancer Histopathological Images. Comput Intell Neurosci 2020; 2020:8826568. [PMID: 33376479 PMCID: PMC7738795 DOI: 10.1155/2020/8826568]
Abstract
The unavailability of large amounts of well-labeled data poses a significant challenge in many medical imaging tasks. Even when sufficient data are available, accurately labeling them is an arduous and time-consuming process requiring expert skills. The issue of unbalanced data further compounds these problems and presents a considerable challenge for many machine learning algorithms. In light of this, the ability to develop algorithms that can exploit large amounts of unlabeled data together with a small amount of labeled data, while demonstrating robustness to data imbalance, offers promising prospects for building highly efficient classifiers. This work proposes a semisupervised learning method that integrates self-training and self-paced learning to generate and select pseudolabeled samples for classifying breast cancer histopathological images. A novel pseudolabel generation and selection algorithm is introduced in the learning scheme to generate and select highly confident pseudolabeled samples from both well-represented and less-represented classes. Such a learning approach improves performance by jointly learning a model and optimizing the generation of pseudolabels on unlabeled target data to augment the training data, then retraining the model with the generated labels. A class balancing framework that normalizes the class-wise confidence scores is also proposed to prevent the model from ignoring samples from less-represented classes (hard-to-learn samples), hence effectively handling the issue of data imbalance. Extensive experimental evaluation on the BreakHis dataset demonstrates the effectiveness of the proposed method.
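A toy sketch of class-balanced pseudolabel selection in the spirit described (the `frac` parameter and per-class ranking scheme are illustrative assumptions, not the paper's exact algorithm):

```python
import numpy as np

def select_pseudolabels(probs, frac=0.2):
    """Pick high-confidence pseudolabels per predicted class so that
    less-represented classes are not drowned out: confidences are ranked
    within each class and the top fraction of every class is kept."""
    preds = probs.argmax(axis=1)      # hard pseudolabels
    conf = probs.max(axis=1)          # confidence of each pseudolabel
    keep = []
    for c in np.unique(preds):
        idx = np.flatnonzero(preds == c)
        ranked = idx[np.argsort(-conf[idx])]       # most confident first
        n = max(1, int(np.ceil(frac * idx.size)))  # at least one per class
        keep.extend(ranked[:n].tolist())
    return sorted(keep), preds
```

Ranking within each class, rather than globally, is what prevents a confident majority class from crowding out every minority-class pseudolabel.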
271
Pandey P, P PA, Kyatham V, Mishra D, Dastidar TR. Target-Independent Domain Adaptation for WBC Classification Using Generative Latent Search. IEEE Trans Med Imaging 2020; 39:3979-3991. [PMID: 32746144 DOI: 10.1109/tmi.2020.3009029]
Abstract
Automating the classification of camera-obtained microscopic images of White Blood Cells (WBCs) and related cell subtypes has assumed importance since it aids the laborious manual process of review and diagnosis. Several State-Of-The-Art (SOTA) methods developed using deep convolutional neural networks suffer from the problem of domain shift: severe performance degradation when tested on data (target) obtained in a setting different from that of the training data (source). The change in the target data might be caused by factors such as differences in camera/microscope types, lenses, lighting conditions, etc. This problem can potentially be solved using Unsupervised Domain Adaptation (UDA) techniques, although standard algorithms presuppose the existence of a sufficient amount of unlabelled target data, which is not always the case with medical images. In this paper, we propose a method for UDA that is devoid of the need for target data. Given a test image from the target data, we obtain its 'closest clone' from the source data, which is then used as a proxy in the classifier. We prove the existence of such a clone given that an infinite number of data points can be sampled from the source distribution. We propose a method in which a latent-variable generative model based on variational inference is used to simultaneously sample and find the 'closest clone' from the source distribution through an optimization procedure in the latent space. We demonstrate the efficacy of the proposed method over several SOTA UDA methods for WBC classification on datasets captured using different imaging modalities under multiple settings.
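The 'closest-clone' idea can be illustrated with a toy latent-space search (the paper uses a variational generative model; the finite-difference gradient descent and trivially simple decoder below are simplifying assumptions for illustration only):

```python
import numpy as np

def closest_clone(decode, x, latent_dim, steps=300, lr=0.1, eps=1e-5):
    """Search latent space for the code whose decoding best matches a
    target image x, by finite-difference gradient descent on the squared
    reconstruction error."""
    z = np.zeros(latent_dim)
    for _ in range(steps):
        base = np.sum((decode(z) - x) ** 2)
        grad = np.empty(latent_dim)
        for i in range(latent_dim):
            dz = np.zeros(latent_dim)
            dz[i] = eps
            # numerical gradient of the reconstruction error w.r.t. z_i
            grad[i] = (np.sum((decode(z + dz) - x) ** 2) - base) / eps
        z -= lr * grad
    return decode(z)
```

The returned decoding lies on the source (generator) manifold by construction, which is why it can stand in for the out-of-domain test image inside a source-trained classifier.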
272
Ramakrishna RR, Abd Hamid Z, Wan Zaki WMD, Huddin AB, Mathialagan R. Stem cell imaging through convolutional neural networks: current issues and future directions in artificial intelligence technology. PeerJ 2020; 8:e10346. [PMID: 33240655 PMCID: PMC7680049 DOI: 10.7717/peerj.10346]
Abstract
Stem cells are primitive and precursor cells with the potential to reproduce into diverse mature and functional cell types in the body throughout the developmental stages of life. Their remarkable potential has led to numerous medical discoveries and breakthroughs in science. As a result, stem cell-based therapy has emerged as a new subspecialty in medicine. One promising stem cell being investigated is the induced pluripotent stem cell (iPSC), which is obtained by genetically reprogramming mature cells to convert them into embryonic-like stem cells. These iPSCs are used to study the onset of disease, drug development, and medical therapies. However, functional studies on iPSCs involve the analysis of iPSC-derived colonies through manual identification, which is time-consuming, error-prone, and training-dependent. Thus, an automated instrument for the analysis of iPSC colonies is needed. Recently, artificial intelligence (AI) has emerged as a novel technology to tackle this challenge. In particular, deep learning, a subfield of AI, offers an automated platform for analyzing iPSC colonies and other colony-forming stem cells. Deep learning rectifies data features using a convolutional neural network (CNN), a type of multi-layered neural network that can play an innovative role in image recognition. CNNs are able to distinguish cells with high accuracy based on morphologic and textural changes. Therefore, CNNs have the potential to create a future field of deep learning tasks aimed at solving various challenges in stem cell studies. This review discusses the progress and future of CNNs in stem cell imaging for therapy and research.
Affiliation(s)
- Ramanaesh Rao Ramakrishna
- Biomedical Science Programme and Centre for Diagnostic, Therapeutic and Investigative Science, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
- Zariyantey Abd Hamid
- Biomedical Science Programme and Centre for Diagnostic, Therapeutic and Investigative Science, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
- Wan Mimi Diyana Wan Zaki
- Department of Electrical, Electronic & Systems Engineering, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
- Aqilah Baseri Huddin
- Department of Electrical, Electronic & Systems Engineering, Faculty of Engineering & Built Environment, Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia
- Ramya Mathialagan
- Biomedical Science Programme and Centre for Diagnostic, Therapeutic and Investigative Science, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
273
Berryman S, Matthews K, Lee JH, Duffy SP, Ma H. Image-based phenotyping of disaggregated cells using deep learning. Commun Biol 2020; 3:674. [PMID: 33188302 PMCID: PMC7666170 DOI: 10.1038/s42003-020-01399-x]
Abstract
The ability to phenotype cells is fundamentally important in biological research and medicine. Current methods rely primarily on fluorescence labeling of specific markers. However, there are many situations where this approach is unavailable or undesirable. Machine learning has been used for image cytometry but has been limited by cell agglomeration and it is currently unclear if this approach can reliably phenotype cells that are difficult to distinguish by the human eye. Here, we show disaggregated single cells can be phenotyped with a high degree of accuracy using low-resolution bright-field and non-specific fluorescence images of the nucleus, cytoplasm, and cytoskeleton. Specifically, we trained a convolutional neural network using automatically segmented images of cells from eight standard cancer cell-lines. These cells could be identified with an average F1-score of 95.3%, tested using separately acquired images. Our results demonstrate the potential to develop an "electronic eye" to phenotype cells directly from microscopy images.
Affiliation(s)
- Samuel Berryman
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- Kerryn Matthews
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- Jeong Hyun Lee
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- Simon P Duffy
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- British Columbia Institute of Technology, Burnaby, BC, Canada
- Hongshen Ma
- Department of Mechanical Engineering, University of British Columbia, Vancouver, BC, Canada
- Centre for Blood Research, University of British Columbia, Vancouver, BC, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Vancouver Prostate Centre, Vancouver General Hospital, Vancouver, BC, Canada
274
Debelee TG, Kebede SR, Schwenker F, Shewarega ZM. Deep Learning in Selected Cancers' Image Analysis-A Survey. J Imaging 2020; 6:121. [PMID: 34460565 PMCID: PMC8321208 DOI: 10.3390/jimaging6110121]
Abstract
Deep learning algorithms have become the first-choice approach to medical image analysis, face recognition, and emotion recognition. In this survey, several deep-learning-based approaches applied to breast cancer, cervical cancer, brain tumors, and colon and lung cancers are studied and reviewed. Deep learning has been applied in almost all of the imaging modalities used for cervical and breast cancers, and in MRI for brain tumors. The review indicates that deep learning methods have achieved state-of-the-art performance in tumor detection, segmentation, feature extraction, and classification. As presented in this paper, deep learning approaches were used in three different modes: training from scratch, transfer learning by freezing some layers of the network, and modifying the architecture to reduce the number of parameters in the network. Moreover, the application of deep learning to imaging devices for the detection of various cancers has mostly been studied by researchers affiliated with academic and medical institutes in economically developed countries, while such studies have received little attention in Africa despite the dramatic rise in cancer risk on the continent.
Affiliation(s)
- Taye Girma Debelee
- Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, 120611 Addis Ababa, Ethiopia
- Samuel Rahimeto Kebede
- Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia
- Department of Electrical and Computer Engineering, Debreberhan University, 445 Debre Berhan, Ethiopia
- Friedhelm Schwenker
- Institute of Neural Information Processing, University of Ulm, 89081 Ulm, Germany
275
Pacal I, Karaboga D, Basturk A, Akay B, Nalbantoglu U. A comprehensive review of deep learning in colon cancer. Comput Biol Med 2020; 126:104003. [DOI: 10.1016/j.compbiomed.2020.104003]
276

277
Qu H, Wu P, Huang Q, Yi J, Yan Z, Li K, Riedlinger GM, De S, Zhang S, Metaxas DN. Weakly Supervised Deep Nuclei Segmentation Using Partial Points Annotation in Histopathology Images. IEEE Trans Med Imaging 2020; 39:3655-3666. [PMID: 32746112 DOI: 10.1109/tmi.2020.3002244]
Abstract
Nuclei segmentation is a fundamental task in histopathology image analysis. Typically, such segmentation tasks require significant effort to manually generate accurate pixel-wise annotations for fully supervised training. To alleviate such tedious and manual effort, in this paper we propose a novel weakly supervised segmentation framework based on partial points annotation, i.e., only a small portion of nuclei locations in each image are labeled. The framework consists of two learning stages. In the first stage, we design a semi-supervised strategy to learn a detection model from partially labeled nuclei locations. Specifically, an extended Gaussian mask is designed to train an initial model with partially labeled data. Then, self-training with background propagation is proposed to make use of the unlabeled regions to boost nuclei detection and suppress false positives. In the second stage, a segmentation model is trained from the detected nuclei locations in a weakly-supervised fashion. Two types of coarse labels with complementary information are derived from the detected points and are then utilized to train a deep neural network. The fully-connected conditional random field loss is utilized in training to further refine the model without introducing extra computational complexity during inference. The proposed method is extensively evaluated on two nuclei segmentation datasets. The experimental results demonstrate that our method can achieve competitive performance compared to the fully supervised counterpart and the state-of-the-art methods while requiring significantly less annotation effort.
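The extended Gaussian mask built from partial point annotations can be sketched roughly as follows (the sigma, truncation radius, and function name are illustrative assumptions, not the paper's exact parameters):

```python
import numpy as np

def gaussian_point_mask(shape, points, sigma=3.0, radius=8):
    """Turn sparse nucleus point annotations into a soft training mask:
    a truncated Gaussian bump centered on each annotated point, zero
    elsewhere."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=float)
    for py, px in points:
        d2 = (yy - py) ** 2 + (xx - px) ** 2
        bump = np.exp(-d2 / (2.0 * sigma ** 2))
        bump[d2 > radius ** 2] = 0.0      # truncate far from the point
        mask = np.maximum(mask, bump)     # overlapping bumps take the max
    return mask
```

A soft mask of this kind gives the detection network a smooth target around each labeled nucleus while leaving unlabeled regions untouched, which is what makes the subsequent self-training with background propagation possible.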
278
Mahmood H, Shaban M, Indave BI, Santos-Silva AR, Rajpoot N, Khurram SA. Use of artificial intelligence in diagnosis of head and neck precancerous and cancerous lesions: A systematic review. Oral Oncol 2020; 110:104885. [PMID: 32674040 DOI: 10.1016/j.oraloncology.2020.104885]
Abstract
This systematic review analyses and describes the application and diagnostic accuracy of Artificial Intelligence (AI) methods used for detection and grading of potentially malignant (pre-cancerous) and cancerous head and neck lesions using whole slide images (WSI) of human tissue slides. The electronic databases MEDLINE via OVID, Scopus and Web of Science were searched from October 2009 to April 2020. Tailored search-strings were developed using database-specific terms. Studies were selected using strict inclusion criteria following PRISMA Guidelines. Risk of bias assessment was conducted using a tailored QUADAS-2 tool. Out of 315 records, 11 fulfilled the inclusion criteria. AI-based methods were employed for analysis of specific histological features for oral epithelial dysplasia (n = 1), oral submucous fibrosis (n = 5), oral squamous cell carcinoma (n = 4) and oropharyngeal squamous cell carcinoma (n = 1). A combination of heuristic, supervised and unsupervised learning methods was employed, including more than 10 different classification and segmentation techniques. Most studies used uni-centric datasets (range 40-270 images) comprising small sub-images within WSI, with accuracy between 79 and 100%. This review provides early evidence to support the potential application of supervised machine learning methods as a diagnostic aid for some oral potentially malignant and malignant lesions; however, there is a paucity of evidence using AI for diagnosis of other head and neck pathologies. Overall, the quality of evidence is low, with most studies showing a high risk of bias which is likely to have overestimated accuracy rates. This review highlights the need for development of state-of-the-art deep learning techniques in future head and neck research.
Affiliation(s)
- H Mahmood
- Dr Hanya Mahmood (NIHR Academic Clinical Fellow in Oral Surgery), Academic Unit of Oral & Maxillofacial Surgery, School of Clinical Dentistry, University of Sheffield, 19 Claremont Crescent, S10 2TA, UK.
- M Shaban
- Muhammad Shaban (Research Student), Department of Computer Science, University of Warwick, Coventry, UK.
- B I Indave
- Blanca Iciar Indave Ruiz (Systematic Reviewer), WHO/IARC Classification of Tumours Group, International Agency for Research on Cancer, Lyon, France.
- A R Santos-Silva
- Alan Roger Santos-Silva (Associate Professor in Oral Medicine & Pathology), Oral Diagnosis Department, Piracicaba Dental School, University of Campinas, Piracicaba, São Paulo, Brazil.
- N Rajpoot
- Nasir Rajpoot (Professor of Computational Pathology), Department of Computer Science, University of Warwick, Coventry, UK.
- S A Khurram
- Syed Ali Khurram (Senior Clinical Lecturer & Honorary Consultant in Oral & Maxillofacial Pathology), Academic Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, 19 Claremont Crescent, UK.
279
Mahmood F, Borders D, Chen RJ, Mckay GN, Salimian KJ, Baras A, Durr NJ. Deep Adversarial Training for Multi-Organ Nuclei Segmentation in Histopathology Images. IEEE Trans Med Imaging 2020; 39:3257-3267. [PMID: 31283474 PMCID: PMC8588951 DOI: 10.1109/tmi.2019.2927182]
Abstract
Nuclei segmentation is a fundamental task for various computational pathology applications including nuclei morphology analysis, cell type classification, and cancer grading. Deep learning has emerged as a powerful approach to segmenting nuclei, but the accuracy of convolutional neural networks (CNNs) depends on the volume and the quality of labeled histopathology data for training. In particular, conventional CNN-based approaches lack structured prediction capabilities, which are required to distinguish overlapping and clumped nuclei. Here, we present an approach to nuclei segmentation that overcomes these challenges by utilizing a conditional generative adversarial network (cGAN) trained with synthetic and real data. We generate a large dataset of H&E training images with perfect nuclei segmentation labels using an unpaired GAN framework. This synthetic data along with real histopathology data from six different organs are used to train a conditional GAN with spectral normalization and gradient penalty for nuclei segmentation. This adversarial regression framework enforces higher-order spatial consistency when compared to conventional CNN models. We demonstrate that this nuclei segmentation approach generalizes across different organs, sites, patients and disease states, and outperforms conventional approaches, especially in isolating individual and overlapping nuclei.
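Spectral normalization, used here to stabilize the adversarial training, rescales each weight matrix by its largest singular value, commonly estimated by power iteration. A minimal sketch (the iteration count and seeding are arbitrary choices, and real implementations maintain the iteration vector across training steps):

```python
import numpy as np

def spectral_normalize(W, n_iters=100):
    """Rescale a weight matrix so its largest singular value is ~1,
    estimating the spectral norm by power iteration."""
    rng = np.random.default_rng(0)
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v            # estimated largest singular value
    return W / sigma
```

Constraining every layer's spectral norm bounds the Lipschitz constant of the discriminator, which is what keeps the adversarial gradients well behaved.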
280
Shu J, Liu J, Zhang Y, Fu H, Ilyas M, Faraci G, Della Mea V, Liu B, Qiu G. Marker controlled superpixel nuclei segmentation and automatic counting on immunohistochemistry staining images. Bioinformatics 2020; 36:3225-3233. [PMID: 32073624 DOI: 10.1093/bioinformatics/btaa107]
Abstract
MOTIVATION: For the diagnosis of cancer, manually counting nuclei on massive histopathological images is tedious, and the counting results might vary due to the subjective nature of the operation. RESULTS: This paper presents a new segmentation and counting method for nuclei, which can automatically provide nucleus counting results. The method segments nuclei with detected nuclei seed markers through a modified simple one-pass superpixel segmentation method. Rather than using a single pixel as a seed, we create a superseed for each nucleus to incorporate more information for improved segmentation results. Nucleus pixels are extracted by a newly proposed fusing method to reduce stain variations and preserve nucleus contour information. The proposed method was compared to five existing methods on a dataset of 52 immunohistochemically (IHC) stained images and produced the highest mean F1-score of 0.668. For evaluating the counting results, another dataset with more than 30 000 IHC-stained nuclei in 88 images was prepared; the correlation between automatically generated and manual nucleus counting results was up to R2 = 0.901 (P < 0.001). Testing the tool on the 2018 Data Science Bowl (DSB) competition dataset, three users obtained a DSB score of 0.331 ± 0.006. AVAILABILITY AND IMPLEMENTATION: The proposed method has been implemented as a plugin tool in ImageJ and the source code can be freely downloaded. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
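The reported agreement between automatic and manual counts is a coefficient of determination (R²); for reference, a plain sketch of how such a value is computed from paired per-image counts:

```python
import numpy as np

def r_squared(manual_counts, auto_counts):
    """Coefficient of determination between manual and automatic
    nucleus counts across a set of images."""
    y = np.asarray(manual_counts, dtype=float)
    yhat = np.asarray(auto_counts, dtype=float)
    ss_res = np.sum((y - yhat) ** 2)               # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)           # total sum of squares
    return 1.0 - ss_res / ss_tot
```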
Affiliation(s)
- Jie Shu
- School of Information Science and Technology, North China University of Technology; Beijing Key Laboratory on Integration and Analysis of Large-scale Stream Data, Beijing 100144, China
- Jingxin Liu
- Histo Pathology Diagnostic Center, Shanghai, China
- Yongmei Zhang
- School of Information Science and Technology, North China University of Technology
- Hao Fu
- College of Intelligence Science and Technology, National University of Defense Technology, Hunan 410073, China
- Mohammad Ilyas
- Faculty of Medicine & Health Sciences, Nottingham University Hospitals NHS Trust and University of Nottingham, Nottingham NG7 2UH, UK
- Giuseppe Faraci
- Department of Mathematics, Computer Science and Physics, University of Udine, Udine 33100, Italy
- Vincenzo Della Mea
- Department of Mathematics, Computer Science and Physics, University of Udine, Udine 33100, Italy
- Bozhi Liu
- Guangdong Key Laboratory for Intelligent Signal Processing, Shenzhen University, Guangzhou 518061, China
- Guoping Qiu
- Histo Pathology Diagnostic Center, Shanghai, China; Department of Computer Science, University of Nottingham, Nottingham NG8 1BB, UK
|
281
|
Osama S, Zafar K, Sadiq MU. Predicting Clinical Outcome in Acute Ischemic Stroke Using Parallel Multi-parametric Feature Embedded Siamese Network. Diagnostics (Basel) 2020; 10:E858. [PMID: 33105609 PMCID: PMC7690444 DOI: 10.3390/diagnostics10110858] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2020] [Revised: 10/16/2020] [Accepted: 10/19/2020] [Indexed: 11/16/2022] Open
Abstract
Stroke is the second leading cause of death and disability worldwide, with ischemic stroke the most common type. The preferred diagnostic procedure at the acute stage is the acquisition of multi-parametric magnetic resonance imaging (MRI). This type of imaging not only detects and locates the stroke lesion, but also provides the blood-flow dynamics that help clinicians assess the risks and benefits of reperfusion therapies. However, evaluating the outcome of these risky therapies beforehand is a complicated task owing to the variability of lesion location, size, and shape, and of the cerebral hemodynamics involved. Though a fully automated model for predicting treatment outcome from multi-parametric imaging would be highly valuable in clinical settings, MRI datasets acquired at the acute stage are mostly scarce and suffer from high class imbalance. In this paper, a parallel multi-parametric feature embedded siamese network (PMFE-SN) is proposed that can learn from few samples and can handle skewness in multi-parametric MRI data. Moreover, five evaluation metrics that are insensitive to imbalance are defined for this problem. The results show that PMFE-SN not only outperforms other state-of-the-art techniques on all these metrics, but can also predict both the class with few samples and the class with many samples. An accuracy of 0.67 under leave-one-out testing was achieved with only two samples of the minority class for training, and an accuracy of 0.61 for the majority class. In comparison, the state of the art using hand-crafted features achieved 0 accuracy for the minority class and 0.33 for the majority class.
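The defining property of a siamese network — both branches sharing one embedding, with classification done by distance in embedding space — is what makes learning from very few samples feasible. A toy numpy sketch of that idea (untrained random embedding, nearest-class-prototype decision; the names and shapes are hypothetical, and this is not PMFE-SN itself):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # shared embedding weights (untrained, illustrative)

def embed(x):
    # Both siamese branches apply the *same* weights, so one function
    # suffices to embed any input.
    return np.tanh(W @ x)

def classify(x, support):
    """Assign x to the class whose support-set prototype is nearest in
    embedding space -- workable even with two or three samples per class."""
    z = embed(x)
    protos = {c: np.mean([embed(s) for s in samples], axis=0)
              for c, samples in support.items()}
    return min(protos, key=lambda c: np.linalg.norm(z - protos[c]))
```

In a trained network, `W` would be learned so that same-class pairs embed close together and different-class pairs far apart.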
Affiliation(s)
- Kashif Zafar
- Department of Computer Science, National University of Computing and Emerging Sciences, 852-B Milaad St, Block B Faisal Town, Lahore 54000, Pakistan; (S.O.); (M.U.S.)
|
282
|
Li X, Plataniotis KN. How much off-the-shelf knowledge is transferable from natural images to pathology images? PLoS One 2020; 15:e0240530. [PMID: 33052964 PMCID: PMC7556818 DOI: 10.1371/journal.pone.0240530] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Accepted: 09/28/2020] [Indexed: 11/19/2022] Open
Abstract
Deep learning has achieved great success in natural image classification. To overcome data scarcity in computational pathology, recent studies exploit transfer learning to reuse knowledge gained from natural images in pathology image analysis, aiming to build effective pathology image diagnosis models. Since the transferability of knowledge heavily depends on the similarity of the original and target tasks, significant differences in image content and statistics between pathology images and natural images raise several questions: how much knowledge is transferable? Is the transferred information contributed equally by the pre-trained layers? If not, is there a sweet spot in transfer learning that balances the transferred model's complexity and performance? To answer these questions, this paper proposes a framework to quantify the knowledge gained by a particular layer, conducts an empirical investigation of pathology-image-centered transfer learning, and reports some interesting observations. In particular, compared to the performance baseline obtained by a random-weight model, although the transferability of off-the-shelf representations from deep layers depends heavily on the specific pathology image set, the general representations generated by early layers do convey transferred knowledge in various image classification applications. The trade-off between transferable performance and transferred model complexity observed in this study encourages further investigation of specific metrics and tools to quantify the effectiveness of transfer learning in the future.
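The paper's "knowledge gain by a particular layer" can be made concrete by scoring a layer's features with a cheap probe and subtracting the score of the random-weight baseline. A hedged sketch under our own assumptions (a nearest-centroid probe standing in for whatever task head the framework actually uses; both function names are hypothetical):

```python
import numpy as np

def probe_accuracy(feats, labels):
    """Nearest-centroid probe: a cheap proxy for how useful a layer's
    off-the-shelf features are for the target classification task."""
    feats, labels = np.asarray(feats, dtype=float), np.asarray(labels)
    cents = {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}
    preds = [min(cents, key=lambda c: np.linalg.norm(f - cents[c])) for f in feats]
    return float(np.mean(np.array(preds) == labels))

def knowledge_gain(layer_feats, random_feats, labels):
    # Gain = how much the pre-trained layer's features beat features from a
    # random-weight model under the identical probe (the paper's baseline idea).
    return probe_accuracy(layer_feats, labels) - probe_accuracy(random_feats, labels)
```

A gain near zero for a deep layer but positive for an early layer would match the paper's observation about general early-layer representations.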
Affiliation(s)
- Xingyu Li
- Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada
- Konstantinos N. Plataniotis
- The Edward S. Rogers Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, Canada
|
283
|
An Amalgamated Approach to Bilevel Feature Selection Techniques Utilizing Soft Computing Methods for Classifying Colon Cancer. BIOMED RESEARCH INTERNATIONAL 2020; 2020:8427574. [PMID: 33102596 PMCID: PMC7578727 DOI: 10.1155/2020/8427574] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/14/2020] [Revised: 09/17/2020] [Accepted: 09/22/2020] [Indexed: 12/20/2022]
Abstract
One of the deadliest diseases affecting the large intestine is colon cancer. Although it can occur at any age, colon cancer typically affects older adults. It generally starts as a small benign growth of cells forming on the inside of the colon, which later develops into cancer. Colon cancer is caused by the propagation of somatic alterations that affect gene expression. DNA microarray technology provides a standardized format for assessing the expression levels of thousands of genes, and the patterns of gene expression it measures can distinguish tumors of various anatomical regions. Because microarray data are too high-dimensional to process directly (the curse of dimensionality), an amalgamated approach utilizing bilevel feature selection techniques is proposed in this paper. In the first level, the genes (features) are dimensionally reduced with the Multivariate Minimum Redundancy-Maximum Relevance (MRMR) technique. In the second level, six optimization techniques are used to select the best genes before classification: Invasive Weed Optimization (IWO), Teaching Learning-Based Optimization (TLBO), League Championship Optimization (LCO), Beetle Antennae Search Optimization (BASO), Crow Search Optimization (CSO), and Fruit Fly Optimization (FFO). Finally, the selected features are classified with five suitable classifiers; the best result, a classification accuracy of 99.16%, is obtained when IWO is combined with MRMR and classified with Quadratic Discriminant Analysis (QDA).
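The first-level MRMR step greedily trades relevance to the class label against redundancy with already-selected genes. A minimal sketch, using absolute Pearson correlation as a simple stand-in for the mutual-information terms normally used in mRMR (the function name is hypothetical):

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy Minimum Redundancy - Maximum Relevance selection of k feature
    indices, with |Pearson correlation| as the relevance/redundancy measure."""
    X, y = np.asarray(X, dtype=float), np.asarray(y, dtype=float)
    corr = lambda a, b: abs(np.corrcoef(a, b)[0, 1])
    relevance = [corr(X[:, j], y) for j in range(X.shape[1])]
    selected = [int(np.argmax(relevance))]          # most relevant gene first
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([corr(X[:, j], X[:, s]) for s in selected])
            if relevance[j] - redundancy > best_score:   # relevance minus redundancy
                best, best_score = j, relevance[j] - redundancy
        selected.append(best)
    return selected
```

In the paper's pipeline, the indices returned here would then be refined by one of the six metaheuristics (IWO etc.) before classification.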
|
284
|
Integrative Data Augmentation with U-Net Segmentation Masks Improves Detection of Lymph Node Metastases in Breast Cancer Patients. Cancers (Basel) 2020; 12:cancers12102934. [PMID: 33053723 PMCID: PMC7601653 DOI: 10.3390/cancers12102934] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Revised: 10/01/2020] [Accepted: 10/09/2020] [Indexed: 11/17/2022] Open
Abstract
Simple Summary: In recent years many successful models have been developed to perform various tasks in digital histopathology, yet there is still a reluctance to fully embrace the new technologies in clinical settings. One reason for this is that although these models have achieved high performance at the patch level, their performance at the image level can still be underwhelming. Through this study, our main objective was to investigate whether integrating multiple extracted histological features into the input image had the potential to further improve the performance of classifier models at the patch level. Ideally, by achieving 100% accuracy at the patch level, one can achieve 100% accuracy at the image level. We hope that our research will entice the community to develop new strategies to further improve the performance of existing state-of-the-art models and facilitate their adoption in the clinic. Abstract: Deep learning models have the potential to improve the performance of automated computer-assisted diagnosis tools in digital histopathology and to reduce subjectivity. The main objective of this study was to further improve the diagnostic potential of convolutional neural networks (CNNs) in the detection of lymph node metastasis in breast cancer patients by integrative augmentation of input images with multiple segmentation channels. For this retrospective study, we used the PatchCamelyon dataset, consisting of 327,680 histopathology images of lymph node sections from breast cancer. Images had labels for the presence or absence of metastatic tissue. In addition, we used four separate histopathology datasets with annotations for nucleus, mitosis, tubule, and epithelium to train four instances of U-Net. Our baseline model was then trained with and without the additional segmentation channels and the performances were compared. Integrated gradients were used to visualize model attribution.
The model trained with concatenation/integration of original input plus four additional segmentation channels, which we refer to as ConcatNet, was superior (AUC 0.924) compared to baseline with or without augmentations (AUC 0.854; 0.884). Baseline model trained with one additional segmentation channel showed intermediate performance (AUC 0.870-0.895). ConcatNet had sensitivity of 82.0% and specificity of 87.8%, which was an improvement in performance over the baseline (sensitivity of 74.6%; specificity of 80.4%). Integrated gradients showed that models trained with additional segmentation channels had improved focus on particular areas of the image containing aberrant cells. Augmenting images with additional segmentation channels improved baseline model performance as well as its ability to focus on discrete areas of the image.
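The "ConcatNet" input described above is plain channel-wise concatenation of the RGB patch with the four U-Net segmentation maps. A small sketch with illustrative shapes (96x96 is the PatchCamelyon patch size; the helper name is hypothetical):

```python
import numpy as np

def build_concat_input(rgb, seg_maps):
    """Stack auxiliary segmentation maps onto an RGB patch as extra channels,
    so the classifier sees one multi-channel input instead of plain RGB."""
    return np.concatenate([rgb] + list(seg_maps), axis=-1)

# Illustrative shapes: one RGB patch plus nucleus/mitosis/tubule/epithelium maps.
patch = np.zeros((96, 96, 3))
maps = [np.zeros((96, 96, 1)) for _ in range(4)]
```

The baseline-with-one-channel variants in the abstract correspond to passing a single-element `seg_maps` list.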
|
285
|
Koyuncu H, Barstuğan M, Öziç MÜ. A comprehensive study of brain tumour discrimination using phase combinations, feature rankings, and hybridised classifiers. Med Biol Eng Comput 2020; 58:2971-2987. [PMID: 33006703 DOI: 10.1007/s11517-020-02273-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2019] [Accepted: 09/17/2020] [Indexed: 10/23/2022]
Abstract
The binary categorisation of brain tumours is challenging owing to the complexity of tumours. These challenges arise from the diversity in shape, size, and intensity features among tumours of identical type. Accordingly, framework designs should be optimised for two phenomena: feature analysis and classification. Owing to these challenges, few studies consider the binary classification of three-dimensional (3D) brain tumours. In this paper, the discrimination of high-grade glioma (HGG) and low-grade glioma (LGG) is accomplished by designing various frameworks based on 3D magnetic resonance imaging (3D MRI) data, integrating diverse phase combinations, feature-ranking approaches, and hybrid classifiers. Feature analyses are performed using first-order statistics (FOS), examining different phase combinations alongside single phases (T1c, FLAIR, T1, and T2) and considering five feature-ranking approaches (Bhattacharyya, Entropy, Roc, t-test, and Wilcoxon) to find the appropriate input to the classifier. Hybrid classifiers based on neural networks (NN) are considered for their robustness in medical pattern classification. In this study, state-of-the-art optimisation methods are used to form the hybrid classifiers: dynamic weight particle swarm optimisation (DW-PSO), chaotic dynamic weight particle swarm optimisation (CDW-PSO), and Gauss-map-based chaotic particle swarm optimisation (GM-CPSO). The integrated frameworks, DW-PSO-NN, CDW-PSO-NN, and GM-CPSO-NN, are evaluated on the BraTS 2017 challenge dataset, comprising 210 HGG and 75 LGG samples. A 2-fold cross-validation test and seven metrics (accuracy, AUC, sensitivity, specificity, g-mean, precision, f-measure) are used to evaluate the frameworks efficiently.
In experiments, the most effective framework uses FOS, data including three phase combinations, the Wilcoxon feature-ranking approach, and the GM-CPSO-NN method. This framework achieved remarkable scores of 90.18% (accuracy), 85.62% (AUC), 95.24% (sensitivity), 76% (specificity), 85.08% (g-mean), 91.74% (precision), and 93.46% (f-measure) for HGG/LGG discrimination on 3D brain MRI data. Graphical abstract.
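The Wilcoxon criterion used by the winning framework ranks a feature by how far the rank sum of one class's values deviates from its expectation under no class difference. A minimal numpy sketch (ties are ignored for brevity; names are illustrative):

```python
import numpy as np

def rank_sum_stat(a, b):
    """Wilcoxon rank-sum separation for one feature: |rank sum of group a
    minus its expected value when the two groups are exchangeable|."""
    combined = np.concatenate([a, b])
    ranks = np.empty(len(combined))
    ranks[combined.argsort()] = np.arange(1, len(combined) + 1)
    expected = len(a) * (len(combined) + 1) / 2.0
    return abs(ranks[:len(a)].sum() - expected)

def rank_features(Xa, Xb):
    """Order feature indices by decreasing class separation (e.g. HGG vs LGG)."""
    stats = [rank_sum_stat(Xa[:, j], Xb[:, j]) for j in range(Xa.shape[1])]
    return sorted(range(len(stats)), key=lambda j: -stats[j])
```

The top-ranked features would then feed the hybrid PSO-NN classifier in the paper's pipeline.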
Affiliation(s)
- Hasan Koyuncu
- Faculty of Engineering and Natural Sciences, Electrical & Electronics Engineering Department, Konya Technical University, 42250, Konya, Turkey.
- Mücahid Barstuğan
- Faculty of Engineering and Natural Sciences, Electrical & Electronics Engineering Department, Konya Technical University, 42250, Konya, Turkey
- Muhammet Üsame Öziç
- Faculty of Engineering and Architecture, Biomedical Engineering Department, Necmettin Erbakan University, Konya, Turkey
|
286
|
Willemse J, van der Vaart M, Yang W, Briegel A. Mathematical Mirroring for Identification of Local Symmetry Centers in Microscopic Images: Local Symmetry Detection in FIJI. MICROSCOPY AND MICROANALYSIS : THE OFFICIAL JOURNAL OF MICROSCOPY SOCIETY OF AMERICA, MICROBEAM ANALYSIS SOCIETY, MICROSCOPICAL SOCIETY OF CANADA 2020; 26:978-988. [PMID: 32878652 DOI: 10.1017/s1431927620024320] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Symmetry is omnipresent in nature and we encounter it routinely in everyday life. It is also common at the microscopic level, where symmetry is often key to the proper function of core biological processes. The human brain is exquisitely well suited to recognizing such symmetrical features with ease. In contrast, computational recognition of such patterns in images is still surprisingly challenging. In this paper we describe a mathematical approach to identifying smaller local symmetrical structures within larger images. Our algorithm attributes a local symmetry score to each image pixel, which subsequently allows the identification of the symmetry centers of an object. Though many methods are already available to detect symmetry in images, to the best of our knowledge our algorithm is the first that is easily applicable in ImageJ/FIJI. We have created an interactive plugin in FIJI that allows the detection and thresholding of local symmetry values. The plugin combines the different reflection symmetry axes of a square to obtain good coverage of reflection symmetry in all directions. To demonstrate the plugin's potential, we analyzed images of bacterial chemoreceptor arrays and of intracellular vesicle trafficking events, two prominent examples of biological systems with symmetrical patterns.
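The core of the per-pixel score — mirror a square neighbourhood across its reflection axes and measure agreement — can be sketched in a few lines. This is an illustrative simplification under our own assumptions, not the plugin's exact algorithm:

```python
import numpy as np

def symmetry_score(img, y, x, r):
    """Local reflection-symmetry score at pixel (y, x): mean absolute
    difference between the (2r+1)x(2r+1) neighbourhood and its horizontal,
    vertical and transposed mirrors, negated so higher means more symmetric."""
    patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    mirrors = [patch[:, ::-1], patch[::-1, :], patch.T]  # reflection axes of the square
    return -np.mean([np.abs(patch - m).mean() for m in mirrors])
```

Scoring every pixel this way and thresholding the result yields candidate symmetry centers, mirroring the plugin's interactive workflow.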
Affiliation(s)
- Joost Willemse
- Institute of Biology, Leiden University, Sylviusweg 72, Leiden 2333 BE, The Netherlands
- Michiel van der Vaart
- Institute of Biology, Leiden University, Sylviusweg 72, Leiden 2333 BE, The Netherlands
- Wen Yang
- Institute of Biology, Leiden University, Sylviusweg 72, Leiden 2333 BE, The Netherlands
- Ariane Briegel
- Institute of Biology, Leiden University, Sylviusweg 72, Leiden 2333 BE, The Netherlands
|
287
|
Opportunistic osteoporosis screening in multi-detector CT images using deep convolutional neural networks. Eur Radiol 2020; 31:1831-1842. [PMID: 33001308 DOI: 10.1007/s00330-020-07312-8] [Citation(s) in RCA: 45] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Revised: 07/29/2020] [Accepted: 09/17/2020] [Indexed: 10/23/2022]
Abstract
OBJECTIVE To explore the application of deep learning in patients with primary osteoporosis, and to develop a fully automatic method based on a deep convolutional neural network (DCNN) for vertebral body segmentation and bone mineral density (BMD) calculation in CT images. MATERIALS AND METHODS This retrospective study analyzed 1449 patients who underwent spinal or abdominal CT scans for other indications between March 2018 and May 2020. All data were gathered from three different CT vendors. Of these cases, 586 were used for training and the other 863 for testing. A fully convolutional neural network, U-Net, was employed for automated vertebral body segmentation, with the manually sketched vertebral body region as the ground truth for comparison. A convolutional neural network, DenseNet-121, was applied for BMD calculation; values post-processed by quantitative computed tomography (QCT) served as the reference standard. RESULTS Based on the diversity of CT vendors, the testing cases were split into three cohorts: test set 1 (n = 463), test set 2 (n = 200), and test set 3 (n = 200). Automated segmentation correlated well with manual segmentation for the four lumbar vertebral bodies (L1-L4): the minimum average Dice coefficients for the three testing sets were 0.823, 0.786, and 0.782, respectively. For testing sets from different vendors, the average BMDs calculated by automated regression showed high correlation (r > 0.98) and agreement with those derived from QCT. CONCLUSIONS A deep learning-based method can achieve fully automatic identification of osteoporosis, osteopenia, and normal bone mineral density in CT images. KEY POINTS • Deep learning can perform accurate, fully automated segmentation of the lumbar vertebral body in CT images. • The average BMDs obtained by deep learning correlate highly with those derived from QCT.
• The deep learning-based method could help clinicians in opportunistic osteoporosis screening on spinal or abdominal CT scans.
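The final diagnostic step is a thresholding of the regressed BMD value. The abstract does not state the cut-offs used, so the 80 and 120 mg/cm³ values below are illustrative assumptions taken from commonly cited QCT criteria, not from the paper:

```python
def classify_bmd(bmd_mg_cm3):
    """Map a regressed volumetric BMD to a diagnostic category using
    illustrative QCT cut-offs of 80 and 120 mg/cm3 (assumed, not from the paper)."""
    if bmd_mg_cm3 < 80:
        return "osteoporosis"
    if bmd_mg_cm3 <= 120:
        return "osteopenia"
    return "normal"
```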
|
288
|
George K, Sankaran P, K PJ. Computer assisted recognition of breast cancer in biopsy images via fusion of nucleus-guided deep convolutional features. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 194:105531. [PMID: 32422473 DOI: 10.1016/j.cmpb.2020.105531] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/07/2020] [Revised: 04/20/2020] [Accepted: 05/05/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Breast cancer is a commonly detected cancer among women, accounting for a high number of cancer-related deaths. Biopsy performed by pathologists is the final confirmation procedure for breast cancer diagnosis. Computer-aided diagnosis systems can support the pathologist towards better diagnosis and reduce subjective errors. METHODS In the automation of breast cancer analysis, feature extraction is a challenging task due to the structural diversity of breast tissue images. Here, we propose a nucleus feature extraction methodology using a convolutional neural network (CNN), 'NucDeep', for automated breast cancer detection. Non-overlapping nuclei patches detected from the images enable the design of a low-complexity CNN for feature extraction. A feature fusion approach with a support vector machine classifier (FF + SVM) is used to classify breast tumor images based on the extracted CNN features. The feature fusion method transforms the local nuclei features into a compact image-level feature, thus improving classifier performance. A patch class probability-based decision scheme (NucDeep + SVM + PD) for image-level classification is also introduced in this work. RESULTS The proposed framework was evaluated on the publicly available BreaKHis dataset over 5 random trials with a 70-30 train-test split, achieving an average image-level recognition rate of 96.66 ± 0.77%, 100% specificity, and 96.21% sensitivity. CONCLUSION The proposed NucDeep + FF + SVM model outperforms several recent methods and achieves performance comparable to the state of the art even with low training complexity. As an effective and inexpensive model, the classification of biopsy images for breast tumor diagnosis introduced in this research will thus help develop a reliable support tool for pathologists.
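A patch-class-probability decision scheme aggregates per-patch classifier outputs into one image-level label. Two simple aggregation rules are sketched below as illustrations; the paper's exact rule may differ, and the names are hypothetical:

```python
import numpy as np

def image_label_mean(patch_probs, threshold=0.5):
    """Average the per-patch malignancy probabilities, then threshold."""
    return int(np.mean(patch_probs) >= threshold)

def image_label_vote(patch_probs, threshold=0.5):
    """Threshold each patch first, then take a majority vote."""
    votes = [p >= threshold for p in patch_probs]
    return int(sum(votes) > len(votes) / 2)
```

The mean rule is more tolerant of a few ambiguous patches; the vote rule is more robust to a few extreme probabilities.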
Affiliation(s)
- Kalpana George
- Department of Electronics and Communication Engineering, National Institute of Technology Calicut, Kerala, India.
- Praveen Sankaran
- Department of Electronics and Communication Engineering, National Institute of Technology Calicut, Kerala, India.
- Paul Joseph K
- Department of Electrical Engineering, National Institute of Technology Calicut, Kerala, India.
|
289
|
Formica V, Morelli C, Riondino S, Renzi N, Nitti D, Roselli M. Artificial intelligence for the study of colorectal cancer tissue slides. Artif Intell Gastroenterol 2020; 1:51-59. [DOI: 10.35712/aig.v1.i3.51] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/29/2020] [Revised: 09/25/2020] [Accepted: 09/27/2020] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) is gaining incredible momentum as a companion diagnostic in a number of fields in oncology. In this mini-review, we summarize the main uses and findings of AI applied to the analysis of digital histopathological images of slides from colorectal cancer (CRC) patients. Machine learning tools have been developed to automatically and objectively recognize specific CRC subtypes, such as those with microsatellite instability and high lymphocyte infiltration, that would respond optimally to specific therapies. AI-based classification into distinct prognostic groups, without examination of the tumor's basic biological features, has also been attempted, in a methodological approach that we call "biology-agnostic".
Affiliation(s)
- Vincenzo Formica
- Department of Systems Medicine, Medical Oncology Unit, Tor Vergata University Hospital, Rome 00133, Italy
- Cristina Morelli
- Department of Systems Medicine, Medical Oncology Unit, Tor Vergata University Hospital, Rome 00133, Italy
- Silvia Riondino
- Department of Systems Medicine, Medical Oncology Unit, Tor Vergata University Hospital, Rome 00133, Italy
- Nicola Renzi
- Department of Systems Medicine, Medical Oncology Unit, Tor Vergata University Hospital, Rome 00133, Italy
- Daniele Nitti
- Department of Systems Medicine, Medical Oncology Unit, Tor Vergata University Hospital, Rome 00133, Italy
- Mario Roselli
- Department of Systems Medicine, Medical Oncology Unit, Tor Vergata University Hospital, Rome 00133, Italy
|
290
|
Ortega-Ruiz MA, Karabağ C, Garduño VG, Reyes-Aldasoro CC. Morphological Estimation of Cellularity on Neo-Adjuvant Treated Breast Cancer Histological Images. J Imaging 2020; 6:jimaging6100101. [PMID: 34460542 PMCID: PMC8321162 DOI: 10.3390/jimaging6100101] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Revised: 09/10/2020] [Accepted: 09/21/2020] [Indexed: 12/30/2022] Open
Abstract
This paper describes a methodology that extracts key morphological features from histological breast cancer images in order to automatically assess Tumour Cellularity (TC) in Neo-Adjuvant Treatment (NAT) patients. The response to NAT gives information on therapy efficacy and is measured by the residual cancer burden index, which is composed of two metrics: TC and the assessment of lymph nodes. The data consist of whole slide images (WSIs) of breast tissue stained with Hematoxylin and Eosin (H&E) released in the 2019 SPIE Breast Challenge. The proposed methodology is based on traditional computer vision methods (K-means, watershed segmentation, Otsu's binarisation, and morphological operations) implementing colour separation, segmentation, and feature extraction. The correlation between morphological features and residual TC after NAT was examined: using linear regression and statistical methods, twenty-two key morphological parameters were extracted from the nuclei, the epithelial region, and the full image. Subsequently, an automated TC assessment based on Machine Learning (ML) algorithms was implemented and trained with only selected key parameters. The methodology was validated against the scores assigned by two pathologists through the intra-class correlation coefficient (ICC). The selection of key morphological parameters improved on results reported for other ML methodologies and came very close to deep learning methodologies. These results are encouraging, as a traditionally trained ML algorithm can be useful when limited training data are available, precluding the use of deep learning approaches.
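Otsu's binarisation, one of the traditional building blocks named above, can be written out in a few lines, together with a crude pixel-ratio stand-in for cellularity (illustrative only; the paper's TC estimate is built from twenty-two morphological parameters, not a single pixel ratio, and both function names are hypothetical):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method: choose the grey-level threshold maximising the
    between-class variance of the histogram (values assumed in 0..255)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:t] * np.arange(t)).sum() / w0
        m1 = (hist[t:] * np.arange(t, 256)).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2      # between-class variance (unnormalised)
        if var > best_var:
            best_t, best_var = t, var
    return best_t

def cellularity_proxy(gray):
    """Fraction of pixels darker than the Otsu threshold (haematoxylin-stained
    nuclei are dark) -- a crude stand-in for tumour cellularity."""
    t = otsu_threshold(gray)
    return float((np.asarray(gray) < t).mean())
```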
Affiliation(s)
- Mauricio Alberto Ortega-Ruiz
- Universidad del Valle de México, Departamento de Ingeniería, Campus Coyoacán, Ciudad de México 04910, Mexico
- Department of Electrical & Electronic Engineering, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK;
- Correspondence: (M.A.O.-R.); (C.C.R.-A.)
- Cefa Karabağ
- Department of Electrical & Electronic Engineering, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK
- Victor García Garduño
- Departamento de Ingeniería en Telecomunicaciones, Facultad de Ingeniería, Universidad Nacional Autónoma de México, Av. Universidad 3000, Ciudad Universitaria, Coyoacán, Ciudad de México 04510, Mexico
- Constantino Carlos Reyes-Aldasoro
- giCentre, Department of Computer Science, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK
|
291
|
Dov D, Kovalsky SZ, Assaad S, Cohen J, Range DE, Pendse AA, Henao R, Carin L. Weakly supervised instance learning for thyroid malignancy prediction from whole slide cytopathology images. Med Image Anal 2020; 67:101814. [PMID: 33049578 DOI: 10.1016/j.media.2020.101814] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2019] [Revised: 06/16/2020] [Accepted: 07/28/2020] [Indexed: 02/07/2023]
Abstract
We consider machine-learning-based thyroid-malignancy prediction from cytopathology whole-slide images (WSI). Multiple instance learning (MIL) approaches, typically used for the analysis of WSIs, divide the image (bag) into patches (instances), which are used to predict a single bag-level label. These approaches perform poorly in cytopathology slides due to a unique bag structure: sparsely located informative instances with varying characteristics of abnormality. We address these challenges by considering multiple types of labels: bag-level malignancy and ordered diagnostic scores, as well as instance-level informativeness and abnormality labels. We study their contribution beyond the MIL setting by proposing a maximum likelihood estimation (MLE) framework, from which we derive a two-stage deep-learning-based algorithm. The algorithm identifies informative instances and assigns them local malignancy scores that are incorporated into a global malignancy prediction. We derive a lower bound of the MLE, leading to an improved training strategy based on weak supervision, that we motivate through statistical analysis. The lower bound further allows us to extend the proposed algorithm to simultaneously predict multiple bag and instance-level labels from a single output of a neural network. Experimental results demonstrate that the proposed algorithm provides competitive performance compared to several competing methods, achieves (expert) human-level performance, and allows augmentation of human decisions.
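The two-stage structure — first identify informative instances, then pool their local malignancy scores into a bag-level prediction — can be caricatured in a few lines. A hedged numpy sketch, with top-k selection standing in for the paper's learned informativeness stage (the function name is hypothetical):

```python
import numpy as np

def bag_malignancy(instance_scores, informativeness, k=3):
    """Keep the k instances (patches) the first stage deems most informative,
    then average their local malignancy scores into one slide-level score."""
    idx = np.argsort(informativeness)[-k:]          # indices of top-k instances
    return float(np.mean(np.asarray(instance_scores, dtype=float)[idx]))
```

This captures why the approach suits cytopathology bags: sparse informative patches dominate the slide-level decision while uninformative background is ignored.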
Affiliation(s)
- David Dov
- Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, USA.
- Serge Assaad
- Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, USA
- Jonathan Cohen
- Department of Surgery, Duke University Medical Center, Durham, NC 27710, USA
- Avani A Pendse
- Department of Pathology, Duke University Medical Center, Durham, NC 27710, USA
- Ricardo Henao
- Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, USA
- Lawrence Carin
- Department of Electrical and Computer Engineering, Duke University, Durham, NC 27708, USA
|
292
|
Javed S, Mahmood A, Werghi N, Benes K, Rajpoot N. Multiplex Cellular Communities in Multi-Gigapixel Colorectal Cancer Histology Images for Tissue Phenotyping. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; PP:9204-9219. [PMID: 32966218 DOI: 10.1109/tip.2020.3023795] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
In computational pathology, automated tissue phenotyping in cancer histology images is a fundamental tool for profiling tumor microenvironments. Current tissue phenotyping methods use features derived from image patches, which may not carry biological significance. In this work, we propose a novel multiplex cellular-community-based algorithm for tissue phenotyping that integrates cell-level features within a graph-based hierarchical framework. We demonstrate that such integration offers better performance compared to prior deep learning and texture-based methods, as well as to cellular community methods using uniplex networks. To this end, we construct cell-level graphs using texture, alpha diversity, and multi-resolution deep features. Using these graphs, we compute cellular connectivity features, which are then employed to construct a patch-level multiplex network. Over this network, we compute multiplex cellular communities using a novel objective function. The proposed objective function computes a low-dimensional subspace from each cellular network and subsequently seeks a common low-dimensional subspace using the Grassmann manifold. We evaluate our proposed algorithm on three publicly available tissue-phenotyping datasets, demonstrating a significant improvement over existing state-of-the-art methods.
|
293
|
Wang Y, Nie H, He X, Liao Z, Zhou Y, Zhou J, Ou C. The emerging role of super enhancer-derived noncoding RNAs in human cancer. Theranostics 2020; 10:11049-11062. [PMID: 33042269 PMCID: PMC7532672 DOI: 10.7150/thno.49168] [Citation(s) in RCA: 50] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2020] [Accepted: 08/23/2020] [Indexed: 02/06/2023] Open
Abstract
Super enhancers (SEs) are large clusters of adjacent enhancers that drive the expression of genes which regulate cellular identity; SE regions can be enriched with a high density of transcription factors, co-factors, and enhancer-associated epigenetic modifications. Through enhanced activation of their target genes, SEs play an important role in various diseases and conditions, including cancer. Recent studies have shown that SEs not only activate the transcriptional expression of coding genes to directly regulate biological functions, but also drive the transcriptional expression of non-coding RNAs (ncRNAs) to indirectly regulate biological functions. SE-derived ncRNAs play critical roles in tumorigenesis, including malignant proliferation, metastasis, drug resistance, and inflammatory response. Moreover, the abnormal expression of SE-derived ncRNAs is closely related to the clinical and pathological characterization of tumors. In this review, we summarize the functions and roles of SE-derived ncRNAs in tumorigenesis and discuss their prospective applications in tumor therapy. A deeper understanding of the potential mechanism underlying the action of SE-derived ncRNAs in tumorigenesis may provide new strategies for the early diagnosis of tumors and targeted therapy.
Collapse
MESH Headings
- Antineoplastic Agents/pharmacology
- Antineoplastic Agents/therapeutic use
- Biomarkers, Tumor/analysis
- Biomarkers, Tumor/genetics
- Biomarkers, Tumor/metabolism
- Carcinogenesis/drug effects
- Carcinogenesis/genetics
- Cell Proliferation/drug effects
- Cell Proliferation/genetics
- Drug Resistance, Neoplasm/genetics
- Enhancer Elements, Genetic/genetics
- Gene Expression Regulation, Neoplastic/drug effects
- Gene Expression Regulation, Neoplastic/genetics
- Humans
- Molecular Targeted Therapy/methods
- Neoplasms/diagnosis
- Neoplasms/drug therapy
- Neoplasms/genetics
- Neoplasms/pathology
- Precision Medicine/methods
- RNA, Untranslated/analysis
- RNA, Untranslated/genetics
- RNA, Untranslated/metabolism
Collapse
Affiliation(s)
- Yutong Wang
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, Hunan, 410008, China
| | - Hui Nie
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, Hunan, 410008, China
| | - Xiaoyun He
- Department of Endocrinology, Xiangya Hospital, Central South University, Changsha, Hunan 410008, China
| | - Zhiming Liao
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, Hunan, 410008, China
| | - Yangying Zhou
- Department of Oncology, Xiangya Hospital, Central South University, Changsha 410008, Hunan, China
| | - Jianhua Zhou
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, Hunan, 410008, China
| | - Chunlin Ou
- Department of Pathology, Xiangya Hospital, Central South University, Changsha, Hunan, 410008, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha 410008, Hunan, China
| |
Collapse
|
294
|
Breast cancer detection from biopsy images using nucleus guided transfer learning and belief based fusion. Comput Biol Med 2020; 124:103954. [DOI: 10.1016/j.compbiomed.2020.103954] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2020] [Revised: 07/30/2020] [Accepted: 07/30/2020] [Indexed: 01/22/2023]
|
295
|
Swiderska-Chadaj Z, de Bel T, Blanchet L, Baidoshvili A, Vossen D, van der Laak J, Litjens G. Impact of rescanning and normalization on convolutional neural network performance in multi-center, whole-slide classification of prostate cancer. Sci Rep 2020; 10:14398. [PMID: 32873856 PMCID: PMC7462850 DOI: 10.1038/s41598-020-71420-0] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2020] [Accepted: 08/12/2020] [Indexed: 01/12/2023] Open
Abstract
Algorithms can improve the objectivity and efficiency of histopathologic slide analysis. In this paper, we investigated the impact of scanning systems (scanners) and cycle-GAN-based normalization on algorithm performance by comparing different deep learning models (U-Net, DenseNet, and EfficientNet) for automatically detecting prostate cancer in whole-slide images (WSIs). Models were developed on a multi-center cohort of 582 WSIs and subsequently evaluated on two independent test sets of 85 and 50 WSIs, respectively, to show the robustness of the proposed method to differing staining protocols and scanner types. We also investigated normalization as a pre-processing step using two techniques: the whole-slide image color standardizer (WSICS) algorithm and a cycle-GAN-based method. On the two independent test sets we obtained AUCs of 0.92 and 0.83, respectively; after rescanning the AUCs improved to 0.91/0.88, and after style normalization to 0.98/0.97. In the future, our algorithm could be used to automatically pre-screen prostate biopsies to alleviate the workload of pathologists.
Collapse
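The normalization idea evaluated above can be illustrated with a much simpler stand-in than WSICS or a cycle-GAN: matching each color channel's mean and standard deviation to a template image, Reinhard-style, on synthetic data. The function and data below are illustrative assumptions only; real stain normalization operates in a perceptual or stain-deconvolved color space.

```python
import numpy as np

def match_channel_stats(source, template):
    """Shift/scale each channel of `source` so its mean and std match
    `template`. A crude stand-in for stain normalization."""
    src = source.astype(float)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        t_mu, t_sd = template[..., c].mean(), template[..., c].std()
        # Linear transform: zero-center, rescale, re-center on the template.
        out[..., c] = (src[..., c] - s_mu) / max(s_sd, 1e-8) * t_sd + t_mu
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(1)
template = rng.integers(100, 200, size=(64, 64, 3))   # "target" staining
source = rng.integers(0, 80, size=(64, 64, 3))        # darker "source" scan
normalized = match_channel_stats(source, template)
```

After the transform, the normalized image's per-channel statistics sit close to the template's, which is the same intuition the learned normalizers pursue at whole-slide scale.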
Affiliation(s)
| | - Thomas de Bel
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Lionel Blanchet
- Digital and Computational Pathology, Philips, Best, The Netherlands
| | - Alexi Baidoshvili
- Laboratorium Pathologie Oost-Nederland, LabPON, Hengelo, The Netherlands
| | - Dirk Vossen
- Digital and Computational Pathology, Philips, Best, The Netherlands
| | - Jeroen van der Laak
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands.,Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
| | - Geert Litjens
- Department of Pathology, Radboud University Medical Center, Nijmegen, The Netherlands
| |
Collapse
|
296
|
Jackson CR, Sriharan A, Vaickus LJ. A machine learning algorithm for simulating immunohistochemistry: development of SOX10 virtual IHC and evaluation on primarily melanocytic neoplasms. Mod Pathol 2020; 33:1638-1648. [PMID: 32238879 PMCID: PMC10811656 DOI: 10.1038/s41379-020-0526-z] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2020] [Revised: 03/08/2020] [Accepted: 03/09/2020] [Indexed: 11/08/2022]
Abstract
Immunohistochemistry (IHC) is a diagnostic technique used throughout pathology. A machine learning algorithm that could predict individual cell immunophenotype based on hematoxylin and eosin (H&E) staining would save money and time and reduce the amount of tissue consumed. Prior approaches have lacked the spatial accuracy needed for cell-specific analytical tasks. Here IHC performed on destained H&E slides is used to create a neural network that is potentially capable of predicting individual cell immunophenotype. Twelve slides were stained with H&E and scanned to create digital whole slide images. The H&E slides were then destained and stained with SOX10 IHC. The SOX10 IHC slides were scanned, and the corresponding H&E and IHC digital images were registered. Color-thresholding and machine learning techniques were applied to the registered H&E and IHC images to segment 3,396,668 SOX10-negative cells and 306,166 SOX10-positive cells. The resulting segmentation was used to annotate the original H&E images, and a convolutional neural network was trained to predict SOX10 nuclear staining. In total, 16,309 image patches were used to train the virtual IHC (vIHC) neural network, and 1,813 image patches were used to evaluate it quantitatively. The resulting vIHC neural network achieved an area under the curve of 0.9422 in a receiver operating characteristic (ROC) analysis when classifying individual nuclei. The vIHC network was applied to additional images from clinical practice and was evaluated qualitatively by a board-certified dermatopathologist. Further work is needed to make the process more efficient and accurate for clinical use. This proof-of-concept demonstrates the feasibility of creating neural network-driven vIHC assays.
Collapse
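The label-generation step in the pipeline above (thresholding the registered IHC image to call each segmented nucleus SOX10-positive or SOX10-negative) can be sketched on synthetic data. The brown-signal proxy (inverted blue channel) and the threshold value below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def label_nuclei(ihc, nucleus_masks, dab_threshold=0.3):
    """For each nucleus mask, call it IHC-positive if the mean 'brown'
    signal (here: 1 - normalized blue channel, a crude DAB stain proxy)
    exceeds a threshold."""
    brown = 1.0 - ihc[..., 2] / 255.0
    return [int(brown[mask].mean() > dab_threshold) for mask in nucleus_masks]

# Synthetic 32x32 IHC image: left half "stained" (suppressed blue), right half not.
ihc = np.full((32, 32, 3), 255, dtype=np.uint8)
ihc[:, :16, 2] = 60                                  # brown region
left = np.zeros((32, 32), bool);  left[8:24, 2:14] = True    # nucleus in stain
right = np.zeros((32, 32), bool); right[8:24, 18:30] = True  # nucleus outside
print(label_nuclei(ihc, [left, right]))              # [1, 0]
```

In the paper, labels produced this way on the registered IHC image annotate the corresponding H&E patches, which then supervise the CNN.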
Affiliation(s)
- Christopher R Jackson
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA.
| | - Aravindhan Sriharan
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
| | - Louis J Vaickus
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH, USA
| |
Collapse
|
297
|
Wan T, Zhao L, Feng H, Li D, Tong C, Qin Z. Robust nuclei segmentation in histopathology using ASPPU-Net and boundary refinement. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.08.103] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/14/2023]
|
298
|
Yin C, Chen Z. Developing Sustainable Classification of Diseases via Deep Learning and Semi-Supervised Learning. Healthcare (Basel) 2020; 8:E291. [PMID: 32846941 PMCID: PMC7551840 DOI: 10.3390/healthcare8030291] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 08/19/2020] [Accepted: 08/20/2020] [Indexed: 01/07/2023] Open
Abstract
Disease classification based on machine learning has become a crucial research topic in the fields of genetics and molecular biology. Disease classification generally follows a supervised learning style; i.e., it requires a large number of labelled samples to achieve good classification performance. In most cases, however, labelled samples are hard to obtain, so the amount of training data is limited. At the same time, many unclassified (unlabelled) sequences have been deposited in public databases, and these may aid the training procedure. This approach is called semi-supervised learning and is useful in many applications. Self-training can be implemented by promoting samples in order from high to low confidence, which prevents noisy samples from undermining the robustness of semi-supervised learning during training. The deep forest method with the hyperparameter settings used in this paper can achieve excellent performance. Therefore, in this work, we propose a novel approach combining a deep learning model with semi-supervised self-training to improve disease classification performance; it exploits unlabelled samples through an update mechanism designed to increase the number of high-confidence pseudo-labelled samples. The experimental results show that our proposed model achieves good performance in disease classification and disease-causing gene identification.
Collapse
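The high-to-low-confidence self-training loop described above can be sketched with a toy nearest-centroid classifier; the deep-forest model and the paper's confidence measure are replaced here by a Euclidean distance margin, so this is a sketch of the general technique rather than the authors' method.

```python
import numpy as np

def self_train(x_lab, y_lab, x_unlab, rounds=5, per_round=10):
    """Each round: classify unlabelled points by nearest class centroid,
    promote the `per_round` most confident ones (largest distance margin)
    to the labelled set, then recompute centroids."""
    x_lab, y_lab = x_lab.copy(), y_lab.copy()
    pool = x_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        centroids = np.stack([x_lab[y_lab == c].mean(axis=0)
                              for c in np.unique(y_lab)])
        dists = np.linalg.norm(pool[:, None, :] - centroids[None], axis=2)
        pred = dists.argmin(axis=1)
        sorted_d = np.sort(dists, axis=1)
        margin = sorted_d[:, 1] - sorted_d[:, 0]   # confidence proxy
        keep = np.argsort(margin)[-per_round:]     # most confident first
        x_lab = np.vstack([x_lab, pool[keep]])
        y_lab = np.concatenate([y_lab, pred[keep]])
        pool = np.delete(pool, keep, axis=0)
    return x_lab, y_lab

rng = np.random.default_rng(2)
x0 = rng.normal(0, 0.5, (5, 2)); x1 = rng.normal(3, 0.5, (5, 2))
x_lab = np.vstack([x0, x1]); y_lab = np.array([0] * 5 + [1] * 5)
x_unlab = np.vstack([rng.normal(0, 0.5, (25, 2)), rng.normal(3, 0.5, (25, 2))])
x_all, y_all = self_train(x_lab, y_lab, x_unlab)
print(len(y_all))                                  # 60
```

Promoting only the widest-margin points each round is what keeps early, noisy pseudo-labels from dragging the centroids (and, in the paper, the deep forest) off course.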
Affiliation(s)
- Chunwu Yin
- School of Information and Control Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China;
| | - Zhanbo Chen
- School of Information and Statistics, Guangxi University of Finance and Economics, Nanning 530003, China
- Center of Guangxi Cooperative Innovation for Education Performance Assessment, Guangxi University of Finance and Economics, Nanning 530003, China
| |
Collapse
|
299
|
Patel SK, George B, Rai V. Artificial Intelligence to Decode Cancer Mechanism: Beyond Patient Stratification for Precision Oncology. Front Pharmacol 2020; 11:1177. [PMID: 32903628 PMCID: PMC7438594 DOI: 10.3389/fphar.2020.01177] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2020] [Accepted: 07/20/2020] [Indexed: 12/13/2022] Open
Abstract
The multitude of multi-omics data generated cost-effectively using advanced high-throughput technologies has created a challenging domain for research in Artificial Intelligence (AI). Data curation poses a significant challenge because different parameters, instruments, and sample preparation approaches are employed to generate these big data sets. AI could reduce the fuzziness and randomness in data handling, build a platform for the data ecosystem, and thus serve as the primary choice for data mining and big data analysis to support informed decisions. However, AI implementation remains intricate for researchers and clinicians lacking specific training in computational tools and informatics. Cancer is a major cause of death worldwide, accounting for an estimated 9.6 million deaths in 2018. Certain cancers, such as pancreatic and gastric cancers, are detected only after they have reached advanced stages, with frequent relapses. Cancer is one of the most complex diseases, affecting a range of organs, with diverse disease-progression mechanisms and effectors ranging from gene epigenetics to a wide array of metabolites. Hence a comprehensive study, including genomics, epigenomics, transcriptomics, proteomics, and metabolomics, along with medical/mass-spectrometry imaging, patient clinical history, treatments provided, genetics, and disease endemicity, is essential. The Cancer Moonshot℠ Research Initiative by the NIH National Cancer Institute aims to collect as much information as possible from different regions of the world and build a cancer data repository. AI could play an immense role in (a) the analysis of complex and heterogeneous data sets (multi-omics and/or inter-omics), (b) data integration to provide a holistic view of disease molecular mechanisms, (c) the identification of diagnostic and prognostic markers, and (d) the monitoring of patients' responses to drugs/treatments and recovery.
AI enables precision disease management well beyond the prevalent disease-stratification patterns, such as differential expression and supervised classification. This review highlights critical advances and challenges in omics data analysis, dealing with lab-to-lab data variability, and data integration. We also describe data mining and AI methods used to obtain robust results for precision medicine from "big" data. In the future, AI could be expanded to achieve ground-breaking progress in disease management.
Collapse
Affiliation(s)
- Sandip Kumar Patel
- Department of Biosciences and Bioengineering, Indian Institute of Technology Bombay, Mumbai, India
- Buck Institute for Research on Aging, Novato, CA, United States
| | - Bhawana George
- Department of Hematopathology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Vineeta Rai
- Department of Entomology & Plant Pathology, North Carolina State University, Raleigh, NC, United States
| |
Collapse
|
300
|
Keikhosravi A, Li B, Liu Y, Conklin MW, Loeffler AG, Eliceiri KW. Non-disruptive collagen characterization in clinical histopathology using cross-modality image synthesis. Commun Biol 2020; 3:414. [PMID: 32737412 PMCID: PMC7395097 DOI: 10.1038/s42003-020-01151-5] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2019] [Accepted: 07/16/2020] [Indexed: 12/20/2022] Open
Abstract
The importance of fibrillar collagen topology and organization in disease progression and prognostication has been characterized extensively across many types of cancer. These explorations have used either specialized imaging approaches, such as specific stains (e.g., picrosirius red), or advanced and costly imaging modalities (e.g., second harmonic generation (SHG) imaging) that are not currently part of the clinical workflow. To facilitate the analysis of stromal biomarkers in clinical workflows, it would be ideal to have technical approaches that can characterize fibrillar collagen on the standard H&E-stained slides produced during routine diagnostic work. Here, we present a machine learning-based stromal collagen image synthesis algorithm that can be incorporated into the existing H&E-based histopathology workflow. Specifically, this solution applies a convolutional neural network (CNN) directly to clinically standard H&E bright-field images to extract information about collagen fiber arrangement and alignment, without requiring additional specialized stains, imaging systems, or equipment.
Collapse
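The kind of fiber-alignment information the abstract above refers to can be illustrated with a classical baseline: the averaged structure tensor of a grayscale patch yields a dominant local orientation. This is a hand-crafted stand-in for intuition, not the paper's CNN-based synthesis.

```python
import numpy as np

def dominant_orientation(patch):
    """Dominant orientation (radians) of maximal intensity variation in a
    grayscale patch, from its averaged structure tensor. Fibers run
    perpendicular to this direction."""
    gy, gx = np.gradient(patch.astype(float))   # gradients along rows, cols
    jxx, jyy, jxy = (gx * gx).mean(), (gy * gy).mean(), (gx * gy).mean()
    # Principal eigen-direction of the 2x2 tensor [[jxx, jxy], [jxy, jyy]].
    return 0.5 * np.arctan2(2 * jxy, jxx - jyy)

# Synthetic "fibers": vertical stripes, so intensity varies horizontally.
x = np.arange(64)
stripes = np.tile(np.sin(x * 0.8), (64, 1))
theta = dominant_orientation(stripes)           # ~0 rad (horizontal gradient)
```

Aggregating such orientations over patches gives alignment statistics of the sort that SHG imaging, and the H&E-based synthesis in the paper, aim to quantify.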
Affiliation(s)
- Adib Keikhosravi
- Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA
- Laboratory for Optical and Computational Instrumentation, University of Wisconsin-Madison, Madison, WI, USA
| | - Bin Li
- Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA
- Laboratory for Optical and Computational Instrumentation, University of Wisconsin-Madison, Madison, WI, USA
- Morgridge Institute for Research, Madison, WI, USA
| | - Yuming Liu
- Laboratory for Optical and Computational Instrumentation, University of Wisconsin-Madison, Madison, WI, USA
| | - Matthew W Conklin
- Department of Cell and Regenerative Biology, University of Wisconsin-Madison, Madison, WI, USA
| | - Agnes G Loeffler
- Department of Pathology, MetroHealth Medical Center, Cleveland, OH, USA
| | - Kevin W Eliceiri
- Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA.
- Laboratory for Optical and Computational Instrumentation, University of Wisconsin-Madison, Madison, WI, USA.
- Morgridge Institute for Research, Madison, WI, USA.
- Department of Medical Physics, University of Wisconsin-Madison, Madison, WI, USA.
| |
Collapse
|