1
Jaiswal M, Mukhtar U, Shakya KS, Laddi A, Singha LA. Computerised assessment-a novel approach for calculation of percentage of hypomineralized lesion on incisors and its correlation with aesthetic concern. J Oral Biol Craniofac Res 2024; 14:570-577. [PMID: 39139516] [PMCID: PMC11320481] [DOI: 10.1016/j.jobcr.2024.07.004]
Abstract
Introduction Molar-incisor hypomineralization (MIH) is a localized, qualitative, demarcated enamel defect that affects first permanent molars (FPMs) and/or permanent incisors. The aim of the present study was to introduce a novel computerised assessment process to detect and quantify the percentage opacity associated with MIH-affected maxillary central incisors. Methodology Children (8-16 years) enrolled in the primary study had a mild (white/cream or yellow/brown) MIH lesion on a fully erupted maxillary permanent central incisor. Fifty standardised images of MIH lesions were captured in an artificially lit room with fixed parameters, then anonymized and securely stored. Images were analysed by AI-driven computerised software that generates output classifications via an algorithm crafted, through supervised machine learning (SML), from a meticulously annotated reference image dataset. To validate the computerised assessment of MIH lesions, the percentage of demarcated opacity was calculated using Adobe Photoshop CS7. Results The percentage of MIH lesion was calculated through histogram plotting, with the maxima ranging from 7.29% to 71.21% and a mean value of 34.51%. The validation score ranged from 10.29% to 67.27% with a mean value of 35.32%. The difference between the two was not statistically significant. Of the 50 patients, 11 had 1-30% of the surface affected by MIH, of whom 2 had an aesthetic concern; 24 had 30-60% affected, of whom 13 had an aesthetic concern; and 15 had >60% affected, of whom 12 had aesthetic concerns. Conclusions The proposed approach exhibits sufficient quality to be integrated into dental software addressing practical challenges encountered in daily clinical settings.
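The histogram-based percentage described above amounts to counting lesion pixels against tooth-surface pixels. A minimal sketch under that assumption; the `tooth_mask`/`lesion_mask` names and the binary-mask formulation are illustrative, not taken from the paper:

```python
import numpy as np

def lesion_percentage(tooth_mask: np.ndarray, lesion_mask: np.ndarray) -> float:
    """Percentage of the tooth surface covered by the hypomineralized lesion.

    Both inputs are boolean arrays of the same shape: `tooth_mask` marks
    pixels belonging to the tooth surface, `lesion_mask` marks pixels the
    classifier labelled as demarcated opacity.
    """
    tooth_pixels = np.count_nonzero(tooth_mask)
    if tooth_pixels == 0:
        raise ValueError("tooth mask is empty")
    lesion_pixels = np.count_nonzero(lesion_mask & tooth_mask)
    return 100.0 * lesion_pixels / tooth_pixels

# toy 4x4 example: 8 tooth pixels, 2 of them lesioned -> 25 %
tooth = np.zeros((4, 4), dtype=bool)
tooth[1:3, :] = True            # 8 tooth pixels
lesion = np.zeros((4, 4), dtype=bool)
lesion[1, 0:2] = True           # 2 lesion pixels
print(lesion_percentage(tooth, lesion))  # 25.0
```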
Affiliation(s)
- Manojkumar Jaiswal
- A Unit of Pediatric and Preventive Dentistry, Oral Health Sciences Center, Postgraduate Institute of Medical Education and Research, Chandigarh, India
- Umer Mukhtar
- A Unit of Pediatric and Preventive Dentistry, Oral Health Sciences Center, Postgraduate Institute of Medical Education and Research, Chandigarh, India
- Amit Laddi
- CSIR-Central Scientific Instruments Organisation, Chandigarh, India
- L Akash Singha
- A Unit of Pediatric and Preventive Dentistry, Oral Health Sciences Center, Postgraduate Institute of Medical Education and Research, Chandigarh, India
2
Elshamy R, Abu-Elnasr O, Elhoseny M, Elmougy S. Enhancing colorectal cancer histology diagnosis using modified deep neural networks optimizer. Sci Rep 2024; 14:19534. [PMID: 39174564] [PMCID: PMC11341685] [DOI: 10.1038/s41598-024-69193-x]
Abstract
Optimizers are the bottleneck of the training process of any convolutional neural network (CNN) model, and one of the critical steps when working with a CNN model is choosing the optimal optimizer for the specific problem. A recent research challenge is building new versions of traditional CNN optimizers that work more efficiently than the originals. This work therefore proposes SAdagrad, a novel enhanced version of the Adagrad optimizer that avoids Adagrad's drawbacks in tuning the learning rate at each step of the training process. To evaluate SAdagrad, this paper builds a CNN model that combines a fine-tuning technique with a weight decay technique, and trains it on the Kather colorectal cancer histology dataset, one of the most challenging datasets in recent research on the diagnosis of colorectal cancer (CRC). Recently, plenty of deep learning models have achieved successful results in CRC classification experiments; however, enhancing these models remains challenging. To train our proposed model, a transfer learning process adopted from a pre-trained model is applied and combined with a regularization technique that helps avoid overfitting. The experimental results show that SAdagrad reaches a remarkable accuracy (98%) when compared with the adaptive moment estimation optimizer (Adam) and the Adagrad optimizer. The experiments also reveal that the proposed model has more stable training and testing processes, reduces the overfitting problem across multiple epochs, and achieves higher accuracy than previous research on CRC diagnosis using the same Kather colorectal cancer histology dataset.
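For context, the baseline Adagrad rule that SAdagrad modifies divides the learning rate by the root of the accumulated squared gradients. The abstract does not specify SAdagrad's exact schedule, so only standard Adagrad is sketched here, on a toy quadratic:

```python
import numpy as np

def adagrad_step(w, grad, accum, lr=0.01, eps=1e-8):
    """One Adagrad update: the per-parameter step is lr / sqrt(sum of g^2).

    The accumulator only grows, so the effective learning rate can only
    shrink -- the drawback in learning-rate tuning the paper targets.
    """
    accum += grad ** 2                       # running sum of squared gradients
    w -= lr * grad / (np.sqrt(accum) + eps)  # per-parameter scaled step
    return w, accum

# minimise f(w) = w^2 (gradient 2w) from w = 1
w = np.array([1.0])
accum = np.zeros(1)
for _ in range(100):
    w, accum = adagrad_step(w, 2 * w, accum, lr=0.5)
print(abs(w[0]))  # close to 0
```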
Affiliation(s)
- Reham Elshamy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt.
- Osama Abu-Elnasr
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- Mohamed Elhoseny
- Department of Information Systems, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- Samir Elmougy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
3
Kumari S, Singh P. Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives. Comput Biol Med 2024; 170:107912. [PMID: 38219643] [DOI: 10.1016/j.compbiomed.2023.107912]
Abstract
Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address these issues, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
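Among the feature-alignment methods this survey categorises, a common choice is to penalise the distance between source- and target-domain feature distributions. A minimal linear-kernel maximum mean discrepancy (MMD) sketch; the function name and toy data are illustrative:

```python
import numpy as np

def mmd_linear(source: np.ndarray, target: np.ndarray) -> float:
    """Squared MMD with a linear kernel: ||mean(source) - mean(target)||^2.

    Added to the task loss during training, this term pulls the feature
    distributions of the labeled and unlabeled domains together.
    """
    delta = source.mean(axis=0) - target.mean(axis=0)
    return float(delta @ delta)

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(500, 16))       # labeled-domain features
tgt_far = rng.normal(2.0, 1.0, size=(500, 16))   # shifted target domain
tgt_near = rng.normal(0.0, 1.0, size=(500, 16))  # well-aligned target domain
print(mmd_linear(src, tgt_far) > mmd_linear(src, tgt_near))  # True
```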
Affiliation(s)
- Suruchi Kumari
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.
- Pravendra Singh
- Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India.
4
Duci M, Magoni A, Santoro L, Dei Tos AP, Gamba P, Uccheddu F, Fascetti-Leon F. Enhancing diagnosis of Hirschsprung's disease using deep learning from histological sections of post pull-through specimens: preliminary results. Pediatr Surg Int 2023; 40:12. [PMID: 38019366] [PMCID: PMC10687181] [DOI: 10.1007/s00383-023-05590-z]
Abstract
PURPOSE Accurate histological diagnosis of Hirschsprung disease (HD) is challenging due to its complexity and potential for errors. In this study, we present an artificial intelligence (AI)-based method designed to identify ganglionic cells and hypertrophic nerves in HD histology. METHODS Formalin-fixed samples were used; an expert pathologist and a surgeon annotated the slides on a web-based platform, identifying ganglionic cells and nerves. Images were partitioned into square sections, augmented through data manipulation techniques, and used to develop two distinct U-net models: one for detecting ganglionic cells and normal nerves, the other for recognising hypertrophic nerves. RESULTS The study included 108 annotated samples, resulting in 19,600 images after data augmentation and manual segmentation. Subsequently, 17,655 slides without target elements were excluded. The algorithm was trained using 1945 slides (930 for model 1 and 1015 for model 2), with 1556 slides used for training the supervised network and 389 for validation. The accuracy of model 1 was 92.32%, while model 2 achieved an accuracy of 91.5%. CONCLUSION The AI-based U-net technique demonstrates robustness in detecting ganglion cells and nerves in HD. This deep learning approach has the potential to standardise and streamline HD diagnosis, benefiting patients and aiding in the training of pathologists.
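The abstract reports accuracy; U-net segmentations are more often scored with an overlap measure such as the Dice coefficient. A hedged illustration of that standard metric (not the paper's own evaluation code):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient: 2|A∩B| / (|A|+|B|) for two binary masks.

    1.0 means perfect overlap between the predicted segmentation and the
    pathologist's annotation; 0.0 means no overlap at all.
    """
    inter = np.count_nonzero(pred & truth)
    denom = np.count_nonzero(pred) + np.count_nonzero(truth)
    return 1.0 if denom == 0 else 2.0 * inter / denom

a = np.zeros((4, 4), dtype=bool); a[:2] = True   # top two rows predicted
b = np.zeros((4, 4), dtype=bool); b[1:3] = True  # middle two rows annotated
print(dice(a, b))  # 0.5: 4 overlapping pixels, 8 + 8 in total
```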
Affiliation(s)
- Miriam Duci
- Division of Pediatric Surgery, Department of Women's and Children's Health, University of Padova, Via Giustiniani 2, 35128, Padova, Italy
- Pediatric Surgery Unit, Division of Women's and Children's Health, Padova University Hospital, Padova, Italy
- Alessia Magoni
- Department of Industrial Engineering, Padova University, Padova, Italy
- Luisa Santoro
- Surgical Pathology and Cytopathology Unit, Department of Medicine, Padova University, Padova, Italy
- Angelo Paolo Dei Tos
- Surgical Pathology and Cytopathology Unit, Department of Medicine, Padova University, Padova, Italy
- Piergiorgio Gamba
- Division of Pediatric Surgery, Department of Women's and Children's Health, University of Padova, Via Giustiniani 2, 35128, Padova, Italy
- Pediatric Surgery Unit, Division of Women's and Children's Health, Padova University Hospital, Padova, Italy
- Francesco Fascetti-Leon
- Division of Pediatric Surgery, Department of Women's and Children's Health, University of Padova, Via Giustiniani 2, 35128, Padova, Italy.
- Pediatric Surgery Unit, Division of Women's and Children's Health, Padova University Hospital, Padova, Italy.
5
Wang Y, Ali MA, Vallon-Christersson J, Humphreys K, Hartman J, Rantalainen M. Transcriptional intra-tumour heterogeneity predicted by deep learning in routine breast histopathology slides provides independent prognostic information. Eur J Cancer 2023; 191:112953. [PMID: 37494846] [DOI: 10.1016/j.ejca.2023.112953]
Abstract
BACKGROUND Intra-tumour heterogeneity (ITH) causes diagnostic challenges and increases the risk of disease recurrence. Quantification of ITH is challenging and has not been demonstrated in large studies. It has previously been shown that deep learning can enable spatially resolved prediction of molecular phenotypes from digital histopathology whole slide images (WSIs). Here we propose a novel method (Deep-ITH) to predict and measure ITH, and we evaluate its prognostic performance in breast cancer. METHODS Deep convolutional neural networks were used to spatially predict gene expression (PAM50 set) from WSIs. For each predicted transcript, 12 measures of heterogeneity were extracted in the training data set (N = 931). A prognostic score to dichotomise patients into Deep-ITH low- and high-risk groups was established using an elastic-net regularised Cox proportional hazards model (recurrence-free survival). Prognostic performance was evaluated in two independent data sets: SöS-BC-1 (N = 1358) and SCAN-B-Lund (N = 1262). RESULTS We observed an increased risk of recurrence in the high-risk group, with hazard ratio (HR) 2.11 (95% CI: 1.22-3.60; p = 0.007) using nested cross-validation. Subgroup analyses confirmed the prognostic performance in oestrogen receptor (ER)-positive, human epidermal growth factor receptor 2 (HER2)-negative, grade 3, and large-tumour subgroups. The prognostic value was confirmed in the independent SöS-BC-1 cohort (HR = 1.84; 95% CI: 1.03-3.3; p = 0.0399). In the other external cohort, a significant HR was observed in the subgroup of histological grade 2 patients, as well as in patients with small tumours (<20 mm). CONCLUSION We developed a novel method for an automated, scalable, and cost-efficient measure of ITH from WSIs that provides independent prognostic value for breast cancer.
SIGNIFICANCE Transcriptional ITH predicted by deep learning models enables prediction of patient survival from routine histopathology WSIs in breast cancer.
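The abstract does not enumerate the 12 heterogeneity measures. As a hedged illustration only, simple dispersion statistics over tile-level predictions of a single transcript could look like this (the function name and the particular measures are assumptions, not the paper's):

```python
import numpy as np

def heterogeneity_measures(tile_preds: np.ndarray) -> dict:
    """Dispersion summaries of per-tile predicted expression for one slide.

    `tile_preds` holds one predicted expression value per tile; a slide with
    spatially uniform predictions scores low on every measure.
    """
    p25, p75 = np.percentile(tile_preds, [25, 75])
    return {
        "std": float(tile_preds.std()),                       # overall spread
        "iqr": float(p75 - p25),                              # robust spread
        "range": float(tile_preds.max() - tile_preds.min()),  # extremes
    }

homogeneous = np.full(200, 1.0)                   # uniform tumour
heterogeneous = np.concatenate([np.full(100, 0.0), np.full(100, 2.0)])
print(heterogeneity_measures(homogeneous)["std"])    # 0.0
print(heterogeneity_measures(heterogeneous)["std"])  # 1.0
```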
Affiliation(s)
- Yinxi Wang
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Maya Alsheh Ali
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Keith Humphreys
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Johan Hartman
- Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden; Department of Clinical Pathology and Cancer Diagnostics, Karolinska University Hospital, Stockholm, Sweden; MedTechLabs, BioClinicum, Karolinska University Hospital, Solna, Sweden
- Mattias Rantalainen
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden; MedTechLabs, BioClinicum, Karolinska University Hospital, Solna, Sweden.
6
Martos O, Hoque MZ, Keskinarkaus A, Kemi N, Näpänkangas J, Eskuri M, Pohjanen VM, Kauppila JH, Seppänen T. Optimized detection and segmentation of nuclei in gastric cancer images using stain normalization and blurred artifact removal. Pathol Res Pract 2023; 248:154694. [PMID: 37494804] [DOI: 10.1016/j.prp.2023.154694]
Abstract
Histological analysis with microscopy is the gold standard for diagnosing and staging cancer: slides or whole slide images are analyzed for cell morphological and spatial features by pathologists. The nuclei of cancerous cells are characterized by nonuniform chromatin distribution, irregular shapes, and varying sizes. As nucleus area and shape alone carry prognostic value, detection and segmentation of nuclei are among the most important steps in disease grading. However, evaluation of nuclei is a laborious, time-consuming, and subjective process with large variation among pathologists. Recent advances in digital pathology have enabled significant applications in nuclei detection, segmentation, and classification, but automated image analysis is greatly affected by staining factors, scanner variability, and imaging artifacts, requiring robust image preprocessing, normalization, and segmentation methods for clinically satisfactory results. In this paper, we aimed to evaluate and compare the digital image analysis techniques used in clinical pathology and research in the setting of gastric cancer. A literature review was conducted to evaluate potential methods of improving nuclei detection. Digitized images of 35 patients from a retrospective cohort of gastric adenocarcinoma at Oulu University Hospital in 1987-2016 were annotated for nuclei (n = 9085) by expert pathologists, and 14 images of different cancer types from the public TCGA dataset with annotated nuclei (n = 7000) were used as a comparison to evaluate applicability to other cancer types. The detection and segmentation accuracy obtained with the selected color normalization and stain separation techniques was compared between the methods. The extracted information can be supplemented by the patient's medical data and fed to existing statistical clinical tools or subjected to subsequent AI-assisted classification and prediction models.
The performance of each method is evaluated by several metrics against the annotations made by expert pathologists. An F1-measure of 0.854 ± 0.068 is achieved with color normalization for the gastric cancer dataset, and 0.907 ± 0.044 with color deconvolution for the public dataset, showing results comparable to earlier state-of-the-art work. The developed techniques serve as a basis for further research on the application and interpretability of AI-assisted tools for gastric cancer diagnosis.
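Color deconvolution, one of the stain separation techniques compared above, classically follows Ruifrok and Johnston's Beer-Lambert formulation. A minimal sketch using the commonly published H&E stain vectors (assumed here for illustration, not taken from the paper):

```python
import numpy as np

# Widely used H&E stain vectors (after Ruifrok & Johnston): rows are the
# haematoxylin and eosin absorption directions in RGB optical-density space.
HE_STAINS = np.array([[0.650, 0.704, 0.286],
                      [0.072, 0.990, 0.105]])

def separate_stains(rgb: np.ndarray) -> np.ndarray:
    """Project an RGB image into per-stain concentration maps.

    Beer-Lambert: OD = -log10(I / I0); per-pixel concentrations are the
    least-squares solution of stain_matrix^T @ c = OD.
    """
    od = -np.log10(np.clip(rgb.astype(float) / 255.0, 1e-6, 1.0))
    flat = od.reshape(-1, 3)
    conc, *_ = np.linalg.lstsq(HE_STAINS.T, flat.T, rcond=None)
    return conc.T.reshape(rgb.shape[:2] + (2,))

# a purely white pixel has zero optical density, hence zero stain concentration
white = np.full((1, 1, 3), 255, dtype=np.uint8)
print(np.allclose(separate_stains(white), 0.0))  # True
```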
Affiliation(s)
- Oleg Martos
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Md Ziaul Hoque
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland.
- Anja Keskinarkaus
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
- Niko Kemi
- Department of Pathology, Oulu University Hospital, Finland, and University of Oulu, Finland
- Juha Näpänkangas
- Department of Pathology, Oulu University Hospital, Finland, and University of Oulu, Finland
- Maarit Eskuri
- Department of Pathology, Oulu University Hospital, Finland, and University of Oulu, Finland
- Vesa-Matti Pohjanen
- Department of Pathology, Oulu University Hospital, Finland, and University of Oulu, Finland
- Joonas H Kauppila
- Department of Surgery, Oulu University Hospital, Finland, and University of Oulu, Finland
- Tapio Seppänen
- Center for Machine Vision and Signal Analysis, Faculty of Information Technology and Electrical Engineering, University of Oulu, Finland
7
Baidar Bakht A, Javed S, Gilani SQ, Karki H, Muneeb M, Werghi N. DeepBLS: Deep Feature-Based Broad Learning System for Tissue Phenotyping in Colorectal Cancer WSIs. J Digit Imaging 2023; 36:1653-1662. [PMID: 37059892] [PMCID: PMC10406762] [DOI: 10.1007/s10278-023-00797-x]
Abstract
Tissue phenotyping is a fundamental step in computational pathology for the analysis of the tumor micro-environment in whole slide images (WSIs). Automatic tissue phenotyping in WSIs of colorectal cancer (CRC) assists pathologists in better cancer grading and prognostication. In this paper, we propose a novel algorithm for identifying distinct tissue components in colon cancer histology images by blending a broad learning system with deep feature extraction. First, features are extracted from a pre-trained VGG19 network and transformed into a mapped feature space for enhancement node generation. Utilizing both mapped features and enhancement nodes, the proposed algorithm classifies seven distinct tissue components: stroma, tumor, complex stroma, necrotic, normal benign, lymphocytes, and smooth muscle. To validate the proposed model, experiments are performed on two publicly available colorectal cancer histology datasets. Our approach achieves a remarkable performance boost, surpassing existing state-of-the-art methods by (1.3% AvTP, 2% F1) and (7% AvTP, 6% F1) on CRCD-1 and CRCD-2, respectively.
Affiliation(s)
- Ahsan Baidar Bakht
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
- Sajid Javed
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
- Syed Qasim Gilani
- Department of Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, 33431 USA
- Hamad Karki
- Mechanical Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
- Muhammad Muneeb
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
- Naoufel Werghi
- Electrical and Computer Engineering Department, Khalifa University, 12778 Abu Dhabi, United Arab Emirates
8
Küchler L, Posthaus C, Jäger K, Guscetti F, van der Weyden L, von Bomhard W, Schmidt JM, Farra D, Aupperle-Lellbach H, Kehl A, Rottenberg S, de Brot S. Artificial Intelligence to Predict the BRAF V595E Mutation in Canine Urinary Bladder Urothelial Carcinomas. Animals (Basel) 2023; 13:2404. [PMID: 37570213] [PMCID: PMC10416820] [DOI: 10.3390/ani13152404]
Abstract
In dogs, the BRAF mutation (V595E) is common in bladder and prostate cancer and represents a specific diagnostic marker. Recent advances in artificial intelligence (AI) offer new opportunities in the field of tumour marker detection. While AI histology studies have been conducted in humans to detect BRAF mutation in cancer, comparable studies in animals are lacking. In this study, we used commercially available AI histology software to predict BRAF mutation in whole slide images (WSI) of bladder urothelial carcinomas (UC) stained with haematoxylin and eosin (HE), based on a training set (n = 81) and a validation set (n = 96). Among the 96 WSI, 57 showed identical PCR- and AI-based BRAF predictions, resulting in a sensitivity of 58% and a specificity of 63%. The sensitivity increased substantially to 89% when small or poor-quality tissue sections were excluded. Test reliability depended on tumour differentiation (p < 0.01), presence of inflammation (p < 0.01), slide quality (p < 0.02) and sample size (p < 0.02). Based on a small subset of cases with available adjacent non-neoplastic urothelium, the AI was able to distinguish malignant from benign epithelium. This is the first study to demonstrate the use of AI histology to predict BRAF mutation status in canine UC. Despite certain limitations, the results highlight the potential of AI in predicting molecular alterations in routine tissue sections.
Affiliation(s)
- Leonore Küchler
- Institute of Animal Pathology, Vetsuisse Faculty, University of Bern, 3012 Bern, Switzerland; (C.P.); (S.R.)
- Caroline Posthaus
- Institute of Animal Pathology, Vetsuisse Faculty, University of Bern, 3012 Bern, Switzerland; (C.P.); (S.R.)
- Kathrin Jäger
- Laboklin GmbH & Co. KG, 97688 Bad Kissingen, Germany; (K.J.); (H.A.-L.); (A.K.)
- Institute of Pathology, Department of Comparative Experimental Pathology, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Franco Guscetti
- Institute of Veterinary Pathology, Vetsuisse Faculty, University of Zurich, 8057 Zurich, Switzerland;
- Dima Farra
- Veterinary Public Health Institute, Vetsuisse Faculty, University of Bern, 3012 Bern, Switzerland;
- Heike Aupperle-Lellbach
- Laboklin GmbH & Co. KG, 97688 Bad Kissingen, Germany; (K.J.); (H.A.-L.); (A.K.)
- Institute of Pathology, Department of Comparative Experimental Pathology, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Alexandra Kehl
- Laboklin GmbH & Co. KG, 97688 Bad Kissingen, Germany; (K.J.); (H.A.-L.); (A.K.)
- Institute of Pathology, Department of Comparative Experimental Pathology, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Sven Rottenberg
- Institute of Animal Pathology, Vetsuisse Faculty, University of Bern, 3012 Bern, Switzerland; (C.P.); (S.R.)
- COMPATH, Vetsuisse Faculty, University of Bern, 3012 Bern, Switzerland
- Bern Center for Precision Medicine, University of Bern, 3008 Bern, Switzerland
- Simone de Brot
- Institute of Animal Pathology, Vetsuisse Faculty, University of Bern, 3012 Bern, Switzerland; (C.P.); (S.R.)
- COMPATH, Vetsuisse Faculty, University of Bern, 3012 Bern, Switzerland
- Bern Center for Precision Medicine, University of Bern, 3008 Bern, Switzerland
9
Amato D, Calderaro S, Lo Bosco G, Rizzo R, Vella F. Metric Learning in Histopathological Image Classification: Opening the Black Box. Sensors (Basel) 2023; 23:6003. [PMID: 37447857] [DOI: 10.3390/s23136003]
Abstract
The application of machine learning techniques to histopathology images enables advances in the field, providing valuable tools that can speed up and facilitate the diagnosis process. The classification of these images is a relevant aid for physicians who have to process large numbers of images in long and repetitive tasks. This work proposes the adoption of metric learning which, beyond the task of classifying images, can provide additional information to support the decisions of the classification system. In particular, triplet networks are employed to create a representation in the embedding space that gathers together images of the same class while tending to separate images with different labels. The obtained representation shows an evident separation of the classes, with the possibility of evaluating the similarity and dissimilarity among input images according to distance criteria. The model has been tested on the BreakHis dataset, a widely used reference dataset that collects breast cancer images with eight pathology labels and four magnification levels. Our proposed classification model achieves relevant performance at the patient level, with the advantage of providing interpretable information for the obtained results, a feature missed by the recent methodologies proposed for the same purpose.
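A triplet network is trained with a loss that compares an anchor to a same-class (positive) and a different-class (negative) example. A minimal sketch of the standard hinge form on precomputed embeddings (the margin value and toy vectors are illustrative):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge triplet loss: zero once the positive is at least `margin`
    closer to the anchor than the negative is; positive otherwise."""
    d_pos = np.linalg.norm(anchor - positive)  # same-class distance
    d_neg = np.linalg.norm(anchor - negative)  # different-class distance
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])       # anchor embedding
p = np.array([0.1, 0.0])       # same class, nearby
n = np.array([3.0, 0.0])       # different class, far away
print(triplet_loss(a, p, n))   # 0.0 -- constraint already satisfied
print(triplet_loss(a, n, p))   # 3.9 -- violated: 3.0 - 0.1 + 1.0
```

Minimising this loss over many triplets is what produces the class-separated embedding space the abstract describes.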
Affiliation(s)
- Domenico Amato
- Department of Mathematics and Computer Science, University of Palermo, 90123 Palermo, Italy
- Salvatore Calderaro
- Department of Mathematics and Computer Science, University of Palermo, 90123 Palermo, Italy
- Giosué Lo Bosco
- Department of Mathematics and Computer Science, University of Palermo, 90123 Palermo, Italy
- Riccardo Rizzo
- Institute for High-Performance Computing and Networking, National Research Council of Italy, 90146 Palermo, Italy
- Filippo Vella
- Institute for High-Performance Computing and Networking, National Research Council of Italy, 90146 Palermo, Italy
10
Chattopadhyay S, Singh PK, Ijaz MF, Kim S, Sarkar R. SnapEnsemFS: a snapshot ensembling-based deep feature selection model for colorectal cancer histological analysis. Sci Rep 2023; 13:9937. [PMID: 37336964] [DOI: 10.1038/s41598-023-36921-8]
Abstract
Colorectal cancer is the third most common type of cancer diagnosed annually and the second leading cause of death due to cancer. Early diagnosis of this ailment is vital for preventing tumours from spreading and for planning treatment that can possibly eradicate the disease. However, population-wide screening is stunted by the requirement for medical professionals to analyse histological slides manually. Thus, an automated computer-aided detection (CAD) framework based on deep learning is proposed in this research that uses histological slide images for predictions. Ensemble learning is a popular strategy for fusing the salient properties of several models to make final predictions; however, such frameworks are computationally costly, since they require the training of multiple base learners. Instead, this study adopts a snapshot ensemble method wherein, rather than the traditional fusion of decision scores from the snapshots of a Convolutional Neural Network (CNN) model, deep features are extracted from the penultimate layer of the CNN model. Since the deep features are extracted from the same CNN model but under different learning environments, there may be redundancy in the feature set. To alleviate this, the features are fed into Particle Swarm Optimization, a popular meta-heuristic, for dimensionality reduction of the feature space and better classification. Upon evaluation on a publicly available colorectal cancer histology dataset using a five-fold cross-validation scheme, the proposed method obtains a highest accuracy of 97.60% and an F1-score of 97.61%, outperforming existing state-of-the-art methods on the same dataset. Further, qualitative investigation of class activation maps provides visual explainability to medical practitioners and justifies the use of the CAD framework in the screening of colorectal histology. Our source codes are publicly accessible at: https://github.com/soumitri2001/SnapEnsemFS.
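Snapshot ensembling typically relies on a cyclic learning-rate schedule with warm restarts, saving one model snapshot at the end of each cycle so that a single training run yields several diverse models. A minimal sketch of such a schedule (the cosine form and the parameter values are common choices assumed here, not taken from the paper):

```python
import math

def snapshot_lr(step, steps_per_cycle, lr_max=0.1):
    """Cosine-annealed cyclic learning rate with warm restarts.

    Within each cycle the LR decays from lr_max to ~0; at the restart it
    jumps back to lr_max. A snapshot is saved just before each restart,
    when the model sits in a local minimum.
    """
    t = (step % steps_per_cycle) / steps_per_cycle  # position within cycle
    return lr_max / 2.0 * (math.cos(math.pi * t) + 1.0)

# LR is maximal right after each restart and near zero at the cycle's end
print(snapshot_lr(0, 100))    # 0.1
print(snapshot_lr(99, 100))   # close to 0 -- snapshot taken here
print(snapshot_lr(100, 100))  # back to 0.1 (restart)
```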
Affiliation(s)
- Soumitri Chattopadhyay
- Department of Information Technology, Jadavpur University, Jadavpur University Second Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata, 700106, West Bengal, India
- Pawan Kumar Singh
- Department of Information Technology, Jadavpur University, Jadavpur University Second Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata, 700106, West Bengal, India
- Muhammad Fazal Ijaz
- Department of Mechanical Engineering, Faculty of Engineering and Information Technology, The University of Melbourne, Grattam Street, Parkville, VIC, 3010, Australia.
- SeongKi Kim
- National Centre of Excellence in Software, Sangmyung University, Seoul, 03016, Korea.
- Ram Sarkar
- Department of Computer Science & Engineering, Jadavpur University, Kolkata, 700032, India
11
Altini N, Marvulli TM, Zito FA, Caputo M, Tommasi S, Azzariti A, Brunetti A, Prencipe B, Mattioli E, De Summa S, Bevilacqua V. The role of unpaired image-to-image translation for stain color normalization in colorectal cancer histology classification. Comput Methods Programs Biomed 2023; 234:107511. [PMID: 37011426] [DOI: 10.1016/j.cmpb.2023.107511]
Abstract
BACKGROUND Histological assessment of colorectal cancer (CRC) tissue is a crucial and demanding task for pathologists. Unfortunately, manual annotation by trained specialists is a burdensome operation, which suffers from problems such as intra- and inter-pathologist variability. Computational models are revolutionizing the Digital Pathology field, offering reliable and fast approaches for challenges like tissue segmentation and classification. In this respect, an important obstacle to overcome is stain color variation among different laboratories, which can decrease the performance of classifiers. In this work, we investigated the role of Unpaired Image-to-Image Translation (UI2IT) models for stain color normalization in CRC histology and compared them to classical normalization techniques for Hematoxylin-Eosin (H&E) images. METHODS Five Deep Learning normalization models based on Generative Adversarial Networks (GANs) belonging to the UI2IT paradigm were thoroughly compared to realize a robust stain color normalization pipeline. To avoid the need to train a style transfer GAN between each pair of data domains, in this paper we introduce the concept of training by exploiting a meta-domain, which contains data coming from a wide variety of laboratories. The proposed framework enables a huge saving in training time, since a single image normalization model is trained for a target laboratory. To prove the applicability of the proposed workflow in clinical practice, we conceived a novel perceptive quality measure, which we defined as Pathologist Perceptive Quality (PPQ). The second stage involved the classification of tissue types in CRC histology, where deep features extracted from Convolutional Neural Networks were exploited to realize a Computer-Aided Diagnosis system based on a Support Vector Machine (SVM).
To prove the reliability of the system on new data, an external validation set composed of N = 15,857 tiles was collected at IRCCS Istituto Tumori "Giovanni Paolo II". RESULTS Exploiting a meta-domain made it possible to train normalization models that achieved better classification results than normalization models explicitly trained on the source domain. The PPQ metric was found to correlate with the quality of distributions (Fréchet Inception Distance, FID) and with the similarity of the transformed image to the original one (Learned Perceptual Image Patch Similarity, LPIPS), showing that GAN quality measures introduced for natural image processing tasks can be linked to pathologists' evaluation of H&E images. Furthermore, FID was found to correlate with the accuracy of the downstream classifiers. The SVM trained on DenseNet201 features obtained the highest classification results in all configurations. The normalization method based on FastCUT, the fast variant of CUT (Contrastive Unpaired Translation), trained with the meta-domain paradigm achieved the best classification result for the downstream task and, correspondingly, showed the highest FID on the classification dataset. CONCLUSIONS Stain color normalization is a difficult but fundamental problem in the histopathological setting. Several measures should be considered when assessing normalization methods, so that they can be introduced into clinical practice. UI2IT frameworks offer a powerful and effective way to perform the normalization process, providing realistic images with proper colorization, unlike traditional normalization methods, which introduce color artifacts. By adopting the proposed meta-domain framework, the training time can be reduced and the accuracy of downstream classifiers can be increased.
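For contrast with the GAN-based normalizers, one of the classical techniques they are compared against can be sketched as Reinhard-style statistics matching. Note that the real Reinhard method operates in the lαβ color space; this dependency-free sketch matches statistics per RGB channel instead, purely for illustration:

```python
import numpy as np

def reinhard_normalize(src, ref):
    """Match each channel's mean/std of `src` to those of `ref`.
    Classical Reinhard normalization does this in the lαβ color space;
    plain RGB is used here to keep the sketch dependency-free."""
    src = src.astype(float)
    ref = ref.astype(float)
    out = np.empty_like(src)
    for c in range(3):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / s_sd * r_sd + r_mu
    return np.clip(out, 0, 255).astype(np.uint8)
```

Unlike the UI2IT models above, this kind of global statistics matching can introduce the color artifacts the paper criticizes, since it ignores tissue structure.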
Affiliation(s)
- Nicola Altini
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari 70126, Italy
- Tommaso Maria Marvulli
- Laboratory of Experimental Pharmacology, IRCCS Istituto Tumori "Giovanni Paolo II", Via O. Flacco, 65, Bari 70124, Italy
- Francesco Alfredo Zito
- Pathology Department, IRCCS Istituto Tumori "Giovanni Paolo II", Via O. Flacco, 65, Bari 70124, Italy
- Mariapia Caputo
- Molecular Diagnostics and Pharmacogenetics Unit, IRCCS Istituto Tumori "Giovanni Paolo II", Via O. Flacco, 65, Bari 70124, Italy
- Stefania Tommasi
- Molecular Diagnostics and Pharmacogenetics Unit, IRCCS Istituto Tumori "Giovanni Paolo II", Via O. Flacco, 65, Bari 70124, Italy
- Amalia Azzariti
- Laboratory of Experimental Pharmacology, IRCCS Istituto Tumori "Giovanni Paolo II", Via O. Flacco, 65, Bari 70124, Italy
- Antonio Brunetti
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari 70126, Italy; Apulian Bioengineering srl, Via delle Violette, 14, Modugno 70026, Italy
- Berardino Prencipe
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari 70126, Italy
- Eliseo Mattioli
- Pathology Department, IRCCS Istituto Tumori "Giovanni Paolo II", Via O. Flacco, 65, Bari 70124, Italy
- Simona De Summa
- Molecular Diagnostics and Pharmacogenetics Unit, IRCCS Istituto Tumori "Giovanni Paolo II", Via O. Flacco, 65, Bari 70124, Italy
- Vitoantonio Bevilacqua
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari 70126, Italy; Apulian Bioengineering srl, Via delle Violette, 14, Modugno 70026, Italy
12
Firmbach D, Benz M, Kuritcyn P, Bruns V, Lang-Schwarz C, Stuebs FA, Merkel S, Leikauf LS, Braunschweig AL, Oldenburger A, Gloßner L, Abele N, Eck C, Matek C, Hartmann A, Geppert CI. Tumor-Stroma Ratio in Colorectal Cancer-Comparison between Human Estimation and Automated Assessment. Cancers (Basel) 2023; 15:2675. [PMID: 37345012 DOI: 10.3390/cancers15102675] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Revised: 04/27/2023] [Accepted: 05/02/2023] [Indexed: 06/23/2023] Open
Abstract
The tumor-stroma ratio (TSR) has repeatedly been shown to be a prognostic factor for survival prediction across different cancer types. However, an objective and reliable determination of the tumor-stroma ratio remains challenging. We present an easily adaptable deep learning model for accurately segmenting tumor regions in hematoxylin and eosin (H&E)-stained whole slide images (WSIs) of colon cancer patients into five distinct classes (tumor, stroma, necrosis, mucus, and background). The tumor-stroma ratio can then be determined even in the presence of necrotic or mucinous areas. We employ a few-shot model, aiming for easy adaptability of our approach to related segmentation tasks or other primaries, and compare the results to a well-established state-of-the-art approach (U-Net). Both models achieve similar results, with an overall accuracy of 86.5% and 86.7%, respectively, indicating that the adaptability does not lead to a significant decrease in accuracy. Moreover, we comprehensively compare our results with TSR estimates of human observers and examine discrepancies and inter-rater reliability in detail. Adding a second survey on segmentation quality on top of a first survey for TSR estimation, we found that TSR estimations of human observers are not as reliable a ground truth as previously thought.
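Given a five-class mask like the one described above, computing the tumor-stroma ratio reduces to pixel counting. The label values and the convention used here (stroma share of the combined tumor+stroma area, ignoring necrosis, mucus, and background) are assumptions for illustration, since TSR conventions vary between studies:

```python
import numpy as np

# Illustrative label values for the five-class segmentation.
TUMOR, STROMA, NECROSIS, MUCUS, BACKGROUND = 0, 1, 2, 3, 4

def tumor_stroma_ratio(mask):
    """TSR from a per-pixel class mask: the stroma fraction of the
    tumor+stroma area; necrosis, mucus and background are excluded,
    mirroring the exclusion of necrotic/mucinous areas above."""
    tumor = (mask == TUMOR).sum()
    stroma = (mask == STROMA).sum()
    if tumor + stroma == 0:
        return float("nan")   # no tumor/stroma tissue in this mask
    return stroma / (tumor + stroma)
```

In practice the mask would come from the segmentation model's per-pixel argmax over a whole slide image.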
Affiliation(s)
- Daniel Firmbach
- Digital Health Systems Department, Fraunhofer-Institute for Integrated Circuits IIS, Am Wolfsmantel 33, 91058 Erlangen, Germany
- Institute of Pathology, University Hospital Erlangen, FAU Erlangen-Nuremberg, Krankenhausstr. 8-10, 91054 Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC), University Hospital Erlangen, FAU Erlangen-Nuremberg, Östliche Stadtmauerstr. 30, 91054 Erlangen, Germany
- Michaela Benz
- Digital Health Systems Department, Fraunhofer-Institute for Integrated Circuits IIS, Am Wolfsmantel 33, 91058 Erlangen, Germany
- Petr Kuritcyn
- Digital Health Systems Department, Fraunhofer-Institute for Integrated Circuits IIS, Am Wolfsmantel 33, 91058 Erlangen, Germany
- Volker Bruns
- Digital Health Systems Department, Fraunhofer-Institute for Integrated Circuits IIS, Am Wolfsmantel 33, 91058 Erlangen, Germany
- Corinna Lang-Schwarz
- Institute of Pathology, Hospital Bayreuth, Preuschwitzer Str. 101, 95445 Bayreuth, Germany
- Frederik A Stuebs
- Comprehensive Cancer Center Erlangen-EMN (CCC), University Hospital Erlangen, FAU Erlangen-Nuremberg, Östliche Stadtmauerstr. 30, 91054 Erlangen, Germany
- Department of Obstetrics and Gynaecology, University Hospital Erlangen, FAU Erlangen-Nuremberg, Universitätsstraße 21-23, 91054 Erlangen, Germany
- Susanne Merkel
- Comprehensive Cancer Center Erlangen-EMN (CCC), University Hospital Erlangen, FAU Erlangen-Nuremberg, Östliche Stadtmauerstr. 30, 91054 Erlangen, Germany
- Department of Surgery, University Hospital Erlangen, FAU Erlangen-Nuremberg, Krankenhausstr. 12, 91054 Erlangen, Germany
- Leah-Sophie Leikauf
- Institute of Pathology, University Hospital Erlangen, FAU Erlangen-Nuremberg, Krankenhausstr. 8-10, 91054 Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC), University Hospital Erlangen, FAU Erlangen-Nuremberg, Östliche Stadtmauerstr. 30, 91054 Erlangen, Germany
- Anna-Lea Braunschweig
- Institute of Pathology, University Hospital Erlangen, FAU Erlangen-Nuremberg, Krankenhausstr. 8-10, 91054 Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC), University Hospital Erlangen, FAU Erlangen-Nuremberg, Östliche Stadtmauerstr. 30, 91054 Erlangen, Germany
- Angelika Oldenburger
- Institute of Pathology, University Hospital Erlangen, FAU Erlangen-Nuremberg, Krankenhausstr. 8-10, 91054 Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC), University Hospital Erlangen, FAU Erlangen-Nuremberg, Östliche Stadtmauerstr. 30, 91054 Erlangen, Germany
- Laura Gloßner
- Institute of Pathology, University Hospital Erlangen, FAU Erlangen-Nuremberg, Krankenhausstr. 8-10, 91054 Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC), University Hospital Erlangen, FAU Erlangen-Nuremberg, Östliche Stadtmauerstr. 30, 91054 Erlangen, Germany
- Niklas Abele
- Institute of Pathology, University Hospital Erlangen, FAU Erlangen-Nuremberg, Krankenhausstr. 8-10, 91054 Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC), University Hospital Erlangen, FAU Erlangen-Nuremberg, Östliche Stadtmauerstr. 30, 91054 Erlangen, Germany
- Christine Eck
- Institute of Pathology, University Hospital Erlangen, FAU Erlangen-Nuremberg, Krankenhausstr. 8-10, 91054 Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC), University Hospital Erlangen, FAU Erlangen-Nuremberg, Östliche Stadtmauerstr. 30, 91054 Erlangen, Germany
- Christian Matek
- Institute of Pathology, University Hospital Erlangen, FAU Erlangen-Nuremberg, Krankenhausstr. 8-10, 91054 Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC), University Hospital Erlangen, FAU Erlangen-Nuremberg, Östliche Stadtmauerstr. 30, 91054 Erlangen, Germany
- Arndt Hartmann
- Institute of Pathology, University Hospital Erlangen, FAU Erlangen-Nuremberg, Krankenhausstr. 8-10, 91054 Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC), University Hospital Erlangen, FAU Erlangen-Nuremberg, Östliche Stadtmauerstr. 30, 91054 Erlangen, Germany
- Carol I Geppert
- Institute of Pathology, University Hospital Erlangen, FAU Erlangen-Nuremberg, Krankenhausstr. 8-10, 91054 Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC), University Hospital Erlangen, FAU Erlangen-Nuremberg, Östliche Stadtmauerstr. 30, 91054 Erlangen, Germany
13
Wen Z, Wang S, Yang DM, Xie Y, Chen M, Bishop J, Xiao G. Deep learning in digital pathology for personalized treatment plans of cancer patients. Semin Diagn Pathol 2023; 40:109-119. [PMID: 36890029 DOI: 10.1053/j.semdp.2023.02.003] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Accepted: 02/22/2023] [Indexed: 02/27/2023]
Abstract
Over the past decade, many new cancer treatments have been developed and made available to patients. However, in most cases, these treatments only benefit a specific subgroup of patients, making the selection of treatment for a specific patient an essential but challenging task for oncologists. Although some biomarkers have been found to be associated with treatment response, manual assessment is time-consuming and subjective. With the rapid development and expanded implementation of artificial intelligence (AI) in digital pathology, many biomarkers can be quantified automatically from histopathology images. This approach allows for a more efficient and objective assessment of biomarkers, aiding oncologists in formulating personalized treatment plans for cancer patients. This review presents an overview and summary of recent studies on biomarker quantification and treatment response prediction using hematoxylin-eosin (H&E) stained pathology images. These studies show that an AI-based digital pathology approach can be practical and will become increasingly important in improving the selection of cancer treatments for patients.
Affiliation(s)
- Zhuoyu Wen
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Shidan Wang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Donghan M Yang
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Yang Xie
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA; Simmons Comprehensive Cancer Center, UT Southwestern Medical Center, Dallas, TX, USA; Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Mingyi Chen
- Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
- Justin Bishop
- Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
- Guanghua Xiao
- Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA; Simmons Comprehensive Cancer Center, UT Southwestern Medical Center, Dallas, TX, USA; Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA
14
Mou T, Liang J, Vu TN, Tian M, Gao Y. A Comprehensive Landscape of Imaging Feature-Associated RNA Expression Profiles in Human Breast Tissue. SENSORS (BASEL, SWITZERLAND) 2023; 23:1432. [PMID: 36772473 PMCID: PMC9921444 DOI: 10.3390/s23031432] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Revised: 01/15/2023] [Accepted: 01/20/2023] [Indexed: 06/18/2023]
Abstract
The expression abundance of transcripts in nondiseased breast tissue varies among individuals. Association studies of genotypes and imaging phenotypes may help us to understand this individual variation. Since existing reports mainly focus on tumors or lesion areas, the heterogeneity of pathological image features and their correlations with RNA expression profiles in nondiseased tissue are not clear. The aim of this study is to discover associations between nucleus features and transcriptome-wide RNA expression. We analyzed both microscopic histology images and RNA-sequencing data of 456 breast tissues from the Genotype-Tissue Expression (GTEx) project and constructed an automatic computational framework. We classified all samples into four clusters based on their nucleus morphological features and discovered feature-specific gene sets. Biological pathway analysis was performed on each gene set. The proposed framework evaluates the morphological characteristics of the cell nucleus quantitatively and identifies the associated genes. We found image features that capture population variation in breast tissue to be associated with RNA expression, suggesting that variation in expression patterns affects population variation in the morphological traits of breast tissue. This study provides a comprehensive transcriptome-wide view of imaging-feature-specific RNA expression in healthy breast tissue. Such a framework could also be used to understand the connection between RNA expression and morphology in other tissues and organs. Pathway analysis indicated that the gene sets we identified were involved in specific biological processes, such as immune processes.
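The four-cluster grouping of nucleus morphology features can be illustrated with a plain k-means sketch on synthetic feature vectors. The clustering algorithm, the two-dimensional features, and the data below are all assumptions for illustration, since the abstract does not specify the method used:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, init, iters=50):
    """Plain k-means with explicit initial centers, standing in for
    the (unspecified) clustering of nucleus morphology features."""
    cents = init.copy()
    for _ in range(iters):
        # assign each sample to its nearest center
        lab = np.linalg.norm(X[:, None] - cents[None], axis=2).argmin(1)
        # recompute centers; keep old center if a cluster empties
        cents = np.stack([X[lab == j].mean(0) if (lab == j).any() else cents[j]
                          for j in range(k)])
    return lab, cents

# Toy "nucleus morphology" vectors (e.g. area, eccentricity) drawn
# around four synthetic modes, mirroring the four clusters above.
modes = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
X = np.concatenate([m + rng.normal(0, 0.3, (50, 2)) for m in modes])
labels, centers = kmeans(X, 4, init=X[[0, 50, 100, 150]])
```

In the actual framework, each cluster's samples would then be contrasted against the others to derive cluster-specific gene sets.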
Affiliation(s)
- Tian Mou
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518000, China
- Jianwen Liang
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518000, China
- Trung Nghia Vu
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, SE 17177 Stockholm, Sweden
- Mu Tian
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518000, China
- Yi Gao
- School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518000, China
15
Lhermitte E, Hilal M, Furlong R, O’Brien V, Humeau-Heurtier A. Deep Learning and Entropy-Based Texture Features for Color Image Classification. ENTROPY (BASEL, SWITZERLAND) 2022; 24:1577. [PMID: 36359667 PMCID: PMC9688970 DOI: 10.3390/e24111577] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/20/2022] [Revised: 10/21/2022] [Accepted: 10/26/2022] [Indexed: 06/16/2023]
Abstract
In the domain of computer vision, entropy, defined as a measure of irregularity, has been proposed as an effective method for analyzing the texture of images. Several studies have shown that, with specific parameter tuning, entropy-based approaches achieve high classification accuracy on texture images when associated with machine learning classifiers. However, few entropy measures have been extended to the study of color images. Moreover, the literature lacks comparative analyses of entropy-based and modern deep learning-based classification methods for RGB color images. To address this, we first propose a new entropy-based measure for RGB images based on a multivariate approach. This multivariate approach is a bi-dimensional extension of methods that have been successfully applied to multivariate signals (unidimensional data). Then, we compare the classification results of this new approach with those obtained from several deep learning methods. The entropy-based method for RGB image classification that we propose leads to promising results. In future studies, the measure could be extended to other color spaces as well.
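A much simpler baseline conveys the idea of entropy as a texture feature: the Shannon entropy of each channel's intensity histogram. This per-channel sketch does not reproduce the paper's measure, which is a multivariate, bi-dimensional extension that captures cross-channel and spatial structure:

```python
import numpy as np

def channel_entropy(img, bins=32):
    """Shannon entropy (bits) of each RGB channel's intensity
    histogram: a crude texture irregularity feature. A constant
    channel yields 0; a uniformly random one approaches log2(bins)."""
    ents = []
    for c in range(img.shape[-1]):
        hist, _ = np.histogram(img[..., c], bins=bins, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]                      # 0*log(0) is taken as 0
        ents.append(float(-(p * np.log2(p)).sum()))
    return ents
```

Feeding such entropy features into a standard classifier is the kind of pipeline the entropy-based approaches above build on, albeit with far richer measures.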
Affiliation(s)
- Emma Lhermitte
- Univ Angers, LARIS, SFR MATHSTIC, F-49000 Angers, France
- Mirvana Hilal
- Univ Angers, LARIS, SFR MATHSTIC, F-49000 Angers, France
- Ryan Furlong
- Institute of Technology Carlow, R93 V960 Carlow, Ireland
16
Thorsted B, Bjerregaard L, Jensen PS, Rasmussen LM, Lindholt JS, Bloksgaard M. Artificial intelligence assisted compositional analyses of human abdominal aortic aneurysms ex vivo. Front Physiol 2022; 13:840965. [PMID: 36072852 PMCID: PMC9441486 DOI: 10.3389/fphys.2022.840965] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Accepted: 06/28/2022] [Indexed: 11/13/2022] Open
Abstract
Quantification of histological information from excised human abdominal aortic aneurysm (AAA) specimens may provide essential information on the degree of infiltration of inflammatory cells in different regions of the AAA. Such information will support mechanistic insight into AAA pathology and can be linked to clinical measures for further development of AAA treatment regimens. We hypothesize that artificial intelligence can support high-throughput analyses of histological sections of excised human AAA. We present an analysis framework based on supervised machine learning. We used TensorFlow and QuPath to determine the overall architecture of the AAA: thrombus, arterial wall, and adventitial loose connective tissue. Within the wall and adventitial zones, the content of collagen, elastin, and specific inflammatory cells was quantified. A deep neural network (DNN) was trained on manually annotated, Weigert-stained tissue sections (14 patients) and validated on images from two other patients. Finally, we applied the method to 95 new patient samples. The DNN was able to segment the sections according to the overall wall architecture, with Jaccard coefficients after 65 epochs of 92% for the training and 88% for the validation data set, respectively. Precision and recall both reached 92%. The zone areas were highly variable between patients, as were the outputs on total cell count and elastin/collagen fiber content. The number of specific cells and the stained area per zone were determined deterministically. However, combining the masks based on the Weigert stainings with images of immunostained serial sections requires the addition of landmark recognition to the analysis path. The combination of digital pathology, the DNN we developed, and landmark registration will provide a strong tool for future analyses of the histology of excised human AAA. In combination with biomechanical testing and microstructurally motivated mathematical models of AAA remodeling, the method has the potential to provide mechanistic insight into the disease. In combination with each patient's demographic and clinical profile, the method can be an interesting tool in support of a better treatment regimen for the patients.
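The Jaccard, precision, and recall figures reported above can all be computed from a pair of label masks. This is a generic per-class sketch of those metrics, not the authors' evaluation code:

```python
import numpy as np

def seg_scores(pred, true, cls):
    """Per-class Jaccard index, precision and recall from two integer
    label masks of identical shape."""
    p, t = (pred == cls), (true == cls)
    inter = np.logical_and(p, t).sum()   # true positives for this class
    union = np.logical_or(p, t).sum()
    jacc = inter / union if union else float("nan")
    prec = inter / p.sum() if p.sum() else float("nan")
    rec = inter / t.sum() if t.sum() else float("nan")
    return float(jacc), float(prec), float(rec)
```

For a multi-zone segmentation like the one above, these would be computed per zone and then averaged or reported per class.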
Affiliation(s)
- Bjarne Thorsted
- Department of Cardiothoracic and Vascular Surgery, Odense University Hospital, Odense, Denmark
- Lisette Bjerregaard
- Department of Cardiothoracic and Vascular Surgery, Odense University Hospital, Odense, Denmark
- Pia S. Jensen
- Department of Clinical Biochemistry and Pharmacology, Odense University Hospital, Odense, Denmark
- Odense Artery Biobank, Odense University Hospital, Odense, Denmark
- Center for Individualized Medicine in Arterial Diseases, Odense University Hospital, Odense, Denmark
- Lars M. Rasmussen
- Department of Clinical Biochemistry and Pharmacology, Odense University Hospital, Odense, Denmark
- Odense Artery Biobank, Odense University Hospital, Odense, Denmark
- Center for Individualized Medicine in Arterial Diseases, Odense University Hospital, Odense, Denmark
- Jes S. Lindholt
- Department of Cardiothoracic and Vascular Surgery, Odense University Hospital, Odense, Denmark
- Center for Individualized Medicine in Arterial Diseases, Odense University Hospital, Odense, Denmark
- Maria Bloksgaard
- Medical Molecular Pharmacology Laboratory, Cardiovascular and Renal Research Unit, Department of Molecular Medicine, University of Southern Denmark, Odense, Denmark
- Correspondence: Maria Bloksgaard
17
Artificial Intelligence-Based Tissue Phenotyping in Colorectal Cancer Histopathology Using Visual and Semantic Features Aggregation. MATHEMATICS 2022. [DOI: 10.3390/math10111909] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Tissue phenotyping of the tumor microenvironment has a decisive role in digital profiling of intra-tumor heterogeneity, epigenetics, and progression of cancer. Most of the existing methods for tissue phenotyping often rely on time-consuming and error-prone manual procedures. Recently, with the advent of advanced technologies, these procedures have been automated using artificial intelligence techniques. In this paper, a novel deep histology heterogeneous feature aggregation network (HHFA-Net) is proposed based on visual and semantic information fusion for the detection of tissue phenotypes in colorectal cancer (CRC). We adopted and tested various data augmentation techniques to avoid computationally expensive stain normalization procedures and handle limited and imbalanced data problems. Three publicly available datasets are used in the experiments: CRC tissue phenotyping (CRC-TP), CRC histology (CRCH), and colon cancer histology (CCH). The proposed HHFA-Net achieves higher accuracies than the state-of-the-art methods for tissue phenotyping in CRC histopathology images.
18
Kiziloluk S, Sert E. COVID-CCD-Net: COVID-19 and colon cancer diagnosis system with optimized CNN hyperparameters using gradient-based optimizer. Med Biol Eng Comput 2022; 60:1595-1612. [PMID: 35396625 PMCID: PMC8993211 DOI: 10.1007/s11517-022-02553-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2021] [Accepted: 03/12/2022] [Indexed: 02/01/2023]
Abstract
Coronavirus disease 2019 (COVID-19) is a disease caused by a new type of coronavirus which turned into a pandemic within a short time. The reverse transcription polymerase chain reaction (RT-PCR) test is used for the diagnosis of COVID-19 in national healthcare centers. Because the number of PCR test kits is often limited, it is sometimes difficult to diagnose the disease at an early stage. However, X-ray technology is accessible nearly all over the world and succeeds in detecting symptoms of COVID-19. Another disease which affects people's lives to a great extent is colorectal cancer. Tissue microarray (TMA) is a technological method which is widely used for its high performance in the analysis of colorectal cancer. Computer-assisted approaches which can classify colorectal cancer in TMA images are also needed. In this respect, the present study proposes a convolutional neural network (CNN) classification approach with parameters optimized using the gradient-based optimizer (GBO) algorithm. Thanks to the proposed approach, COVID-19, normal, and viral pneumonia cases in various chest X-ray images can be classified accurately. Additionally, other types, such as epithelial and stromal regions in epidermal growth factor receptor (EGFR) colon TMAs, can also be classified. The proposed approach is called COVID-CCD-Net. AlexNet, DarkNet-19, Inception-v3, MobileNet, ResNet-18, and ShuffleNet architectures were used in COVID-CCD-Net, and the hyperparameters of these architectures were optimized for the proposed approach. Two different medical image classification datasets, namely COVID-19 and Epistroma, were used in the present study. The experimental findings demonstrated that the proposed approach significantly increased the classification performance of the non-optimized CNN architectures and displayed very high classification performance even with a very low number of epochs.
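As a schematic of what CNN hyperparameter optimization means in this setting, here is plain random search over a toy surrogate loss. The actual study trains real CNN architectures and optimizes them with the gradient-based optimizer (GBO) metaheuristic, which is not reproduced here; the surrogate function and search ranges below are entirely illustrative:

```python
import random

random.seed(0)

def surrogate_loss(lr, momentum):
    """Toy stand-in for the validation loss of a trained CNN as a
    function of two hyperparameters (minimum near lr=0.01, m=0.9).
    The paper instead evaluates real architectures per candidate."""
    return (lr - 0.01) ** 2 * 1e4 + (momentum - 0.9) ** 2 * 10

best = None
for _ in range(200):  # plain random search as a baseline sketch
    cand = (10 ** random.uniform(-4, -1),      # log-uniform learning rate
            random.uniform(0.5, 0.99))         # momentum
    loss = surrogate_loss(*cand)
    if best is None or loss < best[0]:
        best = (loss, cand)
```

A metaheuristic such as GBO replaces the blind sampling with guided updates of a candidate population, but the evaluate-and-keep-the-best skeleton is the same.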
Affiliation(s)
- Soner Kiziloluk
- Department of Computer Engineering, Malatya Turgut Özal University, Malatya, Turkey
- Eser Sert
- Department of Computer Engineering, Malatya Turgut Özal University, Malatya, Turkey
19
Chen H, Li C, Li X, Rahaman MM, Hu W, Li Y, Liu W, Sun C, Sun H, Huang X, Grzegorzek M. IL-MCAM: An interactive learning and multi-channel attention mechanism-based weakly supervised colorectal histopathology image classification approach. Comput Biol Med 2022; 143:105265. [PMID: 35123138 DOI: 10.1016/j.compbiomed.2022.105265] [Citation(s) in RCA: 38] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2021] [Revised: 01/21/2022] [Accepted: 01/22/2022] [Indexed: 12/24/2022]
Abstract
In recent years, colorectal cancer has become one of the most significant diseases endangering human health. Deep learning methods are increasingly important for the classification of colorectal histopathology images. However, existing approaches focus more on end-to-end automatic classification by computers than on human-computer interaction. In this paper, we propose the IL-MCAM framework, based on attention mechanisms and interactive learning. The proposed IL-MCAM framework includes two stages: automatic learning (AL) and interactivity learning (IL). In the AL stage, a multi-channel attention mechanism model containing three different attention mechanism channels and convolutional neural networks is used to extract multi-channel features for classification. In the IL stage, the proposed IL-MCAM framework continuously adds misclassified images to the training set in an interactive approach, which improves the classification ability of the MCAM model. We carried out a comparison experiment on our dataset and an extended experiment on the HE-NCT-CRC-100K dataset to verify the performance of the proposed IL-MCAM framework, achieving classification accuracies of 98.98% and 99.77%, respectively. In addition, we conducted an ablation experiment and an interchangeability experiment to verify the ability and interchangeability of the three channels. The experimental results show that the proposed IL-MCAM framework performs excellently in colorectal histopathological image classification tasks.
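The IL stage's loop of folding misclassified images back into the training set can be sketched with synthetic features and a nearest-centroid classifier standing in for the MCAM model. All data, functions, and batch sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_centroids(X, y):
    """Fit a two-class nearest-centroid 'model' (toy MCAM stand-in)."""
    return np.stack([X[y == k].mean(axis=0) for k in (0, 1)])

def predict(X, cents):
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    return d.argmin(axis=1)

# Synthetic features standing in for histopathology image descriptors.
X = rng.normal(0, 1, (300, 8))
y = (X[:, :3].sum(axis=1) > 0).astype(int)
train = np.arange(40)          # initial labelled training set
pool = np.arange(40, 300)      # images still outside the training set

for _ in range(5):             # interactive rounds
    cents = fit_centroids(X[train], y[train])
    wrong = pool[predict(X[pool], cents) != y[pool]]
    if wrong.size == 0:
        break
    # a reviewer flags misclassified images; fold up to 20 into training
    train = np.concatenate([train, wrong[:20]])
    pool = np.setdiff1d(pool, wrong[:20])

final_acc = (predict(X, fit_centroids(X[train], y[train])) == y).mean()
```

In the real framework the "reviewer" is a human in the loop and retraining involves the full attention-based CNN rather than centroid refitting.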
Affiliation(s)
- Haoyuan Chen
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Xiaoyan Li
- Department of Pathology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital and Institute, China
- Md Mamunur Rahaman
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Weiming Hu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Yixin Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Wanli Liu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China
- Changhao Sun
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Shenyang Institute of Automation, Chinese Academy of Sciences, China
- Hongzan Sun
- Department of Radiology, Shengjing Hospital of China Medical University, China
- Xinyu Huang
- Institute of Medical Informatics, University of Luebeck, Germany
20
Abstract
Machine learning techniques used in computer-aided medical image analysis usually suffer from the domain shift problem caused by different distributions between source/reference data and target data. As a promising solution, domain adaptation has attracted considerable attention in recent years. The aim of this paper is to survey recent advances in domain adaptation methods for medical image analysis. We first present the motivation for introducing domain adaptation techniques to tackle domain heterogeneity issues in medical image analysis. Then we provide a review of recent domain adaptation models in various medical image analysis tasks. We categorize the existing methods into shallow and deep models, and each of these is further divided into supervised, semi-supervised, and unsupervised methods. We also provide a brief summary of the benchmark medical image datasets that support current domain adaptation research. This survey will enable researchers to gain a better understanding of the current status, challenges, and future directions of this active research field.
21
Domain generalization on medical imaging classification using episodic training with task augmentation. Comput Biol Med 2021; 141:105144. [PMID: 34971982 DOI: 10.1016/j.compbiomed.2021.105144] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2021] [Revised: 12/12/2021] [Accepted: 12/13/2021] [Indexed: 12/22/2022]
Abstract
Medical imaging datasets usually exhibit domain shift due to variations in scanner vendors, imaging protocols, etc. This raises concerns about the generalization capacity of machine learning models. Domain generalization (DG), which aims to learn a model from multiple source domains such that it can be directly generalized to unseen test domains, is particularly promising for the medical imaging community. To address DG, model-agnostic meta-learning (MAML) has recently been introduced, which transfers knowledge from previous training tasks to facilitate the learning of novel testing tasks. However, in clinical practice there are usually only a few annotated source domains available, which limits training task generation and thus increases the risk of overfitting to the training tasks in this paradigm. In this paper, we propose a novel DG scheme of episodic training with task augmentation for medical imaging classification. Based on meta-learning, we develop an episodic training paradigm to construct knowledge transfer from simulated training tasks to the real testing task of DG. Motivated by the limited number of source domains in real-world medical deployment, we address this task-level overfitting by proposing task augmentation, which enhances the variety of generated training tasks. With the established learning framework, we further exploit a novel meta-objective to regularize the deep embedding of the training domains. To validate the effectiveness of the proposed method, we perform experiments on histopathological images and abdominal CT images.
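The episode construction can be sketched as below. This is a schematic reading of the abstract: held-out source domains simulate the unseen test domain, and mixup across two domains stands in for the paper's task augmentation, whose exact form the paper defines.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_episode(domains, n_meta_test=1):
    """Hold out some source domains as the simulated unseen meta-test set;
    the rest form the meta-train set for one episode."""
    order = rng.permutation(len(domains))
    return order[n_meta_test:].tolist(), order[:n_meta_test].tolist()

def mixup_task(x_a, y_a, x_b, y_b, alpha=0.3):
    """Task augmentation (sketched as mixup): synthesise a new training
    domain by convexly mixing inputs and one-hot labels of two domains."""
    lam = rng.beta(alpha, alpha, size=(len(x_a), 1))
    return lam * x_a + (1 - lam) * x_b, lam * y_a + (1 - lam) * y_b
```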
22
Zeid MAE, El-Bahnasy K, Abo-Youssef SE. Multiclass Colorectal Cancer Histology Images Classification Using Vision Transformers. 2021 TENTH INTERNATIONAL CONFERENCE ON INTELLIGENT COMPUTING AND INFORMATION SYSTEMS (ICICIS) 2021. [DOI: 10.1109/icicis52592.2021.9694125] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
Affiliation(s)
- Magdy Abd-Elghany Zeid
- Obour High Institute for Management and Informatics, Computer Science Department, Cairo, Egypt
- Khaled El-Bahnasy
- Obour High Institute for Management and Informatics, Computer Science Department, Cairo, Egypt
- S. E. Abo-Youssef
- Al-Azhar University, Faculty of Science, Mathematics and Computer Science Department, Cairo, Egypt
23
Deep Learning Approaches to Colorectal Cancer Diagnosis: A Review. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app112210982] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Unprecedented breakthroughs in the development of graphical processing systems have led to great potential for deep learning (DL) algorithms in analyzing visual anatomy from high-resolution medical images. Recently, in digital pathology, the use of DL technologies has drawn a substantial amount of attention for use in the effective diagnosis of various cancer types, especially colorectal cancer (CRC), which is regarded as one of the dominant causes of cancer-related deaths worldwide. This review provides an in-depth perspective on recently published research articles on DL-based CRC diagnosis and prognosis. Overall, we provide a retrospective synopsis of simple image-processing-based and machine learning (ML)-based computer-aided diagnosis (CAD) systems, followed by a comprehensive appraisal of use cases with different types of state-of-the-art DL algorithms for detecting malignancies. We first list multiple standardized and publicly available CRC datasets from two imaging types: colonoscopy and histopathology. Secondly, we categorize the studies based on the different types of CRC detected (tumor tissue, microsatellite instability, and polyps), and we assess the data preprocessing steps and the adopted DL architectures before presenting the optimum diagnostic results. CRC diagnosis with DL algorithms is still in the preclinical phase, and therefore, we point out some open issues and provide some insights into the practicability and development of robust diagnostic systems in future health care and oncology.
24
Parameter Analysis of Multiscale Two-Dimensional Fuzzy and Dispersion Entropy Measures Using Machine Learning Classification. ENTROPY 2021; 23:e23101303. [PMID: 34682027 PMCID: PMC8535127 DOI: 10.3390/e23101303] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Revised: 09/28/2021] [Accepted: 09/29/2021] [Indexed: 11/29/2022]
Abstract
Two-dimensional fuzzy entropy, dispersion entropy, and their multiscale extensions (MFuzzyEn2D and MDispEn2D, respectively) have shown promising results for image classification. However, these results rely on the selection of key parameters that may largely influence the entropy values obtained. Yet, the optimal choice of these parameters has not been studied thoroughly. We propose a study of the impact of these parameters on image classification. For this purpose, the entropy-based algorithms are applied to a variety of images from different datasets, each containing multiple image classes. Several parameter combinations are used to obtain the entropy values. These entropy values are then fed to a range of machine learning classifiers, and the algorithm parameters are analyzed based on the classification results. By using specific parameters, we show that both MFuzzyEn2D and MDispEn2D approach state-of-the-art performance in image classification for multiple image types. They lead to an average maximum accuracy of more than 95% for all the datasets tested. Moreover, MFuzzyEn2D yields better classification performance than MDispEn2D in the majority of cases. Furthermore, the choice of classifier does not have a significant impact on the classification of the features extracted by either entropy algorithm. The results open new perspectives for these entropy-based measures in textural analysis.
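The multiscale extension common to MFuzzyEn2D and MDispEn2D coarse-grains the image before computing an entropy at each scale. The sketch below shows only that generic scheme, with a plain Shannon entropy standing in for the fuzzy/dispersion measures and their parameters (m, r, etc.), which the paper defines.

```python
import numpy as np

def coarse_grain(img, scale):
    """Multiscale step: average non-overlapping scale x scale blocks."""
    h = (img.shape[0] // scale) * scale
    w = (img.shape[1] // scale) * scale
    blocks = img[:h, :w].reshape(h // scale, scale, w // scale, scale)
    return blocks.mean(axis=(1, 3))

def shannon_entropy(img, bins=16):
    """Stand-in entropy of the grey-level distribution, in bits."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def multiscale_entropy(img, scales=(1, 2, 4)):
    """One entropy value per coarse-graining scale."""
    return [shannon_entropy(coarse_grain(img, s)) for s in scales]
```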
25
Olveres J, González G, Torres F, Moreno-Tagle JC, Carbajal-Degante E, Valencia-Rodríguez A, Méndez-Sánchez N, Escalante-Ramírez B. What is new in computer vision and artificial intelligence in medical image analysis applications. Quant Imaging Med Surg 2021; 11:3830-3853. [PMID: 34341753 DOI: 10.21037/qims-20-1151] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2020] [Accepted: 04/20/2021] [Indexed: 12/15/2022]
Abstract
Computer vision and artificial intelligence applications in medicine are becoming increasingly important day by day, especially in the field of image technology. In this paper we cover different artificial intelligence advances that tackle some of the most important worldwide medical problems such as cardiology, cancer, dermatology, neurodegenerative disorders, respiratory problems, and gastroenterology. We show how both areas have resulted in a large variety of methods that range from enhancement, detection, segmentation and characterizations of anatomical structures and lesions to complete systems that automatically identify and classify several diseases in order to aid clinical diagnosis and treatment. Different imaging modalities such as computer tomography, magnetic resonance, radiography, ultrasound, dermoscopy and microscopy offer multiple opportunities to build automatic systems that help medical diagnosis, taking advantage of their own physical nature. However, these imaging modalities also impose important limitations to the design of automatic image analysis systems for diagnosis aid due to their inherent characteristics such as signal to noise ratio, contrast and resolutions in time, space and wavelength. Finally, we discuss future trends and challenges that computer vision and artificial intelligence must face in the coming years in order to build systems that are able to solve more complex problems that assist medical diagnosis.
Affiliation(s)
- Jimena Olveres
- Centro de Estudios en Computación Avanzada, Universidad Nacional Autónoma de México (UNAM), Mexico City, Mexico; Departamento de Procesamiento de Señales, Facultad de Ingeniería, UNAM, Mexico City, Mexico
- Germán González
- Departamento de Procesamiento de Señales, Facultad de Ingeniería, UNAM, Mexico City, Mexico
- Fabian Torres
- Centro de Estudios en Computación Avanzada, Universidad Nacional Autónoma de México (UNAM), Mexico City, Mexico; Departamento de Procesamiento de Señales, Facultad de Ingeniería, UNAM, Mexico City, Mexico
- Nahum Méndez-Sánchez
- Unidad de Investigación en Hígado, Fundación Clínica Médica Sur, Mexico City, Mexico; Facultad de Medicina, UNAM, Mexico City, Mexico
- Boris Escalante-Ramírez
- Centro de Estudios en Computación Avanzada, Universidad Nacional Autónoma de México (UNAM), Mexico City, Mexico; Departamento de Procesamiento de Señales, Facultad de Ingeniería, UNAM, Mexico City, Mexico
26
Doherty T, McKeever S, Al-Attar N, Murphy T, Aura C, Rahman A, O'Neill A, Finn SP, Kay E, Gallagher WM, Watson RWG, Gowen A, Jackman P. Feature fusion of Raman chemical imaging and digital histopathology using machine learning for prostate cancer detection. Analyst 2021; 146:4195-4211. [PMID: 34060548 DOI: 10.1039/d1an00075f] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
The diagnosis of prostate cancer is challenging due to the heterogeneity of its presentations, leading to the overdiagnosis and overtreatment of non-clinically important disease. Accurate diagnosis can directly benefit a patient's quality of life and prognosis. Towards addressing this issue, we present a learning model for the automatic identification of prostate cancer. While many prostate cancer studies have adopted Raman spectroscopy approaches, none have utilised the combination of Raman Chemical Imaging (RCI) and other imaging modalities. This study uses multimodal images formed from stained Digital Histopathology (DP) and unstained RCI. The approach was developed and tested on a set of 178 clinical samples from 32 patients, containing a range of non-cancerous, Gleason grade 3 (G3) and grade 4 (G4) tissue microarray samples. For each histological sample, there is a pathologist-labelled DP-RCI image pair. The hypothesis tested was whether multimodal image models can outperform single modality baseline models in terms of diagnostic accuracy. Binary non-cancer/cancer models and the more challenging G3/G4 differentiation were investigated. Regarding G3/G4 classification, the multimodal approach achieved a sensitivity of 73.8% and specificity of 88.1% while the baseline DP model showed a sensitivity and specificity of 54.1% and 84.7% respectively. The multimodal approach demonstrated a statistically significant 12.7% AUC advantage over the baseline with a value of 85.8% compared to 73.1%, also outperforming models based solely on RCI and mean and median Raman spectra. Feature fusion of DP and RCI does not improve the more trivial task of tumour identification but does deliver an observed advantage in G3/G4 discrimination. Building on these promising findings, future work could include the acquisition of larger datasets for enhanced model generalization.
Affiliation(s)
- Trevor Doherty
- Technological University Dublin, School of Computer Science, City Campus, Grangegorman Lower, Dublin 7, Ireland.
27
Damkliang K, Wongsirichot T, Thongsuksai P. Tissue Classification for Colorectal Cancer Utilizing Techniques of Deep Learning and Machine Learning. BIOMEDICAL ENGINEERING: APPLICATIONS, BASIS AND COMMUNICATIONS 2021. [DOI: 10.4015/s1016237221500228] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Since the introduction of image pattern recognition and computer vision processing, the classification of cancer tissues has been a challenge at pixel level, slide level, and patient level. Conventional machine learning techniques have given way to Deep Learning (DL), a contemporary, state-of-the-art approach to texture classification and localization of cancer tissues. Colorectal Cancer (CRC) is the third-ranked cause of death from cancer worldwide. This paper proposes image-level texture classification of a CRC dataset by deep convolutional neural networks (CNN). Simple DL techniques consisting of transfer learning and fine-tuning were exploited. VGG-16, a Keras pre-trained model with initial weights from ImageNet, was applied. The transfer learning architecture and methods built on VGG-16 are proposed. The training, validation, and testing sets included 5000 images of 150 × 150 pixels. The application set for detection and localization contained 10 large original images of 5000 × 5000 pixels. The model achieved an F1-score and accuracy of 0.96 and 0.99, respectively, and produced a false positive rate of 0.01. AUC-based evaluation was also performed. The model classified ten large previously unseen images from the application set, represented as false color maps. The reported results show the satisfactory performance of the model. The simplicity of the architecture, configuration, and implementation also contributes to the outcome of this work.
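The freeze-the-base, train-the-head pattern described above can be sketched without Keras. Here a fixed random ReLU projection stands in for the frozen VGG-16 convolutional base (an assumption made so the example is self-contained); only the softmax head is trained, mirroring the transfer learning step that precedes fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained convolutional base (e.g. VGG-16 with
# ImageNet weights): a fixed random ReLU projection that is never updated.
W_base = 0.1 * rng.normal(size=(2, 32))

def features(x):
    return np.maximum(x @ W_base, 0.0)   # frozen base, ReLU activation

def train_head(x, y, n_classes=2, lr=0.5, epochs=500):
    """Transfer learning step: train only the softmax head, keep the base frozen."""
    f = features(x)
    W = np.zeros((f.shape[1], n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        z = f @ W
        p = np.exp(z - z.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        W -= lr * f.T @ (p - onehot) / len(x)   # gradient of cross-entropy
    return W

def predict(x, W, _=None):
    return (features(x) @ W).argmax(axis=1)
```

Fine-tuning would additionally unfreeze (part of) the base and update it with a small learning rate.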
Affiliation(s)
- Kasikrit Damkliang
- Division of Computational Science, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90110, Thailand
- Thakerng Wongsirichot
- Division of Computational Science, Faculty of Science, Prince of Songkla University, Hat Yai, Songkhla 90110, Thailand
- Paramee Thongsuksai
- Department of Pathology, Faculty of Medicine, Prince of Songkla University, Hat Yai, Songkhla 90110, Thailand
28
Qi Q, Lin X, Chen C, Xie W, Huang Y, Ding X, Liu X, Yu Y. Curriculum Feature Alignment Domain Adaptation for Epithelium-Stroma Classification in Histopathological Images. IEEE J Biomed Health Inform 2021; 25:1163-1172. [PMID: 32881698 DOI: 10.1109/jbhi.2020.3021558] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In recent years, deep learning methods have received more attention in epithelial-stroma (ES) classification tasks. Traditional deep learning methods assume that the training and test data have the same distribution, an assumption that is seldom satisfied in complex imaging procedures. Unsupervised domain adaptation (UDA) transfers knowledge from a labelled source domain to a completely unlabeled target domain, and is more suitable for ES classification tasks to avoid tedious annotation. However, existing UDA methods for this task ignore the semantic alignment across domains. In this paper, we propose a Curriculum Feature Alignment Network (CFAN) to gradually align discriminative features across domains through selecting effective samples from the target domain and minimizing intra-class differences. Specifically, we developed the Curriculum Transfer Strategy (CTS) and Adaptive Centroid Alignment (ACA) steps to train our model iteratively. We validated the method using three independent public ES datasets, and experimental results demonstrate that our method achieves better performance in ES classification compared with commonly used deep learning methods and existing deep domain adaptation methods.
29
Two Ensemble-CNN Approaches for Colorectal Cancer Tissue Type Classification. J Imaging 2021; 7:jimaging7030051. [PMID: 34460707 PMCID: PMC8321410 DOI: 10.3390/jimaging7030051] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2021] [Revised: 02/16/2021] [Accepted: 02/26/2021] [Indexed: 02/06/2023] Open
Abstract
In recent years, automatic tissue phenotyping has attracted increasing interest in the Digital Pathology (DP) field. For Colorectal Cancer (CRC), tissue phenotyping can diagnose the cancer and differentiate between different cancer grades. The development of Whole Slide Images (WSIs) has provided the required data for creating automatic tissue phenotyping systems. In this paper, we study different hand-crafted feature-based and deep learning methods using two popular multi-class CRC-tissue-type databases: Kather-CRC-2016 and CRC-TP. For the hand-crafted features, we use two texture descriptors (LPQ and BSIF) and their combination. In addition, two classifiers are used (SVM and NN) to classify the texture features into distinct CRC tissue types. For the deep learning methods, we evaluate four Convolutional Neural Network (CNN) architectures (ResNet-101, ResNeXt-50, Inception-v3, and DenseNet-161). Moreover, we propose two Ensemble CNN approaches: Mean-Ensemble-CNN and NN-Ensemble-CNN. The experimental results show that the proposed approaches outperformed the hand-crafted feature-based methods, the CNN architectures and the state-of-the-art methods on both databases.
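A minimal sketch of the Mean-Ensemble-CNN idea, assuming (as the name suggests, though the paper defines the exact scheme) that each member CNN's softmax probabilities are averaged before taking the argmax:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_ensemble_predict(logits_per_model):
    """Average each model's class probabilities, then take the argmax."""
    probs = np.mean([softmax(l) for l in logits_per_model], axis=0)
    return probs.argmax(axis=1)
```

The NN-Ensemble variant would instead feed the concatenated member outputs to a small trained network rather than averaging them.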
30
Mormont R, Geurts P, Maree R. Multi-Task Pre-Training of Deep Neural Networks for Digital Pathology. IEEE J Biomed Health Inform 2021; 25:412-421. [PMID: 32386169 DOI: 10.1109/jbhi.2020.2992878] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In this work, we investigate multi-task learning as a way of pre-training models for classification tasks in digital pathology. It is motivated by the fact that many small and medium-size datasets have been released by the community over the years whereas there is no large scale dataset similar to ImageNet in the domain. We first assemble and transform many digital pathology datasets into a pool of 22 classification tasks and almost 900k images. Then, we propose a simple architecture and training scheme for creating a transferable model and a robust evaluation and selection protocol in order to evaluate our method. Depending on the target task, we show that our models used as feature extractors either improve significantly over ImageNet pre-trained models or provide comparable performance. Fine-tuning improves performance over feature extraction and is able to recover the lack of specificity of ImageNet features, as both pre-training sources yield comparable performance.
31
Koteluk O, Wartecki A, Mazurek S, Kołodziejczak I, Mackiewicz A. How Do Machines Learn? Artificial Intelligence as a New Era in Medicine. J Pers Med 2021; 11:jpm11010032. [PMID: 33430240 PMCID: PMC7825660 DOI: 10.3390/jpm11010032] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Revised: 12/31/2020] [Accepted: 01/05/2021] [Indexed: 02/06/2023] Open
Abstract
With the increasing amount of medical data generated every day, there is a strong need for reliable, automated evaluation tools. With high hopes and expectations, machine learning has the potential to revolutionize many fields of medicine, helping to make faster and more correct decisions and improving current standards of treatment. Today, machines can analyze, learn, communicate, and understand processed data and are increasingly used in health care. This review explains different models and the general process of machine learning and training the algorithms. Furthermore, it summarizes the most useful machine learning applications and tools in different branches of medicine and health care (radiology, pathology, pharmacology, infectious diseases, personalized decision making, and many others). The review also addresses the futuristic prospects and threats of applying artificial intelligence as an advanced, automated medicine tool.
Affiliation(s)
- Oliwia Koteluk
- Faculty of Medical Sciences, Chair of Medical Biotechnology, Poznan University of Medical Sciences, 61-701 Poznan, Poland; (O.K.); (A.W.)
- Adrian Wartecki
- Faculty of Medical Sciences, Chair of Medical Biotechnology, Poznan University of Medical Sciences, 61-701 Poznan, Poland; (O.K.); (A.W.)
- Sylwia Mazurek
- Department of Cancer Immunology, Chair of Medical Biotechnology, Poznan University of Medical Sciences, 61-701 Poznan, Poland;
- Department of Cancer Diagnostics and Immunology, Greater Poland Cancer Centre, 61-866 Poznan, Poland
- Correspondence: ; Tel.: +48-61-885-06-67
- Iga Kołodziejczak
- Postgraduate School of Molecular Medicine, Medical University of Warsaw, 02-091 Warsaw, Poland;
- Andrzej Mackiewicz
- Department of Cancer Immunology, Chair of Medical Biotechnology, Poznan University of Medical Sciences, 61-701 Poznan, Poland;
- Department of Cancer Diagnostics and Immunology, Greater Poland Cancer Centre, 61-866 Poznan, Poland
32
Liu Y, Yin M, Sun S. DetexNet: Accurately Diagnosing Frequent and Challenging Pediatric Malignant Tumors. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:395-404. [PMID: 32991280 DOI: 10.1109/tmi.2020.3027547] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2023]
Abstract
The most frequent extracranial solid tumors of childhood, named peripheral neuroblastic tumors (pNTs), are very challenging to diagnose due to their diversified categories and varying forms. Auxiliary diagnosis methods for such pediatric malignant cancers are highly needed to assist pathologists and reduce the risk of misdiagnosis before treatment. In this paper, inspired by the particularity of microscopic pathology images, we integrate neural networks with the texture energy measure (TEM) and propose a novel network architecture named DetexNet (deep texture network). This method makes the low-level representation pattern clearer by embedding expert knowledge as a prior, so that the network can capture the key information of a relatively small pathological dataset more smoothly. By applying and fine-tuning TEM filters in the bottom layer of a network, we greatly improve the performance of the baseline. We further pre-train the model on unlabeled data with an auto-encoder architecture and implement a color space conversion on input images. Two kinds of experiments under different assumptions in the condition of limited training data are performed, and in both of them the proposed method achieves the best performance compared with other state-of-the-art models and doctor diagnosis.
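The texture energy measure (TEM) filters referred to above are, in the classic Laws formulation, outer products of small 1-D kernels. A sketch of that standard construction (of the fixed filters only, not of DetexNet's learned embedding of them):

```python
import numpy as np

# Laws' 1-D kernels: L5 (level), E5 (edge), S5 (spot). Outer products of
# these give the classic 5x5 texture energy masks (e.g. E5E5 below).
L5 = np.array([1, 4, 6, 4, 1], dtype=float)
E5 = np.array([-1, -2, 0, 2, 1], dtype=float)
S5 = np.array([-1, 0, 2, 0, -1], dtype=float)

def tem_filter(img, ka, kb):
    """Correlate (valid mode) with the outer-product mask, take |response|."""
    img = np.asarray(img, dtype=float)
    mask = np.outer(ka, kb)
    h, w = img.shape
    out = np.zeros((h - 4, w - 4))
    for i in range(h - 4):
        for j in range(w - 4):
            out[i, j] = (img[i:i + 5, j:j + 5] * mask).sum()
    return np.abs(out)   # "energy" of the filter response
```

Because E5 sums to zero, the E5E5 mask responds only to intensity variation, which is what makes it a texture (rather than brightness) detector.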
33
Alinsaif S, Lang J. Texture features in the Shearlet domain for histopathological image classification. BMC Med Inform Decis Mak 2020; 20:312. [PMID: 33323118 PMCID: PMC7739509 DOI: 10.1186/s12911-020-01327-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022] Open
Abstract
Background A variety of imaging modalities is available (e.g., magnetic resonance, x-ray, ultrasound, and biopsy), where each modality can reveal different structural aspects of tissues. However, the analysis of histological slide images captured from a biopsy is considered the gold standard to determine whether cancer exists, and it can also reveal the stage of the cancer. Supervised machine learning can therefore be used to classify histopathological tissues. Several computational techniques have been proposed to study histopathological images with varying levels of success. Handcrafted techniques based on texture analysis are often proposed to classify histopathological tissues and can be combined with supervised machine learning.
Methods In this paper, we construct a novel feature space to automate the classification of tissues in histology images. Our feature representation integrates various feature sets into a new texture representation. All of our descriptors are computed in the complex Shearlet domain. With complex coefficients, we investigate not only the use of magnitude coefficients, but also the effectiveness of incorporating the relative phase (RP) coefficients to create the input feature vector. In our study, four texture-based descriptors are extracted from the Shearlet coefficients: co-occurrence texture features, Local Binary Patterns, Local Oriented Statistic Information Booster, and segmentation-based Fractal Texture Analysis. Each set of these attributes captures significant local and global statistics. Therefore, we study them individually, but additionally integrate them to boost the accuracy of classifying histopathology tissues when fed to classical classifiers. To tackle the problem of high dimensionality, our proposed feature space is reduced using principal component analysis. In our study, we use two classifiers to demonstrate the success of our proposed feature representation: Support Vector Machine (SVM) and Decision Tree Bagger (DTB). Results Our feature representation delivered high performance on four public datasets, with best achieved accuracies of 92.56% (multi-class Kather), 91.73% (BreakHis), 98.04% (Epistroma), and 96.29% (Warwick-QU). Conclusions Our proposed method in the Shearlet domain for the classification of histopathological images proved to be effective when investigated on four different datasets that exhibit different levels of complexity.
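Among the four descriptors, Local Binary Patterns are easy to sketch. The snippet below computes a basic 3x3 LBP and its 256-bin histogram directly on pixel values for illustration; in the paper the descriptors are computed on complex Shearlet coefficients instead.

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours at the
    centre value (>=) and pack the bits clockwise from the top-left."""
    img = np.asarray(img, dtype=float)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            code = 0
            for bit, (di, dj) in enumerate(offs):
                if img[i + di, j + dj] >= img[i, j]:
                    code |= 1 << bit
            out[i - 1, j - 1] = code
    return out

def lbp_histogram(img):
    """256-bin histogram of LBP codes, usable as a texture feature vector."""
    return np.bincount(lbp_3x3(img).ravel(), minlength=256)
```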
34
Bianconi F, Kather JN, Reyes-Aldasoro CC. Experimental Assessment of Color Deconvolution and Color Normalization for Automated Classification of Histology Images Stained with Hematoxylin and Eosin. Cancers (Basel) 2020; 12:cancers12113337. [PMID: 33187299 PMCID: PMC7697346 DOI: 10.3390/cancers12113337] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2020] [Accepted: 11/04/2020] [Indexed: 02/06/2023] Open
Abstract
Histological evaluation plays a major role in cancer diagnosis and treatment. The appearance of H&E-stained images can vary significantly as a consequence of differences in several factors, such as reagents, staining conditions, preparation procedure and image acquisition system. Such potential sources of noise can all have negative effects on computer-assisted classification. To minimize such artefacts and their potentially negative effects, several color pre-processing methods have been proposed in the literature, for instance color augmentation, color constancy, color deconvolution and color transfer. Still, little work has been done to investigate the efficacy of these methods on a quantitative basis. In this paper, we evaluated the effects of color constancy, deconvolution and transfer on the automated classification of H&E-stained images representing different types of cancers, specifically breast, prostate and colorectal cancer and malignant lymphoma. Our results indicate that in most cases color pre-processing does not improve the classification accuracy, especially when coupled with color-based image descriptors. Some pre-processing methods, however, can be beneficial when used with some texture-based methods like Gabor filters and Local Binary Patterns.
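As a minimal sketch of one evaluated pre-processing step, color deconvolution in the standard Ruifrok-Johnston formulation converts RGB values to optical density and unmixes them with a stain matrix. The H&E vectors below are the commonly published defaults; the paper's exact implementation may differ.

```python
import numpy as np

# Standard Ruifrok-Johnston H&E stain OD vectors; the third (residual)
# direction is their cross product, completing an invertible 3x3 basis.
H = np.array([0.65, 0.70, 0.29])
E = np.array([0.07, 0.99, 0.11])
R = np.cross(H, E)
M = np.array([H / np.linalg.norm(H),
              E / np.linalg.norm(E),
              R / np.linalg.norm(R)])

def separate_stains(rgb):
    """Unmix an RGB image (floats in (0, 1]) into per-pixel hematoxylin,
    eosin and residual stain densities via Beer-Lambert optical density."""
    od = -np.log10(np.maximum(rgb, 1e-6))   # optical density
    return od.reshape(-1, 3) @ np.linalg.inv(M)
```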
Affiliation(s)
- Francesco Bianconi
- Department of Engineering, Università degli Studi di Perugia, Via Goffredo Duranti 93, 06125 Perugia, Italy
- giCentre, School of Mathematics, Computer Science & Engineering, City, University of London, Northampton Square, London EC1V 0HB, UK;
- Correspondence: ; Tel.: +39-075-585-3706
- Jakob N. Kather
- Department of Medical Oncology and Internal Medicine VI, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany;
- Constantino Carlos Reyes-Aldasoro
- giCentre, School of Mathematics, Computer Science & Engineering, City, University of London, Northampton Square, London EC1V 0HB, UK;
35
Combining multiple spatial statistics enhances the description of immune cell localisation within tumours. Sci Rep 2020; 10:18624. [PMID: 33122646 PMCID: PMC7596100 DOI: 10.1038/s41598-020-75180-9] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Accepted: 10/13/2020] [Indexed: 12/15/2022] Open
Abstract
Digital pathology enables computational analysis algorithms to be applied at scale to histological images. An example is the identification of immune cells within solid tumours. Image analysis algorithms can extract precise cell locations from immunohistochemistry slides, but the resulting spatial coordinates, or point patterns, can be difficult to interpret. Since localisation of immune cells within tumours may reflect their functional status and correlates with patient prognosis, novel descriptors of their spatial distributions are of biological and clinical interest. A range of spatial statistics have been used to analyse such point patterns but, individually, these approaches only partially describe complex immune cell distributions. In this study, we apply three spatial statistics to locations of CD68+ macrophages within human head and neck tumours, and show that images grouped semi-quantitatively by a pathologist share similar statistics. We generate a synthetic dataset which emulates human samples and use it to demonstrate that combining multiple spatial statistics with a maximum likelihood approach better predicts human classifications than any single statistic. We can also estimate the error associated with our classifications. Importantly, this methodology is adaptable and can be extended to other histological investigations or applied to point patterns outside of histology.
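As an example of an individual spatial statistic of the kind being combined, the sketch below computes the Clark-Evans index, a classic nearest-neighbour statistic; this particular choice is illustrative, and the three statistics actually used in the study may differ.

```python
import numpy as np

def clark_evans(points, area):
    """Clark-Evans index: observed mean nearest-neighbour distance divided
    by its expectation under complete spatial randomness (CSR).
    R is about 1 for random, < 1 for clustered, > 1 for regular patterns."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # ignore self-distances
    mean_nn = d.min(axis=1).mean()
    expected = 0.5 * np.sqrt(area / len(pts)) # CSR expectation
    return mean_nn / expected
```

Combining several such statistics, as the paper does, captures aspects of a point pattern that any single index misses.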
36
Objective Diagnosis for Histopathological Images Based on Machine Learning Techniques: Classical Approaches and New Trends. MATHEMATICS 2020. [DOI: 10.3390/math8111863] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
Histopathology refers to the examination by a pathologist of biopsy samples. Histopathology images are captured by a microscope to locate, examine, and classify many diseases, such as different cancer types. They provide a detailed view of different types of diseases and their tissue status. These images are an essential resource with which to define biological compositions or analyze cell and tissue structures. This imaging modality is very important for diagnostic applications. The analysis of histopathology images is a prolific and relevant research area supporting disease diagnosis. In this paper, the challenges of histopathology image analysis are evaluated. An extensive review of conventional and deep learning techniques which have been applied in histological image analyses is presented. This review summarizes many current datasets and highlights important challenges and constraints with recent deep learning techniques, alongside possible future research avenues. Despite the progress made in this research area so far, it is still a significant area of open research because of the variety of imaging techniques and disease-specific characteristics.
|
37
|
Javed S, Mahmood A, Werghi N, Benes K, Rajpoot N. Multiplex Cellular Communities in Multi-Gigapixel Colorectal Cancer Histology Images for Tissue Phenotyping. IEEE Transactions on Image Processing 2020; PP:9204-9219. [PMID: 32966218 DOI: 10.1109/tip.2020.3023795]
Abstract
In computational pathology, automated tissue phenotyping in cancer histology images is a fundamental tool for profiling tumor microenvironments. Current tissue phenotyping methods use features derived from image patches which may not carry biological significance. In this work, we propose a novel multiplex cellular community-based algorithm for tissue phenotyping integrating cell-level features within a graph-based hierarchical framework. We demonstrate that such integration offers better performance compared to prior deep learning and texture-based methods as well as to cellular community-based methods using uniplex networks. To this end, we construct cell-level graphs using texture, alpha diversity and multi-resolution deep features. Using these graphs, we compute cellular connectivity features which are then employed for the construction of a patch-level multiplex network. Over this network, we compute multiplex cellular communities using a novel objective function. The proposed objective function computes a low-dimensional subspace from each cellular network and subsequently seeks a common low-dimensional subspace using the Grassmann manifold. We evaluate our proposed algorithm on three publicly available datasets for tissue phenotyping, demonstrating a significant improvement over existing state-of-the-art methods.
|
38
|
Abstract
Pathology has benefited from advanced innovation with novel technology to implement a digital solution. Whole slide imaging is a disruptive technology where glass slides are scanned to produce digital images. There have been significant advances in whole slide scanning hardware and software that have allowed for ready access of whole slide images. The resulting whole slide images can be viewed as digital files in a manner comparable to glass slides under a microscope. Whole slide imaging has increased in adoption among pathologists, pathology departments, and scientists for clinical, educational, and research initiatives. Worldwide usage of whole slide imaging has grown significantly. Pathology regulatory organizations (ie, College of American Pathologists) have put forth guidelines for clinical validation, and the US Food and Drug Administration has also approved whole slide imaging for primary diagnosis. This article will review the digital pathology ecosystem and discuss clinical and nonclinical applications of its use.
|
39
|
Javed S, Mahmood A, Fraz MM, Koohbanani NA, Benes K, Tsang YW, Hewitt K, Epstein D, Snead D, Rajpoot N. Cellular community detection for tissue phenotyping in colorectal cancer histology images. Med Image Anal 2020; 63:101696. [PMID: 32330851 DOI: 10.1016/j.media.2020.101696]
Abstract
Classification of various types of tissue in cancer histology images based on the cellular compositions is an important step towards the development of computational pathology tools for systematic digital profiling of the spatial tumor microenvironment. Most existing methods for tissue phenotyping are limited to the classification of tumor and stroma and require large amounts of annotated histology images which are often not available. In the current work, we pose the problem of identifying distinct tissue phenotypes as finding communities in cellular graphs or networks. First, we train a deep neural network for cell detection and classification into five distinct cellular components. Considering the detected nuclei as nodes, potential cell-cell connections are assigned using Delaunay triangulation, resulting in a cell-level graph. Based on this cell graph, a feature vector capturing potential cell-cell connections between different types of cells is computed. These feature vectors are used to construct a patch-level graph based on chi-square distance. We map patch-level nodes to the geometric space by representing each node as a vector of geodesic distances from other nodes in the network and iteratively drifting the patch nodes in the direction of positive density gradients towards maximum density regions. The proposed algorithm is evaluated on a publicly available dataset and another new large-scale dataset consisting of 280K patches of seven tissue phenotypes. The estimated communities have significant biological meanings as verified by expert pathologists. A comparison with current state-of-the-art methods reveals significant performance improvement in tissue phenotyping.
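The cell-graph feature extraction described in this abstract can be sketched as follows. The paper builds cell graphs with Delaunay triangulation; for brevity this sketch substitutes a simple distance-threshold proximity graph and computes a feature vector counting connections between each unordered pair of cell types. An illustrative simplification, not the published method; all names and parameters are invented.

```python
import math
from itertools import combinations

def cell_graph_features(cells, radius, n_types):
    """Count edges between each pair of cell types in a proximity graph.
    `cells` is a list of (x, y, type_index) tuples; two cells are linked
    when their Euclidean distance is at most `radius` (a stand-in for
    the Delaunay triangulation used in the paper)."""
    # Feature vector indexed by unordered type pairs (a, b) with a <= b.
    pairs = [(a, b) for a in range(n_types) for b in range(a, n_types)]
    index = {p: k for k, p in enumerate(pairs)}
    features = [0] * len(pairs)
    for (x1, y1, t1), (x2, y2, t2) in combinations(cells, 2):
        if math.hypot(x1 - x2, y1 - y2) <= radius:
            features[index[tuple(sorted((t1, t2)))]] += 1
    return features
```

Such per-patch feature vectors are the kind of input that a patch-level graph (here built with chi-square distance) would then cluster into tissue communities.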
Affiliation(s)
- Sajid Javed: Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; Khalifa University Center for Autonomous Robotic Systems (KUCARS), Abu Dhabi, P.O. Box 127788, UAE
- Arif Mahmood: Department of Computer Science, Information Technology University, Lahore, Pakistan
- Muhammad Moazam Fraz: Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; National University of Science and Technology (NUST), Islamabad, Pakistan
- Ksenija Benes: Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK
- Yee-Wah Tsang: Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK
- Katherine Hewitt: Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK
- David Epstein: Mathematics Institute, University of Warwick, Coventry, CV4 7AL, UK
- David Snead: Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK
- Nasir Rajpoot: Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Walsgrave, Coventry, CV2 2DX, UK; The Alan Turing Institute, London, UK
|
40
|
Interpretable multimodal deep learning for real-time pan-tissue pan-disease pathology search on social media. Mod Pathol 2020; 33:2169-2185. [PMID: 32467650 PMCID: PMC7581495 DOI: 10.1038/s41379-020-0540-1]
Abstract
Pathologists are responsible for rapidly providing a diagnosis on critical health issues. Challenging cases benefit from additional opinions of pathologist colleagues. In addition to on-site colleagues, there is an active worldwide community of pathologists on social media offering complementary opinions. Such access to pathologists worldwide has the capacity to improve diagnostic accuracy and generate broader consensus on next steps in patient care. From Twitter we curate 13,626 images from 6,351 tweets from 25 pathologists from 13 countries. We supplement the Twitter data with 113,161 images from 1,074,484 PubMed articles. We develop machine learning and deep learning models to (i) accurately identify histopathology stains, (ii) discriminate between tissues, and (iii) differentiate disease states. Area Under Receiver Operating Characteristic (AUROC) is 0.805-0.996 for these tasks. We repurpose the disease classifier to search for similar disease states given an image and clinical covariates. We report precision@k=1 of 0.7618 ± 0.0018 (chance 0.397 ± 0.004, mean ± stdev). The classifiers find that texture and tissue are important clinico-visual features of disease. Deep features trained only on natural images (e.g., cats and dogs) substantially improved search performance, while pathology-specific deep features and cell nuclei features further improved search to a lesser extent. We implement a social media bot (@pathobot on Twitter) to use the trained classifiers to aid pathologists in obtaining real-time feedback on challenging cases. If a social media post containing pathology text and images mentions the bot, the bot generates quantitative predictions of disease state (normal/artifact/infection/injury/nontumor, preneoplastic/benign/low-grade-malignant-potential, or malignant) and lists similar cases across social media and PubMed.
Our project has become a globally distributed expert system that facilitates pathological diagnosis and brings expertise to underserved regions or hospitals with less expertise in a particular disease. This is the first pan-tissue pan-disease (i.e., from infection to malignancy) method for prediction and search on social media, and the first pathology study prospectively tested in public on social media. We will share data through http://pathobotology.org. We expect our project to cultivate a more connected world of physicians and improve patient care worldwide.
|
41
|
Halicek M, Shahedi M, Little JV, Chen AY, Myers LL, Sumer BD, Fei B. Head and Neck Cancer Detection in Digitized Whole-Slide Histology Using Convolutional Neural Networks. Sci Rep 2019; 9:14043. [PMID: 31575946 PMCID: PMC6773771 DOI: 10.1038/s41598-019-50313-x]
Abstract
Primary management for head and neck cancers, including squamous cell carcinoma (SCC), involves surgical resection with negative cancer margins. Pathologists guide surgeons during these operations by detecting cancer in histology slides made from the excised tissue. In this study, 381 digitized, histological whole-slide images (WSI) from 156 patients with head and neck cancer were used to train, validate, and test an inception-v4 convolutional neural network. The proposed method is able to detect and localize primary head and neck SCC on WSI with an AUC of 0.916 for patients in the SCC testing group and 0.954 for patients in the thyroid carcinoma testing group. Moreover, the proposed method is able to diagnose WSI with cancer versus normal slides with an AUC of 0.944 and 0.995 for the SCC and thyroid carcinoma testing groups, respectively. For comparison, we tested the proposed, diagnostic method on an open-source dataset of WSI from sentinel lymph nodes with breast cancer metastases, CAMELYON 2016, to obtain patch-based cancer localization and slide-level cancer diagnoses. The experimental design yields a robust method with potential to help create a tool to increase efficiency and accuracy of pathologists detecting head and neck cancers in histological images.
Affiliation(s)
- Martin Halicek: Department of Bioengineering, University of Texas at Dallas, Richardson, TX, USA; Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA
- Maysam Shahedi: Department of Bioengineering, University of Texas at Dallas, Richardson, TX, USA
- James V Little: Department of Pathology and Laboratory Medicine, Emory University School of Medicine, Atlanta, GA, USA
- Amy Y Chen: Department of Otolaryngology, Emory University School of Medicine, Atlanta, GA, USA
- Larry L Myers: Department of Otolaryngology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Baran D Sumer: Department of Otolaryngology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Baowei Fei: Department of Bioengineering, University of Texas at Dallas, Richardson, TX, USA; Advanced Imaging Research Center, University of Texas Southwestern Medical Center, Dallas, TX, USA; Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX, USA
|
42
|
Joseph J, Roudier MP, Narayanan PL, Augulis R, Ros VR, Pritchard A, Gerrard J, Laurinavicius A, Harrington EA, Barrett JC, Howat WJ. Proliferation Tumour Marker Network (PTM-NET) for the identification of tumour region in Ki67 stained breast cancer whole slide images. Sci Rep 2019; 9:12845. [PMID: 31492872 PMCID: PMC6731323 DOI: 10.1038/s41598-019-49139-4]
Abstract
Uncontrolled proliferation is a hallmark of cancer and can be assessed by labelling breast tissue using immunohistochemistry for Ki67, a protein associated with cell proliferation. Accurate measurement of Ki67-positive tumour nuclei is of critical importance, but requires annotation of the tumour regions by a pathologist. This manual annotation process is highly subjective, time-consuming and subject to inter- and intra-annotator variability. To address this challenge, we have developed Proliferation Tumour Marker Network (PTM-NET), a deep learning model that objectively annotates the tumour regions in Ki67-labelled breast cancer digital pathology images using a convolutional neural network. Our custom designed deep learning model was trained on 45 immunohistochemical Ki67-labelled whole slide images to classify tumour and non-tumour regions and was validated on 45 whole slide images from two different sources that were stained using different protocols. Our results show a Dice coefficient of 0.74, positive predictive value of 70% and negative predictive value of 88.3% against the manual ground truth annotation for the combined dataset. There were minimal differences between the images from different sources and the model was further tested in oestrogen receptor and progesterone receptor-labelled images. Finally, using an extension of the model, we could identify possible hotspot regions of high proliferation within the tumour. In the future, this approach could be useful in identifying tumour regions in biopsy samples and tissue microarray images.
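The validation metrics quoted in this abstract (Dice coefficient, positive predictive value, negative predictive value) can be computed from a predicted and a ground-truth mask as below; a generic sketch over flattened binary masks, not the authors' evaluation code.

```python
def confusion_counts(pred, truth):
    """Per-pixel confusion counts from two flat binary masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    return tp, fp, fn, tn

def dice(pred, truth):
    """Dice coefficient: 2*TP / (2*TP + FP + FN), overlap of the masks."""
    tp, fp, fn, _ = confusion_counts(pred, truth)
    return 2 * tp / (2 * tp + fp + fn)

def ppv(pred, truth):
    """Positive predictive value: fraction of predicted tumour that is tumour."""
    tp, fp, _, _ = confusion_counts(pred, truth)
    return tp / (tp + fp)

def npv(pred, truth):
    """Negative predictive value: fraction of predicted non-tumour that is non-tumour."""
    _, _, fn, tn = confusion_counts(pred, truth)
    return tn / (tn + fn)
```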
Affiliation(s)
- Jesuchristopher Joseph: Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom
- Martine P Roudier: Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom
- Priya Lakshmi Narayanan: Centre for Evolution and Cancer, Division of Molecular Pathology, Institute of Cancer Research London, London, United Kingdom
- Renaldas Augulis: Vilnius University, Faculty of Medicine and the National Centre of Pathology, affiliate of Vilnius University Hospital Santaros Clinics, Vilnius, Lithuania
- Vidalba Rocher Ros: Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom
- Alison Pritchard: Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom
- Joe Gerrard: Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom
- Arvydas Laurinavicius: Vilnius University, Faculty of Medicine and the National Centre of Pathology, affiliate of Vilnius University Hospital Santaros Clinics, Vilnius, Lithuania
- Elizabeth A Harrington: Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom
- J Carl Barrett: Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom
- William J Howat: Molecular Pathology Group, Translational Science, AstraZeneca, Cambridge, United Kingdom
|
43
|
Vu QD, Kwak JT. A dense multi-path decoder for tissue segmentation in histopathology images. Comput Methods Programs Biomed 2019; 173:119-129. [PMID: 31046986 DOI: 10.1016/j.cmpb.2019.03.007]
Abstract
BACKGROUND AND OBJECTIVE Segmenting different tissue components in histopathological images is of great importance for analyzing tissues and tumor environments. In recent years, an encoder-decoder family of convolutional neural networks has increasingly been adopted to develop automated segmentation tools. While the encoder has been the main focus of most investigations, the role of the decoder has so far not been well studied and understood. Herein, we propose an improved design of a decoder for the segmentation of epithelium and stroma components in histopathology images. METHODS The proposed decoder is built upon a multi-path layout and dense shortcut connections between layers to maximize the learning and inference capability. Equipped with the proposed decoder, neural networks are built using three types of encoders (VGG, ResNet and pre-activated ResNet). To assess the proposed method, breast and prostate tissue datasets are utilized, including 108 and 52 hematoxylin and eosin (H&E) breast tissue images and 224 H&E prostate tissue images. RESULTS Combining the pre-activated ResNet encoder and the proposed decoder, we achieved a pixel-wise accuracy (ACC) of 0.9122, a Rand index (RAND) score of 0.8398, an area under receiver operating characteristic curve (AUC) of 0.9716, a Dice coefficient for stroma (DICE_STR) of 0.9092 and a Dice coefficient for epithelium (DICE_EPI) of 0.9150 on the breast tissue dataset. The same network obtained 0.9074 ACC, 0.8320 RAND, 0.9719 AUC, 0.9021 DICE_EPI and 0.9121 DICE_STR on the prostate dataset. CONCLUSIONS In general, the experimental results confirmed that the proposed network is superior to networks combined with a conventional decoder. Therefore, the proposed decoder could aid in improving tissue analysis in histopathology images.
Affiliation(s)
- Quoc Dang Vu: Department of Computer Science and Engineering, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul 05006, Korea
- Jin Tae Kwak: Department of Computer Science and Engineering, Sejong University, 209 Neungdong-ro, Gwangjin-gu, Seoul 05006, Korea
|
44
|
Qaiser T, Tsang YW, Taniyama D, Sakamoto N, Nakane K, Epstein D, Rajpoot N. Fast and accurate tumor segmentation of histology images using persistent homology and deep convolutional features. Med Image Anal 2019; 55:1-14. [PMID: 30991188 DOI: 10.1016/j.media.2019.03.014]
Abstract
Tumor segmentation in whole-slide images of histology slides is an important step towards computer-assisted diagnosis. In this work, we propose a tumor segmentation framework based on the novel concept of persistent homology profiles (PHPs). For a given image patch, the homology profiles are derived by efficient computation of persistent homology, which is an algebraic tool from homology theory. We propose an efficient way of computing topological persistence of an image, alternative to simplicial homology. The PHPs are devised to distinguish tumor regions from their normal counterparts by modeling the atypical characteristics of tumor nuclei. We propose two variants of our method for tumor segmentation: one that targets speed without compromising accuracy and the other that targets higher accuracy. The fast version is based on a selection of exemplar image patches from a convolutional neural network (CNN) and patch classification by quantifying the divergence between the PHPs of exemplars and the input image patch. Detailed comparative evaluation shows that the proposed algorithm is significantly faster than competing algorithms while achieving comparable results. The accurate version combines the PHPs and high-level CNN features and employs a multi-stage ensemble strategy for image patch labeling. Experimental results demonstrate that the combination of PHPs and CNN features outperforms competing algorithms. This study is performed on two independently collected colorectal datasets containing adenoma, adenocarcinoma, signet, and healthy cases. Collectively, the accurate tumor segmentation produces the highest average patch-level F1-score, as compared with competing algorithms, on malignant and healthy cases from both datasets. Overall the proposed framework highlights the utility of persistent homology for histopathology image analysis.
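The persistent homology computation in the paper is considerably more sophisticated; as a loose, 0-dimensional stand-in, the sketch below profiles an image patch by counting connected components of its superlevel sets at a few thresholds, which conveys the flavour of a topological profile distinguishing textures. Everything here is an invented simplification, not the PHP algorithm.

```python
from collections import deque

def components(mask):
    """Count 4-connected foreground components in a binary grid (BFS)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

def topology_profile(image, thresholds):
    """Component count of each superlevel set {pixel >= t}: a crude
    0-dimensional analogue of a persistence profile across thresholds."""
    return [components([[v >= t for v in row] for row in image])
            for t in thresholds]
```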
Affiliation(s)
- Talha Qaiser: Department of Computer Science, University of Warwick, UK
- Yee-Wah Tsang: Department of Pathology, University Hospitals Coventry and Warwickshire, UK
- Daiki Taniyama: Department of Molecular Pathology, Hiroshima University Institute of Biomedical and Health Sciences, Japan
- Naoya Sakamoto: Department of Molecular Pathology, Hiroshima University Institute of Biomedical and Health Sciences, Japan
- Kazuaki Nakane: Graduate School of Medicine, Division of Health Science, Osaka University, Japan
- Nasir Rajpoot: Department of Computer Science, University of Warwick, UK; Department of Pathology, University Hospitals Coventry and Warwickshire, UK; The Alan Turing Institute, UK
|
45
|
Computer aided quantification of intratumoral stroma yields an independent prognosticator in rectal cancer. Cell Oncol (Dordr) 2019; 42:331-341. [PMID: 30825182 DOI: 10.1007/s13402-019-00429-z]
Abstract
PURPOSE Tumor-stroma ratio (TSR) serves as an independent prognostic factor in colorectal cancer and other solid malignancies. The recent introduction of digital pathology in routine tissue diagnostics holds opportunities for automated TSR analysis. We investigated the potential of computer-aided quantification of intratumoral stroma in rectal cancer whole-slide images. METHODS Histological slides from 129 rectal adenocarcinoma patients were analyzed by two experts who selected a suitable stroma hot-spot and visually assessed TSR. A semi-automatic method based on deep learning was trained to segment all relevant tissue types in rectal cancer histology and subsequently applied to the hot-spots provided by the experts. Patients were assigned to a 'stroma-high' or 'stroma-low' group by both TSR methods (visual and automated). This allowed for prognostic comparison between the two methods in terms of disease-specific and disease-free survival times. RESULTS With stroma-low as baseline, automated TSR was found to be prognostic independent of age, gender, pT-stage, lymph node status, tumor grade, and whether adjuvant therapy was given, both for disease-specific survival (hazard ratio = 2.48 (95% confidence interval 1.29-4.78)) and for disease-free survival (hazard ratio = 2.05 (95% confidence interval 1.11-3.78)). Visually assessed TSR did not serve as an independent prognostic factor in multivariate analysis. CONCLUSIONS This work shows that TSR is an independent prognosticator in rectal cancer when assessed automatically in user-provided stroma hot-spots. The deep learning-based technology presented here may be a significant aid to pathologists in routine diagnostics.
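A minimal sketch of the tumour-stroma ratio computation on a segmented hot-spot, and of the dichotomisation into the paper's 'stroma-high'/'stroma-low' groups. The label encoding and the 50% cutoff are assumptions for illustration, not values taken from the paper.

```python
def tumor_stroma_ratio(labels, stroma_label=1, tumor_label=2):
    """Fraction of stroma within the combined tumour-plus-stroma area,
    from a flat list of per-pixel class labels (other labels ignored)."""
    stroma = sum(1 for v in labels if v == stroma_label)
    tumor = sum(1 for v in labels if v == tumor_label)
    return stroma / (stroma + tumor)

def stroma_group(tsr, cutoff=0.5):
    """Dichotomise a TSR value into 'stroma-high' vs 'stroma-low';
    the 50% cutoff here is a conventional choice, an assumption."""
    return "stroma-high" if tsr >= cutoff else "stroma-low"
```

The resulting group label is the covariate that would then enter a Cox survival model alongside age, stage, and the other factors listed in the abstract.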
|
46
|
Integrating segmentation with deep learning for enhanced classification of epithelial and stromal tissues in H&E images. Pattern Recognit Lett 2019. [DOI: 10.1016/j.patrec.2017.09.015]
|
47
|
Gertych A, Swiderska-Chadaj Z, Ma Z, Ing N, Markiewicz T, Cierniak S, Salemi H, Guzman S, Walts AE, Knudsen BS. Convolutional neural networks can accurately distinguish four histologic growth patterns of lung adenocarcinoma in digital slides. Sci Rep 2019; 9:1483. [PMID: 30728398 PMCID: PMC6365499 DOI: 10.1038/s41598-018-37638-9]
Abstract
During the diagnostic workup of lung adenocarcinomas (LAC), pathologists evaluate distinct histological tumor growth patterns. The percentage of each pattern on multiple slides bears prognostic significance. To assist with the quantification of growth patterns, we constructed a pipeline equipped with a convolutional neural network (CNN) and soft-voting as the decision function to recognize solid, micropapillary, acinar, and cribriform growth patterns, and non-tumor areas. Slides of primary LAC were obtained from Cedars-Sinai Medical Center (CSMC), the Military Institute of Medicine in Warsaw and the TCGA portal. Several CNN models trained with 19,924 image tiles extracted from 78 slides (MIMW and CSMC) were evaluated on 128 test slides from the three sites by F1-score and accuracy, using manual tumor annotations by pathologists. The best CNN yielded F1-scores of 0.91 (solid), 0.76 (micropapillary), 0.74 (acinar), 0.6 (cribriform), and 0.96 (non-tumor), respectively. The overall accuracy of distinguishing the five tissue classes was 89.24%. Slide-based accuracy in the CSMC set (88.5%) was significantly better (p < 2.3E-4) than the accuracy in the MIMW (84.2%) and TCGA (84%) sets due to superior slide quality. Our model can work side-by-side with a pathologist to accurately quantify the percentages of growth patterns in tumors with mixed LAC patterns.
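Soft-voting, the decision function named in this abstract, can be sketched as averaging the per-class probability vectors produced by several models and taking the arg-max class. This is a generic illustration of the technique, not the authors' pipeline.

```python
def soft_vote(prob_lists):
    """Soft-voting ensemble: average the class-probability vectors from
    several models and return (winning class index, averaged vector)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / n_models for i in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg
```

Compared with hard majority voting, soft voting lets a model that is confidently right outweigh two models that are marginally wrong.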
Affiliation(s)
- Arkadiusz Gertych: Department of Surgery, Cedars-Sinai Medical Center, Los Angeles, California, USA; Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Zhaoxuan Ma: Department of Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Nathan Ing: Department of Surgery, Cedars-Sinai Medical Center, Los Angeles, California, USA; Department of Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Tomasz Markiewicz: Faculty of Electrical Engineering, Warsaw University of Technology, Warsaw, Poland; Department of Pathology, Military Institute of Medicine, Warsaw, Poland
- Szczepan Cierniak: Department of Pathology, Military Institute of Medicine, Warsaw, Poland
- Hootan Salemi: Department of Surgery, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Samuel Guzman: Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Ann E Walts: Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Beatrice S Knudsen: Department of Pathology and Laboratory Medicine, Cedars-Sinai Medical Center, Los Angeles, California, USA; Department of Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, California, USA
|
48
|
Halicek M, Shahedi M, Little JV, Chen AY, Myers LL, Sumer BD, Fei B. Detection of Squamous Cell Carcinoma in Digitized Histological Images from the Head and Neck Using Convolutional Neural Networks. Proc SPIE Int Soc Opt Eng 2019; 10956. [PMID: 32476700 DOI: 10.1117/12.2512570]
Abstract
Primary management for head and neck squamous cell carcinoma (SCC) involves surgical resection with negative cancer margins. Pathologists guide surgeons during these operations by detecting SCC in histology slides made from the excised tissue. In this study, 192 digitized histological images from 84 head and neck SCC patients were used to train, validate, and test an inception-v4 convolutional neural network. The proposed method performs with an AUC of 0.91 and 0.92 for the validation and testing groups, respectively. The careful experimental design yields a robust method with potential to help create a tool to increase the efficiency and accuracy of pathologists in detecting SCC in histological images.
Affiliation(s)
- Martin Halicek: Department of Bioengineering, University of Texas at Dallas, Dallas, TX, USA; Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA; Medical College of Georgia, Augusta University, Augusta, GA
- Maysam Shahedi: Department of Bioengineering, University of Texas at Dallas, Dallas, TX, USA
- James V Little: Department of Pathology and Laboratory Medicine, Emory University School of Medicine, Atlanta, GA
- Amy Y Chen: Department of Otolaryngology, Emory University School of Medicine, Atlanta, GA
- Larry L Myers: Department of Otolaryngology, University of Texas Southwestern Medical Center, Dallas, TX
- Baran D Sumer: Department of Otolaryngology, University of Texas Southwestern Medical Center, Dallas, TX
- Baowei Fei: Department of Bioengineering, University of Texas at Dallas, Dallas, TX, USA; Advanced Imaging Research Center, University of Texas Southwestern Medical Center, Dallas, TX; Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX
|
49
|
Schilling F, Geppert CE, Strehl J, Hartmann A, Kuerten S, Brehmer A, Jabari S. Digital pathology imaging and computer-aided diagnostics as a novel tool for standardization of evaluation of aganglionic megacolon (Hirschsprung disease) histopathology. Cell Tissue Res 2018; 375:371-381. [PMID: 30175382 DOI: 10.1007/s00441-018-2911-1]
Abstract
Based on a recently introduced immunohistochemical panel (Bachmann et al. 2015) for the histopathological diagnosis of aganglionic megacolon (AM), also known as Hirschsprung disease, we evaluated whether the use of digital pathology and machine learning could help to obtain a reliable diagnosis. Slides were obtained from 31 specimens of 27 patients immunohistochemically stained for MAP2, calretinin, S100β and GLUT1. Slides were digitized by whole slide scanning. We used Definiens Developer Tissue Studio software for the analysis. We configured the necessary parameters in combination with machine learning to identify pathological aberrations. A significant difference between AM- and non-AM-affected tissues was found for calretinin (AM 0.55% vs. non-AM 1.44%) and MAP2 (AM 0.004% vs. non-AM 0.07%) staining measurements and software-based evaluations. In contrast, S100β and GLUT1 staining measurements and software-based evaluations showed no significant differences between AM- and non-AM-affected tissues. However, no difference was found in the comparison of suction biopsies with resections. Applying machine learning via an ensemble voting classifier, we achieved an accuracy of 87.5% on the test set. Automated diagnosis of AM by applying digital pathology to immunohistochemical panels was successful for calretinin and MAP2, whereas S100β and GLUT1 were not effective for diagnosis. Our method suggests that software-based approaches are capable of diagnosing AM. Our future challenge will be to improve efficiency by reducing the need for large, time-consuming, pre-labelled training datasets. With increasing technical improvement, especially in unsupervised training procedures, this method could be helpful in the future.
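A hedged sketch of a hard-majority ensemble over per-marker threshold rules, loosely mirroring the ensemble voting classifier mentioned in this abstract. The cutoffs below are invented midpoints between the reported AM and non-AM mean stained-area fractions; they are illustrative assumptions, not values from the paper.

```python
from collections import Counter

def threshold_classifier(value, cutoff):
    """Toy per-marker rule: vote 'AM' when the stained-area fraction is
    below the cutoff (AM tissue showed lower calretinin/MAP2 staining)."""
    return "AM" if value < cutoff else "non-AM"

def ensemble_diagnosis(measurements, cutoffs):
    """Hard-majority vote across per-marker classifiers; `measurements`
    and `cutoffs` map marker name -> stained-area fraction / cutoff."""
    votes = [threshold_classifier(measurements[m], cutoffs[m]) for m in cutoffs]
    return Counter(votes).most_common(1)[0][0]
```

A real voting ensemble would combine trained base classifiers rather than fixed thresholds, but the aggregation step is the same.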
Affiliation(s)
- Florian Schilling
- Institute of Anatomy and Cell Biology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Krankenhausstraße 9, 91054, Erlangen, Germany; Institute of Pathology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Krankenhausstraße 9, 91054, Erlangen, Germany
- Carol E Geppert
- Institute of Pathology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Krankenhausstraße 9, 91054, Erlangen, Germany
- Johanna Strehl
- Institute of Pathology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Krankenhausstraße 9, 91054, Erlangen, Germany
- Arndt Hartmann
- Institute of Pathology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Krankenhausstraße 9, 91054, Erlangen, Germany
- Stefanie Kuerten
- Institute of Anatomy and Cell Biology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Krankenhausstraße 9, 91054, Erlangen, Germany
- Axel Brehmer
- Institute of Anatomy and Cell Biology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Krankenhausstraße 9, 91054, Erlangen, Germany
- Samir Jabari
- Institute of Anatomy and Cell Biology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Krankenhausstraße 9, 91054, Erlangen, Germany; Institute of Pathology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Krankenhausstraße 9, 91054, Erlangen, Germany
|
50
|
Komura D, Ishikawa S. Machine Learning Methods for Histopathological Image Analysis. Comput Struct Biotechnol J 2018; 16:34-42. [PMID: 30275936 PMCID: PMC6158771 DOI: 10.1016/j.csbj.2018.01.001] [Citation(s) in RCA: 357] [Impact Index Per Article: 59.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2017] [Revised: 12/03/2017] [Accepted: 01/14/2018] [Indexed: 12/12/2022] Open
Abstract
The abundant accumulation of digital histopathological images has led to increased demand for their analysis, such as computer-aided diagnosis using machine learning techniques. However, digital pathology images and related tasks raise several issues that must be considered. In this mini-review, we introduce applications of machine learning algorithms to digital pathology image analysis, address some problems specific to such analysis, and propose possible solutions.
Affiliation(s)
- Daisuke Komura
- Department of Genomic Pathology, Medical Research Institute, Tokyo Medical and Dental University, Tokyo, Japan
|