1.
Alahmari SS, Goldgof D, Hall LO, Mouton PR. A Review of Nuclei Detection and Segmentation on Microscopy Images Using Deep Learning With Applications to Unbiased Stereology Counting. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:7458-7477. [PMID: 36327184] [DOI: 10.1109/tnnls.2022.3213407]
Abstract
The detection and segmentation of stained cells and nuclei are essential prerequisites for subsequent quantitative research on many diseases. Recently, deep learning has shown strong performance on many computer vision problems, including solutions for medical image analysis. Furthermore, accurate stereological quantification of microscopic structures in stained tissue sections plays a critical role in understanding human diseases and developing safe and effective treatments. In this article, we review the most recent deep learning approaches for cell (nuclei) detection and segmentation in cancer and Alzheimer's disease, with an emphasis on deep learning approaches combined with unbiased stereology. Major challenges include accurate and reproducible cell detection and segmentation in microscopic images of stained sections. Finally, we discuss potential improvements and future trends in deep learning applied to cell detection and segmentation.
2.
Sajithkumar A, Thomas J, Saji AM, Ali F, E K HH, Adampulan HAG, Sarathchand S. Artificial Intelligence in pathology: current applications, limitations, and future directions. Ir J Med Sci 2024; 193:1117-1121. [PMID: 37542634] [DOI: 10.1007/s11845-023-03479-3]
Abstract
PURPOSE Given AI's recent success in computer vision applications, the majority of pathologists anticipate that it will be able to assist them with a variety of digital pathology tasks. Massive improvements in deep learning have created a synergy between Artificial Intelligence (AI) and digital pathology, enabling image-based diagnosis. AI-based solutions are being developed to eliminate errors and save pathologists time. AIMS In this paper, we discuss the components underlying the use of AI in pathology, its applications in the medical profession, the obstacles and constraints it encounters, and its future possibilities in the medical field. CONCLUSIONS Based on these factors, we elaborate on the use of AI in medical pathology and provide recommendations for its successful implementation in this field.
Affiliation(s)
- Akhil Sajithkumar
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
- Jubin Thomas
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
- Ajish Meprathumalil Saji
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
- Fousiya Ali
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
- Haneena Hasin E K
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
- Hannan Abdul Gafoor Adampulan
- Department of Oral Pathology and Microbiology, Malabar Dental College and Research Centre, Manoor Chekanoor Road, Mudur PO, Edappal, Malappuram Dist, 679578, India
- Swathy Sarathchand
- Sree Narayana Institute of Medical Sciences, Chalakka - Kuthiathode Rd, North Kuthiathode, Kunnukara, Kerala, 683594, India
3.
Lama N, Stanley RJ, Lama B, Maurya A, Nambisan A, Hagerty J, Phan T, Van Stoecker W. LAMA: Lesion-Aware Mixup Augmentation for Skin Lesion Segmentation. Journal of Imaging Informatics in Medicine 2024. [PMID: 38409610] [DOI: 10.1007/s10278-024-01000-5]
Abstract
Deep learning can exceed dermatologists' diagnostic accuracy in experimental image environments. However, current methods can segment images with multiple skin lesions inaccurately, so information present in multiple-lesion images, available to specialists, is not retrievable by machine learning. While skin lesion images generally capture a single lesion, there may be cases in which a patient's skin variations are identified as skin lesions, leading to multiple false-positive segmentations in a single image. Conversely, image segmentation methods may find only one region and fail to capture multiple lesions in an image. To remedy these problems, we propose a novel and effective data augmentation technique for skin lesion segmentation in dermoscopic images with multiple lesions. The lesion-aware mixup augmentation (LAMA) method generates a synthetic multi-lesion image by mixing two or more lesion images from the training set. We used the publicly available International Skin Imaging Collaboration (ISIC) 2017 Challenge skin lesion segmentation dataset to train a deep neural network with the proposed LAMA method. As none of the previous skin lesion datasets (including ISIC 2017) considered multiple lesions per image, we created a new multi-lesion (MuLe) segmentation dataset from publicly available ISIC 2020 skin lesion images with multiple lesions per image. MuLe was used as a test set to evaluate the effectiveness of the proposed method. Our test results show that the proposed method improved the Jaccard score by 8.3% (from 0.687 to 0.744) and the Dice score by 5% (from 0.7923 to 0.8321) over a baseline model on MuLe test images. On the single-lesion ISIC 2017 test images, LAMA improved the baseline model's Jaccard score by 0.8% (from 0.7947 to 0.8013) and its Dice score by 0.6% (from 0.8714 to 0.8766). The experimental results showed that LAMA improved the segmentation accuracy on both single-lesion and multi-lesion dermoscopic images. The proposed LAMA technique warrants further study.
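The Jaccard and Dice scores reported above are standard overlap metrics between a predicted and a ground-truth segmentation mask. A minimal sketch of how they are computed (our own helper on toy masks, not the authors' code):

```python
import numpy as np

def jaccard_dice(pred, truth):
    """Jaccard (IoU) and Dice overlap between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    jaccard = inter / union if union else 1.0
    # Dice counts the intersection twice: D = 2|A∩B| / (|A| + |B|) = 2J / (1 + J)
    denom = pred.sum() + truth.sum()
    dice = 2 * inter / denom if denom else 1.0
    return float(jaccard), float(dice)

# Toy 2x3 masks: 2 overlapping pixels, 4 pixels in the union -> J = 0.5, D = 2/3
j, d = jaccard_dice(np.array([[1, 1, 0], [0, 1, 0]]),
                    np.array([[1, 0, 0], [0, 1, 1]]))
```

Note the fixed relation D = 2J/(1+J), which is why the paper's Jaccard and Dice improvements move together.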
Affiliation(s)
- Norsang Lama
- Missouri University of Science & Technology, Rolla, MO, 65409, USA
- Akanksha Maurya
- Missouri University of Science & Technology, Rolla, MO, 65409, USA
- Anand Nambisan
- Missouri University of Science & Technology, Rolla, MO, 65409, USA
- Thanh Phan
- Missouri University of Science & Technology, Rolla, MO, 65409, USA
4.
Malik S, Zaheer S. ChatGPT as an aid for pathological diagnosis of cancer. Pathol Res Pract 2024; 253:154989. [PMID: 38056135] [DOI: 10.1016/j.prp.2023.154989]
Abstract
Diagnostic workup of cancer patients is highly reliant on the science of pathology, using cytopathology, histopathology, and other ancillary techniques such as immunohistochemistry and molecular cytogenetics. Data processing and learning by means of artificial intelligence (AI) has become a spearhead for the advancement of medicine, with pathology and laboratory medicine being no exceptions. ChatGPT, an AI-based chatbot recently launched by OpenAI, is currently the talk of the town, and its role in cancer diagnosis is also being explored meticulously. Pathology workflows that integrate digital slides, advanced algorithms, and computer-aided diagnostic techniques extend the frontiers of the pathologist's view beyond the microscopic slide and enable effective integration, assimilation, and utilization of knowledge beyond human limits and boundaries. Despite its numerous advantages in the pathological diagnosis of cancer, it comes with several challenges, such as the integration of digital slides with input language parameters, problems of bias, and legal issues, which must be addressed soon so that we as pathologists diagnosing malignancies are on the same bandwagon and don't miss the train.
Affiliation(s)
- Shaivy Malik
- Department of Pathology, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, India
- Sufian Zaheer
- Department of Pathology, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, India
5.
Tavolara TE, Su Z, Gurcan MN, Niazi MKK. One label is all you need: Interpretable AI-enhanced histopathology for oncology. Semin Cancer Biol 2023; 97:70-85. [PMID: 37832751] [DOI: 10.1016/j.semcancer.2023.09.006]
Abstract
Artificial Intelligence (AI)-enhanced histopathology presents unprecedented opportunities to benefit oncology through interpretable methods that require only one overall label per hematoxylin and eosin (H&E) slide, with no tissue-level annotations. We present a structured review of these methods, organized by their degree of verifiability and by commonly recurring application areas in oncological characterization. First, we discuss morphological markers (tumor presence/absence, metastases, subtypes, grades), for which AI-identified regions of interest (ROIs) within whole slide images (WSIs) verifiably overlap with pathologist-identified ROIs. Second, we discuss molecular markers (gene expression, molecular subtyping), which are not verified via H&E but rather based on overlap with positive regions on adjacent tissue. Third, we discuss genetic markers (mutations, mutational burden, microsatellite instability, chromosomal instability), for which current technologies cannot verify whether AI methods spatially resolve specific genetic alterations. Fourth, we discuss the direct prediction of survival, with which AI-identified histopathological features quantitatively correlate but which is nonetheless not mechanistically verifiable. Finally, we discuss in detail several opportunities and challenges for these one-label-per-slide methods within oncology. Opportunities include reducing the cost of research and clinical care, reducing the workload of clinicians, personalized medicine, and unlocking the full potential of histopathology through new imaging-based biomarkers. Current challenges include explainability and interpretability, validation via adjacent tissue sections, reproducibility, data availability, computational needs, data requirements, domain adaptability, external validation, dataset imbalances, and finally commercialization and clinical potential. Ultimately, the relative ease and minimal upfront cost with which relevant data can be collected, together with the plethora of available AI methods for outcome-driven analysis, will surmount these current limitations and unlock the innumerable opportunities of AI-driven histopathology for the benefit of oncology.
Affiliation(s)
- Thomas E Tavolara
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Ziyu Su
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Metin N Gurcan
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- M Khalid Khan Niazi
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
6.
Haghofer A, Fuchs-Baumgartinger A, Lipnik K, Klopfleisch R, Aubreville M, Scharinger J, Weissenböck H, Winkler SM, Bertram CA. Histological classification of canine and feline lymphoma using a modular approach based on deep learning and advanced image processing. Sci Rep 2023; 13:19436. [PMID: 37945699] [PMCID: PMC10636139] [DOI: 10.1038/s41598-023-46607-w]
Abstract
Histopathological examination of tissue samples is essential for identifying tumor malignancy and diagnosing different types of tumors. In lymphoma classification, the nuclear size of the neoplastic lymphocytes is one of the key features for differentiating the subtypes. Based on the combination of artificial intelligence and advanced image processing, we provide a workflow for the classification of lymphoma with regard to nuclear size (small, intermediate, and large). As the baseline for our workflow testing, we use a Unet++ model trained on histological images of canine lymphoma with individually labeled nuclei. As an alternative to the Unet++, we also used a publicly available, pre-trained, and unmodified instance segmentation model called Stardist to demonstrate that our modular classification workflow can be combined with different types of segmentation models, provided they deliver proper nuclei segmentation. Subsequent to nuclear segmentation, we optimize algorithmic parameters for accurate classification of nuclear size using a newly derived reference size, with final image classification based on a pathologist-derived ground truth. Our image classification module achieves a classification accuracy of up to 92% on canine lymphoma data. Compared to the accuracy of 66.67% to 84% achieved using measurements provided by three individual pathologists, our algorithm provides a higher accuracy level and reproducible results. Our workflow also demonstrates high transferability to feline lymphoma, as shown by its accuracy of up to 84.21%, even though it was not optimized for feline lymphoma images. By determining the nuclear size distribution in tumor areas, our workflow can assist pathologists in subtyping lymphoma based on nuclear size and potentially improve reproducibility. Our proposed approach is modular and comprehensible, thus allowing adaptation for specific tasks and increasing users' trust in computer-assisted image classification.
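The size-classification step described above can be illustrated with a simple thresholding rule on each nucleus's area relative to a reference size. The cutoff ratios below are hypothetical placeholders, not the optimized parameters from the paper:

```python
def classify_nuclear_size(area, ref_area, small_cutoff=0.8, large_cutoff=1.3):
    """Label a nucleus small/intermediate/large by its area relative to a reference.

    The cutoff ratios here are illustrative; the paper optimizes its parameters
    against a pathologist-derived ground truth.
    """
    ratio = area / ref_area
    if ratio < small_cutoff:
        return "small"
    if ratio <= large_cutoff:
        return "intermediate"
    return "large"

# Three segmented nuclei (areas in px^2) binned against a reference area of 100
labels = [classify_nuclear_size(a, ref_area=100.0) for a in (40.0, 100.0, 220.0)]
# -> ["small", "intermediate", "large"]
```

In the actual workflow, per-nucleus areas would come from the Unet++ or Stardist segmentation masks, and the distribution of labels over a tumor area drives the final image-level classification.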
Affiliation(s)
- Andreas Haghofer
- Bioinformatics Research Group, University of Applied Sciences Upper Austria, Softwarepark 11-13, 4232, Hagenberg, Austria
- Department of Computer Science, Johannes Kepler University, Altenberger Straße 69, 4040, Linz, Austria
- Andrea Fuchs-Baumgartinger
- Institute of Pathology, University of Veterinary Medicine Vienna, Veterinärplatz 1, 1210, Vienna, Austria
- Karoline Lipnik
- Institute of Pathology, University of Veterinary Medicine Vienna, Veterinärplatz 1, 1210, Vienna, Austria
- Robert Klopfleisch
- Institute of Veterinary Pathology, Freie Universität Berlin, Robert-von-Ostertag-Str. 15, 14163, Berlin, Germany
- Marc Aubreville
- Technische Hochschule Ingolstadt, Esplanade 10, 85049, Ingolstadt, Germany
- Josef Scharinger
- Institute of Computational Perception, Johannes Kepler University, Altenberger Straße 69, 4040, Linz, Austria
- Herbert Weissenböck
- Institute of Pathology, University of Veterinary Medicine Vienna, Veterinärplatz 1, 1210, Vienna, Austria
- Stephan M Winkler
- Bioinformatics Research Group, University of Applied Sciences Upper Austria, Softwarepark 11-13, 4232, Hagenberg, Austria
- Department of Computer Science, Johannes Kepler University, Altenberger Straße 69, 4040, Linz, Austria
- Christof A Bertram
- Institute of Pathology, University of Veterinary Medicine Vienna, Veterinärplatz 1, 1210, Vienna, Austria
7.
Shafi S, Parwani AV. Artificial intelligence in diagnostic pathology. Diagn Pathol 2023; 18:109. [PMID: 37784122] [PMCID: PMC10546747] [DOI: 10.1186/s13000-023-01375-z]
Abstract
Digital pathology (DP) is being increasingly employed in cancer diagnostics, providing additional tools for faster, higher-quality, accurate diagnosis. The practice of diagnostic pathology has gone through a staggering transformation wherein new tools such as digital imaging, advanced artificial intelligence (AI) algorithms, and computer-aided diagnostic techniques are being used to assist, augment, and empower computational histopathology and AI-enabled diagnostics. This is paving the way for advances in precision medicine in cancer. Automated whole slide imaging (WSI) scanners now render diagnostic-quality, high-resolution images of entire glass slides, and combining these images with innovative digital pathology tools makes it possible to integrate imaging into all aspects of pathology reporting, including anatomical, clinical, and molecular pathology. The recent FDA approvals of WSI scanners for primary diagnosis, as well as the approval of a prostate AI algorithm, have paved the way for incorporating this exciting technology into primary diagnosis. AI tools can provide a unique platform for innovations and advances in anatomical and clinical pathology workflows. In this review, we describe the milestones and landmark trials in the use of AI in clinical pathology, with emphasis on future directions.
Affiliation(s)
- Saba Shafi
- Department of Pathology, The Ohio State University Wexner Medical Center, E409 Doan Hall, 410 West 10th Ave, Columbus, OH, 43210, USA
- Anil V Parwani
- Department of Pathology, The Ohio State University Wexner Medical Center, E409 Doan Hall, 410 West 10th Ave, Columbus, OH, 43210, USA
8.
Wang CW, Chu KL, Muzakky H, Lin YJ, Chao TK. Efficient Convolution Network to Assist Breast Cancer Diagnosis and Target Therapy. Cancers (Basel) 2023; 15:3991. [PMID: 37568809] [PMCID: PMC10416960] [DOI: 10.3390/cancers15153991]
Abstract
Breast cancer is the leading cause of cancer-related deaths among women worldwide, and early detection and treatment have been shown to significantly reduce fatality rates from severe illness. Moreover, determination of human epidermal growth factor receptor 2 (HER2) gene amplification by fluorescence in situ hybridization (FISH) and dual in situ hybridization (DISH) is critical for selecting appropriate breast cancer patients for HER2-targeted therapy. However, visual examination of microscopy is time-consuming, subjective, and poorly reproducible due to high inter-observer variability among pathologists and cytopathologists. The lack of consistency in identifying carcinoma-like nuclei has led to divergences in the calculation of sensitivity and specificity. This manuscript introduces a highly efficient deep learning method with low computing cost. The experimental results demonstrate that the proposed framework achieves high precision and recall on three essential clinical applications: breast cancer diagnosis and HER2 amplification detection on FISH and DISH slides for HER2-targeted therapy. Furthermore, the proposed method outperforms the majority of the benchmark methods in terms of IoU by a significant margin (p<0.001) on these applications. Importantly, runtime analysis shows that the proposed method obtains excellent segmentation results with notably reduced time for artificial intelligence (AI) training (16.93%), AI inference (17.25%), and memory usage (18.52%), making the proposed framework feasible for practical clinical usage.
Affiliation(s)
- Ching-Wei Wang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Kai-Lin Chu
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Hikam Muzakky
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Yi-Jia Lin
- Department of Pathology, Tri-Service General Hospital, Taipei 11490, Taiwan
- Institute of Pathology and Parasitology, National Defense Medical Center, Taipei 11490, Taiwan
- Tai-Kuang Chao
- Department of Pathology, Tri-Service General Hospital, Taipei 11490, Taiwan
- Institute of Pathology and Parasitology, National Defense Medical Center, Taipei 11490, Taiwan
9.
Lama N, Hagerty J, Nambisan A, Stanley RJ, Van Stoecker W. Skin Lesion Segmentation in Dermoscopic Images with Noisy Data. J Digit Imaging 2023; 36:1712-1722. [PMID: 37020149] [PMCID: PMC10407008] [DOI: 10.1007/s10278-023-00819-8]
Abstract
We propose a deep learning approach to segment the skin lesion in dermoscopic images. The proposed network architecture uses a pretrained EfficientNet model in the encoder and squeeze-and-excitation residual structures in the decoder. We applied this approach to the publicly available International Skin Imaging Collaboration (ISIC) 2017 Challenge skin lesion segmentation dataset. This benchmark dataset has been widely used in previous studies, yet we observed many inaccurate or noisy ground truth labels. To reduce the noise, we manually sorted all ground truth labels into three categories: good, mildly noisy, and noisy labels. Furthermore, we investigated the effect of such noisy labels in training and test sets. Our test results show that the proposed method achieved Jaccard scores of 0.807 on the official ISIC 2017 test set and 0.832 on the curated ISIC 2017 test set, exhibiting better performance than previously reported methods. Furthermore, the experimental results showed that noisy labels in the training set did not lower the segmentation performance, whereas noisy labels in the test set adversely affected the evaluation scores. We recommend that noisy labels be avoided in test sets in future studies for accurate evaluation of segmentation algorithms.
Affiliation(s)
- Norsang Lama
- Missouri University of Science & Technology, Rolla, MO, 65409, USA
- Anand Nambisan
- Missouri University of Science & Technology, Rolla, MO, 65409, USA
10.
Moldovanu S, Miron M, Rusu CG, Biswas KC, Moraru L. Refining skin lesions classification performance using geometric features of superpixels. Sci Rep 2023; 13:11463. [PMID: 37454166] [PMCID: PMC10349833] [DOI: 10.1038/s41598-023-38706-5]
Abstract
This paper introduces superpixels (SPs) to enhance the detection of skin lesions and to discriminate between melanoma and nevi, without false negatives, in dermoscopy images. An improved Simple Linear Iterative Clustering (iSLIC) superpixel algorithm for image segmentation is proposed. The local graph cut method is adopted to identify the region of interest (i.e., either nevi or melanoma lesions). The iSLIC algorithm is then exploited to segment SPs; it discards all SPs belonging to the image background based on assigned labels and preserves the segmented skin lesions. A shape and geometric feature extraction task is performed for each segmented SP. The extracted features are fed into six machine learning algorithms (random forest, support vector machines, AdaBoost, k-nearest neighbor, decision trees, and Gaussian Naïve Bayes) and three neural networks (a pattern recognition neural network, a feed-forward neural network, and a 1D convolutional neural network) for classification. The method is evaluated on the 7-Point, MED-NODE, and PAD-UFES-20 datasets, and the results are compared to state-of-the-art findings. Extensive experiments show that the proposed method outperforms the compared existing methods in terms of accuracy.
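Two geometric descriptors commonly extracted per superpixel, circularity and equivalent diameter, can be sketched as follows. This is an illustrative subset, not the paper's exact feature set, and the function name is our own:

```python
import math

def shape_features(area, perimeter):
    """Basic shape descriptors for one segmented superpixel region."""
    return {
        "area": area,
        "perimeter": perimeter,
        # 4*pi*A / P^2 equals 1.0 for a perfect circle, less for ragged shapes
        "circularity": 4 * math.pi * area / perimeter ** 2,
        # diameter of the circle with the same area as the region
        "equiv_diameter": 2 * math.sqrt(area / math.pi),
    }

# A unit circle (A = pi, P = 2*pi) gives circularity 1.0 and equivalent diameter 2.0
unit_circle = shape_features(area=math.pi, perimeter=2 * math.pi)
```

In practice, per-region area and perimeter would be measured from the iSLIC label map, and the resulting feature vectors fed to the classifiers listed above.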
Affiliation(s)
- Simona Moldovanu
- Department of Computer Science and Information Technology, Faculty of Automation, Computers, Electrical Engineering and Electronics, Dunarea de Jos University of Galati, 47 Domneasca Str., 800008, Galati, Romania
- The Modelling and Simulation Laboratory, Dunarea de Jos University of Galati, 111 Domneasca Str., 800102, Galati, Romania
- Mihaela Miron
- Department of Computer Science and Information Technology, Faculty of Automation, Computers, Electrical Engineering and Electronics, Dunarea de Jos University of Galati, 47 Domneasca Str., 800008, Galati, Romania
- Cristinel-Gabriel Rusu
- The Modelling and Simulation Laboratory, Dunarea de Jos University of Galati, 111 Domneasca Str., 800102, Galati, Romania
- Iorgu Iordan Secondary School, 125, 1 Decembrie 1918 Street, 805300, Tecuci, Romania
- Keka C Biswas
- Department of Biological Sciences, University of Alabama at Huntsville, Huntsville, AL, 35899, USA
- Luminita Moraru
- The Modelling and Simulation Laboratory, Dunarea de Jos University of Galati, 111 Domneasca Str., 800102, Galati, Romania
- Department of Chemistry, Physics and Environment, Faculty of Sciences and Environment, Dunarea de Jos University of Galati, 47 Domneasca Street, 800008, Galati, Romania
11.
Wang CW, Khalil MA, Lin YJ, Lee YC, Chao TK. Detection of ERBB2 and CEN17 signals in fluorescent in situ hybridization and dual in situ hybridization for guiding breast cancer HER2 target therapy. Artif Intell Med 2023; 141:102568. [PMID: 37295903] [DOI: 10.1016/j.artmed.2023.102568]
Abstract
The overexpression of the human epidermal growth factor receptor 2 (HER2) is a predictive biomarker of therapeutic effect in metastatic breast cancer. Accurate HER2 testing is critical for determining the most suitable treatment for patients. Fluorescent in situ hybridization (FISH) and dual in situ hybridization (DISH) are FDA-approved methods to determine HER2 overexpression. However, analysis of HER2 overexpression is challenging. First, the boundaries of cells are often unclear and blurry, with large variations in cell shapes and signals, making it challenging to identify the precise areas of HER2-related cells. Second, the use of sparsely labeled data, where some unlabeled HER2-related cells are classified as background, can significantly confuse fully supervised AI learning and result in unsatisfactory model outcomes. In this study, we present a weakly supervised Cascade R-CNN (W-CRCNN) model to automatically detect HER2 overexpression in HER2 DISH and FISH images acquired from clinical breast cancer samples. The experimental results demonstrate that the proposed W-CRCNN achieves excellent results in identifying HER2 amplification in three datasets: two DISH datasets and one FISH dataset. For the FISH dataset, W-CRCNN achieves an accuracy of 0.970±0.022, precision of 0.974±0.028, recall of 0.917±0.065, F1-score of 0.943±0.042, and Jaccard index of 0.899±0.073. For the DISH datasets, it achieves an accuracy of 0.971±0.024, precision of 0.969±0.015, recall of 0.925±0.020, F1-score of 0.947±0.036, and Jaccard index of 0.884±0.103 on dataset 1, and an accuracy of 0.978±0.011, precision of 0.975±0.011, recall of 0.918±0.038, F1-score of 0.946±0.030, and Jaccard index of 0.884±0.052 on dataset 2. In comparison with the benchmark methods, the proposed W-CRCNN significantly outperforms all benchmark approaches in identifying HER2 overexpression in the FISH and DISH datasets (p<0.05). With this high degree of accuracy, precision, and recall, the results show that the proposed method for assessing HER2 overexpression in DISH analysis has significant potential to assist precision medicine for breast cancer patients.
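The detection metrics reported above follow the usual definitions from true-positive (TP), false-positive (FP), and false-negative (FN) counts. A minimal sketch (our own helper, not the authors' evaluation code):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, F1-score, and Jaccard index from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    # Jaccard index over detections: TP / (TP + FP + FN)
    jaccard = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return {"precision": precision, "recall": recall, "f1": f1, "jaccard": jaccard}

# 8 correct detections, 2 spurious, 2 missed -> precision = recall = F1 = 0.8
m = detection_metrics(tp=8, fp=2, fn=2)
```

Note that the Jaccard index is always the strictest of the four, since both FP and FN appear in its denominator.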
Affiliation(s)
- Ching-Wei Wang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Graduate Institute of Applied Science and Technology, National Taiwan University of Science and Technology, Taipei, Taiwan
- Muhammad-Adil Khalil
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Yi-Jia Lin
- Department of Pathology, Tri-Service General Hospital, Taipei, Taiwan
- Institute of Pathology and Parasitology, National Defense Medical Center, Taipei, Taiwan
- Yu-Ching Lee
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Tai-Kuang Chao
- Department of Pathology, Tri-Service General Hospital, Taipei, Taiwan
- Institute of Pathology and Parasitology, National Defense Medical Center, Taipei, Taiwan
12.
Tavolara TE, Gurcan MN, Niazi MKK. Contrastive Multiple Instance Learning: An Unsupervised Framework for Learning Slide-Level Representations of Whole Slide Histopathology Images without Labels. Cancers (Basel) 2022; 14:cancers14235778. [PMID: 36497258] [PMCID: PMC9738801] [DOI: 10.3390/cancers14235778]
Abstract
Recent methods in computational pathology have trended towards semi- and weakly-supervised methods requiring only slide-level labels. Yet even slide-level labels may be absent or irrelevant to the application of interest, such as in clinical trials. Hence, we present a fully unsupervised method to learn meaningful, compact representations of WSIs. Our method initially trains a tile-wise encoder using SimCLR, from which subsets of tile-wise embeddings are extracted and fused via an attention-based multiple-instance learning framework to yield slide-level representations. The resulting intra-slide-level and inter-slide-level embeddings are attracted and repelled, respectively, via a contrastive loss, producing slide-level representations with self-supervision. We applied our method to two tasks, (1) non-small cell lung cancer (NSCLC) subtyping as a classification prototype and (2) breast cancer proliferation scoring (TUPAC16) as a regression prototype, and achieved an AUC of 0.8641 ± 0.0115 and a correlation (R2) of 0.5740 ± 0.0970, respectively. Ablation experiments demonstrate that the resulting unsupervised slide-level feature space can be fine-tuned with small datasets for both tasks. Overall, our method approaches computational pathology in a novel manner, whereby meaningful features can be learned from whole-slide images without the need for annotations or slide-level labels. The proposed method stands to benefit computational pathology, as it theoretically enables researchers to benefit from completely unlabeled whole-slide images.
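The attention-based fusion of tile embeddings into a single slide-level vector can be sketched in NumPy, following the common attention-MIL recipe: score each tile, softmax the scores into weights, and take the weighted sum. The parameter matrices `V` and `w` below are illustrative stand-ins for weights that the real method learns end-to-end:

```python
import numpy as np

def attention_mil_pool(tiles, V, w):
    """Fuse tile embeddings (n_tiles, d) into one slide-level vector.

    a_i = softmax_i( w^T tanh(V h_i) );  slide = sum_i a_i * h_i
    """
    scores = np.tanh(tiles @ V.T) @ w      # one attention score per tile
    a = np.exp(scores - scores.max())      # numerically stable softmax
    a /= a.sum()                           # attention weights sum to 1
    return a, a @ tiles                    # weights and (d,) slide embedding

# Four identical tile embeddings -> uniform attention, pooled vector = the tile
tiles = np.ones((4, 3))
weights, slide = attention_mil_pool(tiles, V=np.ones((2, 3)), w=np.ones(2))
```

In the paper's framework, the resulting slide vectors are then pushed together (intra-slide) and apart (inter-slide) by the contrastive loss.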
Collapse
|
13
|
Yang L, Martin JA, Brouillette MJ, Buckwalter JA, Goetz JE. Objective evaluation of chondrocyte density & cloning after joint injury using convolutional neural networks. J Orthop Res 2022; 40:2609-2619. [PMID: 35171527 PMCID: PMC9378771 DOI: 10.1002/jor.25295] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Revised: 12/01/2021] [Accepted: 02/02/2022] [Indexed: 02/04/2023]
Abstract
Variations in chondrocyte density and organization in cartilage histology sections are associated with osteoarthritis progression. Rapid, accurate quantification of these two features can facilitate the evaluation of cartilage health and advance understanding of their significance. The goal of this work was to adapt deep-learning-based methods to detect articular chondrocytes and chondrocyte clones in safranin-O-stained cartilage and thereby evaluate chondrocyte cellularity and organization. The U-Net and "you-only-look-once" (YOLO) models were trained and validated for identifying chondrocytes and chondrocyte clones, respectively. The validated models were then used to quantify chondrocyte and clone density in talar cartilage from Yucatan minipigs sacrificed 1 week, 3, 6, and 12 months after fixation of an intra-articular fracture of the hock joint. There was excellent/good agreement between expert researchers and the developed models in identifying chondrocytes/clones (U-Net: R2 = 0.93, y = 0.90x - 0.69, median F1 score 0.87; YOLO: R2 = 0.79, y = 0.95x, median F1 score 0.67). Average chondrocyte density increased 1 week after fracture (from 774 to 856 cells/mm²), decreased substantially 3 months after fracture (610 cells/mm²), and slowly increased 6 and 12 months after fracture (638 and 683 cells/mm², respectively). Average detected clone density 3, 6, and 12 months after fracture (11, 11, and 9 clones/mm²) was higher than the 4-5 clones/mm² detected in normal tissue or 1 week after fracture, with local increases in clone density that varied across the joint surface over time. The accurate evaluation of cartilage cellularity and organization provided by this deep learning approach will increase the objectivity of cartilage injury and regeneration assessments.
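The F1 agreement scores reported above come from matching model detections to expert annotations. A common way to do this, sketched below under stated assumptions (greedy one-to-one matching within a distance threshold; the paper's exact matching protocol and `max_dist` value are not specified here), is:

```python
import math

def detection_f1(pred, truth, max_dist=10.0):
    # pred, truth: lists of (x, y) centroids; max_dist: match radius in pixels
    # Greedily pair closest predicted/true centroids within max_dist.
    pairs = sorted(
        (math.dist(p, t), i, j)
        for i, p in enumerate(pred)
        for j, t in enumerate(truth)
        if math.dist(p, t) <= max_dist
    )
    used_p, used_t = set(), set()
    tp = 0
    for _, i, j in pairs:
        if i not in used_p and j not in used_t:
            used_p.add(i); used_t.add(j); tp += 1
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = detection_f1([(0, 0), (5, 5), (100, 100)], [(1, 0), (6, 5)])
```

With two of three detections matched, this toy case gives precision 2/3, recall 1.0, and F1 0.8.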
Collapse
Affiliation(s)
- Linjun Yang
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA; Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
| | - James A. Martin
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA; Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
| | - Marc J. Brouillette
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA
| | | | - Jessica E. Goetz
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, Iowa, USA; Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
| |
Collapse
|
14
|
A Soft Label Deep Learning to Assist Breast Cancer Target Therapy and Thyroid Cancer Diagnosis. Cancers (Basel) 2022; 14:cancers14215312. [PMID: 36358732 PMCID: PMC9657740 DOI: 10.3390/cancers14215312] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Revised: 10/20/2022] [Accepted: 10/25/2022] [Indexed: 11/17/2022] Open
Abstract
According to the World Health Organization Report 2022, cancer is the most common cause of death, contributing to nearly one out of six deaths worldwide. Early cancer diagnosis and prognosis have become essential in reducing the mortality rate. On the other hand, cancer detection is a challenging task in cancer pathology. Trained pathologists can detect cancer, but their decisions are subject to high intra- and inter-observer variability, which can lead to poor patient care owing to false-positive and false-negative results. In this study, we present a soft label fully convolutional network (SL-FCN) to assist in breast cancer target therapy and thyroid cancer diagnosis, using four datasets. To aid in breast cancer target therapy, the proposed method automatically segments human epidermal growth factor receptor 2 (HER2) amplification in fluorescence in situ hybridization (FISH) and dual in situ hybridization (DISH) images. To help in thyroid cancer diagnosis, the proposed method automatically segments papillary thyroid carcinoma (PTC) on Papanicolaou-stained fine needle aspiration and ThinPrep whole slide images (WSIs). In the evaluation of segmentation of HER2 amplification in FISH and DISH images, we compare the proposed method with thirteen deep learning approaches, including U-Net, U-Net with InceptionV5, Ensemble of U-Net with Inception-v4, Inception-Resnet-v2 encoder, and ResNet-34 encoder, SegNet, FCN, modified FCN, YOLOv5, CPN, SOLOv2, BCNet, and DeepLabv3+ with three different backbones (MobileNet, ResNet, and Xception), on three clinical datasets: two DISH datasets at two different magnification levels and a FISH dataset.
On DISH breast dataset 1, the proposed method achieves an accuracy of 87.77 ± 14.97%, recall of 91.20 ± 7.72%, and F1-score of 81.67 ± 17.76%. On DISH breast dataset 2, it achieves an accuracy of 94.64 ± 2.23%, recall of 83.78 ± 6.42%, and F1-score of 85.14 ± 6.61%, and on the FISH breast dataset an accuracy of 93.54 ± 5.24%, recall of 83.52 ± 13.15%, and F1-score of 86.98 ± 9.85%. Furthermore, the proposed method outperforms most of the benchmark approaches by a significant margin (p < 0.001). In the evaluation of segmentation of PTC on Papanicolaou-stained WSIs, the proposed method is compared with three deep learning methods: modified FCN, U-Net, and SegNet. The experimental results demonstrate that the proposed method achieves an accuracy of 99.99 ± 0.01%, precision of 92.02 ± 16.6%, recall of 90.90 ± 14.25%, and F1-score of 89.82 ± 14.92% and significantly outperforms the baseline methods, including U-Net and FCN (p < 0.001). With this high degree of accuracy, precision, and recall, the results show that the proposed method could assist breast cancer target therapy and thyroid cancer diagnosis with faster evaluation while minimizing human judgment errors.
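The accuracy, recall, and F1 figures above are pixel-wise segmentation metrics. A minimal sketch of how such metrics are computed from a predicted and a ground-truth binary mask (a generic helper, not the authors' evaluation code):

```python
def pixel_metrics(pred, truth):
    # pred, truth: same-sized binary masks as lists of 0/1 rows
    tp = fp = fn = tn = 0
    for prow, trow in zip(pred, truth):
        for p, t in zip(prow, trow):
            if p and t:
                tp += 1
            elif p and not t:
                fp += 1
            elif t:
                fn += 1
            else:
                tn += 1
    acc = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return acc, recall, f1

acc, rec, f1 = pixel_metrics([[1, 1], [0, 0]], [[1, 0], [0, 1]])
```

In this 2x2 toy mask there is one each of TP, FP, FN, and TN, so all three metrics come out to 0.5.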
Collapse
|
15
|
NDG-CAM: Nuclei Detection in Histopathology Images with Semantic Segmentation Networks and Grad-CAM. Bioengineering (Basel) 2022; 9:bioengineering9090475. [PMID: 36135021 PMCID: PMC9495364 DOI: 10.3390/bioengineering9090475] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Revised: 09/07/2022] [Accepted: 09/13/2022] [Indexed: 11/17/2022] Open
Abstract
Nuclei identification is a fundamental task in many areas of biomedical image analysis related to computational pathology applications. Nowadays, deep learning is the primary approach for segmenting nuclei, but accuracy is closely linked to the amount of histological ground truth data available for training. In addition, most hematoxylin and eosin (H&E)-stained microscopy nuclei images contain complex and irregular visual characteristics. Moreover, conventional semantic segmentation architectures based on convolutional neural networks (CNNs) are unable to separate overlapping and clustered nuclei. To overcome these problems, we present an innovative method based on gradient-weighted class activation mapping (Grad-CAM) saliency maps for image segmentation. The proposed solution comprises two steps. The first is semantic segmentation with a CNN; the detection step is then based on calculating local maxima of the Grad-CAM analysis evaluated on the nucleus class, which determines the positions of the nuclei centroids. This approach, which we denote NDG-CAM, performs in line with state-of-the-art methods, especially in isolating distinct nuclei instances, and can be generalized to different organs and tissues. Experimental results demonstrated a precision of 0.833, recall of 0.815, and a Dice coefficient of 0.824 on the publicly available validation set. When used in combined mode with instance segmentation architectures such as Mask R-CNN, the method surpasses state-of-the-art approaches, with a precision of 0.838, recall of 0.934, and a Dice coefficient of 0.884. Furthermore, performance on the external, locally collected validation set, with a Dice coefficient of 0.914 for the combined model, shows the generalization capability of the implemented pipeline, which detects nuclei related not only to tumor or normal epithelium but also to other cytotypes.
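The centroid-detection step, finding local maxima of a saliency map above a threshold, can be sketched as follows. This is a simplified stand-in for the paper's Grad-CAM-based detection (strict 8-neighbourhood maxima; the threshold value is an illustrative assumption):

```python
def local_maxima(heatmap, threshold=0.5):
    # heatmap: 2D list of saliency values in [0, 1]
    # Returns (row, col) positions that are strict local maxima above threshold.
    H, W = len(heatmap), len(heatmap[0])
    peaks = []
    for r in range(H):
        for c in range(W):
            v = heatmap[r][c]
            if v < threshold:
                continue
            neighbours = [heatmap[rr][cc]
                          for rr in range(max(0, r - 1), min(H, r + 2))
                          for cc in range(max(0, c - 1), min(W, c + 2))
                          if (rr, cc) != (r, c)]
            if all(v > n for n in neighbours):
                peaks.append((r, c))
    return peaks

h = [[0.1, 0.2, 0.1, 0.0],
     [0.2, 0.9, 0.2, 0.0],
     [0.1, 0.2, 0.1, 0.7],
     [0.0, 0.0, 0.6, 0.65]]
peaks = local_maxima(h)
```

The two bright blobs yield exactly two centroids even though the second blob has adjacent high values, which is the behaviour that lets the approach split clustered nuclei.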
Collapse
|
16
|
Wilm F, Benz M, Bruns V, Baghdadlian S, Dexl J, Hartmann D, Kuritcyn P, Weidenfeller M, Wittenberg T, Merkel S, Hartmann A, Eckstein M, Geppert CI. Fast whole-slide cartography in colon cancer histology using superpixels and CNN classification. J Med Imaging (Bellingham) 2022; 9:027501. [PMID: 35300344 PMCID: PMC8920491 DOI: 10.1117/1.jmi.9.2.027501] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2021] [Accepted: 02/17/2022] [Indexed: 11/14/2022] Open
Abstract
Purpose: Automatic outlining of different tissue types in digitized histological specimens provides a basis for follow-up analyses and can potentially guide subsequent medical decisions. The immense size of whole-slide images (WSIs), however, poses a challenge in terms of computation time. In this regard, the analysis of nonoverlapping patches outperforms pixelwise segmentation approaches but still leaves room for optimization. Furthermore, dividing the image into patches regardless of the biological structures they contain loses local dependencies. Approach: We propose to subdivide the WSI into coherent regions prior to classification by grouping visually similar adjacent pixels into superpixels. Afterward, only a random subset of patches per superpixel is classified, and the patch labels are combined into a superpixel label. We propose a metric for identifying superpixels with an uncertain classification and evaluate two medical applications, namely tumor area and invasive margin estimation and tumor composition analysis. Results: The algorithm was developed on 159 hand-annotated WSIs of colon resections, and its performance is compared with an analysis without prior segmentation. The algorithm shows an average speed-up of 41% and an increase in accuracy from 93.8% to 95.7%. By assigning a rejection label to uncertain superpixels, we further increase the accuracy by 0.4%. While tumor area estimation shows high concordance with the annotated area, the analysis of tumor composition highlights limitations of our approach. Conclusion: By combining superpixel segmentation and patch classification, we designed a fast and accurate framework for whole-slide cartography that is AI-model agnostic and provides the basis for various medical endpoints.
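The patch-to-superpixel label fusion with a rejection option can be sketched as a majority vote with an agreement threshold. This is a simplified illustration; the paper's actual uncertainty metric differs, and `min_agreement` is an assumed parameter:

```python
from collections import Counter

def superpixel_label(patch_labels, min_agreement=0.6):
    # patch_labels: predicted class per sampled patch within one superpixel.
    # Return the majority label, or "reject" when agreement is too low.
    votes = Counter(patch_labels)
    label, count = votes.most_common(1)[0]
    if count / len(patch_labels) < min_agreement:
        return "reject"
    return label
```

A confident superpixel (`["tumor", "tumor", "stroma"]`) keeps its majority label, while an evenly split one (`["tumor", "stroma"]`) is rejected rather than mislabeled.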
Collapse
Affiliation(s)
- Frauke Wilm
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany; Friedrich-Alexander-University Erlangen-Nuremberg, Department of Computer Science, Erlangen, Germany
| | - Michaela Benz
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
| | - Volker Bruns
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
| | - Serop Baghdadlian
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
| | - Jakob Dexl
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
| | - David Hartmann
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
| | - Petr Kuritcyn
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
| | - Martin Weidenfeller
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
| | - Thomas Wittenberg
- Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany; Friedrich-Alexander-University Erlangen-Nuremberg, Department of Computer Science, Erlangen, Germany
| | - Susanne Merkel
- University Hospital Erlangen, Department of Surgery, FAU Erlangen-Nuremberg, Erlangen, Germany; University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany
| | - Arndt Hartmann
- University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany; University Hospital Erlangen, Institute of Pathology, FAU Erlangen-Nuremberg, Erlangen, Germany
| | - Markus Eckstein
- University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany; University Hospital Erlangen, Institute of Pathology, FAU Erlangen-Nuremberg, Erlangen, Germany
| | - Carol Immanuel Geppert
- University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany; University Hospital Erlangen, Institute of Pathology, FAU Erlangen-Nuremberg, Erlangen, Germany
| |
Collapse
|
17
|
Cho BJ, Kim JW, Park J, Kwon GY, Hong M, Jang SH, Bang H, Kim G, Park ST. Automated Diagnosis of Cervical Intraepithelial Neoplasia in Histology Images via Deep Learning. Diagnostics (Basel) 2022; 12:diagnostics12020548. [PMID: 35204638 PMCID: PMC8871214 DOI: 10.3390/diagnostics12020548] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Revised: 02/05/2022] [Accepted: 02/17/2022] [Indexed: 02/04/2023] Open
Abstract
Artificial intelligence has enabled the automated diagnosis of several cancer types. We aimed to develop and validate deep learning models that automatically classify cervical intraepithelial neoplasia (CIN) based on histological images. Microscopic images of CIN3, CIN2, CIN1, and non-neoplasm were obtained. The performances of two pre-trained convolutional neural network (CNN) models adopting DenseNet-161 and EfficientNet-B7 architectures were evaluated and compared with those of pathologists. The dataset comprised 1106 images from 588 patients; images of 10% of patients were included in the test dataset. The mean accuracies for the four-class classification were 88.5% (95% confidence interval [CI], 86.3–90.6%) by DenseNet-161 and 89.5% (95% CI, 83.3–95.7%) by EfficientNet-B7, which were similar to human performance (93.2% and 89.7%). The mean per-class area under the receiver operating characteristic curve values by EfficientNet-B7 were 0.996, 0.990, 0.971, and 0.956 in the non-neoplasm, CIN3, CIN1, and CIN2 groups, respectively. The class activation map detected the diagnostic area for CIN lesions. In the three-class classification of CIN2 and CIN3 as one group, the mean accuracies of DenseNet-161 and EfficientNet-B7 increased to 91.4% (95% CI, 88.8–94.0%), and 92.6% (95% CI, 90.4–94.9%), respectively. CNN-based deep learning is a promising tool for diagnosing CIN lesions on digital histological images.
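The per-class AUC values reported above are one-vs-rest areas under the ROC curve. A compact stdlib sketch using the Mann-Whitney U formulation (a generic helper for illustration, not the authors' evaluation code):

```python
def auc(scores, labels):
    # scores: predicted probability of the positive class per sample
    # labels: 1 for positive, 0 for negative
    # AUC = fraction of (pos, neg) pairs ranked correctly; ties get half credit.
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For a perfect ranking the value is 1.0; one mis-ranked pair out of four gives 0.75, mirroring how values such as 0.996 and 0.956 summarize per-class separability.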
Collapse
Affiliation(s)
- Bum-Joo Cho
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang 14068, Korea;
- Department of Ophthalmology, Hallym University Sacred Heart Hospital, Hallym University College of Medicine, Anyang 14068, Korea
- Correspondence: (B.-J.C.); (J.-W.K.)
| | - Jeong-Won Kim
- Department of Pathology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul 07441, Korea;
- Correspondence: (B.-J.C.); (J.-W.K.)
| | - Jungkap Park
- Medical Artificial Intelligence Center, Hallym University Medical Center, Anyang 14068, Korea;
| | | | - Mineui Hong
- Department of Pathology, Chung-Ang University Hospital, Chung-Ang University College of Medicine, Seoul 06973, Korea;
| | - Si-Hyong Jang
- Department of Pathology, Soonchunhyang University Cheonan Hospital, Soonchunhyang University College of Medicine, Cheonan 31151, Korea;
| | - Heejin Bang
- Department of Pathology, Konkuk University Medical Center, Konkuk University School of Medicine, Seoul 05030, Korea;
| | - Gilhyang Kim
- Department of Pathology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul 07441, Korea;
| | - Sung-Taek Park
- Department of Obstetrics and Gynecology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul 07441, Korea;
| |
Collapse
|
18
|
Wu D, Hacking S, Vitkovski T, Nasim M. Superpixel image segmentation of VISTA expression in colorectal cancer and its relationship to the tumoral microenvironment. Sci Rep 2021; 11:17426. [PMID: 34465822 PMCID: PMC8408240 DOI: 10.1038/s41598-021-96417-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2020] [Accepted: 07/19/2021] [Indexed: 01/22/2023] Open
Abstract
Colorectal cancer (CRC) is the third most common cause of cancer-related death in the United States (Jasperson et al. in Gastroenterology 138:2044–2058, 10.1053/j.gastro.2010.01.054, 2010). Many studies have explored prognostic factors in CRC. Today, much focus has been placed on the tumor microenvironment, including different immune cells and the extracellular matrix (ECM). The present study aims to evaluate the role of V-domain immunoglobulin suppressor of T cell activation (VISTA). We utilized QuPath for whole-slide image analysis, performing superpixel image segmentation (SIS) on a 226-patient cohort. High VISTA expression correlated with better disease-free survival (DFS), high tumor-infiltrating lymphocyte (TIL) levels, microsatellite instability, BRAF mutational status, and lower tumor stage. High VISTA expression was also associated with mature stromal differentiation (SD). When cohorts were separated based on SD and mismatch repair (MMR) status, only patients with immature SD and microsatellite stability showed a correlation between VISTA expression and DFS. Considering that raised VISTA expression is associated with improved survival, TILs, mature SD, and MMR in CRC, careful, well-designed clinical trials should be pursued that incorporate the underlying tumoral microenvironment.
Collapse
Affiliation(s)
- Dongling Wu
- Department of Pathology and Laboratory Medicine, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
| | - Sean Hacking
- Department of Pathology and Laboratory Medicine, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
| | - Taisia Vitkovski
- Department of Pathology and Laboratory Medicine, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
| | - Mansoor Nasim
- Department of Pathology and Laboratory Medicine, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA.
| |
Collapse
|
19
|
Sornapudi S, Addanki R, Stanley RJ, Stoecker WV, Long R, Zuna R, Frazier SR, Antani S. Automated Cervical Digitized Histology Whole-Slide Image Analysis Toolbox. J Pathol Inform 2021; 12:26. [PMID: 34447606 PMCID: PMC8356709 DOI: 10.4103/jpi.jpi_52_20] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Revised: 12/09/2020] [Accepted: 02/09/2021] [Indexed: 01/14/2023] Open
Abstract
Background: Cervical intraepithelial neoplasia (CIN) is regarded as a potential precancerous state of the uterine cervix. Timely and appropriate early treatment of CIN can help reduce cervical cancer mortality. Accurate estimation of CIN grade, correlated with human papillomavirus type (the primary cause of the disease), helps determine the patient's risk of developing the disease. Colposcopy is used to select women for biopsy. Expert pathologists examine the biopsied cervical epithelial tissue under a microscope. The examination can take a long time, is prone to error, and often results in high inter- and intra-observer variability in outcomes. Methodology: We propose a novel image analysis toolbox that can automate CIN diagnosis using whole slide images (digitized biopsies) of cervical tissue samples. The toolbox is built as a four-step deep learning model that detects the epithelium regions, segments the detected epithelial portions, analyzes local vertical segment regions, and finally classifies each epithelium block with localized attention. We propose an epithelium detection network in this study and make use of our earlier research on epithelium segmentation and CIN classification to complete the design of the end-to-end CIN diagnosis toolbox. Results: The results show that automated epithelium detection and segmentation for CIN classification yields results comparable to CIN classification on manually segmented epithelium. Conclusion: This highlights the toolbox's potential for automated digitized histology slide image analysis to assist expert pathologists.
Collapse
Affiliation(s)
- Sudhir Sornapudi
- Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO, USA
| | - Ravitej Addanki
- Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO, USA
| | - R Joe Stanley
- Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO, USA
| | | | - Rodney Long
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
| | - Rosemary Zuna
- Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
| | - Shellaine R Frazier
- Department of Surgical Pathology, University of Missouri Hospitals and Clinics, Columbia, MO, USA
| | - Sameer Antani
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
| |
Collapse
|
20
|
Ali MAS, Misko O, Salumaa SO, Papkov M, Palo K, Fishman D, Parts L. Evaluating Very Deep Convolutional Neural Networks for Nucleus Segmentation from Brightfield Cell Microscopy Images. SLAS DISCOVERY 2021; 26:1125-1137. [PMID: 34167359 PMCID: PMC8458686 DOI: 10.1177/24725552211023214] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Advances in microscopy have increased output data volumes, and powerful image analysis methods are required to match. In particular, finding and characterizing nuclei from microscopy images, a core cytometry task, remains difficult to automate. While deep learning models have given encouraging results on this problem, the most powerful approaches have not yet been tested on it. Here, we review and evaluate state-of-the-art very deep convolutional neural network architectures and training strategies for segmenting nuclei from brightfield cell images. We tested U-Net as a baseline model; considered U-Net++, Tiramisu, and DeepLabv3+ as the latest instances of advanced families of segmentation models; and propose PPU-Net, a novel lightweight alternative. The deeper architectures outperformed standard U-Net and results from previous studies on the challenging brightfield images, with balanced pixel-wise accuracies of up to 86%. PPU-Net achieved this performance with 20-fold fewer parameters than the comparably accurate methods. All models perform better on larger nuclei and in sparser images. We further confirmed that in the absence of plentiful training data, augmentation and pretraining on other data improve performance. In particular, using only 16 images with data augmentation is enough to achieve a pixel-wise F1 score that is within 5% of the one achieved with a full data set for all models. The remaining segmentation errors are mainly due to missed nuclei in dense regions, overlapping cells, and imaging artifacts, indicating the major outstanding challenges.
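The geometric augmentation that makes training from only 16 images feasible can be illustrated with the eight flip/rotation variants of a square image (a generic sketch of dihedral augmentation, not the paper's specific pipeline, which may also use other transforms):

```python
def dihedral_augment(img):
    # img: square image as a list of rows.
    # Returns the 8 variants: 4 rotations x {identity, horizontal flip}.
    def rot90(m):
        # 90-degree clockwise rotation
        return [list(row) for row in zip(*m[::-1])]

    variants = []
    cur = img
    for _ in range(4):
        variants.append(cur)
        variants.append([row[::-1] for row in cur])  # horizontal flip
        cur = rot90(cur)
    return variants

vs = dihedral_augment([[1, 2], [3, 4]])
```

For a generic 2x2 image all eight variants are distinct, so each annotated training image yields eight label-preserving samples for free.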
Collapse
Affiliation(s)
- Mohammed A S Ali
- Department of Computer Science, University of Tartu, Tartu, Estonia
| | - Oleg Misko
- Ukrainian Catholic University, Lviv, Lviv Oblast, Ukraine
| | | | - Mikhail Papkov
- Department of Computer Science, University of Tartu, Tartu, Estonia
| | - Kaupo Palo
- PerkinElmer Cellular Technologies Germany GmbH, Hamburg, Germany
| | - Dmytro Fishman
- Department of Computer Science, University of Tartu, Tartu, Estonia
| | - Leopold Parts
- Department of Computer Science, University of Tartu, Tartu, Estonia; Wellcome Sanger Institute, Hinxton, Cambridgeshire, UK
| |
Collapse
|
21
|
Zhu X, Li X, Ong K, Zhang W, Li W, Li L, Young D, Su Y, Shang B, Peng L, Xiong W, Liu Y, Liao W, Xu J, Wang F, Liao Q, Li S, Liao M, Li Y, Rao L, Lin J, Shi J, You Z, Zhong W, Liang X, Han H, Zhang Y, Tang N, Hu A, Gao H, Cheng Z, Liang L, Yu W, Ding Y. Hybrid AI-assistive diagnostic model permits rapid TBS classification of cervical liquid-based thin-layer cell smears. Nat Commun 2021; 12:3541. [PMID: 34112790 PMCID: PMC8192526 DOI: 10.1038/s41467-021-23913-3] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Accepted: 05/24/2021] [Indexed: 02/05/2023] Open
Abstract
Technical advancements have significantly improved the early diagnosis of cervical cancer, but accurate diagnosis remains difficult due to various factors. We develop an artificial intelligence assistive diagnostic solution, AIATBS, to improve cervical liquid-based thin-layer cell smear diagnosis according to clinical TBS criteria. We train AIATBS with >81,000 retrospective samples. It integrates YOLOv3 for target detection, Xception and patch-based models to boost target classification, and U-Net for nucleus segmentation. We integrate XGBoost and a logical decision tree with these models to optimize the parameters given by the learning process, and we develop a complete cervical liquid-based cytology smear TBS diagnostic system that also includes a quality control solution. We validate the optimized system with >34,000 multicenter prospective samples and achieve higher sensitivity than senior cytologists while retaining high specificity at a speed of <180 s/slide. Our system is adaptive to sample preparation using different standards, staining protocols, and scanners.
Collapse
Affiliation(s)
- Xiaohui Zhu
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
| | - Xiaoming Li
- Department of Pathology, Shenzhen Bao'an People's Hospital (group), Shenzhen, Guangdong Province, PR China
| | - Kokhaur Ong
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore
- Bioinformatics Institute, A*STAR, Singapore, Singapore
| | - Wenli Zhang
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
| | - Wencai Li
- The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan Province, PR China
| | - Longjie Li
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore
| | - David Young
- Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore
| | - Yongjian Su
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Bin Shang
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Linggan Peng
- Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Wei Xiong
- Guangzhou Kaipu Biotechnology Co., Ltd, Guangzhou, Guangdong Province, PR China
| | - Yunke Liu
- Laboratory Department, Guangzhou Tianhe District Maternal and Child Health Care Hospital, Guangzhou, Guangdong Province, PR China
| | - Wenting Liao
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, PR China
| | - Jingjing Xu
- The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan Province, PR China
| | - Feifei Wang
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
| | - Qing Liao
- Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
- Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
- Shengnan Li
  - Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
- Minmin Liao
  - Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
  - Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
- Yu Li
  - Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
  - Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
- Linshang Rao
  - Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
- Jinquan Lin
  - Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
- Jianyuan Shi
  - Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
- Zejun You
  - Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
- Wenlong Zhong
  - Guangzhou Huayin Medical Inspection Center Co., Ltd, Guangzhou, Guangdong Province, PR China
- Xinrong Liang
  - Guangzhou Huayin Medical Inspection Center Co., Ltd, Guangzhou, Guangdong Province, PR China
- Hao Han
  - Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore
- Yan Zhang
  - Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
  - Department of Pathology, Shenzhen Longhua District Maternity & Child Healthcare Hospital, Shenzhen, PR China
- Na Tang
  - Department of Pathology, Shenzhen First People's Hospital, Shenzhen, Guangdong Province, PR China
- Aixia Hu
  - Department of Pathology, Henan Provincial People's Hospital, Zhengzhou, Henan Province, PR China
- Hongyi Gao
  - Department of Pathology, Guangdong Provincial Women's and Children's Dispensary, Shenzhen, Guangdong Province, PR China
- Zhiqiang Cheng
  - Department of Pathology, Shenzhen First People's Hospital, Shenzhen, Guangdong Province, PR China
- Li Liang
  - Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
  - Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
- Weimiao Yu
  - Institute of Molecular and Cell Biology, A*STAR, Singapore, Singapore
  - Bioinformatics Institute, A*STAR, Singapore, Singapore
- Yanqing Ding
  - Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China
  - Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
|
22
|
Senousy Z, Abdelsamea MM, Mohamed MM, Gaber MM. 3E-Net: Entropy-Based Elastic Ensemble of Deep Convolutional Neural Networks for Grading of Invasive Breast Carcinoma Histopathological Microscopic Images. ENTROPY (BASEL, SWITZERLAND) 2021; 23:620. [PMID: 34065765 PMCID: PMC8156865 DOI: 10.3390/e23050620] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 05/13/2021] [Accepted: 05/14/2021] [Indexed: 12/21/2022]
Abstract
Automated grading systems using deep convolutional neural networks (DCNNs) have proven their capability and potential to distinguish between different breast cancer grades using digitized histopathological images. In digital breast pathology, it is vital to measure how confident a DCNN is in grading via a machine-confidence metric, especially in the presence of major computer vision challenges such as the high visual variability of the images. Such a quantitative metric can be employed not only to improve the robustness of automated systems, but also to assist medical professionals in identifying complex cases. In this paper, we propose an Entropy-based Elastic Ensemble of DCNN models (3E-Net) for grading invasive breast carcinoma microscopy images, which provides an initial stage of explainability through an uncertainty-aware mechanism based on entropy. Our proposed model is designed to (1) exclude images about which the ensemble is insensitive or highly uncertain and (2) dynamically grade the non-excluded images using the confident models in the ensemble architecture. We evaluated two variations of 3E-Net on an invasive breast carcinoma dataset and achieved grading accuracies of 96.15% and 99.50%.
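The entropy-based exclusion step described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation; the `max_entropy` threshold and the plain averaging of the surviving models are assumptions.

```python
import math

def entropy(probs):
    """Shannon entropy (natural log) of one model's softmax output."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def elastic_ensemble(per_model_probs, max_entropy=0.5):
    """Keep only predictions whose entropy is below the threshold and
    average them; return (None, 0) when every model is too uncertain,
    i.e. the image would be excluded from automated grading."""
    confident = [p for p in per_model_probs if entropy(p) <= max_entropy]
    if not confident:
        return None, 0
    n_classes = len(confident[0])
    avg = [sum(p[i] for p in confident) / len(confident) for i in range(n_classes)]
    return avg, len(confident)

# One certain and one near-uniform model over three grades:
preds = [[0.9, 0.05, 0.05], [0.34, 0.33, 0.33]]
probs, used = elastic_ensemble(preds)
```

Here the near-uniform prediction (entropy close to ln 3) is dropped, so the ensemble output comes from the single confident model.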
Affiliation(s)
- Zakaria Senousy
  - School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK
- Mohammed M. Abdelsamea
  - School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK
  - Faculty of Computers and Information, Assiut University, Assiut 71515, Egypt
- Mona Mostafa Mohamed
  - Department of Zoology, Faculty of Science, Cairo University, Giza 12613, Egypt
  - Faculty of Basic Sciences, Galala University, Suez 435611, Egypt
- Mohamed Medhat Gaber
  - School of Computing and Digital Technology, Birmingham City University, Birmingham B4 7AP, UK
  - Faculty of Computer Science and Engineering, Galala University, Suez 435611, Egypt
|
23
|
Tavolara TE, Niazi MKK, Gower AC, Ginese M, Beamer G, Gurcan MN. Deep learning predicts gene expression as an intermediate data modality to identify susceptibility patterns in Mycobacterium tuberculosis infected Diversity Outbred mice. EBioMedicine 2021; 67:103388. [PMID: 34000621 PMCID: PMC8138606 DOI: 10.1016/j.ebiom.2021.103388] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Revised: 04/22/2021] [Accepted: 04/23/2021] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND Machine learning has been applied successfully to many diagnostic and prognostic problems in computational histopathology. Yet, few efforts have been made to model gene expression from histopathology. This study proposes a methodology which predicts selected gene expression values (microarray) from haematoxylin and eosin whole-slide images as an intermediate data modality to identify fulminant-like pulmonary tuberculosis ('supersusceptible') in an experimentally infected cohort of Diversity Outbred mice (n = 77). METHODS Gradient-boosted trees were utilized as a novel feature selector to identify gene transcripts predictive of fulminant-like pulmonary tuberculosis. A novel attention-based multiple instance learning model for regression was used to predict the selected genes' expression from whole-slide images. Gene expression predictions were shown to replicate the ground truth closely enough to identify supersusceptible mice using gradient-boosted trees trained on ground-truth gene expression data. FINDINGS The model was accurate, showing high positive correlations with ground-truth gene expression on both cross-validation (n = 77, 0.63 ≤ ρ ≤ 0.84) and external testing sets (n = 33, 0.65 ≤ ρ ≤ 0.84). The sensitivity and specificity of the gene expression predictions for identifying supersusceptible mice (n = 77) were 0.88 and 0.95, respectively, and 0.88 and 0.93 for an external set of mice (n = 33). IMPLICATIONS Our methodology maps histopathology to gene expression with sufficient accuracy to predict a clinical outcome. The proposed methodology exemplifies a computational template for gene expression panels, in which relatively inexpensive and widely available tissue histopathology may be mapped to specific genes' expression to serve as a diagnostic or prognostic tool. FUNDING National Institutes of Health and American Lung Association.
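The attention-based multiple-instance pooling used in the METHODS above works by scoring each tile (instance) of a slide, softmax-normalizing the scores into attention weights, and taking the attention-weighted mean as the slide-level feature. A minimal sketch with hand-set weights; the linear scoring function and the tiny feature dimension are assumptions, not the paper's architecture:

```python
import math

def attention_mil_pool(instance_feats, score_w):
    """Score each instance, softmax the scores into attention weights,
    and return the attention-weighted mean feature vector (the 'bag'
    embedding) together with the weights themselves."""
    scores = [sum(w * f for w, f in zip(score_w, feats)) for feats in instance_feats]
    m = max(scores)                      # numerically stable softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    attn = [e / z for e in exps]
    dim = len(instance_feats[0])
    bag = [sum(a * feats[i] for a, feats in zip(attn, instance_feats)) for i in range(dim)]
    return bag, attn

# Two tiles with 2-D features; the scorer favours the first dimension.
bag, attn = attention_mil_pool([[1.0, 0.0], [0.0, 1.0]], score_w=[1.0, 0.0])
```

In the paper, a regression head on the bag embedding would then predict each selected gene's expression value.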
Affiliation(s)
- Thomas E Tavolara
  - Center for Biomedical Informatics, Wake Forest School of Medicine, 486 Patterson Avenue, Winston-Salem, NC 27101, United States
- M K K Niazi
  - Center for Biomedical Informatics, Wake Forest School of Medicine, 486 Patterson Avenue, Winston-Salem, NC 27101, United States
- Adam C Gower
  - Department of Medicine, Boston University School of Medicine, 72 E. Concord St Evans Building, Boston, MA 02118, United States
- Melanie Ginese
  - Department of Infectious Disease and Global Health, Tufts University Cummings School of Veterinary Medicine, 200 Westboro Rd., North Grafton, MA 01536, United States
- Gillian Beamer
  - Department of Infectious Disease and Global Health, Tufts University Cummings School of Veterinary Medicine, 200 Westboro Rd., North Grafton, MA 01536, United States
- Metin N Gurcan
  - Center for Biomedical Informatics, Wake Forest School of Medicine, 486 Patterson Avenue, Winston-Salem, NC 27101, United States
|
24
|
PathoNet introduced as a deep neural network backend for evaluation of Ki-67 and tumor-infiltrating lymphocytes in breast cancer. Sci Rep 2021; 11:8489. [PMID: 33875676 PMCID: PMC8055887 DOI: 10.1038/s41598-021-86912-w] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Accepted: 03/16/2021] [Indexed: 12/16/2022] Open
Abstract
The nuclear protein Ki-67 and tumor-infiltrating lymphocytes (TILs) have been introduced as prognostic factors for predicting both tumor progression and probable response to chemotherapy. The value of the Ki-67 index and TILs in the approach to heterogeneous tumors such as breast cancer (BC), the most common cancer in women worldwide, has been highlighted in the literature. Because estimation of both factors depends on observation by professional pathologists and is subject to inter-individual variation, automated methods using machine learning, specifically approaches based on deep learning, have attracted attention. Yet, deep learning methods need considerable annotated data. In the absence of publicly available benchmarks for BC Ki-67 cell detection and annotated cell classification, in this study we propose SHIDC-BC-Ki-67 as a dataset for this purpose. We also introduce a novel pipeline and backend for estimation of Ki-67 expression and simultaneous determination of the intratumoral TILs score in breast cancer cells. Further, we show that, despite the challenges encountered, our proposed backend, PathoNet, outperforms the state-of-the-art methods proposed to date with regard to the harmonic mean measure acquired. The dataset is publicly available at http://shiraz-hidc.com and all experiment code is published at https://github.com/SHIDCenter/PathoNet.
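For context, the Ki-67 labelling index that such a backend ultimately reports is the fraction of immunopositive tumour nuclei among all counted tumour nuclei. A minimal sketch of that final arithmetic; the cell-detection step, which is the hard part PathoNet addresses, is omitted and the counts below are hypothetical:

```python
def ki67_index(n_positive, n_negative):
    """Ki-67 labelling index: percentage of immunopositive tumour
    nuclei among all counted tumour nuclei."""
    total = n_positive + n_negative
    if total == 0:
        raise ValueError("no tumour cells counted")
    return 100.0 * n_positive / total

# e.g. 230 Ki-67-positive nuclei out of 1000 counted tumour cells
idx = ki67_index(230, 770)
```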
|
25
|
Sornapudi S, Stanley RJ, Stoecker WV, Long R, Xue Z, Zuna R, Frazier SR, Antani S. DeepCIN: Attention-Based Cervical histology Image Classification with Sequential Feature Modeling for Pathologist-Level Accuracy. J Pathol Inform 2020; 11:40. [PMID: 33828898 PMCID: PMC8020842 DOI: 10.4103/jpi.jpi_50_20] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2020] [Revised: 09/02/2020] [Accepted: 10/21/2020] [Indexed: 11/22/2022] Open
Abstract
Background: Cervical cancer is one of the deadliest cancers affecting women globally. Cervical intraepithelial neoplasia (CIN) assessment using histopathological examination of cervical biopsy slides is subject to interobserver variability. Automated processing of digitized histopathology slides has the potential for more accurate classification of CIN grades, from normal to increasing grades of pre-malignancy: CIN1, CIN2, and CIN3. Methodology: Cervix disease is generally understood to progress from the bottom (basement membrane) to the top of the epithelium. To model this relationship of disease severity to the spatial distribution of abnormalities, we propose a network pipeline, DeepCIN, that analyzes high-resolution epithelium images (manually extracted from whole-slide images) hierarchically by focusing on localized vertical regions and fusing this local information to determine the Normal/CIN classification. The pipeline contains two classifier networks: (1) a cross-sectional, vertical segment-level sequence generator, trained using weak supervision to generate feature sequences from the vertical segments that preserve the bottom-to-top feature relationships in the epithelium image data, and (2) an attention-based fusion network, an image-level classifier that predicts the final CIN grade by merging the vertical segment sequences. Results: The model produces the CIN classification results and also determines the contribution of each vertical segment to the CIN grade prediction. Conclusion: Experiments show that DeepCIN achieves pathologist-level CIN classification accuracy.
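The first stage of the pipeline above depends on slicing each epithelium image into contiguous vertical strips. A sketch of that bookkeeping; the strip count and the equal-width tiling scheme are assumptions, since the abstract does not give the paper's segmentation parameters:

```python
def vertical_segments(width, n_segments):
    """Column ranges [(x0, x1), ...] that tile an epithelium image of
    the given pixel width into contiguous vertical strips; the last
    strip absorbs any remainder so the whole width is covered."""
    step = width // n_segments
    bounds = []
    for i in range(n_segments):
        x0 = i * step
        x1 = width if i == n_segments - 1 else (i + 1) * step
        bounds.append((x0, x1))
    return bounds

segs = vertical_segments(1000, 3)
```

Each strip would then be fed to the segment-level sequence generator, preserving the bottom-to-top axis within every strip.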
Affiliation(s)
- Sudhir Sornapudi
  - Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO, USA
- R Joe Stanley
  - Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO, USA
- Rodney Long
  - Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Zhiyun Xue
  - Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Rosemary Zuna
  - Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
- Shellaine R Frazier
  - Department of Surgical Pathology, University of Missouri Hospitals and Clinics, Columbia, MO, USA
- Sameer Antani
  - Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
|
26
|
Shu J, Liu J, Zhang Y, Fu H, Ilyas M, Faraci G, Della Mea V, Liu B, Qiu G. Marker controlled superpixel nuclei segmentation and automatic counting on immunohistochemistry staining images. Bioinformatics 2020; 36:3225-3233. [PMID: 32073624 DOI: 10.1093/bioinformatics/btaa107] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2019] [Revised: 02/03/2020] [Accepted: 02/14/2020] [Indexed: 12/17/2022] Open
Abstract
MOTIVATION For the diagnosis of cancer, manually counting nuclei on massive histopathological images is tedious, and the counting results may vary due to the subjective nature of the operation. RESULTS This paper presents a new segmentation and counting method for nuclei which automatically provides nucleus counts. The method segments nuclei from detected nuclei seed markers through a modified simple one-pass superpixel segmentation method. Rather than using a single pixel as a seed, we created a superseed for each nucleus to incorporate more information and improve segmentation results. Nucleus pixels are extracted by a newly proposed fusing method that reduces stain variation and preserves nucleus contour information. In an evaluation of segmentation results, the proposed method was compared to five existing methods on a dataset of 52 immunohistochemically (IHC) stained images and produced the highest mean F1-score of 0.668. To evaluate the counting results, another dataset with more than 30 000 IHC-stained nuclei in 88 images was prepared. The correlation between automatically generated and manual nucleus counts was up to R2 = 0.901 (P < 0.001). To evaluate the segmentation results of a tool based on the proposed method, we also tested on the 2018 Data Science Bowl (DSB) competition dataset, where three users obtained a DSB score of 0.331 ± 0.006. AVAILABILITY AND IMPLEMENTATION The proposed method has been implemented as a plugin tool in ImageJ and the source code can be freely downloaded. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
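The reported agreement between automatic and manual counts is a coefficient of determination. As a reminder of what R2 = 0.901 measures, a minimal sketch with made-up counts (the four image counts below are purely illustrative):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination between manual (reference) and
    automatically generated nucleus counts."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)   # total variance
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residuals
    return 1.0 - ss_res / ss_tot

manual    = [120, 300, 80, 210]   # per-image manual counts (illustrative)
automatic = [118, 295, 85, 205]   # per-image automatic counts
r2 = r_squared(manual, automatic)
```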
Affiliation(s)
- Jie Shu
  - School of Information Science and Technology, North China University of Technology
  - Beijing Key Laboratory on Integration and Analysis of Large-scale Stream Data, Beijing 100144, China
- Jingxin Liu
  - Histo Pathology Diagnostic Center, Shanghai, China
- Yongmei Zhang
  - School of Information Science and Technology, North China University of Technology
- Hao Fu
  - College of Intelligence Science and Technology, National University of Defense Technology, Hunan 410073, China
- Mohammad Ilyas
  - Faculty of Medicine & Health Sciences, Nottingham University Hospitals NHS Trust and University of Nottingham, Nottingham NG7 2UH, UK
- Giuseppe Faraci
  - Department of Mathematics, Computer Science and Physics, University of Udine, Udine 33100, Italy
- Vincenzo Della Mea
  - Department of Mathematics, Computer Science and Physics, University of Udine, Udine 33100, Italy
- Bozhi Liu
  - Guangdong Key Laboratory for Intelligent Signal Processing, Shenzhen University, Guangzhou 518061, China
- Guoping Qiu
  - Histo Pathology Diagnostic Center, Shanghai, China
  - Department of Computer Science, University of Nottingham, Nottingham NG8 1BB, UK
|
27
|
Comparative Analysis of Rhino-Cytological Specimens with Image Analysis and Deep Learning Techniques. ELECTRONICS 2020. [DOI: 10.3390/electronics9060952] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Cytological study of the nasal mucosa (also known as rhino-cytology) represents an important diagnostic aid that allows highlighting of the presence of some types of rhinitis through the analysis of cellular features visible under a microscope. Nowadays, the automated detection and classification of cells benefit from the capacity of deep learning techniques in processing digital images of the cytological preparation. Even though the results of such automatic systems need to be validated by a specialized rhino-cytologist, this technology represents a valid support that aims at increasing the accuracy of the analysis while reducing the required time and effort. The quality of the rhino-cytological preparation, which is clearly important for the microscope observation phase, is also fundamental for the automatic classification process. In fact, the slide-preparing technique turns out to be a crucial factor among the multiple ones that may modify the morphological and chromatic characteristics of the cells. This paper aims to investigate the possible differences between direct smear (SM) and cytological centrifugation (CYT) slide-preparation techniques, in order to preserve image quality during the observation and cell classification phases in rhino-cytology. Firstly, a comparative study based on image analysis techniques has been put forward. The extraction of densitometric and morphometric features has made it possible to quantify and describe the spatial distribution of the cells in the field images observed under the microscope. Statistical analysis of the distribution of these features has been used to evaluate the degree of similarity between images acquired from SM and CYT slides. 
The results reveal an important difference in the observation process between cells prepared with the two techniques with respect to cell density and spatial distribution: analysis of the CYT slides proved more difficult than that of the SM slides because the spatial distribution of the cells yields a lower cell density than in the SM slides. As a marginal part of this study, a performance assessment of the computer-aided diagnosis (CAD) system called Rhino-cyt has also been carried out on both groups of slide image types.
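The cell-density comparison underlying this conclusion reduces to counts per unit field area. A minimal sketch; the field dimensions and cell count below are hypothetical values, not measurements from the study:

```python
def cell_density(n_cells, field_area_um2):
    """Cells per square millimetre in one microscope field image
    (1 mm^2 = 1e6 um^2)."""
    return n_cells / (field_area_um2 / 1e6)

# e.g. 45 cells in a hypothetical 400 x 300 um field
d = cell_density(45, 400 * 300)
```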
|
28
|
Abstract
Rhinology studies the anatomy, physiology and diseases affecting the nasal region; one of the most modern techniques for diagnosing these diseases is nasal cytology, or rhinocytology, which involves analyzing the cells contained in the nasal mucosa under a microscope and searching for other elements, such as bacteria, that suggest a pathology. During microscopic observation, bacteria can be detected in the form of biofilm, that is, a bacterial colony surrounded by a protective organic extracellular matrix made of polysaccharides. In the field of nasal cytology, the presence of biofilm in microscopic samples denotes the presence of an infection. In this paper, we describe the design and testing of a diagnostic support system for the automatic detection of biofilm, based on a convolutional neural network (CNN). To demonstrate the reliability of the system, alternative solutions based on isolation forest and deep random forest techniques were also tested. Texture analysis is used, with Haralick feature extraction and dominant color. The CNN-based biofilm detection system shows an accuracy of about 98%, an average accuracy of about 100% on the test set and about 99% on the validation set. The CNN-based system designed in this study is confirmed as the most reliable among the best automatic image recognition technologies in the specific context of this study. The developed system allows the specialist to obtain a rapid and accurate identification of the biofilm in the slide images.
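The Haralick features mentioned above are statistics of a grey-level co-occurrence matrix (GLCM). A minimal pure-Python sketch of one such feature, contrast, on a tiny two-level image; in practice a library such as scikit-image (`graycomatrix`/`graycoprops`) would be used, and the offset and image here are illustrative:

```python
def glcm(img, levels, dx=1, dy=0):
    """Normalised grey-level co-occurrence matrix for offset (dx, dy):
    p[i][j] is the frequency of grey level j appearing at the offset
    from a pixel with grey level i."""
    h, w = len(img), len(img[0])
    counts = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h - dy):
        for x in range(w - dx):
            counts[img[y][x]][img[y + dy][x + dx]] += 1
            total += 1
    return [[c / total for c in row] for row in counts]

def haralick_contrast(p):
    """Haralick 'contrast': sum over (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * p[i][j]
               for i in range(len(p)) for j in range(len(p)))

img = [[0, 0, 1],
       [1, 1, 0]]
contrast = haralick_contrast(glcm(img, levels=2))
```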
|
29
|
Ianni JD, Soans RE, Sankarapandian S, Chamarthi RV, Ayyagari D, Olsen TG, Bonham MJ, Stavish CC, Motaparthi K, Cockerell CJ, Feeser TA, Lee JB. Tailored for Real-World: A Whole Slide Image Classification System Validated on Uncurated Multi-Site Data Emulating the Prospective Pathology Workload. Sci Rep 2020; 10:3217. [PMID: 32081956 PMCID: PMC7035316 DOI: 10.1038/s41598-020-59985-2] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2019] [Accepted: 02/06/2020] [Indexed: 11/10/2022] Open
Abstract
The standard-of-care diagnostic procedure for suspected skin cancer is microscopic examination of hematoxylin & eosin-stained tissue by a pathologist. Areas of high inter-pathologist discordance and rising biopsy rates necessitate higher efficiency and diagnostic reproducibility. We present and validate a deep learning system which classifies digitized dermatopathology slides into 4 categories. The system is developed using 5,070 images from a single lab, and tested on an uncurated set of 13,537 images from 3 test labs, using whole-slide scanners manufactured by 3 different vendors. The system's use of deep-learning-based confidence scoring as a criterion for accepting a result as accurate yields an accuracy of up to 98%, making it adoptable in a real-world setting. Without confidence scoring, the system achieved an accuracy of 78%. We anticipate that our deep learning system will serve as a foundation enabling faster diagnosis of skin cancer, identification of cases for specialist review, and targeted diagnostic classifications.
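The confidence-scoring triage described above amounts to reporting a prediction only when the model's confidence clears a threshold and deferring everything else to a pathologist. A minimal sketch; the 0.95 threshold and the use of the top softmax probability as the confidence score are assumptions, not the paper's operating point:

```python
def triage(prob_dist, threshold=0.95):
    """Report the top class only when its confidence clears the
    threshold; otherwise return None to defer to a pathologist."""
    conf = max(prob_dist)
    label = prob_dist.index(conf)
    return (label, conf) if conf >= threshold else (None, conf)

auto = triage([0.01, 0.97, 0.01, 0.01])   # confident -> auto-reported
defer = triage([0.40, 0.35, 0.15, 0.10])  # uncertain -> deferred
```

Raising the threshold trades coverage (fewer auto-reported slides) for accuracy on the slides that are reported, which is the trade-off behind the 78% vs 98% figures.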
Affiliation(s)
- Thomas G Olsen
  - Department of Dermatology, Boonshoft School of Medicine, Wright State University School of Medicine, Dayton, Ohio, USA
  - Dermatopathology Laboratory of Central States, Dayton, Ohio, USA
- Kiran Motaparthi
  - Department of Dermatology, University of Florida College of Medicine, Gainesville, Florida, USA
- Jason B Lee
  - Departments of Dermatology and Cutaneous Biology, Sidney Kimmel Medical College at Thomas Jefferson University, Philadelphia, Pennsylvania, USA
|
30
|
Khorrami M, Prasanna P, Gupta A, Patil P, Velu PD, Thawani R, Corredor G, Alilou M, Bera K, Fu P, Feldman M, Velcheti V, Madabhushi A. Changes in CT Radiomic Features Associated with Lymphocyte Distribution Predict Overall Survival and Response to Immunotherapy in Non-Small Cell Lung Cancer. Cancer Immunol Res 2020; 8:108-119. [PMID: 31719058 PMCID: PMC7718609 DOI: 10.1158/2326-6066.cir-19-0476] [Citation(s) in RCA: 162] [Impact Index Per Article: 40.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2019] [Revised: 09/04/2019] [Accepted: 11/05/2019] [Indexed: 12/26/2022]
Abstract
No predictive biomarkers can robustly identify patients with non-small cell lung cancer (NSCLC) who will benefit from immune checkpoint inhibitor (ICI) therapies. Here, in a machine learning setting, we compared changes ("delta") in the radiomic texture (DelRADx) of CT patterns both within and outside tumor nodules before and after two to three cycles of ICI therapy. We found that DelRADx patterns could predict response to ICI therapy and overall survival (OS) for patients with NSCLC. We retrospectively analyzed data acquired from 139 patients with NSCLC at two institutions, who were divided into a discovery set (D1 = 50) and two independent validation sets (D2 = 62, D3 = 27). Intranodular and perinodular texture descriptors were extracted, and the relative differences were computed. A linear discriminant analysis (LDA) classifier was trained with 8 DelRADx features to predict RECIST-derived response. Association of the delta-radiomic risk score (DRS) with OS was determined. The association of DelRADx features with tumor-infiltrating lymphocyte (TIL) density on the diagnostic biopsies (n = 36) was also evaluated. The LDA classifier yielded an AUC of 0.88 ± 0.08 in distinguishing responders from nonresponders in D1, and 0.85 and 0.81 in D2 and D3, respectively. DRS was associated with OS [HR: 1.64; 95% confidence interval (CI), 1.22-2.21; P = 0.0011; C-index = 0.72]. Peritumoral Gabor features were associated with the density of TILs on diagnostic biopsy samples. Our results show that DelRADx could be used to identify early functional responses in patients with NSCLC.
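The "relative differences" of texture descriptors before and after therapy can be sketched as a per-feature relative change. The formula (post minus pre, divided by pre) and the feature names below are illustrative assumptions, not the paper's exact definition:

```python
def delta_features(pre, post):
    """Relative change of each radiomic feature between the baseline
    scan and the post-treatment scan: (post - pre) / pre."""
    return {k: (post[k] - pre[k]) / pre[k] for k in pre}

# Hypothetical intranodular/peritumoral descriptors for one patient:
pre  = {"gabor_energy": 2.0, "haralick_contrast": 0.50}
post = {"gabor_energy": 2.5, "haralick_contrast": 0.25}
d = delta_features(pre, post)
```

A classifier such as LDA would then be trained on these delta values across the discovery cohort.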
Affiliation(s)
- Mohammadhadi Khorrami
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio
- Prateek Prasanna
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio
- Amit Gupta
  - Department of Radiology-Cardiothoracic Imaging, University Hospitals, Cleveland, Ohio
- Pradnya Patil
  - Department of Solid Tumor Oncology, Cleveland Clinic, Cleveland, Ohio
- Priya D Velu
  - Pathology and Laboratory Medicine, Weill Cornell Medicine Physicians, New York, New York
- Rajat Thawani
  - Department of Internal Medicine, Maimonides Medical Center, Brooklyn, New York
- German Corredor
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio
- Mehdi Alilou
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio
- Kaustav Bera
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio
- Pingfu Fu
  - Department of Population and Quantitative Health Sciences, CWRU, Cleveland, Ohio
- Michael Feldman
  - Pathology and Laboratory Medicine, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania
- Vamsidhar Velcheti
  - Department of Hematology and Oncology, NYU Langone Health, New York, New York
- Anant Madabhushi
  - Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio
  - Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio
|
31
|
Ibrahim A, Gamble P, Jaroensri R, Abdelsamea MM, Mermel CH, Chen PHC, Rakha EA. Artificial intelligence in digital breast pathology: Techniques and applications. Breast 2019; 49:267-273. [PMID: 31935669 PMCID: PMC7375550 DOI: 10.1016/j.breast.2019.12.007] [Citation(s) in RCA: 73] [Impact Index Per Article: 14.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2019] [Accepted: 12/12/2019] [Indexed: 12/16/2022] Open
Abstract
Breast cancer is the most common cancer and the second leading cause of cancer-related death worldwide. The mainstay of breast cancer workup is histopathological diagnosis, which guides therapy and prognosis. However, emerging knowledge about the complex nature of cancer and the availability of tailored therapies have exposed opportunities for improvements in diagnostic precision. In parallel, advances in artificial intelligence (AI), along with the growing digitization of pathology slides for primary diagnosis, are a promising approach to meet the demand for more accurate detection, classification and prediction of the behaviour of breast tumours. In this article, we cover the current and prospective uses of AI in digital pathology for breast cancer, review the basics of digital pathology and AI, and outline outstanding challenges in the field.
Affiliation(s)
- Asmaa Ibrahim
  - Department of Histopathology, Division of Cancer and Stem Cells, School of Medicine, The University of Nottingham and Nottingham University Hospitals NHS Trust, Nottingham City Hospital, Nottingham, NG5 1PB, UK
- Mohammed M Abdelsamea
  - School of Computing and Digital Technology, Birmingham City University, Birmingham, UK
- Emad A Rakha
  - Department of Histopathology, Division of Cancer and Stem Cells, School of Medicine, The University of Nottingham and Nottingham University Hospitals NHS Trust, Nottingham City Hospital, Nottingham, NG5 1PB, UK
|
32
|
P B S, Faruqi F, K S H, Kudva R. Deep Convolution Neural Network for Malignancy Detection and Classification in Microscopic Uterine Cervix Cell Images. Asian Pac J Cancer Prev 2019; 20:3447-3456. [PMID: 31759371 PMCID: PMC7062987 DOI: 10.31557/apjcp.2019.20.11.3447] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2019] [Indexed: 11/25/2022] Open
Abstract
Objective: Automated Pap smear cervical screening is one of the most effective imaging-based cancer detection tools for categorizing cervical cell images as normal or abnormal. Traditional classification methods depend on hand-engineered features and show limitations on large, diverse datasets. Effective feature extraction requires efficient image preprocessing and segmentation, which remains a prominent challenge in the field of pathology. In this paper, a deep learning approach is used for cell image classification on large datasets. Methods: The proposed method combines abstract and complicated representations of data acquired in a hierarchical architecture. A convolutional neural network (CNN) learns meaningful kernels that simulate the extraction of visual features such as edges, size, shape and colors in image classification. A deep prediction model is built using such a CNN to classify the various grades of cancer: normal, mild, moderate, severe and carcinoma. It is an effective computational model which uses multiple processing layers to learn complex features. A large dataset was prepared for this study by systematically augmenting the images in the Herlev dataset. Results: Among the three sets considered for the study, the first set of single-cell enhanced original images achieved an accuracy of 94.1% for the 5-class, 96.2% for the 4-class, 94.8% for the 3-class and 95.7% for the 2-class problems. The second set, comprising contour-extracted images, showed accuracies of 92.14%, 92.9%, 94.7% and 89.9% for the 5-, 4-, 3- and 2-class problems. The third set of binary images showed 85.07% for the 5-class, 84% for the 4-class, 92.07% for the 3-class and the highest accuracy of 99.97% for the 2-class problems. Conclusion: The experimental results of the proposed model showed effective classification of different grades of cancer in cervical cell images, exhibiting the extensive potential of deep learning in Pap smear cell image classification.
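Systematic augmentation of the kind mentioned in the Methods multiplies the dataset size by the number of transform combinations. A sketch assuming rotations and horizontal flips; the actual transform set used by the authors is not given in the abstract, while 917 is the published size of the Herlev single-cell set:

```python
def augmented_size(n_images, rotations=(0, 90, 180, 270), flips=(False, True)):
    """Size of a systematically augmented set: every rotation x flip
    combination applied to every source image."""
    return n_images * len(rotations) * len(flips)

total = augmented_size(917)   # Herlev single-cell images
```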
Affiliation(s)
- Shanthi P B
  - Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Udupi, Karnataka, India
- Faraz Faruqi
  - Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Udupi, Karnataka, India
- Hareesha K S
  - Department of Computer Applications, Manipal Institute of Technology, Manipal Academy of Higher Education, Udupi, Karnataka, India
- Ranjini Kudva
  - Department of Pathology, Kasturba Medical College, Manipal Academy of Higher Education, Udupi, Karnataka, India
|
33
|
Bera K, Schalper KA, Rimm DL, Velcheti V, Madabhushi A. Artificial intelligence in digital pathology - new tools for diagnosis and precision oncology. Nat Rev Clin Oncol 2019; 16:703-715. [PMID: 31399699 PMCID: PMC6880861 DOI: 10.1038/s41571-019-0252-y] [Citation(s) in RCA: 630] [Impact Index Per Article: 126.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/04/2019] [Indexed: 02/06/2023]
Abstract
In the past decade, advances in precision oncology have resulted in an increased demand for predictive assays that enable the selection and stratification of patients for treatment. The enormous divergence of signalling and transcriptional networks mediating the crosstalk between cancer, stromal and immune cells complicates the development of functionally relevant biomarkers based on a single gene or protein. However, the result of these complex processes can be uniquely captured in the morphometric features of stained tissue specimens. The possibility of digitizing whole-slide images of tissue has led to the advent of artificial intelligence (AI) and machine learning tools in digital pathology, which enable mining of subvisual morphometric phenotypes and might, ultimately, improve patient management. In this Perspective, we critically evaluate various AI-based computational approaches for digital pathology, focusing on deep neural networks and 'hand-crafted' feature-based methodologies. We aim to provide a broad framework for incorporating AI and machine learning tools into clinical oncology, with an emphasis on biomarker development. We discuss some of the challenges relating to the use of AI, including the need for well-curated validation datasets, regulatory approval and fair reimbursement strategies. Finally, we present potential future opportunities for precision oncology.
Affiliation(s)
- Kaustav Bera, Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Kurt A Schalper, Department of Pathology, Yale University School of Medicine, New Haven, CT, USA
- David L Rimm, Department of Pathology, Yale University School of Medicine, New Haven, CT, USA
- Vamsidhar Velcheti, Thoracic Medical Oncology, Perlmutter Cancer Center, New York University, New York, NY, USA
- Anant Madabhushi, Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, USA
34
Conceição T, Braga C, Rosado L, Vasconcelos MJM. A Review of Computational Methods for Cervical Cells Segmentation and Abnormality Classification. Int J Mol Sci 2019; 20:E5114. [PMID: 31618951] [PMCID: PMC6834130] [DOI: 10.3390/ijms20205114] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Received: 09/20/2019] [Revised: 10/07/2019] [Accepted: 10/09/2019] [Indexed: 02/07/2023]
Abstract
Cervical cancer is one of the most common cancers in women worldwide, affecting around 570,000 new patients each year. Although screening has improved greatly over the years, current procedures still suffer from long, tedious workflows and ambiguities. Interest in computer-aided solutions for cervical cancer screening is growing because such tools can ease these practical difficulties, which are especially frequent in the low-income countries where most deaths caused by cervical cancer occur. In this review, an overview of the disease and its current screening procedures is first introduced, followed by an in-depth analysis of the most relevant computational methods in the literature for cervical cell analysis. In particular, this work focuses on automated quality assessment, segmentation, and classification, including an extensive literature review and a critical discussion. Since the major goal of this timely review is to support the development of new automated tools that can facilitate cervical screening procedures, it also provides considerations regarding the next generation of computer-aided diagnosis systems and future research directions.
Affiliation(s)
- Luís Rosado, Fraunhofer Portugal AICOS, 4200-135 Porto, Portugal
35
Recognition and Clinical Diagnosis of Cervical Cancer Cells Based on our Improved Lightweight Deep Network for Pathological Image. J Med Syst 2019; 43:301. [PMID: 31372766] [DOI: 10.1007/s10916-019-1426-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 04/27/2019] [Accepted: 07/14/2019] [Indexed: 10/26/2022]
Abstract
Accurate recognition of cervical cancer cells is of great significance to clinical diagnosis, but existing algorithms are built on low-level handcrafted features and their performance gains are limited. An improved algorithm based on a residual neural network is proposed to improve diagnostic accuracy. First, momentum parameters are introduced into the training model; second, the recognition rate is improved by changing the number of training samples. For object recognition under resource-constrained conditions, we optimize the network design, including the convolution operations, model parameter compression, and the depth of feature representation, and implement a lightweight network structure for an embedded platform. The proposed deep model reduces the number of parameters and the resources needed for inference while preserving precision. The experimental results show that the lightweight model outperforms the existing comparison models, achieving 94.1% accuracy with fewer parameters on the cervical cell dataset.
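The momentum-based training mentioned in this abstract is standard SGD with momentum, in which a velocity term accumulates past gradients to smooth the descent direction. A minimal sketch of one update step (not the authors' code; the learning rate, momentum value, and toy objective are invented for illustration):

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """One SGD-with-momentum update: the velocity accumulates a decaying
    sum of past gradients, which damps oscillations across iterations."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Toy objective: minimise ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(3):
    grad = 2 * w
    w, v = sgd_momentum_step(w, grad, v)
print(w)  # moves toward the origin
```

After a few steps the norm of `w` shrinks, illustrating why momentum is a cheap way to stabilise training of a small residual network.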
36
Region-Based Automated Localization of Colonoscopy and Wireless Capsule Endoscopy Polyps. Appl Sci (Basel) 2019. [DOI: 10.3390/app9122404] [Citation(s) in RCA: 42] [Impact Index Per Article: 8.4] [Indexed: 12/17/2022]
Abstract
The early detection of polyps could help prevent colorectal cancer. Automated detection of polyps on the colon walls could reduce the number of false negatives caused by manual examination errors or by polyps hidden behind folds, and could help doctors locate polyps during screening tests such as colonoscopy and wireless capsule endoscopy. Missed polyps may allow lesions to progress. In this paper, we propose a modified region-based convolutional neural network (R-CNN) that generates masks around polyps detected in still frames. The locations of the polyps in the image are marked, which assists the doctors examining them. Features are extracted from the polyp images using pre-trained Resnet-50 and Resnet-101 models through feature extraction and fine-tuning. Several publicly available polyp datasets are analyzed with various pretrained weights. Notably, fine-tuning with balloon data (polyp-like natural images) improved the polyp detection rate. The optimum CNN models on colonoscopy datasets including CVC-ColonDB, CVC-PolypHD, and ETIS-Larib produced (F1 score, F2 score) values of (90.73, 91.27), (80.65, 79.11), and (76.43, 78.70), respectively. The best model on the wireless capsule endoscopy dataset achieved (96.67, 96.10). The experimental results indicate better localization of polyps compared with recent traditional and deep learning methods.
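The (F1, F2) pairs reported above follow the usual F-beta definition, where F2 weights recall more heavily, penalising missed polyps more than false alarms. A small sketch of that metric (the detection counts below are made up for illustration):

```python
def f_beta(tp: int, fp: int, fn: int, beta: float = 1.0) -> float:
    """F-beta score from detection counts; beta > 1 favours recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical run: 90 true detections, 10 false alarms, 8 missed polyps.
f1 = f_beta(90, 10, 8, beta=1.0)
f2 = f_beta(90, 10, 8, beta=2.0)
print(round(f1, 4), round(f2, 4))
```

Because recall (90/98) exceeds precision (90/100) here, the F2 score comes out slightly above F1, which is exactly the behaviour that makes F2 a natural companion metric for screening tasks.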
37
AlMubarak HA, Stanley J, Guo P, Long R, Antani S, Thoma G, Zuna R, Frazier S, Stoecker W. A Hybrid Deep Learning and Handcrafted Feature Approach for Cervical Cancer Digital Histology Image Classification. Int J Healthc Inf Syst Inform 2019. [DOI: 10.4018/ijhisi.2019040105] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Indexed: 11/09/2022]
Abstract
Cervical cancer is the second most common cancer affecting women worldwide but is curable if diagnosed early. Routinely, expert pathologists visually examine histology slides to assess cervix tissue abnormalities. A localized, fusion-based hybrid of imaging and deep learning is explored to classify squamous epithelium into cervical intraepithelial neoplasia (CIN) grades for a dataset of 83 digitized histology images. The epithelium region is partitioned into 10 vertical segments, and for each segment 27 handcrafted image features are computed alongside convolutional neural network features extracted from rectangular patches with a sliding window. The imaging and deep learning features are combined and used as inputs to a secondary classifier for individual-segment and whole-epithelium classification. The hybrid method achieved a 15.51% and 11.66% improvement over the deep learning and imaging approaches alone, respectively, with an 80.72% whole-epithelium CIN classification accuracy, demonstrating the potential of fusing image and deep learning features for epithelium CIN classification.
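The segment-wise fusion described above amounts to concatenating the handcrafted and CNN feature vectors before passing them to the secondary classifier. A rough sketch under assumed sizes (the 64-dimensional CNN feature and the z-score normalisation step are my additions for illustration, not details from the paper):

```python
import numpy as np

# Hypothetical shapes: 10 vertical segments per epithelium image,
# 27 handcrafted features and an assumed 64 CNN features per segment.
rng = np.random.default_rng(0)
handcrafted = rng.random((10, 27))   # e.g. texture/colour statistics
cnn_feats = rng.random((10, 64))     # pooled CNN patch activations

# Fusion: concatenate per segment, then z-score each column so that
# neither feature family dominates the secondary classifier.
fused = np.concatenate([handcrafted, cnn_feats], axis=1)
fused = (fused - fused.mean(axis=0)) / (fused.std(axis=0) + 1e-8)
print(fused.shape)  # (10, 91)
```

The fused matrix (one 91-dimensional row per segment) would then be fed to any off-the-shelf classifier, with whole-epithelium labels derived by aggregating the 10 segment predictions.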
Affiliation(s)
- Haidar A AlMubarak, Missouri University of Science and Technology, Rolla, USA; Advanced Lab for Intelligent Systems Research, Department of Computer Engineering, College of Information and Computer Sciences, King Saud University, Riyadh, Saudi Arabia; Electrical and Computer Engineering Department, Missouri University of Science and Technology, Rolla, USA
- Joe Stanley, Missouri University of Science and Technology, Rolla, USA
- Peng Guo, Missouri University of Science and Technology, Rolla, USA
- Rodney Long, Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, USA
- Sameer Antani, Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, USA
- George Thoma, Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, USA
- Rosemary Zuna, Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, USA
- William Stoecker, The Dermatology Center, Missouri University of Science and Technology, Rolla, USA
38
Vaickus LJ, Suriawinata AA, Wei JW, Liu X. Automating the Paris System for urine cytopathology: a hybrid deep-learning and morphometric approach. Cancer Cytopathol 2019; 127:98-115. [PMID: 30702803] [DOI: 10.1002/cncy.22099] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Received: 07/30/2018] [Revised: 11/09/2018] [Accepted: 12/03/2018] [Indexed: 01/20/2023]
Abstract
BACKGROUND The Paris System for Urine Cytopathology (the Paris System) has succeeded in making the analysis of liquid-based urine preparations more reproducible. Any algorithm seeking to automate this system must accurately estimate the nuclear-to-cytoplasmic (N:C) ratio and produce a qualitative "atypia score." The authors propose a hybrid deep-learning and morphometric model that reliably automates the Paris System. METHODS Whole-slide images (WSI) of liquid-based urine cytology specimens were extracted from 51 negative, 60 atypical, 52 suspicious, and 54 positive cases. Morphometric algorithms were applied to decompose images to their component parts; and statistics, including the N:C ratio, were tabulated using segmentation algorithms to create organized data structures, dubbed rich information matrices (RIMs). These RIM objects were enhanced using deep-learning algorithms to include qualitative measures. The augmented RIM objects were then used to reconstruct WSIs with filtering criteria and to generate pancellular statistical information. RESULTS The described system was used to calculate the N:C ratio for all cells, generate object classifications (atypical urothelial cell, squamous cell, crystal, etc), filter the original WSI to remove unwanted objects, rearrange the WSI to an efficient, condensed-grid format, and generate pancellular statistics containing quantitative/qualitative data for every cell in a WSI. In addition to developing novel techniques for managing WSIs, a system capable of automatically tabulating the Paris System criteria also was generated. CONCLUSIONS A hybrid deep-learning and morphometric algorithm was developed for the analysis of urine cytology specimens that could reliably automate the Paris System and provide many avenues for increasing the efficiency of digital screening for urine WSIs and other cytology preparations.
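The N:C ratio at the heart of the Paris System criteria can be derived directly from paired nucleus and cell segmentation masks. A minimal sketch, not the authors' morphometric pipeline, using one common area-based definition (nucleus area over cytoplasm area; other definitions, such as nucleus area over whole-cell area, also appear in practice):

```python
import numpy as np

def nc_ratio(nucleus_mask: np.ndarray, cell_mask: np.ndarray) -> float:
    """Nuclear-to-cytoplasmic (N:C) ratio from binary segmentation masks.

    Cytoplasm area is taken as the cell area minus the nucleus area.
    """
    nucleus_area = int(np.count_nonzero(nucleus_mask))
    cytoplasm_area = int(np.count_nonzero(cell_mask)) - nucleus_area
    if cytoplasm_area <= 0:
        return float("inf")  # degenerate segmentation: nucleus fills the cell
    return nucleus_area / cytoplasm_area

# Toy example: a 100-pixel cell containing a 16-pixel nucleus.
cell = np.zeros((12, 12), dtype=bool)
cell[1:11, 1:11] = True
nucleus = np.zeros_like(cell)
nucleus[4:8, 4:8] = True
print(round(nc_ratio(nucleus, cell), 3))  # 16 / (100 - 16)
```

In a full pipeline this per-cell value would be one column of the "rich information matrix" the abstract describes, tabulated across every segmented cell in the whole-slide image.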
Affiliation(s)
- Louis J Vaickus, Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Arief A Suriawinata, Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Jason W Wei, Department of Computer Science, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
- Xiaoying Liu, Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
39
Zarella MD, Bowman D, Aeffner F, Farahani N, Xthona A, Absar SF, Parwani A, Bui M, Hartman DJ. A Practical Guide to Whole Slide Imaging: A White Paper From the Digital Pathology Association. Arch Pathol Lab Med 2018; 143:222-234. [DOI: 10.5858/arpa.2018-0343-ra] [Citation(s) in RCA: 150] [Impact Index Per Article: 25.0] [Indexed: 11/06/2022]
Abstract
Context.—
Whole slide imaging (WSI) represents a paradigm shift in pathology, serving as a necessary first step for a wide array of digital tools to enter the field. Its basic function is to digitize glass slides, but its impact on pathology workflows, reproducibility, dissemination of educational material, expansion of service to underprivileged areas, and intrainstitutional and interinstitutional collaboration exemplifies a significant innovative movement with far-reaching effects. Although the benefits of WSI to pathology practices, academic centers, and research institutions are many, the complexities of implementation remain an obstacle to widespread adoption. In the wake of the first regulatory clearance of WSI for primary diagnosis in the United States, some barriers to adoption have fallen. Nevertheless, implementation of WSI remains a difficult prospect for many institutions, especially those with stakeholders unfamiliar with the technologies necessary to implement a system or who cannot effectively communicate to executive leadership and sponsors the benefits of a technology that may lack clear and immediate reimbursement opportunity.
Objectives.—
To present an overview of WSI technology—present and future—and to demonstrate several immediate applications of WSI that support pathology practice, medical education, research, and collaboration.
Data Sources.—
Peer-reviewed literature was reviewed by pathologists, scientists, and technologists who have practical knowledge of and experience with WSI.
Conclusions.—
Implementation of WSI is a multifaceted and inherently multidisciplinary endeavor requiring contributions from pathologists, technologists, and executive leadership. Improved understanding of the current challenges to implementation, as well as the benefits and successes of the technology, can help prospective users identify the best path for success.
Affiliation(s)
- Mark D. Zarella, Douglas Bowman, Famke Aeffner, Navid Farahani, Albert Xthona, Syeda Fatima Absar, Anil Parwani, Marilyn Bui, Douglas J. Hartman
- From the Department of Pathology & Laboratory Medicine, Drexel University College of Medicine, Philadelphia, Pennsylvania (Drs Zarella and Absar); Pharma Services, Indica Labs, Inc, Corrales, New Mexico (Mr Bowman); Comparative Biology and Safety Sciences, Amgen, Inc, South San Francisco, California (Dr Aeffner); 3Scan, San Francisco, California (Dr Farahani); Barco, Inc, Beaverton, Oregon (Mr Xt