52
Jiang J, Tekin B, Guo R, Liu H, Huang Y, Wang C. Digital Pathology-based Study of Cell- and Tissue-level Morphologic Features in Serous Borderline Ovarian Tumor and High-grade Serous Ovarian Cancer. J Pathol Inform 2021; 12:24. [PMID: 34447604 PMCID: PMC8356706 DOI: 10.4103/jpi.jpi_76_20] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0]
Abstract
Background: Serous borderline ovarian tumor (SBOT) and high-grade serous ovarian cancer (HGSOC) are two distinct subtypes of epithelial ovarian tumors, with markedly different biologic background, behavior, prognosis, and treatment. However, the histologic diagnosis of serous ovarian tumors can be subjectively variable and labor-intensive as multiple tumor slides/blocks need to be thoroughly examined to search for these features. Materials and Methods: We developed a novel informatics system to facilitate objective and scalable diagnosis screening for SBOT and HGSOC. The system was built upon Groovy scripts and QuPath to enable interactive annotation and data exchange. Results: The system was used to successfully detect cellular boundaries and extract an expanded set of cellular features representing cell- and tissue-level characteristics. The performance of cell-level classification for both tumor and stroma cells achieved >90% accuracy. The performance of differentiating HGSOC versus SBOT achieved 91%–95% accuracy for 6485 imaging patches which have sufficient tumor and stroma cells (minimum of ten each) and 97% accuracy for classifying patients when aggregating the results to whole-slide image based on consensus. Conclusions: Cellular features digitally extracted from pathological images can be used for cell classification and SBOT v. HGSOC differentiation. Introducing digital pathology into ovarian cancer research could be beneficial to discover potential clinical implications. A larger cohort is required to further evaluate the system.
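The slide-level call described in the conclusions is reached by aggregating patch-level predictions. As an editorial illustration only, the following minimal Python sketch shows one simple form of consensus (majority vote over qualifying patches); the function name and the minimum-patch rule are assumptions for illustration and are not taken from the paper's QuPath/Groovy pipeline.

```python
from collections import Counter

def slide_level_consensus(patch_labels, min_patches=1):
    """Aggregate per-patch labels (e.g., 'HGSOC' or 'SBOT') into one slide-level
    call by simple majority vote, returning the winning label and its fraction."""
    if len(patch_labels) < min_patches:
        raise ValueError("not enough qualifying patches for a slide-level call")
    counts = Counter(patch_labels)
    label, n = counts.most_common(1)[0]
    return label, n / len(patch_labels)

# Example: six patches classified individually, slide call by consensus.
patches = ["HGSOC", "HGSOC", "SBOT", "HGSOC", "HGSOC", "SBOT"]
print(slide_level_consensus(patches))   # ('HGSOC', 0.666...)
```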
Affiliation(s)
- Jun Jiang
- Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, USA
- Burak Tekin
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, USA
- Ruifeng Guo
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, USA
- Hongfang Liu
- Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, USA
- Yajue Huang
- Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, USA
- Chen Wang
- Department of Health Science Research, Mayo Clinic, Rochester, MN, USA
53
Automated scoring of CerbB2/HER2 receptors using histogram based analysis of immunohistochemistry breast cancer tissue images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102924] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5]
54
Breast DCE-MRI segmentation for lesion detection by multi-level thresholding using student psychological based optimization. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102925] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5]
55
Wirth D, McCall A, Hristova K. Neural network strategies for plasma membrane selection in fluorescence microscopy images. Biophys J 2021; 120:2374-2385. [PMID: 33961865 PMCID: PMC8390876 DOI: 10.1016/j.bpj.2021.04.030] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]
Abstract
In recent years, there has been an explosion of fluorescence microscopy studies of live cells in the literature. The analysis of the images obtained in these studies often requires labor-intensive manual annotation to extract meaningful information. In this study, we explore the utility of a neural network approach to recognize, classify, and select plasma membranes in high-resolution images, thus greatly speeding up data analysis and reducing the need for personnel training for highly repetitive tasks. Two different strategies are tested: 1) a semantic segmentation strategy, and 2) a sequential application of an object detector followed by a semantic segmentation network. Multiple network architectures are evaluated for each strategy, and the best performing solutions are combined and implemented in the Recognition Of Cellular Membranes software. We show that images annotated manually and with the Recognition Of Cellular Membranes software yield identical results by comparing Förster resonance energy transfer binding curves for the membrane protein fibroblast growth factor receptor 3. The approach that we describe in this work can be applied to other image selection tasks in cell biology.
Affiliation(s)
- Daniel Wirth
- Department of Materials Science and Engineering, Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, Maryland
- Alec McCall
- Department of Materials Science and Engineering, Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, Maryland
- Kalina Hristova
- Department of Materials Science and Engineering, Institute for NanoBioTechnology, Johns Hopkins University, Baltimore, Maryland
56
A Comparative Assessment of Different Approaches of Segmentation and Classification Methods on Childhood Medulloblastoma Images. J Med Biol Eng 2021. [DOI: 10.1007/s40846-021-00612-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
57
Tewary S, Mukhopadhyay S. HER2 Molecular Marker Scoring Using Transfer Learning and Decision Level Fusion. J Digit Imaging 2021; 34:667-677. [PMID: 33742331 PMCID: PMC8329150 DOI: 10.1007/s10278-021-00442-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8]
Abstract
In the prognostic evaluation of breast cancer, the immunohistochemical (IHC) marker human epidermal growth factor receptor 2 (HER2) is routinely assessed. Accurate assessment of the HER2-stained tissue sample is essential for therapeutic decision making. In regular clinical settings, expert pathologists assess the HER2-stained tissue slide under the microscope and score it manually based on prior experience. Manual scoring is time consuming, tedious, and often prone to inter-observer variation among pathologists. With recent advances in computer vision and deep learning, medical image analysis has received significant attention. A number of deep learning architectures have been proposed for the classification of different image groups, and these networks are also used for transfer learning to classify other image classes. In the presented study, a number of transfer learning architectures are used for HER2 scoring. Five pre-trained architectures, viz. VGG16, VGG19, ResNet50, MobileNetV2, and NASNetMobile, with the fully connected layers reduced to a 3-class classification, were used for a comparative assessment of the networks as well as for scoring of stained tissue sample images based on statistical voting using the mode operator. The HER2 Challenge dataset from Warwick University is used in this study. A total of 2130 image patches were extracted to generate the training dataset from 300 training images corresponding to 30 training cases. The output model was then tested on 800 new test image patches from 100 test images acquired from 10 test cases (different from the training cases). The transfer learning models showed significant accuracy, with VGG19 achieving the best accuracy for the test images: 93%, which increases to 98% with image-based scoring using the statistical voting mechanism. The output shows a capable quantification pipeline for automated HER2 score generation.
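As an editorial sketch of the two stages described above (a pre-trained backbone with a reduced 3-class head, followed by statistical voting with the mode operator), the following Python/Keras code outlines one plausible implementation. The framework choice, layer sizes, and function names are assumptions; the paper's exact architecture modifications are not reproduced here.

```python
import numpy as np
import tensorflow as tf

def build_her2_classifier(input_shape=(224, 224, 3), n_classes=3):
    """Pre-trained VGG19 backbone (ImageNet weights, no top) with a small
    3-class head for HER2 scores {0/1+, 2+, 3+}; backbone frozen for transfer learning."""
    base = tf.keras.applications.VGG19(weights="imagenet", include_top=False,
                                       input_shape=input_shape)
    base.trainable = False
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dense(256, activation="relu")(x)
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def image_score_by_voting(model, patches):
    """Statistical voting with the mode operator: predict each patch of one image
    and return the most frequent class as the image-level HER2 score."""
    probs = model.predict(patches, verbose=0)      # shape (n_patches, n_classes)
    patch_classes = np.argmax(probs, axis=1)
    return int(np.bincount(patch_classes).argmax())  # mode of the patch classes
```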
Affiliation(s)
- Suman Tewary
- School of Medical Science and Technology, Indian Institute of Technology Kharagpur, Kharagpur, India
- Computational Instrumentation, CSIR-Central Scientific Instruments Organisation, Chandigarh, India
- Sudipta Mukhopadhyay
- Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, Kharagpur, India
58
Saha M, Guo X, Sharma A. TilGAN: GAN for Facilitating Tumor-Infiltrating Lymphocyte Pathology Image Synthesis With Improved Image Classification. IEEE Access 2021; 9:79829-79840. [PMID: 34178560 PMCID: PMC8224465 DOI: 10.1109/access.2021.3084597] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]
Abstract
Tumor-infiltrating lymphocytes (TILs) act as immune cells against cancer tissues. The manual assessment of TILs is usually erroneous, tedious, costly and subject to inter- and intraobserver variability. Machine learning approaches can solve these issues, but they require a large amount of labeled data for model training, which is expensive and not readily available. In this study, we present an efficient generative adversarial network, TilGAN, to generate high-quality synthetic pathology images followed by classification of TIL and non-TIL regions. Our proposed architecture is constructed with a generator network and a discriminator network. The novelty exists in the TilGAN architecture, loss functions, and evaluation techniques. Our TilGAN-generated images achieved a higher Inception score than the real images (2.90 vs. 2.32, respectively). They also achieved a lower kernel Inception distance (1.44) and a lower Fréchet Inception distance (0.312). It also passed the Turing test performed by experienced pathologists and clinicians. We further extended our evaluation studies and used almost one million synthetic data, generated by TilGAN, to train a classification model. Our proposed classification model achieved a 97.83% accuracy, a 97.37% F1-score, and a 97% area under the curve. Our extensive experiments and superior outcomes show the efficiency and effectiveness of our proposed TilGAN architecture. This architecture can also be used for other types of images for image synthesis.
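For readers unfamiliar with the generator/discriminator setup mentioned above, the sketch below shows a generic adversarial training step in PyTorch. It is illustrative only: the toy DCGAN-style networks, image size, and loss are assumptions, and TilGAN's actual architecture, loss functions, and evaluation (Inception score, kernel and Fréchet Inception distances) are not reproduced.

```python
import torch
import torch.nn as nn

# Toy 64x64 RGB generator and discriminator standing in for TilGAN's networks.
G = nn.Sequential(
    nn.ConvTranspose2d(100, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(True),
    nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh())
D = nn.Sequential(
    nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
    nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
    nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
    nn.Conv2d(256, 1, 8), nn.Sigmoid(), nn.Flatten())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

def gan_step(real):
    """One adversarial update: D learns real-vs-synthetic, G learns to fool D."""
    n = real.size(0)
    fake = G(torch.randn(n, 100, 1, 1))
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(n, 1)) + bce(D(fake.detach()), torch.zeros(n, 1))
    loss_d.backward()
    opt_d.step()
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(n, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# Example with a random batch standing in for real TIL patches scaled to [-1, 1].
print(gan_step(torch.rand(8, 3, 64, 64) * 2 - 1))
```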
Affiliation(s)
- Monjoy Saha
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA
- Xiaoyuan Guo
- Department of Computer Science, Emory University, Atlanta, GA 30332, USA
- Ashish Sharma
- Department of Biomedical Informatics, School of Medicine, Emory University, Atlanta, GA 30322, USA
59
IHC-Net: A fully convolutional neural network for automated nuclear segmentation and ensemble classification for Allred scoring in breast pathology. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107136] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3]
60
Lagree A, Mohebpour M, Meti N, Saednia K, Lu FI, Slodkowska E, Gandhi S, Rakovitch E, Shenfield A, Sadeghi-Naini A, Tran WT. A review and comparison of breast tumor cell nuclei segmentation performances using deep convolutional neural networks. Sci Rep 2021; 11:8025. [PMID: 33850222 PMCID: PMC8044238 DOI: 10.1038/s41598-021-87496-1] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0]
Abstract
Breast cancer is currently the second most common cause of cancer-related death in women. Presently, the clinical benchmark in cancer diagnosis is tissue biopsy examination. However, the manual process of histopathological analysis is laborious, time-consuming, and limited by the quality of the specimen and the experience of the pathologist. This study's objective was to determine if deep convolutional neural networks can be trained, with transfer learning, on a set of histopathological images independent of breast tissue to segment tumor nuclei of the breast. Various deep convolutional neural networks were evaluated for the study, including U-Net, Mask R-CNN, and a novel network (GB U-Net). The networks were trained on a set of Hematoxylin and Eosin (H&E)-stained images of eight diverse types of tissues. GB U-Net demonstrated superior performance in segmenting sites of invasive diseases (AJI = 0.53, mAP = 0.39 & AJI = 0.54, mAP = 0.38), validated on two hold-out datasets exclusively containing breast tissue images of approximately 7,582 annotated cells. The results of the networks, trained on images independent of breast tissue, demonstrated that tumor nuclei of the breast could be accurately segmented.
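The Aggregated Jaccard Index (AJI) reported above requires instance-level matching of nuclei and is not reproduced here; as a hedged companion, the short sketch below computes the simpler per-image Dice and IoU overlap metrics on binary nuclei masks, which is one common way such segmentations are also summarized.

```python
import numpy as np

def dice_and_iou(pred_mask, true_mask, eps=1e-7):
    """Binary overlap metrics for a predicted vs. ground-truth nuclei mask.
    (The paper's AJI additionally requires instance-level matching, omitted here.)"""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    dice = (2 * inter + eps) / (pred.sum() + true.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

# Example on small toy masks.
pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
true = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
print(dice_and_iou(pred, true))
```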
Affiliation(s)
- Andrew Lagree
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, Canada
- Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, Canada
- Majidreza Mohebpour
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, Canada
- Nicholas Meti
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, Canada
- Division of Medical Oncology, Department of Medicine, University of Toronto, Toronto, Canada
- Khadijeh Saednia
- Department of Electrical Engineering and Computer Science, York University, Toronto, Canada
- Fang-I Lu
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, Canada
- Elzbieta Slodkowska
- Department of Laboratory Medicine and Molecular Diagnostics, Sunnybrook Health Sciences Centre, Toronto, Canada
- Sonal Gandhi
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, Canada
- Division of Medical Oncology, Department of Medicine, University of Toronto, Toronto, Canada
- Eileen Rakovitch
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Canada
- Alex Shenfield
- Department of Engineering and Mathematics, Sheffield Hallam University, Sheffield, UK
- Ali Sadeghi-Naini
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, Canada
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, Canada
- Department of Electrical Engineering and Computer Science, York University, Toronto, Canada
- William T Tran
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Canada
- Biological Sciences Platform, Sunnybrook Research Institute, Toronto, Canada
- Radiogenomics Laboratory, Sunnybrook Health Sciences Centre, Toronto, Canada
- Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Canada
- Department of Radiation Oncology, University of Toronto and Sunnybrook Health Sciences Centre, 2075 Bayview Avenue, TB 095, Toronto, ON, M4N 3M5, Canada
61
Rajathi GM. Optimized Radial Basis Neural Network for Classification of Breast Cancer Images. Curr Med Imaging 2021; 17:97-108. [PMID: 32416697 DOI: 10.2174/1573405616666200516172118] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5]
Abstract
BACKGROUND Breast cancer is a curable disease if diagnosed at an early stage. The chances of having breast cancer are lowest in married women after the breast-feeding phase because the cancer is formed from blocked milk ducts. INTRODUCTION Nowadays, cancer is considered the leading cause of death globally, and breast cancer is the most common cancer among females. It is possible to develop breast cancer while breast-feeding a baby, but it is rare. Mammography is one of the most effective methods used in hospitals and clinics for early detection of breast cancer, and various researchers have applied artificial intelligence-based techniques to mammograms. Mammographic screening can reduce the death rate of patients affected by breast cancer, and the process is improved through image analysis, detection, screening, diagnosis, and other performance measures. METHODS The radial basis neural network (RBNN) is used for classification. The radial basis neural network is designed with the help of an optimization algorithm; the optimization tunes the classifier to reduce the error rate with minimal training time. The cuckoo search algorithm is used for this purpose. RESULTS The proposed optimum RBNN is thus used to classify breast cancer images. Three sets of properties were classified by performing feature extraction and feature reduction. In the breast cancer MRI images, the normal, benign, and malignant classes are taken for classification. The minimum fitness value is determined to evaluate the optimum value of possible locations. The radial basis function is evaluated with the cuckoo search algorithm to optimize the feature reduction process. The proposed methodology is compared with the traditional radial basis neural network using evaluation parameters such as accuracy, precision, recall, and F1-score. The whole system model is implemented in MATLAB 2018a, and the proposed system is more efficient than the most recent related literature. CONCLUSION The work concludes with an efficient classification process for breast cancer images using an RBNN tuned by the cuckoo search algorithm. Mammogram images are taken into this research because breast cancer is a major issue for women. The process classifies the various features for three sets of properties, and the optimized classifier improves performance and provides better results. In the proposed work, the input image is filtered using a Wiener filter, and the classifier extracts features based on the breast image.
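As an editorial illustration of the optimization loop named in the methods, the sketch below implements a simplified cuckoo search in Python, with Gaussian perturbations standing in for Lévy flights and a toy quadratic fitness standing in for the radial basis network's classification error. Parameter names and defaults are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cuckoo_search(fitness, dim, n_nests=15, pa=0.25, iters=100, bounds=(-5, 5)):
    """Simplified cuckoo search: random-walk new solutions (Gaussian steps stand in
    for Levy flights), keep improvements, and abandon a fraction pa of the worst nests."""
    lo, hi = bounds
    nests = rng.uniform(lo, hi, size=(n_nests, dim))
    scores = np.array([fitness(x) for x in nests])
    for _ in range(iters):
        # Generate a cuckoo by a random step from a randomly chosen nest.
        i = rng.integers(n_nests)
        candidate = np.clip(nests[i] + rng.normal(0, 0.5, dim), lo, hi)
        j = rng.integers(n_nests)
        if fitness(candidate) < scores[j]:
            nests[j], scores[j] = candidate, fitness(candidate)
        # Abandon the worst nests and re-seed them randomly.
        k = max(1, int(pa * n_nests))
        worst = np.argsort(scores)[-k:]
        nests[worst] = rng.uniform(lo, hi, size=(k, dim))
        scores[worst] = [fitness(x) for x in nests[worst]]
    best = np.argmin(scores)
    return nests[best], scores[best]

# Toy fitness with its minimum at [1, -2]; in the paper's setting this would be
# the classification error of the radial basis network for a given parameter vector.
print(cuckoo_search(lambda x: np.sum((x - np.array([1.0, -2.0])) ** 2), dim=2))
```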
Affiliation(s)
- G M Rajathi
- Department of Electronics and Communication Engineering, Sri Ramakrishna Engineering College, Coimbatore, Tamil Nadu, India
62
Harmon SA, Patel PG, Sanford TH, Caven I, Iseman R, Vidotto T, Picanço C, Squire JA, Masoudi S, Mehralivand S, Choyke PL, Berman DM, Turkbey B, Jamaspishvili T. High throughput assessment of biomarkers in tissue microarrays using artificial intelligence: PTEN loss as a proof-of-principle in multi-center prostate cancer cohorts. Mod Pathol 2021; 34:478-489. [PMID: 32884130 PMCID: PMC9152638 DOI: 10.1038/s41379-020-00674-w] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3]
Abstract
Phosphatase and tensin homolog (PTEN) loss is associated with adverse outcomes in prostate cancer and has clinical potential as a prognostic biomarker. The objective of this work was to develop an artificial intelligence (AI) system for automated detection and localization of PTEN loss on immunohistochemically (IHC) stained sections. PTEN loss was assessed using IHC in two prostate tissue microarrays (TMA) (internal cohort, n = 272 and external cohort, n = 129 patients). TMA cores were visually scored for PTEN loss by pathologists and, if present, spatially annotated. Cores from each patient within the internal TMA cohort were split into 90% cross-validation (N = 2048) and 10% hold-out testing (N = 224) sets. ResNet-101 architecture was used to train core-based classification using a multi-resolution ensemble approach (×5, ×10, and ×20). For spatial annotations, single resolution pixel-based classification was trained from patches extracted at ×20 resolution, interpolated to ×40 resolution, and applied in a sliding-window fashion. A final AI-based prediction model was created by combining the multi-resolution and pixel-based models. Performance was evaluated in 428 cores of the external cohort. From both cohorts, a total of 2700 cores were studied, with a frequency of PTEN loss of 14.5% (180/1239) in internal and 13.5% (43/319) in external cancer cores. The final AI-based prediction of PTEN status demonstrated 98.1% accuracy (95.0% sensitivity, 98.4% specificity; median dice score = 0.811) in the internal cohort cross-validation set and 99.1% accuracy (100% sensitivity, 99.0% specificity; median dice score = 0.804) in the internal cohort test set. Overall core-based classification in the external cohort was significantly improved (area under the curve = 0.964, 90.6% sensitivity, 95.7% specificity) when further trained (fine-tuned) using 15% of cohort data (19/124 patients). These results demonstrate a robust and fully automated method for detection and localization of PTEN loss in prostate cancer tissue samples. AI-based algorithms have potential to streamline sample assessment in research and clinical laboratories.
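The pixel-based model described above is applied "in a sliding-window fashion"; the hedged sketch below shows the generic mechanics of that step: overlapping patches are scored by a patch classifier and the scores are averaged back into a probability heatmap. The patch size, stride, and the stand-in predict_patch function are illustrative assumptions, not the paper's ResNet-101 ensemble.

```python
import numpy as np

def sliding_window_heatmap(image, predict_patch, patch=128, stride=64):
    """Score overlapping patches with a patch-level classifier and average the
    scores back into a pixel-wise probability map (e.g., probability of PTEN loss)."""
    h, w = image.shape[:2]
    heat = np.zeros((h, w), dtype=float)
    hits = np.zeros((h, w), dtype=float)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            p = float(predict_patch(image[y:y + patch, x:x + patch]))
            heat[y:y + patch, x:x + patch] += p
            hits[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(hits, 1)

# Stand-in classifier: mean intensity as a fake "PTEN loss" score on a random core image.
core = np.random.rand(512, 512)
heatmap = sliding_window_heatmap(core, predict_patch=lambda tile: tile.mean())
print(heatmap.shape, heatmap.min(), heatmap.max())
```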
Affiliation(s)
- Stephanie A Harmon
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Clinical Research Directorate, Frederick National Laboratory for Cancer Research, Frederick, MD, USA
- Palak G Patel
- Division of Cancer Biology & Genetics, Cancer Research Institute, Queen's University, Kingston, ON, Canada
- Department of Pathology and Molecular Medicine, Queen's University, Kingston, ON, Canada
- Department of Cell Biology at The Arthur and Sonia Labatt Brain Tumour Research Centre at the Hospital for Sick Children, Toronto, ON, Canada
- Thomas H Sanford
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Department of Urology, Upstate Medical University, Syracuse, NY, USA
- Isabelle Caven
- Division of Cancer Biology & Genetics, Cancer Research Institute, Queen's University, Kingston, ON, Canada
- Department of Pathology and Molecular Medicine, Queen's University, Kingston, ON, Canada
- Rachael Iseman
- Division of Cancer Biology & Genetics, Cancer Research Institute, Queen's University, Kingston, ON, Canada
- Department of Pathology and Molecular Medicine, Queen's University, Kingston, ON, Canada
- Thiago Vidotto
- Department of Genetics, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
- Clarissa Picanço
- Department of Genetics, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
- Jeremy A Squire
- Department of Pathology and Molecular Medicine, Queen's University, Kingston, ON, Canada
- Department of Genetics, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
- Samira Masoudi
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Sherif Mehralivand
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Peter L Choyke
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- David M Berman
- Division of Cancer Biology & Genetics, Cancer Research Institute, Queen's University, Kingston, ON, Canada
- Department of Pathology and Molecular Medicine, Queen's University, Kingston, ON, Canada
- Baris Turkbey
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Tamara Jamaspishvili
- Division of Cancer Biology & Genetics, Cancer Research Institute, Queen's University, Kingston, ON, Canada
- Department of Pathology and Molecular Medicine, Queen's University, Kingston, ON, Canada
63
Chugh G, Kumar S, Singh N. Survey on Machine Learning and Deep Learning Applications in Breast Cancer Diagnosis. Cognit Comput 2021. [DOI: 10.1007/s12559-020-09813-6] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8]
64
Initial Experience With Low-Dose 18F-Fluorodeoxyglucose Positron Emission Tomography/Magnetic Resonance Imaging With Deep Learning Enhancement. J Comput Assist Tomogr 2021; 45:637-642. [PMID: 34176877 PMCID: PMC8597977 DOI: 10.1097/rct.0000000000001174] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3]
Abstract
OBJECTIVE To demonstrate the utility of deep learning enhancement (DLE) to achieve diagnostic quality low-dose positron emission tomography (PET)/magnetic resonance (MR) imaging. METHODS Twenty subjects with known Crohn disease underwent simultaneous PET/MR imaging after intravenous administration of approximately 185 MBq of 18F-fluorodeoxyglucose (FDG). Five image sets were generated: (1) standard-of-care (reference), (2) low-dose (ie, using 20% of PET counts), (3) DLE-enhanced low-dose using PET data as input, (4) DLE-enhanced low-dose using PET and MR data as input, and (5) DLE-enhanced using no PET data input. Image sets were evaluated by both quantitative metrics and qualitatively by expert readers. RESULTS Although low-dose images (series 2) and images with no PET data input (series 5) were nondiagnostic, DLE of the low-dose images (series 3 and 4) achieved diagnostic quality images that scored more favorably than reference (series 1), both qualitatively and quantitatively. CONCLUSIONS Deep learning enhancement has the potential to enable a 90% reduction of radiotracer while achieving diagnostic quality images.
65
Feng M, Chen J, Xiang X, Deng Y, Zhou Y, Zhang Z, Zheng Z, Bao J, Bu H. An Advanced Automated Image Analysis Model for Scoring of ER, PR, HER-2 and Ki-67 in Breast Carcinoma. IEEE Access 2021; 9:108441-108451. [DOI: 10.1109/access.2020.3011294] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5]
66
Wahab N, Khan A. Multifaceted fused-CNN based scoring of breast cancer whole-slide histopathology images. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106808] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6]
67
Yamami S, Sugimoto K, Takahashi M, Nakano M. Recursive Additive Complement Networks for Cell Membrane Segmentation in Histological Images. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1392-1395. [PMID: 33018249 DOI: 10.1109/embc44109.2020.9176126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
A recursive additive complement network (RacNet) is introduced to segment cell membranes in histological images as closed lines. Segmenting cell membranes as closed lines is necessary to calculate cell areas and to estimate the N/C ratio, which is useful to diagnose early hepatocellular carcinoma. The RacNet is composed of a complement network and an element-wise maximization (EWM) process and is recursively applied to the network output. The complement network fills in the missing parts of cell membranes. The network, however, has a tendency to mistakenly delete some parts of the segmented cell membranes. The EWM process eliminates this unwanted effect. Experiments carried out using unstained hepatic sections showed that the accuracy for segmenting cell membranes as closed lines was significantly improved by using the RacNet. Three imaging methods, bright-field, dark-field, and phase-contrast, were used, as unstained sections show very low contrast in the bright-field imaging commonly used in pathological diagnosis. These imaging methods are available in optical microscopes used by pathologists. Among the three methods, phase-contrast imaging showed the highest accuracy.
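To make the recursion concrete, the sketch below applies a placeholder complement network to a membrane probability map and keeps the element-wise maximum of the previous and complemented maps over a few iterations, as the abstract describes. The network, iteration count, and tensor sizes are illustrative assumptions; RacNet's trained complement network is not reproduced.

```python
import torch
import torch.nn as nn

# Placeholder complement network: in RacNet this is trained to fill gaps in
# membrane probability maps; here it is an untrained stand-in with the same role.
complement_net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

def recursive_additive_complement(membrane_prob, n_iters=3):
    """Recursively complement a membrane probability map and keep, per pixel,
    the maximum of the previous and complemented maps (element-wise maximization),
    so already-detected membrane pixels are never deleted."""
    out = membrane_prob
    with torch.no_grad():
        for _ in range(n_iters):
            out = torch.maximum(out, complement_net(out))
    return out

closed = recursive_additive_complement(torch.rand(1, 1, 128, 128))
print(closed.shape)
```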
68
Tewary S, Arun I, Ahmed R, Chatterjee S, Mukhopadhyay S. AutoIHC-Analyzer: computer-assisted microscopy for automated membrane extraction/scoring in HER2 molecular markers. J Microsc 2020; 281:87-96. [PMID: 32803890 DOI: 10.1111/jmi.12955] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2]
Abstract
Human epidermal growth factor receptor 2 (HER2) is one of the widely used immunohistochemical (IHC) markers for prognostic evaluation among patients with breast cancer. Accurate quantification of the cell membrane is essential for HER2 scoring in therapeutic decision making. In modern laboratory practice, an expert pathologist visually assesses the HER2-stained tissue sample under the bright-field microscope for cell membrane assessment. This manual assessment is time consuming, tedious and quite often results in interobserver variability. Further, the burden of an increasing number of patients is a challenge for pathologists. To address these challenges, there is an urgent need for a rapid HER2 cell membrane extraction method. The proposed study aims at developing an automated IHC scoring system, termed AutoIHC-Analyzer, for automated cell membrane extraction followed by HER2 molecular expression assessment from stained tissue images. A series of image processing approaches is used to automatically extract the stained cells and membrane region, followed by automatic assessment of complete and broken membrane. Finally, a set of features is used to automatically classify the tissue under observation for quantitative scoring as 0/1+, 2+ and 3+. For testing and validation on surgically excised HER2-stained tissue obtained from a collaborating hospital, the proposed AutoIHC-Analyzer and the publicly available open-source ImmunoMembrane software were compared with expert pathologists' scores on 90 randomly acquired images, and significant correlations were observed [(r = 0.9448; p < 0.001) and (r = 0.8521; p < 0.001), respectively]. The output shows promising quantification in automated scoring. LAY DESCRIPTION: In cancer prognosis for patients with breast cancer, human epidermal growth factor receptor 2 (HER2) is used as an immunohistochemical (IHC) biomarker. Correct assessment of HER2 guides therapeutic decision making. In regular practice, the stained tissue sample is observed under a bright-field microscope and expert pathologists score the sample as a negative (0/1+), equivocal (2+) or positive (3+) case. The scoring is based on standard guidelines relating the complete and broken cell membrane as well as the intensity of staining in the membrane boundary. Such evaluation is time consuming, tedious and quite often results in interobserver variability. To assist in rapid HER2 cell membrane assessment, the proposed study aims at developing an automated IHC scoring system, termed AutoIHC-Analyzer, for automated cell membrane extraction followed by HER2 molecular expression assessment from stained tissue images. The input image is preprocessed using a modified white-patch method, and the CMYK and RGB colour spaces are used to extract the haematoxylin (negatively stained cells) and diaminobenzidine (DAB) stains observed in the tumour cell membrane. Segmentation and postprocessing are applied to create the masks for each of the stain channels. The membrane mask is then quantified as complete or broken using skeletonisation and morphological operations. Six features were assessed for classification from a set of 180 training images: the complete-to-broken membrane ratio, the amount of stain measured as the area of the Blue and Saturation channels relative to the image size, the DAB-to-haematoxylin ratio from the segmented masks, and the average R, G and B values from the five largest blobs in the segmented DAB-masked image.
These features are then used in training the SVM classifier with Gaussian kernel using 5-fold cross-validation. The accuracy in the training sample is found to be 88.3%. The model is then used for 90 set of unknown test sample images and the final labelling of stained cells and HER2 scores (as 0/1+, 2+ and 3+) are compared with the ground truth, that is expert pathologists' score from the collaborative hospital. The test sample images were also fed to ImmunoMembrane software for a comparative assessment. The results from the proposed AutoIHC-Analyzer and ImmunoMembrane software were compared with the expert pathologists' score where significant agreement using Pearson's correlation coefficient [(r = 0.9448; p < 0.001) and (r = 0.8521; p < 0.001) respectively] is observed. The results from AutoIHC-Analyzer show promising quantitative assessment of HER2 scoring.
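As a hedged sketch of the final classification stage only (an SVM with a Gaussian kernel and 5-fold cross-validation over the six handcrafted features), the Python code below uses scikit-learn on a random stand-in feature matrix; the upstream image-processing steps that produce the real features are not reproduced.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per training image with six handcrafted
# features (membrane ratios, stain-area ratios, mean colours of the largest DAB blobs);
# labels are the HER2 classes 0 (0/1+), 1 (2+), 2 (3+).
X = np.random.rand(180, 6)
y = np.random.randint(0, 3, size=180)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

clf.fit(X, y)                                 # final model for unseen test images
print(clf.predict(np.random.rand(3, 6)))
```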
Affiliation(s)
- Suman Tewary
- School of Medical Science & Technology, IIT Kharagpur, Kharagpur, West Bengal, India
- Computational Instrumentation Division, CSIR-CSIO, Chandigarh, India
- Indu Arun
- Tata Medical Center, New Town, Rajarhat, Kolkata, West Bengal, India
- Rosina Ahmed
- Tata Medical Center, New Town, Rajarhat, Kolkata, West Bengal, India
- Sanjoy Chatterjee
- Tata Medical Center, New Town, Rajarhat, Kolkata, West Bengal, India
- Sudipta Mukhopadhyay
- Electronics and Electrical Communication Engineering, IIT Kharagpur, Kharagpur, West Bengal, India
69
Yue Z, Ding S, Zhao W, Wang H, Ma J, Zhang Y, Zhang Y. Automatic CIN Grades Prediction of Sequential Cervigram Image Using LSTM With Multistate CNN Features. IEEE J Biomed Health Inform 2020; 24:844-854. [DOI: 10.1109/jbhi.2019.2922682] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4]
70
Zhou C, Ding C, Wang X, Lu Z, Tao D. One-pass Multi-task Networks with Cross-task Guided Attention for Brain Tumor Segmentation. IEEE Trans Image Process 2020; 29:4516-4529. [PMID: 32086210 DOI: 10.1109/tip.2020.2973510] [Citation(s) in RCA: 69] [Impact Index Per Article: 13.8]
Abstract
Class imbalance has emerged as one of the major challenges for medical image segmentation. The model cascade (MC) strategy, a popular scheme, significantly alleviates the class imbalance issue by running a set of individual deep models for coarse-to-fine segmentation. Despite its outstanding performance, however, this method leads to undesired system complexity and also ignores the correlation among the models. To handle these flaws in the MC approach, we propose in this paper a light-weight deep model, i.e., the One-pass Multi-task Network (OM-Net), to solve class imbalance better than MC does, while requiring only one-pass computation for brain tumor segmentation. First, OM-Net integrates the separate segmentation tasks into one deep model, which consists of shared parameters to learn joint features, as well as task-specific parameters to learn discriminative features. Second, to more effectively optimize OM-Net, we take advantage of the correlation among tasks to design both an online training data transfer strategy and a curriculum learning-based training strategy. Third, we further propose sharing prediction results between tasks, which enables us to design a cross-task guided attention (CGA) module. By following the guidance of the prediction results provided by the previous task, CGA can adaptively recalibrate channel-wise feature responses based on the category-specific statistics. Finally, a simple yet effective post-processing method is introduced to refine the segmentation results of the proposed attention network. Extensive experiments are conducted to demonstrate the effectiveness of the proposed techniques. Most impressively, we achieve state-of-the-art performance on the BraTS 2015 testing set and BraTS 2017 online validation set. Using these proposed approaches, we also won joint third place in the BraTS 2018 challenge among 64 participating teams. The code will be made publicly available at https://github.com/chenhong-zhou/OM-Net.
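The cross-task guided attention idea (recalibrating channel-wise responses using the previous task's prediction) can be pictured with the minimal, strongly simplified sketch below. The module here is an assumption-laden illustration of guided channel gating, not the paper's CGA design; see the released code linked in the abstract for the actual implementation.

```python
import torch
import torch.nn as nn

class GuidedChannelGate(nn.Module):
    """Minimal channel-recalibration block: pool the feature map inside the region
    suggested by a previous task's prediction, then gate channels with a small MLP.
    This only illustrates the idea of guidance; it is not the paper's CGA module."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, feats, prev_prob):
        # prev_prob: (N, 1, H, W) probability map from the preceding (coarser) task.
        masked = feats * prev_prob
        stats = masked.sum(dim=(2, 3)) / (prev_prob.sum(dim=(2, 3)) + 1e-6)  # (N, C)
        gate = self.mlp(stats).unsqueeze(-1).unsqueeze(-1)                   # (N, C, 1, 1)
        return feats * gate

gate = GuidedChannelGate(channels=32)
out = gate(torch.rand(2, 32, 24, 24), torch.rand(2, 1, 24, 24))
print(out.shape)
```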
71
Survey of XAI in Digital Pathology. Artificial Intelligence and Machine Learning for Digital Pathology 2020. [DOI: 10.1007/978-3-030-50402-1_4] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8]
72
Wang P, Bai X. Thermal Infrared Pedestrian Segmentation Based on Conditional GAN. IEEE Trans Image Process 2019; 28:6007-6021. [PMID: 31265395 DOI: 10.1109/tip.2019.2924171] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0]
Abstract
A novel thermal infrared pedestrian segmentation algorithm based on conditional generative adversarial network (IPS-cGAN) is proposed for intelligent vehicular applications. The convolution backbone architecture of the generator is based on the improved U-Net with residual blocks for well utilizing regional semantic information. Moreover, cross entropy loss for segmentation is introduced as the condition for the generator. SandwichNet, a novel convolutional network with symmetrical input, is proposed as the discriminator for real-fake segmented images. Based on the c-GAN framework, good segmentation performance could be achieved for thermal infrared pedestrians. Compared to some supervised and unsupervised segmentation algorithms, the proposed algorithm achieves higher accuracy with better robustness, especially for complex scenes.
73
Qaiser T, Rajpoot NM. Learning Where to See: A Novel Attention Model for Automated Immunohistochemical Scoring. IEEE Trans Med Imaging 2019; 38:2620-2631. [PMID: 30908205 DOI: 10.1109/tmi.2019.2907049] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0]
Abstract
Estimating over-amplification of human epidermal growth factor receptor 2 (HER2) on invasive breast cancer is regarded as a significant predictive and prognostic marker. We propose a novel deep reinforcement learning (DRL)-based model that treats immunohistochemical (IHC) scoring of HER2 as a sequential learning task. For a given image tile sampled from multi-resolution giga-pixel whole slide image (WSI), the model learns to sequentially identify some of the diagnostically relevant regions of interest (ROIs) by following a parameterized policy. The selected ROIs are processed by recurrent and residual convolution networks to learn the discriminative features for different HER2 scores and predict the next location, without requiring to process all the sub-image patches of a given tile for predicting the HER2 score, mimicking the histopathologist who would not usually analyze every part of the slide at the highest magnification. The proposed model incorporates a task-specific regularization term and inhibition of return mechanism to prevent the model from revisiting the previously attended locations. We evaluated our model on two IHC datasets: a publicly available dataset from the HER2 scoring challenge contest and another dataset consisting of WSIs of gastroenteropancreatic neuroendocrine tumor sections stained with Glo1 marker. We demonstrate that the proposed model outperforms other methods based on state-of-the-art deep convolutional networks. To the best of our knowledge, this is the first study using DRL for IHC scoring and could potentially lead to wider use of DRL in the domain of computational pathology reducing the computational burden of the analysis of large multi-gigapixel histology images.
74
Teng J, Abdygametova A, Du J, Ma B, Zhou R, Shyr Y, Ye F. Bayesian Inference of Lymph Node Ratio Estimation and Survival Prognosis for Breast Cancer Patients. IEEE J Biomed Health Inform 2019; 24:354-364. [PMID: 31562112 DOI: 10.1109/jbhi.2019.2943401] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7]
Abstract
OBJECTIVE We evaluated the prognostic value of lymph node ratio (LNR) for the survival of breast cancer patients using Bayesian inference. METHODS Data on 5,279 women with infiltrating duct and lobular carcinoma breast cancer, diagnosed from 2006-2010, was obtained from the NCI SEER Cancer Registry. A prognostic modeling framework was proposed using Bayesian inference to estimate the impact of LNR in breast cancer survival. Based on the proposed model, we then developed a web application for estimating LNR and predicting overall survival. RESULTS The final survival model with LNR outperformed the other models considered (C-statistic 0.71). Compared to directly measured LNR, estimated LNR slightly increased the accuracy of the prognostic model. Model diagnostics and predictive performance confirmed the effectiveness of Bayesian modeling and the prognostic value of the LNR in predicting breast cancer survival. CONCLUSION The estimated LNR was found to have a significant predictive value for the overall survival of breast cancer patients. SIGNIFICANCE We used Bayesian inference to estimate LNR which was then used to predict overall survival. The models were developed from a large population-based cancer registry. We also built a user-friendly web application for individual patient survival prognosis. The diagnostic value of the LNR and the effectiveness of the proposed model were evaluated by comparisons with existing prediction models.
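The lymph node ratio itself is simple arithmetic (positive nodes divided by examined nodes); the hedged sketch below shows that calculation plus a basic Beta-Binomial posterior as one way a Bayesian estimate can shrink the ratio when few nodes are examined. The prior and the function names are illustrative assumptions; the paper's hierarchical survival model is considerably more elaborate.

```python
import numpy as np

def lymph_node_ratio(positive, examined):
    """Raw LNR: positive nodes / examined nodes."""
    return positive / examined

def posterior_lnr(positive, examined, a=1.0, b=1.0):
    """Beta-Binomial shrinkage estimate of LNR with a Beta(a, b) prior:
    the posterior is Beta(a + positive, b + examined - positive).
    Returns the posterior mean and a 95% credible interval.
    (Illustrative only; the paper's hierarchical model is more elaborate.)"""
    post_a, post_b = a + positive, b + examined - positive
    mean = post_a / (post_a + post_b)
    lo, hi = np.percentile(np.random.beta(post_a, post_b, 100_000), [2.5, 97.5])
    return mean, (lo, hi)

print(lymph_node_ratio(3, 12))        # 0.25
print(posterior_lnr(3, 12))           # shrunk slightly toward the prior mean of 0.5
```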
75
Tobore I, Li J, Yuhang L, Al-Handarish Y, Kandwal A, Nie Z, Wang L. Deep Learning Intervention for Health Care Challenges: Some Biomedical Domain Considerations. JMIR Mhealth Uhealth 2019; 7:e11966. [PMID: 31376272 PMCID: PMC6696854 DOI: 10.2196/11966] [Citation(s) in RCA: 61] [Impact Index Per Article: 10.2]
Abstract
The use of deep learning (DL) for the analysis and diagnosis of biomedical and health care problems has received unprecedented attention in the last decade. The technique has recorded a number of achievements for unearthing meaningful features and accomplishing tasks that were hitherto difficult to solve by other methods and human experts. Currently, biological and medical devices, treatment, and applications are capable of generating large volumes of data in the form of images, sounds, text, graphs, and signals creating the concept of big data. The innovation of DL is a developing trend in the wake of big data for data representation and analysis. DL is a type of machine learning algorithm that has deeper (or more) hidden layers of similar function cascaded into the network and has the capability to make meaning from medical big data. Current transformation drivers to achieve personalized health care delivery will be possible with the use of mobile health (mHealth). DL can provide the analysis for the deluge of data generated from mHealth apps. This paper reviews the fundamentals of DL methods and presents a general view of the trends in DL by capturing literature from PubMed and the Institute of Electrical and Electronics Engineers database publications that implement different variants of DL. We highlight the implementation of DL in health care, which we categorize into biological system, electronic health record, medical image, and physiological signals. In addition, we discuss some inherent challenges of DL affecting biomedical and health domain, as well as prospective research directions that focus on improving health management by promoting the application of physiological signals and modern internet technology.
Affiliation(s)
- Igbe Tobore
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advance Technology, Chinese Academy of Sciences, Shenzhen, China
- Graduate University, Chinese Academy of Sciences, Beijing, China
- Jingzhen Li
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advance Technology, Chinese Academy of Sciences, Shenzhen, China
- Liu Yuhang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advance Technology, Chinese Academy of Sciences, Shenzhen, China
- Yousef Al-Handarish
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advance Technology, Chinese Academy of Sciences, Shenzhen, China
- Abhishek Kandwal
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advance Technology, Chinese Academy of Sciences, Shenzhen, China
- Zedong Nie
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advance Technology, Chinese Academy of Sciences, Shenzhen, China
- Lei Wang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advance Technology, Chinese Academy of Sciences, Shenzhen, China
76
Martin DR, Hanson JA, Gullapalli RR, Schultz FA, Sethi A, Clark DP. A Deep Learning Convolutional Neural Network Can Recognize Common Patterns of Injury in Gastric Pathology. Arch Pathol Lab Med 2019; 144:370-378. [PMID: 31246112 DOI: 10.5858/arpa.2019-0004-oa] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3]
Abstract
CONTEXT.— Most deep learning (DL) studies have focused on neoplastic pathology, with the realm of inflammatory pathology remaining largely untouched. OBJECTIVE.— To investigate the use of DL for nonneoplastic gastric biopsies. DESIGN.— Gold standard diagnoses were blindly established by 2 gastrointestinal pathologists. For phase 1, 300 classic cases (100 normal, 100 Helicobacter pylori, 100 reactive gastropathy) that best displayed the desired pathology were scanned and annotated for DL analysis. A total of 70% of the cases for each group were selected for the training set, and 30% were included in the test set. The software assigned colored labels to the test biopsies, which corresponded to the area of the tissue assigned a diagnosis by the DL algorithm, termed area distribution (AD). For phase 2, an additional 106 consecutive nonclassical gastric biopsies from our archives were tested in the same fashion. RESULTS.— For phase 1, receiver operating curves showed near perfect agreement with the gold standard diagnoses at an AD percentage cutoff of 50% for normal (area under the curve [AUC] = 99.7%) and H pylori (AUC = 100%), and 40% for reactive gastropathy (AUC = 99.9%). Sensitivity/specificity pairings were as follows: normal (96.7%, 86.7%), H pylori (100%, 98.3%), and reactive gastropathy (96.7%, 96.7%). For phase 2, receiver operating curves were slightly less discriminatory, with optimal AD cutoffs reduced to 40% across diagnostic groups. The AUCs were 91.9% for normal, 100% for H pylori, and 94.0% for reactive gastropathy. Sensitivity/specificity pairings were as follows: normal (73.7%, 79.6%), H pylori (95.7%, 100%), reactive gastropathy (100%, 62.5%). CONCLUSIONS.— A convolutional neural network can serve as an effective screening tool/diagnostic aid for H pylori gastritis.
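The "area distribution" (AD) logic described above is straightforward to express in code. The hedged sketch below computes, from a per-pixel label map, the fraction of tissue assigned to each diagnosis and applies phase-1-style cutoffs; the label encoding, mask format, and function names are assumptions for illustration, and the trained network itself is not included.

```python
import numpy as np

CLASSES = {1: "normal", 2: "H. pylori", 3: "reactive gastropathy"}

def area_distribution(label_map, tissue_mask):
    """Fraction of annotated tissue area assigned to each diagnostic class
    by the pixel-level classifier (the 'AD' percentage described in the paper)."""
    total = tissue_mask.sum()
    return {name: (label_map[tissue_mask] == k).sum() / total
            for k, name in CLASSES.items()}

def call_diagnosis(ad, cutoffs=None):
    """Flag every class whose area distribution meets its cutoff (phase-1 style)."""
    if cutoffs is None:
        cutoffs = {"normal": 0.5, "H. pylori": 0.5, "reactive gastropathy": 0.4}
    return [name for name, frac in ad.items() if frac >= cutoffs[name]]

# Toy 100x100 'biopsy' where 60% of tissue pixels were labeled H. pylori.
labels = np.full((100, 100), 2)
labels[:40] = 1
tissue = np.ones((100, 100), dtype=bool)
ad = area_distribution(labels, tissue)
print(ad, call_diagnosis(ad))
```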
Affiliation(s)
- David R Martin, Joshua A Hanson, Rama R Gullapalli, Fred A Schultz, Aisha Sethi, Douglas P Clark
- From the Departments of Pathology (Drs Martin, Hanson, Gullapalli, Sethi, and Clark, and Mr Schultz) and Chemical and Biological Engineering (Dr Gullapalli), University of New Mexico, Albuquerque
77
Automated segmentation of cell membranes to evaluate HER2 status in whole slide images using a modified deep learning network. Comput Biol Med 2019; 110:164-174. [PMID: 31163391 DOI: 10.1016/j.compbiomed.2019.05.020] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0]
Abstract
The uncontrollable growth of cells in breast tissue causes breast cancer, which is the second most common type of cancer affecting women in the United States. Normally, human epidermal growth factor receptor 2 (HER2) proteins are responsible for the division and growth of healthy breast cells. HER2 status is currently assessed using immunohistochemistry (IHC), as well as in situ hybridization (ISH) in equivocal cases. Manual HER2 evaluation of IHC-stained microscopic images involves error-prone, tedious and time-consuming routine lab work that is subject to inter-observer variability, owing to diverse staining, overlapped regions, and non-homogeneous, remarkably large slides. To address these issues, digital pathology offers reproducible, automatic, and objective analysis and interpretation of whole slide images (WSI). In this paper, we present a machine learning (ML) framework to segment, classify, and quantify IHC breast cancer images in an effective way. The proposed method consists of two major parts, classification and segmentation. Since HER2 is associated with tumors of the epithelial region and most breast tumors originate in epithelial tissue, it is crucial to develop an approach to segment the different tissue structures. The proposed technique comprises three steps. In the first step, a superpixel-based support vector machine (SVM) feature learning classifier is proposed to classify epithelial and stromal regions from the WSI. In the second step, a convolutional neural network (CNN)-based segmentation method is applied to the classified epithelial regions to segment membrane regions. Finally, the divided tiles are merged and the overall score of each slide is evaluated. Experimental results for 127 slides are presented and compared with state-of-the-art handcrafted and deep learning-based approaches. The experiments demonstrate that the proposed method achieved promising performance on IHC-stained data, outperforming other approaches in terms of superpixel-based classification of epithelial regions and segmentation of membrane staining using the CNN.
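As an editorial sketch of the first stage only (superpixel generation and an SVM separating epithelial from stromal regions), the Python code below uses scikit-image SLIC superpixels with simple mean-colour features and random placeholder labels; the paper's actual superpixel features, training data, and the CNN membrane-segmentation stage are not reproduced.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def superpixel_features(image, n_segments=300):
    """SLIC superpixels plus simple per-superpixel mean-colour features
    (a placeholder for the paper's learned superpixel features)."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    feats = np.array([image[segments == s].mean(axis=0)
                      for s in range(segments.max() + 1)])
    return segments, feats

# Toy IHC-like RGB tile and random epithelium(1)/stroma(0) labels for illustration.
tile = np.random.rand(256, 256, 3)
segments, feats = superpixel_features(tile)
labels = np.random.randint(0, 2, size=len(feats))

svm = SVC(kernel="rbf").fit(feats, labels)
epithelial_mask = svm.predict(feats)[segments] == 1   # map superpixel calls back to pixels
print(epithelial_mask.shape, epithelial_mask.mean())
```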
78
Sun W, Song Y, Jin Z, Zhao H, Chen C. Unsupervised Orthogonal Facial Representation Extraction via image reconstruction with correlation minimization. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.01.068] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5]
79
Mouelhi A, Rmili H, Ali JB, Sayadi M, Doghri R, Mrad K. Fast unsupervised nuclear segmentation and classification scheme for automatic Allred cancer scoring in immunohistochemical breast tissue images. Comput Methods Programs Biomed 2018; 165:37-51. [PMID: 30337080 DOI: 10.1016/j.cmpb.2018.08.005] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4]
Abstract
BACKGROUND AND OBJECTIVE This paper presents an improved scheme able to perform accurate segmentation and classification of cancer nuclei in immunohistochemical (IHC) breast tissue images in order to provide a quantitative evaluation of estrogen or progesterone (ER/PR) receptor status that will assist pathologists in the cancer diagnostic process. METHODS The proposed segmentation method is based on adaptive local thresholding and an enhanced morphological procedure, which are applied to extract all stained nuclei regions and to split overlapping nuclei. In fact, a new segmentation approach is presented here for cell nuclei detection from the IHC image using a modified Laplacian filter and an improved watershed algorithm. Stromal cells are then removed from the segmented image using an adaptive criterion in order to achieve fast tumor nuclei recognition. Finally, unsupervised classification of cancer nuclei is obtained by combining four common color separation techniques for subsequent Allred cancer scoring. RESULTS Experimental results on various IHC tissue images of different cancer-affected patients demonstrate the effectiveness of the proposed scheme when compared to the manual scoring of expert pathologists. A statistical analysis is performed on the whole image database comparing the immuno-scores of the manual and automatic methods, and against the scores reached using other state-of-the-art segmentation and classification strategies. According to the performance evaluation, we recorded more than 98% accuracy for both nucleus detection and image cancer scoring against the truths provided by experienced pathologists, with the best correlation with the experts' scores (Pearson's correlation coefficient = 0.993, p-value < 0.005) and the lowest total computational time of 72.3 s/image (±1.9) compared with recently studied methods. CONCLUSIONS The proposed scheme can be easily applied to any histopathological diagnostic process that needs stained nuclear quantification and cancer grading. Moreover, the reduced processing time and manual interaction of our procedure can facilitate its implementation in a real-time device to construct a fully online evaluation system for IHC tissue images.
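The nuclear segmentation steps named above (adaptive local thresholding, splitting of touching nuclei) can be approximated generically with scikit-image, as in the hedged sketch below: local thresholding, a distance transform, and marker-based watershed. Parameters and the toy example are illustrative assumptions; the paper's modified Laplacian filter, stromal-cell removal, and Allred scoring are not included.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_local
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_nuclei(gray, block_size=51, min_distance=7):
    """Adaptive local thresholding followed by marker-based watershed to
    split touching nuclei; returns a labelled nucleus image."""
    # Nuclei assumed darker than background in the haematoxylin-dominant channel.
    binary = gray < threshold_local(gray, block_size=block_size)
    distance = ndi.distance_transform_edt(binary)
    peaks = peak_local_max(distance, min_distance=min_distance, labels=binary)
    markers = np.zeros_like(gray, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary)

# Toy image: two overlapping dark blobs on a bright background.
yy, xx = np.mgrid[0:128, 0:128]
gray = np.ones((128, 128))
gray[((yy - 60) ** 2 + (xx - 55) ** 2) < 200] = 0.2
gray[((yy - 60) ** 2 + (xx - 75) ** 2) < 200] = 0.2
labels = segment_nuclei(gray)
print("nuclei found:", labels.max())
```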
MESH Headings
- Algorithms
- Breast Neoplasms/classification
- Breast Neoplasms/diagnostic imaging
- Breast Neoplasms/metabolism
- Carcinoma, Ductal, Breast/classification
- Carcinoma, Ductal, Breast/diagnostic imaging
- Carcinoma, Ductal, Breast/metabolism
- Cell Nucleus/classification
- Cell Nucleus/metabolism
- Cell Nucleus/pathology
- Female
- Humans
- Image Interpretation, Computer-Assisted/methods
- Image Interpretation, Computer-Assisted/statistics & numerical data
- Immunohistochemistry/methods
- Immunohistochemistry/statistics & numerical data
- Receptors, Estrogen/metabolism
- Receptors, Progesterone/metabolism
- Staining and Labeling
- Unsupervised Machine Learning
Affiliation(s)
- Aymen Mouelhi
- University of Tunis, ENSIT, LR13ES03 SIME, Montfleury 1008, Tunisia
- Hana Rmili
- University of Tunis El-Manar, ISTMT, Laboratory of Biophysics and Medical Technologies, Tunisia
- Jaouher Ben Ali
- University of Tunis, ENSIT, LR13ES03 SIME, Montfleury 1008, Tunisia
- FEMTO-ST Institute, AS2M department, UMR CNRS 6174 - UFC/ENSMM/UTBM, Besançon 25000, France
- Mounir Sayadi
- University of Tunis, ENSIT, LR13ES03 SIME, Montfleury 1008, Tunisia
- Raoudha Doghri
- Salah Azaiez Institute of Oncology, Morbid Anatomy Service, bd du 9 avril, Bab Saadoun, Tunis 1006, Tunisia
- Karima Mrad
- Salah Azaiez Institute of Oncology, Morbid Anatomy Service, bd du 9 avril, Bab Saadoun, Tunis 1006, Tunisia
80
Imanishi A, Murata T, Sato M, Hotta K, Imayoshi I, Matsuda M, Terai K. A Novel Morphological Marker for the Analysis of Molecular Activities at the Single-cell Level. Cell Struct Funct 2018; 43:129-140. [DOI: 10.1247/csf.18013] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3]
Affiliation(s)
- Ayako Imanishi
- Research Center for Dynamic Living Systems, Graduate School of Biostudies, Kyoto University
- Masaya Sato
- Graduate School of Science and Technology, Meijo University
- Kazuhiro Hotta
- Graduate School of Science and Technology, Meijo University
- Itaru Imayoshi
- Research Center for Dynamic Living Systems, Graduate School of Biostudies, Kyoto University
- Michiyuki Matsuda
- Research Center for Dynamic Living Systems, Graduate School of Biostudies, Kyoto University
- Department of Pathology and Biology of Diseases, Graduate School of Medicine, Kyoto University
- Kenta Terai
- Research Center for Dynamic Living Systems, Graduate School of Biostudies, Kyoto University